[
{
"msg_contents": "I have a stored proc that potentially inserts hundreds of thousands, potentially millions, of rows (below).\n\nThis stored proc is part of the the sequence of creating an ad campaign and links an ad to documents it should be displayed with.\n\nA few of these stored procs can run concurrently as users create ad campaigns.\n\nWe have 2 million documents now and linking an ad to all of them takes 5 minutes on my top-of-the-line SSD MacBook Pro.\n\nLast but not least, the system has to quickly serve ads while documents are being linked which is a problem at the moment.\n\nWhat can I do to make linking documents to ads faster or have less impact on the system. I would like the system to be as responsive with serving ads while the linking itself is allowed to take a few minutes. \n\nOne thing I'm concerned with, for example, is the whole multi-million row insert running within the stored proc transaction. I think inserting rows one by one or in small batches may be an improvement. I don't know how to accomplish this, though.\n\n\tThanks, Joel\n\n---\n\nCREATE DOMAIN doc_id AS varchar(64);\nCREATE DOMAIN id AS int;\n\nCREATE TABLE doc_ads\n(\n doc_id id NOT NULL REFERENCES docs,\n ad_id id NOT NULL REFERENCES ads,\n distance float NOT NULL\n);\n\nCREATE INDEX doc_ads_idx ON doc_ads(doc_id);\n\nCREATE OR REPLACE FUNCTION link_doc_to_ads(doc id, threshold float) \nRETURNS void AS $$\nBEGIN\n INSERT INTO doc_ads (doc_id, ad_id, distance)\n SELECT doc, (t).ad_id, (t).distance\n FROM (SELECT ads_within_distance(topics, threshold) AS t\n FROM docs\n WHERE id = doc) AS x;\n ANALYZE doc_ads;\nEND;\n$$ LANGUAGE plpgsql;\n\n--------------------------------------------------------------------------\n- for hire: mac osx device driver ninja, kernel extensions and usb drivers\n---------------------+------------+---------------------------------------\nhttp://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont\n---------------------+------------+---------------------------------------\n\n\n\n",
"msg_date": "Sat, 30 Apr 2011 17:56:50 +0100",
"msg_from": "Joel Reymont <[email protected]>",
"msg_from_op": true,
"msg_subject": "stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "Joel Reymont <[email protected]> wrote:\n \n> We have 2 million documents now and linking an ad to all of them\n> takes 5 minutes on my top-of-the-line SSD MacBook Pro.\n \nHow long does it take to run just the SELECT part of the INSERT by\nitself?\n \n-Kevin\n",
"msg_date": "Sat, 30 Apr 2011 12:27:55 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored proc and inserting hundreds of thousands\n\t of rows"
},
{
"msg_contents": "Calculating distance involves giving an array of 150 float8 to a pgsql\nfunction, then calling a C function 2 million times (at the moment),\ngiving it two arrays of 150 float8.\n\nJust calculating distance for 2 million rows and extracting the\ndistance takes less than a second. I think that includes sorting by\ndistance and sending 100 rows to the client.\n\nAre you suggesting eliminating the physical linking and calculating\nmatching documents on the fly?\n\nIs there a way to speed up my C function by giving it all the float\narrays, calling it once and having it return a set of matches? Would\nthis be faster than calling it from a select, once for each array?\n\nSent from my comfortable recliner\n\nOn 30/04/2011, at 18:28, Kevin Grittner <[email protected]> wrote:\n\n> Joel Reymont <[email protected]> wrote:\n>\n>> We have 2 million documents now and linking an ad to all of them\n>> takes 5 minutes on my top-of-the-line SSD MacBook Pro.\n>\n> How long does it take to run just the SELECT part of the INSERT by\n> itself?\n>\n> -Kevin\n",
"msg_date": "Sat, 30 Apr 2011 18:58:50 +0100",
"msg_from": "Joel Reymont <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "\nIf you want to search by geographical coordinates, you could use a gist \nindex which can optimize that sort of things (like retrieving all rows \nwhich fit in a box).\n",
"msg_date": "Sat, 30 Apr 2011 20:04:51 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "I'm calculating distance between probability vectors, e.g. topics that\na document belongs to and the topics of an ad.\n\nThe distance function is already a C function. Topics are float8[150].\n\nDistance is calculated against all documents in the database so it's\narable scan.\n\nSent from my comfortable recliner\n\nOn 30/04/2011, at 19:04, Pierre C <[email protected]> wrote:\n\n>\n> If you want to search by geographical coordinates, you could use a gist index which can optimize that sort of things (like retrieving all rows which fit in a box).\n",
"msg_date": "Sat, 30 Apr 2011 19:14:31 +0100",
"msg_from": "Joel Reymont <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "[rearranging to correct for top-posting]\n \nJoel Reymont <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n>> Joel Reymont <[email protected]> wrote:\n>>\n>>> We have 2 million documents now and linking an ad to all of them\n>>> takes 5 minutes on my top-of-the-line SSD MacBook Pro.\n>>\n>> How long does it take to run just the SELECT part of the INSERT\n>> by itself?\n \n> Are you suggesting eliminating the physical linking and\n> calculating matching documents on the fly?\n \nI'm not suggesting anything other than it being a good idea to\ndetermine where the time is being spent before trying to make it\nfaster. You showed this as the apparent source of the five minute\ndelay:\n \n INSERT INTO doc_ads (doc_id, ad_id, distance)\n SELECT doc, (t).ad_id, (t).distance\n FROM (SELECT ads_within_distance(topics, threshold) AS t\n FROM docs\n WHERE id = doc) AS x;\n \nWhat we don't know is how much of that time is due to writing to the\ndoc_ads table, and how much is due to reading the other tables. We\ncan find that out by running this:\n \n SELECT doc, (t).ad_id, (t).distance\n FROM (SELECT ads_within_distance(topics, threshold) AS t\n FROM docs\n WHERE id = doc) AS x;\n \nIf this is where most of the time is, the next thing is to run it\nwith EXPLAIN ANALYZE, and post the output. It's a whole different\nset of things to try to tune if that part is fast and the INSERT\nitself is slow.\n \nOf course, be aware of caching effects when you time this.\n \n-Kevin\n",
"msg_date": "Sat, 30 Apr 2011 13:24:12 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored proc and inserting hundreds of thousands\n\t of rows"
},
{
"msg_contents": "Joel Reymont <[email protected]> wrote:\n \n> I'm calculating distance between probability vectors, e.g. topics\n> that a document belongs to and the topics of an ad.\n> \n> The distance function is already a C function. Topics are\n> float8[150].\n> \n> Distance is calculated against all documents in the database\n \nThere's probably a way to index that so that you don't need to do a\nfull calculation against all documents in the database each time. \nIt may even be amenable to knnGiST indexing (a new feature coming in\n9.1), which would let you do your select with an ORDER BY on the\ndistance.\n \nPostgreSQL has a lot of very cool features you just don't have in\nany other product! :-)\n \n-Kevin\n",
"msg_date": "Sat, 30 Apr 2011 13:36:46 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored proc and inserting hundreds of thousands\n\t of rows"
},
{
"msg_contents": "\nOn Apr 30, 2011, at 7:24 PM, Kevin Grittner wrote:\n\n> If this is where most of the time is, the next thing is to run it\n> with EXPLAIN ANALYZE, and post the output.\n\nI was absolutely wrong about the calculation taking < 1s, it actually takes about 30s for 2 million rows.\n\nStill, the difference between 5 minutes and 30s must be the insert.\n\nSELECT (t).doc_id, (t).distance\n FROM (SELECT docs_within_distance('{ 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.586099770475, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.167233562858, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667, 0.00166666666667 }', 50.0) as t) AS x;\n\nThis takes 27.44 seconds\n\nEXPLAIN ANALYZE VERBOSE\n\nSubquery Scan on x (cost=0.00..0.27 rows=1 width=32) (actual time=22422.418..23835.468 rows=2000002 loops=1)\n Output: (t).doc_id, (t).distance\n -> Result (cost=0.00..0.26 rows=1 width=0) (actual time=22422.410..23184.086 rows=2000002 loops=1)\n Output: docs_within_distance(('{<array above goes here>}'::double precision[])::topics, 
50::double precision)\nTotal runtime: 23948.563 ms\n\nTopics is defined thusly: CREATE DOMAIN topics AS float[150];\n\nDigging deeper into the distance function,\n\nEXPLAIN ANALYZE VERBOSE\nSELECT * \nFROM (SELECT id, divergence(<array above>, topics) AS distance FROM docs) AS tab\nWHERE tab.distance <= 50.0;\n\nSubquery Scan on tab (cost=0.00..383333.00 rows=666653 width=12) (actual time=0.027..20429.299 rows=2000002 loops=1)\n Output: tab.id, tab.distance\n Filter: (tab.distance <= 50::double precision)\n -> Seq Scan on public.docs (cost=0.00..358333.50 rows=1999960 width=36) (actual time=0.025..19908.200 rows=2000002 loops=1)\n Output: docs.id, divergence((<array above>::double precision[])::topics, docs.topics)\nTotal runtime: 20550.019 ms\n\nI can't dig any deeper because divergence is a C function.\n\n\tThanks, Joel\n\n--------------------------------------------------------------------------\n- for hire: mac osx device driver ninja, kernel extensions and usb drivers\n---------------------+------------+---------------------------------------\nhttp://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont\n---------------------+------------+---------------------------------------\n\n\n\n",
"msg_date": "Sat, 30 Apr 2011 22:15:23 +0100",
"msg_from": "Joel Reymont <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "\nOn Apr 30, 2011, at 7:36 PM, Kevin Grittner wrote:\n\n> It may even be amenable to knnGiST indexing (a new feature coming in\n> 9.1), which would let you do your select with an ORDER BY on the\n> distance.\n\nI don't think I can wait for 9.1, need to go live in a month, with PostgreSQL or without.\n\n> PostgreSQL has a lot of very cool features you just don't have in any other product! :-)\n\n\nThere's a strong NoSQL lobby here and I'm trying my best to show that PG can handle the job!\n\n--------------------------------------------------------------------------\n- for hire: mac osx device driver ninja, kernel extensions and usb drivers\n---------------------+------------+---------------------------------------\nhttp://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont\n---------------------+------------+---------------------------------------\n\n\n\n",
"msg_date": "Sat, 30 Apr 2011 22:22:24 +0100",
"msg_from": "Joel Reymont <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "On Sat, Apr 30, 2011 at 2:15 PM, Joel Reymont <[email protected]> wrote:\n>\n> On Apr 30, 2011, at 7:24 PM, Kevin Grittner wrote:\n>\n>> If this is where most of the time is, the next thing is to run it\n>> with EXPLAIN ANALYZE, and post the output.\n>\n> I was absolutely wrong about the calculation taking < 1s, it actually takes about 30s for 2 million rows.\n>\n> Still, the difference between 5 minutes and 30s must be the insert.\n\nBut what exactly are you inserting? The queries you reported below\nare not the same as the ones you originally described.\n\nIn particular, they do not seem to use the \"threshold\" parameter that\nthe original ones did, whose job is presumably to cut the 2 million\ndown to a much smaller number that meet the threshold. But how much\nsmaller is that number? This will have a large effect on how long the\ninsert takes.\n\n\n...\n\n> Digging deeper into the distance function,\n>\n> EXPLAIN ANALYZE VERBOSE\n> SELECT *\n> FROM (SELECT id, divergence(<array above>, topics) AS distance FROM docs) AS tab\n> WHERE tab.distance <= 50.0;\n>\n> Subquery Scan on tab (cost=0.00..383333.00 rows=666653 width=12) (actual time=0.027..20429.299 rows=2000002 loops=1)\n> Output: tab.id, tab.distance\n> Filter: (tab.distance <= 50::double precision)\n> -> Seq Scan on public.docs (cost=0.00..358333.50 rows=1999960 width=36) (actual time=0.025..19908.200 rows=2000002 loops=1)\n> Output: docs.id, divergence((<array above>::double precision[])::topics, docs.topics)\n\nIt looks like \"WHERE tab.distance <= 50.0;\" is not accomplishing\nanything. Are you sure the parameter shouldn't be <=0.50 instead?\n\nAlso, you previously said you didn't mind of this process took a\ncouple minutes, as long as it didn't interfere with other things going\non in the database. So you probably need to describe what those other\nthings going on in the database are.\n\nAlso, you might have a data correctness problem. If the plan is to\nscan new ads against all docs, and new docs against all ads; then if\nnew rows are added to each table during overlapping transaction, the\nnew ads against new docs comparison will not actually happen. You\nwill probably need to add manual locking to get around this problem.\n\nCheers\n\nJeff\n",
"msg_date": "Sat, 30 Apr 2011 15:11:56 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "\nOn Apr 30, 2011, at 11:11 PM, Jeff Janes wrote:\n\n> But what exactly are you inserting? The queries you reported below\n> are not the same as the ones you originally described.\n\nI posted the wrong query initially. The only difference is in the table that holds the probability array.\n\nI'm inserting document id and ad id pairs to show that this ad is not linked to this document. The mapping table has a primary key on the serial document id.\n\n> In particular, they do not seem to use the \"threshold\" parameter that\n> the original ones did, whose job is presumably to cut the 2 million\n> down to a much smaller number that meet the threshold. But how much\n> smaller is that number?\n\nThe 5 minutes is with a threshold large enough to be irrelevant. I would like to optimize the process before I apply the threshold to cut down the number of rows.\n\n> It looks like \"WHERE tab.distance <= 50.0;\" is not accomplishing\n> anything. Are you sure the parameter shouldn't be <=0.50 instead?\n\nNo, ignore the threshold for now.\n\n> Also, you previously said you didn't mind of this process took a\n> couple minutes, as long as it didn't interfere with other things going\n> on in the database. So you probably need to describe what those other\n> things going on in the database are.\n\nThose other things are ad serving which boils down to a lookup of ad ids linked to the document. \n\nThis is a lookup from the mapping table using the primary key that goes on at the same time as a large number of <doc,ad> mappings are being inserted into the same table.\n\nDocuments are uploaded into the system at a rate of 10k per day, once every couple of seconds. I wish I could get rid of storing the <doc,ad> mapping as that table is gonna grow absolutely huge when each new ad matches tens or hundreds of thousands of documents. \n\nI don't think I can do the matching when serving an ad, though, as I will still need to scan millions of probability vectors (one per doc) to calculate the distance between current document and existing ads.\n\nThen again, the number of ads in the system will always be a fraction of the number of documents so, perhaps, the matching of document to ads can be done at runtime.\n\n> Also, you might have a data correctness problem. If the plan is to\n> scan new ads against all docs, and new docs against all ads;\n\nThat's basically it. \n\nAs new ads are entered, they need to be matched with existing documents. \n\nAs new documents are entered, they need to be matched with existing ads. \n\nBoth ads and docs are represented by probability vectors of 150 floats so it's the same distance calculation.\n\n> then if new rows are added to each table during overlapping transaction, the\n> new ads against new docs comparison will not actually happen. You\n> will probably need to add manual locking to get around this problem.\n\nI'll ponder this, thanks for pointing it out!\n\n\n--------------------------------------------------------------------------\n- for hire: mac osx device driver ninja, kernel extensions and usb drivers\n---------------------+------------+---------------------------------------\nhttp://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont\n---------------------+------------+---------------------------------------\n\n\n\n",
"msg_date": "Sat, 30 Apr 2011 23:29:25 +0100",
"msg_from": "Joel Reymont <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "On Sat, Apr 30, 2011 at 3:29 PM, Joel Reymont <[email protected]> wrote:\n>\n> On Apr 30, 2011, at 11:11 PM, Jeff Janes wrote:\n>\n>> But what exactly are you inserting? The queries you reported below\n>> are not the same as the ones you originally described.\n>\n> I posted the wrong query initially. The only difference is in the table that holds the probability array.\n>\n> I'm inserting document id and ad id pairs to show that this ad is not linked to this document. The mapping table has a primary key on the serial document id.\n\nHaving the (doc_id, ad_id) pair be missing from the table is a far\nmore efficient way to show that the ad is not linked to the document\n(i.e. that it is below the threshold). Provided that you are careful\nthat there are no other reasons that the pair could be missing--but if\nyou are not careful about that, then I don't see how storing the full\nmatrix will save you anyway.\n\n>\n>> In particular, they do not seem to use the \"threshold\" parameter that\n>> the original ones did, whose job is presumably to cut the 2 million\n>> down to a much smaller number that meet the threshold. But how much\n>> smaller is that number?\n>\n> The 5 minutes is with a threshold large enough to be irrelevant. I would like to optimize the process before I apply the threshold to cut down the number of rows.\n>\n>> It looks like \"WHERE tab.distance <= 50.0;\" is not accomplishing\n>> anything. Are you sure the parameter shouldn't be <=0.50 instead?\n>\n> No, ignore the threshold for now.\n\nOK, but it seems to me that you are starting out by ruling out the one\noptimization that is most likely to work.\n\n>> Also, you previously said you didn't mind of this process took a\n>> couple minutes, as long as it didn't interfere with other things going\n>> on in the database. So you probably need to describe what those other\n>> things going on in the database are.\n>\n> Those other things are ad serving which boils down to a lookup of ad ids linked to the document.\n>\n> This is a lookup from the mapping table using the primary key that goes on at the same time as a large number of <doc,ad> mappings are being inserted into the same table.\n\nWhat numbers do you get for lookups per second when inserts are also\ngoing on, versus when they are not going on?\n\nThe way I would approach this is by making two independent tasks, one\nthat insert records at your anticipated rate \"insert into foo select\ngenerate_series from generate_series(1,100000);\" in a loop, and\nanother than generates select load against a separate table (like\npgbench -S) and see how the two interact with each other by competing\nfor CPU and IO.\n\nYou could throttle the insert process by adding pg_sleep(<some\nfraction of a second>) as a column in one of your selects, so it\npauses at every row. But due to granularity of pg_sleep, you might\nhave to put it in a CASE expression so it is invoked on only a random\nsubset of the rows rather than each row. But once throttled, will\nit be able to keep up with the flow of new docs and ads?\n\n\n>\n> I don't think I can do the matching when serving an ad, though, as I will still need to scan millions of probability vectors (one per doc) to calculate the distance between current document and existing ads.\n\ngist indices are designed to make this type of thing fast, by using\ntechniques to rule out most of those comparisons without actually\nperforming them. 
I don't know enough about the\nguts of either your distance function or the gist indexes to know if\nyou can do it this way, but if you can it would certainly be the way\nto go.\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 30 Apr 2011 17:12:15 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "On Sat, Apr 30, 2011 at 5:12 PM, Jeff Janes <[email protected]> wrote:\n\n>\n>\n> gist indices are designed to make this type of thing fast, by using\n> techniques to rule out most of those comparisons without actually\n> performing them. I don't know enough about the\n> guts of either your distance function or the gist indexes to know if\n> you can do it this way, but if you can it would certainly be the way\n> to go.\n>\n\nIt is definitely a good idea to consider a gist index for eliminating most\nof a large dataset, if applicable. Do a little reading on the topic and,\nhopefully, it's applicability (or not) will become apparent.\n\nHowever, as someone who has built a number of ad servers over the years, for\nseveral of the larger ad networks, the first thing I'd do is separate your\nad serving from real-time interaction with your database, no matter what the\nunderlying technology. If you ad serving is dependent on your database, it\nmeans that hiccups in the database will have an impact on ad serving, which\nis rarely tolerable. And god forbid you should need to take the db down for\na period of maintenance. The reliability and performance required of most ad\nservers is often greater than what should reasonably be expected of a\nrelational database, particularly if there are other processes accessing the\ndatabase, as is the case with your system. The first rule of ad serving is\nthat no outage of backend systems should ever be able to prevent or impact\nfront end ad serving. Some kind of in-memory cache of doc/ad mappings which\nthe ad server interacts with will serve you in good stead and will be much\neasier to scale horizontally than most relational db architectures lend\nthemselves to. If you have an ever increasing set of documents and ads,\nyou'll inevitably wind up 'sharding' your dataset across multiple db hosts\nin order to maintain performance - which creates a number of maintenance\ncomplexities. Easier to maintain a single database and do analytics over a\nsingle data source, but insulate it from the real-time performance\nrequirements of your ad serving. Even something as simple as a process that\npushes the most recent doc/ad mappings into a memcache instance could be\nsufficient - and you can scale your memcache across as many hosts as is\nnecessary to deliver the lookup latencies that you require no matter how\nlarge the dataset. Similarly, if you are updating the database from the ad\nserver with each ad served in order to record an impression or click, you'll\nbe far better off logging those and then processing the logs in bulk on a\nperiodic basis. If subsequent impressions are dependent upon what has\nalready been served historically, then use your memcache instance (or\nwhatever structure you eventually choose to utilize) to handle those\nlookups. This gives you the convenience and flexibility of a relational\nsystem with SQL for access, but without the constraints of the capabilities\nof a single host limiting real-time performance of the system as a whole.\n\nOn Sat, Apr 30, 2011 at 5:12 PM, Jeff Janes <[email protected]> wrote:\n\n\ngist indices are designed to make this type of thing fast, by using\ntechniques to rule out most of those comparisons without actually\nperforming them. 
I don't know enough about the\nguts of either your distance function or the gist indexes to know if\nyou can do it this way, but if you can it would certainly be the way\nto go.It is definitely a good idea to consider a gist index for eliminating most of a large dataset, if applicable. Do a little reading on the topic and, hopefully, it's applicability (or not) will become apparent.\nHowever, as someone who has built a number of ad servers over the years, for several of the larger ad networks, the first thing I'd do is separate your ad serving from real-time interaction with your database, no matter what the underlying technology. If you ad serving is dependent on your database, it means that hiccups in the database will have an impact on ad serving, which is rarely tolerable. And god forbid you should need to take the db down for a period of maintenance. The reliability and performance required of most ad servers is often greater than what should reasonably be expected of a relational database, particularly if there are other processes accessing the database, as is the case with your system. The first rule of ad serving is that no outage of backend systems should ever be able to prevent or impact front end ad serving. Some kind of in-memory cache of doc/ad mappings which the ad server interacts with will serve you in good stead and will be much easier to scale horizontally than most relational db architectures lend themselves to. If you have an ever increasing set of documents and ads, you'll inevitably wind up 'sharding' your dataset across multiple db hosts in order to maintain performance - which creates a number of maintenance complexities. Easier to maintain a single database and do analytics over a single data source, but insulate it from the real-time performance requirements of your ad serving. Even something as simple as a process that pushes the most recent doc/ad mappings into a memcache instance could be sufficient - and you can scale your memcache across as many hosts as is necessary to deliver the lookup latencies that you require no matter how large the dataset. Similarly, if you are updating the database from the ad server with each ad served in order to record an impression or click, you'll be far better off logging those and then processing the logs in bulk on a periodic basis. If subsequent impressions are dependent upon what has already been served historically, then use your memcache instance (or whatever structure you eventually choose to utilize) to handle those lookups. This gives you the convenience and flexibility of a relational system with SQL for access, but without the constraints of the capabilities of a single host limiting real-time performance of the system as a whole.",
"msg_date": "Sat, 30 Apr 2011 18:00:34 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of rows"
},
{
"msg_contents": "On 04/30/2011 09:00 PM, Samuel Gendler wrote:\n> Some kind of in-memory cache of doc/ad mappings which the ad server \n> interacts with will serve you in good stead and will be much easier to \n> scale horizontally than most relational db architectures lend \n> themselves to...Even something as simple as a process that pushes the \n> most recent doc/ad mappings into a memcache instance could be \n> sufficient - and you can scale your memcache across as many hosts as \n> is necessary to deliver the lookup latencies that you require no \n> matter how large the dataset. \n\nMany of the things I see people switching over to NoSQL key/value store \nsolutions would be served equally well on the performance side by a \nmemcache layer between the application and the database. If you can map \nthe problem into key/value pairs for NoSQL, you can almost certainly do \nthat using a layer above PostgreSQL instead.\n\nThe main downside of that, what people seem to object to, is that it \nmakes for two pieces of software that need to be maintained; the NoSQL \nsolutions can do it with just one. If you have more complicated queries \nto run, too, the benefit to using a more complicated database should \noutweigh that extra complexity though.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Sun, 01 May 2011 19:31:15 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: stored proc and inserting hundreds of thousands of\n rows"
}
]
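A note on the batching question Joel raises in the thread above: a PL/pgSQL function always runs inside a single transaction, so the "small batches" approach cannot be implemented inside link_doc_to_ads() itself; the chunking has to be driven from the client, which calls a chunked variant once per ad-id range and commits between calls. The sketch below is illustrative only and assumes Joel's schema as posted; the function name and the lo/hi parameters are not from the thread, and each call still re-evaluates ads_within_distance(), so this bounds transaction size and lock duration rather than total work.

CREATE OR REPLACE FUNCTION link_doc_to_ads_chunk(doc id, threshold float,
                                                 lo int, hi int)
RETURNS void AS $$
BEGIN
    -- Same insert as the original function, restricted to one slice of ads
    -- so each call (and therefore each transaction) stays small.
    INSERT INTO doc_ads (doc_id, ad_id, distance)
    SELECT doc, (t).ad_id, (t).distance
    FROM (SELECT ads_within_distance(topics, threshold) AS t
          FROM docs
          WHERE id = doc) AS x
    WHERE (t).ad_id BETWEEN lo AND hi;
END;
$$ LANGUAGE plpgsql;

-- Called from the application in slices, committing between calls, e.g.:
--   SELECT link_doc_to_ads_chunk(123, 50.0, 1,      100000);
--   SELECT link_doc_to_ads_chunk(123, 50.0, 100001, 200000);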
[
{
"msg_contents": "Hi. I'm on a 64 Bit CentOS 5 system, quadcore processor, 8GB RAM and\ntons of data storage (1 TB SATAII disks).\n\nThe current SHMMAX and SHMMIN are (commas added for legibility) --\n\nkernel.shmmax = 68,719,476,736\nkernel.shmall = 4,294,967,296\n\nNow, according to my reading in the PG manual and this list, a good\nrecommended value for SHMMAX is\n\n (shared_buffers * 8192)\n\nMy postgresql.conf settings at the moment are:\n\n max_connections = 300\n shared_buffers = 300MB\n effective_cache_size = 2000MB\n\nBy this calculation, shared_b * 8192 will be:\n\n 2,457,600,000,000\n\nThat's a humongous number. So either the principle for SHMMAX is\namiss, or I am reading this wrongly?\n\nSimilarly with \"fs.file_max\". There are articles like this one:\nhttp://tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/chap6sec72.html\n\nIs this relevant for PostgreSQL performance at all, or should I skip that?\n\nThanks for any pointers!\n",
"msg_date": "Sun, 1 May 2011 14:48:42 +0800",
"msg_from": "Phoenix Kiula <[email protected]>",
"msg_from_op": true,
"msg_subject": "The right SHMMAX and FILE_MAX"
},
{
"msg_contents": "Phoenix Kiula <[email protected]> wrote:\n \n> Now, according to my reading in the PG manual and this list, a\n> good recommended value for SHMMAX is\n> \n> (shared_buffers * 8192)\n \nWhere did you see that? The amount of data buffered is the number\nof shared buffers * 8KB. Taking shared_buffers as a number of bytes\nand multiplying by 8K makes no sense at all. Any documentation\nwhich can be read to indicate that should be fixed.\n \nBesides that, there is shared memory space needed besides the actual\nbuffered disk pages, so you're not looking at the whole picture once\nyou stop dealing with \"bytes squared\".\n \n-Kevin\n",
"msg_date": "Sun, 01 May 2011 12:18:50 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The right SHMMAX and FILE_MAX"
},
{
"msg_contents": "On 05/01/2011 02:48 AM, Phoenix Kiula wrote:\n> Hi. I'm on a 64 Bit CentOS 5 system, quadcore processor, 8GB RAM and\n> tons of data storage (1 TB SATAII disks).\n>\n> The current SHMMAX and SHMMIN are (commas added for legibility) --\n>\n> kernel.shmmax = 68,719,476,736\n> kernel.shmall = 4,294,967,296\n> \n\nThat's set higher than the amount of RAM in the server. Run the \nattached script; it will produce reasonable values for your server, \npresuming you'll never want to allocate >50% of the RAM in the server \nfor shared memory. Given standard tuning for shared_buffers is <40%, \nI've never run into a situation where this was a terrible choice if you \nwant to just set and forget about it. Only reason to fine-tine is if \nanother major user of shared memory is running on the server\n\n> Now, according to my reading in the PG manual and this list, a good\n> recommended value for SHMMAX is\n>\n> (shared_buffers * 8192)\n> \n\nThe value for shared_buffers stored internally is in 8192 byte pages:\n\nselect setting,unit,current_setting(name) from pg_settings where \nname='shared_buffers';\n setting | unit | current_setting\n---------+------+-----------------\n 4096 | 8kB | 32MB\n\nSo any formula you found that does this sort of thing is just converting \nit back to bytes again, and is probably from an earlier PostgreSQL \nversion where you couldn't set this parameter in memory units. SHMMAX \nneeds to be a bit bigger than shared_buffers in bytes.\n\n> Similarly with \"fs.file_max\". There are articles like this one:\n> http://tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/chap6sec72.html\n> Is this relevant for PostgreSQL performance at all, or should I skip that?\n> \n\nThat's ancient history. This is how big the default is on the two Linux \ndistributions I have handy:\n\n[RHEL5]\n$ cat /proc/sys/fs/file-max\n745312\n\n[Debian Squeeze]\n$ cat /proc/sys/fs/file-max\n1645719\n\nIt was a tiny number circa the RedHat 6 that manual was written for, now \nit's very unlikely you'll exceed the kernel setting here.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books",
"msg_date": "Sun, 01 May 2011 15:38:51 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The right SHMMAX and FILE_MAX"
},
{
"msg_contents": "I am also in need of a proper documentation that explains how to set \nSHMAX and SHMALL variables in Postgres.\n\nWhat things need to be taken in consideration before doing that ?\nWhat is the value of SHMAX & SHMALL if u have 16 GB RAM for Postgres \nServer ?\n\n\n\nThanks\n\nPhoenix Kiula wrote:\n> Hi. I'm on a 64 Bit CentOS 5 system, quadcore processor, 8GB RAM and\n> tons of data storage (1 TB SATAII disks).\n>\n> The current SHMMAX and SHMMIN are (commas added for legibility) --\n>\n> kernel.shmmax = 68,719,476,736\n> kernel.shmall = 4,294,967,296\n>\n> Now, according to my reading in the PG manual and this list, a good\n> recommended value for SHMMAX is\n>\n> (shared_buffers * 8192)\n>\n> My postgresql.conf settings at the moment are:\n>\n> max_connections = 300\n> shared_buffers = 300MB\n> effective_cache_size = 2000MB\n>\n> By this calculation, shared_b * 8192 will be:\n>\n> 2,457,600,000,000\n>\n> That's a humongous number. So either the principle for SHMMAX is\n> amiss, or I am reading this wrongly?\n>\n> Similarly with \"fs.file_max\". There are articles like this one:\n> http://tldp.org/LDP/solrhe/Securing-Optimizing-Linux-RH-Edition-v1.3/chap6sec72.html\n>\n> Is this relevant for PostgreSQL performance at all, or should I skip that?\n>\n> Thanks for any pointers!\n>\n> \n\n",
"msg_date": "Mon, 02 May 2011 10:23:15 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The right SHMMAX and FILE_MAX"
},
{
"msg_contents": "On 01/05/11 18:48, Phoenix Kiula wrote:\n> Now, according to my reading in the PG manual and this list, a good\n> recommended value for SHMMAX is\n>\n> (shared_buffers * 8192)\n>\n> My postgresql.conf settings at the moment are:\n>\n> max_connections = 300\n> shared_buffers = 300MB\n> effective_cache_size = 2000MB\n>\n> By this calculation, shared_b * 8192 will be:\n>\n> 2,457,600,000,000\n>\n> That's a humongous number. So either the principle for SHMMAX is\n> amiss, or I am reading this wrongly?\n\nYou are confusing shared_buffers expressed as \"number of pages\" with \nshared_buffers expressed as \"MB\". The docs are assuming you are working \nwith the former (and would appear to be assuming your pagesize is 8K - \nwhich is teh default but not required to be the case). If you are \nshowing shared_buffers as \"MB\" then obviously you cane set SHMMAX using \nthe value multiplied by (1024*1024), so in your case:\n\n300 * (1024*1024) = 314572800\n\n\nHowever note that there are things other than shared_buffers require \nshared memory, so you'll need more than this. Use Greg's script.\n\nCheers\n\nMark\n",
"msg_date": "Mon, 02 May 2011 21:06:28 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The right SHMMAX and FILE_MAX"
},
{
"msg_contents": "Dne 2.5.2011 06:53, Adarsh Sharma napsal(a):\n> I am also in need of a proper documentation that explains how to set \n> SHMAX and SHMALL variables in Postgres.\n> \n> What things need to be taken in consideration before doing that ?\n> What is the value of SHMAX & SHMALL if u have 16 GB RAM for Postgres\n> Server ?\n\nWell, those two values actually define kernel limits for shared memory\nsegments (i.e. memory shared by multiple processes, in this case the\npostmaster proces and backends). So it's rather a question of tuning\nshared_buffers (because that's the shared memory segment) and then\nsetting those two values.\n\nSHMMAX - max. size of a single shared segment (in bytes)\nSHMALL - total size of shared segments (in pages, page is usually 4kB)\n\nSo if you decide you want 1GB shared buffers, you'll need at least this\n\nSHMMAX = 1024 * 1024 * 1024 (i.e. 1GB)\nSHMALL = 1024 * 256 (1GB in 4kB pages)\n\n(althouth the SHMALL should be higher, as there will be other processes\nthat need shared memory).\n\nThere's a lot of docs about this, e.g. this one (it's mostly for Oracle,\nbut it describes the shared memory quite nicely):\n\nhttp://www.puschitz.com/TuningLinuxForOracle.shtml#SettingSharedMemory\n\nregards\nTomas\n",
"msg_date": "Mon, 02 May 2011 12:50:19 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The right SHMMAX and FILE_MAX"
},
{
"msg_contents": "Tomas Vondra wrote:\n> Dne 2.5.2011 06:53, Adarsh Sharma napsal(a):\n> \n>> I am also in need of a proper documentation that explains how to set \n>> SHMAX and SHMALL variables in Postgres.\n>>\n>> What things need to be taken in consideration before doing that ?\n>> What is the value of SHMAX & SHMALL if u have 16 GB RAM for Postgres\n>> Server ?\n>> \n>\n> Well, those two values actually define kernel limits for shared memory\n> segments (i.e. memory shared by multiple processes, in this case the\n> postmaster proces and backends). So it's rather a question of tuning\n> shared_buffers (because that's the shared memory segment) and then\n> setting those two values.\n> \n\nWhen I was tuning Postgresql for best Performance, I set my \nshared_buffers= 4096 MB as I set 25% of RAM ( 1/4 )\n\nSo Do I need to set my SHMMAX =4096 MB.\n\nWhat is the SHMALL size now ?\n\n> SHMMAX - max. size of a single shared segment (in bytes)\n> SHMALL - total size of shared segments (in pages, page is usually 4kB)\n>\n> So if you decide you want 1GB shared buffers, you'll need at least this\n>\n> SHMMAX = 1024 * 1024 * 1024 (i.e. 1GB)\n> SHMALL = 1024 * 256 (1GB in 4kB pages)\n>\n> (althouth the SHMALL should be higher, as there will be other processes\n> that need shared memory).\n>\n> There's a lot of docs about this, e.g. this one (it's mostly for Oracle,\n> but it describes the shared memory quite nicely):\n>\n> http://www.puschitz.com/TuningLinuxForOracle.shtml#SettingSharedMemory\n>\n> regards\n> Tomas\n>\n> \n\n\n\n\n\n\n\nTomas Vondra wrote:\n\nDne 2.5.2011 06:53, Adarsh Sharma napsal(a):\n \n\nI am also in need of a proper documentation that explains how to set \nSHMAX and SHMALL variables in Postgres.\n\nWhat things need to be taken in consideration before doing that ?\nWhat is the value of SHMAX & SHMALL if u have 16 GB RAM for Postgres\nServer ?\n \n\n\nWell, those two values actually define kernel limits for shared memory\nsegments (i.e. memory shared by multiple processes, in this case the\npostmaster proces and backends). So it's rather a question of tuning\nshared_buffers (because that's the shared memory segment) and then\nsetting those two values.\n \n\n\nWhen I was tuning Postgresql for best Performance, I set my\nshared_buffers= 4096 MB as I set 25% of RAM ( 1/4 )\n\nSo Do I need to set my SHMMAX =4096 MB.\n\nWhat is the SHMALL size now ?\n\n\n\nSHMMAX - max. size of a single shared segment (in bytes)\nSHMALL - total size of shared segments (in pages, page is usually 4kB)\n\nSo if you decide you want 1GB shared buffers, you'll need at least this\n\nSHMMAX = 1024 * 1024 * 1024 (i.e. 1GB)\nSHMALL = 1024 * 256 (1GB in 4kB pages)\n\n(althouth the SHMALL should be higher, as there will be other processes\nthat need shared memory).\n\nThere's a lot of docs about this, e.g. this one (it's mostly for Oracle,\nbut it describes the shared memory quite nicely):\n\nhttp://www.puschitz.com/TuningLinuxForOracle.shtml#SettingSharedMemory\n\nregards\nTomas",
"msg_date": "Mon, 02 May 2011 16:31:43 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The right SHMMAX and FILE_MAX"
},
{
"msg_contents": "On 05/02/2011 12:53 AM, Adarsh Sharma wrote:\n> I am also in need of a proper documentation that explains how to set \n> SHMAX and SHMALL variables in Postgres.\n>\n> What things need to be taken in consideration before doing that ?\n> What is the value of SHMAX & SHMALL if u have 16 GB RAM for Postgres \n> Server ?\n\nRunning the script I provided on a Linux server with 16GB of RAM:\n\n$ cat /proc/meminfo | grep MemTotal\nMemTotal: 16467464 kB\n\n$ getconf PAGE_SIZE\n4096\n\n$ ./shmsetup\n# Maximum shared segment size in bytes\nkernel.shmmax = 8431341568\n# Maximum number of shared memory segments in pages\nkernel.shmall = 2058433\n\nThat sets SHMMAX to 50% of total RAM, which should be sufficient for any \nreasonable installation of PostgreSQL. shmmall is a similar limit \nexpressed in system pages instead, so as seen here you have to divide by \nthe system page size of 4096 to figure it out.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Mon, 02 May 2011 11:18:52 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The right SHMMAX and FILE_MAX"
}
]
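To make the pages-versus-bytes confusion discussed above concrete, the arithmetic can be checked from psql itself. This is only a sketch: it assumes the default 8192-byte PostgreSQL block size and the 4096-byte OS page size shown by getconf in Greg's message.

SELECT current_setting('shared_buffers')   AS shared_buffers,
       setting::bigint * 8192              AS shared_buffers_bytes,
       setting::bigint * 8192 / 4096       AS os_pages_4k
FROM pg_settings
WHERE name = 'shared_buffers';

-- kernel.shmmax only has to exceed shared_buffers_bytes plus some overhead
-- for PostgreSQL's other shared structures; kernel.shmall is counted in OS
-- pages, which is why the two kernel settings use different units.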
[
{
"msg_contents": "Hi I have 3 tables \npage - revision - pagecontent\n\nCREATE TABLE mediawiki.page\n(\n page_id serial NOT NULL,\n page_namespace smallint NOT NULL,\n page_title text NOT NULL,\n page_restrictions text,\n page_counter bigint NOT NULL DEFAULT 0,\n page_is_redirect smallint NOT NULL DEFAULT 0,\n page_is_new smallint NOT NULL DEFAULT 0,\n page_random numeric(15,14) NOT NULL DEFAULT random(),\n page_touched timestamp with time zone,\n page_latest integer NOT NULL,\n page_len integer NOT NULL,\n titlevector tsvector,\n page_type integer NOT NULL DEFAULT 0,\n CONSTRAINT page_pkey PRIMARY KEY (page_id)\n);\n\nCREATE TABLE mediawiki.revision\n(\n rev_id serial NOT NULL,\n rev_page integer,\n rev_text_id integer,\n rev_comment text,\n rev_user integer NOT NULL,\n rev_user_text text NOT NULL,\n rev_timestamp timestamp with time zone NOT NULL,\n rev_minor_edit smallint NOT NULL DEFAULT 0,\n rev_deleted smallint NOT NULL DEFAULT 0,\n rev_len integer,\n rev_parent_id integer,\n CONSTRAINT revision_rev_page_fkey FOREIGN KEY (rev_page)\n REFERENCES mediawiki.page (page_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT revision_rev_id_key UNIQUE (rev_id)\n)\n\nCREATE TABLE mediawiki.pagecontent\n(\n old_id integer NOT NULL DEFAULT\nnextval('mediawiki.text_old_id_seq'::regclass),\n old_text text,\n old_flags text,\n textvector tsvector,\n CONSTRAINT pagecontent_pkey PRIMARY KEY (old_id)\n)\n\nwhere i have query \nSELECT pa.page_id, pa.page_title, \nts_rank(pc.textvector,(to_tsquery('fotbal')))+ts_rank(pa.titlevector,(to_tsquery('fotbal')))*10\nas totalrank \n\tfrom mediawiki.page pa, mediawiki.revision re, mediawiki.pagecontent pc \n\tWHERE pa.page_id in \n\t\t(SELECT page_id FROM mediawiki.page WHERE page_id IN\n\t\t(SELECT page_id FROM mediawiki.page \n\t\t\t WHERE (titlevector @@ (to_tsquery('fotbal'))))\n\t\tOR page_id IN\n\t\t(SELECT p.page_id from mediawiki.page p,mediawiki.revision r,\n\t\t(SELECT old_id FROM mediawiki.pagecontent \n\t\tWHERE (textvector @@ (to_tsquery('fotbal')))) ss\n\t\tWHERE (p.page_id=r.rev_page AND r.rev_id=ss.old_id)))\n\tAND (pa.page_id=re.rev_page AND re.rev_id=pc.old_id)\n\tORDER BY totalrank LIMIT 100;\n\nThis query find out titles of pages in page and content in page content by\nfull text search - @@ \nafterwards i count for the resulted id by ts_rank the relevance.\n\nNow the problem. 
\nWhen I try ANALYZE it shows:\n\"Limit (cost=136568.00..136568.25 rows=100 width=185)\"\n\" -> Sort (cost=136568.00..137152.26 rows=233703 width=185)\"\n\" Sort Key: ((ts_rank(pc.textvector, to_tsquery('fotbal'::text)) +\n(ts_rank(pa.titlevector, to_tsquery('fotbal'::text)) * 10::double\nprecision)))\"\n\" -> Hash Join (cost=61707.99..127636.04 rows=233703 width=185)\"\n\" Hash Cond: (re.rev_id = pc.old_id)\"\n\" -> Merge Join (cost=24098.90..71107.48 rows=233703\nwidth=66)\"\n\" Merge Cond: (pa.page_id = re.rev_page)\"\n\" -> Merge Semi Join (cost=24096.98..55665.69\nrows=233703 width=66)\"\n\" Merge Cond: (pa.page_id =\nmediawiki.page.page_id)\"\n\" -> Index Scan using page_btree_id on page pa \n(cost=0.00..13155.20 rows=311604 width=62)\"\n\" -> Index Scan using page_btree_id on page \n(cost=24096.98..38810.19 rows=233703 width=4)\"\n\" Filter: ((hashed SubPlan 1) OR (hashed\nSubPlan 2))\"\n\" SubPlan 1\"\n\" -> Bitmap Heap Scan on page \n(cost=10.41..900.33 rows=270 width=4)\"\n\" Recheck Cond: (titlevector @@\nto_tsquery('fotbal'::text))\"\n\" -> Bitmap Index Scan on gin_index \n(cost=0.00..10.34 rows=270 width=0)\"\n\" Index Cond: (titlevector @@\nto_tsquery('fotbal'::text))\"\n\" SubPlan 2\"\n\" -> Nested Loop (cost=1499.29..23192.08\nrows=1558 width=4)\"\n\" -> Nested Loop \n(cost=1499.29..15967.11 rows=1558 width=4)\"\n\" -> Bitmap Heap Scan on\npagecontent (cost=1499.29..6448.12 rows=1558 width=4)\"\n\" Recheck Cond:\n(textvector @@ to_tsquery('fotbal'::text))\"\n\" -> Bitmap Index Scan\non gin_index2 (cost=0.00..1498.90 rows=1558 width=0)\"\n\" Index Cond:\n(textvector @@ to_tsquery('fotbal'::text))\"\n\" -> Index Scan using\npage_btree_rev_content_id on revision r (cost=0.00..6.10 rows=1 width=8)\"\n\" Index Cond: (r.rev_id =\npagecontent.old_id)\"\n\" -> Index Scan using page_btree_id\non page p (cost=0.00..4.62 rows=1 width=4)\"\n\" Index Cond: (p.page_id =\nr.rev_page)\"\n\" -> Index Scan using page_btree_rev_page_id on revision\nre (cost=0.00..11850.52 rows=311604 width=8)\"\n\" -> Hash (cost=27932.04..27932.04 rows=311604 width=127)\"\n\" -> Seq Scan on pagecontent pc (cost=0.00..27932.04\nrows=311604 width=127)\"\n\n\nI there some posibility to speed up the hash join which takes a lot of time?\nI have tried to find some solution, but it was not successfull.\nThanks a lot.--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-improvement-tp4362578p4362578.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sun, 1 May 2011 03:23:52 -0700 (PDT)",
"msg_from": "Mark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query improvement"
},
{
"msg_contents": "On Sun, May 1, 2011 at 12:23 PM, Mark <[email protected]> wrote:\n> Now the problem.\n> When I try ANALYZE it shows:\n\nThat's a regular explain... can you post an EXPLAIN ANALYZE?\n\nHash joins are very inefficient if they require big temporary files.\nI usually work around that by disabling hash joins for the problematic queries:\n\nset enable_hashjoin = false;\n<query>\nset enable_hashjoin = true;\n\nBut an explain analyze would confirm or deny that theory.\n",
"msg_date": "Mon, 2 May 2011 09:58:40 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query improvement"
},
{
"msg_contents": "Here is EXPLAIN ANALYZE:\n\n\"Limit (cost=136568.00..136568.25 rows=100 width=185) (actual\ntime=1952.174..1952.215 rows=100 loops=1)\"\n\" -> Sort (cost=136568.00..137152.26 rows=233703 width=185) (actual\ntime=1952.172..1952.188 rows=100 loops=1)\"\n\" Sort Key: ((ts_rank(pc.textvector, to_tsquery('fotbal'::text)) +\n(ts_rank(pa.titlevector, to_tsquery('fotbal'::text)) * 10::double\nprecision)))\"\n\" Sort Method: top-N heapsort Memory: 23kB\"\n\" -> Hash Join (cost=61707.99..127636.04 rows=233703 width=185)\n(actual time=1046.838..1947.815 rows=3278 loops=1)\"\n\" Hash Cond: (re.rev_id = pc.old_id)\"\n\" -> Merge Join (cost=24098.90..71107.48 rows=233703\nwidth=66) (actual time=200.884..859.453 rows=3278 loops=1)\"\n\" Merge Cond: (pa.page_id = re.rev_page)\"\n\" -> Merge Semi Join (cost=24096.98..55665.69\nrows=233703 width=66) (actual time=200.843..629.821 rows=3278 loops=1)\"\n\" Merge Cond: (pa.page_id =\nmediawiki.page.page_id)\"\n\" -> Index Scan using page_btree_id on page pa \n(cost=0.00..13155.20 rows=311604 width=62) (actual time=0.027..145.989\nrows=311175 loops=1)\"\n\" -> Index Scan using page_btree_id on page \n(cost=24096.98..38810.19 rows=233703 width=4) (actual time=200.779..429.219\nrows=3278 loops=1)\"\n\" Filter: ((hashed SubPlan 1) OR (hashed\nSubPlan 2))\"\n\" SubPlan 1\"\n\" -> Bitmap Heap Scan on page \n(cost=10.41..900.33 rows=270 width=4) (actual time=0.748..9.845 rows=280\nloops=1)\"\n\" Recheck Cond: (titlevector @@\nto_tsquery('fotbal'::text))\"\n\" -> Bitmap Index Scan on gin_index \n(cost=0.00..10.34 rows=270 width=0) (actual time=0.586..0.586 rows=280\nloops=1)\"\n\" Index Cond: (titlevector @@\nto_tsquery('fotbal'::text))\"\n\" SubPlan 2\"\n\" -> Nested Loop (cost=1499.29..23192.08\nrows=1558 width=4) (actual time=2.032..185.743 rows=3250 loops=1)\"\n\" -> Nested Loop \n(cost=1499.29..15967.11 rows=1558 width=4) (actual time=1.980..109.491\nrows=3250 loops=1)\"\n\" -> Bitmap Heap Scan on\npagecontent (cost=1499.29..6448.12 rows=1558 width=4) (actual\ntime=1.901..36.583 rows=3250 loops=1)\"\n\" Recheck Cond:\n(textvector @@ to_tsquery('fotbal'::text))\"\n\" -> Bitmap Index Scan\non gin_index2 (cost=0.00..1498.90 rows=1558 width=0) (actual\ntime=1.405..1.405 rows=3250 loops=1)\"\n\" Index Cond:\n(textvector @@ to_tsquery('fotbal'::text))\"\n\" -> Index Scan using\npage_btree_rev_content_id on revision r (cost=0.00..6.10 rows=1 width=8)\n(actual time=0.020..0.021 rows=1 loops=3250)\"\n\" Index Cond: (r.rev_id =\npagecontent.old_id)\"\n\" -> Index Scan using page_btree_id\non page p (cost=0.00..4.62 rows=1 width=4) (actual time=0.022..0.022 rows=1\nloops=3250)\"\n\" Index Cond: (p.page_id =\nr.rev_page)\"\n\" -> Index Scan using page_btree_rev_page_id on revision\nre (cost=0.00..11850.52 rows=311604 width=8) (actual time=0.012..166.042\nrows=311175 loops=1)\"\n\" -> Hash (cost=27932.04..27932.04 rows=311604 width=127)\n(actual time=801.000..801.000 rows=311604 loops=1)\"\n\" Buckets: 1024 Batches: 64 Memory Usage: 744kB\"\n\" -> Seq Scan on pagecontent pc (cost=0.00..27932.04\nrows=311604 width=127) (actual time=0.018..465.686 rows=311604 loops=1)\"\n\"Total runtime: 1952.962 ms\"\n\n\nI have tried \nset enable_hashjoin = false;\n<query>\nset enable_hashjoin = true; \n\nbut the result have been worst than before. By the way is there a posibility\nto create beeter query with same effect?\nI have tried more queries, but this has got best performance yet. 
\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-improvement-tp4362578p4365717.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 2 May 2011 13:54:06 -0700 (PDT)",
"msg_from": "Mark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query improvement"
},
{
"msg_contents": "On Mon, May 2, 2011 at 10:54 PM, Mark <[email protected]> wrote:\n> but the result have been worst than before. By the way is there a posibility\n> to create beeter query with same effect?\n> I have tried more queries, but this has got best performance yet.\n\nWell, this seems to be the worst part:\n\n (SELECT page_id FROM mediawiki.page WHERE page_id IN\n (SELECT page_id FROM mediawiki.page\n WHERE (titlevector @@ (to_tsquery('fotbal'))))\n OR page_id IN\n (SELECT p.page_id from mediawiki.page p,mediawiki.revision r,\n (SELECT old_id FROM mediawiki.pagecontent\n WHERE (textvector @@ (to_tsquery('fotbal')))) ss\n WHERE (p.page_id=r.rev_page AND r.rev_id=ss.old_id)))\n\nIf you're running a new enough pg (8.4+), you could try using CTEs for that.\n\nI haven't used CTEs much, but I think it goes something like:\n\nWITH someids AS (\n\n (SELECT page_id FROM mediawiki.page WHERE page_id IN\n (SELECT page_id FROM mediawiki.page\n WHERE (titlevector @@ (to_tsquery('fotbal'))))\n OR page_id IN\n (SELECT p.page_id from mediawiki.page p,mediawiki.revision r,\n (SELECT old_id FROM mediawiki.pagecontent\n WHERE (textvector @@ (to_tsquery('fotbal')))) ss\n WHERE (p.page_id=r.rev_page AND r.rev_id=ss.old_id)))\n\n)\nSELECT pa.page_id, pa.page_title,\nts_rank(pc.textvector,(to_tsquery('fotbal')))+ts_rank(pa.titlevector,(to_tsquery('fotbal')))*10\nas totalrank\n from mediawiki.page pa, mediawiki.revision re, mediawiki.pagecontent pc\n WHERE pa.page_id in someids\n AND (pa.page_id=re.rev_page AND re.rev_id=pc.old_id)\n ORDER BY totalrank LIMIT 100;\n",
"msg_date": "Tue, 3 May 2011 09:21:56 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query improvement"
},
{
"msg_contents": "\n> On Mon, May 2, 2011 at 10:54 PM, Mark <[email protected]> wrote:\n> > but the result have been worst than before. By the way is there a\nposibility\n> > to create beeter query with same effect?\n> > I have tried more queries, but this has got best performance yet.\n> \n> Well, this seems to be the worst part:\n> \n> (SELECT page_id FROM mediawiki.page WHERE page_id IN\n> (SELECT page_id FROM mediawiki.page\n> WHERE (titlevector @@ (to_tsquery('fotbal'))))\n> OR page_id IN\n> (SELECT p.page_id from mediawiki.page\np,mediawiki.revision r,\n> (SELECT old_id FROM mediawiki.pagecontent\n> WHERE (textvector @@ (to_tsquery('fotbal')))) ss\n> WHERE (p.page_id=r.rev_page AND r.rev_id=ss.old_id)))\n> \n \n\n'OR' statements often generate complicated plans. You should try to\nrewrite your Query with a n UNION clause.\nUsing explicit joins may also help the planner:\n \nSELECT page_id \nFROM mediawiki.page\nWHERE (titlevector @@ (to_tsquery('fotbal')))\n\nUNION \n\nSELECT p.page_id \nFROM mediawiki.page p \n JOIN mediawiki.revision r on (p.page_id=r.rev_page)\n JOIN mediawiki.pagecontent ss on (r.rev_id=ss.old_id)\nWHERE (ss.textvector @@ (to_tsquery('fotbal')))\n\nHTH,\n\nMarc Mamin\n\n",
"msg_date": "Tue, 3 May 2011 11:14:01 +0200",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query improvement"
},
{
"msg_contents": "Thanks for reply both UNION and JOINS helped. Mainly the UNION helped a lot.\nNow the query takes 1sec max. Thanks a lot. \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-improvement-tp4362578p4378157.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sat, 7 May 2011 03:37:02 -0700 (PDT)",
"msg_from": "Mark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query improvement"
},
{
"msg_contents": "Thanks a lot for reply. Finally I have used UNION, but thanks for your help.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-improvement-tp4362578p4378160.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sat, 7 May 2011 03:38:09 -0700 (PDT)",
"msg_from": "Mark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query improvement"
},
{
"msg_contents": "Thanks for replies. Finally I have used UNION and JOINS, which helped. Mainly\nthe UNION helped a lot. Now the query takes 1sec max. Thanks a lot. \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Query-improvement-tp4362578p4378163.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sat, 7 May 2011 03:39:52 -0700 (PDT)",
"msg_from": "Mark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query improvement"
},
{
"msg_contents": "On Mon, May 2, 2011 at 3:58 AM, Claudio Freire <[email protected]> wrote:\n> Hash joins are very inefficient if they require big temporary files.\n\nHmm, that's not been my experience. What have you seen?\n\nI've seen a 64-batch hash join beat out a\nnested-loop-with-inner-indexscan, which I never woulda believed,\nbut...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 13 May 2011 22:28:51 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query improvement"
}
] |
[
{
"msg_contents": "Hi,\n\nI tried with the PostgreSQL 9.0.4 + Hot Standby and running the database\nfrom Fusion IO Drive to understand the PG Performance.\n\nWhile doing so I got the \"*Query failed ERROR: catalog is missing 1\nattribute(s) for relid 172226*\". Any idea on this error? Is that combination\nPG + HotSB + Fusion IO Drive is not advisable?!\n\nRegards,\n\nSethu Prasad. G.\n\nHi,I tried with the PostgreSQL 9.0.4 + Hot Standby and running the database from Fusion IO Drive to understand the PG Performance.While doing so I got the \"Query failed ERROR: catalog is missing 1 attribute(s) for relid 172226\". Any idea on this error? Is that combination PG + HotSB + Fusion IO Drive is not advisable?!\nRegards,Sethu Prasad. G.",
"msg_date": "Tue, 3 May 2011 11:02:59 +0200",
"msg_from": "Sethu Prasad <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 9.0.4 + Hot Standby + FusionIO Drive + Performance => Query\n\tfailed ERROR: catalog is missing 1 attribute(s) for relid 172226"
},
{
"msg_contents": "\n> While doing so I got the \"*Query failed ERROR: catalog is missing 1\n> attribute(s) for relid 172226*\". Any idea on this error? Is that combination\n> PG + HotSB + Fusion IO Drive is not advisable?!\n\nWhat were you doing when you got this error?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 04 May 2011 11:44:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.0.4 + Hot Standby + FusionIO Drive + Performance\n\t=> Query failed ERROR: catalog is missing 1 attribute(s) for relid\n\t172226"
},
{
"msg_contents": "I did the hot standby configured earlier and at that time I started\nusing(querying) the standby database.\n\nMay be something missed on the archive command.\n\n\nOn Wed, May 4, 2011 at 8:44 PM, Josh Berkus <[email protected]> wrote:\n\n>\n> > While doing so I got the \"*Query failed ERROR: catalog is missing 1\n> > attribute(s) for relid 172226*\". Any idea on this error? Is that\n> combination\n> > PG + HotSB + Fusion IO Drive is not advisable?!\n>\n> What were you doing when you got this error?\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI did the hot standby configured earlier and at that time I started using(querying) the standby database.May be something missed on the archive command.On Wed, May 4, 2011 at 8:44 PM, Josh Berkus <[email protected]> wrote:\n\n> While doing so I got the \"*Query failed ERROR: catalog is missing 1\n> attribute(s) for relid 172226*\". Any idea on this error? Is that combination\n> PG + HotSB + Fusion IO Drive is not advisable?!\n\nWhat were you doing when you got this error?\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 5 May 2011 09:47:41 +0200",
"msg_from": "Sethu Prasad <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 9.0.4 + Hot Standby + FusionIO Drive +\n\tPerformance => Query failed ERROR: catalog is missing 1 attribute(s)\n\tfor relid 172226"
},
{
"msg_contents": "On Tue, May 3, 2011 at 10:02 AM, Sethu Prasad <[email protected]> wrote:\n\n> I tried with the PostgreSQL 9.0.4 + Hot Standby and running the database\n> from Fusion IO Drive to understand the PG Performance.\n>\n> While doing so I got the \"Query failed ERROR: catalog is missing 1\n> attribute(s) for relid 172226\". Any idea on this error? Is that combination\n> PG + HotSB + Fusion IO Drive is not advisable?!\n\nWhy I wonder do you think this might have anything to do with Hot\nStandby and/or FusionIO drives?\n\nThis indicates either catalog or catalog index corruption of some kind.\n\nDid you only get this error once?\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n",
"msg_date": "Sun, 8 May 2011 10:08:20 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.0.4 + Hot Standby + FusionIO Drive +\n\tPerformance => Query failed ERROR: catalog is missing 1 attribute(s)\n\tfor relid 172226"
},
{
"msg_contents": "On 5/5/11 12:47 AM, Sethu Prasad wrote:\n> I did the hot standby configured earlier and at that time I started\n> using(querying) the standby database.\n> \n> May be something missed on the archive command.\n\nMost likely, yes. PostgreSQL shouldn't start up under such\ncircumstances, but apparently you fooled it.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Mon, 09 May 2011 16:30:38 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.0.4 + Hot Standby + FusionIO Drive + Performance\n\t=> Query failed ERROR: catalog is missing 1 attribute(s) for relid\n\t172226"
},
{
"msg_contents": "Yes it has something to do with Hot Standby, if you omit some parts on the\narchive then the standby instance will not have the necessary stuff and\ncomplain like this..\n\nI kept the FusionIO drive in my checklist while attending to this issue, as\nwe tried it looking for performance combined with read-only hot standby and\nin doubt I thought that the recovery is not successful on this drive safely.\nso I pointed that Fio Drive here.\n\nStraight to say, I missed the pg_clog directory on archive.\n\nseq_page_cost = 1.0\n\nrandom_page_cost = 1.0\n\nIs the above settings are fine when we deal with Fio and Performance, as I\nhave the advice earlier stating that read and write are treated same with\nFio drives.\n\nAny suggestions on configuration changes to have read-only hot standby\nfaster on READs.\n\n- Sethu\n\n\nOn Sun, May 8, 2011 at 11:08 AM, Simon Riggs <[email protected]> wrote:\n\n> On Tue, May 3, 2011 at 10:02 AM, Sethu Prasad <[email protected]>\n> wrote:\n>\n> > I tried with the PostgreSQL 9.0.4 + Hot Standby and running the database\n> > from Fusion IO Drive to understand the PG Performance.\n> >\n> > While doing so I got the \"Query failed ERROR: catalog is missing 1\n> > attribute(s) for relid 172226\". Any idea on this error? Is that\n> combination\n> > PG + HotSB + Fusion IO Drive is not advisable?!\n>\n> Why I wonder do you think this might have anything to do with Hot\n> Standby and/or FusionIO drives?\n>\n> This indicates either catalog or catalog index corruption of some kind.\n>\n> Did you only get this error once?\n>\n> --\n> Simon Riggs http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\nYes it has something to do with Hot Standby, if you omit some parts on the archive then the standby instance will not have the necessary stuff and complain like this.. I kept the FusionIO drive in my checklist while attending to this issue, as we tried it looking for performance combined with read-only hot standby and in doubt I thought that the recovery is not successful on this drive safely. so I pointed that Fio Drive here.\nStraight to say, I missed the pg_clog directory on archive.\nseq_page_cost = 1.0\nrandom_page_cost = 1.0\nIs the above settings are fine when we deal with Fio and Performance, as I have the advice earlier stating that read and write are treated same with Fio drives.Any suggestions on configuration changes to have read-only hot standby faster on READs.\n- SethuOn Sun, May 8, 2011 at 11:08 AM, Simon Riggs <[email protected]> wrote:\nOn Tue, May 3, 2011 at 10:02 AM, Sethu Prasad <[email protected]> wrote:\n\n> I tried with the PostgreSQL 9.0.4 + Hot Standby and running the database\n> from Fusion IO Drive to understand the PG Performance.\n>\n> While doing so I got the \"Query failed ERROR: catalog is missing 1\n> attribute(s) for relid 172226\". Any idea on this error? Is that combination\n> PG + HotSB + Fusion IO Drive is not advisable?!\n\nWhy I wonder do you think this might have anything to do with Hot\nStandby and/or FusionIO drives?\n\nThis indicates either catalog or catalog index corruption of some kind.\n\nDid you only get this error once?\n\n--\n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 10 May 2011 09:23:13 +0200",
"msg_from": "Sethu Prasad <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres 9.0.4 + Hot Standby + FusionIO Drive +\n\tPerformance => Query failed ERROR: catalog is missing 1 attribute(s)\n\tfor relid 172226"
},
{
"msg_contents": "On Tue, May 10, 2011 at 3:23 AM, Sethu Prasad <[email protected]> wrote:\n> Yes it has something to do with Hot Standby, if you omit some parts on the\n> archive then the standby instance will not have the necessary stuff and\n> complain like this..\n\nIf you omit some parts of the archive, it won't start at all. To get\nit to complain like this, you need something more than accidental\nmisconfiguration.\n\n> I kept the FusionIO drive in my checklist while attending to this issue, as\n> we tried it looking for performance combined with read-only hot standby and\n> in doubt I thought that the recovery is not successful on this drive safely.\n> so I pointed that Fio Drive here.\n>\n> Straight to say, I missed the pg_clog directory on archive.\n>\n> seq_page_cost = 1.0\n>\n> random_page_cost = 1.0\n>\n> Is the above settings are fine when we deal with Fio and Performance, as I\n> have the advice earlier stating that read and write are treated same with\n> Fio drives.\n\nI would think more like 0.1 than 1.0.\n\n> Any suggestions on configuration changes to have read-only hot standby\n> faster on READs.\n\neffective_io_concurrency?\n\nAdjust OS readahead?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Sun, 15 May 2011 16:34:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.0.4 + Hot Standby + FusionIO Drive +\n\tPerformance => Query failed ERROR: catalog is missing 1 attribute(s)\n\tfor relid 172226"
}
] |
[
{
"msg_contents": "Hi,\n\nOur database has gotten rather large and we are running out of disk space.\nour disks are 15K rpm SAS disks in RAID 10.\n\nWe are going to rent some space on a FibreChannel SAN.\nThat gives us the opportunity to separate the data and the indexes.\nNow i thought it would be best to move the indexes to the SAN and leave the\ndata on the disks, since the disks are better at sequential I/O and the SAN\nwill have lots of random I/O since there are lots of users on it.\n\nIs that a wise thing to do?\n\nCheers,\n\nWBL\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nHi,Our database has gotten rather large and we are running out of disk space.our disks are 15K rpm SAS disks in RAID 10.We are going to rent some space on a FibreChannel SAN.\nThat gives us the opportunity to separate the data and the indexes.Now i thought it would be best to move the indexes to the SAN and leave the data on the disks, since the disks are better at sequential I/O and the SAN will have lots of random I/O since there are lots of users on it.\nIs that a wise thing to do?Cheers,WBL-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Tue, 3 May 2011 17:52:23 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On 2011-05-03 17:52, Willy-Bas Loos wrote:\n> Our database has gotten rather large and we are running out of disk space.\n> our disks are 15K rpm SAS disks in RAID 10.\n>\n> We are going to rent some space on a FibreChannel SAN.\n> That gives us the opportunity to separate the data and the indexes.\n> Now i thought it would be best to move the indexes to the SAN and leave the\n> data on the disks, since the disks are better at sequential I/O and the SAN\n> will have lots of random I/O since there are lots of users on it.\n>\n> Is that a wise thing to do?\n\nIf you're satisfied with the current performance then it should be safe\nto keep the indices and move the data, the risk of the SAN performing\nworse on sequential I/O is not that high. But without testing and\nknowledge about the SAN then it is hard to say if what you currently\nhave is better or worse than the SAN. The vendor may have a \"way better \nsan\",\nbut is may also be shared among 200 other hosts connected over iSCSI or FC\nso your share may be even worse than what you currently have.\n\nWithout insight and testing is it hard to guess. I've pretty much come\nto the conclusion of going the DAS way every time, but it all depends on\nwhat your end looks like.\n\n-- \nJesper\n",
"msg_date": "Wed, 04 May 2011 06:43:20 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best\n to move"
},
{
"msg_contents": "are you saying that, generally speaking, moving the data would be better\nunless the SAN performs worse than the disks?\nbesides your point that it depends on what our end looks like i mean.\n(and what do you mean by \"the DAS way\", sry no native speaker)\n\ncheers,\n\nwbl\n\nOn Wed, May 4, 2011 at 6:43 AM, Jesper Krogh <[email protected]> wrote:\n\n>\n> If you're satisfied with the current performance then it should be safe\n> to keep the indices and move the data, the risk of the SAN performing\n> worse on sequential I/O is not that high. But without testing and\n> knowledge about the SAN then it is hard to say if what you currently\n> have is better or worse than the SAN. The vendor may have a \"way better\n> san\",\n> but is may also be shared among 200 other hosts connected over iSCSI or FC\n> so your share may be even worse than what you currently have.\n>\n> Without insight and testing is it hard to guess. I've pretty much come\n> to the conclusion of going the DAS way every time, but it all depends on\n> what your end looks like.\n>\n> --\n> Jesper\n>\n\n\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nare you saying that, generally speaking, moving the data would be better unless the SAN performs worse than the disks?besides your point that it depends on what our end looks like i mean.(and what do you mean by \"the DAS way\", sry no native speaker)\ncheers,wblOn Wed, May 4, 2011 at 6:43 AM, Jesper Krogh <[email protected]> wrote:\n\nIf you're satisfied with the current performance then it should be safe\nto keep the indices and move the data, the risk of the SAN performing\nworse on sequential I/O is not that high. But without testing and\nknowledge about the SAN then it is hard to say if what you currently\nhave is better or worse than the SAN. The vendor may have a \"way better san\",\nbut is may also be shared among 200 other hosts connected over iSCSI or FC\nso your share may be even worse than what you currently have.\n\nWithout insight and testing is it hard to guess. I've pretty much come\nto the conclusion of going the DAS way every time, but it all depends on\nwhat your end looks like.\n\n-- \nJesper\n-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Wed, 4 May 2011 07:25:02 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On 2011-05-04 07:25, Willy-Bas Loos wrote:\n> are you saying that, generally speaking, moving the data would be better\n> unless the SAN performs worse than the disks?\nIt was more, \"given all the incertainties, that seems like the least \nrisky\".\nThe SAN might actually be less well performing than what you currently\nhave, you dont know yet I guess?\n\n> besides your point that it depends on what our end looks like i mean.\n> (and what do you mean by \"the DAS way\", sry no native speaker)\nDAS way => A disk array where the database has sole access\nto the hardware (not shared among other systems). Examples\nare Dell MD1200/1220 or similary.\n\n-- \nJesper\n\n",
"msg_date": "Wed, 04 May 2011 07:33:25 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best\n to move"
},
{
"msg_contents": "I'm asking them for (real) benchmarks, thanks for the advice. (fio is not\navailable for us now to do it myself, grmbl)\n\nIt just occurred to me that it is not necessarily the case that reading the\nindexes causes a lot of random I/O (on the index itself).\nI mean, maybe the index is generally read sequentially and then, when\nretrieving the data, there is a lot of random I/O.\n\nif it's a long story, any tips for info about this (book or web site)?\n\ncheers,\n\nwbl\n\nOn Wed, May 4, 2011 at 7:33 AM, Jesper Krogh <[email protected]> wrote:\n\n> On 2011-05-04 07:25, Willy-Bas Loos wrote:\n>\n>> are you saying that, generally speaking, moving the data would be better\n>> unless the SAN performs worse than the disks?\n>>\n> It was more, \"given all the incertainties, that seems like the least\n> risky\".\n> The SAN might actually be less well performing than what you currently\n> have, you dont know yet I guess?\n>\n>\n> besides your point that it depends on what our end looks like i mean.\n>> (and what do you mean by \"the DAS way\", sry no native speaker)\n>>\n> DAS way => A disk array where the database has sole access\n> to the hardware (not shared among other systems). Examples\n> are Dell MD1200/1220 or similary.\n>\n> --\n> Jesper\n>\n>\n\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nI'm asking them for (real) benchmarks, thanks for the advice. (fio is not available for us now to do it myself, grmbl)It just occurred to me that it is not necessarily the case that reading the indexes causes a lot of random I/O (on the index itself).\nI mean, maybe the index is generally read sequentially and then, when retrieving the data, there is a lot of random I/O.if it's a long story, any tips for info about this (book or web site)?\ncheers,wblOn Wed, May 4, 2011 at 7:33 AM, Jesper Krogh <[email protected]> wrote:\nOn 2011-05-04 07:25, Willy-Bas Loos wrote:\n\nare you saying that, generally speaking, moving the data would be better\nunless the SAN performs worse than the disks?\n\nIt was more, \"given all the incertainties, that seems like the least risky\".\nThe SAN might actually be less well performing than what you currently\nhave, you dont know yet I guess?\n\n\nbesides your point that it depends on what our end looks like i mean.\n(and what do you mean by \"the DAS way\", sry no native speaker)\n\nDAS way => A disk array where the database has sole access\nto the hardware (not shared among other systems). Examples\nare Dell MD1200/1220 or similary.\n\n-- \nJesper\n\n-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Wed, 4 May 2011 12:31:29 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On Wed, May 4, 2011 at 6:31 AM, Willy-Bas Loos <[email protected]> wrote:\n> I'm asking them for (real) benchmarks, thanks for the advice. (fio is not\n> available for us now to do it myself, grmbl)\n> It just occurred to me that it is not necessarily the case that reading the\n> indexes causes a lot of random I/O (on the index itself).\n> I mean, maybe the index is generally read sequentially and then, when\n> retrieving the data, there is a lot of random I/O.\n> if it's a long story, any tips for info about this (book or web site)?\n\nIf you don't do anything special, and if the query plan says \"Index\nScan\" rather than \"Bitmap Index Scan\", then both the index I/O and the\ntable I/O are likely to be fairly random. However there are a number\nof cases in which you can expect the table I/O to be sequential:\n\n- In some cases, you may happen to insert rows with an ordering that\nmatches the index. For example, if you have a table with not too many\nupdates and deletes, and an index on a serial column, then new rows\nwill have a higher value in that column than old rows, and will also\ntypically be physically after older rows in the file. Or you might be\ninserting timestamped data from oldest to newest.\n- If the planner chooses a Bitmap Index Scan, it effectively scans the\nindex to figure out which table blocks to read, and then reads those\ntable blocks in block number order, so that the I/O is sequential,\nwith skips.\n- If you CLUSTER the table on a particular index, it will be\nphysically ordered to match the index's key ordering. As the table is\nfurther modified the degree of clustering will gradually decline;\neventually you may wish to re-CLUSTER.\n\nIt's also worth keeping in mind that the index itself won't\nnecessarily be accessed in physically sequential order. The point of\nthe index is to emit the rows in key order, but if the table is\nheavily updated, it won't necessarily be the case that a page\ncontaining lower-valued keys physically precedes a page containing\nhigher-valued keys. I'm actually somewhat fuzzy on how this works,\nand to what extent it's a problem in practice, but I am fairly sure it\ncan happen.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 13 May 2011 15:04:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On Fri, May 13, 2011 at 9:04 PM, Robert Haas <[email protected]> wrote:\n> On Wed, May 4, 2011 at 6:31 AM, Willy-Bas Loos <[email protected]> wrote:\n>> I'm asking them for (real) benchmarks, thanks for the advice. (fio is not\n>> available for us now to do it myself, grmbl)\n>> It just occurred to me that it is not necessarily the case that reading the\n>> indexes causes a lot of random I/O (on the index itself).\n>> I mean, maybe the index is generally read sequentially and then, when\n>> retrieving the data, there is a lot of random I/O.\n>> if it's a long story, any tips for info about this (book or web site)?\n>\n> If you don't do anything special, and if the query plan says \"Index\n> Scan\" rather than \"Bitmap Index Scan\", then both the index I/O and the\n> table I/O are likely to be fairly random. However there are a number\n> of cases in which you can expect the table I/O to be sequential:\n>\n> - In some cases, you may happen to insert rows with an ordering that\n> matches the index. For example, if you have a table with not too many\n> updates and deletes, and an index on a serial column, then new rows\n> will have a higher value in that column than old rows, and will also\n> typically be physically after older rows in the file. Or you might be\n> inserting timestamped data from oldest to newest.\n> - If the planner chooses a Bitmap Index Scan, it effectively scans the\n> index to figure out which table blocks to read, and then reads those\n> table blocks in block number order, so that the I/O is sequential,\n> with skips.\n\nAre these two separate phases (i.e. first scan index completely, then\naccess table)?\n\n> - If you CLUSTER the table on a particular index, it will be\n> physically ordered to match the index's key ordering. As the table is\n> further modified the degree of clustering will gradually decline;\n> eventually you may wish to re-CLUSTER.\n>\n> It's also worth keeping in mind that the index itself won't\n> necessarily be accessed in physically sequential order. The point of\n> the index is to emit the rows in key order, but if the table is\n> heavily updated, it won't necessarily be the case that a page\n> containing lower-valued keys physically precedes a page containing\n> higher-valued keys. I'm actually somewhat fuzzy on how this works,\n> and to what extent it's a problem in practice, but I am fairly sure it\n> can happen.\n\nSeparating index and tables might not be a totally good idea\ngenerally. Richard Foote has an excellent article about Oracle but I\nassume at least a few things do apply to PostgreSQL as well - it's at\nleast worth as something to check PostgreSQL's access patterns\nagainst:\n\nhttp://richardfoote.wordpress.com/2008/04/16/separate-indexes-from-tables-some-thoughts-part-i-everything-in-its-right-place/\n\nI would probably rather try to separate data by the nature and\nfrequency of accesses. One reasonable separation would be to leave\nall frequently accessed tables *and* their indexes on local RAID and\nmoving less frequently accessed data to the SAN. This separation\ncould be easily identified if you have separate tables for current and\nhistoric data.\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 16 May 2011 10:19:47 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On Mon, May 16, 2011 at 4:19 AM, Robert Klemme\n<[email protected]> wrote:\n>> - If the planner chooses a Bitmap Index Scan, it effectively scans the\n>> index to figure out which table blocks to read, and then reads those\n>> table blocks in block number order, so that the I/O is sequential,\n>> with skips.\n>\n> Are these two separate phases (i.e. first scan index completely, then\n> access table)?\n\nYes.\n\n> Separating index and tables might not be a totally good idea\n> generally. Richard Foote has an excellent article about Oracle but I\n> assume at least a few things do apply to PostgreSQL as well - it's at\n> least worth as something to check PostgreSQL's access patterns\n> against:\n>\n> http://richardfoote.wordpress.com/2008/04/16/separate-indexes-from-tables-some-thoughts-part-i-everything-in-its-right-place/\n>\n> I would probably rather try to separate data by the nature and\n> frequency of accesses. One reasonable separation would be to leave\n> all frequently accessed tables *and* their indexes on local RAID and\n> moving less frequently accessed data to the SAN. This separation\n> could be easily identified if you have separate tables for current and\n> historic data.\n\nYeah, I think the idea of putting tables and indexes in separate\ntablespaces is mostly to bring more I/O bandwidth to bear on the same\ndata. But there are other reasonable things you might do also - e.g.\nput the indexes on an SSD, and the tables on a spinning disk, figuring\nthat the SSD is less reliable but you can always rebuild the index if\nyou need to...\n\nAlso, a lot of people have reported big speedups from putting pg_xlog\non a dedicated RAID 1 pair, or moving the PostgreSQL logs off the data\npartition. So those sorts of divisions should be considered also.\nYour idea of dividing things by access frequency is another good\nthought.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 16 May 2011 10:31:51 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On Mon, May 16, 2011 at 4:31 PM, Robert Haas <[email protected]> wrote:\n> On Mon, May 16, 2011 at 4:19 AM, Robert Klemme\n> <[email protected]> wrote:\n>>> - If the planner chooses a Bitmap Index Scan, it effectively scans the\n>>> index to figure out which table blocks to read, and then reads those\n>>> table blocks in block number order, so that the I/O is sequential,\n>>> with skips.\n>>\n>> Are these two separate phases (i.e. first scan index completely, then\n>> access table)?\n>\n> Yes.\n\nSo then a single query will only ever access one of both at a time.\n\n>> Separating index and tables might not be a totally good idea\n>> generally. Richard Foote has an excellent article about Oracle but I\n>> assume at least a few things do apply to PostgreSQL as well - it's at\n>> least worth as something to check PostgreSQL's access patterns\n>> against:\n>>\n>> http://richardfoote.wordpress.com/2008/04/16/separate-indexes-from-tables-some-thoughts-part-i-everything-in-its-right-place/\n>>\n>> I would probably rather try to separate data by the nature and\n>> frequency of accesses. One reasonable separation would be to leave\n>> all frequently accessed tables *and* their indexes on local RAID and\n>> moving less frequently accessed data to the SAN. This separation\n>> could be easily identified if you have separate tables for current and\n>> historic data.\n>\n> Yeah, I think the idea of putting tables and indexes in separate\n> tablespaces is mostly to bring more I/O bandwidth to bear on the same\n> data.\n\nRichard commented on that as well, I believe it was in\nhttp://richardfoote.wordpress.com/2008/04/18/separate-indexes-from-tables-some-thoughts-part-ii-there-there/\n\nThe main point is that you do not benefit from the larger IO bandwidth\nif access patterns do not permit parallel access to both disks (e.g.\nbecause you first need to read index blocks in order to know the table\nblocks to read). The story might be different though if you have a\nlot of concurrent accesses. But even then, if the table is a hotspot\nchances are that index blocks are cached and you only need physical IO\nfor table blocks...\n\n> But there are other reasonable things you might do also - e.g.\n> put the indexes on an SSD, and the tables on a spinning disk, figuring\n> that the SSD is less reliable but you can always rebuild the index if\n> you need to...\n\nRichard commented on that theory as well:\nhttp://richardfoote.wordpress.com/2008/05/02/indexes-in-their-own-tablespace-recoverability-advantages-get-back/\n\nThe point: if you do the math you might figure that lost indexes lead\nto so much downtime that you don't want to risk that and the rebuild\nisn't all that simple (in terms of time). For a reasonable sized\ndatabase recovery might be significantly faster than rebuilding.\n\n> Also, a lot of people have reported big speedups from putting pg_xlog\n> on a dedicated RAID 1 pair, or moving the PostgreSQL logs off the data\n> partition. So those sorts of divisions should be considered also.\n\nNow, this is something I'd seriously consider because access patterns\nto pg_xlog are vastly different than those of table and index data!\nSo you want to have pg_xlog on a device with high reliability and high\nwrite speed.\n\n> Your idea of dividing things by access frequency is another good\n> thought.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Tue, 17 May 2011 09:00:31 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On 05/17/2011 03:00 PM, Robert Klemme wrote:\n\n> The main point is that you do not benefit from the larger IO bandwidth\n> if access patterns do not permit parallel access to both disks (e.g.\n> because you first need to read index blocks in order to know the table\n> blocks to read).\n\nThis makes me wonder if Pg attempts to pre-fetch blocks of interest for \nareas where I/O needs can be known in advance, while there's still other \nworks or other I/O to do. For example, pre-fetching for the next \niteration of a nested loop while still executing the prior one. Is it \neven possible?\n\nI'm guessing not, because (AFAIK) Pg uses only synchronous blocking I/O, \nand with that there isn't really a way to pre-fetch w/o threads or \nhelper processes. Linux (at least) supports buffered async I/O, so it'd \nbe possible to submit such prefetch requests ... on modern Linux \nkernels. Portably doing so, though - not so much.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 17 May 2011 17:47:09 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best\n to move"
},
{
"msg_contents": "2011/5/17 Craig Ringer <[email protected]>:\n> On 05/17/2011 03:00 PM, Robert Klemme wrote:\n>\n>> The main point is that you do not benefit from the larger IO bandwidth\n>> if access patterns do not permit parallel access to both disks (e.g.\n>> because you first need to read index blocks in order to know the table\n>> blocks to read).\n>\n> This makes me wonder if Pg attempts to pre-fetch blocks of interest for\n> areas where I/O needs can be known in advance, while there's still other\n> works or other I/O to do. For example, pre-fetching for the next iteration\n> of a nested loop while still executing the prior one. Is it even possible?\n>\n> I'm guessing not, because (AFAIK) Pg uses only synchronous blocking I/O, and\n> with that there isn't really a way to pre-fetch w/o threads or helper\n> processes. Linux (at least) supports buffered async I/O, so it'd be possible\n> to submit such prefetch requests ... on modern Linux kernels. Portably doing\n> so, though - not so much.\n\nPrefetching is used in bitmapheapscan. The GUC\neffeective_io_concurrency allow you increase the prefetch window.\n\n>\n> --\n> Craig Ringer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Tue, 17 May 2011 12:13:21 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On Tue, May 17, 2011 at 11:47 AM, Craig Ringer\n<[email protected]> wrote:\n> On 05/17/2011 03:00 PM, Robert Klemme wrote:\n>\n>> The main point is that you do not benefit from the larger IO bandwidth\n>> if access patterns do not permit parallel access to both disks (e.g.\n>> because you first need to read index blocks in order to know the table\n>> blocks to read).\n>\n> This makes me wonder if Pg attempts to pre-fetch blocks of interest for\n> areas where I/O needs can be known in advance, while there's still other\n> works or other I/O to do. For example, pre-fetching for the next iteration\n> of a nested loop while still executing the prior one. Is it even possible?\n>\n> I'm guessing not, because (AFAIK) Pg uses only synchronous blocking I/O, and\n> with that there isn't really a way to pre-fetch w/o threads or helper\n> processes. Linux (at least) supports buffered async I/O, so it'd be possible\n> to submit such prefetch requests ... on modern Linux kernels. Portably doing\n> so, though - not so much.\n\nThere is a much more serious obstacle than the mere technical (if that\nwas one at all): prefetching is only reasonable if you can predict\nwhich data you need with high probability (say >= 80%). If you can't\nyou'll have much more IO than without prefetching and overall\nperformance likely suffers. Naturally that probability depends on the\ndata at hand and the access pattern. As Cédric wrote, there seems to\nbe at least one case where it's done.\n\nCheers\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Tue, 17 May 2011 15:29:51 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best to move"
},
{
"msg_contents": "On 05/17/2011 05:47 AM, Craig Ringer wrote:\n> This makes me wonder if Pg attempts to pre-fetch blocks of interest \n> for areas where I/O needs can be known in advance, while there's still \n> other works or other I/O to do. For example, pre-fetching for the next \n> iteration of a nested loop while still executing the prior one. Is it \n> even possible?\n\nWell, remember that a nested loop isn't directly doing any I/O. It's \npulling rows from some lower level query node. So the useful question \nto ask is \"how can pre-fetch speed up the table access methods?\" That \nworked out like this:\n\nSequential Scan: logic here was added and measured as useful for one \nsystem with terrible I/O. Everywhere else it was tried on Linux, the \nread-ahead logic in the kernel seems to make this redundant. Punted as \ntoo much complexity relative to measured average gain. You can try to \ntweak this on a per-file database in an application, but the kernel has \nalmost as much information to make that decision usefully as the \ndatabase does.\n\nIndex Scan: It's hard to know what you're going to need in advance here \nand pipeline the reads, so this hasn't really been explored yet.\n\nBitmap heap scan: Here, the exact list of blocks to fetch is known in \nadvance, they're random, and it's quite possible for the kernel to \nschedule them more efficiently than serial access of them can do. This \nwas added as the effective_io_concurrency feature (it's the only thing \nthat feature impacts), which so far is only proven to work on Linux. \nAny OS implementing the POSIX API used will also get this however; \nFreeBSD was the next likely candidate that might benefit when I last \nlooked around.\n\n> I'm guessing not, because (AFAIK) Pg uses only synchronous blocking \n> I/O, and with that there isn't really a way to pre-fetch w/o threads \n> or helper processes. Linux (at least) supports buffered async I/O, so \n> it'd be possible to submit such prefetch requests ... on modern Linux \n> kernels. Portably doing so, though - not so much.\n\nLinux supports the POSIX_FADV_WILLNEED advisory call, which is perfect \nfor suggesting what blocks will be accessed in the near future in the \nbitmap heap scan case. That's how effective_io_concurrency works.\n\nBoth Solaris and Linux also have async I/O mechanisms that could be used \ninstead. Greg Stark built a prototype and there's an obvious speed-up \nthere to be had. But the APIs for this aren't very standard, and it's \nreally hard to rearchitect the PostgreSQL buffer manager to operate in a \nless synchronous way. Hoping that more kernels support the \"will need\" \nAPI usefully, which meshes very well with how PostgreSQL thinks about \nthe problem, is where things are at right now. With so many bigger \nPostgreSQL sites on Linux, that's worked out well so far.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 24 May 2011 14:48:52 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best\n to move"
},
{
"msg_contents": "24.05.11 21:48, Greg Smith написав(ла):\n>\n> Bitmap heap scan: Here, the exact list of blocks to fetch is known in \n> advance, they're random, and it's quite possible for the kernel to \n> schedule them more efficiently than serial access of them can do. This \n> was added as the effective_io_concurrency feature (it's the only thing \n> that feature impacts), which so far is only proven to work on Linux. \n> Any OS implementing the POSIX API used will also get this however; \n> FreeBSD was the next likely candidate that might benefit when I last \n> looked around.\nFreeBSD unfortunately do not have the support :(\nIt has AIO, but does not have the call needed to enable this settings.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Wed, 25 May 2011 11:51:57 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORMANCE] expanding to SAN: which portion best\n to move"
},
{
"msg_contents": "On Mon, May 16, 2011 at 10:19 AM, Robert Klemme\n<[email protected]>wrote:\n\n> On Fri, May 13, 2011 at 9:04 PM, Robert Haas <[email protected]>\n> wrote:\n> Separating index and tables might not be a totally good idea\n> generally. Richard Foote has an excellent article about Oracle but I\n> assume at least a few things do apply to PostgreSQL as well - it's at\n> least worth as something to check PostgreSQL's access patterns\n> against:\n>\n>\n> http://richardfoote.wordpress.com/2008/04/16/separate-indexes-from-tables-some-thoughts-part-i-everything-in-its-right-place/\n>\n> I would probably rather try to separate data by the nature and\n> frequency of accesses. One reasonable separation would be to leave\n> all frequently accessed tables *and* their indexes on local RAID and\n> moving less frequently accessed data to the SAN. This separation\n> could be easily identified if you have separate tables for current and\n> historic data.\n>\n> Well, after reading your article i have been reading some materail about it\non the internet, stating that separating indexes from data for performance\nbenefits is a myth.\nI found your comment \"So then a single query will only ever access one of\nboth at a time.\" very smart (no sarcasm there).\nI also found a thread<http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:901906930328>on\nAskTom that said mainly \"the goal is to achieve even io.\" (that makes\nabsolute sense)\n\nIn my situation, where i need extra space on a SAN, it seems logical to\nseparate the tables from the indexes, to achieve just that: roughly even\nIO.. (put tables on san, leave indexes on raid10 cluster)\nOr am i being silly?\n\nCheers,\n\nWBL\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nOn Mon, May 16, 2011 at 10:19 AM, Robert Klemme <[email protected]> wrote:\nOn Fri, May 13, 2011 at 9:04 PM, Robert Haas <[email protected]> wrote:Separating index and tables might not be a totally good idea\ngenerally. Richard Foote has an excellent article about Oracle but I\nassume at least a few things do apply to PostgreSQL as well - it's at\nleast worth as something to check PostgreSQL's access patterns\nagainst:\n\nhttp://richardfoote.wordpress.com/2008/04/16/separate-indexes-from-tables-some-thoughts-part-i-everything-in-its-right-place/\n\nI would probably rather try to separate data by the nature and\nfrequency of accesses. One reasonable separation would be to leave\nall frequently accessed tables *and* their indexes on local RAID and\nmoving less frequently accessed data to the SAN. This separation\ncould be easily identified if you have separate tables for current and\nhistoric data.\nWell, after reading your article i have been reading some materail about it on the internet, stating that separating indexes from data for performance benefits is a myth.\nI found your comment \"So then a single query will only ever access one of both at a time.\" very smart (no sarcasm there).\nI also found a thread on AskTom that said mainly \"the goal is to achieve even io.\" (that makes absolute sense)\nIn my situation, where i need extra space on a SAN, it seems logical to separate the tables from the indexes, to achieve just that: roughly even IO.. (put tables on san, leave indexes on raid10 cluster)\nOr am i being silly?\nCheers,\nWBL-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Thu, 9 Jun 2011 13:43:26 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] [PERFORMANCE] expanding to SAN: which portion best to\n\tmove"
},
{
"msg_contents": "On 06/09/2011 07:43 AM, Willy-Bas Loos wrote:\n> Well, after reading your article i have been reading some materail \n> about it on the internet, stating that separating indexes from data \n> for performance benefits is a myth.\n> I found your comment \" So then a single query will only ever access \n> one of both at a time.\" very smart (no sarcasm there).\n> I also found a thread \n> <http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:901906930328> \n> on AskTom that said mainly \"the goal is to achieve even io.\" (that \n> makes absolute sense)\n\nThe idea that separating indexes and tables from one another via a \ntablespace is inherently good is a myth. Particularly nowadays, where \nthe fastest part of a drive is nearly twice as fast as the slowest one \nin sequential transfers, and the ratio between sequential and random I/O \nis huge. Trying to get clever about breaking out a tablespace is \nunlikely to outsmart what you'd get if you just let the OS deal with \nthat stuff.\n\nWhat is true is that when you have multiple tiers of storage speeds \navailable, allocating the indexes and tables among them optimally is \nboth difficult and potentially worthwhile. A customer of mine has two \ndrive arrays, one of which is about 50% faster than the other; second \nwas added as expansion once the first filled. Nowadays, both are 75% \nfull, and I/O on each has to be carefully balanced. Making sure the \nheavily hit indexes are on the fast array, and that less critical things \nare not placed there, is the difference between that site staying up or \ngoing down.\n\nThe hidden surprise in this problem for most people is the day they \ndiscover that *the* most popular indexes, the ones they figured had to \ngo on the fastest storage around, are actually sitting in RAM all the \ntime anyway. It's always fun and sad at the same time to watch someone \nspend a fortune on some small expensive storage solution, move their \nmost performance critical data to it, and discover nothing changed. \nSome days that works great; others, it's no faster all, because that \ndata was already in memory.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nOn 06/09/2011 07:43 AM, Willy-Bas Loos wrote:\nWell, after reading your article i have been reading some\nmaterail about it on the internet, stating that separating indexes from\ndata for performance benefits is a myth.\n I found your comment \"\n \nSo\nthen a single query will only ever access one of both at a time.\" very\nsmart (no sarcasm there).\nI also found a thread\non AskTom that said mainly \"the goal is to achieve even io.\" (that\nmakes absolute sense)\n\n\nThe idea that separating indexes and tables from one another via a\ntablespace is inherently good is a myth. Particularly nowadays, where\nthe fastest part of a drive is nearly twice as fast as the slowest one\nin sequential transfers, and the ratio between sequential and random\nI/O is huge. Trying to get clever about breaking out a tablespace is\nunlikely to outsmart what you'd get if you just let the OS deal with\nthat stuff.\n\nWhat is true is that when you have multiple tiers of storage speeds\navailable, allocating the indexes and tables among them optimally is\nboth difficult and potentially worthwhile. 
A customer of mine has two\ndrive arrays, one of which is about 50% faster than the other; second\nwas added as expansion once the first filled. Nowadays, both are 75%\nfull, and I/O on each has to be carefully balanced. Making sure the\nheavily hit indexes are on the fast array, and that less critical\nthings are not placed there, is the difference between that site\nstaying up or going down.\n\nThe hidden surprise in this problem for most people is the day they\ndiscover that *the* most popular indexes, the ones they figured had to\ngo on the fastest storage around, are actually sitting in RAM all the\ntime anyway. It's always fun and sad at the same time to watch someone\nspend a fortune on some small expensive storage solution, move their\nmost performance critical data to it, and discover nothing changed. \nSome days that works great; others, it's no faster all, because that\ndata was already in memory.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books",
"msg_date": "Thu, 09 Jun 2011 13:44:04 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] [PERFORMANCE] expanding to SAN: which portion\n\tbest to move"
},
{
"msg_contents": "On Thu, Jun 9, 2011 at 7:44 PM, Greg Smith <[email protected]> wrote:\n\n> **\n> On 06/09/2011 07:43 AM, Willy-Bas Loos wrote:\n>\n> Well, after reading your article i have been reading some materail about it\n> on the internet, stating that separating indexes from data for performance\n> benefits is a myth.\n> I found your comment \" So then a single query will only ever access one of\n> both at a time.\" very smart (no sarcasm there).\n> I also found a thread<http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:901906930328>on AskTom that said mainly \"the goal is to achieve even io.\" (that makes\n> absolute sense)\n>\n>\n> The idea that separating indexes and tables from one another via a\n> tablespace is inherently good is a myth. Particularly nowadays, where the\n> fastest part of a drive is nearly twice as fast as the slowest one in\n> sequential transfers, and the ratio between sequential and random I/O is\n> huge. Trying to get clever about breaking out a tablespace is unlikely to\n> outsmart what you'd get if you just let the OS deal with that stuff.\n>\n> What is true is that when you have multiple tiers of storage speeds\n> available, allocating the indexes and tables among them optimally is both\n> difficult and potentially worthwhile. A customer of mine has two drive\n> arrays, one of which is about 50% faster than the other; second was added as\n> expansion once the first filled. Nowadays, both are 75% full, and I/O on\n> each has to be carefully balanced. Making sure the heavily hit indexes are\n> on the fast array, and that less critical things are not placed there, is\n> the difference between that site staying up or going down.\n>\n> The hidden surprise in this problem for most people is the day they\n> discover that *the* most popular indexes, the ones they figured had to go on\n> the fastest storage around, are actually sitting in RAM all the time\n> anyway. It's always fun and sad at the same time to watch someone spend a\n> fortune on some small expensive storage solution, move their most\n> performance critical data to it, and discover nothing changed. Some days\n> that works great; others, it's no faster all, because that data was already\n> in memory.\n> <http://www.2ndQuadrant.com/books>\n>\n\nAdding a few more thoughts to this: it is important to understand the very\ndifferent nature of read and write IO. While write IO usually can be done\nconcurrently to different IO channels (devices) for read IO there are\ntypically dependencies, e.g. you need to read the index before you know\nwhich part of the table you need to read. Thus both cannot be done\nconcurrently for a *single select* unless the whole query is partitioned and\nexecuted in parallel (Oracle can do that for example). Even then each\nparallel executor has this dependency between index and table data. That's\nthe same situation as with concurrent queries into the same table and\nindex. There are of course exceptions, e.g. during a sequential table scan\nyou can know beforehand which blocks need to be read next and fetch them\nwile processing the current block(s). 
The buffering strategy also plays an\nimportant role here.\n\nBottom line: one needs to look at each case individually, do the math and\nideally also measurements.\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n\nOn Thu, Jun 9, 2011 at 7:44 PM, Greg Smith <[email protected]> wrote:\n\n\nOn 06/09/2011 07:43 AM, Willy-Bas Loos wrote:\nWell, after reading your article i have been reading some\nmaterail about it on the internet, stating that separating indexes from\ndata for performance benefits is a myth.\n I found your comment \"\n \n So\nthen a single query will only ever access one of both at a time.\" very\nsmart (no sarcasm there).\nI also found a thread\non AskTom that said mainly \"the goal is to achieve even io.\" (that\nmakes absolute sense)\n\n\nThe idea that separating indexes and tables from one another via a\ntablespace is inherently good is a myth. Particularly nowadays, where\nthe fastest part of a drive is nearly twice as fast as the slowest one\nin sequential transfers, and the ratio between sequential and random\nI/O is huge. Trying to get clever about breaking out a tablespace is\nunlikely to outsmart what you'd get if you just let the OS deal with\nthat stuff.\n\nWhat is true is that when you have multiple tiers of storage speeds\navailable, allocating the indexes and tables among them optimally is\nboth difficult and potentially worthwhile. A customer of mine has two\ndrive arrays, one of which is about 50% faster than the other; second\nwas added as expansion once the first filled. Nowadays, both are 75%\nfull, and I/O on each has to be carefully balanced. Making sure the\nheavily hit indexes are on the fast array, and that less critical\nthings are not placed there, is the difference between that site\nstaying up or going down.\n\nThe hidden surprise in this problem for most people is the day they\ndiscover that *the* most popular indexes, the ones they figured had to\ngo on the fastest storage around, are actually sitting in RAM all the\ntime anyway. It's always fun and sad at the same time to watch someone\nspend a fortune on some small expensive storage solution, move their\nmost performance critical data to it, and discover nothing changed. \nSome days that works great; others, it's no faster all, because that\ndata was already in memory.\n\nAdding a few more thoughts to this: it is important to understand the very different nature of read and write IO. While write IO usually can be done concurrently to different IO channels (devices) for read IO there are typically dependencies, e.g. you need to read the index before you know which part of the table you need to read. Thus both cannot be done concurrently for a single select unless the whole query is partitioned and executed in parallel (Oracle can do that for example). Even then each parallel executor has this dependency between index and table data. That's the same situation as with concurrent queries into the same table and index. There are of course exceptions, e.g. during a sequential table scan you can know beforehand which blocks need to be read next and fetch them wile processing the current block(s). The buffering strategy also plays an important role here.\nBottom line: one needs to look at each case individually, do the math and ideally also measurements.Kind regardsrobert-- remember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/",
"msg_date": "Fri, 10 Jun 2011 12:02:50 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] [PERFORMANCE] expanding to SAN: which portion\n\tbest to move"
}
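As a rough sketch of the tiered-storage placement described above, the heavily hit objects can be pinned to the faster array with a dedicated tablespace. The mount point and object names below are invented for illustration; CREATE TABLESPACE needs superuser rights and a directory owned by the postgres user.

-- tablespace on the faster drive array (path is hypothetical)
CREATE TABLESPACE fast_array LOCATION '/mnt/fast_array/pg';

-- move an existing, heavily hit index onto it
ALTER INDEX orders_created_at_idx SET TABLESPACE fast_array;

-- or create new objects there directly
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE fast_array;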
] |
[
{
"msg_contents": "What are the best practices for setting up PG 9.x on Amazon EC2 to get the best performance?\n\n\tThanks in advance, Joel\n\n--------------------------------------------------------------------------\n- for hire: mac osx device driver ninja, kernel extensions and usb drivers\n---------------------+------------+---------------------------------------\nhttp://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont\n---------------------+------------+---------------------------------------\n\n\n\n",
"msg_date": "Tue, 3 May 2011 19:48:35 +0100",
"msg_from": "Joel Reymont <[email protected]>",
"msg_from_op": true,
"msg_subject": "amazon ec2"
},
{
"msg_contents": "On May 3, 2011 11:48:35 am Joel Reymont wrote:\n> What are the best practices for setting up PG 9.x on Amazon EC2 to get the\n> best performance?\n>\n\nI am also interested in tips for this. EBS seems to suck pretty bad.\n \n",
"msg_date": "Tue, 3 May 2011 12:41:20 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "\nOn May 3, 2011, at 8:41 PM, Alan Hodgson wrote:\n\n> I am also interested in tips for this. EBS seems to suck pretty bad.\n\nAlan, can you elaborate? Are you using PG on top of EBS?\n\n--------------------------------------------------------------------------\n- for hire: mac osx device driver ninja, kernel extensions and usb drivers\n---------------------+------------+---------------------------------------\nhttp://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont\n---------------------+------------+---------------------------------------\n\n\n\n",
"msg_date": "Tue, 3 May 2011 20:43:13 +0100",
"msg_from": "Joel Reymont <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "On May 3, 2011 12:43:13 pm you wrote:\n> On May 3, 2011, at 8:41 PM, Alan Hodgson wrote:\n> > I am also interested in tips for this. EBS seems to suck pretty bad.\n> \n> Alan, can you elaborate? Are you using PG on top of EBS?\n> \n\nTrying to, yes.\n\nLet's see ...\n\nEBS volumes seem to vary in speed. Some are relatively fast. Some are really \nslow. Some fast ones become slow randomly. Some are fast attached to one \ninstance, but really slow attached to another.\n\nFast being a relative term, though. The fast ones seem to be able to do maybe \n400 random IOPS. And of course you can only get about 80MB/sec sequential \naccess to them on a good day.\n\nWhich is why I'm interested in how other people are doing it. So far EC2 \ndoesn't seem well suited to running databases at all.\n",
"msg_date": "Tue, 3 May 2011 13:09:51 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "iowait is a problem on any platform that relies on spinning media, compared\nto RAM.\nno matter how fast a disk is, and no matter how intelligent the controller\nis, you are still dealing with an access speed differential of 10^6 (speed\nof disk access compared to memory access).\ni have had good results by avoiding it.\nif you can do this, ec2 is not too shabby, but beware - it doesn't come\nfree.\nthis is achievable under the following circumstances (and maybe there are\nother ways to do this).\ni use a technique of pro-actively querying enough of my anticipated result\nset with a daemon procedure.\nas long as the frequency of your query daemon execution is greater than that\nof the competitor processes (eg ETL and other update activity), AND a\nsubstantial part of the result set will fit in available RAM, then the\nresult set will be served from file system cache at the time you want it.\ni have found that it doesn't take much to get this to happen, once you have\nidentified your critical result set.\nlike - you can get away with running it once/hour, and i'm still reducing\nthe frequency and getting good results.\nthis approach basically assumes a 90/10 rule - at any point in time, you\nonly want to access 10% of your data. if you can work out what the 10% is,\nand it will fit into RAM, then you can set it up to cache it.\nit also imposes no additional cost in ec2, because Amazon doesn't bill you\nfor CPU activity, although the large-RAM machines do cost more. Depends on\nhow big your critical result set is, and how much speed you need.\n\ndont know if this helps - the success/failure of it depends on your typical\nquery activity, the size of your critical result set, and whether you are\nable to get enough RAM to make this work.\n\nas i said it doesn't come for free, but you can make it work.\n\nas a further point, try also checking out greenplum - it is an excellent\npostgres derivative with a very powerful free version. the reason why i\nbring it up is because it offers block-level compression (with caveats - it\nalso doesn't come for free, so do due diligence and rtfm carefully). The\ncompression enabled me to improve the cache hit rate, and so you further\nreduce the iowait problem.\ngreenplum is also a better parallel machine than postgres, so combining the\ncache technique above with greenplum compression and parallel query, i have\nbeen able to get 20:1 reduction in response times for some of our queries.\nobviously introducing new database technology is a big deal, but we needed\nthe speed, and it kinda worked.\n\nmr\n\n\nOn Tue, May 3, 2011 at 1:09 PM, Alan Hodgson <[email protected]> wrote:\n\n> On May 3, 2011 12:43:13 pm you wrote:\n> > On May 3, 2011, at 8:41 PM, Alan Hodgson wrote:\n> > > I am also interested in tips for this. EBS seems to suck pretty bad.\n> >\n> > Alan, can you elaborate? Are you using PG on top of EBS?\n> >\n>\n> Trying to, yes.\n>\n> Let's see ...\n>\n> EBS volumes seem to vary in speed. Some are relatively fast. Some are\n> really\n> slow. Some fast ones become slow randomly. Some are fast attached to one\n> instance, but really slow attached to another.\n>\n> Fast being a relative term, though. The fast ones seem to be able to do\n> maybe\n> 400 random IOPS. And of course you can only get about 80MB/sec sequential\n> access to them on a good day.\n>\n> Which is why I'm interested in how other people are doing it. 
So far EC2\n> doesn't seem well suited to running databases at all.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Tue, 3 May 2011 13:48:47 -0700",
"msg_from": "Mark Rostron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
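A minimal sketch of the cache-warming daemon described above, assuming a made-up schema: a query scheduled (for example from cron) more often than the competing ETL/update activity, touching only the critical result set so that it stays resident in the filesystem cache.

-- run periodically; count(*) forces the hot rows (and the index used to
-- reach them) through the cache without returning any data to the client
SELECT count(*)
FROM ad_impressions i
JOIN campaigns c ON c.id = i.campaign_id
WHERE i.created_at > now() - interval '1 day';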
{
"msg_contents": "On Tue, May 3, 2011 at 2:09 PM, Alan Hodgson <[email protected]> wrote:\n\n> On May 3, 2011 12:43:13 pm you wrote:\n> > On May 3, 2011, at 8:41 PM, Alan Hodgson wrote:\n> > > I am also interested in tips for this. EBS seems to suck pretty bad.\n> >\n> > Alan, can you elaborate? Are you using PG on top of EBS?\n> >\n>\n> Trying to, yes.\n>\n> Let's see ...\n>\n> EBS volumes seem to vary in speed. Some are relatively fast. Some are\n> really\n> slow. Some fast ones become slow randomly. Some are fast attached to one\n> instance, but really slow attached to another.\n>\n>\nI ran pgbench tests late last year comparing EC2, GoGrid, a 5 year-old lab\nserver and a new server. Whether I used a stock postgresql.conf or tweaked,\nthe current 8.4 or 9.0, or varied the EC2 instance size EC2 was always at\nthe bottom ranging from 409.834 to 693.100 tps. GoGrid's pgbench TPS\nnumbers in similar tests were, on average, 3X that of EC2 (1,399.550 to\n1,631.887 tps). The tests I conducted were small with 10 connections and\ntotal 5,000 transactions. The single variable that helped pgbench tests in\nEC2 was to select an instance size where the number of cores was equal to or\ngreater than the number of connections I used in the tests however this only\nimproved things slightly (715.931 tps).\n\nFor comparisons purposes, I ran the same tests on a 24-way X5650 with 12 GB\nand SAS RAID 10. This server typically ranged from 2,188.348 to 2,216.377\ntps.\n\nI attributed GoGrids superior performance over EC2 as EC2 simply\nbeing over-allocated but that's just speculation on my part. To test my\ntheory, I had wanted to put the database on a ramdisk, or like device, in\nEC2 and GoGrid but never got around to it.\n\n\n\n> Fast being a relative term, though. The fast ones seem to be able to do\n> maybe\n> 400 random IOPS. And of course you can only get about 80MB/sec sequential\n> access to them on a good day.\n>\n> Which is why I'm interested in how other people are doing it. So far EC2\n> doesn't seem well suited to running databases at all.\n>\n>\nI was doing this perhaps to convince management to give me some time to\nvalidate our software (PG backed) on some of the cloud providers but with\nthose abysmal numbers I didn't even bother at the time. I may revisit at\nsome point b/c I know Amazon at least has been making architecture\nadjustments and updates.\n\nGreg\n\nOn Tue, May 3, 2011 at 2:09 PM, Alan Hodgson <[email protected]> wrote:\nOn May 3, 2011 12:43:13 pm you wrote:\n> On May 3, 2011, at 8:41 PM, Alan Hodgson wrote:\n> > I am also interested in tips for this. EBS seems to suck pretty bad.\n>\n> Alan, can you elaborate? Are you using PG on top of EBS?\n>\n\nTrying to, yes.\n\nLet's see ...\n\nEBS volumes seem to vary in speed. Some are relatively fast. Some are really\nslow. Some fast ones become slow randomly. Some are fast attached to one\ninstance, but really slow attached to another.\nI ran pgbench tests late last year comparing EC2, GoGrid, a 5 year-old lab server and a new server. Whether I used a stock postgresql.conf or tweaked, the current 8.4 or 9.0, or varied the EC2 instance size EC2 was always at the bottom ranging from 409.834 to 693.100 tps. GoGrid's pgbench TPS numbers in similar tests were, on average, 3X that of EC2 (1,399.550 to 1,631.887 tps). The tests I conducted were small with 10 connections and total 5,000 transactions. 
The single variable that helped pgbench tests in EC2 was to select an instance size where the number of cores was equal to or greater than the number of connections I used in the tests however this only improved things slightly (715.931 tps).\nFor comparisons purposes, I ran the same tests on a 24-way X5650 with 12 GB and SAS RAID 10. This server typically ranged from 2,188.348 to 2,216.377 tps.\nI attributed GoGrids superior performance over EC2 as EC2 simply being over-allocated but that's just speculation on my part. To test my theory, I had wanted to put the database on a ramdisk, or like device, in EC2 and GoGrid but never got around to it.\n \nFast being a relative term, though. The fast ones seem to be able to do maybe\n400 random IOPS. And of course you can only get about 80MB/sec sequential\naccess to them on a good day.\n\nWhich is why I'm interested in how other people are doing it. So far EC2\ndoesn't seem well suited to running databases at all.\n I was doing this perhaps to convince management to give me some time to validate our software (PG backed) on some of the cloud providers but with those abysmal numbers I didn't even bother at the time. I may revisit at some point b/c I know Amazon at least has been making architecture adjustments and updates.\nGreg",
"msg_date": "Tue, 3 May 2011 14:52:37 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "phoronix did some benchmarks of the ec2 machines and they show pretty poor \nnumbers, especially in the I/O side of things\n\nhttp://www.phoronix.com/scan.php?page=article&item=amazon_ec2_round1&num=1 \nhttp://www.phoronix.com/scan.php?page=article&item=amazon_ec2_micro&num=1\n\nDavid Lang\n\n\nOn Tue, 3 May 2011, Alan Hodgson wrote:\n\n> Date: Tue, 3 May 2011 13:09:51 -0700\n> From: Alan Hodgson <[email protected]>\n> To: [email protected]\n> Subject: Re: [PERFORM] amazon ec2\n> \n> On May 3, 2011 12:43:13 pm you wrote:\n>> On May 3, 2011, at 8:41 PM, Alan Hodgson wrote:\n>>> I am also interested in tips for this. EBS seems to suck pretty bad.\n>>\n>> Alan, can you elaborate? Are you using PG on top of EBS?\n>>\n>\n> Trying to, yes.\n>\n> Let's see ...\n>\n> EBS volumes seem to vary in speed. Some are relatively fast. Some are really\n> slow. Some fast ones become slow randomly. Some are fast attached to one\n> instance, but really slow attached to another.\n>\n> Fast being a relative term, though. The fast ones seem to be able to do maybe\n> 400 random IOPS. And of course you can only get about 80MB/sec sequential\n> access to them on a good day.\n>\n> Which is why I'm interested in how other people are doing it. So far EC2\n> doesn't seem well suited to running databases at all.\n>\n>\n",
"msg_date": "Tue, 3 May 2011 13:58:43 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "On 5/3/11 11:48 AM, Joel Reymont wrote:\n> What are the best practices for setting up PG 9.x on Amazon EC2 to get the best performance?\n\nYes. Don't use EC2.\n\nThere is no \"best\" performance on EC2. There's not even \"good\nperformance\". Basically, EC2 is the platform for when performance\ndoesn't matter.\n\nUse a dedicated server, or use a better cloud host.\n\nhttp://it.toolbox.com/blogs/database-soup/how-to-make-your-database-perform-well-on-amazon-ec2-45725\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 03 May 2011 14:39:23 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "Mark Rostron wrote:\n> the success/failure of it depends on your typical query activity, the \n> size of your critical result set, and whether you are able to get \n> enough RAM to make this work.\n\nBasically, it all comes down to \"does the working set of data I access \nfrequently fit in RAM?\" If it does, it's possible to get reasonable \nperformance out of an EC2 instance. The EBS disks are so slow, both on \naverage and particularly in cases where you have contention with other \nusers slowing you down, that any situation where you have to use them is \nnever going to work well. If most of the data fits in RAM, and the CPU \nresources available to your instance are sufficient to service your \nqueries, you might see acceptable performance.\n\n> greenplum is also a better parallel machine than postgres, so \n> combining the cache technique above with greenplum compression and \n> parallel query, i have been able to get 20:1 reduction in response \n> times for some of our queries.\n\nI've also seen over a 20:1 speedup over PostgreSQL by using Greenplum's \nfree Community Edition server, in situations where its column store + \ncompression features work well on the data set. That's easiest with an \nappend-only workload, and the data set needs to fit within the \nconstraints where indexes on compressed data are useful. But if you fit \nthe use profile it's good at, you end up with considerable ability to \ntrade-off using more CPU resources to speed up queries. It effectively \nincreases the amount of data that can be cached in RAM by a large \nmultiple, and in the EC2 context (where any access to disk is very slow) \nit can be quite valuable. My colleague Gabrielle wrote something about \nsetting this up on an earlier version of Greenplum's software at \nhttp://blog.2ndquadrant.com/en/2010/03/installing-greenplum-sne-ec2.html \nthat gives an idea how that was setup.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 03 May 2011 18:39:28 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "Greg Spiegelberg wrote:\n> I ran pgbench tests late last year comparing EC2, GoGrid, a 5 year-old \n> lab server and a new server. Whether I used a stock postgresql.conf \n> or tweaked, the current 8.4 or 9.0, or varied the EC2 instance size \n> EC2 was always at the bottom ranging from 409.834 to 693.100 tps. \n> GoGrid's pgbench TPS numbers in similar tests were, on average, 3X \n> that of EC2 (1,399.550 to 1,631.887 tps). The tests I conducted were \n> small with 10 connections and total 5,000 transactions. The single \n> variable that helped pgbench tests in EC2 was to select an instance \n> size where the number of cores was equal to or greater than the number \n> of connections I used in the tests however this only improved things \n> slightly (715.931 tps).\n\nThe standard pgbench test is extremely sensitive to how fast \ntransactions can be committed to disk. That doesn't reflect what \nperformance looks like on most real-world workloads, which tend toward \nmore reads. The fact that GoGrid is much faster at doing commits than \nEC2 is interesting, but that's only one of many parameters that impact \nperformance on more normal workloads.\n\nThe one parameter that can change how the test runs is turning off \nsynchronous_commit, which pulls the commit time out of the results to \nsome extent. And if you'd switched pgbench to a more read-oriented \ntest, you'd discover it becomes extremely sensitive to the size of the \ndatabase, as set by pgbench's scale parameter during setup.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 03 May 2011 18:44:13 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
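To factor commit latency out of such a benchmark, synchronous commit can be relaxed for the test session; this is a sketch of the knob mentioned above, with the usual caveat that a crash may lose the last few transactions (though not corrupt data).

-- per-session: commits return before the WAL flush hits disk
SET synchronous_commit TO off;

-- confirm the current setting
SHOW synchronous_commit;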
{
"msg_contents": "On May 3, 2011, at 5:39 PM, Greg Smith wrote:\n> I've also seen over a 20:1 speedup over PostgreSQL by using Greenplum's free Community Edition server, in situations where its column store + compression features work well on the data set. That's easiest with an append-only workload, and the data set needs to fit within the constraints where indexes on compressed data are useful. But if you fit the use profile it's good at, you end up with considerable ability to trade-off using more CPU resources to speed up queries. It effectively increases the amount of data that can be cached in RAM by a large multiple, and in the EC2 context (where any access to disk is very slow) it can be quite valuable.\n\nFWIW, EnterpriseDB's \"InfiniCache\" provides the same caching benefit. The way that works is when PG goes to evict a page from shared buffers that page gets compressed and stuffed into a memcache cluster. When PG determines that a given page isn't in shared buffers it will then check that memcache cluster before reading the page from disk. This allows you to cache amounts of data that far exceed the amount of memory you could put in a physical server.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Wed, 4 May 2011 08:05:35 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "On 05/03/2011 01:48 PM, Joel Reymont wrote:\n\n> What are the best practices for setting up PG 9.x on Amazon EC2 to\n> get the best performance?\n\nUse EC2 and other Amazon hosting for cloud-based client access only. \nTheir shared disk services are universally despised by basically \neveryone who has tried to use it for database hosting.\n\nThe recommended pattern is to have the scalable cloud clients access \n(and memcache) a remote DB at a colo or managed services host. EBS is a \nnice idea, and probably fine for things like image or video hosting, but \ndatabase access, especially for OLTP databases, will just result in \nwailing and gnashing of teeth.\n\nJust ask anyone who got bit by the recent EBS failure that spanned \n*several* availability zones. For all those clients who thought they \nwere safe by deploying across multiple ACs, it was a rather rude awakening.\n\nhttp://aws.amazon.com/message/65648/\n\nThe consensus seems to be that Amazon's cloud is fine... so long as you \nstay far, far away from EBS. Apparently that needs a little more work.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Wed, 4 May 2011 09:04:18 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "\n> FWIW, EnterpriseDB's \"InfiniCache\" provides the same caching benefit. The way that works is when PG goes to evict a page from shared buffers that page gets compressed and stuffed into a memcache cluster. When PG determines that a given page isn't in shared buffers it will then check that memcache cluster before reading the page from disk. This allows you to cache amounts of data that far exceed the amount of memory you could put in a physical server.\n\nSo memcached basically replaces the filesystem?\n\nThat sounds cool, but I'm wondering if it's actually a performance\nspeedup. Seems like it would only be a benefit for single-row lookups;\nany large reads would be a mess.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 04 May 2011 17:02:53 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "On Wed, 4 May 2011, Josh Berkus wrote:\n\n> Date: Wed, 04 May 2011 17:02:53 -0700\n> From: Josh Berkus <[email protected]>\n> To: postgres performance list <[email protected]>\n> Subject: Re: [PERFORM] amazon ec2\n> \n>\n>> FWIW, EnterpriseDB's \"InfiniCache\" provides the same caching benefit. The way that works is when PG goes to evict a page from shared buffers that page gets compressed and stuffed into a memcache cluster. When PG determines that a given page isn't in shared buffers it will then check that memcache cluster before reading the page from disk. This allows you to cache amounts of data that far exceed the amount of memory you could put in a physical server.\n>\n> So memcached basically replaces the filesystem?\n>\n> That sounds cool, but I'm wondering if it's actually a performance\n> speedup. Seems like it would only be a benefit for single-row lookups;\n> any large reads would be a mess.\n\nI think it would depend a lot on how well the page compresses.\n\nif the network I/O plus uncompression time is faster than the seek and \nread from disk it should be a win.\n\nI don't see why the benifit would be limited to single row lookups, \nanything within that page should be the same.\n\nfor multipage actions, the disk would have less of a disadvantage as \nreadahead may be able to hide some of the work while the memcache approach \nwould need to do separate transactions for each page.\n\nthis does bring up an interesting variation for hierarchical storage, \nusing compressed pages in memcache rather than dedicated resources on a \nlocal system. thanks for the thought and I'll keep it in mind. I can think \nof lots of cases where the database stores a relativly small set of values \nthat would compress well.\n\nDavid Lang\n",
"msg_date": "Wed, 4 May 2011 17:12:30 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "----- Original Message -----\n\n> From: Josh Berkus <[email protected]>\n> To: postgres performance list <[email protected]>\n> Cc: \n> Sent: Thursday, May 5, 2011 2:02 AM\n> Subject: Re: [PERFORM] amazon ec2\n> So memcached basically replaces the filesystem?\n> \n> That sounds cool, but I'm wondering if it's actually a performance\n> speedup. Seems like it would only be a benefit for single-row lookups;\n> any large reads would be a mess.\n\n\nI've never tested with pgsql, but with mysql it makes a *huge* difference when you're pulling data repeatedly. Multi-row lookups can be cached too:\n\n$rows = $cache->get(md5($query . '--' . serialize($args)));\n\nif ( !$rows) {\n // query and cache for a few hours...\n}\n\nThis is true even with mysql's caching features turned on. You spare the DB from doing identical queries that get repeated over and over. Memcache lets you pull those straight from the memory, allowing for the DB server to handle new queries exclusively.\n\n",
"msg_date": "Wed, 4 May 2011 17:21:14 -0700 (PDT)",
"msg_from": "Denis de Bernardy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
},
{
"msg_contents": "On Thu, May 5, 2011 at 1:02 AM, Josh Berkus <[email protected]> wrote:\n>\n>> FWIW, EnterpriseDB's \"InfiniCache\" provides the same caching benefit. The way that works is when PG goes to evict a page from shared buffers that page gets compressed and stuffed into a memcache cluster. When PG determines that a given page isn't in shared buffers it will then check that memcache cluster before reading the page from disk. This allows you to cache amounts of data that far exceed the amount of memory you could put in a physical server.\n>\n> So memcached basically replaces the filesystem?\n\nNo, it sits in between shared buffers and the filesystem, effectively\nproviding an additional layer of extremely large, compressed cache.\nEven on a single server there can be benefits over larger shared\nbuffers due to the compression.\n\n> That sounds cool, but I'm wondering if it's actually a performance\n> speedup. Seems like it would only be a benefit for single-row lookups;\n> any large reads would be a mess.\n\nDepends on the database and the workload - if you can fit your entire\n100GB database in cache, and your workload is read intensive then the\nspeedups are potentially huge (I've seen benchmarks showing 20x+).\nWrite intensive workloads, less so, similarly if the working set is\nfar larger than your cache size.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 5 May 2011 08:49:47 +0100",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: amazon ec2"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm running into erroneous row estimates when using an array type, and I'm running out of ideas on how to steer postgres into the right direction... I've tried setting statistics to 1000, vacuuming and analyzing over and over, rewriting the query differently... to no avail.\n\nThe table looks like this:\n\ncreate table test (\nid serial primary key,\nsortcol float unique,\nintarr1 int[] not null default '{}',\nintarr2 int[] not null default '{}'\n\n);\n\nIt contains 40k rows of random data which, for the sake of the queries that follow, aren't too far from what they'd contain in production.\n\n\n# select intarr1, intarr2, count(*) from test group by intarr1, intarr2;\n\n\n intarr1 | intarr2 | count \n--------------+-------------+-------\n {0} | {2,3} | 40018\n\n\nThe stats seem right:\n\n# select * from pg_stats where tablename = 'test' and attname = 'intarr1';\n\n schemaname | tablename | attname | inherited | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation \n------------+-----------+--------------+-----------+-----------+-----------+------------+------------------+-------------------+------------------+-------------\n test | test | intarr1 | f | 0 | 25 | 1 | {\"{0}\"} | {1} | | 1\n\n\nA query without any condition on the array results in reasonable estimates and the proper use of the index on the sort column:\n\n# explain analyze select * from test order by sortcol limit 10;\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.00 rows=10 width=217) (actual time=0.098..0.109 rows=10 loops=1)\n -> Index Scan using test_sortcol_key on test (cost=0.00..12019.08 rows=40018 width=217) (actual time=0.096..0.105 rows=10 loops=1)\n Total runtime: 0.200 ms\n\n\nAfter adding a condition on the array, however, the row estimate is completely off:\n\n\n# explain analyze select * from test where intarr1 && '{0,1}' order by sortcol limit 10;\n\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..605.96 rows=10 width=217) (actual time=0.131..0.175 rows=10 loops=1)\n -> Index Scan using test_sortcol_key on test (cost=0.00..12119.13 rows=200 width=217) (actual time=0.129..0.169 rows=10 loops=1)\n Filter: (intarr1 && '{0,1}'::integer[])\n\n\nWhen there's a condition on both arrays, this then leads to a seq scan:\n\n# explain analyze select * from test where intarr1 && '{0,1}' and intarr2 && '{2,4}' order by sortcol limit 10;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------\n Limit (cost=3241.28..3241.29 rows=1 width=217) (actual time=88.260..88.265 rows=10 loops=1)\n -> Sort (cost=3241.28..3241.29 rows=1 width=217) (actual time=88.258..88.260 rows=10 loops=1)\n Sort Key: sortcol\n Sort Method: top-N heapsort Memory: 27kB\n -> Seq Scan on test (cost=0.00..3241.27 rows=1 width=217) (actual time=0.169..68.785 rows=40018 loops=1)\n Filter: ((intarr1 && '{0,1}'::integer[]) AND (intarr2 && '{2,4}'::integer[]))\n Total runtime: 88.339 ms\n\n\nAdding a GIN index on the two arrays results in similar ugliness:\n\n# explain analyze select * from test where intarr1 && '{0,1}' and intarr2 && '{2,4}' order by lft limit 10;\n QUERY PLAN 
\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=8.29..8.29 rows=1 width=217) (actual time=56.122..56.127 rows=10 loops=1)\n -> Sort (cost=8.29..8.29 rows=1 width=217) (actual time=56.120..56.122 rows=10 loops=1)\n Sort Key: sortcol\n Sort Method: top-N heapsort Memory: 27kB\n -> Bitmap Heap Scan on test (cost=4.26..8.28 rows=1 width=217) (actual time=19.635..39.824 rows=40018 loops=1)\n Recheck Cond: ((intarr1 && '{0,1}'::integer[]) AND (intarr2 && '{2,4}'::integer[]))\n -> Bitmap Index Scan on test_intarr1_intarr2_idx (cost=0.00..4.26 rows=1 width=0) (actual time=19.387..19.387 rows=40018 loops=1)\n Index Cond: ((intarr1 && '{0,1}'::integer[]) AND (intarr2 && '{2,4}'::integer[]))\n Total runtime: 56.210 ms\n\n\nMight this be a bug in the operator's selectivity, or am I doing something wrong?\n\nThanks in advance,\nDenis\n\n",
"msg_date": "Wed, 4 May 2011 06:40:50 -0700 (PDT)",
"msg_from": "Denis de Bernardy <[email protected]>",
"msg_from_op": true,
"msg_subject": "row estimate very wrong for array type"
},
{
"msg_contents": "Denis de Bernardy <[email protected]> writes:\n> [ estimates for array && suck ]\n> Might this be a bug in the operator's selectivity, or am I doing something wrong?\n\nArray && uses areasel() which is only a stub :-(\n\nIn the particular case here it'd be possible to get decent answers\njust by trying the operator against all the MCV-list entries, but it's\nunlikely that that would fix things for enough people to be worth the\ntrouble. Really you'd need to maintain statistics about the element\nvalues appearing in the array column in order to get useful estimates\nfor && queries. Jan Urbanski did something similar for tsvector columns\na year or two ago, but nobody's gotten around to doing it for array\ncolumns.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 May 2011 10:12:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: row estimate very wrong for array type "
},
{
"msg_contents": "That kind of limits the usefulness of aggregating hierarchical dependencies into array columns to avoid enormous join statements. :-|\n\n\nRe your todo item you mention in this thread:\n\nhttp://archives.postgresql.org/pgsql-hackers/2010-05/msg01864.php\n\nMy C is rusty, but I might have enough understanding of the PG internals to massage pre-existing code... Feel free to message me off list with pointers if you think I might be able to help.\n\n\n\n----- Original Message -----\n> From: Tom Lane <[email protected]>\n> To: Denis de Bernardy <[email protected]>\n> Cc: \"[email protected]\" <[email protected]>\n> Sent: Wednesday, May 4, 2011 4:12 PM\n> Subject: Re: [PERFORM] row estimate very wrong for array type \n> \n> Denis de Bernardy <[email protected]> writes:\n>> [ estimates for array && suck ]\n>> Might this be a bug in the operator's selectivity, or am I doing \n> something wrong?\n> \n> Array && uses areasel() which is only a stub :-(\n> \n> In the particular case here it'd be possible to get decent answers\n> just by trying the operator against all the MCV-list entries, but it's\n> unlikely that that would fix things for enough people to be worth the\n> trouble. Really you'd need to maintain statistics about the element\n> values appearing in the array column in order to get useful estimates\n> for && queries. Jan Urbanski did something similar for tsvector columns\n> a year or two ago, but nobody's gotten around to doing it for array\n> columns.\n> \n> regards, tom lane\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 4 May 2011 07:42:53 -0700 (PDT)",
"msg_from": "Denis de Bernardy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: row estimate very wrong for array type "
},
{
"msg_contents": "> ----- Original Message -----\n\n>> From: Tom Lane <[email protected]>\n>> To: Denis de Bernardy <[email protected]>\n>> Cc: \"[email protected]\" \n> <[email protected]>\n>> Sent: Wednesday, May 4, 2011 4:12 PM\n>> Subject: Re: [PERFORM] row estimate very wrong for array type \n>> \n>> Array && uses areasel() which is only a stub :-(\n\n\nOn a separate note, in case this ever gets found via google, I managed to force the use of the correct index in the meanwhile:\n\n# explain analyze select * from test where (0 = any(intarr1) or 1 = any(intarr1)) and (2 = any(intarr2) or 4 = any(intarr2)) order by sortcol limit 10;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..385.16 rows=10 width=217) (actual time=0.107..0.151 rows=10 loops=1)\n -> Index Scan using test_sortcol_key on test (cost=0.00..14019.98 rows=364 width=217) (actual time=0.106..0.146 rows=10 loops=1)\n Filter: (((0 = ANY (intarr1)) OR (1 = ANY (intarr1))) AND ((2 = ANY (intarr2)) OR (4 = ANY (intarr2))))\n Total runtime: 0.214 ms\n\n\nI guess I'm in for maintaining counts and rewriting queries as needed. :-(\n\nD\n",
"msg_date": "Wed, 4 May 2011 07:56:52 -0700 (PDT)",
"msg_from": "Denis de Bernardy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: row estimate very wrong for array type "
}
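One way the maintaining-counts idea could look, using the same test table from earlier in the thread: a periodically refreshed summary of how many rows contain each array element, which the application can consult before choosing how to phrase the query. This is only a sketch; the refresh would normally be driven by cron or a trigger.

-- per-element row counts for intarr1 (unnest() is available from 8.4 on)
CREATE TABLE test_intarr1_counts AS
SELECT elem, count(*) AS cnt
FROM (SELECT unnest(intarr1) AS elem FROM test) s
GROUP BY elem;

-- refresh
TRUNCATE test_intarr1_counts;
INSERT INTO test_intarr1_counts
SELECT elem, count(*) AS cnt
FROM (SELECT unnest(intarr1) AS elem FROM test) s
GROUP BY elem;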
] |
[
{
"msg_contents": "I had a problem with performance engine database, I use the server with the \nfollowing specifications\n\n 1. its storage configuration?\n Storage SCSI 15K RAID 5\n 2. how his network?\n 2 gigabit bonding.\n3. type / behavior of applications that connect to the db?\n Direct connects one segment.\n4. use the machine?\n DB only\n5. its os? * nix or windows?\n Linux\n6. how large the existing data? growth?\n growth 100M / day\n\nI had a problem with memory is always exhausted, causing\ndatabase stack or very slowly, I wanted to ask how to do the tuning \npostgresql.conf settings for the above case, or whether there is another \nsolution\n\nthanks for regard\nI had a problem with performance engine database, I use the server with the following specifications 1. its storage configuration? Storage SCSI 15K RAID 5 2. how his network? 2 gigabit bonding. 3. type / behavior of applications that connect to the db? Direct connects one segment. 4. use the machine? DB only 5. its os? * nix or windows? Linux 6. how large the existing data? growth? growth 100M / day I had a problem with memory is always exhausted, causing database stack or very slowly, I wanted to ask how to do the tuning postgresql.conf settings for the above case, or whether there is another solution thanks for regard",
"msg_date": "Thu, 5 May 2011 09:18:59 +0800 (SGT)",
"msg_from": "Didik Prasetyo <[email protected]>",
"msg_from_op": true,
"msg_subject": "ask the database engine tuning on the server"
},
{
"msg_contents": "Didik Prasetyo <[email protected]> wrote:\n> I had a problem with performance engine database, I use the server\n> with the following specifications\n> \n> 1. its storage configuration?\n> Storage SCSI 15K RAID 5\n> 2. how his network?\n> 2 gigabit bonding.\n> 3. type / behavior of applications that connect to the db?\n> Direct connects one segment.\n> 4. use the machine?\n> DB only\n> 5. its os? * nix or windows?\n> Linux\n> 6. how large the existing data? growth?\n> growth 100M / day\n> \n> I had a problem with memory is always exhausted, causing\n> database stack or very slowly, I wanted to ask how to do the\n> tuning postgresql.conf settings for the above case, or whether\n> there is another solution\n \nI'm not entirely sure what problem you are seeing. If you post\nagain, please show your version and configuration. An easy way to\ndo this is to run the query on this page and pastte the output into\nyour post:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \nFor general tuning advice you should read this page:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nIf there is a particular query which is causing problems, please\npost detail related to that query. See this page:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nIf it is some other sort of problem, it helps to provide more detail\nand to copy and paste any error messages:\n \nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n \nA couple general points which might apply:\n \n(1) RAID 5 is OK for reads, but is slow for random writes in heavy\nloads. This is mitigated somewhat by having a good RAID controller\nwith a battery backed RAM cache configured for write-back.\n \n(2) You need to be running autovacuum and you need to avoid\nlong-running transactions (including those which show as \"idle in\ntransaction\") to avoid bloat, which can cause your database to grow\nrapidly.\n \nI hope this helps,\n \n-Kevin\n",
"msg_date": "Thu, 05 May 2011 13:07:45 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ask the database engine tuning on the server"
}
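To spot the long-running and idle-in-transaction sessions warned about in point (2), something along these lines can be run against pg_stat_activity. Column names are the pre-9.2 ones matching the era of this thread; later releases renamed procpid and current_query to pid and query.

-- transactions open for more than five minutes
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start < now() - interval '5 minutes'
ORDER BY xact_start;

-- sessions sitting idle inside an open transaction
SELECT procpid, usename, xact_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';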
] |
[
{
"msg_contents": "I referred chapter 14.3 of postgres document version 9.0.\n\nexplicit joins help the planner in planninng & thus improve performance.\nOn what relations are explicit joins to be added??\n\nI am getting data from 10 tables in a view.\nI don't know on which pair of tables I have to add explicit joins to improve\nperformance.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Explicit-joins-tp4372000p4372000.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 4 May 2011 23:04:11 -0700 (PDT)",
"msg_from": "Rishabh Kumar Jain <[email protected]>",
"msg_from_op": true,
"msg_subject": "Explicit joins"
},
{
"msg_contents": "Rishabh Kumar Jain <[email protected]> wrote:\n \n> I am getting data from 10 tables in a view.\n> I don't know on which pair of tables I have to add explicit joins\n> to improve performance.\n \nThere's usually some fairly natural order in terms of understanding\nthe request. I find it's often good to try to state in words what\ndata I want to see (*not* how I think I could get that data, but\ndescribe logically which set of data I want), and list the tables in\nthe order the appear in that description. The description will\nnaturally tend to include or imply your join conditions and other\nselection criteria.\n \n-Kevin\n",
"msg_date": "Thu, 05 May 2011 12:16:20 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explicit joins"
}
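A small sketch of what that looks like in practice, with invented table names: write the joins explicitly in the order that mirrors the description of the data and, if the planner should follow the written order rather than search for its own, lower join_collapse_limit for the session (as chapter 14.3 describes).

SET join_collapse_limit = 1;  -- planner keeps the written JOIN order

SELECT o.order_id, c.name, p.title
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
JOIN products p ON p.product_id = o.product_id
WHERE o.ordered_on >= date '2011-01-01';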
] |
[
{
"msg_contents": "We have a generated query from our web application which takes far longer to\ncomplete in 9.0.4, than in 8.3.7 (>60sec in 9.0.4 ~10sec in 8.3.7)\n\nThe query plan generated in 9.0, includes a Materialize step which takes the\nbulk of the time for the query. If I disable materialize (by running set\nenable_material='off';) the query takes ~6 seconds to run (NOTE: the query\nplan chosen in this case is very similar to the plan chosen in 8.3.7).\n>From reading\nhttp://rhaas.blogspot.com/2010/04/materialization-in-postgresql-90.html, I\nrealize that the planner will be more aggressive in choosing Materialize in\n9.0. Is there a way to modify the planner cost settings to minimize it's\nuse in cases like this?\n\nBelow, I have included the query in question and the \"explain analyze\"\noutput for the case where enable_material is on. .\n\nNOTE: I will send explain analyze output for when enable_material is 'off'\nand information on the postgresql version, settings and server configuration\nin a follow up email due the length of email restrictions on the mailing\nlist.\n\n(A vacuum analyze was run prior to running the queries for this email)\n\nAny help that you can provide would be greatly appreciated.\n\n\n\nQuery\n------------------------------------------------------------------------------------------------\n SELECT *\nFROM (\n SELECT\n Peter_SizeByType.RowId AS RowId,\n Peter_SizeByType.Name AS Name,\n Peter_SizeByType.Type AS Type,\n Peter_SizeByType.ZAve AS ZAve,\n Peter_SizeByType.TimeLabel AS TimeLabel,\n Peter_SizeByType.StorageTemperature AS StorageTemperature,\n Peter_SizeByType.Pdl AS Pdl,\n Peter_SizeByType.Cumulants AS Cumulants,\n Peter_SizeByType.AnalysisTool AS AnalysisTool,\n Peter_SizeByType.meanCountRate AS meanCountRate,\n Peter_SizeByType.ExtractionNumber AS ExtractionNumber,\n Peter_SizeByType.TestNumber AS TestNumber,\n Peter_SizeByType.Sort AS Sort\n FROM (SELECT PS_2.rowid AS RowId,\n F_3.Name AS Name,\n F_3.Type AS Type,\n PS_2.zave AS ZAve,\n PS_2.timelabel AS TimeLabel,\n PS_2.storagetemperature AS StorageTemperature,\n PS_2.pdl AS Pdl,\n PS_2.cumulants AS Cumulants,\n PS_2.analysistool AS AnalysisTool,\n PS_2.meancountrate AS meanCountRate,\n PS_2.extractionnumber AS ExtractionNumber,\n PS_2.testnumber AS TestNumber,\n T_4.sort AS Sort\n FROM (SELECT c69d129_particle_size_result_fields_5.analysistool AS\nanalysistool, c69d129_particle_size_result_fields_5.cumulants AS cumulants,\nc69d129_particle_size_result_fields_5.extractionnumber AS extractionnumber,\nc69d129_particle_size_result_fields_5.pdl AS pdl,\nc69d129_particle_size_result_fields_5.rowid AS rowid,\n (SELECT RunId FROM exp.Data WHERE RowId =\nc69d129_particle_size_result_fields_5.DataId)\n AS Run, c69d129_particle_size_result_fields_5$Run$.container AS\nRun_Folder, c69d129_particle_size_result_fields_5$Run$Folder$.entityid AS\nRun_Folder_EntityId, c69d129_particle_size_result_fields_5$Run$.name AS\nRun_Name, c69d129_particle_size_result_fields_5.storagetemperature AS\nstoragetemperature, c69d129_particle_size_result_fields_5.testnumber AS\ntestnumber, c69d129_particle_size_result_fields_5.timelabel AS timelabel,\nc69d129_particle_size_result_fields_5.zave AS zave,\nc69d129_particle_size_result_fields_5.meancountrate AS meancountrate,\nc69d129_particle_size_result_fields_5.rowid AS rowid1\n FROM (SELECT * FROM assayresult.\"c69d129_particle_size_result_fields\"\n WHERE (((SELECT Container FROM exp.Data WHERE RowId = DataId) 
IN\n('d938da12-1b43-102d-a8a2-78911b79dd1c'))))\nc69d129_particle_size_result_fields_5\n LEFT OUTER JOIN (SELECT * FROM exp.experimentrun\n WHERE (((protocollsid LIKE\n'urn:lsid:labkey.com:Particle+SizeProtocol.Folder-69:Particle+Size')))\nAND (container IN ('d938da12-1b43-102d-a8a2-78911b79dd1c')))\nc69d129_particle_size_result_fields_5$Run$ ON ((SELECT RunId FROM exp.Data\nWHERE RowId = c69d129_particle_size_result_fields_5.DataId) =\nc69d129_particle_size_result_fields_5$Run$.rowid)\n LEFT OUTER JOIN core.containers\nc69d129_particle_size_result_fields_5$Run$Folder$ ON\n(c69d129_particle_size_result_fields_5$Run$.container =\nc69d129_particle_size_result_fields_5$Run$Folder$.entityid)) PS_2\n INNER JOIN (SELECT Formulations_6.container AS Folder,\nFormulations_6$Folder$.entityid AS Folder_EntityId, Formulations_6.name AS\nName, Formulations_6.rowid AS RowId,\n CAST((SELECT StringValue FROM exp.ObjectProperty WHERE\nexp.ObjectProperty.PropertyId = 560 AND exp.ObjectProperty.ObjectId =\nFormulations_6$LSID$_C.ObjectId) AS VARCHAR(4000))\n AS Type\n FROM (SELECT * FROM exp.material\n WHERE (container IN ('d938da12-1b43-102d-a8a2-78911b79dd1c')) AND\n((cpastype = 'urn:lsid:labkey.com:SampleSet.Folder-69:Formulations')))\nFormulations_6\n LEFT OUTER JOIN core.containers Formulations_6$Folder$ ON\n(Formulations_6.container = Formulations_6$Folder$.entityid)\n LEFT OUTER JOIN exp.object Formulations_6$LSID$_C ON\n(Formulations_6.lsid = Formulations_6$LSID$_C.objecturi) AND\nFormulations_6.Container = 'd938da12-1b43-102d-a8a2-78911b79dd1c') F_3 ON\nPS_2.Run_Name=F_3.Name || '.xls' AND\nPS_2.Run_Folder_EntityId=F_3.Folder_EntityId\n INNER JOIN (SELECT\n CAST((SELECT FloatValue FROM exp.ObjectProperty WHERE\nexp.ObjectProperty.PropertyId = 1288 AND exp.ObjectProperty.ObjectId =\nTimepoints_7.objectid) AS INT)\n AS sort, Timepoints_7.key AS time\n FROM (SELECT * FROM exp.indexvarchar\n WHERE ((listid = 17))) Timepoints_7) T_4 ON PS_2.timelabel=T_4.time)\nPeter_SizeByType ) x\n ORDER BY Name ASC\n LIMIT 101\n\n\nExplain Analyze output with enable_material='on'\n------------------------------------------------------------------------------------------------\nLimit (cost=233190.18..233190.43 rows=101 width=71) (actual\ntime=194078.460..194078.645 rows=101 loops=1)\n -> Sort (cost=233190.18..233190.44 rows=104 width=71) (actual\ntime=194078.457..194078.520 rows=101 loops=1)\n Sort Key: material.name\n Sort Method: top-N heapsort Memory: 39kB\n -> Nested Loop Left Join (cost=3.27..233186.69 rows=104 width=71)\n(actual time=3996.558..193952.126 rows=67044 loops=1)\n Join Filter: ((material.container)::text =\n'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n -> Nested Loop (cost=3.27..232216.91 rows=104 width=155)\n(actual time=3996.433..190230.691 rows=67044 loops=1)\n Join Filter: ((experimentrun.name)::text = ((\nmaterial.name)::text || '.xls'::text))\n -> Index Scan using idx_material_lsid on material\n (cost=0.00..444.90 rows=774 width=96) (actual time=0.251..14.974 rows=1303\nloops=1)\n Filter: (((container)::text =\n'd938da12-1b43-102d-a8a2-78911b79dd1c'::text) AND ((cpastype)::text =\n'urn:lsid:labkey.com:SampleSet.Folder-69:Formulations'::text))\n -> Materialize (cost=3.27..230512.55 rows=93\nwidth=129) (actual time=0.023..70.942 rows=67044 loops=1303)\n -> Nested Loop (cost=3.27..230512.08 rows=93\nwidth=129) (actual time=0.065..3355.484 rows=67044 loops=1)\n -> Nested Loop (cost=0.00..229803.99\nrows=93 width=137) (actual time=0.048..2335.540 rows=67044 loops=1)\n -> Nested Loop 
(cost=0.00..2.33\nrows=1 width=74) (actual time=0.013..0.023 rows=1 loops=1)\n -> Seq Scan on containers\n\"c69d129_particle_size_result_fields_5$run$folder$\" (cost=0.00..1.16 rows=1\nwidth=37) (actual time=0.006..0.009 rows=1 loops=1)\n Filter: ((entityid)::text\n= 'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n -> Seq Scan on containers\n\"formulations_6$folder$\" (cost=0.00..1.16 rows=1 width=37) (actual\ntime=0.002..0.006 rows=1 loops=1)\n Filter:\n((\"formulations_6$folder$\".entityid)::text =\n'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n -> Nested Loop\n (cost=0.00..229800.73 rows=93 width=63) (actual time=0.032..2169.448\nrows=67044 loops=1)\n Join Filter:\n((c69d129_particle_size_result_fields.timelabel)::text =\n(indexvarchar.key)::text)\n -> Seq Scan on\nc69d129_particle_size_result_fields (cost=0.00..229742.02 rows=348\nwidth=59) (actual time=0.016..527.225 rows=69654 loops=1)\n Filter: (((SubPlan\n3))::text = 'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n SubPlan 3\n -> Index Scan using\npk_data on data (cost=0.00..3.27 rows=1 width=37) (actual time=0.003..0.004\nrows=1 loops=69654)\n Index Cond:\n(rowid = $2)\n -> Materialize\n (cost=0.00..1.32 rows=11 width=10) (actual time=0.001..0.010 rows=11\nloops=69654)\n -> Seq Scan on\nindexvarchar (cost=0.00..1.26 rows=11 width=10) (actual time=0.003..0.013\nrows=11 loops=1)\n Filter: (listid =\n17)\n -> Index Scan using pk_experimentrun on\nexperimentrun (cost=3.27..4.33 rows=1 width=74) (actual time=0.004..0.005\nrows=1 loops=67044)\n Index Cond: (experimentrun.rowid =\n(SubPlan 4))\n Filter:\n(((experimentrun.protocollsid)::text ~~\n'urn:lsid:labkey.com:Particle+SizeProtocol.Folder-69:Particle+Size'::text)\nAND ((experimentrun.container)::text = 'd938da12-1b43-102d-a8a\n2-78911b79dd1c'::text))\n SubPlan 4\n -> Index Scan using pk_data on\ndata (cost=0.00..3.27 rows=1 width=4) (actual time=0.003..0.004 rows=1\nloops=67044)\n Index Cond: (rowid = $2)\n SubPlan 4\n -> Index Scan using pk_data on\ndata (cost=0.00..3.27 rows=1 width=4) (actual time=0.003..0.004 rows=1\nloops=67044)\n Index Cond: (rowid = $2)\n -> Index Scan using uq_object on object\n\"formulations_6$lsid$_c\" (cost=0.00..2.69 rows=1 width=64) (actual\ntime=0.033..0.034 rows=1 loops=67044)\n Index Cond: ((material.lsid)::text =\n(\"formulations_6$lsid$_c\".objecturi)::text)\n SubPlan 1\n -> Index Scan using pk_objectproperty on objectproperty\n (cost=0.00..3.31 rows=1 width=13) (actual time=0.005..0.007 rows=1\nloops=67044)\n Index Cond: ((objectid = $0) AND (propertyid = 560))\n SubPlan 2\n -> Index Scan using pk_objectproperty on objectproperty\n (cost=0.00..3.31 rows=1 width=8) (actual time=0.004..0.005 rows=1\nloops=67044)\n Index Cond: ((objectid = $1) AND (propertyid = 1288))\n Total runtime: 194080.893 ms\n\n\nThank you,\n\nBrian Connolly",
"msg_date": "Thu, 5 May 2011 10:43:37 -0700",
"msg_from": "Brian Connolly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor query plan chosen in 9.0.3 vs 8.3.7"
},
{
"msg_contents": "Brian Connolly <[email protected]> writes:\n> Any help that you can provide would be greatly appreciated.\n\nI'd suggest trying to get rid of the weird little subselects, like this\none:\n\n> ... SELECT * FROM assayresult.\"c69d129_particle_size_result_fields\"\n> WHERE (((SELECT Container FROM exp.Data WHERE RowId = DataId) IN\n> ('d938da12-1b43-102d-a8a2-78911b79dd1c'))) ...\n\nIf you turned that into a regular join between\nc69d129_particle_size_result_fields and Data, the planner probably\nwouldn't be nearly as confused about how many rows would result.\nIt's the way-off rowcount estimate for this construct that's\ncausing most of the problem, AFAICS:\n\n -> Seq Scan on c69d129_particle_size_result_fields (cost=0.00..229742.02 rows=348 width=59) (actual time=0.018..572.402 rows=69654 loops=1)\n Filter: (((SubPlan 3))::text = 'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n SubPlan 3\n -> Index Scan using pk_data on data (cost=0.00..3.27 rows=1 width=37) (actual time=0.004..0.005 rows=1 loops=69654)\n Index Cond: (rowid = $2)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 May 2011 16:27:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor query plan chosen in 9.0.3 vs 8.3.7 "
}
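A minimal sketch of the join rewrite Tom Lane suggests above, using the table and column names that appear in the posted query (assumed, not verified against the real schema); the point is to let the planner estimate the container filter from join statistics instead of a correlated subplan:

-- Hedged sketch only: express the container filter as a plain join to exp.data
-- rather than "(SELECT Container FROM exp.Data WHERE RowId = DataId) IN (...)".
-- Names are copied from the posted query and may need adjusting.
SELECT r.*
FROM assayresult."c69d129_particle_size_result_fields" r
JOIN exp.data d ON d.rowid = r.dataid
WHERE d.container = 'd938da12-1b43-102d-a8a2-78911b79dd1c';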
] |
[
{
"msg_contents": "Here is the explain analyze output for when enable_material is 'off' and\ninformation on the postgresql version, settings and server configuration (I\nhad to send a follow up email due the length of email restrictions on the\nmailing list.)\n\n(I apologize for the length of these email messages. And if this message\ndoes not get threaded properly as I did not receive a copy of my original\nmessage to mailing list)\n\n\n\nExplain Analyze output with enable_material='off'\n------------------------------------------------------------------------------------------------\n\nLimit (cost=231785.69..231785.94 rows=101 width=71) (actual\ntime=6616.943..6617.130 rows=101 loops=1)\n -> Sort (cost=231785.69..231785.95 rows=104 width=71) (actual\ntime=6616.940..6617.003 rows=101 loops=1)\n Sort Key: material.name\n Sort Method: top-N heapsort Memory: 39kB\n -> Nested Loop Left Join (cost=239.46..231782.21 rows=104\nwidth=71) (actual time=3.840..6484.883 rows=67044 loops=1)\n Join Filter: ((material.container)::text =\n'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n -> Hash Join (cost=239.46..230812.42 rows=104 width=155)\n(actual time=3.785..2676.643 rows=67044 loops=1)\n Hash Cond: ((experimentrun.name)::text = ((\nmaterial.name)::text || '.xls'::text))\n -> Nested Loop (cost=3.27..230557.06 rows=93\nwidth=129) (actual time=0.170..2502.106 rows=67044 loops=1)\n -> Nested Loop (cost=0.00..229848.97 rows=93\nwidth=137) (actual time=0.153..1368.990 rows=67044 loops=1)\n -> Nested Loop (cost=0.00..2.33 rows=1\nwidth=74) (actual time=0.016..0.025 rows=1 loops=1)\n -> Seq Scan on containers\n\"c69d129_particle_size_result_fields_5$run$folder$\" (cost=0.00..1.16 rows=1\nwidth=37) (actual time=0.010..0.013 rows=1 loops=1)\n Filter: ((entityid)::text =\n'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n -> Seq Scan on containers\n\"formulations_6$folder$\" (cost=0.00..1.16 rows=1 width=37) (actual\ntime=0.003..0.007 rows=1 loops=1)\n Filter:\n((\"formulations_6$folder$\".entityid)::text =\n'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n -> Nested Loop (cost=0.00..229845.71\nrows=93 width=63) (actual time=0.133..1244.558 rows=67044 loops=1)\n -> Seq Scan on\nc69d129_particle_size_result_fields (cost=0.00..229742.02 rows=348\nwidth=59) (actual time=0.018..572.402 rows=69654 loops=1)\n Filter: (((SubPlan 3))::text =\n'd938da12-1b43-102d-a8a2-78911b79dd1c'::text)\n SubPlan 3\n -> Index Scan using pk_data\non data (cost=0.00..3.27 rows=1 width=37) (actual time=0.004..0.005 rows=1\nloops=69654)\n Index Cond: (rowid =\n$2)\n -> Index Scan using pk_indexvarchar\non indexvarchar (cost=0.00..0.29 rows=1 width=10) (actual time=0.004..0.006\nrows=1 loops=69654)\n Index Cond:\n((indexvarchar.listid = 17) AND ((indexvarchar.key)::text =\n(c69d129_particle_size_result_fields.timelabel)::text))\n -> Index Scan using pk_experimentrun on\nexperimentrun (cost=3.27..4.33 rows=1 width=74) (actual time=0.005..0.006\nrows=1 loops=67044)\n Index Cond: (experimentrun.rowid = (SubPlan\n4))\n Filter:\n(((experimentrun.protocollsid)::text ~~\n'urn:lsid:labkey.com:Particle+SizeProtocol.Folder-69:Particle+Size'::text)\nAND ((experimentrun.container)::text = 'd938da12-1b43-102d-a8a2-7891\n1b79dd1c'::text))\n SubPlan 4\n -> Index Scan using pk_data on data\n (cost=0.00..3.27 rows=1 width=4) (actual time=0.004..0.005 rows=1\nloops=67044)\n Index Cond: (rowid = $2)\n SubPlan 4\n -> Index Scan using pk_data on data\n (cost=0.00..3.27 rows=1 width=4) (actual time=0.004..0.005 rows=1\nloops=67044)\n Index Cond: (rowid = $2)\n -> 
Hash (cost=226.51..226.51 rows=774 width=96)\n(actual time=3.587..3.587 rows=1303 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 158kB\n -> Seq Scan on material (cost=0.00..226.51\nrows=774 width=96) (actual time=0.071..2.354 rows=1303 loops=1)\n Filter: (((container)::text =\n'd938da12-1b43-102d-a8a2-78911b79dd1c'::text) AND ((cpastype)::text =\n'urn:lsid:labkey.com:SampleSet.Folder-69:Formulations'::text))\n -> Index Scan using uq_object on object\n\"formulations_6$lsid$_c\" (cost=0.00..2.69 rows=1 width=64) (actual\ntime=0.035..0.036 rows=1 loops=67044)\n Index Cond: ((material.lsid)::text =\n(\"formulations_6$lsid$_c\".objecturi)::text)\n SubPlan 1\n -> Index Scan using pk_objectproperty on objectproperty\n (cost=0.00..3.31 rows=1 width=13) (actual time=0.005..0.006 rows=1\nloops=67044)\n Index Cond: ((objectid = $0) AND (propertyid = 560))\n SubPlan 2\n -> Index Scan using pk_objectproperty on objectproperty\n (cost=0.00..3.31 rows=1 width=8) (actual time=0.003..0.004 rows=1\nloops=67044)\n Index Cond: ((objectid = $1) AND (propertyid = 1288))\n Total runtime: 6617.360 ms\n\n\n\nOther Information\n------------------------------------------------------------------------------------------------\n\n - Version: PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC\n gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit\n - Operating System: Ubuntu 10.04 LTS\n - uname -a = Linux hostname 2.6.32-305-ec2 #9-Ubuntu SMP Thu Apr 15\n 08:05:38 UTC 2010 x86_64 GNU/Linux\n - Server Information\n - 4 cores\n - 8GB of RAM\n - PostgreSQL Configuration Information\n - max_connections = 50\n - shared_buffers = 2048MB\n - work_mem = 20MB\n - maintenance_work_mem = 1024MB\n - wal_buffers = 4MB\n - checkpoint_segments = 10\n - checkpoint_timeout = 15min\n - random_page_cost = 1.5 and 4.0 (tested with both default and\n non-default)\n - effective_cache_size = 6144MB\n - join_collapse_limit = 10\n - autovacuum = on\n - All other settings default\n - Vacuum and Analyze is run nightly.\n\n\n\nThank you for any help you might be able to provide.\n\nBrian Connolly\n",
"msg_date": "Thu, 5 May 2011 12:54:25 -0700",
"msg_from": "Brian Connolly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor query plan chosen in 9.0.3 vs 8.3.7"
},
{
"msg_contents": "Hey Brian,\n\nBrian Connolly wrote:\n> (I had to send a follow up email due the length of email restrictions on the\n> mailing list.)\n\nA tip for when you have this problem in the future -- turn off html mail.\nIt will reduce your email message length by 50% - 90%.\n\nHTH\n\nBosco.\n",
"msg_date": "Thu, 05 May 2011 13:01:05 -0700",
"msg_from": "Bosco Rama <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor query plan chosen in 9.0.3 vs 8.3.7"
}
] |
[
{
"msg_contents": "i have around 25mio records of data distributed yearly over 9 child \ntables (data.logs_20xx) that inherit from the master table data.logs. \nthe tables are partitioned using the field \"re_timestamp\", which has \nbtree indexes defined on all tables.\n\nthe query \"SELECT * FROM data.logs ORDER BY re_timestamp DESC LIMIT 100\" \ndoes use seq scans on all tables instead of using the existing indexes \nwhich takes ages. when issuing the the same query to one of the child \ntables directly (\"SELECT * FROM data.logs_2011 ORDER BY re_timestamp \nDESC LIMIT 100\") the index is used as expected and the data returned \nquickly.\n\nhow can i get postgres to use the indexes when querying the master table?\n\nplease find below the EXPLAIN ANALYZE output for both queries on my \ndevelopment machine (pgsql 9.0 x64 on windows 7).\n\nthanks in advance,\nthomas\n\n\nEXPLAIN ANALYZE SELECT * FROM data.logs\nORDER BY re_timestamp DESC LIMIT 100;\n\nLimit (cost=6331255.90..6331256.15 rows=100 width=1388) (actual \ntime=1592287.794..1592287.808 rows=100 loops=1)\n -> Sort (cost=6331255.90..6395928.37 rows=25868986 width=1388) \n(actual time=1592287.789..1592287.796 rows=100 loops=1)\n Sort Key: data.logs.re_timestamp\n Sort Method: top-N heapsort Memory: 217kB\n -> Result (cost=0.00..5342561.86 rows=25868986 width=1388) \n(actual time=0.026..1466420.868 rows=25870101 loops=1)\n -> Append (cost=0.00..5342561.86 rows=25868986 \nwidth=1388) (actual time=0.020..1417490.892 rows=25870101 loops=1)\n -> Seq Scan on logs (cost=0.00..10.40 rows=40 \nwidth=1776) (actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on logs_2011 logs \n(cost=0.00..195428.00 rows=904800 width=1449) (actual \ntime=0.017..92381.769 rows=904401 loops=1)\n -> Seq Scan on logs_2010 logs \n(cost=0.00..759610.67 rows=3578567 width=1426) (actual \ntime=23.996..257612.143 rows=3579586 loops=1)\n -> Seq Scan on logs_2009 logs \n(cost=0.00..841998.35 rows=3987235 width=1423) (actual \ntime=12.921..200385.903 rows=3986473 loops=1)\n -> Seq Scan on logs_2008 logs \n(cost=0.00..942810.60 rows=4409860 width=1444) (actual \ntime=18.861..226867.499 rows=4406653 loops=1)\n -> Seq Scan on logs_2007 logs \n(cost=0.00..730863.69 rows=3600569 width=1359) (actual \ntime=14.406..174082.413 rows=3603739 loops=1)\n -> Seq Scan on logs_2006 logs \n(cost=0.00..620978.29 rows=3089929 width=1348) (actual \ntime=21.647..147244.677 rows=3091214 loops=1)\n -> Seq Scan on logs_2005 logs \n(cost=0.00..486928.59 rows=2440959 width=1342) (actual \ntime=0.005..126479.314 rows=2438968 loops=1)\n -> Seq Scan on logs_2004 logs \n(cost=0.00..402991.92 rows=2031092 width=1327) (actual \ntime=23.007..98531.883 rows=2034041 loops=1)\n -> Seq Scan on logs_2003 logs \n(cost=0.00..360941.35 rows=1825935 width=1325) (actual \ntime=20.220..91773.705 rows=1825026 loops=1)\nTotal runtime: 1592293.267 ms\n\n\nEXPLAIN ANALYZE SELECT * FROM data.logs_2011\nORDER BY re_timestamp DESC LIMIT 100;\n\nLimit (cost=0.00..22.65 rows=100 width=1449) (actual \ntime=59.161..60.226 rows=100 loops=1)\n -> Index Scan Backward using logs_fts_2011_timestamp_idx on \nlogs_2011 (cost=0.00..204919.30 rows=904800 width=1449) (actual \ntime=59.158..60.215 rows=100 loops=1)\nTotal runtime: 60.316 ms\n",
"msg_date": "Fri, 06 May 2011 23:13:20 +0200",
"msg_from": "=?UTF-8?B?VGhvbWFzIEjDpGdp?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "indexes ignored when querying the master table"
},
{
"msg_contents": "* Thomas Hägi:\n\n> how can i get postgres to use the indexes when querying the master\n> table?\n\nI believe that this is a new feature in PostgreSQL 9.1 (\"Allow\ninheritance table queries to return meaningfully-sorted results\").\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Mon, 09 May 2011 07:57:18 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: indexes ignored when querying the master table"
},
{
"msg_contents": "On 05/06/2011 05:13 PM, Thomas Hägi wrote:\n> the query \"SELECT * FROM data.logs ORDER BY re_timestamp DESC LIMIT \n> 100\" does use seq scans on all tables instead of using the existing \n> indexes which takes ages. when issuing the the same query to one of \n> the child tables directly (\"SELECT * FROM data.logs_2011 ORDER BY \n> re_timestamp DESC LIMIT 100\") the index is used as expected and the \n> data returned quickly.\n>\n\nLet's see, cut and paste \nhttp://archives.postgresql.org/message-id/[email protected] \nand:\n\nThis is probably the limitation that's fixed in PostgreSQL 9.1 by this \ncommit (following a few others leading up to it): \nhttp://archives.postgresql.org/pgsql-committers/2010-11/msg00028.php\n\nThere was a good example showing what didn't work as expected before \n(along with an earlier patch that didn't everything the larger 9.1 \nimprovement does) at \nhttp://archives.postgresql.org/pgsql-hackers/2009-07/msg01115.php ; \n\"ORDER BY x DESC LIMIT 1\" returns the same things as MAX(x).\n\nIt's a pretty serious issue with the partitioning in earlier versions. I \nknow of multiple people, myself included, who have been compelled to \napply this change to an earlier version of PostgreSQL to make larger \npartitioned databases work correctly. The other option is to manually \ndecompose the queries into ones that target each of the child tables \nindividually, then combine the results, which is no fun either.\n\n(Am thinking about a documentation backpatch pointing out this limitation)\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Mon, 09 May 2011 07:45:55 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: indexes ignored when querying the master table"
},
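For servers still on pre-9.1 releases, a rough sketch of the manual decomposition Greg mentions, using the child-table names from Thomas's plan (only two children shown; the same pattern repeats for the remaining partitions and the parent):

SELECT * FROM (
    (SELECT * FROM data.logs_2011 ORDER BY re_timestamp DESC LIMIT 100)
    UNION ALL
    (SELECT * FROM data.logs_2010 ORDER BY re_timestamp DESC LIMIT 100)
    -- ... repeat for data.logs_2009 through data.logs_2003, and ONLY data.logs ...
) AS per_child
ORDER BY re_timestamp DESC
LIMIT 100;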
{
"msg_contents": "hi florian\n\nsorry for the late reply - it took almost a day to dump & reload the \ndata into 9.1b1.\n\n>> how can i get postgres to use the indexes when querying the master\n>> table?\n>\n> I believe that this is a new feature in PostgreSQL 9.1 (\"Allow\n> inheritance table queries to return meaningfully-sorted results\").\n\nyou are right, pgsql 9.1 indeed makes use of the indexes now:\n\nEXPLAIN ANALYZE SELECT * FROM data.logs\nORDER BY re_timestamp DESC LIMIT 100;\n--------\nLimit (cost=11.63..36.45 rows=100 width=1390) (actual time=0.169..0.639 \nrows=100 loops=1)\n -> Result (cost=11.63..6421619.07 rows=25870141 width=1390) (actual \ntime=0.154..0.610 rows=100 loops=1)\n -> Merge Append (cost=11.63..6421619.07 rows=25870141 \nwidth=1390) (actual time=0.150..0.429 rows=100 loops=1)\n Sort Key: data.logs.re_timestamp\n -> Sort (cost=11.46..11.56 rows=40 width=1776) (actual \ntime=0.014..0.014 rows=0 loops=1)\n Sort Key: data.logs.re_timestamp\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on logs (cost=0.00..10.40 rows=40 \nwidth=1776) (actual time=0.003..0.003 rows=0 loops=1)\n -> Index Scan Backward using logs_2003_timestamp_idx on \nlogs_2003 logs (cost=0.00..373508.47 rows=1825026 width=1327) (actual \ntime=0.026..0.026 rows=1 loops=1)\n -> Index Scan Backward using logs_2004_timestamp_idx on \nlogs_2004 logs (cost=0.00..417220.55 rows=2034041 width=1327) (actual \ntime=0.012..0.012 rows=1 loops=1)\n -> Index Scan Backward using logs_2005_timestamp_idx on \nlogs_2005 logs (cost=0.00..502664.57 rows=2438968 width=1345) (actual \ntime=0.015..0.015 rows=1 loops=1)\n -> Index Scan Backward using logs_2006_timestamp_idx on \nlogs_2006 logs (cost=0.00..640419.01 rows=3091214 width=1354) (actual \ntime=0.015..0.015 rows=1 loops=1)\n -> Index Scan Backward using logs_2007_timestamp_idx on \nlogs_2007 logs (cost=0.00..752875.00 rows=3603739 width=1369) (actual \ntime=0.009..0.009 rows=1 loops=1)\n -> Index Scan Backward using logs_2008_timestamp_idx on \nlogs_2008 logs (cost=0.00..969357.51 rows=4406653 width=1440) (actual \ntime=0.007..0.007 rows=1 loops=1)\n -> Index Scan Backward using logs_2009_timestamp_idx on \nlogs_2009 logs (cost=0.00..862716.39 rows=3986473 width=1422) (actual \ntime=0.016..0.016 rows=1 loops=1)\n -> Index Scan Backward using logs_2010_timestamp_idx on \nlogs_2010 logs (cost=0.00..778529.29 rows=3579586 width=1426) (actual \ntime=0.009..0.009 rows=1 loops=1)\n -> Index Scan Backward using logs_2011_timestamp_idx on \nlogs_2011 logs (cost=0.00..200253.71 rows=904401 width=1453) (actual \ntime=0.006..0.089 rows=100 loops=1)\nTotal runtime: 1.765 ms\n\n\nthanks for your help,\nthomas\n",
"msg_date": "Tue, 10 May 2011 11:20:34 +0200",
"msg_from": "=?UTF-8?B?VGhvbWFzIEjDpGdp?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: indexes ignored when querying the master table"
}
] |
[
{
"msg_contents": "Dear list,\n\nI have a table with a few million rows and this index:\nCREATE INDEX bond_item_common_x7 ON bond_item_common \n((lower(original_filename)));\n\nThere are about 2M rows on bonddump and 4M rows on bond90.\n\nbonddump is on a 8MB RAM machine, bond90 is on a 72MB RAM machine.\n\nThe table is analyzed properly both places.\n\nI'm an index hint zealot, but aware of our different stances in the \nmatter. :)\n\nDropping the wildcard for the like, both databases uses the index.\n\nIs there a way to convince Postgres to try not to do full table scan as \nmuch? This is just one of several examples when it happily spends lots \nof time sequentially going thru tables.\n\nThanks,\nMarcus\n\n\n\n\npsql (9.0.4)\nType \"help\" for help.\n\nbonddump=# explain analyze select pic2.objectid\nbonddump-# from bond_item_common pic2\nbonddump-# where\nbonddump-# lower(pic2.original_filename) like 'this is a \ntest%' ;\n QUERY \nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using bond_item_common_x7 on bond_item_common pic2 \n(cost=0.01..8.69 rows=208 width=4) (actual time=26.415..26.415 rows=0 \nloops=1)\n Index Cond: ((lower((original_filename)::text) >= 'this is a \ntest'::text) AND (lower((original_filename)::text) < 'this is a \ntesu'::text))\n Filter: (lower((original_filename)::text) ~~ 'this is a test%'::text)\n Total runtime: 26.519 ms\n(4 rows)\n\n\n\n\npsql (9.0.4)\nbond90=> explain analyze select pic2.objectid\nbond90-> from bond_item_common pic2\nbond90-> where\nbond90-> lower(pic2.original_filename) like 'this is a test%' ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on bond_item_common pic2 (cost=0.00..839226.81 rows=475 \nwidth=4) (actual time=10599.401..10599.401 rows=0 loops=1)\n Filter: (lower((original_filename)::text) ~~ 'this is a test%'::text)\n Total runtime: 10599.425 ms\n(3 rows)\n\n",
"msg_date": "Mon, 09 May 2011 20:06:32 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "wildcard makes seq scan on prod db but not in test"
},
{
"msg_contents": "Marcus Engene <[email protected]> writes:\n> There are about 2M rows on bonddump and 4M rows on bond90.\n> bonddump is on a 8MB RAM machine, bond90 is on a 72MB RAM machine.\n> The table is analyzed properly both places.\n\nI'll bet one database was initialized in C locale and the other not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 May 2011 14:48:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wildcard makes seq scan on prod db but not in test "
},
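One quick way to check Tom's guess on both servers (a sketch; datcollate and datctype exist in 8.4 and later, so they are available on these 9.0.4 installs):

SHOW lc_collate;
SELECT datname, datcollate, datctype FROM pg_database;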
{
"msg_contents": "Marcus Engene <[email protected]> wrote:\n \n> I have a table with a few million rows and this index:\n> CREATE INDEX bond_item_common_x7 ON bond_item_common \n> ((lower(original_filename)));\n \n> Dropping the wildcard for the like, both databases uses the index.\n> \n> Is there a way to convince Postgres to try not to do full table\n> scan as much?\n \nThat could be a difference is collations. What do you get from the\nquery on this page for each database?:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \n-Kevin\n",
"msg_date": "Mon, 09 May 2011 13:57:07 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wildcard makes seq scan on prod db but not in\n\t test"
},
{
"msg_contents": "On 5/9/11 8:57 , Kevin Grittner wrote:\n>\n> That could be a difference is collations. What do you get from the\n> query on this page for each database?:\n>\n> http://wiki.postgresql.org/wiki/Server_Configuration\n>\n> -Kevin\n>\n> \nThere's indeed a different collation. Why is this affecting? Can i force \na column to be ascii?\n\nThe (fast) test server:\n version | PostgreSQL 9.0.4 on x86_64-apple-darwin10.7.0, \ncompiled by GCC i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. \nbuild 5664), 64-bit\n effective_cache_size | 512MB\n lc_collate | C\n lc_ctype | UTF-8\n maintenance_work_mem | 128MB\n max_connections | 100\n max_stack_depth | 2MB\n port | 5435\n server_encoding | UTF8\n shared_buffers | 512MB\n temp_buffers | 8192\n TimeZone | Europe/Zurich\n wal_buffers | 1MB\n work_mem | 128MB\n(14 rows)\n\nThe (slow) production server:\n version | PostgreSQL 9.0.4 on \nx86_64-unknown-linux-gnu, compiled by\nGCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit\n checkpoint_completion_target | 0.9\n checkpoint_segments | 64\n effective_cache_size | 48GB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | localhost,10.0.0.3,74.50.57.76\n maintenance_work_mem | 1GB\n max_connections | 600\n max_stack_depth | 2MB\n port | 5435\n server_encoding | UTF8\n shared_buffers | 8GB\n temp_buffers | 32768\n TimeZone | UTC\n work_mem | 128MB\n(16 rows)\n\nThanks,\nMarcus\n\n",
"msg_date": "Mon, 09 May 2011 21:11:00 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wildcard makes seq scan on prod db but not in\t test"
},
{
"msg_contents": "Marcus Engene <[email protected]> wrote:\n> On 5/9/11 8:57 , Kevin Grittner wrote:\n>>\n>> That could be a difference is collations. What do you get from\n>> the query on this page for each database?:\n>>\n>> http://wiki.postgresql.org/wiki/Server_Configuration\n \n> There's indeed a different collation. Why is this affecting?\n \nIf the index isn't sorted in an order which leaves the rows you are\nrequesting near one another, it's not very useful for the query. \nTry this query on both:\n \ncreate temp table order_example (val text);\ninsert into order_example values ('a z'),('ab'),('123'),(' 456');\nselect * from order_example order by val;\n \n> Can i force a column to be ascii?\n \nYou don't need to do that; you can specify an opclass for the index\nto tell it that you don't want to order by the normal collation, but\nrather in a way which will allow the index to be useful for pattern\nmatching:\n \nhttp://www.postgresql.org/docs/9.0/interactive/indexes-opclass.html\n \n> The (fast) test server:\n \n> effective_cache_size | 512MB\n> lc_collate | C\n> lc_ctype | UTF-8\n> maintenance_work_mem | 128MB\n> max_connections | 100\n> server_encoding | UTF8\n> shared_buffers | 512MB\n> temp_buffers | 8192\n> TimeZone | Europe/Zurich\n> wal_buffers | 1MB\n \n> The (slow) production server:\n \n> checkpoint_completion_target | 0.9\n> checkpoint_segments | 64\n> effective_cache_size | 48GB\n> lc_collate | en_US.UTF-8\n> lc_ctype | en_US.UTF-8\n> listen_addresses | localhost,10.0.0.3,74.50.57.76\n> maintenance_work_mem | 1GB\n> max_connections | 600\n> server_encoding | UTF8\n> shared_buffers | 8GB\n> temp_buffers | 32768\n> TimeZone | UTC\n \nAs you've discovered, with that many differences, performance tests\non one machine may have very little to do with actual performance on\nthe other.\n \n-Kevin\n",
"msg_date": "Mon, 09 May 2011 14:59:21 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wildcard makes seq scan on prod db but not in\t\n\t test"
},
{
"msg_contents": "On 5/9/11 9:59 , Kevin Grittner wrote:\n>\n> You don't need to do that; you can specify an opclass for the index\n> to tell it that you don't want to order by the normal collation, but\n> rather in a way which will allow the index to be useful for pattern\n> matching:\n>\n> http://www.postgresql.org/docs/9.0/interactive/indexes-opclass.html\n> -Kevin\n>\n> \n\nHi,\n\nThanks for the explanation. Works brilliantly!\n\nBest regards,\nMarcus\n\n\nFor future googlers:\n\nhttp://www.postgresonline.com/journal/archives/78-Why-is-my-index-not-being-used.html\n\ndrop index bond_item_common_x7;\n\nCREATE INDEX bond_item_common_x7 ON bond_item_common USING \nbtree(lower(original_filename) varchar_pattern_ops);\n\nbond90=> explain analyze\nselect pic2.objectid\nfrom bond_item_common pic2\nwhere\n lower(pic2.original_filename) like 'this is a test%' ;\n QUERY PLAN\n--------------------------------------------------------------...\n Bitmap Heap Scan on bond_item_common pic2 (cost=705.84..82746.05 \nrows=23870 width=4) (actual time=0.015..0.015 rows=0 loops=1)\n Filter: (lower((original_filename)::text) ~~ 'this is a test%'::text)\n -> Bitmap Index Scan on bond_item_common_x7 (cost=0.00..699.87 \nrows=23870 width=0) (actual time=0.014..0.014 rows=0 loops=1)\n Index Cond: ((lower((original_filename)::text) ~>=~ 'this is a \ntest'::text) AND (lower((original_filename)::text) ~<~ 'this is a \ntesu'::text))\n Total runtime: 0.033 ms\n\n",
"msg_date": "Mon, 09 May 2011 23:25:30 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wildcard makes seq scan on prod db but not in\t\t test"
}
] |
[
{
"msg_contents": "I've got a fun problem.\n\nMy employer just purchased some new db servers that are very large. The\nspecs on them are:\n\n4 Intel X7550 CPU's (32 physical cores, HT turned off)\n1 TB Ram\n1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)\n3TB Sas Array (48 15K 146GB spindles)\n\nThe issue we are running into is how do we benchmark this server,\nspecifically, how do we get valid benchmarks for the Fusion IO card?\n Normally to eliminate the cache effect, you run iozone and other benchmark\nsuites at 2x the ram. However, we can't do that due to 2TB > 1.3TB.\n\nSo, does anyone have any suggestions/experiences in benchmarking storage\nwhen the storage is smaller then 2x memory?\n\nThanks,\n\nChris\n\nI've got a fun problem.My employer just purchased some new db servers that are very large. The specs on them are:4 Intel X7550 CPU's (32 physical cores, HT turned off)\n1 TB Ram1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)3TB Sas Array (48 15K 146GB spindles)The issue we are running into is how do we benchmark this server, specifically, how do we get valid benchmarks for the Fusion IO card? Normally to eliminate the cache effect, you run iozone and other benchmark suites at 2x the ram. However, we can't do that due to 2TB > 1.3TB. \nSo, does anyone have any suggestions/experiences in benchmarking storage when the storage is smaller then 2x memory? Thanks,Chris",
"msg_date": "Mon, 9 May 2011 16:32:27 -0400",
"msg_from": "Chris Hoover <[email protected]>",
"msg_from_op": true,
"msg_subject": "Benchmarking a large server"
},
{
"msg_contents": "On Mon, May 9, 2011 at 3:32 PM, Chris Hoover <[email protected]> wrote:\n> I've got a fun problem.\n> My employer just purchased some new db servers that are very large. The\n> specs on them are:\n> 4 Intel X7550 CPU's (32 physical cores, HT turned off)\n> 1 TB Ram\n> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)\n> 3TB Sas Array (48 15K 146GB spindles)\n\nmy GOODNESS! :-D. I mean, just, wow.\n\n> The issue we are running into is how do we benchmark this server,\n> specifically, how do we get valid benchmarks for the Fusion IO card?\n> Normally to eliminate the cache effect, you run iozone and other benchmark\n> suites at 2x the ram. However, we can't do that due to 2TB > 1.3TB.\n> So, does anyone have any suggestions/experiences in benchmarking storage\n> when the storage is smaller then 2x memory?\n\nhm, if it was me, I'd write a small C program that just jumped\ndirectly on the device around and did random writes assuming it wasn't\nformatted. For sequential read, just flush caches and dd the device\nto /dev/null. Probably someone will suggest better tools though.\n\nmerlin\n",
"msg_date": "Mon, 9 May 2011 15:50:51 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "\n> hm, if it was me, I'd write a small C program that just jumped\n> directly on the device around and did random writes assuming it wasn't\n> formatted. For sequential read, just flush caches and dd the device\n> to /dev/null. Probably someone will suggest better tools though.\nI have a program I wrote years ago for a purpose like this. One of the \nthings it can\ndo is write to the filesystem at the same time as dirtying pages in a \nlarge shared\nor non-shared memory region. The idea was to emulate the behavior of a \ndatabase\nreasonably accurately. Something like bonnie++ would probably be a good \nstarting\npoint these days though.\n\n\n",
"msg_date": "Mon, 09 May 2011 14:59:01 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On May 9, 2011, at 1:32 PM, Chris Hoover wrote:\n\n> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)\n\nBe careful here. What if the entire card hiccups, instead of just a device on it? (We've had that happen to us before.) Depending on how you've done your raid 10, either all your parity is gone or your data is.",
"msg_date": "Mon, 9 May 2011 13:59:47 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On 05/09/2011 03:32 PM, Chris Hoover wrote:\n\n> So, does anyone have any suggestions/experiences in benchmarking storage\n> when the storage is smaller then 2x memory?\n\nWe had a similar problem when benching our FusionIO setup. What I did \nwas write a script that cleared out the Linux system cache before every \niteration of our pgbench tests. You can do that easily with:\n\necho 3 > /proc/sys/vm/drop_caches\n\nExecuted as root.\n\nThen we ran short (10, 20, 30, 40 clients, 10,000 transactions each) \npgbench tests, resetting the cache and the DB after every iteration. It \nwas all automated in a script, so it wasn't too much work.\n\nWe got (roughly) a 15x speed improvement over a 6x15k RPM RAID-10 setup \non the same server, with no other changes. This was definitely \ncorroborated after deployment, when our frequent periods of 100% disk IO \nutilization vanished and were replaced by occasional 20-30% spikes. Even \nthat's an unfair comparison in favor of the RAID, because we added DRBD \nto the mix because you can't share a PCI card between two servers.\n\nIf you do have two 1.3TB Duo cards in a 4x640GB RAID-10, you should get \neven better read times than we did.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Mon, 9 May 2011 16:01:26 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On Mon, May 9, 2011 at 3:59 PM, David Boreham <[email protected]> wrote:\n>\n>> hm, if it was me, I'd write a small C program that just jumped\n>> directly on the device around and did random writes assuming it wasn't\n>> formatted. For sequential read, just flush caches and dd the device\n>> to /dev/null. Probably someone will suggest better tools though.\n>\n> I have a program I wrote years ago for a purpose like this. One of the\n> things it can\n> do is write to the filesystem at the same time as dirtying pages in a large\n> shared\n> or non-shared memory region. The idea was to emulate the behavior of a\n> database\n> reasonably accurately. Something like bonnie++ would probably be a good\n> starting\n> point these days though.\n\nThe problem with bonnie++ is that the results aren't valid, especially\nthe read tests. I think it refuses to even run unless you set special\nswitches.\n\nmerlin\n",
"msg_date": "Mon, 9 May 2011 16:11:55 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On 5/9/2011 3:11 PM, Merlin Moncure wrote:\n> The problem with bonnie++ is that the results aren't valid, especially\n> the read tests. I think it refuses to even run unless you set special\n> switches.\n\nI only care about writes ;)\n\nBut definitely, be careful with the tools. I tend to prefer small \nprograms written in house myself,\nand of course simply running your application under a synthesized load.\n\n\n\n\n",
"msg_date": "Mon, 09 May 2011 15:13:53 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On 05/09/2011 04:32 PM, Chris Hoover wrote:\n> So, does anyone have any suggestions/experiences in benchmarking \n> storage when the storage is smaller then 2x memory? \n\nIf you do the Linux trick to drop its caches already mentioned, you can \nstart a database test with zero information in memory. In that \nsituation, whether or not everything could fit in RAM doesn't matter as \nmuch; you're starting with none of it in there. In that case, you can \nbenchmark things without having twice as much disk space. You just have \nto recognize that the test become less useful the longer you run it, and \nmeasure the results accordingly.\n\nA test starting from that state will start out showing you random I/O \nspeed on the device, slowing moving toward in-memory cached speeds as \nthe benchmark runs for a while. You really need to capture the latency \ndata for every transaction and graph it over time to make any sense of \nit. If you look at \"Using and Abusing pgbench\" at \nhttp://projects.2ndquadrant.com/talks , starting on P33 I have several \nslides showing such a test, done with pgbench and pgbench-tools. I \nadded a quick hack to pgbench-tools around then to make it easier to run \nthis specific type of test, but to my knowledge no one else has ever \nused it. (I've had talks about PostgreSQL in my yard that were better \nattended than that session, for which I blame Jonah Harris for doing a \ngreat talk in the room next door concurrent with it.)\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Mon, 09 May 2011 18:29:35 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "2011/5/9 Chris Hoover <[email protected]>:\n> I've got a fun problem.\n> My employer just purchased some new db servers that are very large. The\n> specs on them are:\n> 4 Intel X7550 CPU's (32 physical cores, HT turned off)\n> 1 TB Ram\n> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)\n> 3TB Sas Array (48 15K 146GB spindles)\n> The issue we are running into is how do we benchmark this server,\n> specifically, how do we get valid benchmarks for the Fusion IO card?\n> Normally to eliminate the cache effect, you run iozone and other benchmark\n> suites at 2x the ram. However, we can't do that due to 2TB > 1.3TB.\n> So, does anyone have any suggestions/experiences in benchmarking storage\n> when the storage is smaller then 2x memory?\n\nYou can reduce the memory size on server boot.\nIf you use linux, you can add a 'mem=512G' to your boot time\nparameters. (maybe it supports only K or M, so 512*1024...)\n\n> Thanks,\n> Chris\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Tue, 10 May 2011 01:52:01 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "\n2011/5/9 Chris Hoover<[email protected]>:\n\n> I've got a fun problem.\n> My employer just purchased some new db servers that are very large. The\n> specs on them are:\n> 4 Intel X7550 CPU's (32 physical cores, HT turned off)\n> 1 TB Ram\n> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)\n> 3TB Sas Array (48 15K 146GB spindles)\n> The issue we are running into is how do we benchmark this server,\n> specifically, how do we get valid benchmarks for the Fusion IO card?\n> Normally to eliminate the cache effect, you run iozone and other benchmark\n> suites at 2x the ram. However, we can't do that due to 2TB> 1.3TB.\n> So, does anyone have any suggestions/experiences in benchmarking storage\n> when the storage is smaller then 2x memory?\nMaybe this is a dumb question, but why do you care? If you have 1TB RAM and just a little more actual disk space, it seems like your database will always be cached in memory anyway. If you \"eliminate the cach effect,\" won't the benchmark actually give you the wrong real-life results?\n\nCraig\n\n",
"msg_date": "Mon, 09 May 2011 17:32:19 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On 5/9/2011 6:32 PM, Craig James wrote:\n> Maybe this is a dumb question, but why do you care? If you have 1TB \n> RAM and just a little more actual disk space, it seems like your \n> database will always be cached in memory anyway. If you \"eliminate \n> the cach effect,\" won't the benchmark actually give you the wrong \n> real-life results?\n\nThe time it takes to populate the cache from a cold start might be \nimportant.\n\nAlso, if it were me, I'd be wanting to check for weird performance \nbehavior at this memory scale.\nI've seen cases in the past where the VM subsystem went bananas because \nthe designers\nand testers of its algorithms never considered the physical memory size \nwe deployed.\n\nHow many times was the kernel tested with this much memory, for example \n? (never??)\n\n\n",
"msg_date": "Mon, 09 May 2011 18:38:29 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "Craig James wrote:\n> Maybe this is a dumb question, but why do you care? If you have 1TB \n> RAM and just a little more actual disk space, it seems like your \n> database will always be cached in memory anyway. If you \"eliminate \n> the cach effect,\" won't the benchmark actually give you the wrong \n> real-life results?\n\nIf you'd just spent what two FusionIO drives cost, you'd want to make \ndamn sure they worked as expected too. Also, if you look carefully, \nthere is more disk space than this on the server, just not on the SSDs. \nIt's possible this setup could end up with most of RAM filled with data \nthat's stored on the regular drives. In that case the random \nperformance of the busy SSD would be critical. It would likely take a \nvery bad set of disk layout choices for that to happen, but I could see \nheavy sequential scans of tables in a data warehouse pushing in that \ndirection.\n\nIsolating out the SSD performance without using the larger capacity of \nthe regular drives on the server is an excellent idea here, it's just \ntricky to do.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Mon, 09 May 2011 20:45:41 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On Mon, 9 May 2011, David Boreham wrote:\n\n> On 5/9/2011 6:32 PM, Craig James wrote:\n>> Maybe this is a dumb question, but why do you care? If you have 1TB RAM \n>> and just a little more actual disk space, it seems like your database will \n>> always be cached in memory anyway. If you \"eliminate the cach effect,\" \n>> won't the benchmark actually give you the wrong real-life results?\n>\n> The time it takes to populate the cache from a cold start might be important.\n\nyou may also have other processes that will be contending with the disk \nbuffers for memory (for that matter, postgres may use a significant amount \nof that memory as it's producing it's results)\n\nDavid Lang\n\n> Also, if it were me, I'd be wanting to check for weird performance behavior \n> at this memory scale.\n> I've seen cases in the past where the VM subsystem went bananas because the \n> designers\n> and testers of its algorithms never considered the physical memory size we \n> deployed.\n>\n> How many times was the kernel tested with this much memory, for example ? \n> (never??)\n>\n>\n>\n>\n",
"msg_date": "Mon, 9 May 2011 17:46:14 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "> How many times was the kernel tested with this much memory, for example\n> ? (never??)\n\nThis is actually *extremely* relevant.\n\nTake a look at /proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio if you have an older Linux system, or /proc/sys/vm/dirty_bytes, and /proc/sys/vm/dirty_background_bytes with a newer one.\n\nOn older systems for instance, those are set to 40 and 20 respectively (recent kernels cut these in half). That's significant because ratio is the *percentage* of memory that can remain dirty before causing async, and background_ratio tells it when it should start writing in the background to avoid hitting that higher and much more disruptive number. This is another source of IO that can be completely independent of the checkpoint spikes that long plagued PostgreSQL versions prior to 8.3.\n\nWith that much memory (1TB!), that's over 100GB of dirty memory before it starts writing that out to disk even with the newer conservative settings. We had to tweak and test for days to find good settings for these, and our servers only have 96GB of RAM. You also have to consider, as fast as the FusionIO drives are, they're still NVRAM, which has write-amplification issues. How fast do you think it can commit 100GB of dirty memory to disk? Even with a background setting of 1%, that's 10GB on your system. \n\nThat means you'd need to use a very new kernel so you can utilize the dirty_bytes and dirty_background_bytes settings so you can force those settings into more sane levels to avoid unpredictable several-minute long asyncs. I'm not sure how much testing Linux sees on massive hardware like that, but that's just one hidden danger of not properly benchmarking the server and just thinking 1TB of memory and caching the entire dataset is only an improvement.\n\n--\nShaun Thomas\nPeak6 | 141 W. Jackson Blvd. | Suite 800 | Chicago, IL 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Tue, 10 May 2011 03:13:43 +0000",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On 05/09/2011 11:13 PM, Shaun Thomas wrote:\n> Take a look at /proc/sys/vm/dirty_ratio and \n> /proc/sys/vm/dirty_background_ratio if you have an older Linux system, \n> or /proc/sys/vm/dirty_bytes, and /proc/sys/vm/dirty_background_bytes \n> with a newer one.\n> On older systems for instance, those are set to 40 and 20 respectively (recent kernels cut these in half).\n\n1/4 actually; 10% and 5% starting in kernel 2.6.22. The main sources of \nthis on otherwise new servers I see are RedHat Linux RHEL5 systems \nrunning 2.6.18. But as you say, even the lower defaults of the newer \nkernels can be way too much on a system with lots of RAM.\n\nThe main downside I've seen of addressing this by using a kernel with \ndirty_bytes and dirty_background_bytes is that VACUUM can slow down \nconsiderably. It really relies on the filesystem having a lot of write \ncache to perform well. In many cases people are happy with VACUUM \nthrottling if it means nasty I/O spikes go away, but the trade-offs here \nare still painful at times.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 10 May 2011 00:44:33 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On 2011-05-09 22:32, Chris Hoover wrote:\n>\n> The issue we are running into is how do we benchmark this server, \n> specifically, how do we get valid benchmarks for the Fusion IO card? \n> Normally to eliminate the cache effect, you run iozone and other \n> benchmark suites at 2x the ram. However, we can't do that due to 2TB \n> > 1.3TB.\n>\n> So, does anyone have any suggestions/experiences in benchmarking \n> storage when the storage is smaller then 2x memory?\n\nOracle's Orion test tool has a configurable cache size parameter - it's \na separate download and specifically written to benchmark database oltp \nand olap like io patterns, see \nhttp://www.oracle.com/technetwork/topics/index-089595.html\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n",
"msg_date": "Tue, 10 May 2011 08:58:21 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "On Mon, May 9, 2011 at 10:32 PM, Chris Hoover <[email protected]> wrote:\n> So, does anyone have any suggestions/experiences in benchmarking storage\n> when the storage is smaller then 2x memory?\n\nTry writing a small python script (or C program) to mmap a large chunk\nof memory, with MAP_LOCKED, this will keep it in RAM and avoid that\nRAM from being used for caching.\nThe script should touch the memory at least once to avoid overcommit\nfrom getting smart on you.\n\nI think only root can lock memory, so that small program would have to\nrun as root.\n",
"msg_date": "Tue, 10 May 2011 09:24:18 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
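Something along the lines of the script Claudio describes might look like the sketch below. It is untested, the size is an arbitrary example, and since Python's mmap module does not appear to expose MAP_LOCKED it locks the pages after the fact with mlockall() via ctypes instead; it needs to run as root (or with a raised memlock rlimit):

    # Rough sketch: pin a large slab of RAM so the OS can't use it for
    # page cache, shrinking the effective cache below the dataset size.
    # Untested; run as root or raise RLIMIT_MEMLOCK first.
    import ctypes
    import mmap
    import os

    SIZE_GIB = 800                      # how much RAM to take away from the cache
    length = SIZE_GIB * 1024 ** 3

    # Anonymous private mapping; the pages get locked below with mlockall().
    buf = mmap.mmap(-1, length,
                    flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

    # Touch one byte per page so overcommit actually backs it with real RAM.
    # (Slow for hundreds of GB, but it only has to run once per benchmark.)
    page = mmap.PAGESIZE
    for off in range(0, length, page):
        buf[off:off + 1] = b"\x01"

    # Lock everything currently mapped by this process so it can't be
    # swapped out or reclaimed. MCL_CURRENT == 1 on Linux.
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.mlockall(1) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

    print("Pinned %d GiB; run the benchmark, then kill this process." % SIZE_GIB)
    input()                             # keep the mapping alive until Enter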
{
"msg_contents": "\nOn May 9, 2011, at 4:50 PM, Merlin Moncure wrote:\n>\n> hm, if it was me, I'd write a small C program that just jumped\n> directly on the device around and did random writes assuming it wasn't\n> formatted. For sequential read, just flush caches and dd the device\n> to /dev/null. Probably someone will suggest better tools though.\n>\n> merlin\n>\n\n<shameless plug>\nhttp://pgfoundry.org/projects/pgiosim\n\nit is a small program we use to beat the [bad word] out of io systems.\nit randomly seeks, does an 8kB read, optionally writes it out (and \noptionally fsyncing) and reports how fast it is going (you need to \nwatch iostat output as well so you can see actual physical tps without \nhte OS cache interfering).\n\nIt goes through regular read & write calls like PG (I didn't want to \nbother with junk like o_direct & friends).\n\nit is also now multithreaded so you can fire up a bunch of random read \nthreads (rather than firing up a bunch of pgiosims in parallel) and \nsee how things scale up.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n",
"msg_date": "Tue, 10 May 2011 10:24:17 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "2011/5/10 Greg Smith <[email protected]>:\n> On 05/09/2011 11:13 PM, Shaun Thomas wrote:\n>>\n>> Take a look at /proc/sys/vm/dirty_ratio and\n>> /proc/sys/vm/dirty_background_ratio if you have an older Linux system, or\n>> /proc/sys/vm/dirty_bytes, and /proc/sys/vm/dirty_background_bytes with a\n>> newer one.\n>> On older systems for instance, those are set to 40 and 20 respectively\n>> (recent kernels cut these in half).\n>\n> 1/4 actually; 10% and 5% starting in kernel 2.6.22. The main sources of\n> this on otherwise new servers I see are RedHat Linux RHEL5 systems running\n> 2.6.18. But as you say, even the lower defaults of the newer kernels can be\n> way too much on a system with lots of RAM.\n\none can experiment writeback storm with this script from Chris Mason,\nunder GPLv2:\nhttp://oss.oracle.com/~mason/fsync-tester.c\n\nYou need to tweak it a bit, AFAIR, this #define SIZE (32768*32) must\nbe reduced to be equal to 8kb blocks if you want similar to pg write\npattern.\n\nThe script does a big file, many small fsync, writing on both. Please,\nsee http://www.spinics.net/lists/linux-ext4/msg24308.html\n\nIt is used as a torture program by some linuxfs-hackers and may be\nuseful for the OP on his large server to validate hardware and kernel.\n\n\n>\n> The main downside I've seen of addressing this by using a kernel with\n> dirty_bytes and dirty_background_bytes is that VACUUM can slow down\n> considerably. It really relies on the filesystem having a lot of write\n> cache to perform well. In many cases people are happy with VACUUM\n> throttling if it means nasty I/O spikes go away, but the trade-offs here are\n> still painful at times.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Tue, 10 May 2011 17:19:59 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
},
{
"msg_contents": "Greg Smith wrote:\n> On 05/09/2011 11:13 PM, Shaun Thomas wrote:\n>> Take a look at /proc/sys/vm/dirty_ratio and \n>> /proc/sys/vm/dirty_background_ratio if you have an older Linux \n>> system, or /proc/sys/vm/dirty_bytes, and \n>> /proc/sys/vm/dirty_background_bytes with a newer one.\n>> On older systems for instance, those are set to 40 and 20 \n>> respectively (recent kernels cut these in half).\n>\n> 1/4 actually; 10% and 5% starting in kernel 2.6.22. The main sources \n> of this on otherwise new servers I see are RedHat Linux RHEL5 systems \n> running 2.6.18. But as you say, even the lower defaults of the newer \n> kernels can be way too much on a system with lots of RAM.\n\nUgh...we're both right, sort of. 2.6.22 dropped them to 5/10: \nhttp://kernelnewbies.org/Linux_2_6_22 as I said. But on the new \nScientific Linux 6 box I installed yesterday, they're at 10/20--as you \nsuggested.\n\nCan't believe I'm going to need a table by kernel version and possibly \ndistribution to keep this all straight now, what a mess.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 10 May 2011 13:50:40 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking a large server"
}
] |
[
{
"msg_contents": "I'm looking for a good ready-to-run broad spectrum (tests cpu bound,\ni/o bound, various cases, various sizes) benchmark. I tried dbt5 and\ngot it compiled after some effort but it immediately fails upon\nrunning so I punted. Anybody have any ideas where I could look?\n\nmerlin\n",
"msg_date": "Mon, 9 May 2011 15:41:07 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "good performance benchmark"
},
{
"msg_contents": "On 5/9/11 1:41 PM, Merlin Moncure wrote:\n> I'm looking for a good ready-to-run broad spectrum (tests cpu bound,\n> i/o bound, various cases, various sizes) benchmark. I tried dbt5 and\n> got it compiled after some effort but it immediately fails upon\n> running so I punted. Anybody have any ideas where I could look?\n\nI don't know any real benchmark currently that isn't fairly involved to\nset up. As in, week-long debugging session. I wish it were different,\nbut to date nobody is available to put in the kind of work required to\nhave credible benchmarks which are relatively portable.\n\nDBT2 is a bit more stable than DBT5, though, so you might have a better\ntime with it.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Mon, 09 May 2011 16:23:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: good performance benchmark"
}
] |
[
{
"msg_contents": "I have a multi-threaded app. It uses ~22 threads to query Postgres.\n\nPostgres won't use more than 1 CPU core. The 22-threaded app only has 3% CPU\nutilization because it's mostly waiting on Postgres.\n\nHere's the details:\n\nThe app has a \"main\" thread that reads table A's 11,000,000 rows, one at a\ntime. The main thread spawns a new thread for each row in table A's data.\nThis new thread:\n\n 1. Opens a connection to the DB.\n 2. Does some calculations on the data, including 1 to 102 SELECTs on\n table B.\n 3. With an INSERT query, writes a new row to table C.\n 4. Closes the connection.\n 5. Thread dies. Its data is garbage collected eventually.\n\nPhysical/software details:\n\n - Core i7 processor--4 physical cores, but OS sees 8 cores\n via hyper-threading\n - 7200 RPM 500 GB HDD\n - About 1/2 total RAM is free during app execution\n - Windows 7 x64\n - Postgres 9.0.4 32-bit (32-bit required for PostGIS)\n - App is C# w/ .NET 4.0. PLINQ dispatches threads. Npgsql is Postgres\n connection tool.\n\nAt first, the app pounds all 8 cores. But it quickly tapers off, and only 1\ncore that's busy. The other 7 cores are barely doing a thing.\n\nPostgres has 9 open processes. 1 process was slamming that 1 busy core. The\nother 8 Postgres processes were alive but idle.\n\nEach thread creates its own connection. It's not concurrently shared with\nthe main thread or any other threads. I haven't disabled connection pooling;\nwhen a thread closes a connection, it's technically releasing it into a pool\nfor later threads to use.\n\nDisk utilization is low. The HDD light is off much more than it is on, and a\nreview of total HDD activity put it between 0% and 10% of total capacity.\nThe HDD busy indicator LED would regularly flicker every 0.1 to 0.3 seconds.\n\nThe app runs 2 different queries on table B. The 1st query is run once, the\n2nd query can be run up to 101 times. Table C has redundant indexes: every\ncolumn referenced in the SQL WHERE clauses for both queries are indexed\nseparately and jointly. E.g., if query X references columns Y and Z, there\nare 3 indexes:\n\n 1. An index for Y\n 2. An index for Z\n 3. An index for Y and Z\n\nTable C is simple. It has four columns: two integers, a varchar(18), and a\nboolean. It has no indexes. A primary key on the varchar(18) column is its\nonly constraint.\n\nA generalized version of my INSERT command for table C is:\n*INSERT INTO raw.C VALUES (:L, :M, :N, :P)*\n\nI am using parameters to fill in the 4 values.\n\nI have verified table C manually, and correct data is being stored in it.\n\nSeveral Google searches suggest Postgres should use multiple cores\nautomatically. I've consulted with Npgsql's developer, and he didn't see how\nNpgsql itself could force Postgres to one core. (See\nhttp://pgfoundry.org/pipermail/npgsql-devel/2011-May/001123.html.)\n\nWhat can I do to improve this? Could I be inadvertently limiting Postgres to\none core?\n\nAren Cambre\n\nI have a multi-threaded app. It uses ~22 threads to query Postgres.Postgres won't use more than 1 CPU core. The 22-threaded app only has 3% CPU utilization because it's mostly waiting on Postgres.\nHere's the details:The app has a \"main\" thread that reads table A's 11,000,000 rows, one at a time. The main thread spawns a new thread for each row in table A's data. This new thread:\nOpens a connection to the DB.Does some calculations on the data, including 1 to 102 SELECTs on table B.With an INSERT query, writes a new row to table C.\nCloses the connection.Thread dies. 
Its data is garbage collected eventually.Physical/software details:Core i7 processor--4 physical cores, but OS sees 8 cores via hyper-threading\n7200 RPM 500 GB HDDAbout 1/2 total RAM is free during app executionWindows 7 x64Postgres 9.0.4 32-bit (32-bit required for PostGIS)\nApp is C# w/ .NET 4.0. PLINQ dispatches threads. Npgsql is Postgres connection tool.At first, the app pounds all 8 cores. But it quickly tapers off, and only 1 core that's busy. The other 7 cores are barely doing a thing.\nPostgres has 9 open processes. 1 process was slamming that 1 busy core. The other 8 Postgres processes were alive but idle.Each thread creates its own connection. It's not concurrently shared with the main thread or any other threads. I haven't disabled connection pooling; when a thread closes a connection, it's technically releasing it into a pool for later threads to use.\nDisk utilization is low. The HDD light is off much more than it is on, and a review of total HDD activity put it between 0% and 10% of total capacity. The HDD busy indicator LED would regularly flicker every 0.1 to 0.3 seconds.\nThe app runs 2 different queries on table B. The 1st query is run once, the 2nd query can be run up to 101 times. Table C has redundant indexes: every column referenced in the SQL WHERE clauses for both queries are indexed separately and jointly. E.g., if query X references columns Y and Z, there are 3 indexes:\nAn index for YAn index for ZAn index for Y and Z\nTable C is simple. It has four columns: two integers, a varchar(18), and a boolean. It has no indexes. A primary key on the varchar(18) column is its only constraint.\n\nA generalized version of my INSERT command for table C is:INSERT INTO raw.C VALUES (:L, :M, :N, :P)\nI am using parameters to fill in the 4 values.I have verified table C manually, and correct data is being stored in it.Several Google searches suggest Postgres should use multiple cores automatically. I've consulted with Npgsql's developer, and he didn't see how Npgsql itself could force Postgres to one core. (See http://pgfoundry.org/pipermail/npgsql-devel/2011-May/001123.html.)\nWhat can I do to improve this? Could I be inadvertently limiting Postgres to one core?\n\nAren Cambre",
"msg_date": "Mon, 9 May 2011 16:23:13 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres refusing to use >1 core"
},
{
"msg_contents": "On Mon, May 9, 2011 at 4:23 PM, Aren Cambre <[email protected]> wrote:\n> I have a multi-threaded app. It uses ~22 threads to query Postgres.\n> Postgres won't use more than 1 CPU core. The 22-threaded app only has 3% CPU\n> utilization because it's mostly waiting on Postgres.\n> Here's the details:\n> The app has a \"main\" thread that reads table A's 11,000,000 rows, one at a\n> time. The main thread spawns a new thread for each row in table A's data.\n> This new thread:\n>\n> Opens a connection to the DB.\n> Does some calculations on the data, including 1 to 102 SELECTs on table B.\n> With an INSERT query, writes a new row to table C.\n> Closes the connection.\n> Thread dies. Its data is garbage collected eventually.\n>\n> Physical/software details:\n>\n> Core i7 processor--4 physical cores, but OS sees 8 cores via hyper-threading\n> 7200 RPM 500 GB HDD\n> About 1/2 total RAM is free during app execution\n> Windows 7 x64\n> Postgres 9.0.4 32-bit (32-bit required for PostGIS)\n> App is C# w/ .NET 4.0. PLINQ dispatches threads. Npgsql is Postgres\n> connection tool.\n>\n> At first, the app pounds all 8 cores. But it quickly tapers off, and only 1\n> core that's busy. The other 7 cores are barely doing a thing.\n> Postgres has 9 open processes. 1 process was slamming that 1 busy core. The\n> other 8 Postgres processes were alive but idle.\n> Each thread creates its own connection. It's not concurrently shared with\n> the main thread or any other threads. I haven't disabled connection pooling;\n> when a thread closes a connection, it's technically releasing it into a pool\n> for later threads to use.\n> Disk utilization is low. The HDD light is off much more than it is on, and a\n> review of total HDD activity put it between 0% and 10% of total capacity.\n> The HDD busy indicator LED would regularly flicker every 0.1 to 0.3 seconds.\n> The app runs 2 different queries on table B. The 1st query is run once, the\n> 2nd query can be run up to 101 times. Table C has redundant indexes: every\n> column referenced in the SQL WHERE clauses for both queries are indexed\n> separately and jointly. E.g., if query X references columns Y and Z, there\n> are 3 indexes:\n>\n> An index for Y\n> An index for Z\n> An index for Y and Z\n>\n> Table C is simple. It has four columns: two integers, a varchar(18), and a\n> boolean. It has no indexes. A primary key on the varchar(18) column is its\n> only constraint.\n> A generalized version of my INSERT command for table C is:\n> INSERT INTO raw.C VALUES (:L, :M, :N, :P)\n> I am using parameters to fill in the 4 values.\n> I have verified table C manually, and correct data is being stored in it.\n> Several Google searches suggest Postgres should use multiple cores\n> automatically. I've consulted with Npgsql's developer, and he didn't see how\n> Npgsql itself could force Postgres to one core.\n> (See http://pgfoundry.org/pipermail/npgsql-devel/2011-May/001123.html.)\n> What can I do to improve this? Could I be inadvertently limiting Postgres to\n> one core?\n\nAre you sure you are really using > 1 connection? While your test is\nrunning, log onto postgres with psql and grab the output of\npg_stat_activity a few times. What do you see?\n\nmerlin\n",
"msg_date": "Mon, 9 May 2011 16:35:43 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "Aren Cambre <[email protected]> wrote:\n \n> Postgres won't use more than 1 CPU core.\n \nOne *connection* to PostgreSQL won't directly use more than one\ncore. As Merlin suggests, perhaps you're really only running one\nquery at a time? The other possibility is that you're somehow\nacquiring locks which cause one process to block others.\n \n> - Core i7 processor--4 physical cores, but OS sees 8 cores\n> via hyper-threading\n \nMost benchmarks I've seen comparing hyper-threading show that\nPostgreSQL performs better if you don't try to convince it that one\ncore is actually two different cores. With HT on, you tend to see\ncontext switching storms, and performance suffers.\n \n> At first, the app pounds all 8 cores.\n \nYou really shouldn't let the marketers get to you like that. You\nhave four cores, not eight.\n \nThe most important information for finding your bottleneck is\nprobably going to be in pg_stat_activity and pg_locks.\n \n-Kevin\n",
"msg_date": "Mon, 09 May 2011 16:45:51 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n> Are you sure you are really using > 1 connection? While your test is\n> running, log onto postgres with psql and grab the output of\n> pg_stat_activity a few times. What do you see?\n>\n\nThanks. If a connection corresponds to a process, then this suggests I am\nusing 1 connection for my main thread, and all the threads it spawns are\nsharing another connection.\n\nAren\n\nAre you sure you are really using > 1 connection? While your test is\nrunning, log onto postgres with psql and grab the output of\npg_stat_activity a few times. What do you see?Thanks. If a connection corresponds to a process, then this suggests I am using 1 connection for my main thread, and all the threads it spawns are sharing another connection.\nAren",
"msg_date": "Mon, 9 May 2011 16:50:45 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n> > Postgres won't use more than 1 CPU core.\n>\n> One *connection* to PostgreSQL won't directly use more than one\n> core. As Merlin suggests, perhaps you're really only running one\n> query at a time? The other possibility is that you're somehow\n> acquiring locks which cause one process to block others.\n>\n\nThe \"one connection\" theory appears correct per prior email, if correctly\nunderstood what I was reading.\n\nI guess I need to head back over to the Npgsql folks and see what I am doing\nwrong?\n\n\n> > - Core i7 processor--4 physical cores, but OS sees 8 cores\n> > via hyper-threading\n>\n> Most benchmarks I've seen comparing hyper-threading show that\n> PostgreSQL performs better if you don't try to convince it that one\n> core is actually two different cores. With HT on, you tend to see\n> context switching storms, and performance suffers.\n>\n> > At first, the app pounds all 8 cores.\n>\n> You really shouldn't let the marketers get to you like that. You\n> have four cores, not eight.\n>\n\nI agree. :-) Just trying to express things as my OS sees and reports on\nthem.\n\nAren\n\n> Postgres won't use more than 1 CPU core.\n\nOne *connection* to PostgreSQL won't directly use more than one\ncore. As Merlin suggests, perhaps you're really only running one\nquery at a time? The other possibility is that you're somehow\nacquiring locks which cause one process to block others.The \"one connection\" theory appears correct per prior email, if correctly understood what I was reading.\nI guess I need to head back over to the Npgsql folks and see what I am doing wrong? > - Core i7 processor--4 physical cores, but OS sees 8 cores\n\n\n> via hyper-threading\n\nMost benchmarks I've seen comparing hyper-threading show that\nPostgreSQL performs better if you don't try to convince it that one\ncore is actually two different cores. With HT on, you tend to see\ncontext switching storms, and performance suffers.\n\n> At first, the app pounds all 8 cores.\n\nYou really shouldn't let the marketers get to you like that. You\nhave four cores, not eight.I agree. :-) Just trying to express things as my OS sees and reports on them.Aren",
"msg_date": "Mon, 9 May 2011 16:52:36 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On Mon, May 9, 2011 at 4:50 PM, Aren Cambre <[email protected]> wrote:\n>> Are you sure you are really using > 1 connection? While your test is\n>> running, log onto postgres with psql and grab the output of\n>> pg_stat_activity a few times. What do you see?\n>\n> Thanks. If a connection corresponds to a process, then this suggests I am\n> using 1 connection for my main thread, and all the threads it spawns are\n> sharing another connection.\n\nYes. However I can tell you with absolute certainly that postgres\nwill distribute work across cores. Actually the o/s does it -- each\nunique connection spawns a single threaded process on the backend. As\nlong as your o/s of choice is supports using more than once process at\nonce, your work will distribute. So, given that, your problem is:\n\n*) your code is actually using only one connection\n*) you have contention on the server side (say, a transaction\noutstanding that it blocking everyone)\n*) you have contention on the client side -- a lock in your code or\ninside npgsql\n*) your measuring is not correct.\n\nso follow the advice above. we need to see pg_stat_activity, and/or\npg_locks while your test is running (especially take note of pg_lock\nrecords with granted=f)\n\nmerlin\n",
"msg_date": "Mon, 9 May 2011 16:56:34 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
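For anyone wanting to follow Merlin's suggestion without pasting queries into psql by hand, a throwaway polling script works too. This is a rough sketch that assumes psycopg2 and a made-up connection string; the column names (procpid, waiting, current_query) are the PostgreSQL 9.0 ones used elsewhere in this thread, and newer releases call them pid and query:

    # Rough sketch: poll pg_stat_activity and ungranted pg_locks once a
    # second while the client app runs. DSN is hypothetical; adjust it.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=de user=postgres")
    conn.autocommit = True
    cur = conn.cursor()

    for _ in range(10):
        cur.execute("""
            SELECT procpid, waiting, current_query
            FROM pg_stat_activity
            WHERE current_query <> '<IDLE>'
        """)
        print("--- active backends ---")
        for row in cur.fetchall():
            print(row)

        cur.execute("""
            SELECT locktype, relation::regclass, pid, mode
            FROM pg_locks
            WHERE NOT granted
        """)
        print("--- ungranted locks ---")
        for row in cur.fetchall():
            print(row)

        time.sleep(1)

If the first query only ever shows one non-idle backend, the client really is funneling everything through a single connection.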
{
"msg_contents": "Aren Cambre <[email protected]> wrote:\n \n>>> - Core i7 processor--4 physical cores, but OS sees 8 cores\n>>> via hyper-threading\n>>\n>> Most benchmarks I've seen comparing hyper-threading show that\n>> PostgreSQL performs better if you don't try to convince it that\n>> one core is actually two different cores. With HT on, you tend\n>> to see context switching storms, and performance suffers.\n>>\n>> > At first, the app pounds all 8 cores.\n>>\n>> You really shouldn't let the marketers get to you like that. You\n>> have four cores, not eight.\n>>\n> \n> I agree. :-) Just trying to express things as my OS sees and\n> reports on them.\n \nYour OS won't *see* eight processors if you turn of HT. :-)\n \nI'm going to pursue this digression just a little further, because\nit probably will be biting you sooner or later. We make sure to\nconfigure the BIOS on our database servers to turn off\nhyperthreading. It really can make a big difference in performance.\n \n-Kevin\n",
"msg_date": "Mon, 09 May 2011 16:59:02 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On 05/09/2011 05:59 PM, Kevin Grittner wrote:\n> I'm going to pursue this digression just a little further, because\n> it probably will be biting you sooner or later. We make sure to\n> configure the BIOS on our database servers to turn off\n> hyperthreading. It really can make a big difference in performance.\n> \n\nYou're using connection pooling quite aggressively though. The sort of \npeople who do actually benefit from hyperthreading are the ones who \ndon't, where there's lots of CPU time being burnt up in overhead you \ndon't see, and that even a virtual HT processor can help handle. I'm \nnot a big fan of the current hyperthreading implementation, but it's not \nnearly as bad as the older ones, and there are situations where it is \nuseful. I am unsurprised you don't ever see them on your workload \nthough, you're well tweaked enough to probably be memory or disk limited \nmuch of the time.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Mon, 09 May 2011 18:43:52 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n> so follow the advice above. we need to see pg_stat_activity, and/or\n> pg_locks while your test is running (especially take note of pg_lock\n> records with granted=f)\n\n\nAttached.\n\nThe database is named de. The process with procpid 3728 has the SQL query\nfor my \"main\" thread--the one that reads the 12,000,000 rows one by one.\nprocpid 6272 was handling the queries from the ~22 threads, although at the\ntime this was taken, it was idle. But if I monitor it, I can see the queries\nof tables B and C going through it.\n\nI am not clear what to read into pg_locks except that the \"main\" thread\n(3728's query) sure has a lot of locks! But all 3728 is doing is reading\nrows from table A, nothing else.\n\nAren",
"msg_date": "Mon, 9 May 2011 21:12:20 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n> Your OS won't *see* eight processors if you turn of HT. :-)\n>\n> I'm going to pursue this digression just a little further, because\n> it probably will be biting you sooner or later. We make sure to\n> configure the BIOS on our database servers to turn off\n> hyperthreading. It really can make a big difference in performance.\n>\n\nOK, OK, I need to admit that this is a Core i7 720QM on an HP Envy 14\nlaptop. :-) There is no BIOS option to disable HT.\n\nI am a doctoral student (but married with kids, about 5-10 years over\ntraditional doctorate student age) and am trying to speed up some of my data\nanalysis with parallelism. Right now the current operation,if run in series,\ntakes 30 hours and only stresses one of the 8 (fake) cores. I'd rather see\nsomething that maximizes CPU use, provided that it doesn't overwhelm I/O.\n\nAren\n\nYour OS won't *see* eight processors if you turn of HT. :-)\n\nI'm going to pursue this digression just a little further, because\nit probably will be biting you sooner or later. We make sure to\nconfigure the BIOS on our database servers to turn off\nhyperthreading. It really can make a big difference in performance.OK, OK, I need to admit that this is a Core i7 720QM on an HP Envy 14 laptop. :-) There is no BIOS option to disable HT.\nI am a doctoral student (but married with kids, about 5-10 years over traditional doctorate student age) and am trying to speed up some of my data analysis with parallelism. Right now the current operation,if run in series, takes 30 hours and only stresses one of the 8 (fake) cores. I'd rather see something that maximizes CPU use, provided that it doesn't overwhelm I/O.\nAren",
"msg_date": "Mon, 9 May 2011 21:15:27 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On Mon, May 9, 2011 at 8:15 PM, Aren Cambre <[email protected]> wrote:\n>> Your OS won't *see* eight processors if you turn of HT. :-)\n>> I'm going to pursue this digression just a little further, because\n>> it probably will be biting you sooner or later. We make sure to\n>> configure the BIOS on our database servers to turn off\n>> hyperthreading. It really can make a big difference in performance.\n>\n> OK, OK, I need to admit that this is a Core i7 720QM on an HP Envy 14\n> laptop. :-) There is no BIOS option to disable HT.\n> I am a doctoral student (but married with kids, about 5-10 years over\n> traditional doctorate student age) and am trying to speed up some of my data\n> analysis with parallelism. Right now the current operation,if run in series,\n> takes 30 hours and only stresses one of the 8 (fake) cores. I'd rather see\n> something that maximizes CPU use, provided that it doesn't overwhelm I/O.\n\nThe easiest way to use more cores is to just partition the data you\nwant to work on into 4 or more chunks and launch that many\nmulti-threaded processes at once.\n",
"msg_date": "Mon, 9 May 2011 20:34:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
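A rough sketch of the partitioning Scott describes, written in Python with psycopg2 only because it is compact (the same shape works in C#/Npgsql). The table and column names are invented; the key point is that every worker process opens its own connection and handles a disjoint slice of the ids:

    # Rough sketch: split the big table into N slices by id and give each
    # slice to its own process with its own connection. Names are made up.
    from multiprocessing import Process
    import psycopg2

    N_WORKERS = 4                       # roughly one per physical core

    def work(slice_no):
        conn = psycopg2.connect("dbname=de")          # hypothetical DSN
        cur = conn.cursor("slice_%d" % slice_no)      # server-side cursor
        cur.execute(
            "SELECT id, payload FROM raw.tickets WHERE id %% %s = %s",
            (N_WORKERS, slice_no))
        for row in cur:
            pass        # ... do the per-row processing here ...
        conn.close()

    if __name__ == "__main__":
        procs = [Process(target=work, args=(i,)) for i in range(N_WORKERS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()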
{
"msg_contents": "On Mon, May 9, 2011 at 10:15 PM, Aren Cambre <[email protected]> wrote:\n>> Your OS won't *see* eight processors if you turn of HT. :-)\n>> I'm going to pursue this digression just a little further, because\n>> it probably will be biting you sooner or later. We make sure to\n>> configure the BIOS on our database servers to turn off\n>> hyperthreading. It really can make a big difference in performance.\n>\n> OK, OK, I need to admit that this is a Core i7 720QM on an HP Envy 14\n> laptop. :-) There is no BIOS option to disable HT.\n> I am a doctoral student (but married with kids, about 5-10 years over\n> traditional doctorate student age) and am trying to speed up some of my data\n> analysis with parallelism. Right now the current operation,if run in series,\n> takes 30 hours and only stresses one of the 8 (fake) cores. I'd rather see\n> something that maximizes CPU use, provided that it doesn't overwhelm I/O.\n> Aren\n\nhow are you reading through the table? if you are using OFFSET, you\nowe me a steak dinner.\n\nmerlin\n",
"msg_date": "Mon, 9 May 2011 22:37:37 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n> how are you reading through the table? if you are using OFFSET, you\n> owe me a steak dinner.\n>\n>\nNope. :-)\n\nBelow is my exact code for the main thread. The C# PLINQ statement is\nhighlighted. Let me know if I can help to explain this.\n\n NpgsqlConnection arrestsConnection = new NpgsqlConnection\n(Properties.Settings.Default.dbConnectionString);\n\n arrestsConnection.Open();\n\n\n\n /// First clear out the geocoding table\n\n NpgsqlCommand geocodingTableClear = new NpgsqlCommand(\"TRUNCATE\nraw.\\\"TxDPS geocoding\\\"\", arrestsConnection);\n\n geocodingTableClear.ExecuteNonQuery();\n\n\n\n NpgsqlDataReader arrests = new NpgsqlCommand(\"SELECT * FROM\n\\\"raw\\\".\\\"TxDPS all arrests\\\"\", arrestsConnection).ExecuteReader();\n\n\n\n /// Based on the pattern defined at\n\n ///\nhttp://social.msdn.microsoft.com/Forums/en-US/parallelextensions/thread/2f5ce226-c500-4899-a923-99285ace42ae\n.\n\n foreach(IDataRecord arrest in\n\n from row in arrests.AsParallel().Cast <IDataRecord>()\n\n select row)\n\n {\n\n Geocoder geocodeThis = new Geocoder(arrest);\n\n geocodeThis.Geocode();\n\n }\n\n\n\n arrestsConnection.Close();\n\n\nAren\n\nhow are you reading through the table? if you are using OFFSET, you\nowe me a steak dinner.\nNope. :-)Below is my exact code for the main thread. The C# PLINQ statement is highlighted. Let me know if I can help to explain this.\n NpgsqlConnection arrestsConnection = new NpgsqlConnection(Properties.Settings.Default.dbConnectionString);\n \narrestsConnection.Open();\n \n /// First clear out the\ngeocoding table\n NpgsqlCommand geocodingTableClear = new NpgsqlCommand(\"TRUNCATE raw.\\\"TxDPS geocoding\\\"\",\narrestsConnection);\n \ngeocodingTableClear.ExecuteNonQuery();\n \n NpgsqlDataReader arrests = new NpgsqlCommand(\"SELECT * FROM \\\"raw\\\".\\\"TxDPS all\narrests\\\"\", arrestsConnection).ExecuteReader();\n \n /// Based on the pattern\ndefined at \n ///\nhttp://social.msdn.microsoft.com/Forums/en-US/parallelextensions/thread/2f5ce226-c500-4899-a923-99285ace42ae.\n foreach(IDataRecord\narrest in\n from\nrow in arrests.AsParallel().Cast <IDataRecord>()\n \nselect row)\n {\n \nGeocoder geocodeThis = new Geocoder(arrest);\n \ngeocodeThis.Geocode();\n }\n \n \narrestsConnection.Close();\n Aren",
"msg_date": "Mon, 9 May 2011 21:40:35 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On 10/05/11 10:40, Aren Cambre wrote:\n> how are you reading through the table? if you are using OFFSET, you\n> owe me a steak dinner.\n> \n> \n> Nope. :-)\n> \n> Below is my exact code for the main thread. The C# PLINQ statement is\n> highlighted. Let me know if I can help to explain this.\n\nLooking at that code, I can't help but wonder why you're not doing it\nserver side in batches. In general, it's really inefficient to use this\npattern:\n\nrows = runquery(\"select * from something\");\nfor (item in rows) {\n // do something with item\n}\n\nAdding implicit parallelism within the loop won't help you much if\nclient-side CPU use isn't your limitation. If each computation done on\n\"item\" is very expensive in client-side CPU this pattern makes sense,\nbut otherwise should be avoided in favour of grabbing big chunks of rows\nand processing them all at once in batch SQL statements that let the\ndatabase plan I/O efficiently.\n\nEven if you're going to rely on client-side looping - say, because of\ncomplex or library-based computation that must be done for each record -\nyou must ensure that EACH THREAD HAS ITS OWN CONNECTION, whether that's\na new connection established manually or one grabbed from an appropriate\npool. Your code below shows no evidence of that at all; you're probably\nsharing one connection between all the threads, achieving no real\nparallelism whatsoever.\n\nTry limiting your parallel invocation to 4 threads (since that's number\nof cores you have) and making sure each has its own connection. In your\ncase, that probably means having a new Geocoder instance grab a\nconnection from a pool that contains at least 5 connections (one per\nGeocoder, plus the main connection).\n\nIt also looks - though I don't know C# and npgsql so I can't be sure -\nlike you're passing some kind of query result object to the Geocoder.\nAvoid that, because they might be using the connection to progressively\nread data behind the scenes in which case you might land up having\nlocking issues, accidentally serializing your parallel work on the\nsingle main connection, etc. Instead, retrieve the contents of the\nIDataRecord (whatever that is) and pass that to the new Geocoder\ninstance, so the new Geocoder has *absolutely* *no* *link* to the\narrestsConnection and cannot possibly depend on it accidentally.\n\nEven better, use a server-side work queue implementation like pgq, and\nhave each worker use its private connection to ask the server for the\nnext record to process when it's done with the previous one, so you\ndon't need a co-ordinating queue thread in your client side at all. You\ncan also optionally make your client workers independent processes\nrather than threads that way, which simplifies debugging and resource\nmanagement.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 10 May 2011 12:01:47 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On Mon, May 9, 2011 at 9:40 PM, Aren Cambre <[email protected]> wrote:\n>> how are you reading through the table? if you are using OFFSET, you\n>> owe me a steak dinner.\n>>\n>\n> Nope. :-)\n> Below is my exact code for the main thread. The C# PLINQ statement is\n> highlighted. Let me know if I can help to explain this.\n>\n> NpgsqlConnection arrestsConnection = new\n> NpgsqlConnection(Properties.Settings.Default.dbConnectionString);\n>\n> arrestsConnection.Open();\n>\n>\n>\n> /// First clear out the geocoding table\n>\n> NpgsqlCommand geocodingTableClear = new NpgsqlCommand(\"TRUNCATE\n> raw.\\\"TxDPS geocoding\\\"\", arrestsConnection);\n>\n> geocodingTableClear.ExecuteNonQuery();\n>\n>\n>\n> NpgsqlDataReader arrests = new NpgsqlCommand(\"SELECT * FROM\n> \\\"raw\\\".\\\"TxDPS all arrests\\\"\", arrestsConnection).ExecuteReader();\n>\n>\n>\n> /// Based on the pattern defined at\n>\n> ///\n> http://social.msdn.microsoft.com/Forums/en-US/parallelextensions/thread/2f5ce226-c500-4899-a923-99285ace42ae.\n>\n> foreach(IDataRecord arrest in\n>\n> from row in arrests.AsParallel().Cast <IDataRecord>()\n>\n> select row)\n>\n> {\n>\n> Geocoder geocodeThis = new Geocoder(arrest);\n>\n> geocodeThis.Geocode();\n>\n> }\n>\n>\n>\n> arrestsConnection.Close();\n\n\nhm. I'm not exactly sure. how about turning on statement level\nlogging on the server for a bit and seeing if any unexpected queries\nare being generated and sent to the server.\n\nmerlin\n",
"msg_date": "Tue, 10 May 2011 09:06:38 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On 05/11/2011 05:34 AM, Aren Cambre wrote:\n\n> Using one thread, the app can do about 111 rows per second, and it's\n> only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /\n> 111 rows per second ~= 30 hours.\n>\n> I hoped to speed things up with some parallel processing.\n>\n> When the app is multithreaded, the app itself consumes about 3% CPU time\n> waiting for Postgres, which is only hammering 1 core and barely\n> exercising disk I/O (per two programs and HDD light).\n\nOK, so before looking at parallelism, you might want to look at why\nyou're not getting much out of Pg and your app with even one thread. You\nshould be able to put a high load on the disk disk - or one cpu core -\nwithout needing to split out work into multiple threads and parallel\nworkers.\n\nI suspect your app is doing lots of tiny single-row queries instead of\nefficiently batching things. It'll be wasting huge amounts of time\nwaiting for results. Even if every query is individually incredibly\nfast, with the number of them you seem to be doing you'll lose a LOT of\ntime if you loop over lots of little SELECTs.\n\nThe usual cause of the kind of slow performance you describe is an app\nthat \"chats\" with the database continuously, so its pattern is:\n\nloop:\n ask for row from database using SELECT\n retrieve result\n do a tiny bit of processing\n continue loop\n\nThis is incredibly inefficient, because Pg is always waiting for the app\nto ask for something or the app is waiting for Pg to return something.\nDuring each switch there are delays and inefficiencies. It's actually:\n\n\nloop:\n ask for a single row from database using SELECT\n [twiddle thumbs while database plans and executes the query]\n retrieve result\n do a tiny bit of processing [Pg twiddles its thumbs]\n continue loop\n\nWhat you want is your app and Pg working at the same time.\n\nAssuming that CPU is the limitation rather than database speed and disk\nI/O I'd use something like this:\n\nThread 1:\n get cursor for selecting all rows from database\n loop:\n get 100 rows from cursor\n add rows to processing queue\n if queue contains over 1000 rows:\n wait until queue contains less than 1000 rows\n\nThread 2:\n until there are no more rows:\n ask Thread 1 for 100 rows\n for each row:\n do a tiny bit of processing\n\n\nBy using a producer/consumer model like that you can ensure that thread\n1 is always talking to the database, keeping Pg busy, and thread 2 is\nalways working the CPUs. The two threads should share NOTHING except the\nqueue to keep the whole thing simple and clean. You must make sure that\nthe \"get 100 rows\" operation of the producer can happen even while the\nproducer is in the middle of getting some more rows from Pg (though not\nnecessarily in the middle of actually appending them to the queue data\nstructure) so you don't accidentally serialize on access to the producer\nthread.\n\nIf the single producer thread can't keep, try reading in bigger batches\nor adding more producer threads with a shared queue. If the single\nconsumer thread can't keep up with the producer, add more consumers to\nuse more CPU cores.\n\n[producer 1] [producer 2] [...] [producer n]\n | | | |\n ---------------------------------\n |\n queue\n |\n ---------------------------------\n | | | |\n[worker 1] [worker 2] [...] [worker n]\n\n... 
or you can have each worker fetch its own chunks of rows (getting\nrid of the producer/consumer split) using its own connection and just\nhave lots more workers to handle all the wasted idle time. A\nproducer/consumer approach will probably be faster, though.\n\nIf the consumer threads produce a result that must be sent back to the\ndatabase, you can either have each thread write it to the database using\nits own connection when it's done, or you can have them delegate that\nwork to another thread that's dedicated to INSERTing the results. If the\nINSERTer can't keep up, guess what, you spawn more of them working off a\nshared queue.\n\nIf the consumer threads require additional information from the database\nto do their work, make sure they avoid the:\n\nloop:\n fetch one row\n do work on row\n\npattern, instead fetching sets of rows from the database in batches. Use\njoins if necessary, or the IN() criterion.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 11 May 2011 09:32:35 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
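To make the outline above concrete, here is a rough producer/consumer sketch. It is Python with psycopg2 purely for brevity (the OP's app is C#/Npgsql, where the same structure applies); the DSN is made up, processes are used rather than threads so the CPU-bound work really lands on separate cores, and the per-row work is a placeholder:

    # Rough sketch of Craig's producer/consumer outline: one producer
    # streams rows from Postgres in batches through a bounded queue and
    # several consumers chew on them, each with its OWN connection.
    import multiprocessing as mp
    import psycopg2

    BATCH = 100
    N_CONSUMERS = 4
    DONE = None                         # sentinel telling consumers to stop

    def producer(q):
        conn = psycopg2.connect("dbname=de")           # hypothetical DSN
        cur = conn.cursor("arrests_stream")            # server-side cursor
        cur.execute('SELECT * FROM raw."TxDPS all arrests"')
        while True:
            batch = cur.fetchmany(BATCH)
            if not batch:
                break
            for row in batch:
                q.put(row)              # blocks when the queue is full
        for _ in range(N_CONSUMERS):
            q.put(DONE)
        conn.close()

    def consumer(q):
        conn = psycopg2.connect("dbname=de")           # private connection
        while True:
            row = q.get()
            if row is DONE:
                break
            # ... CPU-heavy work plus any per-row lookups go here ...
        conn.close()

    if __name__ == "__main__":
        q = mp.Queue(maxsize=1000)      # bounded so the producer can't race ahead
        procs = [mp.Process(target=producer, args=(q,))]
        procs += [mp.Process(target=consumer, args=(q,))
                  for _ in range(N_CONSUMERS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()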
{
"msg_contents": "On 11/05/11 05:34, Aren Cambre wrote:\n\n> Using one thread, the app can do about 111 rows per second, and it's\n> only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /\n> 111 rows per second ~= 30 hours.\n\nI don't know how I missed that. You ARE maxing out one cpu core, so\nyou're quite right that you need more threads unless you can make your\nsingle worker more efficient.\n\nWhy not just spawn more copies of your program and have them work on\nranges of the data, though? Might that not be simpler than juggling\nthreading schemes?\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 11 May 2011 09:35:01 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On Tue, May 10, 2011 at 7:35 PM, Craig Ringer\n<[email protected]> wrote:\n> On 11/05/11 05:34, Aren Cambre wrote:\n>\n>> Using one thread, the app can do about 111 rows per second, and it's\n>> only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /\n>> 111 rows per second ~= 30 hours.\n>\n> I don't know how I missed that. You ARE maxing out one cpu core, so\n> you're quite right that you need more threads unless you can make your\n> single worker more efficient.\n>\n> Why not just spawn more copies of your program and have them work on\n> ranges of the data, though? Might that not be simpler than juggling\n> threading schemes?\n\nI suggested that earlier. But now I'm wondering if there's\nefficiencies to be gained by moving all the heavy lifting to the db as\nwell as splitting thiings into multiple partitions to work on. I.e.\ndon't grab 1,000 rows and work on them on the client side and then\ninsert data, do the data mangling in the query in the database. My\nexperience has been that moving things like this into the database can\nresult in performance gains of several factors, taking hour long\nprocesses and making them run in minutes.\n",
"msg_date": "Tue, 10 May 2011 22:26:57 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "\n> I suspect your app is doing lots of tiny single-row queries instead of\n> efficiently batching things. It'll be wasting huge amounts of time\n> waiting for results. Even if every query is individually incredibly\n> fast, with the number of them you seem to be doing you'll lose a LOT of\n> time if you loop over lots of little SELECTs.\n\nUsing unix sockets, you can expect about 10-20.000 queries/s on small \nsimple selects per core, which is quite a feat. TCP adds overhead, so it's \nslower. Over a network, add ping time.\n\nIn plpgsql code, you avoid roundtrips, data serializing, and context \nswitches, it can be 2-4x faster.\n\nBut a big SQL query can process millions of rows/s, it is much more \nefficient.\n",
"msg_date": "Wed, 11 May 2011 13:06:53 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On 05/10/2011 11:26 PM, Scott Marlowe wrote:\n\n> I.e. don't grab 1,000 rows and work on them on the client side and\n> then insert data, do the data mangling in the query in the database.\n> My experience has been that moving things like this into the database\n> can result in performance gains of several factors, taking hour long\n> processes and making them run in minutes.\n\nThis is a problem I encounter constantly wherever I go. Programmer \nselects millions of rows from giant table. Programmer loops through \nresults one by one doing some magic on them. Programmer submits queries \nback to the database. Even in batches, that's going to take ages.\n\nDatabases are beasts at set-based operations. If the programmer can \nbuild a temp table of any kind and load that, they can turn their \nupdate/insert/whatever into a simple JOIN that runs several orders of \nmagnitude faster. Going the route of parallelism will probably work too, \nbut I doubt it's the right solution in this case.\n\nWhen there are tables with millions of rows involved, processing 111 per \nsecond is a bug. Even with ten perfectly balanced threads, 30 hours only \nbecomes three. On decent hardware, you can probably drop, reload, and \nindex the entire table faster than that.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Wed, 11 May 2011 11:04:49 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
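A minimal sketch of the temp-table-plus-JOIN pattern Shaun describes, assuming psycopg2. Every table and column name here is invented, and client_rows stands in for whatever the client computed row by row; the point is that the per-row results get applied in one set-based statement instead of millions of little UPDATEs:

    # Rough sketch: stage client-side results in a temp table with COPY,
    # then apply them with a single set-based UPDATE ... FROM join.
    import io
    import psycopg2

    client_rows = [(1, 0.9), (2, 0.4)]      # stand-in for computed results

    conn = psycopg2.connect("dbname=de")    # hypothetical DSN
    cur = conn.cursor()

    cur.execute("""
        CREATE TEMP TABLE tmp_results (item_id int, score float)
        ON COMMIT DROP
    """)

    # Bulk-load the staged rows in one COPY instead of row-at-a-time INSERTs.
    buf = io.StringIO()
    for item_id, score in client_rows:
        buf.write("%d\t%f\n" % (item_id, score))
    buf.seek(0)
    cur.copy_from(buf, "tmp_results", columns=("item_id", "score"))

    # One JOINed statement applies everything at once.
    cur.execute("""
        UPDATE items i
        SET score = t.score
        FROM tmp_results t
        WHERE i.item_id = t.item_id
    """)
    conn.commit()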
{
"msg_contents": "\n> This is a problem I encounter constantly wherever I go. Programmer \n> selects millions of rows from giant table. Programmer loops through \n> results one by one doing some magic on them. Programmer submits queries \n> back to the database. Even in batches, that's going to take ages.\n\nReminds me of a recent question on stackoverflow :\n\nhttp://stackoverflow.com/questions/5952020/how-to-optimize-painfully-slow-mysql-query-that-finds-correlations\n\nAnd the answer :\n\nhttp://stackoverflow.com/questions/5952020/how-to-optimize-painfully-slow-mysql-query-that-finds-correlations/5954041#5954041\n\nOP was thinking \"row-based\", with subqueries in the role of \"doing some \nmagicm\".\nUsing a set-based solution with cascading WITH CTEs (and using the \nprevious CTE as a source in the next one for aggregation) => 100x speedup !\n",
"msg_date": "Wed, 11 May 2011 20:10:54 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "---- Original message ----\n>Date: Wed, 11 May 2011 11:04:49 -0500\n>From: [email protected] (on behalf of Shaun Thomas <[email protected]>)\n>Subject: Re: [PERFORM] Postgres refusing to use >1 core \n>To: Scott Marlowe <[email protected]>\n>Cc: Craig Ringer <[email protected]>,Aren Cambre <[email protected]>,<[email protected]>\n>\n>On 05/10/2011 11:26 PM, Scott Marlowe wrote:\n>\n>> I.e. don't grab 1,000 rows and work on them on the client side and\n>> then insert data, do the data mangling in the query in the database.\n>> My experience has been that moving things like this into the database\n>> can result in performance gains of several factors, taking hour long\n>> processes and making them run in minutes.\n>\n>This is a problem I encounter constantly wherever I go. Programmer \n>selects millions of rows from giant table. Programmer loops through \n>results one by one doing some magic on them. Programmer submits queries \n>back to the database. Even in batches, that's going to take ages.\n>\n>Databases are beasts at set-based operations. If the programmer can \n>build a temp table of any kind and load that, they can turn their \n>update/insert/whatever into a simple JOIN that runs several orders of \n>magnitude faster. Going the route of parallelism will probably work too, \n>but I doubt it's the right solution in this case.\n>\n>When there are tables with millions of rows involved, processing 111 per \n>second is a bug. Even with ten perfectly balanced threads, 30 hours only \n>becomes three. On decent hardware, you can probably drop, reload, and \n>index the entire table faster than that.\n>\n>-- \n>Shaun Thomas\n>OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n>312-676-8870\n>[email protected]\n>\n>______________________________________________\n>\n>See http://www.peak6.com/email_disclaimer.php\n>for terms and conditions related to this email\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\nSo, the $64 question: how did you find an engagement where, to bend Shakespeare, \"first thing we do, is kill all the coders\" isn't required? This RBAR mentality, abetted by xml/NoSql/xBase, is utterly pervasive. They absolutely refuse to learn anything different from the COBOL/VSAM messes of their grandfathers; well modulo syntax, of course. The mere suggestion, in my experience, that doing things faster with fewer lines of code/statements in the engine is met with overt hostility.\n\nRegards,\nRobert\n",
"msg_date": "Wed, 11 May 2011 15:53:09 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1\n core"
},
{
"msg_contents": "On Wed, May 11, 2011 at 1:53 PM, <[email protected]> wrote:\n\n> So, the $64 question: how did you find an engagement where, to bend Shakespeare, \"first thing we do, is kill all the coders\" isn't required? This RBAR mentality, abetted by xml/NoSql/xBase, is utterly pervasive. They absolutely refuse to learn anything different from the COBOL/VSAM messes of their grandfathers; well modulo syntax, of course. The mere suggestion, in my experience, that doing things faster with fewer lines of code/statements in the engine is met with overt hostility.\n\nIt really depends. For a lot of development scaling to large numbers\nof users is never needed, and it's often more economical to develop\nquickly with a less efficient database layer. In my last job all our\nmain development was against a large transactional / relational db.\nBut some quick and dirty internal development used some very\ninefficient MVC methods but it only had to handle 45 users at a time,\nmax, and that was 45 users who accessed the system a few minutes at a\ntime.\n\nI've seen EVA systems that people tried to scale that were handling\nthousands of queries a second that when converted to real relational\ndbs needed dozens of queries a second to run, required a fraction of\ndb horsepower, and could scale to the same number of users with only\n1/10th to 1/100th the database underneath it. In those instances, you\nonly have to show the much higher efficiency to the people who pay for\nthe database servers.\n",
"msg_date": "Wed, 11 May 2011 15:44:37 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On 05/11/2011 02:53 PM, [email protected] wrote:\n\n> So, the $64 question: how did you find an engagement where, to bend\n> Shakespeare, \"first thing we do, is kill all the coders\" isn't\n> required?\n\nIt's just one of those things you have to explain. Not just how to fix \nit, but *why* doing so fixes it. It's also not really a fair expectation \nin a lot of ways. Even when a coder uses all SQL, their inexperience in \nthe engine can still ruin performance. We spend years getting to know \nPostgreSQL, or just general DB techniques. They do the same with coding. \nAnd unless they're a developer for a very graphics intensive project, \nthey're probably not well acquainted with set theory.\n\nJust today, I took a query like this:\n\n UPDATE customer c\n SET c.login_counter = a.counter\n FROM (SELECT session_id, count(*) as counter\n FROM session\n WHERE date_created >= CURRENT_DATE\n GROUP BY session_id) a\n WHERE c.process_date = CURRENT_DATE\n AND c.customer_id = a.session_id\n\nAnd suggested this instead:\n\n CREATE TEMP TABLE tmp_login_counts AS\n SELECT session_id, count(1) AS counter\n FROM auth_token_arc\n WHERE date_created >= CURRENT_DATE\n GROUP BY session_id\n\n UPDATE reporting.customer c\n SET login_counter = a.counter\n FROM tmp_login_counts a\n WHERE c.process_date = CURRENT_DATE\n AND c.customer_id = a.session_id\n\nThe original query, with our very large tables, ran for over *two hours* \nthanks to a nested loop iterating over the subquery. My replacement ran \nin roughly 30 seconds. If we were using a newer version of PG, we could \nhave used a CTE. But do you get what I mean? Temp tables are a fairly \ncommon technique, but how would a coder know about CTEs? They're pretty \nnew, even to *us*.\n\nWe hold regular Lunch'n'Learns for our developers to teach them the \ngood/bad of what they're doing, and that helps significantly. Even hours \nlater, I see them using the techniques I showed them. The one I'm \npresenting soon is entitled '10 Ways to Ruin Performance' and they're \nall specific examples taken from day-to-day queries and jobs here, all \nfrom different categories of mistake. It's just a part of being a good DBA.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Wed, 11 May 2011 17:04:50 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "---- Original message ----\n>Date: Wed, 11 May 2011 17:04:50 -0500\n>From: [email protected] (on behalf of Shaun Thomas <[email protected]>)\n>Subject: Re: [PERFORM] Postgres refusing to use >1 core \n>To: <[email protected]>\n>Cc: Scott Marlowe <[email protected]>,Craig Ringer <[email protected]>,Aren Cambre <[email protected]>,<[email protected]>\n>\n>On 05/11/2011 02:53 PM, [email protected] wrote:\n>\n>> So, the $64 question: how did you find an engagement where, to bend\n>> Shakespeare, \"first thing we do, is kill all the coders\" isn't\n>> required?\n>\n>It's just one of those things you have to explain. Not just how to fix \n>it, but *why* doing so fixes it. It's also not really a fair expectation \n>in a lot of ways. Even when a coder uses all SQL, their inexperience in \n>the engine can still ruin performance. We spend years getting to know \n>PostgreSQL, or just general DB techniques. They do the same with coding. \n>And unless they're a developer for a very graphics intensive project, \n>they're probably not well acquainted with set theory.\n>\n>Just today, I took a query like this:\n>\n> UPDATE customer c\n> SET c.login_counter = a.counter\n> FROM (SELECT session_id, count(*) as counter\n> FROM session\n> WHERE date_created >= CURRENT_DATE\n> GROUP BY session_id) a\n> WHERE c.process_date = CURRENT_DATE\n> AND c.customer_id = a.session_id\n>\n>And suggested this instead:\n>\n> CREATE TEMP TABLE tmp_login_counts AS\n> SELECT session_id, count(1) AS counter\n> FROM auth_token_arc\n> WHERE date_created >= CURRENT_DATE\n> GROUP BY session_id\n>\n> UPDATE reporting.customer c\n> SET login_counter = a.counter\n> FROM tmp_login_counts a\n> WHERE c.process_date = CURRENT_DATE\n> AND c.customer_id = a.session_id\n>\n>The original query, with our very large tables, ran for over *two hours* \n>thanks to a nested loop iterating over the subquery. My replacement ran \n>in roughly 30 seconds. If we were using a newer version of PG, we could \n>have used a CTE. But do you get what I mean? Temp tables are a fairly \n>common technique, but how would a coder know about CTEs? They're pretty \n>new, even to *us*.\n>\n>We hold regular Lunch'n'Learns for our developers to teach them the \n>good/bad of what they're doing, and that helps significantly. Even hours \n>later, I see them using the techniques I showed them. The one I'm \n>presenting soon is entitled '10 Ways to Ruin Performance' and they're \n>all specific examples taken from day-to-day queries and jobs here, all \n>from different categories of mistake. It's just a part of being a good DBA.\n>\n>-- \n>Shaun Thomas\n>OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n>312-676-8870\n>[email protected]\n>\n>______________________________________________\n>\n>See http://www.peak6.com/email_disclaimer.php\n>for terms and conditions related to this email\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\nYou're (both) fortunate to have Suits and colleagues who are open to doing this A Better Way. Bless you.\n\nRegards,\nRobert\n",
"msg_date": "Wed, 11 May 2011 19:07:57 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1\n core"
},
{
"msg_contents": "On 5/11/11 3:04 PM, Shaun Thomas wrote:\n> The original query, with our very large tables, ran for over *two hours*\n> thanks to a nested loop iterating over the subquery. My replacement ran\n> in roughly 30 seconds. If we were using a newer version of PG, we could\n> have used a CTE. But do you get what I mean? Temp tables are a fairly\n> common technique, but how would a coder know about CTEs? They're pretty\n> new, even to *us*.\n\nFor that matter, it would be even better if PostgreSQL realized that a\nmaterialize of the subquery was a better execution plan, and just did it\nfor you.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 11 May 2011 18:14:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n> I suspect your app is doing lots of tiny single-row queries instead of\n> efficiently batching things. It'll be wasting huge amounts of time\n> waiting for results. Even if every query is individually incredibly\n> fast, with the number of them you seem to be doing you'll lose a LOT of\n> time if you loop over lots of little SELECTs.\n>\n\nSo here's what's going on.\n\nI have a table of about 12,000,000 traffic tickets written by the Texas\nDepartment of Public Safety (TxDPS). Each ticket has a route name and a\nreference marker. On Interstate highways, reference marker = mile post. On\nall other roads, from US highways down to Farm to Market roads, the\nreference marker is based on a grid superimposed over the state. Basically\nthat reference marker increments as the road crosses a grid line, so unless\nthe road is perfectly N-S or E-W, these reference markers are more than a\nmile apart.\n\nI have a separate table with data from the Texas Department of\nTransportation (TxDOT). It is a database of almost all the state's reference\nmarkers, along with latitude and longitude for each.\n\nI am trying to geolocate each ticket by correlating the ticket's\nroute/reference marker to the same in the TxDOT database. And it's not\nstraightforward for a few reasons:\n\n*1. TxDPS and TxDOT formats are different.*\n\nTxDPS uses 1-5 to denote route type. 1 = Interstate. 2 = US or any state\nhighway except Farm to Market. 3 = Farm to Market, 4 = county road, 5 =\nlocal road. So if the route name is 0071 and route type is 2, it could mean\nUS 71 or TX 71, both of which really exist in Texas but are on different\nparts of the state.\n\nI haven't proven it yet, but it is possible that no two routes of the same\nnumber are in the same county. You wouldn't find both TX 71 and US 71 in the\nsame county.\n\nFor now, I am looking up the TxDOT database based on route type, name, and\ncounty, and I may need to repeat the lookup until I get a match.\n\nIn the above example, if the ticket is written for route_name = 0071,\nroute_type = 2, and county = 206, then I need to do searches against the\nTxDOT database for:\n\n 1. rte_nm = 'US71' AND county_num='206'\n 2. rte_nm = 'SH71' AND county_num='206'\n 3. rte_nm = 'UA71' AND county_num='206'\n 4. rte_nm = 'UP71' AND county_num='206'\n 5. ...\n\n*2. Not TxDPS reference markers correspond to TxDOT reference markers.*\n\nNow, if I've matched a route, I have to find the reference marker.\n\nThe TxDOT database is pretty good but not 100% complete, so some TxDPS\ntickets' reference markers may not exist in the TxDOT table. Plus, it's\npossible that some TxDPS tickets have the wrong marker.\n\nTo compensate, I am looking for the closest reference marker along the route\nthat is not more than 50 marker units away, either direction. I've again\nimplemented that with multiple queries, where I don't stop until I find a\nmatch. Suppose I am searching for reference marker 256 on TX 71. The queries\nwill be like this:\n\n 1. rte_nm = 'SH71' AND rm = '256' (base marker)\n 2. rte_nm = 'SH71' AND rm = '257' (+1)\n 3. rte_nm = 'SH71' AND rm = '255' (-1)\n 4. rte_nm = 'SH71' AND rm = '258' (+2)\n 5. rte_nm = 'SH71' AND rm = '254' (-2)\n 6. ...\n 7. rte_nm = 'SH71' AND rm = '306' (+50)\n 8. rte_nm = 'SH71' AND rm = '206' (-50)\n\nAssuming a matching route name was found in the prior step, the app will\nhave 1 to 101 of these queries for each ticket.\n\nAssuming steps 1 and 2 above worked out, now I have a reference marker. 
So I\nwrite to a third table that has four columns:\n\n   1. *HA_Arrest_Key* (varchar(18) that refers back to the TxDPS tickets\n   table\n   2. *gid* (integer that refers to the unique identity of the reference\n   marker in the TxDOT table)\n   3. *distance* (integer that is the distance, in reference markers,\n   between that noted in the TxDPS ticket and the nearest marker found in the\n   TxDOT table)\n   4. *hasLatLong* (Boolean that is true if TxDPS also recorded latitude and\n   longitude for the ticket, presumably from an in-car device. These don't\n   appear to be that accurate, plus a substantial portion of tickets have no\n   lat/long.)\n\nRight now, I am doing a separate INSERT for each of the 12,000,000 rows\ninserted into this table.\n\nI guess the app is chatty like you suggest? HOWEVER, if I am reading system\nactivity correctly, the master thread that is going through the 12,000,000\ntickets appears to have its own Postgres process, and based on how quickly\nRAM usage initially shoots up the first ~60 seconds or so the app runs, it\nmay be reading all these rows into memory. But I am consulting with Npgsql\ndevelopers separately to make sure I am really understanding correctly. They\nsuspect that the PLINQ stuff (basically \"multithreading in a can\") may not\nbe dispatching threads as expected because it may be misreading things.\n\nBy using a producer/consumer model like that you can ensure that thread\n> 1 is always talking to the database, keeping Pg busy, and thread 2 is\n> always working the CPUs.\n\n\nThanks for the example and illustration.\n\n... or you can have each worker fetch its own chunks of rows (getting\n> rid of the producer/consumer split) using its own connection and just\n> have lots more workers to handle all the wasted idle time. A\n> producer/consumer approach will probably be faster, though.\n>\n\nThat's what PLINQ is *supposed* to do. In theory. :-) Working with Npgsql\nfolks to see if something is tripping it up.\n\nAren",
"msg_date": "Wed, 11 May 2011 22:17:00 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
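A small sketch of what batching the 12,000,000 single-row INSERTs described above could look like on the SQL side. The table name ticket_marker and the sample values are hypothetical (the real table name isn't given in the thread); the column names follow the message.

    -- Many rows per INSERT instead of one round-trip per row; real batches
    -- would carry hundreds or thousands of rows per statement.
    INSERT INTO ticket_marker (ha_arrest_key, gid, distance, haslatlong)
    VALUES ('TICKET-0000001', 101, 0, false),
           ('TICKET-0000002', 102, 3, true);

    -- For a one-off bulk load, COPY from a prepared file is usually faster still:
    -- COPY ticket_marker (ha_arrest_key, gid, distance, haslatlong)
    --     FROM '/tmp/ticket_markers.csv' CSV;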
{
"msg_contents": ">\n> > Using one thread, the app can do about 111 rows per second, and it's\n> > only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /\n> > 111 rows per second ~= 30 hours.\n>\n> I don't know how I missed that. You ARE maxing out one cpu core, so\n> you're quite right that you need more threads unless you can make your\n> single worker more efficient.\n>\n\nAnd the problem is my app already has between 20 and 30 threads. Something\nabout C#'s PLINQ may not be working as intended...\n\nAren\n\n> Using one thread, the app can do about 111 rows per second, and it's\n\n\n> only exercising 1.5 of 8 CPU cores while doing this. 12,000,000 rows /\n> 111 rows per second ~= 30 hours.\n\nI don't know how I missed that. You ARE maxing out one cpu core, so\nyou're quite right that you need more threads unless you can make your\nsingle worker more efficient.And the problem is my app already has between 20 and 30 threads. Something about C#'s PLINQ may not be working as intended...\n\nAren",
"msg_date": "Wed, 11 May 2011 22:18:12 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n>\n> I suspect your app is doing lots of tiny single-row queries instead of\n>> efficiently batching things. It'll be wasting huge amounts of time\n>> waiting for results. Even if every query is individually incredibly\n>> fast, with the number of them you seem to be doing you'll lose a LOT of\n>> time if you loop over lots of little SELECTs.\n>>\n>\n> Using unix sockets, you can expect about 10-20.000 queries/s on small\n> simple selects per core, which is quite a feat. TCP adds overhead, so it's\n> slower. Over a network, add ping time.\n>\n\nI'm talking to a Postgres on localhost, so in theory, I ought to be getting\nreally good throughput, but again, the problem may be with the way C#'s\nPLINQ \"multithreading in a can\" is managing things.\n\nAren\n\n\n\nI suspect your app is doing lots of tiny single-row queries instead of\nefficiently batching things. It'll be wasting huge amounts of time\nwaiting for results. Even if every query is individually incredibly\nfast, with the number of them you seem to be doing you'll lose a LOT of\ntime if you loop over lots of little SELECTs.\n\n\nUsing unix sockets, you can expect about 10-20.000 queries/s on small simple selects per core, which is quite a feat. TCP adds overhead, so it's slower. Over a network, add ping time.\nI'm talking to a Postgres on localhost, so in theory, I ought to be getting really good throughput, but again, the problem may be with the way C#'s PLINQ \"multithreading in a can\" is managing things.\nAren",
"msg_date": "Wed, 11 May 2011 22:20:51 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On Wed, May 11, 2011 at 9:20 PM, Aren Cambre <[email protected]> wrote:\n>> Using unix sockets, you can expect about 10-20.000 queries/s on small\n>> simple selects per core, which is quite a feat. TCP adds overhead, so it's\n>> slower. Over a network, add ping time.\n>\n> I'm talking to a Postgres on localhost, so in theory, I ought to be getting\n> really good throughput, but again, the problem may be with the way C#'s\n> PLINQ \"multithreading in a can\" is managing things.\n\nlocal tcp is gonna be slower not faster than unix sockets, not faster.\n But the big issue is that you need to exlpore doing the work in a\nlarge set not iteratively. Operations on sets are often much faster\nin aggregate.\n",
"msg_date": "Wed, 11 May 2011 21:35:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On 5/11/2011 9:17 PM, Aren Cambre wrote:\n>\n> So here's what's going on.\n>\n<snip>\n\nIf I were doing this, considering the small size of the data set, I'd \nread all the data into memory.\nProcess it entirely in memory (with threads to saturate all the \nprocessors you have).\nThen write the results to the DB.\n\n\n",
"msg_date": "Wed, 11 May 2011 22:25:45 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "12.05.11 06:18, Aren Cambre ???????(??):\n>\n> > Using one thread, the app can do about 111 rows per second, and it's\n> > only exercising 1.5 of 8 CPU cores while doing this. 12,000,000\n> rows /\n> > 111 rows per second ~= 30 hours.\n>\n> I don't know how I missed that. You ARE maxing out one cpu core, so\n> you're quite right that you need more threads unless you can make your\n> single worker more efficient.\n>\n>\n> And the problem is my app already has between 20 and 30 threads. \n> Something about C#'s PLINQ may not be working as intended...\n>\nHave you checked that you are really doing fetch and processing in \nparallel? Dunno about C#, but under Java you have to make specific \nsettings (e.g. setFetchSize) or driver will fetch all the data on query \nrun. Check time needed to fetch first row from the query.\n\nBest regards, Vitalii Tymchyshyn\n\n\n\n\n\n\n\n 12.05.11 06:18, Aren Cambre написав(ла):\n \n\n\n> Using one thread, the app can do about\n 111 rows per second, and it's\n > only exercising 1.5 of 8 CPU cores while doing this.\n 12,000,000 rows /\n > 111 rows per second ~= 30 hours.\n\n\n I don't know how I missed that. You ARE maxing out one cpu\n core, so\n you're quite right that you need more threads unless you can\n make your\n single worker more efficient.\n\n\n\nAnd the problem is my app already has between 20 and 30\n threads. Something about C#'s PLINQ may not be working as\n intended...\n\n\n\n Have you checked that you are really doing fetch and processing in\n parallel? Dunno about C#, but under Java you have to make specific\n settings (e.g. setFetchSize) or driver will fetch all the data on\n query run. Check time needed to fetch first row from the query.\n\n Best regards, Vitalii Tymchyshyn",
"msg_date": "Thu, 12 May 2011 10:28:49 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
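On the same point about drivers fetching an entire result set at once: the server-side counterpart is a cursor, which lets a client stream a large result in chunks. A minimal sketch, with a hypothetical tickets table standing in for the real one:

    BEGIN;
    DECLARE ticket_cur CURSOR FOR
        SELECT * FROM tickets;      -- hypothetical source table
    FETCH 1000 FROM ticket_cur;     -- repeat until it returns no rows
    CLOSE ticket_cur;
    COMMIT;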
{
"msg_contents": "On Wed, 2011-05-11 at 17:04 -0500, Shaun Thomas wrote:\n> We hold regular Lunch'n'Learns for our developers to teach them the \n> good/bad of what they're doing, and that helps significantly. Even\n> hours later, I see them using the techniques I showed them. The one\n> I'm presenting soon is entitled '10 Ways to Ruin Performance' and\n> they're all specific examples taken from day-to-day queries and jobs\n> here, all from different categories of mistake. It's just a part of\n> being a good DBA.\n\nDo you happen to produce slides for these lunch n learns or are they\nmore informal than that? I guess you can work out where I'm going with\nthis ;)\n\n-- \nMichael Graham <[email protected]>\n\n\n",
"msg_date": "Thu, 12 May 2011 09:30:33 +0100",
"msg_from": "Michael Graham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On 05/12/2011 03:30 AM, Michael Graham wrote:\n\n> Do you happen to produce slides for these lunch n learns or are they\n> more informal than that? I guess you can work out where I'm going with\n> this ;)\n\nOh of course. I use rst2s5 for my stuff, so I have the slideshow and \nalso generate a PDF complete with several paragraphs of explanation I \ndistribute after the presentation itself. I have two of them now, but \nI'll probably have a third in a couple months.\n\nMy next topic will probably be geared toward actual DBAs that might be \nintermediate level. Things like, what happens to an OLAP server that \nundergoes maintenance and experiences rapid (temporarily exponential) \nTPS increase. How that can affect the disk subsystem, how to recover, \nhow to possibly bootstrap as a temporary fix, etc. Certainly things I \nwould have liked to know before seeing them. I'm going to call it \"Your \nDatabase Probably Hates You.\" ;)\n\nI have a tendency to enjoy \"stories from the field,\" and I've got more \nthan a few where I've saved a database from certain death. Sometimes \nit's tweaking a few config settings, sometimes it's new hardware based \non system monitoring or allocation tests. Little things Senior DBAs \nmight know after experiencing them, or reading lists like this one.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 12 May 2011 08:06:07 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> On 5/11/11 3:04 PM, Shaun Thomas wrote:\n>> The original query, with our very large tables, ran for over *two hours*\n>> thanks to a nested loop iterating over the subquery. My replacement ran\n>> in roughly 30 seconds. If we were using a newer version of PG, we could\n>> have used a CTE. But do you get what I mean? Temp tables are a fairly\n>> common technique, but how would a coder know about CTEs? They're pretty\n>> new, even to *us*.\n\n> For that matter, it would be even better if PostgreSQL realized that a\n> materialize of the subquery was a better execution plan, and just did it\n> for you.\n\nIt does. I was a bit surprised that Shaun apparently got a plan that\ndidn't include a materialize step, because when I test a similar query\nhere, I get:\n1. a hash join, until I turn off enable_hashjoin; then\n2. a merge join, until I turn off enable_mergejoin; then\n3. a nestloop with materialize on the subquery scan.\nIn 9.0 and up I can get a nestloop without materialize by also turning\noff enable_material, but pre-9.0 there's no such option ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 May 2011 10:51:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core "
},
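A minimal sketch of the experiment Tom describes, using the query shape from Shaun's earlier message (customer and session are his anonymized names). Re-running the EXPLAIN after each SET shows the planner falling back from a hash join to a merge join, and then to a nestloop with a Materialize node.

    SET enable_hashjoin = off;
    SET enable_mergejoin = off;
    EXPLAIN
    SELECT c.*
      FROM customer c
      JOIN (SELECT session_id, count(*) AS counter
              FROM session
             WHERE date_created >= CURRENT_DATE
             GROUP BY session_id) a ON c.customer_id = a.session_id;
    RESET enable_hashjoin;
    RESET enable_mergejoin;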
{
"msg_contents": "On 05/12/2011 09:51 AM, Tom Lane wrote:\n\n> It does. I was a bit surprised that Shaun apparently got a plan that\n> didn't include a materialize step, because when I test a similar query\n> here, I get:\n\nRemember when I said \"old version\" that prevented us from using CTEs?\nWe're still on 8.2 (basically, we're waiting for EnterpriseDB 9.0).\nIt's basically calculating the group aggregation wrong, but is that\nenough to trigger it to go nuts?\n\nSELECT c.*\n FROM customer c\n JOIN (SELECT session_id, count(1) as counter\n FROM session\n WHERE date_created >= '2011-05-11 05:00'\n AND date_created < '2011-05-11 06:00'\n AND from_interface = 'UNKNOWN'\n GROUP BY 1) a ON (c.customer_id = a.session_id)\n WHERE c.process_date = CURRENT_DATE - interval '1 day'\n AND c.row_out IS NULL;\n\nSo sayeth the planner:\n\n Nested Loop (cost=167.49..2354.62 rows=6 width=237) (actual time=43.949..166858.604 rows=168 loops=1)\n -> GroupAggregate (cost=167.49..176.97 rows=2 width=8) (actual time=1.042..2.827 rows=209 loops=1)\n -> Sort (cost=167.49..170.64 rows=1260 width=8) (actual time=1.037..1.347 rows=230 loops=1)\n Sort Key: session.session_id\n -> Index Scan using idx_session_date_created on session (cost=0.00..102.61 rows=1260 width=8) (actual time=0.044.\n.0.690 rows=230 loops=1)\n Index Cond: ((date_created >= '11-MAY-11 05:00:00'::timestamp without time zone) AND (date_created < '11-MAY-11 06:00:00'::\ntimestamp without time zone))\n Filter: ((from_interface)::text = 'UNKNOWN'::text)\n -> Index Scan using idx_customer_customer_id on customer c (cost=0.00..1088.78 rows=3 width=237) (actual time=19.820..798.348 rows=1 loops=\n209)\n Index Cond: (c.customer_id = a.session_id)\n Filter: ((process_date = (('now'::text)::date - '@ 1 day'::interval)) AND (row_out IS NULL))\n Total runtime: 166859.040 ms\n\nThat one hour extract is much, much slower than this:\n\nSELECT 1\n FROM customer c\n JOIN (SELECT session_id, count(*) as counter\n FROM session\n WHERE date_created >= '2011-05-08'\n GROUP BY 1) a ON (c.customer_id = a.session_id)\n WHERE c.process_date = CURRENT_DATE\n AND c.row_out IS NULL;\n\nWhich gives this plan:\n\n Merge Join (cost=244565.52..246488.78 rows=377 width=0) (actual time=1958.781..2385.667 rows=22205 loops=1)\n Merge Cond: (a.session_id = c.customer_id)\n -> GroupAggregate (cost=19176.22..20275.99 rows=271 width=8) (actual time=1142.179..1459.779 rows=26643 loops=1)\n -> Sort (cost=19176.22..19541.68 rows=146184 width=8) (actual time=1142.152..1374.328 rows=179006 loops=1)\n Sort Key: session.session_id\n -> Index Scan using idx_session_date_created on session (cost=0.00..6635.51 rows=146184 width=8) (actual time=0.0\n20..160.339 rows=179267 loops=1)\n Index Cond: (date_created >= '08-MAY-11 00:00:00'::timestamp without time zone)\n -> Sort (cost=225389.30..225797.47 rows=163267 width=8) (actual time=816.585..855.459 rows=155067 loops=1)\n Sort Key: c.customer_id\n -> Index Scan using idx_customer_rpt on customer c (cost=0.00..211252.93 rows=163267 width=8) (actual time=0.037..90.337 rows=155067 \nloops=1)\n Index Cond: (process_date = '10-MAY-11 00:00:00'::timestamp without time zone)\n Filter: (row_out IS NULL)\n\nBut make the inner query slightly smaller, and...\n\n Nested Loop (cost=13755.53..223453.98 rows=276 width=0)\n -> GroupAggregate (cost=13755.53..14558.26 rows=198 width=8)\n -> Sort (cost=13755.53..14022.28 rows=106700 width=8)\n Sort Key: session.session_id\n -> Index Scan using idx_session_date_created on session (cost=0.00..4844.37 rows=106700 
width=8)\n Index Cond: (date_created >= '09-MAY-11 00:00:00'::timestamp without time zone)\n -> Index Scan using idx_customer_customer_id on customer c (cost=0.00..1055.01 rows=1 width=8)\n Index Cond: (c.customer_id = a.session_id)\n Filter: ((process_date = '10-MAY-11 00:00:00'::timestamp without time zone) AND (row_out IS NULL))\n\nI didn't want to wait two hours for that to finish. ;) But the\nstats are all pretty darn close, so far as I can tell. The only\nthing that's off is the group aggregate... by about two orders\nof magnitude. So I just chalked it up to 8.2 being relatively\nhorrible, and punted to just using a temp table to trick the\noptimizer into doing it right.\n\nBut my greater point was that even doing it all in SQL doesn't\nalways work, which we all know. Use of EXPLAIN abounds, but that\ndoesn't necessarily mean a dev will know how to fix a bad plan.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 12 May 2011 10:48:55 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> On 05/12/2011 09:51 AM, Tom Lane wrote:\n>> It does. I was a bit surprised that Shaun apparently got a plan that\n>> didn't include a materialize step, because when I test a similar query\n>> here, I get:\n\n> Remember when I said \"old version\" that prevented us from using CTEs?\n> We're still on 8.2 (basically, we're waiting for EnterpriseDB 9.0).\n> It's basically calculating the group aggregation wrong, but is that\n> enough to trigger it to go nuts?\n\nHmm. As you say, the mistake it's making is a drastic underestimate of\nthe number of groups in the subquery, leading to a bad choice of join\nmethod. I find it odd that replacing the subquery with a temp table\nhelps, though, because (unless you stuck in an ANALYZE you didn't\nmention) it would have no stats at all about the number of groups in the\ntemp table. Maybe the default guess just happens to produce the more\ndesirable plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 May 2011 12:07:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core "
},
{
"msg_contents": "On 05/12/2011 11:07 AM, Tom Lane wrote:\n\n> I find it odd that replacing the subquery with a temp table helps,\n> though, because (unless you stuck in an ANALYZE you didn't mention)\n> it would have no stats at all about the number of groups in the temp\n> table.\n\nI did have an analyze initially for exactly that reason. But what I \nfound odd is that in my rush to execute this for the end of day reports, \nI forgot that step, and it still ran fine. I've found that the planner \ntends to treat un-analyzed tables somewhat pessimistically, which is \nfine by me.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 12 May 2011 11:11:21 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On Wed, May 11, 2011 at 9:17 PM, Aren Cambre <[email protected]> wrote:\n\n> *2. Not TxDPS reference markers correspond to TxDOT reference markers.*\n>\n> Now, if I've matched a route, I have to find the reference marker.\n>\n> The TxDOT database is pretty good but not 100% complete, so some TxDPS\n> tickets' reference markers may not exist in the TxDOT table. Plus, it's\n> possible that some TxDPS tickets have the wrong marker.\n>\n> To compensate, I am looking for the closest reference marker along the\n> route that is not more than 50 marker units away, either direction. I've\n> again implemented that with multiple queries, where I don't stop until I\n> find a match. Suppose I am searching for reference marker 256 on TX 71. The\n> queries will be like this:\n>\n> 1. rte_nm = 'SH71' AND rm = '256' (base marker)\n> 2. rte_nm = 'SH71' AND rm = '257' (+1)\n> 3. rte_nm = 'SH71' AND rm = '255' (-1)\n> 4. rte_nm = 'SH71' AND rm = '258' (+2)\n> 5. rte_nm = 'SH71' AND rm = '254' (-2)\n> 6. ...\n> 7. rte_nm = 'SH71' AND rm = '306' (+50)\n> 8. rte_nm = 'SH71' AND rm = '206' (-50)\n>\n> Assuming a matching route name was found in the prior step, the app will\n> have 1 to 101 of these queries for each ticket.\n>\n\nThis is a perfect example of a place where you could push some work out of\nthe application and into the database. You can consolidate your 1 to 101\nqueries into a single query. If you use:\n\nWHERE rte_nm='SH71' AND rm >= 206 AND rm <= 306 ORDER BY abs(rm - 256), rm -\n256 DESC LIMIT 1\n\nit will always return the same value as the first matching query from your\nlist, and will never have to make more than one trip to the database. Your\none trip might be slightly slower than any one of the single trips above,\nbut it will certainly be much faster in the case where you have to hit any\nsignificant % of your 101 potential queries.\n\n-Eric\n\nOn Wed, May 11, 2011 at 9:17 PM, Aren Cambre <[email protected]> wrote:\n2. Not TxDPS reference markers correspond to TxDOT reference markers.\nNow, if I've matched a route, I have to find the reference marker.The TxDOT database is pretty good but not 100% complete, so some TxDPS tickets' reference markers may not exist in the TxDOT table. Plus, it's possible that some TxDPS tickets have the wrong marker.\nTo compensate, I am looking for the closest reference marker along the route that is not more than 50 marker units away, either direction. I've again implemented that with multiple queries, where I don't stop until I find a match. Suppose I am searching for reference marker 256 on TX 71. The queries will be like this:\nrte_nm = 'SH71' AND rm = '256' (base marker)rte_nm = 'SH71' AND rm = '257' (+1)rte_nm = 'SH71' AND rm = '255' (-1)rte_nm = 'SH71' AND rm = '258' (+2)\nrte_nm = 'SH71' AND rm = '254' (-2)...rte_nm = 'SH71' AND rm = '306' (+50)rte_nm = 'SH71' AND rm = '206' (-50)Assuming a matching route name was found in the prior step, the app will have 1 to 101 of these queries for each ticket.\nThis is a perfect example of a place where you could push some work out of the application and into the database. You can consolidate your 1 to 101 queries into a single query. If you use:\nWHERE rte_nm='SH71' AND rm >= 206 AND rm <= 306 ORDER BY abs(rm - 256), rm - 256 DESC LIMIT 1 it will always return the same value as the first matching query from your list, and will never have to make more than one trip to the database. 
Your one trip might be slightly slower than any one of the single trips above, but it will certainly be much faster in the case where you have to hit any significant % of your 101 potential queries.\n-Eric",
"msg_date": "Thu, 12 May 2011 12:15:46 -0600",
"msg_from": "Eric McKeeth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
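Taking Eric's consolidation one step further, the whole per-ticket loop could in principle be collapsed into a single set-based statement. This is only a sketch: the tickets and markers names are assumptions rather than the real schema, and it treats rm as numeric even though the quoted queries compare it as text.

    SELECT DISTINCT ON (t.ha_arrest_key)
           t.ha_arrest_key,
           m.gid,
           abs(m.rm - t.rm) AS distance
      FROM tickets t
      JOIN markers m
        ON m.rte_nm = t.rte_nm
       AND m.rm BETWEEN t.rm - 50 AND t.rm + 50
     ORDER BY t.ha_arrest_key, abs(m.rm - t.rm), m.rm - t.rm DESC;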
{
"msg_contents": "Everyone,\n\nJust wanted to say thanks for your help with my performance question. You\nhave given me plenty of things to investigate. Further, I think the problem\nis almost certainly with my app, so I need to do more work there!\n\nI really like the idea of just loading everything in memory and then dumping\nit all out later. I have 6 GB RAM, so it should be plenty to handle this.\n\nAren Cambre\n\nEveryone,Just wanted to say thanks for your help with my performance question. You have given me plenty of things to investigate. Further, I think the problem is almost certainly with my app, so I need to do more work there!\nI really like the idea of just loading everything in memory and then dumping it all out later. I have 6 GB RAM, so it should be plenty to handle this.Aren Cambre",
"msg_date": "Thu, 12 May 2011 13:25:04 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n> This is a perfect example of a place where you could push some work out of\n> the application and into the database. You can consolidate your 1 to 101\n> queries into a single query. If you use:\n>\n> WHERE rte_nm='SH71' AND rm >= 206 AND rm <= 306 ORDER BY abs(rm - 256), rm\n> - 256 DESC LIMIT 1\n>\n> it will always return the same value as the first matching query from your\n> list, and will never have to make more than one trip to the database. Your\n> one trip might be slightly slower than any one of the single trips above,\n> but it will certainly be much faster in the case where you have to hit any\n> significant % of your 101 potential queries.\n>\n\nTHANKS!! I've been obsessing so much about parallelism that I hadn't spent\nmuch time finding better queries.\n\nAren\n\nThis is a perfect example of a place where you could push some work out of the application and into the database. You can consolidate your 1 to 101 queries into a single query. If you use:\nWHERE rte_nm='SH71' AND rm >= 206 AND rm <= 306 ORDER BY abs(rm - 256), rm - 256 DESC LIMIT 1 it will always return the same value as the first matching query from your list, and will never have to make more than one trip to the database. Your one trip might be slightly slower than any one of the single trips above, but it will certainly be much faster in the case where you have to hit any significant % of your 101 potential queries.\nTHANKS!! I've been obsessing so much about parallelism that I hadn't spent much time finding better queries.Aren",
"msg_date": "Thu, 12 May 2011 13:27:44 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "Just want to again say thanks for this query. It seriously sped up part of\nmy program.\n\nAren\n\nOn Thu, May 12, 2011 at 1:27 PM, Aren Cambre <[email protected]> wrote:\n\n> This is a perfect example of a place where you could push some work out of\n>> the application and into the database. You can consolidate your 1 to 101\n>> queries into a single query. If you use:\n>>\n>> WHERE rte_nm='SH71' AND rm >= 206 AND rm <= 306 ORDER BY abs(rm - 256), rm\n>> - 256 DESC LIMIT 1\n>>\n>> it will always return the same value as the first matching query from your\n>> list, and will never have to make more than one trip to the database. Your\n>> one trip might be slightly slower than any one of the single trips above,\n>> but it will certainly be much faster in the case where you have to hit any\n>> significant % of your 101 potential queries.\n>>\n>\n> THANKS!! I've been obsessing so much about parallelism that I hadn't spent\n> much time finding better queries.\n>\n> Aren\n>\n\nJust want to again say thanks for this query. It seriously sped up part of my program.ArenOn Thu, May 12, 2011 at 1:27 PM, Aren Cambre <[email protected]> wrote:\n\nThis is a perfect example of a place where you could push some work out of the application and into the database. You can consolidate your 1 to 101 queries into a single query. If you use:\nWHERE rte_nm='SH71' AND rm >= 206 AND rm <= 306 ORDER BY abs(rm - 256), rm - 256 DESC LIMIT 1 it will always return the same value as the first matching query from your list, and will never have to make more than one trip to the database. Your one trip might be slightly slower than any one of the single trips above, but it will certainly be much faster in the case where you have to hit any significant % of your 101 potential queries.\nTHANKS!! I've been obsessing so much about parallelism that I hadn't spent much time finding better queries.Aren",
"msg_date": "Sat, 21 May 2011 22:32:30 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "Just wanted to again say thanks for everyone's help.\n\nThe main problem was that my program was running in serial, not parallel,\neven though I thought I used a textbook example of PLINQ. Your assistance\nhelped me get to the point where I could conclusively determine everything\nwas running in serial. It was more obvious than I realized.\n\nThanks to help through\nhttp://stackoverflow.com/questions/6086111/plinq-on-concurrentqueue-isnt-multithreading,\nI have switched to the .NET Framework's Task Parallel Library, and it's\nslamming the 8 cores hard now! And there's a bunch of concurrent connections\nto Postgres. :-)\n\nAren\n\nOn Mon, May 9, 2011 at 4:23 PM, Aren Cambre <[email protected]> wrote:\n\n> I have a multi-threaded app. It uses ~22 threads to query Postgres.\n>\n> Postgres won't use more than 1 CPU core. The 22-threaded app only has 3%\n> CPU utilization because it's mostly waiting on Postgres.\n>\n> Here's the details:\n>\n> The app has a \"main\" thread that reads table A's 11,000,000 rows, one at a\n> time. The main thread spawns a new thread for each row in table A's data.\n> This new thread:\n>\n> 1. Opens a connection to the DB.\n> 2. Does some calculations on the data, including 1 to 102 SELECTs on\n> table B.\n> 3. With an INSERT query, writes a new row to table C.\n> 4. Closes the connection.\n> 5. Thread dies. Its data is garbage collected eventually.\n>\n> Physical/software details:\n>\n> - Core i7 processor--4 physical cores, but OS sees 8 cores\n> via hyper-threading\n> - 7200 RPM 500 GB HDD\n> - About 1/2 total RAM is free during app execution\n> - Windows 7 x64\n> - Postgres 9.0.4 32-bit (32-bit required for PostGIS)\n> - App is C# w/ .NET 4.0. PLINQ dispatches threads. Npgsql is Postgres\n> connection tool.\n>\n> At first, the app pounds all 8 cores. But it quickly tapers off, and only 1\n> core that's busy. The other 7 cores are barely doing a thing.\n>\n> Postgres has 9 open processes. 1 process was slamming that 1 busy core. The\n> other 8 Postgres processes were alive but idle.\n>\n> Each thread creates its own connection. It's not concurrently shared with\n> the main thread or any other threads. I haven't disabled connection pooling;\n> when a thread closes a connection, it's technically releasing it into a pool\n> for later threads to use.\n>\n> Disk utilization is low. The HDD light is off much more than it is on, and\n> a review of total HDD activity put it between 0% and 10% of total capacity.\n> The HDD busy indicator LED would regularly flicker every 0.1 to 0.3 seconds.\n>\n> The app runs 2 different queries on table B. The 1st query is run once, the\n> 2nd query can be run up to 101 times. Table C has redundant indexes: every\n> column referenced in the SQL WHERE clauses for both queries are indexed\n> separately and jointly. E.g., if query X references columns Y and Z, there\n> are 3 indexes:\n>\n> 1. An index for Y\n> 2. An index for Z\n> 3. An index for Y and Z\n>\n> Table C is simple. It has four columns: two integers, a varchar(18), and a\n> boolean. It has no indexes. A primary key on the varchar(18) column is its\n> only constraint.\n>\n> A generalized version of my INSERT command for table C is:\n> *INSERT INTO raw.C VALUES (:L, :M, :N, :P)*\n>\n> I am using parameters to fill in the 4 values.\n>\n> I have verified table C manually, and correct data is being stored in it.\n>\n> Several Google searches suggest Postgres should use multiple cores\n> automatically. 
I've consulted with Npgsql's developer, and he didn't see how\n> Npgsql itself could force Postgres to one core. (See\n> http://pgfoundry.org/pipermail/npgsql-devel/2011-May/001123.html.)\n>\n> What can I do to improve this? Could I be inadvertently limiting Postgres\n> to one core?\n>\n> Aren Cambre\n>\n",
"msg_date": "Sun, 22 May 2011 09:08:26 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "Also, thanks for the advice on batching my queries. I am now using a very\nefficient bulk data read and write methods for Postgres.\n\nMy program bulk reads 100,000 rows, processes those rows (during which it\ndoes a few SELECTs), and then writes 100,000 rows at a time.\n\nIt cycles through this until it has processed all 12,000,000 rows.\n\nThis, plus the parallelism fix, will probably convert this 30 hour program\nto a <2 hour program.\n\nAren\n\nOn Sun, May 22, 2011 at 9:08 AM, Aren Cambre <[email protected]> wrote:\n\n> Just wanted to again say thanks for everyone's help.\n>\n> The main problem was that my program was running in serial, not parallel,\n> even though I thought I used a textbook example of PLINQ. Your assistance\n> helped me get to the point where I could conclusively determine everything\n> was running in serial. It was more obvious than I realized.\n>\n> Thanks to help through\n> http://stackoverflow.com/questions/6086111/plinq-on-concurrentqueue-isnt-multithreading,\n> I have switched to the .NET Framework's Task Parallel Library, and it's\n> slamming the 8 cores hard now! And there's a bunch of concurrent connections\n> to Postgres. :-)\n>\n> Aren\n>\n> On Mon, May 9, 2011 at 4:23 PM, Aren Cambre <[email protected]> wrote:\n>\n>> I have a multi-threaded app. It uses ~22 threads to query Postgres.\n>>\n>> Postgres won't use more than 1 CPU core. The 22-threaded app only has 3%\n>> CPU utilization because it's mostly waiting on Postgres.\n>>\n>> Here's the details:\n>>\n>> The app has a \"main\" thread that reads table A's 11,000,000 rows, one at a\n>> time. The main thread spawns a new thread for each row in table A's data.\n>> This new thread:\n>>\n>> 1. Opens a connection to the DB.\n>> 2. Does some calculations on the data, including 1 to 102 SELECTs on\n>> table B.\n>> 3. With an INSERT query, writes a new row to table C.\n>> 4. Closes the connection.\n>> 5. Thread dies. Its data is garbage collected eventually.\n>>\n>> Physical/software details:\n>>\n>> - Core i7 processor--4 physical cores, but OS sees 8 cores\n>> via hyper-threading\n>> - 7200 RPM 500 GB HDD\n>> - About 1/2 total RAM is free during app execution\n>> - Windows 7 x64\n>> - Postgres 9.0.4 32-bit (32-bit required for PostGIS)\n>> - App is C# w/ .NET 4.0. PLINQ dispatches threads. Npgsql is Postgres\n>> connection tool.\n>>\n>> At first, the app pounds all 8 cores. But it quickly tapers off, and only\n>> 1 core that's busy. The other 7 cores are barely doing a thing.\n>>\n>> Postgres has 9 open processes. 1 process was slamming that 1 busy core.\n>> The other 8 Postgres processes were alive but idle.\n>>\n>> Each thread creates its own connection. It's not concurrently shared with\n>> the main thread or any other threads. I haven't disabled connection pooling;\n>> when a thread closes a connection, it's technically releasing it into a pool\n>> for later threads to use.\n>>\n>> Disk utilization is low. The HDD light is off much more than it is on, and\n>> a review of total HDD activity put it between 0% and 10% of total capacity.\n>> The HDD busy indicator LED would regularly flicker every 0.1 to 0.3 seconds.\n>>\n>> The app runs 2 different queries on table B. The 1st query is run once,\n>> the 2nd query can be run up to 101 times. Table C has redundant indexes:\n>> every column referenced in the SQL WHERE clauses for both queries are\n>> indexed separately and jointly. E.g., if query X references columns Y and Z,\n>> there are 3 indexes:\n>>\n>> 1. An index for Y\n>> 2. 
An index for Z\n>>    3. An index for Y and Z\n>>\n>> Table C is simple. It has four columns: two integers, a varchar(18), and a\n>> boolean. It has no indexes. A primary key on the varchar(18) column is its\n>> only constraint.\n>>\n>> A generalized version of my INSERT command for table C is:\n>> *INSERT INTO raw.C VALUES (:L, :M, :N, :P)*\n>>\n>> I am using parameters to fill in the 4 values.\n>>\n>> I have verified table C manually, and correct data is being stored in it.\n>>\n>> Several Google searches suggest Postgres should use multiple cores\n>> automatically. I've consulted with Npgsql's developer, and he didn't see how\n>> Npgsql itself could force Postgres to one core. (See\n>> http://pgfoundry.org/pipermail/npgsql-devel/2011-May/001123.html.)\n>>\n>> What can I do to improve this? Could I be inadvertently limiting Postgres\n>> to one core?\n>>\n>> Aren Cambre\n>>\n>\n>\n",
"msg_date": "Sun, 22 May 2011 23:09:49 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": "On 23/05/11 12:09, Aren Cambre wrote:\n> Also, thanks for the advice on batching my queries. I am now using a\n> very efficient bulk data read and write methods for Postgres.\n> \n> My program bulk reads 100,000 rows, processes those rows (during which\n> it does a few SELECTs), and then writes 100,000 rows at a time.\n> \n> It cycles through this until it has processed all 12,000,000 rows.\n> \n> This, plus the parallelism fix, will probably convert this 30 hour\n> program to a <2 hour program.\n\nIt's always good to hear when these things work out. Thanks for\nreporting back.\n\nUsing the set-based nature of relational databases to your advantage,\nwriting smarter queries that do more work server-side with fewer\nround-trips, and effective batching can make a huge difference.\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 23 May 2011 13:22:51 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres refusing to use >1 core"
},
{
"msg_contents": ">\n> It's always good to hear when these things work out. Thanks for\n> reporting back.\n>\n> Using the set-based nature of relational databases to your advantage,\n> writing smarter queries that do more work server-side with fewer\n> round-trips, and effective batching can make a huge difference.\n>\n\nGlad I could be a good digital citizen! :-)\n\nCorrection: it's going to run for significantly more than 2 hours, but far\nless than 30 hours!\n\nI'm loving seeing the CPU meter showing all 8 of my (fake) cores being\npounded mercilessly!\n\nAren\n\nIt's always good to hear when these things work out. Thanks for\nreporting back.\n\nUsing the set-based nature of relational databases to your advantage,\nwriting smarter queries that do more work server-side with fewer\nround-trips, and effective batching can make a huge difference.Glad I could be a good digital citizen! :-)Correction: it's going to run for significantly more than 2 hours, but far less than 30 hours!\nI'm loving seeing the CPU meter showing all 8 of my (fake) cores being pounded mercilessly!Aren",
"msg_date": "Mon, 23 May 2011 09:06:48 -0500",
"msg_from": "Aren Cambre <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres refusing to use >1 core"
}
] |
[
{
"msg_contents": "I have 8-core server, I wanted to ask whether a query can be divided for \nmultiple processors or cores, if it could be what to do in postgresql\n\nThanks\n\nI have 8-core server, I wanted to ask whether a query can be divided for multiple processors or cores, if it could be what to do in postgresqlThanks",
"msg_date": "Tue, 10 May 2011 12:32:05 +0800 (SGT)",
"msg_from": "Didik Prasetyo <[email protected]>",
"msg_from_op": true,
"msg_subject": "partition query on multiple cores"
},
{
"msg_contents": "> I have 8-core server, I wanted to ask whether a query can be divided for\n> multiple processors or cores, if it could be what to do in postgresql\n\nNo, at this time (and for the foreseeable future), a single query will\nrun on a single core.\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Tue, 10 May 2011 08:06:38 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition query on multiple cores"
},
{
"msg_contents": "On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:\n\n>> I have 8-core server, I wanted to ask whether a query can be divided for\n>> multiple processors or cores, if it could be what to do in postgresql\n>\n> No, at this time (and for the foreseeable future), a single query will\n> run on a single core.\n\nIt can *kinda* be done. Take a look at GridSQL. It's really good for \nsplitting up reporting-like queries that benefit from parallel access of \nlarge tables. It's not exactly Hadoop, but I ran a test on a single \nsystem with two separate instances of PostgreSQL, and a single query \nover those two nodes cut execution time in half.\n\nIt's meant for server parallelism, so I wouldn't necessarily recommend \nsplitting your data up across nodes on the same server. But it seems to \ndeliver as promised when used in the right circumstances.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Tue, 10 May 2011 11:22:26 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition query on multiple cores"
},
{
"msg_contents": "Dne 10.5.2011 18:22, Shaun Thomas napsal(a):\n> On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:\n> \n>>> I have 8-core server, I wanted to ask whether a query can be divided for\n>>> multiple processors or cores, if it could be what to do in postgresql\n>>\n>> No, at this time (and for the foreseeable future), a single query will\n>> run on a single core.\n> \n> It can *kinda* be done. Take a look at GridSQL.\n\nOr pgpool-II, that can give you something similar.\n\nhttp://pgpool.projects.postgresql.org/\n\nregards\nTomas\n",
"msg_date": "Tue, 10 May 2011 20:57:47 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition query on multiple cores"
},
{
"msg_contents": "On Tue, May 10, 2011 at 12:22 PM, Shaun Thomas <[email protected]> wrote:\n\n> On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:\n>\n> I have 8-core server, I wanted to ask whether a query can be divided for\n>>> multiple processors or cores, if it could be what to do in postgresql\n>>>\n>>\n>> No, at this time (and for the foreseeable future), a single query will\n>> run on a single core.\n>>\n>\n> It can *kinda* be done. Take a look at GridSQL. It's really good for\n> splitting up reporting-like queries that benefit from parallel access of\n> large tables. It's not exactly Hadoop, but I ran a test on a single system\n> with two separate instances of PostgreSQL, and a single query over those two\n> nodes cut execution time in half.\n>\n> It's meant for server parallelism, so I wouldn't necessarily recommend\n> splitting your data up across nodes on the same server. But it seems to\n> deliver as promised when used in the right circumstances.\n>\n>\n\n\nYes, GridSQL is useful even in multi-core scenarios on a single server for\nquery parallelism. You can also use the same PostgreSQL instance (cluster),\nas the virtual node databases are named distinctly, which simplifies\nconfiguration.\n\n\nMason\n\nOn Tue, May 10, 2011 at 12:22 PM, Shaun Thomas <[email protected]> wrote:\nOn 05/10/2011 10:06 AM, Maciek Sakrejda wrote:\n\n\nI have 8-core server, I wanted to ask whether a query can be divided for\nmultiple processors or cores, if it could be what to do in postgresql\n\n\nNo, at this time (and for the foreseeable future), a single query will\nrun on a single core.\n\n\nIt can *kinda* be done. Take a look at GridSQL. It's really good for splitting up reporting-like queries that benefit from parallel access of large tables. It's not exactly Hadoop, but I ran a test on a single system with two separate instances of PostgreSQL, and a single query over those two nodes cut execution time in half.\n\nIt's meant for server parallelism, so I wouldn't necessarily recommend splitting your data up across nodes on the same server. But it seems to deliver as promised when used in the right circumstances.\n \nYes, GridSQL is useful even in multi-core scenarios on a single server for query parallelism. You can also use the same PostgreSQL instance (cluster), as the virtual node databases are named distinctly, which simplifies configuration.\nMason",
"msg_date": "Wed, 11 May 2011 07:56:51 -0400",
"msg_from": "Mason S <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition query on multiple cores"
},
{
"msg_contents": "On Tue, May 10, 2011 at 2:57 PM, Tomas Vondra <[email protected]> wrote:\n\n> Dne 10.5.2011 18:22, Shaun Thomas napsal(a):\n> > On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:\n> >\n> >>> I have 8-core server, I wanted to ask whether a query can be divided\n> for\n> >>> multiple processors or cores, if it could be what to do in postgresql\n> >>\n> >> No, at this time (and for the foreseeable future), a single query will\n> >> run on a single core.\n> >\n> > It can *kinda* be done. Take a look at GridSQL.\n>\n> Or pgpool-II, that can give you something similar.\n>\n> http://pgpool.projects.postgresql.org/\n>\n>\nLast time I tested parallelism in pgpool-II, I saw that if your query is\nfairly simple, pgpool-II will help. If it is more complex with joins and\naggregates, GridSQL will typically outperform it. GridSQL pushes down joins\nas much as possible, minimizes row shipping, and parallelizes aggregates and\ngrouping.\n\n\nMason Sharp\n\nOn Tue, May 10, 2011 at 2:57 PM, Tomas Vondra <[email protected]> wrote:\n\nDne 10.5.2011 18:22, Shaun Thomas napsal(a):\n> On 05/10/2011 10:06 AM, Maciek Sakrejda wrote:\n>\n>>> I have 8-core server, I wanted to ask whether a query can be divided for\n>>> multiple processors or cores, if it could be what to do in postgresql\n>>\n>> No, at this time (and for the foreseeable future), a single query will\n>> run on a single core.\n>\n> It can *kinda* be done. Take a look at GridSQL.\n\nOr pgpool-II, that can give you something similar.\n\nhttp://pgpool.projects.postgresql.org/\nLast time I tested parallelism in pgpool-II, I saw that if your query is fairly simple, pgpool-II will help. If it is more complex with joins and aggregates, GridSQL will typically outperform it. GridSQL pushes down joins as much as possible, minimizes row shipping, and parallelizes aggregates and grouping.\nMason Sharp",
"msg_date": "Wed, 11 May 2011 08:04:57 -0400",
"msg_from": "Mason S <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partition query on multiple cores"
}
] |
[
{
"msg_contents": "I am using Postgresql 8.2.13 and I found that most of the commits and insert\nor update statements are taking more than 4s in the db and the app\nperformance is slow for that.\nMy db settings are as follows;\nbgwriter_all_maxpages | 300 |\n bgwriter_all_percent | 15 |\n bgwriter_delay | 300 | ms\n bgwriter_lru_maxpages | 50 |\n bgwriter_lru_percent | 10 |\n\nSHOW checkpoint_segments ;\n checkpoint_segments\n---------------------\n 300\n(1 row)\n\n show work_mem ;\n work_mem\n----------\n 16MB\n(1 row)\n\n show checkpoint_timeout ;\n checkpoint_timeout\n--------------------\n 5min\n(1 row)\n\n show checkpoint_warning ;\n checkpoint_warning\n--------------------\n 30s\n(1 row)\n\nshow shared_buffers ;\n shared_buffers\n----------------\n 4GB\n(1 row)\n\n\nI have 32 gb RAM and its a 4*2=8 core processors.\nAny idea how to improve the performance?\n\nI am using Postgresql 8.2.13 and I found that most of the commits and insert or update statements are taking more than 4s in the db and the app performance is slow for that. My db settings are as follows;bgwriter_all_maxpages | 300 | \n bgwriter_all_percent | 15 | bgwriter_delay | 300 | ms bgwriter_lru_maxpages | 50 | bgwriter_lru_percent | 10 | SHOW checkpoint_segments ; checkpoint_segments ---------------------\n 300(1 row) show work_mem ; work_mem ---------- 16MB(1 row) show checkpoint_timeout ; checkpoint_timeout -------------------- 5min(1 row) show checkpoint_warning ;\n checkpoint_warning -------------------- 30s(1 row)show shared_buffers ; shared_buffers ---------------- 4GB(1 row)I have 32 gb RAM and its a 4*2=8 core processors.Any idea how to improve the performance?",
"msg_date": "Tue, 10 May 2011 13:01:04 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.2.13 commit is taking too much time"
},
{
"msg_contents": "\n> Any idea how to improve the performance?\n\nHmmm, I guess we'll need more info about resource usage (CPU, I/O, locks)\nused when the commit happens. Run these two commands\n\n$ iostat -x 1\n$ vmstat 1\n\nand then execute the commit. See what's causing problems. Is the drive\nutilization close to 100%? You've problems with disks (I'd bet this is the\ncause). Etc.\n\nThere's a very nice chapter about this in Greg's book.\n\nBTW what filesystem are you using? Ext3, ext4, reiserfs, xfs? I do\nremember there were some problems with sync, that some filesystems are\nunable to sync individual files and always sync everything (which is going\nto suck if you want to sync just the WAL).\n\nregards\nTomas\n\n",
"msg_date": "Tue, 10 May 2011 11:44:46 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: 8.2.13 commit is taking too much time"
},
{
"msg_contents": "On 05/10/2011 03:01 AM, AI Rumman wrote:\n> I am using Postgresql 8.2.13 and I found that most of the commits and \n> insert or update statements are taking more than 4s in the db and the \n> app performance is slow for that.\n> My db settings are as follows;\n> bgwriter_all_maxpages | 300 |\n> bgwriter_all_percent | 15 |\n> bgwriter_delay | 300 | ms\n> bgwriter_lru_maxpages | 50 |\n> bgwriter_lru_percent | 10 |\n\nReduce bgwriter_all_maxpages to 0, definitely, and you might drop \nbgwriter_lru_maxpages to 0 too. Making the background writer in \nPostgreSQL 8.2 do more work as you've tried here increases the amount of \nrepeated I/O done by a lot, without actually getting rid of any pauses. \nIt wastes a lot of I/O capacity instead, making the problems you're \nseeing worse.\n\n> shared_buffers\n> ----------------\n> 4GB\n>\n\nOn 8.2, shared_buffers should be no more than 128MB if you want to avoid \nlong checkpoint pauses. You might even find best performance at the \ndefault of 32MB.\n\n\n> I have 32 gb RAM and its a 4*2=8 core processors.\n> Any idea how to improve the performance?\n\nThere's nothing you can do here that will work better than upgrading to \n8.3. See \nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm for \nmore information. PostgreSQL 8.2 had serious problems with the sort of \npauses you're seeing back when systems had only 4GB of memory; you'll \nnever get rid of them on a server with 32GB of RAM on that version.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 10 May 2011 11:55:56 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.2.13 commit is taking too much time"
}
] |
[
{
"msg_contents": "AMD Opteron(tm) Processor 4174 HE vs Intel(R) Xeon(R) CPU E5345 @ 2.33GHz\n\nI'm wondering if there is a performance difference running postgres on\nfedora on AMD vs Intel (the 2 listed above).\n\nI have an 8 way Intel Xeon box and a 12way AMD box and was thinking\nabout migrating to the new AMD box, from the 4 year old Intel box. But\nI wasn't sure if there is some performance stats on AMD multi core\nprocs vs the Intels for DB applications?\n\nThanks\n\nTory\n",
"msg_date": "Tue, 10 May 2011 10:28:44 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question processor speed differences."
},
{
"msg_contents": "On 05/10/2011 01:28 PM, Tory M Blue wrote:\n> AMD Opteron(tm) Processor 4174 HE vs Intel(R) Xeon(R) CPU E5345 @ 2.33GHz\n>\n> I'm wondering if there is a performance difference running postgres on\n> fedora on AMD vs Intel (the 2 listed above).\n>\n> I have an 8 way Intel Xeon box and a 12way AMD box and was thinking\n> about migrating to the new AMD box, from the 4 year old Intel box. But\n> I wasn't sure if there is some performance stats on AMD multi core\n> procs vs the Intels for DB applications?\n> \n\nThe real limiting factor on CPU performance on your E5345 is how fast \nthe server can shuffle things back and forth to memory. The FB-DIMM \nDDR2-667MHz memory on that server will be hard pressed to clear 5GB/s of \nmemory access, probably less. That matter a lot when running in-memory \ndatabase tasks, where the server is constantly shuffling 8K pages of \ndata around.\n\nThe new AMD box will have DDR3-1333 Mhz and a much better memory \narchitecture to go with it. I'd expect 6 to 7GB/s out of a single core, \nand across multiple cores you might hit as much as 20GB/s if you have 4 \nchannels of memory in there. Rough guess, new server is at least twice \nas fast, and might even hit four times as fast.\n\nIf you have access to both boxes and can find a quiet period, you could \ntry running stream-scaling: https://github.com/gregs1104/stream-scaling \nto quantify for yourself just how big the speed difference in this \narea. That's correlated extremely well for me with PostgreSQL \nperformance on SELECT statements. If you're going to disk instead of \nbeing limited by the CPU, none of this matters though. Make sure you \nreally are waiting for the CPUs most of the time.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 10 May 2011 16:53:24 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question processor speed differences."
}
] |
[
{
"msg_contents": "\nWhile reading about NoSQL,\n\n> MongoDB let's you store and search JSON objects.In that case, you don't \n> need to have the same \"columns\" in each \"row\"\n\nThe following ensued. Isn't it cute ?\n\nCREATE TABLE mongo ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );\nINSERT INTO mongo (obj) SELECT ('a=>'||n||',key'||(n%10)||'=>'||n)::hstore \n FROM generate_series(1,100000) n;\n\nSELECT * FROM mongo LIMIT 10;\n id | obj\n----+-------------------------\n 1 | \"a\"=>\"1\", \"key1\"=>\"1\"\n 2 | \"a\"=>\"2\", \"key2\"=>\"2\"\n 3 | \"a\"=>\"3\", \"key3\"=>\"3\"\n 4 | \"a\"=>\"4\", \"key4\"=>\"4\"\n 5 | \"a\"=>\"5\", \"key5\"=>\"5\"\n 6 | \"a\"=>\"6\", \"key6\"=>\"6\"\n 7 | \"a\"=>\"7\", \"key7\"=>\"7\"\n 8 | \"a\"=>\"8\", \"key8\"=>\"8\"\n 9 | \"a\"=>\"9\", \"key9\"=>\"9\"\n 10 | \"a\"=>\"10\", \"key0\"=>\"10\"\n\nCREATE INDEX mongo_a ON mongo((obj->'a')) WHERE (obj->'a') IS NOT NULL;\nCREATE INDEX mongo_k1 ON mongo((obj->'key1')) WHERE (obj->'key1') IS NOT \nNULL;\nCREATE INDEX mongo_k2 ON mongo((obj->'key2')) WHERE (obj->'key2') IS NOT \nNULL;\nVACUUM ANALYZE mongo;\n\nSELECT * FROM mongo WHERE (obj->'key1')='271';\n id | obj\n-----+---------------------------\n 271 | \"a\"=>\"271\", \"key1\"=>\"271\"\n(1 ligne)\n\nEXPLAIN ANALYZE SELECT * FROM mongo WHERE (obj->'key1')='271';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Index Scan using mongo_k1 on mongo (cost=0.00..567.05 rows=513 width=36) \n(actual time=0.024..0.025 rows=1 loops=1)\n Index Cond: ((obj -> 'key1'::text) = '271'::text)\n Total runtime: 0.048 ms\n",
"msg_date": "Tue, 10 May 2011 19:56:03 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres NoSQL emulation"
},
{
"msg_contents": "On Tue, May 10, 2011 at 12:56 PM, Pierre C <[email protected]> wrote:\n>\n> While reading about NoSQL,\n>\n>> MongoDB let's you store and search JSON objects.In that case, you don't\n>> need to have the same \"columns\" in each \"row\"\n>\n> The following ensued. Isn't it cute ?\n>\n> CREATE TABLE mongo ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );\n> INSERT INTO mongo (obj) SELECT ('a=>'||n||',key'||(n%10)||'=>'||n)::hstore\n> FROM generate_series(1,100000) n;\n>\n> SELECT * FROM mongo LIMIT 10;\n> id | obj\n> ----+-------------------------\n> 1 | \"a\"=>\"1\", \"key1\"=>\"1\"\n> 2 | \"a\"=>\"2\", \"key2\"=>\"2\"\n> 3 | \"a\"=>\"3\", \"key3\"=>\"3\"\n> 4 | \"a\"=>\"4\", \"key4\"=>\"4\"\n> 5 | \"a\"=>\"5\", \"key5\"=>\"5\"\n> 6 | \"a\"=>\"6\", \"key6\"=>\"6\"\n> 7 | \"a\"=>\"7\", \"key7\"=>\"7\"\n> 8 | \"a\"=>\"8\", \"key8\"=>\"8\"\n> 9 | \"a\"=>\"9\", \"key9\"=>\"9\"\n> 10 | \"a\"=>\"10\", \"key0\"=>\"10\"\n>\n> CREATE INDEX mongo_a ON mongo((obj->'a')) WHERE (obj->'a') IS NOT NULL;\n> CREATE INDEX mongo_k1 ON mongo((obj->'key1')) WHERE (obj->'key1') IS NOT\n> NULL;\n> CREATE INDEX mongo_k2 ON mongo((obj->'key2')) WHERE (obj->'key2') IS NOT\n> NULL;\n> VACUUM ANALYZE mongo;\n>\n> SELECT * FROM mongo WHERE (obj->'key1')='271';\n> id | obj\n> -----+---------------------------\n> 271 | \"a\"=>\"271\", \"key1\"=>\"271\"\n> (1 ligne)\n>\n> EXPLAIN ANALYZE SELECT * FROM mongo WHERE (obj->'key1')='271';\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Index Scan using mongo_k1 on mongo (cost=0.00..567.05 rows=513 width=36)\n> (actual time=0.024..0.025 rows=1 loops=1)\n> Index Cond: ((obj -> 'key1'::text) = '271'::text)\n> Total runtime: 0.048 ms\n\nwhy even have multiple rows? just jam it all it there! :-D\n\nmerlin\n",
"msg_date": "Tue, 10 May 2011 16:32:01 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres NoSQL emulation"
},
{
"msg_contents": "On Tue, May 10, 2011 at 3:32 PM, Merlin Moncure <[email protected]> wrote:\n> why even have multiple rows? just jam it all it there! :-D\n\nExactly, serialize the object and stuff it into a simple key->value\ntable. Way more efficient than EAV.\n",
"msg_date": "Tue, 10 May 2011 21:13:31 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres NoSQL emulation"
},
{
"msg_contents": "\n> why even have multiple rows? just jam it all it there! :-D\n\nLOL\n\nBut seriously, when using an ORM to stuff an object hierarchy into a \ndatabase, you usually get problems with class inheritance, and all \nsolutions suck more or less (ie, you get a zillion tables, with assorted \npile of JOINs, or stinky key/attributes schemes where all attributes end \nup as TEXT, or a table with 200 columns, most of them being NULL for a \ngiven line).\n\nNoSQL guys say \"hey just use NoSQL !\".\n\nIn a (common) case where the classes have some fields in common and othen \nsearched, and that the DB needs to know about and access easily, those \nbecome columns, with indexes. Then the other fields which only occur in \nsome derived class and are not very interesting to the DB get shoved into \na hstore. The big bonus being that you use only one table, and the \"extra\" \nfields can still be accessed and indexed (but a little slower than a \nnormal column). However I believe hstore can only store TEXT values...\n\nCould be interesting. Or not.\n",
"msg_date": "Wed, 11 May 2011 13:14:46 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres NoSQL emulation"
}
] |
[
{
"msg_contents": "\n Hello everyone,\n\n I have the following scenario:\n There's a web service that updates some information in two tables,\nevery 5 minutes.\n In order to do this it will issue a select on the tables, get some\ndata, think about it, and then update it if necessary.\n\n Sometimes - about once every two weeks, I think, it will start using\nan extremely inefficient plan where it will loop on many results from\nthe large table instead of getting the few results from small table and\nlooping on those. \n The difference in performance is devastating - from 18 ms to 10-20\nseconds, and of course drags everything down.\n The situation will usually not resolve itself - but it will resolve\nafter i run \"ANALYZE party; ANALYZE big_table\" about... 3-5 times.\nInteresting.\n\n When the problem is occuring, it is completely reproducible using\nlocal psql - thus probably not a connector issue.\n I have tried to reconnect and to re-prepare the statement to allow it\nto choose a new plan after the 'first' analyze, but it didn't help.\n I have tried to increase ANALYZE statistics target on party_id (as the\njoin field) on both tables to 300, but it doesn't appear to help (not\neven with the frequency of incidents).\n\n\nThe select is as follows:\nprepare ps(varchar,varchar,varchar) as select party.party_id from party,\nbig_table where external_id = $1 and party.party_id = big_table.party_id\nand attr_name = $2 and attr_value = $3;\nPREPARE\nexecute ps('13','GroupId','testshop');\nparty_id\n----------\n 659178\n\nThe query will always return exactly one row.\n\nI hope this is enough information to start a discussion on how to avoid\nthis. The only reliable solution we've come up with so far is to split\nselects and do the join in Java, but this seems like a very unorthodox\nsolution and could cause other trouble down the road. 
\n\nThank you in advance,\nAndrei Prodan\nSystems Administator\n\n\ntestdb=# select count(1) from party where external_id='13';\ncount\n-------\n 4\n(1 row)\ntestdb=# select count(1) from big_table where attr_name='GroupId';\n count\n---------\n 1025867\n(1 row)\n\ntestdb=# select count(1) from big_table where attr_value='testshop';\n count\n--------\n 917704\n(1 row)\n\nTable party:\nRows: 1.8M\nTable size: 163 MB\nIndexes size: 465 MB\n\nTable big_table: \n- Frequently updated\nRows: 7.2M\nTable size: 672 MB\nIndexes size: 1731 MB\n\nGOOD PLAN:\ntestdb=# explain analyze execute ps('13','GroupId','testshop');\n QUERY\nPLAN\n\n------------------------------------------------------------------------\n-----------------------------\n--------------------------------------\n Nested Loop (cost=0.00..19.11 rows=1 width=7) (actual\ntime=2.662..18.388 rows=1 loops=1)\n -> Index Scan using partyext_id_idx on party (cost=0.00..8.47\nrows=1 width=7) (actual time=2.439\n..2.495 rows=4 loops=1)\n Index Cond: ((external_id)::text = ($1)::text)\n -> Index Scan using pk_big_table on big_table (cost=0.00..10.62\nrows=1 width=7) (act ual time=3.972..3.972 rows=0 loops=4)\n Index Cond: (((big_table.party_id)::text =\n(party.party_id)::text) AND ((party_attribu te.attr_name)::text =\n($2)::text))\n Filter: ((big_table.attr_value)::text = ($3)::text) Total\nruntime: 18.484 ms\n(7 rows)\n\nBAD PLAN:\ntestdb=# explain analyze execute ps('13','GroupId','testshop');\n QUERY\nPLAN\n------------------------------------------------------------------------\n-----------------------------------------------------------------------\nNested Loop (cost=0.00..56.83 rows=4 width=7) (actual\ntime=355.569..9989.681 rows=1 loops=1)\n -> Index Scan using attr_name_value on big_table (cost=0.00..22.85\nrows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)\n Index Cond: (((attr_name)::text = ($2)::text) AND\n((attr_value)::text = ($3)::text))\n -> Index Scan using pk_party on party (cost=0.00..8.48 rows=1\nwidth=7) (actual time=0.010..0.010 rows=0 loops=914786)\n Index Cond: ((party.party_id)::text =\n(big_table.party_id)::text)\n Filter: ((party.external_id)::text = ($1)::text) Total runtime:\n9989.749 ms\n(7 rows)\n\n\n name |\ncurrent_setting\n---------------------------------+--------------------------------------\n------------------------------------------------------------------------\n-----\n version | PostgreSQL 8.4.4 on\nx86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red\nHat 4.1.2-48), 64-bit\n autovacuum_analyze_scale_factor | 0.05\n autovacuum_max_workers | 9\n autovacuum_vacuum_scale_factor | 0.1\n checkpoint_segments | 30\n effective_cache_size | 6GB\n effective_io_concurrency | 6\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_autovacuum_min_duration | 1s\n log_checkpoints | on\n log_destination | stderr\n log_directory | /home.san/pg_log\n log_line_prefix | %r PID:%p - %t - Tx: %v %l -\n log_lock_waits | on\n log_min_duration_statement | 1s\n logging_collector | on\n maintenance_work_mem | 512MB\n max_connections | 1000\n max_stack_depth | 2MB\n server_encoding | UTF8\n shared_buffers | 3GB\n TimeZone | Europe/Berlin\n vacuum_cost_delay | 100ms\n vacuum_cost_limit | 200\n vacuum_cost_page_dirty | 40\n vacuum_cost_page_miss | 20\n wal_buffers | 2MB\n work_mem | 8MB\n(30 rows)\n\nCREATE DATABASE testdb\n WITH OWNER = testuser\n ENCODING = 'UTF8'\n TABLESPACE = pg_default\n LC_COLLATE = 'en_US.UTF-8'\n LC_CTYPE = 'en_US.UTF-8'\n CONNECTION LIMIT = 
-1;\n\n-- Table: party\n\n-- DROP TABLE party;\n\nCREATE TABLE party\n(\n party_id character varying(255) NOT NULL,\n party_type_id character varying(20),\n external_id character varying(30),\n preferred_currency_uom_id character varying(20),\n description text,\n status_id character varying(20),\n created_date timestamp with time zone,\n created_by_user_login character varying(255),\n last_modified_date timestamp with time zone,\n last_modified_by_user_login character varying(255),\n last_updated_stamp timestamp with time zone,\n last_updated_tx_stamp timestamp with time zone,\n created_stamp timestamp with time zone,\n created_tx_stamp timestamp with time zone,\n CONSTRAINT pk_party PRIMARY KEY (party_id),\n CONSTRAINT party_cul FOREIGN KEY (created_by_user_login)\n REFERENCES user_login (user_login_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT party_lmcul FOREIGN KEY (last_modified_by_user_login)\n REFERENCES user_login (user_login_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT party_pref_crncy FOREIGN KEY (preferred_currency_uom_id)\n REFERENCES uom (uom_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT party_pty_typ FOREIGN KEY (party_type_id)\n REFERENCES party_type (party_type_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT party_statusitm FOREIGN KEY (status_id)\n REFERENCES status_item (status_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE,\n autovacuum_vacuum_scale_factor=0.002,\n autovacuum_analyze_scale_factor=0.001\n);\nALTER TABLE party OWNER TO postgres;\nALTER TABLE party ALTER COLUMN party_id SET STATISTICS 300;\n\n\n-- Index: mn_party_description\n\n-- DROP INDEX mn_party_description;\n\nCREATE INDEX mn_party_description\n ON party\n USING btree\n (description);\n\n-- Index: party_cul\n\n-- DROP INDEX party_cul;\n\nCREATE INDEX party_cul\n ON party\n USING btree\n (created_by_user_login);\n\n-- Index: party_lmcul\n\n-- DROP INDEX party_lmcul;\n\nCREATE INDEX party_lmcul\n ON party\n USING btree\n (last_modified_by_user_login);\n\n-- Index: party_pref_crncy\n\n-- DROP INDEX party_pref_crncy;\n\nCREATE INDEX party_pref_crncy\n ON party\n USING btree\n (preferred_currency_uom_id);\n\n-- Index: party_pty_typ\n\n-- DROP INDEX party_pty_typ;\n\nCREATE INDEX party_pty_typ\n ON party\n USING btree\n (party_type_id);\n\n-- Index: party_statusitm\n\n-- DROP INDEX party_statusitm;\n\nCREATE INDEX party_statusitm\n ON party\n USING btree\n (status_id);\n\n-- Index: party_txcrts\n\n-- DROP INDEX party_txcrts;\n\nCREATE INDEX party_txcrts\n ON party\n USING btree\n (created_tx_stamp);\n\n-- Index: party_txstmp\n\n-- DROP INDEX party_txstmp;\n\nCREATE INDEX party_txstmp\n ON party\n USING btree\n (last_updated_tx_stamp);\n\n-- Index: partyext_id_idx\n\n-- DROP INDEX partyext_id_idx;\n\nCREATE INDEX partyext_id_idx\n ON party\n USING btree\n (external_id);\n\n-- Index: upper_desc_idx\n\n-- DROP INDEX upper_desc_idx;\n\nCREATE INDEX upper_desc_idx\n ON party\n USING btree\n (upper(btrim(description)));\n\n\n-- Table: big_table\n\n-- DROP TABLE big_table;\n\nCREATE TABLE big_table\n(\n party_id character varying(255) NOT NULL,\n attr_name character varying(60) NOT NULL,\n attr_value character varying(255),\n last_updated_stamp timestamp with time zone,\n last_updated_tx_stamp timestamp with time zone,\n created_stamp timestamp with time zone,\n created_tx_stamp timestamp with time zone,\n CONSTRAINT pk_big_table PRIMARY KEY (party_id, attr_name),\n 
CONSTRAINT party_attr FOREIGN KEY (party_id)\n REFERENCES party (party_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE,\n autovacuum_vacuum_scale_factor=0.002,\n autovacuum_analyze_scale_factor=0.001\n);\nALTER TABLE big_table OWNER TO postgres;\nALTER TABLE big_table ALTER COLUMN party_id SET STATISTICS 300;\n\n\n-- Index: attr_name_value\n\n-- DROP INDEX attr_name_value;\n\nCREATE INDEX attr_name_value\n ON big_table\n USING btree\n (attr_name, attr_value);\n\n-- Index: party_attr\n\n-- DROP INDEX party_attr;\n\nCREATE INDEX party_attr\n ON big_table\n USING btree\n (party_id);\n\n-- Index: prt_attrbt_txcrts\n\n-- DROP INDEX prt_attrbt_txcrts;\n\nCREATE INDEX prt_attrbt_txcrts\n ON big_table\n USING btree\n (created_tx_stamp);\n\n-- Index: prt_attrbt_txstmp\n\n-- DROP INDEX prt_attrbt_txstmp;\n\nCREATE INDEX prt_attrbt_txstmp\n ON big_table\n USING btree\n (last_updated_tx_stamp);\n\n",
"msg_date": "Wed, 11 May 2011 13:08:44 +0200",
"msg_from": "\"Prodan, Andrei\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "'Interesting' prepared statement slowdown on large table join"
},
{
"msg_contents": "On 05/11/2011 06:08 AM, Prodan, Andrei wrote:\n\n> Index Scan using attr_name_value on big_table (cost=0.00..22.85\n> rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)\n\nHoly inaccurate statistics, Batman!\n\nTry increasing your statistics target for attr_name and attr_value in \nyour big table. I know you said you set it to 300 on party_id, but what \nhappened here is that the optimizer thought this particular name/value \ncombo in your big table would return less rows, and it was horribly, \nhorribly wrong.\n\nYou might think about bumping up your default_statistics_target anyway \nto prevent problems like this in general. But definitely increase it on \nthose two columns and reanalyze. My guess is that your big_table is big \nenough that each analyze gets a different random sample of the various \nattr_name and attr_value combinations, so occasionally it'll get too few \nand start badly skewing query plans.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Wed, 11 May 2011 10:55:27 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'Interesting' prepared statement slowdown on large\n table join"
},
{
"msg_contents": "Shaun Thomas <[email protected]> writes:\n> On 05/11/2011 06:08 AM, Prodan, Andrei wrote:\n>> Index Scan using attr_name_value on big_table (cost=0.00..22.85\n>> rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)\n\n> Holy inaccurate statistics, Batman!\n\n> Try increasing your statistics target for attr_name and attr_value in \n> your big table.\n\nActually, the big problem here is probably not lack of statistics, but\nthe insistence on using a parameterized prepared plan in the first\nplace. If you're going to be doing queries where the number of selected\nrows varies that much, using a generic parameterized plan is just a\nrecipe for shooting yourself in the foot. The planner cannot know what\nthe actual search values will be, and thus has no way of adapting the\nplan based on how common those search values are. Having more stats\nwon't help in that situation.\n\nForget the prepared plan and just issue the query the old-fashioned way.\n\nI do suspect that the reason the plan is flipping back and forth is\ninstability of the collected statistics, which might be improved by\nincreasing the stats target, or then again maybe not. But that's really\nrather irrelevant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2011 12:38:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'Interesting' prepared statement slowdown on large table join "
},
{
"msg_contents": "On Wed, May 11, 2011 at 4:08 AM, Prodan, Andrei\n<[email protected]> wrote:\n>\n...\n>\n>\n> The select is as follows:\n> prepare ps(varchar,varchar,varchar) as select party.party_id from party,\n> big_table where external_id = $1 and party.party_id = big_table.party_id\n> and attr_name = $2 and attr_value = $3;\n> PREPARE\n> execute ps('13','GroupId','testshop');\n\n>\n> BAD PLAN:\n> testdb=# explain analyze execute ps('13','GroupId','testshop');\n> QUERY\n...\n> -> Index Scan using attr_name_value on big_table (cost=0.00..22.85\n> rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)\n> Index Cond: (((attr_name)::text = ($2)::text) AND\n> ((attr_value)::text = ($3)::text))\n\nSo it expects 4 rows and finds 914786, essentially the whole table.\nSo that is bad. But what is it thinking during the GOOD PLAN state?\n\nA possible way to get that information is to prepare a simpler\nprepared statement that omits the join to party and explain analyze it\nwith the same params for attr_name and attr_value. If that gives you\nthe full table scan rather than index scan, then you can \"set\nenable_seqscan=off\" try to force the index scan.\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 11 May 2011 12:07:07 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'Interesting' prepared statement slowdown on large\n table join"
},
{
"msg_contents": "Thank you for all the leads.\nI've increased stats to 1200 on everything obvious (external_id,\nattr_name, attr_value, party_id), and ran ANALYZE, but it didn't help at\nall - any other ideas of what else could be going wrong ?\n\nWe'll disable preparation, but the thing is it works brilliantly 90% of\nthe time and the other 10% should theoretically be fixable - because\nit's almost certainly a border scenario brought on by lack of\nmaintenance on something somewhere. \nIs there any point in trying to rebuild the indexes involved in case\nPostgres decided they're too bloated or something like that?\n\n@Shaun: I just finished trying to max out stats and sadly it doesn't\nhelp, thank you very much for trying anyway.\n\n@Tom: \nThe planner doesn't flip between the plans by itself - it will switch to\nthe BAD plan at some point and never go back.\nThe big_table has an extremely uneven distribution indeed. But it still\nplans right usually - and this apparently regardless of the statistics\ntarget.\n\n@Jeff: thank you for the clear plan interpretation - but I'm afraid I\ndon't really understand the second bit:\n1) I provided the GOOD plan, so we already know what postgres thinks,\nright? (Later edit: guess not. Doesn't work)\n2) There's no full table scan in any of the plans - it scans indices,\nthe problem seems to be that it scans them in the wrong order because it\nthinks there are very few WHERE matches in big_table - which is\nincorrect, as for that particular pair there is a huge amount of rows.\n\nThank you,\nAndrei\n",
"msg_date": "Thu, 12 May 2011 17:53:38 +0200",
"msg_from": "\"Prodan, Andrei\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 'Interesting' prepared statement slowdown on large table join"
},
{
"msg_contents": "On Thu, May 12, 2011 at 8:53 AM, Prodan, Andrei\n<[email protected]> wrote:\n>\n> @Jeff: thank you for the clear plan interpretation - but I'm afraid I\n> don't really understand the second bit:\n> 1) I provided the GOOD plan, so we already know what postgres thinks,\n> right? (Later edit: guess not. Doesn't work)\n> 2) There's no full table scan in any of the plans - it scans indices,\n> the problem seems to be that it scans them in the wrong order because it\n> thinks there are very few WHERE matches in big_table - which is\n> incorrect, as for that particular pair there is a huge amount of rows.\n\nHi Andrei,\n\n\"Explain analyze\" only gives you the cost/rows for the plan components\nit actually executed, it doesn't give you costs for alternative\nrejected plans. Since the GOOD PLAN doesn't include the index scan in\nquestion, it doesn't give the estimated or actual rows for that scan\nunder the stats/conditions that provoke the GOOD PLAN to be adopted.\nSo to get that information, you have to design an experimental\nprepared query that will get executed using that particular scan, that\nway it will report the results I wanted to see. My concern is that\nthe experimental query I proposed you use might instead decide to use\na full table scan rather than the desired index scan. Although come\nto think of it, I think the same code will be used to arrive at the\npredicted number of rows regardless of whether it does a FTS or the\ndesired index scan.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 12 May 2011 16:28:42 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 'Interesting' prepared statement slowdown on large\n table join"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have some indexes that don't seem to be used at all.\nI'd like to know since when they have not been used.\nThat is, the time when postgres started counting to reach the number that is\nin pg_stat_user_indexes.idx_scan\n\nIs there a way to retrieve that from the database ?\n\nCheers,\n\nWBL\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nHi,We have some indexes that don't seem to be used at all.I'd like to know since when they have not been used.That is, the time when postgres started counting to reach the number that is in pg_stat_user_indexes.idx_scan\nIs there a way to retrieve that from the database ?Cheers,WBL-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Thu, 12 May 2011 17:39:50 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PERFORM] since when has pg_stat_user_indexes.idx_scan been counting?"
},
{
"msg_contents": "Dne 12.5.2011 17:39, Willy-Bas Loos napsal(a):\n> Hi,\n> \n> We have some indexes that don't seem to be used at all.\n> I'd like to know since when they have not been used.\n> That is, the time when postgres started counting to reach the number\n> that is in pg_stat_user_indexes.idx_scan\n> \n> Is there a way to retrieve that from the database ?\n\nWell, not really :-( You could call pg_postmaster_start_time() to get\nthe start time, but that has two major drawbacks\n\n(1) The stats may be actually collected for much longer, because restart\ndoes not reset them.\n\n(2) If someone called pg_stat_reset(), the stats are lost but the start\ntime remains the same.\n\nSo there really is no reliable way to do detect this.\n\nIn 9.1 this is not true - there's a timestamp for each database (and\nglobal stats) to keep track of the last reset.\n\nregards\nTomas\n",
"msg_date": "Thu, 12 May 2011 20:18:05 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: since when has pg_stat_user_indexes.idx_scan been counting?"
},
{
"msg_contents": "Dne 12.5.2011 17:39, Willy-Bas Loos napsal(a):\n> Hi,\n> \n> We have some indexes that don't seem to be used at all.\n> I'd like to know since when they have not been used.\n> That is, the time when postgres started counting to reach the number\n> that is in pg_stat_user_indexes.idx_scan\n> \n> Is there a way to retrieve that from the database ?\n\nBTW it's really really tricky to remove indexes once they're created.\nWhat if the index is created for a single batch process that runs once a\nyear to close the fiscal year etc?\n\nSo be very careful about this.\n\nTomas\n",
"msg_date": "Thu, 12 May 2011 20:19:41 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: since when has pg_stat_user_indexes.idx_scan been counting?"
},
{
"msg_contents": "On Thu, May 12, 2011 at 9:09 PM, Willy-Bas Loos <[email protected]> wrote:\n\n> Hi,\n>\n> We have some indexes that don't seem to be used at all.\n> I'd like to know since when they have not been used.\n> That is, the time when postgres started counting to reach the number that\n> is in pg_stat_user_indexes.idx_scan\n>\n> Is there a way to retrieve that from the database ?\n>\n\n\n\"Analyze\" activity will update the statistics of each catalog table.\n\npg_postmaster_start_time --> Retrieves the Postmaster [ PostgreSQL Instance]\nstart time\n\npostgres=# select pg_postmaster_start_time();\n\n--Raghu Ram\n\nOn Thu, May 12, 2011 at 9:09 PM, Willy-Bas Loos <[email protected]> wrote:\nHi,We have some indexes that don't seem to be used at all.I'd like to know since when they have not been used.That is, the time when postgres started counting to reach the number that is in pg_stat_user_indexes.idx_scan\nIs there a way to retrieve that from the database ?\"Analyze\" activity will update the statistics of each catalog table.pg_postmaster_start_time --> Retrieves the Postmaster [ PostgreSQL Instance] start time\npostgres=# select pg_postmaster_start_time();--Raghu Ram",
"msg_date": "Thu, 12 May 2011 23:52:53 +0530",
"msg_from": "raghu ram <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] since when has pg_stat_user_indexes.idx_scan\n\tbeen counting?"
},
{
"msg_contents": "Then, are the index scans counted in a memory variable and written at\nanalyze time?\n\nOn Thu, May 12, 2011 at 8:22 PM, raghu ram <[email protected]> wrote:\n\n>\n> \"Analyze\" activity will update the statistics of each catalog table.\n> --Raghu Ram\n>\n>\n\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nThen, are the index scans counted in a memory variable and written at analyze time?\nOn Thu, May 12, 2011 at 8:22 PM, raghu ram <[email protected]> wrote:\n\"Analyze\" activity will update the statistics of each catalog table.--Raghu Ram\n-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Thu, 12 May 2011 22:03:27 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] since when has pg_stat_user_indexes.idx_scan\n\tbeen counting?"
},
{
"msg_contents": "Tomas Vondra wrote:\n> BTW it's really really tricky to remove indexes once they're created.\n> What if the index is created for a single batch process that runs once a\n> year to close the fiscal year etc?\n> \n\nTrue in theory. Reports that are executing something big at the end of \nthe year fall into three categories:\n\n1) They touch a whole lot of the data for the year first. In this case, \nsequential scan is likely regardless.\n\n2) They access data similarly to regular queries, using the same indexes.\n\n3) They have some very specific data only they touch that is retrieved \nwith an index.\n\nYou're saying to watch out for (3); I think that's not usually the case, \nbut that's a fair thing to warn about. Even in that case, though, it \nmay still be worth dropping the index. Year-end processes are not \nusually very sensitive to whether they take a little or a long time to \nexecute. But you will be paying to maintain the index every day while \nit is there.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 12 May 2011 16:06:49 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: since when has pg_stat_user_indexes.idx_scan been counting?"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> You're saying to watch out for (3); I think that's not usually the case, \n> but that's a fair thing to warn about. Even in that case, though, it \n> may still be worth dropping the index. Year-end processes are not \n> usually very sensitive to whether they take a little or a long time to \n> execute. But you will be paying to maintain the index every day while \n> it is there.\n\nYeah. Another idea worth considering is to have the year-end processing\nbuild the index it wants, use it, drop it. It seems unlikely that it's\nworth maintaining an index year-round for such infrequent usage.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 May 2011 16:16:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: since when has pg_stat_user_indexes.idx_scan been counting? "
},
{
"msg_contents": "Dne 12.5.2011 22:03, Willy-Bas Loos napsal(a):\n> Then, are the index scans counted in a memory variable and written at\n> analyze time?\n\nNo, I believe raghu mixed two things - stats used by the planner and\nstats about access to the data (how many tuples were read using an\nindex, etc.)\n\nStats for the planner are stored in pg_class/pg_statistic/pg_stats\ncatalogs and are updated by ANALYZE (either manual or automatic). This\nis what raghu refered to, but these stats are completely useless when\nlooking for unused indexes.\n\nStats about access to the data (index/seq scans, cache hit ratio etc.)\nare stored in pg_stat_* and pg_statio_* catalogs, and are updated after\nrunning each query. AFAIK it's not a synchronous process, but when a\nbackend finishes a query, it sends the stats to the postmaster (and\npostmaster updates the catalogs).\n\nTomas\n",
"msg_date": "Thu, 12 May 2011 22:46:16 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] since when has pg_stat_user_indexes.idx_scan\n\tbeen counting?"
},
{
"msg_contents": "Tomas Vondra wrote:\n> Stats about access to the data (index/seq scans, cache hit ratio etc.)\n> are stored in pg_stat_* and pg_statio_* catalogs, and are updated after\n> running each query. AFAIK it's not a synchronous process, but when a\n> backend finishes a query, it sends the stats to the postmaster (and\n> postmaster updates the catalogs).\n> \n\nDescription in the docs goes over this in a little more detail \nhttp://www.postgresql.org/docs/current/static/monitoring-stats.html :\n\n\"The statistics collector communicates with the backends needing \ninformation (including autovacuum) through temporary files. These files \nare stored in the pg_stat_tmp subdirectory...When using the statistics \nto monitor current activity, it is important to realize that the \ninformation does not update instantaneously. Each individual server \nprocess transmits new statistical counts to the collector just before \ngoing idle; so a query or transaction still in progress does not affect \nthe displayed totals. Also, the collector itself emits a new report at \nmost once per PGSTAT_STAT_INTERVAL milliseconds (500 unless altered \nwhile building the server). So the displayed information lags behind \nactual activity. However, current-query information collected by \ntrack_activities is always up-to-date.\"\n\nIt's not synchronous at all. The clients create a temporary file for \nthe statistics collector and move on. The actual statistics don't get \nupdated until the statistics collector decides enough time has passed to \nbother, which defaults to at most every 500ms.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 12 May 2011 22:26:03 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] since when has pg_stat_user_indexes.idx_scan\n\tbeen counting?"
},
{
"msg_contents": "> It's not synchronous at all. The clients create a temporary file for\n> the statistics collector and move on. The actual statistics don't get\n> updated until the statistics collector decides enough time has passed to\n> bother, which defaults to at most every 500ms.\n\nReally? I thought the clients send the updates using a socket, at least\nthat's what I see in backend/postmaster/pgstat.c (e.g. in\npgstat_send_bgwriter where the data are sent, and in PgstatCollectorMain\nwhere it's read from the socket and applied).\n\nBut no matter how exactly this works, this kind of stats has nothing to do\nwith ANALYZe - it's asynchronously updated every time you run a query.\n\nregards\nTomas\n\n",
"msg_date": "Fri, 13 May 2011 10:44:47 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] since when has pg_stat_user_indexes.idx_scan\n\tbeen counting?"
}
] |
[
{
"msg_contents": "I've got a stored proc that constructs some aggregation queries as strings\nand then executes them. I'd like to be able to increase work_mem before\nrunning those queries. If I set a new value for work_mem within the stored\nproc prior to executing my query string, will that actually have an impact\non the query or is work_mem basically a constant once the outer statement\nthat calls the stored proc has begun? I'd just test, but it will take hours\nfor me to grab a copy of production data and import into a new db host for\ntesting. I've already started that process, but I'm betting I'll have an\nanswer by the time it completes. It's just the difference between modifying\nthe application which calls the procs (and doing a full software release in\norder to do so or else waiting a month to go in the next release) vs\nmodifying the procs themselves, which requires only db a update.\n\n--sam\n\nI've got a stored proc that constructs some aggregation queries as strings and then executes them. I'd like to be able to increase work_mem before running those queries. If I set a new value for work_mem within the stored proc prior to executing my query string, will that actually have an impact on the query or is work_mem basically a constant once the outer statement that calls the stored proc has begun? I'd just test, but it will take hours for me to grab a copy of production data and import into a new db host for testing. I've already started that process, but I'm betting I'll have an answer by the time it completes. It's just the difference between modifying the application which calls the procs (and doing a full software release in order to do so or else waiting a month to go in the next release) vs modifying the procs themselves, which requires only db a update.\n--sam",
"msg_date": "Thu, 12 May 2011 16:10:19 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "setting configuration values inside a stored proc"
},
{
"msg_contents": "Hi,\n\nOn Friday, May 13, 2011 01:10:19 AM Samuel Gendler wrote:\n> I've got a stored proc that constructs some aggregation queries as strings\n> and then executes them. I'd like to be able to increase work_mem before\n> running those queries. If I set a new value for work_mem within the stored\n> proc prior to executing my query string, will that actually have an impact\n> on the query or is work_mem basically a constant once the outer statement\n> that calls the stored proc has begun? I'd just test, but it will take\n> hours for me to grab a copy of production data and import into a new db\n> host for testing. I've already started that process, but I'm betting I'll\n> have an answer by the time it completes. It's just the difference between\n> modifying the application which calls the procs (and doing a full software\n> release in order to do so or else waiting a month to go in the next\n> release) vs modifying the procs themselves, which requires only db a\n> update.\nI would suggest doing ALTER FUNCTION blub(blarg) SET work_mem = '512MB';\n\nAndres\n",
"msg_date": "Fri, 13 May 2011 10:28:15 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setting configuration values inside a stored proc"
},
{
"msg_contents": "On Fri, May 13, 2011 at 1:28 AM, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On Friday, May 13, 2011 01:10:19 AM Samuel Gendler wrote:\n>\n> I would suggest doing ALTER FUNCTION blub(blarg) SET work_mem = '512MB';\n>\n>\nAh! That's perfect and very convenient. Thanks.\n\n--sam\n\nOn Fri, May 13, 2011 at 1:28 AM, Andres Freund <[email protected]> wrote:\nHi,\n\nOn Friday, May 13, 2011 01:10:19 AM Samuel Gendler wrote:\nI would suggest doing ALTER FUNCTION blub(blarg) SET work_mem = '512MB';\nAh! That's perfect and very convenient. Thanks.--sam",
"msg_date": "Fri, 13 May 2011 02:41:58 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setting configuration values inside a stored proc"
}
] |
[
{
"msg_contents": "Hi everyone\n\nWe have recently started to port an application from Oracle to PostgreSQL.\nSo far, we are amazed with how great most things work.\n\nHowever, we have run into performance problems in one type of query which\nis quite common in our application. We have created a (simplified)\nreproducible test case which (hopefully!) creates all necessary tables\nand data to\nshow the problem.\n\nPlain-text description of the data model in the test case:\n\nWe have a set of objects (like electrical cables), each having\ntwo nodes in the table \"connections\" (think of these two rows together\nas an edge in a graph).\n\nAnother table \"connections_locked\" contains rows for some of\nthe same objects, which are locked by a long transaction.\n\nThe view connections_v performs a union all of the rows from\n\"connections\" which are not modified in the current long\ntransaction with the rows from \"connections_locked\" which\nare modified in the current long transaction.\n\nGoal:\nGiven an object id, we want to find all neighbors for this\nobject (that is, objects which share a node with this object).\n\nProblem:\nWe think that our query used to find neighbors would benefit\ngreatly from using some of our indexes, but we fail to make it\ndo so.\n\n\nOver to the actual test case:\n\n----------------------------------------------\n\n-- Tested on (from select version ()):\n-- PostgreSQL 9.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20080704 (Red Hat 4.1.2-46), 32-bit\n-- PostgreSQL 9.1beta1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20080704 (Red Hat 4.1.2-46), 32-bit\n\n-- Ubuntu 11.04, uname -a output:\n-- Linux <hostname> 2.6.38-8-generic-pae #42-Ubuntu SMP Mon Apr 11\n05:17:09 UTC 2011 i686 i686 i386 GNU/Linux\n-- Processor: Intel(R) Core(TM)2 Quad CPU Q9450 @ 2.66GHz\n-- Drive: Intel X25-M SSD\n\n\ndrop table if exists connections cascade;\ndrop table if exists connections_locked cascade;\n\n\ncreate table connections (\n con_id serial primary key,\n locked_by integer not null,\n obj_id integer not null,\n node integer not null\n);\n\n\n-- create test nodes, two per obj_id\ninsert into connections (locked_by, obj_id, node)\nselect 0, n/2, 1000 + (n + 1)/2 from generate_series (1,500000) as n;\n\ncreate index connections_node_idx on connections (node);\ncreate index connections_obj_idx on connections (obj_id);\nvacuum analyze connections;\n\n\n\ncreate table connections_locked (\n con_id integer not null,\n locked_by integer not null,\n obj_id integer not null,\n node integer not null,\n constraint locked_pk primary key (con_id, locked_by)\n);\n\n-- mark a few of the objects as locked by a long transaction\ninsert into connections_locked (con_id, locked_by, obj_id, node)\nselect n, 1 + n/50, n/2, 1000 + (n + 1)/2 from generate_series (1,25000) as n;\n\ncreate index connections_locked_node_idx on connections_locked (node);\ncreate index connections_locked_obj_idx on connections_locked (obj_id);\nvacuum analyze connections_locked;\n\n\n-- Create a view showing the world as seen by long transaction 4711.\n-- In real life, this uses a session variable instead of a hard-coded value.\ncreate or replace view connections_v as\nselect * from connections where locked_by <> 4711\nunion all\nselect * from connections_locked where locked_by = 4711;\n\n\n-- This is the query we are trying to optimize.\n-- We expect this to be able to use our indexes, but instead get\nsequential scans\nexplain analyze\nselect\n con2.obj_id\nfrom\n connections_v con1,\n connections_v 
con2\nwhere\n con1.obj_id = 17 and\n con2.node = con1.node\n;\n\n\n-- Output:\n-- Hash Join (cost=16.69..16368.89 rows=7501 width=4) (actual\ntime=0.096..778.830 rows=4 loops=1)\n-- Hash Cond: (\"*SELECT* 1\".node = \"*SELECT* 1\".node)\n-- -> Append (cost=0.00..14402.00 rows=500050 width=8) (actual\ntime=0.011..640.163 rows=500000 loops=1)\n-- -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..13953.00\nrows=500000 width=8) (actual time=0.011..430.645 rows=500000 loops=1)\n-- -> Seq Scan on connections (cost=0.00..8953.00\nrows=500000 width=16) (actual time=0.009..178.535 rows=500000 loops=1)\n-- Filter: (locked_by <> 4711)\n-- -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..449.00\nrows=50 width=8) (actual time=3.254..3.254 rows=0 loops=1)\n-- -> Seq Scan on connections_locked\n(cost=0.00..448.50 rows=50 width=16) (actual time=3.253..3.253 rows=0\nloops=1)\n-- Filter: (locked_by = 4711)\n-- -> Hash (cost=16.66..16.66 rows=3 width=4) (actual\ntime=0.028..0.028 rows=2 loops=1)\n-- Buckets: 1024 Batches: 1 Memory Usage: 1kB\n-- -> Append (cost=0.00..16.66 rows=3 width=4) (actual\ntime=0.013..0.025 rows=2 loops=1)\n-- -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..8.35\nrows=2 width=4) (actual time=0.013..0.016 rows=2 loops=1)\n-- -> Index Scan using connections_obj_idx on\nconnections (cost=0.00..8.33 rows=2 width=16) (actual\ntime=0.012..0.014 rows=2 loops=1)\n-- Index Cond: (obj_id = 17)\n-- Filter: (locked_by <> 4711)\n-- -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..8.30\nrows=1 width=4) (actual time=0.008..0.008 rows=0 loops=1)\n-- -> Index Scan using connections_locked_obj_idx\non connections_locked (cost=0.00..8.29 rows=1 width=16) (actual\ntime=0.007..0.007 rows=0 loops=1)\n-- Index Cond: (obj_id = 17)\n-- Filter: (locked_by = 4711)\n\n\n\n-- Rewriting the query to an almost-equivalent form yields almost the\nsame result (that is, seq scans)\nexplain analyze\nselect\n con2.obj_id\nfrom\n connections_v con2\n where con2.node in (select node from connections_v con1 where\ncon1.obj_id = 17);\n\n\n-- Simplifying the query even more to use a sub-select with a\nhard-coded value still results in seq scans\nexplain analyze\nselect\n con2.obj_id\nfrom\n connections_v con2\n where con2.node in (select 1015);\n\n\n-- Finally, when we simplify even more and just use a constant, we get\nthe index accesses we were hoping\n-- for all along.\nexplain analyze\nselect\n con2.obj_id\nfrom\n connections_v con2\n where con2.node in (1015);\n\n-- Result (cost=0.00..16.66 rows=3 width=4) (actual time=0.048..0.079\nrows=2 loops=1)\n-- -> Append (cost=0.00..16.66 rows=3 width=4) (actual\ntime=0.047..0.076 rows=2 loops=1)\n-- -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..8.35 rows=2\nwidth=4) (actual time=0.046..0.049 rows=2 loops=1)\n-- -> Index Scan using connections_node_idx on\nconnections (cost=0.00..8.33 rows=2 width=16) (actual\ntime=0.046..0.048 rows=2 loops=1)\n-- Index Cond: (node = 1015)\n-- Filter: (locked_by <> 4711)\n-- -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..8.30 rows=1\nwidth=4) (actual time=0.025..0.025 rows=0 loops=1)\n-- -> Index Scan using connections_locked_node_idx on\nconnections_locked (cost=0.00..8.29 rows=1 width=16) (actual\ntime=0.024..0.024 rows=0 loops=1)\n-- Index Cond: (node = 1015)\n-- Filter: (locked_by = 4711)\n\n\n\n------- end of test case -----\n\nCan someone explain what is happening here? 
Is there some way we can\nrewrite our query or some setting we could turn on or off to get the\noptimizer to choose to use our indexes?\n\n(testing with \"set enable_seqscan = false;\" does not make a difference\nas far as we can see)\n\nTo verify that we have really created all necessary indexes, we have\nconverted this simplified test case to Oracle syntax and tested it on\nour Oracle server. In this case, we do get the expected index accesses,\nso we think that we have in fact managed to isolate the problem using\nthis test case.\n\nWhat we are hoping for:\nSince we have lots of queries joining these kind of \"union all\"-views\nbetween a master table and a transaction table, we would be really\nglad to hear something like \"when you use these kinds of views, you\nneed to do X, Y and Z to get good performance\".\n\nThanks in advance for any help!\n/Fredrik\n",
"msg_date": "Fri, 13 May 2011 13:55:46 +0200",
"msg_from": "Fredrik Widlert <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to avoid seq scans for joins between union-all views (test case\n\tincluded)"
},
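The plan shape we are hoping for can also be written out by hand, by pushing
the node predicate into each branch of the view ourselves. This is only a
sketch built from the test-case tables above, equivalent to the "in (1015)"
variant that already gets index scans:

-- hand-expanded form of connections_v with the node qual pushed into
-- both branches; each half can then use its node index
explain analyze
select obj_id
  from connections
 where locked_by <> 4711
   and node = 1015
union all
select obj_id
  from connections_locked
 where locked_by = 4711
   and node = 1015;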
{
"msg_contents": "I might have misread, but:\n\n> select * from connections where locked_by <> 4711\n> union all\n> select * from connections_locked where locked_by = 4711;\n\n\nThe first part will result in a seq scan irrespective of indexes, and the second has no index on locked_by. The best you can do is to eliminate the seq scan on the second by adding the missing index on locked_by.\n\nThat said, note that index usage depends on your data distribution: postgres may identify that it'll read most/all of the table anyway, and opt to do a (cheaper) seq scan instead.\n\nD\n\n\n----- Original Message -----\n> From: Fredrik Widlert <[email protected]>\n> To: [email protected]\n> Cc: \n> Sent: Friday, May 13, 2011 1:55 PM\n> Subject: [PERFORM] How to avoid seq scans for joins between union-all views (test case included)\n> \n> Hi everyone\n> \n> We have recently started to port an application from Oracle to PostgreSQL.\n> So far, we are amazed with how great most things work.\n> \n> However, we have run into performance problems in one type of query which\n> is quite common in our application. We have created a (simplified)\n> reproducible test case which (hopefully!) creates all necessary tables\n> and data to\n> show the problem.\n> \n> Plain-text description of the data model in the test case:\n> \n> We have a set of objects (like electrical cables), each having\n> two nodes in the table \"connections\" (think of these two rows together\n> as an edge in a graph).\n> \n> Another table \"connections_locked\" contains rows for some of\n> the same objects, which are locked by a long transaction.\n> \n> The view connections_v performs a union all of the rows from\n> \"connections\" which are not modified in the current long\n> transaction with the rows from \"connections_locked\" which\n> are modified in the current long transaction.\n> \n> Goal:\n> Given an object id, we want to find all neighbors for this\n> object (that is, objects which share a node with this object).\n> \n> Problem:\n> We think that our query used to find neighbors would benefit\n> greatly from using some of our indexes, but we fail to make it\n> do so.\n> \n> \n> Over to the actual test case:\n> \n> ----------------------------------------------\n> \n> -- Tested on (from select version ()):\n> -- PostgreSQL 9.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n> 4.1.2 20080704 (Red Hat 4.1.2-46), 32-bit\n> -- PostgreSQL 9.1beta1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n> 4.1.2 20080704 (Red Hat 4.1.2-46), 32-bit\n> \n> -- Ubuntu 11.04, uname -a output:\n> -- Linux <hostname> 2.6.38-8-generic-pae #42-Ubuntu SMP Mon Apr 11\n> 05:17:09 UTC 2011 i686 i686 i386 GNU/Linux\n> -- Processor: Intel(R) Core(TM)2 Quad CPU Q9450 @ 2.66GHz\n> -- Drive: Intel X25-M SSD\n> \n> \n> drop table if exists connections cascade;\n> drop table if exists connections_locked cascade;\n> \n> \n> create table connections (\n> con_id serial primary key,\n> locked_by integer not null,\n> obj_id integer not null,\n> node integer not null\n> );\n> \n> \n> -- create test nodes, two per obj_id\n> insert into connections (locked_by, obj_id, node)\n> select 0, n/2, 1000 + (n + 1)/2 from generate_series (1,500000) as n;\n> \n> create index connections_node_idx on connections (node);\n> create index connections_obj_idx on connections (obj_id);\n> vacuum analyze connections;\n> \n> \n> \n> create table connections_locked (\n> con_id integer not null,\n> locked_by integer not null,\n> obj_id integer not null,\n> node integer not null,\n> constraint 
locked_pk primary key (con_id, locked_by)\n> );\n> \n> -- mark a few of the objects as locked by a long transaction\n> insert into connections_locked (con_id, locked_by, obj_id, node)\n> select n, 1 + n/50, n/2, 1000 + (n + 1)/2 from generate_series (1,25000) as n;\n> \n> create index connections_locked_node_idx on connections_locked (node);\n> create index connections_locked_obj_idx on connections_locked (obj_id);\n> vacuum analyze connections_locked;\n> \n> \n> -- Create a view showing the world as seen by long transaction 4711.\n> -- In real life, this uses a session variable instead of a hard-coded value.\n> create or replace view connections_v as\n> select * from connections where locked_by <> 4711\n> union all\n> select * from connections_locked where locked_by = 4711;\n> \n> \n> -- This is the query we are trying to optimize.\n> -- We expect this to be able to use our indexes, but instead get\n> sequential scans\n> explain analyze\n> select\n> con2.obj_id\n> from\n> connections_v con1,\n> connections_v con2\n> where\n> con1.obj_id = 17 and\n> con2.node = con1.node\n> ;\n> \n> \n> -- Output:\n> -- Hash Join (cost=16.69..16368.89 rows=7501 width=4) (actual\n> time=0.096..778.830 rows=4 loops=1)\n> -- Hash Cond: (\"*SELECT* 1\".node = \"*SELECT* 1\".node)\n> -- -> Append (cost=0.00..14402.00 rows=500050 width=8) (actual\n> time=0.011..640.163 rows=500000 loops=1)\n> -- -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..13953.00\n> rows=500000 width=8) (actual time=0.011..430.645 rows=500000 loops=1)\n> -- -> Seq Scan on connections (cost=0.00..8953.00\n> rows=500000 width=16) (actual time=0.009..178.535 rows=500000 loops=1)\n> -- Filter: (locked_by <> 4711)\n> -- -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..449.00\n> rows=50 width=8) (actual time=3.254..3.254 rows=0 loops=1)\n> -- -> Seq Scan on connections_locked\n> (cost=0.00..448.50 rows=50 width=16) (actual time=3.253..3.253 rows=0\n> loops=1)\n> -- Filter: (locked_by = 4711)\n> -- -> Hash (cost=16.66..16.66 rows=3 width=4) (actual\n> time=0.028..0.028 rows=2 loops=1)\n> -- Buckets: 1024 Batches: 1 Memory Usage: 1kB\n> -- -> Append (cost=0.00..16.66 rows=3 width=4) (actual\n> time=0.013..0.025 rows=2 loops=1)\n> -- -> Subquery Scan on \"*SELECT* 1\" \n> (cost=0.00..8.35\n> rows=2 width=4) (actual time=0.013..0.016 rows=2 loops=1)\n> -- -> Index Scan using connections_obj_idx on\n> connections (cost=0.00..8.33 rows=2 width=16) (actual\n> time=0.012..0.014 rows=2 loops=1)\n> -- Index Cond: (obj_id = 17)\n> -- Filter: (locked_by <> 4711)\n> -- -> Subquery Scan on \"*SELECT* 2\" \n> (cost=0.00..8.30\n> rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=1)\n> -- -> Index Scan using connections_locked_obj_idx\n> on connections_locked (cost=0.00..8.29 rows=1 width=16) (actual\n> time=0.007..0.007 rows=0 loops=1)\n> -- Index Cond: (obj_id = 17)\n> -- Filter: (locked_by = 4711)\n> \n> \n> \n> -- Rewriting the query to an almost-equivalent form yields almost the\n> same result (that is, seq scans)\n> explain analyze\n> select\n> con2.obj_id\n> from\n> connections_v con2\n> where con2.node in (select node from connections_v con1 where\n> con1.obj_id = 17);\n> \n> \n> -- Simplifying the query even more to use a sub-select with a\n> hard-coded value still results in seq scans\n> explain analyze\n> select\n> con2.obj_id\n> from\n> connections_v con2\n> where con2.node in (select 1015);\n> \n> \n> -- Finally, when we simplify even more and just use a constant, we get\n> the index accesses we were hoping\n> -- for all along.\n> explain 
analyze\n> select\n> con2.obj_id\n> from\n> connections_v con2\n> where con2.node in (1015);\n> \n> -- Result (cost=0.00..16.66 rows=3 width=4) (actual time=0.048..0.079\n> rows=2 loops=1)\n> -- -> Append (cost=0.00..16.66 rows=3 width=4) (actual\n> time=0.047..0.076 rows=2 loops=1)\n> -- -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..8.35 \n> rows=2\n> width=4) (actual time=0.046..0.049 rows=2 loops=1)\n> -- -> Index Scan using connections_node_idx on\n> connections (cost=0.00..8.33 rows=2 width=16) (actual\n> time=0.046..0.048 rows=2 loops=1)\n> -- Index Cond: (node = 1015)\n> -- Filter: (locked_by <> 4711)\n> -- -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..8.30 \n> rows=1\n> width=4) (actual time=0.025..0.025 rows=0 loops=1)\n> -- -> Index Scan using connections_locked_node_idx on\n> connections_locked (cost=0.00..8.29 rows=1 width=16) (actual\n> time=0.024..0.024 rows=0 loops=1)\n> -- Index Cond: (node = 1015)\n> -- Filter: (locked_by = 4711)\n> \n> \n> \n> ------- end of test case -----\n> \n> Can someone explain what is happening here? Is there some way we can\n> rewrite our query or some setting we could turn on or off to get the\n> optimizer to choose to use our indexes?\n> \n> (testing with \"set enable_seqscan = false;\" does not make a difference\n> as far as we can see)\n> \n> To verify that we have really created all necessary indexes, we have\n> converted this simplified test case to Oracle syntax and tested it on\n> our Oracle server. In this case, we do get the expected index accesses,\n> so we think that we have in fact managed to isolate the problem using\n> this test case.\n> \n> What we are hoping for:\n> Since we have lots of queries joining these kind of \"union all\"-views\n> between a master table and a transaction table, we would be really\n> glad to hear something like \"when you use these kinds of views, you\n> need to do X, Y and Z to get good performance\".\n> \n> Thanks in advance for any help!\n> /Fredrik\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Fri, 13 May 2011 07:23:19 -0700 (PDT)",
"msg_from": "Denis de Bernardy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid seq scans for joins between union-all views (test\n\tcase included)"
},
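A concrete sketch of the index Denis suggests, using the table name from the
test case (whether it actually helps will depend on how selective
locked_by = 4711 is in real data):

create index connections_locked_by_idx on connections_locked (locked_by);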
{
"msg_contents": "2011/5/13 Denis de Bernardy <[email protected]>:\n> I might have misread, but:\n>\n>> select * from connections where locked_by <> 4711\n>> union all\n>> select * from connections_locked where locked_by = 4711;\n>\n>\n> The first part will result in a seq scan irrespective of indexes, and the second has no index on locked_by. The best you can do is to eliminate the seq scan on the second by adding the missing index on locked_by.\n\njust rework the primary key to set the locked_id first should work.\n\n>\n> That said, note that index usage depends on your data distribution: postgres may identify that it'll read most/all of the table anyway, and opt to do a (cheaper) seq scan instead.\n\n\nFredrick, What indexes Oracle did choose ? (index-only scan ?)\n\n>\n> D\n>\n>\n> ----- Original Message -----\n>> From: Fredrik Widlert <[email protected]>\n>> To: [email protected]\n>> Cc:\n>> Sent: Friday, May 13, 2011 1:55 PM\n>> Subject: [PERFORM] How to avoid seq scans for joins between union-all views (test case included)\n>>\n>> Hi everyone\n>>\n>> We have recently started to port an application from Oracle to PostgreSQL.\n>> So far, we are amazed with how great most things work.\n>>\n>> However, we have run into performance problems in one type of query which\n>> is quite common in our application. We have created a (simplified)\n>> reproducible test case which (hopefully!) creates all necessary tables\n>> and data to\n>> show the problem.\n>>\n>> Plain-text description of the data model in the test case:\n>>\n>> We have a set of objects (like electrical cables), each having\n>> two nodes in the table \"connections\" (think of these two rows together\n>> as an edge in a graph).\n>>\n>> Another table \"connections_locked\" contains rows for some of\n>> the same objects, which are locked by a long transaction.\n>>\n>> The view connections_v performs a union all of the rows from\n>> \"connections\" which are not modified in the current long\n>> transaction with the rows from \"connections_locked\" which\n>> are modified in the current long transaction.\n>>\n>> Goal:\n>> Given an object id, we want to find all neighbors for this\n>> object (that is, objects which share a node with this object).\n>>\n>> Problem:\n>> We think that our query used to find neighbors would benefit\n>> greatly from using some of our indexes, but we fail to make it\n>> do so.\n>>\n>>\n>> Over to the actual test case:\n>>\n>> ----------------------------------------------\n>>\n>> -- Tested on (from select version ()):\n>> -- PostgreSQL 9.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n>> 4.1.2 20080704 (Red Hat 4.1.2-46), 32-bit\n>> -- PostgreSQL 9.1beta1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n>> 4.1.2 20080704 (Red Hat 4.1.2-46), 32-bit\n>>\n>> -- Ubuntu 11.04, uname -a output:\n>> -- Linux <hostname> 2.6.38-8-generic-pae #42-Ubuntu SMP Mon Apr 11\n>> 05:17:09 UTC 2011 i686 i686 i386 GNU/Linux\n>> -- Processor: Intel(R) Core(TM)2 Quad CPU Q9450 @ 2.66GHz\n>> -- Drive: Intel X25-M SSD\n>>\n>>\n>> drop table if exists connections cascade;\n>> drop table if exists connections_locked cascade;\n>>\n>>\n>> create table connections (\n>> con_id serial primary key,\n>> locked_by integer not null,\n>> obj_id integer not null,\n>> node integer not null\n>> );\n>>\n>>\n>> -- create test nodes, two per obj_id\n>> insert into connections (locked_by, obj_id, node)\n>> select 0, n/2, 1000 + (n + 1)/2 from generate_series (1,500000) as n;\n>>\n>> create index connections_node_idx on connections (node);\n>> 
create index connections_obj_idx on connections (obj_id);\n>> vacuum analyze connections;\n>>\n>>\n>>\n>> create table connections_locked (\n>> con_id integer not null,\n>> locked_by integer not null,\n>> obj_id integer not null,\n>> node integer not null,\n>> constraint locked_pk primary key (con_id, locked_by)\n>> );\n>>\n>> -- mark a few of the objects as locked by a long transaction\n>> insert into connections_locked (con_id, locked_by, obj_id, node)\n>> select n, 1 + n/50, n/2, 1000 + (n + 1)/2 from generate_series (1,25000) as n;\n>>\n>> create index connections_locked_node_idx on connections_locked (node);\n>> create index connections_locked_obj_idx on connections_locked (obj_id);\n>> vacuum analyze connections_locked;\n>>\n>>\n>> -- Create a view showing the world as seen by long transaction 4711.\n>> -- In real life, this uses a session variable instead of a hard-coded value.\n>> create or replace view connections_v as\n>> select * from connections where locked_by <> 4711\n>> union all\n>> select * from connections_locked where locked_by = 4711;\n>>\n>>\n>> -- This is the query we are trying to optimize.\n>> -- We expect this to be able to use our indexes, but instead get\n>> sequential scans\n>> explain analyze\n>> select\n>> con2.obj_id\n>> from\n>> connections_v con1,\n>> connections_v con2\n>> where\n>> con1.obj_id = 17 and\n>> con2.node = con1.node\n>> ;\n>>\n>>\n>> -- Output:\n>> -- Hash Join (cost=16.69..16368.89 rows=7501 width=4) (actual\n>> time=0.096..778.830 rows=4 loops=1)\n>> -- Hash Cond: (\"*SELECT* 1\".node = \"*SELECT* 1\".node)\n>> -- -> Append (cost=0.00..14402.00 rows=500050 width=8) (actual\n>> time=0.011..640.163 rows=500000 loops=1)\n>> -- -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..13953.00\n>> rows=500000 width=8) (actual time=0.011..430.645 rows=500000 loops=1)\n>> -- -> Seq Scan on connections (cost=0.00..8953.00\n>> rows=500000 width=16) (actual time=0.009..178.535 rows=500000 loops=1)\n>> -- Filter: (locked_by <> 4711)\n>> -- -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..449.00\n>> rows=50 width=8) (actual time=3.254..3.254 rows=0 loops=1)\n>> -- -> Seq Scan on connections_locked\n>> (cost=0.00..448.50 rows=50 width=16) (actual time=3.253..3.253 rows=0\n>> loops=1)\n>> -- Filter: (locked_by = 4711)\n>> -- -> Hash (cost=16.66..16.66 rows=3 width=4) (actual\n>> time=0.028..0.028 rows=2 loops=1)\n>> -- Buckets: 1024 Batches: 1 Memory Usage: 1kB\n>> -- -> Append (cost=0.00..16.66 rows=3 width=4) (actual\n>> time=0.013..0.025 rows=2 loops=1)\n>> -- -> Subquery Scan on \"*SELECT* 1\"\n>> (cost=0.00..8.35\n>> rows=2 width=4) (actual time=0.013..0.016 rows=2 loops=1)\n>> -- -> Index Scan using connections_obj_idx on\n>> connections (cost=0.00..8.33 rows=2 width=16) (actual\n>> time=0.012..0.014 rows=2 loops=1)\n>> -- Index Cond: (obj_id = 17)\n>> -- Filter: (locked_by <> 4711)\n>> -- -> Subquery Scan on \"*SELECT* 2\"\n>> (cost=0.00..8.30\n>> rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=1)\n>> -- -> Index Scan using connections_locked_obj_idx\n>> on connections_locked (cost=0.00..8.29 rows=1 width=16) (actual\n>> time=0.007..0.007 rows=0 loops=1)\n>> -- Index Cond: (obj_id = 17)\n>> -- Filter: (locked_by = 4711)\n>>\n>>\n>>\n>> -- Rewriting the query to an almost-equivalent form yields almost the\n>> same result (that is, seq scans)\n>> explain analyze\n>> select\n>> con2.obj_id\n>> from\n>> connections_v con2\n>> where con2.node in (select node from connections_v con1 where\n>> con1.obj_id = 17);\n>>\n>>\n>> -- Simplifying the query even more 
to use a sub-select with a\n>> hard-coded value still results in seq scans\n>> explain analyze\n>> select\n>> con2.obj_id\n>> from\n>> connections_v con2\n>> where con2.node in (select 1015);\n>>\n>>\n>> -- Finally, when we simplify even more and just use a constant, we get\n>> the index accesses we were hoping\n>> -- for all along.\n>> explain analyze\n>> select\n>> con2.obj_id\n>> from\n>> connections_v con2\n>> where con2.node in (1015);\n>>\n>> -- Result (cost=0.00..16.66 rows=3 width=4) (actual time=0.048..0.079\n>> rows=2 loops=1)\n>> -- -> Append (cost=0.00..16.66 rows=3 width=4) (actual\n>> time=0.047..0.076 rows=2 loops=1)\n>> -- -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..8.35\n>> rows=2\n>> width=4) (actual time=0.046..0.049 rows=2 loops=1)\n>> -- -> Index Scan using connections_node_idx on\n>> connections (cost=0.00..8.33 rows=2 width=16) (actual\n>> time=0.046..0.048 rows=2 loops=1)\n>> -- Index Cond: (node = 1015)\n>> -- Filter: (locked_by <> 4711)\n>> -- -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..8.30\n>> rows=1\n>> width=4) (actual time=0.025..0.025 rows=0 loops=1)\n>> -- -> Index Scan using connections_locked_node_idx on\n>> connections_locked (cost=0.00..8.29 rows=1 width=16) (actual\n>> time=0.024..0.024 rows=0 loops=1)\n>> -- Index Cond: (node = 1015)\n>> -- Filter: (locked_by = 4711)\n>>\n>>\n>>\n>> ------- end of test case -----\n>>\n>> Can someone explain what is happening here? Is there some way we can\n>> rewrite our query or some setting we could turn on or off to get the\n>> optimizer to choose to use our indexes?\n>>\n>> (testing with \"set enable_seqscan = false;\" does not make a difference\n>> as far as we can see)\n>>\n>> To verify that we have really created all necessary indexes, we have\n>> converted this simplified test case to Oracle syntax and tested it on\n>> our Oracle server. In this case, we do get the expected index accesses,\n>> so we think that we have in fact managed to isolate the problem using\n>> this test case.\n>>\n>> What we are hoping for:\n>> Since we have lots of queries joining these kind of \"union all\"-views\n>> between a master table and a transaction table, we would be really\n>> glad to hear something like \"when you use these kinds of views, you\n>> need to do X, Y and Z to get good performance\".\n>>\n>> Thanks in advance for any help!\n>> /Fredrik\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Fri, 13 May 2011 16:48:38 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to avoid seq scans for joins between union-all\n\tviews (test case included)"
},
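If I read Cédric's suggestion correctly, he means putting locked_by first in
the primary key so its index can serve the locked_by = 4711 filter. A sketch
against the test-case table (constraint name kept from the original schema):

alter table connections_locked drop constraint locked_pk;
alter table connections_locked
  add constraint locked_pk primary key (locked_by, con_id);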
{
"msg_contents": "Hi Denis and Cédric\n\nThanks for your answers.\n\n> Fredrick, What indexes Oracle did choose ? (index-only scan ?)\n\nOracle chooses a plan which looks like this:\nSELECT STATEMENT Optimizer=ALL_ROWS (Cost=5 Card=7 Bytes=182)\n VIEW OF 'CONNECTIONS_V' (VIEW) (Cost=5 Card=7 Bytes=182)\n UNION-ALL\n INLIST ITERATOR\n TABLE ACCESS (BY INDEX ROWID) OF 'CONNECTIONS' (TABLE) (Cost=5\nCard=6 Bytes=54)\n INDEX (RANGE SCAN) OF 'CONNECTIONS_NODE_IDX' (INDEX) (Cost=4 Card=6)\n INLIST ITERATOR\n TABLE ACCESS (BY INDEX ROWID) OF 'CONNECTIONS_LOCKED' (TABLE)\n(Cost=0 Card=1 Bytes=39)\n INDEX (RANGE SCAN) OF 'CONNECTIONS_LOCKED_NODE_IDX' (INDEX)\n(Cost=0 Card=1)\n\nThis means that only the indexes of connections.node and\nconnections_locked.node are used.\n\nI don't think that we want to use any index for locked_by here,\nwe are hoping for the node = <value> predicate to be pushed\ninto both halves of the union all view (not sure if this is the right\nterminology).\n\nFor example, in the simplified-but-still-problematic query\nselect con2.obj_id from connections_v con2 where con2.node in (select 1015);\nwe are hoping for the node-index to be used for both connections and\nconnections_locked.\n\nWe hope to get the same plan/performance as for this query:\nselect con2.obj_id from connections_v con2 where con2.node in (1015);\nI don't understand why there is a difference between \"in (select\n1015)\" and \"in (1015)\"?\n\n> That said, note that index usage depends on your data distribution: postgres\n> may identify that it'll read most/all of the table anyway, and opt to do a\n> (cheaper) seq scan instead.\n\nYes, I know, but I've tried to create the test case data distribution in a way\nI hope makes this unlikely (0.5 million rows in one table, 25000 in the\nother table, two rows in each table for each distinct value of node, only\na few rows returned from the queries.\n\nThanks again for you answers so far\n/Fredrik\n",
"msg_date": "Fri, 13 May 2011 17:09:42 +0200",
"msg_from": "Fredrik Widlert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to avoid seq scans for joins between union-all\n\tviews (test case included)"
}
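One possible workaround, relying only on the observation above that literal
constants do get index scans on both branches of the view: fetch the node
values first, then re-issue the lookup with those values inlined as literals.
This is an untested sketch; the function name and shape are made up for
illustration:

create or replace function neighbors(p_obj integer)
returns setof integer as $$
declare
  v_nodes text;
begin
  -- step 1: this lookup is already planned with index scans (see the
  -- inner Append in the plans above)
  select array_to_string(array_agg(node), ',') into v_nodes
    from connections_v
   where obj_id = p_obj;

  if v_nodes is null then
    return;  -- unknown object, nothing to return
  end if;

  -- step 2: re-issue the node lookup with literal constants, the form the
  -- test case shows is planned with index scans on both branches
  return query execute
    'select obj_id from connections_v where node in (' || v_nodes || ')';
end;
$$ language plpgsql;

-- usage: select obj_id from neighbors(17);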
] |
[
{
"msg_contents": "Hi:\n\nI installed PostgreSQL9.0 from EnterpriseDB with“one click installer” in\nwindows 7 & 32bit.\n\nand use microsoft visual studio 2010 c++.\n\nI added the libpq.lib to the link property of the project, also included the\nlib folder and path.\n\nSuccessfully compiled .c and .cpp file after transfer .pgc file to .c file\nusing ECPG.\n\nBut it always have errors like this when link:\n\nerror LNK2019: unresolved external symbol _PGTYPESinterval_new referenced in\nfunction.\n\nAnd all the function PGTYPEStimestamp and PGTYPEsinterval can cause the\nsame error.\n\nDoes someone can help me?\n\nThanks.\n\nFanbin\n\nHi:\nI installed PostgreSQL9.0 from EnterpriseDB with“one click installer” in windows 7 & 32bit.\nand use microsoft visual studio 2010 c++. \nI added the libpq.lib to the link property of the project, also included the lib folder and path.\nSuccessfully compiled .c and .cpp file after transfer .pgc file to .c file using ECPG.\nBut it always have errors like this when link:\nerror LNK2019: unresolved external symbol _PGTYPESinterval_new referenced in function.\nAnd all the function PGTYPEStimestamp and PGTYPEsinterval can cause the same error. \nDoes someone can help me?\nThanks.\nFanbin",
"msg_date": "Fri, 13 May 2011 11:54:38 -0400",
"msg_from": "Fanbin Meng <[email protected]>",
"msg_from_op": true,
"msg_subject": "Link error when use Pgtypes function in windows"
},
{
"msg_contents": "> Does someone can help me?\n\nYou may want to try pgsql-general instead of this list.\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Fri, 13 May 2011 09:22:25 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Link error when use Pgtypes function in windows"
},
{
"msg_contents": "On Fri, May 13, 2011 at 12:22 PM, Maciek Sakrejda <[email protected]> wrote:\n>> Does someone can help me?\n>\n> You may want to try pgsql-general instead of this list.\n\nYeah, this isn't a performance question.\n\nBut I wonder if the problem might be that the OP needs to link with\nthe ecpg library, not just libpq.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 19 May 2011 16:10:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Link error when use Pgtypes function in windows"
}
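If that is the cause, the fix would be adding the ECPG import libraries to the
linker's Additional Dependencies next to libpq.lib. The PGTYPES* functions
come from ECPG's pgtypes library; in the EnterpriseDB Windows build the file
names are presumably along these lines (check the installation's lib folder,
as the exact names may differ by version):

  libpq.lib; libecpg.lib; libpgtypes.lib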
] |
[
{
"msg_contents": "Hi all:\n\nI am adding pgiosim to our testing for new database hardware and I am\nseeing something I don't quite get and I think it's because I am using\npgiosim incorrectly.\n\nSpecs:\n\n OS: centos 5.5 kernel: 2.6.18-194.32.1.el5\n memory: 96GB\n cpu: 2x Intel(R) Xeon(R) X5690 @ 3.47GHz (6 core, ht enabled)\n disks: WD2003FYYS RE4\n raid: lsi - 9260-4i with 8 disks in raid 10 configuration\n 1MB stripe size\n raid cache enabled w/ bbu\n disk caches disabled\n filesystem: ext3 created with -E stride=256\n\nI am seeing really poor (70) iops with pgiosim. According to:\nhttp://www.tomshardware.com/reviews/2tb-hdd-7200,2430-8.html in the\ndatabase benchmark they are seeing ~170 iops on a single disk for\nthese drives. I would expect an 8 disk raid 10 should get better then\n3x the single disk rate (assuming the data is randomly distributed).\n\nTo test I am using 5 100GB files with\n\n sudo ~/pgiosim -c -b 100G -v file?\n\nI am using 100G sizes to make sure that the data read and files sizes\nexceed the memory size of the system.\n\nHowever if I use 5 1GB files (and still 100GB read data) I see 200+ to\n400+ iops at 50% of the 100GB of data read, which I assume means that\nthe data is cached in the OS cache and I am not really getting hard\ndrive/raid I/O measurement of iops.\n\nHowever, IIUC postgres will never have an index file greater than 1GB\nin size\n(http://www.postgresql.org/docs/8.4/static/storage-file-layout.html)\nand will just add 1GB segments, so the 1GB size files seems to be more\nrealistic.\n\nSo do I want 100 (or probably 2 or 3 times more say 300) 1GB files to\nfeed pgiosim? That way I will have enough data that not all of it can\nbe cached in memory and the file sizes (and file operations:\nopen/close) more closely match what postgres is doing with index\nfiles?\n\nAlso in the output of pgiosim I see:\n\n 25.17%, 2881 read, 0 written, 2304.56kB/sec 288.07 iops\n\nwhich I interpret (left to right) as the % of the 100GB that has been\nread, the number of read operations over some time period, number of\nbytes read/written and the io operations/sec. Iops always seems to be\n1/10th of the read number (rounded up to an integer). Is this\nexpected and if so anybody know why?\n\nWhile this is running if I also run \"iostat -p /dev/sdc 5\" I see:\n\n Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n sdc 166.40 2652.80 4.80 13264 24\n sdc1 2818.80 1.20 999.20 6 4996\n\nwhich I am interpreting as 2818 read/io operations (corresponding more\nor less to read in the pgiosim output) to the partition and of those\nonly 116 are actually going to the drive??? with the rest handled from\nOS cache.\n\nHowever the tps isn't increasing when I see pgiosim reporting:\n\n 48.47%, 4610 read, 0 written, 3687.62kB/sec 460.95 iops\n\nan iostat 5 output near the same time is reporting:\n\n Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n sdc 165.87 2647.50 4.79 13264 24\n sdc1 2812.97 0.60 995.41 3 4987\n\nso I am not sure if there is a correlation between the read and tps\nsettings.\n\nAlso I am assuming blks written is filesystem metadata although that\nseems like a lot of data \n\nIf I stop the pgiosim, the iostat drops to 0 write and reads as\nexpected.\n\nSo does anybody have any comments on how to test with pgiosim and how\nto correlate the iostat and pgiosim outputs?\n\nThanks for your feedback.\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n",
"msg_date": "Fri, 13 May 2011 21:09:41 +0000",
"msg_from": "John Rouillard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using pgiosim realistically"
},
{
"msg_contents": "On Fri, May 13, 2011 at 09:09:41PM +0000, John Rouillard wrote:\n> Hi all:\n> \n> I am adding pgiosim to our testing for new database hardware and I am\n> seeing something I don't quite get and I think it's because I am using\n> pgiosim incorrectly.\n> \n> Specs:\n> \n> OS: centos 5.5 kernel: 2.6.18-194.32.1.el5\n> memory: 96GB\n> cpu: 2x Intel(R) Xeon(R) X5690 @ 3.47GHz (6 core, ht enabled)\n> disks: WD2003FYYS RE4\n> raid: lsi - 9260-4i with 8 disks in raid 10 configuration\n> 1MB stripe size\n> raid cache enabled w/ bbu\n> disk caches disabled\n> filesystem: ext3 created with -E stride=256\n> \n> I am seeing really poor (70) iops with pgiosim. According to:\n> http://www.tomshardware.com/reviews/2tb-hdd-7200,2430-8.html in the\n> database benchmark they are seeing ~170 iops on a single disk for\n> these drives. I would expect an 8 disk raid 10 should get better then\n> 3x the single disk rate (assuming the data is randomly distributed).\n> \n> To test I am using 5 100GB files with\n> \n> sudo ~/pgiosim -c -b 100G -v file?\n> \n> I am using 100G sizes to make sure that the data read and files sizes\n> exceed the memory size of the system.\n> \n> However if I use 5 1GB files (and still 100GB read data) I see 200+ to\n> 400+ iops at 50% of the 100GB of data read, which I assume means that\n> the data is cached in the OS cache and I am not really getting hard\n> drive/raid I/O measurement of iops.\n> \n> However, IIUC postgres will never have an index file greater than 1GB\n> in size\n> (http://www.postgresql.org/docs/8.4/static/storage-file-layout.html)\n> and will just add 1GB segments, so the 1GB size files seems to be more\n> realistic.\n> \n> So do I want 100 (or probably 2 or 3 times more say 300) 1GB files to\n> feed pgiosim? That way I will have enough data that not all of it can\n> be cached in memory and the file sizes (and file operations:\n> open/close) more closely match what postgres is doing with index\n> files?\n> \n> Also in the output of pgiosim I see:\n> \n> 25.17%, 2881 read, 0 written, 2304.56kB/sec 288.07 iops\n> \n> which I interpret (left to right) as the % of the 100GB that has been\n> read, the number of read operations over some time period, number of\n> bytes read/written and the io operations/sec. Iops always seems to be\n> 1/10th of the read number (rounded up to an integer). Is this\n> expected and if so anybody know why?\n> \n> While this is running if I also run \"iostat -p /dev/sdc 5\" I see:\n> \n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sdc 166.40 2652.80 4.80 13264 24\n> sdc1 2818.80 1.20 999.20 6 4996\n> \n> which I am interpreting as 2818 read/io operations (corresponding more\n> or less to read in the pgiosim output) to the partition and of those\n> only 116 are actually going to the drive??? 
with the rest handled from\n> OS cache.\n> \n> However the tps isn't increasing when I see pgiosim reporting:\n> \n> 48.47%, 4610 read, 0 written, 3687.62kB/sec 460.95 iops\n> \n> an iostat 5 output near the same time is reporting:\n> \n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sdc 165.87 2647.50 4.79 13264 24\n> sdc1 2812.97 0.60 995.41 3 4987\n> \n> so I am not sure if there is a correlation between the read and tps\n> settings.\n> \n> Also I am assuming blks written is filesystem metadata although that\n> seems like a lot of data \n> \n> If I stop the pgiosim, the iostat drops to 0 write and reads as\n> expected.\n> \n> So does anybody have any comments on how to test with pgiosim and how\n> to correlate the iostat and pgiosim outputs?\n> \n> Thanks for your feedback.\n> -- \n> \t\t\t\t-- rouilj\n> \n> John Rouillard System Administrator\n> Renesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n> \n\nHi John,\n\nThose drives are 7200 rpm drives which would give you a maximum write\nrate of 120/sec at best with the cache disabled. I actually think your\n70/sec is closer to reality and what you should anticipate in real use.\nI do not see how they could make 170/sec. Did they strap a jet engine to\nthe drive. :)\n\nRegards,\nKen\n",
"msg_date": "Sat, 14 May 2011 12:07:02 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using pgiosim realistically"
},
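As a back-of-the-envelope check of that figure: 7200 rpm is 120 revolutions
per second, so the average rotational latency is (60 / 7200) / 2, roughly
4.2 ms. Add a typical 8-9 ms average seek and each random access costs about
12-13 ms, which works out to roughly 75-85 random IOPS per drive. Higher
published numbers usually assume command queuing or a cache-friendly access
pattern rather than purely random single-request reads.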
{
"msg_contents": "On Sat, May 14, 2011 at 12:07:02PM -0500, [email protected] wrote:\n> On Fri, May 13, 2011 at 09:09:41PM +0000, John Rouillard wrote:\n> > I am adding pgiosim to our testing for new database hardware and I am\n> > seeing something I don't quite get and I think it's because I am using\n> > pgiosim incorrectly.\n> > \n> > Specs:\n> > \n> > OS: centos 5.5 kernel: 2.6.18-194.32.1.el5\n> > memory: 96GB\n> > cpu: 2x Intel(R) Xeon(R) X5690 @ 3.47GHz (6 core, ht enabled)\n> > disks: WD2003FYYS RE4\n> > raid: lsi - 9260-4i with 8 disks in raid 10 configuration\n> > 1MB stripe size\n> > raid cache enabled w/ bbu\n> > disk caches disabled\n> > filesystem: ext3 created with -E stride=256\n> > \n> > I am seeing really poor (70) iops with pgiosim. According to:\n> > http://www.tomshardware.com/reviews/2tb-hdd-7200,2430-8.html in the\n> > database benchmark they are seeing ~170 iops on a single disk for\n> > these drives. I would expect an 8 disk raid 10 should get better then\n> > 3x the single disk rate (assuming the data is randomly distributed).\n> Those drives are 7200 rpm drives which would give you a maximum write\n> rate of 120/sec at best with the cache disabled. I actually think your\n> 70/sec is closer to reality and what you should anticipate in real use.\n> I do not see how they could make 170/sec. Did they strap a jet engine to\n> the drive. :)\n\nHmm, I stated the disk cache was disabled. I should have said the disk\nwrite cache, but it's possible the readhead cache is disabled as well\n(not quite sure how to tell on the lsi cards). Also there isn't a lot\nof detail in what the database test mix is and I haven't tried\nresearching the site to see if the spec the exact test. If it included\na lot of writes and they were being handled by a cache then that could\nexplain it.\n\nHowever, in my case I have an 8 disk raid 10 with a read only load (in\nthis testing configuration). Shouldn't I expect more iops than a\nsingle disk can provide? Maybe pgiosim is hitting some other boundary\nthan just i/o?\n\nAlso it turns out that pgiosim can only handle 64 files. I haven't\nchecked to see if this is a compile time changable item or not.\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n",
"msg_date": "Mon, 16 May 2011 13:17:30 +0000",
"msg_from": "John Rouillard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using pgiosim realistically"
},
{
"msg_contents": "\nOn May 16, 2011, at 9:17 AM, John Rouillard wrote:\n\n>>>\n>>> I am seeing really poor (70) iops with pgiosim. According to:\n>>> http://www.tomshardware.com/reviews/2tb-hdd-7200,2430-8.html in the\n>>> database benchmark they are seeing ~170 iops on a single disk for\n>>> these drives. I would expect an 8 disk raid 10 should get better \n>>> then\n>>> 3x the single disk rate (assuming the data is randomly distributed).\n>> Those drives are 7200 rpm drives which would give you a maximum write\n>> rate of 120/sec at best with the cache disabled. I actually think \n>> your\n>> 70/sec is closer to reality and what you should anticipate in real \n>> use.\n>> I do not see how they could make 170/sec. Did they strap a jet \n>> engine to\n>> the drive. :)\n>\n\nalso you are reading with a worst case scenario for the mechanical \ndisk - randomly seeking around everywhere, which will lower \nperformance drastically.\n\n> Hmm, I stated the disk cache was disabled. I should have said the disk\n> write cache, but it's possible the readhead cache is disabled as well\n> (not quite sure how to tell on the lsi cards). Also there isn't a lot\n> of detail in what the database test mix is and I haven't tried\n> researching the site to see if the spec the exact test. If it included\n> a lot of writes and they were being handled by a cache then that could\n> explain it.\n>\n\nyou'll get some extra from the os readahead and the drive's potential \nown readahead.\n\n\n> However, in my case I have an 8 disk raid 10 with a read only load (in\n> this testing configuration). Shouldn't I expect more iops than a\n> single disk can provide? Maybe pgiosim is hitting some other boundary\n> than just i/o?\n>\n\ngiven your command line you are only running a single thread - use the \n-t argument to add more threads and that'll increase concurrency. a \nsingle process can only process so much at once and with multiple \nthreads requesting different things the drive will actually be able to \nrespond faster since it will have more work to do.\nI tend to test various levels - usually a single (-t 1 - the default) \nto get a base line, then -t (drives / 2), -t (#drives) up to probably \n4x drives (you'll see iops level off).\n\n> Also it turns out that pgiosim can only handle 64 files. I haven't\n> checked to see if this is a compile time changable item or not.\n>\n\nthat is a #define in pgiosim.c\n\nalso, are you running the latest pgiosim from pgfoundry?\n\nthe -w param to pgiosim has it rewrite blocks out as it runs. (it is a \npercentage).\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n",
"msg_date": "Mon, 16 May 2011 12:23:13 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using pgiosim realistically"
},
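For example (a sketch only, reusing the flags already shown in this thread),
the scaling runs Jeff describes for an 8-drive array might look like:

  ~/pgiosim -c -b 100G -v -t 1 file*    # baseline, single thread
  ~/pgiosim -c -b 100G -v -t 4 file*    # drives / 2
  ~/pgiosim -c -b 100G -v -t 8 file*    # one thread per drive
  ~/pgiosim -c -b 100G -v -t 32 file*   # 4x drives; iops should level off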
{
"msg_contents": "On Mon, May 16, 2011 at 12:23:13PM -0400, Jeff wrote:\n> On May 16, 2011, at 9:17 AM, John Rouillard wrote:\n> >However, in my case I have an 8 disk raid 10 with a read only load (in\n> >this testing configuration). Shouldn't I expect more iops than a\n> >single disk can provide? Maybe pgiosim is hitting some other boundary\n> >than just i/o?\n> >\n> \n> given your command line you are only running a single thread - use\n> the -t argument to add more threads and that'll increase\n> concurrency. a single process can only process so much at once and\n> with multiple threads requesting different things the drive will\n> actually be able to respond faster since it will have more work to\n> do.\n> I tend to test various levels - usually a single (-t 1 - the\n> default) to get a base line, then -t (drives / 2), -t (#drives) up\n> to probably 4x drives (you'll see iops level off).\n\nOk cool. I'll try that.\n \n> >Also it turns out that pgiosim can only handle 64 files. I haven't\n> >checked to see if this is a compile time changable item or not.\n> \n> that is a #define in pgiosim.c\n\nSo which is a better test, modifying the #define to allow specifying\n200-300 1GB files, or using 64 files but increasing the size of my\nfiles to 2-3GB for a total bytes in the file two or three times the\nmemory in my server (96GB)?\n\n> also, are you running the latest pgiosim from pgfoundry?\n\nyup version 0.5 from the foundry.\n\n> the -w param to pgiosim has it rewrite blocks out as it runs. (it is\n> a percentage).\n\nYup, I was running with that and getting low enough numbers, that I\nswitched to pure read tests. It looks like I just need multiple\nthreads so I can have multiple reads/writes in flight at the same\ntime.\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n",
"msg_date": "Mon, 16 May 2011 17:06:36 +0000",
"msg_from": "John Rouillard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using pgiosim realistically"
},
{
"msg_contents": "\nOn May 16, 2011, at 1:06 PM, John Rouillard wrote:\n\n>> that is a #define in pgiosim.c\n>\n> So which is a better test, modifying the #define to allow specifying\n> 200-300 1GB files, or using 64 files but increasing the size of my\n> files to 2-3GB for a total bytes in the file two or three times the\n> memory in my server (96GB)?\n>\n\nI tend to make 10G chunks with dd and run pgiosim over that.\ndd if=/dev/zero of=bigfile bs=1M count=10240\n\n>> the -w param to pgiosim has it rewrite blocks out as it runs. (it is\n>> a percentage).\n>\n> Yup, I was running with that and getting low enough numbers, that I\n> switched to pure read tests. It looks like I just need multiple\n> threads so I can have multiple reads/writes in flight at the same\n> time.\n>\n\nYep - you need multiple threads to get max throughput of your io.\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n",
"msg_date": "Mon, 16 May 2011 13:54:06 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using pgiosim realistically"
},
{
"msg_contents": "On Mon, May 16, 2011 at 01:54:06PM -0400, Jeff wrote:\n> Yep - you need multiple threads to get max throughput of your io.\n\nI am running:\n\n ~/pgiosim -c -b 100G -v -t4 file[0-9]*\n\nWill each thread move 100GB of data? I am seeing:\n\n 158.69%, 4260 read, 0 written, 3407.64kB/sec 425.95 iops\n\nMaybe the completion target percentage is off because of the threads?\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n",
"msg_date": "Tue, 17 May 2011 17:35:19 +0000",
"msg_from": "John Rouillard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using pgiosim realistically"
}
] |
[
{
"msg_contents": "Hi,\n\nI am conducting a benchmark to compare KVP table vs. hstore and got\nbad hstore performance results when the no. of records is greater than\nabout 500'000.\n\nCREATE TABLE kvp ( id SERIAL PRIMARY KEY, key text NOT NULL, value text );\n-- with index on key\nCREATE TABLE myhstore ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );\n-- with GIST index on obj\n\nDoes anyone have experience with that?\n\nYours, Stefan\n",
"msg_date": "Sat, 14 May 2011 12:10:32 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "KVP table vs. hstore - hstore performance (Was: Postgres NoSQL\n\temulation)"
},
{
"msg_contents": "On 14/05/11 18:10, Stefan Keller wrote:\n> Hi,\n> \n> I am conducting a benchmark to compare KVP table vs. hstore and got\n> bad hstore performance results when the no. of records is greater than\n> about 500'000.\n> \n> CREATE TABLE kvp ( id SERIAL PRIMARY KEY, key text NOT NULL, value text );\n> -- with index on key\n> CREATE TABLE myhstore ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );\n> -- with GIST index on obj\n> \n> Does anyone have experience with that?\n\nWhat are your queries?\n\nWhat does EXPLAIN ANALYZE report on those queries?\n\nDid you ANALYZE your tables after populating them?\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 16 May 2011 08:25:23 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: KVP table vs. hstore - hstore performance (Was: Postgres\n\tNoSQL emulation)"
},
{
"msg_contents": "On Sat, May 14, 2011 at 5:10 AM, Stefan Keller <[email protected]> wrote:\n> Hi,\n>\n> I am conducting a benchmark to compare KVP table vs. hstore and got\n> bad hstore performance results when the no. of records is greater than\n> about 500'000.\n>\n> CREATE TABLE kvp ( id SERIAL PRIMARY KEY, key text NOT NULL, value text );\n> -- with index on key\n> CREATE TABLE myhstore ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );\n> -- with GIST index on obj\n>\n> Does anyone have experience with that?\n\nhstore is not really designed for large-ish sets like that.\n\nmerlin\n",
"msg_date": "Mon, 16 May 2011 08:47:09 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
{
"msg_contents": "Hi Merlin\n\nThe analyze command gave the following result:\n\nOn the KVP table:\nIndex Scan using kvpidx on bench_kvp (cost=0.00..8.53 rows=1 width=180) (actual time=0.037..0.038 rows=1 loops=1)\nIndex Cond: (bench_id = '200000_200000'::text)\nTotal runtime: 0.057 ms\n\nAnd on the Hstore table:\nBitmap Heap Scan on bench_hstore (cost=32.22..3507.54 rows=1000 width=265) (actual time=145.040..256.173 rows=1 loops=1)\nRecheck Cond: (bench_hstore @> '\"bench_id\"=>\"200000_200000\"'::hstore)\n-> Bitmap Index Scan on hidx (cost=0.00..31.97 rows=1000 width=0) (actual time=114.748..114.748 rows=30605 loops=1)\nIndex Cond: (bench_hstore @> '\"bench_id\"=>\"200000_200000\"'::hstore)\nTotal runtime: 256.211 ms\n\nFor Hstore I'm using a GIST index.\n\nTable analysis returned no message.\n\nMichel\n\n\nVon: Merlin Moncure [email protected]\nAn: Stefan Keller <[email protected]>\ncc: [email protected]\nDatum: 16. Mai 2011 15:47\nBetreff: Re: [PERFORM] KVP table vs. hstore - hstore performance (Was:\nPostgres NoSQL emulation)\n\nMerlin Moncure\n\nhstore is not really designed for large-ish sets like that.\n\nmerlin\n\n2011/5/16 Stefan Keller <[email protected]>:\n> Hoi Michel\n>\n> Hast du die EXPLAIN ANALYZE information?\n>\n> LG, Stefan\n>\n>\n> ---------- Forwarded message ----------\n> From: Craig Ringer <[email protected]>\n> Date: 2011/5/16\n> Subject: Re: [PERFORM] KVP table vs. hstore - hstore performance (Was:\n> Postgres NoSQL emulation)\n> To: Stefan Keller <[email protected]>\n> Cc: [email protected]\n>\n>\n> On 14/05/11 18:10, Stefan Keller wrote:\n>> Hi,\n>>\n>> I am conducting a benchmark to compare KVP table vs. hstore and got\n>> bad hstore performance results when the no. of records is greater than\n>> about 500'000.\n>>\n>> CREATE TABLE kvp ( id SERIAL PRIMARY KEY, key text NOT NULL, value text );\n>> -- with index on key\n>> CREATE TABLE myhstore ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );\n>> -- with GIST index on obj\n>>\n>> Does anyone have experience with that?\n>\n> What are your queries?\n>\n> What does EXPLAIN ANALYZE report on those queries?\n>\n> Did you ANALYZE your tables after populating them?\n>\n> --\n> Craig Ringer\n>\n",
"msg_date": "Tue, 17 May 2011 17:10:31 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "FW: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
{
"msg_contents": "On May 16, 2011, at 8:47 AM, Merlin Moncure wrote:\n> On Sat, May 14, 2011 at 5:10 AM, Stefan Keller <[email protected]> wrote:\n>> Hi,\n>> \n>> I am conducting a benchmark to compare KVP table vs. hstore and got\n>> bad hstore performance results when the no. of records is greater than\n>> about 500'000.\n>> \n>> CREATE TABLE kvp ( id SERIAL PRIMARY KEY, key text NOT NULL, value text );\n>> -- with index on key\n>> CREATE TABLE myhstore ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );\n>> -- with GIST index on obj\n>> \n>> Does anyone have experience with that?\n> \n> hstore is not really designed for large-ish sets like that.\n\nAnd KVP is? ;)\n\nIIRC hstore ends up just storing everything as text, with pointers to know where things start and end. There's no real indexing inside hstore, so basically the only thing it can do is scan the entire hstore.\n\nThat said, I would strongly reconsider using KVP for anything except the most trivial of data sets. It is *extremely* inefficient. Do you really have absolutely no idea what *any* of your keys will be? Even if you need to support a certain amount of non-deterministic stuff, I would put everything you possibly can into real fields and only use KVP or hstore for things that you really didn't anticipate.\n\nKeep in mind that for every *value*, your overhead is 24 bytes for the heap header, 2+ varlena bytes in the heap, plus the length of the key. In the index you're looking at 6+ bytes of overhead, 1+ byte for varlena, plus the length of the key. The PK will cost you an additional 16-24 bytes, depending on alignment. So that's a *minimum* of ~50 bytes per value, and realistically the overhead will be closer to 65-70 bytes, *per value*. Unless your values are decent-sized strings, the overhead is going to be many times larger than the actual data!\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Tue, 17 May 2011 14:30:46 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: KVP table vs. hstore - hstore performance (Was: Postgres NoSQL\n\temulation)"
},
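One way to see that overhead directly is to compare the on-disk footprint of
the two layouts after loading the same data. A sketch using the standard size
functions, with the table names from the benchmark schema at the top of the
thread:

select pg_size_pretty(pg_total_relation_size('kvp'))      as kvp_total,
       pg_size_pretty(pg_total_relation_size('myhstore')) as hstore_total;

-- rough per-row heap cost: heap size divided by row count
select pg_relation_size('kvp') / (select count(*) from kvp) as bytes_per_kvp_row;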
{
"msg_contents": "Hi Jim\n\nYou actually made me think about the schema Michel and I are using:\n\n> And KVP is? ;)\n\nCREATE TABLE mykvpstore( id bigint PRIMARY KEY )\nCREATE TABLE kvp ( id bigint REFERENCES mykvpstore(id), key text NOT\nNULL, value text, );\n-- with index on key\n\nAnd the table with the associative array type (hstore) is:\nCREATE TABLE myhstore ( id bigint PRIMARY KEY, kvps hstore NOT NULL );\n-- with GIST index on obj\n\nIt seems to me that in the mykvpstore-kvp there is also some overhead.\n\nAnd yes, we have no clue what keys to anticipate, except for some\ncommon ones like 'name': The use case is coming from OpenStreetMap\n(http://wiki.openstreetmap.org/wiki/Database_schema ).\n\nYours, Stefan\n\n\n2011/5/17 Jim Nasby <[email protected]>:\n> On May 16, 2011, at 8:47 AM, Merlin Moncure wrote:\n>> On Sat, May 14, 2011 at 5:10 AM, Stefan Keller <[email protected]> wrote:\n>>> Hi,\n>>>\n>>> I am conducting a benchmark to compare KVP table vs. hstore and got\n>>> bad hstore performance results when the no. of records is greater than\n>>> about 500'000.\n>>>\n>>> CREATE TABLE kvp ( id SERIAL PRIMARY KEY, key text NOT NULL, value text );\n>>> -- with index on key\n>>> CREATE TABLE myhstore ( id SERIAL PRIMARY KEY, obj hstore NOT NULL );\n>>> -- with GIST index on obj\n>>>\n>>> Does anyone have experience with that?\n>>\n>> hstore is not really designed for large-ish sets like that.\n>\n> And KVP is? ;)\n>\n> IIRC hstore ends up just storing everything as text, with pointers to know where things start and end. There's no real indexing inside hstore, so basically the only thing it can do is scan the entire hstore.\n>\n> That said, I would strongly reconsider using KVP for anything except the most trivial of data sets. It is *extremely* inefficient. Do you really have absolutely no idea what *any* of your keys will be? Even if you need to support a certain amount of non-deterministic stuff, I would put everything you possibly can into real fields and only use KVP or hstore for things that you really didn't anticipate.\n>\n> Keep in mind that for every *value*, your overhead is 24 bytes for the heap header, 2+ varlena bytes in the heap, plus the length of the key. In the index you're looking at 6+ bytes of overhead, 1+ byte for varlena, plus the length of the key. The PK will cost you an additional 16-24 bytes, depending on alignment. So that's a *minimum* of ~50 bytes per value, and realistically the overhead will be closer to 65-70 bytes, *per value*. Unless your values are decent-sized strings, the overhead is going to be many times larger than the actual data!\n> --\n> Jim C. Nasby, Database Architect [email protected]\n> 512.569.9461 (cell) http://jim.nasby.net\n>\n>\n>\n",
"msg_date": "Wed, 18 May 2011 00:07:05 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
{
"msg_contents": "\n> Hi Merlin\n>\n> The analyze command gave the following result:\n>\n> On the KVP table:\n> Index Scan using kvpidx on bench_kvp (cost=0.00..8.53 rows=1 width=180) \n> (actual time=0.037..0.038 rows=1 loops=1)\n> Index Cond: (bench_id = '200000_200000'::text)\n> Total runtime: 0.057 ms\n>\n> And on the Hstore table:\n> Bitmap Heap Scan on bench_hstore (cost=32.22..3507.54 rows=1000 \n> width=265) (actual time=145.040..256.173 rows=1 loops=1)\n> Recheck Cond: (bench_hstore @> '\"bench_id\"=>\"200000_200000\"'::hstore)\n> -> Bitmap Index Scan on hidx (cost=0.00..31.97 rows=1000 width=0) \n> (actual time=114.748..114.748 rows=30605 loops=1)\n> Index Cond: (bench_hstore @> '\"bench_id\"=>\"200000_200000\"'::hstore)\n> Total runtime: 256.211 ms\n>\n> For Hstore I'm using a GIST index.\n>\n\nTry to create a btree index on \"(bench_hstore->bench_id) WHERE \n(bench_hstore->bench_id) IS NOT NULL\".\n\n",
"msg_date": "Mon, 23 May 2011 12:53:57 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
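Spelled out against the benchmark table, the suggested index is a partial expression index; this sketch assumes the hstore column is literally named bench_hstore, as the plan output suggests. The query then filters on the indexed expression instead of the @> containment operator.

-- partial btree index on the single key that is searched
CREATE INDEX bench_hstore_bench_id_idx
ON bench_hstore ((bench_hstore -> 'bench_id'))
WHERE (bench_hstore -> 'bench_id') IS NOT NULL;

-- lookup via the indexed expression
SELECT *
FROM bench_hstore
WHERE bench_hstore -> 'bench_id' = '200000_200000';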
{
"msg_contents": "On Tue, May 17, 2011 at 11:10 AM, <[email protected]> wrote:\n> For Hstore I'm using a GIST index.\n\nI would have thought that GIN would be a better choice for this workload.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 23 May 2011 13:19:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
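For comparison, a GIN index over the whole hstore column would look like this (same column-name assumption as above); it serves the original containment query directly.

-- GIN index over all keys/values of the hstore column
CREATE INDEX bench_hstore_gin_idx ON bench_hstore USING gin (bench_hstore);

-- containment query from the earlier EXPLAIN output
SELECT *
FROM bench_hstore
WHERE bench_hstore @> '"bench_id"=>"200000_200000"'::hstore;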
{
"msg_contents": "Salut Pierre\n\nYou wrote\n> Try to create a btree index on \"(bench_hstore->bench_id) WHERE\n> (bench_hstore->bench_id) IS NOT NULL\".\n\nWhat do you mean exactly?\n=> CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE\n??? IS NOT NULL;\n\nMy table's def is:\n> CREATE TABLE myhstore ( id bigint PRIMARY KEY, kvps hstore NOT NULL );\nSo I'm doing something like:\nCREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps);\n\nStefan\n\n\n2011/5/23 Pierre C <[email protected]>:\n>\n>> Hi Merlin\n>>\n>> The analyze command gave the following result:\n>>\n>> On the KVP table:\n>> Index Scan using kvpidx on bench_kvp (cost=0.00..8.53 rows=1 width=180)\n>> (actual time=0.037..0.038 rows=1 loops=1)\n>> Index Cond: (bench_id = '200000_200000'::text)\n>> Total runtime: 0.057 ms\n>>\n>> And on the Hstore table:\n>> Bitmap Heap Scan on bench_hstore (cost=32.22..3507.54 rows=1000 width=265)\n>> (actual time=145.040..256.173 rows=1 loops=1)\n>> Recheck Cond: (bench_hstore @> '\"bench_id\"=>\"200000_200000\"'::hstore)\n>> -> Bitmap Index Scan on hidx (cost=0.00..31.97 rows=1000 width=0) (actual\n>> time=114.748..114.748 rows=30605 loops=1)\n>> Index Cond: (bench_hstore @> '\"bench_id\"=>\"200000_200000\"'::hstore)\n>> Total runtime: 256.211 ms\n>>\n>> For Hstore I'm using a GIST index.\n>>\n>\n> Try to create a btree index on \"(bench_hstore->bench_id) WHERE\n> (bench_hstore->bench_id) IS NOT NULL\".\n>\n>\n",
"msg_date": "Wed, 25 May 2011 01:45:47 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FW: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
{
"msg_contents": "> You wrote\n>> Try to create a btree index on \"(bench_hstore->bench_id) WHERE\n>> (bench_hstore->bench_id) IS NOT NULL\".\n>\n> What do you mean exactly?\n> => CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE\n> ??? IS NOT NULL;\n>\n> My table's def is:\n>> CREATE TABLE myhstore ( id bigint PRIMARY KEY, kvps hstore NOT NULL );\n> So I'm doing something like:\n> CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps);\n\nHello ;\n\nI meant a plain old btree index like this :\n\nCREATE INDEX foo ON myhstore((kvps->'yourkeyname')) WHERE\n(kvps->'yourkeyname') IS NOT NULL;\n\nThe idea is that :\n\n- The reason to use hstore is to have an arbitrary number of keys and use\nthe keys you want, not have a fixed set of columns like in a table\n- Therefore, no hstore key is present in all rows (if it was, you'd make\nit a table column, and maybe index it)\n- You'll probably only want to index some of the keys/values (avoiding to\nindex values that contain serialized data or other stuff that never\nappears in a WHERE clause)\n\nSo, for each key that corresponds to a searchable attribute, I'd use a\nconditional index on that key, which only indexes the relevant rows. For\nkeys that never appear in a WHERE, no index is needed.\n\ngist is good if you want the intersecton of a hstore with another one (for\ninstance), btree is good if you want simple search or range search.\n",
"msg_date": "Wed, 25 May 2011 18:59:51 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
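Once such a conditional index exists ('yourkeyname' is of course a placeholder), queries filtering on the same expression can use it, including range searches, since an equality or range predicate on the expression implies the IS NOT NULL condition of the partial index.

-- equality search served by the partial btree index
SELECT id, kvps
FROM myhstore
WHERE kvps -> 'yourkeyname' = 'some value';

-- range search on the same (text) expression
SELECT id
FROM myhstore
WHERE kvps -> 'yourkeyname' BETWEEN 'a' AND 'f';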
{
"msg_contents": "On Wed, May 25, 2011 at 11:59 AM, Pierre C <[email protected]> wrote:\n>> You wrote\n>>>\n>>> Try to create a btree index on \"(bench_hstore->bench_id) WHERE\n>>> (bench_hstore->bench_id) IS NOT NULL\".\n>>\n>> What do you mean exactly?\n>> => CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE\n>> ??? IS NOT NULL;\n>>\n>> My table's def is:\n>>>\n>>> CREATE TABLE myhstore ( id bigint PRIMARY KEY, kvps hstore NOT NULL );\n>>\n>> So I'm doing something like:\n>> CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps);\n>\n> Hello ;\n>\n> I meant a plain old btree index like this :\n>\n> CREATE INDEX foo ON myhstore((kvps->'yourkeyname')) WHERE\n> (kvps->'yourkeyname') IS NOT NULL;\n>\n> The idea is that :\n>\n> - The reason to use hstore is to have an arbitrary number of keys and use\n> the keys you want, not have a fixed set of columns like in a table\n> - Therefore, no hstore key is present in all rows (if it was, you'd make\n> it a table column, and maybe index it)\n> - You'll probably only want to index some of the keys/values (avoiding to\n> index values that contain serialized data or other stuff that never\n> appears in a WHERE clause)\n>\n> So, for each key that corresponds to a searchable attribute, I'd use a\n> conditional index on that key, which only indexes the relevant rows. For\n> keys that never appear in a WHERE, no index is needed.\n>\n> gist is good if you want the intersecton of a hstore with another one (for\n> instance), btree is good if you want simple search or range search.\n\n+1 on this approach. it works really well (unless, of course, you\nneed 50 indexes...)\n\nmerlin\n",
"msg_date": "Wed, 25 May 2011 17:08:51 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
{
"msg_contents": "Hi all\n\nThank you to all who answered: That worked:\n\nCREATE INDEX planet_osm_point_tags_amenity\nON planet_osm_point ((tags->'amenity'))\nWHERE (tags->'amenity') IS NOT NULL;\n\nMy problem is, that in fact I don't know which tag to index since I'm\nrunning a web admin application where users can enter arbitrary\nqueries.\n\nYours, Stefan\n\n2011/5/25 Pierre C <[email protected]>:\n>> You wrote\n>>>\n>>> Try to create a btree index on \"(bench_hstore->bench_id) WHERE\n>>> (bench_hstore->bench_id) IS NOT NULL\".\n>>\n>> What do you mean exactly?\n>> => CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE\n>> ??? IS NOT NULL;\n>>\n>> My table's def is:\n>>>\n>>> CREATE TABLE myhstore ( id bigint PRIMARY KEY, kvps hstore NOT NULL );\n>>\n>> So I'm doing something like:\n>> CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps);\n>\n> Hello ;\n>\n> I meant a plain old btree index like this :\n>\n> CREATE INDEX foo ON myhstore((kvps->'yourkeyname')) WHERE\n> (kvps->'yourkeyname') IS NOT NULL;\n>\n> The idea is that :\n>\n> - The reason to use hstore is to have an arbitrary number of keys and use\n> the keys you want, not have a fixed set of columns like in a table\n> - Therefore, no hstore key is present in all rows (if it was, you'd make\n> it a table column, and maybe index it)\n> - You'll probably only want to index some of the keys/values (avoiding to\n> index values that contain serialized data or other stuff that never\n> appears in a WHERE clause)\n>\n> So, for each key that corresponds to a searchable attribute, I'd use a\n> conditional index on that key, which only indexes the relevant rows. For\n> keys that never appear in a WHERE, no index is needed.\n>\n> gist is good if you want the intersecton of a hstore with another one (for\n> instance), btree is good if you want simple search or range search.\n>\n",
"msg_date": "Thu, 26 May 2011 00:58:39 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FW: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
},
{
"msg_contents": "\n> My problem is, that in fact I don't know which tag to index since I'm\n> running a web admin application where users can enter arbitrary\n> queries.\n\nFor a tag cloud, try this :\n\n- table tags ( tag_id, tag_name )\n- table articles ( article_id )\n- table articles_to_tags( article_id, tag_id )\n\nnow this is the classical approach, which doesn't work so well when you \nwant to get an article that has several tags (tag intersection).\n\nSo, materialize the list of tag_ids for each article in an INTEGER[] array \nin the articles table, kept up to date with triggers.\n\nCreate a gist index on that, and use indexed array vs array operators.\n",
"msg_date": "Thu, 26 May 2011 11:24:24 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: KVP table vs. hstore - hstore performance (Was:\n\tPostgres NoSQL emulation)"
}
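A minimal sketch of the materialized tag-array idea, assuming the contrib intarray module is installed for the gist__int_ops operator class; the trigger that keeps tag_ids in sync with articles_to_tags is omitted here.

CREATE TABLE tags ( tag_id serial PRIMARY KEY, tag_name text NOT NULL );
CREATE TABLE articles ( article_id serial PRIMARY KEY, tag_ids integer[] NOT NULL DEFAULT '{}' );
CREATE TABLE articles_to_tags ( article_id integer REFERENCES articles, tag_id integer REFERENCES tags );

-- gist index on the materialized array of tag ids
CREATE INDEX articles_tag_ids_gist ON articles USING gist (tag_ids gist__int_ops);

-- tag intersection: articles carrying both tag 3 and tag 7
SELECT article_id FROM articles WHERE tag_ids @> ARRAY[3, 7];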
] |
[
{
"msg_contents": "Hi, I'm new to postgres and I have the next question. \n\nI have a\nphp program that makes 100000 inserts in my database.\n autoincrement\nnumbers inserted into a table with 5 columns.\n The script takes about 4\nminutes from a webserver\n Is it a normal time? \n\nHow could reduce this\ntime by a bulce of inserts? \n\nWhen I turn off fsync get much more\nperformance, but it is not ideal in power failure \n\nHARDWARE: 2 disks\n1TB 7200 rpm with software raid 1 (gmirror raid) \n\n8 Gb RAM \n\nCPU Intel\nQuad Core 2.4 Ghz \n\nOS: Freebsd 8.2 \n\nPOSTGRES VERSION: 9.0.4 \n\nMY\nPOSTGRES CONFIG: \n\n listen_addresses = '*'\n wal_level = archive\n fsync =\non\n archive_mode = on\n archive_command = 'exit 0'\n maintenance_work_mem\n= 480MB\n checkpoint_completion_target = 0.5\n effective_cache_size =\n5632MB\n work_mem = 40MB\n wal_buffers = 16MB\n checkpoint_segments = 30\n\nshared_buffers = 1920MB\n max_connections = 40 \n\nMY EXECUTION TIME OF MY\nSCRIPT: \n\n[root@webserver ~]# time php script.php\n\n real 4m54.846s\n user\n0m2.695s\n sys 0m1.775s \n\nMY SCIPT: \n\n\n\nHi, I'm new to postgres and I have the next question.\nI have a php program that makes 100000 inserts in my database. autoincrement numbers inserted into a table with 5 columns. The script takes about 4 minutes from a webserver Is it a normal time?\nHow could reduce this time by a bulce of inserts?\nWhen I turn off fsync get much more performance, but it is not ideal in power failure\n \nHardware: 2 disks 1TB 7200 rpm with software raid 1 (gmirror raid)\n8 Gb RAM\nCPU Intel Quad Core 2.4 Ghz\nOS: Freebsd 8.2\nPostgres version: 9.0.4\n \nMy postgres config:\n listen_addresses = '*' wal_level = archive fsync = on archive_mode = on archive_command = 'exit 0' maintenance_work_mem = 480MB checkpoint_completion_target = 0.5 effective_cache_size = 5632MB work_mem = 40MB wal_buffers = 16MB checkpoint_segments = 30 shared_buffers = 1920MB max_connections = 40\n \nMy execution time of my script:\n[root@webserver ~]# time php script.php real 4m54.846s user 0m2.695s sys 0m1.775s\n \nMy scipt:\n<?php\npg_connect(\"host=host port=port dbname=db user=user password=pass\") or die (\"No me conecto...\"); for ( $var = 1; $var <= 100000 ; $var++ ) { $sql = \"INSERT INTO server (aa, bb, cc, dd, ee) VALUES ('$var','$var','$var','$var','$var')\"; pg_query($sql); } ?>\nmy dd test is:\n#time sh -c \"dd if=/dev/zero of=/tmp/test count=500000 && fsync\" 500000+0 records in 500000+0 records out 256000000 bytes transferred in 2.147917 secs (119185237 bytes/sec) usage: fsync file ... real 0m2.177s user 0m0.188s sys 0m0.876s\n \nThanks, any help will be well recived,",
"msg_date": "Sun, 15 May 2011 19:02:39 -0300",
"msg_from": "Ezequiel Lovelle <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow loop =?UTF-8?Q?inserts=3F?="
},
{
"msg_contents": "Try wrapping all your inserts in a transaction:\n\npg_query('BEGIN');\n// your inserts\npg_query('COMMIT');\n\nThat way you won't have to sync each of those inserts to disk, should\nprovide a huge speedup. Of course this means your 10,000 inserts will be\nall or nothing, but it seems like in this case that should be fine.\n\n-Dan\n\nOn Sun, May 15, 2011 at 3:02 PM, Ezequiel Lovelle\n<[email protected]>wrote:\n\n> Hi, I'm new to postgres and I have the next question.\n>\n> I have a php program that makes 100000 inserts in my database.\n> autoincrement numbers inserted into a table with 5 columns.\n> The script takes about 4 minutes from a webserver\n> Is it a normal time?\n>\n> How could reduce this time by a bulce of inserts?\n>\n> When I turn off fsync get much more performance, but it is not ideal in\n> power failure\n>\n>\n>\n> *Hardware*: 2 disks 1TB 7200 rpm with software raid 1 (gmirror raid)\n>\n> 8 Gb RAM\n>\n> CPU Intel Quad Core 2.4 Ghz\n>\n> *OS*: Freebsd 8.2\n>\n> *Postgres version*: 9.0.4\n>\n>\n>\n> *My postgres config*:\n>\n>\n> listen_addresses = '*'\n> wal_level = archive\n> fsync = on\n> archive_mode = on\n> archive_command = 'exit 0'\n> maintenance_work_mem = 480MB\n> checkpoint_completion_target = 0.5\n> effective_cache_size = 5632MB\n> work_mem = 40MB\n> wal_buffers = 16MB\n> checkpoint_segments = 30\n> shared_buffers = 1920MB\n> max_connections = 40\n>\n>\n>\n> *My execution time of my script*:\n>\n> [root@webserver ~]# time php script.php\n>\n> real 4m54.846s\n> user 0m2.695s\n> sys 0m1.775s\n>\n>\n>\n> *My scipt*:\n>\n> <?php\n>\n> pg_connect(\"host=host port=port dbname=db user=user password=pass\") or die\n> (\"No me conecto...\");\n> for ( $var = 1; $var <= 100000 ; $var++ )\n> {\n> $sql = \"INSERT INTO server (aa, bb, cc, dd, ee) VALUES\n> ('$var','$var','$var','$var','$var')\";\n> pg_query($sql);\n> }\n> ?>\n>\n> *my dd test is*:\n>\n> #time sh -c \"dd if=/dev/zero of=/tmp/test count=500000 && fsync\"\n> 500000+0 records in\n> 500000+0 records out\n> 256000000 bytes transferred in 2.147917 secs (119185237 bytes/sec)\n> usage: fsync file ...\n>\n> real 0m2.177s\n> user 0m0.188s\n> sys 0m0.876s\n>\n>\n>\n> Thanks, any help will be well recived,\n>\n\nTry wrapping all your inserts in a transaction:pg_query('BEGIN');// your insertspg_query('COMMIT');That way you won't have to sync each of those inserts to disk, should provide a huge speedup. Of course this means your 10,000 inserts will be all or nothing, but it seems like in this case that should be fine.\n-DanOn Sun, May 15, 2011 at 3:02 PM, Ezequiel Lovelle <[email protected]> wrote:\n\n\nHi, I'm new to postgres and I have the next question.\nI have a php program that makes 100000 inserts in my database. autoincrement numbers inserted into a table with 5 columns. 
The script takes about 4 minutes from a webserver Is it a normal time?\nHow could reduce this time by a bulce of inserts?\nWhen I turn off fsync get much more performance, but it is not ideal in power failure\n \nHardware: 2 disks 1TB 7200 rpm with software raid 1 (gmirror raid)\n8 Gb RAM\nCPU Intel Quad Core 2.4 Ghz\nOS: Freebsd 8.2\nPostgres version: 9.0.4\n \nMy postgres config:\n listen_addresses = '*' wal_level = archive fsync = on archive_mode = on archive_command = 'exit 0' maintenance_work_mem = 480MB checkpoint_completion_target = 0.5 effective_cache_size = 5632MB\n work_mem = 40MB wal_buffers = 16MB checkpoint_segments = 30 shared_buffers = 1920MB max_connections = 40\n \nMy execution time of my script:\n[root@webserver ~]# time php script.php real 4m54.846s user 0m2.695s sys 0m1.775s\n \nMy scipt:\n<?php\npg_connect(\"host=host port=port dbname=db user=user password=pass\") or die (\"No me conecto...\"); for ( $var = 1; $var <= 100000 ; $var++ ) { $sql = \"INSERT INTO server (aa, bb, cc, dd, ee) VALUES ('$var','$var','$var','$var','$var')\";\n pg_query($sql); } ?>\nmy dd test is:\n#time sh -c \"dd if=/dev/zero of=/tmp/test count=500000 && fsync\" 500000+0 records in 500000+0 records out 256000000 bytes transferred in 2.147917 secs (119185237 bytes/sec) usage: fsync file ...\n real 0m2.177s user 0m0.188s sys 0m0.876s\n \nThanks, any help will be well recived,",
"msg_date": "Sun, 15 May 2011 15:27:31 -0700",
"msg_from": "Dan Birken <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow loop inserts?"
}
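In plain SQL terms, the suggestion is to batch the statements into one transaction so that only one sync is paid per batch; a multi-row VALUES list (not mentioned in the thread, but standard SQL in recent PostgreSQL versions) cuts round trips further. Table and column names are the ones from the original script.

BEGIN;
INSERT INTO server (aa, bb, cc, dd, ee) VALUES ('1','1','1','1','1');
INSERT INTO server (aa, bb, cc, dd, ee) VALUES ('2','2','2','2','2');
-- ... remaining rows ...
COMMIT;

-- alternative: several rows per INSERT statement
INSERT INTO server (aa, bb, cc, dd, ee) VALUES
  ('1','1','1','1','1'),
  ('2','2','2','2','2'),
  ('3','3','3','3','3');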
] |
[
{
"msg_contents": "Dear all,\nI have a query on 3 tables in a database as :-\n\n_*Explain Analyze Output :-*_\n\nexplain anayze select c.clause, s.subject ,s.object , s.verb, \ns.subject_type , s.object_type ,s.doc_id ,s.svo_id from clause2 c, svo2 \ns ,page_content p where c.clause_id=s.clause_id and s.doc_id=c.source_id \nand c.sentence_id=s.sentence_id and s.doc_id=p.crawled_page_id order by \ns.svo_id limit 1000 offset 17929000\n\n\"Limit (cost=21685592.91..21686802.44 rows=1000 width=2624) (actual \ntime=414601.802..414622.920 rows=1000 loops=1)\"\n\" -> Nested Loop (cost=59.77..320659013645.28 rows=265112018116 \nwidth=2624) (actual time=0.422..404902.314 rows=17930000 loops=1)\"\n\" -> Nested Loop (cost=0.00..313889654.42 rows=109882338 \nwidth=2628) (actual time=0.242..174223.789 rows=17736897 loops=1)\"\n\" -> Index Scan using pk_svo_id on svo2 s \n(cost=0.00..33914955.13 rows=26840752 width=2600) (actual \ntime=0.157..14691.039 rows=14238271 loops=1)\"\n\" -> Index Scan using idx_clause2_id on clause2 c \n(cost=0.00..10.36 rows=4 width=44) (actual time=0.007..0.008 rows=1 \nloops=14238271)\"\n\" Index Cond: ((c.source_id = s.doc_id) AND \n(c.clause_id = s.clause_id) AND (c.sentence_id = s.sentence_id))\"\n\" -> Bitmap Heap Scan on page_content p (cost=59.77..2885.18 \nrows=2413 width=8) (actual time=0.007..0.008 rows=1 loops=17736897)\"\n\" Recheck Cond: (p.crawled_page_id = s.doc_id)\"\n\" -> Bitmap Index Scan on idx_crawled_id \n(cost=0.00..59.17 rows=2413 width=0) (actual time=0.005..0.005 rows=1 \nloops=17736897)\"\n\" Index Cond: (p.crawled_page_id = s.doc_id)\"\n\"Total runtime: 414623.634 ms\"\n\n_*My Table & index definitions are as under :-\n\n*_Estimated rows in 3 tables are :-\n\nclause2 10341700\nsvo2 26008000\npage_content 479785\n\nCREATE TABLE clause2\n(\n id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n source_id integer,\n sentence_id integer,\n clause_id integer,\n tense character varying(30),\n clause text,\n CONSTRAINT pk_clause_id PRIMARY KEY (id)\n)WITH ( OIDS=FALSE);\nCREATE INDEX idx_clause2_id ON clause2 USING btree (source_id, \nclause_id, sentence_id);\n\nCREATE TABLE svo2\n(\n svo_id bigint NOT NULL DEFAULT nextval('svo_svo_id_seq'::regclass),\n doc_id integer,\n sentence_id integer,\n clause_id integer,\n negation integer,\n subject character varying(3000),\n verb character varying(3000),\n \"object\" character varying(3000),\n preposition character varying(3000),\n subject_type character varying(3000),\n object_type character varying(3000),\n subject_attribute character varying(3000),\n object_attribute character varying(3000),\n verb_attribute character varying(3000),\n subject_concept character varying(100),\n object_concept character varying(100),\n subject_sense character varying(100),\n object_sense character varying(100),\n subject_chain character varying(5000),\n object_chain character varying(5000),\n sub_type_id integer,\n obj_type_id integer,\n CONSTRAINT pk_svo_id PRIMARY KEY (svo_id)\n)WITH ( OIDS=FALSE);\nCREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id, \nclause_id, sentence_id);\n\nCREATE TABLE page_content\n(\n content_id integer NOT NULL DEFAULT \nnextval('page_content_ogc_fid_seq'::regclass),\n wkb_geometry geometry,\n link_level integer,\n isprocessable integer,\n isvalid integer,\n isanalyzed integer,\n islocked integer,\n content_language character(10),\n url_id integer,\n publishing_date character(40),\n heading character(150),\n category character(150),\n crawled_page_url character(500),\n keywords 
character(500),\n dt_stamp timestamp with time zone,\n \"content\" character varying,\n crawled_page_id bigint,\n CONSTRAINT page_content_pk PRIMARY KEY (content_id),\n CONSTRAINT enforce_dims_wkb_geometry CHECK (st_ndims(wkb_geometry) = 2),\n CONSTRAINT enforce_srid_wkb_geometry CHECK (st_srid(wkb_geometry) = (-1))\n)WITH ( OIDS=FALSE);\nCREATE INDEX idx_crawled_id ON page_content USING btree \n(crawled_page_id);\nCREATE INDEX pgweb_idx ON page_content USING gin \n(to_tsvector('english'::regconfig, content::text));\n\nIf possible, Please let me know if I am something wrong or any alternate \nquery to run it faster.\n\n\nThanks\n\n\n\n\n\nDear all,\nI have a query on 3 tables in a database as :-\n\nExplain Analyze Output :-\n\nexplain anayze select c.clause, s.subject ,s.object , s.verb,\ns.subject_type , s.object_type ,s.doc_id ,s.svo_id from clause2 c, svo2\ns ,page_content p where c.clause_id=s.clause_id and\ns.doc_id=c.source_id and c.sentence_id=s.sentence_id and\ns.doc_id=p.crawled_page_id order by s.svo_id limit 1000 offset 17929000\n\n\"Limit (cost=21685592.91..21686802.44 rows=1000 width=2624) (actual\ntime=414601.802..414622.920 rows=1000 loops=1)\"\n\" -> Nested Loop (cost=59.77..320659013645.28 rows=265112018116\nwidth=2624) (actual time=0.422..404902.314 rows=17930000 loops=1)\"\n\" -> Nested Loop (cost=0.00..313889654.42 rows=109882338\nwidth=2628) (actual time=0.242..174223.789 rows=17736897 loops=1)\"\n\" -> Index Scan using pk_svo_id on svo2 s \n(cost=0.00..33914955.13 rows=26840752 width=2600) (actual\ntime=0.157..14691.039 rows=14238271 loops=1)\"\n\" -> Index Scan using idx_clause2_id on clause2 c \n(cost=0.00..10.36 rows=4 width=44) (actual time=0.007..0.008 rows=1\nloops=14238271)\"\n\" Index Cond: ((c.source_id = s.doc_id) AND\n(c.clause_id = s.clause_id) AND (c.sentence_id = s.sentence_id))\"\n\" -> Bitmap Heap Scan on page_content p \n(cost=59.77..2885.18 rows=2413 width=8) (actual time=0.007..0.008\nrows=1 loops=17736897)\"\n\" Recheck Cond: (p.crawled_page_id = s.doc_id)\"\n\" -> Bitmap Index Scan on idx_crawled_id \n(cost=0.00..59.17 rows=2413 width=0) (actual time=0.005..0.005 rows=1\nloops=17736897)\"\n\" Index Cond: (p.crawled_page_id = s.doc_id)\"\n\"Total runtime: 414623.634 ms\"\n\nMy Table & index definitions are as under :-\n\nEstimated rows in 3 tables are :-\n\nclause2 10341700\nsvo2 26008000\npage_content 479785\n\nCREATE TABLE clause2\n(\n id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n source_id integer,\n sentence_id integer,\n clause_id integer,\n tense character varying(30),\n clause text,\n CONSTRAINT pk_clause_id PRIMARY KEY (id)\n)WITH ( OIDS=FALSE);\nCREATE INDEX idx_clause2_id ON clause2 USING btree (source_id,\nclause_id, sentence_id);\n\nCREATE TABLE svo2\n(\n svo_id bigint NOT NULL DEFAULT nextval('svo_svo_id_seq'::regclass),\n doc_id integer,\n sentence_id integer,\n clause_id integer,\n negation integer,\n subject character varying(3000),\n verb character varying(3000),\n \"object\" character varying(3000),\n preposition character varying(3000),\n subject_type character varying(3000),\n object_type character varying(3000),\n subject_attribute character varying(3000),\n object_attribute character varying(3000),\n verb_attribute character varying(3000),\n subject_concept character varying(100),\n object_concept character varying(100),\n subject_sense character varying(100),\n object_sense character varying(100),\n subject_chain character varying(5000),\n object_chain character varying(5000),\n sub_type_id integer,\n 
obj_type_id integer,\n CONSTRAINT pk_svo_id PRIMARY KEY (svo_id)\n)WITH ( OIDS=FALSE);\nCREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id,\nclause_id, sentence_id);\n\nCREATE TABLE page_content\n(\n content_id integer NOT NULL DEFAULT\nnextval('page_content_ogc_fid_seq'::regclass),\n wkb_geometry geometry,\n link_level integer,\n isprocessable integer,\n isvalid integer,\n isanalyzed integer,\n islocked integer,\n content_language character(10),\n url_id integer,\n publishing_date character(40),\n heading character(150),\n category character(150),\n crawled_page_url character(500),\n keywords character(500),\n dt_stamp timestamp with time zone,\n \"content\" character varying,\n crawled_page_id bigint,\n CONSTRAINT page_content_pk PRIMARY KEY (content_id),\n CONSTRAINT enforce_dims_wkb_geometry CHECK (st_ndims(wkb_geometry) =\n2),\n CONSTRAINT enforce_srid_wkb_geometry CHECK (st_srid(wkb_geometry) =\n(-1))\n)WITH ( OIDS=FALSE);\nCREATE INDEX idx_crawled_id ON page_content USING btree \n(crawled_page_id);\nCREATE INDEX pgweb_idx ON page_content USING gin \n(to_tsvector('english'::regconfig, content::text));\n\nIf possible, Please let me know if I am something wrong or any\nalternate query to run it faster.\n\n\nThanks",
"msg_date": "Mon, 16 May 2011 11:09:57 +0530",
"msg_from": "Adarsh Sharma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why query takes soo much time"
},
{
"msg_contents": "On 05/16/2011 01:39 PM, Adarsh Sharma wrote:\n> Dear all,\n> I have a query on 3 tables in a database as :-\n>\n> _*Explain Analyze Output :-*_\n>\n> explain anayze select c.clause, s.subject ,s.object , s.verb, \n> s.subject_type , s.object_type ,s.doc_id ,s.svo_id from clause2 c, \n> svo2 s ,page_content p where c.clause_id=s.clause_id and \n> s.doc_id=c.source_id and c.sentence_id=s.sentence_id and \n> s.doc_id=p.crawled_page_id order by s.svo_id limit 1000 offset 17929000\n>\n\nUsing limit and offset can be horrifyingly slow for non-trivial queries. \nAre you trying to paginate results? If not, what are you trying to achieve?\n\nIn most (all?) cases, Pg will have to execute the query up to the point \nwhere it's found limit+offset rows, producing and discarding offset rows \nas it goes. Needless to say, that's horrifyingly inefficient.\n\nReformatting your query for readability (to me) as:\n\nEXPLAIN ANALYZE\nSELECT c.clause, s.subject ,s.object , s.verb, s.subject_type, \ns.object_type ,s.doc_id ,s.svo_id\nFROM clause2 c INNER JOIN svo2 s ON (c.clause_id=s.clause_id AND \nc.source_id=s.doc_id AND c.sentence_id=s.sentence_id)\n INNER JOIN page_content p ON (s.doc_id=p.crawled_page_id)\nORDER BY s.svo_id limit 1000 offset 17929000\n\n... I can see that you're joining on \n(c.clause_id,c.source_id,c.sentence_id)=(s.clause_id,s.doc_id,s.sentence_id). \nYou have matching indexes idx_clause2_id and idx_svo2_id_dummy with \nmatching column ordering. Pg is using idx_clause2_id in the join of svo2 \nand clause2, but instead of doing a bitmap index scan using it and \nidx_svo2_id_dummy it's doing a nested loop using idx_clause2_id and \npk_svo_id.\n\nFirst: make sure your stats are up to date by ANALYZE-ing your tables \nand probably increasing the stats collected on the join columns and/or \nincreasing default_statistics_target. If that doesn't help, personally \nI'd play with the random_page_cost and seq_page_cost to see if they \nreflect your machine's actual performance, and to see if you get a more \nfavourable plan. 
If I were experimenting with this I'd also see if \ngiving the query lots of work_mem allowed it to try a different approach \nto the join.\n\n\n> \"Limit (cost=21685592.91..21686802.44 rows=1000 width=2624) (actual \n> time=414601.802..414622.920 rows=1000 loops=1)\"\n> \" -> Nested Loop (cost=59.77..320659013645.28 rows=265112018116 \n> width=2624) (actual time=0.422..404902.314 rows=17930000 loops=1)\"\n> \" -> Nested Loop (cost=0.00..313889654.42 rows=109882338 \n> width=2628) (actual time=0.242..174223.789 rows=17736897 loops=1)\"\n> \" -> Index Scan using pk_svo_id on svo2 s \n> (cost=0.00..33914955.13 rows=26840752 width=2600) (actual \n> time=0.157..14691.039 rows=14238271 loops=1)\"\n> \" -> Index Scan using idx_clause2_id on clause2 c \n> (cost=0.00..10.36 rows=4 width=44) (actual time=0.007..0.008 rows=1 \n> loops=14238271)\"\n> \" Index Cond: ((c.source_id = s.doc_id) AND \n> (c.clause_id = s.clause_id) AND (c.sentence_id = s.sentence_id))\"\n> \" -> Bitmap Heap Scan on page_content p (cost=59.77..2885.18 \n> rows=2413 width=8) (actual time=0.007..0.008 rows=1 loops=17736897)\"\n> \" Recheck Cond: (p.crawled_page_id = s.doc_id)\"\n> \" -> Bitmap Index Scan on idx_crawled_id \n> (cost=0.00..59.17 rows=2413 width=0) (actual time=0.005..0.005 rows=1 \n> loops=17736897)\"\n> \" Index Cond: (p.crawled_page_id = s.doc_id)\"\n> \"Total runtime: 414623.634 ms\"\n>\n> _*My Table & index definitions are as under :-\n>\n> *_Estimated rows in 3 tables are :-\n>\n> clause2 10341700\n> svo2 26008000\n> page_content 479785\n>\n> CREATE TABLE clause2\n> (\n> id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n> source_id integer,\n> sentence_id integer,\n> clause_id integer,\n> tense character varying(30),\n> clause text,\n> CONSTRAINT pk_clause_id PRIMARY KEY (id)\n> )WITH ( OIDS=FALSE);\n> CREATE INDEX idx_clause2_id ON clause2 USING btree (source_id, \n> clause_id, sentence_id);\n>\n> CREATE TABLE svo2\n> (\n> svo_id bigint NOT NULL DEFAULT nextval('svo_svo_id_seq'::regclass),\n> doc_id integer,\n> sentence_id integer,\n> clause_id integer,\n> negation integer,\n> subject character varying(3000),\n> verb character varying(3000),\n> \"object\" character varying(3000),\n> preposition character varying(3000),\n> subject_type character varying(3000),\n> object_type character varying(3000),\n> subject_attribute character varying(3000),\n> object_attribute character varying(3000),\n> verb_attribute character varying(3000),\n> subject_concept character varying(100),\n> object_concept character varying(100),\n> subject_sense character varying(100),\n> object_sense character varying(100),\n> subject_chain character varying(5000),\n> object_chain character varying(5000),\n> sub_type_id integer,\n> obj_type_id integer,\n> CONSTRAINT pk_svo_id PRIMARY KEY (svo_id)\n> )WITH ( OIDS=FALSE);\n> CREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id, \n> clause_id, sentence_id);\n>\n> CREATE TABLE page_content\n> (\n> content_id integer NOT NULL DEFAULT \n> nextval('page_content_ogc_fid_seq'::regclass),\n> wkb_geometry geometry,\n> link_level integer,\n> isprocessable integer,\n> isvalid integer,\n> isanalyzed integer,\n> islocked integer,\n> content_language character(10),\n> url_id integer,\n> publishing_date character(40),\n> heading character(150),\n> category character(150),\n> crawled_page_url character(500),\n> keywords character(500),\n> dt_stamp timestamp with time zone,\n> \"content\" character varying,\n> crawled_page_id bigint,\n> CONSTRAINT page_content_pk PRIMARY KEY 
(content_id),\n> CONSTRAINT enforce_dims_wkb_geometry CHECK (st_ndims(wkb_geometry) = 2),\n> CONSTRAINT enforce_srid_wkb_geometry CHECK (st_srid(wkb_geometry) = \n> (-1))\n> )WITH ( OIDS=FALSE);\n> CREATE INDEX idx_crawled_id ON page_content USING btree \n> (crawled_page_id);\n> CREATE INDEX pgweb_idx ON page_content USING gin \n> (to_tsvector('english'::regconfig, content::text));\n>\n> If possible, Please let me know if I am something wrong or any \n> alternate query to run it faster.\n>\n>\n> Thanks\n\n\n\n\n\n\n\n\n On 05/16/2011 01:39 PM, Adarsh Sharma wrote:\n \n Dear all,\n I have a query on 3 tables in a database as :-\n\nExplain Analyze Output :-\n\n explain anayze select c.clause, s.subject ,s.object , s.verb,\n s.subject_type , s.object_type ,s.doc_id ,s.svo_id from clause2 c,\n svo2\n s ,page_content p where c.clause_id=s.clause_id and\n s.doc_id=c.source_id and c.sentence_id=s.sentence_id and\n s.doc_id=p.crawled_page_id order by s.svo_id limit 1000 offset\n 17929000\n\n\n\n Using limit and offset can be horrifyingly slow for non-trivial\n queries. Are you trying to paginate results? If not, what are you\n trying to achieve?\n\n In most (all?) cases, Pg will have to execute the query up to the\n point where it's found limit+offset rows, producing and discarding\n offset rows as it goes. Needless to say, that's horrifyingly\n inefficient.\n\n Reformatting your query for readability (to me) as:\n\n EXPLAIN ANALYZE\n SELECT c.clause, s.subject ,s.object , s.verb, s.subject_type,\n s.object_type ,s.doc_id ,s.svo_id \n FROM clause2 c INNER JOIN svo2 s ON (c.clause_id=s.clause_id AND\n c.source_id=s.doc_id AND c.sentence_id=s.sentence_id)\n INNER JOIN page_content p ON\n (s.doc_id=p.crawled_page_id)\n ORDER BY s.svo_id limit 1000 offset 17929000\n\n ... I can see that you're joining on\n (c.clause_id,c.source_id,c.sentence_id)=(s.clause_id,s.doc_id,s.sentence_id).\n You have matching indexes idx_clause2_id and idx_svo2_id_dummy with\n matching column ordering. Pg is using idx_clause2_id in the join of\n svo2 and clause2, but instead of doing a bitmap index scan using it\n and idx_svo2_id_dummy it's doing a nested loop using idx_clause2_id\n and pk_svo_id.\n\n First: make sure your stats are up to date by ANALYZE-ing your\n tables and probably increasing the stats collected on the join\n columns and/or increasing default_statistics_target. If that doesn't\n help, personally I'd play with the random_page_cost and\n seq_page_cost to see if they reflect your machine's actual\n performance, and to see if you get a more favourable plan. 
If I were\n experimenting with this I'd also see if giving the query lots of\n work_mem allowed it to try a different approach to the join.\n\n\n\n \"Limit (cost=21685592.91..21686802.44 rows=1000 width=2624)\n (actual\n time=414601.802..414622.920 rows=1000 loops=1)\"\n \" -> Nested Loop (cost=59.77..320659013645.28\n rows=265112018116\n width=2624) (actual time=0.422..404902.314 rows=17930000 loops=1)\"\n \" -> Nested Loop (cost=0.00..313889654.42\n rows=109882338\n width=2628) (actual time=0.242..174223.789 rows=17736897 loops=1)\"\n \" -> Index Scan using pk_svo_id on svo2 s \n (cost=0.00..33914955.13 rows=26840752 width=2600) (actual\n time=0.157..14691.039 rows=14238271 loops=1)\"\n \" -> Index Scan using idx_clause2_id on clause2\n c \n (cost=0.00..10.36 rows=4 width=44) (actual time=0.007..0.008\n rows=1\n loops=14238271)\"\n \" Index Cond: ((c.source_id = s.doc_id) AND\n (c.clause_id = s.clause_id) AND (c.sentence_id = s.sentence_id))\"\n \" -> Bitmap Heap Scan on page_content p \n (cost=59.77..2885.18 rows=2413 width=8) (actual time=0.007..0.008\n rows=1 loops=17736897)\"\n \" Recheck Cond: (p.crawled_page_id = s.doc_id)\"\n \" -> Bitmap Index Scan on idx_crawled_id \n (cost=0.00..59.17 rows=2413 width=0) (actual time=0.005..0.005\n rows=1\n loops=17736897)\"\n \" Index Cond: (p.crawled_page_id = s.doc_id)\"\n \"Total runtime: 414623.634 ms\"\n\nMy Table & index definitions are as under :-\n\nEstimated rows in 3 tables are :-\n\n clause2 10341700\n svo2 26008000\n page_content 479785\n\n CREATE TABLE clause2\n (\n id bigint NOT NULL DEFAULT nextval('clause_id_seq'::regclass),\n source_id integer,\n sentence_id integer,\n clause_id integer,\n tense character varying(30),\n clause text,\n CONSTRAINT pk_clause_id PRIMARY KEY (id)\n )WITH ( OIDS=FALSE);\n CREATE INDEX idx_clause2_id ON clause2 USING btree (source_id,\n clause_id, sentence_id);\n\n CREATE TABLE svo2\n (\n svo_id bigint NOT NULL DEFAULT\n nextval('svo_svo_id_seq'::regclass),\n doc_id integer,\n sentence_id integer,\n clause_id integer,\n negation integer,\n subject character varying(3000),\n verb character varying(3000),\n \"object\" character varying(3000),\n preposition character varying(3000),\n subject_type character varying(3000),\n object_type character varying(3000),\n subject_attribute character varying(3000),\n object_attribute character varying(3000),\n verb_attribute character varying(3000),\n subject_concept character varying(100),\n object_concept character varying(100),\n subject_sense character varying(100),\n object_sense character varying(100),\n subject_chain character varying(5000),\n object_chain character varying(5000),\n sub_type_id integer,\n obj_type_id integer,\n CONSTRAINT pk_svo_id PRIMARY KEY (svo_id)\n )WITH ( OIDS=FALSE);\n CREATE INDEX idx_svo2_id_dummy ON svo2 USING btree (doc_id,\n clause_id, sentence_id);\n\n CREATE TABLE page_content\n (\n content_id integer NOT NULL DEFAULT\n nextval('page_content_ogc_fid_seq'::regclass),\n wkb_geometry geometry,\n link_level integer,\n isprocessable integer,\n isvalid integer,\n isanalyzed integer,\n islocked integer,\n content_language character(10),\n url_id integer,\n publishing_date character(40),\n heading character(150),\n category character(150),\n crawled_page_url character(500),\n keywords character(500),\n dt_stamp timestamp with time zone,\n \"content\" character varying,\n crawled_page_id bigint,\n CONSTRAINT page_content_pk PRIMARY KEY (content_id),\n CONSTRAINT enforce_dims_wkb_geometry CHECK\n (st_ndims(wkb_geometry) =\n 2),\n 
CONSTRAINT enforce_srid_wkb_geometry CHECK\n (st_srid(wkb_geometry) =\n (-1))\n )WITH ( OIDS=FALSE);\n CREATE INDEX idx_crawled_id ON page_content USING btree \n (crawled_page_id);\n CREATE INDEX pgweb_idx ON page_content USING gin \n (to_tsvector('english'::regconfig, content::text));\n\n If possible, Please let me know if I am something wrong or any\n alternate query to run it faster.\n\n\n Thanks",
"msg_date": "Mon, 16 May 2011 18:00:03 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why query takes soo much time"
},
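The suggestions above translate into commands along these lines; the statistics target and the cost/memory values are only illustrative starting points, not recommendations for this particular machine.

ANALYZE clause2;
ANALYZE svo2;
ANALYZE page_content;

-- collect more detailed statistics on the join columns, then re-run ANALYZE
ALTER TABLE svo2 ALTER COLUMN doc_id SET STATISTICS 500;
ALTER TABLE clause2 ALTER COLUMN source_id SET STATISTICS 500;
ANALYZE svo2;
ANALYZE clause2;

-- session-level experiments with planner settings
SET work_mem = '256MB';
SET random_page_cost = 2.0;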
{
"msg_contents": "[big nestloop with a huge number of rows]\n\nYou're in an edge case, and I doubt you'll get things to run much faster: you want the last 1k rows out of an 18M row result set... It will be slow no matter what you do.\n\nWhat the plan is currently doing, is it's going through these 18M rows using a for each loop, until it returns the 1k requested rows. Without the offset, the plan is absolutely correct (and quite fast, I take it). With the enormous offset, it's a different story as you've noted.\n\nAn alternative plan could have been to hash join the tables together, to sort the result set, and to apply the limit/offset on the resulting set. You can probably force the planner to do so by rewriting your statement using a with statement, too:\n\nEXPLAIN ANALYZE\nWITH rows AS (\nSELECT c.clause, s.subject ,s.object , s.verb, s.subject_type, s.object_type ,s.doc_id ,s.svo_id \nFROM clause2 c INNER JOIN svo2 s ON (c.clause_id=s.clause_id AND c.source_id=s.doc_id AND c.sentence_id=s.sentence_id)\n INNER JOIN page_content p ON (s.doc_id=p.crawled_page_id)\n)\nSELECT *\nFROM rows\nORDER BY svo_id limit 1000 offset 17929000\n\n\nI've my doubts that it'll make much of a different, though: you'll still be extracting the last 1k rows out of 18M.\n\nD\n\n[big nestloop with a huge number of rows]You're in an edge case, and I doubt you'll get things to run much faster: you want the last 1k rows out of an 18M row result set... It will be slow no matter what you do.What the plan is currently doing, is it's going through these 18M rows using a for each loop, until it returns the 1k requested rows. Without the offset, the plan is absolutely correct (and quite fast, I take it). With the enormous offset, it's a different story as you've noted.An alternative plan could have been to hash join the tables together, to sort the result set, and to apply the limit/offset on the resulting set. You can probably force the planner to\n do so by rewriting your statement using a with statement, too:EXPLAIN ANALYZEWITH rows AS (SELECT c.clause, s.subject ,s.object , s.verb, s.subject_type, s.object_type ,s.doc_id ,s.svo_id FROM clause2 c INNER JOIN svo2 s ON (c.clause_id=s.clause_id AND c.source_id=s.doc_id AND c.sentence_id=s.sentence_id) INNER JOIN page_content p ON (s.doc_id=p.crawled_page_id))SELECT *FROM\n rowsORDER BY svo_id limit 1000 offset 17929000I've my doubts that it'll make much of a different, though: you'll still be extracting the last 1k rows out of 18M.D",
"msg_date": "Mon, 16 May 2011 04:16:44 -0700 (PDT)",
"msg_from": "Denis de Bernardy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why query takes soo much time"
},
{
"msg_contents": "Denis de Bernardy <[email protected]> writes:\n> An alternative plan could have been to hash join the tables together,\n> to sort the result set, and to apply the limit/offset on the resulting\n> set.\n\nIndeed. I rather wonder why the planner didn't do that to start with.\nThis plan looks to me like it might be suffering from insufficient\nwork_mem to allow use of a hash join. Or possibly the OP changed some\nof the cost_xxx or enable_xxx settings in a misguided attempt to force\nit to use indexes instead. As a rule of thumb, whole-table joins\nprobably ought not be using nestloop plans, and that frequently means\nthat indexes are worthless for them.\n\nBut in any case, as Craig noted, the real elephant in the room is the\nhuge OFFSET value. It seems likely that this query is not standing\nalone but is meant as one of a series that's supposed to provide\npaginated output, and if so the total cost of the series is just going\nto be impossible no matter what. The OP needs to think about using a\ncursor or some such to avoid repeating most of the work each time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 May 2011 10:15:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why query takes soo much time "
}
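The cursor approach would look roughly like this: declare the cursor once for the full ordered join and fetch one page at a time, instead of re-running the query with a growing OFFSET (the page size of 1000 matches the original LIMIT).

BEGIN;
DECLARE svo_cur CURSOR FOR
  SELECT c.clause, s.subject, s.object, s.verb,
         s.subject_type, s.object_type, s.doc_id, s.svo_id
  FROM clause2 c
  JOIN svo2 s ON c.clause_id = s.clause_id
             AND c.source_id = s.doc_id
             AND c.sentence_id = s.sentence_id
  JOIN page_content p ON s.doc_id = p.crawled_page_id
  ORDER BY s.svo_id;
FETCH 1000 FROM svo_cur;  -- repeat this per page
CLOSE svo_cur;
COMMIT;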
] |
[
{
"msg_contents": "I am using Postgres 8.3 and I have an issue very closely related to the one\ndescribed here:\nhttp://archives.postgresql.org/pgsql-general/2005-06/msg00488.php\n\nBasically, I have a VIEW which is a UNION ALL of two tables but when I do a\nselect on the view using a LIMIT, it scans the entire tables and takes\nsignificantly longer than writing out the query with the LIMITs in the\nsub-queries themselves. Is there a solution to get the view to perform like\nthe sub-query version?\n\nThanks,\nDave\n\nI am using Postgres 8.3 and I have an issue very closely related to the one described here:http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php\nBasically, I have a VIEW which is a UNION ALL of two tables but when I do a select on the view using a LIMIT, it scans the entire tables and takes significantly longer than writing out the query with the LIMITs in the sub-queries themselves. Is there a solution to get the view to perform like the sub-query version?\nThanks,Dave",
"msg_date": "Mon, 16 May 2011 12:38:12 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pushing LIMIT into sub-queries of a UNION ALL"
},
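The two query shapes being compared are roughly the following; the table, view and column names are placeholders, since the actual definitions are not shown in the thread.

-- view form: LIMIT applied outside the UNION ALL view, both tables scanned
SELECT * FROM combined_view ORDER BY id LIMIT 10;

-- hand-written form: LIMIT pushed into each branch, then applied again on top
SELECT * FROM (
  (SELECT id, payload FROM t1 ORDER BY id LIMIT 10)
  UNION ALL
  (SELECT id, payload FROM t2 ORDER BY id LIMIT 10)
) AS u
ORDER BY id
LIMIT 10;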
{
"msg_contents": "On Mon, May 16, 2011 at 3:38 PM, Dave Johansen <[email protected]> wrote:\n> I am using Postgres 8.3 and I have an issue very closely related to the one\n> described here:\n> http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php\n>\n> Basically, I have a VIEW which is a UNION ALL of two tables but when I do a\n> select on the view using a LIMIT, it scans the entire tables and takes\n> significantly longer than writing out the query with the LIMITs in the\n> sub-queries themselves. Is there a solution to get the view to perform like\n> the sub-query version?\n\nI believe this is fixed by MergeAppend in 9.1. You might want to try\n9.1beta1 and see if that works better for you.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 23 May 2011 13:21:17 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pushing LIMIT into sub-queries of a UNION ALL"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a quite complex, performance sensitive query in a system with a\nfew (7) joins:\nselect .... from t1 left join t2 .... WHERE id IN (select ....)\n\nFor this query the planner evaluates the IN with a hash semi join last,\nand all the joining is done by hash joins for all rows contained in t1.\n\nHowever when I specify the ids manually (IN (1, 2, 3, 4, 5) the\nplanner first does an index lookup on the primary key column id,\nand subsequently does nested loop joins using an index on t2 - which\ngives way better results.\n\nIs there any way to guide the planner to evaluate the IN condition\nfirst, instead of last?\nWhy is the planner behaving this way? (postgresql 8.4.??)\n\nThank you in advance, Clemens\n\n\nQuery plan with IN(select):\n\nSort (cost=165.77..165.77 rows=2 width=16974) (actual\ntime=13.459..13.460 rows=2 loops=1)\n Sort Key: this_.id\n Sort Method: quicksort Memory: 26kB\n -> Hash Semi Join (cost=123.09..165.76 rows=2 width=16974)\n(actual time=12.741..13.432 rows=2 loops=1)\n Hash Cond: (this_.id = kladdenent0_.id)\n -> Hash Left Join (cost=119.17..160.90 rows=348\nwidth=16974) (actual time=8.765..13.104 rows=342 loops=1)\n Hash Cond: (flugzeug2_.flugzeugtyp_id = flugzeugty3_.id)\n -> Hash Left Join (cost=118.10..155.08 rows=348\nwidth=16454) (actual time=8.724..12.412 rows=342 loops=1)\n Hash Cond: (flugzeug2_.zaehlertyp_id = bmintype4_.id)\n -> Hash Left Join (cost=117.06..152.71 rows=348\nwidth=15934) (actual time=8.660..11.786 rows=342 loops=1)\n Hash Cond: (this_.lehrerid = pilot5_.id)\n -> Hash Left Join (cost=96.66..130.46\nrows=348 width=8912) (actual time=6.395..8.899 rows=342 loops=1)\n Hash Cond: (this_.nachid = flugplatz6_.id)\n -> Hash Left Join\n(cost=93.89..122.90 rows=348 width=8370) (actual time=6.354..8.429\nrows=342 loops=1)\n Hash Cond: (this_.flugzeugid =\nflugzeug2_.id)\n -> Hash Left Join\n(cost=23.17..47.04 rows=348 width=7681) (actual time=1.992..3.374\nrows=342 loops=1)\n Hash Cond: (this_.pilotid\n= pilot7_.id)\n -> Hash Left Join\n(cost=2.78..22.04 rows=348 width=659) (actual time=0.044..0.548\nrows=342 loops=1)\n Hash Cond:\n(this_.vonid = flugplatz8_.id)\n -> Seq Scan on\nstartkladde this_ (cost=0.00..14.48 rows=348 width=117) (actual\ntime=0.004..0.074 rows=342 loops=1)\n -> Hash\n(cost=1.79..1.79 rows=79 width=542) (actual time=0.032..0.032 rows=79\nloops=1)\n -> Seq Scan\non flugplatz flugplatz8_ (cost=0.00..1.79 rows=79 width=542) (actual\ntime=0.003..0.010 rows=79 loops=1)\n -> Hash\n(cost=15.73..15.73 rows=373 width=7022) (actual time=1.938..1.938\nrows=375 loops=1)\n -> Seq Scan on\npilot pilot7_ (cost=0.00..15.73 rows=373 width=7022) (actual\ntime=0.006..0.769 rows=375 loops=1)\n -> Hash (cost=51.43..51.43\nrows=1543 width=689) (actual time=4.351..4.351 rows=1543 loops=1)\n -> Seq Scan on flugzeug\nflugzeug2_ (cost=0.00..51.43 rows=1543 width=689) (actual\ntime=0.006..1.615 rows=1543 loops=1)\n -> Hash (cost=1.79..1.79 rows=79\nwidth=542) (actual time=0.031..0.031 rows=79 loops=1)\n -> Seq Scan on flugplatz\nflugplatz6_ (cost=0.00..1.79 rows=79 width=542) (actual\ntime=0.003..0.011 rows=79 loops=1)\n -> Hash (cost=15.73..15.73 rows=373\nwidth=7022) (actual time=2.236..2.236 rows=375 loops=1)\n -> Seq Scan on pilot pilot5_\n(cost=0.00..15.73 rows=373 width=7022) (actual time=0.005..0.781\nrows=375 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=520)\n(actual time=0.005..0.005 rows=2 loops=1)\n -> Seq Scan on bmintype bmintype4_\n(cost=0.00..1.02 rows=2 width=520) (actual time=0.003..0.004 rows=2\nloops=1)\n 
-> Hash (cost=1.03..1.03 rows=3 width=520) (actual\ntime=0.004..0.004 rows=3 loops=1)\n -> Seq Scan on flugzeugtype flugzeugty3_\n(cost=0.00..1.03 rows=3 width=520) (actual time=0.002..0.002 rows=3\nloops=1)\n -> Hash (cost=3.90..3.90 rows=2 width=4) (actual\ntime=0.239..0.239 rows=2 loops=1)\n -> Limit (cost=0.00..3.88 rows=2 width=4) (actual\ntime=0.202..0.236 rows=2 loops=1)\n -> Index Scan using startkladde_pkey on\nstartkladde kladdenent0_ (cost=0.00..56.24 rows=29 width=4) (actual\ntime=0.200..0.233 rows=2 loops=1)\n Filter: ((status > 0) OR (id = (-1)))\n",
"msg_date": "Tue, 17 May 2011 00:30:22 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "hash semi join caused by \"IN (select ...)\""
},
{
"msg_contents": "Clemens Eisserer <[email protected]> writes:\n> I have a quite complex, performance sensitive query in a system with a\n> few (7) joins:\n> select .... from t1 left join t2 .... WHERE id IN (select ....)\n\nDoes it work as expected with one less join? If so, try increasing\njoin_collapse_limit ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 May 2011 19:22:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash semi join caused by \"IN (select ...)\" "
},
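Trying that is a one-line session change, for example:

SHOW join_collapse_limit;     -- 8 by default
SET join_collapse_limit = 12;
-- then re-run EXPLAIN ANALYZE on the original query to compare plans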
{
"msg_contents": "On Mon, May 16, 2011 at 3:30 PM, Clemens Eisserer <[email protected]>wrote:\n\n> Hi,\n>\n> I have a quite complex, performance sensitive query in a system with a\n> few (7) joins:\n> select .... from t1 left join t2 .... WHERE id IN (select ....)\n>\n> For this query the planner evaluates the IN with a hash semi join last,\n> and all the joining is done by hash joins for all rows contained in t1.\n>\n> However when I specify the ids manually (IN (1, 2, 3, 4, 5) the\n> planner first does an index lookup on the primary key column id,\n> and subsequently does nested loop joins using an index on t2 - which\n> gives way better results.\n>\n> Is there any way to guide the planner to evaluate the IN condition\n> first, instead of last?\n> Why is the planner behaving this way? (postgresql 8.4.??)\n>\n> Thank you in advance, Clemens\n>\n>\n> Query plan with IN(select):\n>\n> Sort (cost=165.77..165.77 rows=2 width=16974) (actual\n> time=13.459..13.460 rows=2 loops=1)\n> Sort Key: this_.id\n> Sort Method: quicksort Memory: 26kB\n> -> Hash Semi Join (cost=123.09..165.76 rows=2 width=16974)\n> (actual time=12.741..13.432 rows=2 loops=1)\n> Hash Cond: (this_.id = kladdenent0_.id)\n> -> Hash Left Join (cost=119.17..160.90 rows=348\n> width=16974) (actual time=8.765..13.104 rows=342 loops=1)\n> Hash Cond: (flugzeug2_.flugzeugtyp_id = flugzeugty3_.id)\n> -> Hash Left Join (cost=118.10..155.08 rows=348\n> width=16454) (actual time=8.724..12.412 rows=342 loops=1)\n> Hash Cond: (flugzeug2_.zaehlertyp_id = bmintype4_.id)\n> -> Hash Left Join (cost=117.06..152.71 rows=348\n> width=15934) (actual time=8.660..11.786 rows=342 loops=1)\n> Hash Cond: (this_.lehrerid = pilot5_.id)\n> -> Hash Left Join (cost=96.66..130.46\n> rows=348 width=8912) (actual time=6.395..8.899 rows=342 loops=1)\n> Hash Cond: (this_.nachid = flugplatz6_.id)\n> -> Hash Left Join\n> (cost=93.89..122.90 rows=348 width=8370) (actual time=6.354..8.429\n> rows=342 loops=1)\n> Hash Cond: (this_.flugzeugid =\n> flugzeug2_.id)\n> -> Hash Left Join\n> (cost=23.17..47.04 rows=348 width=7681) (actual time=1.992..3.374\n> rows=342 loops=1)\n> Hash Cond: (this_.pilotid\n> = pilot7_.id)\n> -> Hash Left Join\n> (cost=2.78..22.04 rows=348 width=659) (actual time=0.044..0.548\n> rows=342 loops=1)\n> Hash Cond:\n> (this_.vonid = flugplatz8_.id)\n> -> Seq Scan on\n> startkladde this_ (cost=0.00..14.48 rows=348 width=117) (actual\n> time=0.004..0.074 rows=342 loops=1)\n> -> Hash\n> (cost=1.79..1.79 rows=79 width=542) (actual time=0.032..0.032 rows=79\n> loops=1)\n> -> Seq Scan\n> on flugplatz flugplatz8_ (cost=0.00..1.79 rows=79 width=542) (actual\n> time=0.003..0.010 rows=79 loops=1)\n> -> Hash\n> (cost=15.73..15.73 rows=373 width=7022) (actual time=1.938..1.938\n> rows=375 loops=1)\n> -> Seq Scan on\n> pilot pilot7_ (cost=0.00..15.73 rows=373 width=7022) (actual\n> time=0.006..0.769 rows=375 loops=1)\n> -> Hash (cost=51.43..51.43\n> rows=1543 width=689) (actual time=4.351..4.351 rows=1543 loops=1)\n> -> Seq Scan on flugzeug\n> flugzeug2_ (cost=0.00..51.43 rows=1543 width=689) (actual\n> time=0.006..1.615 rows=1543 loops=1)\n> -> Hash (cost=1.79..1.79 rows=79\n> width=542) (actual time=0.031..0.031 rows=79 loops=1)\n> -> Seq Scan on flugplatz\n> flugplatz6_ (cost=0.00..1.79 rows=79 width=542) (actual\n> time=0.003..0.011 rows=79 loops=1)\n> -> Hash (cost=15.73..15.73 rows=373\n> width=7022) (actual time=2.236..2.236 rows=375 loops=1)\n> -> Seq Scan on pilot pilot5_\n> (cost=0.00..15.73 rows=373 width=7022) (actual time=0.005..0.781\n> rows=375 
loops=1)\n> -> Hash (cost=1.02..1.02 rows=2 width=520)\n> (actual time=0.005..0.005 rows=2 loops=1)\n> -> Seq Scan on bmintype bmintype4_\n> (cost=0.00..1.02 rows=2 width=520) (actual time=0.003..0.004 rows=2\n> loops=1)\n> -> Hash (cost=1.03..1.03 rows=3 width=520) (actual\n> time=0.004..0.004 rows=3 loops=1)\n> -> Seq Scan on flugzeugtype flugzeugty3_\n> (cost=0.00..1.03 rows=3 width=520) (actual time=0.002..0.002 rows=3\n> loops=1)\n> -> Hash (cost=3.90..3.90 rows=2 width=4) (actual\n> time=0.239..0.239 rows=2 loops=1)\n> -> Limit (cost=0.00..3.88 rows=2 width=4) (actual\n> time=0.202..0.236 rows=2 loops=1)\n> -> Index Scan using startkladde_pkey on\n> startkladde kladdenent0_ (cost=0.00..56.24 rows=29 width=4) (actual\n> time=0.200..0.233 rows=2 loops=1)\n> Filter: ((status > 0) OR (id = (-1)))\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIn some cases, I've seen improved results when replacing the IN (...) with =\nANY(ARRAY(...)).\nDave\n\nOn Mon, May 16, 2011 at 3:30 PM, Clemens Eisserer <[email protected]> wrote:\nHi,\n\nI have a quite complex, performance sensitive query in a system with a\nfew (7) joins:\nselect .... from t1 left join t2 .... WHERE id IN (select ....)\n\nFor this query the planner evaluates the IN with a hash semi join last,\nand all the joining is done by hash joins for all rows contained in t1.\n\nHowever when I specify the ids manually (IN (1, 2, 3, 4, 5) the\nplanner first does an index lookup on the primary key column id,\nand subsequently does nested loop joins using an index on t2 - which\ngives way better results.\n\nIs there any way to guide the planner to evaluate the IN condition\nfirst, instead of last?\nWhy is the planner behaving this way? 
(postgresql 8.4.??)\n\nThank you in advance, Clemens\n\n\nQuery plan with IN(select):\n\nSort (cost=165.77..165.77 rows=2 width=16974) (actual\ntime=13.459..13.460 rows=2 loops=1)\n Sort Key: this_.id\n Sort Method: quicksort Memory: 26kB\n -> Hash Semi Join (cost=123.09..165.76 rows=2 width=16974)\n(actual time=12.741..13.432 rows=2 loops=1)\n Hash Cond: (this_.id = kladdenent0_.id)\n -> Hash Left Join (cost=119.17..160.90 rows=348\nwidth=16974) (actual time=8.765..13.104 rows=342 loops=1)\n Hash Cond: (flugzeug2_.flugzeugtyp_id = flugzeugty3_.id)\n -> Hash Left Join (cost=118.10..155.08 rows=348\nwidth=16454) (actual time=8.724..12.412 rows=342 loops=1)\n Hash Cond: (flugzeug2_.zaehlertyp_id = bmintype4_.id)\n -> Hash Left Join (cost=117.06..152.71 rows=348\nwidth=15934) (actual time=8.660..11.786 rows=342 loops=1)\n Hash Cond: (this_.lehrerid = pilot5_.id)\n -> Hash Left Join (cost=96.66..130.46\nrows=348 width=8912) (actual time=6.395..8.899 rows=342 loops=1)\n Hash Cond: (this_.nachid = flugplatz6_.id)\n -> Hash Left Join\n(cost=93.89..122.90 rows=348 width=8370) (actual time=6.354..8.429\nrows=342 loops=1)\n Hash Cond: (this_.flugzeugid =\nflugzeug2_.id)\n -> Hash Left Join\n(cost=23.17..47.04 rows=348 width=7681) (actual time=1.992..3.374\nrows=342 loops=1)\n Hash Cond: (this_.pilotid\n= pilot7_.id)\n -> Hash Left Join\n(cost=2.78..22.04 rows=348 width=659) (actual time=0.044..0.548\nrows=342 loops=1)\n Hash Cond:\n(this_.vonid = flugplatz8_.id)\n -> Seq Scan on\nstartkladde this_ (cost=0.00..14.48 rows=348 width=117) (actual\ntime=0.004..0.074 rows=342 loops=1)\n -> Hash\n(cost=1.79..1.79 rows=79 width=542) (actual time=0.032..0.032 rows=79\nloops=1)\n -> Seq Scan\non flugplatz flugplatz8_ (cost=0.00..1.79 rows=79 width=542) (actual\ntime=0.003..0.010 rows=79 loops=1)\n -> Hash\n(cost=15.73..15.73 rows=373 width=7022) (actual time=1.938..1.938\nrows=375 loops=1)\n -> Seq Scan on\npilot pilot7_ (cost=0.00..15.73 rows=373 width=7022) (actual\ntime=0.006..0.769 rows=375 loops=1)\n -> Hash (cost=51.43..51.43\nrows=1543 width=689) (actual time=4.351..4.351 rows=1543 loops=1)\n -> Seq Scan on flugzeug\nflugzeug2_ (cost=0.00..51.43 rows=1543 width=689) (actual\ntime=0.006..1.615 rows=1543 loops=1)\n -> Hash (cost=1.79..1.79 rows=79\nwidth=542) (actual time=0.031..0.031 rows=79 loops=1)\n -> Seq Scan on flugplatz\nflugplatz6_ (cost=0.00..1.79 rows=79 width=542) (actual\ntime=0.003..0.011 rows=79 loops=1)\n -> Hash (cost=15.73..15.73 rows=373\nwidth=7022) (actual time=2.236..2.236 rows=375 loops=1)\n -> Seq Scan on pilot pilot5_\n(cost=0.00..15.73 rows=373 width=7022) (actual time=0.005..0.781\nrows=375 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=520)\n(actual time=0.005..0.005 rows=2 loops=1)\n -> Seq Scan on bmintype bmintype4_\n(cost=0.00..1.02 rows=2 width=520) (actual time=0.003..0.004 rows=2\nloops=1)\n -> Hash (cost=1.03..1.03 rows=3 width=520) (actual\ntime=0.004..0.004 rows=3 loops=1)\n -> Seq Scan on flugzeugtype flugzeugty3_\n(cost=0.00..1.03 rows=3 width=520) (actual time=0.002..0.002 rows=3\nloops=1)\n -> Hash (cost=3.90..3.90 rows=2 width=4) (actual\ntime=0.239..0.239 rows=2 loops=1)\n -> Limit (cost=0.00..3.88 rows=2 width=4) (actual\ntime=0.202..0.236 rows=2 loops=1)\n -> Index Scan using startkladde_pkey on\nstartkladde kladdenent0_ (cost=0.00..56.24 rows=29 width=4) (actual\ntime=0.200..0.233 rows=2 loops=1)\n Filter: ((status > 0) OR (id = (-1)))\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nIn some cases, I've seen improved results when replacing the IN (...) with = ANY(ARRAY(...)).Dave",
"msg_date": "Mon, 16 May 2011 19:44:20 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash semi join caused by \"IN (select ...)\""
},
{
"msg_contents": "Hi,\n\n>> select .... from t1 left join t2 .... WHERE id IN (select ....)\n>\n> Does it work as expected with one less join? If so, try increasing\n> join_collapse_limit ...\n\nThat did the trick - thanks a lot. I only had to increase\njoin_collapse_limit a bit and now get an almost perfect plan.\nInstead of hash-joining all the data, the planner generates\nnested-loop-joins with index only on the few rows I fetch.\n\nUsing = ANY(array(select... )) also seems to work, I wonder which one\nworks better. Does ANY(ARRAY(...)) force the optimizer to plan the\nsubquery seperated from the main query?\n\nThanks, Clemens\n",
"msg_date": "Tue, 17 May 2011 09:38:55 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hash semi join caused by \"IN (select ...)\""
},
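A minimal sketch of the two query shapes compared in this exchange, using hypothetical tables t1/t2 and a hypothetical status column (the thread's real schema is much wider, so treat the names as placeholders):

-- Session setting; pick a value larger than the number of joins in the query.
SET join_collapse_limit = 12;

-- Semi-join form: the subquery is flattened into the join tree, which is why
-- raising join_collapse_limit changed the plan in this thread.
SELECT a.*
FROM t1 a
LEFT JOIN t2 b ON b.t1_id = a.id
WHERE a.id IN (SELECT id FROM t1 WHERE status > 0 LIMIT 2);

-- Array form: the uncorrelated subquery runs once as an InitPlan and its result
-- is handed to the outer query as a constant array, much like a literal IN list.
SELECT a.*
FROM t1 a
LEFT JOIN t2 b ON b.t1_id = a.id
WHERE a.id = ANY (ARRAY(SELECT id FROM t1 WHERE status > 0 LIMIT 2));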
{
"msg_contents": "Hi,\n\nDoes anybody know why the planner treats \"= ANY(ARRAY(select ...))\"\ndifferently than \"IN(select ...)\"?\nWhich one is preferable, when I already have a lot of joins?\n\nThanks, Clemens\n\n2011/5/17 Clemens Eisserer <[email protected]>:\n> Hi,\n>\n>>> select .... from t1 left join t2 .... WHERE id IN (select ....)\n>>\n>> Does it work as expected with one less join? If so, try increasing\n>> join_collapse_limit ...\n>\n> That did the trick - thanks a lot. I only had to increase\n> join_collapse_limit a bit and now get an almost perfect plan.\n> Instead of hash-joining all the data, the planner generates\n> nested-loop-joins with index only on the few rows I fetch.\n>\n> Using = ANY(array(select... )) also seems to work, I wonder which one\n> works better. Does ANY(ARRAY(...)) force the optimizer to plan the\n> subquery seperated from the main query?\n>\n> Thanks, Clemens\n>\n",
"msg_date": "Wed, 18 May 2011 10:46:00 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hash semi join caused by \"IN (select ...)\""
},
{
"msg_contents": "On Wed, May 18, 2011 at 1:46 AM, Clemens Eisserer <[email protected]>wrote:\n\n> Hi,\n>\n> Does anybody know why the planner treats \"= ANY(ARRAY(select ...))\"\n> differently than \"IN(select ...)\"?\n> Which one is preferable, when I already have a lot of joins?\n>\n> Thanks, Clemens\n>\n> 2011/5/17 Clemens Eisserer <[email protected]>:\n> > Hi,\n> >\n> >>> select .... from t1 left join t2 .... WHERE id IN (select ....)\n> >>\n> >> Does it work as expected with one less join? If so, try increasing\n> >> join_collapse_limit ...\n> >\n> > That did the trick - thanks a lot. I only had to increase\n> > join_collapse_limit a bit and now get an almost perfect plan.\n> > Instead of hash-joining all the data, the planner generates\n> > nested-loop-joins with index only on the few rows I fetch.\n> >\n> > Using = ANY(array(select... )) also seems to work, I wonder which one\n> > works better. Does ANY(ARRAY(...)) force the optimizer to plan the\n> > subquery seperated from the main query?\n> >\n> > Thanks, Clemens\n> >\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nI'm just a user so I don't have definitive knowledge of this, but my\nexperience seems to indicate that the = ANY(ARRAY(SELECT ...)) does the\nselect and turns it into an array and then uses that in the where clause in\na manner similar to a hard coded list of values, like IN (1, 2, 3, ...). In\ntheory, the planner could do the same sort of things with the IN (SELECT\n...) but my experience seems to indicate that in some cases it decides not\nto use an index that it could.\n\nOne specific example I know of is that at least in PostgreSQL 8.3, a view\nwith a UNION/UNION ALL will push the = ANY(ARRAY(SELECT ...)) down into the\ntwo sub-queries, but the IN (SELECT ...) will be applied after the UNION\nALL.\n\nDave\n\nOn Wed, May 18, 2011 at 1:46 AM, Clemens Eisserer <[email protected]> wrote:\nHi,\n\nDoes anybody know why the planner treats \"= ANY(ARRAY(select ...))\"\ndifferently than \"IN(select ...)\"?\nWhich one is preferable, when I already have a lot of joins?\n\nThanks, Clemens\n\n2011/5/17 Clemens Eisserer <[email protected]>:\n> Hi,\n>\n>>> select .... from t1 left join t2 .... WHERE id IN (select ....)\n>>\n>> Does it work as expected with one less join? If so, try increasing\n>> join_collapse_limit ...\n>\n> That did the trick - thanks a lot. I only had to increase\n> join_collapse_limit a bit and now get an almost perfect plan.\n> Instead of hash-joining all the data, the planner generates\n> nested-loop-joins with index only on the few rows I fetch.\n>\n> Using = ANY(array(select... )) also seems to work, I wonder which one\n> works better. Does ANY(ARRAY(...)) force the optimizer to plan the\n> subquery seperated from the main query?\n>\n> Thanks, Clemens\n>\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nI'm just a user so I don't have definitive knowledge of this, but my experience seems to indicate that the = ANY(ARRAY(SELECT ...)) does the select and turns it into an array and then uses that in the where clause in a manner similar to a hard coded list of values, like IN (1, 2, 3, ...). In theory, the planner could do the same sort of things with the IN (SELECT ...) 
but my experience seems to indicate that in some cases it decides not to use an index that it could.\nOne specific example I know of is that at least in PostgreSQL 8.3, a view with a UNION/UNION ALL will push the = ANY(ARRAY(SELECT ...)) down into the two sub-queries, but the IN (SELECT ...) will be applied after the UNION ALL.\nDave",
"msg_date": "Wed, 18 May 2011 07:00:50 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash semi join caused by \"IN (select ...)\""
},
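A sketch of the UNION ALL pattern described in the last paragraph above, with made-up view and table names; whether the array form actually gets pushed into each branch depends on the version and the plan, so this is illustrative rather than definitive:

CREATE VIEW events_all AS
    SELECT id, payload FROM events_2010
    UNION ALL
    SELECT id, payload FROM events_2011;

-- Written with IN (SELECT ...): per the observation above, on 8.3 the filter
-- may only be applied after the Append of both branches.
SELECT * FROM events_all
WHERE id IN (SELECT event_id FROM interesting_events);

-- Written with = ANY(ARRAY(SELECT ...)): the subquery is evaluated once and the
-- resulting constant array can be applied inside each branch, where an index
-- on id can be used.
SELECT * FROM events_all
WHERE id = ANY (ARRAY(SELECT event_id FROM interesting_events));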
{
"msg_contents": "\n\nOn 5/17/11 12:38 AM, \"Clemens Eisserer\" <[email protected]> wrote:\n\n>Hi,\n>\n>>> select .... from t1 left join t2 .... WHERE id IN (select ....)\n>>\n>> Does it work as expected with one less join? If so, try increasing\n>> join_collapse_limit ...\n>\n>That did the trick - thanks a lot. I only had to increase\n>join_collapse_limit a bit and now get an almost perfect plan.\n>Instead of hash-joining all the data, the planner generates\n>nested-loop-joins with index only on the few rows I fetch.\n>\n>Using = ANY(array(select... )) also seems to work, I wonder which one\n>works better. Does ANY(ARRAY(...)) force the optimizer to plan the\n>subquery seperated from the main query?\n\n\nI'm not sure exactly what happens with ANY(ARRAY()).\n\nI am fairly confident that the planner simply transforms an IN(select ...)\nto a join, since they are equivalent.\n\nBecause \"foo IN (select ...)\" is just a join, it counts towards\njoin_collapse_limit.\n\n\n>\n>Thanks, Clemens\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 18 May 2011 10:41:40 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hash semi join caused by \"IN (select ...)\""
}
] |
[
{
"msg_contents": "Hi, guys.\n\n\nI have following environment configuration\n\n- Postgres 8.4.7 with following postresql.conf settings modified:\n\n listen_addresses = '*'\n\n max_connections = 100\n\n\n shared_buffers = 2048MB\n\n max_prepared_transactions = 100\n\n wal_buffers = 1024kB\n\n\n checkpoint_segments = 64\n\n checkpoint_completion_target = 0.8\n\n\n log_checkpoints = on\n\n\n- Two databases. Let's call them db_1 and db_2\n\n- J2EE application server that performs inserts into databases defined\nabove. (distribution transactions are used).\n\n- All constraints and indexes are on.\n\n- JMeter that acts as HTTP client and sends requests to server causing it to\ninsert rows. (case of new users registration)\n\n\nAfter running scenario scenario described above (with 10 concurrent threads)\nI have observed following behavior:\n\n\nFor the first time everything is fine and J2EE server handles about 700\nrequests/sec (about 2500 inserts into several tables per second). But after\nsome amount of time I observe performance degradation. In general it looks\nlike the following:\n\n\nTotal number of requests passed; Requests per second;\n\n382000; 768;\n\n546000; 765;\n\n580000; 723;\n\n650000; 700;\n\n671000; 656;\n\n700000; 628;\n\n\nCheckpoint logging gives me the following:\n\n2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers\n(24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;\nwrite=89.196 s, sync=0.029 s, total=89.242 s\n\n2011-05-17 18:57:47 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 18:59:02 NOVST LOG: checkpoint complete: wrote 83747 buffers\n(31.9%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=75.001 s, sync=0.043 s, total=75.061 s\n\n2011-05-17 18:59:29 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:00:30 NOVST LOG: checkpoint complete: wrote 97341 buffers\n(37.1%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=60.413 s, sync=0.050 s, total=60.479 s\n\n2011-05-17 19:00:55 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:01:48 NOVST LOG: checkpoint complete: wrote 110149 buffers\n(42.0%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=52.285 s, sync=0.072 s, total=52.379 s\n\n2011-05-17 19:02:11 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:02:58 NOVST LOG: checkpoint complete: wrote 120003 buffers\n(45.8%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=46.766 s, sync=0.082 s, total=46.864 s\n\n2011-05-17 19:03:20 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:04:18 NOVST LOG: checkpoint complete: wrote 122296 buffers\n(46.7%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=57.795 s, sync=0.054 s, total=57.867 s\n\n2011-05-17 19:04:38 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:05:34 NOVST LOG: checkpoint complete: wrote 128165 buffers\n(48.9%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=55.061 s, sync=0.087 s, total=55.188 s\n\n2011-05-17 19:05:53 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:06:51 NOVST LOG: checkpoint complete: wrote 138508 buffers\n(52.8%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=57.919 s, sync=0.106 s, total=58.068 s\n\n2011-05-17 19:07:08 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:08:21 NOVST LOG: checkpoint complete: wrote 132485 buffers\n(50.5%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=72.949 s, sync=0.081 s, total=73.047 s\n\n2011-05-17 
19:08:40 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:09:48 NOVST LOG: checkpoint complete: wrote 139542 buffers\n(53.2%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=68.193 s, sync=0.107 s, total=68.319 s\n\n2011-05-17 19:10:06 NOVST LOG: checkpoint starting: xlog\n\n2011-05-17 19:11:31 NOVST LOG: checkpoint complete: wrote 137657 buffers\n(52.5%); 0 transaction log file(s) added, 0 removed, 64 recycled;\nwrite=84.575 s, sync=0.047 s, total=84.640 s\n\n\nAlso I observed more heavy IO from iostat utility.\n\n\nSo my questions are:\n\n1. How does database size affect insert performance?\n\n2. Why does number of written buffers increase when database size grows?\n\n3. How can I further analyze this problem?\n\n-- \nBest regards.\n",
"msg_date": "Tue, 17 May 2011 16:45:32 +0400",
"msg_from": "Andrey Vorobiev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance degradation of inserts when database size grows"
},
{
"msg_contents": "\n\n>1. How does database size affect insert performance?\n>2. Why does number of written buffers increase when database size grows?\n\nIt might be related to indexes. Indexes size affect insert performance.\n\n>3. How can I further analyze this problem?\n\n\nTry without indexes?\n",
"msg_date": "Sun, 22 May 2011 07:38:20 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size grows"
},
{
"msg_contents": "On 05/17/2011 07:45 AM, Andrey Vorobiev wrote:\n\n> 2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog\n> 2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers\n> (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;\n> write=89.196 s, sync=0.029 s, total=89.242 s\n\nIncrease your checkpoint_segments. If you see \"checkpoint starting: \nxlog\" instead of \"checkpoint starting: time\", you don't have enough \ncheckpoint segments to handle your writes. Checkpoints *will* degrade \nyour throughput.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Mon, 23 May 2011 08:30:34 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size\n grows"
},
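One way to confirm this without trawling the logs is to compare timed and requested checkpoints in pg_stat_bgwriter (the view and these columns exist in 8.3/8.4):

SELECT checkpoints_timed,   -- started because checkpoint_timeout expired
       checkpoints_req,     -- started because checkpoint_segments filled up (or CHECKPOINT was run)
       buffers_checkpoint,
       buffers_backend
FROM pg_stat_bgwriter;

-- If checkpoints_req dominates, checkpoint_segments is too low for the write rate.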
{
"msg_contents": "Dne 23.5.2011 15:30, Shaun Thomas napsal(a):\n> On 05/17/2011 07:45 AM, Andrey Vorobiev wrote:\n> \n>> 2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog\n>> 2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers\n>> (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;\n>> write=89.196 s, sync=0.029 s, total=89.242 s\n> \n> Increase your checkpoint_segments. If you see \"checkpoint starting:\n> xlog\" instead of \"checkpoint starting: time\", you don't have enough\n> checkpoint segments to handle your writes. Checkpoints *will* degrade\n> your throughput.\n> \n\nReally? He already has 64 checkpoint segments, which is about 1GB of\nxlog data. The real problem is that the amount of buffers to write is\nconstantly growing. At the beginning there's 62861 buffers (500MB) and\nat the end there's 137657 buffers (1GB).\n\nIMHO increasing the number of checkpoint segments would make this\ndisruption even worse.\n\nWhat I don't understand is that the checkpoint time does not increase\nwith the amount of data to write. Writing the\n\n 62861 buffers total=89.242 s ( 5 MB/s)\n 83747 buffers total=75.061 s ( 9 MB/s)\n 97341 buffers total=60.479 s (13 MB/s)\n 110149 buffers total=52.379 s (17 MB/s)\n 120003 buffers total=46.864 s (20 MB/s)\n 122296 buffers total=57.867 s (17 MB/s)\n 128165 buffers total=55.188 s (18 MB/s)\n 138508 buffers total=58.068 s (19 MB/s)\n 132485 buffers total=73.047 s (14 MB/s)\n 139542 buffers total=68.319 s (16 MB/s)\n 137657 buffers total=84.640 s (13 MB/s)\n\nMaybe this depends on what sections of the files are modified\n(contiguous vs. not contiguous), but I doubt it.\n\nIn 9.1 there's a feature that spreads checkpoint writes, but with 8.4\nthat's not possible. I think think this might be tuned using background\nwriter, just make it more aggressive.\n\n- bgwriter_delay (decrease)\n- bgwriter_lru_maxpages (increase)\n- bgwriter_lru_multiplier (increase)\n\nregards\nTomas\n",
"msg_date": "Mon, 23 May 2011 20:46:39 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size\n grows"
},
{
"msg_contents": "On Mon, May 23, 2011 at 2:46 PM, Tomas Vondra <[email protected]> wrote:\n> Really? He already has 64 checkpoint segments, which is about 1GB of\n> xlog data. The real problem is that the amount of buffers to write is\n> constantly growing. At the beginning there's 62861 buffers (500MB) and\n> at the end there's 137657 buffers (1GB).\n>\n> IMHO increasing the number of checkpoint segments would make this\n> disruption even worse.\n\nMaybe - but it would also make the checkpoints less frequent, which\nmight be a good thing.\n\n> In 9.1 there's a feature that spreads checkpoint writes, but with 8.4\n> that's not possible.\n\nWhat feature are you referring to here? Checkpoint spreading was\nadded in 8.3, IIRC.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 23 May 2011 15:05:44 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size grows"
},
{
"msg_contents": "On Tue, May 17, 2011 at 8:45 AM, Andrey Vorobiev\n<[email protected]> wrote:\n> 1. How does database size affect insert performance?\n\nWell, if your database gets bigger, then your indexes will become\ndeeper, requiring more time to update. But I'm not sure that's your\nproblem here.\n\n> 2. Why does number of written buffers increase when database size grows?\n\nIt normally doesn't.\n\n> 3. How can I further analyze this problem?\n\nAre you actually inserting more user data into these tables, so that\nthey have more and more rows as time goes by, or are the data files\ngetting larger out of proportion to the amount of useful data in them?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 23 May 2011 15:08:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size grows"
},
{
"msg_contents": "Dne 23.5.2011 21:05, Robert Haas napsal(a):\n> On Mon, May 23, 2011 at 2:46 PM, Tomas Vondra <[email protected]> wrote:\n>> Really? He already has 64 checkpoint segments, which is about 1GB of\n>> xlog data. The real problem is that the amount of buffers to write is\n>> constantly growing. At the beginning there's 62861 buffers (500MB) and\n>> at the end there's 137657 buffers (1GB).\n>>\n>> IMHO increasing the number of checkpoint segments would make this\n>> disruption even worse.\n> \n> Maybe - but it would also make the checkpoints less frequent, which\n> might be a good thing.\n> \n>> In 9.1 there's a feature that spreads checkpoint writes, but with 8.4\n>> that's not possible.\n> \n> What feature are you referring to here? Checkpoint spreading was\n> added in 8.3, IIRC.\n\nYou're absolutely right, I was talking about\n\n checkpoint_completion_target\n\nand it was added in 8.3. Your memory is obviously better than mine.\n\nTomas\n",
"msg_date": "Mon, 23 May 2011 21:17:33 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size\n grows"
},
{
"msg_contents": "As near as I can tell from your test configuration description, you have\nJMeter --> J2EE --> Postgres.\nHave you ruled out the J2EE server as the problem? This problem may not be\nthe database.\nI would take a look at your app server's health and look for any potential\nissues there before spending too much time on the database. Perhaps there\nare memory issues or excessive garbage collection on the app server?\n\nTerry\n\n\nOn Tue, May 17, 2011 at 5:45 AM, Andrey Vorobiev <\[email protected]> wrote:\n\n> Hi, guys.\n>\n>\n> I have following environment configuration\n>\n> - Postgres 8.4.7 with following postresql.conf settings modified:\n>\n> listen_addresses = '*'\n>\n> max_connections = 100\n>\n>\n> shared_buffers = 2048MB\n>\n> max_prepared_transactions = 100\n>\n> wal_buffers = 1024kB\n>\n>\n> checkpoint_segments = 64\n>\n> checkpoint_completion_target = 0.8\n>\n>\n> log_checkpoints = on\n>\n>\n> - Two databases. Let's call them db_1 and db_2\n>\n> - J2EE application server that performs inserts into databases defined\n> above. (distribution transactions are used).\n>\n> - All constraints and indexes are on.\n>\n> - JMeter that acts as HTTP client and sends requests to server causing it\n> to insert rows. (case of new users registration)\n>\n>\n> After running scenario scenario described above (with 10 concurrent\n> threads) I have observed following behavior:\n>\n>\n> For the first time everything is fine and J2EE server handles about 700\n> requests/sec (about 2500 inserts into several tables per second). But after\n> some amount of time I observe performance degradation. In general it looks\n> like the following:\n>\n>\n> Total number of requests passed; Requests per second;\n>\n> 382000; 768;\n>\n> 546000; 765;\n>\n> 580000; 723;\n>\n> 650000; 700;\n>\n> 671000; 656;\n>\n> 700000; 628;\n>\n>\n> Checkpoint logging gives me the following:\n>\n> 2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers\n> (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled;\n> write=89.196 s, sync=0.029 s, total=89.242 s\n>\n> 2011-05-17 18:57:47 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 18:59:02 NOVST LOG: checkpoint complete: wrote 83747 buffers\n> (31.9%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=75.001 s, sync=0.043 s, total=75.061 s\n>\n> 2011-05-17 18:59:29 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:00:30 NOVST LOG: checkpoint complete: wrote 97341 buffers\n> (37.1%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=60.413 s, sync=0.050 s, total=60.479 s\n>\n> 2011-05-17 19:00:55 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:01:48 NOVST LOG: checkpoint complete: wrote 110149 buffers\n> (42.0%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=52.285 s, sync=0.072 s, total=52.379 s\n>\n> 2011-05-17 19:02:11 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:02:58 NOVST LOG: checkpoint complete: wrote 120003 buffers\n> (45.8%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=46.766 s, sync=0.082 s, total=46.864 s\n>\n> 2011-05-17 19:03:20 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:04:18 NOVST LOG: checkpoint complete: wrote 122296 buffers\n> (46.7%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=57.795 s, sync=0.054 s, total=57.867 s\n>\n> 2011-05-17 19:04:38 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:05:34 NOVST LOG: 
checkpoint complete: wrote 128165 buffers\n> (48.9%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=55.061 s, sync=0.087 s, total=55.188 s\n>\n> 2011-05-17 19:05:53 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:06:51 NOVST LOG: checkpoint complete: wrote 138508 buffers\n> (52.8%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=57.919 s, sync=0.106 s, total=58.068 s\n>\n> 2011-05-17 19:07:08 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:08:21 NOVST LOG: checkpoint complete: wrote 132485 buffers\n> (50.5%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=72.949 s, sync=0.081 s, total=73.047 s\n>\n> 2011-05-17 19:08:40 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:09:48 NOVST LOG: checkpoint complete: wrote 139542 buffers\n> (53.2%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=68.193 s, sync=0.107 s, total=68.319 s\n>\n> 2011-05-17 19:10:06 NOVST LOG: checkpoint starting: xlog\n>\n> 2011-05-17 19:11:31 NOVST LOG: checkpoint complete: wrote 137657 buffers\n> (52.5%); 0 transaction log file(s) added, 0 removed, 64 recycled;\n> write=84.575 s, sync=0.047 s, total=84.640 s\n>\n>\n> Also I observed more heavy IO from iostat utility.\n>\n>\n> So my questions are:\n>\n> 1. How does database size affect insert performance?\n>\n> 2. Why does number of written buffers increase when database size grows?\n>\n> 3. How can I further analyze this problem?\n>\n> --\n> Best regards.\n>\n\nAs near as I can tell from your test configuration description, you have JMeter --> J2EE --> Postgres.Have you ruled out the J2EE server as the problem? This problem may not be the database.I would take a look at your app server's health and look for any potential issues there before spending too much time on the database. Perhaps there are memory issues or excessive garbage collection on the app server?\nTerryOn Tue, May 17, 2011 at 5:45 AM, Andrey Vorobiev <[email protected]> wrote:\n\nHi, guys.\n\nI have following environment configuration\n- Postgres 8.4.7 with following postresql.conf settings modified:\n listen_addresses = '*' \n max_connections = 100\n\n shared_buffers = 2048MB \n max_prepared_transactions = 100\n wal_buffers = 1024kB\n\n checkpoint_segments = 64\n checkpoint_completion_target = 0.8\n\n log_checkpoints = on\n- Two databases. Let's call them db_1 and db_2\n- J2EE application server that performs inserts into databases defined above. (distribution transactions are used).\n- All constraints and indexes are on.\n- JMeter that acts as HTTP client and sends requests to server causing it to insert rows. (case of new users registration)\n\nAfter running scenario scenario described above (with 10 concurrent threads) I have observed following behavior:\n\nFor the first time everything is fine and J2EE server handles about 700 requests/sec (about 2500 inserts into several tables per second). But after some amount of time I observe performance degradation. 
In general it looks like the following:\n\nTotal number of requests passed; Requests per second;\n382000; 768;\n546000; 765;\n580000; 723;\n650000; 700;\n671000; 656;\n700000; 628;\n\nCheckpoint logging gives me the following:\n2011-05-17 18:55:51 NOVST LOG: checkpoint starting: xlog\n2011-05-17 18:57:20 NOVST LOG: checkpoint complete: wrote 62861 buffers (24.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=89.196 s, sync=0.029 s, total=89.242 s\n2011-05-17 18:57:47 NOVST LOG: checkpoint starting: xlog\n2011-05-17 18:59:02 NOVST LOG: checkpoint complete: wrote 83747 buffers (31.9%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=75.001 s, sync=0.043 s, total=75.061 s\n2011-05-17 18:59:29 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:00:30 NOVST LOG: checkpoint complete: wrote 97341 buffers (37.1%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=60.413 s, sync=0.050 s, total=60.479 s\n2011-05-17 19:00:55 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:01:48 NOVST LOG: checkpoint complete: wrote 110149 buffers (42.0%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=52.285 s, sync=0.072 s, total=52.379 s\n2011-05-17 19:02:11 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:02:58 NOVST LOG: checkpoint complete: wrote 120003 buffers (45.8%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=46.766 s, sync=0.082 s, total=46.864 s\n2011-05-17 19:03:20 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:04:18 NOVST LOG: checkpoint complete: wrote 122296 buffers (46.7%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=57.795 s, sync=0.054 s, total=57.867 s\n2011-05-17 19:04:38 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:05:34 NOVST LOG: checkpoint complete: wrote 128165 buffers (48.9%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=55.061 s, sync=0.087 s, total=55.188 s\n2011-05-17 19:05:53 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:06:51 NOVST LOG: checkpoint complete: wrote 138508 buffers (52.8%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=57.919 s, sync=0.106 s, total=58.068 s\n2011-05-17 19:07:08 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:08:21 NOVST LOG: checkpoint complete: wrote 132485 buffers (50.5%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=72.949 s, sync=0.081 s, total=73.047 s\n2011-05-17 19:08:40 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:09:48 NOVST LOG: checkpoint complete: wrote 139542 buffers (53.2%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=68.193 s, sync=0.107 s, total=68.319 s\n2011-05-17 19:10:06 NOVST LOG: checkpoint starting: xlog\n2011-05-17 19:11:31 NOVST LOG: checkpoint complete: wrote 137657 buffers (52.5%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=84.575 s, sync=0.047 s, total=84.640 s\n\nAlso I observed more heavy IO from iostat utility.\n\nSo my questions are:\n1. How does database size affect insert performance?\n2. Why does number of written buffers increase when database size grows?\n3. How can I further analyze this problem?-- Best regards.",
"msg_date": "Mon, 23 May 2011 22:24:28 -0700",
"msg_from": "Terry Schmitt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size grows"
},
{
"msg_contents": "Dne 24.5.2011 07:24, Terry Schmitt napsal(a):\n> As near as I can tell from your test configuration description, you have\n> JMeter --> J2EE --> Postgres.\n> Have you ruled out the J2EE server as the problem? This problem may not\n> be the database.\n> I would take a look at your app server's health and look for any\n> potential issues there before spending too much time on the database.\n> Perhaps there are memory issues or excessive garbage collection on the\n> app server?\n\nIt might be part of the problem, yes, but it's just a guess. We need to\nse some iostat / iotop / vmstat output to confirm that.\n\nThe probable cause here is that the indexes grow with the table, get\ndeeper, so when you insert a new row you need to modify more and more\npages. That's why the number of buffers grows over time and the\ncheckpoint takes more and more time (the average write speed is about 15\nMB/s - not sure if that's good or bad performance).\n\nThe question is whether this is influenced by other activity (Java GC or\nsomething)\n\nI see three ways to improve the checkpoint performance:\n\n 1) set checkpoint_completion_target = 0.9 or something like that\n (this should spread the checkpoint, but it also increases the\n amount of checkpoint segments to keep)\n\n 2) make the background writer more aggressive (tune the bgwriter_*\n variables), this is similar to (1)\n\n 3) improve the write performance (not sure how random the I/O is in\n this case, but a decent controller with a cache might help)\n\nand then two ways to decrease the index overhead / amount of modified\nbuffers\n\n 1) keep only the really necessary indexes (remove duplicate, indexes,\n remove indexes where another index already performs reasonably,\n etc.)\n\n 2) partition the table (so that only indexes on the current partition\n will be modified, and those will be more shallow)\n\nTomas\n",
"msg_date": "Tue, 24 May 2011 21:20:52 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size\n grows"
},
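For the point about keeping only the really necessary indexes, a rough starting query; the idx_scan = 0 filter is arbitrary, and the statistics need to have been collected over a representative workload before the output means anything:

-- Indexes that are written on every insert but never read: candidates for removal.
SELECT schemaname,
       relname,
       indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;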
{
"msg_contents": "On 05/17/2011 08:45 AM, Andrey Vorobiev wrote:\n> 1. How does database size affect insert performance?\n\nAs indexes grow, it becomes slower to insert into them. It has to \nnavigate all of the indexes on the table to figure out where to add the \nnew row into there, and that navigation time goes up when tables are \nlarger. Try using the queries at \nhttp://wiki.postgresql.org/wiki/Disk_Usage to quantify how big your \nindexes are. Many people are absolutely shocked to see how large they \nbecome. And some database designers throw indexes onto every possible \ncolumn combination as if they were free.\n\n> 2. Why does number of written buffers increase when database size grows?\n>\n\nAs indexes grow, the changes needed to insert more rows get spread over \nmore blocks too.\n\nYou can install pg_buffercache and analyze what's actually getting dirty \nin the buffer cache to directly measure what's changing here. If you \nlook at http://projects.2ndquadrant.com/talks and download the \"Inside \nthe PostgreSQL Buffer Cache\" talk and its \"Sample Queries\" set, those \nwill give you some examples of how to summarize everything.\n\n> 3. How can I further analyze this problem?\n\nThis may not actually be a problem in that it's something you can \nresolve. If you assume that you can insert into a giant table at the \nsame speed you can insert into a trivial one, you'll have to adjust your \nthinking because that's never going to be true. Removing some indexes \nmay help; reducing the columns in the index is also good; and some \npeople end up partitioning their data specifically to help with this \nsituation. It's also possible to regain some of the earlier performance \nusing things like REINDEX and CLUSTER.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 24 May 2011 17:46:06 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation of inserts when database size\n grows"
}
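A cut-down version of the kind of sizing query the wiki page above provides (it has more complete ones), good enough to see how much of each table's footprint is index and TOAST data:

SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS heap_size,
       pg_size_pretty(pg_total_relation_size(c.oid) - pg_relation_size(c.oid)) AS index_and_toast_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;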
] |
[
{
"msg_contents": "Hello,\n\nHow fillfactor impact performance of query?\n\nI have two cases,\nOne is a operational table, for each insert it have an update, this table\nmust have aprox. 1.000 insert per second and 1.000 update per second (same\ninserted row)\nIs necessary to change the fill factor?\n\n\nThe other case is a table that have few insert (statistics) but thousands or\nmillons of update, In this case the fillfactor is not necessary to change?\n\nThanks!\n\n\n\n",
"msg_date": "Tue, 17 May 2011 08:59:50 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fill Factor"
},
{
"msg_contents": "On Tue, May 17, 2011 at 6:59 AM, Anibal David Acosta <[email protected]> wrote:\n> Hello,\n>\n> How fillfactor impact performance of query?\n\nFillfactor tells the db how much empty space to leave in the database\nwhen creating a table and inserting rows. If you set it to 90% then\n10% of the space in the table will be available for updates can be\nused for the new data. Combined with pg 8.3+ HOT updates, this free\nspace allows updates to non-indexed fields to be close to \"free\"\nbecause now the index for that row needs no updates if the new datum\nfor that row first in the same 8k pg block.\n\n> I have two cases,\n> One is a operational table, for each insert it have an update, this table\n> must have aprox. 1.000 insert per second and 1.000 update per second (same\n> inserted row)\n\nIf you could combine the insert and update into one action that would\nbe preferable really.\n\n> Is necessary to change the fill factor?\n\nNot necessary but possibly better for performance.\n\n> The other case is a table that have few insert (statistics) but thousands or\n> millons of update, In this case the fillfactor is not necessary to change?\n\nActually updates are the time that a lower fill factor is most useful.\n But it doesn't need to be really low. anything below 95% is likely\nmore than you need. But it really depends on your access patterns. If\nyou're updating 20% of a table at a time, then a fillfactor of ~80%\nmight be the best fit. Whether or not the updates fit under the HOT\numbrella, lowering fill factor enough to allow the updates to happen\nin place without adding pages to the table files is usually a win.\n",
"msg_date": "Tue, 17 May 2011 07:23:41 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fill Factor"
},
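A short sketch of how the fillfactor is actually set, on a hypothetical table; note that lowering it on an existing table only affects pages written from then on, so existing data has to be rewritten (CLUSTER, VACUUM FULL) before the new setting applies everywhere:

-- At creation time: leave 10% of each heap page free for HOT updates.
CREATE TABLE counters (
    id     integer PRIMARY KEY,
    value  integer NOT NULL,
    tstamp timestamp with time zone
) WITH (fillfactor = 90);

-- Later adjustment; only newly written pages honour the new value.
ALTER TABLE counters SET (fillfactor = 80);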
{
"msg_contents": "2011/5/17 Scott Marlowe <[email protected]>:\n> On Tue, May 17, 2011 at 6:59 AM, Anibal David Acosta <[email protected]> wrote:\n>> Hello,\n>>\n>> How fillfactor impact performance of query?\n>\n> Fillfactor tells the db how much empty space to leave in the database\n> when creating a table and inserting rows. If you set it to 90% then\n> 10% of the space in the table will be available for updates can be\n> used for the new data. Combined with pg 8.3+ HOT updates, this free\n> space allows updates to non-indexed fields to be close to \"free\"\n> because now the index for that row needs no updates if the new datum\n> for that row first in the same 8k pg block.\n>\n>> I have two cases,\n>> One is a operational table, for each insert it have an update, this table\n>> must have aprox. 1.000 insert per second and 1.000 update per second (same\n>> inserted row)\n>\n> If you could combine the insert and update into one action that would\n> be preferable really.\n>\n>> Is necessary to change the fill factor?\n>\n> Not necessary but possibly better for performance.\n\ndepend of deletes ratio too... without delete I am unsure a reduced\nfillfactor will have a good impact on the long term.\n\n>\n>> The other case is a table that have few insert (statistics) but thousands or\n>> millons of update, In this case the fillfactor is not necessary to change?\n>\n> Actually updates are the time that a lower fill factor is most useful.\n> But it doesn't need to be really low. anything below 95% is likely\n> more than you need. But it really depends on your access patterns. If\n> you're updating 20% of a table at a time, then a fillfactor of ~80%\n> might be the best fit. Whether or not the updates fit under the HOT\n> umbrella, lowering fill factor enough to allow the updates to happen\n> in place without adding pages to the table files is usually a win.\n\nAnd one possible way to help adjust the fillfactor is to control the\nrelation size.\nSometimes reducing fillfactor a lot (60-80%) is good, the table is\nstuck at some XX MB and page are well reused.\n\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Tue, 17 May 2011 15:52:33 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fill Factor"
}
] |
[
{
"msg_contents": "I am using Postgres 8.3 and I have an issue very closely related to the one\ndescribed here:\nhttp://archives.postgresql.org/pgsql-general/2005-06/msg00488.php\n\nBasically, I have a VIEW which is a UNION ALL of two tables but when I do a\nselect on the view using a LIMIT, it scans the entire tables and takes\nsignificantly longer than writing out the query with the LIMITs in the\nsub-queries themselves. Is there a solution to get the view to perform like\nthe sub-query version?\n\nThanks,\nDave\n\nI am using Postgres 8.3 and I have an issue very closely related to the one described here:http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php\nBasically, I have a VIEW which is a UNION ALL of two tables but when\n I do a select on the view using a LIMIT, it scans the entire tables and\n takes significantly longer than writing out the query with the LIMITs \nin the sub-queries themselves. Is there a solution to get the view to \nperform like the sub-query version?\nThanks,Dave",
"msg_date": "Tue, 17 May 2011 08:31:04 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Pushing LIMIT into sub-queries of a UNION ALL?"
},
{
"msg_contents": "Dave,\n\nhow often do you want to repeat that posting? What about instead\nreplying to the answers you got so far?\n\nCheers\n\nrobert\n\n\nOn Tue, May 17, 2011 at 5:31 PM, Dave Johansen <[email protected]> wrote:\n> I am using Postgres 8.3 and I have an issue very closely related to the one\n> described here:\n> http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php\n>\n> Basically, I have a VIEW which is a UNION ALL of two tables but when I do a\n> select on the view using a LIMIT, it scans the entire tables and takes\n> significantly longer than writing out the query with the LIMITs in the\n> sub-queries themselves. Is there a solution to get the view to perform like\n> the sub-query version?\n>\n> Thanks,\n> Dave\n\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Sun, 22 May 2011 19:34:23 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pushing LIMIT into sub-queries of a UNION ALL?"
},
{
"msg_contents": "I apologize for the multiple posts. I sent this email right after joining\nthe list and after it hadn't shown up a day later I figured that it had been\nlost or something and sent the other one.\n\nAlso, the database I posted this about does not have internet access and so\nI'm working on getting it moved over to a machine that does or getting it\nthe info onto a machine where I can post the pertinent information about the\nschema and explain outputs.\n\nThanks,\nDave\n\n--\nDave Johansen\nphone: (520) 302-4526\n\n\nOn Sun, May 22, 2011 at 10:34 AM, Robert Klemme\n<[email protected]>wrote:\n\n> Dave,\n>\n> how often do you want to repeat that posting? What about instead\n> replying to the answers you got so far?\n>\n> Cheers\n>\n> robert\n>\n>\n> On Tue, May 17, 2011 at 5:31 PM, Dave Johansen <[email protected]>\n> wrote:\n> > I am using Postgres 8.3 and I have an issue very closely related to the\n> one\n> > described here:\n> > http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php\n> >\n> > Basically, I have a VIEW which is a UNION ALL of two tables but when I do\n> a\n> > select on the view using a LIMIT, it scans the entire tables and takes\n> > significantly longer than writing out the query with the LIMITs in the\n> > sub-queries themselves. Is there a solution to get the view to perform\n> like\n> > the sub-query version?\n> >\n> > Thanks,\n> > Dave\n>\n>\n>\n> --\n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n>\n\nI apologize for the multiple posts. I sent this email right after joining the list and after it hadn't shown up a day later I figured that it had been lost or something and sent the other one.Also, the database I posted this about does not have internet access and so I'm working on getting it moved over to a machine that does or getting it the info onto a machine where I can post the pertinent information about the schema and explain outputs.\nThanks,Dave--Dave Johansenphone: (520) 302-4526\nOn Sun, May 22, 2011 at 10:34 AM, Robert Klemme <[email protected]> wrote:\nDave,\n\nhow often do you want to repeat that posting? What about instead\nreplying to the answers you got so far?\n\nCheers\n\nrobert\n\n\nOn Tue, May 17, 2011 at 5:31 PM, Dave Johansen <[email protected]> wrote:\n> I am using Postgres 8.3 and I have an issue very closely related to the one\n> described here:\n> http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php\n>\n> Basically, I have a VIEW which is a UNION ALL of two tables but when I do a\n> select on the view using a LIMIT, it scans the entire tables and takes\n> significantly longer than writing out the query with the LIMITs in the\n> sub-queries themselves. Is there a solution to get the view to perform like\n> the sub-query version?\n>\n> Thanks,\n> Dave\n\n\n\n--\nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/",
"msg_date": "Mon, 23 May 2011 08:54:52 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pushing LIMIT into sub-queries of a UNION ALL?"
},
{
"msg_contents": "On Mon, May 23, 2011 at 5:54 PM, Dave Johansen <[email protected]> wrote:\n> I apologize for the multiple posts. I sent this email right after joining\n> the list and after it hadn't shown up a day later I figured that it had been\n> lost or something and sent the other one.\n\nSorry for the nitpicking but I even see _three_ instances of this\nposting (first on May 18th).\n\n> Also, the database I posted this about does not have internet access and so\n> I'm working on getting it moved over to a machine that does or getting it\n> the info onto a machine where I can post the pertinent information about the\n> schema and explain outputs.\n\nGreat!\n\nCheers\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 23 May 2011 21:47:18 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pushing LIMIT into sub-queries of a UNION ALL?"
},
{
"msg_contents": "On 5/23/11 8:54 AM, Dave Johansen wrote:\n> I apologize for the multiple posts. I sent this email right after joining\n> the list and after it hadn't shown up a day later I figured that it had been\n> lost or something and sent the other one.\n\nList moderation took a holiday while all of us were at pgCon.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Mon, 23 May 2011 14:31:12 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pushing LIMIT into sub-queries of a UNION ALL?"
}
] |
[
{
"msg_contents": "Hi - Linux newbie here, and more of a developer than pgsql SysAdmin...\n\nWhen trying to follow some performance tuning suggestions by Robert\nHaas here:\nhttp://www.linux.com/learn/tutorials/394523-configuring-postgresql-for-pretty-good-performance\n\nThis is with PgSql 9.0.3 running on the Amazon EC2 on a Ubuntu 10.10\nbox (EBS instance with an EBS raid0 array).\n\nI've run into a problem where increasing shared_buffers past say 10MB\ncauses a \"UTC FATAL\" error on startup (this is with it set to 50MB,\nand I only have max_connections set to 50 as I'm using connection\npooling):\n\n2011-05-17 16:53:38 UTC FATAL: could not create shared memory\nsegment: Invalid argument\n2011-05-17 16:53:38 UTC DETAIL: Failed system call was\nshmget(key=5432001, size=56934400, 03600).\n2011-05-17 16:53:38 UTC HINT: This error usually means that\nPostgreSQL's request for a shared memory segment exceeded your\nkernel's SHMMAX parameter. You can either reduce the request size or\nreconfigure the kernel with larger SHMMAX. To reduce the request size\n(currently 56934400 bytes), reduce PostgreSQL's shared_buffers\nparameter (currently 6400) and/or its max_connections parameter\n(currently 54).\n If the request size is already small, it's possible that it is\nless than your kernel's SHMMIN parameter, in which case raising the\nrequest size or reconfiguring SHMMIN is called for.\n The PostgreSQL documentation contains more information about\nshared memory configuration.\n\n\nHere is the result of free -t -m:\n total used free shared buffers\ncached\nMem: 7468 7425 43 0 7\n7030\n-/+ buffers/cache: 387 7081\nSwap: 0 0 0\nTotal: 7468 7425 43\n\nand vmstat:\n r b swpd free buff cache si so bi bo in cs us\nsy id wa\n 0 0 0 50812 7432 7192808 0 0 5 50 5 5 1\n0 98 0\n\nYou can see that 7081MB are used by the cache. In other examples of\ntop / free / vmstat I haven't seen anyone else with such a large\namount of ram in the cache column. Could that be what's keeping the\npostgresql service from starting after adjusting shared_buffers? If\nso, do I need to go to the Linux group to figure out how to let\nPostgresql acquire that memory space on startup?\n\nThanks very much,\nSTA\n",
"msg_date": "Tue, 17 May 2011 10:00:12 -0700 (PDT)",
"msg_from": "STA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Modifying shared_buffers causes despite plenty of ram"
},
{
"msg_contents": "On May 17, 1:00 pm, STA <[email protected]> wrote:\n> Hi - Linux newbie here, and more of a developer than pgsql SysAdmin...\n>\n\nSorry... title should be \"... causes error on startup\" or something. I\naccidentally clicked submit before I'd decided what the title should\nbe. :)\n",
"msg_date": "Tue, 17 May 2011 10:02:36 -0700 (PDT)",
"msg_from": "STA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Modifying shared_buffers causes despite plenty of ram"
},
{
"msg_contents": "On May 17, 1:02 pm, STA <[email protected]> wrote:\n> On May 17, 1:00 pm, STA <[email protected]> wrote:\n>\n> > Hi - Linux newbie here, and more of a developer than pgsql SysAdmin...\n>\n> Sorry... title should be \"... causes error on startup\" or something. I\n> accidentally clicked submit before I'd decided what the title should\n> be. :)\n\nAnswered my own question.... found the settings for kernel resources\nin linux here:\nhttp://developer.postgresql.org/pgdocs/postgres/kernel-resources.html\n",
"msg_date": "Tue, 17 May 2011 10:44:28 -0700 (PDT)",
"msg_from": "STA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Modifying shared_buffers causes despite plenty of ram"
}
] |
[
{
"msg_contents": "I am using Postgres 8.3.3 and I have a VIEW which is a UNION ALL of two\ntables but when I do a select on the view using a LIMIT, it scans the entire\ntables and takes significantly longer than writing out the query with the\nLIMITs in the sub-queries themselves. Is there a solution to get the view to\nperform like the query with the LIMIT explicitly placed in the sub-queries?\n\nI noticed a similar question in this post\nhttp://archives.postgresql.org/pgsql-general/2005-06/msg00488.php but I\nwasn't able to find an answer.\n\nThanks,\nDave\n\nI am using Postgres 8.3.3 and I have a VIEW which is a UNION ALL of two tables but when\n I do a select on the view using a LIMIT, it scans the entire tables and\n takes significantly longer than writing out the query with the LIMITs \nin the sub-queries themselves. Is there a solution to get the view to \nperform like the query with the LIMIT explicitly placed in the sub-queries?I noticed a similar question in this post http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php but I wasn't able to find an answer.\nThanks,Dave",
"msg_date": "Wed, 18 May 2011 08:26:02 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIMIT and UNION ALL"
},
{
"msg_contents": "On Wed, May 18, 2011 at 5:26 PM, Dave Johansen <[email protected]> wrote:\n> I am using Postgres 8.3.3 and I have a VIEW which is a UNION ALL of two\n> tables but when I do a select on the view using a LIMIT, it scans the entire\n> tables and takes significantly longer than writing out the query with the\n> LIMITs in the sub-queries themselves. Is there a solution to get the view to\n> perform like the query with the LIMIT explicitly placed in the sub-queries?\n\nCan you show DDL and queries?\n\nThe query with the LIMIT on the subqueries and the one with the LIMIT\non the overall query are not semantically equivalent. Since you can\nhave an ORDER BY before the LIMIT on the query with the limit on the\nview the database must have all the rows before it can apply the\nordering and properly determine the limit. Although it might be\npossible to determine under particular circumstances that only one of\nthe tables needs to be queried or tables need only be queried\npartially I deem that quite complex. I do not know whether Postgres\ncan do such optimizations but for that we would certainly need to see\nthe concrete example (including constraint and indexes).\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Wed, 18 May 2011 17:54:47 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT and UNION ALL"
},
{
"msg_contents": "Hello\n\n\n2011/5/18 Dave Johansen <[email protected]>:\n> I am using Postgres 8.3.3 and I have a VIEW which is a UNION ALL of two\n> tables but when I do a select on the view using a LIMIT, it scans the entire\n> tables and takes significantly longer than writing out the query with the\n> LIMITs in the sub-queries themselves. Is there a solution to get the view to\n> perform like the query with the LIMIT explicitly placed in the sub-queries?\n>\n> I noticed a similar question in this post\n> http://archives.postgresql.org/pgsql-general/2005-06/msg00488.php but I\n> wasn't able to find an answer.\n\nmaybe\n\nSELECT *\n FROM (SELECT * FROM tab1 LIMIT n) s1\nUNION ALL\n SELECT *\n FROM (SELECT * FROM tab2 LIMIT n) s2\nLIMIT n\n\nRegards\n\nPavel Stehule\n\n>\n> Thanks,\n> Dave\n",
"msg_date": "Wed, 18 May 2011 17:59:47 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT and UNION ALL"
},
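One detail worth adding to that sketch: when the outer query has an ORDER BY, each branch needs the same ORDER BY before its LIMIT, otherwise the wrong n rows can be kept; with an index on the sort column each branch can also stop after n rows. With hypothetical tables tab1/tab2 sorted on a tlocal column:

SELECT *
FROM (
    SELECT * FROM (SELECT * FROM tab1 ORDER BY tlocal DESC LIMIT 10) s1
    UNION ALL
    SELECT * FROM (SELECT * FROM tab2 ORDER BY tlocal DESC LIMIT 10) s2
) u
ORDER BY tlocal DESC
LIMIT 10;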
{
"msg_contents": "On Wed, May 18, 2011 at 8:54 AM, Robert Klemme\n<[email protected]>wrote:\n\n> On Wed, May 18, 2011 at 5:26 PM, Dave Johansen <[email protected]>\n> wrote:\n> > I am using Postgres 8.3.3 and I have a VIEW which is a UNION ALL of two\n> > tables but when I do a select on the view using a LIMIT, it scans the\n> entire\n> > tables and takes significantly longer than writing out the query with the\n> > LIMITs in the sub-queries themselves. Is there a solution to get the view\n> to\n> > perform like the query with the LIMIT explicitly placed in the\n> sub-queries?\n>\n> Can you show DDL and queries?\n>\n> The query with the LIMIT on the subqueries and the one with the LIMIT\n> on the overall query are not semantically equivalent. Since you can\n> have an ORDER BY before the LIMIT on the query with the limit on the\n> view the database must have all the rows before it can apply the\n> ordering and properly determine the limit. Although it might be\n> possible to determine under particular circumstances that only one of\n> the tables needs to be queried or tables need only be queried\n> partially I deem that quite complex. I do not know whether Postgres\n> can do such optimizations but for that we would certainly need to see\n> the concrete example (including constraint and indexes).\n>\n> Kind regards\n>\n> robert\n>\n> --\n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n>\n\nYes, there is an order by an index involved. Here's a simplified version of\nthe schema and queries that demonstrates the same behaviour.\n\n Table \"public.message1\"\n Column | Type |\nModifiers\n--------+--------------------------+--------------------------------------------------------\n rid | integer | not null default\nnextval('message1_rid_seq'::regclass)\n data | integer |\n tlocal | timestamp with time zone |\nIndexes:\n \"message1_pkey\" PRIMARY KEY, btree (rid)\nReferenced by:\n TABLE \"parsed1\" CONSTRAINT \"parsed1_msgid_fkey\" FOREIGN KEY (msgid)\nREFERENCES message1(rid)\n\n Table \"public.parsed1\"\n Column | Type |\nModifiers\n--------+--------------------------+--------------------------------------------------------\n rid | integer | not null default\nnextval('parsed1_rid_seq'::regclass)\n msgid | integer |\n data | integer |\n tlocal | timestamp with time zone |\nIndexes:\n \"parsed1_pkey\" PRIMARY KEY, btree (rid)\nForeign-key constraints:\n \"parsed1_msgid_fkey\" FOREIGN KEY (msgid) REFERENCES message1(rid) ON\nDELETE CASCADE\n\nFor this example, message2 has the same structure/definition and message1\nand parsed2 has the same structure/definition as parsed1.\n\n View \"public.parsed_all\"\n Column | Type | Modifiers\n--------+--------------------------+-----------\n rid | integer |\n msgid | integer |\n data | integer |\n tlocal | timestamp with time zone |\nView definition:\n SELECT parsed1.rid, parsed1.msgid, parsed1.data, parsed1.tlocal\n FROM parsed1\nUNION ALL\n SELECT parsed2.rid, parsed2.msgid, parsed2.data, parsed2.tlocal\n FROM parsed2;\n\n\n\n\nSlow version using the view:\n\nEXPLAIN ANALYZE SELECT * FROM parsed_all ORDER BY tlocal DESC LIMIT 10;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=74985.28..74985.31 rows=10 width=20) (actual\ntime=6224.229..6224.244 rows=10 loops=1)\n -> Sort (cost=74985.28..79985.28 rows=2000000 width=20) (actual\ntime=6224.226..6224.230 rows=10 loops=1)\n Sort Key: 
parsed1.tlocal\n Sort Method: top-N heapsort Memory: 17kB\n -> Result (cost=0.00..31766.00 rows=2000000 width=20) (actual\ntime=0.026..4933.210 rows=2000000 loops=1)\n -> Append (cost=0.00..31766.00 rows=2000000 width=20)\n(actual time=0.024..2880.868 rows=2000000 loops=1)\n -> Seq Scan on parsed1 (cost=0.00..15883.00\nrows=1000000 width=20) (actual time=0.023..551.870 rows=1000000 loops=1)\n -> Seq Scan on parsed2 (cost=0.00..15883.00\nrows=1000000 width=20) (actual time=0.027..549.465 rows=1000000 loops=1)\n Total runtime: 6224.337 ms\n(9 rows)\n\nFast version using a direct query with limits in the sub-queries:\n\nEXPLAIN ANALYZE SELECT * FROM (SELECT * FROM (SELECT * FROM parsed1 ORDER BY\ntlocal DESC LIMIT 10) AS a UNION ALL SELECT * FROM (SELECT * FROM parsed2\nORDER BY tlocal DESC LIMIT 10) AS b) AS c ORDER BY tlocal DESC LIMIT 10;\n\nQUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------\n Limit (cost=1.33..1.35 rows=10 width=20) (actual time=0.131..0.145 rows=10\nloops=1)\n -> Sort (cost=1.33..1.38 rows=20 width=20) (actual time=0.129..0.132\nrows=10 loops=1)\n Sort Key: parsed1.tlocal\n Sort Method: quicksort Memory: 17kB\n -> Result (cost=0.00..0.90 rows=20 width=20) (actual\ntime=0.023..0.100 rows=20 loops=1)\n -> Append (cost=0.00..0.90 rows=20 width=20) (actual\ntime=0.020..0.078 rows=20 loops=1)\n -> Limit (cost=0.00..0.35 rows=10 width=20) (actual\ntime=0.020..0.035 rows=10 loops=1)\n -> Index Scan using parsed1_tlocal_index on\nparsed1 (cost=0.00..34790.39 rows=1000000 width=20) (actual time=0.018..0.\n025 rows=10 loops=1)\n -> Limit (cost=0.00..0.35 rows=10 width=20) (actual\ntime=0.010..0.024 rows=10 loops=1)\n -> Index Scan using parsed2_tlocal_index on\nparsed2 (cost=0.00..34758.39 rows=1000000 width=20) (actual time=0.009..0.\n015 rows=10 loops=1)\n Total runtime: 0.187 ms\n(11 rows)\n\n\nBasically, the second query is giving the same result as the version using\nthe view but is able to use the indexes because it the order by and limit\nare explicitly placed in the sub-queries. So is there a way to make the\nplanner perform the same sort of operation and push those same constraints\ninto the sub-queries on its own?\n\nThanks,\nDave\n\n",
"msg_date": "Thu, 26 May 2011 09:21:45 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIMIT and UNION ALL"
},
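For reference, a minimal sketch of the two indexes the fast plan relies on. The names parsed1_tlocal_index and parsed2_tlocal_index are taken from the EXPLAIN output above, but they do not appear in the posted DDL, so the exact definitions are an assumption:

    -- assumed definitions; only the index names come from the plan
    CREATE INDEX parsed1_tlocal_index ON parsed1 (tlocal);
    CREATE INDEX parsed2_tlocal_index ON parsed2 (tlocal);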
{
"msg_contents": "Dave Johansen <[email protected]> writes:\n> ... So is there a way to make the\n> planner perform the same sort of operation and push those same constraints\n> into the sub-queries on its own?\n\nNo. As was mentioned upthread, there is a solution for this in 9.1,\nalthough it doesn't work in exactly the way you suggest.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 May 2011 13:00:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIMIT and UNION ALL "
}
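Tom's 9.1 reference is presumably the Merge Append plan type added in that release, which lets an ORDER BY ... LIMIT over a UNION ALL view combine per-branch index scans instead of sorting every row. A hedged sketch of the query as it would be written there; the expected plan shape is an assumption, not something verified against 9.1:

    -- on 9.1+, this is expected to use a Merge Append over backward index
    -- scans on the tlocal indexes, rather than a full sort of both tables
    SELECT * FROM parsed_all ORDER BY tlocal DESC LIMIT 10;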
] |
[
{
"msg_contents": "Am I reading this right in that the sort is taking almost 8 seconds?\r\n\r\n\"GroupAggregate (cost=95808.09..95808.14 rows=1 width=142) (actual time=14186.999..14694.524 rows=315635 loops=1)\"\r\n\" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7, s.d8, s.d9, s.d10, sum(sq.v)\"\r\n\" Buffers: shared hit=9763\"\r\n\" -> Sort (cost=95808.09..95808.09 rows=1 width=142) (actual time=14186.977..14287.068 rows=315635 loops=1)\"\r\n\" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7, s.d8, s.d9, s.d10, sq.v\"\r\n\" Sort Key: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7, s.d8, s.d9, s.d10\"\r\n\" Sort Method: quicksort Memory: 79808kB\"\r\n\" Buffers: shared hit=9763\"\r\n\" -> Hash Join (cost=87341.48..95808.08 rows=1 width=142) (actual time=6000.728..12037.492 rows=315635 loops=1)\"\r\n\" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7, s.d8, s.d9, s.d10, sq.v\"\r\n\" Hash Cond: (s.scenarioid = sq.scenarioid)\"\r\n\" Buffers: shared hit=9763\"\r\n\r\n\r\n_______________________________________________________________________________________________\r\n| John W. Strange | Vice President | Global Commodities Technology\r\n| J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 | F: 713 236-3333\r\n| [email protected]<mailto:[email protected]> | jpmorgan.com\r\n\r\n\r\n\r\nThis communication is for informational purposes only. It is not\r\nintended as an offer or solicitation for the purchase or sale of\r\nany financial instrument or as an official confirmation of any\r\ntransaction. All market prices, data and other information are not\r\nwarranted as to completeness or accuracy and are subject to change\r\nwithout notice. Any comments or statements made herein do not\r\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\r\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\r\nconfidential, legally privileged, and/or exempt from disclosure\r\nunder applicable law. If you are not the intended recipient, you\r\nare hereby notified that any disclosure, copying, distribution, or\r\nuse of the information contained herein (including any reliance\r\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\r\nattachments are believed to be free of any virus or other defect\r\nthat might affect any computer system into which it is received and\r\nopened, it is the responsibility of the recipient to ensure that it\r\nis virus free and no responsibility is accepted by JPMorgan Chase &\r\nCo., its subsidiaries and affiliates, as applicable, for any loss\r\nor damage arising in any way from its use. If you received this\r\ntransmission in error, please immediately contact the sender and\r\ndestroy the material in its entirety, whether in electronic or hard\r\ncopy format. Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\r\ndisclosures relating to European legal entities.\nAm I reading this right in that the sort is taking almost 8 seconds? 
\"GroupAggregate (cost=95808.09..95808.14 rows=1 width=142) (actual time=14186.999..14694.524 rows=315635 loops=1)\"\" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7, s.d8, s.d9, s.d10, sum(sq.v)\"\" Buffers: shared hit=9763\"\" -> Sort (cost=95808.09..95808.09 rows=1 width=142) (actual time=14186.977..14287.068 rows=315635 loops=1)\"\" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7, s.d8, s.d9, s.d10, sq.v\"\" Sort Key: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7, s.d8, s.d9, s.d10\"\" Sort Method: quicksort Memory: 79808kB\"\" Buffers: shared hit=9763\"\" -> Hash Join (cost=87341.48..95808.08 rows=1 width=142) (actual time=6000.728..12037.492 rows=315635 loops=1)\"\" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7, s.d8, s.d9, s.d10, sq.v\"\" Hash Cond: (s.scenarioid = sq.scenarioid)\"\" Buffers: shared hit=9763\" _______________________________________________________________________________________________| John W. Strange | Vice President | Global Commodities Technology | J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 | F: 713 236-3333| [email protected] | jpmorgan.com",
"msg_date": "Thu, 19 May 2011 17:13:30 -0400",
"msg_from": "\"Strange, John W\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "SORT performance - slow?"
},
{
"msg_contents": "Plus the entire explain analyze output into the form at\nhttp://explain.depesz.com/ and you'll get a nicely human readable output\nwhich shows both the inclusive and exclusive time spent on each step of the\nquery. It also highlights any steps which show inaccurate statistics. It\nwill also give you a perma-link which you can use in emails so that everyone\nelse can see the pretty version, too.\n\n\n\nOn Thu, May 19, 2011 at 2:13 PM, Strange, John W <\[email protected]> wrote:\n\n> Am I reading this right in that the sort is taking almost 8 seconds?\n>\n>\n>\n> *\"GroupAggregate (cost=95808.09..95808.14 rows=1 width=142) (actual\n> time=14186.999..14694.524 rows=315635 loops=1)\"*\n>\n> \" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6, s.d7,\n> s.d8, s.d9, s.d10, sum(sq.v)\"\n>\n> \" Buffers: shared hit=9763\"\n>\n> *\" -> Sort (cost=95808.09..95808.09 rows=1 width=142) (actual\n> time=14186.977..14287.068 rows=315635 loops=1)\"*\n>\n> \" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5, s.d6,\n> s.d7, s.d8, s.d9, s.d10, sq.v\"\n>\n> \" Sort Key: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5,\n> s.d6, s.d7, s.d8, s.d9, s.d10\"\n>\n> \" Sort Method: quicksort Memory: 79808kB\"\n>\n> \" Buffers: shared hit=9763\"\n>\n> *\" -> Hash Join (cost=87341.48..95808.08 rows=1 width=142)\n> (actual time=6000.728..12037.492 rows=315635 loops=1)\"*\n>\n> \" Output: sq.tag, sq.instrument, s.d1, s.d2, s.d3, s.d4, s.d5,\n> s.d6, s.d7, s.d8, s.d9, s.d10, sq.v\"\n>\n> \" Hash Cond: (s.scenarioid = sq.scenarioid)\"\n>\n> \" Buffers: shared hit=9763\"\n>\n>\n>\n>\n>\n>\n> _______________________________________________________________________________________________\n> |* John W. Strange* | Vice President | Global Commodities Technology\n> | J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C:\n> 281-744-6476 | F: 713 236-3333\n> | [email protected] | jpmorgan.com\n>\n>\n>\n> This communication is for informational purposes only. It is not intended\n> as an offer or solicitation for the purchase or sale of any financial\n> instrument or as an official confirmation of any transaction. All market\n> prices, data and other information are not warranted as to completeness or\n> accuracy and are subject to change without notice. Any comments or\n> statements made herein do not necessarily reflect those of JPMorgan Chase &\n> Co., its subsidiaries and affiliates. This transmission may contain\n> information that is privileged, confidential, legally privileged, and/or\n> exempt from disclosure under applicable law. If you are not the intended\n> recipient, you are hereby notified that any disclosure, copying,\n> distribution, or use of the information contained herein (including any\n> reliance thereon) is STRICTLY PROHIBITED. Although this transmission and any\n> attachments are believed to be free of any virus or other defect that might\n> affect any computer system into which it is received and opened, it is the\n> responsibility of the recipient to ensure that it is virus free and no\n> responsibility is accepted by JPMorgan Chase & Co., its subsidiaries and\n> affiliates, as applicable, for any loss or damage arising in any way from\n> its use. If you received this transmission in error, please immediately\n> contact the sender and destroy the material in its entirety, whether in\n> electronic or hard copy format. Thank you. 
Please refer to\n> http://www.jpmorgan.com/pages/disclosures for disclosures relating to\n> European legal entities.\n>\n",
"msg_date": "Thu, 19 May 2011 14:41:21 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SORT performance - slow?"
},
{
"msg_contents": "Dne 19.5.2011 23:13, Strange, John W napsal(a):\n> Am I reading this right in that the sort is taking almost 8 seconds?\n\nYou're probably reading it wrong. The sort itself takes about 1 ms (just\nsubtract the numbers in \"actual=\"). If you include all the overhead it\ntakes about 2.3 seconds (the hash join ends at 12 sec, the sort at 14.3).\n\nAnyway, your real problem is probably stale stats. Run ANALYZE on the\ntables referenced in the query, the estimates are very off. All the rows\nexpect 1 row but are getting 315k of them.\n\nregards\nTomas\n",
"msg_date": "Mon, 23 May 2011 01:11:40 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SORT performance - slow?"
},
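For concreteness, the 2.3-second figure comes from subtracting the completion times reported in the quoted plan fragment (only part of the plan was posted, so this covers just the sort step itself):

    14287.068 ms (Sort finishes) - 12037.492 ms (Hash Join finishes) ≈ 2249.6 ms ≈ 2.25 s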
{
"msg_contents": "> You're probably reading it wrong. The sort itself takes about 1 ms (just\n> subtract the numbers in \"actual=\").\n\nI thought it was cost=startup_cost..total_cost. That is not quite the\nsame thing, since startup_cost is effectively \"cost to produce first\nrow\", and Sort can't really operate in a \"streaming\" fashion (well,\ntheoretically, something like selection sort could, but that's beside\nthe point) so it needs to do all the work up front. I'm no explain\nexpert, so someone please correct me if I'm wrong.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Mon, 23 May 2011 10:01:42 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SORT performance - slow?"
},
{
"msg_contents": "On Mon, May 23, 2011 at 1:01 PM, Maciek Sakrejda <[email protected]> wrote:\n>> You're probably reading it wrong. The sort itself takes about 1 ms (just\n>> subtract the numbers in \"actual=\").\n>\n> I thought it was cost=startup_cost..total_cost. That is not quite the\n> same thing, since startup_cost is effectively \"cost to produce first\n> row\", and Sort can't really operate in a \"streaming\" fashion (well,\n> theoretically, something like selection sort could, but that's beside\n> the point) so it needs to do all the work up front. I'm no explain\n> expert, so someone please correct me if I'm wrong.\n\nYou are right.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 23 May 2011 13:27:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SORT performance - slow?"
},
{
"msg_contents": "Dne 23.5.2011 19:01, Maciek Sakrejda napsal(a):\n>> You're probably reading it wrong. The sort itself takes about 1 ms (just\n>> subtract the numbers in \"actual=\").\n> \n> I thought it was cost=startup_cost..total_cost. That is not quite the\n> same thing, since startup_cost is effectively \"cost to produce first\n> row\", and Sort can't really operate in a \"streaming\" fashion (well,\n> theoretically, something like selection sort could, but that's beside\n> the point) so it needs to do all the work up front. I'm no explain\n> expert, so someone please correct me if I'm wrong.\n\nGood point, thanks. In that case the second number (2.3 sec) is correct.\n\nI still think the problem is not the sorting but the inaccurate\nestimates - fixing this might yield a much better / faster plan. But the\nOP posted just a small part of the plan, so it's hard to guess.\n\nregards\nTomas\n",
"msg_date": "Mon, 23 May 2011 19:56:59 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SORT performance - slow?"
}
] |
[
{
"msg_contents": "\"Strange, John W\" wrote:\n \n> Am I reading this right in that the sort is taking almost 8\n> seconds?\n \n> -> Sort ... actual time=14186.977..14287.068\n \n> -> Hash Join ... actual time=6000.728..12037.492\n \nThe run time of the sort is the difference between 12037 ms and\n14287 ms (the completion times). That's 2.25 seconds.\n \n> If you are not the intended recipient, you are hereby notified\n> that any disclosure, copying, distribution, or use of the\n> information contained herein (including any reliance thereon) is\n> STRICTLY PROHIBITED.\n \nYou probably already know this, but just to make sure -- you posted\nthis to a public list which is automatically replicated to several\nwebsites freely available to everyone on the planet.\n \n-Kevin\n\n\n",
"msg_date": "Thu, 19 May 2011 22:52:10 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SORT performance - slow?"
},
{
"msg_contents": "Kevin Grittner wrote:\n> \"Strange, John W\" wrote:\n>> If you are not the intended recipient, you are hereby notified\n>> that any disclosure, copying, distribution, or use of the\n>> information contained herein (including any reliance thereon) is\n>> STRICTLY PROHIBITED.\n>\n> You probably already know this, but just to make sure -- you posted\n> this to a public list which is automatically replicated to several\n> websites freely available to everyone on the planet.\n\nIt's irrelevant, since that \"STRICTLY PROHIBITED\" verbiage is irrelevant, \nunenforceable and legally meaningless. I could post that message on my \npersonal blog, being not the intended recipient myself, and they would b e \nutterly powerless to do anything about it even if they sent it privately to my \npersonal email inbox.\n\nI don't even know why people bother even putting such nonsense into their \nemails, let alone Usenet or mailing-list posts.\n\n-- \nLew\nHoni soit qui mal y pense.\nhttp://upload.wikimedia.org/wikipedia/commons/c/cf/Friz.jpg\n",
"msg_date": "Fri, 20 May 2011 12:47:13 -0400",
"msg_from": "Lew <[email protected]>",
"msg_from_op": false,
"msg_subject": "[OT]: Confidentiality disclosures in list posts (Was: SORT\n\tperformance - slow?)"
},
{
"msg_contents": "On 05/20/2011 11:47 AM, Lew wrote:\n\n> I don't even know why people bother even putting such nonsense into\n> their emails, let alone Usenet or mailing-list posts.\n\nThis may sound like a surprise, but many of us don't. Several companies \nuse an auto-append on any outgoing message not sent to an internal \nrecipient. You can see this for yourselves in this message, as my \ncompany's little blurb gets attached after my signature lines. It's just \nstandard boilerplate meant as a CYA measure, really.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Mon, 23 May 2011 08:26:11 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT]: Confidentiality disclosures in list posts (Was:\n\tSORT performance - slow?)"
}
] |
[
{
"msg_contents": "Hi,\nI am using contrib/cube code. I am building GIST index on cube data type\nthen it leads to a very large size of log file (nearly 220 MB for only 12k\nrecords).\nWhile creating index on geometry field with gist gives 1KB size of log file\nfor 17 lakh records.\n\nCan someone please tell me how to stop postgres to logged so much data in\ncase of cube?\n\nThanks\nNick\n\nHi,I am using contrib/cube code. I am building GIST index on cube \ndata type then it leads to a very large size of log file (nearly 220 MB \nfor only 12k records).While creating index on geometry field with gist gives 1KB size of log file for 17 lakh records.\nCan someone please tell me how to stop postgres to logged so much data in case of cube?ThanksNick",
"msg_date": "Sun, 22 May 2011 17:13:44 +0530",
"msg_from": "Nick Raj <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logfile"
}
] |
[
{
"msg_contents": "I have a strange situation.\nI have a table of detail with millones of rows and a table of items with\nthousands of rows\n\nWhen I do..\n\nselect count(*) from wiz_application_response where application_item_id in\n(select id from wiz_application_item where application_id=110)\n\nThis query NOT use the index on column application_item_id, instead is doing\na sequential scan\n\nBUT, when I add the range of min and max id of the subquery, the postgres\nuses the INDEX\nThis is the second query...\n\nselect count(*) from wiz_application_response where application_item_id\nbetween 908 and 1030 and application_item_id in(select id from\nwiz_application_item where application_id=110)\n\n908 and 1030 are limits (lower and upper) of the subquery, the subquery\nreturns 100 elements aprox.\n\nSo, this is some bug?\n\nThanks!\n\nAnibal\n\n\n",
"msg_date": "Mon, 23 May 2011 17:30:34 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres not use index, IN statement"
},
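While gathering plans, one rewrite worth comparing is an EXISTS form of the same query; it is semantically equivalent here and may be planned differently depending on the server version. A sketch using the table and column names from the post above:

    SELECT count(*)
    FROM wiz_application_response r
    WHERE EXISTS (SELECT 1
                  FROM wiz_application_item i
                  WHERE i.id = r.application_item_id
                    AND i.application_id = 110);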
{
"msg_contents": "On 24/05/2011 5:30 AM, Anibal David Acosta wrote:\n\n> So, this is some bug?\n\nHard to know with the information provided. Please post EXPLAIN ANALYZE \noutput.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Tue, 24 May 2011 07:26:46 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres not use index, IN statement"
}
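A sketch of the requested EXPLAIN ANALYZE run, wrapping the original query verbatim:

    EXPLAIN ANALYZE
    SELECT count(*)
    FROM wiz_application_response
    WHERE application_item_id IN (SELECT id
                                  FROM wiz_application_item
                                  WHERE application_id = 110);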
] |
[
{
"msg_contents": "Hi all:\n\nNot sure if this is a performance question or a generic admin\nquestion. I have the following script running on a host different from\nthe database to use pgbench to test the database:\n\n pgbench -i (inital mode)\n pgsql vacuum analyze; (and some other code to dump table sizes)\n pgbench (multiple connections, jobs etc ....)\n\nwith a loop for setting different scales ....\n\n I seem to be able to provoke this error:\n\n vacuum...ERROR: invalid page header in\n block 2128910 of relation base/16385/21476\n\non a pgbench database created with a scale factor of 1000 relatively\nreliably (2 for 2). I am not seeing any disk errors from the raid\ncontroller or the operating system.\n\nRunning pg_dumpall to check for errors reports:\n\n pg_dump: Error message from server: ERROR: invalid page header in \n block 401585 of relation base/16385/21476\n\nwhich is different from the originaly reported block.\n\nDoes anybody have any suggestions?\n\nConfiguration details.\n\nOS: centos 5.5\nFilesystem: data - ext4 (note 4 not 3); 6.6T formatted\n wal - ext4; 1.5T formatted\nRaid: data - level 10, 8 disk wd2003; controller LSI MegaRAID SAS 9260-4i\n wal - level 1, 2 disk wd2003; controller LSI MegaRAID SAS 9260-4i\n\nCould it be an ext4 issue? It seems that ext4 may still be at the\nbleeding edge for postgres use.\n\nThanks for any thoughts even if it's go to the admin list.\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n",
"msg_date": "Mon, 23 May 2011 22:16:03 +0000",
"msg_from": "John Rouillard <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"error with invalid page header\" while vacuuming pgbench data"
},
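A minimal sketch of the test loop described above. The scale factor 1000 comes from the post; the database name, client count and transaction count are illustrative, not taken from the original script:

    pgbench -i -s 1000 bench                 # initialize at scale factor 1000
    psql -d bench -c 'VACUUM ANALYZE;'       # analyze (and dump table sizes) before the run
    pgbench -c 8 -t 10000 bench              # run with multiple concurrent clients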
{
"msg_contents": "John Rouillard <[email protected]> wrote:\n \n> I seem to be able to provoke this error:\n> \n> vacuum...ERROR: invalid page header in\n> block 2128910 of relation base/16385/21476\n \nWhat version of PostgreSQL?\n \n-Kevin\n",
"msg_date": "Mon, 23 May 2011 17:21:04 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"error with invalid page header\" while vacuuming\n\t pgbench data"
},
{
"msg_contents": "On Mon, May 23, 2011 at 05:21:04PM -0500, Kevin Grittner wrote:\n> John Rouillard <[email protected]> wrote:\n> \n> > I seem to be able to provoke this error:\n> > \n> > vacuum...ERROR: invalid page header in\n> > block 2128910 of relation base/16385/21476\n> \n> What version of PostgreSQL?\n\nHmm, I thought I replied to this, but I haven't seen it come back to\nme on list. It's postgres version: 8.4.5.\n\nrpm -q shows\n\n postgresql84-server-8.4.5-1.el5_5.1\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n",
"msg_date": "Wed, 25 May 2011 17:12:39 +0000",
"msg_from": "John Rouillard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"error with invalid page header\" while vacuuming\n pgbench data"
},
{
"msg_contents": "John Rouillard <[email protected]> wrote:\n> On Mon, May 23, 2011 at 05:21:04PM -0500, Kevin Grittner wrote:\n>> John Rouillard <[email protected]> wrote:\n>> \n>> > I seem to be able to provoke this error:\n>> > \n>> > vacuum...ERROR: invalid page header in\n>> > block 2128910 of relation base/16385/21476\n>> \n>> What version of PostgreSQL?\n> \n> Hmm, I thought I replied to this, but I haven't seen it come back\n> to me on list. It's postgres version: 8.4.5.\n> \n> rpm -q shows\n> \n> postgresql84-server-8.4.5-1.el5_5.1\n \nI was hoping someone else would jump in, but I see that your\nprevious post didn't copy the list, which solves *that* mystery.\n \nI'm curious whether you might have enabled one of the \"it's OK to\ntrash my database integrity to boost performance\" options. (People\nwith enough replication often feel that this *is* OK.) Please run\nthe query on this page and post the results:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \nBasically, if fsync or full_page_writes is turned off and there was\na crash, that explains it. If not, it provides more information to\nproceed.\n \nYou might want to re-start the thread on pgsql-general, though. Not\neverybody who might be able to help with a problem like this follows\nthe performance list. Or, if you didn't set any of the dangerous\nconfiguration options, this sounds like a bug -- so pgsql-bugs might\nbe even better.\n \n-Kevin\n",
"msg_date": "Wed, 25 May 2011 15:19:59 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"error with invalid page header\" while vacuuming\n\t pgbench data"
},
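The check Kevin describes can be approximated as follows. The pg_settings query is roughly what the cited wiki page suggests (treat the exact form as an assumption); the SHOW commands cover the two settings in question directly:

    SELECT version();
    SELECT name, current_setting(name), source
      FROM pg_settings
     WHERE source NOT IN ('default', 'override');

    SHOW fsync;
    SHOW full_page_writes;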
{
"msg_contents": "On 05/23/2011 06:16 PM, John Rouillard wrote:\n> OS: centos 5.5\n> Filesystem: data - ext4 (note 4 not 3); 6.6T formatted\n> wal - ext4; 1.5T formatted\n> Raid: data - level 10, 8 disk wd2003; controller LSI MegaRAID SAS 9260-4i\n> wal - level 1, 2 disk wd2003; controller LSI MegaRAID SAS 9260-4i\n>\n> Could it be an ext4 issue? It seems that ext4 may still be at the\n> bleeding edge for postgres use.\n> \n\nI would not trust ext4 on CentOS 5.5 at all. ext4 support in 5.5 is \nlabeled by RedHat as being in \"Technology Preview\" state. I believe \nthat if you had a real RedHat system instead of CentOS kernel, you'd \ndiscover it's hard to even get it installed--you need to basically say \n\"yes, I know it's not for production, I want it anyway\" to get preview \npackages. It's not really intended for production use.\n\nWhat I'm hearing from people is that they run into the occasional ext4 \nbug with PostgreSQL, but the serious ones aren't happening very often \nnow, on systems running RHEL6 or Debian Squeeze. Those kernels are way, \nway ahead of the ext4 backport in RHEL5 based systems, and they're just \nbarely stable.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 25 May 2011 16:41:16 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"error with invalid page header\" while vacuuming pgbench\n data"
},
{
"msg_contents": "On Wed, May 25, 2011 at 03:19:59PM -0500, Kevin Grittner wrote:\n> John Rouillard <[email protected]> wrote:\n> > On Mon, May 23, 2011 at 05:21:04PM -0500, Kevin Grittner wrote:\n> >> John Rouillard <[email protected]> wrote:\n> >> \n> >> > I seem to be able to provoke this error:\n> >> > \n> >> > vacuum...ERROR: invalid page header in\n> >> > block 2128910 of relation base/16385/21476\n> >> \n> >> What version of PostgreSQL?\n> > \n> > Hmm, I thought I replied to this, but I haven't seen it come back\n> > to me on list. It's postgres version: 8.4.5.\n> > \n> > rpm -q shows\n> > \n> > postgresql84-server-8.4.5-1.el5_5.1\n> \n> I was hoping someone else would jump in, but I see that your\n> previous post didn't copy the list, which solves *that* mystery.\n> \n> I'm curious whether you might have enabled one of the \"it's OK to\n> trash my database integrity to boost performance\" options. (People\n> with enough replication often feel that this *is* OK.) Please run\n> the query on this page and post the results:\n> \n> http://wiki.postgresql.org/wiki/Server_Configuration\n> \n> Basically, if fsync or full_page_writes is turned off and there was\n> a crash, that explains it. If not, it provides more information to\n> proceed.\n\nNope. Neither is turned off. I can't run the query at the moment since\nthe system is in the middle of a memtest86+ check of 96GB of\nmemory. The relevent parts from the config file from the Configuration\nManagement system are:\n\n #fsync = on # turns forced synchronization\n # on or off\n #synchronous_commit = on # immediate fsync at commit\n #wal_sync_method = fsync # the default is the first option \n\n #full_page_writes = on # recover from partial page writes\n\nthis is the same setup I use on all my data warehouse systems (with\nminor pgtune type changes based on amount of memory). Running the\nquery on another system (using ext3, centos 5.5) shows:\n\n version | PostgreSQL 8.4.5 on\nx86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red\nHat 4.1.2-48), 64-bit\n archive_command | if test ! -e\n/var/lib/pgsql/data/ARCHIVE_ENABLED; then exit 0; fi; test ! -f\n/var/bak/pgsql/%f && cp %p /var/bak/p\ngsql/%f\n archive_mode | on\n checkpoint_completion_target | 0.9\n checkpoint_segments | 64\n constraint_exclusion | on\n custom_variable_classes | pg_stat_statements\n default_statistics_target | 100\n effective_cache_size | 8GB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_checkpoints | on\n log_connections | on\n log_destination | stderr,syslog\n log_directory | pg_log\n log_filename | postgresql-%a.log\n log_line_prefix | %t %u@%d(%p)i: \n log_lock_waits | on\n log_min_duration_statement | 2s\n log_min_error_statement | warning\n log_min_messages | notice\n log_rotation_age | 1d\n log_rotation_size | 0\n log_temp_files | 0\n log_truncate_on_rotation | on\n logging_collector | on\n maintenance_work_mem | 1GB\n max_connections | 300\n max_locks_per_transaction | 128\n max_stack_depth | 2MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 4GB\n shared_preload_libraries | pg_stat_statements\n superuser_reserved_connections | 3\n tcp_keepalives_count | 0\n tcp_keepalives_idle | 0\n tcp_keepalives_interval | 0\n TimeZone | UTC\n wal_buffers | 32MB\n work_mem | 16MB\n\n> You might want to re-start the thread on pgsql-general, though. Not\n> everybody who might be able to help with a problem like this follows\n> the performance list. 
Or, if you didn't set any of the dangerous\n> configuration options, this sounds like a bug -- so pgsql-bugs might\n> be even better.\n\nWell I am also managing to panic the kernel on some runs as well. So\nmy guess is this is not only a postgres bug (if it's a postgres issue\nat all).\n\nAs gregg mentioned in another followup ext4 under centos 5.x may be an\nissue. I'll drop back to ext3 and see if I can replicate the\ncorruption or crashes one I rule out some potential hardware issues.\n\nIf I can replicate with ext3, then I'll follow up on -general or\n-bugs.\n\nExt4 pgbench results complete faster, but if it's not reliable ....\n\nThanks for your help.\n\n--\n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n",
"msg_date": "Wed, 25 May 2011 22:07:16 +0000",
"msg_from": "John Rouillard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: \"error with invalid page header\" while vacuuming\n pgbench data"
},
{
"msg_contents": "On Wed, May 25, 2011 at 4:07 PM, John Rouillard <[email protected]> wrote:\n> Well I am also managing to panic the kernel on some runs as well. So\n> my guess is this is not only a postgres bug (if it's a postgres issue\n> at all).\n>\n> As gregg mentioned in another followup ext4 under centos 5.x may be an\n> issue. I'll drop back to ext3 and see if I can replicate the\n> corruption or crashes one I rule out some potential hardware issues.\n\nAlso do the standard memtest86+ run to ensure your memory isn't bad.\nAlso do a simple dd if=/dev/sda of=/dev/null to make sure the drive\nhas no errors. It might be the drives. Look in your logs again to\nmake sure.\n",
"msg_date": "Wed, 25 May 2011 16:18:28 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"error with invalid page header\" while vacuuming\n pgbench data"
},
{
"msg_contents": "On 05/26/2011 06:18 AM, Scott Marlowe wrote:\n> On Wed, May 25, 2011 at 4:07 PM, John Rouillard<[email protected]> wrote:\n>> Well I am also managing to panic the kernel on some runs as well. So\n>> my guess is this is not only a postgres bug (if it's a postgres issue\n>> at all).\n>>\n>> As gregg mentioned in another followup ext4 under centos 5.x may be an\n>> issue. I'll drop back to ext3 and see if I can replicate the\n>> corruption or crashes one I rule out some potential hardware issues.\n>\n> Also do the standard memtest86+ run to ensure your memory isn't bad.\n> Also do a simple dd if=/dev/sda of=/dev/null to make sure the drive\n> has no errors.\n\nIf possible, also/instead use smartctl from smartmontools to ask the \ndrive to do an internal self-test and surface scan. This doesn't help \nyou with RAID volumes, but is often much more informative with plain \nphysical drives.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 26 May 2011 11:17:50 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"error with invalid page header\" while vacuuming pgbench\n data"
}
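A sketch of the smartctl usage Craig mentions. The device name /dev/sda is a placeholder; substitute the real physical drive, and note that drives behind the RAID controller may need controller-specific options:

    smartctl -t long /dev/sda      # start an extended (surface-scan) self-test
    smartctl -l selftest /dev/sda  # check the self-test log once it completes
    smartctl -a /dev/sda           # full attribute and health report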
] |
[
{
"msg_contents": "Hi,\n\nIn my DB, there is a large table containing messages and one that contains\nmessage boxes.\nMessages are assigned to boxes via a child parent link m->b.\nIn order to obtain the last message for a specific box, I use the following\nSQL:\n\nSELECT m1.id FROM message m1 LEFT JOIN message m2 ON (m1.box_id = m2.box_id\nAND m1.id < m2.id) WHERE m2.id IS NULL AND m1.box_id = id;\n\nThis worked quite well for a long time. But now, suddenly the performance of\nthis query drastically degraded as new messages were added.\nIf these new messages are removed again, everything is back to normal. If\nother messages are removed instead, the problem remains, so it does not seem\nto be a memory issue. I fear I have difficulties to understand what is going\nwrong here.\n\nThis is the query plan when everything is fine:\n\n\"Seq Scan on public.box this_ (cost=0.00..10467236.32 rows=128 width=696)\n(actual time=0.169..7683.978 rows=128 loops=1)\"\n\" Output: this_.id, this_.login, (SubPlan 1)\"\n\" Buffers: shared hit=188413 read=94635 written=135, temp read=22530\nwritten=22374\"\n\" SubPlan 1\"\n\" -> Hash Anti Join (cost=41323.25..81775.25 rows=20427 width=8)\n(actual time=59.571..59.877 rows=1 loops=128)\"\n\" Output: m1.id\"\n\" Hash Cond: (m1.box_id = m2.box_id)\"\n\" Join Filter: (m1.id < m2.id)\"\n\" Buffers: shared hit=188412 read=94633 written=135, temp\nread=22530 written=22374\"\n\" -> Bitmap Heap Scan on public.message m1 (cost=577.97..40212.28\nrows=30640 width=16) (actual time=3.152..9.514 rows=17982 loops=128)\"\n\" Output: m1.id, m1.box_id\"\n\" Recheck Cond: (m1.box_id = $0)\"\n\" Buffers: shared hit=131993 read=9550 written=23\"\n\" -> Bitmap Index Scan on \"message_box_Idx\" \n(cost=0.00..570.31 rows=30640 width=0) (actual time=2.840..2.840 rows=18193\nloops=128)\"\n\" Index Cond: (m1.box_id = $0)\"\n\" Buffers: shared hit=314 read=6433 written=23\"\n\" -> Hash (cost=40212.28..40212.28 rows=30640 width=16) (actual\ntime=26.840..26.840 rows=20014 loops=115)\"\n\" Output: m2.box_id, m2.id\"\n\" Buckets: 4096 Batches: 4 (originally 2) Memory Usage:\n5444kB\"\n\" Buffers: shared hit=56419 read=85083 written=112, temp\nwritten=7767\"\n\" -> Bitmap Heap Scan on public.message m2 \n(cost=577.97..40212.28 rows=30640 width=16) (actual time=2.419..20.007\nrows=20014 loops=115)\"\n\" Output: m2.box_id, m2.id\"\n\" Recheck Cond: (m2.box_id = $0)\"\n\" Buffers: shared hit=56419 read=85083 written=112\"\n\" -> Bitmap Index Scan on \"message_box_Idx\" \n(cost=0.00..570.31 rows=30640 width=0) (actual time=2.166..2.166 rows=20249\nloops=115)\"\n\" Index Cond: (m2.box_id = $0)\"\n\" Buffers: shared hit=6708\"\n\"Total runtime: 7685.202 ms\"\n\nThis is the plan when the query gets sluggish:\n\n\"Seq Scan on public.box this_ (cost=0.00..10467236.32 rows=128 width=696)\n(actual time=0.262..179333.086 rows=128 loops=1)\"\n\" Output: this_.id, this_.login, (SubPlan 1)\"\n\" Buffers: shared hit=189065 read=93983 written=10, temp read=22668\nwritten=22512\"\n\" SubPlan 1\"\n\" -> Hash Anti Join (cost=41323.25..81775.25 rows=20427 width=8)\n(actual time=1264.700..1400.886 rows=1 loops=128)\"\n\" Output: m1.id\"\n\" Hash Cond: (m1.box_id = m2.box_id)\"\n\" Join Filter: (m1.id < m2.id)\"\n\" Buffers: shared hit=189064 read=93981 written=10, temp read=22668\nwritten=22512\"\n\" -> Bitmap Heap Scan on public.message m1 (cost=577.97..40212.28\nrows=30640 width=16) (actual time=3.109..9.850 rows=18060 loops=128)\"\n\" Output: m1.id, m1.box_id\"\n\" Recheck Cond: (m1.box_id = $0)\"\n\" Buffers: 
shared hit=132095 read=9448\"\n\" -> Bitmap Index Scan on \"message_box_Idx\" \n(cost=0.00..570.31 rows=30640 width=0) (actual time=2.867..2.867 rows=18193\nloops=128)\"\n\" Index Cond: (m1.box_id = $0)\"\n\" Buffers: shared hit=312 read=6435\"\n\" -> Hash (cost=40212.28..40212.28 rows=30640 width=16) (actual\ntime=27.533..27.533 rows=20102 loops=115)\"\n\" Output: m2.box_id, m2.id\"\n\" Buckets: 4096 Batches: 4 (originally 2) Memory Usage:\n5522kB\"\n\" Buffers: shared hit=56969 read=84533 written=10, temp\nwritten=7811\"\n\" -> Bitmap Heap Scan on public.message m2 \n(cost=577.97..40212.28 rows=30640 width=16) (actual time=2.406..20.492\nrows=20102 loops=115)\"\n\" Output: m2.box_id, m2.id\"\n\" Recheck Cond: (m2.box_id = $0)\"\n\" Buffers: shared hit=56969 read=84533 written=10\"\n\" -> Bitmap Index Scan on \"message_box_Idx\" \n(cost=0.00..570.31 rows=30640 width=0) (actual time=2.170..2.170 rows=20249\nloops=115)\"\n\" Index Cond: (m2.box_id = $0)\"\n\" Buffers: shared hit=6708\"\n\"Total runtime: 179334.310 ms\"\n\n\nSo from my limited experience, the only significant difference I see is that\nthe Hash Anti Join takes a lot more time in plan 2, but I do not understand\nwhy.\nIdeas somebody?\n\nThanks\npanam\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4420974.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Mon, 23 May 2011 21:14:24 -0700 (PDT)",
"msg_from": "panam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hash Anti Join performance degradation"
},
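For reference, the "latest message per box" can also be expressed without the self anti-join. A sketch assuming only the columns visible in the query above (message.id, message.box_id) and an index covering (box_id, id):

    -- latest message id for every box
    SELECT DISTINCT ON (box_id) box_id, id
      FROM message
     ORDER BY box_id, id DESC;

    -- or, for one particular box
    SELECT max(id) FROM message WHERE box_id = $1;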
{
"msg_contents": "On 24/05/11 12:14, panam wrote:\n> Hi,\n> \n> In my DB, there is a large table containing messages and one that contains\n> message boxes.\n> Messages are assigned to boxes via a child parent link m->b.\n> In order to obtain the last message for a specific box, I use the following\n> SQL:\n> \n> SELECT m1.id FROM message m1 LEFT JOIN message m2 ON (m1.box_id = m2.box_id\n> AND m1.id < m2.id) WHERE m2.id IS NULL AND m1.box_id = id;\n> \n> This worked quite well for a long time. But now, suddenly the performance of\n> this query drastically degraded as new messages were added.\n> If these new messages are removed again, everything is back to normal. If\n> other messages are removed instead, the problem remains, so it does not seem\n> to be a memory issue. I fear I have difficulties to understand what is going\n> wrong here.\n\nThe usual cause is that the statistics for estimated row counts cross a\nthreshold that makes the query planner think that a different kind of\nplan will be faster.\n\nIf the query planner is using bad information about the performance of\nthe storage, then it will be making bad decisions about which approach\nis faster. So the usual thing to do is to adjust seq_page_cost and\nrandom_page_cost to more closely reflect the real performance of your\nhardware, and to make sure that effective_cache_size matches the real\namount of memory your computer has free for disk cache use.\n\nNewer versions of PostgreSQL always include query planning and\nstatistics improvements too.\n\nBTW, it can be really helpful to paste your query plans into\nhttp://explain.depesz.com/ , which will provide an easier to read visual\nanalysis of the plan. This will only work with query plans that haven't\nbeen butchered by mail client word wrapping, so I can't do it for you,\nbut if you paste them there and post the links that'd be really handy.\n\nAlso have a look at http://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nI found the plans you posted a bit hard to read. Not your fault; it's\nstupid mail clients. Maybe depesz.com needs to be taught to de-munge\nthe damage done to plans by common mail clients.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 24 May 2011 13:53:00 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "2011/5/24 panam <[email protected]>:\n> Hi,\n>\n> In my DB, there is a large table containing messages and one that contains\n> message boxes.\n> Messages are assigned to boxes via a child parent link m->b.\n> In order to obtain the last message for a specific box, I use the following\n> SQL:\n>\n> SELECT m1.id FROM message m1 LEFT JOIN message m2 ON (m1.box_id = m2.box_id\n> AND m1.id < m2.id) WHERE m2.id IS NULL AND m1.box_id = id;\n>\n> This worked quite well for a long time. But now, suddenly the performance of\n> this query drastically degraded as new messages were added.\n> If these new messages are removed again, everything is back to normal. If\n> other messages are removed instead, the problem remains, so it does not seem\n> to be a memory issue. I fear I have difficulties to understand what is going\n> wrong here.\n\nWe need more information here. The case is in fact interesting.\nWhat's the PostgreSQL version, and did you have log of vacuum and\ncheckpoint activity ? (no vacuum full/cluster or such thing running ?)\n\nObvisouly, Craig suggestion to read\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions is relevant as it\nhelps to have all common information required to analyze the issue.\n\n>\n> This is the query plan when everything is fine:\n>\n> \"Seq Scan on public.box this_ (cost=0.00..10467236.32 rows=128 width=696)\n> (actual time=0.169..7683.978 rows=128 loops=1)\"\n> \" Output: this_.id, this_.login, (SubPlan 1)\"\n> \" Buffers: shared hit=188413 read=94635 written=135, temp read=22530\n> written=22374\"\n> \" SubPlan 1\"\n> \" -> Hash Anti Join (cost=41323.25..81775.25 rows=20427 width=8)\n> (actual time=59.571..59.877 rows=1 loops=128)\"\n> \" Output: m1.id\"\n> \" Hash Cond: (m1.box_id = m2.box_id)\"\n> \" Join Filter: (m1.id < m2.id)\"\n> \" Buffers: shared hit=188412 read=94633 written=135, temp\n> read=22530 written=22374\"\n> \" -> Bitmap Heap Scan on public.message m1 (cost=577.97..40212.28\n> rows=30640 width=16) (actual time=3.152..9.514 rows=17982 loops=128)\"\n> \" Output: m1.id, m1.box_id\"\n> \" Recheck Cond: (m1.box_id = $0)\"\n> \" Buffers: shared hit=131993 read=9550 written=23\"\n> \" -> Bitmap Index Scan on \"message_box_Idx\"\n> (cost=0.00..570.31 rows=30640 width=0) (actual time=2.840..2.840 rows=18193\n> loops=128)\"\n> \" Index Cond: (m1.box_id = $0)\"\n> \" Buffers: shared hit=314 read=6433 written=23\"\n> \" -> Hash (cost=40212.28..40212.28 rows=30640 width=16) (actual\n> time=26.840..26.840 rows=20014 loops=115)\"\n> \" Output: m2.box_id, m2.id\"\n> \" Buckets: 4096 Batches: 4 (originally 2) Memory Usage:\n> 5444kB\"\n> \" Buffers: shared hit=56419 read=85083 written=112, temp\n> written=7767\"\n> \" -> Bitmap Heap Scan on public.message m2\n> (cost=577.97..40212.28 rows=30640 width=16) (actual time=2.419..20.007\n> rows=20014 loops=115)\"\n> \" Output: m2.box_id, m2.id\"\n> \" Recheck Cond: (m2.box_id = $0)\"\n> \" Buffers: shared hit=56419 read=85083 written=112\"\n> \" -> Bitmap Index Scan on \"message_box_Idx\"\n> (cost=0.00..570.31 rows=30640 width=0) (actual time=2.166..2.166 rows=20249\n> loops=115)\"\n> \" Index Cond: (m2.box_id = $0)\"\n> \" Buffers: shared hit=6708\"\n> \"Total runtime: 7685.202 ms\"\n>\n> This is the plan when the query gets sluggish:\n>\n> \"Seq Scan on public.box this_ (cost=0.00..10467236.32 rows=128 width=696)\n> (actual time=0.262..179333.086 rows=128 loops=1)\"\n> \" Output: this_.id, this_.login, (SubPlan 1)\"\n> \" Buffers: shared hit=189065 read=93983 written=10, temp read=22668\n> 
written=22512\"\n> \" SubPlan 1\"\n> \" -> Hash Anti Join (cost=41323.25..81775.25 rows=20427 width=8)\n> (actual time=1264.700..1400.886 rows=1 loops=128)\"\n> \" Output: m1.id\"\n> \" Hash Cond: (m1.box_id = m2.box_id)\"\n> \" Join Filter: (m1.id < m2.id)\"\n> \" Buffers: shared hit=189064 read=93981 written=10, temp read=22668\n> written=22512\"\n> \" -> Bitmap Heap Scan on public.message m1 (cost=577.97..40212.28\n> rows=30640 width=16) (actual time=3.109..9.850 rows=18060 loops=128)\"\n> \" Output: m1.id, m1.box_id\"\n> \" Recheck Cond: (m1.box_id = $0)\"\n> \" Buffers: shared hit=132095 read=9448\"\n> \" -> Bitmap Index Scan on \"message_box_Idx\"\n> (cost=0.00..570.31 rows=30640 width=0) (actual time=2.867..2.867 rows=18193\n> loops=128)\"\n> \" Index Cond: (m1.box_id = $0)\"\n> \" Buffers: shared hit=312 read=6435\"\n> \" -> Hash (cost=40212.28..40212.28 rows=30640 width=16) (actual\n> time=27.533..27.533 rows=20102 loops=115)\"\n> \" Output: m2.box_id, m2.id\"\n> \" Buckets: 4096 Batches: 4 (originally 2) Memory Usage:\n> 5522kB\"\n> \" Buffers: shared hit=56969 read=84533 written=10, temp\n> written=7811\"\n> \" -> Bitmap Heap Scan on public.message m2\n> (cost=577.97..40212.28 rows=30640 width=16) (actual time=2.406..20.492\n> rows=20102 loops=115)\"\n> \" Output: m2.box_id, m2.id\"\n> \" Recheck Cond: (m2.box_id = $0)\"\n> \" Buffers: shared hit=56969 read=84533 written=10\"\n> \" -> Bitmap Index Scan on \"message_box_Idx\"\n> (cost=0.00..570.31 rows=30640 width=0) (actual time=2.170..2.170 rows=20249\n> loops=115)\"\n> \" Index Cond: (m2.box_id = $0)\"\n> \" Buffers: shared hit=6708\"\n> \"Total runtime: 179334.310 ms\"\n>\n>\n> So from my limited experience, the only significant difference I see is that\n> the Hash Anti Join takes a lot more time in plan 2, but I do not understand\n> why.\n> Ideas somebody?\n>\n> Thanks\n> panam\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4420974.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Tue, 24 May 2011 13:16:44 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
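One way to capture the vacuum and checkpoint activity Cédric asks about, as a postgresql.conf snippet; both parameters exist in 9.0 and the values here are illustrative:

    log_checkpoints = on
    log_autovacuum_min_duration = 0   # 0 logs every autovacuum action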
{
"msg_contents": "Hi Craig and Cédric,\n\nThanks for the very informative introduction to the netiquette here and\nthanks for sharing your time.\nI wasn't aware of http://explain.depesz.com/, very useful.\nSo, here are the query plans:\nhttp://explain.depesz.com/s/6AU (1st from previous post, good)\nhttp://explain.depesz.com/s/YPS (2nd from previous post, bad)\n\n> The usual cause is that the statistics for estimated row counts cross a \n> threshold that makes the query planner think that a different kind of \n> plan will be faster.\n\nHm, as far as i understand the plans, they are equivalent, aren't they?\n\n> If the query planner is using bad information about the performance of \n> the storage, then it will be making bad decisions about which approach \n> is faster. So the usual thing to do is to adjust seq_page_cost and \n> random_page_cost to more closely reflect the real performance of your \n> hardware, and to make sure that effective_cache_size matches the real \n> amount of memory your computer has free for disk cache use.\n\nWill this make any difference even when the plans are equivalent as assumed\nabove?\n\nThe table creation SQL is as follows:\nhttp://pastebin.com/qFDUP7Aa (Message table); ~ 2328680\trows, is growing\nconstantly (~ 10000 new rows each day), \nhttp://pastebin.com/vEmh4hb8 (Box table); ~ 128\trows (growing very slowly 1\nrow every two days, each row updated about 2x a day)\n\nThe DB contains the same data, except that for the \"good\" query, the last\n10976 rows (0.4%) of message are removed by doing a\n\nDELETE FROM message where timestamp > TO_DATE ('05/23/2011','mm/dd/yyyy');\n\n\nThis speeds up the query by a factor of ~27. (207033.081 (bad) vs. 7683.978\n(good)).\n\nEach query was run before and after a vacuum analyze, one time to create\nappropriate statistics, and the second time to do the actual measurement.\nAll tests were made on the dev-machine, which is a 8GB, Core i7, Windows 7\n\nI experienced the issue at first on the \"production\"-environment, which is a\n64-bit Ubuntu, running PostgreSQL 9.0.1 on x86_64-unknown-linux-gnu,\ncompiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit,\nand later for analysis on the dev-environment, which is a\n64-bit Windows 7, running PostgreSQL 9.0.4, compiled by Visual C++ build\n1500, 64-bit\nFor testing, I've increased the buffers that I judge important for the issue\nto the following values:\neffective_cache_size: 4GB\nshared_buffers: 1GB\nwork_mem: 1GB\ntemp_buffers: 32MB\nAfter that, configuration was reloaded and the postgresql service was\nrestarted using pgAdmin.\nInterestingly, there was no performance gain as compared to the default\nsettings, the \"bad\" query even took about 30 seconds (15%) longer.\nAs well it seems, all data fit into memory, so there is not much disk I/O\ninvolved.\n\n@Cédric\n> did you have log of vacuum and checkpoint activity ?\n> (no vacuum full/cluster or such thing running ?)\nThere is no clustering involved here, its a pretty basic setup.\nHow can I obtain the information you require here? I could send you the\noutput of the analyse vacuum command from pgAdmin, but is there a way to\nmake it output the information in English (rather than German)? \n\nThanks for your interest in this issue.\n\nRegards,\npanam\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4422247.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 24 May 2011 07:34:57 -0700 (PDT)",
"msg_from": "panam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "panam <[email protected]> writes:\n> In my DB, there is a large table containing messages and one that contains\n> message boxes.\n> Messages are assigned to boxes via a child parent link m->b.\n> In order to obtain the last message for a specific box, I use the following\n> SQL:\n\n> SELECT m1.id FROM message m1 LEFT JOIN message m2 ON (m1.box_id = m2.box_id\n> AND m1.id < m2.id) WHERE m2.id IS NULL AND m1.box_id = id;\n\nBTW, this query doesn't actually match the EXPLAIN outputs...\n\n> So from my limited experience, the only significant difference I see is that\n> the Hash Anti Join takes a lot more time in plan 2, but I do not understand\n> why.\n\nWhatever's going on is below the level that EXPLAIN can show. I can\nthink of a couple of possibilities:\n\n1. The \"extra\" rows in the slower case all manage to come out to the\nsame hash value, or some very small number of distinct hash values, such\nthat we spend a lot of time searching a single hash chain. But it's\nhard to credit that adding 0.4% more rows could result in near 100x\nslowdown, no matter how bad their distribution.\n\n2. There's some inefficiency in the use of the temp files, though again\nit's far from clear why your two cases would be noticeably different\nthere. Possibly enabling log_temp_files would tell you something useful.\n\nOne other thing I'm not following is how come it's using hash temp files\nat all, when you claim in your later message that you've got work_mem\nset to 1GB. It should certainly not take more than a couple meg to hold\n20K rows at 16 payload bytes per row. You might want to check whether\nthat setting actually took effect.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 May 2011 16:38:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation "
},
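A quick way to act on Tom's two suggestions above (turn on temp-file logging and verify that the work_mem change really took effect); this is only a sketch, the SET needs superuser and the box_id value is a placeholder:

    -- log every temporary file a backend creates (0 = no size threshold)
    SET log_temp_files = 0;

    -- the value this session is actually running with
    SHOW work_mem;

    -- re-run the slow subquery for a single box and watch the log for "temporary file" lines
    EXPLAIN ANALYZE
    SELECT m1.id
    FROM message m1
    LEFT JOIN message m2
      ON m1.box_id = m2.box_id AND m1.id < m2.id
    WHERE m2.id IS NULL
      AND m1.box_id = 12345;  -- placeholder box id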
{
"msg_contents": "On 24/05/11 22:34, panam wrote:\n\n>> The usual cause is that the statistics for estimated row counts cross a \n>> threshold that makes the query planner think that a different kind of \n>> plan will be faster.\n> \n> Hm, as far as i understand the plans, they are equivalent, aren't they?\n\nYes, they are, and the estimates are too. This isn't the usual case\nwhere the planner trips over a threshold and switches to a totally\ndifferent plan type that it thinks is faster, but isn't.\n\nThe estimates are actually IDENTICAL for the hash anti-join node of\ninterest, and so are the actual loop count and row count. Temp file\nactivity is much the same across both plans too.\n\nYou can reproduce this behaviour consistently? It's _seriously_ weird,\nand the sort of thing that when I encounter myself I tend to ask \"what\nelse is going on that I'm missing?\".\n\nWhat happens if you DELETE more rows? Or fewer? What's the threshold?\n\nWhat happens if you DELETE rows from the start not the end, or a random\nselection?\n\nDoes the problem persist if you DELETE the rows then CLUSTER the table\nbefore running the query?\n\nDoes the problem persist if you DELETE the rows then REINDEX?\n\n>> If the query planner is using bad information about the performance of \n>> the storage, then it will be making bad decisions about which approach \n>> is faster. [snip]\n> \n> Will this make any difference even when the plans are equivalent as assumed\n> above?\n\nNope. It doesn't seem to be a problem with plan selection.\n\n> This speeds up the query by a factor of ~27. (207033.081 (bad) vs. 7683.978\n> (good)).\n\nThat's a serious WTF.\n\n> @Cédric\n>> did you have log of vacuum and checkpoint activity ?\n>> (no vacuum full/cluster or such thing running ?)\n> There is no clustering involved here, its a pretty basic setup.\n\nHe means 'CLUSTER', the SQL command that tells PostgreSQL to re-organize\na table.\n\nThe answer from the rest of your post would appear to be 'no, it's being\nrun in an otherwise-idle stand-alone test environment'. Right?\n\n> How can I obtain the information you require here? I could send you the\n> output of the analyse vacuum command from pgAdmin, but is there a way to\n> make it output the information in English (rather than German)?\n\nIt's easy enough to read familiar output like that in German, if needs be.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 25 May 2011 09:53:13 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "Hi all,\n\n@Tom,\n> BTW, this query doesn't actually match the EXPLAIN outputs...\nYou're right, it is actually just the \"heavy\" subquery of a larger query\nwhich can be found here:\nhttp://pastebin.com/fuGrt0tB\n\n> One other thing I'm not following is how come it's using hash temp files \n> at all, when you claim in your later message that you've got work_mem \n> set to 1GB. It should certainly not take more than a couple meg to hold \n> 20K rows at 16 payload bytes per row. You might want to check whether \n> that setting actually took effect.\n As I said, I drastically increased the buffer sizes (at least I intended\nto) to see if it changed something. It first I thought it wouldn't. But\nyesterday (I think it was after a reboot), the \"bad\" queries suddenly were\nmuch faster (~ 20secs, still at least 3 times slower than the \"good\" queries\nthough). Today, they were very slow again (I replayed the dumps in between).\nSo I am not sure whether postgres actually picks up the altered\nconfiguration (even after reboot). Is there a way to determine the values\nactually used?\n\n@Craig\n> You can reproduce this behaviour consistently? It's _seriously_ weird, \n> and the sort of thing that when I encounter myself I tend to ask \"what \n> else is going on that I'm missing?\".\nYes, I can reproduce it consistently using a dumpfile. \n\n> What happens if you DELETE more rows? Or fewer? What's the threshold?\n> What happens if you DELETE rows from the start not the end, or a random \nselection?\n>From some experiments I made earlier, I conclude that the rows added last\nare the problem. Deleting the first 10% did not seem to speed up the bad\nquery. However, I haven't checked that systematically.\n\n> Does the problem persist if you DELETE the rows then CLUSTER the table \n> before running the query?\nWow, I wasn't aware of cluster. Applying it (clustering the id PK) on the\ntable causing the \"bad\" query worked wonders. It now needs just 4.3 secs as\ncompared to 600 secs before (now with one day of data added as compared to\nthe previous post) and 4.0 secs for the \"good\" query (also clustered) which\nis faster than the unclustered \"good\" query (about 6-8 secs).\n\n> Does the problem persist if you DELETE the rows then REINDEX?\nNo, not noticeably.\n\n> The answer from the rest of your post would appear to be 'no, it's being \n> run in an otherwise-idle stand-alone test environment'. Right?\nCorrect.\n\nIt seems my issue is solved (at least for now). My impression is that it was\njust somehow \"bad luck\" that the rows originally and replayed from the dumps\nwere kind of messed up in regard to their ids, especially - it seems - the\nnewly added ones. This is somehow consistent with the peculiarities of the\nquery which contains a pairwise id comparison which should greatly benefit\nan ordered set of ids.\nThis also made me wonder how the internal plan is carried out. Is the engine\nable to leverage the fact that a part/range of the rows is totally or\npartially ordered on disk, e.g. using some kind of binary search or even\n\"nearest neighbor\"-search in that section (i.e. a special \"micro-plan\" or\nalgorithm)? Or is the speed-up \"just\" because related data is usually\n\"nearby\" and most of the standard algorithms work best with clustered data?\nIf the first is not the case, would that be a potential point for\nimprovement? 
Maybe it would even be more efficient, if there were some sort\nof constraints that guarantee \"ordered row\" sections on the disk, i.e.\npreventing the addition of a row that had an index value in between two row\nvalues of an already ordered/clustered section. In the simplest case, it\nwould start with the \"first\" row and end with the \"last\" row (on the time of\ndoing the equivalent of \"cluster\"). So there would be a small list saying\nrows with id x - rows with id y are guaranteed to be ordered on disk (by id\nfor example) now and for all times.\n\nSo, would you like to further investigate my previous issue (I think it is\nstill strange that performance suddenly dropped that dramatically)?\n\nMany thanks and regards,\npanam \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4425890.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 25 May 2011 09:42:29 -0700 (PDT)",
"msg_from": "panam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "panam <[email protected]> wrote:\n \n> Is there a way to determine the values actually used?\n \nThe pg_settings view. Try the query shown here:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \n-Kevin\n",
"msg_date": "Wed, 25 May 2011 12:40:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
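For reference, the query on that wiki page is essentially the following (paraphrased, so treat it as a sketch rather than a verbatim copy); it lists every setting whose value does not come from the built-in defaults, which answers the "did my change actually take effect?" question:

    SELECT name, current_setting(name) AS value, source
    FROM pg_settings
    WHERE source NOT IN ('default', 'override');

A plain SHOW work_mem; or SHOW shared_buffers; in the same session works as a spot check too.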
{
"msg_contents": "On 05/26/2011 12:42 AM, panam wrote:\n\n> So, would you like to further investigate my previous issue (I think it is\n> still strange that performance suddenly dropped that dramatically)?\n\nIt's a bit beyond me, but I suspect that it'd be best if you could hang \nonto the dump file in case someone has the time and enthusiasm to \ninvestigate it. I take it you can't distribute the dump file, even \nprivately?\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 26 May 2011 10:40:22 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "Hi there,\n\n\nKevin Grittner wrote:\n> \n>> Is there a way to determine the values actually used?\n> The pg_settings view. Try the query shown here:\n> http://wiki.postgresql.org/wiki/Server_Configuration\n> \nThanks Kevin, very usful. Here is the output:\n\n\"version\";\"PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit\"\n\"bytea_output\";\"escape\"\n\"client_encoding\";\"UNICODE\"\n\"effective_cache_size\";\"4GB\"\n\"lc_collate\";\"German_Germany.1252\"\n\"lc_ctype\";\"German_Germany.1252\"\n\"listen_addresses\";\"*\"\n\"log_destination\";\"stderr\"\n\"log_line_prefix\";\"%t \"\n\"logging_collector\";\"on\"\n\"max_connections\";\"100\"\n\"max_stack_depth\";\"2MB\"\n\"port\";\"5432\"\n\"server_encoding\";\"UTF8\"\n\"shared_buffers\";\"1GB\"\n\"temp_buffers\";\"4096\"\n\"TimeZone\";\"CET\"\n\"work_mem\";\"1GB\"\n\n\nCraig Ringer wrote:\n> \n> On 05/26/2011 12:42 AM, panam wrote:\n> It's a bit beyond me, but I suspect that it'd be best if you could hang \n> onto the dump file in case someone has the time and enthusiasm to \n> investigate it. I take it you can't distribute the dump file, even \n> privately?\n> \nFortunately, I managed to reduce it to the absolute minimum (i.e. only\nmeaningless ids), and the issue is still observable.\nYou can download it from here:\nhttp://www.zumodrive.com/file/460997770?key=cIdeODVlNz\n\nSome things to try:\n* tune your psql settings if you want\n* reindex, vaccum analzye if you want\n\n\"Patholgical\" query:\n\nselect\n\tb.id,\n\t(SELECT\n\t\tm1.id \n\tFROM\n\t\tmessage m1 \n\tLEFT JOIN\n\t\tmessage m2 \n\t\t\tON (\n\t\t\t\tm1.box_id = m2.box_id \n\t\t\t\tAND m1.id < m2.id\n\t\t\t) \n\tWHERE\n\t\tm2.id IS NULL \n\t\tAND m1.box_id = b.id)\nfrom\n\tbox b\n\n=> takes almost \"forever\" (~600 seconds on my system)\n\nTry\n\ndelete from message where id > 2550000;\n\n=> deletes 78404 rows\nDo the \"pathological\" query again\n=> speed is back (~4 seconds on my system)\n\nReplay the dump\nTry\n\ndelete from message where id < 1000000;\n\n=> deletes 835844 (10 times than before) rows. Maybe you can delete many\nmore, I haven't tested this systematically.\nDo the \"pathological\" query again\n=> takes almost \"forever\" (didn't wait...)\n\nReplay the dump\nCluster:\n\ncluster message_pkey on message;\n\nDo the \"pathological\" query again\n=> speed is back (~3 seconds on my system)\n\nAny third party confirmation?\n\nThanks\npanam\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4428435.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 26 May 2011 05:33:37 -0700 (PDT)",
"msg_from": "panam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "2011/5/26 panam <[email protected]>:\n> Hi there,\n>\n>\n> Kevin Grittner wrote:\n>>\n>>> Is there a way to determine the values actually used?\n>> The pg_settings view. Try the query shown here:\n>> http://wiki.postgresql.org/wiki/Server_Configuration\n>>\n> Thanks Kevin, very usful. Here is the output:\n>\n> \"version\";\"PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit\"\n> \"bytea_output\";\"escape\"\n> \"client_encoding\";\"UNICODE\"\n> \"effective_cache_size\";\"4GB\"\n> \"lc_collate\";\"German_Germany.1252\"\n> \"lc_ctype\";\"German_Germany.1252\"\n> \"listen_addresses\";\"*\"\n> \"log_destination\";\"stderr\"\n> \"log_line_prefix\";\"%t \"\n> \"logging_collector\";\"on\"\n> \"max_connections\";\"100\"\n> \"max_stack_depth\";\"2MB\"\n> \"port\";\"5432\"\n> \"server_encoding\";\"UTF8\"\n> \"shared_buffers\";\"1GB\"\n> \"temp_buffers\";\"4096\"\n> \"TimeZone\";\"CET\"\n> \"work_mem\";\"1GB\"\n>\n>\n> Craig Ringer wrote:\n>>\n>> On 05/26/2011 12:42 AM, panam wrote:\n>> It's a bit beyond me, but I suspect that it'd be best if you could hang\n>> onto the dump file in case someone has the time and enthusiasm to\n>> investigate it. I take it you can't distribute the dump file, even\n>> privately?\n>>\n> Fortunately, I managed to reduce it to the absolute minimum (i.e. only\n> meaningless ids), and the issue is still observable.\n> You can download it from here:\n> http://www.zumodrive.com/file/460997770?key=cIdeODVlNz\n>\n> Some things to try:\n> * tune your psql settings if you want\n> * reindex, vaccum analzye if you want\n>\n> \"Patholgical\" query:\n>\n> select\n> b.id,\n> (SELECT\n> m1.id\n> FROM\n> message m1\n> LEFT JOIN\n> message m2\n> ON (\n> m1.box_id = m2.box_id\n> AND m1.id < m2.id\n> )\n> WHERE\n> m2.id IS NULL\n> AND m1.box_id = b.id)\n> from\n> box b\n>\n> => takes almost \"forever\" (~600 seconds on my system)\n>\n> Try\n>\n> delete from message where id > 2550000;\n>\n> => deletes 78404 rows\n> Do the \"pathological\" query again\n> => speed is back (~4 seconds on my system)\n>\n> Replay the dump\n> Try\n>\n> delete from message where id < 1000000;\n>\n> => deletes 835844 (10 times than before) rows. Maybe you can delete many\n> more, I haven't tested this systematically.\n> Do the \"pathological\" query again\n> => takes almost \"forever\" (didn't wait...)\n>\n> Replay the dump\n> Cluster:\n>\n> cluster message_pkey on message;\n>\n> Do the \"pathological\" query again\n> => speed is back (~3 seconds on my system)\n>\n> Any third party confirmation?\n\nwithout explaining further why the antijoin has bad performance\nwithout cluster, I wonder why you don't use this query :\n\nSELECT b.id,\n max(m.id)\nFROM box b, message m\nWHERE m.box_id = b.id\nGROUP BY b.id;\n\nlooks similar and fastest.\n\n>\n> Thanks\n> panam\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4428435.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 26 May 2011 16:21:21 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "Cᅵdric Villemain<[email protected]> wrote:\n> 2011/5/26 panam <[email protected]>:\n \n>> \"max_connections\";\"100\"\n \n>> \"work_mem\";\"1GB\"\n \nEach connection can allocate work_mem, potentially several times. \nOn a machines without hundreds of GB of RAM, that pair of settings\ncould cause severe swapping.\n \n>> \"Patholgical\" query:\n>>\n>> select\n>> b.id,\n>> (SELECT\n>> m1.id\n>> FROM\n>> message m1\n>> LEFT JOIN\n>> message m2\n>> ON (\n>> m1.box_id = m2.box_id\n>> AND m1.id < m2.id\n>> )\n>> WHERE\n>> m2.id IS NULL\n>> AND m1.box_id = b.id)\n>> from\n>> box b\n \n> without explaining further why the antijoin has bad performance\n> without cluster, I wonder why you don't use this query :\n> \n> SELECT b.id,\n> max(m.id)\n> FROM box b, message m\n> WHERE m.box_id = b.id\n> GROUP BY b.id;\n> \n> looks similar and fastest.\n \nI think you would need a left join to actually get identical\nresults:\n \nSELECT b.id, max(m.id)\n FROM box b\n LEFT JOIN message m ON m.box_id = b.id\n GROUP BY b.id;\n \nBut yeah, I would expect this approach to be much faster. Rather\neasier to understand and harder to get wrong, too.\n \n-Kevin\n",
"msg_date": "Thu, 26 May 2011 09:48:46 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "Hi all,\n\n\nCédric Villemain-3 wrote:\n> \n> without explaining further why the antijoin has bad performance\n> without cluster, I wonder why you don't use this query :\n> \n> SELECT b.id,\n> max(m.id)\n> FROM box b, message m\n> WHERE m.box_id = b.id\n> GROUP BY b.id;\n> \n> looks similar and fastest.\n> \nI actually did use a similar strategy in the meantime (during my problem\nwith the \"left join\" query we are talking about here all the time) for\nmitigation.\nIt was\nSELECT MAX(e.id) FROM event_message e WHERE e.box_id = id\nand it performed worse in comparison to the \"left join\" query in the general\ncase (i.e. before my problems began).\nAt the end of this post is an explanation why I think I cannot use the\nsolution you suggested above.\n\n\nKevin Grittner wrote:\n> \n> Each connection can allocate work_mem, potentially several times. \n> On a machines without hundreds of GB of RAM, that pair of settings\n> could cause severe swapping.\n> \nIndeed, thanks for the warning. These settings are not for production but to\nexclude a performance degradation because of small cache sizes.\n\n\nKevin Grittner wrote:\n> \n> I think you would need a left join to actually get identical\n> results:\n> \n> SELECT b.id, max(m.id)\n> FROM box b\n> LEFT JOIN message m ON m.box_id = b.id\n> GROUP BY b.id;\n> \n> But yeah, I would expect this approach to be much faster. Rather\n> easier to understand and harder to get wrong, too.\n> \n> \nCorrect, it is much faster, even with unclustered ids.\nHowever, I think I cannot use it because of the way that query is generated\n(by hibernate).\nThe (simplyfied) base query is just\n\nSELECT b.id from box\n\nthe subquery\n\n(SELECT m1.id FROM message m1 \n LEFT JOIN message m2 \n ON (m1.box_id = m2.box_id AND m1.id < m2.id ) \n WHERE m2.id IS NULL AND m1.box_id = b.id) as lastMessageId\n\nis due to a hibernate formula (containing more or less plain SQL) to\ndetermine the last message id for that box. It ought to return just one row,\nnot multiple. So I am constrained to the subquery in all optimization\nattemps (I cannot combine them as you did), at least I do not see how. If\nyou have an idea for a more performant subquery though, let me know, as this\ncan easily be replaced.\n\nThanks for your help and suggestions\npanam\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4429125.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 26 May 2011 09:08:07 -0700 (PDT)",
"msg_from": "panam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "panam <[email protected]> wrote:\n \n> I cannot use it because of the way that query is generated\n> (by hibernate).\n> \n> The (simplyfied) base query is just\n> \n> SELECT b.id from box\n> \n> the subquery\n> \n> (SELECT m1.id FROM message m1 \n> LEFT JOIN message m2 \n> ON (m1.box_id = m2.box_id AND m1.id < m2.id ) \n> WHERE m2.id IS NULL AND m1.box_id = b.id) as lastMessageId\n> \n> is due to a hibernate formula (containing more or less plain SQL)\n> to determine the last message id for that box. It ought to return\n> just one row, not multiple. So I am constrained to the subquery in\n> all optimization attemps (I cannot combine them as you did), at\n> least I do not see how. If you have an idea for a more performant\n> subquery though, let me know, as this can easily be replaced.\n \nMaybe:\n \n(SELECT max(m1.id) FROM message m1 WHERE m1.box_id = b.id)\n \n-Kevin\n",
"msg_date": "Thu, 26 May 2011 12:08:18 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "Sorry,\n\nSELECT MAX(e.id) FROM event_message e WHERE e.box_id = id\n\nas posted previously should actually read\n\nSELECT max(m1.id) FROM message m1 WHERE m1.box_id = b.id)\n\nso I tried this already.\n\nRegards,\npanam\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4429475.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 26 May 2011 11:04:35 -0700 (PDT)",
"msg_from": "panam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
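Besides the max() form, the other rewrite commonly tried for "latest row per parent" is ORDER BY ... LIMIT 1; a sketch of how it could slot into the same scalar-subquery position, assuming the message/box schema from the dump (whether it actually beats the max() form here is something to benchmark, not a given):

    (SELECT m1.id
     FROM message m1
     WHERE m1.box_id = b.id
     ORDER BY m1.id DESC
     LIMIT 1)

With a two-column index on (box_id, id) either this or the max() form can be answered by one backward index probe per box; such an index is not part of the schema shown earlier, so adding it would be an extra, optional step.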
{
"msg_contents": "2011/5/26 panam <[email protected]>:\n> Hi all,\n>\n>\n> Cédric Villemain-3 wrote:\n>>\n>> without explaining further why the antijoin has bad performance\n>> without cluster, I wonder why you don't use this query :\n>>\n>> SELECT b.id,\n>> max(m.id)\n>> FROM box b, message m\n>> WHERE m.box_id = b.id\n>> GROUP BY b.id;\n>>\n>> looks similar and fastest.\n>>\n> I actually did use a similar strategy in the meantime (during my problem\n> with the \"left join\" query we are talking about here all the time) for\n> mitigation.\n> It was\n> SELECT MAX(e.id) FROM event_message e WHERE e.box_id = id\n> and it performed worse in comparison to the \"left join\" query in the general\n> case (i.e. before my problems began).\n> At the end of this post is an explanation why I think I cannot use the\n> solution you suggested above.\n>\n>\n> Kevin Grittner wrote:\n>>\n>> Each connection can allocate work_mem, potentially several times.\n>> On a machines without hundreds of GB of RAM, that pair of settings\n>> could cause severe swapping.\n>>\n> Indeed, thanks for the warning. These settings are not for production but to\n> exclude a performance degradation because of small cache sizes.\n>\n>\n> Kevin Grittner wrote:\n>>\n>> I think you would need a left join to actually get identical\n>> results:\n>>\n>> SELECT b.id, max(m.id)\n>> FROM box b\n>> LEFT JOIN message m ON m.box_id = b.id\n>> GROUP BY b.id;\n>>\n>> But yeah, I would expect this approach to be much faster. Rather\n>> easier to understand and harder to get wrong, too.\n>>\n>>\n> Correct, it is much faster, even with unclustered ids.\n> However, I think I cannot use it because of the way that query is generated\n> (by hibernate).\n> The (simplyfied) base query is just\n>\n> SELECT b.id from box\n>\n> the subquery\n>\n> (SELECT m1.id FROM message m1\n> LEFT JOIN message m2\n> ON (m1.box_id = m2.box_id AND m1.id < m2.id )\n> WHERE m2.id IS NULL AND m1.box_id = b.id) as lastMessageId\n>\n> is due to a hibernate formula (containing more or less plain SQL) to\n> determine the last message id for that box. It ought to return just one row,\n> not multiple. So I am constrained to the subquery in all optimization\n> attemps (I cannot combine them as you did), at least I do not see how. If\n> you have an idea for a more performant subquery though, let me know, as this\n> can easily be replaced.\n\nIn production, if you have a decent IO system, you can lower\nrandom_page_cost and it may be faster using index (by default, with\nthe use case you provided it choose a seqscan). It can be a bit tricky\nif you have to lower random_page_cost so much that it destroy others\nquery plan but increase the perf of the current one. if it happens,\npost again :) (sometime need to change other cost parameters but it\nneeds to be handle with care)\n\nI am not an hibernate expert, but I'll surprised if you can not drive\nhibernate to do what you want.\n\n>\n> Thanks for your help and suggestions\n> panam\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Hash-Anti-Join-performance-degradation-tp4420974p4429125.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 26 May 2011 20:13:22 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
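Cédric's random_page_cost suggestion can be tried per session before touching postgresql.conf; a rough sketch, where 2.0 is purely an illustrative value:

    SET random_page_cost = 2.0;  -- illustrative; tune to the real storage
    EXPLAIN ANALYZE
    SELECT b.id,
           (SELECT m1.id
            FROM message m1
            LEFT JOIN message m2
              ON m1.box_id = m2.box_id AND m1.id < m2.id
            WHERE m2.id IS NULL
              AND m1.box_id = b.id)
    FROM box b;
    RESET random_page_cost;

If the plan changes and gets faster, the postgresql.conf value can be lowered accordingly; if not, nothing else on the system has been disturbed.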
{
"msg_contents": "On 05/27/2011 02:13 AM, C�dric Villemain wrote:\n\n> I am not an hibernate expert, but I'll surprised if you can not drive\n> hibernate to do what you want.\n\nIf nothing else, you can do a native query in hand-written SQL through \nHibernate. ORMs are useful tools for some jobs, but it's good to be able \nto bypass them when needed too.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 27 May 2011 08:00:52 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "On Thu, May 26, 2011 at 8:33 AM, panam <[email protected]> wrote:\n> Any third party confirmation?\n\nYeah, it definitely looks like there is some kind of bug here. Or if\nnot a bug, then a very surprising feature. EXPLAIN ANALYZE outputs\nfrom your proposed test attached. Here's a unified diff of the two\noutputs:\n\n\n QUERY PLAN\n ----------------------------------------------------------------------------------------------------------------------------------------------------------\n- Seq Scan on box b (cost=0.00..3669095.76 rows=128 width=8) (actual\ntime=0.147..431517.693 rows=128 loops=1)\n+ Seq Scan on box b (cost=0.00..3669095.76 rows=128 width=8) (actual\ntime=0.047..6938.165 rows=128 loops=1)\n SubPlan 1\n- -> Hash Anti Join (cost=14742.77..28664.79 rows=19239 width=8)\n(actual time=2960.176..3370.425 rows=1 loops=128)\n+ -> Hash Anti Join (cost=14742.77..28664.79 rows=19239 width=8)\n(actual time=48.385..53.361 rows=1 loops=128)\n Hash Cond: (m1.box_id = m2.box_id)\n Join Filter: (m1.id < m2.id)\n- -> Bitmap Heap Scan on message m1 (cost=544.16..13696.88\nrows=28858 width=16) (actual time=2.320..6.204 rows=18487 loops=128)\n+ -> Bitmap Heap Scan on message m1 (cost=544.16..13696.88\nrows=28858 width=16) (actual time=1.928..5.502 rows=17875 loops=128)\n Recheck Cond: (box_id = b.id)\n- -> Bitmap Index Scan on \"message_box_Idx\"\n(cost=0.00..536.94 rows=28858 width=0) (actual time=2.251..2.251\nrows=18487 loops=128)\n+ -> Bitmap Index Scan on \"message_box_Idx\"\n(cost=0.00..536.94 rows=28858 width=0) (actual time=1.797..1.797\nrows=18487 loops=128)\n Index Cond: (box_id = b.id)\n- -> Hash (cost=13696.88..13696.88 rows=28858 width=16)\n(actual time=12.632..12.632 rows=19720 loops=120)\n- Buckets: 4096 Batches: 4 (originally 2) Memory Usage: 1787kB\n- -> Bitmap Heap Scan on message m2\n(cost=544.16..13696.88 rows=28858 width=16) (actual time=1.668..6.619\nrows=19720 loops=120)\n+ -> Hash (cost=13696.88..13696.88 rows=28858 width=16)\n(actual time=11.603..11.603 rows=20248 loops=113)\n+ Buckets: 4096 Batches: 4 (originally 2) Memory Usage: 1423kB\n+ -> Bitmap Heap Scan on message m2\n(cost=544.16..13696.88 rows=28858 width=16) (actual time=1.838..6.886\nrows=20248 loops=113)\n Recheck Cond: (box_id = b.id)\n- -> Bitmap Index Scan on \"message_box_Idx\"\n(cost=0.00..536.94 rows=28858 width=0) (actual time=1.602..1.602\nrows=19720 loops=120)\n+ -> Bitmap Index Scan on \"message_box_Idx\"\n(cost=0.00..536.94 rows=28858 width=0) (actual time=1.743..1.743\nrows=20903 loops=113)\n Index Cond: (box_id = b.id)\n- Total runtime: 431520.186 ms\n+ Total runtime: 6940.369 ms\n\nThat's pretty odd.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 31 May 2011 17:58:08 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hash Anti Join performance degradation"
},
{
"msg_contents": "2011/5/31 Robert Haas <[email protected]>:\n> On Thu, May 26, 2011 at 8:33 AM, panam <[email protected]> wrote:\n>> Any third party confirmation?\n>\n> Yeah, it definitely looks like there is some kind of bug here. Or if\n> not a bug, then a very surprising feature. EXPLAIN ANALYZE outputs\n> from your proposed test attached. Here's a unified diff of the two\n> outputs:\n>\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n> - Seq Scan on box b (cost=0.00..3669095.76 rows=128 width=8) (actual\n> time=0.147..431517.693 rows=128 loops=1)\n> + Seq Scan on box b (cost=0.00..3669095.76 rows=128 width=8) (actual\n> time=0.047..6938.165 rows=128 loops=1)\n> SubPlan 1\n> - -> Hash Anti Join (cost=14742.77..28664.79 rows=19239 width=8)\n> (actual time=2960.176..3370.425 rows=1 loops=128)\n> + -> Hash Anti Join (cost=14742.77..28664.79 rows=19239 width=8)\n> (actual time=48.385..53.361 rows=1 loops=128)\n> Hash Cond: (m1.box_id = m2.box_id)\n> Join Filter: (m1.id < m2.id)\n> - -> Bitmap Heap Scan on message m1 (cost=544.16..13696.88\n> rows=28858 width=16) (actual time=2.320..6.204 rows=18487 loops=128)\n> + -> Bitmap Heap Scan on message m1 (cost=544.16..13696.88\n> rows=28858 width=16) (actual time=1.928..5.502 rows=17875 loops=128)\n> Recheck Cond: (box_id = b.id)\n> - -> Bitmap Index Scan on \"message_box_Idx\"\n> (cost=0.00..536.94 rows=28858 width=0) (actual time=2.251..2.251\n> rows=18487 loops=128)\n> + -> Bitmap Index Scan on \"message_box_Idx\"\n> (cost=0.00..536.94 rows=28858 width=0) (actual time=1.797..1.797\n> rows=18487 loops=128)\n> Index Cond: (box_id = b.id)\n> - -> Hash (cost=13696.88..13696.88 rows=28858 width=16)\n> (actual time=12.632..12.632 rows=19720 loops=120)\n> - Buckets: 4096 Batches: 4 (originally 2) Memory Usage: 1787kB\n> - -> Bitmap Heap Scan on message m2\n> (cost=544.16..13696.88 rows=28858 width=16) (actual time=1.668..6.619\n> rows=19720 loops=120)\n> + -> Hash (cost=13696.88..13696.88 rows=28858 width=16)\n> (actual time=11.603..11.603 rows=20248 loops=113)\n> + Buckets: 4096 Batches: 4 (originally 2) Memory Usage: 1423kB\n> + -> Bitmap Heap Scan on message m2\n> (cost=544.16..13696.88 rows=28858 width=16) (actual time=1.838..6.886\n> rows=20248 loops=113)\n> Recheck Cond: (box_id = b.id)\n> - -> Bitmap Index Scan on \"message_box_Idx\"\n> (cost=0.00..536.94 rows=28858 width=0) (actual time=1.602..1.602\n> rows=19720 loops=120)\n> + -> Bitmap Index Scan on \"message_box_Idx\"\n> (cost=0.00..536.94 rows=28858 width=0) (actual time=1.743..1.743\n> rows=20903 loops=113)\n> Index Cond: (box_id = b.id)\n> - Total runtime: 431520.186 ms\n> + Total runtime: 6940.369 ms\n>\n> That's pretty odd.\n\nYes, while here I noticed that the query was long to be killed.\nI added a CHECK_FOR_INTERRUPT() in the for(;;) loop in nodeHashjoin.c.\nIt fixes the delay when trying to kill but I don't know about\nperformance impact this can have in this place of the code.\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support",
"msg_date": "Wed, 1 Jun 2011 02:43:05 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "On Tue, May 31, 2011 at 8:43 PM, Cédric Villemain\n<[email protected]> wrote:\n> Yes, while here I noticed that the query was long to be killed.\n> I added a CHECK_FOR_INTERRUPT() in the for(;;) loop in nodeHashjoin.c.\n> It fixes the delay when trying to kill but I don't know about\n> performance impact this can have in this place of the code.\n\nWell, seems easy enough to find out: just test the query with and\nwithout your patch (and without casserts). If there's no measurable\ndifference on this query, there probably won't be one anywhere.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 31 May 2011 20:55:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "2011/6/1 Robert Haas <[email protected]>:\n> On Tue, May 31, 2011 at 8:43 PM, Cédric Villemain\n> <[email protected]> wrote:\n>> Yes, while here I noticed that the query was long to be killed.\n>> I added a CHECK_FOR_INTERRUPT() in the for(;;) loop in nodeHashjoin.c.\n>> It fixes the delay when trying to kill but I don't know about\n>> performance impact this can have in this place of the code.\n>\n> Well, seems easy enough to find out: just test the query with and\n> without your patch (and without casserts). If there's no measurable\n> difference on this query, there probably won't be one anywhere\n\nOh damned, I am currently with an eeepc, I'll need 2 days to bench that :-D\nI'll see tomorow.\n.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Wed, 1 Jun 2011 03:11:16 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "On Tue, May 31, 2011 at 9:11 PM, Cédric Villemain\n<[email protected]> wrote:\n> Oh damned, I am currently with an eeepc, I'll need 2 days to bench that :-D\n> I'll see tomorow.\n\nLOL.\n\nWith respect to the root of the issue (why does the anti-join take so\nlong?), my first thought was that perhaps the OP was very unlucky and\nhad a lot of values that hashed to the same bucket. But that doesn't\nappear to be the case. There are 120 different box_id values in the\nmessage table, and running hashint8(box_id) % 16384 (which I think is\nthe right calculation: 4096 buckets * 4 batches) yields 120 different\nvalues.\n\nI think that part of the problem here is that the planner has no\nparticularly efficient way of executing a non-equijoin. Each probe\nfinds the appropriate hash bucket and must then probe the entire\nchain, all of which pass the hash condition and some of which fail the\njoin qual. So if there are n_1 instances of value v_1, n_2 instances\nof value v_2, etc. then the total effort is proportional to n_1^2 +\nn_2^2 + ...\n\nBut that doesn't really explain the problem because removing the last\nfew rows only changes that sum by a small amount.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 31 May 2011 22:58:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
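The per-value distribution Robert describes can be eyeballed with a query along these lines; it assumes box_id is a bigint (which his use of hashint8 implies) and simply mirrors his 4096 buckets * 4 batches calculation:

    SELECT hashint8(box_id) % 16384 AS bucket,
           count(*) AS rows_in_bucket,
           count(DISTINCT box_id) AS distinct_box_ids
    FROM message
    GROUP BY 1
    ORDER BY 2 DESC
    LIMIT 20;

If several heavily populated box_ids landed in the same bucket, the top rows would show distinct_box_ids greater than 1, which is the "unlucky hashing" scenario he ruled out.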
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> With respect to the root of the issue (why does the anti-join take so\n> long?), my first thought was that perhaps the OP was very unlucky and\n> had a lot of values that hashed to the same bucket. But that doesn't\n> appear to be the case.\n\nWell, yes it is. Notice what the subquery is doing: for each row in\n\"box\", it's pulling all matching \"box_id\"s from message and running a\nself-join across those rows. The hash join condition is a complete\nno-op. And some of the box_ids have hundreds of thousands of rows.\n\nI'd just write it off as being a particularly stupid way to find the\nmax(), except I'm not sure why deleting just a few thousand rows\nimproves things so much. It looks like it ought to be an O(N^2)\nsituation, so the improvement should be noticeable but not amazing.\n\nAnd if you force it to not use a hashjoin, suddenly things are better.\nNestloop should also be O(N^2) in this situation, but seemingly it\navoids whatever weird corner case we are hitting here.\n\nAs Cedric says, the lack of any CHECK_FOR_INTERRUPTS in this loop is\nalso problematic. I'm not sure that right there is an ideal place\nto put it, but we need one somewhere in the loop.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 May 2011 23:47:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation "
},
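Tom's "force it to not use a hashjoin" observation is easy to reproduce without rewriting the query, by switching the hash join path off for one session (the enable_* planner settings exist for exactly this kind of experiment and should not be left off in production):

    SET enable_hashjoin = off;
    EXPLAIN ANALYZE
    SELECT b.id,
           (SELECT m1.id
            FROM message m1
            LEFT JOIN message m2
              ON m1.box_id = m2.box_id AND m1.id < m2.id
            WHERE m2.id IS NULL
              AND m1.box_id = b.id)
    FROM box b;
    RESET enable_hashjoin;

Comparing this plan and timing against the unmodified run shows how much of the slowdown is specific to the hash anti join itself.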
{
"msg_contents": "On Tue, May 31, 2011 at 11:47 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> With respect to the root of the issue (why does the anti-join take so\n>> long?), my first thought was that perhaps the OP was very unlucky and\n>> had a lot of values that hashed to the same bucket. But that doesn't\n>> appear to be the case.\n>\n> Well, yes it is. Notice what the subquery is doing: for each row in\n> \"box\", it's pulling all matching \"box_id\"s from message and running a\n> self-join across those rows. The hash join condition is a complete\n> no-op. And some of the box_ids have hundreds of thousands of rows.\n>\n> I'd just write it off as being a particularly stupid way to find the\n> max(), except I'm not sure why deleting just a few thousand rows\n> improves things so much. It looks like it ought to be an O(N^2)\n> situation, so the improvement should be noticeable but not amazing.\n\nYeah, this is what I was getting at, though perhaps I didn't say it\nwell. If the last 78K rows were particularly pathological in some\nway, that might explain something, but as far as one can see they are\nnot a whole heck of a lot different from the rest of the data.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 1 Jun 2011 07:40:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "\nTom Lane-2 wrote:\n> \n> It looks like it ought to be an O(N^2)\n> situation, so the improvement should be noticeable but not amazing.\n> \n\nHm, the performance was reasonable again when doing a cluster...\nSo I believe this should be more a technical than an\nalgorithmical/complexity issue. Maybe it is the way the hashtable is built\nand that order makes a difference in that case? In short: Why is clustered\ndata not affected?\n\nRegards,\npanam\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Re-PERFORM-Hash-Anti-Join-performance-degradation-tp4443803p4445123.html\nSent from the PostgreSQL - hackers mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 1 Jun 2011 05:40:55 -0700 (PDT)",
"msg_from": "panam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, May 31, 2011 at 11:47 PM, Tom Lane <[email protected]> wrote:\n>> I'd just write it off as being a particularly stupid way to find the\n>> max(), except I'm not sure why deleting just a few thousand rows\n>> improves things so much. �It looks like it ought to be an O(N^2)\n>> situation, so the improvement should be noticeable but not amazing.\n\n> Yeah, this is what I was getting at, though perhaps I didn't say it\n> well. If the last 78K rows were particularly pathological in some\n> way, that might explain something, but as far as one can see they are\n> not a whole heck of a lot different from the rest of the data.\n\nWell, I wasted a lot more time on this than I should have, but the\nanswer is: it's a pathological data ordering.\n\nThe given data file loads the message rows approximately in \"id\" order,\nand in fact once you lop off the ones with id > 2550000, it's\nsufficiently in order that the highest id is also physically last in\nthe table, at least for all of the box_ids that have large numbers of\nentries. Now, the tested query plan loads the hash table like this:\n\n -> Hash (cost=13685.86..13685.86 rows=28511 width=16) (actual time=176.286..176.286 rows=211210 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage: 9901kB\n -> Bitmap Heap Scan on message m2 (cost=537.47..13685.86 rows=28511 width=16) (actual time=23.204..124.624 rows=211210 loops=1)\n Recheck Cond: (box_id = $1)\n -> Bitmap Index Scan on \"message_box_Idx\" (cost=0.00..530.34 rows=28511 width=0) (actual time=21.974..21.974 rows=211210 loops=1)\n Index Cond: (box_id = $1)\n\nBecause of the way that a bitmap heap scan works, the rows are\nguaranteed to be loaded into the hash table in physical order, which\nmeans (in the fast case) that the row with the largest \"id\" value gets\nloaded last. And because ExecHashTableInsert pushes each row onto the\nfront of its hash chain, that row ends up at the front of the hash\nchain. Net result: for all the outer rows that aren't the one with\nmaximum id, we get a joinqual match at the very first entry in the hash\nchain. Since it's an antijoin, we then reject that outer row and go\non to the next. The join thus ends up costing us only O(N) time.\n\nHowever, with the additional rows in place, there are a significant\nnumber of outer rows that don't get a match at the first hash chain\nentry, and we're spending more like O(N^2) time. I instrumented things\nfor the specific case of box_id = 69440, which is the most common\nbox_id, and got these results:\n\n 2389 got match of join quals at probe 208077\n 1 got match of join quals at probe 1\n 175 got match of join quals at probe 208077\n 273 got match of join quals at probe 1\n 21 got match of join quals at probe 208077\n 1 got match of join quals at probe 1\n 24 got match of join quals at probe 208077\n 6 got match of join quals at probe 1\n 157 got match of join quals at probe 208077\n 1 got match of join quals at probe 1\n 67 got match of join quals at probe 208077\n 18 got match of join quals at probe 1\n 1 generate null-extended tuple after 211211 probes\n 208075 got match of join quals at probe 1\n 1 got match of join quals at probe 208077\n\n(This is a \"uniq -c\" summary of a lot of printfs, so the first column\nis the number of consecutive occurrences of the same printout.) 
Even\nthough a large majority of the outer rows still manage to find a match\nat the first probe, there remain about 2800 that don't match there,\nbecause they've got pretty big ids, and so they traipse through the hash\nchain until they find the genuinely largest id, which is unfortunately\nway down there ---- the 208077'th chain entry in fact. That results in\nabout half a billion more ExecScanHashBucket and ExecQual calls than\noccur in the easy case (and that's just for this one box_id).\n\nSo it's not that the full data set is pathologically bad, it's that the\nreduced set is pathologically good. O(N^2) performance is what you\nshould expect for this query, and that's what you're actually getting\nwith the full data set.\n\nAlso, I noted earlier that performance seemed a good deal better with a\nNestLoop plan. The reason for that is that NestLoop doesn't have the\nreversal of inner row ordering that's caused by prepending entries to\nthe hash chain, so the very largest row id isn't 208077 entries into the\nlist for it, but only 211211-208077 = 3134 entries in. But it still\nmanages to eliminate most outer rows at the first probe, because there's\na fairly large value at that end of the dataset too.\n\nI don't see anything much that we could or should do about this. It\njust depends on the order in which things appear in the hash chain,\nand even if we fooled around with that ordering, we'd only be moving\nthe improbably-lucky behavior from one case to some other case.\n\nWe do need to look into putting a CHECK_FOR_INTERRUPTS call in here\nsomewhere, though. I'm inclined to think that right before the\nExecScanHashBucket is the best place. The reason that nest and merge\njoins don't show a comparable non-responsiveness to cancels is that they\nalways call a child plan node at the equivalent place, and ExecProcNode\nhas got a CHECK_FOR_INTERRUPTS. So we ought to check for interrupts\nat the point of \"fetching a tuple from the inner child plan\", and\nExecScanHashBucket is the equivalent thing in this logic. Cedric's\nsuggestion of putting it before the switch would get the job done, but\nit would result in wasting cycles during unimportant transitions from\none state machine state to another.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Jun 2011 16:25:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation "
},
{
"msg_contents": "On Wed, Jun 1, 2011 at 4:25 PM, Tom Lane <[email protected]> wrote:\n> Because of the way that a bitmap heap scan works, the rows are\n> guaranteed to be loaded into the hash table in physical order, which\n> means (in the fast case) that the row with the largest \"id\" value gets\n> loaded last. And because ExecHashTableInsert pushes each row onto the\n> front of its hash chain, that row ends up at the front of the hash\n> chain. Net result: for all the outer rows that aren't the one with\n> maximum id, we get a joinqual match at the very first entry in the hash\n> chain. Since it's an antijoin, we then reject that outer row and go\n> on to the next. The join thus ends up costing us only O(N) time.\n\nAh! Make sense. If I'm reading your explanation right, this means\nthat we could have hit a similar pathological case on a nestloop as\nwell, just with a data ordering that is the reverse of what we have\nhere?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 1 Jun 2011 16:31:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "2011/6/1 Tom Lane <[email protected]>:\n> We do need to look into putting a CHECK_FOR_INTERRUPTS call in here\n> somewhere, though. I'm inclined to think that right before the\n> ExecScanHashBucket is the best place. The reason that nest and merge\n> joins don't show a comparable non-responsiveness to cancels is that they\n> always call a child plan node at the equivalent place, and ExecProcNode\n> has got a CHECK_FOR_INTERRUPTS. So we ought to check for interrupts\n> at the point of \"fetching a tuple from the inner child plan\", and\n> ExecScanHashBucket is the equivalent thing in this logic. Cedric's\n> suggestion of putting it before the switch would get the job done, but\n> it would result in wasting cycles during unimportant transitions from\n> one state machine state to another.\n\n\nexact, thanks to your last email I read more the code and get the same\nconclusion and put it in a more appropriate place : before\nExecScanHashBucket.\n\nI was about sending it, so it is attached.\n\n\n\n>\n> regards, tom lane\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support",
"msg_date": "Wed, 1 Jun 2011 22:34:38 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Jun 1, 2011 at 4:25 PM, Tom Lane <[email protected]> wrote:\n>> Because of the way that a bitmap heap scan works, the rows are\n>> guaranteed to be loaded into the hash table in physical order, which\n>> means (in the fast case) that the row with the largest \"id\" value gets\n>> loaded last. �And because ExecHashTableInsert pushes each row onto the\n>> front of its hash chain, that row ends up at the front of the hash\n>> chain. �Net result: for all the outer rows that aren't the one with\n>> maximum id, we get a joinqual match at the very first entry in the hash\n>> chain. �Since it's an antijoin, we then reject that outer row and go\n>> on to the next. �The join thus ends up costing us only O(N) time.\n\n> Ah! Make sense. If I'm reading your explanation right, this means\n> that we could have hit a similar pathological case on a nestloop as\n> well, just with a data ordering that is the reverse of what we have\n> here?\n\nYeah. It's just chance that this particular data set, with this\nparticular ordering, happens to work well with a nestloop version\nof the query. On average I'd expect nestloop to suck even more,\nbecause of more per-inner-tuple overhead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Jun 2011 16:35:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation "
},
{
"msg_contents": "On Wed, Jun 1, 2011 at 4:35 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Wed, Jun 1, 2011 at 4:25 PM, Tom Lane <[email protected]> wrote:\n>>> Because of the way that a bitmap heap scan works, the rows are\n>>> guaranteed to be loaded into the hash table in physical order, which\n>>> means (in the fast case) that the row with the largest \"id\" value gets\n>>> loaded last. And because ExecHashTableInsert pushes each row onto the\n>>> front of its hash chain, that row ends up at the front of the hash\n>>> chain. Net result: for all the outer rows that aren't the one with\n>>> maximum id, we get a joinqual match at the very first entry in the hash\n>>> chain. Since it's an antijoin, we then reject that outer row and go\n>>> on to the next. The join thus ends up costing us only O(N) time.\n>\n>> Ah! Make sense. If I'm reading your explanation right, this means\n>> that we could have hit a similar pathological case on a nestloop as\n>> well, just with a data ordering that is the reverse of what we have\n>> here?\n>\n> Yeah. It's just chance that this particular data set, with this\n> particular ordering, happens to work well with a nestloop version\n> of the query. On average I'd expect nestloop to suck even more,\n> because of more per-inner-tuple overhead.\n\nI guess the real issue here is that m1.id < m2.id has to be evaluated\nas a filter condition rather than a join qual. That tends to perform\npoorly in general, which is why rewriting this using min() or max() or\nORDER BY .. LIMIT 1 was elsewhere recommended. I've occasionally had\ncause to join on something other than equality in cases not\nsusceptible to such rewriting, so it would be neat to improve this\ncase, but it's not likely to make it to the top of my list any time\nsoon.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 1 Jun 2011 16:43:29 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> I guess the real issue here is that m1.id < m2.id has to be evaluated\n> as a filter condition rather than a join qual.\n\nWell, if you can invent an optimized join technique that works for\ninequalities, go for it ... but I think you should get at least a\nPhD thesis out of that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Jun 2011 16:47:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation "
},
{
"msg_contents": "On Wed, Jun 1, 2011 at 4:47 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> I guess the real issue here is that m1.id < m2.id has to be evaluated\n>> as a filter condition rather than a join qual.\n>\n> Well, if you can invent an optimized join technique that works for\n> inequalities, go for it ... but I think you should get at least a\n> PhD thesis out of that.\n\nSounds good, except that so far NOT getting a PhD seems like a much\nbetter financial prospect. :-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 1 Jun 2011 16:58:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]> writes:\n> exact, thanks to your last email I read more the code and get the same\n> conclusion and put it in a more appropriate place : before\n> ExecScanHashBucket.\n\n> I was about sending it, so it is attached.\n\nApplied with cosmetic adjustments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Jun 2011 17:02:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation "
},
{
"msg_contents": "On Wed, Jun 01, 2011 at 04:58:36PM -0400, Robert Haas wrote:\n> On Wed, Jun 1, 2011 at 4:47 PM, Tom Lane <[email protected]> wrote:\n> > Robert Haas <[email protected]> writes:\n> >> I guess the real issue here is that m1.id < m2.id has to be evaluated\n> >> as a filter condition rather than a join qual.\n> >\n> > Well, if you can invent an optimized join technique that works for\n> > inequalities, go for it ... but I think you should get at least a\n> > PhD thesis out of that.\n> \n> Sounds good, except that so far NOT getting a PhD seems like a much\n> better financial prospect. :-)\n\nYeah, last time I heard of some Uni being so impressed by independent\nwork that they just sort of handed out a Ph.D. it involved a Swiss\npatent clerk ...\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nConnexions http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n",
"msg_date": "Wed, 1 Jun 2011 16:53:15 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
},
{
"msg_contents": "I'd like to thank you all for getting this analyzed, especially Tom!\nYour rigor is pretty impressive. Seems like otherwise it'd impossible to\nmaintain a DBS, though.\nIn the end, I know a lot more of postgres internals and that this\nidiosyncrasy (from a user perspective) could happen again. I guess it is my\nfirst time where I actually encountered an unexpected worst case scenario\nlike this...\nSeems it is up to me know to be a bit more creative with query optimzation.\nAnd in the end, it'll turn out to require an architectural change...\nAs the only thing to achieve is in fact to obtain the last id (currently\nstill with the constraint that it has to happen in an isolated subquery), i\nwonder whether this requirement (obtaining the last id) is worth a special\ntechnique/instrumentation/strategy ( lacking a good word here), given the\nfact that this data has a full logical ordering (in this case even total)\nand the use case is quite common I guess.\n\nSome ideas from an earlier post:\n\npanam wrote:\n> \n> ...\n> This also made me wonder how the internal plan is carried out. Is the\n> engine able to leverage the fact that a part/range of the rows [\"/index\n> entries\"] is totally or partially ordered on disk, e.g. using some kind of\n> binary search or even \"nearest neighbor\"-search in that section (i.e. a\n> special \"micro-plan\" or algorithm)? Or is the speed-up \"just\" because\n> related data is usually \"nearby\" and most of the standard algorithms work\n> best with clustered data?\n> If the first is not the case, would that be a potential point for\n> improvement? Maybe it would even be more efficient, if there were some\n> sort of constraints that guarantee \"ordered row\" sections on the disk,\n> i.e. preventing the addition of a row that had an index value in between\n> two row values of an already ordered/clustered section. In the simplest\n> case, it would start with the \"first\" row and end with the \"last\" row (on\n> the time of doing the equivalent of \"cluster\"). So there would be a small\n> list saying rows with id x - rows with id y are guaranteed to be ordered\n> on disk (by id for example) now and for all times.\n> \n \nMaybe I am completely off the mark but what's your conclusion? To much\neffort for small scenarios? Nothing that should be handled on a DB level? A\ntry to battle the laws of thermodynamics with small technical dodge?\n\nThanks again\npanam\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Re-PERFORM-Hash-Anti-Join-performance-degradation-tp4443803p4446629.html\nSent from the PostgreSQL - hackers mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 1 Jun 2011 15:49:31 -0700 (PDT)",
"msg_from": "panam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Hash Anti Join performance degradation"
}
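When the goal really is just the latest id per group, a plain aggregate (or DISTINCT ON) usually avoids the pairwise m1.id < m2.id comparison entirely. A minimal sketch, assuming a hypothetical messages(box_id, id) table with an index on (box_id, id):

-- latest id per box via an aggregate
SELECT box_id, max(id) AS last_id
FROM messages
GROUP BY box_id;

-- or, when the whole row is wanted, DISTINCT ON with a matching sort
SELECT DISTINCT ON (box_id) *
FROM messages
ORDER BY box_id, id DESC;

With that index in place the planner can answer either form without comparing every pair of rows, which is where the anti-join filter condition hurts.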
] |
[
{
"msg_contents": "\n\nTomas Vondra <[email protected]> wrote:\n\n>Dne 24.5.2011 07:24, Terry Schmitt napsal(a):\n>> As near as I can tell from your test configuration description, you have\n>> JMeter --> J2EE --> Postgres.\n>> Have you ruled out the J2EE server as the problem? This problem may not\n>> be the database.\n>> I would take a look at your app server's health and look for any\n>> potential issues there before spending too much time on the database.\n>> Perhaps there are memory issues or excessive garbage collection on the\n>> app server?\n>\n>It might be part of the problem, yes, but it's just a guess. We need to\n>se some iostat / iotop / vmstat output to confirm that.\n>\n>The probable cause here is that the indexes grow with the table, get\n>deeper, so when you insert a new row you need to modify more and more\n>pages. That's why the number of buffers grows over time and the\n>checkpoint takes more and more time (the average write speed is about 15\n>MB/s - not sure if that's good or bad performance).\n>\n>The question is whether this is influenced by other activity (Java GC or\n>something)\n>\n>I see three ways to improve the checkpoint performance:\n>\n> 1) set checkpoint_completion_target = 0.9 or something like that\n> (this should spread the checkpoint, but it also increases the\n> amount of checkpoint segments to keep)\n>\n> 2) make the background writer more aggressive (tune the bgwriter_*\n> variables), this is similar to (1)\n>\n> 3) improve the write performance (not sure how random the I/O is in\n> this case, but a decent controller with a cache might help)\n>\n>and then two ways to decrease the index overhead / amount of modified\n>buffers\n>\n> 1) keep only the really necessary indexes (remove duplicate, indexes,\n> remove indexes where another index already performs reasonably,\n> etc.)\n>\n> 2) partition the table (so that only indexes on the current partition\n> will be modified, and those will be more shallow)\n>\n>Tomas\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 May 2011 19:01:06 -0700",
"msg_from": "Santhakumaran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation of inserts when database size\n grows"
}
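A quick way to confirm that checkpoints and backend writes really are behind the degradation is to sample pg_stat_bgwriter (available since 8.3) at the start and end of a test interval; this is only a diagnostic sketch, not a fix:

SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend, buffers_alloc
FROM pg_stat_bgwriter;

If buffers_checkpoint climbs sharply between samples and checkpoints_req outpaces checkpoints_timed, the checkpoint_completion_target and bgwriter suggestions above are the right place to start.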
] |
[
{
"msg_contents": "Does anyone here have any bad experiences with the RAID card in subject ?\nThis is in an IBM server, with 2.5\" 10k drives.\n\nBut we seem to observe its poor performance in other configurations as\nwell (with different drives, different settings) in comparison with -\nsay, what dell provides.\n\n\nAny experiences ?\n\n-- \nGJ\n",
"msg_date": "Wed, 25 May 2011 08:33:52 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "serveRAID M5014 SAS"
},
{
"msg_contents": "Grzegorz Jaśkiewicz wrote:\n> Does anyone here have any bad experiences with the RAID card in subject ?\n> This is in an IBM server, with 2.5\" 10k drives.\n>\n> But we seem to observe its poor performance in other configurations as\n> well (with different drives, different settings) in comparison with -\n> say, what dell provides.\n> \n\nOlder ServeRAID cards have never been reported as very fast. They were \nan OK controller if you just want to mirror a pair of drives or \nsomething simple like that. Their performance on larger RAID arrays is \nterrible compared to the LSI products that Dell uses.\n\nHowever, the M5014 *is* an LSI derived product, with a proper \nbattery-backed write cache and everything. I'm not sure if older views \nare even useful now. Few things to check:\n\n-Is the battery working, and the write cache set in write-back mode?\n-Has read-ahead been set usefully?\n-Did you try to use a more complicated RAID mode than this card can handle?\n\nThose are the three easiest ways to trash performance here.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 25 May 2011 10:00:49 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n \n> -Is the battery working, and the write cache set in write-back\n> mode?\n \nMy bet is on this point.\n \nOur hardware tech says that the difference between an M5014 and an\nM5015 is that the former takes a maximum of 256MB RAM while the\nlatter takes a maximum of 512MB RAM and that the M5014 ships with\n*no* RAM by default. He says you have to order the RAM as an\nextra-cost option on the M5014.\n \n-Kevin\n",
"msg_date": "Wed, 25 May 2011 09:54:00 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "On 25/05/11 19:33, Grzegorz Jaśkiewicz wrote:\n> Does anyone here have any bad experiences with the RAID card in subject ?\n> This is in an IBM server, with 2.5\" 10k drives.\n>\n> But we seem to observe its poor performance in other configurations as\n> well (with different drives, different settings) in comparison with -\n> say, what dell provides.\n>\n>\n>\n\nInterestingly enough, I've been benchmarking a M5015 SAS, with the \noptional wee cable for enabling the battery backup for the 512MB of \ncache. With a 6 disk raid 10 + 2 disk raid 1 - with the array settings \nNORA+DIRECT, and writeback enabled we're seeing quite good pgbench \nperformance (12 cores + 48G ram, Ubuntu 10.04 with xfs):\n\nscale 2500 db with 48 clients, 10 minute runs: 2300 tps\nscale 500 db with 24 clients, 10 minute runs: 6050 tps\n\nI did notice that the sequential performance was quite lackluster (using \nbonnie) - but are not too concerned about that for the use case (could \nprobably fix using blockdev --setra).\n\nI'm guessing that even tho your M5014 card comes with less ram (256M I \nthink), if you can enable the battery backup and cache writes it should \nbe quite good. Also I think the amount of ram on the card is upgradable \n(4G is the max for the M5105 I *think* - can't find the right doc to \ncheck this ATM sorry).\n\nCheers\n\nMark\n\n\n",
"msg_date": "Thu, 26 May 2011 10:24:59 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "On 26/05/11 10:24, Mark Kirkwood wrote:\n>\n>\n> Also I think the amount of ram on the card is upgradable (4G is the \n> max for the M5105 I *think* - can't find the right doc to check this \n> ATM sorry).\n>\n>\n\nLooking at the (very sparse) product docs, it looks I am mistaken above \n- and that the cache sizes are 256M for M5014 and 512M for M5015 and are \nnot upgradable beyond that. Looking at Kevin's post, I recommend \nchecking if you ordered the cache and battery with your card.\n\nCheers\n\nMark\n",
"msg_date": "Thu, 26 May 2011 10:42:15 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "The card is configured in 1+0 . with 128k stripe afaik (I'm a\ndeveloper, we don't have hardware guys here).\nAre you's sure about the lack of cache by default on the card ? I\nthought the difference is that 5104 has 256, and 5105 has 512 ram\nalready on it.\n",
"msg_date": "Thu, 26 May 2011 09:11:18 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "On 26/05/11 20:11, Grzegorz Jaśkiewicz wrote:\n> The card is configured in 1+0 . with 128k stripe afaik (I'm a\n> developer, we don't have hardware guys here).\n> Are you's sure about the lack of cache by default on the card ? I\n> thought the difference is that 5104 has 256, and 5105 has 512 ram\n> already on it.\n\nNo, I'm not sure about what the default is for the M5014 - I'd recommend \nchecking this with your supplier (or looking at the invoice if you can \nget it). My *feeling* is that you may have 256M cache but no battery kit \n- as this is an optional part - so the the card will not got into \nwriteback mode if that is the case.\n\nFWIW - we got our best (pgbench) results with 256K stripe, No (card) \nreadahead and hyperthreading off on the host.\n\nYou can interrogate the config of the card and the raid 10 array using \nthe megaraid cli package - you need to read the (frankly terrible) \nmanual to discover which switches to use to determine battery and cache \nstatus etc. If you email me privately I'll get you a link to the \nrelevant docs!\n\nCheers\n\nMark\n",
"msg_date": "Thu, 26 May 2011 20:25:27 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "Would HT have any impact to the I/O performance (postgresql, and fs in\ngeneral) ?.\n",
"msg_date": "Thu, 26 May 2011 09:31:05 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "On 26/05/11 20:31, Grzegorz Jaśkiewicz wrote:\n> Would HT have any impact to the I/O performance (postgresql, and fs in\n> general) ?.\n>\n\nThere have been previous discussions on this list about HT on vs off (I \ncan't recall what the consensus, if any about what the cause of any \nperformance difference was). In our case HT off gave us much better \nresults for what we think the typical number of clients will be - see \nattached (server learn-db1 is setup with trivial hardware raid and then \nsoftware raided via md, learn-db2 has its raid all in hardware. We've \nended up going with the latter setup).\n\nNote that the highest tps on the graph is about 2100 - we got this upto \njust over 2300 by changing from ext4 to xfs in later tests, and managed \nto push the tps for 100 clients up a little by setting no read ahead \n(NORA) for the arrays.\n\nCheers\n\nMark",
"msg_date": "Fri, 27 May 2011 10:43:25 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "Mark Kirkwood wrote:\n> You can interrogate the config of the card and the raid 10 array using \n> the megaraid cli package - you need to read the (frankly terrible) \n> manual to discover which switches to use to determine battery and \n> cache status etc. If you email me privately I'll get you a link to the \n> relevant docs!\n\nThat's assuming the MegaCli utility will work against IBM's version of \nthe card. They use an LSI chipset for the RAID parts, but I don't know \nif the card is so similar that it will talk using that utility or not.\n\nThe main useful site here is \nhttp://tools.rapidsoft.de/perc/perc-cheat-sheet.html ; here's how to \ndump all the main config info from an LSI card:\n\nMegaCli64 -LDInfo -Lall -aALL\n\nYou want to see a line like this:\n\n Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache \nif Bad BBU\n\nFor the arrays. And then check the battery like this:\n\nMegaCli64 -AdpBbuCmd -aALL\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 26 May 2011 19:19:27 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "On 27/05/11 11:19, Greg Smith wrote:\n> Mark Kirkwood wrote:\n>> You can interrogate the config of the card and the raid 10 array \n>> using the megaraid cli package - you need to read the (frankly \n>> terrible) manual to discover which switches to use to determine \n>> battery and cache status etc. If you email me privately I'll get you \n>> a link to the relevant docs!\n>\n> That's assuming the MegaCli utility will work against IBM's version of \n> the card. They use an LSI chipset for the RAID parts, but I don't \n> know if the card is so similar that it will talk using that utility or \n> not.\n\nIt does seem to.\n\nCheers\n\nMark\n",
"msg_date": "Fri, 27 May 2011 11:22:55 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "On 27/05/11 11:22, Mark Kirkwood wrote:\n> On 27/05/11 11:19, Greg Smith wrote:\n>> Mark Kirkwood wrote:\n>>> You can interrogate the config of the card and the raid 10 array \n>>> using the megaraid cli package - you need to read the (frankly \n>>> terrible) manual to discover which switches to use to determine \n>>> battery and cache status etc. If you email me privately I'll get you \n>>> a link to the relevant docs!\n>>\n>> That's assuming the MegaCli utility will work against IBM's version \n>> of the card. They use an LSI chipset for the RAID parts, but I don't \n>> know if the card is so similar that it will talk using that utility \n>> or not.\n>\n> It does seem to.\n>\n> Cheers\n>\n> Mark\n>\n\ne.g checking battery status:\n\nroot@learn-db2:~# MegaCli64 -AdpBbuCmd -GetBbuStatus -a0\n\nBBU status for Adapter: 0\n\nBatteryType: iBBU\nVoltage: 4040 mV\nCurrent: 0 mA\nTemperature: 28 C\n\nBBU Firmware Status:\n\n Charging Status : None\n Voltage : OK\n Temperature : OK\n Learn Cycle Requested\t : No\n Learn Cycle Active : No\n Learn Cycle Status : OK\n Learn Cycle Timeout : No\n I2c Errors Detected : No\n Battery Pack Missing : No\n Battery Replacement required : No\n Remaining Capacity Low : No\n Periodic Learn Required : No\n Transparent Learn : No\n\nBattery state:\n\nGasGuageStatus:\n Fully Discharged : No\n Fully Charged : Yes\n Discharging : Yes\n Initialized : Yes\n Remaining Time Alarm : No\n Remaining Capacity Alarm: No\n Discharge Terminated : No\n Over Temperature : No\n Charging Terminated : No\n Over Charged : No\n\nRelative State of Charge: 99 %\nCharger System State: 49168\nCharger System Ctrl: 0\nCharging current: 0 mA\nAbsolute state of charge: 99 %\nMax Error: 2 %\n\nExit Code: 0x00\n\n\nReminds me of out from DB2 diag commands (years ago...am ok now).\n",
"msg_date": "Fri, 27 May 2011 11:42:56 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> wrote:\n \n> Battery Pack Missing : No\n \n> Fully Charged : Yes\n \n> Initialized : Yes\n \nI'm not familiar with that output (I leave that to the hardware\nguys), but it sure looks like there's a battery there. The one\nthing I didn't see is whether it's configured for write-through or\nwrite-back. You want write-back for good performance.\n \n-Kevin\n",
"msg_date": "Fri, 27 May 2011 09:38:24 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "On 28/05/11 02:38, Kevin Grittner wrote:\n> Mark Kirkwood<[email protected]> wrote:\n>\n>> Battery Pack Missing : No\n>> Fully Charged : Yes\n>> Initialized : Yes\n>\n> I'm not familiar with that output (I leave that to the hardware\n> guys), but it sure looks like there's a battery there. The one\n> thing I didn't see is whether it's configured for write-through or\n> write-back. You want write-back for good performance.\n>\n>\n\nSorry for the confusion Kevin - that's the output for *our* M5015 with a \nbattery - what we need to see is the output for\nGrzegorz's M5014. Grzegorz - can you get that?\n\nCheers\n\nMark\n",
"msg_date": "Sat, 28 May 2011 11:24:43 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "Yeah, I got it Mark.\n\nUnfortunately my current management decided to suspend that\ninvestigation for a while, so I can't get any tests done or anything\nlike that.\n\nHowever we found out that another server we have shows similar issues.\nThe same card, slightly different motherboard and completely different\ndisks.\nThe basic issue is around concurrent reads and writes. The card is\nok-ish when one process hammers the disks, but as soon as it is\nmultiple ones - it just blows (and I use it as a technical term ;) ).\nStill, with only one process reading/writing - the raid card sucks big\ntime performance wise.\n",
"msg_date": "Sat, 28 May 2011 10:42:54 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: serveRAID M5014 SAS"
},
{
"msg_contents": "On 28/05/11 21:42, Grzegorz Jaśkiewicz wrote:\n> Yeah, I got it Mark.\n>\n> Unfortunately my current management decided to suspend that\n> investigation for a while, so I can't get any tests done or anything\n> like that.\n>\n> However we found out that another server we have shows similar issues.\n> The same card, slightly different motherboard and completely different\n> disks.\n> The basic issue is around concurrent reads and writes. The card is\n> ok-ish when one process hammers the disks, but as soon as it is\n> multiple ones - it just blows (and I use it as a technical term ;) ).\n> Still, with only one process reading/writing - the raid card sucks big\n> time performance wise.\n>\n\nSorry, Grzegorz I didn't mean to suggest you were not listening, I meant \nto ask if you could run the megaraid cli command to see if it showed up \na battery!\n\nIf/when your management decide to let you look at this again, I'll be \nhappy to help if I can!\n\nBest wishes\n\nMark\n",
"msg_date": "Sat, 28 May 2011 22:15:15 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: serveRAID M5014 SAS"
}
] |
[
{
"msg_contents": "Hi, everyone. I'm working on a project that's using PostgreSQL 8.3, \nthat requires me to translate strings of octal digits into strings of \ncharacters -- so '141142143' should become 'abc', although the database \ncolumn containing this data (both before and after) is a bytea.\n\n\nWhile the function I've written is accurate, it turns out that it's also \nridiculously slow. I've managed to speed it up a fair amount, to twice \nwhat it was previously doing, by folding a helper function into a main \none, and appending to an array (which I then join into a string at the \nend of the function) instead of concatenating a string onto itself time \nafter time.\n\n\nI realize that pl/pgsql is not a good choice for doing this sort of \ntask, and that another language -- say, one with direct support for \noctal digits, or with built-in, speedy array functions such as pop() and \npush() -- would be a better choice. But that's not an option at this \npoint.\n\n\nI should also note that I'm not manipulating a huge amount of data \nhere. We're talking about 300 or so rows, each of which contains about \n250 KB of data. (Hmm, could the problem be that I'm constantly forcing \nthe system to compress and uncompress the data in TOAST? I hadn't \nthought of that until just now...)\n\n\nI thus have two basic questions:\n\n\n(1) Are there any good guidelines for what operations in pl/pgsql are \noptimized for which data structures? For example, it turns out that a \ngreat deal of time is being spent in the substring() function, which \nsurprised me. I thought that by switching to an array, it might be \nfaster, but that wasn't the case, at least in my tests. Having a sense \nof what I should and shouldn't be trying, and which built-in functions \nare particularly fast or slow, would be useful to know.\n\n\n(2) Is there any configuration setting that would (perhaps) speed things \nup a bit? I thought that maybe work_mem would help, but the \ndocumentation didn't indicate this at all, and sure enough, nothing \nreally changed when I increased it.\n\n\nOf course, any suggestions for how to deal with octal digits in \nPostgreSQL 8.3, such as an octal equivalent to the x'ff' syntax, would \nbe more than welcome.\n\n\nThanks in advance,\n\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n\n",
"msg_date": "Wed, 25 May 2011 19:59:58 +0300",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On Wed, May 25, 2011 at 10:59, Reuven M. Lerner <[email protected]> wrote:\n> Hi, everyone. I'm working on a project that's using PostgreSQL 8.3, that\n> requires me to translate strings of octal digits into strings of characters\n> -- so '141142143' should become 'abc', although the database column\n> containing this data (both before and after) is a bytea.\n\nHave you tried something like:\nSELECT encode(regexp_replace('141142143', '(\\d{3})', '\\\\\\1',\n'g')::bytea, 'escape');\n\n> ...\n> Of course, any suggestions for how to deal with octal digits in PostgreSQL\n> 8.3, such as an octal equivalent to the x'ff' syntax, would be more than\n> welcome.\n\nI think select E'\\XXX' is what you are looking for (per the fine\nmanual: http://www.postgresql.org/docs/current/static/datatype-binary.html)\n",
"msg_date": "Wed, 25 May 2011 12:35:08 -0600",
"msg_from": "Alex Hunsaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "Hi, Alex. You wrote:\n> Have you tried something like:\n> SELECT encode(regexp_replace('141142143', '(\\d{3})', '\\\\\\1',\n> 'g')::bytea, 'escape');\nHmm, forgot about regexp_replace. It might do the trick, but without a \nfull-blown eval that I can run on the replacement side, it'll be a bit \nmore challenging. But that's a good direction to consider, for sure.\n\n> I think select E'\\XXX' is what you are looking for (per the fine\n> manual: http://www.postgresql.org/docs/current/static/datatype-binary.html)\nI didn't think that I could (easily) build a string like that from \ndigits in a variable or a column, but I'll poke around and see if it can \nwork.\n\nThanks,\n\nReuven\n\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n\n",
"msg_date": "Wed, 25 May 2011 21:45:43 +0300",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On Wed, May 25, 2011 at 12:45, Reuven M. Lerner <[email protected]> wrote:\n> Hi, Alex. You wrote:\n\n>> I think select E'\\XXX' is what you are looking for (per the fine\n>> manual:\n>> http://www.postgresql.org/docs/current/static/datatype-binary.html)\n>\n> I didn't think that I could (easily) build a string like that from digits in\n> a variable or a column, but I'll poke around and see if it can work.\n\nWell, if you build '\\XXX' you can call escape(..., 'escape') on it\nlike I did with the regex above.\n",
"msg_date": "Wed, 25 May 2011 12:48:43 -0600",
"msg_from": "Alex Hunsaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "Hello\n\n>\n> (1) Are there any good guidelines for what operations in pl/pgsql are\n> optimized for which data structures? For example, it turns out that a great\n> deal of time is being spent in the substring() function, which surprised me.\n> I thought that by switching to an array, it might be faster, but that\n> wasn't the case, at least in my tests. Having a sense of what I should and\n> shouldn't be trying, and which built-in functions are particularly fast or\n> slow, would be useful to know.\n>\n\nPL/pgSQL is perfect like glue for SQL. For all other isn't good\n\nhttp://okbob.blogspot.com/2010/04/frequent-mistakes-in-plpgsql-design.html\nhttp://www.pgsql.cz/index.php/PL/pgSQL_%28en%29#When_PL.2FpgSQL_is_not_applicable\n\n>\n> (2) Is there any configuration setting that would (perhaps) speed things up\n> a bit? I thought that maybe work_mem would help, but the documentation\n> didn't indicate this at all, and sure enough, nothing really changed when I\n> increased it.\n>\n>\n\nprobably not\n\nJust PL/pgSQL is not C, and you cannot do some heavy string or array operations.\n\nRegards\n\nPavel Stehule\n",
"msg_date": "Wed, 25 May 2011 21:02:07 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On Wed, May 25, 2011 at 11:59 AM, Reuven M. Lerner <[email protected]> wrote:\n> Hi, everyone. I'm working on a project that's using PostgreSQL 8.3, that\n> requires me to translate strings of octal digits into strings of characters\n> -- so '141142143' should become 'abc', although the database column\n> containing this data (both before and after) is a bytea.\n>\n>\n> While the function I've written is accurate, it turns out that it's also\n> ridiculously slow. I've managed to speed it up a fair amount, to twice what\n> it was previously doing, by folding a helper function into a main one, and\n> appending to an array (which I then join into a string at the end of the\n> function) instead of concatenating a string onto itself time after time.\n>\n>\n> I realize that pl/pgsql is not a good choice for doing this sort of task,\n> and that another language -- say, one with direct support for octal digits,\n> or with built-in, speedy array functions such as pop() and push() -- would\n> be a better choice. But that's not an option at this point.\n>\n>\n> I should also note that I'm not manipulating a huge amount of data here.\n> We're talking about 300 or so rows, each of which contains about 250 KB of\n> data. (Hmm, could the problem be that I'm constantly forcing the system to\n> compress and uncompress the data in TOAST? I hadn't thought of that until\n> just now...)\n>\n>\n> I thus have two basic questions:\n>\n>\n> (1) Are there any good guidelines for what operations in pl/pgsql are\n> optimized for which data structures? For example, it turns out that a great\n> deal of time is being spent in the substring() function, which surprised me.\n> I thought that by switching to an array, it might be faster, but that\n> wasn't the case, at least in my tests. Having a sense of what I should and\n> shouldn't be trying, and which built-in functions are particularly fast or\n> slow, would be useful to know.\n>\n>\n> (2) Is there any configuration setting that would (perhaps) speed things up\n> a bit? I thought that maybe work_mem would help, but the documentation\n> didn't indicate this at all, and sure enough, nothing really changed when I\n> increased it.\n>\n>\n> Of course, any suggestions for how to deal with octal digits in PostgreSQL\n> 8.3, such as an octal equivalent to the x'ff' syntax, would be more than\n> welcome.\n\nlet's see the source. I bet we can get this figured out.\n\nmerlin\n",
"msg_date": "Wed, 25 May 2011 15:14:42 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "Hi, everyone. Merlin wrote:\n\n> let's see the source. I bet we can get this figured out.\n\nHere you go... it looked nicer before I started to make optimizations; \nI've gotten it to run about 2x as fast as the previous version, but now \nI'm sorta stuck, looking for further optimizations, including possible \nuse of builtin functions.\n\nThanks for any suggestions you can offer.\n\nCREATE OR REPLACE FUNCTION translate_octals_into_decimals(bytea_string \nBYTEA) RETURNS BYTEA AS $$\nDECLARE\n bytea_string_length INTEGER := length(bytea_string);\n current_substring TEXT := '';\n translated_string_array BYTEA[];\n\n output_number INTEGER := 0;\n output_number_text TEXT := '';\n current_digit TEXT := '';\nBEGIN\n RAISE NOTICE '[translate_octals_into_decimals] start at %, string of \nlength %', clock_timestamp(), pg_size_pretty(length(bytea_string));\n\n FOR i IN 1..length(bytea_string) BY 3 LOOP\n current_substring := substring(bytea_string from i for 3);\n\n output_number := 0;\n\n FOR j IN 0..(length(current_substring) - 1) LOOP\n current_digit := substring(current_substring from \n(length(current_substring) - j) for 1);\n output_number := output_number + current_digit::integer * (8 ^ j);\n END LOOP;\n\n output_number_text = lpad(output_number::text, 3, '0');\n\n IF output_number_text::int = 92 THEN\n translated_string_array := array_append(translated_string_array, \nE'\\\\\\\\'::bytea);\n ELSIF output_number_text::int = 0 THEN\n translated_string_array := array_append(translated_string_array, \nE'\\\\000'::bytea);\n ELSE\n translated_string_array := array_append( translated_string_array, \nchr(output_number_text::integer)::bytea );\n END IF;\n\n END LOOP;\n\n RETURN array_to_string(translated_string_array, '');\nEND;\n$$ LANGUAGE 'plpgsql';\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n\n",
"msg_date": "Thu, 26 May 2011 01:26:17 +0300",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On 05/25/2011 11:45 AM, Reuven M. Lerner wrote:\n> Hi, Alex. You wrote:\n>> Have you tried something like:\n>> SELECT encode(regexp_replace('141142143', '(\\d{3})', '\\\\\\1',\n>> 'g')::bytea, 'escape');\n> Hmm, forgot about regexp_replace. It might do the trick, but without \n> a full-blown eval that I can run on the replacement side, it'll be a \n> bit more challenging. But that's a good direction to consider, for sure.\n\nThe function given didn't work exactly as written for me but it is on \nthe right track. See if this works for you (input validation is left as \nan exercise for the reader...:)):\n\ncreate or replace function octal_string_to_text(someoctal text) returns \ntext as $$\ndeclare\n binstring text;\nbegin\n execute 'select E''' || regexp_replace($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', \n'g') || '''' into binstring;\nreturn binstring;\nend\n$$ language plpgsql;\n\nCheers,\nSteve\n\n",
"msg_date": "Wed, 25 May 2011 17:03:02 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On Wed, May 25, 2011 at 8:03 PM, Steve Crawford\n<[email protected]> wrote:\n> On 05/25/2011 11:45 AM, Reuven M. Lerner wrote:\n>>\n>> Hi, Alex. You wrote:\n>>>\n>>> Have you tried something like:\n>>> SELECT encode(regexp_replace('141142143', '(\\d{3})', '\\\\\\1',\n>>> 'g')::bytea, 'escape');\n>>\n>> Hmm, forgot about regexp_replace. It might do the trick, but without a\n>> full-blown eval that I can run on the replacement side, it'll be a bit more\n>> challenging. But that's a good direction to consider, for sure.\n>\n> The function given didn't work exactly as written for me but it is on the\n> right track. See if this works for you (input validation is left as an\n> exercise for the reader...:)):\n>\n> create or replace function octal_string_to_text(someoctal text) returns text\n> as $$\n> declare\n> binstring text;\n> begin\n> execute 'select E''' || regexp_replace($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g')\n> || '''' into binstring;\n> return binstring;\n> end\n> $$ language plpgsql;\n\nfour points (minor suggestions btw):\n1. if you are dealing with strings that have backslashes in them,\ndon't escape, but dollar quote. Also try not to use dollar parameter\nnotation if you can help it:\n($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g') -> (someoctal , $q$(\\d{3})$q$,\n$q$\\\\\\1$q$, 'g')\n\nthis is particularly true with feeding strings to regex: that way you\ncan use the same string pg as in various validators.\n\n2. there is no need for execute here.\nexecute 'select E''' || regexp_replace($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g')\nbecomes:\nbinstring := 'E''' || regexp_replace($1, $q$(\\d{3})$q$, $q$\\\\\\1$q$,\n'g') /* I *think* I got this right */\n\n3. if your function does not scribble on tables and has no or is not\ninfluenced by any side effects, mark it as IMMUTABLE. always.\n$$ language plpgsql IMMUTABLE;\n\n4. since all we are doing is generating a variable, prefer sql\nfunction vs plpgsql. this is particularly true in pre 8.4 postgres\n(IIRC) where you can call the function much more flexibly (select\nfunc(); vs select * from func();) if that's the case. Putting it all\ntogether,\n\ncreate or replace function octal_string_to_text(someoctal text)\nreturns text as $$\n SELECT 'E''' || regexp_replace($1, $q$(\\d{3})$q$, $q$\\\\\\1$q$, 'g');\n$$ sql immutable;\n\nNote I didn't actually check to see what your regex is donig (I'm\nassuming it's correct)...\n\nmerlin\n",
"msg_date": "Wed, 25 May 2011 21:20:23 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "Thursday, May 26, 2011, 12:26:17 AM you wrote:\n\n> Here you go... it looked nicer before I started to make optimizations; \n> I've gotten it to run about 2x as fast as the previous version, but now \n> I'm sorta stuck, looking for further optimizations, including possible \n> use of builtin functions.\n\nI've got only a 9.0.4 to play with, and bytea's are passed as an\nhexadecimal string, so I resorted to writing the function with TEXT as\nparameters, but maybe the following helps a bit, avoiding a few IMHO\nuseless string/int-operations:\n\nCREATE OR REPLACE FUNCTION translate_octals_into_decimals(bytea_string text) RETURNS text AS $$\nDECLARE\n bytea_string_length INTEGER := length(bytea_string);\n translated_string_array BYTEA[];\n\n output_number INTEGER := 0;\n num1 INTEGER;\n num2 INTEGER;\n num3 INTEGER;\n npos INTEGER;\n nlen INTEGER;\nBEGIN\n RAISE NOTICE '[translate_octals_into_decimals] start at %, string of\nlength %', clock_timestamp(), pg_size_pretty(length(bytea_string));\n\n npos := 1;\n FOR i IN 1..bytea_string_length BY 3 LOOP\n num1 := substring(bytea_string from i for 1);\n num2 := substring(bytea_string from i+1 for 1);\n num3 := substring(bytea_string from i+2 for 1);\n output_number := 64*num1 + 8*num2 + num3;\n\n IF output_number = 0 THEN\n translated_string_array[npos] := E'\\\\000'::bytea;\n ELSIF output_number = 92 THEN\n translated_string_array[npos] := E'\\\\\\\\'::bytea;\n ELSE\n translated_string_array[npos] := chr(output_number)::bytea;\n END IF;\n npos := npos+1;\n END LOOP;\n\n RETURN array_to_string(translated_string_array, '');\nEND;\n$$ LANGUAGE 'plpgsql';\n\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Thu, 26 May 2011 10:46:55 +0200",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On Wed, May 25, 2011 at 9:20 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, May 25, 2011 at 8:03 PM, Steve Crawford\n> <[email protected]> wrote:\n>> On 05/25/2011 11:45 AM, Reuven M. Lerner wrote:\n>>>\n>>> Hi, Alex. You wrote:\n>>>>\n>>>> Have you tried something like:\n>>>> SELECT encode(regexp_replace('141142143', '(\\d{3})', '\\\\\\1',\n>>>> 'g')::bytea, 'escape');\n>>>\n>>> Hmm, forgot about regexp_replace. It might do the trick, but without a\n>>> full-blown eval that I can run on the replacement side, it'll be a bit more\n>>> challenging. But that's a good direction to consider, for sure.\n>>\n>> The function given didn't work exactly as written for me but it is on the\n>> right track. See if this works for you (input validation is left as an\n>> exercise for the reader...:)):\n>>\n>> create or replace function octal_string_to_text(someoctal text) returns text\n>> as $$\n>> declare\n>> binstring text;\n>> begin\n>> execute 'select E''' || regexp_replace($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g')\n>> || '''' into binstring;\n>> return binstring;\n>> end\n>> $$ language plpgsql;\n>\n> four points (minor suggestions btw):\n> 1. if you are dealing with strings that have backslashes in them,\n> don't escape, but dollar quote. Also try not to use dollar parameter\n> notation if you can help it:\n> ($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g') -> (someoctal , $q$(\\d{3})$q$,\n> $q$\\\\\\1$q$, 'g')\n>\n> this is particularly true with feeding strings to regex: that way you\n> can use the same string pg as in various validators.\n>\n> 2. there is no need for execute here.\n> execute 'select E''' || regexp_replace($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g')\n> becomes:\n> binstring := 'E''' || regexp_replace($1, $q$(\\d{3})$q$, $q$\\\\\\1$q$,\n> 'g') /* I *think* I got this right */\n>\n> 3. if your function does not scribble on tables and has no or is not\n> influenced by any side effects, mark it as IMMUTABLE. always.\n> $$ language plpgsql IMMUTABLE;\n>\n> 4. since all we are doing is generating a variable, prefer sql\n> function vs plpgsql. this is particularly true in pre 8.4 postgres\n> (IIRC) where you can call the function much more flexibly (select\n> func(); vs select * from func();) if that's the case. Putting it all\n> together,\n>\n> create or replace function octal_string_to_text(someoctal text)\n> returns text as $$\n> SELECT 'E''' || regexp_replace($1, $q$(\\d{3})$q$, $q$\\\\\\1$q$, 'g');\n> $$ sql immutable;\n>\n> Note I didn't actually check to see what your regex is donig (I'm\n> assuming it's correct)...\n\nhm, I slept on this and had the vague unsettling feeling I had said\nsomething stupid -- and I did. Double +1 to you for being cleverer\nthan me -- you are using 'execute' to eval the string back in to the\nstring. Only plpgsql can do that, so point 4 is also moot. Still,\nthe above points hold in principle, so if a way could be figured out\nto do this without execute, that would be nice.\n\nmerlin\n",
"msg_date": "Thu, 26 May 2011 08:11:28 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On Thu, May 26, 2011 at 8:11 AM, Merlin Moncure <[email protected]> wrote:\n> On Wed, May 25, 2011 at 9:20 PM, Merlin Moncure <[email protected]> wrote:\n>> On Wed, May 25, 2011 at 8:03 PM, Steve Crawford\n>> <[email protected]> wrote:\n>>> On 05/25/2011 11:45 AM, Reuven M. Lerner wrote:\n>>>>\n>>>> Hi, Alex. You wrote:\n>>>>>\n>>>>> Have you tried something like:\n>>>>> SELECT encode(regexp_replace('141142143', '(\\d{3})', '\\\\\\1',\n>>>>> 'g')::bytea, 'escape');\n>>>>\n>>>> Hmm, forgot about regexp_replace. It might do the trick, but without a\n>>>> full-blown eval that I can run on the replacement side, it'll be a bit more\n>>>> challenging. But that's a good direction to consider, for sure.\n>>>\n>>> The function given didn't work exactly as written for me but it is on the\n>>> right track. See if this works for you (input validation is left as an\n>>> exercise for the reader...:)):\n>>>\n>>> create or replace function octal_string_to_text(someoctal text) returns text\n>>> as $$\n>>> declare\n>>> binstring text;\n>>> begin\n>>> execute 'select E''' || regexp_replace($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g')\n>>> || '''' into binstring;\n>>> return binstring;\n>>> end\n>>> $$ language plpgsql;\n>>\n>> four points (minor suggestions btw):\n>> 1. if you are dealing with strings that have backslashes in them,\n>> don't escape, but dollar quote. Also try not to use dollar parameter\n>> notation if you can help it:\n>> ($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g') -> (someoctal , $q$(\\d{3})$q$,\n>> $q$\\\\\\1$q$, 'g')\n>>\n>> this is particularly true with feeding strings to regex: that way you\n>> can use the same string pg as in various validators.\n>>\n>> 2. there is no need for execute here.\n>> execute 'select E''' || regexp_replace($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', 'g')\n>> becomes:\n>> binstring := 'E''' || regexp_replace($1, $q$(\\d{3})$q$, $q$\\\\\\1$q$,\n>> 'g') /* I *think* I got this right */\n>>\n>> 3. if your function does not scribble on tables and has no or is not\n>> influenced by any side effects, mark it as IMMUTABLE. always.\n>> $$ language plpgsql IMMUTABLE;\n>>\n>> 4. since all we are doing is generating a variable, prefer sql\n>> function vs plpgsql. this is particularly true in pre 8.4 postgres\n>> (IIRC) where you can call the function much more flexibly (select\n>> func(); vs select * from func();) if that's the case. Putting it all\n>> together,\n>>\n>> create or replace function octal_string_to_text(someoctal text)\n>> returns text as $$\n>> SELECT 'E''' || regexp_replace($1, $q$(\\d{3})$q$, $q$\\\\\\1$q$, 'g');\n>> $$ sql immutable;\n>>\n>> Note I didn't actually check to see what your regex is donig (I'm\n>> assuming it's correct)...\n>\n> hm, I slept on this and had the vague unsettling feeling I had said\n> something stupid -- and I did. Double +1 to you for being cleverer\n> than me -- you are using 'execute' to eval the string back in to the\n> string. Only plpgsql can do that, so point 4 is also moot. Still,\n> the above points hold in principle, so if a way could be figured out\n> to do this without execute, that would be nice.\n\ngot it:\nselect decode(regexp_replace('141142143', '([0-9][0-9][0-9])',\n$q$\\\\\\1$q$ , 'g'), 'escape');\n decode\n--------\n abc\n(1 row)\n\nmerlin\n",
"msg_date": "Thu, 26 May 2011 08:36:11 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "Wow.\n\nColor me impressed and grateful. I've been working on a different \nproject today, but I'll test these tonight.\n\nI'll never underestimate the regexp functionality in PostgreSQL again!\n\nReuven\n",
"msg_date": "Thu, 26 May 2011 15:49:37 +0300",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On 05/26/2011 05:36 AM, Merlin Moncure wrote:\n> ...\n> got it:\n> select decode(regexp_replace('141142143', '([0-9][0-9][0-9])',\n> $q$\\\\\\1$q$ , 'g'), 'escape');\n> decode\n> --------\n> abc\n> (1 row)\n>\n> merlin\n>\nNice. A word of warning, in 9.0 this returns a hex string:\n\nselect decode(regexp_replace('141142143', '([0-9][0-9][0-9])', \n$q$\\\\\\1$q$ , 'g'), 'escape');\n decode\n----------\n \\x616263\n\nSee http://www.postgresql.org/docs/9.0/static/release-9-0.html:\n\nE.5.2.3. Data Types\n bytea output now appears in hex format by default (Peter Eisentraut)\n The server parameter bytea_output can be used to select the \ntraditional output format if needed for compatibility.\n\nAnother wrinkle, the function I wrote sort of ignored the bytea issue by \nusing text. But text is subject to character-encoding (for both good and \nbad) while bytea is not so the ultimate solution will depend on whether \nthe input string is the octal representation of an un-encoded sequence \nof bytes or represents a string of ASCII/UTF-8/whatever... encoded text.\n\nCheers,\nSteve\n\n",
"msg_date": "Thu, 26 May 2011 08:24:45 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
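For anyone testing on 9.0 who wants the traditional output back while experimenting, the parameter Steve mentions can be flipped per session; a small sketch (display-only, stored data is unaffected):

SET bytea_output = 'escape';
SELECT decode(regexp_replace('141142143', '([0-9][0-9][0-9])', $q$\\\1$q$, 'g'), 'escape');
-- now prints abc rather than \x616263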
{
"msg_contents": "Hi, everyone.\n\nFirst of all, thanks for all of your help several days ago. The \nimprovements to our program were rather dramatic (in a positive sense).\n\nBased on the help that everyone gave, I'm working on something similar, \ntrying to use regexp_replace to transform a string into the result of \ninvoking a function on each character. For example, I'd like to do the \nfollowing:\n\nregexp_replace('abc', '(.)', ascii(E'\\\\1')::text, 'g');\n\nUnfortunately, the above invokes ascii() on the literal string E'\\\\1', \nrather than on the value of the backreference, which isn't nearly as \nuseful. I'd like to get '979899' back as a string. And of course, once \nI can get back the value of ascii(), I figure that it should work for \nany function that I define.\n\nThanks again for any suggestions everyone might have.\n\n(And if this should go to pgsql-general, then I'll understand. If it \nhelps, my alternative to regexp_replace is a super-slow function, akin \nto the one that I showed here last week.)\n\nReuven\n",
"msg_date": "Wed, 01 Jun 2011 12:19:30 +0300",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "On Wed, Jun 1, 2011 at 4:19 AM, Reuven M. Lerner <[email protected]> wrote:\n> Hi, everyone.\n>\n> First of all, thanks for all of your help several days ago. The\n> improvements to our program were rather dramatic (in a positive sense).\n>\n> Based on the help that everyone gave, I'm working on something similar,\n> trying to use regexp_replace to transform a string into the result of\n> invoking a function on each character. For example, I'd like to do the\n> following:\n>\n> regexp_replace('abc', '(.)', ascii(E'\\\\1')::text, 'g');\n>\n> Unfortunately, the above invokes ascii() on the literal string E'\\\\1',\n> rather than on the value of the backreference, which isn't nearly as useful.\n> I'd like to get '979899' back as a string. And of course, once I can get\n> back the value of ascii(), I figure that it should work for any function\n> that I define.\n>\n> Thanks again for any suggestions everyone might have.\n>\n> (And if this should go to pgsql-general, then I'll understand. If it helps,\n> my alternative to regexp_replace is a super-slow function, akin to the one\n> that I showed here last week.)\n\nselect string_agg(v, '') from (select\nascii(regexp_split_to_table('abc', $$\\s*$$))::text as v) q;\n\n(what about 3 digit ascii codes?)\n\nmerlin\n",
"msg_date": "Wed, 1 Jun 2011 08:26:30 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
},
{
"msg_contents": "Hi, Merlin. You wrote:\n\n> select string_agg(v, '') from (select\n> ascii(regexp_split_to_table('abc', $$\\s*$$))::text as v) q;\nWow. I've been programming with pl/pgsql for a good number of years, \nbut only now do I see the amazing usefulness of regexp_split_to_table \nand string_agg, neither of which I really used until now. Thanks for \nboth the solution and for opening my eyes.\n> (what about 3 digit ascii codes?)\nI have to put the number into a text field anyway, so I've been \nconverting the resulting number to text, and then using lpad to add \nleading zeroes as necessary.\n\nThanks again,\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n\n",
"msg_date": "Thu, 02 Jun 2011 09:49:27 +0300",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
}
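For anyone following along, a minimal sketch of the zero-padded variant Reuven describes, assuming a server new enough to have string_agg (9.0+); as with Merlin's version, the row order coming out of the split is relied upon rather than pinned with an explicit ORDER BY:

SELECT string_agg(lpad(ascii(ch)::text, 3, '0'), '')
FROM (SELECT regexp_split_to_table('abc', $$\s*$$) AS ch) q;
-- 097098099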
] |
[
{
"msg_contents": "Dkloskxe\n\nSteve Crawford <[email protected]> wrote:\n\n>On 05/25/2011 11:45 AM, Reuven M. Lerner wrote:\n>> Hi, Alex. You wrote:\n>>> Have you tried something like:\n>>> SELECT encode(regexp_replace('141142143', '(\\d{3})', '\\\\\\1',\n>>> 'g')::bytea, 'escape');\n>> Hmm, forgot about regexp_replace. It might do the trick, but without \n>> a full-blown eval that I can run on the replacement side, it'll be a \n>> bit more challenging. But that's a good direction to consider, for sure.\n>\n>The function given didn't work exactly as written for me but it is on \n>the right track. See if this works for you (input validation is left as \n>an exercise for the reader...:)):\n>\n>create or replace function octal_string_to_text(someoctal text) returns \n>text as $$\n>declare\n> binstring text;\n>begin\n> execute 'select E''' || regexp_replace($1, E'(\\\\d{3})', E'\\\\\\\\\\\\1', \n>'g') || '''' into binstring;\n>return binstring;\n>end\n>$$ language plpgsql;\n>\n>Cheers,\n>Steve\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 May 2011 20:55:53 -0700",
"msg_from": "Santhakumaran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up loops in pl/pgsql function"
}
] |
[
{
"msg_contents": "Hello performers, I've long been unhappy with the standard advice\ngiven for setting shared buffers. This includes the stupendously\nvague comments in the standard documentation, which suggest certain\nsettings in order to get 'good performance'. Performance of what?\nConnection negotiation speed? Not that it's wrong necessarily, but\nISTM too much based on speculative or anecdotal information. I'd like\nto see the lore around this setting clarified, especially so we can\nrefine advice to: 'if you are seeing symptoms x,y,z set shared_buffers\nfrom a to b to get symptom reduction of k'. I've never seen a\ndatabase blow up from setting them too low, but over the years I've\nhelped several people with bad i/o situations or outright OOM\nconditions from setting them too high.\n\nMy general understanding of shared_buffers is that they are a little\nbit faster than filesystem buffering (everything these days is\nultimately based on mmap AIUI, so there's no reason to suspect\nanything else). Where they are most helpful is for masking of i/o if\na page gets dirtied >1 times before it's written out to the heap, but\nseeing any benefit from that at all is going to be very workload\ndependent. There are also downsides using them instead of on the heap\nas well, and the amount of buffers you have influences checkpoint\nbehavior. So things are complex.\n\nSo, the challenge is this: I'd like to see repeatable test cases that\ndemonstrate regular performance gains > 20%. Double bonus points for\ncases that show gains > 50%. No points given for anecdotal or\nunverifiable data. Not only will this help raise the body of knowledge\nregarding the setting, but it will help produce benchmarking metrics\nagainst which we can measure multiple interesting buffer related\npatches in the pipeline. Anybody up for it?\n\nmerlin\n",
"msg_date": "Thu, 26 May 2011 09:31:59 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "The shared buffers challenge"
},
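Not an entry in the challenge itself, but any repeatable test case probably wants to report what the buffer cache actually holds at steady state, and the pg_buffercache contrib module makes that visible. A sketch, assuming the module is installed in the test database:

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                          WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 10;

Capturing that alongside throughput and latency numbers before and after a shared_buffers change is at least verifiable, which is the point of the exercise.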
{
"msg_contents": "Merlin Moncure <[email protected]> wrote:\n \n> So, the challenge is this: I'd like to see repeatable test cases\n> that demonstrate regular performance gains > 20%. Double bonus\n> points for cases that show gains > 50%.\n \nAre you talking throughput, maximum latency, or some other metric?\n \nIn our shop the metric we tuned for in reducing shared_buffers was\ngetting the number of \"fast\" queries (which normally run in under a\nmillisecond) which would occasionally, in clusters, take over 20\nseconds (and thus be canceled by our web app and present as errors\nto the public) down to zero. While I know there are those who care\nprimarily about throughput numbers, that's worthless to me without\nmaximum latency information under prolonged load. I'm not talking\n90th percentile latency numbers, either -- if 10% of our web\nrequests were timing out the villagers would be coming after us with\npitchforks and torches.\n \n-Kevin\n",
"msg_date": "Thu, 26 May 2011 10:10:13 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
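One cheap way to turn that kind of latency spike into verifiable evidence is to have the server log any statement that runs longer than a threshold set just above the normal worst case; the database name here is hypothetical and the setting needs superuser rights:

ALTER DATABASE webapp SET log_min_duration_statement = '1s';
-- or, for a single test session:
SET log_min_duration_statement = '1s';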
{
"msg_contents": "On Thu, May 26, 2011 at 10:10 AM, Kevin Grittner\n<[email protected]> wrote:\n> Merlin Moncure <[email protected]> wrote:\n>\n>> So, the challenge is this: I'd like to see repeatable test cases\n>> that demonstrate regular performance gains > 20%. Double bonus\n>> points for cases that show gains > 50%.\n>\n> Are you talking throughput, maximum latency, or some other metric?\n\nI am talking about *any* metric..you've got something, let's see it.\nBut it's got to be verifiable, so no points scored.\n\nSee my note above about symptoms -- if your symptom of note happens to\nbe unpredictable spikes in fast query times under load, then I'd like\nto scribble that advice directly into the docs along with (hopefully)\nsome reasoning of exactly why more database managed buffers are\nhelping. As noted, I'm particularly interested in things we can test\noutside of production environments, since I'm pretty skeptical the\nWisconsin Court System is going to allow the internet to log in and\nrepeat and verify test methodologies. Point being: cranking buffers\nmay have been the bee's knees with, say, the 8.2 buffer manager, but\npresent and future improvements may have render that change moot or\neven counter productive. I doubt it's really changed much, but we\nreally need to do better on this -- all else being equal, the lowest\nshared_buffers setting possible without sacrificing performance is\nbest because it releases more memory to the o/s to be used for other\nthings -- so \"everthing's bigger in Texas\" type approaches to\npostgresql.conf manipulation (not that I see that here of course) are\nnot necessarily better :-).\n\nmerlin\n",
"msg_date": "Thu, 26 May 2011 10:36:18 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Thu, May 26, 2011 at 5:36 PM, Merlin Moncure <[email protected]> wrote:\n> Point being: cranking buffers\n> may have been the bee's knees with, say, the 8.2 buffer manager, but\n> present and future improvements may have render that change moot or\n> even counter productive.\n\nI suggest you read the docs on how shared buffers work, because,\nreasonably, it would be all the way around.\n\nRecent improvments into how postgres manage its shared buffer pool\nmakes them better than the OS cache, so there should be more incentive\nto increase them, rather than decrease them.\n\nWorkload conditions may make those improvements worthless, hinting\nthat you should decrease them.\n\nBut you have to know your workload and you have to know how the shared\nbuffers work.\n",
"msg_date": "Thu, 26 May 2011 17:45:22 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Thu, May 26, 2011 at 10:45 AM, Claudio Freire <[email protected]> wrote:\n> On Thu, May 26, 2011 at 5:36 PM, Merlin Moncure <[email protected]> wrote:\n>> Point being: cranking buffers\n>> may have been the bee's knees with, say, the 8.2 buffer manager, but\n>> present and future improvements may have render that change moot or\n>> even counter productive.\n>\n> I suggest you read the docs on how shared buffers work, because,\n> reasonably, it would be all the way around.\n>\n> Recent improvments into how postgres manage its shared buffer pool\n> makes them better than the OS cache, so there should be more incentive\n> to increase them, rather than decrease them.\n>\n> Workload conditions may make those improvements worthless, hinting\n> that you should decrease them.\n>\n> But you have to know your workload and you have to know how the shared\n> buffers work.\n\nI am not denying that any of those things are the case, although your\nassumption that I haven't read the documentation was obviously not\ngrounded upon research. What you and I know/don't know is not the\npoint. The point is what we can prove, because going through the\nmotions of doing that is useful. You are also totally missing my\nother thrust, which is that future changes to how things work could\nchange the dynamics of .conf configuration -- btw not for the first\ntime in the history of the project.\n\nmerlin\n",
"msg_date": "Thu, 26 May 2011 11:02:16 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Thu, May 26, 2011 at 6:02 PM, Merlin Moncure <[email protected]> wrote:\n> The point is what we can prove, because going through the\n> motions of doing that is useful.\n\nExactly, and whatever you can \"prove\" will be workload-dependant.\nSo you can't prove anything \"generally\", since no single setting is\nbest for all.\n\n> You are also totally missing my\n> other thrust, which is that future changes to how things work could\n> change the dynamics of .conf configuration\n\nNope, I'm not missing it, simply not commenting on it.\n",
"msg_date": "Thu, 26 May 2011 18:37:56 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Thu, May 26, 2011 at 11:37 AM, Claudio Freire <[email protected]> wrote:\n> On Thu, May 26, 2011 at 6:02 PM, Merlin Moncure <[email protected]> wrote:\n>> The point is what we can prove, because going through the\n>> motions of doing that is useful.\n>\n> Exactly, and whatever you can \"prove\" will be workload-dependant.\n> So you can't prove anything \"generally\", since no single setting is\n> best for all.\n\nThen we should stop telling people to adjust it unless we can match\nthe workload to the improvement. There are some people here who can\ndo that as if by magic, but that's not the issue. I'm trying to\nunderstand the why it works better for some than for others. What's\nfrustrating is simply believing something is the case, without trying\nto understand why. How about, instead of arguing with me, coming up\nwith something for the challenge?\n\nmerlin\n",
"msg_date": "Thu, 26 May 2011 11:45:38 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "Merlin Moncure <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n>> Merlin Moncure <[email protected]> wrote:\n>>\n>>> So, the challenge is this: I'd like to see repeatable test cases\n>>> that demonstrate regular performance gains > 20%. Double bonus\n>>> points for cases that show gains > 50%.\n>>\n>> Are you talking throughput, maximum latency, or some other\n>> metric?\n> \n> I am talking about *any* metric..you've got something, let's see\n> it. But it's got to be verifiable, so no points scored.\n \nOh, that wasn't to score points; just advocating for more than a\none-dimensional view of performance. I'm adding to your demands,\nnot attempting to satisfy them. :-)\n \n> See my note above about symptoms -- if your symptom of note\n> happens to be unpredictable spikes in fast query times under load,\n> then I'd like to scribble that advice directly into the docs along\n> with (hopefully) some reasoning of exactly why more database\n> managed buffers are helping.\n \nIn our case it was *fewer* shared_buffers which helped.\n \n> As noted, I'm particularly interested in things we can test\n> outside of production environments, since I'm pretty skeptical the\n> Wisconsin Court System is going to allow the internet to log in\n> and repeat and verify test methodologies.\n \nRight, while it was a fairly scientific and methodical test, it was\nagainst a live production environment. We adjusted parameters\nincrementally, a little each day, from where they had been toward\nvalues which were calculated in advance to be better based on our\ntheory of the problem (aided in no small part by analysis and advice\nfrom Greg Smith), and saw a small improvement each day with the\nproblem disappearing entirely right at the target values we had\ncalculated in advance. :-)\n \n> Point being: cranking buffers may have been the bee's knees with,\n> say, the 8.2 buffer manager, but present and future improvements\n> may have render that change moot or even counter productive.\n \nWe did find that in 8.3 and later we can support a larger\nshared_buffer setting without the problem than in 8.2 and earlier. \nWe still need to stay on the low side of what is often advocated to\nkeep the failure rate from this issue at zero.\n \n> all else being equal, the lowest shared_buffers setting possible\n> without sacrificing performance is best because it releases more\n> memory to the o/s to be used for other things\n \nI absolutely agree with this.\n \nI think the problem is that it is very tedious and time-consuming to\nconstruct artificial tests for these things. Greg Smith has spent a\nlot of time and done a lot of research investigating the dynamics of\nthese issues, and recommends a process of incremental adjustments\nfor tuning the relevant settings which, in my opinion, is going to\nbe better than any generalized advice on settings.\n \nDon't get me wrong, I would love to see numbers which \"earned\npoints\" under the criteria you outline. I would especially love it\nif they could be part of the suite of tests in our performance farm.\nI just think that the wealth of anecdotal evidence and the dearth of\nrepeatable benchmarks in this area is due to the relatively low-cost\ntechniques available to tune production systems to solve pressing\nneeds versus the relatively high cost of creating repeatable test\ncases (without, by the way, solving an immediate need).\n \n-Kevin\n",
"msg_date": "Thu, 26 May 2011 12:45:32 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "Merlin Moncure wrote:\n> So, the challenge is this: I'd like to see repeatable test cases that\n> demonstrate regular performance gains > 20%. Double bonus points for\n> cases that show gains > 50%.\n\nDo I run around challenging your suggestions and giving you homework? \nYou have no idea how much eye rolling this whole message provoked from me.\n\nOK, so the key thing to do is create a table such that shared_buffers is \nsmaller than the primary key index on a table, then UPDATE that table \nfuriously. This will page constantly out of the buffer cache to the OS \none, doing work that could be avoided. Increase shared_buffers to where \nit fits instead, and all the index writes are buffered to write only \nonce per checkpoint. Server settings to exaggerate the effect:\n\nshared_buffers = 32MB\ncheckpoint_segments = 256\nlog_checkpoints = on\nautovacuum = off\n\nTest case:\n\ncreatedb pgbench\npgbench -i -s 20 pgbench\npsql -d pgbench -c \"select \npg_size_pretty(pg_relation_size('public.pgbench_accounts_pkey'))\"\npsql -c \"select pg_stat_reset_shared('bgwriter')\"\npgbench -T 120 -c 4 -n pgbench\npsql -x -c \"SELECT * FROM pg_stat_bgwriter\"\n\nThis gives the following size for the primary key and results:\n\n pg_size_pretty\n----------------\n 34 MB\n\ntransaction type: TPC-B (sort of)\nscaling factor: 20\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nduration: 120 s\nnumber of transactions actually processed: 13236\ntps = 109.524954 (including connections establishing)\ntps = 109.548498 (excluding connections establishing)\n\n-[ RECORD 1 ]---------+------------------------------\ncheckpoints_timed | 0\ncheckpoints_req | 0\nbuffers_checkpoint | 0\nbuffers_clean | 16156\nmaxwritten_clean | 131\nbuffers_backend | 5701\nbuffers_backend_fsync | 0\nbuffers_alloc | 25276\nstats_reset | 2011-05-26 18:39:57.292777-04\n\nNow, change so the whole index fits instead:\n\nshared_buffers = 512MB\n\n...which follows the good old \"25% of RAM\" guidelines given this system \nhas 2GB of RAM. Restart the server, repeat the test case. New results:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 20\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 1\nduration: 120 s\nnumber of transactions actually processed: 103440\ntps = 861.834090 (including connections establishing)\ntps = 862.041716 (excluding connections establishing)\n\ngsmith@meddle:~/personal/scripts$ psql -x -c \"SELECT * FROM \npg_stat_bgwriter\"\n-[ RECORD 1 ]---------+------------------------------\ncheckpoints_timed | 0\ncheckpoints_req | 0\nbuffers_checkpoint | 0\nbuffers_clean | 0\nmaxwritten_clean | 0\nbuffers_backend | 1160\nbuffers_backend_fsync | 0\nbuffers_alloc | 34071\nstats_reset | 2011-05-26 18:43:40.887229-04\n\nRather than writing 16156+5701=21857 buffers out during the test to \nsupport all the index churn, instead only 1160 buffers go out, \nconsisting mostly of the data blocks for pgbench_accounts that are being \nupdated irregularly. With less than 1 / 18th as I/O to do, the system \nexecutes nearly 8X as many UPDATE statements during the test run.\n\nAs for figuring out how this impacts more complicated cases, I hear \nsomebody wrote a book or something that went into pages and pages of \ndetail about all this. You might want to check it out.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 26 May 2011 19:10:19 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
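A quick way to reproduce the before/after comparison above is to diff the pg_stat_bgwriter counters around each pgbench run. The following is a minimal sketch (not part of the original post) that assumes the default 8 KB block size and the view columns available in 8.3 and later:

-- Run after "select pg_stat_reset_shared('bgwriter')" and a pgbench run;
-- buffers written * 8 KB gives the physical write volume behind each result.
SELECT buffers_checkpoint + buffers_clean + buffers_backend AS buffers_written,
       round((buffers_checkpoint + buffers_clean + buffers_backend)
             * 8192 / (1024.0 * 1024.0), 1) AS mb_written,
       buffers_alloc
FROM pg_stat_bgwriter;

Applied to the two runs above, it turns 21857 vs. 1160 written buffers into roughly 171 MB vs. 9 MB of writes for the same 120-second window.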
{
"msg_contents": "On Thu, May 26, 2011 at 4:10 PM, Greg Smith <[email protected]> wrote:\n\n>\n> As for figuring out how this impacts more complicated cases, I hear\n> somebody wrote a book or something that went into pages and pages of detail\n> about all this. You might want to check it out.\n>\n>\nI was just going to suggest that there was significant and detailed\ndocumentation of this stuff in a certain book, a well-thumbed copy of which\nshould be sitting on the desk of anyone attempting any kind of postgres\nperformance tuning.\n\nOn Thu, May 26, 2011 at 4:10 PM, Greg Smith <[email protected]> wrote:\n\nAs for figuring out how this impacts more complicated cases, I hear somebody wrote a book or something that went into pages and pages of detail about all this. You might want to check it out.\nI was just going to suggest that there was significant and detailed documentation of this stuff in a certain book, a well-thumbed copy of which should be sitting on the desk of anyone attempting any kind of postgres performance tuning.",
"msg_date": "Thu, 26 May 2011 16:40:24 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Thu, May 26, 2011 at 6:10 PM, Greg Smith <[email protected]> wrote:\n> Merlin Moncure wrote:\n>>\n>> So, the challenge is this: I'd like to see repeatable test cases that\n>> demonstrate regular performance gains > 20%. Double bonus points for\n>> cases that show gains > 50%.\n>\n> Do I run around challenging your suggestions and giving you homework? You\n> have no idea how much eye rolling this whole message provoked from me.\n\nThat's just plain unfair: I didn't challenge your suggestion nor give\nyou homework. In particular, I'm not suggesting the 25%-ish default is\nwrong -- but trying to help people understand why it's there and what\nit's doing. I bet 19 people out of 20 could not explain what the\nprimary effects of shared_buffers with any degree of accuracy. That\ngroup of people in fact would have included me until recently, when I\nstarted studying bufmgr.c for mostly unrelated reasons. Understand my\nbasic points:\n\n*) the documentation should really explain this better (in particular,\nit should debunk the myth 'more buffers = more caching')\n*) the 'what' is not nearly so important as the 'why'\n*) the popular understanding of what buffers do is totally, completely, wrong\n*) I'd like to gather cases to benchmark changes that interact with\nthese settings\n*) I think you are fighting the 'good fight'. I'm trying to help\n\n> OK, so the key thing to do is create a table such that shared_buffers is\n> smaller than the primary key index on a table, then UPDATE that table\n> furiously. This will page constantly out of the buffer cache to the OS one,\n> doing work that could be avoided. Increase shared_buffers to where it fits\n> instead, and all the index writes are buffered to write only once per\n> checkpoint. Server settings to exaggerate the effect:\n\nThis is exactly what I'm looking for...that's really quite striking.\nI knew that buffer 'hit' before it goes out the door is what to gun\nfor.\n\n> As for figuring out how this impacts more complicated cases, I hear somebody\n> wrote a book or something that went into pages and pages of detail about all\n> this. You might want to check it out.\n\nso i've heard: http://imgur.com/lGOqx (and yes: I 100% endorse the\nbook: everyone who is serious about postgres should own a copy).\nAnyways, double points to you ;-).\n\nmerlin\n",
"msg_date": "Fri, 27 May 2011 10:23:12 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "So how far do you go? 128MB? 32MB? 4MB?\n\nAnecdotal and an assumption, but I'm pretty confident that on any server\nwith at least 1GB of dedicated RAM, setting it any lower than 200MB is not\neven going to help latency (assuming checkpoint and log configuration is\nin the realm of sane, and connections*work_mem is sane).\n\nThe defaults have been so small for so long on most platforms, that any\nincrease over the default generally helps performance -- and in many cases\ndramatically. So if more is better, then most users assume that even more\nshould be better.\nBut its not so simple, there are drawbacks to a larger buffer and\ndiminishing returns with larger size. I think listing the drawbacks of a\nlarger buffer and symptoms that can result would be a big win.\n\nAnd there is an OS component to it too. You can actually get away with\nshared_buffers at 90% of RAM on Solaris. Linux will explode if you try\nthat (unless recent kernels have fixed its shared memory accounting).\n\n\nOn 5/26/11 8:10 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>Merlin Moncure <[email protected]> wrote:\n> \n>> So, the challenge is this: I'd like to see repeatable test cases\n>> that demonstrate regular performance gains > 20%. Double bonus\n>> points for cases that show gains > 50%.\n> \n>Are you talking throughput, maximum latency, or some other metric?\n> \n>In our shop the metric we tuned for in reducing shared_buffers was\n>getting the number of \"fast\" queries (which normally run in under a\n>millisecond) which would occasionally, in clusters, take over 20\n>seconds (and thus be canceled by our web app and present as errors\n>to the public) down to zero. While I know there are those who care\n>primarily about throughput numbers, that's worthless to me without\n>maximum latency information under prolonged load. I'm not talking\n>90th percentile latency numbers, either -- if 10% of our web\n>requests were timing out the villagers would be coming after us with\n>pitchforks and torches.\n> \n>-Kevin\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 27 May 2011 09:44:30 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "Scott Carey <[email protected]> wrote:\n \n> So how far do you go? 128MB? 32MB? 4MB?\n \nUnder 8.2 we had to keep shared_buffers less than the RAM on our BBU\nRAID controller, which had 256MB -- so it worked best with\nshared_buffers in the 160MB to 200MB range. With 8.3 we found that\nanywhere from 512MB to 1GB performed better without creating\nclusters of stalls. In both cases we also had to significantly\nboost the aggressiveness of the background writer.\n \nSince the \"sweet spot\" is so dependent on such things as your RAID\ncontroller and your workload, I *highly* recommend Greg's\nincremental tuning approach. The rough guidelines which get tossed\nabout make reasonable starting points, but you really need to make\nrelatively small changes with the actual load you're trying to\noptimize and monitor the metrics which matter to you. On a big data\nwarehouse you might not care if the database becomes unresponsive\nfor a couple minutes every now and then if it means better overall\nthroughput. On a web server, you may not have much problem keeping\nup with the overall load, but want to ensure reasonable response\ntime.\n \n> Anecdotal and an assumption, but I'm pretty confident that on any\n> server with at least 1GB of dedicated RAM, setting it any lower\n> than 200MB is not even going to help latency (assuming checkpoint\n> and log configuration is in the realm of sane, and\n> connections*work_mem is sane).\n \nI would add the assumption that you've got at least 256MB BBU cache\non your RAID controller.\n \n> The defaults have been so small for so long on most platforms,\n> that any increase over the default generally helps performance --\n> and in many cases dramatically.\n \nAgreed.\n \n> So if more is better, then most users assume that even more should\n> be better.\n \nThat does seem like a real risk.\n \n-Kevin\n",
"msg_date": "Fri, 27 May 2011 12:07:09 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
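Since the sweet spot is so workload-dependent, it can also help to look at what is actually occupying shared_buffers before moving the setting. Here is a hedged sketch using the contrib pg_buffercache module (it has to be installed separately; column names as of 8.3 and later):

-- Which relations fill the buffer cache, and how many of their buffers are
-- "busy" (usagecount >= 3, i.e. touched repeatedly and recently).
SELECT c.relname,
       count(*) AS buffers,
       sum(CASE WHEN b.usagecount >= 3 THEN 1 ELSE 0 END) AS busy_buffers
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

A cache dominated by low usage counts suggests the extra buffers are not adding much beyond what the OS cache already provides; many high-count buffers for hot indexes points the other way.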
{
"msg_contents": "Scott Carey wrote:\n> And there is an OS component to it too. You can actually get away with\n> shared_buffers at 90% of RAM on Solaris. Linux will explode if you try\n> that (unless recent kernels have fixed its shared memory accounting).\n> \n\nYou can use much larger values for shared_buffers on Solaris with UFS as \nthe filesystem than almost anywhere else, as you say. UFS defaults to \ncaching an extremely tiny amount of memory by default. Getting \nPostgreSQL to buffer everything therefore leads to minimal \ndouble-caching and little write caching that creates checkpoint spikes, \nso 90% is not impossible there.\n\nIf you're using ZFS instead, that defaults to similar aggressive caching \nas Linux. You may even have to turn that down if you want the database \nto have a large amount of memory for its own use even with normal levels \nof sizing; just space for shared_buffers and work_mem can end up being \ntoo large of a pair of competitors for caching RAM. ZFS is not really \nnot tuned all that differently from how Linux approaches caching in that \nregard.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 27 May 2011 14:26:31 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "Merlin Moncure wrote:\n> That's just plain unfair: I didn't challenge your suggestion nor give\n> you homework.\n\nI was stuck either responding to your challenge, or leaving the \nimpression I hadn't done the research to back the suggestions I make if \nI didn't. That made it a mandatory homework assignment for me, and I \ndidn't appreciate that.\n\n\n> *) the documentation should really explain this better (in particular,\n> it should debunk the myth 'more buffers = more caching'\n\nAny attempt to make a serious change to the documentation around \nperformance turns into a bikeshedding epic, where the burden of proof to \nmake a change is too large to be worth the trouble to me anymore. I \nfirst started publishing tuning papers outside of the main docs because \nit was the path of least resistance to actually getting something useful \nin front of people. After failing to get even basic good \nrecommendations for checkpoint_segments into the docs, I completely gave \nup on focusing there as my primary way to spread this sort of information.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 27 May 2011 14:47:51 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Fri, May 27, 2011 at 1:47 PM, Greg Smith <[email protected]> wrote:\n> Merlin Moncure wrote:\n>>\n>> That's just plain unfair: I didn't challenge your suggestion nor give\n>> you homework.\n>\n> I was stuck either responding to your challenge, or leaving the impression I\n> hadn't done the research to back the suggestions I make if I didn't. That\n> made it a mandatory homework assignment for me, and I didn't appreciate\n> that.\n\nreally -- that wasn't my intent. in any event, i apologize.\n\nmerlin\n",
"msg_date": "Fri, 27 May 2011 13:57:27 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Fri, May 27, 2011 at 2:47 PM, Greg Smith <[email protected]> wrote:\n> Any attempt to make a serious change to the documentation around performance\n> turns into a bikeshedding epic, where the burden of proof to make a change\n> is too large to be worth the trouble to me anymore. I first started\n> publishing tuning papers outside of the main docs because it was the path of\n> least resistance to actually getting something useful in front of people.\n> After failing to get even basic good recommendations for\n> checkpoint_segments into the docs, I completely gave up on focusing there as\n> my primary way to spread this sort of information.\n\nHmm. That's rather unfortunate. +1 for revisiting that topic, if you\nhave the energy for it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 27 May 2011 15:04:21 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": ">> After failing to get even basic good recommendations for\n>> checkpoint_segments into the docs, I completely gave up on focusing there as\n>> my primary way to spread this sort of information.\n>\n> Hmm. That's rather unfortunate. +1 for revisiting that topic, if you\n> have the energy for it.\n\nAnother +1. While I understand that this is not simple, many users\nwill not look outside of standard docs, especially when first\nevaluating PostgreSQL. Merlin is right that the current wording does\nnot really mention a down side to cranking shared_buffers on a system\nwith plenty of RAM.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Fri, 27 May 2011 12:24:38 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Fri, May 27, 2011 at 9:24 PM, Maciek Sakrejda <[email protected]> wrote:\n> Another +1. While I understand that this is not simple, many users\n> will not look outside of standard docs, especially when first\n> evaluating PostgreSQL. Merlin is right that the current wording does\n> not really mention a down side to cranking shared_buffers on a system\n> with plenty of RAM.\n\nIf you read the whole docs it does.\n\nIf you read caching, checkpoints, WAL, all of it, you can connect the dots.\nIt isn't easy, but database management isn't easy.\n",
"msg_date": "Sat, 28 May 2011 00:12:10 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On 27/05/11 11:10, Greg Smith wrote:\n>\n> OK, so the key thing to do is create a table such that shared_buffers \n> is smaller than the primary key index on a table, then UPDATE that \n> table furiously. This will page constantly out of the buffer cache to \n> the OS one, doing work that could be avoided. Increase shared_buffers \n> to where it fits instead, and all the index writes are buffered to \n> write only once per checkpoint. Server settings to exaggerate the \n> effect:\n>\n> shared_buffers = 32MB\n> checkpoint_segments = 256\n> log_checkpoints = on\n> autovacuum = off\n>\n> Test case:\n>\n> createdb pgbench\n> pgbench -i -s 20 pgbench\n> psql -d pgbench -c \"select \n> pg_size_pretty(pg_relation_size('public.pgbench_accounts_pkey'))\"\n> psql -c \"select pg_stat_reset_shared('bgwriter')\"\n> pgbench -T 120 -c 4 -n pgbench\n> psql -x -c \"SELECT * FROM pg_stat_bgwriter\"\n>\n> This gives the following size for the primary key and results:\n>\n> pg_size_pretty\n> ----------------\n> 34 MB\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 20\n> query mode: simple\n> number of clients: 4\n> number of threads: 1\n> duration: 120 s\n> number of transactions actually processed: 13236\n> tps = 109.524954 (including connections establishing)\n> tps = 109.548498 (excluding connections establishing)\n>\n> -[ RECORD 1 ]---------+------------------------------\n> checkpoints_timed | 0\n> checkpoints_req | 0\n> buffers_checkpoint | 0\n> buffers_clean | 16156\n> maxwritten_clean | 131\n> buffers_backend | 5701\n> buffers_backend_fsync | 0\n> buffers_alloc | 25276\n> stats_reset | 2011-05-26 18:39:57.292777-04\n>\n> Now, change so the whole index fits instead:\n>\n> shared_buffers = 512MB\n>\n> ...which follows the good old \"25% of RAM\" guidelines given this \n> system has 2GB of RAM. Restart the server, repeat the test case. New \n> results:\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 20\n> query mode: simple\n> number of clients: 4\n> number of threads: 1\n> duration: 120 s\n> number of transactions actually processed: 103440\n> tps = 861.834090 (including connections establishing)\n> tps = 862.041716 (excluding connections establishing)\n>\n> gsmith@meddle:~/personal/scripts$ psql -x -c \"SELECT * FROM \n> pg_stat_bgwriter\"\n> -[ RECORD 1 ]---------+------------------------------\n> checkpoints_timed | 0\n> checkpoints_req | 0\n> buffers_checkpoint | 0\n> buffers_clean | 0\n> maxwritten_clean | 0\n> buffers_backend | 1160\n> buffers_backend_fsync | 0\n> buffers_alloc | 34071\n> stats_reset | 2011-05-26 18:43:40.887229-04\n>\n> Rather than writing 16156+5701=21857 buffers out during the test to \n> support all the index churn, instead only 1160 buffers go out, \n> consisting mostly of the data blocks for pgbench_accounts that are \n> being updated irregularly. With less than 1 / 18th as I/O to do, the \n> system executes nearly 8X as many UPDATE statements during the test run.\n>\n> As for figuring out how this impacts more complicated cases, I hear \n> somebody wrote a book or something that went into pages and pages of \n> detail about all this. You might want to check it out.\n>\n\nGreg, having an example with some discussion like this in the docs would \nprobably be helpful. If you want to add it that would be great, however \nthat sounds dangerously like giving you homework :-) I'm happy to put \nsomething together for the docs if you'd prefer that I do my own \nassignments.\n\nCheers\n\nMark\n\n",
"msg_date": "Sat, 28 May 2011 11:30:27 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On Thu, 2011-05-26 at 09:31 -0500, Merlin Moncure wrote:\n> Where they are most helpful is for masking of i/o if\n> a page gets dirtied >1 times before it's written out to the heap\n\nAnother possible benefit of higher shared_buffers is that it may reduce\nWAL flushes. A page cannot be evicted from shared_buffers until the WAL\nhas been flushed up to the page's LSN (\"WAL before data\"); so if there\nis memory pressure to evict dirty buffers, it may cause extra WAL\nflushes.\n\nI'm not sure what the practical effects of this are, however, but it\nmight be an interesting thing to test.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 27 May 2011 17:19:01 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
{
"msg_contents": "On 05/27/2011 07:30 PM, Mark Kirkwood wrote:\n> Greg, having an example with some discussion like this in the docs \n> would probably be helpful.\n\nIf we put that example into the docs, two years from now there will be \npeople showing up here saying \"I used the recommended configuration from \nthe docs\" that cut and paste it into their postgresql.conf, turning \nautovacuum off and everything. Periodically people used to publish \n\"recommended postgresql.conf\" settings on random web pages, sometimes \nwith poor suggestions, and those things kept showing up in people's \nconfigurations posted to the lists here for long after they were no \nlonger applicable. I've resisted publishing specific configuration \nexamples in favor of working on pgtune specifically because of having \nobserved that.\n\nThere's several new small features in 9.1 that make it a easier to \ninstrument checkpoint behavior and how it overlaps with shared_buffers \nincreases: summary of sync times, ability to reset pg_stat_bgwriter, \nand a timestamp on when it was last reset. It's not obvious at all how \nthose all stitch together into some new tuning approaches, but they do. \nBest example I've given so far is at \nhttp://archives.postgresql.org/pgsql-hackers/2011-02/msg00209.php ; look \nat how I can turn the mysterious buffers_backend field into something \nmeasured in MB/s using these new features.\n\nThat's the direction all this is moving toward. If it's easy for people \nto turn the various buffer statistics into human-readable form, the way \ntuning changes impact the server operation becomes much easier to see. \nDocumenting the much harder to execute methodology you can apply to \nearlier versions isn't real exciting to me at this point. I've done \nthat enough that people who want the info can find it, even if it's not \nall in the manual. The ways you can do it in 9.1 are so much easier, \nand more accurate in regards to the sync issues, that I'm more \ninterested in beefing up the manual in regards to using them at this point.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Sat, 28 May 2011 17:47:12 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The shared buffers challenge"
},
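For reference, the buffers_backend-to-MB/s conversion mentioned above can be done along these lines. This is my own sketch, not the linked example; it needs 9.1 for pg_stat_bgwriter.stats_reset and assumes the default 8 KB block size:

SELECT round((buffers_backend * 8192 / (1024.0 * 1024.0)
              / extract(epoch FROM now() - stats_reset))::numeric, 2)
         AS backend_mb_per_sec,
       now() - stats_reset AS sample_window
FROM pg_stat_bgwriter;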
{
"msg_contents": "On Fri, May 27, 2011 at 7:19 PM, Jeff Davis <[email protected]> wrote:\n> On Thu, 2011-05-26 at 09:31 -0500, Merlin Moncure wrote:\n>> Where they are most helpful is for masking of i/o if\n>> a page gets dirtied >1 times before it's written out to the heap\n>\n> Another possible benefit of higher shared_buffers is that it may reduce\n> WAL flushes. A page cannot be evicted from shared_buffers until the WAL\n> has been flushed up to the page's LSN (\"WAL before data\"); so if there\n> is memory pressure to evict dirty buffers, it may cause extra WAL\n> flushes.\n>\n> I'm not sure what the practical effects of this are, however, but it\n> might be an interesting thing to test.\n\nHm, I bet it could make a fairly big difference if wal data is not on\na separate volume.\n\nmerlin\n",
"msg_date": "Tue, 31 May 2011 08:57:43 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The shared buffers challenge"
}
] |
[
{
"msg_contents": "Working on some optimization as well as finally getting off my\nbackside and moving us to 64bit (32gb+memory).\n\nI was reading and at some point it appears on freeBSD the Postgres\nblock size was upped to 16kb, from 8kb. And on my fedora systems I\nbelieve the default build is 8kb.\n\nWhen we were using ext2/ext3 etc made not much of a difference as far\nas I can tell since one was limited to 4kb at the file system (so 2\ndisk access for every Postgres write/read ??).\n\nNow with ext4, we can set the block size, so would it make sense for\nlarger data sets, that end up loading the entire 5 million row table\ninto memory, to have a larger block size, or matching block size\nbetween postgres and the filesystem (given that raid is configured to\noptimize the writes over all the spindles in your storage?) (leaving\nthat piece alone).\n\nI want to focus on the relation of Postgres default block size, and\nwhat the various issues/gains are with upping it , at the same time\nmatching or doing some storage magic to figure out the optimum between\nPostgres/filesystem?\n\nTrying to gain some performance and wondered if any of this tuning\neven is something I should bother with.\n\nFedora f12 (looking at CentOS)\npostgres 8.4.4 (probably time to start getting to 9.x)\nslon 1.2.20 (same, needs an update)\n\nBut system tuning, 64 bit first..\n\nThanks\nTory\n",
"msg_date": "Thu, 26 May 2011 15:34:55 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance block size."
},
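For what it's worth, the block size a given build was compiled with can be checked from SQL, which is a quick sanity check before planning any filesystem matching. A small sketch (these are read-only preset parameters; the names below are as of roughly 8.4):

-- block_size is fixed at compile time (configure --with-blocksize, in kB),
-- so changing it means rebuilding and dump/reloading, not editing a GUC.
SELECT name, setting
FROM pg_settings
WHERE name IN ('block_size', 'wal_block_size', 'segment_size');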
{
"msg_contents": "On 27/05/2011 00:34, Tory M Blue wrote:\n> Working on some optimization as well as finally getting off my\n> backside and moving us to 64bit (32gb+memory).\n>\n> I was reading and at some point it appears on freeBSD the Postgres\n> block size was upped to 16kb, from 8kb. And on my fedora systems I\n> believe the default build is 8kb.\n\nThis happened some years ago but it was quickly reverted.\n\nThere were some apparent gains with the larger block size, especially \nsince FreeBSD's UFS uses 16 KiB blocks by default (going to 32 KiB \nreally soon now; note that this has no impact on small files because of \nthe concept of \"block fragments\"), but it was concluded that the stock \ninstallation should follow the defaults set by the developers, not \nporters. YMMV.\n\n> Trying to gain some performance and wondered if any of this tuning\n> even is something I should bother with.\n\nPlease do some benchmarks and report results.\n\n\n",
"msg_date": "Fri, 27 May 2011 12:54:27 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance block size."
}
] |
[
{
"msg_contents": "Hi.\n\nFirst extremely thanks for your works about postgresql .\n\nI wonder that after executing 'vaccumdb -z' some other process can not\nread their own msg queue during 2 ~ 3 minuts.\n\nvaccum executed every hour. and The processes have not any relations between\npostgreql.\n\nIs it possible ?\n\nHi. First extremely thanks for your works about postgresql .I wonder that after executing 'vaccumdb -z' some other process can not read their own msg queue during 2 ~ 3 minuts.\nvaccum executed every hour. and The processes have not any relations between postgreql.Is it possible ?",
"msg_date": "Fri, 27 May 2011 10:58:35 +0900",
"msg_from": "Junghwe Kim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is any effect other process performance after vaccumdb finished ?"
},
{
"msg_contents": "On 27/05/2011 9:58 AM, Junghwe Kim wrote:\n> Hi.\n>\n> First extremely thanks for your works about postgresql .\n>\n> I wonder that after executing 'vaccumdb -z' some other process can not\n> read their own msg queue during 2 ~ 3 minuts.\n\nThe most likely cause is checkpoint activity. Enable checkpoint logging \nand examine your server logs.\n\n> vaccum executed every hour. and The processes have not any relations\n> between postgreql.\n\nInstead of running vacuum from cron or some other scheduler, turn on \nautovacuum and set it to run aggressively. You will reduce the impact of \nvacuum if you run it *often* (every few minutes if not more frequently) \nand do so using autovacuum.\n\nBest results will be had on PostgreSQL 8.4 or above.\n\nPlease read this if you need to follow up, so you include enough \ninformation that a more complete answer can be given:\n\n http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Fri, 27 May 2011 11:31:59 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is any effect other process performance after vaccumdb finished ?"
}
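Before editing postgresql.conf it is worth confirming what the relevant settings currently are; a small sketch of the checks implied above (parameter names as of the 8.3-9.0 series):

SELECT name, setting, source
FROM pg_settings
WHERE name IN ('log_checkpoints', 'checkpoint_segments',
               'checkpoint_completion_target', 'autovacuum',
               'autovacuum_naptime', 'autovacuum_vacuum_cost_delay');

With log_checkpoints = on, the server log will show whether the 2 ~ 3 minute stalls line up with checkpoint activity.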
] |
[
{
"msg_contents": "Hi All\n\tMy database uses joined table inheritance and my server version is 9.0\n\nVersion string\tPostgreSQL 9.0rc1 on x86_64-pc-linux-gnu, compiled by GCC x86_64-pc-linux-gnu-gcc (Gentoo 4.4.4-r1 p1.1, pie-0.4.5) 4.4.4, 64-bit\t\n\nI have about 120,000 records in the table that everything else inherits from, if i truncate-cascaded this table it happens almost instantly. If i run 30,000 prepared \"DELETE FROM xxx WHERE \"ID\" = ?\" commands it takes close to 10 minutes.\n\nMy foreign keys to the base table are all set with \"ON DELETE CASCADE\". I've looked though all the feilds that relate to the \"ID\" in the base table and created btree indexes for them.\n\nCan anyone outline what I need to verify/do to ensure i'm getting the best performance for my deletes?\n\nRegards, Jarrod Chesney ",
"msg_date": "Tue, 31 May 2011 10:08:28 +1000",
"msg_from": "Jarrod Chesney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Delete performance"
},
{
"msg_contents": "9.0rc1 ?\nYou know that the stable 9.0 has been out for quite a while now.\nIts not going to affect the delete speed in any way, but I would\ngenerally advice you to upgrade it to the lates 9.0.x\n\nAs for the delete it self, check if you have indices on the tables\nthat refer the main table on the referred column. Often times that's\nthe issue.\nOther thing is , number of triggers on the other tables.\n\n-- \nGJ\n",
"msg_date": "Tue, 31 May 2011 10:26:59 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete performance"
},
{
"msg_contents": "> If i run 30,000 prepared \"DELETE FROM xxx WHERE \"ID\" = ?\" commands it \n> takes close to 10 minutes.\n\nDo you run those in a single transaction or do you use one transaction per \nDELETE ?\n\nIn the latter case, postgres will ensure each transaction is commited to \ndisk, at each commit. Since this involves waiting for the physical I/O to \nhappen, it is slow. If you do it 30.000 times, it will be 30.000 times \nslow.\n\nNote that you should really do :\n\nDELETE FROM table WHERE id IN (huge list of ids).\n\nor\n\nDELETE FROM table JOIN VALUES (list of ids) ON (...)\n\nAlso, check your foreign keys using cascading deletes have indexes in the \nreferencing tables. Without an index, finding the rows to cascade-delete \nwill be slow.\n",
"msg_date": "Wed, 01 Jun 2011 01:11:39 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete performance"
},
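A sketch of the "one DELETE for the whole list" approach; the table and column names below are just the placeholders used in the thread ("xxx", "ID"), not a real schema:

BEGIN;
DELETE FROM xxx t
USING (VALUES (101), (102), (103)) AS doomed(id)  -- or: WHERE t."ID" = ANY (ARRAY[...])
WHERE t."ID" = doomed.id;
COMMIT;

With all the ids in a single statement, the cascade triggers and index lookups still run per row, but the per-statement and per-commit overhead is paid once instead of 30,000 times.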
{
"msg_contents": "On 1/06/2011 7:11 AM, Pierre C wrote:\n>> If i run 30,000 prepared \"DELETE FROM xxx WHERE \"ID\" = ?\" commands it\n>> takes close to 10 minutes.\n>\n> Do you run those in a single transaction or do you use one transaction\n> per DELETE ?\n>\n> In the latter case, postgres will ensure each transaction is commited to\n> disk, at each commit. Since this involves waiting for the physical I/O\n> to happen, it is slow. If you do it 30.000 times, it will be 30.000\n> times slow.\n\nNot only that, but if you're doing it via some application the app has \nto wait for Pg to respond before it can send the next query. This adds \neven more delay, as do all the processor switches between Pg and your \napplication.\n\nIf you really must issue individual DELETE commands one-by-one, I \n*think* you can use synchronous_commit=off or\n\n SET LOCAL synchronous_commit TO OFF;\n\nSee:\n\nhttp://www.postgresql.org/docs/current/static/runtime-config-wal.html\n\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Wed, 01 Jun 2011 09:40:52 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete performance"
},
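If the deletes really do have to stay as individual autocommitted statements, the setting mentioned above can also be applied for the whole session; a minimal sketch, again with placeholder names:

SET synchronous_commit TO off;  -- commits no longer wait for the WAL fsync;
                                -- a crash can lose the last few transactions,
                                -- but cannot corrupt the database
DELETE FROM xxx WHERE "ID" = 1;
DELETE FROM xxx WHERE "ID" = 2;
-- ... repeat ...
RESET synchronous_commit;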
{
"msg_contents": "I'm executing 30,000 single delete statements in one transaction.\n\nAt this point i'm looking into combining the multiple deletes into one statement and breaking my big transaction into smaller ones of about 100 deletes or so.\n\nOn 01/06/2011, at 11:40 AM, Craig Ringer wrote:\n\n> On 1/06/2011 7:11 AM, Pierre C wrote:\n>>> If i run 30,000 prepared \"DELETE FROM xxx WHERE \"ID\" = ?\" commands it\n>>> takes close to 10 minutes.\n>> \n>> Do you run those in a single transaction or do you use one transaction\n>> per DELETE ?\n>> \n>> In the latter case, postgres will ensure each transaction is commited to\n>> disk, at each commit. Since this involves waiting for the physical I/O\n>> to happen, it is slow. If you do it 30.000 times, it will be 30.000\n>> times slow.\n> \n> Not only that, but if you're doing it via some application the app has to wait for Pg to respond before it can send the next query. This adds even more delay, as do all the processor switches between Pg and your application.\n> \n> If you really must issue individual DELETE commands one-by-one, I *think* you can use synchronous_commit=off or\n> \n> SET LOCAL synchronous_commit TO OFF;\n> \n> See:\n> \n> http://www.postgresql.org/docs/current/static/runtime-config-wal.html\n> \n> \n> -- \n> Craig Ringer\n> \n> Tech-related writing at http://soapyfrogs.blogspot.com/\n\n",
"msg_date": "Wed, 1 Jun 2011 11:45:04 +1000",
"msg_from": "Jarrod Chesney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete performance"
},
{
"msg_contents": "On 05/30/2011 08:08 PM, Jarrod Chesney wrote:\n> \tMy database uses joined table inheritance and my server version is 9.0\n> I have about 120,000 records in the table that everything else inherits from, if i truncate-cascaded this table it happens almost instantly. If i run 30,000 prepared \"DELETE FROM xxx WHERE \"ID\" = ?\" commands it takes close to 10 minutes.\n>\n> My foreign keys to the base table are all set with \"ON DELETE CASCADE\".\n\nYou may also want to make them DEFERRABLE and then use \"SET CONSTRAINTS \nALL DEFERRABLE\" so that the constraint checking all happens at one \ntime. This will cause more memory to be used, but all the constraint \nrelated work will happen in a batch.\n\nYou mentioned inheritance. That can cause some unexpected problems \nsometimes. You might want to do:\n\nEXPLAIN DELETE FROM ...\n\nTo see how this is executing. EXPLAIN works fine on DELETE statements, \ntoo, and it may highlight something strange about how the deletion is \nhappening. If you can, use EXPLAIN ANALYZE, but note that this will \nactually execute the statement--the deletion will happen, it's not just \na test.\n\nThere may be a problem with the query plan for the deletion that's \nactually causing the issue here, such as missing the right indexes. If \nyou have trouble reading it, http://explain.depesz.com/ is a good web \nresources to help break down where the time is going.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 01 Jun 2011 02:14:11 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete performance"
},
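To make the suggestions above concrete, here is an illustrative sketch; every object name in it is made up, and as far as I know deferring a foreign key postpones its checks but not the ON DELETE CASCADE actions themselves, so the index on the referencing column is usually the bigger win:

-- Make sure each referencing column behind an ON DELETE CASCADE has an index.
CREATE INDEX CONCURRENTLY child_tbl_parent_id_idx ON child_tbl (parent_id);

-- Check the plan for the delete itself (EXPLAIN ANALYZE would really run it).
EXPLAIN DELETE FROM parent_tbl WHERE id = 12345;

-- Declaring the FK deferrable allows its checks to be batched at commit time.
ALTER TABLE child_tbl
  ADD CONSTRAINT child_tbl_parent_fk FOREIGN KEY (parent_id)
      REFERENCES parent_tbl (id) ON DELETE CASCADE
      DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
-- ... deletes ...
COMMIT;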
{
"msg_contents": "\nOn 01/06/2011, at 11:45 AM, Jarrod Chesney wrote:\n\n> I'm executing 30,000 single delete statements in one transaction.\n> \n> At this point i'm looking into combining the multiple deletes into one statement and breaking my big transaction into smaller ones of about 100 deletes or so.\n> \n> On 01/06/2011, at 11:40 AM, Craig Ringer wrote:\n> \n>> On 1/06/2011 7:11 AM, Pierre C wrote:\n>>>> If i run 30,000 prepared \"DELETE FROM xxx WHERE \"ID\" = ?\" commands it\n>>>> takes close to 10 minutes.\n>>> \n>>> Do you run those in a single transaction or do you use one transaction\n>>> per DELETE ?\n>>> \n>>> In the latter case, postgres will ensure each transaction is commited to\n>>> disk, at each commit. Since this involves waiting for the physical I/O\n>>> to happen, it is slow. If you do it 30.000 times, it will be 30.000\n>>> times slow.\n>> \n>> Not only that, but if you're doing it via some application the app has to wait for Pg to respond before it can send the next query. This adds even more delay, as do all the processor switches between Pg and your application.\n>> \n>> If you really must issue individual DELETE commands one-by-one, I *think* you can use synchronous_commit=off or\n>> \n>> SET LOCAL synchronous_commit TO OFF;\n>> \n>> See:\n>> \n>> http://www.postgresql.org/docs/current/static/runtime-config-wal.html\n>> \n>> \n>> -- \n>> Craig Ringer\n>> \n>> Tech-related writing at http://soapyfrogs.blogspot.com/\n> \n\nApologies for top posting, Sorry.",
"msg_date": "Thu, 2 Jun 2011 11:46:38 +1000",
"msg_from": "Jarrod Chesney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete performance"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have created partition on table Round_Action , which has 3 child partition\ntables.\n\n\nWhen I am firing a simple select query with limit on parent table it is\ntaking huge time to execute. But when I am firing this query directly on\nchild table it is taking few milliseconds.\n\n\nEXP.\nselect * from Round_Action where action_id =50000 limit 100 → execution time\n80 sec\n\nselect * from Round_Action_CH1 action_id =50000 limit 100 → execution time\n0.1 sec\n\nRound_Action is the parent table and has no record in the tables, all the\nrecords are lying in child tables.\n\nTable is having index on action_id.\n\nPartition is trigger based.\nPostgres Version : (PostgreSQL) 8.4.6\n\nWhy there is difference in execution time? What I am doing wrong?\n\n\n\n-- \nThanks & regards,\n JENISH\n\nHi All, \nI have created partition on table Round_Action , which has 3 child partition tables.\n\nWhen I am firing a simple select query with limit on parent table it is taking huge time to execute. But when I am firing this query directly on child table it is taking few milliseconds.\n\nEXP.\nselect * from Round_Action where action_id =50000 limit 100 → execution time 80 sec\nselect * from Round_Action_CH1 action_id =50000 limit 100 → execution time 0.1 sec\nRound_Action is the parent table and has no record in the tables, all the records are lying in child tables.\nTable is having index on action_id.\nPartition is trigger based.\nPostgres Version : (PostgreSQL) 8.4.6\n \nWhy there is difference in execution time? What I am doing wrong?\n-- Thanks & regards, JENISH",
"msg_date": "Tue, 31 May 2011 10:20:59 +0300",
"msg_from": "Jenish <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange behavior of child table."
},
{
"msg_contents": "On Tue, 2011-05-31 at 10:20 +0300, Jenish wrote:\n> Hi All, \n> \n> I have created partition on table Round_Action , which has 3 child\n> partition tables.\n> \n> \n> When I am firing a simple select query with limit on parent table it\n> is taking huge time to execute. But when I am firing this query\n> directly on child table it is taking few milliseconds.\n> \n> \n> EXP.\n> select * from Round_Action where action_id =50000 limit 100 →\n> execution time 80 sec\n> select * from Round_Action_CH1 action_id =50000 limit 100 → execution\n> time 0.1 sec\n> \n> Round_Action is the parent table and has no record in the tables, all\n> the records are lying in child tables.\n\nRun EXPLAIN ANALYZE on each of those queries, and post the results.\n\nSee http://wiki.postgresql.org/wiki/SlowQueryQuestions for a guide on\nhow to give the necessary information for others to help.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Wed, 01 Jun 2011 11:53:42 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange behavior of child table."
},
{
"msg_contents": "Hi Jeff,\n\nThanks for the help.\n\nThis is the first post by me, and I did mistake unknowingly. I will\ntake care of it next time.\n\nAgain thanks a lot for the help.\n\n--\nThanks & regards,\nJENISH VYAS\n\n\nOn Thu, Jun 2, 2011 at 10:04 AM, Jeff Davis <[email protected]> wrote:\n>\n> In the future, please remember to CC the list when replying unless you\n> have a reason not to. This thread is already off-list by now.\n>\n> Also, I just noticed that this plan has a sort, and the slow query in\n> the previous email did not. That looks like it might have been a mistake\n> when running the regular EXPLAIN (without ANALYZE), because the slow\n> plan does not look correct without a sort. Anyway...\n>\n> On Thu, 2011-06-02 at 09:23 +0300, Jenish wrote:\n> > Hi Jeff,\n> >\n> > This table is growing rapidly. Now the parent table is taking much\n> > more time for the same query. below is the complite details.\n>\n>\n> > \" -> Bitmap Heap Scan on game_round_actions_old\n> > game_round_actions (cost=73355.48..7277769.30 rows=2630099 width=65)\n> > (actual time=78319.248..302586.235 rows=2304337 loops=1)\"\n> > \" Recheck Cond: (table_id = 1)\"\n> > \" -> Bitmap Index Scan on\n> > \"PK_game_round_actions\" (cost=0.00..72697.95 rows=2630099 width=0)\n> > (actual time=78313.095..78313.095 rows=2304337 loops=1)\"\n> > \" Index Cond: (table_id = 1)\"\n>\n> That is the part of the plan that is taking time. Compare that to the\n> other plan:\n>\n> > 2) Child query\n> > explain analyse Select * from game_round_actions_old where table_id =\n> > 1 order by table_id,round_id limit 100\n> > \"Limit (cost=0.00..335.97 rows=100 width=65) (actual\n> > time=0.035..0.216 rows=100 loops=1)\"\n> > \" -> Index Scan using \"PK_game_round_actions\" on\n> > game_round_actions_old (cost=0.00..8836452.71 rows=2630099 width=65)\n> > (actual time=0.033..0.110 rows=100 loops=1)\"\n> > \" Index Cond: (table_id = 1)\"\n>\n> Notice that it's actually using the same index, but the slow plan is\n> using a bitmap index scan, and the fast plan is using a normal (ordered)\n> index scan.\n>\n> What's happening is that the top-level query is asking to ORDER BY\n> table_id, round_id LIMIT 100. Querying the child table can get that\n> order directly from the index, so it scans the index in order, fetches\n> only 100 tuples, and then it's done.\n>\n> But when querying the parent table, it's getting tuples from two tables,\n> and so the tuples aren't automatically in the right order to satisfy the\n> ORDER BY. So, it's collecting all of the matching tuples, which is about\n> 2.6M, then sorting them, then returning the first 100 -- much slower!\n>\n> A smarter approach is to scan both tables in the correct order\n> individually, and merge the results until you get 100 tuples. That would\n> make both queries run fast. 9.1 is smart enough to do that, but it's\n> still in beta right now.\n>\n> The only answer right now is to rewrite your slow query to be more like\n> the fast one. I think if you manually push down the ORDER BY ... LIMIT,\n> it will do the job. Something like:\n>\n> select * from\n> (select * from game_round_actions_old\n> where table_id = 1\n> order by table_id,round_id limit 100\n> UNION ALL\n> select * from game_round_actions_new\n> where table_id = 1\n> order by table_id,round_id limit 100)\n> order by table_id,round_id limit 100;\n>\n> might work. I haven't actually tested that query though.\n>\n> Regards,\n> Jeff Davis\n>\n>\n",
"msg_date": "Thu, 2 Jun 2011 16:39:35 +0300",
"msg_from": "Jenish <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange behavior of child table."
}
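For readability, the manual push-down suggested in the quoted reply looks like this when written out; an untested sketch using the table names from the thread (9.1's ability to merge ordered child scans makes it unnecessary there):

SELECT *
FROM ((SELECT * FROM game_round_actions_old
        WHERE table_id = 1
        ORDER BY table_id, round_id LIMIT 100)
      UNION ALL
      (SELECT * FROM game_round_actions_new
        WHERE table_id = 1
        ORDER BY table_id, round_id LIMIT 100)) AS u
ORDER BY table_id, round_id
LIMIT 100;

Each arm can use the primary key index and stop after 100 rows, so the outer sort only ever sees 200 rows.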
] |
[
{
"msg_contents": "On Wed, May 25, 2011 at 4:41 PM, Greg Smith <[email protected]> wrote:\n> On 05/23/2011 06:16 PM, John Rouillard wrote:\n>>\n>> OS: centos 5.5\n>> Filesystem: data - ext4 (note 4 not 3); 6.6T formatted\n>> wal - ext4; 1.5T formatted\n>> Raid: data - level 10, 8 disk wd2003; controller LSI MegaRAID SAS 9260-4i\n>> wal - level 1, 2 disk wd2003; controller LSI MegaRAID SAS 9260-4i\n>>\n>> Could it be an ext4 issue? It seems that ext4 may still be at the\n>> bleeding edge for postgres use.\n>>\n>\n> I would not trust ext4 on CentOS 5.5 at all. ext4 support in 5.5 is labeled\n> by RedHat as being in \"Technology Preview\" state. I believe that if you had\n> a real RedHat system instead of CentOS kernel, you'd discover it's hard to\n> even get it installed--you need to basically say \"yes, I know it's not for\n> production, I want it anyway\" to get preview packages. It's not really\n> intended for production use.\n>\n> What I'm hearing from people is that they run into the occasional ext4 bug\n> with PostgreSQL, but the serious ones aren't happening very often now, on\n> systems running RHEL6 or Debian Squeeze. Those kernels are way, way ahead\n> of the ext4 backport in RHEL5 based systems, and they're just barely stable.\n\nSo if you're running a RHEL5.4 or RHEL5.5 system, are you basically\nstuck with ext3? I'm not sure if I'm remembering correctly, but ISTM\nthat you've been uncomfortable with BOTH ext4 and XFS prior to RHEL6;\nbut OK with both beginning with RHEL6.\n\nAlso, any tips on mount options for XFS/ext4/ext3?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 31 May 2011 11:35:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": true,
"msg_subject": "picking a filesystem"
},
{
"msg_contents": "On 05/31/2011 10:35 AM, Robert Haas wrote:\n\n> So if you're running a RHEL5.4 or RHEL5.5 system, are you basically\n> stuck with ext3? I'm not sure if I'm remembering correctly, but ISTM\n> that you've been uncomfortable with BOTH ext4 and XFS prior to RHEL6;\n> but OK with both beginning with RHEL6.\n\nWe haven't had any problems (yet) running XFS on CentOS 5.5. Sure, it \ndoesn't have a lot of the recent kernel advances that made it faster, \nbut it out-performed our EXT3 filesystem in some cases by 40%.\n\n> Also, any tips on mount options for XFS/ext4/ext3?\n\nWe got the best performance by increasing the agcount during formatting. \nBut we also used some of the advanced logging options. I set the size to \n128m, enabled lazy-count to reduce logging overhead, and set version to \n2 so we could use a bigger log buffer in the mount options. So:\n\nmkfs.xfs -d agcount=256 -l size=128m,lazy-count=1,version=2\n\nFor mounting, aside from the usual noatime and nodiratime, we set the \nallocsize to 256m to reduce fragmentation, maxed out the logbufs at 8, \nand the logbsize to 256k to improve file deletion performance, and set \nthe attr2 option to better handle inodes. So:\n\nmount -o allocsize=256m,logbufs=8,noatime,nodiratime,attr2,logbsize=256k\n\nMaybe more recent XFS kernels have other options we're not aware of, but \nwe've had good luck with these so far.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Tue, 31 May 2011 12:09:23 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: picking a filesystem"
},
{
"msg_contents": "On Tue, May 31, 2011 at 8:35 AM, Robert Haas <[email protected]> wrote:\n\n>\n> So if you're running a RHEL5.4 or RHEL5.5 system, are you basically\n> stuck with ext3? I'm not sure if I'm remembering correctly, but ISTM\n> that you've been uncomfortable with BOTH ext4 and XFS prior to RHEL6;\n> but OK with both beginning with RHEL6.\n>\n> Also, any tips on mount options for XFS/ext4/ext3?\n>\n\nGreg's book has a whole chapter that goes through the pros and cons of each\ntype of fs and offers suggestions for configuring most of them for postgres.\n I haven't actually read the chapter in detail yet, so I won't try to\nsummarize its content here. It appeared to be pretty comprehensive during my\nquick scan of the chapter\n\nOn Tue, May 31, 2011 at 8:35 AM, Robert Haas <[email protected]> wrote:\n\nSo if you're running a RHEL5.4 or RHEL5.5 system, are you basically\nstuck with ext3? I'm not sure if I'm remembering correctly, but ISTM\nthat you've been uncomfortable with BOTH ext4 and XFS prior to RHEL6;\nbut OK with both beginning with RHEL6.\n\nAlso, any tips on mount options for XFS/ext4/ext3?Greg's book has a whole chapter that goes through the pros and cons of each type of fs and offers suggestions for configuring most of them for postgres. I haven't actually read the chapter in detail yet, so I won't try to summarize its content here. It appeared to be pretty comprehensive during my quick scan of the chapter",
"msg_date": "Tue, 31 May 2011 10:58:45 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: picking a filesystem"
}
] |
[
{
"msg_contents": "Hi All;\n\nWe have a table with approx 200 columns. about a dozen columns are text \ndata types and the rest are a mix of integers , bigint's and double \nprecision types.\n\nThe table has about 25million rows.\n\n\nThe app wants to run a query like this:\n\nselect count(pri_num) from max_xtrv_st_t\nwhere pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1\n group by tds_cx_ind, cxs_ind_2 )\n\nI've tried to split the query up but made little progress, pri_num and \ntds_cx_ind are bigint's and cxs_ind_2 is an integer\n\nThe table has an index on all 3 columns (3 separate indexes)\n\nAnyone have any thoughts on tuning this query?\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n",
"msg_date": "Wed, 01 Jun 2011 14:14:10 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem query"
},
{
"msg_contents": "On Wed, Jun 1, 2011 at 3:14 PM, CS DBA <[email protected]> wrote:\n> Hi All;\n>\n> We have a table with approx 200 columns. about a dozen columns are text data\n> types and the rest are a mix of integers , bigint's and double precision\n> types.\n>\n> The table has about 25million rows.\n>\n>\n> The app wants to run a query like this:\n>\n> select count(pri_num) from max_xtrv_st_t\n> where pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1\n> group by tds_cx_ind, cxs_ind_2 )\n>\n> I've tried to split the query up but made little progress, pri_num and\n> tds_cx_ind are bigint's and cxs_ind_2 is an integer\n>\n> The table has an index on all 3 columns (3 separate indexes)\n>\n> Anyone have any thoughts on tuning this query?\n\nneed postgres version# and the current explain analyze (or explain, if\nyou can't wait for it)\n\nmerlin\n",
"msg_date": "Wed, 1 Jun 2011 16:15:24 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
},
{
"msg_contents": "CS DBA <[email protected]> wrote:\n \n> The app wants to run a query like this:\n> \n> select count(pri_num) from max_xtrv_st_t\n> where pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1\n> group by tds_cx_ind, cxs_ind_2)\n \nWhy not something simpler? There are a number of possibilities, and\nI don't claim this one is necessarily best (or even error free), but\nhow about something like?:\n\nselect count(*) from \n (select distinct max(pri_num)\n from max_xtrv_st_t\n group by tds_cx_ind, cxs_ind_2) x\n \n-Kevin\n",
"msg_date": "Wed, 01 Jun 2011 16:38:16 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
},
{
"msg_contents": "On 06/01/2011 03:15 PM, Merlin Moncure wrote:\n> On Wed, Jun 1, 2011 at 3:14 PM, CS DBA<[email protected]> wrote:\n>> Hi All;\n>>\n>> We have a table with approx 200 columns. about a dozen columns are text data\n>> types and the rest are a mix of integers , bigint's and double precision\n>> types.\n>>\n>> The table has about 25million rows.\n>>\n>>\n>> The app wants to run a query like this:\n>>\n>> select count(pri_num) from max_xtrv_st_t\n>> where pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1\n>> group by tds_cx_ind, cxs_ind_2 )\n>>\n>> I've tried to split the query up but made little progress, pri_num and\n>> tds_cx_ind are bigint's and cxs_ind_2 is an integer\n>>\n>> The table has an index on all 3 columns (3 separate indexes)\n>>\n>> Anyone have any thoughts on tuning this query?\n> need postgres version# and the current explain analyze (or explain, if\n> you can't wait for it)\n>\n> merlin\n\n\nPostgresql version 8.4.2\n\n\nExplain:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Aggregate (cost=6551481.85..6551481.86 rows=1 width=8)\n -> Nested Loop (cost=6550474.85..6551481.35 rows=200 width=8)\n -> HashAggregate (cost=6550474.85..6550476.85 rows=200 width=8)\n -> GroupAggregate (cost=5918263.18..6334840.58 \nrows=17250742 width=20)\n -> Sort (cost=5918263.18..5968498.96 \nrows=20094312 width=20)\n Sort Key: tds_cx_ind, cxs_ind_2\n -> Seq Scan on max_xtrv_st_t \n(cost=0.00..3068701.12 rows=20094312 width=20)\n -> Index Scan using max_xtrv_st_t_pkey on max_xtrv_st_t \n(cost=0.00..5.01 rows=1 width=8)\n Index Cond: (max_xtrv_st_t.pri_num = \n(max(max_xtrv_st_t.pri_num)))\n(9 rows)\n\n\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n",
"msg_date": "Wed, 01 Jun 2011 16:26:57 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem query"
},
{
"msg_contents": "On 06/01/2011 03:38 PM, Kevin Grittner wrote:\n> CS DBA<[email protected]> wrote:\n>\n>> The app wants to run a query like this:\n>>\n>> select count(pri_num) from max_xtrv_st_t\n>> where pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1\n>> group by tds_cx_ind, cxs_ind_2)\n>\n> Why not something simpler? There are a number of possibilities, and\n> I don't claim this one is necessarily best (or even error free), but\n> how about something like?:\n>\n> select count(*) from\n> (select distinct max(pri_num)\n> from max_xtrv_st_t\n> group by tds_cx_ind, cxs_ind_2) x\n>\n> -Kevin\n\nI've tried a number of alternates, each one wants to do a seq scan of \nthe table (including your suggestion above).\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n",
"msg_date": "Wed, 01 Jun 2011 16:28:33 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem query"
},
{
"msg_contents": "On Wed, Jun 1, 2011 at 6:28 PM, CS DBA <[email protected]> wrote:\n> On 06/01/2011 03:38 PM, Kevin Grittner wrote:\n>>\n>> CS DBA<[email protected]> wrote:\n>>\n>>> The app wants to run a query like this:\n>>>\n>>> select count(pri_num) from max_xtrv_st_t\n>>> where pri_num in (select max(pri_num) from max_xtrv_st_t where 1=1\n>>> group by tds_cx_ind, cxs_ind_2)\n>>\n>> Why not something simpler? There are a number of possibilities, and\n>> I don't claim this one is necessarily best (or even error free), but\n>> how about something like?:\n>>\n>> select count(*) from\n>> (select distinct max(pri_num)\n>> from max_xtrv_st_t\n>> group by tds_cx_ind, cxs_ind_2) x\n>>\n>> -Kevin\n>\n> I've tried a number of alternates, each one wants to do a seq scan of the\n> table (including your suggestion above).\n\nwhy wouldn't you expect a sequential scan? what is the number of\nunique values for tds_cx_ind, cxs_ind_2 on the table?\n\none of the most important techniques with query optimization is to put\nyourself in the place of the database and try to imagine how *you*\nwould pass over the records...then try and coerce the database into\nthat plan.\n\nmerlin\n",
"msg_date": "Wed, 1 Jun 2011 21:31:18 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
},
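A quick way to answer Merlin's question -- how many unique (tds_cx_ind, cxs_ind_2) combinations exist, and what the planner believes about those columns -- is a sketch like the following; the table and column names are the ones from the thread, everything else is generic:

    -- exact number of distinct combinations
    SELECT count(*) AS distinct_pairs
      FROM (SELECT 1
              FROM max_xtrv_st_t
             GROUP BY tds_cx_ind, cxs_ind_2) s;

    -- what ANALYZE has recorded for each column individually
    SELECT attname, n_distinct
      FROM pg_stats
     WHERE tablename = 'max_xtrv_st_t'
       AND attname IN ('tds_cx_ind', 'cxs_ind_2');

If the real number of combinations is tiny compared to the ~20M rows in the plan, a single pass plus aggregation is usually the right shape for the query anyway.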
{
"msg_contents": "CS DBA <[email protected]> wrote:\n> On 06/01/2011 03:38 PM, Kevin Grittner wrote:\n \n>> select count(*) from\n>> (select distinct max(pri_num)\n>> from max_xtrv_st_t\n>> group by tds_cx_ind, cxs_ind_2) x\n \n> I've tried a number of alternates, each one wants to do a seq scan\n> of the table (including your suggestion above).\n \nIs there some reason to believe that a sequential scan isn't the\nfastest way to get the data? When generating summary data like\nthis, it often is faster than lots of random access. If you can\ncoerce it into a faster plan by turning off enable_seqscan on the\nconnection before running the query, then we can look at how you\nmight adjust your costing parameters to get better plans.\n \n-Kevin\n",
"msg_date": "Thu, 02 Jun 2011 08:47:02 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
},
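For reference, the enable_seqscan experiment Kevin suggests can be confined to one transaction, so it never affects other connections; a minimal sketch using the query from the thread:

    BEGIN;
    SET LOCAL enable_seqscan = off;   -- discourage seq scans for this transaction only
    EXPLAIN ANALYZE
    SELECT count(pri_num) FROM max_xtrv_st_t
     WHERE pri_num IN (SELECT max(pri_num)
                         FROM max_xtrv_st_t
                        GROUP BY tds_cx_ind, cxs_ind_2);
    ROLLBACK;   -- the setting disappears with the transaction

Comparing that output against the seq-scan plan is what tells you whether the costing parameters (random_page_cost, effective_cache_size and friends) are worth adjusting.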
{
"msg_contents": "On 06/02/2011 08:47 AM, Kevin Grittner wrote:\n\n> Is there some reason to believe that a sequential scan isn't the\n> fastest way to get the data? When generating summary data like\n> this, it often is faster than lots of random access. If you can\n> coerce it into a faster plan by turning off enable_seqscan on the\n> connection before running the query, then we can look at how you\n> might adjust your costing parameters to get better plans.\n\nThis is right. There's really no way for the optimizer to get the values \nyou want, even though your columns are indexed. But your query is a tad \nnaive, unless you wrote up a special case for us. You're counting the \nnumber of maximum values in your table for tds_cx_ind and cxs_ind_2, but \nthere will always be at least one for every combination. What you really \nwant is this:\n\nSELECT count(1) FROM (\n SELECT DISTINCT tds_cx_ind, cxs_ind_2\n FROM max_xtrv_st_t\n);\n\nIf you really must have that inner query because it's generated and you \nwon't know what it contains, you'd be better off with a CTE:\n\nWITH x AS (\n SELECT max(pri_num)\n FROM max_xtrv_st_t\n GROUP BY tds_cx_ind, cxs_ind_2\n)\nSELECT count(1) FROM x;\n\nYou'll still get a sequence scan from these, however.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 2 Jun 2011 10:34:06 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n \n> You're counting the number of maximum values in your table for\n> tds_cx_ind and cxs_ind_2, but there will always be at least one\n> for every combination.\n \nGood point.\n \n> What you really want is this:\n> \n> SELECT count(1) FROM (\n> SELECT DISTINCT tds_cx_ind, cxs_ind_2\n> FROM max_xtrv_st_t\n> );\n \nOr maybe:\n \nSELECT count(DISTINCT (tds_cx_ind, cxs_ind_2)) FROM max_xtrv_st_t;\n \n-Kevin\n",
"msg_date": "Thu, 02 Jun 2011 10:41:45 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n> Shaun Thomas <[email protected]> wrote:\n \n>> What you really want is this:\n>> \n>> SELECT count(1) FROM (\n>> SELECT DISTINCT tds_cx_ind, cxs_ind_2\n>> FROM max_xtrv_st_t\n>> );\n> \n> Or maybe:\n> \n> SELECT count(DISTINCT (tds_cx_ind, cxs_ind_2)) FROM max_xtrv_st_t;\n \nOr maybe not. I tried various forms of the query against \"real\"\ntables here, and Shaun's format was ten times as fast as my last\nsuggestion and 12% faster than my first suggestion.\n \nThey all gave the same result, of course, and they all used a seq\nscan..\n \n-Kevin\n",
"msg_date": "Thu, 02 Jun 2011 11:15:50 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
},
{
"msg_contents": "On 06/02/2011 11:15 AM, Kevin Grittner wrote:\n\n> They all gave the same result, of course, and they all used a seq\n> scan..\n\nAnd they all will. I created a test table with a bunch of \ngenerate_series and emulated 200 unique matches of column1 and column2, \non a table with a mere 1-million rows (5000 for each of column3). And no \nmatter what index combination I used, it always did a sequence scan... \neven when I indexed every column and indexed column3 descending.\n\nBut here's the thing. I turned off sequence scans to force index scans, \nand it got 2-3x slower. But is that really surprising? Without a proper \nwhere exclusion, it has to probe every occurrence... also known as a \nloose index scan, which PostgreSQL doesn't have (yet).\n\nAnd... this is horrifying, but:\n\nWITH RECURSIVE t1 AS (\n SELECT min(f.tds_cx_ind) AS tds_cx_ind\n FROM max_xtrv_st_t f\n UNION ALL\n SELECT (SELECT min(tds_cx_ind)\n FROM max_xtrv_st_t f\n WHERE f.tds_cx_ind > t1.tds_cx_ind)\n FROM t1\n WHERE t1.tds_cx_ind IS NOT NULL\n), t2 AS (\n SELECT min(f.cxs_ind_2) AS cxs_ind_2\n FROM max_xtrv_st_t f\n UNION ALL\n SELECT (SELECT min(cxs_ind_2)\n FROM max_xtrv_st_t f\n WHERE f.cxs_ind_2 > t2.cxs_ind_2)\n FROM t2\n WHERE t2.cxs_ind_2 IS NOT NULL\n)\nSELECT t1.tds_cx_ind, t2.cxs_ind_2 FROM t1, t2\n WHERE t1.tds_cx_ind IS NOT NULL\n AND t2.cxs_ind_2 IS NOT NULL;\n\nIt works on my test, but might not be what OP wants. It's a cross \nproduct of the two unique column sets, and it's possible it represents \ncombinations that don't exist. But I suppose a late EXISTS pass could \nsolve that problem.\n\nI assume there's an easier way to do that. In either case, when is PG \ngetting loose index scans? ;)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 2 Jun 2011 12:31:58 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
},
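The "late EXISTS pass" Shaun mentions would look roughly like the following; it assumes the same WITH RECURSIVE t1/t2 definitions from his query and simply filters the cross product down to combinations that actually occur (an index on (tds_cx_ind, cxs_ind_2) would keep the probes cheap):

    SELECT t1.tds_cx_ind, t2.cxs_ind_2
      FROM t1, t2
     WHERE t1.tds_cx_ind IS NOT NULL
       AND t2.cxs_ind_2 IS NOT NULL
       AND EXISTS (SELECT 1
                     FROM max_xtrv_st_t f
                    WHERE f.tds_cx_ind = t1.tds_cx_ind
                      AND f.cxs_ind_2 = t2.cxs_ind_2);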
{
"msg_contents": "On 06/02/2011 11:31 AM, Shaun Thomas wrote:\n> On 06/02/2011 11:15 AM, Kevin Grittner wrote:\n>\n>> They all gave the same result, of course, and they all used a seq\n>> scan..\n>\n> And they all will. I created a test table with a bunch of \n> generate_series and emulated 200 unique matches of column1 and \n> column2, on a table with a mere 1-million rows (5000 for each of \n> column3). And no matter what index combination I used, it always did a \n> sequence scan... even when I indexed every column and indexed column3 \n> descending.\n>\n> But here's the thing. I turned off sequence scans to force index \n> scans, and it got 2-3x slower. But is that really surprising? Without \n> a proper where exclusion, it has to probe every occurrence... also \n> known as a loose index scan, which PostgreSQL doesn't have (yet).\n>\n> And... this is horrifying, but:\n>\n> WITH RECURSIVE t1 AS (\n> SELECT min(f.tds_cx_ind) AS tds_cx_ind\n> FROM max_xtrv_st_t f\n> UNION ALL\n> SELECT (SELECT min(tds_cx_ind)\n> FROM max_xtrv_st_t f\n> WHERE f.tds_cx_ind > t1.tds_cx_ind)\n> FROM t1\n> WHERE t1.tds_cx_ind IS NOT NULL\n> ), t2 AS (\n> SELECT min(f.cxs_ind_2) AS cxs_ind_2\n> FROM max_xtrv_st_t f\n> UNION ALL\n> SELECT (SELECT min(cxs_ind_2)\n> FROM max_xtrv_st_t f\n> WHERE f.cxs_ind_2 > t2.cxs_ind_2)\n> FROM t2\n> WHERE t2.cxs_ind_2 IS NOT NULL\n> )\n> SELECT t1.tds_cx_ind, t2.cxs_ind_2 FROM t1, t2\n> WHERE t1.tds_cx_ind IS NOT NULL\n> AND t2.cxs_ind_2 IS NOT NULL;\n>\n> It works on my test, but might not be what OP wants. It's a cross \n> product of the two unique column sets, and it's possible it represents \n> combinations that don't exist. But I suppose a late EXISTS pass could \n> solve that problem.\n>\n> I assume there's an easier way to do that. In either case, when is PG \n> getting loose index scans? ;)\n>\n\n\nThanks everyone for the feedback. I'll attempt the suggestions from \ntoday as soon as I can and let you know where we end up.\n\n\n-- \n---------------------------------------------\nKevin Kempter - Constent State\nA PostgreSQL Professional Services Company\n www.consistentstate.com\n---------------------------------------------\n\n",
"msg_date": "Thu, 02 Jun 2011 13:17:21 -0600",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Problem query"
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n> On 06/02/2011 11:15 AM, Kevin Grittner wrote:\n>\n>> They all gave the same result, of course, and they all used a seq\n>> scan..\n>\n> And they all will.\n \nI always eschew generalizations, since they're always wrong. ;-) I\nused a real table which had somewhat similar indexes to what I think\nthe OP is using, and tried the fastest query using the sequential\nscan. A typical result once cached:\n \nexplain analyze select count(*) from\n (select distinct \"caseType\", \"statusCode\" from \"Case\") x;\n \n Aggregate (cost=10105.01..10105.02 rows=1 width=0)\n (actual time=478.893..478.893 rows=1 loops=1)\n -> HashAggregate (cost=10101.95..10103.31 rows=136 width=6)\n (actual time=478.861..478.881 rows=79 loops=1)\n -> Seq Scan on \"Case\"\n (cost=0.00..7419.20 rows=536550 width=6)\n (actual time=0.010..316.481 rows=536550 loops=1)\n Total runtime: 478.940 ms\n \nThen I tried it with a setting designed to discourage seq scans.\nA typical run:\n \nset cpu_tuple_cost = 1;\nexplain analyze select count(*) from\n (select distinct \"caseType\", \"statusCode\" from \"Case\") x;\n\n Aggregate (cost=544529.30..544530.30 rows=1 width=0)\n (actual time=443.972..443.972 rows=1 loops=1)\n -> Unique (cost=0.00..544392.95 rows=136 width=6)\n (actual time=0.021..443.933 rows=79 loops=1)\n -> Index Scan using \"Case_CaseTypeStatus\" on \"Case\"\n (cost=0.00..541710.20 rows=536550 width=6)\n (actual time=0.019..347.193 rows=536550 loops=1)\n Total runtime: 444.014 ms\n \nNow, on a table which didn't fit in cache, this would probably be\nanother story....\n \n-Kevin\n",
"msg_date": "Thu, 02 Jun 2011 14:57:03 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem query"
}
] |
[
{
"msg_contents": "Hi. I'm interested in understanding the differences between\nCLUSTERing a table and making a dedicated one.\n\nWe have a table with about 1 million records. On a given day, only\nabout 1% of them are of interest. That 1% changes every day (it's\nWHERE active_date = today), and so we index and cluster on it.\n\nEven so, the planner shows a very large cost for the Index Scan: about\n3500. If I instead do a SELECT INTO temp_table FROM big_table WHERE\nactive_date = today, and then do SELECT * FROM temp_table, I get a\nplanned cost of 65. Yet, the actual time for both queries is almost\nidentical.\n\nQuestions:\n1. Why is there such a discrepancy between the planner's estimate and\nthe actual cost?\n\n2. In a case like this, will I in general see a performance gain by\ndoing a daily SELECT INTO and then querying from that table? My ad hoc\ntest doesn't indicate I would (despite the planner's prediction), and\nI'd rather avoid this if it won't help.\n\n3. In general, does CLUSTER provide all the performance benefits of a\ndedicated table? If it doesn't, what does it lack?\n\nThank you.\n",
"msg_date": "Wed, 1 Jun 2011 19:54:35 -0400",
"msg_from": "Robert James <[email protected]>",
"msg_from_op": true,
"msg_subject": "CLUSTER versus a dedicated table"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Robert James\n> Sent: Wednesday, June 01, 2011 5:55 PM\n> To: [email protected]\n> Subject: [PERFORM] CLUSTER versus a dedicated table\n> \n> Hi. I'm interested in understanding the differences between\n> CLUSTERing a table and making a dedicated one.\n> \n> We have a table with about 1 million records. On a given day, only\n> about 1% of them are of interest. That 1% changes every day (it's\n> WHERE active_date = today), and so we index and cluster on it.\n> \n> Even so, the planner shows a very large cost for the Index Scan: about\n> 3500. If I instead do a SELECT INTO temp_table FROM big_table WHERE\n> active_date = today, and then do SELECT * FROM temp_table, I get a\n> planned cost of 65. Yet, the actual time for both queries is almost\n> identical.\n> \n> Questions:\n> 1. Why is there such a discrepancy between the planner's estimate and\n> the actual cost?\n> \n> 2. In a case like this, will I in general see a performance gain by\n> doing a daily SELECT INTO and then querying from that table? My ad hoc\n> test doesn't indicate I would (despite the planner's prediction), and\n> I'd rather avoid this if it won't help.\n> \n> 3. In general, does CLUSTER provide all the performance benefits of a\n> dedicated table? If it doesn't, what does it lack?\n> \n> Thank you.\n\nStart here:\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n1: there could be many reasons for the planner to come up a grossly\ninaccurate ESTIMATE for some values. Last time the table was analyzed, is\nusually where people start. \n\n2: look at table partitioning it's pretty straight forward and sounds like\nit might be a good fit for you. It will however involve some triggers or\nrules and check constraints. Table partitioning has some downsides though,\nyou should be aware of what they are before you commit to it. \n\n3: clustering, from a high level, just reorders the data on disk by a given\nindex. Depending on your setup keeping it close to that clustered ordering\nmight be trivial or it might not be. Big tables are relative to different\npeople, 1M rows might be a big table or it might not be, since you didn't\npost the size of the table and indexes we can only guess. Table\npartitioning helps most with table maintenance, IMO, but can be very useful\nit the constraint exclusion can eliminate a large number of child tables\nright off so it doesn't have to traverse large indexes or do lots of random\nIO. \n\n\nYou will need to post at lot more specific info if you want more specific\nhelp. The guide to reporting slow queries or guide to reporting problems\nand start gathering specific information and then post back to the list.\n\n\n\n-Mark\n\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 1 Jun 2011 19:18:09 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER versus a dedicated table"
},
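As a concrete illustration of the partitioning mark describes in point 2, here is a minimal 8.x/9.0-style sketch. The table big_table and the column active_date come from the question; the child-table name, the single hard-coded day and the trigger are hypothetical:

    -- one child per day; the CHECK constraint is what lets constraint
    -- exclusion skip the other children at plan time
    CREATE TABLE big_table_20110601 (
        CHECK (active_date = DATE '2011-06-01')
    ) INHERITS (big_table);

    CREATE INDEX big_table_20110601_active_date_idx
        ON big_table_20110601 (active_date);

    -- route inserts to the right child (simplified to one day)
    CREATE OR REPLACE FUNCTION big_table_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        IF NEW.active_date = DATE '2011-06-01' THEN
            INSERT INTO big_table_20110601 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for active_date %', NEW.active_date;
        END IF;
        RETURN NULL;   -- the row has already been stored in the child
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER big_table_insert_trg
        BEFORE INSERT ON big_table
        FOR EACH ROW EXECUTE PROCEDURE big_table_insert_trigger();

With constraint_exclusion enabled, a query filtering on a literal active_date only touches that one child, which is essentially the "dedicated table" behavior the original question was after.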
{
"msg_contents": "On Wed, Jun 1, 2011 at 7:54 PM, Robert James <[email protected]> wrote:\n> Hi. I'm interested in understanding the differences between\n> CLUSTERing a table and making a dedicated one.\n>\n> We have a table with about 1 million records. On a given day, only\n> about 1% of them are of interest. That 1% changes every day (it's\n> WHERE active_date = today), and so we index and cluster on it.\n>\n> Even so, the planner shows a very large cost for the Index Scan: about\n> 3500. If I instead do a SELECT INTO temp_table FROM big_table WHERE\n> active_date = today, and then do SELECT * FROM temp_table, I get a\n> planned cost of 65. Yet, the actual time for both queries is almost\n> identical.\n>\n> Questions:\n> 1. Why is there such a discrepancy between the planner's estimate and\n> the actual cost?\n>\n> 2. In a case like this, will I in general see a performance gain by\n> doing a daily SELECT INTO and then querying from that table? My ad hoc\n> test doesn't indicate I would (despite the planner's prediction), and\n> I'd rather avoid this if it won't help.\n>\n> 3. In general, does CLUSTER provide all the performance benefits of a\n> dedicated table? If it doesn't, what does it lack?\n\nno. i suspect you may be over thinking the problem -- what led you to\nwant to cluster in the first place?\n\nmerlin\n",
"msg_date": "Wed, 1 Jun 2011 21:33:32 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CLUSTER versus a dedicated table"
}
] |
[
{
"msg_contents": "A query I has spends a long time on Hash Joins (and Hash Left Joins).\nI have a few questions:\n\n1. When does Postgres decide to do a Hash Join, over another type of Join?\n2. Do Hash Joins normally perform poorly? What can I do to speed them up?\n3. What can I do to enable Postgres to use a faster type of join?\n\nIf there's a good resource for me to read on this, please let me know.\n\nThanks!\n",
"msg_date": "Wed, 1 Jun 2011 20:10:32 -0400",
"msg_from": "Robert James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Understanding Hash Join performance"
},
{
"msg_contents": "Robert James <[email protected]> wrote:\n \n> A query I has spends a long time on Hash Joins (and Hash Left\n> Joins).\n \nTo submit a post which gives us enough information to help you speed\nup that query, please read this page:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n> I have a few questions:\n> \n> 1. When does Postgres decide to do a Hash Join, over another type\n> of Join?\n> 2. Do Hash Joins normally perform poorly? What can I do to speed\n> them up?\n> 3. What can I do to enable Postgres to use a faster type of join?\n \nQuestions this general can only be answered in a general way, so\nhere goes.\n \nThe planner doesn't choose a particular plan type, exactly -- it\ngenerates a lot of alternative plans,, basically looking at all the\nways it knows how to retrieve the requested set of data, and\nestimates a cost for each plan based on available resources and\nadjustable costing factors. It will choose the plan with the lowest\nestimated cost. There are many situations where a hash join is\nfaster than the alternatives. If it's using one where another\nalternative is actually faster, it's not a matter of \"enabling a\nfaster join type\" -- it's a matter of setting your cost factors to\naccurately reflect the real costs on your system.\n \nYou can generally make hash joins faster by increasing work_mem, but\nthat tends to cause data to be pushed from cache sooner and can run\nyou out of memory entirely, so it must be tuned carefully. And the\nplanner does take the size of work_mem and the expected data set\ninto consideration when estimating the cost of the hash join.\n \n-Kevin\n",
"msg_date": "Thu, 02 Jun 2011 09:57:25 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding Hash Join performance"
},
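One low-risk way to see Kevin's work_mem point in action is to raise it for a single session and compare EXPLAIN ANALYZE output before and after. The query below is purely hypothetical (the original query was never posted); only the mechanism matters:

    SHOW work_mem;                       -- what the session is currently using

    EXPLAIN ANALYZE
    SELECT o.id, c.name
      FROM orders o                      -- hypothetical tables standing in for the real ones
      JOIN customers c ON c.id = o.customer_id;

    SET work_mem = '64MB';               -- affects only this session; size it to what the box can spare
    EXPLAIN ANALYZE
    SELECT o.id, c.name
      FROM orders o
      JOIN customers c ON c.id = o.customer_id;

    RESET work_mem;

If the hash was spilling to disk at the smaller setting, the second run is usually where the difference shows up.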
{
"msg_contents": "On Wed, Jun 1, 2011 at 8:10 PM, Robert James <[email protected]> wrote:\n> A query I has spends a long time on Hash Joins (and Hash Left Joins).\n> I have a few questions:\n>\n> 1. When does Postgres decide to do a Hash Join, over another type of Join?\n> 2. Do Hash Joins normally perform poorly? What can I do to speed them up?\n> 3. What can I do to enable Postgres to use a faster type of join?\n\nIME, hash joins usually are much faster than any other type. There's\nnot enough information in your email to speculate as to what might be\ngoing wrong in your particular case, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 2 Jun 2011 14:39:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding Hash Join performance"
},
{
"msg_contents": "On Thu, Jun 2, 2011 at 4:57 PM, Kevin Grittner\n<[email protected]> wrote:\n> And the\n> planner does take the size of work_mem and the expected data set\n> into consideration when estimating the cost of the hash join.\n\nAnd shouldn't it?\n\nIn a gross mode, when hash joins go to disk, they perform very poorly.\nMaybe the planner should take that into account.\n",
"msg_date": "Thu, 2 Jun 2011 20:56:42 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding Hash Join performance"
},
{
"msg_contents": "On Thu, Jun 2, 2011 at 2:56 PM, Claudio Freire <[email protected]> wrote:\n> On Thu, Jun 2, 2011 at 4:57 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> And the\n>> planner does take the size of work_mem and the expected data set\n>> into consideration when estimating the cost of the hash join.\n>\n> And shouldn't it?\n>\n> In a gross mode, when hash joins go to disk, they perform very poorly.\n> Maybe the planner should take that into account.\n\nIt does.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 2 Jun 2011 23:55:35 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding Hash Join performance"
}
] |
[
{
"msg_contents": "First off, this is posted to the wrong list -- this list is for\ndiscussion of development of the PostgreSQL product. There is a\nlist for performance questions where this belongs:\[email protected]. I'm moving this to the\nperformance list with a blind copy to the -hackers list so people\nknow where the discussion went.\n \nNick Raj <[email protected]> wrote:\n \n> When i execute the query first time, query takes a quite longer\n> time but second time execution of the same query takes very less\n> time (despite execution plan is same)\n \n> Why the same plan giving different execution time? (Reason may be\n> data gets buffered (cached) for the second time execution) Why\n> there is so much difference?\n \nBecause an access to a RAM buffer is much, much faster than a disk\naccess.\n \n> Which option will be true?\n \nIt depends entirely on how much of the data needed for the query is\ncached. Sometimes people will run a set of queries to \"warm\" the\ncache before letting users in.\n \n> MY postgresql.conf file having setting like this (this is original\n> setting, i haven't modify anything)\n \n> shared_buffers = 28MB\n \n> #work_mem = 1MB # min 64kB\n> #maintenance_work_mem = 16MB # min 1MB\n \nIf you're concerned about performance, these settings (and several\nothers) should probably be adjusted:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server \n \n-Kevin\n\n",
"msg_date": "Mon, 06 Jun 2011 10:48:49 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Different execution time for same plan"
}
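A crude way to watch the caching effect Kevin describes is simply to time the same statement twice from psql; "warming" can be as little as touching the relevant tables once before letting users in. The table name below is a placeholder:

    \timing on
    SELECT count(*) FROM some_big_table;   -- first run: mostly reads from disk
    SELECT count(*) FROM some_big_table;   -- second run: mostly shared_buffers / OS cache

The second run is what to expect once the working set is cached; the first run is what a cold start looks like.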
] |
[
{
"msg_contents": "I originally posted this on admin, but it was suggested to post it to\nperformance so here goes -\n\nI am in the process of implementing cascade on delete constraints\nretroactively on rather large tables so I can cleanly remove deprecated\ndata. The problem is recreating some foreign key constraints on tables of\n55 million rows+ was taking much longer than the maintenance window I had,\nand now I am looking for tricks to speed up the process, hopefully there is\nsomething obvious i am overlooking.\n\nhere is the sql I am running, sorry im trying to obfuscate object names a\nlittle -\n\nBEGIN;\nALTER TABLE ONLY t1 DROP CONSTRAINT fk_t1_t2_id;\nALTER TABLE ONLY t1 ADD CONSTRAINT fk_t1_t2_id FOREIGN KEY(id) REFERENCES\nt2(id)\nON DELETE CASCADE\nDEFERRABLE INITIALLY DEFERRED;\nCOMMIT;\n\n\nt1 has 55 million rows\nt2 has 72 million rows\nthe id columns are integer types\npostgres version 8.3.8\nthere are nightly vacuum/analyze commands, and auto vacuum is enabled.\n\nI have tried set constraints deferred, immediate, the id column on table 2\nis indexed, its the primary key. Nothing really seems to impact the time it\ntakes to recreate the constraint. There may be memory settings to tweak, I\nwas able to get it to run on a faster test server with local storage in\nabout 10 minutes, but it was running for over an hour in our production\nenvironment.. We took down the application and I verified it wasnt waiting\nfor an exclusive lock on the table or anything, it was running the alter\ntable command for that duration.\n\nLet me know if there is anything else I can supply that will help the\nreview, thanks!\n\nOne additional question - is there any way to check how long postgres is\nestimating an operation will take to complete while it is running?\n\nThanks again,\nMike\n\nI originally posted this on admin, but it was suggested to post it to performance so here goes - \nI am in the process of implementing cascade on delete constraints retroactively on rather large tables so I can cleanly remove deprecated data. The problem is recreating some foreign key constraints on tables of 55 million rows+ was taking much longer than the maintenance window I had, and now I am looking for tricks to speed up the process, hopefully there is something obvious i am overlooking.\nhere is the sql I am running, sorry im trying to obfuscate object names a little - \nBEGIN;\nALTER TABLE ONLY t1 DROP CONSTRAINT fk_t1_t2_id;ALTER TABLE ONLY t1 ADD CONSTRAINT fk_t1_t2_id FOREIGN KEY(id) REFERENCES t2(id) \nON DELETE CASCADE \nDEFERRABLE INITIALLY DEFERRED;COMMIT;\nt1 has 55 million rowst2 has 72 million rowsthe id columns are integer typespostgres version 8.3.8there are nightly vacuum/analyze commands, and auto vacuum is enabled.\nI have tried set constraints deferred, immediate, the id column on table 2 is indexed, its the primary key. Nothing really seems to impact the time it takes to recreate the constraint. There may be memory settings to tweak, I was able to get it to run on a faster test server with local storage in about 10 minutes, but it was running for over an hour in our production environment.. We took down the application and I verified it wasnt waiting for an exclusive lock on the table or anything, it was running the alter table command for that duration. 
\nLet me know if there is anything else I can supply that will help the review, thanks!One additional question - is there any way to check how long postgres is estimating an operation will take to complete while it is running?\nThanks again,Mike",
"msg_date": "Mon, 6 Jun 2011 15:35:04 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "poor performance when recreating constraints on large tables"
},
{
"msg_contents": "Mike Broers <[email protected]> writes:\n> I am in the process of implementing cascade on delete constraints\n> retroactively on rather large tables so I can cleanly remove deprecated\n> data. The problem is recreating some foreign key constraints on tables of\n> 55 million rows+ was taking much longer than the maintenance window I had,\n> and now I am looking for tricks to speed up the process, hopefully there is\n> something obvious i am overlooking.\n\nmaintenance_work_mem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Jun 2011 16:37:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance when recreating constraints on large tables "
},
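For anyone landing on this thread later: the setting Tom points at can be raised just for the session that rebuilds the constraint, so the server-wide default stays untouched. A sketch using the statements from the original post (the 1GB figure is simply the value tried later in this thread; size it to the RAM you actually have free):

    SET maintenance_work_mem = '1GB';    -- this session only

    BEGIN;
    ALTER TABLE ONLY t1 DROP CONSTRAINT fk_t1_t2_id;
    ALTER TABLE ONLY t1 ADD CONSTRAINT fk_t1_t2_id FOREIGN KEY(id) REFERENCES t2(id)
        ON DELETE CASCADE
        DEFERRABLE INITIALLY DEFERRED;
    COMMIT;

    RESET maintenance_work_mem;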
{
"msg_contents": "Thanks for the suggestion, maintenance_work_mem is set to the default of\n16MB on the host that was taking over an hour as well as on the host that\nwas taking less than 10 minutes. I tried setting it to 1GB on the faster\ntest server and it reduced the time from around 6-7 minutes to about 3:30.\n this is a good start, if there are any other suggestions please let me know\n- is there any query to check estimated time remaining on long running\ntransactions?\n\n\n\nOn Mon, Jun 6, 2011 at 3:37 PM, Tom Lane <[email protected]> wrote:\n\n> Mike Broers <[email protected]> writes:\n> > I am in the process of implementing cascade on delete constraints\n> > retroactively on rather large tables so I can cleanly remove deprecated\n> > data. The problem is recreating some foreign key constraints on tables\n> of\n> > 55 million rows+ was taking much longer than the maintenance window I\n> had,\n> > and now I am looking for tricks to speed up the process, hopefully there\n> is\n> > something obvious i am overlooking.\n>\n> maintenance_work_mem?\n>\n> regards, tom lane\n>\n\nThanks for the suggestion, maintenance_work_mem is set to the default of 16MB on the host that was taking over an hour as well as on the host that was taking less than 10 minutes. I tried setting it to 1GB on the faster test server and it reduced the time from around 6-7 minutes to about 3:30. this is a good start, if there are any other suggestions please let me know - is there any query to check estimated time remaining on long running transactions?\nOn Mon, Jun 6, 2011 at 3:37 PM, Tom Lane <[email protected]> wrote:\nMike Broers <[email protected]> writes:\n> I am in the process of implementing cascade on delete constraints\n> retroactively on rather large tables so I can cleanly remove deprecated\n> data. The problem is recreating some foreign key constraints on tables of\n> 55 million rows+ was taking much longer than the maintenance window I had,\n> and now I am looking for tricks to speed up the process, hopefully there is\n> something obvious i am overlooking.\n\nmaintenance_work_mem?\n\n regards, tom lane",
"msg_date": "Mon, 6 Jun 2011 17:10:30 -0500",
"msg_from": "Mike Broers <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: poor performance when recreating constraints on large tables"
},
{
"msg_contents": "On Mon, Jun 6, 2011 at 6:10 PM, Mike Broers <[email protected]> wrote:\n> Thanks for the suggestion, maintenance_work_mem is set to the default of\n> 16MB on the host that was taking over an hour as well as on the host that\n> was taking less than 10 minutes. I tried setting it to 1GB on the faster\n> test server and it reduced the time from around 6-7 minutes to about 3:30.\n> this is a good start, if there are any other suggestions please let me know\n> - is there any query to check estimated time remaining on long running\n> transactions?\n\nSadly, no. I suspect that coming up with a good algorithm for that is\na suitable topic for a PhD thesis. :-(\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 8 Jun 2011 15:28:56 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance when recreating constraints on large tables"
},
{
"msg_contents": "On Wed, Jun 8, 2011 at 12:28 PM, Robert Haas <[email protected]> wrote:\n\n> On Mon, Jun 6, 2011 at 6:10 PM, Mike Broers <[email protected]> wrote:\n> > Thanks for the suggestion, maintenance_work_mem is set to the default of\n> > 16MB on the host that was taking over an hour as well as on the host that\n> > was taking less than 10 minutes. I tried setting it to 1GB on the faster\n> > test server and it reduced the time from around 6-7 minutes to about\n> 3:30.\n> > this is a good start, if there are any other suggestions please let me\n> know\n> > - is there any query to check estimated time remaining on long running\n> > transactions?\n>\n> Sadly, no. I suspect that coming up with a good algorithm for that is\n> a suitable topic for a PhD thesis. :-(\n>\n>\nThe planner knows how many rows are expected for each step of the query\nplan, so it would be theoretically possible to compute how far along it is\nin processing a query based on those estimates, wouldn't it? Combine\npercentage complete with time elapsed and you could get somewhat close if\nthe stats are accurate, couldn't you? Of course, I have no clue as to the\ninternals of the planner and query executor which might or might not make\nsuch tracking of query execution possible.\n\nOn Wed, Jun 8, 2011 at 12:28 PM, Robert Haas <[email protected]> wrote:\nOn Mon, Jun 6, 2011 at 6:10 PM, Mike Broers <[email protected]> wrote:\n> Thanks for the suggestion, maintenance_work_mem is set to the default of\n> 16MB on the host that was taking over an hour as well as on the host that\n> was taking less than 10 minutes. I tried setting it to 1GB on the faster\n> test server and it reduced the time from around 6-7 minutes to about 3:30.\n> this is a good start, if there are any other suggestions please let me know\n> - is there any query to check estimated time remaining on long running\n> transactions?\n\nSadly, no. I suspect that coming up with a good algorithm for that is\na suitable topic for a PhD thesis. :-(\nThe planner knows how many rows are expected for each step of the query plan, so it would be theoretically possible to compute how far along it is in processing a query based on those estimates, wouldn't it? Combine percentage complete with time elapsed and you could get somewhat close if the stats are accurate, couldn't you? Of course, I have no clue as to the internals of the planner and query executor which might or might not make such tracking of query execution possible.",
"msg_date": "Wed, 8 Jun 2011 12:45:33 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance when recreating constraints on large tables"
},
{
"msg_contents": "Samuel Gendler <[email protected]> wrote:\n \n> The planner knows how many rows are expected for each step of the\n> query plan, so it would be theoretically possible to compute how\n> far along it is in processing a query based on those estimates,\n> wouldn't it?\n \nAnd it is sometimes off by orders of magnitude. How much remaining\ntime do you report when the number of rows actually processed so far\nis five times the estimated rows that the step would process? How\nabout after it chugs on from there to 20 time she estimated row\ncount? Of course, on your next query it might finish after\nprocessing only 5% of the estimated rows....\n \n-Kevin\n",
"msg_date": "Wed, 08 Jun 2011 14:53:32 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance when recreating constraints on\n\t large tables"
},
{
"msg_contents": "On Wed, Jun 8, 2011 at 12:53 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Samuel Gendler <[email protected]> wrote:\n>\n> > The planner knows how many rows are expected for each step of the\n> > query plan, so it would be theoretically possible to compute how\n> > far along it is in processing a query based on those estimates,\n> > wouldn't it?\n>\n> And it is sometimes off by orders of magnitude. How much remaining\n> time do you report when the number of rows actually processed so far\n> is five times the estimated rows that the step would process? How\n> about after it chugs on from there to 20 time she estimated row\n> count? Of course, on your next query it might finish after\n> processing only 5% of the estimated rows....\n>\n\nSure, but if it is a query that is slow enough for a time estimate to be\nuseful, odds are good that stats that are that far out of whack would\nactually be interesting to whoever is looking at the time estimate, so\nshowing some kind of 'N/A' response once things have gotten out of whack\nwouldn't be unwarranted. Not that I'm suggesting that any of this is a\nparticularly useful exercise. I'm just playing with the original thought\nexperiment suggestion.\n\n\n>\n> -Kevin\n>\n\nOn Wed, Jun 8, 2011 at 12:53 PM, Kevin Grittner <[email protected]> wrote:\nSamuel Gendler <[email protected]> wrote:\n\n> The planner knows how many rows are expected for each step of the\n> query plan, so it would be theoretically possible to compute how\n> far along it is in processing a query based on those estimates,\n> wouldn't it?\n\nAnd it is sometimes off by orders of magnitude. How much remaining\ntime do you report when the number of rows actually processed so far\nis five times the estimated rows that the step would process? How\nabout after it chugs on from there to 20 time she estimated row\ncount? Of course, on your next query it might finish after\nprocessing only 5% of the estimated rows....Sure, but if it is a query that is slow enough for a time estimate to be useful, odds are good that stats that are that far out of whack would actually be interesting to whoever is looking at the time estimate, so showing some kind of 'N/A' response once things have gotten out of whack wouldn't be unwarranted. Not that I'm suggesting that any of this is a particularly useful exercise. I'm just playing with the original thought experiment suggestion.\n \n\n-Kevin",
"msg_date": "Wed, 8 Jun 2011 12:57:48 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance when recreating constraints on large tables"
},
{
"msg_contents": "---------- Forwarded message ----------\nFrom: Claudio Freire <[email protected]>\nDate: Wed, Jun 8, 2011 at 11:57 PM\nSubject: Re: [PERFORM] poor performance when recreating constraints on\nlarge tables\nTo: Samuel Gendler <[email protected]>\n\n\nOn Wed, Jun 8, 2011 at 9:57 PM, Samuel Gendler\n<[email protected]> wrote:\n> Sure, but if it is a query that is slow enough for a time estimate to be\n> useful, odds are good that stats that are that far out of whack would\n> actually be interesting to whoever is looking at the time estimate, so\n> showing some kind of 'N/A' response once things have gotten out of whack\n> wouldn't be unwarranted. Not that I'm suggesting that any of this is a\n> particularly useful exercise. I'm just playing with the original thought\n> experiment suggestion.\n\nThere's a trick to get exactly that:\n\nDo an explain, fetch the expected rowcount on the result set, add a\ndummy sequence and a dummy field to the resultset \"nextval(...) as\nprogress\".\n\nNow, you won't get to read the progress column probably, but that\ndoesn't matter. Open up another transaction, and query it there.\nSequences are nontransactional.\n\nAll the smarts about figuring out the expected resultset's size\nremains on the application, which is fine by me.\n",
"msg_date": "Wed, 8 Jun 2011 23:57:37 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance when recreating constraints on large tables"
},
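A rough sketch of the sequence trick Claudio describes, using made-up object names and a generate_series stand-in for the real long-running query -- only the mechanism matters:

    CREATE SEQUENCE progress_seq;

    -- session 1: wrap the slow query so every row it emits bumps the sequence
    SELECT nextval('progress_seq') AS progress, q.*
      FROM (SELECT generate_series(1, 10000000) AS n) AS q;

    -- session 2 (a separate connection): sequences are not transactional,
    -- so this shows roughly how many rows session 1 has produced so far
    SELECT last_value FROM progress_seq;

Dividing last_value by the row count estimated by EXPLAIN gives a rough percentage done; the caveat is that the counter only advances as rows reach the top of the plan, so a query that ends with a big sort will sit at zero and then jump.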
{
"msg_contents": "Samuel Gendler wrote:\n> Sure, but if it is a query that is slow enough for a time estimate to \n> be useful, odds are good that stats that are that far out of whack \n> would actually be interesting to whoever is looking at the time \n> estimate, so showing some kind of 'N/A' response once things have \n> gotten out of whack wouldn't be unwarranted.\n\nThe next question is what are you then going to do with that information?\n\nThe ability to track some measure of \"progress\" relative to expectations \nis mainly proposed as something helpful when a query has gone out of \ncontrol. When that's happened, the progress meter normally turns out to \nbe fundamentally broken; the plan isn't happening at all as expected. \nSo, as you say, you will get an \"N/A\" response that says the query is \nout of control, when in the cases where this sort of thing is expected \nto be the most useful.\n\nAt that point, you have two choices. You can let the query keep running \nand see how long it really takes. You have no idea how long that will \nbe, all you can do is wait and see because the estimation is trashed. \nOr you can decide to kill it. And the broken progress meter won't help \nwith that decision. So why put it there at all?\n\nWhat I try to do as a force of habit is run just about everything that \nmight take a while with \"\\timing\" on, and try to keep statement_timeout \nto a reasonable value at all times. Do that enough, and you get a feel \nfor what reasonable and unreasonable time scales look like better than \nthe query executor can be expected to figure them out for you. It would \nbe nice to provide a better UI here for tracking progress, but it would \nreally work only in the simplest of cases--which are of course the ones \nyou need it the least for.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 09 Jun 2011 01:57:25 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance when recreating constraints on large\n tables"
},
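For anyone wanting to copy the habit Greg describes, the two pieces look like this in practice (the five-minute limit is just an example -- pick whatever "unreasonable" means for your workload):

    -- in psql: report elapsed time for every statement
    \timing on

    -- give up automatically on anything that runs longer than expected
    SET statement_timeout = '5min';

    -- lift the limit again for a deliberately long-running job
    SET statement_timeout = 0;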
{
"msg_contents": "On Wed, Jun 8, 2011 at 10:57 PM, Greg Smith <[email protected]> wrote:\n\n> Samuel Gendler wrote:\n>\n>> Sure, but if it is a query that is slow enough for a time estimate to be\n>> useful, odds are good that stats that are that far out of whack would\n>> actually be interesting to whoever is looking at the time estimate, so\n>> showing some kind of 'N/A' response once things have gotten out of whack\n>> wouldn't be unwarranted.\n>>\n>\n> The next question is what are you then going to do with that information?\n>\n> The ability to track some measure of \"progress\" relative to expectations is\n> mainly proposed as something helpful when a query has gone out of control.\n> When that's happened, the progress meter normally turns out to be\n> fundamentally broken; the plan isn't happening at all as expected. So, as\n> you say, you will get an \"N/A\" response that says the query is out of\n> control, when in the cases where this sort of thing is expected to be the\n> most useful.\n>\n\nWell, in my case, the use I'd put it to is a query that is necessarily long\nrunning (aggregations over large quantities of data that take a minute or\ntwo to complete), and the stats are accurate enough that it would\npotentially let me show a progress meter of some kind in the few places\nwhere such queries are run interactively rather than on a schedule. Not\nthat I'm really thinking seriously about doing so, but there are places in\ncode I maintain where such a thing could prove useful if its accuracy is\nreasonable for the queries in question. ENough to at least toy with the\nsuggested sequence method and see what happens when I've got some spare time\nto play.\n\nOn Wed, Jun 8, 2011 at 10:57 PM, Greg Smith <[email protected]> wrote:\nSamuel Gendler wrote:\n\nSure, but if it is a query that is slow enough for a time estimate to be useful, odds are good that stats that are that far out of whack would actually be interesting to whoever is looking at the time estimate, so showing some kind of 'N/A' response once things have gotten out of whack wouldn't be unwarranted.\n\n\nThe next question is what are you then going to do with that information?\n\nThe ability to track some measure of \"progress\" relative to expectations is mainly proposed as something helpful when a query has gone out of control. When that's happened, the progress meter normally turns out to be fundamentally broken; the plan isn't happening at all as expected. So, as you say, you will get an \"N/A\" response that says the query is out of control, when in the cases where this sort of thing is expected to be the most useful.\nWell, in my case, the use I'd put it to is a query that is necessarily long running (aggregations over large quantities of data that take a minute or two to complete), and the stats are accurate enough that it would potentially let me show a progress meter of some kind in the few places where such queries are run interactively rather than on a schedule. Not that I'm really thinking seriously about doing so, but there are places in code I maintain where such a thing could prove useful if its accuracy is reasonable for the queries in question. ENough to at least toy with the suggested sequence method and see what happens when I've got some spare time to play.",
"msg_date": "Thu, 9 Jun 2011 04:55:41 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: poor performance when recreating constraints on large tables"
}
] |
[
{
"msg_contents": "Hi all,\n\nI am trying to speed up a query on a DB I inherited and I am falling\nflat on my face .\n\nI changed a query from NOT IN to use NOT EXISTS and my query time went\nfrom 19000ms to several hours (~50000000 ms). this shocked me so much\nI pretty much had to post. This seems like a corner case of the\nplanner not knowing that the nested-loops are going to turn out badly\nin this case. The planner choosing a 13hr nested loop here is\nbasically the reason I am posting.\n\nI have played around with rewriting this query using some CTEs and a\nleft join but thus far my results are not encouraging. Given what\nlittle I know , it seems like a LEFT JOIN where right_table.col is\nnull gets the same performance and estimates as a NOT EXISTS. (and\nstill picks a nested loop in this case)\n\nI can see where it all goes to hell time wise, turning off nested\nloops seems to keep it from running for hours for this query, but not\nsomething I am looking to do globally. The time is not really that\nmuch better than just leaving it alone with a NOT IN.\n\ntwo queries are at http://pgsql.privatepaste.com/a0b672bab0#\n\nthe \"pretty\" explain versions :\n\nNOT IN (with large work mem - 1GB)\nhttp://explain.depesz.com/s/ukj\n\nNOT IN (with only 64MB for work_mem)\nhttp://explain.depesz.com/s/wT0\n\nNOT EXISTS (with 64MB of work_mem)\nhttp://explain.depesz.com/s/EuX\n\nNOT EXISTS (with nested loop off. and 64MB of work_mem)\nhttp://explain.depesz.com/s/UXG\n\nLEFT JOIN/CTE (with nested loop off and 1GB of work_mem)\nhttp://explain.depesz.com/s/Hwm\n\ntable defs, with estimated row counts (which all 100% match exact row count)\nhttp://pgsql.privatepaste.com/c2ff39b653\n\ntried running an analyze across the whole database, no affect.\n\nI haven't gotten creative with explicit join orders yet .\n\npostgresql 9.0.2.\n\nwilling to try stuff for people as I can run things on a VM for days\nand it is no big deal. I can't do that on production machines.\n\nthoughts ? ideas ?\n\n\n-Mark\n",
"msg_date": "Mon, 6 Jun 2011 14:38:47 -0600",
"msg_from": "mark <[email protected]>",
"msg_from_op": true,
"msg_subject": "not exits slow compared to not in. (nested loops killing me)"
},
{
"msg_contents": "On 06/07/2011 04:38 AM, mark wrote:\n\n> NOT EXISTS (with 64MB of work_mem)\n> http://explain.depesz.com/s/EuX\n\nHash Anti Join (cost=443572.19..790776.84 rows=1 width=1560)\n(actual time=16337.711..50358.487 rows=2196299 loops=1)\n\nNote the estimated vs actual rows. Either your stats are completely \nridiculous, or the planner is confused.\n\nWhat are your stats target levels? Have you tried increasing the stats \nlevels on the table(s) or at least column(s) affected? Or tweaking \ndefault_statistics_target if you want to use a bigger hammer?\n\nIs autovacuum being allowed to do its work and regularly ANALYZE the \ndatabase? Does an explicit 'ANALYZE' help?\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 07 Jun 2011 07:07:33 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: not exits slow compared to not in. (nested loops killing\n me)"
},
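For completeness, the two knobs Craig mentions can be tried like this; the table and column names below are placeholders (the real ones were only posted on privatepaste), and 500 matches the value mark says he plans to test:

    -- per-column statistics target, then refresh the stats
    ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 500;
    ANALYZE some_table;

    -- or the bigger hammer, for this session's ANALYZE runs only
    SET default_statistics_target = 500;
    ANALYZE;

After that, re-run the EXPLAIN ANALYZE and see whether the anti-join row estimate moves closer to the ~2.2M rows it actually returns.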
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> On 06/07/2011 04:38 AM, mark wrote:\n> Hash Anti Join (cost=443572.19..790776.84 rows=1 width=1560)\n> (actual time=16337.711..50358.487 rows=2196299 loops=1)\n\n> Note the estimated vs actual rows. Either your stats are completely \n> ridiculous, or the planner is confused.\n\nThe latter ... I think the OP is hurting for lack of this 9.0.4 fix:\nhttp://git.postgresql.org/gitweb?p=postgresql.git&a=commitdiff&h=159c47dc7170110a39f8a16b1d0b7811f5556f87\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Jun 2011 20:09:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: not exits slow compared to not in. (nested loops killing me) "
},
{
"msg_contents": "\n> -----Original Message-----\n> From: Craig Ringer [mailto:[email protected]]\n> Sent: Monday, June 06, 2011 5:08 PM\n> To: mark\n> Cc: [email protected]\n> Subject: Re: [PERFORM] not exits slow compared to not in. (nested loops\n> killing me)\n> \n> On 06/07/2011 04:38 AM, mark wrote:\n> \n> > NOT EXISTS (with 64MB of work_mem)\n> > http://explain.depesz.com/s/EuX\n> \n> Hash Anti Join (cost=443572.19..790776.84 rows=1 width=1560)\n> (actual time=16337.711..50358.487 rows=2196299 loops=1)\n> \n> Note the estimated vs actual rows. Either your stats are completely\n> ridiculous, or the planner is confused.\n\n\nI am starting to think the planner might be confused in 9.0.2. I got a\nreasonable query time, given resource constraints, on a very small VM on my\nlaptop running 9.0.4. \n\nI am going to work on getting the vm I was using to test this with up to\n9.0.4 and test again. \n\nThere is a note in the 9.0.4 release notes \n\" Improve planner's handling of semi-join and anti-join cases (Tom Lane)\" \n\nNot sure that is the reason I got a much better outcome with a much smaller\nvm. But once I do some more testing I will report back. \n\n\n> \n> What are your stats target levels? Have you tried increasing the stats\n> levels on the table(s) or at least column(s) affected? Or tweaking\n> default_statistics_target if you want to use a bigger hammer?\n\nWill try that as well. Currently the default stat target is 100. Will try at\n250, and 500 and report back. \n\n> \n> Is autovacuum being allowed to do its work and regularly ANALYZE the\n> database? Does an explicit 'ANALYZE' help?\n\nAuto vac is running, I have explicitly vacuum & analyzed the whole db. That\ndidn't change anything. \n\n\n\n\n\n> \n> --\n> Craig Ringer\n\n",
"msg_date": "Mon, 6 Jun 2011 18:16:12 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: not exits slow compared to not in. (nested loops killing me)"
}
] |
[
{
"msg_contents": "All,\n\nJust got this simple case off IRC today:\n\n8.4.4\nThis plan completes in 100ms:\n\nold_prod=# explain analyze select email from u_contact where id not in\n(select contact_id from u_user);\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------\n Seq Scan on u_contact (cost=2217.72..4759.74 rows=35560 width=22)\n(actual time=61.283..107.169 rows=4521 loops=1)\n Filter: (NOT (hashed SubPlan 1))\n SubPlan 1\n -> Seq Scan on u_user (cost=0.00..2051.38 rows=66538 width=8)\n(actual time=0.034..33.303 rows=66978 loops=1)\n Total runtime: 108.001 ms\n\n\n9.0.2\nThis plan does not complete in 15 minutes or more:\n\nnew_prod=# explain select email from u_contact where id not in (select\ncontact_id from u_user);\n QUERY PLAN\n---------------------------------------------------------------------------\n Seq Scan on u_contact (cost=0.00..100542356.74 rows=36878 width=22)\n Filter: (NOT (SubPlan 1))\n SubPlan 1\n -> Materialize (cost=0.00..2552.56 rows=69504 width=8)\n -> Seq Scan on u_user (cost=0.00..1933.04 rows=69504 width=8)\n(5 rows)\n\nI'm at a bit of a loss as to what's happening here. I'd guess another\nfailure of a bail-out-early plan, but I can't see how that would work\nwith this query.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Mon, 06 Jun 2011 14:45:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.4/9.0 simple query performance regression"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Just got this simple case off IRC today:\n> [ hashed versus non-hashed subplan ]\n> I'm at a bit of a loss as to what's happening here.\n\nPossibly work_mem is smaller in the second installation?\n\n(If I'm counting on my fingers right, you'd need a setting of at least a\ncouple MB to let it choose a hashed subplan for this case.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Jun 2011 19:02:23 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4/9.0 simple query performance regression "
},
{
"msg_contents": "07.06.11 00:45, Josh Berkus написав(ла):\n> All,\n>\n> Just got this simple case off IRC today:\n>\n> 8.4.4\n> This plan completes in 100ms:\n> Filter: (NOT (hashed SubPlan 1))\n\n> 9.0.2\n> This plan does not complete in 15 minutes or more:\n> Filter: (NOT (SubPlan 1))\n\"Hashed\" is the key. Hashed subplans usually has much better performance.\nYou need to increase work_mem. I suppose it is in default state as you \nneed not too much memory for hash of 70K integer values.\nBTW: Why do it want to materialize a result of seq scan without filter. \nI can see no benefits (or is it more narrow rows?)\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Tue, 07 Jun 2011 11:19:29 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.4/9.0 simple query performance regression"
}
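Put concretely, the work_mem theory is easy to test per session before touching postgresql.conf (a sketch only; 8MB is just a comfortable margin over the "couple MB" mentioned above):

    SET work_mem = '8MB';   -- enough to hash ~70K contact_ids
    explain select email from u_contact where id not in (select contact_id from u_user);
    -- the 9.0 plan should then show "Filter: (NOT (hashed SubPlan 1))" again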
] |
[
{
"msg_contents": "Hi friend I Want to ask, is there any solutions or tools for monitoring memory \nperformance in postgre automatically, for example, will send allert if \nPeformance memory has exceeded 90%\n\nthank you for your help\nHi friend I Want to ask, is there any solutions or tools for monitoring memory performance in postgre automatically, for example, will send allert if Peformance memory has exceeded 90% thank you for your help",
"msg_date": "Tue, 7 Jun 2011 15:47:59 +0800 (SGT)",
"msg_from": "Didik Prasetyo <[email protected]>",
"msg_from_op": true,
"msg_subject": "i want to ask monitory peformance memory postgresql with\n automatically"
},
{
"msg_contents": "On 7/06/2011 3:47 PM, Didik Prasetyo wrote:\n> Hi friend I Want to ask, is there any solutions or tools for monitoring\n> memory performance in postgre automatically, for example, will send\n> allert if Peformance memory has exceeded 90%\n\nUse standard system monitoring tools like Nagios. There is a PostgreSQL \nagent for Nagios that will help do some postgresql-specific monitoring, \nbut most of the monitoring you will want to do is system level and is \nalready supported by Nagios and similar tools.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Tue, 07 Jun 2011 17:58:46 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: i want to ask monitory peformance memory postgresql\n\twith automatically"
},
{
"msg_contents": "you can use Hyperic HQ too from SpringSource (a division of Vmware), but \nthe most usfel tools are the command tools (iostat, vmstat, top, free, lsof)\nRegards\n\nEl 6/7/2011 5:58 AM, Craig Ringer escribió:\n> On 7/06/2011 3:47 PM, Didik Prasetyo wrote:\n>> Hi friend I Want to ask, is there any solutions or tools for monitoring\n>> memory performance in postgre automatically, for example, will send\n>> allert if Peformance memory has exceeded 90%\n>\n> Use standard system monitoring tools like Nagios. There is a \n> PostgreSQL agent for Nagios that will help do some postgresql-specific \n> monitoring, but most of the monitoring you will want to do is system \n> level and is already supported by Nagios and similar tools.\n>\n\n-- \nMarcos Luís Ortíz Valmaseda\n Software Engineer (UCI)\n http://marcosluis2186.posterous.com\n http://twitter.com/marcosluis2186\n \n\n",
"msg_date": "Tue, 07 Jun 2011 12:57:00 -0400",
"msg_from": "Marcos Ortiz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: i want to ask monitory peformance memory postgresql\n\twith automatically"
}
] |
[
{
"msg_contents": "Version: PostgreSQL 8.3.5 (mammoth replicator)\n\nSchema:\n\nCREATE TABLE tdiag (\n diag_id integer DEFAULT nextval('diag_id_seq'::text),\n create_time\t\ttimestamp with time zone default now(),\t/* time this record \nwas created */\n diag_time timestamp with time zone not null,\n device_id integer, /* optional */\n fleet_id integer, /* optional */\n customer_id integer, /* optional */\n module character varying,\n node_kind smallint,\n diag_level smallint,\n tag character varying not null default '',\n message character varying not null default '',\n options text,\n\n PRIMARY KEY (diag_id)\n);\n\ncreate index tdiag_create_time ON tdiag(create_time);\n\nThe number of rows is over 33 million with time stamps over the past two \nweeks.\n\nThe create_time order is almost identical to the id order. What I want\nto find is the first or last entry by id in a given time range. The\nquery I am having a problem with is:\n\nsymstream2=> explain analyze select * from tdiag where (create_time \n>= '2011-06-03\n09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order by \ndiag_id limit 1;\n\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..16.75 rows=1 width=114) (actual time=69425.356..69425.358 \nrows=1 loops=1)\n -> Index Scan using tdiag_pkey on tdiag (cost=0.00..19114765.76 \nrows=1141019 width=114)\n(actual time=69425.352..69425.352 rows=1 loops=1)\n Filter: ((create_time >= '2011-06-03 19:49:04+10'::timestamp with \ntime zone) AND\n(create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n Total runtime: 69425.400 ms\n\nPG seems to decide it must scan the diag_id column and filter each row by the \ncreate_time. 
\n\n\n\nIf I leave out the limit I get\n\nsymstream2=> explain analyze select * from tdiag where (create_time \n>= '2011-06-03\n09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order by \ndiag_id;\n\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=957632.43..960484.98 rows=1141019 width=114) (actual \ntime=552.795..656.319 rows=86530\nloops=1)\n Sort Key: diag_id\n Sort Method: external merge Disk: 9872kB\n -> Bitmap Heap Scan on tdiag (cost=25763.48..638085.13 rows=1141019 \nwidth=114) (actual\ntime=43.232..322.441 rows=86530 loops=1)\n Recheck Cond: ((create_time >= '2011-06-03 19:49:04+10'::timestamp \nwith time zone) AND\n(create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n -> Bitmap Index Scan on tdiag_create_time (cost=0.00..25478.23 \nrows=1141019 width=0)\n(actual time=42.574..42.574 rows=86530 loops=1)\n Index Cond: ((create_time >= '2011-06-03 \n19:49:04+10'::timestamp with time zone) AND\n(create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n Total runtime: 736.440 ms\n(8 rows)\n\n\n\n\nI can be explicit about the query order:\n\nselect * into tt from tdiag where (create_time >= '2011-06-03 \n09:49:04.000000+0' and create_time <\n'2011-06-06 09:59:04.000000+0');\n\nsymstream2=> explain analyze select * from tt order by diag_id limit 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2731.95..2731.95 rows=1 width=101) (actual time=440.165..440.166 \nrows=1 loops=1)\n -> Sort (cost=2731.95..2948.28 rows=86530 width=101) (actual \ntime=440.161..440.161 rows=1\nloops=1)\n Sort Key: diag_id\n Sort Method: top-N heapsort Memory: 17kB\n -> Seq Scan on tt (cost=0.00..2299.30 rows=86530 width=101) (actual \ntime=19.602..330.873\nrows=86530 loops=1)\n Total runtime: 440.209 ms\n(6 rows)\n\n\n\nBut if I try using a subquery I get\n\nsymstream2=> explain analyze select * from (select * from tdiag where \n(create_time >= '2011-06-03\n09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0')) as sub \norder by diag_id limit\n1;\n\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..16.75 rows=1 width=114) (actual time=90344.384..90344.385 \nrows=1 loops=1)\n -> Index Scan using tdiag_pkey on tdiag (cost=0.00..19114765.76 \nrows=1141019 width=114)\n(actual time=90344.380..90344.380 rows=1 loops=1)\n Filter: ((create_time >= '2011-06-03 19:49:04+10'::timestamp with \ntime zone) AND\n(create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n Total runtime: 90344.431 ms\n\n\nHow do I make this both fast and simple?\n-- \nAnthony Shipman | Tech Support: The guys who follow the \[email protected] | 'Parade of New Products' with a shovel. \n",
"msg_date": "Tue, 7 Jun 2011 18:02:08 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "strange query plan with LIMIT"
},
{
"msg_contents": "> Version: PostgreSQL 8.3.5 (mammoth replicator)\n>\n> Schema:\n>\n> CREATE TABLE tdiag (\n> diag_id integer DEFAULT nextval('diag_id_seq'::text),\n> create_time\t\ttimestamp with time zone default now(),\t/* time this\n> record\n> was created */\n> diag_time timestamp with time zone not null,\n> device_id integer, /* optional */\n> fleet_id integer, /* optional */\n> customer_id integer, /* optional */\n> module character varying,\n> node_kind smallint,\n> diag_level smallint,\n> tag character varying not null default '',\n> message character varying not null default '',\n> options text,\n>\n> PRIMARY KEY (diag_id)\n> );\n>\n> create index tdiag_create_time ON tdiag(create_time);\n>\n> The number of rows is over 33 million with time stamps over the past two\n> weeks.\n>\n> The create_time order is almost identical to the id order. What I want\n> to find is the first or last entry by id in a given time range. The\n> query I am having a problem with is:\n\nHi,\n\nwhy are you reposting this? Pavel Stehule already recommended you to run\nANALYZE on the tdiag table - have you done that? What was the effect?\n\nThe stats are off - e.g. the bitmap scan says\n\n -> Bitmap Heap Scan on tdiag (cost=25763.48..638085.13 rows=1141019\nwidth=114) (actual time=43.232..322.441 rows=86530 loops=1)\n\nso it expects to get 1141019 rows but it gets 86530, i.e. about 7% of the\nexpected number. That might be enough to cause bad plan choice and thus\nperformance issues.\n\nAnd yet another recommendation - the sort is performed on disk, so give it\nmore work_mem and it should be much faster (should change from \"merge\nsort\" to \"quick sort\"). Try something like work_mem=20MB and see if it\ndoes the trick.\n\nregards\nTomas\n\n",
"msg_date": "Tue, 7 Jun 2011 18:40:13 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
},
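Put into commands, the two suggestions above come down to something like this (a sketch only, reusing the query from the first post; 20MB is the value proposed above):

    ANALYZE tdiag;           -- refresh the row estimates
    SET work_mem = '20MB';   -- lets the ~10MB sort run in memory instead of on disk
    explain analyze
    select * from tdiag
    where create_time >= '2011-06-03 09:49:04.000000+0'
      and create_time <  '2011-06-06 09:59:04.000000+0'
    order by diag_id limit 1;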
{
"msg_contents": "On Wednesday 08 June 2011 02:40, [email protected] wrote:\n> Hi,\n>\n> why are you reposting this? Pavel Stehule already recommended you to run\n> ANALYZE on the tdiag table - have you done that? What was the effect?\n\nThe mailing list system hiccupped and I ended up with two posts.\n\nVACUUM ANALYZE was done, more than once.\nSetting the statistics value on the diag_id column to 1000 seemed to only make \nthe query a bit slower.\n\n>\n> The stats are off - e.g. the bitmap scan says\n>\n> -> Bitmap Heap Scan on tdiag (cost=25763.48..638085.13 rows=1141019\n> width=114) (actual time=43.232..322.441 rows=86530 loops=1)\n>\n> so it expects to get 1141019 rows but it gets 86530, i.e. about 7% of the\n> expected number. That might be enough to cause bad plan choice and thus\n> performance issues.\n\nWhat seems odd to me is that the only difference between the two is the limit \nclause:\n\nselect * from tdiag where (create_time >= '2011-06-03\n09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order by \ndiag_id limit 1;\n\nselect * from tdiag where (create_time >= '2011-06-03\n09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order by \ndiag_id;\n\nand yet the plan completely changes.\n\nI think that I have to force the evaluation order to get a reliably fast \nresult:\n\nbegin; create temporary table tt on commit drop as\nselect diag_id from tdiag where create_time >= '2011-06-03 09:49:04.000000+0' \n and create_time < '2011-06-06 09:59:04.000000+0';\nselect * from tdiag where diag_id in (select * from tt)\n order by diag_id limit 10; commit;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3566.24..3566.27 rows=10 width=112) (actual \ntime=1800.699..1800.736 rows=10 loops=1)\n -> Sort (cost=3566.24..3566.74 rows=200 width=112) (actual \ntime=1800.694..1800.708 rows=10 loops=1)\n Sort Key: tdiag.diag_id\n Sort Method: top-N heapsort Memory: 18kB\n -> Nested Loop (cost=1360.00..3561.92 rows=200 width=112) (actual \ntime=269.087..1608.324 rows=86530 loops=1)\n -> HashAggregate (cost=1360.00..1362.00 rows=200 width=4) \n(actual time=269.052..416.898 rows=86530 loops=1)\n -> Seq Scan on tt (cost=0.00..1156.00 rows=81600 \nwidth=4) (actual time=0.020..120.323 rows=86530 loops=1)\n -> Index Scan using tdiag_pkey on tdiag (cost=0.00..10.99 \nrows=1 width=112) (actual time=0.006..0.008 rows=1 loops=86530)\n Index Cond: (tdiag.diag_id = tt.diag_id)\n Total runtime: 1801.290 ms\n\n>\n> And yet another recommendation - the sort is performed on disk, so give it\n> more work_mem and it should be much faster (should change from \"merge\n> sort\" to \"quick sort\"). Try something like work_mem=20MB and see if it\n> does the trick.\n\nThis certainly speeds up the sorting.\n\n>\n> regards\n> Tomas\n\n-- \nAnthony Shipman | What most people think about\[email protected] | most things is mostly wrong.\n",
"msg_date": "Wed, 8 Jun 2011 15:08:05 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: strange query plan with LIMIT"
},
{
"msg_contents": "On Wed, Jun 8, 2011 at 7:08 AM, <[email protected]> wrote:\n> What seems odd to me is that the only difference between the two is the limit\n> clause\n\nWhy would that seem odd?\n\nOf course optimally executing a plan with limit is a lot different\nthan one without.\n\nJust... why are you sorting by diag_id?\n\nI believe you would be better off sorting by timestamp than diag_id,\nbut I don't know what the query is supposed to do.\n\nIn any case, that's a weakness I've seen in many database systems, and\npostgres is no exception: order + limit strongly suggests index usage,\nand when the ordered column has \"anti\" correlation with the where\nclause (that is, too many of the first rows in the ordered output are\nfiltered out by the whereclause), the plan with an index is\ninsufferably slow compared to a sequential scan + sort.\n\nPostgres has no way to know that, it depends on correlation between\nthe where clause and the ordering expressions.\n\nIf you cannot change the query, I think your only option is to either\nadd a specific index for that query (ie, if the where clause is always\nthe same, you could add a partial index), or just disable nested loops\nwith \"set enable_nestloop = false;\" just prior to running that query\n(and remember to re-enable afterwards).\n",
"msg_date": "Wed, 8 Jun 2011 09:39:00 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
},
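In SQL, the two options above would look roughly like this (untested; the index name is invented, and the partial index only helps if the query always uses that exact time window):

    -- option 1: a partial index matching a fixed WHERE clause
    create index tdiag_window_diag_id on tdiag (diag_id)
        where create_time >= '2011-06-03 09:49:04.000000+0'
          and create_time <  '2011-06-06 09:59:04.000000+0';

    -- option 2: per session, around the problem query only
    set enable_nestloop = false;
    -- ... run the query ...
    reset enable_nestloop;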
{
"msg_contents": "> What seems odd to me is that the only difference between the two is the\n> limit\n> clause:\n>\n> select * from tdiag where (create_time >= '2011-06-03\n> 09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order\n> by\n> diag_id limit 1;\n>\n> select * from tdiag where (create_time >= '2011-06-03\n> 09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order\n> by\n> diag_id;\n>\n> and yet the plan completely changes.\n\nAs Claudio Freire already pointed out, this is expected behavior. With\nLIMIT the planner prefers plans with low starting cost, as it expects to\nend soon and building index bitmap / hash table would be a waste. So\nactually it would be very odd if the plan did not change in this case ...\n\nAnyway I have no idea how to fix this \"clean\" - without messing with\nenable_* or cost variables or other such dirty tricks.\n\nregards\nTomas\n\n",
"msg_date": "Wed, 8 Jun 2011 10:33:15 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
},
{
"msg_contents": "On Wednesday 08 June 2011 17:39, Claudio Freire wrote:\n> Of course optimally executing a plan with limit is a lot different\n> than one without.\n\nI imagined that limit just cuts out a slice of the query results. \nIf it can find 80000 rows in 0.5 seconds then I would have thought that \nreturning just the first 100 of them should be just as easy.\n\n>\n> Just... why are you sorting by diag_id?\n>\n> I believe you would be better off sorting by timestamp than diag_id,\n> but I don't know what the query is supposed to do.\n\nThe timestamp is only almost monotonic. I need to scan the table in slices and \nI use limit and offset to select the slice.\n\nI've forced the query order with some pgsql like:\n\ndeclare\n query character varying;\n rec record;\nbegin\n -- PG 8.3 doesn't have the 'using' syntax nor 'return query execute'\n\n execute 'create temporary table tt on commit drop as ' ||\n 'select diag_id from tdiag ' || v_where;\n\n query = 'select * from tdiag where diag_id in (select * from tt) ' ||\n 'order by diag_id ' || v_limit || ' ' || v_offset;\n\n for rec in execute query loop\n return next rec;\n end loop;\nend;\n\n-- \nAnthony Shipman | Life is the interval\[email protected] | between pay days.\n",
"msg_date": "Wed, 8 Jun 2011 18:34:07 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: strange query plan with LIMIT"
},
{
"msg_contents": "2011/6/8 <[email protected]>:\n> On Wednesday 08 June 2011 17:39, Claudio Freire wrote:\n>> Of course optimally executing a plan with limit is a lot different\n>> than one without.\n>\n> I imagined that limit just cuts out a slice of the query results.\n> If it can find 80000 rows in 0.5 seconds then I would have thought that\n> returning just the first 100 of them should be just as easy.\n>\n>>\n>> Just... why are you sorting by diag_id?\n>>\n>> I believe you would be better off sorting by timestamp than diag_id,\n>> but I don't know what the query is supposed to do.\n>\n> The timestamp is only almost monotonic. I need to scan the table in slices and\n> I use limit and offset to select the slice.\n>\n> I've forced the query order with some pgsql like:\n>\n> declare\n> query character varying;\n> rec record;\n> begin\n> -- PG 8.3 doesn't have the 'using' syntax nor 'return query execute'\n>\n> execute 'create temporary table tt on commit drop as ' ||\n> 'select diag_id from tdiag ' || v_where;\n>\n> query = 'select * from tdiag where diag_id in (select * from tt) ' ||\n> 'order by diag_id ' || v_limit || ' ' || v_offset;\n>\n> for rec in execute query loop\n> return next rec;\n> end loop;\n> end;\n\nif you use FOR statement, there should be a problem in using a\nimplicit cursor - try to set a GUC cursor_tuple_fraction to 1.0.\n\nRegards\n\nPavel Stehule\n\n\n>\n> --\n> Anthony Shipman | Life is the interval\n> [email protected] | between pay days.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 8 Jun 2011 10:39:48 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
},
{
"msg_contents": "On Wednesday 08 June 2011 18:39, Pavel Stehule wrote:\n> if you use FOR statement, there should be a problem in using a\n> implicit cursor - try to set a GUC cursor_tuple_fraction to 1.0.\nAlas this is mammoth replicator, equivalent to PG 8.3 and it doesn't have that \nparameter.\n-- \nAnthony Shipman | It's caches all the way \[email protected] | down.\n",
"msg_date": "Wed, 8 Jun 2011 19:36:31 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: strange query plan with LIMIT"
},
{
"msg_contents": "> On Wednesday 08 June 2011 17:39, Claudio Freire wrote:\n>> Of course optimally executing a plan with limit is a lot different\n>> than one without.\n>\n> I imagined that limit just cuts out a slice of the query results.\n> If it can find 80000 rows in 0.5 seconds then I would have thought that\n> returning just the first 100 of them should be just as easy.\n\nBut that's exactly the problem with LIMIT clause. The planner considers\ntwo choices - index scan with this estimate\n\nIndex Scan using tdiag_pkey on tdiag (cost=0.00..19114765.76\nrows=1141019 width=114)\n\nand bitmap index scan with this estimate\n\nBitmap Heap Scan on tdiag (cost=25763.48..638085.13 rows=1141019\nwidth=114)\n\nand says - hey, the index scan has much lower starting cost, and I'm using\nlimit so it's much better! Let's use index scan. But then it finds out it\nneeds to scan most of the table and that ruins the performance.\n\nHave you tried to create a composite index on those two columns? Not sure\nif that helps but I'd try that.\n\nTomas\n\n",
"msg_date": "Wed, 8 Jun 2011 11:47:43 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
},
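A concrete form of the composite-index suggestion might be:

    create index tdiag_id_create on tdiag (diag_id, create_time);
    -- diag_id first preserves the ORDER BY diag_id ordering, while the
    -- create_time range can be checked inside the index scan itself

The index name is arbitrary; the point is simply combining both columns in one index.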
{
"msg_contents": "Hello\n\n2011/6/8 <[email protected]>:\n> On Wednesday 08 June 2011 18:39, Pavel Stehule wrote:\n>> if you use FOR statement, there should be a problem in using a\n>> implicit cursor - try to set a GUC cursor_tuple_fraction to 1.0.\n> Alas this is mammoth replicator, equivalent to PG 8.3 and it doesn't have that\n> parameter.\n\nIt should be a part of problem - resp. combination with bad statistic.\nMaybe you should to rewrite your code to\n\nDECLARE int i = 0;\n\nFOR x IN EXECUTE '....'\nLOOP\n RETURN NEXT ...\n i := i + 1;\n EXIT WHEN i > limitvar\nEND LOOP\n\nRegards\n\nPavel Stehule\n\n\n\n> --\n> Anthony Shipman | It's caches all the way\n> [email protected] | down.\n>\n",
"msg_date": "Wed, 8 Jun 2011 11:58:04 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
},
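Filled out so that it should at least compile on 8.3, the loop above might look like this (an untested sketch; the function name, parameters and the limit check are illustrative, and it keeps the original approach of passing the WHERE clause as text):

    create or replace function tdiag_slice(v_where text, v_limit integer)
    returns setof tdiag as $$
    declare
        rec tdiag%rowtype;
        i   integer := 0;
    begin
        -- 8.3-friendly: no RETURN QUERY EXECUTE, no EXECUTE ... USING
        for rec in execute 'select * from tdiag ' || v_where || ' order by diag_id'
        loop
            i := i + 1;
            return next rec;
            exit when i >= v_limit;   -- stop as soon as enough rows are returned
        end loop;
    end;
    $$ language plpgsql;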
{
"msg_contents": "On Wednesday 08 June 2011 19:47, [email protected] wrote:\n> Have you tried to create a composite index on those two columns? Not sure\n> if that helps but I'd try that.\n>\n> Tomas\n\nDo you mean \n create index tdiag_index2 ON tdiag(diag_id, create_time);\nShould this be in addition to or instead of the single index on create_time?\n\n\n\nI must be doing something really wrong to get this to happen:\n\nsymstream2=> select count(*) from tdiag where create_time <= '2011-05-23 \n03:51:00.131597+0';\n count\n-------\n 0\n(1 row)\n\nsymstream2=> explain analyze select count(*) from tdiag where create_time \n<= '2011-05-23 03:51:00.131597+0';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=863867.21..863867.22 rows=1 width=0) (actual \ntime=58994.078..58994.080 rows=1 loops=1)\n -> Seq Scan on tdiag (cost=0.00..844188.68 rows=7871413 width=0) (actual \ntime=58994.063..58994.063 rows=0 loops=1)\n Filter: (create_time <= '2011-05-23 13:51:00.131597+10'::timestamp \nwith time zone)\n Total runtime: 58994.172 ms\n(4 rows)\n\nsymstream2=> \\d tdiag\n Table \"public.tdiag\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------------------------------------------------------\n diag_id | integer | not null default \nnextval(('diag_id_seq'::text)::regclass)\n create_time | timestamp with time zone | default now()\n diag_time | timestamp with time zone | not null\n device_id | integer |\n fleet_id | integer |\n customer_id | integer |\n module | character varying |\n node_kind | smallint |\n diag_level | smallint |\n message | character varying | not null default ''::character \nvarying\n options | text |\n tag | character varying | not null default ''::character \nvarying\nIndexes:\n \"tdiag_pkey\" PRIMARY KEY, btree (diag_id)\n \"tdiag_create_time\" btree (create_time)\n\n\n-- \nAnthony Shipman | Programming is like sex: One mistake and \[email protected] | you're providing support for a lifetime.\n",
"msg_date": "Thu, 9 Jun 2011 16:04:42 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: strange query plan with LIMIT"
},
{
"msg_contents": "On Thursday 09 June 2011 16:04, [email protected] wrote:\n> I must be doing something really wrong to get this to happen:\nYes I did. Ignore that.\n-- \nAnthony Shipman | flailover systems: When one goes down it \[email protected] | flails about until the other goes down too.\n",
"msg_date": "Thu, 9 Jun 2011 16:16:26 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: strange query plan with LIMIT"
},
{
"msg_contents": "On Wednesday 08 June 2011 19:47, [email protected] wrote:\n> Have you tried to create a composite index on those two columns? Not sure\n> if that helps but I'd try that.\n>\n> Tomas\n\nThis finally works well enough\n\nCREATE TABLE tdiag (\n diag_id integer DEFAULT nextval('diag_id_seq'::text),\n create_time\t\ttimestamp with time zone default now(),\n....\n PRIMARY KEY (diag_id)\n);\n\n-- ************ COMPOSITE INDEX\ncreate index tdiag_id_create on tdiag(diag_id, create_time);\n\nalter table tdiag alter column diag_id set statistics 1000;\nalter table tdiag alter column create_time set statistics 1000;\n\nand then just do the original query\n\nsymstream2=> explain analyze select * from tdiag where\nsymstream2-> (create_time >= '2011-06-07 02:00:00.000000+0' and create_time \n< '2011-06-10 07:58:03.000000+0') and diag_level <= 1\nsymstream2-> order by diag_id LIMIT 100 OFFSET 800;\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=6064.19..6822.21 rows=100 width=112) (actual \ntime=1496.644..1497.094 rows=100 loops=1)\n -> Index Scan using tdiag_id_create on tdiag (cost=0.00..1320219.58 \nrows=174166 width=112) (actual time=1409.285..1495.831 rows=900 loops=1)\n Index Cond: ((create_time >= '2011-06-07 12:00:00+10'::timestamp with \ntime zone) AND (create_time < '2011-06-10 17:58:03+10'::timestamp with time \nzone))\n Filter: (diag_level <= 1)\n Total runtime: 1497.297 ms\n\n\nIf I had set the primary key to (diag_id, create_time) would simple queries on\ndiag_id still work well i.e.\n select * from tdiag where diag_id = 1234;\n\n-- \nAnthony Shipman | -module(erlang).\[email protected] | ''(_)->0. %-)\n",
"msg_date": "Fri, 10 Jun 2011 18:38:39 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: strange query plan with LIMIT"
},
{
"msg_contents": "> If I had set the primary key to (diag_id, create_time) would simple\n> queries on\n> diag_id still work well i.e.\n> select * from tdiag where diag_id = 1234;\n\nYes. IIRC the performance penalty for using non-leading column of an index\nis negligible. But why don't you try that on your own - just run an\nexplain and you'll get an immediate answer if that works.\n\nregards\nTomas\n\n",
"msg_date": "Fri, 10 Jun 2011 13:22:58 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
},
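For example (the exact output will vary; the point is only to see which index gets picked):

    explain select * from tdiag where diag_id = 1234;
    -- an index scan on either tdiag_pkey or the (diag_id, create_time) index is
    -- expected, since diag_id is the leading column of both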
{
"msg_contents": "On Fri, Jun 10, 2011 at 1:22 PM, <[email protected]> wrote:\n>> If I had set the primary key to (diag_id, create_time) would simple\n>> queries on\n>> diag_id still work well i.e.\n>> select * from tdiag where diag_id = 1234;\n>\n> Yes. IIRC the performance penalty for using non-leading column of an index\n> is negligible. But why don't you try that on your own - just run an\n> explain and you'll get an immediate answer if that works.\n\nThe effective penalty, which you don't see on your explain, is the\nsize of the index.\n\nDepends on the data stored there, but the index can grow up to double\nsize (usually less than that), and the bigger index is slower for all\noperations.\n\nBut, in general, if you need both a single-column a multi-column\nindex, just go for a multipurpose multicolumn one.\n",
"msg_date": "Fri, 10 Jun 2011 17:22:28 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
}
] |
[
{
"msg_contents": "Version: PostgreSQL 8.3.5 (mammoth replicator)\n\nSchema:\n\nCREATE TABLE tdiag (\n diag_id integer DEFAULT nextval('diag_id_seq'::text),\n create_time\t\ttimestamp with time zone default now(),\t/* time this record \nwas created */\n diag_time timestamp with time zone not null,\n device_id integer, /* optional */\n fleet_id integer, /* optional */\n customer_id integer, /* optional */\n module character varying,\n node_kind smallint,\n diag_level smallint,\n tag character varying not null default '',\n message character varying not null default '',\n options text,\n\n PRIMARY KEY (diag_id)\n);\n\ncreate index tdiag_create_time ON tdiag(create_time);\n\nThe number of rows is around 33 million with time stamps over the past two \nweeks.\nA VACUUM ANALYZE has been done recently on the table.\n\nThe create_time order is almost identical to the id order. What I want\nto find is the first or last entry by id in a given time range. The\nquery I am having a problem with is:\n\nsymstream2=> explain analyze select * from tdiag where (create_time \n>= '2011-06-03\n09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order by \ndiag_id limit 1;\n\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..16.75 rows=1 width=114) (actual time=69425.356..69425.358 \nrows=1 loops=1)\n -> Index Scan using tdiag_pkey on tdiag (cost=0.00..19114765.76 \nrows=1141019 width=114)\n(actual time=69425.352..69425.352 rows=1 loops=1)\n Filter: ((create_time >= '2011-06-03 19:49:04+10'::timestamp with \ntime zone) AND\n(create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n Total runtime: 69425.400 ms\n\nPG seems to decide it must scan the diag_id column and filter each row by the \ncreate_time. 
\n\n\n\nIf I leave out the limit I get\n\nsymstream2=> explain analyze select * from tdiag where (create_time \n>= '2011-06-03\n09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order by \ndiag_id;\n\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=957632.43..960484.98 rows=1141019 width=114) (actual \ntime=552.795..656.319 rows=86530\nloops=1)\n Sort Key: diag_id\n Sort Method: external merge Disk: 9872kB\n -> Bitmap Heap Scan on tdiag (cost=25763.48..638085.13 rows=1141019 \nwidth=114) (actual\ntime=43.232..322.441 rows=86530 loops=1)\n Recheck Cond: ((create_time >= '2011-06-03 19:49:04+10'::timestamp \nwith time zone) AND\n(create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n -> Bitmap Index Scan on tdiag_create_time (cost=0.00..25478.23 \nrows=1141019 width=0)\n(actual time=42.574..42.574 rows=86530 loops=1)\n Index Cond: ((create_time >= '2011-06-03 \n19:49:04+10'::timestamp with time zone) AND\n(create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n Total runtime: 736.440 ms\n(8 rows)\n\n\n\n\nI can be explicit about the query order:\n\nselect * into tt from tdiag where (create_time >= '2011-06-03 \n09:49:04.000000+0' and create_time <\n'2011-06-06 09:59:04.000000+0');\n\nsymstream2=> explain analyze select * from tt order by diag_id limit 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2731.95..2731.95 rows=1 width=101) (actual time=440.165..440.166 \nrows=1 loops=1)\n -> Sort (cost=2731.95..2948.28 rows=86530 width=101) (actual \ntime=440.161..440.161 rows=1\nloops=1)\n Sort Key: diag_id\n Sort Method: top-N heapsort Memory: 17kB\n -> Seq Scan on tt (cost=0.00..2299.30 rows=86530 width=101) (actual \ntime=19.602..330.873\nrows=86530 loops=1)\n Total runtime: 440.209 ms\n(6 rows)\n\n\n\nBut if I try using a subquery I get\n\nsymstream2=> explain analyze select * from (select * from tdiag where \n(create_time >= '2011-06-03\n09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0')) as sub \norder by diag_id limit\n1;\n\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..16.75 rows=1 width=114) (actual time=90344.384..90344.385 \nrows=1 loops=1)\n -> Index Scan using tdiag_pkey on tdiag (cost=0.00..19114765.76 \nrows=1141019 width=114)\n(actual time=90344.380..90344.380 rows=1 loops=1)\n Filter: ((create_time >= '2011-06-03 19:49:04+10'::timestamp with \ntime zone) AND\n(create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n Total runtime: 90344.431 ms\n\n\nHow do I make this both fast and simple?\n-- \nAnthony Shipman | flailover systems: When one goes down it \[email protected] | flails about until the other goes down too.\n",
"msg_date": "Tue, 7 Jun 2011 18:26:17 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "strange query plan with LIMIT"
},
{
"msg_contents": "Hello\n\ndid you run a ANALYZE statement on table tdiag? A statistics are\nabsolutelly out.\n\nRegards\n\nPavel Stehule\n\n2011/6/7 <[email protected]>:\n> Version: PostgreSQL 8.3.5 (mammoth replicator)\n>\n> Schema:\n>\n> CREATE TABLE tdiag (\n> diag_id integer DEFAULT nextval('diag_id_seq'::text),\n> create_time timestamp with time zone default now(), /* time this record\n> was created */\n> diag_time timestamp with time zone not null,\n> device_id integer, /* optional */\n> fleet_id integer, /* optional */\n> customer_id integer, /* optional */\n> module character varying,\n> node_kind smallint,\n> diag_level smallint,\n> tag character varying not null default '',\n> message character varying not null default '',\n> options text,\n>\n> PRIMARY KEY (diag_id)\n> );\n>\n> create index tdiag_create_time ON tdiag(create_time);\n>\n> The number of rows is around 33 million with time stamps over the past two\n> weeks.\n> A VACUUM ANALYZE has been done recently on the table.\n>\n> The create_time order is almost identical to the id order. What I want\n> to find is the first or last entry by id in a given time range. The\n> query I am having a problem with is:\n>\n> symstream2=> explain analyze select * from tdiag where (create_time\n>>= '2011-06-03\n> 09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order by\n> diag_id limit 1;\n>\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..16.75 rows=1 width=114) (actual time=69425.356..69425.358\n> rows=1 loops=1)\n> -> Index Scan using tdiag_pkey on tdiag (cost=0.00..19114765.76\n> rows=1141019 width=114)\n> (actual time=69425.352..69425.352 rows=1 loops=1)\n> Filter: ((create_time >= '2011-06-03 19:49:04+10'::timestamp with\n> time zone) AND\n> (create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n> Total runtime: 69425.400 ms\n>\n> PG seems to decide it must scan the diag_id column and filter each row by the\n> create_time.\n>\n>\n>\n> If I leave out the limit I get\n>\n> symstream2=> explain analyze select * from tdiag where (create_time\n>>= '2011-06-03\n> 09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0') order by\n> diag_id;\n>\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=957632.43..960484.98 rows=1141019 width=114) (actual\n> time=552.795..656.319 rows=86530\n> loops=1)\n> Sort Key: diag_id\n> Sort Method: external merge Disk: 9872kB\n> -> Bitmap Heap Scan on tdiag (cost=25763.48..638085.13 rows=1141019\n> width=114) (actual\n> time=43.232..322.441 rows=86530 loops=1)\n> Recheck Cond: ((create_time >= '2011-06-03 19:49:04+10'::timestamp\n> with time zone) AND\n> (create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n> -> Bitmap Index Scan on tdiag_create_time (cost=0.00..25478.23\n> rows=1141019 width=0)\n> (actual time=42.574..42.574 rows=86530 loops=1)\n> Index Cond: ((create_time >= '2011-06-03\n> 19:49:04+10'::timestamp with time zone) AND\n> (create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n> Total runtime: 736.440 ms\n> (8 rows)\n>\n>\n>\n>\n> I can be explicit about the query order:\n>\n> select * into tt from tdiag where (create_time >= '2011-06-03\n> 09:49:04.000000+0' and create_time <\n> '2011-06-06 09:59:04.000000+0');\n>\n> 
symstream2=> explain analyze select * from tt order by diag_id limit 1;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=2731.95..2731.95 rows=1 width=101) (actual time=440.165..440.166\n> rows=1 loops=1)\n> -> Sort (cost=2731.95..2948.28 rows=86530 width=101) (actual\n> time=440.161..440.161 rows=1\n> loops=1)\n> Sort Key: diag_id\n> Sort Method: top-N heapsort Memory: 17kB\n> -> Seq Scan on tt (cost=0.00..2299.30 rows=86530 width=101) (actual\n> time=19.602..330.873\n> rows=86530 loops=1)\n> Total runtime: 440.209 ms\n> (6 rows)\n>\n>\n>\n> But if I try using a subquery I get\n>\n> symstream2=> explain analyze select * from (select * from tdiag where\n> (create_time >= '2011-06-03\n> 09:49:04.000000+0' and create_time < '2011-06-06 09:59:04.000000+0')) as sub\n> order by diag_id limit\n> 1;\n>\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..16.75 rows=1 width=114) (actual time=90344.384..90344.385\n> rows=1 loops=1)\n> -> Index Scan using tdiag_pkey on tdiag (cost=0.00..19114765.76\n> rows=1141019 width=114)\n> (actual time=90344.380..90344.380 rows=1 loops=1)\n> Filter: ((create_time >= '2011-06-03 19:49:04+10'::timestamp with\n> time zone) AND\n> (create_time < '2011-06-06 19:59:04+10'::timestamp with time zone))\n> Total runtime: 90344.431 ms\n>\n>\n> How do I make this both fast and simple?\n> --\n> Anthony Shipman | flailover systems: When one goes down it\n> [email protected] | flails about until the other goes down too.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Tue, 7 Jun 2011 12:43:22 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange query plan with LIMIT"
}
] |
[
{
"msg_contents": "Hi All,\nI'm having issues with a set of fairly related queries in my\napplication. EXPLAIN ANALYZE is showing them all to be getting stuck\nperforming roughly the same operation:\n -> Bitmap Heap Scan on logparser_entry\n(cost=4119.06..21520.55 rows=68787 width=8) (actual\ntime=107.032..444.864 rows=16168 loops=1)\n Recheck Cond: ((event_type)::text = ANY\n('{Attack,\"DoT Tick\",\"Critical Attack\"}'::text[]))\n Filter: ((((target_relation)::text <> ALL\n('{Other,N/A}'::text[])) OR (NOT (target_relation IS NOT NULL))) AND\n(log_id = 2))\n -> Bitmap Index Scan on\nlogparser_entry_event_type_like (cost=0.00..4101.86 rows=217733\nwidth=0) (actual time=46.392..46.392 rows=237151 loops=1)\n Index Cond: ((event_type)::text = ANY\n('{Attack,\"DoT Tick\",\"Critical Attack\"}'::text[]))\n -> Hash (cost=196.49..196.49 rows=9749 width=23)\n(actual time=19.606..19.606 rows=9749 loops=1)\n\nAll the queries are being generated by the Django ORM, so they are not\nparticularly well optimized pretty. I'd prefer to stay with the ORM\nas a lot of the queries are highly variable depending on the request\nparameters and so unless their are huge gains to be had by falling\nback to raw SQL it will save me a lot of development time to stay with\nthe ORM.\n\nThe table in question (logparser_entry) currently has 815000 records\n(but that only represents a very very small amount compared to what\nthe production server would have to handle, as this represents only 2\nlog objects when I would expect easily 100 or more logs to be uploaded\nper day).\n\nNulls should be rare in the fields.\n\nThis was being run on an AWS High CPU medium instance. Obviously not\nenoughfor a produciton system, but I would hope it would be more than\nadequate for testing when I'm the only one using the app. I opted for\nHigh CPU because the system doesn't seem to be IO bound even on a\nmicro instance (nearly 0 wait time according to top) and barely\ntouches the RAM even when tuned to be aggressive with memory usage.\nAt the same time it's running 100% cpu usage.\n\nMy server config:\nServer Config\n name |\n current_setting\n------------------------------+-------------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 8.4.8 on i686-pc-linux-gnu,\ncompiled by GCC gcc-4.4.real (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5,\n32-bit\n checkpoint_completion_target | 0.9\n effective_cache_size | 1044MB\n external_pid_file | /var/run/postgresql/8.4-main.pid\n fsync | on\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_line_prefix | %t\n log_min_duration_statement | 250ms\n max_connections | 25\n max_stack_depth | 2MB\n port | 5432\n random_page_cost | 4\n server_encoding | UTF8\n shared_buffers | 16MB\n synchronous_commit | off\n TimeZone | UTC\n unix_socket_directory | /var/run/postgresql\n work_mem | 250MB\n(20 rows)\n\nTo try to make reading the queries easier I've attached a text file\nwith the queries and links to EXPLAIN ANALYZE outputs as well as\ncopied them below. I've tried a lot to tune these queries, but\nnothing seems to work. The queries always spend a large amount of\ntime in the same place. 
Is there something I missing that could\nimprove these or even a way to rework my schema to speed things up.\n\nThanks,\nJohn\n\n\nSELECT \"logparser_entry\".\"id\" ,\n \"logparser_entry\".\"log_id\" ,\n \"logparser_entry\".\"encounter_id\" ,\n \"logparser_entry\".\"entry_order\" ,\n \"logparser_entry\".\"timestamp\" ,\n \"logparser_entry\".\"seconds_since_start\" ,\n \"logparser_entry\".\"event_type\" ,\n \"logparser_entry\".\"actor_id\" ,\n \"logparser_entry\".\"actor_relation\" ,\n \"logparser_entry\".\"target_id\" ,\n \"logparser_entry\".\"target_relation\" ,\n \"logparser_entry\".\"pet_owner_id\" ,\n \"logparser_entry\".\"pet_owner_relation\" ,\n \"logparser_entry\".\"pet_target_owner_id\" ,\n \"logparser_entry\".\"pet_target_owner_relation\",\n \"logparser_entry\".\"ability_id\" ,\n \"logparser_entry\".\"effective_value\" ,\n \"logparser_entry\".\"blocked\" ,\n \"logparser_entry\".\"absorbed\" ,\n \"logparser_entry\".\"overkill\" ,\n \"logparser_entry\".\"overheal\" ,\n \"logparser_entry\".\"total_value\"\nFROM \"logparser_entry\"\nWHERE (\n \"logparser_entry\".\"log_id\" = 2\n AND NOT\n (\n (\n \"logparser_entry\".\"actor_relation\"\nIN (E'Other',\n\n E'N/A')\n AND \"logparser_entry\".\"actor_relation\"\nIS NOT NULL\n )\n )\n AND \"logparser_entry\".\"event_type\" IN (E'Attack' ,\n E'DoT Tick',\n E'Critical Attack')\n )\nORDER BY \"logparser_entry\".\"entry_order\" ASC\nLIMIT 1\nhttp://explain.depesz.com/s/vEx\n\n\nSELECT (ROUND(logparser_entry.seconds_since_start / 42)) AS \"interval\",\n SUM(\"logparser_entry\".\"effective_value\") AS\n\"effective_value__sum\"\nFROM \"logparser_entry\"\nWHERE (\n \"logparser_entry\".\"log_id\" = 2\n AND NOT\n (\n (\n \"logparser_entry\".\"actor_relation\"\nIN (E'Other',\n\n E'N/A')\n AND \"logparser_entry\".\"actor_relation\"\nIS NOT NULL\n )\n )\n AND \"logparser_entry\".\"event_type\" IN (E'Attack' ,\n E'DoT Tick',\n E'Critical Attack')\n )\nGROUP BY (ROUND(logparser_entry.seconds_since_start / 42)),\n ROUND(logparser_entry.seconds_since_start / 42)\nORDER BY \"interval\" ASC\nhttp://explain.depesz.com/s/Rhb\n\n\nSELECT (ROUND(logparser_entry.seconds_since_start / 45)) AS \"interval\",\n SUM(\"logparser_entry\".\"effective_value\") AS\n\"effective_value__sum\"\nFROM \"logparser_entry\"\nWHERE (\n \"logparser_entry\".\"log_id\" = 2\n AND NOT\n (\n (\n\n\"logparser_entry\".\"target_relation\" IN (E'Other',\n\n E'N/A')\n AND\n\"logparser_entry\".\"target_relation\" IS NOT NULL\n )\n AND\n (\n \"logparser_entry\".\"actor_relation\"\nIN (E'Other',\n\n E'N/A')\n AND \"logparser_entry\".\"actor_relation\"\nIS NOT NULL\n )\n )\n AND \"logparser_entry\".\"event_type\" IN (E'Heal',\n E'Heal Critical')\n )\nGROUP BY (ROUND(logparser_entry.seconds_since_start / 45)),\n ROUND(logparser_entry.seconds_since_start / 45)\nORDER BY \"interval\" ASC\nhttp://explain.depesz.com/s/JUo\n\n\nSELECT \"units_ability\".\"ability_name\",\n \"units_ability\".\"damage_type\" ,\n SUM(\"logparser_entry\".\"total_value\") AS \"total\"\nFROM \"logparser_entry\"\n LEFT OUTER JOIN \"units_ability\"\n ON (\n \"logparser_entry\".\"ability_id\" = \"units_ability\".\"id\"\n )\nWHERE (\n \"logparser_entry\".\"log_id\" = 2\n AND NOT\n (\n (\n\n\"logparser_entry\".\"target_relation\" IN (E'Other',\n\n E'N/A')\n AND\n\"logparser_entry\".\"target_relation\" IS NOT NULL\n )\n )\n AND \"logparser_entry\".\"event_type\" IN (E'Attack' ,\n E'DoT Tick',\n E'Critical Attack')\n )\nGROUP BY \"units_ability\".\"ability_name\",\n \"units_ability\".\"damage_type\" ,\n 
\"units_ability\".\"ability_name\",\n \"units_ability\".\"damage_type\"\nHAVING NOT\n (\n SUM(\"logparser_entry\".\"total_value\") = 0\n )\nORDER BY \"total\" DESC\nhttp://explain.depesz.com/s/VZA\n\n\n Table \"public.logparser_entry\"\n Column | Type |\n Modifiers\n---------------------------+------------------------+--------------------------------------------------------------\n id | integer | not null default\nnextval('logparser_entry_id_seq'::regclass)\n log_id | integer | not null\n encounter_id | integer |\n entry_order | integer | not null\n timestamp | time without time zone | not null\n seconds_since_start | integer | not null\n event_type | character varying(64) | not null\n actor_id | integer | not null\n actor_relation | character varying(24) |\n target_id | integer |\n target_relation | character varying(24) |\n pet_owner_id | integer |\n pet_owner_relation | character varying(24) |\n pet_target_owner_id | integer |\n pet_target_owner_relation | character varying(32) |\n ability_id | integer |\n effective_value | integer | not null\n blocked | integer | not null\n absorbed | integer | not null\n overkill | integer | not null\n overheal | integer | not null\n total_value | integer | not null\nIndexes:\n \"logparser_entry_pkey\" PRIMARY KEY, btree (id)\n \"logparser_entry_ability_id\" btree (ability_id)\n \"logparser_entry_actor_id\" btree (actor_id)\n \"logparser_entry_actor_relation\" btree (actor_relation)\n \"logparser_entry_actor_relation_like\" btree (actor_relation\nvarchar_pattern_ops)\n \"logparser_entry_encounter_id\" btree (encounter_id)\n \"logparser_entry_event_type\" btree (event_type)\n \"logparser_entry_event_type_like\" btree (event_type varchar_pattern_ops)\n \"logparser_entry_log_id\" btree (log_id)\n \"logparser_entry_pet_owner_id\" btree (pet_owner_id)\n \"logparser_entry_pet_target_owner_id\" btree (pet_target_owner_id)\n \"logparser_entry_target_id\" btree (target_id)\n \"logparser_entry_target_relation\" btree (target_relation)\n \"logparser_entry_target_relation_like\" btree (target_relation\nvarchar_pattern_ops)\nForeign-key constraints:\n \"logparser_entry_ability_id_fkey\" FOREIGN KEY (ability_id)\nREFERENCES units_ability(id) DEFERRABLE INITIALLY DEFERRED\n \"logparser_entry_actor_id_fkey\" FOREIGN KEY (actor_id) REFERENCES\nunits_unit(id) DEFERRABLE INITIALLY DEFERRED\n \"logparser_entry_encounter_id_fkey\" FOREIGN KEY (encounter_id)\nREFERENCES logparser_encounter(id) DEFERRABLE INITIALLY DEFERRED\n \"logparser_entry_log_id_fkey\" FOREIGN KEY (log_id) REFERENCES\nlogparser_log(id) DEFERRABLE INITIALLY DEFERRED\n \"logparser_entry_pet_owner_id_fkey\" FOREIGN KEY (pet_owner_id)\nREFERENCES units_unit(id) DEFERRABLE INITIALLY DEFERRED\n \"logparser_entry_pet_target_owner_id_fkey\" FOREIGN KEY\n(pet_target_owner_id) REFERENCES units_unit(id) DEFERRABLE INITIALLY\nDEFERRED\n \"logparser_entry_target_id_fkey\" FOREIGN KEY (target_id)\nREFERENCES units_unit(id) DEFERRABLE INITIALLY DEFERRED\n\n\nServer Config\n name |\n current_setting\n------------------------------+-------------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 8.4.8 on i686-pc-linux-gnu,\ncompiled by GCC gcc-4.4.real (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5,\n32-bit\n checkpoint_completion_target | 0.9\n effective_cache_size | 1044MB\n external_pid_file | /var/run/postgresql/8.4-main.pid\n fsync | on\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_line_prefix | %t\n 
log_min_duration_statement | 250ms\n max_connections | 25\n max_stack_depth | 2MB\n port | 5432\n random_page_cost | 4\n server_encoding | UTF8\n shared_buffers | 16MB\n synchronous_commit | off\n TimeZone | UTC\n unix_socket_directory | /var/run/postgresql\n work_mem | 250MB\n(20 rows)",
"msg_date": "Tue, 7 Jun 2011 19:58:49 -0700",
"msg_from": "John Williams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Set of related slow queries"
},
{
"msg_contents": "On 8/06/2011 10:58 AM, John Williams wrote:\n\n> -> Bitmap Heap Scan on logparser_entry\n> (cost=4119.06..21520.55 rows=68787 width=8) (actual\n> time=107.032..444.864 rows=16168 loops=1)\n> Recheck Cond: ((event_type)::text = ANY\n> ('{Attack,\"DoT Tick\",\"Critical Attack\"}'::text[]))\n> Filter: ((((target_relation)::text<> ALL\n> ('{Other,N/A}'::text[])) OR (NOT (target_relation IS NOT NULL))) AND\n> (log_id = 2))\n> -> Bitmap Index Scan on\n> logparser_entry_event_type_like (cost=0.00..4101.86 rows=217733\n> width=0) (actual time=46.392..46.392 rows=237151 loops=1)\n> Index Cond: ((event_type)::text = ANY\n> ('{Attack,\"DoT Tick\",\"Critical Attack\"}'::text[]))\n> -> Hash (cost=196.49..196.49 rows=9749 width=23)\n> (actual time=19.606..19.606 rows=9749 loops=1)\n\nThanks for including explain analyze output.\n\nIs there any chance you can pop the full explains (not just excerpts) in \nhere:\n\nhttp://explain.depesz.com/\n\n?\n\nBig query plans tend to get mangled into unreadable garbage by mail \nclients, unfortunately.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Wed, 08 Jun 2011 17:20:46 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set of related slow queries"
},
{
"msg_contents": "> Thanks for including explain analyze output.\n>\n> Is there any chance you can pop the full explains (not just excerpts) in\n> here:\n>\n> http://explain.depesz.com/\n>\n> ?\n\nI believe he already did that - there's a link below each query.\n\nTomas\n\n",
"msg_date": "Wed, 8 Jun 2011 13:08:10 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Set of related slow queries"
},
{
"msg_contents": "------------------------------+---------------------------------------------\n> shared_buffers | 16MB\n> work_mem | 250MB\n\nThis seems a bit suspicious. Are you sure you want to keep the\nshared_buffers so small and work_mem so large at the same time? There\nprobably are workloads where this is the right thing to do, but I doubt\nthis is the case. Why have you set it like this?\n\nI don't have much experience with running Pg on AWS, but I'd try to\nincrease the shared buffers to say 512MB and decrease the work_mem to 16MB\n(or something like that).\n\nUndersized shared_buffers might actually be part of the problem - to\naccess a row, the page needs to be loaded into shared_buffers. Even though\nthe I/O is very fast (or the page is already in the filesystem page\ncache), there's some locking etc. that needs to be done. When the cache is\nsmall (e.g. 16MB) then the pages need to be removed and read again\nfrequently. This might be one of the reasons why the CPU is 100% utilized.\n\n> SELECT \"logparser_entry\".\"id\" ,\n> \"logparser_entry\".\"log_id\" ,\n> \"logparser_entry\".\"encounter_id\" ,\n> \"logparser_entry\".\"entry_order\" ,\n> \"logparser_entry\".\"timestamp\" ,\n> \"logparser_entry\".\"seconds_since_start\" ,\n> \"logparser_entry\".\"event_type\" ,\n> \"logparser_entry\".\"actor_id\" ,\n> \"logparser_entry\".\"actor_relation\" ,\n> \"logparser_entry\".\"target_id\" ,\n> \"logparser_entry\".\"target_relation\" ,\n> \"logparser_entry\".\"pet_owner_id\" ,\n> \"logparser_entry\".\"pet_owner_relation\" ,\n> \"logparser_entry\".\"pet_target_owner_id\" ,\n> \"logparser_entry\".\"pet_target_owner_relation\",\n> \"logparser_entry\".\"ability_id\" ,\n> \"logparser_entry\".\"effective_value\" ,\n> \"logparser_entry\".\"blocked\" ,\n> \"logparser_entry\".\"absorbed\" ,\n> \"logparser_entry\".\"overkill\" ,\n> \"logparser_entry\".\"overheal\" ,\n> \"logparser_entry\".\"total_value\"\n> FROM \"logparser_entry\"\n> WHERE (\n> \"logparser_entry\".\"log_id\" = 2\n> AND NOT\n> (\n> (\n> \"logparser_entry\".\"actor_relation\"\n> IN (E'Other',\n>\n> E'N/A')\n> AND \"logparser_entry\".\"actor_relation\"\n> IS NOT NULL\n> )\n> )\n> AND \"logparser_entry\".\"event_type\" IN (E'Attack' ,\n> E'DoT Tick',\n> E'Critical Attack')\n> )\n> ORDER BY \"logparser_entry\".\"entry_order\" ASC\n> LIMIT 1\n> http://explain.depesz.com/s/vEx\n\nWell, the problem with this is that it needs to evaluate the whole result\nset, sort it by \"entry_order\" and then get the 1st row. 
And there's no\nindex on entry_order, so it has to evaluate the whole result set and then\nperform a traditional sort.\n\nTry to create an index on the \"entry_order\" column - that might push it\ntowards index scan (to be honest I don't know if PostgreSQL knows it can\ndo it this way, so maybe it won't work).\n\n> SELECT (ROUND(logparser_entry.seconds_since_start / 42)) AS \"interval\",\n> SUM(\"logparser_entry\".\"effective_value\") AS\n> \"effective_value__sum\"\n> FROM \"logparser_entry\"\n> WHERE (\n> \"logparser_entry\".\"log_id\" = 2\n> AND NOT\n> (\n> (\n> \"logparser_entry\".\"actor_relation\"\n> IN (E'Other',\n>\n> E'N/A')\n> AND \"logparser_entry\".\"actor_relation\"\n> IS NOT NULL\n> )\n> )\n> AND \"logparser_entry\".\"event_type\" IN (E'Attack' ,\n> E'DoT Tick',\n> E'Critical Attack')\n> )\n> GROUP BY (ROUND(logparser_entry.seconds_since_start / 42)),\n> ROUND(logparser_entry.seconds_since_start / 42)\n> ORDER BY \"interval\" ASC\n> http://explain.depesz.com/s/Rhb\n\nHm, this is probably the best plan possible - not sure how to make it\nfaster. I'd expect a better performance with larger shared_buffers.\n\n> http://explain.depesz.com/s/JUo\n\nSame as above. Good plan, maybe increase shared_buffers?\n\n> http://explain.depesz.com/s/VZA\n\nSame as above. Good plan, maybe increase shared_buffers.\n\nregards\nTomas\n\n",
"msg_date": "Wed, 8 Jun 2011 13:30:38 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Set of related slow queries"
},
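As a sketch of the suggestions above (the index name is invented, shared_buffers needs a restart, and whether the planner actually picks the index for the LIMIT 1 query is not guaranteed):

    -- postgresql.conf: shared_buffers = 512MB, work_mem = 16MB
    create index logparser_entry_entry_order on logparser_entry (entry_order);
    -- gives the "ORDER BY entry_order ASC LIMIT 1" query a chance to read rows
    -- already in order and stop early, instead of sorting ~100k filtered rows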
{
"msg_contents": "On 06/08/2011 07:08 PM, [email protected] wrote:\n>> Thanks for including explain analyze output.\n>>\n>> Is there any chance you can pop the full explains (not just excerpts) in\n>> here:\n>>\n>> http://explain.depesz.com/\n>>\n>> ?\n>\n> I believe he already did that - there's a link below each query.\n\nGah, I'm blind. Thanks.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 08 Jun 2011 21:08:39 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set of related slow queries"
},
{
"msg_contents": "On 06/08/2011 06:30 AM, [email protected] wrote:\n\n>> shared_buffers | 16MB\n>> work_mem | 250MB\n>\n> This seems a bit suspicious. Are you sure you want to keep the\n> shared_buffers so small and work_mem so large at the same time? There\n> probably are workloads where this is the right thing to do, but I doubt\n> this is the case. Why have you set it like this?\n\nI must concur in this case. I can't imagine any scenario where this \nmakes sense. Work-mem is allocated on a per-sort basis, not just per \nsession or transaction. So a large query could allocate several of these \nand run your system out of memory and cause the OOM killer to start \ncausing trouble.\n\n> I don't have much experience with running Pg on AWS, but I'd try to\n> increase the shared buffers to say 512MB and decrease the work_mem to\n> 16MB (or something like that).\n\nEasily good minimums. But it looks like your AWS only has 1GB of RAM \n(based on your effective_cache_size), so you may only want to increase \nit to 256MB. That said, reduce your work_mem to 8MB to start, and \nincrease it in 4MB increments if it's still too low.\n\nWith a setting of 16MB, it has to load data in and out of memory \nconstantly. Even if the host OS has cached every single block you'll \never use, that's only the raw table contents. Processing hundreds of \nthousands of rows still takes time, you just saved yourself the effort \nof fetching them from disk, shared_buffers is still necessary to do \nactual work.\n\nNow... I have some issues with your queries, which are likely the fault \nof the Django ORM, but still consider your analyze:\n\n > http://explain.depesz.com/s/vEx\n\nYour bitmap index scan on logparser is hilarious. The estimates are \nfine. 237k rows in 47ms when it expected 217k. If your table really does \nhave 815k rows in it, that's not very selective at all. Then it adds a \nheap scan for the remaining where conditions, and you end up with 100k \nrows it then has to sort. That's never going to be fast. 600ms actually \nisn't terrible for this many rows, and it also explains your high CPU.\n\nThen your next one:\n\n > http://explain.depesz.com/s/Rhb\n\n700ms, mostly because of the HashAggregate caused by grouping by \nround(((seconds_since_start / 42)). You're aggregating by a calculation \non 100k rows. Again, this will never be \"fast\" and 700ms is not terrible \nconsidering all the extra work the engine's doing. Again, your index \nscan returning everything and the kitchen sink is the root cause. Which \nalso is evidenced here:\n\n > http://explain.depesz.com/s/JUo\n\nAnd here:\n\nhttp://explain.depesz.com/s/VZA\n\nEverything is being caused because it's always using the \nogparser_entry_event_type_like index to fetch the initial 200k rows. The \nonly way to make this faster is to restrict the rows coming back. For \ninstance, since you know these values are coming in every day, why \nsearch through all of history every time?\n\nWhy not get your timestamp column involved? Maybe you only need to look \nat Attack, DoT Tick, and Critical Attack event types for the last day, \nor week, or even month. That alone should drastically reduce your row \ncounts and give the engine a much smaller data set to aggregate and sort.\n\nThe thing is, the way your queries are currently written, as you get \nmore data, this is just going to get worse and worse. 
Grabbing a quarter \nof a table that just gets bigger every day and then getting aggregates \n(group by, etc) is going to get slower every day unless you can restrict \nthe result set with more where clauses. If you need reports on a lot of \nthis data on a regular basis, consider running a nightly or hourly batch \nto insert them into a reporting table you can check later.\n\nThere's a lot you can do here.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
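A minimal sketch of the two suggestions above (restricting by a timestamp and pre-aggregating into a reporting table). The table and column names here — logparser_entry, log_timestamp, logparser_entry_daily — are assumed for illustration, not taken from the poster's schema:

    -- Only aggregate recent rows instead of scanning all history
    SELECT event_type, count(*)
      FROM logparser_entry
     WHERE event_type IN ('Attack', 'DoT Tick', 'Critical Attack')
       AND log_timestamp >= now() - interval '7 days'
     GROUP BY event_type;

    -- Nightly batch into a summary table that the reports read instead
    INSERT INTO logparser_entry_daily (day, event_type, entry_count)
    SELECT date_trunc('day', log_timestamp), event_type, count(*)
      FROM logparser_entry
     WHERE log_timestamp >= date_trunc('day', now()) - interval '1 day'
       AND log_timestamp <  date_trunc('day', now())
     GROUP BY 1, 2;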
"msg_date": "Wed, 8 Jun 2011 08:36:22 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Set of related slow queries"
},
{
"msg_contents": "Hi All,\n\nLet me clarify this a bit.\n\nThe memory values are ridiculous you're completely correct. I've\nsince fixed that (it had no effect sadly).\n\nI've adjust the shared buffers to about 400MB. As per the tuning\nguide to set that to around 1/4 of your system memory (the AWS in\nquestion has 1.7GB). I didn't have the shared buffers set correctly\nto start because truthfully I had no idea how to incurease shmmax and\nI had to look that up.\n\nThe work_mem is very very high for the system It's running on\nadmittedly. I'm ok with leaving that though because currently I'm the\nonly one on the machine at all (this isn't a production set up it's a\ntesting setup). Realistically it's only that high because someone\nsuggusted trying a much higher value (I had already personally set it\nto 50MB as that was about 30% larger than the largest sort I found)\nand see if that improved the situation (it didn't).\n\nSeveral of the implications of my current set of data make things look\na little wrong so let me clarify the issue a bit. The table is\ncomposed of data coming from a games combat log. Each log represents\nabout 400k entries. Since I only really care to look at data from the\nperspective of each log, the log_id is infact going to be more most\nselective portion of the query. Right now the table only has two logs\nin it making this hard to see. But it should reflect that the\nsituation shouldn't get worse over time. I will basically never be\nlooking at more than a 400-500k record portion of my entries table at\na time.\n\nThis stuff gets really painful because I can't very well predict the\nqueries so I can't pre calculate and the data isn't like a system log,\nI could be accepting uploads of 100's of such logs per day. The\nactual queries that are run are a function of what the user wants to\nsee. Their are roughly 5 or so different data views, each of which\ntakes 15-25 separate queries to calculate all the various graphs and\naggregates. Frequently I won't be looking at the \"overall\" entire log\n(composed of 400k entries), instead I'll be looking at smaller slices\nof the data adding: WHERE seconds_since_start <= 1500 AND seconds\nsince start <= 4000 or some such with very arbitrary min and max.\n\nNow I should say I've seen almost this exact same work done before for\na different game. So I can't help but feel I must be missing\nsomething really important either in how I'm setting up my data or how\nI'm processing.\n\nThanks,\nJohn\n\n---\n\nJohn Williams\n42nd Design\nEmail: [email protected]\nSkype: druidjaidan\nPhone: (520) 440-7239\n\n\n\nOn Wed, Jun 8, 2011 at 6:36 AM, Shaun Thomas <[email protected]> wrote:\n> On 06/08/2011 06:30 AM, [email protected] wrote:\n>\n>>> shared_buffers | 16MB\n>>> work_mem | 250MB\n>>\n>> This seems a bit suspicious. Are you sure you want to keep the\n>> shared_buffers so small and work_mem so large at the same time? There\n>> probably are workloads where this is the right thing to do, but I doubt\n>> this is the case. Why have you set it like this?\n>\n> I must concur in this case. I can't imagine any scenario where this makes\n> sense. Work-mem is allocated on a per-sort basis, not just per session or\n> transaction. 
So a large query could allocate several of these and run your\n> system out of memory and cause the OOM killer to start causing trouble.\n>\n>> I don't have much experience with running Pg on AWS, but I'd try to\n>> increase the shared buffers to say 512MB and decrease the work_mem to\n>> 16MB (or something like that).\n>\n> Easily good minimums. But it looks like your AWS only has 1GB of RAM (based\n> on your effective_cache_size), so you may only want to increase it to 256MB.\n> That said, reduce your work_mem to 8MB to start, and increase it in 4MB\n> increments if it's still too low.\n>\n> With a setting of 16MB, it has to load data in and out of memory constantly.\n> Even if the host OS has cached every single block you'll ever use, that's\n> only the raw table contents. Processing hundreds of thousands of rows still\n> takes time, you just saved yourself the effort of fetching them from disk,\n> shared_buffers is still necessary to do actual work.\n>\n> Now... I have some issues with your queries, which are likely the fault of\n> the Django ORM, but still consider your analyze:\n>\n>> http://explain.depesz.com/s/vEx\n>\n> Your bitmap index scan on logparser is hilarious. The estimates are fine.\n> 237k rows in 47ms when it expected 217k. If your table really does have 815k\n> rows in it, that's not very selective at all. Then it adds a heap scan for\n> the remaining where conditions, and you end up with 100k rows it then has to\n> sort. That's never going to be fast. 600ms actually isn't terrible for this\n> many rows, and it also explains your high CPU.\n>\n> Then your next one:\n>\n>> http://explain.depesz.com/s/Rhb\n>\n> 700ms, mostly because of the HashAggregate caused by grouping by\n> round(((seconds_since_start / 42)). You're aggregating by a calculation on\n> 100k rows. Again, this will never be \"fast\" and 700ms is not terrible\n> considering all the extra work the engine's doing. Again, your index scan\n> returning everything and the kitchen sink is the root cause. Which also is\n> evidenced here:\n>\n>> http://explain.depesz.com/s/JUo\n>\n> And here:\n>\n> http://explain.depesz.com/s/VZA\n>\n> Everything is being caused because it's always using the\n> ogparser_entry_event_type_like index to fetch the initial 200k rows. The\n> only way to make this faster is to restrict the rows coming back. For\n> instance, since you know these values are coming in every day, why search\n> through all of history every time?\n>\n> Why not get your timestamp column involved? Maybe you only need to look at\n> Attack, DoT Tick, and Critical Attack event types for the last day, or week,\n> or even month. That alone should drastically reduce your row counts and give\n> the engine a much smaller data set to aggregate and sort.\n>\n> The thing is, the way your queries are currently written, as you get more\n> data, this is just going to get worse and worse. Grabbing a quarter of a\n> table that just gets bigger every day and then getting aggregates (group by,\n> etc) is going to get slower every day unless you can restrict the result set\n> with more where clauses. If you need reports on a lot of this data on a\n> regular basis, consider running a nightly or hourly batch to insert them\n> into a reporting table you can check later.\n>\n> There's a lot you can do here.\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer.php\n> for terms and conditions related to this email\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
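A sketch of the slice pattern described above, together with the kind of multicolumn index that would serve it. Only log_id, seconds_since_start and the event types come from the thread; the table and index names are assumed:

    CREATE INDEX logparser_entry_log_slice_idx
        ON logparser_entry (log_id, seconds_since_start);

    SELECT event_type, count(*)
      FROM logparser_entry
     WHERE log_id = 42                                -- one uploaded log, ~400k rows
       AND seconds_since_start BETWEEN 1500 AND 4000  -- arbitrary user-chosen slice
     GROUP BY event_type;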
"msg_date": "Wed, 8 Jun 2011 09:37:09 -0700",
"msg_from": "John Williams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Set of related slow queries"
}
] |
[
{
"msg_contents": "We have a postgresql 8.3.8 DB which consumes 100% of the CPU whenever we run\nany query. We got vmstat output Machine details are below:\n\n# /usr/bin/lscpu \nArchitecture: x86_64 \nCPU(s): 2 \nThread(s) per core: 1 \nCore(s) per socket: 1 \nCPU socket(s): 2 \nNUM node(s): 1 \nVendor ID: GenuineIntel \nCPU family: 6 \nModel: 26 \nStepping: 4 \nCPU MHz: 2666.761 \nL1d cache: 32K \nL1i cache: 32K \nL2 cache: 256K \nL3 cache: 12288K \n\nMemory: Total Used Free Shared Buffers \nCached \nMem: 3928240 3889552 38688 0 26992 \n2517012 \nSwap: 2097144 312208 1784936 \n\n# /bin/uname -a \nLinux wdhsen01 2.6.32.12-0.7-default #1 SMP 2010-05-20 11:14:20 +0200 x86_64\nx86_64 x86_64 GNU/Linux .\nPostgresql Memory parameters:\nshared_buffers = 1GB\nwork_mem = 128MB\t\t\t\t\nmaintenance_work_mem = 64MB\n\nI have attached the o/p of vmstat command alsp. Can you please help us in\ntuning any other parameters.\nhttp://postgresql.1045698.n5.nabble.com/file/n4465765/untitled.bmp \n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/100-CPU-Utilization-when-we-run-queries-tp4465765p4465765.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 7 Jun 2011 21:19:28 -0700 (PDT)",
"msg_from": "bakkiya <[email protected]>",
"msg_from_op": true,
"msg_subject": "100% CPU Utilization when we run queries."
},
{
"msg_contents": "On 8/06/2011 12:19 PM, bakkiya wrote:\n> We have a postgresql 8.3.8 DB which consumes 100% of the CPU whenever we run\n> any query.\n\nYep, that's what it's supposed to do if it's not I/O limited. What's the \nproblem? Is the query taking longer than you think it should to execute? \nDo you expect it to be waiting for disk rather than CPU?\n\nPlease show your problem query/queries, and EXPLAIN ANALYZE output. See: \nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n> http://postgresql.1045698.n5.nabble.com/file/n4465765/untitled.bmp\n\nHeh. That's been resized to a 250x250 bitmap, which isn't exactly useful \nor readable.\n\nWhy not paste the text rather than a screenshot?\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Wed, 08 Jun 2011 13:07:10 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "http://postgresql.1045698.n5.nabble.com/file/n4475458/untitled.bmp \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/100-CPU-Utilization-when-we-run-queries-tp4465765p4475458.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Fri, 10 Jun 2011 00:39:17 -0700 (PDT)",
"msg_from": "bakkiya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "On Wed, Jun 8, 2011 at 07:19, bakkiya <[email protected]> wrote:\n> We have a postgresql 8.3.8 DB which consumes 100% of the CPU whenever we run\n> any query. We got vmstat output Machine details are below:\n\nAny query? Does even \"SELECT 1\" not work? Or \"SELECT * FROM sometable LIMIT 1\"\n\nOr are you having problems with only certain kinds of queries? If so,\nplease follow this for how to report it:\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nRegards,\nMarti\n",
"msg_date": "Fri, 10 Jun 2011 12:00:59 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "On 06/10/2011 03:39 PM, bakkiya wrote:\n> http://postgresql.1045698.n5.nabble.com/file/n4475458/untitled.bmp\n\n404 file not found.\n\nThat's ... not overly useful.\n\nAgain, *PLEASE* read\n\n http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nand try posting again with enough information that someone might be able \nto actually help you. As Marti mentioned and as recommended by the link \nabove, you should particularly focus on the questions here:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nIf you want help from people here, you will need to make enough effort \nto collect the information required to help you.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 10 Jun 2011 20:37:27 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "Hi,\nSorry. I am posting the query details below.\nQuery:\nSELECT DISTINCT events_rpt_v3.init_service_comp FROM public.events_rpt_v3\nevents_rpt_v3; \nevents_rpt_v3 is a view based on partition tables.\nNumber of rows in events_rpt_v3: 57878 \n\nvmstat o/p:\nprocs -----------memory---------- ---swap-- -----io---- -system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 1 0 632108 377948 132928 1392996 1 1 10 33 2 2 1 0 99 \n0 0\n 1 0 632108 371476 132928 1399016 0 0 3072 36 834 788 49 1 50 \n0 0\n 1 0 632108 364408 132928 1405112 0 0 3072 0 795 707 49 1 50 \n0 0\n 1 0 632108 357596 132928 1411944 0 0 3584 0 797 712 50 1 50 \n0 0\n 1 0 632108 351148 132928 1418540 0 0 3072 0 827 768 49 1 50 \n0 0\n 1 0 632108 344700 132928 1425028 0 0 3072 0 811 725 66 1 34 \n0 0\n 1 0 632108 337688 132928 1431976 0 0 3584 16 829 802 49 1 50 \n0 0\n 1 0 632108 330860 132928 1438808 0 0 3072 0 804 755 50 1 50 \n0 0\n 1 0 632108 323916 132928 1446032 0 0 3584 0 810 738 49 1 50 \n0 0\n 1 0 632108 317344 132928 1452544 0 0 3072 0 806 736 50 1 50 \n0 0\n 1 0 632108 310648 132928 1459120 0 0 3584 0 793 703 49 1 50 \n0 0\n 1 0 632108 304464 132928 1465488 0 0 3072 0 811 745 49 2 50 \n0 0\n 1 0 632108 297396 132936 1472068 0 0 3072 12 808 715 49 1 50 \n0 0\n 1 0 632108 290468 132936 1478876 0 0 3584 0 797 714 50 1 50 \n0 0\n 1 0 632108 284764 132944 1484132 0 0 3072 47284 996 776 50 2 43 \n6 0\n 2 0 632108 278564 132944 1490484 0 0 3072 0 813 720 58 1 41 \n0 0\n 1 0 632108 272480 132944 1496684 0 0 3072 32 822 742 56 2 43 \n0 0\n 1 0 632108 265800 132944 1503420 0 0 3584 0 826 743 50 1 50 \n0 0\n 1 0 632108 259592 132944 1509836 0 0 3072 0 798 742 49 1 50 \n0 0\n 1 0 632108 252772 132944 1516204 0 0 3072 0 771 716 58 2 40 \n0 0\n 1 0 632108 245952 132944 1523052 0 0 3584 4 785 699 50 1 50 \n0 0\nprocs -----------memory---------- ---swap-- -----io---- -system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 1 0 632108 239380 132944 1529576 0 0 3072 21796 832 649 49 2 49 \n0 0\n 1 0 632108 233072 132944 1535908 0 0 3072 0 773 718 50 0 50 \n0 0\n 1 0 632108 226128 132944 1542780 0 0 3584 0 834 769 49 1 49 \n0 0\n 2 0 632108 219556 132944 1549556 0 0 3072 0 817 757 57 1 43 \n0 0\n 1 0 632108 213248 132944 1555864 0 0 3072 0 798 710 57 2 40 \n0 0\n 1 0 632108 206480 132944 1562492 0 0 3584 0 841 836 50 1 49 \n0 0\n 1 0 632108 200000 132944 1569012 0 0 3072 0 842 809 50 1 50 \n0 0\n 1 0 632108 193684 132944 1575320 0 0 3072 0 841 749 49 1 50 \n0 0\n 1 0 632108 188104 132956 1580916 0 0 3584 25540 923 772 49 3 46 \n3 0\n\nnovell@wdhsen01:~> vmstat 1 30\nprocs -----------memory---------- ---swap-- -----io---- -system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 1 0 632108 45496 132808 1721868 1 1 10 33 2 2 1 0 99 \n0 0\n 2 0 632108 39420 132808 1728308 0 0 3072 0 817 747 58 1 42 \n0 0\n 1 0 632108 33344 132808 1734528 0 0 3072 0 760 670 56 1 43 \n0 0\n 1 0 632108 82200 132280 1685904 0 0 3584 12 751 679 49 2 49 \n1 0\n 1 0 632108 76496 132280 1692268 0 0 3072 0 739 672 48 2 50 \n0 0\n 1 0 632108 70172 132280 1698524 0 0 3072 28 767 691 50 1 50 \n0 0\n 1 0 632108 63152 132280 1705572 0 0 3584 0 785 727 49 1 50 \n0 0\n 1 0 632108 57208 132280 1711568 0 0 3072 0 784 722 49 2 49 \n0 0\n 2 0 632108 50884 132280 1717956 0 0 3072 0 804 711 50 1 50 \n0 0\n 1 0 632108 43956 132280 1724708 0 0 3584 0 815 779 49 1 50 \n0 0\n 2 0 632108 37772 132280 1731088 0 0 3072 32 814 724 56 1 43 \n0 0\n 1 0 632108 30456 132280 
1738260 0 0 3584 0 790 700 60 1 40 \n0 0\n 1 0 632108 80056 131640 1689212 0 0 3072 0 795 716 49 2 50 \n0 0\n 1 0 632108 73740 131640 1695520 0 0 3072 0 832 803 50 1 50 \n0 0\n 1 0 632108 68664 131648 1700552 0 0 3072 47356 975 713 49 2 43 \n6 0\n 1 0 632108 61968 131652 1707292 0 0 3584 28 817 763 49 2 50 \n0 0\n 1 0 632108 56016 131652 1713424 0 0 3072 156 795 707 49 2 50 \n0 0\n 1 0 632108 49692 131652 1719764 0 0 3072 0 814 755 49 1 50 \n0 0\n 1 0 632108 42864 131652 1726720 0 0 3584 0 794 691 49 1 50 \n0 0\n 2 0 632108 36912 131652 1732716 0 0 3072 0 794 732 58 1 41 \n0 0\n 1 0 632108 91224 130992 1678484 0 0 3584 0 822 730 57 2 42 \n0 0\nprocs -----------memory---------- ---swap-- -----io---- -system--\n-----cpu------\n r b swpd free buff cache si so bi bo in cs us sy id\nwa st\n 1 0 632108 84900 130992 1685028 0 0 3072 22412 924 761 49 1 50 \n0 0\n 1 0 632108 78576 130992 1691260 0 0 3072 0 843 776 49 1 50 \n0 0\n 1 0 632108 71756 130992 1698284 0 0 3584 0 865 792 50 0 50 \n0 0\n 1 0 632108 65308 130992 1704828 0 0 3072 0 861 770 49 2 50 \n0 0\n 1 0 632108 58324 130992 1711600 0 0 3584 0 866 905 49 1 50 \n0 0\n 1 0 632108 51852 130992 1718008 0 0 3072 0 813 754 49 1 50 \n0 0\n 1 0 632108 45496 130992 1724524 0 0 3072 0 812 737 49 1 50 \n0 0\n 4 0 632108 38560 130992 1731312 0 0 3584 0 816 737 49 2 50 \n0 0\n 2 0 632108 33228 131004 1736472 0 0 3072 26356 904 727 61 3 34 \n3 0\n\nFor small tables, it is not using 100% of CPU, but the same query with limit\n1 is also taking 100% of CPU.\n Explain Analyze o/p is very big. I am pasting here:\n\nUnique (cost=133906741.63..133916648.38 rows=200 width=516) (actual\ntime=486765.451..487236.949 rows=35 loops=1)\n -> Sort (cost=133906741.63..133911695.00 rows=1981350 width=516) (actual\ntime=486765.450..487053.139 rows=1979735 loops=1)\n Sort Key: events_rpt_v3.init_service_comp\n Sort Method: external merge Disk: 46416kB\n -> Subquery Scan events_rpt_v3 (cost=131752986.71..133238999.21\nrows=1981350 width=516) (actual time=452529.136..472555.577 rows=1979735\nloops=1)\n -> Unique (cost=131752986.71..133219185.71 rows=1981350\nwidth=73436) (actual time=452529.128..471365.258 rows=1979735 loops=1)\n -> Sort (cost=131752986.71..131757940.09 rows=1981350\nwidth=73436) (actual time=452529.126..460179.820 rows=1979735 loops=1)\n\n Sort Key: public.events.evt_id, public.events.res,\npublic.events.sres, public.events.sev, public.events.evt_time,\npublic.events.evt_time, public.events.device_evt_time,\npublic.events.sentinel_process_time, public.events.begin_time,\npublic.events.end_time, public.events.repeat_cnt, public.events.dp_int,\npublic.events.sp_int, public.events.msg, public.events.evt,\npublic.events.et, public.events.cust_id, public.events.src_asset_id,\npublic.events.dest_asset_id, public.events.agent_id, public.events.prtcl_id,\npublic.events.arch_id, public.events.sip, (to_ip_char(public.events.sip)),\npublic.events.shn, public.events.sp, public.events.dip,\n(to_ip_char(public.events.dip)), public.events.dhn, public.events.dp,\npublic.events.sun, public.events.dun, public.events.fn, public.events.ei,\npublic.events.init_usr_sys_id, public.events.init_usr_identity_guid,\npublic.events.trgt_usr_sys_id, public.events.trgt_usr_identity_guid,\n\n Sort Method: external merge Disk: 1504224kB\n -> Append (cost=0.00..7167670.80 rows=1981350\nwidth=73436) (actual time=188.427..432870.506 rows=1979735 loops=1)\n -> Result (cost=0.00..7117927.97\nrows=1972965 width=73436) (actual time=188.427..431513.274 rows=1971352\nloops=1)\n -> Append 
(cost=0.00..212550.47\nrows=1972965 width=73436) (actual time=4.445..19200.613 rows=1971352\nloops=1)\n -> Seq Scan on events \n(cost=0.00..10.00 rows=1 width=73436) (actual time=0.000..0.000 rows=0\nloops=1)\n -> Seq Scan on events_p_max\nevents (cost=0.00..10.00 rows=1 width=73436) (actual time=0.000..0.000\nrows=0 loops=1)\n -> Seq Scan on\nevents_p_20110617110000 events (cost=0.00..10.00 rows=1 width=73436)\n(actual time=0.000..0.000 rows=0 loops=1)\n -> Result (cost=0.00..29929.33 rows=8385\nwidth=73436) (actual time=11.691..928.277 rows=8383 loops=1)\n -> Append (cost=0.00..581.83\nrows=8385 width=73436) (actual time=11.194..94.103 rows=8383 loops=1)\n -> Seq Scan on hist_events \n(cost=0.00..10.00 rows=1 width=73436) (actual time=0.001..0.001 rows=0\nloops=1)\n -> Seq Scan on\nhist_events_p_max hist_events (cost=0.00..10.00 rows=1 width=73436) (actual\ntime=0.000..0.000 rows=0 loops=1)\n -> Seq Scan on\nhist_events_p_20101105112519 hist_events (cost=0.00..212.18 rows=2918\nwidth=62004) (actual time=11.192..31.355 rows=2918 loops=1)\n -> Seq Scan on\nhist_events_p_20101107112519 hist_events (cost=0.00..169.64 rows=2764\nwidth=65510) (actual time=7.605..24.603 rows=2764 loops=1)\n -> Seq Scan on\nhist_events_p_20101104112519 hist_events (cost=0.00..180.01 rows=2701\nwidth=61827) (actual time=19.064..37.047 rows=2701 loops=1)\nTotal runtime: 487423.797 ms\n\nI have started vacuum analyzing all the partitions; I will run the query\nand post the results once it is done.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/100-CPU-Utilization-when-we-run-queries-tp4465765p4493567.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
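Side note, illustrative only: on 8.3 an ANALYZE of the parent table does not descend into the child tables, so each partition visible in the plan above needs its own pass, along these lines:

    VACUUM ANALYZE events_p_20110617110000;
    VACUUM ANALYZE hist_events_p_20101104112519;
    VACUUM ANALYZE hist_events_p_20101105112519;
    VACUUM ANALYZE hist_events_p_20101107112519;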
"msg_date": "Wed, 15 Jun 2011 21:35:00 -0700 (PDT)",
"msg_from": "bakkiya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "Any help, please?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/100-CPU-Utilization-when-we-run-queries-tp4465765p4556775.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 6 Jul 2011 06:30:46 -0700 (PDT)",
"msg_from": "bakkiya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "bakkiya <[email protected]> wrote:\n \n> Any help, please?\n \nYou haven't provided enough information for anyone to be able to help.\n \nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n \n-Kevin\n",
"msg_date": "Wed, 06 Jul 2011 08:47:08 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "Dne 6.7.2011 15:30, bakkiya napsal(a):\n> Any help, please?\n\nAccording to the EXPLAIN ANALYZE output (please, don't post it to the\nmailing list directly - use something like explain.depesz.com, I've done\nthat for you this time: http://explain.depesz.com/s/HMN), you're doing a\nUNIQUE over a lot of data (2 million rows, 1.5GB).\n\nThat is done by sorting the data, and sorting is very CPU intensive task\nusually. So the fact that the CPU is 100% utilized is kind of expected\nin this case. So that's a feature, not a bug.\n\nIn general each process is hitting some bottleneck. It might be an I/O,\nit might be a CPU, it might be something less visible (memory bandwidth\nor something like that).\n\nBut I've noticed one thing in your query - you're doing a UNIQUE in the\nview (probably, we don't know the definition) and then once again in the\nquery (but using just one column from the view).\n\nThe problem is the inner sort does not remove any rows (1979735 rows\nin/out). Why do you do the UNIQUE in the view? Do you really need it\nthere? I guess removing it might significantly improve the plan.\n\nTry to do the query without the view - it seems it's just an union of\ncurrent tables and a history (both partitioned, so do something like this)\n\nSELECT DISTINCT init_service_comp FROM (\n SELECT init_service_comp FROM events\n UNION\n SELECT init_service_comp FROM hist_events\n)\n\nor maybe even\n\nSELECT DISTINCT init_service_comp FROM (\n SELECT DISTINCT init_service_comp FROM events\n UNION\n SELECT DISTINCT init_service_comp FROM hist_events\n)\n\nLet's see how that works - post EXPLAIN ANALYZE using explain.depesz.com\n\nTomas\n",
"msg_date": "Wed, 06 Jul 2011 21:04:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "On Wed, Jul 6, 2011 at 1:04 PM, Tomas Vondra <[email protected]> wrote:\n> Dne 6.7.2011 15:30, bakkiya napsal(a):\n>> Any help, please?\n>\n> According to the EXPLAIN ANALYZE output (please, don't post it to the\n> mailing list directly - use something like explain.depesz.com, I've done\n> that for you this time: http://explain.depesz.com/s/HMN), you're doing a\n> UNIQUE over a lot of data (2 million rows, 1.5GB).\n\nIt might have been optimized lately, but generally unique is slower\nthan group by will be.\n",
"msg_date": "Wed, 6 Jul 2011 15:05:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "On 7/07/2011 3:04 AM, Tomas Vondra wrote:\n> That is done by sorting the data, and sorting is very CPU intensive task\n> usually. So the fact that the CPU is 100% utilized is kind of expected\n> in this case. So that's a feature, not a bug.\n>\n> In general each process is hitting some bottleneck. It might be an I/O,\n> it might be a CPU, it might be something less visible (memory bandwidth\n> or something like that).\n>\nThis is worth stressing. PostgreSQL will always use as much CPU time and \ndisk I/O as it can to get a job done as quickly as possible. Because \nmost queries need more CPU and less disk, or more disk and less CPU, \nyou'll usually find that PostgreSQL maxes out one or the other but not \nboth. People monitor CPU use more than disk use, so they tend to notice \nwhen CPU use is maxed out, but Pg maxes out your disk a lot too.\n\nThis is normal, and a good thing. If Pg didn't max out your CPU or disk, \nqueries would be slower. If you want to make things other than \nPostgreSQL happen faster at the expense of slowing down queries, you can \nuse your operating system's process priority mechanisms to give \nPostgreSQL a lower priority for access to CPU and/or disk. That will \nstill allow PostgreSQL to use all your CPU and disk when nothing else \nwants it, but will let other programs use it in preference to PostgreSQL \nif they need it.\n\nThe same thing applies to memory use. People notice that their operating \nsystem reports very little \"free\" memory and get worried about it. The \ntruth is that your OS should never have much free memory, because that \nmemory is not being used for anything useful. It usually keeps disk \ncache in memory when it's not needed for anything else, and trying to \nmake more \"free\" memory clears out the disk cache, making the computer \nslower. CPU use is a bit like that - it's not doing any good idle, so if \nnothing else needs it more you might as well use it.\n\nIf you're on linux, you can use the \"nice\", \"renice\" and \"ionice\" \nprograms to control CPU and disk access priority.\n\n--\nCraig Ringer\n\nPOST Newspapers\n276 Onslow Rd, Shenton Park\nPh: 08 9381 3088 Fax: 08 9388 2258\nABN: 50 008 917 717\nhttp://www.postnewspapers.com.au/\n",
"msg_date": "Thu, 07 Jul 2011 07:36:38 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "On Wed, Jul 6, 2011 at 9:04 PM, Tomas Vondra <[email protected]> wrote:\n> Dne 6.7.2011 15:30, bakkiya napsal(a):\n>> Any help, please?\n>\n> According to the EXPLAIN ANALYZE output (please, don't post it to the\n> mailing list directly - use something like explain.depesz.com, I've done\n> that for you this time: http://explain.depesz.com/s/HMN), you're doing a\n> UNIQUE over a lot of data (2 million rows, 1.5GB).\n>\n> That is done by sorting the data, and sorting is very CPU intensive task\n> usually. So the fact that the CPU is 100% utilized is kind of expected\n> in this case. So that's a feature, not a bug.\n>\n> In general each process is hitting some bottleneck. It might be an I/O,\n> it might be a CPU, it might be something less visible (memory bandwidth\n> or something like that).\n>\n> But I've noticed one thing in your query - you're doing a UNIQUE in the\n> view (probably, we don't know the definition) and then once again in the\n> query (but using just one column from the view).\n>\n> The problem is the inner sort does not remove any rows (1979735 rows\n> in/out). Why do you do the UNIQUE in the view? Do you really need it\n> there? I guess removing it might significantly improve the plan.\n>\n> Try to do the query without the view - it seems it's just an union of\n> current tables and a history (both partitioned, so do something like this)\n>\n> SELECT DISTINCT init_service_comp FROM (\n> SELECT init_service_comp FROM events\n> UNION\n> SELECT init_service_comp FROM hist_events\n> )\n>\n> or maybe even\n>\n> SELECT DISTINCT init_service_comp FROM (\n> SELECT DISTINCT init_service_comp FROM events\n> UNION\n> SELECT DISTINCT init_service_comp FROM hist_events\n> )\n>\n> Let's see how that works - post EXPLAIN ANALYZE using explain.depesz.com\n\nIn this case UNION ALL is probably more appropriate than UNION - and\nmay have different performance characteristics (saving the UNIQUE?).\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Thu, 7 Jul 2011 11:45:20 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
},
{
"msg_contents": "Thanks all for your help. It is really useful, I will modify the query and\npost the result.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/100-CPU-Utilization-when-we-run-queries-tp4465765p4560941.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 7 Jul 2011 05:49:48 -0700 (PDT)",
"msg_from": "bakkiya <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 100% CPU Utilization when we run queries."
}
] |
[
{
"msg_contents": "We are thiiiiiis close to moving our datawarehouse from Oracle to\nPostgres. This query is identical on both systems, but runs much, much\nfaster on Oracle. Our Postgres host has far superior hardware and\ntuning parameters have been set via pgtune. Most everything else runs\nfaster in Postgres, except for this query. In Oracle, we get a hash\njoin that takes about 2 minutes:\n\nSQL> set line 200\ndelete from plan_table;\nexplain plan for\nCREATE TABLE ecr_opens\nas\nselect o.emailcampaignid, count(memberid) opencnt\n from openactivity o,ecr_sents s\n where s.emailcampaignid = o.emailcampaignid\n group by o.emailcampaignid;\n\nSELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());\nSQL> \n13 rows deleted.\n\nSQL> 2 3 4 5 6 7 \nExplained.\n\nSQL> SQL> \nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nPlan hash value: 4034426201\n\n---------------------------------------------------------------------------------------------------------------------------------\n| Id | Operation | Name | Rows |\nBytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |\n---------------------------------------------------------------------------------------------------------------------------------\n| 0 | CREATE TABLE STATEMENT | | 5094 |\n91692 | 9651 (24)| 00:02:16 | | | |\n| 1 | LOAD AS SELECT | ECR_OPENS | |\n| | | | | |\n| 2 | PX COORDINATOR | | |\n| | | | | |\n| 3 | PX SEND QC (RANDOM) | :TQ10002 | 5094 |\n91692 | 2263 (100)| 00:00:32 | Q1,02 | P->S | QC (RAND) |\n| 4 | HASH GROUP BY | | 5094 |\n91692 | 2263 (100)| 00:00:32 | Q1,02 | PCWP | |\n| 5 | PX RECEIVE | | 5094 |\n91692 | 2263 (100)| 00:00:32 | Q1,02 | PCWP | |\n\nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| 6 | PX SEND HASH | :TQ10001 | 5094 |\n91692 | 2263 (100)| 00:00:32 | Q1,01 | P->P | HASH |\n| 7 | HASH GROUP BY | | 5094 |\n91692 | 2263 (100)| 00:00:32 | Q1,01 | PCWP | |\n| 8 | NESTED LOOPS | | 17M|\n297M| 200 (98)| 00:00:03 | Q1,01 | PCWP | |\n| 9 | BUFFER SORT | | |\n| | | Q1,01 | PCWC | |\n| 10 | PX RECEIVE | | |\n| | | Q1,01 | PCWP | |\n| 11 | PX SEND ROUND-ROBIN| :TQ10000 | |\n| | | | S->P | RND-ROBIN |\n| 12 | TABLE ACCESS FULL | ECR_SENTS | 476 |\n6188 | 3 (0)| 00:00:01 | | | |\n|* 13 | INDEX RANGE SCAN | OPENACT_EMCAMP_IDX | 36355 |\n177K| 1 (0)| 00:00:01 | Q1,01 | PCWP | |\n---------------------------------------------------------------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n\nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------\n\n 13 - access(\"S\".\"EMAILCAMPAIGNID\"=\"O\".\"EMAILCAMPAIGNID\")\n\nNote\n-----\n - dynamic sampling used for this statement\n\n29 rows selected.\n\nSQL> desc openactivity\n Name Null? 
Type\n ----------------------------------------- --------\n----------------------------\n EMAILCAMPAIGNID NOT NULL NUMBER\n MEMBERID NOT NULL NUMBER\n OPENDATE DATE\n IPADDRESS VARCHAR2(25)\n DATE_ID NUMBER\n\nSQL> select count(*) from openactivity;\n\n COUNT(*)\n----------\n 192542480\n\nSQL> desc ecr_sents\n Name Null? Type\n ----------------------------------------- --------\n----------------------------\n EMAILCAMPAIGNID NUMBER\n MEMCNT NUMBER\n DATE_ID NUMBER\n SENTDATE DATE\n\n\nSQL> select count(*) from ecr_sents;\n\n COUNT(*)\n----------\n 476\n\nOur final result is the ecr_opens table which is 476 rows.\n\nOn Postgres, this same query takes about 58 minutes (could not run\nexplain analyze because it is in progress):\n\npg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\npg_dw-# as\npg_dw-# select o.emailcampaignid, count(memberid) opencnt\npg_dw-# from openactivity o,ecr_sents s\npg_dw-# where s.emailcampaignid = o.emailcampaignid\npg_dw-# group by o.emailcampaignid;\n QUERY\nPLAN \n-------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n(cost=0.00..38.59 rows=479 width=4)\n -> Index Scan using openact_emcamp_idx on openactivity o\n(cost=0.00..3395.49 rows=19372 width=12)\n Index Cond: (o.emailcampaignid = s.emailcampaignid)\n(5 rows)\n\n\npg_dw=# \\d openactivity\n Table \"openactivity\"\n Column | Type | Modifiers \n-----------------+-----------------------+-----------\n emailcampaignid | integer | not null\n memberid | bigint | not null\n opendate | date | \n ipaddress | character varying(25) | \n date_id | integer | \nIndexes:\n \"openact_dateid_idx\" btree (date_id), tablespace \"pg_idx\"\n \"openact_emcamp_idx\" btree (emailcampaignid), tablespace \"pg_idx\"\n\npg_dw=# select count(*) from openactivity;\n count \n-----------\n 192542480\n\npg_dw=# \\d ecr_sents\n Table \"staging.ecr_sents\"\n Column | Type | Modifiers \n-----------------+---------+-----------\n emailcampaignid | integer | \n memcnt | numeric | \n date_id | integer | \n sentdate | date | \nIndexes:\n \"ecr_sents_ecid_idx\" btree (emailcampaignid), tablespace\n\"staging_idx\"\n\npg_dw=# select count(*) from ecr_sents;\n count \n-------\n 479\n\nWe added an index on ecr_sents to see if that improved performance, but\ndid not work. 
Both tables have updated stats:\n\n\npg_dw=# select relname, last_vacuum, last_autovacuum, last_analyze,\nlast_autoanalyze from pg_stat_all_tables where relname in\n('openactivity','ecr_sents');\n relname | last_vacuum | last_autovacuum |\nlast_analyze | last_autoanalyze \n--------------+-------------------------------+-----------------+-------------------------------+-------------------------------\n ecr_sents | | |\n2011-06-08 10:31:20.677172-04 | 2011-06-08 10:31:34.545504-04\n openactivity | 2011-06-02 16:34:47.129695-04 | |\n2011-06-07 13:48:21.909546-04 | 2011-04-27 17:49:15.004551-04\n\nRelevant info:\npg_dw=# SELECT\npg_dw-# 'version'::text AS \"name\",\npg_dw-# version() AS \"current_setting\"\npg_dw-# UNION ALL\npg_dw-# SELECT\npg_dw-# name,current_setting(name) \npg_dw-# FROM pg_settings \npg_dw-# WHERE NOT source='default' AND NOT name IN\npg_dw-# ('config_file','data_directory','hba_file','ident_file',\npg_dw(# 'log_timezone','DateStyle','lc_messages','lc_monetary',\npg_dw(# 'lc_numeric','lc_time','timezone_abbreviations',\npg_dw(# 'default_text_search_config','application_name',\npg_dw(# 'transaction_deferrable','transaction_isolation',\npg_dw(# 'transaction_read_only');\n name |\ncurrent_setting \n------------------------------+-------------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 9.0.3 on\nx86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red\nHat 4.1.2-48), 64-bit\n archive_command | (disabled)\n archive_timeout | 1h\n autovacuum_max_workers | 10\n checkpoint_completion_target | 0.9\n checkpoint_segments | 64\n checkpoint_timeout | 1h\n constraint_exclusion | on\n default_statistics_target | 100\n effective_cache_size | 22GB\n effective_io_concurrency | 5\n fsync | on\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_checkpoints | on\n log_destination | stderr\n log_directory | pg_log\n log_error_verbosity | default\n log_filename | pg_dw.log\n log_line_prefix | %m-%u-%p\n log_lock_waits | on\n log_min_error_statement | panic\n log_min_messages | notice\n log_rotation_age | 0\n log_rotation_size | 0\n log_truncate_on_rotation | off\n logging_collector | on\n maintenance_work_mem | 1GB\n max_connections | 400\n max_stack_depth | 2MB\n search_path | xxxxx\n server_encoding | UTF8\n shared_buffers | 7680MB\n TimeZone | US/Eastern\n wal_buffers | 32MB\n wal_level | archive\n work_mem | 768MB\n\nShould this query be hashing the smaller table on Postgres rather than\nusing nested loops?\n\nThanks.\nTony\n\n",
"msg_date": "Wed, 08 Jun 2011 11:11:32 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "> On Postgres, this same query takes about 58 minutes (could not run\n> explain analyze because it is in progress):\n>\n> pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n> pg_dw-# as\n> pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n> pg_dw-# from openactivity o,ecr_sents s\n> pg_dw-# where s.emailcampaignid = o.emailcampaignid\n> pg_dw-# group by o.emailcampaignid;\n> QUERY\n> PLAN\n> -------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n> -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n> -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n> (cost=0.00..38.59 rows=479 width=4)\n> -> Index Scan using openact_emcamp_idx on openactivity o\n> (cost=0.00..3395.49 rows=19372 width=12)\n> Index Cond: (o.emailcampaignid = s.emailcampaignid)\n> (5 rows)\n>\n\nPlease, post EXPLAIN ANALYZE, not just EXPLAIN. Preferably using\nexplain.depesz.com.\n\nregards\nTomas\n\n",
"msg_date": "Wed, 8 Jun 2011 17:31:26 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "Tony Capobianco <[email protected]> writes:\n> pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n> pg_dw-# as\n> pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n> pg_dw-# from openactivity o,ecr_sents s\n> pg_dw-# where s.emailcampaignid = o.emailcampaignid\n> pg_dw-# group by o.emailcampaignid;\n> QUERY\n> PLAN \n> -------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n> -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n> -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n> (cost=0.00..38.59 rows=479 width=4)\n> -> Index Scan using openact_emcamp_idx on openactivity o\n> (cost=0.00..3395.49 rows=19372 width=12)\n> Index Cond: (o.emailcampaignid = s.emailcampaignid)\n> (5 rows)\n\n> Should this query be hashing the smaller table on Postgres rather than\n> using nested loops?\n\nYeah, seems like it. Just for testing purposes, do \"set enable_nestloop\n= 0\" and see what plan you get then.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jun 2011 11:33:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance "
},
{
"msg_contents": "pg_dw=# set enable_nestloop =0;\nSET\nTime: 0.165 ms\npg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\npg_dw-# as\npg_dw-# select o.emailcampaignid, count(memberid) opencnt\npg_dw-# from openactivity o,ecr_sents s\npg_dw-# where s.emailcampaignid = o.emailcampaignid\npg_dw-# group by o.emailcampaignid;\n QUERY\nPLAN \n-----------------------------------------------------------------------------------------\n HashAggregate (cost=4391163.81..4391288.05 rows=9939 width=12)\n -> Hash Join (cost=14.78..4344767.23 rows=9279316 width=12)\n Hash Cond: (o.emailcampaignid = s.emailcampaignid)\n -> Seq Scan on openactivity o (cost=0.00..3529930.67\nrows=192540967 width=12)\n -> Hash (cost=8.79..8.79 rows=479 width=4)\n -> Seq Scan on ecr_sents s (cost=0.00..8.79 rows=479\nwidth=4)\n\nYikes. Two sequential scans.\n\n\nOn Wed, 2011-06-08 at 11:33 -0400, Tom Lane wrote:\n> Tony Capobianco <[email protected]> writes:\n> > pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n> > pg_dw-# as\n> > pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n> > pg_dw-# from openactivity o,ecr_sents s\n> > pg_dw-# where s.emailcampaignid = o.emailcampaignid\n> > pg_dw-# group by o.emailcampaignid;\n> > QUERY\n> > PLAN \n> > -------------------------------------------------------------------------------------------------------------\n> > GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n> > -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n> > -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n> > (cost=0.00..38.59 rows=479 width=4)\n> > -> Index Scan using openact_emcamp_idx on openactivity o\n> > (cost=0.00..3395.49 rows=19372 width=12)\n> > Index Cond: (o.emailcampaignid = s.emailcampaignid)\n> > (5 rows)\n> \n> > Should this query be hashing the smaller table on Postgres rather than\n> > using nested loops?\n> \n> Yeah, seems like it. Just for testing purposes, do \"set enable_nestloop\n> = 0\" and see what plan you get then.\n> \n> \t\t\tregards, tom lane\n> \n\n\n",
"msg_date": "Wed, 08 Jun 2011 11:40:34 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "* Tony Capobianco ([email protected]) wrote:\n> HashAggregate (cost=4391163.81..4391288.05 rows=9939 width=12)\n> -> Hash Join (cost=14.78..4344767.23 rows=9279316 width=12)\n> Hash Cond: (o.emailcampaignid = s.emailcampaignid)\n> -> Seq Scan on openactivity o (cost=0.00..3529930.67\n> rows=192540967 width=12)\n> -> Hash (cost=8.79..8.79 rows=479 width=4)\n> -> Seq Scan on ecr_sents s (cost=0.00..8.79 rows=479\n> width=4)\n> \n> Yikes. Two sequential scans.\n\nErr, isn't that more-or-less exactly what you want here? The smaller\ntable is going to be hashed and then you'll traverse the bigger table\nand bounce each row off the hash table. Have you tried actually running\nthis and seeing how long it takes? The bigger table doesn't look to be\n*that* big, if your i/o subsystem is decent and you've got a lot of\nmemory available for kernel cacheing, should be quick.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Wed, 8 Jun 2011 11:51:59 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "08.06.11 18:40, Tony Capobianco написав(ла):\n> pg_dw=# set enable_nestloop =0;\n> SET\n> Time: 0.165 ms\n> pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n> pg_dw-# as\n> pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n> pg_dw-# from openactivity o,ecr_sents s\n> pg_dw-# where s.emailcampaignid = o.emailcampaignid\n> pg_dw-# group by o.emailcampaignid;\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------\n> HashAggregate (cost=4391163.81..4391288.05 rows=9939 width=12)\n> -> Hash Join (cost=14.78..4344767.23 rows=9279316 width=12)\n> Hash Cond: (o.emailcampaignid = s.emailcampaignid)\n> -> Seq Scan on openactivity o (cost=0.00..3529930.67\n> rows=192540967 width=12)\n> -> Hash (cost=8.79..8.79 rows=479 width=4)\n> -> Seq Scan on ecr_sents s (cost=0.00..8.79 rows=479\n> width=4)\n>\n> Yikes. Two sequential scans.\n\nYep. Can you see another options? Either you take each of 479 records \nand try to find matching records in another table using index (first \nplan), or you take both two tables fully (seq scan) and join - second plan.\nFirst plan is better if your large table is clustered enough on \nemailcampaignid field (479 index reads and 479 sequential table reads). \nIf it's not, you may get a 479 table reads transformed into a lot or \nrandom reads.\nBTW: May be you have different data clustering in PostgreSQL & Oracle? \nOr data in Oracle may be \"hot\" in caches?\nAlso, sequential scan is not too bad thing. It may be cheap enough to \nread millions of records if they are not too wide. Please show \"select \npg_size_pretty(pg_relation_size('openactivity'));\" Have you tried to \nexplain analyze second plan?\n\nBest regards, Vitalii Tymchyshyn\n\n\n>\n> On Wed, 2011-06-08 at 11:33 -0400, Tom Lane wrote:\n>> Tony Capobianco<[email protected]> writes:\n>>> pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n>>> pg_dw-# as\n>>> pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n>>> pg_dw-# from openactivity o,ecr_sents s\n>>> pg_dw-# where s.emailcampaignid = o.emailcampaignid\n>>> pg_dw-# group by o.emailcampaignid;\n>>> QUERY\n>>> PLAN\n>>> -------------------------------------------------------------------------------------------------------------\n>>> GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n>>> -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n>>> -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n>>> (cost=0.00..38.59 rows=479 width=4)\n>>> -> Index Scan using openact_emcamp_idx on openactivity o\n>>> (cost=0.00..3395.49 rows=19372 width=12)\n>>> Index Cond: (o.emailcampaignid = s.emailcampaignid)\n>>> (5 rows)\n>>> Should this query be hashing the smaller table on Postgres rather than\n>>> using nested loops?\n>> Yeah, seems like it. Just for testing purposes, do \"set enable_nestloop\n>> = 0\" and see what plan you get then.\n\n",
"msg_date": "Wed, 08 Jun 2011 18:52:05 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "Here's the explain analyze:\n\npg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)\nas\nselect o.emailcampaignid, count(memberid) opencnt\n from openactivity o,ecr_sents s\n where s.emailcampaignid = o.emailcampaignid\n group by o.emailcampaignid;\n\nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12) (actual\ntime=308630.967..2592279.526 rows=472 loops=1)\n -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n(actual time=31.489..2589363.047 rows=8586466 loops=1)\n -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n(cost=0.00..38.59 rows=479 width=4) (actual time=0.010..13.326 rows=479\nloops=1)\n -> Index Scan using openact_emcamp_idx on openactivity o\n(cost=0.00..3395.49 rows=19372 width=12) (actual time=1.336..5397.139\nrows=17926 loops=479)\n Index Cond: (o.emailcampaignid = s.emailcampaignid)\n Total runtime: 2592284.336 ms\n\n\nOn Wed, 2011-06-08 at 17:31 +0200, [email protected] wrote:\n> > On Postgres, this same query takes about 58 minutes (could not run\n> > explain analyze because it is in progress):\n> >\n> > pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n> > pg_dw-# as\n> > pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n> > pg_dw-# from openactivity o,ecr_sents s\n> > pg_dw-# where s.emailcampaignid = o.emailcampaignid\n> > pg_dw-# group by o.emailcampaignid;\n> > QUERY\n> > PLAN\n> > -------------------------------------------------------------------------------------------------------------\n> > GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n> > -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n> > -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n> > (cost=0.00..38.59 rows=479 width=4)\n> > -> Index Scan using openact_emcamp_idx on openactivity o\n> > (cost=0.00..3395.49 rows=19372 width=12)\n> > Index Cond: (o.emailcampaignid = s.emailcampaignid)\n> > (5 rows)\n> >\n> \n> Please, post EXPLAIN ANALYZE, not just EXPLAIN. Preferably using\n> explain.depesz.com.\n> \n> regards\n> Tomas\n> \n> \n\n\n",
"msg_date": "Wed, 08 Jun 2011 12:22:08 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "Hello\n\nwhat is your settings for\n\nrandom_page_cost, seq_page_cost and work_mem?\n\nRegards\n\nPavel Stehule\n\n2011/6/8 Tony Capobianco <[email protected]>:\n> Here's the explain analyze:\n>\n> pg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)\n> as\n> select o.emailcampaignid, count(memberid) opencnt\n> from openactivity o,ecr_sents s\n> where s.emailcampaignid = o.emailcampaignid\n> group by o.emailcampaignid;\n>\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12) (actual\n> time=308630.967..2592279.526 rows=472 loops=1)\n> -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n> (actual time=31.489..2589363.047 rows=8586466 loops=1)\n> -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n> (cost=0.00..38.59 rows=479 width=4) (actual time=0.010..13.326 rows=479\n> loops=1)\n> -> Index Scan using openact_emcamp_idx on openactivity o\n> (cost=0.00..3395.49 rows=19372 width=12) (actual time=1.336..5397.139\n> rows=17926 loops=479)\n> Index Cond: (o.emailcampaignid = s.emailcampaignid)\n> Total runtime: 2592284.336 ms\n>\n>\n> On Wed, 2011-06-08 at 17:31 +0200, [email protected] wrote:\n>> > On Postgres, this same query takes about 58 minutes (could not run\n>> > explain analyze because it is in progress):\n>> >\n>> > pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n>> > pg_dw-# as\n>> > pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n>> > pg_dw-# from openactivity o,ecr_sents s\n>> > pg_dw-# where s.emailcampaignid = o.emailcampaignid\n>> > pg_dw-# group by o.emailcampaignid;\n>> > QUERY\n>> > PLAN\n>> > -------------------------------------------------------------------------------------------------------------\n>> > GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n>> > -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n>> > -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n>> > (cost=0.00..38.59 rows=479 width=4)\n>> > -> Index Scan using openact_emcamp_idx on openactivity o\n>> > (cost=0.00..3395.49 rows=19372 width=12)\n>> > Index Cond: (o.emailcampaignid = s.emailcampaignid)\n>> > (5 rows)\n>> >\n>>\n>> Please, post EXPLAIN ANALYZE, not just EXPLAIN. Preferably using\n>> explain.depesz.com.\n>>\n>> regards\n>> Tomas\n>>\n>>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 8 Jun 2011 18:27:08 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "Well, this ran much better. However, I'm not sure if it's because of\nset enable_nestloop = 0, or because I'm executing the query twice in a\nrow, where previous results may be cached. I will try this setting in\nmy code for when this process runs later today and see what the result\nis.\n\nThanks!\n\npg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)\npg_dw-# as\npg_dw-# select o.emailcampaignid, count(memberid) opencnt\npg_dw-# from openactivity o,ecr_sents s\npg_dw-# where s.emailcampaignid = o.emailcampaignid\npg_dw-# group by o.emailcampaignid;\n\n QUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=4391163.81..4391288.05 rows=9939 width=12) (actual\ntime=167254.751..167254.937 rows=472 loops=1)\n -> Hash Join (cost=14.78..4344767.23 rows=9279316 width=12) (actual\ntime=0.300..164577.131 rows=8586466 loops=1)\n Hash Cond: (o.emailcampaignid = s.emailcampaignid)\n -> Seq Scan on openactivity o (cost=0.00..3529930.67\nrows=192540967 width=12) (actual time=0.011..124351.878 rows=192542480\nloops=1)\n -> Hash (cost=8.79..8.79 rows=479 width=4) (actual\ntime=0.253..0.253 rows=479 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 17kB\n -> Seq Scan on ecr_sents s (cost=0.00..8.79 rows=479\nwidth=4) (actual time=0.010..0.121 rows=479 loops=1)\n Total runtime: 167279.950 ms\n\n\n\nOn Wed, 2011-06-08 at 11:51 -0400, Stephen Frost wrote:\n> * Tony Capobianco ([email protected]) wrote:\n> > HashAggregate (cost=4391163.81..4391288.05 rows=9939 width=12)\n> > -> Hash Join (cost=14.78..4344767.23 rows=9279316 width=12)\n> > Hash Cond: (o.emailcampaignid = s.emailcampaignid)\n> > -> Seq Scan on openactivity o (cost=0.00..3529930.67\n> > rows=192540967 width=12)\n> > -> Hash (cost=8.79..8.79 rows=479 width=4)\n> > -> Seq Scan on ecr_sents s (cost=0.00..8.79 rows=479\n> > width=4)\n> > \n> > Yikes. Two sequential scans.\n> \n> Err, isn't that more-or-less exactly what you want here? The smaller\n> table is going to be hashed and then you'll traverse the bigger table\n> and bounce each row off the hash table. Have you tried actually running\n> this and seeing how long it takes? The bigger table doesn't look to be\n> *that* big, if your i/o subsystem is decent and you've got a lot of\n> memory available for kernel cacheing, should be quick.\n> \n> \tThanks,\n> \n> \t\tStephen\n\n\n",
"msg_date": "Wed, 08 Jun 2011 12:28:06 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "pg_dw=# show random_page_cost ;\n random_page_cost \n------------------\n 4\n(1 row)\n\nTime: 0.299 ms\npg_dw=# show seq_page_cost ;\n seq_page_cost \n---------------\n 1\n(1 row)\n\nTime: 0.250 ms\npg_dw=# show work_mem ;\n work_mem \n----------\n 768MB\n(1 row)\n\n\n\n\nOn Wed, 2011-06-08 at 18:27 +0200, Pavel Stehule wrote:\n> Hello\n> \n> what is your settings for\n> \n> random_page_cost, seq_page_cost and work_mem?\n> \n> Regards\n> \n> Pavel Stehule\n> \n> 2011/6/8 Tony Capobianco <[email protected]>:\n> > Here's the explain analyze:\n> >\n> > pg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)\n> > as\n> > select o.emailcampaignid, count(memberid) opencnt\n> > from openactivity o,ecr_sents s\n> > where s.emailcampaignid = o.emailcampaignid\n> > group by o.emailcampaignid;\n> >\n> > QUERY\n> > PLAN\n> > ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12) (actual\n> > time=308630.967..2592279.526 rows=472 loops=1)\n> > -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n> > (actual time=31.489..2589363.047 rows=8586466 loops=1)\n> > -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n> > (cost=0.00..38.59 rows=479 width=4) (actual time=0.010..13.326 rows=479\n> > loops=1)\n> > -> Index Scan using openact_emcamp_idx on openactivity o\n> > (cost=0.00..3395.49 rows=19372 width=12) (actual time=1.336..5397.139\n> > rows=17926 loops=479)\n> > Index Cond: (o.emailcampaignid = s.emailcampaignid)\n> > Total runtime: 2592284.336 ms\n> >\n> >\n> > On Wed, 2011-06-08 at 17:31 +0200, [email protected] wrote:\n> >> > On Postgres, this same query takes about 58 minutes (could not run\n> >> > explain analyze because it is in progress):\n> >> >\n> >> > pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n> >> > pg_dw-# as\n> >> > pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n> >> > pg_dw-# from openactivity o,ecr_sents s\n> >> > pg_dw-# where s.emailcampaignid = o.emailcampaignid\n> >> > pg_dw-# group by o.emailcampaignid;\n> >> > QUERY\n> >> > PLAN\n> >> > -------------------------------------------------------------------------------------------------------------\n> >> > GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n> >> > -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n> >> > -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n> >> > (cost=0.00..38.59 rows=479 width=4)\n> >> > -> Index Scan using openact_emcamp_idx on openactivity o\n> >> > (cost=0.00..3395.49 rows=19372 width=12)\n> >> > Index Cond: (o.emailcampaignid = s.emailcampaignid)\n> >> > (5 rows)\n> >> >\n> >>\n> >> Please, post EXPLAIN ANALYZE, not just EXPLAIN. Preferably using\n> >> explain.depesz.com.\n> >>\n> >> regards\n> >> Tomas\n> >>\n> >>\n> >\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> \n\n\n",
"msg_date": "Wed, 08 Jun 2011 12:33:43 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "Tony Capobianco <[email protected]> writes:\n> Well, this ran much better. However, I'm not sure if it's because of\n> set enable_nestloop = 0, or because I'm executing the query twice in a\n> row, where previous results may be cached. I will try this setting in\n> my code for when this process runs later today and see what the result\n> is.\n\nIf the performance differential holds up, you should look at adjusting\nyour cost parameters so that the planner isn't so wrong about which one\nis faster. Hacking enable_nestloop is a band-aid, not something you\nwant to use in production.\n\nLooking at the values you gave earlier, I wonder whether the\neffective_cache_size setting isn't unreasonably high. That's reducing\nthe estimated cost of accessing the large table via indexscans, and\nI'm thinking it reduced it too much.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Jun 2011 13:03:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance "
},
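Since the thread is on 9.0, one way to tell the plan change apart from cache warmth is to re-run the faster variant with EXPLAIN (ANALYZE, BUFFERS) in the same session; the "Buffers: shared hit=... read=..." lines show how much of the runtime was served from cache versus disk. A minimal diagnostic sketch using the query from the thread (not a production fix):

SET enable_nestloop = off;
EXPLAIN (ANALYZE, BUFFERS)
select o.emailcampaignid, count(memberid) opencnt
  from openactivity o, ecr_sents s
 where s.emailcampaignid = o.emailcampaignid
 group by o.emailcampaignid;
RESET enable_nestloop;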
{
"msg_contents": "2011/6/8 Tony Capobianco <[email protected]>:\n> pg_dw=# show random_page_cost ;\n> random_page_cost\n> ------------------\n> 4\n> (1 row)\n>\n> Time: 0.299 ms\n> pg_dw=# show seq_page_cost ;\n> seq_page_cost\n> ---------------\n> 1\n> (1 row)\n>\n> Time: 0.250 ms\n> pg_dw=# show work_mem ;\n> work_mem\n> ----------\n> 768MB\n> (1 row)\n>\n>\n\nit is ok.\n\nPavel\n\n>\n>\n> On Wed, 2011-06-08 at 18:27 +0200, Pavel Stehule wrote:\n>> Hello\n>>\n>> what is your settings for\n>>\n>> random_page_cost, seq_page_cost and work_mem?\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> 2011/6/8 Tony Capobianco <[email protected]>:\n>> > Here's the explain analyze:\n>> >\n>> > pg_dw=# explain analyze CREATE TABLE ecr_opens with (FILLFACTOR=100)\n>> > as\n>> > select o.emailcampaignid, count(memberid) opencnt\n>> > from openactivity o,ecr_sents s\n>> > where s.emailcampaignid = o.emailcampaignid\n>> > group by o.emailcampaignid;\n>> >\n>> > QUERY\n>> > PLAN\n>> > ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> > GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12) (actual\n>> > time=308630.967..2592279.526 rows=472 loops=1)\n>> > -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n>> > (actual time=31.489..2589363.047 rows=8586466 loops=1)\n>> > -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n>> > (cost=0.00..38.59 rows=479 width=4) (actual time=0.010..13.326 rows=479\n>> > loops=1)\n>> > -> Index Scan using openact_emcamp_idx on openactivity o\n>> > (cost=0.00..3395.49 rows=19372 width=12) (actual time=1.336..5397.139\n>> > rows=17926 loops=479)\n>> > Index Cond: (o.emailcampaignid = s.emailcampaignid)\n>> > Total runtime: 2592284.336 ms\n>> >\n>> >\n>> > On Wed, 2011-06-08 at 17:31 +0200, [email protected] wrote:\n>> >> > On Postgres, this same query takes about 58 minutes (could not run\n>> >> > explain analyze because it is in progress):\n>> >> >\n>> >> > pg_dw=# explain CREATE TABLE ecr_opens with (FILLFACTOR=100)\n>> >> > pg_dw-# as\n>> >> > pg_dw-# select o.emailcampaignid, count(memberid) opencnt\n>> >> > pg_dw-# from openactivity o,ecr_sents s\n>> >> > pg_dw-# where s.emailcampaignid = o.emailcampaignid\n>> >> > pg_dw-# group by o.emailcampaignid;\n>> >> > QUERY\n>> >> > PLAN\n>> >> > -------------------------------------------------------------------------------------------------------------\n>> >> > GroupAggregate (cost=0.00..1788988.05 rows=9939 width=12)\n>> >> > -> Nested Loop (cost=0.00..1742467.24 rows=9279316 width=12)\n>> >> > -> Index Scan using ecr_sents_ecid_idx on ecr_sents s\n>> >> > (cost=0.00..38.59 rows=479 width=4)\n>> >> > -> Index Scan using openact_emcamp_idx on openactivity o\n>> >> > (cost=0.00..3395.49 rows=19372 width=12)\n>> >> > Index Cond: (o.emailcampaignid = s.emailcampaignid)\n>> >> > (5 rows)\n>> >> >\n>> >>\n>> >> Please, post EXPLAIN ANALYZE, not just EXPLAIN. Preferably using\n>> >> explain.depesz.com.\n>> >>\n>> >> regards\n>> >> Tomas\n>> >>\n>> >>\n>> >\n>> >\n>> >\n>> > --\n>> > Sent via pgsql-performance mailing list ([email protected])\n>> > To make changes to your subscription:\n>> > http://www.postgresql.org/mailpref/pgsql-performance\n>> >\n>>\n>\n>\n>\n",
"msg_date": "Wed, 8 Jun 2011 19:17:12 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "My current setting is 22G. According to some documentation, I want to\nset effective_cache_size to my OS disk cache + shared_buffers. In this\ncase, I have 4 quad-core processors with 512K cache (8G) and my\nshared_buffers is 7680M. Therefore my effective_cache_size should be\napproximately 16G? Most of our other etl processes are running fine,\nhowever I'm curious if I could see a significant performance boost by\nreducing the effective_cache_size.\n\n\nOn Wed, 2011-06-08 at 13:03 -0400, Tom Lane wrote:\n> Tony Capobianco <[email protected]> writes:\n> > Well, this ran much better. However, I'm not sure if it's because of\n> > set enable_nestloop = 0, or because I'm executing the query twice in a\n> > row, where previous results may be cached. I will try this setting in\n> > my code for when this process runs later today and see what the result\n> > is.\n> \n> If the performance differential holds up, you should look at adjusting\n> your cost parameters so that the planner isn't so wrong about which one\n> is faster. Hacking enable_nestloop is a band-aid, not something you\n> want to use in production.\n> \n> Looking at the values you gave earlier, I wonder whether the\n> effective_cache_size setting isn't unreasonably high. That's reducing\n> the estimated cost of accessing the large table via indexscans, and\n> I'm thinking it reduced it too much.\n> \n> \t\t\tregards, tom lane\n> \n\n\n",
"msg_date": "Wed, 08 Jun 2011 15:03:00 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "Tony Capobianco <[email protected]> wrote:\n \n> According to some documentation, I want to set\n> effective_cache_size to my OS disk cache + shared_buffers.\n \nThat seems reasonable, and is what has worked well for me.\n \n> In this case, I have 4 quad-core processors with 512K cache (8G)\n> and my shared_buffers is 7680M. Therefore my effective_cache_size\n> should be approximately 16G?\n \nI didn't follow that at all. Can you run `free` or `vmstat`? If\nso, go by what those say your cache size is.\n \n> Most of our other etl processes are running fine, however I'm\n> curious if I could see a significant performance boost by reducing\n> the effective_cache_size.\n \nSince it is an optimizer costing parameter and has no affect on\nmemory allocation, you can set it on a connection and run a query on\nthat connection to test the impact. Why wonder about it when you\ncan easily test it?\n \n-Kevin\n",
"msg_date": "Wed, 08 Jun 2011 14:30:50 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
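Kevin's "just test it" advice needs nothing more than a session-level SET, because effective_cache_size only feeds the planner's cost model and allocates no memory. A minimal sketch, with '16GB' purely as an illustrative value to compare against the current 22GB setting:

SET effective_cache_size = '16GB';
EXPLAIN ANALYZE
select o.emailcampaignid, count(memberid) opencnt
  from openactivity o, ecr_sents s
 where s.emailcampaignid = o.emailcampaignid
 group by o.emailcampaignid;
RESET effective_cache_size;

If the plan moves away from the big index scans at the lower value, that supports Tom's suspicion that the setting was too high.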
{
"msg_contents": "On Wed, Jun 8, 2011 at 12:03 PM, Tony Capobianco <[email protected]\n> wrote:\n\n> My current setting is 22G. According to some documentation, I want to\n> set effective_cache_size to my OS disk cache + shared_buffers. In this\n> case, I have 4 quad-core processors with 512K cache (8G) and my\n> shared_buffers is 7680M. Therefore my effective_cache_size should be\n> approximately 16G? Most of our other etl processes are running fine,\n> however I'm curious if I could see a significant performance boost by\n> reducing the effective_cache_size.\n>\n>\ndisk cache, not CPU memory cache. It will be some significant fraction of\ntotal RAM on the host. Incidentally, 16 * 512K cache = 8MB, not 8GB.\n\nhttp://en.wikipedia.org/wiki/CPU_cache\n\nOn Wed, Jun 8, 2011 at 12:03 PM, Tony Capobianco <[email protected]> wrote:\nMy current setting is 22G. According to some documentation, I want to\nset effective_cache_size to my OS disk cache + shared_buffers. In this\ncase, I have 4 quad-core processors with 512K cache (8G) and my\nshared_buffers is 7680M. Therefore my effective_cache_size should be\napproximately 16G? Most of our other etl processes are running fine,\nhowever I'm curious if I could see a significant performance boost by\nreducing the effective_cache_size.\ndisk cache, not CPU memory cache. It will be some significant fraction of total RAM on the host. Incidentally, 16 * 512K cache = 8MB, not 8GB.\nhttp://en.wikipedia.org/wiki/CPU_cache",
"msg_date": "Wed, 8 Jun 2011 12:38:42 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "Oooo...some bad math there. Thanks.\n\nOn Wed, 2011-06-08 at 12:38 -0700, Samuel Gendler wrote:\n> \n> \n> On Wed, Jun 8, 2011 at 12:03 PM, Tony Capobianco\n> <[email protected]> wrote:\n> My current setting is 22G. According to some documentation, I\n> want to\n> set effective_cache_size to my OS disk cache +\n> shared_buffers. In this\n> case, I have 4 quad-core processors with 512K cache (8G) and\n> my\n> shared_buffers is 7680M. Therefore my effective_cache_size\n> should be\n> approximately 16G? Most of our other etl processes are\n> running fine,\n> however I'm curious if I could see a significant performance\n> boost by\n> reducing the effective_cache_size.\n> \n> \n> \n> \n> \n> disk cache, not CPU memory cache. It will be some significant\n> fraction of total RAM on the host. Incidentally, 16 * 512K cache =\n> 8MB, not 8GB.\n> \n> \n> http://en.wikipedia.org/wiki/CPU_cache\n> \n> \n> \n> \n\n\n",
"msg_date": "Wed, 08 Jun 2011 15:55:17 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "> * Tony Capobianco ([email protected]) wrote:\n>> HashAggregate (cost=4391163.81..4391288.05 rows=9939 width=12)\n>> -> Hash Join (cost=14.78..4344767.23 rows=9279316 width=12)\n>> Hash Cond: (o.emailcampaignid = s.emailcampaignid)\n>> -> Seq Scan on openactivity o (cost=0.00..3529930.67\n>> rows=192540967 width=12)\n>> -> Hash (cost=8.79..8.79 rows=479 width=4)\n>> -> Seq Scan on ecr_sents s (cost=0.00..8.79 rows=479\n>> width=4)\n>> \n>> Yikes. Two sequential scans.\n> \n> Err, isn't that more-or-less exactly what you want here? The smaller\n> table is going to be hashed and then you'll traverse the bigger table\n> and bounce each row off the hash table. Have you tried actually running\n> this and seeing how long it takes? The bigger table doesn't look to be\n> *that* big, if your i/o subsystem is decent and you've got a lot of\n> memory available for kernel cacheing, should be quick.\n\nJust out of curiosity, is there any chance that this kind of query is\nspeeding up in 9.1 because of following changes?\n\n * Allow FULL OUTER JOIN to be implemented as a hash join, and allow\n either side of a LEFT OUTER JOIN or RIGHT OUTER JOIN to be hashed\n (Tom Lane)\n Previously FULL OUTER JOIN could only be implemented as a merge\n join, and LEFT OUTER JOIN and RIGHT OUTER JOIN could hash only the\n nullable side of the join. These changes provide additional query\n optimization possibilities.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Fri, 10 Jun 2011 11:21:43 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Just out of curiosity, is there any chance that this kind of query is\n> speeding up in 9.1 because of following changes?\n\n> * Allow FULL OUTER JOIN to be implemented as a hash join, and allow\n> either side of a LEFT OUTER JOIN or RIGHT OUTER JOIN to be hashed\n> (Tom Lane)\n\nThe given query wasn't an outer join, so this wouldn't affect it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jun 2011 22:25:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Oracle v. Postgres 9.0 query performance "
}
] |
[
{
"msg_contents": "Hi,\n\nWe are running postgresql database server on itanium box.\n\nDatabase - Postgresql 8.3.9\nOS - SUSE Linux 11 SP 3\nServer - Itanium (ia64)\n\nWe are trying to enable user login / logout information in postgresql log\nfile, to do this we enabled the following in postgresql.conf,\n\nlog_connections = on\n\nlog_disconnections = on\n\nAfter this we are getting both database user login / logout information as\nwell as getting SQL queries executed by the application which increase the\nlog file size.\n\nPlease guide me how to get only the database user connection (without SQL\nstatements) information in logfile.\n\nThank you for spending your valuable time to read this.\n\nRegards\n\nMuthu\n\nHi,We are running postgresql database server on itanium box.Database - Postgresql 8.3.9OS - SUSE Linux 11 SP 3Server - Itanium (ia64)We are trying to enable user login / logout information in postgresql log file, to do this we enabled the following in postgresql.conf, \n\nlog_connections = \non\n\nlog_disconnections = on\nAfter this we are getting both database user login / logout information as well as getting SQL queries executed by the application which increase the log file size.Please guide me how to get only the database user connection (without SQL statements) information in logfile.\nThank you for spending your valuable time to read this.RegardsMuthu",
"msg_date": "Thu, 9 Jun 2011 13:20:32 +0530",
"msg_from": "muthu krishnan <[email protected]>",
"msg_from_op": true,
"msg_subject": "enable database user login/logout information"
},
{
"msg_contents": "On Thu, Jun 9, 2011 at 10:50, muthu krishnan\n<[email protected]> wrote:\n>\n> Please guide me how to get only the database user connection (without SQL\n> statements) information in logfile.\n\nlog_statement = none\n",
"msg_date": "Thu, 9 Jun 2011 10:54:55 +0300",
"msg_from": "Alexander Shulgin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: enable database user login/logout information"
}
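For completeness, the postgresql.conf combination that logs connections and disconnections without the statements themselves is simply (a sketch of the relevant lines only):

log_connections = on
log_disconnections = on
log_statement = none

Since 'none' is already the default for log_statement, statements showing up in the log may instead be coming from log_min_duration_statement (anything other than -1) or log_duration = on, so those settings are worth checking too.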
] |
[
{
"msg_contents": "Hi,\n\nWe have installed postgresql on itanium server (HP rx7640), only on this\nserver we are getting the following error in /var/log/messages and also the\npostgres performance is very slow.\n\nError:\n\n*Jun 9 04:45:42 kernel: postmaster(31965): floating-point assist fault at\nip 40000000003dad71, isr 0000020000000008*\n\nServer details:\n\nDatabase - Postgresql 8.3.9\nOS - SUSE Linux 11 SP1\nItanium CPU ( ia64 bit architecture)\n\nkindly let me know your valuable suggestions to resolve this issue.\n\nRegards,\n\nMuthu\n\nHi,We have installed postgresql on itanium server (HP rx7640), only on this server we are getting the following error in /var/log/messages and also the postgres performance is very slow.Error: Jun 9 04:45:42 kernel: postmaster(31965): floating-point assist fault at ip 40000000003dad71, isr 0000020000000008\nServer details:Database - Postgresql 8.3.9OS - SUSE Linux 11 SP1Itanium CPU ( ia64 bit architecture) kindly let me know your valuable suggestions to resolve this issue.Regards,Muthu",
"msg_date": "Thu, 9 Jun 2011 14:50:33 +0530",
"msg_from": "muthu krishnan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql on itanium server"
},
{
"msg_contents": "Thursday, June 9, 2011, 11:20:33 AM you wrote:\n\n> *Jun 9 04:45:42 kernel: postmaster(31965): floating-point assist fault at\n> ip 40000000003dad71, isr 0000020000000008*\n\nA quick search for 'floating-point assist fault' reveals this article:\n\nhttp://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801?ciid=62080055abe021100055abe02110275d6e10RCRD\n\nIn short: The message is harmless, and should not have any influence on \npostgres performance, except if you're getting millions of them...\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Thu, 9 Jun 2011 11:30:19 +0200",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on itanium server"
},
{
"msg_contents": "Dear Jochen,\n\nThank you for suggesting the valuable URL, we are getting 3 \"floating point\nassist fault\" error for every second, will it impact the performance for\npostgresql?\n\nIs there any option to turn on \"flush to zero mode\" in itanium cpu while\ncompiling postgresql from source?\n\nThank you for your valuable time to read this.\n\nRegards,\n\nMuthu\n\nOn Thu, Jun 9, 2011 at 3:00 PM, Jochen Erwied <\[email protected]> wrote:\n\n> Thursday, June 9, 2011, 11:20:33 AM you wrote:\n>\n> > *Jun 9 04:45:42 kernel: postmaster(31965): floating-point assist fault\n> at\n> > ip 40000000003dad71, isr 0000020000000008*\n>\n> A quick search for 'floating-point assist fault' reveals this article:\n>\n>\n> http://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801?ciid=62080055abe021100055abe02110275d6e10RCRD\n>\n> In short: The message is harmless, and should not have any influence on\n> postgres performance, except if you're getting millions of them...\n>\n> --\n> Jochen Erwied | home: [email protected] +49-208-38800-18, FAX:\n> -19\n> Sauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX:\n> -50\n> D-45470 Muelheim | mobile: [email protected]\n> +49-173-5404164\n>\n>\n\nDear Jochen,Thank you for suggesting the valuable URL, we are getting 3 \"floating point assist fault\" error for every second, will it impact the performance for postgresql?Is there any option to turn on \"flush to zero mode\" in itanium cpu while compiling postgresql from source?\nThank you for your valuable time to read this.Regards,MuthuOn Thu, Jun 9, 2011 at 3:00 PM, Jochen Erwied <[email protected]> wrote:\nThursday, June 9, 2011, 11:20:33 AM you wrote:\n\n> *Jun 9 04:45:42 kernel: postmaster(31965): floating-point assist fault at\n> ip 40000000003dad71, isr 0000020000000008*\n\nA quick search for 'floating-point assist fault' reveals this article:\n\nhttp://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801?ciid=62080055abe021100055abe02110275d6e10RCRD\n\nIn short: The message is harmless, and should not have any influence on\npostgres performance, except if you're getting millions of them...\n\n--\nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164",
"msg_date": "Thu, 9 Jun 2011 15:33:12 +0530",
"msg_from": "muthu krishnan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql on itanium server"
},
{
"msg_contents": "On Thu, Jun 9, 2011 at 13:03, muthu krishnan\n<[email protected]> wrote:\n> Thank you for suggesting the valuable URL, we are getting 3 \"floating point\n> assist fault\" error for every second, will it impact the performance for\n> postgresql?\n\nProbably.\n\nThe kernel throttles these messages, so you're probably performing\nmany more of these calculations than the number of messages.\n\n> Is there any option to turn on \"flush to zero mode\" in itanium cpu while\n> compiling postgresql from source?\n\nAs the URL mentions, you can build with CFLAGS=-ffast-math, that\nshould work for PostgreSQL too.\n\nBut since you know you're operating with denormal numbers, you WILL\nget different results to queries. Whether that's a problem for you\ndepends on your application. You could start getting division by zero\nerrors for instance.\n\nRegards,\nMarti\n",
"msg_date": "Thu, 9 Jun 2011 13:45:06 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on itanium server"
},
{
"msg_contents": "Thursday, June 9, 2011, 12:03:12 PM you wrote:\n\n> Is there any option to turn on \"flush to zero mode\" in itanium cpu while\n> compiling postgresql from source?\n\nconfigure will complain when specifying './configure CFLAGS=-ffast-math'.\n\nmake won't, so a 'make CFLAGS='-O2 -Wall -ffast-math' after doing a normal\n'./configure' should do the trick.\n\nBut maybe one of the experts should explain if this will in fact work...\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Thu, 9 Jun 2011 12:46:57 +0200",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on itanium server"
},
{
"msg_contents": "Thursday, June 9, 2011, 12:45:06 PM you wrote:\n\n> As the URL mentions, you can build with CFLAGS=-ffast-math, that\n> should work for PostgreSQL too.\n\nI just tried this with the source for 9.0.4, at least with this version the\nbuild will not complete since there is a check in\nsrc/backend/utils/adt/date.c throwing an error if FAST_MATH is active.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Thu, 9 Jun 2011 12:53:23 +0200",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on itanium server"
},
{
"msg_contents": "Jochen Erwied <[email protected]> writes:\n> Thursday, June 9, 2011, 12:45:06 PM you wrote:\n>> As the URL mentions, you can build with CFLAGS=-ffast-math, that\n>> should work for PostgreSQL too.\n\n> I just tried this with the source for 9.0.4, at least with this version the\n> build will not complete since there is a check in\n> src/backend/utils/adt/date.c throwing an error if FAST_MATH is active.\n\nYeah. See\nhttp://archives.postgresql.org/pgsql-bugs/2002-09/msg00169.php\nand following discussions, which eventually led to adding the #error.\n\nNow this was all done in regards to PG's original floating-point\ntimestamp implementation. It's possible that in an integer-datetimes\nbuild (which is now the default) we don't need to forbid -ffast-math to\nprevent strange datetime results. But nobody's done the work to prove\nthat, because there isn't any particularly good reason to enable\n-ffast-math in a database in the first place. (Other than coping with\nbrain-dead platforms, I guess.)\n\nHowever ... I'm not sure I believe that this is related to the OP's\nproblem anyway. Postgres doesn't normally work with any denormalized\nnumbers, so the messages he's seeing probably stem from some other sort\nof shortcoming in the hardware FP support. It would be interesting to\nsee specific examples of SQL operations that trigger the kernel message.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Jun 2011 12:20:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on itanium server "
}
] |
[
{
"msg_contents": "Hi, everyone. Some people with whom I'm working, and who have an 8.3 \nsystem running under Windows, asked me to look into their performance \nissues. They have a 1.5 GB database with a few dozen tables, and \n500,000 records at most. They said that their system has been running \nfor a few days, doing lots of INSERTs and SELECTs, and that the \nperformance has gotten worse and worse over time. (I don't have numbers \nto share.) It's true that the computer is being used by other processes \nas part of a black-box manufacturing system, but those are pretty \nconstant in CPU, disk, and memory needs, so I don't think that we would \nexpect to see degradation over time as a result of that work.\n\nI looked at the system, and found that we need to change \neffective_cache_size, such that it'll match the \"system cache\" number in \nthe Windows performance monitor. So yes, we'll take care of that, and \nI expect to see some improvement.\n\nBut the really surprising thing to me was that autovacuum hadn't run at \nall in the last three days. I checked, and the \"autovacuum\" parameter \nwas set in postgresql.conf, and using \"show\" in psql shows me that it \nwas set. But when I looked at pg_stat_user_tables, there was no \nindication of autovacuum *ever* having run. We also fail to see any \nautovacuum processes in the Windows process listing.\n\nCould this be because we're only doing INSERTs and SELECTs? In such a \ncase, then we would never reach the threshold of modified tuples that \nautovacuum looks for, and thus it would never run. That would, by my \nreasoning, mean that we'll never tag dead tuples (which isn't a big deal \nif we're never deleting or updating rows), but also that we'll never run \nANALYZE as part of autovacuum. Which would mean that we'd be running \nwith out-of-date statistics.\n\nI ran a manual \"vacuum analyze\", by the way, and it's taking a really \nlong time (1.5 hours, as of this writing) to run, but it's clearly doing \nsomething. Moreover, when we went to check on our vacuum process after \nabout an hour, we saw that autovacuum had kicked in, and was now \nrunning. Could it be that our manual invocation of vacuum led to \nautovacuum running?\n\nI have a feeling that our solution is going to have to involve a cron \ntype of job, running vacuum at regular intervals (like in the bad old \ndays), because autovacuum won't get triggered. But hey, if anyone has \nany pointers to offer on this topic, I'd certainly appreciate it.\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n\n",
"msg_date": "Thu, 09 Jun 2011 18:24:16 +0300",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Triggering autovacuum"
},
{
"msg_contents": "Reuven M. Lerner wrote:\n> Could this be because we're only doing INSERTs and SELECTs? In such a \n> case, then we would never reach the threshold of modified tuples that \n> autovacuum looks for, and thus it would never run. That would, by my \n> reasoning, mean that we'll never tag dead tuples (which isn't a big \n> deal if we're never deleting or updating rows), but also that we'll \n> never run ANALYZE as part of autovacuum. Which would mean that we'd \n> be running with out-of-date statistics.\n\nThe computation for whether the auto-analyze portion of autovacuum runs \ntakes into account INSERT traffic, so the stats don't go too far out of \ndata on this style of workload. The one for the vacuum work only \nconsiders dead rows. So your case should be seeing regular entries for \nthe last auto-analyze, but possibly not for last auto-vacuum.\n\nEventually autovacuum will kick in anyway for transaction id wraparound, \nand that might be traumatic when it does happen. You might want to \nschedule periodic manual vacuum on these tables to at least have that \nhappen at a good time. Wraparound autovacuum has this bad habit of \nfinally kicking in only during periods of peak busy on the server.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 09 Jun 2011 12:52:06 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Triggering autovacuum"
},
{
"msg_contents": "Hi, Greg. Thanks for the quick and useful answer, even if it means that \nmy hopes for a quick fix have been dashed. I guess I'll need to do some \nactual monitoring, then...\n\nReuven\n\n",
"msg_date": "Sun, 12 Jun 2011 01:37:59 +0300",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Triggering autovacuum"
},
{
"msg_contents": "On Sat, Jun 11, 2011 at 4:37 PM, Reuven M. Lerner <[email protected]> wrote:\n> Hi, Greg. Thanks for the quick and useful answer, even if it means that my\n> hopes for a quick fix have been dashed. I guess I'll need to do some actual\n> monitoring, then...\n\nYou mention pg_stat_user_tables, what did the last_analyze column for\nthose tables say?\n",
"msg_date": "Sat, 11 Jun 2011 21:55:03 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Triggering autovacuum"
}
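A quick way to answer that, and to see at a glance whether autovacuum or autoanalyze have ever touched the tables, is the statistics view itself (this works on 8.3):

SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
 ORDER BY relname;

If every timestamp is NULL and n_live_tup stays at zero for tables that are clearly busy, it is also worth confirming that track_counts is on, since autovacuum depends on those counters.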
] |
[
{
"msg_contents": "I have a function in pgsql language, this function do some select to some\ntables for verify some conditions and then do one insert to a table with NO\nindex. Update are not performed in the function\n\nWhen 1 client connected postgres do 180 execution per second\nWith 2 clients connected postgres do 110 execution per second\nWith 3 clients connected postgres do 90 execution per second\n\nFinally with 6 connected clients postgres do 60 executions per second\n(totally 360 executions per second)\n\nWhile testing, I monitor disk, memory and CPU and not found any overload.\n\nI know that with this information you can figure out somethigns, but in\nnormal conditions, Is normal the degradation of performance per connection\nwhen connections are incremented?\nOr should I spect 180 in the first and something similar in the second\nconnection? Maybe 170?\n\n\nThe server is a dual xeon quad core with 16 GB of ram and a very fast\nstorage\nThe OS is a windows 2008 R2 x64\n\nThanks\n\nAnibal\n\n\n",
"msg_date": "Fri, 10 Jun 2011 07:29:14 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "how much postgres can scale up?"
},
{
"msg_contents": "> I have a function in pgsql language, this function do some select to some\n> tables for verify some conditions and then do one insert to a table with\n> NO\n> index. Update are not performed in the function\n>\n> When 1 client connected postgres do 180 execution per second\n> With 2 clients connected postgres do 110 execution per second\n> With 3 clients connected postgres do 90 execution per second\n>\n> Finally with 6 connected clients postgres do 60 executions per second\n> (totally 360 executions per second)\n>\n> While testing, I monitor disk, memory and CPU and not found any overload.\n\nThere's always a bottleneck - otherwise the system might run faster (and\nhit another bottleneck eventually). It might be CPU, I/O, memory, locking\nand maybe some less frequent things.\n\n> I know that with this information you can figure out somethigns, but in\n> normal conditions, Is normal the degradation of performance per connection\n> when connections are incremented?\n> Or should I spect 180 in the first and something similar in the second\n> connection? Maybe 170?\n>\n>\n> The server is a dual xeon quad core with 16 GB of ram and a very fast\n> storage\n> The OS is a windows 2008 R2 x64\n\nMight be, but we need more details about how the system works. On Linux\nI'd ask for output from 'iostat -x 1' and 'vmstat 1' but you're on Windows\nso there are probably other tools.\n\nWhat version of PostgreSQL is this? What are the basic config values\n(shared_buffers, work_mem, effective_cache_size, ...)? Have you done some\ntuning? There's a wiki page about this:\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nHave you tried to log slow queries? Maybe there's one query that makes the\nwhole workload slow? See this:\nhttp://wiki.postgresql.org/wiki/Logging_Difficult_Queries\n\nTomas\n\n",
"msg_date": "Fri, 10 Jun 2011 14:10:06 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: how much postgres can scale up?"
},
{
"msg_contents": "The version is Postgres 9.0\nYes, I setup the postgres.conf according to instructions in the \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n\nCool, I will check this\nhttp://wiki.postgresql.org/wiki/Logging_Difficult_Queries\n\nLooks like great starting point to find bottleneck\n\nBut so, Is possible in excellent conditions that two connections duplicate the quantity of transactions per second?\n\nThanks!\n\n\n-----Mensaje original-----\nDe: [email protected] [mailto:[email protected]] \nEnviado el: viernes, 10 de junio de 2011 08:10 a.m.\nPara: Anibal David Acosta\nCC: [email protected]\nAsunto: Re: [PERFORM] how much postgres can scale up?\n\n> I have a function in pgsql language, this function do some select to \n> some tables for verify some conditions and then do one insert to a \n> table with NO index. Update are not performed in the function\n>\n> When 1 client connected postgres do 180 execution per second With 2 \n> clients connected postgres do 110 execution per second With 3 clients \n> connected postgres do 90 execution per second\n>\n> Finally with 6 connected clients postgres do 60 executions per second \n> (totally 360 executions per second)\n>\n> While testing, I monitor disk, memory and CPU and not found any overload.\n\nThere's always a bottleneck - otherwise the system might run faster (and hit another bottleneck eventually). It might be CPU, I/O, memory, locking and maybe some less frequent things.\n\n> I know that with this information you can figure out somethigns, but \n> in normal conditions, Is normal the degradation of performance per \n> connection when connections are incremented?\n> Or should I spect 180 in the first and something similar in the second \n> connection? Maybe 170?\n>\n>\n> The server is a dual xeon quad core with 16 GB of ram and a very fast \n> storage The OS is a windows 2008 R2 x64\n\nMight be, but we need more details about how the system works. On Linux I'd ask for output from 'iostat -x 1' and 'vmstat 1' but you're on Windows so there are probably other tools.\n\nWhat version of PostgreSQL is this? What are the basic config values (shared_buffers, work_mem, effective_cache_size, ...)? Have you done some tuning? There's a wiki page about this:\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nHave you tried to log slow queries? Maybe there's one query that makes the whole workload slow? See this:\nhttp://wiki.postgresql.org/wiki/Logging_Difficult_Queries\n\nTomas\n\n",
"msg_date": "Fri, 10 Jun 2011 08:56:50 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how much postgres can scale up?"
},
{
"msg_contents": "On 06/10/2011 07:29 PM, Anibal David Acosta wrote:\n\n> I know that with this information you can figure out somethigns, but in\n> normal conditions, Is normal the degradation of performance per connection\n> when connections are incremented?\n\nWith most loads, you will find that the throughput per-worker decreases \nas you add workers. The overall throughput will usually increase with \nnumber of workers until you reach a certain \"sweet spot\" then decrease \nas you add more workers after that.\n\nWhere that sweet spot is depends on how much your queries rely on CPU vs \ndisk vs memory, your Pg version, how many disks you have, how fast they \nare and in what configuration they are in, what/how many CPUs you have, \nhow much RAM you have, how fast your RAM is, etc. There's no simple \nformula because it's so workload dependent.\n\nThe usual *very* rough rule of thumb given here is that your sweet spot \nshould be *vaguely* number of cpu cores + number of hard drives. That's \n*incredibly* rough; if you care you should benchmark it using your real \nworkload.\n\nIf you need lots and lots of clients then it may be beneficial to use a \nconnection pool like pgbouncer or PgPool-II so you don't have lots more \nconnections trying to do work at once than your hardware can cope with. \nHaving fewer connections doing work in the database at the same time can \nimprove overall performance.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 10 Jun 2011 21:01:43 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how much postgres can scale up?"
},
{
"msg_contents": "On 06/10/2011 08:56 PM, Anibal David Acosta wrote:\n> The version is Postgres 9.0\n> Yes, I setup the postgres.conf according to instructions in the\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n>\n> Cool, I will check this\n> http://wiki.postgresql.org/wiki/Logging_Difficult_Queries\n>\n> Looks like great starting point to find bottleneck\n>\n> But so, Is possible in excellent conditions that two connections duplicate the quantity of transactions per second?\n\nFor two connections, if you have most of the data cached in RAM or you \nhave lots of fast disks, then sure. For that matter, if they're \nsynchronized scans of the same table then the second transaction might \nperform even faster than the first one!\n\nThere are increasing overheads with transaction synchronization, etc \nwith number of connections, and they'll usually land up contending for \nsystem resources like RAM (for disk cache, work_mem, etc), disk I/O, and \nCPU time. So you won't generally get linear scaling with number of \nconnections.\n\nGreg Smith has done some excellent and detailed work on this. I highly \nrecommend reading his writing, and you should consider buying his recent \nbook \"PostgreSQL 9.0 High Performance\".\n\nSee also:\n\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\nThere have been lots of postgresql scaling benchmarks done over time, \ntoo. You'll find a lot of information if you look around the wiki and \nGoogle.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 10 Jun 2011 21:13:06 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how much postgres can scale up?"
},
{
"msg_contents": "Excellent.\n\nThanks I'll buy and read that book :)\n\n\nThanks!\n\n\n\n-----Mensaje original-----\nDe: Craig Ringer [mailto:[email protected]] \nEnviado el: viernes, 10 de junio de 2011 09:13 a.m.\nPara: Anibal David Acosta\nCC: [email protected]; [email protected]\nAsunto: Re: [PERFORM] how much postgres can scale up?\n\nOn 06/10/2011 08:56 PM, Anibal David Acosta wrote:\n> The version is Postgres 9.0\n> Yes, I setup the postgres.conf according to instructions in the \n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n>\n> Cool, I will check this\n> http://wiki.postgresql.org/wiki/Logging_Difficult_Queries\n>\n> Looks like great starting point to find bottleneck\n>\n> But so, Is possible in excellent conditions that two connections duplicate the quantity of transactions per second?\n\nFor two connections, if you have most of the data cached in RAM or you have lots of fast disks, then sure. For that matter, if they're synchronized scans of the same table then the second transaction might perform even faster than the first one!\n\nThere are increasing overheads with transaction synchronization, etc with number of connections, and they'll usually land up contending for system resources like RAM (for disk cache, work_mem, etc), disk I/O, and CPU time. So you won't generally get linear scaling with number of connections.\n\nGreg Smith has done some excellent and detailed work on this. I highly recommend reading his writing, and you should consider buying his recent book \"PostgreSQL 9.0 High Performance\".\n\nSee also:\n\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\nThere have been lots of postgresql scaling benchmarks done over time, too. You'll find a lot of information if you look around the wiki and Google.\n\n--\nCraig Ringer\n\n",
"msg_date": "Fri, 10 Jun 2011 09:19:22 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how much postgres can scale up?"
},
{
"msg_contents": "\n> When 1 client connected postgres do 180 execution per second\n\nThis is suspiciously close to 10.000 executions per minute.\n\nYou got 10k RPM disks ?\n\nHow's your IO system setup ?\n\nTry setting synchronous_commit to OFF in postgresql.conf and see if that \nchanges the results. That'll give useful information.\n",
"msg_date": "Fri, 10 Jun 2011 15:52:03 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how much postgres can scale up?"
},
{
"msg_contents": "On 06/10/2011 07:29 AM, Anibal David Acosta wrote:\n> When 1 client connected postgres do 180 execution per second\n> With 2 clients connected postgres do 110 execution per second\n> With 3 clients connected postgres do 90 execution per second\n>\n> Finally with 6 connected clients postgres do 60 executions per second\n> (totally 360 executions per second)\n>\n> While testing, I monitor disk, memory and CPU and not found any overload.\n>\n> I know that with this information you can figure out somethigns, but in\n> normal conditions, Is normal the degradation of performance per connection\n> when connections are incremented?\n> Or should I spect 180 in the first and something similar in the second\n> connection? Maybe 170?\n> \n\nLet's reformat this the way most people present it:\n\nclients tps\n1 180\n2 220\n3 270\n6 360\n\nIt's common for a single connection doing INSERT statements to hit a \nbottleneck based on how fast the drives used can spin. That's anywhere \nfrom 100 to 200 inserts/section, approximately, unless you have a \nbattery-backed write cache. See \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for more information.\n\nHowever, multiple clients can commit at once when a backlog occurs. So \nwhat you'll normally see in this situation is that the rate goes up \nfaster than this as clients are added. Here's a real sample, from a \nserver that's only physically capable of doing 120 commits/second on its \n7200 RPM drive:\n\nclients tps\n1 107\n2 109\n3 163\n4 216\n5 271\n6 325\n8 432\n10 530\n15 695\n\nThis is how it's supposed to scale even on basic hardware You didn't \nexplore this far enough to really know how well your scaling is working \nhere though. Since commit rates are limited by disk spin in this \nsituation, the situation for 1 to 5 clients is not really representative \nof how a large number of clients will end up working. As already \nmentioning, turning off synchronous_commit should give you an \ninteresting alternate set of numbers.\n\nIt's also possible there may be something wrong with whatever client \nlogic you are using here. Something about the way you've written it may \nbe acquiring a lock that blocks other clients from executing efficiently \nfor example. I'd suggest turning on log_lock_waits and setting \ndeadlock_timeout to a small number, which should show you some extra \nlogging in situations where people are waiting for locks. Running some \nqueries to look at the lock data such as the examples at \nhttp://wiki.postgresql.org/wiki/Lock_Monitoring might be helpful too.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 10 Jun 2011 12:49:38 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how much postgres can scale up?"
},
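For reference, a minimal form of the kind of lock-monitoring query Greg is pointing at, written against 9.0 (where pg_stat_activity still exposes procpid and current_query), looks like this; it only catches waits on transaction ids, so the fuller variants on the wiki page are still worth using:

SELECT bl.pid           AS blocked_pid,
       a.current_query  AS blocked_query,
       kl.pid           AS blocking_pid,
       ka.current_query AS blocking_query
  FROM pg_locks bl
  JOIN pg_stat_activity a  ON a.procpid = bl.pid
  JOIN pg_locks kl         ON kl.transactionid = bl.transactionid
                          AND kl.pid <> bl.pid
  JOIN pg_stat_activity ka ON ka.procpid = kl.pid
 WHERE NOT bl.granted;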
{
"msg_contents": "Ok, I think I found possible bottleneck.\n\nThe function that do some selects run really fast, more than 1.000\nexecutions per seconds\nBut the whole thing slowdown when update of one record in a very very small\ntable happed\nI test with insert instead of update and same behavior occur.\n\nSo, the only way to go up is turning off synchronous_commit, but it can be\ndangerous.\n\nAny, thanks a lot for your time.\n\nAnibal\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]] En nombre de Greg Smith\nEnviado el: viernes, 10 de junio de 2011 12:50 p.m.\nPara: [email protected]\nAsunto: Re: [PERFORM] how much postgres can scale up?\n\nOn 06/10/2011 07:29 AM, Anibal David Acosta wrote:\n> When 1 client connected postgres do 180 execution per second With 2 \n> clients connected postgres do 110 execution per second With 3 clients \n> connected postgres do 90 execution per second\n>\n> Finally with 6 connected clients postgres do 60 executions per second \n> (totally 360 executions per second)\n>\n> While testing, I monitor disk, memory and CPU and not found any overload.\n>\n> I know that with this information you can figure out somethigns, but \n> in normal conditions, Is normal the degradation of performance per \n> connection when connections are incremented?\n> Or should I spect 180 in the first and something similar in the second \n> connection? Maybe 170?\n> \n\nLet's reformat this the way most people present it:\n\nclients tps\n1 180\n2 220\n3 270\n6 360\n\nIt's common for a single connection doing INSERT statements to hit a\nbottleneck based on how fast the drives used can spin. That's anywhere from\n100 to 200 inserts/section, approximately, unless you have a battery-backed\nwrite cache. See http://wiki.postgresql.org/wiki/Reliable_Writes for more\ninformation.\n\nHowever, multiple clients can commit at once when a backlog occurs. So what\nyou'll normally see in this situation is that the rate goes up faster than\nthis as clients are added. Here's a real sample, from a server that's only\nphysically capable of doing 120 commits/second on its\n7200 RPM drive:\n\nclients tps\n1 107\n2 109\n3 163\n4 216\n5 271\n6 325\n8 432\n10 530\n15 695\n\nThis is how it's supposed to scale even on basic hardware You didn't\nexplore this far enough to really know how well your scaling is working here\nthough. Since commit rates are limited by disk spin in this situation, the\nsituation for 1 to 5 clients is not really representative of how a large\nnumber of clients will end up working. As already mentioning, turning off\nsynchronous_commit should give you an interesting alternate set of numbers.\n\nIt's also possible there may be something wrong with whatever client logic\nyou are using here. Something about the way you've written it may be\nacquiring a lock that blocks other clients from executing efficiently for\nexample. I'd suggest turning on log_lock_waits and setting deadlock_timeout\nto a small number, which should show you some extra logging in situations\nwhere people are waiting for locks. 
Running some queries to look at the\nlock data such as the examples at\nhttp://wiki.postgresql.org/wiki/Lock_Monitoring might be helpful too.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 10 Jun 2011 14:16:33 -0400",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how much postgres can scale up?"
},
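synchronous_commit does not have to be turned off globally; since 8.3 it can be disabled for just the transactions that call the logging function, which limits the exposure to losing the last few moments of those commits after a crash (never to corruption). A sketch, where record_event and app_user are hypothetical stand-ins for the function and role in this setup:

BEGIN;
SET LOCAL synchronous_commit TO OFF;  -- affects only this transaction's commit
SELECT record_event(42);
COMMIT;

-- or, for every session of the application role:
ALTER ROLE app_user SET synchronous_commit = off;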
{
"msg_contents": "Greg's book is highly recommended, and in my opinion a \"must\" for anyone doing serious work with Postgres.\r\n\r\n> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-performance-\r\n> [email protected]] On Behalf Of Anibal David Acosta\r\n> Sent: Friday, June 10, 2011 7:19 AM\r\n> To: 'Craig Ringer'\r\n> Cc: [email protected]; [email protected]\r\n> Subject: Re: [PERFORM] how much postgres can scale up?\r\n> \r\n> Excellent.\r\n> \r\n> Thanks I'll buy and read that book :)\r\n> \r\n> \r\n> Thanks!\r\n> \r\n> \r\n> \r\n> -----Mensaje original-----\r\n> De: Craig Ringer [mailto:[email protected]]\r\n> Enviado el: viernes, 10 de junio de 2011 09:13 a.m.\r\n> Para: Anibal David Acosta\r\n> CC: [email protected]; [email protected]\r\n> Asunto: Re: [PERFORM] how much postgres can scale up?\r\n> \r\n> On 06/10/2011 08:56 PM, Anibal David Acosta wrote:\r\n> > The version is Postgres 9.0\r\n> > Yes, I setup the postgres.conf according to instructions in the\r\n> > http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\r\n> >\r\n> >\r\n> > Cool, I will check this\r\n> > http://wiki.postgresql.org/wiki/Logging_Difficult_Queries\r\n> >\r\n> > Looks like great starting point to find bottleneck\r\n> >\r\n> > But so, Is possible in excellent conditions that two connections\r\n> duplicate the quantity of transactions per second?\r\n> \r\n> For two connections, if you have most of the data cached in RAM or you\r\n> have lots of fast disks, then sure. For that matter, if they're\r\n> synchronized scans of the same table then the second transaction might\r\n> perform even faster than the first one!\r\n> \r\n> There are increasing overheads with transaction synchronization, etc\r\n> with number of connections, and they'll usually land up contending for\r\n> system resources like RAM (for disk cache, work_mem, etc), disk I/O,\r\n> and CPU time. So you won't generally get linear scaling with number of\r\n> connections.\r\n> \r\n> Greg Smith has done some excellent and detailed work on this. I highly\r\n> recommend reading his writing, and you should consider buying his\r\n> recent book \"PostgreSQL 9.0 High Performance\".\r\n> \r\n> See also:\r\n> \r\n> http://wiki.postgresql.org/wiki/Performance_Optimization\r\n> \r\n> There have been lots of postgresql scaling benchmarks done over time,\r\n> too. You'll find a lot of information if you look around the wiki and\r\n> Google.\r\n> \r\n> --\r\n> Craig Ringer\r\n> \r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list (pgsql-\r\n> [email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n",
"msg_date": "Sun, 12 Jun 2011 09:03:18 -0600",
"msg_from": "\"Benjamin Krajmalnik\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how much postgres can scale up?"
}
] |
[
{
"msg_contents": "Hi,\n\nis there a way to change the sample size for statistics (that analyze\ngathers)?\nIt is said to be 10%. i would like to raise that, because we are getting bas\nestimations for n_distinct.\n\nCheers,\n\nWBL\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nHi,is there a way to change the sample size for statistics (that analyze gathers)?It is said to be 10%. i would like to raise that, because we are getting bas estimations for n_distinct.Cheers,WBL\n-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Fri, 10 Jun 2011 14:15:38 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "[PERFORM] change sample size for statistics"
},
{
"msg_contents": "On 6/10/11 5:15 AM, Willy-Bas Loos wrote:\n> Hi,\n> \n> is there a way to change the sample size for statistics (that analyze\n> gathers)?\n> It is said to be 10%. i would like to raise that, because we are getting bas\n> estimations for n_distinct.\n\nIt's not 10%. We use a fixed sample size, which is configurable on the\nsystem, table, or column basis.\n\nSome reading (read all these pages to understand what you're doing):\nhttp://www.postgresql.org/docs/9.0/static/planner-stats.html\nhttp://www.postgresql.org/docs/9.0/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-OTHER\nhttp://www.postgresql.org/docs/9.0/static/planner-stats-details.html\nhttp://www.postgresql.org/docs/9.0/static/sql-altertable.html\n(scroll down to \"set storage\" on that last page)\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 10 Jun 2011 12:58:19 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: change sample size for statistics"
},
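In short, the knob that controls the ANALYZE sample is the statistics target: ANALYZE reads roughly 300 rows times the target, so raising it per column (or globally) enlarges the sample. A sketch with hypothetical names, valid on 8.4:

ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 1000;
ANALYZE mytable;

-- or session-wide before a manual ANALYZE
-- (put it in postgresql.conf to make it the default everywhere):
SET default_statistics_target = 500;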
{
"msg_contents": "[ Sorry, forgot to cc list ]\n\n>> It is said to be 10%. i would like to raise that, because we are getting bas\n>> estimations for n_distinct.\n>\n> More to the point, the estimator we use is going to be biased for many\n> ( probably most ) distributions no matter how large your sample size\n> is.\n>\n> If you need to fix ndistinct, a better approach may be to do it manually.\n>\n> Best,\n> Nathan\n>\n",
"msg_date": "Fri, 10 Jun 2011 13:07:19 -0700",
"msg_from": "Nathan Boley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: change sample size for statistics"
},
{
"msg_contents": "On Fri, Jun 10, 2011 at 9:58 PM, Josh Berkus <[email protected]> wrote:\n\n> It's not 10%. We use a fixed sample size, which is configurable on the\n> system, table, or column basis.\n>\n\nIt seems that you are referring to \"alter column set statistics\" and\n\"default_statistics_target\", which are the number of percentiles in the\nhistogram (and MCV's) .\nI mean the number of records that are scanned by analyze to come to the\nstatistics for the planner, especially n_disctict.\n\n\nOn Fri, Jun 10, 2011 at 10:06 PM, Nathan Boley <[email protected]> wrote:\n\n> If you need to fix ndistinct, a better approach may be to do it manually.\n>\n\nThat would be nice, but how do i prevent the analyzer to overwrite\nn_distinct without blocking the generation of new histogram values etc for\nthat column?\n\nWe use version 8.4 at the moment (on debian squeeze).\n\nCheers,\n\nWBL\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nOn Fri, Jun 10, 2011 at 9:58 PM, Josh Berkus <[email protected]> wrote:\nIt's not 10%. We use a fixed sample size, which is configurable on the\nsystem, table, or column basis.It seems that you are referring to \"alter column set statistics\" and \"default_statistics_target\", which are the number of percentiles in the histogram (and MCV's) .\nI mean the number of records that are scanned by analyze to come to the statistics for the planner, especially n_disctict.On Fri, Jun 10, 2011 at 10:06 PM, Nathan Boley <[email protected]> wrote:\n\nIf you need to fix ndistinct, a better approach may be to do it manually.That would be nice, but how do i prevent the analyzer to overwrite n_distinct without blocking the generation of new histogram values etc for that column?\nWe use version 8.4 at the moment (on debian squeeze).Cheers,WBL-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Tue, 14 Jun 2011 00:33:59 +0200",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] change sample size for statistics"
},
{
"msg_contents": "On Mon, Jun 13, 2011 at 6:33 PM, Willy-Bas Loos <[email protected]> wrote:\n> On Fri, Jun 10, 2011 at 9:58 PM, Josh Berkus <[email protected]> wrote:\n>>\n>> It's not 10%. We use a fixed sample size, which is configurable on the\n>> system, table, or column basis.\n>\n> It seems that you are referring to \"alter column set statistics\" and\n> \"default_statistics_target\", which are the number of percentiles in the\n> histogram (and MCV's) .\n> I mean the number of records that are scanned by analyze to come to the\n> statistics for the planner, especially n_disctict.\n\nIn 9.0+ you can do ALTER TABLE .. ALTER COLUMN .. SET (n_distinct = ...);\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 28 Jun 2011 22:55:45 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] change sample size for statistics"
}
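Spelled out, on 9.0 or later that looks like the lines below (names are illustrative); the override is stored as a per-column option and is used instead of the computed estimate by subsequent ANALYZE runs, while histograms and MCV lists keep being rebuilt as usual:

ALTER TABLE mytable ALTER COLUMN mycolumn SET (n_distinct = 5000);
-- or as a fraction of the row count, following the pg_stats convention:
ALTER TABLE mytable ALTER COLUMN mycolumn SET (n_distinct = -0.05);
ANALYZE mytable;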
] |
[
{
"msg_contents": "Hi performance gurus,\n\nOne of the reasons I prefer PostgreSQL is because it does not implement\nhints. However I have a situation which seems like I am forced to use a\nhint-like statement:\n\nI have 2 tables in PostgreSQL 9.0:\n\ntcpsessions - about 4 Mrows in lab, hundreds of Mrows in production\nprimary key(detectorid, id)\n\ntcpsessiondata - about 2-5 times bigger than tcpsessions\nForeign key(detectorid, sessionid) References tcpsessions(detectorid,id)\nThere is an index on (detectorid, sessionid)\n\nFor completeness tcpsessiondata is actually partitioned according to the\nofficial documentation but I will save you the details if that is not\nnecessary. For the purpose of this message, all the data will be available\nin one child table: tcpsessiondata_default\n\nWhen I run the following simple query:\n\nselect\n (createdtime / 60000000000) as timegroup,\n (sum(datafromsource)+sum(datafromdestination)) as numbytes,\n (sum(packetsfromsource)+sum(packetsfromdestination)) as numpackets\nfrom\n tcpsessiondata SD, tcpsessions SS\nwhere\n SD.detectorid = SS.detectorid\n and SD.sessionid = SS.id\n and SD.detectorid = 1\n and SD.sessionid >= 1001000000000::INT8 and SD.sessionid <=\n2001000000000::INT8\ngroup by\n timegroup\norder by\n timegroup asc\n\nI get the following plan:\n\"Sort (cost=259126.13..259126.63 rows=200 width=32) (actual\ntime=32526.762..32526.781 rows=20 loops=1)\"\n\" Output: ((sd.createdtime / 60000000000::bigint)),\n(((sum(sd.datafromsource) + sum(sd.datafromdestination)) /\n1048576::numeric)), ((sum(sd.packetsfromsource) +\nsum(sd.packetsfromdestination)))\"\n\" Sort Key: ((sd.createdtime / 60000000000::bigint))\"\n\" Sort Method: quicksort Memory: 26kB\"\n\" -> HashAggregate (cost=259112.49..259118.49 rows=200 width=32) (actual\ntime=32526.657..32526.700 rows=20 loops=1)\"\n\" Output: ((sd.createdtime / 60000000000::bigint)),\n((sum(sd.datafromsource) + sum(sd.datafromdestination)) / 1048576::numeric),\n(sum(sd.packetsfromsource) + sum(sd.packetsfromdestination))\"\n\" -> Hash Join (cost=126553.43..252603.29 rows=520736 width=32)\n(actual time=22400.430..31291.838 rows=570100 loops=1)\"\n\" Output: sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\n(sd.createdtime / 60000000000::bigint)\"\n\" Hash Cond: (sd.sessionid = ss.id)\"\n\" -> Append (cost=0.00..100246.89 rows=520736 width=42)\n(actual time=2382.160..6226.906 rows=570100 loops=1)\"\n\" -> Seq Scan on appqosdata.tcpsessiondata sd\n(cost=0.00..18.65 rows=1 width=42) (actual time=0.002..0.002 rows=0\nloops=1)\"\n\" Output: sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\nsd.detectorid, sd.sessionid\"\n\" Filter: ((sd.sessionid >= 1001000000000::bigint)\nAND (sd.sessionid <= 2001000000000::bigint) AND (sd.detectorid = 1))\"\n\" -> Bitmap Heap Scan on\nappqosdata.tcpsessiondata_default sd (cost=11001.37..100228.24 rows=520735\nwidth=42) (actual time=2382.154..5278.319 rows=570100 loops=1)\"\n\" Output: sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\nsd.detectorid, sd.sessionid\"\n\" Recheck Cond: ((sd.detectorid = 1) AND\n(sd.sessionid >= 1001000000000::bigint) AND (sd.sessionid <=\n2001000000000::bigint))\"\n\" -> Bitmap Index Scan on\nidx_tcpsessiondata_default_detectoridandsessionid (cost=0.00..10871.19\nrows=520735 width=0) (actual time=2351.865..2351.865 rows=574663 loops=1)\"\n\" Index Cond: ((sd.detectorid = 1) 
AND\n(sd.sessionid >= 1001000000000::bigint) AND (sd.sessionid <=\n2001000000000::bigint))\"\n\" -> Hash (cost=72340.48..72340.48 rows=3628356 width=10)\n(actual time=19878.891..19878.891 rows=3632586 loops=1)\"\n\" Output: ss.detectorid, ss.id\"\n\" Buckets: 8192 Batches: 64 Memory Usage: 2687kB\"\n\" -> Seq Scan on appqosdata.tcpsessions ss\n(cost=0.00..72340.48 rows=3628356 width=10) (actual time=627.164..14586.202\nrows=3632586 loops=1)\"\n\" Output: ss.detectorid, ss.id\"\n\" Filter: (ss.detectorid = 1)\"\n\"Total runtime: 32543.224 ms\"\n\n\nAs we can see the planner decides to go for an index scan on\ntcpsessiondata_default (as expected) and for a seq scan on tcpsessions.\nHowever if I add the following ugly condition to my query:\n and SS.detectorid = 1\n and SS.id >= 1001000000000::INT8 and SS.id <= 2001000000000::INT8\nso that the full query now becomes:\n\nselect\n (createdtime / 60000000000) as timegroup,\n (sum(datafromsource)+sum(datafromdestination)) as numbytes,\n (sum(packetsfromsource)+sum(packetsfromdestination)) as numpackets\nfrom\n tcpsessiondata SD, tcpsessions SS\nwhere\n SD.detectorid = SS.detectorid\n and SD.sessionid = SS.id\n and SD.detectorid = 1\n and SD.sessionid >= 1001000000000::INT8 and SD.sessionid <=\n2001000000000::INT8\n and SS.detectorid = 1\n and SS.id >= 1001000000000::INT8 and SS.id <= 2001000000000::INT8\ngroup by\n timegroup\norder by\n timegroup asc\n\nwell, now I have an index scan on tcpsessions as well and running time is 3\ntimes less than the previous one:\n\n\"Sort (cost=157312.59..157313.09 rows=200 width=32) (actual\ntime=9682.748..9682.764 rows=20 loops=1)\"\n\" Output: ((sd.createdtime / 60000000000::bigint)),\n(((sum(sd.datafromsource) + sum(sd.datafromdestination)) /\n1048576::numeric)), ((sum(sd.packetsfromsource) +\nsum(sd.packetsfromdestination)))\"\n\" Sort Key: ((sd.createdtime / 60000000000::bigint))\"\n\" Sort Method: quicksort Memory: 26kB\"\n\" -> HashAggregate (cost=157298.94..157304.94 rows=200 width=32) (actual\ntime=9682.649..9682.692 rows=20 loops=1)\"\n\" Output: ((sd.createdtime / 60000000000::bigint)),\n((sum(sd.datafromsource) + sum(sd.datafromdestination)) / 1048576::numeric),\n(sum(sd.packetsfromsource) + sum(sd.packetsfromdestination))\"\n\" -> Hash Join (cost=32934.67..150744.28 rows=524373 width=32)\n(actual time=3695.016..8370.629 rows=570100 loops=1)\"\n\" Output: sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\n(sd.createdtime / 60000000000::bigint)\"\n\" Hash Cond: (sd.sessionid = ss.id)\"\n\" -> Append (cost=0.00..100948.71 rows=524373 width=42)\n(actual time=2318.568..4799.985 rows=570100 loops=1)\"\n\" -> Seq Scan on appqosdata.tcpsessiondata sd\n(cost=0.00..18.65 rows=1 width=42) (actual time=0.001..0.001 rows=0\nloops=1)\"\n\" Output: sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\nsd.detectorid, sd.sessionid\"\n\" Filter: ((sd.sessionid >= 1001000000000::bigint)\nAND (sd.sessionid <= 2001000000000::bigint) AND (sd.detectorid = 1))\"\n\" -> Bitmap Heap Scan on\nappqosdata.tcpsessiondata_default sd (cost=11080.05..100930.06 rows=524372\nwidth=42) (actual time=2318.563..3789.844 rows=570100 loops=1)\"\n\" Output: sd.createdtime, sd.datafromsource,\nsd.datafromdestination, sd.packetsfromsource, sd.packetsfromdestination,\nsd.detectorid, sd.sessionid\"\n\" Recheck Cond: ((sd.detectorid = 1) AND\n(sd.sessionid >= 1001000000000::bigint) AND (sd.sessionid <=\n2001000000000::bigint))\"\n\" 
-> Bitmap Index Scan on\nidx_tcpsessiondata_default_detectoridandsessionid (cost=0.00..10948.96\nrows=524372 width=0) (actual time=2305.322..2305.322 rows=574663 loops=1)\"\n\" Index Cond: ((sd.detectorid = 1) AND\n(sd.sessionid >= 1001000000000::bigint) AND (sd.sessionid <=\n2001000000000::bigint))\"\n\" -> Hash (cost=30159.60..30159.60 rows=185726 width=10)\n(actual time=1345.307..1345.307 rows=194971 loops=1)\"\n\" Output: ss.detectorid, ss.id\"\n\" Buckets: 8192 Batches: 4 Memory Usage: 2297kB\"\n\" -> Bitmap Heap Scan on appqosdata.tcpsessions ss\n(cost=3407.46..30159.60 rows=185726 width=10) (actual time=483.572..1069.292\nrows=194971 loops=1)\"\n\" Output: ss.detectorid, ss.id\"\n\" Recheck Cond: ((ss.id >= 1001000000000::bigint)\nAND (ss.id <= 2001000000000::bigint))\"\n\" Filter: (ss.detectorid = 1)\"\n\" -> Bitmap Index Scan on idx_tcpsessions_id\n(cost=0.00..3361.02 rows=201751 width=0) (actual time=451.242..451.242\nrows=219103 loops=1)\"\n\" Index Cond: ((ss.id >=\n1001000000000::bigint) AND (ss.id <= 2001000000000::bigint))\"\n\"Total runtime: 9682.905 ms\"\n\n\nLet me also add that if I remove the conditions on SD but keep the\nconditions on SS, then I get an index scan on tcpsessions BUT a seq scan on\ntcpsessiondata.\n\nLet's now suppose that the index scan on both tables is the best choice, as\nthe planner itself selects it in one of the 3 cases. (It is also the faster\nplan, as we extract only 200 000 rows out of 4 million in this example.) But\nI am really surprised to see that the planner needs me to explicitly specify\nthe same condition twice like this:\n\n SD.detectorid = SS.detectorid\n and SD.sessionid = SS.id\n and SD.detectorid = 1\n and SD.sessionid >= 1001000000000::INT8 and SD.sessionid <=\n2001000000000::INT8\n and SS.detectorid = 1\n and SS.id >= 1001000000000::INT8 and SS.id <= 2001000000000::INT8\n\nin order to use the primary key on SS, even if it is absolutely clear that\n\"SD.detectorid = SS.detectorid and SD.sessionid = SS.id\". Well, I hope you\nagree that repeating the same condition on SS seems very much like giving a hint\nto use the index there. But I feel very uncomfortable using such an ugly\ncondition, especially knowing that I am doing it to \"force an index\". On the\nother hand, I am terrified that we may go into production with a seq scan on\nhundreds of millions of rows just to extract 200 000.\n\nWould you please explain that behavior, and how would you suggest we proceed?\n\nThanks for any comments,\nSvetlin Manavski\n",
"msg_date": "Tue, 14 Jun 2011 14:55:26 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "need to repeat the same condition on joined tables in order to choose\n\tthe proper plan"
},
{
"msg_contents": "Svetlin Manavski <[email protected]> writes:\n> I am really surprised to see that the planner needs me to explicitly specify\n> the same condition twice like this:\n\n> SD.detectorid = SS.detectorid\n> and SD.sessionid = SS.id\n> and SD.detectorid = 1\n> and SD.sessionid >= 1001000000000::INT8 and SD.sessionid <=\n> 2001000000000::INT8\n> and SS.detectorid = 1\n> and SS.id >= 1001000000000::INT8 and SS.id <= 2001000000000::INT8\n\nThe planner does infer implied equalities, eg, given A = B and B = C\nit will figure out that A = C. What you are asking is for it to derive\ninequalities, eg infer A < C from A = B and B < C. That would be\nconsiderably more work for considerably less reward, since the sort of\nsituation where this is helpful doesn't come up very often. On balance\nI don't believe it's a good thing for us to do: I think it would make\nPG slower on average because on most queries it would just waste time\nlooking for this sort of situation.\n\n(In this example, the SS.detectorid = 1 clause is in fact unnecessary,\nsince the planner will infer it from SD.detectorid = SS.detectorid and\nSD.detectorid = 1. But it won't infer the range conditions on SS.id\nfrom the range conditions on SD.sessionid or vice versa.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Jun 2011 12:29:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: need to repeat the same condition on joined tables in order to\n\tchoose the proper plan"
},
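A minimal illustration of the distinction Tom describes, using two hypothetical tables a(x, y) and b(x, y) that are not from the thread: the planner propagates the equality on its own, but a range that should constrain both sides has to be written out on both sides by hand.

    -- From  a.x = b.x AND a.x = 1  the planner also applies  b.x = 1  automatically,
    -- but from  a.y = b.y AND a.y BETWEEN 10 AND 20  it will NOT derive the same
    -- range on b.y, so the range is spelled out twice:
    SELECT count(*)
    FROM a
    JOIN b ON a.x = b.x AND a.y = b.y
    WHERE a.x = 1
      AND a.y BETWEEN 10 AND 20
      AND b.y BETWEEN 10 AND 20;   -- logically redundant, present only for the planner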
{
"msg_contents": "On 14.06.2011 18:29, Tom Lane wrote:\n> Svetlin Manavski<[email protected]> writes:\n>> I am really surprised to see that the planner needs me to explicitly specify\n>> the same condition twice like this:\n>\n>> SD.detectorid = SS.detectorid\n>> and SD.sessionid = SS.id\n>> and SD.detectorid = 1\n>> and SD.sessionid>= 1001000000000::INT8 and SD.sessionid<=\n>> 2001000000000::INT8\n>> and SS.detectorid = 1\n>> and SS.id>= 1001000000000::INT8 and SS.id<= 2001000000000::INT8\n>\n> The planner does infer implied equalities, eg, given A = B and B = C\n> it will figure out that A = C. What you are asking is for it to derive\n> inequalities, eg infer A< C from A = B and B< C. That would be\n> considerably more work for considerably less reward, since the sort of\n> situation where this is helpful doesn't come up very often. On balance\n> I don't believe it's a good thing for us to do: I think it would make\n> PG slower on average because on most queries it would just waste time\n> looking for this sort of situation.\n>\n> (In this example, the SS.detectorid = 1 clause is in fact unnecessary,\n> since the planner will infer it from SD.detectorid = SS.detectorid and\n> SD.detectorid = 1. But it won't infer the range conditions on SS.id\n> from the range conditions on SD.sessionid or vice versa.)\n\nIs that the same for IN? Would it help in this particular case to use a \nand SS.id in (select ... where ... > and ... < ...) or with a CTE?\n\nKind regards\n\n\trobert\n\n",
"msg_date": "Tue, 14 Jun 2011 22:21:17 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: need to repeat the same condition on joined tables in order to\n\tchoose the proper plan"
},
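One hedged sketch of what Robert's IN/CTE idea could look like for the query in this thread (untested here). In 9.0 a CTE is planned separately, so tcpsessions can be restricted through its primary-key index without repeating the range as a bare predicate on SS; the range on SD is kept because it is what drives the bitmap index scan on tcpsessiondata_default. Whether the overall plan actually improves would still need to be confirmed with EXPLAIN ANALYZE.

    WITH ss AS (
        SELECT detectorid, id
        FROM appqosdata.tcpsessions
        WHERE detectorid = 1
          AND id >= 1001000000000::INT8 AND id <= 2001000000000::INT8
    )
    SELECT (SD.createdtime / 60000000000) AS timegroup,
           (sum(SD.datafromsource) + sum(SD.datafromdestination)) AS numbytes,
           (sum(SD.packetsfromsource) + sum(SD.packetsfromdestination)) AS numpackets
    FROM appqosdata.tcpsessiondata SD
    JOIN ss ON SD.detectorid = ss.detectorid AND SD.sessionid = ss.id
    WHERE SD.detectorid = 1
      AND SD.sessionid >= 1001000000000::INT8 AND SD.sessionid <= 2001000000000::INT8
    GROUP BY timegroup
    ORDER BY timegroup ASC;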
{
"msg_contents": "Thanks Tom, this explain the behavior. But is there a more elegant way to\nachieve the only acceptable plan (index scan on both tables) without that\nugly syntax? It does seem to me like a specific syntax to help the current\npostgressql planner make the right decision. ( I am aware about the radical\nsolutions which impact the rest of the connection or the entire DB )\n\nAs it comes to the generic case, I do understand deriving inequalities may\nbe inefficient. I just want to point out that this is the case of joining\nand filtering on a field, which is the foreign key in one table and the\nprimary key in the other. That should be massively common in every non\ntrivial DB application. Maybe it does make sense to consider that specific\ncase in the planner, doesn't it?\n\nThank you,\nSvetlin Manavski\n\n\n\nOn Tue, Jun 14, 2011 at 5:29 PM, Tom Lane <[email protected]> wrote:\n\n> Svetlin Manavski <[email protected]> writes:\n> > I am really surprised to see that the planner needs me to explicitly\n> specify\n> > the same condition twice like this:\n>\n> > SD.detectorid = SS.detectorid\n> > and SD.sessionid = SS.id\n> > and SD.detectorid = 1\n> > and SD.sessionid >= 1001000000000::INT8 and SD.sessionid <=\n> > 2001000000000::INT8\n> > and SS.detectorid = 1\n> > and SS.id >= 1001000000000::INT8 and SS.id <= 2001000000000::INT8\n>\n> The planner does infer implied equalities, eg, given A = B and B = C\n> it will figure out that A = C. What you are asking is for it to derive\n> inequalities, eg infer A < C from A = B and B < C. That would be\n> considerably more work for considerably less reward, since the sort of\n> situation where this is helpful doesn't come up very often. On balance\n> I don't believe it's a good thing for us to do: I think it would make\n> PG slower on average because on most queries it would just waste time\n> looking for this sort of situation.\n>\n> (In this example, the SS.detectorid = 1 clause is in fact unnecessary,\n> since the planner will infer it from SD.detectorid = SS.detectorid and\n> SD.detectorid = 1. But it won't infer the range conditions on SS.id\n> from the range conditions on SD.sessionid or vice versa.)\n>\n> regards, tom lane\n>\n\nThanks Tom, this explain the behavior. But is there a more elegant way to achieve the only acceptable plan (index scan on both tables) without that ugly syntax? It does seem to me like a specific syntax to help the current postgressql planner make the right decision. ( I am aware about the radical solutions which impact the rest of the connection or the entire DB )\nAs it comes to the generic case, I do understand deriving inequalities may be inefficient. I just want to point out that this is the case of joining and filtering on a field, which is the foreign key in one table and the primary key in the other. That should be massively common in every non trivial DB application. 
Maybe it does make sense to consider that specific case in the planner, doesn't it?\nThank you,Svetlin ManavskiOn Tue, Jun 14, 2011 at 5:29 PM, Tom Lane <[email protected]> wrote:\nSvetlin Manavski <[email protected]> writes:\n\n> I am really surprised to see that the planner needs me to explicitly specify\n> the same condition twice like this:\n\n> SD.detectorid = SS.detectorid\n> and SD.sessionid = SS.id\n> and SD.detectorid = 1\n> and SD.sessionid >= 1001000000000::INT8 and SD.sessionid <=\n> 2001000000000::INT8\n> and SS.detectorid = 1\n> and SS.id >= 1001000000000::INT8 and SS.id <= 2001000000000::INT8\n\nThe planner does infer implied equalities, eg, given A = B and B = C\nit will figure out that A = C. What you are asking is for it to derive\ninequalities, eg infer A < C from A = B and B < C. That would be\nconsiderably more work for considerably less reward, since the sort of\nsituation where this is helpful doesn't come up very often. On balance\nI don't believe it's a good thing for us to do: I think it would make\nPG slower on average because on most queries it would just waste time\nlooking for this sort of situation.\n\n(In this example, the SS.detectorid = 1 clause is in fact unnecessary,\nsince the planner will infer it from SD.detectorid = SS.detectorid and\nSD.detectorid = 1. But it won't infer the range conditions on SS.id\nfrom the range conditions on SD.sessionid or vice versa.)\n\n regards, tom lane",
"msg_date": "Wed, 15 Jun 2011 09:55:07 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: need to repeat the same condition on joined tables in\n\torder to choose the proper plan"
}
] |
[
{
"msg_contents": "Hi everybody,\n\nI am running PostgreSQL 9.0 which performs well in most of the cases. I\nwould skip all the parameters if these are not necessary.\n\nI need to frequently (every min) get the max value of the primary key column\non some tables, like this case which works perfectly well:\n\nexplain analyze select max(id) from appqosdata.tcpsessions;\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.49..0.50 rows=1 width=0) (actual time=45.316..45.317 rows=1\nloops=1) InitPlan 1 (returns $0)\n-> Limit (cost=0.00..0.49 rows=1 width=8) (actual time=45.302..45.303 rows=1\nloops=1)\n -> Index Scan Backward using idx_tcpsessions_id on tcpsessions\n(cost=0.00..6633362.76 rows=13459023 width=8) (actual time=45.296..45.296\nrows=1 loops=1)\nIndex Cond: (id IS NOT NULL)\nTotal runtime: 45.399 ms\n\nBut I have the following similar case which surprises me quite a lot:\n\nexplain analyze select max(createdtime) from appqosdata.tcpsessiondata;\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=1123868.30..1123868.31 rows=1 width=8) (actual\ntime=376932.636..376932.637 rows=1 loops=1)\n-> Append (cost=0.00..965113.04 rows=63502104 width=8) (actual\ntime=0.020..304844.944 rows=63501281 loops=1)\n-> Seq Scan on tcpsessiondata (cost=0.00..12.80 rows=780 width=8) (actual\ntime=0.002..0.002 rows=0 loops=1)\n-> Seq Scan on tcpsessiondata_default tcpsessiondata (cost=0.00..965100.24\nrows=63501324 width=8) (actual time=0.015..173159.505 rows=63501281 loops=1)\nTotal runtime: 376980.975 ms\n\nI have the following table definitions:\n\nCREATE TABLE appqosdata.tcpsessiondata_default\n(\n Primary key(createdtime), --bigint\ncheck (sessionid >= 0),\n\n Foreign key(detectorid, sessionid) References\nappqosdata.tcpsessions(detectorid,id)\n\n) inherits (appqosdata.tcpsessiondata);\n\nCREATE TABLE appqosdata.tcpsessions\n(\ndetectorid smallint not null default(0) references appqosdata.detectors(id),\nid bigint not null,\n\n ...\n\nprimary key(detectorid, id)\n);\n\nAs you can see I have tens of millions of rows in both tables which would be\nten times more in production. So seq scan is not acceptable at all to get\none single value.\nWhy that difference and what can I do to make the first query use its index\non the primary key.\n\nThank you,\nSvetlin Manavski\n\nHi everybody,I am running PostgreSQL 9.0 which performs well in most of the cases. 
I would skip all the parameters if these are not necessary.I need to frequently (every min) get the max value of the primary key column on some tables, like this case which works perfectly well:\nexplain analyze select max(id) from appqosdata.tcpsessions;-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.49..0.50 rows=1 width=0) (actual time=45.316..45.317 rows=1 loops=1) InitPlan 1 (returns $0) -> Limit (cost=0.00..0.49 rows=1 width=8) (actual time=45.302..45.303 rows=1 loops=1) -> Index Scan Backward using idx_tcpsessions_id on tcpsessions (cost=0.00..6633362.76 rows=13459023 width=8) (actual time=45.296..45.296 rows=1 loops=1)\n Index Cond: (id IS NOT NULL) Total runtime: 45.399 msBut I have the following similar case which surprises me quite a lot:explain analyze select max(createdtime) from appqosdata.tcpsessiondata;\n------------------------------------------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=1123868.30..1123868.31 rows=1 width=8) (actual time=376932.636..376932.637 rows=1 loops=1)\n -> Append (cost=0.00..965113.04 rows=63502104 width=8) (actual time=0.020..304844.944 rows=63501281 loops=1) -> Seq Scan on tcpsessiondata (cost=0.00..12.80 rows=780 width=8) (actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on tcpsessiondata_default tcpsessiondata (cost=0.00..965100.24 rows=63501324 width=8) (actual time=0.015..173159.505 rows=63501281 loops=1) Total runtime: 376980.975 ms\nI have the following table definitions:CREATE TABLE appqosdata.tcpsessiondata_default ( Primary key(createdtime), --bigint\tcheck (sessionid >= 0), Foreign key(detectorid, sessionid) References appqosdata.tcpsessions(detectorid,id)\n ) inherits (appqosdata.tcpsessiondata);CREATE TABLE appqosdata.tcpsessions(\tdetectorid smallint not null default(0) references appqosdata.detectors(id),\tid bigint not null, ...\nprimary key(detectorid, id));As you can see I have tens of millions of rows in both tables which would be ten times more in production. So seq scan is not acceptable at all to get one single value.\nWhy that difference and what can I do to make the first query use its index on the primary key.Thank you,Svetlin Manavski",
"msg_date": "Thu, 16 Jun 2011 14:55:30 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On 2011-06-16 15:55, Svetlin Manavski wrote:\n> Hi everybody,\n>\n> I am running PostgreSQL 9.0 which performs well in most of the cases. I\n> would skip all the parameters if these are not necessary.\n>\n> I need to frequently (every min) get the max value of the primary key column\n> on some tables, like this case which works perfectly well:\n>\n> explain analyze select max(id) from appqosdata.tcpsessions;\n\nTypically this is due to \"batch load\" and failing to run \"analyze\"\nmanually afterwards.. is this the case?\n\n-- \nJesper\n",
"msg_date": "Thu, 16 Jun 2011 19:03:05 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On Thu, Jun 16, 2011 at 15:55, Svetlin Manavski\n<[email protected]> wrote:\n> Hi everybody,\n>\n> I am running PostgreSQL 9.0 which performs well in most of the cases. I\n> would skip all the parameters if these are not necessary.\n> I need to frequently (every min) get the max value of the primary key column\n> on some tables, like this case which works perfectly well:\n> explain analyze select max(id) from appqosdata.tcpsessions;\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=0.49..0.50 rows=1 width=0) (actual time=45.316..45.317 rows=1\n> loops=1) InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..0.49 rows=1 width=8) (actual time=45.302..45.303 rows=1\n> loops=1)\n> -> Index Scan Backward using idx_tcpsessions_id on tcpsessions\n> (cost=0.00..6633362.76 rows=13459023 width=8) (actual time=45.296..45.296\n> rows=1 loops=1)\n> Index Cond: (id IS NOT NULL)\n> Total runtime: 45.399 ms\n>\n> But I have the following similar case which surprises me quite a lot:\n> explain analyze select max(createdtime) from appqosdata.tcpsessiondata;\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1123868.30..1123868.31 rows=1 width=8) (actual\n> time=376932.636..376932.637 rows=1 loops=1)\n> -> Append (cost=0.00..965113.04 rows=63502104 width=8) (actual\n> time=0.020..304844.944 rows=63501281 loops=1)\n> -> Seq Scan on tcpsessiondata (cost=0.00..12.80 rows=780 width=8) (actual\n> time=0.002..0.002 rows=0 loops=1)\n> -> Seq Scan on tcpsessiondata_default tcpsessiondata (cost=0.00..965100.24\n> rows=63501324 width=8) (actual time=0.015..173159.505 rows=63501281 loops=1)\n> Total runtime: 376980.975 ms\n>\n> I have the following table definitions:\n> CREATE TABLE appqosdata.tcpsessiondata_default\n> (\n> Primary key(createdtime), --bigint\n> check (sessionid >= 0),\n>\n> Foreign key(detectorid, sessionid) References\n> appqosdata.tcpsessions(detectorid,id)\n>\n> ) inherits (appqosdata.tcpsessiondata);\n> CREATE TABLE appqosdata.tcpsessions\n> (\n> detectorid smallint not null default(0) references appqosdata.detectors(id),\n> id bigint not null,\n> ...\n> primary key(detectorid, id)\n> );\n>\n> As you can see I have tens of millions of rows in both tables which would be\n> ten times more in production. So seq scan is not acceptable at all to get\n> one single value.\n> Why that difference and what can I do to make the first query use its index\n> on the primary key.\n\nLooks like the first table is not partitioned, but the second one is?\n\nPostgreSQL 9.0 is unable to use an index scan to find min/max on a\npartitioned table. 9.1, however, can do that.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Thu, 16 Jun 2011 19:25:56 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
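Staying with plain SQL on 9.0, one hand-rolled workaround (a sketch, not from the thread) is to aggregate each partition separately, so every child table can use its own index on createdtime, and then take the max of the per-partition results. It assumes, as stated earlier in the thread, that tcpsessiondata_default is the only child actually holding data; any other children would need their own UNION ALL branch.

    SELECT max(m) FROM (
        SELECT max(createdtime) AS m FROM ONLY appqosdata.tcpsessiondata
        UNION ALL
        SELECT max(createdtime) FROM ONLY appqosdata.tcpsessiondata_default
    ) AS per_partition;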
{
"msg_contents": "On 06/16/2011 12:25 PM, Magnus Hagander wrote:\n\n> PostgreSQL 9.0 is unable to use an index scan to find min/max on a\n> partitioned table. 9.1, however, can do that.\n\nUnfortunately this is true. You can fake it this way though:\n\n/**\n* Return the Maximum INT Value for a Partitioned Table Column\n*\n* @param string Name of Schema of the base partition table.\n* @param string Name of the base partition table.\n* @param string Name of column to search.\n*/\nCREATE OR REPLACE FUNCTION spc_max_part_int(VARCHAR, VARCHAR, VARCHAR)\nRETURNS INT AS\n$$\nDECLARE\n\n sSchema ALIAS FOR $1;\n sTable ALIAS FOR $2;\n sColName ALIAS FOR $3;\n\n sChild VARCHAR;\n nMax INT;\n nTemp INT;\n nParent OID;\n\nBEGIN\n\n EXECUTE '\n SELECT max(' || sColName ||')\n FROM ONLY ' || sSchema || '.' || quote_ident(sTable)\n INTO nMax;\n\n SELECT INTO nParent t.oid\n FROM pg_class t\n JOIN pg_namespace n ON (t.relnamespace=n.oid)\n WHERE n.nspname = sSchema\n AND t.relname = sTable;\n\n FOR sChild IN\n SELECT t.relname\n FROM pg_class t\n JOIN pg_inherits c ON (c.inhrelid=t.oid AND c.inhparent=nParent)\n LOOP\n nTemp := utility.spc_max_part_int(sSchema, sChild, sColName);\n nMax := greatest(nTemp, nMax);\n END LOOP;\n\n RETURN nMax;\n\nEND;\n$$ LANGUAGE plpgsql STABLE;\n\n\nYou can call that instead of max, and it'll be much faster. You can \ncreate an analog for min if you need it. So for this, you'd call:\n\nSELECT spc_max_part_int('appqosdata', 'tcpsessions', 'id');\n\nSomeone probably has a better solution. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 16 Jun 2011 13:36:31 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
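For the problem that started this thread, the call would presumably target the partitioned table and its createdtime column rather than tcpsessions. Note that createdtime is bigint, so the INT return and variable types in the function above would need to be widened (the wiki version discussed later in the thread makes exactly that change). A hypothetical invocation:

    -- assumes the recursive call's schema prefix has been adjusted, as noted below
    SELECT spc_max_part_int('appqosdata', 'tcpsessiondata', 'createdtime');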
{
"msg_contents": "Yes, confirmed that the problem is in the partitioned table.\nShaun, that solution is brilliant.\nThank you,\nSvetlin Manavski\n\nOn Thu, Jun 16, 2011 at 7:36 PM, Shaun Thomas <[email protected]> wrote:\n\n> On 06/16/2011 12:25 PM, Magnus Hagander wrote:\n>\n> PostgreSQL 9.0 is unable to use an index scan to find min/max on a\n>> partitioned table. 9.1, however, can do that.\n>>\n>\n> Unfortunately this is true. You can fake it this way though:\n>\n> /**\n> * Return the Maximum INT Value for a Partitioned Table Column\n> *\n> * @param string Name of Schema of the base partition table.\n> * @param string Name of the base partition table.\n> * @param string Name of column to search.\n> */\n> CREATE OR REPLACE FUNCTION spc_max_part_int(VARCHAR, VARCHAR, VARCHAR)\n> RETURNS INT AS\n> $$\n> DECLARE\n>\n> sSchema ALIAS FOR $1;\n> sTable ALIAS FOR $2;\n> sColName ALIAS FOR $3;\n>\n> sChild VARCHAR;\n> nMax INT;\n> nTemp INT;\n> nParent OID;\n>\n> BEGIN\n>\n> EXECUTE '\n> SELECT max(' || sColName ||')\n> FROM ONLY ' || sSchema || '.' || quote_ident(sTable)\n> INTO nMax;\n>\n> SELECT INTO nParent t.oid\n> FROM pg_class t\n> JOIN pg_namespace n ON (t.relnamespace=n.oid)\n> WHERE n.nspname = sSchema\n> AND t.relname = sTable;\n>\n> FOR sChild IN\n> SELECT t.relname\n> FROM pg_class t\n> JOIN pg_inherits c ON (c.inhrelid=t.oid AND c.inhparent=nParent)\n> LOOP\n> nTemp := utility.spc_max_part_int(sSchema, sChild, sColName);\n> nMax := greatest(nTemp, nMax);\n> END LOOP;\n>\n> RETURN nMax;\n>\n> END;\n> $$ LANGUAGE plpgsql STABLE;\n>\n>\n> You can call that instead of max, and it'll be much faster. You can create\n> an analog for min if you need it. So for this, you'd call:\n>\n> SELECT spc_max_part_int('appqosdata', 'tcpsessions', 'id');\n>\n> Someone probably has a better solution. :)\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer.php\n> for terms and conditions related to this email\n>\n\nYes, confirmed that the problem is in the partitioned table.Shaun, that solution is brilliant. Thank you,Svetlin ManavskiOn Thu, Jun 16, 2011 at 7:36 PM, Shaun Thomas <[email protected]> wrote:\nOn 06/16/2011 12:25 PM, Magnus Hagander wrote:\n\n\nPostgreSQL 9.0 is unable to use an index scan to find min/max on a\npartitioned table. 9.1, however, can do that.\n\n\nUnfortunately this is true. You can fake it this way though:\n\n/**\n* Return the Maximum INT Value for a Partitioned Table Column\n*\n* @param string Name of Schema of the base partition table.\n* @param string Name of the base partition table.\n* @param string Name of column to search.\n*/\nCREATE OR REPLACE FUNCTION spc_max_part_int(VARCHAR, VARCHAR, VARCHAR)\nRETURNS INT AS\n$$\nDECLARE\n\n sSchema ALIAS FOR $1;\n sTable ALIAS FOR $2;\n sColName ALIAS FOR $3;\n\n sChild VARCHAR;\n nMax INT;\n nTemp INT;\n nParent OID;\n\nBEGIN\n\n EXECUTE '\n SELECT max(' || sColName ||')\n FROM ONLY ' || sSchema || '.' 
|| quote_ident(sTable)\n INTO nMax;\n\n SELECT INTO nParent t.oid\n FROM pg_class t\n JOIN pg_namespace n ON (t.relnamespace=n.oid)\n WHERE n.nspname = sSchema\n AND t.relname = sTable;\n\n FOR sChild IN\n SELECT t.relname\n FROM pg_class t\n JOIN pg_inherits c ON (c.inhrelid=t.oid AND c.inhparent=nParent)\n LOOP\n nTemp := utility.spc_max_part_int(sSchema, sChild, sColName);\n nMax := greatest(nTemp, nMax);\n END LOOP;\n\n RETURN nMax;\n\nEND;\n$$ LANGUAGE plpgsql STABLE;\n\n\nYou can call that instead of max, and it'll be much faster. You can create an analog for min if you need it. So for this, you'd call:\n\nSELECT spc_max_part_int('appqosdata', 'tcpsessions', 'id');\n\nSomeone probably has a better solution. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email",
"msg_date": "Fri, 17 Jun 2011 12:22:21 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On 06/17/2011 06:22 AM, Svetlin Manavski wrote:\n\n> Shaun, that solution is brilliant.\n\nDon't thank me. I actually got the basic idea from a post here a couple \nyears ago. The only difference is I formalized it somewhat and put it in \nour utility schema, where I put lots of other random useful stored procs \nI've accumulated over the years. I have another one that works with \ndates. :)\n\nI assume you already modified it by removing the 'utility' schema prefix \nfrom the recursive call. The recursive call is in case the child tables \nare themselves used as a template for further inheritance. It's rare, \nbut possible. This function will always get you the max value on a \ncolumn in a series of partitioned tables, and quickly so long as it's \nindexed.\n\nIt's a bit of a hack, but it's worked fine for us while we wait for the \nplanner to catch up. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Fri, 17 Jun 2011 07:43:46 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On 06/17/2011 08:43 AM, Shaun Thomas wrote:\n> It's a bit of a hack, but it's worked fine for us while we wait for \n> the planner to catch up. :)\n\nRight. In situations where people can modify their application to \nredirect MIN/MAX() calls over to directly query the individual \npartitions, that's a great workaround. Your function is the slickest \nsuch solution I've seen for that, so filing it away in case this pops up \nin that situation.\n\nBut if you can't touch the application code and just need it to work as \ndesired, you either need to use PostgreSQL 9.1 (not yet released) or \nfigure out how to backport that fix into an earlier version (not easy). \nA babbled a bit about this specific case at \nhttp://blog.2ndquadrant.com/en/2011/06/max-partitioning-with-min-pain.html \nif anyone wants more information, or a specific simple test case to play \nwith.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 17 Jun 2011 14:00:00 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On Jun 16, 2011, at 1:36 PM, Shaun Thomas wrote:\n> /**\n> * Return the Maximum INT Value for a Partitioned Table Column\n> *\n> * @param string Name of Schema of the base partition table.\n> * @param string Name of the base partition table.\n> * @param string Name of column to search.\n> */\n> CREATE OR REPLACE FUNCTION spc_max_part_int(VARCHAR, VARCHAR, VARCHAR)\n> RETURNS INT AS\n> $$\n> DECLARE\n> <snip>\n> SELECT INTO nParent t.oid\n> FROM pg_class t\n> JOIN pg_namespace n ON (t.relnamespace=n.oid)\n> WHERE n.nspname = sSchema\n> AND t.relname = sTable;\n\nFWIW, instead of that, I would do this:\n\nCREATE FUNCTION ...(\n p_parent_schema text\n , p_parent_table text\n) ...\nDECLARE\n c_parent_oid CONSTANT oid := (p_parent_schema || '.' || p_parent_table )::regclass;\n\n... or ...\n\nCREATE FUNCTION(\n p_parent text\n)\nDECLARE\n c_parent_oid CONSTANT oid := p_parent::regclass;\n\n\nAdvantages:\n\n- ::regclass is search_path-aware, so you're not forced into providing a schema if you don't want to\n- it will throw an error if it doesn't find a regclass entry\n- you can cast the oid back to text: EXECUTE 'SELECT max(' ... 'FROM ' || c_parent_oid::regclass\n- you can also query directly with the OID: SELECT relkind = 't' AS is_table FROM pg_class WHERE oid = c_parent_oid\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Fri, 17 Jun 2011 15:31:17 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
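Putting Jim's regclass suggestion together with Shaun's function, a condensed variant might look roughly like the sketch below. This is illustrative only (the version Marti later put on the wiki is the one that was actually published); the function name and a bigint target column are assumptions, and it relies on the standard pg_inherits catalog.

    CREATE OR REPLACE FUNCTION max_over_partitions(p_parent regclass, p_column text)
    RETURNS bigint AS
    $$
    DECLARE
        v_max   bigint;
        v_child regclass;
    BEGIN
        -- max() over the parent itself, children excluded
        EXECUTE 'SELECT max(' || quote_ident(p_column) || ') FROM ONLY ' || p_parent
            INTO v_max;

        -- recurse into each child; greatest() ignores NULLs from empty tables
        FOR v_child IN
            SELECT inhrelid::regclass FROM pg_inherits WHERE inhparent = p_parent
        LOOP
            v_max := greatest(v_max, max_over_partitions(v_child, p_column));
        END LOOP;

        RETURN v_max;
    END;
    $$ LANGUAGE plpgsql STABLE;

    -- e.g.: SELECT max_over_partitions('appqosdata.tcpsessiondata', 'createdtime');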
{
"msg_contents": "On 06/17/2011 03:31 PM, Jim Nasby wrote:\n\n> c_parent_oid CONSTANT oid := (p_parent_schema || '.' ||\n> p_parent_table )::regclass;\n\nWell isn't *that* a handy bit of magic. How did I not know about that? \nThanks!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Tue, 21 Jun 2011 14:49:46 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On Thu, Jun 16, 2011 at 21:36, Shaun Thomas <[email protected]> wrote:\n> You can call that instead of max, and it'll be much faster. You can create\n> an analog for min if you need it. So for this, you'd call:\n\nCool, I've needed this function sometimes but never bothered enough to\nwrite it myself. Now I created a wiki snippet page for this handy\nfeature here:\nhttps://wiki.postgresql.org/wiki/Efficient_min/max_over_partitioned_table\n\nWith Jim Nasby's idea to use regclass instead of relation names, the\nfunction is now half its length and probably more reliable. There's no\nneed to touch pg_class directly at all.\n\nI also changed it to return bigint instead of integer, as that's more\nversatile, and the performance loss is probably negligible.\n\nRegards,\nMarti\n",
"msg_date": "Wed, 22 Jun 2011 12:55:46 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On 06/22/2011 04:55 AM, Marti Raudsepp wrote:\n\n> With Jim Nasby's idea to use regclass instead of relation names, the\n> function is now half its length and probably more reliable. There's no\n> need to touch pg_class directly at all.\n\nSadly until we upgrade to EDB 9.0, I have to use my function. :) EDB 8.3 \n(which is really PostgreSQL 8.2) doesn't have a regclass->text \nconversion. But I'll bookmark the wiki page anyway, so I can update my \nfunction after upgrading. Heh.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Wed, 22 Jun 2011 08:12:45 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On 06/22/2011 05:55 AM, Marti Raudsepp wrote:\n> Now I created a wiki snippet page for this handy\n> feature here:\n> https://wiki.postgresql.org/wiki/Efficient_min/max_over_partitioned_table\n> \n\nI just tweaked this a bit to document the version compatibility issues \naround it and make it easier to follow. I think that's now the page we \nshould point people toward when this pops up again. Between that and my \nblog post I reference in it, they can find all the details and a \nworkaround in one place.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 22 Jun 2011 13:01:43 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On Wed, Jun 22, 2011 at 7:12 AM, Shaun Thomas <[email protected]> wrote:\n> On 06/22/2011 04:55 AM, Marti Raudsepp wrote:\n>\n>> With Jim Nasby's idea to use regclass instead of relation names, the\n>> function is now half its length and probably more reliable. There's no\n>> need to touch pg_class directly at all.\n>\n> Sadly until we upgrade to EDB 9.0, I have to use my function. :) EDB 8.3\n> (which is really PostgreSQL 8.2) doesn't have a regclass->text conversion.\n> But I'll bookmark the wiki page anyway, so I can update my function after\n> upgrading. Heh.\n>\n\nGiven that many folks still run < 9.0 in production, the wiki page\nshould really have a version of that function for older versions,\nwhether it's long or not.\n",
"msg_date": "Wed, 22 Jun 2011 12:12:37 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On 06/22/2011 01:12 PM, Scott Marlowe wrote:\n\n> Given that many folks still run< 9.0 in production, the wiki page\n> should really have a version of that function for older versions,\n> whether it's long or not.\n\nThis version does work on anything 8.3 and above. I just lamented on 9.0 \nbecause we decided to skip 8.4 in favor of 9.0. And as we use EDB \ninstead of PostgreSQL directly, our 8.3 is actually 8.2. Got that? ;)\n\nSorry for the confusion.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Wed, 22 Jun 2011 13:15:13 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
},
{
"msg_contents": "On 06/22/2011 02:12 PM, Scott Marlowe wrote:\n> Given that many folks still run < 9.0 in production, the wiki page\n> should really have a version of that function for older versions,\n> whether it's long or not.\n> \n\nI updated the page already to be clear about what versions of PostgreSQL \nit works on, and it directs people to Shaun's original message if they \nare running 8.2. The only people who might get confused now are the \nones running EDB's versions, where the exact features you get in \nparticular versions can be slightly different than the community \nversion. But that problem both exists in other parts of the wiki, and \nis a bit outside of its scope to try and address.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 22 Jun 2011 17:35:39 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan in the case of max() on the primary key column"
}
] |
[
{
"msg_contents": "Hi,\n\nI am evaluating hardware for a new PostgreSQL server. For reasons\nconcerning power consumption and available space it should not have\nmore than 4 disks (in a 1U case), if possible. Now, I am not sure what\ndisks to use and how to layout them to get the best performance.\n\nThe cheaper option would be to buy 15k Seagate SAS disks with a 3ware\n9750SA (battery backed) controller. Does it matter whether to use a\n4-disk RAID10 or 2x 2-disk RAID1 (system+pg_xlog , pg_data) setup? Am\nI right that both would be faster than just using a single 2-disk\nRAID1 for everything?\n\nA higher end option would be to use 2x 64G Intel X-25E SSD's with a\nLSI MegaRAID 9261 controller for pg_data and/or pg_xlog and 2x SAS\ndisks for the rest. Unfortunately, these SSD are the only ones offered\nby our supplier and they don't use a supercapacitor, AFAIK. Therefore\nI would have to disable the write cache on the SSD's somehow and just\nuse the cache on the controller only. Does anyone know if this will\nwork or even uses such a setup?\n\nFurthermore, the LSI MegaRAID 9261 offers CacheCade which uses SSD\ndisks a as secondary tier of cache for the SAS disks. Would this\nfeature make sense for a PostgreSQL server, performance wise?\n\nThank you for any hints and inputs.\n\nRegards,\n\nTom.\n",
"msg_date": "Thu, 16 Jun 2011 17:09:54 +0200",
"msg_from": "Haestan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance advice for a new low(er)-power server"
},
{
"msg_contents": "On Thu, Jun 16, 2011 at 10:09 AM, Haestan <[email protected]> wrote:\n> Hi,\n>\n> I am evaluating hardware for a new PostgreSQL server. For reasons\n> concerning power consumption and available space it should not have\n> more than 4 disks (in a 1U case), if possible. Now, I am not sure what\n> disks to use and how to layout them to get the best performance.\n>\n> The cheaper option would be to buy 15k Seagate SAS disks with a 3ware\n> 9750SA (battery backed) controller. Does it matter whether to use a\n> 4-disk RAID10 or 2x 2-disk RAID1 (system+pg_xlog , pg_data) setup? Am\n> I right that both would be faster than just using a single 2-disk\n> RAID1 for everything?\n\nwith 4 drives I think your best bet is single volume raid 10 (ssd or\nstandard disk).\n\n> A higher end option would be to use 2x 64G Intel X-25E SSD's with a\n> LSI MegaRAID 9261 controller for pg_data and/or pg_xlog and 2x SAS\n> disks for the rest. Unfortunately, these SSD are the only ones offered\n> by our supplier and they don't use a supercapacitor, AFAIK. Therefore\n> I would have to disable the write cache on the SSD's somehow and just\n> use the cache on the controller only. Does anyone know if this will\n> work or even uses such a setup?\n\nI am not a big fan of vendors that do not allow hooking in your own\ndrives. How well this setup works is going to depend on how well the\ncontroller works with the SSD. Still, as of today, it's probably\ngoing to be the best performance you can get for four drives...the\nx25-e remains the only SLC drive from a major vendor.\n\n> Furthermore, the LSI MegaRAID 9261 offers CacheCade which uses SSD\n> disks a as secondary tier of cache for the SAS disks. Would this\n> feature make sense for a PostgreSQL server, performance wise?\n\nI'm really skeptical about this feature.\n\n> Thank you for any hints and inputs.\n\nThe SSD space is going to see a lot more options from Intel later this\nyear. See: http://www.maximumpc.com/article/news/leaked_roadmap_points_upcoming_intel_ssds.\n If you have time, I'd consider waiting a month or so to see what\noptions become available.\n\nmerlin\n",
"msg_date": "Thu, 16 Jun 2011 13:19:37 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
{
"msg_contents": "On 2011-06-16 17:09, Haestan wrote:\n> I am evaluating hardware for a new PostgreSQL server. For reasons\n> concerning power consumption and available space it should not have\n> more than 4 disks (in a 1U case), if possible. Now, I am not sure what\n> disks to use and how to layout them to get the best performance.\nWhat is your data:memory-size ratio? Can you afford to have everything\nin memory and only have the disks to be able to sustain writes?\n\n> The cheaper option would be to buy 15k Seagate SAS disks with a 3ware\n> 9750SA (battery backed) controller. Does it matter whether to use a\n> 4-disk RAID10 or 2x 2-disk RAID1 (system+pg_xlog , pg_data) setup? Am\n> I right that both would be faster than just using a single 2-disk\n> RAID1 for everything?\n>\n> A higher end option would be to use 2x 64G Intel X-25E SSD's with a\n> LSI MegaRAID 9261 controller for pg_data and/or pg_xlog and 2x SAS\n> disks for the rest. Unfortunately, these SSD are the only ones offered\n> by our supplier and they don't use a supercapacitor, AFAIK. Therefore\n> I would have to disable the write cache on the SSD's somehow and just\n> use the cache on the controller only. Does anyone know if this will\n> work or even uses such a setup.\nAny SSD is orders of magnitude better than any rotating drive\nin terms of random reads. If you will benefit depends on your\ndata:memory ratio..\n\n> Furthermore, the LSI MegaRAID 9261 offers CacheCade which uses SSD\n> disks a as secondary tier of cache for the SAS disks. Would this\n> feature make sense for a PostgreSQL server, performance wise?\nI have one CacheCade setup... not a huge benefit but it seems\nmeasurable. (but really hard to test). .. compared to a full\nSSD-setup I wouldn't consider it at all.\n\n-- \nJesper\n",
"msg_date": "Thu, 16 Jun 2011 20:29:29 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
{
"msg_contents": "On 06/16/2011 11:09 AM, Haestan wrote:\n> The cheaper option would be to buy 15k Seagate SAS disks with a 3ware\n> 9750SA (battery backed) controller. Does it matter whether to use a\n> 4-disk RAID10 or 2x 2-disk RAID1 (system+pg_xlog , pg_data) setup? Am\n> I right that both would be faster than just using a single 2-disk\n> RAID1 for everything?\n> \n\nThe layout you proposed (OS+WAL , data) might be effective, but if your \nwrite volume is low it may not be much of an improvement at all over a \nsimple RAID1 of two drives. The odds that you are going to correctly \nlay out individual sections of a disk array with only two pairs to \nspread the data across aren't good. If this is all you have to work \nwith, a 4-disk RAID10 would at least guarantee you're taking advantage \nof all four drives. With that controller, it should be almost twice as \nfast in all cases as hooking up only two drives.\n\n> A higher end option would be to use 2x 64G Intel X-25E SSD's with a\n> LSI MegaRAID 9261 controller for pg_data and/or pg_xlog and 2x SAS\n> disks for the rest. Unfortunately, these SSD are the only ones offered\n> by our supplier and they don't use a supercapacitor, AFAIK. Therefore\n> I would have to disable the write cache on the SSD's somehow and just\n> use the cache on the controller only. Does anyone know if this will\n> work or even uses such a setup?\n> \n\nThese drives are one of the worst choices on the market for PostgreSQL \nstorage. They're unusably slow if you disable the caches, and even that \nisn't guaranteed to work. There is no way to make them safe. See \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for more details. The \n3rd generation SSDs from Intel are much, much better; see \nhttp://blog.2ndquadrant.com/en/2011/04/intel-ssd-now-off-the-sherr-sh.html \nfor details.\n\nThere is another possibility I would suggest you consider. You could \nbuy the server with a single pair of drives now, then wait to see what \nperformance is like before filling the other two slots. It is far \neasier to figure out what drive technology makes sense if you have \nmeasurements from an existing system to guide that decision. And you \nmay be able to get newer drives from your vendor that slide into the \nempty slots. You may not ever even need more than a single RAID-1 \npair. I see lots of people waste money on drives that would be better \nspent on RAM.\n\n> Furthermore, the LSI MegaRAID 9261 offers CacheCade which uses SSD\n> disks a as secondary tier of cache for the SAS disks. Would this\n> feature make sense for a PostgreSQL server, performance wise?\n> \n\nThere are already three layers involved here:\n\n-Database shared_buffers cache\n-Operating system read/write cache\n-RAID controller cache\n\nI would be skeptical that adding a fourth one near the bottom of this \nstack is likely to help a lot. And you're adding a whole new layer of \ndifficult to test reliability issues, too. Overly complicated storage \nsolutions tend to introduce complicated failures that corrupt your data \nin unexpected ways.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 16 Jun 2011 14:43:03 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
{
"msg_contents": "On Thu, Jun 16, 2011 at 12:43 PM, Greg Smith <[email protected]> wrote:\n> There are already three layers involved here:\n>\n> -Database shared_buffers cache\n> -Operating system read/write cache\n> -RAID controller cache\n>\n> I would be skeptical that adding a fourth one near the bottom of this stack\n> is likely to help a lot. And you're adding a whole new layer of difficult\n> to test reliability issues, too. Overly complicated storage solutions tend\n> to introduce complicated failures that corrupt your data in unexpected ways.\n\nPlus each layer is from a different provider. The drive manufacturers\npoint to the RAID controller maker, the RAID controller people point\nat the SSDs you're using, and so on...\n",
"msg_date": "Thu, 16 Jun 2011 12:52:03 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
{
"msg_contents": "On Thu, Jun 16, 2011 at 1:43 PM, Greg Smith <[email protected]> wrote:\n> These drives are one of the worst choices on the market for PostgreSQL\n> storage. They're unusably slow if you disable the caches, and even that\n> isn't guaranteed to work. There is no way to make them safe. See\n> http://wiki.postgresql.org/wiki/Reliable_Writes for more details. The 3rd\n> generation SSDs from Intel are much, much better; see\n> http://blog.2ndquadrant.com/en/2011/04/intel-ssd-now-off-the-sherr-sh.html\n> for details.\n\nI don't necessarily agree. the drives are SLC and have the potential\nto have a much longer lifespan than any MLC drive, although this is\ngoing to depend a lot on the raid controller if write caching is\ndisabled. Also, reading the post that got all this started\n(http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/),\nthe OP was able to configure them to run durably with 1200 write iops.\n While not great, that's still much better than any spinning disk.\n\nSo, if drive lifespan is a big deal, I think they still technically\nhave a place *today) although the drives that are just about to come\nout (the 710 and 720) will make them obsolete, because the built in\ncaching (particularly for the SLC 720) will make the drive superior in\nevery respect.\n\nmerlin\n",
"msg_date": "Thu, 16 Jun 2011 14:04:16 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
{
"msg_contents": "On 06/16/2011 03:04 PM, Merlin Moncure wrote:\n> I don't necessarily agree. the drives are SLC and have the potential\n> to have a much longer lifespan than any MLC drive, although this is\n> going to depend a lot on the raid controller if write caching is\n> disabled. Also, reading the post that got all this started\n> (http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/),\n> the OP was able to configure them to run durably with 1200 write iops.\n> \n\nWe've also seen \nhttp://petereisentraut.blogspot.com/2009/07/solid-state-drive-benchmarks-and-write.html \nwhere Peter was only able to get 441 seeks/second on the bonnie++ mixed \nread/write test that way. And no one has measured the longevity of the \ndrive when it's running in this mode. A large portion of the lifespan \nadvantage MLC would normally have over SLC goes away if it can't cache \nwrites anymore. Worst-case, if the drive is always hit with 8K writes \nand the erase size is 128KB, you might get only 1/16 of the specified \nlifetime running it cacheless.\n\nI just can't recommend that people consider running one of these in a \nmode it was never intended to. The fact that the consumer drives from \nthis generation still lose data even with the write cache turned off \nshould make you ask what other, more difficult to trigger failure modes \nare still hanging around the enterprise drives, too. Everyone I've seen \nsuffer through problems with these gave up on them before even trying \nreally in-depth reliability tests, so I don't consider that territory \neven very well explored.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 16 Jun 2011 18:12:08 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
{
"msg_contents": "On Thu, Jun 16, 2011 at 5:12 PM, Greg Smith <[email protected]> wrote:\n> On 06/16/2011 03:04 PM, Merlin Moncure wrote:\n>>\n>> I don't necessarily agree. the drives are SLC and have the potential\n>> to have a much longer lifespan than any MLC drive, although this is\n>> going to depend a lot on the raid controller if write caching is\n>> disabled. Also, reading the post that got all this started\n>>\n>> (http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/),\n>> the OP was able to configure them to run durably with 1200 write iops.\n>>\n>\n> We've also seen\n> http://petereisentraut.blogspot.com/2009/07/solid-state-drive-benchmarks-and-write.html\n> where Peter was only able to get 441 seeks/second on the bonnie++ mixed\n> read/write test that way. And no one has measured the longevity of the\n> drive when it's running in this mode. A large portion of the lifespan\n> advantage MLC would normally have over SLC goes away if it can't cache\n> writes anymore. Worst-case, if the drive is always hit with 8K writes and\n> the erase size is 128KB, you might get only 1/16 of the specified lifetime\n> running it cacheless.\n>\n> I just can't recommend that people consider running one of these in a mode\n> it was never intended to. The fact that the consumer drives from this\n> generation still lose data even with the write cache turned off should make\n> you ask what other, more difficult to trigger failure modes are still\n> hanging around the enterprise drives, too. Everyone I've seen suffer\n> through problems with these gave up on them before even trying really\n> in-depth reliability tests, so I don't consider that territory even very\n> well explored.\n\nI've always been more than little suspicious about Peter's results --\nusing lvm and luks, and the filesystem wasn't specified or the write\nbarriers setting. Also they are much slower than any other results\nI've seen, and no numbers are given using a more standard setup.\nOther people are showing 1200 write iops vs 441 mostly read iops. Not\nthat I think they aren't correct, but there simply has to be an\nexplanation of why his results are so much slower than all others on\nthe 'net...I think 1200 write iops is a realistic expectation.\n\nOne the cache/lifespan issue, you might be correct -- it's going to\ndepend on the raid controller. The drive has a 10x longer lifespan and\nthere is not going to be a 1:1 correspondence between sync requests\nand actual syncs to the drive. But the drive is denied the ability to\ndo it's own reordering so unless the raid controller is really\noptimized for flash longevity (doubtful), you probably do have to give\nback a lot of the savings you get from being SLC (possibly more). So\nit's complex -- but I think the whole issue becomes moot soon because\nnon consumer flash drives from here on out are going to have\ncapacitors in them (the 720 ramsdale will immediately knock out the\nx25-e). So the prudent course of action is to wait, or to just go with\nthe current crop capacitor based drives and deal with the lifespan\nissues.\n\nmerlin\n",
"msg_date": "Thu, 16 Jun 2011 17:44:11 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
{
"msg_contents": "\nOn Jun 16, 2011, at 20:29, Jesper Krogh wrote:\n\n> On 2011-06-16 17:09, Haestan wrote:\n>> I am evaluating hardware for a new PostgreSQL server. For reasons\n>> concerning power consumption and available space it should not have\n>> more than 4 disks (in a 1U case), if possible. Now, I am not sure what\n>> disks to use and how to layout them to get the best performance.\n> What is your data:memory-size ratio? Can you afford to have everything\n> in memory and only have the disks to be able to sustain writes?\n\nYes, I can definitely affort to have everything in memory. Right now,\nthe main database is about 10GB in size including bloat (around 4GB\nwithout). And there are some more db's of about 4GB in size. So in\ntotal around 14GB at the moment and slowly rising.\n\nI was planning to put in at least 16GB RAM or probably even 24GB to be\nsafe. The problem is that the data of the main db is more or less\nconstantly updated or deleted/reinserted throughout the day. It seems\nto me that the resulting bloat and the constant new data is really\nhurting the cache hit rate (which is now around 90% in the main appl.).\nIt's those remaining queries that read from the disk that I really\nwould like to speed up as best as possible.\n\n>> Furthermore, the LSI MegaRAID 9261 offers CacheCade which uses SSD\n>> disks a as secondary tier of cache for the SAS disks. Would this\n>> feature make sense for a PostgreSQL server, performance wise?\n> I have one CacheCade setup... not a huge benefit but it seems\n> measurable. (but really hard to test). .. compared to a full\n> SSD-setup I wouldn't consider it at all.\n\nThanks for that input. What I've read from you and others, the SSD\ncache doesn't seem a viable option for me.\n\n",
"msg_date": "Fri, 17 Jun 2011 10:42:19 +0200",
"msg_from": "Haestan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
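The ~90% cache hit rate mentioned above can be tracked from the statistics views; a generic sketch (not from the thread) at the database and table level:

-- database-wide buffer hit ratio
SELECT datname,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database
WHERE datname = current_database();

-- heap hit ratio across user tables
SELECT sum(heap_blks_hit)::numeric
       / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS heap_hit_ratio
FROM pg_statio_user_tables;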
{
"msg_contents": "\nOn Jun 16, 2011, at 20:43, Greg Smith wrote:\n> The layout you proposed (OS+WAL , data) might be effective, but if your write volume is low it may not be much of an improvement at all over a simple RAID1 of two drives. The odds that you are going to correctly lay out individual sections of a disk array with only two pairs to spread the data across aren't good. If this is all you have to work with, a 4-disk RAID10 would at least guarantee you're taking advantage of all four drives. With that controller, it should be almost twice as fast in all cases as hooking up only two drives.\n\nThe data is more or less constantly rewritten (it contains hourly updated travel related data). Therefore, I really tend to buy 4 disks from the start on.\n\n> There is another possibility I would suggest you consider. You could buy the server with a single pair of drives now, then wait to see what performance is like before filling the other two slots. It is far easier to figure out what drive technology makes sense if you have measurements from an existing system to guide that decision. And you may be able to get newer drives from your vendor that slide into the empty slots. You may not ever even need more than a single RAID-1 pair. I see lots of people waste money on drives that would be better spent on RAM.\n\nActually, there are already two older servers in place right now. The data is about 14GB in size and slowly rising. Considering the price for RAM I can easily afford to install more RAM than the db data is in size. I was aiming for 24GB. But even then, I cannot be sure that no queries will read from the disk. AFAIK, there is no way to force all the data to stay in cache (shared_buffers for example). \n\nThank you for your input so far.\n\nRegards,\n\nTom.",
"msg_date": "Fri, 17 Jun 2011 11:13:32 +0200",
"msg_from": "Haestan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
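There is indeed no way to pin data in shared_buffers, but with the pg_buffercache contrib module installed it is at least possible to see which relations currently occupy the cache; a sketch assuming that module is available:

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;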
{
"msg_contents": "\n> On Jun 16, 2011, at 20:43, Greg Smith wrote:\n>> The layout you proposed (OS+WAL , data) might be effective, but if your\n>> write volume is low it may not be much of an improvement at all over a\n>> simple RAID1 of two drives. The odds that you are going to correctly\n>> lay out individual sections of a disk array with only two pairs to\n>> spread the data across aren't good. If this is all you have to work\n>> with, a 4-disk RAID10 would at least guarantee you're taking advantage\n>> of all four drives. With that controller, it should be almost twice as\n>> fast in all cases as hooking up only two drives.\n>\n> The data is more or less constantly rewritten (it contains hourly updated\n> travel related data). Therefore, I really tend to buy 4 disks from the\n> start on.\n\nJust make sure you have a Battery-backed-write cache or flash backed, so\nyou can set it safely to \"write-back\".\n\n>> There is another possibility I would suggest you consider. You could\n>> buy the server with a single pair of drives now, then wait to see what\n>> performance is like before filling the other two slots. It is far\n>> easier to figure out what drive technology makes sense if you have\n>> measurements from an existing system to guide that decision. And you\n>> may be able to get newer drives from your vendor that slide into the\n>> empty slots. You may not ever even need more than a single RAID-1 pair.\n>> I see lots of people waste money on drives that would be better spent\n>> on RAM.\n>\n> Actually, there are already two older servers in place right now. The data\n> is about 14GB in size and slowly rising. Considering the price for RAM I\n> can easily afford to install more RAM than the db data is in size. I was\n> aiming for 24GB. But even then, I cannot be sure that no queries will read\n> from the disk. AFAIK, there is no way to force all the data to stay in\n> cache (shared_buffers for example).\n\ntar cf - $PGDATA > /dev/null in cron every now and then /15-30 minutes)\nwould certainly do the trick . not pretty but we're solving problems not\ntalking about beauty. If you can ensure sufficient memory then it should\ncause any problems.. if you graph the timing of above command you should\neven be able to see when you need to add more memory.\n\n\n-- \nJesper\n\n",
"msg_date": "Fri, 17 Jun 2011 11:30:33 +0200 (CEST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
},
{
"msg_contents": "On Thu, Jun 16, 2011 at 5:44 PM, Merlin Moncure <[email protected]> wrote:\n> it's complex -- but I think the whole issue becomes moot soon because\n> non consumer flash drives from here on out are going to have\n> capacitors in them (the 720 ramsdale will immediately knock out the\n> x25-e). So the prudent course of action is to wait, or to just go with\n> the current crop capacitor based drives and deal with the lifespan\n> issues.\n\nI may have spoke too soon on this -- the 720 is a pci-e device...spec\ndetails are just now leaking out (see:\nhttp://www.engadget.com/2011/06/16/intels-710-lyndonville-and-720-ramsdale-ssds-see-full-spec/).\n 512mb onboard dram and \"56k\" write iops, and probably stupid\nexpensive. The 710 is a standard sata device with 2.4k claimed write\niops, using \"HET\" MLC about which there is zero information available.\n\n merlin\n",
"msg_date": "Fri, 17 Jun 2011 08:29:31 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance advice for a new low(er)-power server"
}
] |
[
{
"msg_contents": "Hello, postgresql guru's\n\nWhen developing ticket export mechanism for our ticketing system I \nthought the wise thing would be to generate the result (XML file) in \nstored function. This worked fine for small datasets, but for the larger \nones I see it takes much longer to generate the document than to select \nthe data needed for it.\n\nHere are several \"explain alalyze\"s :\n\nexplain analyze\nSELECT * from tex.fnk_access_control_tickets(8560, 0);\n\n\"Function Scan on fnk_access_control_tickets (cost=0.25..10.25 \nrows=1000 width=246) (actual time=479.276..484.429 rows=16292 loops=1)\"\n\"Total runtime: 486.774 ms\"\n\nexplain analyze\nSELECT\n --xmlagg(\n XMLELEMENT ( NAME \"bar\",\n XMLELEMENT ( NAME \"code\", tick_barcode),\n XMLELEMENT ( NAME \"stat\", status),\n CASE WHEN type IS NOT NULL THEN XMLELEMENT ( NAME \"tip\", type) \nELSE NULL END,\n CASE WHEN sec_name IS NOT NULL THEN XMLELEMENT ( NAME \"sec\", \nsec_name) ELSE NULL END,\n CASE WHEN row_name IS NOT NULL THEN XMLELEMENT ( NAME \"row\", \nrow_name) ELSE NULL END,\n CASE WHEN seat_name IS NOT NULL THEN XMLELEMENT ( NAME \"plc\", \nseat_name) ELSE NULL END,\n CASE WHEN substr(tick_barcode,length(tick_barcode),1)= '1' THEN\n XMLELEMENT ( NAME \"groups\",\n XMLELEMENT ( NAME \"group\", 1)\n )\n ELSE NULL END\n )\n -- )\n FROM tex.fnk_access_control_tickets(8560, 0);\n\n\"Function Scan on fnk_access_control_tickets (cost=0.25..17.75 \nrows=1000 width=238) (actual time=476.446..924.785 rows=16292 loops=1)\"\n\"Total runtime: 928.768 ms\"\n\n\nexplain analyze\nSELECT\n xmlagg(\n XMLELEMENT ( NAME \"bar\",\n XMLELEMENT ( NAME \"code\", tick_barcode),\n XMLELEMENT ( NAME \"stat\", status),\n CASE WHEN type IS NOT NULL THEN XMLELEMENT ( NAME \"tip\", type) \nELSE NULL END,\n CASE WHEN sec_name IS NOT NULL THEN XMLELEMENT ( NAME \"sec\", \nsec_name) ELSE NULL END,\n CASE WHEN row_name IS NOT NULL THEN XMLELEMENT ( NAME \"row\", \nrow_name) ELSE NULL END,\n CASE WHEN seat_name IS NOT NULL THEN XMLELEMENT ( NAME \"plc\", \nseat_name) ELSE NULL END,\n CASE WHEN substr(tick_barcode,length(tick_barcode),1)= '1' THEN\n XMLELEMENT ( NAME \"groups\",\n XMLELEMENT ( NAME \"group\", 1)\n )\n ELSE NULL END\n )\n )\n FROM tex.fnk_access_control_tickets(8560, 0);\n\n\"Aggregate (cost=12.75..12.77 rows=1 width=238) (actual \ntime=16110.847..16110.848 rows=1 loops=1)\"\n\" -> Function Scan on fnk_access_control_tickets (cost=0.25..10.25 \nrows=1000 width=238) (actual time=500.029..520.974 rows=16292 loops=1)\"\n\"Total runtime: 16111.264 ms\"\n\nIt seems the aggregate combining the elements is to blame... What I did \nnext was rewriting it in stored function using FOR loop and combining it \nin a text variable. Sadly the result was the same (approximately 16 \nseconds).\n\n From that I had to conclude that text := text + some_text operation is \nan expensive one, but now I have no ideas left how to solve the problem. \nAny notices, recommendations, advices are very welcome.\n\nI've also tried google'ing on XML creation in Postgresql, but found no \nwarnings or even mentioning xmlagg could cause a headache. I have \nnowhere to turn for help now, so please advice...\n\nNot sure if that will help, but anyway:\nServer:\nPostgreSQL 9.0.3 on i486-pc-linux-gnu, compiled by GCC gcc-4.4.real \n(Debian 4.4.5-10) 4.4.5, 32-bit\nsee the result of select * from pg_settings in attachment if needed.\nClient:\nWindows XP, pgAdmin 1.12.3\n\n\nThank you in advance.\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. 
+37068233050",
"msg_date": "Thu, 16 Jun 2011 22:33:56 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "generating a large XML document"
},
{
"msg_contents": "Hello,\n\nI'm sorry to write again, but as I received no answer I wonder if there \nis a better mailing list to address concerning this question? Or is \nthere nothing to be done about the speed of xmlagg ?. Please let me as \nno answer is the worst answer to get....\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050\n\n",
"msg_date": "Mon, 20 Jun 2011 09:36:32 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generating a large XML document"
},
{
"msg_contents": "Hello\n\n2011/6/20 Julius Tuskenis <[email protected]>:\n> Hello,\n>\n> I'm sorry to write again, but as I received no answer I wonder if there is a\n> better mailing list to address concerning this question? Or is there nothing\n> to be done about the speed of xmlagg ?. Please let me as no answer is the\n> worst answer to get....\n>\n\nIt's hard to say where is problem - PostgreSQL wraps libxml2 library\nfor xml functionality, so problem can be\n\na) inside libxml2\nb) on interface between libxml2 and PostgreSQL\nc) on PostgreSQL memory management\n\ncan you send a profile?\n\nRegards\n\nPavel Stehule\n\n> --\n> Julius Tuskenis\n> Programavimo skyriaus vadovas\n> UAB nSoft\n> mob. +37068233050\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 20 Jun 2011 08:51:44 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generating a large XML document"
},
{
"msg_contents": "On Sun, Jun 19, 2011 at 11:36 PM, Julius Tuskenis <[email protected]> wrote:\n\n> Hello,\n>\n> I'm sorry to write again, but as I received no answer I wonder if there is\n> a better mailing list to address concerning this question? Or is there\n> nothing to be done about the speed of xmlagg ?. Please let me as no answer\n> is the worst answer to get....\n\n\nI played around a little after your initial post, but I'm not a postgresql\ndeveloper and I have no familiarity with the internals, so everything I'm\nabout to write is pure conjecture.\n\nProgramming languages that treat strings as immutable often suffer from this\nkind of problem. WIth each string concatenation, both strings have to be\ncopied to a new location in memory. I suspect that this is happening in\nthis case. The only viable fix would be to use a buffer that is mutable and\nappend into it rather than doing raw string concatenation. If this is truly\nthe problem, I don't see that you have much choice but to re-implement\nxmlagg in one of the available languages such that it uses a buffer instead\nof immutable string concatenation. It is probably that the xmlelement\nfunction doesn't need to be rewritten, since it is only concatenating\nrelatively short strings. It is less efficient than appending to a buffer,\nbut shouldn't get catastrophically slow. But xmlagg is concatenating many\nrows. If each concatenation results in a full copy of the already\nconcatenated rows, you can see where performance would drop\ncatastrophically.\n\nHere's the first google result for 'mutable string python' that I found,\nwhich addresses this problem in python.\nhttp://www.skymind.com/~ocrow/python_string/ You could rewrite the aggregate\nfunction in plpython using one of the techniques in that file. I just\nattempted to find the source to xm_agg in the postgresql source code and it\nis pretty well obfuscated, so I don't see that being much help. I wasn't\neven able to determine if the problem actually is immutable string\nconcatenation. So we don't know if xmlagg is building a DOM tree and then\nserializing it once (which would imply that XMLELEMENT returns a single DOM\nnode, or if it is all workign with strings. Barring answers from someone\nwho actually knows, I can only suggest that you read through the\ndocumentation on writing an aggregate function and then do some\nexperimentation to see what you get when you use your own aggregate function\ninstead of xml_agg. Hopefully, such experimentation won't take long to\ndetermine if re-implementing xml_agg with a mutable buffer is a viable\noption.\n\n--sam\n\nOn Sun, Jun 19, 2011 at 11:36 PM, Julius Tuskenis <[email protected]> wrote:\nHello,\n\nI'm sorry to write again, but as I received no answer I wonder if there is a better mailing list to address concerning this question? Or is there nothing to be done about the speed of xmlagg ?. Please let me as no answer is the worst answer to get....\nI played around a little after your initial post, but I'm not a postgresql developer and I have no familiarity with the internals, so everything I'm about to write is pure conjecture. \nProgramming languages that treat strings as immutable often suffer from this kind of problem. WIth each string concatenation, both strings have to be copied to a new location in memory. I suspect that this is happening in this case. The only viable fix would be to use a buffer that is mutable and append into it rather than doing raw string concatenation. 
If this is truly the problem, I don't see that you have much choice but to re-implement xmlagg in one of the available languages such that it uses a buffer instead of immutable string concatenation. It is probably that the xmlelement function doesn't need to be rewritten, since it is only concatenating relatively short strings. It is less efficient than appending to a buffer, but shouldn't get catastrophically slow. But xmlagg is concatenating many rows. If each concatenation results in a full copy of the already concatenated rows, you can see where performance would drop catastrophically.\nHere's the first google result for 'mutable string python' that I found, which addresses this problem in python. http://www.skymind.com/~ocrow/python_string/ You could rewrite the aggregate function in plpython using one of the techniques in that file. I just attempted to find the source to xm_agg in the postgresql source code and it is pretty well obfuscated, so I don't see that being much help. I wasn't even able to determine if the problem actually is immutable string concatenation. So we don't know if xmlagg is building a DOM tree and then serializing it once (which would imply that XMLELEMENT returns a single DOM node, or if it is all workign with strings. Barring answers from someone who actually knows, I can only suggest that you read through the documentation on writing an aggregate function and then do some experimentation to see what you get when you use your own aggregate function instead of xml_agg. Hopefully, such experimentation won't take long to determine if re-implementing xml_agg with a mutable buffer is a viable option.\n--sam",
"msg_date": "Mon, 20 Jun 2011 00:21:06 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generating a large XML document"
},
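One way to experiment with the buffer idea without leaving SQL is a hand-rolled aggregate that collects the fragments into a text[] state and builds the final value only once in the final function. This is only a sketch (the function and aggregate names are invented here), and whether it actually beats xmlagg would have to be measured:

CREATE OR REPLACE FUNCTION xml_frag_accum(text[], xml) RETURNS text[] AS $$
    -- append one serialized fragment to the accumulated array
    SELECT $1 || $2::text;
$$ LANGUAGE sql IMMUTABLE;

CREATE OR REPLACE FUNCTION xml_frag_final(text[]) RETURNS xml AS $$
    -- join everything once at the end and cast back to xml
    SELECT array_to_string($1, '')::xml;
$$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE xmlagg_buffered (xml) (
    SFUNC     = xml_frag_accum,
    STYPE     = text[],
    FINALFUNC = xml_frag_final,
    INITCOND  = '{}'
);

It would then be called as xmlagg_buffered(XMLELEMENT(...)) in place of xmlagg(...).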
{
"msg_contents": "Thank you, Pavel for your answer\n\n2011.06.20 09:51, Pavel Stehule rašė:\n> can you send a profile?\nExcuse me, but what do you mean by saying \"profile\"? I've sent content \nof pg_settings in the first post. Please be more specific as I am more \nof a programmer than an server administrator.\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050\n\n",
"msg_date": "Mon, 20 Jun 2011 10:29:45 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generating a large XML document"
},
{
"msg_contents": "Thank You, Samuel for the time it took to investigate the issue.\n\nI'll try to use buffer to see what the results are... I'll post results \nto the list if I succeed.\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050\n\n",
"msg_date": "Mon, 20 Jun 2011 10:38:41 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generating a large XML document"
},
{
"msg_contents": "2011/6/20 Julius Tuskenis <[email protected]>:\n> Thank you, Pavel for your answer\n>\n> 2011.06.20 09:51, Pavel Stehule rašė:\n>>\n>> can you send a profile?\n>\n> Excuse me, but what do you mean by saying \"profile\"? I've sent content of\n> pg_settings in the first post. Please be more specific as I am more of a\n> programmer than an server administrator.\n>\n\na result from oprofile profiler\n\nRegards\n\nPavel\n\n> --\n> Julius Tuskenis\n> Programavimo skyriaus vadovas\n> UAB nSoft\n> mob. +37068233050\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 20 Jun 2011 09:47:08 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generating a large XML document"
},
{
"msg_contents": "2011/6/20 Pavel Stehule <[email protected]>:\n> 2011/6/20 Julius Tuskenis <[email protected]>:\n>> Thank you, Pavel for your answer\n>>\n>> 2011.06.20 09:51, Pavel Stehule rašė:\n>>>\n>>> can you send a profile?\n>>\n>> Excuse me, but what do you mean by saying \"profile\"? I've sent content of\n>> pg_settings in the first post. Please be more specific as I am more of a\n>> programmer than an server administrator.\n>>\n>\n> a result from oprofile profiler\n>\n\nI looked into code - probably a implementation of xmlagg is too silly\n\nxmlagg use a xmlconcat functions - that means repeated xml parsing and\nxmlserialization. So it is not effective on larger trees :(\n\nstring_agg is more effective now. The solution is only radical\nrefactoring of xmlagg function.\n\nPavel\n\n\n\n\n> Regards\n>\n> Pavel\n>\n>> --\n>> Julius Tuskenis\n>> Programavimo skyriaus vadovas\n>> UAB nSoft\n>> mob. +37068233050\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n",
"msg_date": "Mon, 20 Jun 2011 09:58:15 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: generating a large XML document"
},
{
"msg_contents": "2011.06.20 10:58, Pavel Stehule rašė:\n> string_agg is more effective now. The solution is only radical\n> refactoring of xmlagg function.\nThank you, Pavel for letting me know of string_agg.\n\nexplain analyze\nSELECT\n string_agg(\n XMLELEMENT ( NAME \"bar\",\n XMLELEMENT ( NAME \"code\", tick_barcode),\n XMLELEMENT ( NAME \"stat\", status),\n CASE WHEN type IS NOT NULL THEN XMLELEMENT ( NAME \"tip\", type) \nELSE NULL END,\n CASE WHEN sec_name IS NOT NULL THEN XMLELEMENT ( NAME \"sec\", \nsec_name) ELSE NULL END,\n CASE WHEN row_name IS NOT NULL THEN XMLELEMENT ( NAME \"row\", \nrow_name) ELSE NULL END,\n CASE WHEN seat_name IS NOT NULL THEN XMLELEMENT ( NAME \"plc\", \nseat_name) ELSE NULL END,\n CASE WHEN substr(tick_barcode,length(tick_barcode),1)= '1' THEN\n XMLELEMENT ( NAME \"groups\",\n XMLELEMENT ( NAME \"group\", 1)\n )\n ELSE NULL END\n )::text, NULL\n )::xml\n FROM tex.fnk_access_control_tickets(8560, 0);\n\n\"Aggregate (cost=12.75..12.77 rows=1 width=238) (actual \ntime=1025.502..1025.502 rows=1 loops=1)\"\n\" -> Function Scan on fnk_access_control_tickets (cost=0.25..10.25 \nrows=1000 width=238) (actual time=495.703..503.999 rows=16292 loops=1)\"\n\"Total runtime: 1036.775 ms\"\n\nIts over 10 times faster than using xmlagg.\n\n-- \nJulius Tuskenis\nProgramavimo skyriaus vadovas\nUAB nSoft\nmob. +37068233050\n\n",
"msg_date": "Mon, 20 Jun 2011 12:03:21 +0300",
"msg_from": "Julius Tuskenis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: generating a large XML document"
}
] |
[
{
"msg_contents": "Load testing of postgresql 8.4 for OLTP application \nsuitability showed that throughput of the \ndatabase significantly degraded over time from thousands of write \ntransactions per second to almost zero. Write transactions are in given\n\ncase insert/update/delete database transactions. The load driver used \nfor testing the database executed SQL queries in parallel threads and \nused prepared statement and connection pooling. Postgres performance \ndegraded in a couple of minutes after the first run of the test, and\nthe\n problem was reproducible with only 2 parallel client threads. \nSubsequent test executions showed degraded throughput since the \nbeginning. The degradation has been detected only in case of write \ntransactions - select transactions were not affected. After some time\nor\n after server restart the problem is reproducible - test achieves high \nthroughput and then degrades again. Linux top does not show any\npostgres\n processes performing any significant work, CPU usage during the test \nafter degradation is <1%, io waits are also normal.\n\nMachine used for the test is:\nRed Hat Enterprise Linux AS release\n4 (Nahant Update 6)\n8 CPU @ 2GHz\n16GB RAM\nWAL and data are on\nseparate SSD drives \n\n\nServer is initially configured as dedicated OLTP transaction\nprocessing:\n\nOptions changed from default:\nmax_connections =\n150\nshared_buffers = 4GB\nwal_buffers = 16MB\ncheckpoint_segments\n= 80\nmaintenance_work_mem = 2GB\n\n\nModified kernel params:\nkernel.shmmax =\n8932986880\nkernel.shmall = 2180905\nkernel.sem = 500 64000 200\n256\n\n\n \n\n\nDisabling and tuning autovacuum did not give any results. \n\nAny suggestions?\n\n \n\n\n----------------------------\nTäna teleka ette ei jõua? Pane film salvestama!\nminuTV.ee\nwww.minutv.ee\n\n\nLoad testing of postgresql 8.4 for OLTP application \nsuitability showed that throughput of the \ndatabase significantly degraded over time from thousands of write \ntransactions per second to almost zero. Write transactions are in given\n\ncase insert/update/delete database transactions. The load driver used \nfor testing the database executed SQL queries in parallel threads and \nused prepared statement and connection pooling. Postgres performance \ndegraded in a couple of minutes after the first run of the test, and\nthe\n problem was reproducible with only 2 parallel client threads. \nSubsequent test executions showed degraded throughput since the \nbeginning. The degradation has been detected only in case of write \ntransactions - select transactions were not affected. After some time\nor\n after server restart the problem is reproducible - test achieves high \nthroughput and then degrades again. Linux top does not show any\npostgres\n processes performing any significant work, CPU usage during the test \nafter degradation is <1%, io waits are also normal.\nMachine used for the test is:Red Hat Enterprise Linux AS release\n4 (Nahant Update 6)8 CPU @ 2GHz16GB RAMWAL and data are on\nseparate SSD drives \n\nServer is initially configured as dedicated OLTP transaction\nprocessing:\nOptions changed from default:max_connections =\n150shared_buffers = 4GBwal_buffers = 16MBcheckpoint_segments\n= 80maintenance_work_mem = 2GB\n\nModified kernel params:kernel.shmmax =\n8932986880kernel.shmall = 2180905kernel.sem = 500 64000 200\n256\n\n\nDisabling and tuning autovacuum did not give any results. \nAny suggestions?\n\n\n\n----------------------------\nTäna teleka ette ei jõua? Pane film salvestama!\nminuTV.ee",
"msg_date": "Fri, 17 Jun 2011 15:48:47 +0300",
"msg_from": "Kabu Taah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Degrading PostgreSQL 8.4 write performance"
},
{
"msg_contents": "On Fri, Jun 17, 2011 at 7:48 AM, Kabu Taah <[email protected]> wrote:\n> Load testing of postgresql 8.4 for OLTP application suitability showed that\n> throughput of the database significantly degraded over time from thousands\n> of write transactions per second to almost zero. Write transactions are in\n> given case insert/update/delete database transactions. The load driver used\n> for testing the database executed SQL queries in parallel threads and used\n> prepared statement and connection pooling. Postgres performance degraded in\n> a couple of minutes after the first run of the test, and the problem was\n> reproducible with only 2 parallel client threads. Subsequent test executions\n> showed degraded throughput since the beginning. The degradation has been\n> detected only in case of write transactions - select transactions were not\n> affected. After some time or after server restart the problem is\n> reproducible - test achieves high throughput and then degrades again. Linux\n> top does not show any postgres processes performing any significant work,\n> CPU usage during the test after degradation is <1%, io waits are also\n> normal.\n\nThere are a ton of potential causes of this. The problem could be in\nyour code, the database driver, etc. The first step is to try and\nisolate a query that is not running properly and to benchmark it with\nexplain analyze. Being able to reproduce the problem in pgbench\nwould explain a lot as well.\n\nmerlin\n",
"msg_date": "Fri, 17 Jun 2011 09:08:28 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Degrading PostgreSQL 8.4 write performance"
},
{
"msg_contents": "On 06/17/2011 08:48 AM, Kabu Taah wrote:\n>\n> Load testing of postgresql 8.4 for OLTP application suitability showed \n> that throughput of the database significantly degraded over time from \n> thousands of write transactions per second to almost zero...Postgres \n> performance degraded in a couple of minutes after the first run of the \n> test, and the problem was reproducible with only 2 parallel client \n> threads.\n>\n\nWhen you write with PostgreSQL, things that are modified (\"dirtied\") in \nits cache are written out to the operating system write cache. \nEventually, that data gets written by the OS; sometimes it takes care of \nit on its own, in others the periodic database checkpoints (at least \nevery 5 minutes) does it.\n\nIt's possible to get a false idea that thousands of transactions per \nsecond is possible for a few minutes when benchmarking something, \nbecause of how write caches work. The first few thousand transactions \nare going to fill up the following caches:\n\n-Space for dirty data in shared_buffers\n-Operating system write cache space\n-Non-volatile ache on any RAID controller being used\n-Non-volatile cache on any drives you have (some SSDs have these)\n\nOnce all three of those are full, you are seeing the true write \nthroughput of the server. And it's not unusual for that to be 1/10 or \nless of the rate you saw when all the caches were empty, and writes to \ndisk weren't actually happening; they were just queued up.\n\nYou can watch Linux's cache fill up like this:\n\nwatch cat /proc/meminfo\n\nKeep your eye on the \"Dirty:\" line. It's going to rise for a while, and \nI'll bet your server performance dives once that reaches 10% of the \ntotal RAM in the server.\n\nAlso, turn on \"log_checkpoint\" in the server configuration. You'll also \ndiscover there's a drop in performance that begins the minute you see \none of those start. Performance when a checkpoint is happening is true \nserver performance; sometimes you get a burst that's much higher outside \nof that, but you can't count on that.\n\nThe RedHat 4 kernel is so old at this point, I'm not even sure exactly \nhow to tune it for SSD's. You really should be running RedHat 6 if you \nwant to take advantage of disks that fast.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 17 Jun 2011 13:54:10 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Degrading PostgreSQL 8.4 write performance"
},
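The parameter is spelled log_checkpoints in postgresql.conf; checkpoint and background-writer activity can also be sampled from SQL while the test runs, for example:

SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend, buffers_alloc
FROM pg_stat_bgwriter;

Comparing two snapshots of this view, before and after a test run, shows how much of the write load was absorbed by checkpoints versus backends writing buffers themselves.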
{
"msg_contents": "\n> Load testing of postgresql 8.4 for OLTP application\n> suitability showed that throughput of the\n> database significantly degraded over time from thousands of write\n> transactions per second to almost zero.\n\nA typical postgres benchmarking gotcha is :\n\n- you start with empty tables\n- the benchmark fills them\n- query plans which were prepared based on stats of empty (or very small) \ntables become totally obsolete when the table sizes grow\n- therefore everything becomes very slow as the tables grow\n\nSo you should disconnect/reconnect or issue a DISCARD ALL periodically on \neach connection, and of course periodically do some VACUUM ANALYZE (or \nhave autovacuum do that for you).\n",
"msg_date": "Mon, 20 Jun 2011 01:05:32 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Degrading PostgreSQL 8.4 write performance"
}
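In practice that periodic reset can be as simple as the following, issued on each pooled connection (where exactly to hook it in depends on the pooler and the load driver):

DISCARD ALL;  -- drops cached plans, prepared statements and other session state
ANALYZE;      -- refreshes planner statistics in case autovacuum has not caught up yet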
] |
[
{
"msg_contents": "Greetings,\n\nI have been thinking a lot about pgsql performance when it is dealing\nwith tables with lots of rows on one table (several millions, maybe\nthousands of millions). Say, the Large Object use case:\n\none table has large objects (have a pointer to one object).\nThe large object table stores the large object in 2000 bytes chunks\n(iirc), so, if we have something like 1TB of data stored in large\nobjects, the large objects table would have something like 550M rows,\nif we get to 8TB, we will have 4400M rows (or so).\n\nI have read at several places that huge tables should be partitioned,\nto improve performance.... now, my first question comes: does the\nlarge objects system automatically partitions itself? if no: will\nLarge Objects system performance degrade as we add more data? (I guess\nit would).\n\nNow... I can't fully understand this: why does the performance\nactually goes lower? I mean, when we do partitioning, we take a\ncertain parameter to \"divide\" the data,and then use the same parameter\nto issue the request against the correct table... shouldn't the DB\nactually do something similar with the indexes? I mean, I have always\nthought about the indexes, well, exactly like that: approximation\nsearch, I know I'm looking for, say, a date that is less than\n2010-03-02, and the system should just position itself on the index\naround that date, and scan from that point backward... as far as my\nunderstanding goes, the partitioning only adds like this \"auxiliary\"\nindex, making the system, for example, go to a certain table if the\nquery goes toward one particular year (assuming we partitioned by\nyear), what if the DB actually implemented something like an Index for\nthe Index (so that the first search on huge tables scan on an smaller\nindex that points to a position on the larger index, thus avoiding the\nscan of the large index initially).\n\nWell.... I'm writing all of this half-sleep now, so... I'll re-read it\ntomorrow... in the meantime, just ignore anything that doesn't make a\nlot of sense :) .\n\nThanks!\n\nIldefonso Camargo\n",
"msg_date": "Sat, 18 Jun 2011 23:36:02 -0430",
"msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large rows number, and large objects"
},
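For what it is worth, how large the single pg_largeobject catalog has grown can be checked directly (a generic query, not from the thread; reading the table itself may require superuser rights on newer releases):

SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));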
{
"msg_contents": "On Sat, Jun 18, 2011 at 9:06 PM, Jose Ildefonso Camargo Tolosa <\[email protected]> wrote:\n\n> Greetings,\n>\n> I have been thinking a lot about pgsql performance when it is dealing\n> with tables with lots of rows on one table (several millions, maybe\n> thousands of millions). Say, the Large Object use case:\n>\n> one table has large objects (have a pointer to one object).\n> The large object table stores the large object in 2000 bytes chunks\n> (iirc), so, if we have something like 1TB of data stored in large\n> objects, the large objects table would have something like 550M rows,\n> if we get to 8TB, we will have 4400M rows (or so).\n>\n> I have read at several places that huge tables should be partitioned,\n> to improve performance.... now, my first question comes: does the\n> large objects system automatically partitions itself? if no: will\n> Large Objects system performance degrade as we add more data? (I guess\n> it would).\n>\n> Now... I can't fully understand this: why does the performance\n> actually goes lower? I mean, when we do partitioning, we take a\n> certain parameter to \"divide\" the data,and then use the same parameter\n> to issue the request against the correct table... shouldn't the DB\n> actually do something similar with the indexes? I mean, I have always\n> thought about the indexes, well, exactly like that: approximation\n> search, I know I'm looking for, say, a date that is less than\n> 2010-03-02, and the system should just position itself on the index\n> around that date, and scan from that point backward... as far as my\n> understanding goes, the partitioning only adds like this \"auxiliary\"\n> index, making the system, for example, go to a certain table if the\n> query goes toward one particular year (assuming we partitioned by\n> year), what if the DB actually implemented something like an Index for\n> the Index (so that the first search on huge tables scan on an smaller\n> index that points to a position on the larger index, thus avoiding the\n> scan of the large index initially).\n>\n> Well.... I'm writing all of this half-sleep now, so... I'll re-read it\n> tomorrow... in the meantime, just ignore anything that doesn't make a\n> lot of sense :) .\n>\n\nPartitioning helps in a number of ways. First, if running a query which\nmust scan an entire table, if the table is very large, that scan will be\nexpensive. Partitioning can allow the query planner to do a sequential scan\nover just some of the data and skip the rest (or process it via some other\nmanner, such as an index lookup). Also, the larger an index is, the more\nexpensive the index is to maintain. Inserts and lookups will both take\nlonger. Partitioning will give you n indexes, each with m/n entries\n(assuming fairly even distribution of data among partitions), so any given\nindex will be smaller, which means inserts into a partition will potentially\nbe much faster. Since large tables often also have very high insert rates,\nthis can be a big win. You can also gain better resource utilization by\nmoving less frequently used partitions onto slower media (via a tablespace),\nfreeing up space on your fastest i/o devices for the most important data. A\nlot of partitioning tends to happen by time, and the most frequently run\nqueries are often on the partitions containing the most recent data, so it\noften can be very beneficial to keep only the most recent partitions on\nfastest storage. Then there is caching. Indexes and tables are cached by\npage. 
Without clustering a table on a particular index, the contents of a single page may be quite arbitrary. Even with clustering, depending upon the usage patterns of the table in question, it is entirely possible that any given page may only have some fairly small percentage of highly relevant data if the table is very large. By partitioning, you can (potentially) ensure that any given page in cache will have a higher density of highly relevant entries, so you'll get better efficiency out of the caching layers. And with smaller indexes, it is less likely that loading an index into shared buffers will push some other useful chunk of data out of the cache.\n\nAs for the large object tables, I'm not sure about the internals. Assuming that each table gets its own table for large objects, partitioning the main table will have the effect of partitioning the large object table, too - keeping index maintenance more reasonable and ensuring that lookups are as fast as possible. There's probably a debate to be had on the benefit of storing very large numbers of large objects in the db, too (as opposed to keeping references to them in the db and actually accessing them via some other mechanism). Both product requirements and performance are significant factors in that discussion.\n\nAs for your suggestion that the db maintain an index on an index, how would the database do so in an intelligent manner? It would have to maintain such indexes on every index and guess as to which values to use as boundaries for each bucket. Partitioning solves the same problem, but allows you to direct the database such that it only does extra work where the dba, who is much more knowledgeable about the structure of the data and how it will be used than the database itself, tells it to. And the dba gets to tell the db what buckets to use when partitioning the database - via the check constraints on the partitions. Without that, the db would have to guess as to appropriate bucket sizes and the distribution of values within them.\n\nI'm sure there are reasons beyond even those I've listed here. I'm not one of the postgresql devs, so my understanding of how it benefits from partitioning is shallow, at best. If the usage pattern of your very large table is such that every query tends to use all of the table, then I'm not sure partitioning really offers much gain. The benefits of partitioning are, at least in part, predicated on only a subset of the data being useful to any one query, and the benefits get that much stronger if some portion of the data is rarely used by any query.",
"msg_date": "Sun, 19 Jun 2011 04:37:59 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large rows number, and large objects"
},
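As a concrete illustration of the check-constraint approach described above, a pre-declarative (inheritance based) partitioning of a hypothetical chunk table could look like this; all names and range boundaries here are invented for the example:

CREATE TABLE chunks (
    file_id  bigint  NOT NULL,
    chunk_no integer NOT NULL,
    data     bytea
);

-- one child table per range of file_id; the CHECK constraint is what
-- lets the planner exclude partitions that cannot match a query
CREATE TABLE chunks_p0 (CHECK (file_id >= 0       AND file_id < 1000000)) INHERITS (chunks);
CREATE TABLE chunks_p1 (CHECK (file_id >= 1000000 AND file_id < 2000000)) INHERITS (chunks);

CREATE INDEX chunks_p0_file_idx ON chunks_p0 (file_id, chunk_no);
CREATE INDEX chunks_p1_file_idx ON chunks_p1 (file_id, chunk_no);

-- rows are inserted directly into the matching child (or routed by a
-- trigger on the parent); queries against the parent see all children
SET constraint_exclusion = partition;
SELECT data FROM chunks WHERE file_id = 1500123 ORDER BY chunk_no;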
{
"msg_contents": "On 6/19/11 4:37 AM, Samuel Gendler wrote:\n> On Sat, Jun 18, 2011 at 9:06 PM, Jose Ildefonso Camargo Tolosa <[email protected] <mailto:[email protected]>> wrote:\n>\n> Greetings,\n>\n> I have been thinking a lot about pgsql performance when it is dealing\n> with tables with lots of rows on one table (several millions, maybe\n> thousands of millions). Say, the Large Object use case:\n>\n> one table has large objects (have a pointer to one object).\n> The large object table stores the large object in 2000 bytes chunks\n> (iirc), so, if we have something like 1TB of data stored in large\n> objects, the large objects table would have something like 550M rows,\n> if we get to 8TB, we will have 4400M rows (or so).\n>\n> I have read at several places that huge tables should be partitioned,\n> to improve performance.... now, my first question comes: does the\n> large objects system automatically partitions itself? if no: will\n> Large Objects system performance degrade as we add more data? (I guess\n> it would).\n>\nYou should consider \"partitioning\" your data in a different way: Separate the relational/searchable data from the bulk data that is merely being stored.\n\nRelational databases are just that: relational. The thing they do well is to store relationships between various objects, and they are very good at finding objects using relational queries and logical operators.\n\nBut when it comes to storing bulk data, a relational database is no better than a file system.\n\nIn our system, each \"object\" is represented by a big text object of a few kilobytes. Searching that text file is essential useless -- the only reason it's there is for visualization and to pass on to other applications. So it's separated out into its own table, which only has the text record and a primary key.\n\nWe then use other tables to hold extracted fields and computed data about the primary object, and the relationships between the objects. That means we've effectively \"partitioned\" our data into searchable relational data and non-searchable bulk data. The result is that we have around 50 GB of bulk data that's never searched, and about 1GB of relational, searchable data in a half-dozen other tables.\n\nWith this approach, there's no need for table partitioning, and full table scans are quite reasonable.\n\nCraig\n\n\n\n\n\n\n On 6/19/11 4:37 AM, Samuel Gendler wrote:\n\nOn Sat, Jun 18, 2011 at 9:06 PM, Jose\n Ildefonso Camargo Tolosa <[email protected]>\n wrote:\n\n Greetings,\n\n I have been thinking a lot about pgsql performance when it is\n dealing\n with tables with lots of rows on one table (several millions,\n maybe\n thousands of millions). Say, the Large Object use case:\n\n one table has large objects (have a pointer to one object).\n The large object table stores the large object in 2000 bytes\n chunks\n (iirc), so, if we have something like 1TB of data stored in\n large\n objects, the large objects table would have something like\n 550M rows,\n if we get to 8TB, we will have 4400M rows (or so).\n\n I have read at several places that huge tables should be\n partitioned,\n to improve performance.... now, my first question comes: does\n the\n large objects system automatically partitions itself? 
if no:\n will\n Large Objects system performance degrade as we add more data?\n (I guess\n it would).\n\n\n\n You should consider \"partitioning\" your data in a different way:\n Separate the relational/searchable data from the bulk data that is\n merely being stored.\n\n Relational databases are just that: relational. The thing they do\n well is to store relationships between various objects, and they are\n very good at finding objects using relational queries and logical\n operators.\n\n But when it comes to storing bulk data, a relational database is no\n better than a file system.\n\n In our system, each \"object\" is represented by a big text object of\n a few kilobytes. Searching that text file is essential useless --\n the only reason it's there is for visualization and to pass on to\n other applications. So it's separated out into its own table, which\n only has the text record and a primary key.\n\n We then use other tables to hold extracted fields and computed data\n about the primary object, and the relationships between the\n objects. That means we've effectively \"partitioned\" our data into\n searchable relational data and non-searchable bulk data. The result\n is that we have around 50 GB of bulk data that's never searched, and\n about 1GB of relational, searchable data in a half-dozen other\n tables.\n\n With this approach, there's no need for table partitioning, and full\n table scans are quite reasonable.\n\n Craig",
"msg_date": "Sun, 19 Jun 2011 08:49:28 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large rows number, and large objects"
},
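A rough sketch of the split Craig describes -- searchable metadata in small relational tables, bulk text fetched only by key. All names here are invented for illustration, not taken from his system:

    -- Bulk data: fetched by key, never searched.
    CREATE TABLE doc_blob (
        doc_id   bigint PRIMARY KEY,
        raw_text text NOT NULL
    );

    -- Searchable, relational metadata extracted from the bulk object.
    CREATE TABLE doc_meta (
        doc_id    bigint PRIMARY KEY REFERENCES doc_blob,
        title     text,
        author    text,
        published date
    );
    CREATE INDEX doc_meta_author_idx ON doc_meta (author);

    -- Queries run against the small metadata table; the blob is only
    -- fetched once the interesting doc_id values are known.
    SELECT b.raw_text
      FROM doc_meta m
      JOIN doc_blob b USING (doc_id)
     WHERE m.author = 'someone';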
{
"msg_contents": "Hi!\n\nThanks (you both, Samuel and Craig) for your answers!\n\nOn Sun, Jun 19, 2011 at 11:19 AM, Craig James\n<[email protected]> wrote:\n> On 6/19/11 4:37 AM, Samuel Gendler wrote:\n>\n> On Sat, Jun 18, 2011 at 9:06 PM, Jose Ildefonso Camargo Tolosa\n> <[email protected]> wrote:\n>>\n>> Greetings,\n>>\n>> I have been thinking a lot about pgsql performance when it is dealing\n>> with tables with lots of rows on one table (several millions, maybe\n>> thousands of millions). Say, the Large Object use case:\n>>\n>> one table has large objects (have a pointer to one object).\n>> The large object table stores the large object in 2000 bytes chunks\n>> (iirc), so, if we have something like 1TB of data stored in large\n>> objects, the large objects table would have something like 550M rows,\n>> if we get to 8TB, we will have 4400M rows (or so).\n>>\n>> I have read at several places that huge tables should be partitioned,\n>> to improve performance.... now, my first question comes: does the\n>> large objects system automatically partitions itself? if no: will\n>> Large Objects system performance degrade as we add more data? (I guess\n>> it would).\n>\n> You should consider \"partitioning\" your data in a different way: Separate\n> the relational/searchable data from the bulk data that is merely being\n> stored.\n>\n> Relational databases are just that: relational. The thing they do well is\n> to store relationships between various objects, and they are very good at\n> finding objects using relational queries and logical operators.\n>\n> But when it comes to storing bulk data, a relational database is no better\n> than a file system.\n>\n> In our system, each \"object\" is represented by a big text object of a few\n> kilobytes. Searching that text file is essential useless -- the only reason\n> it's there is for visualization and to pass on to other applications. So\n> it's separated out into its own table, which only has the text record and a\n> primary key.\n\nWell, my original schema does exactly that (I mimic the LO schema):\n\nfiles (searchable): id, name, size, hash, mime_type, number_chunks\nfiles_chunks : id, file_id, hash, chunk_number, data (bytea)\n\nSo, my bulk data is on files_chunks table, but due that data is\nrestricted (by me) to 2000 bytes, the total number of rows on the\nfiles_chunks table can get *huge*.\n\nSo, system would search the files table, and then, search the\nfiles_chunks table (to get each of the chunks, and, maybe, send them\nout to the web client).\n\nSo, with a prospect of ~4500M rows for that table, I really thought it\ncould be a good idea to partition files_chunks table. Due that I'm\nthinking on relatively small files (<100MB), table partitioning should\ndo great here, because I could manage to make all of the chunks for a\ntable to be contained on the same table. Now, even if the system\nwere to get larger files (>5GB), this approach should still work.\n\nThe original question was about Large Objects, and partitioning...\nsee, according to documentation:\nhttp://www.postgresql.org/docs/9.0/static/lo-intro.html\n\n\"All large objects are placed in a single system table called pg_largeobject.\"\n\nSo, the question is, if I were to store 8TB worth of data into large\nobjects system, it would actually make the pg_largeobject table slow,\nunless it was automatically partitioned.\n\nThanks for taking the time to discuss this matter with me!\n\nSincerely,\n\nIldefonso Camargo\n",
"msg_date": "Sun, 19 Jun 2011 21:49:24 -0430",
"msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large rows number, and large objects"
},
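A hypothetical DDL rendering of the files / files_chunks layout outlined in the message above (only the column lists come from the message; the types, keys, and the file_id value 42 in the example are guesses):

    CREATE TABLE files (
        id            bigserial PRIMARY KEY,
        name          text NOT NULL,
        size          bigint NOT NULL,
        hash          text,
        mime_type     text,
        number_chunks integer NOT NULL
    );

    CREATE TABLE files_chunks (
        id           bigserial PRIMARY KEY,
        file_id      bigint NOT NULL REFERENCES files(id),
        hash         text,
        chunk_number integer NOT NULL,
        data         bytea NOT NULL        -- capped at ~2000 bytes per chunk
    );

    -- Reassembling a file streams the chunks in order instead of
    -- materializing the whole object at once.
    SELECT data
      FROM files_chunks
     WHERE file_id = 42
     ORDER BY chunk_number;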
{
"msg_contents": "On Sun, Jun 19, 2011 at 10:19 PM, Jose Ildefonso Camargo Tolosa\n<[email protected]> wrote:\n> So, the question is, if I were to store 8TB worth of data into large\n> objects system, it would actually make the pg_largeobject table slow,\n> unless it was automatically partitioned.\n\nI think it's a bit of an oversimplification to say that large,\nunpartitioned tables are automatically going to be slow. Suppose you\nhad 100 tables that were each 80GB instead of one table that is 8TB.\nThe index lookups would be a bit faster on the smaller tables, but it\nwould take you some non-zero amount of time to figure out which index\nto read in the first place. It's not clear that you are really\ngaining all that much.\n\nMany of the advantages of partitioning have to do with maintenance\ntasks. For example, if you gather data on a daily basis, it's faster\nto drop the partition that contains Thursday's data than it is to do a\nDELETE that finds the rows and deletes them one at a time. And VACUUM\ncan be a problem on very large tables as well, because only one VACUUM\ncan run on a table at any given time. If the frequency with which the\ntable needs to be vacuumed is less than the time it takes for VACUUM\nto complete, then you've got a problem.\n\nBut I think that if we want to optimize pg_largeobject, we'd probably\ngain a lot more by switching to a different storage format than we\ncould ever gain by partitioning the table. For example, we might\ndecide that any object larger than 16MB should be stored in its own\nfile. Even somewhat smaller objects would likely benefit from being\nstored in larger chunks - say, a bunch of 64kB chunks, with any\noverage stored in the 2kB chunks we use now. While this might be an\ninteresting project, it's probably not going to be anyone's top\npriority, because it would be a lot of work for the amount of benefit\nyou'd get. There's an easy workaround: store the files in the\nfilesystem, and a path to those files in the database.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 19 Jul 2011 16:27:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large rows number, and large objects"
},
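A tiny, hypothetical illustration of the maintenance point above, assuming a log table partitioned by day:

    -- Removing one day's data from a partitioned table: effectively instant,
    -- no dead tuples, no VACUUM debt.
    DROP TABLE log_2011_06_16;

    -- The equivalent on one big table: row-by-row deletion that must be
    -- found via an index or a scan, then cleaned up by VACUUM later.
    DELETE FROM log
     WHERE logged_at >= '2011-06-16' AND logged_at < '2011-06-17';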
{
"msg_contents": "On Tue, Jul 19, 2011 at 3:57 PM, Robert Haas <[email protected]> wrote:\n\n> On Sun, Jun 19, 2011 at 10:19 PM, Jose Ildefonso Camargo Tolosa\n> <[email protected]> wrote:\n> > So, the question is, if I were to store 8TB worth of data into large\n> > objects system, it would actually make the pg_largeobject table slow,\n> > unless it was automatically partitioned.\n>\n> I think it's a bit of an oversimplification to say that large,\n> unpartitioned tables are automatically going to be slow. Suppose you\n> had 100 tables that were each 80GB instead of one table that is 8TB.\n> The index lookups would be a bit faster on the smaller tables, but it\n> would take you some non-zero amount of time to figure out which index\n> to read in the first place. It's not clear that you are really\n> gaining all that much.\n>\n\nCertainly.... but it is still very blurry to me on *when* it is better to\npartition than not.\n\n\n>\n> Many of the advantages of partitioning have to do with maintenance\n> tasks. For example, if you gather data on a daily basis, it's faster\n> to drop the partition that contains Thursday's data than it is to do a\n> DELETE that finds the rows and deletes them one at a time. And VACUUM\n> can be a problem on very large tables as well, because only one VACUUM\n> can run on a table at any given time. If the frequency with which the\n> table needs to be vacuumed is less than the time it takes for VACUUM\n> to complete, then you've got a problem.\n>\n\nAnd.... pg_largeobject table doesn't get vacuumed? I mean, isn't that table\njust as any other table?\n\n\n>\n> But I think that if we want to optimize pg_largeobject, we'd probably\n> gain a lot more by switching to a different storage format than we\n> could ever gain by partitioning the table. For example, we might\n> decide that any object larger than 16MB should be stored in its own\n> file. Even somewhat smaller objects would likely benefit from being\n> stored in larger chunks - say, a bunch of 64kB chunks, with any\n> overage stored in the 2kB chunks we use now. While this might be an\n> interesting project, it's probably not going to be anyone's top\n> priority, because it would be a lot of work for the amount of benefit\n> you'd get. There's an easy workaround: store the files in the\n> filesystem, and a path to those files in the database.\n>\n\nOk, one reason for storing a file *in* the DB is to be able to do PITR of a\nwrongly deleted files (or overwritten, and that kind of stuff), on the\nfilesystem level you would need a versioning filesystem (and I don't, yet,\nknow any that is stable in the Linux world).\n\nAlso, you can use streaming replication and at the same time you stream your\ndata, your files are also streamed to a secondary server (yes, on the\nFS-level you could use drbd or similar).\n\nIldefonso.\n\nOn Tue, Jul 19, 2011 at 3:57 PM, Robert Haas <[email protected]> wrote:\n\n\nOn Sun, Jun 19, 2011 at 10:19 PM, Jose Ildefonso Camargo Tolosa\n<[email protected]> wrote:\n> So, the question is, if I were to store 8TB worth of data into large\n> objects system, it would actually make the pg_largeobject table slow,\n> unless it was automatically partitioned.\n\nI think it's a bit of an oversimplification to say that large,\nunpartitioned tables are automatically going to be slow. 
Suppose you\nhad 100 tables that were each 80GB instead of one table that is 8TB.\nThe index lookups would be a bit faster on the smaller tables, but it\nwould take you some non-zero amount of time to figure out which index\nto read in the first place. It's not clear that you are really\ngaining all that much.Certainly.... but it is still very blurry to me on *when* it is better to partition than not. \n\nMany of the advantages of partitioning have to do with maintenance\ntasks. For example, if you gather data on a daily basis, it's faster\nto drop the partition that contains Thursday's data than it is to do a\nDELETE that finds the rows and deletes them one at a time. And VACUUM\ncan be a problem on very large tables as well, because only one VACUUM\ncan run on a table at any given time. If the frequency with which the\ntable needs to be vacuumed is less than the time it takes for VACUUM\nto complete, then you've got a problem.And.... pg_largeobject table doesn't get vacuumed? I mean, isn't that table just as any other table? \n\nBut I think that if we want to optimize pg_largeobject, we'd probably\ngain a lot more by switching to a different storage format than we\ncould ever gain by partitioning the table. For example, we might\ndecide that any object larger than 16MB should be stored in its own\nfile. Even somewhat smaller objects would likely benefit from being\nstored in larger chunks - say, a bunch of 64kB chunks, with any\noverage stored in the 2kB chunks we use now. While this might be an\ninteresting project, it's probably not going to be anyone's top\npriority, because it would be a lot of work for the amount of benefit\nyou'd get. There's an easy workaround: store the files in the\nfilesystem, and a path to those files in the database.Ok, one reason for storing a file *in* the DB is to be able to do PITR of a wrongly deleted files (or overwritten, and that kind of stuff), on the filesystem level you would need a versioning filesystem (and I don't, yet, know any that is stable in the Linux world).\nAlso, you can use streaming replication and at the same time you stream your data, your files are also streamed to a secondary server (yes, on the FS-level you could use drbd or similar).Ildefonso.",
"msg_date": "Wed, 20 Jul 2011 11:27:29 -0430",
"msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large rows number, and large objects"
},
{
"msg_contents": "On Wed, Jul 20, 2011 at 11:57 AM, Jose Ildefonso Camargo Tolosa\n<[email protected]> wrote:\n> On Tue, Jul 19, 2011 at 3:57 PM, Robert Haas <[email protected]> wrote:\n>> On Sun, Jun 19, 2011 at 10:19 PM, Jose Ildefonso Camargo Tolosa\n>> <[email protected]> wrote:\n>> > So, the question is, if I were to store 8TB worth of data into large\n>> > objects system, it would actually make the pg_largeobject table slow,\n>> > unless it was automatically partitioned.\n>>\n>> I think it's a bit of an oversimplification to say that large,\n>> unpartitioned tables are automatically going to be slow. Suppose you\n>> had 100 tables that were each 80GB instead of one table that is 8TB.\n>> The index lookups would be a bit faster on the smaller tables, but it\n>> would take you some non-zero amount of time to figure out which index\n>> to read in the first place. It's not clear that you are really\n>> gaining all that much.\n>\n> Certainly.... but it is still very blurry to me on *when* it is better to\n> partition than not.\n\nI think that figuring that out is as much an art as it is a science.\nIt's better to partition when most of your queries are going to touch\nonly a single partition; when you are likely to want to remove\npartitions in their entirety; when VACUUM starts to have trouble\nkeeping up... but the reality is that in some cases you probably have\nto try it both ways and see which one works better.\n\n>> Many of the advantages of partitioning have to do with maintenance\n>> tasks. For example, if you gather data on a daily basis, it's faster\n>> to drop the partition that contains Thursday's data than it is to do a\n>> DELETE that finds the rows and deletes them one at a time. And VACUUM\n>> can be a problem on very large tables as well, because only one VACUUM\n>> can run on a table at any given time. If the frequency with which the\n>> table needs to be vacuumed is less than the time it takes for VACUUM\n>> to complete, then you've got a problem.\n>\n> And.... pg_largeobject table doesn't get vacuumed? I mean, isn't that table\n> just as any other table?\n\nYes, it is. So, I agree: putting 8TB of data in there is probably\ngoing to hurt.\n\n>> But I think that if we want to optimize pg_largeobject, we'd probably\n>> gain a lot more by switching to a different storage format than we\n>> could ever gain by partitioning the table. For example, we might\n>> decide that any object larger than 16MB should be stored in its own\n>> file. Even somewhat smaller objects would likely benefit from being\n>> stored in larger chunks - say, a bunch of 64kB chunks, with any\n>> overage stored in the 2kB chunks we use now. While this might be an\n>> interesting project, it's probably not going to be anyone's top\n>> priority, because it would be a lot of work for the amount of benefit\n>> you'd get. 
There's an easy workaround: store the files in the\n>> filesystem, and a path to those files in the database.\n>\n> Ok, one reason for storing a file *in* the DB is to be able to do PITR of a\n> wrongly deleted files (or overwritten, and that kind of stuff), on the\n> filesystem level you would need a versioning filesystem (and I don't, yet,\n> know any that is stable in the Linux world).\n>\n> Also, you can use streaming replication and at the same time you stream your\n> data, your files are also streamed to a secondary server (yes, on the\n> FS-level you could use drbd or similar).\n\nWell, those are good arguments for putting the functionality in the\ndatabase and making it all play nicely with write-ahead logging. But\nnobody's felt motivated to write the code yet, so...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 20 Jul 2011 15:30:25 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large rows number, and large objects"
},
{
"msg_contents": "W dniu 20.07.2011 17:57, Jose Ildefonso Camargo Tolosa pisze:\n\n[...]\n\n> Many of the advantages of partitioning have to do with maintenance\n> tasks. For example, if you gather data on a daily basis, it's faster\n> to drop the partition that contains Thursday's data than it is to do a\n> DELETE that finds the rows and deletes them one at a time. And VACUUM\n> can be a problem on very large tables as well, because only one VACUUM\n> can run on a table at any given time. If the frequency with which the\n> table needs to be vacuumed is less than the time it takes for VACUUM\n> to complete, then you've got a problem.\n>\n>\n> And.... pg_largeobject table doesn't get vacuumed? I mean, isn't that\n> table just as any other table?\n\nVacuum is a real problem on big pg_largeobject table. I have 1.6 TB \ndatabase mostly with large objects and vacuuming that table on fast SAN \ntakes about 4 hours:\n\n now | start | time | datname | \n current_query\n---------------------+---------------------+----------+------------+----------------------------------------------\n 2011-07-20 20:12:03 | 2011-07-20 16:21:20 | 03:50:43 | bigdb | \nautovacuum: VACUUM pg_catalog.pg_largeobject\n(1 row)\n\n\nLO generates a lot of dead tuples when object are adding:\n\n relname | n_dead_tup\n------------------+------------\n pg_largeobject | 246731\n\nAdding LO is very fast when table is vacuumed. But when there is a lot \nof dead tuples adding LO is very slow (50-100 times slower) and eats \n100% of CPU.\n\nIt looks that better way is writing object directly as a bytea on \nparitioned tables althought it's a bit slower than LO interface on a \nvacuumed table.\n\n\nRegards,\nAndrzej\n",
"msg_date": "Wed, 20 Jul 2011 21:33:53 +0200",
"msg_from": "Andrzej Nakonieczny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large rows number, and large objects"
},
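The figures quoted above look like they come from the standard statistics views; a sketch of equivalent monitoring queries, using the column names of 8.4-9.1 (pg_stat_activity.current_query was renamed in later releases):

    -- How long has the autovacuum of pg_largeobject been running?
    SELECT now(), query_start, now() - query_start AS runtime,
           datname, current_query
      FROM pg_stat_activity
     WHERE current_query LIKE 'autovacuum: VACUUM pg_catalog.pg_largeobject%';

    -- How many dead tuples has large-object churn produced so far?
    SELECT relname, n_dead_tup
      FROM pg_stat_all_tables
     WHERE relname = 'pg_largeobject';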
{
"msg_contents": "On Wed, Jul 20, 2011 at 3:03 PM, Andrzej Nakonieczny\n<[email protected]> wrote:\n> W dniu 20.07.2011 17:57, Jose Ildefonso Camargo Tolosa pisze:\n>\n> [...]\n>\n>> Many of the advantages of partitioning have to do with maintenance\n>> tasks. For example, if you gather data on a daily basis, it's faster\n>> to drop the partition that contains Thursday's data than it is to do a\n>> DELETE that finds the rows and deletes them one at a time. And VACUUM\n>> can be a problem on very large tables as well, because only one VACUUM\n>> can run on a table at any given time. If the frequency with which the\n>> table needs to be vacuumed is less than the time it takes for VACUUM\n>> to complete, then you've got a problem.\n>>\n>>\n>> And.... pg_largeobject table doesn't get vacuumed? I mean, isn't that\n>> table just as any other table?\n>\n> Vacuum is a real problem on big pg_largeobject table. I have 1.6 TB database\n> mostly with large objects and vacuuming that table on fast SAN takes about 4\n> hours:\n>\n> now | start | time | datname |\n> current_query\n> ---------------------+---------------------+----------+------------+----------------------------------------------\n> 2011-07-20 20:12:03 | 2011-07-20 16:21:20 | 03:50:43 | bigdb |\n> autovacuum: VACUUM pg_catalog.pg_largeobject\n> (1 row)\n>\n>\n> LO generates a lot of dead tuples when object are adding:\n>\n> relname | n_dead_tup\n> ------------------+------------\n> pg_largeobject | 246731\n>\n> Adding LO is very fast when table is vacuumed. But when there is a lot of\n> dead tuples adding LO is very slow (50-100 times slower) and eats 100% of\n> CPU.\n>\n> It looks that better way is writing object directly as a bytea on paritioned\n> tables althought it's a bit slower than LO interface on a vacuumed table.\n\nWell... yes... I thought about that, but now then, what happen when\nyou need to fetch the file from the DB? will that be fetched\ncompletely at once? I'm thinking about large files here, say\n(hypothetically speaking) you have 1GB files stored.... if the system\nwill fetch the whole 1GB at once, it would take 1GB RAM (or not?), and\nthat's what I wanted to avoid by dividing the file in 2kB chunks\n(bytea chunks, actually).... I don't quite remember where I got the\n2kB size from... but I decided I wanted to avoid using TOAST too.\n\n>\n>\n> Regards,\n> Andrzej\n>\n",
"msg_date": "Thu, 21 Jul 2011 20:16:40 -0430",
"msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Large rows number, and large objects"
}
] |
[
{
"msg_contents": "Hi,\n\nWe did a benchmark comparing a Key-Value-Pairs stored as EAV db schema\nversus hstore.\nThe results are promising in favor of hstore but there are some\nquestion which remain.\n\n1. Obviously the '@>' has to be used in order to let use the GiST index.\nWhy is the '->' operator not supported by GiST ('->' is actually\nmentioned in all examples of the doc.)?\n\n2. Currently the hstore elements are stored in order as they are\ncoming from the insert statement / constructor.\nWhy are the elements not ordered i.e. why is the hstore not cached in\nall hstore functions (like hstore_fetchval etc.)?\n\n3. In the source code 'hstore_io.c' one finds the following enigmatic\nnote: \"... very large hstore values can't be output. this could be\nfixed, but many other data types probably have the same issue.\"\nWhat is the max. length of a hstore (i.e. the max. length of the sum\nof all elements in text representation)?\n\n4. Last, I don't fully understand the following note in the hstore\ndoc. (http://www.postgresql.org/docs/current/interactive/hstore.html\n):\n> Notice that the old names are reversed from the convention\n> formerly followed by the core geometric data types!\n\nWhy names? Why not rather 'operators' or 'functions'?\nWhat does this \"reversed from the convention\" mean concretely?\n\nYours, Stefan\n",
"msg_date": "Sun, 19 Jun 2011 20:59:48 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "hstore - Implementation and performance issues around its operators"
},
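For context, a minimal sketch of the kind of setup these questions refer to; the table and data are hypothetical, and on pre-9.1 servers the hstore contrib module is installed from its SQL script rather than via CREATE EXTENSION:

    CREATE EXTENSION hstore;   -- on 8.4/9.0: run the contrib hstore SQL script instead

    CREATE TABLE items (
        id    serial PRIMARY KEY,
        attrs hstore
    );
    CREATE INDEX items_attrs_idx ON items USING gist (attrs);

    INSERT INTO items (attrs) VALUES ('color => "red", size => "L"');

    -- @> ("contains") is the containment comparison that the GiST (or GIN)
    -- index can answer ...
    SELECT id FROM items WHERE attrs @> 'color => "red"';

    -- ... while -> merely fetches a value and is not, by itself, an
    -- indexable predicate.
    SELECT attrs -> 'color' FROM items WHERE id = 1;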
{
"msg_contents": "On Sun, Jun 19, 2011 at 2:59 PM, Stefan Keller <[email protected]> wrote:\n> 1. Obviously the '@>' has to be used in order to let use the GiST index.\n> Why is the '->' operator not supported by GiST ('->' is actually\n> mentioned in all examples of the doc.)?\n\nBecause it's not a comparison operator.\n\n> 2. Currently the hstore elements are stored in order as they are\n> coming from the insert statement / constructor.\n> Why are the elements not ordered i.e. why is the hstore not cached in\n> all hstore functions (like hstore_fetchval etc.)?\n\nPutting the elements in order wouldn't really help, would it? I mean,\nyou'd need some kind of an index inside the hstore... which there\nisn't.\n\n> 3. In the source code 'hstore_io.c' one finds the following enigmatic\n> note: \"... very large hstore values can't be output. this could be\n> fixed, but many other data types probably have the same issue.\"\n> What is the max. length of a hstore (i.e. the max. length of the sum\n> of all elements in text representation)?\n\nI think that anything of half a gigabyte or more is at risk of falling\ndown there. But probably it's not smart to use such big hstores\nanyway.\n\n> 4. Last, I don't fully understand the following note in the hstore\n> doc. (http://www.postgresql.org/docs/current/interactive/hstore.html\n> ):\n>> Notice that the old names are reversed from the convention\n>> formerly followed by the core geometric data types!\n>\n> Why names? Why not rather 'operators' or 'functions'?\n\nIt's referring to the operator names.\n\n> What does this \"reversed from the convention\" mean concretely?\n\nThat comment could be a little more clear, but I think what it's\nsaying is that hstore's old @ is like the core geometic types old ~,\nand visca versa.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 19 Jul 2011 16:08:19 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hstore - Implementation and performance issues around\n\tits operators"
},
{
"msg_contents": "Hi Robert\n\nMany thanks for your answers.\n\n2011/7/19 Robert Haas <[email protected]>:\n> Putting the elements in order wouldn't really help, would it? I mean,\n> you'd need some kind of an index inside the hstore... which there\n> isn't.\n\nSorry for my inprecise question. In fact elements of a hstore are\nstored in order of (keylength,key) with the key comparison done\nbytewise (not locale-dependent). See e.g. function hstoreUniquePairs\nin http://doxygen.postgresql.org/ . This ordered property is being\nused by some hstore functions but not all - and I'm still wondering\nwhy.\n\nYours, Stefan\n\n\n2011/7/19 Robert Haas <[email protected]>:\n> On Sun, Jun 19, 2011 at 2:59 PM, Stefan Keller <[email protected]> wrote:\n>> 1. Obviously the '@>' has to be used in order to let use the GiST index.\n>> Why is the '->' operator not supported by GiST ('->' is actually\n>> mentioned in all examples of the doc.)?\n>\n> Because it's not a comparison operator.\n>\n>> 2. Currently the hstore elements are stored in order as they are\n>> coming from the insert statement / constructor.\n>> Why are the elements not ordered i.e. why is the hstore not cached in\n>> all hstore functions (like hstore_fetchval etc.)?\n>\n> Putting the elements in order wouldn't really help, would it? I mean,\n> you'd need some kind of an index inside the hstore... which there\n> isn't.\n>\n>> 3. In the source code 'hstore_io.c' one finds the following enigmatic\n>> note: \"... very large hstore values can't be output. this could be\n>> fixed, but many other data types probably have the same issue.\"\n>> What is the max. length of a hstore (i.e. the max. length of the sum\n>> of all elements in text representation)?\n>\n> I think that anything of half a gigabyte or more is at risk of falling\n> down there. But probably it's not smart to use such big hstores\n> anyway.\n>\n>> 4. Last, I don't fully understand the following note in the hstore\n>> doc. (http://www.postgresql.org/docs/current/interactive/hstore.html\n>> ):\n>>> Notice that the old names are reversed from the convention\n>>> formerly followed by the core geometric data types!\n>>\n>> Why names? Why not rather 'operators' or 'functions'?\n>\n> It's referring to the operator names.\n>\n>> What does this \"reversed from the convention\" mean concretely?\n>\n> That comment could be a little more clear, but I think what it's\n> saying is that hstore's old @ is like the core geometic types old ~,\n> and visca versa.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n",
"msg_date": "Tue, 19 Jul 2011 23:06:59 +0200",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hstore - Implementation and performance issues around\n\tits operators"
},
{
"msg_contents": "On Tue, Jul 19, 2011 at 5:06 PM, Stefan Keller <[email protected]> wrote:\n> 2011/7/19 Robert Haas <[email protected]>:\n>> Putting the elements in order wouldn't really help, would it? I mean,\n>> you'd need some kind of an index inside the hstore... which there\n>> isn't.\n>\n> Sorry for my inprecise question. In fact elements of a hstore are\n> stored in order of (keylength,key) with the key comparison done\n> bytewise (not locale-dependent). See e.g. function hstoreUniquePairs\n> in http://doxygen.postgresql.org/ . This ordered property is being\n> used by some hstore functions but not all - and I'm still wondering\n> why.\n\nNot sure, honestly. Is there some place where it would permit an\noptimization we're not currently doing?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 22 Jul 2011 13:08:04 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hstore - Implementation and performance issues around\n\tits operators"
}
] |
[
{
"msg_contents": "Hi all!\nPlease, just look at these query explanations and try to explain why\nplanner does so (PostgreSQL 8.4).\nThere is an index on table sms (number, timestamp).\n\nAnd three fast & simple queries:\n=# explain analyze select max(timestamp) from sms where number='5502712';\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=3.79..3.80 rows=1 width=0) (actual time=0.269..0.270\nrows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..3.79 rows=1 width=8) (actual\ntime=0.259..0.260 rows=1 loops=1)\n -> Index Scan Backward using sms_number_timestamp on sms\n(cost=0.00..5981.98 rows=1579 width=8) (actual time=0.253..0.253\nrows=1 loops=1)\n Index Cond: ((number)::text = '5502712'::text)\n Filter: (\"timestamp\" IS NOT NULL)\n Total runtime: 0.342 ms\n\n=# explain analyze select max(timestamp) from sms where number='5802693';\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=3.79..3.80 rows=1 width=0) (actual time=0.425..0.426\nrows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..3.79 rows=1 width=8) (actual\ntime=0.413..0.414 rows=1 loops=1)\n -> Index Scan Backward using sms_number_timestamp on sms\n(cost=0.00..5981.98 rows=1579 width=8) (actual time=0.409..0.409\nrows=1 loops=1)\n Index Cond: ((number)::text = '5802693'::text)\n Filter: (\"timestamp\" IS NOT NULL)\n Total runtime: 0.513 ms\n\n=# explain analyze select max(timestamp) from sms where number='5802693';\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=3.79..3.80 rows=1 width=0) (actual time=0.425..0.426\nrows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..3.79 rows=1 width=8) (actual\ntime=0.413..0.414 rows=1 loops=1)\n -> Index Scan Backward using sms_number_timestamp on sms\n(cost=0.00..5981.98 rows=1579 width=8) (actual time=0.409..0.409\nrows=1 loops=1)\n Index Cond: ((number)::text = '5802693'::text)\n Filter: (\"timestamp\" IS NOT NULL)\n Total runtime: 0.513 ms\n\n\n\nBut this does not work:\n# explain analyze select max(timestamp) from sms where number in\n('5502712','5802693','5801981');\n------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=15912.30..15912.31 rows=1 width=8) (actual\ntime=587.952..587.954 rows=1 loops=1)\n -> Bitmap Heap Scan on sms (cost=1413.02..15758.71 rows=61432\nwidth=8) (actual time=34.266..491.853 rows=59078 loops=1)\n Recheck Cond: ((number)::text = ANY\n('{5502712,5802693,5801981}'::text[]))\n -> Bitmap Index Scan on sms_number_timestamp\n(cost=0.00..1397.67 rows=61432 width=0) (actual time=30.778..30.778\nrows=59078 loops=1)\n Index Cond: ((number)::text = ANY\n('{5502712,5802693,5801981}'::text[]))\n Total runtime: 588.199 ms\n\nAnd this too:\n# explain analyze select max(timestamp) from sms where\nnumber='5502712' or number='5802693' or number='5801981';\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=16205.75..16205.76 rows=1 width=8) (actual\ntime=851.204..851.205 rows=1 loops=1)\n -> Bitmap Heap Scan on sms (cost=1473.31..16052.17 rows=61432\nwidth=8) (actual 
time=68.233..745.004 rows=59090 loops=1)\n Recheck Cond: (((number)::text = '5502712'::text) OR\n((number)::text = '5802693'::text) OR ((number)::text =\n'5801981'::text))\n -> BitmapOr (cost=1473.31..1473.31 rows=61592 width=0)\n(actual time=64.992..64.992 rows=0 loops=1)\n -> Bitmap Index Scan on sms_number_timestamp\n(cost=0.00..40.27 rows=1579 width=0) (actual time=0.588..0.588 rows=59\nloops=1)\n Index Cond: ((number)::text = '5502712'::text)\n -> Bitmap Index Scan on sms_number_timestamp\n(cost=0.00..40.27 rows=1579 width=0) (actual time=0.266..0.266 rows=59\nloops=1)\n Index Cond: ((number)::text = '5802693'::text)\n -> Bitmap Index Scan on sms_number_timestamp\n(cost=0.00..1346.69 rows=58434 width=0) (actual time=64.129..64.129\nrows=58972 loops=1)\n Index Cond: ((number)::text = '5801981'::text)\n Total runtime: 853.176 ms\n\n\nAccording to planner cost estimations - it has enough data to\nunderstand that it is better to aggregate maximum from three\nsubqueries. I suppose it's not a bug but not implemented feature -\nmaybe there is already something about it on roadmap?\n\n\n-- \nVladimir Kulev\nMobile: +7 (921) 555-44-22\n\nJabber: [email protected]\n\nSkype: lightoze\n",
"msg_date": "Mon, 20 Jun 2011 09:35:34 +0400",
"msg_from": "Vladimir Kulev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inoptimal query plan for max() and multicolumn index"
},
{
"msg_contents": "Vladimir Kulev <[email protected]> wrote:\n \n> # explain analyze select max(timestamp) from sms where number in\n> ('5502712','5802693','5801981');\n \n> According to planner cost estimations - it has enough data to\n> understand that it is better to aggregate maximum from three\n> subqueries. I suppose it's not a bug but not implemented feature\n \nYeah, you're hoping for an optimization which hasn't been\nimplemented.\n \nI expect you're hoping for a plan similar to what this gives you?:\n \nexplain analyze select greatest(\n (select max(timestamp) from sms where number = '5502712'),\n (select max(timestamp) from sms where number = '5802693'),\n (select max(timestamp) from sms where number = '5801981'));\n \n-Kevin\n",
"msg_date": "Mon, 20 Jun 2011 10:41:16 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inoptimal query plan for max() and multicolumn\n\t index"
},
{
"msg_contents": "Yes, exactly :)\n\nOn Mon, Jun 20, 2011 at 7:41 PM, Kevin Grittner\n<[email protected]> wrote:\n> I expect you're hoping for a plan similar to what this gives you?:\n>\n> explain analyze select greatest(\n> (select max(timestamp) from sms where number = '5502712'),\n> (select max(timestamp) from sms where number = '5802693'),\n> (select max(timestamp) from sms where number = '5801981'));\n\n-- \nVladimir Kulev\nMobile: +7 (921) 555-44-22\n\nJabber: [email protected]\n\nSkype: lightoze\n",
"msg_date": "Mon, 20 Jun 2011 20:08:00 +0400",
"msg_from": "Vladimir Kulev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inoptimal query plan for max() and multicolumn index"
},
{
"msg_contents": "Le 20/06/2011 18:08, Vladimir Kulev a écrit :\n>\n> Yes, exactly :)\n\nSQL Server does it but PG does not. Expect this for the future....\n\nSo try to rewrite the query like this :\n\nselect max(timestamp) from sms where number = '5502712'\nUNIUON ALL,\nselect max(timestamp) from sms where number = '5802693'\nUNION ALL\nselect max(timestamp) from sms where number = '5801981'\n\nTo see what happen to the query plan !\n\nA +\n\n>\n> On Mon, Jun 20, 2011 at 7:41 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> I expect you're hoping for a plan similar to what this gives you?:\n>>\n>> explain analyze select greatest(\n>> (select max(timestamp) from sms where number = '5502712'),\n>> (select max(timestamp) from sms where number = '5802693'),\n>> (select max(timestamp) from sms where number = '5801981'));\n>\n\n\n-- \nFrédéric BROUARD - expert SGBDR et SQL - MVP SQL Server - 06 11 86 40 66\nLe site sur le langage SQL et les SGBDR : http://sqlpro.developpez.com\nEnseignant Arts & Métiers PACA, ISEN Toulon et CESI/EXIA Aix en Provence\nAudit, conseil, expertise, formation, modélisation, tuning, optimisation\n*********************** http://www.sqlspot.com *************************\n\n",
"msg_date": "Tue, 21 Jun 2011 12:49:02 +0200",
"msg_from": "\"F. BROUARD / SQLpro\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inoptimal query plan for max() and multicolumn index"
},
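The UNION ALL rewrite above returns three rows (one maximum per number), so it still needs an outer aggregate to match the original single-row query; a sketch, using the table and values from this thread (the alias ts is introduced here only for the outer max):

    SELECT max(ts) FROM (
        SELECT max(timestamp) AS ts FROM sms WHERE number = '5502712'
        UNION ALL
        SELECT max(timestamp) FROM sms WHERE number = '5802693'
        UNION ALL
        SELECT max(timestamp) FROM sms WHERE number = '5801981'
    ) AS per_number;

Each branch can then use the (number, timestamp) index via the same backward index scan that made the single-number queries at the top of the thread fast.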
{
"msg_contents": "On 20/06/2011 07:35, Vladimir Kulev wrote:\n\n> But this does not work:\n> # explain analyze select max(timestamp) from sms where number in\n> ('5502712','5802693','5801981');\n\nTry to rewrite that query this way:\n\nexplain analyze select timestamp from sms where number in \n('5502712','5802693','5801981') order by timestamp desc limit 1;\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Fri, 15 Jul 2011 09:14:33 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inoptimal query plan for max() and multicolumn index"
}
] |
[
{
"msg_contents": "hai friend i have problem with performance database in postgre, how to know \nslowly query in postgre,\ni want kill or stop query to make postgre slowly, on the server status on the \nadmin pg, sometimes the query and how long the query runs do not appear\n\nThanks for solution\n\nhai friend i have problem with performance database in postgre, how to know slowly query in postgre,i want kill or stop query to make postgre slowly, on the server status on the admin pg, sometimes the query and how long the query runs do not appearThanks for solution",
"msg_date": "Mon, 20 Jun 2011 15:57:12 +0800 (SGT)",
"msg_from": "Didik Prasetyo <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to know slowly query in lock postgre"
},
{
"msg_contents": "Something like this[0] ?\n\n[0] http://archives.postgresql.org/pgsql-hackers/2007-04/msg01037.php\n\nOn Mon, Jun 20, 2011 at 9:57 AM, Didik Prasetyo\n<[email protected]> wrote:\n> hai friend i have problem with performance database in postgre, how to know\n> slowly query in postgre,\n> i want kill or stop query to make postgre slowly, on the server status on\n> the admin pg, sometimes the query and how long the query runs do not appear\n>\n> Thanks for solution\n>\n",
"msg_date": "Mon, 20 Jun 2011 11:45:00 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to know slowly query in lock postgre"
},
{
"msg_contents": "Dne 20.6.2011 09:57, Didik Prasetyo napsal(a):\n> hai friend i have problem with performance database in postgre, how to\n> know slowly query in postgre,\n> i want kill or stop query to make postgre slowly, on the server status\n> on the admin pg, sometimes the query and how long the query runs do not\n> appear\n> \n> Thanks for solution\n\nKilling long running queries probably is not a good idea (at least the\nusers usually think that).\n\nYou should try to identify the slow queries and optimize them first. The\n\"log_min_duration_statement\" can do that - the queries that take longer\nwill be written to the postgresql log.\n\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-logging.html\n\nregards\nTomas\n",
"msg_date": "Mon, 20 Jun 2011 21:09:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to know slowly query in lock postgre"
}
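A sketch combining the two suggestions in this thread, using the view and function names of 8.4-9.1 (procpid and current_query were renamed in 9.2; the pid 12345 is a placeholder):

    -- postgresql.conf: log every statement that runs longer than one second.
    -- log_min_duration_statement = 1000    # milliseconds

    -- Find long-running queries ...
    SELECT procpid, now() - query_start AS runtime, current_query
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>'
     ORDER BY runtime DESC;

    -- ... and, only if really necessary, cancel one (or terminate its backend).
    SELECT pg_cancel_backend(12345);
    -- SELECT pg_terminate_backend(12345);   -- available since 8.4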
] |
[
{
"msg_contents": "I have a tsvector column docvector and a gin index on it\ndocmeta1_docvector_idx\n\nI have a simple query \"select * from docmeta1 where docvector @@\nplainto_tsquery('english', 'free');\" \n\nI find that the planner chooses a sequential scan of the table even when\nthe index performs orders of magnitude. I set random_page_cost = 1.0 for\nthe database to favor index use. However, I still see that the cost\nestimate for sequential scan of the entire table (23000) is cheaper than\nthe cost of using the index (33000). The time taken for sequential\naccess is 5200 ms and for index usage is only 85 ms.\n\nDetails here:\n\npostgres version 9.0.2\nstatistics on docvector is set to 10000 and as you can see the row\nestimates are fine.\n\nlawdb=# \\d docmeta1\n Table \"public.docmeta1\"\n Column | Type | Modifiers \n-------------+-----------+-----------\n tid | integer | not null\n docweight | integer | \n doctype | integer | \n publishdate | date | \n covertids | integer[] | \n titlevector | tsvector | \n docvector | tsvector | \nIndexes:\n \"docmeta1_pkey\" PRIMARY KEY, btree (tid)\n \"docmeta1_date_idx\" btree (publishdate)\n \"docmeta1_docvector_idx\" gin (docvector)\n \"docmeta1_title_idx\" gin (titlevector)\n\nlawdb=# SELECT relpages, reltuples FROM pg_class WHERE relname\n='docmeta1'; \nrelpages | reltuples \n----------+-----------\n 18951 | 329940\n\n\nlawdb=# explain analyze select * from docmeta1 where docvector @@\nplainto_tsquery('english', 'free');\n QUERY\nPLAN \n \n--------------------------------------------------------------------------------\n-----------------------------------\n Seq Scan on docmeta1 (cost=0.00..23075.25 rows=35966 width=427)\n(actual time=0\n.145..5189.556 rows=35966 loops=1)\n Filter: (docvector @@ '''free'''::tsquery)\n Total runtime: 5196.231 ms\n(3 rows)\n\nlawdb=# set enable_seqscan = off;\nSET\nlawdb=# explain analyze select * from docmeta1 where docvector @@\nplainto_tsquery('english', 'free');\n QUERY\nPLAN \n \n--------------------------------------------------------------------------------\n-----------------------------------------------------------\n Bitmap Heap Scan on docmeta1 (cost=14096.25..33000.83 rows=35966\nwidth=427) (a\nctual time=9.543..82.754 rows=35966 loops=1)\n Recheck Cond: (docvector @@ '''free'''::tsquery)\n -> Bitmap Index Scan on docmeta1_docvector_idx (cost=0.00..14087.26\nrows=35\n966 width=0) (actual time=8.059..8.059 rows=35967 loops=1)\n Index Cond: (docvector @@ '''free'''::tsquery)\n Total runtime: 85.304 ms\n(5 rows)\n\n\n-Sushant.\n\n",
"msg_date": "Mon, 20 Jun 2011 21:08:59 +0530",
"msg_from": "Sushant Sinha <[email protected]>",
"msg_from_op": true,
"msg_subject": "sequential scan unduly favored over text search gin index"
},
{
"msg_contents": "Sushant Sinha <[email protected]> wrote:\n \n> I have a tsvector column docvector and a gin index on it\n> docmeta1_docvector_idx\n> \n> I have a simple query \"select * from docmeta1 where docvector @@\n> plainto_tsquery('english', 'free');\" \n> \n> I find that the planner chooses a sequential scan of the table\n> even when the index performs orders of magnitude.\n \nDid you ANALYZE the table after loading the data and building the\nindex?\n \n-Kevin\n",
"msg_date": "Mon, 20 Jun 2011 10:58:13 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan unduly favored over text search\n\t gin index"
},
{
"msg_contents": "\nOn Mon, 2011-06-20 at 10:58 -0500, Kevin Grittner wrote:\n> Sushant Sinha <[email protected]> wrote:\n> \n> > I have a tsvector column docvector and a gin index on it\n> > docmeta1_docvector_idx\n> > \n> > I have a simple query \"select * from docmeta1 where docvector @@\n> > plainto_tsquery('english', 'free');\" \n> > \n> > I find that the planner chooses a sequential scan of the table\n> > even when the index performs orders of magnitude.\n> \n> Did you ANALYZE the table after loading the data and building the\n> index?\nYes and I mentioned that the row estimates are correct, which indicate\nthat the problem is somewhere else.\n\n-Sushant.\n \n> -Kevin\n\n\n",
"msg_date": "Mon, 20 Jun 2011 21:34:00 +0530",
"msg_from": "Sushant Sinha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequential scan unduly favored over text search gin\n index"
},
{
"msg_contents": "On 2011-06-20 17:38, Sushant Sinha wrote:\n> I have a tsvector column docvector and a gin index on it\n> docmeta1_docvector_idx\n>\n> I have a simple query \"select * from docmeta1 where docvector @@\n> plainto_tsquery('english', 'free');\"\n>\n> I find that the planner chooses a sequential scan of the table even when\n> the index performs orders of magnitude. I set random_page_cost = 1.0 for\n> the database to favor index use. However, I still see that the cost\n> estimate for sequential scan of the entire table (23000) is cheaper than\n> the cost of using the index (33000). The time taken for sequential\n> access is 5200 ms and for index usage is only 85 ms.\nThe cost-estimation code for gin-indices are not good in 9.0, this has\nhugely been improved in 9.1\n\nhttp://git.postgresql.org/gitweb?p=postgresql.git&a=search&h=HEAD&st=commit&s=gincost\n\nI think the individual patches apply quite cleanly to 9.0 as far\nas I remember.\n\n-- \nJesper\n",
"msg_date": "Mon, 20 Jun 2011 20:58:58 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan unduly favored over text search gin\n index"
},
{
"msg_contents": "Dne 20.6.2011 18:04, Sushant Sinha napsal(a):\n> \n> On Mon, 2011-06-20 at 10:58 -0500, Kevin Grittner wrote:\n>> Sushant Sinha <[email protected]> wrote:\n>> \n>>> I have a tsvector column docvector and a gin index on it\n>>> docmeta1_docvector_idx\n>>>\n>>> I have a simple query \"select * from docmeta1 where docvector @@\n>>> plainto_tsquery('english', 'free');\" \n>>>\n>>> I find that the planner chooses a sequential scan of the table\n>>> even when the index performs orders of magnitude.\n>> \n>> Did you ANALYZE the table after loading the data and building the\n>> index?\n> Yes and I mentioned that the row estimates are correct, which indicate\n> that the problem is somewhere else.\n\nHi,\n\nI agree the estimates are damn precise in this case (actually the\nestimates are exact). The problem is the planner thinks the seq scan is\nabout 30% cheaper than the bitmap index scan.\n\nI guess you could poke the planner towards the bitmap scan by lowering\nthe random_page_cost (the default value is 4, I'd say lowering it to 2\nshould do the trick).\n\nBut be careful, this will influence all the other queries! Those values\nshould somehow reflect the hardware of your system (type of drives,\namount of RAM, etc.) so you have to test the effects.\n\n\nregards\nTomas\n",
"msg_date": "Mon, 20 Jun 2011 21:01:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan unduly favored over text search gin\n index"
},
{
"msg_contents": "\n> \n> I agree the estimates are damn precise in this case (actually the\n> estimates are exact). The problem is the planner thinks the seq scan is\n> about 30% cheaper than the bitmap index scan.\n> \n> I guess you could poke the planner towards the bitmap scan by lowering\n> the random_page_cost (the default value is 4, I'd say lowering it to 2\n> should do the trick).\n\nThe numbers that I gave was after setting random_page_cost = 1.0 After\nthis I don't know what to do.\n\n-Sushant.\n\n",
"msg_date": "Tue, 21 Jun 2011 07:55:34 +0530",
"msg_from": "Sushant Sinha <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequential scan unduly favored over text search gin\n index"
},
{
"msg_contents": "Sushant Sinha <[email protected]> writes:\n>> I guess you could poke the planner towards the bitmap scan by lowering\n>> the random_page_cost (the default value is 4, I'd say lowering it to 2\n>> should do the trick).\n\n> The numbers that I gave was after setting random_page_cost = 1.0 After\n> this I don't know what to do.\n\nI think part of the issue here is that the @@ operator is expensive,\nand so evaluating it once per row is expensive, but the pg_proc.procost\nsetting for it doesn't adequately reflect that. You could experiment\nwith tweaking that setting ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Jun 2011 01:53:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan unduly favored over text search gin index "
},
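Tom's suggestion concerns the cost assigned to the function behind the @@ operator. A hedged sketch of one way to experiment with it: ts_match_vq is believed to be the function backing tsvector @@ tsquery (verify with the first query before changing anything), and 100 is an arbitrary value to try, not a recommendation:

    -- Which function implements tsvector @@ tsquery?
    SELECT oprcode FROM pg_operator
     WHERE oprname = '@@'
       AND oprleft  = 'tsvector'::regtype
       AND oprright = 'tsquery'::regtype;

    -- Its current planner cost estimate (the default procost is 1).
    SELECT procost FROM pg_proc WHERE proname = 'ts_match_vq';

    -- Tell the planner that each per-row @@ evaluation is expensive.
    ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 100;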
{
"msg_contents": "On Mon, Jun 20, 2011 at 8:38 AM, Sushant Sinha <[email protected]> wrote:\n>\n> postgres version 9.0.2\n> statistics on docvector is set to 10000 and as you can see the row\n> estimates are fine.\n>\n> lawdb=# \\d docmeta1\n> Table \"public.docmeta1\"\n> Column | Type | Modifiers\n> -------------+-----------+-----------\n> tid | integer | not null\n> docweight | integer |\n> doctype | integer |\n> publishdate | date |\n> covertids | integer[] |\n> titlevector | tsvector |\n> docvector | tsvector |\n> Indexes:\n> \"docmeta1_pkey\" PRIMARY KEY, btree (tid)\n> \"docmeta1_date_idx\" btree (publishdate)\n> \"docmeta1_docvector_idx\" gin (docvector)\n> \"docmeta1_title_idx\" gin (titlevector)\n>\n> lawdb=# SELECT relpages, reltuples FROM pg_class WHERE relname\n> ='docmeta1';\n> relpages | reltuples\n> ----------+-----------\n> 18951 | 329940\n\n\nWhat the are sizes of associated toast tables for the tsvector columns?\n\n>\n> lawdb=# explain analyze select * from docmeta1 where docvector @@\n> plainto_tsquery('english', 'free');\n\nIt would be nice to see the results of explain (analyze, buffers).\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 29 Jun 2011 20:41:53 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan unduly favored over text search gin index"
},
{
"msg_contents": "On Mon, Jun 20, 2011 at 10:53 PM, Tom Lane <[email protected]> wrote:\n> Sushant Sinha <[email protected]> writes:\n>>> I guess you could poke the planner towards the bitmap scan by lowering\n>>> the random_page_cost (the default value is 4, I'd say lowering it to 2\n>>> should do the trick).\n>\n>> The numbers that I gave was after setting random_page_cost = 1.0 After\n>> this I don't know what to do.\n>\n> I think part of the issue here is that the @@ operator is expensive,\n> and so evaluating it once per row is expensive, but the pg_proc.procost\n> setting for it doesn't adequately reflect that. You could experiment\n> with tweaking that setting ...\n\nIn something I was testing a couple months ago, by far the biggest\nexpense of the @@ operator in a full table scan was in crawling\nthrough the entire toast table (and not necessarily in sequential\norder) in order to get the tsvector data on which to apply the\noperator. So increasing the cost of @@ might very well be the best\nimmediate solution, but should the cost estimation code be changed to\nexplicitly take page reads associated with toast into account, so that\ncost of @@ itself and can remain a CPU based estimate rather than an\namalgam of CPU and IO?\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 29 Jun 2011 20:59:25 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequential scan unduly favored over text search gin index"
}
] |
[
{
"msg_contents": "PostgreSQL 8.4.8 on CentOS 5.6, x86_64. Default settings except work_mem = 1MB.\n\nNOTE: I am using partitioned tables here, and was querying the\n'master' table. Perhaps is this a Known Issue.\n\nI ran a query recently where the result was very large. The outer-most\npart of the query looked like this:\n\n HashAggregate (cost=56886512.96..56886514.96 rows=200 width=30)\n -> Result (cost=0.00..50842760.97 rows=2417500797 width=30)\n\nThe row count for 'Result' is in the right ballpark, but why does\nHashAggregate think that it can turn 2 *billion* rows of strings (an\naverage of 30 bytes long) into only 200? This is my primary concern.\nIf I don't disable hash aggregation, postgresql quickly consumes huge\nquantities of memory and eventually gets killed by the OOM manager.\n\n\n\nAfter manually disabling hash aggregation, I ran the same query. It's\nbeen running for over 2 days now. The disk is busy but actual data\ntransferred is very low. Total data size is approx. 250GB, perhaps a\nbit less.\n\nThe query scans 160 or so tables for data. If I use a distinct + union\non each table, the plan looks like this:\n\n Unique (cost=357204094.44..357318730.75 rows=22927263 width=28)\n -> Sort (cost=357204094.44..357261412.59 rows=22927263 width=28)\n\n23 million rows is more like it, and the cost is much lower. What is\nthe possibility that distinct/unique operations can be pushed \"down\"\ninto queries during the planning stage to see if they are less\nexpensive?\n\nIn this case, postgresql believes (probably correctly, I'll let you\nknow) that distinct(column foo from tableA + column foo from tableB +\ncolumn foo from tableC ...) is more expensive than distinct(distinct\ncolumn foo from tableA + distinct column foo from tableB .... ).\n\n-- \nJon\n",
"msg_date": "Mon, 20 Jun 2011 10:53:52 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "bad plan: 8.4.8, hashagg, work_mem=1MB."
},
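A sketch of the per-table rewrite Jon describes, with invented table and column names (the real query was not posted):

    -- Instead of de-duplicating the concatenation of all partitions at once ...
    SELECT DISTINCT foo FROM part_a
    UNION
    SELECT DISTINCT foo FROM part_b
    UNION
    SELECT DISTINCT foo FROM part_c;
    -- ... each branch is de-duplicated first, and UNION (without ALL)
    -- removes the remaining duplicates across branches.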
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> I ran a query recently where the result was very large. The outer-most\n> part of the query looked like this:\n\n> HashAggregate (cost=56886512.96..56886514.96 rows=200 width=30)\n> -> Result (cost=0.00..50842760.97 rows=2417500797 width=30)\n\n> The row count for 'Result' is in the right ballpark, but why does\n> HashAggregate think that it can turn 2 *billion* rows of strings (an\n> average of 30 bytes long) into only 200?\n\n200 is the default assumption about number of groups when it's unable to\nmake any statistics-based estimate. You haven't shown us any details so\nit's hard to say more than that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Jun 2011 12:08:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad plan: 8.4.8, hashagg, work_mem=1MB. "
},
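To see whether the planner even has per-column statistics it could use, the n_distinct estimates can be inspected; a sketch with hypothetical table and column names (note that for expressions computed above an append/result node, as in a query over many partitions, the planner may be unable to apply them at all, which is when the default of 200 shows up):

    SELECT tablename, attname, n_distinct
      FROM pg_stats
     WHERE tablename = 'some_partition'
       AND attname   = 'some_column';

    -- If the estimate is poor, enlarge the sample and re-analyze.
    ALTER TABLE some_partition ALTER COLUMN some_column SET STATISTICS 1000;
    ANALYZE some_partition;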
{
"msg_contents": "On Mon, Jun 20, 2011 at 11:08 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> I ran a query recently where the result was very large. The outer-most\n>> part of the query looked like this:\n>\n>> HashAggregate (cost=56886512.96..56886514.96 rows=200 width=30)\n>> -> Result (cost=0.00..50842760.97 rows=2417500797 width=30)\n>\n>> The row count for 'Result' is in the right ballpark, but why does\n>> HashAggregate think that it can turn 2 *billion* rows of strings (an\n>> average of 30 bytes long) into only 200?\n>\n> 200 is the default assumption about number of groups when it's unable to\n> make any statistics-based estimate. You haven't shown us any details so\n> it's hard to say more than that.\n\nWhat sorts of details would you like? The row count for the Result\nline is approximately correct -- the stats for all tables are up to\ndate (the tables never change after import). statistics is set at 100\ncurrently.\n\n\n-- \nJon\n",
"msg_date": "Mon, 20 Jun 2011 14:31:05 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad plan: 8.4.8, hashagg, work_mem=1MB."
},
{
"msg_contents": "On Mon, Jun 20, 2011 at 3:31 PM, Jon Nelson <[email protected]> wrote:\n> On Mon, Jun 20, 2011 at 11:08 AM, Tom Lane <[email protected]> wrote:\n>> Jon Nelson <[email protected]> writes:\n>>> I ran a query recently where the result was very large. The outer-most\n>>> part of the query looked like this:\n>>\n>>> HashAggregate (cost=56886512.96..56886514.96 rows=200 width=30)\n>>> -> Result (cost=0.00..50842760.97 rows=2417500797 width=30)\n>>\n>>> The row count for 'Result' is in the right ballpark, but why does\n>>> HashAggregate think that it can turn 2 *billion* rows of strings (an\n>>> average of 30 bytes long) into only 200?\n>>\n>> 200 is the default assumption about number of groups when it's unable to\n>> make any statistics-based estimate. You haven't shown us any details so\n>> it's hard to say more than that.\n>\n> What sorts of details would you like? The row count for the Result\n> line is approximately correct -- the stats for all tables are up to\n> date (the tables never change after import). statistics is set at 100\n> currently.\n\nThe query and the full EXPLAIN output (attached as text files) would\nbe a good place to start....\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 19 Jul 2011 16:32:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad plan: 8.4.8, hashagg, work_mem=1MB."
}
] |
[
{
"msg_contents": "Compañeros buenas noches:\n\nExiste una sentencia SQL \"fuertemente acoplada\" al interios del Sistema Gestor \nde \nBases de Datos Postgresql que me permita \"trasponer\" tablas, \nalgo similar al contrib cross table de tablefunc.\n\nEstoy intentando con una sentencia como:\n\nSELECT id, name, max(case()) as a, max(case()) as b\nFROM table t1\nINNER JOIN ...\nINNER JOIN ...\nGROUP BY id, name.\n\nSin embargo con un gran volumen de datos la sentencia se tarda mucho.\n\nAgradezco de antemano sus comentarios y ayuda.\n\n\nAtentamente,\n\nMario Guerrero\nCompañeros buenas noches:Existe una sentencia SQL \"fuertemente acoplada\" al interios del Sistema Gestor de Bases de Datos Postgresql que me permita\n \"trasponer\" tablas, algo similar al contrib cross table de tablefunc.Estoy intentando con una sentencia como:SELECT id, name, max(case()) as a, max(case()) as bFROM table t1INNER JOIN ...INNER JOIN ...GROUP BY id, name.Sin embargo con un gran volumen de datos la sentencia se tarda mucho.Agradezco de antemano sus comentarios y ayuda.Atentamente,Mario Guerrero",
"msg_date": "Tue, 21 Jun 2011 04:35:53 +0100 (BST)",
"msg_from": "Mario Guerrero <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cross Table (Pivot)"
}
] |
[
{
"msg_contents": "I'm looking for advice from the I/O gurus who have been in the SSD game \nfor a while now.\n\nI understand that the majority of consumer grade SSD drives lack the \nrequired capacitor to complete a write on a sudden power loss. But, \nwhat about pairing up with a hardware controller with BBU write cache? \nCan the write cache be disabled at the drive and result in a safe setup?\n\nI'm exploring the combination of an Areca 1880ix-12 controller with 6x \nOCZ Vertex 3 V3LT-25SAT3 2.5\" 240GB SATA III drives in RAID-10. Has \nanyone tried this combination? What nasty surprise am I overlooking here?\n\nThanks\n-Dan\n",
"msg_date": "Mon, 20 Jun 2011 21:54:26 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On 06/20/2011 11:54 PM, Dan Harris wrote:\n> I understand that the majority of consumer grade SSD drives lack the \n> required capacitor to complete a write on a sudden power loss. But, \n> what about pairing up with a hardware controller with BBU write \n> cache? Can the write cache be disabled at the drive and result in a \n> safe setup?\n\nSometimes, but not always, and you'll be playing a risky and \nunpredictable game to try it. See \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for some anecdotes. And \neven if the reliability works out, you'll kill the expected longevity \nand performance of the drive.\n\n> I'm exploring the combination of an Areca 1880ix-12 controller with 6x \n> OCZ Vertex 3 V3LT-25SAT3 2.5\" 240GB SATA III drives in RAID-10. Has \n> anyone tried this combination? What nasty surprise am I overlooking \n> here?\n\nYou can expect database corruption the first time something unexpected \ninterrupts the power to the server. That's nasty, but it's not \nsurprising--that's well documented as what happens when you run \nPostreSQL on hardware with this feature set. You have to get a Vertex 3 \nPro to get one of the reliable 3rd gen designs from them with a \nsupercap. (I don't think those are even out yet though) We've had \nreports here of the earlier Vertex 2 Pro being fully stress tested and \nworking out well. I wouldn't even bother with a regular Vertex 3, \nbecause I don't see any reason to believe it could be reliable for \ndatabase use, just like the Vertex 2 failed to work in that role.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 21 Jun 2011 02:33:40 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On 2011-06-21 08:33, Greg Smith wrote:\n> On 06/20/2011 11:54 PM, Dan Harris wrote:\n>\n>> I'm exploring the combination of an Areca 1880ix-12 controller with \n>> 6x OCZ Vertex 3 V3LT-25SAT3 2.5\" 240GB SATA III drives in RAID-10. \n>> Has anyone tried this combination? What nasty surprise am I \n>> overlooking here?\n>\n> You can expect database corruption the first time something unexpected \n> interrupts the power to the server. That's nasty, but it's not \n> surprising--that's well documented as what happens when you run \n> PostreSQL on hardware with this feature set. You have to get a Vertex \n> 3 Pro to get one of the reliable 3rd gen designs from them with a \n> supercap. (I don't think those are even out yet though) We've had \n> reports here of the earlier Vertex 2 Pro being fully stress tested and \n> working out well. I wouldn't even bother with a regular Vertex 3, \n> because I don't see any reason to believe it could be reliable for \n> database use, just like the Vertex 2 failed to work in that role.\n>\n\nI've tested both the Vertex 2, Vertex 2 Pro and Vertex 3. The vertex 3 \npro is not yet available. The vertex 3 I tested with pgbench didn't \noutperform the vertex 2 (yes, it was attached to a SATA III port). Also, \nthe vertex 3 didn't work in my designated system until a firmware \nupgrade that came available ~2.5 months after I purchased it. The \nsupport call I had with OCZ failed to mention it, and by pure \ncoincidende when I did some more testing at a later time, I ran the \nfirmware upgrade tool (that kind of hides which firmwares are available, \nif any) and it did an update, after that it was compatible with the \ndesignated motherboard.\n\nAnother disappointment was that after I had purchased the Vertex 3 \ndrive, OCZ announced a max-iops vertex 3. Did that actually mean I \nbought an inferior version? Talking about a bad out-of-the-box \nexperience. -1 ocz fan boy.\n\nWhen putting such a SSD up for database use I'd only consider a vertex 2 \npro (for the supercap), paired with another SSD of a different brand \nwith supercap (i.e. the recent intels). When this is done on a \nmotherboard with > 1 sata controller, you'd have controller redundancy \nand can also survive single drive failures when a drive wears out. \nHaving two different SSD versions decreases the chance of both wearing \nout the same time, and make you a bit more resilient against firmware \nbugs. It would be great if there was yet another supercapped SSD brand, \nwith a modified md software raid that reads all three drives at once and \ncompares results, instead of the occasional check. If at least two \ndrives agree on the contents, return the data.\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n",
"msg_date": "Tue, 21 Jun 2011 09:51:50 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On 2011-06-21 09:51, Yeb Havinga wrote:\n> On 2011-06-21 08:33, Greg Smith wrote:\n>> On 06/20/2011 11:54 PM, Dan Harris wrote:\n>>\n>>> I'm exploring the combination of an Areca 1880ix-12 controller with \n>>> 6x OCZ Vertex 3 V3LT-25SAT3 2.5\" 240GB SATA III drives in RAID-10. \n>>> Has anyone tried this combination? What nasty surprise am I \n>>> overlooking here?\n\nI forgot to mention that with an SSD it's important to watch the \nremaining lifetime. These values can be read with smartctl. When putting \nthe disk behind a hardware raid controller, you might not be able to \nread them from the OS, and the hardware RAID firmware might be to old to \nnot know about the SSD lifetime indicator or not even show it.\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n",
"msg_date": "Tue, 21 Jun 2011 09:55:32 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "* Yeb Havinga:\n\n> I forgot to mention that with an SSD it's important to watch the\n> remaining lifetime. These values can be read with smartctl. When\n> putting the disk behind a hardware raid controller, you might not be\n> able to read them from the OS, and the hardware RAID firmware might be\n> to old to not know about the SSD lifetime indicator or not even show\n> it.\n\n3ware controllers offer SMART pass-through, and smartctl supports it.\nI'm sure there's something similar for Areca controllers.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 21 Jun 2011 11:19:43 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On 06/21/2011 07:19 AM, Florian Weimer wrote:\n> 3ware controllers offer SMART pass-through, and smartctl supports it.\n> I'm sure there's something similar for Areca controllers.\n> \n\nDepends on the model, drives, and how you access the management \ninterface. For both manufacturers actually. Check out \nhttp://notemagnet.blogspot.com/2008/08/linux-disk-failures-areca-is-not-so.html \nfor example. There I talk about problems with a specific Areca \ncontroller, as well as noting in a comment at the end that there are \nlimitations with 3ware supporting not supporting SMART reports against \nSAS drives.\n\nPart of the whole evaluation chain for new server hardware, especially \nfor SSD, needs to be a look at what SMART data you can get. Yeb, I'd be \ncurious to get more details about what you've been seeing here if you \ncan share it. You have more different models around than I have access \nto, especially the OCZ ones which I can't get my clients to consider \nstill. (Their concerns about compatibility and support from a \nrelatively small vendor are not completely unfounded)\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 21 Jun 2011 11:11:53 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "Am Dienstag, 21. Juni 2011 05:54:26 schrieb Dan Harris:\n> I'm looking for advice from the I/O gurus who have been in the SSD game\n> for a while now.\n>\n> I understand that the majority of consumer grade SSD drives lack the\n> required capacitor to complete a write on a sudden power loss. But,\n> what about pairing up with a hardware controller with BBU write cache?\n> Can the write cache be disabled at the drive and result in a safe setup?\n>\n> I'm exploring the combination of an Areca 1880ix-12 controller with 6x\n> OCZ Vertex 3 V3LT-25SAT3 2.5\" 240GB SATA III drives in RAID-10. Has\n> anyone tried this combination? What nasty surprise am I overlooking here?\n>\n> Thanks\n> -Dan\n\nWont work.\n\nperiod.\n\nlong story: the loss of the write in the ssd cache is substantial. \n\nYou will loss perhaps the whole system.\n\nI have tested since 2006 ssd - adtron 2GB for 1200 Euro at first ... \n\ni can only advice to use a enterprise ready ssd. \n\ncandidates: intel new series , sandforce pro discs.\n\ni tried to submit a call at apc to construct a device thats similar to a \nbuffered drive frame (a capacitor holds up the 5 V since cache is written \nback) , but they have not answered. so no luck in using mainstream ssd for \nthe job. \n\nloss of the cache - or for mainstream sandforce the connection - will result \nin loss of changed frames (i.e 16 Mbytes of data per frame) in ssd.\n\nif this is the root of your filesystem - forget the disk.\n\nbtw.: since 2 years i have tested 16 discs for speed only. i sell the disc \nafter the test. i got 6 returns for failure within those 2 years - its really \nhappening to the mainstream discs.\n \n-- \nMit freundlichen Grüssen\nAnton Rommerskirchen\n",
"msg_date": "Tue, 21 Jun 2011 20:29:26 +0200",
"msg_from": "Anton Rommerskirchen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On 2011-06-21 17:11, Greg Smith wrote:\n> On 06/21/2011 07:19 AM, Florian Weimer wrote:\n>> 3ware controllers offer SMART pass-through, and smartctl supports it.\n>> I'm sure there's something similar for Areca controllers.\n>\n> Depends on the model, drives, and how you access the management \n> interface. For both manufacturers actually. Check out \n> http://notemagnet.blogspot.com/2008/08/linux-disk-failures-areca-is-not-so.html \n> for example. There I talk about problems with a specific Areca \n> controller, as well as noting in a comment at the end that there are \n> limitations with 3ware supporting not supporting SMART reports against \n> SAS drives.\n>\n> Part of the whole evaluation chain for new server hardware, especially \n> for SSD, needs to be a look at what SMART data you can get. Yeb, I'd \n> be curious to get more details about what you've been seeing here if \n> you can share it. You have more different models around than I have \n> access to, especially the OCZ ones which I can't get my clients to \n> consider still. (Their concerns about compatibility and support from \n> a relatively small vendor are not completely unfounded)\n>\n\nThis is what a windows OCZ tool explains about the different smart \nvalues (excuse for no mark up) for a Vertex 2 Pro.\n\nSMART READ DATA\n Revision: 10\n Attributes List\n 1: SSD Raw Read Error Rate Normalized Rate: 120 \ntotal ECC and RAISE errors\n 5: SSD Retired Block Count Reserve blocks \nremaining: 100%\n 9: SSD Power-On Hours Total hours power on: 451\n 12: SSD Power Cycle Count Count of power on/off \ncycles: 61\n 13: SSD Soft Read Error Rate Normalized Rate: 120\n 100: SSD GBytes Erased Flash memory erases \nacross the entire drive: 128 GB\n 170: SSD Number of Remaining Spares Number of reserve Flash \nmemory blocks: 17417\n 171: SSD Program Fail Count Total number of Flash \nprogram operation failures: 0\n 172: SSD Erase Fail Count Total number of Flash \nerase operation failures: 0\n 174: SSD Unexpected power loss count Total number of \nunexpected power loss: 13\n 177: SSD Wear Range Delta Delta between most-worn \nand least-worn Flash blocks: 0\n 181: SSD Program Fail Count Total number of Flash \nprogram operation failures: 0\n 182: SSD Erase Fail Count Total number of Flash \nerase operation failures: 0\n 184: SSD End to End Error Detection I/O errors detected \nduring reads from flash memory: 0\n 187: SSD Reported Uncorrectable Errors Uncorrectable RAISE \nerrors reported to the host for all data access: 0\n 194: SSD Temperature Monitoring Current: 26 High: 37 \nLow: 0\n 195: SSD ECC On-the-fly Count Normalized Rate: 120\n 196: SSD Reallocation Event Count Total number of \nreallocated Flash blocks: 0\n 198: SSD Uncorrectable Sector Count Total number of \nuncorrectable errors when reading/writing a sector: 0\n 199: SSD SATA R-Errors Error Count Current SATA RError \ncount: 0\n 201: SSD Uncorrectable Soft Read Error Rate Normalized Rate: 120\n 204: SSD Soft ECC Correction Rate (RAISE) Normalized Rate: 120\n 230: SSD Life Curve Status Current state of drive \noperation based upon the Life Curve: 100\n 231: SSD Life Left Approximate SDD life \nRemaining: 99%\n 232: SSD Available Reserved Space Amount of Flash memory \nspace in reserve (GB): 17\n 235: SSD Supercap Health Condition of an \nexternal SuperCapacitor Health in mSec: 0\n 241: SSD Lifetime writes from host Number of bytes written \nto SSD: 448 GB\n 242: SSD Lifetime reads from host Number of bytes read \nfrom SSD: 192 GB\n\nSame tool for a Vertex 3 
(not pro)\n\nSMART READ DATA\n Revision: 10\n Attributes List\n 1: SSD Raw Read Error Rate Normalized Rate: 120 \ntotal ECC and RAISE errors\n 5: SSD Retired Block Count Reserve blocks \nremaining: 100%\n 9: SSD Power-On Hours Total hours power on: 7\n 12: SSD Power Cycle Count Count of power on/off \ncycles: 13\n 171: SSD Program Fail Count Total number of Flash \nprogram operation failures: 0\n 172: SSD Erase Fail Count Total number of Flash \nerase operation failures: 0\n 174: SSD Unexpected power loss count Total number of \nunexpected power loss: 10\n 177: SSD Wear Range Delta Delta between most-worn \nand least-worn Flash blocks: 0\n 181: SSD Program Fail Count Total number of Flash \nprogram operation failures: 0\n 182: SSD Erase Fail Count Total number of Flash \nerase operation failures: 0\n 187: SSD Reported Uncorrectable Errors Uncorrectable RAISE \nerrors reported to the host for all data access: 0\n 194: SSD Temperature Monitoring Current: 128 High: 129 \nLow: 127\n 195: SSD ECC On-the-fly Count Normalized Rate: 100\n 196: SSD Reallocation Event Count Total number of \nreallocated Flash blocks: 0\n 201: SSD Uncorrectable Soft Read Error Rate Normalized Rate: 100\n 204: SSD Soft ECC Correction Rate (RAISE) Normalized Rate: 100\n 230: SSD Life Curve Status Current state of drive \noperation based upon the Life Curve: 100\n 231: SSD Life Left Approximate SDD life \nRemaining: 100%\n 241: SSD Lifetime writes from host Number of bytes written \nto SSD: 162 GB\n 242: SSD Lifetime reads from host Number of bytes read \nfrom SSD: 236 GB\n\n\nThere's some info burried in \nhttp://archives.postgresql.org/pgsql-performance/2011-03/msg00350.php \nwhere two Vertex 2 pro's are compared; the first has been really \nhammered with pgbench, the second had a few months duty in a \nworkstation. The raw value of SSD Available Reserved Space seems to be a \ngood candidate to watch to go to 0, since the pgbenched-drive has 16GB \nleft and the workstation disk 17GB. Would be cool to graph with e.g. \nsymon (http://i.imgur.com/T4NAq.png)\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n",
"msg_date": "Tue, 21 Jun 2011 22:10:35 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On 2011-06-21 22:10, Yeb Havinga wrote:\n>\n>\n> There's some info burried in \n> http://archives.postgresql.org/pgsql-performance/2011-03/msg00350.php \n> where two Vertex 2 pro's are compared; the first has been really \n> hammered with pgbench, the second had a few months duty in a \n> workstation. The raw value of SSD Available Reserved Space seems to be \n> a good candidate to watch to go to 0, since the pgbenched-drive has \n> 16GB left and the workstation disk 17GB. Would be cool to graph with \n> e.g. symon (http://i.imgur.com/T4NAq.png)\n>\n\nI forgot to mention that both newest firmware of the drives as well as \nsvn versions of smartmontools are advisable, before figuring out what \nall those strange values mean. It's too bad however that OCZ doesn't let \nthe user choose which firmware to run (the tool always picks the \nnewest), so after every upgrade it'll be a surprise what values are \nsupported or if any of the values are reset or differently interpreted. \nEven when disks in production might not be upgraded eagerly, replacing a \nfaulty drive means that one probably needs to be upgraded first and it \nwould be nice to have a uniform smart value readout for the monitoring \ntools.\n\n-- \nYeb Havinga\nhttp://www.mgrid.net/\nMastering Medical Data\n\n",
"msg_date": "Tue, 21 Jun 2011 22:25:47 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On Tue, Jun 21, 2011 at 2:25 PM, Yeb Havinga <[email protected]> wrote:\n\n> strange values mean. It's too bad however that OCZ doesn't let the user\n> choose which firmware to run (the tool always picks the newest), so after\n> every upgrade it'll be a surprise what values are supported or if any of the\n\nThat right there pretty much eliminates them from consideration for\nenterprise applications.\n",
"msg_date": "Tue, 21 Jun 2011 14:32:13 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On Tue, Jun 21, 2011 at 3:32 PM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Jun 21, 2011 at 2:25 PM, Yeb Havinga <[email protected]> wrote:\n>\n>> strange values mean. It's too bad however that OCZ doesn't let the user\n>> choose which firmware to run (the tool always picks the newest), so after\n>> every upgrade it'll be a surprise what values are supported or if any of the\n>\n> That right there pretty much eliminates them from consideration for\n> enterprise applications.\n\nAs much as I've been irritated with Intel for being intentionally\noblique on the write caching issue -- I think they remain more or less\nthe only game in town for enterprise use. The x25-e has been the only\ndrive up until recently to seriously consider for write heavy\napplications (and Greg is pretty skeptical about that even). I have\ndirectly observed vertex pro drives burning out in ~ 18 months in\nconstant duty applications (which if you did the math is about right\non schedule) -- not good enough IMO.\n\nISTM Intel is clearly positioning the 710 Lyndonville as the main\ndrive in database environments to go with for most cases. At 3300\nIOPS (see http://www.anandtech.com/show/4452/intel-710-and-720-ssd-specifications)\nand some tinkering that results in 65 times greater longevity than\nstandard MLC, I expect the drive will be a huge hit as long as can\nsustain those numbers writing durably and it comes it at under the\n10$/gb price point.\n\nmerlin\n",
"msg_date": "Tue, 21 Jun 2011 16:35:33 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On 06/21/2011 05:35 PM, Merlin Moncure wrote:\n> On Tue, Jun 21, 2011 at 3:32 PM, Scott Marlowe<[email protected]> wrote:\n> \n>> On Tue, Jun 21, 2011 at 2:25 PM, Yeb Havinga<[email protected]> wrote:\n>>\n>> \n>>> It's too bad however that OCZ doesn't let the user\n>>> choose which firmware to run (the tool always picks the newest), so after\n>>> every upgrade it'll be a surprise what values are supported or if any of the\n>>> \n>> That right there pretty much eliminates them from consideration for\n>> enterprise applications.\n>> \n> As much as I've been irritated with Intel for being intentionally\n> oblique on the write caching issue -- I think they remain more or less\n> the only game in town for enterprise use.\n\nThat's at the core of why I have been so consistently cranky about \nthem. The sort of customers I deal with who are willing to spend money \non banks of SSD will buy Intel, and the \"Enterprise\" feature set seems \ncompletely enough that it doesn't set off any alarms to them. The same \nis not true of OCZ, which unfortunately means I never even get them onto \nthe vendor grid in the first place. Everybody runs out to buy the Intel \nunits instead, they get burned by the write cache issues, lose data, and \nsometimes they even blame PostgreSQL for it.\n\nI have a customer who has around 50 X25-E drives, a little stack of them \nin six servers running two similar databases. They each run about a \nterabyte, and refill about every four months (old data eventually ages \nout, replaced by new). At the point I started working with them, they \nhad lost the entire recent history twice--terabyte gone, \nwhoosh!--because the power reliability is poor in their area. And \nnetwork connectivity is bad enough that they can't ship this volume of \nupdates to elsewhere either.\n\nIt happened again last month, and for the first time the database was \nrecoverable. I converted one server to be a cold spare, just archive \nthe WAL files. And that's the only one that lived through the nasty \npower spike+outage that corrupted the active databases on both the \nmaster and the warm standby of each set. All four of the servers where \nPostgreSQL was writing data and expected proper fsync guarantees, all \ngone from one power issue. At the point I got involved, they were about \nto cancel this entire PostgreSQL experiment because they assumed the \ndatabase had to be garbage that this kept happening; until I told them \nabout this known issue they never considered the drives were the \nproblem. That's what I think of when people ask me about the Intel X25-E.\n\nI've very happy with the little 3rd generation consumer grade SSD I \nbought from Intel though (320 series). If they just do the same style \nof write cache and reliability rework to the enterprise line, but using \nbetter flash, I agree that the first really serious yet affordable \nproduct for the database market may finally come out of that. We're \njust not there yet, and unfortunately for the person who started this \nround of discussion throwing hardware RAID at the problem doesn't make \nthis go away either.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 21 Jun 2011 18:17:21 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
},
{
"msg_contents": "On 06/21/2011 05:17 PM, Greg Smith wrote:\n\n> If they just do the same style of write cache and reliability rework\n> to the enterprise line, but using better flash, I agree that the\n> first really serious yet affordable product for the database market\n> may finally come out of that.\n\nAfter we started our research in this area and finally settled on \nFusionIO PCI cards (which survived several controlled and uncontrolled \nfailures completely intact), a consultant tried telling us he could \nbuild us a cage of SSDs for much cheaper, and with better performance.\n\nOnce I'd stopped laughing, I quickly shooed him away. One of the reasons \nthe PCI cards do so well is that they operate in a directly \nmemory-addressable manner, and always include capacitors. You lose some \noverhead due to the CPU running the driver, and you can't boot off of \nthem, but they're leagues ahead in terms of safety.\n\nBut like you said, they're certainly not what most people would call \naffordable. 640GB for two orders of magnitude more than an equivalent \nhard drive would cost? Ouch. Most companies are familiar---and hence \ncomfortable---with RAIDs of various flavors, so they see SSD performance \nnumbers and think to themselves \"What if that were in a RAID?\" Right \nnow, drives aren't quite there yet, or the ones that are cost more than \nmost want to spend.\n\nIt's a shame, really. But I'm willing to wait it out for now.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Wed, 22 Jun 2011 10:50:48 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Contemplating SSD Hardware RAID"
}
] |
[
{
"msg_contents": "Hello All..\nThis is my first PostgreSql database. It has 8 tables and 4 tables are very\nhuge each with 6million records.\nI have a simple view on this tables and it is taking more than 3hrs to\nreturn the results.\nCan someone help me the way to improve the db return the results in a faster\nway.\n\nI am not sure ... how to improve the performace and return the faster query\nresults in PostgreSql.\n\nI tried creating index on each of this tables \nfor example \nCREATE INDEX idx_idlocalizedname\n ON \"LocalizedName\"\n USING btree\n (id)\n WITH (FILLFACTOR=95);\n\nBut still it did not help me much\nCan someone guide me the way to improve the performance in PostgreSql\nThx,\nTriprua\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Improve-the-Postgres-Query-performance-tp4511903p4511903.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 21 Jun 2011 13:34:39 -0700 (PDT)",
"msg_from": "Tripura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improve the Postgres Query performance"
},
{
"msg_contents": "On 22/06/11 04:34, Tripura wrote:\n> Hello All..\n> This is my first PostgreSql database. It has 8 tables and 4 tables are very\n> huge each with 6million records.\n> I have a simple view on this tables and it is taking more than 3hrs to\n> return the results.\n> Can someone help me the way to improve the db return the results in a faster\n> way.\n\nPlease read:\n\n http://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nand post a follow-up with more detail.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 22 Jun 2011 15:05:14 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improve the Postgres Query performance"
},
{
"msg_contents": "Hi,\nThankyou for the link, it heped me \n\nThx,\nTripura\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Improve-the-Postgres-Query-performance-tp4511903p4515457.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 22 Jun 2011 13:57:41 -0700 (PDT)",
"msg_from": "Tripura <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improve the Postgres Query performance"
}
] |
[
{
"msg_contents": "Hi list,\n\nI use Postgres 9.0.4.\n\nI have some tables with bitmask integers. Set bits are the interesting \nones. Usually they are sparse.\n\n-- Many rows & columns\nCREATE TABLE a_table\n(\n objectid INTEGER PRIMARY KEY NOT NULL\n,misc_bits INTEGER DEFAULT 0 NOT NULL\n...\n)\nWITHOUT OIDS;\n\n...and when I use it I...\n\nselect\n ...\nfrom\n a_table\nwhere\n 0 <> (misc_bits & (1 << 13))\n\nNow the dear tables have swollen and these scans aren't as nice anymore.\n\nWhat indexing strategies would you use here?\n\nExternal table?:\n\ncreate table a_table_feature_x\n(\n objectid INTEGER PRIMARY KEY NOT NULL -- fk to \na_table.objectid\n)\nWITHOUT OIDS;\n\n\nInternal in the big mama table?:\n\nCREATE TABLE a_table\n(\n objectid INTEGER PRIMARY KEY NOT NULL\n,misc_bits INTEGER DEFAULT 0 NOT NULL\n,feature_x VARCHAR(1) -- 'y' or null\n...\n)\nWITHOUT OIDS;\n\nCREATE INDEX a_table_x1 ON a_table(feature_x); -- I assume nulls are not \nhere\n\n\nSome other trick?\n\n\nThanks,\nMarcus\n",
"msg_date": "Wed, 22 Jun 2011 23:27:48 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "bitmask index"
},
{
"msg_contents": "On 06/22/2011 05:27 PM, Marcus Engene wrote:\n> I have some tables with bitmask integers. Set bits are the interesting \n> ones. Usually they are sparse.\n\nIf it's sparse, create a partial index that just includes rows where the \nbit is set: \nhttp://www.postgresql.org/docs/current/static/indexes-partial.html\n\nYou need to be careful the query uses the exact syntax as the one that \ncreated the index for it to be used. But if you do that, it should be \nable to pull the rows that match out quickly.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 22 Jun 2011 17:42:25 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bitmask index"
},
{
"msg_contents": "On 06/22/2011 11:42 PM, Greg Smith wrote:\n> On 06/22/2011 05:27 PM, Marcus Engene wrote:\n>> I have some tables with bitmask integers. Set bits are the interesting\n>> ones. Usually they are sparse.\n>\n> If it's sparse, create a partial index that just includes rows where the\n> bit is set:\n> http://www.postgresql.org/docs/current/static/indexes-partial.html\n\nThat would mean that if different bits are queried there would need to \nbe several of those indexes.\n\nMaybe it's an alternative to index all rows where misc_bits <> 0 and \ninclude that criterion in the query.\n\nKind regards\n\n\trobert\n\n",
"msg_date": "Thu, 23 Jun 2011 17:55:35 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bitmask index"
},
{
"msg_contents": "On 6/22/11 11:42 , Greg Smith wrote:\n> On 06/22/2011 05:27 PM, Marcus Engene wrote:\n>> I have some tables with bitmask integers. Set bits are the \n>> interesting ones. Usually they are sparse.\n>\n> If it's sparse, create a partial index that just includes rows where \n> the bit is set: \n> http://www.postgresql.org/docs/current/static/indexes-partial.html\n>\n> You need to be careful the query uses the exact syntax as the one that \n> created the index for it to be used. But if you do that, it should be \n> able to pull the rows that match out quickly.\n>\nI ended up having a separate table with an index on.\n\nThough partial index solved another problem. Usually I'm a little bit \nannoyed with the optimizer and the developers religious \"fix the planner \ninstead of index hints\". I must say that I'm willing to reconsider my \nusual stance to that.\n\nWe have a large table of products where status=20 is a rare intermediate \nstatus. I added a...\n\nCREATE INDEX pond_item_common_x8 ON pond_item_common(pond_user, status)\nWHERE status = 20;\n\n...and a slow 5s select with users who had existing status=20 items \nbecame very fast. Planner, I guess, saw the 10000 status 20 clips (out \nof millions of items) instead of like 5 different values of status and \nthus ignoring the index. Super!\n\nTo my great amazement, the planner also managed to use the index when \ncounting how many status=20 items there are in total:\n\npond90=> explain analyze select\npond90-> coalesce(sum(tt.antal),0) as nbr_in_queue\npond90-> from\npond90-> (\npond90(> select\npond90(> pu.username\npond90(> ,t.antal\npond90(> from\npond90(> (\npond90(> select\npond90(> sum(1) as antal\npond90(> ,pond_user\npond90(> from\npond90(> pond_item_common\npond90(> where\npond90(> status = 20\npond90(> group by pond_user\npond90(> ) as t\npond90(> ,pond_user pu\npond90(> where\npond90(> pu.objectid = t.pond_user\npond90(> order by t.antal desc\npond90(> ) as tt;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=38079.45..38079.46 rows=1 width=8) (actual \ntime=166.439..166.440 rows=1 loops=1)\n -> Sort (cost=38079.13..38079.18 rows=21 width=18) (actual \ntime=166.009..166.085 rows=648 loops=1)\n Sort Key: (sum(1))\n Sort Method: quicksort Memory: 67kB\n -> Nested Loop (cost=37903.66..38078.67 rows=21 width=18) \n(actual time=157.545..165.561 rows=648 loops=1)\n -> HashAggregate (cost=37903.66..37903.92 rows=21 \nwidth=4) (actual time=157.493..157.720 rows=648 loops=1)\n -> Bitmap Heap Scan on pond_item_common \n(cost=451.43..37853.37 rows=10057 width=4) (actual time=9.061..151.511 \nrows=12352 loops=1)\n Recheck Cond: (status = 20)\n -> Bitmap Index Scan on \npond_item_common_x8 (cost=0.00..448.91 rows=10057 width=0) (actual \ntime=5.654..5.654 rows=20051 loops=1)\n Index Cond: (status = 20)\n -> Index Scan using pond_user_pkey on pond_user pu \n(cost=0.00..8.30 rows=1 width=14) (actual time=0.011..0.012 rows=1 \nloops=648)\n Index Cond: (pu.objectid = pond_item_common.pond_user)\n Total runtime: 166.709 ms\n(13 rows)\n\nMy hat's off to the dev gang. Impressive!\n\nBest,\nMarcus\n\n",
"msg_date": "Tue, 05 Jul 2011 12:15:30 +0200",
"msg_from": "Marcus Engene <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bitmask index"
},
{
"msg_contents": "On 07/05/2011 06:15 AM, Marcus Engene wrote:\n> Though partial index solved another problem. Usually I'm a little bit \n> annoyed with the optimizer and the developers religious \"fix the \n> planner instead of index hints\". I must say that I'm willing to \n> reconsider my usual stance to that.\n>\n> We have a large table of products where status=20 is a rare \n> intermediate status. I added a...\n>\n> CREATE INDEX pond_item_common_x8 ON pond_item_common(pond_user, status)\n> WHERE status = 20;\n>\n> ...and a slow 5s select with users who had existing status=20 items \n> became very fast. Planner, I guess, saw the 10000 status 20 clips (out \n> of millions of items) instead of like 5 different values of status and \n> thus ignoring the index. Super!\n>\n> To my great amazement, the planner also managed to use the index when \n> counting how many status=20 items there are in total:\n\nI'm glad we got you to make a jump toward common ground with the \ndatabase's intended use. There are many neat advanced ways to solve the \nsorts of problems people try to hammer with hints available in \nPostgreSQL, some of which don't even exist in other databases. It's \nkind of interesting to me how similarly one transition tends to happen \nto people who learn a lot about those options, enough that they can talk \nfully informed about things like how hints would have to work in \nPostgreSQL--for example: they'd have to consider all all these partial \nindex possibilities. Once you go through all that, suddenly a lot of \nthe people who do it realize that maybe hints aren't as important as \ngood design and indexing--when you take advantages of all the features \navailable to you--after all.\n\nTo help explain what happened to you here a little better, the planner \ntracks Most Common Values in the database, and it uses those statistics \nto make good decisions about the ones it finds. But when a value is \nreally rare, it's never going to make it to that list, and therefore the \nplanner is going to make a guess about how likely it is--likely a wrong \none. By creating a partial index on that item, it's essentially adding \nthat information--just how many rows are going to match a query looking \nfor that value--so that it can be utilized the same way MCVs are. \nAdding partial indexes on sparse columns that are critical to a common \nreport allow what I'm going to coin a new acronym for: those are part \nof the Most Important Values in that column. The MIV set is the MCV \ninformation plus information about the rare but critical columns. And \nthe easiest way to expose that data to the planner is with a partial index.\n\nI smell a blog post coming on this topic.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Tue, 05 Jul 2011 13:43:26 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bitmask index"
}
] |
[
{
"msg_contents": "Hello\n\nI am attempting to run an update statement that copies two fields from \none table to another:\n\n\nUPDATE\n table_A\nSET\n(\n field_1\n, field_2\n) = (\ntable_B.field_1\n, table_B.field_2\n)\nFROM\ntable_B\nWHERE\ntable_B.id = table_A.id\n;\n\n\nTable \"table_B\" contains almost 75 million records, with IDs that match \nthose in \"table_A\".\n\nBoth \"field_1\" and \"field_2\" are DOUBLE PRECISION. The ID fields are \nSERIAL primary-key integers in both tables.\n\nI tested (the logic of) this statement with a very small sample, and it \nworked correctly.\n\nThe database runs on a dedicated Debian server in our office.\n\nI called both VACUUM and ANALYZE on the databased before invoking this \nstatement.\n\nThe statement has been running for 18+ hours so far.\n\nTOP, FREE and VMSTAT utilities indicate that only about half of the 6GB \nof memory is being used, so I have no reason to believe that the server \nis struggling.\n\nMy question is: can I reasonably expect a statement like this to \ncomplete with such a large data-set, even if it takes several days?\n\nWe do not mind waiting, but obviously we do not want to wait unnecessarily.\n\nMany thanks.\n\nHarry Mantheakis\nLondon, UK\n\n\n",
"msg_date": "Thu, 23 Jun 2011 16:05:45 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Long Running Update"
},
{
"msg_contents": "On Thu, Jun 23, 2011 at 5:05 PM, Harry Mantheakis\n<[email protected]> wrote:\n> TOP, FREE and VMSTAT utilities indicate that only about half of the 6GB of\n> memory is being used, so I have no reason to believe that the server is\n> struggling.\n\nYou have a hinky idea of server load.\n\nMind you, there are lots of ways in which it could be struggling,\nother than memory usage.\nLike IO, CPU, lock contention...\n\nIn my experience, such huge updates struggle a lot with fsync and\nrandom I/O when updating the indices.\nIt will be a lot faster if you can drop all indices (including the\nPK), if you can.\n",
"msg_date": "Thu, 23 Jun 2011 19:18:23 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Harry Mantheakis <[email protected]> wrote:\n \n> UPDATE\n> table_A\n> SET\n> (\n> field_1\n> , field_2\n> ) = (\n> table_B.field_1\n> , table_B.field_2\n> )\n> FROM\n> table_B\n> WHERE\n> table_B.id = table_A.id\n> ;\n \nI would have just done:\n\n SET field_1 = table_B.field_1, field_2 = table_B.field_2\n \ninstead of using row value constructors. That might be slowing\nthings down a bit.\n \n> I tested (the logic of) this statement with a very small sample,\n> and it worked correctly.\n \nAlways a good sign. :-)\n \n> The statement has been running for 18+ hours so far.\n \n> My question is: can I reasonably expect a statement like this to \n> complete with such a large data-set, even if it takes several\n> days?\n \nIf it's not leaking memory, I expect that it will complete.\n \nTo get some sense of what it's doing, you could log on to another\nconnection and EXPLAIN the statement. (NOTE: Be careful *not* to\nuse EXPLAIN ANALYZE.)\n \nAnother thing to consider if you run something like this again is\nthat an UPDATE is an awful lot like an INSERT combined with a\nDELETE. The way PostgreSQL MVCC works, the old version of each row\nmust remain until the updating transaction completes. If you were\nto divide this update into a series of updates by key range, the new\nversions of the rows from later updates could re-use the space\npreviously occupied by the old version of rows from earlier updates.\nFor similar reasons, you might want to add something like this to\nyour WHERE clause, to prevent unnecessary updates:\n \n AND (table_B.field_1 IS DISTINCT FROM table_A.field_1\n OR table_B.field_2 IS DISTINCT FROM table_A.field_2);\n \n-Kevin\n",
"msg_date": "Thu, 23 Jun 2011 14:32:24 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Thank you Kevin.\n\n > SET field_1 = table_B.field_1, field_2 = table_B.field_2\n\nI will try that, if I have to next time.\n\n > add something like this toyour WHERE clause,\n > to prevent unnecessary updates:\n >\n > AND (table_B.field_1 IS DISTINCT FROM table_A.field_1\n > OR table_B.field_2 IS DISTINCT FROM table_A.field_2);\n\nThank you for that explanation - I will keep that in mind in future. (In \nthis case, the two fields that are being updated are all known to be \nempty - hence, distinct - in the target table.)\n\n > EXPLAIN the statement\n\nHere is the EXPLAIN result:\n\n----------------------------------------------------------------------\nQUERY PLAN\n----------------------------------------------------------------------\nHash Join (cost=2589312.08..16596998.47 rows=74558048 width=63)\nHash Cond: (table_A.id = table_B.id)\n-> Seq Scan on table_A(cost=0.00..1941825.05 rows=95612705 width=47)\n-> Hash (cost=1220472.48..1220472.48 rows=74558048 width=20)\n-> Seq Scan on table_B(cost=0.00..1220472.48 rows=74558048 width=20)\n----------------------------------------------------------------------\n\nThe documentation says the 'cost' numbers are 'units of disk page fetches'.\n\nDo you, by any chance, have any notion of how many disk page fetches can \nbe processed per second in practice - at least a rough idea?\n\nIOW how do I convert - guesstimate! - these numbers into (plausible) \ntime values?\n\nKind regards\n\nHarry Mantheakis\nLondon, UK\n\n",
"msg_date": "Fri, 24 Jun 2011 12:16:32 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Thank you Claudio.\n\n > there are lots of ways in which it could be struggling...\n\nI have been monitoring the server with IOSTAT -d and IOSTAT -c and I \ncannot see anything alarming.\n\n > It will be a lot faster if you can drop all indices...\n\nThis is counter-intuitive - because the WHERE clause is matching the \nonly two indexed fields, and my understanding is that querying on \nindexed fields is faster than querying on fields that are not indexed. \n(Note also, that the indexed field is NOT being updated.)\n\nBut if this update fails, I shall try what you suggest!\n\nKind regards\n\nHarry Mantheakis\nLondon, UK\n\n",
"msg_date": "Fri, 24 Jun 2011 12:19:07 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "On Fri, Jun 24, 2011 at 1:19 PM, Harry Mantheakis\n<[email protected]> wrote:\n>\n>> there are lots of ways in which it could be struggling...\n>\n> I have been monitoring the server with IOSTAT -d and IOSTAT -c and I cannot\n> see anything alarming.\n\nIf iostat doesn't show disk load, either iostat doesn't work well\n(which could be the case, I've found a few broken installations here\nand there), or, perhaps, your update is waiting on some other update.\n\nI've seen cases when there are application-level deadlocks (ie,\ndeadlocks, but that the database alone cannot detect, and then your\nqueries stall like that. It happens quite frequently if you try such a\nmassive update on a loaded production server. In those cases, the\ntechnique someone mentioned (updating in smaller batches) usually\nworks nicely.\n\nYou should be able to see if it's locked waiting for something with\n\"select * from pg_stat_activity\".\n",
"msg_date": "Fri, 24 Jun 2011 13:39:54 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "24.06.11 14:16, Harry Mantheakis написав(ла):\n>\n> > EXPLAIN the statement\n>\n> Here is the EXPLAIN result:\n>\n> ----------------------------------------------------------------------\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Hash Join (cost=2589312.08..16596998.47 rows=74558048 width=63)\n> Hash Cond: (table_A.id = table_B.id)\n> -> Seq Scan on table_A(cost=0.00..1941825.05 rows=95612705 width=47)\n> -> Hash (cost=1220472.48..1220472.48 rows=74558048 width=20)\n> -> Seq Scan on table_B(cost=0.00..1220472.48 rows=74558048 width=20)\n> ----------------------------------------------------------------------\n>\n> The documentation says the 'cost' numbers are 'units of disk page \n> fetches'.\n>\n> Do you, by any chance, have any notion of how many disk page fetches \n> can be processed per second in practice - at least a rough idea?\n>\n> IOW how do I convert - guesstimate! - these numbers into (plausible) \n> time values?\nNo chance. This are \"virtual values\" for planner only.\nIf I read correctly, your query should go into two phases: build hash \nmap on one table, then update second table using the map. Not that this \nall valid unless you have any constraints (including foreign checks, \nboth sides) to check on any field of updated table. If you have, you'd \nbetter drop them.\nAnyway, this is two seq. scans. For a long query I am using a tool like \nktrace (freebsd) to get system read/write calls backend is doing. Then \nwith catalog tables you can map file names to relations \n(tables/indexes). Then you can see which stage you are on and how fast \nis it doing.\nNote that partially cached tables are awful (in FreeBSD, dunno for \nlinux) for such a query - I suppose this is because instead on \nsequential read, you get a lot of random reads that fools prefetch \nlogic. \"dd if=table_file of=/dev/null bs=8m\" helps me a lot. You can see \nit it helps if CPU time goes up.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Fri, 24 Jun 2011 14:45:15 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Thanks again Claudio.\n\nIt does not look like its locked/blocked - but your hint about doing \nthis task in smaller batches is a good one, and it would be easy enough \nto organise.\n\nI am going to let this task run over the week-end, and then decide. \nEither way, I shall update this thread.\n\nMuch obliged!\n\nHarry Mantheakis\nLondon, UK\n\n\nOn 24/06/2011 12:39, Claudio Freire wrote:\n> On Fri, Jun 24, 2011 at 1:19 PM, Harry Mantheakis\n> <[email protected]> wrote:\n>>> there are lots of ways in which it could be struggling...\n>> I have been monitoring the server with IOSTAT -d and IOSTAT -c and I cannot\n>> see anything alarming.\n> If iostat doesn't show disk load, either iostat doesn't work well\n> (which could be the case, I've found a few broken installations here\n> and there), or, perhaps, your update is waiting on some other update.\n>\n> I've seen cases when there are application-level deadlocks (ie,\n> deadlocks, but that the database alone cannot detect, and then your\n> queries stall like that. It happens quite frequently if you try such a\n> massive update on a loaded production server. In those cases, the\n> technique someone mentioned (updating in smaller batches) usually\n> works nicely.\n>\n> You should be able to see if it's locked waiting for something with\n> \"select * from pg_stat_activity\".\n",
"msg_date": "Fri, 24 Jun 2011 13:39:16 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "On 23/06/11 16:05, Harry Mantheakis wrote:\n> Hello\n>\n> I am attempting to run an update statement that copies two fields from \n> one table to another:\n>\n>\n> UPDATE\n> table_A\n> SET\n> (\n> field_1\n> , field_2\n> ) = (\n> table_B.field_1\n> , table_B.field_2\n> )\n> FROM\n> table_B\n> WHERE\n> table_B.id = table_A.id\n> ;\n>\n\nI frequently get updates involving a FROM clause wrong --- the resulting \ntable is correct but the running time is quadratic. You might want to \ntry a series of smaller examples to see if your query displays this \nbehaviour.\n\nMark Thornton\n\n",
"msg_date": "Fri, 24 Jun 2011 13:52:43 +0100",
"msg_from": "Mark Thornton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Mark Thornton <[email protected]> wrote:\n> On 23/06/11 16:05, Harry Mantheakis wrote:\n \n>> UPDATE\n>> table_A\n>> [ ... ]\n>> FROM\n>> table_B\n>> WHERE\n>> table_B.id = table_A.id\n \n> I frequently get updates involving a FROM clause wrong --- the\n> resulting table is correct but the running time is quadratic.\n \nThe most frequent way I've seen that happen is for someone to do:\n \nUPDATE table_A\n [ ... ]\n FROM table_A a, table_B b\n WHERE b.id = a.id\n \nBecause a FROM clause on an UPDATE statement is not in the standard,\ndifferent products have implemented this differently. In Sybase ASE\nor Microsoft SQL Server you need to do the above to alias table_A,\nand the two references to table_A are treated as one. In PostgreSQL\nthis would be two separate references and you would effectively be\ndoing the full update of all rows in table_A once for every row in\ntable_A. I don't think that is happening here based on the plan\nposted earlier in the thread.\n \n-Kevin\n",
"msg_date": "Fri, 24 Jun 2011 08:43:08 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Harry Mantheakis <[email protected]> wrote:\n \n> IOW how do I convert - guesstimate! - these numbers into\n> (plausible) time values?\n \nThey are abstract and it only matters that they are in the right\nratios to one another so that the planner can accurately pick the\ncheapest plan. With the default settings, seq_page_cost is 1, so if\neverything is tuned perfectly, the run time should match the time it\ntakes to sequentially read a number of pages (normally 8KB) which\nmatches the estimated cost. So with 8KB pages and seq_page_cost =\n1, the cost number says it should take the same amount of time as a\nsequential read of 130 GB.\n \nThe biggest reason this won't be close to actual run time is that is\nthat the planner just estimates the cost of *getting to* the correct\ntuples for update, implicitly assuming that the actual cost of the\nupdates will be the same regardless of how you find the tuples to be\nupdated. So if your costs were set in perfect proportion to\nreality, with seq_page_cost = 1, the above would tell you how fast a\nSELECT of the data to be updated should be. The cost numbers don't\nreally give a clue about the time to actually UPDATE the rows.\n \n-Kevin\n",
"msg_date": "Fri, 24 Jun 2011 09:00:23 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Harry Mantheakis <[email protected]> wrote:\n \n>> It will be a lot faster if you can drop all indices...\n> \n> This is counter-intuitive - because the WHERE clause is matching\n> the only two indexed fields, and my understanding is that querying\n> on indexed fields is faster than querying on fields that are not\n> indexed.\n \nBecause your UPDATE requires reading every tuple in every page of\nboth tables, it would be counter-productive to use the indexes. \nRandom access is much slower than sequential, so it's fastest to\njust blast through both tables sequentially. The plan you showed\nhas it scanning through table_B and loading the needed data into RAM\nin a hash table, then scanning through table_A and updating each row\nbased on what is in the RAM hash table.\n \nFor each row updated, if it isn't a HOT update, a new entry must be\ninserted into every index on table_A, so dropping the indexes before\nthe update and re-creating them afterward would probably be a very\ngood idea if you're going to do the whole table in one go, and\npossibly even if you're working in smaller chunks.\n \nOne thing which might help run time a lot, especially since you\nmentioned having a lot of unused RAM, is to run the update with a\nvery hight work_mem setting in the session running the UPDATE.\n \n> (Note also, that the indexed field is NOT being updated.)\n \nThat's one of the conditions for a HOT update. The other is that\nthere is space available on the same page for a new version of the\nrow, in addition to the old version.\n \n-Kevin\n",
"msg_date": "Fri, 24 Jun 2011 10:12:10 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Thank you so much for all the feedback, Kevin - much appreciated.\n\nI have stopped the all-in-one-go update from executing, and now I am \nexecuting a series of statements, each constrained to update no more \nthan 100,000 records at a time.\n\nInteresting fact: updating 100,000 rows takes 5 seconds. Quick.\n\nI tried updating 1 million rows in one go, and the statement was still \nrunning after 25 minutes, before I killed it!\n\nSo I am now executing an SQL script with almost 800 separate \nupdate-statements, each set to update 100K records, and the thing is \ntrundling along nicely. (I am fortunate enough to be surrounded by \nMatLab users, one of whom generated the 800-statement file in one minute \nflat!)\n\nMany thanks again for all the info.\n\nKind regards\n\nHarry Mantheakis\nLondon, UK\n\n",
"msg_date": "Fri, 24 Jun 2011 17:29:41 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "\n > try a series of smaller examples...\n\nMark, that was the tip the saved me!\n\nMany thanks.\n\nHarry Mantheakis\nLondon, UK\n\n",
"msg_date": "Fri, 24 Jun 2011 17:31:38 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "Harry Mantheakis <[email protected]> wrote:\n \n> I have stopped the all-in-one-go update from executing, and now I\n> am executing a series of statements, each constrained to update no\n> more than 100,000 records at a time.\n> \n> Interesting fact: updating 100,000 rows takes 5 seconds. Quick.\n \nOne last thing -- all these updates, included the aborted attempt at\na single-pass update, may cause significant bloat in both the heap\nand the index(es). I usually finish up with a CLUSTER on the table,\nfollowed by a VACUUM ANALYZE on the table.\n \n-Kevin\n",
"msg_date": "Fri, 24 Jun 2011 11:41:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update"
},
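In this case that would look something like the following sketch, assuming table_A_pkey is the primary-key index name (the CLUSTER ... USING form needs 8.4 or later):

-- Rewrite table_A in index order, discarding the dead row versions left
-- behind by the updates, then refresh the planner statistics.
CLUSTER table_A USING table_A_pkey;
VACUUM ANALYZE table_A;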
{
"msg_contents": "I called VACUUM on the database after abandoning the original all-in-one \nupdate.\n\nAnd I have a daily cron script that executes the following statement:\n\nsudo -u postgres /usr/bin/vacuumdb -U postgres --all --analyze\n\nBut I had not considered using CLUSTER - I will certainly look into that.\n\nThanks again.\n\nHarry Mantheakis\nLondon, UK\n\n",
"msg_date": "Fri, 24 Jun 2011 17:49:55 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Long Running Update"
},
{
"msg_contents": "I am glad to report that the 'salami-slice' approach worked nicely - all \ndone in about 2.5 hours.\n\nInstead of using an all-in-one-go statement, we executed 800 statements, \neach updating 100,000 records. On average it tool about 10-seconds for \neach statement to return.\n\nThis is \"thinking out of the box\" solution, which others might not be \nable to emulate.\n\nThe mystery remains, for me: why updating 100,000 records could complete \nin as quickly as 5 seconds, whereas an attempt to update a million \nrecords was still running after 25 minutes before we killed it?\n\nOne thing remains crystal clear: I love Postgresql :-)\n\nKind regards\n\nHarry Mantheakis\nLondon, UK\n\n\nOn 23/06/2011 16:05, Harry Mantheakis wrote:\n> Hello\n>\n> I am attempting to run an update statement that copies two fields from \n> one table to another:\n>\n>\n> UPDATE\n> table_A\n> SET\n> (\n> field_1\n> , field_2\n> ) = (\n> table_B.field_1\n> , table_B.field_2\n> )\n> FROM\n> table_B\n> WHERE\n> table_B.id = table_A.id\n> ;\n>\n>\n> Table \"table_B\" contains almost 75 million records, with IDs that \n> match those in \"table_A\".\n>\n> Both \"field_1\" and \"field_2\" are DOUBLE PRECISION. The ID fields are \n> SERIAL primary-key integers in both tables.\n>\n> I tested (the logic of) this statement with a very small sample, and \n> it worked correctly.\n>\n> The database runs on a dedicated Debian server in our office.\n>\n> I called both VACUUM and ANALYZE on the databased before invoking this \n> statement.\n>\n> The statement has been running for 18+ hours so far.\n>\n> TOP, FREE and VMSTAT utilities indicate that only about half of the \n> 6GB of memory is being used, so I have no reason to believe that the \n> server is struggling.\n>\n> My question is: can I reasonably expect a statement like this to \n> complete with such a large data-set, even if it takes several days?\n>\n> We do not mind waiting, but obviously we do not want to wait \n> unnecessarily.\n>\n> Many thanks.\n>\n> Harry Mantheakis\n> London, UK\n>\n>\n>\n",
"msg_date": "Mon, 27 Jun 2011 16:02:02 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Long Running Update - My Solution"
},
{
"msg_contents": "Harry Mantheakis <[email protected]> wrote:\n \n> I am glad to report that the 'salami-slice' approach worked nicely\n> - all done in about 2.5 hours.\n \nGlad to hear it!\n \n> The mystery remains, for me: why updating 100,000 records could\n> complete in as quickly as 5 seconds, whereas an attempt to update\n> a million records was still running after 25 minutes before we\n> killed it?\n \nIf you use EXPLAIN with both statements (without ANALYZE, since you\nprobably don't want to trigger an actual *run* of the statement), is\nthere a completely different plan for the range covering each? If\nso, a severe jump like that might mean that your costing parameters\ncould use some adjustment, so that it switches from one plan to the\nother closer to the actual break-even point.\n \n> One thing remains crystal clear: I love Postgresql :-)\n \n:-)\n \n-Kevin\n",
"msg_date": "Mon, 27 Jun 2011 10:12:25 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update - My Solution"
},
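One way to probe the second half of that suggestion - whether different cost settings move the switch-over point - is to re-plan the large slice under session-local settings. A sketch, with column names as in the original statement; the values are illustrative, not recommendations:

SET random_page_cost = 2.0;   -- default is 4.0
SET work_mem = '256MB';       -- larger hash/sort budget for this session

-- Planning only, nothing is executed.
EXPLAIN UPDATE table_A
   SET field_1 = table_B.field_1, field_2 = table_B.field_2
  FROM table_B
 WHERE table_B.id >= 0 AND table_B.id <= 1000000
   AND table_B.id = table_A.id;

RESET random_page_cost;
RESET work_mem;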
{
"msg_contents": "> The mystery remains, for me: why updating 100,000 records could complete\n> in as quickly as 5 seconds, whereas an attempt to update a million\n> records was still running after 25 minutes before we killed it?\n\nHi, there's a lot of possible causes. Usually this is caused by a plan\nchange - imagine for example that you need to sort a table and the amount\nof data just fits into work_mem, so that it can be sorted in memory. If\nyou need to perform the same query with 10x the data, you'll have to sort\nthe data on disk. Which is way slower, of course.\n\nAnd there are other such problems ...\n\n> One thing remains crystal clear: I love Postgresql :-)\n\nregards\nTomas\n\n",
"msg_date": "Mon, 27 Jun 2011 17:37:43 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update - My Solution"
},
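The spill is visible directly in EXPLAIN ANALYZE output via the "Sort Method" line (shown since 8.3). A sketch of the comparison, reusing the tables from this thread with illustrative work_mem values:

SET work_mem = '1MB';
EXPLAIN ANALYZE
SELECT field_1 FROM table_B WHERE id <= 1000000 ORDER BY field_1;
-- typically reports something like: Sort Method: external merge Disk: ...kB

SET work_mem = '256MB';
EXPLAIN ANALYZE
SELECT field_1 FROM table_B WHERE id <= 1000000 ORDER BY field_1;
-- typically reports something like: Sort Method: quicksort Memory: ...kB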
{
"msg_contents": "On Mon, Jun 27, 2011 at 5:37 PM, <[email protected]> wrote:\n>> The mystery remains, for me: why updating 100,000 records could complete\n>> in as quickly as 5 seconds, whereas an attempt to update a million\n>> records was still running after 25 minutes before we killed it?\n>\n> Hi, there's a lot of possible causes. Usually this is caused by a plan\n> change - imagine for example that you need to sort a table and the amount\n> of data just fits into work_mem, so that it can be sorted in memory. If\n> you need to perform the same query with 10x the data, you'll have to sort\n> the data on disk. Which is way slower, of course.\n>\n> And there are other such problems ...\n\nI would rather assume it is one of the \"other problems\", typically\nrelated to handling the TX (e.g. checkpoints, WAL, creating copies of\nmodified records and adjusting indexes...).\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 27 Jun 2011 21:29:21 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update - My Solution"
},
{
"msg_contents": "Harry Mantheakis wrote:\n> The mystery remains, for me: why updating 100,000 records could \n> complete in as quickly as 5 seconds, whereas an attempt to update a \n> million records was still running after 25 minutes before we killed it?\n\nThe way you were doing this originally, it was joining every record in \ntable A against every record in table B, finding the matches (note the \nsequential scans on each in the query plan you showed). Having A * B \npossible matches there was using up a bunch of resources to line those \ntwo up for an efficient join, and it's possible that parts of that \nrequired spilling working data over to disk and other expensive \noperations. And you were guaranteeing that every row in each table was \nbeing processed in some way.\n\nNow, when you only took a small slice of A instead, and a small slice of \nB to match, this was likely using an index and working with a lot less \nrows in total--only ones in B that mattered were considered, not every \none in B. And each processing slice was working on less rows, making it \nmore likely to fit in memory, and thus avoiding both slow spill to disk \noperation and work that was less likely to fit into the system cache.\n\nI don't know exactly how much of each of these two components went into \nyour large run-time difference, but I suspect both were involved. The \nway the optimizer switches to using a sequential scan when doing bulk \noperations is often the right move. But if it happens in a way that \ncauses the set of data to be processed to become much larger than RAM, \nit can be a bad decision. The performance drop when things stop fitting \nin memory is not a slow one, it's like a giant cliff you fall off.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Mon, 27 Jun 2011 20:37:08 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Long Running Update - My Solution"
},
{
"msg_contents": "Hello Kevin\n\n > If you use EXPLAIN with both statements...\n\nYes, the plans are indeed very different.\n\nHere is the statement, set to update up to 100,000 records, which took \nabout 5 seconds to complete:\n\n\nUPDATE\n table_A\nSET\n field_1 = table_B.field_1\n, field_2 = table_B.field_2\nFROM\n table_B\nWHERE\n table_B.tb_id >= 0\nAND\n table_B.tb_id <= 100000\nAND\n table_B.tb_id = table_A.ta_id\n;\n\n\nThe query plan for the above is:\n\n\nNested Loop (cost=0.00..2127044.47 rows=73620 width=63)\n -> Index Scan using table_B_pkey on table_B (cost=0.00..151830.75 \nrows=73620 width=20)\n Index Cond: ((tb_id >= 0) AND (tb_id <= 100000))\n -> Index Scan using table_A_pkey on table_A (cost=0.00..26.82 \nrows=1 width=47)\n Index Cond: (table_A.ta_id = table_B.tb_id)\n\n\nNow, if I change the first AND clause to update 1M records, as follows:\n\n\ntable_B.id <= 1000000\n\n\nI get the following - quite different - query plan:\n\n\nHash Join (cost=537057.49..8041177.88 rows=852150 width=63)\n Hash Cond: (table_A.ta_id = table_B.tb_id)\n -> Seq Scan on table_A (cost=0.00..3294347.71 rows=145561171 width=47)\n -> Hash (cost=521411.62..521411.62 rows=852150 width=20)\n -> Bitmap Heap Scan on table_B (cost=22454.78..521411.62 \nrows=852150 width=20)\n Recheck Cond: ((tb_id >= 0) AND (tb_id <= 1000000))\n -> Bitmap Index Scan on table_B_pkey \n(cost=0.00..22241.74 rows=852150 width=0)\n Index Cond: ((tb_id >= 0) AND (tb_id <= 1000000))\n\n\nNote: When I tried updating 1M records, the command was still running \nafter 25 minutes before I killed it.\n\nThe sequential scan in the later plan looks expensive, and (I think) \nsupports what others have since mentioned, namely that when the \noptimizer moves to using sequential scans (working off the disk) things \nget a lot slower.\n\nFor me, the penny has finally dropped on why I should use EXPLAIN for \nbulk operations.\n\nThanks too, to Greg Smith, Robert Klemme and Thomas for all the feedback.\n\nKind regards\n\nHarry Mantheakis\nLondon, UK\n\n\n\n",
"msg_date": "Tue, 28 Jun 2011 10:48:57 +0100",
"msg_from": "Harry Mantheakis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Long Running Update - My Solution"
}
] |
[
{
"msg_contents": "Hi,\n\nHas anyone measured the cost of creating empty WAL segments while the\ndatabase is running? \n\nActually, when is the new file created? Just after one segment is filled\nup, or some time before then? What happens during WAL segment creation?\nIf there are pending transactions to be committed, do we see a delay?\n\nI was looking at how Oracle manages this, and I was told that you can\ncreate empty segments during installation, so I'm wondering whether it\nmight be a good addition to initdb or not:\n\ninitdb -S 30 -- create 30 empty segments during initdb\n\nRegards,\n-- \nDevrim GÜNDÜZ\nPrincipal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz",
"msg_date": "Fri, 24 Jun 2011 17:43:14 +0300",
"msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cost of creating an emply WAL segment"
},
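For what it's worth, segment consumption can be watched from plain SQL while a load is running (both functions exist in 8.4); run this repeatedly and note how often the file name changes:

-- Current WAL write position and the segment file it falls into.
SELECT pg_current_xlog_location() AS wal_location,
       pg_xlogfile_name(pg_current_xlog_location()) AS segment_file;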
{
"msg_contents": "On 06/24/2011 10:43 AM, Devrim GÜNDÜZ wrote:\n> Has anyone measured the cost of creating empty WAL segments while the\n> database is running?\n>\n> Actually, when is the new file created? Just after one segment is filled\n> up, or some time before then? What happens during WAL segment creation?\n> If there are pending transactions to be committed, do we see a delay?\n> \n\nExcellent set of questions. Yes, and it can be disturbingly high on a \nfresh system that's immediately hit with lots of transactions, or on one \nwhere checkpoint_segments was just raised a lot and then nailed with \nactivity. The problem also pops up in an even worse way when you have a \nserver that's overrun its normal checkpoint finish time, usually due to \na slow sync phase. That then also results in creation of new \nsegments--necessary because all of the old ones that would normally be \nrecycled already have been recycled--and that work is now competing \nagainst the already slow checkpoint writes, and backends running their \nown fsync too just to make the mix extra exciting.\n\nI know the server code does try to stay a bit ahead of this problem, by \ncreating one segment in advance under conditions I forget the details \nof, to reduce the odds a client will actually hit a delay there. It \nhelps the average case. But since it doesn't do much for the worst-case \nones people that make my life miserable, working on that doesn't make my \nlife easier; therefore I don't.\n\nThe main argument in favor of pre-populating WAL segments early after \nthe database starts is that it would probably be a small cheat on some \nbenchmarks, moving a bit of work that happens during the test to happen \nbefore then instead when it isn't counted. But I've never been excited \nabout the idea of creating empty segments near initdb time for 3 reasons:\n\n-Default postgresql.conf at initdb time has the pathetically small 3 \ncheckpoint_segments, so it won't actually work unless we increase it first.\n-Most servers go through a data loading stage before they hit production \nthat takes care of this anyway.\n-The worst problems I've ever seen in this area, by far (as in: at \nleast 100X worst than what you're asking about), are when new segments \nare created due to heavy write activity exhausting the list of ones to \nbe recycled during a checkpoint.\n\nTo give you an example of *that*, here is the new record-setting slow \ncheckpoint I just found in my inbox this morning, from a system that's \nbeen instrumented for its increasing checkpoint issues the last month:\n\ncheckpoint complete: wrote 63297 buffers (6.0%); 0 transaction log \nfile(s) added, 938 removed, 129 recycled; write=250.384 s, \nsync=14525.296 s, total=14786.868 s\n\nHere checkpoint_segments=64 and shared_buffers=8GB. The fact that this \ncheckpoint hit the \"create a new empty WAL segment\" code 938 times \nduring its 4 hour checkpoint sync phase is much more troubling than the \npauses people run into when making a new segments on a fresh server. I \nwould argue that if your database is new enough to not have populated a \nfull set of checkpoint_segments yet, it's also new enough that it can't \npossibly have enough data in it yet for that to really be a problem.\n\n(Note for hackers following along this far: I strongly suspect today's \nlatest pathological example is the sort improved by the \"compact fsync \nrequest\" feature added to 9.1. 
I'm now seeing so many of these in the \nfield that I'm collecting up data to support the idea of backporting \nthat in 8.3-9.0, as a bug fix because this turns into a \"system is too \nslow to be considered operational\" performance problem when it flares \nup. This was a rare thing I'd never seen before back in September when \nI started working on this area again; now I see it once a month on a new \nsystem somewhere.)\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 24 Jun 2011 12:18:49 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost of creating an emply WAL segment"
},
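A crude way to watch for this pattern on 8.3 and later is the pg_stat_bgwriter view - a sketch, with the rough interpretation that buffers_backend growing much faster than buffers_checkpoint plus buffers_clean suggests backends are doing their own writes:

SELECT checkpoints_timed,   -- checkpoints started by checkpoint_timeout
       checkpoints_req,     -- checkpoints requested (e.g. by xlog consumption)
       buffers_checkpoint,  -- buffers written during checkpoints
       buffers_clean,       -- buffers written by the background writer
       maxwritten_clean,    -- bgwriter scans stopped at the write limit
       buffers_backend,     -- buffers written directly by backends
       buffers_alloc        -- buffers allocated
  FROM pg_stat_bgwriter;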
{
"msg_contents": "On 06/24/2011 11:18 AM, Greg Smith wrote:\n\n> sync=14525.296 s, total=14786.868 s\n\nWhaaaaaaaaat!? 6% of 8GB is just shy of 500MB. That's not a small \namount, exactly, but it took 14525 seconds to call syncs for those \nwrites? What kind of ridiculous IO would cause something like that? \nThat's even way beyond an OS-level dirty buffer flush on a massive \nsystem. Wow!\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Fri, 24 Jun 2011 13:55:47 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost of creating an emply WAL segment"
},
{
"msg_contents": "On 06/24/2011 02:55 PM, Shaun Thomas wrote:\n> On 06/24/2011 11:18 AM, Greg Smith wrote:\n>\n>> sync=14525.296 s, total=14786.868 s\n>\n> Whaaaaaaaaat!? 6% of 8GB is just shy of 500MB. That's not a small \n> amount, exactly, but it took 14525 seconds to call syncs for those \n> writes? What kind of ridiculous IO would cause something like that? \n> That's even way beyond an OS-level dirty buffer flush on a massive \n> system. Wow!\n\nIt is in fact smaller than the write cache on the disk array involved. \nThe mystery is explained at \nhttp://projects.2ndquadrant.it/sites/default/files/WriteStuff-PGCon2011.pdf\n\nThe most relevant part:\n\n-Background writer stop working normally while running sync\n-Never pauses to fully consume the fsync queues backends fill\n-Once filled, all backend writes do their own fsync\n-Serious competition for the checkpoint writes\n\nWhen the background writer's fsync queue fills, and you have 100 clients \nall doing their own writes and making an fsync call after each one of \nthem (the case on this server), the background writer ends up only \ngetting around 1/100 of the I/O capabilities of the server available in \nits time slice. And that's how a sync phase that might normally take \ntwo minutes on a really busy server ends up running for hours instead. \nThe improvement in 9.1 gets the individual backends involved in trying \nto compact the fsync queue when they discover it is full, which seems to \nmake the worst case behavior here much better.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 24 Jun 2011 19:26:31 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost of creating an emply WAL segment"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI'm having trouble getting rid of a sequential scan on a table with roughly\n120k entries it. Creation of an index on that particular column which\ntriggers the sequential scan doesn't do anything, VACUUM and ANALYZE has\nbeen done on the table.\n\nThe table in question has the following definition:\n\n Column | Type |\n Modifiers\n--------------------+--------------------------+------------------------------------------------------------------\n post_id | bigint | not null default\nnextval('posts_post_id_seq'::regclass)\n forum_id | bigint | not null\n threadlink | character varying(255) | not null\n timestamp | timestamp with time zone | not null\n poster_id | bigint |\n thread_id | bigint | not null\n subject | text | not null\n text | text | not null\n postername | character varying(255) |\n internal_post_id | bigint | not null default\nnextval('posts_internal_post_id_seq'::regclass)\n internal_thread_id | bigint |\nIndexes:\n \"posts_pkey\" PRIMARY KEY, btree (internal_post_id)\n \"posts_forum_id_key\" UNIQUE, btree (forum_id, post_id)\n \"idx_internal_thread_id\" btree (internal_thread_id)\n \"idx_posts_poster_id\" btree (poster_id)\nForeign-key constraints:\n \"posts_forum_id_fkey\" FOREIGN KEY (forum_id) REFERENCES forums(forum_id)\n \"posts_internal_thread_id_fkey\" FOREIGN KEY (internal_thread_id)\nREFERENCES threads(internal_thread_id)\n \"posts_poster_id_fkey\" FOREIGN KEY (poster_id) REFERENCES\nposters(poster_id)\n\nThe query is this:\n\nSELECT threads.internal_thread_id AS threads_internal_thread_id,\nthreads.forum_id AS threads_forum_id, threads.thread_id AS\nthreads_thread_id, threads.title AS threads_title, threads.poster_id AS\nthreads_poster_id, threads.postername AS threads_postername,\nthreads.category AS threads_category, threads.posttype AS threads_posttype\n\n\nFROM threads JOIN posts ON threads.internal_thread_id =\nposts.internal_thread_id JOIN posters ON posts.poster_id = posters.poster_id\nJOIN posters_groups AS posters_groups_1 ON posters.poster_id =\nposters_groups_1.poster_id JOIN groups ON groups.group_id =\nposters_groups_1.group_id WHERE groups.group_id = 4 ORDER BY posts.timestamp\nDESC;\n\nThe query plan (with an explain analyze) gives me the following:\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=13995.93..14006.63 rows=4279 width=108) (actual\ntime=79.927..79.947 rows=165 loops=1)\n Sort Key: posts.\"timestamp\"\n Sort Method: quicksort Memory: 50kB\n -> Nested Loop (cost=6.97..13737.84 rows=4279 width=108) (actual\ntime=0.605..79.693 rows=165 loops=1)\n -> Seq Scan on groups (cost=0.00..1.05 rows=1 width=8) (actual\ntime=0.013..0.014 rows=1 loops=1)\n Filter: (group_id = 4)\n -> Nested Loop (cost=6.97..13694.00 rows=4279 width=116) (actual\ntime=0.587..79.616 rows=165 loops=1)\n -> Hash Join (cost=6.97..12343.10 rows=4279 width=24)\n(actual time=0.568..78.230 rows=165 loops=1)\n Hash Cond: (posts.poster_id = posters.poster_id)\n -> Seq Scan on posts (cost=0.00..11862.12 rows=112312\nwidth=24) (actual time=0.019..60.092 rows=112312 loops=1)\n -> Hash (cost=6.79..6.79 rows=14 width=24) (actual\ntime=0.101..0.101 rows=14 loops=1)\n -> Hash Join (cost=2.14..6.79 rows=14 width=24)\n(actual time=0.060..0.093 rows=14 loops=1)\n Hash Cond: (posters.poster_id =\nposters_groups_1.poster_id)\n -> Seq Scan on posters (cost=0.00..3.83\nrows=183 width=8) (actual time=0.006..0.023 
rows=185 loops=1)\n                           ->  Hash  (cost=1.96..1.96 rows=14\nwidth=16) (actual time=0.025..0.025 rows=14 loops=1)\n                                 ->  Seq Scan on posters_groups\nposters_groups_1  (cost=0.00..1.96 rows=14 width=16) (actual\ntime=0.016..0.021 rows=14 loops=1)\n                                       Filter: (group_id = 4)\n               ->  Index Scan using threads_pkey on threads\n (cost=0.00..0.30 rows=1 width=100) (actual time=0.006..0.007 rows=1\nloops=165)\n                     Index Cond: (threads.internal_thread_id =\nposts.internal_thread_id)\n Total runtime: 80.137 ms\n(20 rows)\n\nSo the big time lost is in this line:\n\nSeq Scan on posts  (cost=0.00..11862.12 rows=112312 width=24) (actual\ntime=0.019..60.092 rows=112312 loops=1)\n\nwhich I can understand why it slow ;)\n\nBut I haven't yet managed to convert the Seq Scan into an Index Scan, and\nI'm not sure how to continue there.\n\nAs I am not a big expert on psql optimization, any input would be greatly\nappreciated.\n\nBest regards,\nJens\n",
"msg_date": "Mon, 27 Jun 2011 14:46:46 +0200",
"msg_from": "Jens Hoffrichter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Getting rid of a seq scan in query on a large table"
},
{
"msg_contents": "Jens Hoffrichter <[email protected]> wrote:\n \n> I'm having trouble getting rid of a sequential scan on a table\n> with roughly 120k entries it.\n \nPlease post your configuration information and some information\nabout your hardware and OS.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nSince the table scan went through about 120000 rows in 60 ms, it is\nclear that your data is heavily cached, so random_page_cost should\nprobably be close to or equal to seq_page_cost, and that value\nshould probably be somewhere around 0.1 to 0.5. You should have\neffective_cache_size set to the sum of shared_buffers plus whatever\nyour OS cache is. I have sometimes found that I get faster plans\nwith cpu_tuple_cost increased.\n \nIf such tuning does cause it to choose the plan you expect, be sure\nto time it against what you have been getting. If the new plan is\nslower, you've taken the adjustments too far.\n \n-Kevin\n",
"msg_date": "Mon, 27 Jun 2011 09:30:23 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Getting rid of a seq scan in query on a large\n\t table"
},
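A sketch of trying those adjustments per session before touching postgresql.conf; the numbers are illustrative starting points only, the column list is abbreviated to threads.*, and effective_cache_size should reflect the real shared_buffers plus OS cache on the box:

SET seq_page_cost = 0.1;           -- data is fully cached...
SET random_page_cost = 0.1;        -- ...so random reads cost about the same
SET effective_cache_size = '4GB';  -- assumed figure for this machine
SET cpu_tuple_cost = 0.03;         -- default is 0.01

EXPLAIN ANALYZE
SELECT threads.*
  FROM threads
  JOIN posts             ON threads.internal_thread_id = posts.internal_thread_id
  JOIN posters           ON posts.poster_id = posters.poster_id
  JOIN posters_groups pg ON posters.poster_id = pg.poster_id
  JOIN groups            ON groups.group_id = pg.group_id
 WHERE groups.group_id = 4
 ORDER BY posts.timestamp DESC;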
{
"msg_contents": "----- Forwarded Message -----\n>From: Denis de Bernardy <[email protected]>\n>To: Jens Hoffrichter <[email protected]>\n>Sent: Tuesday, June 28, 2011 12:59 AM\n>Subject: Re: [PERFORM] Getting rid of a seq scan in query on a large table\n>\n>\n>> Hash Cond: (posts.poster_id = posters.poster_id)\n>\n>> -> Seq Scan on posts (cost=0.00..11862.12 rows=112312 width=24) (actual time=0.019..60.092 rows=112312 loops=1)\n>\n>\n>Unless I am mistaking, you've very few poster ids in there (since the two rows arguments are equal). The Postgres planner will identify this and just seq scan the whole thing instead of bothering to randomly access the rows one by one using the index. This looks like a case where you actually do not want it to use an index scan -- doing so will be slower.\n>\n>\n>D\n>\n>\n>\n>\n>\n>\n>\n>>________________________________\n>>From: Jens Hoffrichter <[email protected]>\n>>To: [email protected]\n>>Sent: Monday, June 27, 2011 2:46 PM\n>>Subject: [PERFORM] Getting rid of a seq scan in query on a large table\n>>\n>>\n>>Hi everyone,\n>>\n>>\n>>I'm having trouble getting rid of a sequential scan on a table with roughly 120k entries it. Creation of an index on that particular column which triggers the sequential scan doesn't do anything, VACUUM and ANALYZE has been done on the table.\n>>\n>>\n>>The table in question has the following definition:\n>>\n>>\n>> Column | Type | Modifiers\n>>--------------------+--------------------------+------------------------------------------------------------------\n>> post_id | bigint | not null default nextval('posts_post_id_seq'::regclass)\n>> forum_id | bigint | not null\n>> threadlink | character varying(255) | not null\n>> timestamp | timestamp with time zone | not null\n>> poster_id | bigint |\n>> thread_id | bigint | not null\n>> subject | text | not null\n>> text | text | not null\n>> postername | character varying(255) |\n>> internal_post_id | bigint | not null default nextval('posts_internal_post_id_seq'::regclass)\n>> internal_thread_id | bigint |\n>>Indexes:\n>> \"posts_pkey\" PRIMARY KEY, btree (internal_post_id)\n>> \"posts_forum_id_key\" UNIQUE, btree (forum_id, post_id)\n>> \"idx_internal_thread_id\" btree (internal_thread_id)\n>> \"idx_posts_poster_id\" btree (poster_id)\n>>Foreign-key constraints:\n>> \"posts_forum_id_fkey\" FOREIGN KEY (forum_id) REFERENCES forums(forum_id)\n>> \"posts_internal_thread_id_fkey\" FOREIGN KEY (internal_thread_id) REFERENCES threads(internal_thread_id)\n>> \"posts_poster_id_fkey\" FOREIGN KEY (poster_id) REFERENCES posters(poster_id)\n>>\n>>\n>>The query is this:\n>>\n>>\n>>SELECT threads.internal_thread_id AS threads_internal_thread_id, threads.forum_id AS threads_forum_id, threads.thread_id AS threads_thread_id, threads.title AS threads_title, threads.poster_id AS threads_poster_id, threads.postername AS threads_postername, threads.category AS threads_category, threads.posttype AS threads_posttype FROM threads JOIN posts ON threads.internal_thread_id = posts.internal_thread_id JOIN posters ON posts.poster_id = posters.poster_id JOIN posters_groups AS posters_groups_1 ON posters.poster_id = posters_groups_1.poster_id JOIN groups ON groups.group_id = posters_groups_1.group_id WHERE groups.group_id = 4 ORDER BY posts.timestamp DESC;\n>>\n>>\n>>The query plan (with an explain analyze) gives me the following:\n>>\n>>\n>> QUERY 
PLAN\n>>----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort  (cost=13995.93..14006.63 rows=4279 width=108) (actual time=79.927..79.947 rows=165 loops=1)\n>>   Sort Key: posts.\"timestamp\"\n>>   Sort Method:  quicksort  Memory: 50kB\n>>   ->  Nested Loop  (cost=6.97..13737.84 rows=4279 width=108) (actual time=0.605..79.693 rows=165 loops=1)\n>>         ->  Seq Scan on groups  (cost=0.00..1.05 rows=1 width=8) (actual time=0.013..0.014 rows=1 loops=1)\n>>               Filter: (group_id = 4)\n>>         ->  Nested Loop  (cost=6.97..13694.00 rows=4279 width=116) (actual time=0.587..79.616 rows=165 loops=1)\n>>               ->  Hash Join  (cost=6.97..12343.10 rows=4279 width=24) (actual time=0.568..78.230 rows=165 loops=1)\n>>                     Hash Cond: (posts.poster_id = posters.poster_id)\n>>                     ->  Seq Scan on posts  (cost=0.00..11862.12 rows=112312 width=24) (actual time=0.019..60.092 rows=112312 loops=1)\n>>                     ->  Hash  (cost=6.79..6.79 rows=14 width=24) (actual time=0.101..0.101 rows=14 loops=1)\n>>                           ->  Hash Join  (cost=2.14..6.79 rows=14 width=24) (actual time=0.060..0.093 rows=14 loops=1)\n>>                                 Hash Cond: (posters.poster_id = posters_groups_1.poster_id)\n>>                                 ->  Seq Scan on posters  (cost=0.00..3.83 rows=183 width=8) (actual time=0.006..0.023 rows=185 loops=1)\n>>                                 ->  Hash  (cost=1.96..1.96 rows=14 width=16) (actual time=0.025..0.025 rows=14 loops=1)\n>>                                       ->  Seq Scan on posters_groups posters_groups_1  (cost=0.00..1.96 rows=14 width=16) (actual time=0.016..0.021 rows=14 loops=1)\n>>                                             Filter: (group_id = 4)\n>>               ->  Index Scan using threads_pkey on threads  (cost=0.00..0.30 rows=1 width=100) (actual time=0.006..0.007 rows=1 loops=165)\n>>                     Index Cond: (threads.internal_thread_id = posts.internal_thread_id)\n>> Total runtime: 80.137 ms\n>>(20 rows)\n>>\n>>\n>>So the big time lost is in this line:\n>>\n>>\n>>Seq Scan on posts  (cost=0.00..11862.12 rows=112312 width=24) (actual time=0.019..60.092 rows=112312 loops=1)\n>>\n>>\n>>which I can understand why it slow ;)\n>>\n>>\n>>But I haven't yet managed to convert the Seq Scan into an Index Scan, and I'm not sure how to continue there.\n>>\n>>\n>>As I am not a big expert on psql optimization, any input would be greatly appreciated.\n>>\n>>\n>>Best regards,\n>>Jens\n>>\n>>\n>\n>\n",
"msg_date": "Mon, 27 Jun 2011 16:00:53 -0700 (PDT)",
"msg_from": "Denis de Bernardy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fw: Getting rid of a seq scan in query on a large table"
}
] |
[
{
"msg_contents": "Hi All,\n\nI am facing some performance issue with insert into some table.\n\nI am using postgres 8.4.x\n\nTable is having 3 before insert trigger and one after insert trigger.\n\nWith all triggers enable it is inserting only 4-5 record per second.\n\nBut if I disable after insert trigger it is able to insert 667 records per\nsecond.\n\nAfter insert trigger is recursive trigger.\n\n\nMy question.\n\nHow to avoid the bottleneck?\n\nParallel processing is possible in Postgres? How?\n\n\nPlease give you suggestion.\n\n-- \nThanks & regards,\nJENISH VYAS\n\nHi All,\nI am facing\nsome performance issue with insert into some table.\n\nI am using\npostgres 8.4.x\n\nTable is having\n3 before insert trigger and one after insert trigger.\n\nWith all\ntriggers enable it is inserting only 4-5 record per second.\n\nBut if I\ndisable after insert trigger it is able to insert 667 records per\nsecond.\nAfter insert\ntrigger is recursive trigger.\n\nMy question.\nHow to avoid\nthe bottleneck?\n\nParallel\nprocessing is possible in Postgres? How?\nPlease give you suggestion. \n-- Thanks & regards,JENISH VYAS",
"msg_date": "Mon, 27 Jun 2011 17:22:58 +0300",
"msg_from": "Jenish <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issue with Insert"
},
{
"msg_contents": "Jenish <[email protected]> wrote:\n \n> I am using postgres 8.4.x\n \nWith x being what? On what OS and hardware?\n \n> Table is having 3 before insert trigger and one after insert\n> trigger.\n> \n> With all triggers enable it is inserting only 4-5 record per\n> second.\n> \n> But if I disable after insert trigger it is able to insert 667\n> records per second.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n> After insert trigger is recursive trigger.\n \nSo are you counting only the top level inserts or also the ones\ngenerated by the recursive inserts?\n \n> My question.\n> \n> How to avoid the bottleneck?\n \nFirst you need to find out what the bottleneck is.\n \n> Parallel processing is possible in Postgres? How?\n \nTo achieve parallel processing in PostgreSQL you need to use\nmultiple connections.\n \n-Kevin\n",
"msg_date": "Mon, 27 Jun 2011 09:37:53 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with Insert"
},
{
"msg_contents": "Hi,\n\nDB : POSTGRES 8.4.8\nOS : Debian\nHD : SAS 10k rpm\n\nShared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM\n\nAfter insert trigger is again calling 2 more trigger and insert record in\nanother table depends on condition.\n\nwith all trigger enable there are 8 insert and 32 updates(approx. update is\ndepends on hierarchy)\n\nPlz explain multiple connections. Current scenario application server is\nsending all requests.\n\n-- \nThanks & regards,\nJENISH VYAS\n\nOn Mon, Jun 27, 2011 at 5:37 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Jenish <[email protected]> wrote:\n>\n> > I am using postgres 8.4.x\n>\n> With x being what? On what OS and hardware?\n>\n> > Table is having 3 before insert trigger and one after insert\n> > trigger.\n> >\n> > With all triggers enable it is inserting only 4-5 record per\n> > second.\n> >\n> > But if I disable after insert trigger it is able to insert 667\n> > records per second.\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> > After insert trigger is recursive trigger.\n>\n> So are you counting only the top level inserts or also the ones\n> generated by the recursive inserts?\n>\n> > My question.\n> >\n> > How to avoid the bottleneck?\n>\n> First you need to find out what the bottleneck is.\n>\n> > Parallel processing is possible in Postgres? How?\n>\n> To achieve parallel processing in PostgreSQL you need to use\n> multiple connections.\n>\n> -Kevin\n>\n\nHi,DB : POSTGRES 8.4.8OS : DebianHD : SAS 10k rpmShared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAMAfter insert trigger is again calling 2 more trigger and insert record in another table depends on condition.\nwith all trigger enable there are 8 insert and 32 updates(approx. update is depends on hierarchy)Plz explain multiple connections. Current scenario application server is sending all requests.\n-- Thanks & regards,JENISH VYASOn Mon, Jun 27, 2011 at 5:37 PM, Kevin Grittner <[email protected]> wrote:\nJenish <[email protected]> wrote:\n\n> I am using postgres 8.4.x\n\nWith x being what? On what OS and hardware?\n\n> Table is having 3 before insert trigger and one after insert\n> trigger.\n>\n> With all triggers enable it is inserting only 4-5 record per\n> second.\n>\n> But if I disable after insert trigger it is able to insert 667\n> records per second.\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n> After insert trigger is recursive trigger.\n\nSo are you counting only the top level inserts or also the ones\ngenerated by the recursive inserts?\n\n> My question.\n>\n> How to avoid the bottleneck?\n\nFirst you need to find out what the bottleneck is.\n\n> Parallel processing is possible in Postgres? How?\n\nTo achieve parallel processing in PostgreSQL you need to use\nmultiple connections.\n\n-Kevin",
"msg_date": "Mon, 27 Jun 2011 18:01:37 +0300",
"msg_from": "Jenish <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue with Insert"
},
{
"msg_contents": "> Hi,\n>\n> DB : POSTGRES 8.4.8\n> OS : Debian\n> HD : SAS 10k rpm\n>\n> Shared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM\n>\n> After insert trigger is again calling 2 more trigger and insert record in\n> another table depends on condition.\n>\n> with all trigger enable there are 8 insert and 32 updates(approx. update\n> is\n> depends on hierarchy)\n\nHi,\n\nit's very difficult to give you reliable recommendations with this little\ninfo, but the triggers are obviously the bottleneck. We have no idea what\nqueries are executed in them, but I guess there are some slow queries.\n\nFind out what queries are executed in the triggers, benchmark each of them\nand make them faster. Just don't forget that those SQL queries are\nexecuted as prepared statements, so they may behave a bit differently than\nplain queries. So use 'PREPARE' and 'EXPLAIN EXECUTE' to tune them.\n\n> Plz explain multiple connections. Current scenario application server is\n> sending all requests.\n\nPostgreSQL does not support parallel queries (i.e. a query distributed on\nmultiple CPUs) so each query may use just a single CPU. If you're CPU\nbound (one CPU is 100% utilized but the other CPUs are idle), you can\nusually parallelize the workload on your own - just use multiple\nconnections.\n\nBut if you're using an application server and there are multiple\nconnections used, this is not going to help you. How many connections are\nactive at the same time? Are the CPUs idle or utilized?\n\nTomas\n\n",
"msg_date": "Mon, 27 Jun 2011 17:32:10 +0200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with Insert"
},
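A sketch of that technique - the table, columns and parameter below are entirely made up, the point is only the PREPARE / EXPLAIN EXECUTE pattern, which mimics how a plpgsql trigger runs its SQL (as a prepared statement with a generic plan):

-- Hypothetical stand-in for one statement inside the trigger.
PREPARE trig_stmt (bigint) AS
  UPDATE some_parent_table             -- placeholder table name
     SET child_count = child_count + 1
   WHERE parent_id = $1;

EXPLAIN EXECUTE trig_stmt(12345);      -- the plan a trigger-style call gets
EXECUTE trig_stmt(12345);              -- time it, e.g. with \timing in psql
DEALLOCATE trig_stmt;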
{
"msg_contents": "Hi,\n\nI have already checked all the statements present in the trigger, no one is\ntaking more then 20 ms.\n\nI am using 8-Processor, Quad-Core Server ,CPU utilization is more then 90-95\n% for all. (htop result)\n\nDB has 960 concurrent users.\n\nio : writing 3-4 MB per second or less (iotop result).\n\nScenario : All insert are waiting for previous insert to complete. Cant\nwe avoid this situation ?\nWhat is the \"max_connections\" postgresql support?\n\nPlz help....\n\n\n-- \nThanks & regards,\nJENISH VYAS\n\n\n\n\n\n\nOn Mon, Jun 27, 2011 at 6:32 PM, <[email protected]> wrote:\n\n> > Hi,\n> >\n> > DB : POSTGRES 8.4.8\n> > OS : Debian\n> > HD : SAS 10k rpm\n> >\n> > Shared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM\n> >\n> > After insert trigger is again calling 2 more trigger and insert record in\n> > another table depends on condition.\n> >\n> > with all trigger enable there are 8 insert and 32 updates(approx. update\n> > is\n> > depends on hierarchy)\n>\n> Hi,\n>\n> it's very difficult to give you reliable recommendations with this little\n> info, but the triggers are obviously the bottleneck. We have no idea what\n> queries are executed in them, but I guess there are some slow queries.\n>\n> Find out what queries are executed in the triggers, benchmark each of them\n> and make them faster. Just don't forget that those SQL queries are\n> executed as prepared statements, so they may behave a bit differently than\n> plain queries. So use 'PREPARE' and 'EXPLAIN EXECUTE' to tune them.\n>\n> > Plz explain multiple connections. Current scenario application server is\n> > sending all requests.\n>\n> PostgreSQL does not support parallel queries (i.e. a query distributed on\n> multiple CPUs) so each query may use just a single CPU. If you're CPU\n> bound (one CPU is 100% utilized but the other CPUs are idle), you can\n> usually parallelize the workload on your own - just use multiple\n> connections.\n>\n> But if you're using an application server and there are multiple\n> connections used, this is not going to help you. How many connections are\n> active at the same time? Are the CPUs idle or utilized?\n>\n> Tomas\n>\n>\n\nHi,I have already checked all the statements present in the trigger, no one is taking more then 20 ms.I am using 8-Processor, Quad-Core Server ,CPU utilization is more then 90-95 % for all. (htop result)\nDB has 960 concurrent users. io : writing 3-4 MB per second or less (iotop result).Scenario : All insert are waiting for previous insert to complete. Cant we avoid this situation ? \nWhat is the \"max_connections\" postgresql support? Plz help....-- Thanks & regards,JENISH VYAS \nOn Mon, Jun 27, 2011 at 6:32 PM, <[email protected]> wrote:\n> Hi,\n>\n> DB : POSTGRES 8.4.8\n> OS : Debian\n> HD : SAS 10k rpm\n>\n> Shared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM\n>\n> After insert trigger is again calling 2 more trigger and insert record in\n> another table depends on condition.\n>\n> with all trigger enable there are 8 insert and 32 updates(approx. update\n> is\n> depends on hierarchy)\n\nHi,\n\nit's very difficult to give you reliable recommendations with this little\ninfo, but the triggers are obviously the bottleneck. We have no idea what\nqueries are executed in them, but I guess there are some slow queries.\n\nFind out what queries are executed in the triggers, benchmark each of them\nand make them faster. 
Just don't forget that those SQL queries are\nexecuted as prepared statements, so they may behave a bit differently than\nplain queries. So use 'PREPARE' and 'EXPLAIN EXECUTE' to tune them.\n\n> Plz explain multiple connections. Current scenario application server is\n> sending all requests.\n\nPostgreSQL does not support parallel queries (i.e. a query distributed on\nmultiple CPUs) so each query may use just a single CPU. If you're CPU\nbound (one CPU is 100% utilized but the other CPUs are idle), you can\nusually parallelize the workload on your own - just use multiple\nconnections.\n\nBut if you're using an application server and there are multiple\nconnections used, this is not going to help you. How many connections are\nactive at the same time? Are the CPUs idle or utilized?\n\nTomas",
"msg_date": "Mon, 27 Jun 2011 18:58:26 +0300",
"msg_from": "Jenish <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue with Insert"
},
{
"msg_contents": "On Mon, Jun 27, 2011 at 9:22 AM, Jenish <[email protected]> wrote:\n> Hi All,\n>\n> I am facing some performance issue with insert into some table.\n>\n> I am using postgres 8.4.x\n>\n> Table is having 3 before insert trigger and one after insert trigger.\n>\n> With all triggers enable it is inserting only 4-5 record per second.\n>\n> But if I disable after insert trigger it is able to insert 667 records per\n> second.\n>\n> After insert trigger is recursive trigger.\n>\n> My question.\n>\n> How to avoid the bottleneck?\n>\n> Parallel processing is possible in Postgres? How?\n>\n> Please give you suggestion.\n\nthis sounds like a coding issue -- to get to the bottom of this we are\ngoing to need to see the table and the triggers.\n\nmerlin\n",
"msg_date": "Mon, 27 Jun 2011 11:12:02 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with Insert"
},
{
"msg_contents": "Dne 27.6.2011 17:58, Jenish napsal(a):\n> \n> Hi,\n> \n> I have already checked all the statements present in the trigger, no one\n> is taking more then 20 ms.\n> \n> I am using 8-Processor, Quad-Core Server ,CPU utilization is more then\n> 90-95 % for all. (htop result)\n\nSo all cores are 95% utilized? That means you're CPU bound and you need\nto fix that somehow.\n\nHow much of that belongs to postgres? Are there other processes\nconsuming significant portion of CPU? And what do you mean by\n'utilized'? Does that mean user/sys time, or wait time?\n\n> DB has 960 concurrent users. \n\nWhad does that mean? Does that mean there are 960 active connections?\n\n> io : writing 3-4 MB per second or less (iotop result).\n\nSequential or random? Post a few lines of 'iostat -x 1' and a few lines\nof 'vmstat 1' (collected when the database is busy).\n\n> Scenario : All insert are waiting for previous insert to complete. Cant\n> we avoid this situation ?\n\nWhat do you mean by 'previous'? Does that mean another insert in the\nsame session (connection), or something performed in another session?\n\n> What is the \"max_connections\" postgresql support? \n\nThat limits number of background processes - each connection is served\nby a dedicated posgres process. You can see that in top / ps output.\n\nHigh values usually mean you need some kind of pooling (you probably\nalready have one as you're using application server). And if the\nconnections are really active (doing something all the time), this\nshould not be significantly higher than the number of cores.\n\nSee, you have 8 cores, which means 8 seconds of CPU time each second. No\nmatter how many connections you allow, you still have just those 8\nseconds. So if you need to perform 100x something that takes 1 second,\nyou need to spend 100 seconds of CPU time. So with those 8 cores, you\ncan do that in about 12,5 seconds.\n\nActually if you create too many connections, you'll notice it takes much\nmore - there's an overhead with process management, context switching,\nlocking etc.\n\nregards\nTomas\n",
"msg_date": "Mon, 27 Jun 2011 21:46:12 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with Insert"
},
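On 8.4 the number of genuinely active connections can be checked with something like this (pg_stat_activity marks idle sessions in its current_query column):

SELECT count(*) AS total_backends,
       sum(CASE WHEN current_query NOT LIKE '<IDLE>%' THEN 1 ELSE 0 END) AS running_a_query,
       sum(CASE WHEN waiting THEN 1 ELSE 0 END) AS blocked_on_locks
  FROM pg_stat_activity;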
{
"msg_contents": "Dne 27.6.2011 17:01, Jenish napsal(a):\n> Hi,\n> \n> DB : POSTGRES 8.4.8\n> OS : Debian\n> HD : SAS 10k rpm\n> \n> Shared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM\n\nHow much data are we talking about? Does that fit into the shared\nbuffers or is it significantly larger? Do the triggers touch the whole\ndatabase or just a small part of it (active part)?\n\nregards\nTomas\n",
"msg_date": "Mon, 27 Jun 2011 21:48:17 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with Insert"
},
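Those sizes are easy to pull from SQL - a sketch (the size functions exist in 8.4; the LIMIT just keeps the list short):

-- Whole-database size, to compare against RAM and shared_buffers.
SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;

-- The largest tables (including their indexes and TOAST data), which is
-- usually where the trigger activity concentrates.
SELECT relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_class c
 WHERE relkind = 'r'
 ORDER BY pg_total_relation_size(c.oid) DESC
 LIMIT 10;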
{
"msg_contents": "Hi ,\n\nThis server is the dedicated database server.\n\n And I am testing the limit for the concurrent active users. When I am\nrunning my test for 400 concurrent user ie. Active connection. I am getting\ngood performance but when I am running the same the same test for 950\nconcurrent users I am getting very bad performance.\n\n\n\n>> Scenario : All insert are waiting for previous insert to complete.\n\nI don’t know whether it is the same session or different session.\n\n\n\nDB id huge but Triggers are not touching the whole database.\n\nI’ll provide the result set of vmstat and iostat tomorrow.\n\n\n-- \nThanks & regards,\nJENISH VYAS\n\n\n\nOn Mon, Jun 27, 2011 at 10:48 PM, Tomas Vondra <[email protected]> wrote:\n\n> Dne 27.6.2011 17:01, Jenish napsal(a):\n> > Hi,\n> >\n> > DB : POSTGRES 8.4.8\n> > OS : Debian\n> > HD : SAS 10k rpm\n> >\n> > Shared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM\n>\n> How much data are we talking about? Does that fit into the shared\n> buffers or is it significantly larger? Do the triggers touch the whole\n> database or just a small part of it (active part)?\n>\n> regards\n> Tomas\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi ,This server is the dedicated database server. \n And I am testing the limit\nfor the concurrent active users. When I am running my test for 400 concurrent\nuser ie. Active connection. I am getting good performance but when I am running\nthe same the same test for 950 concurrent users I am getting very bad\nperformance.\n \n>> Scenario : All insert are\nwaiting for previous insert to complete. \nI don’t know whether it is the same session or different\nsession.\n \nDB id huge but Triggers are not touching the whole database.\nI’ll provide the result set of vmstat and iostat tomorrow.\n-- Thanks & regards,JENISH VYAS On Mon, Jun 27, 2011 at 10:48 PM, Tomas Vondra <[email protected]> wrote:\nDne 27.6.2011 17:01, Jenish napsal(a):\n> Hi,\n>\n> DB : POSTGRES 8.4.8\n> OS : Debian\n> HD : SAS 10k rpm\n>\n> Shared_buffer is 4096 25 % of RAM , effective_cache is 8GB 75% of RAM\n\nHow much data are we talking about? Does that fit into the shared\nbuffers or is it significantly larger? Do the triggers touch the whole\ndatabase or just a small part of it (active part)?\n\nregards\nTomas\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 27 Jun 2011 23:14:39 +0300",
"msg_from": "Jenish <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue with Insert"
},
{
"msg_contents": "Dne 27.6.2011 22:14, Jenish napsal(a):\n> And I am testing the limit for the concurrent active users. When I am\n> running my test for 400 concurrent user ie. Active connection. I am\n> getting good performance but when I am running the same the same test\n> for 950 concurrent users I am getting very bad performance.\n\nThis is typical behaviour - the performance is good up until some point,\nthen it degrades much faster.\n\nWhy do you even need such number of connections? Does that really\nimprove performance (e.g. how many inserts do you do with 100 and 400\nconnections)?\n\nSuch number of active connections is not going to give you any advantage\nI guess ...\n\nregards\nTomas\n",
"msg_date": "Mon, 27 Jun 2011 22:47:35 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with Insert"
},
{
"msg_contents": "Jenish <[email protected]> wrote:\n \n> This server is the dedicated database server.\n> \n> And I am testing the limit for the concurrent active users. When I\n> am running my test for 400 concurrent user ie. Active connection.\n> I am getting good performance but when I am running the same the\n> same test for 950 concurrent users I am getting very bad\n> performance.\n \nTo serve a large number of concurrent users you need to use a\nconnection pooler which limits the number of database connections to\na small number. Typically the most effective number of database\nconnections is somewhere between the number of actual cores on your\nserver and twice that plus the number of disk drives. (It depends\non the details of your hardware and your load.) The connection\npooler should queue requests which arrive when all database\nconnections are busy and release them for execution as transactions\ncomplete. Restricting the active database connections in this way\nimproves both throughput and latency and will allow you to serve a\nmuch larger number of users without getting into bad performance;\nand when you do \"hit the wall\" performance will degrade more\ngracefully.\n \n-Kevin\n",
"msg_date": "Mon, 27 Jun 2011 15:56:45 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with Insert"
}
] |
[
{
"msg_contents": "Hello,\n\nI have a handful of queries that are performing very slowly. I realize that I will be hitting hardware limits at some point, but want to make sure Ive squeezed out every bit of performance I can before calling it quits.\n\nOur database is collecting traffic data at the rate of about 3 million rows a day. The most important columns in the table being queried are timestamp, volume, occupancy, and speed. I have also denormalized the table by adding road name, direction, mile marker, and lane type values to eliminate joins with other tables that contain information about the devices that collect this information. A typical query will involve segments of roadway (i.e. road names, directions, and mile marker bounds) over a certain period of time (e.g. morning rush hour), and will have filters to exclude questionable data such (e.g. speed > 100 MPH). Unfortunately, there are also a few cases in which a user will query data for many full days on all roadways, essentially querying everything for a large period of time. One other thing to note is that we only ever query readings with lane_type = through_lanes, although we are collecting ramp and reversible lane data to facilitate future reporting needs.\n\nTable Metadata:\n- Volume typically ranges anywhere from 0 to 20, averages around 4 5. A small percentage of the rows have null volume.\n- Occupancy ranges from 1 to 10, averages around 1 or 2\n- Speed is about what you would expect, ranging from 30 70 with an average somewhere near the middle\n- There are 17 roads\n- There are 2 directions per road\n- Mile marker ranges vary by roadway, typical ranges are something like 0 to 40 or 257 to 290\n- Most (80 to 90% +) of the readings have lane_type = through_lanes\n- Size of a daily table is about 360MB, a half month table is 5 to 6 GB\n\nFull Table and Index Schema:\n\nIve experimented with partitioning using a table per day and 2 tables per month (1st through 15th, 16th to end of month). 2 tables/month was the original approach to keep the number of tables from growing too rapidly, and shows about 3x slower performance. Using daily tables incurs extra planning overhead as expected, but isnt all that bad. Im OK with taking a 1 second planning hit if my overall query time decreases significantly. Furthermore, we will only be storing raw data for about a year and can aggregate old data. This means that I can do daily tables for raw data and larger/fewer tables for older data. 
The table and index structure is below, which is identical between daily and ½ month tables with a couple of exceptions:\n- Daily tables have a fill factor of 100, ½ month tables are default\n- Only the 4 column indexes were created for the daily tables since the others never get used\n\nCREATE TABLE vds_detector_data\n(\n reading_timestamp timestamp without time zone,\n vds_id integer,\n detector_id integer,\n status smallint,\n speed numeric(12,9),\n volume numeric(12,9),\n confidence smallint,\n occupancy numeric(12,9),\n loadid bigint,\n road_name character varying(150),\n road_dir character varying(2),\n mile_marker numeric(7,2),\n lane_number integer,\n lane_type character varying(32),\n CONSTRAINT vds_detector_vdsid_fkey FOREIGN KEY (vds_id, detector_id)\n REFERENCES ref_vds_detector_properties (device_id, detector_id) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE RESTRICT\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX vds_detector_data_dir_idx\n ON vds_detector_data\n USING btree\n (road_dir);\n\nCREATE INDEX vds_detector_data_lane_idx\n ON vds_detector_data\n USING btree\n (lane_number);\n\nCREATE INDEX vds_detector_data_mm_idx\n ON vds_detector_data\n USING btree\n (mile_marker);\n\nCREATE INDEX vds_detector_data_occupancy_idx\n ON vds_detector_data\n USING btree\n (occupancy);\n\nCREATE INDEX vds_detector_data_road_idx\n ON vds_detector_data\n USING btree\n (road_name);\n\nCREATE INDEX vds_detector_data_road_ts_mm_dir_idx\n ON vds_detector_data\n USING btree\n (road_name, reading_timestamp, mile_marker, road_dir);\n\nCREATE INDEX vds_detector_data_speed_idx\n ON vds_detector_data\n USING btree\n (speed);\n\nCREATE INDEX vds_detector_data_timestamp_idx\n ON vds_detector_data\n USING btree\n (reading_timestamp);\n\nCREATE INDEX vds_detector_data_ts_road_mm_dir_idx\n ON vds_detector_data\n USING btree\n (reading_timestamp, road_name, mile_marker, road_dir);\n\nCREATE INDEX vds_detector_data_volume_idx\n ON vds_detector_data\n USING btree\n (volume);\n\nEXPLAIN ANALYZE:\n\nQuery:\nselect cast(reading_timestamp as Date) as date, floor(extract(hour from reading_timestamp) / 1.0) * 1.0 as hour, floor(extract(minute from reading_timestamp) / 60) * 60 as min,\n count(*), sum(vdd.volume) as totalVolume, sum(vdd.occupancy*vdd.volume)/sum(vdd.volume) as avgOcc, sum(vdd.speed*vdd.volume)/sum(vdd.volume) as avgSpeed,\n avg(vdd.confidence) as avgConfidence, min(vdd.detector_id) as detectorId, vdd.vds_id as vdsId\nfrom vds_detector_data vdd\nwhere (vdd.reading_timestamp between '2011-4-01 00:00:00.000' and '2011-04-30 23:59:59.999') \n and vdd.volume!=0 \n and ((road_name='44' and mile_marker between 257.65 and 289.5 and (road_dir='E' or road_dir='W')) \n or (road_name='64' and mile_marker between 0.7 and 40.4 and (road_dir='E' or road_dir='W'))\n or (road_name='55' and mile_marker between 184.8 and 208.1 and (road_dir='N' or road_dir='S'))\n or (road_name='270' and mile_marker between 0.8 and 34.5 and (road_dir='N' or road_dir='S')))\n and not(vdd.speed<0.0 or vdd.speed>90.0 or vdd.volume=0.0) and vdd.lane_type in ('through_lanes') \ngroup by date, hour, min, vdd.vds_id, mile_marker\nhaving sum(vdd.volume)!=0\norder by vdd.vds_id, mile_marker;\n\nDaily table explain analyze: http://explain.depesz.com/s/iLY\nHalf month table explain analyze: http://explain.depesz.com/s/Unt\n\nPostgres version:\nPostgreSQL 8.4.8, compiled by Visual C++ build 1400, 32-bit\n\nHistory:\nNone, this is a new database and application\n\nHardware:\n- 2 Intel Xeon 2.13GHz processors with 8 cores each\n- 8GB RAM\n- Disks are 
in a RAID 5 configuration with a PERC H700 Integrated RAID Controller 512MB Cache\n- 5 disks, Seagate 7200 RPM SAS, 500GB each for a total capacity of about 2TB\n- Windows Server 2008 R2 64-bit (but 32-bit postgres)\n- Hardware upgrades aren't an option at this point due to budget and time constraints\n\nMaintenance Setup:\nAutovacuum is disabled for these tables since the data is never updated. The tables that we are testing with at the moment will not grow any larger and have been both clustered and analyzed. They were clustered on the vds_detector_data_timestamp_idx index.\n\nGUC Settings:\neffective_cache_size: 2048MB\nwork_mem: 512MB\nshared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query plan and took the same amount of time to execute give or take a few seconds\n\nSummary:\n\nThe time to get the raw data (before aggregation and sorting) is relatively similar between the daily and half month tables. It would appear that the major difference is the ordering of sort and aggregation: the daily tables aggregate first so the amount of data sorted is significantly less.\n\nSince the daily tables are only 360MB, I would hope that the entire table could be pulled into memory with one large sequential read. Of course this assumes that the file pieces are stored contiguously, but auto defrag is enabled and shows low fragmentation so I'm trusting (as much as one can) Windows to do the right thing here. My drives have a 150MB/s sustained max throughput, and considering that data is spread across 5 drives I would hope to at least be able to reach the single disk theoretical limit and read an entire table plus the index into memory in about 4 to 5 seconds. Based on the analyze output, each daily table averages 6 to 7 seconds, so I'm pretty close there and maybe just limited by disk speed?\n\nIn both cases, the row estimates vs actual are way off. I've increased statistics on the reading_timestamp and road_name columns to 100 and then 1000 with no change. I ran an ANALYZE after each statistics change. Should I be upping stats on the non-indexed columns as well? I've read documentation that says I should be able to set statistics values for an entire table as opposed to per column, but haven't found how to do that. I guess I was either too lazy to update statistics on each column or just didn't think it would help much.\n\nSo, any pointers for performance improvement?\n\nThanks,\nCraig\nOpen Roads Consulting, Inc.\n757-546-3401\nhttp://www.openroadsconsulting.com
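PS: in case it is useful, the per-column statistics bumps mentioned above were statements along these lines (shown here against a single daily table), each followed by a fresh ANALYZE:

ALTER TABLE vds_detector_data_20110401 ALTER COLUMN reading_timestamp SET STATISTICS 1000;
ALTER TABLE vds_detector_data_20110401 ALTER COLUMN road_name SET STATISTICS 1000;
ANALYZE vds_detector_data_20110401;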
",
"msg_date": "Tue, 28 Jun 2011 17:28:51 -0400",
"msg_from": "Craig McIlwee <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow performance when querying millions of rows"
},
{
"msg_contents": "Dne 28.6.2011 23:28, Craig McIlwee napsal(a):\n> Daily table explain analyze: http://explain.depesz.com/s/iLY\n> Half month table explain analyze: http://explain.depesz.com/s/Unt\n\nAre you sure those two queries are exactly the same? Because the daily\ncase output says the width is 50B, while the half-month case says it's\n75B. This might be why the sort/aggregate steps are switched, and that\nincreases the amount of data so that it has to be sorted on disk (which\nis why the half-month is so much slower).\n\nHaven't you added some columns to the half-month query?\n\n> Postgres version:\n> PostgreSQL 8.4.8, compiled by Visual C++ build 1400, 32-bit\n> \n> History:\n> None, this is a new database and application\n> \n> Hardware:\n> - 2 Intel Xeon 2.13GHz processors with 8 cores each\n> - 8GB RAM\n> - Disks are in a RAID 5 configuration with a PERC H700 Integrated RAID\n> Controller 512MB Cache\n> - 5 disks, Seagate 7200 RPM SAS, 500GB each for a total capacity of\n> about 2TB\n> - Windows Server 2008 R2 64-bit (but 32-bit postgres)\n> - Hardware upgrades arent an option at this point due to budget and time\n> constraints\n\nNot much experience with PostgreSQL on Windows, but this looks good to\nme. Not sure if RAID5 is a good choice, especially because of write\nperformance - this is probably one of the reasons why the disk sort is\nso slow (in the half-month case).\n\nAnd it's nice you have 8 cores, but don't forget each query executes on\na single background process, i.e. it may use single core. So the above\nquery can't use 8 cores - that's why the in-memory sort takes so long I\nguess.\n\n> Maintenance Setup:\n> Autovacuum is disabled for these tables since the data is never\n> updated. The tables that we are testing with at the moment will not\n> grow any larger and have been both clustered and analyzed. They were\n> clustered on the vds_detector_data_timestamp_idx index.\n> \n> GUC Settings:\n> effective_cache_size: 2048MB\n> work_mem: 512MB\n> shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query\n> plan and took the same amount of time to execute give or take a few seconds\n> \n> Summary:\n> \n> The time to get the raw data (before aggregation and sorting) is\n> relatively similar between the daily and half month tables. It would\n> appear that the major difference is the ordering of sort and\n> aggregation, the daily tables aggregate first so the amount of data\n> sorted is significantly less.\n\nYes, the ordering is the problem. The amount of data to sort is so huge\n(3GB) it does not fit into work_mem and has to be sorted on disk. Not\nsure why this happens, the only difference I've noticed is the 'width'\n(50B vs. 75B). Are those two queries exactly the same?\n\n> Since the daily tables are only 360MB, I would hope that the entire\n> table could be pulled into memory with one large sequential read. Of\n> course this assumes that the file pieces are stored contiguously, but\n> auto defrag is enabled and shows low fragmentation so Im trusting (as\n> much as one can) Windows to do the right thing here. My drives have a\n> 150MB/s sustained max throughput, and considering that data is spread\n> across 5 drives I would hope to at least be able to reach the single\n> disk theoretical limit and read an entire table plus the index into\n> memory about 4 to 5 seconds. 
Based on the analyze output, each daily\n> table averages 6 to 7 seconds, so Im pretty close there and maybe just\n> limited by disk speed?\n\nWell, you have 30 partitions, and 7 seconds for each means 210 seconds in\ntotal. Which is about the time you get (before the aggregate/sort).\n\nYou have to check where the bottleneck is - is it the I/O or CPU? I'd\nguess the CPU, but I may be wrong. On unix I'd use something like\niostat/vmstat/top to see what's going on - not sure what to use on\nWindows. I guess there is some console or maybe Process Explorer from\nsysinternals.\n\n> In both cases, the row estimates vs actual are way off. Ive increased\n> statistics on the reading_timestamp and road_name columns to 100 and\n> then 1000 with no change. I ran an ANALYZE after each statistics\n> change. Should I be upping stats on the non-indexed columns as well? \n> Ive read documentation that says I should be able to set statistics\n> values for an entire table as opposed to per column, but havent found\n> how to do that. I guess I was either too lazy to update statistics on\n> each column or just didnt think it would help much.\n\nThe estimates seem pretty good to me - 10x difference is not that much.\nCould be better, but I don't think you can get a better plan, it seems\nvery reasonable to me.\n\n> So, any pointers for performance improvement?\n\nThree ideas that might help:\n\n1) partial indexes\n\nHow much do the parameters in the query change? If there are parameters\nthat are always the same, you may try to create partial indexes. For\nexample, if the query always requires 'volume != 0', then you can create\nthe index like this\n\n CREATE INDEX vds_detector_data_dir_idx\n ON vds_detector_data\n USING btree\n (road_dir)\n WHERE (volume != 0);\n\nThat way only the rows with volume != 0 will be included in the\nindex, the index will be smaller and the bitmap will be created faster.\nSimilarly for the other conditions. The smaller the index will be, the\nfaster the bitmap creation.\n\nIf all the conditions may change, or if the index size does not change\nmuch, you can't do this.\n\n2) prebuilt results\n\nAnother option is precomputation of the 'per partition results' - if you\nknow what the conditions are going to be, you can precompute them and\nthen query just those (much smaller) tables. But this is very\napplication specific.\n\nAgain, if all the conditions change, you can't do this.\n\n3) distribute the queries\n\nAs I've mentioned, PostgreSQL does not distribute the queries on\nmultiple CPU cores, but you can do that on your own at the application\nlevel.\n\nFor example I see the GROUP BY clause contains 'date, hour, min' so you\ncan compute the results for each partition separately (in a different\nthread, using a separate connection) and then 'append' them.\n\nYes, you'll need to keep some metadata to do this efficiently (e.g. list\nof partitions along with from/to timestamps), but you should do this\nanyway I guess (at least I do that when partitioning tables, it makes\nthe management much easier).\n\nNot sure if you can do this with the other queries :-(\n\nregards\nTomas\n",
"msg_date": "Wed, 29 Jun 2011 00:39:25 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance when querying millions of rows"
},
{
"msg_contents": "On 06/28/2011 05:28 PM, Craig McIlwee wrote:\n> Autovacuum is disabled for these tables since the data is never \n> updated. The tables that we are testing with at the moment will not \n> grow any larger and have been both clustered and analyzed.\n\nNote that any such prep to keep from ever needing to maintain these \ntables in the future should include the FREEZE option, possibly with \nsome parameters tweaked first to make it more aggressive. Autovacuum \nwill eventually revisit them in order to prevent transaction ID \nwrap-around, even if it's disabled. If you're going to the trouble of \nprepping them so they are never touched again, you should do a freeze \nwith the right parameters to keep this from happening again.\n\n> work_mem: 512MB\n> shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query \n> plan and took the same amount of time to execute give or take a few \n> seconds\n\nshared_buffers doesn't normally impact the query plan; it impacts how \nmuch churn there is between the database and the operating system cache, \nmainly important for making write-heavy work efficient. On Windows, \nyou'll probably be safe to set this to 512MB and forget about it. It \ndoesn't benefit from large values anyway.\n\nThis is a very large work_mem setting however, so be careful that you \nwon't have many users connecting at once if you're going to use it. \nEach connection can use a multiple of work_mem, making it quite possible \nyou could run out of memory with this configuration. If that low user \ncount is true, you may want to make sure you're enforcing it by lowering \nmax_connections, as a safety measure to prevent problems.\n\n> Since the daily tables are only 360MB, I would hope that the entire \n> table could be pulled into memory with one large sequential read. Of \n> course this assumes that the file pieces are stored contiguously, but \n> auto defrag is enabled and shows low fragmentation so Im trusting (as \n> much as one can) Windows to do the right thing here. My drives have a \n> 150MB/s sustained max throughput, and considering that data is spread \n> across 5 drives I would hope to at least be able to reach the single \n> disk theoretical limit and read an entire table plus the index into \n> memory about 4 to 5 seconds. Based on the analyze output, each daily \n> table averages 6 to 7 seconds, so Im pretty close there and maybe just \n> limited by disk speed?\n\nOne thing to note is that your drive speed varies based on what part of \nthe disk things are located at; the slower parts of the drive will be \nmuch less than 150MB/s.\n\nOn Linux servers it's impossible to reach something close to the disk's \nraw speed without making the operating system read-ahead feature much \nmore aggressive than it is by default. Because PostgreSQL fetches a \nsingle block at a time, to keep the drive completely busy something has \nto notice the pattern of access and be reading data ahead of when the \ndatabase even asks for it. You may find a parameter you can tune in the \nproperties for the drives somewhere in the Windows Control Panel. And \nthere's a read-ahead setting on your PERC card that's better than \nnothing you may not have turned on (not as good as the Linux one, but \nit's useful). 
There are two useful settings there (\"on\" and \"adaptive\" \nif I recall correctly) that you can try, to see which works better.\n\n> Ive read documentation that says I should be able to set statistics \n> values for an entire table as opposed to per column, but havent found \n> how to do that. I guess I was either too lazy to update statistics on \n> each column or just didnt think it would help much.\n\nYou can adjust the statistics target across the entire database using \nthe default_statistics_target setting, or you can tweak them per column \nusing ALTER TABLE. There is no table-level control. I find it \ndifficult to answer questions about whether there are enough stats or not \nwithout actually looking at pg_stats to see how the database is \ninterpreting the data, and comparing it against the real distribution. \nThis is an area where flailing about trying things doesn't work very \nwell; you need to be very systematic about the analysis and testing \nstrategy if you're going to get anywhere useful. It's not easy to do.\n\nAs a larger commentary on what you're trying to do, applications like \nthis often find themselves at a point one day where you just can't allow \narbitrary user queries to run against them anymore. What normally \nhappens then is that the most common things that people really need end \nup being run once and stored in some summary form, using techniques such \nas materialized views: http://wiki.postgresql.org/wiki/Materialized_Views\n\nIn your case, I would start now on trying to find the common patterns to \nthe long running reports that people generate, and see if it's possible \nto pre-compute some portion of them and save that summary. And you may \nfind yourself in a continuous battle with business requests regardless. \nIt's often key decision makers who feel they should be able to query any \nway they want, regardless of its impact on the database. Finding a \nmiddle ground there is usually challenging.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Tue, 28 Jun 2011 18:42:51 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance when querying millions of rows"
},
{
"msg_contents": "> Dne 28.6.2011 23:28, Craig McIlwee napsal(a):\n> > Daily table explain analyze: http://explain.depesz.com/s/iLY\n> > Half month table explain analyze: http://explain.depesz.com/s/Unt\n> \n> Are you sure those two queries are exactly the same? Because the daily\n> case output says the width is 50B, while the half-month case says it's\n> 75B. This might be why the sort/aggregate steps are switched, and that\n> increases the amount of data so that it has to be sorted on disk (which\n> is why the half-month is so much slower).\n> \n> Haven't you added some columns to the half-month query?\n\nThe daily tables were created using CREATE TABLE AS from the half month tables, structure is the same with the exception of fill factor. Queries are identical except for the name of the master table that they select from.\n\n> \n> > Postgres version:\n> > PostgreSQL 8.4.8, compiled by Visual C++ build 1400, 32-bit\n> > \n> > History:\n> > None, this is a new database and application\n> > \n> > Hardware:\n> > - 2 Intel Xeon 2.13GHz processors with 8 cores each\n> > - 8GB RAM\n> > - Disks are in a RAID 5 configuration with a PERC H700 Integrated RAID\n> > Controller 512MB Cache\n> > - 5 disks, Seagate 7200 RPM SAS, 500GB each for a total capacity of\n> > about 2TB\n> > - Windows Server 2008 R2 64-bit (but 32-bit postgres)\n> > - Hardware upgrades arent an option at this point due to budget and time\n> > constraints\n> \n> Not much experience with PostgreSQL on Windows, but this looks good to\n> me. Not sure if RAID5 is a good choice, especially because of write\n> performance - this is probably one of the reasons why the disk sort is\n> so slow (in the half-month case).\n\nYes, the data import is painfully slow but I hope to make up for that with the read performance later.\n\n> \n> And it's nice you have 8 cores, but don't forget each query executes on\n> a single background process, i.e. it may use single core. So the above\n> query can't use 8 cores - that's why the in-memory sort takes so long I\n> guess.\n> \n> > Maintenance Setup:\n> > Autovacuum is disabled for these tables since the data is never\n> > updated. The tables that we are testing with at the moment will not\n> > grow any larger and have been both clustered and analyzed. They were\n> > clustered on the vds_detector_data_timestamp_idx index.\n> > \n> > GUC Settings:\n> > effective_cache_size: 2048MB\n> > work_mem: 512MB\n> > shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query\n> > plan and took the same amount of time to execute give or take a few\n> seconds\n> > \n> > Summary:\n> > \n> > The time to get the raw data (before aggregation and sorting) is\n> > relatively similar between the daily and half month tables. It would\n> > appear that the major difference is the ordering of sort and\n> > aggregation, the daily tables aggregate first so the amount of data\n> > sorted is significantly less.\n> \n> Yes, the ordering is the problem. The amount of data to sort is so huge\n> (3GB) it does not fit into work_mem and has to be sorted on disk. Not\n> sure why this happens, the only difference I've noticed is the 'width'\n> (50B vs. 75B). Are those two queries exactly the same?\n> \n> > Since the daily tables are only 360MB, I would hope that the entire\n> > table could be pulled into memory with one large sequential read. 
Of\n> > course this assumes that the file pieces are stored contiguously, but\n> > auto defrag is enabled and shows low fragmentation so Im trusting (as\n> > much as one can) Windows to do the right thing here. My drives have a\n> > 150MB/s sustained max throughput, and considering that data is spread\n> > across 5 drives I would hope to at least be able to reach the single\n> > disk theoretical limit and read an entire table plus the index into\n> > memory about 4 to 5 seconds. Based on the analyze output, each daily\n> > table averages 6 to 7 seconds, so Im pretty close there and maybe just\n> > limited by disk speed?\n> \n> Well, you have 30 partitions and 7 seconds for each means 210 secons in\n> total. Which is about the time you get (before the aggregate/sort).\n> \n> You have to check where the bottleneck is - is it the I/O or CPU? I'd\n> guess the CPU, but I may be wrong. On unix I'd use something like\n> iostat/vmstat/top to see what's going on - not sure what to use on\n> Windows. I guess there is a some console or maybe Process Explorer from\n> sysinternals.\n> \n> > In both cases, the row estimates vs actual are way off. Ive increased\n> > statistics on the reading_timestamp and road_name columns to 100 and\n> > then 1000 with no change. I ran an ANALYZE after each statistics\n> > change. Should I be upping stats on the non-indexed columns as well? \n> > Ive read documentation that says I should be able to set statistics\n> > values for an entire table as opposed to per column, but havent found\n> > how to do that. I guess I was either too lazy to update statistics on\n> > each column or just didnt think it would help much.\n> \n> The estimates seem pretty good to me - 10x difference is not that much.\n> Could be better, but I don't think you can get a better plan, is seems\n> very reasonable to me.\n> \n> > So, any pointers for performance improvement?\n> \n> Three ideas what might help\n> \n> 1) partial indexes\n> \n> How much do the parameters in the query change? If there are parameters\n> that are always the same, you may try to create partial indexes. For\n> example if the 'vdd.volume' always has to be '0', then you can create\n> the index like this\n> \n> CREATE INDEX vds_detector_data_dir_idx\n> ON vds_detector_data\n> USING btree\n> (road_dir)\n> WHERE (vdd.volume!=0);\n> \n> That way only the rows with 'vdd.volume!=0' will be included in the\n> index, the index will be smaller and the bitmap will be created faster.\n> Similarly for the other conditions. The smaller the index will be, the\n> faster the bitmap creation.\n> \n> If all the conditions may change, or if the index size does not change\n> much, you can't do this.\n\nThe 0 volume is the only thing that will always be present, but those records do account for 10 to 15% of the data. I'll give this a shot, I'm really interested in seeing what impact this had. For some reason I was under the impression that partial indexes were used for text searches, so I completely overlooked this.\n\n> \n> 2) prebuilt results\n> \n> Another option is precomputation of the 'per partition results' - if you\n> know what the conditions are going to be, you can precompute them and\n> then query just those (much smaller) tables. But this is very\n> application specific.\n> \n> Again, if the all the conditions change, you can't do this.\n\nThis has been one of the toughest issues. 
Due to the filtering capabilities, it's just not possible to precalculate anything.\n\n> \n> 3) distribute the queries\n> \n> As I've mentioned, PostgreSQL does not distribute the queries on\n> multiple CPU cores, but you can do that on your own at the application\n> level.\n> \n> For example I see the GROUP BY clause contains 'date, hour, min' so you\n> can compute the results for each partition separately (in a different\n> thread, using a separate connection) and then 'append' them.\n> \n> Yes, you'll need to keep some metadata to do this efficiently (e.g. list\n> of partitions along with from/to timestamps), but you should do this\n> anyway I guess (at least I do that when partitioning tables, it makes\n> the management much easier).\n\nI noticed this too after a little more testing; there are some serious performance gains to be had here. I started with a single day query and it took about 15 seconds. Next was 5 simultaneous queries, all at about 30 seconds each, and then 10 queries at 50 seconds each.\n\n> \n> Not sure if you can do this with the other queries :-(\n\nNot all, but many. The query in question is the most beastly of them all, so I'm pretty happy to have some strategy for improvement.\n\n> \n> regards\n> Tomas\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\nThanks for the help.\nCraig
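PS: for reference, the single-day form of the query in that quick test was roughly shaped like this, one statement per connection against one child table (road and mile marker filters plus a few of the output columns are trimmed here, and the child table name is illustrative):

SELECT cast(reading_timestamp as date) AS date,
       extract(hour from reading_timestamp) AS hour,
       vds_id,
       sum(volume) AS totalvolume,
       sum(speed * volume) / sum(volume) AS avgspeed
FROM vds_detector_data_20110401
WHERE reading_timestamp >= '2011-04-01'
  AND reading_timestamp < '2011-04-02'
  AND volume != 0
  AND not(speed < 0.0 or speed > 90.0)
  AND lane_type = 'through_lanes'
GROUP BY 1, 2, vds_id
HAVING sum(volume) != 0;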
",
"msg_date": "Tue, 28 Jun 2011 19:26:43 -0400",
"msg_from": "\"Craig McIlwee\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance when querying millions of rows"
},
{
"msg_contents": "> On 06/28/2011 05:28 PM, Craig McIlwee wrote:\n> > Autovacuum is disabled for these tables since the data is never \n> > updated. The tables that we are testing with at the moment will not \n> > grow any larger and have been both clustered and analyzed.\n> \n> Note that any such prep to keep from ever needing to maintain these \n> tables in the future should include the FREEZE option, possibly with \n> some parameters tweaked first to make it more aggressive. Autovacuum \n> will eventually revisit them in order to prevent transaction ID \n> wrap-around, even if it's disabled. If you're going to the trouble of \n> prepping them so they are never touched again, you should do a freeze \n> with the right parameters to keep this from happening again.\n> \n> > work_mem: 512MB\n> > shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query \n> > plan and took the same amount of time to execute give or take a few \n> > seconds\n> \n> shared_buffers doesn't normally impact the query plan; it impacts how \n> much churn there is between the database and the operating system cache, \n> mainly important for making write-heavy work efficient. On Windows, \n> you'll probably be safe to set this to 512MB and forget about it. It \n> doesn't benefit from large values anyway.\n\nI was thinking that shared buffers controlled the amount of data, primarily table and index pages, that the database could store in memory at once. Based on that assumption, I thought that a larger value would enable an entire table + index to be in memory together and speed up the query. Am I wrong?\n\n> \n> This is a very large work_mem setting however, so be careful that you \n> won't have many users connecting at once if you're going to use it. \n> Each connection can use a multiple of work_mem, making it quite possible \n> you could run out of memory with this configuration. If that low user \n> count is true, you may want to make sure you're enforcing it by lowering \n> max_connections, as a safety measure to prevent problems.\n\nI plan on lowering this quite a bit since I haven't seen much of a boost by increasing it.\n\n> \n> > Since the daily tables are only 360MB, I would hope that the entire \n> > table could be pulled into memory with one large sequential read. Of \n> > course this assumes that the file pieces are stored contiguously, but \n> > auto defrag is enabled and shows low fragmentation so Im trusting (as \n> > much as one can) Windows to do the right thing here. My drives have a \n> > 150MB/s sustained max throughput, and considering that data is spread \n> > across 5 drives I would hope to at least be able to reach the single \n> > disk theoretical limit and read an entire table plus the index into \n> > memory about 4 to 5 seconds. Based on the analyze output, each daily \n> > table averages 6 to 7 seconds, so Im pretty close there and maybe just \n> > limited by disk speed?\n> \n> One thing to note is that your drive speed varies based on what part of \n> the disk things are located at; the slower parts of the drive will be \n> much less than 150MB/s.\n> \n> On Linux servers it's impossible to reach something close to the disk's \n> raw speed without making the operating system read-ahead feature much \n> more aggressive than it is by default. Because PostgreSQL fetches a \n> single block at a time, to keep the drive completely busy something has \n> to notice the pattern of access and be reading data ahead of when the \n> database even asks for it. 
You may find a parameter you can tune in the \n> properties for the drives somewhere in the Windows Control Panel. And \n> there's a read-ahead setting on your PERC card that's better than \n> nothing you may not have turned on (not as good as the Linux one, but \n> it's useful). There are two useful settings there (\"on\" and \"adaptive\" \n> if I recall correctly) that you can try, to see which works better.\n\nLooks like they are set to adaptive read-ahead now. If the database is executing many concurrent queries, is it reasonable to suspect that the IO requests will compete with each other in such a way that the controller would rarely see many sequential requests since it is serving many processes? The controller does have an 'on' option also that forces read-ahead, maybe that would solve the issue if we can rely on the data to survive in the cache until the actual read request takes place.\n\n> \n> > Ive read documentation that says I should be able to set statistics \n> > values for an entire table as opposed to per column, but havent found \n> > how to do that. I guess I was either too lazy to update statistics on \n> > each column or just didnt think it would help much.\n> \n> You can adjust the statistics target across the entire database using \n> the default_statistics_target setting, or you can tweak them per column \n> using ALTER TABLE. There is no table-level control. I find it \n> difficult to answer questions about whether there is enough stats or not \n> without actually looking at pg_stats to see how the database is \n> interpreting the data, and comparing it against the real distribution. \n> This is an area where flailing about trying things doesn't work very \n> well; you need to be very systematic about the analysis and testing \n> strategy if you're going to get anywhere useful. It's not easy to do.\n> \n> As a larger commentary on what you're trying to do, applications like \n> this often find themselves at a point one day where you just can't allow \n> arbitrary user queries to run against them anymore. What normally \n> happens then is that the most common things that people really need end \n> up being run one and stored in some summary form, using techniques such \n> as materialized views: http://wiki.postgresql.org/wiki/Materialized_Views\n> \n> In your case, I would start now on trying to find the common patters to \n> the long running reports that people generate, and see if it's possible \n> to pre-compute some portion of them and save that summary. And you may \n> find yourself in a continuous battle with business requests regardless. \n> It's often key decision makers who feel they should be able to query any \n> way they want, regardless of its impact on the database. Finding a \n> middle ground there is usually challenging.\n> \n> -- \n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> Comprehensive and Customized PostgreSQL Training Classes:\n> http://www.2ndquadrant.us/postgresql-training/\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\nThanks,\nCraig\n\nThis e-mail communication (including any attachments) may contain confidential and/or privileged material intended solely for the individual or entity to which it is addressed.\nP - Think before you print.\n\n\n\n> On 06/28/2011 05:28 PM, Craig McIlwee wrote:> > Autovacuum is disabled for these tables since the data is never > > updated. 
"msg_date": "Tue, 28 Jun 2011 19:50:50 -0400",
"msg_from": "\"Craig McIlwee\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance when querying millions of rows"
},
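A minimal sketch of the one-time freeze pass Greg describes, to be run after a daily table has been loaded and clustered for the last time; the table name is hypothetical since the actual table names are not shown in the thread:

    -- Freeze every tuple now so autovacuum never needs to revisit this table
    -- for transaction ID wrap-around protection, and refresh statistics while
    -- we are at it.
    VACUUM FREEZE ANALYZE daily_volume_20110628;

VACUUM FREEZE is equivalent to running the vacuum with vacuum_freeze_min_age set to zero, so no separate parameter change should be needed for a manual pass like this.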
{
"msg_contents": "Dne 29.6.2011 01:26, Craig McIlwee napsal(a):\n>> Dne 28.6.2011 23:28, Craig McIlwee napsal(a):\n>> Are you sure those two queries are exactly the same? Because the daily\n>> case output says the width is 50B, while the half-month case says it's\n>> 75B. This might be why the sort/aggregate steps are switched, and that\n>> increases the amount of data so that it has to be sorted on disk (which\n>> is why the half-month is so much slower).\n>>\n>> Haven't you added some columns to the half-month query?\n> \n> The daily tables were created using CREATE TABLE AS from the half month\n> tables, structure is the same with the exception of fill factor. \n> Queries are identical except for the name of the master table that they\n> select from.\n\nHm, I'm not sure where this width value comes from but I don't think\nit's related to fillfactor.\n\n>> Not much experience with PostgreSQL on Windows, but this looks good to\n>> me. Not sure if RAID5 is a good choice, especially because of write\n>> performance - this is probably one of the reasons why the disk sort is\n>> so slow (in the half-month case).\n> \n> Yes, the data import is painfully slow but I hope to make up for that\n> with the read performance later.\n\nGenerally you're right that RAID10 is going to be slower than RAID5 when\nreading (and faster when writing) the data, but how big the gap is\nreally depends on the controller. It's not that big I guess - see for\nexample this:\n\nhttp://www.kendalvandyke.com/2009/02/disk-performance-hands-on-part-5-raid.html\n\nThe first test shows that RAID10 is about 10% slower on reads but about\n60% faster on writes.\n\nBTW have you tuned the GUC settings for write (increasing checkpoint\nsegments may give much better write performance).\n\n> The 0 volume is the only thing that will always be present, but those\n> records do account for 10 to 15% of the data. I'll give this a shot,\n> I'm really interested in seeing what impact this had. For some reason I\n> was under the impression that partial indexes were used for text\n> searches, so I completely overlooked this.\n\nOr you might actually do two partitions for each day - one for volume=0\nand the other one for volume!=0. Not sure if that is worth the effort.\n\nOne more thing to try in this case - it's not that important how many\nrows suffice the condition, much more important is how many blocks need\nto be read from the disk. If those 10% rows are distributed evenly\nthrough the table (i.e. there's at least one in each 8kB block), the I/O\nstill needs to be done.\n\nAnd it's very likely the case, as you've clustered the tables according\nto the timestamp. Try to cluster the tables according to 'volume' and\ncheck the difference.\n\nregards\nTomas\n",
"msg_date": "Wed, 29 Jun 2011 02:03:23 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance when querying millions of rows"
},
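A sketch of the partial-index and re-cluster ideas above; the table and column names other than volume are hypothetical since the full schema is not shown in the thread:

    -- Index only the rows that carry traffic; a query that also filters on
    -- volume <> 0 can use this much smaller index.
    CREATE INDEX daily_20110628_nonzero_idx
        ON daily_20110628 (log_time)
        WHERE volume <> 0;

    -- Or physically group the zero and non-zero rows so the interesting
    -- blocks are contiguous on disk (at the cost of losing the timestamp
    -- clustering).
    CREATE INDEX daily_20110628_volume_idx ON daily_20110628 (volume);
    CLUSTER daily_20110628 USING daily_20110628_volume_idx;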
{
"msg_contents": "Dne 29.6.2011 01:50, Craig McIlwee napsal(a):\n>> > work_mem: 512MB\n>> > shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query\n>> > plan and took the same amount of time to execute give or take a few\n>> > seconds\n>>\n>> shared_buffers doesn't normally impact the query plan; it impacts how\n>> much churn there is between the database and the operating system cache,\n>> mainly important for making write-heavy work efficient. On Windows,\n>> you'll probably be safe to set this to 512MB and forget about it. It\n>> doesn't benefit from large values anyway.\n> \n> I was thinking that shared buffers controlled the amount of data,\n> primarily table and index pages, that the database could store in memory\n> at once. Based on that assumption, I thought that a larger value would\n> enable an entire table + index to be in memory together and speed up the\n> query. Am I wrong?\n\nWell, you're right and wrong at the same time. The shared buffers really\ncontrols the amount of data that may be read into the database cache,\nthat's true. But this value is not used when building the execution\nplan. There's another value (effective_cache_size) that is used when\nplanning a query.\n\n>> > Ive read documentation that says I should be able to set statistics\n>> > values for an entire table as opposed to per column, but havent found\n>> > how to do that. I guess I was either too lazy to update statistics on\n>> > each column or just didnt think it would help much.\n\nLink to the docs? According to\n\nhttp://www.postgresql.org/docs/current/static/sql-altertable.html\n\nit's possible to set this only at the column level. And of course\nthere's a GUC default_statistics_target that defines default value.\n\nTomas\n",
"msg_date": "Wed, 29 Jun 2011 02:10:48 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance when querying millions of rows"
},
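A short sketch of the two knobs Tomas mentions, with hypothetical table and column names:

    -- Per column, followed by ANALYZE so the larger sample is actually taken.
    ALTER TABLE daily_20110628 ALTER COLUMN volume SET STATISTICS 500;
    ANALYZE daily_20110628;

    -- Or raise the default for everything, per session or in postgresql.conf.
    SET default_statistics_target = 200;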
{
"msg_contents": "On 06/28/2011 07:26 PM, Craig McIlwee wrote:\n> Yes, the data import is painfully slow but I hope to make up for that \n> with the read performance later.\n\nYou can probably improve that with something like this:\n\nshared_buffers=512MB\ncheckpoint_segments=64\n\nMaybe bump up maintenance_work_mem too, if the vacuum part of that is \nthe painful one.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Tue, 28 Jun 2011 20:51:50 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance when querying millions of rows"
},
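Pulled together as a postgresql.conf sketch; the maintenance_work_mem figure is an assumption rather than something specified in the thread:

    shared_buffers = 512MB        # Windows gains little from going higher
    checkpoint_segments = 64      # fewer, larger checkpoints during bulk loads
    maintenance_work_mem = 256MB  # speeds up CREATE INDEX / VACUUM after a load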
{
"msg_contents": "On 06/28/2011 07:50 PM, Craig McIlwee wrote:\n> I was thinking that shared buffers controlled the amount of data, \n> primarily table and index pages, that the database could store in \n> memory at once. Based on that assumption, I thought that a larger \n> value would enable an entire table + index to be in memory together \n> and speed up the query. Am I wrong?\n\nIt does to some extent. But:\n\na) This amount doesn't impact query planning as much if you've set a \nlarge effective_cache_size\n\nb) The operating system is going to cache things outside of PostgreSQL, too\n\nc) Data read via a sequential scan sometimes skips going into \nshared_buffers, to keep that cache from being swamped with any single scan\n\nd) until the data has actually made its way into memory, you may be \npulling it in there by an inefficient random process at first. By the \ntime the cache is populated, the thing you wanted a populated cache to \naccelerate may already have finished.\n\nIt's possible to get insight into this all using pg_buffercache to \nactually see what's in the cache, and I've put up some talks and scripts \nto help with that at http://projects.2ndquadrant.com/talks you might \nfind useful.\n\n> Looks like they are set to adaptive read-ahead now. If the database \n> is executing many concurrent queries, is it reasonable to suspect that \n> the IO requests will compete with each other in such a way that the \n> controller would rarely see many sequential requests since it is \n> serving many processes? The controller does have an 'on' option also \n> that forces read-ahead, maybe that would solve the issue if we can \n> rely on the data to survive in the cache until the actual read request \n> takes place.\n\nI've never been able to find good documentation on just what the \ndifference between the adaptive and on modes of that controller really \nare, which is why I suggested you try both and see. Linux has a \nuniquely good read-ahead model that was tuned with PostgreSQL \nspecifically in mind. And you still have to tweak it upwards from the \ndefaults in order for the database to fetch things as fast as the drives \nare capable sometimes. So your idea that you will meet/exceed the \ndrive's capabilities for bulk sequential scans is less likely than you \nmight think. RAID5 in theory should give you 2X or more of the speed of \nany single disk when reading a clustered table, but the way PostgreSQL \ndoes it may make that hard to realize on Windows.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Tue, 28 Jun 2011 21:01:53 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance when querying millions of rows"
}
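Roughly the kind of pg_buffercache query being referred to, assuming the contrib module has been installed in the database; it shows which relations currently occupy the most shared buffers:

    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = c.relfilenode
      JOIN pg_database d ON b.reldatabase = d.oid
                        AND d.datname = current_database()
     GROUP BY c.relname
     ORDER BY buffers DESC
     LIMIT 10;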
] |
[
{
"msg_contents": "Hi all,\nI am running PostgreSQL 9.0 on a number of nodes in an application level\ncluster (there is different data on different machines). Currently a\nPL/pgSQL function generates automatically aggregation queries like the\nfollowing:\n\n(select * from appqosfe.F_total_utilization(1306918800000000000::INT8, NULL,\n60000000000::INT8, NULL))\nUNION ALL\n(SELECT * from dblink('remote1','select * from\nappqosfe.F_total_utilization(1306918800000000000::INT8, NULL,\n60000000000::INT8, NULL)') as T1(detectroid numeric, timegroup numeric,\nnumbytes numeric, numpackets numeric))\norder by timegroup asc\n\nThe above example supposes that only 2 nodes are active (one local and one\nremote). Here I can clearly see that the remote sub-query starts only when\nthe local one is completed so the total time grows linearly with the number\nof nodes.\n\nQuestion: Is there a way to get the same result from within a PL/pgSQL\nfunction but running all the sub-queries in parallel? In case it is not\ndirectly available, which one would be the simplest way to implement it in\nmy application? (I am very keen to avoid the obvious solution of an\nadditional multi-threaded layer which would do it out of the RDBMS)\n\nThank you,\nSvetlin Manavski\n\nHi all,I am running PostgreSQL 9.0 on a number of nodes in an application level cluster (there is different data on different machines). Currently a PL/pgSQL function generates automatically aggregation queries like the following: \n(select * from appqosfe.F_total_utilization(1306918800000000000::INT8, NULL, 60000000000::INT8, NULL)) UNION ALL (SELECT * from dblink('remote1','select * from appqosfe.F_total_utilization(1306918800000000000::INT8, NULL, 60000000000::INT8, NULL)') as T1(detectroid numeric, timegroup numeric, numbytes numeric, numpackets numeric)) \norder by timegroup asc The above example supposes that only 2 nodes are active (one local and one remote). Here I can clearly see that the remote sub-query starts only when the local one is completed so the total time grows linearly with the number of nodes. \nQuestion: Is there a way to get the same result from within a PL/pgSQL function but running all the sub-queries in parallel? In case it is not directly available, which one would be the simplest way to implement it in my application? (I am very keen to avoid the obvious solution of an additional multi-threaded layer which would do it out of the RDBMS)\nThank you,Svetlin Manavski",
"msg_date": "Wed, 29 Jun 2011 12:55:58 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "is parallel union all possible over dblink?"
},
{
"msg_contents": "On Wed, 29 Jun 2011 13:55:58 +0200, Svetlin Manavski \n<[email protected]> wrote:\n\n> Question: Is there a way to get the same result from within a PL/pgSQL\n> function but running all the sub-queries in parallel? In case it is not\n> directly available, which one would be the simplest way to implement it \n> in\n> my application? (I am very keen to avoid the obvious solution of an\n> additional multi-threaded layer which would do it out of the RDBMS)\n\nHave you tried dblink_send_query() + dblink_get_results() yet?\n\nhttp://www.postgresql.org/docs/current/static/contrib-dblink-send-query.html\n\nYou'd have to do something like this to your queries [untested]:\n\nselect dblink_send_query('remote1','select * from\nappqosfe.F_total_utilization(1306918800000000000::INT8, NULL,\n60000000000::INT8, NULL)');\n(select * from appqosfe.F_total_utilization(1306918800000000000::INT8, \nNULL,\n60000000000::INT8, NULL))\nUNION ALL\n(SELECT * from dblink_get_result('remote1') as T1(detectroid numeric, \ntimegroup numeric,\nnumbytes numeric, numpackets numeric))\norder by timegroup asc;\n\ni.e. start your remote query/-ies asynchronously, then collect the results \nin the UNION query. At least in theory it should work...\n\nRegards,\n Marinos\n\n",
"msg_date": "Wed, 29 Jun 2011 20:37:18 +0200",
"msg_from": "\"Marinos Yannikos\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is parallel union all possible over dblink?"
},
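A sketch of how the same idea might be wrapped in a PL/pgSQL function, since the poster wants to keep the logic inside the database; the function signature and the 'remote1' named connection (established earlier with dblink_connect) are assumptions based on the queries shown in the thread:

    CREATE OR REPLACE FUNCTION appqosfe.F_total_utilization_cluster(
        from_time INT8, bucket INT8)
    RETURNS TABLE(detectroid numeric, timegroup numeric,
                  numbytes numeric, numpackets numeric) AS $$
    BEGIN
        -- Fire the remote half asynchronously on the pre-established
        -- 'remote1' connection.
        PERFORM dblink_send_query('remote1',
            'select * from appqosfe.F_total_utilization(' || from_time ||
            '::INT8, NULL, ' || bucket || '::INT8, NULL)');

        -- Run the local half while the remote node is working; block on
        -- dblink_get_result() only when the remote rows are actually needed.
        -- (Any remaining result sets should be drained before the connection
        -- is reused for another query.)
        RETURN QUERY
        (SELECT * FROM appqosfe.F_total_utilization(from_time, NULL, bucket, NULL))
        UNION ALL
        (SELECT * FROM dblink_get_result('remote1')
            AS t(detectroid numeric, timegroup numeric,
                 numbytes numeric, numpackets numeric))
        ORDER BY 2 ASC;
    END;
    $$ LANGUAGE plpgsql;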
{
"msg_contents": "On Wed, Jun 29, 2011 at 12:37 PM, Marinos Yannikos <[email protected]> wrote:\n\n> On Wed, 29 Jun 2011 13:55:58 +0200, Svetlin Manavski <\n> [email protected]> wrote:\n>\n> Question: Is there a way to get the same result from within a PL/pgSQL\n>> function but running all the sub-queries in parallel? In case it is not\n>> directly available, which one would be the simplest way to implement it in\n>> my application? (I am very keen to avoid the obvious solution of an\n>> additional multi-threaded layer which would do it out of the RDBMS)\n>>\n>\n> Have you tried dblink_send_query() + dblink_get_results() yet?\n>\n> http://www.postgresql.org/**docs/current/static/contrib-**\n> dblink-send-query.html<http://www.postgresql.org/docs/current/static/contrib-dblink-send-query.html>\n>\n> You'd have to do something like this to your queries [untested]:\n>\n> select dblink_send_query('remote1','**select * from\n>\n> appqosfe.F_total_utilization(**1306918800000000000::INT8, NULL,\n> 60000000000::INT8, NULL)');\n>\n> (select * from appqosfe.F_total_utilization(**1306918800000000000::INT8,\n> NULL,\n> 60000000000::INT8, NULL))\n> UNION ALL\n> (SELECT * from dblink_get_result('remote1') as T1(detectroid numeric,\n> timegroup numeric,\n> numbytes numeric, numpackets numeric))\n> order by timegroup asc;\n>\n> i.e. start your remote query/-ies asynchronously, then collect the results\n> in the UNION query. At least in theory it should work...\n>\n>\nThis does work however you'll need to add a little more to it to ensure your\nUNION succeeds. In pseudo...\n\nconnection #1:\nCREATE TABLE target_1 ...\nBEGIN;\nLOCK TABLE target_1 IN ACCESS EXCLUSIVE MODE;\nINSERT INTO target_1 SELECT ...\nCOMMIT;\n\nconnection #2:\nCREATE TABLE target_2 ...\nBEGIN;\nLOCK TABLE target_2 IN ACCESS EXCLUSIVE MODE;\nINSERT INTO target_2 SELECT ...\nCOMMIT;\n\nconnection #3:\nSELECT * FROM target_1 UNION SELECT * FROM target_2;\n\nConnections 1 and 2 can be done in simultaneously and after both have\nreached the LOCK statement then the SELECT on connection 3 can be executed.\n Same fundamentals if all three connections are to different databases and\nconnection 3 uses dblink to pull the data.\n\nAnother alternative is to use GridSQL. I haven't used it myself but seen it\nin action on a large install with 4 backend databases. Pretty slick.\n\nGreg\n\nOn Wed, Jun 29, 2011 at 12:37 PM, Marinos Yannikos <[email protected]> wrote:\nOn Wed, 29 Jun 2011 13:55:58 +0200, Svetlin Manavski <[email protected]> wrote:\n\n\nQuestion: Is there a way to get the same result from within a PL/pgSQL\nfunction but running all the sub-queries in parallel? In case it is not\ndirectly available, which one would be the simplest way to implement it in\nmy application? (I am very keen to avoid the obvious solution of an\nadditional multi-threaded layer which would do it out of the RDBMS)\n\n\nHave you tried dblink_send_query() + dblink_get_results() yet?\n\nhttp://www.postgresql.org/docs/current/static/contrib-dblink-send-query.html\n\nYou'd have to do something like this to your queries [untested]:\n\nselect dblink_send_query('remote1','select * from\nappqosfe.F_total_utilization(1306918800000000000::INT8, NULL,\n60000000000::INT8, NULL)');\n(select * from appqosfe.F_total_utilization(1306918800000000000::INT8, NULL,\n60000000000::INT8, NULL))\nUNION ALL\n(SELECT * from dblink_get_result('remote1') as T1(detectroid numeric, timegroup numeric,\nnumbytes numeric, numpackets numeric))\norder by timegroup asc;\n\ni.e. 
"msg_date": "Wed, 29 Jun 2011 13:14:32 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is parallel union all possible over dblink?"
},
{
"msg_contents": "On 06/29/2011 02:14 PM, Greg Spiegelberg wrote:\n\n> Another alternative is to use GridSQL. I haven't used it myself but\n> seen it in action on a large install with 4 backend databases. Pretty\n> slick.\n\nWe actually demoed this as a proof of concept a while back. Even just \nhaving two instances on the same machine resulted in linear improvements \nin execution speed thanks to parallel query execution.\n\nSetting it up is something of a PITA, though, and the metadata database \nis completely arbitrary. You basically must use the GridSQL intermediate \nlayer if you ever want to see your data again. I wouldn't use it for \nanything but a reporting database that can be reconstructed if necessary.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Wed, 29 Jun 2011 14:38:24 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is parallel union all possible over dblink?"
},
{
"msg_contents": "I am now a bit puzzled after the initial satisfaction by Marinos' reply.\n\n1. what do you mean exactly by \"to ensure your UNION succeeds\". The dblink\ndocs do not mention anything about issues using directly the suggested\ndblink_send_query() + dblink_get_results(). What problems should I expect in\nusing them as suggested by Marinos?\n\n2. If I understand correctly your method, it is not applicable from inside a\nstored procedure, is it? I need to keep all the business logic within\nPostgreSQL and provide just a clean interface to a simple GUI layer\n\n3. Unfortunately GridSQL and Pgpool-II do not seem mature and stable\nproducts to be used in commercial software. Neither one provides clear\ndocumentation. GridSQL has been discontinued and it is not clear what kind\nof future it will have. I have not tried GridSQL but I did try Pgpool-II. It\nis disappointing that it may stop working correctly even just because of the\nway you write the query (e.g. using uppercase in a field or using named\nfield in group by, ecc.). Even worse, when it does not recognize something\nin the parallel query, it just provides incorrect result (from only the\nlocal DB) rather than raising an exception. So I guess Pgpool-II in its\ncurrent state is good only for very simple applications, which are not\nsupposed to be reliable at all.\n\nThank you,\nSvetlin Manavski\n\n\nOn Wed, Jun 29, 2011 at 8:14 PM, Greg Spiegelberg <[email protected]>wrote:\n\n>\n> This does work however you'll need to add a little more to it to ensure\n> your UNION succeeds. In pseudo...\n>\n> connection #1:\n> CREATE TABLE target_1 ...\n> BEGIN;\n> LOCK TABLE target_1 IN ACCESS EXCLUSIVE MODE;\n> INSERT INTO target_1 SELECT ...\n> COMMIT;\n>\n> connection #2:\n> CREATE TABLE target_2 ...\n> BEGIN;\n> LOCK TABLE target_2 IN ACCESS EXCLUSIVE MODE;\n> INSERT INTO target_2 SELECT ...\n> COMMIT;\n>\n> connection #3:\n> SELECT * FROM target_1 UNION SELECT * FROM target_2;\n>\n> Connections 1 and 2 can be done in simultaneously and after both have\n> reached the LOCK statement then the SELECT on connection 3 can be executed.\n> Same fundamentals if all three connections are to different databases and\n> connection 3 uses dblink to pull the data.\n>\n> Another alternative is to use GridSQL. I haven't used it myself but seen\n> it in action on a large install with 4 backend databases. Pretty slick.\n>\n> Greg\n>\n>\n\nI am now a bit puzzled after the initial satisfaction by Marinos' reply. 1. what do you mean exactly by \"to ensure your UNION succeeds\". The dblink docs do not mention anything about issues using directly the suggested dblink_send_query() + dblink_get_results(). What problems should I expect in using them as suggested by Marinos?\n2. If I understand correctly your method, it is not applicable from inside a stored procedure, is it? I need to keep all the business logic within PostgreSQL and provide just a clean interface to a simple GUI layer\n3. Unfortunately GridSQL and Pgpool-II do not seem mature and stable products to be used in commercial software. Neither one provides clear documentation. GridSQL has been discontinued and it is not clear what kind of future it will have. I have not tried GridSQL but I did try Pgpool-II. It is disappointing that it may stop working correctly even just because of the way you write the query (e.g. using uppercase in a field or using named field in group by, ecc.). 
"msg_date": "Thu, 30 Jun 2011 10:02:07 +0100",
"msg_from": "Svetlin Manavski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: is parallel union all possible over dblink?"
},
{
"msg_contents": "On Thu, Jun 30, 2011 at 3:02 AM, Svetlin Manavski <\[email protected]> wrote:\n\n> I am now a bit puzzled after the initial satisfaction by Marinos' reply.\n>\n> 1. what do you mean exactly by \"to ensure your UNION succeeds\". The dblink\n> docs do not mention anything about issues using directly the suggested\n> dblink_send_query() + dblink_get_results(). What problems should I expect in\n> using them as suggested by Marinos?\n>\n>\nAdmittedly, I hadn't used those specific dblink functions and imagined\ndblink_get_result() failing if the query on the connection wasn't finished.\n It appears now that after some experimentation that it's perfectly happy\nhanging until the query is finished executing.\n\n\n\n> 2. If I understand correctly your method, it is not applicable from inside\n> a stored procedure, is it? I need to keep all the business logic within\n> PostgreSQL and provide just a clean interface to a simple GUI layer\n>\n>\nThen dblink is your answer. My suggestion applies if you were implementing\na solution in the application.\n\n\n\n> 3. Unfortunately GridSQL and Pgpool-II do not seem mature and stable\n> products to be used in commercial software. Neither one provides clear\n> documentation. GridSQL has been discontinued and it is not clear what kind\n> of future it will have. I have not tried GridSQL but I did try Pgpool-II. It\n> is disappointing that it may stop working correctly even just because of the\n> way you write the query (e.g. using uppercase in a field or using named\n> field in group by, ecc.). Even worse, when it does not recognize something\n> in the parallel query, it just provides incorrect result (from only the\n> local DB) rather than raising an exception. So I guess Pgpool-II in its\n> current state is good only for very simple applications, which are not\n> supposed to be reliable at all.\n>\n>\nI don't think GridSQL is discontinued. Appears though EnterpriseDB has open\nsourced it and moved to http://sourceforge.net/projects/gridsql/. Not\nincredibly active but some as recent as last month.\n\nSorry for the confusion.\n\nGreg\n\nOn Thu, Jun 30, 2011 at 3:02 AM, Svetlin Manavski <[email protected]> wrote:\nI am now a bit puzzled after the initial satisfaction by Marinos' reply. 1. what do you mean exactly by \"to ensure your UNION succeeds\". The dblink docs do not mention anything about issues using directly the suggested dblink_send_query() + dblink_get_results(). What problems should I expect in using them as suggested by Marinos?\nAdmittedly, I hadn't used those specific dblink functions and imagined dblink_get_result() failing if the query on the connection wasn't finished. It appears now that after some experimentation that it's perfectly happy hanging until the query is finished executing.\n 2. If I understand correctly your method, it is not applicable from inside a stored procedure, is it? I need to keep all the business logic within PostgreSQL and provide just a clean interface to a simple GUI layer\nThen dblink is your answer. My suggestion applies if you were implementing a solution in the application. \n3. Unfortunately GridSQL and Pgpool-II do not seem mature and stable products to be used in commercial software. Neither one provides clear documentation. GridSQL has been discontinued and it is not clear what kind of future it will have. I have not tried GridSQL but I did try Pgpool-II. It is disappointing that it may stop working correctly even just because of the way you write the query (e.g. 
"msg_date": "Thu, 30 Jun 2011 06:37:24 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is parallel union all possible over dblink?"
}
] |
[
{
"msg_contents": "Here's the setup:\n\nI'm cross joining two dimensions before left outer joining to a fact table\nso that I can throw a default value into the resultset wherever a value is\nmissing from the fact table. I have a time dimension and another dimension.\n I want the cross join to only cross a subset of rows from each dimension,\nso I have some filters in the ON clause of the inner join.\n\nI've got 2 nearly identical queries that perform incredibly differently.\n The only difference between them is that the fast query uses two text\ncolumns from the sensor dimension when filtering the inner join while the\nslow query uses bigint id values, where the ids correlate to the text\nstrings in the fast query. The string columns are device_name and\ndpoint_name, where device_name is unique but many devices have dpoints with\nthe same name. The bigint columns are device_id and dpoint_id, where both\ndevice_id and dpoint_id map to a single row. There are indexes on all of\nthem, so the only real difference is that an index on dpoint_name will have\nmore rows for a given value than the index on dpoint_id because dpoints with\nthe same name will still have different ids if attached to different\ndevices. In both queries, exactly 35 rows from the sensor dimension will be\nreturned. Note also that I'm aggregating fact table rows via avg() because\nI have duplicate rows in the fact table, but I want to extract only a single\nrow for each time and sensor row. The left outer join allows me to populate\nany missing rows with a default value and the aggregation via avg() combines\nduplicate rows so that they appear only once.\n\nI can easily just use the fast query, but my concern is that without\nunderstanding why the queries are executing differently, I might suddenly\ndiscover my code using the slow query plan instead of the fast one at some\npoint in the future, even when using the varchar columns instead of the\nbigint ids for filtering. They differ in execution speed by about 5x (which\ntranslates to many minutes), so that would be a big problem. If I could\nfigure out either a query structure or an index structure which will force\nthe fast query plan, I'd be much happier. So that is what I am looking for\n- an explanation of how I might convince the planner to always use the fast\nplan.\n\nIts a CentOS host - Quad core AMD Opteron 1.6Ghz, 2GB of RAM, single 75GB\ndisk with everything on it. I'm not looking for outright performance, just\nrelative performance between the 2 queries. DB config was taken wholesale\nfrom pg_tune with no changes, IIRC. It isn't a production box. 
I have yet\nto spec out production hardware for this application, so I don't know what\nit will eventually be.\n\n# select version();\n version\n\n------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.5 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit\n\n name | current_setting\n\n------------------------------+-------------------------------------------------\n checkpoint_completion_target | 0.9\n checkpoint_segments | 64\n default_statistics_target | 100\n effective_cache_size | 1408MB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n log_directory | pg_log\n log_filename | postgresql-%a.log\n log_rotation_age | 1d\n log_rotation_size | 0\n log_truncate_on_rotation | on\n logging_collector | on\n maintenance_work_mem | 240MB\n max_connections | 20\n max_stack_depth | 2MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 480MB\n TimeZone | UTC\n wal_buffers | 32MB\n work_mem | 48MB\n\n\ntime dimension looks like this:\n\n Column | Type | Modifiers\n--------------------------+-----------------------------+-----------\n time_zone | character varying(64) |\n tstamp | timestamp without time zone |\n tstamptz | timestamp with time zone |\n local_key | bigint |\n utc_key | bigint |\nIndexes:\n \"idx_time_ny_local_key\" btree (local_key)\n \"idx_time_ny_tstamp\" btree (tstamp) CLUSTER\n \"idx_time_ny_tstamptz\" btree (tstamptz)\n \"idx_time_ny_utc_key\" btree (utc_key)\n\nplus lots of other columns (approx 25 columns, mostly integer) that aren't\nrelevant to this query. It has 420,480 rows where each row is 300 seconds\nafter the previous row. local_key and utc_key are bigint columns in the\nform yyyyMMddHHmm (utc_key in UTC time and the other in local time for the\ntime zone represented by the table) and tstamp is the same value as an\nactual timestamp type. tstamptz is just a convenient column for when I need\nto do time zone conversions. tstamp is a timestamp without timezone that\nstores the date and time for that row in the local time zone for the table\nin question.\n\nThe other dimension looks like this:\n\n Column | Type |\nModifiers\n---------------+-----------------------------+------------------------------------------------------------\n sensor_pk | bigint | not null default\nnextval('sensor_sensor_pk_seq'::regclass)\n building_id | bigint |\n device_id | bigint |\n device_type | character varying(16) |\n device_name | character varying(64) |\n dpoint_id | bigint |\n dpoint_name | character varying(64) |\nIndexes:\n \"sensor_pkey\" PRIMARY KEY, btree (sensor_pk)\n \"idx_sensor_device_id\" btree (device_id)\n \"idx_sensor_device_name\" btree (device_name)\n \"idx_sensor_device_type\" btree (device_type)\n \"idx_sensor_dpoint_id\" btree (dpoint_id)\n \"idx_sensor_dpoint_name\" btree (dpoint_name)\n\nThere are other columns, but again, they aren't relevant - about 10 more\ncolumns, half bigint and half varchar. 
Row count is less than 400 rows at\nthe moment, but it will potentially get much larger unless I partition on\nbuilding_id, which I may well do - in which case, row count will never\nexceed 100,000 and rarely exceed 10,000.\n\nThe fact table is as follows:\n\n Table \"facts.bldg_1_thermal_fact\"\n Column | Type | Modifiers\n--------------+-----------------------------+---------------\n time_fk | bigint | not null\n sensor_fk | bigint | not null\n tstamp | timestamp without time zone |\n dpoint_value | real |\n device_mode | bigint |\n\nWith actual data in child tables:\n\n Table \"facts.bldg_1_thermal_fact_20110501\"\n Column | Type | Modifiers\n--------------+-----------------------------+---------------\n time_fk | bigint | not null\n sensor_fk | bigint | not null\n tstamp | timestamp without time zone |\n dpoint_value | real |\n device_mode | bigint |\nIndexes:\n \"idx_bldg_1_thermal_fact_20110501_sensor_fk\" btree (sensor_fk)\n \"idx_bldg_1_thermal_fact_20110501_time_fk\" btree (time_fk) CLUSTER\nCheck constraints:\n \"bldg_1_thermal_fact_20110501_time_fk_check\" CHECK (time_fk >=\n201105010000::bigint AND time_fk < 201106010000::bigint)\n \"bldg_1_thermal_fact_20110501_tstamp_check\" CHECK (tstamp >= '2011-05-01\n00:00:00'::timestamp without time zone AND tstamp < '2011-06-01\n00:00:00'::timestamp without time zone)\nInherits: bldg_1_thermal_fact\n\nOne of the 2 partitions that exists at the moment contains 2 million rows,\nnone of which are relevant to the query in question. The other partition\nhas 4 million rows and contains 100% of the rows returned by the query.\n\nfast query is as follows:\n\n SELECT t.local_key,\n s.device_name||'_'||s.dpoint_name,\n CASE WHEN avg(f.dpoint_value) IS NULL THEN -999999\n ELSE avg(f.dpoint_value)::numeric(10,2)\n END as dpoint_value\n FROM dimensions.sensor s\n INNER JOIN dimensions.time_ny t\n ON s.building_id=1\n AND ((s.device_name='AHU_1M02' AND s.dpoint_name='SpaceTemp') OR\n(s.device_name='VAV_1M_01' AND\ns.dpoint_name='EffectSetPt')OR(s.device_name='VAV_1M_01' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_02A' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_02A' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_02' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_02' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_03' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_03' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_04' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_04' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_05A' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_05A' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_05' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_05' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_06' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_06' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_07' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_07' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_08' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_08' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_09' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_09' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_10' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_10' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_11' AND\ns.dpoint_name='EffectSetPt') OR 
(s.device_name='VAV_1M_11' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_12' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_12' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_13' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_13' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_14' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_14' AND\ns.dpoint_name='SpaceTemp') OR (s.device_name='VAV_1M_15' AND\ns.dpoint_name='EffectSetPt') OR (s.device_name='VAV_1M_15' AND\ns.dpoint_name='SpaceTemp'))\n AND t.tstamp BETWEEN '15-Jun-2011 00:00' AND '29-Jun-2011 00:00'\n LEFT OUTER JOIN facts.bldg_1_thermal_fact f\n ON f.time_fk=t.local_key\n AND f.sensor_fk=s.sensor_pk\n GROUP BY 1,2\n ORDER BY 1,2\n\nNote: as complicated as that big filter on the inner join is, it will never\nresult in more than a few tens of rows being selected.\n\nFast explain is here:\nhttp://explain.depesz.com/s/FAaR\n\nSlow query is as follows:\n\n SELECT t.local_key,\n s.device_name||'_'||s.dpoint_name,\n CASE WHEN avg(f.dpoint_value) IS NULL THEN -999999\n ELSE avg(f.dpoint_value)::numeric(10,2)\n END as dpoint_value\n FROM dimensions.sensor s\n INNER JOIN dimensions.time_ny t\n ON s.building_id=1\n AND ((s.device_id=33 AND s.dpoint_id=183) OR (s.device_id=33 AND\ns.dpoint_id=184) OR (s.device_id=34 AND s.dpoint_id=187) OR (s.device_id=34\nAND s.dpoint_id=188) OR (s.device_id=35 AND s.dpoint_id=191) OR\n(s.device_id=35 AND s.dpoint_id=192) OR (s.device_id=36 AND s.dpoint_id=195)\nOR (s.device_id=36 AND s.dpoint_id=196) OR (s.device_id=77 AND\ns.dpoint_id=364) OR (s.device_id=20 AND s.dpoint_id=131) OR (s.device_id=20\nAND s.dpoint_id=132) OR (s.device_id=21 AND s.dpoint_id=135) OR\n(s.device_id=21 AND s.dpoint_id=136) OR (s.device_id=22 AND s.dpoint_id=139)\nOR (s.device_id=22 AND s.dpoint_id=140) OR (s.device_id=30 AND\ns.dpoint_id=171) OR (s.device_id=30 AND s.dpoint_id=172) OR (s.device_id=23\nAND s.dpoint_id=143) OR (s.device_id=23 AND s.dpoint_id=144) OR\n(s.device_id=24 AND s.dpoint_id=147) OR (s.device_id=24 AND s.dpoint_id=148)\nOR (s.device_id=25 AND s.dpoint_id=151) OR (s.device_id=25 AND\ns.dpoint_id=152) OR (s.device_id=26 AND s.dpoint_id=155) OR (s.device_id=26\nAND s.dpoint_id=156) OR (s.device_id=27 AND s.dpoint_id=159) OR\n(s.device_id=27 AND s.dpoint_id=160) OR (s.device_id=28 AND s.dpoint_id=163)\nOR (s.device_id=28 AND s.dpoint_id=164) OR (s.device_id=29 AND\ns.dpoint_id=167) OR (s.device_id=29 AND s.dpoint_id=168) OR (s.device_id=31\nAND s.dpoint_id=175) OR (s.device_id=31 AND s.dpoint_id=176) OR\n(s.device_id=32 AND s.dpoint_id=179) OR (s.device_id=32 AND\ns.dpoint_id=180))\n AND t.tstamp BETWEEN '15-Jun-2011 00:00' AND '29-Jun-2011 00:00'\n LEFT OUTER JOIN facts.bldg_1_thermal_fact f\n ON f.time_fk=t.local_key\n AND f.sensor_fk=s.sensor_pk\n GROUP BY 1,2\n ORDER BY 1,2\n\nSlow explain is here:\nhttp://explain.depesz.com/s/qao\n\nI tried rewriting the slow query such that the cross join is expressed as a\nsubquery left outer joined to the fact table, but it had no effect. the\nquery plan was identical.\n\nThe query plan also doesn't change if I replace the 2nd column with\ns.sensor_pk (which maps one to one to the string I am constructing, but\nwhich does have an index). This query is actually being used as the first\nquery in a call to crosstab(text, text) and that second column is just the\npivot column. 
It doesn't matter to me whether it is a string or the id,\nsince those labels wind up being re-written as column names of the function\nresult. (select * from crosstab() as q(local_key bigint, column1 real,\ncolumn2 real, ...).",
"msg_date": "Thu, 30 Jun 2011 01:53:14 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "near identical queries have vastly different plans"
},
{
"msg_contents": "On Thu, Jun 30, 2011 at 1:53 AM, Samuel Gendler\n<[email protected]>wrote:\n\n> If I could figure out either a query structure or an index structure which\n> will force the fast query plan, I'd be much happier. So that is what I am\n> looking for - an explanation of how I might convince the planner to always\n> use the fast plan.\n>\n>\nFor the record, \"set enable_nestloop=false\" does force a more effective plan\nwhen using the 'slow' query. It is not quite identical in structure - it\nmaterializes the other side of the query, resulting in about 10% less\nperformance - but it is close enough that I'm tempted to disable nestloop\nwhenever I run the query in the hope that it will prevent the planner from\nswitching to the really awful plan. I know that's kind of a drastic\nmeasure, so hopefully someone out there will suggest a config fix which\naccomplishes the same thing without requiring special handling for this\nquery, but at least it works (for now).\n\nIncidentally, upgrading to 9.0.x is not out of the question if it is\nbelieved that doing so might help here. I'm only running 8.4 because I've\ngot another project in production on 8.4 and I don't want to have to deal\nwith running both versions on my development laptop. But that's a pretty\nweak reason for not upgrading, I know.\n\n--sam\n\nOn Thu, Jun 30, 2011 at 1:53 AM, Samuel Gendler <[email protected]> wrote:\nIf I could figure out either a query structure or an index structure which will force the fast query plan, I'd be much happier. So that is what I am looking for - an explanation of how I might convince the planner to always use the fast plan.\nFor the record, \"set enable_nestloop=false\" does force a more effective plan when using the 'slow' query. It is not quite identical in structure - it materializes the other side of the query, resulting in about 10% less performance - but it is close enough that I'm tempted to disable nestloop whenever I run the query in the hope that it will prevent the planner from switching to the really awful plan. I know that's kind of a drastic measure, so hopefully someone out there will suggest a config fix which accomplishes the same thing without requiring special handling for this query, but at least it works (for now).\nIncidentally, upgrading to 9.0.x is not out of the question if it is believed that doing so might help here. I'm only running 8.4 because I've got another project in production on 8.4 and I don't want to have to deal with running both versions on my development laptop. But that's a pretty weak reason for not upgrading, I know.\n--sam",
"msg_date": "Thu, 30 Jun 2011 02:40:10 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: near identical queries have vastly different plans"
},
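The workaround in the message above (disabling nested loops) can be confined to the one problem query instead of the whole server. A minimal sketch, assuming the report can be wrapped in an explicit transaction; SET LOCAL reverts automatically at COMMIT or ROLLBACK, so other queries in the session keep the default planner settings:

BEGIN;
SET LOCAL enable_nestloop = off;
-- run the problem report query here (the SELECT ... FROM dimensions.sensor ...
-- query from the first message in this thread)
COMMIT;

Outside a transaction, a plain SET enable_nestloop = off followed by RESET enable_nestloop after the query has the same effect for just that session.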
{
"msg_contents": "On Thu, Jun 30, 2011 at 1:53 AM, Samuel Gendler\n<[email protected]>wrote:\n\n> If I could figure out either a query structure or an index structure which\n> will force the fast query plan, I'd be much happier. So that is what I am\n> looking for - an explanation of how I might convince the planner to always\n> use the fast plan.\n>\n>\nI eventually noticed that constraint_exclusion didn't seem to be working and\nremembered that it only works when the filter is on the partitioned table\nitself, not when the table is being filtered via a join. Adding a where\nclause which limits f.time_fk to the appropriate range not only fixed\nconstraint_exclusion behaviour, but also caused the query planner to produce\nthe same plan for both versions of the query - a plan that was an order of\nmagnitude faster than the previous fastest plan. It went from 20 seconds\nto just less than 2 seconds.\n\nOn Thu, Jun 30, 2011 at 1:53 AM, Samuel Gendler <[email protected]> wrote:\nIf I could figure out either a query structure or an index structure which will force the fast query plan, I'd be much happier. So that is what I am looking for - an explanation of how I might convince the planner to always use the fast plan.\nI eventually noticed that constraint_exclusion didn't seem to be working and remembered that it only works when the filter is on the partitioned table itself, not when the table is being filtered via a join. Adding a where clause which limits f.time_fk to the appropriate range not only fixed constraint_exclusion behaviour, but also caused the query planner to produce the same plan for both versions of the query - a plan that was an order of magnitude faster than the previous fastest plan. It went from 20 seconds to just less than 2 seconds.",
"msg_date": "Thu, 30 Jun 2011 13:34:52 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: near identical queries have vastly different plans"
},
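A sketch of why the extra predicate matters, using a hypothetical miniature of the partitioning scheme described earlier in the thread (table names, column names, and ranges here are illustrative, not the poster's actual DDL). In 8.4, constraint exclusion only compares a partition's CHECK constraints against constant filters applied to the partitioned table itself, so the time range has to be stated against f.time_fk directly rather than only against the joined time dimension:

SET constraint_exclusion = partition;   -- the 8.4 default

CREATE TABLE fact (time_fk bigint NOT NULL, dpoint_value real);
CREATE TABLE fact_201105 (CHECK (time_fk >= 201105010000 AND time_fk < 201106010000)) INHERITS (fact);
CREATE TABLE fact_201106 (CHECK (time_fk >= 201106010000 AND time_fk < 201107010000)) INHERITS (fact);

-- Only fact_201106 (plus the empty parent) is scanned, because the constant
-- range is attached directly to the partitioned table:
EXPLAIN SELECT * FROM fact f
WHERE f.time_fk >= 201106150000 AND f.time_fk < 201106290000;

In the thread's query, the same constant range would simply be repeated as an additional condition on f.time_fk alongside the existing join condition on t.local_key.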
{
"msg_contents": "Samuel Gendler <[email protected]> writes:\n> I've got 2 nearly identical queries that perform incredibly differently.\n\nThe reason the slow query sucks is that the planner is estimating at\nmost one \"s\" row will match that complicated AND/OR condition, so it\ngoes for a nestloop. In the \"fast\" query there is another complicated\nAND/OR filter condition, but it's not so far off on the number of\nmatching rows, so you get a better plan choice. Can't tell from the\ngiven information whether the better guess is pure luck, or there's some\ndifference in the column statistics that makes it able to get a better\nestimate for that.\n\nIn general, though, you're skating on thin ice anytime you ask the\nplanner to derive statistical estimates about combinations of correlated\ncolumns --- and these evidently are correlated. Think about refactoring\nthe table definitions so that you're only testing a single column, which\nANALYZE will be able to provide stats about. Or maybe you can express\nit as a test on a computed expression, which you could then keep an\nindex on, prompting ANALYZE to gather stats about that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Jul 2011 18:46:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: near identical queries have vastly different plans "
},
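A hedged sketch of the two refactorings suggested in the message above, against a hypothetical cut-down version of the sensor dimension (the real table presumably has more columns). The point of both is to turn the correlated (device_id, dpoint_id) pair into a single value that ANALYZE can collect statistics on:

CREATE TABLE sensor (
    sensor_pk bigint PRIMARY KEY,   -- already identifies one (device, dpoint) pair
    device_id int NOT NULL,
    dpoint_id int NOT NULL
);

-- Option 1: filter on the single surrogate key, so the planner estimates
-- selectivity from one column's statistics instead of multiplying guesses
-- for two correlated columns:
--     WHERE s.sensor_pk IN (131, 132, 135, 136, ...)

-- Option 2: an expression index over a combined value (the encoding below is
-- made up and assumes dpoint_id stays below 100000); ANALYZE gathers
-- statistics on the indexed expression, and queries that filter on the same
-- expression can use both the index and those statistics:
CREATE INDEX sensor_device_dpoint_idx
    ON sensor ((device_id * 100000 + dpoint_id));
--     WHERE (s.device_id * 100000 + s.dpoint_id) IN (3300183, 3300184, ...)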
{
"msg_contents": "On Fri, Jul 1, 2011 at 3:46 PM, Tom Lane <[email protected]> wrote:\n\n> Samuel Gendler <[email protected]> writes:\n> > I've got 2 nearly identical queries that perform incredibly differently.\n>\n> The reason the slow query sucks is that the planner is estimating at\n> most one \"s\" row will match that complicated AND/OR condition, so it\n> goes for a nestloop. In the \"fast\" query there is another complicated\n> AND/OR filter condition, but it's not so far off on the number of\n> matching rows, so you get a better plan choice. Can't tell from the\n> given information whether the better guess is pure luck, or there's some\n> difference in the column statistics that makes it able to get a better\n> estimate for that.\n>\n> In general, though, you're skating on thin ice anytime you ask the\n> planner to derive statistical estimates about combinations of correlated\n> columns --- and these evidently are correlated. Think about refactoring\n> the table definitions so that you're only testing a single column, which\n> ANALYZE will be able to provide stats about. Or maybe you can express\n> it as a test on a computed expression, which you could then keep an\n> index on, prompting ANALYZE to gather stats about that.\n>\n\nThanks. There is actually already a column in s which is a primary key for\nthe 2 columns that are currently being tested for. I didn't write the\napplication code which generates the query, so can't say for sure why it is\nbeing generated as it is, but I'll ask the engineer in question to try the\nprimary key column instead and see what happens.\n\nOn Fri, Jul 1, 2011 at 3:46 PM, Tom Lane <[email protected]> wrote:\nSamuel Gendler <[email protected]> writes:\n> I've got 2 nearly identical queries that perform incredibly differently.\n\nThe reason the slow query sucks is that the planner is estimating at\nmost one \"s\" row will match that complicated AND/OR condition, so it\ngoes for a nestloop. In the \"fast\" query there is another complicated\nAND/OR filter condition, but it's not so far off on the number of\nmatching rows, so you get a better plan choice. Can't tell from the\ngiven information whether the better guess is pure luck, or there's some\ndifference in the column statistics that makes it able to get a better\nestimate for that.\n\nIn general, though, you're skating on thin ice anytime you ask the\nplanner to derive statistical estimates about combinations of correlated\ncolumns --- and these evidently are correlated. Think about refactoring\nthe table definitions so that you're only testing a single column, which\nANALYZE will be able to provide stats about. Or maybe you can express\nit as a test on a computed expression, which you could then keep an\nindex on, prompting ANALYZE to gather stats about that.Thanks. There is actually already a column in s which is a primary key for the 2 columns that are currently being tested for. I didn't write the application code which generates the query, so can't say for sure why it is being generated as it is, but I'll ask the engineer in question to try the primary key column instead and see what happens.",
"msg_date": "Fri, 1 Jul 2011 17:51:32 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: near identical queries have vastly different plans"
}
] |
[
{
"msg_contents": "All:\n\nWas curious if there was some sort of Open Source version of Infinite Cache,\nand/or a memcache layer that can be \"dropped\" in front of PostgreSQL without\napplication changes (which seems to be the \"key\" piece of Infinite Cache),\nor is this something that EnterpriseDB owns and you have to buy their\nversion of the software to use?\n\nI'm fine with piecing together a few different OS projects, but would prefer\nto not modify the app too much.\n\nThanks!\n\n\n-- \nAnthony Presley\n\nAll:Was curious if there was some sort of Open Source version of Infinite Cache, and/or a memcache layer that can be \"dropped\" in front of PostgreSQL without application changes (which seems to be the \"key\" piece of Infinite Cache), or is this something that EnterpriseDB owns and you have to buy their version of the software to use?\nI'm fine with piecing together a few different OS projects, but would prefer to not modify the app too much.Thanks!-- Anthony Presley",
"msg_date": "Fri, 1 Jul 2011 09:43:39 -0500",
"msg_from": "Anthony Presley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Infinite Cache"
},
{
"msg_contents": "On 07/01/2011 10:43 AM, Anthony Presley wrote:\n> Was curious if there was some sort of Open Source version of Infinite \n> Cache, and/or a memcache layer that can be \"dropped\" in front of \n> PostgreSQL without application changes (which seems to be the \"key\" \n> piece of Infinite Cache), or is this something that EnterpriseDB owns \n> and you have to buy their version of the software to use? \n\nThe best solution available for this class of problem is pgmemcache: \nhttp://pgfoundry.org/projects/pgmemcache/\n\nThere's not too much documentation about it around, but you'll find an \nintro talk at http://projects.2ndquadrant.com/char10 I found helpful \nwhen Hannu presented it. It does take some work to utilize, including \napplication code changes. The hardest part of which is usually making \nsure the key hashing scheme it uses to identify re-usable queries is \nuseful to you. And that isn't always the case.\n\nThis approach scales better than \"Infinite Cache\" because you can move \nthe whole mess onto another server optimized to be a caching system. \nThose systems have a very different set of trade-offs and \ncorrespondingly economics than a database server must have. The cache \nsystem can be a cheap box with a bunch of RAM, that's it. And the read \ntraffic it avoids passing to the server really doesn't touch the \ndatabase at all, which is way better than going to the database but \nbeing serviced quickly.\n\nEveryone would prefer performance improvements that don't involve any \nmodification of their application. The unfortunate reality of database \ndesign is that any server tuning can only provide a modest gain; if you \nmake things twice as fast you've done a great job. Whereas when doing \napplication redesign for better performance, I aim for a 10X speedup and \noften do much better than that.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Fri, 01 Jul 2011 11:54:09 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Cache"
},
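For readers who have not used pgmemcache, a rough sketch of the read-through pattern it enables, assuming the memcache_server_add / memcache_get / memcache_set functions the extension installs (check its README for the exact names and signatures in your build); the lookup table and key scheme here are entirely hypothetical:

SELECT memcache_server_add('127.0.0.1:11211');

CREATE OR REPLACE FUNCTION cached_lookup(p_id int) RETURNS text AS $$
DECLARE
    v_key text := 'lookup:' || p_id;
    v_val text;
BEGIN
    v_val := memcache_get(v_key);          -- NULL on a cache miss
    IF v_val IS NOT NULL THEN
        RETURN v_val;                      -- served without touching the table
    END IF;
    SELECT name INTO v_val FROM expensive_table WHERE id = p_id;
    PERFORM memcache_set(v_key, v_val);
    RETURN v_val;
END;
$$ LANGUAGE plpgsql;

Cache invalidation, calling memcache_delete from triggers or application code whenever the underlying rows change, is where most of the application work described above tends to land.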
{
"msg_contents": "On Jul 1, 2011, at 9:43 AM, Anthony Presley wrote:\n> Was curious if there was some sort of Open Source version of Infinite Cache, and/or a memcache layer that can be \"dropped\" in front of PostgreSQL without application changes (which seems to be the \"key\" piece of Infinite Cache), or is this something that EnterpriseDB owns and you have to buy their version of the software to use?\n\nThere had been some talk at one point about getting the backend-changes to support Infinite Cache into mainline Postgres. If that ever happened you could build your own version of it.\n\nBTW, thanks to the compression feature of IC I've heard it can actually be beneficial to run it on the same server.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Fri, 1 Jul 2011 17:37:51 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Cache"
},
{
"msg_contents": "On Fri, Jul 1, 2011 at 11:37 PM, Jim Nasby <[email protected]> wrote:\n> On Jul 1, 2011, at 9:43 AM, Anthony Presley wrote:\n>> Was curious if there was some sort of Open Source version of Infinite Cache, and/or a memcache layer that can be \"dropped\" in front of PostgreSQL without application changes (which seems to be the \"key\" piece of Infinite Cache), or is this something that EnterpriseDB owns and you have to buy their version of the software to use?\n>\n> There had been some talk at one point about getting the backend-changes to support Infinite Cache into mainline Postgres. If that ever happened you could build your own version of it.\n>\n> BTW, thanks to the compression feature of IC I've heard it can actually be beneficial to run it on the same server.\n\nCorrect.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Sat, 2 Jul 2011 09:20:33 +0100",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Cache"
},
{
"msg_contents": "On Sat, Jul 2, 2011 at 00:37, Jim Nasby <[email protected]> wrote:\n> On Jul 1, 2011, at 9:43 AM, Anthony Presley wrote:\n>> Was curious if there was some sort of Open Source version of Infinite Cache, and/or a memcache layer that can be \"dropped\" in front of PostgreSQL without application changes (which seems to be the \"key\" piece of Infinite Cache), or is this something that EnterpriseDB owns and you have to buy their version of the software to use?\n>\n> There had been some talk at one point about getting the backend-changes to support Infinite Cache into mainline Postgres. If that ever happened you could build your own version of it.\n\nI was at one point told that *all* of infinite cache would be\nsubmitted to the community, but it was in need of some cleanup first.\nBut by now I think that decision has been changed - I certainly hope\nit didn't take years to clean up ;) So I wouldn't hold my breath for\nthat one.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Sun, 3 Jul 2011 13:21:09 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Cache"
},
{
"msg_contents": "On 07/01/2011 06:37 PM, Jim Nasby wrote:\n> BTW, thanks to the compression feature of IC I've heard it can \n> actually be beneficial to run it on the same server.\n\nSure, its implementation works in a way that helps improve performance \non the database server. My point was that I'd be shocked if it were \neven possible to double performance if you use it. Whereas putting a \npgmemcache server in front of the database can do much better than that, \non a system that reads the same things many times per update. \"Infinite \nCache\" is a useful technology and the fact that it work transparently \nthe application is a nice benefit of EDB's commercial product. But it's \nusually the case that if you really want to do the best possible \nimplementation of an approach, optimizing very specifically for your \napplication is what's needed.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Sun, 03 Jul 2011 14:18:16 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Cache"
},
{
"msg_contents": "On 07/03/2011 06:21 AM, Magnus Hagander wrote:\n\n> I was at one point told that *all* of infinite cache would be\n> submitted to the community, but it was in need of some cleanup\n> first.\n\nI'm not sure what kind of cleanup would be involved, but we had some \nproblems with index corruption that wasn't fixed until a February patch \nwas applied. My guess is that earlier versions of Infinite Cache weren't \nall that careful with verifying certain edge cases during connection \nrenegotiation or timeout scenarios. It only seemed to pop up once every \nfew billion queries, but that's all it takes to bring down a heavy OLTP \nsystem.\n\nI'd say it's probably safe enough these days. But it's also one of those \nexclusive selling points they're using right now to garner EDB \ncustomers. So I doubt it'll be released any time *soon*, though may make \nit eventually.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Tue, 5 Jul 2011 08:35:38 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Cache"
},
{
"msg_contents": "On 05.07.2011 16:35, Shaun Thomas wrote:\n> I'd say it's probably safe enough these days. But it's also one of those\n> exclusive selling points they're using right now to garner EDB\n> customers. So I doubt it'll be released any time *soon*, though may make\n> it eventually.\n\nI doubt the community would want it even if it was open sourced. As an \nopen source project, what would probably make more sense is a similar \ncaching mechanism built into the kernel, somewhere between the \nfilesystem cache and user-space. That way any you could use it with any \napplication that benefits from the kind of large cache that Infinite \nCache provides.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 08 Jul 2011 16:34:06 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Cache"
},
{
"msg_contents": "On Fri, Jul 8, 2011 at 15:34, Heikki Linnakangas\n<[email protected]> wrote:\n> On 05.07.2011 16:35, Shaun Thomas wrote:\n>>\n>> I'd say it's probably safe enough these days. But it's also one of those\n>> exclusive selling points they're using right now to garner EDB\n>> customers. So I doubt it'll be released any time *soon*, though may make\n>> it eventually.\n>\n> I doubt the community would want it even if it was open sourced. As an open\n> source project, what would probably make more sense is a similar caching\n> mechanism built into the kernel, somewhere between the filesystem cache and\n> user-space. That way any you could use it with any application that benefits\n> from the kind of large cache that Infinite Cache provides.\n\nDon't underestimate how much easier it is to get something into an\nenvironment as part of the database than as an extra module or app\nthat you have to patch your kernel with. Unless you can get it all the\nway into the baseline kernel of course, but that's not going to be\neasy...\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Fri, 8 Jul 2011 15:41:54 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Cache"
}
] |
[
{
"msg_contents": "Dear all, \nHow can retrieve total query run-time when i run any query with out using explain \nfor example when i run any query such as\nselect * from tablel1;\nTotal query runtime: 443 ms.\n i want a function can return \"runtime: 443 ms\" with out using explain \ni want this command to call it from java source code\ncan any one help me??? .\nDear all, How can retrieve total query run-time when i run any query with out using explain for example when i run any query such asselect * from tablel1;Total query runtime: 443 ms. i want a function can return \"runtime: 443 ms\" with out using explain i want this command to call it from java source codecan any one help me??? .",
"msg_date": "Sat, 2 Jul 2011 21:13:54 -0700 (PDT)",
"msg_from": "Radhya sahal <[email protected]>",
"msg_from_op": true,
"msg_subject": "How can retrieve total query run-time with out using explain"
},
{
"msg_contents": "On 3/07/2011 12:13 PM, Radhya sahal wrote:\n> Dear all,\n> How can retrieve total query run-time when i run any query with out\n> using explain\n\nEXPLAIN doesn't tell you the query runtime.\n\nEXPLAIN ANALYZE does, but it doesn't return the query results, returning \nplan and timing information instead.\n\n> for example when i run any query such as\n> select * from tablel1;\n> Total query runtime: 443 ms.\n\nin psql, use:\n\n\\timing on\n\n> i want a function can return \"runtime: 443 ms\" with out using explain\n> i want this command to call it from java source code\n\nRecord the system time before you run the query using \nSystem.currentTimeMillis() or System.nanoTime().\n\nRecord the system time after you run the query.\n\nSubtract and convert the difference to a java.util.Date so you can \nformat it prettily for display, or just print the difference in \nmilliseconds. If you want more flexible date/time handling, see JodaTime.\n\nThis gives you the time the query took including how long it took to \nretrieve the initial resultset. If you want the time the query took to \nexecute on the server, not counting fetch time, this may not be what you \nwant.\n\nI don't know how to get query execution time *not* counting resultset \nretrieval time from the client. Anyone know if it's even possible?\n\n--\nCraig Ringer\n",
"msg_date": "Sun, 03 Jul 2011 14:09:18 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can retrieve total query run-time with out using\n explain"
}
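If the goal is the server-side execution time specifically, excluding result transfer, one rough option is to wrap the statement in a plpgsql function and measure with clock_timestamp(), which keeps advancing inside a transaction (unlike now()). The function name and the query passed to it below are hypothetical:

CREATE OR REPLACE FUNCTION time_query(p_sql text) RETURNS interval AS $$
DECLARE
    t0 timestamptz := clock_timestamp();
BEGIN
    EXECUTE p_sql;                         -- result rows are discarded
    RETURN clock_timestamp() - t0;
END;
$$ LANGUAGE plpgsql;

-- e.g. returns roughly '00:00:00.443' for a 443 ms statement:
SELECT time_query('SELECT * FROM tablel1');

Because the rows are discarded inside the function, this measures execution without client fetch time. Setting log_min_duration_statement in postgresql.conf is another way to get per-statement server-side timings, written to the server log.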
] |
[
{
"msg_contents": "I am trying to perform a bulkload of 1 crore rows using copy of row width\n1000, but the transaction logs keeps getting huge space, is there a easy way\nto do it and avoid the transaction logs\n\nI am trying to perform a bulkload of 1 crore rows using copy of row width 1000, but the transaction logs keeps getting huge space, is there a easy way to do it and avoid the transaction logs",
"msg_date": "Tue, 5 Jul 2011 20:12:34 +0530",
"msg_from": "shouvik basu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres bulkload without transaction logs"
},
{
"msg_contents": "On 07/05/2011 10:42 AM, shouvik basu wrote:\n>\n> I am trying to perform a bulkload of 1 crore rows using copy of row \n> width 1000, but the transaction logs keeps getting huge space, is \n> there a easy way to do it and avoid the transaction logs\n>\n\nIf you're using the built-in streaming replication or an archive_command \nto replicate this data, you can't avoid the transaction logs; those are \nhow the data gets shipped to the other server.\n\nIf you CREATE or TRUNCATE the table you're loading data into as part of \na transaction that loads into that, in a single server setup this will \navoid creating the transaction log data. For example:\n\nBEGIN;\nTRUNCATE TABLE t;\nCOPY t FROM ...\nCOMMIT;\n\nThat COPY will execute with minimal pg_xlog activity, because if the \nserver crashes in the middle it will just roll back the table truncation.\n\nAnother piece of software you may find useful for this case is \nhttp://pgbulkload.projects.postgresql.org/\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nComprehensive and Customized PostgreSQL Training Classes:\nhttp://www.2ndquadrant.us/postgresql-training/\n\n",
"msg_date": "Tue, 05 Jul 2011 13:41:01 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres bulkload without transaction logs"
}
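A variant of the same idea, creating the table in the loading transaction; the file path, format, and column list below are placeholders. Note that this WAL-skipping optimization only applies when WAL does not need to be shipped anywhere (no archive_command on 8.4, wal_level = minimal on 9.0):

BEGIN;
CREATE TABLE big_load (id bigint, payload text);
COPY big_load FROM '/tmp/big_load.csv' WITH CSV;
COMMIT;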
] |
[
{
"msg_contents": "I've just copied a database from one linux machine to another.\n\n\"Fast\" machine is CentOS 5.5, running postgres 9.0.0 64 bit\n\n \n\n\"Slow\" machine is Red Hat 5.5 running postgres 9.0.2 64 bit.\n\n \n\nHere's the query:\n\nexplain analyze select sentenceid from sentences where sentenceid = any\n( array(select sentenceid from sentences where docid =\nany(array[696374,696377])))\n\n \n\non the fast machine this is the explain:\n\n\"Bitmap Heap Scan on sentences (cost=924.41..964.47 rows=10 width=8)\n(actual time=0.748..0.800 rows=41 loops=1)\"\n\n\" Recheck Cond: (sentenceid = ANY ($0))\"\n\n\" InitPlan 1 (returns $0)\"\n\n\" -> Bitmap Heap Scan on sentences (cost=12.93..879.27 rows=220\nwidth=8) (actual time=0.199..0.446 rows=41 loops=1)\"\n\n\" Recheck Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"\n\n\" -> Bitmap Index Scan on sentdocs (cost=0.00..12.87 rows=220\nwidth=0) (actual time=0.134..0.134 rows=41 loops=1)\"\n\n\" Index Cond: (docid = ANY\n('{696374,696377}'::bigint[]))\"\n\n\" -> Bitmap Index Scan on pk_sentences (cost=0.00..45.14 rows=10\nwidth=0) (actual time=0.741..0.741 rows=41 loops=1)\"\n\n\" Index Cond: (sentenceid = ANY ($0))\"\n\n\"Total runtime: 0.925 ms\"\n\n \n\nAnd on the slow machine:\n\n\"Seq Scan on sentences (cost=10000000608.90..10000445893.60 rows=10\nwidth=8) (actual time=2679.412..6372.393 rows=41 loops=1)\"\n\n\" Filter: (sentenceid = ANY ($0))\"\n\n\" InitPlan 1 (returns $0)\"\n\n\" -> Bitmap Heap Scan on sentences (cost=10.73..608.90 rows=152\nwidth=8) (actual time=0.044..0.076 rows=41 loops=1)\"\n\n\" Recheck Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n\n\" -> Bitmap Index Scan on sentdocs (cost=0.00..10.69 rows=152\nwidth=0) (actual time=0.037..0.037 rows=41 loops=1)\"\n\n\" Index Cond: (docid = ANY\n('{696374,696377}'::integer[]))\"\n\n\"Total runtime: 6372.468 ms\"\n\n \n\nThe configurations were identical initially, I've changed those on the\nslow machine but to no avail.\n\n \n\nthere is an index on sentences on the docid in both systems.\n\n \n\nI'm at quite a loss as to how/why this is occurring and what to do about\nit.\n\n \n\nI tried disabling seqscan on the slow machine but that also made no\ndifference.\n\n \n\nAny help/ideas much appreciated.\n\n \n\nMatthias\n\n\n\n\n\n\n\n\n\n\n\nI've just copied a database from one linux machine to another.\n\"Fast\" machine is CentOS 5.5, running postgres\n9.0.0 64 bit\n \n\"Slow\" machine is Red Hat 5.5 running postgres\n9.0.2 64 bit.\n \nHere's the query:\nexplain analyze select sentenceid from sentences where\nsentenceid = any ( array(select sentenceid from sentences where docid =\nany(array[696374,696377])))\n \non the fast machine this is the explain:\n\"Bitmap Heap Scan on sentences \n(cost=924.41..964.47 rows=10 width=8) (actual time=0.748..0.800 rows=41\nloops=1)\"\n\" Recheck Cond: (sentenceid = ANY ($0))\"\n\" InitPlan 1 (returns $0)\"\n\" -> Bitmap Heap Scan on\nsentences (cost=12.93..879.27 rows=220 width=8) (actual time=0.199..0.446\nrows=41 loops=1)\"\n\" \nRecheck Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"\n\" \n-> Bitmap Index Scan on sentdocs (cost=0.00..12.87 rows=220\nwidth=0) (actual time=0.134..0.134 rows=41 loops=1)\"\n\" \nIndex Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"\n\" -> Bitmap Index Scan on\npk_sentences (cost=0.00..45.14 rows=10 width=0) (actual time=0.741..0.741\nrows=41 loops=1)\"\n\" Index Cond:\n(sentenceid = ANY ($0))\"\n\"Total runtime: 0.925 ms\"\n \nAnd on the slow machine:\n\"Seq Scan on sentences 
\n(cost=10000000608.90..10000445893.60 rows=10 width=8) (actual time=2679.412..6372.393\nrows=41 loops=1)\"\n\" Filter: (sentenceid = ANY ($0))\"\n\" InitPlan 1 (returns $0)\"\n\" -> Bitmap Heap Scan on\nsentences (cost=10.73..608.90 rows=152 width=8) (actual time=0.044..0.076\nrows=41 loops=1)\"\n\" \nRecheck Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n\" \n-> Bitmap Index Scan on sentdocs (cost=0.00..10.69 rows=152\nwidth=0) (actual time=0.037..0.037 rows=41 loops=1)\"\n\" \nIndex Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n\"Total runtime: 6372.468 ms\"\n \nThe configurations were identical initially, I've changed\nthose on the slow machine but to no avail.\n \nthere is an index on sentences on the docid in both systems.\n \nI'm at quite a loss as to how/why this is occurring and what\nto do about it.\n \nI tried disabling seqscan on the slow machine but that also\nmade no difference.\n \nAny help/ideas much appreciated.\n \nMatthias",
"msg_date": "Tue, 5 Jul 2011 16:50:22 -0400",
"msg_from": "\"Matthias Howell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query in 9.0.2 not using index in 9.0.0 works fine"
},
{
"msg_contents": "On Tue, Jul 5, 2011 at 1:50 PM, Matthias Howell\n<[email protected]>wrote:\n\n> I've just copied a database from one linux machine to another.****\n>\n> \"Fast\" machine is CentOS 5.5, running postgres 9.0.0 64 bit****\n>\n> ** **\n>\n> \"Slow\" machine is Red Hat 5.5 running postgres 9.0.2 64 bit.****\n>\n> ** **\n>\n> Here's the query:****\n>\n> explain analyze select sentenceid from sentences where sentenceid = any (\n> array(select sentenceid from sentences where docid =\n> any(array[696374,696377])))****\n>\n> ** **\n>\n> on the fast machine this is the explain:****\n>\n> \"Bitmap Heap Scan on sentences (cost=924.41..964.47 rows=10 width=8)\n> (actual time=0.748..0.800 rows=41 loops=1)\"****\n>\n> \" Recheck Cond: (sentenceid = ANY ($0))\"****\n>\n> \" InitPlan 1 (returns $0)\"****\n>\n> \" -> Bitmap Heap Scan on sentences (cost=12.93..879.27 rows=220\n> width=8) (actual time=0.199..0.446 rows=41 loops=1)\"****\n>\n> \" Recheck Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"****\n>\n> \" -> Bitmap Index Scan on sentdocs (cost=0.00..12.87 rows=220\n> width=0) (actual time=0.134..0.134 rows=41 loops=1)\"****\n>\n> \" Index Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"*\n> ***\n>\n> \" -> Bitmap Index Scan on pk_sentences (cost=0.00..45.14 rows=10\n> width=0) (actual time=0.741..0.741 rows=41 loops=1)\"****\n>\n> \" Index Cond: (sentenceid = ANY ($0))\"****\n>\n> \"Total runtime: 0.925 ms\"****\n>\n> ** **\n>\n> And on the slow machine:****\n>\n> \"Seq Scan on sentences (cost=10000000608.90..10000445893.60 rows=10\n> width=8) (actual time=2679.412..6372.393 rows=41 loops=1)\"****\n>\n> \" Filter: (sentenceid = ANY ($0))\"****\n>\n> \" InitPlan 1 (returns $0)\"****\n>\n> \" -> Bitmap Heap Scan on sentences (cost=10.73..608.90 rows=152\n> width=8) (actual time=0.044..0.076 rows=41 loops=1)\"****\n>\n> \" Recheck Cond: (docid = ANY ('{696374,696377}'::integer[]))\"****\n>\n> \" -> Bitmap Index Scan on sentdocs (cost=0.00..10.69 rows=152\n> width=0) (actual time=0.037..0.037 rows=41 loops=1)\"****\n>\n> \" Index Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n> ****\n>\n> \"Total runtime: 6372.468 ms\"****\n>\n> ** **\n>\n> The configurations were identical initially, I've changed those on the slow\n> machine but to no avail.****\n>\n> ** **\n>\n> there is an index on sentences on the docid in both systems.****\n>\n> ** **\n>\n> I'm at quite a loss as to how/why this is occurring and what to do about\n> it.****\n>\n> ** **\n>\n> I tried disabling seqscan on the slow machine but that also made no\n> difference.****\n>\n> ** **\n>\n> Any help/ideas much appreciated.\n>\n\nHave you done a vacuum analyze since loading the data on the slow db? Are\nstatistics settings the same between the two hosts? It's interesting that\none version coerces the docid values to bigint and the other coerces to\ninteger, but that shouldn't impact the sentenceid comparison, which have to\nbe a consistent type since it is comparing sentenceid to sentenceid. Any\nreason why this isn't collapsed down to 'select distinct sentenceid from\nsentences where docid = any(array[696374,696377])' - is there a benefit to\nthe more complex structure? 
For that matter, why not 'where docid in (\n696374,696377)'\n\nI didn't see anything in the docs about distinct or any(array) that would\nindicate that that form should be preferred over IN ()\n\nOn Tue, Jul 5, 2011 at 1:50 PM, Matthias Howell <[email protected]> wrote:\n\n\nI've just copied a database from one linux machine to another.\n\"Fast\" machine is CentOS 5.5, running postgres\n9.0.0 64 bit\n \n\"Slow\" machine is Red Hat 5.5 running postgres\n9.0.2 64 bit.\n \nHere's the query:\nexplain analyze select sentenceid from sentences where\nsentenceid = any ( array(select sentenceid from sentences where docid =\nany(array[696374,696377])))\n \non the fast machine this is the explain:\n\"Bitmap Heap Scan on sentences \n(cost=924.41..964.47 rows=10 width=8) (actual time=0.748..0.800 rows=41\nloops=1)\"\n\" Recheck Cond: (sentenceid = ANY ($0))\"\n\" InitPlan 1 (returns $0)\"\n\" -> Bitmap Heap Scan on\nsentences (cost=12.93..879.27 rows=220 width=8) (actual time=0.199..0.446\nrows=41 loops=1)\"\n\" \nRecheck Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"\n\" \n-> Bitmap Index Scan on sentdocs (cost=0.00..12.87 rows=220\nwidth=0) (actual time=0.134..0.134 rows=41 loops=1)\"\n\" \nIndex Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"\n\" -> Bitmap Index Scan on\npk_sentences (cost=0.00..45.14 rows=10 width=0) (actual time=0.741..0.741\nrows=41 loops=1)\"\n\" Index Cond:\n(sentenceid = ANY ($0))\"\n\"Total runtime: 0.925 ms\"\n \nAnd on the slow machine:\n\"Seq Scan on sentences \n(cost=10000000608.90..10000445893.60 rows=10 width=8) (actual time=2679.412..6372.393\nrows=41 loops=1)\"\n\" Filter: (sentenceid = ANY ($0))\"\n\" InitPlan 1 (returns $0)\"\n\" -> Bitmap Heap Scan on\nsentences (cost=10.73..608.90 rows=152 width=8) (actual time=0.044..0.076\nrows=41 loops=1)\"\n\" \nRecheck Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n\" \n-> Bitmap Index Scan on sentdocs (cost=0.00..10.69 rows=152\nwidth=0) (actual time=0.037..0.037 rows=41 loops=1)\"\n\" \nIndex Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n\"Total runtime: 6372.468 ms\"\n \nThe configurations were identical initially, I've changed\nthose on the slow machine but to no avail.\n \nthere is an index on sentences on the docid in both systems.\n \nI'm at quite a loss as to how/why this is occurring and what\nto do about it.\n \nI tried disabling seqscan on the slow machine but that also\nmade no difference.\n \nAny help/ideas much appreciated.Have you done a vacuum analyze since loading the data on the slow db? Are statistics settings the same between the two hosts? It's interesting that one version coerces the docid values to bigint and the other coerces to integer, but that shouldn't impact the sentenceid comparison, which have to be a consistent type since it is comparing sentenceid to sentenceid. Any reason why this isn't collapsed down to 'select distinct sentenceid from sentences where docid = any(array[696374,696377])' - is there a benefit to the more complex structure? For that matter, why not 'where docid in (696374,696377)'\nI didn't see anything in the docs about distinct or any(array) that would indicate that that form should be preferred over IN ()",
"msg_date": "Wed, 6 Jul 2011 00:42:33 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query in 9.0.2 not using index in 9.0.0 works fine"
},
{
"msg_contents": "From: Samuel Gendler [mailto:[email protected]] \nSent: Wednesday, July 06, 2011 3:43 AM\nTo: Matthias Howell\nCc: [email protected]\nSubject: Re: [PERFORM] Query in 9.0.2 not using index in 9.0.0 works\nfine\n\n \n\n \n\nOn Tue, Jul 5, 2011 at 1:50 PM, Matthias Howell\n<[email protected]> wrote:\n\nI've just copied a database from one linux machine to another.\n\n\"Fast\" machine is CentOS 5.5, running postgres 9.0.0 64 bit\n\n \n\n\"Slow\" machine is Red Hat 5.5 running postgres 9.0.2 64 bit.\n\n \n\nHere's the query:\n\nexplain analyze select sentenceid from sentences where sentenceid = any\n( array(select sentenceid from sentences where docid =\nany(array[696374,696377])))\n\n \n\non the fast machine this is the explain:\n\n\"Bitmap Heap Scan on sentences (cost=924.41..964.47 rows=10 width=8)\n(actual time=0.748..0.800 rows=41 loops=1)\"\n\n\" Recheck Cond: (sentenceid = ANY ($0))\"\n\n\" InitPlan 1 (returns $0)\"\n\n\" -> Bitmap Heap Scan on sentences (cost=12.93..879.27 rows=220\nwidth=8) (actual time=0.199..0.446 rows=41 loops=1)\"\n\n\" Recheck Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"\n\n\" -> Bitmap Index Scan on sentdocs (cost=0.00..12.87 rows=220\nwidth=0) (actual time=0.134..0.134 rows=41 loops=1)\"\n\n\" Index Cond: (docid = ANY\n('{696374,696377}'::bigint[]))\"\n\n\" -> Bitmap Index Scan on pk_sentences (cost=0.00..45.14 rows=10\nwidth=0) (actual time=0.741..0.741 rows=41 loops=1)\"\n\n\" Index Cond: (sentenceid = ANY ($0))\"\n\n\"Total runtime: 0.925 ms\"\n\n \n\nAnd on the slow machine:\n\n\"Seq Scan on sentences (cost=10000000608.90..10000445893.60 rows=10\nwidth=8) (actual time=2679.412..6372.393 rows=41 loops=1)\"\n\n\" Filter: (sentenceid = ANY ($0))\"\n\n\" InitPlan 1 (returns $0)\"\n\n\" -> Bitmap Heap Scan on sentences (cost=10.73..608.90 rows=152\nwidth=8) (actual time=0.044..0.076 rows=41 loops=1)\"\n\n\" Recheck Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n\n\" -> Bitmap Index Scan on sentdocs (cost=0.00..10.69 rows=152\nwidth=0) (actual time=0.037..0.037 rows=41 loops=1)\"\n\n\" Index Cond: (docid = ANY\n('{696374,696377}'::integer[]))\"\n\n\"Total runtime: 6372.468 ms\"\n\n \n\nThe configurations were identical initially, I've changed those on the\nslow machine but to no avail.\n\n \n\nthere is an index on sentences on the docid in both systems.\n\n \n\nI'm at quite a loss as to how/why this is occurring and what to do about\nit.\n\n \n\nI tried disabling seqscan on the slow machine but that also made no\ndifference.\n\n \n\nAny help/ideas much appreciated.\n\n \n\nHave you done a vacuum analyze since loading the data on the slow db?\nAre statistics settings the same between the two hosts? It's\ninteresting that one version coerces the docid values to bigint and the\nother coerces to integer, but that shouldn't impact the sentenceid\ncomparison, which have to be a consistent type since it is comparing\nsentenceid to sentenceid. Any reason why this isn't collapsed down to\n'select distinct sentenceid from sentences where docid =\nany(array[696374,696377])' - is there a benefit to the more complex\nstructure? 
For that matter, why not 'where docid in (696374,696377)'\n\n \n\nI didn't see anything in the docs about distinct or any(array) that\nwould indicate that that form should be preferred over IN ()\n\n \n\n______\n\n \n\nI ran vacuum analyze, and I even dropped and recreated the index.\n\n \n\nThe differences between postgresql.conf are:\n\nFAST:\n\nshared_buffers = 2GB\n\n \n\nSLOW:\n\nshared_buffers = 4GB\n\n \n\nThe reason for using the any=array is that the array of docids is\npassed in as a parameter. This query is a subquery of a larger query.\nI am trying to solve the problem for the smaller query. The difference\nin the explain in the big query is essentially the seq scan on\nsentences. This query is a sub query that is performed - with\nvariations - 3 times in the larger query. The the fast instance the\nlarger query takes 950 milliseconds, on the slow instance, over 30\nseconds.\n\n \n\nHowever, in the end, it was user brain damage.\n\n \n\nIt does use the doc id index for the subquery, but for some reason, the\nprimary key on sentences - the sentenceid - was not set. So in fact,\nthere is no index.\n\n \n\nMachines vindicated once again.\n\n\n\n\n\n\n\n\n\n\n\n \n \n\nFrom: Samuel Gendler\n[mailto:[email protected]] \nSent: Wednesday, July 06, 2011 3:43 AM\nTo: Matthias Howell\nCc: [email protected]\nSubject: Re: [PERFORM] Query in 9.0.2 not using index in 9.0.0 works\nfine\n\n \n \n\nOn Tue, Jul 5, 2011 at 1:50 PM, Matthias Howell <[email protected]>\nwrote:\n\n\nI've\njust copied a database from one linux machine to another.\n\"Fast\"\nmachine is CentOS 5.5, running postgres 9.0.0 64 bit\n \n\"Slow\"\nmachine is Red Hat 5.5 running postgres 9.0.2 64 bit.\n \nHere's\nthe query:\nexplain\nanalyze select sentenceid from sentences where sentenceid = any ( array(select\nsentenceid from sentences where docid = any(array[696374,696377])))\n \non\nthe fast machine this is the explain:\n\"Bitmap\nHeap Scan on sentences (cost=924.41..964.47 rows=10 width=8) (actual\ntime=0.748..0.800 rows=41 loops=1)\"\n\" \nRecheck Cond: (sentenceid = ANY ($0))\"\n\" \nInitPlan 1 (returns $0)\"\n\" \n-> Bitmap Heap Scan on sentences (cost=12.93..879.27 rows=220\nwidth=8) (actual time=0.199..0.446 rows=41 loops=1)\"\n\" \nRecheck Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"\n\" \n-> Bitmap Index Scan on sentdocs (cost=0.00..12.87 rows=220\nwidth=0) (actual time=0.134..0.134 rows=41 loops=1)\"\n\" \nIndex Cond: (docid = ANY ('{696374,696377}'::bigint[]))\"\n\" \n-> Bitmap Index Scan on pk_sentences (cost=0.00..45.14 rows=10\nwidth=0) (actual time=0.741..0.741 rows=41 loops=1)\"\n\" \nIndex Cond: (sentenceid = ANY ($0))\"\n\"Total\nruntime: 0.925 ms\"\n \nAnd\non the slow machine:\n\"Seq\nScan on sentences (cost=10000000608.90..10000445893.60 rows=10 width=8)\n(actual time=2679.412..6372.393 rows=41 loops=1)\"\n\" \nFilter: (sentenceid = ANY ($0))\"\n\" \nInitPlan 1 (returns $0)\"\n\" \n-> Bitmap Heap Scan on sentences (cost=10.73..608.90 rows=152\nwidth=8) (actual time=0.044..0.076 rows=41 loops=1)\"\n\" \nRecheck Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n\" \n-> Bitmap Index Scan on sentdocs (cost=0.00..10.69 rows=152\nwidth=0) (actual time=0.037..0.037 rows=41 loops=1)\"\n\" \nIndex Cond: (docid = ANY ('{696374,696377}'::integer[]))\"\n\"Total\nruntime: 6372.468 ms\"\n \nThe\nconfigurations were identical initially, I've changed those on the slow machine\nbut to no avail.\n \nthere\nis an index on sentences on the docid in both systems.\n \nI'm\nat quite a loss as to how/why this is 
occurring and what to do about it.\n \nI\ntried disabling seqscan on the slow machine but that also made no difference.\n \nAny\nhelp/ideas much appreciated.\n\n\n\n \n\n\nHave you done a vacuum analyze since loading the data on the\nslow db? Are statistics settings the same between the two hosts?\n It's interesting that one version coerces the docid values to bigint and\nthe other coerces to integer, but that shouldn't impact the sentenceid\ncomparison, which have to be a consistent type since it is comparing sentenceid\nto sentenceid. Any reason why this isn't collapsed down to 'select distinct\nsentenceid from sentences where docid = any(array[696374,696377])' - is\nthere a benefit to the more complex structure? For that matter, why not\n'where docid in (696374,696377)'\n\n\n \n\n\nI didn't see anything in the docs about distinct or\nany(array) that would indicate that that form should be preferred over IN ()\n \n______\n \nI ran vacuum analyze, and I even dropped and recreated the\nindex.\n \nThe differences between postgresql.conf are:\nFAST:\nshared_buffers = 2GB\n \nSLOW:\nshared_buffers = 4GB\n \nThe reason for using the any=array is that the array of\ndocids is passed in as a parameter. This query is a subquery of a larger\nquery. I am trying to solve the problem for the smaller query. The\ndifference in the explain in the big query is essentially the seq scan on\nsentences. This query is a sub query that is performed - with variations\n- 3 times in the larger query. The the fast instance the larger query\ntakes 950 milliseconds, on the slow instance, over 30 seconds.\n \nHowever, in the end, it was user brain damage.\n \nIt does use the doc id index for the subquery, but for some\nreason, the primary key on sentences - the sentenceid - was not set. So\nin fact, there is no index.\n \nMachines vindicated once again.",
"msg_date": "Wed, 6 Jul 2011 08:50:13 -0400",
"msg_from": "\"Matthias Howell\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query in 9.0.2 not using index in 9.0.0 works fine"
},
{
"msg_contents": "On Wed, Jul 6, 2011 at 5:50 AM, Matthias Howell\n<[email protected]>wrote:\n\n>\n>\n> However, in the end, it was user brain damage.****\n>\n> ** **\n>\n> It does use the doc id index for the subquery, but for some reason, the\n> primary key on sentences - the sentenceid - was not set. So in fact, there\n> is no index.****\n>\n> ** **\n>\n> Machines vindicated once again.****\n>\n\nFor the record, if you follow the instructions for submitting slow query\nquestions, we'd likely have spotted it very quickly if you hadn't spotted it\nyourself while doing the cut and paste. The instructions ask for table\ndefinitions, so you'd likely have noticed the missing index when you copied\nthose into your email. The link (\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions ) is right there on the\nmailing list page at postgresql.org. It's always a toss-up whether to\nattempt to answer a question like yours or just respond with a link to that\npage when the relevant info is missing ;-)\n\n--sam\n\nOn Wed, Jul 6, 2011 at 5:50 AM, Matthias Howell <[email protected]> wrote:\n\n\n \nHowever, in the end, it was user brain damage.\n \nIt does use the doc id index for the subquery, but for some\nreason, the primary key on sentences - the sentenceid - was not set. So\nin fact, there is no index.\n \nMachines vindicated once again.\n\n\n\n\nFor the record, if you follow the instructions for submitting slow query questions, we'd likely have spotted it very quickly if you hadn't spotted it yourself while doing the cut and paste. The instructions ask for table definitions, so you'd likely have noticed the missing index when you copied those into your email. The link (http://wiki.postgresql.org/wiki/SlowQueryQuestions ) is right there on the mailing list page at postgresql.org. It's always a toss-up whether to attempt to answer a question like yours or just respond with a link to that page when the relevant info is missing ;-)\n--sam",
"msg_date": "Wed, 6 Jul 2011 13:01:39 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query in 9.0.2 not using index in 9.0.0 works fine"
}
] |
[
{
"msg_contents": "I have a query that uses ORDER BY and LIMIT to get a set of image data \nrows that match a given tag. When both ORDER BY and LIMIT are included \nfor some reason the planner chooses a very slow query plan. Dropping \none or the other results in a much faster query going from 4+ seconds -> \n30 ms. Database schema, EXPLAIN ANALYZE and other information can be \nfound at http://pgsql.privatepaste.com/31113c27bf Is there a way to \nconvince the planner to use the faster plan when doing both ORDER BY and \nLIMIT without using SET options or will I need to disable the slow plan \noptions to force the planner to use the fast plan?\n\nI found some stuff in the mailing list archives that looks related but I \ndidn't see any fixes. Apparently the planner hopes the merge join will \nfind the LIMIT # of rows fairly quickly but instead it winds up scanning \nalmost the entire table.\n\nThanks,\nJonathan\n",
"msg_date": "Tue, 05 Jul 2011 20:18:10 -0400",
"msg_from": "Jonathan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query when using ORDER BY *and* LIMIT"
},
{
"msg_contents": "Does anyone have any suggestions for my problem? (I have to wonder if \nI'm somehow just not getting peoples attention or what. This is my \nsecond question this week on a public mailing list that has gotten \nexactly 0 replies)\n\nJonathan\n\nOn 7/5/2011 8:18 PM, Jonathan wrote:\n> I have a query that uses ORDER BY and LIMIT to get a set of image data\n> rows that match a given tag. When both ORDER BY and LIMIT are included\n> for some reason the planner chooses a very slow query plan. Dropping one\n> or the other results in a much faster query going from 4+ seconds -> 30\n> ms. Database schema, EXPLAIN ANALYZE and other information can be found\n> at http://pgsql.privatepaste.com/31113c27bf Is there a way to convince\n> the planner to use the faster plan when doing both ORDER BY and LIMIT\n> without using SET options or will I need to disable the slow plan\n> options to force the planner to use the fast plan?\n>\n> I found some stuff in the mailing list archives that looks related but I\n> didn't see any fixes. Apparently the planner hopes the merge join will\n> find the LIMIT # of rows fairly quickly but instead it winds up scanning\n> almost the entire table.\n",
"msg_date": "Fri, 08 Jul 2011 18:23:41 -0400",
"msg_from": "Jonathan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query when using ORDER BY *and* LIMIT"
},
{
"msg_contents": "Hello\n\nIs impossible to help you without more detailed info about your problems,\n\nwe have to see a execution plan, we have to see slow query\n\nRegards\n\nPavel Stehule\n\n2011/7/9 Jonathan <[email protected]>:\n> Does anyone have any suggestions for my problem? (I have to wonder if I'm\n> somehow just not getting peoples attention or what. This is my second\n> question this week on a public mailing list that has gotten exactly 0\n> replies)\n>\n> Jonathan\n>\n> On 7/5/2011 8:18 PM, Jonathan wrote:\n>>\n>> I have a query that uses ORDER BY and LIMIT to get a set of image data\n>> rows that match a given tag. When both ORDER BY and LIMIT are included\n>> for some reason the planner chooses a very slow query plan. Dropping one\n>> or the other results in a much faster query going from 4+ seconds -> 30\n>> ms. Database schema, EXPLAIN ANALYZE and other information can be found\n>> at http://pgsql.privatepaste.com/31113c27bf Is there a way to convince\n>> the planner to use the faster plan when doing both ORDER BY and LIMIT\n>> without using SET options or will I need to disable the slow plan\n>> options to force the planner to use the fast plan?\n>>\n>> I found some stuff in the mailing list archives that looks related but I\n>> didn't see any fixes. Apparently the planner hopes the merge join will\n>> find the LIMIT # of rows fairly quickly but instead it winds up scanning\n>> almost the entire table.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Sat, 9 Jul 2011 05:39:14 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query when using ORDER BY *and* LIMIT"
},
{
"msg_contents": "Hello\n\nsorry, I didn't see a link on privatepastebin\n\nThere is problem in LIMIT, because query without LIMIT returns only a\nfew lines more than query with LIMIT. You can try to materialize query\nwithout LIMIT and then to use LIMIT like\n\nSELECT * FROM (your query without limit OFFSET 0) x LIMIT 30;\n\nRegards\n\nPavel Stehule\n\n2011/7/9 Pavel Stehule <[email protected]>:\n> Hello\n>\n> Is impossible to help you without more detailed info about your problems,\n>\n> we have to see a execution plan, we have to see slow query\n>\n> Regards\n>\n> Pavel Stehule\n>\n> 2011/7/9 Jonathan <[email protected]>:\n>> Does anyone have any suggestions for my problem? (I have to wonder if I'm\n>> somehow just not getting peoples attention or what. This is my second\n>> question this week on a public mailing list that has gotten exactly 0\n>> replies)\n>>\n>> Jonathan\n>>\n>> On 7/5/2011 8:18 PM, Jonathan wrote:\n>>>\n>>> I have a query that uses ORDER BY and LIMIT to get a set of image data\n>>> rows that match a given tag. When both ORDER BY and LIMIT are included\n>>> for some reason the planner chooses a very slow query plan. Dropping one\n>>> or the other results in a much faster query going from 4+ seconds -> 30\n>>> ms. Database schema, EXPLAIN ANALYZE and other information can be found\n>>> at http://pgsql.privatepaste.com/31113c27bf Is there a way to convince\n>>> the planner to use the faster plan when doing both ORDER BY and LIMIT\n>>> without using SET options or will I need to disable the slow plan\n>>> options to force the planner to use the fast plan?\n>>>\n>>> I found some stuff in the mailing list archives that looks related but I\n>>> didn't see any fixes. Apparently the planner hopes the merge join will\n>>> find the LIMIT # of rows fairly quickly but instead it winds up scanning\n>>> almost the entire table.\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n",
"msg_date": "Sat, 9 Jul 2011 05:49:34 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query when using ORDER BY *and* LIMIT"
},
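A sketch of the fence Pavel describes above, with hypothetical table and column names standing in for the image/tag schema in the paste. The OFFSET 0 keeps the subquery from being flattened into the outer query, so the planner builds the plan it would have chosen without a LIMIT, and the LIMIT is then applied to that subquery's output:

SELECT *
FROM (
    SELECT i.*
    FROM images i
    JOIN image_tags it ON it.image_id = i.image_id
    WHERE it.tag = 'sunset'
    ORDER BY i.created_at DESC
    OFFSET 0
) x
LIMIT 30;

If the output order matters, it is safest to repeat the ORDER BY at the outer level as well, since the inner ordering is not formally guaranteed to survive the outer query level.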
{
"msg_contents": "I'm running into the same problem. I removed the limit and it was fine. I\nguess I could have removed the order by as well but it doesn't help if you\nreally need both.\n\nHave you found any more information on this?\n\nThanks!\n\nDave (Armstrong)\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-when-using-ORDER-BY-and-LIMIT-tp4555260p4900348.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 13 Oct 2011 12:34:09 -0700 (PDT)",
"msg_from": "davidsarmstrong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query when using ORDER BY *and* LIMIT"
},
{
"msg_contents": "Dave,\n\nSince I control the application that was performing the query and I've\nseparated my data into daily partitioned tables (which enforced my order by\nclause on a macro-level), I took Stephen's advice and implemented the nested\nloop over each daily table from within the application versus having\nPostgres figure it out for me. Sorry I don't have a better answer for you.\n\nMike\n\nOn Thu, Oct 13, 2011 at 3:34 PM, davidsarmstrong <[email protected]>wrote:\n\n> I'm running into the same problem. I removed the limit and it was fine. I\n> guess I could have removed the order by as well but it doesn't help if you\n> really need both.\n>\n> Have you found any more information on this?\n>\n> Thanks!\n>\n> Dave (Armstrong)\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Slow-query-when-using-ORDER-BY-and-LIMIT-tp4555260p4900348.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nDave,Since I control the application that was performing the query and I've separated my data into daily partitioned tables (which enforced my order by clause on a macro-level), I took Stephen's advice and implemented the nested loop over each daily table from within the application versus having Postgres figure it out for me. Sorry I don't have a better answer for you.\nMikeOn Thu, Oct 13, 2011 at 3:34 PM, davidsarmstrong <[email protected]> wrote:\nI'm running into the same problem. I removed the limit and it was fine. I\nguess I could have removed the order by as well but it doesn't help if you\nreally need both.\n\nHave you found any more information on this?\n\nThanks!\n\nDave (Armstrong)\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-when-using-ORDER-BY-and-LIMIT-tp4555260p4900348.html\n\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 15 Oct 2011 17:09:24 -0400",
"msg_from": "Michael Viscuso <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query when using ORDER BY *and* LIMIT"
}
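A sketch of the loop Mike describes, assuming hypothetical daily tables named events_YYYYMMDD that all match a template table "events"; the same loop could just as well live in application code. It walks the days newest-first and stops once it has returned enough rows, which is what the original ORDER BY plus LIMIT was effectively asking for:

-- assumes a partition exists for every day that gets visited
CREATE OR REPLACE FUNCTION recent_events(p_tag text, p_limit integer)
RETURNS SETOF events AS $$
DECLARE
    d         date    := current_date;
    remaining integer := p_limit;
    r         events%ROWTYPE;
BEGIN
    -- stop after a year back (arbitrary bound for the sketch)
    WHILE remaining > 0 AND d > current_date - 365 LOOP
        FOR r IN EXECUTE
            'SELECT * FROM events_' || to_char(d, 'YYYYMMDD')
            || ' WHERE tag = $1 ORDER BY created_at DESC LIMIT $2'
            USING p_tag, remaining
        LOOP
            RETURN NEXT r;
            remaining := remaining - 1;
        END LOOP;
        d := d - 1;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;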
] |
[
{
"msg_contents": "I have a query which seems to be taking an extraordinarily long time \n(many minutes, at least) when seemingly equivalent queries have \ndifferent plans and execute in seconds. naturally, I'd like to know why.\n\nVersion is Postgresql 8.4.8. The table, \"t\", is\n\n Column | Type | Modifiers\n--------+---------+-----------\n y | integer | not null\n x | integer | not null\n k | integer | not null\n j | integer | not null\n z | integer | not null\nIndexes:\n \"t_pkey\" PRIMARY KEY, btree (j, k, x, y, z)\n\nThe table population, in pseudocode, is this:\n for x in 0..9\n for y in 0..9999\n for z in 0..29\n INSERT INTO t VALUES(y,x,0,0,z)\n\nSo the table has 300000 entries, with j and k always 0.\n\nThe query is:\n\n SELECT *\n FROM (\n SELECT * FROM t GROUP BY j,k,x,z,y\n ) AS f\n NATURAL JOIN t;\n\nThe plan:\n\n Merge Join (cost=44508.90..66677.96 rows=1 width=20)\n Merge Cond: ((public.t.j = public.t.j) AND (public.t.k = public.t.k)\n AND (public.t.x = public.t.x))\n Join Filter: ((public.t.y = public.t.y) AND (public.t.z = public.t.z))\n -> Group (cost=44508.90..49008.90 rows=30000 width=20)\n -> Sort (cost=44508.90..45258.90 rows=300000 width=20)\n Sort Key: public.t.j, public.t.k, public.t.x, public.t.z,\n public.t.y\n -> Seq Scan on t (cost=0.00..4911.00 rows=300000 width=20)\n -> Index Scan using t_pkey on t (cost=0.00..14877.18 rows=300000\n width=20)\n\nThis query runs at least 20 minutes, with postmaster CPU utilization at \n99%, without completing. System is a 3.2GHz Zeon, 3GB memory, and not \nmuch else running.\n\nBy contrast, placing an intermediate result in a table \"u\" provides a \nresult in about 3 seconds:\n\n CREATE TEMPORARY TABLE u AS SELECT * FROM t GROUP BY j,k,x,z,y;\n SELECT * FROM u NATURAL JOIN t;\n\nChanging the order of the GROUP BY clause varies the plan, sometimes \nyielding shorter execution times. For example, this ordering executes in \nabout 1.5 seconds:\n\n SELECT *\n FROM (\n SELECT * FROM t GROUP BY j,k,x,y,z\n ) AS f\n NATURAL JOIN t;\n\nWith 120 permutations, I didn't try them all.\n\nI should note that the plans tend to have similar costs, so the query \nplanner presumably does not know that some permutations have \nsignificantly greater execution times.\n\nClem Dickey\n",
"msg_date": "Tue, 05 Jul 2011 19:26:18 -0700",
"msg_from": "Clem Dickey <[email protected]>",
"msg_from_op": true,
"msg_subject": "GROUP BY with reasonable timings in PLAN but unreasonable execution\n\ttime"
},
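The population pseudocode above maps onto a single INSERT ... SELECT over generate_series. Note that to reach the 300,000 rows shown in the plans (and the 1000 distinct y values reported later in the thread), y has to run 0..999, so the 0..9999 in the pseudocode looks like a typo. A reconstruction of the test case:

CREATE TABLE t (
    y integer NOT NULL,
    x integer NOT NULL,
    k integer NOT NULL,
    j integer NOT NULL,
    z integer NOT NULL,
    PRIMARY KEY (j, k, x, y, z)
);

INSERT INTO t (y, x, k, j, z)
SELECT y, x, 0, 0, z
FROM generate_series(0, 9)   AS x,
     generate_series(0, 999) AS y,
     generate_series(0, 29)  AS z;

ANALYZE t;

EXPLAIN ANALYZE
SELECT *
FROM (SELECT * FROM t GROUP BY j, k, x, z, y) AS f
NATURAL JOIN t;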
{
"msg_contents": "On 07/05/2011 07:26 PM, Clem Dickey wrote:\n\nUpdates after belatedly reading the \"slow queries\" guidelines:\n\nVersion: PostgreSQL 8.4.8 on x86_64-redhat-linux-gnu, compiled by GCC \ngcc (GCC) 4.4.5 20101112 (Red Hat 4.4.5-2), 64-bit\n\nThe query has always been slow; the table for this test case is never \nupdated. I don't run VACUUM but do run ANALYZE.\n\nOriginally all database config parameters were the default. Since \nyesterday I have changed\n shared_buffers = 224MB\n effective_cache_size = 1024MB\nbut seen no change in behavior.\n\n> Column | Type | Modifiers\n> --------+---------+-----------\n> y | integer | not null\n> x | integer | not null\n> k | integer | not null\n> j | integer | not null\n> z | integer | not null\n> Indexes:\n> \"t_pkey\" PRIMARY KEY, btree (j, k, x, y, z)\n>\n> The table population, in pseudocode, is this:\n> for x in 0..9\n> for y in 0..9999\n> for z in 0..29\n> INSERT INTO t VALUES(y,x,0,0,z)\n\n> The query is:\n>\n> SELECT *\n> FROM (\n> SELECT * FROM t GROUP BY j,k,x,z,y\n> ) AS f\n> NATURAL JOIN t;\n\nThe EXPLAIN ANALYZE output is http://explain.depesz.com/s/KGk\n\nNotes on the analysis:\n1. I see that the planner estimates that GROUP BY will reduce 300K rows \nto 30K, a bit odd because every row which the planner could examine is \nin a unique group.\n2. The JOIN is expected to produce one row. I'm not sure how the planner \ncame up with that estimate.\n\n> By contrast, placing an intermediate result in a table \"u\" provides a\n> result in about 3 seconds:\n\n=> EXPLAIN ANALYZE CREATE TABLE u AS SELECT * FROM t GROUP BY \nj,k,x,z,y;EXPLAIN ANALYZE SELECT * FROM u NATURAL JOIN t;DROP TABLE u;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------\n Group (cost=44508.90..49008.90 rows=30000 width=20) (actual \ntime=1305.381..2028.385 rows=300000 loops=1)\n -> Sort (cost=44508.90..45258.90 rows=300000 width=20) (actual \ntime=1305.374..1673.843 rows=300000 loops=1)\n Sort Key: j, k, x, z, y\n Sort Method: external merge Disk: 8792kB\n -> Seq Scan on t (cost=0.00..4911.00 rows=300000 width=20) \n(actual time=0.008..62.935 rows=300000 loops=1)\n Total runtime: 2873.590 ms\n(6 rows)\n\n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=46229.86..72644.38 rows=1 width=20) (actual \ntime=1420.527..2383.507 rows=300000 loops=1)\n Merge Cond: ((t.j = u.j) AND (t.k = u.k) AND (t.x = u.x) AND (t.y = \nu.y) AND (t.z = u.z))\n -> Index Scan using t_pkey on t (cost=0.00..14877.18 rows=300000 \nwidth=20) (actual time=0.013..118.244 rows=300000 loops=1)\n -> Materialize (cost=46229.86..50123.52 rows=311493 width=20) \n(actual time=1420.498..1789.864 rows=300000 loops=1)\n -> Sort (cost=46229.86..47008.59 rows=311493 width=20) \n(actual time=1420.493..1692.988 rows=300000 loops=1)\n Sort Key: u.j, u.k, u.x, u.y, u.z\n Sort Method: external merge Disk: 8784kB\n -> Seq Scan on u (cost=0.00..5025.93 rows=311493 \nwidth=20) (actual time=0.018..78.850 rows=300000 loops=1)\n Total runtime: 2424.870 ms\n(9 rows)\n\n(Adding an \"ANALYZE\" on the temporary table improves the JOIN estimated \nfow count from 1 to about 299500, but does not change the plan.)\n\nClem Dickey\n",
"msg_date": "Wed, 06 Jul 2011 17:59:19 -0700",
"msg_from": "Clem Dickey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GROUP BY with reasonable timings in PLAN but unreasonable\n\texecution time"
},
{
"msg_contents": "On 07/06/2011 05:59 PM, Clem Dickey wrote:\n> On 07/05/2011 07:26 PM, Clem Dickey wrote:\n>\n>> Column | Type | Modifiers\n>> --------+---------+-----------\n>> y | integer | not null\n>> x | integer | not null\n>> k | integer | not null\n>> j | integer | not null\n>> z | integer | not null\n>> Indexes:\n>> \"t_pkey\" PRIMARY KEY, btree (j, k, x, y, z)\n>>\n>> The table population, in pseudocode, is this:\n>> for x in 0..9\n>> for y in 0..9999\n>> for z in 0..29\n>> INSERT INTO t VALUES(y,x,0,0,z)\n>\n>> The query is:\n>>\n>> SELECT *\n>> FROM (\n>> SELECT * FROM t GROUP BY j,k,x,z,y\n>> ) AS f\n>> NATURAL JOIN t;\n>\n> The EXPLAIN ANALYZE output is http://explain.depesz.com/s/KGk\n>\n> Notes on the analysis:\n> 1. I see that the planner estimates that GROUP BY will reduce 300K rows\n> to 30K, a bit odd because every row which the planner could examine is\n> in a unique group.\n\nGROUP BY assumes an average 10-element grouping in cases with more than \none GROUP BY expression. Wrong for this test case, but probably OK in \ngeneral.\n\n> 2. The JOIN is expected to produce one row. I'm not sure how the planner\n> came up with that estimate.\n\nThe winning Join (merge join) had a very poor estimate of its \nperformance. Like a low-ball contract bid. :-)\n\na. The Join cost estimators could have been given more information\n\nThe functions which estimate JOIN selectivity (e.g. the chance that \ntuples will match in an equijoin, for instance) use data produced by \nANALYZE. But the SELECT .. GROUP BY does not propagate ANALYZE data from \nthe columns of its input relation to its output relation. That is too \nbad, because the column value statistics (number of unique values) would \nhave improved selectivity estimates for all three join plans (merge \njoin, nested loop, and hash join).\n\nb. the Merge Join cost estimator did a poor job with the data it was given:\n\nIn function eqjoinsel_inner there are two cases (1) ANALYZE data is \navailable for both sides of the join and (2) ANALYZE data is missing for \none or both sides. Due to the GROUP BY processing described above, \nANALYZE data was available for \"t\" but not for \"SELECT * FROM t GROUP BY \n...\".\n\nThe logic in that case is \"use the column with the most distinct values\" \nto estimate selectivity. The default number of distinct values for a \ncolumn with no data (DEFAULT_NUM_DISTINCT) is 200. In my join the number \nof values was:\n\ncol in GROUP BY in table t\nj 200 1\nk 200 1\nx 200 10\ny 200 1000\nz 200 30\n\nIn 4 of the 5 columns the default value had more distinct values, and \nthe combined selectivity (chance that two arbitrary rows would have a \njoin match) was (1/200)^4 * 1/1000. Very small. The error is, IMO, that \nthe code does not distinguish known numbers from default numbers. A \ncomment in the code acknowledges this:\n\n\"XXX Can we be smarter if we have an MCV list for just one side?\"\n\nBut it concludes\n\n\"It seems that if we assume equal distribution for the other side, we \nend up with the same answer anyway.\"\n\nI don't think that is the case. Preferring a known value, where one \nexists, would provide a better estimate of the actual range of the data. \nIndeed, the var_eq_non_const in the same file (used by the nested loop \njoin estimator) does essentially that.\n\n- Clem Dickey\n",
"msg_date": "Fri, 08 Jul 2011 18:33:15 -0700",
"msg_from": "Clem Dickey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GROUP BY with reasonable timings in PLAN but unreasonable\n\texecution time"
},
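The rows=1 estimate on the merge join follows directly from these numbers: the per-column selectivities are simply multiplied together, and the planner clamps the result to a minimum of one row. A quick check of the arithmetic:

-- (1/200)^4 for j, k, x, z (where the default won) times 1/1000 for y,
-- applied to the 300000 x 300000 row pairs of the join
SELECT (1.0/200)^4 * (1.0/1000) * 300000 * 300000 AS estimated_join_rows;
-- ≈ 0.056, hence the clamped estimate of 1 row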
{
"msg_contents": "On Fri, Jul 8, 2011 at 9:33 PM, Clem Dickey <[email protected]> wrote:\n> a. The Join cost estimators could have been given more information\n>\n> The functions which estimate JOIN selectivity (e.g. the chance that tuples\n> will match in an equijoin, for instance) use data produced by ANALYZE. But\n> the SELECT .. GROUP BY does not propagate ANALYZE data from the columns of\n> its input relation to its output relation. That is too bad, because the\n> column value statistics (number of unique values) would have improved\n> selectivity estimates for all three join plans (merge join, nested loop, and\n> hash join).\n\nYeah, I've had this same thought. In fact, I think that it would\nprobably be an improvement to pass through not just the number of\nunique values but the MCVs and frequencies of the non-GROUP-BY\ncolumns. Of course, for the grouping columns, we ought to let\nn_distinct = -1 pop out. Granted, the GROUP BY might totally change\nthe data distribution, so relying on the input column statistics to be\nmeaningful could be totally wrong, but on average it seems more likely\nto give a useful answer than a blind stab in the dark. I haven't\ngotten around to doing anything about this, but it seems like a good\nidea.\n\n> b. the Merge Join cost estimator did a poor job with the data it was given:\n>\n> In function eqjoinsel_inner there are two cases (1) ANALYZE data is\n> available for both sides of the join and (2) ANALYZE data is missing for one\n> or both sides. Due to the GROUP BY processing described above, ANALYZE data\n> was available for \"t\" but not for \"SELECT * FROM t GROUP BY ...\".\n>\n> The logic in that case is \"use the column with the most distinct values\" to\n> estimate selectivity. The default number of distinct values for a column\n> with no data (DEFAULT_NUM_DISTINCT) is 200. In my join the number of values\n> was:\n>\n> col in GROUP BY in table t\n> j 200 1\n> k 200 1\n> x 200 10\n> y 200 1000\n> z 200 30\n>\n> In 4 of the 5 columns the default value had more distinct values, and the\n> combined selectivity (chance that two arbitrary rows would have a join\n> match) was (1/200)^4 * 1/1000. Very small. The error is, IMO, that the code\n> does not distinguish known numbers from default numbers. A comment in the\n> code acknowledges this:\n>\n> \"XXX Can we be smarter if we have an MCV list for just one side?\"\n>\n> But it concludes\n>\n> \"It seems that if we assume equal distribution for the other side, we end up\n> with the same answer anyway.\"\n>\n> I don't think that is the case. Preferring a known value, where one exists,\n> would provide a better estimate of the actual range of the data. Indeed, the\n> var_eq_non_const in the same file (used by the nested loop join estimator)\n> does essentially that.\n\nI'm not sure I understand what you're getting at here, unless the idea\nis to make get_variable_numdistinct() somehow indicate to the caller\nwhether it had to punt. That might be worth doing.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 3 Aug 2011 09:29:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: GROUP BY with reasonable timings in PLAN but\n\tunreasonable execution time"
},
{
"msg_contents": "On 08/03/2011 06:29 AM, Robert Haas wrote:\n>> b. the Merge Join cost estimator did a poor job with the data it was given:\n>>\n>> In function eqjoinsel_inner there are two cases (1) ANALYZE data is\n>> available for both sides of the join and (2) ANALYZE data is missing for one\n>> or both sides. Due to the GROUP BY processing described above, ANALYZE data\n>> was available for \"t\" but not for \"SELECT * FROM t GROUP BY ...\".\n>>\n>> The logic in that case is \"use the column with the most distinct values\" to\n>> estimate selectivity. The default number of distinct values for a column\n>> with no data (DEFAULT_NUM_DISTINCT) is 200. In my join the number of values\n>> was:\n>>\n>> col in GROUP BY in table t\n>> j 200 1\n>> k 200 1\n>> x 200 10\n>> y 200 1000\n>> z 200 30\n>>\n>> In 4 of the 5 columns the default value had more distinct values, and the\n>> combined selectivity (chance that two arbitrary rows would have a join\n>> match) was (1/200)^4 * 1/1000. Very small. The error is, IMO, that the code\n>> does not distinguish known numbers from default numbers. A comment in the\n>> code acknowledges this:\n\n>\n> I'm not sure I understand what you're getting at here, unless the idea\n> is to make get_variable_numdistinct() somehow indicate to the caller\n> whether it had to punt. That might be worth doing.\n\nYes, the first step is to make \"punt\" a separate indicator. The second \nwould be to make good use of that indicator. As it is now, with \"punt\" \nbeing a possible data value, there two types of errors:\n\nFalse negative (code treats DEFAULT_NUM_DISTINCT as ordinary case when \nit is special):\n\nI wanted eqjoinsel_inner() to treat \"punt\" specially: to use the value \nfrom the known side of the JOIN when the other side is unknown. The \ncurrent behavior, although not ideal, is the expected use of a default \nvalue.\n\nFalse positive (code treats DEFAULT_NUM_DISTINCT as special case when it \nis ordinary):\n\neqjoinsel_semi() and estimate_hash_bucketsize() treat \nDEFAULT_NUM_DISTINCT specially. This behavior is less defensible than \nfalse positive, since a valid numeric value is being re-used as a flag.\n\n\nI suggest wrapping the value in a struct (to avoid accidental use) and \nusing macros for read access.\n\n typedef struct {\n double value; // negative means \"unknown\"\n } num_distinct_t;\n\n #define IS_NUM_DISTINCT_DEFINED(nd) ((nd).value >= 0)\n #define NUM_DISTINCT_VALUE(nd) ((nd).value)\n\n- Clem Dickey\n\nP.S. Congratulations on displacing MySQL in Mac OS X Lion Server.\n",
"msg_date": "Wed, 03 Aug 2011 18:53:19 -0700",
"msg_from": "Clem Dickey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: GROUP BY with reasonable timings in PLAN but unreasonable\n\texecution time"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a delete query taking 7.2G of ram (and counting) but I do not\nunderstant why so much memory is necessary. The server has 12G, and\nI'm afraid it'll go into swap. Using postgres 8.3.14.\n\nI'm purging some old data from table t1, which should cascade-delete\nreferencing rows in t2. Here's an anonymized rundown :\n\n\n# \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers\n-----------+-----------------------------+---------------------------------\n t1id | integer | not null default\nnextval('t1_t1id_seq'::regclass)\n(...snip...)\nIndexes:\n \"message_pkey\" PRIMARY KEY, btree (id)\n(...snip...)\n\n# \\d t2\n Table \"public.t2\"\n Column | Type | Modifiers\n-----------------+-----------------------------+-----------------------------\n t2id | integer | not null default\nnextval('t2_t2id_seq'::regclass)\n t1id | integer | not null\n foo | integer | not null\n bar | timestamp without time zone | not null default now()\nIndexes:\n \"t2_pkey\" PRIMARY KEY, btree (t2id)\n \"t2_bar_key\" btree (bar)\n \"t2_t1id_key\" btree (t1id)\nForeign-key constraints:\n \"t2_t1id_fkey\" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE\nRESTRICT ON DELETE CASCADE\n\n# explain delete from t1 where t1id in (select t1id from t2 where\nfoo=0 and bar < '20101101');\n QUERY PLAN\n-----------------------------------------------------------------------------\n Nested Loop (cost=5088742.39..6705282.32 rows=30849 width=6)\n -> HashAggregate (cost=5088742.39..5089050.88 rows=30849 width=4)\n -> Index Scan using t2_bar_key on t2 (cost=0.00..5035501.50\nrows=21296354 width=4)\n Index Cond: (bar < '2010-11-01 00:00:00'::timestamp\nwithout time zone)\n Filter: (foo = 0)\n -> Index Scan using t1_pkey on t1 (cost=0.00..52.38 rows=1 width=10)\n Index Cond: (t1.t1id = t2.t1id)\n(7 rows)\n\n\nNote that the estimate of 30849 rows is way off : there should be\naround 55M rows deleted from t1, and 2-3 times as much from t2.\n\nWhen looking at the plan, I can easily imagine that data gets\naccumulated below the nestedloop (thus using all that memory), but why\nisn't each entry freed once one row has been deleted from t1 ? That\nentry isn't going to be found again in t1 or in t2, so why keep it\naround ?\n\nIs there a better way to write this query ? Would postgres 8.4/9.0\nhandle things better ?\n\n\n\nThanks in advance.\n\n\n-- \nVincent de Phily\n",
"msg_date": "Thu, 7 Jul 2011 15:34:19 +0200",
"msg_from": "vincent dephily <[email protected]>",
"msg_from_op": true,
"msg_subject": "DELETE taking too much memory"
},
{
"msg_contents": "How up to date are the statistics for the tables in question?\n\nWhat value do you have for effective cache size?\n\nMy guess would be that planner thinks the method it is using is right\neither for its current row number estimations, or the amount of memory\nit thinks it has to play with. \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of vincent\ndephily\nSent: 07 July 2011 14:34\nTo: [email protected]; [email protected]\nSubject: [PERFORM] DELETE taking too much memory\n\nHi,\n\nI have a delete query taking 7.2G of ram (and counting) but I do not\nunderstant why so much memory is necessary. The server has 12G, and\nI'm afraid it'll go into swap. Using postgres 8.3.14.\n\nI'm purging some old data from table t1, which should cascade-delete\nreferencing rows in t2. Here's an anonymized rundown :\n\n\n# \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers\n-----------+-----------------------------+------------------------------\n---\n t1id | integer | not null default\nnextval('t1_t1id_seq'::regclass)\n(...snip...)\nIndexes:\n \"message_pkey\" PRIMARY KEY, btree (id)\n(...snip...)\n\n# \\d t2\n Table \"public.t2\"\n Column | Type | Modifiers\n-----------------+-----------------------------+------------------------\n-----\n t2id | integer | not null default\nnextval('t2_t2id_seq'::regclass)\n t1id | integer | not null\n foo | integer | not null\n bar | timestamp without time zone | not null default now()\nIndexes:\n \"t2_pkey\" PRIMARY KEY, btree (t2id)\n \"t2_bar_key\" btree (bar)\n \"t2_t1id_key\" btree (t1id)\nForeign-key constraints:\n \"t2_t1id_fkey\" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE\nRESTRICT ON DELETE CASCADE\n\n# explain delete from t1 where t1id in (select t1id from t2 where\nfoo=0 and bar < '20101101');\n QUERY PLAN\n------------------------------------------------------------------------\n-----\n Nested Loop (cost=5088742.39..6705282.32 rows=30849 width=6)\n -> HashAggregate (cost=5088742.39..5089050.88 rows=30849 width=4)\n -> Index Scan using t2_bar_key on t2 (cost=0.00..5035501.50\nrows=21296354 width=4)\n Index Cond: (bar < '2010-11-01 00:00:00'::timestamp\nwithout time zone)\n Filter: (foo = 0)\n -> Index Scan using t1_pkey on t1 (cost=0.00..52.38 rows=1\nwidth=10)\n Index Cond: (t1.t1id = t2.t1id)\n(7 rows)\n\n\nNote that the estimate of 30849 rows is way off : there should be\naround 55M rows deleted from t1, and 2-3 times as much from t2.\n\nWhen looking at the plan, I can easily imagine that data gets\naccumulated below the nestedloop (thus using all that memory), but why\nisn't each entry freed once one row has been deleted from t1 ? That\nentry isn't going to be found again in t1 or in t2, so why keep it\naround ?\n\nIs there a better way to write this query ? Would postgres 8.4/9.0\nhandle things better ?\n\n\n\nThanks in advance.\n\n\n-- \nVincent de Phily\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n___________________________________________________ \n \nThis email is intended for the named recipient. The information contained \nin it is confidential. You should not copy it for any purposes, nor \ndisclose its contents to any other party. If you received this email \nin error, please notify the sender immediately via email, and delete it from\nyour computer. 
\n \nAny views or opinions presented are solely those of the author and do not \nnecessarily represent those of the company. \n \nPCI Compliancy: Please note, we do not send or wish to receive banking, credit\nor debit card information by email or any other form of communication. \n\nPlease try our new on-line ordering system at http://www.cromwell.co.uk/ice\n\nCromwell Tools Limited, PO Box 14, 65 Chartwell Drive\nWigston, Leicester LE18 1AT. Tel 0116 2888000\nRegistered in England and Wales, Reg No 00986161\nVAT GB 115 5713 87 900\n__________________________________________________\n\n",
"msg_date": "Thu, 7 Jul 2011 19:54:08 +0100",
"msg_from": "\"French, Martin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE taking too much memory"
},
{
"msg_contents": "On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:\n> Hi,\n> \n> I have a delete query taking 7.2G of ram (and counting) but I do not\n> understant why so much memory is necessary. The server has 12G, and\n> I'm afraid it'll go into swap. Using postgres 8.3.14.\n> \n> I'm purging some old data from table t1, which should cascade-delete\n> referencing rows in t2. Here's an anonymized rundown :\n> \n> \n> # \\d t1\n> Table \"public.t1\"\n> Column | Type | Modifiers\n> -----------+-----------------------------+---------------------------------\n> t1id | integer | not null default\n> nextval('t1_t1id_seq'::regclass)\n> (...snip...)\n> Indexes:\n> \"message_pkey\" PRIMARY KEY, btree (id)\n> (...snip...)\n> \n> # \\d t2\n> Table \"public.t2\"\n> Column | Type | Modifiers\n> -----------------+-----------------------------+-----------------------------\n> t2id | integer | not null default\n> nextval('t2_t2id_seq'::regclass)\n> t1id | integer | not null\n> foo | integer | not null\n> bar | timestamp without time zone | not null default now()\n> Indexes:\n> \"t2_pkey\" PRIMARY KEY, btree (t2id)\n> \"t2_bar_key\" btree (bar)\n> \"t2_t1id_key\" btree (t1id)\n> Foreign-key constraints:\n> \"t2_t1id_fkey\" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE\n> RESTRICT ON DELETE CASCADE\n> \n> # explain delete from t1 where t1id in (select t1id from t2 where\n> foo=0 and bar < '20101101');\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Nested Loop (cost=5088742.39..6705282.32 rows=30849 width=6)\n> -> HashAggregate (cost=5088742.39..5089050.88 rows=30849 width=4)\n> -> Index Scan using t2_bar_key on t2 (cost=0.00..5035501.50\n> rows=21296354 width=4)\n> Index Cond: (bar < '2010-11-01 00:00:00'::timestamp\n> without time zone)\n> Filter: (foo = 0)\n> -> Index Scan using t1_pkey on t1 (cost=0.00..52.38 rows=1 width=10)\n> Index Cond: (t1.t1id = t2.t1id)\n> (7 rows)\n> \n> \n> Note that the estimate of 30849 rows is way off : there should be\n> around 55M rows deleted from t1, and 2-3 times as much from t2.\n> \n> When looking at the plan, I can easily imagine that data gets\n> accumulated below the nestedloop (thus using all that memory), but why\n> isn't each entry freed once one row has been deleted from t1 ? That\n> entry isn't going to be found again in t1 or in t2, so why keep it\n> around ?\n> \n> Is there a better way to write this query ? Would postgres 8.4/9.0\n> handle things better ?\n> \n\nDo you have any DELETE triggers in t1 and/or t2?\n\n\n-- \nGuillaume\n http://blog.guillaume.lelarge.info\n http://www.dalibo.com\n\n",
"msg_date": "Thu, 07 Jul 2011 22:26:45 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE taking too much memory"
},
{
"msg_contents": "> On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:\n>> Hi,\n>>\n>> I have a delete query taking 7.2G of ram (and counting) but I do not\n>> understant why so much memory is necessary. The server has 12G, and\n>> I'm afraid it'll go into swap. Using postgres 8.3.14.\n>>\n>> I'm purging some old data from table t1, which should cascade-delete\n>> referencing rows in t2. Here's an anonymized rundown :\n>>\n>> # explain delete from t1 where t1id in (select t1id from t2 where\n>> foo=0 and bar < '20101101');\n\nIt looks as though you're hitting one of the known issues with\nPostgreSQL and FKs. The FK constraint checks and CASCADE actions are\nimplemented using AFTER triggers, which are queued up during the query\nto be executed at the end. For very large queries, this queue of\npending triggers can become very large, using up all available memory.\n\nThere's a TODO item to try to fix this for a future version of\nPostgreSQL (maybe I'll have another go at it for 9.2), but at the\nmoment all versions of PostgreSQL suffer from this problem.\n\nThe simplest work-around for you might be to break your deletes up\ninto smaller chunks, say 100k or 1M rows at a time, eg:\n\ndelete from t1 where t1id in (select t1id from t2 where foo=0 and bar\n< '20101101' limit 100000);\n\nRegards,\nDean\n",
"msg_date": "Fri, 8 Jul 2011 10:05:47 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE taking too much memory"
},
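A sketch of Dean's chunked work-around wrapped in a loop so it can run to completion (t1/t2/foo/bar are the anonymized names from the original post; the chunk size is a guess to tune). Each DELETE statement then only queues cascade triggers for at most one chunk of parent rows:

CREATE OR REPLACE FUNCTION purge_t1_in_chunks(cutoff timestamp, chunk integer)
RETURNS bigint AS $$
DECLARE
    n     bigint;
    total bigint := 0;
BEGIN
    LOOP
        DELETE FROM t1
        WHERE t1id IN (SELECT t1id FROM t2
                       WHERE foo = 0 AND bar < cutoff
                       LIMIT chunk);
        GET DIAGNOSTICS n = ROW_COUNT;
        EXIT WHEN n = 0;
        total := total + n;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

-- SELECT purge_t1_in_chunks(timestamp '2010-11-01', 100000);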
{
"msg_contents": "On Thursday 07 July 2011 22:26:45 Guillaume Lelarge wrote:\n> On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:\n> > Hi,\n> > \n> > I have a delete query taking 7.2G of ram (and counting) but I do not\n> > understant why so much memory is necessary. The server has 12G, and\n> > I'm afraid it'll go into swap. Using postgres 8.3.14.\n> > \n> > I'm purging some old data from table t1, which should cascade-delete\n> > referencing rows in t2. Here's an anonymized rundown :\n> > \n> > \n> > # \\d t1\n> > \n> > Table\n> > \"public.t1\"\n> > \n> > Column | Type | Modifiers\n> > \n> > -----------+-----------------------------+------------------------------\n> > ---\n> > \n> > t1id | integer | not null default\n> > \n> > nextval('t1_t1id_seq'::regclass)\n> > (...snip...)\n> > \n> > Indexes:\n> > \"message_pkey\" PRIMARY KEY, btree (id)\n> > \n> > (...snip...)\n> > \n> > # \\d t2\n> > \n> > Table\n> > \"public.t\n> > 2\"\n> > \n> > Column | Type | Modifiers\n> > \n> > -----------------+-----------------------------+------------------------\n> > -----\n> > \n> > t2id | integer | not null default\n> > \n> > nextval('t2_t2id_seq'::regclass)\n> > \n> > t1id | integer | not null\n> > foo | integer | not null\n> > bar | timestamp without time zone | not null default now()\n> > \n> > Indexes:\n> > \"t2_pkey\" PRIMARY KEY, btree (t2id)\n> > \"t2_bar_key\" btree (bar)\n> > \"t2_t1id_key\" btree (t1id)\n> > \n> > Foreign-key constraints:\n> > \"t2_t1id_fkey\" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE\n> > \n> > RESTRICT ON DELETE CASCADE\n> > \n> > # explain delete from t1 where t1id in (select t1id from t2 where\n> > foo=0 and bar < '20101101');\n> > \n> > QUERY PLAN\n> > \n> > ------------------------------------------------------------------------\n> > -----\n> > \n> > Nested Loop (cost=5088742.39..6705282.32 rows=30849 width=6)\n> > \n> > -> HashAggregate (cost=5088742.39..5089050.88 rows=30849\n> > width=4)\n> > \n> > -> Index Scan using t2_bar_key on t2 \n> > (cost=0.00..5035501.50\n> > \n> > rows=21296354 width=4)\n> > \n> > Index Cond: (bar < '2010-11-01\n> > 00:00:00'::timestamp\n> > \n> > without time zone)\n> > \n> > Filter: (foo = 0)\n> > \n> > -> Index Scan using t1_pkey on t1 (cost=0.00..52.38 rows=1\n> > width=10)\n> > \n> > Index Cond: (t1.t1id = t2.t1id)\n> > \n> > (7 rows)\n> > \n> > \n> > Note that the estimate of 30849 rows is way off : there should be\n> > around 55M rows deleted from t1, and 2-3 times as much from t2.\n> > \n> > When looking at the plan, I can easily imagine that data gets\n> > accumulated below the nestedloop (thus using all that memory), but why\n> > isn't each entry freed once one row has been deleted from t1 ? That\n> > entry isn't going to be found again in t1 or in t2, so why keep it\n> > around ?\n> > \n> > Is there a better way to write this query ? Would postgres 8.4/9.0\n> > handle things better ?\n> \n> Do you have any DELETE triggers in t1 and/or t2?\n\nNo, there are triggers on insert/update to t1 which both insert into t2, but \nno delete trigger. Deletions do cascade from t1 to t2 because of the foreign \nkey.\n-- \nVincent de Phily\nMobile Devices\n+33 (0) 142 119 325\n+353 (0) 85 710 6320 \n\nWarning\nThis message (and any associated files) is intended only for the use of its\nintended recipient and may contain information that is confidential, subject\nto copyright or constitutes a trade secret. 
If you are not the intended\nrecipient you are hereby notified that any dissemination, copying or\ndistribution of this message, or files associated with this message, is\nstrictly prohibited. If you have received this message in error, please\nnotify us immediately by replying to the message and deleting it from your\ncomputer. Any views or opinions presented are solely those of the author\[email protected] and do not necessarily represent those of \nthe\ncompany. Although the company has taken reasonable precautions to ensure no\nviruses are present in this email, the company cannot accept responsibility\nfor any loss or damage arising from the use of this email or attachments.\n",
"msg_date": "Fri, 08 Jul 2011 11:13:13 +0200",
"msg_from": "Vincent de Phily <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE taking too much memory"
},
{
"msg_contents": "On Thursday 07 July 2011 19:54:08 French, Martin wrote:\n> How up to date are the statistics for the tables in question?\n> \n> What value do you have for effective cache size?\n> \n> My guess would be that planner thinks the method it is using is right\n> either for its current row number estimations, or the amount of memory\n> it thinks it has to play with.\n\nNot very up to date I'm afraid (as shown by the low estimate of deleted rows). \nTable t2 has been insert-only since its re-creation (that's another story), \nwhile t1 is your classic insert-many, update-recent.\n\nWe haven't tweaked effective cache size yet, it's on the TODO... like many \nother things :/\n-- \nVincent de Phily\n",
"msg_date": "Fri, 08 Jul 2011 11:20:14 +0200",
"msg_from": "Vincent de Phily <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE taking too much memory"
},
{
"msg_contents": "If the query planner thinks it has the default amount of memory (128MB)\nand the stats are out of date, then it will by no means be able to plan\nproper execution.\n\nI would recommend setting the effective_cache_size to an appropriate\nvalue, running \"analyze\" on both tables with an appropriate stats\ntarget, and then explaining the query again to see if it's more\naccurate.\n\nCheers\n\n-----Original Message-----\nFrom: Vincent de Phily [mailto:[email protected]] \nSent: 08 July 2011 10:20\nTo: French, Martin\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] DELETE taking too much memory\n\nOn Thursday 07 July 2011 19:54:08 French, Martin wrote:\n> How up to date are the statistics for the tables in question?\n> \n> What value do you have for effective cache size?\n> \n> My guess would be that planner thinks the method it is using is right\n> either for its current row number estimations, or the amount of memory\n> it thinks it has to play with.\n\nNot very up to date I'm afraid (as shown by the low estimate of deleted\nrows). \nTable t2 has been insert-only since its re-creation (that's another\nstory), \nwhile t1 is your classic insert-many, update-recent.\n\nWe haven't tweaked effective cache size yet, it's on the TODO... like\nmany \nother things :/\n-- \nVincent de Phily\n\n___________________________________________________ \n \nThis email is intended for the named recipient. The information contained \nin it is confidential. You should not copy it for any purposes, nor \ndisclose its contents to any other party. If you received this email \nin error, please notify the sender immediately via email, and delete it from\nyour computer. \n \nAny views or opinions presented are solely those of the author and do not \nnecessarily represent those of the company. \n \nPCI Compliancy: Please note, we do not send or wish to receive banking, credit\nor debit card information by email or any other form of communication. \n\nPlease try our new on-line ordering system at http://www.cromwell.co.uk/ice\n\nCromwell Tools Limited, PO Box 14, 65 Chartwell Drive\nWigston, Leicester LE18 1AT. Tel 0116 2888000\nRegistered in England and Wales, Reg No 00986161\nVAT GB 115 5713 87 900\n__________________________________________________\n\n",
"msg_date": "Fri, 8 Jul 2011 10:31:33 +0100",
"msg_from": "\"French, Martin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE taking too much memory"
},
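A hedged sketch of the steps Martin lists, with placeholder numbers (what counts as "appropriate" depends on the 12GB box and on how skewed foo and bar are):

-- postgresql.conf, then reload (8.3 has no ALTER SYSTEM):
--   effective_cache_size = 8GB   -- rough guess at RAM available for caching

ALTER TABLE t2 ALTER COLUMN bar  SET STATISTICS 200;
ALTER TABLE t2 ALTER COLUMN foo  SET STATISTICS 200;
ALTER TABLE t2 ALTER COLUMN t1id SET STATISTICS 200;
ANALYZE t2;
ANALYZE t1;

EXPLAIN DELETE FROM t1
WHERE t1id IN (SELECT t1id FROM t2 WHERE foo = 0 AND bar < '2010-11-01');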
{
"msg_contents": "On Friday 08 July 2011 10:05:47 Dean Rasheed wrote:\n> > On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:\n> >> Hi,\n> >> \n> >> I have a delete query taking 7.2G of ram (and counting) but I do not\n> >> understant why so much memory is necessary. The server has 12G, and\n> >> I'm afraid it'll go into swap. Using postgres 8.3.14.\n> >> \n> >> I'm purging some old data from table t1, which should cascade-delete\n> >> referencing rows in t2. Here's an anonymized rundown :\n> >> \n> >> # explain delete from t1 where t1id in (select t1id from t2 where\n> >> foo=0 and bar < '20101101');\n> \n> It looks as though you're hitting one of the known issues with\n> PostgreSQL and FKs. The FK constraint checks and CASCADE actions are\n> implemented using AFTER triggers, which are queued up during the query\n> to be executed at the end. For very large queries, this queue of\n> pending triggers can become very large, using up all available memory.\n> \n> There's a TODO item to try to fix this for a future version of\n> PostgreSQL (maybe I'll have another go at it for 9.2), but at the\n> moment all versions of PostgreSQL suffer from this problem.\n\nThat's very interesting, and a more plausible not-optimized-yet item than my \nguesses so far, thanks. Drop me a mail if you work on this, and I'll find some \ntime to test your code.\n\nI'm wondering though : this sounds like the behaviour of a \"deferrable\" fkey, \nwhich AFAICS is not the default and not my case ? I haven't explored that area \nof constraints yet, so there's certainly some detail that I'm missing.\n\n\n> The simplest work-around for you might be to break your deletes up\n> into smaller chunks, say 100k or 1M rows at a time, eg:\n> \n> delete from t1 where t1id in (select t1id from t2 where foo=0 and bar\n> < '20101101' limit 100000);\n\nYes, that's what we ended up doing. We canceled the query after 24h, shortly \nbefore the OOM killer would have, and started doing things in smaller batches.\n\n\n-- \nVincent de Phily\n",
"msg_date": "Fri, 08 Jul 2011 11:44:38 +0200",
"msg_from": "Vincent de Phily <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE taking too much memory"
},
{
"msg_contents": "On Friday 08 July 2011 10:31:33 French, Martin wrote:\n> If the query planner thinks it has the default amount of memory (128MB)\n> and the stats are out of date, then it will by no means be able to plan\n> proper execution.\n> \n> I would recommend setting the effective_cache_size to an appropriate\n> value, running \"analyze\" on both tables with an appropriate stats\n> target, and then explaining the query again to see if it's more\n> accurate.\n\nYes, I'll schedule those two to run during the night and repost an explain, \nfor information. However, we worked around the initial problem by running the \ndelete in smaller batches.\n\nThanks.\n-- \nVincent de Phily\n\n",
"msg_date": "Fri, 08 Jul 2011 11:50:16 +0200",
"msg_from": "Vincent de Phily <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE taking too much memory"
},
{
"msg_contents": "On 8 July 2011 10:44, Vincent de Phily\n<[email protected]> wrote:\n> On Friday 08 July 2011 10:05:47 Dean Rasheed wrote:\n>> > On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:\n>> >> Hi,\n>> >>\n>> >> I have a delete query taking 7.2G of ram (and counting) but I do not\n>> >> understant why so much memory is necessary. The server has 12G, and\n>> >> I'm afraid it'll go into swap. Using postgres 8.3.14.\n>> >>\n>> >> I'm purging some old data from table t1, which should cascade-delete\n>> >> referencing rows in t2. Here's an anonymized rundown :\n>> >>\n>> >> # explain delete from t1 where t1id in (select t1id from t2 where\n>> >> foo=0 and bar < '20101101');\n>>\n>> It looks as though you're hitting one of the known issues with\n>> PostgreSQL and FKs. The FK constraint checks and CASCADE actions are\n>> implemented using AFTER triggers, which are queued up during the query\n>> to be executed at the end. For very large queries, this queue of\n>> pending triggers can become very large, using up all available memory.\n>>\n>> There's a TODO item to try to fix this for a future version of\n>> PostgreSQL (maybe I'll have another go at it for 9.2), but at the\n>> moment all versions of PostgreSQL suffer from this problem.\n>\n> That's very interesting, and a more plausible not-optimized-yet item than my\n> guesses so far, thanks. Drop me a mail if you work on this, and I'll find some\n> time to test your code.\n>\n> I'm wondering though : this sounds like the behaviour of a \"deferrable\" fkey,\n> which AFAICS is not the default and not my case ? I haven't explored that area\n> of constraints yet, so there's certainly some detail that I'm missing.\n>\n\nYes, it's the same issue that affects deferrable PK and FK\nconstraints, but even non-deferrable FKs use AFTER ROW triggers that\nsuffer from this problem. These triggers don't show up in a \"\\d\" from\npsql, but they are there (try select * from pg_trigger where\ntgconstrrelid = 't1'::regclass) and because they fire AFTER rather\nthan BEFORE, queuing up large numbers of them is a problem.\n\nRegards,\nDean\n\n\n>\n>> The simplest work-around for you might be to break your deletes up\n>> into smaller chunks, say 100k or 1M rows at a time, eg:\n>>\n>> delete from t1 where t1id in (select t1id from t2 where foo=0 and bar\n>> < '20101101' limit 100000);\n>\n> Yes, that's what we ended up doing. We canceled the query after 24h, shortly\n> before the OOM killer would have, and started doing things in smaller batches.\n>\n>\n> --\n> Vincent de Phily\n>\n",
"msg_date": "Fri, 8 Jul 2011 11:48:57 +0100",
"msg_from": "Dean Rasheed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DELETE taking too much memory"
},
{
"msg_contents": "On Fri, Jul 8, 2011 at 12:48 PM, Dean Rasheed <[email protected]> wrote:\n> Yes, it's the same issue that affects deferrable PK and FK\n> constraints, but even non-deferrable FKs use AFTER ROW triggers that\n> suffer from this problem. These triggers don't show up in a \"\\d\" from\n> psql, but they are there (try select * from pg_trigger where\n> tgconstrrelid = 't1'::regclass) and because they fire AFTER rather\n> than BEFORE, queuing up large numbers of them is a problem.\n\nI would imagine an \"easy\" solution would be to \"compress\" the queue by\ninserting a single element representing all rows of row version id X.\n\nIe: a delete or update will need to check all the row versions it\ncreates with its txid, this txid could be used to represent the rows\nthat need checking afterwards right?\n",
"msg_date": "Fri, 8 Jul 2011 13:09:17 +0200",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE taking too much memory"
},
{
"msg_contents": "On Fri, Jul 8, 2011 at 4:35 AM, Dean Rasheed <[email protected]>wrote:\n\n> > On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:\n> >> Hi,\n> >>\n> >> I have a delete query taking 7.2G of ram (and counting) but I do not\n> >> understant why so much memory is necessary. The server has 12G, and\n> >> I'm afraid it'll go into swap. Using postgres 8.3.14.\n> >>\n> >> I'm purging some old data from table t1, which should cascade-delete\n> >> referencing rows in t2. Here's an anonymized rundown :\n> >>\n> >> # explain delete from t1 where t1id in (select t1id from t2 where\n> >> foo=0 and bar < '20101101');\n>\n> It looks as though you're hitting one of the known issues with\n> PostgreSQL and FKs. The FK constraint checks and CASCADE actions are\n> implemented using AFTER triggers, which are queued up during the query\n> to be executed at the end. For very large queries, this queue of\n> pending triggers can become very large, using up all available memory.\n>\n> There's a TODO item to try to fix this for a future version of\n> PostgreSQL (maybe I'll have another go at it for 9.2), but at the\n> moment all versions of PostgreSQL suffer from this problem.\n>\n> The simplest work-around for you might be to break your deletes up\n> into smaller chunks, say 100k or 1M rows at a time, eg:\n>\n> delete from t1 where t1id in (select t1id from t2 where foo=0 and bar\n> < '20101101' limit 100000);\n>\n\nI'd like to comment here.... I had serious performance issues with a similar\nquery (planner did horrible things), not sure if planner will do the same\ndumb thing it did for me, my query was against the same table (ie, t1=t2).\nI had this query:\n\ndelete from t1 where ctid in (select ctid from t1 where\ncreated_at<'20101231' limit 10000); <--- this was slooooow. Changed to:\n\ndelete from t1 where ctid = any(array(select ctid from t1 where\ncreated_at<'20101231' limit 10000)); <--- a lot faster.\n\nSo... will the same principle work here?, doing this?:\n\ndelete from t1 where t1id = any(array(select t1id from t2 where foo=0 and\nbar\n< '20101101' limit 100000)); <-- would this query be faster then original\none?\n\n\n\n>\n> Regards,\n> Dean\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Fri, Jul 8, 2011 at 4:35 AM, Dean Rasheed <[email protected]> wrote:\n> On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:\n>> Hi,\n>>\n>> I have a delete query taking 7.2G of ram (and counting) but I do not\n>> understant why so much memory is necessary. The server has 12G, and\n>> I'm afraid it'll go into swap. Using postgres 8.3.14.\n>>\n>> I'm purging some old data from table t1, which should cascade-delete\n>> referencing rows in t2. Here's an anonymized rundown :\n>>\n>> # explain delete from t1 where t1id in (select t1id from t2 where\n>> foo=0 and bar < '20101101');\n\nIt looks as though you're hitting one of the known issues with\nPostgreSQL and FKs. The FK constraint checks and CASCADE actions are\nimplemented using AFTER triggers, which are queued up during the query\nto be executed at the end. 
For very large queries, this queue of\npending triggers can become very large, using up all available memory.\n\nThere's a TODO item to try to fix this for a future version of\nPostgreSQL (maybe I'll have another go at it for 9.2), but at the\nmoment all versions of PostgreSQL suffer from this problem.\n\nThe simplest work-around for you might be to break your deletes up\ninto smaller chunks, say 100k or 1M rows at a time, eg:\n\ndelete from t1 where t1id in (select t1id from t2 where foo=0 and bar\n< '20101101' limit 100000);I'd like to comment here.... I had serious performance issues with a similar query (planner did horrible things), not sure if planner will do the same dumb thing it did for me, my query was against the same table (ie, t1=t2). I had this query:\ndelete from t1 where ctid in (select ctid from t1 where created_at<'20101231' limit 10000); <--- this was slooooow. Changed to:delete from t1 where ctid = any(array(select ctid from t1 where created_at<'20101231' limit 10000)); <--- a lot faster.\nSo... will the same principle work here?, doing this?:delete from t1 where t1id = any(array(select t1id from t2 where foo=0 and bar\n< '20101101' limit 100000)); <-- would this query be faster then original one?\n \n\nRegards,\nDean\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 8 Jul 2011 09:07:45 -0430",
"msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] DELETE taking too much memory"
}
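The two ideas combine naturally: the chunked delete from earlier in the thread, but using the = ANY(ARRAY(...)) form Jose found faster for his ctid variant (a sketch; whether it actually beats IN here depends on the plans 8.3 picks for each):

DELETE FROM t1
WHERE t1id = ANY (ARRAY(SELECT t1id FROM t2
                        WHERE foo = 0 AND bar < '2010-11-01'
                        LIMIT 100000));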
] |
[
{
"msg_contents": "Is there any guidelines to sizing work_mem, shared_bufferes and other\nconfiguration parameters etc., with regards to very large records? I\nhave a table that has a bytea column and I am told that some of these\ncolumns contain over 400MB of data. I am having a problem on several\nservers reading and more specifically dumping these records (table)\nusing pg_dump\n\nThanks\n",
"msg_date": "Thu, 07 Jul 2011 10:33:05 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "very large record sizes and ressource usage"
},
{
"msg_contents": "On Thu, Jul 7, 2011 at 10:33 AM, <[email protected]> wrote:\n> Is there any guidelines to sizing work_mem, shared_bufferes and other\n> configuration parameters etc., with regards to very large records? I\n> have a table that has a bytea column and I am told that some of these\n> columns contain over 400MB of data. I am having a problem on several\n> servers reading and more specifically dumping these records (table)\n> using pg_dump\n\nwork_mem shouldn't make any difference to how well that performs;\nshared_buffers might, but there's no special advice for tuning it for\nlarge records vs. anything else. Large records just get broken up\ninto small records, under the hood. At any rate, your email is a\nlittle vague about exactly what the problem is. If you provide some\nmore detail you might get more help.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 28 Jul 2011 20:25:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: very large record sizes and ressource usage"
}
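One concrete way to supply the detail Robert asks for is to measure how large the problem values really are, both as stored (TOASTed, possibly compressed) and uncompressed, which is roughly what pg_dump has to stream out (table and column names below are placeholders):

SELECT id,
       pg_column_size(payload) AS bytes_stored,       -- on-disk, possibly compressed
       octet_length(payload)   AS bytes_uncompressed
FROM   big_docs
ORDER  BY octet_length(payload) DESC
LIMIT  10;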
] |