[
{
"msg_contents": "I don't clearly understand why this happens, but when I try join some \ntables using arrays I end up with:\n\n=# explain select count(*) from urls JOIN rules ON urls.tag && rules.tag;\n QUERY PLAN \n\n---------------------------------------------------------------------------------------\n Aggregate (cost=1356.27..1356.28 rows=1 width=0)\n -> Nested Loop (cost=20.33..1354.96 rows=523 width=0)\n -> Seq Scan on rules (cost=0.00..1.01 rows=1 width=37)\n -> Bitmap Heap Scan on urls (cost=20.33..1347.42 rows=523 \nwidth=29)\n Recheck Cond: (urls.tag && rules.tag)\n -> Bitmap Index Scan on url_tag_g (cost=0.00..20.20 \nrows=523 width=0)\n Index Cond: (urls.tag && rules.tag)\n\nHere tag is text[] with list of tags. Whole select takes 142 ms. It \ndrops down to 42 ms when I add some conditions that strip result table \nto zero length.\n\nWhat am I missing? Is there any other ways to overlap those ones? Or \nshould I find \"any other way\"?\n\n-- \nSphinx of black quartz judge my vow.\n",
"msg_date": "Sun, 29 Aug 2010 16:18:22 +0300",
"msg_from": "Volodymyr Kostyrko <[email protected]>",
"msg_from_op": true,
"msg_subject": "array can be slow when joining?"
},
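A minimal sketch of the setup the plan above implies, assuming url_tag_g is a GIN index on urls.tag and that both tag columns are text[] (the table and index names come from the message; column lists and everything else here are assumptions):

-- GIN index so the && (overlap) join condition can use a bitmap index scan,
-- as in the plan shown in the message above
CREATE TABLE rules (tag text[]);
CREATE TABLE urls  (url text, tag text[]);
CREATE INDEX url_tag_g ON urls USING gin (tag);

-- the join from the message; EXPLAIN ANALYZE shows how much of the runtime
-- goes into the bitmap index scan versus the heap recheck
EXPLAIN ANALYZE
SELECT count(*) FROM urls JOIN rules ON urls.tag && rules.tag;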
{
"msg_contents": "Hello\n\nif you read a some longer field - longer than 2Kb, then PostgreSQL has\nto read this value from a table_toast file. see\nhttp://developer.postgresql.org/pgdocs/postgres/storage-toast.html -\nand reading have to be slower than you don't read this field.\n\nRegards\n\nPavel Stehule\n\n2010/8/29 Volodymyr Kostyrko <[email protected]>:\n> I don't clearly understand why this happens, but when I try join some tables\n> using arrays I end up with:\n>\n> =# explain select count(*) from urls JOIN rules ON urls.tag && rules.tag;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------\n> Aggregate (cost=1356.27..1356.28 rows=1 width=0)\n> -> Nested Loop (cost=20.33..1354.96 rows=523 width=0)\n> -> Seq Scan on rules (cost=0.00..1.01 rows=1 width=37)\n> -> Bitmap Heap Scan on urls (cost=20.33..1347.42 rows=523\n> width=29)\n> Recheck Cond: (urls.tag && rules.tag)\n> -> Bitmap Index Scan on url_tag_g (cost=0.00..20.20 rows=523\n> width=0)\n> Index Cond: (urls.tag && rules.tag)\n>\n> Here tag is text[] with list of tags. Whole select takes 142 ms. It drops\n> down to 42 ms when I add some conditions that strip result table to zero\n> length.\n>\n> What am I missing? Is there any other ways to overlap those ones? Or should\n> I find \"any other way\"?\n>\n> --\n> Sphinx of black quartz judge my vow.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 30 Aug 2010 05:53:41 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: array can be slow when joining?"
}
]
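One way to check whether Pavel's TOAST explanation applies to the urls.tag column from the first message (a sketch; pg_column_size() reports the stored, possibly compressed size of a value, and values well over roughly 2 kB are the ones pushed to out-of-line TOAST storage):

-- how large are the array values, and how many exceed the ~2 kB threshold?
SELECT count(*) AS total_rows,
       avg(pg_column_size(tag)) AS avg_tag_bytes,
       sum(CASE WHEN pg_column_size(tag) > 2000 THEN 1 ELSE 0 END) AS tags_over_2kb
FROM urls;

If almost no values approach that size, TOAST reads are unlikely to explain the 142 ms.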
[
{
"msg_contents": "Hello,\n\nI just upgraded with pg_dump/restore from PostgreSQL 8.3.11 to 8.4.4 but \nI'm having major performance problems with a query with many left joins. \nProblem is that costs are now very, very, very high (was ok in 8.3). \nAnalyze has been done. Indexes are of course there.\n\n -> Merge Left Join \n(cost=1750660.22..4273805884876845789194861338991916289697885665127154313046252183850255795798561612107149662486528.00 \nrows=238233578115856634454073334945297075430094545596765511255148896328828230572227215727052643001958400 \nwidth=16)\n Merge Cond: (l.id = d2000903.fk_id)\n\nDetails with execution plan can be found at:\nhttp://www.wiesinger.com/tmp/pg_perf_84.txt\n\nI know that the data model is key/value pairs but it worked well in 8.3. I \nneed this flexibility.\n\nAny ideas?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n\n",
"msg_date": "Mon, 30 Aug 2010 08:20:05 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Major performance problem after upgrade from 8.3 to 8.4"
},
{
"msg_contents": "On Mon, Aug 30, 2010 at 12:20 AM, Gerhard Wiesinger <[email protected]> wrote:\n> Hello,\n>\n> I just upgraded with pg_dump/restore from PostgreSQL 8.3.11 to 8.4.4 but I'm\n> having major performance problems with a query with many left joins. Problem\n> is that costs are now very, very, very high (was ok in 8.3). Analyze has\n> been done. Indexes are of course there.\n>\n> -> Merge Left Join\n> (cost=1750660.22..4273805884876845789194861338991916289697885665127154313046252183850255795798561612107149662486528.00\n> rows=238233578115856634454073334945297075430094545596765511255148896328828230572227215727052643001958400\n> width=16)\n> Merge Cond: (l.id = d2000903.fk_id)\n\nWow! Other than an incredibly high cost AND row estimate, was the\nquery plan the same on 8.3 or different?\n\n> Details with execution plan can be found at:\n> http://www.wiesinger.com/tmp/pg_perf_84.txt\n\nWhat's up with the \"(actual time=.. rows= loops=) \" in the explain analyze?\n\n> I know that the data model is key/value pairs but it worked well in 8.3. I\n> need this flexibility.\n>\n> Any ideas?\n\nNot really. I would like an explain analyze from both 8.3 and 8.4.\nAre they tuned the same, things like work mem and default stats\ntarget?\n",
"msg_date": "Mon, 30 Aug 2010 01:00:28 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> wrote:\n\n> I know that the data model is key/value pairs but it worked well in 8.3. \n> I need this flexibility.\n>\n> Any ideas?\n\nIf i understand the query correctly it's a pivot-table, right?\n\nIf yes, and if i where you, i would try to rewrite this query, to\nsomething like:\n\nselect\n timestamp,\n sum (case when keyid = 1 then value else 0 end) as Raumsolltemperatur,\n ...\nfrom\n log\ngroup by\n timestamp;\n\n\nAssuming you can read a german text:\nhttp://www.pg-forum.de/h-ufig-gestellte-fragen-faq/4067-faq-zeilen-zu-spalten.html\n\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n",
"msg_date": "Mon, 30 Aug 2010 09:22:30 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n\t8.4"
},
{
"msg_contents": "On Mon, 30 Aug 2010, Scott Marlowe wrote:\n\n> On Mon, Aug 30, 2010 at 12:20 AM, Gerhard Wiesinger <[email protected]> wrote:\n>> Hello,\n>>\n>> I just upgraded with pg_dump/restore from PostgreSQL 8.3.11 to 8.4.4 but I'm\n>> having major performance problems with a query with many left joins. Problem\n>> is that costs are now very, very, very high (was ok in 8.3). Analyze has\n>> been done. Indexes are of course there.\n>>\n>>  ->  Merge Left Join\n>> (cost=1750660.22..4273805884876845789194861338991916289697885665127154313046252183850255795798561612107149662486528.00\n>> rows=238233578115856634454073334945297075430094545596765511255148896328828230572227215727052643001958400\n>> width=16)\n>>        Merge Cond: (l.id = d2000903.fk_id)\n>\n> Wow! Other than an incredibly high cost AND row estimate, was the\n> query plan the same on 8.3 or different?\n>\n>> Details with execution plan can be found at:\n>> http://www.wiesinger.com/tmp/pg_perf_84.txt\n>\n> What's up with the \"(actual time=.. rows= loops=) \" in the explain analyze?\n\nWhat do you mean exactly? missing?\nI did it not with psql but with a GUI program.\n\n>\n>> I know that the data model is key/value pairs but it worked well in 8.3. I\n>> need this flexibility.\n>>\n>> Any ideas?\n>\n> Not really. I would like an explain analyze from both 8.3 and 8.4.\n> Are they tuned the same, things like work mem and default stats\n> target?\n\nI don't have a 8.3 version running anymore. But I'm havin an OLD version \nof a nearly exactly query plan (The sort was missing due to performance \nissues and it done now in a view, maybe also some more JOINS are added, \nbut all that doesn't have impacts on the basic principle of the query \nplan): \nhttp://www.wiesinger.com/tmp/pg_perf.txt\n\nTuning: Yes, on same machine with same parameters (manual diff on old \nconfig and added manually the parameters again).\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n\n",
"msg_date": "Mon, 30 Aug 2010 09:25:01 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "On Mon, Aug 30, 2010 at 1:25 AM, Gerhard Wiesinger <[email protected]> wrote:\n> On Mon, 30 Aug 2010, Scott Marlowe wrote:\n>\n>> On Mon, Aug 30, 2010 at 12:20 AM, Gerhard Wiesinger <[email protected]>\n>> wrote:\n>>>\n>>> Hello,\n>>>\n>>> I just upgraded with pg_dump/restore from PostgreSQL 8.3.11 to 8.4.4 but\n>>> I'm\n>>> having major performance problems with a query with many left joins.\n>>> Problem\n>>> is that costs are now very, very, very high (was ok in 8.3). Analyze has\n>>> been done. Indexes are of course there.\n>>>\n>>> -> Merge Left Join\n>>>\n>>> (cost=1750660.22..4273805884876845789194861338991916289697885665127154313046252183850255795798561612107149662486528.00\n>>>\n>>> rows=238233578115856634454073334945297075430094545596765511255148896328828230572227215727052643001958400\n>>> width=16)\n>>> Merge Cond: (l.id = d2000903.fk_id)\n>>\n>> Wow! Other than an incredibly high cost AND row estimate, was the\n>> query plan the same on 8.3 or different?\n>>\n>>> Details with execution plan can be found at:\n>>> http://www.wiesinger.com/tmp/pg_perf_84.txt\n>>\n>> What's up with the \"(actual time=.. rows= loops=) \" in the explain\n>> analyze?\n>\n> What do you mean exactly? missing?\n> I did it not with psql but with a GUI program.\n\nNevermind, that was an artifact at http://explain.depesz.com/s/KyU not\nyour fault. Sorry.\n\n>>> I know that the data model is key/value pairs but it worked well in 8.3.\n>>> I\n>>> need this flexibility.\n>>>\n>>> Any ideas?\n>>\n>> Not really. I would like an explain analyze from both 8.3 and 8.4.\n>> Are they tuned the same, things like work mem and default stats\n>> target?\n>\n> I don't have a 8.3 version running anymore. But I'm havin an OLD version of\n> a nearly exactly query plan (The sort was missing due to performance issues\n> and it done now in a view, maybe also some more JOINS are added, but all\n> that doesn't have impacts on the basic principle of the query plan):\n> http://www.wiesinger.com/tmp/pg_perf.txt\n>\n> Tuning: Yes, on same machine with same parameters (manual diff on old config\n> and added manually the parameters again).\n\nHow long does the query take to run in 8.4? Do you have an explain\nanalyze of that? I'm still thinking that some change in the query\nplanner might be seeing all those left joins and coming up with some\nnon-linear value for row estimation. What's default stats target set\nto in that db?\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Mon, 30 Aug 2010 01:34:02 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4"
},
{
"msg_contents": "Hello\n\n2010/8/30 Andreas Kretschmer <[email protected]>:\n> Gerhard Wiesinger <[email protected]> wrote:\n>\n>> I know that the data model is key/value pairs but it worked well in 8.3.\n>> I need this flexibility.\n>>\n>> Any ideas?\n>\n> If i understand the query correctly it's a pivot-table, right?\n>\n\nno - it's just EAV table on very large data :(\n\nRegards\n\nPavel Stehule\n\n> If yes, and if i where you, i would try to rewrite this query, to\n> something like:\n>\n> select\n> timestamp,\n> sum (case when keyid = 1 then value else 0 end) as Raumsolltemperatur,\n> ...\n> from\n> log\n> group by\n> timestamp;\n>\n>\n> Assuming you can read a german text:\n> http://www.pg-forum.de/h-ufig-gestellte-fragen-faq/4067-faq-zeilen-zu-spalten.html\n>\n>\n>\n> Andreas\n> --\n> Really, I'm not out to destroy Microsoft. That will just be a completely\n> unintentional side effect. (Linus Torvalds)\n> \"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\n> Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 30 Aug 2010 09:34:36 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4"
},
{
"msg_contents": "On Mon, 30 Aug 2010, Andreas Kretschmer wrote:\n\n> Gerhard Wiesinger <[email protected]> wrote:\n>\n>> I know that the data model is key/value pairs but it worked well in 8.3.\n>> I need this flexibility.\n>>\n>> Any ideas?\n>\n> If i understand the query correctly it's a pivot-table, right?\n>\n\nThe view flattens the key/value structure (for enchanceable technical \nmeasure values) to a flat structure with many columns. The view is that I \nhave a flat structure and then have easy to write queries for aggregation.\n\nThe query itself is just an aggregating query over that flat structure.\n\nAny ideas for better optimizations?\n\n> If yes, and if i where you, i would try to rewrite this query, to\n> something like:\n>\n> select\n> timestamp,\n> sum (case when keyid = 1 then value else 0 end) as Raumsolltemperatur,\n> ...\n> from\n> log\n> group by\n> timestamp;\n>\n\nI will try that. But what I don't understand: query was really fast in \n8.3 even with 24 hours timeframe.\n\n>\n> Assuming you can read a german text:\n> http://www.pg-forum.de/h-ufig-gestellte-fragen-faq/4067-faq-zeilen-zu-spalten.html\n\nHopefully yes :-) after that Fedora/Postgresql nightmare update night \n(Parallel Fedora upgrade stalled whole machine since that query from \nNagios was executed I guess hundreds of times since it got very slow in \n8.4). Machine was pingable but nothing else even on console :-( RAID \nrebuild and all other stuff :-(\n\nI planned the upgrade but I didn't expect problems with the query plan \ninstability :-(\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Mon, 30 Aug 2010 09:48:14 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "On Mon, 30 Aug 2010, Scott Marlowe wrote:\n\n> On Mon, Aug 30, 2010 at 1:25 AM, Gerhard Wiesinger <[email protected]> wrote:\n>> On Mon, 30 Aug 2010, Scott Marlowe wrote:\n>>\n>>> On Mon, Aug 30, 2010 at 12:20 AM, Gerhard Wiesinger <[email protected]>\n>>> wrote:\n>>>>\n>>>> Hello,\n>>>>\n>>>> I just upgraded with pg_dump/restore from PostgreSQL 8.3.11 to 8.4.4 but\n>>>> I'm\n>>>> having major performance problems with a query with many left joins.\n>>>> Problem\n>>>> is that costs are now very, very, very high (was ok in 8.3). Analyze has\n>>>> been done. Indexes are of course there.\n>>>>\n>>>>  ->  Merge Left Join\n>>>>\n>>>> (cost=1750660.22..4273805884876845789194861338991916289697885665127154313046252183850255795798561612107149662486528.00\n>>>>\n>>>> rows=238233578115856634454073334945297075430094545596765511255148896328828230572227215727052643001958400\n>>>> width=16)\n>>>>        Merge Cond: (l.id = d2000903.fk_id)\n>>>\n>>> Wow! Other than an incredibly high cost AND row estimate, was the\n>>> query plan the same on 8.3 or different?\n>>>\n>>>> Details with execution plan can be found at:\n>>>> http://www.wiesinger.com/tmp/pg_perf_84.txt\n>>>\n>>> What's up with the \"(actual time=.. rows= loops=) \" in the explain\n>>> analyze?\n>>\n>> What do you mean exactly? missing?\n>> I did it not with psql but with a GUI program.\n>\n> Nevermind, that was an artifact at http://explain.depesz.com/s/KyU not\n> your fault. Sorry.\n>\n>>>> I know that the data model is key/value pairs but it worked well in 8.3.\n>>>> I\n>>>> need this flexibility.\n>>>>\n>>>> Any ideas?\n>>>\n>>> Not really. I would like an explain analyze from both 8.3 and 8.4.\n>>> Are they tuned the same, things like work mem and default stats\n>>> target?\n>>\n>> I don't have a 8.3 version running anymore. But I'm havin an OLD version of\n>> a nearly exactly query plan (The sort was missing due to performance issues\n>> and it done now in a view, maybe also some more JOINS are added, but all\n>> that doesn't have impacts on the basic principle of the query plan):\n>> http://www.wiesinger.com/tmp/pg_perf.txt\n>>\n>> Tuning: Yes, on same machine with same parameters (manual diff on old config\n>> and added manually the parameters again).\n>\n> How long does the query take to run in 8.4? Do you have an explain\n> analyze of that? I'm still thinking that some change in the query\n> planner might be seeing all those left joins and coming up with some\n> non-linear value for row estimation. What's default stats target set\n> to in that db?\n\nIn config, default values:\n#default_statistics_target = 100 # range 1-10000\n\nHow can I find that out?\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Mon, 30 Aug 2010 09:56:24 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
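For the question above about checking the setting: the effective value can be read back from the running server with ordinary SQL (a sketch using standard commands):

SHOW default_statistics_target;
-- or, including where the value comes from (default, configuration file, ...):
SELECT name, setting, source FROM pg_settings WHERE name = 'default_statistics_target';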
{
"msg_contents": "On Mon, 30 Aug 2010, Pavel Stehule wrote:\n\n> Hello\n>\n> 2010/8/30 Andreas Kretschmer <[email protected]>:\n>> Gerhard Wiesinger <[email protected]> wrote:\n>>\n>>> I know that the data model is key/value pairs but it worked well in 8.3.\n>>> I need this flexibility.\n>>>\n>>> Any ideas?\n>>\n>> If i understand the query correctly it's a pivot-table, right?\n>>\n>\n> no - it's just EAV table on very large data :(\n\nYes, it is an EAV table, but with query space comparable low (Max. 1 day \nout of years, typically 5mins out of years).\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n\n",
"msg_date": "Mon, 30 Aug 2010 09:58:15 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "2010/8/30 Gerhard Wiesinger <[email protected]>:\n> On Mon, 30 Aug 2010, Pavel Stehule wrote:\n>\n>> Hello\n>>\n>> 2010/8/30 Andreas Kretschmer <[email protected]>:\n>>>\n>>> Gerhard Wiesinger <[email protected]> wrote:\n>>>\n>>>> I know that the data model is key/value pairs but it worked well in 8.3.\n>>>> I need this flexibility.\n>>>>\n>>>> Any ideas?\n>>>\n>>> If i understand the query correctly it's a pivot-table, right?\n>>>\n>>\n>> no - it's just EAV table on very large data :(\n>\n> Yes, it is an EAV table, but with query space comparable low (Max. 1 day out\n> of years, typically 5mins out of years).\n>\n\nit is irelevant - there are repeated seq scans - so you need a\npartitioning or classic table - maybe materialized views can help\n\nPavel\n\n\n> Thnx.\n>\n> Ciao,\n> Gerhard\n>\n> --\n> http://www.wiesinger.com/\n>\n>\n",
"msg_date": "Mon, 30 Aug 2010 10:17:27 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4"
},
{
"msg_contents": "On Mon, 30 Aug 2010, Pavel Stehule wrote:\n\n> 2010/8/30 Gerhard Wiesinger <[email protected]>:\n>> On Mon, 30 Aug 2010, Pavel Stehule wrote:\n>>\n>>> Hello\n>>>\n>>> 2010/8/30 Andreas Kretschmer <[email protected]>:\n>>>>\n>>>> Gerhard Wiesinger <[email protected]> wrote:\n>>>>\n>>>>> I know that the data model is key/value pairs but it worked well in 8.3.\n>>>>> I need this flexibility.\n>>>>>\n>>>>> Any ideas?\n>>>>\n>>>> If i understand the query correctly it's a pivot-table, right?\n>>>>\n>>>\n>>> no - it's just EAV table on very large data :(\n>>\n>> Yes, it is an EAV table, but with query space comparable low (Max. 1 day out\n>> of years, typically 5mins out of years).\n>>\n>\n> it is irelevant - there are repeated seq scans - so you need a\n> partitioning or classic table - maybe materialized views can help\n\nI know the drawbacks of an EAV design but I don't want to discuss that. I \nwant to discuss the major performance decrease of PostgreSQL 8.3 \n(performance was ok) to PostgreSQL 8.4 (performance is NOT ok).\n\nAny further ideas how I can track this down?\nCan someone explain the difference in query plan from an optimizer point \nof view?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Mon, 30 Aug 2010 18:11:36 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> I know the drawbacks of an EAV design but I don't want to discuss that. I \n> want to discuss the major performance decrease of PostgreSQL 8.3 \n> (performance was ok) to PostgreSQL 8.4 (performance is NOT ok).\n\n> Any further ideas how I can track this down?\n> Can someone explain the difference in query plan from an optimizer point \n> of view?\n\nSince you haven't shown us the 8.3 plan, it's kind of hard to speculate ;-)\n\nOne thing that jumped out at me was that 8.4 appears to be expecting\nmultiple matches in each of the left-joined tables, which is why the\ntotal rowcount estimate balloons so fast. I rather imagine that you are\nexpecting at most one match in reality, else the query isn't going to\nbehave nicely. Is this correct? Are you *sure* you analyzed all these\ntables? And if that is how the data looks, where is the actual\nperformance problem? A bad rowcount estimate isn't in itself going\nto kill you.\n\nFWIW, in a similar albeit toy example, I don't see any difference\nbetween the 8.3 and 8.4 plans or cost estimates.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 2010 12:22:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
{
"msg_contents": "On Mon, 30 Aug 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> I know the drawbacks of an EAV design but I don't want to discuss that. I\n>> want to discuss the major performance decrease of PostgreSQL 8.3\n>> (performance was ok) to PostgreSQL 8.4 (performance is NOT ok).\n>\n>> Any further ideas how I can track this down?\n>> Can someone explain the difference in query plan from an optimizer point\n>> of view?\n>\n> Since you haven't shown us the 8.3 plan, it's kind of hard to speculate ;-)\n>\n> One thing that jumped out at me was that 8.4 appears to be expecting\n> multiple matches in each of the left-joined tables, which is why the\n> total rowcount estimate balloons so fast. I rather imagine that you are\n> expecting at most one match in reality, else the query isn't going to\n> behave nicely. Is this correct? Are you *sure* you analyzed all these\n> tables? And if that is how the data looks, where is the actual\n> performance problem? A bad rowcount estimate isn't in itself going\n> to kill you.\n>\n> FWIW, in a similar albeit toy example, I don't see any difference\n> between the 8.3 and 8.4 plans or cost estimates.\n\nYes, I'm expecting only one match in reality and I thing PostgreSQL should \nalso know that from table definition and constraints. Long answer below.\n\nQuery doesn't \"end\" in PostgreSQL.\n\n From the definition:\nCREATE TABLE value_types (\n valuetypeid bigint PRIMARY KEY,\n description varchar(256) NOT NULL -- e.g. 'float', 'integer', 'boolean'\n);\n\nCREATE TABLE key_description (\n keyid bigint PRIMARY KEY,\n description varchar(256) NOT NULL UNIQUE,\n fk_valuetypeid bigint NOT NULL,\n unit varchar(256) NOT NULL, -- e.g. '°C'\n FOREIGN KEY(fk_valuetypeid) REFERENCES value_types(valuetypeid) ON DELETE RESTRICT\n);\n-- ALTER TABLE key_description DROP CONSTRAINT c_key_description_description;\n-- ALTER TABLE key_description ADD CONSTRAINT c_key_description_description UNIQUE(description);\n\n\nCREATE TABLE log (\n id bigserial PRIMARY KEY,\n datetime timestamp with time zone NOT NULL,\n tdate date NOT NULL,\n ttime time with time zone NOT NULL\n);\n\nCREATE TABLE log_details (\n fk_id bigint NOT NULL,\n fk_keyid bigint NOT NULL,\n value double precision NOT NULL,\n FOREIGN KEY (fk_id) REFERENCES log(id) ON DELETE CASCADE,\n FOREIGN KEY (fk_keyid) REFERENCES key_description(keyid) ON DELETE RESTRICT,\n CONSTRAINT unique_key_and_id UNIQUE(fk_id, fk_keyid)\n);\n\n\n\nTherefore keyid is unique and eg d1.fk_keyid is unique.\nWith constraint from log_details and d1.fk_keyid is unique fk_id is \nunique for a given d1.fk_keyid.\n\nBTW: I have the old data setup. /var/lib/pgsql-old. Is there a fast setup \nwith old version on different TCP port possible to compare query plans?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Mon, 30 Aug 2010 18:45:26 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
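Given the schema above, the CASE-based pivot Andreas suggested earlier in the thread could be sketched against these tables roughly as follows (one pass over log_details instead of one LEFT JOIN per key; only the 'Raumsolltemperatur' column is shown, the key name and the one-minute window are taken from the thread, everything else is an assumption):

SELECT l.id,
       l.datetime,
       max(CASE WHEN kd.description = 'Raumsolltemperatur' THEN d.value END) AS Raumsolltemperatur
       -- ... one max(CASE ...) expression per additional key ...
FROM log l
LEFT JOIN log_details d ON d.fk_id = l.id
LEFT JOIN key_description kd ON kd.keyid = d.fk_keyid
WHERE l.datetime >= now() - '00:01:00'::interval
GROUP BY l.id, l.datetime;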
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> BTW: I have the old data setup. /var/lib/pgsql-old. Is there a fast setup \n> with old version on different TCP port possible to compare query plans?\n\nYou'll need to reinstall the old executables. If you put the new\nexecutables in the same directories, it's not going to be easy to\nrun both in parallel. If you didn't, then you just need to start\nthe old postmaster using a different port number.\n",
"msg_date": "Mon, 30 Aug 2010 12:49:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
{
"msg_contents": "On Mon, 30 Aug 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> BTW: I have the old data setup. /var/lib/pgsql-old. Is there a fast setup\n>> with old version on different TCP port possible to compare query plans?\n>\n> You'll need to reinstall the old executables. If you put the new\n> executables in the same directories, it's not going to be easy to\n> run both in parallel. If you didn't, then you just need to start\n> the old postmaster using a different port number.\n>\n\nI tried to get 8.3.11 ready again:\n# Problems with timezone on startup (Default not found)\n./configure --with-system-tzdata=/usr/share\ncp ./src/backend/postgres /bin/postgres-8.3.11\nsu -l postgres -c \"/bin/postgres-8.3.11 -p 54321 -D \n/var/lib/pgsql-old/data &\" >> /var/lib/pgsql-old/pgstartup.log 2>&1 < /dev/null\n\nProblem is that PostgreSQL doesn't listen and take much CPU and also disk \nI/O. 8.3 was shut down cleanly. 8.4 runs in parallel. Are there any \nproblems with shared buffer conflicts?\n\n PID USER PR NI VIRT SWAP RES CODE DATA SHR S %CPU %MEM TIME+ COMMAND\n 6997 postgres 20 0 113m 112m 1236 3688 1064 712 D 38.7 0.0 0:45.43 /bin/postgres-8.3.11 -p 54321 -D /var/lib/pgsql-old/data\n\nAny ideas?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Thu, 2 Sep 2010 07:53:28 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> Problem is that PostgreSQL doesn't listen and take much CPU and also disk \n> I/O. 8.3 was shut down cleanly.\n\nHm, you sure about that? What's in the postmaster log?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 02 Sep 2010 09:14:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
{
"msg_contents": "On Thu, 2 Sep 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> Problem is that PostgreSQL doesn't listen and take much CPU and also disk\n>> I/O. 8.3 was shut down cleanly.\n>\n> Hm, you sure about that? What's in the postmaster log?\n\nThat's the strange thing: I don't have anything in stdout/stderr log and \nin pg_log directory and even in syslog.\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Fri, 3 Sep 2010 07:03:55 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "On Thu, 2 Sep 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> Problem is that PostgreSQL doesn't listen and take much CPU and also disk\n>> I/O. 8.3 was shut down cleanly.\n>\n> Hm, you sure about that? What's in the postmaster log?\n\nBTW: Do I need other postgres user with a different home directory?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Fri, 3 Sep 2010 07:07:07 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> On Thu, 2 Sep 2010, Tom Lane wrote:\n>> Hm, you sure about that? What's in the postmaster log?\n\n> That's the strange thing: I don't have anything in stdout/stderr log and \n> in pg_log directory and even in syslog.\n\nNot even in that pgstartup.log file you sent stderr to?\n\nI have seen cases before where Postgres couldn't log anything because of\nSELinux. If this is a Red Hat based system, look in the kernel log for\nAVC messages. If you see any, then SELinux is probably blocking things\nbecause of the nonstandard directory locations. Turning it off\ntemporarily would be the easiest fix, though relabeling the files would\nbe a better one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 2010 10:14:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
{
"msg_contents": "On Fri, 3 Sep 2010, Tom Lane wrote:\n> Not even in that pgstartup.log file you sent stderr to?\n>\n> I have seen cases before where Postgres couldn't log anything because of\n> SELinux. If this is a Red Hat based system, look in the kernel log for\n> AVC messages. If you see any, then SELinux is probably blocking things\n> because of the nonstandard directory locations. Turning it off\n> temporarily would be the easiest fix, though relabeling the files would\n> be a better one.\n\nSELinux is already disabled.\ncat /etc/selinux/config | grep -v \"^#\"|grep -v \"^$\"\nSELINUXTYPE=targeted\nSELINUX=disabled\n\nYes, also the redirected log file is empty. Also kernel log is empty.\n\nI tried to redirect it to a different, new file, on startup I get nothing, \nafter killing it I get:\n2010-09-03 16:35:39.177 GMT [2149] @/: LOG: could not stat \"/usr/share/doc/xalan-j2-manual-2.7.0/apidocs\": No such file or directory\n\nAny ideas?\n\nBTW: Shared memory can't be any issue?\n\nCiao,\nGerhard\n\n-- http://www.wiesinger.com/\n",
"msg_date": "Fri, 3 Sep 2010 18:38:53 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> On Fri, 3 Sep 2010, Tom Lane wrote:\n>> Not even in that pgstartup.log file you sent stderr to?\n\n> Yes, also the redirected log file is empty. Also kernel log is empty.\n\nHuh. Try strace'ing the process to see what it's doing.\n\n> BTW: Shared memory can't be any issue?\n\nIf you're not getting any log messages at all, you've got worse problems\nthan shared memory.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 2010 12:47:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
{
"msg_contents": "On Fri, 3 Sep 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> On Fri, 3 Sep 2010, Tom Lane wrote:\n>>> Not even in that pgstartup.log file you sent stderr to?\n>\n>> Yes, also the redirected log file is empty. Also kernel log is empty.\n>\n> Huh. Try strace'ing the process to see what it's doing.\n\n\nIt tries to find something in /usr/share/ ...\nOk, I think from the compile time options:\n./configure --with-system-tzdata=/usr/share\n\nPreviously I had problems with the timezone on startup (Default not \nfound) so I tried to set the directory.\n\nMaybe the timezone thing (it looks for the Default timezone) is easier to \nfix ...\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Fri, 3 Sep 2010 18:57:02 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> On Fri, 3 Sep 2010, Tom Lane wrote:\n>> Huh. Try strace'ing the process to see what it's doing.\n\n> It tries to find something in /usr/share/ ...\n> Ok, I think from the compile time options:\n> ./configure --with-system-tzdata=/usr/share\n\nDoh. I hadn't looked closely at that. Probably you want\n/usr/share/zoneinfo --- at least that's what the Red Hat RPMs use.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 2010 13:16:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
{
"msg_contents": "On Fri, 3 Sep 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> On Fri, 3 Sep 2010, Tom Lane wrote:\n>>> Huh. Try strace'ing the process to see what it's doing.\n>\n>> It tries to find something in /usr/share/ ...\n>> Ok, I think from the compile time options:\n>> ./configure --with-system-tzdata=/usr/share\n>\n> Doh. I hadn't looked closely at that. Probably you want\n> /usr/share/zoneinfo --- at least that's what the Red Hat RPMs use.\n>\n\nI tried even before I wrote to the mailinglist without success:\n./configure\n./configure --with-system-tzdata=/usr/share/pgsql/timezonesets\n./configure --with-system-tzdata=/usr/share/pgsql\n./configure --with-system-tzdata=/usr/share\nNow I tried without success:\n./configure --with-system-tzdata=/usr/share/zoneinfo\n\nWith last one I also get:\n2010-09-03 19:51:29.079 CEST [27916] @/: FATAL: invalid value for parameter \"timezone_abbreviations\": \"Default\"\n\nCorrect directory would be:\nls -l /usr/share/pgsql/timezonesets/Default\n-rw-r--r-- 1 root root 29602 2010-05-17 20:07 /usr/share/pgsql/timezonesets/Default\n\nFile looks also good.\n\nAny ideas?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Fri, 3 Sep 2010 19:57:00 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> On Fri, 3 Sep 2010, Tom Lane wrote:\n>> Doh. I hadn't looked closely at that. Probably you want\n>> /usr/share/zoneinfo --- at least that's what the Red Hat RPMs use.\n\n> I tried even before I wrote to the mailinglist without success:\n> ./configure\n\nI'd definitely suggest leaving out the --with-system-tzdata option\naltogether if you're not certain it works.\n\n> With last one I also get:\n> 2010-09-03 19:51:29.079 CEST [27916] @/: FATAL: invalid value for parameter \"timezone_abbreviations\": \"Default\"\n\nThis is a different problem; the --with-system-tzdata option wouldn't\naffect that.\n\nI think what may be happening here is that a postgres executable expects\nto find itself in a full installation tree, ie if it's in /someplace/bin\nthen the timezone files are in /someplace/share, etc. Did you do a full\n\"make install\" after building, or did you just copy the postgres\nexecutable?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 2010 14:09:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
{
"msg_contents": "On Fri, 3 Sep 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> On Fri, 3 Sep 2010, Tom Lane wrote:\n>>> Doh. I hadn't looked closely at that. Probably you want\n>>> /usr/share/zoneinfo --- at least that's what the Red Hat RPMs use.\n>\n>> I tried even before I wrote to the mailinglist without success:\n>> ./configure\n>\n> I'd definitely suggest leaving out the --with-system-tzdata option\n> altogether if you're not certain it works.\n\nOK.\n\n>\n>> With last one I also get:\n>> 2010-09-03 19:51:29.079 CEST [27916] @/: FATAL: invalid value for parameter \"timezone_abbreviations\": \"Default\"\n>\n> This is a different problem; the --with-system-tzdata option wouldn't\n> affect that.\n>\n> I think what may be happening here is that a postgres executable expects\n> to find itself in a full installation tree, ie if it's in /someplace/bin\n> then the timezone files are in /someplace/share, etc. Did you do a full\n> \"make install\" after building, or did you just copy the postgres\n> executable?\n\nI just copied it as discussed in the original mail to avoid that make \ninstall kills the 8.4 production RPM version:\ncp ./src/backend/postgres /bin/postgres-8.3.11\n\ncd /bin\nln -s /usr/share/pgsql/timezonesets share\ncd tarballdir\n./configure\nmake\ncp ./src/backend/postgres /bin/postgres-8.3.11\n\n2010-09-03 18:24:53.936 GMT [11753] @/: LOG: could not open directory \"/share/timezone\": No such file or directory\n2010-09-03 18:24:53.936 GMT [11753] @/: LOG: could not open directory \"/share/timezone\": No such file or directory\n2010-09-03 18:24:53.936 GMT [11753] @/: LOG: could not open directory \"/share/timezone\": No such file or directory\n2010-09-03 18:24:53.936 GMT [11753] @/: LOG: could not open directory \"/share/timezone\": No such file or directory\n2010-09-03 18:24:53.936 GMT [11753] @/: LOG: could not open directory \"/share/timezone\": No such file or directory\n2010-09-03 20:24:53.936 CEST [11753] @/: FATAL: invalid value for parameter \"timezone_abbreviations\": \"Default\"\n\nI previously made the strace and therefore I added the option to configure \nto get the right directory.\n\nAny further idea where I should copy the binary or any option or any file \ncopy for the time zone files?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Fri, 3 Sep 2010 20:27:04 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> On Fri, 3 Sep 2010, Tom Lane wrote:\n>> I think what may be happening here is that a postgres executable expects\n>> to find itself in a full installation tree, ie if it's in /someplace/bin\n>> then the timezone files are in /someplace/share, etc. Did you do a full\n>> \"make install\" after building, or did you just copy the postgres\n>> executable?\n\n> I just copied it as discussed in the original mail to avoid that make \n> install kills the 8.4 production RPM version:\n> cp ./src/backend/postgres /bin/postgres-8.3.11\n\nDefinitely not going to work. Instead, configure with --prefix set\nto /someplace/harmless, make, make install, execute from\n/someplace/harmless/bin/.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 2010 14:44:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
{
"msg_contents": "On Fri, 3 Sep 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> On Fri, 3 Sep 2010, Tom Lane wrote:\n>>> I think what may be happening here is that a postgres executable expects\n>>> to find itself in a full installation tree, ie if it's in /someplace/bin\n>>> then the timezone files are in /someplace/share, etc. Did you do a full\n>>> \"make install\" after building, or did you just copy the postgres\n>>> executable?\n>\n>> I just copied it as discussed in the original mail to avoid that make\n>> install kills the 8.4 production RPM version:\n>> cp ./src/backend/postgres /bin/postgres-8.3.11\n>\n> Definitely not going to work. Instead, configure with --prefix set\n> to /someplace/harmless, make, make install, execute from\n> /someplace/harmless/bin/.\n\nThanks tom, your support for PostgreSQL is really very, very good.\nInstall:\n./configure --prefix /opt/postgres-8.3\nmake\nmake install\nsu -l postgres -c \"/opt/postgres-8.3/bin/postgres -p 54321 -D /var/lib/pgsql-old/data &\" >> /var/lib/pgsql-old/pgstartup.log 2>&1 < /dev/null\n\nBack to the original problem:\n8.3 query plans: http://www.wiesinger.com/tmp/pg_perf_83_new.txt\n8.4 quey plans: http://www.wiesinger.com/tmp/pg_perf_84.txt\n\nMain difference as I saw:\n8.3: -> Nested Loop Left Join (cost=0.00..1195433.19 rows=67 width=16)\n8.4: -> Merge Left Join (cost=1750660.22..4273805884876845789194861338991916289697885665127154313046252183850255795798561612107149662486528.00 rows=238233578115856634454073334945297075430094545596765511255148896328828230572227215727052643001958400 width=16)\n\nAny ideas why? How to fix?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Fri, 3 Sep 2010 21:34:12 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> Back to the original problem:\n\nFinally ;-)\n\n> 8.3 query plans: http://www.wiesinger.com/tmp/pg_perf_83_new.txt\n> 8.4 quey plans: http://www.wiesinger.com/tmp/pg_perf_84.txt\n\nHmm. The 8.3 plan is indeed assuming that the number of rows will stay\nconstant as we bubble up through the join levels, but AFAICS this is\nsimply wrong:\n\n -> Nested Loop Left Join (cost=0.00..38028.89 rows=67 width=8)\n -> Nested Loop Left Join (cost=0.00..25399.46 rows=67 width=8)\n -> Nested Loop Left Join (cost=0.00..12770.04 rows=67 width=8)\n -> Index Scan using i_log_unique on log l (cost=0.00..140.61 rows=67 width=8)\n Index Cond: (datetime >= (now() - '00:01:00'::interval))\n -> Index Scan using unique_key_and_id on log_details d7 (cost=0.00..187.39 rows=89 width=8)\n Index Cond: ((l.id = d7.fk_id) AND (d7.fk_keyid = $6))\n -> Index Scan using unique_key_and_id on log_details d6 (cost=0.00..187.39 rows=89 width=8)\n Index Cond: ((l.id = d6.fk_id) AND (d6.fk_keyid = $5))\n -> Index Scan using unique_key_and_id on log_details d5 (cost=0.00..187.39 rows=89 width=8)\n Index Cond: ((l.id = d5.fk_id) AND (d5.fk_keyid = $4))\n\nIf the log_details indexscans are expected to produce 89 rows per\nexecution, then surely the join size should go up 89x at each level,\nbecause the join steps themselves don't eliminate anything.\n\nIn 8.4 the arithmetic is at least self-consistent:\n\n -> Nested Loop Left Join (cost=0.00..505256.95 rows=57630 width=8)\n -> Nested Loop Left Join (cost=0.00..294671.96 rows=6059 width=8)\n -> Nested Loop Left Join (cost=0.00..272532.55 rows=637 width=8)\n -> Index Scan using log_pkey on log l (cost=0.00..270203.92 rows=67 width=8)\n Filter: (datetime >= (now() - '00:01:00'::interval))\n -> Index Scan using unique_key_and_id on log_details d7 (cost=0.00..34.63 rows=10 width=8)\n Index Cond: ((l.id = d7.fk_id) AND (d7.fk_keyid = $6))\n -> Index Scan using unique_key_and_id on log_details d6 (cost=0.00..34.63 rows=10 width=8)\n Index Cond: ((l.id = d6.fk_id) AND (d6.fk_keyid = $5))\n -> Index Scan using unique_key_and_id on log_details d5 (cost=0.00..34.63 rows=10 width=8)\n Index Cond: ((l.id = d5.fk_id) AND (d5.fk_keyid = $4))\n\nThe rowcount estimates are apparently a shade less than 10, but they get\nrounded off in the display.\n\nI believe the reason for this change is that 8.4's join estimation code\nwas rewritten so that it wasn't completely bogus for outer joins. 8.3\nmight have been getting the right answer, but it was for the wrong\nreasons.\n\nSo the real question to be answered here is why doesn't it think that\neach of the unique_key_and_id indexscans produce just a single row, as\nyou indicated was the case. The 8.4 estimate is already a factor of\nalmost 10 closer to reality than 8.3's, but you need another factor of\n10. You might find that increasing the statistics target for the\nlog_details table helps.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 2010 17:10:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4 "
},
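A rough back-of-the-envelope on the ballooning estimate Tom describes, purely illustrative (the actual number of left-joined log_details instances in the view is not visible in this excerpt): each join level in the 8.4 plan multiplies the row estimate by roughly 637/67 ≈ 9.5, so compounding that over something on the order of a hundred such joins above the 67 base rows lands in the same ballpark as the ~2.4e98 row estimate of the original plan:

-- 67 base rows, ~9.5 estimated matches per left join, ~99 join levels
SELECT 67 * power(9.51::float8, 99) AS ballpark_row_estimate;  -- same order of magnitude as 2.4e98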
{
"msg_contents": "On Fri, 3 Sep 2010, Tom Lane wrote:\n\n> Gerhard Wiesinger <[email protected]> writes:\n>> 8.3 query plans: http://www.wiesinger.com/tmp/pg_perf_83_new.txt\n>> 8.4 quey plans: http://www.wiesinger.com/tmp/pg_perf_84.txt\n>\n> Hmm. The 8.3 plan is indeed assuming that the number of rows will stay\n> constant as we bubble up through the join levels, but AFAICS this is\n> simply wrong:\n>\n> -> Nested Loop Left Join (cost=0.00..38028.89 rows=67 width=8)\n> -> Nested Loop Left Join (cost=0.00..25399.46 rows=67 width=8)\n> -> Nested Loop Left Join (cost=0.00..12770.04 rows=67 width=8)\n> -> Index Scan using i_log_unique on log l (cost=0.00..140.61 rows=67 width=8)\n> Index Cond: (datetime >= (now() - '00:01:00'::interval))\n> -> Index Scan using unique_key_and_id on log_details d7 (cost=0.00..187.39 rows=89 width=8)\n> Index Cond: ((l.id = d7.fk_id) AND (d7.fk_keyid = $6))\n> -> Index Scan using unique_key_and_id on log_details d6 (cost=0.00..187.39 rows=89 width=8)\n> Index Cond: ((l.id = d6.fk_id) AND (d6.fk_keyid = $5))\n> -> Index Scan using unique_key_and_id on log_details d5 (cost=0.00..187.39 rows=89 width=8)\n> Index Cond: ((l.id = d5.fk_id) AND (d5.fk_keyid = $4))\n>\n> If the log_details indexscans are expected to produce 89 rows per\n> execution, then surely the join size should go up 89x at each level,\n> because the join steps themselves don't eliminate anything.\n>\n> In 8.4 the arithmetic is at least self-consistent:\n>\n> -> Nested Loop Left Join (cost=0.00..505256.95 rows=57630 width=8)\n> -> Nested Loop Left Join (cost=0.00..294671.96 rows=6059 width=8)\n> -> Nested Loop Left Join (cost=0.00..272532.55 rows=637 width=8)\n> -> Index Scan using log_pkey on log l (cost=0.00..270203.92 rows=67 width=8)\n> Filter: (datetime >= (now() - '00:01:00'::interval))\n> -> Index Scan using unique_key_and_id on log_details d7 (cost=0.00..34.63 rows=10 width=8)\n> Index Cond: ((l.id = d7.fk_id) AND (d7.fk_keyid = $6))\n> -> Index Scan using unique_key_and_id on log_details d6 (cost=0.00..34.63 rows=10 width=8)\n> Index Cond: ((l.id = d6.fk_id) AND (d6.fk_keyid = $5))\n> -> Index Scan using unique_key_and_id on log_details d5 (cost=0.00..34.63 rows=10 width=8)\n> Index Cond: ((l.id = d5.fk_id) AND (d5.fk_keyid = $4))\n>\n> The rowcount estimates are apparently a shade less than 10, but they get\n> rounded off in the display.\n>\n> I believe the reason for this change is that 8.4's join estimation code\n> was rewritten so that it wasn't completely bogus for outer joins. 8.3\n> might have been getting the right answer, but it was for the wrong\n> reasons.\n>\n> So the real question to be answered here is why doesn't it think that\n> each of the unique_key_and_id indexscans produce just a single row, as\n> you indicated was the case. The 8.4 estimate is already a factor of\n> almost 10 closer to reality than 8.3's, but you need another factor of\n> 10. You might find that increasing the statistics target for the\n> log_details table helps.\n\n\nOk, Tom, tried different things (more details are below):\n1.) Setting statistic target to 1000 and 10000 (without success), still \nmerge join\n2.) Tried to added a Index on description to help the planner for \nuniqueness (without success)\n3.) Forced the planner to use nested loop joins (SUCCESS):\nSET enable_hashjoin=false;SET enable_mergejoin=false;\n(BTW: How do use such settings in Java and PHP and Perl, is there a \ncommand available?)\n\nOpen questions:\nWhy does the planner not choose nested loop joins, that should be the \noptimal one for that situation?\nDoes the planner value: a.) UNIQUENESS b.) UNIQUENESS and NOT NULLs?\nAny ideas for improvement of the planner?\n\nDetails:\n-- CREATE UNIQUE INDEX unique_key_and_id ON log_details USING btree (fk_id, fk_keyid)\n-- 1000 and 10000 didn't help for better query plan for Nested Loop Left Join, still Merge Left Join\n-- Sample with:\n-- ALTER TABLE log_details ALTER COLUMN fk_id SET STATISTICS 10000;\n-- ALTER TABLE log_details ALTER COLUMN fk_keyid SET STATISTICS 10000;\n-- ANALYZE VERBOSE log_details;\n-- Still Merge Join:\n-- -> Merge Left Join (cost=9102353.88..83786934.25 rows=2726186787 width=16)\n-- Merge Cond: (l.id = d2000902.fk_id)\n-- -> Merge Left Join (cost=8926835.18..40288402.09 rows=972687282 width=24)\n-- Merge Cond: (l.id = d2000904.fk_id)\n-- Default values again\nALTER TABLE log_details ALTER COLUMN fk_id SET STATISTICS 100;\nALTER TABLE log_details ALTER COLUMN fk_keyid SET STATISTICS 100;\nANALYZE VERBOSE log_details;\n\n-- Tried to add WITHOUT SUCCESS (that planner could know that description is NOT NULL and UNIQE)\nDROP INDEX IF EXISTS i_key_description_desc;\nCREATE UNIQUE INDEX i_key_description_desc ON key_description (description);\n-- Therefore planner should know: keyid is NOT NULL and UNIQUE and only one result: (SELECT keyid FROM key_description WHERE description = 'Raumsolltemperatur')\n-- Therefore from constraint planner should know that fk_id is NOT NULL and UNIQUE: CONSTRAINT unique_key_and_id UNIQUE(fk_id, fk_keyid):\n-- LEFT JOIN log_details d1 ON l.id = d1.fk_id AND\n-- d1.fk_keyid = (SELECT keyid FROM key_description WHERE description = 'Raumsolltemperatur')\n-- Does the planner value alls those UNIQUEnesses and NOT NULLs?\n\n-- Again back to 8.3 query plan which is fast (319ms):\nSET enable_hashjoin=false;\nSET enable_mergejoin=false;\n-- -> Nested Loop Left Join (cost=0.00..22820970510.45 rows=2727492136 width=16)\n-- -> Nested Loop Left Join (cost=0.00..12810087616.29 rows=973121653 width=24)\n-- -> Nested Loop Left Join (cost=0.00..9238379092.22 rows=347192844 width=24)\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Sat, 4 Sep 2010 08:58:30 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
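On the side question above about using these settings from Java, PHP or Perl: they are plain SQL statements, so any client driver can issue them on its session before running the query; SET LOCAL limits the change to the current transaction (a sketch):

BEGIN;
SET LOCAL enable_mergejoin = off;
SET LOCAL enable_hashjoin = off;
-- run the affected query here, then:
COMMIT;
-- equivalent without transaction scoping, e.g. right after connecting:
-- SET enable_mergejoin = off;  or  SELECT set_config('enable_mergejoin', 'off', false);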
{
"msg_contents": "Hello,\n\nAny news or ideas regarding this issue?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n\n\nOn Sat, 4 Sep 2010, Gerhard Wiesinger wrote:\n\n> On Fri, 3 Sep 2010, Tom Lane wrote:\n>\n>> Gerhard Wiesinger <[email protected]> writes:\n>>> 8.3 query plans: http://www.wiesinger.com/tmp/pg_perf_83_new.txt\n>>> 8.4 quey plans: http://www.wiesinger.com/tmp/pg_perf_84.txt\n>> \n>> Hmm. The 8.3 plan is indeed assuming that the number of rows will stay\n>> constant as we bubble up through the join levels, but AFAICS this is\n>> simply wrong:\n>>\n>> -> Nested Loop Left Join (cost=0.00..38028.89 rows=67 width=8)\n>> -> Nested Loop Left Join (cost=0.00..25399.46 rows=67 width=8)\n>> -> Nested Loop Left Join (cost=0.00..12770.04 rows=67 \n>> width=8)\n>> -> Index Scan using i_log_unique on log l \n>> (cost=0.00..140.61 rows=67 width=8)\n>> Index Cond: (datetime >= (now() - '00:01:00'::interval))\n>> -> Index Scan using unique_key_and_id on log_details d7 \n>> (cost=0.00..187.39 rows=89 width=8)\n>> Index Cond: ((l.id = d7.fk_id) AND (d7.fk_keyid = $6))\n>> -> Index Scan using unique_key_and_id on log_details d6 \n>> (cost=0.00..187.39 rows=89 width=8)\n>> Index Cond: ((l.id = d6.fk_id) AND (d6.fk_keyid = $5))\n>> -> Index Scan using unique_key_and_id on log_details d5 \n>> (cost=0.00..187.39 rows=89 width=8)\n>> Index Cond: ((l.id = d5.fk_id) AND (d5.fk_keyid = $4))\n>> \n>> If the log_details indexscans are expected to produce 89 rows per\n>> execution, then surely the join size should go up 89x at each level,\n>> because the join steps themselves don't eliminate anything.\n>> \n>> In 8.4 the arithmetic is at least self-consistent:\n>>\n>> -> Nested Loop Left Join (cost=0.00..505256.95 rows=57630 \n>> width=8)\n>> -> Nested Loop Left Join (cost=0.00..294671.96 rows=6059 \n>> width=8)\n>> -> Nested Loop Left Join (cost=0.00..272532.55 rows=637 \n>> width=8)\n>> -> Index Scan using log_pkey on log l \n>> (cost=0.00..270203.92 rows=67 width=8)\n>> Filter: (datetime >= (now() - '00:01:00'::interval))\n>> -> Index Scan using unique_key_and_id on log_details d7 \n>> (cost=0.00..34.63 rows=10 width=8)\n>> Index Cond: ((l.id = d7.fk_id) AND (d7.fk_keyid = $6))\n>> -> Index Scan using unique_key_and_id on log_details d6 \n>> (cost=0.00..34.63 rows=10 width=8)\n>> Index Cond: ((l.id = d6.fk_id) AND (d6.fk_keyid = $5))\n>> -> Index Scan using unique_key_and_id on log_details d5 \n>> (cost=0.00..34.63 rows=10 width=8)\n>> Index Cond: ((l.id = d5.fk_id) AND (d5.fk_keyid = $4))\n>> \n>> The rowcount estimates are apparently a shade less than 10, but they get\n>> rounded off in the display.\n>> \n>> I believe the reason for this change is that 8.4's join estimation code\n>> was rewritten so that it wasn't completely bogus for outer joins. 8.3\n>> might have been getting the right answer, but it was for the wrong\n>> reasons.\n>> \n>> So the real question to be answered here is why doesn't it think that\n>> each of the unique_key_and_id indexscans produce just a single row, as\n>> you indicated was the case. The 8.4 estimate is already a factor of\n>> almost 10 closer to reality than 8.3's, but you need another factor of\n>> 10. You might find that increasing the statistics target for the\n>> log_details table helps.\n>\n>\n> Ok, Tom, tried different things (more details are below):\n> 1.) Setting statistic target to 1000 and 10000 (without success), still merge \n> join\n> 2.) Tried to added a Index on description to help the planner for uniqueness \n> (without success)\n> 3.) Forced the planner to use nested loop joins (SUCCESS):\n> SET enable_hashjoin=false;SET enable_mergejoin=false;\n> (BTW: How do use such settings in Java and PHP and Perl, is there a command \n> available?)\n>\n> Open questions:\n> Why does the planner not choose nested loop joins, that should be the optimal \n> one for that situation?\n> Does the planner value: a.) UNIQUENESS b.) UNIQUENESS and NOT NULLs?\n> Any ideas for improvement of the planner?\n>\n> Details:\n> -- CREATE UNIQUE INDEX unique_key_and_id ON log_details USING btree (fk_id, \n> fk_keyid)\n> -- 1000 and 10000 didn't help for better query plan for Nested Loop Left \n> Join, still Merge Left Join\n> -- Sample with:\n> -- ALTER TABLE log_details ALTER COLUMN fk_id SET STATISTICS 10000;\n> -- ALTER TABLE log_details ALTER COLUMN fk_keyid SET STATISTICS 10000;\n> -- ANALYZE VERBOSE log_details;\n> -- Still Merge Join:\n> -- -> Merge Left Join (cost=9102353.88..83786934.25 rows=2726186787 \n> width=16)\n> -- Merge Cond: (l.id = d2000902.fk_id)\n> -- -> Merge Left Join (cost=8926835.18..40288402.09 rows=972687282 \n> width=24)\n> -- Merge Cond: (l.id = d2000904.fk_id)\n> -- Default values again\n> ALTER TABLE log_details ALTER COLUMN fk_id SET STATISTICS 100;\n> ALTER TABLE log_details ALTER COLUMN fk_keyid SET STATISTICS 100;\n> ANALYZE VERBOSE log_details;\n>\n> -- Tried to add WITHOUT SUCCESS (that planner could know that description is \n> NOT NULL and UNIQE)\n> DROP INDEX IF EXISTS i_key_description_desc;\n> CREATE UNIQUE INDEX i_key_description_desc ON key_description (description);\n> -- Therefore planner should know: keyid is NOT NULL and UNIQUE and only one \n> result: (SELECT keyid FROM key_description WHERE description = \n> 'Raumsolltemperatur')\n> -- Therefore from constraint planner should know that fk_id is NOT NULL and \n> UNIQUE: CONSTRAINT unique_key_and_id UNIQUE(fk_id, fk_keyid):\n> -- LEFT JOIN log_details d1 ON l.id = d1.fk_id AND\n> -- d1.fk_keyid = (SELECT keyid FROM key_description WHERE description = \n> 'Raumsolltemperatur')\n> -- Does the planner value alls those UNIQUEnesses and NOT NULLs?\n>\n> -- Again back to 8.3 query plan which is fast (319ms):\n> SET enable_hashjoin=false;\n> SET enable_mergejoin=false;\n> -- -> Nested Loop Left Join (cost=0.00..22820970510.45 rows=2727492136 \n> width=16)\n> -- -> Nested Loop Left Join (cost=0.00..12810087616.29 \n> rows=973121653 width=24)\n> -- -> Nested Loop Left Join (cost=0.00..9238379092.22 \n> rows=347192844 width=24)\n>\n> Thnx.\n>\n> Ciao,\n> Gerhard\n>\n> --\n> http://www.wiesinger.com/\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 13 Sep 2010 08:39:44 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
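The session-level planner toggles mentioned above can be issued as ordinary SQL over any client connection (JDBC, PDO, DBI and the like), which is one answer to the Java/PHP/Perl question; a minimal sketch, assuming the log_entries view from this thread, would be:

BEGIN;
SET LOCAL enable_hashjoin = off;   -- reverts automatically at COMMIT or ROLLBACK
SET LOCAL enable_mergejoin = off;
SELECT * FROM log_entries WHERE datetime > now() - INTERVAL '1 hour';
COMMIT;
-- A plain SET ... = off would instead persist for the rest of the session;
-- RESET enable_hashjoin; RESET enable_mergejoin; undoes it.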
{
"msg_contents": "On Mon, Sep 13, 2010 at 2:39 AM, Gerhard Wiesinger <[email protected]> wrote:\n> Hello,\n>\n> Any news or ideas regarding this issue?\n\nhm. is retooling the query an option? specifically, can you try converting\n\nCREATE OR REPLACE VIEW log_entries AS\nSELECT\n l.id AS id,\n l.datetime AS datetime,\n l.tdate AS tdate,\n l.ttime AS ttime,\n d1.value AS Raumsolltemperatur,\n [...]\nFROM\n log l\nLEFT JOIN log_details d1 ON l.id = d1.fk_id AND\n d1.fk_keyid = (SELECT keyid FROM key_description WHERE description =\n'Raumsolltemperatur')\n [...]\n\nto\n\nCREATE OR REPLACE VIEW log_entries AS\nSELECT\n l.id AS id,\n l.datetime AS datetime,\n l.tdate AS tdate,\n l.ttime AS ttime,\n (select value from log_details ld join key_description kd on\nld.fk_keyid = kd.keyid where ld.fk_id = l.id and description =\n'Raumsolltemperatur') AS Raumsolltemperatur,\n [...]\n\n(I am not 100% sure I have your head around your query, but I think I do)?\nThis should get you a guaranteed (although not necessarily 'the best'\nplan, with each returned view column being treated independently of\nthe other (is that what you want?). Also, if schema changes are under\nconsideration, you can play log_details/key_description, using natural\nkey and cut out one of the joins. I can't speak to some of the more\ncomplex planner issues at play, but your query absolutely screams\noptimization at the SQL level.\n\nWhat I am 100% sure of, is that you can get better performance if you\ndo a little out of the box thinking here...\n\nmerlin\n",
"msg_date": "Mon, 13 Sep 2010 14:32:09 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4"
},
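The schema itself is only linked, never shown, in this thread; the following is a rough reconstruction from the column, constraint and index names quoted in the messages (the data types are guesses and may differ from the real definitions):

-- Hypothetical sketch of the EAV schema assumed by the queries above.
CREATE TABLE key_description (
    keyid       bigint PRIMARY KEY,
    description varchar NOT NULL UNIQUE
);

CREATE TABLE log (
    id       bigint PRIMARY KEY,
    datetime timestamp NOT NULL,
    tdate    date,
    ttime    time
);

CREATE TABLE log_details (
    fk_id    bigint NOT NULL REFERENCES log(id),
    fk_keyid bigint NOT NULL REFERENCES key_description(keyid),
    value    double precision,
    CONSTRAINT unique_key_and_id UNIQUE (fk_id, fk_keyid)
);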
{
"msg_contents": "Hello Merlin,\n\nSeems to be a feasible approach. On problem which might be that when \nmultiple rows are returned that they are not ordered in each subselect \ncorrectly. Any idea to solve that?\n\ne.g.\nRaumsolltemperatur | Raumisttemperatur\nValue from time 1 | Value from time 2\nValue from time 2 | Value from time 1\n\nbut should be\nRaumsolltemperatur | Raumisttemperatur\nValue from time 1 | Value from time 1\nValue from time 2 | Value from time 2\n\nBut that might be solveable by first selecting keys from the log_details \ntable and then join again.\n\nI will try it in the evening and I have to think about in detail.\n\nBut thank you for the new approach and opening the mind :-)\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n\n\nOn Mon, 13 Sep 2010, Merlin Moncure wrote:\n\n> On Mon, Sep 13, 2010 at 2:39 AM, Gerhard Wiesinger <[email protected]> wrote:\n>> Hello,\n>>\n>> Any news or ideas regarding this issue?\n>\n> hm. is retooling the query an option? specifically, can you try converting\n>\n> CREATE OR REPLACE VIEW log_entries AS\n> SELECT\n> l.id AS id,\n> l.datetime AS datetime,\n> l.tdate AS tdate,\n> l.ttime AS ttime,\n> d1.value AS Raumsolltemperatur,\n> [...]\n> FROM\n> log l\n> LEFT JOIN log_details d1 ON l.id = d1.fk_id AND\n> d1.fk_keyid = (SELECT keyid FROM key_description WHERE description =\n> 'Raumsolltemperatur')\n> [...]\n>\n> to\n>\n> CREATE OR REPLACE VIEW log_entries AS\n> SELECT\n> l.id AS id,\n> l.datetime AS datetime,\n> l.tdate AS tdate,\n> l.ttime AS ttime,\n> (select value from log_details ld join key_description kd on\n> ld.fk_keyid = kd.keyid where ld.fk_id = l.id and description =\n> 'Raumsolltemperatur') AS Raumsolltemperatur,\n> [...]\n>\n> (I am not 100% sure I have your head around your query, but I think I do)?\n> This should get you a guaranteed (although not necessarily 'the best'\n> plan, with each returned view column being treated independently of\n> the other (is that what you want?). Also, if schema changes are under\n> consideration, you can play log_details/key_description, using natural\n> key and cut out one of the joins. I can't speak to some of the more\n> complex planner issues at play, but your query absolutely screams\n> optimization at the SQL level.\n>\n> What I am 100% sure of, is that you can get better performance if you\n> do a little out of the box thinking here...\n>\n> merlin\n>\n",
"msg_date": "Tue, 14 Sep 2010 08:07:18 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "On Tue, Sep 14, 2010 at 2:07 AM, Gerhard Wiesinger <[email protected]> wrote:\n> Hello Merlin,\n>\n> Seems to be a feasible approach. On problem which might be that when\n> multiple rows are returned that they are not ordered in each subselect\n> correctly. Any idea to solve that?\n>\n> e.g.\n> Raumsolltemperatur | Raumisttemperatur\n> Value from time 1 | Value from time 2\n> Value from time 2 | Value from time 1\n>\n> but should be\n> Raumsolltemperatur | Raumisttemperatur\n> Value from time 1 | Value from time 1\n> Value from time 2 | Value from time 2\n>\n> But that might be solveable by first selecting keys from the log_details\n> table and then join again.\n>\n> I will try it in the evening and I have to think about in detail.\n>\n> But thank you for the new approach and opening the mind :-)\n\nUsing subquery in that style select (<subquery>), ... is limited to\nresults that return 1 row, 1 column. I assumed that was the case...if\nit isn't in your view, you can always attempt arrays:\n\nCREATE OR REPLACE VIEW log_entries AS\nSELECT\n l.id AS id,\n l.datetime AS datetime,\n l.tdate AS tdate,\n l.ttime AS ttime,\n array(select value from log_details ld join key_description kd on\nld.fk_keyid = kd.keyid where ld.fk_id = l.id and description =\n'Raumsolltemperatur' order by XYZ) AS Raumsolltemperatur,\n [...]\n\narrays might raise the bar somewhat in terms of dealing with the\nreturned data, or they might work great. some experimentation is in\norder.\n\nXYZ being the ordering condition you want. If that isn't available\ninside the join then we need to think about this some more. We could\nprobably help more if you could describe the schema in a little more\ndetail. This is solvable.\n\nmerlin\n",
"msg_date": "Tue, 14 Sep 2010 08:01:05 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4"
},
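If the array form is used, the caller has to unpack the result with subscripts or unnest(); a short usage sketch with the view and column names from this thread, assuming one array element per matching log_details row:

-- First element only, e.g. when at most one value per log row is expected.
SELECT id, Raumsolltemperatur[1] AS raumsolltemperatur FROM log_entries;

-- Or expand the array back into one row per value.
SELECT id, unnest(Raumsolltemperatur) AS raumsolltemperatur FROM log_entries;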
{
"msg_contents": "On Tue, 14 Sep 2010, Merlin Moncure wrote:\n\n> On Tue, Sep 14, 2010 at 2:07 AM, Gerhard Wiesinger <[email protected]> wrote:\n>> Hello Merlin,\n>>\n>> Seems to be a feasible approach. On problem which might be that when\n>> multiple rows are returned that they are not ordered in each subselect\n>> correctly. Any idea to solve that?\n>>\n>> e.g.\n>> Raumsolltemperatur | Raumisttemperatur\n>> Value from time 1 �| Value from time 2\n>> Value from time 2 �| Value from time 1\n>>\n>> but should be\n>> Raumsolltemperatur | Raumisttemperatur\n>> Value from time 1 �| Value from time 1\n>> Value from time 2 �| Value from time 2\n>>\n>> But that might be solveable by first selecting keys from the log_details\n>> table and then join again.\n>>\n>> I will try it in the evening and I have to think about in detail.\n>>\n>> But thank you for the new approach and opening the mind :-)\n>\n> Using subquery in that style select (<subquery>), ... is limited to\n> results that return 1 row, 1 column. I assumed that was the case...if\n> it isn't in your view, you can always attempt arrays:\n>\n> CREATE OR REPLACE VIEW log_entries AS\n> SELECT\n> l.id AS id,\n> l.datetime AS datetime,\n> l.tdate AS tdate,\n> l.ttime AS ttime,\n> array(select value from log_details ld join key_description kd on\n> ld.fk_keyid = kd.keyid where ld.fk_id = l.id and description =\n> 'Raumsolltemperatur' order by XYZ) AS Raumsolltemperatur,\n> [...]\n>\n> arrays might raise the bar somewhat in terms of dealing with the\n> returned data, or they might work great. some experimentation is in\n> order.\n>\n> XYZ being the ordering condition you want. If that isn't available\n> inside the join then we need to think about this some more. We could\n> probably help more if you could describe the schema in a little more\n> detail. This is solvable.\n\nOf course, subquery is limited to a result set returning 1 row and 1 \ncolumn. Also order is of course preserved because of the join.\n\nFurther, I think I found a perfect query plan for the EAV pattern.\n\nFirst I tried your suggestion but there were some limitation with O(n^2) \nefforts (e.g. 
nested loops=12586 and also index scans with loop 12586):\n\nCREATE OR REPLACE VIEW log_entries_test AS\nSELECT\n l.id AS id,\n l.datetime AS datetime,\n l.tdate AS tdate,\n l.ttime AS ttime,\n (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid = kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Raumsolltemperatur') AS Raumsolltemperatur,\n (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid = kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Raumtemperatur') AS Raumtemperatur,\n (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid = kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Kesselsolltemperatur') AS Kesselsolltemperatur,\n (SELECT value FROM log_details d JOIN key_description kd ON d.fk_keyid = kd.keyid WHERE l.id = d.fk_id AND kd.description = 'Kesseltemperatur') AS Kesseltemperatur,\n....\nFROM\n log l\n;\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nEXPLAIN ANALYZE SELECT * FROM log_entries_test WHERE datetime > now() - INTERVAL '10 days' ORDER BY datetime DESC;\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan Backward using i_log_unique on log l (cost=0.00..140820.12 rows=69 width=32) (actual time=2.848..22812.331 rows=12586 loops=1)\n Index Cond: (datetime > (now() - '10 days'::interval))\n SubPlan 1\n -> Nested Loop (cost=0.00..19.99 rows=1 width=8) (actual time=0.007..0.018 rows=1 loops=12586)\n -> Seq Scan on key_description kd (cost=0.00..2.38 rows=1 width=8) (actual time=0.003..0.013 rows=1 loops=12586)\n Filter: ((description)::text = 'Raumsolltemperatur'::text)\n -> Index Scan using unique_key_and_id on log_details d (cost=0.00..17.60 rows=1 width=16) (actual time=0.004..0.004 rows=1 loops=12586)\n Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))\n SubPlan 2\n -> Nested Loop (cost=0.00..19.99 rows=1 width=8) (actual time=0.006..0.017 rows=1 loops=12586)\n -> Seq Scan on key_description kd (cost=0.00..2.38 rows=1 width=8) (actual time=0.003..0.013 rows=1 loops=12586)\n Filter: ((description)::text = 'Raumtemperatur'::text)\n -> Index Scan using unique_key_and_id on log_details d (cost=0.00..17.60 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=12586)\n Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))\n SubPlan 3\n -> Nested Loop (cost=0.00..19.99 rows=1 width=8) (actual time=0.005..0.017 rows=1 loops=12586)\n -> Seq Scan on key_description kd (cost=0.00..2.38 rows=1 width=8) (actual time=0.002..0.013 rows=1 loops=12586)\n Filter: ((description)::text = 'Kesselsolltemperatur'::text)\n -> Index Scan using unique_key_and_id on log_details d (cost=0.00..17.60 rows=1 width=16) (actual time=0.003..0.003 rows=1 loops=12586)\n Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))\n SubPlan 4\n -> Nested Loop (cost=0.00..19.99 rows=1 width=8) (actual time=0.006..0.017 rows=1 loops=12586)\n -> Seq Scan on key_description kd (cost=0.00..2.38 rows=1 width=8) (actual time=0.002..0.013 rows=1 loops=12586)\n Filter: ((description)::text = 'Kesseltemperatur'::text)\n -> Index Scan using unique_key_and_id on log_details d (cost=0.00..17.60 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=12586)\n Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))\n 
SubPlan 5\n -> Nested Loop (cost=0.00..19.99 rows=1 width=8) (actual time=0.005..0.017 rows=1 loops=12586)\n -> Seq Scan on key_description kd (cost=0.00..2.38 rows=1 width=8) (actual time=0.002..0.014 rows=1 loops=12586)\n Filter: ((description)::text = 'Speichersolltemperatur'::text)\n -> Index Scan using unique_key_and_id on log_details d (cost=0.00..17.60 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=12586)\n Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))\n SubPlan 6\n -> Nested Loop (cost=0.00..19.99 rows=1 width=8) (actual time=0.006..0.017 rows=1 loops=12586)\n -> Seq Scan on key_description kd (cost=0.00..2.38 rows=1 width=8) (actual time=0.003..0.013 rows=1 loops=12586)\n Filter: ((description)::text = 'Speichertemperatur'::text)\n -> Index Scan using unique_key_and_id on log_details d (cost=0.00..17.60 rows=1 width=16) (actual time=0.002..0.003 rows=1 loops=12586)\n Index Cond: (($0 = d.fk_id) AND (d.fk_keyid = kd.keyid))\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nTherefore I optimized the query further which can be done in the \nfollowing way with another subquery and IHMO a perfect query plan. Also \nthe subselect avoid multiple iterations for each of the result rows:\n\nCREATE OR REPLACE VIEW log_entries_test AS\nSELECT\n l.id AS id,\n l.datetime AS datetime,\n l.tdate AS tdate,\n l.ttime AS ttime,\n (SELECT value FROM log_details d WHERE l.id = d.fk_id AND d.fk_keyid = (SELECT keyid FROM key_description WHERE description = 'Raumsolltemperatur')) AS Raumsolltemperatur,\n (SELECT value FROM log_details d WHERE l.id = d.fk_id AND d.fk_keyid = (SELECT keyid FROM key_description WHERE description = 'Raumtemperatur')) AS Raumtemperatur,\n (SELECT value FROM log_details d WHERE l.id = d.fk_id AND d.fk_keyid = (SELECT keyid FROM key_description WHERE description = 'Kesselsolltemperatur')) AS Kesselsolltemperatur,\n (SELECT value FROM log_details d WHERE l.id = d.fk_id AND d.fk_keyid = (SELECT keyid FROM key_description WHERE description = 'Kesseltemperatur')) AS Kesseltemperatur,\n...\nFROM\n log l\n;\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nEXPLAIN ANALYZE SELECT * FROM log_entries_test WHERE datetime > now() - INTERVAL '10 days' ORDER BY datetime DESC;\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan Backward using i_log_unique on log l (cost=0.00..140603.99 rows=69 width=32) (actual time=2.588..5602.899 rows=12586 loops=1)\n Index Cond: (datetime > (now() - '10 days'::interval))\n SubPlan 2\n -> Index Scan using unique_key_and_id on log_details d (cost=2.38..19.97 rows=1 width=8) (actual time=0.010..0.011 rows=1 loops=12586)\n Index Cond: (($1 = fk_id) AND (fk_keyid = $0))\n InitPlan 1 (returns $0)\n -> Seq Scan on key_description (cost=0.00..2.38 rows=1 width=8) (actual time=0.015..0.066 rows=1 loops=1)\n Filter: ((description)::text = 'Raumsolltemperatur'::text)\n SubPlan 4\n -> Index Scan using unique_key_and_id on log_details d (cost=2.38..19.97 rows=1 width=8) (actual time=0.003..0.003 rows=1 loops=12586)\n Index Cond: (($1 = fk_id) AND (fk_keyid 
= $2))\n          InitPlan 3 (returns $2)\n            ->  Seq Scan on key_description  (cost=0.00..2.38 rows=1 width=8) (actual time=0.009..0.020 rows=1 loops=1)\n                  Filter: ((description)::text = 'Raumtemperatur'::text)\n  SubPlan 6\n    ->  Index Scan using unique_key_and_id on log_details d  (cost=2.38..19.97 rows=1 width=8) (actual time=0.002..0.003 rows=1 loops=12586)\n          Index Cond: (($1 = fk_id) AND (fk_keyid = $3))\n          InitPlan 5 (returns $3)\n            ->  Seq Scan on key_description  (cost=0.00..2.38 rows=1 width=8) (actual time=0.005..0.017 rows=1 loops=1)\n                  Filter: ((description)::text = 'Kesselsolltemperatur'::text)\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nBTW: Schemadata is in the links discussed in the thread\n\nThnx to all for helping me.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Tue, 14 Sep 2010 21:59:09 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
{
"msg_contents": "On Tue, 14 Sep 2010, Merlin Moncure wrote:\n> np -- this felt particularly satisfying for some reason. btw, I think\n> you have some more low hanging optimization fruit. I think (although\n> it would certainly have to be tested) hiding your attribute\n> description under keyid is buying you nothing but headaches. If you\n> used natural key style, making description primary key of\n> key_description (or unique), and had log_details have a description\n> column that directly referenced that column, your subquery:\n>\n> (\n> SELECT value FROM log_details d WHERE l.id = d.fk_id AND d.fk_keyid =\n> (\n> SELECT keyid FROM key_description WHERE description = 'Kesselsolltemperatur'\n> )\n> ) AS Kesselsolltemperatur,\n>\n> would look like this:\n> (\n> SELECT value FROM log_details d WHERE l.id = d.fk_id AND\n> d.description = 'Kesselsolltemperatur'\n> ) AS Kesselsolltemperatur,\n>\n> your index on log_details(fk_id, description) is of course fatter, but\n> quite precise...does require rebuilding your entire dataset however.\n> food for thought.\n\nI think your suggestion might be slower because the WHERE clause and \npossible JOINS with BIGINT is much faster (especially when a lot of data \nis queried) than with a VARCHAR. With the latest query plan \nkey_description is only queried once per subselect which is perfect. I've \nalso chosen that indirection that I can change description without \nchanging too much in data model and all data rows on refactoring.\n\n@Tom: Do you think of planner enhancements regarding such situations where \nJOINS are \"converted\" to subselects?\n\nBTW: I had a small bug in the queries and in the code that one description \nwas wrong (one space too much: 'Meldung F4 2. Zeile' => 'Meldung F4 2. Zeile').\nWith this indirect data model this is very easy to change: Change \nthe view and change one code line. With your suggested data model I would \nhave to update millions of rows ...\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Wed, 15 Sep 2010 08:32:00 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
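To make the renaming argument concrete, a sketch of what a description change costs under each design; the natural-key variant is the hypothetical one proposed above, not an existing table:

-- Surrogate-key design used in the thread: one row in the lookup table changes.
UPDATE key_description
   SET description = 'Meldung F4 2. Zeile'
 WHERE description = 'Meldung F4  2. Zeile';

-- Hypothetical natural-key design: every affected detail row must be rewritten.
UPDATE log_details
   SET description = 'Meldung F4 2. Zeile'
 WHERE description = 'Meldung F4  2. Zeile';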
{
"msg_contents": "On Wed, Sep 15, 2010 at 2:32 AM, Gerhard Wiesinger <[email protected]> wrote:\n> On Tue, 14 Sep 2010, Merlin Moncure wrote:\n>>\n>> np -- this felt particularly satisfying for some reason. btw, I think\n>> you have some more low hanging optimization fruit. I think (although\n>> it would certainly have to be tested) hiding your attribute\n>> description under keyid is buying you nothing but headaches. If you\n>> used natural key style, making description primary key of\n>> key_description (or unique), and had log_details have a description\n>> column that directly referenced that column, your subquery:\n>>\n>> (\n>> SELECT value FROM log_details d WHERE l.id = d.fk_id AND d.fk_keyid =\n>> (\n>> SELECT keyid FROM key_description WHERE description =\n>> 'Kesselsolltemperatur'\n>> )\n>> ) AS Kesselsolltemperatur,\n>>\n>> would look like this:\n>> (\n>> SELECT value FROM log_details d WHERE l.id = d.fk_id AND\n>> d.description = 'Kesselsolltemperatur'\n>> ) AS Kesselsolltemperatur,\n>>\n>> your index on log_details(fk_id, description) is of course fatter, but\n>> quite precise...does require rebuilding your entire dataset however.\n>> food for thought.\n>\n> I think your suggestion might be slower because the WHERE clause and\n> possible JOINS with BIGINT is much faster (especially when a lot of data is\n> queried) than with a VARCHAR. With the latest query plan key_description is\n> only queried once per subselect which is perfect. I've also chosen that\n> indirection that I can change description without changing too much in data\n> model and all data rows on refactoring.\n\nYou're not joining -- you're filtering (and your assumption that\nbigint is always going to be faster is quite debatable depending on\ncircumstances). The join is skipped because of the key (yes, it's\ncheap lookup, but w/50 columns each doing it, nothing is cheap).\n\nmerlin\n",
"msg_date": "Wed, 15 Sep 2010 07:52:32 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4"
},
{
"msg_contents": "On Wed, 15 Sep 2010, Merlin Moncure wrote:\n\n> On Wed, Sep 15, 2010 at 2:32 AM, Gerhard Wiesinger <[email protected]> wrote:\n>> On Tue, 14 Sep 2010, Merlin Moncure wrote:\n>>>\n>>> np -- this felt particularly satisfying for some reason. btw, I think\n>>> you have some more low hanging optimization fruit. �I think (although\n>>> it would certainly have to be tested) hiding your attribute\n>>> description under keyid is buying you nothing but headaches. �If you\n>>> used natural key style, making description primary key of\n>>> key_description (or unique), and had log_details have a description\n>>> column that directly referenced that column, your subquery:\n>>>\n>>> (\n>>> �SELECT value FROM log_details d WHERE l.id = d.fk_id AND d.fk_keyid =\n>>> �(\n>>> � SELECT keyid FROM key_description WHERE description =\n>>> 'Kesselsolltemperatur'\n>>> �)\n>>> ) AS Kesselsolltemperatur,\n>>>\n>>> would look like this:\n>>> (\n>>> �SELECT value FROM log_details d WHERE l.id = d.fk_id AND\n>>> d.description = 'Kesselsolltemperatur'\n>>> ) AS Kesselsolltemperatur,\n>>>\n>>> your index on log_details(fk_id, description) is of course fatter, but\n>>> quite precise...does require rebuilding your entire dataset however.\n>>> food for thought.\n>>\n>> I think your suggestion might be slower because the WHERE clause and\n>> possible JOINS with BIGINT is much faster (especially when a lot of data is\n>> queried) than with a VARCHAR. With the latest query plan key_description is\n>> only queried once per subselect which is perfect. I've also chosen that\n>> indirection that I can change description without changing too much in data\n>> model and all data rows on refactoring.\n>\n> You're not joining -- you're filtering (and your assumption that\n> bigint is always going to be faster is quite debatable depending on\n> circumstances). The join is skipped because of the key (yes, it's\n> cheap lookup, but w/50 columns each doing it, nothing is cheap).\n\nI know that I'm not JOINing in that case - as discussed I ment possible \nJOINs in other query scenarios.\n\nBTW: Latest query plan is also optimal that only the \nused columns from the view are evaluated. 
With the full joined version \nall columns where used even when dropped in the result-set, e.g.:\nSELECT col1, col2 FROM view1; -- Equivalent to SELECT * FROM view1; as col1, col2 are all colums in that view\nSELECT col1 FROM view1; -- less effort with subselects when less columns are needed, joins have same \"full view\" effort here\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n",
"msg_date": "Wed, 15 Sep 2010 20:39:33 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to\n 8.4"
},
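One way to check the column-pruning claim above is to compare plans for different projections of the subselect-based view (view name from this thread; the expectation, not a reproduced plan, is that the second statement shows fewer SubPlan nodes because the unreferenced value columns are dropped):

EXPLAIN SELECT * FROM log_entries_test
 WHERE datetime > now() - INTERVAL '1 hour';

EXPLAIN SELECT id, datetime FROM log_entries_test
 WHERE datetime > now() - INTERVAL '1 hour';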
{
"msg_contents": "\nHi,\n\nI had a similar problem with many left join, reading about planning\noptimization i tried to edit postgresql.conf and uncommented the line\njoin_collapse_limit = 8 and set it to 1, disables collapsing of explicit .\nMy query its taking 2000s in 8.4 and the same query 2ms in 8.3. Now its\nworking fast in 8.4.\n\nBest regards,\n\nMarc\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/Major-performance-problem-after-upgrade-from-8-3-to-8-4-tp2796390p3329435.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 5 Jan 2011 12:06:41 -0800 (PST)",
"msg_from": "Marc Antonio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major performance problem after upgrade from 8.3 to 8.4"
}
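The same workaround can also be scoped to a single session instead of editing postgresql.conf; with the limit at 1 the planner keeps explicit JOINs in the order they were written. A minimal sketch:

SET join_collapse_limit = 1;  -- planner no longer reorders explicit JOIN syntax
-- ... run the query with many LEFT JOINs ...
RESET join_collapse_limit;    -- back to the configured default (8)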
] |
[
{
"msg_contents": "Not sure if anyone else saw this, but it struck me as an interesting\nidea if it could be added to PostgreSQL. GPU accelerated database\noperations could be very... interesting. Of course, this could be\ndifficult to do in a way that usefully increases performance of\nPostgreSQL, but I'll leave that up to you guys to figure out.\n\nhttp://code.google.com/p/back40computing/wiki/RadixSorting\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing\nfrom our children, we're stealing from them--and it's not even\nconsidered to be a crime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to\nlive; not live to eat.) ~Marcus Tullius Cicero\n",
"msg_date": "Mon, 30 Aug 2010 09:46:26 -0400",
"msg_from": "Eliot Gable <[email protected]>",
"msg_from_op": true,
"msg_subject": "GPU Accelerated Sorting"
},
{
"msg_contents": "On Mon, Aug 30, 2010 at 9:46 AM, Eliot Gable <[email protected]> wrote:\n> Not sure if anyone else saw this, but it struck me as an interesting\n> idea if it could be added to PostgreSQL. GPU accelerated database\n> operations could be very... interesting. Of course, this could be\n> difficult to do in a way that usefully increases performance of\n> PostgreSQL, but I'll leave that up to you guys to figure out.\n>\n> http://code.google.com/p/back40computing/wiki/RadixSorting\n\nIt would be hard to use this because, in addition to the fact that\nthis is specific to a very particular type of hardware, it only works\nif you're trying to do a very particular type of sort. For example,\nit wouldn't handle multi-byte characters properly. And it wouldn't\nhandle integers properly either - you'd end up sorting negatives after\npositives. You could possibly still find applications for it but\nthey'd be quite narrow, I think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Tue, 21 Sep 2010 15:27:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
}
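The signed-integer caveat can be illustrated in SQL by comparing raw bit patterns; this relies on PostgreSQL's integer-to-bit cast and is only meant to show why sorting the key bytes as unsigned data, as a plain radix sort would, misorders negatives:

SELECT (-1)::bit(32) AS minus_one,          -- 11111111111111111111111111111111
       (1)::bit(32)  AS plus_one,           -- 00000000000000000000000000000001
       (-1)::bit(32) > (1)::bit(32) AS negative_sorts_after;  -- true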
] |
[
{
"msg_contents": "Not sure if anyone else saw this, but it struck me as an interesting\nidea if it could be added to PostgreSQL. GPU accelerated database\noperations could be very... interesting. Of course, this could be\ndifficult to do in a way that usefully increases performance of\nPostgreSQL, but I'll leave that up to you guys to figure out.\n\nhttp://code.google.com/p/back40computing/wiki/RadixSorting\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing\nfrom our children, we're stealing from them--and it's not even\nconsidered to be a crime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to\nlive; not live to eat.) ~Marcus Tullius Cicero\n",
"msg_date": "Mon, 30 Aug 2010 09:51:22 -0400",
"msg_from": "Eliot Gable <[email protected]>",
"msg_from_op": true,
"msg_subject": "GPU Accelerated Sorting"
},
{
"msg_contents": "Eliot Gable wrote:\n> Not sure if anyone else saw this, but it struck me as an interesting\n> idea if it could be added to PostgreSQL. GPU accelerated database\n> operations could be very... interesting. Of course, this could be\n> difficult to do in a way that usefully increases performance of\n> PostgreSQL, but I'll leave that up to you guys to figure out.\n> \n\nThis comes up every year or so. The ability of GPU offloading to help \nwith sorting has to overcome the additional latency that comes from \ncopying everything over to it and then getting all the results back. If \nyou look at the typical types of sorting people see in PostgreSQL, it's \nhard to find ones that are a) big enough to benefit from being offloaded \nto the GPU like that, while also being b) not so bottlenecked on disk \nI/O that speeding up the CPU part matters. And if you need to sort \nsomething in that category, you probably just put an index on it instead \nand call it a day.\n\nIf you made me make a list of things I'd think would be worthwhile to \nspend effort improving in PostgreSQL, this would be on the research \nlist, but unlikely to even make my personal top 100 things that are work \nfiddling with.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Mon, 30 Aug 2010 10:05:21 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": "Hello,\n\nIn my humble opinion, while it can sound interesting from a theorical\npoint of view to outloads some operations to the GPU, there is a huge\npratical problem in current world : databases which are big enough to\nrequire such heavy optimization are usually runned on server hardware,\nwhich very rarely have powerful GPU.\n\nThere may be a small target of computers having both GPU and heavy\ndatabase, but that sounds very exceptional to me, so investing effort\ninto it sounds a bit unjustified to me.\n\nExcept as a matter of scientific research, which is it of course\nimportant for the future.\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Mon, 30 Aug 2010 16:56:15 +0200",
"msg_from": "[email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot)",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": "Well, from that perspective, it becomes a \"chicken and egg\" problem.\nWithout the software support to use a GPU in a server for\nacceleration, nobody's going to build a server with a GPU.\n\nHowever, as previously stated, I can understand the challenges with\ndetermining whether the offloading would even be worthwhile (given\ndisk performance constraints and GPU memory loading constraints vs\npayoff for acceleration), much less finding actual real-world queries\nthat would benefit from it. I am sure there are probably hundreds of\nother performance improvements that could be made that would have much\nmore wide-spread appeal, but it's an interesting thing to consider.\nThis is not the first thing that's come up that a database could use a\nGPU for in order to improve performance, and it won't be the last.\nIt's something to keep in mind, and if enough additional items come up\nwhere GPUs can do better than CPUs, it might be worthwhile to start\nimplementing some of those things. Maybe if enough of them get\nimplemented, there will be enough overall performance increase to make\nit worthwhile to put a GPU in a database server. Besides, if you can\nsort data on a GPU faster than a CPU, you can probably also search for\ndata faster on the GPU than on the CPU under similar conditions. With\nmemory increasing like crazy in servers, it might be worthwhile to\nkeep indexes entirely in memory (but keep them sync'd to the disk,\nobviously) and for extremely large tables, dump them to the GPU for a\nmassively parallel search. In fact, if you can get enough GPU memory\nthat you could keep them entirely in the GPU memory and keep them\nupdated there any time they change, you could see some real\nperformance pay-offs.\n\nI'm not saying someone should just go out and do this right now, but\nit might be worthwhile to keep it in mind as code is re-written or\nupdated in the future how it might be structured to more easily\nimplement something like this in the future.\n\nOn Mon, Aug 30, 2010 at 10:56 AM, Gaël Le Mignot <[email protected]> wrote:\n> Hello,\n>\n> In my humble opinion, while it can sound interesting from a theorical\n> point of view to outloads some operations to the GPU, there is a huge\n> pratical problem in current world : databases which are big enough to\n> require such heavy optimization are usually runned on server hardware,\n> which very rarely have powerful GPU.\n>\n> There may be a small target of computers having both GPU and heavy\n> database, but that sounds very exceptional to me, so investing effort\n> into it sounds a bit unjustified to me.\n>\n> Except as a matter of scientific research, which is it of course\n> important for the future.\n>\n> Regards,\n>\n> --\n> Gaël Le Mignot - [email protected]\n> Pilot Systems - 9, rue Desargues - 75011 Paris\n> Tel : +33 1 44 53 05 55 - www.pilotsystems.net\n> Gérez vos contacts et vos newsletters : www.cockpit-mailing.com\n>\n\n\n\n-- \nEliot Gable\n\n\"We do not inherit the Earth from our ancestors: we borrow it from our\nchildren.\" ~David Brower\n\n\"I decided the words were too conservative for me. We're not borrowing\nfrom our children, we're stealing from them--and it's not even\nconsidered to be a crime.\" ~David Brower\n\n\"Esse oportet ut vivas, non vivere ut edas.\" (Thou shouldst eat to\nlive; not live to eat.) ~Marcus Tullius Cicero\n",
"msg_date": "Mon, 30 Aug 2010 14:49:27 -0400",
"msg_from": "Eliot Gable <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": "Greg Smith wrote:\n> This comes up every year or so. The ability of GPU offloading to help \n> with sorting has to overcome the additional latency that comes from \n> copying everything over to it and then getting all the results back. \n> If you look at the typical types of sorting people see in PostgreSQL, \n> it's hard to find ones that are a) big enough to benefit from being \n> offloaded to the GPU like that, while also being b) not so \n> bottlenecked on disk I/O that speeding up the CPU part matters. And \n> if you need to sort something in that category, you probably just put \n> an index on it instead and call it a day.\n>\n> If you made me make a list of things I'd think would be worthwhile to \n> spend effort improving in PostgreSQL, this would be on the research \n> list, but unlikely to even make my personal top 100 things that are \n> work fiddling with.\nRelated is 'Parallelizing query optimization' \n(http://www.vldb.org/pvldb/1/1453882.pdf) in which they actually \nexperiment with PostgreSQL. Note that their target platform is general \npurpose CPU, not a SIMD GPU processor.\n\n-- Yeb\n\n",
"msg_date": "Mon, 30 Aug 2010 21:25:38 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": "On Mon, Aug 30, 2010 at 8:56 AM, Gaël Le Mignot <[email protected]> wrote:\n> Hello,\n>\n> In my humble opinion, while it can sound interesting from a theorical\n> point of view to outloads some operations to the GPU, there is a huge\n> pratical problem in current world : databases which are big enough to\n> require such heavy optimization are usually runned on server hardware,\n> which very rarely have powerful GPU.\n\nThat's changed recently:\nhttp://www.aberdeeninc.com/abcatg/GPUservers.htm\n\n> There may be a small target of computers having both GPU and heavy\n> database, but that sounds very exceptional to me, so investing effort\n> into it sounds a bit unjustified to me.\n\nI tend to agree. OTOH, imagine using a 400 core GPU for offloading\nstuff that isn't just a sort, like travelling salesman type problems.\nThe beauty of if is, that with pgsql support dozens of scripting\nlanguages, you wouldn't have to build anything into pg's backends, you\ncould just write it in a pl langauge.\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Mon, 30 Aug 2010 14:43:35 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": " Feels like I fell through a worm hole in space/time, back to inmos in \n1987, and a guy from marketing has just\nwalked in the office going on about there's a customer who wants to use \nour massively parallel hardware to speed up databases...\n\n\n",
"msg_date": "Mon, 30 Aug 2010 14:47:09 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": "[email protected] (David Boreham) writes:\n> Feels like I fell through a worm hole in space/time, back to inmos in\n> 1987, and a guy from marketing has just\n> walked in the office going on about there's a customer who wants to\n> use our massively parallel hardware to speed up databases...\n\n... As long as you're willing to rewrite PostgreSQL in Occam 2...\n-- \nhttp://projects.cs.kent.ac.uk/projects/tock/trac/\nThe statistics on sanity are that one out of every four Americans is\nsuffering from some form of mental illness. Think of your three best\nfriends. If they're okay, then it's you. -- Rita Mae Brown\n",
"msg_date": "Mon, 30 Aug 2010 17:18:06 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": "On Mon, 2010-08-30 at 09:51 -0400, Eliot Gable wrote:\n> Not sure if anyone else saw this, but it struck me as an interesting\n> idea if it could be added to PostgreSQL. GPU accelerated database\n> operations could be very... interesting. Of course, this could be\n> difficult to do in a way that usefully increases performance of\n> PostgreSQL, but I'll leave that up to you guys to figure out.\n> \n> http://code.google.com/p/back40computing/wiki/RadixSorting\n> \n\nRadix sort is not a comparison sort. Comparison sorts work for any data\ntype for which you define a total order; and any total order is allowed.\nRadix sort only works for some data types and some total orders.\n\nHowever, it would be very nice to use radix sorting where it does work.\nThat would require some extensions to the type system, but it could be\ndone.\n\nThe GPU issue is orthogonal.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 30 Aug 2010 15:01:47 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": " On 8/30/2010 3:18 PM, Chris Browne wrote:\n> ... As long as you're willing to rewrite PostgreSQL in Occam 2...\n\nJust re-write it in Google's new language 'Go' : it's close enough to \nOccam and they'd probably fund the project..\n\n;)\n\n\n",
"msg_date": "Mon, 30 Aug 2010 16:37:14 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": "On a similar note, is Postgres' Quicksort a dual-pivot quicksort? This can be up to 2x as fast as a normal quicksort (25% fewer swap operations, and swap operations are more expensive than compares for most sorts).\n\nJust google 'dual pivot quicksort' for more info. \n\n\nAnd before anyone asks -- two pivots (3 partitions) is optimal. See http://mail.openjdk.java.net/pipermail/core-libs-dev/2009-September/002676.html\n\n\nOn Aug 30, 2010, at 12:25 PM, Yeb Havinga wrote:\n\n> Greg Smith wrote:\n>> This comes up every year or so. The ability of GPU offloading to help \n>> with sorting has to overcome the additional latency that comes from \n>> copying everything over to it and then getting all the results back. \n>> If you look at the typical types of sorting people see in PostgreSQL, \n>> it's hard to find ones that are a) big enough to benefit from being \n>> offloaded to the GPU like that, while also being b) not so \n>> bottlenecked on disk I/O that speeding up the CPU part matters. And \n>> if you need to sort something in that category, you probably just put \n>> an index on it instead and call it a day.\n>> \n>> If you made me make a list of things I'd think would be worthwhile to \n>> spend effort improving in PostgreSQL, this would be on the research \n>> list, but unlikely to even make my personal top 100 things that are \n>> work fiddling with.\n> Related is 'Parallelizing query optimization' \n> (http://www.vldb.org/pvldb/1/1453882.pdf) in which they actually \n> experiment with PostgreSQL. Note that their target platform is general \n> purpose CPU, not a SIMD GPU processor.\n> \n> -- Yeb\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 30 Aug 2010 18:17:48 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting"
},
{
"msg_contents": "Scott Carey <[email protected]> writes:\n> On a similar note, is Postgres' Quicksort a dual-pivot quicksort? This can be up to 2x as fast as a normal quicksort (25% fewer swap operations, and swap operations are more expensive than compares for most sorts).\n\nIn Postgres, the swaps are pretty much free compared to the\ncomparisons. Sorry, but the above doesn't especially tempt me...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Aug 2010 21:58:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GPU Accelerated Sorting "
}
] |
[
{
"msg_contents": "Hi all ;\n\nwe have an automated partition creation process that includes the creation of \nan FK constraint. we have a few other servers with similar scenarios and this \nis the only server that stinks per when we create the new partitions.\n\nAnyone have any thoughts on how to debug this? were running postgres 8.4.4 on \nCentOS 5.5\n\nThanks in advance\n",
"msg_date": "Mon, 30 Aug 2010 16:28:25 -0600",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow DDL creation"
},
{
"msg_contents": "On Mon, Aug 30, 2010 at 3:28 PM, Kevin Kempter\n<[email protected]> wrote:\n> Hi all ;\n>\n> we have an automated partition creation process that includes the creation of\n> an FK constraint. we have a few other servers with similar scenarios and this\n> is the only server that stinks per when we create the new partitions.\n>\n> Anyone have any thoughts on how to debug this? were running postgres 8.4.4 on\n> CentOS 5.5\n>\n> Thanks in advance\n\nIs the referenced column indexed?\n",
"msg_date": "Mon, 30 Aug 2010 16:04:20 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow DDL creation"
},
{
"msg_contents": "On Mon, Aug 30, 2010 at 04:28:25PM -0600, Kevin Kempter wrote:\n> Hi all ;\n> \n> we have an automated partition creation process that includes the creation of \n> an FK constraint. we have a few other servers with similar scenarios and this \n> is the only server that stinks per when we create the new partitions.\n> \n> Anyone have any thoughts on how to debug this? were running postgres 8.4.4 on \n> CentOS 5.5\n\nIf you're doing the partitions on demand, you could be getting\ndeadlocks. Any reason not to pre-create a big bunch of them in\nadvance?\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nPhone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter\nSkype: davidfetter XMPP: [email protected]\niCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n",
"msg_date": "Mon, 30 Aug 2010 17:31:13 -0700",
"msg_from": "David Fetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow DDL creation"
},
{
"msg_contents": "On Monday 30 August 2010 17:04, bricklen wrote:\n> On Mon, Aug 30, 2010 at 3:28 PM, Kevin Kempter\n>\n> <[email protected]> wrote:\n> > Hi all ;\n> >\n> > we have an automated partition creation process that includes the\n> > creation of an FK constraint. we have a few other servers with similar\n> > scenarios and this is the only server that stinks per when we create the\n> > new partitions.\n> >\n> > Anyone have any thoughts on how to debug this? were running postgres\n> > 8.4.4 on CentOS 5.5\n> >\n> > Thanks in advance\n>\n> Is the referenced column indexed?\n\nno, but its for a new partition so there's no data as of yet in the partition\n",
"msg_date": "Tue, 31 Aug 2010 09:35:14 -0600",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow DDL creation"
},
{
"msg_contents": "On Tue, Aug 31, 2010 at 11:35 AM, Kevin Kempter\n<[email protected]> wrote:\n> On Monday 30 August 2010 17:04, bricklen wrote:\n>> On Mon, Aug 30, 2010 at 3:28 PM, Kevin Kempter\n>>\n>> <[email protected]> wrote:\n>> > Hi all ;\n>> >\n>> > we have an automated partition creation process that includes the\n>> > creation of an FK constraint. we have a few other servers with similar\n>> > scenarios and this is the only server that stinks per when we create the\n>> > new partitions.\n>> >\n>> > Anyone have any thoughts on how to debug this? were running postgres\n>> > 8.4.4 on CentOS 5.5\n>> >\n>> > Thanks in advance\n>>\n>> Is the referenced column indexed?\n>\n> no, but its for a new partition so there's no data as of yet in the partition\n\nWhat exactly does \"stinks\" mean?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Tue, 21 Sep 2010 15:38:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow DDL creation"
}
] |
[
{
"msg_contents": "I've created a few user-defined types quite similar to uuid which we\nuse to store various hashes in the database. (The types use binary\nencoding internally, but only expose hexadecimal strings externally.)\n\nThe hashes are roughly equidistributed, so when I do a range query\nwhich is essentially based on a hash prefix(*), I expect the result to\ncontain N * 2**(-k) results, where N is the table size and k the\nnumber of bits in the range. Actual query results show that this is\nthe case. The odd thing is that the planner thinks that the range\nquery will return about one quarter of the table, independently of the\nrange specified. Of course, the row estimates are quite far off as a\nresult, leading to suboptimal plans.\n\nAny idea what could cause this? Do I need to provide some estimator\nfunction somewhere?\n\n(*) I don't use LIKE, because its optimization is hard-coded to a few\n types, but explicit BETWEEN ... AND queries.\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Fri, 03 Sep 2010 14:15:19 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd estimation issue with user-defined type"
},
{
"msg_contents": "Florian Weimer <[email protected]> writes:\n> I've created a few user-defined types quite similar to uuid which we\n> use to store various hashes in the database. (The types use binary\n> encoding internally, but only expose hexadecimal strings externally.)\n\n> The hashes are roughly equidistributed, so when I do a range query\n> which is essentially based on a hash prefix(*), I expect the result to\n> contain N * 2**(-k) results, where N is the table size and k the\n> number of bits in the range. Actual query results show that this is\n> the case. The odd thing is that the planner thinks that the range\n> query will return about one quarter of the table, independently of the\n> range specified. Of course, the row estimates are quite far off as a\n> result, leading to suboptimal plans.\n\n> Any idea what could cause this? Do I need to provide some estimator\n> function somewhere?\n\nIf you haven't, then how would you expect the planner to know that?\n\nLess flippantly, you really need to tell us exactly what planner support\nyou did provide, before you can expect any intelligent comment. Has the\ntype got a default btree opclass? What selectivity estimators did you\nattach to the comparison operators? Do you get MCV and/or histogram\nentries in pg_stats when you ANALYZE one of these columns, and if so\ndo they look sane?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 2010 11:14:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd estimation issue with user-defined type "
},
{
"msg_contents": "* Tom Lane:\n\n>> Any idea what could cause this? Do I need to provide some estimator\n>> function somewhere?\n>\n> If you haven't, then how would you expect the planner to know that?\n\nPerhaps it's psychic, or there is some trick I don't know about? 8-)\n\n> Less flippantly, you really need to tell us exactly what planner support\n> you did provide, before you can expect any intelligent comment. Has the\n> type got a default btree opclass?\n\nYes, I think so (because of CREATE OPERATOR CLASS ... USING btree).\n\n> What selectivity estimators did you attach to the comparison\n> operators?\n\nAh, I see, I probably need to provide a RESTRICT clause in the\noperator definition. That should do the trick, and should be fairly\neasy to implement in this case.\n\nSorry, I just missed this piece of information in the documentation, I\nshould have read it more carefully.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Fri, 03 Sep 2010 15:28:19 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd estimation issue with user-defined type"
}
] |
[
{
"msg_contents": "Hello,\nI have I query which behave strangely (according to me).\nAccording to the first plan PG makes absolutely unnecessary seq scan on \ntables \"invoices\" and \"domeini\" and etc.\nI thing they should be access only if there are rows from the where. Why \nthe left join executes first?\nThen I rewrite the query and move left joins to sub queries and the \nresult was great speed up.\nBut I thing it is more correctly to write the query with left joins. At \nleast the sub queries have similar parts which are now accessed twice.\n\nSo I will appreciate any suggestions how it is correct to write this \nquery and why the left join plan is so wrong.\n\n SELECT version();\n \nversion \n----------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.4 on amd64-portbld-freebsd8.1, compiled by GCC cc (GCC) \n4.2.1 20070719 [FreeBSD], 64-bit\n(1 row)\n\n\nBest regards,\n Kaloyan Iliev\n\n\n===============================ORIGINAL QUERY==============================\nexplain analyze SELECT\n DD.debtid,\n ADD.amount as saldo,\n DOM.fqdn ||DT.descr as domain_fqdn,\n S.descr_bg as service_descr_bg,\n ADD.pno,\n ADD.amount,\n M.name_bg as measure_name_bg,\n AC.ino,\n I.idate\n FROM debts_desc DD LEFT JOIN domeini DOM ON \n(DD.domain_id = DOM.id)\n LEFT \nJOIN domain_type DT ON (DOM.domain_type_id = DT.id)\n LEFT JOIN acc_debts \nADC ON (DD.debtid = ADC.debtid AND ADC.credit)\n LEFT JOIN \nacc_clients AC ON (AC.transact_no = ADC.transact_no AND NOT AC.credit)\n LEFT JOIN \ninvoices I ON (AC.ino = I.ino AND I.istatus = 0),\n acc_debts ADD,\n services S,\n measures M,\n proforms P\n WHERE DD.debtid = ADD.debtid\n AND DD.measure_id = M.measure_id\n AND DD.active\n AND NOT DD.paid\n AND DD.has_proform\n AND NOT DD.storned\n AND ADD.pno = \nP.pno\n AND NOT ADD.credit\n\n AND \nP.person1_id = 287294\n AND \nDD.serviceid = S.serviceid;\n\n \nQUERY \nPLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=37503.47..47243.77 rows=1 width=110) (actual \ntime=1522.796..1522.796 rows=0 loops=1)\n Join Filter: (dd.measure_id = m.measure_id)\n -> Nested Loop (cost=37503.47..47242.45 rows=1 width=106) (actual \ntime=1522.794..1522.794 rows=0 loops=1)\n Join Filter: (dd.serviceid = s.serviceid)\n -> Hash Join (cost=37503.47..47239.46 rows=1 width=79) \n(actual time=1522.791..1522.791 rows=0 loops=1)\n Hash Cond: (dd.debtid = add.debtid)\n -> Hash Left Join (cost=37475.95..47122.76 rows=23782 \nwidth=67) (actual time=1370.668..1521.629 rows=1037 loops=1)\n Hash Cond: (dom.domain_type_id = dt.id)\n -> Hash Left Join (cost=37474.12..46793.92 \nrows=23782 width=66) (actual time=1370.563..1519.302 rows=1037 loops=1)\n Hash Cond: (dd.domain_id = dom.id)\n -> Hash Left Join (cost=23487.71..30402.02 \nrows=23782 width=54) (actual time=556.587..636.320 rows=1037 loops=1)\n Hash Cond: (ac.ino = i.ino)\n -> Hash Left Join \n(cost=8410.66..14259.11 rows=23782 width=50) (actual \ntime=318.180..387.026 rows=1037 loops=1)\n Hash Cond: (adc.transact_no = \nac.transact_no)\n -> Hash Left Join \n(cost=4973.98..9903.69 rows=23782 width=50) (actual \ntime=175.979..234.068 rows=1037 loops=1)\n Hash Cond: (dd.debtid = \nadc.debtid)\n -> Seq Scan on debts_desc \ndd (cost=0.00..2866.52 rows=23782 width=46) (actual time=0.481..45.085 \nrows=1037 loops=1)\n Filter: (active AND \n(NOT paid) AND has_proform AND (NOT storned))\n -> 
Hash \n(cost=3942.08..3942.08 rows=62872 width=8) (actual time=175.410..175.410 \nrows=63157 loops=1)\n -> Seq Scan on \nacc_debts adc (cost=0.00..3942.08 rows=62872 width=8) (actual \ntime=0.097..102.172 rows=63157 loops=1)\n Filter: credit\n -> Hash (cost=2536.53..2536.53 \nrows=54812 width=8) (actual time=142.169..142.169 rows=54559 loops=1)\n -> Seq Scan on acc_clients \nac (cost=0.00..2536.53 rows=54812 width=8) (actual time=0.019..78.736 \nrows=54559 loops=1)\n Filter: (NOT credit)\n -> Hash (cost=14181.02..14181.02 \nrows=54562 width=8) (actual time=238.380..238.380 rows=54559 loops=1)\n -> Seq Scan on invoices i \n(cost=0.00..14181.02 rows=54562 width=8) (actual time=0.029..170.761 \nrows=54559 loops=1)\n Filter: (istatus = 0)\n -> Hash (cost=8669.96..8669.96 rows=305796 \nwidth=16) (actual time=813.940..813.940 rows=305796 loops=1)\n -> Seq Scan on domeini dom \n(cost=0.00..8669.96 rows=305796 width=16) (actual time=0.015..419.684 \nrows=305796 loops=1)\n -> Hash (cost=1.37..1.37 rows=37 width=9) (actual \ntime=0.087..0.087 rows=37 loops=1)\n -> Seq Scan on domain_type dt \n(cost=0.00..1.37 rows=37 width=9) (actual time=0.003..0.040 rows=37 loops=1)\n -> Hash (cost=27.45..27.45 rows=5 width=16) (actual \ntime=0.078..0.078 rows=1 loops=1)\n -> Nested Loop (cost=0.00..27.45 rows=5 width=16) \n(actual time=0.067..0.073 rows=1 loops=1)\n -> Index Scan using proforms_person1_id_idx \non proforms p (cost=0.00..10.62 rows=2 width=4) (actual \ntime=0.045..0.046 rows=1 loops=1)\n Index Cond: (person1_id = 287294)\n -> Index Scan using acc_debts_pno_idx on \nacc_debts add (cost=0.00..8.38 rows=3 width=16) (actual \ntime=0.017..0.019 rows=1 loops=1)\n Index Cond: (add.pno = p.pno)\n Filter: (NOT add.credit)\n -> Seq Scan on services s (cost=0.00..2.44 rows=44 width=31) \n(never executed)\n -> Seq Scan on measures m (cost=0.00..1.14 rows=14 width=8) (never \nexecuted)\n Total runtime: 1523.525 ms\n(41 rows)\n\n\n\n==================================================AFTER \nREWRITE============================================\n\nexplain analyze SELECT\n DD.debtid,\n ADD.amount as saldo,\n (SELECT DOM.fqdn ||DT.descr\n FROM domeini DOM, domain_type DT\n WHERE DOM.domain_type_id = DT.id\n AND DD.domain_id = DOM.id) as \ndomain_fqdn,\n S.descr_bg as service_descr_bg,\n ADD.pno,\n ADD.amount,\n M.name_bg as measure_name_bg,\n (SELECT AC.ino FROM acc_debts ACD,\n acc_clients AC\n WHERE ACD.debtid = ADD.debtid\n AND ACD.credit\n AND AC.transact_no = \nACD.transact_no\n AND NOT AC.credit) as ino,\n (SELECT I.idate FROM acc_debts ACD,\n acc_clients AC,\n invoices I\n WHERE ACD.debtid = ADD.debtid\n AND ACD.credit\n AND AC.transact_no = \nACD.transact_no\n AND NOT AC.credit\n AND AC.ino = I.ino\n AND I.istatus = 0) as idate\n FROM debts_desc DD,\n acc_debts ADD,\n services S,\n measures M,\n proforms P\n WHERE DD.debtid = ADD.debtid\n AND DD.measure_id = M.measure_id\n AND DD.active\n AND NOT DD.paid\n AND DD.has_proform\n AND NOT DD.storned\n AND ADD.pno = P.pno\n AND NOT ADD.credit\n AND P.person1_id = 287294\n AND DD.serviceid = S.serviceid;\n\n \nQUERY \nPLAN \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..77.90 rows=1 width=93) (actual \ntime=0.047..0.047 rows=0 loops=1)\n -> Nested Loop (cost=0.00..32.96 rows=1 width=66) (actual \ntime=0.045..0.045 rows=0 loops=1)\n -> Nested Loop (cost=0.00..32.68 rows=1 width=62) (actual \ntime=0.043..0.043 rows=0 loops=1)\n 
-> Nested Loop (cost=0.00..27.45 rows=5 width=16) \n(actual time=0.026..0.031 rows=1 loops=1)\n -> Index Scan using proforms_person1_id_idx on \nproforms p (cost=0.00..10.62 rows=2 width=4) (actual time=0.013..0.014 \nrows=1 loops=1)\n Index Cond: (person1_id = 287294)\n -> Index Scan using acc_debts_pno_idx on acc_debts \nadd (cost=0.00..8.38 rows=3 width=16) (actual time=0.007..0.008 rows=1 \nloops=1)\n Index Cond: (add.pno = p.pno)\n Filter: (NOT add.credit)\n -> Index Scan using debts_desc_pkey on debts_desc dd \n(cost=0.00..1.03 rows=1 width=46) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (dd.debtid = add.debtid)\n Filter: (dd.active AND (NOT dd.paid) AND \ndd.has_proform AND (NOT dd.storned))\n -> Index Scan using measures_pkey on measures m \n(cost=0.00..0.27 rows=1 width=8) (never executed)\n Index Cond: (m.measure_id = dd.measure_id)\n -> Index Scan using services_pkey on services s (cost=0.00..0.27 \nrows=1 width=31) (never executed)\n Index Cond: (s.serviceid = dd.serviceid)\n SubPlan 1\n -> Hash Join (cost=8.31..9.84 rows=1 width=13) (never executed)\n Hash Cond: (dt.id = dom.domain_type_id)\n -> Seq Scan on domain_type dt (cost=0.00..1.37 rows=37 \nwidth=9) (never executed)\n -> Hash (cost=8.30..8.30 rows=1 width=12) (never executed)\n -> Index Scan using domeini_pkey on domeini dom \n(cost=0.00..8.30 rows=1 width=12) (never executed)\n Index Cond: ($0 = id)\n SubPlan 2\n -> Nested Loop (cost=0.00..16.63 rows=1 width=4) (never executed)\n -> Index Scan using acc_debts_debtid_idx on acc_debts acd \n(cost=0.00..8.33 rows=1 width=4) (never executed)\n Index Cond: (debtid = $1)\n Filter: credit\n -> Index Scan using acc_clients_transact_no_uidx on \nacc_clients ac (cost=0.00..8.28 rows=1 width=8) (never executed)\n Index Cond: (ac.transact_no = acd.transact_no)\n Filter: (NOT ac.credit)\n SubPlan 3\n -> Nested Loop (cost=0.00..18.19 rows=1 width=4) (never executed)\n -> Nested Loop (cost=0.00..16.63 rows=1 width=4) (never \nexecuted)\n -> Index Scan using acc_debts_debtid_idx on acc_debts \nacd (cost=0.00..8.33 rows=1 width=4) (never executed)\n Index Cond: (debtid = $1)\n Filter: credit\n -> Index Scan using acc_clients_transact_no_uidx on \nacc_clients ac (cost=0.00..8.28 rows=1 width=8) (never executed)\n Index Cond: (ac.transact_no = acd.transact_no)\n Filter: (NOT ac.credit)\n -> Index Scan using invoices_ino_uidx on invoices i \n(cost=0.00..1.55 rows=1 width=8) (never executed)\n Index Cond: (i.ino = ac.ino)\n Total runtime: 0.202 ms\n(43 rows)\n\n\n\n",
"msg_date": "Fri, 03 Sep 2010 19:16:40 +0300",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about LEFT JOIN and query plan"
},
{
"msg_contents": "Kaloyan Iliev Iliev <[email protected]> writes:\n> I have I query which behave strangely (according to me).\n> According to the first plan PG makes absolutely unnecessary seq scan on \n> tables \"invoices\" and \"domeini\" and etc.\n\nI think you might get better results if you could get this rowcount\nestimate a bit more in line with reality:\n\n> -> Seq Scan on debts_desc dd (cost=0.00..2866.52 rows=23782 width=46) (actual time=0.481..45.085 rows=1037 loops=1)\n> Filter: (active AND (NOT paid) AND has_proform AND (NOT storned))\n\nIt's choosing to hash instead of doing (what it thinks will be) 23K\nindex probes into the other table. For 1000 probes the decision\nmight be different.\n\nI don't know if raising the stats target for that table will be enough\nto fix it. Most likely those four conditions are not uncorrelated.\nYou might need to think about revising the table's representation\nso that the query condition can be simpler and thus more accurately\nestimated.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Sep 2010 13:14:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about LEFT JOIN and query plan "
},
{
"msg_contents": "Kaloyan Iliev Iliev <[email protected]> wrote:\n \n> I thing they should be access only if there are rows from the\n> where. Why the left join executes first?\n \nOut of curiosity, what happens if you consistently us JOIN clauses,\nrather than mixing that with commas?:\n \nexplain analyze\nSELECT\n DD.debtid,\n ADD.amount as saldo,\n DOM.fqdn ||DT.descr as domain_fqdn,\n S.descr_bg as service_descr_bg,\n ADD.pno,\n ADD.amount,\n M.name_bg as measure_name_bg,\n AC.ino,\n I.idate\n FROM debts_desc DD\n JOIN proforms P ON (ADD.pno = P.pno)\n JOIN acc_debts ADD ON (DD.debtid = ADD.debtid)\n JOIN services S ON (DD.serviceid = S.serviceid)\n JOIN measures M ON (DD.measure_id = M.measure_id)\n LEFT JOIN domeini DOM ON (DD.domain_id = DOM.id)\n LEFT JOIN domain_type DT ON (DOM.domain_type_id = DT.id)\n LEFT JOIN acc_debts ADC\n ON (DD.debtid = ADC.debtid AND ADC.credit)\n LEFT JOIN acc_clients AC\n ON (AC.transact_no = ADC.transact_no AND NOT AC.credit)\n LEFT JOIN invoices I ON (AC.ino = I.ino AND I.istatus = 0)\n WHERE DD.active\n AND NOT DD.paid\n AND DD.has_proform\n AND NOT DD.storned\n AND NOT ADD.credit\n AND P.person1_id = 287294\n;\n \n-Kevin\n",
"msg_date": "Fri, 03 Sep 2010 12:16:52 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about LEFT JOIN and query plan"
},
{
"msg_contents": "\n\n\n\n\n\nHi,\nThe plan improves. So can you explain why?\nThanks in advance.\n\nKaloyan\n \nQUERY\nPLAN \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..82.88 rows=1 width=68) (actual\ntime=92.455..92.455 rows=0 loops=1)\n -> Nested Loop Left Join (cost=0.00..77.73 rows=1 width=64)\n(actual time=92.453..92.453 rows=0 loops=1)\n -> Nested Loop Left Join (cost=0.00..69.44 rows=1\nwidth=64) (actual time=92.451..92.451 rows=0 loops=1)\n -> Nested Loop (cost=0.00..64.26 rows=1 width=60)\n(actual time=92.449..92.449 rows=0 loops=1)\n Join Filter: (dd.measure_id = m.measure_id)\n -> Nested Loop (cost=0.00..62.95 rows=1\nwidth=60) (actual time=92.447..92.447 rows=0 loops=1)\n Join Filter: (dd.serviceid = s.serviceid)\n -> Nested Loop Left Join \n(cost=0.00..59.96 rows=1 width=37) (actual time=92.444..92.444 rows=0\nloops=1)\n Join Filter: (dom.domain_type_id =\ndt.id)\n -> Nested Loop Left Join \n(cost=0.00..58.13 rows=1 width=36) (actual time=92.443..92.443 rows=0\nloops=1)\n -> Nested Loop \n(cost=0.00..52.88 rows=1 width=28) (actual time=92.440..92.440 rows=0\nloops=1)\n -> Nested Loop \n(cost=0.00..27.50 rows=5 width=16) (actual time=0.021..0.027 rows=1\nloops=1)\n -> Index Scan\nusing proforms_person1_id_idx on proforms p (cost=0.00..10.67 rows=2\nwidth=4) (actual time=0.008..0.009 rows=1 loops=1)\n Index Cond:\n(person1_id = 287294)\n -> Index Scan\nusing acc_debts_pno_idx on acc_debts add (cost=0.00..8.38 rows=3\nwidth=16) (actual time=0.007..0.009 rows=1 loops=1)\n Index Cond:\n(add.pno = p.pno)\n Filter: (NOT\nadd.credit)\n -> Index Scan using\ndebts_desc_pkey on debts_desc dd (cost=0.00..5.06 rows=1 width=16)\n(actual time=92.408..92.408 rows=0 loops=1)\n Index Cond:\n(dd.debtid = add.debtid)\n Filter: (dd.active\nAND (NOT dd.paid) AND dd.has_proform AND (NOT dd.storned))\n -> Index Scan using\ndomeini_pkey on domeini dom (cost=0.00..5.24 rows=1 width=16) (never\nexecuted)\n Index Cond: (dd.domain_id\n= dom.id)\n -> Seq Scan on domain_type dt \n(cost=0.00..1.37 rows=37 width=9) (never executed)\n -> Seq Scan on services s \n(cost=0.00..2.44 rows=44 width=31) (never executed)\n -> Seq Scan on measures m (cost=0.00..1.14\nrows=14 width=8) (never executed)\n -> Index Scan using acc_debts_debtid_idx on\nacc_debts adc (cost=0.00..5.16 rows=1 width=8) (never executed)\n Index Cond: (dd.debtid = adc.debtid)\n Filter: adc.credit\n -> Index Scan using acc_clients_transact_no_uidx on\nacc_clients ac (cost=0.00..8.28 rows=1 width=8) (never executed)\n Index Cond: (ac.transact_no = adc.transact_no)\n Filter: (NOT ac.credit)\n -> Index Scan using invoices_ino_uidx on invoices i \n(cost=0.00..5.13 rows=1 width=8) (never executed)\n Index Cond: (ac.ino = i.ino)\n Total runtime: 92.612 ms\n(34 rows)\n\n\nKevin Grittner wrote:\n\nKaloyan Iliev Iliev <[email protected]> wrote:\n \n \n\nI thing they should be access only if there are rows from the\nwhere. 
Why the left join executes first?\n \n\n \nOut of curiosity, what happens if you consistently us JOIN clauses,\nrather than mixing that with commas?:\n \nexplain analyze\nSELECT\n DD.debtid,\n ADD.amount as saldo,\n DOM.fqdn ||DT.descr as domain_fqdn,\n S.descr_bg as service_descr_bg,\n ADD.pno,\n ADD.amount,\n M.name_bg as measure_name_bg,\n AC.ino,\n I.idate\n FROM debts_desc DD\n JOIN proforms P ON (ADD.pno = P.pno)\n JOIN acc_debts ADD ON (DD.debtid = ADD.debtid)\n JOIN services S ON (DD.serviceid = S.serviceid)\n JOIN measures M ON (DD.measure_id = M.measure_id)\n LEFT JOIN domeini DOM ON (DD.domain_id = DOM.id)\n LEFT JOIN domain_type DT ON (DOM.domain_type_id = DT.id)\n LEFT JOIN acc_debts ADC\n ON (DD.debtid = ADC.debtid AND ADC.credit)\n LEFT JOIN acc_clients AC\n ON (AC.transact_no = ADC.transact_no AND NOT AC.credit)\n LEFT JOIN invoices I ON (AC.ino = I.ino AND I.istatus = 0)\n WHERE DD.active\n AND NOT DD.paid\n AND DD.has_proform\n AND NOT DD.storned\n AND NOT ADD.credit\n AND P.person1_id = 287294\n;\n \n-Kevin\n\n \n\n\n\n",
"msg_date": "Tue, 07 Sep 2010 10:19:16 +0300",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about LEFT JOIN and query plan"
},
{
"msg_contents": "Hello again,\nI have another query which performance drops drastically after PG upgrade.\nI can not improve the plan no matter how hard I try. I try creating new \nindexes and rewrite the query with JOIN .. ON instead of commas but \nnothing happens.\nI will appreciate any suggestions.\nBest regards,\n Kaloyan Iliev\n\n==========================VERSION \n8.2.15===================================================\n\n\nregbgrgr=# SELECT version();\n \nversion \n---------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.15 on amd64-portbld-freebsd7.2, compiled by GCC cc (GCC) \n4.2.1 20070719 [FreeBSD]\n(1 row)\n\nregbgrgr=# explain analyze SELECT\n \nCOUNT (D.id) as all_domains_count\n FROM\n \ndomeini as D,\n \ndomainperson as DP,\n \nperson as P,\n \nrequest as R,\n \ndomain_status as DS\n WHERE\n \nR.number = D.request_number AND\n \nD.domain_status_id = DS.id AND\n \nDS.is_removed = 0 AND\n \nD.id = DP.domain_id AND\n \nDP.dp_type_id = 1 AND\n \nDP.person1_id = P.id AND ( LOWER (P.bulstat) = LOWER ('999999999') OR \nLOWER (P.bulstat) = 'bg'||'999999999');\n \nQUERY \nPLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=138.30..138.31 rows=1 width=4) (actual \ntime=0.804..0.806 rows=1 loops=1)\n -> Nested Loop (cost=74.70..138.29 rows=5 width=4) (actual \ntime=0.797..0.797 rows=0 loops=1)\n -> Nested Loop (cost=74.70..136.88 rows=5 width=8) (actual \ntime=0.793..0.793 rows=0 loops=1)\n -> Nested Loop (cost=74.70..135.44 rows=5 width=12) \n(actual time=0.791..0.791 rows=0 loops=1)\n -> Hash Join (cost=74.70..122.42 rows=5 width=4) \n(actual time=0.787..0.787 rows=0 loops=1)\n Hash Cond: (dp.person1_id = p.id)\n -> Bitmap Heap Scan on domainperson dp \n(cost=19.91..65.81 rows=472 width=8) (actual time=0.088..0.088 rows=1 \nloops=1)\n Recheck Cond: (dp_type_id = 1)\n -> Bitmap Index Scan on \ndomainperson_admin_person_uidx (cost=0.00..19.79 rows=472 width=0) \n(actual time=0.071..0.071 rows=474 loops=1)\n Index Cond: (dp_type_id = 1)\n -> Hash (cost=54.62..54.62 rows=14 width=4) \n(actual time=0.678..0.678 rows=0 loops=1)\n -> Seq Scan on person p \n(cost=0.00..54.62 rows=14 width=4) (actual time=0.675..0.675 rows=0 loops=1)\n Filter: ((lower(bulstat) = \n'999999999'::text) OR (lower(bulstat) = 'bg999999999'::text))\n -> Index Scan using domeini_pkey on domeini d \n(cost=0.00..2.59 rows=1 width=12) (never executed)\n Index Cond: (d.id = dp.domain_id)\n -> Index Scan using domain_status_pkey on domain_status \nds (cost=0.00..0.27 rows=1 width=4) (never executed)\n Index Cond: (d.domain_status_id = ds.id)\n Filter: (is_removed = 0)\n -> Index Scan using request_pkey on request r \n(cost=0.00..0.27 rows=1 width=4) (never executed)\n Index Cond: (r.number = d.request_number)\n Total runtime: 0.926 ms\n(21 rows)\n\nregbgrgr=# SHOW default_statistics_target ;\n default_statistics_target\n---------------------------\n 10\n(1 row)\n\n\n==========================VERSION \n8.4.4===================================================\nregbgrgr=# select version ();\n \nversion \n----------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.4 on amd64-portbld-freebsd8.1, compiled by GCC cc (GCC) \n4.2.1 20070719 [FreeBSD], 64-bit\n(1 row)\n\nregbgrgr=# explain analyze SELECT\n \nCOUNT (D.id) as all_domains_count\n FROM\n \ndomeini 
as D,\n \ndomainperson as DP,\n \nperson as P,\n \nrequest as R,\n \ndomain_status as DS\n WHERE\n \nR.number = D.request_number AND\n \nD.domain_status_id = DS.id AND\n \nDS.is_removed = 0 AND\n \nD.id = DP.domain_id AND\n \nDP.dp_type_id = 1 AND\n \nDP.person1_id = P.id AND ( LOWER (P.bulstat) = LOWER ('999999999') OR \nLOWER (P.bulstat) = 'bg'||'999999999');\n \nQUERY \nPLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=61113.19..61113.20 rows=1 width=4) (actual \ntime=6013.705..6013.706 rows=1 loops=1)\n -> Hash Join (cost=20859.23..61023.00 rows=36075 width=4) (actual \ntime=4553.945..6013.098 rows=598 loops=1)\n Hash Cond: (d.request_number = r.number)\n -> Hash Join (cost=18796.01..57800.47 rows=36075 width=8) \n(actual time=4177.313..5646.153 rows=598 loops=1)\n Hash Cond: (d.domain_status_id = ds.id)\n -> Hash Join (cost=18778.40..57286.82 rows=36075 \nwidth=12) (actual time=4176.838..5643.637 rows=1357 loops=1)\n Hash Cond: (dp.domain_id = d.id)\n -> Hash Join (cost=4671.42..40710.39 rows=36080 \nwidth=4) (actual time=3210.201..4621.977 rows=1357 loops=1)\n Hash Cond: (dp.person1_id = p.id)\n -> Seq Scan on domainperson dp \n(cost=0.00..33976.29 rows=272302 width=8) (actual time=0.026..1128.230 \nrows=279008 loops=1)\n Filter: (dp_type_id = 1)\n -> Hash (cost=4634.39..4634.39 rows=2962 \nwidth=4) (actual time=3210.050..3210.050 rows=1263 loops=1)\n -> Bitmap Heap Scan on person p \n(cost=64.33..4634.39 rows=2962 width=4) (actual time=114.401..3206.440 \nrows=1263 loops=1)\n Recheck Cond: ((lower(bulstat) = \n'999999999'::text) OR (lower(bulstat) = 'bg999999999'::text))\n -> BitmapOr (cost=64.33..64.33 \nrows=2969 width=0) (actual time=95.115..95.115 rows=0 loops=1)\n -> Bitmap Index Scan on \nperson_bulstat_lower_idx (cost=0.00..31.43 rows=1485 width=0) (actual \ntime=33.525..33.525 rows=1241 loops=1)\n Index Cond: \n(lower(bulstat) = '999999999'::text)\n -> Bitmap Index Scan on \nperson_bulstat_lower_idx (cost=0.00..31.43 rows=1485 width=0) (actual \ntime=61.584..61.584 rows=22 loops=1)\n Index Cond: \n(lower(bulstat) = 'bg999999999'::text)\n -> Hash (cost=8728.77..8728.77 rows=309377 \nwidth=12) (actual time=957.267..957.267 rows=309410 loops=1)\n -> Seq Scan on domeini d \n(cost=0.00..8728.77 rows=309377 width=12) (actual time=0.015..563.414 \nrows=309410 loops=1)\n -> Hash (cost=15.31..15.31 rows=184 width=4) (actual \ntime=0.455..0.455 rows=184 loops=1)\n -> Seq Scan on domain_status ds (cost=0.00..15.31 \nrows=184 width=4) (actual time=0.009..0.252 rows=184 loops=1)\n Filter: (is_removed = 0)\n -> Hash (cost=1030.43..1030.43 rows=62943 width=4) (actual \ntime=356.134..356.134 rows=62815 loops=1)\n -> Seq Scan on request r (cost=0.00..1030.43 rows=62943 \nwidth=4) (actual time=10.902..275.137 rows=62815 loops=1)\n Total runtime: 6014.029 ms\n(27 rows)\n\nregbgrgr=# show default_statistics_target ;\n default_statistics_target\n---------------------------\n 100\n(1 row)\n\n",
"msg_date": "Tue, 07 Sep 2010 16:13:01 +0300",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about LEFT JOIN and query plan"
},
{
"msg_contents": "Sorry for the spam.\nThe 8.2.15 plan was on an empty database.\nOn a full database the plan was almost the same. So the question is \ncould I speed up the plan?\nWhy the \"Hash Cond: (dp.person1_id = p.id)\" isn't used for index scan on \nthat table?\n\nBest regards,\nKaloya Iliev\n\nHere is the plan on a full database:\n\n==========================VERSION \n8.2.17===================================================\n\n \nversion \n---------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.17 on amd64-portbld-freebsd8.0, compiled by GCC cc (GCC) \n4.2.1 20070719 [FreeBSD]\n(1 row)\n\nregbgrgr=# SHOW default_statistics_target ;\n default_statistics_target\n---------------------------\n 10\n(1 row)\n\nregbgrgr=# explain analyze SELECT\n \nCOUNT (D.id) as all_domains_count\n FROM\n \ndomeini as D,\n \ndomainperson as DP,\n \nperson as P,\n \nrequest as R,\n \ndomain_status as DS\n WHERE\n \nR.number = D.request_number AND\n \nD.domain_status_id = DS.id AND\n \nDS.is_removed = 0 AND\n \nD.id = DP.domain_id AND\n \nDP.dp_type_id = 1 AND\n \nDP.person1_id = P.id AND ( LOWER (P.bulstat) = LOWER ('999999999') OR \nLOWER (P.bulstat) = 'bg'||'999999999');\n \nQUERY \nPLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=48342.78..48342.79 rows=1 width=4) (actual \ntime=2429.190..2429.192 rows=1 loops=1)\n -> Hash Join (cost=5142.26..48339.54 rows=1295 width=4) (actual \ntime=314.817..2427.752 rows=570 loops=1)\n Hash Cond: (d.request_number = r.number)\n -> Hash Join (cost=3088.49..45960.01 rows=1308 width=8) \n(actual time=37.001..2125.040 rows=570 loops=1)\n Hash Cond: (d.domain_status_id = ds.id)\n -> Nested Loop (cost=3064.88..45918.37 rows=1316 \nwidth=12) (actual time=35.584..2117.332 rows=1250 loops=1)\n -> Hash Join (cost=3064.88..40159.12 rows=1316 \nwidth=4) (actual time=35.506..2043.384 rows=1250 loops=1)\n Hash Cond: (dp.person1_id = p.id)\n -> Seq Scan on domainperson dp \n(cost=0.00..36010.68 rows=285441 width=8) (actual time=0.069..1459.818 \nrows=274533 loops=1)\n Filter: (dp_type_id = 1)\n -> Hash (cost=3048.93..3048.93 rows=1276 \nwidth=4) (actual time=35.206..35.206 rows=1157 loops=1)\n -> Bitmap Heap Scan on person p \n(cost=30.78..3048.93 rows=1276 width=4) (actual time=1.187..31.170 \nrows=1157 loops=1)\n Recheck Cond: ((lower(bulstat) = \n'999999999'::text) OR (lower(bulstat) = 'bg999999999'::text))\n -> BitmapOr (cost=30.78..30.78 \nrows=1276 width=0) (actual time=0.841..0.841 rows=0 loops=1)\n -> Bitmap Index Scan on \nperson_bulstat_lower_idx (cost=0.00..25.28 rows=1199 width=0) (actual \ntime=0.709..0.709 rows=1135 loops=1)\n Index Cond: \n(lower(bulstat) = '999999999'::text)\n -> Bitmap Index Scan on \nperson_bulstat_lower_idx (cost=0.00..4.86 rows=77 width=0) (actual \ntime=0.124..0.124 rows=22 loops=1)\n Index Cond: \n(lower(bulstat) = 'bg999999999'::text)\n -> Index Scan using domeini_pkey on domeini d \n(cost=0.00..4.36 rows=1 width=12) (actual time=0.043..0.046 rows=1 \nloops=1250)\n Index Cond: (d.id = dp.domain_id)\n -> Hash (cost=21.31..21.31 rows=184 width=4) (actual \ntime=1.380..1.380 rows=184 loops=1)\n -> Seq Scan on domain_status ds (cost=0.00..21.31 \nrows=184 width=4) (actual time=0.316..0.942 rows=184 loops=1)\n Filter: (is_removed = 0)\n -> Hash (cost=1026.01..1026.01 rows=59101 width=4) (actual \ntime=277.161..277.161 rows=59027 
loops=1)\n -> Seq Scan on request r (cost=0.00..1026.01 rows=59101 \nwidth=4) (actual time=0.075..131.951 rows=59027 loops=1)\n Total runtime: 2429.603 ms\n(26 rows)\n\n\nKaloyan Iliev Iliev wrote:\n> Hello again,\n> I have another query which performance drops drastically after PG \n> upgrade.\n> I can not improve the plan no matter how hard I try. I try creating \n> new indexes and rewrite the query with JOIN .. ON instead of commas \n> but nothing happens.\n> I will appreciate any suggestions.\n> Best regards,\n> Kaloyan Iliev\n>\n> ==========================VERSION \n> 8.2.15===================================================\n>\n>\n> regbgrgr=# SELECT version();\n> \n> version \n> --------------------------------------------------------------------------------------------------- \n>\n> PostgreSQL 8.2.15 on amd64-portbld-freebsd7.2, compiled by GCC cc \n> (GCC) 4.2.1 20070719 [FreeBSD]\n> (1 row)\n>\n> regbgrgr=# explain analyze SELECT\n> \n> COUNT (D.id) as all_domains_count\n> FROM\n> \n> domeini as D,\n> \n> domainperson as DP,\n> \n> person as P,\n> \n> request as R,\n> \n> domain_status as DS\n> WHERE\n> \n> R.number = D.request_number AND\n> \n> D.domain_status_id = DS.id AND\n> \n> DS.is_removed = 0 AND\n> \n> D.id = DP.domain_id AND\n> \n> DP.dp_type_id = 1 AND\n> \n> DP.person1_id = P.id AND ( LOWER (P.bulstat) = LOWER ('999999999') OR \n> LOWER (P.bulstat) = 'bg'||'999999999');\n> \n> QUERY \n> PLAN \n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n>\n> Aggregate (cost=138.30..138.31 rows=1 width=4) (actual \n> time=0.804..0.806 rows=1 loops=1)\n> -> Nested Loop (cost=74.70..138.29 rows=5 width=4) (actual \n> time=0.797..0.797 rows=0 loops=1)\n> -> Nested Loop (cost=74.70..136.88 rows=5 width=8) (actual \n> time=0.793..0.793 rows=0 loops=1)\n> -> Nested Loop (cost=74.70..135.44 rows=5 width=12) \n> (actual time=0.791..0.791 rows=0 loops=1)\n> -> Hash Join (cost=74.70..122.42 rows=5 width=4) \n> (actual time=0.787..0.787 rows=0 loops=1)\n> Hash Cond: (dp.person1_id = p.id)\n> -> Bitmap Heap Scan on domainperson dp \n> (cost=19.91..65.81 rows=472 width=8) (actual time=0.088..0.088 rows=1 \n> loops=1)\n> Recheck Cond: (dp_type_id = 1)\n> -> Bitmap Index Scan on \n> domainperson_admin_person_uidx (cost=0.00..19.79 rows=472 width=0) \n> (actual time=0.071..0.071 rows=474 loops=1)\n> Index Cond: (dp_type_id = 1)\n> -> Hash (cost=54.62..54.62 rows=14 \n> width=4) (actual time=0.678..0.678 rows=0 loops=1)\n> -> Seq Scan on person p \n> (cost=0.00..54.62 rows=14 width=4) (actual time=0.675..0.675 rows=0 \n> loops=1)\n> Filter: ((lower(bulstat) = \n> '999999999'::text) OR (lower(bulstat) = 'bg999999999'::text))\n> -> Index Scan using domeini_pkey on domeini d \n> (cost=0.00..2.59 rows=1 width=12) (never executed)\n> Index Cond: (d.id = dp.domain_id)\n> -> Index Scan using domain_status_pkey on domain_status \n> ds (cost=0.00..0.27 rows=1 width=4) (never executed)\n> Index Cond: (d.domain_status_id = ds.id)\n> Filter: (is_removed = 0)\n> -> Index Scan using request_pkey on request r \n> (cost=0.00..0.27 rows=1 width=4) (never executed)\n> Index Cond: (r.number = d.request_number)\n> Total runtime: 0.926 ms\n> (21 rows)\n>\n> regbgrgr=# SHOW default_statistics_target ;\n> default_statistics_target\n> ---------------------------\n> 10\n> (1 row)\n>\n>\n> ==========================VERSION \n> 8.4.4===================================================\n> regbgrgr=# 
select version ();\n> \n> version \n> ---------------------------------------------------------------------------------------------------------- \n>\n> PostgreSQL 8.4.4 on amd64-portbld-freebsd8.1, compiled by GCC cc (GCC) \n> 4.2.1 20070719 [FreeBSD], 64-bit\n> (1 row)\n>\n> regbgrgr=# explain analyze SELECT\n> \n> COUNT (D.id) as all_domains_count\n> FROM\n> \n> domeini as D,\n> \n> domainperson as DP,\n> \n> person as P,\n> \n> request as R,\n> \n> domain_status as DS\n> WHERE\n> \n> R.number = D.request_number AND\n> \n> D.domain_status_id = DS.id AND\n> \n> DS.is_removed = 0 AND\n> \n> D.id = DP.domain_id AND\n> \n> DP.dp_type_id = 1 AND\n> \n> DP.person1_id = P.id AND ( LOWER (P.bulstat) = LOWER ('999999999') OR \n> LOWER (P.bulstat) = 'bg'||'999999999');\n> \n> QUERY \n> PLAN \n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ \n>\n> Aggregate (cost=61113.19..61113.20 rows=1 width=4) (actual \n> time=6013.705..6013.706 rows=1 loops=1)\n> -> Hash Join (cost=20859.23..61023.00 rows=36075 width=4) (actual \n> time=4553.945..6013.098 rows=598 loops=1)\n> Hash Cond: (d.request_number = r.number)\n> -> Hash Join (cost=18796.01..57800.47 rows=36075 width=8) \n> (actual time=4177.313..5646.153 rows=598 loops=1)\n> Hash Cond: (d.domain_status_id = ds.id)\n> -> Hash Join (cost=18778.40..57286.82 rows=36075 \n> width=12) (actual time=4176.838..5643.637 rows=1357 loops=1)\n> Hash Cond: (dp.domain_id = d.id)\n> -> Hash Join (cost=4671.42..40710.39 rows=36080 \n> width=4) (actual time=3210.201..4621.977 rows=1357 loops=1)\n> Hash Cond: (dp.person1_id = p.id)\n> -> Seq Scan on domainperson dp \n> (cost=0.00..33976.29 rows=272302 width=8) (actual time=0.026..1128.230 \n> rows=279008 loops=1)\n> Filter: (dp_type_id = 1)\n> -> Hash (cost=4634.39..4634.39 rows=2962 \n> width=4) (actual time=3210.050..3210.050 rows=1263 loops=1)\n> -> Bitmap Heap Scan on person p \n> (cost=64.33..4634.39 rows=2962 width=4) (actual time=114.401..3206.440 \n> rows=1263 loops=1)\n> Recheck Cond: ((lower(bulstat) = \n> '999999999'::text) OR (lower(bulstat) = 'bg999999999'::text))\n> -> BitmapOr (cost=64.33..64.33 \n> rows=2969 width=0) (actual time=95.115..95.115 rows=0 loops=1)\n> -> Bitmap Index Scan on \n> person_bulstat_lower_idx (cost=0.00..31.43 rows=1485 width=0) (actual \n> time=33.525..33.525 rows=1241 loops=1)\n> Index Cond: \n> (lower(bulstat) = '999999999'::text)\n> -> Bitmap Index Scan on \n> person_bulstat_lower_idx (cost=0.00..31.43 rows=1485 width=0) (actual \n> time=61.584..61.584 rows=22 loops=1)\n> Index Cond: \n> (lower(bulstat) = 'bg999999999'::text)\n> -> Hash (cost=8728.77..8728.77 rows=309377 \n> width=12) (actual time=957.267..957.267 rows=309410 loops=1)\n> -> Seq Scan on domeini d \n> (cost=0.00..8728.77 rows=309377 width=12) (actual time=0.015..563.414 \n> rows=309410 loops=1)\n> -> Hash (cost=15.31..15.31 rows=184 width=4) (actual \n> time=0.455..0.455 rows=184 loops=1)\n> -> Seq Scan on domain_status ds \n> (cost=0.00..15.31 rows=184 width=4) (actual time=0.009..0.252 rows=184 \n> loops=1)\n> Filter: (is_removed = 0)\n> -> Hash (cost=1030.43..1030.43 rows=62943 width=4) (actual \n> time=356.134..356.134 rows=62815 loops=1)\n> -> Seq Scan on request r (cost=0.00..1030.43 \n> rows=62943 width=4) (actual time=10.902..275.137 rows=62815 loops=1)\n> Total runtime: 6014.029 ms\n> (27 rows)\n>\n> regbgrgr=# show default_statistics_target ;\n> 
default_statistics_target\n> ---------------------------\n> 100\n> (1 row)\n>\n>\n",
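On the index-scan question: the planner can only turn that hash condition into index
probes if there is an index whose leading column is person1_id; the plans above only
show one being used that leads on dp_type_id. If such an index is indeed missing,
something like the following (index name invented) would at least make the
nested-loop alternative available, though whether it beats the hash is still a cost
decision:

CREATE INDEX domainperson_person1_id_idx
    ON domainperson (person1_id, dp_type_id);
ANALYZE domainperson;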
"msg_date": "Tue, 07 Sep 2010 16:28:38 +0300",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about LEFT JOIN and query plan"
},
{
"msg_contents": "Kaloyan Iliev Iliev <[email protected]> wrote:\n \n> The 8.2.15 plan was on an empty database.\n> On a full database the plan was almost the same. So the question\n> is could I speed up the plan?\n \nSince this is an entirely new query which doesn't include a LEFT\nJOIN, it's not good to just tack it onto the other thread. Could\nyou please re-post with an appropriate subject and a little more\ninformation?:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nSchema (including indexes), configuration settings, and hardware\n(CPUs, storage system, and RAM) can be particularly significant.\n \nAlso, if you could *attach* the EXPLAIN ANALYZE output instead of\npasting it within the email, you'd save time for those trying to\nread it -- emails tend to get word-wrapped in a way which makes them\nhard to read without manual reformatting.\n \nThanks,\n \n-Kevin\n",
"msg_date": "Tue, 07 Sep 2010 09:29:46 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about LEFT JOIN and query plan"
},
{
"msg_contents": ">Kaloyan Iliev Iliev <[email protected]> wrote:\n> Kevin Grittner wrote:\n \n>> Out of curiosity, what happens if you consistently use JOIN\n>> clauses, rather than mixing that with commas?:\n \n> The plan improves. So can you explain why?\n \nCommas in a FROM clause bind more loosely than JOIN clauses,\nrearrangement from one side of an outer join to the other is a bit\ntricky, and the *_collapse_limit settings (which you have not shown)\ncan affect how much JOIN rearrangement is done for a complex query. \nOn a quick scan over your query it didn't appear that the\nrearrangement would break anything, so I wondered whether the\nplanner might do better if you made its job a bit easier by putting\nthe inner joins all on the left to start with and putting the tables\ncloser to the order of efficient access.\n \nIf you still see this difference with very high collapse limits,\nyour example might be a good one to support further work on the\noptimizer; but it would be more useful for that if you could create\na synthetic case to demonstrate the problem -- starting with\ncreation of tables, data, and indexes on which the different forms\nof the query yielded different plans.\n \n-Kevin\n",
"msg_date": "Tue, 07 Sep 2010 11:54:52 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about LEFT JOIN and query plan"
}
] |
[
{
"msg_contents": "Howdy,\n\nI'm running pgbench with a fairly large # of clients and getting this error in my PG log file.\n\nHere's the command:\n./pgbench -c 1100 testdb -l\n\nI get:\nLOG: could not send data to client: Broken pipe\n\n(I had to modify the pgbench.c file to make it go that high, i changed:\nMAXCLIENTS = 2048\n\nI thought maybe i was running out of resources so i checked my ulimits:\n\n ulimit -a\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) unlimited\nmax locked memory (kbytes, -l) unlimited\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 2048\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) unlimited\nreal-time priority (-r) 0\nstack size (kbytes, -s) unlimited\ncpu time (seconds, -t) unlimited\nmax user processes (-u) unlimited\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\n\n\nThis is Pg 8.3.10. Redhat 64bit, the system has 48 cores, 256G of ram.. \n\n\nAny idea what would be causing the error?\n\nthanks\n\nDave\n",
"msg_date": "Wed, 8 Sep 2010 10:46:29 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "David Kerr <[email protected]> writes:\n> I'm running pgbench with a fairly large # of clients and getting this error in my PG log file.\n> LOG: could not send data to client: Broken pipe\n\nThat error suggests that pgbench dropped the connection. You might be\nrunning into some bug or internal limitation in pgbench. Did you check\nto make sure pgbench isn't crashing?\n\n> (I had to modify the pgbench.c file to make it go that high, i changed:\n> MAXCLIENTS = 2048\n\nHm, you can't just arbitrarily change that number; it has to be less\nthan whatever number of open files select(2) supports. A look on my\nFedora 13 box suggests that 1024 is the limit there; I'm not sure which\nRed Hat variant you're using but I suspect it might have the same limit.\n\nAs of the 9.0 release, it's possible to run pgbench in a \"multi thread\"\nmode, and if you forced the subprocess rather than thread model it looks\nlike the select() limit would be per subprocess rather than global.\nSo I think you could get above the FD_SETSIZE limit with a bit of\nhacking if you were using 9.0's pgbench. No chance with 8.3 though.\n\n(This suggests BTW that we might want to expose the thread-versus-fork\nchoice in a slightly more user-controllable fashion, rather than\nassuming that threads are always better.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 2010 15:10:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe "
},
{
"msg_contents": "Tom Lane wrote:\n> As of the 9.0 release, it's possible to run pgbench in a \"multi thread\"\n> mode, and if you forced the subprocess rather than thread model it looks\n> like the select() limit would be per subprocess rather than global.\n> So I think you could get above the FD_SETSIZE limit with a bit of\n> hacking if you were using 9.0's pgbench. No chance with 8.3 though.\n> \n\nI believe David can do this easily enough by compiling a 9.0 source code \ntree with the \"--disable-thread-safety\" option. That's the simplest way \nto force the pgbench client to build itself using the multi-process \nmodel, rather than the multi-threaded one.\n\nIt's kind of futile to run pgbench simulating much more than a hundred \nor two clients before 9.0 anyway. Without multiple workers, you're \nlikely to just run into the process switching limitations within pgbench \nitself rather than testing server performance usefully. I've watched \nthe older pgbench program fail to come close to saturating an 8 core \nserver without running into its own limitations first.\n\nYou might run a 9.0 pgbench client against an 8.3 server though, if you \ndid the whole thing starting from pgbench database initialization over \nagain--the built-in tables like \"accounts\" changed to \"pgbench_accounts\" \nin 8.4. That might work, can't recall any changes that would prevent \nit; but as I haven't tested it yet I can't say for sure.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 08 Sep 2010 15:27:34 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> Tom Lane wrote:\n>> So I think you could get above the FD_SETSIZE limit with a bit of\n>> hacking if you were using 9.0's pgbench. No chance with 8.3 though.\n\n> I believe David can do this easily enough by compiling a 9.0 source code \n> tree with the \"--disable-thread-safety\" option.\n\nIt would take a bit more work than that, because the code still tries to\nlimit the client count based on FD_SETSIZE. He'd need to hack it so\nthat in non-thread mode, the limit is FD_SETSIZE per subprocess. I was\nsuggesting that an official patch to that effect would be a good thing.\n\n> It's kind of futile to run pgbench simulating much more than a hundred \n> or two clients before 9.0 anyway.\n\nYeah ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 2010 15:44:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe "
},
{
"msg_contents": "On Wed, Sep 08, 2010 at 03:27:34PM -0400, Greg Smith wrote:\n- Tom Lane wrote:\n- >As of the 9.0 release, it's possible to run pgbench in a \"multi thread\"\n- >mode, and if you forced the subprocess rather than thread model it looks\n- >like the select() limit would be per subprocess rather than global.\n- >So I think you could get above the FD_SETSIZE limit with a bit of\n- >hacking if you were using 9.0's pgbench. No chance with 8.3 though.\n- > \n- \n- I believe David can do this easily enough by compiling a 9.0 source code \n- tree with the \"--disable-thread-safety\" option. That's the simplest way \n- to force the pgbench client to build itself using the multi-process \n- model, rather than the multi-threaded one.\n- \n- It's kind of futile to run pgbench simulating much more than a hundred \n- or two clients before 9.0 anyway. Without multiple workers, you're \n- likely to just run into the process switching limitations within pgbench \n- itself rather than testing server performance usefully. I've watched \n- the older pgbench program fail to come close to saturating an 8 core \n- server without running into its own limitations first.\n- \n- You might run a 9.0 pgbench client against an 8.3 server though, if you \n- did the whole thing starting from pgbench database initialization over \n- again--the built-in tables like \"accounts\" changed to \"pgbench_accounts\" \n- in 8.4. That might work, can't recall any changes that would prevent \n- it; but as I haven't tested it yet I can't say for sure.\n\nThanks, I compiled the 9.0 RC1 branch with the --disable-thread-safety option\nand ran PG bench on my 8.3 DB it seemed to work fine, \n\nHowever, MAXCLIENTS is still 1024, if i hack it to switch it up to 2048 i \nget this:\nstarting vacuum...end.\nselect failed: Bad file descriptor <---------------\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1900\nnumber of threads: 1\nnumber of transactions per client: 10\nnumber of transactions actually processed: 3723/19000\ntps = 52.007642 (including connections establishing)\ntps = 82.579077 (excluding connections establishing)\n\n\nI'm not sure what Tom is referring to with the select(2) limitation, maybe I'm running\ninto it (where do i find that? /usr/include/sys/select.h? )\n\nshould i be running pgbench differently? I tried increasing the # of threads\nbut that didn't increase the number of backend's and i'm trying to simulate\n2000 physical backend processes.\n\nthanks\n\nDave\n",
"msg_date": "Wed, 8 Sep 2010 12:56:54 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "On Wed, Sep 08, 2010 at 03:44:36PM -0400, Tom Lane wrote:\n- Greg Smith <[email protected]> writes:\n- > Tom Lane wrote:\n- >> So I think you could get above the FD_SETSIZE limit with a bit of\n- >> hacking if you were using 9.0's pgbench. No chance with 8.3 though.\n- \n- > I believe David can do this easily enough by compiling a 9.0 source code \n- > tree with the \"--disable-thread-safety\" option.\n- \n- It would take a bit more work than that, because the code still tries to\n- limit the client count based on FD_SETSIZE. He'd need to hack it so\n- that in non-thread mode, the limit is FD_SETSIZE per subprocess. I was\n- suggesting that an official patch to that effect would be a good thing.\n\nYeah, that might be beyond me =)\n\nDave\n",
"msg_date": "Wed, 8 Sep 2010 12:58:17 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "David Kerr <[email protected]> writes:\n> should i be running pgbench differently? I tried increasing the # of threads\n> but that didn't increase the number of backend's and i'm trying to simulate\n> 2000 physical backend processes.\n\nThe odds are good that if you did get up that high, what you'd find is\npgbench itself being the bottleneck, not the server. What I'd suggest\nis running several copies of pgbench *on different machines*, all\nbeating on the one database server. Collating the results will be a bit\nmore of a PITA than if there were only one pgbench instance, but it'd\nbe a truer picture of real-world behavior.\n\nIt's probably also worth pointing out that 2000 backend processes is\nlikely to be a loser anyhow. If you're just doing this for academic\npurposes, fine, but if you're trying to set up a real system for 2000\nclients you almost certainly want to stick some connection pooling in\nthere.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Sep 2010 16:35:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe "
},
{
"msg_contents": "On Wed, Sep 08, 2010 at 04:35:28PM -0400, Tom Lane wrote:\n- David Kerr <[email protected]> writes:\n- > should i be running pgbench differently? I tried increasing the # of threads\n- > but that didn't increase the number of backend's and i'm trying to simulate\n- > 2000 physical backend processes.\n- \n- The odds are good that if you did get up that high, what you'd find is\n- pgbench itself being the bottleneck, not the server. What I'd suggest\n- is running several copies of pgbench *on different machines*, all\n- beating on the one database server. Collating the results will be a bit\n- more of a PITA than if there were only one pgbench instance, but it'd\n- be a truer picture of real-world behavior.\n- \n- It's probably also worth pointing out that 2000 backend processes is\n- likely to be a loser anyhow. If you're just doing this for academic\n- purposes, fine, but if you're trying to set up a real system for 2000\n- clients you almost certainly want to stick some connection pooling in\n- there.\n- \n- \t\t\tregards, tom lane\n- \n\nah that's a good idea, i'll have to give that a shot.\n\nActually, this is real.. that's 2000 connections - connection pooled out to \n20k or so. (although i'm pushing for closer to 1000 connections).\n\nI know that's not the ideal way to go, but it's what i've got to work with.\n\nIt IS a huge box though...\n\nThanks\n\nDave\n",
"msg_date": "Wed, 8 Sep 2010 13:44:43 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "David Kerr <[email protected]> wrote:\n \n> Actually, this is real.. that's 2000 connections - connection\n> pooled out to 20k or so. (although i'm pushing for closer to 1000\n> connections).\n> \n> I know that's not the ideal way to go, but it's what i've got to\n> work with.\n> \n> It IS a huge box though...\n \nFWIW, my benchmarks (and I've had a couple people tell me this is\nconsistent with what they've seen) show best throughput and best\nresponse time when the connection pool is sized such that the number\nof active PostgreSQL connections is limited to about twice the\nnumber of CPU cores plus the number of effective spindles. Either\nyou've got one heck of a machine, or your \"sweet spot\" for the\nconnection pool will be well under 1000 connections.\n \nIt is important that your connection pool queues requests when\nthings are maxed out, and quickly submit a new request when\ncompletion brings the number of busy connections below the maximum.\n \n-Kevin\n",
"msg_date": "Wed, 08 Sep 2010 15:56:24 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken\n\t pipe"
},
{
"msg_contents": "On Wed, Sep 08, 2010 at 03:56:24PM -0500, Kevin Grittner wrote:\n- David Kerr <[email protected]> wrote:\n- \n- > Actually, this is real.. that's 2000 connections - connection\n- > pooled out to 20k or so. (although i'm pushing for closer to 1000\n- > connections).\n- > \n- > I know that's not the ideal way to go, but it's what i've got to\n- > work with.\n- > \n- > It IS a huge box though...\n- \n- FWIW, my benchmarks (and I've had a couple people tell me this is\n- consistent with what they've seen) show best throughput and best\n- response time when the connection pool is sized such that the number\n- of active PostgreSQL connections is limited to about twice the\n- number of CPU cores plus the number of effective spindles. Either\n- you've got one heck of a machine, or your \"sweet spot\" for the\n- connection pool will be well under 1000 connections.\n- \n- It is important that your connection pool queues requests when\n- things are maxed out, and quickly submit a new request when\n- completion brings the number of busy connections below the maximum.\n- \n- -Kevin\n\nHmm, i'm not following you. I've got 48 cores. that means my sweet-spot\nactive connections would be 96. (i.e., less than the default max_connections\nshipped with PG) and this is a very very expensive machine.\n\nNow if i were to connection pool that out to 15 people per connection, \nthat's 1440 users \"total\" able to use my app at one time. (with only \n96 actually doing anything). not really great for a web-based app that \nwill have millions of users accessing it when we're fully ramped up.\n\nI've got a few plans to spread the load out across multiple machines\nbut at 1440 users per machine this wouldn't be sustanable.. \n\nI know that other people are hosting more than that on larger machines\nso i hope i'm ok.\n\nThanks\n\nDave\n",
"msg_date": "Wed, 8 Sep 2010 14:20:56 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "David Kerr <[email protected]> wrote:\n \n> Hmm, i'm not following you. I've got 48 cores. that means my\n> sweet-spot active connections would be 96.\n \nPlus your effective spindle count. That can be hard to calculate,\nbut you could start by just counting spindles on your drive array.\n \n> Now if i were to connection pool that out to 15 people per\n> connection,\n \nWhere did you get that number? We routinely route hundreds of\nrequests per second (many of them with 10 or 20 joins) from five or\nten thousand connected users through a pool of 30 connections. It\nstarted out bigger, we kept shrinking it until we hit our sweet\nspot. The reason we started bigger is we've got 40 spindles to go\nwith the 16 cores, but the active portion of the database is cached,\nwhich reduces our effective spindle count to zero.\n \n> that's 1440 users \"total\" able to use my app at one time. (with\n> only 96 actually doing anything). not really great for a web-based\n> app that will have millions of users accessing it when we're fully\n> ramped up.\n \nOnce you have enough active connections to saturate the resources,\nadding more connections just adds contention for resources and\ncontext switching cost -- it does nothing to help you service more\nconcurrent users. The key is, as I mentioned before, to have the\npooler queue requests above the limit and promptly get them running\nas slots are freed.\n \n-Kevin\n",
"msg_date": "Wed, 08 Sep 2010 16:51:17 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken\n\t pipe"
},
{
"msg_contents": "On Wed, Sep 08, 2010 at 04:51:17PM -0500, Kevin Grittner wrote:\n- David Kerr <[email protected]> wrote:\n- \n- > Hmm, i'm not following you. I've got 48 cores. that means my\n- > sweet-spot active connections would be 96.\n- \n- Plus your effective spindle count. That can be hard to calculate,\n- but you could start by just counting spindles on your drive array.\n\nWe've got this weird LPAR thing at our hosting center. it's tough\nfor me to do.\n\n- > Now if i were to connection pool that out to 15 people per\n- > connection,\n- \n- Where did you get that number? We routinely route hundreds of\n- requests per second (many of them with 10 or 20 joins) from five or\n- ten thousand connected users through a pool of 30 connections. It\n- started out bigger, we kept shrinking it until we hit our sweet\n- spot. The reason we started bigger is we've got 40 spindles to go\n- with the 16 cores, but the active portion of the database is cached,\n- which reduces our effective spindle count to zero.\n\nThat's encouraging. I don't remember where I got the number from,\nbut my pooler will be Geronimo, so i think it came in that context.\n\n- > that's 1440 users \"total\" able to use my app at one time. (with\n- > only 96 actually doing anything). not really great for a web-based\n- > app that will have millions of users accessing it when we're fully\n- > ramped up.\n- \n- Once you have enough active connections to saturate the resources,\n- adding more connections just adds contention for resources and\n- context switching cost -- it does nothing to help you service more\n- concurrent users. The key is, as I mentioned before, to have the\n- pooler queue requests above the limit and promptly get them running\n- as slots are freed.\n\nRight, I understand that. My assertian/hope is that the saturation point\non this machine should be higher than most.\n\nDave\n",
"msg_date": "Wed, 8 Sep 2010 15:00:11 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "David Kerr <[email protected]> wrote:\n \n> My assertian/hope is that the saturation point\n> on this machine should be higher than most.\n \nHere's another way to think about it -- how long do you expect your\naverage database request to run? (Our top 20 transaction functions\naverage about 3ms per execution.) What does that work out to in\ntransactions per second? That's the TPS you can achieve *on each\nconnection* if your pooler is efficient. If you've determined a\nconnection pool size based on hardware resources, divide your\nanticipated requests per second by that pool size. If the result is\nless than the TPS each connection can handle, you're in good shape. \nIf it's higher, you may need more hardware to satisfy the load.\n \nOf course, the only way to really know some of these numbers is to\ntest your actual application on the real hardware under realistic\nload; but sometimes you can get a reasonable approximation from\nearly tests or \"gut feel\" based on experience with similar\napplications. I strongly recommend trying incremental changes to\nvarious configuration parameters once you have real load, and\nmonitor the impact. The optimal settings are often not what you\nexpect.\n \nAnd if the pooling isn't producing the results you expect, you\nshould look at its configuration, or (if you can) try other pooler\nproducts.\n \n-Kevin\n",
"msg_date": "Wed, 08 Sep 2010 17:27:24 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken\n\t pipe"
},
{
"msg_contents": "On Wed, Sep 08, 2010 at 05:27:24PM -0500, Kevin Grittner wrote:\n- David Kerr <[email protected]> wrote:\n- \n- > My assertian/hope is that the saturation point\n- > on this machine should be higher than most.\n- \n- Here's another way to think about it -- how long do you expect your\n- average database request to run? (Our top 20 transaction functions\n- average about 3ms per execution.) What does that work out to in\n- transactions per second? That's the TPS you can achieve *on each\n- connection* if your pooler is efficient. If you've determined a\n- connection pool size based on hardware resources, divide your\n- anticipated requests per second by that pool size. If the result is\n- less than the TPS each connection can handle, you're in good shape. \n- If it's higher, you may need more hardware to satisfy the load.\n- \n- Of course, the only way to really know some of these numbers is to\n- test your actual application on the real hardware under realistic\n- load; but sometimes you can get a reasonable approximation from\n- early tests or \"gut feel\" based on experience with similar\n- applications. I strongly recommend trying incremental changes to\n- various configuration parameters once you have real load, and\n- monitor the impact. The optimal settings are often not what you\n- expect.\n- \n- And if the pooling isn't producing the results you expect, you\n- should look at its configuration, or (if you can) try other pooler\n- products.\n- \n- -Kevin\n- \n\nThanks for the insight. we're currently in performance testing of the\napp. Currently, the JVM is the bottleneck, once we get past that\ni'm sure it will be the database at which point I'll have the kind\nof data you're talking about.\n\nDave\n",
"msg_date": "Wed, 8 Sep 2010 15:29:59 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "Excerpts from David Kerr's message of mié sep 08 18:29:59 -0400 2010:\n\n> Thanks for the insight. we're currently in performance testing of the\n> app. Currently, the JVM is the bottleneck, once we get past that\n> i'm sure it will be the database at which point I'll have the kind\n> of data you're talking about.\n\nHopefully you're not running the JVM stuff in the same machine.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 09 Sep 2010 10:38:16 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "On Thu, Sep 09, 2010 at 10:38:16AM -0400, Alvaro Herrera wrote:\n- Excerpts from David Kerr's message of mié sep 08 18:29:59 -0400 2010:\n- \n- > Thanks for the insight. we're currently in performance testing of the\n- > app. Currently, the JVM is the bottleneck, once we get past that\n- > i'm sure it will be the database at which point I'll have the kind\n- > of data you're talking about.\n- \n- Hopefully you're not running the JVM stuff in the same machine.\n\nNope, this server is 100% allocated to the database.\n\nDave\n",
"msg_date": "Thu, 9 Sep 2010 08:12:30 -0700",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgbench could not send data to client: Broken pipe"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Of course, the only way to really know some of these numbers is to\n> test your actual application on the real hardware under realistic\n> load; but sometimes you can get a reasonable approximation from\n> early tests or \"gut feel\" based on experience with similar\n> applications.\n\nAnd that latter part only works if your gut is as accurate as Kevin's. \nFor most people, even a rough direct measurement is much more useful \nthan any estimate.\n\nAnyway, Kevin's point--that ultimately you cannot really be executing \nmore things at once than you have CPUs--is an accurate one to remember \nhere. One reason to put connection pooling in front of your database is \nthat it cannot handle thousands of active connections at once without \nswitching between them very frequently. That wastes both CPU and other \nresources with contention that could be avoided.\n\nIf you expect, say, 1000 simultaneous users, and you have 48 CPUs, there \nis only 48ms worth of CPU time available to each user per second on \naverage. If you drop that to 100 users using a pooler, they'll each get \n480ms worth of it. But no matter what, when the CPUs are busy enough to \nalways have a queued backlog, they will clear at best 48 * 1 second = \n48000 ms of work from that queue each second, best case, no matter how \nyou setup the ratios here.\n\nNow, imagine that the average query takes 24ms. The two scenarios work \nout like this:\n\nWithout pooler: takes 24 / 48 = 0.5 seconds to execute in parallel with \n999 other processes\n\nWith pooler: Worst-case, the pooler queue is filled and there are 900 \nusers ahead of this one, representing 21600 ms worth of work to clear \nbefore this request will become active. The query waits 21600 / 48000 = \n0.45 seconds to get runtime on the CPU. Once it starts, though, it's \nonly contending with 99 other processes, so it gets 1/100 of the \navailable resources. 480 ms of CPU time executes per second for this \nquery; it runs in 0.05 seconds at that rate. Total runtime: 0.45 + \n0.05 = 0.5 seconds!\n\nSo the incoming query in this not completely contrived case (I just \npicked the numbers to make the math even) takes the same amount of time \nto deliver a result either way. It's just a matter of whether it spends \nthat time waiting for a clear slice of CPU time, or fighting with a lot \nof other processes the whole way. Once the incoming connections exceeds \nCPUs by enough of a margin that a pooler can expect to keep all the CPUs \nbusy, it delivers results at the same speed as using a larger number of \nconnections. And since the \"without pooler\" case assumes perfect \nslicing of time into units, it's the unrealistic one; contention among \nthe 1000 processes will actually make it slower than the pooled version \nin the real world. You won't see anywhere close to 48000 ms worth of \nwork delivered per second anymore if the server is constantly losing its \nCPU cache, swapping among an average of an average of 21 \nconnections/CPU. Whereas if it's only slightly more than 2 connections \nper CPU, each of them should alternate between the two processes easily \nenough.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 09 Sep 2010 12:05:25 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken\t pipe"
},
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n> Kevin Grittner wrote:\n>> Of course, the only way to really know some of these numbers is\n>> to test your actual application on the real hardware under\n>> realistic load; but sometimes you can get a reasonable\n>> approximation from early tests or \"gut feel\" based on experience\n>> with similar applications.\n> \n> And that latter part only works if your gut is as accurate as\n> Kevin's. For most people, even a rough direct measurement is much\n> more useful than any estimate.\n \n:-) Indeed, when I talk about \"'gut feel' based on experience with\nsimilar applications\" I'm think of something like, \"When I had a\nquery with the same number of joins against tables about this size\nwith the same number and types of key columns, metrics showed that\nit took n ms and was CPU bound, and this new CPU and RAM hardware\nbenchmarks twice as fast, so I'll ballpark this at 2/3 the runtime\nas a gut feel, and follow up with measurements as soon as\npractical.\" That may not have been entirely clear....\n \n> So the incoming query in this not completely contrived case (I\n> just picked the numbers to make the math even) takes the same\n> amount of time to deliver a result either way.\n \nI'm gonna quibble with you here. Even if it gets done with the last\nrequest at the same time either way (which discounts the very real\ncontention and context switch costs), if you release the thundering\nherd of requests all at once they will all finish at about the same\ntime as that last request, while a queue allows a stream of\nresponses throughout. Since results start coming back almost\nimmediately, and stream through evenly, your *average response time*\nis nearly cut in half with the queue. And that's without figuring\nthe network congestion issues of having all those requests complete\nat the same time.\n \nIn my experience you can expect the response time benefit of\nreducing the size of your connection pool to match available\nresources to be more noticeable than the throughput improvements. \nThis directly contradicts many people's intuition, revealing the\ndownside of \"gut feel\".\n \n-Kevin\n",
"msg_date": "Thu, 09 Sep 2010 12:24:17 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken\t\n\t pipe"
},
{
"msg_contents": "Kevin Grittner wrote:\n> In my experience you can expect the response time benefit of\n> reducing the size of your connection pool to match available\n> resources to be more noticeable than the throughput improvements. \n> This directly contradicts many people's intuition, revealing the\n> downside of \"gut feel\".\n> \n\nThis is why I focused on showing there won't actually be a significant \nthroughput reduction, because that part is the most counterintuitive I \nthink. Accurately modeling the latency improvements of pooling requires \nmuch harder math, and it depends quite a bit on whether incoming traffic \nis even or in bursts. Easier in many cases to just swallow expectations \nand estimates and just try it instead.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Thu, 09 Sep 2010 14:07:22 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgbench could not send data to client: Broken\t\t pipe"
}
] |
[
{
"msg_contents": "Hi all. I Have the following query (tested in postgres 8.4 and 9.0rc1)\n\nSELECT distinct event0_.*\nFROM event event0_ inner join account account1_ on\nevent0_.account_id_owner=account1_.account_id\nLEFT OUTER JOIN friend friendcoll2_ ON\naccount1_.account_id=friendcoll2_.friend_account_id\nWHERE (event0_.account_id_owner=2 or friendcoll2_.account_id=2\n AND friendcoll2_.status=2 AND (event0_.is_recomended is null OR\nevent0_.is_recomended=false))\nORDER BY event0_.event_id DESC LIMIT 25\n\nNone of the tables listed here have more than a couple of thousand rows, and\nare all indexed. If I run that query as is, it will take up to 5 seconds,\nif I remove the ORDER BY and LIMIT, it will run into about 200 ms.\n\nBellow is the output from SET enable_seqscan = off;EXPLAIN ANALYZE VERBOSE.\nOn Postgresql 9.0 this takes 2.3 seconds, on 8.4 it takes 4-5 seconds. What\nI am noticing is that the Sort Key contains every row in event, not just\nevent_id. This seems to be causing the External Disk Merge. This will use\na memory merge if I have work_mem set to less than 30MB. If I set the SELECT\nto be SELECT distinct event0_.event_id, it will take about 19ms, but I need\nall rows returned.\n\nThanks all,\nMason\n\n Limit (cost=32124.36..32125.55 rows=25 width=164) (actual\ntime=2233.473..2301.552 rows=25 loops=1)\n Output: event0_.event_id, event0_.account_id_owner, event0_.event_name,\nevent0_.account_id_remote, event0_.path_id,\nevent0_.actual_user_workout_routine, event0_.user_workout_goal_id,\nevent0_.cdate, event0\n_.ctime, event0_.calories_burnt, event0_.distance_meters,\nevent0_.duration_seconds, event0_.path_name, event0_.routine_name,\nevent0_.dtype, event0_.item, event0_.is_deleted, event0_.is_recomended\n -> Unique (cost=32124.36..32128.26 rows=82 width=164) (actual\ntime=2233.471..2301.544 rows=25 loops=1)\n Output: event0_.event_id, event0_.account_id_owner,\nevent0_.event_name, event0_.account_id_remote, event0_.path_id,\nevent0_.actual_user_workout_routine, event0_.user_workout_goal_id,\nevent0_.cdate,\nevent0_.ctime, event0_.calories_burnt, event0_.distance_meters,\nevent0_.duration_seconds, event0_.path_name, event0_.routine_name,\nevent0_.dtype, event0_.item, event0_.is_deleted, event0_.is_recomended\n -> Sort (cost=32124.36..32124.57 rows=82 width=164) (actual\ntime=2233.470..2299.043 rows=4435 loops=1)\n Output: event0_.event_id, event0_.account_id_owner,\nevent0_.event_name, event0_.account_id_remote, event0_.path_id,\nevent0_.actual_user_workout_routine, event0_.user_workout_goal_id, event0_.c\ndate, event0_.ctime, event0_.calories_burnt, event0_.distance_meters,\nevent0_.duration_seconds, event0_.path_name, event0_.routine_name,\nevent0_.dtype, event0_.item, event0_.is_deleted, event0_.is_recomended\n Sort Key: event0_.event_id, event0_.account_id_owner,\nevent0_.event_name, event0_.account_id_remote, event0_.path_id,\nevent0_.actual_user_workout_routine, event0_.user_workout_goal_id, event0_\n.cdate, event0_.ctime, event0_.calories_burnt, event0_.distance_meters,\nevent0_.duration_seconds, event0_.path_name, event0_.routine_name,\nevent0_.dtype, event0_.item, event0_.is_deleted, event0_.is_recomend\ned\n Sort Method: external merge Disk: 6968kB\n -> Merge Join (cost=0.00..32121.75 rows=82 width=164)\n(actual time=0.105..197.393 rows=50895 loops=1)\n Output: event0_.event_id, event0_.account_id_owner,\nevent0_.event_name, event0_.account_id_remote, event0_.path_id,\nevent0_.actual_user_workout_routine, event0_.user_workout_goal_id, eve\nnt0_.cdate, 
event0_.ctime, event0_.calories_burnt, event0_.distance_meters,\nevent0_.duration_seconds, event0_.path_name, event0_.routine_name,\nevent0_.dtype, event0_.item, event0_.is_deleted, event0_.is_reco\nmended\n Merge Cond: (account1_.account_id =\nevent0_.account_id_owner)\n Join Filter: ((event0_.account_id_owner = 2) OR\n((friendcoll2_.account_id = 2) AND (friendcoll2_.status = 2) AND\n((event0_.is_recomended IS NULL) OR (NOT event0_.is_recomended))))\n -> Nested Loop Left Join (cost=0.00..31843.58\nrows=2155 width=10) (actual time=0.070..87.681 rows=3859 loops=1)\n Output: account1_.account_id,\nfriendcoll2_.account_id, friendcoll2_.status\n -> Index Scan using \"AccountIDPKIndex\" on\npublic.account account1_ (cost=0.00..209.05 rows=1890 width=4) (actual\ntime=0.025..0.981 rows=1890 loops=1)\n Output: account1_.account_id,\naccount1_.user_name, account1_.password, account1_.account_type,\naccount1_.is_active, account1_.is_quick_reg, account1_.name_last,\naccount1_.nam\ne_first, account1_.primary_image_url, account1_.ctime, account1_.cdate,\naccount1_.email_address, account1_.address_street_1,\naccount1_.address_street_2, account1_.address_city, account1_.address_state,\naccou\nnt1_.address_zip_code_1, account1_.address_zip_code_2,\naccount1_.date_of_birth, account1_.phone_home, account1_.phone_mobile,\naccount1_.phone_buisness, account1_.phone_buisness_ext, account1_.lon,\naccount1_.\nlat, account1_.dtype, account1_.last_login_date, account1_.about_user_blurb,\naccount1_.middle_initial, account1_.gender, account1_.address_country,\naccount1_.weight_lbs, account1_.network_size, account1_.pri\nmary_image_url_thumb, account1_.primary_image_url_small_thumb,\naccount1_.is_activity_partner_listed, account1_.relationship_status,\naccount1_.sec_profile_view, account1_.stats_build, account1_.stats_height_i\nnches, account1_.stats_activity_level, account1_.list_profile_age,\naccount1_.opt_in_third_party, account1_.opt_in_exclusive_offers,\naccount1_.opt_in_new_features, account1_.lat_lon_is_current, account1_.woc_\nno_of_entries, account1_.woc_weight_in_formula, account1_.woc_value_cardio,\naccount1_.woc_value_strength, account1_.woc_value_body_sculpting,\naccount1_.woc_value_body_flexibility, account1_.woc_value_weight_\nmanagement, account1_.woc_value_mental_vitality,\naccount1_.woc_value_heart_health, account1_.woc_value_general_fitness,\naccount1_.is_group, account1_.is_fivi_pro, account1_.is_group_open_invite,\naccount1_.mi\nssion_statement, account1_.twitter_account_name,\naccount1_.primary_photo_media_id, account1_.is_pro_listed,\naccount1_.registered_on, account1_.is_password_autogenerated,\naccount1_.is_set_password_hidden, acc\nount1_.is_notified_on_email_receipt, account1_.is_notified_on_friend_request\n -> Index Scan using \"friendIdx1\" on\npublic.friend friendcoll2_ (cost=0.00..16.59 rows=12 width=10) (actual\ntime=0.042..0.045 rows=1 loops=1890)\n Output: friendcoll2_.account_id,\nfriendcoll2_.friend_account_id, friendcoll2_.cdate, friendcoll2_.ctime,\nfriendcoll2_.status, friendcoll2_.friend_id\n Index Cond: (account1_.account_id =\nfriendcoll2_.friend_account_id)\n -> Materialize (cost=0.00..207.88 rows=2803\nwidth=164) (actual time=0.024..26.091 rows=241058 loops=1)\n Output: event0_.event_id,\nevent0_.account_id_owner, event0_.event_name, event0_.account_id_remote,\nevent0_.path_id, event0_.actual_user_workout_routine,\nevent0_.user_workout_goal_i\nd, event0_.cdate, event0_.ctime, event0_.calories_burnt,\nevent0_.distance_meters, event0_.duration_seconds, 
 event0_.path_name,\nevent0_.routine_name, event0_.dtype, event0_.item, event0_.is_deleted,\nevent0_.i\ns_recomended\n                     ->  Index Scan using \"eventIdxTstdAccountIdOwner\"\non public.event event0_  (cost=0.00..200.88 rows=2803 width=164) (actual\ntime=0.020..1.239 rows=2803 loops=1)\n                           Output: event0_.event_id,\nevent0_.account_id_owner, event0_.event_name, event0_.account_id_remote,\nevent0_.path_id, event0_.actual_user_workout_routine, event0_.user_workout_\ngoal_id, event0_.cdate, event0_.ctime, event0_.calories_burnt,\nevent0_.distance_meters, event0_.duration_seconds, event0_.path_name,\nevent0_.routine_name, event0_.dtype, event0_.item, event0_.is_deleted, eve\nnt0_.is_recomended\n Total runtime: 2303.210 ms",
"msg_date": "Fri, 10 Sep 2010 15:35:17 -0700",
"msg_from": "Mason Harding <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow SQL lookup due to every field being listed in SORT KEY"
},
{
"msg_contents": "* Mason Harding ([email protected]) wrote:\n> Hi all. I Have the following query (tested in postgres 8.4 and 9.0rc1)\n\nCan you provide \\d output from all the tables involved..?\n\nAlso, what does the query plan look like w/o 'enable_seqscan=off' (which\nis not a good setting to use...)? Increasing work_mem is often a good\nidea if your system can afford it based on the number/kind of queries\nrunning concurrently. Note that you can also increase that setting for\njust a single role, single session, or even single query.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Fri, 10 Sep 2010 21:59:03 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SQL lookup due to every field being listed in\n\tSORT KEY"
},
{
"msg_contents": "Mason Harding <[email protected]> writes:\n> Hi all. I Have the following query (tested in postgres 8.4 and 9.0rc1)\n\n> SELECT distinct event0_.*\n> FROM event event0_ inner join account account1_ on\n> event0_.account_id_owner=account1_.account_id\n> LEFT OUTER JOIN friend friendcoll2_ ON\n> account1_.account_id=friendcoll2_.friend_account_id\n> WHERE (event0_.account_id_owner=2 or friendcoll2_.account_id=2\n> AND friendcoll2_.status=2 AND (event0_.is_recomended is null OR\n> event0_.is_recomended=false))\n> ORDER BY event0_.event_id DESC LIMIT 25\n\n> None of the tables listed here have more than a couple of thousand rows, and\n> are all indexed. If I run that query as is, it will take up to 5 seconds,\n> if I remove the ORDER BY and LIMIT, it will run into about 200 ms.\n\nThe reason it's sorting by all the columns is the DISTINCT: that's\nimplemented by a sort-and-unique type of scheme so it has to be sure\nthat all the columns are sorted. You didn't show the non-ORDER-BY\nplan, but I suspect it's preferring a hash aggregation approach to\ndoing the DISTINCT if it doesn't have to produce sorted output.\n\nThe easiest way to make that query faster would be to raise work_mem\nenough so that the sort doesn't have to spill to disk.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Sep 2010 22:00:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SQL lookup due to every field being listed in SORT KEY "
},
{
"msg_contents": "* Tom Lane ([email protected]) wrote:\n> The reason it's sorting by all the columns is the DISTINCT\n\nYou might also verify that you actually need/*should* have the DISTINCT,\nif it's included today.. Often developers put that in without\nunderstanding why they're getting dups (which can often be due to\nmissing pieces from the JOIN clause or misunderstanding of the database\nschema...).\n\n\tStephen",
"msg_date": "Fri, 10 Sep 2010 22:03:37 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SQL lookup due to every field being listed in\n\tSORT KEY"
}
] |
[
{
"msg_contents": "I have a problem with some simple query:\n\nselect version();\n PostgreSQL 8.3.8 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.3.2\n20081105 (Red Hat 4.3.2-7)\nvacuum full bug_t1;\nvacuum full bug_t2;\nvacuum analyze bug_t1;\nvacuum analyze bug_t2;\nexplain analyze SELECT ze.id ,rr.id FROM bug_t2 AS rr join bug_t1 AS ze ON\n(ze.id=rr.id) WHERE (ze.ids=94543);\n\nResult is:\n\nMerge Join (cost=18.90..20.85 rows=1 width=8) (actual time=614.912..614.912\nrows=0 loops=1)\n Merge Cond: (rr.id = ze.id)\n -> Index Scan using bug_t2_i1 on bug_t2 rr (cost=0.00..17893.49\nrows=278417 width=4) (actual time=0.023..351.945 rows=278417 loops=1)\n -> Sort (cost=18.88..18.89 rows=4 width=4) (actual time=0.164..0.164\nrows=1 loops=1)\n Sort Key: ze.id\n Sort Method: quicksort Memory: 17kB\n -> Index Scan using bug_t1_i1 on bug_t1 ze (cost=0.00..18.84\nrows=4 width=4) (actual time=0.059..0.141 rows=4 loops=1)\n Index Cond: (ids = 94543)\n Total runtime: 615.003 ms\n\nBut after\n\nSET enable_mergejoin=off;\n\nresult is:\n\nNested Loop (cost=0.00..52.06 rows=1 width=8) (actual time=0.084..0.084\nrows=0 loops=1)\n -> Index Scan using bug_t1_i1 on bug_t1 ze (cost=0.00..18.84 rows=4\nwidth=4) (actual time=0.016..0.028 rows=4 loops=1)\n Index Cond: (ids = 94543)\n -> Index Scan using bug_t2_i1 on bug_t2 rr (cost=0.00..8.29 rows=1\nwidth=4) (actual time=0.008..0.008 rows=0 loops=4)\n Index Cond: (rr.id = ze.id)\nTotal runtime: 0.154 ms\n\n\nI think that problem is with estimation of total mergejoin time, why is it\nso small (18.90..20.85) while estimates of subqueries (especially first) is\nhigh (0..17893). Merging time should be high, because it needs to scan\nalmost all bug t2 table. Am I right?\n\nArtur Zajac\n\n\n\n\n",
"msg_date": "Sun, 12 Sep 2010 19:34:00 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with mergejoin performance (some bug?)"
}
] |
[
{
"msg_contents": "\nI have a problem with some simple query:\n\nselect version();\n\tPostgreSQL 8.3.8 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n4.3.2 20081105 (Red Hat 4.3.2-7)\nvacuum full bug_t1;\nvacuum full bug_t2;\nvacuum analyze bug_t1;\nvacuum analyze bug_t2;\nshow default_statistics_target;\n\t1000\nexplain analyze SELECT ze.id ,rr.id FROM bug_t2 AS rr join bug_t1 AS ze ON\n(ze.id=rr.id) WHERE (ze.ids=94543);\n\nResult is:\n\nMerge Join (cost=18.90..20.85 rows=1 width=8) (actual time=614.912..614.912\nrows=0 loops=1)\n Merge Cond: (rr.id = ze.id)\n -> Index Scan using bug_t2_i1 on bug_t2 rr (cost=0.00..17893.49\nrows=278417 width=4) (actual time=0.023..351.945 rows=278417 loops=1)\n -> Sort (cost=18.88..18.89 rows=4 width=4) (actual time=0.164..0.164\nrows=1 loops=1)\n Sort Key: ze.id\n Sort Method: quicksort Memory: 17kB\n -> Index Scan using bug_t1_i1 on bug_t1 ze (cost=0.00..18.84\nrows=4 width=4) (actual time=0.059..0.141 rows=4 loops=1)\n Index Cond: (ids = 94543)\n Total runtime: 615.003 ms\n\nBut after\n\nSET enable_mergejoin=off;\n\nresult is:\n\nNested Loop (cost=0.00..52.06 rows=1 width=8) (actual time=0.084..0.084\nrows=0 loops=1)\n -> Index Scan using bug_t1_i1 on bug_t1 ze (cost=0.00..18.84 rows=4\nwidth=4) (actual time=0.016..0.028 rows=4 loops=1)\n Index Cond: (ids = 94543)\n -> Index Scan using bug_t2_i1 on bug_t2 rr (cost=0.00..8.29 rows=1\nwidth=4) (actual time=0.008..0.008 rows=0 loops=4)\n Index Cond: (rr.id = ze.id)\nTotal runtime: 0.154 ms\n\n\nI think that problem is with estimation of total mergejoin time, why is it\nso small (18.90..20.85) while estimates of subqueries (especially first) is\nhigh (0..17893). Merging time should be high, because it needs to scan\nalmost all bug t2 table. Am I right?\n\n\nArtur Zajac\n\n\n\n\n",
"msg_date": "Mon, 13 Sep 2010 07:49:45 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with mergejoin performance"
},
{
"msg_contents": "<[email protected]> writes:\n> Merge Join (cost=18.90..20.85 rows=1 width=8) (actual time=614.912..614.912\n> rows=0 loops=1)\n> Merge Cond: (rr.id = ze.id)\n> -> Index Scan using bug_t2_i1 on bug_t2 rr (cost=0.00..17893.49\n> rows=278417 width=4) (actual time=0.023..351.945 rows=278417 loops=1)\n> -> Sort (cost=18.88..18.89 rows=4 width=4) (actual time=0.164..0.164\n> rows=1 loops=1)\n> Sort Key: ze.id\n> Sort Method: quicksort Memory: 17kB\n> -> Index Scan using bug_t1_i1 on bug_t1 ze (cost=0.00..18.84\n> rows=4 width=4) (actual time=0.059..0.141 rows=4 loops=1)\n> Index Cond: (ids = 94543)\n> Total runtime: 615.003 ms\n\n> I think that problem is with estimation of total mergejoin time, why is it\n> so small (18.90..20.85) while estimates of subqueries (especially first) is\n> high (0..17893). Merging time should be high, because it needs to scan\n> almost all bug t2 table. Am I right?\n\nActually, a mergejoin can stop short of processing all of either input,\nif it exhausts the keys from the other input first; and the planner\nknows that. In this case it evidently thinks that the maximum key from\nbug_t1 is much less than the maximum key from bug_t2, so that most of\nthe indexscan on bug_t2 won't have to be executed. With only 4 rows\nin bug_t1 it doesn't seem very likely that it would get this wrong.\nWhat exactly are those join key values, and what are the min/max values\nin bug_t2?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2010 10:26:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with mergejoin performance "
},
{
"msg_contents": "[ Please keep the list cc'd ]\n\n<[email protected]> writes:\n>> What exactly are those join key values, and what are the min/max values\n>> in bug_t2?\n\n> min of Bug_t1.id = 42,\n> max of Bug_t1.id = 393065,\n> min of Bug_t2.id = 352448,\n> max of Bug_t2.id = 388715,\n\n> select count(id) from bug_t2\n> \t29\n> select count(*) from bug_t2\n> \t278417\n> And because there is only 29 not null records in bug_t2:\n\nOh, that's the real problem: thousands of nulls in bug_t2.\nThe planner is thinking those don't have to be scanned, but\nthe executor wasn't on the same page until very recently:\nhttp://archives.postgresql.org/pgsql-committers/2010-05/msg00334.php\n\nThat patch will be in the next 8.3 update.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2010 11:16:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with mergejoin performance "
}
] |
[
{
"msg_contents": "Hi all,\nI have a view v_table defined as following:\n\nselect a,b,c,d,e,f\nfrom t_table\nsort by a,b,c;\n\nthe usage pattern of this view is the following:\n\nselect distinct(a) from v_table;\nselect distinct(b) from v_table where a = \"XXX\";\nselect distinct(c) from v_table where a = \"XXX\" and b = \"YYYY\";\n\nbecause of that sort in the view definition the first query above\ntakes not less than 3 seconds. I have solved this performance issue\nremoving the sort from the view definition and putting it in the\nselect reducing the time from > 3secons to < 150ms.\n\nCan not the optimizer take rid of that useless sort on those\nkind of queries ?\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Mon, 13 Sep 2010 11:47:24 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Useless sort by"
},
{
"msg_contents": "Gaetano Mendola <[email protected]> writes:\n> because of that sort in the view definition the first query above\n> takes not less than 3 seconds. I have solved this performance issue\n> removing the sort from the view definition and putting it in the\n> select reducing the time from > 3secons to < 150ms.\n\n> Can not the optimizer take rid of that useless sort on those\n> kind of queries ?\n\nIt is not the optimizer's job to second-guess the user on whether a sort\nis really needed there. If we did make it throw away non-top-level\nsorts, we'd have hundreds of users screaming loudly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2010 10:44:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by "
},
{
"msg_contents": "On 09/13/2010 04:44 PM, Tom Lane wrote:\n> Gaetano Mendola <[email protected]> writes:\n>> because of that sort in the view definition the first query above\n>> takes not less than 3 seconds. I have solved this performance issue\n>> removing the sort from the view definition and putting it in the\n>> select reducing the time from > 3secons to < 150ms.\n> \n>> Can not the optimizer take rid of that useless sort on those\n>> kind of queries ?\n> \n> It is not the optimizer's job to second-guess the user on whether a sort\n> is really needed there. If we did make it throw away non-top-level\n> sorts, we'd have hundreds of users screaming loudly.\n\nOf course I'm not suggesting to take away the \"sort by\" and give the user\nan unsorted result, I'm asking why the the optimizer in cases like:\n\n select unique(a) from v_table_with_order_by;\n\ndoesn't takes away the \"order by\" inside the view and puts it back \"rewriting the\nquery like this:\n\n select unique(a) from v_table_without_order_by\n order by a;\n\nthen the user will not know about it. The result is the same but 30 times\nfaster (in my case).\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Mon, 13 Sep 2010 17:55:46 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "Gaetano Mendola <[email protected]> writes:\n> Of course I'm not suggesting to take away the \"sort by\" and give the user\n> an unsorted result, I'm asking why the the optimizer in cases like:\n\n> select unique(a) from v_table_with_order_by;\n\n> doesn't takes away the \"order by\" inside the view and puts it back \"rewriting the\n> query like this:\n\n> select unique(a) from v_table_without_order_by\n> order by a;\n\nThat changes the order in which the rows are fed to unique(a). The\nprincipal real-world use for a non-top-level ORDER BY is exactly to\ndetermine the order in which rows are fed to a function, so we will\nhave a revolt on our hands if we break that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2010 12:48:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by "
},
{
"msg_contents": "On Mon, Sep 13, 2010 at 6:48 PM, Tom Lane <[email protected]> wrote:\n> Gaetano Mendola <[email protected]> writes:\n>> Of course I'm not suggesting to take away the \"sort by\" and give the user\n>> an unsorted result, I'm asking why the the optimizer in cases like:\n>\n>> select unique(a) from v_table_with_order_by;\n>\n>> doesn't takes away the \"order by\" inside the view and puts it back \"rewriting the\n>> query like this:\n>\n>> select unique(a) from v_table_without_order_by\n>> order by a;\n>\n> That changes the order in which the rows are fed to unique(a). The\n> principal real-world use for a non-top-level ORDER BY is exactly to\n> determine the order in which rows are fed to a function, so we will\n> have a revolt on our hands if we break that.\n\nI see your point, but some functions like: unique, count are not affected\nby the order of values fed, and I don't think either that unique has to\ngive out the unique values in the same fed order.\n\n\nRegards\nGaetano Mendola\n\n-- \ncpp-today.blogspot.com\n",
"msg_date": "Mon, 13 Sep 2010 19:09:11 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "> I see your point, but some functions like: unique, count are not affected\n> by the order of values fed, and I don't think either that unique has to\n> give out the unique values in the same fed order.\n\nSure. You'd need additional metadata about which aggregates care about\nsort order and which don't. Our system is more sensitive to this sort\nof thing and so we've actually implemented this, but in the absence of\nthis \"order-sensitive\" flag, you have to assume sorts matter (or\nyou're leaving a *lot* of room for shooting yourself in the foot).\n\nEven with this, it seems a little dodgy to mess up sort order in a\ntop-level query. Relational databases are ostensibly relational, but I\nimagine in practice, it may be a toss-up in the trade-off between the\nperformance benefits of what you are suggesting and the breaking of\nimplicit non-relational behaviors that users have been taking for\ngranted.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Mon, 13 Sep 2010 10:24:42 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "On Mon, Sep 13, 2010 at 11:09 AM, Gaetano Mendola <[email protected]> wrote:\n> On Mon, Sep 13, 2010 at 6:48 PM, Tom Lane <[email protected]> wrote:\n>> Gaetano Mendola <[email protected]> writes:\n>>> Of course I'm not suggesting to take away the \"sort by\" and give the user\n>>> an unsorted result, I'm asking why the the optimizer in cases like:\n>>\n>>> select unique(a) from v_table_with_order_by;\n>>\n>>> doesn't takes away the \"order by\" inside the view and puts it back \"rewriting the\n>>> query like this:\n>>\n>>> select unique(a) from v_table_without_order_by\n>>> order by a;\n>>\n>> That changes the order in which the rows are fed to unique(a). The\n>> principal real-world use for a non-top-level ORDER BY is exactly to\n>> determine the order in which rows are fed to a function, so we will\n>> have a revolt on our hands if we break that.\n>\n> I see your point, but some functions like: unique, count are not affected\n> by the order of values fed, and I don't think either that unique has to\n> give out the unique values in the same fed order.\n\nFirst off, having a top level order by in a view is considered poor\npractice. It adds an overhead you may or may not need each time the\nview is accessed, and there's no simple way to avoid it once it's in\nthere.\n\nOn top of that you'd be adding complexity to the planner that would\nmake it slower and more likely to make mistakes, all to fix a problem\nthat I and most others don't have.\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Mon, 13 Sep 2010 13:08:05 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "On 13/09/10 19:48, Tom Lane wrote:\n> Gaetano Mendola<[email protected]> writes:\n>> Of course I'm not suggesting to take away the \"sort by\" and give the user\n>> an unsorted result, I'm asking why the the optimizer in cases like:\n>\n>> select unique(a) from v_table_with_order_by;\n>\n>> doesn't takes away the \"order by\" inside the view and puts it back \"rewriting the\n>> query like this:\n>\n>> select unique(a) from v_table_without_order_by\n>> order by a;\n>\n> That changes the order in which the rows are fed to unique(a). The\n> principal real-world use for a non-top-level ORDER BY is exactly to\n> determine the order in which rows are fed to a function, so we will\n> have a revolt on our hands if we break that.\n\nYou could check for volatile functions. I think this could be done \nsafely. However, it doesn't seem worthwhile, it would be a fair amount \nof code, and it's not usually a good idea to put an ORDER BY in a view \nor subquery anyway unless you also have volatile functions in there, or \nyou want to coerce the optimizer to choose a certain plan.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 14 Sep 2010 10:10:37 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "> You could check for volatile functions. I think this could be done safely.\n\nI don't think that's enough. A UDA like last() could have an immutable\nsfunc, but still be sensitive to the sort order. I think you'd need\nsomething like a special order-sensitive aggregate definition flag.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Tue, 14 Sep 2010 08:09:18 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "I presume there is more usage of this view than just those 3 queries\n(otherwise, for a start there would be no need for d, e, f in the view\ndefinition)\n\nWhy not just rewrite these 3 queries to go directly off the main table? Or,\ncreate a different view without the sort_by in its definition?\n\nOr, if these are used very frequently and performance is critical, consider\n(i) caching these results in the application layer, with logic to understand\nwhen they need to be updated, or (b) maintaining extra tables that just\ncontain (a) (a,b) and (a,b,c)\n\nObjectively, it's always better to optimize the SQL and application level\nfor the specific needs of the situation before concluding that the\nunderlying database engine should do these optimizations automatically, and\nit seems like there are a number of options you could explore here.\n\nCheers\nDave\n\nOn Mon, Sep 13, 2010 at 4:47 AM, Gaetano Mendola <[email protected]> wrote:\n\n> Hi all,\n> I have a view v_table defined as following:\n>\n> select a,b,c,d,e,f\n> from t_table\n> sort by a,b,c;\n>\n> the usage pattern of this view is the following:\n>\n> select distinct(a) from v_table;\n> select distinct(b) from v_table where a = \"XXX\";\n> select distinct(c) from v_table where a = \"XXX\" and b = \"YYYY\";\n>\n> because of that sort in the view definition the first query above\n> takes not less than 3 seconds. I have solved this performance issue\n> removing the sort from the view definition and putting it in the\n> select reducing the time from > 3secons to < 150ms.\n>\n> Can not the optimizer take rid of that useless sort on those\n> kind of queries ?\n>\n>\n> Regards\n> Gaetano Mendola\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI presume there is more usage of this view than just those 3 queries (otherwise, for a start there would be no need for d, e, f in the view definition)Why not just rewrite these 3 queries to go directly off the main table? Or, create a different view without the sort_by in its definition?\nOr, if these are used very frequently and performance is critical, consider (i) caching these results in the application layer, with logic to understand when they need to be updated, or (b) maintaining extra tables that just contain (a) (a,b) and (a,b,c) \nObjectively, it's always better to optimize the SQL and application level for the specific needs of the situation before concluding that the underlying database engine should do these optimizations automatically, and it seems like there are a number of options you could explore here.\nCheersDaveOn Mon, Sep 13, 2010 at 4:47 AM, Gaetano Mendola <[email protected]> wrote:\nHi all,\nI have a view v_table defined as following:\n\nselect a,b,c,d,e,f\nfrom t_table\nsort by a,b,c;\n\nthe usage pattern of this view is the following:\n\nselect distinct(a) from v_table;\nselect distinct(b) from v_table where a = \"XXX\";\nselect distinct(c) from v_table where a = \"XXX\" and b = \"YYYY\";\n\nbecause of that sort in the view definition the first query above\ntakes not less than 3 seconds. 
I have solved this performance issue\nremoving the sort from the view definition and putting it in the\nselect reducing the time from > 3secons to < 150ms.\n\nCan not the optimizer take rid of that useless sort on those\nkind of queries ?\n\n\nRegards\nGaetano Mendola\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 14 Sep 2010 11:15:08 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "On Tue, Sep 14, 2010 at 6:15 PM, Dave Crooke <[email protected]> wrote:\n> I presume there is more usage of this view than just those 3 queries\n> (otherwise, for a start there would be no need for d, e, f in the view\n> definition)\n>\n> Why not just rewrite these 3 queries to go directly off the main table? Or,\n> create a different view without the sort_by in its definition?\n>\n> Or, if these are used very frequently and performance is critical, consider\n> (i) caching these results in the application layer, with logic to understand\n> when they need to be updated, or (b) maintaining extra tables that just\n> contain (a) (a,b) and (a,b,c)\n>\n> Objectively, it's always better to optimize the SQL and application level\n> for the specific needs of the situation before concluding that the\n> underlying database engine should do these optimizations automatically, and\n> it seems like there are a number of options you could explore here.\n\nQuestion here is not how to do it right, but how to make the optimizer smarter\nthan it is now, taking rid of work not needed.\n\nRegards\nGaetano Mendola\n\n-- \ncpp-today.blogspot.com\n",
"msg_date": "Tue, 14 Sep 2010 19:06:32 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "On Mon, Sep 13, 2010 at 1:09 PM, Gaetano Mendola <[email protected]> wrote:\n> I see your point, but some functions like: unique, count are not affected\n> by the order of values fed, and I don't think either that unique has to\n> give out the unique values in the same fed order.\n\nGee, I'd sure expect it to.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Wed, 22 Sep 2010 20:54:22 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "\n\n---- Original message ----\n>Date: Wed, 22 Sep 2010 20:54:22 -0400\n>From: [email protected] (on behalf of Robert Haas <[email protected]>)\n>Subject: Re: [PERFORM] Useless sort by \n>To: Gaetano Mendola <[email protected]>\n>Cc: Tom Lane <[email protected]>,[email protected]\n>\n>On Mon, Sep 13, 2010 at 1:09 PM, Gaetano Mendola <[email protected]> wrote:\n>> I see your point, but some functions like: unique, count are not affected\n>> by the order of values fed, and I don't think either that unique has to\n>> give out the unique values in the same fed order.\n>\n>Gee, I'd sure expect it to.\n\nSpoken like a dyed in the wool COBOL coder. The RM has no need for order; it's set based. I've dabbled in PG for some time, and my sense is increasingly that PG developers are truly code oriented, not database (set) oriented. \n\nrobert\n>\n>-- \n>Robert Haas\n>EnterpriseDB: http://www.enterprisedb.com\n>The Enterprise Postgres Company\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Sep 2010 23:05:31 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "[email protected] wrote:\n> Spoken like a dyed in the wool COBOL coder. The RM has no need for order; it's set based. I've dabbled in PG for some time, and my sense is increasingly that PG developers are truly code oriented, not database (set) oriented. \n> \n\nI can't tell if you meant for this to be insulting or my reading it that \nway is wrong, but it certainly wasn't put in a helpful tone. Let me \nsummarize for you. You've been told that putting ORDER BY into a view \nis a generally poor idea anyway, that it's better to find ways avoid \nthis class of concern altogether. There are significant non-obvious \ntechnical challenges behind actually implementing the behavior you'd \nlike to see; the concerns raised by Tom and Maciek make your idea \nimpractical even if it were desired. And for every person like yourself \nwho'd see the benefit you're looking for, there are far more that would \nfind a change in this area a major problem. The concerns around \nbreakage due to assumed but not required aspects of the relational model \nare the ones the users of the software will be confused by, not the \ndevelopers of it. You have the classification wrong; the feedback \nyou've gotten here is from the developers being user oriented, not \ntheory oriented or code oriented. \n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Thu, 23 Sep 2010 00:01:08 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "On Wed, Sep 22, 2010 at 10:01 PM, Greg Smith <[email protected]> wrote:\n> [email protected] wrote:\n>>\n>> Spoken like a dyed in the wool COBOL coder. The RM has no need for order;\n>> it's set based. I've dabbled in PG for some time, and my sense is\n>> increasingly that PG developers are truly code oriented, not database (set)\n>> oriented.\n>\n> I can't tell if you meant for this to be insulting or my reading it that way\n> is wrong, but it certainly wasn't put in a helpful tone. Let me summarize\n> for you. You've been told that putting ORDER BY into a view is a generally\n> poor idea anyway, that it's better to find ways avoid this class of concern\n> altogether.\n\nIt's been a few years since I've read the SQL spec, but doesn't it\nactually forbid order by in views but pgsql allows it anyway?\n\nLike you said, order by in a view is a bad practice to get into, and\nit's definitely not what a \"set oriented\" person would do. it's what\na code oriented person would do.\n",
"msg_date": "Wed, 22 Sep 2010 22:18:27 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "On Wed, Sep 22, 2010 at 11:05 PM, <[email protected]> wrote:\n> Spoken like a dyed in the wool COBOL coder. The RM has no need for order; it's set based. I've dabbled in PG for some time, and my sense is increasingly that PG developers are truly code oriented, not database (set) oriented.\n\nI'm struggling to think of an adequate response to this. I think I'm\ngoing to go with: huh?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Thu, 23 Sep 2010 00:46:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "On 09/23/2010 05:05 AM, [email protected] wrote:\n> Spoken like a dyed in the wool COBOL coder. The RM has no need for order; it's set based. I've dabbled in PG for some time, and my sense is increasingly that PG developers are truly code oriented, not database (set) oriented. \n\nThat's a bit harsh. Your sense if fooling you.\n\nRegards\nGaetano Mendola\n",
"msg_date": "Mon, 25 Oct 2010 12:09:28 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Useless sort by"
}
] |
[
{
"msg_contents": "All,\n\nI've been looking at pg_stat_user_tables (in 8.3, because of a project I\nhave), and it appears that autovacuum, and only autovaccum, updates the\ndata for this view. This means that one can never have data in\npg_stat_user_tables which is completely up-to-date, and if autovacuum is\noff, the view is useless.\n\nAm I reading this correctly? If so, shouldn't this be a TODO -- or is\nit fixed already in 9.0?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 13 Sep 2010 16:06:06 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Where does data in pg_stat_user_tables come from?"
},
{
"msg_contents": "On Mon, 2010-09-13 at 16:06 -0700, Josh Berkus wrote:\n> All,\n> \n> I've been looking at pg_stat_user_tables (in 8.3, because of a project I\n> have), and it appears that autovacuum, and only autovaccum, updates the\n> data for this view. This means that one can never have data in\n> pg_stat_user_tables which is completely up-to-date, and if autovacuum is\n> off, the view is useless.\n\nAs I recall its kept in shared_buffers (in some kind of counter) and\nupdated only when it is requested or when autovacuum fires. This was\ndone because we used to write stats every 500ms and it was a bottleneck.\n(IIRC)\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Mon, 13 Sep 2010 16:41:28 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from?"
},
{
"msg_contents": "On 9/13/10 4:41 PM, Joshua D. Drake wrote:\n> On Mon, 2010-09-13 at 16:06 -0700, Josh Berkus wrote:\n>> All,\n>>\n>> I've been looking at pg_stat_user_tables (in 8.3, because of a project I\n>> have), and it appears that autovacuum, and only autovaccum, updates the\n>> data for this view. This means that one can never have data in\n>> pg_stat_user_tables which is completely up-to-date, and if autovacuum is\n>> off, the view is useless.\n> \n> As I recall its kept in shared_buffers (in some kind of counter) and\n> updated only when it is requested or when autovacuum fires. This was\n> done because we used to write stats every 500ms and it was a bottleneck.\n> (IIRC)\n\nYes, looks like it only gets updated on SELECT or on autovacuum.\n\nThing is, a full VACUUM ANALYZE on the database, or even just ANALYZE,\nshould update some of the counters. And currently it doesnt, resulting\nin pg_class.reltuples often being far more up to date than\npg_stat_user_tables.n_live_tup. And frankly, no way to reconcile those\ntwo stats.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 13 Sep 2010 16:47:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from?"
},
{
"msg_contents": "On Mon, 2010-09-13 at 16:47 -0700, Josh Berkus wrote:\n> On 9/13/10 4:41 PM, Joshua D. Drake wrote:\n> > On Mon, 2010-09-13 at 16:06 -0700, Josh Berkus wrote:\n> >> All,\n> >>\n> >> I've been looking at pg_stat_user_tables (in 8.3, because of a project I\n> >> have), and it appears that autovacuum, and only autovaccum, updates the\n> >> data for this view. This means that one can never have data in\n> >> pg_stat_user_tables which is completely up-to-date, and if autovacuum is\n> >> off, the view is useless.\n> > \n> > As I recall its kept in shared_buffers (in some kind of counter) and\n> > updated only when it is requested or when autovacuum fires. This was\n> > done because we used to write stats every 500ms and it was a bottleneck.\n> > (IIRC)\n> \n> Yes, looks like it only gets updated on SELECT or on autovacuum.\n> \n> Thing is, a full VACUUM ANALYZE on the database, or even just ANALYZE,\n> should update some of the counters. And currently it doesnt, resulting\n> in pg_class.reltuples often being far more up to date than\n> pg_stat_user_tables.n_live_tup. And frankly, no way to reconcile those\n> two stats.\n\nIf you select from pg_stat_user_tables, the counters should be\nreasonably close unless your default_statistics_target is way off and\nthen pg_class.reltuples would be wrong.\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Mon, 13 Sep 2010 17:12:08 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from?"
},
{
"msg_contents": "\n> If you select from pg_stat_user_tables, the counters should be\n> reasonably close unless your default_statistics_target is way off and\n> then pg_class.reltuples would be wrong.\n\nAt least in 8.3, running ANALYZE does not update pg_stat_user_tables in\nany way. Does it in later versions?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 13 Sep 2010 17:53:51 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> I've been looking at pg_stat_user_tables (in 8.3, because of a project I\n> have), and it appears that autovacuum, and only autovaccum, updates the\n> data for this view.\n\nUm ... it updates the last_autovacuum and last_autoanalyze columns,\nbut the others are not its responsibility.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Sep 2010 21:40:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from? "
},
{
"msg_contents": "Excerpts from Josh Berkus's message of lun sep 13 20:53:51 -0400 2010:\n> \n> > If you select from pg_stat_user_tables, the counters should be\n> > reasonably close unless your default_statistics_target is way off and\n> > then pg_class.reltuples would be wrong.\n> \n> At least in 8.3, running ANALYZE does not update pg_stat_user_tables in\n> any way. Does it in later versions?\n\nIt's been pure nonsense in this thread. Please show an example of\nwhat's not working.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Mon, 13 Sep 2010 23:57:05 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from?"
},
{
"msg_contents": "\n> It's been pure nonsense in this thread. Please show an example of\n> what's not working.\n\n1) Init a postgresql 8.3 with autovacuum disabled.\n\n2) Load a backup of a database into that PostgreSQL.\n\n3) Check pg_stat_user_tables. n_live_tup for all tables will be 0.\n\n4) VACUUM ANALYZE the whole database.\n\n5) n_live_tup will *still* be 0. Whereas reltuples in pg_class will be\nreasonable accurate.\n\n> Um ... it updates the last_autovacuum and last_autoanalyze columns,\n> but the others are not its responsibility.\n\nRight. I'm contending that ANALYZE *should* update those columns.\nCurrent behavior is unintuitive and makes the stats in\npg_stat_user_tables almost useless, since you can never get even\napproximately a coherent snapshot of data for all tables.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Thu, 16 Sep 2010 11:39:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from?"
},
{
"msg_contents": "Le 16/09/2010 20:39, Josh Berkus a écrit :\n> \n>> It's been pure nonsense in this thread. Please show an example of\n>> what's not working.\n> \n> 1) Init a postgresql 8.3 with autovacuum disabled.\n> \n> 2) Load a backup of a database into that PostgreSQL.\n> \n> 3) Check pg_stat_user_tables. n_live_tup for all tables will be 0.\n> \n> 4) VACUUM ANALYZE the whole database.\n> \n> 5) n_live_tup will *still* be 0. Whereas reltuples in pg_class will be\n> reasonable accurate.\n> \n\nDid all your steps (except the fourth one). Works great (meaning\nn_live_tup is updated as it should be).\n\nI have to agree with Alvarro, this is complete nonsense. VACUUM ANALYZE\ndoesn't change the pg_stat_*_tables columns value, the stats collector does.\n\nIf your n_live_tup didn't get updated, I'm quite sure you have\ntrack_counts to off in your postgresql.conf file.\n\n>> Um ... it updates the last_autovacuum and last_autoanalyze columns,\n>> but the others are not its responsibility.\n> \n> Right. I'm contending that ANALYZE *should* update those columns.\n\nThe postgres process executing ANALYZE surely sent this information to\nthe stats collector (once again, if track_counts is on). Tried it\ntonight, works great too.\n\n> Current behavior is unintuitive and makes the stats in\n> pg_stat_user_tables almost useless, since you can never get even\n> approximately a coherent snapshot of data for all tables.\n> \n\nGet a look at your track_count setting.\n\n\n-- \nGuillaume\n http://www.postgresql.fr\n http://dalibo.com\n",
"msg_date": "Thu, 16 Sep 2010 21:02:07 +0200",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from?"
},
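A quick way to check the points raised above -- whether track_counts is enabled and how n_live_tup compares to pg_class.reltuples -- is a sketch like the following (the table name is just a placeholder):

    show track_counts;   -- must be on, or the stats collector never updates these counters

    select c.relname, c.reltuples, s.n_live_tup, s.last_analyze, s.last_autoanalyze
    from pg_class c
    join pg_stat_user_tables s on s.relid = c.oid
    where c.relname = 'some_table';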
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> It's been pure nonsense in this thread. Please show an example of\n>> what's not working.\n\n> 1) Init a postgresql 8.3 with autovacuum disabled.\n\n> 2) Load a backup of a database into that PostgreSQL.\n\n> 3) Check pg_stat_user_tables. n_live_tup for all tables will be 0.\n\nReally? It works for me. You sure this installation hasn't got stats\ndisabled? Check the beginning of the postmaster log to see if there\nare any bleats about failing to start the stats collector.\n\n> 4) VACUUM ANALYZE the whole database.\n\n> 5) n_live_tup will *still* be 0. Whereas reltuples in pg_class will be\n> reasonable accurate.\n\nIt's possible you are seeing the effects of the fact that pre-9.0,\nvacuum and analyze wouldn't create a stats entry for a table that\ndidn't have one already. However, it's entirely not clear why you\nwouldn't have one already. Also, if you didn't, you wouldn't see any\nrow at all in the pg_stat_user_tables, not a row with n_live_tup = 0.\n\nIn any case, it's clear that your installation is not operating as\nintended, and as 8.3 does work for me here. Better look for something\ninterfering with stats collection.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Sep 2010 15:14:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from? "
},
{
"msg_contents": "On 9/16/10 12:14 PM, Tom Lane wrote:\n> In any case, it's clear that your installation is not operating as\n> intended, and as 8.3 does work for me here. Better look for something\n> interfering with stats collection.\n\nOK, will do. Thanks!\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Thu, 16 Sep 2010 12:34:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Where does data in pg_stat_user_tables come from?"
}
] |
[
{
"msg_contents": "Hello,\n\nI am relatively new to postgres (just a few months) so apologies if\nany of you are bearing with me.\n\nI am trying to get a rough idea of the amount of bang for the buck I\nmight see if I put in a connection pooling service into the enviroment\nvs our current methodology of using persistent open connections.\n\nWe have a number of in house applications that connect to a central\nPostgres instance. (8.3.7). The box is admitting underpowered with\nonly 8 cores, and 8gb or ram and not great disk IO out of an MSA-70.\nthe database is about 35GB on disk and does mainly (~95%) OTLP type\nqueries. I am currently begging for more ram.\n\nMost of the connections from the various apps hold idle connections\nuntil they need to execute a query once done go back to holding an\nopen idle connection. (there are ~600 open connections at any given\ntime, and most of the time most are idle)\n\nthis is typically fine while the number of active queries is low, but\nsome other application (that doesn't use connection pooling or holding\nopen connections when not in use) is hitting the db from time to time\nwith 50-100 small queries (2ms queries from my testing) nearly all at\nonce. when this happens the whole response time goes out the door\nhowever).\n\n\nI think from reading this list for a few weeks the answer is move to\nusing connection pooling package elsewhere to better manage incoming\nconnections, with a lower number to the db.\n\nI am told this will require some re-working of some app code as I\nunderstand pg-pool was tried a while back in our QA environment and\nserver parts of various in-house apps/scripts/..etc started to\nexperience show stopping problems.\n\nto help make my case to the devs and various managers I was wondering\nif someone could expand on what extra work is having to be done while\nqueries run and there is a high (500-600) number of open yet idle\nconnections to db. lots of the queries executed use sub-transactions\nif that makes a difference.\n\n\nbasically what I am paying extra for with that many persistent\nconnections, that I might save if I go to the effort of getting the\nin-house stuff to make use of a connection pooler ?\n\n\nthank you for your time.\n\n..: mark\n",
"msg_date": "Tue, 14 Sep 2010 10:10:33 -0600",
"msg_from": "mark <[email protected]>",
"msg_from_op": true,
"msg_subject": "Held idle connections vs use of a Pooler"
},
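As a rough way to quantify the idle-versus-active split described above on 8.3 (where pg_stat_activity exposes current_query rather than the newer state column), a sketch like this can help make the case:

    select case when current_query = '<IDLE>' then 'idle'
                when current_query = '<IDLE> in transaction' then 'idle in transaction'
                else 'active' end as connection_state,
           count(*)
    from pg_stat_activity
    group by 1
    order by 2 desc;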
{
"msg_contents": "On 9/14/10 9:10 AM, mark wrote:\n> Hello,\n>\n> I am relatively new to postgres (just a few months) so apologies if\n> any of you are bearing with me.\n>\n> I am trying to get a rough idea of the amount of bang for the buck I\n> might see if I put in a connection pooling service into the enviroment\n> vs our current methodology of using persistent open connections.\n>\n> We have a number of in house applications that connect to a central\n> Postgres instance. (8.3.7). The box is admitting underpowered with\n> only 8 cores, and 8gb or ram and not great disk IO out of an MSA-70.\n> the database is about 35GB on disk and does mainly (~95%) OTLP type\n> queries. I am currently begging for more ram.\n>\n> Most of the connections from the various apps hold idle connections\n> until they need to execute a query once done go back to holding an\n> open idle connection. (there are ~600 open connections at any given\n> time, and most of the time most are idle)\n>\n> this is typically fine while the number of active queries is low, but\n> some other application (that doesn't use connection pooling or holding\n> open connections when not in use) is hitting the db from time to time\n> with 50-100 small queries (2ms queries from my testing) nearly all at\n> once. when this happens the whole response time goes out the door\n> however).\n\nWhile connection pooling may be a good answer for you, there also appears to be a problem/bug in 8.3.x that may be biting you. My installation is very similar to yours (hundreds of idle \"lightweight\" connections, occasional heavy use by certain apps). Look at this thread:\n\n http://archives.postgresql.org/pgsql-performance/2010-04/msg00071.php\n\nOn the server that's been upgraded to 8.4.4, we're not seeing this problem. But it's not in full production yet, so I can't say for sure that the CPU spikes are gone.\n\n(Unfortunately, the archives.postgresql.org HTML formatting is horrible -- why on Earth can't it wrap lines?)\n\nCraig\n\n>\n>\n> I think from reading this list for a few weeks the answer is move to\n> using connection pooling package elsewhere to better manage incoming\n> connections, with a lower number to the db.\n>\n> I am told this will require some re-working of some app code as I\n> understand pg-pool was tried a while back in our QA environment and\n> server parts of various in-house apps/scripts/..etc started to\n> experience show stopping problems.\n>\n> to help make my case to the devs and various managers I was wondering\n> if someone could expand on what extra work is having to be done while\n> queries run and there is a high (500-600) number of open yet idle\n> connections to db. lots of the queries executed use sub-transactions\n> if that makes a difference.\n>\n>\n> basically what I am paying extra for with that many persistent\n> connections, that I might save if I go to the effort of getting the\n> in-house stuff to make use of a connection pooler ?\n>\n>\n> thank you for your time.\n>\n> ..: mark\n>\n\n",
"msg_date": "Tue, 14 Sep 2010 09:44:19 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Held idle connections vs use of a Pooler"
},
{
"msg_contents": "On Tue, 2010-09-14 at 10:10 -0600, mark wrote:\n> Hello,\n> \n> I am relatively new to postgres (just a few months) so apologies if\n> any of you are bearing with me.\n> \n> I am trying to get a rough idea of the amount of bang for the buck I\n> might see if I put in a connection pooling service into the enviroment\n> vs our current methodology of using persistent open connections.\n\nWell what a pooler does is provide persisten open connections that can\nbe reused. What tech are you using for these persisten open connections?\n\n\n> Most of the connections from the various apps hold idle connections\n> until they need to execute a query once done go back to holding an\n> open idle connection. (there are ~600 open connections at any given\n> time, and most of the time most are idle)\n\nSounds like each app is holding its own pool?\n\n\n> I think from reading this list for a few weeks the answer is move to\n> using connection pooling package elsewhere to better manage incoming\n> connections, with a lower number to the db.\n\nCorrect, because each connection is overhead. If you have 600\nconnections, of which really only 20 are currently executing, that is\nhighly inefficient. \n\nA pooler would have say, 40 connections open, with 20 currently\nexecuting and a max pool of 600.\n\n> \n> I am told this will require some re-working of some app code as I\n> understand pg-pool was tried a while back in our QA environment and\n> server parts of various in-house apps/scripts/..etc started to\n> experience show stopping problems.\n\nUse pgbouncer. It is what Skype uses.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Tue, 14 Sep 2010 09:58:45 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Held idle connections vs use of a Pooler"
},
{
"msg_contents": "On Tue, Sep 14, 2010 at 12:10 PM, mark <[email protected]> wrote:\n> Hello,\n>\n> I am relatively new to postgres (just a few months) so apologies if\n> any of you are bearing with me.\n>\n> I am trying to get a rough idea of the amount of bang for the buck I\n> might see if I put in a connection pooling service into the enviroment\n> vs our current methodology of using persistent open connections.\n>\n> We have a number of in house applications that connect to a central\n> Postgres instance. (8.3.7). The box is admitting underpowered with\n> only 8 cores, and 8gb or ram and not great disk IO out of an MSA-70.\n> the database is about 35GB on disk and does mainly (~95%) OTLP type\n> queries. I am currently begging for more ram.\n>\n> Most of the connections from the various apps hold idle connections\n> until they need to execute a query once done go back to holding an\n> open idle connection. (there are ~600 open connections at any given\n> time, and most of the time most are idle)\n\nThis is IMO a strong justification for a connection pooler. Certain\nclasses of problems will go away and you will have a more responsive\nserver under high load conditions.\n\n> this is typically fine while the number of active queries is low, but\n> some other application (that doesn't use connection pooling or holding\n> open connections when not in use) is hitting the db from time to time\n> with 50-100 small queries (2ms queries from my testing) nearly all at\n> once. when this happens the whole response time goes out the door\n> however).\n>\n>\n> I think from reading this list for a few weeks the answer is move to\n> using connection pooling package elsewhere to better manage incoming\n> connections, with a lower number to the db.\n>\n> I am told this will require some re-working of some app code as I\n> understand pg-pool was tried a while back in our QA environment and\n> server parts of various in-house apps/scripts/..etc started to\n> experience show stopping problems.\n\nWhat types of problems did you have? Performance related or bugs\nstemming from changes in the way your pooler runs the queries? What\nkind of session level objects (like prepared statements) do you rely\non? The answer to this question will affect the feasibility of using a\npooler, or which one you use. pgbouncer in transaction mode is a\ngreat choice if you can live under the restrictions -- it's almost\ncompletely transparent. pgpool I'm not nearly as familiar with.\n\n> to help make my case to the devs and various managers I was wondering\n> if someone could expand on what extra work is having to be done while\n> queries run and there is a high (500-600) number of open yet idle\n> connections to db. lots of the queries executed use sub-transactions\n> if that makes a difference.\n\nGeneral note: queries with subtransactions (savepoints or pl/pgsql\nexception handlers) are much more expensive than those without. I\nwould maybe be trying to batch work in your load spike somehow or\nworking it so that retries are done in the app vs the database.\n\nmerlin\n",
"msg_date": "Tue, 14 Sep 2010 13:01:42 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Held idle connections vs use of a Pooler"
}
] |
[
{
"msg_contents": "Hello all,\n\nI am trying to use aggregate queries in views, and when joining these \nviews to other\ntables, I get seq scan in the view, even if index scan would be clearly \nbetter. The views\nI am using in my Db are actually long pivot queries, but the following \nsimple test case is enough\nto show the problem.\n\nI will first show the table definitions, then the performance problem I \nam having.\n\ncreate table test1 (\n id serial primary key not null,\n other_id integer unique not null\n);\n\ncreate table test2 (\n id integer not null references test1(id),\n type integer,\n value text\n);\n\ncreate index test2_idx on test2(id);\n\ninsert into test1 select g, g+10000 from (select generate_series(1, \n10000) as g) t;\ninsert into test2 select g, g%3, 'testval'||g from (select \ngenerate_series(1, 10000) as g) t;\ninsert into test2 select g, (g+1)%3, 'testval'||g from (select \ngenerate_series(1, 10000) as g) t;\ninsert into test2 select g, (g+2)%3, 'testval'||g from (select \ngenerate_series(1, 10000) as g) t;\n\nNow, the following query is fast:\n\nselect * from test1 inner join (select array_agg(value), id\nfrom test2 group by id) t on test1.id = t.id where test1.id = 1;\n(0.6ms)\n\nBut the following query is slow (seqscan on test2):\n\nselect * from test1 inner join (select array_agg(value), id\nfrom test2 group by id) t on test1.id = t.id where test1.other_id = 10001;\n(45ms)\n\nThe same problem can be seen when running:\n\nselect * from test1 inner join (select array_agg(value), id\nfrom test2 group by id) t on test1.id = t.id where test1.id in (1, 2);\n(40ms runtime)\n\nFetching directly from test2 with id is fast:\n\nselect array_agg(value), id\nfrom test2 where test2.id in (1, 2) group by id;\n\nIf I set enable_seqscan to off, then I get fast results:\n\nselect * from test1 inner join (select array_agg(value), id\nfrom test2 group by id) t on test1.id = t.id where test1.other_id in \n(10001, 10002);\n(0.6ms)\n\nOr slow results, if the fetched rows happen to be in the end of the index:\n\nselect * from test1 inner join (select array_agg(value), id\nfrom test2 group by id) t on test1.id = t.id where test1.other_id = 20000;\n(40ms)\n\nExplain analyzes of the problematic query:\n\nWith enable_seqscan:\n\nexplain analyze select * from test1 inner join (select array_agg(value), id\nfrom test2 group by id) t on test1.id = t.id where test1.other_id = 10001;\n\nHash Join (cost=627.48..890.48 rows=50 width=44) (actual \ntime=91.575..108.085 rows=1 loops=1)\n Hash Cond: (test2.id = test1.id)\n -> HashAggregate (cost=627.00..752.00 rows=10000 width=15) (actual \ntime=82.663..98.281 rows=10000 loops=1)\n -> Seq Scan on test2 (cost=0.00..477.00 rows=30000 width=15) \n(actual time=0.009..30.650 rows=30000 loops=1)\n -> Hash (cost=0.47..0.47 rows=1 width=8) (actual time=0.026..0.026 \nrows=1 loops=1)\n -> Index Scan using test1_other_id_key on test1 \n(cost=0.00..0.47 rows=1 width=8) (actual time=0.018..0.021 rows=1 loops=1)\n Index Cond: (other_id = 10001)\nTotal runtime: 109.686 ms\n\nWithout enable_seqscan:\n\nexplain analyze select * from test1 inner join (select array_agg(value), id\nfrom test2 group by id) t on test1.id = t.id where test1.other_id = 10001;\n\nMerge Join (cost=0.48..895.91 rows=50 width=44) (actual \ntime=0.066..0.085 rows=1 loops=1)\n Merge Cond: (test2.id = test1.id)\n -> GroupAggregate (cost=0.00..769.56 rows=10000 width=15) (actual \ntime=0.040..0.054 rows=2 loops=1)\n -> Index Scan using test2_idx on test2 (cost=0.00..494.56 \nrows=30000 width=15) 
(actual time=0.017..0.030 rows=7 loops=1)\n -> Sort (cost=0.48..0.48 rows=1 width=8) (actual time=0.020..0.022 \nrows=1 loops=1)\n Sort Key: test1.id\n Sort Method: quicksort Memory: 17kB\n -> Index Scan using test1_other_id_key on test1 \n(cost=0.00..0.47 rows=1 width=8) (actual time=0.010..0.012 rows=1 loops=1)\n Index Cond: (other_id = 10001)\n\n\n - Anssi Kääriäinen\n\n\n\n\n",
"msg_date": "Wed, 15 Sep 2010 09:26:14 +0300",
"msg_from": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problem with joined aggregate query"
},
{
"msg_contents": "On Wed, Sep 15, 2010 at 2:26 AM, Anssi Kääriäinen\n<[email protected]> wrote:\n> Hello all,\n>\n> I am trying to use aggregate queries in views, and when joining these views\n> to other\n> tables, I get seq scan in the view, even if index scan would be clearly\n> better. The views\n> I am using in my Db are actually long pivot queries, but the following\n> simple test case is enough\n> to show the problem.\n>\n> I will first show the table definitions, then the performance problem I am\n> having.\n>\n> create table test1 (\n> id serial primary key not null,\n> other_id integer unique not null\n> );\n>\n> create table test2 (\n> id integer not null references test1(id),\n> type integer,\n> value text\n> );\n>\n> create index test2_idx on test2(id);\n>\n> insert into test1 select g, g+10000 from (select generate_series(1, 10000)\n> as g) t;\n> insert into test2 select g, g%3, 'testval'||g from (select\n> generate_series(1, 10000) as g) t;\n> insert into test2 select g, (g+1)%3, 'testval'||g from (select\n> generate_series(1, 10000) as g) t;\n> insert into test2 select g, (g+2)%3, 'testval'||g from (select\n> generate_series(1, 10000) as g) t;\n>\n> Now, the following query is fast:\n>\n> select * from test1 inner join (select array_agg(value), id\n> from test2 group by id) t on test1.id = t.id where test1.id = 1;\n> (0.6ms)\n>\n> But the following query is slow (seqscan on test2):\n>\n> select * from test1 inner join (select array_agg(value), id\n> from test2 group by id) t on test1.id = t.id where test1.other_id = 10001;\n> (45ms)\n>\n> The same problem can be seen when running:\n>\n> select * from test1 inner join (select array_agg(value), id\n> from test2 group by id) t on test1.id = t.id where test1.id in (1, 2);\n> (40ms runtime)\n>\n> Fetching directly from test2 with id is fast:\n>\n> select array_agg(value), id\n> from test2 where test2.id in (1, 2) group by id;\n>\n> If I set enable_seqscan to off, then I get fast results:\n>\n> select * from test1 inner join (select array_agg(value), id\n> from test2 group by id) t on test1.id = t.id where test1.other_id in (10001,\n> 10002);\n> (0.6ms)\n>\n> Or slow results, if the fetched rows happen to be in the end of the index:\n>\n> select * from test1 inner join (select array_agg(value), id\n> from test2 group by id) t on test1.id = t.id where test1.other_id = 20000;\n> (40ms)\n>\n> Explain analyzes of the problematic query:\n>\n> With enable_seqscan:\n>\n> explain analyze select * from test1 inner join (select array_agg(value), id\n> from test2 group by id) t on test1.id = t.id where test1.other_id = 10001;\n>\n> Hash Join (cost=627.48..890.48 rows=50 width=44) (actual\n> time=91.575..108.085 rows=1 loops=1)\n> Hash Cond: (test2.id = test1.id)\n> -> HashAggregate (cost=627.00..752.00 rows=10000 width=15) (actual\n> time=82.663..98.281 rows=10000 loops=1)\n> -> Seq Scan on test2 (cost=0.00..477.00 rows=30000 width=15)\n> (actual time=0.009..30.650 rows=30000 loops=1)\n> -> Hash (cost=0.47..0.47 rows=1 width=8) (actual time=0.026..0.026\n> rows=1 loops=1)\n> -> Index Scan using test1_other_id_key on test1 (cost=0.00..0.47\n> rows=1 width=8) (actual time=0.018..0.021 rows=1 loops=1)\n> Index Cond: (other_id = 10001)\n> Total runtime: 109.686 ms\n>\n> Without enable_seqscan:\n>\n> explain analyze select * from test1 inner join (select array_agg(value), id\n> from test2 group by id) t on test1.id = t.id where test1.other_id = 10001;\n>\n> Merge Join (cost=0.48..895.91 rows=50 width=44) (actual time=0.066..0.085\n> rows=1 
loops=1)\n> Merge Cond: (test2.id = test1.id)\n> -> GroupAggregate (cost=0.00..769.56 rows=10000 width=15) (actual\n> time=0.040..0.054 rows=2 loops=1)\n> -> Index Scan using test2_idx on test2 (cost=0.00..494.56\n> rows=30000 width=15) (actual time=0.017..0.030 rows=7 loops=1)\n> -> Sort (cost=0.48..0.48 rows=1 width=8) (actual time=0.020..0.022\n> rows=1 loops=1)\n> Sort Key: test1.id\n> Sort Method: quicksort Memory: 17kB\n> -> Index Scan using test1_other_id_key on test1 (cost=0.00..0.47\n> rows=1 width=8) (actual time=0.010..0.012 rows=1 loops=1)\n> Index Cond: (other_id = 10001)\n\nTake a look at this, and the responses. Is it the same case?:\nhttp://www.mail-archive.com/[email protected]/msg21756.html\n\nmerlin\n",
"msg_date": "Wed, 15 Sep 2010 18:25:15 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with joined aggregate query"
},
{
"msg_contents": "On 09/16/2010 01:25 AM, Merlin Moncure wrote:\n> Take a look at this, and the responses. Is it the same case?:\n> http://www.mail-archive.com/[email protected]/msg21756.html\n>\n> merlin\n> \nYes, looks like this is the same case. This makes it hard to use views\nhaving group by in them, as the whole group by part will always be\nexecuted. Back to planning board then...\n\nI guess my possibilities for pivot views are:\n - crosstab: Will make statistics go \"bad\", that is, the crosstab query\n will always seem to return static number of rows. This can cause\n problems in complex queries using the view. IIRC performance is\n a bit worse than pivot by group by.\n - left joins: if joining the same table 20 times, there will be some\n planner overhead. Maybe the best way for my usage case. Also about\n 2x slower than pivot using group by.\n - subselect each of the columns: way worse performance: for my use\n case, each added column adds about 50ms to run time, so for 20\n columns this will take 1 second. The group by pivot query runs in\n 250ms.\n\nAny other ideas?\n\n - Anssi\n",
"msg_date": "Thu, 16 Sep 2010 08:51:19 +0300",
"msg_from": "=?ISO-8859-1?Q?Anssi_K=E4=E4ri=E4inen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problem with joined aggregate query"
},
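A sketch of the "left joins" alternative mentioned above, written against the test1/test2 tables from earlier in the thread (one join per pivoted type value, column aliases invented for illustration); it avoids aggregating the whole table but adds planner work per pivoted column:

    select t1.id,
           t2a.value as value_type_0,
           t2b.value as value_type_1,
           t2c.value as value_type_2
    from test1 t1
    left join test2 t2a on t2a.id = t1.id and t2a.type = 0
    left join test2 t2b on t2b.id = t1.id and t2b.type = 1
    left join test2 t2c on t2c.id = t1.id and t2c.type = 2
    where t1.other_id = 10001;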
{
"msg_contents": "On Thu, Sep 16, 2010 at 1:51 AM, Anssi Kääriäinen\n<[email protected]> wrote:\n> Yes, looks like this is the same case. This makes it hard to use views\n> having group by in them, as the whole group by part will always be\n> executed. Back to planning board then...\n>\n> I guess my possibilities for pivot views are:\n> - crosstab: Will make statistics go \"bad\", that is, the crosstab query\n> will always seem to return static number of rows. This can cause\n> problems in complex queries using the view. IIRC performance is\n> a bit worse than pivot by group by.\n> - left joins: if joining the same table 20 times, there will be some\n> planner overhead. Maybe the best way for my usage case. Also about\n> 2x slower than pivot using group by.\n> - subselect each of the columns: way worse performance: for my use\n> case, each added column adds about 50ms to run time, so for 20\n> columns this will take 1 second. The group by pivot query runs in\n> 250ms.\n>\n> Any other ideas?\n\nyes. specifically, if you are targeting the aggregation towards an\narray you have another option:\n\nanalyze select * from test1 inner join (select array_agg(value), id\nfrom test2 group by id) t on test1.id = t.id where test1.other_id = 10001;\n\ncan be replaced w/\n\nselect *, array(select value from test2 where test2.id=test1.id) from\ntest1 where test1.other_id = 10001;\n\nThis (array vs array_agg) will give you faster performance than\naggregation so is better preferred unless you need to group for other\npurposes than relating. I think the join if it could match up over\nthe group by key can give theoretically better plans but this should\nbe good enough especially if you aren't pulling a large amount of data\nfrom the outer table.\n\nmerlin\n",
"msg_date": "Thu, 16 Sep 2010 10:52:49 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problem with joined aggregate query"
}
] |
[
{
"msg_contents": "We have a production database server ... it's quite busy but usually\nworking completely fine, simple queries taking a fraction of a\nmillisecond to run.\n\nRecently we've frequently encountered issues where some simple selects\n(meaning, selects doing an index lookup and fetching one row) have\nbecome stuck for several minutes. Apparently all requests on one\nexact table gets stuck, all requests not related to said table are\ngoing through without any problems. According to the pg_stat_activity\nview, all queries getting stuck was read-queries (selects), no updates\nor anything like that (some of the transactions were doing updates\nand/or inserts though).\n\nThe obvious thought seems to be that this is a locking issue ... but\nit doesn't seem so. For one thing, AFAIK locking shouldn't affect\nselects, only updates? I've also looked through tons of logs without\nfinding any obvious locking issues. In one of the instances, I could\nfind that there were some open transactions doing updates on one row\nin the table and then later becoming stuck (as all other transactions)\nwhen doing a select on another row in the said table.\n\nMy second thought was that the database is on the edge of being\noverloaded and that the memory situation is also just on the edge ...\nimportant indexes that used to be in memory now has to be accessed\nfrom the disk. Still, it doesn't make sense, we're not seeing any\nserious impact on the CPU iowait status, and it seems improbable that\nit should take minutes to load an index?\n\nThere aren't any long-lasting transactions going on when the jam\noccurs. I haven't checked much up, usually the jam seems to resolve\nitself pretty instantly, but I think that at some point it took half a\nminute from the first query was finished until the pg_stat_activity\nview got back to normal (meaning typically 0-5 simultaneously\nprocessed queries).\n\nFWIW, we're running pg 8.3.11, transaction isolation level\nserializable. The machine is quad-core hyperthreaded (16 CPUs visible\nto the OS), a SAN is used for storage, and different RAIDs are used\nfor the root partition, pg data and pg wals.\n\nAny ideas? I'm aware that some configuration (i.e. checkpoint\ninterval etc) may cause significant delay on write-queries ... but\nthis is only read-queries.\n",
"msg_date": "Wed, 15 Sep 2010 12:05:03 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "locking issue on simple selects?"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> Recently we've frequently encountered issues where some simple selects\n> (meaning, selects doing an index lookup and fetching one row) have\n> become stuck for several minutes. Apparently all requests on one\n> exact table gets stuck, all requests not related to said table are\n> going through without any problems. According to the pg_stat_activity\n> view, all queries getting stuck was read-queries (selects), no updates\n> or anything like that (some of the transactions were doing updates\n> and/or inserts though).\n\n> The obvious thought seems to be that this is a locking issue ... but\n> it doesn't seem so. For one thing, AFAIK locking shouldn't affect\n> selects, only updates?\n\nAn exclusive lock will block selects too. Have you looked into pg_locks\nfor ungranted lock requests?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Sep 2010 09:39:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking issue on simple selects? "
},
{
"msg_contents": "On 15 September 2010 15:39, Tom Lane <[email protected]> wrote:\n> An exclusive lock will block selects too. Have you looked into pg_locks\n> for ungranted lock requests?\n\nWell - I thought so, we have a logging script that logs the content of\nthe pg_locks table, it didn't log anything interesting but it may be a\nproblem with the script itself. It does an inner join on\npg_locks.relation = pg_class.oid but when I check now this join seems\nto remove most of the rows in the pg_locks table. Does it make sense\nat all to join pg_class with pg_locks? I will ask the sysadm to\nchange to an outer join as for now.\n",
"msg_date": "Wed, 15 Sep 2010 21:07:24 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locking issue on simple selects?"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> Well - I thought so, we have a logging script that logs the content of\n> the pg_locks table, it didn't log anything interesting but it may be a\n> problem with the script itself. It does an inner join on\n> pg_locks.relation = pg_class.oid but when I check now this join seems\n> to remove most of the rows in the pg_locks table. Does it make sense\n> at all to join pg_class with pg_locks? I will ask the sysadm to\n> change to an outer join as for now.\n\nThere are lots of locks that are not tied to a specific table, so an\ninner join is definitely bad. You could use an outer join to annotate\nrows that do correspond to table locks, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Sep 2010 15:21:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking issue on simple selects? "
},
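A minimal sketch of the outer-join form suggested above, which keeps the non-table lock rows and only annotates the table locks that have a matching pg_class entry:

    select l.pid, l.locktype, l.mode, l.granted, c.relname
    from pg_locks l
    left join pg_class c on c.oid = l.relation
    where not l.granted;   -- optional filter: show only ungranted lock requests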
{
"msg_contents": "Tobias Brox wrote:\n> I thought so, we have a logging script that logs the content of\n> the pg_locks table, it didn't log anything interesting but it may be a\n> problem with the script itself. It does an inner join on\n> pg_locks.relation = pg_class.oid but when I check now this join seems\n> to remove most of the rows in the pg_locks table.\n\nSome of the waits you might be running into will be things waiting on \nanother transaction holding a lock to finish, which are probably wiped \nout by this approach. There are some useful examples of lock views on \nthe wiki:\n\nhttp://wiki.postgresql.org/wiki/Lock_Monitoring\nhttp://wiki.postgresql.org/wiki/Lock_dependency_information\nhttp://wiki.postgresql.org/wiki/Find_Locks\n\nAnd the idea you have of coverting the pg_class one to an outer join \nwill help.\n\nThe other thing you should try is toggling on log_lock_waits and \npossibly reducing deadlock_timeout. This will put a lot of the \ninformation you're trying to collect right into the logs.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Wed, 15 Sep 2010 15:28:24 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking issue on simple selects?"
},
{
"msg_contents": " On 10-09-15 03:07 PM, Tobias Brox wrote:\n> On 15 September 2010 15:39, Tom Lane<[email protected]> wrote:\n>> An exclusive lock will block selects too. Have you looked into pg_locks\n>> for ungranted lock requests?\n> Well - I thought so, we have a logging script that logs the content of\n> the pg_locks table, it didn't log anything interesting but it may be a\n> problem with the script itself. It does an inner join on\n> pg_locks.relation = pg_class.oid but when I check now this join seems\n> to remove most of the rows in the pg_locks table. Does it make sense\n> at all to join pg_class with pg_locks? I will ask the sysadm to\n> change to an outer join as for now.\n>\n\nYou can also enable log_lock_waits and the lock waits will appear in \nyour Postgres logs.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Wed, 15 Sep 2010 15:32:56 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking issue on simple selects?"
},
{
"msg_contents": "On 15 September 2010 21:28, Greg Smith <[email protected]> wrote:\n> There are some useful examples of lock views on the wiki:\n>\n> http://wiki.postgresql.org/wiki/Lock_Monitoring\n> http://wiki.postgresql.org/wiki/Lock_dependency_information\n> http://wiki.postgresql.org/wiki/Find_Locks\n\nThanks. I think those pages probably should be merged ... hmm ... if\nI manage to solve my locking issues I should probably try and\ncontribute to the wiki.\n\nReading the wiki pages, for me it boils down to three things:\n\n1) the current query we're logging seems good enough except that we\nshould do an outer join except for inner join towards pg_class, so\nI've asked our sysadm to fix it.\n\n2) the middle query on http://wiki.postgresql.org/wiki/Lock_Monitoring\nseems very useful, and I've asked our sysadm to set up logging of this\none as well.\n\n3) That log_lock_waits config option that you and Brad points to seems\nvery useful, so I've asked our sysadm to enable it.\n\nI also discovered that there is an attribute pg_stat_activity.waiting\n- I suppose it is 't' if a query is waiting for a lock? It seems\nquite useful ...\n\n> reducing deadlock_timeout.\n\nIt's set to one second, and some of the jams we have been experiencing\nhas lasted for several minutes. I also think it should say in the pg\nlog if there is a deadlock situation? I grepped for \"deadlock\" in the\nlogs without finding anything.\n\nWell, we'll improve the logging, and wait for the next \"jam\" to occur\n... and I'll make sure to post an update if/when I figure out\nsomething.\n",
"msg_date": "Wed, 15 Sep 2010 23:33:00 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locking issue on simple selects?"
},
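For items 1 and 3 above, a rough sketch of what the checks look like (the column names are the pre-9.2 ones, procpid and current_query, matching the 8.3/9.0-era servers in this thread; log_lock_waits and deadlock_timeout themselves are set in postgresql.conf):

    SHOW log_lock_waits;      -- needs to be on for lock waits to reach the log
    SHOW deadlock_timeout;    -- waits longer than this get logged

    -- sessions currently stuck waiting on a lock:
    SELECT procpid, waiting, query_start, current_query
      FROM pg_stat_activity
     WHERE waiting;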
{
"msg_contents": "Tobias Brox wrote:\n> I think those pages probably should be merged ... hmm ... if\n> I manage to solve my locking issues I should probably try and\n> contribute to the wiki.\n> \n\nCertainly the 3rd one could be merged with one of the other two, and \nmaybe all merged into one. I haven't cleaned up that whole area in a \nwhole, it's due for a round of it soon. You're welcome to take a shot at \nit. We can always revert any mistakes made.\n\n>> reducing deadlock_timeout.\n>> \n>\n> It's set to one second, and some of the jams we have been experiencing\n> has lasted for several minutes. I also think it should say in the pg\n> log if there is a deadlock situation? \n\ndeadlock_timeout is how long a process trying to acquire a lock waits \nbefore it a) looks for a deadlock, and b) prints a report that it's \nwaiting into the logs when log_lock_waits is on. So if you're looking \nfor slow lock acquisition that's in the sub-second range, it can be \nuseful to reduce regardless of whether deadlock ever happens. That \ndoesn't sound like your situation though.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n\n\n\n\n\n\nTobias Brox wrote:\n\nI think those pages probably should be merged ... hmm ... if\nI manage to solve my locking issues I should probably try and\ncontribute to the wiki.\n \n\n\nCertainly the 3rd one could be merged with one of the other two, and\nmaybe all merged into one. I haven't cleaned up that whole area in a\nwhole, it's due for a round of it soon. You're welcome to take a shot\nat it. We can always revert any mistakes made.\n\n\n\nreducing deadlock_timeout.\n \n\n\nIt's set to one second, and some of the jams we have been experiencing\nhas lasted for several minutes. I also think it should say in the pg\nlog if there is a deadlock situation? \n\n\ndeadlock_timeout is how long a process trying to acquire a lock waits\nbefore it a) looks for a deadlock, and b) prints a report that it's\nwaiting into the logs when log_lock_waits is on. So if you're looking\nfor slow lock acquisition that's in the sub-second range, it can be\nuseful to reduce regardless of whether deadlock ever happens. That\ndoesn't sound like your situation though.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book",
"msg_date": "Wed, 15 Sep 2010 20:32:32 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking issue on simple selects?"
},
{
"msg_contents": "On 15 September 2010 12:05, Tobias Brox <[email protected]> wrote:\n> Recently we've frequently encountered issues where some simple selects\n> (meaning, selects doing an index lookup and fetching one row) have\n> become stuck for several minutes. Apparently all requests on one\n> exact table gets stuck, all requests not related to said table are\n> going through without any problems.\n\nNow I've set up all kind of logging regarding locks, so it seems like\nwe're having issues that aren't lock-related. I just did a bit of\nresearch into one situation today.\n\nAll while having this problem, there was one heavy query running in\nparallell ... not sure if that's relevant.\n\nThen comes one query that requires a seq scan on the problem table\n(that won't happen again - I just added a new index). Four seconds\nlater comes another query requiring a simple index lookup. Still more\nqueries comes in, most of them simple index lookups, but on different\nindexes. After one minute there are 25 queries in the\npg_stat_activity view towards this table. It's not a particularly\nhuge table. Moments later all 25 queries have been executed.\n",
"msg_date": "Thu, 23 Sep 2010 22:47:26 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locking issue on simple selects?"
},
{
"msg_contents": "Tobias Brox <[email protected]> wrote:\n \n> All while having this problem, there was one heavy query running\n> in parallell ... not sure if that's relevant.\n \nHave you turned on checkpoint logging? You might want to see if\nthese are happening at some particular point in the checkpoint\nprocessing. If so, look through the archives for posts from Greg\nSmith on how to tune that -- he's worked out a nice methodology to\niteratively improve your configuration in this regard.\n \n-Kevin\n",
"msg_date": "Thu, 23 Sep 2010 15:55:54 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking issue on simple selects?"
},
{
"msg_contents": "On 23 September 2010 22:55, Kevin Grittner <[email protected]> wrote:\n> Have you turned on checkpoint logging?\n\nYes ... it seems so:\n\n13:19:13.840 - LOG: checkpoint complete: wrote 3849 buffers (0.2%); 0\ntransaction log file(s) added, 0 removed, 5 recycled; write=269.551 s,\nsync=0.103 s, total=269.953 s\n13:19:13.841 - LOG: checkpoint starting: xlog\n13:19:33 - the seq scan query towards the affected table started\n13:20:31.454 - one of the index lookup queries towards the affected\ntable was finished\n13:20:43.176 - LOG: checkpoint complete: wrote 108199 buffers (6.9%);\n0 transaction log file(s) added, 0 removed, 16 recycled; write=11.521\ns, sync=77.533 s, total=89.335 s\n\n> You might want to see if\n> these are happening at some particular point in the checkpoint\n> processing. If so, look through the archives for posts from Greg\n> Smith on how to tune that -- he's worked out a nice methodology to\n> iteratively improve your configuration in this regard.\n\nThank you, I will ... hmm ... I found this blog post:\n\nhttp://blog.2ndquadrant.com/en/2010/01/measuring-postgresql-checkpoin.html\n\nOf course I'm doing it my own way:\n\nselect *,now() as snapshot into tmp_pg_stat_bgwriter from pg_stat_bgwriter ;\n\ncreate view tmp_delta_pg_stat_bgwriter as\n select a.checkpoints_timed-b.checkpoints_timed as\ncheckpoints_timed,a.checkpoints_req-b.checkpoints_req as\ncheckpoints_req,a.buffers_checkpoint-b.buffers_checkpoint as\nbuffers_checkpoint,a.buffers_clean-b.buffers_clean as\nbuffers_clean,a.maxwritten_clean-b.maxwritten_clean as\nmaxwritten_clean,a.buffers_backend-b.buffers_backend as\nbuffers_backend,a.buffers_alloc-b.buffers_alloc as buffers_alloc,\nnow()-b.snapshot as interval\n from pg_stat_bgwriter a ,\n (select * from tmp_pg_stat_bgwriter order by snapshot desc limit 1) as b;\n\nCheckpoint timeout is set to 5 minutes. Right now we're having\nrelatively low activity. I'm not sure how to read the stats below,\nbut they look OK to me:\n\nselect * from tmp_delta_pg_stat_bgwriter ;\n checkpoints_timed | checkpoints_req | buffers_checkpoint |\nbuffers_clean | maxwritten_clean | buffers_backend | buffers_alloc |\n interval\n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------+-----------------\n 3 | 0 | 8277 |\n15 | 0 | 185 | 18691 |\n00:12:02.988842\n(1 row)\n",
"msg_date": "Fri, 24 Sep 2010 00:25:36 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: locking issue on simple selects?"
},
{
"msg_contents": "Tobias Brox <[email protected]> wrote:\n \n> 13:19:13.840 - LOG: checkpoint complete\n \n> 13:19:13.841 - LOG: checkpoint starting\n \n> 13:20:43.176 - LOG: checkpoint complete\n \nThere wasn't a lot of time between the completion of one checkpoint\nand the start of the next. And the two checkpoints finished a\nminute and a half apart. Perhaps you need to boost your\ncheckpoint_segments setting? What is it now?\n \n-Kevin\n",
"msg_date": "Thu, 23 Sep 2010 17:50:21 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: locking issue on simple selects?"
}
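A quick way to answer the question about the current settings, sketched against pg_settings (setting names as of 8.3/9.0; the unit column tells you how to read each value):

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('checkpoint_segments',
                    'checkpoint_timeout',
                    'checkpoint_completion_target',
                    'log_checkpoints');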
] |
[
{
"msg_contents": "Sorry for the blast...\nI am running into a POSTGRES error. It appears to be the DBMGR[4289]. Any ideas what the error maybe?\n\n\n[cid:[email protected]]",
"msg_date": "Wed, 15 Sep 2010 07:07:35 -0400",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "POSTGRES error"
},
{
"msg_contents": "<[email protected]> wrote:\n \n> I am running into a POSTGRES error. It appears to be the\n> DBMGR[4289]. Any ideas what the error maybe?\n \nI've never seen anything remotely like that. I don't see the string\n'DBMGR' anywhere in the PostgreSQL source.\n \nBefore anyone can begin to help you, we would need a lot more\ninformation. Please read this page and post again with more\ninformation:\n \nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n \nWhen you do, the pgsql-performance isn't the right list -- try\npgsql-general.\n \n-Kevin\n",
"msg_date": "Wed, 15 Sep 2010 08:38:07 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: POSTGRES error"
}
] |
[
{
"msg_contents": "Hi,\n\nI am confronted with a use case where my database mainly does big\naggregate select (ROLAP), a bunch of batch jobs, and quite few OLTP.\n\nI come into cases where the planner under-estimates the number of rows\nin some relations, chooses to go for nested loops, and takes forever to\ncomplete the request. (Notice as the side note that Oracle (10g or 11g)\nis not any better on this workload and will sometime go crazy and choose\na plan that takes hours...)\n\nI've played with statistics, vacuum and so on, but at the end the\nplanner is not accurate enough when evaluating the number of rows in\nsome complex queries.\n\nDisableing nested loops most of the time solves the performance issues\nin my tests... generally going from 30 sec. down to 1 sec.\n\nSo my question is : would it be a very bad idea to disable nested loops\nin production ?\nThe way I see it is that it could be a little bit less optimal to use\nmerge join or hash join when joining on a few rows, but this is peanuts\ncompared to how bad it is to use nested loops when the number of rows\nhappens to be much higher than what the planner thinks.\n\nIs this stupid, ie are there cases when merge join or hash join are much\nslower than nested loops on a few rows ?\n\nThanks in advance,\n\nFranck\n\n\n\n",
"msg_date": "Thu, 16 Sep 2010 10:23:47 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is disableing nested_loops a bad idea ?"
},
{
"msg_contents": "Without knowing more about your queries and table structure, it is hard to\nsay if there is a better solution. But one thing you should probably\nconsider doing is just finding the queries where disabling nested loops is\nverifiably effective and then just disabling nested loops on that connection\nbefore running the query and then reset after the query completes. That\nway, you won't impact queries that legitimately use nested loops. Someone\nwith more experience than I have in tuning the general postgres config may\nbe able to offer a better solution for getting the query planner to make\nbetter decisions with the global config, but they'll surely need to know a\nlot more about your queries in order to do so.\n\nOn Thu, Sep 16, 2010 at 1:23 AM, Franck Routier <[email protected]>wrote:\n\n> Hi,\n>\n> I am confronted with a use case where my database mainly does big\n> aggregate select (ROLAP), a bunch of batch jobs, and quite few OLTP.\n>\n> I come into cases where the planner under-estimates the number of rows\n> in some relations, chooses to go for nested loops, and takes forever to\n> complete the request. (Notice as the side note that Oracle (10g or 11g)\n> is not any better on this workload and will sometime go crazy and choose\n> a plan that takes hours...)\n>\n> I've played with statistics, vacuum and so on, but at the end the\n> planner is not accurate enough when evaluating the number of rows in\n> some complex queries.\n>\n> Disableing nested loops most of the time solves the performance issues\n> in my tests... generally going from 30 sec. down to 1 sec.\n>\n> So my question is : would it be a very bad idea to disable nested loops\n> in production ?\n> The way I see it is that it could be a little bit less optimal to use\n> merge join or hash join when joining on a few rows, but this is peanuts\n> compared to how bad it is to use nested loops when the number of rows\n> happens to be much higher than what the planner thinks.\n>\n> Is this stupid, ie are there cases when merge join or hash join are much\n> slower than nested loops on a few rows ?\n>\n> Thanks in advance,\n>\n> Franck\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWithout knowing more about your queries and table structure, it is hard to say if there is a better solution. But one thing you should probably consider doing is just finding the queries where disabling nested loops is verifiably effective and then just disabling nested loops on that connection before running the query and then reset after the query completes. That way, you won't impact queries that legitimately use nested loops. Someone with more experience than I have in tuning the general postgres config may be able to offer a better solution for getting the query planner to make better decisions with the global config, but they'll surely need to know a lot more about your queries in order to do so.\nOn Thu, Sep 16, 2010 at 1:23 AM, Franck Routier <[email protected]> wrote:\nHi,\n\nI am confronted with a use case where my database mainly does big\naggregate select (ROLAP), a bunch of batch jobs, and quite few OLTP.\n\nI come into cases where the planner under-estimates the number of rows\nin some relations, chooses to go for nested loops, and takes forever to\ncomplete the request. 
(Notice as the side note that Oracle (10g or 11g)\nis not any better on this workload and will sometime go crazy and choose\na plan that takes hours...)\n\nI've played with statistics, vacuum and so on, but at the end the\nplanner is not accurate enough when evaluating the number of rows in\nsome complex queries.\n\nDisableing nested loops most of the time solves the performance issues\nin my tests... generally going from 30 sec. down to 1 sec.\n\nSo my question is : would it be a very bad idea to disable nested loops\nin production ?\nThe way I see it is that it could be a little bit less optimal to use\nmerge join or hash join when joining on a few rows, but this is peanuts\ncompared to how bad it is to use nested loops when the number of rows\nhappens to be much higher than what the planner thinks.\n\nIs this stupid, ie are there cases when merge join or hash join are much\nslower than nested loops on a few rows ?\n\nThanks in advance,\n\nFranck\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 16 Sep 2010 02:55:55 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is disableing nested_loops a bad idea ?"
},
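The per-connection approach suggested above can be scoped even more tightly with SET LOCAL, so the setting reverts on its own at commit. A minimal sketch — the table and column names here are invented for illustration:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- affects only this transaction
    SELECT d.name, sum(f.amount)       -- the one query that is badly misestimated
      FROM facts f
      JOIN dims d ON d.id = f.dim_id
     GROUP BY d.name;
    COMMIT;                            -- enable_nestloop reverts here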
{
"msg_contents": "Franck Routier <[email protected]> wrote:\n \n> I come into cases where the planner under-estimates the number of\n> rows in some relations, chooses to go for nested loops, and takes\n> forever to complete the request.\n \nPeople can provide more targeted assistance if you pick one of the\noffenders and provide enough information for a thorough analysis. \nIt's at least somewhat likely that some tweaks to your configuration\nor maintenance procedures could help all the queries, but starting\nwith just one is likely to highlight what those changes might be.\n \nFor ideas on what information to include, see this page:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Thu, 16 Sep 2010 08:49:58 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is disableing nested_loops a bad idea ?"
},
{
"msg_contents": "Thanks Kevin and Samuel for your input.\n\nThe point is we already made a lot of tweaking to try to tune postgresql\nto behave correctly. I work with Damien, and here is a post he did in\njuly to explain the kind of problems we have\nhttp://comments.gmane.org/gmane.comp.db.postgresql.performance/25745\n\nThe end of the thread was Robert Hass concluding that \"Disabling\nnestloops altogether, even for one particular query, is\noften going to be a sledgehammer where you need a scalpel. But then\nagain, a sledgehammer is better than no hammer.\"\n\nSo I wanted to better understand to what extend using a sledgehammer\nwill impact me :-)\n\nFranck\n\n\nLe jeudi 16 septembre 2010 à 08:49 -0500, Kevin Grittner a écrit :\n> Franck Routier <[email protected]> wrote:\n> \n> > I come into cases where the planner under-estimates the number of\n> > rows in some relations, chooses to go for nested loops, and takes\n> > forever to complete the request.\n> \n> People can provide more targeted assistance if you pick one of the\n> offenders and provide enough information for a thorough analysis. \n> It's at least somewhat likely that some tweaks to your configuration\n> or maintenance procedures could help all the queries, but starting\n> with just one is likely to highlight what those changes might be.\n> \n> For ideas on what information to include, see this page:\n> \n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n> \n> -Kevin\n> \n\n\n\n\n",
"msg_date": "Thu, 16 Sep 2010 16:13:06 +0200",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is disableing nested_loops a bad idea ?"
},
{
"msg_contents": "Franck Routier <[email protected]> wrote:\n \n> So I wanted to better understand to what extend using a\n> sledgehammer will impact me :-)\n \nDisabling it globally is likely to significantly hurt some queries. \nBefore resorting to that, you might decrease effective_cache_size,\nincrease random_page_cost, and (most importantly) do whatever you\ncan to improve statistics. Where those fail, and disabling nested\nloops helps, I concur with the advice to only do that for specific\nqueries, taking care to reset it afterward.\n \nIn other words, use that sledgehammer with great care, don't just\nswing it around wildly.... ;-)\n \n-Kevin\n",
"msg_date": "Thu, 16 Sep 2010 09:25:32 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is disableing nested_loops a bad idea ?"
},
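"Do whatever you can to improve statistics" usually starts with raising the per-column statistics target on the columns whose row-count estimates are furthest off, then re-analyzing. A sketch with hypothetical table and column names:

    ALTER TABLE orders ALTER COLUMN status SET STATISTICS 500;  -- default target is 10 in 8.3, 100 from 8.4 on
    ANALYZE orders;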
{
"msg_contents": "On Thu, Sep 16, 2010 at 10:13 AM, Franck Routier\n<[email protected]> wrote:\n> Thanks Kevin and Samuel for your input.\n>\n> The point is we already made a lot of tweaking to try to tune postgresql\n> to behave correctly. I work with Damien, and here is a post he did in\n> july to explain the kind of problems we have\n> http://comments.gmane.org/gmane.comp.db.postgresql.performance/25745\n>\n> The end of the thread was Robert Hass concluding that \"Disabling\n> nestloops altogether, even for one particular query, is\n> often going to be a sledgehammer where you need a scalpel. But then\n> again, a sledgehammer is better than no hammer.\"\n>\n> So I wanted to better understand to what extend using a sledgehammer\n> will impact me :-)\n\nOne particular case where you may get a nasty surprise is:\n\nNested Loop\n-> Whatever\n-> Index Scan\n\nThis isn't necessarily terrible if the would-be index scan is on a\nsmall table, because a hash join may be not too bad. It may not be\ntoo good, either, but if the would-be index scan is on a large table\nthe whole thing might turn into a merge join. That can get pretty\nugly. Of course in some cases the planner may be able to rejigger the\nwhole plan in some way that mitigates the damage, but not necessarily.\n\nOne of the things I've noticed about our planner is that it becomes\nless predictable in stressful situations. As you increase the number\nof tables involved in join planning, for example, the query planner\nstill delivers a lot of very good plans, but not quite as predictably.\n Things don't slow down uniformly across the board; instead, most of\nthe plans remain pretty good but every once in a while (and with\nincreasing frequency as you keep cranking up the table count) you get\na bad one. Shutting off any of the enable_* constants will, I think,\nproduce a similar effect. Many queries can be adequate handled using\nsome other technique and you won't really notice it, but you may find\nthat you have a few (or someone will eventually write one) which\n*really* needs whatever technique you turned off for decent\nperformance. At that point you don't have a lot of options...\n\nIncidentally, it's \"Haas\", rather than \"Hass\".\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Sun, 26 Sep 2010 00:02:27 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is disableing nested_loops a bad idea ?"
}
] |
[
{
"msg_contents": "Since I've never stumbled on this until now, and I spend all day looking \nat this sort of thing, I'm guessing many of you haven't seen the \nfollowing collection of 3ware controller trivia either: \nhttp://makarevitch.org/rant/3ware/\n\nAll moving toward obsolete with 3ware being a part of LSI now I suspect, \nbut a great historical piece. It puts some numbers on the general \nanecdotal observations that the older 9500 series of cards from 3ware \nwere terrible in several common situations, and that the 9600 and later \ncards aren't so bad. I've never seen so much of the trivia related to \nusing these particular cards assembled into one well referenced place \nbefore. Probably interesting to those interested in general Linux disk \nperformance issues even if you don't care about this particular brand of \ncard.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Sat, 18 Sep 2010 03:50:44 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "3ware trivia overload"
},
{
"msg_contents": "I'll throw in my 2 cents worth:\n\n1) Performance using RAID 1 for reads sucks. You would expect throughput to\ndouble in this configuration, but it doesn't. That said, performance for\nRAID 1 is not noticeably worse than Linux MD. My testing showed the 3Ware\ncontroller to be about 20% faster than Linux MD for RAID 1.\n\n2) The management tools work well and can keep you informed of problems with\nthe array. They're also web based, so they are easy to use from other\ncomputers. Unlike some systems like IBM's which is Java based, or Adaptec's\nwhich just plain don't work on Linux (at least I could never get it to\nwork).\n\n3) They are very reliable. I've never had a problem I could attribute to the\ncontroller.\n\n4) I recently did some testing with a single SSD drive and a 3Ware\n9550SXU-4LP controller. I got something like 3,000 IOPs per second for\nrandom writes. I think the problem is not so much that the 3Ware controller\nsucks as that Consumer Grade SATA drives suck.\n\n5) A common problem is that if you don't have the BBU, the write cache\ndefaults to disabled. This can just kill performance. You have to either\npurchase a BBU or live dangerously and enable the write cache without the\nBBU.\n\n6) A lot of the performance numbers bandied about in those tests are not\nvalid. For example, one test was done using a 385 MB file, which is almost\ncertainly smaller than RAM on the system. One guy talks about getting 6000\nrandom reads per second from a standard consumer grade SATA drive. That's a\nseek time of .0166 ms. Not realistic. Clearly the Linux Page cache was\ninvolved and skewing numbers.\n\nSo, my opinion is that if you want really reliable RAID performance using\nconsumer grade drives, the 3Ware controller is the way to go. If reliability\nis not so important, use Linux MD. If you want high performance pay the very\nhigh $/GB for SCSI or SSD.\n\nGeorge Sexton\nMH Software, Inc.\n303 438-9585\nwww.mhsoftware.com\n\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Greg Smith\n> Sent: Saturday, September 18, 2010 1:51 AM\n> To: [email protected]\n> Subject: [PERFORM] 3ware trivia overload\n> \n> Since I've never stumbled on this until now, and I spend all day\n> looking\n> at this sort of thing, I'm guessing many of you haven't seen the\n> following collection of 3ware controller trivia either:\n> http://makarevitch.org/rant/3ware/\n> \n> All moving toward obsolete with 3ware being a part of LSI now I\n> suspect,\n> but a great historical piece. It puts some numbers on the general\n> anecdotal observations that the older 9500 series of cards from 3ware\n> were terrible in several common situations, and that the 9600 and later\n> cards aren't so bad. I've never seen so much of the trivia related to\n> using these particular cards assembled into one well referenced place\n> before. Probably interesting to those interested in general Linux disk\n> performance issues even if you don't care about this particular brand\n> of\n> card.\n> \n> --\n> Greg Smith, 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support www.2ndQuadrant.us\n> Author, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\n> https://www.packtpub.com/postgresql-9-0-high-performance/book\n> \n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Mon, 20 Sep 2010 15:54:32 -0600",
"msg_from": "\"George Sexton\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware trivia overload"
},
{
"msg_contents": "On Mon, Sep 20, 2010 at 2:54 PM, George Sexton <[email protected]> wrote:\n> I'll throw in my 2 cents worth:\n>\n> 1) Performance using RAID 1 for reads sucks. You would expect throughput to\n> double in this configuration, but it doesn't. That said, performance for\n> RAID 1 is not noticeably worse than Linux MD. My testing showed the 3Ware\n> controller to be about 20% faster than Linux MD for RAID 1.\n\nNo performance improvement is expected for streaming reads in any\nnon-striped RAID1. Random reads should nearly double in throughput.\n\nYou can use Linux's software RAID10 module to improve streaming reads\nof a \"mirror\" using the \"far\" layout which in effect stripes the data\nacross 2 disks at the expense of some hit in streaming write\nperformance. Testing is required to determine if this tradeoff works\nfor your workload or not.\n\n-Dave\n",
"msg_date": "Mon, 20 Sep 2010 15:09:41 -0700",
"msg_from": "David Rees <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3ware trivia overload"
}
] |
[
{
"msg_contents": "Hello,\n\nwe are experiencing some performance degradation on a database where\nthe main table is running towards the 100M record. Together with the\nslowness of the queries I notice these symptoms:\n\n- size bloat of partial indexes\n- very bad planning estimates\n\nI'd appreciate any hint to get a better picture of what is going on\nand to understand how much the symptoms are correlated.\n\nThe most noticeable problems are with queries such as:\n\n select * from foos where <condition>\n\nwhere there is a very selective condition (about 10K record over 100M)\nand a partial index on them. The index is correctly taken in\nconsideration for the scan but with an extremely wrong estimate and\npainful performance, e.g.:\n\n# explain select count(*), sum(x) from foos where rcon IS NULL AND\nis_settled = true;\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Aggregate (cost=4774842.01..4774842.02 rows=1 width=8)\n -> Bitmap Heap Scan on foos (cost=218211.50..4674496.17\nrows=20069167 width=8)\n Recheck Cond: ((rcon IS NULL) AND is_settled)\n -> Bitmap Index Scan on i_rcon3 (cost=0.00..213194.21\nrows=20069167 width=0)\n\n(I don't have an analyze output anymore for this, but the rows\nreturned were about 7K at the moment). This query used to run in\nsub-second time: recently it started taking several minutes or, if run\nquickly after a previous run, around 10 seconds.\n\npg_stat_all_index showed >400M size for this index: way too much to\nindex <10K records.\n\nTrying to solve this bloat problem I've tried:\n\n1: manually running vacuum on the table (the autovacuum had not\ntouched it for a while and it seems it avoids it probably because\nother table are updated more. The verbose output concerning the above\nindex was:\n\n...\nINFO: scanned index \"i_rcon3\" to remove 22369332 row versions\nDETAIL: CPU 0.84s/5.20u sec elapsed 50.18 sec.\n...\nINFO: \"foos\": removed 22369332 row versions in 1009710 pages\nDETAIL: CPU 34.38s/27.01u sec elapsed 2226.51 sec.\n...\nINFO: scanned index \"i_rcon3\" to remove 15330597 row versions\nDETAIL: CPU 0.48s/2.14u sec elapsed 15.42 sec.\n...\nINFO: \"foos\": removed 15330597 row versions in 569208 pages\nDETAIL: CPU 9.40s/8.42u sec elapsed 732.17 sec.\n...\nINFO: index \"i_rcon3\" now contains 43206 row versions in 53495 pages\nDETAIL: 9494602 index row versions were removed.\n53060 index pages have been deleted, 20032 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\n...\nWARNING: relation \"public.foos\" contains more than \"max_fsm_pages\"\npages with useful free space\nHINT: Consider using VACUUM FULL on this relation or increasing the\nconfiguration parameter \"max_fsm_pages\".\n\nAlbeit the output was promising, the planner estimate and index size\ndidn't change very much (I assumed the performance didn't change as\nwell so I didn't run an explain analyze).\n\n2. I tried to rebuild concurrently an index with exactly the same\nproperties: this produced an index with a more reasonable size (now,\nafter a busy weekend running it is about 12M) and this solved the\nperformance problem. It didn't fix the bad estimate anyway.\n\n3. I increased the statistics from the default 10 to 100 and analyzed\nexpecting to see some change in the estimated number of rows: apart\nfrom a small fluctuation the estimate remained around the 20M.\n\n4. the index was not indexing a distinct field but rather a fkey with\njust no more than 4K distinct values and an extremely uneven\ndistribution. 
I created an index with the same condition but on the\npkey but the estimate didn't change: stable on the 20M records even\nafter increasing the stats to 100 for the pkey field too.\n\nDoes anybody have some information about where the bloat is coming\nfrom and what is the best way to get rid of it? Would a vacuum full\nfix this kind of problem? Is there a way to fix it without taking the\nsystem offline?\n\nThe indexed condition is a state of the evolution of the records in\nthe table: many records assume that state for some time, then move to\na different state no more indexed. Is the continuous addition/deletion\nof records to the index causing the bloat (which can be then\nconsidered limited to the indexes with a similar usage pattern)? Is\nreindex/concurrent rebuild the best answer?\n\nAny idea of where the 20M record estimate is coming from? Isn't the\nsize of the partial index taken into account in the estimate?\n\nWe are running PG 8.3, planning for migration on new hardware and\nconcurrently on a new PG version in the near future. Are our\nproblematic behaviours known to be fixed in later releases?\n\nThank you very much. Regards.\n\n-- Daniele\n",
"msg_date": "Mon, 20 Sep 2010 12:59:09 +0100",
"msg_from": "Daniele Varrazzo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance degradation, index bloat and planner estimates"
},
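The concurrent rebuild described in step 2 would look roughly like the following on versions where REINDEX cannot yet run concurrently. The indexed column below is a stand-in and the partial predicate is guessed from the query shown above, so both need adjusting to the real i_rcon3 definition:

    CREATE INDEX CONCURRENTLY i_rcon3_new
        ON foos (some_fk_id)              -- whichever column i_rcon3 actually covers
     WHERE rcon IS NULL AND is_settled;

    DROP INDEX i_rcon3;                   -- takes a brief exclusive lock on foos
    ALTER INDEX i_rcon3_new RENAME TO i_rcon3;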
{
"msg_contents": "On 20/09/2010 7:59 PM, Daniele Varrazzo wrote:\n\n> Does anybody have some information about where the bloat is coming\n> from and what is the best way to get rid of it? Would a vacuum full\n> fix this kind of problem? Is there a way to fix it without taking the\n> system offline?\n\nIt's hard to know where the index bloat comes from. The usual cause I \nsee reported here is with regular VACUUM FULL use, which doesn't seem to \nbe a factor in your case.\n\nA VACUUM FULL will not address index bloat; it's more likely to add to \nit. You'd want to use CLUSTER instead, but that'll still require an \nexclusive lock that takes the table offline for some time. Your current \nsolution - a concurrent reindex - is your best bet for a workaround \nuntil you find out what's causing the bloat.\n\nIf the bloat issue were with relations rather than indexes I'd suspect \nfree space map issues as you're on 8.3.\n\nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html\n\nMy (poor) understanding is that index-only bloat probably won't be an \nFSM issue.\n\n> The indexed condition is a state of the evolution of the records in\n> the table: many records assume that state for some time, then move to\n> a different state no more indexed. Is the continuous addition/deletion\n> of records to the index causing the bloat (which can be then\n> considered limited to the indexes with a similar usage pattern)?\n\nPersonally I don't know enough to answer that. I would've expected that \nproper VACUUMing would address any resulting index bloat, but\n\n> Any idea of where the 20M record estimate is coming from? Isn't the\n> size of the partial index taken into account in the estimate?\n\nI'd really help to have EXPLAIN ANALYZE output here.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Wed, 22 Sep 2010 12:09:40 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation, index bloat and planner estimates"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> If the bloat issue were with relations rather than indexes I'd suspect \n> free space map issues as you're on 8.3.\n\n> http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html\n\n> My (poor) understanding is that index-only bloat probably won't be an \n> FSM issue.\n\nLack of FSM space can hurt indexes too, although I agree that if *only*\nindexes are bloating then it's probably not FSM to blame.\n\n>> The indexed condition is a state of the evolution of the records in\n>> the table: many records assume that state for some time, then move to\n>> a different state no more indexed. Is the continuous addition/deletion\n>> of records to the index causing the bloat (which can be then\n>> considered limited to the indexes with a similar usage pattern)?\n\n> Personally I don't know enough to answer that. I would've expected that \n> proper VACUUMing would address any resulting index bloat, but\n\nMaybe the index fencepost problem? If your usage pattern involves\ncreating many records and then deleting most of them, but leaving behind\na few records that are regularly spaced in the index ordering, then you\ncan end up with a situation where many index pages have only a few\nentries. An example case is creating daily records indexed by date, and\nthen deleting all but the last-day-of-the-month entries later. You end\nup with index pages only about 1/30th full. The index cannot be shrunk\nbecause no page is completely empty, but it contains much unused space\n--- which can never get re-used either, if you never insert any new keys\nin those key ranges.\n\nIf this is the problem then reindexing is the only fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Sep 2010 00:20:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation, index bloat and planner estimates "
}
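If the contrib/pgstattuple module is installed, its pgstatindex() function gives a direct view of the kind of sparsely-filled leaf pages described here — a sketch:

    SELECT avg_leaf_density, leaf_fragmentation
      FROM pgstatindex('i_rcon3');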
] |
[
{
"msg_contents": "Hi All,\n\n(pg 8.3.7 on RHEL 2.6.18-92.el5 )\n\nI ran the query below (copied from\nhttp://pgsql.tapoueh.org/site/html/news/20080131.bloat.html ) on a\nproduction DB we have and I am looking at some pretty nasty looking\nnumbers for tables in the pg_catalog schema. I have tried a reindex\nand vaccum but neither seem to be clearing these out, tried a cluster\nand it won't let me.\n\nI am viewing the problem wrong? is there anything I can do while the\nDB is online ? do I need to clean up other things first ?\n\n\nthanks,\n\n..: Mark\n\n\n\n-[ RECORD 1 ]+--------------------------------\nschemaname | pg_catalog\ntablename | pg_attribute\nreltuples | 5669\nrelpages | 113529\notta | 92\ntbloat | 1234.0\nwastedpages | 113437\nwastedbytes | 929275904\nwastedsize | 886 MB\niname | pg_attribute_relid_attnam_index\nituples | 5669\nipages | 68\niotta | 80\nibloat | 0.9\nwastedipages | 0\nwastedibytes | 0\nwastedisize | 0 bytes\n\n\n\n\nSELECT\n schemaname, tablename, reltuples::bigint, relpages::bigint, otta,\n ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric\nEND,1) AS tbloat,\n relpages::bigint - otta AS wastedpages,\n bs*(sml.relpages-otta)::bigint AS wastedbytes,\n pg_size_pretty((bs*(relpages-otta))::bigint) AS wastedsize,\n iname, ituples::bigint, ipages::bigint, iotta,\n ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE\nipages/iotta::numeric END,1) AS ibloat,\n CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END\nAS wastedipages,\n CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS\nwastedibytes,\n CASE WHEN ipages < iotta THEN pg_size_pretty(0) ELSE\npg_size_pretty((bs*(ipages-iotta))::bigint) END AS wastedisize\n FROM (\n SELECT\n schemaname, tablename, cc.reltuples, cc.relpages, bs,\n CEIL((cc.reltuples*((datahdr+ma-\n (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma\nEND))+nullhdr2+4))/(bs-20::float)) AS otta,\n COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0)\nAS ituples, COALESCE(c2.relpages,0) AS ipages,\n COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0)\nAS iotta -- very rough approximation, assumes all cols\n FROM (\n SELECT\n ma,bs,schemaname,tablename,\n (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma\nEND)))::numeric AS datahdr,\n (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma\nELSE nullhdr%ma END))) AS nullhdr2\n FROM (\n SELECT\n schemaname, tablename, hdr, ma, bs,\n SUM((1-null_frac)*avg_width) AS datawidth,\n MAX(null_frac) AS maxfracsum,\n hdr+(\n SELECT 1+count(*)/8\n FROM pg_stats s2\n WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND\ns2.tablename = s.tablename\n ) AS nullhdr\n FROM pg_stats s, (\n SELECT\n (SELECT current_setting('block_size')::numeric) AS bs,\n CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN\n27 ELSE 23 END AS hdr,\n CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma\n FROM (SELECT version() AS v) AS foo\n ) AS constants\n GROUP BY 1,2,3,4,5\n ) AS foo\n ) AS rs\n JOIN pg_class cc ON cc.relname = rs.tablename\n JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname\n= rs.schemaname\n LEFT JOIN pg_index i ON indrelid = cc.oid\n LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid\n ) AS sml\n WHERE sml.relpages - otta > 0 OR ipages - iotta > 10\n ORDER BY wastedbytes DESC, wastedibytes DESC\n",
"msg_date": "Mon, 20 Sep 2010 11:25:45 -0600",
"msg_from": "mark <[email protected]>",
"msg_from_op": true,
"msg_subject": "cleanup on pg_ system tables?"
},
{
"msg_contents": "On Mon, Sep 20, 2010 at 1:25 PM, mark <[email protected]> wrote:\n> Hi All,\n>\n> (pg 8.3.7 on RHEL 2.6.18-92.el5 )\n>\n> I ran the query below (copied from\n> http://pgsql.tapoueh.org/site/html/news/20080131.bloat.html ) on a\n> production DB we have and I am looking at some pretty nasty looking\n> numbers for tables in the pg_catalog schema. I have tried a reindex\n> and vaccum but neither seem to be clearing these out, tried a cluster\n> and it won't let me.\n>\n> I am viewing the problem wrong? is there anything I can do while the\n> DB is online ? do I need to clean up other things first ?\n\nYou sure you tried VACUUM FULL, not just VACUUM? I've been in the same\nboat on 8.3, actually, see:\n\n http://archives.postgresql.org/pgsql-performance/2010-04/msg00204.php\n\nI migrated the server mentioned in that thread to 8.4, so at least I\ndon't have to deal with the max_fsm_pages problem; you might want to\ndouble check your postmaster logfile to make sure you don't see a\nbunch of warnings about max_fsm_pages.\n\nI put in a twice-hourly cron job which runs a VACUUM ANALYZE on the\npg_catalog tables I'd been having trouble with, which has helped keep\npg_catalog bloat down. My pg_catalog tables are still somewhat bloated\n(11 GB for pg_attribute), but at least they've been at a steady size\nfor the past few months without needing a VACUUM FULL.\n\nJosh\n",
"msg_date": "Mon, 20 Sep 2010 15:24:22 -0400",
"msg_from": "Josh Kupershmidt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cleanup on pg_ system tables?"
}
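The job itself need not be anything more exotic than a psql call from cron running statements like these (the catalog list is just the usual suspects for temp-table-heavy workloads; trim it to whatever actually bloats for you):

    VACUUM ANALYZE pg_catalog.pg_attribute;
    VACUUM ANALYZE pg_catalog.pg_class;
    VACUUM ANALYZE pg_catalog.pg_type;
    VACUUM ANALYZE pg_catalog.pg_depend;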
] |
[
{
"msg_contents": "All,\n\nI have a reporter who wants to talk to a data warehousing user of\nPostgreSQL (> 1TB preferred), on the record, about 9.0. Please e-mail\nme if you are available and when good times to chat would be (and time\nzone). Thanks!\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 20 Sep 2010 11:47:56 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Need PostgreSQL data warehousing user, on the record"
}
] |
[
{
"msg_contents": "The autovacuum daemon currently uses the number of inserted and\nupdated tuples to determine if it should run VACUUM ANALYZE on a\ntable. Why doesn’t it consider deleted tuples as well?\n\nFor example, I have a table which gets initially loaded with several\nmillion records. A batch process grabs the records 100 at a time, does\nsome processing and deletes them from the table in the order of the\nprimary key. Eventually, performance degrades because an autoanalyze\nis never run. The planner decides that it should do a sequential scan\ninstead of an index scan because the stats don't reflect reality. See\nexample below.\n\nI can set up a cron job to run the ANALYZE manually, but it seems like\nthe autovacuum daemon should be smart enough to figure this out on its\nown. Deletes can have as big an impact on the stats as inserts and\nupdates.\n\nJoe Miller\n\n---------------------------\n\ntestdb=# \\d test\n Table \"public.test\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer | not null\n data | bytea |\nIndexes:\n \"test_pkey\" PRIMARY KEY, btree (id)\n\ntestdb=# insert into public.test select s.a, gen_random_bytes(256)\nfrom generate_series(1,10000000) as s(a);\nINSERT 0 10000000\n\ntestdb=# SELECT *\nFROM pg_stat_all_tables\nWHERE schemaname='public' AND relname='test';\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd |\nn_live_tup | n_dead_tup | last_vacuum | last_autovacuum | last_analyze\n| last_autoanalyze\n---------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+-------------+-----------------+--------------+------------------\n 5608158 | public | test | 1 | 0 | 0 |\n 0 | 10000000 | 0 | 0 | 0 |\n 0 | 0 | | | |\n2010-09-20 10:46:37.283775-04\n(1 row)\n\n\ntestdb=# explain analyze delete from public.test where id <= 100;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Index Scan using test_pkey on test (cost=0.00..71.63 rows=1000\nwidth=6) (actual time=13.251..22.916 rows=100 loops=1)\n Index Cond: (id <= 100)\n Total runtime: 23.271 ms\n(3 rows)\n\n{ delete records ad nauseum }\n\ntestdb=# explain analyze delete from public.test where id <= 7978800;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..410106.17 rows=2538412 width=6) (actual\ntime=48771.772..49681.562 rows=100 loops=1)\n Filter: (id <= 7978800)\n Total runtime: 49682.006 ms\n(3 rows)\n\ntestdb=# SELECT *\nFROM pg_stat_all_tables\nWHERE schemaname='public' AND relname='test';\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd |\nn_live_tup | n_dead_tup | last_vacuum | last_autovacuum\n| last_analyze | last_autoanalyze\n---------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+-------------+-------------------------------+--------------+-------------------------------\n 5608158 | public | test | 1 | 0 | 54345 |\n 5433206 | 10000000 | 0 | 5433200 | 0 |\n5459506 | 725300 | | 2010-09-20 14:45:54.757611-04 |\n | 2010-09-20 10:46:37.283775-04\n",
"msg_date": "Mon, 20 Sep 2010 16:38:37 -0400",
"msg_from": "Joe Miller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Auto ANALYZE criteria"
},
{
"msg_contents": "Joe Miller <[email protected]> wrote:\n \n> I can set up a cron job to run the ANALYZE manually, but it seems\n> like the autovacuum daemon should be smart enough to figure this\n> out on its own. Deletes can have as big an impact on the stats as\n> inserts and updates.\n \nBut until the deleted rows are vacuumed from the indexes, an index\nscan must read all the index entries for the deleted tuples, and\nvisit the heap to determine that they are not visible. Does a\nmanual run of ANALYZE without a VACUUM change the stats much for\nyou, or are you running VACUUM ANALYZE?\n \n-Kevin\n",
"msg_date": "Mon, 20 Sep 2010 17:28:53 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto ANALYZE criteria"
},
{
"msg_contents": "Joe Miller <[email protected]> writes:\n> The autovacuum daemon currently uses the number of inserted and\n> updated tuples to determine if it should run VACUUM ANALYZE on a\n> table.� Why doesn�t it consider deleted tuples as well?\n\nI think you misread the code.\n\nNow there *is* a problem, pre-9.0, if your update pattern is such that\nmost or all updates are HOT updates. To quote from the 9.0 alpha\nrelease notes:\n\n Revise pgstat's tracking of tuple changes to\n improve the reliability of decisions about when to\n auto-analyze. The previous code depended on n_live_tuples +\n n_dead_tuples - last_anl_tuples, where all three of these\n numbers could be bad estimates from ANALYZE itself. Even\n worse, in the presence of a steady flow of HOT updates and\n matching HOT-tuple reclamations, auto-analyze might never\n trigger at all, even if all three numbers are exactly right,\n because n_dead_tuples could hold steady.\n\nIt's not clear to me if that matches your problem, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Sep 2010 22:12:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto ANALYZE criteria "
},
{
"msg_contents": "I was looking at the autovacuum documentation:\nhttp://www.postgresql.org/docs/9.0/interactive/routine-vacuuming.html#AUTOVACUUM\n\n For analyze, a similar condition is used: the threshold, defined as:\n analyze threshold = analyze base threshold + analyze scale factor *\nnumber of tuples\n is compared to the total number of tuples inserted or updated since\nthe last ANALYZE.\n\nI guess that should be updated to read \"insert, updated or deleted\".\n\n\nOn Mon, Sep 20, 2010 at 10:12 PM, Tom Lane <[email protected]> wrote:\n> Joe Miller <[email protected]> writes:\n>> The autovacuum daemon currently uses the number of inserted and\n>> updated tuples to determine if it should run VACUUM ANALYZE on a\n>> table. Why doesn’t it consider deleted tuples as well?\n>\n> I think you misread the code.\n>\n> Now there *is* a problem, pre-9.0, if your update pattern is such that\n> most or all updates are HOT updates. To quote from the 9.0 alpha\n> release notes:\n>\n> Revise pgstat's tracking of tuple changes to\n> improve the reliability of decisions about when to\n> auto-analyze. The previous code depended on n_live_tuples +\n> n_dead_tuples - last_anl_tuples, where all three of these\n> numbers could be bad estimates from ANALYZE itself. Even\n> worse, in the presence of a steady flow of HOT updates and\n> matching HOT-tuple reclamations, auto-analyze might never\n> trigger at all, even if all three numbers are exactly right,\n> because n_dead_tuples could hold steady.\n>\n> It's not clear to me if that matches your problem, though.\n>\n> regards, tom lane\n>\n",
"msg_date": "Tue, 21 Sep 2010 09:33:00 -0400",
"msg_from": "Joe Miller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Auto ANALYZE criteria"
},
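Plugging the stock settings into that formula also shows why the table in the first message waited so long for an autoanalyze: with the default autovacuum_analyze_threshold = 50 and autovacuum_analyze_scale_factor = 0.1, a 10M-row table needs roughly a million changed tuples before the daemon reacts. As a back-of-the-envelope check:

    SELECT 50 + 0.1 * 10000000 AS changes_needed_before_autoanalyze;  -- 1000050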
{
"msg_contents": "On Mon, Sep 20, 2010 at 6:28 PM, Kevin Grittner\n<[email protected]> wrote:\n> Joe Miller <[email protected]> wrote:\n>\n>> I can set up a cron job to run the ANALYZE manually, but it seems\n>> like the autovacuum daemon should be smart enough to figure this\n>> out on its own. Deletes can have as big an impact on the stats as\n>> inserts and updates.\n>\n> But until the deleted rows are vacuumed from the indexes, an index\n> scan must read all the index entries for the deleted tuples, and\n> visit the heap to determine that they are not visible. Does a\n> manual run of ANALYZE without a VACUUM change the stats much for\n> you, or are you running VACUUM ANALYZE?\n>\n> -Kevin\n>\n\nThe autovacuum is running correctly, so the deleted rows are being\nremoved. All I'm doing is an ANALYZE, not VACUUM ANALYZE.\n",
"msg_date": "Tue, 21 Sep 2010 10:59:24 -0400",
"msg_from": "Joe Miller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Auto ANALYZE criteria"
},
{
"msg_contents": "Joe Miller <[email protected]> writes:\n> I was looking at the autovacuum documentation:\n> http://www.postgresql.org/docs/9.0/interactive/routine-vacuuming.html#AUTOVACUUM\n\n> For analyze, a similar condition is used: the threshold, defined as:\n> analyze threshold = analyze base threshold + analyze scale factor *\n> number of tuples\n> is compared to the total number of tuples inserted or updated since\n> the last ANALYZE.\n\n> I guess that should be updated to read \"insert, updated or deleted\".\n\nMph. We caught the other places where the docs explain what the analyze\nthreshold is, but missed that one. Fixed, thanks for pointing it out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 2010 16:44:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto ANALYZE criteria "
},
{
"msg_contents": "Thanks for fixing the docs, but if that's the case, I shouldn't be\nseeing the behavior that I'm seeing.\n\nShould I flesh out this test case a little better and file a bug?\n\nThanks,\n\nJoe\n\n\nOn Tue, Sep 21, 2010 at 4:44 PM, Tom Lane <[email protected]> wrote:\n> Joe Miller <[email protected]> writes:\n>> I was looking at the autovacuum documentation:\n>> http://www.postgresql.org/docs/9.0/interactive/routine-vacuuming.html#AUTOVACUUM\n>\n>> For analyze, a similar condition is used: the threshold, defined as:\n>> analyze threshold = analyze base threshold + analyze scale factor *\n>> number of tuples\n>> is compared to the total number of tuples inserted or updated since\n>> the last ANALYZE.\n>\n>> I guess that should be updated to read \"insert, updated or deleted\".\n>\n> Mph. We caught the other places where the docs explain what the analyze\n> threshold is, but missed that one. Fixed, thanks for pointing it out.\n>\n> regards, tom lane\n>\n",
"msg_date": "Wed, 13 Oct 2010 17:20:11 -0400",
"msg_from": "Joe Miller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Auto ANALYZE criteria"
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 5:20 PM, Joe Miller <[email protected]> wrote:\n> Thanks for fixing the docs, but if that's the case, I shouldn't be\n> seeing the behavior that I'm seeing.\n>\n> Should I flesh out this test case a little better and file a bug?\n\nA reproducible test case is always a good thing to have...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 26 Oct 2010 08:24:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto ANALYZE criteria"
}
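A reproducible case along those lines can be as small as the sketch below — build a table, delete a slice of it, and watch whether last_autoanalyze ever moves (all names here are arbitrary):

    CREATE TABLE analyze_probe AS
      SELECT g AS id, md5(g::text) AS data
        FROM generate_series(1, 1000000) g;

    DELETE FROM analyze_probe WHERE id <= 200000;

    SELECT relname, n_live_tup, n_dead_tup, last_analyze, last_autoanalyze
      FROM pg_stat_user_tables
     WHERE relname = 'analyze_probe';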
] |
[
{
"msg_contents": "While everybody else was talking about a new software release or \nsomething today, I was busy finally nailing down something elusive that \npops up on this list regularly. A few weeks ago we just had a thread \nnamed \"Performance on new 64bit server compared to my 32bit desktop\" \ndiscussing how memory speed and number of active cores at a time are \nrelated. This is always interesting to PostgreSQL performance in \nparticular, because any one query can only execute on a core at a time. \nIf your workload tends toward small numbers of long queries, the ability \nof your server to handle high memory bandwidth with lots of cores \ndoesn't matter as much as access a single core can marshall.\n\nThe main program used for determine peak memory bandwidth is STREAM, \navailable at http://www.cs.virginia.edu/stream/\n\nThe thing I never see anybody doing is running that with increasing core \ncounts and showing the performance scaling. Another annoyance is that \nyou have to be extremely careful to test with enough memory to exceed \nthe sum of all caching on the processors by a large amount, or your \nresults will be quite inflated.\n\nI believe I have whipped both of these problems for Linux systems having \ngcc 4.2 or later, and the code to test is now available at: \nhttp://github.com/gregs1104/stream-scaling It adds all of the cache \nsizes, increases that by a whole order of magnitude to compute the test \nsize to really minimize their impact, and chugs away more or less \nautomatically trying all the core counts.\n\nThe documentation includes an initial 6 systems I was able to get \nsamples for, and they show a lot of the common things I've noticed \nbefore quite nicely. The upper limits of DDR2 systems even when you \nhave lots of banks, how amazingly fast speeds to a single core are with \nrecent Intel+DDR3/1600 systems, all stuff I've measured at a higher \nlevel are really higlighted with this low-level test.\n\nGiven the changes to the test method and size computations since the \nearlier tests posted in the past thread here, I'm afraid I can't include \nany of those results in the table. Note that this includes the newer \n48-core AMD server that Scott Marlowe posted results from earlier; the \none you see in my README.rst sample results is not it, that's an older \nsystem with 8 sockets, less cores per processor, and slower RAM. \nThere's still some concern in my mind about whether the test size was \nreally big enough in the earlier sample Scott submitted to the list, and \nhe actually has to do real work on that server for the moment before he \ncan re-test. Will get that filled in eventually.\n\nIf any of you into this sort of thing would like to contribute a result, \nI'd like to see the following (off-list please, I'll summarize on the \npage later, and let me know if you want to be credited or anonymous for \nthe contribution):\n\n-Full output from the stream-scaling run\n-Output of \"cat /proc/cpuinfo\" on your server\n-Total amount of RAM in the server (including the output from \"free\" \nwill suffice)\n-RAM topology and speed, if you know it. I can guess that in some cases \nif you don't know.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Tue, 21 Sep 2010 00:20:39 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory speed testing"
},
{
"msg_contents": "Last bit of promotion here for this project: \nhttp://github.com/gregs1104/stream-scaling has been updated with a few \nmore results anonymously submitted by list members here. The updated \nchart really fleshes out how the RAM scaling works on the older \nmulti-socket AMD servers now. That's something that's been described on \nthis list for years, but never really measured this clearly before to my \nknowledge. I have 4 and 8 socket examples in there now, showing \nDDR2/667 and DDR2/800.\n\nAlso have some initial numbers on the DDR-3 based Phenom II processors \ntoo, which fit right in the middle of the Intel samples. They look like \nanother round of good bang for the buck midrange desktop parts typical \nfor AMD.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Mon, 27 Sep 2010 12:53:26 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory speed testing"
}
] |
[
{
"msg_contents": "Hello,\n\nI have received some help from the IRC channel, however, the problem still exists. When running the following query with enable_seqscan set to 0, it takes less than a second, whereas with it set to 1, the query returns in 14 seconds. The machine itself has 8GB Ram and is running PostgreSQL 9.0 on Debian Lenny. The database size is about 7GB. \n\n\nQuery:\nSELECT tr.id, tr.sid\n FROM\n test_registration tr,\n INNER JOIN test_registration_result r on (tr.id = r.test_registration_id)\n WHERE.\n tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n GROUP BY tr.id, tr.sid\n\n\n\ndemo=# \\d test_registration\n Table \"public.test_registration\"\n Column | Type | Modifiers \n------------------------+-----------------------------+------------------------\n id | uuid | not null\n sid | character varying(36) | not null\n created_date | timestamp without time zone | not null default now()\n modified_date | timestamp without time zone | not null\n test_administration_id | uuid | not null\n teacher_number | character varying(15) | \n test_version_id | uuid | \nIndexes:\n \"test_registration_pkey\" PRIMARY KEY, btree (id)\n \"test_registration_sid_key\" UNIQUE, btree (sid, test_administration_id)\n \"test_registration_teacher\" btree (teacher_number)\n \"test_registration_test_id\" btree (test_administration_id)\n\ndemo=# \\d test_registration_result\n Table \"public.test_registration_result\"\n Column | Type | Modifiers \n----------------------+-----------------------+-----------\n answer | character varying(15) | \n question_id | uuid | not null\n score | double precision | \n test_registration_id | uuid | not null\nIndexes:\n \"test_registration_result_pkey\" PRIMARY KEY, btree (question_id, test_registration_id)\n \"test_registration_result_answer\" btree (test_registration_id, answer, score)\n \"test_registration_result_test\" btree (test_registration_id)\n\n\nExplain Analyze:\n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=951169.97..951198.37 rows=2840 width=25) (actual time=14669.039..14669.843 rows=2972 loops=1)\n -> Hash Join (cost=2988.07..939924.85 rows=2249024 width=25) (actual time=551.464..14400.061 rows=638980 loops=1)\n Hash Cond: (r.test_registration_id = tr.id)\n -> Seq Scan on test_registration_result r (cost=0.00..681946.72 rows=37199972 width=16) (actual time=0.015..6073.101 rows=37198734 loops=1)\n -> Hash (cost=2952.57..2952.57 rows=2840 width=25) (actual time=2.516..2.516 rows=2972 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 160kB\n -> Bitmap Heap Scan on test_registration tr (cost=44.29..2952.57 rows=2840 width=25) (actual time=0.528..1.458 rows=2972 loops=1)\n Recheck Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n -> Bitmap Index Scan on test_registration_test_administration_id (cost=0.00..43.58 rows=2840 width=0) (actual time=0.507..0.507 rows=2972 loops=1)\n Index Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n Total runtime: 14670.337 ms\n(11 rows)\n\n\nreal\t0m14.698s\nuser\t0m0.000s\nsys\t0m0.008s\n\n\nWith \"set enable_seqscan=0;\"\n\n\nSET\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1225400.19..1225428.59 rows=2840 width=25) (actual 
time=748.397..749.160 rows=2972 loops=1)\n -> Nested Loop (cost=0.00..1214155.07 rows=2249024 width=25) (actual time=0.107..465.165 rows=638980 loops=1)\n -> Index Scan using test_registration_test_administration_id on test_registration tr (cost=0.00..4413.96 rows=2840 width=25) (actual time=0.050..1.610 rows=2972 loops=1)\n Index Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n -> Index Scan using test_registration_result_answer on test_registration_result r (cost=0.00..416.07 rows=792 width=16) (actual time=0.019..0.106 rows=215 loops=2972)\n Index Cond: (r.test_registration_id = tr.id)\n Total runtime: 749.745 ms\n(7 rows)\n\n\nreal\t0m0.759s\nuser\t0m0.008s\nsys\t0m0.000s\n\n\nThe following parameters are changed in postgresql.conf and I have routinely vacuum analyzed the tables and database:\n\nshared_buffers = 2048MB \nwork_mem = 8MB\nmaintenance_work_mem = 256MB \nwal_buffers = 640kB\nrandom_page_cost = 4.0 \neffective_cache_size = 7000MB\ndefault_statistics_target = 200 \n\n\nfree -m:\n total used free shared buffers cached\nMem: 8003 7849 153 0 25 7555\n-/+ buffers/cache: 268 7735\nSwap: 7640 0 7639\n\n\nAny help would be appreciated. Thank you very much. \n\nOgden\nHello,I have received some help from the IRC channel, however, the problem still exists. When running the following query with enable_seqscan set to 0, it takes less than a second, whereas with it set to 1, the query returns in 14 seconds. The machine itself has 8GB Ram and is running PostgreSQL 9.0 on Debian Lenny. The database size is about 7GB. Query:SELECT tr.id, tr.sid\n FROM\n test_registration tr,\n INNER JOIN test_registration_result r on (tr.id = r.test_registration_id)\n WHERE.\n tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n GROUP BY tr.id, tr.siddemo=# \\d test_registration\n Table \"public.test_registration\"\n Column | Type | Modifiers \n------------------------+-----------------------------+------------------------\n id | uuid | not null\n sid | character varying(36) | not null\n created_date | timestamp without time zone | not null default now()\n modified_date | timestamp without time zone | not null\n test_administration_id | uuid | not null\n teacher_number | character varying(15) | \n test_version_id | uuid | \nIndexes:\n \"test_registration_pkey\" PRIMARY KEY, btree (id)\n \"test_registration_sid_key\" UNIQUE, btree (sid, test_administration_id)\n \"test_registration_teacher\" btree (teacher_number)\n \"test_registration_test_id\" btree (test_administration_id)\n\ndemo=# \\d test_registration_result\n Table \"public.test_registration_result\"\n Column | Type | Modifiers \n----------------------+-----------------------+-----------\n answer | character varying(15) | \n question_id | uuid | not null\n score | double precision | \n test_registration_id | uuid | not null\nIndexes:\n \"test_registration_result_pkey\" PRIMARY KEY, btree (question_id, test_registration_id)\n \"test_registration_result_answer\" btree (test_registration_id, answer, score)\n \"test_registration_result_test\" btree (test_registration_id)\n\nExplain Analyze:\n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=951169.97..951198.37 rows=2840 width=25) (actual time=14669.039..14669.843 rows=2972 loops=1)\n -> Hash Join (cost=2988.07..939924.85 rows=2249024 width=25) (actual time=551.464..14400.061 rows=638980 loops=1)\n Hash Cond: 
(r.test_registration_id = tr.id)\n -> Seq Scan on test_registration_result r (cost=0.00..681946.72 rows=37199972 width=16) (actual time=0.015..6073.101 rows=37198734 loops=1)\n -> Hash (cost=2952.57..2952.57 rows=2840 width=25) (actual time=2.516..2.516 rows=2972 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 160kB\n -> Bitmap Heap Scan on test_registration tr (cost=44.29..2952.57 rows=2840 width=25) (actual time=0.528..1.458 rows=2972 loops=1)\n Recheck Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n -> Bitmap Index Scan on test_registration_test_administration_id (cost=0.00..43.58 rows=2840 width=0) (actual time=0.507..0.507 rows=2972 loops=1)\n Index Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n Total runtime: 14670.337 ms\n(11 rows)\n\n\nreal\t0m14.698s\nuser\t0m0.000s\nsys\t0m0.008s\nWith \"set enable_seqscan=0;\"\n\n\nSET\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1225400.19..1225428.59 rows=2840 width=25) (actual time=748.397..749.160 rows=2972 loops=1)\n -> Nested Loop (cost=0.00..1214155.07 rows=2249024 width=25) (actual time=0.107..465.165 rows=638980 loops=1)\n -> Index Scan using test_registration_test_administration_id on test_registration tr (cost=0.00..4413.96 rows=2840 width=25) (actual time=0.050..1.610 rows=2972 loops=1)\n Index Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n -> Index Scan using test_registration_result_answer on test_registration_result r (cost=0.00..416.07 rows=792 width=16) (actual time=0.019..0.106 rows=215 loops=2972)\n Index Cond: (r.test_registration_id = tr.id)\n Total runtime: 749.745 ms\n(7 rows)\n\n\nreal\t0m0.759s\nuser\t0m0.008s\nsys\t0m0.000sThe following parameters are changed in postgresql.conf and I have routinely vacuum analyzed the tables and database:shared_buffers = 2048MB work_mem = 8MBmaintenance_work_mem = 256MB wal_buffers = 640kBrandom_page_cost = 4.0 effective_cache_size = 7000MBdefault_statistics_target = 200 free -m: total used free shared buffers cached\nMem: 8003 7849 153 0 25 7555\n-/+ buffers/cache: 268 7735\nSwap: 7640 0 7639\nAny help would be appreciated. Thank you very much. Ogden",
"msg_date": "Tue, 21 Sep 2010 12:32:01 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query much faster with enable_seqscan=0"
},
{
"msg_contents": "You DB is more than likely cached. You should adjust your\npage costs to better reflect reality and then the planner\ncan make more accurate estimates and then choose the proper\nplan.\n\nCheers,\nKen\n\nOn Tue, Sep 21, 2010 at 12:32:01PM -0500, Ogden wrote:\n> Hello,\n> \n> I have received some help from the IRC channel, however, the problem still exists. When running the following query with enable_seqscan set to 0, it takes less than a second, whereas with it set to 1, the query returns in 14 seconds. The machine itself has 8GB Ram and is running PostgreSQL 9.0 on Debian Lenny. The database size is about 7GB. \n> \n> \n> Query:\n> SELECT tr.id, tr.sid\n> FROM\n> test_registration tr,\n> INNER JOIN test_registration_result r on (tr.id = r.test_registration_id)\n> WHERE.\n> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n> GROUP BY tr.id, tr.sid\n> \n> \n> \n> demo=# \\d test_registration\n> Table \"public.test_registration\"\n> Column | Type | Modifiers \n> ------------------------+-----------------------------+------------------------\n> id | uuid | not null\n> sid | character varying(36) | not null\n> created_date | timestamp without time zone | not null default now()\n> modified_date | timestamp without time zone | not null\n> test_administration_id | uuid | not null\n> teacher_number | character varying(15) | \n> test_version_id | uuid | \n> Indexes:\n> \"test_registration_pkey\" PRIMARY KEY, btree (id)\n> \"test_registration_sid_key\" UNIQUE, btree (sid, test_administration_id)\n> \"test_registration_teacher\" btree (teacher_number)\n> \"test_registration_test_id\" btree (test_administration_id)\n> \n> demo=# \\d test_registration_result\n> Table \"public.test_registration_result\"\n> Column | Type | Modifiers \n> ----------------------+-----------------------+-----------\n> answer | character varying(15) | \n> question_id | uuid | not null\n> score | double precision | \n> test_registration_id | uuid | not null\n> Indexes:\n> \"test_registration_result_pkey\" PRIMARY KEY, btree (question_id, test_registration_id)\n> \"test_registration_result_answer\" btree (test_registration_id, answer, score)\n> \"test_registration_result_test\" btree (test_registration_id)\n> \n> \n> Explain Analyze:\n> \n> \n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=951169.97..951198.37 rows=2840 width=25) (actual time=14669.039..14669.843 rows=2972 loops=1)\n> -> Hash Join (cost=2988.07..939924.85 rows=2249024 width=25) (actual time=551.464..14400.061 rows=638980 loops=1)\n> Hash Cond: (r.test_registration_id = tr.id)\n> -> Seq Scan on test_registration_result r (cost=0.00..681946.72 rows=37199972 width=16) (actual time=0.015..6073.101 rows=37198734 loops=1)\n> -> Hash (cost=2952.57..2952.57 rows=2840 width=25) (actual time=2.516..2.516 rows=2972 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 160kB\n> -> Bitmap Heap Scan on test_registration tr (cost=44.29..2952.57 rows=2840 width=25) (actual time=0.528..1.458 rows=2972 loops=1)\n> Recheck Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n> -> Bitmap Index Scan on test_registration_test_administration_id (cost=0.00..43.58 rows=2840 width=0) (actual time=0.507..0.507 rows=2972 loops=1)\n> Index Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n> Total runtime: 14670.337 ms\n> (11 rows)\n> \n> \n> 
real\t0m14.698s\n> user\t0m0.000s\n> sys\t0m0.008s\n> \n> \n> With \"set enable_seqscan=0;\"\n> \n> \n> SET\n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=1225400.19..1225428.59 rows=2840 width=25) (actual time=748.397..749.160 rows=2972 loops=1)\n> -> Nested Loop (cost=0.00..1214155.07 rows=2249024 width=25) (actual time=0.107..465.165 rows=638980 loops=1)\n> -> Index Scan using test_registration_test_administration_id on test_registration tr (cost=0.00..4413.96 rows=2840 width=25) (actual time=0.050..1.610 rows=2972 loops=1)\n> Index Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n> -> Index Scan using test_registration_result_answer on test_registration_result r (cost=0.00..416.07 rows=792 width=16) (actual time=0.019..0.106 rows=215 loops=2972)\n> Index Cond: (r.test_registration_id = tr.id)\n> Total runtime: 749.745 ms\n> (7 rows)\n> \n> \n> real\t0m0.759s\n> user\t0m0.008s\n> sys\t0m0.000s\n> \n> \n> The following parameters are changed in postgresql.conf and I have routinely vacuum analyzed the tables and database:\n> \n> shared_buffers = 2048MB \n> work_mem = 8MB\n> maintenance_work_mem = 256MB \n> wal_buffers = 640kB\n> random_page_cost = 4.0 \n> effective_cache_size = 7000MB\n> default_statistics_target = 200 \n> \n> \n> free -m:\n> total used free shared buffers cached\n> Mem: 8003 7849 153 0 25 7555\n> -/+ buffers/cache: 268 7735\n> Swap: 7640 0 7639\n> \n> \n> Any help would be appreciated. Thank you very much. \n> \n> Ogden\n",
"msg_date": "Tue, 21 Sep 2010 13:06:18 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "I assume you mean random_page_cost? It is currently set to 4.0 - is it better to increase or decrease this value?\n\nThank you \n\nOgden\n\n\nOn Sep 21, 2010, at 1:06 PM, Kenneth Marshall wrote:\n\n> You DB is more than likely cached. You should adjust your\n> page costs to better reflect reality and then the planner\n> can make more accurate estimates and then choose the proper\n> plan.\n> \n> Cheers,\n> Ken\n> \n> On Tue, Sep 21, 2010 at 12:32:01PM -0500, Ogden wrote:\n>> Hello,\n>> \n>> I have received some help from the IRC channel, however, the problem still exists. When running the following query with enable_seqscan set to 0, it takes less than a second, whereas with it set to 1, the query returns in 14 seconds. The machine itself has 8GB Ram and is running PostgreSQL 9.0 on Debian Lenny. The database size is about 7GB. \n>> \n>> \n>> Query:\n>> SELECT tr.id, tr.sid\n>> FROM\n>> test_registration tr,\n>> INNER JOIN test_registration_result r on (tr.id = r.test_registration_id)\n>> WHERE.\n>> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n>> GROUP BY tr.id, tr.sid\n>> \n>> \n>> \n>> demo=# \\d test_registration\n>> Table \"public.test_registration\"\n>> Column | Type | Modifiers \n>> ------------------------+-----------------------------+------------------------\n>> id | uuid | not null\n>> sid | character varying(36) | not null\n>> created_date | timestamp without time zone | not null default now()\n>> modified_date | timestamp without time zone | not null\n>> test_administration_id | uuid | not null\n>> teacher_number | character varying(15) | \n>> test_version_id | uuid | \n>> Indexes:\n>> \"test_registration_pkey\" PRIMARY KEY, btree (id)\n>> \"test_registration_sid_key\" UNIQUE, btree (sid, test_administration_id)\n>> \"test_registration_teacher\" btree (teacher_number)\n>> \"test_registration_test_id\" btree (test_administration_id)\n>> \n>> demo=# \\d test_registration_result\n>> Table \"public.test_registration_result\"\n>> Column | Type | Modifiers \n>> ----------------------+-----------------------+-----------\n>> answer | character varying(15) | \n>> question_id | uuid | not null\n>> score | double precision | \n>> test_registration_id | uuid | not null\n>> Indexes:\n>> \"test_registration_result_pkey\" PRIMARY KEY, btree (question_id, test_registration_id)\n>> \"test_registration_result_answer\" btree (test_registration_id, answer, score)\n>> \"test_registration_result_test\" btree (test_registration_id)\n>> \n>> \n>> Explain Analyze:\n>> \n>> \n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> HashAggregate (cost=951169.97..951198.37 rows=2840 width=25) (actual time=14669.039..14669.843 rows=2972 loops=1)\n>> -> Hash Join (cost=2988.07..939924.85 rows=2249024 width=25) (actual time=551.464..14400.061 rows=638980 loops=1)\n>> Hash Cond: (r.test_registration_id = tr.id)\n>> -> Seq Scan on test_registration_result r (cost=0.00..681946.72 rows=37199972 width=16) (actual time=0.015..6073.101 rows=37198734 loops=1)\n>> -> Hash (cost=2952.57..2952.57 rows=2840 width=25) (actual time=2.516..2.516 rows=2972 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 160kB\n>> -> Bitmap Heap Scan on test_registration tr (cost=44.29..2952.57 rows=2840 width=25) (actual time=0.528..1.458 rows=2972 loops=1)\n>> Recheck Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n>> -> Bitmap Index Scan on 
test_registration_test_administration_id (cost=0.00..43.58 rows=2840 width=0) (actual time=0.507..0.507 rows=2972 loops=1)\n>> Index Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n>> Total runtime: 14670.337 ms\n>> (11 rows)\n>> \n>> \n>> real\t0m14.698s\n>> user\t0m0.000s\n>> sys\t0m0.008s\n>> \n>> \n>> With \"set enable_seqscan=0;\"\n>> \n>> \n>> SET\n>> QUERY PLAN \n>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> HashAggregate (cost=1225400.19..1225428.59 rows=2840 width=25) (actual time=748.397..749.160 rows=2972 loops=1)\n>> -> Nested Loop (cost=0.00..1214155.07 rows=2249024 width=25) (actual time=0.107..465.165 rows=638980 loops=1)\n>> -> Index Scan using test_registration_test_administration_id on test_registration tr (cost=0.00..4413.96 rows=2840 width=25) (actual time=0.050..1.610 rows=2972 loops=1)\n>> Index Cond: (test_administration_id = 'e26a165a-c19f-11df-be2f-778af560e5a2'::uuid)\n>> -> Index Scan using test_registration_result_answer on test_registration_result r (cost=0.00..416.07 rows=792 width=16) (actual time=0.019..0.106 rows=215 loops=2972)\n>> Index Cond: (r.test_registration_id = tr.id)\n>> Total runtime: 749.745 ms\n>> (7 rows)\n>> \n>> \n>> real\t0m0.759s\n>> user\t0m0.008s\n>> sys\t0m0.000s\n>> \n>> \n>> The following parameters are changed in postgresql.conf and I have routinely vacuum analyzed the tables and database:\n>> \n>> shared_buffers = 2048MB \n>> work_mem = 8MB\n>> maintenance_work_mem = 256MB \n>> wal_buffers = 640kB\n>> random_page_cost = 4.0 \n>> effective_cache_size = 7000MB\n>> default_statistics_target = 200 \n>> \n>> \n>> free -m:\n>> total used free shared buffers cached\n>> Mem: 8003 7849 153 0 25 7555\n>> -/+ buffers/cache: 268 7735\n>> Swap: 7640 0 7639\n>> \n>> \n>> Any help would be appreciated. Thank you very much. \n>> \n>> Ogden\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 21 Sep 2010 13:21:52 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "On 2010-09-21 20:21, Ogden wrote:\n> I assume you mean random_page_cost? It is currently set to 4.0 - is it better to increase or decrease this value?\n> \n\nShould be lowered to a bit over seq_page_cost.. and more importantly.. \nyou should\nmake sure that you have updated your statistics .. run \"ANALYZE\";\n\n-- \nJesper\n",
"msg_date": "Tue, 21 Sep 2010 20:51:14 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "How odd, I set the following:\n\nseq_page_cost = 1.0 \nrandom_page_cost = 2.0\n\nAnd now the query runs in milliseconds as opposed to 14 seconds. Could this really be the change? I am running ANALYZE now - how often is it recommended to do this?\n\nThank you\n\nOgden\n\n\nOn Sep 21, 2010, at 1:51 PM, Jesper Krogh wrote:\n\n> On 2010-09-21 20:21, Ogden wrote:\n>> I assume you mean random_page_cost? It is currently set to 4.0 - is it better to increase or decrease this value?\n>> \n> \n> Should be lowered to a bit over seq_page_cost.. and more importantly.. you should\n> make sure that you have updated your statistics .. run \"ANALYZE\";\n> \n> -- \n> Jesper\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 21 Sep 2010 14:02:11 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "On Tue, 2010-09-21 at 14:02 -0500, Ogden wrote:\n> How odd, I set the following:\n> \n> seq_page_cost = 1.0 \n> random_page_cost = 2.0\n> \n> And now the query runs in milliseconds as opposed to 14 seconds. Could this really be the change? I am running ANALYZE now - how often is it recommended to do this?\n\nPostgreSQL's defaults are based on extremely small and some would say\n(non production) size databases. As a matter of course I always\nrecommend bringing seq_page_cost and random_page_cost more in line.\n\nHowever, you may want to try moving random_page_cost back to 4 and try\nincreasing cpu_tuple_cost instead.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Tue, 21 Sep 2010 12:07:11 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> PostgreSQL's defaults are based on extremely small and some would say\n> (non production) size databases. As a matter of course I always\n> recommend bringing seq_page_cost and random_page_cost more in line.\n> \n\nAlso, they presume that not all of your data is going to be in memory, \nand the query optimizer needs to be careful about what it does and \ndoesn't pull from disk. If that's not the case, like here where there's \n8GB of RAM and a 7GB database, dramatic reductions to both seq_page_cost \nand random_page_cost can make sense. Don't be afraid to think lowering \nbelow 1.0 is going too far--something more like 0.01 for sequential and \n0.02 for random may actually reflect reality here.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Tue, 21 Sep 2010 15:16:57 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "\nOn Sep 21, 2010, at 2:16 PM, Greg Smith wrote:\n\n> Joshua D. Drake wrote:\n>> PostgreSQL's defaults are based on extremely small and some would say\n>> (non production) size databases. As a matter of course I always\n>> recommend bringing seq_page_cost and random_page_cost more in line.\n>> \n> \n> Also, they presume that not all of your data is going to be in memory, and the query optimizer needs to be careful about what it does and doesn't pull from disk. If that's not the case, like here where there's 8GB of RAM and a 7GB database, dramatic reductions to both seq_page_cost and random_page_cost can make sense. Don't be afraid to think lowering below 1.0 is going too far--something more like 0.01 for sequential and 0.02 for random may actually reflect reality here.\n> \n\nI have done just that, per your recommendations and now what took 14 seconds, only takes less than a second, so it was certainly these figures I messed around with. I have set:\n\nseq_page_cost = 0.01 \nrandom_page_cost = 0.02 \ncpu_tuple_cost = 0.01\n\nEverything seems to run faster now. I think this should be fine - I'll keep an eye on things over the next few days. \n\nI truly appreciate everyone's help. \n\nOgden\n\n",
"msg_date": "Tue, 21 Sep 2010 14:34:42 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> and the query optimizer needs to be careful about what it does and \n> doesn't pull from disk. If that's not the case, like here where there's \n> 8GB of RAM and a 7GB database, dramatic reductions to both seq_page_cost \n> and random_page_cost can make sense. Don't be afraid to think lowering \n> below 1.0 is going too far--something more like 0.01 for sequential and \n> 0.02 for random may actually reflect reality here.\n\nIf you are tuning for an all-in-RAM situation, you should set\nrandom_page_cost equal to seq_page_cost (and usually set both smaller\nthan 1). By definition, those costs are equal if you're fetching from\nRAM. If it's only mostly-in-RAM then keeping random_page_cost a bit\nhigher makes sense.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 2010 19:24:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0 "
},
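A minimal way to trial the cost settings discussed above is per-session, before touching postgresql.conf; the 0.1 values below are only illustrative, and per Tom's note the two costs are set equal for a fully cached database:

    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;  -- equal to seq_page_cost when everything fits in RAM
    EXPLAIN ANALYZE
    SELECT tr.id, tr.sid
      FROM test_registration tr
      JOIN test_registration_result r ON tr.id = r.test_registration_id
     WHERE tr.test_administration_id = '32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid
     GROUP BY tr.id, tr.sid;
    RESET seq_page_cost;
    RESET random_page_cost;

Only once the session-level values give consistently better plans do they belong in postgresql.conf.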
{
"msg_contents": "Ogden <[email protected]> writes:\n> SELECT tr.id, tr.sid\n> FROM\n> test_registration tr,\n> INNER JOIN test_registration_result r on (tr.id = r.test_registration_id)\n> WHERE.\n> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n> GROUP BY tr.id, tr.sid\n\nSeeing that tr.id is a primary key, I think you might be a lot better\noff if you avoided the inner join and group by. I think what you really\nwant here is something like\n\nSELECT tr.id, tr.sid\n FROM\n test_registration tr\n WHERE\n tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n AND EXISTS(SELECT 1 FROM test_registration_result r\n WHERE tr.id = r.test_registration_id)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Sep 2010 19:30:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0 "
},
{
"msg_contents": "\nOn Sep 21, 2010, at 2:34 PM, Ogden wrote:\n\n> \n> On Sep 21, 2010, at 2:16 PM, Greg Smith wrote:\n> \n>> Joshua D. Drake wrote:\n>>> PostgreSQL's defaults are based on extremely small and some would say\n>>> (non production) size databases. As a matter of course I always\n>>> recommend bringing seq_page_cost and random_page_cost more in line.\n>>> \n>> \n>> Also, they presume that not all of your data is going to be in memory, and the query optimizer needs to be careful about what it does and doesn't pull from disk. If that's not the case, like here where there's 8GB of RAM and a 7GB database, dramatic reductions to both seq_page_cost and random_page_cost can make sense. Don't be afraid to think lowering below 1.0 is going too far--something more like 0.01 for sequential and 0.02 for random may actually reflect reality here.\n>> \n> \n> I have done just that, per your recommendations and now what took 14 seconds, only takes less than a second, so it was certainly these figures I messed around with. I have set:\n> \n> seq_page_cost = 0.01 \n> random_page_cost = 0.02 \n> cpu_tuple_cost = 0.01\n> \n> Everything seems to run faster now. I think this should be fine - I'll keep an eye on things over the next few days. \n> \n> I truly appreciate everyone's help. \n> \n> Ogden\n> \n\n\nI spoke too soon - well I came in this morning and reran the query that was speeded up yesterday by a lot after tweaking those numbers. This morning the first time I ran it, it took 16 seconds whereas every subsequent run was a matter of 2 seconds. I assume there is OS caching going on for those results. Is this normal or could it also be the speed of my disks which is causing a lag when I first run it (it's RAID 5 across 6 disks). Is there any explanation for this or what should those settings really be? Perhaps 0.01 is too low?\n\nThank you\n\nOgden",
"msg_date": "Wed, 22 Sep 2010 08:36:43 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "\nOn Sep 22, 2010, at 6:36 AM, Ogden wrote:\n\n> \n> On Sep 21, 2010, at 2:34 PM, Ogden wrote:\n> \n>> \n>> On Sep 21, 2010, at 2:16 PM, Greg Smith wrote:\n>> \n>>> Joshua D. Drake wrote:\n>>>> PostgreSQL's defaults are based on extremely small and some would say\n>>>> (non production) size databases. As a matter of course I always\n>>>> recommend bringing seq_page_cost and random_page_cost more in line.\n>>>> \n>>> \n>>> Also, they presume that not all of your data is going to be in memory, and the query optimizer needs to be careful about what it does and doesn't pull from disk. If that's not the case, like here where there's 8GB of RAM and a 7GB database, dramatic reductions to both seq_page_cost and random_page_cost can make sense. Don't be afraid to think lowering below 1.0 is going too far--something more like 0.01 for sequential and 0.02 for random may actually reflect reality here.\n>>> \n>> \n>> I have done just that, per your recommendations and now what took 14 seconds, only takes less than a second, so it was certainly these figures I messed around with. I have set:\n>> \n>> seq_page_cost = 0.01 \n>> random_page_cost = 0.02 \n>> cpu_tuple_cost = 0.01\n>> \n>> Everything seems to run faster now. I think this should be fine - I'll keep an eye on things over the next few days. \n>> \n>> I truly appreciate everyone's help. \n>> \n>> Ogden\n>> \n> \n> \n> I spoke too soon - well I came in this morning and reran the query that was speeded up yesterday by a lot after tweaking those numbers. This morning the first time I ran it, it took 16 seconds whereas every subsequent run was a matter of 2 seconds. I assume there is OS caching going on for those results. Is this normal or could it also be the speed of my disks which is causing a lag when I first run it (it's RAID 5 across 6 disks). Is there any explanation for this or what should those settings really be? Perhaps 0.01 is too low?\n> \n> Thank you\n> \n> Ogden\n\nWhen not cached, the plan with sequential scans will almost always be much faster.\n\nWhen cached in memory, the ones using indexes are almost always faster. \n\nThe tuning parameters are essentially telling postgres the likelihood of finding things on disk instead versus in memory. The default parameters are essentially \"not likely in memory, with a somewhat slow disk\".\n\n\n\n\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 24 Sep 2010 10:34:41 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "On Wed, Sep 22, 2010 at 9:36 AM, Ogden <[email protected]> wrote:\n>\n> On Sep 21, 2010, at 2:34 PM, Ogden wrote:\n>\n>>\n>> On Sep 21, 2010, at 2:16 PM, Greg Smith wrote:\n>>\n>>> Joshua D. Drake wrote:\n>>>> PostgreSQL's defaults are based on extremely small and some would say\n>>>> (non production) size databases. As a matter of course I always\n>>>> recommend bringing seq_page_cost and random_page_cost more in line.\n>>>>\n>>>\n>>> Also, they presume that not all of your data is going to be in memory, and the query optimizer needs to be careful about what it does and doesn't pull from disk. If that's not the case, like here where there's 8GB of RAM and a 7GB database, dramatic reductions to both seq_page_cost and random_page_cost can make sense. Don't be afraid to think lowering below 1.0 is going too far--something more like 0.01 for sequential and 0.02 for random may actually reflect reality here.\n>>>\n>>\n>> I have done just that, per your recommendations and now what took 14 seconds, only takes less than a second, so it was certainly these figures I messed around with. I have set:\n>>\n>> seq_page_cost = 0.01\n>> random_page_cost = 0.02\n>> cpu_tuple_cost = 0.01\n>>\n>> Everything seems to run faster now. I think this should be fine - I'll keep an eye on things over the next few days.\n>>\n>> I truly appreciate everyone's help.\n>>\n>> Ogden\n>>\n>\n>\n> I spoke too soon - well I came in this morning and reran the query that was speeded up yesterday by a lot after tweaking those numbers. This morning the first time I ran it, it took 16 seconds whereas every subsequent run was a matter of 2 seconds. I assume there is OS caching going on for those results. Is this normal or could it also be the speed of my disks which is causing a lag when I first run it (it's RAID 5 across 6 disks). Is there any explanation for this or what should those settings really be? Perhaps 0.01 is too low?\n\nYeah, I think those numbers are a bit low. Your database probably\nisn't fully cached. Keep in mind there's going to be some fluctuation\nas to what is and is not in cache, and you can't expect whatever plan\nthe planner picks to be exactly perfect for both cases. I might try\nsomething more like 0.2 / 0.1. If you really need the query to be\nfast, though, you might need to do more than jigger the page costs.\nDid you try Tom's suggested rewrite?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Tue, 28 Sep 2010 12:03:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "\nOn Sep 21, 2010, at 6:30 PM, Tom Lane wrote:\n\n> Ogden <[email protected]> writes:\n>> SELECT tr.id, tr.sid\n>> FROM\n>> test_registration tr,\n>> INNER JOIN test_registration_result r on (tr.id = r.test_registration_id)\n>> WHERE.\n>> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n>> GROUP BY tr.id, tr.sid\n> \n> Seeing that tr.id is a primary key, I think you might be a lot better\n> off if you avoided the inner join and group by. I think what you really\n> want here is something like\n> \n> SELECT tr.id, tr.sid\n> FROM\n> test_registration tr\n> WHERE\n> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n> AND EXISTS(SELECT 1 FROM test_registration_result r\n> WHERE tr.id = r.test_registration_id)\n> \n> \t\t\tregards, tom lane\n> \n\nThank you for this suggestion, however, what if I wanted some columns from test_registration_result - this wouldn't work, for example if I wanted test_registration_result.answer to be fetched. Hence, I had to have a JOIN with test_registration_result and a GROUP BY. I still am not happy with my query - the EXISTS executes in great speed however I cannot retrieve any of the columns from that table. \n\nThank you\n\nOgden\n\n",
"msg_date": "Tue, 12 Oct 2010 19:23:24 -0500",
"msg_from": "Ogden <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query much faster with enable_seqscan=0 "
},
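One way to address Ogden's follow-up is to keep the EXISTS filter and pull individual columns through a correlated subselect; this is only a sketch, and it assumes a single representative answer per registration is acceptable (the LIMIT 1 choice is illustrative, not from the thread):

    SELECT tr.id, tr.sid,
           (SELECT r.answer
              FROM test_registration_result r
             WHERE r.test_registration_id = tr.id
             LIMIT 1) AS answer
      FROM test_registration tr
     WHERE tr.test_administration_id = '32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid
       AND EXISTS (SELECT 1
                     FROM test_registration_result r
                    WHERE r.test_registration_id = tr.id);

If every matching answer row is actually needed, the original join is the right shape and only the GROUP BY can be dropped.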
{
"msg_contents": "On Tue, Sep 21, 2010 at 4:30 PM, Tom Lane <[email protected]> wrote:\n\n> Ogden <[email protected]> writes:\n> > SELECT tr.id, tr.sid\n> > FROM\n> > test_registration tr,\n> > INNER JOIN test_registration_result r on (tr.id =\n> r.test_registration_id)\n> > WHERE.\n> >\n> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n> > GROUP BY tr.id, tr.sid\n>\n> Seeing that tr.id is a primary key, I think you might be a lot better\n> off if you avoided the inner join and group by. I think what you really\n> want here is something like\n>\n> SELECT tr.id, tr.sid\n> FROM\n> test_registration tr\n> WHERE\n>\n> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n> AND EXISTS(SELECT 1 FROM test_registration_result r\n> WHERE tr.id = r.test_registration_id)\n>\n> regards, tom lane\n>\n>\nCould you explain the logic behind why this structure is better than the\nother? Is it always the case that one should just always use the\n'exists(select 1 from x...)' structure when trying to strip rows that don't\njoin or is it just the case when you know that the rows which do join are a\nfairly limited subset? Does the same advantage exist if filtering rows in\nthe joined table on some criteria, or is it better at that point to use an\ninner join and add a where clause to filter the joined rows.\n\nselect table1.columns\nfrom table1, table2\nwhere table1.column = 'some_value'\n and table1.fk = table2.pk\n AND table2.column = 'some_other_value'\n\nversus\n\nselect table1.columns\n from table1\nwhere table1.column = 'some_value'\n and exists(select 1 from table2 where table1.fk = table2.pk\n and table2.column ='some_other_value')\n\nOn Tue, Sep 21, 2010 at 4:30 PM, Tom Lane <[email protected]> wrote:\nOgden <[email protected]> writes:\n> SELECT tr.id, tr.sid\n> FROM\n> test_registration tr,\n> INNER JOIN test_registration_result r on (tr.id = r.test_registration_id)\n> WHERE.\n> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n> GROUP BY tr.id, tr.sid\n\nSeeing that tr.id is a primary key, I think you might be a lot better\noff if you avoided the inner join and group by. I think what you really\nwant here is something like\n\nSELECT tr.id, tr.sid\n FROM\n test_registration tr\n WHERE\n tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n AND EXISTS(SELECT 1 FROM test_registration_result r\n WHERE tr.id = r.test_registration_id)\n\n regards, tom lane\nCould you explain the logic behind why this structure is better than the other? Is it always the case that one should just always use the 'exists(select 1 from x...)' structure when trying to strip rows that don't join or is it just the case when you know that the rows which do join are a fairly limited subset? Does the same advantage exist if filtering rows in the joined table on some criteria, or is it better at that point to use an inner join and add a where clause to filter the joined rows.\nselect table1.columnsfrom table1, table2where table1.column = 'some_value' and table1.fk = table2.pk\n AND table2.column = 'some_other_value'versusselect table1.columns from table1where table1.column = 'some_value' and exists(select 1 from table2 where table1.fk = table2.pk\n and table2.column ='some_other_value')",
"msg_date": "Tue, 12 Oct 2010 19:28:55 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
},
{
"msg_contents": "On Tue, Oct 12, 2010 at 10:28 PM, Samuel Gendler\n<[email protected]> wrote:\n>\n>\n> On Tue, Sep 21, 2010 at 4:30 PM, Tom Lane <[email protected]> wrote:\n>>\n>> Ogden <[email protected]> writes:\n>> > SELECT tr.id, tr.sid\n>> > FROM\n>> > test_registration tr,\n>> > INNER JOIN test_registration_result r on (tr.id =\n>> > r.test_registration_id)\n>> > WHERE.\n>> >\n>> > tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n>> > GROUP BY tr.id, tr.sid\n>>\n>> Seeing that tr.id is a primary key, I think you might be a lot better\n>> off if you avoided the inner join and group by. I think what you really\n>> want here is something like\n>>\n>> SELECT tr.id, tr.sid\n>> FROM\n>> test_registration tr\n>> WHERE\n>>\n>> tr.test_administration_id='32a22b12-aa21-11df-a606-96551e8f4e4c'::uuid\n>> AND EXISTS(SELECT 1 FROM test_registration_result r\n>> WHERE tr.id = r.test_registration_id)\n>>\n>> regards, tom lane\n>>\n>\n> Could you explain the logic behind why this structure is better than the\n> other? Is it always the case that one should just always use the\n> 'exists(select 1 from x...)' structure when trying to strip rows that don't\n> join or is it just the case when you know that the rows which do join are a\n> fairly limited subset? Does the same advantage exist if filtering rows in\n> the joined table on some criteria, or is it better at that point to use an\n> inner join and add a where clause to filter the joined rows.\n> select table1.columns\n> from table1, table2\n> where table1.column = 'some_value'\n> and table1.fk = table2.pk\n> AND table2.column = 'some_other_value'\n> versus\n> select table1.columns\n> from table1\n> where table1.column = 'some_value'\n> and exists(select 1 from table2 where table1.fk = table2.pk\n> and table2.column ='some_other_value')\n\nI don't think there's much difference between those two cases. I\nthink Tom's point was that GROUP BY can be expensive - which it\ncertainly can. It's absolutely necessary and unavoidable for certain\nqueries, of course, but don't include it unless you need it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 28 Oct 2010 12:21:01 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query much faster with enable_seqscan=0"
}
] |
[
{
"msg_contents": "I can't tell if you meant for this to be insulting or my reading it that way is wrong, but it certainly wasn't put in a helpful tone. Let me summarize for you. You've been told that putting ORDER BY into a view is a generally poor idea anyway, that it's better to find ways avoid this class of concern altogether. There are significant non-obvious technical challenges behind actually implementing the behavior you'd like to see; the concerns raised by Tom and Maciek make your idea impractical even if it were desired. And for every person like yourself who'd see the benefit you're looking for, there are far more that would find a change in this area a major problem. The concerns around breakage due to assumed but not required aspects of the relational model are the ones the users of the software will be confused by, not the developers of it. You have the classification wrong; the feedback you've gotten here is from the developers being user oriented, not theory oriented or \n c!\node oriented.\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n\nNot insulting, just amused bemusement. PG portrays itself as the best OS database, which it may well be. But it does so by stressing the row-by-agonizing-row approach to data. In other words, as just a record paradigm filestore for COBOL/java/C coders. I was expecting more Relational oomph. As Dr. Codd says: \"A Relational Model of Data for Large Shared Data Banks\". Less code, more data.\n\nrobert\n",
"msg_date": "Thu, 23 Sep 2010 09:51:16 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Useless sort by"
},
{
"msg_contents": "On Thu, Sep 23, 2010 at 7:51 AM, <[email protected]> wrote:\n> Not insulting, just amused bemusement. PG portrays itself as the best OS database, which it may well be. But it does so by stressing the row-by-agonizing-row approach to data. In other words, as just a record paradigm filestore for COBOL/java/C coders. I was expecting more Relational oomph. As Dr. Codd says: \"A Relational Model of Data for Large Shared Data Banks\". Less code, more data.\n\nSo what, exactly, would give pgsql more relationally \"oomph\"?\n\nYour assertion feels pretty hand wavy right now.\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Thu, 23 Sep 2010 11:11:50 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Useless sort by"
}
] |
[
{
"msg_contents": "Hello!\n\nI have this table:\n\ncreate table test (\n\ts1 varchar(255),\n\ts2 varchar(255),\n\ti1 integer,\n\ti2 integer,\n\n... over 100 other fields\n\n);\n\ntable contains over 8 million records\n\nthere's these indexes:\n\ncreate index is1 on test (s1);\ncreate index is2 on test (s2);\ncreate index ii1 on test (i1);\ncreate index ii2 on test (i2);\ncreate index ii3 on test (i1, i2);\n\nand then i run this query:\n\nselect\n*\nfrom (\n\tselect *\n\tfrom test\n\twhere\n\t\tis1 = 'aa' or is2 = 'aa'\t\t\n\t)\nwhere\n\tis1 = 1\n\tor (is1 = 1\n\t\tand is2 = 1)\n\tor (is1 = 2\n\t\tand is2 = 2)\n\tor (is1 = 3\n\t\tand is2 = 3)\n\n\nwhere part of outer query can have different count of\n\t\"or (is1 = N\n\t\tand is2 = M)\"\nexpressions, lets name this number X.\n\nWhen X is low planner chooses index scan using is1 and is2,\nthen BitmapAnd that with index scan using ii1, ii2 or ii3.\n\nBut when X is big enough (> 15) planner chooses seqscan and filter on\ni1, i2, s1, s2.\nSeqscan is very slow and I want to avoid it. Subquery is very fast\nand i don't know why postgres chooses that plan.\n\nI know I can set enable_seqscan = off.\nIs there other ways to enforce index usage?\n\npostgres pg_class have right estimate of rowcount.\n\n-- \nA: Because it messes up the order in which people normally read text.\nQ: Why is top-posting such a bad thing?\nA: Top-posting.\nQ: What is the most annoying thing in e-mail?\n",
"msg_date": "Thu, 23 Sep 2010 18:26:17 +0400",
"msg_from": "Dmitry Teslenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to enforce index sub-select over filter+seqscan"
},
{
"msg_contents": "Dmitry Teslenko <[email protected]> wrote:\n \n> Seqscan is very slow and I want to avoid it. Subquery is very fast\n> and i don't know why postgres chooses that plan.\n> \n> I know I can set enable_seqscan = off.\n> Is there other ways to enforce index usage?\n \nIf you come at it from that angle, you probably won't get the best\nresolution. PostgreSQL can see the alternative plans, and develops\nestimated costs of running each. It uses the one that it thinks\nwill be fastest. If it's wrong, there's probably something wrong\nwith the statistics it uses for estimating, or with the costing\ninformation. (There are some cases where it's not able to\naccurately estimate costs even if these are right, but let's check\nthe more common cases first.)\n \nPlease provide a little more information, like PostgreSQL version,\nthe postgresql.conf contents (excluding comments), OS, hardware, and\nthe EXPLAIN ANALYZE output of the query with and without\nenable_seqscan = off.\n \nOther useful ideas here:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Thu, 23 Sep 2010 10:43:35 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to enforce index sub-select over\n\t filter+seqscan"
},
{
"msg_contents": "On Thu, Sep 23, 2010 at 10:26 AM, Dmitry Teslenko <[email protected]> wrote:\n> Hello!\n>\n> I have this table:\n>\n> create table test (\n> s1 varchar(255),\n> s2 varchar(255),\n> i1 integer,\n> i2 integer,\n>\n> ... over 100 other fields\n>\n> );\n>\n> table contains over 8 million records\n>\n> there's these indexes:\n>\n> create index is1 on test (s1);\n> create index is2 on test (s2);\n> create index ii1 on test (i1);\n> create index ii2 on test (i2);\n> create index ii3 on test (i1, i2);\n>\n> and then i run this query:\n>\n> select\n> *\n> from (\n> select *\n> from test\n> where\n> is1 = 'aa' or is2 = 'aa'\n> )\n> where\n> is1 = 1\n> or (is1 = 1\n> and is2 = 1)\n> or (is1 = 2\n> and is2 = 2)\n> or (is1 = 3\n> and is2 = 3)\n\nhm, I think you meant to say:\ns1 = 'aa' or s2 = 'aa', i1 = 1 ... etc. details are important!\n\nConsider taking the combination of 'correct' pair of i1 and i2 and\nbuilding a table with 'values' and joining to that:\n\nselect * from test\n join\n (\n values (2,2), (3,3), ...\n ) q(i1, i2) using(i1,i2)\n where s1 = 'aa' or s2 = 'aa' or i1=1\n\nmerlin\n",
"msg_date": "Thu, 23 Sep 2010 15:41:43 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to enforce index sub-select over filter+seqscan"
},
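A variant of Merlin's VALUES idea, written with a row-wise IN so the standalone i1 = 1 branch keeps its original meaning; the column names follow Dmitry's description and the exact pairs are illustrative:

    SELECT *
      FROM test
     WHERE (s1 = 'aa' OR s2 = 'aa')
       AND (i1 = 1
            OR (i1, i2) IN (VALUES (1, 1), (2, 2), (3, 3)));

Whether the planner then prefers the index path still depends on its row estimates, so the cost settings raised in the next reply remain relevant.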
{
"msg_contents": "On Thu, Sep 23, 2010 at 10:26 AM, Dmitry Teslenko <[email protected]> wrote:\n> I know I can set enable_seqscan = off.\n> Is there other ways to enforce index usage?\n\nNot really, but I suspect random_page_cost and seq_page_cost might\nhelp the planner make better decisions. Is your data by any chance\nmostly cached in memory?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Mon, 27 Sep 2010 13:30:03 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to enforce index sub-select over filter+seqscan"
}
] |
[
{
"msg_contents": "We've come to a tipping point with one of our database servers, it's\ngenerally quite loaded but up until recently it was handling the load\nwell - but now we're seeing that it struggles to process all the\nselects fast enough. Sometimes we're observing some weird lock-like\nbehaviour (see my other post on that), but most of the time the\ndatabase server is just not capable of handling the load fast enough\n(causing the queries to pile up in the pg_stat_activity-view).\n\nMy main hypothesis is that all the important indexes would fit snuggly\ninto the memory before, and now they don't. We'll eventually get the\nserver moved over to new and improved hardware, but while waiting for\nthat to happen we need to do focus on reducing the memory footprint of\nthe database. I have some general questions now ...\n\n1) Are there any good ways to verify my hypothesis? Some months ago I\nthought of running some small memory-gobbling program on the database\nserver just to see how much memory I could remove before we would see\nindications of the database being overloaded. It seems a bit radical,\nbut I think the information learned from such an experiment would be\nvery useful ... and we never managed to set up any testing environment\nthat faithfully replicates production traffic. Anyway, it's sort of\ntoo late now that we're already observing performance problems even\nwithout the memory gobbling script running.\n\n2) I've seen it discussed earlier on this list ... shared_buffers vs\nOS caches. Some claims that it has very little effect to adjust the\nsize of the shared buffers. Anyway, isn't it a risk that memory is\nwasted because important data is stored both in the OS cache and the\nshared buffers? What would happen if using almost all the available\nmemory for shared buffers? Or turn it down to a bare minimum and let\nthe OS do almost all the cache handling?\n\n3) We're discussing to drop some overlapping indexes ... i.e. to drop\none out of two indexes looking like this:\n\nsome_table(a)\nsome_table(a,b)\n\nWould the query \"select * from some_table where a=?\" run slower if we\ndrop the first index? Significantly?\n\n(in our situation I found that the number of distinct b's for each a\nis low and that the usage stats on the second index is quite low\ncompared with the first one, so I think we'll drop the second index).\n\n4) We're discussing to drop other indexes. Does it make sense at all\nas long as we're not experiencing problems with inserts/updates? I\nsuppose that if the index isn't used it will remain on disk and won't\naffect the memory usage ... but what if the index is rarely used ...\nwouldn't it be better to do a seqscan on a table that is frequently\naccessed and mostly in memory than to consult an index that is stored\non the disk?\n\nSorry for all the stupid questions ;-)\n",
"msg_date": "Thu, 23 Sep 2010 23:50:38 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory usage - indexes"
},
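For questions 3 and 4, a quick starting point is to see how large each index is and how often it is actually scanned; this sketch uses only the standard statistics views and size functions:

    SELECT relname       AS table_name,
           indexrelname  AS index_name,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
           idx_scan
      FROM pg_stat_user_indexes
     ORDER BY pg_relation_size(indexrelid) DESC
     LIMIT 20;

Indexes that are both large and rarely scanned are the natural candidates to drop first.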
{
"msg_contents": "On 24/09/10 09:50, Tobias Brox wrote:\n> We've come to a tipping point with one of our database servers, it's\n> generally quite loaded but up until recently it was handling the load\n> well - but now we're seeing that it struggles to process all the\n> selects fast enough. Sometimes we're observing some weird lock-like\n> behaviour (see my other post on that), but most of the time the\n> database server is just not capable of handling the load fast enough\n> (causing the queries to pile up in the pg_stat_activity-view).\n>\n> My main hypothesis is that all the important indexes would fit snuggly\n> into the memory before, and now they don't. We'll eventually get the\n> server moved over to new and improved hardware, but while waiting for\n> that to happen we need to do focus on reducing the memory footprint of\n> the database. I have some general questions now ...\n>\n> 1) Are there any good ways to verify my hypothesis? Some months ago I\n> thought of running some small memory-gobbling program on the database\n> server just to see how much memory I could remove before we would see\n> indications of the database being overloaded. It seems a bit radical,\n> but I think the information learned from such an experiment would be\n> very useful ... and we never managed to set up any testing environment\n> that faithfully replicates production traffic. Anyway, it's sort of\n> too late now that we're already observing performance problems even\n> without the memory gobbling script running.\n>\n> 2) I've seen it discussed earlier on this list ... shared_buffers vs\n> OS caches. Some claims that it has very little effect to adjust the\n> size of the shared buffers. Anyway, isn't it a risk that memory is\n> wasted because important data is stored both in the OS cache and the\n> shared buffers? What would happen if using almost all the available\n> memory for shared buffers? Or turn it down to a bare minimum and let\n> the OS do almost all the cache handling?\n>\n> 3) We're discussing to drop some overlapping indexes ... i.e. to drop\n> one out of two indexes looking like this:\n>\n> some_table(a)\n> some_table(a,b)\n>\n> Would the query \"select * from some_table where a=?\" run slower if we\n> drop the first index? Significantly?\n>\n> (in our situation I found that the number of distinct b's for each a\n> is low and that the usage stats on the second index is quite low\n> compared with the first one, so I think we'll drop the second index).\n>\n> 4) We're discussing to drop other indexes. Does it make sense at all\n> as long as we're not experiencing problems with inserts/updates? I\n> suppose that if the index isn't used it will remain on disk and won't\n> affect the memory usage ... but what if the index is rarely used ...\n> wouldn't it be better to do a seqscan on a table that is frequently\n> accessed and mostly in memory than to consult an index that is stored\n> on the disk?\n>\n> Sorry for all the stupid questions ;-)\n>\n> \n\n\nAll good questions! 
Before (or maybe as well as) looking at index sizes \nvs memory I'd check to see if any of your commonly run queries have \nsuddenly started to use different plans due to data growth, e.g:\n\n- index scan to seq scan (perhaps because effective_cache_size is too \nsmall now)\n- hash agg to sort (work_mem too small now)\n\nWe had a case of the 1st point happen here a while ago, symptoms looked \nvery like what you are describing.\n\nRe index size, you could try indexes like:\n\nsome_table(a)\nsome_table(b)\n\nwhich may occupy less space, and the optimizer can bitmap and/or them to \nwork like the compound index some_table(a,b).\n\nregards\n\nMark\n",
"msg_date": "Fri, 24 Sep 2010 10:12:53 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
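One cheap way to notice the index-scan-to-seq-scan flips Mark describes is to watch the scan counters in pg_stat_user_tables; a sudden jump in seq_scan or seq_tup_read on a large table usually means a plan has changed. A sketch (standard view, no assumptions about the schema):

    SELECT relname, seq_scan, seq_tup_read, idx_scan,
           pg_size_pretty(pg_relation_size(relid)) AS table_size
      FROM pg_stat_user_tables
     ORDER BY seq_tup_read DESC
     LIMIT 20;

For individual queries, EXPLAIN ANALYZE before and after the data growth remains the definitive check.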
{
"msg_contents": "Tobias Brox <[email protected]> wrote:\n \n> Sorry for all the stupid questions ;-)\n \nI'm with Mark -- I didn't see nay stupid questions there.\n \nWhere I would start, though, is by checking the level of bloat. One\nlong-running query under load, or one query which updates or deletes\na large number of rows, can put you into this state. If you find\nserious bloat you may need to schedule a maintenance window for\naggressive work (like CLUSTER) to fix it.\n \nEven before that, however, I would spend some time looking at the\npatterns of I/O under `vmstat 1` or `iostat 1` to get a sense of\nwhere the bottlenecks are. If you time-stamp the rows from vmstat\nyou can match them up against events in your log and periods of\nslow response.\n \n-Kevin\n",
"msg_date": "Fri, 24 Sep 2010 09:17:43 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
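A very rough first pass at the bloat check Kevin suggests can be done from the row-level statistics alone (8.3 or later). It is only a heuristic, since real bloat estimation has to compare pages against tuple widths, but it quickly shows where dead rows are piling up:

    SELECT relname, n_live_tup, n_dead_tup,
           round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1)
               AS pct_dead
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 20;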
{
"msg_contents": "Tobias,\n\nConsult pg_statio_user_indexes to see which indexes have been used and how much. Indexes with comparitively low usages rates aren't helping you much and are candidates for elimination. Also, partitioning large tables can help, since the indexes on each partition are smaller than one huge index on the original table.\n\nGood luck!\n\nBob Lunney\n\n--- On Thu, 9/23/10, Tobias Brox <[email protected]> wrote:\n\n> From: Tobias Brox <[email protected]>\n> Subject: [PERFORM] Memory usage - indexes\n> To: [email protected]\n> Date: Thursday, September 23, 2010, 5:50 PM\n> We've come to a tipping point with\n> one of our database servers, it's\n> generally quite loaded but up until recently it was\n> handling the load\n> well - but now we're seeing that it struggles to process\n> all the\n> selects fast enough. Sometimes we're observing some\n> weird lock-like\n> behaviour (see my other post on that), but most of the time\n> the\n> database server is just not capable of handling the load\n> fast enough\n> (causing the queries to pile up in the\n> pg_stat_activity-view).\n> \n> My main hypothesis is that all the important indexes would\n> fit snuggly\n> into the memory before, and now they don't. We'll\n> eventually get the\n> server moved over to new and improved hardware, but while\n> waiting for\n> that to happen we need to do focus on reducing the memory\n> footprint of\n> the database. I have some general questions now ...\n> \n> 1) Are there any good ways to verify my hypothesis? \n> Some months ago I\n> thought of running some small memory-gobbling program on\n> the database\n> server just to see how much memory I could remove before we\n> would see\n> indications of the database being overloaded. It\n> seems a bit radical,\n> but I think the information learned from such an experiment\n> would be\n> very useful ... and we never managed to set up any testing\n> environment\n> that faithfully replicates production traffic. \n> Anyway, it's sort of\n> too late now that we're already observing performance\n> problems even\n> without the memory gobbling script running.\n> \n> 2) I've seen it discussed earlier on this list ...\n> shared_buffers vs\n> OS caches. Some claims that it has very little effect\n> to adjust the\n> size of the shared buffers. Anyway, isn't it a risk\n> that memory is\n> wasted because important data is stored both in the OS\n> cache and the\n> shared buffers? What would happen if using almost all\n> the available\n> memory for shared buffers? Or turn it down to a bare\n> minimum and let\n> the OS do almost all the cache handling?\n> \n> 3) We're discussing to drop some overlapping indexes ...\n> i.e. to drop\n> one out of two indexes looking like this:\n> \n> some_table(a)\n> some_table(a,b)\n> \n> Would the query \"select * from some_table where a=?\" run\n> slower if we\n> drop the first index? Significantly?\n> \n> (in our situation I found that the number of distinct b's\n> for each a\n> is low and that the usage stats on the second index is\n> quite low\n> compared with the first one, so I think we'll drop the\n> second index).\n> \n> 4) We're discussing to drop other indexes. Does it\n> make sense at all\n> as long as we're not experiencing problems with\n> inserts/updates? I\n> suppose that if the index isn't used it will remain on disk\n> and won't\n> affect the memory usage ... 
but what if the index is rarely\n> used ...\n> wouldn't it be better to do a seqscan on a table that is\n> frequently\n> accessed and mostly in memory than to consult an index that\n> is stored\n> on the disk?\n> \n> Sorry for all the stupid questions ;-)\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n",
"msg_date": "Fri, 24 Sep 2010 09:23:49 -0700 (PDT)",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
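A sketch of the kind of check Bob is referring to, combining usage counts from pg_stat_user_indexes with block statistics from pg_statio_user_indexes (standard views, nothing schema-specific):

    -- Note: idx_blks_read only means the block was not in shared_buffers;
    -- it may still have come from the OS cache rather than disk.
    SELECT s.schemaname, s.relname, s.indexrelname, s.idx_scan,
           io.idx_blks_hit, io.idx_blks_read,
           round(100.0 * io.idx_blks_hit /
                 nullif(io.idx_blks_hit + io.idx_blks_read, 0), 1)
               AS pct_buffer_hit
      FROM pg_stat_user_indexes   s
      JOIN pg_statio_user_indexes io USING (indexrelid)
     ORDER BY io.idx_blks_read DESC
     LIMIT 20;

Indexes with few scans, or with many block reads relative to hits, are the candidates worth a closer look.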
{
"msg_contents": "On 24 September 2010 18:23, Bob Lunney <[email protected]> wrote:\n> Consult pg_statio_user_indexes to see which indexes have been used\n> and how much.\n\nWhat is the main differences between pg_statio_user_indexes and\npg_stat_user_indexes?\n\n> Indexes with comparitively low usages rates aren't helping you much and are\n> candidates for elimination.\n\nNo doubt about that - but the question was, would it really help us to\ndrop those indexes?\n\nI think the valid reasons for dropping indexes would be:\n\n1) To speed up inserts, updates and deletes\n\n2) To spend less disk space\n\n3) Eventually, speed up nightly vacuum (it wouldn't be an issue with\nautovacuum though)\n\n4) To spend less memory resources?\n\nI'm not at all concerned about 1 and 2 above - we don't have any\nperformance issues on the write part, and we have plenty of disk\ncapacity. We are still doing the nightly vacuum thing, and it does\nhurt us a bit since it's dragging ever more out in time. Anyway, it's\nnumber four I'm wondering most about - is it anything to be concerned\nabout or not for the least frequently used indexes? An index that\naren't being used would just stay on disk anyway, right? And if there\nare limited memory resources, the indexes that are most frequently\nused would fill up the cache space anyway? That's my thoughts at\nleast - are they way off?\n\nWe did have similar experiences some years ago - everything was\nrunning very fine all until one day when some semi-complicated\nvery-frequently-run selects started taking several seconds to run\nrather than tens of milliseconds. I found that we had two slightly\noverlapping indexes like this ...\n\n account_transaction(customer_id, trans_type)\n account_transaction(customer_id, trans_type, created)\n\nboth of those indexes where heavily used. I simply dropped the first\none, and the problems disappeared. I assume that both indexes up to\nsome point fitted snuggly into memory, but one day they were competing\nfor the limited memory space, dropping the redundant index solved the\nproblem all until the next hardware upgrade. I would never have found\nthose indexes searching for the least used indexes in the\npg_stat(io)_user_indexes view.\n",
"msg_date": "Fri, 24 Sep 2010 18:46:03 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "On 24 September 2010 00:12, Mark Kirkwood <[email protected]> wrote:\n> All good questions! Before (or maybe as well as) looking at index sizes vs\n> memory I'd check to see if any of your commonly run queries have suddenly\n> started to use different plans due to data growth, e.g:\n>\n> - index scan to seq scan (perhaps because effective_cache_size is too small\n> now)\n> - hash agg to sort (work_mem too small now)\n\nWould be trivial if we had a handful of different queries and knew the\nplans by heart ... but our setup is slightly more complex than that.\nI would have to log the plans, wouldn't I? How would you go about it?\n I was having some thoughts to make up some script to scan through the\npostgres log, extract some stats on the queries run, and even do some\nexplains and store query plans.\n\nWe've started to chase down on seq scans (causing us to create even\nmore indexes and eating up more memory...). I have set up a simple\nsystem for archiving stats from pg_stat_user_tables now, like this:\n\ninsert into tmp_pg_stat_user_tables select *,now() as snapshot from\npg_stat_user_tables ;\n\nNBET=> \\d tmp_delta_pg_stat_user_tables\n View \"public.tmp_delta_pg_stat_user_tables\"\n Column | Type | Modifiers\n------------------+--------------------------+-----------\n duration | interval |\n relname | name |\n seq_scan | bigint |\n seq_tup_read | bigint |\n idx_scan | bigint |\n idx_tup_fetch | bigint |\n n_tup_ins | bigint |\n n_tup_upd | bigint |\n n_tup_del | bigint |\n n_tup_hot_upd | bigint |\n n_live_tup | bigint |\n n_dead_tup | bigint |\n last_vacuum | timestamp with time zone |\n last_autovacuum | timestamp with time zone |\n last_analyze | timestamp with time zone |\n last_autoanalyze | timestamp with time zone |\nView definition:\n SELECT now() - b.snapshot AS duration, a.relname, a.seq_scan -\nb.seq_scan AS seq_scan, a.seq_tup_read - b.seq_tup_read AS\nseq_tup_read, a.idx_scan - b.idx_scan AS idx_scan, a.idx_tup_fetch -\nb.idx_tup_fetch AS idx_tup_fetch, a.n_tup_ins - b.n_tup_ins AS\nn_tup_ins, a.n_tup_upd - b.n_tup_upd AS n_tup_upd, a.n_tup_del -\nb.n_tup_del AS n_tup_del, a.n_tup_hot_upd - b.n_tup_hot_upd AS\nn_tup_hot_upd, a.n_live_tup, a.n_dead_tup, a.last_vacuum,\na.last_autovacuum, a.last_analyze, a.last_autoanalyze\n FROM pg_stat_user_tables a, tmp_pg_stat_user_tables b\n WHERE b.snapshot = (( SELECT max(tmp_pg_stat_user_tables.snapshot) AS max\n FROM tmp_pg_stat_user_tables)) AND b.relname = a.relname;\n",
"msg_date": "Fri, 24 Sep 2010 18:52:42 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
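For completeness, the snapshot table behind the INSERT above would presumably have been created with something along these lines (the table name is taken from the post; the exact definition is an assumption):

    -- One-time setup; keeps a timestamp with every sampled row:
    CREATE TABLE tmp_pg_stat_user_tables AS
        SELECT *, now() AS snapshot FROM pg_stat_user_tables;

    -- Each later sampling run then appends another snapshot:
    INSERT INTO tmp_pg_stat_user_tables
        SELECT *, now() AS snapshot FROM pg_stat_user_tables;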
{
"msg_contents": " On 10-09-24 12:46 PM, Tobias Brox wrote:\n> On 24 September 2010 18:23, Bob Lunney<[email protected]> wrote:\n>> Consult pg_statio_user_indexes to see which indexes have been used\n>> and how much.\n> What is the main differences between pg_statio_user_indexes and\n> pg_stat_user_indexes?\n>\n\nThe pg_stat_* views give you usage information (for indexes - number of \nscans, numbers of tuples read/fetched). The pg_statio_* views give you \ninformation about block reads and block hits\n\n\n> I'm not at all concerned about 1 and 2 above - we don't have any\n> performance issues on the write part, and we have plenty of disk\n> capacity. We are still doing the nightly vacuum thing, and it does\n> hurt us a bit since it's dragging ever more out in time.\n\nWhy is the vacuum dragging out over time? Is the size of your data \nincreasing, are you doing more writes that leave dead tuples, or are \nyour tables and/or indexes getting bloated?\n\nAlso, is there a reason why you do nightly vacuums instead of letting \nautovacuum handle the work? We started doing far less vacuuming when we \nlet autovacuum handle things.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Fri, 24 Sep 2010 13:16:13 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "On 24 September 2010 19:16, Brad Nicholson <[email protected]> wrote:\n[Brad Nicholson]\n> Why is the vacuum dragging out over time? Is the size of your data\n> increasing, are you doing more writes that leave dead tuples, or are your\n> tables and/or indexes getting bloated?\n\nDigressing a bit here ... but the biggest reason is the data size increasing.\n\nWe do have some bloat-problems as well - every now and then we decide\nto shut down the operation, use pg_dump to dump the entire database to\nan sql file and restore it. The benefits are dramatic, the space\nrequirement goes down a lot, and often some of our\nperformance-problems goes away after such an operation.\n\n> Also, is there a reason why you do nightly vacuums instead of letting\n> autovacuum handle the work?\n\nIf it was to me, we would have had autovacuum turned on. We've had\none bad experience when the autovacuumer decided to start vacuuming\none of the biggest table at the worst possible moment - and someone\nfigured autovacuum was a bad idea. I think we probably still would\nneed regular vacuums to avoid that happening, but with autovacuum on,\nmaybe we could have managed with regular vacuums only once a week or\nso.\n\n> We started doing far less vacuuming when we let\n> autovacuum handle things.\n\nWhat do you mean, that you could run regular vacuum less frequently,\nor that the regular vacuum would go faster?\n",
"msg_date": "Fri, 24 Sep 2010 19:41:18 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "Tobias Brox <[email protected]> wrote:\n \n> If it was to me, we would have had autovacuum turned on. We've\n> had one bad experience when the autovacuumer decided to start\n> vacuuming one of the biggest table at the worst possible moment -\n> and someone figured autovacuum was a bad idea. I think we\n> probably still would need regular vacuums to avoid that happening,\n> but with autovacuum on, maybe we could have managed with regular\n> vacuums only once a week or so.\n \nRight, there's really no need to turn autovacuum off; if you hit it\nduring normal operations you've got enough bloat that it's going to\ntend to start dragging down performance if it *doesn't* run, and if\nyou don't want it kicking in on really big tables during the day, a\nnightly or weekly scheduled vacuum can probably prevent that.\n \nTwo other points -- you can adjust how aggressively autovacuum runs;\nif it's having a noticeable impact on concurrent queries, try a\nsmall adjustment to autovacuum cost numbers. Also, if you're not on\n8.4 (or higher!) yet, the changes in free space management and\nvacuums justify the upgrade all by themselves.\n \n-Kevin\n",
"msg_date": "Fri, 24 Sep 2010 12:50:41 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
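From 8.4 on, the cost settings Kevin mentions can also be tuned per table, so only the big, latency-sensitive tables get throttled. A sketch, with acc_trans (a table name from later in this thread) standing in for any large table; the values are examples, not recommendations:

    ALTER TABLE acc_trans SET (
        autovacuum_vacuum_scale_factor = 0.05,  -- vacuum once ~5% of the rows are dead
        autovacuum_vacuum_cost_delay   = 20,    -- milliseconds to sleep per cost batch
        autovacuum_vacuum_cost_limit   = 200
    );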
{
"msg_contents": "Tobias Brox wrote:\n> 1) Are there any good ways to verify my hypothesis?\n\nYou can confim easily whether the contents of the PostgreSQL buffer \ncache contain when you think they do by installing pg_buffercache. My \npaper and sample samples at \nhttp://www.pgcon.org/2010/schedule/events/218.en.html go over that.\n\nYou can also integrate that with a look at the OS level information by \nusing pgfincore: http://www.pgcon.org/2010/schedule/events/261.en.html\n\nI've found that if shared_buffers is set to a largish size, you can find \nout enough information from look at it to have a decent idea what's \ngoing on without going to that depth. But it's available if you want it.\n\n\n> 2) I've seen it discussed earlier on this list ... shared_buffers vs\n> OS caches. Some claims that it has very little effect to adjust the\n> size of the shared buffers. Anyway, isn't it a risk that memory is\n> wasted because important data is stored both in the OS cache and the\n> shared buffers?\nThe risk of overlap is overrated. What's much more likely to actually \nhappen is that you'll have good data in shared_buffers, then run \nsomething that completely destroys the OS cache (multiple seq scans just \nbelow the \"ring buffer\" threshold\", multiple large index scans, raging \nVACUUM work). Having copies of the most important pieces that stay in \nshared_buffers despite the OS cache being demolished is much more \nimportant to preserving decent performance than the concern about double \nbuffering database and OS contents--that only happens on trivial \nworkloads where there's not constant churn on the OS cache throwing \npages out like crazy.\n\nI have easily measurable improvements on client systems increasing \nshared_buffers into the 4GB - 8GB range. Popular indexes move into \nthere, stay there, and only get written out at checkpoint time rather \nthan all the time. However, if you write heavily enough that much of \nthis space gets dirty fast, you may not be be able to go that high \nbefore checkpoint issues start to make such sizes impractical.\n\n> What would happen if using almost all the available\n> memory for shared buffers? Or turn it down to a bare minimum and let\n> the OS do almost all the cache handling?\n> \n\nThe useful upper limit normally works out to be somewhere between 4GB \nand 1/2 of RAM. Using minimal values works for some people, \nparticularly on Windows, but you can measure that doing so generates far \nmore disk I/O activity than using a moderate sized cache by \ninstrumenting pg_stat_bgwriter, the way I describe in my talk.\n\n> 3) We're discussing to drop some overlapping indexes ... i.e. to drop\n> one out of two indexes looking like this:\n>\n> some_table(a)\n> some_table(a,b)\n>\n> Would the query \"select * from some_table where a=?\" run slower if we\n> drop the first index? Significantly?\n> \n\nYes, it would run slower, because now it has to sort through blocks in a \nlarger index in order to find anything. How significant that is depends \non the relative size of the indexes. To give a simple example, if (a) \nis 1GB, while (a,b) is 2GB, you can expect dropping (a) to halve the \nspeed of index lookups. Fatter indexes just take longer to navigate \nthrough.\n\n> (in our situation I found that the number of distinct b's for each a\n> is low and that the usage stats on the second index is quite low\n> compared with the first one, so I think we'll drop the second index).\n> \n\nYou are thinking correctly here now. 
If the addition of b to the index \nisn't buying you significant increases in selectivity, just get rid of \nit and work only with the index on a instead.\n\n> 4) We're discussing to drop other indexes.  Does it make sense at all\n> as long as we're not experiencing problems with inserts/updates?  I\n> suppose that if the index isn't used it will remain on disk and won't\n> affect the memory usage ... but what if the index is rarely used ...\n> wouldn't it be better to do a seqscan on a table that is frequently\n> accessed and mostly in memory than to consult an index that is stored\n> on the disk?\n> \n\nDon't speculate; measure exactly how much each index is being \nused and evaluate them on a case by case basis.  If they're not being \nused, they're just adding overhead in many ways, and you should drop them.\n\nThere are a bunch of \"find useless index\" scripts floating around the \nweb (I think I swiped ideas from Robert Treat and Josh Berkus to build \nmine); here's the one I use now:\n\nSELECT\n schemaname as nspname,\n relname,\n indexrelname,\n idx_scan,\n pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size\nFROM\n pg_stat_user_indexes i\n JOIN pg_index USING (indexrelid)\nWHERE\n indisunique IS false\nORDER BY idx_scan,pg_relation_size(i.indexrelid) DESC;\n\nAnything that bubbles to the top of that list, you probably want to get \nrid of.  Note that this ignores UNIQUE indexes, which you can't drop \nanyway, but are being used to answer queries.  You might choose to \ninclude them anyway but just flag them in the output if the goal is to \nsee how often they are used.\n\nP.S. You seem busy re-inventing pgstatspack this week: \nhttp://pgfoundry.org/projects/pgstatspack/ does all of this \"take a \nsnapshot of the stats and store it in the database for future analysis\" \nwork for you.  Working on that instead of continuing to hack individual \nstorage/retrieve scripts for each statistics counter set would be a \nbetter contribution to the PostgreSQL community.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Fri, 24 Sep 2010 14:01:12 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
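A minimal version of the pg_buffercache inspection Greg describes, assuming the contrib module is installed in the database (on 8.3/9.0 it is loaded from the contrib SQL script) and the default 8 kB block size:

    -- What is occupying shared_buffers right now, by relation, for the
    -- current database, including how much of it is dirty.
    SELECT c.relname,
           count(*)                                   AS buffers,
           pg_size_pretty(count(*) * 8192)            AS buffered,
           sum(CASE WHEN b.isdirty THEN 1 ELSE 0 END) AS dirty
      FROM pg_buffercache b
      JOIN pg_class c ON c.relfilenode = b.relfilenode
     WHERE b.reldatabase = (SELECT oid FROM pg_database
                             WHERE datname = current_database())
     GROUP BY c.relname
     ORDER BY count(*) DESC
     LIMIT 20;

Running this a few times over the day shows which indexes actually stay resident and which ones keep getting evicted.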
{
"msg_contents": "Tobias Brox wrote:\n> We do have some bloat-problems as well - every now and then we decide\n> to shut down the operation, use pg_dump to dump the entire database to\n> an sql file and restore it. The benefits are dramatic, the space\n> requirement goes down a lot, and often some of our\n> performance-problems goes away after such an operation.\n> \n\nYou can do the same thing with far less trouble if you just CLUSTER the \ntable. It takes a lock while it runs so there's still downtime needed, \nbut it's far faster than a dump/reload and safer too.\n\n> If it was to me, we would have had autovacuum turned on. We've had\n> one bad experience when the autovacuumer decided to start vacuuming\n> one of the biggest table at the worst possible moment - and someone\n> figured autovacuum was a bad idea. I think we probably still would\n> need regular vacuums to avoid that happening, but with autovacuum on,\n> maybe we could have managed with regular vacuums only once a week or\n> so.\n> \n\nThe answer to \"we once saw autovacuum go mad and cause us problems\" is \nnever the knee-jerk \"disable autovacuum\", it's usually \"change \nautovacuum so it runs far more often but with lower intesity\". \nSometimes it's \"keep autovacuum on but proactively hit the biggest \ntables with manual vacuums at slow times\" too.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Fri, 24 Sep 2010 14:05:34 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
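The CLUSTER alternative Greg mentions, sketched against the acc_trans table from this thread (the index name is hypothetical). It takes an exclusive lock for the duration, so it still needs a maintenance window, just a much shorter one than a dump/restore:

    -- Rewrite the table in index order, discarding dead space, then
    -- refresh the planner statistics.
    CLUSTER acc_trans USING acc_trans_customer_created_idx;
    ANALYZE acc_trans;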
{
"msg_contents": " On 10-09-24 01:41 PM, Tobias Brox wrote:\n> What do you mean, that you could run regular vacuum less frequently,\n> or that the regular vacuum would go faster?\n\nIt means that vacuums ran less frequently. With cron triggered vacuums, \nwe estimated when tables needed to be vacuumed, and vacuumed them \naccordingly. Because of unpredictable shifts in activity, we scheduled \nthe vacuums to happen more often than needed.\n\nWith autovacuum, we vacuum some of our large tables far less \nfrequently. We have a few large tables that used to get vacuumed every \nother day that now get vacuumed once or twice a month.\n\nThe vacuums themselves take longer now as we use the vacuum cost delay \nto control the IO. That wasn't an option for us when we did manual \nvacuums as that was in 8.1 when vacuums were still treated as long \nrunning transactions. Stretching a vacuum out to a few hours prior to \n8.2 would bloat other tables.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Fri, 24 Sep 2010 14:34:06 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "Tobias,\n\nFirst off, what version of PostgreSQL are you running? If you have 8.4, nightly vacuuming shouldn't be necessary with properly tuned autovacuum jobs. \n\nThe pertinent difference between pg_stat_user_indexes and pg_statio_user_indexes is the latter shows the number of blocks read from disk or found in the cache. You're correct, unused indexes will remain on disk, but indexes that don't completely fit into memory must be read from disk for each index scan, and that hurts performance. (In fact, it will suddenly drop like a rock. BTDT.) By making smaller equivalent indexes on partitioned data the indexes for individual partitions are more likely to stay in memory, which is particularly important when multiple passes are made over the index by a query.\n\nYou are correct on all the points you make concerning indexes, but point 4 is the one I'm referring to. You discovered this independently yourself, according to your anecdote about the overlapping indexes.\n\nBob Lunney\n\n\n--- On Fri, 9/24/10, Tobias Brox <[email protected]> wrote:\n\n> From: Tobias Brox <[email protected]>\n> Subject: Re: [PERFORM] Memory usage - indexes\n> To: \"Bob Lunney\" <[email protected]>\n> Cc: [email protected]\n> Date: Friday, September 24, 2010, 12:46 PM\n> On 24 September 2010 18:23, Bob\n> Lunney <[email protected]>\n> wrote:\n> > Consult pg_statio_user_indexes to see which indexes\n> have been used\n> > and how much.\n> \n> What is the main differences between pg_statio_user_indexes\n> and\n> pg_stat_user_indexes?\n> \n> > Indexes with comparitively low usages rates\n> aren't helping you much and are\n> > candidates for elimination.\n> \n> No doubt about that - but the question was, would it really\n> help us to\n> drop those indexes?\n> \n> I think the valid reasons for dropping indexes would be:\n> \n> 1) To speed up inserts, updates and deletes\n> \n> 2) To spend less disk space\n> \n> 3) Eventually, speed up nightly vacuum (it wouldn't be an\n> issue with\n> autovacuum though)\n> \n> 4) To spend less memory resources?\n> \n> I'm not at all concerned about 1 and 2 above - we don't\n> have any\n> performance issues on the write part, and we have plenty of\n> disk\n> capacity. We are still doing the nightly vacuum\n> thing, and it does\n> hurt us a bit since it's dragging ever more out in\n> time. Anyway, it's\n> number four I'm wondering most about - is it anything to be\n> concerned\n> about or not for the least frequently used indexes? \n> An index that\n> aren't being used would just stay on disk anyway,\n> right? And if there\n> are limited memory resources, the indexes that are most\n> frequently\n> used would fill up the cache space anyway? That's my\n> thoughts at\n> least - are they way off?\n> \n> We did have similar experiences some years ago - everything\n> was\n> running very fine all until one day when some\n> semi-complicated\n> very-frequently-run selects started taking several seconds\n> to run\n> rather than tens of milliseconds. I found that we had\n> two slightly\n> overlapping indexes like this ...\n> \n> account_transaction(customer_id, trans_type)\n> account_transaction(customer_id, trans_type,\n> created)\n> \n> both of those indexes where heavily used. I simply\n> dropped the first\n> one, and the problems disappeared. I assume that both\n> indexes up to\n> some point fitted snuggly into memory, but one day they\n> were competing\n> for the limited memory space, dropping the redundant index\n> solved the\n> problem all until the next hardware upgrade. 
I would\n> never have found\n> those indexes searching for the least used indexes in the\n> pg_stat(io)_user_indexes view.\n> \n\n\n \n",
"msg_date": "Fri, 24 Sep 2010 12:06:05 -0700 (PDT)",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": " On 10-09-24 03:06 PM, Bob Lunney wrote:\n> The pertinent difference between pg_stat_user_indexes and pg_statio_user_indexes is the latter shows the number of blocks read from disk or found in the cache.\n\nI have a minor, but very important correction involving this point. The \npg_statio tables show you what blocks are found in the Postgres buffer \ncache, and what ones are not.\n\nFor the ones that are not, those blocks may come from the OS filesystem \ncache, a battery backed cache, or on the actual disk. There is a big \ndifference in performance based on where you are actually getting those \nblocks from (and you can't this info from Postgres).\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Fri, 24 Sep 2010 15:24:54 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "On 24 September 2010 21:06, Bob Lunney <[email protected]> wrote:\n> First off, what version of PostgreSQL are you running? If you have 8.4, nightly vacuuming shouldn't be necessary with properly tuned autovacuum jobs.\n\n8.3. We'll upgrade to 9.0 during the December holidays fwiw. But\npoint taken, I will continue to push for autovacuum to be turned on.\n\nAnyway, I think the nightly vacuuming does have some merit. For some\nof the queries, most of the daytime we're quite sensitive to latency.\nWell, I guess the proper solution to that is to tune the autovacuum\nconfiguration so it acts less aggressively at the times of the day\nwhere we need low latency...\n\n> You're correct, unused indexes will\n> remain on disk, but indexes that don't completely fit into memory must be\n> read from disk for each index scan, and that hurts performance. (In fact, it\n> will suddenly drop like a rock. BTDT.)\n\nSounds quite a lot like our problems nowadays - as well as previous\ntime when I found that overlapping index that could be dropped.\n\n> By making smaller equivalent indexes on partitioned data the indexes for\n> individual partitions are more likely to stay in memory, which is particularly\n> important when multiple passes are made over the index by a query.\n\nI was looking a bit into table partitioning some years ago, but didn't\nreally find any nice way to partition our tables. One solution would\nprobably be to partition by creation date and set up one partition for\neach year, but it seems like a butt ugly solution, and I believe it\nwould only help if the select statement spans a date range on the\ncreation time.\n\n> You are correct on all the points you make concerning indexes, but point 4\n> is the one I'm referring to. You discovered this independently yourself,\n> according to your anecdote about the overlapping indexes.\n\nYes, but that was the heavily used index ... my belief is that the\n_unused_ index, or infrequently used index wouldn't cause such memory\nproblems. (Then again, I suppose it would be faster to scan a\nnon-optimal index that is in memory than an optimal index that is on\ndisk?) Well, if both you and Greg Smith recommends to drop those\nindexes, I suppose we probably should do that ... ;-)\n",
"msg_date": "Fri, 24 Sep 2010 21:58:12 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
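For reference, the by-year partitioning discussed above would look roughly like this on 8.3/9.0, using inheritance and constraint exclusion (all names are illustrative):

    -- One child table per year, attached to the (nearly empty) parent.
    CREATE TABLE acc_trans_2010 (
        CHECK (created >= DATE '2010-01-01' AND created < DATE '2011-01-01')
    ) INHERITS (acc_trans);

    -- Each partition carries its own, much smaller, index.
    CREATE INDEX acc_trans_2010_cust_type_created_idx
        ON acc_trans_2010 (customer_id, trans_type, created);

Inserts have to be routed to the right child (typically with a trigger on the parent), and, as noted above, partitions are only skipped when the query actually constrains the partitioning column and constraint_exclusion is enabled.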
{
"msg_contents": "On 24 September 2010 00:12, Mark Kirkwood <[email protected]> wrote:\n> Re index size, you could try indexes like:\n>\n> some_table(a)\n> some_table(b)\n>\n> which may occupy less space, and the optimizer can bitmap and/or them to\n> work like the compound index some_table(a,b).\n\nHm ... never considered that ... but is it cost effective on large\nindexes? I guess I should do some testing ...\n",
"msg_date": "Fri, 24 Sep 2010 22:00:03 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "On 24 September 2010 21:24, Brad Nicholson <[email protected]> wrote:\n>> The pertinent difference between pg_stat_user_indexes and\n>> pg_statio_user_indexes is the latter shows the number of blocks read from\n>> disk or found in the cache.\n>\n> I have a minor, but very important correction involving this point. The\n> pg_statio tables show you what blocks are found in the Postgres buffer\n> cache, and what ones are not.\n\nRight. Then, studying how the pg_statio table develops over time\nwould probably give a hint on my first question in my original post\n... how to check the hypothesis that we're running out of memory.\nThat said, I've sent an email to our sysadmin asking him to consider\nthe pg_buffercache module suggested by Greg Smith.\n\nIncreasing the shared_buffers on the cost of OS caches would then have\none \"selling point\" ... better possibilities to monitor the memory\nusage.\n",
"msg_date": "Fri, 24 Sep 2010 22:08:08 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "Thanks for spending your time on this ... amidst all the useful\nfeedback I've received, I'd rate your post as the most useful post.\n\n>> 1) Are there any good ways to verify my hypothesis?\n>\n> You can confim easily whether the contents of the PostgreSQL buffer cache\n> contain when you think they do by installing pg_buffercache. My paper and\n> sample samples at http://www.pgcon.org/2010/schedule/events/218.en.html go\n> over that.\n\nI've asked the sysadmin to consider installing it. From what I\nunderstood from other posts, the pg_statio_user_indexes and\npg_statio_user_tables would also indicate to what extent data is found\nin shared buffers and not. Monitoring it over time could possibly\nhelp us predicting the \"tipping points\" before they happen. Though\nstill, if most of the cacheing takes place on the OS level, one\nwouldn't learn that much from studying the shared buffers usage ...\n\n> You can also integrate that with a look at the OS level information by using\n> pgfincore: http://www.pgcon.org/2010/schedule/events/261.en.html\n\n... ah, right ... :-)\n\n> I've found that if shared_buffers is set to a largish size, you can find out\n> enough information from look at it to have a decent idea what's going on\n> without going to that depth. But it's available if you want it.\n\nHaven't studied it in details yet, but the information value in itself\nmay be a \"selling point\" for increasing the buffer size.\n\n> I have easily measurable improvements on client systems increasing\n> shared_buffers into the 4GB - 8GB range. Popular indexes move into there,\n> stay there, and only get written out at checkpoint time rather than all the\n> time.\n\nOurs is at 12 GB, out of 70 GB total RAM.\n\n> However, if you write heavily enough that much of this space gets\n> dirty fast, you may not be be able to go that high before checkpoint issues\n> start to make such sizes impractical.\n\nI think we did have some issues at some point ... we do have some\napplications that are very sensitive towards latency. Though, I think\nthe problem was eventually solved. I think I somehow managed to\ndeliver the message that it was not a good idea to store\nkeep-alive-messages sent every second from multiple clients into the\nmain production database, and that it was an equally bad idea to\ndisconnect the clients after a three seconds timeout :-) Anyway,\ntoday we have mostly issues with read access, not write access.\n\n> Using minimal values works for some people, particularly on\n> Windows,\n\nHuh ... does it mean Windows have better OS cache handling than Linux?\n To me it sounds insane to run a database under a buggy GUI ... but I\nsuppose I should keep that to myself :-)\n\n> Yes, it would run slower, because now it has to sort through blocks in a\n> larger index in order to find anything. How significant that is depends on\n> the relative size of the indexes. To give a simple example, if (a) is 1GB,\n> while (a,b) is 2GB, you can expect dropping (a) to halve the speed of index\n> lookups. Fatter indexes just take longer to navigate through.\n\nLinear relationship between the time it takes to do index lookups vs\nthe fatness of the index? That's not what I guessed in the first\nplace ... but I suppose you're right.\n\n> P.S. You seem busy re-inventing pgstatspack this week:\n> http://pgfoundry.org/projects/pgstatspack/ does all of this \"take a\n> snapshot of the stats and store it in the database for future analysis\" work\n> for you. 
Working on that intead of continuing to hack individual\n> storage/retrieve scripts for each statistics counter set would be a better\n> contribution to the PostgreSQL community.\n\nSometimes it takes more work to implement work already done by others\nthan to reimplement the logics ... but anyway, I will have a look\nbefore I make more snapshot tables ;-)\n",
"msg_date": "Fri, 24 Sep 2010 22:50:17 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "Tobias Brox wrote:\n>> I have easily measurable improvements on client systems increasing\n>> shared_buffers into the 4GB - 8GB range. Popular indexes move into there,\n>> stay there, and only get written out at checkpoint time rather than all the\n>> time.\n>> \n>\n> Ours is at 12 GB, out of 70 GB total RAM.\n> \n\nGet a snapshot of what's in there using pg_buffercache. And then reduce \nthat to at or under 8GB. Everyone I've seen test in this area says that \nperformance starts to drop badly with shared_buffers greater than \nsomewhere between 8GB and 10GB, so 12GB is well into the area where it's \ndegraded already.\n\n\n> Huh ... does it mean Windows have better OS cache handling than Linux?\n> To me it sounds insane to run a database under a buggy GUI ... but I\n> suppose I should keep that to myself :-)\n> \n\nNo, windows has slow shared memory issues when used the way PostgreSQL \ndoes, so you push at the OS cache instead as the next best thing.\n\n\n> Linear relationship between the time it takes to do index lookups vs\n> the fatness of the index? That's not what I guessed in the first\n> place ... but I suppose you're right.\n> \n\nIf you're scanning 10% of a 10GB index, you can bet that's going to take \nlonger to do than scanning 10% of a 5GB index. So unless the bigger \nindex is significantly adding to how selective the query is--so that you \nare, say, only scanning 2% of the 10GB index because indexing on two \nrows allowed you to remove many candidate rows--you might as well use a \nslimmer one instead.\n\nOverindexed tables containing more columns than are actually selective \nis a very popular source of PostgreSQL slowdowns. It's easy to say \"oh, \nI look this data up using columns a,b,c, so lets put an index on \na,b,c\". But if an index on a alone is 1% selective, that's probably \nwrong; just index it instead, so that you have one lean, easy to \nmaintain index there that's more likely to be in RAM at all times. Let \nthe CPU chew on filtering out which of those 1% matches also match the \n(b,c) criteria instead.\n\nObviously rough guidance here--you need to simulate to know for sure. \nEvery drop an index in a transaction block just to see how a query plan \nchanges if it's not there anymore, then rollback so it never really went \naway? Great fun for this sort of experiment, try it sometime.\n\n> Sometimes it takes more work to implement work already done by others\n> than to reimplement the logics ... but anyway, I will have a look\n> before I make more snapshot tables ;-)\n> \n\nYou will be surprised at how exactly you are reimplementing that \nparticular project.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n\n\n\n\n\n\nTobias Brox wrote:\n\n\nI have easily measurable improvements on client systems increasing\nshared_buffers into the 4GB - 8GB range. Popular indexes move into there,\nstay there, and only get written out at checkpoint time rather than all the\ntime.\n \n\n\nOurs is at 12 GB, out of 70 GB total RAM.\n \n\n\nGet a snapshot of what's in there using pg_buffercache. And then\nreduce that to at or under 8GB. Everyone I've seen test in this area\nsays that performance starts to drop badly with shared_buffers greater\nthan somewhere between 8GB and 10GB, so 12GB is well into the area\nwhere it's degraded already.\n\n\n\nHuh ... 
does it mean Windows have better OS cache handling than Linux?\n To me it sounds insane to run a database under a buggy GUI ... but I\nsuppose I should keep that to myself :-)\n \n\n\nNo, windows has slow shared memory issues when used the way PostgreSQL\ndoes, so you push at the OS cache instead as the next best thing.\n\n\n\nLinear relationship between the time it takes to do index lookups vs\nthe fatness of the index? That's not what I guessed in the first\nplace ... but I suppose you're right.\n \n\n\nIf you're scanning 10% of a 10GB index, you can bet that's going to\ntake longer to do than scanning 10% of a 5GB index. So unless the\nbigger index is significantly adding to how selective the query is--so\nthat you are, say, only scanning 2% of the 10GB index because indexing\non two rows allowed you to remove many candidate rows--you might as\nwell use a slimmer one instead.\n\nOverindexed tables containing more columns than are actually selective\nis a very popular source of PostgreSQL slowdowns. It's easy to say\n\"oh, I look this data up using columns a,b,c, so lets put an index on\na,b,c\". But if an index on a alone is 1% selective, that's probably\nwrong; just index it instead, so that you have one lean, easy to\nmaintain index there that's more likely to be in RAM at all times. Let\nthe CPU chew on filtering out which of those 1% matches also match the\n(b,c) criteria instead.\n\nObviously rough guidance here--you need to simulate to know for sure. \nEvery drop an index in a transaction block just to see how a query plan\nchanges if it's not there anymore, then rollback so it never really\nwent away? Great fun for this sort of experiment, try it sometime.\n\n\nSometimes it takes more work to implement work already done by others\nthan to reimplement the logics ... but anyway, I will have a look\nbefore I make more snapshot tables ;-)\n \n\n\nYou will be surprised at how exactly you are reimplementing that\nparticular project.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book",
"msg_date": "Fri, 24 Sep 2010 18:00:41 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
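The rollback trick Greg refers to, spelled out (the index name is a placeholder; the query is the one used later in this thread). Note that DROP INDEX takes an exclusive lock on the table until the transaction ends, so concurrent writes will queue up behind it - best tried off-peak:

    BEGIN;
    DROP INDEX acc_trans_rarely_used_idx;      -- candidate for removal
    EXPLAIN ANALYZE
      SELECT * FROM acc_trans
       WHERE customer_id = 67368 AND trans_type = 8
       ORDER BY created DESC LIMIT 20;         -- is the plan still acceptable?
    ROLLBACK;                                  -- the index never really went away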
{
"msg_contents": "On 25 September 2010 00:00, Greg Smith <[email protected]> wrote:\n> Overindexed tables containing more columns than are actually selective is a\n> very popular source of PostgreSQL slowdowns. It's easy to say \"oh, I look\n> this data up using columns a,b,c, so lets put an index on a,b,c\". But if an\n> index on a alone is 1% selective, that's probably wrong; just index it\n> instead, so that you have one lean, easy to maintain index there that's more\n> likely to be in RAM at all times. Let the CPU chew on filtering out which\n> of those 1% matches also match the (b,c) criteria instead.\n\nHm ... yes, we have quite many of those indexes. Some of them we\ncan't live without. Digging out 1% out of a fat 100M table (1M rows)\nwhen one really just needs 20 rows is just too costly. Well, I guess\nwe should try to have a serious walk-through to see what indexes\nreally are needed. After all, that really seems to be our main\nproblem nowadays - some frequently used indexes doesn't fit very\nsnuggly into memory.\n\n> Every drop an index in a transaction block just to see how a query plan\n> changes if it's not there anymore, then rollback so it never really went away?\n> Great fun for this sort of experiment, try it sometime.\n\nYes, I was playing a bit with it long time ago ... but it seems a bit\nrisky to do this in the production environment ... wouldn't want\ninserts to get stuck due to locks. There is also the problem that we\ndon't really have an overview of which queries would be affected if\ndropping an index. Best thing we can do is to drop an index and\nmonitor the stats on seq scans, new slow queries popping up, etc.\n",
"msg_date": "Sat, 25 Sep 2010 12:29:30 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "I just got this crazy, stupid or maybe genius idea :-)\n\nOne thing that I've learned in this thread is that fat indexes (i.e.\nsome index on some_table(a,b,c,d,e,f)) is to be avoided as much as\npossible.\n\nOne of our biggest indexes looks like this:\n\nacc_trans(customer_id, trans_type, created)\n\nFor the very most of the time an index like this would do:\n\nacc_trans(customer_id, trans_type, created)\n\nBut then there are those few troublesome customers that have tens of\nthousands of transactions, they interactively inspect transaction\nlistings through the web, sometimes the query \"give me my 20 most\nrecent transactions of trans_type 6\" gets stuck, maybe the customer\nhas no transactions of trans type 6 and all the transactions needs to\nbe scanned through. Since this is done interactively and through our\nfront-end web page, we want all queries to be lightning fast.\n\nNow, my idea is to drop that fat index and replace it with conditional\nindexes for a dozen of heavy users - like those:\n\n acc_trans(trans_type, created) where customer_id=224885;\n acc_trans(trans_type, created) where customer_id=643112;\n acc_trans(trans_type, created) where customer_id=15;\n\nor maybe like this:\n\n acc_trans(customer_id, trans_type, created) where customer_id in ( ... );\n\nAny comments?\n\nMy sysadmin is worried that it would be a too big hit on performance\nwhen doing inserts. It may also cause more overhead when planning the\nqueries. Is that significant? Is this idea genius or stupid or just\nsomewhere in between?\n",
"msg_date": "Wed, 29 Sep 2010 08:41:48 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
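Spelled out as DDL, the conditional (partial) indexes proposed above would look like this; the customer ids are the ones from the message, the index names are made up:

    CREATE INDEX acc_trans_cust_224885_idx
        ON acc_trans (trans_type, created) WHERE customer_id = 224885;
    CREATE INDEX acc_trans_cust_643112_idx
        ON acc_trans (trans_type, created) WHERE customer_id = 643112;

    -- The planner only uses a partial index when it can prove the predicate
    -- from the query itself, so the customer_id must appear as a literal:
    EXPLAIN SELECT * FROM acc_trans
     WHERE customer_id = 224885 AND trans_type = 6
     ORDER BY created DESC LIMIT 20;

That last point matters for the planning-overhead question: prepared statements running a generic plan will not pick these indexes up.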
{
"msg_contents": "On 29/09/10 19:41, Tobias Brox wrote:\n> I just got this crazy, stupid or maybe genius idea :-)\n>\n>\n> Now, my idea is to drop that fat index and replace it with conditional\n> indexes for a dozen of heavy users - like those:\n>\n> acc_trans(trans_type, created) where customer_id=224885;\n> acc_trans(trans_type, created) where customer_id=643112;\n> acc_trans(trans_type, created) where customer_id=15;\n>\n> or maybe like this:\n>\n> acc_trans(customer_id, trans_type, created) where customer_id in ( ... );\n>\n> Any comments?\n>\n> My sysadmin is worried that it would be a too big hit on performance\n> when doing inserts. It may also cause more overhead when planning the\n> queries. Is that significant? Is this idea genius or stupid or just\n> somewhere in between?\n>\n> \n\nYeah, I think the idea of trying to have a few smaller indexes for the \n'hot' customers is a good idea. However I am wondering if just using \nsingle column indexes and seeing if the bitmap scan/merge of smaller \nindexes is actually more efficient is worth testing - i.e:\n\nacc_trans(trans_type);\nacc_trans(created);\nacc_trans(customer_id);\n\nIt may mean that you have to to scrutinize your effective_cache_size and \nwork_mem parameters, but could possibly be simpler and more flexible.\n\nregards\n\nMark\n\n\n\n",
"msg_date": "Wed, 29 Sep 2010 21:03:08 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "On 29 September 2010 10:03, Mark Kirkwood\n<[email protected]> > Yeah, I think the idea of trying to\nhave a few smaller indexes for the 'hot'\n> customers is a good idea. However I am wondering if just using single column\n> indexes and seeing if the bitmap scan/merge of smaller indexes is actually\n> more efficient is worth testing - i.e:\n>\n> acc_trans(trans_type);\n> acc_trans(created);\n> acc_trans(customer_id);\n\nMy gut feeling tells me that it's not a good idea - consider that we\nwant to pull out 20 rows from a 60M table. If I'm not mistaken, with\nbitmapping it's needed to do operations on the whole indexes - 60M\nbits is still 7.5 megabytes. Well, I suppose that nowadays it's\nrelatively fast to bitmap 7.5 Mb of memory, but probably some orders\nof magnitude more than the few milliseconds it takes to pick out the\n20 rows directly from the specialized index.\n\nWell, why rely on gut feelings - when things can be measured. I\ndidn't take those figures from the production database server though,\nbut at least it gives a hint on what to expect.\n\nFirst, using the three-key index for \"select * from acc_trans where\ncustomer_id=? and trans_type=? order by created desc limit 20\". I\nchose one of the users with most transactions, and I tested with the\nmost popular transaction type as well as one transaction type where he\nhas just a handful of transactions. Both took significantly less than\n1 ms to run. Then I deleted all indexes and created the three\nsuggested indexes. Using the popular transaction type, it took 123\nms. Well, that's 500 times as much time, but still acceptable. Here\nis the query plan:\n\n=> explain analyze select * from acc_trans where customer_id=67368\nand trans_type=8 order by created desc limit 20;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1486.23..1486.28 rows=20 width=200) (actual\ntime=123.685..123.687 rows=3 loops=1)\n -> Sort (cost=1486.23..1486.34 rows=43 width=200) (actual\ntime=123.684..123.685 rows=3 loops=1)\n Sort Key: created\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on acc_trans (cost=1313.90..1485.08\nrows=43 width=200) (actual time=121.350..123.669 rows=3 loops=1)\n Recheck Cond: ((trans_type = 8) AND (customer_id = 67368))\n -> BitmapAnd (cost=1313.90..1313.90 rows=43 width=0)\n(actual time=120.342..120.342 rows=0 loops=1)\n -> Bitmap Index Scan on\naccount_transaction_on_type (cost=0.00..256.31 rows=13614 width=0)\n(actual time=12.200..12.200 rows=43209 loops=1)\n Index Cond: (trans_type = 8)\n -> Bitmap Index Scan on\naccount_transaction_on_user (cost=0.00..1057.31 rows=56947 width=0)\n(actual time=104.578..104.578 rows=59133 loops=1)\n Index Cond: (users_id = 67368)\n Total runtime: 123.752 ms\n(12 rows)\n\nWith the most popular trans type it chose another plan and it took\nmore than 3s (totally unacceptable):\n\n=> explain analyze select * from acc_trans where customer_id=67368\nand trans_type=6 order by created desc limit 20;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..44537.82 rows=20 width=200) (actual\ntime=1746.288..3204.029 rows=20 loops=1)\n -> Index Scan Backward using account_transaction_on_created on\nacc_trans (cost=0.00..55402817.90 rows=24879 width=200) 
(actual\ntime=1746.285..3204.021 rows=20 loops=1)\n Filter: ((customer_id = 67368) AND (trans_type = 6))\n Total runtime: 3204.079 ms\n(4 rows)\n\nAlthough this customer has several tens of thousands of transactions,\ndropping the three-key-index and use an index on users_id,created is\nclearly a better option than running out of memory:\n\n=> explain analyze select * from acc_trans where customer_id=67368 and\ntrans_type=8 order by created desc limit 20;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..98524.88 rows=20 width=200) (actual\ntime=0.669..197.012 rows=3 loops=1)\n -> Index Scan Backward using account_transaction_by_user_ts on\nacc_trans (cost=0.00..211828.49 rows=43 width=200) (actual\ntime=0.668..197.006 rows=3 loops=1)\n Index Cond: (customer_id = 67368)\n Filter: (trans_type = 8)\n Total runtime: 197.066 ms\n(5 rows)\n\n0.2s sounds acceptable, it's just that this may be just a small part\nof building the web page, so it adds up ... and probably (I didn't\ncheck how profitable this customer is) this is probably exactly the\nkind of customer we wouldn't want to get annoyed with several seconds\npage load time.\n",
"msg_date": "Wed, 29 Sep 2010 14:09:44 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Memory usage - indexes"
},
{
"msg_contents": "On 30/09/10 01:09, Tobias Brox wrote:\n> With the most popular trans type it chose another plan and it took\n> more than 3s (totally unacceptable):\n>\n> \n\nTry tweeking effective_cache_size up a bit and see what happens - I've \nfound these bitmap plans to be sensitive to it sometimes.\n\nregards\n\nMark\n\n",
"msg_date": "Fri, 01 Oct 2010 10:01:26 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory usage - indexes"
}
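effective_cache_size can be changed per session, so Mark's suggestion is cheap to test before touching postgresql.conf. A sketch, reusing the query from the previous message (the value is only an example for a 70 GB machine):

    SET effective_cache_size = '48GB';
    EXPLAIN ANALYZE
      SELECT * FROM acc_trans
       WHERE customer_id = 67368 AND trans_type = 6
       ORDER BY created DESC LIMIT 20;
    RESET effective_cache_size;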
] |
[
{
"msg_contents": "Hi all\n\nI've have a strange problem with my Windows postgresql-9.0 service \nstopping after any transaction which manipulates tables in any database \n(Deleting records, Inserting records, bulk importing via \\copy, etc). This \nproblem occurs regardless whether I'm accessing the database server via \npgAdmin III on a client machine or the server itself as well as through \nthe command prompt. The end result it that I have to constantly restart \nthe postgresql service after any transactions. Sometime there is a slight \ndelay allowing a couple of transactions but the service is always \neventually stopped.\n\nThe problem appears to have started when I changed permissions on my \nmachine to allow the user 'postgres' access to the C drive following the \ninstructions in the last post of this thread:\nhttp://www.jitterbit.com/PhpBB/viewtopic.php?f=5&t=869\n\nThe specs of postres environment are:\nWindows XP SP3\nPostgreSQL 9.0.0, compiled by Visual C++ build 1500, 32-bit\npgAdmin III 1.12.0 (Sep17 2010, rev: REL-1_12_0)\nPostGIS 2.0SVN\n\nHope someone can shed some light on this issue.\n\nCheers\n\nAdrian\n\n\nNotice:\nThis email and any attachments may contain information that is personal, \nconfidential, legally privileged and/or copyright.No part of it should be reproduced, \nadapted or communicated without the prior written consent of the copyright owner. \n\nIt is the responsibility of the recipient to check for and remove viruses.\nIf you have received this email in error, please notify the sender by return email, delete \nit from your system and destroy any copies. You are not authorised to use, communicate or rely on the information \ncontained in this email.\n\nPlease consider the environment before printing this email.\n\nHi all\n\nI've have a strange problem with my\nWindows postgresql-9.0 service stopping after any transaction which manipulates\ntables in any database (Deleting records, Inserting records, bulk importing\nvia \\copy, etc). This problem occurs regardless whether I'm accessing the\ndatabase server via pgAdmin III on a client machine or the server itself\nas well as through the command prompt. The end result it that I have to\nconstantly restart the postgresql service after any transactions. Sometime\nthere is a slight delay allowing a couple of transactions but the service\nis always eventually stopped.\n\nThe problem appears to have started\nwhen I changed permissions on my machine to allow the user 'postgres' access\nto the C drive following the instructions in the last post of this thread:\nhttp://www.jitterbit.com/PhpBB/viewtopic.php?f=5&t=869\n\nThe specs of postres environment are:\n\nWindows XP SP3\nPostgreSQL 9.0.0, compiled by Visual\nC++ build 1500, 32-bit\npgAdmin III 1.12.0 (Sep17 2010, rev:\nREL-1_12_0)\nPostGIS 2.0SVN\nHope someone can shed some light on\nthis issue.\n\nCheers\n\nAdrian\n\n\nNotice:This \nemail and any attachments may contain information that is personal, \nconfidential,legally privileged and/or copyright. No \npart of it should be reproduced, adapted or communicated without the prior written consent of the copyright owner. \n\nIt is the responsibility of the recipient to \ncheck for and remove viruses.\nIf you have received this email in error, please \nnotify the sender by return email, delete it from your system and destroy any \ncopies. You are not authorised to use, communicate or rely on the information \ncontained in this email.\nPlease consider the environment before \nprinting this email.",
"msg_date": "Fri, 24 Sep 2010 14:39:48 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "postgresql-9.0 Windows service stops after database transaction"
},
{
"msg_contents": "On 24 September 2010 05:39, <[email protected]> wrote:\n> Hi all\n>\n> I've have a strange problem with my Windows postgresql-9.0 service stopping\n> after any transaction which manipulates tables in any database (Deleting\n> records, Inserting records, bulk importing via \\copy, etc). This problem\n> occurs regardless whether I'm accessing the database server via pgAdmin III\n> on a client machine or the server itself as well as through the command\n> prompt. The end result it that I have to constantly restart the postgresql\n> service after any transactions. Sometime there is a slight delay allowing a\n> couple of transactions but the service is always eventually stopped.\n>\n> The problem appears to have started when I changed permissions on my machine\n> to allow the user 'postgres' access to the C drive following the\n> instructions in the last post of this thread:\n> http://www.jitterbit.com/PhpBB/viewtopic.php?f=5&t=869\n>\n> The specs of postres environment are:\n>\n> Windows XP SP3\n> PostgreSQL 9.0.0, compiled by Visual C++ build 1500, 32-bit\n> pgAdmin III 1.12.0 (Sep17 2010, rev: REL-1_12_0)\n> PostGIS 2.0SVN\n>\n> Hope someone can shed some light on this issue.\n\nWhat appears in the logs?\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n",
"msg_date": "Fri, 24 Sep 2010 08:05:00 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-9.0 Windows service stops after database transaction"
},
{
"msg_contents": "Hi Thom\n\nThanks for the reply and the log seems to have shed some light. Looks like \nI have my old postgres 8.3 service still running concurrently which is \ninterfering with the new 9.0 service. This is the message that comes up a \nlot in the log.\n\n2010-09-24 15:39:34 EST LOG: unexpected EOF on client connection\n2010-09-24 15:39:37 EST LOG: unexpected EOF on client connection\n2010-09-24 15:39:37 EST LOG: unexpected EOF on client connection\n2010-09-24 15:40:07 EST LOG: server process (PID 3724) was terminated by \nexception 0xC0000005\n2010-09-24 15:40:07 EST HINT: See C include file \"ntstatus.h\" for a \ndescription of the hexadecimal value.\n2010-09-24 15:40:07 EST LOG: terminating any other active server \nprocesses\n2010-09-24 15:40:07 EST WARNING: terminating connection because of crash \nof another server process\n2010-09-24 15:40:07 EST DETAIL: The postmaster has commanded this server \nprocess to roll back the current transaction and exit, because another \nserver process exited abnormally and possibly corrupted shared memory.\n2010-09-24 15:40:07 EST HINT: In a moment you should be able to reconnect \nto the database and repeat your command.\n\nIt's a bit strange though because I did do an uninstall of the postgres \n8.3 before installing the 9.0. The 8.3 service is still present in Windows \nServices but I've disabled with with no effect on the problem. I notice it \nstill is also listed as a separate server in pgAdmin but when it's \nactivated it refers to the 9.0 databases. I'm guessing that I have to do a \ndeeper uninstall of the 8.3 service. Do you have any pointers on how to go \nabout this?\n\nCheers\n\nAdrian\n\n\n\n\n\nFrom: Thom Brown <[email protected]>\nTo: [email protected]\nCc: [email protected]\nDate: 24/09/2010 05:05 PM\nSubject: Re: [PERFORM] postgresql-9.0 Windows service stops after \ndatabase transaction\n\n\n\nOn 24 September 2010 05:39, <[email protected]> wrote:\n> Hi all\n>\n> I've have a strange problem with my Windows postgresql-9.0 service \nstopping\n> after any transaction which manipulates tables in any database (Deleting\n> records, Inserting records, bulk importing via \\copy, etc). This problem\n> occurs regardless whether I'm accessing the database server via pgAdmin \nIII\n> on a client machine or the server itself as well as through the command\n> prompt. The end result it that I have to constantly restart the \npostgresql\n> service after any transactions. Sometime there is a slight delay \nallowing a\n> couple of transactions but the service is always eventually stopped.\n>\n> The problem appears to have started when I changed permissions on my \nmachine\n> to allow the user 'postgres' access to the C drive following the\n> instructions in the last post of this thread:\n> http://www.jitterbit.com/PhpBB/viewtopic.php?f=5&t=869\n>\n> The specs of postres environment are:\n>\n> Windows XP SP3\n> PostgreSQL 9.0.0, compiled by Visual C++ build 1500, 32-bit\n> pgAdmin III 1.12.0 (Sep17 2010, rev: REL-1_12_0)\n> PostGIS 2.0SVN\n>\n> Hope someone can shed some light on this issue.\n\nWhat appears in the logs?\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\n\nNotice:\nThis email and any attachments may contain information that is personal, \nconfidential, legally privileged and/or copyright.No part of it should be reproduced, \nadapted or communicated without the prior written consent of the copyright owner. 
\n\nIt is the responsibility of the recipient to check for and remove viruses.\nIf you have received this email in error, please notify the sender by return email, delete \nit from your system and destroy any copies. You are not authorised to use, communicate or rely on the information \ncontained in this email.\n\nPlease consider the environment before printing this email.\n\nHi Thom\n\nThanks for the reply and the log seems\nto have shed some light. Looks like I have my old postgres 8.3 service\nstill running concurrently which is interfering with the new 9.0 service.\nThis is the message that comes up a lot in the log.\n\n2010-09-24 15:39:34 EST LOG: unexpected\nEOF on client connection\n2010-09-24 15:39:37 EST LOG: unexpected\nEOF on client connection\n2010-09-24 15:39:37 EST LOG: unexpected\nEOF on client connection\n2010-09-24 15:40:07 EST LOG: server\nprocess (PID 3724) was terminated by exception 0xC0000005\n2010-09-24 15:40:07 EST HINT: See\nC include file \"ntstatus.h\" for a description of the hexadecimal\nvalue.\n2010-09-24 15:40:07 EST LOG: terminating\nany other active server processes\n2010-09-24 15:40:07 EST WARNING:\n terminating connection because of crash of another server process\n2010-09-24 15:40:07 EST DETAIL: The\npostmaster has commanded this server process to roll back the current transaction\nand exit, because another server process exited abnormally and possibly\ncorrupted shared memory.\n2010-09-24 15:40:07 EST HINT: In\na moment you should be able to reconnect to the database and repeat your\ncommand.\n\nIt's a bit strange though because I\ndid do an uninstall of the postgres 8.3 before installing the 9.0. The\n8.3 service is still present in Windows Services but I've disabled with\nwith no effect on the problem. I notice it still is also listed as a separate\nserver in pgAdmin but when it's activated it refers to the 9.0 databases.\nI'm guessing that I have to do a deeper uninstall of the 8.3 service. Do\nyou have any pointers on how to go about this?\n\nCheers\n\nAdrian\n\n\n\n\n\nFrom: \n Thom Brown <[email protected]>\nTo: \n [email protected]\nCc: \n [email protected]\nDate: \n 24/09/2010 05:05 PM\nSubject: \n Re: [PERFORM]\npostgresql-9.0 Windows service stops after database transaction\n\n\n\n\nOn 24 September 2010 05:39, <[email protected]>\nwrote:\n> Hi all\n>\n> I've have a strange problem with my Windows postgresql-9.0 service\nstopping\n> after any transaction which manipulates tables in any database (Deleting\n> records, Inserting records, bulk importing via \\copy, etc). This problem\n> occurs regardless whether I'm accessing the database server via pgAdmin\nIII\n> on a client machine or the server itself as well as through the command\n> prompt. The end result it that I have to constantly restart the postgresql\n> service after any transactions. 
Sometime there is a slight delay allowing\na\n> couple of transactions but the service is always eventually stopped.\n>\n> The problem appears to have started when I changed permissions on\nmy machine\n> to allow the user 'postgres' access to the C drive following the\n> instructions in the last post of this thread:\n> http://www.jitterbit.com/PhpBB/viewtopic.php?f=5&t=869\n>\n> The specs of postres environment are:\n>\n> Windows XP SP3\n> PostgreSQL 9.0.0, compiled by Visual C++ build 1500, 32-bit\n> pgAdmin III 1.12.0 (Sep17 2010, rev: REL-1_12_0)\n> PostGIS 2.0SVN\n>\n> Hope someone can shed some light on this issue.\n\nWhat appears in the logs?\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\n\nNotice:This \nemail and any attachments may contain information that is personal, \nconfidential,legally privileged and/or copyright. No \npart of it should be reproduced, adapted or communicated without the prior written consent of the copyright owner. \n\nIt is the responsibility of the recipient to \ncheck for and remove viruses.\nIf you have received this email in error, please \nnotify the sender by return email, delete it from your system and destroy any \ncopies. You are not authorised to use, communicate or rely on the information \ncontained in this email.\nPlease consider the environment before \nprinting this email.",
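A quick, generic way to confirm which server instance a client session is actually talking to (useful when an old 8.3 install and a new 9.0 install may both be present; nothing here is specific to this particular setup) is to ask the server itself from psql or the pgAdmin query tool:

SELECT version();          -- exact server version and build the session is connected to
SHOW port;                 -- TCP port this instance is listening on
SHOW data_directory;       -- data directory, which identifies the installation in use

If the reported data directory still points at the 8.3 tree, the client is connecting to the wrong service.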
"msg_date": "Mon, 27 Sep 2010 09:18:17 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: postgresql-9.0 Windows service stops after database\n transaction"
},
{
"msg_contents": "Hi again Thom\n\nJust a quick update regarding my previous reply to you. I managed to do a \ndeeper uninstall of Postgresql 8.3 with these following steps.\n\n* Remove the Registry entries. \n(HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Installations\\postgresql-8.3) and \n(HKEY_LOCAL_MACHINE\\SOFTWARE\\PostgreSQL\\Services\\postgresql-8.3)\n* Remove the postgresql-8.3 service. (sc delete postgresql-8.3) \n\nThis didn't fix the problem although the 8.3 service is now no longer \npresent in Windows Services nor in pgAdmin. Below is the complete log \noutput when the original error occurs and the 9.0 service is shut down. \nAll I did was delete the content of a table and this is what happens.\n\n2010-09-27 09:52:48 EST LOG: unexpected EOF on client connection\n2010-09-27 09:53:05 EST LOG: unexpected EOF on client connection\n2010-09-27 09:53:19 EST LOG: server process (PID 2564) was terminated by \nexception 0xC0000005\n2010-09-27 09:53:19 EST HINT: See C include file \"ntstatus.h\" for a \ndescription of the hexadecimal value.\n2010-09-27 09:53:19 EST LOG: terminating any other active server \nprocesses\n2010-09-27 09:53:19 EST WARNING: terminating connection because of crash \nof another server process\n2010-09-27 09:53:19 EST DETAIL: The postmaster has commanded this server \nprocess to roll back the current transaction and exit, because another \nserver process exited abnormally and possibly corrupted shared memory.\n2010-09-27 09:53:19 EST HINT: In a moment you should be able to reconnect \nto the database and repeat your command.\n2010-09-27 09:53:19 EST WARNING: terminating connection because of crash \nof another server process\n2010-09-27 09:53:19 EST DETAIL: The postmaster has commanded this server \nprocess to roll back the current transaction and exit, because another \nserver process exited abnormally and possibly corrupted shared memory.\n2010-09-27 09:53:19 EST HINT: In a moment you should be able to reconnect \nto the database and repeat your command.\n2010-09-27 09:53:19 EST WARNING: terminating connection because of crash \nof another server process\n2010-09-27 09:53:19 EST DETAIL: The postmaster has commanded this server \nprocess to roll back the current transaction and exit, because another \nserver process exited abnormally and possibly corrupted shared memory.\n2010-09-27 09:53:19 EST HINT: In a moment you should be able to reconnect \nto the database and repeat your command.\n2010-09-27 09:53:19 EST LOG: all server processes terminated; \nreinitializing\n2010-09-27 09:53:29 EST FATAL: pre-existing shared memory block is still \nin use\n2010-09-27 09:53:29 EST HINT: Check if there are any old server processes \nstill running, and terminate them.\n\nThanks again.\n\nCheers\n\nAdrian\n\n\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nAdrian Kitchingman, Aquatic Spatial Scientist | Arthur Rylah Institute, \nDept of Sustainability and Environment\n123 Brown St, Heidelberg, Victoria, Australia, 3084 | ph: + (03) 9450 8716 \n| fax: + (03) 9450 8799\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <°)))>< \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \n\n\n\n\nFrom: Thom Brown <[email protected]>\nTo: [email protected]\nCc: [email protected]\nDate: 24/09/2010 05:05 PM\nSubject: Re: [PERFORM] postgresql-9.0 Windows service stops after \ndatabase transaction\n\n\n\nOn 24 September 2010 05:39, <[email protected]> wrote:\n> Hi all\n>\n> I've have a strange problem with my Windows postgresql-9.0 service \nstopping\n> after any transaction which manipulates 
tables in any database (Deleting\n> records, Inserting records, bulk importing via \\copy, etc). This problem\n> occurs regardless whether I'm accessing the database server via pgAdmin III\n> on a client machine or the server itself as well as through the command\n> prompt. The end result it that I have to constantly restart the postgresql\n> service after any transactions. Sometime there is a slight delay allowing a\n> couple of transactions but the service is always eventually stopped.\n>\n> The problem appears to have started when I changed permissions on my machine\n> to allow the user 'postgres' access to the C drive following the\n> instructions in the last post of this thread:\n> http://www.jitterbit.com/PhpBB/viewtopic.php?f=5&t=869\n>\n> The specs of postres environment are:\n>\n> Windows XP SP3\n> PostgreSQL 9.0.0, compiled by Visual C++ build 1500, 32-bit\n> pgAdmin III 1.12.0 (Sep17 2010, rev: REL-1_12_0)\n> PostGIS 2.0SVN\n>\n> Hope someone can shed some light on this issue.\n\nWhat appears in the logs?\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935",
"msg_date": "Mon, 27 Sep 2010 10:06:34 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: postgresql-9.0 Windows service stops after database\n transaction"
},
{
"msg_contents": "On 27/09/10 08:06, [email protected] wrote:\n> \n> /2010-09-27 09:53:19 EST LOG: server process (PID 2564) was terminated\n> by exception 0xC0000005/\n> /2010-09-27 09:53:19 EST HINT: See C include file \"ntstatus.h\" for a\n> description of the hexadecimal value./\n\nThat's an access violation - in UNIX land, a segmentation fault, signal\n11. A plain 'ol crash of a backend.\n\nIt'd be really nice to have some information about how it crashed. It'd\nbe ideal if you could make a .sql file that, when run on a newly created\ndatabase, creates a table and manipulates it in a way that causes the\nbackend to crash. Please do not add PostGIS to the test database. If you\ncannot make it crash without PostGIS in the mix, that tells us something.\n\nIf you can make it crash without PostGIS, please post the .sql file that\ncreates the table(s) and runs the command that crashes the server.\n\nIf you have to add PostGIS to make it crash, it might be best to talk to\nthe PostGIS folks about the problem.\n\nIn either case, to make things even better you could collect some crash\ninformation that may help diagnose where in PostgreSQL the crash occurs.\nSee:\n\nhttp://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Windows\n\n\n\nBy the way: You don't have the old 8.3 data directory on the system PATH\ndo you? You can check the value of the system path in the System control\npanel, though exactly where varies from Windows version to Windows version.\n\n\n-- \nCraig Ringer\n\nTech-related writing: http://soapyfrogs.blogspot.com/\n",
"msg_date": "Tue, 28 Sep 2010 17:11:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql-9.0 Windows service stops after database\n transaction"
},
{
"msg_contents": "Hi all\n\nLooks like Craig is right in suspecting that the PostGIS installation was \ncausing the problem. I did a full reinstall of Postgres and ran it without \nPostGIS with no problems. For those interested, the bug has been noted on \nthe PostGIS bugs list but yet to have a fix.\nhttp://trac.osgeo.org/postgis/ticket/518\n\nThanks for the help.\n\nCheers\n\nAdrian\n\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nAdrian Kitchingman, Aquatic Spatial Scientist | Arthur Rylah Institute, \nDept of Sustainability and Environment\n123 Brown St, Heidelberg, Victoria, Australia, 3084 | ph: + (03) 9450 8716 \n| fax: + (03) 9450 8799\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <°)))>< \n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \n\n\nNotice:\nThis email and any attachments may contain information that is personal, \nconfidential, legally privileged and/or copyright.No part of it should be reproduced, \nadapted or communicated without the prior written consent of the copyright owner. \n\nIt is the responsibility of the recipient to check for and remove viruses.\nIf you have received this email in error, please notify the sender by return email, delete \nit from your system and destroy any copies. You are not authorised to use, communicate or rely on the information \ncontained in this email.\n\nPlease consider the environment before printing this email.\n\nHi all\n\nLooks like Craig is right in suspecting\nthat the PostGIS installation was causing the problem. I did a full reinstall\nof Postgres and ran it without PostGIS with no problems. For those interested,\nthe bug has been noted on the PostGIS bugs list but yet to have a fix.\nhttp://trac.osgeo.org/postgis/ticket/518\n\nThanks for the help.\n\nCheers\n\nAdrian\n\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nAdrian Kitchingman, Aquatic Spatial Scientist | Arthur\nRylah Institute, Dept of Sustainability\nand Environment\n123 Brown St, Heidelberg, Victoria, Australia, 3084 | ph: + (03) 9450 8716\n| fax: + (03) 9450 8799\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <°)))>< ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\n\nNotice:This \nemail and any attachments may contain information that is personal, \nconfidential,legally privileged and/or copyright. No \npart of it should be reproduced, adapted or communicated without the prior written consent of the copyright owner. \n\nIt is the responsibility of the recipient to \ncheck for and remove viruses.\nIf you have received this email in error, please \nnotify the sender by return email, delete it from your system and destroy any \ncopies. You are not authorised to use, communicate or rely on the information \ncontained in this email.\nPlease consider the environment before \nprinting this email.",
"msg_date": "Thu, 30 Sep 2010 16:00:30 +1000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: postgresql-9.0 Windows service stops after database\n transaction"
}
] |
[
{
"msg_contents": "Our Java application manages its own schema. Some of this is from Hibernate,\nbut some is hand-crafted JDBC.\n\nBy way of an upgrade path, we have a few places where we have added\nadditional indexes to optimize performance, and so at startup time the\napplication issues \"CREATE INDEX ...\" statements for these, expecting to\ncatch the harmless exception \"ERROR: relation \"date_index\" already exists\",\nas a simpler alternative to using the meta-data to check for it first.\n\nIn general, this seems to work fine, but we have one installation where we\nobserved one of these CREATE statements hanging up in the database, as if\nwaiting for a lock, thus stalling the app startup - it's PG 8.4.4 64-bit on\nRHEL 5, installed with the postgresql.org YUM repository.\n\nStopping and restarting PG did not clear the issue. While this is going on,\nthe database is otherwise responsive, e.g. to access with psql.\n\nIs this \"expected failure\" considered a dangerous practice in PGSQL and\nshould we add checks?\n\nDoes the hangup indicate a possible corruption problem with the DB?\n\nCheers\nDave\n\nOur Java application manages its own schema. Some of this is from Hibernate, but some is hand-crafted JDBC. By way of an upgrade path, we have a few places where we have added additional indexes to optimize performance, and so at startup time the application issues \"CREATE INDEX ...\" statements for these, expecting to catch the harmless exception \"ERROR: relation \"date_index\" already exists\", as a simpler alternative to using the meta-data to check for it first.\nIn general, this seems to work fine, but we have one installation where we observed one of these CREATE statements hanging up in the database, as if waiting for a lock, thus stalling the app startup - it's PG 8.4.4 64-bit on RHEL 5, installed with the postgresql.org YUM repository. \nStopping and restarting PG did not clear the issue. While this is going on, the database is otherwise responsive, e.g. to access with psql.\nIs this \"expected failure\" considered a dangerous practice in PGSQL and should we add checks? Does the hangup indicate a possible corruption problem with the DB?CheersDave",
"msg_date": "Mon, 27 Sep 2010 13:50:43 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd behaviour with redundant CREATE statement"
},
{
"msg_contents": "On Mon, Sep 27, 2010 at 8:50 PM, Dave Crooke <[email protected]> wrote:\n\n>\n> Our Java application manages its own schema. Some of this is from\n> Hibernate, but some is hand-crafted JDBC.\n>\n> By way of an upgrade path, we have a few places where we have added\n> additional indexes to optimize performance, and so at startup time the\n> application issues \"CREATE INDEX ...\" statements for these, expecting to\n> catch the harmless exception \"ERROR: relation \"date_index\" already exists\",\n> as a simpler alternative to using the meta-data to check for it first.\n>\n> In general, this seems to work fine, but we have one installation where we\n> observed one of these CREATE statements hanging up in the database, as if\n> waiting for a lock, thus stalling the app startup\n\n\nYou can tell if it is really waiting by looking at 'select * from pg_locks',\nand check the 'granted' column.\n\n\n\n> it's PG 8.4.4 64-bit on RHEL 5, installed with the postgresql.org YUM\n> repository.\n>\n> Stopping and restarting PG did not clear the issue. While this is going on,\n> the database is otherwise responsive, e.g. to access with psql.\n>\n\nAlso check if you have any prepared transactions waiting to be committed or\nrolled back.\n\nselect * from pg_prepared_xacts\n\n\n>\n> Is this \"expected failure\" considered a dangerous practice in PGSQL and\n> should we add checks?\n>\n> Does the hangup indicate a possible corruption problem with the DB?\n>\n\nVery unlikely.\n\nRegards,\n-- \ngurjeet.singh\n@ EnterpriseDB - The Enterprise Postgres Company\nhttp://www.EnterpriseDB.com\n\nsingh.gurjeet@{ gmail | yahoo }.com\nTwitter/Skype: singh_gurjeet\n\nMail sent from my BlackLaptop device\n\nOn Mon, Sep 27, 2010 at 8:50 PM, Dave Crooke <[email protected]> wrote:\nOur Java application manages its own schema. Some of this is from Hibernate, but some is hand-crafted JDBC. By way of an upgrade path, we have a few places where we have added additional indexes to optimize performance, and so at startup time the application issues \"CREATE INDEX ...\" statements for these, expecting to catch the harmless exception \"ERROR: relation \"date_index\" already exists\", as a simpler alternative to using the meta-data to check for it first.\nIn general, this seems to work fine, but we have one installation where we observed one of these CREATE statements hanging up in the database, as if waiting for a lock, thus stalling the app startup\nYou can tell if it is really waiting by looking at 'select * from pg_locks', and check the 'granted' column. \n\nit's PG 8.4.4 64-bit on RHEL 5, installed with the postgresql.org YUM repository. \nStopping and restarting PG did not clear the issue. While this is going on, the database is otherwise responsive, e.g. to access with psql.Also check if you have any prepared transactions waiting to be committed or rolled back.\nselect * from pg_prepared_xacts \nIs this \"expected failure\" considered a dangerous practice in PGSQL and should we add checks? Does the hangup indicate a possible corruption problem with the DB?Very unlikely.\nRegards,-- gurjeet.singh@ EnterpriseDB - The Enterprise Postgres Companyhttp://www.EnterpriseDB.comsingh.gurjeet@{ gmail | yahoo }.com\nTwitter/Skype: singh_gurjeet\nMail sent from my BlackLaptop device",
"msg_date": "Mon, 27 Sep 2010 21:27:53 +0200",
"msg_from": "Gurjeet Singh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd behaviour with redundant CREATE statement"
},
{
"msg_contents": "On Mon, Sep 27, 2010 at 3:27 PM, Gurjeet Singh <[email protected]> wrote:\n> On Mon, Sep 27, 2010 at 8:50 PM, Dave Crooke <[email protected]> wrote:\n>>\n>> Our Java application manages its own schema. Some of this is from\n>> Hibernate, but some is hand-crafted JDBC.\n>>\n>> By way of an upgrade path, we have a few places where we have added\n>> additional indexes to optimize performance, and so at startup time the\n>> application issues \"CREATE INDEX ...\" statements for these, expecting to\n>> catch the harmless exception \"ERROR: relation \"date_index\" already exists\",\n>> as a simpler alternative to using the meta-data to check for it first.\n>>\n>> In general, this seems to work fine, but we have one installation where we\n>> observed one of these CREATE statements hanging up in the database, as if\n>> waiting for a lock, thus stalling the app startup\n>\n> You can tell if it is really waiting by looking at 'select * from pg_locks',\n> and check the 'granted' column.\n\nCREATE INDEX (without CONCURRENTLY) tries to acquire a share-lock on\nthe table, which will conflict with any concurrent INSERT, UPDATE,\nDELETE, or VACUUM. It probably tries to acquire the lock before\nnoticing that the index is a duplicate. CREATE INDEX CONCURRENTLY\nmight be an option, or you could write and call a PL/pgsql function\n(or, in 9.0, use a DO block) to test for the existence of the index\nbefore trying create it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Thu, 7 Oct 2010 16:40:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd behaviour with redundant CREATE statement"
},
{
"msg_contents": "Thanks folks, that makes sense. We're now being more precise with our DDL\n:-)\n\nCheers\nDave\n\nOn Thu, Oct 7, 2010 at 3:40 PM, Robert Haas <[email protected]> wrote:\n\n> On Mon, Sep 27, 2010 at 3:27 PM, Gurjeet Singh <[email protected]>\n> wrote:\n> > On Mon, Sep 27, 2010 at 8:50 PM, Dave Crooke <[email protected]> wrote:\n> >>\n> >> Our Java application manages its own schema. Some of this is from\n> >> Hibernate, but some is hand-crafted JDBC.\n> >>\n> >> By way of an upgrade path, we have a few places where we have added\n> >> additional indexes to optimize performance, and so at startup time the\n> >> application issues \"CREATE INDEX ...\" statements for these, expecting to\n> >> catch the harmless exception \"ERROR: relation \"date_index\" already\n> exists\",\n> >> as a simpler alternative to using the meta-data to check for it first.\n> >>\n> >> In general, this seems to work fine, but we have one installation where\n> we\n> >> observed one of these CREATE statements hanging up in the database, as\n> if\n> >> waiting for a lock, thus stalling the app startup\n> >\n> > You can tell if it is really waiting by looking at 'select * from\n> pg_locks',\n> > and check the 'granted' column.\n>\n> CREATE INDEX (without CONCURRENTLY) tries to acquire a share-lock on\n> the table, which will conflict with any concurrent INSERT, UPDATE,\n> DELETE, or VACUUM. It probably tries to acquire the lock before\n> noticing that the index is a duplicate. CREATE INDEX CONCURRENTLY\n> might be an option, or you could write and call a PL/pgsql function\n> (or, in 9.0, use a DO block) to test for the existence of the index\n> before trying create it.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise Postgres Company\n>\n\nThanks folks, that makes sense. We're now being more precise with our DDL :-)CheersDaveOn Thu, Oct 7, 2010 at 3:40 PM, Robert Haas <[email protected]> wrote:\nOn Mon, Sep 27, 2010 at 3:27 PM, Gurjeet Singh <[email protected]> wrote:\n\n> On Mon, Sep 27, 2010 at 8:50 PM, Dave Crooke <[email protected]> wrote:\n>>\n>> Our Java application manages its own schema. Some of this is from\n>> Hibernate, but some is hand-crafted JDBC.\n>>\n>> By way of an upgrade path, we have a few places where we have added\n>> additional indexes to optimize performance, and so at startup time the\n>> application issues \"CREATE INDEX ...\" statements for these, expecting to\n>> catch the harmless exception \"ERROR: relation \"date_index\" already exists\",\n>> as a simpler alternative to using the meta-data to check for it first.\n>>\n>> In general, this seems to work fine, but we have one installation where we\n>> observed one of these CREATE statements hanging up in the database, as if\n>> waiting for a lock, thus stalling the app startup\n>\n> You can tell if it is really waiting by looking at 'select * from pg_locks',\n> and check the 'granted' column.\n\nCREATE INDEX (without CONCURRENTLY) tries to acquire a share-lock on\nthe table, which will conflict with any concurrent INSERT, UPDATE,\nDELETE, or VACUUM. It probably tries to acquire the lock before\nnoticing that the index is a duplicate. CREATE INDEX CONCURRENTLY\nmight be an option, or you could write and call a PL/pgsql function\n(or, in 9.0, use a DO block) to test for the existence of the index\nbefore trying create it.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company",
"msg_date": "Thu, 7 Oct 2010 16:51:42 -0500",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd behaviour with redundant CREATE statement"
}
] |
[
{
"msg_contents": "Hi,\n\nCan any one of you suggest how the archived Xlogs can be cleaned in\npostgres-9.\n\nWe want to use streaming replication and have set the \"wal_level\" to\n\"hot_standby\" and \"archive_mode\" to \"on\".\n\nRegards,\nNimesh.\n\nHi,Can any one of you suggest how the archived Xlogs can be cleaned in postgres-9. We want to use streaming replication and have set the \"wal_level\" to \"hot_standby\" and \"archive_mode\" to \"on\".\nRegards,Nimesh.",
"msg_date": "Tue, 28 Sep 2010 15:31:13 +0530",
"msg_from": "Nimesh Satam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Clean up of archived Xlogs in postgres-9."
},
{
"msg_contents": "On Tue, September 28, 2010 12:01, Nimesh Satam wrote:\n> Hi,\n>\n> Can any one of you suggest how the archived Xlogs can be cleaned in\n> postgres-9.\n>\n> We want to use streaming replication and have set the \"wal_level\" to\n> \"hot_standby\" and \"archive_mode\" to \"on\".\n>\n\nSee contrib/pg_archivecleanup:\n\nhttp://www.postgresql.org/docs/9.0/static/pgarchivecleanup.html\n\nhth,\n\nErik Rijkers\n\n",
"msg_date": "Tue, 28 Sep 2010 14:29:47 +0200",
"msg_from": "\"Erik Rijkers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clean up of archived Xlogs in postgres-9."
},
{
"msg_contents": "Hi all,\n\n\nWe are trying to use the streaming replication of postgres9 and hence we\narchive the WAL logs in the archive location. But we are not able find the\nright documentation to clean the archive location.\n\nAs suggested by Erik Rijkers we tried using the pg_archivecleanup but we are\nnot sure, from which WAL location should the cleanup take place.\n\nCan anybody suggest how we find which wal logs are safe to be dropped from\nthe archive location while using the pg_archivecleanup or any other options?\nAny kind of documentation would be of great help.\n\n\n\nRegards,\nNimesh.\n\n\n---------- Forwarded message ----------\nFrom: Erik Rijkers <[email protected]>\nDate: Tue, Sep 28, 2010 at 5:59 PM\nSubject: Re: [PERFORM] Clean up of archived Xlogs in postgres-9.\nTo: Nimesh Satam <[email protected]>\nCc: [email protected]\n\n\nOn Tue, September 28, 2010 12:01, Nimesh Satam wrote:\n> Hi,\n>\n> Can any one of you suggest how the archived Xlogs can be cleaned in\n> postgres-9.\n>\n> We want to use streaming replication and have set the \"wal_level\" to\n> \"hot_standby\" and \"archive_mode\" to \"on\".\n>\n\nSee contrib/pg_archivecleanup:\n\nhttp://www.postgresql.org/docs/9.0/static/pgarchivecleanup.html\n\nhth,\n\nErik Rijkers\n\nHi all,We are trying to use the streaming replication of postgres9 and hence we archive the WAL logs in the archive location. But we are not able find the right documentation to clean the archive location.\nAs suggested by Erik Rijkers we tried using the pg_archivecleanup but we are not sure, from which WAL location should the cleanup take place.Can anybody suggest how we find which wal logs are safe to be dropped from the archive location while using the pg_archivecleanup or any other options? Any kind of documentation would be of great help.\nRegards,Nimesh.---------- Forwarded message ----------From: Erik Rijkers <[email protected]>\nDate: Tue, Sep 28, 2010 at 5:59 PM\nSubject: Re: [PERFORM] Clean up of archived Xlogs in postgres-9.To: Nimesh Satam <[email protected]>Cc: [email protected]\nOn Tue, September 28, 2010 12:01, Nimesh Satam wrote:\n> Hi,\n>\n> Can any one of you suggest how the archived Xlogs can be cleaned in\n> postgres-9.\n>\n> We want to use streaming replication and have set the \"wal_level\" to\n> \"hot_standby\" and \"archive_mode\" to \"on\".\n>\n\nSee contrib/pg_archivecleanup:\n\nhttp://www.postgresql.org/docs/9.0/static/pgarchivecleanup.html\n\nhth,\n\nErik Rijkers",
"msg_date": "Fri, 22 Oct 2010 16:44:15 +0530",
"msg_from": "Nimesh Satam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: [PERFORM] Clean up of archived Xlogs in postgres-9."
}
] |
[
{
"msg_contents": "I'm doing an OS upgrade and have been sitting on 8.4.3 for sometime. I\nwas wondering if it's better for the short term just to bring things\nto 8.4.4 and let 9.0 bake a bit longer, or are people with large data\nsets running 9.0 in production already?\n\nJust looking for 9.0 feedback (understand it's still quite new).\n\nThanks\nTory\n",
"msg_date": "Tue, 28 Sep 2010 12:22:17 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Running 9 in production? Sticking with 8.4.4 for a while?"
},
{
"msg_contents": "Tory,\n\nWe will continue to test under 9.0 but will keep production at 8.4.4 for a\nwhile longer as we want to see what kinds of issues show up over the next\nfew weeks with 9.0. 9.0 has some features we would like to use but it isn't\nworth the risk of production. I think that the PostGres team has one of the\nbest develop and test cycle management systems but there is always things\nthat are going to pop up after a release. It just depends on the level of\npain your willing to suffer. It seems that the developers are able to track\ndown and kill bugs in a timely manner so I would expect to see a 9.0.1\nversion in the near future. At that point we'll start doing a lot more tire\nkicking.\n\nBest Regards\n\nMike Gould\n\n\"Tory M Blue\" <[email protected]> wrote:\n> I'm doing an OS upgrade and have been sitting on 8.4.3 for sometime. I\n> was wondering if it's better for the short term just to bring things\n> to 8.4.4 and let 9.0 bake a bit longer, or are people with large data\n> sets running 9.0 in production already?\n> \n> Just looking for 9.0 feedback (understand it's still quite new).\n> \n> Thanks\n> Tory\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n--\nMichael Gould, Managing Partner\nIntermodal Software Solutions, LLC\n904.226.0978\n904.592.5250 fax\n\n\n",
"msg_date": "Tue, 28 Sep 2010 14:50:04 -0500",
"msg_from": "Michael Gould <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running 9 in production? Sticking with 8.4.4 for a while?"
},
{
"msg_contents": "Tory M Blue wrote:\n> I'm doing an OS upgrade and have been sitting on 8.4.3 for sometime. I\n> was wondering if it's better for the short term just to bring things\n> to 8.4.4 and let 9.0 bake a bit longer, or are people with large data\n> sets running 9.0 in production already?\n> \n\nI'm aware of two people with large data sets who have been running 9.0 \nin production since it was in beta. Like most code, what you have to \nconsider is how much the code path you expect to use each day has been \nmodified during the previous release. If you're using 9.0 as \"a better \n8.4\", the odds of your running into a problem are on the low side of the \nrisk curve. But those using the features that are both new and were \nworked on until the very end of the development cycle, like the new \nreplication features, they are much more likely to run into a bug.\n\nThere are two main things I've been advising clients to be very careful \nabout when considering an early 9.0 upgrade mainly as a straight \nreplacement for 8.4 (making it possible to start testing the replication \nstuff too, but not relying on that immediately). The changes to \nacceptable PL/pgSQL syntax can easily break some existing procedures; \neasy to fix if you find them in testing, but you if you have a lot of \nfunctions in that language you should really do a code audit along with \ntesting.\n\nThe other is that query plans are much more likely to use Materialize \nnodes now in ways they never did before. That planning change will be \nlong-term positive, but I expect to see some short-term performance \nregressions in plans that used to work better; not because of the code \nitself, just because of Murphy's Law. This is similar to how the hash \naggregation changes made to 8.4 could produce worse plans under \nunexpected circumstances than what people saw in 8.3 and earlier, which \nis something else you're also exposed to here if your existing code is \nrunning on 8.4. There are less of those situations in the recent 8.4 \nreleases than the early ones, but the possibility of aggressive hashing \nbeing worse than the older approach still happens. I've seen exactly \none of them on a production server running 8.4, and the problem had \nalready been reported to the relevant list before I got there.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Tue, 28 Sep 2010 16:45:33 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running 9 in production? Sticking with 8.4.4 for a\n while?"
},
{
"msg_contents": "On 9/28/2010 4:45 PM, Greg Smith wrote:\n> Tory M Blue wrote:\n>> I'm doing an OS upgrade and have been sitting on 8.4.3 for sometime. I\n>> was wondering if it's better for the short term just to bring things\n>> to 8.4.4 and let 9.0 bake a bit longer, or are people with large data\n>> sets running 9.0 in production already?\n>\n> I'm aware of two people with large data sets who have been running 9.0\n> in production since it was in beta. Like most code, what you have to\n> consider is how much the code path you expect to use each day has been\n> modified during the previous release. If you're using 9.0 as \"a better\n> 8.4\", the odds of your running into a problem are on the low side of the\n> risk curve. But those using the features that are both new and were\n> worked on until the very end of the development cycle, like the new\n> replication features, they are much more likely to run into a bug.\n\nA conservative approach is never to use version x.0 of *anything*. The \nPG developers are very talented (and also very helpful on these mailing \nlists - thanks for that), but they are human. For work I'm paid to do \n(as opposed to my own or charity work), I like to stay at least one \npoint release behind the bleeding edge.\n\n-- \nGuy Rouillier\n",
"msg_date": "Tue, 28 Sep 2010 18:14:36 -0400",
"msg_from": "Guy Rouillier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running 9 in production? Sticking with 8.4.4 for a\n while?"
},
{
"msg_contents": "On Tue, Sep 28, 2010 at 3:14 PM, Guy Rouillier <[email protected]> wrote:\n> On 9/28/2010 4:45 PM, Greg Smith wrote:\n>>\n>> Tory M Blue wrote:\n>>>\n>>> I'm doing an OS upgrade and have been sitting on 8.4.3 for sometime. I\n>>> was wondering if it's better for the short term just to bring things\n>>> to 8.4.4 and let 9.0 bake a bit longer, or are people with large data\n>>> sets running 9.0 in production already?\n>>\n>> I'm aware of two people with large data sets who have been running 9.0\n>> in production since it was in beta. Like most code, what you have to\n>> consider is how much the code path you expect to use each day has been\n>> modified during the previous release. If you're using 9.0 as \"a better\n>> 8.4\", the odds of your running into a problem are on the low side of the\n>> risk curve. But those using the features that are both new and were\n>> worked on until the very end of the development cycle, like the new\n>> replication features, they are much more likely to run into a bug.\n>\n> A conservative approach is never to use version x.0 of *anything*. The PG\n> developers are very talented (and also very helpful on these mailing lists -\n> thanks for that), but they are human. For work I'm paid to do (as opposed\n> to my own or charity work), I like to stay at least one point release behind\n> the bleeding edge.\n>\n> --\n> Guy RouillierG\n\nThanks guys, truly appreciate the length to which you replied. I like\nhearing the theory and reasoning behind ones decision.\n\nIt sounds like my general theory of waiting is shared amongst the\ngroup. I can't really absorb large issues in production, so I'll throw\nthe .4 release of 8.4 out and be happy for a while.\n\nI'm using slony, so it will take a ton of testing and time to look\nover and test the 9.0 replication piece, if I even consider making\nthat jump.\n\nThanks for the input, it's appreciated\n\nTory\n",
"msg_date": "Tue, 28 Sep 2010 15:52:51 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Running 9 in production? Sticking with 8.4.4 for a while?"
},
{
"msg_contents": "Tory M Blue writes:\n\n> Just looking for 9.0 feedback (understand it's still quite new).\n\nAlthough not 9.0 feedback per se.. but something to consider...\n\nI am moving a number of machines from 8.1 to 8.4.\nWe use a rolling procedure with a spare:\nmachine A running 8.1\nmachine B is prepped with 8.4\nchange names and IPs\nkeep old A for a bit just in case...\n\nThe last replacement we noticed a big regression in performance. Upon \ninspection the machine we replaced was 3.7.. Both 8.X and 8.X have serious \nproblems with a particualr query (dealing with it on performance now).\n\nSo I guess what I am trying to say... you need to test with your own data \nset. That lone 8.3.7 machine was literally a fluke. We were going to move \nall 8.1 to 8.4.. but at one point did one in 8.3. If we had not done that we \nwould not have noticed that both 8.1 and 8.4 have problems with some of our \nqueries.\n\nI am even leaning now to have a lab with 8.3 for testing queries. \n\n",
"msg_date": "Wed, 27 Oct 2010 11:19:43 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Running 9 in production? Sticking with 8.4.4 for a while?"
}
] |
[
{
"msg_contents": "Hi,\n\n After reading lots of documentation, I still don't understand fully how\nPG knows if some needed data is in memory or in second storage.\n\n While choosing the best query plan, the optimizer must take this into\naccount. Does PG consider this? If so, how does it know?\n\n I presume it checks its shared buffer. But if the data is not in the\nshared buffer, it could be on OS cache, and the plan would not be optimized.\n\n Can PG check the OS cache? If not, considering a dedicated DB server, is\nit advised to raise the shared buffer size up to a maximum that does not\ninfluence the rest of the system? This way, the shared buffer check would\nhave a higher probability of returning a correct answer.\n\n When setting seq_page_cost and random_page_cost, do I have to consider\nthe probability that data will be in memory? Or does seq_page_cost mean\n\"sequential access on disk\" and random_page_cost mean \"random access on\ndisk\"?\n\n I appreciate if someone could clear this out.\n\n Thanks!\n\nFabrício dos Anjos Silva\nLinkCom Soluções em T.I.\n\n Hi, After reading lots of documentation, I still don't understand fully how PG knows if some needed data is in memory or in second storage. While choosing the best query plan, the optimizer must take this into account. Does PG consider this? If so, how does it know?\n I presume it checks its shared buffer. But if the data is not in the shared buffer, it could be on OS cache, and the plan would not be optimized. Can PG check the OS cache? If not, considering a dedicated DB server, is it advised to raise the shared buffer size up to a maximum that does not influence the rest of the system? This way, the shared buffer check would have a higher probability of returning a correct answer.\n When setting seq_page_cost and random_page_cost, do I have to consider the probability that data will be in memory? Or does seq_page_cost mean \"sequential access on disk\" and random_page_cost mean \"random access on disk\"?\n I appreciate if someone could clear this out. Thanks!Fabrício dos Anjos SilvaLinkCom Soluções em T.I.",
"msg_date": "Wed, 29 Sep 2010 13:31:20 -0300",
"msg_from": "=?ISO-8859-1?Q?Fabr=EDcio_dos_Anjos_Silva?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "How does PG know if data is in memory?"
},
{
"msg_contents": "Fabrᅵcio dos Anjos Silva<[email protected]> wrote:\n \n> After reading lots of documentation, I still don't understand\n> fully how PG knows if some needed data is in memory or in second\n> storage.\n \n> Does PG consider this?\n \nNo.\n \n> When setting seq_page_cost and random_page_cost, do I have to\n> consider the probability that data will be in memory?\n \nYes.\n \n-Kevin\n",
"msg_date": "Wed, 29 Sep 2010 11:36:45 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "2010/9/29 Fabrício dos Anjos Silva <[email protected]>\n\n>\n>\n> When setting seq_page_cost and random_page_cost, do I have to consider\n> the probability that data will be in memory? Or does seq_page_cost mean\n> \"sequential access on disk\" and random_page_cost mean \"random access on\n> disk\"?\n>\n>\nThe reason seq_page_cost and random_page_cost exist as parameters is so that\nyou can inform the optimizer what the relative costs of those actions are,\nwhich is directly related to the expected size of the filesystem cache,\nratio of total db size to available cache memory, and the performance of\nyour disk i/o subsystems (and any other disk-related work the host may be\ndoing). effective_cache_size allows you to tell postgres how big you\nbelieve all available cache memory is - shared_buffers and OS cache.\n\nAs to your question about increasing shared_buffers to be some significant\nproportion of available RAM - apparently, that is not a good idea. I've\nseen advice that said you shouldn't go above 8GB for shared_buffers and I've\nalso seen 12GB suggested as an upper limit, too. On my host with 48GB of\nRAM, I didn't see much difference between 8GB and 12GB on a fairly wide\nvariety of tests, so mine is set at 8GB with an efective_cache_size of 36GB.\n\n\n> I appreciate if someone could clear this out.\n>\n> Thanks!\n>\n> Fabrício dos Anjos Silva\n> LinkCom Soluções em T.I.\n>\n>\n\n2010/9/29 Fabrício dos Anjos Silva <[email protected]>\n\n When setting seq_page_cost and random_page_cost, do I have to consider the probability that data will be in memory? Or does seq_page_cost mean \"sequential access on disk\" and random_page_cost mean \"random access on disk\"?\nThe reason seq_page_cost and random_page_cost exist as parameters is so that you can inform the optimizer what the relative costs of those actions are, which is directly related to the expected size of the filesystem cache, ratio of total db size to available cache memory, and the performance of your disk i/o subsystems (and any other disk-related work the host may be doing). effective_cache_size allows you to tell postgres how big you believe all available cache memory is - shared_buffers and OS cache.\nAs to your question about increasing shared_buffers to be some significant proportion of available RAM - apparently, that is not a good idea. I've seen advice that said you shouldn't go above 8GB for shared_buffers and I've also seen 12GB suggested as an upper limit, too. On my host with 48GB of RAM, I didn't see much difference between 8GB and 12GB on a fairly wide variety of tests, so mine is set at 8GB with an efective_cache_size of 36GB.\n I appreciate if someone could clear this out. Thanks!\nFabrício dos Anjos SilvaLinkCom Soluções em T.I.",
"msg_date": "Wed, 29 Sep 2010 10:08:39 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Thank you all for the replies.\n\n If PG does not know whether needed data is in memory, how does it\nestimate cost? There is a huge difference between access time in memory and\nin secondary storage. Not taking this into account results in almost\n\"useless\" estimates. I am not saying that PG does a pour job, but I've been\nusing it for 4 years and from time to time I notice very pour estimates.\nAfter some testing 2 years ago, the only configuration I could manage to use\nwas to tell PG to avoid Seq Scan and Index Scans. I know that in many\nsituations these techniques are the best to choose, but when they are chosen\nwhere they are not suitable, I get very bad plans.\n Recently, I faced poor performance again, but this time because we\nstarted to work with larger tables (10M rows). This encourage me to study PG\ntuning again, trying to understand how the planner works and trying to get\nthe best of it. Unfortunately, it does not seem to be an easy task.\n If someone could point good books about PG tuning, I would appreciate\nthat. I found some yet to be released books about PG 9. Any comments about\nthem?\n\n Thank you all.\n\nFabrício dos Anjos Silva\nLinkCom Soluções em T.I.\n\n\n\nEm 29 de setembro de 2010 14:08, Samuel Gendler\n<[email protected]>escreveu:\n\n>\n>\n> 2010/9/29 Fabrício dos Anjos Silva <[email protected]>\n>\n>\n>>\n>> When setting seq_page_cost and random_page_cost, do I have to consider\n>> the probability that data will be in memory? Or does seq_page_cost mean\n>> \"sequential access on disk\" and random_page_cost mean \"random access on\n>> disk\"?\n>>\n>>\n> The reason seq_page_cost and random_page_cost exist as parameters is so\n> that you can inform the optimizer what the relative costs of those actions\n> are, which is directly related to the expected size of the filesystem cache,\n> ratio of total db size to available cache memory, and the performance of\n> your disk i/o subsystems (and any other disk-related work the host may be\n> doing). effective_cache_size allows you to tell postgres how big you\n> believe all available cache memory is - shared_buffers and OS cache.\n>\n> As to your question about increasing shared_buffers to be some significant\n> proportion of available RAM - apparently, that is not a good idea. I've\n> seen advice that said you shouldn't go above 8GB for shared_buffers and I've\n> also seen 12GB suggested as an upper limit, too. On my host with 48GB of\n> RAM, I didn't see much difference between 8GB and 12GB on a fairly wide\n> variety of tests, so mine is set at 8GB with an efective_cache_size of 36GB.\n>\n>\n>> I appreciate if someone could clear this out.\n>>\n>> Thanks!\n>>\n>> Fabrício dos Anjos Silva\n>> LinkCom Soluções em T.I.\n>>\n>>\n>\n\n Thank you all for the replies. If PG does not know whether needed data is in memory, how does it estimate cost? There is a huge difference between access time in memory and in secondary storage. Not taking this into account results in almost \"useless\" estimates. I am not saying that PG does a pour job, but I've been using it for 4 years and from time to time I notice very pour estimates. After some testing 2 years ago, the only configuration I could manage to use was to tell PG to avoid Seq Scan and Index Scans. 
I know that in many situations these techniques are the best to choose, but when they are chosen where they are not suitable, I get very bad plans.\n\n Recently, I faced poor performance again, but this time because we started to work with larger tables (10M rows). This encourage me to study PG tuning again, trying to understand how the planner works and trying to get the best of it. Unfortunately, it does not seem to be an easy task.\n\n If someone could point good books about PG tuning, I would appreciate that. I found some yet to be released books about PG 9. Any comments about them? Thank you all.Fabrício dos Anjos Silva\n\nLinkCom Soluções em T.I.\nEm 29 de setembro de 2010 14:08, Samuel Gendler <[email protected]> escreveu:\n2010/9/29 Fabrício dos Anjos Silva <[email protected]>\n\n When setting seq_page_cost and random_page_cost, do I have to consider the probability that data will be in memory? Or does seq_page_cost mean \"sequential access on disk\" and random_page_cost mean \"random access on disk\"?\nThe reason seq_page_cost and random_page_cost exist as parameters is so that you can inform the optimizer what the relative costs of those actions are, which is directly related to the expected size of the filesystem cache, ratio of total db size to available cache memory, and the performance of your disk i/o subsystems (and any other disk-related work the host may be doing). effective_cache_size allows you to tell postgres how big you believe all available cache memory is - shared_buffers and OS cache.\nAs to your question about increasing shared_buffers to be some significant proportion of available RAM - apparently, that is not a good idea. I've seen advice that said you shouldn't go above 8GB for shared_buffers and I've also seen 12GB suggested as an upper limit, too. On my host with 48GB of RAM, I didn't see much difference between 8GB and 12GB on a fairly wide variety of tests, so mine is set at 8GB with an efective_cache_size of 36GB.\n\n I appreciate if someone could clear this out. Thanks!\n\nFabrício dos Anjos SilvaLinkCom Soluções em T.I.",
"msg_date": "Fri, 1 Oct 2010 08:12:36 -0300",
"msg_from": "=?ISO-8859-1?Q?Fabr=EDcio_dos_Anjos_Silva?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "On 1/10/2010 7:12 PM, Fabr�cio dos Anjos Silva wrote:\n>\n> Thank you all for the replies.\n>\n> If PG does not know whether needed data is in memory, how does it\n> estimate cost? There is a huge difference between access time in memory\n> and in secondary storage. Not taking this into account results in almost\n> \"useless\" estimates.\n\nIt's generally configured with the conservative assumption that data \nwill have to come from disk.\n\nNote that the query planner's job isn't to figure out how long the query \nwill take. It's to compare various possible query plans and decide which \nwill be fastest. There are certainly cases where knowing what's cached \nwould help with this - for example: if an index is cached but the table \ndata isn't, it's more likely to be worth using the index to reduce disk \nreads. But I don't know just how much difference it really makes.\n\nBecause the query often only wants a small subset of the data, and whole \nrelations are rarely fully cached, it's not enough to know that \"some of \nrelation X is cached\", it has to know if the cached parts are the parts \nthat'll be required, or at least an approximation of that. It sounds \nhorrendously complicated to keep track of to me, and in the end it won't \nmake query execution any faster, it'll just potentially help the planner \npick a better plan. I wonder if that'd be worth the extra CPU time spent \nmanaging the cache and cache content stats, and using those cache stats \nwhen planning? It'd be an interesting experiment, but the outcome is \nhardly obvious.\n\nAs you can see, I don't really agree that the planner's estimates are \nuseless just because it's not very aware of the cache's current \ncontents. It has a pretty good idea of the system's memory and how much \nof that can be used for cache, and knows how big various indexes and \nrelations are. That seems to work pretty well.\n\nIf some kind of cache awareness was to be added, I'd be interested in \nseeing a \"hotness\" measure that tracked how heavily a given \nrelation/index has been accessed and how much has been read from it \nrecently. A sort of age-scaled blocks-per-second measure that includes \nboth cached and uncached (disk) reads. This would let the planner know \nhow likely parts of a given index/relation are to be cached in memory \nwithout imposing the cost of tracking the cache in detail. I'm still not \nsure it'd be all that useful, though...\n\n > I am not saying that PG does a pour job, but I've\n> been using it for 4 years and from time to time I notice very pour\n> estimates.\n\nMost of the issues reported here, at least, are statistics issues, \nrather than lack of knowledge about cache status. The planner thinks \nit'll find (say) 2 tuples maching a filter, and instead finds 100,000, \nso it chooses a much less efficient join type. That sort of thing is \nreally independent of the cache state.\n\n> Recently, I faced poor performance again, but this time because we\n> started to work with larger tables (10M rows). This encourage me to\n> study PG tuning again, trying to understand how the planner works and\n> trying to get the best of it. Unfortunately, it does not seem to be an\n> easy task.\n\nNo argument there! Like any database there's a fair bit of black magic \ninvolved, and a whole lot of benchmarking. 
The key thing is to have \nappropriate statistics (usually high), get a reasonable random_page_cost \nand seq_page_cost, to set your effective cache size appropriately, and \nto set reasonable work_mem.\n\n\"Reasonable\" is hard to work out for work_mem, because Pg's work_mem \nlimit is per-sort (etc) not per-query or per-backend. I understand that \nmaking it per-query is way, way harder than it sounds at face value, \nthough, so we must make do.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Fri, 01 Oct 2010 20:24:21 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "> It sounds horrendously complicated to keep track of to me, and in the \n> end it won't make query execution any faster, it'll just potentially \n> help the planner pick a better plan. I wonder if that'd be worth the \n> extra CPU time spent managing the cache and cache content stats, and \n> using those cache stats when planning? It'd be an interesting \n> experiment, but the outcome is hardly obvious.\n\nWell, suppose you pick an index scan, the only way to know which index \n(and heap) pages you'll need is to actually do the index scan... which \nisn't really something you'd do when planning. So you scan,\n",
"msg_date": "Fri, 01 Oct 2010 15:45:34 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Craig Ringer <[email protected]> wrote:\n \n> Because the query often only wants a small subset of the data, and\n> whole relations are rarely fully cached, it's not enough to know\n> that \"some of relation X is cached\", it has to know if the cached\n> parts are the parts that'll be required, or at least an\n> approximation of that. It sounds horrendously complicated to keep\n> track of to me, and in the end it won't make query execution any\n> faster, it'll just potentially help the planner pick a better\n> plan. I wonder if that'd be worth the extra CPU time spent\n> managing the cache and cache content stats, and using those cache\n> stats when planning? It'd be an interesting experiment, but the\n> outcome is hardly obvious.\n \nI agree with that, but I think there's an even more insidious issue\nhere. Biasing plans heavily toward using what is already in cache\ncould have a destabilizing effect on performance. Let's say that\nsome query or maintenance skews the cache toward some plan which is\nmuch slower when cached than another plan would be if cached. Let's\nalso postulate that this query runs very frequently. It will always\nsettle for what's fastest *this* time, not what would make for\nfastest performance if consistently used. If it never chooses the\nplan which would run better if cached, the data used for that plan\nmay never make it into cache, and you will limp along with the\ninferior plan forever.\n \nIf you set the overall level of caching you expect, the optimizer\nwill tend to wind up with data cached to support the optimal plans\nfor that level of caching for the frequently run queries.\n \n-Kevin\n",
"msg_date": "Fri, 01 Oct 2010 08:46:07 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Craig,\n\n I agree with you. Not completely, but I do.\n\n I'm just stuck in a situation where I can't figure out what values to use\nfor the parameters. I can't even think of a way on how to test and discover\nthat.\n I followed Josh Berkus' GUC spreadsheet and some tutorials on PG wiki,\nbut how do I test if my configuration is good or bad? I see in PG log that\nsome queries have bad plans, but should I do in order to tell PG to make\nbetter decisions? I tried different values with no success.\n\n I understand that parameters have no \"work everywhere\" values. Each\ndatabase has its characteristics and each server has its HW specifications.\n\n Is there any automated test tool? A can compile a list of real-world\nqueries, and provide an exact copy of my db server just for testing. But how\ndo I do it? Write a bunch of scripts? Is there any serious tool that try\ndifferent parameters, run a load test, process results and generate reports?\n\n Again, thanks all of you for the replies.\n\n Cheers,\n\nFabrício dos Anjos Silva\nLinkCom Soluções em T.I.\n\n\n\n2010/10/1 Kevin Grittner <[email protected]>\n\n> Craig Ringer <[email protected]> wrote:\n>\n> > Because the query often only wants a small subset of the data, and\n> > whole relations are rarely fully cached, it's not enough to know\n> > that \"some of relation X is cached\", it has to know if the cached\n> > parts are the parts that'll be required, or at least an\n> > approximation of that. It sounds horrendously complicated to keep\n> > track of to me, and in the end it won't make query execution any\n> > faster, it'll just potentially help the planner pick a better\n> > plan. I wonder if that'd be worth the extra CPU time spent\n> > managing the cache and cache content stats, and using those cache\n> > stats when planning? It'd be an interesting experiment, but the\n> > outcome is hardly obvious.\n>\n> I agree with that, but I think there's an even more insidious issue\n> here. Biasing plans heavily toward using what is already in cache\n> could have a destabilizing effect on performance. Let's say that\n> some query or maintenance skews the cache toward some plan which is\n> much slower when cached than another plan would be if cached. Let's\n> also postulate that this query runs very frequently. It will always\n> settle for what's fastest *this* time, not what would make for\n> fastest performance if consistently used. If it never chooses the\n> plan which would run better if cached, the data used for that plan\n> may never make it into cache, and you will limp along with the\n> inferior plan forever.\n>\n> If you set the overall level of caching you expect, the optimizer\n> will tend to wind up with data cached to support the optimal plans\n> for that level of caching for the frequently run queries.\n>\n> -Kevin\n>\n\n Craig, I agree with you. Not completely, but I do. I'm just stuck in a situation where I can't figure out what values to use for the parameters. I can't even think of a way on how to test and discover that.\n\n I followed Josh Berkus' GUC spreadsheet and some tutorials on PG wiki, but how do I test if my configuration is good or bad? I see in PG log that some queries have bad plans, but should I do in order to tell PG to make better decisions? I tried different values with no success.\n I understand that parameters have no \"work everywhere\" values. Each database has its characteristics and each server has its HW specifications. Is there any automated test tool? 
A can compile a list of real-world queries, and provide an exact copy of my db server just for testing. But how do I do it? Write a bunch of scripts? Is there any serious tool that try different parameters, run a load test, process results and generate reports?\n Again, thanks all of you for the replies. Cheers,Fabrício dos Anjos SilvaLinkCom Soluções em T.I.\n2010/10/1 Kevin Grittner <[email protected]>\nCraig Ringer <[email protected]> wrote:\n\n> Because the query often only wants a small subset of the data, and\n> whole relations are rarely fully cached, it's not enough to know\n> that \"some of relation X is cached\", it has to know if the cached\n> parts are the parts that'll be required, or at least an\n> approximation of that. It sounds horrendously complicated to keep\n> track of to me, and in the end it won't make query execution any\n> faster, it'll just potentially help the planner pick a better\n> plan. I wonder if that'd be worth the extra CPU time spent\n> managing the cache and cache content stats, and using those cache\n> stats when planning? It'd be an interesting experiment, but the\n> outcome is hardly obvious.\n\nI agree with that, but I think there's an even more insidious issue\nhere. Biasing plans heavily toward using what is already in cache\ncould have a destabilizing effect on performance. Let's say that\nsome query or maintenance skews the cache toward some plan which is\nmuch slower when cached than another plan would be if cached. Let's\nalso postulate that this query runs very frequently. It will always\nsettle for what's fastest *this* time, not what would make for\nfastest performance if consistently used. If it never chooses the\nplan which would run better if cached, the data used for that plan\nmay never make it into cache, and you will limp along with the\ninferior plan forever.\n\nIf you set the overall level of caching you expect, the optimizer\nwill tend to wind up with data cached to support the optimal plans\nfor that level of caching for the frequently run queries.\n\n-Kevin",
"msg_date": "Fri, 1 Oct 2010 11:00:44 -0300",
"msg_from": "=?ISO-8859-1?Q?Fabr=EDcio_dos_Anjos_Silva?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> I agree with that, but I think there's an even more insidious issue\n> here. Biasing plans heavily toward using what is already in cache\n> could have a destabilizing effect on performance.\n\nNot to mention the destabilizing effect on the plans themselves.\nBehavior like that would make EXPLAIN nearly useless, because the plan\nyou get would vary from moment to moment even when \"nothing is\nchanging\". It's fairly clear that people don't actually want that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Oct 2010 10:40:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory? "
},
{
"msg_contents": "2010/10/1 Fabrício dos Anjos Silva <[email protected]>\n\n> Craig,\n>\n> I agree with you. Not completely, but I do.\n>\n> I'm just stuck in a situation where I can't figure out what values to\n> use for the parameters. I can't even think of a way on how to test and\n> discover that.\n> I followed Josh Berkus' GUC spreadsheet and some tutorials on PG wiki,\n> but how do I test if my configuration is good or bad? I see in PG log that\n> some queries have bad plans, but should I do in order to tell PG to make\n> better decisions? I tried different values with no success.\n>\n> You can set different values for most configuration params on individual db\n> connections. You can test different values for individual slow-running\n> queries. Rather than disabling whole features in the entire database -\n> which may well make lots of other queries run less quickly - you can, at the\n> very least, just disable those features before running the queries that are\n> known to be slow and for which you could not find global values which worked\n> well. Disable sequence plans just before running query x, or boost work_mem\n> to a very high value just for query y. It is also possible that you've\n> simply outstripped your hardware's capability. We had a database with a\n> number of tables containing tens of millions of rows and queries which\n> frequently required aggregating over whole tables. Moving from 8Gb of RAM\n> to 48GB of RAM (so that a large chunk of the db fits in memory) and from 6\n> spindles to 12, and then just modifying the global config to suit the new\n> hardware gave us a huge performance boost that we could never have gotten on\n> the old hardware, no matter how much tuning of individual queries we did. I\n> was actually able to drop all of the custom config tweaks that we had on\n> individual queries, though I'm sure I'll eventually wind up adding some back\n> - queries that aggregate over large tables really benefit from a lot of\n> work_mem - more than I want to configure globally.\n>\n\n2010/10/1 Fabrício dos Anjos Silva <[email protected]>\n Craig, I agree with you. Not completely, but I do. I'm just stuck in a situation where I can't figure out what values to use for the parameters. I can't even think of a way on how to test and discover that.\n\n\n I followed Josh Berkus' GUC spreadsheet and some tutorials on PG wiki, but how do I test if my configuration is good or bad? I see in PG log that some queries have bad plans, but should I do in order to tell PG to make better decisions? I tried different values with no success.\nYou can set different values for most configuration params on individual db connections. You can test different values for individual slow-running queries. Rather than disabling whole features in the entire database - which may well make lots of other queries run less quickly - you can, at the very least, just disable those features before running the queries that are known to be slow and for which you could not find global values which worked well. Disable sequence plans just before running query x, or boost work_mem to a very high value just for query y. It is also possible that you've simply outstripped your hardware's capability. We had a database with a number of tables containing tens of millions of rows and queries which frequently required aggregating over whole tables. 
Moving from 8Gb of RAM to 48GB of RAM (so that a large chunk of the db fits in memory) and from 6 spindles to 12, and then just modifying the global config to suit the new hardware gave us a huge performance boost that we could never have gotten on the old hardware, no matter how much tuning of individual queries we did. I was actually able to drop all of the custom config tweaks that we had on individual queries, though I'm sure I'll eventually wind up adding some back - queries that aggregate over large tables really benefit from a lot of work_mem - more than I want to configure globally.",
"msg_date": "Fri, 1 Oct 2010 09:50:39 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Samuel Gendler wrote:\n> As to your question about increasing shared_buffers to be some \n> significant proportion of available RAM - apparently, that is not a \n> good idea. I've seen advice that said you shouldn't go above 8GB for \n> shared_buffers and I've also seen 12GB suggested as an upper limit, \n> too. On my host with 48GB of RAM, I didn't see much difference \n> between 8GB and 12GB on a fairly wide variety of tests, so mine is set \n> at 8GB with an efective_cache_size of 36GB.\n\nThe publicly discussed tests done at Sun suggested 10GB was the \neffective upper limit on Solaris before performance started dropping \ninstead of increasing on some of their internal benchmarks. And I've \nheard privately from two people who have done similar experiments on \nLinux and found closer to 8GB to be the point where performance started \nto drop. I'm hoping to get some hardware capable of providing some more \npublic results in this area, and some improvements if we can get better \ndata about what causes this drop in efficiency.\n\nGiven that some write-heavy workloads start to suffer considerable \ncheckpoint issues when shared_buffers is set to a really high value, \nthere's at least two reasons to be conservative here. The big win is \ngoing from the tiny default to hundreds of megabytes. Performance keeps \ngoing up for many people into the low gigabytes range, but the odds of \nhitting a downside increase too. Since PostgreSQL uses the OS cache, \ntoo, I see some sytems with a whole lot of RAM where the 512MB - 1GB \nrange still ends up being optimal, just in terms of balancing the \nimprovements you get from things being in the cache vs. the downsides of \nheavy checkpoint writes.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Sun, 03 Oct 2010 23:07:27 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Fabr�cio dos Anjos Silva wrote:\n> After reading lots of documentation, I still don't understand fully \n> how PG knows if some needed data is in memory or in second storage. \n> While choosing the best query plan, the optimizer must take this into \n> account. Does PG consider this? If so, how does it know?\n\nThere are actually two different questions here, and I'm not sure both \nhave been completely clarified for you by the discussion yet.\n\nPostgreSQL has its own dedicated pool of memory, sized by \nshared_buffers. When you request a page, if it's already in memory, you \nget pointed toward that copy and no physical I/O happens. If it's not, \nPostgreSQL asks the OS to read the page. It's possible that will return \na page that's in the OS cache; the database currently has no idea when \nthis does or doesn't happen though. The hit % counters in the database \nonly reflect shared_buffers hits, not OS ones. Some work to integrate \nthe OS cache information into the database has been done, the current \nleading project in that area is pgfincore: \nhttp://pgfoundry.org/projects/pgfincore/\n\nHowever, none of this information is considered at all by the query \noptimizer. It makes plans without any knowledge of what is or isn't in \nRAM right now, either the dedicated database memory or the OS cache. \nOnly the ratios of the planner constants are really considered. You can \nset those on a query by query basis to provide subtle hints when you \nknow something the planner doesn't, but you have to be very careful \nabout doing that as those plans tend to get obsolete eventually when you \ndo that trick.\n\nI had a brain-storming session on this subject with a few of the hackers \nin the community in this area a while back I haven't had a chance to do \nsomething with yet (it exists only as a pile of scribbled notes so \nfar). There's a couple of ways to collect data on what's in the \ndatabase and OS cache, and a couple of ways to then expose that data to \nthe optimizer. But that needs to be done very carefully, almost \ncertainly as only a manual process at first, because something that's \nproducing cache feedback all of the time will cause plans to change all \nthe time, too. Where I suspect this is going is that we may end up \ntracking various statistics over time, then periodically providing a way \nto export a mass of \"typical % cached\" data back to the optimizer for \nuse in plan cost estimation purposes. But the idea of monitoring \ncontinuously and always planning based on the most recent data available \nhas some stability issues, both from a \"too many unpredictable plan \nchanges\" and a \"bad short-term feedback loop\" perspective, as mentioned \nby Tom and Kevin already.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Sun, 03 Oct 2010 23:22:52 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Craig Ringer wrote:\n> If some kind of cache awareness was to be added, I'd be interested in \n> seeing a \"hotness\" measure that tracked how heavily a given \n> relation/index has been accessed and how much has been read from it \n> recently. A sort of age-scaled blocks-per-second measure that includes \n> both cached and uncached (disk) reads. This would let the planner know \n> how likely parts of a given index/relation are to be cached in memory \n> without imposing the cost of tracking the cache in detail. I'm still \n> not sure it'd be all that useful, though...\n\nYup, that's one of the design ideas scribbled in my notes, as is the \nidea of what someone dubbed a \"heat map\" that tracked which parts of the \nrelation where actually the ones in RAM, the other issue you mentioned. \nThe problem facing a lot of development possibilities in this area is \nthat we don't have any continuous benchmarking of complicated plans \ngoing on right now. So if something really innovative is done, there's \nreally no automatic way to test the result and then see what types of \nplans it improves and what it makes worse. Until there's some better \nperformance regression work like that around, development on the \noptimizer has to favor being very conservative.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Sun, 03 Oct 2010 23:29:03 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Fabr�cio dos Anjos Silva wrote:\n> Is there any automated test tool? A can compile a list of real-world \n> queries, and provide an exact copy of my db server just for testing. \n> But how do I do it? Write a bunch of scripts? Is there any serious \n> tool that try different parameters, run a load test, process results \n> and generate reports?\n\nThere's a list of tools for playing back a test workload at \nhttp://wiki.postgresql.org/wiki/Statement_Playback\n\nI'm not aware of anyone beyond some academic research that has taken \nthat idea and built something to test many database parameter \ncombinations. They did that at \nhttp://www.cs.duke.edu/~shivnath/papers/ituned.pdf but I don't think \nthat code went public; I asked about it at one point and never heard \nanything really useful back.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Sun, 03 Oct 2010 23:34:06 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Fabr�cio dos Anjos Silva wrote:\n> If someone could point good books about PG tuning, I would appreciate \n> that. I found some yet to be released books about PG 9. Any comments \n> about them?\n\nThe largest treatment of the subject already in print I'm aware of is in \nthe Korry and Susan Douglas \"PostgreSQL\" book from 2005, which has \naround 30 pages convering PostgreSQL 8.0. That material is very good \nand much of it still applies, but it's mostly theory without many \nexamples, and there have been a lot of changes to the optimizer since \nthen. There's a good talk by Robert Haas on this subject too that's \nquite current, you can find his slides at \nhttp://sites.google.com/site/robertmhaas/presentations and a recording \nof one version of him giving it is at \nhttp://www.pgcon.org/2010/schedule/events/208.en.html\n\nMy \"PostgreSQL 9.0 High Performance\", due out later this month if things \ncontinue on schedule, has about 80 pages dedicated to indexing and query \noptimization (waiting for final typesetting to know the exact count). \nThe main difference with what I do compared to every other treatment \nI've seen of this subject is I suggest a sample small but not trivial \ndata set, then show plans for real queries run against it. So you \nshould be able to duplicate the examples, and then tinker with them on \nyour own system to see their plans change as you adjust parameters to \nfollow along. That really is the only way to gain expertise here. \nStarting with the troublesome queries from your live system will work \njust as well for that, once you get familiar with enough of the basics.\n\nI'd suggest taking a look and listen to Robert's talk for now, that will \nget you started in the right direction, and combine it with reading all \nof the documentation in the manual on this subject. That should keep \nyou busy for a while, and by the time you're done you may find my book \nis available too.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Mon, 04 Oct 2010 00:02:11 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "2010/10/4 Greg Smith <[email protected]>:\n> Craig Ringer wrote:\n>>\n>> If some kind of cache awareness was to be added, I'd be interested in\n>> seeing a \"hotness\" measure that tracked how heavily a given relation/index\n>> has been accessed and how much has been read from it recently. A sort of\n>> age-scaled blocks-per-second measure that includes both cached and uncached\n>> (disk) reads. This would let the planner know how likely parts of a given\n>> index/relation are to be cached in memory without imposing the cost of\n>> tracking the cache in detail. I'm still not sure it'd be all that useful,\n>> though...\n>\n> Yup, that's one of the design ideas scribbled in my notes, as is the idea of\n> what someone dubbed a \"heat map\" that tracked which parts of the relation\n> where actually the ones in RAM, the other issue you mentioned. The problem\n> facing a lot of development possibilities in this area is that we don't have\n> any continuous benchmarking of complicated plans going on right now. So if\n> something really innovative is done, there's really no automatic way to test\n> the result and then see what types of plans it improves and what it makes\n> worse. Until there's some better performance regression work like that\n> around, development on the optimizer has to favor being very conservative.\n\n* tracking specific block is not very easy because of readahead. You\nend-up measuring exactly if a block was in memory at the moment you\nrequested it physicaly, not at the moment the first seek/fread happen.\nIt is still interesting stat imho.\n\nI wonder how that can add value to the planner.\n\n* If the planner knows more about the OS cache it can guess the\neffective_cache_size on its own, which is probably already nice to\nhave.\n\nExtract from postgres code:\n * We use an approximation proposed by Mackert and Lohman, \"Index Scans\n * Using a Finite LRU Buffer: A Validated I/O Model\", ACM Transactions\n * on Database Systems, Vol. 14, No. 3, September 1989, Pages 401-424.\n\nPlanner use that in conjunction with effective_cache_size to guess if\nit is interesting to scan the index.\nAll is to know if this model is still valid in front of a more precise\nknowledge of the OS page cache... and also if it matches how different\nsystems like windows and linux handle page cache.\n\nHooks around cost estimation should help writing a module to rethink\nthat part of the planner and make it use the statistics about cache. I\nwonder if adding such hooks to core impact its performances ? Anyway\ndoing that is probably the easier and shorter way to test the\nbehavior.\n\n\n>\n> --\n> Greg Smith, 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support www.2ndQuadrant.us\n> Author, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\n> https://www.packtpub.com/postgresql-9-0-high-performance/book\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Mon, 4 Oct 2010 22:39:40 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "On 10/04/2010 04:22 AM, Greg Smith wrote:\n> I had a brain-storming session on this subject with a few of the hackers in the community in this area a while back I haven't had a chance to do something with yet (it exists only as a pile of scribbled notes so far). There's a couple of ways to collect data on what's in the database and OS cache, and a couple of ways to then expose that data to the optimizer. But that needs to be done very carefully, almost certainly as only a manual process at first, because something that's producing cache feedback all of the time will cause plans to change all the time, too. Where I suspect this is going is that we may end up tracking various statistics over time, then periodically providing a way to export a mass of \"typical % cached\" data back to the optimizer for use in plan cost estimation purposes. But the idea of monitoring continuously and always planning based on the most recent data available has some stability issues, both from a \"too many unpredictable plan changes\" and a \"ba\nd\n> short-term feedback loop\" perspective, as mentioned by Tom and Kevin already.\n\nWhy not monitor the distribution of response times, rather than \"cached\" vs. not?\n\nThat a) avoids the issue of discovering what was a cache hit b) deals neatly with\nmultilevel caching c) feeds directly into cost estimation.\n\nCheers,\n Jeremy\n",
"msg_date": "Mon, 04 Oct 2010 23:47:16 +0100",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "On Mon, Oct 4, 2010 at 6:47 PM, Jeremy Harris <[email protected]> wrote:\n> On 10/04/2010 04:22 AM, Greg Smith wrote:\n>>\n>> I had a brain-storming session on this subject with a few of the hackers\n>> in the community in this area a while back I haven't had a chance to do\n>> something with yet (it exists only as a pile of scribbled notes so far).\n>> There's a couple of ways to collect data on what's in the database and OS\n>> cache, and a couple of ways to then expose that data to the optimizer. But\n>> that needs to be done very carefully, almost certainly as only a manual\n>> process at first, because something that's producing cache feedback all of\n>> the time will cause plans to change all the time, too. Where I suspect this\n>> is going is that we may end up tracking various statistics over time, then\n>> periodically providing a way to export a mass of \"typical % cached\" data\n>> back to the optimizer for use in plan cost estimation purposes. But the idea\n>> of monitoring continuously and always planning based on the most recent data\n>> available has some stability issues, both from a \"too many unpredictable\n>> plan changes\" and a \"ba\n>\n> d\n>>\n>> short-term feedback loop\" perspective, as mentioned by Tom and Kevin\n>> already.\n>\n> Why not monitor the distribution of response times, rather than \"cached\" vs.\n> not?\n>\n> That a) avoids the issue of discovering what was a cache hit b) deals\n> neatly with\n> multilevel caching c) feeds directly into cost estimation.\n\nI was hot on doing better cache modeling a year or two ago, but the\nelephant in the room is that it's unclear that it solves any\nreal-world problem. The OP is clearly having a problem, but there's\nnot enough information in his post to say what is actually causing it,\nand it's probably not caching effects. We get occasional complaints\nof the form \"the first time I run this query it's slow, and then after\nthat it's fast\" but, as Craig Ringer pointed out upthread, not too\nmany. And even with respect to the complaints we do get, it's far\nfrom clear that the cure is any better than the disease. Taking\ncaching effects into account could easily result in the first\nexecution being slightly less slow and all of the subsequent\nexecutions being moderately slow. That would not be an improvement\nfor most people. The reports that seem really painful to me are the\nones where people with really big machines complain of needing HOURS\nfor the cache to warm up, and having the system bogged down to a\nstandstill until then. But changing the cost model isn't going to\nhelp them either.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 11 Oct 2010 22:59:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "An approach that works can be found in DB2, and likely elsewhere. \n\nThe key is that tablespaces/tables/indexes/buffers are all attached through the bufferpool (the DB2 term). A tablespace/bufferpool match is defined. Then tables and indexes are assigned to the tablespace (and implicitly, the bufferpool). As a result, one can effectively pin data in memory. This is very useful, but not low hanging fruit to implement.\n\nThe introduction of rudimentary tablespaces is a first step. I assumed that the point was to get to a DB2-like structure at some point. Yes?\n\nRobert\n\n---- Original message ----\n>Date: Mon, 11 Oct 2010 22:59:28 -0400\n>From: [email protected] (on behalf of Robert Haas <[email protected]>)\n>Subject: Re: [PERFORM] How does PG know if data is in memory? \n>To: Jeremy Harris <[email protected]>\n>Cc: [email protected]\n>\n>On Mon, Oct 4, 2010 at 6:47 PM, Jeremy Harris <[email protected]> wrote:\n>> On 10/04/2010 04:22 AM, Greg Smith wrote:\n>>>\n>>> I had a brain-storming session on this subject with a few of the hackers\n>>> in the community in this area a while back I haven't had a chance to do\n>>> something with yet (it exists only as a pile of scribbled notes so far).\n>>> There's a couple of ways to collect data on what's in the database and OS\n>>> cache, and a couple of ways to then expose that data to the optimizer. But\n>>> that needs to be done very carefully, almost certainly as only a manual\n>>> process at first, because something that's producing cache feedback all of\n>>> the time will cause plans to change all the time, too. Where I suspect this\n>>> is going is that we may end up tracking various statistics over time, then\n>>> periodically providing a way to export a mass of \"typical % cached\" data\n>>> back to the optimizer for use in plan cost estimation purposes. But the idea\n>>> of monitoring continuously and always planning based on the most recent data\n>>> available has some stability issues, both from a \"too many unpredictable\n>>> plan changes\" and a \"ba\n>>\n>> d\n>>>\n>>> short-term feedback loop\" perspective, as mentioned by Tom and Kevin\n>>> already.\n>>\n>> Why not monitor the distribution of response times, rather than \"cached\" vs.\n>> not?\n>>\n>> That a) avoids the issue of discovering what was a cache hit b) deals\n>> neatly with\n>> multilevel caching c) feeds directly into cost estimation.\n>\n>I was hot on doing better cache modeling a year or two ago, but the\n>elephant in the room is that it's unclear that it solves any\n>real-world problem. The OP is clearly having a problem, but there's\n>not enough information in his post to say what is actually causing it,\n>and it's probably not caching effects. We get occasional complaints\n>of the form \"the first time I run this query it's slow, and then after\n>that it's fast\" but, as Craig Ringer pointed out upthread, not too\n>many. And even with respect to the complaints we do get, it's far\n>from clear that the cure is any better than the disease. Taking\n>caching effects into account could easily result in the first\n>execution being slightly less slow and all of the subsequent\n>executions being moderately slow. That would not be an improvement\n>for most people. The reports that seem really painful to me are the\n>ones where people with really big machines complain of needing HOURS\n>for the cache to warm up, and having the system bogged down to a\n>standstill until then. 
But changing the cost model isn't going to\n>help them either.\n>\n>-- \n>Robert Haas\n>EnterpriseDB: http://www.enterprisedb.com\n>The Enterprise PostgreSQL Company\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 11 Oct 2010 23:11:34 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is\n in memory?"
},
{
"msg_contents": "On Mon, Oct 11, 2010 at 11:11 PM, <[email protected]> wrote:\n> An approach that works can be found in DB2, and likely elsewhere.\n>\n> The key is that tablespaces/tables/indexes/buffers are all attached through the bufferpool (the DB2 term). A tablespace/bufferpool match is defined. Then tables and indexes are assigned to the tablespace (and implicitly, the bufferpool). As a result, one can effectively pin data in memory. This is very useful, but not low hanging fruit to implement.\n>\n> The introduction of rudimentary tablespaces is a first step. I assumed that the point was to get to a DB2-like structure at some point. Yes?\n\nWe already have tablespaces, and our data already is accessed through\nthe buffer pool.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 12 Oct 2010 08:34:23 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "The discussions I've seen indicated that, in use, tablespaces were at the database level, but, yes, the docs do say that a table can be assigned to a defined tablespace. What I still can't find is syntax which establishes buffers/caches/whatever and assigns them to tablespaces. Without that, I'm not sure what benefit there is to tablespaces, other than a sort of RAID-lite.\n\nRobert\n\n\n---- Original message ----\n>Date: Tue, 12 Oct 2010 08:34:23 -0400\n>From: [email protected] (on behalf of Robert Haas <[email protected]>)\n>Subject: Re: [PERFORM] How does PG know if data is in memory? \n>To: [email protected]\n>Cc: [email protected]\n>\n>On Mon, Oct 11, 2010 at 11:11 PM, <[email protected]> wrote:\n>> An approach that works can be found in DB2, and likely elsewhere.\n>>\n>> The key is that tablespaces/tables/indexes/buffers are all attached through the bufferpool (the DB2 term). A tablespace/bufferpool match is defined. Then tables and indexes are assigned to the tablespace (and implicitly, the bufferpool). As a result, one can effectively pin data in memory. This is very useful, but not low hanging fruit to implement.\n>>\n>> The introduction of rudimentary tablespaces is a first step. I assumed that the point was to get to a DB2-like structure at some point. Yes?\n>\n>We already have tablespaces, and our data already is accessed through\n>the buffer pool.\n>\n>-- \n>Robert Haas\n>EnterpriseDB: http://www.enterprisedb.com\n>The Enterprise PostgreSQL Company\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Oct 2010 10:20:19 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is\n in memory?"
},
{
"msg_contents": "<[email protected]> wrote:\n \n> An approach that works can be found in DB2, and likely elsewhere.\n> \n> The key is that tablespaces/tables/indexes/buffers are all\n> attached through the bufferpool (the DB2 term). A tablespace/\n> bufferpool match is defined. Then tables and indexes are assigned\n> to the tablespace (and implicitly, the bufferpool). As a result,\n> one can effectively pin data in memory. This is very useful, but\n> not low hanging fruit to implement.\n \nThis sounds similar to Sybase named caches. You can segment off\nportions of the memory for specific caches, break that up into space\nreserved for different I/O buffer sizes, and bind specific database\nobjects (tables and indexes) to specific caches. On the few\noccasions where someone had failed to configure the named caches\nwhen setting up a machine, it was caught almost immediately after\ndeployment because of end-user complaints about poor performance. \nThis was so critical to performance for us when we were using\nSybase, that one of my first reactions on finding it missing in\nPostgreSQL was distress over the inability to tune as I had.\n \nWhen I posted to the list about it, the response was that LRU\neviction was superior to any tuning any human would do. I didn't\nand don't believe that, but have found it's close enough in the\nPostgreSQL environment to be *way* down my list of performance\nissues. In fact, when looking at the marginal benefits it would\ngenerate in PostgreSQL when done right, versus the number of people\nwho would shoot themselves in the foot with it, even I have come\naround to feeling it's probably not a good idea.\n \nFWIW, the four main reasons for using it were:\n \n(1) Heavily used data could be kept fully cached in RAM and not\ndriven out by transient activity.\n \n(2) You could flag a cache used for (1) above as using \"relaxed LRU\naccounting\" -- it saved a lot of time tracking repeated references,\nleaving more CPU for other purposes.\n \n(3) Each named cache had its own separate set of locks, reducing\ncontention.\n \n(4) Large tables for which the heap was often were scanned in its\nentirety or for a range on the clustered index could be put in a\nrelatively small cache with large I/O buffers. This avoided blowing\nout the default cache space for situations which almost always\nrequired disk I/O anyway.\n \nNone of that is anything for amateurs to play with. You need to set\nup caches like that based on evidence from monitoring and do careful\nbenchmarking of the results to actually achieve improvements over\nLRU logic.\n \n> The introduction of rudimentary tablespaces is a first step. I\n> assumed that the point was to get to a DB2-like structure at some\n> point. Yes?\n \nAs far as I can tell, there is nobody with that intent.\n \n-Kevin\n",
"msg_date": "Tue, 12 Oct 2010 09:35:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Couldn't have said it better myself; covered all the bases. If PG wants to become an industrial strength database, worthy of replacing DB2/etc., then these are the sorts of knobs and switches it will need. \n\n\n-- None of that is anything for amateurs to play with.\n\nNot jam a stick in anybody's eye, but shouldn't database pros not be amateurs? Or are most PG-ers coders who don't really want to design and tune a database?\n\nRobert\n\n---- Original message ----\n>Date: Tue, 12 Oct 2010 09:35:56 -0500\n>From: [email protected] (on behalf of \"Kevin Grittner\" <[email protected]>)\n>Subject: Re: [PERFORM] How does PG know if data is in memory? \n>To: <[email protected]>,<[email protected]>\n>\n><[email protected]> wrote:\n> \n>> An approach that works can be found in DB2, and likely elsewhere.\n>> \n>> The key is that tablespaces/tables/indexes/buffers are all\n>> attached through the bufferpool (the DB2 term). A tablespace/\n>> bufferpool match is defined. Then tables and indexes are assigned\n>> to the tablespace (and implicitly, the bufferpool). As a result,\n>> one can effectively pin data in memory. This is very useful, but\n>> not low hanging fruit to implement.\n> \n>This sounds similar to Sybase named caches. You can segment off\n>portions of the memory for specific caches, break that up into space\n>reserved for different I/O buffer sizes, and bind specific database\n>objects (tables and indexes) to specific caches. On the few\n>occasions where someone had failed to configure the named caches\n>when setting up a machine, it was caught almost immediately after\n>deployment because of end-user complaints about poor performance. \n>This was so critical to performance for us when we were using\n>Sybase, that one of my first reactions on finding it missing in\n>PostgreSQL was distress over the inability to tune as I had.\n> \n>When I posted to the list about it, the response was that LRU\n>eviction was superior to any tuning any human would do. I didn't\n>and don't believe that, but have found it's close enough in the\n>PostgreSQL environment to be *way* down my list of performance\n>issues. In fact, when looking at the marginal benefits it would\n>generate in PostgreSQL when done right, versus the number of people\n>who would shoot themselves in the foot with it, even I have come\n>around to feeling it's probably not a good idea.\n> \n>FWIW, the four main reasons for using it were:\n> \n>(1) Heavily used data could be kept fully cached in RAM and not\n>driven out by transient activity.\n> \n>(2) You could flag a cache used for (1) above as using \"relaxed LRU\n>accounting\" -- it saved a lot of time tracking repeated references,\n>leaving more CPU for other purposes.\n> \n>(3) Each named cache had its own separate set of locks, reducing\n>contention.\n> \n>(4) Large tables for which the heap was often were scanned in its\n>entirety or for a range on the clustered index could be put in a\n>relatively small cache with large I/O buffers. This avoided blowing\n>out the default cache space for situations which almost always\n>required disk I/O anyway.\n> \n>None of that is anything for amateurs to play with. You need to set\n>up caches like that based on evidence from monitoring and do careful\n>benchmarking of the results to actually achieve improvements over\n>LRU logic.\n> \n>> The introduction of rudimentary tablespaces is a first step. I\n>> assumed that the point was to get to a DB2-like structure at some\n>> point. 
Yes?\n> \n>As far as I can tell, there is nobody with that intent.\n> \n>-Kevin\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 12 Oct 2010 10:49:44 -0400 (EDT)",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is\n in memory?"
},
{
"msg_contents": "<[email protected]> wrote:\n \n> -- None of that is anything for amateurs to play with.\n> \n> Not jam a stick in anybody's eye, but shouldn't database pros not\n> be amateurs?\n \nWhile many PostgreSQL installations are managed by professional\nDBAs, or programmers or consultants with a deep enough grasp of the\nissues to tune a knob like that appropriately, PostgreSQL is also\nused in environments without such staff. In fact, there is pressure\nto make PostgreSQL easier to configure for exactly that reason. If\nwe add more knobs which are this hard to tune correctly, we would\nrisk inundation with complaints from people to tried to use it and\nmade things worse.\n \n-Kevin\n",
"msg_date": "Tue, 12 Oct 2010 10:11:29 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Kevin Grittner wrote:\n> \n> ...Sybase named caches...segment off portions of the memory for\n> specific caches... bind specific database\n> objects (tables and indexes) to specific caches. ...\n> \n> When I posted to the list about it, the response was that LRU\n> eviction was superior to any tuning any human would do. I didn't\n> and don't believe that....\n> \n> FWIW, the four main reasons for using it were:\n> (1) Heavily used data could be kept fully cached in RAM...\n\nLightly-used-but-important data seems like another use case.\n\nLRU's probably far better than me at optimizing for the total\nthroughput and/or average response time. But if there's a\nrequirement:\n \"Even though this query's very rare, it should respond\n ASAP, even at the expense of the throughput of the rest\n of the system.\"\nit sounds like this kind of hand-tuning might be useful.\n\n",
"msg_date": "Tue, 12 Oct 2010 20:16:25 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "On Tue, Oct 12, 2010 at 10:35 AM, Kevin Grittner\n<[email protected]> wrote:\n> (1) Heavily used data could be kept fully cached in RAM and not\n> driven out by transient activity.\n\nWe've attempted to address this problem by adding logic to prevent the\nbuffer cache from being trashed by vacuums, bulk loads, and sequential\nscans. It would be interesting to know if anyone has examples of that\nlogic falling over or proving inadequate.\n\n> (2) You could flag a cache used for (1) above as using \"relaxed LRU\n> accounting\" -- it saved a lot of time tracking repeated references,\n> leaving more CPU for other purposes.\n\nWe never do strict LRU accounting.\n\n> (3) Each named cache had its own separate set of locks, reducing\n> contention.\n\nWe have lock partitions, but as discussed recently on -hackers, they\nseem to start falling over around 26 cores. We probably need to\nimprove that, but I'd rather do that by making the locking more\nefficient and by increasing the number of partitions rather than by\nallowing users to partition the buffer pool by hand.\n\n> (4) Large tables for which the heap was often were scanned in its\n> entirety or for a range on the clustered index could be put in a\n> relatively small cache with large I/O buffers. This avoided blowing\n> out the default cache space for situations which almost always\n> required disk I/O anyway.\n\nI think, but am not quite sure, that my answer to point #1 is also\nrelevant here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 13 Oct 2010 02:40:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "2010/10/13 Ron Mayer <[email protected]>:\n> Kevin Grittner wrote:\n>>\n>> ...Sybase named caches...segment off portions of the memory for\n>> specific caches... bind specific database\n>> objects (tables and indexes) to specific caches. ...\n>>\n>> When I posted to the list about it, the response was that LRU\n>> eviction was superior to any tuning any human would do. I didn't\n>> and don't believe that....\n>>\n>> FWIW, the four main reasons for using it were:\n>> (1) Heavily used data could be kept fully cached in RAM...\n>\n> Lightly-used-but-important data seems like another use case.\n>\n> LRU's probably far better than me at optimizing for the total\n> throughput and/or average response time. But if there's a\n> requirement:\n> \"Even though this query's very rare, it should respond\n> ASAP, even at the expense of the throughput of the rest\n> of the system.\"\n> it sounds like this kind of hand-tuning might be useful.\n\nit is exactly one of the purpose of pgfincore :\nhttp://villemain.org/projects/pgfincore#load_a_table_or_an_index_in_os_page_cache\n\n\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 14 Oct 2010 20:47:43 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
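A minimal sketch of the pgfincore approach referenced above, assuming the extension is installed. The function names below (pgfincore to inspect OS-cache residency, pgfadvise_willneed to ask the kernel to pre-read a relation) follow the pgfincore documentation, but the extension's API has varied between versions, so treat them as assumptions rather than a definitive recipe; pgbench_accounts is only a placeholder relation name:

-- how much of a relation currently sits in the OS page cache
SELECT * FROM pgfincore('pgbench_accounts');

-- hint the kernel to pull a heavily-used table and its index into cache
SELECT * FROM pgfadvise_willneed('pgbench_accounts');
SELECT * FROM pgfadvise_willneed('pgbench_accounts_pkey');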
{
"msg_contents": "Kevin Grittner wrote:\n> <[email protected]> wrote:\n> \n> > -- None of that is anything for amateurs to play with.\n> > \n> > Not jam a stick in anybody's eye, but shouldn't database pros not\n> > be amateurs?\n> \n> While many PostgreSQL installations are managed by professional\n> DBAs, or programmers or consultants with a deep enough grasp of the\n> issues to tune a knob like that appropriately, PostgreSQL is also\n> used in environments without such staff. In fact, there is pressure\n> to make PostgreSQL easier to configure for exactly that reason. If\n> we add more knobs which are this hard to tune correctly, we would\n> risk inundation with complaints from people to tried to use it and\n> made things worse.\n\nAgreed. Here is a blog entry that explains some of the tradeoffs of\nadding knobs:\n\n\thttp://momjian.us/main/blogs/pgblog/2009.html#January_10_2009\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 21 Oct 2010 14:55:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "Greg Smith writes:\n\n> heard privately from two people who have done similar experiments on \n> Linux and found closer to 8GB to be the point where performance started \n\nSo on a machine with 72GB is 8GB still the recommended value?\nUsually have only 10 to 20 connections. \n",
"msg_date": "Wed, 27 Oct 2010 22:30:08 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
},
{
"msg_contents": "2010/10/28 Francisco Reyes <[email protected]>:\n> Greg Smith writes:\n>\n>> heard privately from two people who have done similar experiments on Linux\n>> and found closer to 8GB to be the point where performance started\n>\n> So on a machine with 72GB is 8GB still the recommended value?\n\nYes, as a maximum, not a minimum. (Some applications will work better\nwith less shared_buffers than others)\n\n> Usually have only 10 to 20 connections.\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 28 Oct 2010 17:31:29 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How does PG know if data is in memory?"
}
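As a rough illustration of that advice for a 72GB machine with 10 to 20 connections, a postgresql.conf sketch might look like the following; the figures are assumptions to be validated against the actual workload, not tuned recommendations:

shared_buffers = 8GB            # the upper bound discussed above; start lower and measure
effective_cache_size = 60GB     # planner hint: most of the remaining RAM acts as OS cache
work_mem = 64MB                 # per sort/hash node, per connection, so keep it modest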
] |
[
{
"msg_contents": "Hi,\n\n I have this situation:\n\nDatabase size: 7,6GB (on repository)\nMemory size: 7,7GB\n1 CPU: aprox. 2GHz Xeon\nNumber of tables: 1 (empresa)\n\nCREATE TABLE \"public\".\"empresa\" (\n \"cdempresa\" INTEGER NOT NULL,\n \"razaosocial\" VARCHAR(180),\n \"cnpj\" VARCHAR(14) NOT NULL,\n \"ie\" VARCHAR(13),\n \"endereco\" VARCHAR(150),\n \"numero\" VARCHAR(40),\n \"complemento\" VARCHAR(140),\n \"bairro\" VARCHAR(80),\n \"municipio\" VARCHAR(80),\n \"cep\" VARCHAR(8),\n \"cxpostal\" VARCHAR(12),\n \"telefone\" VARCHAR(80),\n \"data\" VARCHAR(10),\n \"ramo\" VARCHAR(200),\n \"email\" VARCHAR(80),\n \"uf\" CHAR(10),\n \"origem\" VARCHAR(30),\n \"nomefantasia\" VARCHAR(120),\n \"site\" VARCHAR(80),\n \"dtatualizacao\" TIMESTAMP WITHOUT TIME ZONE,\n \"areautil\" VARCHAR(8),\n \"ramosecundario\" VARCHAR(200),\n \"observacao\" VARCHAR(120),\n \"natureza\" VARCHAR(80),\n \"situacao\" VARCHAR(80),\n \"cdramo\" INTEGER,\n \"cdramorf\" INTEGER,\n \"ramo3\" VARCHAR(200),\n \"ramo4\" VARCHAR(200),\n \"ramo5\" VARCHAR(200),\n \"ramo6\" VARCHAR(200),\n \"fonte\" VARCHAR(12),\n \"dtcriacao\" DATE,\n \"cdramorf2\" INTEGER,\n \"ramo7\" VARCHAR(200),\n \"ramo8\" VARCHAR(200),\n \"ramo9\" VARCHAR(200),\n \"ramo10\" VARCHAR(200),\n \"razaosocialts\" TSVECTOR,\n \"latitude\" DOUBLE PRECISION,\n \"longitude\" DOUBLE PRECISION,\n \"precisao\" VARCHAR(1),\n CONSTRAINT \"pk_empresa\" PRIMARY KEY(\"cdempresa\")\n) WITHOUT OIDS;\n\nCREATE INDEX \"idx_cnpj\" ON \"public\".\"empresa\"\n USING btree (\"cnpj\");\n\nCREATE INDEX \"idx_empresa_dtcriacao\" ON \"public\".\"empresa\"\n USING btree (\"dtcriacao\");\n\nalter table empresa alter column cnpj set statistics 1000;\nanalyze verbose empresa (cnpj);\nINFO: \"empresa\": scanned 300000 of 514508 pages, containing 5339862 live\nrows and 0 dead rows; 300000 rows in sample, 9158006 estimated total rows\n\nalter table empresa alter column dtcriacao set statistics 1000;\nanalyze verbose empresa (dtcriacao);\nINFO: \"empresa\": scanned 300000 of 514508 pages, containing 5342266 live\nrows and 0 dead rows; 300000 rows in sample, 9162129 estimated total rows\n\nshared_buffers = 2000MB\nwork_mem = 64MB\nmaintenance_work_mem = 256MB\neffective_io_concurrency = 4 (using RAID-0 on 4 disks)\nseq_page_cost = 0.01\nrandom_page_cost = 0.01\ncpu_tuple_cost = 0.003\ncpu_index_tuple_cost = 0.001\ncpu_operator_cost = 0.0005\neffective_cache_size = 7200MB\ngeqo_threshold = 15\n\n\n All data and metadata required for the following queries are already in\nOS cache.\n\nWhen enable_indexscan is off, I execute the same query 3 times, altering how\nmuch data I want to query (check the current_date-X part). 
In this scenario,\nI get the following plans:\n\nexplain analyze select max(cnpj) from empresa where dtcriacao >=\ncurrent_date-3;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Aggregate (cost=18.30..18.30 rows=1 width=15) (actual time=50.075..50.076\nrows=1 loops=1)\n -> Bitmap Heap Scan on empresa (cost=1.15..17.96 rows=682 width=15)\n(actual time=36.252..47.264 rows=1985 loops=1)\n Recheck Cond: (dtcriacao >= (('now'::text)::date - 3))\n -> Bitmap Index Scan on idx_empresa_dtcriacao (cost=0.00..1.12\nrows=682 width=0) (actual time=35.980..35.980 rows=1985 loops=1)\n Index Cond: (dtcriacao >= (('now'::text)::date - 3))\n Total runtime: 50.193 ms\n(6 rows)\n\nexplain analyze select max(cnpj) from empresa where dtcriacao >=\ncurrent_date-4;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Aggregate (cost=36.31..36.31 rows=1 width=15) (actual time=41.880..41.881\nrows=1 loops=1)\n -> Bitmap Heap Scan on empresa (cost=2.25..35.63 rows=1364 width=15)\n(actual time=23.291..38.146 rows=2639 loops=1)\n Recheck Cond: (dtcriacao >= (('now'::text)::date - 4))\n -> Bitmap Index Scan on idx_empresa_dtcriacao (cost=0.00..2.18\nrows=1364 width=0) (actual time=22.946..22.946 rows=2639 loops=1)\n Index Cond: (dtcriacao >= (('now'::text)::date - 4))\n Total runtime: 42.025 ms\n(6 rows)\n\nexplain analyze select max(cnpj) from empresa where dtcriacao >=\ncurrent_date-5;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Aggregate (cost=54.13..54.13 rows=1 width=15) (actual time=93.265..93.266\nrows=1 loops=1)\n -> Bitmap Heap Scan on empresa (cost=3.35..53.11 rows=2045 width=15)\n(actual time=26.749..84.553 rows=6380 loops=1)\n Recheck Cond: (dtcriacao >= (('now'::text)::date - 5))\n -> Bitmap Index Scan on idx_empresa_dtcriacao (cost=0.00..3.24\nrows=2045 width=0) (actual time=26.160..26.160 rows=6380 loops=1)\n Index Cond: (dtcriacao >= (('now'::text)::date - 5))\n Total runtime: 93.439 ms\n(6 rows)\n\nNote that the plan is the same for all 3 queries.\n\nHowever, when enable_indexscan is on, I execute the same 3 queries, and I\nget the following plans:\n\nexplain analyze select max(cnpj) from empresa where dtcriacao >=\ncurrent_date-3;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Aggregate (cost=17.14..17.14 rows=1 width=15) (actual time=35.960..35.961\nrows=1 loops=1)\n -> Index Scan using idx_empresa_dtcriacao on empresa (cost=0.00..16.80\nrows=682 width=15) (actual time=0.078..23.215 rows=1985 loops=1)\n Index Cond: (dtcriacao >= (('now'::text)::date - 3))\n Total runtime: 36.083 ms\n(4 rows)\n\nexplain analyze select max(cnpj) from empresa where dtcriacao >=\ncurrent_date-4;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Aggregate (cost=34.20..34.20 rows=1 width=15) (actual time=40.625..40.626\nrows=1 loops=1)\n -> Index Scan using idx_empresa_dtcriacao on empresa (cost=0.00..33.52\nrows=1364 width=15) (actual time=0.071..37.019 rows=2639 loops=1)\n Index Cond: (dtcriacao >= (('now'::text)::date - 4))\n Total runtime: 40.740 ms\n(4 rows)\n\nexplain analyze select max(cnpj) from empresa where dtcriacao >=\ncurrent_date-5;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------\n Result (cost=32.24..32.24 rows=1 width=0) (actual time=5223.937..5223.938\nrows=1 
loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..32.24 rows=1 width=15) (actual\ntime=5223.921..5223.922 rows=1 loops=1)\n -> Index Scan Backward using idx_cnpj on empresa\n(cost=0.00..65925.02 rows=2045 width=15) (actual time=5223.913..5223.913\nrows=1 loops=1)\n Index Cond: ((cnpj)::text IS NOT NULL)\n Filter: (dtcriacao >= (('now'::text)::date - 5))\n Total runtime: 5224.037 ms\n(7 rows)\n\n Note that when I subtract at least 5 from current_date, the plan is\nchanged to an Index Scan Backward on idx_cnpj, which is a worse choice.\n\n My question is: Why the cost of Limit on the last query, estimated as\n32.24 if the Index Scan Backward is estimated at 65925.02? Since there is a\nfilter based on column dtcriacao, the whole index is going to be analyzed,\nand Limit is going to wait for the complete Index Scan to complete. Why use\nidx_cnpj in this case? Why not use idx_empresa_dtcriacao?\n\n Just for comparison, consider this query:\n\nexplain analyze select max(cnpj) from empresa;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.133..0.134 rows=1\nloops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..0.00 rows=1 width=15) (actual time=0.120..0.121\nrows=1 loops=1)\n -> Index Scan Backward using idx_cnpj on empresa\n(cost=0.00..42146.33 rows=9162129 width=15) (actual time=0.114..0.114 rows=1\nloops=1)\n Index Cond: ((cnpj)::text IS NOT NULL)\n Total runtime: 0.212 ms\n(6 rows)\n\n In this case, it is correct to use the Index Scan Backward on idx_cnpj,\nsince the Limit will interrupt it just after the first returned value. The\nestimated Limit cost of 0.00 is ok, even if the Scan cost is estimated at\n42146.33.\n\n I only managed to get all data in OS cache after mounting this new server\nwith 7,7GB of memory, which I can't afford to use permanently. My real\nserver has only 1,7GB of memory and this Index Scan Backward plan takes\nforever to run (I really don't know how much time), making tons of random\nseeks. When enable_indexscan id off, the query runs quickly on the 1,7GB\nserver.\n Initially I was using statistics for these 2 columns as 200 and 300, but\neven after changing to 1000, the problem persists. I tried several different\nvalues the seq_page_cost, random_page_cost, cpu_tuple_cost,\ncpu_index_tuple_cost and cpu_operator_cost with no success.\n\n Could someone explain me this?\n\n Thanks,\n\nFabrício dos Anjos Silva\nLinkCom Soluções em T.I.\n\n Hi, I have this situation:Database size: 7,6GB (on repository)Memory size: 7,7GB1 CPU: aprox. 
2GHz XeonNumber of tables: 1 (empresa)CREATE TABLE \"public\".\"empresa\" (\n\n \"cdempresa\" INTEGER NOT NULL, \"razaosocial\" VARCHAR(180), \"cnpj\" VARCHAR(14) NOT NULL, \"ie\" VARCHAR(13), \"endereco\" VARCHAR(150), \"numero\" VARCHAR(40),\n\n \"complemento\" VARCHAR(140), \"bairro\" VARCHAR(80), \"municipio\" VARCHAR(80), \"cep\" VARCHAR(8), \"cxpostal\" VARCHAR(12), \"telefone\" VARCHAR(80),\n\n \"data\" VARCHAR(10), \"ramo\" VARCHAR(200), \"email\" VARCHAR(80), \"uf\" CHAR(10), \"origem\" VARCHAR(30), \"nomefantasia\" VARCHAR(120), \"site\" VARCHAR(80),\n\n \"dtatualizacao\" TIMESTAMP WITHOUT TIME ZONE, \"areautil\" VARCHAR(8), \"ramosecundario\" VARCHAR(200), \"observacao\" VARCHAR(120), \"natureza\" VARCHAR(80),\n\n \"situacao\" VARCHAR(80), \"cdramo\" INTEGER, \"cdramorf\" INTEGER, \"ramo3\" VARCHAR(200), \"ramo4\" VARCHAR(200), \"ramo5\" VARCHAR(200), \"ramo6\" VARCHAR(200),\n\n \"fonte\" VARCHAR(12), \"dtcriacao\" DATE, \"cdramorf2\" INTEGER, \"ramo7\" VARCHAR(200), \"ramo8\" VARCHAR(200), \"ramo9\" VARCHAR(200), \"ramo10\" VARCHAR(200),\n\n \"razaosocialts\" TSVECTOR, \"latitude\" DOUBLE PRECISION, \"longitude\" DOUBLE PRECISION, \"precisao\" VARCHAR(1), CONSTRAINT \"pk_empresa\" PRIMARY KEY(\"cdempresa\")\n\n) WITHOUT OIDS;CREATE INDEX \"idx_cnpj\" ON \"public\".\"empresa\" USING btree (\"cnpj\");CREATE INDEX \"idx_empresa_dtcriacao\" ON \"public\".\"empresa\"\n\n USING btree (\"dtcriacao\");alter table empresa alter column cnpj set statistics 1000;analyze verbose empresa (cnpj);INFO: \"empresa\": scanned 300000 of 514508 pages, containing 5339862 live rows and 0 dead rows; 300000 rows in sample, 9158006 estimated total rows\nalter table empresa alter column dtcriacao set statistics 1000;analyze verbose empresa (dtcriacao);INFO: \"empresa\": scanned 300000 of 514508 pages, containing 5342266 live rows and 0 dead rows; 300000 rows in sample, 9162129 estimated total rows\nshared_buffers = 2000MBwork_mem = 64MBmaintenance_work_mem = 256MBeffective_io_concurrency = 4 (using RAID-0 on 4 disks)seq_page_cost = 0.01random_page_cost = 0.01cpu_tuple_cost = 0.003cpu_index_tuple_cost = 0.001\n\ncpu_operator_cost = 0.0005effective_cache_size = 7200MBgeqo_threshold = 15 All data and metadata required for the following queries are already in OS cache.When enable_indexscan is off, I execute the same query 3 times, altering how much data I want to query (check the current_date-X part). 
In this scenario, I get the following plans:\nexplain analyze select max(cnpj) from empresa where dtcriacao >= current_date-3; QUERY PLAN-----------------------------------------------------------------------------------------\n\n Aggregate (cost=18.30..18.30 rows=1 width=15) (actual time=50.075..50.076 rows=1 loops=1) -> Bitmap Heap Scan on empresa (cost=1.15..17.96 rows=682 width=15) (actual time=36.252..47.264 rows=1985 loops=1)\n\n Recheck Cond: (dtcriacao >= (('now'::text)::date - 3)) -> Bitmap Index Scan on idx_empresa_dtcriacao (cost=0.00..1.12 rows=682 width=0) (actual time=35.980..35.980 rows=1985 loops=1)\n\n Index Cond: (dtcriacao >= (('now'::text)::date - 3)) Total runtime: 50.193 ms(6 rows)explain analyze select max(cnpj) from empresa where dtcriacao >= current_date-4; QUERY PLAN\n\n----------------------------------------------------------------------------------------- Aggregate (cost=36.31..36.31 rows=1 width=15) (actual time=41.880..41.881 rows=1 loops=1) -> Bitmap Heap Scan on empresa (cost=2.25..35.63 rows=1364 width=15) (actual time=23.291..38.146 rows=2639 loops=1)\n\n Recheck Cond: (dtcriacao >= (('now'::text)::date - 4)) -> Bitmap Index Scan on idx_empresa_dtcriacao (cost=0.00..2.18 rows=1364 width=0) (actual time=22.946..22.946 rows=2639 loops=1)\n\n Index Cond: (dtcriacao >= (('now'::text)::date - 4)) Total runtime: 42.025 ms(6 rows)explain analyze select max(cnpj) from empresa where dtcriacao >= current_date-5; QUERY PLAN\n\n----------------------------------------------------------------------------------------- Aggregate (cost=54.13..54.13 rows=1 width=15) (actual time=93.265..93.266 rows=1 loops=1) -> Bitmap Heap Scan on empresa (cost=3.35..53.11 rows=2045 width=15) (actual time=26.749..84.553 rows=6380 loops=1)\n\n Recheck Cond: (dtcriacao >= (('now'::text)::date - 5)) -> Bitmap Index Scan on idx_empresa_dtcriacao (cost=0.00..3.24 rows=2045 width=0) (actual time=26.160..26.160 rows=6380 loops=1)\n\n Index Cond: (dtcriacao >= (('now'::text)::date - 5)) Total runtime: 93.439 ms(6 rows)Note that the plan is the same for all 3 queries.However, when enable_indexscan is on, I execute the same 3 queries, and I get the following plans:\nexplain analyze select max(cnpj) from empresa where dtcriacao >= current_date-3; QUERY PLAN-----------------------------------------------------------------------------------------\n\n Aggregate (cost=17.14..17.14 rows=1 width=15) (actual time=35.960..35.961 rows=1 loops=1) -> Index Scan using idx_empresa_dtcriacao on empresa (cost=0.00..16.80 rows=682 width=15) (actual time=0.078..23.215 rows=1985 loops=1)\n\n Index Cond: (dtcriacao >= (('now'::text)::date - 3)) Total runtime: 36.083 ms(4 rows)explain analyze select max(cnpj) from empresa where dtcriacao >= current_date-4; QUERY PLAN\n\n----------------------------------------------------------------------------------------- Aggregate (cost=34.20..34.20 rows=1 width=15) (actual time=40.625..40.626 rows=1 loops=1) -> Index Scan using idx_empresa_dtcriacao on empresa (cost=0.00..33.52 rows=1364 width=15) (actual time=0.071..37.019 rows=2639 loops=1)\n\n Index Cond: (dtcriacao >= (('now'::text)::date - 4)) Total runtime: 40.740 ms(4 rows)explain analyze select max(cnpj) from empresa where dtcriacao >= current_date-5; QUERY PLAN\n\n----------------------------------------------------------------------------------------- Result (cost=32.24..32.24 rows=1 width=0) (actual time=5223.937..5223.938 rows=1 loops=1) InitPlan 1 (returns $0) -> Limit (cost=0.00..32.24 rows=1 width=15) (actual 
time=5223.921..5223.922 rows=1 loops=1)\n\n -> Index Scan Backward using idx_cnpj on empresa (cost=0.00..65925.02 rows=2045 width=15) (actual time=5223.913..5223.913 rows=1 loops=1) Index Cond: ((cnpj)::text IS NOT NULL) Filter: (dtcriacao >= (('now'::text)::date - 5))\n\n Total runtime: 5224.037 ms(7 rows) Note that when I subtract at least 5 from current_date, the plan is changed to an Index Scan Backward on idx_cnpj, which is a worse choice. My question is: Why the cost of Limit on the last query, estimated as 32.24 if the Index Scan Backward is estimated at 65925.02? Since there is a filter based on column dtcriacao, the whole index is going to be analyzed, and Limit is going to wait for the complete Index Scan to complete. Why use idx_cnpj in this case? Why not use idx_empresa_dtcriacao?\n Just for comparison, consider this query:explain analyze select max(cnpj) from empresa; QUERY PLAN-----------------------------------------------------------------------------------------\n\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.133..0.134 rows=1 loops=1) InitPlan 1 (returns $0) -> Limit (cost=0.00..0.00 rows=1 width=15) (actual time=0.120..0.121 rows=1 loops=1) -> Index Scan Backward using idx_cnpj on empresa (cost=0.00..42146.33 rows=9162129 width=15) (actual time=0.114..0.114 rows=1 loops=1)\n\n Index Cond: ((cnpj)::text IS NOT NULL) Total runtime: 0.212 ms(6 rows) In this case, it is correct to use the Index Scan Backward on idx_cnpj, since the Limit will interrupt it just after the first returned value. The estimated Limit cost of 0.00 is ok, even if the Scan cost is estimated at 42146.33.\n I only managed to get all data in OS cache after mounting this new server with 7,7GB of memory, which I can't afford to use permanently. My real server has only 1,7GB of memory and this Index Scan Backward plan takes forever to run (I really don't know how much time), making tons of random seeks. When enable_indexscan id off, the query runs quickly on the 1,7GB server.\n\n Initially I was using statistics for these 2 columns as 200 and 300, but even after changing to 1000, the problem persists. I tried several different values the seq_page_cost, random_page_cost, cpu_tuple_cost, cpu_index_tuple_cost and cpu_operator_cost with no success.\n Could someone explain me this? Thanks,Fabrício dos Anjos SilvaLinkCom Soluções em T.I.",
"msg_date": "Wed, 29 Sep 2010 13:55:26 -0300",
"msg_from": "=?ISO-8859-1?Q?Fabr=EDcio_dos_Anjos_Silva?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong index choice"
},
{
"msg_contents": "Fabrᅵcio dos Anjos Silva<[email protected]> wrote:\n \n> explain analyze select max(cnpj) from empresa where dtcriacao >=\n> current_date-5;\n \n> Result (cost=32.24..32.24 rows=1 width=0) (actual\n> time=5223.937..5223.938 rows=1 loops=1)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..32.24 rows=1 width=15) (actual\n> time=5223.921..5223.922 rows=1 loops=1)\n> -> Index Scan Backward using idx_cnpj on empresa\n> (cost=0.00..65925.02 rows=2045 width=15) (actual\n> time=5223.913..5223.913 rows=1 loops=1)\n> Index Cond: ((cnpj)::text IS NOT NULL)\n> Filter: (dtcriacao >= (('now'::text)::date - 5))\n> Total runtime: 5224.037 ms\n \n> My question is: Why the cost of Limit on the last query, estimated\n> as 32.24 if the Index Scan Backward is estimated at 65925.02?\n \nIf you divide the total cost for the step by the number of rows it\nwould take to read all the way through, you get 32.24; so it clearly\nexpects to find a row which matches the filter condition right away.\n(Or it fails to consider the fact that the filter condition could\ncause it to read multiple rows looking for a match.)\n \n> Since there is a filter based on column dtcriacao, the whole index\n> is going to be analyzed, and Limit is going to wait for the\n> complete Index Scan to complete.\n \nOnly if there are no matching rows. Since you're asking for the\nmax, if it reads in descending sequence on the index, it can stop as\nsoon as it finds one matching row.\n \n-Kevin\n",
"msg_date": "Wed, 29 Sep 2010 16:00:39 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong index choice"
},
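Spelling out the arithmetic behind Kevin's explanation, using the figures from the plan above:

  65925.02 / 2045 = 32.24    (full backward-scan cost spread over the 2045 rows the filter is expected to pass)
  2045 / 9162129  = 0.02%    (estimated selectivity of the dtcriacao filter)

In effect the planner charges the Limit only 1/2045 of the whole scan, which is a good estimate only if qualifying rows are spread evenly through the cnpj ordering; if the few recent rows sit far from the high end of cnpj, the scan runs much longer than costed.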
{
"msg_contents": "=?ISO-8859-1?Q?Fabr=EDcio_dos_Anjos_Silva?= <[email protected]> writes:\n> explain analyze select max(cnpj) from empresa where dtcriacao >=\n> current_date-5;\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------\n> Result (cost=32.24..32.24 rows=1 width=0) (actual time=5223.937..5223.938\n> rows=1 loops=1)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..32.24 rows=1 width=15) (actual\n> time=5223.921..5223.922 rows=1 loops=1)\n> -> Index Scan Backward using idx_cnpj on empresa\n> (cost=0.00..65925.02 rows=2045 width=15) (actual time=5223.913..5223.913\n> rows=1 loops=1)\n> Index Cond: ((cnpj)::text IS NOT NULL)\n> Filter: (dtcriacao >= (('now'::text)::date - 5))\n> Total runtime: 5224.037 ms\n> (7 rows)\n\nBTW, a large part of the reason that it's switching to this plan type\ntoo soon is that you've got random_page_cost set really small:\n\n> seq_page_cost = 0.01\n> random_page_cost = 0.01\n\nI think you need to back those off by an order of magnitude or so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 03 Oct 2010 17:39:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong index choice "
}
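A quick way to test Tom's suggestion without editing postgresql.conf is to back the costs off for a single session first; 0.1 is simply one order of magnitude above the 0.01 used in the original configuration:

SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
EXPLAIN ANALYZE SELECT max(cnpj) FROM empresa WHERE dtcriacao >= current_date - 5;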
] |
[
{
"msg_contents": "Hi,\n\nAre there any significant performance improvements or regressions from 8.4 to 9.0? If yes, which areas (inserts, updates, selects, etc) are those in?\n\nIn a related question, is there any public data that compares the performances of various Postgresql versions? \n\nThanks\n\n\n \n",
"msg_date": "Wed, 29 Sep 2010 17:01:55 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance improvements/regressions from 8.4 to 9.0? "
},
{
"msg_contents": "Andy wrote:\n> Are there any significant performance improvements or regressions from 8.4 to 9.0? If yes, which areas (inserts, updates, selects, etc) are those in?\n> \n\nThere were two major rounds of tinkering to the query planner/optimizer \nthat can impact the types of plans you get from some SELECT statements. \nJoin removal allows pulling out tables that used to be involved in a \nquery in earlier versions. That shouldn't ever cause a regression, but \nsince it will cause different types of plans one is still possible. The \nother changes I'm seeing impact plans relate to increased use of \n\"Materialize\" nodes in some types of queries. Those are also normally \npositive too, but like any plan change there's always a chance for a \ndifferent plan to turn out to be inferior. There are some other query \noptimizer changes too, stuff that impacts like NULL handling and \ncomparisons when you're at the end of the range of the previously \nanalyzed segment of the table. \n\nBut there really wasn't anything changed that will impact INSERT/UPDATE \nstatements much that I'm aware of, or even simple SELECT statements that \ndon't happen to intersect with one of the improved areas. Some earlier \nversions of PostgreSQL had pretty sweeping performance changes to them; \n9.0 has some useful targeted areas, particularly for complicated query \nplans, but not really general across the board improvements. See the \n\"Performance\" section of \nhttp://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.0 for some \ngood examples of most of what I mentioned above, along with some other \nimprovements like finer control over statistics and database query \nparameters. Better performance from 9.0 is more likely to come from \nthings like scaling out reads using multiple slaves, replicating with \nthe Streaming Replication/Hot Standby combination.\n\nNote that if you turn on some of the new replication features, that can \ndisable some write optimizations and slow things down in a slightly \ndifferent way than it did before on the master in the process. \nSpecifically, if you touch the new wal_level parameter, some things that \nused to skip write-ahead log writing will no longer be able to do so. \nBut that situation isn't that much different from earlier versions, \nwhere turning on archive_mode and setting the archive_command introduced \nmany of the same de-optimizations.\n\n> In a related question, is there any public data that compares the performances of various Postgresql versions? \n> \n\nhttp://suckit.blog.hu/2009/09/29/postgresql_history covers 8.0 through \n8.4, which were the versions that showed the biggest percentage changes \nupward. The minor regression you see in 8.4 there is mainly due to a \nchange to the default value of default_statistics_target, which was \noptimized out of the box more for larger queries than tiny ones in that \nversion. That hurt a number of trivial benchmarks a few percent, but in \nthe real world is more likely to be an improvement rather than a problem.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Thu, 30 Sep 2010 01:15:42 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance improvements/regressions from 8.4 to 9.0?"
}
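A purely hypothetical illustration of the join-removal change mentioned above: if customers.id is that table's primary key and no customers columns are referenced, 9.0 can prove the LEFT JOIN cannot affect the result and drop it from the plan, while 8.4 still executes the join:

-- hypothetical schema; the join below contributes nothing to the output
SELECT o.order_id, o.total
FROM orders o
LEFT JOIN customers c ON c.id = o.customer_id;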
] |
[
{
"msg_contents": "Hi everyone. I have a question, and it's well beyond me to even speculate\nabout the inner workings of postgresql on this.\n\nI have a \"places\" table, and a \"coordinates\" column, of type POINT.\n\nIf I want to find every place within, for example, a distance of 1 unit from\nan arbitrary point, I'll do:\n\nCREATE INDEX ON places USING gist (circle(coordinates, 1));\n\nAnd then I'll fetch the desired rows like this:\n\nSELECT * FROM places WHERE circle(coordinates, 1) @> circle('(a,b)', 0);\n(where (a,b) is an arbitrary point)\n\nI'd like to know how this index works, though, as it seems to me the only\nway to have this kind of index to work is to calculate the distance of every\npoint in a square of sides 2*1=2 units centered on (a, b).\n\nSo, am I wrong to think it works like that? If it does work like that, could\nI have instead two columns of type FLOAT (xcoordinate and ycoordinate) and\ncreate traditional b-tree indexes on both of these, and then do something\nlike:\n\nSELECT * FROM places WHERE xcoordinate >= (a-1) AND xcoordinate <= (a+1) AND\nycoordinate >= (b-1) AND ycoordinate <= (b+1) And\nSQRT(POW(a-xcoordinate,2)+POW(b-ycoordinate,2))<=1;\n\nIf you can also pinpoint me to where I can find this sort of information\n(index utilization and planning, performance tuning), I'd be very grateful.\nThank you already,\nMarcelo Zabani.\n\nHi everyone. I have a question, and it's well beyond me to even speculate about the inner workings of postgresql on this.I have a \"places\" table, and a \"coordinates\" column, of type POINT.\nIf I want to find every place within, for example, a distance of 1 unit from an arbitrary point, I'll do:CREATE INDEX ON places USING gist (circle(coordinates, 1));And then I'll fetch the desired rows like this:\nSELECT * FROM places WHERE circle(coordinates, 1) @> circle('(a,b)', 0);(where (a,b) is an arbitrary point)I'd like to know how this index works, though, as it seems to me the only way to have this kind of index to work is to calculate the distance of every point in a square of sides 2*1=2 units centered on (a, b).\nSo, am I wrong to think it works like that? If it does work like that, could I have instead two columns of type FLOAT (xcoordinate and ycoordinate) and create traditional b-tree indexes on both of these, and then do something like:\nSELECT * FROM places WHERE xcoordinate >= (a-1) AND xcoordinate <= (a+1) AND ycoordinate >= (b-1) AND ycoordinate <= (b+1) And SQRT(POW(a-xcoordinate,2)+POW(b-ycoordinate,2))<=1;If you can also pinpoint me to where I can find this sort of information (index utilization and planning, performance tuning), I'd be very grateful.\nThank you already,Marcelo Zabani.",
"msg_date": "Thu, 30 Sep 2010 15:33:03 -0300",
"msg_from": "Marcelo Zabani <[email protected]>",
"msg_from_op": true,
"msg_subject": "gist indexes for distance calculations"
},
{
"msg_contents": "Marcelo Zabani <[email protected]> writes:\n> CREATE INDEX ON places USING gist (circle(coordinates, 1));\n\n> I'd like to know how this index works, though, as it seems to me the only\n> way to have this kind of index to work is to calculate the distance of every\n> point in a square of sides 2*1=2 units centered on (a, b).\n\nI believe it is a bounding-box based implementation; that is, each\nnon-leaf entry in the index stores the bounding box that covers all the\nentries in the child page (plus its children if any). So a lookup\ndescends to only those child pages that could possibly contain entries\noverlapping the target circle.\n\n> So, am I wrong to think it works like that? If it does work like that, could\n> I have instead two columns of type FLOAT (xcoordinate and ycoordinate) and\n> create traditional b-tree indexes on both of these, and then do something\n> like:\n\n> SELECT * FROM places WHERE xcoordinate >= (a-1) AND xcoordinate <= (a+1) AND\n> ycoordinate >= (b-1) AND ycoordinate <= (b+1) And\n> SQRT(POW(a-xcoordinate,2)+POW(b-ycoordinate,2))<=1;\n\nWell, there's nothing stopping you from doing that, but search\nperformance is likely to be a whole lot worse than with an actual 2-D\nindex. The reason is that it'll have to fetch index entries for\neverything in the vertical strip between a-1 and a+1, as well as\neverything in the horizontal strip between b-1 and b+1; most of which is\nnowhere near the target. If your circles are always very very small\nthis might work tolerably, but in most applications the amount of stuff\nfetched soon gets out of hand.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Sep 2010 15:16:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gist indexes for distance calculations "
},
{
"msg_contents": "So let me see if I understand, when searching for everyone within radius \"r\"\nof point (a,b), the gist index will be used like this:\n* C is the circle centered on (a,b) of radius \"r\"\n\n1. Traverse down the tree, starting at the root. Only go down nodes whose\nbounding-box has a non-empty intersection with the circle C (how fast is\nthis verification?)\n2. Once the leaves are reached, verify for everyone of them whether they're\ninside C or not, returning those that are.\n\nIf this really is how it happens, then I ask: What about boxes that\nintersect (does that happen)? What if the boxes aren't \"nice\" (boxes with a\nvery small desired intersection with the circle and a very large quantity of\nunwanted rows).\n\nAlso, you've said that with b-tree indexes on two orthogonal coordinates\n(two columns) postgresql would need to check the ENTIRE vertical strip\nbounded by a-1 and a+1 and the ENTIRE horizontal strip bounded by b-1 and\nb+1 (two limitless, \"infinite\" rectangles)? This leads me to another\nquestion:\n\n- Isn't there an equality check between all the rows returned by both\nindexes, and then the actual distance calculations are performed only for\nthose returned by both indexes? What if I have many more indexes on my\ntable, how are things done?\n\nAnd if I may, one last question:\n- Is box-bounding the index strategy used for all geometric operations with\ngist indexes?\n\nAlso, to Oleg:\nI had personally tested the use of gist indexes (without limiting the number\nof returned rows, however) for more than 15 million rows (with their\ncoordinates distributed a VERY LARGE area, sadly). The results were still\nimpressive to me (although I didn't know what to expect, maximum running\ntimes of around 17ms seemed great to me!).\nAnd sorry for sending this message to your personal email (my mistake).\n\nThanks a lot for all the help, if you can lead me to any docs/articles, I'll\ngladly read them.\n\nSo let me see if I understand, when searching for everyone within radius\n \"r\" of point (a,b), the gist index will be used like this:* C is the circle centered on (a,b) of radius \"r\"1.\n Traverse down the tree, starting at the root. Only go down nodes whose \nbounding-box has a non-empty intersection with the circle C (how fast is\n this verification?)\n2. Once the leaves are reached, verify for everyone of them whether they're inside C or not, returning those that are.If\n this really is how it happens, then I ask: What about boxes that \nintersect (does that happen)? What if the boxes aren't \"nice\" (boxes \nwith a very small desired intersection with the circle and a very large \nquantity of unwanted rows).\nAlso, you've said that with b-tree indexes on two orthogonal \ncoordinates (two columns) postgresql would need to check the ENTIRE \nvertical strip bounded by a-1 and a+1 and the ENTIRE horizontal strip \nbounded by b-1 and b+1 (two limitless, \"infinite\" rectangles)? This \nleads me to another question:\n- Isn't there an equality check between all the rows returned by \nboth indexes, and then the actual distance calculations are performed \nonly for those returned by both indexes? 
What if I have many more \nindexes on my table, how are things done?\nAnd if I may, one last question:- Is box-bounding the index strategy used for all geometric operations with gist indexes?Also, to Oleg:I\n had personally tested the use of gist indexes (without limiting the \nnumber of returned rows, however) for more than 15 million rows (with \ntheir coordinates distributed a VERY LARGE area, sadly). The results \nwere still impressive to me (although I didn't know what to expect, \nmaximum running times of around 17ms seemed great to me!).And sorry for sending this message to your personal email (my mistake).\nThanks a lot for all the help, if you can lead me to any docs/articles, I'll gladly read them.",
"msg_date": "Thu, 30 Sep 2010 23:50:14 -0300",
"msg_from": "Marcelo Zabani <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: gist indexes for distance calculations"
},
{
"msg_contents": ">\n> Thanks a lot for all the help, if you can lead me to any docs/articles,\n> I'll gladly read them.\n\n\nI found this: http://en.wikipedia.org/wiki/R-tree\n\n<http://en.wikipedia.org/wiki/R-tree>Looks like what Tom was talking about,\nja?\n\nKarim\n\nThanks a lot for all the help, if you can lead me to any docs/articles, I'll gladly read them.\nI found this: http://en.wikipedia.org/wiki/R-treeLooks like what Tom was talking about, ja?\nKarim",
"msg_date": "Thu, 30 Sep 2010 21:45:29 -0700",
"msg_from": "Karim Nassar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gist indexes for distance calculations"
},
{
"msg_contents": "On 2010-09-30 20:33, Marcelo Zabani wrote:\n> If you can also pinpoint me to where I can find this sort of information\n> (index utilization and planning, performance tuning), I'd be very grateful.\n> Thank you already,\n> \n\nIsn't this what the knngist patches are for?\nhttps://commitfest.postgresql.org/action/patch_view?id=350\n\nhttp://www.sai.msu.su/~megera/wiki/knngist\n\n\n-- \nJesper\n",
"msg_date": "Fri, 01 Oct 2010 07:56:18 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gist indexes for distance calculations"
},
{
"msg_contents": "On Thu, Sep 30, 2010 at 2:33 PM, Marcelo Zabani <[email protected]> wrote:\n> Hi everyone. I have a question, and it's well beyond me to even speculate\n> about the inner workings of postgresql on this.\n>\n> I have a \"places\" table, and a \"coordinates\" column, of type POINT.\n>\n> If I want to find every place within, for example, a distance of 1 unit from\n> an arbitrary point, I'll do:\n>\n> CREATE INDEX ON places USING gist (circle(coordinates, 1));\n>\n> And then I'll fetch the desired rows like this:\n>\n> SELECT * FROM places WHERE circle(coordinates, 1) @> circle('(a,b)', 0);\n> (where (a,b) is an arbitrary point)\n>\n> I'd like to know how this index works, though, as it seems to me the only\n> way to have this kind of index to work is to calculate the distance of every\n> point in a square of sides 2*1=2 units centered on (a, b).\n>\n> So, am I wrong to think it works like that? If it does work like that, could\n> I have instead two columns of type FLOAT (xcoordinate and ycoordinate) and\n> create traditional b-tree indexes on both of these, and then do something\n> like:\n>\n> SELECT * FROM places WHERE xcoordinate >= (a-1) AND xcoordinate <= (a+1) AND\n> ycoordinate >= (b-1) AND ycoordinate <= (b+1) And\n> SQRT(POW(a-xcoordinate,2)+POW(b-ycoordinate,2))<=1;\n>\n> If you can also pinpoint me to where I can find this sort of information\n> (index utilization and planning, performance tuning), I'd be very grateful.\n\nA quick heads up: It's possible, although it may not necessarily help,\nto further reduce distance calcs by drawing an inner bounding box of\npoints that are confirmed good. Your outer box is made by squaring\nthe circle on lat/lon projection -- you can also calculate the biggest\nlat lon 'rectangle' that completely fits inside the circle, and play\nwith a query that looks something like this (pseudo sql):\n\nselect * from points where (point inside good box) or (point inside\npossible box and dist(point, mypoint < n));\n\nYou get reduction of dist calcs at expense of second gist lookup. You\ncan also, of course, do this on application side, but what's the fun\nin that? :-).\n\nmerlin\n",
"msg_date": "Fri, 1 Oct 2010 12:04:19 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gist indexes for distance calculations"
},
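One possible concrete form of Merlin's pseudo-SQL, written against the two-FLOAT-column layout from the original question (a and b stand for the query point, radius 1; 0.707 is just under 1/sqrt(2), the half-side of the largest square that fits inside the circle). This is a sketch of the idea, not tested against the poster's schema:

SELECT *
FROM places
WHERE
      -- inner box: wholly inside the circle, accept without any distance calculation
      (xcoordinate BETWEEN a - 0.707 AND a + 0.707
       AND ycoordinate BETWEEN b - 0.707 AND b + 0.707)
   OR
      -- outer box: possible matches only, confirm with the exact distance check
      (xcoordinate BETWEEN a - 1 AND a + 1
       AND ycoordinate BETWEEN b - 1 AND b + 1
       AND sqrt(power(a - xcoordinate, 2) + power(b - ycoordinate, 2)) <= 1);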
{
"msg_contents": "Thanks a lot everyone for all the info! It is all really helpful.\n\n\n2010/10/1 Merlin Moncure <[email protected]>\n\n> On Thu, Sep 30, 2010 at 2:33 PM, Marcelo Zabani <[email protected]>\n> wrote:\n> > Hi everyone. I have a question, and it's well beyond me to even speculate\n> > about the inner workings of postgresql on this.\n> >\n> > I have a \"places\" table, and a \"coordinates\" column, of type POINT.\n> >\n> > If I want to find every place within, for example, a distance of 1 unit\n> from\n> > an arbitrary point, I'll do:\n> >\n> > CREATE INDEX ON places USING gist (circle(coordinates, 1));\n> >\n> > And then I'll fetch the desired rows like this:\n> >\n> > SELECT * FROM places WHERE circle(coordinates, 1) @> circle('(a,b)', 0);\n> > (where (a,b) is an arbitrary point)\n> >\n> > I'd like to know how this index works, though, as it seems to me the only\n> > way to have this kind of index to work is to calculate the distance of\n> every\n> > point in a square of sides 2*1=2 units centered on (a, b).\n> >\n> > So, am I wrong to think it works like that? If it does work like that,\n> could\n> > I have instead two columns of type FLOAT (xcoordinate and ycoordinate)\n> and\n> > create traditional b-tree indexes on both of these, and then do something\n> > like:\n> >\n> > SELECT * FROM places WHERE xcoordinate >= (a-1) AND xcoordinate <= (a+1)\n> AND\n> > ycoordinate >= (b-1) AND ycoordinate <= (b+1) And\n> > SQRT(POW(a-xcoordinate,2)+POW(b-ycoordinate,2))<=1;\n> >\n> > If you can also pinpoint me to where I can find this sort of information\n> > (index utilization and planning, performance tuning), I'd be very\n> grateful.\n>\n> A quick heads up: It's possible, although it may not necessarily help,\n> to further reduce distance calcs by drawing an inner bounding box of\n> points that are confirmed good. Your outer box is made by squaring\n> the circle on lat/lon projection -- you can also calculate the biggest\n> lat lon 'rectangle' that completely fits inside the circle, and play\n> with a query that looks something like this (pseudo sql):\n>\n> select * from points where (point inside good box) or (point inside\n> possible box and dist(point, mypoint < n));\n>\n> You get reduction of dist calcs at expense of second gist lookup. You\n> can also, of course, do this on application side, but what's the fun\n> in that? :-).\n>\n> merlin\n>\n\n\n\n-- \nMarcelo Zabani\n(19) 9341-0221\n\nThanks a lot everyone for all the info! It is all really helpful.\n \n2010/10/1 Merlin Moncure <[email protected]>\n\n\n\nOn Thu, Sep 30, 2010 at 2:33 PM, Marcelo Zabani <[email protected]> wrote:> Hi everyone. I have a question, and it's well beyond me to even speculate\n> about the inner workings of postgresql on this.>> I have a \"places\" table, and a \"coordinates\" column, of type POINT.>> If I want to find every place within, for example, a distance of 1 unit from\n> an arbitrary point, I'll do:>> CREATE INDEX ON places USING gist (circle(coordinates, 1));>> And then I'll fetch the desired rows like this:>> SELECT * FROM places WHERE circle(coordinates, 1) @> circle('(a,b)', 0);\n> (where (a,b) is an arbitrary point)>> I'd like to know how this index works, though, as it seems to me the only> way to have this kind of index to work is to calculate the distance of every\n> point in a square of sides 2*1=2 units centered on (a, b).>> So, am I wrong to think it works like that? 
If it does work like that, could> I have instead two columns of type FLOAT (xcoordinate and ycoordinate) and\n> create traditional b-tree indexes on both of these, and then do something> like:>> SELECT * FROM places WHERE xcoordinate >= (a-1) AND xcoordinate <= (a+1) AND> ycoordinate >= (b-1) AND ycoordinate <= (b+1) And\n> SQRT(POW(a-xcoordinate,2)+POW(b-ycoordinate,2))<=1;>> If you can also pinpoint me to where I can find this sort of information> (index utilization and planning, performance tuning), I'd be very grateful.\nA quick heads up: It's possible, although it may not necessarily help,to further reduce distance calcs by drawing an inner bounding box ofpoints that are confirmed good. Your outer box is made by squaring\nthe circle on lat/lon projection -- you can also calculate the biggestlat lon 'rectangle' that completely fits inside the circle, and playwith a query that looks something like this (pseudo sql):select * from points where (point inside good box) or (point inside\npossible box and dist(point, mypoint < n));You get reduction of dist calcs at expense of second gist lookup. Youcan also, of course, do this on application side, but what's the funin that? :-).\nmerlin-- Marcelo Zabani(19) 9341-0221",
"msg_date": "Fri, 1 Oct 2010 14:12:01 -0300",
"msg_from": "Marcelo Zabani <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: gist indexes for distance calculations"
},
{
"msg_contents": "On Fri, Oct 1, 2010 at 1:56 AM, Jesper Krogh <[email protected]> wrote:\n> On 2010-09-30 20:33, Marcelo Zabani wrote:\n>>\n>> If you can also pinpoint me to where I can find this sort of information\n>> (index utilization and planning, performance tuning), I'd be very\n>> grateful.\n>> Thank you already,\n>>\n>\n> Isn't this what the knngist patches are for?\n> https://commitfest.postgresql.org/action/patch_view?id=350\n>\n> http://www.sai.msu.su/~megera/wiki/knngist\n\nThose are for when you want to order by distance; the OP is trying to\n*filter* by distance, which is different.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Mon, 11 Oct 2010 20:52:23 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: gist indexes for distance calculations"
}
] |
[
{
"msg_contents": "Simon, Greg, etc.,\n\nJust barked my nose against a major performance issue with CE &\npartitioning, and was wondering if anyone had poked at it.\n\nThe issue is this: when a partitioned table is evaluated by the planner\nfor constraint exclusion, it evaluates ALL check constraints on each\npartition, regardless of whether or not they include a referenced column\nin the query (and whether or not they relate to partitioning). If some\nof those check constraints are expensive (like GIS functions) then this\ncan add considerably (on the order of 2ms per partition) to planning time.\n\nIf this is news to anyone, I have a nice test case.\n\nSo ... how plausible is it to fix the planner so that it only evaluates\ncheck constraints on a partition if there is a match of referenced\ncolumns? Are we talking \"moderate\", \"hard\" or \"nearly impossible\"?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Fri, 01 Oct 2010 16:46:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Issue for partitioning with extra check constriants"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> The issue is this: when a partitioned table is evaluated by the planner\n> for constraint exclusion, it evaluates ALL check constraints on each\n> partition, regardless of whether or not they include a referenced column\n> in the query (and whether or not they relate to partitioning).\n\n[ shrug ... ] We do not promise that the current partitioning scheme\nscales to the number of partitions where this is likely to be an\ninteresting concern.\n\n*After* we have a real partitioning scheme, it might be worth worrying\nabout this sort of problem, if it's still a problem then.\n\n> Are we talking \"moderate\", \"hard\" or \"nearly impossible\"?\n\nWe're talking \"wasted effort on a dead-end situation\". The time that\nwould go into this would be much better spent on real partitioning.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Oct 2010 19:56:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue for partitioning with extra check constriants "
},
{
"msg_contents": "\n> [ shrug ... ] We do not promise that the current partitioning scheme\n> scales to the number of partitions where this is likely to be an\n> interesting concern.\n\nActually, you can demonstrate pretty significant response time delays on\nonly 50 partitions.\n\n> We're talking \"wasted effort on a dead-end situation\". The time that\n> would go into this would be much better spent on real partitioning.\n\nThat only applies if someone is working on \"real partitioning\". Is anyone?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Fri, 01 Oct 2010 16:57:52 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issue for partitioning with extra check constriants"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> [ shrug ... ] We do not promise that the current partitioning scheme\n>> scales to the number of partitions where this is likely to be an\n>> interesting concern.\n\n> Actually, you can demonstrate pretty significant response time delays on\n> only 50 partitions.\n\nAnd your point is? The design center for the current setup is maybe 5\nor 10 partitions. We didn't intend it to be used for more partitions\nthan you might have spindles to spread the data across.\n\n>> We're talking \"wasted effort on a dead-end situation\". The time that\n>> would go into this would be much better spent on real partitioning.\n\n> That only applies if someone is working on \"real partitioning\". Is anyone?\n\nThere is discussion going on, and even if there weren't, the argument\nstill applies. Time spent on this band-aid would be time taken away\nfrom a real solution. In case you haven't noticed, we have very finite\namounts of manpower that's competent to do planner surgery.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Oct 2010 20:16:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue for partitioning with extra check constriants "
},
{
"msg_contents": "\n> And your point is? The design center for the current setup is maybe 5\n> or 10 partitions. We didn't intend it to be used for more partitions\n> than you might have spindles to spread the data across.\n\nWhere did that come from? It certainly wasn't anywhere when the feature\nwas introduced. Simon intended for this version of partitioning to\nscale to 100-200 partitions (and it does, provided that you dump all\nother table constraints), and partitioning has nothing to do with\nspindles. I think you're getting it mixed up with tablespaces.\n\nThe main reason for partitioning is ease of maintenance (VACUUM,\ndropping partitions, etc.) not any kind of I/O optimization.\n\nI'd like to add the following statement to our docs on partitioning, in\nsection 5.9.4:\n\n=====\n\nConstraint exclusion is tested for every CHECK constraint on the\npartitions, even CHECK constraints which have nothing to do with the\npartitioning scheme. This can add siginficant extra planner time,\nespecially if your partitions have CHECK constraints which are costly to\nevaluate. For performance, it can be a good idea to eliminate all extra\nCHECK constraints on partitions or to re-implement them as triggers.\n\n=====\n\n>In case you haven't noticed, we have very finite\n> amounts of manpower that's competent to do planner surgery.\n\nPoint.\n\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 04 Oct 2010 11:34:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Issue for partitioning with extra check constriants"
},
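A hypothetical sketch of the workaround described in the proposed wording: keep only the cheap partitioning CHECK on each child, so constraint exclusion stays fast, and enforce the expensive condition with a trigger instead. The schema below is invented for illustration, and ST_Contains assumes PostGIS is installed:

CREATE TABLE readings_region_1 (
    CHECK (region_id = 1)   -- cheap: this is all constraint exclusion has to evaluate
) INHERITS (readings);

CREATE OR REPLACE FUNCTION readings_region_1_check() RETURNS trigger AS $$
BEGIN
    -- expensive GIS validation moved out of the CHECK constraint
    IF NOT ST_Contains((SELECT geom FROM regions WHERE id = 1), NEW.location) THEN
        RAISE EXCEPTION 'location is outside region 1';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER readings_region_1_check
    BEFORE INSERT OR UPDATE ON readings_region_1
    FOR EACH ROW EXECUTE PROCEDURE readings_region_1_check();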
{
"msg_contents": "On Mon, 2010-10-04 at 11:34 -0700, Josh Berkus wrote:\n> > And your point is? The design center for the current setup is maybe 5\n> > or 10 partitions. We didn't intend it to be used for more partitions\n> > than you might have spindles to spread the data across.\n> \n> Where did that come from? \n\nYeah that is a bit odd. I don't recall any discussion in regards to such\na weird limitation.\n\n> It certainly wasn't anywhere when the feature\n> was introduced. Simon intended for this version of partitioning to\n> scale to 100-200 partitions (and it does, provided that you dump all\n> other table constraints), and partitioning has nothing to do with\n> spindles. I think you're getting it mixed up with tablespaces.\n\nGreat! that would be an excellent addition.\n\n\n> \n> The main reason for partitioning is ease of maintenance (VACUUM,\n> dropping partitions, etc.) not any kind of I/O optimization.\n\nWell that is certainly \"a\" main reason but it is not \"the\" main reason.\nWe have lots of customers using it to manage very large amounts of data\nusing the constraint exclusion features (and gaining from the smaller\nindex sizes).\n\n\nJd\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Mon, 04 Oct 2010 11:44:26 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue for partitioning with extra check constriants"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> And your point is? The design center for the current setup is maybe 5\n>> or 10 partitions. We didn't intend it to be used for more partitions\n>> than you might have spindles to spread the data across.\n\n> Where did that come from? It certainly wasn't anywhere when the feature\n> was introduced. Simon intended for this version of partitioning to\n> scale to 100-200 partitions (and it does, provided that you dump all\n> other table constraints), and partitioning has nothing to do with\n> spindles. I think you're getting it mixed up with tablespaces.\n\n[ shrug... ] If Simon thought that, he obviously hadn't done any\ncareful study of the planner's performance. You can maybe get that far\nas long as the partitions have just very simple constraints, but\nanything nontrivial won't scale. As you found out.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 04 Oct 2010 19:36:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Issue for partitioning with extra check constriants "
}
] |
[
{
"msg_contents": "On 3/10/2010 7:39 AM, Richard Troy wrote:\n\n> I can't speak for modern \"OpenVMS\", but \"back in the day\", VMS had a very\n> effective memory management strategy which, in effect, made it as if all\n> memory was a cache for disk. It did this by means of a mechanism by which\n> to identify all potentially reachable disk space. When disk was read in,\n> an entry would be made mapping the memory to the disk space from which it\n> came - and if it was later updated, the mapping entry was marked \"dirty.\"\n> Whenever disk access was contemplated, a check was made to see if it was\n> already in memory and if so, it'd provide access to the in-memory copy\n> instead of doing the read again. (This also permitted, under some\n> circumstances, to reduce write activity as well.)\n\nThat's how Linux's memory management works, too, at least if I \nunderstand you correctly. Pretty much every modern OS does it. Pg is \nreliant on the operating system's disk cache, and has some minimal \nknowledge of it (see effective_cache_size) .\n\nI don't know how shared_buffers management works, but certainly at the \nOS cache level that's what already happens.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Sun, 03 Oct 2010 10:42:45 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How does PG know if data is in memory?"
}
] |
[
{
"msg_contents": "Hi,\n\nfor whom it may concern:\nhttp://pdos.csail.mit.edu/mosbench/\n\nThey tested with 8.3.9, i wonder what results 9.0 would give.\n\nBest regards and keep up the good work\n\nHakan\n\nHi,for whom it may concern:http://pdos.csail.mit.edu/mosbench/They tested with 8.3.9, i wonder what results 9.0 would give.\nBest regards and keep up the good workHakan",
"msg_date": "Mon, 4 Oct 2010 16:44:23 +0200",
"msg_from": "Hakan Kocaman <[email protected]>",
"msg_from_op": true,
"msg_subject": "MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "On Mon, Oct 4, 2010 at 10:44 AM, Hakan Kocaman <[email protected]> wrote:\n> for whom it may concern:\n> http://pdos.csail.mit.edu/mosbench/\n> They tested with 8.3.9, i wonder what results 9.0 would give.\n> Best regards and keep up the good work\n> Hakan\n\nHere's the most relevant bit to us:\n\n--\nThe “Stock” line in Figures 7 and 8 shows that Post- greSQL has poor\nscalability on the stock kernel. The first bottleneck we encountered,\nwhich caused the read/write workload’s total throughput to peak at\nonly 28 cores, was due to PostgreSQL’s design. PostgreSQL implements\nrow- and table-level locks atop user-level mutexes; as a result, even\na non-conflicting row- or table-level lock acquisition requires\nexclusively locking one of only 16 global mutexes. This leads to\nunnecessary contention for non-conflicting acquisitions of the same\nlock—as seen in the read/write workload—and to false contention\nbetween unrelated locks that hash to the same exclusive mutex. We\naddress this problem by rewriting PostgreSQL’s row- and table-level\nlock manager and its mutexes to be lock-free in the uncontended case,\nand by increasing the number of mutexes from 16 to 1024.\n--\n\nI believe the \"one of only 16 global mutexes\" comment is referring to\nNUM_LOCK_PARTITIONS (there's also NUM_BUFFER_PARTITIONS, but that\nwouldn't be relevant for row and table-level locks). Increasing that\nfrom 16 to 1024 wouldn't be free and it's not clear to me that they've\ndone anything to work around the downsides of such a change. Perhaps\nit's worthwhile anyway on a 48-core machine! The use of lock-free\ntechniques seems quite interesting; unfortunately, I know next to\nnothing about the topic and this paper doesn't provide much of an\nintroduction. Anyone have a reference to a good introductory paper on\nthe topic?\n\nThe other sort of interesting thing that they mention is that\napparently I/O between shared buffers and the underlying data files\ncauses a lot of kernel contention due to inode locks induced by\nlseek(). There's nothing much we can do about that within PG but\nsurely it would be nice if it got fixed upstream.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Mon, 4 Oct 2010 13:13:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MIT benchmarks pgsql multicore (up to 48)performance"
},
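To make the false-contention point above concrete, here is a small standalone illustration. The hash function and lock counts are made up for the example (PostgreSQL actually hashes the LOCKTAG and reduces it to a partition number); the only point is how often two unrelated lock tags land on the same partition mutex at 16 versus 1024 partitions.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NLOCKS 64                /* pretend 64 distinct locks are held at once */

/* toy integer hash standing in for hashing a lock tag */
static uint32_t toy_hash(uint32_t tag)
{
    tag ^= tag >> 16;  tag *= 0x7feb352dU;
    tag ^= tag >> 15;  tag *= 0x846ca68bU;
    tag ^= tag >> 16;
    return tag;
}

/* count pairs of distinct lock tags that map to the same partition mutex */
static int colliding_pairs(const uint32_t *tags, int n, unsigned npartitions)
{
    int pairs = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (toy_hash(tags[i]) % npartitions ==
                toy_hash(tags[j]) % npartitions)
                pairs++;
    return pairs;
}

int main(void)
{
    uint32_t tags[NLOCKS];

    srand(42);
    for (int i = 0; i < NLOCKS; i++)
        tags[i] = (uint32_t) rand();

    printf("falsely shared pairs,   16 partitions: %d\n",
           colliding_pairs(tags, NLOCKS, 16));
    printf("falsely shared pairs, 1024 partitions: %d\n",
           colliding_pairs(tags, NLOCKS, 1024));
    return 0;
}

With 64 held locks, roughly one pair in sixteen of all pairs shares a partition at 16 partitions, while almost none do at 1024, which is the effect the paper is after when it raises the mutex count.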
{
"msg_contents": "\nOn Oct 4, 2010, at 13:13 , Robert Haas wrote:\n\n> On Mon, Oct 4, 2010 at 10:44 AM, Hakan Kocaman <[email protected]> wrote:\n>> for whom it may concern:\n>> http://pdos.csail.mit.edu/mosbench/\n>> They tested with 8.3.9, i wonder what results 9.0 would give.\n>> Best regards and keep up the good work\n>> Hakan\n> \n> Here's the most relevant bit to us:\n\n<snip/>\n\n> The use of lock-free\n> techniques seems quite interesting; unfortunately, I know next to\n> nothing about the topic and this paper doesn't provide much of an\n> introduction. Anyone have a reference to a good introductory paper on\n> the topic?\n\nThe README in the postgres section of the git repo leads me to think the code that includes the fixes it there, if someone wants to look into it (wrt to the Postgres lock manager changes). Didn't check the licensing.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n",
"msg_date": "Mon, 4 Oct 2010 13:38:33 -0400",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "On Mon, Oct 4, 2010 at 1:38 PM, Michael Glaesemann <[email protected]> wrote:\n>\n> On Oct 4, 2010, at 13:13 , Robert Haas wrote:\n>\n>> On Mon, Oct 4, 2010 at 10:44 AM, Hakan Kocaman <[email protected]> wrote:\n>>> for whom it may concern:\n>>> http://pdos.csail.mit.edu/mosbench/\n>>> They tested with 8.3.9, i wonder what results 9.0 would give.\n>>> Best regards and keep up the good work\n>>> Hakan\n>>\n>> Here's the most relevant bit to us:\n>\n> <snip/>\n>\n>> The use of lock-free\n>> techniques seems quite interesting; unfortunately, I know next to\n>> nothing about the topic and this paper doesn't provide much of an\n>> introduction. Anyone have a reference to a good introductory paper on\n>> the topic?\n>\n> The README in the postgres section of the git repo leads me to think the code that includes the fixes it there, if someone wants to look into it (wrt to the Postgres lock manager changes). Didn't check the licensing.\n\nIt does, but it's a bunch of x86-specific hacks that breaks various\nimportant features and include comments like \"use usual technique for\nlock-free thingamabob\". So even if the licensing is/were suitable,\nthe code's not usable. I think the paper is neat from the point of\nview of providing us with some information about where the scalability\nbottlenecks might be on hardware to which most of us don't have easy\naccess, but as far as the implementation goes I think we're on our\nown.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Mon, 4 Oct 2010 13:47:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "I wasn't involved in this work but I do know a bit about it. Sadly, the\nwork on Postgres performance was cut down to under a page, complete\nwith the amazing offhand mention of \"rewriting PostgreSQL's lock\nmanager\". Here are a few more details...\n\nThe benchmarks in this paper are all about stressing the kernel. The\ndatabase is entirely in memory -- it's stored on tmpfs rather than on\ndisk, and it fits within shared_buffers. The workload consists of index\nlookups and inserts on a single table. You can fill in all the caveats\nabout what conclusions can and cannot be drawn from this workload.\n\nThe big takeaway for -hackers, I think, is that lock manager\nperformance is going to be an issue for large multicore systems, and\nthe uncontended cases need to be lock-free. That includes cases where\nmultiple threads are trying to acquire the same lock in compatible\nmodes.\n\nCurrently even acquiring a shared heavyweight lock requires taking out\nan exclusive LWLock on the partition, and acquiring shared LWLocks\nrequires acquiring a spinlock. All of this gets more expensive on\nmulticores, where even acquiring spinlocks can take longer than the\nwork being done in the critical section.\n\nTheir modifications to Postgres should be available in the code that\nwas published last night. As I understand it, the approach is to\nimplement LWLocks with atomic operations on a counter that contains\nboth the exclusive and shared lock count. Heavyweight locks do\nsomething similar but with counters for each lock mode packed into a\nword.\n\nNote that their implementation of the lock manager omits some features\nfor simplicity, like deadlock detection, 2PC, and probably any\nsemblance of portability. (These are the sort of things we're allowed\nto do in the research world! :-)\n\nThe other major bottleneck they ran into was a kernel one: reading from\nthe heap file requires a couple lseek operations, and Linux acquires a\nmutex on the inode to do that. The proper place to fix this is\ncertainly in the kernel but it may be possible to work around in\nPostgres.\n\nDan\n\n-- \nDan R. K. Ports MIT CSAIL http://drkp.net/\n",
"msg_date": "Mon, 4 Oct 2010 13:55:45 -0400",
"msg_from": "Dan Ports <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MIT benchmarks pgsql multicore (up to 48)performance"
},
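For the curious, a rough C11 sketch of the kind of lock word described above, with the shared-holder count and an exclusive bit packed into one atomic integer so that an uncontended shared acquisition is a single compare-and-swap rather than a spinlock-protected update. This is purely illustrative: it is not the MOSBENCH code, it omits the wait queue and all the bookkeeping a real LWLock needs, and the names are invented.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* hypothetical lock word: bit 63 = exclusive holder, low bits = shared count */
typedef struct { _Atomic uint64_t state; } lwlock_t;

#define EXCLUSIVE_BIT ((uint64_t) 1 << 63)

static bool lwlock_try_shared(lwlock_t *lock)
{
    uint64_t old = atomic_load_explicit(&lock->state, memory_order_relaxed);

    while (!(old & EXCLUSIVE_BIT))
    {
        /* add one shared holder, as long as no exclusive holder appeared */
        if (atomic_compare_exchange_weak_explicit(&lock->state, &old, old + 1,
                                                  memory_order_acquire,
                                                  memory_order_relaxed))
            return true;        /* uncontended path: one CAS, no spinlock */
    }
    return false;               /* conflicting holder: caller has to wait */
}

static void lwlock_release_shared(lwlock_t *lock)
{
    atomic_fetch_sub_explicit(&lock->state, 1, memory_order_release);
}

static bool lwlock_try_exclusive(lwlock_t *lock)
{
    uint64_t expected = 0;      /* no shared holders, no exclusive holder */
    return atomic_compare_exchange_strong_explicit(&lock->state, &expected,
                                                   EXCLUSIVE_BIT,
                                                   memory_order_acquire,
                                                   memory_order_relaxed);
}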
{
"msg_contents": "Here's a video on lock-free hashing for example:\n\nhttp://video.google.com/videoplay?docid=2139967204534450862#\n\nI guess by \"lock-free in the uncontended case\" they mean the buffer\ncache manager is lock-free unless you're actually contending on the\nsame buffer?\n",
"msg_date": "Mon, 4 Oct 2010 11:06:27 -0700",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "On Mon, Oct 04, 2010 at 01:13:36PM -0400, Robert Haas wrote:\n> I believe the \"one of only 16 global mutexes\" comment is referring to\n> NUM_LOCK_PARTITIONS (there's also NUM_BUFFER_PARTITIONS, but that\n> wouldn't be relevant for row and table-level locks).\n\nYes -- my understanding is that they hit two lock-related problems:\n 1) LWLock contention caused by acquiring the same lock in compatible\n modes (e.g. multiple shared locks)\n 2) false contention caused by acquiring two locks that hashed to the\n same partition\nand the first was the worse problem. The lock-free structures helpe\nwith both, so the impact of changing NUM_LOCK_PARTITIONS was less\ninteresting.\n\nDan\n\n-- \nDan R. K. Ports MIT CSAIL http://drkp.net/\n",
"msg_date": "Mon, 4 Oct 2010 14:35:42 -0400",
"msg_from": "Dan Ports <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "Dan,\n\n(btw, OpenSQL Confererence is going to be at MIT in 2 weeks. Think\nanyone from the MOSBENCH team could attend?\nhttp://www.opensqlcamp.org/Main_Page)\n\n> The big takeaway for -hackers, I think, is that lock manager\n> performance is going to be an issue for large multicore systems, and\n> the uncontended cases need to be lock-free. That includes cases where\n> multiple threads are trying to acquire the same lock in compatible\n> modes.\n\nYes; we were aware of this due to work Jignesh did at Sun on TPC-E.\n\n> Currently even acquiring a shared heavyweight lock requires taking out\n> an exclusive LWLock on the partition, and acquiring shared LWLocks\n> requires acquiring a spinlock. All of this gets more expensive on\n> multicores, where even acquiring spinlocks can take longer than the\n> work being done in the critical section.\n\nCertainly, the question has always been how to fix it without breaking\nmajor features and endangering data integrity.\n\n> Note that their implementation of the lock manager omits some features\n> for simplicity, like deadlock detection, 2PC, and probably any\n> semblance of portability. (These are the sort of things we're allowed\n> to do in the research world! :-)\n\nWell, nice that you did! We'd never have that much time to experiment\nwith non-production stuff as a group in the project. So, now we have a\ntheoretical solution which we can look at maybe implementing parts of in\nsome watered-down form.\n\n> The other major bottleneck they ran into was a kernel one: reading from\n> the heap file requires a couple lseek operations, and Linux acquires a\n> mutex on the inode to do that. The proper place to fix this is\n> certainly in the kernel but it may be possible to work around in\n> Postgres.\n\nOr we could complain to Kernel.org. They've been fairly responsive in\nthe past. Too bad this didn't get posted earlier; I just got back from\nLinuxCon.\n\nSo you know someone who can speak technically to this issue? I can put\nthem in touch with the Linux geeks in charge of that part of the kernel\ncode.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 04 Oct 2010 11:49:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "On Oct 4, 2010, at 11:06, Greg Stark <[email protected]> wrote:\n\n> I guess by \"lock-free in the uncontended case\" they mean the buffer\n> cache manager is lock-free unless you're actually contending on the\n> same buffer?\n\nThat refers to being able to acquire non-conflicting row/table locks without needing an exclusive LWLock, and acquiring shared LWLocks without spinlocks if possible.\n\nI think the buffer cache manager is the next bottleneck after the row/table lock manager. Seems like it would also be a good candidate for similar techniques, but that's totally uninformed speculation on my part.\n\nDan",
"msg_date": "Mon, 4 Oct 2010 12:22:32 -0700",
"msg_from": "Dan Ports <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "On Mon, Oct 4, 2010 at 8:44 AM, Hakan Kocaman <[email protected]> wrote:\n> Hi,\n> for whom it may concern:\n> http://pdos.csail.mit.edu/mosbench/\n> They tested with 8.3.9, i wonder what results 9.0 would give.\n> Best regards and keep up the good work\n\nThey mention that these tests were run on the older 8xxx series\nopterons which has much slower memory speed and HT speed as well. I\nwonder how much better the newer 6xxx series magny cours would have\ndone on it... When I tested some simple benchmarks like pgbench, I\ngot scalability right to 48 processes on our 48 core magny cours\nmachines.\n\nStill, lots of room for improvement in kernel and pgsql.\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Mon, 4 Oct 2010 13:35:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "On 10/04/10 20:49, Josh Berkus wrote:\n\n>> The other major bottleneck they ran into was a kernel one: reading from\n>> the heap file requires a couple lseek operations, and Linux acquires a\n>> mutex on the inode to do that. The proper place to fix this is\n>> certainly in the kernel but it may be possible to work around in\n>> Postgres.\n> \n> Or we could complain to Kernel.org. They've been fairly responsive in\n> the past. Too bad this didn't get posted earlier; I just got back from\n> LinuxCon.\n> \n> So you know someone who can speak technically to this issue? I can put\n> them in touch with the Linux geeks in charge of that part of the kernel\n> code.\n\nHmmm... lseek? As in \"lseek() then read() or write()\" idiom? It AFAIK\ncannot be fixed since you're modifying the global \"strean position\"\nvariable and something has got to lock that.\n\nOTOH, pread() / pwrite() don't have to do that.\n\n",
"msg_date": "Thu, 07 Oct 2010 00:31:19 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "On Wed, Oct 6, 2010 at 5:31 PM, Ivan Voras <[email protected]> wrote:\n> On 10/04/10 20:49, Josh Berkus wrote:\n>\n>>> The other major bottleneck they ran into was a kernel one: reading from\n>>> the heap file requires a couple lseek operations, and Linux acquires a\n>>> mutex on the inode to do that. The proper place to fix this is\n>>> certainly in the kernel but it may be possible to work around in\n>>> Postgres.\n>>\n>> Or we could complain to Kernel.org. They've been fairly responsive in\n>> the past. Too bad this didn't get posted earlier; I just got back from\n>> LinuxCon.\n>>\n>> So you know someone who can speak technically to this issue? I can put\n>> them in touch with the Linux geeks in charge of that part of the kernel\n>> code.\n>\n> Hmmm... lseek? As in \"lseek() then read() or write()\" idiom? It AFAIK\n> cannot be fixed since you're modifying the global \"strean position\"\n> variable and something has got to lock that.\n>\n> OTOH, pread() / pwrite() don't have to do that.\n\nWhile lseek is very \"cheap\" it is like any other system call in that\nwhen you multiple \"cheap\" times \"a jillion\" you end up with \"notable\"\nor even \"lots\". I've personally seen notable performance improvements\nby switching to pread/pwrite instead of lseek+{read,write}. For\nplatforms that don't implement pread or pwrite, wrapper calls are\ntrivial to produce. One less system call is, in this case, 50% fewer.\n\n\n-- \nJon\n",
"msg_date": "Wed, 6 Oct 2010 17:34:20 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
},
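A hedged sketch of the trivial wrapper mentioned above for platforms that lack pread(): HAVE_PREAD here stands in for whatever configure-style test a real build would use, and the fallback branch obviously keeps the extra syscall and the shared file-position update that pread() is meant to avoid.

#include <sys/types.h>
#include <unistd.h>

static ssize_t my_pread(int fd, void *buf, size_t nbytes, off_t offset)
{
#ifdef HAVE_PREAD
    /* one syscall, no seek, file position untouched */
    return pread(fd, buf, nbytes, offset);
#else
    /* emulation for platforms without pread(): two syscalls, and it
     * moves the shared file position, so callers must not rely on it */
    if (lseek(fd, offset, SEEK_SET) < 0)
        return -1;
    return read(fd, buf, nbytes);
#endif
}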
{
"msg_contents": "On Wed, Oct 6, 2010 at 6:31 PM, Ivan Voras <[email protected]> wrote:\n> On 10/04/10 20:49, Josh Berkus wrote:\n>\n>>> The other major bottleneck they ran into was a kernel one: reading from\n>>> the heap file requires a couple lseek operations, and Linux acquires a\n>>> mutex on the inode to do that. The proper place to fix this is\n>>> certainly in the kernel but it may be possible to work around in\n>>> Postgres.\n>>\n>> Or we could complain to Kernel.org. They've been fairly responsive in\n>> the past. Too bad this didn't get posted earlier; I just got back from\n>> LinuxCon.\n>>\n>> So you know someone who can speak technically to this issue? I can put\n>> them in touch with the Linux geeks in charge of that part of the kernel\n>> code.\n>\n> Hmmm... lseek? As in \"lseek() then read() or write()\" idiom? It AFAIK\n> cannot be fixed since you're modifying the global \"strean position\"\n> variable and something has got to lock that.\n\nWell, there are lock free algorithms using CAS, no?\n\n> OTOH, pread() / pwrite() don't have to do that.\n\nHey, I didn't know about those. That sounds like it might be worth\ninvestigating, though I confess I lack a 48-core machine on which to\nmeasure the alleged benefit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Wed, 6 Oct 2010 20:39:48 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "Ivan Voras <[email protected]> writes:\n> On 10/04/10 20:49, Josh Berkus wrote:\n>>> The other major bottleneck they ran into was a kernel one: reading from\n>>> the heap file requires a couple lseek operations, and Linux acquires a\n>>> mutex on the inode to do that.\n\n> Hmmm... lseek? As in \"lseek() then read() or write()\" idiom? It AFAIK\n> cannot be fixed since you're modifying the global \"strean position\"\n> variable and something has got to lock that.\n\nUm, there is no \"global stream position\" associated with an inode.\nA file position is associated with an open-file descriptor.\n\nIf Josh quoted the problem correctly, the issue is that the kernel is\nlocking a file's inode (which may be referenced by quite a lot of file\ndescriptors) in order to change the state of one file descriptor.\nIt sure sounds like a possible source of contention to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 06 Oct 2010 21:25:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance "
},
{
"msg_contents": "* Robert Haas ([email protected]) wrote:\n> Hey, I didn't know about those. That sounds like it might be worth\n> investigating, though I confess I lack a 48-core machine on which to\n> measure the alleged benefit.\n\nI've got a couple 24-core systems, if it'd be sufficiently useful to\ntest with..\n\n\tStephen",
"msg_date": "Wed, 6 Oct 2010 21:30:12 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to\n\t48)performance"
},
{
"msg_contents": "On Wed, Oct 6, 2010 at 9:30 PM, Stephen Frost <[email protected]> wrote:\n> * Robert Haas ([email protected]) wrote:\n>> Hey, I didn't know about those. That sounds like it might be worth\n>> investigating, though I confess I lack a 48-core machine on which to\n>> measure the alleged benefit.\n>\n> I've got a couple 24-core systems, if it'd be sufficiently useful to\n> test with..\n\nIt's good to be you.\n\nI don't suppose you could try to replicate the lseek() contention?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Wed, 6 Oct 2010 22:01:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "* Robert Haas ([email protected]) wrote:\n> It's good to be you.\n\nThey're HP BL465 G7's w/ 2x 12-core AMD processors and 48G of RAM.\nUnfortunately, they currently only have local storage, but it seems\nunlikely that would be an issue for this.\n\n> I don't suppose you could try to replicate the lseek() contention?\n\nI can give it a shot, but the impression I had from the paper is that\nthe lseek() contention wouldn't be seen without the changes to the lock\nmanager...? Or did I misunderstand?\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Wed, 6 Oct 2010 22:07:07 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to\n\t48)performance"
},
{
"msg_contents": "On 7 October 2010 03:25, Tom Lane <[email protected]> wrote:\n> Ivan Voras <[email protected]> writes:\n>> On 10/04/10 20:49, Josh Berkus wrote:\n>>>> The other major bottleneck they ran into was a kernel one: reading from\n>>>> the heap file requires a couple lseek operations, and Linux acquires a\n>>>> mutex on the inode to do that.\n>\n>> Hmmm... lseek? As in \"lseek() then read() or write()\" idiom? It AFAIK\n>> cannot be fixed since you're modifying the global \"strean position\"\n>> variable and something has got to lock that.\n>\n> Um, there is no \"global stream position\" associated with an inode.\n> A file position is associated with an open-file descriptor.\n\nYou're right of course, I was pattern matching late last night on the\n\"lseek()\" and \"locking problems\" keywords and ignored \"inode\".\n\n> If Josh quoted the problem correctly, the issue is that the kernel is\n> locking a file's inode (which may be referenced by quite a lot of file\n> descriptors) in order to change the state of one file descriptor.\n> It sure sounds like a possible source of contention to me.\n\nThough it does depend on the details of how pg uses it. Forked\nprocesses share their parents' file descriptors.\n",
"msg_date": "Thu, 7 Oct 2010 14:19:08 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "On Wed, Oct 6, 2010 at 10:07 PM, Stephen Frost <[email protected]> wrote:\n> * Robert Haas ([email protected]) wrote:\n>> It's good to be you.\n>\n> They're HP BL465 G7's w/ 2x 12-core AMD processors and 48G of RAM.\n> Unfortunately, they currently only have local storage, but it seems\n> unlikely that would be an issue for this.\n>\n>> I don't suppose you could try to replicate the lseek() contention?\n>\n> I can give it a shot, but the impression I had from the paper is that\n> the lseek() contention wouldn't be seen without the changes to the lock\n> manager...? Or did I misunderstand?\n\n<rereads appropriate section of paper>\n\nLooks like the lock manager problems hit at 28 cores, and the lseek\nproblems at 36 cores. So your system might not even be big enough to\nmanifest either problem.\n\nIt's unclear to me whether a 48-core system would be able to see the\nlseek issues without improvements to the lock manager, but perhaps it\nwould be possible by, say, increasing the number of lock partitions by\n8x. It would be nice to segregate these issues though, because using\npread/pwrite is probably a lot less work than rewriting our lock\nmanager. Do you have tools to measure the lseek overhead? If so, we\ncould prepare a patch to use pread()/pwrite() and just see whether\nthat reduced the overhead, without worrying so much about whether it\nwas actually a major bottleneck.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Thu, 7 Oct 2010 08:33:07 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
},
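As a crude standalone way to measure the lseek() overhead in question, something along these lines could time lseek()+read() against pread() from several threads sharing one file descriptor. The thread count, read count, and block size are arbitrary, and the lseek variant also races on the shared file position, which is part of the reason pread() exists; the point here is only the relative syscall and kernel-locking cost.

#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define NTHREADS 8
#define NREADS   200000
#define BLOCKSZ  8192

static int fd;
static int use_pread;

static void *worker(void *arg)
{
    char buf[BLOCKSZ];
    off_t offset = (off_t) (intptr_t) arg * BLOCKSZ;

    for (int i = 0; i < NREADS; i++)
    {
        if (use_pread)
            (void) pread(fd, buf, BLOCKSZ, offset);
        else
        {
            (void) lseek(fd, offset, SEEK_SET);
            (void) read(fd, buf, BLOCKSZ);
        }
    }
    return NULL;
}

static double run(int pread_mode)
{
    pthread_t threads[NTHREADS];
    struct timespec t0, t1;

    use_pread = pread_mode;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *) (intptr_t) i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    /* the test file must be at least NTHREADS * BLOCKSZ bytes long;
     * build with: cc -O2 -pthread lseek_bench.c -o lseek_bench */
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    printf("lseek+read: %.2fs\n", run(0));
    printf("pread:      %.2fs\n", run(1));
    close(fd);
    return 0;
}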
{
"msg_contents": "On 10/07/10 02:39, Robert Haas wrote:\n> On Wed, Oct 6, 2010 at 6:31 PM, Ivan Voras<[email protected]> wrote:\n>> On 10/04/10 20:49, Josh Berkus wrote:\n>>\n>>>> The other major bottleneck they ran into was a kernel one: reading from\n>>>> the heap file requires a couple lseek operations, and Linux acquires a\n>>>> mutex on the inode to do that. The proper place to fix this is\n>>>> certainly in the kernel but it may be possible to work around in\n>>>> Postgres.\n>>>\n>>> Or we could complain to Kernel.org. They've been fairly responsive in\n>>> the past. Too bad this didn't get posted earlier; I just got back from\n>>> LinuxCon.\n>>>\n>>> So you know someone who can speak technically to this issue? I can put\n>>> them in touch with the Linux geeks in charge of that part of the kernel\n>>> code.\n>>\n>> Hmmm... lseek? As in \"lseek() then read() or write()\" idiom? It AFAIK\n>> cannot be fixed since you're modifying the global \"strean position\"\n>> variable and something has got to lock that.\n>\n> Well, there are lock free algorithms using CAS, no?\n\nNothing is really \"lock free\" - in this case the algorithms simply push \nthe locking down to atomic operations on the CPU (and the memory bus). \nSemantically, *something* has to lock the memory region for however \nbrief period of time and then propagate that update to other CPUs' \ncaches (i.e. invalidate them).\n\n>> OTOH, pread() / pwrite() don't have to do that.\n>\n> Hey, I didn't know about those. That sounds like it might be worth\n> investigating, though I confess I lack a 48-core machine on which to\n> measure the alleged benefit.\n\nAs Jon said, it will in any case reduce the number of these syscalls by \nhalf, and they can be wrapped by a C macro for the platforms which don't \nimplement them.\n\nhttp://man.freebsd.org/pread\n\n(and just in case it's needed: pread() is a special case of preadv()).\n\n",
"msg_date": "Thu, 07 Oct 2010 14:47:06 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
},
{
"msg_contents": "Robert Haas <[email protected]> wrote:\n \n> perhaps it would be possible by, say, increasing the number of\n> lock partitions by 8x. It would be nice to segregate these issues\n> though, because using pread/pwrite is probably a lot less work\n> than rewriting our lock manager.\n \nYou mean easier than changing this 4 to a 7?:\n \n#define LOG2_NUM_LOCK_PARTITIONS 4\n \nOr am I missing something?\n \n-Kevin\n",
"msg_date": "Thu, 07 Oct 2010 12:21:21 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to\n\t 48)performance"
},
{
"msg_contents": "* Kevin Grittner ([email protected]) wrote:\n> Robert Haas <[email protected]> wrote:\n> > perhaps it would be possible by, say, increasing the number of\n> > lock partitions by 8x. It would be nice to segregate these issues\n> > though, because using pread/pwrite is probably a lot less work\n> > than rewriting our lock manager.\n> \n> You mean easier than changing this 4 to a 7?:\n> \n> #define LOG2_NUM_LOCK_PARTITIONS 4\n> \n> Or am I missing something?\n\nI'm pretty sure we were talking about the change described in the paper\nof moving to a system which uses atomic changes instead of spinlocks for\ncertain locking situations..\n\nIf that's all the MIT folks did, they certainly made it sound like alot\nmore. :)\n\n\tStephen",
"msg_date": "Thu, 7 Oct 2010 14:06:20 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to\n\t48)performance"
},
{
"msg_contents": "Stephen Frost <[email protected]> wrote: \n> Kevin Grittner ([email protected]) wrote:\n>> Robert Haas <[email protected]> wrote:\n \n>>> perhaps it would be possible by, say, increasing the number of\n>>> lock partitions by 8x.\n \n>> changing this 4 to a 7?:\n>> \n>> #define LOG2_NUM_LOCK_PARTITIONS 4\n \n> I'm pretty sure we were talking about the change described in the\n> paper of moving to a system which uses atomic changes instead of\n> spinlocks for certain locking situations..\n \nWell, they also mentioned increasing the number of lock partitions\nto reduce contention, and that seemed to be what Robert was talking\nabout in the quoted section.\n \nOf course, that's not the *only* thing they did; it's just the point\nwhich seemed to be under discussion just there.\n \n-Kevin\n",
"msg_date": "Thu, 07 Oct 2010 13:22:02 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to\n\t 48)performance"
},
{
"msg_contents": "On Thu, Oct 7, 2010 at 1:21 PM, Kevin Grittner\n<[email protected]> wrote:\n> Robert Haas <[email protected]> wrote:\n>\n>> perhaps it would be possible by, say, increasing the number of\n>> lock partitions by 8x. It would be nice to segregate these issues\n>> though, because using pread/pwrite is probably a lot less work\n>> than rewriting our lock manager.\n>\n> You mean easier than changing this 4 to a 7?:\n>\n> #define LOG2_NUM_LOCK_PARTITIONS 4\n>\n> Or am I missing something?\n\nRight. They did something more complicated (and, I think, better)\nthan that, but that change by itself might be enough to ameliorate the\nlock contention enough to see the lsek() issue.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n",
"msg_date": "Thu, 7 Oct 2010 16:31:36 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MIT benchmarks pgsql multicore (up to 48)performance"
}
] |
[
{
"msg_contents": "Hi, \n\nWe have done the following test with PostgreSQL 9.0.\n\ncreate table bytea_demo ( index int, part1 bytea)\");\n\nThen we have instantiated a loop (1000) with the following action: \n\ninsert into bytea_demo ( index, part1, ) values ('%d', '%s'); \n \", i, entry);\n\n1a) In a first measurement part is supported with a bytea area (entry) of 4000 bytes (4000 characters)\n1b) In a second run part is supported with a bytea area (entry) of 5000 bytes (5000 characters). \n\nResult: The runtime of case 1a) is ~ 3 sec, however for case 1b) the runtime is ~ 43 sec. Why here we have such a large difference in runtime. \n\n\n\nBR\nIngo Sander\n\n\nBest Regards/mfG\nIngo Sander\n=========================================================\nNokia Siemens Networks GmbH &Co. KG\nNWS EP CP SVSS Platform Tech Support DE\nSt.-Martin-Str. 76\nD-81541 München\n*Tel.: +49-89-515938390\n*[email protected]\n\nNokia Siemens Networks GmbH & Co. KG\nSitz der Gesellschaft: München / Registered office: Munich\nRegistergericht: München / Commercial registry: Munich, HRA 88537\nWEEE-Reg.-Nr.: DE 52984304\n\nPersönlich haftende Gesellschafterin / General Partner: Nokia Siemens Networks Management GmbH\nGeschäftsleitung / Board of Directors: Lydia Sommer, Olaf Horsthemke\nVorsitzender des Aufsichtsrats / Chairman of supervisory board: Herbert Merz\nSitz der Gesellschaft: München / Registered office: Munich\nRegistergericht: München / Commercial registry: Munich, HRB 163416\n\n\n\n\n\n\n\nRuntime dependency from size of a bytea field\n\n\n\nHi, \n\nWe have done the following test with PostgreSQL 9.0.\n\ncreate table bytea_demo ( index int, part1 bytea)\");\n\nThen we have instantiated a loop (1000) with the following action: \n\ninsert into bytea_demo ( index, part1, ) values ('%d', '%s'); \n \", i, entry);\n\n1a) In a first measurement part is supported with a bytea area (entry) of 4000 bytes (4000 characters)\n1b) In a second run part is supported with a bytea area (entry) of 5000 bytes (5000 characters). \n\nResult: The runtime of case 1a) is ~ 3 sec, however for case 1b) the runtime is ~ 43 sec. Why here we have such a large difference in runtime. \n\n\nBR\nIngo Sander\n\n\nBest Regards/mfG\nIngo Sander\n=========================================================\nNokia Siemens Networks GmbH &Co. KG\nNWS EP CP SVSS Platform Tech Support DE\nSt.-Martin-Str. 76\nD-81541 München\n(Tel.: +49-89-515938390\[email protected]\n\nNokia Siemens Networks GmbH & Co. KG\nSitz der Gesellschaft: München / Registered office: Munich\nRegistergericht: München / Commercial registry: Munich, HRA 88537\nWEEE-Reg.-Nr.: DE 52984304\n\nPersönlich haftende Gesellschafterin / General Partner: Nokia Siemens Networks Management GmbH\nGeschäftsleitung / Board of Directors: Lydia Sommer, Olaf Horsthemke\nVorsitzender des Aufsichtsrats / Chairman of supervisory board: Herbert Merz\nSitz der Gesellschaft: München / Registered office: Munich\nRegistergericht: München / Commercial registry: Munich, HRB 163416",
"msg_date": "Tue, 5 Oct 2010 09:23:27 +0200",
"msg_from": "\"Sander, Ingo (NSN - DE/Munich)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Runtime dependency from size of a bytea field"
},
{
"msg_contents": "On Tue, Oct 5, 2010 at 3:23 AM, Sander, Ingo (NSN - DE/Munich)\n<[email protected]> wrote:\n> Hi,\n>\n> We have done the following test with PostgreSQL 9.0.\n>\n> create table bytea_demo ( index int, part1 bytea)\");\n>\n> Then we have instantiated a loop (1000) with the following action:\n>\n> insert into bytea_demo ( index, part1, ) values ('%d', '%s');\n> \", i, entry);\n>\n> 1a) In a first measurement part is supported with a bytea area (entry) of\n> 4000 bytes (4000 characters)\n> 1b) In a second run part is supported with a bytea area (entry) of 5000\n> bytes (5000 characters).\n>\n> Result: The runtime of case 1a) is ~ 3 sec, however for case 1b) the runtime\n> is ~ 43 sec. Why here we have such a large difference in runtime.\n\nProbably you are hitting toast threshold and running into compression.\n compression you can disable, but toast you cannot (short of\nrecompiling with higher blocksz).\n\nmerlin\n",
"msg_date": "Tue, 5 Oct 2010 12:11:29 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "On 10/06/2010 12:11 AM, Merlin Moncure wrote:\n\n> Probably you are hitting toast threshold and running into compression.\n> compression you can disable, but toast you cannot (short of\n> recompiling with higher blocksz).\n\nFor the OP's reference:\n\nhttp://www.postgresql.org/docs/current/static/storage-toast.html\nhttp://www.postgresql.org/docs/current/static/sql-altertable.html\n\nWhile (I think) PLAIN storage could be used, the inability to span rows \nover blocks means you would't get over 8k anyway.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 06 Oct 2010 09:23:35 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "Changing of the storage method ( alter table bytea_demo Alter part1 Set\nstorage EXTERNAL) \nor the increasing of the BLOCK_SIZE (new compilation of the code with\n--with-blocksize=32) change the behaviour. \n\nIngo Sander\n \n\n-----Original Message-----\nFrom: ext Craig Ringer [mailto:[email protected]] \nSent: Wednesday, October 06, 2010 3:24 AM\nTo: Merlin Moncure\nCc: Sander, Ingo (NSN - DE/Munich); [email protected]\nSubject: Re: [PERFORM] Runtime dependency from size of a bytea field\n\nOn 10/06/2010 12:11 AM, Merlin Moncure wrote:\n\n> Probably you are hitting toast threshold and running into compression.\n> compression you can disable, but toast you cannot (short of\n> recompiling with higher blocksz).\n\nFor the OP's reference:\n\nhttp://www.postgresql.org/docs/current/static/storage-toast.html\nhttp://www.postgresql.org/docs/current/static/sql-altertable.html\n\nWhile (I think) PLAIN storage could be used, the inability to span rows \nover blocks means you would't get over 8k anyway.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 6 Oct 2010 07:39:26 +0200",
"msg_from": "\"Sander, Ingo (NSN - DE/Munich)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "On Wed, Oct 6, 2010 at 1:39 AM, Sander, Ingo (NSN - DE/Munich)\n<[email protected]> wrote:\n> Changing of the storage method ( alter table bytea_demo Alter part1 Set\n> storage EXTERNAL)\n> or the increasing of the BLOCK_SIZE (new compilation of the code with\n> --with-blocksize=32) change the behaviour.\n\nyeah -- however changing block size is major surgery and is going to\nhave other effects (some of them negative) besides raising toast\nthreshold. I would start with disabling compression and see where you\nstood on performance terms.\n\nmerlin\n",
"msg_date": "Wed, 6 Oct 2010 08:51:03 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "Hi, \n\nI thougth I have disabled compressing by setting alter command? Or is\nthere another command?\n\nBR\nIngo \n\n-----Original Message-----\nFrom: ext Merlin Moncure [mailto:[email protected]] \nSent: Wednesday, October 06, 2010 2:51 PM\nTo: Sander, Ingo (NSN - DE/Munich)\nCc: ext Craig Ringer; [email protected]\nSubject: Re: [PERFORM] Runtime dependency from size of a bytea field\n\nOn Wed, Oct 6, 2010 at 1:39 AM, Sander, Ingo (NSN - DE/Munich)\n<[email protected]> wrote:\n> Changing of the storage method ( alter table bytea_demo Alter part1\nSet\n> storage EXTERNAL)\n> or the increasing of the BLOCK_SIZE (new compilation of the code with\n> --with-blocksize=32) change the behaviour.\n\nyeah -- however changing block size is major surgery and is going to\nhave other effects (some of them negative) besides raising toast\nthreshold. I would start with disabling compression and see where you\nstood on performance terms.\n\nmerlin\n",
"msg_date": "Wed, 6 Oct 2010 16:22:26 +0200",
"msg_from": "\"Sander, Ingo (NSN - DE/Munich)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "On Wed, Oct 6, 2010 at 10:22 AM, Sander, Ingo (NSN - DE/Munich)\n<[email protected]> wrote:\n> Hi,\n>\n> I thougth I have disabled compressing by setting alter command? Or is\n> there another command?\n\nyes. have you re-run the test? got any performance results?\n\nmerlin\n",
"msg_date": "Wed, 6 Oct 2010 10:49:53 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "As written before I have rerun the test a) without compression and b)\nwith enlarged BLOCK_SIZE. Result was the same.\n\nBR\nIngo\n\n-----Original Message-----\nFrom: ext Merlin Moncure [mailto:[email protected]] \nSent: Wednesday, October 06, 2010 4:50 PM\nTo: Sander, Ingo (NSN - DE/Munich)\nCc: ext Craig Ringer; [email protected]\nSubject: Re: [PERFORM] Runtime dependency from size of a bytea field\n\nOn Wed, Oct 6, 2010 at 10:22 AM, Sander, Ingo (NSN - DE/Munich)\n<[email protected]> wrote:\n> Hi,\n>\n> I thougth I have disabled compressing by setting alter command? Or is\n> there another command?\n\nyes. have you re-run the test? got any performance results?\n\nmerlin\n",
"msg_date": "Thu, 7 Oct 2010 06:11:52 +0200",
"msg_from": "\"Sander, Ingo (NSN - DE/Munich)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "On Thu, Oct 7, 2010 at 12:11 AM, Sander, Ingo (NSN - DE/Munich)\n<[email protected]> wrote:\n> As written before I have rerun the test a) without compression and b)\n> with enlarged BLOCK_SIZE. Result was the same.\n\nUsing libpqtypes (source code follows after sig), stock postgres,\nstock table, I was not able to confirm your results. 4000 bytea\nblocks, loops of 1000 I was able to send in about 600ms. 50000 byte\nblocks I was able to send in around 2 seconds on workstation class\nhardware -- maybe something else is going on?.\n\nmerlin\n\n#include \"libpq-fe.h\"\n#include \"libpqtypes.h\"\n\n#define DATASZ 50000\n\nint main()\n{\n int i;\n PGbytea b;\n char data[DATASZ];\n PGconn *c = PQconnectdb(\"host=localhost dbname=postgres\");\n if(PQstatus(c) != CONNECTION_OK)\n {\n printf(\"bad connection\");\n return -1;\n }\n\n PQtypesRegister(c);\n\n b.data = data;\n b.len = DATASZ;\n\n for(i=0; i<1000; i++)\n {\n PGresult *res = PQexecf(c, \"insert into bytea_demo(index, part1)\nvalues (%int4, %bytea)\", i, &b);\n\n if(!res)\n {\n printf(\"got %s\\n\", PQgeterror());\n return -1;\n }\n PQclear(res);\n }\n\n PQfinish(c);\n}\n",
"msg_date": "Thu, 7 Oct 2010 10:49:27 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "On Thu, Oct 7, 2010 at 10:49 AM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Oct 7, 2010 at 12:11 AM, Sander, Ingo (NSN - DE/Munich)\n> <[email protected]> wrote:\n>> As written before I have rerun the test a) without compression and b)\n>> with enlarged BLOCK_SIZE. Result was the same.\n>\n> Using libpqtypes (source code follows after sig), stock postgres,\n> stock table, I was not able to confirm your results. 4000 bytea\n> blocks, loops of 1000 I was able to send in about 600ms. 50000 byte\n> blocks I was able to send in around 2 seconds on workstation class\n> hardware -- maybe something else is going on?.\n\nI re-ran the test, initializing the bytea data to random values (i\nwondered if uninitialized data getting awesome compression was skewing\nthe results).\n\nThis slowed down 50000 bytea case to around 3.5-4 seconds. That's\n12-15mb/sec from single thread which is IMNSHO not too shabby. If\nyour data compresses decently and you hack a good bang/buck\ncompression alg into the backend like lzo you can easily double that\nnumber.\n\nmerlin\n",
"msg_date": "Thu, 7 Oct 2010 13:16:36 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
{
"msg_contents": "Hi, \n\nThe difference to my test is that we use the ODBC interface in our C program. Could it be that the difference in the runtimes is caused by the ODBC?\n\nBR\nIngo \n\n-----Original Message-----\nFrom: ext Merlin Moncure [mailto:[email protected]] \nSent: Thursday, October 07, 2010 7:17 PM\nTo: Sander, Ingo (NSN - DE/Munich)\nCc: ext Craig Ringer; [email protected]\nSubject: Re: [PERFORM] Runtime dependency from size of a bytea field\n\nOn Thu, Oct 7, 2010 at 10:49 AM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Oct 7, 2010 at 12:11 AM, Sander, Ingo (NSN - DE/Munich)\n> <[email protected]> wrote:\n>> As written before I have rerun the test a) without compression and b)\n>> with enlarged BLOCK_SIZE. Result was the same.\n>\n> Using libpqtypes (source code follows after sig), stock postgres,\n> stock table, I was not able to confirm your results. 4000 bytea\n> blocks, loops of 1000 I was able to send in about 600ms. 50000 byte\n> blocks I was able to send in around 2 seconds on workstation class\n> hardware -- maybe something else is going on?.\n\nI re-ran the test, initializing the bytea data to random values (i\nwondered if uninitialized data getting awesome compression was skewing\nthe results).\n\nThis slowed down 50000 bytea case to around 3.5-4 seconds. That's\n12-15mb/sec from single thread which is IMNSHO not too shabby. If\nyour data compresses decently and you hack a good bang/buck\ncompression alg into the backend like lzo you can easily double that\nnumber.\n\nmerlin\n",
"msg_date": "Fri, 8 Oct 2010 06:53:18 +0200",
"msg_from": "\"Sander, Ingo (NSN - DE/Munich)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
},
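One way to isolate the ODBC layer is to run the same 1000-row loop straight through libpq with a bound binary parameter and compare the timings against the ODBC program. A rough sketch, assuming the bytea_demo table from the start of this thread and a local database; the connection string, the constant payload, and the lack of batching or transactions are simplifications.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "libpq-fe.h"

#define DATASZ 5000                 /* 5000-byte payload, as in case 1b */

int main(void)
{
    PGconn *conn = PQconnectdb("host=localhost dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    char *payload = malloc(DATASZ);
    memset(payload, 'x', DATASZ);   /* ideally fill with random bytes instead */

    for (int i = 0; i < 1000; i++)
    {
        char idx[16];
        snprintf(idx, sizeof(idx), "%d", i);

        const char *values[2]  = { idx, payload };
        int         lengths[2] = { 0, DATASZ };
        int         formats[2] = { 0, 1 };   /* $1 as text, $2 as binary bytea */

        PGresult *res = PQexecParams(conn,
            "insert into bytea_demo (index, part1) values ($1, $2)",
            2, NULL, values, lengths, formats, 0);

        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    free(payload);
    PQfinish(conn);
    return 0;
}

If this runs in a few seconds, like the libpqtypes version earlier in the thread, while the ODBC program stays at ~43 seconds, the overhead is coming from the driver or from how the literal is built, rather than from the server.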
{
"msg_contents": "On Fri, Oct 8, 2010 at 12:53 AM, Sander, Ingo (NSN - DE/Munich)\n<[email protected]> wrote:\n> The difference to my test is that we use the ODBC interface in our C program. Could it be that the difference in the runtimes is caused by the ODBC?\n\nI've heard tell that ODBC is substantially slower than a native libpq\nconnection, but I don't know that for a fact, not being an ODBC user.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 12 Oct 2010 08:36:24 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Runtime dependency from size of a bytea field"
}
] |
[
{
"msg_contents": "Hi All,\n\nWe have been trying to implement the streaming replication on one of our\ntest servers. We have set archive_xlogs to on. But on the standby server,\nwhen we start the database we get error the following errors:\n\nLOG: shutting down\nLOG: database system is shut down\nLOG: database system was shut down in recovery at 2010-10-05 05:05:27 EDT\nLOG: entering standby mode\ncp: cannot stat `/home/postgres/archive_xlogs/000000010000000D00000036': No\nsuch file or directory\nLOG: consistent recovery state reached at D/36000078\nLOG: database system is ready to accept read only connections\nLOG: invalid record length at D/36000078\ncp: cannot stat `/home/postgres/archive_xlogs/000000010000000D00000036': No\nsuch file or directory\nLOG: streaming replication successfully connected to primary\n\nThe file \"000000010000000D00000036\" is seen in pg_xlog folder of the primary\ndatabase, but is not yet pushed to the archive location.\n\nCan you let us know what the error means and if we are doing anything wrong?\n\nRegards,\nNimesh.\n\nHi All,We have been trying to implement the streaming replication on one of our test servers. We have set archive_xlogs to on. But on the standby server, when we start the database we get error the following errors:\nLOG: shutting downLOG: database system is shut downLOG: database system was shut down in recovery at 2010-10-05 05:05:27 EDTLOG: entering standby modecp: cannot stat `/home/postgres/archive_xlogs/000000010000000D00000036': No such file or directory\nLOG: consistent recovery state reached at D/36000078LOG: database system is ready to accept read only connectionsLOG: invalid record length at D/36000078cp: cannot stat `/home/postgres/archive_xlogs/000000010000000D00000036': No such file or directory\nLOG: streaming replication successfully connected to primaryThe file \"000000010000000D00000036\" is seen in pg_xlog folder of the primary database, but is not yet pushed to the archive location.Can you let us know what the error means and if we are doing anything wrong?\nRegards,Nimesh.",
"msg_date": "Tue, 5 Oct 2010 14:41:49 +0530",
"msg_from": "Nimesh Satam <[email protected]>",
"msg_from_op": true,
"msg_subject": "Error message in wal_log for Streaming replication"
},
{
"msg_contents": "On Tue, 2010-10-05 at 14:41 +0530, Nimesh Satam wrote:\n> We have been trying to implement the streaming replication on one of\n> our test servers. We have set archive_xlogs to on. But on the standby\n> server, when we start the database we get error the following errors:\n> \n> LOG: shutting down\n> LOG: database system is shut down\n> LOG: database system was shut down in recovery at 2010-10-05 05:05:27\n> EDT\n> LOG: entering standby mode\n> cp: cannot stat\n> `/home/postgres/archive_xlogs/000000010000000D00000036': No such file\n> or directory\n> LOG: consistent recovery state reached at D/36000078\n> LOG: database system is ready to accept read only connections\n> LOG: invalid record length at D/36000078\n> cp: cannot stat\n> `/home/postgres/archive_xlogs/000000010000000D00000036': No such file\n> or directory\n> LOG: streaming replication successfully connected to primary\n> \n> The file \"000000010000000D00000036\" is seen in pg_xlog folder of the\n> primary database, but is not yet pushed to the archive location.\n> \n> Can you let us know what the error means and if we are doing anything\n> wrong?\n\n[ this question is more appropriate on pgsql-general ]\n\nThose aren't postgres error messages, those are error messages generated\nby \"cp\".\n\nSee:\nhttp://www.postgresql.org/docs/9.0/static/archive-recovery-settings.html\n\n\"The command will be asked for file names that are not present in the\narchive; it must return nonzero when so asked.\"\n\nSo, it is safe to ignore those errors.\n\nPersonally, I would use a restore_command that is silent when the file\ndoesn't exist so that it doesn't pollute your logs. I'm not sure why the\ndocumentation suggests \"cp\".\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 05 Oct 2010 14:05:17 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Error message in wal_log for Streaming replication"
}
] |
[
{
"msg_contents": "All,\n\nThere's a number of blog tests floating around comparing XFS and Ext3,\nand the various Linux schedulers, for PGDATA or for an all-in-one mount.\n\nHowever, the WAL has a rather particular write pattern, and it's\nreasonable to assume that it shouldn't be optimized the same way as\nPGDATA. Has anyone done any head-to-heads for WAL drive configuration\nchanges?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Wed, 06 Oct 2010 16:11:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "XFS vs Ext3, and schedulers, for WAL"
},
{
"msg_contents": "\n> There's a number of blog tests floating around comparing XFS and Ext3,\n> and the various Linux schedulers, for PGDATA or for an all-in-one mount.\n> \n> However, the WAL has a rather particular write pattern, and it's\n> reasonable to assume that it shouldn't be optimized the same way as\n> PGDATA. Has anyone done any head-to-heads for WAL drive configuration\n> changes?\n\nThat would be a \"no\", then. Looks like I have my work cut out for me ...\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 11 Oct 2010 10:50:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS vs Ext3, and schedulers, for WAL"
},
{
"msg_contents": "On Mon, 2010-10-11 at 10:50 -0700, Josh Berkus wrote:\n> > There's a number of blog tests floating around comparing XFS and Ext3,\n> > and the various Linux schedulers, for PGDATA or for an all-in-one mount.\n> > \n> > However, the WAL has a rather particular write pattern, and it's\n> > reasonable to assume that it shouldn't be optimized the same way as\n> > PGDATA. Has anyone done any head-to-heads for WAL drive configuration\n> > changes?\n> \n> That would be a \"no\", then. Looks like I have my work cut out for me ...\n\nThe only thing I have done is:\n\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n\nIt doesn't cover XFS but it provides a decent and simple comparison on\next2/ext3 etc...\n\nRemember xlog is sequential so pushing it off is useful.\n\nJD\n\n> \n> -- \n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n> \n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Mon, 11 Oct 2010 11:06:23 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS vs Ext3, and schedulers, for WAL"
},
{
"msg_contents": "\n> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n> \n> It doesn't cover XFS but it provides a decent and simple comparison on\n> ext2/ext3 etc...\n\nYeah, it doesn't test actual log writing, though. Nor specific settings\neven for those two filesystems. So it's still a mystery ...\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Mon, 11 Oct 2010 11:59:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS vs Ext3, and schedulers, for WAL"
}
] |
[
{
"msg_contents": "Sorry to forget to give the postgres version as 8.1\n\nOn Thu, Oct 7, 2010 at 2:12 PM, Srikanth K <[email protected]> wrote:\n\n> Hi Can u Please let me know how can i optimize this query better. As i am\n> attaching u the Query, Schema and Explain Analyze Report.\n>\n> Plerase help me in optimizing this query.\n>\n> select\n> s.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr as\n> svcdescr from vwsubsmin s\n> inner join packages p on s.svcno=p.pkgno\n> inner join account a on a.actno=s.actno\n> inner join ssgdom d on a.domno=d.domno\n> inner join (select subsno from getexpiringsubs(1,cast('2' as\n> integer),cast('3' as double precision),'4') as\n> (subsno int,expirydt timestamp without time zone,balcpt double precision))\n> as e on s.subsno=e.subsno\n> where s.status <=15 and d.domno=5\n> order by d.domname,s.expirydt,a.actname;\n>\n>\n> --\n> regards,\n> Srikanth Kata\n>\n\n\n\n-- \nregards,\nSrikanth Kata\n\nSorry to forget to give the postgres version as 8.1On Thu, Oct 7, 2010 at 2:12 PM, Srikanth K <[email protected]> wrote:\nHi Can u Please let me know how can i optimize this query better. As i am attaching u the Query, Schema and Explain Analyze Report.\nPlerase help me in optimizing this query.select s.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr as svcdescr from vwsubsmin s \ninner join packages p on s.svcno=p.pkgno inner join account a on a.actno=s.actno inner join ssgdom d on a.domno=d.domno inner join (select subsno from getexpiringsubs(1,cast('2' as integer),cast('3' as double precision),'4') as \n\n(subsno int,expirydt timestamp without time zone,balcpt double precision)) as e on s.subsno=e.subsno where s.status <=15 and d.domno=5 order by d.domname,s.expirydt,a.actname;\n-- regards,Srikanth Kata\n-- regards,Srikanth Kata",
"msg_date": "Thu, 7 Oct 2010 15:07:37 +0530",
"msg_from": "Srikanth K <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing query"
}
] |
[
{
"msg_contents": "currently PG 8.1.3. See attached for my postgresql.conf. Server is\nfreebsd 6.2 w/ a fast 3TB storage array and only 2GB of ram.\n\nWe're running RTG which is a like mrtg, cricket, etc. basically\nqueries network devices via SNMP, throws stats into the DB for making\npretty bandwidth graphs. We've got hundreds of devices, with 10K+\nports and probably 100K's of stats being queried every 5 minutes. In\norder to do all that work, the back end SNMP querier is multi-threaded\nand opens a PG connection per-thread. We're running 30 threads. This\nis basically all INSERTS, but only ends up to being about 32,000/5\nminutes.\n\nThe graphing front end CGI is all SELECT. There's 12k tables today,\nand new tables are created each month. The number of rows per table\nis 100-700k, with most in the 600-700K range. 190GB of data so far.\nGood news is that queries have no joins and are limited to only a few\ntables at a time.\n\nBasically, each connection is taking about 100MB resident. As we need\nto increase the number of threads to be able to query all the devices\nin the 5 minute window, we're running out of memory. There aren't\nthat many CGI connections at anyone one time, but obviously query\nperformance isn't great, but honestly is surprisingly good all things\nconsidered.\n\nHonestly, not looking to improve PG's performance, really although I\nwouldn't complain. Just better manage memory/hardware. I assume I\ncan't start up two instances of PG pointing at the same files, one\nread-only and one read-write with different memory profiles, so I\nassume my only real option is throw more RAM at it. I don't have $$$\nfor another array/server for a master/slave right now. Or perhaps\ntweaking my .conf file? Are newer PG versions more memory efficient?\n\nThanks,\nAaron\n\n-- \nAaron Turner\nhttp://synfin.net/ Twitter: @synfinatic\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little temporary\nSafety, deserve neither Liberty nor Safety.\n -- Benjamin Franklin\n\"carpe diem quam minimum credula postero\"",
"msg_date": "Thu, 7 Oct 2010 10:47:54 -0700",
"msg_from": "Aaron Turner <[email protected]>",
"msg_from_op": true,
"msg_subject": "large dataset with write vs read clients"
},
{
"msg_contents": " On 10/7/10 11:47 AM, Aaron Turner wrote:\n> <snip>\n>\n> Basically, each connection is taking about 100MB resident. As we need\n> to increase the number of threads to be able to query all the devices\n> in the 5 minute window, we're running out of memory.\nI think the first thing to do is look into using a connection pooler \nlike pgpool to reduce your connection memory overhead.\n\n-Dan\n",
"msg_date": "Thu, 07 Oct 2010 12:29:38 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "* Dan Harris ([email protected]) wrote:\n> On 10/7/10 11:47 AM, Aaron Turner wrote:\n>> Basically, each connection is taking about 100MB resident. As we need\n>> to increase the number of threads to be able to query all the devices\n>> in the 5 minute window, we're running out of memory.\n> I think the first thing to do is look into using a connection pooler \n> like pgpool to reduce your connection memory overhead.\n\nYeah.. Having the number of database connections be close to the number\nof processors is usually recommended.\n\n\tStephen",
"msg_date": "Thu, 7 Oct 2010 14:57:48 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "* Aaron Turner ([email protected]) wrote:\n> The graphing front end CGI is all SELECT. There's 12k tables today,\n> and new tables are created each month. \n\nThat's a heck of alot of tables.. Probably more than you really need.\nNot sure if reducing that number would help query times though.\n\n> The number of rows per table\n> is 100-700k, with most in the 600-700K range. 190GB of data so far.\n> Good news is that queries have no joins and are limited to only a few\n> tables at a time.\n\nHave you got indexes and whatnot on these tables?\n\n> Basically, each connection is taking about 100MB resident. As we need\n> to increase the number of threads to be able to query all the devices\n> in the 5 minute window, we're running out of memory. There aren't\n> that many CGI connections at anyone one time, but obviously query\n> performance isn't great, but honestly is surprisingly good all things\n> considered.\n\nI'm kind of suprised at each connection taking 100MB, especially ones\nwhich are just doing simple inserts.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 7 Oct 2010 15:00:06 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "* Aaron Turner ([email protected]) wrote:\n> Basically, each connection is taking about 100MB resident\n\nErrr.. Given that your shared buffers are around 100M, I think you're\nconfusing what you see in top with reality. The shared buffers are\nvisible in every process, but it's all the same actual memory, not 100M\nper process.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 7 Oct 2010 15:02:08 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "On Thu, Oct 7, 2010 at 12:02 PM, Stephen Frost <[email protected]> wrote:\n> * Aaron Turner ([email protected]) wrote:\n>> Basically, each connection is taking about 100MB resident\n>\n> Errr.. Given that your shared buffers are around 100M, I think you're\n> confusing what you see in top with reality. The shared buffers are\n> visible in every process, but it's all the same actual memory, not 100M\n> per process.\n\nAh, I had missed that. Thanks for the tip. Sounds like I should\nstill investigate pgpool though. If nothing else it should improve\ninsert performance right?\n\nAs for the tables, no indexes. We're using a constraint on one of the\ncolumns (date) w/ table inheritance to limit which tables are scanned\nsince SELECT's are always for a specific date range. By always\nquerying the inherited table, we're effectively getting a cheap\nsemi-granular index without any insert overhead. Unfortunately,\nwithout forking the RTG code significantly, redesigning the schema\nreally isn't viable.\n\n-- \nAaron Turner\nhttp://synfin.net/ Twitter: @synfinatic\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little temporary\nSafety, deserve neither Liberty nor Safety.\n -- Benjamin Franklin\n\"carpe diem quam minimum credula postero\"\n",
"msg_date": "Thu, 7 Oct 2010 13:03:47 -0700",
"msg_from": "Aaron Turner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: large dataset with write vs read clients"
},
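A minimal sketch (not part of the original exchange) of the inheritance-plus-CHECK-constraint scheme Aaron describes; the table and column names here are hypothetical stand-ins, not RTG's actual schema:

    -- Parent table plus one monthly child; the CHECK constraint is what lets
    -- the planner skip children outside the queried date range.
    CREATE TABLE ifstats (
        rid     integer   NOT NULL,
        dtime   timestamp NOT NULL,
        counter bigint    NOT NULL
    );

    CREATE TABLE ifstats_2010_10 (
        CHECK (dtime >= DATE '2010-10-01' AND dtime < DATE '2010-11-01')
    ) INHERITS (ifstats);

    -- With constraint exclusion enabled, a date-bounded query against the
    -- parent only scans the children whose constraints can match.
    SET constraint_exclusion = on;
    SELECT rid, dtime, counter
      FROM ifstats
     WHERE dtime >= '2010-10-07' AND dtime < '2010-10-08';

The trade-off is the one Aaron mentions: no per-row index maintenance on insert, at the cost of scanning whole monthly tables for reads.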
{
"msg_contents": "Aaron Turner wrote:\n> Are newer PG versions more memory efficient?\n> \n\nMoving from PostgreSQL 8.1 to 8.3 or later should make everything you do \nhappen 2X to 3X faster, before even taking into account that you can \ntune the later versions better too. See \nhttp://suckit.blog.hu/2009/09/29/postgresql_history for a simple \ncomparison of how much performance jumped on both reads and writes in \nthe later versions than what you're running. Memory consumption will on \naverage decrease too, simply via the fact that queries start and finish \nmore quickly. Given an even workload, there will be less of them \nrunning at a time on a newer version to keep up.\n\nGiven the size of your database, I'd advise you consider a migration to \na new version ASAP. 8.4 is a nice stable release at this point, that's \nthe one to consider moving to. The biggest single problem people \nupgrading from 8.1 to 8.3 or later see is related to changes in how data \nis cast between text and integer types; 1 doesn't equal '1' anymore is \nthe quick explanation of that. See \nhttp://wiki.postgresql.org/wiki/Version_History for links to some notes \non that, as well as other good resources related to upgrading. This may \nrequire small application changes to deal with.\n\nEven not considering the performance increases, PostgreSQL 8.1 is due to \nbe dropped from active support potentially as early as next month: \nhttp://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy\n\nAlso: PostgreSQL 8.1.3 has several known bugs that can lead to various \nsorts of nasty data corruption. You should at least consider an \nimmediate upgrade to the latest release of that version, 8.1.22. Small \nversion number increases in PostgreSQL only consist of serious bug \nfixes, not feature changes. See \nhttp://www.postgresql.org/support/versioning for notes about the \nproject's standard for changes here, and how it feels about the risks of \nrunning versions with known bugs in them vs. upgrading.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Thu, 07 Oct 2010 17:47:07 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
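A hypothetical illustration (not from the original message) of the kind of breakage the 8.3 cast changes cause, since non-text values no longer cast implicitly to text:

    -- Works on 8.2 and earlier via an implicit integer-to-text cast:
    SELECT length(12345);
    -- On 8.3 and later it fails with "function length(integer) does not
    -- exist"; the fix is an explicit cast:
    SELECT length(12345::text);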
{
"msg_contents": "On Thu, Oct 7, 2010 at 2:47 PM, Greg Smith <[email protected]> wrote:\n> Aaron Turner wrote:\n>>\n>> Are newer PG versions more memory efficient?\n>>\n>\n> Moving from PostgreSQL 8.1 to 8.3 or later should make everything you do\n> happen 2X to 3X faster, before even taking into account that you can tune\n> the later versions better too. See\n> http://suckit.blog.hu/2009/09/29/postgresql_history for a simple comparison\n> of how much performance jumped on both reads and writes in the later\n> versions than what you're running. Memory consumption will on average\n> decrease too, simply via the fact that queries start and finish more\n> quickly. Given an even workload, there will be less of them running at a\n> time on a newer version to keep up.\n>\n> Given the size of your database, I'd advise you consider a migration to a\n> new version ASAP. 8.4 is a nice stable release at this point, that's the\n> one to consider moving to. The biggest single problem people upgrading from\n> 8.1 to 8.3 or later see is related to changes in how data is cast between\n> text and integer types; 1 doesn't equal '1' anymore is the quick explanation\n> of that. See http://wiki.postgresql.org/wiki/Version_History for links to\n> some notes on that, as well as other good resources related to upgrading.\n> This may require small application changes to deal with.\n>\n> Even not considering the performance increases, PostgreSQL 8.1 is due to be\n> dropped from active support potentially as early as next month:\n> http://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy\n>\n> Also: PostgreSQL 8.1.3 has several known bugs that can lead to various\n> sorts of nasty data corruption. You should at least consider an immediate\n> upgrade to the latest release of that version, 8.1.22. Small version number\n> increases in PostgreSQL only consist of serious bug fixes, not feature\n> changes. See http://www.postgresql.org/support/versioning for notes about\n> the project's standard for changes here, and how it feels about the risks of\n> running versions with known bugs in them vs. upgrading.\n>\n> --\n> Greg Smith, 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support www.2ndQuadrant.us\n> Author, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\n> https://www.packtpub.com/postgresql-9-0-high-performance/book\n>\n>\n\nThanks for the info Greg. Sounds like I've got an upgrade in the near\nfuture! :)\n\nAgain, thanks to everyone who's responded; it's been really\ninformative and helpful. The PG community has always proven to be\nawesome!\n\n\n\n-- \nAaron Turner\nhttp://synfin.net/ Twitter: @synfinatic\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little temporary\nSafety, deserve neither Liberty nor Safety.\n -- Benjamin Franklin\n\"carpe diem quam minimum credula postero\"\n",
"msg_date": "Thu, 7 Oct 2010 15:11:29 -0700",
"msg_from": "Aaron Turner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "* Greg Smith:\n\n> Given the size of your database, I'd advise you consider a migration\n> to a new version ASAP. 8.4 is a nice stable release at this point,\n> that's the one to consider moving to.\n\nIt also offers asynchronous commits, which might be a good tradeoff\nhere (especially if the data gathered is not used for billing purposes\n8-).\n",
"msg_date": "Sat, 09 Oct 2010 22:45:47 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
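A brief sketch (not from the original message) of how asynchronous commit can be scoped to just the insert-heavy sessions; the table and values are hypothetical, and unlike fsync=off a crash can only lose roughly the last few hundred milliseconds of acknowledged commits at default settings, not corrupt the database:

    -- Affects only this session; other sessions keep fully synchronous commits.
    SET synchronous_commit TO off;
    BEGIN;
    INSERT INTO ifstats (rid, dtime, counter) VALUES (42, now(), 0);
    COMMIT;  -- returns before the WAL record is flushed to disk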
{
"msg_contents": "I have a logical problem with asynchronous commit. The \"commit\" command \nshould instruct the database to make the outcome of the transaction \npermanent. The application should wait to see whether the commit was \nsuccessful or not. Asynchronous behavior in the commit statement breaks \nthe ACID rules and should not be used in a RDBMS system. If you don't \nneed ACID, you may not need RDBMS at all. You may try with MongoDB. \nMongoDB is web scale: http://www.youtube.com/watch?v=b2F-DItXtZs\n\nFlorian Weimer wrote:\n> * Greg Smith:\n>\n> \n>> Given the size of your database, I'd advise you consider a migration\n>> to a new version ASAP. 8.4 is a nice stable release at this point,\n>> that's the one to consider moving to.\n>> \n>\n> It also offers asynchronous commits, which might be a good tradeoff\n> here (especially if the data gathered is not used for billing purposes\n> 8-).\n>\n> \n\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n",
"msg_date": "Sat, 09 Oct 2010 17:35:02 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "On 10/10/2010 5:35 AM, Mladen Gogala wrote:\n> I have a logical problem with asynchronous commit. The \"commit\" command\n> should instruct the database to make the outcome of the transaction\n> permanent. The application should wait to see whether the commit was\n> successful or not. Asynchronous behavior in the commit statement breaks\n> the ACID rules and should not be used in a RDBMS system. If you don't\n> need ACID, you may not need RDBMS at all. You may try with MongoDB.\n> MongoDB is web scale: http://www.youtube.com/watch?v=b2F-DItXtZs\n\nThat argument makes little sense to me.\n\nBecause you can afford a clearly defined and bounded loosening of the \ndurability guarantee provided by the database, such that you know and \naccept the possible loss of x seconds of work if your OS crashes or your \nUPS fails, this means you don't really need durability guarantees at all \n- let alone all that atomic commit silliness, transaction isolation, or \nthe guarantee of a consistent on-disk state?\n\nSome of the other flavours of non-SQL databases, both those that've been \naround forever (PICK/UniVerse/etc, Berkeley DB, Cache, etc) and those \nthat're new and fashionable Cassandra, CouchDB, etc, provide some ACID \nproperties anyway. If you don't need/want an SQL interface to your \ndatabase you don't have to throw out all that other database-y goodness \nif you haven't been drinking too much of the NoSQL kool-aid.\n\nThere *are* situations in which it's necessary to switch to relying on \ndistributed, eventually-consistent databases with non-traditional \napproaches to data management. It's awfully nice not to have to, though, \nand can force you to do a lot more wheel reinvention when it comes to \nquerying, analysing and reporting on your data.\n\nFWIW, a common approach in this sort of situation has historically been \n- accepting that RDBMSs aren't great at continuous fast loading of \nindividual records - to log the records in batches to a flat file, \nBerkeley DB, etc as a staging point. You periodically rotate that file \nout and bulk-load its contents into the RDBMS for analysis and \nreporting. This doesn't have to be every hour - every minute is usually \npretty reasonable, and still gives your database a much easier time \nwithout forcing you to modify your app to batch inserts into \ntransactions or anything like that.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Sun, 10 Oct 2010 14:43:12 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": " On 10/10/2010 2:43 AM, Craig Ringer wrote:\n>\n> Some of the other flavours of non-SQL databases, both those that've been\n> around forever (PICK/UniVerse/etc, Berkeley DB, Cache, etc) and those\n> that're new and fashionable Cassandra, CouchDB, etc, provide some ACID\n> properties anyway. If you don't need/want an SQL interface to your\n> database you don't have to throw out all that other database-y goodness\n> if you haven't been drinking too much of the NoSQL kool-aid.\nThis is a terrible misunderstanding. You haven't taken a look at that \nYoutube clip I sent you, have you? I am an Oracle DBA, first and \nforemost, disturbing the peace since 1989. I haven't been drinking the \nNoSQL kool-aid at all.\nI was simply being facetious. ACID rules are business rules and I am \nbitterly opposed to relaxing them. BTW, my favorite drink is Sam Adams Ale.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Sun, 10 Oct 2010 02:55:39 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "On 10/10/2010 2:55 PM, Mladen Gogala wrote:\n> On 10/10/2010 2:43 AM, Craig Ringer wrote:\n>>\n>> Some of the other flavours of non-SQL databases, both those that've been\n>> around forever (PICK/UniVerse/etc, Berkeley DB, Cache, etc) and those\n>> that're new and fashionable Cassandra, CouchDB, etc, provide some ACID\n>> properties anyway. If you don't need/want an SQL interface to your\n>> database you don't have to throw out all that other database-y goodness\n>> if you haven't been drinking too much of the NoSQL kool-aid.\n> This is a terrible misunderstanding. You haven't taken a look at that\n> Youtube clip I sent you, have you?\n\nI'm not so good with video when I'm seeking information not \nentertainment. I really dislike having to sit and watch someone \nsloooowly get aroud to the point; give me something to skim read and \nI'll do that. The trend toward video news etc drives me nuts - IMO just \ndetracting from the guts of the story/argument/explanation in most cases.\n\nOne of the wonderful things about the written word is that everybody can \nbenefit from it at their own natural pace. Video, like university \nlectures, takes that away and forces the video to be paced to the needs \nof the slowest.\n\nMy dislike of video-as-information is a quirk that's clearly not shared \nby too many given how trendy video is becoming on the 'net. OTOH, it's \nprobably not a grossly unreasonable choice when dealing with lots of \nmailing list posts/requests. Imagine if the Pg list accepted video link \nquestions - ugh.\n\nHey, maybe I should try posting YouTube video answers to a few questions \nfor kicks, see how people react ;-)\n\n> I am an Oracle DBA, first and\n> foremost, disturbing the peace since 1989. I haven't been drinking the\n> NoSQL kool-aid at all.\n> I was simply being facetious. ACID rules are business rules and I am\n> bitterly opposed to relaxing them. BTW, my favorite drink is Sam Adams Ale.\n\nAah, thanks. I completely missed it - which is a little scary, in that \nIMO that message could've been believably written in deadly earnest by a \nNoSQL over-enthusiast. Good work ... I think. Eek.\n\nSam Adams ale, I'm afrid, does not travel well from Australia.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Sun, 10 Oct 2010 15:20:55 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "* Mladen Gogala:\n\n> I have a logical problem with asynchronous commit. The \"commit\"\n> command should instruct the database to make the outcome of the\n> transaction permanent. The application should wait to see whether the\n> commit was successful or not. Asynchronous behavior in the commit\n> statement breaks the ACID rules and should not be used in a RDBMS\n> system.\n\nThat's a bit over the top. It may make sense to use PostgreSQL even\nif the file system doesn't guarantuee ACID by keeping multiple\nchecksummed copies of the database files. Asynchronous commits offer\nyet another trade-off here.\n\nSome people use RDBMSs mostly for the *M* part, to get a consistent\nadministration experience across multiple applications. And even with\nasynchronous commits, PostgreSQL will maintain a consistent state of\nthe database.\n",
"msg_date": "Sun, 10 Oct 2010 13:45:01 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "On 10/10/2010 7:45 AM, Florian Weimer wrote:\n> Some people use RDBMSs mostly for the*M* part, to get a consistent\n> administration experience across multiple applications. And even with\n> asynchronous commits, PostgreSQL will maintain a consistent state of\n> the database.\n\nBoth Postgres and Oracle have that option and both databases will \nmaintain the consistent state, but both databases will allow the loss of \ndata in case of system crash. Strictly speaking, that does break the \n\"D\" in ACID.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n\n\n\n\n\n\n On 10/10/2010 7:45 AM, Florian Weimer wrote:\n \nSome people use RDBMSs mostly for the *M* part, to get a consistent\nadministration experience across multiple applications. And even with\nasynchronous commits, PostgreSQL will maintain a consistent state of\nthe database.\n\n\n\n Both Postgres and Oracle have that option and both databases will\n maintain the consistent state, but both databases will allow the\n loss of data in case of system crash. Strictly speaking, that does\n break the \"D\" in ACID.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com",
"msg_date": "Sun, 10 Oct 2010 11:29:49 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "[email protected] (Craig Ringer) writes:\n> Hey, maybe I should try posting YouTube video answers to a few\n> questions for kicks, see how people react ;-)\n\nAnd make sure it uses the same voice as is used in the \"MongoDB is web\nscale\" video, to ensure that people interpret it correctly :-).\n-- \noutput = (\"cbbrowne\" \"@\" \"gmail.com\")\nhttp://linuxdatabases.info/info/nonrdbms.html\nThe *Worst* Things to Say to a Police Officer: Hey, is that a 9 mm?\nThat's nothing compared to this .44 magnum.\n",
"msg_date": "Tue, 12 Oct 2010 12:13:09 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
},
{
"msg_contents": "[email protected] (Mladen Gogala) writes:\n> I have a logical problem with asynchronous commit. The \"commit\"\n> command should instruct the database to make the outcome of the\n> transaction permanent. The application should wait to see whether the\n> commit was successful or not. Asynchronous behavior in the commit\n> statement breaks the ACID rules and should not be used in a RDBMS\n> system. If you don't need ACID, you may not need RDBMS at all. You may\n> try with MongoDB. MongoDB is web scale:\n> http://www.youtube.com/watch?v=b2F-DItXtZs\n\nThe \"client\" always has the option of connecting to a set of databases,\nand stowing parts of the data hither and thither. That often leads to\nthe relaxation called \"BASE.\" (And IBM has been selling that relaxation\nas MQ-Series since the early '90s!)\n\nThere often *ARE* cases where it is acceptable for some of the data to\nnot be as durable, because that data is readily reconstructed. This is\nparticularly common for calculated/cached/aggregated data.\n\nMany things can get relaxed for a \"data warehouse\" data store, where the\ndatabase is not authoritative, but rather aggregates data drawn from\nother authoritative sources. In such applications, neither the A, C, I,\nnor the D are pointedly crucial, in the DW data store.\n\n- We don't put the original foreign key constraints into the DW\n database; they don't need to be enforced a second time. Ditto for\n constraints of all sorts.\n\n- Batching of the loading of updates is likely to break several of the\n letters. And I find it *quite* acceptable to lose \"D\" if the data may\n be safely reloaded into the DW database.\n\nI don't think this is either cavalier nor that it points to \"MongoDB is\nweb scale.\"\n-- \n\"cbbrowne\",\"@\",\"gmail.com\"\nRules of the Evil Overlord #181. \"I will decree that all hay be\nshipped in tightly-packed bales. Any wagonload of loose hay attempting\nto pass through a checkpoint will be set on fire.\"\n",
"msg_date": "Tue, 12 Oct 2010 12:41:17 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: large dataset with write vs read clients"
}
] |
[
{
"msg_contents": "I'm weighing options for a new server. In addition to PostgreSQL, this \nmachine will handle some modest Samba and Rsync load.\n\nI will have enough RAM so the virtually all disk-read activity will be \ncached. The average PostgreSQL read activity will be modest - a mix of \nsingle-record and fairly large (reporting) result-sets. Writes will be \nmodest as well but will come in brief (1-5 second) bursts of individual \ninserts. The rate of insert requests will hit 100-200/second for those \nbrief bursts.\n\nSo...\n\nAm I likely to be better off putting $$$ toward battery-backup on the \nRAID or toward adding a second RAID-set and splitting off the WAL \ntraffic? Or something else?\n\nCheers,\nSteve\n\n",
"msg_date": "Thu, 07 Oct 2010 16:38:04 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": true,
"msg_subject": "BBU Cache vs. spindles"
},
{
"msg_contents": "On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:\n\n> I'm weighing options for a new server. In addition to PostgreSQL, this machine will handle some modest Samba and Rsync load.\n> \n> I will have enough RAM so the virtually all disk-read activity will be cached. The average PostgreSQL read activity will be modest - a mix of single-record and fairly large (reporting) result-sets. Writes will be modest as well but will come in brief (1-5 second) bursts of individual inserts. The rate of insert requests will hit 100-200/second for those brief bursts.\n> \n> So...\n> \n> Am I likely to be better off putting $$$ toward battery-backup on the RAID or toward adding a second RAID-set and splitting off the WAL traffic? Or something else?\n\nA BBU is, what, $100 or so? Adding one seems a no-brainer to me. Dedicated WAL spindles are nice and all, but they're still spinning media. Raid card cache is waaaay faster, and while it's best at bursty writes, it sounds like bursty writes are precisely what you have.\n\n\n",
"msg_date": "Fri, 8 Oct 2010 11:08:21 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Ben Chobot wrote:\n> On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:\n> \n> > I'm weighing options for a new server. In addition to PostgreSQL, this machine will handle some modest Samba and Rsync load.\n> >\n> > I will have enough RAM so the virtually all disk-read activity will be cached. The average PostgreSQL read activity will be modest - a mix of single-record and fairly large (reporting) result-sets. Writes will be modest as well but will come in brief (1-5 second) bursts of individual inserts. The rate of insert requests will hit 100-200/second for those brief bursts.\n> >\n> > So...\n> >\n> > Am I likely to be better off putting $$$ toward battery-backup on the RAID or toward adding a second RAID-set and splitting off the WAL traffic? Or something else?\n> \n> A BBU is, what, $100 or so? Adding one seems a no-brainer to me.\n> Dedicated WAL spindles are nice and all, but they're still spinning\n> media. Raid card cache is waaaay faster, and while it's best at bursty\n> writes, it sounds like bursty writes are precisely what you have.\n\nTotally agree!\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Wed, 20 Oct 2010 22:13:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Wed, 2010-10-20 at 22:13 -0400, Bruce Momjian wrote:\n> Ben Chobot wrote:\n> > On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:\n> > \n> > > I'm weighing options for a new server. In addition to PostgreSQL, this machine will handle some modest Samba and Rsync load.\n> > >\n> > > I will have enough RAM so the virtually all disk-read activity will be cached. The average PostgreSQL read activity will be modest - a mix of single-record and fairly large (reporting) result-sets. Writes will be modest as well but will come in brief (1-5 second) bursts of individual inserts. The rate of insert requests will hit 100-200/second for those brief bursts.\n> > >\n> > > So...\n> > >\n> > > Am I likely to be better off putting $$$ toward battery-backup on the RAID or toward adding a second RAID-set and splitting off the WAL traffic? Or something else?\n> > \n> > A BBU is, what, $100 or so? Adding one seems a no-brainer to me.\n> > Dedicated WAL spindles are nice and all, but they're still spinning\n> > media. Raid card cache is waaaay faster, and while it's best at bursty\n> > writes, it sounds like bursty writes are precisely what you have.\n> \n> Totally agree!\n\nBBU first, more spindles second.\n\n> \n> --\n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n> \n> + It's impossible for everything to be true. +\n> \n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Wed, 20 Oct 2010 19:25:19 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Wed, Oct 20, 2010 at 8:25 PM, Joshua D. Drake <[email protected]> wrote:\n> On Wed, 2010-10-20 at 22:13 -0400, Bruce Momjian wrote:\n>> Ben Chobot wrote:\n>> > On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:\n>> >\n>> > > I'm weighing options for a new server. In addition to PostgreSQL, this machine will handle some modest Samba and Rsync load.\n>> > >\n>> > > I will have enough RAM so the virtually all disk-read activity will be cached. The average PostgreSQL read activity will be modest - a mix of single-record and fairly large (reporting) result-sets. Writes will be modest as well but will come in brief (1-5 second) bursts of individual inserts. The rate of insert requests will hit 100-200/second for those brief bursts.\n>> > >\n>> > > So...\n>> > >\n>> > > Am I likely to be better off putting $$$ toward battery-backup on the RAID or toward adding a second RAID-set and splitting off the WAL traffic? Or something else?\n>> >\n>> > A BBU is, what, $100 or so? Adding one seems a no-brainer to me.\n>> > Dedicated WAL spindles are nice and all, but they're still spinning\n>> > media. Raid card cache is waaaay faster, and while it's best at bursty\n>> > writes, it sounds like bursty writes are precisely what you have.\n>>\n>> Totally agree!\n>\n> BBU first, more spindles second.\n\nAgreed. note that while you can get incredible burst performance from\na battery backed cache, due to both caching and writing out of order,\nonce the throughput begins to saturate at the speed of the disk array,\nthe bbu cache is now only re-ordering really, as it will eventually\nfill up faster than the disks can take the writes, and you'll settle\nin at some percentage of your max tps you get for a short benchmark\nrun. It's vitally important that once you put a BBU cache in place,\nyou run a very long running transactional test (pgbench is a simple\none to start with) that floods the io subsystem so you see what you're\naverage throughput is with the WAL and data store getting flooded. I\nknow on my system pgbench runs of a few minutes can be 3 or 4 times\nfaster than runs that last for the better part of an hour.\n",
"msg_date": "Wed, 20 Oct 2010 22:45:08 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
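For reference, a sketch of the kind of sustained run Scott describes, long enough to fill and then saturate the controller's write cache; the scale factor and client counts are arbitrary, and the -j and -T options assume an 8.4-or-later pgbench:

    pgbench -i -s 100 pgbench_test            # build a roughly 1.5GB test database
    pgbench -c 16 -j 4 -T 3600 pgbench_test   # 16 clients, 4 threads, one hour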
{
"msg_contents": "On 10/20/2010 09:45 PM, Scott Marlowe wrote:\n> On Wed, Oct 20, 2010 at 8:25 PM, Joshua D. Drake<[email protected]> wrote:\n> \n>> On Wed, 2010-10-20 at 22:13 -0400, Bruce Momjian wrote:\n>> \n>>> Ben Chobot wrote:\n>>> \n>>>> On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:\n>>>>\n>>>> \n>>>>> I'm weighing options for a new server. In addition to PostgreSQL, this machine will handle some modest Samba and Rsync load.\n>>>>>\n>>>>> I will have enough RAM so the virtually all disk-read activity will be cached. The average PostgreSQL read activity will be modest - a mix of single-record and fairly large (reporting) result-sets. Writes will be modest as well but will come in brief (1-5 second) bursts of individual inserts. The rate of insert requests will hit 100-200/second for those brief bursts.\n>>>>>\n>>>>> So...\n>>>>>\n>>>>> Am I likely to be better off putting $$$ toward battery-backup on the RAID or toward adding a second RAID-set and splitting off the WAL traffic? Or something else?\n>>>>> \n>>>> A BBU is, what, $100 or so? Adding one seems a no-brainer to me.\n>>>> Dedicated WAL spindles are nice and all, but they're still spinning\n>>>> media. Raid card cache is waaaay faster, and while it's best at bursty\n>>>> writes, it sounds like bursty writes are precisely what you have.\n>>>> \n>>> Totally agree!\n>>> \n>> BBU first, more spindles second.\n>> \n> Agreed. note that while you can get incredible burst performance from\n> a battery backed cache, due to both caching and writing out of order,\n> once the throughput begins to saturate at the speed of the disk array,\n> the bbu cache is now only re-ordering really, as it will eventually\n> fill up faster than the disks can take the writes, and you'll settle\n> in at some percentage of your max tps you get for a short benchmark\n> run. It's vitally important that once you put a BBU cache in place,\n> you run a very long running transactional test (pgbench is a simple\n> one to start with) that floods the io subsystem so you see what you're\n> average throughput is with the WAL and data store getting flooded. I\n> know on my system pgbench runs of a few minutes can be 3 or 4 times\n> faster than runs that last for the better part of an hour.\n>\n> \nThanks for all the replies. This is what I suspected but since I can't \njust buy one of everything to try, I wanted a sanity-check before \nspending the $$$.\n\nI am not too worried about saturating the controller cache as the \ncurrent much lower spec machine can handle the sustained load just fine \nand the bursts are typically only 1-3 seconds long spaced a minute or \nmore apart.\n\nCheers,\nSteve\n\n\n\n",
"msg_date": "Thu, 21 Oct 2010 09:42:52 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Wed, Oct 20, 2010 at 8:25 PM, Joshua D. Drake <[email protected]> wrote:\n> > On Wed, 2010-10-20 at 22:13 -0400, Bruce Momjian wrote:\n> >> Ben Chobot wrote:\n> >> > On Oct 7, 2010, at 4:38 PM, Steve Crawford wrote:\n> >> >\n> >> > > I'm weighing options for a new server. In addition to PostgreSQL, this machine will handle some modest Samba and Rsync load.\n> >> > >\n> >> > > I will have enough RAM so the virtually all disk-read activity will be cached. The average PostgreSQL read activity will be modest - a mix of single-record and fairly large (reporting) result-sets. Writes will be modest as well but will come in brief (1-5 second) bursts of individual inserts. The rate of insert requests will hit 100-200/second for those brief bursts.\n> >> > >\n> >> > > So...\n> >> > >\n> >> > > Am I likely to be better off putting $$$ toward battery-backup on the RAID or toward adding a second RAID-set and splitting off the WAL traffic? Or something else?\n> >> >\n> >> > A BBU is, what, $100 or so? Adding one seems a no-brainer to me.\n> >> > Dedicated WAL spindles are nice and all, but they're still spinning\n> >> > media. Raid card cache is waaaay faster, and while it's best at bursty\n> >> > writes, it sounds like bursty writes are precisely what you have.\n> >>\n> >> Totally agree!\n> >\n> > BBU first, more spindles second.\n> \n> Agreed. note that while you can get incredible burst performance from\n> a battery backed cache, due to both caching and writing out of order,\n> once the throughput begins to saturate at the speed of the disk array,\n> the bbu cache is now only re-ordering really, as it will eventually\n> fill up faster than the disks can take the writes, and you'll settle\n> in at some percentage of your max tps you get for a short benchmark\n> run. It's vitally important that once you put a BBU cache in place,\n> you run a very long running transactional test (pgbench is a simple\n> one to start with) that floods the io subsystem so you see what you're\n> average throughput is with the WAL and data store getting flooded. I\n> know on my system pgbench runs of a few minutes can be 3 or 4 times\n> faster than runs that last for the better part of an hour.\n\nWith a BBU you can turn off full_page_writes, which should decrease the\nWAL traffic.\n\nHowever, I don't see this mentioned in our documentation. Should I add\nit?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 21 Oct 2010 12:51:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Bruce Momjian <[email protected]> wrote:\n \n> With a BBU you can turn off full_page_writes\n \nMy understanding is that that is not without risk. What happens if\nthe WAL is written, there is a commit, but the data page has not yet\nbeen written to the controller? Don't we still have a torn page?\n \n-Kevin\n",
"msg_date": "Thu, 21 Oct 2010 11:56:02 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Bruce Momjian <[email protected]> wrote:\n> \n> > With a BBU you can turn off full_page_writes\n> \n> My understanding is that that is not without risk. What happens if\n> the WAL is written, there is a commit, but the data page has not yet\n> been written to the controller? Don't we still have a torn page?\n\nI don't see how full_page_writes affect non-written pages to the\ncontroller.\n\nfull_page_writes is designed to guard against a partial write to a\ndevice. I don't think the raid cache can be partially written to, and\nthe cache will not be cleared until the drive has fully writen the data\nto disk.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 21 Oct 2010 12:59:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Bruce Momjian wrote:\n> With a BBU you can turn off full_page_writes, which should decrease the\n> WAL traffic.\n>\n> However, I don't see this mentioned in our documentation. Should I add\n> it?\n> \n\nWhat I would like to do is beef up the documentation with some concrete \nexamples of how to figure out if your cache and associated write path \nare working reliably or not. It should be possible to include \"does \nthis handle full page writes correctly?\" in that test suite. Until we \nhave something like that, I'm concerned that bugs in filesystem or \ncontroller handling may make full_page_writes unsafe even with a BBU, \nand we'd have no way for people to tell if that's true or not.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\n\n",
"msg_date": "Thu, 21 Oct 2010 13:04:17 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Bruce Momjian <[email protected]> wrote:\n \n> full_page_writes is designed to guard against a partial write to a\n> device. I don't think the raid cache can be partially written to\n \nSo you're confident that an 8kB write to the controller will not be\ndone as a series of smaller atomic writes by the OS file system?\n \n-Kevin\n",
"msg_date": "Thu, 21 Oct 2010 12:15:15 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Bruce Momjian <[email protected]> wrote:\n> \n> \n>> full_page_writes is designed to guard against a partial write to a\n>> device. I don't think the raid cache can be partially written to\n>> \n> \n> So you're confident that an 8kB write to the controller will not be\n> done as a series of smaller atomic writes by the OS file system?\n\nSure, that happens. But if the BBU has gotten an fsync call after the \n8K write, it shouldn't return success until after all 8K are in its \ncache. That's why full_page_writes should be safe on a system with BBU \nas Bruce is suggesting. But I'd like to see some independent proof of \nthat fact, that includes some targeted tests users can run, before we \nstart recommending that practice.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\n\n",
"msg_date": "Thu, 21 Oct 2010 13:31:09 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Greg Smith <[email protected]> wrote: \n> Kevin Grittner wrote:\n \n>> So you're confident that an 8kB write to the controller will not\n>> be done as a series of smaller atomic writes by the OS file\n>> system?\n> \n> Sure, that happens. But if the BBU has gotten an fsync call after\n> the 8K write, it shouldn't return success until after all 8K are\n> in its cache.\n \nI'm not concerned about an fsync after the controller has it; I'm\nconcerned about a system crash in the middle of writing an 8K page\nto the controller. Other than the expected *size* of the window of\ntime during which you're vulnerable, what does a BBU caching\ncontroller buy you in this regard? Can't the OS rearrange the\nwrites of disk sectors after the 8K page is written to the OS cache\nso that the window might occasionally be rather large?\n \n-Kevin\n",
"msg_date": "Thu, 21 Oct 2010 13:54:03 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Greg Smith <[email protected]> wrote: \n> > Kevin Grittner wrote:\n> \n> >> So you're confident that an 8kB write to the controller will not\n> >> be done as a series of smaller atomic writes by the OS file\n> >> system?\n> > \n> > Sure, that happens. But if the BBU has gotten an fsync call after\n> > the 8K write, it shouldn't return success until after all 8K are\n> > in its cache.\n> \n> I'm not concerned about an fsync after the controller has it; I'm\n> concerned about a system crash in the middle of writing an 8K page\n> to the controller. Other than the expected *size* of the window of\n> time during which you're vulnerable, what does a BBU caching\n> controller buy you in this regard? Can't the OS rearrange the\n> writes of disk sectors after the 8K page is written to the OS cache\n> so that the window might occasionally be rather large?\n\nIf the write fails to the controller, the page is not flushed and PG\ndoes not continue. If the write fails, the fsync never happens, and\nhence PG stops.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 21 Oct 2010 15:07:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Bruce Momjian <[email protected]> wrote:\n \n> If the write fails to the controller, the page is not flushed and\n> PG does not continue. If the write fails, the fsync never\n> happens, and hence PG stops.\n \nPG stops? This case at issue is when the OS crashes or the plug is\npulled in the middle of writing a page. I don't think PG will\nnormally have the option of a graceful stop after that. To quote\nthe Fine Manual:\n \nhttp://www.postgresql.org/docs/current/interactive/runtime-config-wal.html#GUC-FULL-PAGE-WRITES\n \n| a page write that is in process during an operating system crash\n| might be only partially completed, leading to an on-disk page that\n| contains a mix of old and new data. The row-level change data\n| normally stored in WAL will not be enough to completely restore\n| such a page during post-crash recovery. Storing the full page\n| image guarantees that the page can be correctly restored\n \nLike I said, the only difference between the page being written to\nplatters and to a BBU cache that I can see is the average size of\nthe window of time in which you're vulnerable, not whether there is\na window. I don't think you've really addressed that concern.\n \n-Kevin\n",
"msg_date": "Thu, 21 Oct 2010 14:31:01 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Bruce Momjian <[email protected]> wrote:\n> \n> > If the write fails to the controller, the page is not flushed and\n> > PG does not continue. If the write fails, the fsync never\n> > happens, and hence PG stops.\n> \n> PG stops? This case at issue is when the OS crashes or the plug is\n> pulled in the middle of writing a page. I don't think PG will\n> normally have the option of a graceful stop after that. To quote\n> the Fine Manual:\n\nIf the OS crashes during a write or fsync, we have not committed the\ntransaction.\n\n> \n> http://www.postgresql.org/docs/current/interactive/runtime-config-wal.html#GUC-FULL-PAGE-WRITES\n> \n> | a page write that is in process during an operating system crash\n> | might be only partially completed, leading to an on-disk page that\n> | contains a mix of old and new data. The row-level change data\n> | normally stored in WAL will not be enough to completely restore\n> | such a page during post-crash recovery. Storing the full page\n> | image guarantees that the page can be correctly restored\n> \n> Like I said, the only difference between the page being written to\n> platters and to a BBU cache that I can see is the average size of\n> the window of time in which you're vulnerable, not whether there is\n> a window. I don't think you've really addressed that concern.\n\nI assume we send a full 8k to the controller, and a failure during that\nwrite is not registered as a write. A disk drive is modifying permanent\nstorage so there is always the possibility of that failing. I assume\nthe BBU just rewrites that after startup.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 21 Oct 2010 15:35:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Bruce Momjian <[email protected]> wrote:\n \n> I assume we send a full 8k to the controller, and a failure during\n> that write is not registered as a write.\n \nOn what do you base that assumption? I assume that we send a full\n8K to the OS cache, and the file system writes disk sectors\naccording to its own algorithm. With either platters or BBU cache,\nthe data is persisted on fsync; why do you see a risk with one but\nnot the other?\n \n-Kevin\n",
"msg_date": "Thu, 21 Oct 2010 14:42:06 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Bruce Momjian <[email protected]> wrote:\n> \n> > I assume we send a full 8k to the controller, and a failure during\n> > that write is not registered as a write.\n> \n> On what do you base that assumption? I assume that we send a full\n> 8K to the OS cache, and the file system writes disk sectors\n> according to its own algorithm. With either platters or BBU cache,\n> the data is persisted on fsync; why do you see a risk with one but\n> not the other?\n\nNow that is an interesting question. We write 8k to the kernel, but the\nkernel doesn't have to honor those write sizes, so while we probably\ncan't get a partial 512-byte block written to disk with an BBU (that\nisn't cleanup up by the BBU on reboot), we could get some 512-byte\nblocks of an 8k written and others not.\n\nI agree you are right and a BBU does not mean you can safely turn off\nfull_page_writes.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 21 Oct 2010 16:01:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Thursday 21 October 2010 21:42:06 Kevin Grittner wrote:\n> Bruce Momjian <[email protected]> wrote:\n> > I assume we send a full 8k to the controller, and a failure during\n> > that write is not registered as a write.\n> \n> On what do you base that assumption? I assume that we send a full\n> 8K to the OS cache, and the file system writes disk sectors\n> according to its own algorithm. With either platters or BBU cache,\n> the data is persisted on fsync; why do you see a risk with one but\n> not the other?\nAt least on linux pages can certainly get written out in < 8kb batches if \nyoure under memory pressure.\n\nAndres\n\n\n",
"msg_date": "Thu, 21 Oct 2010 22:18:09 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Kevin Grittner wrote:\n> I assume that we send a full\n> 8K to the OS cache, and the file system writes disk sectors\n> according to its own algorithm. With either platters or BBU cache,\n> the data is persisted on fsync; why do you see a risk with one but\n> not the other\n\nI'd like a 10 minute argument please. I started to write something to \nrefute this, only to clarify in my head the sequence of events that \nleads to the most questionable result, where I feel a bit less certain \nthan I did before of the safety here. Here is the worst case I believe \nyou're describing:\n\n1) Transaction is written to the WAL and sync'd; client receives \nCOMMIT. Since full_page_writes is off, the data in the WAL consists \nonly of the delta of what changed on the page.\n2) 8K database page is written to OS cache\n3) PG calls fsync to force the database block out\n4) OS writes first 4K block of the change to the BBU write cache. Worst \ncase, this fills the cache, and it takes a moment for some random writes \nto process before it has space to buffer again (makes this more likely \nto happen, but it's not required to see the failure case here)\n5) Sudden power interruption, second half of the page write is lost\n6) Server restarts\n7) That 4K write is now replayed from the battery's cache\n\nAt this point, you now have a torn 8K page, with 1/2 old and 1/2 new \ndata. Without a full page write in the WAL, is it always possible to \nrestore its original state now? In theory, I think you do. Since the \ndelta in the WAL should be overwriting all of the bytes that changed \nbetween the old and new version of the page, applying it on top of any \nfour possible states here:\n\n1) None of the data was written to the database page yet\n2) The first 4K of data was written out\n3) The second 4K of data was written out\n4) All 8K was actually written out\n\nShould lead to the same result: an 8K page that includes the change that \nwas in the WAL but not onto disk at the point when the crash happened.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 21 Oct 2010 23:47:24 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> At this point, you now have a torn 8K page, with 1/2 old and 1/2 new \n> data.\n\nRight.\n\n> Without a full page write in the WAL, is it always possible to \n> restore its original state now? In theory, I think you do. Since the \n> delta in the WAL should be overwriting all of the bytes that changed \n> between the old and new version of the page, applying it on top of any \n> four possible states here:\n\nYou've got entirely too simplistic a view of what the \"delta\" might be,\nI fear. In particular there are various sorts of changes that involve\ninserting the data carried in the WAL record and shifting pre-existing\ndata around to make room, or removing an item and moving remaining data\naround. If you try to replay that type of action against a torn page,\nyou'll get corrupted results.\n\nWe could possibly redefine the WAL records so that they weren't just the\nminimum amount of data but carried every byte that'd changed on the\npage, and then I think what you're envisioning would work. But the\nrecords would be a lot bulkier. It's not clear to me that this would be\na net savings over the current design, particularly not if there's\na long interval between checkpoints.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Oct 2010 00:05:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles "
},
{
"msg_contents": "Kevin Grittner wrote:\n> With either platters or BBU cache,\n> the data is persisted on fsync; why do you see a risk with one but\n> not the other\n\nForgot to address this part. The troublesome sequence if you don't have \na BBU is:\n\n1) WAL data is written to the OS cache\n2) PG calls fsync\n3) Data is tranferred into the drive's volatile, non battery-backed cache\n4) Drive lies about data being on disk, says fsync is done\n5) That 8K data page is written out to the OS cache, also with fsync, \nthen onto the drive. It says it has that too.\n6) Due to current disk head location, 4KB of the data page gets written \nout before it gets to the WAL data\n7) System crashes\n\nNow you're dead. You've just torn a data page, but not written any of \nthe data to the WAL necessary to reconstruct any valid version of that page.\n\nI think Kevin's point here may be that if your fsync isn't reliable, \nyou're always in trouble. But if your fsync is good, even torn pages \nshould be repairable by the deltas written to the WAL, as I described in \nthe message I just sent before this one. That's true regardless of \nwhether you achieved \"non-lying fsync\" with a BBU or by turning a \ndrive's write cache off. There's nothing really special about the BBU \nbeyond it behind the most common form of reliable write cache that \nworks. You get the same properties at a slower rate with a drive that's \nconfigured to never lie about writes.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 22 Oct 2010 00:06:21 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n \n> I think Kevin's point here may be that if your fsync isn't\n> reliable, you're always in trouble. But if your fsync is good,\n> even torn pages should be repairable by the deltas written to the\n> WAL\n \nI was actually just arguing that a BBU doesn't eliminate a risk\nhere; if there is a risk with production-quality disk drives, there\nis a risk with a controller with a BBU cache. The BBU cache just\ntends to reduce the window of time in which corruption can occur. I\nwasn't too sure of *why* there was a risk, but Tom's post cleared\nthat up.\n \nI wonder why we need to expose this GUC at all -- perhaps it should\nbe off when fsync is off and on otherwise? Leaving it on without\nfsync is just harming performance for not much benefit, and turning\nit off with fsync seems to be saying that you are willing to\ntolerate a known risk of database corruption, just not quite so much\nas you have without fsync. In reality it seems most likely to be a\nmistake, either way.\n \n-Kevin\n",
"msg_date": "Fri, 22 Oct 2010 08:46:34 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Tom Lane wrote:\n> You've got entirely too simplistic a view of what the \"delta\" might be,\n> I fear. In particular there are various sorts of changes that involve\n> inserting the data carried in the WAL record and shifting pre-existing\n> data around to make room, or removing an item and moving remaining data\n> around. If you try to replay that type of action against a torn page,\n> you'll get corrupted results.\n> \n\nI wasn't sure exactly how those were encoded, thanks for the \nclarification. Given that, it seems to me there are only two situations \nwhere full_page_writes is safe to turn off:\n\n1) The operating system block size is exactly the same database block \nsize, and all writes are guaranteed to be atomic to that block size. \n\n2) You're using a form of journaled filesystem where data blocks are \nnever updated, they're always written elsewhere and the filesystem is \nredirected to that new block once it's on disk.\n\nLooks to me like whether or not there's a non-volatile write cache \nsitting in the middle, like a BBU protected RAID card, doesn't really \nmake any difference here then.\n\nI think that most people who have thought they were safe to turn off \nfull_page_writes in the past did so because they believed they were in \ncategory (1) here. I've never advised anyone to do that, because it's \nso difficult to validate the truth of. Just given that, I'd be tempted \nto join in on suggesting this parameter just go away in the name of \nsafety, except that I think category (2) here is growing now. ZFS is \nthe most obvious example where the atomic write implementation seems to \nalways make disabling full_page_writes safe.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 22 Oct 2010 11:37:23 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
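As a quick way to see where a given installation stands on the settings this thread keeps returning to, something along these lines works on 8.3 and later (synchronous_commit does not exist in older releases):

    SELECT name, setting, short_desc
      FROM pg_settings
     WHERE name IN ('fsync', 'synchronous_commit',
                    'full_page_writes', 'wal_sync_method');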
{
"msg_contents": "On 2010-10-22 17:37, Greg Smith wrote:\n> I think that most people who have thought they were safe to turn off\n> full_page_writes in the past did so because they believed they were\n> in category (1) here. I've never advised anyone to do that, because\n> it's so difficult to validate the truth of. Just given that, I'd be\n> tempted to join in on suggesting this parameter just go away in the\n> name of safety, except that I think category (2) here is growing now.\n> ZFS is the most obvious example where the atomic write implementation\n> seems to always make disabling full_page_writes safe.\n\nCan you point to some ZFS docs that tell that this is the case.. I'd be \nsurprised\nif it doesnt copy away the old block and replaces it with the new one \nin-place. The\nother behaviour would quite quickly lead to a hugely fragmented \nfilesystem that\nperforms next to useless and ZFS doesnt seem to be in that category..\n\n ... All given my total lack of insight into ZFS.\n\n-- \nJesper\n\n\n\n\n\n\n\n\n\n\nOn 2010-10-22 17:37, Greg Smith wrote:\n> I think that most people who have thought they were safe to turn\noff\n> full_page_writes in the past did so because they believed they were\n> in category (1) here. I've never advised anyone to do that,\nbecause\n> it's so difficult to validate the truth of. Just given that, I'd\nbe\n> tempted to join in on suggesting this parameter just go away in the\n> name of safety, except that I think category (2) here is growing\nnow.\n> ZFS is the most obvious example where the atomic write\nimplementation\n> seems to always make disabling full_page_writes safe.\n\nCan you point to some ZFS docs that tell that this is the case.. I'd\nbe surprised \nif it doesnt copy away the old block and replaces it with the new one\nin-place. The\nother behaviour would quite quickly lead to a hugely fragmented\nfilesystem that \nperforms next to useless and ZFS doesnt seem to be in that category..\n\n ... All given my total lack of insight into ZFS. \n\n-- \nJesper",
"msg_date": "Fri, 22 Oct 2010 18:36:22 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Fri, Oct 22, 2010 at 8:37 AM, Greg Smith <[email protected]> wrote:\n> Tom Lane wrote:\n>>\n>> You've got entirely too simplistic a view of what the \"delta\" might be,\n>> I fear. In particular there are various sorts of changes that involve\n>> inserting the data carried in the WAL record and shifting pre-existing\n>> data around to make room, or removing an item and moving remaining data\n>> around. If you try to replay that type of action against a torn page,\n>> you'll get corrupted results.\n>>\n>\n> I wasn't sure exactly how those were encoded, thanks for the clarification.\n> Given that, it seems to me there are only two situations where\n> full_page_writes is safe to turn off:\n>\n> 1) The operating system block size is exactly the same database block size,\n> and all writes are guaranteed to be atomic to that block size.\n> 2) You're using a form of journaled filesystem where data blocks are never\n> updated, they're always written elsewhere and the filesystem is redirected\n> to that new block once it's on disk.\n>\n> Looks to me like whether or not there's a non-volatile write cache sitting\n> in the middle, like a BBU protected RAID card, doesn't really make any\n> difference here then.\n>\n> I think that most people who have thought they were safe to turn off\n> full_page_writes in the past did so because they believed they were in\n> category (1) here. I've never advised anyone to do that, because it's so\n> difficult to validate the truth of. Just given that, I'd be tempted to join\n> in on suggesting this parameter just go away in the name of safety, except\n> that I think category (2) here is growing now. ZFS is the most obvious\n> example where the atomic write implementation seems to always make disabling\n> full_page_writes safe.\n>\n\nFor the sake of argument, has PG considered using a double write\nbuffer similar to InnodB?\n\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Fri, 22 Oct 2010 10:16:57 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Rob Wultsch <[email protected]> wrote:\n \n> has PG considered using a double write buffer similar to InnodB?\n \nThat seems inferior to the full_page_writes strategy, where you only\nwrite a page twice the first time it is written after a checkpoint. \nWe're talking about when we might be able to write *less*, not more.\n \n-Kevin\n",
"msg_date": "Fri, 22 Oct 2010 12:28:17 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
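A minimal sketch of the strategy Kevin describes (an illustration only, not PostgreSQL's actual code): a page's complete image goes into WAL only on its first modification after a checkpoint, and every later change in the same checkpoint cycle logs just the small delta.

    class Page:
        def __init__(self):
            self.data = bytearray(8192)
            self.last_wal_lsn = 0          # LSN of the last WAL record for this page

    class WalSketch:
        def __init__(self):
            self.redo_lsn = 0              # redo pointer of the latest checkpoint
            self.next_lsn = 1
            self.records = []

        def checkpoint(self):
            self.redo_lsn = self.next_lsn

        def log_change(self, page, delta):
            lsn = self.next_lsn
            self.next_lsn += 1
            if page.last_wal_lsn <= self.redo_lsn:
                # first touch since the checkpoint: log the whole 8K image
                self.records.append((lsn, "full_page_image", bytes(page.data)))
            else:
                # page already protected this cycle: log only the delta
                self.records.append((lsn, "delta", delta))
            page.last_wal_lsn = lsn
            return lsn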
{
"msg_contents": "On Fri, Oct 22, 2010 at 10:28 AM, Kevin Grittner\n<[email protected]> wrote:\n> Rob Wultsch <[email protected]> wrote:\n>\n>> has PG considered using a double write buffer similar to InnodB?\n>\n> That seems inferior to the full_page_writes strategy, where you only\n> write a page twice the first time it is written after a checkpoint.\n> We're talking about when we might be able to write *less*, not more.\n>\n> -Kevin\n>\n\nBy \"write\" do you mean number of writes, or the number of bytes of the\nwrites? For number of writes, yes a double write buffer will lose. In\nterms of number of bytes, I would think full_page_writes=off + double\nwrite buffer should be far superior, particularly given that the WAL\nis shipped over the network to slaves.\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Fri, 22 Oct 2010 11:41:48 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Rob Wultsch <[email protected]> wrote:\n \n> I would think full_page_writes=off + double write buffer should be\n> far superior, particularly given that the WAL is shipped over the\n> network to slaves.\n \nFor a reasonably brief description of InnoDB double write buffers, I\nfound this:\n \nhttp://www.mysqlperformanceblog.com/2006/08/04/innodb-double-write/\n \nOne big question before even considering this would by how to\ndetermine whether a potentially torn page \"is inconsistent\". \nWithout a page CRC or some such mechanism, I don't see how this\ntechnique is possible.\n \nEven if it's possible, it's far from clear to me that it would be an\nimprovement. The author estimates (apparently somewhat loosely)\nthat it's a 5% to 10% performance hit in InnoDB; I'm far from\ncertain that full_page_writes cost us that much. Does anyone have\nbenchmark numbers handy?\n \n-Kevin\n",
"msg_date": "Fri, 22 Oct 2010 14:05:39 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
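The "is it inconsistent?" test Kevin asks about is usually just a per-page checksum; here is a small sketch under that assumption (zlib's CRC32, with the checksum kept in the first four bytes of the page - the layout is invented for illustration):

    import struct
    import zlib

    PAGE_SIZE = 8192

    def seal_page(payload: bytes) -> bytes:
        # Reserve the first 4 bytes of the page for a CRC of the remainder.
        assert len(payload) == PAGE_SIZE - 4
        return struct.pack("<I", zlib.crc32(payload)) + payload

    def page_is_consistent(page: bytes) -> bool:
        # A torn write (only some sectors of the page reached the platter)
        # will almost certainly fail this check, which is what lets a double
        # write buffer decide whether the in-place copy must be restored.
        stored = struct.unpack("<I", page[:4])[0]
        return stored == zlib.crc32(page[4:])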
{
"msg_contents": "On Fri, Oct 22, 2010 at 12:05 PM, Kevin Grittner\n<[email protected]> wrote:\n> Rob Wultsch <[email protected]> wrote:\n>\n>> I would think full_page_writes=off + double write buffer should be\n>> far superior, particularly given that the WAL is shipped over the\n>> network to slaves.\n>\n> For a reasonably brief description of InnoDB double write buffers, I\n> found this:\n>\n> http://www.mysqlperformanceblog.com/2006/08/04/innodb-double-write/\n>\n> One big question before even considering this would by how to\n> determine whether a potentially torn page \"is inconsistent\".\n> Without a page CRC or some such mechanism, I don't see how this\n> technique is possible.\n>\n> Even if it's possible, it's far from clear to me that it would be an\n> improvement. The author estimates (apparently somewhat loosely)\n> that it's a 5% to 10% performance hit in InnoDB; I'm far from\n> certain that full_page_writes cost us that much. Does anyone have\n> benchmark numbers handy?\n>\n> -Kevin\n>\n\nIgnoring (briefly) the cost in terms of performance of the different\nsystem, not needing full_page_writes would make geographically\ndispersed replication possible for certain cases where it is not\ncurrently (or at least rather painful).\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Fri, 22 Oct 2010 13:06:11 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Rob Wultsch <[email protected]> wrote:\n \n> not needing full_page_writes would make geographically dispersed\n> replication possible for certain cases where it is not currently\n> (or at least rather painful).\n \nDo you have any hard numbers on WAL file size impact? How much does\npglesslog help in a file-based WAL transmission environment? Should\nwe be considering similar filtering for streaming replication?\n \n-Kevin\n",
"msg_date": "Fri, 22 Oct 2010 15:15:30 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Fri, Oct 22, 2010 at 1:15 PM, Kevin Grittner\n<[email protected]> wrote:\n> Rob Wultsch <[email protected]> wrote:\n>\n>> not needing full_page_writes would make geographically dispersed\n>> replication possible for certain cases where it is not currently\n>> (or at least rather painful).\n>\n> Do you have any hard numbers on WAL file size impact? How much does\n> pglesslog help in a file-based WAL transmission environment? Should\n> we be considering similar filtering for streaming replication?\n>\n> -Kevin\n>\n\nNo, I am DBA that mostly works on MySQL. I have had to deal with\n(handwaving...) tangential issues recently. I really would like to\nwork with PG more and this seems like it would be a significant\nhindrance for certain usage patterns. Lots of replication does not\ntake place over gig...\n\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Fri, 22 Oct 2010 21:07:49 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "\n> Even if it's possible, it's far from clear to me that it would be an\n> improvement. The author estimates (apparently somewhat loosely)\n> that it's a 5% to 10% performance hit in InnoDB; I'm far from\n> certain that full_page_writes cost us that much. Does anyone have\n> benchmark numbers handy?\n\nIt most certainly can, depending on your CPU saturation and I/O support. \n I've seen a 10% improvement in througput time from turning off \nfull_page_writes on some machines, such as when we were doing the \nSpecJAppserver benchmarks on Solaris.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Sat, 23 Oct 2010 14:03:06 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Kevin Grittner wrote:\n> On what do you base that assumption? I assume that we send a full\n> 8K to the OS cache, and the file system writes disk sectors\n> according to its own algorithm. With either platters or BBU cache,\n> the data is persisted on fsync; why do you see a risk with one but\n> not the other?\n> \nSurely 'the data is persisted sometime after our write and before the \nfsynch returns, but\nmay be written:\n - in small chunks\n - out of order\n - in an unpredictable way'\n\nWhen I looked at the internals of TokyoCabinet for example, the design \nwas flawed but\nwould be 'fairly robust' so long as mmap'd pages that were dirtied did \nnot get persisted\nuntil msync, and were then persisted atomically.\n\n",
"msg_date": "Sun, 24 Oct 2010 09:05:19 +0100",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "James Mansion wrote:\n> When I looked at the internals of TokyoCabinet for example, the design \n> was flawed but\n> would be 'fairly robust' so long as mmap'd pages that were dirtied did \n> not get persisted\n> until msync, and were then persisted atomically.\n\nIf TokyoCabinet presumes that's true and overwrites existing blocks with \nthat assumption, it would land onto my list of databases I wouldn't \ntrust to hold my TODO list. Flip off power to a server, and you have no \nidea what portion of the blocks sitting in the drive's cache actually \nmade it to disk; that's not even guaranteed atomic to the byte level. \nTorn pages happen all the time unless you either a) put the entire write \ninto a non-volatile cache before writing any of it, b) write and sync \nsomewhere else first and then do a journaled filesystem pointer swap \nfrom the old page to the new one, or c) journal the whole write the way \nPostgreSQL does with full_page_writes and the WAL. The discussion here \nveered off over whether (a) was sufficiently satisfied just by having a \nRAID controller with battery backup, and what I concluded from the dive \ninto the details is that it's definitely not true unless the filesystem \nblock size exactly matches the database one. And even then, make sure \nyou test heavily.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Sun, 24 Oct 2010 12:53:13 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Jesper Krogh wrote:\n> Can you point to some ZFS docs that tell that this is the case.. I'd \n> be surprised\n> if it doesnt copy away the old block and replaces it with the new one \n> in-place. The\n> other behaviour would quite quickly lead to a hugely fragmented \n> filesystem that\n> performs next to useless and ZFS doesnt seem to be in that category..\n\nhttp://all-unix.blogspot.com/2007/03/zfs-cow-and-relate-features.html\n\n\"Blocks containing active data are never overwritten in place; instead, \na new block is allocated, modified data is written to it, and then any \nmetadata blocks referencing it are similarly read, reallocated, and \nwritten.\"\n\nhttp://opensolaris.org/jive/thread.jspa?messageID=19264 discusses how \nthis interacts with the common types of hardware around: no guaratees \nwith lying hard drives as always, but otherwise you're fine.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Sun, 24 Oct 2010 13:04:45 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> James Mansion wrote:\n>> When I looked at the internals of TokyoCabinet for example, the design \n>> was flawed but\n>> would be 'fairly robust' so long as mmap'd pages that were dirtied did \n>> not get persisted\n>> until msync, and were then persisted atomically.\n\n> If TokyoCabinet presumes that's true and overwrites existing blocks with \n> that assumption, it would land onto my list of databases I wouldn't \n> trust to hold my TODO list. Flip off power to a server, and you have no \n> idea what portion of the blocks sitting in the drive's cache actually \n> made it to disk; that's not even guaranteed atomic to the byte level. \n\nThe other and probably worse problem is that there's no application\ncontrol over how soon changes to mmap'd pages get to disk. An msync\nwill flush them out, but the kernel is free to write dirty pages sooner.\nSo if they're depending for consistency on writes not happening until\nmsync, it's broken by design. (This is one of the big reasons we don't\nuse mmap'd space for Postgres disk buffers.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Oct 2010 16:40:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles "
},
{
"msg_contents": "\nOn Oct 22, 2010, at 1:06 PM, Rob Wultsch wrote:\n\n> On Fri, Oct 22, 2010 at 12:05 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> Rob Wultsch <[email protected]> wrote:\n>> \n>>> I would think full_page_writes=off + double write buffer should be\n>>> far superior, particularly given that the WAL is shipped over the\n>>> network to slaves.\n>> \n>> For a reasonably brief description of InnoDB double write buffers, I\n>> found this:\n>> \n>> http://www.mysqlperformanceblog.com/2006/08/04/innodb-double-write/\n>> \n>> One big question before even considering this would by how to\n>> determine whether a potentially torn page \"is inconsistent\".\n>> Without a page CRC or some such mechanism, I don't see how this\n>> technique is possible.\n>> \n>> Even if it's possible, it's far from clear to me that it would be an\n>> improvement. The author estimates (apparently somewhat loosely)\n>> that it's a 5% to 10% performance hit in InnoDB; I'm far from\n>> certain that full_page_writes cost us that much. Does anyone have\n>> benchmark numbers handy?\n>> \n>> -Kevin\n>> \n> \n> Ignoring (briefly) the cost in terms of performance of the different\n> system, not needing full_page_writes would make geographically\n> dispersed replication possible for certain cases where it is not\n> currently (or at least rather painful)..\n\nAm I missing something here?\n\nCan't the network replication traffic be partial pages, but the WAL log on the slave (and master) be full pages? In other words, the slave can transform a partial page update into a full page xlog entry.\n\n\n(optional) 1. Log partial pages received from master to disk. (not xlog, something else, useful to persist changes faster)\n2. Read page from disk for update.\n3. Log full page modification to xlog for local commit.\n4. Update page in memory and write out to OS as usual.\n\nThe lack of the full page from the master would mean that you have to do a read-modify-write rather than just overwrite, but I think that works fine if network bandwidth is your bottleneck.\nI don't know enough of the guts of Postgres to be certain, but it seems conceptually like this is possible. \n\nAlso one could use lzo compression and get a likely factor of two space saving with small CPU cost.\n\n> \n> -- \n> Rob Wultsch\n> [email protected]\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 26 Oct 2010 00:22:44 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Fri, Oct 22, 2010 at 3:05 PM, Kevin Grittner\n<[email protected]> wrote:\n> Rob Wultsch <[email protected]> wrote:\n>\n>> I would think full_page_writes=off + double write buffer should be\n>> far superior, particularly given that the WAL is shipped over the\n>> network to slaves.\n>\n> For a reasonably brief description of InnoDB double write buffers, I\n> found this:\n>\n> http://www.mysqlperformanceblog.com/2006/08/04/innodb-double-write/\n>\n> One big question before even considering this would by how to\n> determine whether a potentially torn page \"is inconsistent\".\n> Without a page CRC or some such mechanism, I don't see how this\n> technique is possible.\n\nThere are two sides to this problem: figuring out when to write a page\nto the double write buffer, and figuring out when to read it back from\nthe double write buffer. The first seems easy: we just do it whenever\nwe would XLOG a full page image. As to the second, when we write the\npage out to the double write buffer, we could also write to the double\nwrite buffer the LSN of the WAL record which depends on that full page\nimage. Then, at the start of recovery, we scan the double write\nbuffer and remember all those LSNs. When we reach one of them, we\nreplay the full page image.\n\nThe good thing about this is that it would reduce WAL volume; the bad\nthing about it is that it would probably mean doing two fsyncs where\nwe only now do one.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 26 Oct 2010 08:41:13 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
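A rough sketch of the write path Robert outlines, assuming each double-write entry carries the LSN of the WAL record that depends on it. The file layout and names are invented, and a real implementation would defer or batch the second fsync rather than issue it per page:

    import os
    import struct

    PAGE = 8192
    HEADER = struct.Struct("<QQ")            # (lsn, block_no)

    def dw_write(dw_fd, data_fd, block_no, lsn, image):
        assert len(image) == PAGE
        os.write(dw_fd, HEADER.pack(lsn, block_no) + image)
        os.fsync(dw_fd)                       # image is now safe in the DW area
        os.pwrite(data_fd, image, block_no * PAGE)
        os.fsync(data_fd)                     # then the in-place copy

    def dw_scan(dw_fd):
        # At the start of recovery, collect (lsn, block_no, image) entries so
        # each full page image can be replayed when recovery reaches its LSN.
        os.lseek(dw_fd, 0, os.SEEK_SET)
        entries = []
        while True:
            header = os.read(dw_fd, HEADER.size)
            if len(header) < HEADER.size:
                break
            lsn, block_no = HEADER.unpack(header)
            image = os.read(dw_fd, PAGE)
            if len(image) == PAGE:
                entries.append((lsn, block_no, image))
        return entries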
{
"msg_contents": "On Tue, Oct 26, 2010 at 5:41 AM, Robert Haas <[email protected]> wrote:\n> On Fri, Oct 22, 2010 at 3:05 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> Rob Wultsch <[email protected]> wrote:\n>>\n>>> I would think full_page_writes=off + double write buffer should be\n>>> far superior, particularly given that the WAL is shipped over the\n>>> network to slaves.\n>>\n>> For a reasonably brief description of InnoDB double write buffers, I\n>> found this:\n>>\n>> http://www.mysqlperformanceblog.com/2006/08/04/innodb-double-write/\n>>\n>> One big question before even considering this would by how to\n>> determine whether a potentially torn page \"is inconsistent\".\n>> Without a page CRC or some such mechanism, I don't see how this\n>> technique is possible.\n>\n> There are two sides to this problem: figuring out when to write a page\n> to the double write buffer, and figuring out when to read it back from\n> the double write buffer. The first seems easy: we just do it whenever\n> we would XLOG a full page image. As to the second, when we write the\n> page out to the double write buffer, we could also write to the double\n> write buffer the LSN of the WAL record which depends on that full page\n> image. Then, at the start of recovery, we scan the double write\n> buffer and remember all those LSNs. When we reach one of them, we\n> replay the full page image.\n>\n> The good thing about this is that it would reduce WAL volume; the bad\n> thing about it is that it would probably mean doing two fsyncs where\n> we only now do one.\n>\n\nThe double write buffer is one of the few areas where InnoDB does more\nIO (in the form of fsynch's) than PG. InnoDB also has fuzzy\ncheckpoints (which help to keep dirty pages in memory longer),\nbuffering of writing out changes to secondary indexes, and recently\ntunable page level compression.\n\nGiven that InnoDB is not shipping its logs across the wire, I don't\nthink many users would really care if it used the double writer or\nfull page writes approach to the redo log (other than the fact that\nthe log files would be bigger). PG on the other hand *is* pushing its\nlogs over the wire...\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Tue, 26 Oct 2010 07:13:53 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 10:13 AM, Rob Wultsch <[email protected]> wrote:\n> The double write buffer is one of the few areas where InnoDB does more\n> IO (in the form of fsynch's) than PG. InnoDB also has fuzzy\n> checkpoints (which help to keep dirty pages in memory longer),\n> buffering of writing out changes to secondary indexes, and recently\n> tunable page level compression.\n\nBaron Schwartz was talking to me about this at Surge. I don't really\nunderstand how the fuzzy checkpoint stuff works, and I haven't been\nable to find a good description of it anywhere. How does it keep\ndirty pages in memory longer? Details on the other things you mention\nwould be interesting to hear, too.\n\n> Given that InnoDB is not shipping its logs across the wire, I don't\n> think many users would really care if it used the double writer or\n> full page writes approach to the redo log (other than the fact that\n> the log files would be bigger). PG on the other hand *is* pushing its\n> logs over the wire...\n\nSo how is InnoDB doing replication? Is there a second log just for that?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 26 Oct 2010 10:25:38 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 7:25 AM, Robert Haas <[email protected]> wrote:\n> On Tue, Oct 26, 2010 at 10:13 AM, Rob Wultsch <[email protected]> wrote:\n>> The double write buffer is one of the few areas where InnoDB does more\n>> IO (in the form of fsynch's) than PG. InnoDB also has fuzzy\n>> checkpoints (which help to keep dirty pages in memory longer),\n>> buffering of writing out changes to secondary indexes, and recently\n>> tunable page level compression.\n>\n> Baron Schwartz was talking to me about this at Surge. I don't really\n> understand how the fuzzy checkpoint stuff works, and I haven't been\n> able to find a good description of it anywhere. How does it keep\n> dirty pages in memory longer? Details on the other things you mention\n> would be interesting to hear, too.\n\nFor checkpoint behavior:\nhttp://books.google.com/books?id=S_yHERPRZScC&pg=PA606&lpg=PA606&dq=fuzzy+checkpoint&source=bl&ots=JJrzRUKBGh&sig=UOMPsRy5E-YDgjAFkaSVn3dps_M&hl=en&ei=_k8yTOfeHYzZnAepyumLBA&sa=X&oi=book_result&ct=result&resnum=8&ved=0CEYQ6AEwBw#v=onepage&q=fuzzy%20checkpoint&f=false\n\nI would think that best case behavior \"sharp\" checkpoints with a large\ncheckpoint_completion_target would have behavior similar to a fuzzy\ncheckpoint.\n\nInsert (for innodb 1.1+ evidently there is also does delete and purge)\nbuffering:\nhttp://dev.mysql.com/doc/refman/5.5/en/innodb-insert-buffering.html\n\nFor a recent ~800GB db I had to restore, the insert buffer saved 92%\nof io needed for secondary indexes.\n\nCompression:\nhttp://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-internals.html\n\nFor many workloads 50% compression results in negligible impact to\nperformance. For certain workloads compression can help performance.\nPlease note that InnoDB also has non-tunable toast like feature.\n\n\n>> Given that InnoDB is not shipping its logs across the wire, I don't\n>> think many users would really care if it used the double writer or\n>> full page writes approach to the redo log (other than the fact that\n>> the log files would be bigger). PG on the other hand *is* pushing its\n>> logs over the wire...\n>\n> So how is InnoDB doing replication? Is there a second log just for that?\n>\n\nThe other log is the \"binary log\" and it is one of the biggest\nproblems with MySQL. Running MySQL in such a way that the binary log\nstays in sync with the InnoDB redo has a very significant impact on\nperformance.\nhttp://www.mysqlperformanceblog.com/2010/10/23/mysql-limitations-part-2-the-binary-log/\nhttp://mysqlha.blogspot.com/2010/09/mysql-versus-mongodb-update-performance.html\n(check out the pretty graph)\n\nIf you are going to West you should considering heading over to the\nFacebook office on Tuesday as the MySQL team is having something of an\nopen house:\nhttp://www.facebook.com/event.php?eid=160712450628622\n\nMark Callaghan from the Facebook MySQL Engineering (and several\nmembers of their ops team, for that matter) team understands InnoDB\ndramatically better than I do.\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Tue, 26 Oct 2010 21:41:55 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 12:41 AM, Rob Wultsch <[email protected]> wrote:\n> On Tue, Oct 26, 2010 at 7:25 AM, Robert Haas <[email protected]> wrote:\n>> On Tue, Oct 26, 2010 at 10:13 AM, Rob Wultsch <[email protected]> wrote:\n>>> The double write buffer is one of the few areas where InnoDB does more\n>>> IO (in the form of fsynch's) than PG. InnoDB also has fuzzy\n>>> checkpoints (which help to keep dirty pages in memory longer),\n>>> buffering of writing out changes to secondary indexes, and recently\n>>> tunable page level compression.\n>>\n>> Baron Schwartz was talking to me about this at Surge. I don't really\n>> understand how the fuzzy checkpoint stuff works, and I haven't been\n>> able to find a good description of it anywhere. How does it keep\n>> dirty pages in memory longer? Details on the other things you mention\n>> would be interesting to hear, too.\n>\n> For checkpoint behavior:\n> http://books.google.com/books?id=S_yHERPRZScC&pg=PA606&lpg=PA606&dq=fuzzy+checkpoint&source=bl&ots=JJrzRUKBGh&sig=UOMPsRy5E-YDgjAFkaSVn3dps_M&hl=en&ei=_k8yTOfeHYzZnAepyumLBA&sa=X&oi=book_result&ct=result&resnum=8&ved=0CEYQ6AEwBw#v=onepage&q=fuzzy%20checkpoint&f=false\n>\n> I would think that best case behavior \"sharp\" checkpoints with a large\n> checkpoint_completion_target would have behavior similar to a fuzzy\n> checkpoint.\n\nWell, under that definition of a fuzzy checkpoint, our checkpoints are\nfuzzy even with checkpoint_completion_target=0.\n\nWhat Baron seemed to be describing was a scheme whereby you could do\nwhat I might call partial checkpoints. IOW, you want to move the redo\npointer without writing out ALL the dirty buffers in memory, so you\nwrite out the pages with the oldest LSNs and then move the redo\npointer to the oldest LSN you have left. Except that doesn't quite\nwork, because the page might have been dirtied at LSN X and then later\nupdated again at LSN Y, and you still have to flush it to disk before\nmoving the redo pointer to any value >X. So you work around that by\nmaintaining a \"first dirtied\" LSN for each page as well as the current\nLSN.\n\nI'm not 100% sure that this is how it works or that it would work in\nPG, but even assuming that it is and does, I'm not sure what the\nbenefit is over the checkpoint-spreading logic we have now. There\nmight be some benefit in sorting the writes that we do, so that we can\nspread out the fsyncs. So, write all the blocks to a give file,\nfsync, and then repeat for each underlying data file that has at least\none dirty block. But that's completely orthogonal to (and would\nactually be hindered by) the approach described in the preceding\nparagraph.\n\n> Insert (for innodb 1.1+ evidently there is also does delete and purge)\n> buffering:\n> http://dev.mysql.com/doc/refman/5.5/en/innodb-insert-buffering.html\n\nWe do something a bit like this for GIST indices. It would be\ninteresting to see if it also has a benefit for btree indices.\n\n> For a recent ~800GB db I had to restore, the insert buffer saved 92%\n> of io needed for secondary indexes.\n>\n> Compression:\n> http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-internals.html\n>\n> For many workloads 50% compression results in negligible impact to\n> performance. For certain workloads compression can help performance.\n> Please note that InnoDB also has non-tunable toast like feature.\n\nInteresting. I am surprised this works well. 
It seems that this only\nworks for pages that can be compressed by >=50%, which seems like it\ncould result in a lot of CPU wasted on failed attempts to compress.\n\n>>> Given that InnoDB is not shipping its logs across the wire, I don't\n>>> think many users would really care if it used the double writer or\n>>> full page writes approach to the redo log (other than the fact that\n>>> the log files would be bigger). PG on the other hand *is* pushing its\n>>> logs over the wire...\n>>\n>> So how is InnoDB doing replication? Is there a second log just for that?\n>>\n>\n> The other log is the \"binary log\" and it is one of the biggest\n> problems with MySQL. Running MySQL in such a way that the binary log\n> stays in sync with the InnoDB redo has a very significant impact on\n> performance.\n> http://www.mysqlperformanceblog.com/2010/10/23/mysql-limitations-part-2-the-binary-log/\n> http://mysqlha.blogspot.com/2010/09/mysql-versus-mongodb-update-performance.html\n> (check out the pretty graph)\n\nHmm. That seems kinda painful. Having to ship full page images over\nthe wire doesn't seems so bad by comparison, though I'm not very happy\nabout having to do that either.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 27 Oct 2010 21:55:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
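A toy version of the bookkeeping Robert describes (purely illustrative): each dirty buffer remembers the LSN at which it was first dirtied, the writer flushes the buffers with the oldest such LSNs, and the redo pointer can then advance only to the minimum first-dirtied LSN that remains.

    def partial_checkpoint(dirty, flush_page, budget):
        # dirty: dict mapping page_id -> first_dirtied_lsn
        # flush_page(page_id): writes that buffer out to disk
        # budget: how many buffers to write in this round
        oldest_first = sorted(dirty.items(), key=lambda kv: kv[1])
        for page_id, _ in oldest_first[:budget]:
            flush_page(page_id)
            del dirty[page_id]
        # The redo pointer may move up to the oldest remaining first-dirtied
        # LSN; if nothing is dirty it can move all the way to the current LSN.
        return min(dirty.values()) if dirty else None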
{
"msg_contents": "On Wed, Oct 27, 2010 at 6:55 PM, Robert Haas <[email protected]> wrote:\n> On Wed, Oct 27, 2010 at 12:41 AM, Rob Wultsch <[email protected]> wrote:\n>> On Tue, Oct 26, 2010 at 7:25 AM, Robert Haas <[email protected]> wrote:\n>>> On Tue, Oct 26, 2010 at 10:13 AM, Rob Wultsch <[email protected]> wrote:\n>>>> The double write buffer is one of the few areas where InnoDB does more\n>>>> IO (in the form of fsynch's) than PG. InnoDB also has fuzzy\n>>>> checkpoints (which help to keep dirty pages in memory longer),\n>>>> buffering of writing out changes to secondary indexes, and recently\n>>>> tunable page level compression.\n>>>\n>>> Baron Schwartz was talking to me about this at Surge. I don't really\n>>> understand how the fuzzy checkpoint stuff works, and I haven't been\n>>> able to find a good description of it anywhere. How does it keep\n>>> dirty pages in memory longer? Details on the other things you mention\n>>> would be interesting to hear, too.\n>>\n>> For checkpoint behavior:\n>> http://books.google.com/books?id=S_yHERPRZScC&pg=PA606&lpg=PA606&dq=fuzzy+checkpoint&source=bl&ots=JJrzRUKBGh&sig=UOMPsRy5E-YDgjAFkaSVn3dps_M&hl=en&ei=_k8yTOfeHYzZnAepyumLBA&sa=X&oi=book_result&ct=result&resnum=8&ved=0CEYQ6AEwBw#v=onepage&q=fuzzy%20checkpoint&f=false\n>>\n>> I would think that best case behavior \"sharp\" checkpoints with a large\n>> checkpoint_completion_target would have behavior similar to a fuzzy\n>> checkpoint.\n>\n> Well, under that definition of a fuzzy checkpoint, our checkpoints are\n> fuzzy even with checkpoint_completion_target=0.\n>\n> What Baron seemed to be describing was a scheme whereby you could do\n> what I might call partial checkpoints. IOW, you want to move the redo\n> pointer without writing out ALL the dirty buffers in memory, so you\n> write out the pages with the oldest LSNs and then move the redo\n> pointer to the oldest LSN you have left. Except that doesn't quite\n> work, because the page might have been dirtied at LSN X and then later\n> updated again at LSN Y, and you still have to flush it to disk before\n> moving the redo pointer to any value >X. So you work around that by\n> maintaining a \"first dirtied\" LSN for each page as well as the current\n> LSN.\n>\n> I'm not 100% sure that this is how it works or that it would work in\n> PG, but even assuming that it is and does, I'm not sure what the\n> benefit is over the checkpoint-spreading logic we have now. There\n> might be some benefit in sorting the writes that we do, so that we can\n> spread out the fsyncs. So, write all the blocks to a give file,\n> fsync, and then repeat for each underlying data file that has at least\n> one dirty block. But that's completely orthogonal to (and would\n> actually be hindered by) the approach described in the preceding\n> paragraph.\n\nI wish I could answer your questions better. I am a power user that\ndoes not fully understand InnoDB internals. There are not all that\nmany folks that have a very good understanding of InnoDB internals\n(given how well it works there is not all that much need).\n\n>\n>> Insert (for innodb 1.1+ evidently there is also does delete and purge)\n>> buffering:\n>> http://dev.mysql.com/doc/refman/5.5/en/innodb-insert-buffering.html\n>\n> We do something a bit like this for GIST indices. 
It would be\n> interesting to see if it also has a benefit for btree indices.\n>\n>> For a recent ~800GB db I had to restore, the insert buffer saved 92%\n>> of io needed for secondary indexes.\n>>\n>> Compression:\n>> http://dev.mysql.com/doc/innodb-plugin/1.0/en/innodb-compression-internals.html\n>>\n>> For many workloads 50% compression results in negligible impact to\n>> performance. For certain workloads compression can help performance.\n>> Please note that InnoDB also has non-tunable toast like feature.\n>\n> Interesting. I am surprised this works well. It seems that this only\n> works for pages that can be compressed by >=50%, which seems like it\n> could result in a lot of CPU wasted on failed attempts to compress.\n\nIn my world, the spinning disk is almost always the bottleneck.\nTrading CPU for IO is almost always a good deal for me.\n\n>\n>>>> Given that InnoDB is not shipping its logs across the wire, I don't\n>>>> think many users would really care if it used the double writer or\n>>>> full page writes approach to the redo log (other than the fact that\n>>>> the log files would be bigger). PG on the other hand *is* pushing its\n>>>> logs over the wire...\n>>>\n>>> So how is InnoDB doing replication? Is there a second log just for that?\n>>>\n>>\n>> The other log is the \"binary log\" and it is one of the biggest\n>> problems with MySQL. Running MySQL in such a way that the binary log\n>> stays in sync with the InnoDB redo has a very significant impact on\n>> performance.\n>> http://www.mysqlperformanceblog.com/2010/10/23/mysql-limitations-part-2-the-binary-log/\n>> http://mysqlha.blogspot.com/2010/09/mysql-versus-mongodb-update-performance.html\n>> (check out the pretty graph)\n>\n> Hmm. That seems kinda painful. Having to ship full page images over\n> the wire doesn't seems so bad by comparison, though I'm not very happy\n> about having to do that either.\n>\n\nThe binary log is less than ideal, but with MySQL replication I can\nreplicate to *many* servers that are *very* geographically distributed\nwithout all that many headaches. In addition it is simple enough that\nI can have junior DBA manage it. I have doubts that I could make PG\ndo the same anywhere near as easily, particularly given how long and\nnarrow some pipes are...\n\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Wed, 27 Oct 2010 20:43:52 -0700",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Tom Lane wrote:\n> The other and probably worse problem is that there's no application\n> control over how soon changes to mmap'd pages get to disk. An msync\n> will flush them out, but the kernel is free to write dirty pages sooner.\n> So if they're depending for consistency on writes not happening until\n> msync, it's broken by design. (This is one of the big reasons we don't\n> use mmap'd space for Postgres disk buffers.)\n> \nWell, I agree that it sucks for the reason you give - but you use write \nand that's *exactly* the\nsame in terms of when it gets written, as when you update a byte on an \nmmap'd page.\n\nAnd you're quite happy to use write.\n\nThe only difference is that its a lot more explicit where the point of \n'maybe its written and maybe\nit isn't' occurs.\n\nThere need be no real difference in the architecture for one over the \nother: there does seem to be\nevidence that write and read can have better forward-read and \nwrite-behind behaviour, because\nread/write does allow you to initiate an IO with a hint to a size that \nexceeds a hardware page.\n\nAnd yes, after getting into the details while starting to port TC to \nWindows, I decided to bin\nit. Especially handy that SQLite3 has WAL now. (And one last dig - TC \ndidn't even\nhave a checksum that would let you tell when it had been broken: but it \nmight all be fixed now\nof course, I don't have time to check.)\n\nJames\n\n",
"msg_date": "Thu, 28 Oct 2010 21:33:19 +0100",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "James Mansion <[email protected]> writes:\n> Tom Lane wrote:\n>> The other and probably worse problem is that there's no application\n>> control over how soon changes to mmap'd pages get to disk. An msync\n>> will flush them out, but the kernel is free to write dirty pages sooner.\n>> So if they're depending for consistency on writes not happening until\n>> msync, it's broken by design. (This is one of the big reasons we don't\n>> use mmap'd space for Postgres disk buffers.)\n\n> Well, I agree that it sucks for the reason you give - but you use\n> write and that's *exactly* the same in terms of when it gets written,\n> as when you update a byte on an mmap'd page.\n\nUh, no, it is not. The difference is that we can update a byte in a\nshared buffer, and know that it *isn't* getting written out before we\nsay so. If the buffer were mmap'd then we'd have no control over that,\nwhich makes it mighty hard to obey the WAL \"write log before data\"\nparadigm.\n\nIt's true that we don't know whether write() causes an immediate or\ndelayed disk write, but we generally don't care that much. What we do\ncare about is being able to ensure that a WAL write happens before the\ndata write, and with mmap we don't have control over that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Oct 2010 17:26:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles "
},
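The ordering rule Tom describes can be reduced to a couple of lines; a hedged sketch with invented names, standing in for the real buffer manager and WAL writer:

    class LogBeforeData:
        def __init__(self, flush_wal_through):
            # flush_wal_through(lsn): stand-in for writing and fsync'ing WAL
            # up to at least the given LSN.
            self.flush_wal_through = flush_wal_through
            self.flushed_lsn = 0

        def write_data_page(self, page_lsn, write_page):
            if self.flushed_lsn < page_lsn:
                self.flush_wal_through(page_lsn)   # WAL first...
                self.flushed_lsn = page_lsn
            write_page()                           # ...only then the data page

The rule is only enforceable because the buffer stays private until write_page() runs; with an mmap'd buffer the kernel could push the page out at any point before the WAL flush.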
{
"msg_contents": "On Thu, Oct 28, 2010 at 5:26 PM, Tom Lane <[email protected]> wrote:\n> James Mansion <[email protected]> writes:\n>> Tom Lane wrote:\n>>> The other and probably worse problem is that there's no application\n>>> control over how soon changes to mmap'd pages get to disk. An msync\n>>> will flush them out, but the kernel is free to write dirty pages sooner.\n>>> So if they're depending for consistency on writes not happening until\n>>> msync, it's broken by design. (This is one of the big reasons we don't\n>>> use mmap'd space for Postgres disk buffers.)\n>\n>> Well, I agree that it sucks for the reason you give - but you use\n>> write and that's *exactly* the same in terms of when it gets written,\n>> as when you update a byte on an mmap'd page.\n>\n> Uh, no, it is not. The difference is that we can update a byte in a\n> shared buffer, and know that it *isn't* getting written out before we\n> say so. If the buffer were mmap'd then we'd have no control over that,\n> which makes it mighty hard to obey the WAL \"write log before data\"\n> paradigm.\n>\n> It's true that we don't know whether write() causes an immediate or\n> delayed disk write, but we generally don't care that much. What we do\n> care about is being able to ensure that a WAL write happens before the\n> data write, and with mmap we don't have control over that.\n\nWell, we COULD keep the data in shared buffers, and then copy it into\nan mmap()'d region rather than calling write(), but I'm not sure\nthere's any advantage to it. Managing address space mappings is a\npain in the butt.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 29 Oct 2010 11:43:58 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Fri, Oct 29, 2010 at 11:43 AM, Robert Haas <[email protected]> wrote:\n\n> Well, we COULD keep the data in shared buffers, and then copy it into\n> an mmap()'d region rather than calling write(), but I'm not sure\n> there's any advantage to it. Managing address space mappings is a\n> pain in the butt.\n\nI could see this being a *theoretical* benefit in the case that the\nbackground writer gains the ability to write out all blocks associated\nwith a file in order. In that case, you might get a win because you\ncould get a single mmap of the entire file, and just wholesale memcpy\nblocks across, then sync/unmap it.\n\nThis, of course assumes a few things that must be for it to be per formant:\n0) a list of blocks to be written grouped by files is readily available.\n1) The pages you write to must be in the page cache, or your memcpy is\ngoing to fault them in. With a plain write, you don't need the\nover-written page in the cache.\n2) Now, instead of the torn-page problem being FS block/sector sized\nbase, you can now actually have a possibly arbitrary amount of the\nblock memory written when the kernel writes out the page. you\n*really* need full-page-writes.\n3) The mmap overhead required for the kernel to setup the mappings is\nless than the repeated syscalls of a simple write().\n\nAll those things seem like something that somebody could synthetically\nbenchmark to prove value before even trying to bolt into PostgreSQL.\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Fri, 29 Oct 2010 11:56:09 -0400",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
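For anyone who wants to benchmark Aidan's point (3) - and run straight into point (1) - a toy version of the mmap-and-memcpy write path is easy to put together. This is only a sketch with made-up parameters, not a proposal for PostgreSQL:

    import mmap
    import os

    PAGE = 8192

    def mmap_write_page(path, block_no, image):
        # Copy one 8K image into an already-sized data file via mmap + msync.
        # The slice assignment faults the target page into the page cache even
        # though every byte of it gets overwritten (Aidan's point 1), and the
        # kernel is free to write the dirty page out before flush() is called.
        assert len(image) == PAGE
        fd = os.open(path, os.O_RDWR)
        try:
            with mmap.mmap(fd, os.fstat(fd).st_size) as m:
                off = block_no * PAGE
                m[off:off + PAGE] = image    # the "wholesale memcpy"
                m.flush()                    # msync
        finally:
            os.close(fd)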
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Oct 28, 2010 at 5:26 PM, Tom Lane <[email protected]> wrote:\n>> It's true that we don't know whether write() causes an immediate or\n>> delayed disk write, but we generally don't care that much. �What we do\n>> care about is being able to ensure that a WAL write happens before the\n>> data write, and with mmap we don't have control over that.\n\n> Well, we COULD keep the data in shared buffers, and then copy it into\n> an mmap()'d region rather than calling write(), but I'm not sure\n> there's any advantage to it. Managing address space mappings is a\n> pain in the butt.\n\nIn principle that ought to be right about the same speed as using\nwrite() to copy the data from shared buffers to kernel disk buffers,\nanyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 11:57:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles "
},
{
"msg_contents": "On Fri, Oct 29, 2010 at 11:56 AM, Aidan Van Dyk <[email protected]> wrote:\n> 1) The pages you write to must be in the page cache, or your memcpy is\n> going to fault them in. With a plain write, you don't need the\n> over-written page in the cache.\n\nI seem to remember a time many years ago when I got bitten by this\nproblem. The fact that our I/O is in 8K pages means this could be a\npretty severe hit, I think.\n\n> 2) Now, instead of the torn-page problem being FS block/sector sized\n> base, you can now actually have a possibly arbitrary amount of the\n> block memory written when the kernel writes out the page. you\n> *really* need full-page-writes.\n\nYeah.\n\n> 3) The mmap overhead required for the kernel to setup the mappings is\n> less than the repeated syscalls of a simple write().\n\nYou'd expect to save something from that; but on the other hand, at\nleast on 32-bit systems, there's a very limited number of 1GB files\nthat can be simultaneously mapped into one address space, and it's a\nlot smaller than the number of file descriptors that you can have\nopen. Rumor has it that cutting down the number of fds that can stay\nopen simultaneously is pretty bad for performance, so cutting it down\nto a number you can count on one hand (maybe one finger) would\nprobably be bad. Maybe on 64-bit it would be OK but it seems like an\nawful lot of complexity for at most a minor savings (and a pretty bad\nanti-savings if point #1 kicks in).\n\nAnyway this is all totally off-topic...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 29 Oct 2010 12:44:06 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "The email system at [email protected] is spamming me with a\nmessage telling me that [email protected] thinks my posts are\nspam.\n\n...Robert\n\n\n---------- Forwarded message ----------\nFrom: <[email protected]>\nDate: Fri, Oct 29, 2010 at 12:45 PM\nSubject: failure notice\nTo: [email protected]\n\n\nHi. This is the qmail-send program at burntmail.com.\nI'm afraid I wasn't able to deliver your message to the following addresses.\nThis is a permanent error; I've given up. Sorry it didn't work out.\n\n<[email protected]>:\nSorry - The message you have sent was identified as spam by Spam\nAssassin (message bounced)\n\n--- Below this line is a copy of the message.\n\nReturn-Path: <[email protected]>\nReceived: (qmail 29883 invoked from network); 29 Oct 2010 16:45:55 -0000\nReceived: from unknown (74.52.149.146)\n by burntmail.com with SMTP; 29 Oct 2010 16:45:55 -0000\n>From [email protected] Fri Oct 29 11:45:12 2010\nReturn-path: <[email protected]>\nEnvelope-to: [email protected]\nDelivery-date: Fri, 29 Oct 2010 11:45:12 -0500\nReceived: from [200.46.208.106] (helo=mx1.hub.org)\n by mx01.burntspam.com with esmtp (Exim 4.63)\n (envelope-from <[email protected]>)\n id 1PBs4l-0004Vn-Sf\n for [email protected]; Fri, 29 Oct 2010 11:45:12 -0500\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n by mx1.hub.org (Postfix) with ESMTP id 995F33269608;\n Fri, 29 Oct 2010 16:44:31 +0000 (UTC)\nReceived: from maia.hub.org (maia-5.hub.org [200.46.204.29])\n by mail.postgresql.org (Postfix) with ESMTP id 97AAD1337B6C\n for <[email protected]>;\nFri, 29 Oct 2010 13:44:15 -0300 (ADT)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.29]) (amavisd-maia, port 10024)\n with ESMTP id 59612-09\n for <[email protected]>;\n Fri, 29 Oct 2010 16:44:08 +0000 (UTC)\nX-BMDA-Edge: IPR=0,DYN=0,SRB=0,SPM=6.9,BTS=1.1,RBL=0,HIS=0,WHT=0,STR=0\nX-Greylist: domain auto-whitelisted by SQLgrey-1.7.6\nReceived: from mail-iw0-f174.google.com (mail-iw0-f174.google.com\n[209.85.214.174])\n by mail.postgresql.org (Postfix) with ESMTP id F24EF133616B\n for <[email protected]>; Fri, 29 Oct 2010\n13:44:07 -0300 (ADT)\nReceived: by iwn10 with SMTP id 10so3905880iwn.19\n for <[email protected]>; Fri, 29 Oct 2010\n09:44:07 -0700 (PDT)\nDKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=gmail.com; s=gamma;\n h=domainkey-signature:mime-version:received:received:in-reply-to\n :references:date:message-id:subject:from:to:cc:content-type\n :content-transfer-encoding;\n bh=nsTf9SMp9latxxCm4tIz7goGfoWLY9cyktdrdmnQ3NA=;\n b=XMNuOdqxryiNGYUTsa5YW9++XX7vCL/slcrHOmeCXILOSEWPjj4hcYi1d3DACRO2uU\n 4YVTKrTYlgJ01AjLZRb4iR8Wa/5j6MoTCgpqyyaBelyCMqEyM5m/cS9KgtVoipnLrC/a\n vYTBfIKrSwmzyGE/HXj40uiBmLSajXTBrJHX8=\nDomainKey-Signature: a=rsa-sha1; c=nofws;\n d=gmail.com; s=gamma;\n h=mime-version:in-reply-to:references:date:message-id:subject:from:to\n :cc:content-type:content-transfer-encoding;\n b=bE0oL6KD08QtxQvelBRJ6ZqCDLSgoDrKBXWJvaHQ+SDRg2cPIaSUgwX6axKk2VKDsk\n TZKVtCOWQr/sjigfdHQLuIlzjz99yELrltKIH8WWF36QwiLDpeYXLFuUve7lrj7BKNRj\n gGqxFdBSqBaZSf6qrBYh/Wk2LfdmwxQXHbm8I=\nMIME-Version: 1.0\nReceived: by 10.231.154.73 with SMTP id n9mr4219450ibw.10.1288370647220; Fri,\n 29 Oct 2010 09:44:07 -0700 (PDT)\nReceived: by 10.231.33.71 with HTTP; Fri, 29 Oct 2010 09:44:06 -0700 (PDT)\nIn-Reply-To: <[email protected]>\nReferences: <[email protected]>\n <[email protected]>\n <[email protected]>\n <[email protected]>\n <[email protected]>\n <[email protected]>\n <[email 
protected]>\n <[email protected]>\n <[email protected]>\n <[email protected]>\nDate: Fri, 29 Oct 2010 12:44:06 -0400\nMessage-ID: <[email protected]>\nSubject: Re: [PERFORM] BBU Cache vs. spindles\nFrom: Robert Haas <[email protected]>\nTo: Aidan Van Dyk <[email protected]>\nCc: Tom Lane <[email protected]>, James Mansion <[email protected]>,\n Greg Smith <[email protected]>, Kevin Grittner\n<[email protected]>,\n Bruce Momjian <[email protected]>, [email protected],\n Scott Marlowe <[email protected]>, Steve Crawford\n<[email protected]>,\n [email protected], Ben Chobot <[email protected]>\nContent-Type: text/plain; charset=ISO-8859-1\nContent-Transfer-Encoding: quoted-printable\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits=-1.9 tagged_above=-10 required=5 tests=BAYES_00=-1.9,\n RCVD_IN_DNSWL_NONE=-0.0001\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:[email protected]>\nList-Subscribe: <mailto:[email protected]?body=sub%20pgsql-performance>\nList-Unsubscribe:\n<mailto:[email protected]?body=unsub%20pgsql-performance>\nPrecedence: bulk\nSender: [email protected]\nX-Relayed-For: (mx1.hub.org) [200.46.208.106]\n\nOn Fri, Oct 29, 2010 at 11:56 AM, Aidan Van Dyk <[email protected]> wrote:\n> 1) The pages you write to must be in the page cache, or your memcpy is\n> going to fault them in. =A0With a plain write, you don't need the\n> over-written page in the cache.\n\nI seem to remember a time many years ago when I got bitten by this\nproblem. The fact that our I/O is in 8K pages means this could be a\npretty severe hit, I think.\n\n> 2) Now, instead of the torn-page problem being FS block/sector sized\n> base, you can now actually have a possibly arbitrary amount of the\n> block memory written when the kernel writes out the page. =A0you\n> *really* need full-page-writes.\n\nYeah.\n\n> 3) The mmap overhead required for the kernel to setup the mappings is\n> less than the repeated syscalls of a simple write().\n\nYou'd expect to save something from that; but on the other hand, at\nleast on 32-bit systems, there's a very limited number of 1GB files\nthat can be simultaneously mapped into one address space, and it's a\nlot smaller than the number of file descriptors that you can have\nopen. Rumor has it that cutting down the number of fds that can stay\nopen simultaneously is pretty bad for performance, so cutting it down\nto a number you can count on one hand (maybe one finger) would\nprobably be bad. Maybe on 64-bit it would be OK but it seems like an\nawful lot of complexity for at most a minor savings (and a pretty bad\nanti-savings if point #1 kicks in).\n\nAnyway this is all totally off-topic...\n\n--=20\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n--=20\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 29 Oct 2010 14:13:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Fwd: failure notice"
},
{
"msg_contents": "Excerpts from Greg Smith's message of jue oct 21 14:04:17 -0300 2010:\n\n> What I would like to do is beef up the documentation with some concrete \n> examples of how to figure out if your cache and associated write path \n> are working reliably or not. It should be possible to include \"does \n> this handle full page writes correctly?\" in that test suite. Until we \n> have something like that, I'm concerned that bugs in filesystem or \n> controller handling may make full_page_writes unsafe even with a BBU, \n> and we'd have no way for people to tell if that's true or not.\n\nI think if you assume that there are bugs in the filesystem which you\nneed to protect against, you are already hosed. I imagine there must be\nsome filesystem bug that makes it safe to have full_page_writes=on, but\nunsafe to have full_page_writes=off; but I'd probably discard those as a\nrare minority and thus not worth worrying about.\n\nI agree it would be worth testing though.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 29 Oct 2010 15:49:12 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Fri, 29 Oct 2010, Robert Haas wrote:\n\n> On Thu, Oct 28, 2010 at 5:26 PM, Tom Lane <[email protected]> wrote:\n>> James Mansion <[email protected]> writes:\n>>> Tom Lane wrote:\n>>>> The other and probably worse problem is that there's no application\n>>>> control over how soon changes to mmap'd pages get to disk. �An msync\n>>>> will flush them out, but the kernel is free to write dirty pages sooner.\n>>>> So if they're depending for consistency on writes not happening until\n>>>> msync, it's broken by design. �(This is one of the big reasons we don't\n>>>> use mmap'd space for Postgres disk buffers.)\n>>\n>>> Well, I agree that it sucks for the reason you give - but you use\n>>> write and that's *exactly* the same in terms of when it gets written,\n>>> as when you update a byte on an mmap'd page.\n>>\n>> Uh, no, it is not. �The difference is that we can update a byte in a\n>> shared buffer, and know that it *isn't* getting written out before we\n>> say so. �If the buffer were mmap'd then we'd have no control over that,\n>> which makes it mighty hard to obey the WAL \"write log before data\"\n>> paradigm.\n>>\n>> It's true that we don't know whether write() causes an immediate or\n>> delayed disk write, but we generally don't care that much. �What we do\n>> care about is being able to ensure that a WAL write happens before the\n>> data write, and with mmap we don't have control over that.\n>\n> Well, we COULD keep the data in shared buffers, and then copy it into\n> an mmap()'d region rather than calling write(), but I'm not sure\n> there's any advantage to it. Managing address space mappings is a\n> pain in the butt.\n\nkeep in mind that you have no way of knowing what order the data in the \nmmap region gets written out to disk.\n\nDavid Lang\n>From [email protected] Fri Oct 29 17:34:16 2010\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id A0D1C1337B63\n\tfor <[email protected]>; Fri, 29 Oct 2010 17:34:15 -0300 (ADT)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 12192-01\n for <[email protected]>;\n Fri, 29 Oct 2010 20:34:08 +0000 (UTC)\nX-Greylist: delayed 00:06:39.061466 by SQLgrey-1.7.6\nReceived: from 29.mail-out.ovh.net (29.mail-out.ovh.net [87.98.216.213])\n\tby mail.postgresql.org (Postfix) with SMTP id 0E308133729E\n\tfor <[email protected]>; Fri, 29 Oct 2010 17:34:07 -0300 (ADT)\nReceived: (qmail 22970 invoked by uid 503); 29 Oct 2010 20:40:06 -0000\nReceived: from b9.ovh.net (HELO mail641.ha.ovh.net) (213.186.33.59)\n by 29.mail-out.ovh.net with SMTP; 29 Oct 2010 20:40:06 -0000\nReceived: from b0.ovh.net (HELO queueout) (213.186.33.50)\n\tby b0.ovh.net with SMTP; 29 Oct 2010 22:27:25 +0200\nReceived: from par69-8-88-161-102-87.fbx.proxad.net (HELO apollo13) (lists%[email protected])\n by ns0.ovh.net with SMTP; 29 Oct 2010 22:27:23 +0200\nContent-Type: text/plain; charset=utf-8; format=flowed; delsp=yes\nTo: [email protected], \"Steve Wong\" <[email protected]>\nSubject: Re: MVCC and Implications for (Near) Real-Time Application\nReferences: <[email protected]>\nDate: Fri, 29 Oct 2010 22:27:23 +0200\nMIME-Version: 1.0\nContent-Transfer-Encoding: 8bit\nFrom: \"Pierre C\" <[email protected]>\nMessage-ID: <op.vlctrxpjeorkce@apollo13>\nIn-Reply-To: <[email protected]>\nUser-Agent: Opera Mail/10.62 (Linux)\nX-Ovh-Tracer-Id: 3326752751527134769\nX-Ovh-Remote: 88.161.102.87 
(par69-8-88-161-102-87.fbx.proxad.net)\nX-Ovh-Local: 213.186.33.20 (ns0.ovh.net)\nX-Spam-Check: DONE|U 0.5/N\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits=-1.9 tagged_above=-10 required=5 tests=BAYES_00=-1.9,\n RCVD_IN_DNSWL_NONE=-0.0001\nX-Spam-Level: \nX-Archive-Number: 201010/589\nX-Sequence-Number: 41007\n\n\n> My questions are: (1) Does the MVCC architecture introduce significant \n> delays between insert by a thread and visibility by other threads\n\nAs said by others, once commited it is immediately visible to all\n\n> (2) Are there any available benchmarks that can measure this delay?\n\nSince you will not be batching INSERTs, you will use 1 INSERT per \ntransaction.\nIf you use Autocommit mode, that's it.\nIf you don't, you will get a few extra network roundtrips after the \nINSERT, to send the COMMIT.\n\nOne INSERT is usually extremely fast unless you're short on RAM and the \nindexes that need updating need some disk seeking.\n\nAnyway, doing lots of INSERTs each in its own transaction is usually very \nlow-throughput, because at each COMMIT, postgres must always be sure that \nall the data is actually written to the harddisks. So, depending on the \nspeed of your harddisks, each COMMIT can take up to 10-20 milliseconds.\n\nOn a 7200rpm harddisk, it is absolutely impossible to do more than 7200 \ncommits/minute if you want to be sure each time that the data really is \nwritten on the harddisk, unless :\n\n- you use several threads (one disk write can group several commits from \ndifferent connections, see the config file docs)\n- you turn of synchronous_commit ; in this case commit is instantaneous, \nbut if your server loses power or crashes, the last few seconds of data \nmay be lost (database integrity is still guaranteed though)\n- you use a battery backup cache on your RAID controller, in this case \n\"written to the harddisks\" is replaced by \"written to batteyr backed RAM\" \nwhich is a lot faster\n\nIf you dont use battery backed cache, place the xlog on a different RAID1 \narray than the tables/indexes, this allows committing of xlog records \n(which is the time critical part) to proceed smoothly and not be disturbed \nby other IO on the indexes/tables. Also consider tuning your bgwriter and \ncheckpoints, after experimentation under realistic load conditions.\n\nSo, when you benchmark your application, if you get disappointing results, \nthink about this...\n",
"msg_date": "Fri, 29 Oct 2010 12:09:02 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Tom Lane wrote:\n> Uh, no, it is not. The difference is that we can update a byte in a\n> shared buffer, and know that it *isn't* getting written out before we\n> \nWell, I don't know where yu got the idea I was refering to that sort of \nthing - its\nthe same as writing to a buffer before copying to the mmap'd area.\n> It's true that we don't know whether write() causes an immediate or\n> delayed disk write, but we generally don't care that much. What we do\n> \nWhich is what I was refering to.\n> care about is being able to ensure that a WAL write happens before the\n> data write, and with mmap we don't have control over that.\n>\n> \nI think you have just the same control either way, because you can only \nforce ordering\nwith an appropriate explicit sync, and in the absence of such a sync all \nbets are off for\nwhether/when each disk page is written out, and if you can't ensure that \nthe controller\nand disk are write through you'd better do a hardware cache flush.too, \nright?\n\nA shame that so many systems have relatively poor handling of that \nhardware flush.\n\n",
"msg_date": "Fri, 29 Oct 2010 21:52:30 +0100",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "On Fri, 29 Oct 2010, James Mansion wrote:\n\n> Tom Lane wrote:\n>> Uh, no, it is not. The difference is that we can update a byte in a\n>> shared buffer, and know that it *isn't* getting written out before we\n>> \n> Well, I don't know where yu got the idea I was refering to that sort of thing \n> - its\n> the same as writing to a buffer before copying to the mmap'd area.\n>> It's true that we don't know whether write() causes an immediate or\n>> delayed disk write, but we generally don't care that much. What we do\n>> \n> Which is what I was refering to.\n>> care about is being able to ensure that a WAL write happens before the\n>> data write, and with mmap we don't have control over that.\n>>\n>> \n> I think you have just the same control either way, because you can only force \n> ordering\n> with an appropriate explicit sync, and in the absence of such a sync all bets \n> are off for\n> whether/when each disk page is written out, and if you can't ensure that the \n> controller\n> and disk are write through you'd better do a hardware cache flush.too, right?\n>\n> A shame that so many systems have relatively poor handling of that hardware \n> flush.\n\nthe issue is that when you update a mmaped chunk of data, it could be \nwritten out immediatly without you doing _anything_ (and thanks to \nmultiple cores/threads, it could get written out while you are still in \nthe middle of updating it). When you update an internal buffer and then \nwrite that, you know that nothing will hit the disk before you issue the \nwrite command.\n\nDavid Lang\n",
"msg_date": "Fri, 29 Oct 2010 14:14:26 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Greg Smith wrote:\n> Kevin Grittner wrote:\n> > I assume that we send a full\n> > 8K to the OS cache, and the file system writes disk sectors\n> > according to its own algorithm. With either platters or BBU cache,\n> > the data is persisted on fsync; why do you see a risk with one but\n> > not the other\n> \n> I'd like a 10 minute argument please. I started to write something to \n> refute this, only to clarify in my head the sequence of events that \n> leads to the most questionable result, where I feel a bit less certain \n> than I did before of the safety here. Here is the worst case I believe \n> you're describing:\n> \n> 1) Transaction is written to the WAL and sync'd; client receives \n> COMMIT. Since full_page_writes is off, the data in the WAL consists \n> only of the delta of what changed on the page.\n> 2) 8K database page is written to OS cache\n> 3) PG calls fsync to force the database block out\n> 4) OS writes first 4K block of the change to the BBU write cache. Worst \n> case, this fills the cache, and it takes a moment for some random writes \n> to process before it has space to buffer again (makes this more likely \n> to happen, but it's not required to see the failure case here)\n> 5) Sudden power interruption, second half of the page write is lost\n> 6) Server restarts\n> 7) That 4K write is now replayed from the battery's cache\n> \n> At this point, you now have a torn 8K page, with 1/2 old and 1/2 new \n\nBased on this report, I think we need to update our documentation and\nbackpatch removal of text that says that BBU users can safely turn off\nfull-page writes. Patch attached.\n\nI think we have fallen into a trap I remember from the late 1990's where\nI was assuming that an 8k-block based file system would write to the\ndisk atomically in 8k segments, which of course it cannot. My bet is\nthat even if you write to the kernel in 8k pages, and have an 8k file\nsystem, the disk is still accessed via 512-byte blocks, even with a BBU.\n \n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +",
"msg_date": "Tue, 30 Nov 2010 22:07:18 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
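The behavior Bruce describes above hinges on a handful of server settings that can be checked from any session. A minimal check, with the usual default values noted in comments (not guaranteed for every build):

SHOW full_page_writes;   -- normally 'on'; per this thread, leave it on unless the filesystem guarantees atomic page writes (e.g. ZFS)
SHOW block_size;         -- database page size, 8192 by default
SHOW wal_block_size;     -- WAL page size, 8192 by default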
{
"msg_contents": "Kevin Grittner wrote:\n> Greg Smith <[email protected]> wrote:\n> \n> > I think Kevin's point here may be that if your fsync isn't\n> > reliable, you're always in trouble. But if your fsync is good,\n> > even torn pages should be repairable by the deltas written to the\n> > WAL\n> \n> I was actually just arguing that a BBU doesn't eliminate a risk\n> here; if there is a risk with production-quality disk drives, there\n> is a risk with a controller with a BBU cache. The BBU cache just\n> tends to reduce the window of time in which corruption can occur. I\n> wasn't too sure of *why* there was a risk, but Tom's post cleared\n> that up.\n> \n> I wonder why we need to expose this GUC at all -- perhaps it should\n> be off when fsync is off and on otherwise? Leaving it on without\n> fsync is just harming performance for not much benefit, and turning\n> it off with fsync seems to be saying that you are willing to\n> tolerate a known risk of database corruption, just not quite so much\n> as you have without fsync. In reality it seems most likely to be a\n> mistake, either way.\n\nAccording to our docs, and my submitted patch, if you are using ZFS then\nyou can turn off full-page writes, so full-page writes are still useful.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Tue, 30 Nov 2010 22:13:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Greg Smith wrote:\n> Tom Lane wrote:\n> > You've got entirely too simplistic a view of what the \"delta\" might be,\n> > I fear. In particular there are various sorts of changes that involve\n> > inserting the data carried in the WAL record and shifting pre-existing\n> > data around to make room, or removing an item and moving remaining data\n> > around. If you try to replay that type of action against a torn page,\n> > you'll get corrupted results.\n> > \n> \n> I wasn't sure exactly how those were encoded, thanks for the \n> clarification. Given that, it seems to me there are only two situations \n> where full_page_writes is safe to turn off:\n> \n> 1) The operating system block size is exactly the same database block \n> size, and all writes are guaranteed to be atomic to that block size. \n\nIs that true? I have no idea. I thought everything was done at the\n512-byte block level.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Tue, 30 Nov 2010 22:54:59 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "\n> Is that true? I have no idea. I thought everything was done at the\n> 512-byte block level.\n\nNewer disks (2TB and up) can have 4k sectors, but this still means a page \nspans several sectors.\n",
"msg_date": "Wed, 01 Dec 2010 09:12:08 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Pierre C wrote:\n> \n> > Is that true? I have no idea. I thought everything was done at the\n> > 512-byte block level.\n> \n> Newer disks (2TB and up) can have 4k sectors, but this still means a page \n> spans several sectors.\n\nYes, I had heard about that.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Wed, 1 Dec 2010 08:48:01 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Greg Smith wrote:\n> > Kevin Grittner wrote:\n> > > I assume that we send a full\n> > > 8K to the OS cache, and the file system writes disk sectors\n> > > according to its own algorithm. With either platters or BBU cache,\n> > > the data is persisted on fsync; why do you see a risk with one but\n> > > not the other\n> > \n> > I'd like a 10 minute argument please. I started to write something to \n> > refute this, only to clarify in my head the sequence of events that \n> > leads to the most questionable result, where I feel a bit less certain \n> > than I did before of the safety here. Here is the worst case I believe \n> > you're describing:\n> > \n> > 1) Transaction is written to the WAL and sync'd; client receives \n> > COMMIT. Since full_page_writes is off, the data in the WAL consists \n> > only of the delta of what changed on the page.\n> > 2) 8K database page is written to OS cache\n> > 3) PG calls fsync to force the database block out\n> > 4) OS writes first 4K block of the change to the BBU write cache. Worst \n> > case, this fills the cache, and it takes a moment for some random writes \n> > to process before it has space to buffer again (makes this more likely \n> > to happen, but it's not required to see the failure case here)\n> > 5) Sudden power interruption, second half of the page write is lost\n> > 6) Server restarts\n> > 7) That 4K write is now replayed from the battery's cache\n> > \n> > At this point, you now have a torn 8K page, with 1/2 old and 1/2 new \n> \n> Based on this report, I think we need to update our documentation and\n> backpatch removal of text that says that BBU users can safely turn off\n> full-page writes. Patch attached.\n> \n> I think we have fallen into a trap I remember from the late 1990's where\n> I was assuming that an 8k-block based file system would write to the\n> disk atomically in 8k segments, which of course it cannot. My bet is\n> that even if you write to the kernel in 8k pages, and have an 8k file\n> system, the disk is still accessed via 512-byte blocks, even with a BBU.\n\nDoc patch applied.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +",
"msg_date": "Wed, 22 Dec 2010 21:12:23 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BBU Cache vs. spindles"
}
] |
[
{
"msg_contents": "I know that there haven been many discussions on the slowness of count(*) even \nwhen an index is involved because the visibility of the rows has to be \nchecked. In the past I have seen many suggestions about using triggers and \ntables to keep track of counts and while this works fine in a situation where \nyou know what the report is going to be ahead of time, this is simply not an \noption when an unknown WHERE clause is to be used (dynamically generated).\nI ran into a fine example of this when I was searching this mailing list, \n\"Searching in 856,646 pages took 13.48202 seconds. Site search powered by \nPostgreSQL 8.3.\" Obviously at some point count(*) came into play here because \nthe site made a list of pages (1 2 3 4 5 6 > next). I very commonly make a \nlist of pages from search results, and the biggest time killer here is the \ncount(*) portion, even worse yet, I sometimes have to hit the database with \ntwo SELECT statements, one with OFFSET and LIMIT to get the page of results I \nneed and another to get the amount of total rows so I can estimate how many \npages of results are available. The point I am driving at here is that since \nbuilding a list of pages of results is such a common thing to do, there need \nto be some specific high speed ways to do this in one query. Maybe an \nestimate(*) that works like count but gives an answer from the index without \nchecking visibility? I am sure that this would be good enough to make a page \nlist, it is really no big deal if it errors on the positive side, maybe the \nlist of pages has an extra page off the end. I can live with that. What I \ncan't live with is taking 13 seconds to get a page of results from 850,000 \nrows in a table.\n-Neil-\n",
"msg_date": "Sat, 9 Oct 2010 16:26:18 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow count(*) again..."
},
{
"msg_contents": "On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel <[email protected]> wrote:\n> I know that there haven been many discussions on the slowness of count(*) even\n> when an index is involved because the visibility of the rows has to be\n> checked. In the past I have seen many suggestions about using triggers and\n> tables to keep track of counts and while this works fine in a situation where\n> you know what the report is going to be ahead of time, this is simply not an\n> option when an unknown WHERE clause is to be used (dynamically generated).\n> I ran into a fine example of this when I was searching this mailing list,\n> \"Searching in 856,646 pages took 13.48202 seconds. Site search powered by\n> PostgreSQL 8.3.\" Obviously at some point count(*) came into play here because\n> the site made a list of pages (1 2 3 4 5 6 > next). I very commonly make a\n> list of pages from search results, and the biggest time killer here is the\n> count(*) portion, even worse yet, I sometimes have to hit the database with\n> two SELECT statements, one with OFFSET and LIMIT to get the page of results I\n> need and another to get the amount of total rows so I can estimate how many\n> pages of results are available. The point I am driving at here is that since\n> building a list of pages of results is such a common thing to do, there need\n> to be some specific high speed ways to do this in one query. Maybe an\n> estimate(*) that works like count but gives an answer from the index without\n> checking visibility? I am sure that this would be good enough to make a page\n> list, it is really no big deal if it errors on the positive side, maybe the\n> list of pages has an extra page off the end. I can live with that. What I\n> can't live with is taking 13 seconds to get a page of results from 850,000\n> rows in a table.\n\n99% of the time in the situations you don't need an exact measure, and\nassuming analyze has run recently, select rel_tuples from pg_class for\na given table is more than close enough. I'm sure wrapping that in a\nsimple estimated_rows() function would be easy enough to do.\n",
"msg_date": "Sat, 9 Oct 2010 19:47:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
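A minimal sketch of the estimated_rows() wrapper suggested above; note that the actual catalog column is pg_class.reltuples (not rel_tuples), and that it is only as fresh as the last ANALYZE or VACUUM, so it ignores any WHERE clause:

CREATE OR REPLACE FUNCTION estimated_rows(regclass) RETURNS bigint AS $$
    SELECT reltuples::bigint FROM pg_class WHERE oid = $1;
$$ LANGUAGE sql STABLE;

-- Usage ('my_table' is an illustrative name):
SELECT estimated_rows('my_table');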
{
"msg_contents": "Neil Whelchel wrote:\n> I know that there haven been many discussions on the slowness of count(*) even \n> when an index is involved because the visibility of the rows has to be \n> checked. In the past I have seen many suggestions about using triggers and \n> tables to keep track of counts and while this works fine in a situation where \n> you know what the report is going to be ahead of time, this is simply not an \n> option when an unknown WHERE clause is to be used (dynamically generated).\n> I ran into a fine example of this when I was searching this mailing list, \n> \"Searching in 856,646 pages took 13.48202 seconds. Site search powered by \n> PostgreSQL 8.3.\" Obviously at some point count(*) came into play here because \n> the site made a list of pages (1 2 3 4 5 6 > next). I very commonly make a \n> list of pages from search results, and the biggest time killer here is the \n> count(*) portion, even worse yet, I sometimes have to hit the database with \n> two SELECT statements, one with OFFSET and LIMIT to get the page of results I \n> need and another to get the amount of total rows so I can estimate how many \n> pages of results are available. The point I am driving at here is that since \n> building a list of pages of results is such a common thing to do, there need \n> to be some specific high speed ways to do this in one query. Maybe an \n> estimate(*) that works like count but gives an answer from the index without \n> checking visibility? I am sure that this would be good enough to make a page \n> list, it is really no big deal if it errors on the positive side, maybe the \n> list of pages has an extra page off the end. I can live with that. What I \n> can't live with is taking 13 seconds to get a page of results from 850,000 \n> rows in a table.\n> -Neil-\n>\n> \nUnfortunately, the problem is in the rather primitive way PostgreSQL \ndoes I/O. It didn't change in 9.0 so there is nothing you could gain by \nupgrading. If you execute strace -o /tmp/pg.out -e read <PID of the \nsequential scan process> and inspect the file /tmp/pg.out when the query \nfinishes, you will notice a gazillion of read requests, all of them 8192 \nbytes in size. That means that PostgreSQL is reading the table block by \nblock, without any merging of the requests. You can alleviate the pain \nby using the OS tricks, like specifying the deadline I/O scheduler in \nthe grub.conf and set prefetch on the FS block devices by using \nblockdev, but there is nothing special that can be done, short of \nrewriting the way PostgreSQL does I/O. There were rumors about the \nversion 9.0 and asynchronous I/O, but that didn't materialize. That is \nreally strange to me, because PostgreSQL tables are files or groups of \nfiles, if the table size exceeds 1GB. It wouldn't be very hard to try \nreading 1MB at a time and that would speed up the full table scan \nsignificantly.\nProblem with single block I/O is that there is a context switch for each \nrequest, the I/O scheduler has to work hard to merge requests \nappropriately and there is really no need for that, tables are files \nnavigating through files is not a problem, even with much larger blocks.\nIn another database, whose name I will not mention, there is a parameter \ndb_file_multiblock_read_count which specifies how many blocks will be \nread by a single read when doing a full table scan. PostgreSQL is in \ndire need of something similar and it wouldn't even be that hard to \nimplement.\n\n\n-- \nMladen Gogala \nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n",
"msg_date": "Sat, 09 Oct 2010 21:54:15 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 10/09/2010 06:54 PM, Mladen Gogala wrote:\n> In another database, whose name I will not mention, there is a parameter\n> db_file_multiblock_read_count which specifies how many blocks will be\n> read by a single read when doing a full table scan. PostgreSQL is in\n> dire need of something similar and it wouldn't even be that hard to\n> implement.\n\nYou're correct in that it isn't particularly difficult to implement for\nsequential scans. But I have done some testing with aggressive read\nahead, and although it is clearly a big win with a single client, the\nbenefit was less clear as concurrency was increased.\n\nJoe\n\n-- \nJoe Conway\ncredativ LLC: http://www.credativ.us\nLinux, PostgreSQL, and general Open Source\nTraining, Service, Consulting, & 24x7 Support",
"msg_date": "Sat, 09 Oct 2010 19:10:38 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Joe Conway wrote:\n> On 10/09/2010 06:54 PM, Mladen Gogala wrote:\n> \n>> In another database, whose name I will not mention, there is a parameter\n>> db_file_multiblock_read_count which specifies how many blocks will be\n>> read by a single read when doing a full table scan. PostgreSQL is in\n>> dire need of something similar and it wouldn't even be that hard to\n>> implement.\n>> \n>\n> You're correct in that it isn't particularly difficult to implement for\n> sequential scans. But I have done some testing with aggressive read\n> ahead, and although it is clearly a big win with a single client, the\n> benefit was less clear as concurrency was increased.\n>\n> Joe\n>\n> \nWell, in my opinion that should be left to the DBA, the same as in the \n\"other database\". The mythical DBA, the creature that mighty Larry \nEllison himself is on a crusade against, usually can figure out the \nright value for the database he or she's is in charge of. I humbly \nconfess to being an Oracle DBA for more than 2 decades and now branching \ninto Postgres because my employer is less than enthusiastic about \nOracle, with the special accent on their pricing.\n\nModern databases, Postgres included, are quite complex and companies \nneed DBA personnel to help fine tune the applications. I know that good \nDBA personnel is quite expensive but without a competent DBA who knows \nthe database software well enough, companies can and will suffer from \nblunders with performance, downtime, lost data and alike. In the world \nwhere almost every application is written for the web, performance, \nuptime and user experience are of the critical importance. The \narchitects of Postgres database would be well advised to operate under \nthe assumption that every production database has a competent DBA \nkeeping an eye on the database.\n\nEvery application has its own mix of sequential and index scans, you \ncannot possibly test all possible applications. Aggressive read-ahead \nor \"multi-block reads\" can be a performance problem and it will \ncomplicate the optimizer, because the optimizer now has a new variable \nto account for: the block size, potentially making seq_page_cost even \ncheaper and random_page_cost even more expensive, depending on the \nblocking. However, slow sequential scan is, in my humble opinion, the \nsingle biggest performance problem of the PostgreSQL databases and \nshould be improved, the sooner, the better. You should, however, count \non the DBA personnel to help with the tuning.\nWe're the Tinkerbells of the database world. I am 6'4\", 240 LBS, no wings.\n\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n",
"msg_date": "Sat, 09 Oct 2010 22:44:14 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:\n> On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel <[email protected]> \nwrote:\n> > I know that there haven been many discussions on the slowness of count(*)\n> > even when an index is involved because the visibility of the rows has to\n> > be checked. In the past I have seen many suggestions about using\n> > triggers and tables to keep track of counts and while this works fine in\n> > a situation where you know what the report is going to be ahead of time,\n> > this is simply not an option when an unknown WHERE clause is to be used\n> > (dynamically generated). I ran into a fine example of this when I was\n> > searching this mailing list, \"Searching in 856,646 pages took 13.48202\n> > seconds. Site search powered by PostgreSQL 8.3.\" Obviously at some point\n> > count(*) came into play here because the site made a list of pages (1 2\n> > 3 4 5 6 > next). I very commonly make a list of pages from search\n> > results, and the biggest time killer here is the count(*) portion, even\n> > worse yet, I sometimes have to hit the database with two SELECT\n> > statements, one with OFFSET and LIMIT to get the page of results I need\n> > and another to get the amount of total rows so I can estimate how many\n> > pages of results are available. The point I am driving at here is that\n> > since building a list of pages of results is such a common thing to do,\n> > there need to be some specific high speed ways to do this in one query.\n> > Maybe an estimate(*) that works like count but gives an answer from the\n> > index without checking visibility? I am sure that this would be good\n> > enough to make a page list, it is really no big deal if it errors on the\n> > positive side, maybe the list of pages has an extra page off the end. I\n> > can live with that. What I can't live with is taking 13 seconds to get a\n> > page of results from 850,000 rows in a table.\n> \n> 99% of the time in the situations you don't need an exact measure, and\n> assuming analyze has run recently, select rel_tuples from pg_class for\n> a given table is more than close enough. I'm sure wrapping that in a\n> simple estimated_rows() function would be easy enough to do.\n\nThis is a very good approach and it works very well when you are counting the \nentire table, but when you have no control over the WHERE clause, it doesn't \nhelp. IE: someone puts in a word to look for in a web form.\n\nFrom my perspective, this issue is the biggest problem there is when using \nPostgres to create web pages, and it is so commonly used, I think that there \nshould be a specific way to deal with it so that you don't have to run the \nsame WHERE clause twice. \nIE: SELECT count(*) FROM <table> WHERE <clause>; to get the total amount of \nitems to make page navigation links, then:\nSELECT <columns> FROM table WHERE <clause> LIMIT <items_per_page> OFFSET \n<(page_no-1)*items_per_page>; to get the actual page contents.\n \nIt's bad enough that count(*) is slow, then you have to do it all over again \nto get the results you need! I have not dug into this much yet, but would it \nbe possible to return the amount of rows that a WHERE clause would actually \nreturn if the LIMIT and OFFSET were not applied. IE: When a normal query is \nexecuted, the server returns the number of rows aside from the actual row \ndata. Would it be a big deal to modify this to allow it to return the amount \nof rows before the LIMIT and OFFSET is applied as well? 
This would sure cut \ndown on time it takes to do the same WHERE clause twice... I have considered \nusing a cursor to do this, however this requires a transfer of all of the rows \nto the client to get a total count, then setting the cursor to get the rows \nthat you are interested in. Or is there a way around this that I am not aware \nof?\n-Neil-\n\n",
"msg_date": "Sat, 9 Oct 2010 20:02:12 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
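One workaround for the cursor idea Neil raises above is to keep the cursor server-side and use MOVE instead of fetching every row to the client; the command tag of MOVE carries the row count. This is only a sketch (table, column, and predicate are placeholders), and the server still has to run the query once in full:

BEGIN;
DECLARE results SCROLL CURSOR FOR
    SELECT * FROM items WHERE title ILIKE '%widget%' ORDER BY id;
MOVE FORWARD ALL IN results;     -- command tag "MOVE n" gives the total match count
MOVE ABSOLUTE 0 IN results;      -- rewind; for page p use MOVE ABSOLUTE (p-1)*25
FETCH FORWARD 25 FROM results;   -- one page of rows, without re-running the WHERE clause
CLOSE results;
COMMIT;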
{
"msg_contents": "On Sat, Oct 9, 2010 at 7:44 PM, Mladen Gogala <[email protected]>wrote:\n\n> The architects of Postgres database would be well advised to operate under\n> the assumption that every production database has a competent DBA keeping an\n> eye on the database.\n>\n\nI'd actually go so far as to say that they have already made this\nassumption. The out of the box config needs modification for all but the\nmost low-volume applications and postgres really benefits from having some\nattention paid to performance. Not only does tuning the db provide enormous\ngains, but it is often possible to dramatically improve query responsiveness\nby simply restructuring a query (assuming an aggregating query over a fairly\nlarge table with a few joins thrown in). My team does not have a competent\nDBA (though I've got 15+ years of experience developing on top of various\ndbs and certainly don't make overly naive assumptions about how things work)\nand the gains that we made, when I finally just sat down and read everything\nI could get my hands on about postgres and started reading this list, were\nreally quite impressive. I intend to take some of the courses offered by\nsome of the companies that are active on this list when my schedule allows\nin order to expand my knowledge even farther, as a DBA is a luxury we cannot\nreally afford at the moment.\n\nOn Sat, Oct 9, 2010 at 7:44 PM, Mladen Gogala <[email protected]> wrote:\n The architects of Postgres database would be well advised to operate under the assumption that every production database has a competent DBA keeping an eye on the database.\nI'd actually go so far as to say that they have already made this assumption. The out of the box config needs modification for all but the most low-volume applications and postgres really benefits from having some attention paid to performance. Not only does tuning the db provide enormous gains, but it is often possible to dramatically improve query responsiveness by simply restructuring a query (assuming an aggregating query over a fairly large table with a few joins thrown in). My team does not have a competent DBA (though I've got 15+ years of experience developing on top of various dbs and certainly don't make overly naive assumptions about how things work) and the gains that we made, when I finally just sat down and read everything I could get my hands on about postgres and started reading this list, were really quite impressive. I intend to take some of the courses offered by some of the companies that are active on this list when my schedule allows in order to expand my knowledge even farther, as a DBA is a luxury we cannot really afford at the moment.",
"msg_date": "Sat, 9 Oct 2010 20:07:00 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 10/10/2010 11:02 AM, Neil Whelchel wrote:\n> On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:\n>> On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel<[email protected]>\n> wrote:\n>>> I know that there haven been many discussions on the slowness of count(*)\n>>> even when an index is involved because the visibility of the rows has to\n>>> be checked. In the past I have seen many suggestions about using\n>>> triggers and tables to keep track of counts and while this works fine in\n>>> a situation where you know what the report is going to be ahead of time,\n>>> this is simply not an option when an unknown WHERE clause is to be used\n>>> (dynamically generated). I ran into a fine example of this when I was\n>>> searching this mailing list, \"Searching in 856,646 pages took 13.48202\n>>> seconds. Site search powered by PostgreSQL 8.3.\" Obviously at some point\n>>> count(*) came into play here because the site made a list of pages (1 2\n>>> 3 4 5 6> next). I very commonly make a list of pages from search\n>>> results, and the biggest time killer here is the count(*) portion, even\n>>> worse yet, I sometimes have to hit the database with two SELECT\n>>> statements, one with OFFSET and LIMIT to get the page of results I need\n>>> and another to get the amount of total rows so I can estimate how many\n>>> pages of results are available. The point I am driving at here is that\n>>> since building a list of pages of results is such a common thing to do,\n>>> there need to be some specific high speed ways to do this in one query.\n>>> Maybe an estimate(*) that works like count but gives an answer from the\n>>> index without checking visibility? I am sure that this would be good\n>>> enough to make a page list, it is really no big deal if it errors on the\n>>> positive side, maybe the list of pages has an extra page off the end. I\n>>> can live with that. What I can't live with is taking 13 seconds to get a\n>>> page of results from 850,000 rows in a table.\n>>\n>> 99% of the time in the situations you don't need an exact measure, and\n>> assuming analyze has run recently, select rel_tuples from pg_class for\n>> a given table is more than close enough. I'm sure wrapping that in a\n>> simple estimated_rows() function would be easy enough to do.\n>\n> This is a very good approach and it works very well when you are counting the\n> entire table, but when you have no control over the WHERE clause, it doesn't\n> help. IE: someone puts in a word to look for in a web form.\n\nFor that sort of thing, there isn't much that'll help you except \nvisibility-aware indexes, covering indexes, etc if/when they're \nimplemented. Even then, they'd only help when it was a simple \nindex-driven query with no need to hit the table to recheck any test \nconditions, etc.\n\nI guess there could be *some* way to expose the query planner's cost \nestimates in a manner useful for result count estimation ... but given \nhow coarse its stats are and how wildly out the estimates can be, I kind \nof doubt it. It's really intended for query planning decisions and more \ninterested in orders of magnitude, \"0, 1, or more than that\" measures, \netc, and seems to consider 30% here or there to be pretty insignificant \nmost of the time.\n\n> It's bad enough that count(*) is slow, then you have to do it all over again\n> to get the results you need! 
I have not dug into this much yet, but would it\n> be possible to return the amount of rows that a WHERE clause would actually\n> return if the LIMIT and OFFSET were not applied. IE: When a normal query is\n> executed, the server returns the number of rows aside from the actual row\n> data. Would it be a big deal to modify this to allow it to return the amount\n> of rows before the LIMIT and OFFSET is applied as well?\n\nIt'd force the server to fully execute the query. Then again, it sounds \nlike you're doing that anyway.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Sun, 10 Oct 2010 14:56:15 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
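A rough sketch of the planner-estimate idea Craig mentions: run EXPLAIN on the exact query, WHERE clause included, and scrape the planner's row estimate. The function name and regex are illustrative, and the result is only as good as the statistics the planner has:

CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS bigint AS $$
DECLARE
    rec record;
    est bigint;
BEGIN
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        est := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN est IS NOT NULL;
    END LOOP;
    RETURN est;
END;
$$ LANGUAGE plpgsql;   -- requires the plpgsql language (installed by default in 9.0+)

-- Usage (placeholder table and predicate):
SELECT count_estimate('SELECT 1 FROM items WHERE title ILIKE ''%widget%''');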
{
"msg_contents": "On 10/10/2010 9:54 AM, Mladen Gogala wrote:\n\n> Unfortunately, the problem is in the rather primitive way PostgreSQL\n> does I/O. It didn't change in 9.0 so there is nothing you could gain by\n> upgrading. If you execute strace -o /tmp/pg.out -e read <PID of the\n> sequential scan process> and inspect the file /tmp/pg.out when the query\n> finishes, you will notice a gazillion of read requests, all of them 8192\n> bytes in size. That means that PostgreSQL is reading the table block by\n> block, without any merging of the requests.\n\nI'd be really interested in any measurements you've done to determine \nthe cost of this over doing reads in larger chunks. If they're properly \ndetailed and thought out, the -hackers list is likely to be interested \nas well.\n\nThe Linux kernel, at least, does request merging (and splitting, and \nmerging, and more splitting) along the request path, and I'd personally \nexpect that most of the cost of 8k requests would be in the increased \nnumber of system calls, buffer copies, etc required. Measurements \ndemonstrating or contradicting this would be good to see.\n\nIt's worth being aware that there are memory costs to doing larger \nreads, especially when you have many backends each of which want to \nallocate a larger buffer for reading. If you can use a chunk of \nshared_buffers as the direct destination for the read that's OK, but \notherwise you're up for (1mb-8kb)*num_backends extra memory use on I/O \nbuffers that could otherwise be used as shared_buffers or OS cache.\n\nAsync I/O, too, has costs.\n\n > PostgreSQL is in\n> dire need of something similar and it wouldn't even be that hard to\n> implement.\n\nI'd really like to see both those assertions backed with data or patches ;-)\n\nPersonally, I know just enough about how PG's I/O path works to suspect \nthat \"not that hard to implement\" is probably a little ... \nover-optimistic. Sure, it's not that hard to implement in a new program \nwith no wired-in architectural and design choices; that doesn't mean \nit's easy to retrofit onto existing code, especially a bunch of \nco-operating processes with their own buffer management.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Sun, 10 Oct 2010 15:14:18 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Saturday 09 October 2010 23:56:15 Craig Ringer wrote:\n> On 10/10/2010 11:02 AM, Neil Whelchel wrote:\n> > On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:\n> >> On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel<[email protected]>\n> > \n> > wrote:\n> >>> I know that there haven been many discussions on the slowness of\n> >>> count(*) even when an index is involved because the visibility of the\n> >>> rows has to be checked. In the past I have seen many suggestions about\n> >>> using triggers and tables to keep track of counts and while this works\n> >>> fine in a situation where you know what the report is going to be\n> >>> ahead of time, this is simply not an option when an unknown WHERE\n> >>> clause is to be used (dynamically generated). I ran into a fine\n> >>> example of this when I was searching this mailing list, \"Searching in\n> >>> 856,646 pages took 13.48202 seconds. Site search powered by PostgreSQL\n> >>> 8.3.\" Obviously at some point count(*) came into play here because the\n> >>> site made a list of pages (1 2 3 4 5 6> next). I very commonly make a\n> >>> list of pages from search results, and the biggest time killer here is\n> >>> the count(*) portion, even worse yet, I sometimes have to hit the\n> >>> database with two SELECT statements, one with OFFSET and LIMIT to get\n> >>> the page of results I need and another to get the amount of total rows\n> >>> so I can estimate how many pages of results are available. The point I\n> >>> am driving at here is that since building a list of pages of results\n> >>> is such a common thing to do, there need to be some specific high\n> >>> speed ways to do this in one query. Maybe an estimate(*) that works\n> >>> like count but gives an answer from the index without checking\n> >>> visibility? I am sure that this would be good enough to make a page\n> >>> list, it is really no big deal if it errors on the positive side,\n> >>> maybe the list of pages has an extra page off the end. I can live with\n> >>> that. What I can't live with is taking 13 seconds to get a page of\n> >>> results from 850,000 rows in a table.\n> >> \n> >> 99% of the time in the situations you don't need an exact measure, and\n> >> assuming analyze has run recently, select rel_tuples from pg_class for\n> >> a given table is more than close enough. I'm sure wrapping that in a\n> >> simple estimated_rows() function would be easy enough to do.\n> > \n> > This is a very good approach and it works very well when you are counting\n> > the entire table, but when you have no control over the WHERE clause, it\n> > doesn't help. IE: someone puts in a word to look for in a web form.\n> \n> For that sort of thing, there isn't much that'll help you except\n> visibility-aware indexes, covering indexes, etc if/when they're\n> implemented. Even then, they'd only help when it was a simple\n> index-driven query with no need to hit the table to recheck any test\n> conditions, etc.\n\nGood point, maybe this is turning more into a discussion of how to generate a \nlist of pages of results and one page of results with one query so we don't \nhave to do the same painfully slow query twice to do a very common task.\n\nOn the other hand, I copied a table out of one of my production servers that \nhas about 60,000 rows with 6 columns (numeric, numeric, bool, bool, timestamp, \ntext). The first numeric column has numbers evenly spread between 0 and 100 \nand it is indexed. I put the table in a pair of database servers both running \non the same physical hardware. 
One server is Postgres, the other is a popular \nserver (I am not mentioning names here). on Postgres: SELECT count(*) FROM \ntable where column>50; takes about 8 seconds to run. The other database server \ntook less than one second (about 25 ms) as it is using the index (I assume) to \ncome up with the results. It is true that this is not a fair test because both \nservers were tested with their default settings, and the defaults for Postgres \nare much more conservative, however, I don't think that any amount of settings \ntweaking will bring them even in the same ball park. There has been discussion \nabout the other server returning an incorrect count because all of the indexed \nrows may not be live at the time. This is not a problem for the intended use, \nthat is why I suggested another function like estimate(*). It's name suggests \nthat the result will be close, not 100% correct, which is plenty good enough \nfor generating a list of results pages in most cases. I am faced with a very \nserious problem here. If the query to make a list of pages takes say 6 seconds \nand it takes another 6 seconds to generate a page of results, the customer is \nwaiting 12 seconds. This is not going to work. If count made a quick estimate, \nsay less than a second, and it took 6 seconds to come up with the actual \nresults, I could live with that. Or if coming up with the window of results \nvia (OFFSET and LIMIT) and returned the total number of rows that would have \nmatched the query, then I would still have everything I need to render the \npage in a reasonable time. I really think that this needs to be addressed \nsomewhere. It's not like I am the only one that does this. You see it nearly \neverywhere a long list of results is (expected to be) returned in a web site. \nAmong the people I work with, this seems to be the most mentioned reason that \nthey claim that they don't use Postgres for their projects.\n\nIt would be nice to see how the server comes up with the search results and \nlist of links to pages of results for this mailing list. \n(http://search.postgresql.org/search?q=slow+count%28%29&m=1&l=&d=365&s=r) I am \nguessing that it probably uses the count and query method I am talking about.\n\n> \n> I guess there could be *some* way to expose the query planner's cost\n> estimates in a manner useful for result count estimation ... but given\n> how coarse its stats are and how wildly out the estimates can be, I kind\n> of doubt it. It's really intended for query planning decisions and more\n> interested in orders of magnitude, \"0, 1, or more than that\" measures,\n> etc, and seems to consider 30% here or there to be pretty insignificant\n> most of the time.\n> \n> > It's bad enough that count(*) is slow, then you have to do it all over\n> > again to get the results you need! I have not dug into this much yet,\n> > but would it be possible to return the amount of rows that a WHERE\n> > clause would actually return if the LIMIT and OFFSET were not applied.\n> > IE: When a normal query is executed, the server returns the number of\n> > rows aside from the actual row data. Would it be a big deal to modify\n> > this to allow it to return the amount of rows before the LIMIT and\n> > OFFSET is applied as well?\n> \n> It'd force the server to fully execute the query. Then again, it sounds\n> like you're doing that anyway.\n",
"msg_date": "Sun, 10 Oct 2010 03:29:42 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
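For a comparison like the one above it is worth looking at the plan each server actually uses; with PostgreSQL's defaults the 8-second case is almost certainly a sequential scan plus per-row visibility checks rather than an index-only answer. Table and column names below are placeholders:

EXPLAIN ANALYZE
SELECT count(*) FROM test_table WHERE col1 > 50;   -- expect a Seq Scan node with default settings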
{
"msg_contents": "2010/10/10 Neil Whelchel <[email protected]>\n\n> On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:\n> > On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel <[email protected]>\n> wrote:\n> > > I know that there haven been many discussions on the slowness of\n> count(*)\n> > > even when an index is involved because the visibility of the rows has\n> to\n> > > be checked. In the past I have seen many suggestions about using\n> > > triggers and tables to keep track of counts and while this works fine\n> in\n> > > a situation where you know what the report is going to be ahead of\n> time,\n> > > this is simply not an option when an unknown WHERE clause is to be used\n> > > (dynamically generated). I ran into a fine example of this when I was\n> > > searching this mailing list, \"Searching in 856,646 pages took 13.48202\n> > > seconds. Site search powered by PostgreSQL 8.3.\" Obviously at some\n> point\n> > > count(*) came into play here because the site made a list of pages (1 2\n> > > 3 4 5 6 > next). I very commonly make a list of pages from search\n> > > results, and the biggest time killer here is the count(*) portion, even\n> > > worse yet, I sometimes have to hit the database with two SELECT\n> > > statements, one with OFFSET and LIMIT to get the page of results I need\n> > > and another to get the amount of total rows so I can estimate how many\n> > > pages of results are available. The point I am driving at here is that\n> > > since building a list of pages of results is such a common thing to do,\n> > > there need to be some specific high speed ways to do this in one query.\n> > > Maybe an estimate(*) that works like count but gives an answer from the\n> > > index without checking visibility? I am sure that this would be good\n> > > enough to make a page list, it is really no big deal if it errors on\n> the\n> > > positive side, maybe the list of pages has an extra page off the end. I\n> > > can live with that. What I can't live with is taking 13 seconds to get\n> a\n> > > page of results from 850,000 rows in a table.\n> >\n> > 99% of the time in the situations you don't need an exact measure, and\n> > assuming analyze has run recently, select rel_tuples from pg_class for\n> > a given table is more than close enough. I'm sure wrapping that in a\n> > simple estimated_rows() function would be easy enough to do.\n>\n> This is a very good approach and it works very well when you are counting\n> the\n> entire table, but when you have no control over the WHERE clause, it\n> doesn't\n> help. IE: someone puts in a word to look for in a web form.\n>\n> From my perspective, this issue is the biggest problem there is when using\n> Postgres to create web pages, and it is so commonly used, I think that\n> there\n> should be a specific way to deal with it so that you don't have to run the\n> same WHERE clause twice.\n> IE: SELECT count(*) FROM <table> WHERE <clause>; to get the total amount of\n> items to make page navigation links, then:\n> SELECT <columns> FROM table WHERE <clause> LIMIT <items_per_page> OFFSET\n> <(page_no-1)*items_per_page>; to get the actual page contents.\n>\n> How about\nselect * from (select *, count(*) over () as total_count from <table> where\n<clause) a LIMIT <items_per_page> OFFSET\n<(page_no-1)*items_per_page>\nIt will return you total_count column with equal value in each row. You may\nhave problems if no rows are returned (e.g. 
page num is too high).\n-- \nBest regards,\n Vitalii Tymchyshyn\n",
"msg_date": "Sun, 10 Oct 2010 15:02:03 +0300",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
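Spelled out with concrete (illustrative) names, and with the unbalanced bracket in the message above fixed, the window-function approach returns one page of rows plus the total match count in a single query on 8.4 or later; as the message notes, you get no rows (and thus no count) when the page number is past the end:

SELECT *
FROM (
    SELECT i.*, count(*) OVER () AS total_count
    FROM items i
    WHERE title ILIKE '%widget%'
) AS a
ORDER BY id
LIMIT 25 OFFSET 0;   -- page 1 at 25 rows per page; use OFFSET (p-1)*25 for page p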
{
"msg_contents": " On 10/10/2010 6:29 AM, Neil Whelchel wrote:\n> On Saturday 09 October 2010 23:56:15 Craig Ringer wrote:\n>> On 10/10/2010 11:02 AM, Neil Whelchel wrote:\n>>> On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:\n>>>> On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel<[email protected]>\n>>> wrote:\n>>>>> I know that there haven been many discussions on the slowness of\n>>>>> count(*) even when an index is involved because the visibility of the\n>>>>> rows has to be checked. In the past I have seen many suggestions about\n>>>>> using triggers and tables to keep track of counts and while this works\n>>>>> fine in a situation where you know what the report is going to be\n>>>>> ahead of time, this is simply not an option when an unknown WHERE\n>>>>> clause is to be used (dynamically generated). I ran into a fine\n>>>>> example of this when I was searching this mailing list, \"Searching in\n>>>>> 856,646 pages took 13.48202 seconds. Site search powered by PostgreSQL\n>>>>> 8.3.\" Obviously at some point count(*) came into play here because the\n>>>>> site made a list of pages (1 2 3 4 5 6> next). I very commonly make a\n>>>>> list of pages from search results, and the biggest time killer here is\n>>>>> the count(*) portion, even worse yet, I sometimes have to hit the\n>>>>> database with two SELECT statements, one with OFFSET and LIMIT to get\n>>>>> the page of results I need and another to get the amount of total rows\n>>>>> so I can estimate how many pages of results are available. The point I\n>>>>> am driving at here is that since building a list of pages of results\n>>>>> is such a common thing to do, there need to be some specific high\n>>>>> speed ways to do this in one query. Maybe an estimate(*) that works\n>>>>> like count but gives an answer from the index without checking\n>>>>> visibility? I am sure that this would be good enough to make a page\n>>>>> list, it is really no big deal if it errors on the positive side,\n>>>>> maybe the list of pages has an extra page off the end. I can live with\n>>>>> that. What I can't live with is taking 13 seconds to get a page of\n>>>>> results from 850,000 rows in a table.\n> Good point, maybe this is turning more into a discussion of how to generate a\n> list of pages of results and one page of results with one query so we don't\n> have to do the same painfully slow query twice to do a very common task.\n>\n> On the other hand, I copied a table out of one of my production servers that\n> has about 60,000 rows with 6 columns (numeric, numeric, bool, bool, timestamp,\n> text). The first numeric column has numbers evenly spread between 0 and 100\n> and it is indexed. I put the table in a pair of database servers both running\n> on the same physical hardware. One server is Postgres, the other is a popular\n> server (I am not mentioning names here). on Postgres: SELECT count(*) FROM\n> table where column>50; takes about 8 seconds to run. The other database server\n> took less than one second (about 25 ms) as it is using the index (I assume) to\n> come up with the results. It is true that this is not a fair test because both\n> servers were tested with their default settings, and the defaults for Postgres\n> are much more conservative, however, I don't think that any amount of settings\n> tweaking will bring them even in the same ball park. There has been discussion\n> about the other server returning an incorrect count because all of the indexed\n> rows may not be live at the time. 
This is not a problem for the intended use,\n> that is why I suggested another function like estimate(*). It's name suggests\n> that the result will be close, not 100% correct, which is plenty good enough\n> for generating a list of results pages in most cases. I am faced with a very\n> serious problem here. If the query to make a list of pages takes say 6 seconds\n> and it takes another 6 seconds to generate a page of results, the customer is\n> waiting 12 seconds. This is not going to work. If count made a quick estimate,\n> say less than a second, and it took 6 seconds to come up with the actual\n> results, I could live with that. Or if coming up with the window of results\n> via (OFFSET and LIMIT) and returned the total number of rows that would have\n> matched the query, then I would still have everything I need to render the\n> page in a reasonable time. I really think that this needs to be addressed\n> somewhere. It's not like I am the only one that does this. You see it nearly\n> everywhere a long list of results is (expected to be) returned in a web site.\n> Among the people I work with, this seems to be the most mentioned reason that\n> they claim that they don't use Postgres for their projects.\n>\n> It would be nice to see how the server comes up with the search results and\n> list of links to pages of results for this mailing list.\n> (http://search.postgresql.org/search?q=slow+count%28%29&m=1&l=&d=365&s=r) I am\n> guessing that it probably uses the count and query method I am talking about.\n>\n>> I guess there could be *some* way to expose the query planner's cost\n>> estimates in a manner useful for result count estimation ... but given\n>> how coarse its stats are and how wildly out the estimates can be, I kind\n>> of doubt it. It's really intended for query planning decisions and more\n>> interested in orders of magnitude, \"0, 1, or more than that\" measures,\n>> etc, and seems to consider 30% here or there to be pretty insignificant\n>> most of the time.\n>>\n>>> It's bad enough that count(*) is slow, then you have to do it all over\n>>> again to get the results you need! I have not dug into this much yet,\n>>> but would it be possible to return the amount of rows that a WHERE\n>>> clause would actually return if the LIMIT and OFFSET were not applied.\n>>> IE: When a normal query is executed, the server returns the number of\n>>> rows aside from the actual row data. Would it be a big deal to modify\n>>> this to allow it to return the amount of rows before the LIMIT and\n>>> OFFSET is applied as well?\n>> It'd force the server to fully execute the query. Then again, it sounds\n>> like you're doing that anyway.\nHow big is your DB?\nHow fast is your disk access?\nAny chance disks/RAM can be addressed?\n\nMy disk access is pitiful...\nfirst run, 2.3 million rows.. 0m35.38s, subsequent runs.. real 0m2.55s\n\nrthompso@hw-prod-repdb1> time psql -c \"select count(*) from my_production_table\" reporting\n count\n---------\n 2340704\n(1 row)\n\n\nreal 0m35.38s\nuser 0m0.25s\nsys 0m0.03s\n\nsubsequent runs.... 
(count changes due to inserts.)\n\nrthompso@hw-prod-repdb1> time psql -c \"select count(*) from my_production_table\" reporting\n count\n---------\n 2363707\n(1 row)\n\n\nreal 0m2.70s\nuser 0m0.27s\nsys 0m0.02s\nrthompso@hw-prod-repdb1> time psql -c \"select count(*) from my_production_table\" reporting\n count\n---------\n 2363707\n(1 row)\n\n\nreal 0m2.55s\nuser 0m0.26s\nsys 0m0.02s\nrthompso@hw-prod-repdb1> time psql -c \"select count(*) from my_production_table\" reporting\n count\n---------\n 2363707\n(1 row)\n\n\nreal 0m2.50s\nuser 0m0.26s\nsys 0m0.02s\n\nreporting=# SELECT pg_size_pretty(pg_total_relation_size('my_production_table'));\n pg_size_pretty\n----------------\n 1890 MB\n(1 row)\n\n",
"msg_date": "Sun, 10 Oct 2010 11:02:32 -0400",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 10/10/2010 6:29 PM, Neil Whelchel wrote:\n> On the other hand, I copied a table out of one of my production servers that\n> has about 60,000 rows with 6 columns (numeric, numeric, bool, bool, timestamp,\n> text). The first numeric column has numbers evenly spread between 0 and 100\n> and it is indexed. I put the table in a pair of database servers both running\n> on the same physical hardware. One server is Postgres, the other is a popular\n> server (I am not mentioning names here).\n\nPlease do. Your comment is pretty meaningless otherwise.\n\nIf you're talking about MySQL: Were you using InnoDB or MyISAM table \nstorage? Of course it's fast with MyISAM, it relies on locks to do \nupdates and has bugger all capability for write concurrency, or to \npermit readers while writing is going on.\n\nIf you're using InnoDB, then I'd like to know how they've managed that.\n\nIf you're talking about some *other* database, please name it and \nprovide any useful details, because the hand waving is not helpful.\n\n > I don't think that any amount of settings\n> tweaking will bring them even in the same ball park.\n\nIf you are, in fact, comparing MySQL+MyISAM and PostgreSQL, then you're \nquite right. Pg will never have such a fast count() as MyISAM does or \nthe same insanely fast read performance, and MyISAM will never be as \nreliable, robust or concurrency-friendly as Pg is. Take your pick, you \ncan't have both.\n\n> There has been discussion\n> about the other server returning an incorrect count because all of the indexed\n> rows may not be live at the time. This is not a problem for the intended use,\n> that is why I suggested another function like estimate(*). It's name suggests\n> that the result will be close, not 100% correct, which is plenty good enough\n> for generating a list of results pages in most cases.\n\nDo you have any practical suggestions for generating such an estimate, \nthough? I find it hard to think of any way the server can do that \ndoesn't involve executing the query. The table stats are WAY too general \nand a bit hit-and-miss, and there isn't really any other way to do it.\n\nIf all you want is a way to retrieve both a subset of results AND a \ncount of how many results would've been generated, it sounds like all \nyou really need is a way to get the total number of results returned by \na cursor query, which isn't a big engineering challenge. I expect that \nin current Pg versions a trivial PL/PgSQL function could be used to \nslurp and discard unwanted results, but a better in-server option to \ncount the results from a cursor query would certainly be nice.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Sun, 10 Oct 2010 23:30:09 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Craig Ringer wrote:\n> On 10/10/2010 9:54 AM, Mladen Gogala wrote:\n>\n> \n>> Unfortunately, the problem is in the rather primitive way PostgreSQL\n>> does I/O. It didn't change in 9.0 so there is nothing you could gain by\n>> upgrading. If you execute strace -o /tmp/pg.out -e read <PID of the\n>> sequential scan process> and inspect the file /tmp/pg.out when the query\n>> finishes, you will notice a gazillion of read requests, all of them 8192\n>> bytes in size. That means that PostgreSQL is reading the table block by\n>> block, without any merging of the requests.\n>> \n>\n> I'd be really interested in any measurements you've done to determine \n> the cost of this over doing reads in larger chunks. If they're properly \n> detailed and thought out, the -hackers list is likely to be interested \n> as well.\n> \nI can provide measurements, but from Oracle RDBMS. Postgres doesn't \nallow tuning of that aspect, so no measurement can be done. Would the \nnumbers from Oracle RDBMS be acceptable?\n\n\n> The Linux kernel, at least, does request merging (and splitting, and \n> merging, and more splitting) along the request path, and I'd personally \n> expect that most of the cost of 8k requests would be in the increased \n> number of system calls, buffer copies, etc required. Measurements \n> demonstrating or contradicting this would be good to see.\n> \n\nEven the cost of hundreds of thousands of context switches is far from \nnegligible. What kind of measurements do you expect me to do with the \ndatabase which doesn't support tweaking of that aspect of its operation?\n\n> It's worth being aware that there are memory costs to doing larger \n> reads, especially when you have many backends each of which want to \n> allocate a larger buffer for reading. \n\nOh, it's not only larger memory, the buffer management would have to be \nchanged too, to prevent process doing a sequential scan from inundating \nthe shared buffers. Alternatively, the blocks would have to be written \ninto the private memory and immediately thrown away after that. However, \nthe experience with Oracle tells me that this is well worth it. Here are \nthe numbers:\n\nConnected to:\nOracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production\nWith the Partitioning, Real Application Clusters, OLAP, Data Mining\nand Real Application Testing options\n\nSQL> show parameter db_file_multi\n\nNAME TYPE VALUE\n------------------------------------ ----------- \n------------------------------\ndb_file_multiblock_read_count integer 16\nSQL> alter session set db_file_multiblock_read_count=1;\n\nSession altered.\nSQL> select count(*) from ni_occurrence;\n\n COUNT(*)\n----------\n 402062638\n\nElapsed: 00:08:20.88\nSQL> alter session set db_file_multiblock_read_count=128;\n\nSession altered.\n\nElapsed: 00:00:00.50\nSQL> select count(*) from ni_occurrence;\n\n COUNT(*)\n----------\n 402062638\n\nElapsed: 00:02:17.58\n\n\nIn other words, when I batched the sequential scan to do 128 blocks I/O, \nit was 4 times faster than when I did the single block I/O.\nDoes that provide enough of an evidence and, if not, why not?\n\n\n> If you can use a chunk of \n> shared_buffers as the direct destination for the read that's OK, but \n> otherwise you're up for (1mb-8kb)*num_backends extra memory use on I/O \n> buffers that could otherwise be used as shared_buffers or OS cache.\n>\n> Async I/O, too, has costs.\n> \n\nThere is a common platitude that says that there is no such thing as \nfree lunch. 
However, both Oracle RDBMS and IBM DB2 use asynchronous I/O, \nprobably because they're unaware of the danger. Let me now give you a \nfull table scan of a much smaller table located in a Postgres database:\n\nnews=> select count(*) from internet_web_sites;\n count \n---------\n 1290133\n(1 row)\n\nTime: 12838.958 ms\n\n\nOracle counts 400 million records in 2 minutes and Postgres 9.01 takes \n12.8 seconds to count 1.2 million records? Do you see the disparity?\n\nBoth databases, Oracle and Postgres, are utilizing the same 3Par SAN \ndevice, the machines housing both databases are comparable HP 64 bit \nLinux machines, both running 64 bit version of Red Hat 5.5. Respective \ntable sizes are here:\n\nSQL> select bytes/1048576 as MB from user_segments\n 2 where segment_name='NI_OCCURRENCE';\n\n MB\n----------\n 35329\n\nnews=> select pg_size_pretty(pg_table_size('moreover.internet_web_sites'));\n pg_size_pretty\n----------------\n 216 MB\n(1 row)\n\nSo, I really pushed Oracle much harder than I pushed Postgres.\n\n> > PostgreSQL is in\n> \n>> dire need of something similar and it wouldn't even be that hard to\n>> implement.\n>> \n>\n> I'd really like to see both those assertions backed with data or patches ;-)\n> \n\nWith the database that doesn't allow tuning of that aspect, it's the \nself-defeating proposition. However, I did my best to give you the numbers.\n\n> Personally, I know just enough about how PG's I/O path works to suspect \n> that \"not that hard to implement\" is probably a little ... \n> over-optimistic. Sure, it's not that hard to implement in a new program \n> with no wired-in architectural and design choices; that doesn't mean \n> it's easy to retrofit onto existing code, especially a bunch of \n> co-operating processes with their own buffer management.\n>\n> \nIt maybe so, but slow sequential scan is still the largest single \nperformance problem of PostgreSQL. The frequency with which that topic \nappears on the mailing lists should serve as a good evidence for that. I \ndid my best to prove my case. Again, requiring \"hard numbers\" when \nusing the database which doesn't allow tweaking of the I/O size is self \ndefeating proposition. The other databases, like DB2 and Oracle both \nallow tweaking of that aspect of its operation, Oracle even on the per \nsession basis. If you still claim that it wouldn't make the difference, \nthe onus to prove it is on you.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n",
"msg_date": "Sun, 10 Oct 2010 13:14:22 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/10/2010 11:02 AM, Reid Thompson wrote:\n>>>> On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel<[email protected]>\n>>>>\n>> On the other hand, I copied a table out of one of my production servers that\n>> has about 60,000 rows with 6 columns (numeric, numeric, bool, bool, timestamp,\n>> text). The first numeric column has numbers evenly spread between 0 and 100\n>> and it is indexed. I put the table in a pair of database servers both running\n>> on the same physical hardware. One server is Postgres, the other is a popular\n>> server (I am not mentioning names here). on Postgres: SELECT count(*) FROM\n>> table where column>50; takes about 8 seconds to run. The other database server\n>> took less than one second (about 25 ms) as it is using the index (I assume) to\n>> come up with the results. It is true that this is not a fair test because both\n>> servers were tested with their default settings, and the defaults for Postgres\n>> are much more conservative, however, I don't think that any amount of settings\n>> tweaking will bring them even in the same ball park. There has been discussion\n>> about the other server returning an incorrect count because all of the indexed\n>> rows may not be live at the time. This is not a problem for the intended use,\n>> that is why I suggested another function like estimate(*). It's name suggests\n>> that the result will be close, not 100% correct, which is plenty good enough\n>> for generating a list of results pages in most cases. I am faced with a very\n>> serious problem here. If the query to make a list of pages takes say 6 seconds\n>> and it takes another 6 seconds to generate a page of results, the customer is\n>> waiting 12 seconds. This is not going to work. If count made a quick estimate,\n>> say less than a second, and it took 6 seconds to come up with the actual\n>> results, I could live with that. Or if coming up with the window of results\n>> via (OFFSET and LIMIT) and returned the total number of rows that would have\n>> matched the query, then I would still have everything I need to render the\n>> page in a reasonable time. I really think that this needs to be addressed\n>> somewhere. It's not like I am the only one that does this. You see it nearly\n>> everywhere a long list of results is (expected to be) returned in a web site.\n>> Among the people I work with, this seems to be the most mentioned reason that\n>> they claim that they don't use Postgres for their projects. t anyway.\n>\n> How big is your DB?\n> How fast is your disk access?\n> Any chance disks/RAM can be addressed?\n>\n> My disk access is pitiful...\n> first run, 2.3 million rows.. 0m35.38s, subsequent runs.. real 0m2.55s\n>\n> rthompso@hw-prod-repdb1> time psql -c \"select count(*) from my_production_table\" reporting\n> count\n> ---------\n> 2340704\n> (1 row)\n>\n>\n> real 0m35.38s\n> user 0m0.25s\n> sys 0m0.03s\n>\n> subsequent runs.... 
(count changes due to inserts.)\n>\n> rthompso@hw-prod-repdb1> time psql -c \"select count(*) from my_production_table\" reporting\n> count\n> ---------\n> 2363707\n> (1 row)\n>\n>\n> real 0m2.70s\n> user 0m0.27s\n> sys 0m0.02s\n> rthompso@hw-prod-repdb1> time psql -c \"select count(*) from my_production_table\" reporting\n> count\n> ---------\n> 2363707\n> (1 row)\n>\n>\n> real 0m2.55s\n> user 0m0.26s\n> sys 0m0.02s\n> rthompso@hw-prod-repdb1> time psql -c \"select count(*) from my_production_table\" reporting\n> count\n> ---------\n> 2363707\n> (1 row)\n>\n>\n> real 0m2.50s\n> user 0m0.26s\n> sys 0m0.02s\n>\n> reporting=# SELECT pg_size_pretty(pg_total_relation_size('my_production_table'));\n> pg_size_pretty\n> ----------------\n> 1890 MB\n> (1 row)\n>\n>\nforgot to note, my table schema is significantly larger.\n\nrthompso@hw-prod-repdb1> time psql -c \"\\d my_production_table_201010\" reporting\n Table \"public.my_production_table_201010\"\n Column | Type | Modifiers\n-----------------------------+-----------------------------+----------------------------------------------------------------\n | integer | not null default \nnextval('my_production_table_parent_id_seq'::regclass)\n | character varying(20) |\n | character(1) |\n | character varying(32) |\n | character varying(32) |\n | character varying(20) |\n | character varying(5) |\n | character varying(5) |\n | date |\n | character(1) |\n | character varying(32) |\n | character varying(32) |\n | character varying(32) |\n | character varying(2) |\n | character varying(10) |\n | character varying(10) |\n | character varying(32) |\n | character varying(7) |\n | character varying(10) |\n | character varying(2) |\n | character varying(9) |\n | character varying(9) |\n | character varying(9) |\n | character varying(10) |\n | character varying(32) |\n | character varying(32) |\n | character varying(20) |\n | character varying(5) |\n | character varying(5) |\n | character varying(32) |\n | character varying(32) |\n | character varying(32) |\n | character varying(2) |\n | character varying(10) |\n | character varying(10) |\n | character varying(10) |\n | character varying(10) |\n | integer |\n | character varying(2) |\n | character varying(32) |\n | character varying(32) |\n | integer |\n | integer |\n | text |\n | character varying(3) |\n | date |\n | date |\n | date |\n | integer |\n | integer |\n | integer |\n | integer |\n | character varying(6) |\n | character varying(10) |\n | character varying(32) |\n | character varying(32) |\n | character varying(32) |\n | character varying(10) |\n | character varying(6) |\n | character varying(8) |\n | boolean |\n | character(1) |\n | date |\n | integer |\n | date |\n | character varying(11) |\n | character varying(4) |\n | character(1) |\n | date |\n | character varying(5) |\n | character varying(20) |\n | date |\n | character(1) |\n | character(1) |\n | character varying(2) |\n | text |\n | integer |\n | integer |\n | timestamp without time zone | default now()\n | timestamp without time zone |\n | character varying(64) |\n | character varying(64) |\n | character varying(64) |\nIndexes:\n \"my_production_table_201010_pkey\" PRIMARY KEY, btree (id)\n \"my_production_table_201010_date_idx\" btree (xxxxdate), tablespace \"indexspace\"\n \"my_production_table_201010_epatient_idx\" btree (storeid, xxxxxxxxxxxxx), tablespace \"indexspace\"\n \"my_production_table_201010_medicationname_idx\" btree (xxxxxxxxxxxxxx), tablespace \"indexspace\"\n \"my_production_table_201010_ndc_idx\" btree (xxx), tablespace 
\"indexspace\"\nCheck constraints:\n \"my_production_table_201010_filldate_check\" CHECK (xxxxdate >= '2010-10-01'::date AND xxxxdate < \n'2010-11-01'::date)\nForeign-key constraints:\n \"my_production_table_201010_pkgfileid_fkey\" FOREIGN KEY (pkgfileid) REFERENCES my_production_tablefiles(id)\nInherits: my_production_table_parent\n\n\n",
"msg_date": "Sun, 10 Oct 2010 14:33:04 -0400",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Sunday 10 October 2010 05:02:03 Віталій Тимчишин wrote:\n> 2010/10/10 Neil Whelchel <[email protected]>\n> \n> > On Saturday 09 October 2010 18:47:34 Scott Marlowe wrote:\n> > > On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel <[email protected]>\n> > \n> > wrote:\n> > > > I know that there haven been many discussions on the slowness of\n> > \n> > count(*)\n> > \n> > > > even when an index is involved because the visibility of the rows has\n> > \n> > to\n> > \n> > > > be checked. In the past I have seen many suggestions about using\n> > > > triggers and tables to keep track of counts and while this works fine\n> > \n> > in\n> > \n> > > > a situation where you know what the report is going to be ahead of\n> > \n> > time,\n> > \n> > > > this is simply not an option when an unknown WHERE clause is to be\n> > > > used (dynamically generated). I ran into a fine example of this when\n> > > > I was searching this mailing list, \"Searching in 856,646 pages took\n> > > > 13.48202 seconds. Site search powered by PostgreSQL 8.3.\" Obviously\n> > > > at some\n> > \n> > point\n> > \n> > > > count(*) came into play here because the site made a list of pages (1\n> > > > 2 3 4 5 6 > next). I very commonly make a list of pages from search\n> > > > results, and the biggest time killer here is the count(*) portion,\n> > > > even worse yet, I sometimes have to hit the database with two SELECT\n> > > > statements, one with OFFSET and LIMIT to get the page of results I\n> > > > need and another to get the amount of total rows so I can estimate\n> > > > how many pages of results are available. The point I am driving at\n> > > > here is that since building a list of pages of results is such a\n> > > > common thing to do, there need to be some specific high speed ways\n> > > > to do this in one query. Maybe an estimate(*) that works like count\n> > > > but gives an answer from the index without checking visibility? I am\n> > > > sure that this would be good enough to make a page list, it is\n> > > > really no big deal if it errors on\n> > \n> > the\n> > \n> > > > positive side, maybe the list of pages has an extra page off the end.\n> > > > I can live with that. What I can't live with is taking 13 seconds to\n> > > > get\n> > \n> > a\n> > \n> > > > page of results from 850,000 rows in a table.\n> > > \n> > > 99% of the time in the situations you don't need an exact measure, and\n> > > assuming analyze has run recently, select rel_tuples from pg_class for\n> > > a given table is more than close enough. I'm sure wrapping that in a\n> > > simple estimated_rows() function would be easy enough to do.\n> > \n> > This is a very good approach and it works very well when you are counting\n> > the\n> > entire table, but when you have no control over the WHERE clause, it\n> > doesn't\n> > help. 
IE: someone puts in a word to look for in a web form.\n> > \n> > From my perspective, this issue is the biggest problem there is when\n> > using Postgres to create web pages, and it is so commonly used, I think\n> > that there\n> > should be a specific way to deal with it so that you don't have to run\n> > the same WHERE clause twice.\n> > IE: SELECT count(*) FROM <table> WHERE <clause>; to get the total amount\n> > of items to make page navigation links, then:\n> > SELECT <columns> FROM table WHERE <clause> LIMIT <items_per_page> OFFSET\n> > <(page_no-1)*items_per_page>; to get the actual page contents.\n> > \n> > How about\n> \n> select * from (select *, count(*) over () as total_count from <table> where\n> <clause) a LIMIT <items_per_page> OFFSET\n> <(page_no-1)*items_per_page>\n> It will return you total_count column with equal value in each row. You may\n> have problems if no rows are returned (e.g. page num is too high).\n\nI have done this before, but the speedup from the two hits to the database \nthat I mentioned above is tiny, just a few ms. It seems to end up doing about \nthe same thing on the database end. The reason that I don't commonly do this \nis what you said about not getting a count result if you run off the end.\n-Neil-\n",
"msg_date": "Sun, 10 Oct 2010 14:59:48 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 10/11/2010 01:14 AM, Mladen Gogala wrote:\n\n> I can provide measurements, but from Oracle RDBMS. Postgres doesn't\n> allow tuning of that aspect, so no measurement can be done. Would the\n> numbers from Oracle RDBMS be acceptable?\n\nWell, they'd tell me a lot about Oracle's performance as I/O chunk size \nscales, but almost nothing about the cost of small I/O operations vs \nlarger ones in general.\n\nTypically dedicated test programs that simulate the database read \npatterns would be used for this sort of thing. I'd be surprised if \nnobody on -hackers has already done suitable testing; I was mostly \nasking because I was interested in how you were backing your assertions.\n\nPostgreSQL isn't Oracle; their design is in many ways very different. \nMost importantly, Oracle uses a redo log, where PostgreSQL stores old \nrows with visibility information directly in the tables. It is possible \nthat a larger proportion of Oracle's I/O costs are fixed per-block \noverheads rather than per-byte costs, so it seeks to batch requests into \nlarger chunks. Of course, it's also possible that 8k chunk I/O is just \nuniversally expensive and is something Pg should avoid, too, but we \ncan't know that without\ndedicated testing, which I at least haven't done. I don't follow \n-hackers closely, and wouldn't have seen discussion about testing done \nthere. The archives are likely to contain useful discussions.\n\nThen again, IIRC Pg's page size is also it's I/O size, so you could \nactually get larger I/O chunking by rebuilding Pg with larger pages. \nHaving never had the need, I haven't examined the performance of page \nsize changes on I/O performance.\n\n>> The Linux kernel, at least, does request merging (and splitting, and\n>> merging, and more splitting) along the request path, and I'd\n>> personally expect that most of the cost of 8k requests would be in the\n>> increased number of system calls, buffer copies, etc required.\n>> Measurements demonstrating or contradicting this would be good to see.\n>\n> Even the cost of hundreds of thousands of context switches is far from\n> negligible. What kind of measurements do you expect me to do with the\n> database which doesn't support tweaking of that aspect of its operation?\n\nTest programs, or references to testing done by others that demonstrates \nthese costs in isolation. Of course, they still wouldn't show what gain \nPg might obtain (nothing except hacking on Pg's sources really will) but \nthey'd help measure the costs of doing I/O that way.\n\nI suspect you're right that large I/O chunks would be desirable and a \npotential performance improvement. What I'd like to know is *how* \n*much*, or at least how much the current approach costs in pure \noverheads like context switches and scheduler delays.\n\n> Does that provide enough of an evidence and, if not, why not?\n\nIt shows that it helps Oracle a lot ;-)\n\nWithout isolating how much of that is raw costs of the block I/O and how \nmuch is costs internal to Oracle, it's still hard to have much idea how \nmuch it'd benefit Pg to take a similar approach.\n\nI'm sure the folks on -hackers have been over this and know a whole lot \nmore about it than I do, though.\n\n> Oracle counts 400 million records in 2 minutes and Postgres 9.01 takes\n> 12.8 seconds to count 1.2 million records? Do you see the disparity?\n\nSure. What I don't know is how much of that is due to block sizes. 
There \nare all sorts of areas where Oracle could be gaining.\n\n> It maybe so, but slow sequential scan is still the largest single\n> performance problem of PostgreSQL. The frequency with which that topic\n> appears on the mailing lists should serve as a good evidence for that.\n\nI'm certainly not arguing that it could use improvement; it's clearly \nhurting some users. I just don't know if I/O chunking is the answer - I \nsuspect that if it were, then it would've become a priority for one or \nmore people working on Pg much sooner.\n\nIt's quite likely that it's one of those things where it makes a huge \ndifference for Oracle because Oracle has managed to optimize out most of \nthe other bigger costs. If Pg still has other areas that make I/O more \nexpensive per-byte (say, visibility checks) and low fixed per-block \ncosts, then there'd be little point in chunking I/O. My understanding is \nthat that's pretty much how things stand at the moment, but I'd love \nverification from someone who's done the testing.\n\n>If you still claim that it wouldn't make the difference,\n> the onus to prove it is on you.\n\nI didn't mean to claim that it would make no difference. If I sounded \nlike it, sorry.\n\nI just want to know how _much_ , or more accurately how great the \noverheads of the current approach in Pg are vs doing much larger reads.\n\n--\nCraig Ringer\n\n",
"msg_date": "Mon, 11 Oct 2010 06:41:16 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Sun, Oct 10, 2010 at 12:14 PM, Mladen Gogala\n<[email protected]> wrote:\n>\n>\n>\n> In other words, when I batched the sequential scan to do 128 blocks I/O, it\n> was 4 times faster than when I did the single block I/O.\n> Does that provide enough of an evidence and, if not, why not?\n\n\nThese numbers tell us nothing because, unless you dropped the caches\nbetween runs, then at least part of some runs was very probably\ncached.\n\n-- \nJon\n",
"msg_date": "Sun, 10 Oct 2010 17:50:22 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Mon, Oct 11, 2010 at 06:41:16AM +0800, Craig Ringer wrote:\n> On 10/11/2010 01:14 AM, Mladen Gogala wrote:\n>\n>> I can provide measurements, but from Oracle RDBMS. Postgres doesn't\n>> allow tuning of that aspect, so no measurement can be done. Would the\n>> numbers from Oracle RDBMS be acceptable?\n>\n> Well, they'd tell me a lot about Oracle's performance as I/O chunk size \n> scales, but almost nothing about the cost of small I/O operations vs \n> larger ones in general.\n>\n> Typically dedicated test programs that simulate the database read \n> patterns would be used for this sort of thing. I'd be surprised if \n> nobody on -hackers has already done suitable testing; I was mostly \n> asking because I was interested in how you were backing your assertions.\n\nOne thing a test program would have to take into account is multiple\nconcurrent users. What speeds up the single user case may well hurt the\nmulti user case, and the behaviors that hurt single user cases may have been\nput in place on purpose to allow decent multi-user performance. Of course, all\nof that is \"might\" and \"maybe\", and I can't prove any assertions about block\nsize either. But the fact of multiple users needs to be kept in mind.\n\nIt was asserted that reading bigger chunks would help performance; a response\nsuggested that, at least in Linux, setting readahead on a device would\nessentially do the same thing. Or that's what I got from the thread, anyway.\nI'm interested to know how similar performance might be between the large\nblock size case and the large readahead case. Comments, anyone?\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com",
"msg_date": "Sun, 10 Oct 2010 18:27:53 -0600",
"msg_from": "Joshua Tolley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 10/11/2010 08:27 AM, Joshua Tolley wrote:\n\n> One thing a test program would have to take into account is multiple\n> concurrent users. What speeds up the single user case may well hurt the\n> multi user case, and the behaviors that hurt single user cases may have been\n> put in place on purpose to allow decent multi-user performance. Of course, all\n> of that is \"might\" and \"maybe\", and I can't prove any assertions about block\n> size either. But the fact of multiple users needs to be kept in mind.\n\nAgreed. I've put together a simple test program to test I/O chunk sizes. \nIt only tests single-user performance, but it'd be pretty trivial to \nadapt it to spawn a couple of worker children or run several threads, \neach with a suitable delay as it's rather uncommon to have a bunch of \nseqscans all fire off at once.\n\n From this test it's pretty clear that with buffered I/O of an uncached \n700mb file under Linux, the I/O chunk size makes very little difference, \nwith all chunk sizes taking 9.8s to read the test file, with \nnear-identical CPU utilization. Caches were dropped between each test run.\n\nFor direct I/O (by ORing the O_DIRECT flag to the open() flags), chunk \nsize is *hugely* significant, with 4k chunk reads of the test file \ntaking 38s, 8k 22s, 16k 14s, 32k 10.8s, 64k - 1024k 9.8s, then rising a \nlittle again over 1024k.\n\nApparently Oracle is almost always configured to use direct I/O, so it \nwould benefit massively from large chunk sizes. PostgreSQL is almost \nnever used with direct I/O, and at least in terms of the low-level costs \nof syscalls and file system activity, shouldn't care at all about read \nchunk sizes.\n\nBumping readahead from 256 to 8192 made no significant difference for \neither case. Of course, I'm on a crappy laptop disk...\n\nI'm guessing this is the origin of the OP's focus on I/O chunk sizes.\n\nAnyway, for the single-seqscan case, I see little evidence here that \nusing a bigger read chunk size would help PostgreSQL reduce overheads or \nimprove performance.\n\nOP: Is your Oracle instance using direct I/O?\n\n--\nCraig Ringer",
"msg_date": "Mon, 11 Oct 2010 08:51:43 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/10/2010 8:27 PM, Joshua Tolley wrote:\n> It was asserted that reading bigger chunks would help performance; a response\n> suggested that, at least in Linux, setting readahead on a device would\n> essentially do the same thing. Or that's what I got from the thread, anyway.\n> I'm interested to know how similar performance might be between the large\n> block size case and the large readahead case. Comments, anyone?\n>\n\nCraig maybe right, the fact that Oracle is doing direct I/O probably \ndoes account for the difference. The fact is, however, that the question \nabout slow sequential scan appears with some regularity on PostgreSQL \nforums. My guess that a larger chunk would be helpful may not be \ncorrect, but I do believe that there is a problem with a too slow \nsequential scan. Bigger chunks are a very traditional solution which \nmay not work but the problem is still there.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Sun, 10 Oct 2010 23:14:43 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Sun, Oct 10, 2010 at 11:14:43PM -0400, Mladen Gogala wrote:\n> The fact is, however, that the question \n> about slow sequential scan appears with some regularity on PostgreSQL \n> forums.\n\nDefinitely. Whether that's because there's something pathologically wrong with\nsequential scans, or just because they're the slowest of the common\noperations, remains to be seen. After all, if sequential scans were suddenly\nfast, something else would be the slowest thing postgres commonly did.\n\nAll that said, if there's gain to be had by increasing block size, or\nsomething else, esp. if it's low hanging fruit, w00t.\n\n--\nJoshua Tolley / eggyknap\nEnd Point Corporation\nhttp://www.endpoint.com",
"msg_date": "Sun, 10 Oct 2010 21:21:54 -0600",
"msg_from": "Joshua Tolley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 11/10/10 11:14, Mladen Gogala wrote:\n> On 10/10/2010 8:27 PM, Joshua Tolley wrote:\n>> It was asserted that reading bigger chunks would help performance; a\n>> response\n>> suggested that, at least in Linux, setting readahead on a device would\n>> essentially do the same thing. Or that's what I got from the thread,\n>> anyway.\n>> I'm interested to know how similar performance might be between the large\n>> block size case and the large readahead case. Comments, anyone?\n>>\n> \n> Craig maybe right, the fact that Oracle is doing direct I/O probably\n> does account for the difference. The fact is, however, that the question\n> about slow sequential scan appears with some regularity on PostgreSQL\n> forums. My guess that a larger chunk would be helpful may not be\n> correct, but I do believe that there is a problem with a too slow\n> sequential scan. Bigger chunks are a very traditional solution which\n> may not work but the problem is still there.\n\nNow that, I agree with.\n\nBTW, I casually looked into async I/O a little, and it seems the general\nsituation for async I/O on Linux is \"pretty bloody awful\". POSIX async\nI/O uses signal-driven completion handlers - but signal queue depth\nlimits mean they aren't necessarily reliable, so you can get lost\ncompletions and have to scan the event buffer periodically to catch\nthem. The alternative is running completion handlers in threads, but\napparently there are queue depth limit issues there too, as well as the\njoy that is getting POSIX threading right. I think there was some talk\nabout this on -HACKERS a while ago. Here's the main discussion on async\nI/O I've found:\n\nhttp://archives.postgresql.org/pgsql-hackers/2006-10/msg00820.php\n\n... from which it seems that async buffered I/O is poorly supported, if\nat all, on current Linux kernels. Don't know about the BSDs. As Pg is\n*really* poorly suited to direct I/O, relying on the OS buffer cache as\nit does, unbuffered direct I/O isn't really an option.\n\nLinux async I/O seems to be designed for network I/O and for monitoring\nlots of files for changes, rather than for highly concurrent I/O on one\nor a few files. It shows.\n\n\nRe slow seqscans, there is still plenty of room to move:\n\n- Sequential scans cannot (AFAIK) use the visibility map introduced in\n8.4 to skip sections of tables that are known to contain only dead\ntuples not visible to any transaction or free space. This potential\noptimization could make a big difference in tables with FILLFACTOR or\nwith holes created by certain update patterns.\n\n- Covering indexes (\"index oriented\" table columns) and/or indexes with\nembedded visibility information could dramatically improve the\nperformance of certain queries by eliminating the need to hit the heap\nat all, albeit at the cost of trade-offs elsewhere. This would be\nparticularly useful for those classic count() queries. There have been\ndiscussions about these on -hackers, but I'm not up with the current\nstatus or lack thereof.\n\n- There's been recent talk of using pread() rather than lseek() and\nread() to save on syscall overhead. The difference is probably minimal,\nbut it'd be nice.\n\n\nIt is worth being aware of a few other factors:\n\n- Sometimes seqscans are actually the fastest option, and people don't\nrealize this, so they try to force index use where it doesn't make\nsense. This is the cause of a significant number of list complaints.\n\n- Slow sequential scans are often a consequence of table bloat. It's\nworth checking for this. 
Pg's autovacuum and manual vacuum have improved\nin performance and usability dramatically over time, but still have room\nto move. Sometimes people disable autovacuum in the name of a\nshort-lived performance boost, not realizing it'll have horrible effects\non performance in the mid- to long- term.\n\n- Seqscans can be chosen when index scans are more appropriate if the\nrandom_page_cost and seq_page_cost aren't set sensibly, which they\nusually aren't. This doesn't make seqscans any faster, but it's even\nworse when you have a good index you're not using. I can't help but\nwonder if a bundled \"quick and dirty benchmark\" tool for Pg would be\nbeneficial in helping to determine appropriate values for these settings\nand for effective io concurrency.\n\n\n-- \nCraig Ringer\n\nTech-related writing: http://soapyfrogs.blogspot.com/\n",
"msg_date": "Mon, 11 Oct 2010 12:11:58 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Sunday 10 October 2010 15:41:16 you wrote:\n> On 10/11/2010 01:14 AM, Mladen Gogala wrote:\n> > I can provide measurements, but from Oracle RDBMS. Postgres doesn't\n> > allow tuning of that aspect, so no measurement can be done. Would the\n> > numbers from Oracle RDBMS be acceptable?\n> \n> Well, they'd tell me a lot about Oracle's performance as I/O chunk size\n> scales, but almost nothing about the cost of small I/O operations vs\n> larger ones in general.\n> \n> Typically dedicated test programs that simulate the database read\n> patterns would be used for this sort of thing. I'd be surprised if\n> nobody on -hackers has already done suitable testing; I was mostly\n> asking because I was interested in how you were backing your assertions.\n> \n> PostgreSQL isn't Oracle; their design is in many ways very different.\n> Most importantly, Oracle uses a redo log, where PostgreSQL stores old\n> rows with visibility information directly in the tables. It is possible\n> that a larger proportion of Oracle's I/O costs are fixed per-block\n> overheads rather than per-byte costs, so it seeks to batch requests into\n> larger chunks. Of course, it's also possible that 8k chunk I/O is just\n> universally expensive and is something Pg should avoid, too, but we\n> can't know that without\n> dedicated testing, which I at least haven't done. I don't follow\n> -hackers closely, and wouldn't have seen discussion about testing done\n> there. The archives are likely to contain useful discussions.\n> \n> Then again, IIRC Pg's page size is also it's I/O size, so you could\n> actually get larger I/O chunking by rebuilding Pg with larger pages.\n> Having never had the need, I haven't examined the performance of page\n> size changes on I/O performance.\n\nThis is a really good idea! I will look into doing this and I will post the \nresults as soon as I can get it done.\n\nRight now, I am building a test machine with two dual core Intel processors \nand two 15KRPM mirrored hard drives, 1 GB ram. I am using a small amount of \nram because I will be using small test tables. I may do testing in the future \nwith more ram and bigger tables, but I think I can accomplish what we are all \nafter with what I have. The machine will be limited to running the database \nserver in test, init, bash, and ssh, no other processes will be running except \nfor what is directly involved with testing. I will post exact specs when I \npost test results. I will create some test tables, and the same tables will be \nused in all tests. Suggestions for optimal Postgres and system configuration \nare welcome. I will try any suggested settings that I have time to test.\n-Neil-\n\n\n> \n> >> The Linux kernel, at least, does request merging (and splitting, and\n> >> merging, and more splitting) along the request path, and I'd\n> >> personally expect that most of the cost of 8k requests would be in the\n> >> increased number of system calls, buffer copies, etc required.\n> >> Measurements demonstrating or contradicting this would be good to see.\n> > \n> > Even the cost of hundreds of thousands of context switches is far from\n> > negligible. What kind of measurements do you expect me to do with the\n> > database which doesn't support tweaking of that aspect of its operation?\n> \n> Test programs, or references to testing done by others that demonstrates\n> these costs in isolation. 
Of course, they still wouldn't show what gain\n> Pg might obtain (nothing except hacking on Pg's sources really will) but\n> they'd help measure the costs of doing I/O that way.\n> \n> I suspect you're right that large I/O chunks would be desirable and a\n> potential performance improvement. What I'd like to know is *how*\n> *much*, or at least how much the current approach costs in pure\n> overheads like context switches and scheduler delays.\n> \n> > Does that provide enough of an evidence and, if not, why not?\n> \n> It shows that it helps Oracle a lot ;-)\n> \n> Without isolating how much of that is raw costs of the block I/O and how\n> much is costs internal to Oracle, it's still hard to have much idea how\n> much it'd benefit Pg to take a similar approach.\n> \n> I'm sure the folks on -hackers have been over this and know a whole lot\n> more about it than I do, though.\n> \n> > Oracle counts 400 million records in 2 minutes and Postgres 9.01 takes\n> > 12.8 seconds to count 1.2 million records? Do you see the disparity?\n> \n> Sure. What I don't know is how much of that is due to block sizes. There\n> are all sorts of areas where Oracle could be gaining.\n> \n> > It maybe so, but slow sequential scan is still the largest single\n> > performance problem of PostgreSQL. The frequency with which that topic\n> > appears on the mailing lists should serve as a good evidence for that.\n> \n> I'm certainly not arguing that it could use improvement; it's clearly\n> hurting some users. I just don't know if I/O chunking is the answer - I\n> suspect that if it were, then it would've become a priority for one or\n> more people working on Pg much sooner.\n> \n> It's quite likely that it's one of those things where it makes a huge\n> difference for Oracle because Oracle has managed to optimize out most of\n> the other bigger costs. If Pg still has other areas that make I/O more\n> expensive per-byte (say, visibility checks) and low fixed per-block\n> costs, then there'd be little point in chunking I/O. My understanding is\n> that that's pretty much how things stand at the moment, but I'd love\n> verification from someone who's done the testing.\n> \n> >If you still claim that it wouldn't make the difference,\n> >\n> > the onus to prove it is on you.\n> \n> I didn't mean to claim that it would make no difference. If I sounded\n> like it, sorry.\n> \n> I just want to know how _much_ , or more accurately how great the\n> overheads of the current approach in Pg are vs doing much larger reads.\n> \n> --\n> Craig Ringer\n",
"msg_date": "Sun, 10 Oct 2010 21:15:56 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "\n> I ran into a fine example of this when I was searching this mailing list,\n> \"Searching in 856,646 pages took 13.48202 seconds. Site search powered by\n> PostgreSQL 8.3.\" Obviously at some point count(*) came into play here\n\nWell, tsearch full text search is excellent, but it has to work inside the \nlimits of the postgres database itself, which means row visibility checks, \nand therefore, yes, extremely slow count(*) on large result sets when the \ntables are not cached in RAM.\n\nAlso, if you want to use custom sorting (like by date, thread, etc) \npossibly all the matching rows will have to be read and sorted.\n\nConsider, for example, the Xapian full text search engine. It is not MVCC \n(it is single writer, multiple reader, so only one process can update the \nindex at a time, but readers are not locked out during writes). Of course, \nyou would never want something like that for your main database ! However, \nin its particular application, which is multi-criteria full text search \n(and flexible sorting of results), it just nukes tsearch2 on datasets not \ncached in RAM, simply because everything in it including disk layout etc, \nhas been optimized for the application. Lucene is similar (but I have not \nbenchmarked it versus tsearch2, so I can't tell).\n\nSo, if your full text search is a problem, just use Xapian. You can update \nthe Xapian index from a postgres trigger (using an independent process, or \nsimply, a plpython trigger using the python Xapian bindings). You can \nquery it using an extra process acting as a server, or you can write a \nset-returning plpython function which performs Xapian searches, and you \ncan join the results to your tables.\n\n> Pg will never have such a fast count() as MyISAM does or the same \n> insanely fast read performance,\n\nBenchmark it you'll see, MyISAM is faster than postgres for \"small simple \nselects\", only if :\n- pg doesn't use prepared queries (planning time takes longer than a \nreally simple select)\n- myisam can use index-only access\n- noone is writing to the myisam table at the moment, obviously\n\nOn equal grounds (ie, SELECT * FROM table WHERE pk = value) there is no \ndifference. The TCP/IP overhead is larger than the query anyway, you have \nto use unix sockets on both to get valid timings. Since by default on \nlocalhost MySQL seems to use unix sockets and PG uses tcp/ip, PG seem 2x \nslower, which is in fact not true.\n",
"msg_date": "Mon, 11 Oct 2010 12:09:02 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 10/9/10 6:47 PM, Scott Marlowe wrote:\n> On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel<[email protected]> wrote:\n>> I know that there haven been many discussions on the slowness of count(*) even\n>> when an index is involved because the visibility of the rows has to be\n>> checked. In the past I have seen many suggestions about using triggers and\n>> tables to keep track of counts and while this works fine in a situation where\n>> you know what the report is going to be ahead of time, this is simply not an\n>> option when an unknown WHERE clause is to be used (dynamically generated).\n>> I ran into a fine example of this when I was searching this mailing list,\n>> \"Searching in 856,646 pages took 13.48202 seconds. Site search powered by\n>> PostgreSQL 8.3.\" Obviously at some point count(*) came into play here because\n>> the site made a list of pages (1 2 3 4 5 6> next). I very commonly make a\n>> list of pages from search results, and the biggest time killer here is the\n>> count(*) portion, even worse yet, I sometimes have to hit the database with\n>> two SELECT statements, one with OFFSET and LIMIT to get the page of results I\n>> need and another to get the amount of total rows so I can estimate how many\n>> pages of results are available. The point I am driving at here is that since\n>> building a list of pages of results is such a common thing to do, there need\n>> to be some specific high speed ways to do this in one query. Maybe an\n>> estimate(*) that works like count but gives an answer from the index without\n>> checking visibility? I am sure that this would be good enough to make a page\n>> list, it is really no big deal if it errors on the positive side, maybe the\n>> list of pages has an extra page off the end. I can live with that. What I\n>> can't live with is taking 13 seconds to get a page of results from 850,000\n>> rows in a table.\n>\n> 99% of the time in the situations you don't need an exact measure, and\n> assuming analyze has run recently, select rel_tuples from pg_class for\n> a given table is more than close enough. I'm sure wrapping that in a\n> simple estimated_rows() function would be easy enough to do.\n\nFirst of all, it's not true. There are plenty of applications that need an exact answer. Second, even if it is only 1%, that means it's 1% of the queries, not 1% of people. Sooner or later a large fraction of developers will run into this. It's probably been the most-asked question I've seen on this forum in the four years I've been here. It's a real problem, and it needs a real solution.\n\nI know it's a hard problem to solve, but can we stop hinting that those of us who have this problem are somehow being dense?\n\nThanks,\nCraig\n",
"msg_date": "Mon, 11 Oct 2010 10:46:17 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Monday 11 October 2010 10:46:17 Craig James wrote:\n> On 10/9/10 6:47 PM, Scott Marlowe wrote:\n> > On Sat, Oct 9, 2010 at 5:26 PM, Neil Whelchel<[email protected]> \nwrote:\n> >> I know that there haven been many discussions on the slowness of\n> >> count(*) even when an index is involved because the visibility of the\n> >> rows has to be checked. In the past I have seen many suggestions about\n> >> using triggers and tables to keep track of counts and while this works\n> >> fine in a situation where you know what the report is going to be ahead\n> >> of time, this is simply not an option when an unknown WHERE clause is\n> >> to be used (dynamically generated). I ran into a fine example of this\n> >> when I was searching this mailing list, \"Searching in 856,646 pages\n> >> took 13.48202 seconds. Site search powered by PostgreSQL 8.3.\"\n> >> Obviously at some point count(*) came into play here because the site\n> >> made a list of pages (1 2 3 4 5 6> next). I very commonly make a list\n> >> of pages from search results, and the biggest time killer here is the\n> >> count(*) portion, even worse yet, I sometimes have to hit the database\n> >> with two SELECT statements, one with OFFSET and LIMIT to get the page\n> >> of results I need and another to get the amount of total rows so I can\n> >> estimate how many pages of results are available. The point I am\n> >> driving at here is that since building a list of pages of results is\n> >> such a common thing to do, there need to be some specific high speed\n> >> ways to do this in one query. Maybe an estimate(*) that works like\n> >> count but gives an answer from the index without checking visibility? I\n> >> am sure that this would be good enough to make a page list, it is\n> >> really no big deal if it errors on the positive side, maybe the list of\n> >> pages has an extra page off the end. I can live with that. What I can't\n> >> live with is taking 13 seconds to get a page of results from 850,000\n> >> rows in a table.\n> > \n> > 99% of the time in the situations you don't need an exact measure, and\n> > assuming analyze has run recently, select rel_tuples from pg_class for\n> > a given table is more than close enough. I'm sure wrapping that in a\n> > simple estimated_rows() function would be easy enough to do.\n> \n> First of all, it's not true. There are plenty of applications that need an\n> exact answer. Second, even if it is only 1%, that means it's 1% of the\n> queries, not 1% of people. Sooner or later a large fraction of developers\n> will run into this. It's probably been the most-asked question I've seen\n> on this forum in the four years I've been here. It's a real problem, and\n> it needs a real solution.\n> \n> I know it's a hard problem to solve, but can we stop hinting that those of\n> us who have this problem are somehow being dense?\n> \n> Thanks,\n> Craig\n\nThat is why I suggested an estimate(*) that works like (a faster) count(*) \nexcept that it may be off a bit. I think that is what he was talking about \nwhen he wrote this.\n\nI don't think that anyone here is trying to cast any blame, we are just \npointing out that there is a real problem here that involves what seems to be \na very common task, and it is placing a huge disadvantage on the use of \nPostgres to other systems that can do it in less time. There doesn't seem to \nbe any disagreement that count(*) is slower than it could be due to MVCC and \nother reasons, which is fine. 
However at the chopping end of the line, if a \nslow count(*) makes a Postgres driven website take say a minute to render a \nweb page, it is completely useless if it can be replaced with a database \nengine that can do the same thing in (much) less time. On my servers, this is \nthe major sticking point. There are so many people waiting on count(*), that \nthe server runs out of memory and it is forced to stop accepting more \nconnections until some of the threads finish. This makes many unhappy \ncustomers.\nWhen it comes to serving up web pages that contain a slice of a table with \nlinks to other slices, knowing about how many slices is very important. But I \nthink that we can all agree that the exact amount is not a make or break (even \nbetter if the estimate is a bit high), so an estimate(*) function that takes \nsome shortcuts here to get a much faster response (maybe off a bit) would \nsolve a huge problem.\n\nWhat it all boils down to is webserver response time, and there are really two \nthings that are slowing things down more than what should be needed.\nSo there are really two possible solutions either of which would be a big \nhelp:\n1. A faster count(*), or something like my proposed estimate(*).\n2. A way to get the total rows matched when using LIMIT and OFFSET before \nLIMIT and OFFSET are applied.\n\nIf you are making a web page that contains a few results of many possible \nresults, you need two things for sure which means that there are really two \nproblems with Postgres for doing this task.\n1. You need to know (about) how many total rows. This requires a hit to the \ndatabase which requires a scan of the table to get, there is no way to do this \nfaster than count(*) as far as I know.\n2. You need a slice of the data which requires another scan to the table to \nget, and using the same WHERE clause as above. This seems like a total waste, \nbecause we just did that with the exception of actually fetching the data.\n\nWhy do it twice when if there was a way to get a slice using OFFSET and LIMIT \nand get the amount of rows that matched before the OFFSET and LIMIT was \napplied you could do the scan once? I think that this would improve things and \ngive Postgres an edge over other systems.\n\nI hope this makes sense to at least one person in the right place. ;)\n\n-Neil-\n",
"msg_date": "Mon, 11 Oct 2010 12:54:57 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Neil Whelchel wrote:\n>\n>\n> That is why I suggested an estimate(*) that works like (a faster) count(*) \n> except that it may be off a bit. I think that is what he was talking about \n> when he wrote this.\n>\n> \nThe main problem with \"select count(*)\" is that it gets seriously \nmis-used. Using \"select count(*)\" to establish existence is bad for \nperformance and for code readability. \n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Mon, 11 Oct 2010 16:58:37 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Mon, Oct 11, 2010 at 12:54 PM, Neil Whelchel <[email protected]>wrote:\n\n>\n> 2. You need a slice of the data which requires another scan to the table to\n> get, and using the same WHERE clause as above. This seems like a total\n> waste,\n> because we just did that with the exception of actually fetching the data.\n>\n> Why do it twice when if there was a way to get a slice using OFFSET and\n> LIMIT\n> and get the amount of rows that matched before the OFFSET and LIMIT was\n> applied you could do the scan once? I think that this would improve things\n> and\n> give Postgres an edge over other systems.\n>\n>\nI'd go even farther with number 2 and suggest that a form of offset/limit\nwhich can return the total count OR have a total count be passed in to be\nreturned the same way as if total count were being computed would make the\nuse of that api even easier, since you could keep re-using the number\nreturned the first time without changing the api that gets used depending\nupon context. Of course, you could contrive to set that up via a stored\nproc relatively easily by simply doing the count(*) once, then appending it\nto each row of the offset/limit query by including it in the select\nstatement. Let it optionally receive the total to be used as an input\nparameter, which if not null will result in the count(*) block being skipped\nin the proc. You'd incur the full cost of the table scan plus offset/limit\nquery once, but then not for each and every page. Since the modified api\nyou suggest for offset/limit would surely have to perform the table scan\nonce, that solution really isn't giving much more value than implementing\nas a stored proc other than the flexibility of executing an arbitrary query.\n Modified offset/limit combined with the count_estimate functionality would\nbe very useful in this circumstance, though - especially if the estimate\nwould just do a full count if the estimate is under a certain threshold. A\n25% discrepancy when counting millions of rows is a lot less of an issue\nthan a 25% discrepancy when counting 10 rows.\n\nOne issue with an estimation is that you must be certain that the estimate\n>= actual count or else the app must always attempt to load the page BEYOND\nthe last page of the estimate in order to determine if the estimate must be\nrevised upward. Otherwise, you risk leaving rows out entirely. Probably ok\nwhen returning search results. Not so much when displaying a list of\nassets.\n\nOn Mon, Oct 11, 2010 at 12:54 PM, Neil Whelchel <[email protected]> wrote:\n\n2. You need a slice of the data which requires another scan to the table to\nget, and using the same WHERE clause as above. This seems like a total waste,\nbecause we just did that with the exception of actually fetching the data.\n\nWhy do it twice when if there was a way to get a slice using OFFSET and LIMIT\nand get the amount of rows that matched before the OFFSET and LIMIT was\napplied you could do the scan once? I think that this would improve things and\ngive Postgres an edge over other systems.\nI'd go even farther with number 2 and suggest that a form of offset/limit which can return the total count OR have a total count be passed in to be returned the same way as if total count were being computed would make the use of that api even easier, since you could keep re-using the number returned the first time without changing the api that gets used depending upon context. 
Of course, you could contrive to set that up via a stored proc relatively easily by simply doing the count(*) once, then appending it to each row of the offset/limit query by including it in the select statement. Let it optionally receive the total to be used as an input parameter, which if not null will result in the count(*) block being skipped in the proc. You'd incur the full cost of the table scan plus offset/limit query once, but then not for each and every page. Since the modified api you suggest for offset/limit would surely have to perform the table scan once, that solution really isn't giving much more value than implementing as a stored proc other than the flexibility of executing an arbitrary query. Modified offset/limit combined with the count_estimate functionality would be very useful in this circumstance, though - especially if the estimate would just do a full count if the estimate is under a certain threshold. A 25% discrepancy when counting millions of rows is a lot less of an issue than a 25% discrepancy when counting 10 rows. \nOne issue with an estimation is that you must be certain that the estimate >= actual count or else the app must always attempt to load the page BEYOND the last page of the estimate in order to determine if the estimate must be revised upward. Otherwise, you risk leaving rows out entirely. Probably ok when returning search results. Not so much when displaying a list of assets.",
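A minimal sketch of the stored-procedure variant described above, against a hypothetical "widgets" table (RETURNS TABLE and parameter defaults need PostgreSQL 8.4 or later); the count(*) is paid only when no total is passed in:

CREATE OR REPLACE FUNCTION widget_page(p_offset int, p_limit int,
                                       p_total bigint DEFAULT NULL)
RETURNS TABLE(id int, name text, total bigint) AS $$
DECLARE
    v_total bigint := p_total;
BEGIN
    -- First page request: compute the total once.
    IF v_total IS NULL THEN
        SELECT count(*) INTO v_total FROM widgets WHERE active;
    END IF;
    -- Append the (possibly re-used) total to every row of the requested page.
    RETURN QUERY
        SELECT w.id, w.name, v_total
        FROM widgets w
        WHERE w.active
        ORDER BY w.id
        LIMIT p_limit OFFSET p_offset;
END;
$$ LANGUAGE plpgsql;

A caller would pass the total it received with the first page back in for subsequent pages, skipping the count entirely.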
"msg_date": "Mon, 11 Oct 2010 15:03:38 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/11/2010 3:54 PM, Neil Whelchel wrote:\n> 1. A faster count(*), or something like my proposed estimate(*).\n> 2. A way to get the total rows matched when using LIMIT and OFFSET before\n> LIMIT and OFFSET are applied.\n\nThe biggest single problem with \"select count(*)\" is that it is \nseriously overused. People use that idiom to establish existence, which \nusually leads to a performance disaster in the application using it, \nunless the table has no more than few hundred records. SQL language, of \nwhich PostgreSQL offers an excellent implementation, offers [NOT] \nEXISTS clause since its inception in the Jurassic era. The problem is \nwith the sequential scan, not with counting. I'd even go as far as to \nsuggest that 99% instances of the \"select count(*)\" idiom are probably \nbad use of the SQL language.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Mon, 11 Oct 2010 19:50:36 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "\nOn Oct 10, 2010, at 10:14 AM, Mladen Gogala wrote:\n\n> \n> SQL> show parameter db_file_multi\n> \n> NAME TYPE VALUE\n> ------------------------------------ ----------- \n> ------------------------------\n> db_file_multiblock_read_count integer 16\n> SQL> alter session set db_file_multiblock_read_count=1;\n> \n> Session altered.\n> SQL> select count(*) from ni_occurrence;\n> \n> COUNT(*)\n> ----------\n> 402062638\n> \n> Elapsed: 00:08:20.88\n> SQL> alter session set db_file_multiblock_read_count=128;\n> \n> Session altered.\n> \n> Elapsed: 00:00:00.50\n> SQL> select count(*) from ni_occurrence;\n> \n> COUNT(*)\n> ----------\n> 402062638\n> \n> Elapsed: 00:02:17.58\n> \n> \n> In other words, when I batched the sequential scan to do 128 blocks I/O, \n> it was 4 times faster than when I did the single block I/O.\n> Does that provide enough of an evidence and, if not, why not?\n> \n\nDid you tune the linux FS read-ahead first? You can get large gains by doing that if you are on ext3.\nblockdev --setra 2048 <device>\n\nwould give you a 1MB read-ahead. Also, consider XFS and its built-in defragmentation. I have found that a longer lived postgres DB will get extreme \nfile fragmentation over time and sequential scans end up mostly random. On-line file defrag helps tremendously.\n\n> It maybe so, but slow sequential scan is still the largest single \n> performance problem of PostgreSQL. The frequency with which that topic \n> appears on the mailing lists should serve as a good evidence for that. I \n> did my best to prove my case. \n\nI'm not sure its all the I/O however. It seems that Postgres uses a lot more CPU than other DB's to crack open a tuple and inspect it. Testing on unindexed tables with count(*) I can get between 200MB and 800MB per second off disk max with full cpu utilization (depending on the average tuple size and contents). This is on a disk array that can do 1200MB/sec. It always feels dissapointing to not be able to max out the disk throughput on the simplest possible query. \n\n> Again, requiring \"hard numbers\" when \n> using the database which doesn't allow tweaking of the I/O size is self \n> defeating proposition. The other databases, like DB2 and Oracle both \n> allow tweaking of that aspect of its operation, Oracle even on the per \n> session basis. If you still claim that it wouldn't make the difference, \n> the onus to prove it is on you.\n> \n> -- \n> Mladen Gogala \n> Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 11 Oct 2010 19:02:51 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Joshua Tolley wrote:\n> It was asserted that reading bigger chunks would help performance; a response\n> suggested that, at least in Linux, setting readahead on a device would\n> essentially do the same thing. Or that's what I got from the thread, anyway.\n> I'm interested to know how similar performance might be between the large\n> block size case and the large readahead case.\n\nLarge read-ahead addresses the complaint here (bulk reads are slow) just \nfine, which is one reason why this whole topic isn't nearly as \ninteresting as claimed. Larger chunk sizes in theory will do the same \nthing, but then you're guaranteed to be reading larger blocks than \nnecessary in all cases. The nice thing about a good adaptive read-ahead \nis that it can target small blocks normally, and only kick into heavy \nread-ahead mode when the I/O pattern justifies it.\n\nThis is a problem for the operating system to solve, and such solutions \nout there are already good enough that PostgreSQL has little reason to \ntry and innovate in this area. I routinely see seq scan throughput \ndouble on Linux just by tweaking read-ahead from the tiny defaults to a \nsane value.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Mon, 11 Oct 2010 22:19:04 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "\nOn Oct 11, 2010, at 7:02 PM, Scott Carey wrote:\n\n> \n> On Oct 10, 2010, at 10:14 AM, Mladen Gogala wrote:\n> \n>> \n>> SQL> show parameter db_file_multi\n>> \n>> NAME TYPE VALUE\n>> ------------------------------------ ----------- \n>> ------------------------------\n>> db_file_multiblock_read_count integer 16\n>> SQL> alter session set db_file_multiblock_read_count=1;\n>> \n>> Session altered.\n>> SQL> select count(*) from ni_occurrence;\n>> \n>> COUNT(*)\n>> ----------\n>> 402062638\n>> \n>> Elapsed: 00:08:20.88\n>> SQL> alter session set db_file_multiblock_read_count=128;\n>> \n>> Session altered.\n>> \n>> Elapsed: 00:00:00.50\n>> SQL> select count(*) from ni_occurrence;\n>> \n>> COUNT(*)\n>> ----------\n>> 402062638\n>> \n>> Elapsed: 00:02:17.58\n>> \n>> \n>> In other words, when I batched the sequential scan to do 128 blocks I/O, \n>> it was 4 times faster than when I did the single block I/O.\n>> Does that provide enough of an evidence and, if not, why not?\n>> \n> \n> Did you tune the linux FS read-ahead first? You can get large gains by doing that if you are on ext3.\n> blockdev --setra 2048 <device>\n> \nScratch that, if you are using DirectIO, block read-ahead does nothing. The default is 128K for buffered I/O read-ahead.\n\n> would give you a 1MB read-ahead. Also, consider XFS and its built-in defragmentation. I have found that a longer lived postgres DB will get extreme \n> file fragmentation over time and sequential scans end up mostly random. On-line file defrag helps tremendously.\n> \n>> It maybe so, but slow sequential scan is still the largest single \n>> performance problem of PostgreSQL. The frequency with which that topic \n>> appears on the mailing lists should serve as a good evidence for that. I \n>> did my best to prove my case. \n> \n> I'm not sure its all the I/O however. It seems that Postgres uses a lot more CPU than other DB's to crack open a tuple and inspect it. Testing on unindexed tables with count(*) I can get between 200MB and 800MB per second off disk max with full cpu utilization (depending on the average tuple size and contents). This is on a disk array that can do 1200MB/sec. It always feels dissapointing to not be able to max out the disk throughput on the simplest possible query. \n> \n>> Again, requiring \"hard numbers\" when \n>> using the database which doesn't allow tweaking of the I/O size is self \n>> defeating proposition. The other databases, like DB2 and Oracle both \n>> allow tweaking of that aspect of its operation, Oracle even on the per \n>> session basis. If you still claim that it wouldn't make the difference, \n>> the onus to prove it is on you.\n>> \n>> -- \n>> Mladen Gogala \n>> Sr. Oracle DBA\n>> 1500 Broadway\n>> New York, NY 10036\n>> (212) 329-5251\n>> www.vmsinfo.com \n>> \n>> \n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 11 Oct 2010 19:21:04 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/11/2010 10:02 PM, Scott Carey wrote:\n> Did you tune the linux FS read-ahead first? You can get large gains by doing that if you are on ext3.\n> blockdev --setra 2048<device>\n\nActually, I have blockdev --setra 32768\n\n> would give you a 1MB read-ahead. Also, consider XFS and its built-in defragmentation. I have found that a longer lived postgres DB will get extreme\n> file fragmentation over time and sequential scans end up mostly random. On-line file defrag helps tremendously.\n\nI agree, but I am afraid that after the demise of SGI, XFS isn't being \ndeveloped. The company adopted the policy of using only the plain \nvanilla Ext3, which is unfortunate, but I can't do much about it. There \nis a lesson to be learned from the story of ReiserFS. I am aware of the \nfact that Ext3 is rather basic, block oriented file system which doesn't \nperform well when compared to HPFS, VxFS or JFS2 and has no notion of \nextents, but I believe that I am stuck with it, until the advent of \nExt4. BTW, there is no defragmenter for Ext4, not even on Ubuntu, which \nis rather bleeding edge distribution.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Mon, 11 Oct 2010 22:23:46 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Monday 11 October 2010 16:50:36 you wrote:\n> On 10/11/2010 3:54 PM, Neil Whelchel wrote:\n> > 1. A faster count(*), or something like my proposed estimate(*).\n> > 2. A way to get the total rows matched when using LIMIT and OFFSET before\n> > LIMIT and OFFSET are applied.\n> \n> The biggest single problem with \"select count(*)\" is that it is\n> seriously overused. People use that idiom to establish existence, which\n> usually leads to a performance disaster in the application using it,\n> unless the table has no more than few hundred records. SQL language, of\n> which PostgreSQL offers an excellent implementation, offers [NOT]\n> EXISTS clause since its inception in the Jurassic era. The problem is\n> with the sequential scan, not with counting. I'd even go as far as to\n> suggest that 99% instances of the \"select count(*)\" idiom are probably\n> bad use of the SQL language.\n\nI agree, I have seen many very bad examples of using count(*). I will go so \nfar as to question the use of count(*) in my examples here. It there a better \nway to come up with a page list than using count(*)? What is the best method \nto make a page of results and a list of links to other pages of results? Am I \nbarking up the wrong tree here?\n-Neil-\n",
"msg_date": "Mon, 11 Oct 2010 19:36:45 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Monday 11 October 2010 19:23:46 Mladen Gogala wrote:\n> On 10/11/2010 10:02 PM, Scott Carey wrote:\n> > Did you tune the linux FS read-ahead first? You can get large gains by\n> > doing that if you are on ext3. blockdev --setra 2048<device>\n> \n> Actually, I have blockdev --setra 32768\n> \n> > would give you a 1MB read-ahead. Also, consider XFS and its built-in\n> > defragmentation. I have found that a longer lived postgres DB will get\n> > extreme file fragmentation over time and sequential scans end up mostly\n> > random. On-line file defrag helps tremendously.\n> \n> I agree, but I am afraid that after the demise of SGI, XFS isn't being\n> developed. The company adopted the policy of using only the plain\n> vanilla Ext3, which is unfortunate, but I can't do much about it. There\n> is a lesson to be learned from the story of ReiserFS. I am aware of the\n> fact that Ext3 is rather basic, block oriented file system which doesn't\n> perform well when compared to HPFS, VxFS or JFS2 and has no notion of\n> extents, but I believe that I am stuck with it, until the advent of\n> Ext4. BTW, there is no defragmenter for Ext4, not even on Ubuntu, which\n> is rather bleeding edge distribution.\n\nWhen it comes to a database that has many modifications to its tables, it \nseems that XFS pulls way ahead of other filesystems (likely) because of its \non-line defragmentation among other reasons. I am not sure that XFS is not \n(properly) maintained. The people at xfs.org seem to be making steady \nprogress, and high quality updates. I have been using it for some time now (9+ \nyears), and as it is, it does everything I need it to do, and it is very \nreliable. I really can't see anything changing in the next few years that \nwould effect its usability as a filesystem for Postgres, so until something \nproves to be better, I can't understand why you wouldn't want to use it, \nmaintained or not.\n-Neil-\n",
"msg_date": "Mon, 11 Oct 2010 20:42:42 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Mon, Oct 11, 2010 at 7:19 PM, Greg Smith <[email protected]> wrote:\n>\n>\n> This is a problem for the operating system to solve, and such solutions out\n> there are already good enough that PostgreSQL has little reason to try and\n> innovate in this area. I routinely see seq scan throughput double on Linux\n> just by tweaking read-ahead from the tiny defaults to a sane value.\n>\n\nI spent some time going through the various tuning docs on the wiki whie\nbringing some new hardware up and I can't remember seeing any discussion of\ntweaking read-ahead at all in the normal performance-tuning references. Do\nyou have any documentation of the kinds of tweaking you have done and its\neffects on different types of workloads?\n\nOn Mon, Oct 11, 2010 at 7:19 PM, Greg Smith <[email protected]> wrote:\n\nThis is a problem for the operating system to solve, and such solutions out there are already good enough that PostgreSQL has little reason to try and innovate in this area. I routinely see seq scan throughput double on Linux just by tweaking read-ahead from the tiny defaults to a sane value.\nI spent some time going through the various tuning docs on the wiki whie bringing some new hardware up and I can't remember seeing any discussion of tweaking read-ahead at all in the normal performance-tuning references. Do you have any documentation of the kinds of tweaking you have done and its effects on different types of workloads?",
"msg_date": "Mon, 11 Oct 2010 20:58:45 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "I can't speak to documentation, but it is something that helps as your I/O subsystem gets more powerful, and how much it helps depends more on your hardware, which may have adaptive read ahead on its own, and your file system which may be more or less efficient at sequential I/O. For example ext3 out of the box gets a much bigger gain from tuning read-ahead than XFS on a DELL PERC6 RAID card (but still ends up slower).\n\nLinux Read-ahead has no effect on random access performance. A workload consisting of mixed sequential scans and random reads can be tuned to favor one over the other based on a combination of the I/O scheduler used and the ammount of read-ahead. Larger read-ahead helps sequential scans, and the Deadline scheduler tends to favor throughput (sequential scans) over latency (random access) compared to the cfq scheduler.\n\n\n\nOn Oct 11, 2010, at 8:58 PM, Samuel Gendler wrote:\n\n\n\nOn Mon, Oct 11, 2010 at 7:19 PM, Greg Smith <[email protected]<mailto:[email protected]>> wrote:\n\nThis is a problem for the operating system to solve, and such solutions out there are already good enough that PostgreSQL has little reason to try and innovate in this area. I routinely see seq scan throughput double on Linux just by tweaking read-ahead from the tiny defaults to a sane value.\n\nI spent some time going through the various tuning docs on the wiki whie bringing some new hardware up and I can't remember seeing any discussion of tweaking read-ahead at all in the normal performance-tuning references. Do you have any documentation of the kinds of tweaking you have done and its effects on different types of workloads?\n\n\n\nI can't speak to documentation, but it is something that helps as your I/O subsystem gets more powerful, and how much it helps depends more on your hardware, which may have adaptive read ahead on its own, and your file system which may be more or less efficient at sequential I/O. For example ext3 out of the box gets a much bigger gain from tuning read-ahead than XFS on a DELL PERC6 RAID card (but still ends up slower). Linux Read-ahead has no effect on random access performance. A workload consisting of mixed sequential scans and random reads can be tuned to favor one over the other based on a combination of the I/O scheduler used and the ammount of read-ahead. Larger read-ahead helps sequential scans, and the Deadline scheduler tends to favor throughput (sequential scans) over latency (random access) compared to the cfq scheduler.On Oct 11, 2010, at 8:58 PM, Samuel Gendler wrote:On Mon, Oct 11, 2010 at 7:19 PM, Greg Smith <[email protected]> wrote:\n\nThis is a problem for the operating system to solve, and such solutions out there are already good enough that PostgreSQL has little reason to try and innovate in this area. I routinely see seq scan throughput double on Linux just by tweaking read-ahead from the tiny defaults to a sane value.\nI spent some time going through the various tuning docs on the wiki whie bringing some new hardware up and I can't remember seeing any discussion of tweaking read-ahead at all in the normal performance-tuning references. Do you have any documentation of the kinds of tweaking you have done and its effects on different types of workloads?",
"msg_date": "Mon, 11 Oct 2010 21:06:07 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Mon, Oct 11, 2010 at 9:06 PM, Scott Carey <[email protected]>wrote:\n\n> I can't speak to documentation, but it is something that helps as your I/O\n> subsystem gets more powerful, and how much it helps depends more on your\n> hardware, which may have adaptive read ahead on its own, and your file\n> system which may be more or less efficient at sequential I/O. For example\n> ext3 out of the box gets a much bigger gain from tuning read-ahead than XFS\n> on a DELL PERC6 RAID card (but still ends up slower).\n>\n>\nGeez. I wish someone would have written something quite so bold as 'xfs is\nalways faster than ext3' in the standard tuning docs. I couldn't find\nanything that made a strong filesystem recommendation. How does xfs compare\nto ext4? I wound up on ext4 on a dell perc6 raid card when an unexpected\nhardware failure on a production system caused my test system to get thrown\ninto production before I could do any serious testing of xfs. If there is a\nstrong consensus that xfs is simply better, I could afford the downtime to\nswitch.\n\nAs it happens, this is a system where all of the heavy workload is in the\nform of sequential scan type load. The OLTP workload is very minimal (tens\nof queries per minute on a small number of small tables), but there are a\nlot of reporting queries that wind up doing sequential scans of large\npartitions (millions to tens of millions of rows). We've sized the new\nhardware so that the most commonly used partitions fit into memory, but if\nwe could speed the queries that touch less frequently used partitions, that\nwould be good. I'm the closest thing our team has to a DBA, which really\nonly means that I'm the one person on the dev team or the ops team to have\nread all of the postgres docs and wiki and the mailing lists. I claim no\nactual DBA experience or expertise and have limited cycles to devote to\ntuning and testing, so if there is an established wisdom for filesystem\nchoice and read ahead tuning, I'd be very interested in hearing it.\n\nOn Mon, Oct 11, 2010 at 9:06 PM, Scott Carey <[email protected]> wrote:\nI can't speak to documentation, but it is something that helps as your I/O subsystem gets more powerful, and how much it helps depends more on your hardware, which may have adaptive read ahead on its own, and your file system which may be more or less efficient at sequential I/O. For example ext3 out of the box gets a much bigger gain from tuning read-ahead than XFS on a DELL PERC6 RAID card (but still ends up slower). \nGeez. I wish someone would have written something quite so bold as 'xfs is always faster than ext3' in the standard tuning docs. I couldn't find anything that made a strong filesystem recommendation. How does xfs compare to ext4? I wound up on ext4 on a dell perc6 raid card when an unexpected hardware failure on a production system caused my test system to get thrown into production before I could do any serious testing of xfs. If there is a strong consensus that xfs is simply better, I could afford the downtime to switch.\nAs it happens, this is a system where all of the heavy workload is in the form of sequential scan type load. The OLTP workload is very minimal (tens of queries per minute on a small number of small tables), but there are a lot of reporting queries that wind up doing sequential scans of large partitions (millions to tens of millions of rows). 
We've sized the new hardware so that the most commonly used partitions fit into memory, but if we could speed the queries that touch less frequently used partitions, that would be good. I'm the closest thing our team has to a DBA, which really only means that I'm the one person on the dev team or the ops team to have read all of the postgres docs and wiki and the mailing lists. I claim no actual DBA experience or expertise and have limited cycles to devote to tuning and testing, so if there is an established wisdom for filesystem choice and read ahead tuning, I'd be very interested in hearing it.",
"msg_date": "Mon, 11 Oct 2010 21:21:59 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Mon, 11 Oct 2010, Samuel Gendler wrote:\n\n> On Mon, Oct 11, 2010 at 9:06 PM, Scott Carey <[email protected]>wrote:\n>\n>> I can't speak to documentation, but it is something that helps as your I/O\n>> subsystem gets more powerful, and how much it helps depends more on your\n>> hardware, which may have adaptive read ahead on its own, and your file\n>> system which may be more or less efficient at sequential I/O. For example\n>> ext3 out of the box gets a much bigger gain from tuning read-ahead than XFS\n>> on a DELL PERC6 RAID card (but still ends up slower).\n>>\n>>\n> Geez. I wish someone would have written something quite so bold as 'xfs is\n> always faster than ext3' in the standard tuning docs. I couldn't find\n> anything that made a strong filesystem recommendation. How does xfs compare\n> to ext4? I wound up on ext4 on a dell perc6 raid card when an unexpected\n> hardware failure on a production system caused my test system to get thrown\n> into production before I could do any serious testing of xfs. If there is a\n> strong consensus that xfs is simply better, I could afford the downtime to\n> switch.\n\nunfortunantly you are not going to get a clear opinion here.\n\next3 has a long track record, and since it is the default, it gets a lot \nof testing. it does have known issues\n\nxfs had problems on linux immediatly after it was ported. It continues to \nbe improved and many people have been using it for years and trust it. XFS \ndoes have a weakness in creating/deleting large numbers of small files.\n\next4 is the new kid on the block. it claims good things, but it's so new \nthat many people don't trust it yet\n\nbtrfs is the 'future of filesystems' that is supposed to be better than \nanything else, but it's definantly not stable yet, and time will tell if \nit really lives up to it's promises.\n\nan this is just on linux\n\non BSD or solaris (or with out-of-kernel patches) you also have ZFS, which \nsome people swear by, and other people swear at.\n\nDavid Lang\n\n\n> As it happens, this is a system where all of the heavy workload is in the\n> form of sequential scan type load. The OLTP workload is very minimal (tens\n> of queries per minute on a small number of small tables), but there are a\n> lot of reporting queries that wind up doing sequential scans of large\n> partitions (millions to tens of millions of rows). We've sized the new\n> hardware so that the most commonly used partitions fit into memory, but if\n> we could speed the queries that touch less frequently used partitions, that\n> would be good. I'm the closest thing our team has to a DBA, which really\n> only means that I'm the one person on the dev team or the ops team to have\n> read all of the postgres docs and wiki and the mailing lists. I claim no\n> actual DBA experience or expertise and have limited cycles to devote to\n> tuning and testing, so if there is an established wisdom for filesystem\n> choice and read ahead tuning, I'd be very interested in hearing it.\n>\n",
"msg_date": "Mon, 11 Oct 2010 21:35:25 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> I agree, but I am afraid that after the demise of SGI, XFS isn't being \n> developed.\n\nIt's back to being well maintained again; see \nhttp://blog.2ndquadrant.com/en/2010/04/the-return-of-xfs-on-linux.html \nfor some history here and why it's become relevant to RedHat in \nparticular recently.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Tue, 12 Oct 2010 02:32:09 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Samuel Gendler wrote:\n> I spent some time going through the various tuning docs on the wiki \n> whie bringing some new hardware up and I can't remember seeing any \n> discussion of tweaking read-ahead at all in the normal \n> performance-tuning references. Do you have any documentation of the \n> kinds of tweaking you have done and its effects on different types of \n> workloads?\n\nMuch of my recent research has gone into the book you'll see plugged \nbelow rather than the wiki. The basics of read-ahead tuning is that you \ncan see it increase bonnie++ sequential read results when you increase \nit, to a point. Get to that point and stop and you should be in good shape.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Tue, 12 Oct 2010 02:39:32 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "11.10.10 20:46, Craig James написав(ла):\n>\n> First of all, it's not true. There are plenty of applications that \n> need an exact answer. Second, even if it is only 1%, that means it's \n> 1% of the queries, not 1% of people. Sooner or later a large fraction \n> of developers will run into this. It's probably been the most-asked \n> question I've seen on this forum in the four years I've been here. \n> It's a real problem, and it needs a real solution.\n>\n> I know it's a hard problem to solve, but can we stop hinting that \n> those of us who have this problem are somehow being dense?\n>\nBTW: There is a lot of talk about MVCC, but is next solution possible:\n1) Create a page information map that for each page in the table will \ntell you how may rows are within and if any write (either successful or \nnot) were done to this page. This even can be two maps to make second \none really small (a bit per page) - so that it could be most time in-memory.\n2) When you need to to count(*) or index check - first check if there \nwere no writes to the page. If not - you can use count information from \npage info/index data without going to the page itself\n3) Let vacuum clear the bit after frozing all the tuples in the page (am \nI using terminology correctly?).\n\nIn this case all read-only (archive) data will be this bit off and \nindex/count(*) will be really fast.\nAm I missing something?\n\nBest regards, Vitalii Tymchyshyn.\n",
"msg_date": "Tue, 12 Oct 2010 10:56:13 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 10/12/2010 03:56 PM, Vitalii Tymchyshyn wrote:\n\n> BTW: There is a lot of talk about MVCC, but is next solution possible:\n> 1) Create a page information map that for each page in the table will\n> tell you how may rows are within and if any write (either successful or\n> not) were done to this page. This even can be two maps to make second\n> one really small (a bit per page) - so that it could be most time\n> in-memory.\n> 2) When you need to to count(*) or index check - first check if there\n> were no writes to the page. If not - you can use count information from\n> page info/index data without going to the page itself\n> 3) Let vacuum clear the bit after frozing all the tuples in the page (am\n> I using terminology correctly?).\n\nPart of this already exists. It's called the visibility map, and is \npresent in 8.4 and above. It's not currently used for queries, but can \npotentially be used to aid some kinds of query.\n\nhttp://www.postgresql.org/docs/8.4/static/storage-vm.html\n\n> In this case all read-only (archive) data will be this bit off and\n> index/count(*) will be really fast.\n\nA count with any joins or filter criteria would still have to scan all \npages with visible tuples in them. So the visibility map helps speed up \nscanning of bloated tables, but doesn't provide a magical \"fast count\" \nexcept in the utterly trivial \"select count(*) from tablename;\" case, \nand can probably only be used for accurate results when there are no \nread/write transactions currently open. Even if you kept a count of \ntuples in each page along with the mvcc transaction ID information \nrequired to determine for which transactions that count is valid, it'd \nonly be useful if you didn't have to do any condition checks, and it'd \nbe yet another thing to update with every insert/delete/update.\n\nPerhaps for some users that'd be worth having, but it seems to me like \nit'd have pretty narrow utility. I'm not sure that's the answer.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 12 Oct 2010 16:14:58 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, 12 Oct 2010, Craig Ringer wrote:\n\n>\n>> BTW: There is a lot of talk about MVCC, but is next solution possible:\n>> 1) Create a page information map that for each page in the table will\n>> tell you how may rows are within and if any write (either successful or\n>> not) were done to this page. This even can be two maps to make second\n>> one really small (a bit per page) - so that it could be most time\n>> in-memory.\n>> 2) When you need to to count(*) or index check - first check if there\n>> were no writes to the page. If not - you can use count information from\n>> page info/index data without going to the page itself\n>> 3) Let vacuum clear the bit after frozing all the tuples in the page (am\n>> I using terminology correctly?).\n>\n> Part of this already exists. It's called the visibility map, and is present \n> in 8.4 and above. It's not currently used for queries, but can potentially be \n> used to aid some kinds of query.\n>\n> http://www.postgresql.org/docs/8.4/static/storage-vm.html\n>\n>> In this case all read-only (archive) data will be this bit off and\n>> index/count(*) will be really fast.\n>\n> A count with any joins or filter criteria would still have to scan all pages \n> with visible tuples in them. So the visibility map helps speed up scanning of \n> bloated tables, but doesn't provide a magical \"fast count\" except in the \n> utterly trivial \"select count(*) from tablename;\" case, and can probably only \n> be used for accurate results when there are no read/write transactions \n> currently open. Even if you kept a count of tuples in each page along with \n> the mvcc transaction ID information required to determine for which \n> transactions that count is valid, it'd only be useful if you didn't have to \n> do any condition checks, and it'd be yet another thing to update with every \n> insert/delete/update.\n>\n> Perhaps for some users that'd be worth having, but it seems to me like it'd \n> have pretty narrow utility. I'm not sure that's the answer.\n\nfrom a PR point of view, speeding up the trivil count(*) case could be \nworth it, just to avoid people complaining about it not being fast.\n\nin the case where you are doing a count(*) where query and the where is on \nan indexed column, could the search just look at the index + the \nvisibility mapping rather than doing an sequential search through the \ntable?\n\nas for your worries about the accuracy of a visibility based count in the \nface of other transactions, wouldn't you run into the same issues if you \nare doing a sequential scan with the same transactions in process?\n\nDavid Lang\n",
"msg_date": "Tue, 12 Oct 2010 01:22:39 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "12.10.10 11:14, Craig Ringer написав(ла):\n> On 10/12/2010 03:56 PM, Vitalii Tymchyshyn wrote:\n>\n>> BTW: There is a lot of talk about MVCC, but is next solution possible:\n>> 1) Create a page information map that for each page in the table will\n>> tell you how may rows are within and if any write (either successful or\n>> not) were done to this page. This even can be two maps to make second\n>> one really small (a bit per page) - so that it could be most time\n>> in-memory.\n>> 2) When you need to to count(*) or index check - first check if there\n>> were no writes to the page. If not - you can use count information from\n>> page info/index data without going to the page itself\n>> 3) Let vacuum clear the bit after frozing all the tuples in the page (am\n>> I using terminology correctly?).\n>\n> Part of this already exists. It's called the visibility map, and is \n> present in 8.4 and above. It's not currently used for queries, but can \n> potentially be used to aid some kinds of query.\n>\n> http://www.postgresql.org/docs/8.4/static/storage-vm.html\n>\n>> In this case all read-only (archive) data will be this bit off and\n>> index/count(*) will be really fast.\n>\n> A count with any joins or filter criteria would still have to scan all \n> pages with visible tuples in them. \nIf one don't use parittioning. With proper partitioning, filter can \nsimply select a partitions.\n\nAlso filtering can be mapped on the index lookup. And if one could join \nindex hash and visibility map, much like two indexes can be bit joined \nnow, count can be really fast for all but non-frozen tuples.\n> So the visibility map helps speed up scanning of bloated tables, but \n> doesn't provide a magical \"fast count\" except in the utterly trivial \n> \"select count(*) from tablename;\" case, and can probably only be used \n> for accurate results when there are no read/write transactions \n> currently open. \nWhy so? You simply has to recount the pages that are marked dirty in \nusual way. But count problem usually occurs when there are a lot of \narchive data (you need to count over 100K records) that is not modified.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Tue, 12 Oct 2010 11:34:22 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 10/12/2010 04:22 PM, [email protected] wrote:\n\n> from a PR point of view, speeding up the trivil count(*) case could be\n> worth it, just to avoid people complaining about it not being fast.\n\nAt the cost of a fair bit more complexity, though, and slowing \neverything else down.\n\nThe proper solution here remains, IMO, support for visibility \ninformation in indexes, whether by storing it once in the index and once \nin the heap (ouch!), storing it out-of-line, or using a covering index \nwhere one or more columns are stored wholly in the index not in the \ntable heap at all.\n\nHere are a few of the many past discussions about this that have already \ncovered some of the same ground:\n\nhttp://stackoverflow.com/questions/839015/postgres-could-an-index-organized-tables-paved-way-for-faster-select-count-fr\n\nhttp://osdir.com/ml/db.postgresql.performance/2003-10/msg00075.html\n(and the rest of the thread)\n\nA decent look with Google will find many, many more.\n\n> in the case where you are doing a count(*) where query and the where is\n> on an indexed column, could the search just look at the index + the\n> visibility mapping rather than doing an sequential search through the\n> table?\n\nNope, because the visibility map, which is IIRC only one bit per page, \ndoesn't record how many tuples there are on the page, or enough \ninformation about them to determine how many of them are visible to the \ncurrent transaction*.\n\n> as for your worries about the accuracy of a visibility based count in\n> the face of other transactions, wouldn't you run into the same issues if\n> you are doing a sequential scan with the same transactions in process?\n\nNo. Every tuple in a table heap in postgresql has hidden fields, some of \nwhich are used to determine whether the current transaction* can \"see\" \nthe tuple - it may have been inserted after this transaction started, or \ndeleted before this transaction started, so it's not visible to this \ntransaction but may still be to others.\n\nhttp://www.postgresql.org/docs/current/static/ddl-system-columns.html\n\nThis information isn't available in the visibility map, or in indexes. \nThat's why PostgreSQL has to hit the heap to find it.\n\n* current transaction should really be \"current snapshot\". The snapshot \nis taken at the start of the whole transaction for SERIALIZABLE \nisolation, and at the start of each statement for READ COMMITTED isolation.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 12 Oct 2010 19:44:59 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "[email protected] wrote:\n> from a PR point of view, speeding up the trivil count(*) case could be \n> worth it, just to avoid people complaining about it not being fast.\n>\n> \nFixing PR stuff is not the approach that I would take. People are \ncomplaining about select count(*) because they're using it in all the \nwrong places. My assessment that there is a problem with sequential \nscan was wrong. Now, let's again take Oracle as the measure.\nSomeone asked me about caching the data. Here it is:\n\nSQL> connect system/*********\nConnected.\nSQL> alter system flush buffer_cache;\n\nSystem altered.\n\nElapsed: 00:00:12.68\nSQL> connect adbase/*********\nConnected.\nSQL> alter session set db_file_multiblock_read_Count=128;\n\nSession altered.\n\nElapsed: 00:00:00.41\nSQL> select count(*) from ni_occurrence;\n\n COUNT(*)\n----------\n 402062638\n\nElapsed: 00:02:37.77\n\nSQL> select bytes/1048576 MB from user_segments\n 2 where segment_name='NI_OCCURRENCE';\n\n MB\n----------\n 35329\n\nElapsed: 00:00:00.20\nSQL>\n\n\nSo, the results weren't cached the first time around. The explanation is \nthe fact that Oracle, as of the version 10.2.0, reads the table in the \nprivate process memory, not in the shared buffers. This table alone is \n35GB in size, Oracle took 2 minutes 47 seconds to read it using the \nfull table scan. If I do the same thing with PostgreSQL and a comparable \ntable, Postgres is, in fact, faster:\n\npsql (9.0.1)\nType \"help\" for help.\n\nnews=> \\timing\nTiming is on.\nnews=> select count(*) from moreover_documents_y2010m09;\n count \n----------\n 17242655\n(1 row)\n\nTime: 113135.114 ms\nnews=> select pg_size_pretty(pg_table_size('moreover_documents_y2010m09'));\n pg_size_pretty\n----------------\n 27 GB\n(1 row)\n\nTime: 100.849 ms\nnews=>\n\nThe number of rows is significantly smaller, but the table contains \nrather significant \"text\" field which consumes quite a bit of TOAST \nstorage and the sizes are comparable. Postgres read through 27GB in 113 \nseconds, less than 2 minutes and oracle took 2 minutes 37 seconds to \nread through 35GB. I stand corrected: there is nothing wrong with the \nspeed of the Postgres sequential scan.\n\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n",
"msg_date": "Tue, 12 Oct 2010 08:27:26 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, Oct 12, 2010 at 7:27 AM, Mladen Gogala\n<[email protected]> wrote:\n>\n> So, the results weren't cached the first time around. The explanation is the\n> fact that Oracle, as of the version 10.2.0, reads the table in the private\n> process memory, not in the shared buffers. This table alone is 35GB in\n> size, Oracle took 2 minutes 47 seconds to read it using the full table\n> scan. If I do the same thing with PostgreSQL and a comparable table,\n> Postgres is, in fact, faster:\n\n\nWell, I didn't quite mean that - having no familiarity with Oracle I\ndon't know what the alter system statement does, but I was talking\nspecifically about the linux buffer and page cache. The easiest way to\ndrop the linux caches in one fell swoop is:\n\necho 3 > /proc/sys/vm/drop_caches\n\nIs there a command to tell postgresql to drop/clear/reset it's buffer_cache?\n\nClearing/dropping both the system (Linux) and the DB caches is\nimportant when doing benchmarks that involve I/O.\n\n\n\n-- \nJon\n",
"msg_date": "Tue, 12 Oct 2010 08:07:46 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Jon Nelson wrote:\n> Is there a command to tell postgresql to drop/clear/reset it's buffer_cache?\n> \n\nNo. Usually the sequence used to remove all cached data from RAM before \na benchmark is:\n\npg_ctl stop\nsync\necho 3 > /proc/sys/vm/drop_caches\npg_ctl start\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Tue, 12 Oct 2010 09:18:34 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, Oct 12, 2010 at 3:07 PM, Jon Nelson <[email protected]> wrote:\n> On Tue, Oct 12, 2010 at 7:27 AM, Mladen Gogala\n> <[email protected]> wrote:\n>>\n>> So, the results weren't cached the first time around. The explanation is the\n>> fact that Oracle, as of the version 10.2.0, reads the table in the private\n>> process memory, not in the shared buffers. This table alone is 35GB in\n>> size, Oracle took 2 minutes 47 seconds to read it using the full table\n>> scan. If I do the same thing with PostgreSQL and a comparable table,\n>> Postgres is, in fact, faster:\n>\n> Well, I didn't quite mean that - having no familiarity with Oracle I\n> don't know what the alter system statement does, but I was talking\n> specifically about the linux buffer and page cache. The easiest way to\n> drop the linux caches in one fell swoop is:\n>\n> echo 3 > /proc/sys/vm/drop_caches\n\nAFAIK this won't affect Oracle when using direct IO (which bypasses\nthe page cache).\n\nLuca\n",
"msg_date": "Tue, 12 Oct 2010 15:19:52 +0200",
"msg_from": "Luca Tettamanti <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, Oct 12, 2010 at 8:18 AM, Greg Smith <[email protected]> wrote:\n> No. Usually the sequence used to remove all cached data from RAM before a\n> benchmark is:\n\nAll cached data (as cached in postgresql - *not* the Linux system\ncaches)..., right?\n\n\n-- \nJon\n",
"msg_date": "Tue, 12 Oct 2010 08:20:14 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Jon Nelson wrote:\n>\n> Well, I didn't quite mean that - having no familiarity with Oracle I\n> don't know what the alter system statement does, but I was talking\n> specifically about the linux buffer and page cache. \n> \n\nThose are not utilized by Oracle. This is a RAC instance, running on \ntop of ASM, which is an Oracle volume manager, using raw devices. There \nis no file system on those disks:\n\nSQL> select file_name from dba_data_files\n 2 where tablespace_name='ADBASE_DATA';\n\nFILE_NAME\n--------------------------------------------------------------------------------\n+DGDATA/stag3/datafile/adbase_data.262.727278257\n+DGDATA/stag3/datafile/adbase_data.263.727278741\n+DGDATA/stag3/datafile/adbase_data.264.727280145\n+DGDATA/stag3/datafile/adbase_data.265.727280683\n\n[oracle@lpo-oracle-30 ~]$ $ORA_CRS_HOME/bin/crs_stat -l\nNAME=ora.STAG3.STAG31.inst\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-30\n\nNAME=ora.STAG3.STAG32.inst\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-31\n\nNAME=ora.STAG3.db\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-31\n\nNAME=ora.lpo-oracle-30.ASM1.asm\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-30\n\nNAME=ora.lpo-oracle-30.LISTENER_LPO-ORACLE-30.lsnr\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-30\n\nNAME=ora.lpo-oracle-30.gsd\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-30\n\nNAME=ora.lpo-oracle-30.ons\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-30\n\nNAME=ora.lpo-oracle-30.vip\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-30\n\nNAME=ora.lpo-oracle-31.ASM2.asm\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-31\n\nNAME=ora.lpo-oracle-31.LISTENER_LPO-ORACLE-31.lsnr\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-31\n\nNAME=ora.lpo-oracle-31.gsd\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-31\n\nNAME=ora.lpo-oracle-31.ons\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-31\n\nNAME=ora.lpo-oracle-31.vip\nTYPE=application\nTARGET=ONLINE\nSTATE=ONLINE on lpo-oracle-31\n\n\n\nThe only way to flush cache is the aforementioned \"alter system\" \ncommand. AFAIK, Postgres doesn't have anything like that. Oracle uses \nraw devices precisely to avoid double buffering.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 12 Oct 2010 09:55:39 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala <[email protected]> writes:\n> The number of rows is significantly smaller, but the table contains \n> rather significant \"text\" field which consumes quite a bit of TOAST \n> storage and the sizes are comparable. Postgres read through 27GB in 113 \n> seconds, less than 2 minutes and oracle took 2 minutes 37 seconds to \n> read through 35GB. I stand corrected: there is nothing wrong with the \n> speed of the Postgres sequential scan.\n\nUm ... the whole point of TOAST is that the data isn't in-line.\nSo what Postgres was actually reading through was probably quite a\nlot less than 27Gb. It's probably hard to make a completely\napples-to-apples comparison because the two databases are so different,\nbut I don't think this one proves that PG is faster than Oracle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2010 09:56:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again... "
},
{
"msg_contents": "Neil Whelchel <[email protected]> wrote:\n \n> What is the best method to make a page of results and a list of\n> links to other pages of results?\n \nFor our most heavily used web app we decided to have the renderer\njust read the list of cases and render the pages to disk, and then\npresent the first one. We set a limit of 500 entries on the list;\nif we get past 500 we put up a page telling them to refine their\nsearch criteria. That won't work for all circumstances, but it\nworks well for out web app.\n \n-Kevin\n",
"msg_date": "Tue, 12 Oct 2010 08:56:33 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Tom Lane wrote:\n> Mladen Gogala <[email protected]> writes:\n> \n>> The number of rows is significantly smaller, but the table contains \n>> rather significant \"text\" field which consumes quite a bit of TOAST \n>> storage and the sizes are comparable. Postgres read through 27GB in 113 \n>> seconds, less than 2 minutes and oracle took 2 minutes 37 seconds to \n>> read through 35GB. I stand corrected: there is nothing wrong with the \n>> speed of the Postgres sequential scan.\n>> \n>\n> Um ... the whole point of TOAST is that the data isn't in-line.\n> So what Postgres was actually reading through was probably quite a\n> lot less than 27Gb. It's probably hard to make a completely\n> apples-to-apples comparison because the two databases are so different,\n> but I don't think this one proves that PG is faster than Oracle.\n>\n> \t\t\tregards, tom lane\n> \n\nAs is usually the case, you're right. I will try copying the table to \nPostgres over the weekend, my management would not look kindly upon my \ncopying 35GB of the production data during the working hours, for the \nscientific reasons. I have the storage and I can test, I will post the \nresult. I developed quite an efficient Perl script which does copying \nwithout the intervening CSV file, so that the copy should not take more \nthan 2 hours. I will be able to impose a shared lock on the table over \nthe weekend, so that I don't blow away the UNDO segments.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 12 Oct 2010 10:04:18 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": ">> The biggest single problem with \"select count(*)\" is that it is\n>> seriously overused. People use that idiom to establish existence, which\n>> usually leads to a performance disaster in the application using it,\n>> unless the table has no more than few hundred records. SQL language, of\n>> which PostgreSQL offers an excellent implementation, offers [NOT]\n>> EXISTS clause since its inception in the Jurassic era. The problem is\n>> with the sequential scan, not with counting. I'd even go as far as to\n>> suggest that 99% instances of the \"select count(*)\" idiom are probably\n>> bad use of the SQL language.\n>\n> I agree, I have seen many very bad examples of using count(*). I will go so\n> far as to question the use of count(*) in my examples here. It there a better\n> way to come up with a page list than using count(*)? What is the best method\n> to make a page of results and a list of links to other pages of results? Am I\n> barking up the wrong tree here?\nOne way I have dealt with this on very large tables is to cache the \ncount(*) at the application level (using memcached, terracotta, or \nsomething along those lines) and then increment that cache whenever you \nadd a row to the relevant table. On application restart that cache is \nre-initialized with a regular old count(*). This approach works really \nwell and all large systems in my experience need caching in front of the \nDB eventually. If you have a simpler system with say a single \napplication/web server you can simply store the value in a variable, the \nspecifics would depend on the language and framework you are using.\n\nAnother more all-DB approach is to create a statistics tables into which \nyou place aggregated statistics rows (num deleted, num inserted, totals, \netc) at an appropriate time interval in your code. So you have rows \ncontaining aggregated statistics information for the past and some tiny \nportion of the new data happening right now that hasn't yet been \naggregated. Queries then look like a summation of the aggregated values \nin the statistics table plus a count(*) over just the newest portion of \nthe data table and are generally very fast.\n\nOverall I have found that once things get big the layers of your app \nstack start to blend together and have to be combined in clever ways to \nkeep speed up. Postgres is a beast but when you run into things it \ncan't do well just find a way to cache it or make it work together with \nsome other persistence tech to handle those cases.\n\n\n\n\n\n\n\n\n\n\nThe biggest single problem with \"select count(*)\" is that it is\nseriously overused. People use that idiom to establish existence, which\nusually leads to a performance disaster in the application using it,\nunless the table has no more than few hundred records. SQL language, of\nwhich PostgreSQL offers an excellent implementation, offers [NOT]\nEXISTS clause since its inception in the Jurassic era. The problem is\nwith the sequential scan, not with counting. I'd even go as far as to\nsuggest that 99% instances of the \"select count(*)\" idiom are probably\nbad use of the SQL language.\n\n\n\nI agree, I have seen many very bad examples of using count(*). I will go so \nfar as to question the use of count(*) in my examples here. It there a better \nway to come up with a page list than using count(*)? What is the best method \nto make a page of results and a list of links to other pages of results? 
Am I \nbarking up the wrong tree here?\n\n\n One way I have dealt with this on very large tables is to cache the\n count(*) at the application level (using memcached, terracotta, or\n something along those lines) and then increment that cache whenever\n you add a row to the relevant table. On application restart that\n cache is re-initialized with a regular old count(*). This approach\n works really well and all large systems in my experience need\n caching in front of the DB eventually. If you have a simpler system\n with say a single application/web server you can simply store the\n value in a variable, the specifics would depend on the language and\n framework you are using.\n\n Another more all-DB approach is to create a statistics tables into\n which you place aggregated statistics rows (num deleted, num\n inserted, totals, etc) at an appropriate time interval in your\n code. So you have rows containing aggregated statistics information\n for the past and some tiny portion of the new data happening right\n now that hasn't yet been aggregated. Queries then look like a\n summation of the aggregated values in the statistics table plus a\n count(*) over just the newest portion of the data table and are\n generally very fast.\n\n Overall I have found that once things get big the layers of your app\n stack start to blend together and have to be combined in clever ways\n to keep speed up. Postgres is a beast but when you run into things\n it can't do well just find a way to cache it or make it work\n together with some other persistence tech to handle those cases.",
"msg_date": "Tue, 12 Oct 2010 10:19:57 -0400",
"msg_from": "Joe Uhl <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
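A minimal sketch of the trigger-maintained counter described in the message above (the table, function, and trigger names here are purely illustrative, and the single counter row will serialize concurrent writers on the counted table, which is the usual concurrency cost of such schemes):

CREATE TABLE rowcount (
    table_name text PRIMARY KEY,
    cnt        bigint NOT NULL
);

CREATE OR REPLACE FUNCTION count_rows() RETURNS trigger AS $body$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE rowcount SET cnt = cnt + 1 WHERE table_name = TG_TABLE_NAME;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE rowcount SET cnt = cnt - 1 WHERE table_name = TG_TABLE_NAME;
    END IF;
    RETURN NULL;   -- AFTER trigger, return value is ignored
END;
$body$ LANGUAGE plpgsql;

-- Seed the counter while holding out concurrent writers, then keep it
-- current with a row-level trigger.
BEGIN;
LOCK TABLE mytable IN SHARE ROW EXCLUSIVE MODE;
INSERT INTO rowcount SELECT 'mytable', count(*) FROM mytable;
CREATE TRIGGER mytable_rowcount AFTER INSERT OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE count_rows();
COMMIT;

-- Getting the row count is now a single-row lookup:
SELECT cnt FROM rowcount WHERE table_name = 'mytable';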
{
"msg_contents": "On Sat, Oct 9, 2010 at 4:26 PM, Neil Whelchel <[email protected]> wrote:\n> Maybe an\n> estimate(*) that works like count but gives an answer from the index without\n> checking visibility? I am sure that this would be good enough to make a page\n> list, it is really no big deal if it errors on the positive side, maybe the\n> list of pages has an extra page off the end. I can live with that. What I\n> can't live with is taking 13 seconds to get a page of results from 850,000\n> rows in a table.\n> -Neil-\n>\n\nFWIW, Michael Fuhr wrote a small function to parse the EXPLAIN plan a\nfew years ago and it works pretty well assuming your stats are up to\ndate.\n\nhttp://markmail.org/message/gknqthlwry2eoqey\n",
"msg_date": "Tue, 12 Oct 2010 08:12:53 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
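Michael Fuhr's original function is at the link above; a minimal sketch of the same idea (the function name and return type here are only illustrative) pulls the planner's row estimate out of the top node of the EXPLAIN output, so it is only as accurate as the table's statistics:

CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS bigint AS $body$
DECLARE
    rec  record;
    rows bigint;
BEGIN
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        -- The first line containing rows=N is the top plan node.
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$body$ LANGUAGE plpgsql;

-- Example: SELECT count_estimate('SELECT 1 FROM mytable WHERE field = 42');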
{
"msg_contents": "Jon Nelson <[email protected]> wrote:\n> Greg Smith <[email protected]> wrote:\n \n>> Usually the sequence used to remove all cached data from RAM\n>> before a benchmark is:\n> \n> All cached data (as cached in postgresql - *not* the Linux system\n> caches)..., right?\n \nNo. The stop and start of PostgreSQL causes empty PostgreSQL\ncaches. These lines, in between the stop and start, force the Linux\ncache to be empty (on recent kernel versions):\n \nsync\necho 3 > /proc/sys/vm/drop_caches\n \n-Kevin\n",
"msg_date": "Tue, 12 Oct 2010 10:29:22 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/11/10 8:02 PM, Scott Carey wrote:\n> would give you a 1MB read-ahead. Also, consider XFS and its built-in defragmentation. I have found that a longer lived postgres DB will get extreme\n> file fragmentation over time and sequential scans end up mostly random. On-line file defrag helps tremendously.\n>\nWe just had a corrupt table caused by an XFS online defrag. I'm scared \nto use this again while the db is live. Has anyone else found this to \nbe safe? But, I can vouch for the fragmentation issue, it happens very \nquickly in our system.\n\n-Dan\n",
"msg_date": "Tue, 12 Oct 2010 09:39:19 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, 12 Oct 2010, Joe Uhl wrote:\n\n>>> The biggest single problem with \"select count(*)\" is that it is\n>>> seriously overused. People use that idiom to establish existence, which\n>>> usually leads to a performance disaster in the application using it,\n>>> unless the table has no more than few hundred records. SQL language, of\n>>> which PostgreSQL offers an excellent implementation, offers [NOT]\n>>> EXISTS clause since its inception in the Jurassic era. The problem is\n>>> with the sequential scan, not with counting. I'd even go as far as to\n>>> suggest that 99% instances of the \"select count(*)\" idiom are probably\n>>> bad use of the SQL language.\n>> \n>> I agree, I have seen many very bad examples of using count(*). I will go so\n>> far as to question the use of count(*) in my examples here. It there a \n>> better\n>> way to come up with a page list than using count(*)? What is the best \n>> method\n>> to make a page of results and a list of links to other pages of results? Am \n>> I\n>> barking up the wrong tree here?\n> One way I have dealt with this on very large tables is to cache the count(*) \n> at the application level (using memcached, terracotta, or something along \n> those lines) and then increment that cache whenever you add a row to the \n> relevant table. On application restart that cache is re-initialized with a \n> regular old count(*). This approach works really well and all large systems \n> in my experience need caching in front of the DB eventually. If you have a \n> simpler system with say a single application/web server you can simply store \n> the value in a variable, the specifics would depend on the language and \n> framework you are using.\n\nthis works if you know ahead of time what the criteria of the search is \ngoing to be.\n\nso it will work for\n\nselect count(*) from table;\n\nwhat this won't work for is cases wher the criteria of the search is \nunpredictable, i.e.\n\nask the user for input\n\nselect count(*) from table where field=$input;\n\nDavid Lang\n\n> Another more all-DB approach is to create a statistics tables into which you \n> place aggregated statistics rows (num deleted, num inserted, totals, etc) at \n> an appropriate time interval in your code. So you have rows containing \n> aggregated statistics information for the past and some tiny portion of the \n> new data happening right now that hasn't yet been aggregated. Queries then \n> look like a summation of the aggregated values in the statistics table plus a \n> count(*) over just the newest portion of the data table and are generally \n> very fast.\n>\n> Overall I have found that once things get big the layers of your app stack \n> start to blend together and have to be combined in clever ways to keep speed \n> up. Postgres is a beast but when you run into things it can't do well just \n> find a way to cache it or make it work together with some other persistence \n> tech to handle those cases.\n>\n>\n>\n",
"msg_date": "Tue, 12 Oct 2010 08:48:34 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, 12 Oct 2010, Mladen Gogala wrote:\n\n> [email protected] wrote:\n>> from a PR point of view, speeding up the trivil count(*) case could be \n>> worth it, just to avoid people complaining about it not being fast.\n>>\n>> \n> Fixing PR stuff is not the approach that I would take. People are complaining \n> about select count(*) because they're using it in all the wrong places.\n\nthat may be the case, but if it's possible to make it less painful it will \nmean more people use postgres, both because it works better for them when \nthey are using the suboptimal programs, but also because when people do \ntheir trivial testing of databases to decide which one they will use, they \nwon't rule out postgres because \"it's so slow\"\n\nthe fact of the matter is that people do use count(*), and even though \nthere are usually ways to avoid doing so, having the programmer have to do \nsomething different for postgres than they do for other databases is \nraising a barrier against postgres untilization in anything.\n\nDavid Lang\n\n> My \n> assessment that there is a problem with sequential scan was wrong. Now, let's \n> again take Oracle as the measure.\n> Someone asked me about caching the data. Here it is:\n>\n> SQL> connect system/*********\n> Connected.\n> SQL> alter system flush buffer_cache;\n>\n> System altered.\n>\n> Elapsed: 00:00:12.68\n> SQL> connect adbase/*********\n> Connected.\n> SQL> alter session set db_file_multiblock_read_Count=128;\n>\n> Session altered.\n>\n> Elapsed: 00:00:00.41\n> SQL> select count(*) from ni_occurrence;\n>\n> COUNT(*)\n> ----------\n> 402062638\n>\n> Elapsed: 00:02:37.77\n>\n> SQL> select bytes/1048576 MB from user_segments\n> 2 where segment_name='NI_OCCURRENCE';\n>\n> MB\n> ----------\n> 35329\n>\n> Elapsed: 00:00:00.20\n> SQL>\n>\n>\n> So, the results weren't cached the first time around. The explanation is the \n> fact that Oracle, as of the version 10.2.0, reads the table in the private \n> process memory, not in the shared buffers. This table alone is 35GB in \n> size, Oracle took 2 minutes 47 seconds to read it using the full table scan. \n> If I do the same thing with PostgreSQL and a comparable table, Postgres is, \n> in fact, faster:\n>\n> psql (9.0.1)\n> Type \"help\" for help.\n>\n> news=> \\timing\n> Timing is on.\n> news=> select count(*) from moreover_documents_y2010m09;\n> count ----------\n> 17242655\n> (1 row)\n>\n> Time: 113135.114 ms\n> news=> select pg_size_pretty(pg_table_size('moreover_documents_y2010m09'));\n> pg_size_pretty\n> ----------------\n> 27 GB\n> (1 row)\n>\n> Time: 100.849 ms\n> news=>\n>\n> The number of rows is significantly smaller, but the table contains rather \n> significant \"text\" field which consumes quite a bit of TOAST storage and the \n> sizes are comparable. Postgres read through 27GB in 113 seconds, less than 2 \n> minutes and oracle took 2 minutes 37 seconds to read through 35GB. I stand \n> corrected: there is nothing wrong with the speed of the Postgres sequential \n> scan.\n>\n>\n>\n",
"msg_date": "Tue, 12 Oct 2010 08:52:48 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, 12 Oct 2010, Craig Ringer wrote:\n\n> On 10/12/2010 04:22 PM, [email protected] wrote:\n>\n>> from a PR point of view, speeding up the trivil count(*) case could be\n>> worth it, just to avoid people complaining about it not being fast.\n>\n> At the cost of a fair bit more complexity, though, and slowing everything \n> else down.\n\ncomplexity probably, although given how complex the planner is already is \nthis significant?\n\nas far as slowing everything else down, why would it do that? (beyond the \nsimple fact that any new thing the planner can do makes the planner take a \nlittle longer)\n\nDavid Lang\n\n> The proper solution here remains, IMO, support for visibility information in \n> indexes, whether by storing it once in the index and once in the heap \n> (ouch!), storing it out-of-line, or using a covering index where one or more \n> columns are stored wholly in the index not in the table heap at all.\n>\n> Here are a few of the many past discussions about this that have already \n> covered some of the same ground:\n>\n> http://stackoverflow.com/questions/839015/postgres-could-an-index-organized-tables-paved-way-for-faster-select-count-fr\n>\n> http://osdir.com/ml/db.postgresql.performance/2003-10/msg00075.html\n> (and the rest of the thread)\n>\n> A decent look with Google will find many, many more.\n>\n>> in the case where you are doing a count(*) where query and the where is\n>> on an indexed column, could the search just look at the index + the\n>> visibility mapping rather than doing an sequential search through the\n>> table?\n>\n> Nope, because the visibility map, which is IIRC only one bit per page, \n> doesn't record how many tuples there are on the page, or enough information \n> about them to determine how many of them are visible to the current \n> transaction*.\n>\n>> as for your worries about the accuracy of a visibility based count in\n>> the face of other transactions, wouldn't you run into the same issues if\n>> you are doing a sequential scan with the same transactions in process?\n>\n> No. Every tuple in a table heap in postgresql has hidden fields, some of \n> which are used to determine whether the current transaction* can \"see\" the \n> tuple - it may have been inserted after this transaction started, or deleted \n> before this transaction started, so it's not visible to this transaction but \n> may still be to others.\n>\n> http://www.postgresql.org/docs/current/static/ddl-system-columns.html\n>\n> This information isn't available in the visibility map, or in indexes. That's \n> why PostgreSQL has to hit the heap to find it.\n>\n> * current transaction should really be \"current snapshot\". The snapshot is \n> taken at the start of the whole transaction for SERIALIZABLE isolation, and \n> at the start of each statement for READ COMMITTED isolation.\n>\n> --\n> Craig Ringer\n>\n",
"msg_date": "Tue, 12 Oct 2010 08:54:24 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Oct 11, 2010, at 9:21 PM, Samuel Gendler wrote:\n\n\n\nOn Mon, Oct 11, 2010 at 9:06 PM, Scott Carey <[email protected]<mailto:[email protected]>> wrote:\nI can't speak to documentation, but it is something that helps as your I/O subsystem gets more powerful, and how much it helps depends more on your hardware, which may have adaptive read ahead on its own, and your file system which may be more or less efficient at sequential I/O. For example ext3 out of the box gets a much bigger gain from tuning read-ahead than XFS on a DELL PERC6 RAID card (but still ends up slower).\n\n\nGeez. I wish someone would have written something quite so bold as 'xfs is always faster than ext3' in the standard tuning docs. I couldn't find anything that made a strong filesystem recommendation. How does xfs compare to ext4? I wound up on ext4 on a dell perc6 raid card when an unexpected hardware failure on a production system caused my test system to get thrown into production before I could do any serious testing of xfs. If there is a strong consensus that xfs is simply better, I could afford the downtime to switch.\n\nAs it happens, this is a system where all of the heavy workload is in the form of sequential scan type load. The OLTP workload is very minimal (tens of queries per minute on a small number of small tables), but there are a lot of reporting queries that wind up doing sequential scans of large partitions (millions to tens of millions of rows). We've sized the new hardware so that the most commonly used partitions fit into memory, but if we could speed the queries that touch less frequently used partitions, that would be good. I'm the closest thing our team has to a DBA, which really only means that I'm the one person on the dev team or the ops team to have read all of the postgres docs and wiki and the mailing lists. I claim no actual DBA experience or expertise and have limited cycles to devote to tuning and testing, so if there is an established wisdom for filesystem choice and read ahead tuning, I'd be very interested in hearing it.\n\n\next4 is a very fast file system. Its faster than ext2, but has many more features and has the all-important journaling.\n\nHowever, for large reporting queries and sequential scans, XFS will win in the long run if you use the online defragmenter. Otherwise, your sequential scans won't be all that sequential on any file system over time if your tables aren't written once, forever, serially. Parallel restore will result in a system that is fragmented -- ext4 will do best at limiting this on the restore, but only xfs has online defragmentation. We schedule ours daily and it noticeably improves sequential scan I/O.\n\nSupposedly, an online defragmenter is in the works for ext4 but it may be years before its available.\n\nOn Oct 11, 2010, at 9:21 PM, Samuel Gendler wrote:On Mon, Oct 11, 2010 at 9:06 PM, Scott Carey <[email protected]> wrote:\nI can't speak to documentation, but it is something that helps as your I/O subsystem gets more powerful, and how much it helps depends more on your hardware, which may have adaptive read ahead on its own, and your file system which may be more or less efficient at sequential I/O. For example ext3 out of the box gets a much bigger gain from tuning read-ahead than XFS on a DELL PERC6 RAID card (but still ends up slower). \nGeez. I wish someone would have written something quite so bold as 'xfs is always faster than ext3' in the standard tuning docs. 
I couldn't find anything that made a strong filesystem recommendation. How does xfs compare to ext4? I wound up on ext4 on a dell perc6 raid card when an unexpected hardware failure on a production system caused my test system to get thrown into production before I could do any serious testing of xfs. If there is a strong consensus that xfs is simply better, I could afford the downtime to switch.\nAs it happens, this is a system where all of the heavy workload is in the form of sequential scan type load. The OLTP workload is very minimal (tens of queries per minute on a small number of small tables), but there are a lot of reporting queries that wind up doing sequential scans of large partitions (millions to tens of millions of rows). We've sized the new hardware so that the most commonly used partitions fit into memory, but if we could speed the queries that touch less frequently used partitions, that would be good. I'm the closest thing our team has to a DBA, which really only means that I'm the one person on the dev team or the ops team to have read all of the postgres docs and wiki and the mailing lists. I claim no actual DBA experience or expertise and have limited cycles to devote to tuning and testing, so if there is an established wisdom for filesystem choice and read ahead tuning, I'd be very interested in hearing it.\n\next4 is a very fast file system. Its faster than ext2, but has many more features and has the all-important journaling.However, for large reporting queries and sequential scans, XFS will win in the long run if you use the online defragmenter. Otherwise, your sequential scans won't be all that sequential on any file system over time if your tables aren't written once, forever, serially. Parallel restore will result in a system that is fragmented -- ext4 will do best at limiting this on the restore, but only xfs has online defragmentation. We schedule ours daily and it noticeably improves sequential scan I/O.Supposedly, an online defragmenter is in the works for ext4 but it may be years before its available.",
"msg_date": "Tue, 12 Oct 2010 09:02:39 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "[email protected] (Samuel Gendler) writes:\n> Geez. I wish someone would have written something quite so bold as\n> 'xfs is always faster than ext3' in the standard tuning docs. I\n> couldn't find anything that made a strong filesystem\n> recommendation. How does xfs compare to ext4? I wound up on ext4 on\n> a dell perc6 raid card when an unexpected hardware failure on a\n> production system caused my test system to get thrown into production\n> before I could do any serious testing of xfs. If there is a strong\n> consensus that xfs is simply better, I could afford the downtime to\n> switch.\n\nIt's news to me (in this thread!) that XFS is actually \"getting some\ndeveloper love,\" which is a pretty crucial factor to considering it\nrelevant.\n\nXFS was an SGI creation, and, with:\n\n a) the not-scintillating performance of the company,\n\n b) the lack of a lot of visible work going into the filesystem,\n\n c) the paucity of support by Linux vendors (for a long time, if you \n told RHAT you were having problems, and were using XFS, the next\n step would be to park the ticket awaiting your installing a\n \"supported filesystem\")\n\nit didn't look like XFS was a terribly good bet. Those issues were\ncertainly causing concern a couple of years ago.\n\nFaster \"raw performance\" isn't much good if it comes with a risk of:\n - Losing data\n - Losing support from vendors\n\nIf XFS now *is* getting support from both the development and support\nperspectives, then the above concerns may have been invalidated. It\nwould be very encouraging, if so.\n-- \noutput = (\"cbbrowne\" \"@\" \"gmail.com\")\nRules of the Evil Overlord #228. \"If the hero claims he wishes to\nconfess in public or to me personally, I will remind him that a\nnotarized deposition will serve just as well.\"\n",
"msg_date": "Tue, 12 Oct 2010 12:03:57 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, Oct 12, 2010 at 9:02 AM, Scott Carey <[email protected]>wrote:\n>\n>\n> However, for large reporting queries and sequential scans, XFS will win in\n> the long run if you use the online defragmenter. Otherwise, your sequential\n> scans won't be all that sequential on any file system over time if your\n> tables aren't written once, forever, serially. Parallel restore will\n> result in a system that is fragmented -- ext4 will do best at limiting this\n> on the restore, but only xfs has online defragmentation. We schedule ours\n> daily and it noticeably improves sequential scan I/O.\n>\n>\nOur reporting tables are written sequentially and left unmodified until\nentire partitions are dropped. However, equivalent partitions tend to get a\nlittle bit larger over time, so newer partitions won't necessarily fit into\nthe gaps left by prior partition drops, so it is possible that partitions\nwill be split into two sections, but should still be very sequential, if not\nperfectly so. It would seem that we stumbled into an ideal architecture for\ndoing this kind of work - mostly by virtue of starting with 8.2.x and having\nhuge problems with autovacuum and vacuum taking forever and dragging the db\nto halt, which caused us to move to an architecture which aggregates and\nthen drops older data in entire partitions instead of updating aggregates\nindividually and then deleting rows. Partitions are sized such that most\nreporting queries run over entire partitions, too (which was completely\naccidental since I had not yet delved into individual query optimization at\nthe time), so even though we are doing sequential scans, we at least run as\nfew of them as possible and are able to keep hot data in memory.\n\n--sam\n\nOn Tue, Oct 12, 2010 at 9:02 AM, Scott Carey <[email protected]> wrote:\nHowever, for large reporting queries and sequential scans, XFS will win in the long run if you use the online defragmenter. Otherwise, your sequential scans won't be all that sequential on any file system over time if your tables aren't written once, forever, serially. Parallel restore will result in a system that is fragmented -- ext4 will do best at limiting this on the restore, but only xfs has online defragmentation. We schedule ours daily and it noticeably improves sequential scan I/O.\nOur reporting tables are written sequentially and left unmodified until entire partitions are dropped. However, equivalent partitions tend to get a little bit larger over time, so newer partitions won't necessarily fit into the gaps left by prior partition drops, so it is possible that partitions will be split into two sections, but should still be very sequential, if not perfectly so. It would seem that we stumbled into an ideal architecture for doing this kind of work - mostly by virtue of starting with 8.2.x and having huge problems with autovacuum and vacuum taking forever and dragging the db to halt, which caused us to move to an architecture which aggregates and then drops older data in entire partitions instead of updating aggregates individually and then deleting rows. Partitions are sized such that most reporting queries run over entire partitions, too (which was completely accidental since I had not yet delved into individual query optimization at the time), so even though we are doing sequential scans, we at least run as few of them as possible and are able to keep hot data in memory.\n--sam",
"msg_date": "Tue, 12 Oct 2010 09:23:06 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": ">> \n> \n> A count with any joins or filter criteria would still have to scan all \n> pages with visible tuples in them. So the visibility map helps speed up \n> scanning of bloated tables, but doesn't provide a magical \"fast count\" \n> except in the utterly trivial \"select count(*) from tablename;\" case, \n> and can probably only be used for accurate results when there are no \n> read/write transactions currently open.\n\nselect count(*) from tablename where [condition or filter that can use an index] [group by on columns in the index]\n\nwill also work, I think.\n\nAdditionally, I think it can work if other open transactions exist, provided they haven't written to the table being scanned. If they have, then only those pages that have been altered and marked in the visibility map need to be cracked open the normal way.\n\n> Even if you kept a count of \n> tuples in each page along with the mvcc transaction ID information \n> required to determine for which transactions that count is valid, it'd \n> only be useful if you didn't have to do any condition checks, and it'd \n> be yet another thing to update with every insert/delete/update.\n> \n\nYes, lots of drawbacks and added complexity.\n\n> Perhaps for some users that'd be worth having, but it seems to me like \n> it'd have pretty narrow utility. I'm not sure that's the answer.\n> \n> --\n> Craig Ringer\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 12 Oct 2010 09:35:46 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 2010-10-12 18:02, Scott Carey wrote:\n> However, for large reporting queries and sequential scans, XFS will\n> win in the long run if you use the online defragmenter. Otherwise,\n> your sequential scans won't be all that sequential on any file system\n> over time if your tables aren't written once, forever, serially.\n> Parallel restore will result in a system that is fragmented -- ext4\n> will do best at limiting this on the restore, but only xfs has online\n> defragmentation. We schedule ours daily and it noticeably improves\n> sequential scan I/O.\n>\n> Supposedly, an online defragmenter is in the works for ext4 but it\n> may be years before its available.\n\nIf some clever postgres hacker could teach postgres to allocate blocks\nusing posix_fallocate in quite large batches, say .. something like:\nfallocate(min(current_relation_size *0.1,1073741824));\n\nSo if you have a relations filling 10GB allready, they the next \"file\" \nfor the\nrelations is just fully allocated on the first byte by the filesystem. That\nwould ensure that large table is sitting efficiently on the filesystem \nlevel with\na minimum of fragmentation on ext4(and other FS's supporting \nposix_fallocate)\nand for small systems it would only fill 10% more of diskspace... ..\n\n.. last night I spend an hour looking for where its done but couldnt \nfind the\nsource-file where extention of an existing relation takes place.. can\nsomeone give directions?\n\n-- \nJesper\n\n\n\n\n\n\n\n\n\n\nOn 2010-10-12 18:02, Scott Carey wrote:\n> However, for large reporting\nqueries and sequential scans, XFS will\n> win in the long run if you use the online defragmenter. Otherwise,\n> your sequential scans won't be all that sequential on any file\nsystem\n> over time if your tables aren't written once, forever, serially.\n> Parallel restore will result in a system that is fragmented -- ext4\n> will do best at limiting this on the restore, but only xfs has\nonline\n> defragmentation. We schedule ours daily and it noticeably improves\n> sequential scan I/O.\n> \n> Supposedly, an online defragmenter is in the works for ext4 but it\n> may be years before its available.\n\nIf some clever postgres hacker could teach postgres to allocate blocks\nusing posix_fallocate in quite large batches, say .. something like:\nfallocate(min(current_relation_size *0.1,1073741824)); \n\nSo if you have a relations filling 10GB allready, they the next \"file\"\nfor the \nrelations is just fully allocated on the first byte by the filesystem.\nThat \nwould ensure that large table is sitting efficiently on the filesystem\nlevel with \na minimum of fragmentation on ext4(and other FS's supporting\nposix_fallocate)\nand for small systems it would only fill 10% more of diskspace... ..\n\n.. last night I spend an hour looking for where its done but couldnt\nfind the \nsource-file where extention of an existing relation takes place.. can \nsomeone give directions? \n\n-- \nJesper",
"msg_date": "Tue, 12 Oct 2010 18:38:12 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "\nOn Oct 12, 2010, at 8:39 AM, Dan Harris wrote:\n\n> On 10/11/10 8:02 PM, Scott Carey wrote:\n>> would give you a 1MB read-ahead. Also, consider XFS and its built-in defragmentation. I have found that a longer lived postgres DB will get extreme\n>> file fragmentation over time and sequential scans end up mostly random. On-line file defrag helps tremendously.\n>> \n> We just had a corrupt table caused by an XFS online defrag. I'm scared \n> to use this again while the db is live. Has anyone else found this to \n> be safe? But, I can vouch for the fragmentation issue, it happens very \n> quickly in our system.\n> \n\nWhat version? I'm using the latest CentoOS extras build.\n\nWe've been doing online defrag for a while now on a very busy database with > 8TB of data. Not that that means there are no bugs... \n\nIt is a relatively simple thing in xfs -- it writes a new file to temp in a way that allocates contiguous space if available, then if the file has not been modified since it was re-written it is essentially moved on top of the other one. This should be safe provided the journaling and storage is safe, etc.\n\n\n> -Dan\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 12 Oct 2010 09:44:02 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "\nOn Oct 12, 2010, at 8:54 AM, <[email protected]> wrote:\n\n> On Tue, 12 Oct 2010, Craig Ringer wrote:\n> \n>> On 10/12/2010 04:22 PM, [email protected] wrote:\n>> \n>>> from a PR point of view, speeding up the trivil count(*) case could be\n>>> worth it, just to avoid people complaining about it not being fast.\n>> \n>> At the cost of a fair bit more complexity, though, and slowing everything \n>> else down.\n> \n> complexity probably, although given how complex the planner is already is \n> this significant?\n> \n> as far as slowing everything else down, why would it do that? (beyond the \n> simple fact that any new thing the planner can do makes the planner take a \n> little longer)\n> \n> David Lang\n> \nI wouldn't even expect the planner to do more work. An Index Scan can simply avoid going to the tuples for visibility under some circumstances.\n\n",
"msg_date": "Tue, 12 Oct 2010 09:46:08 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "\nOn Oct 12, 2010, at 9:46 AM, Scott Carey wrote:\n\n> \n> On Oct 12, 2010, at 8:54 AM, <[email protected]> wrote:\n> \n>> On Tue, 12 Oct 2010, Craig Ringer wrote:\n>> \n>>> On 10/12/2010 04:22 PM, [email protected] wrote:\n>>> \n>>>> from a PR point of view, speeding up the trivil count(*) case could be\n>>>> worth it, just to avoid people complaining about it not being fast.\n>>> \n>>> At the cost of a fair bit more complexity, though, and slowing everything \n>>> else down.\n>> \n>> complexity probably, although given how complex the planner is already is \n>> this significant?\n>> \n>> as far as slowing everything else down, why would it do that? (beyond the \n>> simple fact that any new thing the planner can do makes the planner take a \n>> little longer)\n>> \n>> David Lang\n>> \n> I wouldn't even expect the planner to do more work. An Index Scan can simply avoid going to the tuples for visibility under some circumstances.\n> \n> \nOf course, the planner has to .... Otherwise it won't choose the Index Scan over the sequential scan. So the cost of index scans when all the info other than visibility is in the index would need to be lowered.\n\n\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Tue, 12 Oct 2010 09:50:40 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/12/10 10:44 AM, Scott Carey wrote:\n> On Oct 12, 2010, at 8:39 AM, Dan Harris wrote:\n>\n>> On 10/11/10 8:02 PM, Scott Carey wrote:\n>>> would give you a 1MB read-ahead. Also, consider XFS and its built-in defragmentation. I have found that a longer lived postgres DB will get extreme\n>>> file fragmentation over time and sequential scans end up mostly random. On-line file defrag helps tremendously.\n>>>\n>> We just had a corrupt table caused by an XFS online defrag. I'm scared\n>> to use this again while the db is live. Has anyone else found this to\n>> be safe? But, I can vouch for the fragmentation issue, it happens very\n>> quickly in our system.\n>>\n> What version? I'm using the latest CentoOS extras build.\n>\n> We've been doing online defrag for a while now on a very busy database with> 8TB of data. Not that that means there are no bugs...\n>\n> It is a relatively simple thing in xfs -- it writes a new file to temp in a way that allocates contiguous space if available, then if the file has not been modified since it was re-written it is essentially moved on top of the other one. This should be safe provided the journaling and storage is safe, etc.\n>\nI'm not sure how to figure out what version of XFS we're on.. but it's \nLinux kernel 2.6.24-24 x86_64 on Ubuntu Server 8.04.3. Postgres version 8.3\n\nWe're due for an upgrade on that server soon so we'll do some more \ntesting once we upgrade. Right now we are just living with the \nfragmentation. I'm glad to hear the regular on-line defrag is working \nsuccessfully, at least that gives me hope we can rely on it in the future.\n\n-Dan\n",
"msg_date": "Tue, 12 Oct 2010 11:06:47 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "So I spent a bit of quality time with oprofile this morning, and found\nonce again that there's no substitute for having actual data before\ntheorizing.\n\nTest case software: current Git HEAD (plus one code change explained\nbelow), compiled with --enable-debug to support oprofile, cassert off;\nno other special configure options. Running on current Fedora 13 (gcc\n4.4.4 in particular). All postgresql.conf options are out-of-the-box.\n\nTest case hardware: recently purchased mid-grade desktop, dual Xeon\nE5503 processors (Nehalem cores, 2GHZ), 4GB DDR3-800 RAM, no-name\nSATA disk.\n\nTest query: \"select count(*) from t\" where t has 4 nonnull integer\ncolumns and 81920000 rows, occupying 3459MB. I chose that size\nspecifically to fit into available RAM, so that on repeated executions\nno physical I/O will occur.\n\nOn this setup I find that \"select count(*)\" runs in about 7.5sec when\nthe data is fully cached in RAM, for a scanning speed of 460MB/sec.\nThis is well in excess of what the machine's disk hardware can do:\nbonnie++ rates the machine's disk read speed at 152MB/sec. So in theory\nPG should be able to completely saturate the disk when processing a table\nbigger than RAM. In reality the test case run time if I've just flushed\ncache is about 28sec, working out to a scan rate of 123MB/sec. I expect\nif I'd bothered to tune the kernel readahead parameters as outlined\nearlier in this thread, I could get to 150MB/sec.\n\nNow of course this disk setup is far from industrial strength, but the\nprocessor isn't what you'd put in a serious database server either (in\nparticular, its available memory bandwidth is well behind the curve).\nAlso, the table is pretty narrow (only 16 payload bytes per row), and\nany wider test table would show a pretty much linear scaling of achievable\nscan rate versus table width. So I don't see much support here at all\nfor the notion that we scan slower than available disk bandwidth.\n\nFurther details from poking at it with oprofile: in the fully-cached\ncase the CPU time is about 80% Postgres and 20% kernel. That kernel\ntime is of course all to do with moving pages from kernel disk buffers\ninto Postgres shared memory. Although I've not bothered to increase\nshared_buffers from the default 32MB, it wouldn't matter on this benchmark\nunless I were able to make shared_buffers hold the entire table ... and\neven then I'd only save 20%.\n\noprofile further shows that (with stock Postgres sources) the userspace\nruntime breaks down like this:\n\nsamples % symbol name\n141267 13.0810 heapgettup_pagemode\n85947 7.9585 advance_aggregates\n83031 7.6885 ExecProject\n78975 7.3129 advance_transition_function\n75060 6.9504 heapgetpage\n73540 6.8096 ExecClearTuple\n69355 6.4221 ExecProcNode\n59288 5.4899 heap_getnext\n57745 5.3470 ExecScan\n55618 5.1501 HeapTupleSatisfiesMVCC\n47057 4.3574 MemoryContextReset\n41904 3.8802 ExecStoreTuple\n37146 3.4396 SeqNext\n32206 2.9822 ExecAgg\n22135 2.0496 XidInMVCCSnapshot\n21142 1.9577 int8inc\n19280 1.7853 AllocSetReset\n18211 1.6863 hash_search_with_hash_value\n16285 1.5079 TransactionIdPrecedes\n\nI also looked at the source-line-level breakdown, though that's too bulky\nto post here. The most interesting fact here is that tuple visibility\ntesting (MVCC) overhead is simply nonexistent: it'd be in heapgetpage()\nif it were being done, which it isn't because all the pages of the table\nhave the PageIsAllVisible bit set. 
In a previous run where those bits\nweren't set but the per-tuple hint bits were, visibility testing still\nonly ate a percent or two of the runtime. So the theory some people have\nespoused in this thread that visibility testing is the bottleneck doesn't\nhold water either. If you go back and look at previous pgsql-hackers\ndiscussions about that, what people have been worried about is not the CPU\ncost of visibility testing but the need for indexscan queries to visit\nthe heap for no other purpose than to check the visibility flags. In a\nseqscan it's not going to matter.\n\nI looked a bit more closely at the heapgettup_pagemode timing. The\nlines shown by opannotate as more than 0.1 percent of the runtime are\n\n 22545 2.2074 :{ /* heapgettup_pagemode total: 153737 15.0528 */\n 5685 0.5566 :\tbool\t\tbackward = ScanDirectionIsBackward(dir);\n 5789 0.5668 :\t\tif (!scan->rs_inited)\n 5693 0.5574 :\t\t\tlineindex = scan->rs_cindex + 1;\n 11429 1.1190 :\t\tdp = (Page) BufferGetPage(scan->rs_cbuf);\n 5693 0.5574 :\t\tlinesleft = lines - lineindex;\n 5766 0.5646 :\t\twhile (linesleft > 0)\n 5129 0.5022 :\t\t\tlineoff = scan->rs_vistuples[lineindex];\n 44461 4.3533 :\t\t\ttuple->t_data = (HeapTupleHeader) PageGetItem((Page) dp, lpp);\n 11135 1.0903 :\t\t\ttuple->t_len = ItemIdGetLength(lpp);\n 5692 0.5573 :\t\t\tif (key != NULL)\n 5773 0.5653 :\t\t\t\tHeapKeyTest(tuple, RelationGetDescr(scan->rs_rd),\n 5674 0.5556 :\t\t\t\t\tscan->rs_cindex = lineindex;\n 11406 1.1168 :}\n\nThere doesn't seem to be a whole lot of room for improvement there.\nMaybe we could shave a couple percent with some tenser coding (I'm\nwondering why HeapKeyTest is being reached, in particular, when there's\nno WHERE clause). But any local changes here will be marginal at best.\n\nOne thing I did find is that the time spent in ExecProject/ExecClearTuple,\namounting to nearly 15% of the runtime, is just for evaluating the\narguments of the aggregate ... and count(*) hasn't got any arguments.\nSo a patch like this improves the run speed by about 15%:\n\ndiff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c\nindex a7dafeb..051e70c 100644\n*** a/src/backend/executor/nodeAgg.c\n--- b/src/backend/executor/nodeAgg.c\n*************** advance_aggregates(AggState *aggstate, A\n*** 480,486 ****\n \t\tTupleTableSlot *slot;\n \n \t\t/* Evaluate the current input expressions for this aggregate */\n! \t\tslot = ExecProject(peraggstate->evalproj, NULL);\n \n \t\tif (peraggstate->numSortCols > 0)\n \t\t{\n--- 480,489 ----\n \t\tTupleTableSlot *slot;\n \n \t\t/* Evaluate the current input expressions for this aggregate */\n! \t\tif (peraggstate->evalproj)\n! \t\t\tslot = ExecProject(peraggstate->evalproj, NULL);\n! \t\telse\n! \t\t\tslot = peraggstate->evalslot;\n \n \t\tif (peraggstate->numSortCols > 0)\n \t\t{\n*************** ExecInitAgg(Agg *node, EState *estate, i\n*** 1728,1738 ****\n \t\tperaggstate->evalslot = ExecInitExtraTupleSlot(estate);\n \t\tExecSetSlotDescriptor(peraggstate->evalslot, peraggstate->evaldesc);\n \n! \t\t/* Set up projection info for evaluation */\n! \t\tperaggstate->evalproj = ExecBuildProjectionInfo(aggrefstate->args,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\taggstate->tmpcontext,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\tperaggstate->evalslot,\n! 
\t\t\t\t\t\t\t\t\t\t\t\t\t\tNULL);\n \n \t\t/*\n \t\t * If we're doing either DISTINCT or ORDER BY, then we have a list of\n--- 1731,1744 ----\n \t\tperaggstate->evalslot = ExecInitExtraTupleSlot(estate);\n \t\tExecSetSlotDescriptor(peraggstate->evalslot, peraggstate->evaldesc);\n \n! \t\t/* Set up projection info for evaluation, if agg has any args */\n! \t\tif (aggrefstate->args)\n! \t\t\tperaggstate->evalproj = ExecBuildProjectionInfo(aggrefstate->args,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\t\taggstate->tmpcontext,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\t\tperaggstate->evalslot,\n! \t\t\t\t\t\t\t\t\t\t\t\t\t\t\tNULL);\n! \t\telse\n! \t\t\tperaggstate->evalproj = NULL;\n \n \t\t/*\n \t\t * If we're doing either DISTINCT or ORDER BY, then we have a list of\n\nbringing the oprofile results to\n\nsamples % symbol name\n181660 17.9017 heapgettup_pagemode\n138049 13.6040 advance_transition_function\n102865 10.1368 advance_aggregates\n80948 7.9770 ExecProcNode\n79943 7.8780 heap_getnext\n73384 7.2316 ExecScan\n60607 5.9725 MemoryContextReset\n53889 5.3105 ExecStoreTuple\n46666 4.5987 SeqNext\n40535 3.9945 ExecAgg\n33481 3.2994 int8inc\n32202 3.1733 heapgetpage\n26068 2.5689 AllocSetReset\n18493 1.8224 hash_search_with_hash_value\n8679 0.8553 LWLockAcquire\n6615 0.6519 ExecSeqScan\n6583 0.6487 LWLockRelease\n3928 0.3871 hash_any\n3715 0.3661 ReadBuffer_common\n\n(note that this, not the stock code, is what corresponds to the 7.5sec\nruntime I quoted above --- it's about 8.5sec without that change).\n\nAt this point what we've got is 25% of the runtime in nodeAgg.c overhead,\nand it's difficult to see how to get any real improvement without tackling\nthat. Rather than apply the patch shown above, I'm tempted to think about\nhard-wiring COUNT(*) as a special case in nodeAgg.c such that we don't go\nthrough advance_aggregates/advance_transition_function at all, but just\nincrement a counter directly. However, that would very clearly be\noptimizing COUNT(*) and nothing else. Given the opinions expressed\nelsewhere in this thread that heavy reliance on COUNT(*) represents\nbad application design, I'm not sure that such a patch would meet with\ngeneral approval.\n\nActually the patch shown above is optimizing COUNT(*) and nothing else,\ntoo, since it's hard to conceive of any other zero-argument aggregate.\n\nAnyway, if anyone is hot to make COUNT(*) faster, that's where to look.\nI don't think any of the previous discussion in this thread is on-point\nat all, except for the parts where people suggested avoiding it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2010 13:07:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again... "
},
{
"msg_contents": "[email protected] wrote:\n> On Tue, 12 Oct 2010, Mladen Gogala wrote:\n>\n> \n>> [email protected] wrote:\n>> \n>>> from a PR point of view, speeding up the trivil count(*) case could be \n>>> worth it, just to avoid people complaining about it not being fast.\n>>>\n>>>\n>>> \n>> Fixing PR stuff is not the approach that I would take. People are complaining \n>> about select count(*) because they're using it in all the wrong places.\n>> \n>\n> that may be the case, but if it's possible to make it less painful it will \n> mean more people use postgres, both because it works better for them when \n> they are using the suboptimal programs, but also because when people do \n> their trivial testing of databases to decide which one they will use, they \n> won't rule out postgres because \"it's so slow\"\n> \n>\n\nThere is no free lunch. If the count field is maintained somewhere, the \nconcurrency will suffer. I find the idea of fixing the \"count delusion\" \nridiculous, may Richard Dawkins forgive me for this pun. Saying that \nsomething is slow without testing and a proper\nconsideration is ridiculous. As a DBA, I usually get complaints like \n\"the database is slow today\" 3 times before lunch, every day. The \ndatabase is never slow, the database is a warehouse where you keep your \ndata. What is slow is the access to the data, and that is done by, guess \nwhat, the application program. Almost always, it's the application \nthat's slow, not the database. As for the \"select count(*)\", idiom, what \nare you trying to do? Where are you using it? If you are using it for \npagination, consider the possibility of not specifying\nthe number of pages on the website, just the links \"next -->\" and \"prev \n<--\". Alternatively, you can fetch a small amount into the web page and \ndirect the users who would like to see the complete information to a \nbackground reporting too. Mixing batch reports and online reports is a \nvery easy thing to do. If you are using it to establish existence, \nyou're doing it wrong. I've had a problem like that this morning. A \ndeveloper came to me with the usual phrase that the \"database is slow\". \nIt was a PHP form which should write an output file and let the user \nknow where the file is. The function looks like this:\n\nfunction put_xls($sth) {\n global $FNAME;\n $FNAME=$FNAME.\".xls\";\n $lineno=0;\n $ncols=$sth->FieldCount();\n for ($i = 0;$i <= $ncols;$i++) {\n $cols[$i] = $sth->FetchField($i);\n $colnames[$i]=$cols[$i]->name;\n }\n $workbook = new Spreadsheet_Excel_Writer(\"/software$FNAME\");\n $format_bold =& $workbook->addFormat();\n $format_bold->setBold();\n $format_bold->setAlign('left');\n $format_left =& $workbook->addFormat();\n $format_left->setAlign('left');\n $worksheet = & $workbook->addWorksheet('Moreover Search');\n $worksheet->writeRow($lineno++,0,$colnames,$format_bold);\n while($row=$sth->FetchRow()) {\n $worksheet->writeRow($lineno++,0,$row,$format_left);\n }\n $workbook->close();\n $cnt=$sth->Recordcount();\n return($cnt);\n}\n\nThe relevant includes are here:\n\nrequire ('Date.php');\nrequire ('adodb5/tohtml.inc.php');\nrequire_once ('adodb5/adodb.inc.php');\nrequire_once ('adodb5/adodb-exceptions.inc.php');\nrequire_once 'Spreadsheet/Excel/Writer.php';\n$ADODB_FETCH_MODE = ADODB_FETCH_NUM;\n\nSo, what is the problem here? Why was the \"database slow\"? 
As it turns \nout, the PEAR module for writing Excel spreadsheets, which is the tool \nused here, creates the entire spreadsheet in memory and writes it out \non the \"close\" command. What was spinning was \"httpd\" process, the \ndatabase was completely and utterly idle, rolling thumbs and awaiting \norders. Using the \"fputcsv\" instead, made the function fly. The only \nthing that was lost were the bold column titles. Changing little things \ncan result in the big performance gains. Making \"select count(*)\" \nunnaturally fast would be tending to bad programming practices. I am \nnot sure that this is a desirable development. You can't expect people \nto adjust the database software to your application. Applications are \nalways database specific. Writing an application that will access a \nPostgreSQL database is not the same as writing an application that will \naccess an Oracle database.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 12 Oct 2010 13:36:46 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
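A sketch of the next/prev-only pagination suggested above: fetch one row more than the page size to decide whether a "next -->" link is needed, and seek on the ordering key rather than counting rows or using a large OFFSET (the table and column names are made up for the example, and $1 stands for a bind parameter):

-- First page of 25:
SELECT id, title
  FROM items
 ORDER BY id
 LIMIT 26;     -- 26 rows back means there is a next page

-- Later pages: remember the last id shown and seek past it
SELECT id, title
  FROM items
 WHERE id > $1      -- last id from the previous page
 ORDER BY id
 LIMIT 26;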
{
"msg_contents": "On 2010-10-12 19:07, Tom Lane wrote:\n> Anyway, if anyone is hot to make COUNT(*) faster, that's where to look.\n> I don't think any of the previous discussion in this thread is on-point\n> at all, except for the parts where people suggested avoiding it.\n> \n\nI would have to say that allthough it is nice to get count(*) faster I\nthink your testing is way too simple.\n\nIt pretty much proves that in terms of the code involved in the\ncount(*) process there is not much to be achieved. But your table\nhas way to little payload. As PG currently is it will start by pushing\ndata off to TOAST when the tuple size reaches 1KB\nand the speed of count(*) is very much dominated by the amount\nof \"dead weight\" it has to draw in together with the heap-access for the\nrow on accessing the table. Creating a case where the table is this\nslim is (in my viewpoint) very much to the extreme on the small side.\n\nJust having 32 bytes bytes of \"payload\" would more or less double\nyou time to count if I read you test results correctly?. .. and in the\nsituation where diskaccess would be needed .. way more.\n\nDividing by pg_relation_size by the amout of tuples in our production\nsystem I end up having no avg tuple size less than 100bytes.\n\n.. without having complete insigt.. a visibillity map that could be used in\nconjunction with indices would solve that. What the cost would be\nof maintaining it is also a factor.\n\nJesper\n\n-- \nJesper\n",
"msg_date": "Tue, 12 Oct 2010 20:22:01 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
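The per-row size being divided out above can be read straight from the catalog; a sketch (reltuples is only as fresh as the last ANALYZE, and the table name is just a placeholder):

SELECT relname,
       pg_relation_size(oid) / NULLIF(reltuples, 0) AS avg_bytes_per_row
  FROM pg_class
 WHERE relname = 'mytable';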
{
"msg_contents": "Jesper Krogh <[email protected]> writes:\n> On 2010-10-12 19:07, Tom Lane wrote:\n>> Anyway, if anyone is hot to make COUNT(*) faster, that's where to look.\n\n> Just having 32 bytes bytes of \"payload\" would more or less double\n> you time to count if I read you test results correctly?. .. and in the\n> situation where diskaccess would be needed .. way more.\n\n> Dividing by pg_relation_size by the amout of tuples in our production\n> system I end up having no avg tuple size less than 100bytes.\n\nWell, yeah. I deliberately tested with a very narrow table so as to\nstress the per-row CPU costs as much as possible. With any wider table\nyou're just going to be I/O bound.\n\n> .. without having complete insigt.. a visibillity map that could be used in\n> conjunction with indices would solve that. What the cost would be\n> of maintaining it is also a factor.\n\nI'm less than convinced that that approach will result in a significant\nwin. It's certainly not going to do anything to convert COUNT(*) into\nan O(1) operation, which frankly is what the complainants are expecting.\nThere's basically no hope of solving the \"PR problem\" without somehow\nturning COUNT(*) into a materialized-view reference. We've discussed\nthat in the past, and know how to do it in principle, but the complexity\nand distributed overhead are daunting.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2010 14:58:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again... "
},
{
"msg_contents": "\n> suggest that 99% instances of the \"select count(*)\" idiom are probably\n>> bad use of the SQL language.\n\nWell, suppose you paginate results. If the user sees that the search query \nreturns 500 pages, there are two options :\n\n- you're google, and your sorting algorithms are so good that the answer \nthe user wants is in the first page\n- or the user will refine his search by entering more keywords tu get a \nmanageable result set\n\nSo, in both cases, the count(*) was useless anyway. And the slowest ones \nare the most useless, since the user will immediatey discard the result \nand refine his query.\n\nIf your full text search is slow, try Xapian or Lucene.\n",
"msg_date": "Tue, 12 Oct 2010 23:35:01 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
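A small illustration of paginating without any count at all, in the spirit of the advice above: drive "next"/"previous" links from the last key seen instead of from OFFSET or count(*). This is only a sketch; the table and column names (items, id, title) and the search condition are hypothetical, not taken from this thread.

    -- first page: fetch one extra row so we know whether a "next" link is needed
    SELECT id, title
      FROM items
     WHERE title LIKE 'foo%'
     ORDER BY id
     LIMIT 26;                 -- show 25, the 26th row only signals "there is more"

    -- next page: restart after the last id that was shown; no OFFSET, no count(*)
    SELECT id, title
      FROM items
     WHERE title LIKE 'foo%'
       AND id > 12345          -- last id of the previous page
     ORDER BY id
     LIMIT 26;

Every page costs about the same no matter how deep the user goes, which is the usual reason to prefer this over OFFSET-based paging.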
{
"msg_contents": "On Tuesday 12 October 2010 07:19:57 you wrote:\n> >> The biggest single problem with \"select count(*)\" is that it is\n> >> seriously overused. People use that idiom to establish existence, which\n> >> usually leads to a performance disaster in the application using it,\n> >> unless the table has no more than few hundred records. SQL language, of\n> >> which PostgreSQL offers an excellent implementation, offers [NOT]\n> >> EXISTS clause since its inception in the Jurassic era. The problem is\n> >> with the sequential scan, not with counting. I'd even go as far as to\n> >> suggest that 99% instances of the \"select count(*)\" idiom are probably\n> >> bad use of the SQL language.\n> > \n> > I agree, I have seen many very bad examples of using count(*). I will go\n> > so far as to question the use of count(*) in my examples here. It there\n> > a better way to come up with a page list than using count(*)? What is\n> > the best method to make a page of results and a list of links to other\n> > pages of results? Am I barking up the wrong tree here?\n> \n> One way I have dealt with this on very large tables is to cache the\n> count(*) at the application level (using memcached, terracotta, or\n> something along those lines) and then increment that cache whenever you\n> add a row to the relevant table. On application restart that cache is\n> re-initialized with a regular old count(*). This approach works really\n> well and all large systems in my experience need caching in front of the\n> DB eventually. If you have a simpler system with say a single\n> application/web server you can simply store the value in a variable, the\n> specifics would depend on the language and framework you are using.\n\nI use this method when ever possible. I talked about it in my first post.\nI generally keep a table around I call counts. It has many rows that store \ncount numbers from frequently used views.\nThe one that I can't do anything about is the case where you nave no control \nover the WHERE clause, (or where there may be simply too many options to count \neverything ahead of time without making things even slower). That is the point \nof this entire thread, or was... ;)\n-Neil-\n\n\n> \n> Another more all-DB approach is to create a statistics tables into which\n> you place aggregated statistics rows (num deleted, num inserted, totals,\n> etc) at an appropriate time interval in your code. So you have rows\n> containing aggregated statistics information for the past and some tiny\n> portion of the new data happening right now that hasn't yet been\n> aggregated. Queries then look like a summation of the aggregated values\n> in the statistics table plus a count(*) over just the newest portion of\n> the data table and are generally very fast.\n> \n> Overall I have found that once things get big the layers of your app\n> stack start to blend together and have to be combined in clever ways to\n> keep speed up. Postgres is a beast but when you run into things it\n> can't do well just find a way to cache it or make it work together with\n> some other persistence tech to handle those cases.\n",
"msg_date": "Tue, 12 Oct 2010 15:21:31 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
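The cached-count idea discussed above can also live entirely inside the database. Below is a minimal sketch of a trigger-maintained counter; the names (row_counts, log_count_trig) are made up for illustration, it targets the log table used elsewhere in this thread, and the single counter row becomes a contention point if many sessions insert or delete concurrently.

    CREATE TABLE row_counts (
        table_name text PRIMARY KEY,
        n          bigint NOT NULL
    );

    -- seed the counter once
    INSERT INTO row_counts SELECT 'log', count(*) FROM log;

    CREATE OR REPLACE FUNCTION log_count_trig() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE row_counts SET n = n + 1 WHERE table_name = 'log';
            RETURN NEW;
        ELSE  -- DELETE
            UPDATE row_counts SET n = n - 1 WHERE table_name = 'log';
            RETURN OLD;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER log_count AFTER INSERT OR DELETE ON log
        FOR EACH ROW EXECUTE PROCEDURE log_count_trig();

    -- the table-wide count is now a single-row lookup
    SELECT n FROM row_counts WHERE table_name = 'log';

As noted above, this only helps for counts whose WHERE clause is known in advance; here it caches just the table-wide total.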
{
"msg_contents": "Pierre C wrote:\n>>\n>\n> Well, suppose you paginate results. If the user sees that the search query \n> returns 500 pages, there are two options :\n> \nWith Google, I usually lose patience on the page 3. All that I, as an \nend user, need to know is whether there are more than 10 pages. The \nfact that there are 1776 pages in the result set is not particularly \nuseful to me. I couldn't care less whether the number of returned pages \nis 1492, 1776 or 1861, I'm going to look at, at most, the first 5 of them.\n\n> - you're google, and your sorting algorithms are so good that the answer \n> the user wants is in the first page\n> - or the user will refine his search by entering more keywords tu get a \n> manageable result set\n>\n> So, in both cases, the count(*) was useless anyway. And the slowest ones \n> are the most useless, since the user will immediatey discard the result \n> and refine his query.\n>\n> If your full text search is slow, try Xapian or Lucene.\n>\n> \nMay I also recommend Sphinx? It's a very nice text search engine, with \nthe price equal to that of Lucene.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 12 Oct 2010 18:30:38 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
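When only "are there more than 10 pages" matters, as in the message above, the count itself can be capped so the scan stops early. A sketch with hypothetical names (items, title); at 25 rows per page, "more than 10 pages" simply means "more than 250 matching rows":

    SELECT count(*) AS capped_count
      FROM (SELECT 1
              FROM items
             WHERE title LIKE 'foo%'
             LIMIT 251) AS t;
    -- never returns more than 251; display "10+ pages" when it comes back as 251

The inner LIMIT stops the scan as soon as 251 matching rows are found, so a huge result set costs no more than a small one.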
{
"msg_contents": "On Tuesday 12 October 2010 08:39:19 Dan Harris wrote:\n> On 10/11/10 8:02 PM, Scott Carey wrote:\n> > would give you a 1MB read-ahead. Also, consider XFS and its built-in\n> > defragmentation. I have found that a longer lived postgres DB will get\n> > extreme file fragmentation over time and sequential scans end up mostly\n> > random. On-line file defrag helps tremendously.\n> \n> We just had a corrupt table caused by an XFS online defrag. I'm scared\n> to use this again while the db is live. Has anyone else found this to\n> be safe? But, I can vouch for the fragmentation issue, it happens very\n> quickly in our system.\n> \n> -Dan\n\nI would like to know the details of what was going on that caused your \nproblem. I have been using XFS for over 9 years, and it has never caused any \ntrouble at all in a production environment. Sure, I had many problems with it \non the test bench, but in most cases the issues were very clear and easy to \navoid in production. There were some (older) XFS tools that caused some \nproblems, but that is in the past, and as time goes on, it seems take less and \nless planning to make it work properly.\n-Neil-\n",
"msg_date": "Tue, 12 Oct 2010 15:33:33 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tuesday 12 October 2010 14:35:01 you wrote:\n> > suggest that 99% instances of the \"select count(*)\" idiom are probably\n> > \n> >> bad use of the SQL language.\n> \n> Well, suppose you paginate results. If the user sees that the search query\n> returns 500 pages, there are two options :\n> \n> - you're google, and your sorting algorithms are so good that the answer\n> the user wants is in the first page\n> - or the user will refine his search by entering more keywords tu get a\n> manageable result set\n> \n> So, in both cases, the count(*) was useless anyway. And the slowest ones\n> are the most useless, since the user will immediatey discard the result\n> and refine his query.\n> \n> If your full text search is slow, try Xapian or Lucene.\n\nI guess I have to comment here again and point out that while I am having this \nissue with text searches, I avoid using count(*) in such cases, I just use \nnext and previous links. Where the real problem (for me) is that when someone \nsearches a date or time range. My application keeps track of huge amounts of \nrealtime transactional data. So an administrator might want a report as to \nwhat some data point did yesterday between 3 and 4 PM. Under normal conditions \nthe range of records that match can be between 0 and over 5,000. This is \nreally killing me especially when the reporting people want a list of how many \ntransactions each that were on points in a given zipcode had this morning \nbetween 8 and 9 AM, it takes about 5 minutes to run on a server that has \nenough ram to hold the entire table!\n\nPseudo query:\nShow how many transactions per node in zipcode 92252 between 8:00 and 9:00 \ntoday:\n\npoint_number | number_of_transactions\n65889\t|\t31\n34814\t|\t4865\n28349\t|\t0\n3358\t|\t364\n...\n\n24 total rows, > 5 minutes.\n\nThen they want every node to be a link to a list of actual data within the \nspecified timeframe.\nThis is where I have to to the same query twice to first find out how many for \nthe page links, then again to get a page of results.\nSure, I could keep tables around that have numbers by the hour, minute, day or \nwhatever to cache up results for speeding things, then the problem is that \nwhen the data is put into the server, there are so many statistics tables to \nupdate, the front end becomes a huge problem. Also, it makes for a huge mess \nof tables to think about when I need to make a report.\n\n-Neil-\n\n\n",
"msg_date": "Tue, 12 Oct 2010 16:19:33 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
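One note on the per-node report above: a single aggregate query can produce all 24 rows in one pass instead of one count per node. This sketch uses the column names of the log table shown later in the thread; the zipcode restriction is assumed to come from joining whatever table maps points to zipcodes, and the date is only an example.

    SELECT machine_id, count(*) AS number_of_transactions
      FROM log
     WHERE t_stamp >= '2010-10-13 08:00'
       AND t_stamp <  '2010-10-13 09:00'
     GROUP BY machine_id
     ORDER BY machine_id;

With the existing index on t_stamp the one-hour slice can be located cheaply, and only that slice is scanned, which is usually far cheaper than running 24 separate counts.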
{
"msg_contents": " On 10/12/10 4:33 PM, Neil Whelchel wrote:\n> On Tuesday 12 October 2010 08:39:19 Dan Harris wrote:\n>> On 10/11/10 8:02 PM, Scott Carey wrote:\n>>> would give you a 1MB read-ahead. Also, consider XFS and its built-in\n>>> defragmentation. I have found that a longer lived postgres DB will get\n>>> extreme file fragmentation over time and sequential scans end up mostly\n>>> random. On-line file defrag helps tremendously.\n>> We just had a corrupt table caused by an XFS online defrag. I'm scared\n>> to use this again while the db is live. Has anyone else found this to\n>> be safe? But, I can vouch for the fragmentation issue, it happens very\n>> quickly in our system.\n>>\n>> -Dan\n> I would like to know the details of what was going on that caused your\n> problem. I have been using XFS for over 9 years, and it has never caused any\n> trouble at all in a production environment. Sure, I had many problems with it\n> on the test bench, but in most cases the issues were very clear and easy to\n> avoid in production. There were some (older) XFS tools that caused some\n> problems, but that is in the past, and as time goes on, it seems take less and\n> less planning to make it work properly.\n> -Neil-\n>\nThere were roughly 50 transactions/sec going on at the time I ran it. \nxfs_db reported 99% fragmentation before it ran ( we haven't been \nrunning it via cron ). The operation completed in about 15 minutes ( \n360GB of used data on the file system ) with no errors. Everything \nseemed fine until the next morning when a user went to query a table we \ngot a message about a \"missing\" file inside the pg cluster. We were \nunable to query the table at all via psql. It was a bit of a panic \nsituation so we restored that table from backup immediately and the \nproblem was solved without doing more research.\n\nThis database has been running for years with no problem ( and none \nsince ), that was the first time I tried to do an on-line defrag and \nthat was the only unusual variable introduced into the system at that \ntime so it was a strong enough correlation for me to believe that caused \nit. Hopefully this was just a corner case..\n\n-Dan\n\n",
"msg_date": "Tue, 12 Oct 2010 18:00:11 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Tue, Oct 12, 2010 at 1:07 PM, Tom Lane <[email protected]> wrote:\n> Anyway, if anyone is hot to make COUNT(*) faster, that's where to look.\n> I don't think any of the previous discussion in this thread is on-point\n> at all, except for the parts where people suggested avoiding it.\n\nI kind of hope that index-only scans help with this, too. If you have\na wide table and a narrow (but not partial) index, and if the\nvisibility map bits are mostly set, it ought to be cheaper to read the\nindex than the table - certainly in the case where any disk I/O is\ninvolved, and maybe even if it isn't.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 13 Oct 2010 02:45:16 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Sunday 10 October 2010 21:15:56 Neil Whelchel wrote:\n\n> Right now, I am building a test machine with two dual core Intel processors\n> and two 15KRPM mirrored hard drives, 1 GB ram. I am using a small amount of\n> ram because I will be using small test tables. I may do testing in the\n> future with more ram and bigger tables, but I think I can accomplish what\n> we are all after with what I have. The machine will be limited to running\n> the database server in test, init, bash, and ssh, no other processes will\n> be running except for what is directly involved with testing. I will post\n> exact specs when I post test results. I will create some test tables, and\n> the same tables will be used in all tests. Suggestions for optimal\n> Postgres and system configuration are welcome. I will try any suggested\n> settings that I have time to test. -Neil-\n> \n\nOk the test machine is up and running:\nA few more details, the hard drives are SCSI Ultra-320, the CPUs are 2.8 GHZ, \n533 MHZ FSB. I wanted to make a more memory cramped machine to keep the table \nto RAM ratio closer to the production machines, but for now, all I have are \n1GB DDRs, and the machine requires pairs, so total memory is 2GB. Swap is \nturned off.\n\nThe data I will be using is a couple of days of raw data from a production \nsystem. The columns of interest are numeric and timestamp. I will use the \nexact same data for all tests.\n\n Table \"public.log\"\n Column | Type | Modifiers \n------------------+-----------------------------+------------------------\n batch_id | integer | \n t_stamp | timestamp without time zone | not null default now()\n raw_data | numeric | \n data_value | numeric | \n data_value_delta | numeric | \n journal_value | numeric | \n journal_data | numeric | \n machine_id | integer | not null\n group_number | integer | \nIndexes:\n \"log_idx\" btree (group_number, batch_id)\n \"log_oid_idx\" btree (oid)\n \"log_t_stamp\" btree (t_stamp)\n\nThe initial test is with XFS with write barriers turned on, this makes for \nvery slow writes. The point of the first test is to get a baseline of \neverything out-of-the-box. 
So, here are the numbers:\n\nInsert the data into one table:\ncrash:~# time psql -U test test -q < log.sql\nreal 679m43.678s\nuser 1m4.948s\nsys 13m1.893s\n\ncrash:~# echo 3 > /proc/sys/vm/drop_caches\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\" \n count \n----------\n 10050886\n(1 row)\n\nreal 0m11.812s\nuser 0m0.000s\nsys 0m0.004s\n\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\" \n count \n----------\n 10050886\n(1 row)\n\nreal 0m3.737s\nuser 0m0.000s\nsys 0m0.000s\n\nAs can be seen here, the cache helps..\nAnd the numbers are not all that bad, so let's throw a sabot into the gears:\ncrash:~# time psql -U test test -c \"UPDATE log SET raw_data=raw_data+1\"\nUPDATE 10050886\n\nreal 14m13.802s\nuser 0m0.000s\nsys 0m0.000s\n\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n count \n----------\n 10050886\n(1 row)\n\nreal 3m32.757s\nuser 0m0.000s\nsys 0m0.000s\n\nJust to be sure:\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n count \n----------\n 10050886\n(1 row)\n\nreal 2m38.631s\nuser 0m0.000s\nsys 0m0.000s\n\nIt looks like cache knocked about a minute off, still quite sad.\nSo, I shutdown Postgres, ran xfs_fsr, and started Postgres:\ncrash:~# echo 3 > /proc/sys/vm/drop_caches\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n count \n----------\n 10050886\n(1 row)\n\nreal 1m36.304s\nuser 0m0.000s\nsys 0m0.000s\n\nSo it seems that defragmentation knocked another minute off:\nLet's see how much cache helps now:\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n count \n----------\n 10050886\n(1 row)\n\nreal 1m34.873s\nuser 0m0.000s\nsys 0m0.000s\n\nNot much... And we are a long way from the 3.7 seconds with a freshly inserted \ntable. Maybe the maid can help here.\ncrash:~# time psql -U test test -c \"VACUUM log;\"\nVACUUM\n\nreal 22m31.931s\nuser 0m0.000s\nsys 0m0.000s\n\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n count \n----------\n 10050886\n(1 row)\n\nreal 1m30.927s\nuser 0m0.000s\nsys 0m0.000s\n\nNope...\nSo, possible conclusions are: \n1. Even with VACUUM database table speed degrades as tables are updated.\n2. Time testing on a freshly INSERTed table gives results that are not real-\nworld. \n3. Filesystem defragmentation helps (some).\n4. Cache only makes a small difference once a table has been UPDATEd.\n\nI am going to leave this configuration running for the next day or so. This \nway I can try any suggestions and play with any more ideas that I have.\nI will try these same tests on ext4 later, along with any good suggested \ntests.\nI will try MySQL with the dame data with both XFS and ext4.\n-Neil-\n",
"msg_date": "Tue, 12 Oct 2010 23:47:19 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 13/10/10 19:47, Neil Whelchel wrote:\n>\n> Nope...\n> So, possible conclusions are:\n> 1. Even with VACUUM database table speed degrades as tables are updated.\n> 2. Time testing on a freshly INSERTed table gives results that are not real-\n> world.\n> 3. Filesystem defragmentation helps (some).\n> 4. Cache only makes a small difference once a table has been UPDATEd.\n>\n> I am going to leave this configuration running for the next day or so. This\n> way I can try any suggestions and play with any more ideas that I have.\n> I will try these same tests on ext4 later, along with any good suggested\n> tests.\n> I will try MySQL with the dame data with both XFS and ext4.\n> -Neil-\n>\n> \n\nI think that major effect you are seeing here is that the UPDATE has \nmade the table twice as big on disk (even after VACUUM etc), and it has \ngone from fitting in ram to not fitting in ram - so cannot be \neffectively cached anymore.\n\nThis would not normally happen in real life (assuming UPDATEs only \nmodify a small part of a table per transaction). However administration \nupdates (e.g 'oh! - ref 1 should now be ref 2 please update \neverything') *will* cause the table size to double.\n\nThis is an artifact of Postgres's non overwriting storage manager - \nMysql will update in place and you will not see this.\n\nTry VACUUM FULL on the table and retest.\n\nregards\n\nMark\n",
"msg_date": "Wed, 13 Oct 2010 20:19:26 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
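The "table doubled in size" theory above is easy to check directly against the log table from this thread, before and after the UPDATE and again after vacuuming:

    SELECT pg_size_pretty(pg_relation_size('log'))       AS heap_only,
           pg_size_pretty(pg_total_relation_size('log')) AS with_indexes_and_toast;

If the heap size crosses the amount of RAM in the machine between two of those measurements, the sudden slowdown of count(*) is explained by cache misses rather than by anything count-specific.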
{
"msg_contents": "\n> I guess I have to comment here again and point out that while I am \n> having this\n> issue with text searches, I avoid using count(*) in such cases, I just \n> use\n> next and previous links.\n\nUnfortunately sometimes you got to do an ORDER BY on search results, and \nthen all the rows got to be read...\n\n> Where the real problem (for me) is that when someone\n> searches a date or time range. My application keeps track of huge\n\nHave you tried CLUSTER ?\n\nAlso, it is sad to say, but if you need an engine able to use index-only \nscans which would fit this type of query, replicate the table to MyISAM. \nUnfortunately, the MySQL optimizer is really not so smart about complex \nreporting queries (no hash joins, no hash aggregates) so if you don't have \na multicolumn index covering that you can use for index-only scan in your \nquery, you'll get either a really huge sort or a really nasty nested loop \nindex scan...\n",
"msg_date": "Wed, 13 Oct 2010 09:46:25 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
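For the date/time range searches mentioned above, CLUSTER can be tried on the timestamp index that the log table in this thread already has. It takes an exclusive lock and rewrites the table, so it has to be scheduled, and it does not keep newly inserted rows in order, so it may need to be repeated periodically.

    CLUSTER log USING log_t_stamp;   -- physically order the heap by t_stamp
    ANALYZE log;                     -- refresh statistics, including correlation

Afterwards a given time window maps to a compact run of heap pages, so range queries on t_stamp touch far fewer pages.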
{
"msg_contents": "On Wednesday 13 October 2010 00:19:26 Mark Kirkwood wrote:\n> On 13/10/10 19:47, Neil Whelchel wrote:\n> > Nope...\n> > So, possible conclusions are:\n> > 1. Even with VACUUM database table speed degrades as tables are updated.\n> > 2. Time testing on a freshly INSERTed table gives results that are not\n> > real- world.\n> > 3. Filesystem defragmentation helps (some).\n> > 4. Cache only makes a small difference once a table has been UPDATEd.\n> > \n> > I am going to leave this configuration running for the next day or so.\n> > This way I can try any suggestions and play with any more ideas that I\n> > have. I will try these same tests on ext4 later, along with any good\n> > suggested tests.\n> > I will try MySQL with the dame data with both XFS and ext4.\n> > -Neil-\n> \n> I think that major effect you are seeing here is that the UPDATE has\n> made the table twice as big on disk (even after VACUUM etc), and it has\n> gone from fitting in ram to not fitting in ram - so cannot be\n> effectively cached anymore.\n> \n> This would not normally happen in real life (assuming UPDATEs only\n> modify a small part of a table per transaction). However administration\n> updates (e.g 'oh! - ref 1 should now be ref 2 please update\n> everything') *will* cause the table size to double.\n> \n> This is an artifact of Postgres's non overwriting storage manager -\n> Mysql will update in place and you will not see this.\n> \n> Try VACUUM FULL on the table and retest.\n> \n> regards\n> \n> Mark\n\nThere seems to be allot of discussion about VACUUM FULL, and its problems. The \noverall buzz seems to be that VACUUM FULL is a bad idea (I could be wrong \nhere). It has been some time since I have read the changelogs, but I seem to \nremember that there have been some major changes to VACUUM FULL recently. \nMaybe this needs to be re-visited in the documentation.\n\ncrash:~# time psql -U test test -c \"VACUUM FULL log;\"\nVACUUM\n\nreal 4m49.055s\nuser 0m0.000s\nsys 0m0.000s\n\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n count \n----------\n 10050886\n(1 row)\n\nreal 0m9.665s\nuser 0m0.000s\nsys 0m0.004s\n\nA huge improvement from the minute and a half before the VACUUM FULL.\ncrash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n count \n----------\n 10050886\n(1 row)\n\nreal 0m3.786s\nuser 0m0.000s\nsys 0m0.000s\n\nAnd the cache helps...\nSo, we are right back to within 10ms of where we started after INSERTing the \ndata, but it took a VACUUM FULL to accomplish this (by making the table fit in \nRAM).\nThis is a big problem on a production machine as the VACUUM FULL is likely to \nget in the way of INSERTing realtime data into the table. \n\nSo to add to the conclusion pile:\n5. When you have no control over the WHERE clause which may send count(*) \nthrough more rows of a table that would fit in RAM your performance will be \ntoo slow, so count is missing a LIMIT feature to avoid this.\n6. Keep tables that are to be updated frequently as narrow as possible: Link \nthem to wider tables to store the columns that are less frequently updated.\n\nSo with our conclusion pile so far we can deduce that if we were to keep all \nof our data in two column tables (one to link them together, and the other to \nstore one column of data), we stand a much better chance of making the entire \ntable to be counted fit in RAM, so we simply apply the WHERE clause to a \nspecific table as opposed to a column within a wider table... 
This seems to \ndefeat the entire goal of the relational database...\n\n-Neil-\n",
"msg_date": "Wed, 13 Oct 2010 01:38:38 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/13/2010 2:47 AM, Neil Whelchel wrote:\n> Even with VACUUM database table speed degrades\n\nWhat the heck is the \"database table speed\"? Tables don't do anything.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Wed, 13 Oct 2010 04:40:53 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/13/2010 3:19 AM, Mark Kirkwood wrote:\n> I think that major effect you are seeing here is that the UPDATE has\n> made the table twice as big on disk (even after VACUUM etc), and it has\n> gone from fitting in ram to not fitting in ram - so cannot be\n> effectively cached anymore.\n>\nIn the real world, tables are larger than the available memory. I have \ntables of several hundred gigabytes in size. Tables shouldn't be \n\"effectively cached\", the next step would be to measure \"buffer cache \nhit ratio\", tables should be effectively used.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Wed, 13 Oct 2010 04:44:09 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 13/10/10 21:38, Neil Whelchel wrote:\n>\n> So with our conclusion pile so far we can deduce that if we were to keep all\n> of our data in two column tables (one to link them together, and the other to\n> store one column of data), we stand a much better chance of making the entire\n> table to be counted fit in RAM, so we simply apply the WHERE clause to a\n> specific table as opposed to a column within a wider table... This seems to\n> defeat the entire goal of the relational database...\n>\n> \n\nThat is a bit excessive I think - a more reasonable conclusion to draw \nis that tables bigger than ram will drop to IO max speed to scan, rather \nthan DIMM max speed...\n\nThere are things you can do to radically improve IO throughput - e.g a \npair of AMC or ARECA 12 slot RAID cards setup RAID 10 and tuned properly \nshould give you a max sequential throughput of something like 12*100 \nMB/s = 1.2 GB/s. So your example table (estimated at 2GB) so be able to \nbe counted by Postgres in about 3-4 seconds...\n\nThis assumes a more capable machine than you are testing on I suspect.\n\nCheers\n\nMark\n",
"msg_date": "Wed, 13 Oct 2010 21:50:23 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Wednesday 13 October 2010 01:50:23 Mark Kirkwood wrote:\n> On 13/10/10 21:38, Neil Whelchel wrote:\n> > So with our conclusion pile so far we can deduce that if we were to keep\n> > all of our data in two column tables (one to link them together, and the\n> > other to store one column of data), we stand a much better chance of\n> > making the entire table to be counted fit in RAM, so we simply apply the\n> > WHERE clause to a specific table as opposed to a column within a wider\n> > table... This seems to defeat the entire goal of the relational\n> > database...\n> \n> That is a bit excessive I think - a more reasonable conclusion to draw\n> is that tables bigger than ram will drop to IO max speed to scan, rather\n> than DIMM max speed...\n> \n> There are things you can do to radically improve IO throughput - e.g a\n> pair of AMC or ARECA 12 slot RAID cards setup RAID 10 and tuned properly\n> should give you a max sequential throughput of something like 12*100\n> MB/s = 1.2 GB/s. So your example table (estimated at 2GB) so be able to\n> be counted by Postgres in about 3-4 seconds...\n> \n> This assumes a more capable machine than you are testing on I suspect.\n> \n> Cheers\n> \n> Mark\nThe good ol' bruit force approach! I knew I'd see this one sooner or later. \nThough I was not sure if I was going to see the 16TB of RAM suggestion first.\nSeriously though, as the title of this thread suggests, everything is \nrelative. Sure count(*) and everything else will work faster with more system \npower. It just seems to me that count(*) is slower than it could be given a \nset of conditions. I started this thread because I think that there must be a \nbetter way to count matches from an INDEXed column than shoving the entire \ntable through RAM (including columns that you are not interested in at the \nminute). And even worse, when you have no (reasonable) control of the WHERE \nclause preventing your system from thrashing for the next week because \nsomebody put in criteria that matched a few TB of records and there is no way \nto LIMIT count(*) other than externally timing the query and aborting it if it \ntakes too long. Whet is needed is a way to determine how many rows are likely \nto match a given WHERE clause so we can cut off useless queries, but we need a \nfast count(*) for that, or a limit on the existing one... I seem to remember \nsaying something about an index driven estimate(*) at one point...\n\nI might go as far as to rattle the cage of the developers to see if it makes \nany sense to add some column oriented storage capability to Postgres. That \nwould be the hot ticket to be able to specify an attribute on a column so that \nthe back end could shadow or store a column in a column oriented table so \naggregate functions could work on them with good efficiency, or is that an \nINDEX?\n\nSince the thread has started, I have had people ask about different system \nconfigurations, especially the filesystem (XFS, ext4...). I have never tested \next4, and since we are all involved here, I thought that I could do so and \nshare my results for others, that is why I got into time testing stuff.\nTime testing count(*) in my later postings is really not the point as count is \nsimply dragging the entire table off of the RAID through RAM, I can use any \nother function like max()... No that can narrow down its scan with an INDEX... \nOk, sum(), there we go!\n\n-Neil-\n\n",
"msg_date": "Wed, 13 Oct 2010 03:16:11 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
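Two partial answers to the "estimate(*) or LIMITed count(*)" wish above already exist; here is a sketch against the log table from this thread, with a made-up WHERE clause:

    -- O(1) estimate of the whole table, maintained by VACUUM/ANALYZE
    SELECT reltuples::bigint AS estimated_rows
      FROM pg_class
     WHERE relname = 'log';

    -- cap the work of an exact count at 100000 rows
    SELECT count(*) AS capped
      FROM (SELECT 1 FROM log WHERE machine_id = 42 LIMIT 100000) AS t;

For an arbitrary WHERE clause, the planner's own row estimate (visible in EXPLAIN output) can serve as a cheap "is this query even worth running" check before launching the real count.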
{
"msg_contents": "12.10.10 14:44, Craig Ringer написав(ла):\n>\n>> in the case where you are doing a count(*) where query and the where is\n>> on an indexed column, could the search just look at the index + the\n>> visibility mapping rather than doing an sequential search through the\n>> table?\n>\n> Nope, because the visibility map, which is IIRC only one bit per page, \n> doesn't record how many tuples there are on the page, or enough \n> information about them to determine how many of them are visible to \n> the current transaction*.\nI'd say it can tell you that your may not recheck given tuple, can't it? \nYou still have to count all index tuples and recheck the ones that are \nuncertain. Does it work in this way? This can help a lot for wide tuples \nin table, but with narrow index and mostly read-only data.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Wed, 13 Oct 2010 13:41:52 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "12.10.10 21:58, Tom Lane написав(ла):\n>\n> I'm less than convinced that that approach will result in a significant\n> win. It's certainly not going to do anything to convert COUNT(*) into\n> an O(1) operation, which frankly is what the complainants are expecting.\n> There's basically no hope of solving the \"PR problem\" without somehow\n> turning COUNT(*) into a materialized-view reference. We've discussed\n> that in the past, and know how to do it in principle, but the complexity\n> and distributed overhead are daunting.\n>\n> \nI've though about \"aggregate\" indexes, something like\ncreate index index_name on table_name(count(*) group by column1, column2);\nOR\ncreate index index_name on table_name(count(*));\nfor table-wide count\n\nTo make it usable one would need:\n1) Allow third aggregate function SMERGE that can merge one aggregate \nstate to another\n2) The index should be regular index (e.g. btree) on column1, column2 \nthat for each pair has page list to which it's data may belong (in \npast/current running transactions), and aggregate state for each page \nthat were frozen previously\nWhen index is used, it can use precalculated values for \"pages with all \ntuples vacuumed\" (I suspect this is information from visibility map) and \nshould do regular calculation for all non-frozen pages with visibility \nchecks and everything what's needed.\nWhen vacuum processes the page, it should (in sync or async way) \ncalculate aggregate values for the page.\n\nIMHO Such an indexes would make materialized views/triggers/high level \ncaches unneeded in most cases.\n\nBest regards, Vitalii Tymchyshyn\n\n",
"msg_date": "Wed, 13 Oct 2010 13:54:19 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 13/10/2010 12:38 AM, Jesper Krogh wrote:\n\n> If some clever postgres hacker could teach postgres to allocate blocks\n> using posix_fallocate in quite large batches, say .. something like:\n> fallocate(min(current_relation_size *0.1,1073741824))\n\nThere doesn't seem to be any use of posix_fallocate in the sources, at \nleast according to git grep. The patch that introduced posix_fadvise use \napparently had posix_fallocate in it, but that use appears to have been \nremoved down the track.\n\nIt's worth noting that posix_fallocate sucks if your file system doesn't \nintelligent support for it. IIRC it's horrible on ext3, where it can \ntake a while to return while it allocates (and IIRC zeroes!) all those \nblocks. This may be part of why it's not used. In past testing with \nposix_fallocate for other tools I've also found rather mixed performance \nresults - it can slow things down rather than speed them up, depending \non the file system in use and all sorts of other factors.\n\nIf Pg was to use posix_fallocate, it'd probably need control over it on \na per-tablespace basis.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Wed, 13 Oct 2010 19:42:00 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Dan Harris wrote:\n> I'm not sure how to figure out what version of XFS we're on.. but it's \n> Linux kernel 2.6.24-24 x86_64 on Ubuntu Server 8.04.3. Postgres \n> version 8.3\n\nThere's the kernel side support that matches your kernel, as well as the \nxfsprogs package. The latter is where a lot of distributions haven't \nkept up with upstream changes, and where I suspect the defragmenter bug \nyou ran into is located at.\n\nHardy ships with 2.9.4-2: http://packages.ubuntu.com/hardy/xfsprogs\n\nThe work incorporating a more stable XFS into RHEL started with xfsprogs \n3.0.1-6 going into Fedora 11, and 3.1.X would represent a current \nrelease. So your Ubuntu kernel is two major improvement releases \nbehind, 3.0 and 3.1 were the upgrades to xfsprogs where things really \ngot going again making that code modern and solid. Ubuntu Lucid \nswitched to 3.1.0, RHEL6 will probably ship 3.1.0 too.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Wed, 13 Oct 2010 08:12:19 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": " On 10/13/2010 8:12 AM, Greg Smith wrote:\n> The work incorporating a more stable XFS into RHEL started with xfsprogs\n> 3.0.1-6 going into Fedora 11, and 3.1.X would represent a current\n> release. So your Ubuntu kernel is two major improvement releases\n> behind, 3.0 and 3.1 were the upgrades to xfsprogs where things really\n> got going again making that code modern and solid. Ubuntu Lucid\n> switched to 3.1.0, RHEL6 will probably ship 3.1.0 too.\n>\n\nI am afraid that my management will not let me use anything that doesn't \nexist as a RPM package in the current Red Hat distribution. No Ubuntu, \nno Fedora, no manual linking. There will always be that ominous \nquestion: how many other companies are using XFS? From the business \nperspective, questions like that make perfect sense.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Wed, 13 Oct 2010 08:33:28 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> I am afraid that my management will not let me use anything that \n> doesn't exist as a RPM package in the current Red Hat distribution. No \n> Ubuntu, no Fedora, no manual linking. There will always be that \n> ominous question: how many other companies are using XFS? From the \n> business perspective, questions like that make perfect sense.\n\nXFS support is available as an optional module starting in RHEL 5.5. In \nCentOS, you just grab it, so that's what I've been doing. My \nunderstanding is that you may have to ask your sales rep to enable \naccess to it under the official RedHat Network channels if you're using \na subscription from them. I'm not sure exactly what the support \nsituation is with it, but it's definitely available as an RPM from RedHat.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Wed, 13 Oct 2010 09:02:21 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 4:38 AM, Neil Whelchel <[email protected]> wrote:\n> There seems to be allot of discussion about VACUUM FULL, and its problems. The\n> overall buzz seems to be that VACUUM FULL is a bad idea (I could be wrong\n> here). It has been some time since I have read the changelogs, but I seem to\n> remember that there have been some major changes to VACUUM FULL recently.\n> Maybe this needs to be re-visited in the documentation.\n\nIn 9.0, VACUUM FULL does something similar to what CLUSTER does. This\nis a much better idea than what it did in 8.4 and prior.\n\n> crash:~# time psql -U test test -c \"VACUUM FULL log;\"\n> VACUUM\n>\n> real 4m49.055s\n> user 0m0.000s\n> sys 0m0.000s\n>\n> crash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n> count\n> ----------\n> 10050886\n> (1 row)\n>\n> real 0m9.665s\n> user 0m0.000s\n> sys 0m0.004s\n>\n> A huge improvement from the minute and a half before the VACUUM FULL.\n\nThis is a very surprising result that I would like to understand\nbetter. Let's assume that your UPDATE statement bloated the table by\n2x (you could use pg_relation_size to find out exactly; the details\nprobably depend on fillfactor which you might want to lower if you're\ngoing to do lots of updates). That ought to mean that count(*) has to\ngrovel through twice as much data, so instead of taking 9 seconds it\nought to take 18 seconds. Where the heck is the other 1:12 going?\nThis might sort of make sense if the original table was laid out\nsequentially on disk and the updated table was not, but how and why\nwould that happen?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 13 Oct 2010 09:27:34 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 6:16 AM, Neil Whelchel <[email protected]> wrote:\n> I might go as far as to rattle the cage of the developers to see if it makes\n> any sense to add some column oriented storage capability to Postgres. That\n> would be the hot ticket to be able to specify an attribute on a column so that\n> the back end could shadow or store a column in a column oriented table so\n> aggregate functions could work on them with good efficiency, or is that an\n> INDEX?\n\nI'd love to work on that, but without funding it's tough to find the\ntime. It's a big project.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 13 Oct 2010 09:28:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Neil Whelchel <[email protected]> wrote:\n \n> crash:~# time psql -U test test -c \"UPDATE log SET\n> raw_data=raw_data+1\"\n> UPDATE 10050886\n> \n> real 14m13.802s\n> user 0m0.000s\n> sys 0m0.000s\n> \n> crash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n> count \n> ----------\n> 10050886\n> (1 row)\n> \n> real 3m32.757s\n> user 0m0.000s\n> sys 0m0.000s\n> \n> Just to be sure:\n> crash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n> count \n> ----------\n> 10050886\n> (1 row)\n> \n> real 2m38.631s\n> user 0m0.000s\n> sys 0m0.000s\n> \n> It looks like cache knocked about a minute off\n \nThat's unlikely to be caching, since you just updated the rows. \nIt's much more likely to be one or both of rewriting the rows as you\nread them to set hint bits or competing with autovacuum.\n \nThe large increase after the update probably means you went from a\ntable which was fully cached to something larger than the total\ncache.\n \n-Kevin\n",
"msg_date": "Wed, 13 Oct 2010 08:45:00 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Neil Whelchel <[email protected]> writes:\n> Insert the data into one table:\n> crash:~# time psql -U test test -q < log.sql\n> real 679m43.678s\n> user 1m4.948s\n> sys 13m1.893s\n\n> crash:~# echo 3 > /proc/sys/vm/drop_caches\n> crash:~# time psql -U test test -c \"SELECT count(*) FROM log;\" \n> count \n> ----------\n> 10050886\n> (1 row)\n\n> real 0m11.812s\n> user 0m0.000s\n> sys 0m0.004s\n\n> crash:~# time psql -U test test -c \"SELECT count(*) FROM log;\" \n> count \n> ----------\n> 10050886\n> (1 row)\n\n> real 0m3.737s\n> user 0m0.000s\n> sys 0m0.000s\n\n> As can be seen here, the cache helps..\n\nThat's probably got little to do with caching and everything to do with\nsetting hint bits on the first SELECT pass.\n\nI concur with Mark's question about whether your UPDATE pushed the table\nsize across the limit of what would fit in RAM.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Oct 2010 09:49:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again... "
},
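A practical consequence of the hint-bit explanation above: after a bulk load or a mass update it can pay to do the row-rewriting pass deliberately, rather than letting the first innocent SELECT absorb it. A sketch, using the log table from this thread:

    VACUUM ANALYZE log;   -- sets hint bits and refreshes planner statistics in one pass

The first count(*) that runs afterwards then measures read speed only, instead of read plus rewrite.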
{
"msg_contents": "On 2010-10-13 15:28, Robert Haas wrote:\n> On Wed, Oct 13, 2010 at 6:16 AM, Neil Whelchel<[email protected]> wrote:\n> \n>> I might go as far as to rattle the cage of the developers to see if it makes\n>> any sense to add some column oriented storage capability to Postgres. That\n>> would be the hot ticket to be able to specify an attribute on a column so that\n>> the back end could shadow or store a column in a column oriented table so\n>> aggregate functions could work on them with good efficiency, or is that an\n>> INDEX?\n>> \n> I'd love to work on that, but without funding it's tough to find the\n> time. It's a big project.\n> \nIs it hugely different from just getting the visibillity map suitable\nfor doing index-only scans and extracting values from the index\ndirectly as Heikki has explained?\n\nThat would essentially do a column oriented table (the index itself)\nof a specific columns (or column set).\n\n... still a huge task though.\n\n-- \nJesper\n",
"msg_date": "Wed, 13 Oct 2010 19:59:48 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 07:49, Tom Lane <[email protected]> wrote:\n> Neil Whelchel <[email protected]> writes:\n\n> I concur with Mark's question about whether your UPDATE pushed the table\n> size across the limit of what would fit in RAM.\n\nYeah, you said you have ~2GB of ram, just counting the bytes and the\nnumber of rows (not including padding or overhead) puts you around\n~670MB. Some quick testing here on a 64 bit box :\n\n=> create table log (batch_id int, t_stamp timestamp without time zone\nnot null default now(), raw_data numeric, data_value numeric,\ndata_value_delta numeric, journal_value numeric, journal_data numeric,\nmachine_id integer not null, group_number integer) with oids;\nCREATE TABLE\nTime: 34.310 ms\n\n=> insert into log (batch_id, data_value, data_value_delta,\njournal_value, journal_data, group_number, machine_id, raw_data)\nselect 1, 1, 1, 1, 1, 1, 1, 1 from generate_series(1, 10050886);\nINSERT 0 10050886\nTime: 32818.529 ms\n\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 969 MB\n\n=> update log set raw_data = raw_data+1;\nUPDATE 10050886\nTime: 65805.741 ms\n\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 1939 MB\n\n=> SELECT count(*) from log;\n count\n----------\n 10050886\n(1 row)\n\nTime: 11181.005 ms\n\n=> SELECT count(*) from log;\n count\n----------\n 10050886\n(1 row)\n\nTime: 2825.569 ms\n\nThis box has ~6GB ram.\n\n\nBTW did anyone else hear the below in a Valeris voice?\n> And the numbers are not all that bad, so let's throw a sabot into the gears:\n> crash:~# time psql -U test test -c \"UPDATE log SET raw_data=raw_data+1\"\n",
"msg_date": "Wed, 13 Oct 2010 12:17:19 -0600",
"msg_from": "Alex Hunsaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 02:38, Neil Whelchel <[email protected]> wrote:\n\n> And the cache helps...\n> So, we are right back to within 10ms of where we started after INSERTing the\n> data, but it took a VACUUM FULL to accomplish this (by making the table fit in\n> RAM).\n> This is a big problem on a production machine as the VACUUM FULL is likely to\n> get in the way of INSERTing realtime data into the table.\n\nRight, but the real point is how often do you plan on mass updating\nthe table? Thats (hopefully) the only time a vacuum full should be\nneeded. Otherwise (auto) vacuum will probably work most of the time.\n\n> 6. Keep tables that are to be updated frequently as narrow as possible: Link\n> them to wider tables to store the columns that are less frequently updated.\n\nAgain I don't think its updated frequently so much as mass updated. I\nrun some databases here that have tens to hundreds of updates every\nsecond. The difference is I don't update *all* 26 million rows at the\nsame time that often. But If I did, Id probably want to lower the\nfillfactor.\n\nFor example:\n=> update log set raw_data = raw_data+1;\nUPDATE 10050886\nTime: 59387.021 ms\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 1939 MB\n\n=> update log set raw_data = raw_data+1;\nUPDATE 10050886\nTime: 70549.425 ms\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 2909 MB\n\n=> update log set raw_data = raw_data+1;\nUPDATE 10050886\nTime: 78551.544 ms\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 3879 MB\n\n=> update log set raw_data = raw_data+1;\nUPDATE 10050886\nTime: 74443.945 ms\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 4848 MB\n\n\nHere you see basically linear growth, after some vacuuming:\n\n=> VACUUM log;\nVACUUM\nTime: 193055.857 ms\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 4848 MB\n\n=> VACUUM log;\nVACUUM\nTime: 38281.541 ms\nwhopper=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 4848 MB\n\n=> VACUUM log;\nVACUUM\nTime: 28.531 ms\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 4848 MB\n\nHey... 
its not shrinking it at all...:\n=> VACUUM verbose log;\nINFO: vacuuming \"public.log\"\nINFO: \"log\": found 0 removable, 0 nonremovable row versions in 31 out\nof 620425 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 2511 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.01u sec elapsed 0.01 sec.\nINFO: vacuuming \"pg_toast.pg_toast_10544753\"\nINFO: index \"pg_toast_10544753_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_10544753\": found 0 removable, 0 nonremovable row\nversions in 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nTime: 29.070 ms\n\n-- ok lets start over and this time set fillfactor to 50;\n=> alter table log set (fillfactor = 50);\n=> vacuum full log;\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 1963 MB\n\n-- 2x the default size, lets see what an update does now\n=> update log set raw_data = raw_data+1;\nUPDATE 10050886\nTime: 70424.752 ms\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 1963 MB\n\n-- hey ! same size\n\n=> update log set raw_data = raw_data+1;\nUPDATE 10050886\nTime: 58112.895 ms\n=> SELECT pg_size_pretty(pg_total_relation_size('log'));\n pg_size_pretty\n----------------\n 1963 MB\n(1 row)\n\n-- Still the same\n\nSo in short... vacuum seems to fall over flat with mass updates, set a\nlower fillfactor :).\n\n> So with our conclusion pile so far we can deduce that if we were to keep all\n> of our data in two column tables (one to link them together, and the other to\n> store one column of data), we stand a much better chance of making the entire\n> table to be counted fit in RAM,\n\nI dunno about that... Seems like if you only had 2 tables both would\nfail to fit in ram fairly quickly :)\n\n> so we simply apply the WHERE clause to a\n> specific table as opposed to a column within a wider table... This seems to\n> defeat the entire goal of the relational database...\n\nSure... thats one answer. See\nhttp://wiki.postgresql.org/wiki/Slow_Counting for more. But the basic\nideas are:\n1) estimate the count\n2) use triggers and keep the count somewhere else\n3) keep it in ram\n",
"msg_date": "Wed, 13 Oct 2010 13:09:22 -0600",
"msg_from": "Alex Hunsaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
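To see whether a lower fillfactor is actually paying off as in the experiment above, the statistics views show how many updates stayed on the same page (HOT) and how much dead space vacuum still has to clean up. A sketch against the log table:

    SELECT n_tup_upd, n_tup_hot_upd, n_live_tup, n_dead_tup, last_autovacuum
      FROM pg_stat_user_tables
     WHERE relname = 'log';

When n_tup_hot_upd tracks n_tup_upd closely, the whole-table updates are being absorbed in place and the table should stop growing, which is what the stable 1963 MB figure above suggests.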
{
"msg_contents": "On Wed, 13 Oct 2010, Tom Lane wrote:\n\n> Neil Whelchel <[email protected]> writes:\n>\n> That's probably got little to do with caching and everything to do with\n> setting hint bits on the first SELECT pass.\n>\n> I concur with Mark's question about whether your UPDATE pushed the table\n> size across the limit of what would fit in RAM.\n\nNeil, can you just double the size of your initial test to make sure that \nit's too large to fit in ram to start with?\n\nDavid Lang\n",
"msg_date": "Wed, 13 Oct 2010 12:37:45 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again... "
},
{
"msg_contents": "On Wednesday 13 October 2010 05:33:28 Mladen Gogala wrote:\n> On 10/13/2010 8:12 AM, Greg Smith wrote:\n> > The work incorporating a more stable XFS into RHEL started with xfsprogs\n> > 3.0.1-6 going into Fedora 11, and 3.1.X would represent a current\n> > release. So your Ubuntu kernel is two major improvement releases\n> > behind, 3.0 and 3.1 were the upgrades to xfsprogs where things really\n> > got going again making that code modern and solid. Ubuntu Lucid\n> > switched to 3.1.0, RHEL6 will probably ship 3.1.0 too.\n> \n> I am afraid that my management will not let me use anything that doesn't\n> exist as a RPM package in the current Red Hat distribution. No Ubuntu,\n> no Fedora, no manual linking. There will always be that ominous\n> question: how many other companies are using XFS? From the business\n> perspective, questions like that make perfect sense.\n\nXFS sees extensive use in the billing departments of many phone and utility \ncompanies. Maybe not the code that you see in Linux, but the on-disk format, \nwhich I think is unchanged since its original release. (You can use the modern \nXFS code in Linux to mount a filesystem from an older SGI machine that used \nXFS.) The code in Linux is based on the code that SGI released some time in \n2000, which worked at that time very well for the SGI machine. At the time \nthat SGI came up with XFS, they had realtime in mind. They added specific \nfeatures to the filesystem to guarantee IO at a specific rate, this was \nintended for database and other realtime applications. I have not looked at \nthe Linux version to see if it contains these extensions. I will be doing this \nsoon, however as my next big project will require a true realtime system.\n-Neil-\n",
"msg_date": "Wed, 13 Oct 2010 13:08:26 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Wednesday 13 October 2010 06:27:34 you wrote:\n> On Wed, Oct 13, 2010 at 4:38 AM, Neil Whelchel <[email protected]> \nwrote:\n> > There seems to be allot of discussion about VACUUM FULL, and its\n> > problems. The overall buzz seems to be that VACUUM FULL is a bad idea (I\n> > could be wrong here). It has been some time since I have read the\n> > changelogs, but I seem to remember that there have been some major\n> > changes to VACUUM FULL recently. Maybe this needs to be re-visited in\n> > the documentation.\n> \n> In 9.0, VACUUM FULL does something similar to what CLUSTER does. This\n> is a much better idea than what it did in 8.4 and prior.\n> \n> > crash:~# time psql -U test test -c \"VACUUM FULL log;\"\n> > VACUUM\n> > \n> > real 4m49.055s\n> > user 0m0.000s\n> > sys 0m0.000s\n> > \n> > crash:~# time psql -U test test -c \"SELECT count(*) FROM log;\"\n> > count\n> > ----------\n> > 10050886\n> > (1 row)\n> > \n> > real 0m9.665s\n> > user 0m0.000s\n> > sys 0m0.004s\n> > \n> > A huge improvement from the minute and a half before the VACUUM FULL.\n> \n> This is a very surprising result that I would like to understand\n> better. Let's assume that your UPDATE statement bloated the table by\n> 2x (you could use pg_relation_size to find out exactly; the details\n> probably depend on fillfactor which you might want to lower if you're\n> going to do lots of updates). That ought to mean that count(*) has to\n> grovel through twice as much data, so instead of taking 9 seconds it\n> ought to take 18 seconds. Where the heck is the other 1:12 going?\n> This might sort of make sense if the original table was laid out\n> sequentially on disk and the updated table was not, but how and why\n> would that happen?\nThis is likely due to the table not fitting in memory before the VACUUM FULL.\nI am glad that you suggested using pg_relation_size, I somehow didn't think of \nit at the time. I will redo the test and publish the results of \npg_relation_size.\n-Neil-\n",
"msg_date": "Wed, 13 Oct 2010 13:19:06 -0700",
"msg_from": "Neil Whelchel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 13/10/10 23:16, Neil Whelchel wrote:\n> \n> The good ol' bruit force approach! I knew I'd see this one sooner or later.\n> Though I was not sure if I was going to see the 16TB of RAM suggestion first.\n> Seriously though, as the title of this thread suggests, everything is\n> relative. Sure count(*) and everything else will work faster with more system\n> power. It just seems to me that count(*) is slower than it could be given a\n> set of conditions....\n>\n> Since the thread has started, I have had people ask about different system\n> configurations, especially the filesystem (XFS, ext4...). I have never tested\n> ext4, and since we are all involved here, I thought that I could do so and\n> share my results for others, that is why I got into time testing stuff.\n> Time testing count(*) in my later postings is really not the point as count is\n> simply dragging the entire table off of the RAID through RAM, I can use any\n> other function like max()... No that can narrow down its scan with an INDEX...\n> Ok, sum(), there we go!\n>\n>\n> \n\nWell in some (quite common) use cases, the queries cannot be known in \nadvance, and the tables are considerably bigger than ram... this makes \nthe fast IO a good option - sometimes better (and in the end cheaper) \nthan trying to maintain every conceivable covering index.\n\nOf course it would be great if Postgres could use the indexes alone to \nexecute certain queries - we may see some of that capability in the next \nfew release (keep and eye on messages concerning the 'Visibility Map').\n\nregards\n\nMark\n\n",
"msg_date": "Thu, 14 Oct 2010 10:07:07 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 13/10/10 21:44, Mladen Gogala wrote:\n> On 10/13/2010 3:19 AM, Mark Kirkwood wrote:\n>> I think that major effect you are seeing here is that the UPDATE has\n>> made the table twice as big on disk (even after VACUUM etc), and it has\n>> gone from fitting in ram to not fitting in ram - so cannot be\n>> effectively cached anymore.\n>>\n> In the real world, tables are larger than the available memory. I have \n> tables of several hundred gigabytes in size. Tables shouldn't be \n> \"effectively cached\", the next step would be to measure \"buffer cache \n> hit ratio\", tables should be effectively used.\n>\nSorry Mladen,\n\nI didn't mean to suggest that all tables should fit into ram... but was \npointing out (one reason) why Neil would expect to see a different \nsequential scan speed after the UPDATE.\n\nI agree that in many interesting cases, tables are bigger than ram [1].\n\nCheers\n\nMark\n\n[1] Having said that, these days 64GB of ram is not unusual for a \nserver... and we have many real customer databases smaller than this \nwhere I work.\n",
"msg_date": "Thu, 14 Oct 2010 10:48:21 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 1:59 PM, Jesper Krogh <[email protected]> wrote:\n> On 2010-10-13 15:28, Robert Haas wrote:\n>>\n>> On Wed, Oct 13, 2010 at 6:16 AM, Neil Whelchel<[email protected]>\n>> wrote:\n>>\n>>>\n>>> I might go as far as to rattle the cage of the developers to see if it\n>>> makes\n>>> any sense to add some column oriented storage capability to Postgres.\n>>> That\n>>> would be the hot ticket to be able to specify an attribute on a column so\n>>> that\n>>> the back end could shadow or store a column in a column oriented table so\n>>> aggregate functions could work on them with good efficiency, or is that\n>>> an\n>>> INDEX?\n>>>\n>>\n>> I'd love to work on that, but without funding it's tough to find the\n>> time. It's a big project.\n>>\n>\n> Is it hugely different from just getting the visibillity map suitable\n> for doing index-only scans and extracting values from the index\n> directly as Heikki has explained?]\n\nI think that there's a lot more to a real column-oriented database\nthan index-only scans, although, of course, index-only scans are very\nimportant.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 13 Oct 2010 22:18:14 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Could this be an interesting test use of https://www.fossexperts.com/ ? \n\n'Community' driven proposal - multiple people / orgs agree to pay various\nportions? Maybe with multiple funders a reasonable target fund amount could\nbe reached.\n\nJust throwing around ideas here. \n\n\nMark\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Robert Haas\nSent: Wednesday, October 13, 2010 7:29 AM\nTo: Neil Whelchel\nCc: [email protected]\nSubject: Re: [PERFORM] Slow count(*) again...\n\nOn Wed, Oct 13, 2010 at 6:16 AM, Neil Whelchel <[email protected]>\nwrote:\n> I might go as far as to rattle the cage of the developers to see if it\nmakes\n> any sense to add some column oriented storage capability to Postgres. That\n> would be the hot ticket to be able to specify an attribute on a column so\nthat\n> the back end could shadow or store a column in a column oriented table so\n> aggregate functions could work on them with good efficiency, or is that an\n> INDEX?\n\nI'd love to work on that, but without funding it's tough to find the\ntime. It's a big project.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 13 Oct 2010 22:22:16 -0600",
"msg_from": "\"mark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 2010-10-14 06:22, mark wrote:\n> Could this be an interesting test use of https://www.fossexperts.com/ ?\n>\n> 'Community' driven proposal - multiple people / orgs agree to pay various\n> portions? Maybe with multiple funders a reasonable target fund amount could\n> be reached.\n> \nI might convince my boss to chip in... but how do we get the task\nup there.. should we find one to give an estimate first?\n\n-- \nJesper\n",
"msg_date": "Thu, 14 Oct 2010 17:29:40 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Thu, Oct 14, 2010 at 12:22 AM, mark <[email protected]> wrote:\n> Could this be an interesting test use of https://www.fossexperts.com/ ?\n>\n> 'Community' driven proposal - multiple people / orgs agree to pay various\n> portions? Maybe with multiple funders a reasonable target fund amount could\n> be reached.\n>\n> Just throwing around ideas here.\n\nThis is a bit off-topic, but as of now, they're only accepting\nproposals for projects to be performed by CommandPrompt itself. So\nthat doesn't help me much (note the sig).\n\nBut in theory it's a good idea. Of course, when and if they open it\nup, then what? If more than one developer or company is interested in\na project, who determines who gets to do the work and get paid for it?\n If that determination is made by CommandPrompt itself, or if it's\njust a free-for-all to see who can get their name on the patch that\nends up being committed, it's going to be hard to get other\npeople/companies to take it very seriously.\n\nAnother problem is that even when they do open it up, they apparently\nintend to charge 7.5 - 15% of the contract value as a finder's fee.\nThat's a lot of money. For a $100 project it's totally reasonable,\nbut for a $10,000 project it's far more expensive than the value of\nthe service they're providing can justify. (Let's not even talk about\na $100,000 project.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 14 Oct 2010 15:56:05 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On 2010-10-14 21:56, Robert Haas wrote:\n> On Thu, Oct 14, 2010 at 12:22 AM, mark<[email protected]> wrote:\n> \n>> Could this be an interesting test use of https://www.fossexperts.com/ ?\n>>\n>> 'Community' driven proposal - multiple people / orgs agree to pay various\n>> portions? Maybe with multiple funders a reasonable target fund amount could\n>> be reached.\n>>\n>> Just throwing around ideas here.\n>> \n> This is a bit off-topic, but as of now, they're only accepting\n> proposals for projects to be performed by CommandPrompt itself. So\n> that doesn't help me much (note the sig).\n>\n> But in theory it's a good idea. Of course, when and if they open it\n> up, then what? If more than one developer or company is interested in\n> a project, who determines who gets to do the work and get paid for it?\n> If that determination is made by CommandPrompt itself, or if it's\n> just a free-for-all to see who can get their name on the patch that\n> ends up being committed, it's going to be hard to get other\n> people/companies to take it very seriously.\n> \nCouldnt you open up a dialog about it?\n> Another problem is that even when they do open it up, they apparently\n> intend to charge 7.5 - 15% of the contract value as a finder's fee.\n> That's a lot of money. For a $100 project it's totally reasonable,\n> but for a $10,000 project it's far more expensive than the value of\n> the service they're providing can justify. (Let's not even talk about\n> a $100,000 project.)\n> \n\nHi Robert.\n\nI can definately see your arguments, but you failed to describe\na \"better\" way?\n\nMany of us rely heavily on PostgreSQL and would\nlike to get \"this feature\", but sponsoring it all alone does not seem\nlike a viable option (just a guess), taken into consideration we dont\neven have an estimate about how big it is, but I saw the estimate of\n15K USD of the \"ALTER column position\" description.. and the\nvisibillity map is most likely in the \"same ballpark\" (from my\nperspective).\n\nSo in order to get something like a visibillity map (insert your \nfavorite big\nfeature here), you have the option:\n\n* Sponsor it all by yourself. (where its most likely going to be too big,\n or if it is the center of your applictions, then you definitely turn \nto a\n RDBMS that has supported it for longer times, if you can).\n* Wait for someone else to sponsor it all by them selves. (that happens\n occationally, but for particular features is it hard to see when and \nwhat,\n and the actual sponsor would still have the dilemma in the first point).\n* Hack it yourselves (many of us dont have time neither skills to do it, and\n my employer actually wants me to focus on the stuff that brings most \ndirect\n value for my time, which is a category hacking PG does not fall into \nwhen the\n business is about something totally else).\n* A kind of microsponsoring like above?\n* Your proposal in here?\n\nTo me.. the 4'th bullet point looks like the most viable so far..\n\nTo be honest, if it is EDB, Redpill, Command Prompt, 2nd Quadrant or\nwhoever end up doing the job is, seen from this perspective not\nimportant, just it ends in the hands of someone \"capable\" of doing\nit. ... allthougth Heikki has done some work on this task allready.\n\nPreferrably I would like to get it coordinated by the PG project itself. \nBut\nI can see that it is really hard to do that kind of stuff. And you would \nstill\nface the challenge about who should end up doing the thing.\n\nJesper .. 
dropped Joshua Drake on CC, he might have given all of this some\nseconds of thought allready.\n\n-- \nJesper\n\n",
"msg_date": "Fri, 15 Oct 2010 07:04:43 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "On Wed, 2010-10-13 at 09:02 -0400, Greg Smith wrote:\n\n> XFS support is available as an optional module starting in RHEL 5.5.\n> In CentOS, you just grab it, so that's what I've been doing. My \n> understanding is that you may have to ask your sales rep to enable \n> access to it under the official RedHat Network channels if you're\n> using a subscription from them. I'm not sure exactly what the support\n> situation is with it, but it's definitely available as an RPM from\n> RedHat.\n\nRight. It is called \"Red Hat Scalable File System\", and once paid, it is\navailable via RHN.\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nPostgreSQL RPM Repository: http://yum.pgrpms.org\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz",
"msg_date": "Fri, 15 Oct 2010 11:36:26 +0300",
"msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Jesper Krogh wrote:\n> To be honest, if it is EDB, Redpill, Command Prompt, 2nd Quadrant or\n> whoever end up doing the job is, seen from this perspective not\n> important, just it ends in the hands of someone \"capable\" of doing\n> it. ... although Heikki has done some work on this task already.\n\nNow you're closing in on why this is a touchy subject. Heikki has \nalready done work here funded by EDB. As such, the idea of anyone else \nbeing put in charge of fund raising and allocation for this particular \nfeature would be a political mess. While it would be nice if there was \na completely fair sponsorship model for developing community PostgreSQL \nfeatures, overseen by a benevolent, free, and completely unaffiliated \noverlord, we're not quite there yet. In cases like these, where there's \nevidence a company with a track record of delivering features is already \ninvolved, you're probably better off contacting someone from there \ndirectly--rather than trying to fit that into the public bounty model \nsome PostgreSQL work is getting done via lately. The visibility map is \na particularly troublesome one, because the list of \"capable\" people who \ncould work on that, but who aren't already working at a company having \nsome relations with EDB, is rather slim.\n\nI know that's kind of frustrating to hear, for people who would like to \nget a feature done but can't finance the whole thing themselves. But \nlook on the bright side--the base price is free, and when you give most \nPostgreSQL companies money to work on something it's at least possible \nto get what you want done. You'd have to pay a whole lot more than the \n$15K number you threw out there before any of the commercial database \nvendors would pay any attention to your particular feature request.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Fri, 15 Oct 2010 23:28:29 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "bricklen wrote:\n> On Sat, Oct 9, 2010 at 4:26 PM, Neil Whelchel <[email protected]> wrote:\n> > Maybe an\n> > estimate(*) that works like count but gives an answer from the index without\n> > checking visibility? I am sure that this would be good enough to make a page\n> > list, it is really no big deal if it errors on the positive side, maybe the\n> > list of pages has an extra page off the end. I can live with that. What I\n> > can't live with is taking 13 seconds to get a page of results from 850,000\n> > rows in a table.\n> > -Neil-\n> >\n> \n> FWIW, Michael Fuhr wrote a small function to parse the EXPLAIN plan a\n> few years ago and it works pretty well assuming your stats are up to\n> date.\n> \n> http://markmail.org/message/gknqthlwry2eoqey\n\nWhat I recommend is to execute the query with EXPLAIN, and look at the\nestimated rows and costs. If the row number is large, just round it to\nthe nearest thousand and return it to the application as a count ---\nthis is what Google does for searches (just try it).\n\nIf the row count/cost are low, run the query and return an exact count.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 21 Oct 2010 00:07:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
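A rough sketch of the approach described above, along the lines of the Michael Fuhr function linked in the quoted message; the function name, the bigint return type and the suggested cutoff are illustrative only:

CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS bigint AS $$
DECLARE
    rec record;
    est bigint;
BEGIN
    -- the planner's row estimate appears in the first plan line as "rows=NNN"
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        est := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN est IS NOT NULL;
    END LOOP;
    RETURN est;
END;
$$ LANGUAGE plpgsql STRICT;

-- usage: SELECT count_estimate('SELECT * FROM log');
-- show the (rounded) estimate when it is large, and run a real count(*)
-- only when the estimate falls below some small threshold, e.g. 1000 rows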
{
"msg_contents": "\nOn Oct 12, 2010, at 11:58 AM, Tom Lane wrote:\n\n> Jesper Krogh <[email protected]> writes:\n>> On 2010-10-12 19:07, Tom Lane wrote:\n>>> Anyway, if anyone is hot to make COUNT(*) faster, that's where to look.\n> \n>> Just having 32 bytes bytes of \"payload\" would more or less double\n>> you time to count if I read you test results correctly?. .. and in the\n>> situation where diskaccess would be needed .. way more.\n> \n>> Dividing by pg_relation_size by the amout of tuples in our production\n>> system I end up having no avg tuple size less than 100bytes.\n> \n> Well, yeah. I deliberately tested with a very narrow table so as to\n> stress the per-row CPU costs as much as possible. With any wider table\n> you're just going to be I/O bound.\n\n\nOn a wimpy disk, I/O bound for sure. But my disks go 1000MB/sec. No query can go fast enough for them. The best I've gotten is 800MB/sec, on a wide row (average 800 bytes). Most tables go 300MB/sec or so. And with 72GB of RAM, many scans are in-memory anyway.\n\nA single SSD with supercapacitor will go about 500MB/sec by itself next spring. I will easily be able to build a system with 2GB/sec I/O for under $10k.\n\n\n",
"msg_date": "Wed, 20 Oct 2010 21:47:24 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again... "
},
{
"msg_contents": "On 2010-10-21 06:47, Scott Carey wrote:\n> On a wimpy disk, I/O bound for sure. But my disks go 1000MB/sec.\n> No query can go fast enough for them. The best I've gotten is\n> 800MB/sec, on a wide row (average 800 bytes). Most tables go\n> 300MB/sec or so. And with 72GB of RAM, many scans are in-memory\n> anyway.\n\nIs it cpu or io bound while doing it?\n\nCan you scan it faster using time cat relation-oid.* > /dev/null\n\n> A single SSD with supercapacitor will go about 500MB/sec by itself\n> next spring. I will easily be able to build a system with 2GB/sec\n> I/O for under $10k.\n\nWhat filesystem are you using? Readahead?\nCan you try to check the filesystemfragmentation of the table using filefrag?\n\n-- \nJesper\n\n\n\n\n\n\n\n\n\n\n\nOn 2010-10-21 06:47, Scott Carey wrote:\n> On a wimpy disk, I/O bound for\nsure. But my disks go 1000MB/sec.\n> No query can go fast enough for them. The best I've gotten is\n> 800MB/sec, on a wide row (average 800 bytes). Most tables go\n> 300MB/sec or so. And with 72GB of RAM, many scans are in-memory\n> anyway.\n\nIs it cpu or io bound while doing it?\n\nCan you scan it faster using time cat relation-oid.* > /dev/null \n \n> A single SSD with supercapacitor will go about 500MB/sec by itself\n> next spring. I will easily be able to build a system with 2GB/sec\n> I/O for under $10k.\n\n\nWhat filesystem are you using? Readahead?\nCan you try to check the filesystemfragmentation of the table using filefrag?\n\n-- \nJesper",
"msg_date": "Thu, 21 Oct 2010 20:13:24 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
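For anyone who wants to try the raw read and filefrag checks asked about above, the file behind a table can be located from SQL; pg_relation_filepath() exists as a built-in from 9.0 on (older releases need a manual lookup via pg_class.relfilenode), data_directory is visible to superusers, and the table name here is only a placeholder:

SELECT current_setting('data_directory') || '/' || pg_relation_filepath('mytable');
-- relations over 1GB are split into 1GB segments: <file>, <file>.1, <file>.2, ...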
{
"msg_contents": "\nOn Oct 21, 2010, at 11:13 AM, Jesper Krogh wrote:\n\n> On 2010-10-21 06:47, Scott Carey wrote:\n> > On a wimpy disk, I/O bound for\n> sure. But my disks go 1000MB/sec.\n> \n> > No query can go fast enough for them. The best I've gotten is\n> \n> > 800MB/sec, on a wide row (average 800 bytes). Most tables go\n> \n> > 300MB/sec or so. And with 72GB of RAM, many scans are in-memory\n> \n> > anyway.\n> \n> \n> Is it cpu or io bound while doing it?\nI/O bound with the fio benchmark tool if 16K blocks or greater, CPU bound with 8K blocks or smaller. CentOS 5.5.\nCPU bound with postgres. \n\n\n> Can you scan it faster using time cat relation-oid.* > /dev/null \n> \n\nI'm not sure what you mean. in psql, select * piped to /dev/null is VERY CPU bound because of all the formatting. I haven't toyed with COPY. Do you mean the actual files? 'dd' tests from actual files are similar to fio, but not as consistent and hard to add concurrency. That is faster than postgres.\n\n> \n> > A single SSD with supercapacitor will go about 500MB/sec by itself\n> \n> > next spring. I will easily be able to build a system with 2GB/sec\n> \n> > I/O for under $10k.\n> \n> \n> \n> What filesystem are you using? Readahead?\n> Can you try to check the filesystemfragmentation of the table using filefrag?\n> \nXFS, defragmented once a day. Readahead 40960 (20MB, 1MB per spindle). two raid 10 arrays, each 10 discs each (2 hot spare), software raid-0 tying those together (md, 1MB blocks). Two Adaptec 5805 (or 5085, the external SAS one). A third raid card for the OS/xlog with 4x10krpm sas drives internal.\n\nFragmentation quickly takes this down a lot as do small files and concurrent activity, since its only enough spindles for ~2000 iops. But its almost all large reporting queries on partitioned tables (500,000 partitions). A few smaller tables are starting to cause too many seeks so those might end up on a smaller, high iops tablespace later.\n\nOver time the disks have filled up and there is a significant slowdown in sequential transfer at the end of the partition -- 600MB/sec max. That is still CPU bound on most scans, but postgres can go that fast on some scans.\n\nOff topic:\nOther interesting features is how this setup causes the system tables to bloat by factors of 2x to 8x each week, and requires frequent vacuum full + reindex on several of them else they become 1.5GB in size. Nothing like lots of temp table work + hour long concurrent transactions to make the system catalog bloat. I suppose with 8.4 many temp tables could be replaced using WITH queries, but in other cases analyzing a temp table is the only way to get a sane query plan.\n\n\n> -- \n> Jesper\n> \n> \n> \n\n",
"msg_date": "Thu, 21 Oct 2010 16:11:22 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow count(*) again..."
},
{
"msg_contents": "Tom Lane wrote:\n> At this point what we've got is 25% of the runtime in nodeAgg.c overhead,\n> and it's difficult to see how to get any real improvement without tackling\n> that. Rather than apply the patch shown above, I'm tempted to think about\n> hard-wiring COUNT(*) as a special case in nodeAgg.c such that we don't go\n> through advance_aggregates/advance_transition_function at all, but just\n> increment a counter directly. However, that would very clearly be\n> optimizing COUNT(*) and nothing else. Given the opinions expressed\n> elsewhere in this thread that heavy reliance on COUNT(*) represents\n> bad application design, I'm not sure that such a patch would meet with\n> general approval.\n> \n> Actually the patch shown above is optimizing COUNT(*) and nothing else,\n> too, since it's hard to conceive of any other zero-argument aggregate.\n> \n> Anyway, if anyone is hot to make COUNT(*) faster, that's where to look.\n> I don't think any of the previous discussion in this thread is on-point\n> at all, except for the parts where people suggested avoiding it.\n\nDo we want a TODO about optimizing COUNT(*) to avoid aggregate\nprocessing overhead?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Tue, 1 Feb 2011 17:47:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Slow count(*) again..."
},
{
"msg_contents": "\n\nOn 02/01/2011 05:47 PM, Bruce Momjian wrote:\n> Tom Lane wrote:\n>> At this point what we've got is 25% of the runtime in nodeAgg.c overhead,\n>> and it's difficult to see how to get any real improvement without tackling\n>> that. Rather than apply the patch shown above, I'm tempted to think about\n>> hard-wiring COUNT(*) as a special case in nodeAgg.c such that we don't go\n>> through advance_aggregates/advance_transition_function at all, but just\n>> increment a counter directly. However, that would very clearly be\n>> optimizing COUNT(*) and nothing else. Given the opinions expressed\n>> elsewhere in this thread that heavy reliance on COUNT(*) represents\n>> bad application design, I'm not sure that such a patch would meet with\n>> general approval.\n>>\n>> Actually the patch shown above is optimizing COUNT(*) and nothing else,\n>> too, since it's hard to conceive of any other zero-argument aggregate.\n>>\n>> Anyway, if anyone is hot to make COUNT(*) faster, that's where to look.\n>> I don't think any of the previous discussion in this thread is on-point\n>> at all, except for the parts where people suggested avoiding it.\n> Do we want a TODO about optimizing COUNT(*) to avoid aggregate\n> processing overhead?\n\nWhether or not it's bad application design, it's ubiquitous, and we \nshould make it work as best we can, IMNSHO. This often generates \ncomplaints about Postgres, and if we really plan for world domination \nthis needs to be part of it.\n\ncheers\n\nandrew\n",
"msg_date": "Tue, 01 Feb 2011 18:03:39 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Slow count(*) again..."
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 02/01/2011 05:47 PM, Bruce Momjian wrote:\n>> Tom Lane wrote:\n>>> At this point what we've got is 25% of the runtime in nodeAgg.c overhead,\n>>> and it's difficult to see how to get any real improvement without tackling\n>>> that.\n\n>> Do we want a TODO about optimizing COUNT(*) to avoid aggregate\n>> processing overhead?\n\n> Whether or not it's bad application design, it's ubiquitous, and we \n> should make it work as best we can, IMNSHO. This often generates \n> complaints about Postgres, and if we really plan for world domination \n> this needs to be part of it.\n\nI don't think that saving ~25% on COUNT(*) runtime will help that at all.\nThe people who complain about it expect it to be instantaneous.\n\nIf this sort of hack were free, I'd be all for doing it anyway; but I'm\nconcerned that adding tests to enable a fast path will slow down every\nother aggregate, or else duplicate a lot of code that we'll then have to\nmaintain.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Feb 2011 18:12:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Slow count(*) again... "
},
{
"msg_contents": "On 2/1/2011 5:47 PM, Bruce Momjian wrote:\n> Do we want a TODO about optimizing COUNT(*) to avoid aggregate\n> processing overhead?\n>\n\nDefinitely not. In my opinion, and I've seen more than a few database \ndesigns, having count(*) is almost always an error.\nIf I am counting a large table like the one below, waiting for 30 \nseconds more is not going to make much of a difference.\nTo paraphrase Kenny Rogers, it will be time enough for counting when the \napplication is done.\n\nTiming is on.\nnews=> select count(*) from moreover_documents_y2011m01;\n count\n----------\n 20350907\n(1 row)\n\nTime: 124142.437 ms\nnews=>\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 01 Feb 2011 18:21:04 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Slow count(*) again..."
},
{
"msg_contents": "On 2/1/2011 6:03 PM, Andrew Dunstan wrote:\n> Whether or not it's bad application design, it's ubiquitous, and we\n> should make it work as best we can, IMNSHO. This often generates\n> complaints about Postgres, and if we really plan for world domination\n> this needs to be part of it.\n\nThere are many other things to fix first. One of them would be optimizer \ndecisions when a temp table is involved. I would also vote for wait \nevent interface, tracing and hints, much rather than speeding up \ncount(*). World domination will not be achieved by speeding up count(*), \nit will be achieved by providing overall performance akin to what the \nplayer who has already achieved the world domination. I believe that the \ncompany is called \"Oracle Corp.\" or something like that?\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 01 Feb 2011 18:44:17 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Tue, Feb 1, 2011 at 3:44 PM, Mladen Gogala <[email protected]>wrote:\n\n> On 2/1/2011 6:03 PM, Andrew Dunstan wrote:\n>\n>> Whether or not it's bad application design, it's ubiquitous, and we\n>> should make it work as best we can, IMNSHO. This often generates\n>> complaints about Postgres, and if we really plan for world domination\n>> this needs to be part of it.\n>>\n>\n> There are many other things to fix first. One of them would be optimizer\n> decisions when a temp table is involved. I would also vote for wait event\n> interface, tracing and hints, much rather than speeding up count(*). World\n> domination will not be achieved by speeding up count(*), it will be achieved\n> by providing overall performance akin to what the player who has already\n> achieved the world domination. I believe that the company is called \"Oracle\n> Corp.\" or something like that?\n>\n>\n> Mladen Gogala\n> Sr. Oracle DBA\n>\n\nDon't listen to him. He's got an oracle bias. Slashdot already announced\nthat NoSQL is actually going to dominate the world, so postgres has already\nlost that battle. Everything postgres devs do now is just an exercise in\nrelational masturbation. Trust me.\n\nOn Tue, Feb 1, 2011 at 3:44 PM, Mladen Gogala <[email protected]> wrote:\nOn 2/1/2011 6:03 PM, Andrew Dunstan wrote:\n\nWhether or not it's bad application design, it's ubiquitous, and we\nshould make it work as best we can, IMNSHO. This often generates\ncomplaints about Postgres, and if we really plan for world domination\nthis needs to be part of it.\n\n\nThere are many other things to fix first. One of them would be optimizer decisions when a temp table is involved. I would also vote for wait event interface, tracing and hints, much rather than speeding up count(*). World domination will not be achieved by speeding up count(*), it will be achieved by providing overall performance akin to what the player who has already achieved the world domination. I believe that the company is called \"Oracle Corp.\" or something like that?\nMladen GogalaSr. Oracle DBA Don't listen to him. He's got an oracle bias. Slashdot already announced that NoSQL is actually going to dominate the world, so postgres has already lost that battle. Everything postgres devs do now is just an exercise in relational masturbation. Trust me.",
"msg_date": "Tue, 1 Feb 2011 19:13:38 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Samuel Gendler wrote:\n>\n> \n> Don't listen to him. He's got an oracle bias.\nAnd bad sinuses, too.\n> Slashdot already announced that NoSQL is actually going to dominate \n> the world, so postgres has already lost that battle. Everything \n> postgres devs do now is just an exercise in relational masturbation. \n> Trust me.\n>\nI knew that there is some entertainment value on this list. Samuel, your \npoint of view is very..., er, refreshing. Trust me.\n\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n",
"msg_date": "Tue, 01 Feb 2011 22:40:16 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Tue, Feb 1, 2011 at 7:40 PM, Mladen Gogala <[email protected]>wrote:\n\n> Samuel Gendler wrote:\n>\n>>\n>> Don't listen to him. He's got an oracle bias.\n>>\n> And bad sinuses, too.\n>\n> Slashdot already announced that NoSQL is actually going to dominate the\n>> world, so postgres has already lost that battle. Everything postgres devs\n>> do now is just an exercise in relational masturbation. Trust me.\n>>\n>> I knew that there is some entertainment value on this list. Samuel, your\n> point of view is very..., er, refreshing. Trust me.\n>\n>\nYou get that that was sarcasm, right?\n\n\n>\n>\n\nOn Tue, Feb 1, 2011 at 7:40 PM, Mladen Gogala <[email protected]> wrote:\nSamuel Gendler wrote:\n\n\n Don't listen to him. He's got an oracle bias.\n\nAnd bad sinuses, too.\n\n Slashdot already announced that NoSQL is actually going to dominate the world, so postgres has already lost that battle. Everything postgres devs do now is just an exercise in relational masturbation. Trust me.\n\n\nI knew that there is some entertainment value on this list. Samuel, your point of view is very..., er, refreshing. Trust me.\nYou get that that was sarcasm, right?",
"msg_date": "Tue, 1 Feb 2011 20:07:47 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n> > On 02/01/2011 05:47 PM, Bruce Momjian wrote:\n> >> Tom Lane wrote:\n> >>> At this point what we've got is 25% of the runtime in nodeAgg.c overhead,\n> >>> and it's difficult to see how to get any real improvement without tackling\n> >>> that.\n> \n> >> Do we want a TODO about optimizing COUNT(*) to avoid aggregate\n> >> processing overhead?\n> \n> > Whether or not it's bad application design, it's ubiquitous, and we \n> > should make it work as best we can, IMNSHO. This often generates \n> > complaints about Postgres, and if we really plan for world domination \n> > this needs to be part of it.\n> \n> I don't think that saving ~25% on COUNT(*) runtime will help that at all.\n> The people who complain about it expect it to be instantaneous.\n> \n> If this sort of hack were free, I'd be all for doing it anyway; but I'm\n> concerned that adding tests to enable a fast path will slow down every\n> other aggregate, or else duplicate a lot of code that we'll then have to\n> maintain.\n\nOK, thank you.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Wed, 2 Feb 2011 11:03:37 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Slow count(*) again..."
},
{
"msg_contents": "On Tue, Feb 1, 2011 at 6:44 PM, Mladen Gogala <[email protected]> wrote:\n> On 2/1/2011 6:03 PM, Andrew Dunstan wrote:\n>>\n>> Whether or not it's bad application design, it's ubiquitous, and we\n>> should make it work as best we can, IMNSHO. This often generates\n>> complaints about Postgres, and if we really plan for world domination\n>> this needs to be part of it.\n>\n> There are many other things to fix first. One of them would be optimizer\n> decisions when a temp table is involved.\n\nIt would be pretty hard to make autoanalyze work on such tables\nwithout removing some of the performance benefits of having such\ntables in the first place - namely, the local buffer manager. But you\ncould ANALYZE them by hand.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 2 Feb 2011 12:19:08 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
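A minimal sketch of the manual ANALYZE mentioned above; the temp table contents and the join target (some_big_table) are made up, and the point is only the explicit ANALYZE between the bulk load and the first query that needs the statistics:

CREATE TEMPORARY TABLE tmp_ids AS
    SELECT g AS id FROM generate_series(1, 100000) g;
ANALYZE tmp_ids;   -- autovacuum/autoanalyze never touches temp tables, so collect stats by hand
EXPLAIN SELECT * FROM some_big_table b JOIN tmp_ids t ON t.id = b.id;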
{
"msg_contents": "Robert Haas wrote:\n> On Tue, Feb 1, 2011 \n> It would be pretty hard to make autoanalyze work on such tables\n> without removing some of the performance benefits of having such\n> tables in the first place - namely, the local buffer manager. But you\n> could ANALYZE them by hand.\n>\n> \nNot necessarily autoanalyze, some default rules for the situations when \nstats is not there should be put in place,\nlike the following:\n1) If there is a usable index on the temp table - use it.\n2) It there isn't a usable index on the temp table and there is a join, \nmake the temp table the first table\n in the nested loop join.\n\nPeople are complaining about the optimizer not using the indexes all \nover the place, there should be a way to\nmake the optimizer explicitly prefer the indexes, like was the case with \nOracle's venerable RBO (rules based\noptimizer). RBO didn't use statistics, it had a rank of access method \nand used the access method with the highest\nrank of all available access methods. In practice, it translated into: \nif an index exists - use it.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 02 Feb 2011 13:11:33 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Wed, Feb 2, 2011 at 12:11 PM, Mladen Gogala\n<[email protected]> wrote:\n> Robert Haas wrote:\n>>\n>> On Tue, Feb 1, 2011 It would be pretty hard to make autoanalyze work on\n>> such tables\n>> without removing some of the performance benefits of having such\n>> tables in the first place - namely, the local buffer manager. But you\n>> could ANALYZE them by hand.\n>>\n>>\n>\n> Not necessarily autoanalyze, some default rules for the situations when\n> stats is not there should be put in place,\n> like the following:\n> 1) If there is a usable index on the temp table - use it.\n> 2) It there isn't a usable index on the temp table and there is a join, make\n> the temp table the first table\n> in the nested loop join.\n>\n> People are complaining about the optimizer not using the indexes all over\n> the place, there should be a way to\n> make the optimizer explicitly prefer the indexes, like was the case with\n> Oracle's venerable RBO (rules based\n> optimizer). RBO didn't use statistics, it had a rank of access method and\n> used the access method with the highest\n> rank of all available access methods. In practice, it translated into: if an\n> index exists - use it.\n\nHowever, sometimes using an index results in a HORRIBLE HORRIBLE plan.\nI recently encountered the issue myself, and plopping an ANALYZE\n$tablename in there, since I was using a temporary table anyway, make\nall the difference. The planner switched from an index-based query to\na sequential scan, and a sequential scan was (is) vastly more\nefficient in this particular case.\n\nPersonally, I'd get rid of autovacuum/autoanalyze support on temporary\ntables (they typically have short lives and are often accessed\nimmediately after creation preventing the auto* stuff from being\nuseful anyway), *AND* every time I ask I'm always told \"make sure\nANALYZE the table before you use it\".\n\n\n-- \nJon\n",
"msg_date": "Wed, 2 Feb 2011 12:19:20 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Wed, Feb 2, 2011 at 1:11 PM, Mladen Gogala <[email protected]> wrote:\n> Not necessarily autoanalyze, some default rules for the situations when\n> stats is not there should be put in place,\n> like the following:\n> 1) If there is a usable index on the temp table - use it.\n> 2) It there isn't a usable index on the temp table and there is a join, make\n> the temp table the first table\n> in the nested loop join.\n\nThe default selectivity estimates ought to make this happen already.\n\ncreate temporary table foo (a integer, b text);\nCREATE TABLE\ninsert into foo select g, random()::text||random()::text from\ngenerate_series(1, 10000) g;\nINSERT 0 10000\nalter table foo add primary key (a);\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index\n\"foo_pkey\" for table \"foo\"\nALTER TABLE\nexplain select * from foo where a = 1;\n QUERY PLAN\n---------------------------------------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..8.27 rows=1 width=36)\n Index Cond: (a = 1)\n(2 rows)\n\nYou're going to need to come up with actual examples of situations\nthat you think can be improved upon if you want to get anywhere here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 2 Feb 2011 13:20:59 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Wed, Feb 2, 2011 at 1:19 PM, Jon Nelson <[email protected]> wrote:\n> However, sometimes using an index results in a HORRIBLE HORRIBLE plan.\n> I recently encountered the issue myself, and plopping an ANALYZE\n> $tablename in there, since I was using a temporary table anyway, make\n> all the difference. The planner switched from an index-based query to\n> a sequential scan, and a sequential scan was (is) vastly more\n> efficient in this particular case.\n\nYep...\n\n> Personally, I'd get rid of autovacuum/autoanalyze support on temporary\n> tables\n\nWe don't have any such support, which I think is the root of Mladen's complaint.\n\n> (they typically have short lives and are often accessed\n> immediately after creation preventing the auto* stuff from being\n> useful anyway), *AND* every time I ask I'm always told \"make sure\n> ANALYZE the table before you use it\".\n\nYeah. Any kind of bulk load into an empty table can be a problem,\neven if it's not temporary. When you load a bunch of data and then\nimmediately plan a query against it, autoanalyze hasn't had a chance\nto do its thing yet, so sometimes you get a lousy plan. In the case\nof temporary tables, this can happen even if there's a delay before\nyou use the data. Some sort of fix for this - where the first query\nthat needs the stats does an analyze first - seems like it could be\nquite useful (although it would suck if the transaction that took it\nupon itself to do the analyze then rolled back, losing the stats and\nforcing the next guy to do it all over again).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 2 Feb 2011 13:32:28 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> People are complaining about the optimizer not using the indexes all \n> over the place, there should be a way to\n> make the optimizer explicitly prefer the indexes, like was the case \n> with Oracle's venerable RBO (rules based\n> optimizer). RBO didn't use statistics, it had a rank of access method \n> and used the access method with the highest\n> rank of all available access methods. In practice, it translated into: \n> if an index exists - use it.\n\nGiven that even Oracle kicked out the RBO a long time ago, I'm not so \nsure longing for those good old days will go very far. I regularly see \nqueries that were tweaked to always use an index run at 1/10 or less the \nspeed of a sequential scan against the same data. The same people \ncomplaining \"all over the place\" about this topic are also the sort who \nwrite them. There are two main fallacies at play here that make this \nhappen:\n\n1) Even if you use an index, PostgreSQL must still retrieve the \nassociated table data to execute the query in order to execute its \nversion of MVCC\n\n2) The sort of random I/O done by index lookups can be as much as 50X as \nexpensive on standard hard drives as sequential, if every block goes to \nphysical hardware.\n\nIf I were to work on improving this area, it would be executing on some \nplans a few of us have sketched out for exposing some notion about what \nindexes are actually in memory to the optimizer. There are more obvious \nfixes to the specific case of temp tables though.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 02 Feb 2011 13:47:21 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Greg Smith wrote:\n> Given that even Oracle kicked out the RBO a long time ago, I'm not so \n> sure longing for those good old days will go very far. I regularly see \n> queries that were tweaked to always use an index run at 1/10 or less the \n> speed of a sequential scan against the same data. The same people \n> complaining \"all over the place\" about this topic are also the sort who \n> write them. There are two main fallacies at play here that make this \n> happen:\n> \nOracle just gives an impression that RBO is gone. It's actually still \nthere, even in 11.2:\n\nConnected to:\nOracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production\nWith the Partitioning, OLAP, Data Mining and Real Application Testing \noptions\n\nSQL> alter session set optimizer_mode=rule;\n\nSession altered.\n\nOracle people were just as puritanical as Postgres people, if not more \nso. However, the huge backlash made them reconsider the decision. RBO is \nofficially de-supported, obsolete and despised but it is also widely \nused, even in the Oracle's own SYS schema. Oracle is having huge \nproblems with trying to get people to the cost based optimizer, but they \nare not yet quite done.\n\n> 1) Even if you use an index, PostgreSQL must still retrieve the \n> associated table data to execute the query in order to execute its \n> version of MVCC\n> \nOf course. Nobody contests that. However, index scans for OLTP are \nindispensable. Sequential scans just don't do the trick in some situations.\n\n\n> 2) The sort of random I/O done by index lookups can be as much as 50X as \n> expensive on standard hard drives as sequential, if every block goes to \n> physical hardware.\n> \n\nGreg, how many questions about queries not using an index have you seen? \nThere is a reason why people keep asking that. The sheer number of \nquestions like that on this group should tell you that there is a \nproblem there. \nThere must be a relatively simple way of influencing optimizer \ndecisions. With all due respect, I consider myself smarter than the \noptimizer. I'm 6'4\", 235LBS so telling me that you disagree and that I \nam more stupid than a computer program, would not be a smart thing to \ndo. Please, do not misunderestimate me.\n\n> If I were to work on improving this area, it would be executing on some \n> plans a few of us have sketched out for exposing some notion about what \n> indexes are actually in memory to the optimizer. There are more obvious \n> fixes to the specific case of temp tables though.\n>\n> \nI've had a run in with a temporary table, that I had to resolve by \ndisabling hash join and merge join, that really irritated me.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 02 Feb 2011 15:54:26 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> > 2) The sort of random I/O done by index lookups can be as much as 50X as \n> > expensive on standard hard drives as sequential, if every block goes to \n> > physical hardware.\n> > \n> \n> Greg, how many questions about queries not using an index have you seen? \n> There is a reason why people keep asking that. The sheer number of \n> questions like that on this group should tell you that there is a \n> problem there. \n\nVery few of those reports found that an index scan was indeed faster ---\nthey just assumed so but when they actually tested it, they understood.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Wed, 2 Feb 2011 16:11:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
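The usual way to run that test (the FAQ entry cited later in the thread walks through it) is to forbid the sequential scan for a moment and compare the two EXPLAIN ANALYZE timings; the table and column names here are stand-ins:

EXPLAIN ANALYZE SELECT * FROM mytable WHERE indexed_col BETWEEN 1 AND 5000;
SET enable_seqscan = off;    -- current session only, and only for the comparison
EXPLAIN ANALYZE SELECT * FROM mytable WHERE indexed_col BETWEEN 1 AND 5000;
RESET enable_seqscan;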
{
"msg_contents": "On Wed, Feb 02, 2011 at 03:54:26PM -0500, Mladen Gogala wrote:\n> Greg Smith wrote:\n>> Given that even Oracle kicked out the RBO a long time ago, I'm not so sure \n>> longing for those good old days will go very far. I regularly see queries \n>> that were tweaked to always use an index run at 1/10 or less the speed of \n>> a sequential scan against the same data. The same people complaining \"all \n>> over the place\" about this topic are also the sort who write them. There \n>> are two main fallacies at play here that make this happen:\n>> \n> Oracle just gives an impression that RBO is gone. It's actually still \n> there, even in 11.2:\n>\n> Connected to:\n> Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production\n> With the Partitioning, OLAP, Data Mining and Real Application Testing \n> options\n>\n> SQL> alter session set optimizer_mode=rule;\n>\n> Session altered.\n>\n> Oracle people were just as puritanical as Postgres people, if not more so. \n> However, the huge backlash made them reconsider the decision. RBO is \n> officially de-supported, obsolete and despised but it is also widely used, \n> even in the Oracle's own SYS schema. Oracle is having huge problems with \n> trying to get people to the cost based optimizer, but they are not yet \n> quite done.\n>\n\nThis problem in getting people to migrate to the cost-based optimizer\nseems to stem from the original use of the rule based optimizer and\nthe ability to (mis)hint every option in the DB. If I were running\na shop with 100k-1m lines of SQL code with embedded hints, I would\nrun screaming at the QA required to move to the cost-based system.\nIn many ways, the RBO itself + hints is hindering the adoption of\nthe CBO. Are there any stats on the adoption/use of the CBO on new\nOracle users/shops?\n\n>> 1) Even if you use an index, PostgreSQL must still retrieve the associated \n>> table data to execute the query in order to execute its version of MVCC\n>> \n> Of course. Nobody contests that. However, index scans for OLTP are \n> indispensable. Sequential scans just don't do the trick in some situations.\n>\n>\n>> 2) The sort of random I/O done by index lookups can be as much as 50X as \n>> expensive on standard hard drives as sequential, if every block goes to \n>> physical hardware.\n>> \n>\n> Greg, how many questions about queries not using an index have you seen? \n> There is a reason why people keep asking that. The sheer number of \n> questions like that on this group should tell you that there is a problem \n> there. There must be a relatively simple way of influencing optimizer \n> decisions. With all due respect, I consider myself smarter than the \n> optimizer. I'm 6'4\", 235LBS so telling me that you disagree and that I am \n> more stupid than a computer program, would not be a smart thing to do. \n> Please, do not misunderestimate me.\n>\n\nI see them come up regularly. However, there really are not all that\nmany when you consider how many people are using PostgreSQL. Its\noptimizer works quite well. Knowing how hints can be misused, I would\nrather have the developers use their resource to improve the optimizer\nthan spend time on a hint system that would be mis-used over and over\nby beginners, with the attendent posts to HACKERS/PERFORM/NOVICE/...\ngroups. I certainly have had a fun time or two in my limited Oracle\nexperience tracking down a hint-based performance problem, so it\nworks both ways.\n\nRegards,\nKen\n",
"msg_date": "Wed, 2 Feb 2011 15:14:06 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "> With all\n> due respect, I consider myself smarter than the optimizer. I'm 6'4\", 235LBS\n> so telling me that you disagree and that I am more stupid than a computer\n> program, would not be a smart thing to do. Please, do not misunderestimate\n> me.\n\nI don't see computer programs make thinly veiled threats, especially\nin a public forum.\nI'll do what you claim is not the smart thing and disagree with you.\nYou are wrong.\nYou are dragging the signal-to-noise ratio of this discussion down.\nYou owe Greg an apology.\n",
"msg_date": "Wed, 2 Feb 2011 16:25:00 -0500",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Kenneth Marshall wrote:\n>\n>\n> I see them come up regularly. However, there really are not all that\n> many when you consider how many people are using PostgreSQL. Its\n> optimizer works quite well. Knowing how hints can be misused, I would\n> rather have the developers use their resource to improve the optimizer\n> than spend time on a hint system that would be mis-used over and over\n> by beginners, with the attendent posts to HACKERS/PERFORM/NOVICE/...\n> groups. I certainly have had a fun time or two in my limited Oracle\n> experience tracking down a hint-based performance problem, so it\n> works both ways.\n>\n> Regards,\n> Ken\n> \n\nKen, the story is really simple: when a problem with a bad query arises, \nthe DBA has to make it work, one way or another. The weapon of choice \nare usually hints, but there is also the ability to set the critical \nstatistic variables to the desired values. If my users are screaming \nthat the application response time is slow, I cannot afford to wait for \ndevelopers to fix the optimizer. I will therefore not use Postgres for \nmy mission critical applications, as long as there are no hints.\n\nOracle is expensive, but not as expensive as the downtime. And that's \nthe bottom line. Yes, hints can cause problems, but the absence of hints \nand wait interface can cause even bigger problems. This is not a choice \nbetween good and evil, as in the Nick Cage movies, it is a choice \nbetween evil and lesser evil. I would love to be able to use Postgres \nfor some of my mission critical applications. Saving tens of thousands \nof dollars would make me a company hero and earn me a hefty bonus, so I \nhave a personal incentive to do so. Performance is normally not a \nproblem. If the application is carefully crafted and designed, it will \nwork more or less the same as Oracle. However, applications sometimes \nneed maintenance. Ruth from sales wants the IT to start ingesting data \nin UTF8 because we have clients in other countries. She also wants us to \ntrack language and countries. Columns have to be added to the tables, \napplications have to be changed, foreign keys added, triggers altered, \netc, etc. What you end up with is usually less than optimal. \nApplications have life cycle and they move from being young and sexy to \nbeing an old fart application, just as people do. Hints are Viagra for \napplications. Under the ideal conditions, it is not needed, but once the \napp is past certain age....\n\nThe other problem is that plans change with the stats, not necessarily \nfor the better. People clean a large table, Postgres runs auto-vacuum, \nstats change and all the plans change, too. If some of the new plans are \nunacceptable, there isn't much you can do about it, but to hint it to \nthe proper plan. Let's not pretend, Postgres does support sort of hints \nwith the \"set enable_<access method>\" and random/sequential scan cost. \nAlso, effective cache size is openly used to trick the optimizer into \nbelieving that there is more memory than there actually is. Hints are \nalready there, they're just not as elegant as Oracle's solution. If I \nset sequential page cost to 4 and random page cost to 1, I have, \neffectively, introduced rule based optimizer to Postgres. I am not sure \nwhy is there such a puritanical resistance to hints on one side and, on \nother side, there are means to achieve exactly the same thing. As my \nsignature line says, I am a senior Oracle DBA, with quite a bit of \nexperience. 
What I need to approve moving mission critical applications \nto Postgres are better monitoring tools and something to help me with \nquick and dirty fixes when necessary. I am willing to learn, I got the \ncompany to invest some money and do pilot projects, but I am not \nprepared to have my boss saying \"we could have fixed the problem, had we \nstayed on Oracle\".\n\nBTW:\nOn my last airplane trip, I saw Nick Cage in the \"Sorcerer's Apprentice\" \nand my brain still hurts.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 02 Feb 2011 16:59:50 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
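For completeness, the de facto hints referred to above look like the following; SET LOCAL keeps the change confined to one transaction so it cannot leak into the rest of the session, and the query itself is a placeholder:

BEGIN;
SET LOCAL enable_hashjoin = off;     -- reverts automatically at COMMIT or ROLLBACK
SET LOCAL enable_mergejoin = off;
-- SET LOCAL random_page_cost = 2;   -- the cost knobs mentioned above work the same way
EXPLAIN ANALYZE SELECT * FROM t1 JOIN t2 USING (id);
COMMIT;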
{
"msg_contents": "Justin Pitts wrote:\n>> With all\n>> due respect, I consider myself smarter than the optimizer. I'm 6'4\", 235LBS\n>> so telling me that you disagree and that I am more stupid than a computer\n>> program, would not be a smart thing to do. Please, do not misunderestimate\n>> me.\n>> \n>\n> I don't see computer programs make thinly veiled threats, especially\n> in a public forum.\n> I'll do what you claim is not the smart thing and disagree with you.\n> You are wrong.\n> You are dragging the signal-to-noise ratio of this discussion down.\n> You owe Greg an apology.\n> \nI apologize if that was understood as a threat. It was actually a joke. \nI thought that my using of the word \"misunderestimate\" has made it \nabundantly clear. Apparently, G.W. doesn't have as many fans as I have \npreviously thought. Once again, it was a joke, I humbly apologize if \nthat was misunderstood.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 02 Feb 2011 17:03:28 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> Greg, how many questions about queries not using an index have you \n> seen? There is a reason why people keep asking that. The sheer number \n> of questions like that on this group should tell you that there is a \n> problem there. There must be a relatively simple way of influencing \n> optimizer decisions. \n\nI think that's not quite the right question. For every person like \nyourself who is making an informed \"the optimizer is really picking the \nwrong index\" request, I think there are more who are asking for that but \nare not actually right that it will help. I think you would agree that \nthis area is hard to understand, and easy to make mistakes about, yes? \nSo the right question is \"how many questions about queries not using an \nindex would have actually benefitted from the behavior they asked for?\" \nThat's a much fuzzier and harder to answer question.\n\nI agree that it would be nice to provide a UI for the informed. \nUnfortunately, the problem I was pointing out is that doing so could, on \naverage, make PostgreSQL appear to run worse to people who use it. \nThings like which index and merge type are appropriate changes as data \ncomes in, and some of the plan switches that occur because of that are \nthe right thing to do--not a mistake on the optimizer's part. I'm sure \nyou've seen people put together plan rules for the RBO that worked fine \non small data sets, but were very wrong as production data volume went \nup. That problem should be less likely to happen to a CBO approach. It \nisn't always, of course, but trying to build a RBO-style approach from \nscratch now to resolve those cases isn't necessarily the right way to \nproceed.\n\nGiven limited resources as a development community, it's hard to justify \nworking on hinting--which has its own complexity to do right--when there \nare so many things that I think are more likely to help *everyone* that \ncould be done instead. The unfortunate situation we're in, unlike \nOracle, is that there isn't a practically infinite amount of money \navailable to fund every possible approach here, then see which turn out \nto work later after our customers suffer through the bad ones for a while.\n\n> With all due respect, I consider myself smarter than the optimizer. \n> I'm 6'4\", 235LBS so telling me that you disagree and that I am more \n> stupid than a computer program, would not be a smart thing to do. \n> Please, do not misunderestimate me.\n\nI remember when I used to only weigh that much. You are lucky to be \nsuch a slim little guy!\n\nOh, I guess I should add, :)\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 02 Feb 2011 19:03:06 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Greg Smith wrote:\n> Mladen Gogala wrote:\n> > Greg, how many questions about queries not using an index have you \n> > seen? There is a reason why people keep asking that. The sheer number \n> > of questions like that on this group should tell you that there is a \n> > problem there. There must be a relatively simple way of influencing \n> > optimizer decisions. \n> \n> I think that's not quite the right question. For every person like \n> yourself who is making an informed \"the optimizer is really picking the \n> wrong index\" request, I think there are more who are asking for that but \n> are not actually right that it will help. I think you would agree that \n> this area is hard to understand, and easy to make mistakes about, yes? \n> So the right question is \"how many questions about queries not using an \n> index would have actually benefitted from the behavior they asked for?\" \n> That's a much fuzzier and harder to answer question.\n\nAgreed. I created an FAQ entry years ago to explain this point and tell\npeople how to test it:\n\n\thttp://wiki.postgresql.org/wiki/FAQ#Why_are_my_queries_slow.3F_Why_don.27t_they_use_my_indexes.3F\n\nOnce I added that FAQ we had many fewer email questions about index\nchoice.\n\n> > With all due respect, I consider myself smarter than the optimizer. \n> > I'm 6'4\", 235LBS so telling me that you disagree and that I am more \n> > stupid than a computer program, would not be a smart thing to do. \n> > Please, do not misunderestimate me.\n> \n> I remember when I used to only weigh that much. You are lucky to be \n> such a slim little guy!\n> \n> Oh, I guess I should add, :)\n\nOh, wow, what a great retort. :-)\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Wed, 2 Feb 2011 19:13:44 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Thank you.\n\nIt appears I owe an apology also, for jumping to that conclusion. It\nwas rash and unfair of me. I am sorry.\n\nOn Wed, Feb 2, 2011 at 5:03 PM, Mladen Gogala <[email protected]> wrote:\n> Justin Pitts wrote:\n>>>\n>>> With all\n>>> due respect, I consider myself smarter than the optimizer. I'm 6'4\",\n>>> 235LBS\n>>> so telling me that you disagree and that I am more stupid than a computer\n>>> program, would not be a smart thing to do. Please, do not\n>>> misunderestimate\n>>> me.\n>>>\n>>\n>> I don't see computer programs make thinly veiled threats, especially\n>> in a public forum.\n>> I'll do what you claim is not the smart thing and disagree with you.\n>> You are wrong.\n>> You are dragging the signal-to-noise ratio of this discussion down.\n>> You owe Greg an apology.\n>>\n>\n> I apologize if that was understood as a threat. It was actually a joke. I\n> thought that my using of the word \"misunderestimate\" has made it abundantly\n> clear. Apparently, G.W. doesn't have as many fans as I have previously\n> thought. Once again, it was a joke, I humbly apologize if that was\n> misunderstood.\n>\n> --\n>\n> Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com The Leader in Integrated Media Intelligence Solutions\n>\n>\n>\n>\n",
"msg_date": "Wed, 2 Feb 2011 19:29:20 -0500",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Wed, Feb 2, 2011 at 7:03 PM, Greg Smith <[email protected]> wrote:\n> Given limited resources as a development community, it's hard to justify\n> working on hinting--which has its own complexity to do right--when there are\n> so many things that I think are more likely to help *everyone* that could be\n> done instead. The unfortunate situation we're in, unlike Oracle, is that\n> there isn't a practically infinite amount of money available to fund every\n> possible approach here, then see which turn out to work later after our\n> customers suffer through the bad ones for a while.\n\nThere are actually very few queries where I actually want to force the\nplanner to use a particular index, which is the sort of thing Oracle\nlets you do. If it's a simple query and\nrandom_page_cost/seq_page_cost are reasonably well adjusted, the\nplanner's choice is very, very likely to be correct. If it's a\ncomplex query, the planner has more likelihood of going wrong, but\nforcing it to use an index on one table isn't going to help much if\nthat table is being used on the inner side of a hash join. You almost\nneed to be able to force the entire plan into the shape you've chosen,\nand that's a lot of work and not terribly robust. The most common\ntype of \"hard to fix\" query problem - by far - is a bad selectivity\nestimate. Being able to hint that would be worth more than any number\nof hints about which indexes to use, in my book.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 2 Feb 2011 21:01:07 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 2/2/2011 7:03 PM, Greg Smith wrote:\n> I think that's not quite the right question. For every person like\n> yourself who is making an informed \"the optimizer is really picking the\n> wrong index\" request, I think there are more who are asking for that but\n> are not actually right that it will help. I think you would agree that\n> this area is hard to understand, and easy to make mistakes about, yes?\n> So the right question is \"how many questions about queries not using an\n> index would have actually benefitted from the behavior they asked for?\"\n> That's a much fuzzier and harder to answer question.\n>\n> I agree that it would be nice to provide a UI for the informed.\n> Unfortunately, the problem I was pointing out is that doing so could, on\n> average, make PostgreSQL appear to run worse to people who use it.\nGreg, I understand your concerns, but let me point out two things:\n1) The basic mechanism is already there. PostgreSQL has a myriad of \nways to actually control the optimizer. One, completely analogous to \nOracle mechanisms, is to control the cost of sequential vs. random page \nscan. The other, completely analogous to Oracle hints, is based on the \ngroup of switches for turning on and off various join and access \nmethods. This also includes setting join_collapse limit to 1, to force \nthe desired join order. The third way is to actually make the optimizer \nwork a lot harder by setting gego_effort to 10 and \ndefault_statistics_target to 1000 or more, which will increase the size \nof histograms and increase the time and CPU spent on parsing. I can \nliterally force the plan of my choosing on Postgres optimizer. The \nmechanisms are already there, I am only pleading for a more elegant version.\n\n2) The guys who may spread Postgres and help it achieve the desired \nworld domination, discussed here the other day, are database \nadministrators in the big companies. If you get people from JP Morgan \nChase, Bank of America, Goldman Sachs or Lehman Brothers to start using \nPostgres for serious projects, the rest will follow the suit. People \nfrom some of these companies have already been seen on NYC Postgres \nmeetings.\nGranted, MySQL started on the other end of the spectrum, by being used \nfor ordering downloaded MP3 collections, but it had found its way into \nthe corporate server rooms, too. The techies at big companies are the \nguys who will or will not make it happen. And these guys are not \nbeginners. Appeasing them may actually go a long way.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Wed, 02 Feb 2011 21:45:19 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> The techies at big companies are the guys who will or will not make it \n> happen. And these guys are not beginners. Appeasing them may actually \n> go a long way.\n\nThe PostgreSQL community isn't real big on appeasing people if it's at \nthe expense of robustness or correctness, and this issue falls into that \ncategory. There are downsides to that, but good things too. Chasing \nafter whatever made people happy regardless of its impact on the server \ncode itself has in my mind contributed to why Oracle is so bloated and \nMySQL so buggy, to pick two examples from my favorite horse to whip. \nTrying to graft an alternate UI for the stuff that needs to be tweaked \nhere to do better, one flexible enough to actually handle the complexity \nof the job, is going to add some code with a new class of bugs and \ncontinous maintenance headaches. Being picky about rejecting such \nthings is part of the reason why the PostgreSQL code has developed a \ngood reputation.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 03 Feb 2011 01:16:36 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "02.02.11 20:32, Robert Haas написав(ла):\n>\n> Yeah. Any kind of bulk load into an empty table can be a problem,\n> even if it's not temporary. When you load a bunch of data and then\n> immediately plan a query against it, autoanalyze hasn't had a chance\n> to do its thing yet, so sometimes you get a lousy plan.\n\nMay be introducing something like 'AutoAnalyze' threshold will help? I \nmean that any insert/update/delete statement that changes more then x% \nof table (and no less then y records) must do analyze right after it was \nfinished.\nDefaults like x=50 y=10000 should be quite good as for me.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Thu, 03 Feb 2011 11:54:49 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, 3 Feb 2011, Vitalii Tymchyshyn wrote:\n\n> 02.02.11 20:32, Robert Haas ???????(??):\n>> \n>> Yeah. Any kind of bulk load into an empty table can be a problem,\n>> even if it's not temporary. When you load a bunch of data and then\n>> immediately plan a query against it, autoanalyze hasn't had a chance\n>> to do its thing yet, so sometimes you get a lousy plan.\n>\n> May be introducing something like 'AutoAnalyze' threshold will help? I mean \n> that any insert/update/delete statement that changes more then x% of table \n> (and no less then y records) must do analyze right after it was finished.\n> Defaults like x=50 y=10000 should be quite good as for me.\n\nIf I am understanding things correctly, a full Analyze is going over all \nthe data in the table to figure out patterns.\n\nIf this is the case, wouldn't it make sense in the situation where you are \nloading an entire table from scratch to run the Analyze as you are \nprocessing the data? If you don't want to slow down the main thread that's \ninserting the data, you could copy the data to a second thread and do the \nanalysis while it's still in RAM rather than having to read it off of disk \nafterwords.\n\nthis doesn't make sense for updates to existing databases, but the use \ncase of loading a bunch of data and then querying it right away isn't \n_that_ uncommon.\n\nDavid Lang\n",
"msg_date": "Thu, 3 Feb 2011 02:11:58 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 03, 2011 at 02:11:58AM -0800, [email protected] wrote:\n> On Thu, 3 Feb 2011, Vitalii Tymchyshyn wrote:\n>\n>> 02.02.11 20:32, Robert Haas ???????(??):\n>>> Yeah. Any kind of bulk load into an empty table can be a problem,\n>>> even if it's not temporary. When you load a bunch of data and then\n>>> immediately plan a query against it, autoanalyze hasn't had a chance\n>>> to do its thing yet, so sometimes you get a lousy plan.\n>>\n>> May be introducing something like 'AutoAnalyze' threshold will help? I \n>> mean that any insert/update/delete statement that changes more then x% of \n>> table (and no less then y records) must do analyze right after it was \n>> finished.\n>> Defaults like x=50 y=10000 should be quite good as for me.\n>\n> If I am understanding things correctly, a full Analyze is going over all \n> the data in the table to figure out patterns.\n>\n> If this is the case, wouldn't it make sense in the situation where you are \n> loading an entire table from scratch to run the Analyze as you are \n> processing the data? If you don't want to slow down the main thread that's \n> inserting the data, you could copy the data to a second thread and do the \n> analysis while it's still in RAM rather than having to read it off of disk \n> afterwords.\n>\n> this doesn't make sense for updates to existing databases, but the use case \n> of loading a bunch of data and then querying it right away isn't _that_ \n> uncommon.\n>\n> David Lang\n>\n\n+1 for in-flight ANALYZE. This would be great for bulk loads of\nreal tables as well as temp tables.\n\nCheers,\nKen\n",
"msg_date": "Thu, 3 Feb 2011 07:41:42 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 7:41 AM, Kenneth Marshall <[email protected]> wrote:\n> On Thu, Feb 03, 2011 at 02:11:58AM -0800, [email protected] wrote:\n>> On Thu, 3 Feb 2011, Vitalii Tymchyshyn wrote:\n>>\n>>> 02.02.11 20:32, Robert Haas ???????(??):\n>>>> Yeah. Any kind of bulk load into an empty table can be a problem,\n>>>> even if it's not temporary. When you load a bunch of data and then\n>>>> immediately plan a query against it, autoanalyze hasn't had a chance\n>>>> to do its thing yet, so sometimes you get a lousy plan.\n>>>\n>>> May be introducing something like 'AutoAnalyze' threshold will help? I\n>>> mean that any insert/update/delete statement that changes more then x% of\n>>> table (and no less then y records) must do analyze right after it was\n>>> finished.\n>>> Defaults like x=50 y=10000 should be quite good as for me.\n>>\n>> If I am understanding things correctly, a full Analyze is going over all\n>> the data in the table to figure out patterns.\n>>\n>> If this is the case, wouldn't it make sense in the situation where you are\n>> loading an entire table from scratch to run the Analyze as you are\n>> processing the data? If you don't want to slow down the main thread that's\n>> inserting the data, you could copy the data to a second thread and do the\n>> analysis while it's still in RAM rather than having to read it off of disk\n>> afterwords.\n>>\n>> this doesn't make sense for updates to existing databases, but the use case\n>> of loading a bunch of data and then querying it right away isn't _that_\n>> uncommon.\n>>\n>> David Lang\n>>\n>\n> +1 for in-flight ANALYZE. This would be great for bulk loads of\n> real tables as well as temp tables.\n\nYes, please, that would be really nice.\n\n\n\n\n-- \nJon\n",
"msg_date": "Thu, 3 Feb 2011 08:20:01 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 4:54 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n> 02.02.11 20:32, Robert Haas написав(ла):\n>>\n>> Yeah. Any kind of bulk load into an empty table can be a problem,\n>> even if it's not temporary. When you load a bunch of data and then\n>> immediately plan a query against it, autoanalyze hasn't had a chance\n>> to do its thing yet, so sometimes you get a lousy plan.\n>\n> May be introducing something like 'AutoAnalyze' threshold will help? I mean\n> that any insert/update/delete statement that changes more then x% of table\n> (and no less then y records) must do analyze right after it was finished.\n> Defaults like x=50 y=10000 should be quite good as for me.\n\nThat would actually be a pessimization for many real world cases. Consider:\n\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nSELECT\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Feb 2011 10:31:27 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 5:11 AM, <[email protected]> wrote:\n> If I am understanding things correctly, a full Analyze is going over all the\n> data in the table to figure out patterns.\n\nNo. It's going over a small, fixed-size sample which depends on\ndefault_statistics_target but NOT on the table size. It's really\nimportant to come up with a solution that's not susceptible to running\nANALYZE over and over again, in many cases unnecessarily.\n\n> If this is the case, wouldn't it make sense in the situation where you are\n> loading an entire table from scratch to run the Analyze as you are\n> processing the data? If you don't want to slow down the main thread that's\n> inserting the data, you could copy the data to a second thread and do the\n> analysis while it's still in RAM rather than having to read it off of disk\n> afterwords.\n\nWell that's basically what autoanalyze is going to do anyway, if the\ntable is small enough to fit in shared_buffers. And it's actually\nusually BAD if it starts running while you're doing a large bulk load,\nbecause it competes for I/O bandwidth and the buffer cache and slows\nthings down. Especially when you're bulk loading for a long time and\nit tries to run over and over. I'd really like to suppress all those\nasynchronous ANALYZE operations and instead do ONE synchronous one at\nthe end, when we try to use the data.\n\nOf course, the devil is in the nontrivial details.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Feb 2011 10:35:43 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "03.02.11 17:31, Robert Haas написав(ла):\n>\n>> May be introducing something like 'AutoAnalyze' threshold will help? I mean\n>> that any insert/update/delete statement that changes more then x% of table\n>> (and no less then y records) must do analyze right after it was finished.\n>> Defaults like x=50 y=10000 should be quite good as for me.\n> That would actually be a pessimization for many real world cases. Consider:\n>\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> COPY\n> SELECT\nIf all the copies are ~ same in size and large this will make it:\n\nCOPY\nANALYZE\nCOPY\nANALYZE\nCOPY\nCOPY\nANALYZE\nCOPY\nCOPY\nCOPY\nCOPY\nANALYZE\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nSELECT\n\ninstead of\n\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nCOPY\nANALYZE (manual, if one is clever enough)\nSELECT\n\nSo, yes this will add 3 more analyze, but\n1) Analyze is pretty cheap comparing to large data loading. I'd say this \nwould add few percent of burden. And NOT doing analyze manually before \nselect can raise select costs orders of magnitude.\n2) How often in real world a single table is loaded in many COPY \nstatements? (I don't say it's not often, I really don't know). At least \nfor restore it is not the case, is not it?\n3) default thresholds are things to discuss. You can make x=90 or x=200 \n(latter will make it run only for massive load/insert operations). You \ncan even make it disabled by default for people to test. Or enable by \ndefault for temp tables only (and have two sets of thresholds)\n4) As most other settings, this threshold can be changed on up to \nper-query basis.\n\nP.S. I would also like to have index analyze as part of any create index \nprocess.\n\nBest regards, Vitalii Tymchyshyn\n\n",
"msg_date": "Thu, 03 Feb 2011 17:43:47 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Greg Smith wrote:\n> Mladen Gogala wrote:\n> \n>> The techies at big companies are the guys who will or will not make it \n>> happen. And these guys are not beginners. Appeasing them may actually \n>> go a long way.\n>> \n>\n> The PostgreSQL community isn't real big on appeasing people if it's at \n> the expense of robustness or correctness, and this issue falls into that \n> category. \nWith all due respect, I don't see how does the issue of hints fall into \nthis category? As I explained, the mechanisms are already there, they're \njust not elegant enough. The verb \"appease\" doesn't convey the meaning \nthat I had in mind quite correctly. The phrase \"target population\" would \nhave described what I wanted to say in a much better way .\n> There are downsides to that, but good things too. Chasing \n> after whatever made people happy regardless of its impact on the server \n> code itself has in my mind contributed to why Oracle is so bloated and \n> MySQL so buggy, to pick two examples from my favorite horse to whip. \n> \nWell, those two databases are also used much more widely than Postgres, \nwhich means that they're doing something better than Postgres.\n\nHints are not even that complicated to program. The SQL parser should \ncompile the list of hints into a table and optimizer should check \nwhether any of the applicable access methods exist in the table. If it \ndoes - use it. If not, ignore it. This looks to me like a philosophical \nissue, not a programming issue. Basically, the current Postgres \nphilosophy can be described like this: if the database was a gas stove, \nit would occasionally catch fire. However, bundling a fire extinguisher \nwith the stove is somehow seen as bad. When the stove catches fire, \nusers is expected to report the issue and wait for a better stove to be \ndeveloped. It is a very rough analogy, but rather accurate one, too.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 11:38:14 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> Greg Smith wrote:\n> > Mladen Gogala wrote:\n> > \n> >> The techies at big companies are the guys who will or will not make it \n> >> happen. And these guys are not beginners. Appeasing them may actually \n> >> go a long way.\n> >> \n> >\n> > The PostgreSQL community isn't real big on appeasing people if it's at \n> > the expense of robustness or correctness, and this issue falls into that \n> > category. \n>\n> With all due respect, I don't see how does the issue of hints fall into \n> this category? As I explained, the mechanisms are already there, they're \n> just not elegant enough. The verb \"appease\" doesn't convey the meaning \n> that I had in mind quite correctly. The phrase \"target population\" would \n> have described what I wanted to say in a much better way .\n\nThe settings are currently there to better model the real world\n(random_page_cost), or for testing (enable_seqscan). They are not there\nto force certain plans. They can be used for that, but that is not\ntheir purpose and they would not have been added if that was their\npurpose.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 3 Feb 2011 11:56:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> Hints are not even that complicated to program. The SQL parser should \n> compile the list of hints into a table and optimizer should check \n> whether any of the applicable access methods exist in the table. If it \n> does - use it. If not, ignore it. This looks to me like a philosophical \n> issue, not a programming issue. Basically, the current Postgres \n> philosophy can be described like this: if the database was a gas stove, \n> it would occasionally catch fire. However, bundling a fire extinguisher \n> with the stove is somehow seen as bad. When the stove catches fire, \n> users is expected to report the issue and wait for a better stove to be \n> developed. It is a very rough analogy, but rather accurate one, too.\n\nThat might be true.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 3 Feb 2011 11:57:30 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 02/03/2011 10:38 AM, Mladen Gogala wrote:\n\n> With all due respect, I don't see how does the issue of hints fall\n> into this category?\n\nYou have a few good arguments, and if you hadn't said this, it wouldn't \nhave been so obvious that there was a fundamental philosophical \ndisconnect. I asked this same question almost ten years ago, and the \nanswer Tom gave me was more than sufficient.\n\nIt all boils down to the database. Hints, whether they're \nwell-intentioned or not, effectively cover up bugs in the optimizer, \nplanner, or some other approach the database is using to build its \nexecution. Your analogy is that PG is a gas stove, so bundle a fire \nextinguisher. Well, the devs believe that the stove should be upgraded \nto electric or possibly even induction to remove the need for the \nextinguisher.\n\nIf they left hints in, it would just be one more thing to deprecate as \nthe original need for the hint was removed. If you really need hints \nthat badly, EnterpriseDB cobbled the Oracle syntax into the planner, and \nit seems to work alright. That doesn't mean it's right, just that it \nworks. EnterpriseDB will now have to support those query hints forever, \neven if the planner gets so advanced they're effectively useless.\n\n> Well, those two databases are also used much more widely than\n> Postgres, which means that they're doing something better than\n> Postgres.\n\nPlease don't make arguments like this. \"Better\" is such a subjective \nevaluation it means nothing. Are Honda Accords \"better\" than Lamborghini \nGallardos because more people buy Accords? The MySQL/PostgreSQL flame \nwar is a long and sometimes bitter one, and bringing it up to try and \npersuade the devs to \"see reason\" is just going to backfire.\n\n> Hints are not even that complicated to program.\n\nThen write a contrib module. It's not part of the core DB, and it \nprobably never will be. This is a *very* old argument. There's literally \nnothing you can say, no argument you can bring, that hasn't been heard a \nmillion times in the last decade.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 3 Feb 2011 11:10:06 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala <[email protected]> writes:\n> Hints are not even that complicated to program.\n\nWith all due respect, you don't know what you're talking about.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Feb 2011 12:27:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again... "
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 11:56 AM, Bruce Momjian <[email protected]> wrote:\n> The settings are currently there to better model the real world\n> (random_page_cost), or for testing (enable_seqscan). They are not there\n> to force certain plans. They can be used for that, but that is not\n> their purpose and they would not have been added if that was their\n> purpose.\n\nSure. But Mladen's point is that this is rather narrow-minded. I\nhappen to agree. We are not building an ivory tower. We are building\na program that real people will use to solve real problems, and it is\nnot our job to artificially prevent them from achieving their\nobjectives so that we remain motivated to improve future versions of\nthe code.\n\nI don't, however, agree with his contention that this is easy to\nimplement. It would be easy to implement something that sucked. It\nwould be hard to implement something that actually helped in the cases\nwhere the existing settings aren't already sufficient.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Feb 2011 12:28:29 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "[email protected] (Mladen Gogala) writes:\n> Hints are not even that complicated to program. The SQL parser should\n> compile the list of hints into a table and optimizer should check\n> whether any of the applicable access methods exist in the table. If it\n> does - use it. If not, ignore it. This looks to me like a\n> philosophical issue, not a programming issue.\n\nIt's worth looking back to what has already been elaborated on in the\nToDo.\n\nhttp://wiki.postgresql.org/wiki/Todo\n-----------------------------------\nOptimizer hints (not wanted)\n\nOptimizer hints are used to work around problems in the optimizer and\nintroduce upgrade and maintenance issues. We would rather have the\nproblems reported and fixed. We have discussed a more sophisticated\nsystem of per-class cost adjustment instead, but a specification remains\nto be developed.\n-----------------------------------\n\nThe complaint is that kludging hints into a particular query attacks the\nproblem from the wrong direction.\n\nThe alternative recommended is to collect some declarative information,\nthat *won't* be part of the query, that *won't* be processed by the\nparser, and that *won't* kludge up the query with information that is\nliable to turn into crud over time.\n\nTom Lane was pretty specific about some kinds of declarative information\nthat seemed useful:\n <http://archives.postgresql.org/pgsql-hackers/2006-10/msg00663.php>\n\nOn Jeapordy, participants are expected to phrase one's answers in the\nform of a question, and doing so is rewarded.\n\nBased on the presence of \"query hints\" on the Not Wanted portion of the\nToDo list, it's pretty clear that participants here are expected to\npropose optimizer hints in ways that do NOT involve decorating queries\nwith crud. You'll get a vastly friendlier response if you at least make\nan attempt to attack the problem in the \"declarative information\"\nfashion.\n\nPerhaps we're all wrong in believing that pushing query optimization\ninformation into application queries by decorating the application with\nhints, is the right idea but it's a belief that certainly seems to be\nregularly agreed upon by gentle readers.\n-- \n\"cbbrowne\",\"@\",\"linuxdatabases.info\"\nThe people's revolutionary committee has decided that the name \"e\" is\nretrogressive, unmulticious and reactionary, and has been flushed.\nPlease update your abbrevs.\n",
"msg_date": "Thu, 03 Feb 2011 12:44:23 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> I don't, however, agree with his contention that this is easy to\n> implement. It would be easy to implement something that sucked. It\n> would be hard to implement something that actually helped in the cases\n> where the existing settings aren't already sufficient.\n\nExactly. A hint system that actually did more good than harm would be a\nvery nontrivial project. IMO such effort is better spent on making the\noptimizer smarter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Feb 2011 12:46:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again... "
},
{
"msg_contents": "Mladen Gogala wrote:\n> With all due respect, I don't see how does the issue of hints fall \n> into this category? As I explained, the mechanisms are already there, \n> they're just not elegant enough.\n\nYou're making some assumptions about what a more elegant mechanism would \nlook to develop that are simplifying the actual situation here. If you \ntake a survey of everyone who ever works on this area of the code, and \nresponses to this thread are already approaching a significant \npercentage of such people, you'll discover that doing what you want is \nmore difficult--and very much \"not elegant enough\" from the perspective \nof the code involved--than you think it would be.\n\nIt's actually kind of funny...I've run into more than one person who \ncharged into the PostgreSQL source code with the goal of \"I'm going to \nadd good hinting!\" But it seems like the minute anyone gets enough \nunderstanding of how it fits together to actually do that, they realize \nthere are just plain better things to be done in there instead. I used \nto be in the same situation you're in--thinking that all it would take \nis a better UI for tweaking the existing parameters. But now that I've \nactually done such tweaking for long enough to get a feel for what's \nreally wrong with the underlying assumptions, I can name 3 better uses \nof development resources that I'd rather work on instead. I mentioned \nincorporating cache visibility already, Robert has talked about \nimprovements to the sensitivity estimates, and the third one is \nimproving pooling of work_mem so individual clients can get more of it \nsafely.\n\n> Well, those two databases are also used much more widely than \n> Postgres, which means that they're doing something better than Postgres.\n\n\"Starting earlier\" is the only \"better\" here. Obviously Oracle got a \nmuch earlier start than either open-source database. The real \ndivergence in MySQL adoption relative to PostgreSQL was when they \nreleased a Windows port in January of 1998. PostgreSQL didn't really \nmatch that with a fully native port until January of 2005.\n\nCheck out \nhttp://www.indeed.com/jobtrends?q=postgres%2C+mysql%2C+oracle&relative=1&relative=1 \nif you want to see the real story here. Oracle has a large installed \nbase, but it's considered a troublesome legacy product being replaced \nwhenever possible now in every place I visit. Obviously my view of the \nworld as seen through my client feedback is skewed a bit toward \nPostgreSQL adoption. But you would be hard pressed to support any view \nthat suggests Oracle usage is anything other than flat or decreasing at \nthis point. When usage of one product is growing at an expontential \nrate and the other is not growing at all, eventually the market share \ncurves always cross too.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 03 Feb 2011 13:17:08 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Chris Browne wrote:\n> It's worth looking back to what has already been elaborated on in the\n> ToDo.\n> \n\nAnd that precisely is what I am trying to contest.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 14:09:35 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 03, 2011 at 12:44:23PM -0500, Chris Browne wrote:\n> [email protected] (Mladen Gogala) writes:\n> > Hints are not even that complicated to program. The SQL parser should\n> > compile the list of hints into a table and optimizer should check\n> > whether any of the applicable access methods exist in the table. If it\n> > does - use it. If not, ignore it. This looks to me like a\n> > philosophical issue, not a programming issue.\n> \n> It's worth looking back to what has already been elaborated on in the\n> ToDo.\n> \n> http://wiki.postgresql.org/wiki/Todo\n> -----------------------------------\n> Optimizer hints (not wanted)\n> \n> Optimizer hints are used to work around problems in the optimizer and\n> introduce upgrade and maintenance issues. We would rather have the\n> problems reported and fixed. We have discussed a more sophisticated\n> system of per-class cost adjustment instead, but a specification remains\n> to be developed.\n\nAnd as to the 'wait around for a new version to fix that': there are\nconstantly excellent examples of exactly this happening, all the time\nwith PostgreSQL - most recent example I've seen -\nhttp://archives.postgresql.org/pgsql-performance/2011-01/msg00337.php\n\nThe wait often isn't long, at all.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nConnexions http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n",
"msg_date": "Thu, 3 Feb 2011 13:24:42 -0600",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, 3 Feb 2011, Robert Haas wrote:\n\n> On Thu, Feb 3, 2011 at 5:11 AM, <[email protected]> wrote:\n>> If I am understanding things correctly, a full Analyze is going over all the\n>> data in the table to figure out patterns.\n>\n> No. It's going over a small, fixed-size sample which depends on\n> default_statistics_target but NOT on the table size. It's really\n> important to come up with a solution that's not susceptible to running\n> ANALYZE over and over again, in many cases unnecessarily.\n>\n>> If this is the case, wouldn't it make sense in the situation where you are\n>> loading an entire table from scratch to run the Analyze as you are\n>> processing the data? If you don't want to slow down the main thread that's\n>> inserting the data, you could copy the data to a second thread and do the\n>> analysis while it's still in RAM rather than having to read it off of disk\n>> afterwords.\n>\n> Well that's basically what autoanalyze is going to do anyway, if the\n> table is small enough to fit in shared_buffers. And it's actually\n> usually BAD if it starts running while you're doing a large bulk load,\n> because it competes for I/O bandwidth and the buffer cache and slows\n> things down. Especially when you're bulk loading for a long time and\n> it tries to run over and over. I'd really like to suppress all those\n> asynchronous ANALYZE operations and instead do ONE synchronous one at\n> the end, when we try to use the data.\n\nIf the table is not large enough to fit in ram, then it will compete for \nI/O, and the user will have to wait.\n\nwhat I'm proposing is that as the records are created, the process doing \nthe creation makes copies of the records (either all of them, or some of \nthem if not all are needed for the analysis, possibly via shareing memory \nwith the analysis process), this would be synchronous with the load, not \nasynchronous.\n\nthis would take zero I/O bandwidth, it would take up some ram, memory \nbandwidth, and cpu time, but a load of a large table like this is I/O \ncontrained.\n\nit would not make sense for this to be the default, but as an option it \nshould save a significant amount of time.\n\nI am making the assumption that an Analyze run only has to go over the \ndata once (a seqential scan of the table if it's >> ram for example) and \ngathers stats as it goes.\n\nwith the current code, this is a completely separate process that knows \nnothing about the load, so if you kick it off when you start the load, it \nmakes a pass over the table (competing for I/O), finishes, you continue to \nupdate the table, so it makes another pass, etc. As you say, this is a bad \nthing to do. I am saying to have an option that ties the two togeather, \nessentially making the data feed into the Analyze run be a fork of the \ndata comeing out of the insert run going to disk. So the Analyze run \ndoesn't do any I/O and isn't going to complete until the insert is \ncomplete. At which time it will have seen one copy of the entire table.\n\nDavid Lang\n",
"msg_date": "Thu, 3 Feb 2011 12:54:02 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Shaun Thomas wrote:\n> On 02/03/2011 10:38 AM, Mladen Gogala wrote:\n>\n> \n> It all boils down to the database. Hints, whether they're \n> well-intentioned or not, effectively cover up bugs in the optimizer, \n> planner, or some other approach the database is using to build its \n> execution. \nHints don't cover up bugs, they simply make it possible for the user to \ncircumvent the bugs and keep the users happy. As I hinted before, this \nis actually a purist argument which was made by someone who has never \nhad to support a massive production database with many users for living.\n> Your analogy is that PG is a gas stove, so bundle a fire \n> extinguisher. Well, the devs believe that the stove should be upgraded \n> to electric or possibly even induction to remove the need for the \n> extinguisher.\n> \nIn the meantime, the fire is burning. What should the hapless owner of \nthe database application do in the meantime? Tell the users that it will \nbe better in the next version? As I've said before: hints are make it or \nbreak it point. Without hints, I cannot consider Postgres for the \nmission critical projects. I am managing big databases for living and I \nflatter myself that after more than two decades of doing it, I am not \ntoo bad at it.\n\n> If they left hints in, it would just be one more thing to deprecate as \n> the original need for the hint was removed. If you really need hints \n> that badly, EnterpriseDB cobbled the Oracle syntax into the planner, and \n> it seems to work alright. That doesn't mean it's right, just that it \n> works. EnterpriseDB will now have to support those query hints forever, \n> even if the planner gets so advanced they're effectively useless.\n> \n\nI don't foresee that to happen in my lifetime. And I plan to go on for \nquite a while. There will always be optimizer bugs, users will be \nsmarter and know more about their data than computer programs in \nforeseeable future. What this attitude boils down to is that developers \ndon't trust their users enough to give them control of the execution \npath. I profoundly disagree with that type of philosophy. DB2 also has \nhints: http://tinyurl.com/48fv7w7\nSo does SQL Server: \nhttp://www.sql-server-performance.com/tips/hints_general_p1.aspx\nFinally, even the Postgres greatest open source competitor MySQL \nsupports hints: http://dev.mysql.com/doc/refman/5.0/en/index-hints.html\n\nI must say that this purist attitude is extremely surprising to me. All \nthe major DB vendors support optimizer hints, yet in the Postgres \ncommunity, they are considered bad with almost religious fervor.\nPostgres community is quite unique with the fatwa against hints.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 16:01:40 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "[email protected] (Mladen Gogala) writes:\n> I must say that this purist attitude is extremely surprising to\n> me. All the major DB vendors support optimizer hints, yet in the\n> Postgres community, they are considered bad with almost religious\n> fervor.\n> Postgres community is quite unique with the fatwa against hints.\n\nWell, the community declines to add hints until there is actual\nconsensus on a good way to add hints.\n\nNobody has ever proposed a way to add hints where consensus was arrived\nat that the way was good, so...\n-- \nhttp://linuxfinances.info/info/nonrdbms.html\nRules of the Evil Overlord #192. \"If I appoint someone as my consort,\nI will not subsequently inform her that she is being replaced by a\nyounger, more attractive woman. <http://www.eviloverlord.com/>\n",
"msg_date": "Thu, 03 Feb 2011 16:18:41 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> In the meantime, the fire is burning. What should the hapless\n> owner of the database application do in the meantime? Tell the\n> users that it will be better in the next version? As I've said\n> before: hints are make it or break it point. Without hints, I\n> cannot consider Postgres for the mission critical projects. I am\n> managing big databases for living and I flatter myself that after\n> more than two decades of doing it, I am not too bad at it.\n \nWell, I've been at it since 1972, and I'm OK with the current\nsituation because I push hard for *testing* in advance of production\ndeployment. So I generally discover that leaving a pan of grease on\nmaximum flame unattended is a bad idea in the test lab, where no\nserious damage is done. Then I take steps to ensure that this\ndoesn't happen in the user world.\n \nWe've got about 100 production databases, some at 2TB and growing,\nand 100 development, testing, and staging databases. About 3,000\ndirectly connected users and millions of web hits per day generating\ntens of millions of queries. Lots of fun replication and automated\ninterfaces to business partners -- DOT, county sheriffs, local\npolice agencies, district attorneys, public defenders offices,\nDepartment of Revenue (for tax intercept collections), Department of\nJustice, etc. (That was really just the tip of the iceberg.)\n \nAlmost all of this was converted inside of a year with minimal fuss\nand only a one user complaint that I can recall. Most users\ndescribed it as a \"non-event\", with the only visible difference\nbeing that applications were \"snappier\" than under the commercial\ndatabase product. One type of query was slow in Milwaukee County\n(our largest). We tuned seq_page_cost and random_page_cost until\nall queries were running with good plans. It did not require any\ndown time to sort this out and fix it -- same day turnaround. This\nis not a matter of hinting; it's a matter of creating a cost model\nfor the planner which matches reality. (We don't set this or any\nother \"hint\" per query, we tune the model.) When the cost estimates\nmirror reality, good plans are chosen.\n \n-Kevin\n",
"msg_date": "Thu, 03 Feb 2011 15:29:25 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 02/03/2011 03:01 PM, Mladen Gogala wrote:\n\n> As I hinted before, this is actually a purist argument which was made\n> by someone who has never had to support a massive production database\n> with many users for living.\n\nOur database handles 9000 transactions per second and over 200-million \ntransactions per day just fine, thanks. It may not be a \"real database\" \nin your world, but it's real enough for us.\n\n> I must say that this purist attitude is extremely surprising to me.\n> All the major DB vendors support optimizer hints, yet in the\n> Postgres community, they are considered bad with almost religious\n> fervor. Postgres community is quite unique with the fatwa against\n> hints.\n\nYou missed the argument. The community, or at least the devs, see hints \nas an ugly hack. Do I agree? Not completely, but I can definitely \nunderstand the perspective. Saying every other \"vendor\" has hints is \nreally just admitting every other vendor has a crappy optimizer. Is that \nsomething to be proud of?\n\nIn almost every single case I've seen a query with bad performance, it's \nthe fault of the author or the DBA. Not enough where clauses; not paying \nattention to cardinality or selectivity; inappropriate or misapplied \nindexes; insufficient table statistics... the list of worse grievances \nout there is endless.\n\nAnd here's something I never saw you consider: hints making performance \nworse. Sure, for now, forcing a sequence scan or forcing it to use \nindexes on a specific table is faster for some specific edge-case. But \nhints are like most code, and tend to miss frequent refactor. As the \noptimizer improves, hints likely won't, meaning code is likely to be \nslower than if the hints didn't exist. This of course ignores the \ncontents of a table are likely to evolve or grow in volume, which can \nalso drastically alter the path the optimizer would choose, but can't \nbecause a hint is forcing it to take a specific path.\n\nWant to remove a reverse index scan? Reindex with DESC on the column \nbeing reversed. That was added in 8.3. Getting too many calls for nested \nloops when a merge or hash would be faster? Increase the statistics \ntarget for the column causing the problems and re-analyze. Find an \nactual bug in the optimizer? Tell the devs and they'll fix it. Just stay \ncurrent, and you get all those benefits. This is true for any database; \nbugs get fixed, things get faster and more secure.\n\nOr like I said, if you really need hints that badly, use EnterpriseDB \ninstead. It's basically completely Oracle-compatible at this point. But \npestering the PostgreSQL dev community about how inferior they are, and \nhow they're doing it wrong, and how they're just another vendor making a \ndatabase product that can't support massive production databases, is \ndoing nothing but ensuring they'll ignore you. Flies, honey, vinegar, etc.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 3 Feb 2011 15:34:19 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Chris Browne wrote:\n> Well, the community declines to add hints until there is actual\n> consensus on a good way to add hints.\n> \nOK. That's another matter entirely. Who should make that decision? Is \nthere a committee or a person who would be capable of making that decision?\n\n> Nobody has ever proposed a way to add hints where consensus was arrived\n> at that the way was good, so...\n> \n\nSo, I will have to go back on my decision to use Postgres and \nre-consider MySQL? I will rather throw away the effort invested in \nstudying Postgres than to risk an unfixable application downtime. I am \nnot sure about the world domination thing, though. Optimizer hints are a \nbig feature that everybody else has and Postgres does not have because \nof religious reasons.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 16:50:20 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 04/02/11 10:01, Mladen Gogala wrote:\n> In the meantime, the fire is burning. What should the hapless owner of \n> the database application do in the meantime? Tell the users that it \n> will be better in the next version? As I've said before: hints are \n> make it or break it point. Without hints, I cannot consider Postgres \n> for the mission critical projects. I am managing big databases for \n> living and I flatter myself that after more than two decades of doing \n> it, I am not too bad at it.\n\nThis is somewhat of a straw man argument. This sort of query that the \noptimizer does badly usually gets noticed during the test cycle i.e \nbefore production, so there is some lead time to get a fix into the \ncode, or add/subtract indexes/redesign the query concerned.\n\nThe cases I've seen in production typically involve \"outgrowing\" \noptimizer parameter settings: (e.g work_mem, effective_cache_size) as \nthe application dataset gets bigger over time. I would note that this is \n*more* likely to happen with hints, as they lobotomize the optimizer so \nit *cannot* react to dataset size or distribution changes.\n\nregards\n\nMark\n",
"msg_date": "Fri, 04 Feb 2011 10:51:24 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Shaun Thomas wrote:\n> You missed the argument. The community, or at least the devs, see hints \n> as an ugly hack. Do I agree? Not completely, but I can definitely \n> understand the perspective. Saying every other \"vendor\" has hints is \n> really just admitting every other vendor has a crappy optimizer. Is that \n> something to be proud of?\n> \nThis is funny? Everybody else has a crappy optimizer? That's a funny way \nof looking at the fact that every other major database supports hints. I \nwould be tempted to call that a major missing feature, but the statement \nthat everybody else has a crappy optimizer sounds kind of funny. No \ndisrespect meant. It's not unlike claiming that the Earth is 6000 years old.\n\n>\n> And here's something I never saw you consider: hints making performance \n> worse. \n> \nSure. If you give me the steering wheell, there is a chance that I might \nget car off the cliff or even run someone over, but that doesn't mean \nthat there is no need for having one. After all, we're talking about the \nability to control the optimizer decision.\n\n> Want to remove a reverse index scan? Reindex with DESC on the column \n> being reversed. That was added in 8.3. Getting too many calls for nested \n> loops when a merge or hash would be faster? Increase the statistics \n> target for the column causing the problems and re-analyze. Find an \n> actual bug in the optimizer? Tell the devs and they'll fix it. Just stay \n> current, and you get all those benefits. This is true for any database; \n> bugs get fixed, things get faster and more secure.\n> \nIn the meantime, the other databases provide hints which help me bridge \nthe gap. As I said before: hints are there, even if they were not meant \nto be used that way. I can do things in a way that I consider very \nnon-elegant. The hints are there because they are definitely needed. \nYet, there is a religious zeal and a fatwa against them.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 17:03:07 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Feb 3, 2011, at 1:50 PM, Mladen Gogala wrote:\n\n> So, I will have to go back on my decision to use Postgres and re-consider MySQL? I will rather throw away the effort invested in studying Postgres than to risk an unfixable application downtime. I am not sure about the world domination thing, though. Optimizer hints are a big feature that everybody else has and Postgres does not have because of religious reasons.\n\nAs always, you should use the tool you consider best for the job. If you think MySQL as both a product and a community has a better chance of giving you what you want, then you should use MySQL.",
"msg_date": "Thu, 3 Feb 2011 14:04:06 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "2011/2/3 Mladen Gogala <[email protected]>:\n> Chris Browne wrote:\n>>\n>> Well, the community declines to add hints until there is actual\n>> consensus on a good way to add hints.\n>>\n>\n> OK. That's another matter entirely. Who should make that decision? Is\n> there a committee or a person who would be capable of making that decision?\n>\n\nBecause there are not consensus about hints, then hints are not in pg.\n\nAnd community development must be based on consensus. There are not second way.\n\nHints are not a win from some reasons.\n\nSituation isn't immutable. There are a lot of features, that was\nrejected first time - like replication. But it needs a different\naccess. You have to show tests, use cases, code and you have to\nsatisfy all people, so your request is good and necessary. Argument,\nso other databases has this feature is a last on top ten.\n\n>> Nobody has ever proposed a way to add hints where consensus was arrived\n>> at that the way was good, so...\n>>\n>\n> So, I will have to go back on my decision to use Postgres and re-consider\n> MySQL? I will rather throw away the effort invested in studying Postgres\n> than to risk an unfixable application downtime. I am not sure about the\n> world domination thing, though. Optimizer hints are a big feature that\n> everybody else has and Postgres does not have because of religious reasons.\n\nit's not correct from you. There is a real arguments against hints.\n\n>\n\nyou can try a edb. There is a other external modul\n\nhttp://postgresql.1045698.n5.nabble.com/contrib-plantuner-enable-PostgreSQL-planner-hints-td1924794.html\n\nRegards\n\nPavel Stehule\n\n\n> --\n>\n> Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com The Leader in Integrated Media Intelligence Solutions\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 3 Feb 2011 23:05:50 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 2/3/11 1:18 PM, Chris Browne wrote:\n> [email protected] (Mladen Gogala) writes:\n>> I must say that this purist attitude is extremely surprising to\n>> me. All the major DB vendors support optimizer hints,\n\nI don't think that's actually accurate. Can you give me a list of\nDBMSes which support hints other than Oracle?\n\n> Well, the community declines to add hints until there is actual\n> consensus on a good way to add hints.\n> \n> Nobody has ever proposed a way to add hints where consensus was arrived\n> at that the way was good, so...\n\nWell, we did actually have some pretty good proposals (IIRC) for\nselectively adjusting the cost model to take into account DBA knowledge.\n These needed some refinement, but in general seem like the right way to go.\n\nHowever, since this system wasn't directly compatible with Oracle Hints,\nfolks pushing for hints dropped the solution as unsatisfactory. This is\nthe discussion we have every time: the users who want hints specifically\nwant hints which work exactly like Oracle's, and aren't interested in a\nsystem designed for PostgreSQL. It's gotten very boring; it's like the\nrequests to support MySQL-only syntax.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Thu, 03 Feb 2011 14:08:00 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "> The hints are there because they are definitely needed. Yet, there is a\n> religious zeal and a fatwa against them.\n\nThe opposition is philosophical, not \"religious\". There is no \"fatwa\".\nIf you want a serious discussion, avoid inflammatory terms.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\nwww.truviso.com\n",
"msg_date": "Thu, 3 Feb 2011 14:09:33 -0800",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "> In the meantime, the other databases provide hints which help me bridge the\n> gap. As I said before: hints are there, even if they were not meant to be\n> used that way. I can do things in a way that I consider very non-elegant.\n> The hints are there because they are definitely needed. Yet, there is a\n> religious zeal and a fatwa against them.\n>\n\nOther databases has different development model. It isn't based on\nconsensus. The are not any commercial model for PostgreSQL. There are\nnot possible to pay programmers. So you can pay and as customer, you\nare boss or use it freely and search a consensus - a common talk.\n\nRegards\n\nPavel Stehule\n\n> --\n>\n> Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com The Leader in Integrated Media Intelligence Solutions\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 3 Feb 2011 23:10:52 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Josh Berkus wrote:\n> However, since this system wasn't directly compatible with Oracle Hints,\n> folks pushing for hints dropped the solution as unsatisfactory. This is\n> the discussion we have every time: the users who want hints specifically\n> want hints which work exactly like Oracle's, and aren't interested in a\n> system designed for PostgreSQL. It's gotten very boring; it's like the\n> requests to support MySQL-only syntax.\n> \nActually, I don't want Oracle hints. Oracle hints are ugly and \ncumbersome. I would prefer something like this:\n\nhttp://dev.mysql.com/doc/refman/5.0/en/index-hints.html\n\nThat should also answer the question about other databases supporting hints.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 17:12:07 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> Actually, I don't want Oracle hints. Oracle hints are ugly and \n> cumbersome. I would prefer something like this:\n>\n> http://dev.mysql.com/doc/refman/5.0/en/index-hints.html\n>\n> That should also answer the question about other databases supporting hints.\n> \n\nSorry. I forgot that MySQL too is now an Oracle product.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 17:13:09 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 04/02/11 11:08, Josh Berkus wrote:\n> I don't think that's actually accurate. Can you give me a list of\n> DBMSes which support hints other than Oracle?\n>\nDB2 LUW (Linux, Unix, Win32 code base) has hint profiles:\n\nhttp://justdb2chatter.blogspot.com/2008/06/db2-hints-optimizer-selection.html\n\n",
"msg_date": "Fri, 04 Feb 2011 11:17:06 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "\nOn Feb 3, 2011, at 17:08, Josh Berkus wrote:\n\n> On 2/3/11 1:18 PM, Chris Browne wrote:\n>> [email protected] (Mladen Gogala) writes:\n>>> I must say that this purist attitude is extremely surprising to\n>>> me. All the major DB vendors support optimizer hints,\n> \n> I don't think that's actually accurate. Can you give me a list of\n> DBMSes which support hints other than Oracle?\n\n1 minute of Googling shows results for:\n\ndb2:\n<http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=/com.ibm.db2.doc.admin/p9li375.htm>\n\ninformix:\n<http://www.ibm.com/developerworks/data/zones/informix/library/techarticle/0502fan/0502fan.html>\n\nsybase:\n<http://searchenterpriselinux.techtarget.com/answer/Query-hints-to-override-optimizer>\n\nmysql:\n<http://dev.mysql.com/doc/refman/5.0/en/index-hints.html>\n\nI haven't read much of the rest of this thread, so others may have brought these up before.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n",
"msg_date": "Thu, 3 Feb 2011 17:19:04 -0500",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "[email protected] wrote:\n> I am making the assumption that an Analyze run only has to go over the \n> data once (a seqential scan of the table if it's >> ram for example) \n> and gathers stats as it goes.\n\nAnd that's the part there's some confusion about here. ANALYZE grabs a \nrandom set of samples from the table, the number of which is guided by \nthe setting for default_statistics_target. The amount of time it takes \nis not proportional to the table size; it's only proportional to the \nsampling size. Adding a process whose overhead is proportional to the \ntable size, such as the continuous analyze idea you're proposing, is \nquite likely to be a big step backwards relative to just running a \nsingle ANALYZE after the loading is finished.\n\nWhat people should be doing if concerned about multiple passes happening \nis something like this:\n\nCREATE TABLE t (s serial, i integer) WITH (autovacuum_enabled=off);\n[populate table]\nANALYZE t;\nALTER TABLE t SET (autovacuum_enabled=on);\n\nI'm not optimistic the database will ever get smart enough to recognize \nbulk loading and do this sort of thing automatically, but as the \nworkaround is so simple it's hard to get motivated to work on trying.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 03 Feb 2011 17:35:12 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
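A hedged end-to-end version of Greg's snippet above, with the load step filled in by a hypothetical COPY (the column list and file path are made up):

CREATE TABLE t (s serial, i integer) WITH (autovacuum_enabled = off);  -- keep autoanalyze away during the load
COPY t (i) FROM '/tmp/bulk_data.txt';                                  -- hypothetical bulk-load step
ANALYZE t;                                                             -- single sampling pass over the finished table
ALTER TABLE t SET (autovacuum_enabled = on);                           -- hand the table back to autovacuum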
{
"msg_contents": "Maciek Sakrejda wrote:\n>> The hints are there because they are definitely needed. Yet, there is a\n>> religious zeal and a fatwa against them.\n>> \n>\n> The opposition is philosophical, not \"religious\". There is no \"fatwa\".\n> If you want a serious discussion, avoid inflammatory terms.\n>\n>\n> \nI don't want to insult anybody but the whole thing does look strange. \nMaybe we can agree to remove that ridiculous \"we don't want hints\" note \nfrom Postgresql wiki? That would make it look less like , hmph, \nphilosophical issue and more \"not yet implemented\" issue, especially if \nwe have in mind that hints are already here, in the form of \n\"enable_<method>\" switches.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 17:39:06 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mark Kirkwood wrote:\n> On 04/02/11 11:08, Josh Berkus wrote:\n> \n>> I don't think that's actually accurate. Can you give me a list of\n>> DBMSes which support hints other than Oracle?\n>>\n>> \n> DB2 LUW (Linux, Unix, Win32 code base) has hint profiles:\n>\n> http://justdb2chatter.blogspot.com/2008/06/db2-hints-optimizer-selection.html\n>\n>\n> \nSQL Server and MySQL too.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 17:40:17 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "\n> I don't want to insult anybody but the whole thing does look strange.\n> Maybe we can agree to remove that ridiculous \"we don't want hints\" note\n> from Postgresql wiki? That would make it look less like , hmph,\n> philosophical issue and more \"not yet implemented\" issue, especially if\n> we have in mind that hints are already here, in the form of\n> \"enable_<method>\" switches.\n\nLink? There's a lot of stuff on the wiki.\n\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Thu, 03 Feb 2011 15:00:37 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> Maybe we can agree to remove that ridiculous \"we don't want hints\"\n> note from Postgresql wiki?\n \nI'd be against that. This is rehashed less frequently since that\nwent in. Less wasted time and bandwidth with it there.\n \n> That would make it look less like , hmph, philosophical issue and\n> more \"not yet implemented\" issue,\n \nExactly what we don't want.\n \n> especially if we have in mind that hints are already here, in the\n> form of \"enable_<method>\" switches.\n \nThose aren't intended as hints for production use. They're there\nfor diagnostic purposes. In our shop we've never used any of those\nflags in production.\n \nThat said, there are ways to force an optimization barrier when\nneeded, which I have occasionally seen people find useful. And\nthere are sometimes provably logically equivalent ways to write a\nquery which result in different plans with different performance. \nIt's rare that someone presents a poorly performing query on the\nlist and doesn't get a satisfactory resolution fairly quickly -- if\nthey present sufficient detail and work nicely with others who are\nvolunteering their time to help.\n \n-Kevin\n",
"msg_date": "Thu, 03 Feb 2011 17:00:50 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
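The diagnostic use of those flags that Kevin describes usually looks something like this sketch (the query and names are hypothetical); the setting is flipped only to compare plans, then put back:

SET enable_seqscan = off;                       -- steer the planner away from the plan under suspicion, for this session only
EXPLAIN ANALYZE SELECT count(*) FROM orders WHERE customer_id = 42;
RESET enable_seqscan;                           -- return to normal planning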
{
"msg_contents": "Josh Berkus wrote:\n>> I don't want to insult anybody but the whole thing does look strange.\n>> Maybe we can agree to remove that ridiculous \"we don't want hints\" note\n>> from Postgresql wiki? That would make it look less like , hmph,\n>> philosophical issue and more \"not yet implemented\" issue, especially if\n>> we have in mind that hints are already here, in the form of\n>> \"enable_<method>\" switches.\n>> \n>\n> Link? There's a lot of stuff on the wiki.\n>\n>\n> \nhttp://wiki.postgresql.org/wiki/Todo#Features_We_Do_Not_Want\n\nNo. 2 on the list.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 18:25:02 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 3:54 PM, <[email protected]> wrote:\n> with the current code, this is a completely separate process that knows\n> nothing about the load, so if you kick it off when you start the load, it\n> makes a pass over the table (competing for I/O), finishes, you continue to\n> update the table, so it makes another pass, etc. As you say, this is a bad\n> thing to do. I am saying to have an option that ties the two togeather,\n> essentially making the data feed into the Analyze run be a fork of the data\n> comeing out of the insert run going to disk. So the Analyze run doesn't do\n> any I/O and isn't going to complete until the insert is complete. At which\n> time it will have seen one copy of the entire table.\n\nYeah, but you'll be passing the entire table through this separate\nprocess that may only need to see 1% of it or less on a large table.\nIf you want to write the code and prove it's better than what we have\nnow, or some other approach that someone else may implement in the\nmeantime, hey, this is an open source project, and I like improvements\nas much as the next guy. But my prediction for what it's worth is\nthat the results will suck. :-)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Feb 2011 18:29:54 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Kevin Grittner wrote:\n> Mladen Gogala <[email protected]> wrote:\n> \n> \n>> Maybe we can agree to remove that ridiculous \"we don't want hints\"\n>> note from Postgresql wiki?\n>> \n> \n> I'd be against that. This is rehashed less frequently since that\n> went in. Less wasted time and bandwidth with it there.\n> \n\nWell, the problem will not go away. As I've said before, all other \ndatabases have that feature and none of the reasons listed here \nconvinced me that everybody else has a crappy optimizer. The problem \nmay go away altogether if people stop using PostgreSQL.\n> \n> \n>> That would make it look less like , hmph, philosophical issue and\n>> more \"not yet implemented\" issue,\n>> \n> \n> Exactly what we don't want.\n> \nWho is \"we\"?\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 18:33:21 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, 2011-02-03 at 18:33 -0500, Mladen Gogala wrote:\n> \n> > \n> > Exactly what we don't want.\n> > \n> Who is \"we\"?\n\nThe majority of long term hackers.\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Thu, 03 Feb 2011 15:56:57 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 2/3/11 1:34 PM, Shaun Thomas wrote:\n>> I must say that this purist attitude is extremely surprising to me.\n>> All the major DB vendors support optimizer hints, yet in the\n>> Postgres community, they are considered bad with almost religious\n>> fervor. Postgres community is quite unique with the fatwa against\n>> hints.\n>\n> You missed the argument. The community, or at least the devs, see hints\n> as an ugly hack.\n\nLet's kill the myth right now that Postgres doesn't have hints. It DOES have hints.\n\nJust read this forum for a few days and see how many time there are suggestions like \"disable nested loops\" or \"disable seqscan\", or \"change the random page cost\", or \"change the join collapse limit\".\n\nAll of these options are nothing more than a way of altering the planner's choices so that it will pick the plan that the designer already suspects is more optimal.\n\nIf that's not a hint, I don't know what is.\n\nCraig\n",
"msg_date": "Thu, 03 Feb 2011 16:08:10 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
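Craig's point can be made concrete: the existing knobs can be scoped to a single transaction, which is about as close to a per-query hint as stock PostgreSQL gets. A sketch with hypothetical table and column names:

BEGIN;
SET LOCAL enable_nestloop = off;     -- reverts automatically at COMMIT/ROLLBACK
SET LOCAL random_page_cost = 2.0;    -- tell the planner random I/O is cheaper than the default assumes
SELECT o.id, c.name
FROM orders o JOIN customers c ON c.id = o.customer_id
WHERE c.region = 'EU';
COMMIT;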
{
"msg_contents": "On Thu, Feb 3, 2011 at 6:33 PM, Mladen Gogala <[email protected]> wrote:\n> Kevin Grittner wrote:\n>> Mladen Gogala <[email protected]> wrote:\n>>>\n>>> Maybe we can agree to remove that ridiculous \"we don't want hints\"\n>>> note from Postgresql wiki?\n>>>\n>>\n>> I'd be against that. This is rehashed less frequently since that\n>> went in. Less wasted time and bandwidth with it there.\n>\n> Well, the problem will not go away. As I've said before, all other\n> databases have that feature and none of the reasons listed here convinced me\n> that everybody else has a crappy optimizer. The problem may go away\n> altogether if people stop using PostgreSQL.\n\nYou seem to be asserting that without hints, problem queries can't be\nfixed. But you haven't offered any evidence for that proposition, and\nit doesn't match my experience, or the experience of other people on\nthis list who have been using PostgreSQL for a very long time. If you\nwant to seriously advance this conversation, you should (1) learn how\npeople who use PostgreSQL solve these problems and then (2) if you\nthink there are cases where those methods are inadequate, present\nthem, and let's have a discussion about it. People in this community\nDO change their mind about things - but they do so in response to\n*evidence*. You haven't presented one tangible example of where the\nsort of hints you seem to want would actually help anything, and yet\nyou're accusing the people who don't agree with you of being engaged\nin a religious war. It seems to me that the shoe is on the other\nfoot. Religion is when you believe something first and then look for\nevidence to support it. Science goes the other direction.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Feb 2011 19:08:21 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Joshua D. Drake wrote:\n> On Thu, 2011-02-03 at 18:33 -0500, Mladen Gogala wrote:\n> \n>> \n>> \n>>> \n>>> Exactly what we don't want.\n>>> \n>>> \n>> Who is \"we\"?\n>> \n>\n> The majority of long term hackers.\n>\n> \nIf that is so, I don't see \"world domination\" in the future, exactly \nthe opposite. Database whose creators don't trust their users cannot \ncount on the very bright future. All other databases do have that \nfeature. I must say, this debate gave me a good deal of stuff to think \nabout.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 19:13:17 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 2011-02-03 23:29, Robert Haas wrote:\n> Yeah, but you'll be passing the entire table through this separate\n> process that may only need to see 1% of it or less on a large table.\n\nIt doesn't sound too impossible to pass only a percentage, starting high\nand dropping towards 1% once the loaded size has become \"large\".\n-- \nJeremy\n",
"msg_date": "Fri, 04 Feb 2011 00:29:22 +0000",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "\n> All other databases do have that feature. I must say, this\n> debate gave me a good deal of stuff to think about.\n\nAaaaand, I think we're done here. The idea that the lack of hints will kill\nPostgreSQL is already demonstrably false. This is sounding more and\nmore like a petulant tantrum.\n\nFolks, I apologize for ever taking part in this conversation and contributing\nto the loss of signal to noise. Please forgive me.\n\n--\nShaun Thomas\nPeak6 | 141 W. Jackson Blvd. | Suite 800 | Chicago, IL 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 3 Feb 2011 18:30:50 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, 3 Feb 2011, Robert Haas wrote:\n\n> On Thu, Feb 3, 2011 at 3:54 PM, <[email protected]> wrote:\n>> with the current code, this is a completely separate process that knows\n>> nothing about the load, so if you kick it off when you start the load, it\n>> makes a pass over the table (competing for I/O), finishes, you continue to\n>> update the table, so it makes another pass, etc. As you say, this is a bad\n>> thing to do. I am saying to have an option that ties the two togeather,\n>> essentially making the data feed into the Analyze run be a fork of the data\n>> comeing out of the insert run going to disk. So the Analyze run doesn't do\n>> any I/O and isn't going to complete until the insert is complete. At which\n>> time it will have seen one copy of the entire table.\n>\n> Yeah, but you'll be passing the entire table through this separate\n> process that may only need to see 1% of it or less on a large table.\n> If you want to write the code and prove it's better than what we have\n> now, or some other approach that someone else may implement in the\n> meantime, hey, this is an open source project, and I like improvements\n> as much as the next guy. But my prediction for what it's worth is\n> that the results will suck. :-)\n\nI will point out that 1% of a very large table can still be a lot of disk \nI/O that is avoided (especially if it's random I/O that's avoided)\n\nDavid Lang\n",
"msg_date": "Thu, 3 Feb 2011 16:39:12 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Robert Haas wrote:\n> On Thu, Feb 3, 2011 at 6:33 PM, Mladen Gogala <[email protected]> wrote:\n> \n>> Kevin Grittner wrote:\n>> \n>>> Mladen Gogala <[email protected]> wrote:\n>>> \n>>>> Maybe we can agree to remove that ridiculous \"we don't want hints\"\n>>>> note from Postgresql wiki?\n>>>>\n>>>> \n>>> I'd be against that. This is rehashed less frequently since that\n>>> went in. Less wasted time and bandwidth with it there.\n>>> \n>> Well, the problem will not go away. As I've said before, all other\n>> databases have that feature and none of the reasons listed here convinced me\n>> that everybody else has a crappy optimizer. The problem may go away\n>> altogether if people stop using PostgreSQL.\n>> \n>\n> You seem to be asserting that without hints, problem queries can't be\n> fixed. But you haven't offered any evidence for that proposition, and\n> it doesn't match my experience, or the experience of other people on\n> this list who have been using PostgreSQL for a very long time. If you\n> want to seriously advance this conversation, you should (1) learn how\n> people who use PostgreSQL solve these problems and then (2) if you\n> think there are cases where those methods are inadequate, present\n> them, and let's have a discussion about it. People in this community\n> DO change their mind about things - but they do so in response to\n> *evidence*. You haven't presented one tangible example of where the\n> sort of hints you seem to want would actually help anything, and yet\n> you're accusing the people who don't agree with you of being engaged\n> in a religious war. It seems to me that the shoe is on the other\n> foot. Religion is when you believe something first and then look for\n> evidence to support it. Science goes the other direction.\n>\n> \nActually, it is not unlike a religious dogma, only stating that \"hints \nare bad\". It even says so in the wiki. The arguments are\n1) Refusal to implement hints is motivated by distrust toward users, \nciting that some people may mess things up.\n Yes, they can, with and without hints.\n2) All other databases have them. This is a major feature and if I were \nin the MySQL camp, I would use it as an\n argument. Asking me for some \"proof\" is missing the point. All other \ndatabases have hints precisely because\n they are useful. Assertion that only Postgres is so smart that can \noperate without hints doesn't match the\n reality. As a matter of fact, Oracle RDBMS on the same machine will \nregularly beat PgSQL in performance.\n That has been my experience so far. I even posted counting query \nresults.\n3) Hints are \"make it or break it\" feature. They're absolutely needed in \nthe fire extinguishing situations.\n\nI see no arguments to say otherwise and until that ridiculous \"we don't \nwant hints\" dogma is on wiki, this is precisely what it is: a dogma. \nDogmas do not change and I am sorry that you don't see it that way. \nHowever, this discussion\ndid convince me that I need to take another look at MySQL and tone down \nmy engagement with PostgreSQL community. This is my last post on the \nsubject because posts are becoming increasingly personal. This level of \nirritation is also\ncharacteristic of a religious community chastising a sinner. Let me \nremind you again: all other major databases have that possibility: \nOracle, MySQL, DB2, SQL Server and Informix. 
Requiring burden of proof \nabout hints is equivalent to saying that all these databases are \ndeveloped by idiots and have a crappy optimizer.\nI am not going to back down, but I may stop using Postgres altogether. \nIf that was your goal, you almost achieved it. Oh yes, and good luck \nwith the world domination. If there is not enough common sense even to \ntake down that stupid dogma on the wiki, there isn't much hope left.\nWith this post, my participation in this group is finished, for the \nforeseeable future.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 03 Feb 2011 19:39:42 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 2011-02-03 21:51, Mark Kirkwood wrote:\n> The cases I've seen in production typically involve \"outgrowing\" optimizer parameter settings: (e.g work_mem, effective_cache_size) as the application dataset gets bigger over time.\n\nAn argument in favour of the DBMS maintaining a running estimate of such things.\n-- \nJeremy\n",
"msg_date": "Fri, 04 Feb 2011 00:49:52 +0000",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "\n\nOn PostgreSQL, the difference in no hints and hints for that one query \nwith skewed data is that the query finishes a little faster. On some \nothers, which shall remain nameless, it is the difference between \nfinishing in seconds or days, or maybe never. Hints can be useful, but \nI can also see why they are not a top priority. They are rarely needed, \nand only when working around a bug. If you want them so badly, you have \nthe source, write a contrib module (can you do that on Oracle or \nMSSQL?) If I have a choice between the developers spending time on \nimplementing hints, and spending time on improving the optimiser, I'll \ntake the optimiser.\n\nTom Kyte agrees:\nhttp://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:8912905298920\nhttp://tkyte.blogspot.com/2006/08/words-of-wisdom.html\n\n\n\nOracle can be faster on count queries, but that is the only case I have \nseen. Generally on most other queries, especially when it involves \ncomplex joins, or indexes on text fields, PostgreSQL is faster on the \nsame hardware.\n\n",
"msg_date": "Thu, 03 Feb 2011 18:18:28 -0700",
"msg_from": "Grant Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 04/02/11 13:49, Jeremy Harris wrote:\n> On 2011-02-03 21:51, Mark Kirkwood wrote:\n>> The cases I've seen in production typically involve \"outgrowing\" \n>> optimizer parameter settings: (e.g work_mem, effective_cache_size) as \n>> the application dataset gets bigger over time.\n>\n> An argument in favour of the DBMS maintaining a running estimate of \n> such things.\n\nThat is an interesting idea - I'm not quite sure how it could apply to \nserver config settings (e.g work_mem) for which it would be dangerous to \nallow the server to increase on the fly, but it sure would be handy to \nhave some sort of query execution \"memory\" so that alerts like:\n\n\"STATEMENT: SELECT blah : PARAMETERS blah: using temp file(s), last \nexecution used memory\"\n\ncould be generated (this could be quite complex I guess, requiring some \nsort of long lived statement plan cache).\n\nCheers\n\nMark\n\n\n\n\n\n\n On 04/02/11 13:49, Jeremy Harris wrote:\n On\n 2011-02-03 21:51, Mark Kirkwood wrote:\n \nThe cases I've seen in production\n typically involve \"outgrowing\" optimizer parameter settings:\n (e.g work_mem, effective_cache_size) as the application dataset\n gets bigger over time.\n \n\n\n An argument in favour of the DBMS maintaining a running estimate\n of such things.\n \n\n\n That is an interesting idea - I'm not quite sure how it could\n apply to server config settings (e.g work_mem) for which it\n would be dangerous to allow the server to increase on the fly,\n but it sure would be handy to have some sort of query execution\n \"memory\" so that alerts like:\n\n \"STATEMENT: SELECT blah : PARAMETERS blah: using temp file(s),\n last execution used memory\"\n\n could be generated (this could be quite complex I guess,\n requiring some sort of long lived statement plan cache).\n\n Cheers\n\n Mark",
"msg_date": "Fri, 04 Feb 2011 14:28:08 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
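Part of the alert Mark describes above can be approximated with existing logging; a minimal sketch (log_temp_files is a superuser setting, more usually placed in postgresql.conf than set interactively):

SET log_temp_files = 0;   -- 0 logs every temporary file a statement spills to disk, together with the statement; a positive value (in kB) logs only files at least that large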
{
"msg_contents": "On Thu, Feb 3, 2011 at 7:39 PM, <[email protected]> wrote:\n>> Yeah, but you'll be passing the entire table through this separate\n>> process that may only need to see 1% of it or less on a large table.\n>> If you want to write the code and prove it's better than what we have\n>> now, or some other approach that someone else may implement in the\n>> meantime, hey, this is an open source project, and I like improvements\n>> as much as the next guy. But my prediction for what it's worth is\n>> that the results will suck. :-)\n>\n> I will point out that 1% of a very large table can still be a lot of disk\n> I/O that is avoided (especially if it's random I/O that's avoided)\n\nSure, but I think that trying to avoid it will be costly in other ways\n- you'll be streaming a huge volume of data through some auxiliary\nprocess, which will have to apply some algorithm that's very different\nfrom the one we use today. The reality is that I think there's little\nevidence that the way we do ANALYZE now is too expensive. It's\ntypically very cheap and works very well. It's a bit annoying when it\nfires off in the middle of a giant data load, so we might need to\nchange the time of it a little, but if there's a problem with the\noperation itself being too costly, this is the first I'm hearing of\nit. We've actually worked *really* hard to make it cheap.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Feb 2011 20:29:14 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 7:39 PM, Mladen Gogala <[email protected]> wrote:\n> reality. As a matter of fact, Oracle RDBMS on the same machine will\n> regularly beat PgSQL in performance.\n> That has been my experience so far. I even posted counting query results.\n\nIt sure is, but those count queries didn't run faster because of query\nplanner hints. They ran faster because of things like index-only\nscans, fast full index scans, asynchronous I/O, and parallel query.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Feb 2011 20:36:32 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, 3 Feb 2011, Robert Haas wrote:\n\n> On Thu, Feb 3, 2011 at 7:39 PM, <[email protected]> wrote:\n>>> Yeah, but you'll be passing the entire table through this separate\n>>> process that may only need to see 1% of it or less on a large table.\n>>> If you want to write the code and prove it's better than what we have\n>>> now, or some other approach that someone else may implement in the\n>>> meantime, hey, this is an open source project, and I like improvements\n>>> as much as the next guy. �But my prediction for what it's worth is\n>>> that the results will suck. �:-)\n>>\n>> I will point out that 1% of a very large table can still be a lot of disk\n>> I/O that is avoided (especially if it's random I/O that's avoided)\n>\n> Sure, but I think that trying to avoid it will be costly in other ways\n> - you'll be streaming a huge volume of data through some auxiliary\n> process, which will have to apply some algorithm that's very different\n> from the one we use today. The reality is that I think there's little\n> evidence that the way we do ANALYZE now is too expensive. It's\n> typically very cheap and works very well. It's a bit annoying when it\n> fires off in the middle of a giant data load, so we might need to\n> change the time of it a little, but if there's a problem with the\n> operation itself being too costly, this is the first I'm hearing of\n> it. We've actually worked *really* hard to make it cheap.\n\nI could be misunderstanding things here, but my understanding is that it's \n'cheap' in that it has little impact on the database while it is running.\n\nthe issue here is that the workflow is\n\nload data\nanalyze\nstart work\n\nso the cost of analyze in this workflow is not \"1% impact on query speed \nfor the next X time\", it's \"the database can't be used for the next X time \nwhile we wait for analyze to finish running\"\n\nI don't understand why the algorithm would have to be so different than \nwhat's done today, surely the analyze thread could easily be tweaked to \nignore the rest of the data (assuming we don't have the thread sending the \ndata to analyze do the filtering)\n\nDavid Lang\n>From [email protected] Thu Feb 3 21:46:39 2011\nReceived: from maia.hub.org (maia-2.hub.org [200.46.204.251])\n\tby mail.postgresql.org (Postfix) with ESMTP id 7F1811337B96\n\tfor <[email protected]>; Thu, 3 Feb 2011 21:46:39 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.251]) (amavisd-maia, port 10024)\n with ESMTP id 80837-04\n for <[email protected]>;\n Fri, 4 Feb 2011 01:46:32 +0000 (UTC)\nX-Greylist: from auto-whitelisted by SQLgrey-1.7.6\nReceived: from outmail148143.authsmtp.com (outmail148143.authsmtp.com [62.13.148.143])\n\tby mail.postgresql.org (Postfix) with ESMTP id AF4A11337B95\n\tfor <[email protected]>; Thu, 3 Feb 2011 21:46:31 -0400 (AST)\nReceived: from mail-c193.authsmtp.com (mail-c193.authsmtp.com [62.13.128.118])\n\tby punt8.authsmtp.com (8.14.2/8.14.2/Kp) with ESMTP id p141kVx4097555;\n\tFri, 4 Feb 2011 01:46:31 GMT\nReceived: from Sidney-Stratton.local (dsl081-245-111.sfo1.dsl.speakeasy.net [64.81.245.111])\n\t(authenticated bits=0)\n\tby mail.authsmtp.com (8.14.2/8.14.2) with ESMTP id p141kSnH064206;\n\tFri, 4 Feb 2011 01:46:29 GMT\nMessage-ID: <[email protected]>\nDate: Thu, 03 Feb 2011 17:46:27 -0800\nFrom: Josh Berkus <[email protected]>\nUser-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1b3pre) Gecko/20090223 Thunderbird/3.0b2\nMIME-Version: 1.0\nTo: 
[email protected]\nCC: Mladen Gogala <[email protected]>\nSubject: Re: Why we don't want hints Was: Slow count(*) again...\nReferences: <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]>\nIn-Reply-To: <[email protected]>\nContent-Type: text/plain; charset=UTF-8\nContent-Transfer-Encoding: 7bit\nX-Server-Quench: 9647b83a-3000-11e0-97bb-002264978518\nX-AuthReport-Spam: If SPAM / abuse - report it at: http://www.authsmtp.com/abuse\nX-AuthRoute: OCdyZgscClZXSx8a IioLCC5HRQ8+YBZL BAkGMA9GIUINWEQL c1ACch19PVdbHwkA AnYLWl5QVldyWS1z bxRZbBtfZk9QXgRr T0pMQFdNFEsoABgA XX1AKhl0cwdGfjB3 Zk9qEHldWEMofUUs X01UFW0bZGY1aH0W VxIKagNUcgFMehZC YlV+XD1vNG8XDRoV JSEUBRUEdQpfOWxK T0kBKlRdXQ4UFzgg DxADGyk0VXIMXHd7 FBghNRYXG1sXLgVw cBMoVlsZNVlUTGUA \nX-Authentic-SMTP: 61633136333939.1014:706\nX-AuthFastPath: 0 (Was 255)\nX-AuthVirus-Status: No virus detected - but ensure you scan with your own anti-virus system.\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits=-1.9 tagged_above=-10 required=5 tests=BAYES_00=-1.9,\n RCVD_IN_DNSWL_NONE=-0.0001\nX-Spam-Level: \nX-Archive-Number: 201102/149\nX-Sequence-Number: 42287\n\n\n> http://wiki.postgresql.org/wiki/Todo#Features_We_Do_Not_Want\n> \n> No. 2 on the list.\n\nHeck, *I* wrote that text.\n\nI quote:\n\n\"Optimizer hints are used to work around problems in the optimizer and\nintroduce upgrade and maintenance issues. We would rather have the\nproblems reported and fixed. We have discussed a more sophisticated\nsystem of per-class cost adjustment instead, but a specification remains\nto be developed.\"\n\nThat seems pretty straightforwards. There are even links to prior\ndiscussions about what kind of system would work. I don't think this\ntext needs any adjustment; that's our clear consensus on the hint issue:\nwe want a tool which works better than what we've seen in other databases.\n\nQuite frankly, the main reason why most DBMSes have a hinting system has\nnothing to do with the quality of optimizer and everything to do with\nDBAs who think they're smarter than the optimizer (incorrectly). Oracle\nhas a darned good query optimizer, and SQL server's is even better.\nHowever, there are a lot of undereducated or fossilized DBAs out there\nwho don't trust the query planner and want to override it in fairly\narbitrary ways; I refer you to the collected works of Dan Tow, for example.\n\nIn many cases Hints are used by DBAs in \"emergency\" situations because\nthey are easier than figuring out what the underlying issue is, even\nwhen that could be done relatively simply. Improving diagnostic query\ntools would be a much better approach here; for example, the team\nworking on hypothetical indexes has a lot to offer. If you can figure\nout what's really wrong with the query in 10 minutes, you don't need a hint.\n\nYes, I occasionally run across cases where having a query tweaking\nsystem would help me fix a pathological failure in the planner.\nHowever, even on data warehouses that's less than 0.1% of the queries I\ndeal with, so this isn't exactly a common event. And any hinting system\nwe develop needs to address those specific cases, NOT a hypothetical\ncase which can't be tested. 
Otherwise we'll implement hints which\nactually don't improve queries.\n\nCommercial DBMSes have to give in to what their big paying customers\nwant, no matter how stupid it is. I'm grateful that I can work on a DBMS\n-- the third most popular SQL DBMS in the world -- which can focus on\nquality instead.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Thu, 3 Feb 2011 17:37:14 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 8:37 PM, <[email protected]> wrote:\n> On Thu, 3 Feb 2011, Robert Haas wrote:\n>\n>> On Thu, Feb 3, 2011 at 7:39 PM, <[email protected]> wrote:\n>>>>\n>>>> Yeah, but you'll be passing the entire table through this separate\n>>>> process that may only need to see 1% of it or less on a large table.\n>>>> If you want to write the code and prove it's better than what we have\n>>>> now, or some other approach that someone else may implement in the\n>>>> meantime, hey, this is an open source project, and I like improvements\n>>>> as much as the next guy. But my prediction for what it's worth is\n>>>> that the results will suck. :-)\n>>>\n>>> I will point out that 1% of a very large table can still be a lot of disk\n>>> I/O that is avoided (especially if it's random I/O that's avoided)\n>>\n>> Sure, but I think that trying to avoid it will be costly in other ways\n>> - you'll be streaming a huge volume of data through some auxiliary\n>> process, which will have to apply some algorithm that's very different\n>> from the one we use today. The reality is that I think there's little\n>> evidence that the way we do ANALYZE now is too expensive. It's\n>> typically very cheap and works very well. It's a bit annoying when it\n>> fires off in the middle of a giant data load, so we might need to\n>> change the time of it a little, but if there's a problem with the\n>> operation itself being too costly, this is the first I'm hearing of\n>> it. We've actually worked *really* hard to make it cheap.\n>\n> I could be misunderstanding things here, but my understanding is that it's\n> 'cheap' in that it has little impact on the database while it is running.\n\nI mean that it's cheap in that it usually takes very little time to complete.\n\n> the issue here is that the workflow is\n>\n> load data\n> analyze\n> start work\n>\n> so the cost of analyze in this workflow is not \"1% impact on query speed for\n> the next X time\", it's \"the database can't be used for the next X time while\n> we wait for analyze to finish running\"\n\nOK.\n\n> I don't understand why the algorithm would have to be so different than\n> what's done today, surely the analyze thread could easily be tweaked to\n> ignore the rest of the data (assuming we don't have the thread sending the\n> data to analyze do the filtering)\n\nIf you want to randomly pick 10,000 rows out of all the rows that are\ngoing to be inserted in the table without knowing in advance how many\nthere will be, how do you do that? Maybe there's an algorithm, but\nit's not obvious to me. But mostly, I question how expensive it is to\nhave a second process looking at the entire table contents vs. going\nback and rereading a sample of rows at the end. I can't remember\nanyone ever complaining \"ANALYZE took too long to run\". I only\nremember complaints of the form \"I had to remember to manually run it\nand I wish it had just happened by itself\".\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 3 Feb 2011 21:05:52 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "> I can't remember\n> anyone ever complaining \"ANALYZE took too long to run\". I only\n> remember complaints of the form \"I had to remember to manually run it\n> and I wish it had just happened by itself\".\n\nRobert,\n\nThis sounds like an argument in favor of an implicit ANALYZE after all\nCOPY statements, and/or an implicit autoanalyze check after all\nINSERT/UPDATE statements.\n\n-Conor\n",
"msg_date": "Thu, 3 Feb 2011 18:12:57 -0800",
"msg_from": "Conor Walsh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, 2011-02-03 at 18:12 -0800, Conor Walsh wrote:\n> > I can't remember\n> > anyone ever complaining \"ANALYZE took too long to run\". I only\n> > remember complaints of the form \"I had to remember to manually run it\n> > and I wish it had just happened by itself\".\n> \n> Robert,\n> \n> This sounds like an argument in favor of an implicit ANALYZE after all\n> COPY statements, and/or an implicit autoanalyze check after all\n> INSERT/UPDATE statements.\n\nWell that already happens. Assuming you insert/update or copy in a\ngreater amount than the threshold for the \n\nautovacuum_analyze_scale_factor\n\nThen autovacuum is going to analyze on the next run. The default is .1\nso it certainly doesn't take much.\n\nJD\n\n> \n> -Conor\n> \n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Thu, 03 Feb 2011 18:33:30 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
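If the default threshold is too coarse for a particular hot table, the same knobs can be tightened per table; a sketch with a hypothetical table name:

ALTER TABLE events SET (autovacuum_analyze_scale_factor = 0.02,  -- analyze after ~2% of rows change instead of the default 10%
                        autovacuum_analyze_threshold = 1000);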
{
"msg_contents": "On Thu, Feb 3, 2011 at 6:33 PM, Joshua D. Drake <[email protected]> wrote:\n> Well that already happens...\n\nMy understanding is that auto-analyze will fire only after my\ntransaction is completed, because it is a seperate daemon. If I do\nlike so:\n\nBEGIN;\nCOPY ...;\n-- Dangerously un-analyzed\nSELECT complicated-stuff ...;\nEND;\n\nAuto-analyze does not benefit me, or might not because it won't fire\noften enough. I agree that analyze is very fast, and it often seems\nto me like the cost/benefit ratio suggests making auto-analyze even\nmore aggressive.\n\nDisclaimer/disclosure: I deal exclusively with very large data sets\nthese days, so analyzing all the time is almost a highly effective\nworst-case amortization. I understand that constant analyze is not so\ngreat in, say, an OLTP setting. But if the check is cheap, making\nauto-analyze more integrated and less daemon-driven might be a net\nwin. I'm not sure.\n\n-Conor\n",
"msg_date": "Thu, 3 Feb 2011 18:45:09 -0800",
"msg_from": "Conor Walsh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
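For the transaction-shaped case Conor shows above, an explicit ANALYZE between the COPY and the SELECT closes the gap, since ANALYZE may run inside a transaction block and the same transaction sees the freshly gathered statistics. A sketch with hypothetical table, column, and file names:

BEGIN;
COPY staging FROM '/tmp/batch.dat';       -- bulk load into a hypothetical staging table
ANALYZE staging;                          -- cheap sampling pass; the planner below now has real statistics
SELECT count(*) FROM staging s JOIN dimension d ON d.id = s.dim_id;
COMMIT;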
{
"msg_contents": "On Thu, Feb 3, 2011 at 5:39 PM, Mladen Gogala <[email protected]> wrote:\n> Actually, it is not unlike a religious dogma, only stating that \"hints are\n> bad\". It even says so in the wiki. The arguments are\n\nThere's been considerably more output than \"hints bad! Hulk Smash!\"\n\n> 1) Refusal to implement hints is motivated by distrust toward users, citing\n> that some people may mess things up.\n\nIt's more about creating a knob that will create more problems than it\nsolves. Which I get. And making sure that if you make such a knob\nthat it'll do the least damage and give the most usefulness. Until a\ngood proposal and some code to do it shows up, we're all just waving\nour hands around describing different parts of the elephant.\n\n> 2) All other databases have them. This is a major feature and if I were in\n> the MySQL camp, I would use it as an\n> argument. Asking me for some \"proof\" is missing the point. All other\n> databases have hints precisely because\n> they are useful.\n\nUh, two points. 1: Argumentum Ad Populum. Just because it's popular\ndoesn't mean it's right. 2: Other databases have them because their\noptimizers can't make the right decision even most of the time. Yes\nthey're useful, but like a plastic bad covering a broken car window,\nthey're useful because they cover something that's inherently broken.\n\n\n> Assertion that only Postgres is so smart that can operate\n> without hints doesn't match the\n> reality.\n\nAgain, you're twisting what people have said. the point being that\nwhile postgresql makes mistakes, we'd rather concentrate on making the\nplanner smarter than giving it a lobotomy and running it remotely like\na robot.\n\n\n> As a matter of fact, Oracle RDBMS on the same machine will\n> regularly beat PgSQL in performance.\n\nYes. And this has little to do with hints. It has to do with years\nof development lead with THOUSANDS of engineers who can work on the\nmost esoteric corner cases in their spare time. Find the pg project a\ncouple hundred software engineers and maybe we'll catch Oracle a\nlittle quicker. Otherwise we'll have to marshall our resources to do\nthe best we can on the project ,and that means avoiding maintenance\nblack holes and having the devs work on the things that give the most\nbenefit for the cost. Hints are something only a tiny percentage of\nusers could actually use and use well.\n\nWrite a check, hire some developers and get the code done and present\nit to the community. If it's good and works it'll likely get\naccepted. Or use EDB, since it has oracle compatibility in it.\n\n> That has been my experience so far. I even posted counting query results.\n> 3) Hints are \"make it or break it\" feature. They're absolutely needed in the\n> fire extinguishing situations.\n\nI've been using pg since 6.5.2. I've used Oracle since version 8 or\nso. I have never been in a situation with postgresql where I couldn't\nfix the problem with either tuning, query editing, or asking Tom for a\npatch for a problem I found in it. Turnaround time on the last patch\nthat was made to fix my problem was somewhere in the 24 hour range.\nIf Oracle can patch their planner that fast, let me know.\n\n> I see no arguments to say otherwise and until that ridiculous \"we don't want\n> hints\" dogma is on wiki, this is precisely what it is: a dogma. Dogmas do\n> not change and I am sorry that you don't see it that way. 
However, this\n> discussion\n\nNo, it's not dogma, you need to present a strong coherent argument,\nnot threaten people on the list etc.\n",
"msg_date": "Thu, 3 Feb 2011 19:59:46 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, 2011-02-03 at 16:50 -0500, Mladen Gogala wrote:\n> Chris Browne wrote:\n> > Well, the community declines to add hints until there is actual\n> > consensus on a good way to add hints.\n> > \n> OK. That's another matter entirely. Who should make that decision? Is \n> there a committee or a person who would be capable of making that decision?\n\nAdmittedly I haven't read this whole discussion, but it seems like\n\"hints\" might be too poorly defined right now.\n\nIf by \"hints\" we mean some mechanism to influence the planner in a more\nfine-grained way, I could imagine that some proposal along those lines\nmight gain significant support.\n\nBut, as always, it depends on the content and quality of the proposal\nmore than the title. If someone has thoughtful proposal that tries to\nbalance things like:\n* DBA control versus query changes/comments\n* compatibility across versions versus finer plan control\n* allowing the existing optimizer to optimize portions of the\n query while controlling other portions\n* indicating costs and cardinalities versus plans directly\n\nI am confident that such a proposal will gain traction among the\ncommunity as a whole.\n\nHowever, a series proposals for individual hacks for specific purposes\nwill probably be rejected. I am in no way implying that you are\napproaching it this way -- I am just trying to characterize an approach\nthat won't make progress.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 03 Feb 2011 19:01:10 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 7:05 PM, Robert Haas <[email protected]> wrote:\n> If you want to randomly pick 10,000 rows out of all the rows that are\n> going to be inserted in the table without knowing in advance how many\n> there will be, how do you do that?\n\nMaybe you could instead just have it use some % of the rows going by?\nJust a guess.\n",
"msg_date": "Thu, 3 Feb 2011 20:13:22 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 02/03/2011 09:45 PM, Conor Walsh wrote:\n> My understanding is that auto-analyze will fire only after my\n> transaction is completed, because it is a seperate daemon. If I do\n> like so:\n>\n> BEGIN;\n> COPY ...;\n> -- Dangerously un-analyzed\n> SELECT complicated-stuff ...;\n> END;\n>\n> Auto-analyze does not benefit me, or might not because it won't fire\n> often enough. I agree that analyze is very fast, and it often seems\n> to me like the cost/benefit ratio suggests making auto-analyze even\n> more aggressive.\n\nThe count discussion is boring. Nothing new there. But auto-analyze on \ndirty writes does interest me. :-)\n\nMy understanding is:\n\n1) Background daemon wakes up and checks whether a number of changes \nhave happened to the database, irrelevant of transaction boundaries.\n\n2) Background daemon analyzes a percentage of rows in the database for \nstatistical data, irrelevant of row visibility.\n\n3) Analyze is important for both visible rows and invisible rows, as \nplan execution is impacted by invisible rows. As long as they are part \nof the table, they may impact the queries performed against the table.\n\n4) It doesn't matter if the invisible rows are invisible because they \nare not yet committed, or because they are not yet vacuumed.\n\nWould somebody in the know please confirm the above understanding for my \nown piece of mind?\n\nThanks,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n",
"msg_date": "Thu, 03 Feb 2011 22:31:35 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Does auto-analyze work on dirty writes? (was: Re: [HACKERS]\n\tSlow count(*) again...)"
},
{
"msg_contents": "Scott Marlowe wrote:\n> Yes they're useful, but like a plastic bad covering a broken car window,\n> they're useful because they cover something that's inherently broken.\n> \n\nAwesome. Now we have a car anology, with a funny typo no less. \n\"Plastic bad\", I love it. This is real progress toward getting all the \ncommon list argument idioms aired out. All we need now is a homage to \nMike Godwin and we can close this down.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 03 Feb 2011 22:40:31 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 8:40 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> Yes they're useful, but like a plastic bad covering a broken car window,\n>> they're useful because they cover something that's inherently broken.\n>>\n>\n> Awesome. Now we have a car anology, with a funny typo no less. \"Plastic\n> bad\", I love it. This is real progress toward getting all the common list\n> argument idioms aired out. All we need now is a homage to Mike Godwin and\n> we can close this down.\n\nIt's not so much a car analogy as a plastic bad analogy.\n",
"msg_date": "Thu, 3 Feb 2011 20:48:46 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Scott Marlowe wrote:\n> It's not so much a car analogy as a plastic bad analogy.\n> \n\nIs that like a Plastic Ono Band? Because I think one of those is the \nonly thing holding the part of my bumper I smashed in the snow on right \nnow. I could be wrong about the name.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 03 Feb 2011 22:56:12 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 8:56 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> It's not so much a car analogy as a plastic bad analogy.\n>>\n>\n> Is that like a Plastic Ono Band? Because I think one of those is the only\n> thing holding the part of my bumper I smashed in the snow on right now. I\n> could be wrong about the name.\n\nNo, that's a plastic oh no! band you have.\n",
"msg_date": "Thu, 3 Feb 2011 21:00:01 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Scott Marlowe wrote:\n> No, that's a plastic oh no! band you have.\n> \n\nWow, right you are. So with this type holding together my Japanese car, \nif it breaks and parts fall off, I'm supposed to yell \"Oh, no! There \ngoes Tokyo!\", yes?\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 03 Feb 2011 23:10:41 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 6:05 PM, Robert Haas <[email protected]> wrote:\n\n>\n> If you want to randomly pick 10,000 rows out of all the rows that are\n> going to be inserted in the table without knowing in advance how many\n> there will be, how do you do that?\n>\n\nReservoir sampling, as the most well-known option:\nhttp://en.wikipedia.org/wiki/Reservoir_sampling\n\n-- \n- David T. Wilson\[email protected]\n\nOn Thu, Feb 3, 2011 at 6:05 PM, Robert Haas <[email protected]> wrote:\n\nIf you want to randomly pick 10,000 rows out of all the rows that are\ngoing to be inserted in the table without knowing in advance how many\nthere will be, how do you do that?Reservoir sampling, as the most well-known option: http://en.wikipedia.org/wiki/Reservoir_sampling\n-- - David T. [email protected]",
"msg_date": "Thu, 3 Feb 2011 21:06:18 -0800",
"msg_from": "David Wilson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
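The reservoir sampling mentioned above ("Algorithm R") keeps a fixed-size uniform sample while making a single pass over a stream of unknown length, which is exactly the property being discussed. As a minimal illustrative sketch only -- not code from this thread -- the idea could be written in PL/pgSQL against a hypothetical big_table(id integer):

    -- Hypothetical table and function names; a sketch of Algorithm R, not a proposal.
    CREATE OR REPLACE FUNCTION reservoir_sample_ids(k integer)
    RETURNS integer[]
    LANGUAGE plpgsql AS $$
    DECLARE
        reservoir integer[] := '{}';
        n bigint := 0;
        r record;
        j integer;
    BEGIN
        FOR r IN SELECT id FROM big_table LOOP
            n := n + 1;
            IF n <= k THEN
                -- Fill phase: keep the first k rows seen.
                reservoir := array_append(reservoir, r.id);
            ELSE
                -- Replacement phase: row n survives with probability k/n.
                j := 1 + floor(random() * n)::integer;
                IF j <= k THEN
                    reservoir[j] := r.id;
                END IF;
            END IF;
        END LOOP;
        RETURN reservoir;
    END;
    $$;

    -- Usage: SELECT reservoir_sample_ids(10000);

PostgreSQL's own ANALYZE uses a Vitter-style reservoir sampler for the row-selection stage of its two-stage sampling, which is why it does not need to know the row count up front.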
{
"msg_contents": "Neat. That was my 'you learn something every day' moment. Thanks.\n\nOn Thu, Feb 3, 2011 at 9:06 PM, David Wilson <[email protected]>wrote:\n\n>\n>\n> On Thu, Feb 3, 2011 at 6:05 PM, Robert Haas <[email protected]> wrote:\n>\n>>\n>> If you want to randomly pick 10,000 rows out of all the rows that are\n>> going to be inserted in the table without knowing in advance how many\n>> there will be, how do you do that?\n>>\n>\n> Reservoir sampling, as the most well-known option:\n> http://en.wikipedia.org/wiki/Reservoir_sampling\n>\n> --\n> - David T. Wilson\n> [email protected]\n>\n\nNeat. That was my 'you learn something every day' moment. Thanks.On Thu, Feb 3, 2011 at 9:06 PM, David Wilson <[email protected]> wrote:\nOn Thu, Feb 3, 2011 at 6:05 PM, Robert Haas <[email protected]> wrote:\n\n\nIf you want to randomly pick 10,000 rows out of all the rows that are\ngoing to be inserted in the table without knowing in advance how many\nthere will be, how do you do that?Reservoir sampling, as the most well-known option: http://en.wikipedia.org/wiki/Reservoir_sampling\n-- - David T. [email protected]",
"msg_date": "Thu, 3 Feb 2011 22:36:20 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "2011/2/3 <[email protected]>\n\n>\n> If the table is not large enough to fit in ram, then it will compete for\n> I/O, and the user will have to wait.\n>\n> what I'm proposing is that as the records are created, the process doing\n> the creation makes copies of the records (either all of them, or some of\n> them if not all are needed for the analysis, possibly via shareing memory\n> with the analysis process), this would be synchronous with the load, not\n> asynchronous.\n>\n> this would take zero I/O bandwidth, it would take up some ram, memory\n> bandwidth, and cpu time, but a load of a large table like this is I/O\n> contrained.\n>\n> it would not make sense for this to be the default, but as an option it\n> should save a significant amount of time.\n>\n> I am making the assumption that an Analyze run only has to go over the data\n> once (a seqential scan of the table if it's >> ram for example) and gathers\n> stats as it goes.\n>\n> with the current code, this is a completely separate process that knows\n> nothing about the load, so if you kick it off when you start the load, it\n> makes a pass over the table (competing for I/O), finishes, you continue to\n> update the table, so it makes another pass, etc. As you say, this is a bad\n> thing to do. I am saying to have an option that ties the two togeather,\n> essentially making the data feed into the Analyze run be a fork of the data\n> comeing out of the insert run going to disk. So the Analyze run doesn't do\n> any I/O and isn't going to complete until the insert is complete. At which\n> time it will have seen one copy of the entire table.\n>\n> Actually that are two different problems. The one is to make analyze more\nautomatic to make select right after insert more clever by providing\nstatistics to it.\nAnother is to make it take less IO resources.\nI dont like for it to be embedded into insert (unless the threshold can be\ndetermined before inserts starts). Simply because it is more CPU/memory that\nwill slow down each insert. And if you will add knob, that is disabled by\ndefault, this will be no more good than manual analyze.\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2011/2/3 <[email protected]>\n\nIf the table is not large enough to fit in ram, then it will compete for I/O, and the user will have to wait.\n\nwhat I'm proposing is that as the records are created, the process doing the creation makes copies of the records (either all of them, or some of them if not all are needed for the analysis, possibly via shareing memory with the analysis process), this would be synchronous with the load, not asynchronous.\n\nthis would take zero I/O bandwidth, it would take up some ram, memory bandwidth, and cpu time, but a load of a large table like this is I/O contrained.\n\nit would not make sense for this to be the default, but as an option it should save a significant amount of time.\n\nI am making the assumption that an Analyze run only has to go over the data once (a seqential scan of the table if it's >> ram for example) and gathers stats as it goes.\n\nwith the current code, this is a completely separate process that knows nothing about the load, so if you kick it off when you start the load, it makes a pass over the table (competing for I/O), finishes, you continue to update the table, so it makes another pass, etc. As you say, this is a bad thing to do. 
I am saying to have an option that ties the two togeather, essentially making the data feed into the Analyze run be a fork of the data comeing out of the insert run going to disk. So the Analyze run doesn't do any I/O and isn't going to complete until the insert is complete. At which time it will have seen one copy of the entire table.\nActually that are two different problems. The one is to make analyze more automatic to make select right after insert more clever by providing statistics to it. Another is to make it take less IO resources.\nI dont like for it to be embedded into insert (unless the threshold can be determined before inserts starts). Simply because it is more CPU/memory that will slow down each insert. And if you will add knob, that is disabled by default, this will be no more good than manual analyze.\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Fri, 4 Feb 2011 09:08:59 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "2011/2/4 Mladen Gogala <[email protected]>\n\n> Josh Berkus wrote:\n>\n>> However, since this system wasn't directly compatible with Oracle Hints,\n>> folks pushing for hints dropped the solution as unsatisfactory. This is\n>> the discussion we have every time: the users who want hints specifically\n>> want hints which work exactly like Oracle's, and aren't interested in a\n>> system designed for PostgreSQL. It's gotten very boring; it's like the\n>> requests to support MySQL-only syntax.\n>>\n>>\n> Actually, I don't want Oracle hints. Oracle hints are ugly and cumbersome.\n> I would prefer something like this:\n>\n>\n> http://dev.mysql.com/doc/refman/5.0/en/index-hints.html\n>\n> As far as I can see, this should be embedded into query, should not it? You\ncan achive something like this by setting variables right before query\n(usually even in same sall by embedding multiple statements into execute\nquery call).\nE.g. \"set random_page_cost=1;select something that need index; set\nrandom_page_to to default;\". Yes this is as ugly as a hack may look and\ncan't be used on per-table basis in complex statement, but you have it.\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2011/2/4 Mladen Gogala <[email protected]>\nJosh Berkus wrote:\n\nHowever, since this system wasn't directly compatible with Oracle Hints,\nfolks pushing for hints dropped the solution as unsatisfactory. This is\nthe discussion we have every time: the users who want hints specifically\nwant hints which work exactly like Oracle's, and aren't interested in a\nsystem designed for PostgreSQL. It's gotten very boring; it's like the\nrequests to support MySQL-only syntax.\n \n\nActually, I don't want Oracle hints. Oracle hints are ugly and cumbersome. I would prefer something like this:\n\nhttp://dev.mysql.com/doc/refman/5.0/en/index-hints.html\nAs far as I can see, this should be embedded into query, should not it? You can achive something like this by setting variables right before query (usually even in same sall by embedding multiple statements into execute query call).\nE.g. \"set random_page_cost=1;select something that need index; set random_page_to to default;\". Yes this is as ugly as a hack may look and can't be used on per-table basis in complex statement, but you have it.\n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Fri, 4 Feb 2011 09:24:20 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
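For readers looking for the concrete shape of that workaround: the per-query settings can be scoped to a single transaction with SET LOCAL, so nothing leaks into the rest of the session. The table and query below are made up for illustration:

    BEGIN;
    -- SET LOCAL reverts automatically at COMMIT/ROLLBACK.
    SET LOCAL random_page_cost = 1.0;   -- or, e.g., SET LOCAL enable_seqscan = off;
    SELECT count(*)
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.created_at > now() - interval '1 day';
    COMMIT;

As noted in the message above, this acts on the whole statement rather than on one table or join, which is the main way it falls short of a true hint.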
{
"msg_contents": "On Fri, 4 Feb 2011, ??????? ???????? wrote:\n\n> 2011/2/3 <[email protected]>\n>\n>>\n>> If the table is not large enough to fit in ram, then it will compete for\n>> I/O, and the user will have to wait.\n>>\n>> what I'm proposing is that as the records are created, the process doing\n>> the creation makes copies of the records (either all of them, or some of\n>> them if not all are needed for the analysis, possibly via shareing memory\n>> with the analysis process), this would be synchronous with the load, not\n>> asynchronous.\n>>\n>> this would take zero I/O bandwidth, it would take up some ram, memory\n>> bandwidth, and cpu time, but a load of a large table like this is I/O\n>> contrained.\n>>\n>> it would not make sense for this to be the default, but as an option it\n>> should save a significant amount of time.\n>>\n>> I am making the assumption that an Analyze run only has to go over the data\n>> once (a seqential scan of the table if it's >> ram for example) and gathers\n>> stats as it goes.\n>>\n>> with the current code, this is a completely separate process that knows\n>> nothing about the load, so if you kick it off when you start the load, it\n>> makes a pass over the table (competing for I/O), finishes, you continue to\n>> update the table, so it makes another pass, etc. As you say, this is a bad\n>> thing to do. I am saying to have an option that ties the two togeather,\n>> essentially making the data feed into the Analyze run be a fork of the data\n>> comeing out of the insert run going to disk. So the Analyze run doesn't do\n>> any I/O and isn't going to complete until the insert is complete. At which\n>> time it will have seen one copy of the entire table.\n>>\n> Actually that are two different problems. The one is to make analyze more\n> automatic to make select right after insert more clever by providing\n> statistics to it.\n> Another is to make it take less IO resources.\n> I dont like for it to be embedded into insert (unless the threshold can be\n> determined before inserts starts). Simply because it is more CPU/memory that\n> will slow down each insert. And if you will add knob, that is disabled by\n> default, this will be no more good than manual analyze.\n\nif it can happen during the copy instead of being a step after the copy it \nwill speed things up. things like the existing parallel restore could use \nthis instead ofneeding a separate pass. so I don't think that having to \nturn it on manually makes it useless, any more than the fact that you have \nto explicity disable fsync makes that disabling feature useless (and the \ntwo features would be likely to be used togeather)\n\nwhen a copy command is issued, I assume that there is some indication of \nhow much data is going to follow. I know that it's not just 'insert \neverything until the TCP connection terminates' because that would give \nyou no way of knowing if the copy got everything in or was interrupted \npart way through. think about what happens with ftp if the connection \ndrops, you get a partial file 'successfully' as there is no size provided, \nbut with HTTP you get a known-bad transfer that you can abort or resume.\n\nDavid Lang\n",
"msg_date": "Thu, 3 Feb 2011 23:32:47 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "4 лютого 2011 р. 09:32 <[email protected]> написав:\n\n>\n>\n> when a copy command is issued, I assume that there is some indication of\n> how much data is going to follow. I know that it's not just 'insert\n> everything until the TCP connection terminates' because that would give you\n> no way of knowing if the copy got everything in or was interrupted part way\n> through. think about what happens with ftp if the connection drops, you get\n> a partial file 'successfully' as there is no size provided, but with HTTP\n> you get a known-bad transfer that you can abort or resume.\n>\n> I don't think so, since you can do 'cat my_large_copy.sql | psql'. AFAIR it\nsimply looks for end of data marker, either in protocol or in stream itself\n(run copy from stdin in psql and it will tell you what marker is).\n\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n4 лютого 2011 р. 09:32 <[email protected]> написав:\n\n\nwhen a copy command is issued, I assume that there is some indication of how much data is going to follow. I know that it's not just 'insert everything until the TCP connection terminates' because that would give you no way of knowing if the copy got everything in or was interrupted part way through. think about what happens with ftp if the connection drops, you get a partial file 'successfully' as there is no size provided, but with HTTP you get a known-bad transfer that you can abort or resume.\nI don't think so, since you can do 'cat my_large_copy.sql | psql'. AFAIR it simply looks for end of data marker, either in protocol or in stream itself (run copy from stdin in psql and it will tell you what marker is). \n-- Best regards, Vitalii Tymchyshyn",
"msg_date": "Fri, 4 Feb 2011 09:39:38 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "\n\nOn 02/04/2011 02:32 AM, [email protected] wrote:\n>\n> when a copy command is issued, I assume that there is some indication \n> of how much data is going to follow.\n>\n>\n\nNo of course there isn't. How would we do that with a stream like STDIN? \nRead the code.\n\ncheers\n\nandrew\n",
"msg_date": "Fri, 04 Feb 2011 02:59:06 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala schrieb:\n\n> Well, the problem will not go away. As I've said before, all other \n> databases have that feature and none of the reasons listed here \n> convinced me that everybody else has a crappy optimizer. The problem \n> may go away altogether if people stop using PostgreSQL.\n\nA common problem of programmers is, that they want a solution they \nalready know for a problem they already know, even if it is the worst \nsolution the can choose.\n\nThere are so many possibilities to solve a given problem and you even \nhave time to do this before your application get released.\n\nAlso: if you rely so heavily on hints, then use a database which \nsupports hints. A basic mantra in every training i have given is: use \nthe tool/technic/persons which fits best for the needs of the project. \nThere are many databases out there - choose for every project the one, \nwhich fits best!\n\nGreetings from Germany,\nTorsten\n-- \nhttp://www.dddbl.de - ein Datenbank-Layer, der die Arbeit mit 8 \nverschiedenen Datenbanksystemen abstrahiert,\nQueries von Applikationen trennt und automatisch die Query-Ergebnisse \nauswerten kann.\n",
"msg_date": "Fri, 04 Feb 2011 09:43:23 +0100",
"msg_from": "=?ISO-8859-1?Q?Torsten_Z=FChlsdorff?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "\n> Yes. And this has little to do with hints. It has to do with years\n> of development lead with THOUSANDS of engineers who can work on the\n> most esoteric corner cases in their spare time. Find the pg project a\n> couple hundred software engineers and maybe we'll catch Oracle a\n> little quicker. Otherwise we'll have to marshall our resources to do\n> the best we can on the project ,and that means avoiding maintenance\n> black holes and having the devs work on the things that give the most\n> benefit for the cost. Hints are something only a tiny percentage of\n> users could actually use and use well.\n>\n> Write a check, hire some developers and get the code done and present\n> it to the community. If it's good and works it'll likely get\n> accepted. Or use EDB, since it has oracle compatibility in it.\n>\nI have to disagree with you here. I have never seen Oracle outperform \nPostgreSQL on complex joins, which is where the planner comes in. \nPerhaps on certain throughput things, but this is likely do to how we \nhandle dead rows, and counts, which is definitely because of how dead \nrows are handled, but the easier maintenance makes up for those. Also \nboth of those are by a small percentage.\n\nI have many times had Oracle queries that never finish (OK maybe not \nnever, but not over a long weekend) on large hardware, but can be \nfinished on PostgreSQL in a matter or minutes on cheap hardware. This \nhappens to the point that often I have set up a PostgreSQL database to \ncopy the data to for querying and runnign the complex reports, even \nthough the origin of the data was Oracle, since the application was \nOracle specific. It took less time to duplicate the database and run \nthe query on PostgreSQL than it did to just run it on Oracle.\n",
"msg_date": "Fri, 04 Feb 2011 06:05:33 -0700",
"msg_from": "Grant Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 03, 2011 at 04:39:12PM -0800, [email protected] wrote:\n> On Thu, 3 Feb 2011, Robert Haas wrote:\n>\n>> On Thu, Feb 3, 2011 at 3:54 PM, <[email protected]> wrote:\n>>> with the current code, this is a completely separate process that knows\n>>> nothing about the load, so if you kick it off when you start the load, it\n>>> makes a pass over the table (competing for I/O), finishes, you continue \n>>> to\n>>> update the table, so it makes another pass, etc. As you say, this is a \n>>> bad\n>>> thing to do. I am saying to have an option that ties the two togeather,\n>>> essentially making the data feed into the Analyze run be a fork of the \n>>> data\n>>> comeing out of the insert run going to disk. So the Analyze run doesn't \n>>> do\n>>> any I/O and isn't going to complete until the insert is complete. At \n>>> which\n>>> time it will have seen one copy of the entire table.\n>>\n>> Yeah, but you'll be passing the entire table through this separate\n>> process that may only need to see 1% of it or less on a large table.\n>> If you want to write the code and prove it's better than what we have\n>> now, or some other approach that someone else may implement in the\n>> meantime, hey, this is an open source project, and I like improvements\n>> as much as the next guy. But my prediction for what it's worth is\n>> that the results will suck. :-)\n>\n> I will point out that 1% of a very large table can still be a lot of disk \n> I/O that is avoided (especially if it's random I/O that's avoided)\n>\n> David Lang\n>\n\nIn addition, the streaming ANALYZE can provide better statistics at\nany time during the load and it will be complete immediately. As far\nas passing the entire table through the ANALYZE process, a simple\ncounter can be used to only send the required samples based on the\nstatistics target. Where this would seem to help the most is in\ntemporary tables which currently do not work with autovacuum but it\nwould streamline their use for more complicated queries that need\nan analyze to perform well.\n\nRegards,\nKen\n",
"msg_date": "Fri, 4 Feb 2011 08:33:15 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "04.02.11 16:33, Kenneth Marshall написав(ла):\n>\n> In addition, the streaming ANALYZE can provide better statistics at\n> any time during the load and it will be complete immediately. As far\n> as passing the entire table through the ANALYZE process, a simple\n> counter can be used to only send the required samples based on the\n> statistics target. Where this would seem to help the most is in\n> temporary tables which currently do not work with autovacuum but it\n> would streamline their use for more complicated queries that need\n> an analyze to perform well.\n>\nActually for me the main \"con\" with streaming analyze is that it adds \nsignificant CPU burden to already not too fast load process. Especially \nif it's automatically done for any load operation performed (and I can't \nsee how it can be enabled on some threshold).\nAnd you can't start after some threshold of data passed by since you may \nloose significant information (like minimal values).\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Fri, 04 Feb 2011 16:38:30 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 03, 2011 at 09:05:52PM -0500, Robert Haas wrote:\n> On Thu, Feb 3, 2011 at 8:37 PM, <[email protected]> wrote:\n> > On Thu, 3 Feb 2011, Robert Haas wrote:\n> >\n> >> On Thu, Feb 3, 2011 at 7:39 PM, ?<[email protected]> wrote:\n> >>>>\n> >>>> Yeah, but you'll be passing the entire table through this separate\n> >>>> process that may only need to see 1% of it or less on a large table.\n> >>>> If you want to write the code and prove it's better than what we have\n> >>>> now, or some other approach that someone else may implement in the\n> >>>> meantime, hey, this is an open source project, and I like improvements\n> >>>> as much as the next guy. ?But my prediction for what it's worth is\n> >>>> that the results will suck. ?:-)\n> >>>\n> >>> I will point out that 1% of a very large table can still be a lot of disk\n> >>> I/O that is avoided (especially if it's random I/O that's avoided)\n> >>\n> >> Sure, but I think that trying to avoid it will be costly in other ways\n> >> - you'll be streaming a huge volume of data through some auxiliary\n> >> process, which will have to apply some algorithm that's very different\n> >> from the one we use today. ?The reality is that I think there's little\n> >> evidence that the way we do ANALYZE now is too expensive. ?It's\n> >> typically very cheap and works very well. ?It's a bit annoying when it\n> >> fires off in the middle of a giant data load, so we might need to\n> >> change the time of it a little, but if there's a problem with the\n> >> operation itself being too costly, this is the first I'm hearing of\n> >> it. ?We've actually worked *really* hard to make it cheap.\n> >\n> > I could be misunderstanding things here, but my understanding is that it's\n> > 'cheap' in that it has little impact on the database while it is running.\n> \n> I mean that it's cheap in that it usually takes very little time to complete.\n> \n> > the issue here is that the workflow is\n> >\n> > load data\n> > analyze\n> > start work\n> >\n> > so the cost of analyze in this workflow is not \"1% impact on query speed for\n> > the next X time\", it's \"the database can't be used for the next X time while\n> > we wait for analyze to finish running\"\n> \n> OK.\n> \n> > I don't understand why the algorithm would have to be so different than\n> > what's done today, surely the analyze thread could easily be tweaked to\n> > ignore the rest of the data (assuming we don't have the thread sending the\n> > data to analyze do the filtering)\n> \n> If you want to randomly pick 10,000 rows out of all the rows that are\n> going to be inserted in the table without knowing in advance how many\n> there will be, how do you do that? Maybe there's an algorithm, but\n> it's not obvious to me. But mostly, I question how expensive it is to\n> have a second process looking at the entire table contents vs. going\n> back and rereading a sample of rows at the end. I can't remember\n> anyone ever complaining \"ANALYZE took too long to run\". I only\n> remember complaints of the form \"I had to remember to manually run it\n> and I wish it had just happened by itself\".\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n\nProbably doomed to be shot down, but since you are effectively inline,\nyou could sample assuming a range of table row counts. Start at the\nsize of a table where random (index) lookups are faster than a sequential\nscan and then at appropriate multiples, 100x, 100*100X,... 
then you should\nbe able to generate appropriate statistics. I have not actually looked at\nhow that would happen, but it would certainly allow you to process far, far\nfewer rows than the entire table.\n\nRegards,\nKen\n",
"msg_date": "Fri, 4 Feb 2011 08:52:20 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Informix IDS supports hints as well; normally the only need for hints in\nthis engine is when the Table/Index statistics are not being updated on a\nregular basis (ie: lazy DBA).\n\n\nOn 3 February 2011 22:17, Mark Kirkwood <[email protected]>wrote:\n\n> On 04/02/11 11:08, Josh Berkus wrote:\n>\n>> I don't think that's actually accurate. Can you give me a list of\n>> DBMSes which support hints other than Oracle?\n>>\n>> DB2 LUW (Linux, Unix, Win32 code base) has hint profiles:\n>\n>\n> http://justdb2chatter.blogspot.com/2008/06/db2-hints-optimizer-selection.html\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n\n\nNick Lello | Web Architect\no +44 (0) 8433309374 | m +44 (0) 7917 138319\nEmail: nick.lello at rentrak.com\nRENTRAK | www.rentrak.com | NASDAQ: RENT\n\nInformix IDS supports hints as well; normally the only need for hints in this engine is when the Table/Index statistics are not being updated on a regular basis (ie: lazy DBA).On 3 February 2011 22:17, Mark Kirkwood <[email protected]> wrote:\nOn 04/02/11 11:08, Josh Berkus wrote:\n\nI don't think that's actually accurate. Can you give me a list of\nDBMSes which support hints other than Oracle?\n\n\nDB2 LUW (Linux, Unix, Win32 code base) has hint profiles:\n\nhttp://justdb2chatter.blogspot.com/2008/06/db2-hints-optimizer-selection.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Nick Lello | Web Architecto +44 (0) 8433309374 | m +44 (0) 7917 138319Email: nick.lello at rentrak.com\n\nRENTRAK | www.rentrak.com | NASDAQ: RENT",
"msg_date": "Fri, 4 Feb 2011 14:55:01 +0000",
"msg_from": "Nick Lello <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mark Mielke <[email protected]> writes:\n> My understanding is:\n\n> 1) Background daemon wakes up and checks whether a number of changes \n> have happened to the database, irrelevant of transaction boundaries.\n\n> 2) Background daemon analyzes a percentage of rows in the database for \n> statistical data, irrelevant of row visibility.\n\n> 3) Analyze is important for both visible rows and invisible rows, as \n> plan execution is impacted by invisible rows. As long as they are part \n> of the table, they may impact the queries performed against the table.\n\n> 4) It doesn't matter if the invisible rows are invisible because they \n> are not yet committed, or because they are not yet vacuumed.\n\n> Would somebody in the know please confirm the above understanding for my \n> own piece of mind?\n\nNo.\n\n1. Autovacuum fires when the stats collector's insert/update/delete\ncounts have reached appropriate thresholds. Those counts are\naccumulated from messages sent by backends at transaction commit or\nrollback, so they take no account of what's been done by transactions\nstill in progress.\n\n2. Only live rows are included in the stats computed by ANALYZE.\n(IIRC it uses SnapshotNow to decide whether rows are live.)\n\nAlthough the stats collector does track an estimate of the number of\ndead rows for the benefit of autovacuum, this isn't used by planning.\nTable bloat is accounted for only in terms of growth of the physical\nsize of the table in blocks.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Feb 2011 10:41:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does auto-analyze work on dirty writes? (was: Re: [HACKERS] Slow\n\tcount(*) again...)"
},
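The thresholds and counters Tom describes are visible from SQL, which is often enough to tell whether a just-loaded table still needs a manual ANALYZE. A rough way to look, using columns from pg_stat_user_tables (the formula in the comment is the usual autoanalyze trigger condition):

    -- When were tables last analyzed, and how much churn has the stats collector seen?
    SELECT relname, n_live_tup, n_dead_tup, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;

    -- Autoanalyze fires roughly when:
    --   changes since last analyze > autovacuum_analyze_threshold
    --                                + autovacuum_analyze_scale_factor * reltuples
    SHOW autovacuum_analyze_threshold;
    SHOW autovacuum_analyze_scale_factor;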
{
"msg_contents": "On Fri, Feb 4, 2011 at 6:05 AM, Grant Johnson <[email protected]> wrote:\n>\n>> Yes. And this has little to do with hints. It has to do with years\n>> of development lead with THOUSANDS of engineers who can work on the\n>> most esoteric corner cases in their spare time. Find the pg project a\n>> couple hundred software engineers and maybe we'll catch Oracle a\n>> little quicker. Otherwise we'll have to marshall our resources to do\n>> the best we can on the project ,and that means avoiding maintenance\n>> black holes and having the devs work on the things that give the most\n>> benefit for the cost. Hints are something only a tiny percentage of\n>> users could actually use and use well.\n>>\n>> Write a check, hire some developers and get the code done and present\n>> it to the community. If it's good and works it'll likely get\n>> accepted. Or use EDB, since it has oracle compatibility in it.\n>>\n> I have to disagree with you here. I have never seen Oracle outperform\n> PostgreSQL on complex joins, which is where the planner comes in. Perhaps\n> on certain throughput things, but this is likely do to how we handle dead\n> rows, and counts, which is definitely because of how dead rows are handled,\n> but the easier maintenance makes up for those. Also both of those are by a\n> small percentage.\n>\n> I have many times had Oracle queries that never finish (OK maybe not never,\n> but not over a long weekend) on large hardware, but can be finished on\n> PostgreSQL in a matter or minutes on cheap hardware. This happens to the\n> point that often I have set up a PostgreSQL database to copy the data to for\n> querying and runnign the complex reports, even though the origin of the data\n> was Oracle, since the application was Oracle specific. It took less time\n> to duplicate the database and run the query on PostgreSQL than it did to\n> just run it on Oracle.\n\nIt very much depends on the query. With lots of tables to join, and\nwith pg 8.1 which is what I used when we were running Oracle 9, Oracle\nwon. With fewer tables to join in an otherwise complex reporting\nquery PostgreSQL won. I did the exact thing you're talking about. I\nactually wrote a simple replication system fro Oracle to PostgreSQL\n(it was allowed to be imperfect because it was stats data and we could\nrecreate at a moment).\n\nPostgreSQL on a PIV workstation with 2G ram and 4 SATA drives in\nRAID-10 stomped Oracle on much bigger Sun hardware into the ground for\nreporting queries. Queries that ran for hours or didn't finish in\nOracle ran in 5 to 30 minutes on the pg box.\n\nBut not all queries were like that.\n",
"msg_date": "Fri, 4 Feb 2011 09:05:32 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Fri, Feb 4, 2011 at 9:38 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n> Actually for me the main \"con\" with streaming analyze is that it adds\n> significant CPU burden to already not too fast load process.\n\nExactly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 4 Feb 2011 13:48:32 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Greg Smith wrote:\n> Check out \n> http://www.indeed.com/jobtrends?q=postgres%2C+mysql%2C+oracle&relative=1&relative=1 \n> if you want to see the real story here. Oracle has a large installed \n> base, but it's considered a troublesome legacy product being replaced \n\n+1 for Oracle being a \"troublesome legacy product\".\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Fri, 4 Feb 2011 17:58:17 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> Chris Browne wrote:\n> > Well, the community declines to add hints until there is actual\n> > consensus on a good way to add hints.\n> > \n> OK. That's another matter entirely. Who should make that decision? Is \n> there a committee or a person who would be capable of making that decision?\n> \n> > Nobody has ever proposed a way to add hints where consensus was arrived\n> > at that the way was good, so...\n> > \n> \n> So, I will have to go back on my decision to use Postgres and \n> re-consider MySQL? I will rather throw away the effort invested in \n\nYou want to reconsider using MySQL because Postgres doesn't have hints. \nHard to see how that logic works.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Fri, 4 Feb 2011 18:17:12 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Mladen Gogala wrote:\n> Actually, it is not unlike a religious dogma, only stating that \"hints \n> are bad\". It even says so in the wiki. The arguments are\n> 1) Refusal to implement hints is motivated by distrust toward users, \n> citing that some people may mess things up.\n> Yes, they can, with and without hints.\n> 2) All other databases have them. This is a major feature and if I were \n> in the MySQL camp, I would use it as an\n> argument. Asking me for some \"proof\" is missing the point. All other \n> databases have hints precisely because\n> they are useful. Assertion that only Postgres is so smart that can \n> operate without hints doesn't match the\n> reality. As a matter of fact, Oracle RDBMS on the same machine will \n> regularly beat PgSQL in performance.\n> That has been my experience so far. I even posted counting query \n> results.\n> 3) Hints are \"make it or break it\" feature. They're absolutely needed in \n> the fire extinguishing situations.\n> \n> I see no arguments to say otherwise and until that ridiculous \"we don't \n> want hints\" dogma is on wiki, this is precisely what it is: a dogma. \n\nUh, that is kind of funny considering that text is on a 'wiki', meaning\neverything there is open to change if the group agrees.\n\n> Dogmas do not change and I am sorry that you don't see it that way. \n> However, this discussion\n> did convince me that I need to take another look at MySQL and tone down \n> my engagement with PostgreSQL community. This is my last post on the \n> subject because posts are becoming increasingly personal. This level of \n> irritation is also\n> characteristic of a religious community chastising a sinner. Let me \n> remind you again: all other major databases have that possibility: \n> Oracle, MySQL, DB2, SQL Server and Informix. Requiring burden of proof \n> about hints is equivalent to saying that all these databases are \n> developed by idiots and have a crappy optimizer.\n\nYou need to state the case for hints independent of what other databases\ndo, and indepdendent of fixing the problems where the optimizer doesn't\nmatch reatility.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Fri, 4 Feb 2011 19:17:10 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 02/04/2011 10:41 AM, Tom Lane wrote:\n> 1. Autovacuum fires when the stats collector's insert/update/delete\n> counts have reached appropriate thresholds. Those counts are\n> accumulated from messages sent by backends at transaction commit or\n> rollback, so they take no account of what's been done by transactions\n> still in progress.\n>\n> 2. Only live rows are included in the stats computed by ANALYZE.\n> (IIRC it uses SnapshotNow to decide whether rows are live.)\n>\n> Although the stats collector does track an estimate of the number of\n> dead rows for the benefit of autovacuum, this isn't used by planning.\n> Table bloat is accounted for only in terms of growth of the physical\n> size of the table in blocks.\n\nThanks, Tom.\n\nDoes this un-analyzed \"bloat\" not impact queries? I guess the worst case \nhere is if autovaccum is disabled for some reason and 99% of the table \nis dead rows. If I understand the above correctly, I think analyze might \ngenerate a bad plan under this scenario, thinking that a value is \nunique, using the index - but every tuple in the index has the same \nvalue and each has to be looked up in the table to see if it is visible?\n\nStill, I guess the idea here is not to disable autovacuum, making dead \nrows insignificant in the grand scheme of things. I haven't specifically \nnoticed any performance problems here - PostgreSQL is working great for \nme as usual. Just curiosity...\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n",
"msg_date": "Fri, 04 Feb 2011 20:50:13 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does auto-analyze work on dirty writes?"
},
{
"msg_contents": "On Fri, Feb 4, 2011 at 5:17 PM, Bruce Momjian <[email protected]> wrote:\n> Mladen Gogala wrote:\n>> characteristic of a religious community chastising a sinner. Let me\n>> remind you again: all other major databases have that possibility:\n>> Oracle, MySQL, DB2, SQL Server and Informix. Requiring burden of proof\n>> about hints is equivalent to saying that all these databases are\n>> developed by idiots and have a crappy optimizer.\n>\n> You need to state the case for hints independent of what other databases\n> do, and indepdendent of fixing the problems where the optimizer doesn't\n> match reatility.\n\nAnd that kind of limits to an area where we would the ability to nudge\ncosts instead of just set them for an individual part of a query.\ni.e. join b on (a.a=b.b set selectivity=0.01) or (a.a=b.b set\nselectivity=1.0) or something like that. i.e. a.a and b.b have a lot\nof matches or few, etc. If there's any thought of hinting it should\nbe something that a DBA, knowing his data model well, WILL know more\nthan the current planner because the planner can't get cross table\nstatistics yet.\n\nBut then, why not do something to allow cross table indexes and / or\nstatistics? To me that would go much further to helping fix the\nissues where the current planner \"flies blind\".\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Fri, 4 Feb 2011 21:45:05 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Fri, 4 Feb 2011, Vitalii Tymchyshyn wrote:\n\n> 04.02.11 16:33, Kenneth Marshall ???????(??):\n>> \n>> In addition, the streaming ANALYZE can provide better statistics at\n>> any time during the load and it will be complete immediately. As far\n>> as passing the entire table through the ANALYZE process, a simple\n>> counter can be used to only send the required samples based on the\n>> statistics target. Where this would seem to help the most is in\n>> temporary tables which currently do not work with autovacuum but it\n>> would streamline their use for more complicated queries that need\n>> an analyze to perform well.\n>> \n> Actually for me the main \"con\" with streaming analyze is that it adds \n> significant CPU burden to already not too fast load process. Especially if \n> it's automatically done for any load operation performed (and I can't see how \n> it can be enabled on some threshold).\n\ntwo thoughts\n\n1. if it's a large enough load, itsn't it I/O bound?\n\n\n2. this chould be done in a separate process/thread than the load itself, \nthat way the overhead of the load is just copying the data in memory to \nthe other process.\n\nwith a multi-threaded load, this would eat up some cpu that could be used \nfor the load, but cores/chip are still climbing rapidly so I expect that \nit's still pretty easy to end up with enough CPU to handle the extra load.\n\nDavid Lang\n\n> And you can't start after some threshold of data passed by since you may \n> loose significant information (like minimal values).\n>\n> Best regards, Vitalii Tymchyshyn\n>\n",
"msg_date": "Fri, 4 Feb 2011 21:46:30 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Sat, Feb 5, 2011 at 12:46 AM, <[email protected]> wrote:\n>> Actually for me the main \"con\" with streaming analyze is that it adds\n>> significant CPU burden to already not too fast load process. Especially if\n>> it's automatically done for any load operation performed (and I can't see\n>> how it can be enabled on some threshold).\n>\n> two thoughts\n>\n> 1. if it's a large enough load, itsn't it I/O bound?\n\nSometimes. Our COPY is not as cheap as we'd like it to be.\n\n> 2. this chould be done in a separate process/thread than the load itself,\n> that way the overhead of the load is just copying the data in memory to the\n> other process.\n\nI think that's more expensive than you're giving it credit for.\n\nBut by all means implement it and post the patch if it works out...!\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Sat, 5 Feb 2011 01:37:49 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Fri, Feb 4, 2011 at 11:37 PM, Robert Haas <[email protected]> wrote:\n> On Sat, Feb 5, 2011 at 12:46 AM, <[email protected]> wrote:\n>>> Actually for me the main \"con\" with streaming analyze is that it adds\n>>> significant CPU burden to already not too fast load process. Especially if\n>>> it's automatically done for any load operation performed (and I can't see\n>>> how it can be enabled on some threshold).\n>>\n>> two thoughts\n>>\n>> 1. if it's a large enough load, itsn't it I/O bound?\n>\n> Sometimes. Our COPY is not as cheap as we'd like it to be.\n\nWith a 24 drive RAID-10 array that can read at ~1GB/s I am almost\nalways CPU bound during copies. This isn't wholly bad as it leaves\nspare IO for the rest of the machine so regular work carries on just\nfine.\n",
"msg_date": "Sat, 5 Feb 2011 01:38:40 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "Scott Marlowe wrote:\n> With a 24 drive RAID-10 array that can read at ~1GB/s I am almost\n> always CPU bound during copies. This isn't wholly bad as it leaves\n> spare IO for the rest of the machine so regular work carries on just\n> fine.\n> \n\nAnd you don't need nearly that much I/O bandwidth to reach that point. \nI've hit being CPU bound on COPY...FROM on systems with far less drives \nthan 24.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Sat, 05 Feb 2011 03:49:05 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On 2011-02-03 22:48, Scott Marlowe wrote:\n> On Thu, Feb 3, 2011 at 8:40 PM, Greg Smith<[email protected]> wrote:\n>> Scott Marlowe wrote:\n>>>\n>>> Yes they're useful, but like a plastic bad covering a broken car window,\n>>> they're useful because they cover something that's inherently broken.\n>>>\n>>\n>> Awesome. Now we have a car anology, with a funny typo no less. \"Plastic\n>> bad\", I love it. This is real progress toward getting all the common list\n>> argument idioms aired out. All we need now is a homage to Mike Godwin and\n>> we can close this down.\n>\n> It's not so much a car analogy as a plastic bad analogy.\n>\n\n\nDon't be such an analogy Nazi.\n",
"msg_date": "Wed, 09 Feb 2011 20:58:12 -0500",
"msg_from": "Gorshkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 3, 2011 at 8:46 PM, Josh Berkus <[email protected]> wrote:\n> \"Optimizer hints are used to work around problems in the optimizer and\n> introduce upgrade and maintenance issues. We would rather have the\n> problems reported and fixed. We have discussed a more sophisticated\n> system of per-class cost adjustment instead, but a specification remains\n> to be developed.\"\n>\n> That seems pretty straightforwards. There are even links to prior\n> discussions about what kind of system would work. I don't think this\n> text needs any adjustment; that's our clear consensus on the hint issue:\n> we want a tool which works better than what we've seen in other databases.\n\nI think it's just dumb to say we don't want hints. We want hints, or\nat least many of us do. We just want them to actually work, and to\nnot suck. Can't we just stop saying we don't want them, and say that\nwe do want something, but it has to be really good?\n\n> Yes, I occasionally run across cases where having a query tweaking\n> system would help me fix a pathological failure in the planner.\n> However, even on data warehouses that's less than 0.1% of the queries I\n> deal with, so this isn't exactly a common event. And any hinting system\n> we develop needs to address those specific cases, NOT a hypothetical\n> case which can't be tested. Otherwise we'll implement hints which\n> actually don't improve queries.\n\nNo argument.\n\nThe bottom line here is that a lot of features that we don't have are\nthings that we don't want in the sense that we're not interested in\nworking on them over other things that seem more pressing, and we have\nfinite manpower. But if someone feels motivated to work on it, and\ncan actually come up with something good, then why should we give the\nimpression that such a thing would be rejected out of hand? I think\nwe ought to nuke that item and replace it with some items in the\noptimizer section that express what we DO want, which is some better\nways of fixing queries the few queries that suck despite our best (and\nvery successful) efforts to produce a top-notch optimizer.\n\nThe problem with multi-column statistics is a particularly good\nexample of something in this class. We may have a great solution to\nthat problem for PostgreSQL 11.0. But between now and then, if you\nhave that problem, there is no good way to adjust the selectivity\nestimates. If this were an academic research project or just being\nused for toy projects that didn't really matter, we might not care.\nBut this is a real database that people are relying on for their\nlivelihood, and we should be willing to provide a way for those people\nto not get fired when they hit the 0.1% of queries that can't be fixed\nusing existing methods. I don't know exactly what the right solution\nis off the top of my head, but digging in our heels is not it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 10 Feb 2011 10:50:40 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
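The multi-column problem Robert describes is easy to reproduce. In this made-up example the two columns are perfectly correlated, but without cross-column statistics the planner multiplies the individual selectivities as if they were independent and underestimates the row count by a large factor:

    CREATE TABLE corr_demo AS
    SELECT i % 100 AS a, i % 100 AS b      -- a and b are always equal
    FROM generate_series(1, 100000) AS s(i);
    ANALYZE corr_demo;

    -- Returns 1000 rows, but the estimate is on the order of
    -- 100000 * (1/100) * (1/100) = 10 rows.
    EXPLAIN ANALYZE
    SELECT * FROM corr_demo WHERE a = 42 AND b = 42;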
{
"msg_contents": "Robert Haas <[email protected]> wrote:\n \n> I think it's just dumb to say we don't want hints. We want hints,\n> or at least many of us do.\n \nWell, yeah. Even those most ostensibly opposed to hints have been\nknown to post that they would rather not have the optimizer\nrecognize two logically equivalent constructs and optimize them the\nsame because they find the current difference \"useful to coerce the\noptimizer\" to choose a certain plan. That's implementing hints but\nrefusing to document them. And it sometimes bites those who don't\nknow they're accidentally using a hint construct. An explicit and\ndocumented hint construct would be better. Probably not a \"use this\nplan\" type hint, but some form of optimization barrier hint, maybe. \nYou know, like OFFSET 0, but more explicitly hint-like.\n \n> The bottom line here is that a lot of features that we don't have\n> are things that we don't want in the sense that we're not\n> interested in working on them over other things that seem more\n> pressing, and we have finite manpower. But if someone feels\n> motivated to work on it, and can actually come up with something\n> good, then why should we give the impression that such a thing\n> would be rejected out of hand? I think we ought to nuke that item\n> and replace it with some items in the optimizer section that\n> express what we DO want, which is some better ways of fixing\n> queries the few queries that suck despite our best (and very\n> successful) efforts to produce a top-notch optimizer.\n> \n> The problem with multi-column statistics is a particularly good\n> example of something in this class. We may have a great solution\n> to that problem for PostgreSQL 11.0. But between now and then, if\n> you have that problem, there is no good way to adjust the\n> selectivity estimates.\n \nYeah, this is probably the most important area to devise some\nexplicit way for a DBA who knows that such multicolumn selections\nare going to be used, and is capable of calculating some correlation\nfactor, could supply it to the optimizer to override the naive\ncalculation it currently does. Even there I would tend to think\nthat the sort of \"do it this way\" hints that people seem to\ninitially want wouldn't be good; it should be a way to override the\ncosting factor which the optimizer gets wrong, so it can do its\nusual excellent job of evaluating plans with accurate costs.\n \n> I don't know exactly what the right solution is off the top of my\n> head, but digging in our heels is not it.\n \nWell, I'm comfortable digging in my heels against doing *lame* hints\njust because \"it's what all the other kids are doing,\" which I think\nis the only thing which would have satisfied the OP on this thread. \n>From both on-list posts and ones exchanged off-list with me, it\nseems he was stubbornly resistant to properly tuning the server to\nsee if any problems remained, or posting particular problems to see\nhow they would be most effectively handled in PostgreSQL. We\nobviously can't be drawn into dumb approaches because of\nill-informed demands like that.\n \n-Kevin\n",
"msg_date": "Thu, 10 Feb 2011 10:45:20 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\n\t again..."
},
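A minimal sketch of the undocumented "optimization barrier" construct Kevin alludes to above. OFFSET 0 is real, long-standing PostgreSQL behavior; the table and column names are borrowed from the account_transaction example that shows up later in the thread, so treat them as placeholders:

    -- OFFSET 0 stops the planner from flattening the subquery into the outer
    -- query, and outer quals are not pushed down past the OFFSET, so the inner
    -- scan is planned exactly as written: an accidental, undocumented hint.
    SELECT *
    FROM (
        SELECT *
        FROM account_transaction
        WHERE account_id = 42          -- hypothetical filter
        OFFSET 0                       -- the fence
    ) AS fenced
    WHERE fenced.trans_type_id IN (3, 7, 12)
    ORDER BY fenced.created DESC
    LIMIT 25;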
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Robert Haas <[email protected]> wrote:\n>> I don't know exactly what the right solution is off the top of my\n>> head, but digging in our heels is not it.\n \n> Well, I'm comfortable digging in my heels against doing *lame* hints\n> just because \"it's what all the other kids are doing,\" which I think\n> is the only thing which would have satisfied the OP on this thread. \n\nRight. If someone comes up with a design that avoids the serious\npitfalls of traditional hinting schemes, that'd be great. But I'm\nnot interested in implementing Oracle-like hints just because Oracle\nhas them, which I think was basically what the OP wanted. I haven't\nseen a hinting scheme that didn't suck (and that includes the aspects\nof our own current behavior that are hint-like). I don't say that\nthere can't be one.\n\nI believe that the FAQ entry is meant to answer people who come along\nand say \"oh, this is easily solved, just do what $PRODUCT does\". The\ngeneric answer to that is \"no, it's not that easy\". But maybe the FAQ\nshould be rephrased to be more like \"we don't want traditional hints\nbecause of problems X, Y, and Z. If you have an idea that avoids those\nproblems, let us know.\"\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Feb 2011 12:01:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again... "
},
{
"msg_contents": "On Thu, Feb 10, 2011 at 11:45 AM, Kevin Grittner\n<[email protected]> wrote:\n> Well, I'm comfortable digging in my heels against doing *lame* hints\n> just because \"it's what all the other kids are doing,\" which I think\n> is the only thing which would have satisfied the OP on this thread.\n> From both on-list posts and ones exchanged off-list with me, it\n> seems he was stubbornly resistant to properly tuning the server to\n> see if any problems remained, or posting particular problems to see\n> how they would be most effectively handled in PostgreSQL. We\n> obviously can't be drawn into dumb approaches because of\n> ill-informed demands like that.\n\nNor was I proposing any such thing. But that doesn't make \"we don't\nwant hints\" an accurate statement. Despite the impression that OP\nwent away with, the real situation is a lot more nuanced than that,\nand the statement on the Todo list gives the wrong impression, IMHO.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 10 Feb 2011 12:02:58 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "On 02/10/2011 10:45 AM, Kevin Grittner wrote:\n\n> Even there I would tend to think that the sort of \"do it this way\"\n> hints that people seem to initially want wouldn't be good; it should\n> be a way to override the costing factor which the optimizer gets\n> wrong, so it can do its usual excellent job of evaluating plans with\n> accurate costs.\n\nYou know... that's an interesting approach. We already do that with \nfunctions by allowing users to specify the estimated cost, rows \nreturned, and even override config settings. It's an inexact science at \nbest, but it might help the optimizer out.\n\nReally... how difficult would it be to add that syntax to the JOIN \nstatement, for example?\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 10 Feb 2011 11:06:42 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\t again..."
},
{
"msg_contents": "On 02/10/2011 11:01 AM, Tom Lane wrote:\n\n> But I'm not interested in implementing Oracle-like hints just because\n> Oracle has them, which I think was basically what the OP wanted.\n\nHilariously, I'm not so sure that's what the OP wanted. Several of us \npointed him to EnterpriseDB and their Oracle-style syntax, and the only \nthing he said about that was to use it as further evidence that \nPostgreSQL should implement them. I'm very tempted to say he wanted \nsomething for free, and was angry he couldn't get it.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 10 Feb 2011 11:09:18 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "On Thu, Feb 10, 2011 at 12:01 PM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Robert Haas <[email protected]> wrote:\n>>> I don't know exactly what the right solution is off the top of my\n>>> head, but digging in our heels is not it.\n>\n>> Well, I'm comfortable digging in my heels against doing *lame* hints\n>> just because \"it's what all the other kids are doing,\" which I think\n>> is the only thing which would have satisfied the OP on this thread.\n>\n> Right. If someone comes up with a design that avoids the serious\n> pitfalls of traditional hinting schemes, that'd be great. But I'm\n> not interested in implementing Oracle-like hints just because Oracle\n> has them, which I think was basically what the OP wanted. I haven't\n> seen a hinting scheme that didn't suck (and that includes the aspects\n> of our own current behavior that are hint-like). I don't say that\n> there can't be one.\n>\n> I believe that the FAQ entry is meant to answer people who come along\n> and say \"oh, this is easily solved, just do what $PRODUCT does\". The\n> generic answer to that is \"no, it's not that easy\". But maybe the FAQ\n> should be rephrased to be more like \"we don't want traditional hints\n> because of problems X, Y, and Z. If you have an idea that avoids those\n> problems, let us know.\"\n\nThat's closer to where I think the community is on this issue, for sure.\n\nFrankly, I think we should also have some much better documentation\nabout how to fix problems in the optimizer. Before the OP went off on\na rant, he actually showed up at a webinar I did looking for advice on\nhow to fix queries in PG, which wasn't exactly the topic of the\nwebinar, so he didn't get his answer. But the only way you're going\nto find out about a lot of the tricks that we rely on is to read the\nmailing lists, and that's below our usual standard of documentation.\nSure, it's a bunch of ugly hacks, but they're useful when you're being\neaten by a crocodile, and the need for them isn't limited to people\nwho have time to spend all day reading pgsql-whatever.\n\nI also think that we have enough knowledge between us to identify the\nareas where some better hints, or hint-ish mechanisms, would actually\nbe useful. I feel like I have a pretty good idea where the bodies are\nburied, and what some of the solutions might look like. But I'm not\nsure I want to open that can of worms while we're trying to close out\nthis CommitFest. In fact I'm pretty sure I don't. But I would like\nto change the Todo text to say something less misleading.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 10 Feb 2011 12:19:34 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n \n> how difficult would it be to add that syntax to the JOIN\n> statement, for example?\n \nSomething like this syntax?:\n \nJOIN WITH (correlation_factor=0.3)\n \nWhere 1.0 might mean that for each value on the left there was only\none distinct value on the right, and 0.0 would mean that they were\nentirely independent? (Just as an off-the-cuff example -- I'm not\nat all sure that this makes sense, let alone is the best thing to\nspecify. I'm trying to get at *syntax* here, not particular knobs.)\n \n-Kevin\n",
"msg_date": "Thu, 10 Feb 2011 11:21:51 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\t\n\t again..."
},
{
"msg_contents": "[email protected] (Robert Haas) writes:\n> On Thu, Feb 10, 2011 at 11:45 AM, Kevin Grittner\n> <[email protected]> wrote:\n>> Well, I'm comfortable digging in my heels against doing *lame* hints\n>> just because \"it's what all the other kids are doing,\" which I think\n>> is the only thing which would have satisfied the OP on this thread.\n>> From both on-list posts and ones exchanged off-list with me, it\n>> seems he was stubbornly resistant to properly tuning the server to\n>> see if any problems remained, or posting particular problems to see\n>> how they would be most effectively handled in PostgreSQL. We\n>> obviously can't be drawn into dumb approaches because of\n>> ill-informed demands like that.\n>\n> Nor was I proposing any such thing. But that doesn't make \"we don't\n> want hints\" an accurate statement. Despite the impression that OP\n> went away with, the real situation is a lot more nuanced than that,\n> and the statement on the Todo list gives the wrong impression, IMHO.\n\nI have added the following comment to the ToDo:\n\n We are not interested to implement hints in ways they are commonly\n implemented on other databases, and proposals based on \"because\n they've got them\" will not be welcomed. If you have an idea that\n avoids the problems that have been observed with other hint systems,\n that could lead to valuable discussion.\n\nThat seems to me to characterize the nuance.\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in String.concat \"@\" [name;tld];;\nhttp://www3.sympatico.ca/cbbrowne/languages.html\nIf only women came with pull-down menus and online help.\n",
"msg_date": "Thu, 10 Feb 2011 12:25:37 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints"
},
{
"msg_contents": "Robert Haas <[email protected]> wrote:\n \n>> maybe the FAQ should be rephrased to be more like \"we don't want\n>> traditional hints because of problems X, Y, and Z. If you have\n>> an idea that avoids those problems, let us know.\"\n> \n> That's closer to where I think the community is on this issue\n \nThat sounds pretty good to me.\n \n-Kevin\n",
"msg_date": "Thu, 10 Feb 2011 11:27:18 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\n\t again..."
},
{
"msg_contents": "On 02/10/2011 11:21 AM, Kevin Grittner wrote:\n\n> Something like this syntax?:\n>\n> JOIN WITH (correlation_factor=0.3)\n\nI was thinking more:\n\nJOIN foo_tab USING (foo_id) WITH (COST=50)\n\nor something, to exploit the hooks that already exist for functions, for \nexample. But it's still an interesting concept. Tell the optimizer what \nyou want and how the data is really related in cases where it's wrong, \nand let it figure out the best path.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n",
"msg_date": "Thu, 10 Feb 2011 11:30:46 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\t\t again..."
},
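For reference, the per-function hooks Shaun mentions already look like this today; the function itself is made up for illustration:

    -- COST and ROWS feed the planner's estimates for calls to this function.
    CREATE FUNCTION lookup_recent(int) RETURNS SETOF account_transaction
        LANGUAGE sql STABLE
        AS $$ SELECT * FROM account_transaction WHERE account_id = $1 $$
        COST 500    -- estimated per-call cost, in units of cpu_operator_cost
        ROWS 100;   -- estimated result rows for the set-returning function

    -- Planner/GUC settings can also be attached to a function; they apply
    -- while the function executes:
    ALTER FUNCTION lookup_recent(int) SET enable_seqscan = off;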
{
"msg_contents": "Shaun Thomas <[email protected]> wrote:\n \n> I was thinking more:\n> \n> JOIN foo_tab USING (foo_id) WITH (COST=50)\n \nThe problem I have with that syntax is that it would be hard to read\nwhen you have some nested set of joins or a (SELECT) in the JOIN\ninstead of simple table name. For me, at least, it would \"get lost\"\nless easily if it were right next to the JOIN keyword.\n \nThe problem with a COST factor is that it's not obvious to me what\nit would apply to:\n - each row on the left?\n - each row on the right?\n - each row in the result of the JOIN step?\n - the entire step?\n \nHow would it scale based on other criteria which affected the number\nof rows on either side of the join?\n \nIf I'm understanding the problem correctly, the part the optimizer\ngets wrong (because we don't yet have statistics to support a better\nassumption) is assuming that selection criteria on opposite sides of\na join affect entirely independent sets of what would be in the\nresult without the criteria. To use an oft-cited example, when one\ntable is selected by zip code and the other by city, that's a bad\nassumption about the correlation, leading to bad estimates, leading\nto bad costing, leading to bad plans. The OP wanted to override\nstep 4, a COST setting would try to override step 3, but I think we\nwould want to override step 1 (until we get statistics which let us\ncompute that accurately).\n \n-Kevin\n",
"msg_date": "Thu, 10 Feb 2011 11:44:29 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\t\t\n\t again..."
},
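To make the failure mode Kevin describes concrete, here is the oft-cited example spelled out with made-up names and data:

    -- Suppose city functionally determines zip, so the two predicates below are
    -- nearly redundant.  Without cross-column statistics the planner multiplies
    -- their individual selectivities:
    CREATE TABLE addresses (city text, zip text, street text);
    -- ... load data in which city = 'Madison' always implies zip = '53703' ...
    EXPLAIN SELECT * FROM addresses
    WHERE city = 'Madison' AND zip = '53703';
    -- estimated rows ~ reltuples * sel(city = 'Madison') * sel(zip = '53703'),
    -- which is far too low, and the bad estimate then cascades into join and
    -- plan choices higher up.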
{
"msg_contents": "On Fri, Feb 4, 2011 at 8:50 PM, Mark Mielke <[email protected]> wrote:\n> On 02/04/2011 10:41 AM, Tom Lane wrote:\n>>\n>> 1. Autovacuum fires when the stats collector's insert/update/delete\n>> counts have reached appropriate thresholds. Those counts are\n>> accumulated from messages sent by backends at transaction commit or\n>> rollback, so they take no account of what's been done by transactions\n>> still in progress.\n>>\n>> 2. Only live rows are included in the stats computed by ANALYZE.\n>> (IIRC it uses SnapshotNow to decide whether rows are live.)\n>>\n>> Although the stats collector does track an estimate of the number of\n>> dead rows for the benefit of autovacuum, this isn't used by planning.\n>> Table bloat is accounted for only in terms of growth of the physical\n>> size of the table in blocks.\n>\n> Thanks, Tom.\n>\n> Does this un-analyzed \"bloat\" not impact queries? I guess the worst case\n> here is if autovaccum is disabled for some reason and 99% of the table is\n> dead rows. If I understand the above correctly, I think analyze might\n> generate a bad plan under this scenario, thinking that a value is unique,\n> using the index - but every tuple in the index has the same value and each\n> has to be looked up in the table to see if it is visible?\n\nIt sounds like you're describing something like a one-row table with a\nunique index on one of its column, getting updates that can't be made\nHOT, and not getting vacuumed. That scenario does suck - I had a test\ncase I was using it a while back that generated something similar -\nbut I'm not sure how much it's worth worrying about the plan, because\neither an index scan or a sequential scan is going to be awful.\n\nTo put that another way, I've founded that the optimizer copes pretty\nwell with adjusting plans as tables get bloated - mostly by using\nindex scans rather than sequential scans. It's possible there is some\nimprovement still to be had there, but I would be a lot more\ninterested in fixing the bloat, at least based on my own experiences.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 10 Feb 2011 12:51:31 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Does auto-analyze work on dirty writes?"
},
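A rough way to reproduce the scenario Robert describes, for anyone who wants to watch the plans as bloat grows; the table name and loop count are arbitrary, and the DO block needs 9.0 or later:

    -- One row, an extra index, autovacuum disabled, and updates that change an
    -- indexed column (so they cannot be HOT): heap and index bloat quickly.
    CREATE TABLE one_row (id int PRIMARY KEY, counter int NOT NULL)
        WITH (autovacuum_enabled = false);
    CREATE INDEX one_row_counter_idx ON one_row (counter);
    INSERT INTO one_row VALUES (1, 0);

    DO $$
    BEGIN
        FOR i IN 1..100000 LOOP
            UPDATE one_row SET counter = counter + 1;  -- non-HOT every time
        END LOOP;
    END $$;

    SELECT pg_relation_size('one_row')             AS heap_bytes,
           pg_relation_size('one_row_counter_idx') AS index_bytes;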
{
"msg_contents": "Shaun Thomas wrote:\n> Hilariously, I'm not so sure that's what the OP wanted. \n\nSomeone to blame as a scapegoat for why his badly planned project had \nfailed. I've done several Oracle conversions before, and never met \nsomeone who was so resistent to doing the right things for such a \nconversion. You have to relatively flexible in your thinking to work \nwith the good and away from the bad parts of PostgreSQL for such a \nproject to succeed. I didn't hear a whole lot of \"flexible\" in that \ndiscussion.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Thu, 10 Feb 2011 12:56:10 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n> Shaun Thomas wrote:\n>> Hilariously, I'm not so sure that's what the OP wanted. \n> \n> Someone to blame as a scapegoat for why his badly planned project\n> had failed. I've done several Oracle conversions before, and\n> never met someone who was so resistent to doing the right things\n> for such a conversion. You have to relatively flexible in your\n> thinking to work with the good and away from the bad parts of\n> PostgreSQL for such a project to succeed. I didn't hear a whole\n> lot of \"flexible\" in that discussion.\n \nI was thinking along the same lines, but couldn't find the words to\nput it so politely, so I held back. Still biting my tongue, but I\nappreciate your milder summary.\n \n-Kevin\n",
"msg_date": "Thu, 10 Feb 2011 12:26:44 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\n\t again..."
},
{
"msg_contents": "On 2/10/11 9:21 AM, Kevin Grittner wrote:\n> Shaun Thomas<[email protected]> wrote:\n>\n>> how difficult would it be to add that syntax to the JOIN\n>> statement, for example?\n>\n> Something like this syntax?:\n>\n> JOIN WITH (correlation_factor=0.3)\n>\n> Where 1.0 might mean that for each value on the left there was only\n> one distinct value on the right, and 0.0 would mean that they were\n> entirely independent? (Just as an off-the-cuff example -- I'm not\n> at all sure that this makes sense, let alone is the best thing to\n> specify. I'm trying to get at *syntax* here, not particular knobs.)\n\nThere are two types of problems:\n\n1. The optimizer is imperfect and makes a sub-optimal choice.\n\n2. There is theoretical reasons why it's hard for the optimizer. For example, in a table with 50 columns, there is a staggering number of possible correlations. An optimizer can't possibly figure this out, but a human might know them from the start. The City/Postal-code correlation is a good example.\n\nFor #1, Postgres should never offer any sort of hint mechanism. As many have pointed out, it's far better to spend the time fixing the optimizer than adding hacks.\n\nFor #2, it might make sense to give a designer a way to tell Postgres stuff that it couldn't possibly figure out. But ... not until the problem is clearly defined.\n\nWhat should happen is that someone writes with an example query, and the community realizes that no amount of cleverness from Postgres could ever solve it (for solid theoretical reasons). Only then, when the problem is clearly defined, should we talk about solutions and SQL extensions.\n\nCraig\n",
"msg_date": "Thu, 10 Feb 2011 10:32:31 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\t\t again..."
},
{
"msg_contents": "On 4 February 2011 04:46, Josh Berkus <[email protected]> wrote:\n> \"Optimizer hints are used to work around problems in the optimizer and\n> introduce upgrade and maintenance issues. We would rather have the\n> problems reported and fixed. We have discussed a more sophisticated\n> system of per-class cost adjustment instead, but a specification remains\n> to be developed.\"\n\nI have no clue about how hints works in Oracle ... I've never been\nworking \"enterprise level\" on anything else than Postgres. Anyway,\ntoday I just came over an interesting problem in our production\ndatabase today - and I think it would be a benefit to be able to\nexplicitly tell the planner what index to use (the dev team is adding\nredundant attributes and more indexes to solve the problem - which\nworries me, because we will run into serious problems as soon as there\nwon't be enough memory for all the frequently-used indexes).\n\nWe have users and transactions, and we have transaction types. The\ntransaction table is huge. The users are able to interactively check\ntheir transaction listings online, and they have some simple filter\noptions available as well. Slightly simplified, the queries done\nlooks like this:\n\n select * from account_transaction where account_id=? order by\ncreated desc limit 25;\n\n select * from account_transaction where trans_type_id in ( ...\nlong, hard-coded list ...) and account_id=? order by created desc\nlimit 25;\n\nand we have indexes on:\n\n account_transaction(account_id, created)\n\n account_transaction(account_id, trans_type_id, created)\n\n(At this point, someone would probably suggest to make three\nsingle-key indexes and use bitmap index scan ... well, pulling 25 rows\nfrom the end of an index may be orders of magnitude faster than doing\nbitmap index mapping on huge indexes)\n\nFor the second query, the planner would chose the first index - and\nmaybe it makes sense - most of our customers have between 10-30% of\nthe transactions from the long list of transaction types, slim indexes\nare good and by average the slimmer index would probably do the job a\nbit faster. The problem is with the corner cases - for some of our\nextreme customers thousands of transaction index tuples may need to be\nscanned before 25 rows with the correct transaction type is pulled\nout, and if the index happens to be on disk, it may take tens of\nseconds to pull out the answer. Tens of seconds of waiting leads to\nfrustration, it is a lot nowadays in an interactive session. Also, I\nhaven't really checked it up, but it may very well be that this is\nexactly the kind of customers we want to retain.\n\nTo summarize, there are two things the planner doesn't know - it\ndoesn't know that there exists such corner cases where the real cost\nis far larger than the estimated cost, and it doesn't know that it's\nmore important to keep the worst-case cost on a reasonable level than\nto minimize the average cost. In the ideal world postgres would have\nsufficiently good statistics to know that for user #77777 it is better\nto chose the second index, but I suppose it would be easier if I was\nable to explicitly hide the account_transaction(account_id, created)\nindex for this query. Well, I know of one way to do it ... but I\nsuppose it's not a good idea to put \"drop index foo; select ...;\nrollback;\" into production ;-)\n",
"msg_date": "Thu, 10 Feb 2011 23:55:29 +0300",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> I have no clue about how hints works in Oracle ... I've never been\n> working \"enterprise level\" on anything else than Postgres. Anyway,\n> today I just came over an interesting problem in our production\n> database today - and I think it would be a benefit to be able to\n> explicitly tell the planner what index to use (the dev team is adding\n> redundant attributes and more indexes to solve the problem - which\n> worries me, because we will run into serious problems as soon as there\n> won't be enough memory for all the frequently-used indexes).\n\n> We have users and transactions, and we have transaction types. The\n> transaction table is huge. The users are able to interactively check\n> their transaction listings online, and they have some simple filter\n> options available as well. Slightly simplified, the queries done\n> looks like this:\n\n> select * from account_transaction where account_id=? order by\n> created desc limit 25;\n\n> select * from account_transaction where trans_type_id in ( ...\n> long, hard-coded list ...) and account_id=? order by created desc\n> limit 25;\n\n> and we have indexes on:\n\n> account_transaction(account_id, created)\n\n> account_transaction(account_id, trans_type_id, created)\n\nWell, in this case the optimizer *is* smarter than you are, and the\nreason is that it remembers the correct rules for when indexes are\nuseful. That second index is of no value for either query, because\n\"in\" doesn't work the way you're hoping.\n\nI understand the larger point you're trying to make, but this example\nalso nicely illustrates the point being made on the other side, that\n\"force the optimizer to use the index I think it should use\" isn't a\nvery good solution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Feb 2011 16:12:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again... "
},
{
"msg_contents": "2011/2/10 Tobias Brox <[email protected]>\n\n> On 4 February 2011 04:46, Josh Berkus <[email protected]> wrote:\n> > \"Optimizer hints are used to work around problems in the optimizer and\n> > introduce upgrade and maintenance issues. We would rather have the\n> > problems reported and fixed. We have discussed a more sophisticated\n> > system of per-class cost adjustment instead, but a specification remains\n> > to be developed.\"\n>\n> I have no clue about how hints works in Oracle ... I've never been\n> working \"enterprise level\" on anything else than Postgres. Anyway,\n> today I just came over an interesting problem in our production\n> database today - and I think it would be a benefit to be able to\n> explicitly tell the planner what index to use (the dev team is adding\n> redundant attributes and more indexes to solve the problem - which\n> worries me, because we will run into serious problems as soon as there\n> won't be enough memory for all the frequently-used indexes).\n>\n> We have users and transactions, and we have transaction types. The\n> transaction table is huge. The users are able to interactively check\n> their transaction listings online, and they have some simple filter\n> options available as well. Slightly simplified, the queries done\n> looks like this:\n>\n> select * from account_transaction where account_id=? order by\n> created desc limit 25;\n>\n> select * from account_transaction where trans_type_id in ( ...\n> long, hard-coded list ...) and account_id=? order by created desc\n> limit 25;\n>\n> and we have indexes on:\n>\n> account_transaction(account_id, created)\n>\n> account_transaction(account_id, trans_type_id, created)\n>\n> If the list is hard-coded, you can create partial index on\naccount_transaction(account_id, created desc) where trans_type_id in ( ...\nlong, hard-coded list ...)\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2011/2/10 Tobias Brox <[email protected]>\nOn 4 February 2011 04:46, Josh Berkus <[email protected]> wrote:\n> \"Optimizer hints are used to work around problems in the optimizer and\n> introduce upgrade and maintenance issues. We would rather have the\n> problems reported and fixed. We have discussed a more sophisticated\n> system of per-class cost adjustment instead, but a specification remains\n> to be developed.\"\n\nI have no clue about how hints works in Oracle ... I've never been\nworking \"enterprise level\" on anything else than Postgres. Anyway,\ntoday I just came over an interesting problem in our production\ndatabase today - and I think it would be a benefit to be able to\nexplicitly tell the planner what index to use (the dev team is adding\nredundant attributes and more indexes to solve the problem - which\nworries me, because we will run into serious problems as soon as there\nwon't be enough memory for all the frequently-used indexes).\n\nWe have users and transactions, and we have transaction types. The\ntransaction table is huge. The users are able to interactively check\ntheir transaction listings online, and they have some simple filter\noptions available as well. Slightly simplified, the queries done\nlooks like this:\n\n select * from account_transaction where account_id=? order by\ncreated desc limit 25;\n\n select * from account_transaction where trans_type_id in ( ...\nlong, hard-coded list ...) and account_id=? 
order by created desc\nlimit 25;\n\nand we have indexes on:\n account_transaction(account_id, created)\n\n account_transaction(account_id, trans_type_id, created)If the list is hard-coded, you can create partial index on account_transaction(account_id, created desc) where trans_type_id in ( ...\nlong, hard-coded list ...) -- Best regards, Vitalii Tymchyshyn",
"msg_date": "Fri, 11 Feb 2011 10:19:01 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
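Spelled out, the suggestion above amounts to something like the following; the IN-list values are invented stand-ins for the hard-coded list:

    CREATE INDEX account_transaction_acct_created_part_idx
        ON account_transaction (account_id, created DESC)
        WHERE trans_type_id IN (3, 7, 12, 15);

    -- Because the query's IN-list is implied by the index predicate, the newest
    -- 25 qualifying rows can be read straight off the index order:
    SELECT *
    FROM account_transaction
    WHERE trans_type_id IN (3, 7, 12, 15)
      AND account_id = ?
    ORDER BY created DESC
    LIMIT 25;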
{
"msg_contents": "2011/2/11 Віталій Тимчишин <[email protected]>:\n> If the list is hard-coded, you can create partial index on\n> account_transaction(account_id, created desc) where trans_type_id in ( ...\n> long, hard-coded list ...)\n\nMy idea as well, though it looks ugly and it would be a maintenance\nhead-ache (upgrading the index as new transaction types are added\nwould mean \"costly\" write locks on the table, and we can't rely on\nmanual processes to get it right ... we might need to set up scripts\nto either upgrade the index or alert us if the index needs upgrading).\n",
"msg_date": "Fri, 11 Feb 2011 12:29:06 +0300",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "11.02.11 11:29, Tobias Brox О©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©Ґ(О©ҐО©Ґ):\n> 2011/2/11 О©ҐО©ҐО©ҐО©ҐліО©Ґ О©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©Ґ<[email protected]>:\n>> If the list is hard-coded, you can create partial index on\n>> account_transaction(account_id, created desc) where trans_type_id in ( ...\n>> long, hard-coded list ...)\n> My idea as well, though it looks ugly and it would be a maintenance\n> head-ache (upgrading the index as new transaction types are added\n> would mean \"costly\" write locks on the table,\nCreate new one concurrently.\n> and we can't rely on\n> manual processes to get it right ... we might need to set up scripts\n> to either upgrade the index or alert us if the index needs upgrading).\nYep. Another option could be to add query rewrite as\n\nselect * from (\nselect * from account_transaction where trans_type_id =type1 and \naccount_id=? order by created desc limit 25 union all\nselect * from account_transaction where trans_type_id =type2 and \naccount_id=? order by created desc limit 25 union all\n...\nunion all\nselect * from account_transaction where trans_type_id =typeN and \naccount_id=? order by created desc limit 25\n) a\norder by created desc limit 25\n\nThis will allow to use three-column index in the way it can be used for \nsuch query. Yet if N is large query will look ugly. And I am not sure if \noptimizer is smart enough for not to fetch 25*N rows.\n\n\nBest regards, Vitalii Tymchyshyn\n\n",
"msg_date": "Fri, 11 Feb 2011 11:44:05 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "2011/2/11 Vitalii Tymchyshyn <[email protected]>:\n>> My idea as well, though it looks ugly and it would be a maintenance\n>> head-ache (upgrading the index as new transaction types are added\n>> would mean \"costly\" write locks on the table,\n>\n> Create new one concurrently.\n\nConcurrently? Are there any ways to add large indexes without\nblocking inserts to the table for the time it takes to create the\nindex?\n\n> Yep. Another option could be to add query rewrite as\n>\n> select * from (\n> select * from account_transaction where trans_type_id =type1 and\n> account_id=? order by created desc limit 25 union all\n> select * from account_transaction where trans_type_id =type2 and\n> account_id=? order by created desc limit 25 union all\n> ...\n> union all\n> select * from account_transaction where trans_type_id =typeN and\n> account_id=? order by created desc limit 25\n> ) a\n> order by created desc limit 25\n\nI actually considered that. For the test case given it works very\nfast. Not sure if it would work universally ... it scales well when\nhaving extreme amounts of transactions outside the given transaction\nlist (the case we have problems with now), but it wouldn't scale if\nsome user has an extreme amount of transactions within the list.\nHowever, I think our \"extreme amount of transactions\"-problem is\nmostly limited to the transaction types outside the list.\n",
"msg_date": "Fri, 11 Feb 2011 14:26:01 +0300",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "On 02/11/2011 12:26 PM, Tobias Brox wrote:\n> 2011/2/11 Vitalii Tymchyshyn<[email protected]>:\n>>> My idea as well, though it looks ugly and it would be a maintenance\n>>> head-ache (upgrading the index as new transaction types are added\n>>> would mean \"costly\" write locks on the table,\n>>\n>> Create new one concurrently.\n>\n> Concurrently? Are there any ways to add large indexes without\n> blocking inserts to the table for the time it takes to create the\n> index?\n\nyep, AFAIR since 8.2\nsee: http://www.postgresql.org/docs/8.2/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY\n\n[cut]\n\nAndrea\n",
"msg_date": "Fri, 11 Feb 2011 12:33:22 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
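So the upgrade problem Tobias worries about can be handled without long write locks, roughly like this (index names are placeholders):

    -- Build the replacement partial index with the new hard-coded list without
    -- blocking writers...
    CREATE INDEX CONCURRENTLY account_transaction_part_new_idx
        ON account_transaction (account_id, created DESC)
        WHERE trans_type_id IN (3, 7, 12, 15, 21);

    -- ...then drop the old one.  Plain DROP INDEX still takes a brief exclusive
    -- lock; DROP INDEX CONCURRENTLY only arrived in 9.2.
    DROP INDEX account_transaction_part_old_idx;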
{
"msg_contents": "\n> select * from account_transaction where trans_type_id in ( ...\n> long, hard-coded list ...) and account_id=? order by created desc\n> limit 25;\n\nYou could use an index on (account_id, created, trans_type), in \nreplacement of your index on (account_id, created). This will not prevent \nthe \"Index Scan Backwards\", but at least, index rows with trans_type not \nmatching the WHERE clause will not generate any heap access...\n",
"msg_date": "Fri, 11 Feb 2011 15:51:44 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
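Pierre's variant, written out with the column names used earlier in the thread:

    -- Widening the (account_id, created) index lets the transaction-type check
    -- be evaluated against index entries, so rows of other types are skipped
    -- without visiting the heap.
    CREATE INDEX account_transaction_acct_created_type_idx
        ON account_transaction (account_id, created DESC, trans_type_id);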
{
"msg_contents": "On Thu, Feb 10, 2011 at 9:25 AM, Chris Browne <[email protected]> wrote:\n> [email protected] (Robert Haas) writes:\n>> On Thu, Feb 10, 2011 at 11:45 AM, Kevin Grittner\n>> <[email protected]> wrote:\n>>> Well, I'm comfortable digging in my heels against doing *lame* hints\n>>> just because \"it's what all the other kids are doing,\" which I think\n>>> is the only thing which would have satisfied the OP on this thread.\n>>> From both on-list posts and ones exchanged off-list with me, it\n>>> seems he was stubbornly resistant to properly tuning the server to\n>>> see if any problems remained, or posting particular problems to see\n>>> how they would be most effectively handled in PostgreSQL. We\n>>> obviously can't be drawn into dumb approaches because of\n>>> ill-informed demands like that.\n>>\n>> Nor was I proposing any such thing. But that doesn't make \"we don't\n>> want hints\" an accurate statement. Despite the impression that OP\n>> went away with, the real situation is a lot more nuanced than that,\n>> and the statement on the Todo list gives the wrong impression, IMHO.\n>\n> I have added the following comment to the ToDo:\n>\n> We are not interested to implement hints in ways they are commonly\n> implemented on other databases, and proposals based on \"because\n> they've got them\" will not be welcomed. If you have an idea that\n> avoids the problems that have been observed with other hint systems,\n> that could lead to valuable discussion.\n>\n> That seems to me to characterize the nuance.\n\n\nWhere exactly are the problems with other systems noted? Most other\nsystems have this option so saying \"They have problems\" is a giant cop\nout.\n\n\n-- \nRob Wultsch\[email protected]\n",
"msg_date": "Sun, 13 Feb 2011 12:40:09 -0800",
"msg_from": "Rob Wultsch <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints"
},
{
"msg_contents": "I've wordsmithed Chris's changes some, and then spun off a completely\nseparate page for Hints discussion, since the NotToDo item was becoming\ntoo long.\n\n> Something like this syntax?:\n>\n> JOIN WITH (correlation_factor=0.3)\n\nPlease, NO!\n\nThis is exactly the kind of hint that I regard as a last resort if we\nrun out of implementation alternatives. Any hint which gets coded into\nthe actual queries becomes a *massive* maintenance and upgrade headache\nthereafter. If we're implementing a hint alternative, we should look at\nstuff in this priority order:\n\n1. Useful tuning of additional cost parameters by GUC (i.e.\ncursor_tuple_fraction)\n2. Modifying cost parameters on database *objects* (i.e. \"ndistinct=500\")\n3. Adding new parameters to modify on database objects (i.e.\n\"distribution=normal(1.5,17)\",\"new_rows=0.1\")\n4. Query hints (if all of the above fails to give fixes for some tested\nproblem)\n\n> Where exactly are the problems with other systems noted? Most other\n> systems have this option so saying \"They have problems\" is a giant cop\n> out.\n\nI've put my list down:\nhttp://wiki.postgresql.org/wiki/OptimizerHintsDiscussion#Problems_with_existing_Hint_stystems\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Sun, 13 Feb 2011 14:29:32 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints"
},
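For concreteness, the first two levels of Josh's list already have working syntax; the object names and values below are arbitrary, and the n_distinct option needs 9.0 or later:

    -- 1. GUC-level tuning, per session or per role:
    SET cursor_tuple_fraction = 0.2;

    -- 2. Cost/statistics adjustments attached to database objects:
    ALTER TABLE account_transaction
        ALTER COLUMN trans_type_id SET STATISTICS 1000;      -- larger sample
    ALTER TABLE account_transaction
        ALTER COLUMN account_id SET (n_distinct = 500);      -- override ndistinct
    ALTER FUNCTION some_expensive_check(int) COST 10000;     -- per-function cost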
{
"msg_contents": "On Sun, Feb 13, 2011 at 3:29 PM, Josh Berkus <[email protected]> wrote:\n> I've wordsmithed Chris's changes some, and then spun off a completely\n> separate page for Hints discussion, since the NotToDo item was becoming\n> too long.\n>\n>> Something like this syntax?:\n>>\n>> JOIN WITH (correlation_factor=0.3)\n>\n> Please, NO!\n>\n> This is exactly the kind of hint that I regard as a last resort if we\n> run out of implementation alternatives. Any hint which gets coded into\n> the actual queries becomes a *massive* maintenance and upgrade headache\n> thereafter. If we're implementing a hint alternative, we should look at\n> stuff in this priority order:\n>\n> 1. Useful tuning of additional cost parameters by GUC (i.e.\n> cursor_tuple_fraction)\n> 2. Modifying cost parameters on database *objects* (i.e. \"ndistinct=500\")\n> 3. Adding new parameters to modify on database objects (i.e.\n> \"distribution=normal(1.5,17)\",\"new_rows=0.1\")\n> 4. Query hints (if all of the above fails to give fixes for some tested\n> problem)\n\nI fail to see how 1 through 3 can tell the planner the correlation\nbetween two fields in two separate tables.\n",
"msg_date": "Sun, 13 Feb 2011 15:52:22 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints"
},
{
"msg_contents": "On Sun, Feb 13, 2011 at 10:49 PM, Josh Berkus <[email protected]> wrote:\n>\n>> I fail to see how 1 through 3 can tell the planner the correlation\n>> between two fields in two separate tables.\n>\n> CREATE CORRELATION_ESTIMATE ( table1.colA ) = ( table2.colB ) IS 0.3\n>\n> ... and then it fixes the correlation for *every* query in the database, not\n> just that one. And is easy to fix if the correlation changes.\n\nI like that. Even better, could we setup some kind of simple command\nto tell analyze to collect stats for the two columns together?\n",
"msg_date": "Mon, 14 Feb 2011 00:01:37 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints"
},
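What Scott asks for here is essentially what a much later release shipped as extended statistics; a sketch of that eventual syntax (PostgreSQL 10 and up), reusing the made-up addresses table from the city/zip example:

    CREATE STATISTICS addresses_city_zip_stats (ndistinct, dependencies)
        ON city, zip FROM addresses;
    ANALYZE addresses;   -- the extended statistics are collected by ANALYZE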
{
"msg_contents": "Kevin Grittner wrote:\n> Shaun Thomas <[email protected]> wrote:\n> \n> > how difficult would it be to add that syntax to the JOIN\n> > statement, for example?\n> \n> Something like this syntax?:\n> \n> JOIN WITH (correlation_factor=0.3)\n> \n> Where 1.0 might mean that for each value on the left there was only\n> one distinct value on the right, and 0.0 would mean that they were\n> entirely independent? (Just as an off-the-cuff example -- I'm not\n> at all sure that this makes sense, let alone is the best thing to\n> specify. I'm trying to get at *syntax* here, not particular knobs.)\n\nI am not excited about the idea of putting these correlations in\nqueries. What would be more intesting would be for analyze to build a\ncorrelation coeffficent matrix showing how columns are correlated:\n\n\ta b c\n a 1 .4 0\n b .1 1 -.3\n c .2 .3 1\n\nand those correlations could be used to weigh how the single-column\nstatistics should be combined.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Wed, 16 Feb 2011 16:22:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*)\n again..."
},
{
"msg_contents": "On Wed, Feb 16, 2011 at 4:22 PM, Bruce Momjian <[email protected]> wrote:\n> I am not excited about the idea of putting these correlations in\n> queries. What would be more intesting would be for analyze to build a\n> correlation coeffficent matrix showing how columns are correlated:\n>\n> a b c\n> a 1 .4 0\n> b .1 1 -.3\n> c .2 .3 1\n>\n> and those correlations could be used to weigh how the single-column\n> statistics should be combined.\n\nIf you can make it work, I'll take it... it's (much) easier said than\ndone, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 22 Feb 2011 21:22:10 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
{
"msg_contents": "Hi. I have the idea: hints joined to function. For example instead of\n\nWHERE table1.field1=table2.field2\nwrite:\nWHERE specificeq(table1.field1,table2.field2)\n\nand hints add to declaration of specificeq function.\n\n2011/2/23, Robert Haas <[email protected]>:\n> On Wed, Feb 16, 2011 at 4:22 PM, Bruce Momjian <[email protected]> wrote:\n>> I am not excited about the idea of putting these correlations in\n>> queries. What would be more intesting would be for analyze to build a\n>> correlation coeffficent matrix showing how columns are correlated:\n>>\n>> a b c\n>> a 1 .4 0\n>> b .1 1 -.3\n>> c .2 .3 1\n>>\n>> and those correlations could be used to weigh how the single-column\n>> statistics should be combined.\n>\n> If you can make it work, I'll take it... it's (much) easier said than\n> done, though.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \n------------\npasman\n",
"msg_date": "Sun, 5 Jun 2011 16:25:39 +0100",
"msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
},
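The only things that can be hung off a function declaration today are its cost and row estimates, so the idea above would need new syntax for anything selectivity-related. A sketch of the existing part, with specificeq written as a hypothetical plpgsql function so the planner cannot inline it away:

    CREATE FUNCTION specificeq(a int, b int) RETURNS boolean
        LANGUAGE plpgsql IMMUTABLE
        COST 1000                        -- raises the estimated cost of the qual
        AS $$ BEGIN RETURN a = b; END $$;

    -- Trade-off: wrapping the comparison also hides the plain equality from the
    -- planner, so it can no longer drive an index scan, hash join or merge join
    -- on those columns.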
{
"msg_contents": "On Thu, Feb 10, 2011 at 7:32 PM, Craig James <[email protected]> wrote:\n> On 2/10/11 9:21 AM, Kevin Grittner wrote:\n>>\n>> Shaun Thomas<[email protected]> wrote:\n>>\n>>> how difficult would it be to add that syntax to the JOIN\n>>> statement, for example?\n>>\n>> Something like this syntax?:\n>>\n>> JOIN WITH (correlation_factor=0.3)\n>>\n>> Where 1.0 might mean that for each value on the left there was only\n>> one distinct value on the right, and 0.0 would mean that they were\n>> entirely independent? (Just as an off-the-cuff example -- I'm not\n>> at all sure that this makes sense, let alone is the best thing to\n>> specify. I'm trying to get at *syntax* here, not particular knobs.)\n>\n> There are two types of problems:\n>\n> 1. The optimizer is imperfect and makes a sub-optimal choice.\n>\n> 2. There is theoretical reasons why it's hard for the optimizer. For\n> example, in a table with 50 columns, there is a staggering number of\n> possible correlations. An optimizer can't possibly figure this out, but a\n> human might know them from the start. The City/Postal-code correlation is a\n> good example.\n>\n> For #1, Postgres should never offer any sort of hint mechanism. As many\n> have pointed out, it's far better to spend the time fixing the optimizer\n> than adding hacks.\n>\n> For #2, it might make sense to give a designer a way to tell Postgres stuff\n> that it couldn't possibly figure out. But ... not until the problem is\n> clearly defined.\n>\n> What should happen is that someone writes with an example query, and the\n> community realizes that no amount of cleverness from Postgres could ever\n> solve it (for solid theoretical reasons). Only then, when the problem is\n> clearly defined, should we talk about solutions and SQL extensions.\n\nI don't have one such query handy. However, I think your posting is a\ngood starting point for a discussion how to figure out what we need\nand how a good solution could look like. For example, one thing I\ndislike about hints is that they go into the query. There are a few\ndrawbacks of this approach\n\n- Applications need to be changed to benefit which is not always possible.\n- One important class of such applications are those that use OR\nmappers - hinting then would have to be buried in OR mapper code or\nconfiguration.\n- Hints in the query work only for exactly that query (this might be\nan advantage depending on point of view).\n\nI think the solution should rather be to tell Postgres what \"it\ncouldn't possibly figure out\". I imagine that could be some form of\ndescription of the distribution of data in columns and / or\ncorrelations between columns. Advantage would be that the optimizer\ngets additional input which it can use (i.e. the usage can change\nbetween releases), the information is separate from queries (more like\nmeta data for tables) and thus all queries using a particular table\nwhich was augmented with this meta data would benefit. Usage of this\nmeta data could be controlled by a flag per session (as well as\nglobally) so it would be relatively easy to find out whether this meta\ndata has become obsolete (because data changed or a new release of the\ndatabase is in use).\n\nKind regards\n\nrobert\n\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n",
"msg_date": "Mon, 6 Jun 2011 10:14:43 +0200",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why we don't want hints Was: Slow count(*) again..."
}
] |
[
{
"msg_contents": "I have more than 1300000 records in crm table and I partioned the table with\ndeleted = 0 key.\nIt is working fine except that after partioion query is taking more time\nthan the previous one.\nI already set constraint_exclusion = on; My DB version is Postgresql 8.1\n\nI added the explain anayze for both the states.\nAny idea please why the delay is being occured.\n\nexplain analyze\n select *\n from crm as c\n inner join activity as a on c.crmid = a.activityid\n inner join seactivityrel as s on c.crmid= s.crmid\n where c.deleted = 0;\n\n\n\n Before partiion:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=116378.17..358658.14 rows=8306416340 width=549) (actual\ntime=18273.373..18276.638 rows=1 loops=1)\n Hash Cond: (\"outer\".crmid = \"inner\".activityid)\n -> Append (cost=0.00..37825.51 rows=949942 width=329) (actual\ntime=0.051..5753.293 rows=949941 loops=1)\n -> Seq Scan on crm c (cost=0.00..13.25 rows=1 width=280) (actual\ntime=0.002..0.002 rows=0 loops=1)\n Filter: (deleted = 0)\n -> Seq Scan on crm_active c (cost=0.00..37812.26 rows=949941 width=329)\n(actual time=0.046..3914.645 rows=949941 loops=1)\n Filter: (deleted = 0)\n -> Hash (cost=72725.10..72725.10 rows=1748826 width=153) (actual\ntime=8716.413..8716.413 rows=1 loops=1)\n -> Merge Join (cost=0.00..72725.10 rows=1748826 width=153) (actual\ntime=7122.474..8716.314 rows=1 loops=1)\n Merge Cond: (\"outer\".activityid = \"inner\".crmid)\n -> Index Scan using activity_activityid_subject_idx on activity a\n(cost=0.00..11489.23 rows=343003 width=145) (actual time=0.430..1075.108\nrows=343001 loops=1)\n -> Index Scan using seactivityrel_crmid_idx on seactivityrel s\n(cost=0.00..38518.04 rows=1748826 width=8) (actual time=76.291..5410.545\nrows=1748826 loops=1)\n Total runtime: 18276.780 ms\n(13 rows)\n\nAfter partition:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=115857.19..357283.03 rows=8306416340 width=548) (actual\ntime=85871.145..85874.584 rows=1 loops=1)\n Hash Cond: (\"outer\".crmid = \"inner\".activityid)\n -> Append (cost=0.00..37825.51 rows=949942 width=329) (actual\ntime=0.167..72430.097 rows=949941 loops=1)\n -> Seq Scan on crm c (cost=0.00..13.25 rows=1 width=280) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Filter: (deleted = 0)\n -> Seq Scan on crm_active c (cost=0.00..37812.26 rows=949941 width=329)\n(actual time=0.162..70604.116 rows=949941 loops=1)\n Filter: (deleted = 0)\n -> Hash (cost=73058.13..73058.13 rows=1748826 width=152) (actual\ntime=9603.453..9603.453 rows=1 loops=1)\n -> Merge Join (cost=0.00..73058.13 rows=1748826 width=152) (actual\ntime=7959.707..9603.101 rows=1 loops=1)\n Merge Cond: (\"outer\".activityid = \"inner\".crmid)\n -> Index Scan using activity_pkey on activity a (cost=0.00..11822.25\nrows=343004 width=144) (actual time=88.467..1167.556 rows=343001 loops=1)\n -> Index Scan using seactivityrel_crmid_idx on seactivityrel s\n(cost=0.00..38518.04 rows=1748826 width=8) (actual time=0.459..6148.843\nrows=1748826 loops=1)\n Total runtime: 85875.591 ms\n(13 rows)\n\nI have more than 1300000 records in crm table and I partioned the table with deleted = 0 key.It is working fine except that after partioion query is taking more time than the 
previous one.I already set constraint_exclusion = on; My DB version is Postgresql 8.1\nI added the explain anayze for both the states.Any idea please why the delay is being occured.explain analyze select * from crm as c inner join activity as a on c.crmid = a.activityid inner join seactivityrel as s on c.crmid= s.crmid\n where c.deleted = 0; Before partiion: QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Join (cost=116378.17..358658.14 rows=8306416340 width=549) (actual time=18273.373..18276.638 rows=1 loops=1)\n Hash Cond: (\"outer\".crmid = \"inner\".activityid) -> Append (cost=0.00..37825.51 rows=949942 width=329) (actual time=0.051..5753.293 rows=949941 loops=1) -> Seq Scan on crm c (cost=0.00..13.25 rows=1 width=280) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (deleted = 0) -> Seq Scan on crm_active c (cost=0.00..37812.26 rows=949941 width=329) (actual time=0.046..3914.645 rows=949941 loops=1) Filter: (deleted = 0) -> Hash (cost=72725.10..72725.10 rows=1748826 width=153) (actual time=8716.413..8716.413 rows=1 loops=1)\n -> Merge Join (cost=0.00..72725.10 rows=1748826 width=153) (actual time=7122.474..8716.314 rows=1 loops=1) Merge Cond: (\"outer\".activityid = \"inner\".crmid) -> Index Scan using activity_activityid_subject_idx on activity a (cost=0.00..11489.23 rows=343003 width=145) (actual time=0.430..1075.108 rows=343001 loops=1)\n -> Index Scan using seactivityrel_crmid_idx on seactivityrel s (cost=0.00..38518.04 rows=1748826 width=8) (actual time=76.291..5410.545 rows=1748826 loops=1) Total runtime: 18276.780 ms(13 rows)\nAfter partition: QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Join (cost=115857.19..357283.03 rows=8306416340 width=548) (actual time=85871.145..85874.584 rows=1 loops=1)\n Hash Cond: (\"outer\".crmid = \"inner\".activityid) -> Append (cost=0.00..37825.51 rows=949942 width=329) (actual time=0.167..72430.097 rows=949941 loops=1) -> Seq Scan on crm c (cost=0.00..13.25 rows=1 width=280) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (deleted = 0) -> Seq Scan on crm_active c (cost=0.00..37812.26 rows=949941 width=329) (actual time=0.162..70604.116 rows=949941 loops=1) Filter: (deleted = 0) -> Hash (cost=73058.13..73058.13 rows=1748826 width=152) (actual time=9603.453..9603.453 rows=1 loops=1)\n -> Merge Join (cost=0.00..73058.13 rows=1748826 width=152) (actual time=7959.707..9603.101 rows=1 loops=1) Merge Cond: (\"outer\".activityid = \"inner\".crmid) -> Index Scan using activity_pkey on activity a (cost=0.00..11822.25 rows=343004 width=144) (actual time=88.467..1167.556 rows=343001 loops=1)\n -> Index Scan using seactivityrel_crmid_idx on seactivityrel s (cost=0.00..38518.04 rows=1748826 width=8) (actual time=0.459..6148.843 rows=1748826 loops=1) Total runtime: 85875.591 ms(13 rows)",
"msg_date": "Sun, 10 Oct 2010 15:52:06 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Db slow down after table partition"
}
] |
[
{
"msg_contents": "I already sent the mail earlier. but added wrong explain. So I mail it\nagain.\n\nI have more than 1300000 records in crm table and I partioned the table with\ndeleted = 0 key.\nIt is working fine except that after partioion query is taking more time\nthan the previous one.\nI already set constraint_exclusion = on;\n\nI added the explain anayze for both the states.\nAny idea please why the delay is being occured.\n\nexplain analyze\n select *\n from crm as c\n inner join activity as a on c.crmid = a.activityid\n inner join seactivityrel as s on c.crmid= s.crmid\n where c.deleted = 0;\n\n\n\n Before partiion:\n\n QUERY PLAN\n--------------------------------------------------------------------\n Merge Join (cost=0.00..107563.24 rows=308029 width=459) (actual\ntime=13912.064..18196.713 rows=1 loops=1)\n Merge Cond: (\"outer\".crmid = \"inner\".crmid)\n -> Merge Join (cost=0.00..60995.18 rows=239062 width=451) (actual\ntime=60.972..9698.700 rows=331563 loops=1)\n Merge Cond: (\"outer\".crmid = \"inner\".activityid)\n -> Index Scan using crm_pkey on crm c (cost=0.00..43559.49 rows=945968\nwidth=308) (actual time=52.877..6139.369 rows=949938 loops=1)\n Filter: (deleted = 0)\n -> Index Scan using activity_pkey on activity a (cost=0.00..11822.64\nrows=343003 width=143) (actual time=7.999..1456.232 rows=343001 loops=1)\n -> Index Scan using seactivityrel_crmid_idx on seactivityrel s\n(cost=0.00..38518.04 rows=1748826 width=8) (actual time=0.305..6278.171\nrows=1748826 loops=1)\n Total runtime: 18196.832 ms\n\n\n\nAfter partition:\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=115857.19..357283.03 rows=8306416340 width=548) (actual\ntime=85871.145..85874.584 rows=1 loops=1)\n Hash Cond: (\"outer\".crmid = \"inner\".activityid)\n -> Append (cost=0.00..37825.51 rows=949942 width=329) (actual\ntime=0.167..72430.097 rows=949941 loops=1)\n -> Seq Scan on crm c (cost=0.00..13.25 rows=1 width=280) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Filter: (deleted = 0)\n -> Seq Scan on crm_active c (cost=0.00..37812.26 rows=949941 width=329)\n(actual time=0.162..70604.116 rows=949941 loops=1)\n Filter: (deleted = 0)\n -> Hash (cost=73058.13..73058.13 rows=1748826 width=152) (actual\ntime=9603.453..9603.453 rows=1 loops=1)\n -> Merge Join (cost=0.00..73058.13 rows=1748826 width=152) (actual\ntime=7959.707..9603.101 rows=1 loops=1)\n Merge Cond: (\"outer\".activityid = \"inner\".crmid)\n -> Index Scan using activity_pkey on activity a (cost=0.00..11822.25\nrows=343004 width=144) (actual time=88.467..1167.556 rows=343001 loops=1)\n -> Index Scan using seactivityrel_crmid_idx on seactivityrel s\n(cost=0.00..38518.04 rows=1748826 width=8) (actual time=0.459..6148.843\nrows=1748826 loops=1)\n Total runtime: 85875.591 ms\n(13 rows)\n\nI already sent the mail earlier. but added wrong explain. 
",
"msg_date": "Sun, 10 Oct 2010 16:51:17 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB slow down after table partition"
},
{
"msg_contents": "On 10 October 2010 11:51, AI Rumman <[email protected]> wrote:\n> I already sent the mail earlier. but added wrong explain. So I mail it\n> again.\n>\n> I have more than 1300000 records in crm table and I partioned the table with\n> deleted = 0 key.\n> It is working fine except that after partioion query is taking more time\n> than the previous one.\n> I already set constraint_exclusion = on;\n>\n> I added the explain anayze for both the states.\n> Any idea please why the delay is being occured.\n>\n> explain analyze\n> select *\n> from crm as c\n> inner join activity as a on c.crmid = a.activityid\n> inner join seactivityrel as s on c.crmid= s.crmid\n> where c.deleted = 0;\n>\n>\n>\n> Before partiion:\n>\n> QUERY PLAN\n> --------------------------------------------------------------------\n> Merge Join (cost=0.00..107563.24 rows=308029 width=459) (actual\n> time=13912.064..18196.713 rows=1 loops=1)\n> Merge Cond: (\"outer\".crmid = \"inner\".crmid)\n> -> Merge Join (cost=0.00..60995.18 rows=239062 width=451) (actual\n> time=60.972..9698.700 rows=331563 loops=1)\n> Merge Cond: (\"outer\".crmid = \"inner\".activityid)\n> -> Index Scan using crm_pkey on crm c (cost=0.00..43559.49 rows=945968\n> width=308) (actual time=52.877..6139.369 rows=949938 loops=1)\n> Filter: (deleted = 0)\n> -> Index Scan using activity_pkey on activity a (cost=0.00..11822.64\n> rows=343003 width=143) (actual time=7.999..1456.232 rows=343001 loops=1)\n> -> Index Scan using seactivityrel_crmid_idx on seactivityrel s\n> (cost=0.00..38518.04 rows=1748826 width=8) (actual time=0.305..6278.171\n> rows=1748826 loops=1)\n> Total runtime: 18196.832 ms\n>\n>\n>\n> After partition:\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=115857.19..357283.03 rows=8306416340 width=548) (actual\n> time=85871.145..85874.584 rows=1 loops=1)\n> Hash Cond: (\"outer\".crmid = \"inner\".activityid)\n> -> Append (cost=0.00..37825.51 rows=949942 width=329) (actual\n> time=0.167..72430.097 rows=949941 loops=1)\n> -> Seq Scan on crm c (cost=0.00..13.25 rows=1 width=280) (actual\n> time=0.001..0.001 rows=0 loops=1)\n> Filter: (deleted = 0)\n> -> Seq Scan on crm_active c (cost=0.00..37812.26 rows=949941 width=329)\n> (actual time=0.162..70604.116 rows=949941 loops=1)\n> Filter: (deleted = 0)\n> -> Hash (cost=73058.13..73058.13 rows=1748826 width=152) (actual\n> time=9603.453..9603.453 rows=1 loops=1)\n> -> Merge Join (cost=0.00..73058.13 rows=1748826 width=152) (actual\n> time=7959.707..9603.101 rows=1 loops=1)\n> Merge Cond: (\"outer\".activityid = \"inner\".crmid)\n> -> Index Scan using activity_pkey on activity a (cost=0.00..11822.25\n> rows=343004 width=144) (actual time=88.467..1167.556 rows=343001 loops=1)\n> -> Index Scan using seactivityrel_crmid_idx on seactivityrel s\n> (cost=0.00..38518.04 rows=1748826 width=8) (actual time=0.459..6148.843\n> rows=1748826 loops=1)\n> Total runtime: 85875.591 ms\n> (13 rows)\n\nIf you look at your latest explain, it shows that it's merging the\nresults of a full sequential scan of both crm and crm_active. Is\ncrm_active a child table of crm?\n\nDo you no longer have the index \"crm_pkey\" on the parent table? It\ndoesn't appear to be there anymore. 
And also, if you only want\nresults where active = 0, create a partial index, such as:\n\nCREATE INDEX idx_crm_inactive on crm (active) WHERE active = 0;\n\nThis would create an index for \"inactive\" entries on the crm table.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n",
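A hedged sketch of the suggestion above, adapted to this thread's schema: the posted query filters on deleted rather than active, and the index names below are made up. In 8.1-style inheritance partitioning a child table does not inherit the parent's indexes, so the partition holding the live rows needs its own index on the join key.

    -- index the join key on the child partition (it is not inherited from crm)
    CREATE INDEX crm_active_crmid_idx ON crm_active (crmid);
    -- a partial index on the parent only covers rows stored directly in the parent
    CREATE INDEX crm_not_deleted_idx ON crm (crmid) WHERE deleted = 0;
    ANALYZE crm_active;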
"msg_date": "Sun, 10 Oct 2010 12:04:31 +0100",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB slow down after table partition"
}
] |
[
{
"msg_contents": "I need to join two tales say TAB_A and TAB_B, where TAB_A is greater than\nTAB_B in size and records.\nWhich Table should I put first in join order?\nAny idea please.\n\nI need to join two tales say TAB_A and TAB_B, where TAB_A is greater than TAB_B in size and records.Which Table should I put first in join order?Any idea please.",
"msg_date": "Mon, 11 Oct 2010 12:38:45 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "join order"
},
{
"msg_contents": "On Mon, Oct 11, 2010 at 12:38 AM, AI Rumman <[email protected]> wrote:\n> I need to join two tales say TAB_A and TAB_B, where TAB_A is greater than\n> TAB_B in size and records.\n> Which Table should I put first in join order?\n\nIf it's a regular old inner join it doesn't matter, the query planner\nwill figure it out. If it's a left or right join then you\n(hopefullly) already know the order you need to use. If it's a full\nouter join again it doesn't matter, the query planner will figure it\nout.\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Mon, 11 Oct 2010 00:46:50 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: join order"
}
] |
[
{
"msg_contents": "Hello,\nI have heard it said that if a stored procedure is declared as VOLATILE,\nthen no good optimizations can be done on queries within the stored\nprocedure or queries that use the stored procedure (say as the column in a\nview). I have seen this in practice, recommended on the irc channel, and in\nthe archives (\nhttp://archives.postgresql.org/pgsql-performance/2008-01/msg00283.php). Can\nsomeone help me understand or point me to some documentation explaining why\nthis is so?\n\nAny insights would be appreciated. I'm new to pgsql and would like to know a\nlittle more about what is going on under the hood.\n\nThanks,\nDamon\n\nHello,I have heard it said that if a stored procedure is declared as VOLATILE, then no good optimizations can be done on queries within the stored procedure or queries that use the stored procedure (say as the column in a view). I have seen this in practice, recommended on the irc channel, and in the archives (http://archives.postgresql.org/pgsql-performance/2008-01/msg00283.php). Can someone help me understand or point me to some documentation explaining why this is so? \nAny insights would be appreciated. I'm new to pgsql and would like to know a little more about what is going on under the hood.Thanks,Damon",
"msg_date": "Mon, 11 Oct 2010 16:10:08 -0700",
"msg_from": "Damon Snyder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stored procedure declared as VOLATILE => no good optimization is done"
},
{
"msg_contents": "On Mon, Oct 11, 2010 at 7:10 PM, Damon Snyder <[email protected]> wrote:\n> Hello,\n> I have heard it said that if a stored procedure is declared as VOLATILE,\n> then no good optimizations can be done on queries within the stored\n> procedure or queries that use the stored procedure (say as the column in a\n> view). I have seen this in practice, recommended on the irc channel, and in\n> the archives\n> (http://archives.postgresql.org/pgsql-performance/2008-01/msg00283.php). Can\n> someone help me understand or point me to some documentation explaining why\n> this is so?\n> Any insights would be appreciated. I'm new to pgsql and would like to know a\n> little more about what is going on under the hood.\n> Thanks,\n> Damon\n\nThe theory behind 'volatile' is pretty simple -- each execution of the\nfunction, regardless of the inputs, can be expected to produce a\ncompletely independent result, or modifies the datbase. In the case\nof immutable, which is on the other end, particular set of inputs will\nproduce one and only result, and doesn't modify anything.\n\nIn the immutable case, the planner can shuffle the function call\naround in the query, calling it less, simplifying joins, etc. There\nare lots of theoretical optimizations that can be done since the\ninputs (principally table column values and literal values) can be\nassumed static for the duration of the query.\n\n'stable' is almost like immutable, but is only guaranteed static for\nthe duration of the query. most functions that read from but don't\nwrite to the database will fit in this category. Most optimizations\nstill apply here, but stable functions can't be used in indexes and\ncan't be executed and saved off in plan time where it might be helpful\n(prepared statements and pl/pgsql plans).\n\nbroadly speaking:\n*) function generates same output from inputs regardless of what's\ngoing on in the database, and has no side effects: IMMUTABLE\n*) function reads (only) from tables, or is an immutable function in\nmost senses but influenced from the GUC (or any other out of scope\nthing): STABLE\n*) all other cases: VOLATILE (which is btw the default)\n\nmerlin\n",
"msg_date": "Fri, 15 Oct 2010 17:06:55 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored procedure declared as VOLATILE => no good\n\toptimization is done"
},
{
"msg_contents": "> broadly speaking:\n> *) function generates same output from inputs regardless of what's\n> going on in the database, and has no side effects: IMMUTABLE\n\nSo can I say \"if a function is marked IMMUTABLE, then it should never\nmodify database\"? Is there any counter example?\n\n> *) function reads (only) from tables, or is an immutable function in\n> most senses but influenced from the GUC (or any other out of scope\n> thing): STABLE\n\nIt seems if above is correct, I can say STABLE functions should never\nmodify databases as well.\n\n> *) all other cases: VOLATILE (which is btw the default)\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese: http://www.sraoss.co.jp\n",
"msg_date": "Sat, 16 Oct 2010 10:47:38 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored procedure declared as VOLATILE => no good\n\toptimization is done"
},
{
"msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> So can I say \"if a function is marked IMMUTABLE, then it should never\n> modify database\"? Is there any counter example?\n> It seems if above is correct, I can say STABLE functions should never\n> modify databases as well.\n\nBoth of those things are explicitly stated here:\nhttp://developer.postgresql.org/pgdocs/postgres/xfunc-volatility.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2010 22:31:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored procedure declared as VOLATILE => no good optimization is\n\tdone"
},
{
"msg_contents": "On Fri, Oct 15, 2010 at 10:31 PM, Tom Lane <[email protected]> wrote:\n> Tatsuo Ishii <[email protected]> writes:\n>> So can I say \"if a function is marked IMMUTABLE, then it should never\n>> modify database\"? Is there any counter example?\n>> It seems if above is correct, I can say STABLE functions should never\n>> modify databases as well.\n>\n> Both of those things are explicitly stated here:\n> http://developer.postgresql.org/pgdocs/postgres/xfunc-volatility.html\n\nOk, being pedantic here, but:\n\nI think more interesting is *why* the 'immutable shall not modify the\ndatabase' requirement is there. IOW, suppose you ignore the warnings\non the docs and force immutability on a function that writes (via the\nfunction loophole) to the database, why exactly is this a bad idea?\nThe reasoning given in the documentation explains a problematic\nsymptom of doing so but gives little technical reasoning what it\nshould never be done.\n\nOne reason why writing to the database breaks immutability is that\nwriting to the database depends on resources that can change after the\nfact: function immutability also pertains to failure -- if a function\nerrors (or not) with a set of inputs, it should always do so. If you\nwrite to a table, you could violate a constraint from one call to the\nnext, or the table may not even be there at all...\n\nWriting to the database means you are influencing other systems, and\nvia constraints they are influencing you, so it makes it wrong by\ndefinition. That said, if you were writing to, say, a table with no\nmeaningful constraints this actually wouldn't be so bad as long as you\ncan also deal with the other big issue with immutability, namely that\nthere is not 1:1 correspondence between when the function is logically\nevaluated and when it is executed. This more or less eliminates\nlogging (at least outside of debugging purposes), the only thing I can\nfigure you can usefully do on a table w/no enforceable constraints.\nAlso, a big use case for immutable function is to allow use in\nindexing, and it would be just crazy (again, debugging purposes aside)\nto write to a table on index evaluation.\n\nmerlin\n",
"msg_date": "Sat, 16 Oct 2010 15:54:14 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored procedure declared as VOLATILE => no good\n\toptimization is done"
},
{
"msg_contents": "Thank you for all of the responses. This was really helpful.\n\nDamon\n\nOn Sat, Oct 16, 2010 at 12:54 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Fri, Oct 15, 2010 at 10:31 PM, Tom Lane <[email protected]> wrote:\n> > Tatsuo Ishii <[email protected]> writes:\n> >> So can I say \"if a function is marked IMMUTABLE, then it should never\n> >> modify database\"? Is there any counter example?\n> >> It seems if above is correct, I can say STABLE functions should never\n> >> modify databases as well.\n> >\n> > Both of those things are explicitly stated here:\n> > http://developer.postgresql.org/pgdocs/postgres/xfunc-volatility.html\n>\n> Ok, being pedantic here, but:\n>\n> I think more interesting is *why* the 'immutable shall not modify the\n> database' requirement is there. IOW, suppose you ignore the warnings\n> on the docs and force immutability on a function that writes (via the\n> function loophole) to the database, why exactly is this a bad idea?\n> The reasoning given in the documentation explains a problematic\n> symptom of doing so but gives little technical reasoning what it\n> should never be done.\n>\n> One reason why writing to the database breaks immutability is that\n> writing to the database depends on resources that can change after the\n> fact: function immutability also pertains to failure -- if a function\n> errors (or not) with a set of inputs, it should always do so. If you\n> write to a table, you could violate a constraint from one call to the\n> next, or the table may not even be there at all...\n>\n> Writing to the database means you are influencing other systems, and\n> via constraints they are influencing you, so it makes it wrong by\n> definition. That said, if you were writing to, say, a table with no\n> meaningful constraints this actually wouldn't be so bad as long as you\n> can also deal with the other big issue with immutability, namely that\n> there is not 1:1 correspondence between when the function is logically\n> evaluated and when it is executed. This more or less eliminates\n> logging (at least outside of debugging purposes), the only thing I can\n> figure you can usefully do on a table w/no enforceable constraints.\n> Also, a big use case for immutable function is to allow use in\n> indexing, and it would be just crazy (again, debugging purposes aside)\n> to write to a table on index evaluation.\n>\n> merlin\n>\n\nThank you for all of the responses. This was really helpful.DamonOn Sat, Oct 16, 2010 at 12:54 PM, Merlin Moncure <[email protected]> wrote:\nOn Fri, Oct 15, 2010 at 10:31 PM, Tom Lane <[email protected]> wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n>> So can I say \"if a function is marked IMMUTABLE, then it should never\n>> modify database\"? Is there any counter example?\n>> It seems if above is correct, I can say STABLE functions should never\n>> modify databases as well.\n>\n> Both of those things are explicitly stated here:\n> http://developer.postgresql.org/pgdocs/postgres/xfunc-volatility.html\n\nOk, being pedantic here, but:\n\nI think more interesting is *why* the 'immutable shall not modify the\ndatabase' requirement is there. 
",
"msg_date": "Mon, 25 Oct 2010 16:27:42 -0700",
"msg_from": "Damon Snyder <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored procedure declared as VOLATILE => no good\n\toptimization is done"
}
] |
[
{
"msg_contents": "Are there any performance implications (benefits) to executing queries\nin a transaction where\nSET TRANSACTION READ ONLY;\nhas been executed?\n\n\n-- \nJon\n",
"msg_date": "Tue, 12 Oct 2010 12:24:47 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "read only transactions"
},
{
"msg_contents": "Jon Nelson <[email protected]> wrote:\n \n> Are there any performance implications (benefits) to executing\n> queries in a transaction where\n> SET TRANSACTION READ ONLY;\n> has been executed?\n \nI don't think it allows much optimization in any current release.\n \nIt wouldn't be a bad idea to use it where appropriate, though, as\nfuture releases might do something with it. If you include this on\nthe BEGIN statement, that will save a round trip.\n \n-Kevin\n",
"msg_date": "Tue, 12 Oct 2010 13:58:13 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: read only transactions"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> Are there any performance implications (benefits) to executing queries\n> in a transaction where\n> SET TRANSACTION READ ONLY;\n> has been executed?\n\nNo.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2010 15:33:44 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: read only transactions "
},
{
"msg_contents": "[email protected] (Jon Nelson) writes:\n> Are there any performance implications (benefits) to executing queries\n> in a transaction where\n> SET TRANSACTION READ ONLY;\n> has been executed?\n\nDirectly? No.\n\nIndirectly, well, a *leetle* bit...\n\nTransactions done READ ONLY do not generate actual XIDs, which reduces\nthe amount of XID generation (pretty tautological!), which reduces the\nneed to do VACUUM to protect against XID wraparound.\n\n <http://www.postgresql.org/docs/8.4/static/routine-vacuuming.html#VACUUM-BASICS>\n\nIf you process 50 million transactions, that chews thru 50 million XIDs.\n\nIf 45 million of those were processed via READ ONLY transactions, then\nthe same processing only chews thru 5 million XIDs, meaning that the\nXID-relevant vacuums can be done rather less frequently.\n\nThis only terribly much matters if:\n a) your database is so large that there are tables on which VACUUM\n would run for a very long time, and\n\n b) you are chewing through XIDs mighty quickly.\n\nIf either condition isn't true, then the indirect effect isn't important\neither. \n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in name ^ \"@\" ^ tld;;\n\"I'm not switching from slrn. I'm quite confident that anything that\n*needs* to be posted in HTML is fatuous garbage not worth my time.\"\n-- David M. Cook <[email protected]>\n",
"msg_date": "Tue, 12 Oct 2010 16:32:31 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: read only transactions"
},
{
"msg_contents": "Chris Browne <[email protected]> writes:\n> [email protected] (Jon Nelson) writes:\n>> Are there any performance implications (benefits) to executing queries\n>> in a transaction where\n>> SET TRANSACTION READ ONLY;\n>> has been executed?\n\n> Directly? No.\n\n> Indirectly, well, a *leetle* bit...\n\n> Transactions done READ ONLY do not generate actual XIDs, which reduces\n> the amount of XID generation (pretty tautological!), which reduces the\n> need to do VACUUM to protect against XID wraparound.\n\nYou're right that a read-only transaction doesn't generate an XID.\nBut that is not a function of whether you do SET TRANSACTION READ ONLY;\nit's a function of refraining from attempting any database changes.\nThe SET might be useful for clarifying and enforcing your intent, but\nit's not a performance boost to use it, versus just doing the read-only\ntransaction without it.\n\nAlso, I believe that SET TRANSACTION READ ONLY isn't a \"hard\" read only\nrestriction anyway --- it'll still allow writes to temp tables for\nexample, which will cause assignment of an XID.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2010 17:16:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: read only transactions "
}
] |
[
{
"msg_contents": "I've got a few tables that periodically get entirely refreshed via a COPY. I\ndon't really have a good mechanism for detecting only rows which have\nchanged so even though the differences are small, a full copy is easiest.\n However, the data includes a primary key column, so I can't simply load\ninto the existing table and then drop older rows. So we load into a table\nwith a different name and then, within a transaction, drop the old and\nrename the new. However, while the transaction will cause a query against\nthat table to block until the transaction commits, when the transaction\ncommits, the blocked query will fail with an error message like: ERROR:\n could not open relation with OID 17556\n\nIs there some way to do the drop+rename in a manner which will preserve the\nOID or otherwise allow blocked queries to execute correctly once they\nunblock?\n\nA secondary issue is that if permissions were granted to a role on the old\ntable, the new table does not acquire those permissions and they must be\ngranted again.\n\nThe biggest table that gets updated like this is a couple hundred thousand\nrows, with maybe a few thousand rows actually changing or being added with\neach load. Suggestions for alternative mechanisms for doing the loading are\nwelcome. I'd really rather avoid updating every row in a several hundred\nthousand row table, especially without easy upsert functionality. The data\nis small enough that selecting everything and then comparing in memory\nbefore updating modified rows is doable, but sure seems like a lot of work\nif it can be avoided.\n\nWriting this caused me to think of a possible solution, which appears to\nwork correctly, but I'd like to confirm it with folks in the know:\n\nInstead of this:\n\n CREATE TABLE mytable_temp...;\n COPY INTO mytable_temp...;\nBEGIN;\n DROP TABLE mytable;\n ALTER TABLE mytable_temp RENAME TO mytable;\nCOMMIT;\n\nWhich will cause any overlapping queries to pick up the wrong OID for\nmytable and then fail when the transaction commits, I tested this:\n\nCOPY INTO mytable_temp;\nBEGIN;\n ALTER TABLE mytable RENAME TO mytable_old;\n ALTER TABLE mytable_temp RENAME TO mytable;\nCOMMIT;\nDROP TABLE mytable_old;\n\nIt would appear that any query that uses mytable which overlaps with the\ntransaction will pick up the OID of the original mytable and then block\nuntil the transaction commits. WHen the transaction commits, those queries\nwill successfully run against the original OID (no queries write to this\ntable except for the bulk load) and will complete, at which time, the table\ndrop will finally complete. Meanwhile, any queries which don't overlap (or\nperhaps any queries which start after the rename from mytable_temp to\nmytable has occurred) will successfully complete against the new table.\n\nThe net result appears to be that I will no longer suffer the missing OID\nerror, which seemed to periodically completely hose a db connection,\nrequiring that the connection be closed since no subequent queries would\never succeed, whether they touched the table in question or not. I've only\nseen that erroneous behaviour on 8.3 (so far - we only recently upgraded to\n8.4.4), but it was fairly mysterious because I've never been able to\nreplicate it in testing. I could get a single missing OID error, but never\none that would break all subsequent queries.\n\nAre my assumptions about this correct?\n\nI've got a few tables that periodically get entirely refreshed via a COPY. 
",
"msg_date": "Tue, 12 Oct 2010 14:16:11 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": true,
"msg_subject": "bulk load performance question"
},
{
"msg_contents": "Samuel Gendler <[email protected]> writes:\n> Is there some way to do the drop+rename in a manner which will preserve the\n> OID or otherwise allow blocked queries to execute correctly once they\n> unblock?\n\nNo, but you could consider \n\tbegin;\n\ttruncate original_table;\n\tinsert into original_table select * from new_data;\n\tcommit;\n\n> A secondary issue is that if permissions were granted to a role on the old\n> table, the new table does not acquire those permissions and they must be\n> granted again.\n\nNot to mention foreign keys ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 12 Oct 2010 17:22:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bulk load performance question "
}
] |
[
{
"msg_contents": "Hi, everyone. I'm working with a client to try to optimize their use of\nPostgreSQL. They're running 8.3 on a Windows platform, packaged as part\nof a physical product that is delivered to customers.\n\nWe're planning to upgrade to 9.0 at some point in the coming months, but\nthis question is relevant for 8.3 (and perhaps beyond).\n\nAll of the database-related logic for this application is in server-side\nfunctions, written in PL/PgSQL. That is, the application never issues a\nSELECT or INSERT; rather, it invokes a function with parameters, and the\nfunction handles the query. It's not unusual for a function to invoke\none or more other PL/PgSQL functions as part of its execution.\n\nSince many of these PL/PgSQL functions are just acting as wrappers around\nqueries, I thought that it would be a cheap speedup for us to change some\nof them to SQL functions, rather than PL/PgSQL. After all, PL/PgSQL is (I\nthought) interpreted, whereas SQL functions can be inlined and handled\ndirectly by the optimizer and such.\n\nWe made the change to one or two functions, and were rather surprised to\nsee the performance drop by quite a bit.\n\nMy question is whether this is somehow to be expected. Under what\nconditions will SQL functions be slower than PL/PgSQL functions? Is there\na heuristic that I can/should use to know this in advance? Does it matter\nif the SELECT being executed operates against a table, or a PL/PgSQL\nfunction?\n\nThanks in advance for any insights everyone can offer.\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n\n\n\n",
"msg_date": "Wed, 13 Oct 2010 09:30:45 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL functions vs. PL/PgSQL functions"
},
{
"msg_contents": "On 13/10/2010 3:30 PM, Reuven M. Lerner wrote:\n\n> My question is whether this is somehow to be expected. Under what\n> conditions will SQL functions be slower than PL/PgSQL functions?\n\nThe main cases I can think of:\n\n- Where the SQL function is inlined (PL/PgSQL functions can't be \ninlined, some SQL functions can) and the inlining turns out to be a \nperformance loss rather than a gain.\n\n- Where the PL/PgSQL function was constructing queries dynamically for \nEXECUTE ... USING, so each query contained its parameters directly. If \nconverted to an SQL function (or a PL/PgSQL function using SELECT / \nPERFORM instead of EXECUTE ... USING) the planner will make more generic \nchoices because it doesn't have stats on specific parameter values. \nThese choices are sometimes not all that great.\n\nBeyond that, I'd have to wait to hear from someone who has more real \nknowledge than my hand-waving can provide.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Wed, 13 Oct 2010 16:11:48 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL functions vs. PL/PgSQL functions"
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 3:30 AM, Reuven M. Lerner <[email protected]> wrote:\n> Hi, everyone. I'm working with a client to try to optimize their use of\n> PostgreSQL. They're running 8.3 on a Windows platform, packaged as part\n> of a physical product that is delivered to customers.\n>\n> We're planning to upgrade to 9.0 at some point in the coming months, but\n> this question is relevant for 8.3 (and perhaps beyond).\n>\n> All of the database-related logic for this application is in server-side\n> functions, written in PL/PgSQL. That is, the application never issues a\n> SELECT or INSERT; rather, it invokes a function with parameters, and the\n> function handles the query. It's not unusual for a function to invoke\n> one or more other PL/PgSQL functions as part of its execution.\n>\n> Since many of these PL/PgSQL functions are just acting as wrappers around\n> queries, I thought that it would be a cheap speedup for us to change some\n> of them to SQL functions, rather than PL/PgSQL. After all, PL/PgSQL is (I\n> thought) interpreted, whereas SQL functions can be inlined and handled\n> directly by the optimizer and such.\n>\n> We made the change to one or two functions, and were rather surprised to\n> see the performance drop by quite a bit.\n>\n> My question is whether this is somehow to be expected. Under what\n> conditions will SQL functions be slower than PL/PgSQL functions? Is there\n> a heuristic that I can/should use to know this in advance? Does it matter\n> if the SELECT being executed operates against a table, or a PL/PgSQL\n> function?\n>\n> Thanks in advance for any insights everyone can offer.\n\n*) SQL functions require you to use $n notation for input arguments vs\nthe argument name.\n*) SQL functions are fairly transparent to planning/execution. They\nare re-planned every time they are run (as are views)\n*) simple SQL functions can be inlined, allowing for much smarter\nplans where they are called (especially if they are immutable/stable)\n*) SQL functions are much more forcefully validated when created.\nThis is of course very nice, but can occasionally be a pain, if you\nwant the function to apply to a search path other than the default\nsearch path. This forces me to disable body checking in particular\ncases.\n*) In the not so old days, SQL functions could be called in more\nconexts (select func() vs select * from func()). This is now changed\nthough.\n*) SQL returning setof functions, can send RETURNING from\ninsert/update to the output of the function. This is the ONLY way to\ndo this at present (until we get wCTE) w/o involving the client.\n\n*) plpgsql functions are completely planned and that plan is held for\nthe duration of the session, or until a invalidation event occurs\n(statistics driven, table dropping, etc). This adds overhead to first\ncall but reduces overhead in subsequent calls since you don't have to\nre-plan. This also means you can't float the function over multiple\nsearch paths on the same connection (EVER, even if you DISCARD). This\nalso means you have to be aware of temp table interactions w/plans if\nyou are concerned about performance.\n*) plpgsql allows dynamic execution (can use to get around above),\nspecific variable names, sane error handling, and all kinds of other\nwonderful things too numerous to mention.\n*) plpgsql simple expressions (like n:=n+1) can bypass SPI, and\ntherefore run pretty quickly.\n\nboth sql and plpgsql functions create a mvcc snapshot as soon as the\nfunction is entered. 
This can and will cause headaches if you are\nwriting highly concurrent systems utilizing serializable transactions.\n(this is one of the biggest annoyances with a 100% pl interface to\nyour db).\n\nwhen you make the jump to 9.0, you might want to check out libpqtypes\nif you are writing your client in C. it will greatly ease sending\ncomplex data to/from the database to receiving functions. certain\nother db interfaces can also do this, for example python has a very\ngood database driver for postgres.\n\nmerlin\n",
"msg_date": "Wed, 13 Oct 2010 09:50:33 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL functions vs. PL/PgSQL functions"
},
{
"msg_contents": "\"Reuven M. Lerner\" <[email protected]> writes:\n> All of the database-related logic for this application is in server-side\n> functions, written in PL/PgSQL. That is, the application never issues a\n> SELECT or INSERT; rather, it invokes a function with parameters, and the\n> function handles the query. It's not unusual for a function to invoke\n> one or more other PL/PgSQL functions as part of its execution.\n\n> Since many of these PL/PgSQL functions are just acting as wrappers around\n> queries, I thought that it would be a cheap speedup for us to change some\n> of them to SQL functions, rather than PL/PgSQL. After all, PL/PgSQL is (I\n> thought) interpreted, whereas SQL functions can be inlined and handled\n> directly by the optimizer and such.\n\n> We made the change to one or two functions, and were rather surprised to\n> see the performance drop by quite a bit.\n\n> My question is whether this is somehow to be expected.\n\nIt's not particularly surprising, especially not if your past\ndevelopment has tended to tune the app so that plpgsql works well.\n\nIn the first place, SQL operations issued in plpgsql aren't somehow\n\"interpreted\" when everything else is \"compiled\". It's the same\nexecution engine. It would be fair to speak of control logic in\nplpgsql as being interpreted; but since SQL functions don't have any\nability to include control logic at all, you're not going to be moving\nanything of that description over. Besides, the control logic usually\ntakes next to no time compared to the SQL operations.\n\nThe reason that plpgsql-issued queries are sometimes slower than queries\nexecuted directly is that plpgsql parameterizes the queries according\nto whatever plpgsql variables/parameters they use, and sometimes you get\na worse plan if the planner can't see the exact values of particular\nvariables used in a query.\n\nThe reason plpgsql does that is that it saves the plans for individual\nSQL queries within a function for the life of the session. SQL\nfunctions involve no such state --- either they get inlined into the\ncalling query, in which case they have to be planned when that query is,\nor else they are planned on-the-fly at beginning of execution. So your\nchange has definitely de-optimized things in the sense of introducing\nmore planning work.\n\nNow you could have seen a win anyway, if plpgsql's parameterized\nquery plans were sufficiently inefficient that planning on-the-fly\nwith actual variable values would beat them out. But that's evidently\nnot the case for (most of?) your usage patterns. In places where it is\nthe case, the usual advice is to fix it by using EXECUTE, not by giving\nup plpgsql's ability to cache plans everywhere else.\n\nIt's possible that at some point we'll try to introduce plan caching\nfor non-inlined SQL functions. But at best this would put them on a\npar with plpgsql speed-wise. Really the only place where a SQL function\nwill be a win for performance is if it can be inlined into the calling\nquery, and that's pretty much never the case in the usage pattern you're\ntalking about. (The sort of inlining we're talking about is more or\nless textual substitution, and you can't insert an INSERT/UPDATE/DELETE\nin a SELECT.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Oct 2010 10:14:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL functions vs. PL/PgSQL functions "
},
{
"msg_contents": " Wow. Thanks so much to all of you for the thoughtful and helpful \nresponses!\n\nReuven\n",
"msg_date": "Wed, 13 Oct 2010 21:34:54 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL functions vs. PL/PgSQL functions"
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 10:14 AM, Tom Lane <[email protected]> wrote:\n> It's possible that at some point we'll try to introduce plan caching\n> for non-inlined SQL functions.\n\nhm, I think the search_path/function plan issue would have to be dealt\nwith before doing this -- a while back IIRC you suggested function\nplans might be organized around search_path setting at plan time, or\nthis would break a fair amount of code (for example, mine) :-).\n\nmerlin\n",
"msg_date": "Thu, 14 Oct 2010 10:28:50 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL functions vs. PL/PgSQL functions"
},
{
"msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Wed, Oct 13, 2010 at 10:14 AM, Tom Lane <[email protected]> wrote:\n>> It's possible that at some point we'll try to introduce plan caching\n>> for non-inlined SQL functions.\n\n> hm, I think the search_path/function plan issue would have to be dealt\n> with before doing this --\n\nYeah, perhaps. There doesn't seem to be any groundswell of demand for\ndoing anything about that anyway. Particularly since plpgsql is now\ninstalled by default, a reasonable answer to \"I'd like the system to\ncache plans for this\" is now \"so write it in plpgsql instead\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Oct 2010 10:40:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL functions vs. PL/PgSQL functions "
}
] |
[
{
"msg_contents": "I hit an issue with window aggregate costing while experimenting with\nproviding a count of the full match along side a limited result set.\nSeems that the window aggregate node doesn't take into account that it\nhas to consume the whole input before outputting the first row. When\nthis is combined with a limit, the resulting cost estimate is wildly\nunderestimated, leading to suboptimal plans.\n\nIs this a known issue? I couldn't find anything referring to this on\nthe mailing list or todo.\n\nCode to reproduce follows:\n\nants=# CREATE TABLE test (a int, b int);\nCREATE TABLE\nants=# INSERT INTO test (a,b) SELECT random()*1e6, random()*1e6 FROM\ngenerate_series(1,1000000);\nINSERT 0 1000000\nants=# CREATE INDEX a_idx ON test (a);\nCREATE INDEX\nants=# CREATE INDEX b_idx ON test (b);\nCREATE INDEX\nants=# ANALYZE test;\nANALYZE\nants=# EXPLAIN ANALYZE SELECT *, COUNT(*) OVER () FROM test WHERE a <\n2500 ORDER BY b LIMIT 10;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..195.31 rows=10 width=8) (actual\ntime=728.325..728.339 rows=10 loops=1)\n -> WindowAgg (cost=0.00..46209.93 rows=2366 width=8) (actual\ntime=728.324..728.337 rows=10 loops=1)\n -> Index Scan using b_idx on test (cost=0.00..46180.36\nrows=2366 width=8) (actual time=0.334..727.221 rows=2512 loops=1)\n Filter: (a < 2500)\n Total runtime: 728.401 ms\n(5 rows)\n\nants=# SET enable_indexscan = off;\nSET\nants=# EXPLAIN ANALYZE SELECT *, COUNT(*) OVER () FROM test WHERE a <\n2500 ORDER BY b LIMIT 10;\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3986.82..3986.85 rows=10 width=8) (actual\ntime=7.186..7.189 rows=10 loops=1)\n -> Sort (cost=3986.82..3992.74 rows=2366 width=8) (actual\ntime=7.185..7.187 rows=10 loops=1)\n Sort Key: b\n Sort Method: top-N heapsort Memory: 25kB\n -> WindowAgg (cost=46.70..3935.69 rows=2366 width=8)\n(actual time=4.181..6.508 rows=2512 loops=1)\n -> Bitmap Heap Scan on test (cost=46.70..3906.12\nrows=2366 width=8) (actual time=0.933..3.555 rows=2512 loops=1)\n Recheck Cond: (a < 2500)\n -> Bitmap Index Scan on a_idx (cost=0.00..46.10\nrows=2366 width=0) (actual time=0.512..0.512 rows=2512 loops=1)\n Index Cond: (a < 2500)\n Total runtime: 7.228 ms\n(10 rows)\n",
"msg_date": "Wed, 13 Oct 2010 19:57:44 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bogus startup cost for WindowAgg"
},
{
"msg_contents": "Ants Aasma <[email protected]> writes:\n> Seems that the window aggregate node doesn't take into account that it\n> has to consume the whole input before outputting the first row.\n\nWell, the reason it doesn't assume that is it's not true ;-). In this\nparticular case it's true, but more generally you only have to read the\ncurrent input partition, and often not even all of that.\n\nI'm not sure offhand how much intelligence would have to be added to\nmake a reasonable estimate of the effects of having to read ahead of the\ncurrent input row, but it's probably not trivial. We haven't spent much\ntime at all yet on creating a realistic cost model for WindowAgg...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Oct 2010 13:30:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bogus startup cost for WindowAgg "
},
{
"msg_contents": "Ants Aasma wrote:\n> I hit an issue with window aggregate costing while experimenting with\n> providing a count of the full match along side a limited result set.\n> Seems that the window aggregate node doesn't take into account that it\n> has to consume the whole input before outputting the first row. When\n> this is combined with a limit, the resulting cost estimate is wildly\n> underestimated, leading to suboptimal plans.\n>\n> Is this a known issue? I couldn't find anything referring to this on\n> the mailing list or todo.\n>\n> \nWhat is your histogram size? That's defined by the \ndefault_statistics_target in your postgresql.conf.\nCheck the column histograms like this:\n\n news=> select attname,array_length(most_common_vals,1)\n from pg_stats\n where tablename='moreover_documents_y2010m09';\n attname | array_length\n ----------------------+--------------\n document_id | \n dre_reference | \n headline | 1024\n author | 212\n url | \n rank | 59\n content | 1024\n stories_like_this | \n internet_web_site_id | 1024\n harvest_time | 1024\n valid_time | 1024\n keyword | 95\n article_id | \n media_type | 5\n source_type | 1\n created_at | 1024\n autonomy_fed_at | 1024\n language | 37\n (18 rows)\n\n news=> show default_statistics_target;\n default_statistics_target\n ---------------------------\n 1024\n (1 row)\n\nYou will see that for most of the columns, the length of the histogram \narray corresponds to the value of the default_statistics_target \nparameter. For those that are smaller, the size is the total number of \nvalues in the column in the sample taken by the \"analyze\" command. The \nlonger histogram, the better plan. In this case, the size does matter. \nNote that there are no histograms for the document_id and dre_reference \ncolumns. Those are the primary and unique keys, the optimizer can easily \nguess the distribution of values.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 13 Oct 2010 15:35:02 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bogus startup cost for WindowAgg"
},
{
"msg_contents": "Ants Aasma wrote:\n> I hit an issue with window aggregate costing while experimenting with\n> providing a count of the full match along side a limited result set.\n> Seems that the window aggregate node doesn't take into account that it\n> has to consume the whole input before outputting the first row. When\n> this is combined with a limit, the resulting cost estimate is wildly\n> underestimated, leading to suboptimal plans.\n> \nWhat is your histogram size? That's defined by the\ndefault_statistics_target in your postgresql.conf.\nCheck the column histograms like this:\n\n news=> select attname,array_length(most_common_vals,1)\n from pg_stats\n where tablename='moreover_documents_y2010m09';\n attname | array_length\n ----------------------+--------------\n document_id | \n dre_reference | \n headline | 1024\n author | 212\n url | \n rank | 59\n content | 1024\n stories_like_this | \n internet_web_site_id | 1024\n harvest_time | 1024\n valid_time | 1024\n keyword | 95\n article_id | \n media_type | 5\n source_type | 1\n created_at | 1024\n autonomy_fed_at | 1024\n language | 37\n (18 rows)\n\n news=> show default_statistics_target;\n default_statistics_target\n ---------------------------\n 1024\n (1 row)\n\nYou will see that for most of the columns, the length of the histogram\narray corresponds to the value of the default_statistics_target\nparameter. For those that are smaller, the size is the total number of\nvalues in the column in the sample taken by the \"analyze\" command. The\nlonger histogram, the better plan. In this case, the size does matter.\nNote that there are no histograms for the document_id and dre_reference\ncolumns. Those are the primary and unique keys, the optimizer can easily\nguess the distribution of values.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 13 Oct 2010 18:28:58 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bogus startup cost for WindowAgg"
},
{
"msg_contents": "On Wed, Oct 13, 2010 at 10:35 PM, Mladen Gogala\n<[email protected]> wrote:\n> You will see that for most of the columns, the length of the histogram array\n> corresponds to the value of the default_statistics_target parameter. For\n> those that are smaller, the size is the total number of values in the column\n> in the sample taken by the \"analyze\" command. The longer histogram, the\n> better plan. In this case, the size does matter.\n\nThe issue here isn't that the statistics are off. The issue is, as Tom\nsaid, that the optimizer doesn't consider them for the cost model of\nthe window aggregate. The trivial case I put forward wouldn't be too\nhard to cover - if there's no partitioning of the window and the frame\nis over the full partition, the startup cost should be nearly the same\nas the full cost. But outside of the trick I tried, I'm not sure if\nthe trivial case matters much. I can also see how the estimation gets\npretty hairy when partitioning, frames and real window functions come\ninto play.\n\nOne idea would be to cost three different cases. If the aggregate\nneeds to read ahead some most likely constant number of rows, i.e. is\nnot using an unbounded following frame, leave the startup cost as is.\nIf there is partitioning, estimate the number of groups produced by\nthe partitioning and add one n-th of the difference between startup\nand total cost. Otherwise, if the frame is to the end of the partition\nand there is no partitioning, set the startup cost equal to total\ncost, or in terms of the previous case, n=1. I don't know how accurate\nestimating the number of groups would be, or even if it is feasible to\ndo it. If those assumptions hold, then it seems to me that this method\nshould at-least cover any large O(n) effects.\n",
"msg_date": "Thu, 14 Oct 2010 01:39:28 +0300",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bogus startup cost for WindowAgg"
}
] |
[
{
"msg_contents": "Hello\n\nI have an application hosted on Heroku. They use postgres. It's more or less\nabstracted away, but I can get some performance data from New Relic. For the\nmost part, performance is ok, but now and then some queries take a few\nseconds, and spike up to 15 or even 16 seconds! Ouch!\n\nThis is the most detailed information I could get form New Relic. Do you\nhave any suggestions how I could improve the performance?\n\nQUERY PLANLimit (cost=26.49..1893.46 rows=1 width=4) QUERY PLAN -> Unique\n(cost=26.49..44833.82 rows=24 width=4) QUERY PLAN -> Nested Loop\n(cost=26.49..44833.81 rows=24 width=4) QUERY PLAN -> Merge Join\n(cost=26.49..44532.99 rows=4773 width=8) QUERY PLAN Merge Cond: (songs.id =\nartists_songs.song_id) QUERY PLAN -> Index Scan using songs_pkey on songs\n(cost=0.00..25219.30 rows=4773 width=4) QUERY PLAN Filter:\n(lower((name)::text) = 'thirteen'::text) QUERY PLAN -> Index Scan using\nindex_artists_songs_on_song_id on artists_songs (cost=0.00..18822.04\nrows=960465 width=8) QUERY PLAN -> Index Scan using artists_pkey on artists\n(cost=0.00..0.06 rows=1 width=4) QUERY PLAN Index Cond: (artists.id =\nartists_songs.artist_id) QUERY PLAN Filter: (lower((artists.name)::text) =\n'red mountain church'::text)\n\n\n-- \n=========================================\nBrandon Casci\nLoudcaster\nhttp://loudcaster.com\n=========================================\n\nHelloI have an application hosted on Heroku. They use postgres. It's more or less abstracted away, but I can get some performance data from New Relic. For the most part, performance is ok, but now and then some queries take a few seconds, and spike up to 15 or even 16 seconds! Ouch!\nThis is the most detailed information I could get form New Relic. Do you have any suggestions how I could improve the performance? QUERY PLANLimit (cost=26.49..1893.46 \nrows=1 width=4)\nQUERY PLAN -> Unique (cost=26.49..44833.82 rows=24 width=4)\nQUERY PLAN -> Nested Loop (cost=26.49..44833.81 rows=24 \nwidth=4)\nQUERY PLAN -> Merge Join (cost=26.49..44532.99 \nrows=4773 width=8)\nQUERY PLAN Merge Cond: (songs.id = \nartists_songs.song_id)\nQUERY PLAN -> Index Scan using songs_pkey on \nsongs (cost=0.00..25219.30 rows=4773 width=4)\nQUERY PLAN Filter: (lower((name)::text) = \n'thirteen'::text)\nQUERY PLAN -> Index Scan using \nindex_artists_songs_on_song_id on artists_songs (cost=0.00..18822.04 \nrows=960465 width=8)\nQUERY PLAN -> Index Scan using artists_pkey on artists \n (cost=0.00..0.06 rows=1 width=4)\nQUERY PLAN Index Cond: (artists.id = \nartists_songs.artist_id)\nQUERY PLAN Filter: (lower((artists.name)::text) = \n'red mountain church'::text)\n-- =========================================Brandon CasciLoudcasterhttp://loudcaster.com=========================================",
"msg_date": "Wed, 13 Oct 2010 17:18:10 -0400",
"msg_from": "Brandon Casci <[email protected]>",
"msg_from_op": true,
"msg_subject": "help with understanding EXPLAIN"
},
{
"msg_contents": "On 10/14/2010 05:18 AM, Brandon Casci wrote:\n> Hello\n>\n> I have an application hosted on Heroku. They use postgres. It's more or\n> less abstracted away, but I can get some performance data from New\n> Relic. For the most part, performance is ok, but now and then some\n> queries take a few seconds, and spike up to 15 or even 16 seconds! Ouch!\n>\n> This is the most detailed information I could get form New Relic. Do you\n> have any suggestions how I could improve the performance?\n\nThat query plan is nearly illegible, and it contains no information \nabout how the query actually executed or how many rows were fetched from \nvarious tables. I can't personally see any way to give you useful advice \nbased on that information.\n\nI know you're going through a third party, but do think you could get \nthem to post the original form, unmangled by mail clients and whatever \nelse they've done to it, here? :\n\nhttp://explain.depesz.com\n\nGet them to use EXPLAIN ANALYZE rather than plain EXPLAIN, and to \nprovide the text of the query that generated the plan.\n\n--\nCraig Ringer\n",
"msg_date": "Sat, 16 Oct 2010 09:17:59 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] help with understanding EXPLAIN"
}
] |
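A sketch of what Craig is asking for. The query text below is only guessed from the table, index, and filter names visible in the posted plan, so treat it as an assumption rather than the application's real SQL; EXPLAIN ANALYZE executes the query and adds actual row counts and timings to every node:

    EXPLAIN ANALYZE
    SELECT DISTINCT artists.id
    FROM artists
    JOIN artists_songs ON artists_songs.artist_id = artists.id
    JOIN songs ON songs.id = artists_songs.song_id
    WHERE lower(songs.name) = 'thirteen'
      AND lower(artists.name) = 'red mountain church'
    LIMIT 1;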
[
{
"msg_contents": "Hello\n\nI have an application hosted on Heroku. They use postgres. It's more or less\nabstracted away, but I can get some performance data from New Relic. For the\nmost part, performance is ok, but now and then some queries take a few\nseconds, and spike up to 15 or even 16 seconds! Ouch!\n\nThis is the most detailed information I could get form New Relic. Do you\nhave any suggestions how I could improve the performance?\n\nQUERY PLANLimit (cost=26.49..1893.46 rows=1 width=4) QUERY PLAN -> Unique\n(cost=26.49..44833.82 rows=24 width=4) QUERY PLAN -> Nested Loop\n(cost=26.49..44833.81 rows=24 width=4) QUERY PLAN -> Merge Join\n(cost=26.49..44532.99 rows=4773 width=8) QUERY PLAN Merge Cond: (songs.id =\nartists_songs.song_id) QUERY PLAN -> Index Scan using songs_pkey on songs\n(cost=0.00..25219.30 rows=4773 width=4) QUERY PLAN Filter:\n(lower((name)::text) = 'thirteen'::text) QUERY PLAN -> Index Scan using\nindex_artists_songs_on_song_id on artists_songs (cost=0.00..18822.04\nrows=960465 width=8) QUERY PLAN -> Index Scan using artists_pkey on artists\n(cost=0.00..0.06 rows=1 width=4) QUERY PLAN Index Cond: (artists.id =\nartists_songs.artist_id) QUERY PLAN Filter: (lower((artists.name)::text) =\n'red mountain church'::text)\n\nThanks!\n\n-- \n=========================================\nBrandon Casci\nLoudcaster\nhttp://loudcaster.com\n=========================================\n\nHelloI have an application hosted on Heroku. They use postgres. \nIt's more or less abstracted away, but I can get some performance data \nfrom New Relic. For the most part, performance is ok, but now and then \nsome queries take a few seconds, and spike up to 15 or even 16 seconds! \nOuch!\nThis is the most detailed information I could get form New Relic. Do\n you have any suggestions how I could improve the performance? QUERY PLANLimit (cost=26.49..1893.46 \nrows=1 width=4)\nQUERY PLAN -> Unique (cost=26.49..44833.82 rows=24 width=4)\nQUERY PLAN -> Nested Loop (cost=26.49..44833.81 rows=24 \nwidth=4)\nQUERY PLAN -> Merge Join (cost=26.49..44532.99 \nrows=4773 width=8)\nQUERY PLAN Merge Cond: (songs.id = \nartists_songs.song_id)\nQUERY PLAN -> Index Scan using songs_pkey on \nsongs (cost=0.00..25219.30 rows=4773 width=4)\nQUERY PLAN Filter: (lower((name)::text) = \n'thirteen'::text)\nQUERY PLAN -> Index Scan using \nindex_artists_songs_on_song_id on artists_songs (cost=0.00..18822.04 \nrows=960465 width=8)\nQUERY PLAN -> Index Scan using artists_pkey on artists \n (cost=0.00..0.06 rows=1 width=4)\nQUERY PLAN Index Cond: (artists.id = \nartists_songs.artist_id)\nQUERY PLAN Filter: (lower((artists.name)::text) = \n'red mountain church'::text)\nThanks!-- =========================================Brandon CasciLoudcasterhttp://loudcaster.com=========================================",
"msg_date": "Wed, 13 Oct 2010 18:06:44 -0400",
"msg_from": "Brandon Casci <[email protected]>",
"msg_from_op": true,
"msg_subject": "help with understanding EXPLAIN and boosting performance"
},
{
"msg_contents": "Brandon Casci <[email protected]> writes:\n> I have an application hosted on Heroku. They use postgres. It's more or less\n> abstracted away, but I can get some performance data from New Relic. For the\n> most part, performance is ok, but now and then some queries take a few\n> seconds, and spike up to 15 or even 16 seconds! Ouch!\n\nThe particular query you're showing here doesn't look terribly\nexpensive. Are you sure this is one that took that long?\n\nIf you're seeing identical queries take significantly different times,\nI'd wonder about what else is happening on the server. The most obvious\nexplanation for that type of behavior is that everything is swapped into\nRAM when it's fast, but has to be read from disk when it's slow. If\nthat's what's happening you should consider buying more RAM or\noffloading some activities to other machines, so that there's not so\nmuch competition for memory space.\n\nIf specific queries are consistently slow then EXPLAIN might give some\nuseful info about those. It's unlikely to tell you much about\nnon-reproducible slowdowns, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2010 14:35:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help with understanding EXPLAIN and boosting performance "
}
] |
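Independent of caching effects, the posted plan filters on lower((name)::text) for both songs and artists, and a plain b-tree index on name cannot serve such a predicate. A hedged sketch of expression indexes that could, assuming the tables and columns are named as they appear in the plan:

    -- expression indexes that predicates like lower(name) = '...' can use
    CREATE INDEX songs_lower_name_idx ON songs (lower(name));
    CREATE INDEX artists_lower_name_idx ON artists (lower(name));
    ANALYZE songs;
    ANALYZE artists;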
[
{
"msg_contents": "Hello people,\n\nI'm having trouble to persuade index scan to check all of the conditions \nI specify _inside_ index cond. That is, _some_ condition always get \npushed out of index cond and applied later (which will often result, for \nmy real table contents, in too many unwanted rows initially hit by index \nscan and hence randomly slow queries)\nAn index with all relevant columns does exist of course.\n\nHere goes an example.\n\ncreate table foo (\n id serial primary key,\n rec_time timestamp with time zone DEFAULT now(),\n some_value integer,\n some_data text\n);\nCREATE INDEX foo_test ON foo (id, rec_time, some_value);\nset enable_seqscan = false;\nset enable_bitmapscan = true;\n\nexplain select id from foo where true\n and rec_time > '2010-01-01 22:00:06'\n --and rec_time < '2010-10-14 23:59'\n and some_value in (1, 2)\n and id > 123\n\nThis one works perfectly as I want it (and note \"and rec_time < ... \" \ncondition is commented out):\n\nBitmap Heap Scan on foo (cost=13.18..17.19 rows=1 width=4)\n Recheck Cond: ((id > 123) AND (rec_time > '2010-01-01 \n22:00:06+03'::timestamp with time zone) AND (some_value = ANY \n('{1,2}'::integer[])))\n -> Bitmap Index Scan on foo_test (cost=0.00..13.18 rows=1 width=0)\n Index Cond: ((id > 123) AND (rec_time > '2010-01-01 \n22:00:06+03'::timestamp with time zone) AND (some_value = ANY \n('{1,2}'::integer[])))\"\n\nNow, as soon as I enable \"and rec_time < ... \" condition, I get the\nfollowing:\n\nexplain select id from foo where true\n and rec_time > '2010-01-01 22:00:06'\n and rec_time < '2010-10-14 23:59'\n and some_value in (1, 2)\n and id > 123\n\nBitmap Heap Scan on foo (cost=8.59..13.94 rows=1 width=4)\n Recheck Cond: ((id > 123) AND (rec_time > '2010-01-01 \n22:00:06+03'::timestamp with time zone) AND (rec_time < '2010-10-14 \n23:59:00+04'::timestamp with time zone))\n Filter: (some_value = ANY ('{1,2}'::integer[]))\n -> Bitmap Index Scan on foo_test (cost=0.00..8.59 rows=2 width=0)\n Index Cond: ((id > 123) AND (rec_time > '2010-01-01 \n22:00:06+03'::timestamp with time zone) AND (rec_time < '2010-10-14 \n23:59:00+04'::timestamp with time zone))\n\nSo, \"in (1, 2)\" condition is not in Index Cond anymore! Why is that? How \ncan I push it back?\n\nSELECT version();\nPostgreSQL 8.3.1, compiled by Visual C++ build 1400\nbut the behaviour seems exactly the same in 9.0 (just checked it briefly).\n\nThank you!\nPlease CC me, I'm not on the list.\n\nNikolai\n",
"msg_date": "Thu, 14 Oct 2010 19:49:33 +0400",
"msg_from": "Nikolai Zhubr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index scan / Index cond limitation or ?"
},
{
"msg_contents": "Nikolai Zhubr <[email protected]> writes:\n> So, \"in (1, 2)\" condition is not in Index Cond anymore! Why is that? How \n> can I push it back?\n\nIt thinks the indexscan condition is sufficiently selective already.\nAn = ANY condition like that will force multiple index searches,\none for each of the OR'd possibilities, so it's far from \"free\" to add\nit to the index condition. The planner doesn't think it's worth it.\nPerhaps on your real query it is, but there's not much point in\ndebating about the behavior on this toy table; without realistic\ntable sizes and up-to-date stats it's impossible to say whether that\nchoice is correct or not.\n\n> SELECT version();\n> PostgreSQL 8.3.1, compiled by Visual C++ build 1400\n\nYou really, really, really ought to be running 8.3.something-newer.\nWe didn't put out the last 11 8.3.x bugfix updates just because\nwe didn't have anything better to do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Oct 2010 14:29:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan / Index cond limitation or ? "
},
{
"msg_contents": "15.10.2010 22:29, Tom Lane:\n> Nikolai Zhubr<[email protected]> writes:\n>> So, \"in (1, 2)\" condition is not in Index Cond anymore! Why is that? How\n>> can I push it back?\n>\n> It thinks the indexscan condition is sufficiently selective already.\n> An = ANY condition like that will force multiple index searches,\n> one for each of the OR'd possibilities, so it's far from \"free\" to add\n> it to the index condition. The planner doesn't think it's worth it.\n\nAha, ok. It makes sense then. Is this specific case (=ANY in index cond) \ndescribed somewhere with reasonable detail? I always try to RTFM first \nand most of the time I can find pretty good hints in the regular manual \nalready (sufficient as a starting point at least) but this specific \ntopic seems to be somewhat mysterious.\n\n> Perhaps on your real query it is, but there's not much point in\n> debating about the behavior on this toy table; without realistic\n> table sizes and up-to-date stats it's impossible to say whether that\n> choice is correct or not.\n>\n>> SELECT version();\n>> PostgreSQL 8.3.1, compiled by Visual C++ build 1400\n>\n> You really, really, really ought to be running 8.3.something-newer.\n> We didn't put out the last 11 8.3.x bugfix updates just because\n> we didn't have anything better to do.\n\nYes, I know, and I do appreciate the efforts of postgresql devels to \ncreate updates for older versions too.\nThis server is internal-only (so it does not see any real world yet). \nAnyway, I hope to update everything to 9.0.1 soon.\n\nThank you!\n\nNikolai\n>\n> \t\t\tregards, tom lane\n>\n>\n\n",
"msg_date": "Sat, 16 Oct 2010 01:00:02 +0400",
"msg_from": "Nikolai Zhubr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index scan / Index cond limitation or ?"
}
] |
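One way to make each = ANY branch a single, tight descent on the toy table above is to lead the index with the equality column; whether this actually beats the planner's current choice depends entirely on the real data distribution, so this is only a sketch to compare plans with:

    -- equality column first, then the range columns
    CREATE INDEX foo_val_time_id ON foo (some_value, rec_time, id);

    EXPLAIN
    SELECT id FROM foo
    WHERE rec_time > '2010-01-01 22:00:06'
      AND rec_time < '2010-10-14 23:59'
      AND some_value IN (1, 2)
      AND id > 123;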
[
{
"msg_contents": "We are in the process of testing migration of our oracle data warehouse\nover to postgres. A potential showstopper are full table scans on our\nmembers table. We can't function on postgres effectively unless index\nscans are employed. I'm thinking I don't have something set correctly\nin my postgresql.conf file, but I'm not sure what.\n\nThis table has approximately 300million rows.\n\nVersion:\nSELECT version();\n\nversion \n------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit\n\nWe have 4 quad-core processors and 32GB of RAM. The below query uses\nthe members_sorted_idx_001 index in oracle, but in postgres, the\noptimizer chooses a sequential scan.\n\nexplain analyze create table tmp_srcmem_emws1\nas\nselect emailaddress, websiteid\n from members\n where emailok = 1\n and emailbounced = 0;\n QUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on members (cost=0.00..14137154.64 rows=238177981 width=29)\n(actual time=0.052..685834.785 rows=236660930 loops=1)\n Filter: ((emailok = 1::numeric) AND (emailbounced = 0::numeric))\n Total runtime: 850306.220 ms\n(3 rows)\n\nshow shared_buffers ;\n shared_buffers \n----------------\n 7680MB\n(1 row)\n\nshow effective_cache_size ;\n effective_cache_size \n----------------------\n 22GB\n(1 row)\n\nshow work_mem ;\n work_mem \n----------\n 768MB\n(1 row)\n\nshow enable_seqscan ;\n enable_seqscan \n----------------\n on\n(1 row)\n\nBelow are the data definitions for the table/indexes in question:\n\n\\d members\n Table \"members\"\n Column | Type | Modifiers \n---------------------+-----------------------------+-----------\n memberid | numeric | not null\n firstname | character varying(50) | \n lastname | character varying(50) | \n emailaddress | character varying(50) | \n password | character varying(50) | \n address1 | character varying(50) | \n address2 | character varying(50) | \n city | character varying(50) | \n statecode | character varying(50) | \n zipcode | character varying(50) | \n birthdate | date | \n emailok | numeric(2,0) | \n gender | character varying(1) | \n addeddate | timestamp without time zone | \n emailbounced | numeric(2,0) | \n changedate | timestamp without time zone | \n optoutsource | character varying(100) | \n websiteid | numeric | \n promotionid | numeric | \n sourceid | numeric | \n siteid | character varying(64) | \n srcwebsiteid | numeric | \n homephone | character varying(20) | \n homeareacode | character varying(10) | \n campaignid | numeric | \n srcmemberid | numeric | \n optoutdate | date | \n regcomplete | numeric(1,0) | \n regcompletesourceid | numeric | \n ipaddress | character varying(25) | \n pageid | numeric | \n streetaddressstatus | numeric(1,0) | \n middlename | character varying(50) | \n optinprechecked | numeric(1,0) | \n optinposition | numeric | \n homephonestatus | numeric | \n addeddate_id | numeric | \n changedate_id | numeric | \n rpmindex | numeric | \n optmode | numeric(1,0) | \n countryid | numeric | \n confirmoptin | numeric(2,0) | \n bouncedate | date | \n memberageid | numeric | \n sourceid2 | numeric | \n remoteuserid | character varying(50) | \n goal | numeric(1,0) | \n flowdepth | numeric | \n pagetype | numeric | \n savepassword | character varying(50) | \n customerprofileid | numeric | 
\nIndexes:\n \"email_website_unq\" UNIQUE, btree (emailaddress, websiteid),\ntablespace \"members_idx\"\n \"member_addeddateid_idx\" btree (addeddate_id), tablespace\n\"members_idx\"\n \"member_changedateid_idx\" btree (changedate_id), tablespace\n\"members_idx\"\n \"members_fdate_idx\" btree (to_char_year_month(addeddate)),\ntablespace \"esave_idx\"\n \"members_memberid_idx\" btree (memberid), tablespace \"members_idx\"\n \"members_mid_emailok_idx\" btree (memberid, emailaddress, zipcode,\nfirstname, emailok), tablespace \"members_idx\"\n \"members_sorted_idx_001\" btree (websiteid, emailok, emailbounced,\naddeddate, memberid, zipcode, statecode, emailaddress), tablespace\n\"members_idx\"\n \"members_src_idx\" btree (websiteid, emailbounced, sourceid),\ntablespace \"members_idx\"\n \"members_wid_idx\" btree (websiteid), tablespace \"members_idx\"\n\nselect tablename, indexname, tablespace, indexdef from pg_indexes where\ntablename = 'members';\n-[ RECORD\n1 ]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | members_fdate_idx\ntablespace | esave_idx\nindexdef | CREATE INDEX members_fdate_idx ON members USING btree\n(to_char_year_month(addeddate))\n-[ RECORD\n2 ]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | member_changedateid_idx\ntablespace | members_idx\nindexdef | CREATE INDEX member_changedateid_idx ON members USING btree\n(changedate_id)\n-[ RECORD\n3 ]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | member_addeddateid_idx\ntablespace | members_idx\nindexdef | CREATE INDEX member_addeddateid_idx ON members USING btree\n(addeddate_id)\n-[ RECORD\n4 ]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | members_wid_idx\ntablespace | members_idx\nindexdef | CREATE INDEX members_wid_idx ON members USING btree\n(websiteid)\n-[ RECORD\n5 ]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | members_src_idx\ntablespace | members_idx\nindexdef | CREATE INDEX members_src_idx ON members USING btree\n(websiteid, emailbounced, sourceid)\n-[ RECORD\n6 ]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | members_sorted_idx_001\ntablespace | members_idx\nindexdef | CREATE INDEX members_sorted_idx_001 ON members USING btree\n(websiteid, emailok, emailbounced, addeddate, memberid, zipcode,\nstatecode, emailaddress)\n-[ RECORD\n7 ]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | members_mid_emailok_idx\ntablespace | members_idx\nindexdef | CREATE INDEX members_mid_emailok_idx ON members USING btree\n(memberid, emailaddress, zipcode, firstname, emailok)\n-[ RECORD\n8 
]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | members_memberid_idx\ntablespace | members_idx\nindexdef | CREATE INDEX members_memberid_idx ON members USING btree\n(memberid)\n-[ RECORD\n9 ]----------------------------------------------------------------------------------------------------------------------------------------------------\ntablename | members\nindexname | email_website_unq\ntablespace | members_idx\nindexdef | CREATE UNIQUE INDEX email_website_unq ON members USING\nbtree (emailaddress, websiteid)\n\n\nThis table has also been vacuumed analyzed as well:\n\nselect * from pg_stat_all_tables where relname = 'members';\n-[ RECORD 1 ]----+------------------------------\nrelid | 3112786\nschemaname | xxxxx\nrelname | members\nseq_scan | 298\nseq_tup_read | 42791828896\nidx_scan | 31396925\nidx_tup_fetch | 1083796963\nn_tup_ins | 291308316\nn_tup_upd | 0\nn_tup_del | 4188020\nn_tup_hot_upd | 0\nn_live_tup | 285364632\nn_dead_tup | 109658\nlast_vacuum | 2010-10-12 20:26:01.227393-04\nlast_autovacuum | \nlast_analyze | 2010-10-12 20:28:01.105656-04\nlast_autoanalyze | 2010-09-16 20:50:00.712418-04\n\n\n",
"msg_date": "Thu, 14 Oct 2010 15:43:04 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "oracle to psql migration - slow query in postgres"
},
{
"msg_contents": "Just my take on this.\n\nThe first thing I'd do is think real hard about whether you really\nreally want 'numeric' instead of boolean, smallint, or integer. The\nsecond thing is that none of your indices (which specify a whole bunch\nof fields, by the way) have only just emailok, emailbounced, or only\nthe pair of them. Without knowing the needs of your app, I would\nreconsider your index choices and go with fewer columns per index.\n\nFor this particular query I would think either two indexes (depending\non the cardinality of the data, one for each of emailok, emailbounced)\nor one index (containing both emailok, emailbounced) would make quite\na bit of difference. Consider creating the indexes using a WITH\nclause, for example:\n\nCREATE INDEX members_just_an_example_idx ON members (emailok,\nemailbounced) WHERE emailok = 1 AND emailbounced = 0;\n\nObviously that index is only useful in situations where both fields\nare specified with those values. Furthermore, if the result is such\nthat a very high percentage of the table has those conditions a\nsequential scan is going to be cheaper, anyway.\n\n-- \nJon\n",
"msg_date": "Thu, 14 Oct 2010 15:10:01 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": "On Thu, Oct 14, 2010 at 12:43 PM, Tony Capobianco\n<[email protected]> wrote:\n> We have 4 quad-core processors and 32GB of RAM. The below query uses\n> the members_sorted_idx_001 index in oracle, but in postgres, the\n> optimizer chooses a sequential scan.\n>\n> explain analyze create table tmp_srcmem_emws1\n> as\n> select emailaddress, websiteid\n> from members\n> where emailok = 1\n> and emailbounced = 0;\n\n\nMaybe a couple indexes to try:\n\ncreate index members_emailok_emailbounced_idx on members (emailok,emailbounced);\n\nor a functional index (will likely be smaller, depending on the\ncontents of your table):\ncreate index members_emailok_emailbounced_idx on members\n(emailok,emailbounced) where emailok = 1 and emailbounced = 0; -- if\nyou use that combination of 1 and 0 regularly\n",
"msg_date": "Thu, 14 Oct 2010 13:23:12 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": "On 10/14/10 21:43, Tony Capobianco wrote:\n\n\n> We have 4 quad-core processors and 32GB of RAM. The below query uses\n> the members_sorted_idx_001 index in oracle, but in postgres, the\n> optimizer chooses a sequential scan.\n>\n> explain analyze create table tmp_srcmem_emws1\n> as\n> select emailaddress, websiteid\n> from members\n> where emailok = 1\n> and emailbounced = 0;\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on members (cost=0.00..14137154.64 rows=238177981 width=29)\n> (actual time=0.052..685834.785 rows=236660930 loops=1)\n> Filter: ((emailok = 1::numeric) AND (emailbounced = 0::numeric))\n> Total runtime: 850306.220 ms\n> (3 rows)\n\n> Indexes:\n> \"email_website_unq\" UNIQUE, btree (emailaddress, websiteid),\n> tablespace \"members_idx\"\n> \"member_addeddateid_idx\" btree (addeddate_id), tablespace\n> \"members_idx\"\n> \"member_changedateid_idx\" btree (changedate_id), tablespace\n> \"members_idx\"\n> \"members_fdate_idx\" btree (to_char_year_month(addeddate)),\n> tablespace \"esave_idx\"\n> \"members_memberid_idx\" btree (memberid), tablespace \"members_idx\"\n> \"members_mid_emailok_idx\" btree (memberid, emailaddress, zipcode,\n> firstname, emailok), tablespace \"members_idx\"\n> \"members_sorted_idx_001\" btree (websiteid, emailok, emailbounced,\n> addeddate, memberid, zipcode, statecode, emailaddress), tablespace\n> \"members_idx\"\n> \"members_src_idx\" btree (websiteid, emailbounced, sourceid),\n> tablespace \"members_idx\"\n> \"members_wid_idx\" btree (websiteid), tablespace \"members_idx\"\n\nPostgreSQL doesn't fetch data directly from indexes, so there is no way \nfor it to reasonably use an index declared like:\n\n\"members_sorted_idx_001\" btree (websiteid, emailok, emailbounced, \naddeddate, memberid, zipcode, statecode, emailaddress)\n\nYou need a direct index on the fields you are using in your query, i.e. \nan index on (emailok, emailbounced).\n\nOTOH, those columns look boolean-like. It depends on what your data set \nis, but if the majority of records contain (emailok=1 and \nemailbounced=0) an index may not help you much.\n\n",
"msg_date": "Thu, 14 Oct 2010 22:32:09 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": "2010/10/14 Tony Capobianco <[email protected]>:\n> We are in the process of testing migration of our oracle data warehouse\n> over to postgres. A potential showstopper are full table scans on our\n> members table. We can't function on postgres effectively unless index\n> scans are employed. I'm thinking I don't have something set correctly\n> in my postgresql.conf file, but I'm not sure what.\n>\n> This table has approximately 300million rows.\n\nand your query grab rows=236 660 930 of them. An index might be\nuseless in this situation.\n\n>\n> Version:\n> SELECT version();\n>\n> version\n> ------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)\n> 4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit\n>\n> We have 4 quad-core processors and 32GB of RAM. The below query uses\n> the members_sorted_idx_001 index in oracle, but in postgres, the\n> optimizer chooses a sequential scan.\n>\n> explain analyze create table tmp_srcmem_emws1\n> as\n> select emailaddress, websiteid\n> from members\n> where emailok = 1\n> and emailbounced = 0;\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on members (cost=0.00..14137154.64 rows=238177981 width=29)\n> (actual time=0.052..685834.785 rows=236660930 loops=1)\n> Filter: ((emailok = 1::numeric) AND (emailbounced = 0::numeric))\n> Total runtime: 850306.220 ms\n> (3 rows)\n>\n> show shared_buffers ;\n> shared_buffers\n> ----------------\n> 7680MB\n> (1 row)\n>\n> show effective_cache_size ;\n> effective_cache_size\n> ----------------------\n> 22GB\n> (1 row)\n>\n> show work_mem ;\n> work_mem\n> ----------\n> 768MB\n> (1 row)\n>\n> show enable_seqscan ;\n> enable_seqscan\n> ----------------\n> on\n> (1 row)\n>\n> Below are the data definitions for the table/indexes in question:\n>\n> \\d members\n> Table \"members\"\n> Column | Type | Modifiers\n> ---------------------+-----------------------------+-----------\n> memberid | numeric | not null\n> firstname | character varying(50) |\n> lastname | character varying(50) |\n> emailaddress | character varying(50) |\n> password | character varying(50) |\n> address1 | character varying(50) |\n> address2 | character varying(50) |\n> city | character varying(50) |\n> statecode | character varying(50) |\n> zipcode | character varying(50) |\n> birthdate | date |\n> emailok | numeric(2,0) |\n> gender | character varying(1) |\n> addeddate | timestamp without time zone |\n> emailbounced | numeric(2,0) |\n> changedate | timestamp without time zone |\n> optoutsource | character varying(100) |\n> websiteid | numeric |\n> promotionid | numeric |\n> sourceid | numeric |\n> siteid | character varying(64) |\n> srcwebsiteid | numeric |\n> homephone | character varying(20) |\n> homeareacode | character varying(10) |\n> campaignid | numeric |\n> srcmemberid | numeric |\n> optoutdate | date |\n> regcomplete | numeric(1,0) |\n> regcompletesourceid | numeric |\n> ipaddress | character varying(25) |\n> pageid | numeric |\n> streetaddressstatus | numeric(1,0) |\n> middlename | character varying(50) |\n> optinprechecked | numeric(1,0) |\n> optinposition | numeric |\n> homephonestatus | numeric |\n> addeddate_id | numeric |\n> changedate_id | numeric |\n> rpmindex | numeric |\n> optmode | numeric(1,0) |\n> countryid | numeric |\n> confirmoptin | numeric(2,0) |\n> bouncedate | date |\n> 
memberageid | numeric |\n> sourceid2 | numeric |\n> remoteuserid | character varying(50) |\n> goal | numeric(1,0) |\n> flowdepth | numeric |\n> pagetype | numeric |\n> savepassword | character varying(50) |\n> customerprofileid | numeric |\n> Indexes:\n> \"email_website_unq\" UNIQUE, btree (emailaddress, websiteid),\n> tablespace \"members_idx\"\n> \"member_addeddateid_idx\" btree (addeddate_id), tablespace\n> \"members_idx\"\n> \"member_changedateid_idx\" btree (changedate_id), tablespace\n> \"members_idx\"\n> \"members_fdate_idx\" btree (to_char_year_month(addeddate)),\n> tablespace \"esave_idx\"\n> \"members_memberid_idx\" btree (memberid), tablespace \"members_idx\"\n> \"members_mid_emailok_idx\" btree (memberid, emailaddress, zipcode,\n> firstname, emailok), tablespace \"members_idx\"\n> \"members_sorted_idx_001\" btree (websiteid, emailok, emailbounced,\n> addeddate, memberid, zipcode, statecode, emailaddress), tablespace\n> \"members_idx\"\n> \"members_src_idx\" btree (websiteid, emailbounced, sourceid),\n> tablespace \"members_idx\"\n> \"members_wid_idx\" btree (websiteid), tablespace \"members_idx\"\n>\n> select tablename, indexname, tablespace, indexdef from pg_indexes where\n> tablename = 'members';\n> -[ RECORD\n> 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | members_fdate_idx\n> tablespace | esave_idx\n> indexdef | CREATE INDEX members_fdate_idx ON members USING btree\n> (to_char_year_month(addeddate))\n> -[ RECORD\n> 2 ]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | member_changedateid_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX member_changedateid_idx ON members USING btree\n> (changedate_id)\n> -[ RECORD\n> 3 ]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | member_addeddateid_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX member_addeddateid_idx ON members USING btree\n> (addeddate_id)\n> -[ RECORD\n> 4 ]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | members_wid_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_wid_idx ON members USING btree\n> (websiteid)\n> -[ RECORD\n> 5 ]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | members_src_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_src_idx ON members USING btree\n> (websiteid, emailbounced, sourceid)\n> -[ RECORD\n> 6 ]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | members_sorted_idx_001\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_sorted_idx_001 ON members USING btree\n> (websiteid, emailok, emailbounced, addeddate, memberid, zipcode,\n> statecode, emailaddress)\n> -[ RECORD\n> 7 
]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | members_mid_emailok_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_mid_emailok_idx ON members USING btree\n> (memberid, emailaddress, zipcode, firstname, emailok)\n> -[ RECORD\n> 8 ]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | members_memberid_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_memberid_idx ON members USING btree\n> (memberid)\n> -[ RECORD\n> 9 ]----------------------------------------------------------------------------------------------------------------------------------------------------\n> tablename | members\n> indexname | email_website_unq\n> tablespace | members_idx\n> indexdef | CREATE UNIQUE INDEX email_website_unq ON members USING\n> btree (emailaddress, websiteid)\n>\n>\n> This table has also been vacuumed analyzed as well:\n>\n> select * from pg_stat_all_tables where relname = 'members';\n> -[ RECORD 1 ]----+------------------------------\n> relid | 3112786\n> schemaname | xxxxx\n> relname | members\n> seq_scan | 298\n> seq_tup_read | 42791828896\n> idx_scan | 31396925\n> idx_tup_fetch | 1083796963\n> n_tup_ins | 291308316\n> n_tup_upd | 0\n> n_tup_del | 4188020\n> n_tup_hot_upd | 0\n> n_live_tup | 285364632\n> n_dead_tup | 109658\n> last_vacuum | 2010-10-12 20:26:01.227393-04\n> last_autovacuum |\n> last_analyze | 2010-10-12 20:28:01.105656-04\n> last_autoanalyze | 2010-09-16 20:50:00.712418-04\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 14 Oct 2010 23:25:41 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": "\n> emailok | numeric(2,0) |\n\nNote that NUMERIC is meant for\n- really large numbers with lots of digits\n- or controlled precision and rounding (ie, order total isn't \n99.999999999999 $)\n\nAccordingly, NUMERIC is a lot slower in all operations, and uses a lot \nmore space, than all the other numeric types.\n\nI see many columns in your table that are declared as NUMERIC but should \nbe BOOLs, or SMALLINTs, or INTs, or BIGINTs.\n\nPerhaps Oracle handles these differently, I dunno.\n",
"msg_date": "Fri, 15 Oct 2010 00:50:20 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
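A sketch of the datatype change suggested above, on the assumption that emailok and emailbounced only ever hold 0 or 1. ALTER COLUMN ... TYPE rewrites the table, so on roughly 300 million rows it needs a maintenance window:

    -- flags stored as numeric(2,0) become smallint (boolean would also work)
    ALTER TABLE members
      ALTER COLUMN emailok      TYPE smallint USING emailok::smallint,
      ALTER COLUMN emailbounced TYPE smallint USING emailbounced::smallint;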
{
"msg_contents": " On 10/14/2010 4:10 PM, Jon Nelson wrote:\n> The first thing I'd do is think real hard about whether you really\n> really want 'numeric' instead of boolean, smallint, or integer. The\n> second thing is that none of your indices (which specify a whole bunch\n> of fields, by the way) have only just emailok, emailbounced, or only\n> the pair of them. Without knowing the needs of your app, I would\n> reconsider your index choices and go with fewer columns per index.\n>\nAlso, make sure that the statistics is good, that histograms are large \nenough and that Geico (the genetic query optimizer) will really work \nhard to save you 15% or more on the query execution time. You can also \nmake sure that any index existing index is used, by disabling the \nsequential scan and then activating and de-activating indexes with the \ndummy expressions, just as it was done with Oracle's rule based optimizer.\nI agree that a good data model is even more crucial for Postgres than is \nthe case with Oracle. Oracle, because of its rich assortment of tweaking \n& hacking tools and parameters, can be made to perform, even if the \nmodel is designed by someone who didn't apply the rules of good design. \nPostgres is much more susceptible to bad models and it is much harder to \nwork around a badly designed model in Postgres than in Oracle. What \npeople do not understand is that every application in the world will \nbecome badly designed after years of maintenance, adding columns, \ncreating additional indexes, views, tables and triggers and than \ndeploying various tools to design applications. As noted by Murphy, \nthings develop from bad to worse. Keep Postgres models simple and \nseparated, because it's much easier to keep clearly defined models \nsimple and effective than to keep models with 700 tables and 350 views, \nfrequently with conflicting names, different columns named the same and \nsame columns named differently. And monitor, monitor, monitor. Use \nstrace, ltrace, pgstatspack, auto_explain, pgfouine, pgadmin, top, sar, \niostat and all tools you can get hold of. Without the event interface, \nit's frequently a guessing game. It is, however, possible to manage \nthings. If working with partitioning, be very aware that PostgreSQL \noptimizer has certain problems with partitions, especially with group \nfunctions. If you want speed, everything must be prefixed with \npartitioning column: indexes, expressions, joins. There is no explicit \nstar schema and creating hash indexes will not buy you much, as a matter \nof fact, Postgres community is extremely suspicious of the hash indexes \nand I don't see them widely used.\nHaving said that, I was able to solve the problems with my speed and \npartitioning.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Thu, 14 Oct 2010 23:59:14 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": "On Thu, Oct 14, 2010 at 8:59 PM, Mladen Gogala <[email protected]>wrote:\n\n> If working with partitioning, be very aware that PostgreSQL optimizer has\n> certain problems with partitions, especially with group functions. If you\n> want speed, everything must be prefixed with partitioning column: indexes,\n> expressions, joins. There is no explicit star schema and creating hash\n> indexes will not buy you much, as a matter of fact, Postgres community is\n> extremely suspicious of the hash indexes and I don't see them widely used.\n> Having said that, I was able to solve the problems with my speed and\n> partitioning.\n>\n>\nCould you elaborate on this, please? What do you mean by 'everythin must be\nprefixed with partitioning column?'\n\n--sam\n\nOn Thu, Oct 14, 2010 at 8:59 PM, Mladen Gogala <[email protected]> wrote:\n If working with partitioning, be very aware that PostgreSQL optimizer has certain problems with partitions, especially with group functions. If you want speed, everything must be prefixed with partitioning column: indexes, expressions, joins. There is no explicit star schema and creating hash indexes will not buy you much, as a matter of fact, Postgres community is extremely suspicious of the hash indexes and I don't see them widely used.\n\nHaving said that, I was able to solve the problems with my speed and partitioning.\nCould you elaborate on this, please? What do you mean by 'everythin must be prefixed with partitioning column?'--sam",
"msg_date": "Fri, 15 Oct 2010 01:12:54 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": "Samuel Gendler wrote:\n>\n>\n> On Thu, Oct 14, 2010 at 8:59 PM, Mladen Gogala \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> If working with partitioning, be very aware that PostgreSQL\n> optimizer has certain problems with partitions, especially with\n> group functions. If you want speed, everything must be prefixed\n> with partitioning column: indexes, expressions, joins. There is no\n> explicit star schema and creating hash indexes will not buy you\n> much, as a matter of fact, Postgres community is extremely\n> suspicious of the hash indexes and I don't see them widely used.\n> Having said that, I was able to solve the problems with my speed\n> and partitioning.\n>\n>\n> Could you elaborate on this, please? What do you mean by 'everythin \n> must be prefixed with partitioning column?'\n>\n> --sam\nIf you have partitioned table part_tab, partitioned on the column \nitem_date and if there is a global primary key in Oracle, let's call it \nitem_id, then queries like \"select * from part_tab where item_id=12345\" \nwill perform worse than queries with item_date\"\n\nselect * from part_tab where item_id=12345 and item_date='2010-10-15'\n\nThis also applies to inserts and updates. Strictly speaking, the \nitem_date column in the query above is not necessary, after all, the \nitem_id column is the primary key. However, with range scans you will \nget much better results if you include the item_date column than if you \nuse combination of columns without. The term \"prefixed indexes\" is \nborrowed from Oracle RDBMS and means that the beginning column in the \nindex is the column on which the table is partitioned. Oracle, as \nopposed to Postgres, has global indexes, the indexes that span all \npartitions. PostgreSQL only maintains indexes on each of the partitions \nseparately. Oracle calls such indexes \"local indexes\" and defines them \non the partitioned table level. Here is a brief and rather succinct \nexplanation of the terminology:\n\nhttp://www.oracle-base.com/articles/8i/PartitionedTablesAndIndexes.php\n\n\nOf, course, there are other differences between Oracle partitioning and \nPostgreSQL partitioning. 
The main difference is $10000/CPU.\nI am talking from experience:\n\nnews=> \\d moreover_documents\n Table \"moreover.moreover_documents\"\n Column | Type | Modifiers\n----------------------+-----------------------------+-----------\n document_id | bigint | not null\n dre_reference | bigint | not null\n headline | character varying(4000) |\n author | character varying(200) |\n url | character varying(1000) |\n rank | bigint |\n content | text |\n stories_like_this | character varying(1000) |\n internet_web_site_id | bigint | not null\n harvest_time | timestamp without time zone |\n valid_time | timestamp without time zone |\n keyword | character varying(200) |\n article_id | bigint | not null\n media_type | character varying(20) |\n source_type | character varying(20) |\n created_at | timestamp without time zone |\n autonomy_fed_at | timestamp without time zone |\n language | character varying(150) |\nIndexes:\n \"moreover_documents_pkey\" PRIMARY KEY, btree (document_id)\nTriggers:\n insert_moreover_trigger BEFORE INSERT ON moreover_documents FOR EACH \nROW EXE\nCUTE PROCEDURE moreover_insert_trgfn()\nNumber of child tables: 8 (Use \\d+ to list them.)\n\nThe child tables are, of course, partitions.\n\nHere is the original:\n\n\nConnected to:\nOracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Production\nWith the Partitioning, Real Application Clusters, OLAP, Data Mining\nand Real Application Testing options\n\nSQL> desc moreover_documents\n Name Null? Type\n ----------------------------------------- -------- \n----------------------------\n DOCUMENT# NOT NULL NUMBER\n DRE_REFERENCE NOT NULL NUMBER\n HEADLINE VARCHAR2(4000)\n AUTHOR VARCHAR2(200)\n URL VARCHAR2(1000)\n RANK NUMBER\n CONTENT CLOB\n STORIES_LIKE_THIS VARCHAR2(1000)\n INTERNET_WEB_SITE# NOT NULL NUMBER\n HARVEST_TIME DATE\n VALID_TIME DATE\n KEYWORD VARCHAR2(200)\n ARTICLE_ID NOT NULL NUMBER\n MEDIA_TYPE VARCHAR2(20)\n CREATED_AT DATE\n SOURCE_TYPE VARCHAR2(50)\n PUBLISH_DATE DATE\n AUTONOMY_FED_AT DATE\n LANGUAGE VARCHAR2(150)\n\nSQL>\n\n\n\nI must say that it took me some time to get things right.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Fri, 15 Oct 2010 10:51:48 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
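A minimal sketch of the 'prefix everything with the partitioning column' advice, reusing the part_tab/item_date/item_id names from the message above; the child table and its CHECK constraint are hypothetical. With constraint exclusion enabled, the planner can only skip partitions when the partition key appears in the WHERE clause:

    CREATE TABLE part_tab (item_id bigint, item_date date, payload text);

    CREATE TABLE part_tab_2010_10 (
        CHECK (item_date >= DATE '2010-10-01' AND item_date < DATE '2010-11-01')
    ) INHERITS (part_tab);

    -- index led by the partitioning column
    CREATE INDEX part_tab_2010_10_idx ON part_tab_2010_10 (item_date, item_id);

    SET constraint_exclusion = partition;  -- available from 8.4

    -- without the item_date predicate, every child partition must be visited
    SELECT * FROM part_tab
    WHERE item_id = 12345
      AND item_date = DATE '2010-10-15';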
{
"msg_contents": "On Thu, Oct 14, 2010 at 3:43 PM, Tony Capobianco\n<[email protected]> wrote:\n> explain analyze create table tmp_srcmem_emws1\n> as\n> select emailaddress, websiteid\n> from members\n> where emailok = 1\n> and emailbounced = 0;\n\n*) as others have noted, none of your indexes will back this\nexpression. For an index to match properly the index must have all\nthe fields matched in the 'where' clause in left to right order. you\ncould rearrange indexes you already have and probably get things to\nwork properly.\n\n*) If you want things to go really fast, and the combination of\nemailok, emailbounced is a small percentage (say, less than 5) in the\ntable, and you are not interested in the schema level changes your\ntable is screaming, and the (1,0) combination is what you want to\nfrequently match and you should consider:\n\ncreate function email_interesting(ok numeric, bounced numeric) returns bool as\n$$\n select $1 = 1 and $2 = 0;\n$$ language sql immutable;\n\ncreate function members_email_interesting_idx on\n members(email_interesting(emailok, emailbounced)) where email_interesting();\n\nThis will build a partial index which you can query via:\nselect emailaddress, websiteid\n from members\n where email_interesting(emailok, emailbounced);\n\nmerlin\n",
"msg_date": "Fri, 15 Oct 2010 11:06:08 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
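Once a partial index like the one above exists, it is worth confirming that the planner actually chooses it; a hedged usage check, reusing the function and index names from the message above:

    EXPLAIN
    SELECT emailaddress, websiteid
    FROM members
    WHERE email_interesting(emailok, emailbounced);
    -- check whether the plan uses members_email_interesting_idx; if most
    -- rows qualify, a sequential scan will (correctly) still win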
{
"msg_contents": "The recommendations on the numeric columns are fantastic. Thank you\nvery much. We will revisit our methods of assigning datatypes when we\nmigrate our data over from Oracle.\nRegarding the full table scans; it appears inevitable that full table\nscans are necessary for the volume of data involved and the present\ndesign of our indexes. Over time, indexes were added/removed to satisfy\nparticular functionality. Considering this is our most important table,\nI will research exactly how this table is queried to better\noptimize/reorganize our indexes.\n\nThanks for your help.\nTony\n\n\nOn Thu, 2010-10-14 at 23:59 -0400, Mladen Gogala wrote:\n> On 10/14/2010 4:10 PM, Jon Nelson wrote:\n> > The first thing I'd do is think real hard about whether you really\n> > really want 'numeric' instead of boolean, smallint, or integer. The\n> > second thing is that none of your indices (which specify a whole bunch\n> > of fields, by the way) have only just emailok, emailbounced, or only\n> > the pair of them. Without knowing the needs of your app, I would\n> > reconsider your index choices and go with fewer columns per index.\n> >\n> Also, make sure that the statistics is good, that histograms are large \n> enough and that Geico (the genetic query optimizer) will really work \n> hard to save you 15% or more on the query execution time. You can also \n> make sure that any index existing index is used, by disabling the \n> sequential scan and then activating and de-activating indexes with the \n> dummy expressions, just as it was done with Oracle's rule based optimizer.\n> I agree that a good data model is even more crucial for Postgres than is \n> the case with Oracle. Oracle, because of its rich assortment of tweaking \n> & hacking tools and parameters, can be made to perform, even if the \n> model is designed by someone who didn't apply the rules of good design. \n> Postgres is much more susceptible to bad models and it is much harder to \n> work around a badly designed model in Postgres than in Oracle. What \n> people do not understand is that every application in the world will \n> become badly designed after years of maintenance, adding columns, \n> creating additional indexes, views, tables and triggers and than \n> deploying various tools to design applications. As noted by Murphy, \n> things develop from bad to worse. Keep Postgres models simple and \n> separated, because it's much easier to keep clearly defined models \n> simple and effective than to keep models with 700 tables and 350 views, \n> frequently with conflicting names, different columns named the same and \n> same columns named differently. And monitor, monitor, monitor. Use \n> strace, ltrace, pgstatspack, auto_explain, pgfouine, pgadmin, top, sar, \n> iostat and all tools you can get hold of. Without the event interface, \n> it's frequently a guessing game. It is, however, possible to manage \n> things. If working with partitioning, be very aware that PostgreSQL \n> optimizer has certain problems with partitions, especially with group \n> functions. If you want speed, everything must be prefixed with \n> partitioning column: indexes, expressions, joins. There is no explicit \n> star schema and creating hash indexes will not buy you much, as a matter \n> of fact, Postgres community is extremely suspicious of the hash indexes \n> and I don't see them widely used.\n> Having said that, I was able to solve the problems with my speed and \n> partitioning.\n> \n> -- \n> Mladen Gogala\n> Sr. 
Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n> \n> \n\n\n",
"msg_date": "Fri, 15 Oct 2010 11:48:22 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": ">> This table has approximately 300million rows.\n>\n> and your query grab rows=236 660 930 of them. An index might be\n> useless in this situation.\n\nI want to point out that this is probably the most important comment\nhere. A couple of people have noted out that the index won't work for\nthis query, but more importantly, an index is (probably) not desirable\nfor this query. As an analogy (since everyone loves half-baked\nprogramming analogies), if you want to find a couple of bakeries to\nsponsor your MySQL Data Integrity Issues Awareness Walk by donating\nscones, you use the yellow pages. If you want to hit up every business\nin the area to donate whatever they can, you're better off canvasing\nthe neighborhood.\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Fri, 15 Oct 2010 08:49:20 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": " \n\n> -----Original Message-----\n> From: Tony Capobianco [mailto:[email protected]] \n> Sent: Thursday, October 14, 2010 3:43 PM\n> To: [email protected]\n> Subject: oracle to psql migration - slow query in postgres\n> \n> We are in the process of testing migration of our oracle data \n> warehouse over to postgres. A potential showstopper are full \n> table scans on our members table. We can't function on \n> postgres effectively unless index scans are employed. I'm \n> thinking I don't have something set correctly in my \n> postgresql.conf file, but I'm not sure what.\n> \n> This table has approximately 300million rows.\n> \n> Version:\n> SELECT version();\n> \n> version \n> --------------------------------------------------------------\n> ----------------------------------------------------\n> PostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled by GCC \n> gcc (GCC)\n> 4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit\n> \n> We have 4 quad-core processors and 32GB of RAM. The below \n> query uses the members_sorted_idx_001 index in oracle, but in \n> postgres, the optimizer chooses a sequential scan.\n> \n> explain analyze create table tmp_srcmem_emws1 as select \n> emailaddress, websiteid\n> from members\n> where emailok = 1\n> and emailbounced = 0;\n> QUERY\n> PLAN \n> --------------------------------------------------------------\n> ----------------------------------------------------------------\n> Seq Scan on members (cost=0.00..14137154.64 rows=238177981 \n> width=29) (actual time=0.052..685834.785 rows=236660930 loops=1)\n> Filter: ((emailok = 1::numeric) AND (emailbounced = \n> 0::numeric)) Total runtime: 850306.220 ms\n> (3 rows)\n> \n> show shared_buffers ;\n> shared_buffers\n> ----------------\n> 7680MB\n> (1 row)\n> \n> show effective_cache_size ;\n> effective_cache_size\n> ----------------------\n> 22GB\n> (1 row)\n> \n> show work_mem ;\n> work_mem\n> ----------\n> 768MB\n> (1 row)\n> \n> show enable_seqscan ;\n> enable_seqscan\n> ----------------\n> on\n> (1 row)\n> \n> Below are the data definitions for the table/indexes in question:\n> \n> \\d members\n> Table \"members\"\n> Column | Type | Modifiers \n> ---------------------+-----------------------------+-----------\n> memberid | numeric | not null\n> firstname | character varying(50) | \n> lastname | character varying(50) | \n> emailaddress | character varying(50) | \n> password | character varying(50) | \n> address1 | character varying(50) | \n> address2 | character varying(50) | \n> city | character varying(50) | \n> statecode | character varying(50) | \n> zipcode | character varying(50) | \n> birthdate | date | \n> emailok | numeric(2,0) | \n> gender | character varying(1) | \n> addeddate | timestamp without time zone | \n> emailbounced | numeric(2,0) | \n> changedate | timestamp without time zone | \n> optoutsource | character varying(100) | \n> websiteid | numeric | \n> promotionid | numeric | \n> sourceid | numeric | \n> siteid | character varying(64) | \n> srcwebsiteid | numeric | \n> homephone | character varying(20) | \n> homeareacode | character varying(10) | \n> campaignid | numeric | \n> srcmemberid | numeric | \n> optoutdate | date | \n> regcomplete | numeric(1,0) | \n> regcompletesourceid | numeric | \n> ipaddress | character varying(25) | \n> pageid | numeric | \n> streetaddressstatus | numeric(1,0) | \n> middlename | character varying(50) | \n> optinprechecked | numeric(1,0) | \n> optinposition | numeric | \n> homephonestatus | numeric | \n> addeddate_id | numeric | \n> changedate_id | 
numeric | \n> rpmindex | numeric | \n> optmode | numeric(1,0) | \n> countryid | numeric | \n> confirmoptin | numeric(2,0) | \n> bouncedate | date | \n> memberageid | numeric | \n> sourceid2 | numeric | \n> remoteuserid | character varying(50) | \n> goal | numeric(1,0) | \n> flowdepth | numeric | \n> pagetype | numeric | \n> savepassword | character varying(50) | \n> customerprofileid | numeric | \n> Indexes:\n> \"email_website_unq\" UNIQUE, btree (emailaddress, \n> websiteid), tablespace \"members_idx\"\n> \"member_addeddateid_idx\" btree (addeddate_id), tablespace \n> \"members_idx\"\n> \"member_changedateid_idx\" btree (changedate_id), \n> tablespace \"members_idx\"\n> \"members_fdate_idx\" btree \n> (to_char_year_month(addeddate)), tablespace \"esave_idx\"\n> \"members_memberid_idx\" btree (memberid), tablespace \"members_idx\"\n> \"members_mid_emailok_idx\" btree (memberid, emailaddress, \n> zipcode, firstname, emailok), tablespace \"members_idx\"\n> \"members_sorted_idx_001\" btree (websiteid, emailok, \n> emailbounced, addeddate, memberid, zipcode, statecode, \n> emailaddress), tablespace \"members_idx\"\n> \"members_src_idx\" btree (websiteid, emailbounced, \n> sourceid), tablespace \"members_idx\"\n> \"members_wid_idx\" btree (websiteid), tablespace \"members_idx\"\n> \n> select tablename, indexname, tablespace, indexdef from \n> pg_indexes where tablename = 'members'; -[ RECORD\n> 1 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | members_fdate_idx\n> tablespace | esave_idx\n> indexdef | CREATE INDEX members_fdate_idx ON members USING btree\n> (to_char_year_month(addeddate))\n> -[ RECORD\n> 2 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | member_changedateid_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX member_changedateid_idx ON members \n> USING btree\n> (changedate_id)\n> -[ RECORD\n> 3 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | member_addeddateid_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX member_addeddateid_idx ON members \n> USING btree\n> (addeddate_id)\n> -[ RECORD\n> 4 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | members_wid_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_wid_idx ON members USING btree\n> (websiteid)\n> -[ RECORD\n> 5 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | members_src_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_src_idx ON members USING btree\n> (websiteid, emailbounced, sourceid)\n> -[ RECORD\n> 6 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | members_sorted_idx_001\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_sorted_idx_001 ON members \n> USING btree\n> 
(websiteid, emailok, emailbounced, addeddate, memberid, \n> zipcode, statecode, emailaddress) -[ RECORD\n> 7 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | members_mid_emailok_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_mid_emailok_idx ON members \n> USING btree\n> (memberid, emailaddress, zipcode, firstname, emailok) -[ RECORD\n> 8 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | members_memberid_idx\n> tablespace | members_idx\n> indexdef | CREATE INDEX members_memberid_idx ON members USING btree\n> (memberid)\n> -[ RECORD\n> 9 \n> ]-------------------------------------------------------------\n> --------------------------------------------------------------\n> -------------------------\n> tablename | members\n> indexname | email_website_unq\n> tablespace | members_idx\n> indexdef | CREATE UNIQUE INDEX email_website_unq ON members USING\n> btree (emailaddress, websiteid)\n> \n> \n> This table has also been vacuumed analyzed as well:\n> \n> select * from pg_stat_all_tables where relname = 'members'; \n> -[ RECORD 1 ]----+------------------------------\n> relid | 3112786\n> schemaname | xxxxx\n> relname | members\n> seq_scan | 298\n> seq_tup_read | 42791828896\n> idx_scan | 31396925\n> idx_tup_fetch | 1083796963\n> n_tup_ins | 291308316\n> n_tup_upd | 0\n> n_tup_del | 4188020\n> n_tup_hot_upd | 0\n> n_live_tup | 285364632\n> n_dead_tup | 109658\n> last_vacuum | 2010-10-12 20:26:01.227393-04\n> last_autovacuum | \n> last_analyze | 2010-10-12 20:28:01.105656-04\n> last_autoanalyze | 2010-09-16 20:50:00.712418-04\n> \n> \n\n\nTony,\nFor your query:\n\n> select \n> emailaddress, websiteid\n> from members\n> where emailok = 1\n> and emailbounced = 0;\n\nyour table doesn't have any indexes where \"emailok\" or \"emailbounced\"\nare leading columns.\nThat's why existing indexes can not be used.\n\nIf you specified \"websiteid\" in the \"where\" clause then (most probably)\nthe index members_sorted_idx_001 will be used (based on selectivity and\nstatistics known to optimizer). \n\nIf this query (as is - without \"websiteid\") is important for your app,\ncreate another index on (emailok, emailbounced) which should help, of\ncourse if selectivity of your where clause is good enough (not to\nperform full table scan).\n\nRegards,\nIgor Neyman\n",
"msg_date": "Fri, 15 Oct 2010 13:43:40 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
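A minimal sketch of the index suggested in the message above, using the column names from this thread; the index names are hypothetical and the partial-index variant is an extra assumption, not something proposed in the thread.

```sql
-- Plain two-column index as suggested above:
CREATE INDEX members_emailok_bounced_idx
    ON members (emailok, emailbounced);

-- Alternative sketch: a partial index covering only the rows the query
-- wants, which keeps the index far smaller than indexing every row.
CREATE INDEX members_mailable_idx
    ON members (websiteid)
    WHERE emailok = 1 AND emailbounced = 0;
```

Either way, selectivity decides the outcome: with most of the 300 million rows matching the filter, the planner may still (correctly) prefer a sequential scan.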
{
"msg_contents": "I am curious about this scenario:\n\nWhat is going to happen when a server is running and the Postgres log file is \naccidentally deleted or renamed? The server will have no place to write its log \nentries obviously.. what else?.\n\nIs there any way to check within Postgres what the current log name (with a \nparticular time stamp when rotated last time) is so that a file with a correct \nmatching name can be manually created?\n\nFor this situation, what is the proper way to get the server to write to a new \nlog file without having to bounce the Postgres server itself?\n\nThanks!\n\n\n \nI am curious about this scenario:What is going to happen when a server is running and the Postgres log file is accidentally deleted or renamed? The server will have no place to write its log entries obviously.. what else?.Is there any way to check within Postgres what the current log name (with a particular time stamp when rotated last time) is so that a file with a correct matching name can be manually created?For this situation, what is the proper way to get the server to write to a new log file without having to bounce the Postgres server itself?Thanks!",
"msg_date": "Fri, 15 Oct 2010 11:03:12 -0700 (PDT)",
"msg_from": "Jessica Richard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Postgres log file"
},
{
"msg_contents": "Thanks for all your responses. What's interesting is that an index is\nused when this query is executed in Oracle. It appears to do some\nparallel processing:\n\nSQL> set line 200\ndelete from plan_table;\nexplain plan for\nselect websiteid, emailaddress\n from members\n where emailok = 1\n and emailbounced = 0;\n\nSELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());\nSQL> \n3 rows deleted.\n\nSQL> 2 3 4 5 \nExplained.\n\nSQL> SQL> \nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nPlan hash value: 4247959398\n\n-------------------------------------------------------------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes\n| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |\n-------------------------------------------------------------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 237M|\n7248M| 469K (2)| 01:49:33 | | | |\n| 1 | PX COORDINATOR | | |\n| | | | | |\n| 2 | PX SEND QC (RANDOM) | :TQ10000 | 237M|\n7248M| 469K (2)| 01:49:33 | Q1,00 | P->S | QC (RAND) |\n| 3 | PX BLOCK ITERATOR | | 237M|\n7248M| 469K (2)| 01:49:33 | Q1,00 | PCWC | |\n|* 4 | INDEX FAST FULL SCAN| MEMBERS_SORTED_IDX_001 | 237M|\n7248M| 469K (2)| 01:49:33 | Q1,00 | PCWP | |\n-------------------------------------------------------------------------------------------------------------------------------\n\nPLAN_TABLE_OUTPUT\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n 4 - filter(\"EMAILBOUNCED\"=0 AND \"EMAILOK\"=1)\n\n16 rows selected.\n\n\nOn Fri, 2010-10-15 at 13:43 -0400, Igor Neyman wrote:\n> \n> > -----Original Message-----\n> > From: Tony Capobianco [mailto:[email protected]] \n> > Sent: Thursday, October 14, 2010 3:43 PM\n> > To: [email protected]\n> > Subject: oracle to psql migration - slow query in postgres\n> > \n> > We are in the process of testing migration of our oracle data \n> > warehouse over to postgres. A potential showstopper are full \n> > table scans on our members table. We can't function on \n> > postgres effectively unless index scans are employed. I'm \n> > thinking I don't have something set correctly in my \n> > postgresql.conf file, but I'm not sure what.\n> > \n> > This table has approximately 300million rows.\n> > \n> > Version:\n> > SELECT version();\n> > \n> > version \n> > --------------------------------------------------------------\n> > ----------------------------------------------------\n> > PostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled by GCC \n> > gcc (GCC)\n> > 4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit\n> > \n> > We have 4 quad-core processors and 32GB of RAM. 
The below \n> > query uses the members_sorted_idx_001 index in oracle, but in \n> > postgres, the optimizer chooses a sequential scan.\n> > \n> > explain analyze create table tmp_srcmem_emws1 as select \n> > emailaddress, websiteid\n> > from members\n> > where emailok = 1\n> > and emailbounced = 0;\n> > QUERY\n> > PLAN \n> > --------------------------------------------------------------\n> > ----------------------------------------------------------------\n> > Seq Scan on members (cost=0.00..14137154.64 rows=238177981 \n> > width=29) (actual time=0.052..685834.785 rows=236660930 loops=1)\n> > Filter: ((emailok = 1::numeric) AND (emailbounced = \n> > 0::numeric)) Total runtime: 850306.220 ms\n> > (3 rows)\n> > \n> > show shared_buffers ;\n> > shared_buffers\n> > ----------------\n> > 7680MB\n> > (1 row)\n> > \n> > show effective_cache_size ;\n> > effective_cache_size\n> > ----------------------\n> > 22GB\n> > (1 row)\n> > \n> > show work_mem ;\n> > work_mem\n> > ----------\n> > 768MB\n> > (1 row)\n> > \n> > show enable_seqscan ;\n> > enable_seqscan\n> > ----------------\n> > on\n> > (1 row)\n> > \n> > Below are the data definitions for the table/indexes in question:\n> > \n> > \\d members\n> > Table \"members\"\n> > Column | Type | Modifiers \n> > ---------------------+-----------------------------+-----------\n> > memberid | numeric | not null\n> > firstname | character varying(50) | \n> > lastname | character varying(50) | \n> > emailaddress | character varying(50) | \n> > password | character varying(50) | \n> > address1 | character varying(50) | \n> > address2 | character varying(50) | \n> > city | character varying(50) | \n> > statecode | character varying(50) | \n> > zipcode | character varying(50) | \n> > birthdate | date | \n> > emailok | numeric(2,0) | \n> > gender | character varying(1) | \n> > addeddate | timestamp without time zone | \n> > emailbounced | numeric(2,0) | \n> > changedate | timestamp without time zone | \n> > optoutsource | character varying(100) | \n> > websiteid | numeric | \n> > promotionid | numeric | \n> > sourceid | numeric | \n> > siteid | character varying(64) | \n> > srcwebsiteid | numeric | \n> > homephone | character varying(20) | \n> > homeareacode | character varying(10) | \n> > campaignid | numeric | \n> > srcmemberid | numeric | \n> > optoutdate | date | \n> > regcomplete | numeric(1,0) | \n> > regcompletesourceid | numeric | \n> > ipaddress | character varying(25) | \n> > pageid | numeric | \n> > streetaddressstatus | numeric(1,0) | \n> > middlename | character varying(50) | \n> > optinprechecked | numeric(1,0) | \n> > optinposition | numeric | \n> > homephonestatus | numeric | \n> > addeddate_id | numeric | \n> > changedate_id | numeric | \n> > rpmindex | numeric | \n> > optmode | numeric(1,0) | \n> > countryid | numeric | \n> > confirmoptin | numeric(2,0) | \n> > bouncedate | date | \n> > memberageid | numeric | \n> > sourceid2 | numeric | \n> > remoteuserid | character varying(50) | \n> > goal | numeric(1,0) | \n> > flowdepth | numeric | \n> > pagetype | numeric | \n> > savepassword | character varying(50) | \n> > customerprofileid | numeric | \n> > Indexes:\n> > \"email_website_unq\" UNIQUE, btree (emailaddress, \n> > websiteid), tablespace \"members_idx\"\n> > \"member_addeddateid_idx\" btree (addeddate_id), tablespace \n> > \"members_idx\"\n> > \"member_changedateid_idx\" btree (changedate_id), \n> > tablespace \"members_idx\"\n> > \"members_fdate_idx\" btree \n> > (to_char_year_month(addeddate)), tablespace \"esave_idx\"\n> > 
\"members_memberid_idx\" btree (memberid), tablespace \"members_idx\"\n> > \"members_mid_emailok_idx\" btree (memberid, emailaddress, \n> > zipcode, firstname, emailok), tablespace \"members_idx\"\n> > \"members_sorted_idx_001\" btree (websiteid, emailok, \n> > emailbounced, addeddate, memberid, zipcode, statecode, \n> > emailaddress), tablespace \"members_idx\"\n> > \"members_src_idx\" btree (websiteid, emailbounced, \n> > sourceid), tablespace \"members_idx\"\n> > \"members_wid_idx\" btree (websiteid), tablespace \"members_idx\"\n> > \n> > select tablename, indexname, tablespace, indexdef from \n> > pg_indexes where tablename = 'members'; -[ RECORD\n> > 1 \n> > ]-------------------------------------------------------------\n> > --------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | members_fdate_idx\n> > tablespace | esave_idx\n> > indexdef | CREATE INDEX members_fdate_idx ON members USING btree\n> > (to_char_year_month(addeddate))\n> > -[ RECORD\n> > 2 \n> > ]-------------------------------------------------------------\n> > --------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | member_changedateid_idx\n> > tablespace | members_idx\n> > indexdef | CREATE INDEX member_changedateid_idx ON members \n> > USING btree\n> > (changedate_id)\n> > -[ RECORD\n> > 3 \n> > ]-------------------------------------------------------------\n> > --------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | member_addeddateid_idx\n> > tablespace | members_idx\n> > indexdef | CREATE INDEX member_addeddateid_idx ON members \n> > USING btree\n> > (addeddate_id)\n> > -[ RECORD\n> > 4 \n> > ]-------------------------------------------------------------\n> > --------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | members_wid_idx\n> > tablespace | members_idx\n> > indexdef | CREATE INDEX members_wid_idx ON members USING btree\n> > (websiteid)\n> > -[ RECORD\n> > 5 \n> > ]-------------------------------------------------------------\n> > --------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | members_src_idx\n> > tablespace | members_idx\n> > indexdef | CREATE INDEX members_src_idx ON members USING btree\n> > (websiteid, emailbounced, sourceid)\n> > -[ RECORD\n> > 6 \n> > ]-------------------------------------------------------------\n> > --------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | members_sorted_idx_001\n> > tablespace | members_idx\n> > indexdef | CREATE INDEX members_sorted_idx_001 ON members \n> > USING btree\n> > (websiteid, emailok, emailbounced, addeddate, memberid, \n> > zipcode, statecode, emailaddress) -[ RECORD\n> > 7 \n> > ]-------------------------------------------------------------\n> > --------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | members_mid_emailok_idx\n> > tablespace | members_idx\n> > indexdef | CREATE INDEX members_mid_emailok_idx ON members \n> > USING btree\n> > (memberid, emailaddress, zipcode, firstname, emailok) -[ RECORD\n> > 8 \n> > ]-------------------------------------------------------------\n> > 
--------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | members_memberid_idx\n> > tablespace | members_idx\n> > indexdef | CREATE INDEX members_memberid_idx ON members USING btree\n> > (memberid)\n> > -[ RECORD\n> > 9 \n> > ]-------------------------------------------------------------\n> > --------------------------------------------------------------\n> > -------------------------\n> > tablename | members\n> > indexname | email_website_unq\n> > tablespace | members_idx\n> > indexdef | CREATE UNIQUE INDEX email_website_unq ON members USING\n> > btree (emailaddress, websiteid)\n> > \n> > \n> > This table has also been vacuumed analyzed as well:\n> > \n> > select * from pg_stat_all_tables where relname = 'members'; \n> > -[ RECORD 1 ]----+------------------------------\n> > relid | 3112786\n> > schemaname | xxxxx\n> > relname | members\n> > seq_scan | 298\n> > seq_tup_read | 42791828896\n> > idx_scan | 31396925\n> > idx_tup_fetch | 1083796963\n> > n_tup_ins | 291308316\n> > n_tup_upd | 0\n> > n_tup_del | 4188020\n> > n_tup_hot_upd | 0\n> > n_live_tup | 285364632\n> > n_dead_tup | 109658\n> > last_vacuum | 2010-10-12 20:26:01.227393-04\n> > last_autovacuum | \n> > last_analyze | 2010-10-12 20:28:01.105656-04\n> > last_autoanalyze | 2010-09-16 20:50:00.712418-04\n> > \n> > \n> \n> \n> Tony,\n> For your query:\n> \n> > select \n> > emailaddress, websiteid\n> > from members\n> > where emailok = 1\n> > and emailbounced = 0;\n> \n> your table doesn't have any indexes where \"emailok\" or \"emailbounced\"\n> are leading columns.\n> That's why existing indexes can not be used.\n> \n> If you specified \"websiteid\" in the \"where\" clause then (most probably)\n> the index members_sorted_idx_001 will be used (based on selectivity and\n> statistics known to optimizer). \n> \n> If this query (as is - without \"websiteid\") is important for your app,\n> create another index on (emailok, emailbounced) which should help, of\n> course if selectivity of your where clause is good enough (not to\n> perform full table scan).\n> \n> Regards,\n> Igor Neyman\n> \n\n\n",
"msg_date": "Fri, 15 Oct 2010 14:13:49 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
{
"msg_contents": "\n\n\n\n\n On Linux at least - not sure about Windows - deleting a file does\n not remove it from the file system until there are no more processes\n holding the file open.\n So while postgres holds the file open, it can keep writing to it\n happily.\n You won't be able to see the file in the directory, and the space\n won't be freed. You can find such things with the \"find\"command by\n looking for files with a link count of zero.\n\n\n On 10/15/2010 11:03 AM, Jessica Richard wrote:\n \n\nI am curious about this scenario:\n\n What is going to happen when a server is running and the\n Postgres log file is accidentally deleted or renamed? The\n server will have no place to write its log entries obviously..\n what else?.\n\n Is there any way to check within Postgres what the current log\n name (with a particular time stamp when rotated last time) is so\n that a file with a correct matching name can be manually\n created?\n\n For this situation, what is the proper way to get the server to\n write to a new log file without having to bounce the Postgres\n server itself?\n\n Thanks!\n\n\n\n\n\n\n\n-- \n\n\n\n\n\n Steve Francis\n\n\nLogicMonitor\n LLC\n\n\n\n\n\n\n\n\n\n\[email protected]\n Monitoring Made Easy\nwww.logicmonitor.com\n\n\n Ph: 1 888\n 41 LOGIC x500\n Ph: 1 805 698 0770\n\n\n\n \n\n\n\n\n\n",
"msg_date": "Fri, 15 Oct 2010 11:15:37 -0700",
"msg_from": "Steve Francis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres log file"
},
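A hedged sketch of what can be done from SQL in the missing-log-file scenario above, assuming the logging collector is enabled and log_directory is the default 'pg_log'; both admin functions exist in the 8.x releases discussed in this archive and require superuser rights.

```sql
-- List the files the logging collector has produced so far:
SELECT pg_ls_dir('pg_log');

-- Ask the collector to switch to a brand-new log file immediately,
-- without restarting the server:
SELECT pg_rotate_logfile();
```

Once the collector opens a new file, the space held by the deleted-but-still-open old file is released.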
{
"msg_contents": "\n> -----Original Message-----\n> From: Tony Capobianco [mailto:[email protected]] \n> Sent: Friday, October 15, 2010 2:14 PM\n> To: [email protected]\n> Subject: Re: oracle to psql migration - slow query in postgres\n> \n> Thanks for all your responses. What's interesting is that an \n> index is used when this query is executed in Oracle. It \n> appears to do some parallel processing:\n> \n> SQL> set line 200\n> delete from plan_table;\n> explain plan for\n> select websiteid, emailaddress\n> from members\n> where emailok = 1\n> and emailbounced = 0;\n> \n> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());\n> SQL> \n> 3 rows deleted.\n> \n> SQL> 2 3 4 5 \n> Explained.\n> \n> SQL> SQL> \n> PLAN_TABLE_OUTPUT\n> --------------------------------------------------------------\n> --------------------------------------------------------------\n> --------------------------------------------------------------\n> --------------\n> Plan hash value: 4247959398\n> \n> --------------------------------------------------------------\n> -----------------------------------------------------------------\n> | Id | Operation | Name | \n> Rows | Bytes\n> | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |\n> --------------------------------------------------------------\n> -----------------------------------------------------------------\n> | 0 | SELECT STATEMENT | | 237M|\n> 7248M| 469K (2)| 01:49:33 | | | |\n> | 1 | PX COORDINATOR | | |\n> | | | | | |\n> | 2 | PX SEND QC (RANDOM) | :TQ10000 | 237M|\n> 7248M| 469K (2)| 01:49:33 | Q1,00 | P->S | QC (RAND) |\n> | 3 | PX BLOCK ITERATOR | | 237M|\n> 7248M| 469K (2)| 01:49:33 | Q1,00 | PCWC | |\n> |* 4 | INDEX FAST FULL SCAN| MEMBERS_SORTED_IDX_001 | 237M|\n> 7248M| 469K (2)| 01:49:33 | Q1,00 | PCWP | |\n> --------------------------------------------------------------\n> -----------------------------------------------------------------\n> \n> PLAN_TABLE_OUTPUT\n> --------------------------------------------------------------\n> --------------------------------------------------------------\n> --------------------------------------------------------------\n> --------------\n> \n> Predicate Information (identified by operation id):\n> ---------------------------------------------------\n> \n> 4 - filter(\"EMAILBOUNCED\"=0 AND \"EMAILOK\"=1)\n> \n> 16 rows selected.\n> \n> \n\n1. Postgres doesn't have \"FAST FULL SCAN\" because even if all the info\nis in the index, it need to visit the row in the table (\"visibility\"\nissue).\n\n2. Postgres doesn't have parallel executions.\n\nBUT, it's free anf has greate community support, as you already saw.\n\nRegards,\nIgor Neyman\n",
"msg_date": "Fri, 15 Oct 2010 14:54:11 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
},
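A hypothetical experiment for the query in this thread: strongly discouraging sequential scans for one session makes the planner reveal its cheapest alternative plan, which makes it easy to compare the two costs and judge whether an index could ever win for a filter this unselective.

```sql
SET enable_seqscan = off;   -- session-local planner switch, not for production

EXPLAIN ANALYZE
SELECT emailaddress, websiteid
  FROM members
 WHERE emailok = 1
   AND emailbounced = 0;

RESET enable_seqscan;
```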
{
"msg_contents": "Very true Igor! Free is my favorite price. \nI'll figure a way around this issue.\n\nThanks for your help.\nTony\n\nOn Fri, 2010-10-15 at 14:54 -0400, Igor Neyman wrote:\n> > -----Original Message-----\n> > From: Tony Capobianco [mailto:[email protected]] \n> > Sent: Friday, October 15, 2010 2:14 PM\n> > To: [email protected]\n> > Subject: Re: oracle to psql migration - slow query in postgres\n> > \n> > Thanks for all your responses. What's interesting is that an \n> > index is used when this query is executed in Oracle. It \n> > appears to do some parallel processing:\n> > \n> > SQL> set line 200\n> > delete from plan_table;\n> > explain plan for\n> > select websiteid, emailaddress\n> > from members\n> > where emailok = 1\n> > and emailbounced = 0;\n> > \n> > SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());\n> > SQL> \n> > 3 rows deleted.\n> > \n> > SQL> 2 3 4 5 \n> > Explained.\n> > \n> > SQL> SQL> \n> > PLAN_TABLE_OUTPUT\n> > --------------------------------------------------------------\n> > --------------------------------------------------------------\n> > --------------------------------------------------------------\n> > --------------\n> > Plan hash value: 4247959398\n> > \n> > --------------------------------------------------------------\n> > -----------------------------------------------------------------\n> > | Id | Operation | Name | \n> > Rows | Bytes\n> > | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |\n> > --------------------------------------------------------------\n> > -----------------------------------------------------------------\n> > | 0 | SELECT STATEMENT | | 237M|\n> > 7248M| 469K (2)| 01:49:33 | | | |\n> > | 1 | PX COORDINATOR | | |\n> > | | | | | |\n> > | 2 | PX SEND QC (RANDOM) | :TQ10000 | 237M|\n> > 7248M| 469K (2)| 01:49:33 | Q1,00 | P->S | QC (RAND) |\n> > | 3 | PX BLOCK ITERATOR | | 237M|\n> > 7248M| 469K (2)| 01:49:33 | Q1,00 | PCWC | |\n> > |* 4 | INDEX FAST FULL SCAN| MEMBERS_SORTED_IDX_001 | 237M|\n> > 7248M| 469K (2)| 01:49:33 | Q1,00 | PCWP | |\n> > --------------------------------------------------------------\n> > -----------------------------------------------------------------\n> > \n> > PLAN_TABLE_OUTPUT\n> > --------------------------------------------------------------\n> > --------------------------------------------------------------\n> > --------------------------------------------------------------\n> > --------------\n> > \n> > Predicate Information (identified by operation id):\n> > ---------------------------------------------------\n> > \n> > 4 - filter(\"EMAILBOUNCED\"=0 AND \"EMAILOK\"=1)\n> > \n> > 16 rows selected.\n> > \n> > \n> \n> 1. Postgres doesn't have \"FAST FULL SCAN\" because even if all the info\n> is in the index, it need to visit the row in the table (\"visibility\"\n> issue).\n> \n> 2. Postgres doesn't have parallel executions.\n> \n> BUT, it's free anf has greate community support, as you already saw.\n> \n> Regards,\n> Igor Neyman\n> \n\n\n",
"msg_date": "Fri, 15 Oct 2010 15:22:52 -0400",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: oracle to psql migration - slow query in postgres"
}
] |
[
{
"msg_contents": "postgres 8.4.4 on openSUSE 11.3 (2.6.36rc7, x86_64).\n\nI was watching a fairly large query run and observed that the disk\nlight went out. I checked 'top' and postgres was using 100% CPU so I\nstrace'd the running process.\nThis is what I saw:\n\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(87, 0, SEEK_END) = 585531392\nlseek(94, 270680064, SEEK_SET) = 270680064\nread(94, \"<elided>\"..., 8192) = 8192\n\nand I observed that pattern quite a bit.\n\nI know lseek is cheap, but a superfluous systemcall is a superfluous\nsystemcall, and over a short period amounted to 37% (according to\nstrace) of the time spent in the system.\n\nWhat's with the excess calls to lseek?\n\nThe query plan was a nested loop anti-join (on purpose).\n\n-- \nJon\n",
"msg_date": "Thu, 14 Oct 2010 15:00:32 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "odd postgresql performance (excessive lseek)"
},
{
"msg_contents": "No replies?\n\nThis is another situation where using pread would have saved a lot of\ntime and sped things up a bit, but failing that, keeping track of the\nfile position ourselves and only lseek'ing when necessary would also\nhelp. Postgresql was spending 37% of it's time in redundant lseek!\n\n-- \nJon\n",
"msg_date": "Tue, 19 Oct 2010 08:10:47 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: odd postgresql performance (excessive lseek)"
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 9:10 AM, Jon Nelson <[email protected]> wrote:\n> No replies?\n>\n> This is another situation where using pread would have saved a lot of\n> time and sped things up a bit, but failing that, keeping track of the\n> file position ourselves and only lseek'ing when necessary would also\n> help. Postgresql was spending 37% of it's time in redundant lseek!\n\n37% of cpu time? Is that according to strace -T? how did you measure it?\n\nmerlin\n",
"msg_date": "Tue, 19 Oct 2010 09:25:20 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd postgresql performance (excessive lseek)"
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 8:25 AM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Oct 19, 2010 at 9:10 AM, Jon Nelson <[email protected]> wrote:\n>> No replies?\n>>\n>> This is another situation where using pread would have saved a lot of\n>> time and sped things up a bit, but failing that, keeping track of the\n>> file position ourselves and only lseek'ing when necessary would also\n>> help. Postgresql was spending 37% of it's time in redundant lseek!\n>\n> 37% of cpu time? Is that according to strace -T? how did you measure it?\n\nPer the original post, it (redundant lseek system calls) accounted for\n37% of the time spent in the kernel.\n\nstrace -f -p <pid> -c\n\n\n-- \nJon\n",
"msg_date": "Tue, 19 Oct 2010 08:38:30 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: odd postgresql performance (excessive lseek)"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> This is another situation where using pread would have saved a lot of\n> time and sped things up a bit, but failing that, keeping track of the\n> file position ourselves and only lseek'ing when necessary would also\n> help.\n\nNo, it wouldn't; you don't have the slightest idea what's going on\nthere. Those lseeks are for the purpose of detecting the current EOF\nlocation, ie, finding out whether some other backend has extended the\nfile recently. We could get rid of them, but only at the cost of\nputting in some other communication mechanism instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Oct 2010 10:36:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd postgresql performance (excessive lseek) "
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 9:36 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> This is another situation where using pread would have saved a lot of\n>> time and sped things up a bit, but failing that, keeping track of the\n>> file position ourselves and only lseek'ing when necessary would also\n>> help.\n>\n> No, it wouldn't; you don't have the slightest idea what's going on\n> there. Those lseeks are for the purpose of detecting the current EOF\n> location, ie, finding out whether some other backend has extended the\n> file recently. We could get rid of them, but only at the cost of\n> putting in some other communication mechanism instead.\n\nThat's a little harsh (it's not untrue, though).\n\nIt's true I don't know how postgresql works WRT how it manages files,\nbut now I've been educated (some). I'm guessing, then, that due to how\neach backend may extend files without the other backends knowing of\nit, that using fallocate or some-such is also likely a non-starter. I\nask because, especially when allocating files 8KB at a time, file\nfragmentation on a busy system is potentially high. I recently saw an\next3 filesystem (dedicated to postgresql) with 38% file fragmentation\nand, yes, it does make a huge performance difference in some cases.\nAfter manually defragmenting some files (with pg offline) I saw a read\nspeed increase for single-MB-per-second to\nhigh-double-digit-MB-per-second. However, after asking pg to rewrite\nsome of the worst files (by way of CLUSTER or ALTER TABLE) I saw no\nimprovement - I'm guessing due to the 8KB-at-a-time allocation\nmechanism.\n\nHas any work been done on making use of shared memory for file stats\nor using fallocate (or posix_fallocate) to allocate files in larger\nchunks?\n\n-- \nJon\n",
"msg_date": "Tue, 19 Oct 2010 10:27:48 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: odd postgresql performance (excessive lseek)"
},
{
"msg_contents": "Jon Nelson wrote:\n> That's a little harsh (it's not untrue, though).\n> \n\nWelcome to pgsql-performance! You can get a right answer, or a nice \nanswer, but given the personalities involved it's hard to get both at \nthe same time. With this crowd, you need to be careful stating \nsomething you were speculating about as if you were certain of it, and \nto be certain here usually means \"I read the source code\". I recommend \nwriting theories as a question (\"would X have sped this up?\") rather \nthan a statement (\"X will speed this up\") if you want to see gentler \nanswers.\n\n> Has any work been done on making use of shared memory for file stats\n> or using fallocate (or posix_fallocate) to allocate files in larger\n> chunks?\n> \n\nJust plain old pre-allocating in larger chunks turns out to work well. \nAm hoping to get that into an official patch eventually.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\" Pre-ordering at:\nhttps://www.packtpub.com/postgresql-9-0-high-performance/book\n\n",
"msg_date": "Tue, 19 Oct 2010 12:28:29 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd postgresql performance (excessive lseek)"
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 12:28 PM, Greg Smith <[email protected]> wrote:\n> Jon Nelson wrote:\n>>\n>> That's a little harsh (it's not untrue, though).\n>>\n>\n> Welcome to pgsql-performance! You can get a right answer, or a nice answer,\n> but given the personalities involved it's hard to get both at the same time.\n> With this crowd, you need to be careful stating something you were\n> speculating about as if you were certain of it, and to be certain here\n> usually means \"I read the source code\". I recommend writing theories as a\n> question (\"would X have sped this up?\") rather than a statement (\"X will\n> speed this up\") if you want to see gentler answers.\n>\n>> Has any work been done on making use of shared memory for file stats\n>> or using fallocate (or posix_fallocate) to allocate files in larger\n>> chunks?\n>>\n>\n> Just plain old pre-allocating in larger chunks turns out to work well. Am\n> hoping to get that into an official patch eventually.\n\nhm...wouldn't larger allocation blocks reduce the frequency of 'this\nfile got larger via another backend' type events, and therefore raise\nthe benefit of pushing the events out vs checking them over and over?\n\nmerlin\n",
"msg_date": "Tue, 19 Oct 2010 13:41:04 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd postgresql performance (excessive lseek)"
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 10:36 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> This is another situation where using pread would have saved a lot of\n>> time and sped things up a bit, but failing that, keeping track of the\n>> file position ourselves and only lseek'ing when necessary would also\n>> help.\n>\n> No, it wouldn't; you don't have the slightest idea what's going on\n> there. Those lseeks are for the purpose of detecting the current EOF\n> location, ie, finding out whether some other backend has extended the\n> file recently. We could get rid of them, but only at the cost of\n> putting in some other communication mechanism instead.\n\nI don't get it. Why would be doing that in a tight loop within a\nsingle backend?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 26 Oct 2010 23:05:10 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd postgresql performance (excessive lseek)"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Oct 19, 2010 at 10:36 AM, Tom Lane <[email protected]> wrote:\n>> Those lseeks are for the purpose of detecting the current EOF\n>> location, ie, finding out whether some other backend has extended the\n>> file recently. �We could get rid of them, but only at the cost of\n>> putting in some other communication mechanism instead.\n\n> I don't get it. Why would be doing that in a tight loop within a\n> single backend?\n\nWell, we weren't shown any context whatsoever about what the backend was\nactually doing ... but for example the planner likes to recheck the\ncurrent physical size of each relation in a query, so that it's working\nwith an up-to-date number. That could probably be avoided, since an\nestimate would be good enough as long as it wasn't horribly stale.\nBut there are other places that *have* to have the accurate size, like\nseqscan startup. I doubt it was as tight a loop as all that. It\nwouldn't be hard at all to have an example where those lseeks are the\nonly operations visible to strace, if all the data the backend needs is\nin shared buffers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Oct 2010 23:57:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd postgresql performance (excessive lseek) "
}
] |
[
{
"msg_contents": "Damon Snyder wrote:\n \n> I have heard it said that if a stored procedure is declared as\n> VOLATILE, then no good optimizations can be done on queries within\n> the stored procedure or queries that use the stored procedure (say\n> as the column in a view). I have seen this in practice, recommended\n> on the irc channel, and in the archives (\n> http://archives.postgresql.org/pgsql-performance/2008-01/msg00283.php\n> ). Can\n> someone help me understand or point me to some documentation\n> explaining why this is so?\n \nHere's the documentation:\n \nhttp://www.postgresql.org/docs/current/interactive/sql-createfunction.html\n \nhttp://www.postgresql.org/docs/current/interactive/xfunc-volatility.html\n \n-Kevin\n",
"msg_date": "Fri, 15 Oct 2010 14:24:39 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored procedure declared as VOLATILE => no good\n\toptimization is done"
}
] |
[
{
"msg_contents": "Hi Guys,\n\nI am interested in finding out the pros/cons of using UUID as a primary key field. My requirement states that UUID would be perfect in my case as I will be having many small databases which will link up to a global database using the UUID. Hence, the need for a unique key across all databases. It would be extremely helpful if someone could help me figure this out, as it is critical for my project.\n\nThanks in advance,\n\nNav ",
"msg_date": "Sat, 16 Oct 2010 07:28:04 +0530",
"msg_from": "Navkirat Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "UUID performance as primary key"
},
{
"msg_contents": "On 16/10/2010 9:58 AM, Navkirat Singh wrote:\n> Hi Guys,\n>\n> I am interested in finding out the pros/cons of using UUID as a primary key field. My requirement states that UUID would be perfect in my case as I will be having many small databases which will link up to a global database using the UUID. Hence, the need for a unique key across all databases. It would be extremely helpful if someone could help me figure this out, as it is critical for my project.\n\nPro: No need for (serverid,serverseq) pair primary keys or hacks with \nmodulus based key generation. Doesn't set any pre-determined limit on \nhow many servers/databases may be in a cluster.\n\nCon: Slower than modulo key generation approach, uses more storage. \nForeign key relationships may be slower too.\n\nOverall, UUIDs seem to be a favoured approach. The other way people seem \nto do this is by assigning a unique instance id to each server/database \nout of a maximum \"n\" instances decided at setup time. Every key \ngeneration sequence increments by \"n\" whenever it generates a key, with \nan offset of the server/database id. That way, if n=100, server 1 will \ngenerate primary keys 001, 101, 201, 301, ..., server 2 will generate \nprimary keys 002, 102, 202, 302, ... and so on.\n\nThat works great until you need more than 100 instances, at which point \nyou're really, REALLY boned. In really busy systems it also limits the \ntotal amount of primary key space - but with BIGINT primary keys, that's \nunlikely to be something you need to worry about.\n\nThe composite primary key (serverid,sequenceid) approach avoids the need \nfor a pre-defined maximum number of servers, but can be slow to index \nand can require more storage, especially because of tuple headers.\n\nI have no firsthand experience with any of these approaches so I can't \noffer you a considered opinion. I know that the MS-SQL crowd at least \nstrongly prefer UUIDs, but they have very strong in-database UUID \nsupport. MySQL folks seem to mostly favour the modulo primary key \ngeneration approach. I don't see much discussion of the issue here - I \nget the impression Pg doesn't see heavy use in sharded environments.\n\n-- \nCraig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n",
"msg_date": "Sat, 16 Oct 2010 10:59:33 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID performance as primary key"
},
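A minimal sketch of the "increment by n" scheme described above, assuming a cluster capped at 100 instances and showing what instance number 2 would create; the table and sequence names are illustrative only.

```sql
CREATE SEQUENCE foo_id_seq
    START WITH 2        -- this instance's offset (its instance id)
    INCREMENT BY 100;   -- maximum number of instances fixed at setup time

CREATE TABLE foo (
    id      bigint PRIMARY KEY DEFAULT nextval('foo_id_seq'),
    payload text
);

-- Instance 2 generates 2, 102, 202, ...; instance 3 generates 3, 103, 203, ...
```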
{
"msg_contents": "Wouldn't UUID PK cause a significant drop in insert performance because every insert is now out of order, which leads to a constant re-arranging of the B+ tree? The amount of random IO's that's going to generate would just kill the performance.\n\n--- On Fri, 10/15/10, Craig Ringer <[email protected]> wrote:\n\nFrom: Craig Ringer <[email protected]>\nSubject: Re: [PERFORM] UUID performance as primary key\nTo: \"Navkirat Singh\" <[email protected]>\nCc: [email protected]\nDate: Friday, October 15, 2010, 10:59 PM\n\nOn 16/10/2010 9:58 AM, Navkirat Singh wrote:\n> Hi Guys,\n> \n> I am interested in finding out the pros/cons of using UUID as a primary key field. My requirement states that UUID would be perfect in my case as I will be having many small databases which will link up to a global database using the UUID. Hence, the need for a unique key across all databases. It would be extremely helpful if someone could help me figure this out, as it is critical for my project.\n\nPro: No need for (serverid,serverseq) pair primary keys or hacks with modulus based key generation. Doesn't set any pre-determined limit on how many servers/databases may be in a cluster.\n\nCon: Slower than modulo key generation approach, uses more storage. Foreign key relationships may be slower too.\n\nOverall, UUIDs seem to be a favoured approach. The other way people seem to do this is by assigning a unique instance id to each server/database out of a maximum \"n\" instances decided at setup time. Every key generation sequence increments by \"n\" whenever it generates a key, with an offset of the server/database id. That way, if n=100, server 1 will generate primary keys 001, 101, 201, 301, ..., server 2 will generate primary keys 002, 102, 202, 302, ... and so on.\n\nThat works great until you need more than 100 instances, at which point you're really, REALLY boned. In really busy systems it also limits the total amount of primary key space - but with BIGINT primary keys, that's unlikely to be something you need to worry about.\n\nThe composite primary key (serverid,sequenceid) approach avoids the need for a pre-defined maximum number of servers, but can be slow to index and can require more storage, especially because of tuple headers.\n\nI have no firsthand experience with any of these approaches so I can't offer you a considered opinion. I know that the MS-SQL crowd at least strongly prefer UUIDs, but they have very strong in-database UUID support. MySQL folks seem to mostly favour the modulo primary key generation approach. I don't see much discussion of the issue here - I get the impression Pg doesn't see heavy use in sharded environments.\n\n-- Craig Ringer\n\nTech-related writing at http://soapyfrogs.blogspot.com/\n\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nWouldn't UUID PK cause a significant drop in insert performance because every insert is now out of order, which leads to a constant re-arranging of the B+ tree? 
The amount of random IO's that's going to generate would just kill the performance.--- On Fri, 10/15/10, Craig Ringer <[email protected]> wrote:From: Craig Ringer <[email protected]>Subject: Re: [PERFORM] UUID performance as primary keyTo: \"Navkirat Singh\" <[email protected]>Cc: [email protected]: Friday, October 15, 2010, 10:59 PMOn 16/10/2010 9:58 AM, Navkirat Singh wrote:> Hi Guys,> > I am interested in finding out the pros/cons of using UUID as a\n primary key field. My requirement states that UUID would be perfect in my case as I will be having many small databases which will link up to a global database using the UUID. Hence, the need for a unique key across all databases. It would be extremely helpful if someone could help me figure this out, as it is critical for my project.Pro: No need for (serverid,serverseq) pair primary keys or hacks with modulus based key generation. Doesn't set any pre-determined limit on how many servers/databases may be in a cluster.Con: Slower than modulo key generation approach, uses more storage. Foreign key relationships may be slower too.Overall, UUIDs seem to be a favoured approach. The other way people seem to do this is by assigning a unique instance id to each server/database out of a maximum \"n\" instances decided at setup time. Every key generation sequence increments by \"n\" whenever it generates a key, with an offset of the\n server/database id. That way, if n=100, server 1 will generate primary keys 001, 101, 201, 301, ..., server 2 will generate primary keys 002, 102, 202, 302, ... and so on.That works great until you need more than 100 instances, at which point you're really, REALLY boned. In really busy systems it also limits the total amount of primary key space - but with BIGINT primary keys, that's unlikely to be something you need to worry about.The composite primary key (serverid,sequenceid) approach avoids the need for a pre-defined maximum number of servers, but can be slow to index and can require more storage, especially because of tuple headers.I have no firsthand experience with any of these approaches so I can't offer you a considered opinion. I know that the MS-SQL crowd at least strongly prefer UUIDs, but they have very strong in-database UUID support. MySQL folks seem to mostly favour the modulo primary key generation approach. I\n don't see much discussion of the issue here - I get the impression Pg doesn't see heavy use in sharded environments.-- Craig RingerTech-related writing at http://soapyfrogs.blogspot.com/-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 15 Oct 2010 20:46:49 -0700 (PDT)",
"msg_from": "Andy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID performance as primary key"
},
{
"msg_contents": "On 10/15/10 6:58 PM, Navkirat Singh wrote:\n> I am interested in finding out the pros/cons of using UUID as a\n> primary key field. My requirement states that UUID would be perfect\n> in my case as I will be having many small databases which will link\n> up to a global database using the UUID. Hence, the need for a unique\n> key across all databases.\n\nYou left out one piece of information: How many keys per second do you need?\n\nWe put a sequence in the global database that all secondary databases use to get their IDs. It means an extra connect/disconnect (a pooler can minimize this), so if you're issuing thousands of IDs per second, this isn't a good idea. But for a small-ish number of IDs per second, it gets you the benefit of a universal ID without the size of the UUID field.\n\nCraig (the other one)\n",
"msg_date": "Sat, 16 Oct 2010 10:28:41 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID performance as primary key"
},
{
"msg_contents": "On Fri, Oct 15, 2010 at 10:59 PM, Craig Ringer\n<[email protected]> wrote:\n> On 16/10/2010 9:58 AM, Navkirat Singh wrote:\n>>\n>> Hi Guys,\n>>\n>> I am interested in finding out the pros/cons of using UUID as a primary\n>> key field. My requirement states that UUID would be perfect in my case as I\n>> will be having many small databases which will link up to a global database\n>> using the UUID. Hence, the need for a unique key across all databases. It\n>> would be extremely helpful if someone could help me figure this out, as it\n>> is critical for my project.\n>\n> Pro: No need for (serverid,serverseq) pair primary keys or hacks with\n> modulus based key generation. Doesn't set any pre-determined limit on how\n> many servers/databases may be in a cluster.\n>\n> Con: Slower than modulo key generation approach, uses more storage. Foreign\n> key relationships may be slower too.\n>\n> Overall, UUIDs seem to be a favoured approach. The other way people seem to\n> do this is by assigning a unique instance id to each server/database out of\n> a maximum \"n\" instances decided at setup time. Every key generation sequence\n> increments by \"n\" whenever it generates a key, with an offset of the\n> server/database id. That way, if n=100, server 1 will generate primary keys\n> 001, 101, 201, 301, ..., server 2 will generate primary keys 002, 102, 202,\n> 302, ... and so on.\n>\n> That works great until you need more than 100 instances, at which point\n> you're really, REALLY boned. In really busy systems it also limits the total\n> amount of primary key space - but with BIGINT primary keys, that's unlikely\n> to be something you need to worry about.\n>\n> The composite primary key (serverid,sequenceid) approach avoids the need for\n> a pre-defined maximum number of servers, but can be slow to index and can\n> require more storage, especially because of tuple headers.\n>\n> I have no firsthand experience with any of these approaches so I can't offer\n> you a considered opinion. I know that the MS-SQL crowd at least strongly\n> prefer UUIDs, but they have very strong in-database UUID support. MySQL\n> folks seem to mostly favour the modulo primary key generation approach. I\n> don't see much discussion of the issue here - I get the impression Pg\n> doesn't see heavy use in sharded environments.\n\nI think your analysis is right on the money except for one thing: the\ncomposite approach doesn't need server_id as part of the key and could\nbe left off the index. In fact, it can be left off the table\ncompletely since the value is static for the entire database. You\nobviously can't check RI between databases so storing the value\neverywhere is of no value. server_id only matters when comparing data\nfrom one database to another, which will rarely happen inside a\nparticular client database (and if it does, you'd have to store the\nforeign server_id).\n\nAny 'master' database that did control operations would of course have\nto store server_id for each row but I suspect that's not where the\nbulk of the data would be. Ditto any application code...it would have\nto do something like this:\n\nselect server_id(), foo_id from foo where ..\n\nserver_id() is of course immutable function. Since you are not\nmanaging 2 billion+ servers, this will be an 'int', or even a\nsmallint. I think this approach is stronger than UUID approach in\nevery way. 
Even stronger would be to not use surrogate keys at all,\nbut involve whatever makes the decision that routes data between\ndatabases as part of a more natural key (no way to know for sure if\nthis works for OP w/available info).\n\nI personally dislike sequence hacks.\n\nmerlin\n",
"msg_date": "Sat, 16 Oct 2010 15:35:07 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UUID performance as primary key"
}
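A sketch of the immutable server_id() helper described in the message above, assuming each database is handed a small fixed instance id at setup time; the foo table and the id value are illustrative. Because the value is constant for the whole database, it never has to be stored per row.

```sql
CREATE FUNCTION server_id() RETURNS smallint
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT 2::smallint $$;   -- hypothetical id assigned to this database

-- Application code qualifies local keys only when data leaves the database:
SELECT server_id(), foo_id
  FROM foo
 WHERE foo_id = 42;
```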
] |
[
{
"msg_contents": "We currently have\n\n log_min_duration_statement = 5000\n\nand are seeing statements like the following logged\n\n2010-10-16 05:55:52 EDT [6334]: [1-1] LOG: duration: 5572.517 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n2010-10-16 06:06:24 EDT [26856]: [1-1] LOG: duration: 5617.866 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n2010-10-16 06:06:24 EDT [20740]: [13-1] LOG: duration: 5210.190 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n2010-10-16 08:24:06 EDT [8743]: [1-1] LOG: duration: 6487.346 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n\nQuestions;\n\n1) What do these statements mean?\n2) How do I dig deeper to determine why they are taking longer than 5 secs.\n\nVersion Info -->\n\nselect version();\n version \n\n------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.1.17 on x86_64-unknown-linux-gnu, compiled by GCC gcc \n(GCC) 4.1.1 20070105 (Red Hat 4.1.1-52)\n",
"msg_date": "Sat, 16 Oct 2010 08:32:11 -0400",
"msg_from": "Eric Comeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help with duration of statement: EXECUTE <unnamed> [PREPARE: COMMIT]"
},
{
"msg_contents": "Eric Comeau <[email protected]> writes:\n> 2010-10-16 05:55:52 EDT [6334]: [1-1] LOG: duration: 5572.517 ms \n> statement: EXECUTE <unnamed> [PREPARE: COMMIT]\n> 2010-10-16 06:06:24 EDT [26856]: [1-1] LOG: duration: 5617.866 ms \n> statement: EXECUTE <unnamed> [PREPARE: COMMIT]\n> 2010-10-16 06:06:24 EDT [20740]: [13-1] LOG: duration: 5210.190 ms \n> statement: EXECUTE <unnamed> [PREPARE: COMMIT]\n> 2010-10-16 08:24:06 EDT [8743]: [1-1] LOG: duration: 6487.346 ms \n> statement: EXECUTE <unnamed> [PREPARE: COMMIT]\n\n> Questions;\n\n> 1) What do these statements mean?\n\nThey appear to be COMMIT commands. (It's pretty stupid to be using the\nPREPARE/EXECUTE machinery to execute a COMMIT, but that's evidently what\nyour client-side code is doing.)\n\n> 2) How do I dig deeper to determine why they are taking longer than 5 secs.\n\nMy guess would be overstressed disk subsystem. A COMMIT doesn't require\nmuch except fsync'ing the commit WAL record down to disk ... but if the\ndisk is too busy to process that request quickly, you might have to\nwait. It also seems possible that the filesystem is interlocking the\nfsync on the WAL file with previous writes to other files. Anyway,\nwatching things with vmstat or iostat to see if there's an activity\nspike when this is happening would confirm or refute that idea.\n\n[ thinks for a bit ... ] Actually, it's possible that the COMMIT\ncommand is doing nontrivial work before it can really commit. Perhaps\nyou have deferred foreign keys to check?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Oct 2010 10:13:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with duration of statement: EXECUTE <unnamed> [PREPARE:\n\tCOMMIT]"
},
{
"msg_contents": "Tom Lane wrote:\n> My guess would be overstressed disk subsystem. A COMMIT doesn't require\n> much except fsync'ing the commit WAL record down to disk ... \nDoesn't the \"commit\" statement also release all the locks held by the \ntransaction?\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Mon, 18 Oct 2010 10:25:17 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with duration of statement: EXECUTE <unnamed>\n\t[PREPARE: COMMIT]"
},
{
"msg_contents": "Mladen Gogala <[email protected]> writes:\n> Tom Lane wrote:\n>> My guess would be overstressed disk subsystem. A COMMIT doesn't require\n>> much except fsync'ing the commit WAL record down to disk ... \n\n> Doesn't the \"commit\" statement also release all the locks held by the \n> transaction?\n\nYeah, and there's a nontrivial amount of other cleanup too, but it all\namounts to just changes in in-memory data structures. I don't see that\ntaking five seconds, especially not if commits of similar transactions\nusually take much less than that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Oct 2010 11:02:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with duration of statement: EXECUTE <unnamed> [PREPARE:\n\tCOMMIT]"
},
{
"msg_contents": "On 10-10-18 11:02 AM, Tom Lane wrote:\n> Mladen Gogala<[email protected]> writes:\n>> Tom Lane wrote:\n>>> My guess would be overstressed disk subsystem. A COMMIT doesn't require\n>>> much except fsync'ing the commit WAL record down to disk ...\n>\n>> Doesn't the \"commit\" statement also release all the locks held by the\n>> transaction?\n>\n> Yeah, and there's a nontrivial amount of other cleanup too, but it all\n> amounts to just changes in in-memory data structures. I don't see that\n> taking five seconds, especially not if commits of similar transactions\n> usually take much less than that.\n>\n> \t\t\tregards, tom lane\n>\n\nThanks for the info. The system is a QA system under load. It is running \n200 jobs per minute, so yes the disk it stressed. Our application \nbundles PG into its install and installs the app and database all on the \nsame filesystem. The QA folks probably have lots of logging turned on as \nwell.\n\nI am not sure what front-end client is doing the prepare/execute on a \ncommit - I found it strange, I'm glad someone else does as well.\n\nThe web app is using jboss with connection pooling, but there is a \nscheduler built in C using libpq as well.\n\nThanks for the hint on deferred fk, I'll check, but I think if that was \nthe case it would be happening much more often - like maybe almost all \ncommits for this transaction type.\n\nThe OS is RH 5.2 64-bit, and I'm surprised they don't have iostat \ninstalled on it by default. There is vmstat. The load avg is\n\n06:36:49 up 28 days, 15:20, 5 users, load average: 19.44, 22.59, 22.50\n\nOkay - I'm starting to see other stmts other than just commits taking \nlonger than 5 secs sometimes as well now - stress test has been running \nfor 3 days now...some commits 17 and 15 secs ouch...\n\n 2010-10-19 05:44:35 EDT [11760]: [10-1] LOG: duration: 17137.425 ms \nstatement: commit\n2010-10-19 05:44:36 EDT [10704]: [14-1] LOG: duration: 14928.903 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n2010-10-19 05:44:36 EDT [12535]: [1-1] LOG: duration: 13241.032 ms \nstatement: EXECUTE <unnamed> [PREPARE: update scheduled_job set \nactive_filename=$1, active_state=$2, begin_time=$3, changed_by=$4, \nchanged_on=$5, created_by=$6, created_on=$7, current_run=$8, \ndeferred_time=$9, deleted=$10, end_time=$11, expire_at=$12, \nfrequency_spec=$13, job_class=$14, contract_id=$15, job_name=$16, \nlast_active_status_msg=$17, last_exit_code=$18, package_id=$19, \nperc_denominator=$20, perc_numerator=$21, retry_at=$22, \nscheduled_at=$23, scheduled_state=$24, start_at=$25, states_list=$26, \ntimezone=$27, total_runs=$28 where id=$29]\n2010-10-19 05:44:41 EDT [11760]: [11-1] LOG: duration: 6000.118 ms \nstatement: commit\n2010-10-19 05:44:49 EDT [10704]: [15-1] LOG: duration: 13804.450 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n2010-10-19 05:44:49 EDT [12535]: [2-1] LOG: duration: 13807.317 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n2010-10-19 05:45:00 EDT [11760]: [12-1] LOG: duration: 18879.010 ms \nstatement: commit\n2010-10-19 05:45:18 EDT [10704]: [16-1] LOG: duration: 28177.626 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n2010-10-19 05:45:20 EDT [11760]: [13-1] LOG: duration: 19740.822 ms \nstatement: commit\n2010-10-19 05:45:20 EDT [13093]: [1-1] LOG: duration: 20828.412 ms \nstatement: EXECUTE <unnamed> [PREPARE: COMMIT]\n\n\nI do not have a vmstat to look at from when the stmts above executed, \nwish I did, here is vmstat 5, now but at this time everything is \nexecuting under 5 secs... 
procs -----------memory---------- ---swap-- \n-----io---- --system-- -----cpu------\n r b swpd free buff cache si so bi bo in cs us sy \nid wa st\n65 0 3232 340480 166812 2212884 0 0 0 1519 1676 50841 70 \n30 0 0 0\n57 0 3232 273332 166856 2212796 0 0 0 1157 1609 52887 69 \n31 0 0 0\n73 0 3232 320668 166884 2212668 0 0 0 781 1458 53420 70 \n30 0 0 0\n44 0 3232 393240 166900 2213272 0 0 43 1336 1578 53155 70 \n30 0 0 0\n42 0 3232 349176 166928 2213244 0 0 2 656 1449 52006 70 \n30 0 0 0\n35 0 3232 299320 166972 2213436 0 0 3 1312 1582 51126 75 \n25 0 0 0\n68 0 3232 265868 167012 2213420 0 0 0 739 1484 51982 74 \n26 0 0 0\n42 0 3232 234672 167048 2212440 0 0 2 772 1550 50536 74 \n26 0 0 0\n72 0 3232 252232 167080 2213004 0 0 0 1192 1616 48063 77 \n23 0 0 0\n56 0 3232 336852 167112 2213220 0 0 0 699 1433 50655 78 \n22 0 0 0\n38 0 3232 302212 167148 2213380 0 0 0 786 1578 49895 76 \n24 0 0 0\n61 0 3232 381884 167180 2213260 0 0 6 943 1525 46474 77 \n23 0 0 0\n66 0 3232 366568 167216 2213716 0 0 0 1150 1491 39232 82 \n18 0 0 0\n93 0 3232 343792 167232 2213680 0 0 2 946 1504 39030 82 \n18 0 0 0\n66 0 3232 377376 167268 2213260 0 0 0 954 1427 37206 84 \n16 0 0 0\n60 0 3232 319552 167288 2212952 0 0 0 385 1365 34413 83 \n17 0 0 0\n53 0 3232 320024 167400 2213184 0 0 2 3119 1576 33904 81 \n19 0 0 0\n42 0 3232 256116 167432 2213716 0 0 0 1062 1501 32128 85 \n15 0 0 0\n11 0 3232 783712 167480 2224604 0 0 2 3219 1499 33598 79 \n21 0 0 0\n52 0 3232 828444 167520 2224668 0 0 3 1129 1429 40321 70 \n30 0 0 0\n29 0 3232 933804 167548 2224828 0 0 8 1197 1384 37422 71 \n29 0 0 0\n33 0 3232 974348 167560 2224956 0 0 0 1049 1438 34956 70 \n31 0 0 0\n31 0 3232 941496 167588 2224956 0 0 0 716 1346 34662 68 \n32 0 0 0\n40 0 3232 846540 167616 2225032 0 0 0 758 1426 35924 70 \n30 0 0 0\n\n\nSomething I just realized is they are running LVM as well - and I'm not \nvery up on LVM - here is lvs output\nlvs --aligned\n LV VG Attr LSize Origin Snap% Move Log Copy% Convert\n LogVol00 VolGroup00 -wi-ao 146.97G\n LogVol01 VolGroup00 -wi-ao 1.94G\n\n\n\n\n",
"msg_date": "Tue, 19 Oct 2010 07:23:57 -0400",
"msg_from": "Eric Comeau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with duration of statement: EXECUTE <unnamed>\n\t[PREPARE: COMMIT]"
},
{
"msg_contents": "Eric Comeau <[email protected]> writes:\n> Okay - I'm starting to see other stmts other than just commits taking \n> longer than 5 secs sometimes as well now - stress test has been running \n> for 3 days now...some commits 17 and 15 secs ouch...\n\nIf it's not just commits then some of the stranger theories go away.\nI think you're almost certainly looking at overstressed disk.\n\nIt'd be worth your while to turn on checkpoint logging and see if\nthe slow operations occur during checkpoints. If so, you might be\nable to ameliorate things by tweaking the checkpoint parameters\nto spread out checkpoint I/O load some more. But the bottom line\nhere is that you haven't got much headroom between your application's\nneeds and the max throughput available from your disk. Better disk\nhardware will be the only real cure.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Oct 2010 10:25:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with duration of statement: EXECUTE <unnamed> [PREPARE:\n\tCOMMIT]"
}
] |
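A minimal sketch of how one might act on the checkpoint-logging suggestion above, assuming an 8.3-or-later server; the parameter names are standard GUCs, but the values shown are purely illustrative and not a recommendation for this particular system:

    -- inspect the current checkpoint- and logging-related settings
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('log_checkpoints',
                   'checkpoint_segments',
                   'checkpoint_timeout',
                   'checkpoint_completion_target',
                   'log_min_duration_statement');

    -- then, in postgresql.conf (illustrative values; reload afterwards):
    --   log_checkpoints = on
    --   checkpoint_completion_target = 0.9
    --   checkpoint_segments = 16

If the multi-second commits line up with the logged checkpoints, that points back at the overstressed disk rather than at lock cleanup.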
[
{
"msg_contents": "There was some doubt as for the speed of doing the select count(*) in \nPostgreSQL and Oracle.\nTo that end, I copied the most part of the Oracle table I used before to \nPostgres. Although the copy\nwasn't complete, the resulting table is already significantly larger \nthan the table it was copied from. The result still shows that Oracle is \nsignificantly faster:\nOracle result:\n\nSQL> alter system flush buffer_cache;\n\nSystem altered.\n\nSQL> select /*+ full(NO) noparallel */ count(*) from ni_occurrence no;\n\n COUNT(*)\n----------\n 402062638\n\nElapsed: 00:03:16.45\n\n\n\nHints are necessary because Oracle table is declared as parallel and I \ndidn't want the PK index to be used for counting. Oracle has a good \nhabit of using PK's for counting, if available.\n\n\nSQL> select bytes/1048576 as MB\n 2 from user_segments\n 3 where segment_name='NI_OCCURRENCE';\n\n MB\n----------\n 35329\n\nElapsed: 00:00:00.85\nSQL>\n\nSo, oracle stores 402 million records in 35GB and counts them in 3 \nminutes 16.45 seconds The very same table was partially copied to \nPostgres, copying died with ORA-01555 snapshot too old sometimes this \nmorning. I ran vacuumdb -f -z on the database after the copy completed \nand the results are below.\n\nmgogala=# select count(*) from ni_occurrence;\n count\n-----------\n 382400476\n(1 row)\n\nTime: 221716.466 ms\nmgogala=#\nmgogala=# select 221/60::real;\n ?column?\n------------------\n 3.68333333333333\n(1 row)\n\nTime: 0.357 ms\nmgogala=#\nmgogala=# select pg_size_pretty(pg_table_size('ni_occurrence'));\n pg_size_pretty\n----------------\n 46 GB\n(1 row)\n\nTime: 0.420 ms\nmgogala=#\n\nThe database wasn't restarted, no caches were flushed, the comparison \nwas done with a serious advantage for PostgreSQL. Postgres needed 3.68 \nminutes to complete the count which is about the same Oracle but still \nsomewhat slower. Also, I am worried about the sizes. Postgres table is \n11GB larger than the original, despite having less data. That was an \nunfair and unbalanced comparison because Oracle's cache was flushed and \nOracle was artificially restrained to use the full table scan without \nthe aid of parallelism. 
Here is the same result, with no hints and the \nautotrace on, which shows what happens if I turn the hints off:\n\nSQL> select count(*) from ni_occurrence no;\n\n COUNT(*)\n----------\n 402062638\n\nElapsed: 00:00:52.61\n\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 53476935\n\n--------------------------------------------------------------------------------\n----------------------------------------\n\n| Id | Operation | Name | Rows | Cost (%CPU)|\n Time | TQ |IN-OUT| PQ Distrib |\n\n--------------------------------------------------------------------------------\n----------------------------------------\n\n| 0 | SELECT STATEMENT | | 1 | 54001 (19)|\n 00:01:08 | | | |\n\n| 1 | SORT AGGREGATE | | 1 | |\n | | | |\n\n| 2 | PX COORDINATOR | | | |\n | | | |\n\n| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | |\n | Q1,00 | P->S | QC (RAND) |\n\n| 4 | SORT AGGREGATE | | 1 | |\n | Q1,00 | PCWP | |\n\n| 5 | PX BLOCK ITERATOR | | 402M| 54001 (19)|\n 00:01:08 | Q1,00 | PCWC | |\n\n| 6 | INDEX FAST FULL SCAN| IDX_NI_OCCURRENCE_PID | 402M| \n54001 (19)|\n 00:01:08 | Q1,00 | PCWP | |\n\n--------------------------------------------------------------------------------\n----------------------------------------\n\nIt took just 52 seconds to count everything, but Oracle didn't even scan \nthe table, it scanned a unique index, in parallel. That is the \nalgorithmic advantage that forced me to restrict the execution plan with \nhints. My conclusion is that the speed of the full scan is OK, about the \nsame as Oracle speed. There are, however, three significant algorithm \nadvantages on the Oracle's side:\n\n1) Oracle can use indexes to calculate \"select count\"\n2) Oracle can use parallelism.\n3) Oracle can use indexes in combination with the parallel processing.\n\n\n\nHere are the descriptions:\n\nSQL> desc ni_occurrence\n Name Null? Type\n ----------------------------------------- -------- \n----------------------------\n ID NOT NULL NUMBER(22)\n PERMANENT_ID NOT NULL VARCHAR2(12)\n CALL_LETTERS NOT NULL VARCHAR2(5)\n AIRDATE NOT NULL DATE\n DURATION NOT NULL NUMBER(4)\n PROGRAM_TITLE VARCHAR2(360)\n COST NUMBER(15)\n ASSETID NUMBER(12)\n MARKET_ID NUMBER\n GMT_TIME DATE\n ORIG_ST_OCC_ID NUMBER\n EPISODE VARCHAR2(450)\n IMPRESSIONS NUMBER\n\nSQL>\nmgogala=# \\d ni_occurrence\n Table \"public.ni_occurrence\"\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n id | bigint | not null\n permanent_id | character varying(12) | not null\n call_letters | character varying(5) | not null\n airdate | timestamp without time zone | not null\n duration | smallint | not null\n program_title | character varying(360) |\n cost | bigint |\n assetid | bigint |\n market_id | bigint |\n gmt_time | timestamp without time zone |\n orig_st_occ_id | bigint |\n episode | character varying(450) |\n impressions | bigint |\nIndexes:\n \"ni_occurrence_pk\" PRIMARY KEY, btree (id)\n\nmgogala=#\n\nOracle block is 16k, version is 10.2.0.5 RAC, 64 bit (is anybody still \nusing 32bit db servers?) . Postgres is 9.0.1, 64 bit. Both machines are \nrunning Red Hat 5.5:\n\n\n[mgogala@lpo-postgres-d01 ~]$ cat /etc/redhat-release\nRed Hat Enterprise Linux Server release 5.5 (Tikanga)\n[mgogala@lpo-postgres-d01 ~]$\n\nLinux lpo-postgres-d01 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT \n2010 x86_64 x86_64 x86_64 GNU/Linux\n[mgogala@lpo-postgres-d01 ~]$\n\n\n\n-- \nhttp://mgogala.freehostia.com\n\n",
"msg_date": "Sat, 16 Oct 2010 12:51:59 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Select count(*), the sequel"
},
{
"msg_contents": "16.10.10 19:51, Mladen Gogala написав(ла):\n> There was some doubt as for the speed of doing the select count(*) in \n> PostgreSQL and Oracle.\n> To that end, I copied the most part of the Oracle table I used before \n> to Postgres. Although the copy\n> wasn't complete, the resulting table is already significantly larger \n> than the table it was copied from. The result still shows that Oracle \n> is significantly faster:\n\nHello.\n\nDid you vacuum postgresql DB before the count(*). I ask this because \n(unless table was created & loaded in same transaction) on the first \nscan, postgresql has to write hint bits to the whole table. Second scan \nmay be way faster.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Mon, 18 Oct 2010 10:58:39 +0300",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
},
{
"msg_contents": " On 10/18/2010 3:58 AM, Vitalii Tymchyshyn wrote:\n> Hello.\n>\n> Did you vacuum postgresql DB before the count(*). I ask this because\n> (unless table was created& loaded in same transaction) on the first\n> scan, postgresql has to write hint bits to the whole table. Second scan\n> may be way faster.\n>\n> Best regards, Vitalii Tymchyshyn\n\nVitalli, yes I did vacuum before the count.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Mon, 18 Oct 2010 07:41:41 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
}
] |
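A quick way to observe the hint-bit effect discussed above is simply to time the same scan twice; this sketch assumes the ni_occurrence table from the thread and an otherwise idle session:

    \timing
    -- the first full scan after a bulk load may also be writing hint bits
    SELECT count(*) FROM ni_occurrence;
    -- the repeat scan reads already-hinted pages and is often noticeably faster
    SELECT count(*) FROM ni_occurrence;
    -- a plain VACUUM also sets hint bits in place, without rewriting the table
    VACUUM ANALYZE ni_occurrence;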
[
{
"msg_contents": "There was some doubt as for the speed of doing the select count(*) in \nPostgreSQL and Oracle.\nTo that end, I copied the most part of the Oracle table I used before to \nPostgres. Although the copy\nwasn't complete, the resulting table is already significantly larger \nthan the table it was copied from. The result still shows that Oracle is \nsignificantly faster:\nOracle result:\n\nSQL> alter system flush buffer_cache;\n\nSystem altered.\n\nSQL> select /*+ full(NO) noparallel */ count(*) from ni_occurrence no;\n\n COUNT(*)\n----------\n 402062638\n\nElapsed: 00:03:16.45\n\n\n\nHints are necessary because Oracle table is declared as parallel and I \ndidn't want the PK index to be used for counting. Oracle has a good \nhabit of using PK's for counting, if available.\n\n\nSQL> select bytes/1048576 as MB\n 2 from user_segments\n 3 where segment_name='NI_OCCURRENCE';\n\n MB\n----------\n 35329\n\nElapsed: 00:00:00.85\nSQL>\n\nSo, oracle stores 402 million records in 35GB and counts them in 3 \nminutes 16.45 seconds The very same table was partially copied to \nPostgres, copying died with ORA-01555 snapshot too old sometimes this \nmorning. I ran vacuumdb -f -z on the database after the copy completed \nand the results are below.\n\nmgogala=# select count(*) from ni_occurrence;\n count\n-----------\n 382400476\n(1 row)\n\nTime: 221716.466 ms\nmgogala=#\nmgogala=# select 221/60::real;\n ?column?\n------------------\n 3.68333333333333\n(1 row)\n\nTime: 0.357 ms\nmgogala=#\nmgogala=# select pg_size_pretty(pg_table_size('ni_occurrence'));\n pg_size_pretty\n----------------\n 46 GB\n(1 row)\n\nTime: 0.420 ms\nmgogala=#\n\nThe database wasn't restarted, no caches were flushed, the comparison \nwas done with a serious advantage for PostgreSQL. Postgres needed 3.68 \nminutes to complete the count which is about the same Oracle but still \nsomewhat slower. Also, I am worried about the sizes. Postgres table is \n11GB larger than the original, despite having less data. That was an \nunfair and unbalanced comparison because Oracle's cache was flushed and \nOracle was artificially restrained to use the full table scan without \nthe aid of parallelism. 
Here is the same result, with no hints and the \nautotrace on, which shows what happens if I turn the hints off:\n\nSQL> select count(*) from ni_occurrence no;\n\n COUNT(*)\n----------\n 402062638\n\nElapsed: 00:00:52.61\n\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 53476935\n\n-------------------------------------------------------------------------------- \n\n----------------------------------------\n\n| Id | Operation | Name | Rows | Cost (%CPU)|\n Time | TQ |IN-OUT| PQ Distrib |\n\n-------------------------------------------------------------------------------- \n\n----------------------------------------\n\n| 0 | SELECT STATEMENT | | 1 | 54001 (19)|\n 00:01:08 | | | |\n\n| 1 | SORT AGGREGATE | | 1 | |\n | | | |\n\n| 2 | PX COORDINATOR | | | |\n | | | |\n\n| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 \n| |\n | Q1,00 | P->S | QC (RAND) |\n\n| 4 | SORT AGGREGATE | | 1 | |\n | Q1,00 | PCWP | |\n\n| 5 | PX BLOCK ITERATOR | | 402M| 54001 (19)|\n 00:01:08 | Q1,00 | PCWC | |\n\n| 6 | INDEX FAST FULL SCAN| IDX_NI_OCCURRENCE_PID | 402M| \n54001 (19)|\n 00:01:08 | Q1,00 | PCWP | |\n\n-------------------------------------------------------------------------------- \n\n----------------------------------------\n\nIt took just 52 seconds to count everything, but Oracle didn't even scan \nthe table, it scanned a unique index, in parallel. That is the \nalgorithmic advantage that forced me to restrict the execution plan with \nhints. My conclusion is that the speed of the full scan is OK, about the \nsame as Oracle speed. There are, however, three significant algorithm \nadvantages on the Oracle's side:\n\n1) Oracle can use indexes to calculate \"select count\"\n2) Oracle can use parallelism.\n3) Oracle can use indexes in combination with the parallel processing.\n\n\n\nHere are the descriptions:\n\nSQL> desc ni_occurrence\n Name Null? Type\n ----------------------------------------- -------- \n----------------------------\n ID NOT NULL NUMBER(22)\n PERMANENT_ID NOT NULL VARCHAR2(12)\n CALL_LETTERS NOT NULL VARCHAR2(5)\n AIRDATE NOT NULL DATE\n DURATION NOT NULL NUMBER(4)\n PROGRAM_TITLE VARCHAR2(360)\n COST NUMBER(15)\n ASSETID NUMBER(12)\n MARKET_ID NUMBER\n GMT_TIME DATE\n ORIG_ST_OCC_ID NUMBER\n EPISODE VARCHAR2(450)\n IMPRESSIONS NUMBER\n\nSQL>\nmgogala=# \\d ni_occurrence\n Table \"public.ni_occurrence\"\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n id | bigint | not null\n permanent_id | character varying(12) | not null\n call_letters | character varying(5) | not null\n airdate | timestamp without time zone | not null\n duration | smallint | not null\n program_title | character varying(360) |\n cost | bigint |\n assetid | bigint |\n market_id | bigint |\n gmt_time | timestamp without time zone |\n orig_st_occ_id | bigint |\n episode | character varying(450) |\n impressions | bigint |\nIndexes:\n \"ni_occurrence_pk\" PRIMARY KEY, btree (id)\n\nmgogala=#\n\nOracle block is 16k, version is 10.2.0.5 RAC, 64 bit (is anybody still \nusing 32bit db servers?) . Postgres is 9.0.1, 64 bit. Both machines are \nrunning Red Hat 5.5:\n\n\n[mgogala@lpo-postgres-d01 ~]$ cat /etc/redhat-release\nRed Hat Enterprise Linux Server release 5.5 (Tikanga)\n[mgogala@lpo-postgres-d01 ~]$\n\nLinux lpo-postgres-d01 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT \n2010 x86_64 x86_64 x86_64 GNU/Linux\n[mgogala@lpo-postgres-d01 ~]$\n\n-- \nMladen Gogala\nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\nThe Leader in integrated Media Intelligence Solutions",
"msg_date": "Sat, 16 Oct 2010 12:53:50 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Select count(*), the sequel"
},
{
"msg_contents": "Hi,\n\nInteresting data points. The amount of rows that you managed to\ninsert into PostgreSQL before Oracle gave up the ghost is 95%\nof the rows in the Oracle version of the database. To count 5%\nfewer rows, it took PostgreSQL 24 seconds longer. Or adjusting\nfor the missing rows, 52 seconds longer for the entire table\nor 18% longer than the full table scan in Oracle. This seems to\nbe well within the table layout size differences, possibly due\nto the fillfactor used --not really bad at all. Now the timings\ndue to algorithm changes are interesting as indicating the room\nfor improvement due to those type of changes. A parallel sequential\nfull-table scan in PostgreSQL could provide the same speed up.\nCurrently that is not possible ... but development continues a\npace...\n\nIn fact, developing such functions in PostgreSQL could end up\nbeing less expensive long-term than licensing Oracle RAC. I think\nthe point that you have helped make is that PostgreSQL performs\nvery well for many use cases that have typically been relegated\nto expensive commecial databases such as Oracle, DB2,...\n\nRegards,\nKen\n\nOn Sat, Oct 16, 2010 at 12:53:50PM -0400, Mladen Gogala wrote:\n> There was some doubt as for the speed of doing the select count(*) in \n> PostgreSQL and Oracle.\n> To that end, I copied the most part of the Oracle table I used before to \n> Postgres. Although the copy\n> wasn't complete, the resulting table is already significantly larger than \n> the table it was copied from. The result still shows that Oracle is \n> significantly faster:\n> Oracle result:\n>\n> SQL> alter system flush buffer_cache;\n>\n> System altered.\n>\n> SQL> select /*+ full(NO) noparallel */ count(*) from ni_occurrence no;\n>\n> COUNT(*)\n> ----------\n> 402062638\n>\n> Elapsed: 00:03:16.45\n>\n>\n>\n> Hints are necessary because Oracle table is declared as parallel and I \n> didn't want the PK index to be used for counting. Oracle has a good habit \n> of using PK's for counting, if available.\n>\n>\n> SQL> select bytes/1048576 as MB\n> 2 from user_segments\n> 3 where segment_name='NI_OCCURRENCE';\n>\n> MB\n> ----------\n> 35329\n>\n> Elapsed: 00:00:00.85\n> SQL>\n>\n> So, oracle stores 402 million records in 35GB and counts them in 3 minutes \n> 16.45 seconds The very same table was partially copied to Postgres, \n> copying died with ORA-01555 snapshot too old sometimes this morning. I ran \n> vacuumdb -f -z on the database after the copy completed and the results are \n> below.\n>\n> mgogala=# select count(*) from ni_occurrence;\n> count\n> -----------\n> 382400476\n> (1 row)\n>\n> Time: 221716.466 ms\n> mgogala=#\n> mgogala=# select 221/60::real;\n> ?column?\n> ------------------\n> 3.68333333333333\n> (1 row)\n>\n> Time: 0.357 ms\n> mgogala=#\n> mgogala=# select pg_size_pretty(pg_table_size('ni_occurrence'));\n> pg_size_pretty\n> ----------------\n> 46 GB\n> (1 row)\n>\n> Time: 0.420 ms\n> mgogala=#\n>\n> The database wasn't restarted, no caches were flushed, the comparison was \n> done with a serious advantage for PostgreSQL. Postgres needed 3.68 minutes \n> to complete the count which is about the same Oracle but still somewhat \n> slower. Also, I am worried about the sizes. Postgres table is 11GB larger \n> than the original, despite having less data. That was an unfair and \n> unbalanced comparison because Oracle's cache was flushed and Oracle was \n> artificially restrained to use the full table scan without the aid of \n> parallelism. 
Here is the same result, with no hints and the autotrace on, \n> which shows what happens if I turn the hints off:\n>\n> SQL> select count(*) from ni_occurrence no;\n>\n> COUNT(*)\n> ----------\n> 402062638\n>\n> Elapsed: 00:00:52.61\n>\n> Execution Plan\n> ----------------------------------------------------------\n> Plan hash value: 53476935\n>\n> --------------------------------------------------------------------------------\n> ----------------------------------------\n>\n> | Id | Operation | Name | Rows | Cost (%CPU)|\n> Time | TQ |IN-OUT| PQ Distrib |\n>\n> --------------------------------------------------------------------------------\n> ----------------------------------------\n>\n> | 0 | SELECT STATEMENT | | 1 | 54001 (19)|\n> 00:01:08 | | | |\n>\n> | 1 | SORT AGGREGATE | | 1 | |\n> | | | |\n>\n> | 2 | PX COORDINATOR | | | |\n> | | | |\n>\n> | 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | \n> |\n> | Q1,00 | P->S | QC (RAND) |\n>\n> | 4 | SORT AGGREGATE | | 1 | |\n> | Q1,00 | PCWP | |\n>\n> | 5 | PX BLOCK ITERATOR | | 402M| 54001 (19)|\n> 00:01:08 | Q1,00 | PCWC | |\n>\n> | 6 | INDEX FAST FULL SCAN| IDX_NI_OCCURRENCE_PID | 402M| 54001 \n> (19)|\n> 00:01:08 | Q1,00 | PCWP | |\n>\n> --------------------------------------------------------------------------------\n> ----------------------------------------\n>\n> It took just 52 seconds to count everything, but Oracle didn't even scan \n> the table, it scanned a unique index, in parallel. That is the algorithmic \n> advantage that forced me to restrict the execution plan with hints. My \n> conclusion is that the speed of the full scan is OK, about the same as \n> Oracle speed. There are, however, three significant algorithm advantages \n> on the Oracle's side:\n>\n> 1) Oracle can use indexes to calculate \"select count\"\n> 2) Oracle can use parallelism.\n> 3) Oracle can use indexes in combination with the parallel processing.\n>\n>\n>\n> Here are the descriptions:\n>\n> SQL> desc ni_occurrence\n> Name Null? Type\n> ----------------------------------------- -------- \n> ----------------------------\n> ID NOT NULL NUMBER(22)\n> PERMANENT_ID NOT NULL VARCHAR2(12)\n> CALL_LETTERS NOT NULL VARCHAR2(5)\n> AIRDATE NOT NULL DATE\n> DURATION NOT NULL NUMBER(4)\n> PROGRAM_TITLE VARCHAR2(360)\n> COST NUMBER(15)\n> ASSETID NUMBER(12)\n> MARKET_ID NUMBER\n> GMT_TIME DATE\n> ORIG_ST_OCC_ID NUMBER\n> EPISODE VARCHAR2(450)\n> IMPRESSIONS NUMBER\n>\n> SQL>\n> mgogala=# \\d ni_occurrence\n> Table \"public.ni_occurrence\"\n> Column | Type | Modifiers\n> ----------------+-----------------------------+-----------\n> id | bigint | not null\n> permanent_id | character varying(12) | not null\n> call_letters | character varying(5) | not null\n> airdate | timestamp without time zone | not null\n> duration | smallint | not null\n> program_title | character varying(360) |\n> cost | bigint |\n> assetid | bigint |\n> market_id | bigint |\n> gmt_time | timestamp without time zone |\n> orig_st_occ_id | bigint |\n> episode | character varying(450) |\n> impressions | bigint |\n> Indexes:\n> \"ni_occurrence_pk\" PRIMARY KEY, btree (id)\n>\n> mgogala=#\n>\n> Oracle block is 16k, version is 10.2.0.5 RAC, 64 bit (is anybody still \n> using 32bit db servers?) . Postgres is 9.0.1, 64 bit. 
Both machines are \n> running Red Hat 5.5:\n>\n>\n> [mgogala@lpo-postgres-d01 ~]$ cat /etc/redhat-release\n> Red Hat Enterprise Linux Server release 5.5 (Tikanga)\n> [mgogala@lpo-postgres-d01 ~]$\n>\n> Linux lpo-postgres-d01 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 \n> x86_64 x86_64 x86_64 GNU/Linux\n> [mgogala@lpo-postgres-d01 ~]$\n>\n> -- \n> Mladen Gogala\n> Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n> The Leader in integrated Media Intelligence Solutions\n>\n",
"msg_date": "Sat, 16 Oct 2010 13:44:43 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
},
{
"msg_contents": "On Sat, Oct 16, 2010 at 2:44 PM, Kenneth Marshall <[email protected]> wrote:\n> Interesting data points. The amount of rows that you managed to\n> insert into PostgreSQL before Oracle gave up the ghost is 95%\n> of the rows in the Oracle version of the database. To count 5%\n> fewer rows, it took PostgreSQL 24 seconds longer. Or adjusting\n> for the missing rows, 52 seconds longer for the entire table\n> or 18% longer than the full table scan in Oracle. This seems to\n> be well within the table layout size differences, possibly due\n> to the fillfactor used --not really bad at all.\n\nI don't think this is due to fillfactor - the default fillfactor is\n100, and anyway we ARE larger on disk than Oracle. We really need to\ndo something about that, in the changes to NUMERIC in 9.1 are a step\nin that direction, but I think a lot more work is needed. I think it\nwould be really helpful if we could try to quantify where the extra\nspace is going.\n\nSome places to look:\n\n- Bloated representations of individual datatypes. (I know that even\nthe new NUMERIC format is larger than Oracle's NUMBER.)\n- Excessive per-tuple overhead. Ours is 24 bytes, plus the item pointer.\n- Alignment requirements. We have a fair number of datatypes that\nrequire 4 or 8 byte alignment. How much is that hurting us?\n- Compression. Maybe Oracle's algorithm does better than PGLZ.\n\nIf we can quantify where we're losing vs. Oracle - or any other\ncompetitor - that might give us some idea where to start looking.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 26 Oct 2010 18:45:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
},
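One low-tech way to start quantifying where the extra space goes, using only functions available in 8.4/9.0 (ni_occurrence is the table from this thread; the sample size is arbitrary):

    -- heap size versus total size (heap + TOAST + indexes)
    SELECT pg_size_pretty(pg_relation_size('ni_occurrence'))       AS heap_size,
           pg_size_pretty(pg_total_relation_size('ni_occurrence')) AS total_size;

    -- rough average per-row datum width over a sample of rows
    SELECT avg(pg_column_size(s.*)) AS avg_row_bytes
    FROM (SELECT * FROM ni_occurrence LIMIT 100000) s;

Comparing those numbers against the column definitions gives a first estimate of how much is payload and how much is per-tuple and alignment overhead.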
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> I don't think this is due to fillfactor - the default fillfactor is\n> 100, and anyway we ARE larger on disk than Oracle. We really need to\n> do something about that, in the changes to NUMERIC in 9.1 are a step\n> in that direction, but I think a lot more work is needed.\n\nOf course, the chances of doing anything more than extremely-marginal\nkluges without breaking on-disk compatibility are pretty tiny. Given\nwhere we are at the moment, I see no appetite for forced dump-and-reloads\nfor several years to come. So I don't foresee that anything is likely\nto come of such efforts in the near future. Even if somebody had a\ngreat idea that would make things smaller without any other penalty,\nwhich I'm not sure I believe either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Oct 2010 18:51:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel "
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 6:51 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> I don't think this is due to fillfactor - the default fillfactor is\n>> 100, and anyway we ARE larger on disk than Oracle. We really need to\n>> do something about that, in the changes to NUMERIC in 9.1 are a step\n>> in that direction, but I think a lot more work is needed.\n>\n> Of course, the chances of doing anything more than extremely-marginal\n> kluges without breaking on-disk compatibility are pretty tiny. Given\n> where we are at the moment, I see no appetite for forced dump-and-reloads\n> for several years to come. So I don't foresee that anything is likely\n> to come of such efforts in the near future. Even if somebody had a\n> great idea that would make things smaller without any other penalty,\n> which I'm not sure I believe either.\n\nLet's try not to prejudge the outcome without doing the research.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 26 Oct 2010 21:48:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
},
{
"msg_contents": "> Even if somebody had a\n> great idea that would make things smaller without any other penalty,\n> which I'm not sure I believe either.\n\nI'd say that the only things likely to bring an improvement significant \nenough to warrant the (quite large) hassle of implementation would be :\n\n- read-only / archive tables (get rid of row header overhead)\n- in-page compression using per-column delta storage for instance (no \nrandom access penalty, but hard to implement, maybe easier for read-only \ntables)\n- dumb LZO-style compression (license problems, needs parallel \ndecompressor, random access penalty, hard to implement too)\n",
"msg_date": "Wed, 27 Oct 2010 21:52:49 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel "
},
{
"msg_contents": "\"Pierre C\" <[email protected]> wrote:\n \n> in-page compression\n \nHow would that be different from the in-page compression done by\nTOAST now? Or are you just talking about being able to make it\nmore aggressive?\n \n-Kevin\n",
"msg_date": "Wed, 27 Oct 2010 15:11:16 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 09:52:49PM +0200, Pierre C wrote:\n>> Even if somebody had a\n>> great idea that would make things smaller without any other penalty,\n>> which I'm not sure I believe either.\n>\n> I'd say that the only things likely to bring an improvement significant \n> enough to warrant the (quite large) hassle of implementation would be :\n>\n> - read-only / archive tables (get rid of row header overhead)\n> - in-page compression using per-column delta storage for instance (no \n> random access penalty, but hard to implement, maybe easier for read-only \n> tables)\n> - dumb LZO-style compression (license problems, needs parallel \n> decompressor, random access penalty, hard to implement too)\n>\n\nDifferent algorithms have been discussed before. A quick search turned\nup:\n\nquicklz - GPL or commercial\nfastlz - MIT works with BSD okay\nzippy - Google - no idea about the licensing\nlzf - BSD-type\nlzo - GPL or commercial\nzlib - current algorithm\n\nOf these lzf can compress at almost 3.7X of zlib and decompress at 1.7X\nand fastlz can compress at 3.1X of zlib and decompress at 1.9X. The same\ncomparison put lzo at 3.0X for compression and 1.8X decompress. The block\ndesign of lzl/fastlz may be useful to support substring access to toasted\ndata among other ideas that have been floated here in the past.\n\nJust keeping the hope alive for faster compression.\n\nCheers,\nKen\n",
"msg_date": "Wed, 27 Oct 2010 15:41:15 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
},
{
"msg_contents": "Kenneth Marshall, 27.10.2010 22:41:\n> Different algorithms have been discussed before. A quick search turned\n> up:\n>\n> quicklz - GPL or commercial\n> fastlz - MIT works with BSD okay\n> zippy - Google - no idea about the licensing\n> lzf - BSD-type\n> lzo - GPL or commercial\n> zlib - current algorithm\n>\n> Of these lzf can compress at almost 3.7X of zlib and decompress at 1.7X\n> and fastlz can compress at 3.1X of zlib and decompress at 1.9X. The same\n> comparison put lzo at 3.0X for compression and 1.8X decompress. The block\n> design of lzl/fastlz may be useful to support substring access to toasted\n> data among other ideas that have been floated here in the past.\n>\n> Just keeping the hope alive for faster compression.\n\nWhat about a dictionary based compression (like DB2 does)?\n\nIn a nutshell: it creates a list of \"words\" in a page. For each word, the occurance in the db-block are stored and the actual word is removed from the page/block itself. This covers all rows on a page and can give a very impressive overall compression.\nThis compression is not done only on disk but in-memory as well (the page is loaded with the dictionary into memory).\n\nI believe Oracle 11 does something similar.\n\nRegards\nThomas\n\n",
"msg_date": "Wed, 27 Oct 2010 22:54:11 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
},
{
"msg_contents": "Kenneth Marshall <[email protected]> writes:\n> Just keeping the hope alive for faster compression.\n\nIs there any evidence that that's something we should worry about?\nI can't recall ever having seen a code profile that shows the\npg_lzcompress.c code high enough to look like a bottleneck compared\nto other query costs.\n\nNow, the benefits of 2X or 3X space savings would be pretty obvious,\nbut I've seen no evidence that we could easily achieve that either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 17:49:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel "
},
{
"msg_contents": "> \"Pierre C\" <[email protected]> wrote:\n>\n>> in-page compression\n> How would that be different from the in-page compression done by\n> TOAST now? Or are you just talking about being able to make it\n> more aggressive?\n> -Kevin\n\nWell, I suppose lzo-style compression would be better used on data that is \nwritten a few times maximum and then mostly read (like a forum, data \nwarehouse, etc). Then, good candidate pages for compression also probably \nhave all tuples visible to all transactions, therefore all row headers \nwould be identical and would compress very well. Of course this introduces \na \"small\" problem for deletes and updates...\n\nDelta compression is : take all the values for a column inside a page, \nlook at the values and their statistical distribution, notice for example \nthat they're all INTs and the values on the page fit between X+n and X-n, \nstore X and only encode n with as few bits as possible for each row. This \nis only an example, the idea is to exploit the fact that on the same page, \nall the values of one column often have lots in common. xid values in row \nheaders are a good example of this.\n\nTOAST compresses datums, so it performs well on large datums ; this is the \nopposite, the idea is to compress small tuples by using the reduncancies \nbetween tuples.\n",
"msg_date": "Thu, 28 Oct 2010 11:33:16 +0200",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 05:49:42PM -0400, Tom Lane wrote:\n> Kenneth Marshall <[email protected]> writes:\n> > Just keeping the hope alive for faster compression.\n> \n> Is there any evidence that that's something we should worry about?\n> I can't recall ever having seen a code profile that shows the\n> pg_lzcompress.c code high enough to look like a bottleneck compared\n> to other query costs.\n> \n> Now, the benefits of 2X or 3X space savings would be pretty obvious,\n> but I've seen no evidence that we could easily achieve that either.\n> \n> \t\t\tregards, tom lane\n> \n\nOne use is to allow substr() on toasted values without needing to\ndecompress the entire contents. Another possibility is to keep\nlarger fields compressed in memory for some value of \"larger\".\nWith faster compression, it might by useful to compress the WAL\nfiles to support faster data rates and therefore update rates\nfor the same hardware. And there are always the in page common\nsubstring storage optimizations to reduce index/table sizes.\n\nRegards,\nKen\n",
"msg_date": "Thu, 28 Oct 2010 08:34:36 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select count(*), the sequel"
}
] |
[
{
"msg_contents": "At present for reporting I use following types of query:\nselect crm.*, crm_cnt.cnt\nfrom crm,\n(select count(*) as cnt from crm) crm_cnt;\nHere count query is used to find the total number of records.\nSame FROM clause is copied in both the part of the query.\nIs there any other good alternative way to get this similar value?\n\nAt present for reporting I use following types of query: select crm.*, crm_cnt.cntfrom crm,(select count(*) as cnt from crm) crm_cnt;Here count query is used to find the total number of records. \nSame FROM clause is copied in both the part of the query.Is there any other good alternative way to get this similar value?",
"msg_date": "Mon, 18 Oct 2010 11:16:11 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to get the total number of records in report"
},
{
"msg_contents": "On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman <[email protected]> wrote:\n> At present for reporting I use following types of query:\n> select crm.*, crm_cnt.cnt\n> from crm,\n> (select count(*) as cnt from crm) crm_cnt;\n> Here count query is used to find the total number of records.\n> Same FROM clause is copied in both the part of the query.\n> Is there any other good alternative way to get this similar value?\n\nWell, it looks like you're currently executing two sequential scans\nover the \"crm\" table. And you're including the total row-count as a\nseparate column in every row you get back, although you really only\nneed this piece of information once.\n\nSince you're fetching all of the \"crm\" table anyway, why not get rid\nof the COUNT(*) entirely and just keep a count on the client-side of\nthe total number of rows you've fetched?\n\nJosh\n",
"msg_date": "Mon, 18 Oct 2010 11:52:45 -0400",
"msg_from": "Josh Kupershmidt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to get the total number of records in report"
},
{
"msg_contents": "Not actualy. I used pagination with limit clause in details query and I need\nthe total number of records in the detail query.\n\nOn Mon, Oct 18, 2010 at 9:52 PM, Josh Kupershmidt <[email protected]>wrote:\n\n> On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman <[email protected]> wrote:\n> > At present for reporting I use following types of query:\n> > select crm.*, crm_cnt.cnt\n> > from crm,\n> > (select count(*) as cnt from crm) crm_cnt;\n> > Here count query is used to find the total number of records.\n> > Same FROM clause is copied in both the part of the query.\n> > Is there any other good alternative way to get this similar value?\n>\n> Well, it looks like you're currently executing two sequential scans\n> over the \"crm\" table. And you're including the total row-count as a\n> separate column in every row you get back, although you really only\n> need this piece of information once.\n>\n> Since you're fetching all of the \"crm\" table anyway, why not get rid\n> of the COUNT(*) entirely and just keep a count on the client-side of\n> the total number of rows you've fetched?\n>\n> Josh\n>\n\nNot actualy. I used pagination with limit clause in details query and I need the total number of records in the detail query.On Mon, Oct 18, 2010 at 9:52 PM, Josh Kupershmidt <[email protected]> wrote:\nOn Mon, Oct 18, 2010 at 1:16 AM, AI Rumman <[email protected]> wrote:\n\n> At present for reporting I use following types of query:\n> select crm.*, crm_cnt.cnt\n> from crm,\n> (select count(*) as cnt from crm) crm_cnt;\n> Here count query is used to find the total number of records.\n> Same FROM clause is copied in both the part of the query.\n> Is there any other good alternative way to get this similar value?\n\nWell, it looks like you're currently executing two sequential scans\nover the \"crm\" table. And you're including the total row-count as a\nseparate column in every row you get back, although you really only\nneed this piece of information once.\n\nSince you're fetching all of the \"crm\" table anyway, why not get rid\nof the COUNT(*) entirely and just keep a count on the client-side of\nthe total number of rows you've fetched?\n\nJosh",
"msg_date": "Tue, 19 Oct 2010 13:18:45 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to get the total number of records in report"
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 1:18 AM, AI Rumman <[email protected]> wrote:\n\n> Not actualy. I used pagination with limit clause in details query and I\n> need the total number of records in the detail query.\n>\n>\nCan you use a cursor? Roughly...\n\nBEGIN;\nDECLARE x CURSOR FOR SELECT * FROM crm;\nMOVE FORWARD ALL IN x;\nMOVE BACKWARD ALL IN x;\nMOVE FORWARD 100 IN x;\nFETCH FORWARD 100 FROM x;\nCLOSE x;\nCOMMIT;\n\nYour application would need to get the actual text result from the \"MOVE\nFORWARD ALL IN x;\" statement to know the total number of records from your\nSELECT. After that, do your pagination via the \"MOVE FORWARD 100 IN x;\" and\n\"FETCH FORWARD 100 FROM x;\" statements.\n\nHTH.\nGreg\n\nOn Tue, Oct 19, 2010 at 1:18 AM, AI Rumman <[email protected]> wrote:\nNot actualy. I used pagination with limit clause in details query and I need the total number of records in the detail query.Can you use a cursor? Roughly...\nBEGIN;DECLARE x CURSOR FOR SELECT * FROM crm;MOVE FORWARD ALL IN x;MOVE BACKWARD ALL IN x;MOVE FORWARD 100 IN x;FETCH FORWARD 100 FROM x;CLOSE x;COMMIT; Your application would need to get the actual text result from the \"MOVE FORWARD ALL IN x;\" statement to know the total number of records from your SELECT. After that, do your pagination via the \"MOVE FORWARD 100 IN x;\" and \"FETCH FORWARD 100 FROM x;\" statements.\nHTH.Greg",
"msg_date": "Tue, 19 Oct 2010 08:12:48 -0600",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to get the total number of records in report"
},
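One detail worth spelling out about the cursor recipe above: the total comes back in the command tag of the MOVE statement, not as a result set. Roughly, in psql (the count shown is invented):

    BEGIN;
    DECLARE x CURSOR FOR SELECT * FROM crm;
    MOVE FORWARD ALL IN x;    -- psql prints the tag, e.g.  MOVE 58231
    MOVE BACKWARD ALL IN x;   -- rewind before fetching the first page
    FETCH FORWARD 100 FROM x;
    CLOSE x;
    COMMIT;

Client libraries should expose the same tag (for example via PQcmdTuples in libpq), so the application can read the total row count from it without running a separate count(*).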
{
"msg_contents": "On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman <[email protected]> wrote:\n> At present for reporting I use following types of query:\n> select crm.*, crm_cnt.cnt\n> from crm,\n> (select count(*) as cnt from crm) crm_cnt;\n> Here count query is used to find the total number of records.\n> Same FROM clause is copied in both the part of the query.\n> Is there any other good alternative way to get this similar value?\n\nProbably the best way to do this type of thing is handle it on the\nclient. However, if you want to do it this way and your from clause\nis more complex than 'from table', you can possibly improve on this\nwith a CTE:\n\nwith q as (select * from <something expensive>)\nselect q.* q_cnt.cnt from q, (select count(*) as cnt from q) q_cnt;\n\nThe advantage here is that the CTE is materialized without having to\ndo the whole query again. This can be win or loss depending on the\nquery.\n\nmerlin\n",
"msg_date": "Tue, 19 Oct 2010 19:56:27 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to get the total number of records in report"
},
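A concrete version of the CTE idea above, written against the crm table from this thread; it needs 8.4 or later (8.1 has no WITH), and the sort key and page size are invented for illustration:

    WITH q AS (
        SELECT * FROM crm              -- stands in for the expensive FROM clause
    )
    SELECT q.*, q_cnt.cnt
    FROM q,
         (SELECT count(*) AS cnt FROM q) q_cnt
    ORDER BY q.id
    LIMIT 100 OFFSET 0;

Because the CTE is materialized once, the count subquery re-reads the materialized rows instead of re-running the expensive part of the query.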
{
"msg_contents": "On Tue, Oct 19, 2010 at 7:56 PM, Merlin Moncure <[email protected]> wrote:\n> On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman <[email protected]> wrote:\n>> At present for reporting I use following types of query:\n>> select crm.*, crm_cnt.cnt\n>> from crm,\n>> (select count(*) as cnt from crm) crm_cnt;\n>> Here count query is used to find the total number of records.\n>> Same FROM clause is copied in both the part of the query.\n>> Is there any other good alternative way to get this similar value?\n>\n> Probably the best way to do this type of thing is handle it on the\n> client. However, if you want to do it this way and your from clause\n> is more complex than 'from table', you can possibly improve on this\n> with a CTE:\n>\n> with q as (select * from <something expensive>)\n> select q.* q_cnt.cnt from q, (select count(*) as cnt from q) q_cnt;\n>\n> The advantage here is that the CTE is materialized without having to\n> do the whole query again. This can be win or loss depending on the\n> query.\n\nWhat about\n\nselect crm.*, sum(1) over () as crm_count from crm limit 10;\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 28 Oct 2010 13:05:30 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to get the total number of records in report"
},
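For paginated reports on 8.4 or later the window-function form above extends naturally; count(*) OVER () is equivalent to sum(1) over (), and the total is computed over all qualifying rows before LIMIT trims the output (the sort key is invented for illustration):

    SELECT crm.*,
           count(*) OVER () AS crm_count   -- total number of qualifying rows
    FROM crm
    ORDER BY crm.id
    LIMIT 100 OFFSET 0;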
{
"msg_contents": "On Thu, Oct 28, 2010 at 1:05 PM, Robert Haas <[email protected]> wrote:\n> On Tue, Oct 19, 2010 at 7:56 PM, Merlin Moncure <[email protected]> wrote:\n>> On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman <[email protected]> wrote:\n>>> At present for reporting I use following types of query:\n>>> select crm.*, crm_cnt.cnt\n>>> from crm,\n>>> (select count(*) as cnt from crm) crm_cnt;\n>>> Here count query is used to find the total number of records.\n>>> Same FROM clause is copied in both the part of the query.\n>>> Is there any other good alternative way to get this similar value?\n>>\n>> Probably the best way to do this type of thing is handle it on the\n>> client. However, if you want to do it this way and your from clause\n>> is more complex than 'from table', you can possibly improve on this\n>> with a CTE:\n>>\n>> with q as (select * from <something expensive>)\n>> select q.* q_cnt.cnt from q, (select count(*) as cnt from q) q_cnt;\n>>\n>> The advantage here is that the CTE is materialized without having to\n>> do the whole query again. This can be win or loss depending on the\n>> query.\n>\n> What about\n>\n> select crm.*, sum(1) over () as crm_count from crm limit 10;\n\nHm, after a few quick tests it seems your approach is better in just\nabout every respect :-).\n\nmerlin\n",
"msg_date": "Thu, 28 Oct 2010 13:40:02 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to get the total number of records in report"
},
{
"msg_contents": "But I am using Postgresql 8.1 and it is not possible to write query as your\none here.\n\nOn Thu, Oct 28, 2010 at 11:05 PM, Robert Haas <[email protected]> wrote:\n\n> On Tue, Oct 19, 2010 at 7:56 PM, Merlin Moncure <[email protected]>\n> wrote:\n> > On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman <[email protected]> wrote:\n> >> At present for reporting I use following types of query:\n> >> select crm.*, crm_cnt.cnt\n> >> from crm,\n> >> (select count(*) as cnt from crm) crm_cnt;\n> >> Here count query is used to find the total number of records.\n> >> Same FROM clause is copied in both the part of the query.\n> >> Is there any other good alternative way to get this similar value?\n> >\n> > Probably the best way to do this type of thing is handle it on the\n> > client. However, if you want to do it this way and your from clause\n> > is more complex than 'from table', you can possibly improve on this\n> > with a CTE:\n> >\n> > with q as (select * from <something expensive>)\n> > select q.* q_cnt.cnt from q, (select count(*) as cnt from q) q_cnt;\n> >\n> > The advantage here is that the CTE is materialized without having to\n> > do the whole query again. This can be win or loss depending on the\n> > query.\n>\n> What about\n>\n> select crm.*, sum(1) over () as crm_count from crm limit 10;\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nBut I am using Postgresql 8.1 and it is not possible to write query as your one here.On Thu, Oct 28, 2010 at 11:05 PM, Robert Haas <[email protected]> wrote:\nOn Tue, Oct 19, 2010 at 7:56 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Mon, Oct 18, 2010 at 1:16 AM, AI Rumman <[email protected]> wrote:\n>> At present for reporting I use following types of query:\n>> select crm.*, crm_cnt.cnt\n>> from crm,\n>> (select count(*) as cnt from crm) crm_cnt;\n>> Here count query is used to find the total number of records.\n>> Same FROM clause is copied in both the part of the query.\n>> Is there any other good alternative way to get this similar value?\n>\n> Probably the best way to do this type of thing is handle it on the\n> client. However, if you want to do it this way and your from clause\n> is more complex than 'from table', you can possibly improve on this\n> with a CTE:\n>\n> with q as (select * from <something expensive>)\n> select q.* q_cnt.cnt from q, (select count(*) as cnt from q) q_cnt;\n>\n> The advantage here is that the CTE is materialized without having to\n> do the whole query again. This can be win or loss depending on the\n> query.\n\nWhat about\n\nselect crm.*, sum(1) over () as crm_count from crm limit 10;\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 28 Oct 2010 23:49:25 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to get the total number of records in report"
},
{
"msg_contents": "On Thu, Oct 28, 2010 at 1:49 PM, AI Rumman <[email protected]> wrote:\n> But I am using Postgresql 8.1 and it is not possible to write query as your\n> one here.\n\nwith 8.1, you are limited to subquery approach, application derived\ncount, plpgsql hacks, etc.\n\nmerlin\n",
"msg_date": "Thu, 28 Oct 2010 14:18:26 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to get the total number of records in report"
}
] |
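For completeness: on 8.1, where neither WITH nor window functions are available, the original subquery pattern combined with LIMIT/OFFSET is about the best pure-SQL option; a sketch, with an invented sort key:

    SELECT crm.*, crm_cnt.cnt
    FROM crm,
         (SELECT count(*) AS cnt FROM crm) crm_cnt
    ORDER BY crm.id
    LIMIT 100 OFFSET 0;

It still scans crm twice, which is why the advice above leans toward counting on the client side or caching the total.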
[
{
"msg_contents": "I have a table with an array column.\nI added a GIN index to the array:\n\nCREATE INDEX foo_idx ON t USING GIN (alternatecodes) WHERE\nalternatecodes IS NOT NULL;\n\nThat's all well and good.\nHowever, some queries started failing and I was able to reproduce the\nbehavior in psql!\n\nSELECT * FROM t WHERE alternatecodes IS NOT NULL;\nreturns:\nERROR: GIN indexes do not support whole-index scans\n\nWhaaa? Adding an *index* makes my /queries/ stop working? How can this be?\nThis really violated my principle of least surprise. If GIN indexes\ndon't support whole-index scans, fine, don't use them, but don't make\na perfectly valid query fail because of it.\n\nThis seems like a bug. Is it?\n\nPostgreSQL version:\n\n PostgreSQL 8.4.5 on x86_64-redhat-linux-gnu, compiled by GCC gcc\n(GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit\n\n\n-- \nJon\n",
"msg_date": "Mon, 18 Oct 2010 15:59:26 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "unexpected query failure: ERROR: GIN indexes do not support\n\twhole-index scans"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> CREATE INDEX foo_idx ON t USING GIN (alternatecodes) WHERE\n> alternatecodes IS NOT NULL;\n> SELECT * FROM t WHERE alternatecodes IS NOT NULL;\n> ERROR: GIN indexes do not support whole-index scans\n\nYep, this is a known issue. It's going to take major surgery on GIN to\nfix it, so don't hold your breath. In the particular case, what good do\nyou think the WHERE clause is doing anyway? GIN won't index nulls at\nall ... which indeed is an aspect of the underlying issue --- see recent\ndiscussions, eg here:\nhttp://archives.postgresql.org/pgsql-hackers/2010-10/msg00521.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Oct 2010 19:01:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unexpected query failure: ERROR: GIN indexes do not support\n\twhole-index scans"
},
{
"msg_contents": "On Mon, Oct 18, 2010 at 6:01 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> CREATE INDEX foo_idx ON t USING GIN (alternatecodes) WHERE\n>> alternatecodes IS NOT NULL;\n>> SELECT * FROM t WHERE alternatecodes IS NOT NULL;\n>> ERROR: GIN indexes do not support whole-index scans\n>\n> Yep, this is a known issue. It's going to take major surgery on GIN to\n> fix it, so don't hold your breath. In the particular case, what good do\n> you think the WHERE clause is doing anyway? GIN won't index nulls at\n> all ... which indeed is an aspect of the underlying issue --- see recent\n> discussions, eg here:\n> http://archives.postgresql.org/pgsql-hackers/2010-10/msg00521.php\n\nOK, so GIN doesn't index NULLs. I guess the \"IS NOT NULL\" part comes\nabout as a habit - that particular column is fairly sparse. However,\nI'm honestly quite surprised at two things:\n\n1. if GIN indexes ignore NULLs, then either it should grump when one\nspecifics \"WHERE ... IS NOT NULL\" or it should be treated as a no-op\n\n2. (and this is by far the more surprising) that the /presence/ of an\nINDEX can *break* a SELECT. It's not that the engine ignores the index\n- that would be reasonable - but that I can't issue a SELECT with a\nWHERE statement that matches the same as the index.\n\nHowever, I see that this also surprised Josh Berkus, and not that long\nago (11 days!), so I'll just shush.\n\nThanks!\n\n\n\n-- \nJon\n",
"msg_date": "Mon, 18 Oct 2010 21:47:08 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unexpected query failure: ERROR: GIN indexes do not\n\tsupport whole-index scans"
}
] |
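As a footnote to the thread above, a hedged sketch of a query shape the partial GIN index can actually serve on 8.4: an array operator such as && or @> supplies the indexable condition, and repeating the IS NOT NULL test lets the planner match the partial-index predicate. The column is assumed to be text[] and the literal value is purely illustrative:

-- The overlap condition is GIN-indexable; the bare IS NOT NULL test alone
-- is not, since that would require the whole-index scan GIN cannot do.
SELECT *
FROM t
WHERE alternatecodes && ARRAY['ABC123']::text[]  -- illustrative value
  AND alternatecodes IS NOT NULL;                -- matches the index predicate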
[
{
"msg_contents": "8.4.5\n\nI consistently see HashJoin plans that hash the large table, and scan the small table. This is especially puzzling in some cases where I have 30M rows in the big table and ~ 100 in the small... shouldn't it hash the small table and scan the big one?\n\nHere is one case I saw just recently\n\n Hash Cond: ((a.e_id)::text = (ta.name)::text)\n -> Index Scan using c_a_s_e_id on a (cost=0.00..8.21 rows=14 width=27)\n Index Cond: (id = 12)\n -> Hash (cost=89126.79..89126.79 rows=4825695 width=74)\n -> Seq Scan on p_a_1287446030 tmp (cost=0.00..89126.79 rows=4825695 width=74)\n Filter: (id = 12)\n\nDoes this ever make sense? Isn't it always better to hash the smaller side of the join, or at least predominantly so? Maybe if you want the order of elements returning from the join to coincide with the order of the outer part of the join for a join higher up the plan tree. in this specific case, I want the order to be based on the larger table for the join higher up (not shown) in the plan so that its index scan is in the order that tmp already is.\n\nCertainly, for very small hash tables (< 1000 entries) the cache effects strongly favor small tables -- the lookup should be very cheap. Building a very large hash is not cheap, and wastes lots of memory. I suppose at very large sizes something else might come into play that favors hashing the bigger table, but I can't think of what that would be for the general case.\n\nAny ideas? I've seen this with dozens of queries, some simple, some with 5 or 6 tables and joins. I even tried making work_mem very small in a 30M row to 500 row join, and it STILL hashed the big table. At first I thought that I was reading the plan wrong, but google suggests its doing what it looks like its doing. Perhaps this is a bug?",
"msg_date": "Mon, 18 Oct 2010 18:40:11 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": true,
"msg_subject": "HashJoin order, hash the large or small table? Postgres likes to\n\thash the big one, why?"
},
{
"msg_contents": "Scott Carey <[email protected]> writes:\n> I consistently see HashJoin plans that hash the large table, and scan\n> the small table.\n\nCould we see a self-contained test case? And what cost parameters are\nyou using, especially work_mem?\n\n> This is especially puzzling in some cases where I have 30M rows in the big table and ~ 100 in the small... shouldn't it hash the small table and scan the big one?\n\nWell, size of the table isn't the only factor; in particular, a highly\nnonuniform distribution of the key value will inflate the cost estimate\nfor using a table on the inner size of the hash. But the example you\nshow here seems a bit extreme.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Oct 2010 23:43:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HashJoin order,\n\thash the large or small table? Postgres likes to hash the big one,\n\twhy?"
},
{
"msg_contents": "\nOn Oct 18, 2010, at 8:43 PM, Tom Lane wrote:\n\n> Scott Carey <[email protected]> writes:\n>> I consistently see HashJoin plans that hash the large table, and scan\n>> the small table.\n> \n> Could we see a self-contained test case? And what cost parameters are\n> you using, especially work_mem?\n\nI'll see if I can make a test case. \n\nTough to do since I catch these on a production machine when a query is taking a long time, and some of the tables are transient.\nwork_mem is 800MB, I tried 128K and it didn't matter. It will switch to merge join at some point, but not a smaller hash, and the merge join is definitely slower.\n\n> \n>> This is especially puzzling in some cases where I have 30M rows in the big table and ~ 100 in the small... shouldn't it hash the small table and scan the big one?\n> \n> Well, size of the table isn't the only factor; in particular, a highly\n> nonuniform distribution of the key value will inflate the cost estimate\n> for using a table on the inner size of the hash. But the example you\n> show here seems a bit extreme.\n> \n\nIn the case today, the index scan returns unique values, and the large table has only a little skew on the join key.\n\nAnother case I ran into a few weeks ago is odd. \n(8.4.3 this time)\n\nrr=> explain INSERT INTO pav2 (p_id, a_id, values, last_updated, s_id) SELECT av.p_id, av.a_id, av.values, av.last_updated, a.s_id \nFROM pav av, attr a where av.a_id = a.id;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------\n Hash Join (cost=2946093.92..631471410.73 rows=1342587125 width=69)\n Hash Cond: (a.id = av.a_id)\n -> Seq Scan on attr a (cost=0.00..275.21 rows=20241 width=8)\n -> Hash (cost=1200493.44..1200493.44 rows=70707864 width=65)\n -> Seq Scan on pav av (cost=0.00..1200493.44 rows=70707864 width=65)\n\nIf the cost to hash is 1200493, and it needs to probe the hash 20241 times, why would the total cost be 631471410? The cost to probe can't be that big! A cost of 500 to probe and join? \nWhy favor hashing the large table and probing with the small values rather than the other way around?\n\nIn this case, I turned enable_mergejoin off in a test because it was deciding to sort the 70M rows instead of hash 20k rows and scan 70M, and then got this 'backwards' hash. The merge join is much slower, but the cost estimate is much less and no combination of cost parameters will make it switch (both estimates are affected up and down similarly by the cost parameters).\n\nBoth tables analyzed, etc. One of them is a bulk operation staging table with no indexes (the big one), but it is analyzed. The (av.p_id, av.a_id) pair is unique in it. a.id is unique (primary key). The above thinks it is going to match 20 times on average (but it actually matches only 1 -- PK join). av.a_id is somewhat skewed , but that is irrelevant if it always matches one. Even if it did match 20 on average, is it worse to probe a hash table 70M times and retrieve 20 maches each time than probe 20k times and retrive 70000 matches each time? Its the same number of hash function calls and comparisons, but different memory profile.\n\n\n\n\n> \t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 19 Oct 2010 02:00:11 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HashJoin order, hash the large or small table?\n\tPostgres likes to hash the big one, why?"
},
{
"msg_contents": "Scott Carey wrote:\n>\n> If the cost to hash is 1200493, and it needs to probe the hash 20241 times, why would the total cost be 631471410? The cost to probe can't be that big! A cost of 500 to probe and join? \n> Why favor hashing the large table and probing with the small values rather than the other way around?\n> \n>\n\nMay I ask a stupid question: how is the query cost calculated? What are \nthe units? I/O requests? CPU cycles? Monopoly money?\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 19 Oct 2010 10:22:42 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HashJoin order, hash the large or small table? Postgres\n\tlikes to hash the big one, why?"
},
{
"msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> how is the query cost calculated? What are \n> the units? I/O requests? CPU cycles? Monopoly money?\n \nhttp://www.postgresql.org/docs/current/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS\n \n-Kevin\n",
"msg_date": "Tue, 19 Oct 2010 10:16:31 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HashJoin order, hash the large or small table?\n\tPostgres likes to hash the big one, why?"
},
{
"msg_contents": "On Mon, Oct 18, 2010 at 9:40 PM, Scott Carey <[email protected]> wrote:\n> 8.4.5\n>\n> I consistently see HashJoin plans that hash the large table, and scan the small table. This is especially puzzling in some cases where I have 30M rows in the big table and ~ 100 in the small... shouldn't it hash the small table and scan the big one?\n>\n> Here is one case I saw just recently\n>\n> Hash Cond: ((a.e_id)::text = (ta.name)::text)\n> -> Index Scan using c_a_s_e_id on a (cost=0.00..8.21 rows=14 width=27)\n> Index Cond: (id = 12)\n> -> Hash (cost=89126.79..89126.79 rows=4825695 width=74)\n> -> Seq Scan on p_a_1287446030 tmp (cost=0.00..89126.79 rows=4825695 width=74)\n> Filter: (id = 12)\n\nCan we have the complex EXPLAIN output here, please? And the query?\nFor example, this would be perfectly sensible if the previous line\nstarted with \"Hash Semi Join\" or \"Hash Anti Join\".\n\nrhaas=# explain select * from little where exists (select * from big\nwhere big.a = little.a);\n QUERY PLAN\n-----------------------------------------------------------------------\n Hash Semi Join (cost=3084.00..3478.30 rows=10 width=4)\n Hash Cond: (little.a = big.a)\n -> Seq Scan on little (cost=0.00..1.10 rows=10 width=4)\n -> Hash (cost=1443.00..1443.00 rows=100000 width=4)\n -> Seq Scan on big (cost=0.00..1443.00 rows=100000 width=4)\n(5 rows)\n\nI'm also a bit suspicious of the fact that the hash condition has a\ncast to text on both sides, which implies, to me anyway, that the\nunderlying data types are not text. That might mean that the query\nplanner doesn't have very good statistics, which might mean that the\njoin selectivity estimates are wackadoo, which can apparently cause\nthis problem:\n\nrhaas=# explain select * from little, big where little.a = big.a;\n QUERY PLAN\n-----------------------------------------------------------------------\n Hash Join (cost=3084.00..3577.00 rows=2400 width=8)\n Hash Cond: (little.a = big.a)\n -> Seq Scan on little (cost=0.00..34.00 rows=2400 width=4)\n -> Hash (cost=1443.00..1443.00 rows=100000 width=4)\n -> Seq Scan on big (cost=0.00..1443.00 rows=100000 width=4)\n(5 rows)\n\nrhaas=# analyze;\nANALYZE\nrhaas=# explain select * from little, big where little.a = big.a;\n QUERY PLAN\n-------------------------------------------------------------------\n Hash Join (cost=1.23..1819.32 rows=10 width=8)\n Hash Cond: (big.a = little.a)\n -> Seq Scan on big (cost=0.00..1443.00 rows=100000 width=4)\n -> Hash (cost=1.10..1.10 rows=10 width=4)\n -> Seq Scan on little (cost=0.00..1.10 rows=10 width=4)\n(5 rows)\n\nThis doesn't appear to make a lot of sense, but...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 26 Oct 2010 22:56:07 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HashJoin order, hash the large or small table? Postgres\n\tlikes to hash the big one, why?"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> I'm also a bit suspicious of the fact that the hash condition has a\n> cast to text on both sides, which implies, to me anyway, that the\n> underlying data types are not text. That might mean that the query\n> planner doesn't have very good statistics, which might mean that the\n> join selectivity estimates are wackadoo, which can apparently cause\n> this problem:\n\nUm ... you're guilty of the same thing as the OP, ie not showing how\nyou got this example. But I'm guessing that it was something like\n\ncreate table little as select * from generate_series(1,10) a;\ncreate table big as select * from generate_series(1,100000) a;\n... wait for auto-analyze of big ...\nexplain select * from little, big where little.a = big.a;\n\nHere, big is large enough to prod autovacuum into analyzing it,\nwhereas little isn't. So when the planner runs, it sees\n\n(1) big is known to have 100000 rows, and big.a is known unique;\n(2) little is estimated to have many fewer rows, but nothing is\n known about the distribution of little.a.\n\nIn this situation, it's going to prefer to hash big, because hash join\nbehaves pretty nicely when the inner rel is uniformly distributed and\nthe outer not, but not nicely at all when it's the other way round.\nIt'll change its mind as soon as you analyze little, but it doesn't\nlike taking a chance on an unknown distribution. See cost_hashjoin\nand particularly estimate_hash_bucketsize.\n\nI'm not convinced this explains Scott's results though --- the numbers\nhe's showing don't seem to add up even if you assume a pretty bad\ndistribution for the smaller rel.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Oct 2010 23:48:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HashJoin order,\n\thash the large or small table? Postgres likes to hash the big one,\n\twhy?"
},
{
"msg_contents": "\nOn Oct 26, 2010, at 8:48 PM, Tom Lane wrote:\n\n> Robert Haas <[email protected]> writes:\n>> I'm also a bit suspicious of the fact that the hash condition has a\n>> cast to text on both sides, which implies, to me anyway, that the\n>> underlying data types are not text. That might mean that the query\n>> planner doesn't have very good statistics, which might mean that the\n>> join selectivity estimates are wackadoo, which can apparently cause\n>> this problem:\n> \n> Um ... you're guilty of the same thing as the OP, ie not showing how\n> you got this example. But I'm guessing that it was something like\n> \n> create table little as select * from generate_series(1,10) a;\n> create table big as select * from generate_series(1,100000) a;\n> ... wait for auto-analyze of big ...\n> explain select * from little, big where little.a = big.a;\n> \n> Here, big is large enough to prod autovacuum into analyzing it,\n> whereas little isn't. So when the planner runs, it sees\n> \n> (1) big is known to have 100000 rows, and big.a is known unique;\n> (2) little is estimated to have many fewer rows, but nothing is\n> known about the distribution of little.a.\n> \n> In this situation, it's going to prefer to hash big, because hash join\n> behaves pretty nicely when the inner rel is uniformly distributed and\n> the outer not, but not nicely at all when it's the other way round.\n> It'll change its mind as soon as you analyze little, but it doesn't\n> like taking a chance on an unknown distribution. See cost_hashjoin\n> and particularly estimate_hash_bucketsize.\n\nThe type of hash table will make a difference in how it behaves with skew. Open addressing versus linking, etc. \n\n> \n> I'm not convinced this explains Scott's results though --- the numbers\n> he's showing don't seem to add up even if you assume a pretty bad\n> distribution for the smaller rel.\n\nAnswering both messages partially:\n\nThe join is on varchar columns. So no they are cast ::text because its from two slightly different varchar declarations to ::text.\n\nThe small relation in this case is unique on the key. But I think postgres doesn't know that because there is a unique index on:\n\n(id, name) and the query filters for id = 12 in the example, leaving name unique. But postgres does not identify this as a unique condition for the key. However, there are only about 150 distinct values of id, so even if 'name' collides sometimes across ids, there can be no more than 150 values that map to one key.\n\nI gave a partial plan, the parent joins are sometimes anti-joins. The general query form is two from the temp table:\nAn update to a main table where the unique index keys match the temp (inner join)\nAn insert into the main table where the unique index keys do not exist (NOT EXISTS query, results in anti-join). The large relation is the one being inserted/updated into the main table. \nBoth have the same subplan that takes all the time in the middle -- a hash that hashes the large table and probes from the small side. I am still completely confused on how the hashjoin is calculating costs. In one of my examples it seems to be way off. \n\n\nWhy does hashjoin behave poorly when the inner relation is not uniformly distributed and the outer is? In particular for anti-join this should be the best case scenario.\n\nMy assumption is this: Imagine the worst possible skew on the small table. Every value is in the same key and there are 20,000 entries in one list under one key. 
The large table is in the outer relation, and probes for every element and has uniform distribution, 20 million -- one thousand rows for each of 20,000 keys. \nInner Join: \n The values that match the key join agains the list. There is no faster way to join once the list is identified. It is essentially nested loops per matching key. 1000 matches each output 20,000 rows. If the hashjoin was reversed, then the inner relation would be the large table and the outer relation would be the small table. There would be 20,000 matches that each output 1000 rows.\nSemi-Join:\n The same thing happens, but additionally rows that match nothing are emitted. If the relation that is always kept is the large one, it makes more sense to hash the small.\nAnti-Join:\n Two relations, I'll call one \"select\" the other is \"not exists\". The \"select\" relation is the one we are keeping, but only if its key does not match any key in the \"not exists\" relation.\nCase 1: The large uniform table is \"select\" and the small one \"not exists\"\n It is always optimal to have the small one as the inner hashed relation, no matter what the skew. If the inner relation is the \"not exists\" table, only the key needs to be kept in the inner relation hash, the values or number of matches do not matter, so skew is irrelevant and actually makes it faster than even distribution of keys.\nCase 2: The small relation is the \"select\"\n Using the large \"not exists\" relation as the inner relation works well, as only the existence of a key needs to be kept. Hashing the small \"select\" relation works if it is small enough. Whenever a key matches from the outer relation, remove it from the hash, then at the end take what remains in the hash as the result. This is also generally immune to skew. \n\nAm I missing something? Is the Hash that is built storing each tuple in an open-addressed entry, and thus sensitive to key-value skew? or something like that? If the hash only has one entry per key, with a linked list of values that match the key, I don't see how skew is a factor for hashjoin. I am probably missing something.\n\n\n> \n> \t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 27 Oct 2010 12:25:42 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: HashJoin order, hash the large or small table?\n\tPostgres likes to hash the big one, why?"
},
{
"msg_contents": "Scott Carey <[email protected]> writes:\n> Why does hashjoin behave poorly when the inner relation is not\n> uniformly distributed and the outer is?\n\nBecause a poorly distributed inner relation leads to long hash chains.\nIn the very worst case, all the keys are on the same hash chain and it\ndegenerates to a nested-loop join. (There is an assumption in the\ncosting code that the longer hash chains also tend to get searched more\noften, which maybe doesn't apply if the outer rel is flat, but it's not\nobvious that it's safe to not assume that.)\n\n> In particular for anti-join this should be the best case scenario.\n\nNot really. It's still searching a long hash chain; maybe it will find\nan exact match early in the chain, or maybe not. It's certainly not\n*better* than antijoin with a well-distributed inner rel. Although the\npoint is moot, anyway, since if it's an antijoin there is only one\ncandidate for which rel to put on the outside.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 15:56:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: HashJoin order,\n\thash the large or small table? Postgres likes to hash the big one,\n\twhy?"
}
] |
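Following up the discussion above, a small sketch of how to inspect the statistics that feed the planner's choice of hash side, using the little/big test tables and join column a from the thread; n_distinct and the most-common-value frequencies are what make a skewed or unknown inner relation look expensive to hash:

-- The planner's view of the join-key distribution
-- (n_distinct = -1 means all values are distinct)
SELECT tablename, attname, n_distinct, null_frac,
       most_common_freqs[1] AS top_freq
FROM pg_stats
WHERE tablename IN ('little', 'big')
  AND attname = 'a';

-- Analyze the small side so its distribution is no longer unknown
ANALYZE little;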
[
{
"msg_contents": "I have the following query running on 8.4, which takes 3516 ms. It is\nvery straight forward. It brings back 116412 records. The explain only\ntakes 1348ms\n\n \n\nselect VehicleUsed.VehicleUsedId as VehicleUsedId , \n\nVehicleUsed.VehicleUsedDisplayPriority as VehicleUsedDisplayPriority , \n\nVehicleUsed.VehicleYear as VehicleYear , \n\nVehicleUsed.VehicleUsedDisplayPriority as VehicleUsedDisplayPriority , \n\nVehicleUsed.HasVehicleUsedThumbnail as HasVehicleUsedThumbnail , \n\nVehicleUsed.HasVehicleUsedPrice as HasVehicleUsedPrice , \n\nVehicleUsed.VehicleUsedPrice as VehicleUsedPrice , \n\nVehicleUsed.HasVehicleUsedMileage as HasVehicleUsedMileage , \n\nVehicleUsed.VehicleUsedMileage as VehicleUsedMileage , \n\nVehicleUsed.IsCPO as IsCPO , VehicleUsed.IsMTCA as IsMTCA \n\nfrom VehicleUsed \n\nwhere ( VehicleUsed.VehicleMakeId = 28 ) \n\norder by VehicleUsed.VehicleUsedDisplayPriority ,\nVehicleUsed.VehicleYear desc , VehicleUsed.HasVehicleUsedThumbnail desc\n, VehicleUsed.HasVehicleUsedPrice desc , VehicleUsed.VehicleUsedPrice ,\nVehicleUsed.HasVehicleUsedMileage desc , VehicleUsed.VehicleUsedMileage\n, \n\nVehicleUsed.IsCPO desc , VehicleUsed.IsMTCA desc\n\n \n\n \n\nThe explain is also very straight forward\n\n \n\n\"Sort (cost=104491.48..105656.24 rows=116476 width=41) (actual\ntime=1288.413..1325.457 rows=116412 loops=1)\"\n\n\" Sort Key: vehicleuseddisplaypriority, vehicleyear,\nhasvehicleusedthumbnail, hasvehicleusedprice, vehicleusedprice,\nhasvehicleusedmileage, vehicleusedmileage, iscpo, ismtca\"\n\n\" Sort Method: quicksort Memory: 19443kB\"\n\n\" -> Bitmap Heap Scan on vehicleused (cost=7458.06..65286.42\nrows=116476 width=41) (actual time=34.982..402.164 rows=116412 loops=1)\"\n\n\" Recheck Cond: (vehiclemakeid = 28)\"\n\n\" -> Bitmap Index Scan on vehicleused_i08 (cost=0.00..7341.59\nrows=116476 width=0) (actual time=22.854..22.854 rows=116412 loops=1)\"\n\n\" Index Cond: (vehiclemakeid = 28)\"\n\n\"Total runtime: 1348.487 ms\"\n\n \n\nCan someone tell me why after it runs the index scan it hen runs a\nbitmap heap scan? It should not take this long to run should it? If I\nlimit the results it comes back in 300ms.\n\n \n\nI have recently run a vacuum analyze on the VehicleUsed table.\n\nAny help would be appreciated.\n\n \n\nPam Ozer\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI have the following query running on 8.4, which takes 3516\nms. It is very straight forward. It brings back 116412 records. 
\nThe explain only takes 1348ms\n \nselect VehicleUsed.VehicleUsedId as VehicleUsedId , \nVehicleUsed.VehicleUsedDisplayPriority as\nVehicleUsedDisplayPriority , \nVehicleUsed.VehicleYear as VehicleYear , \nVehicleUsed.VehicleUsedDisplayPriority as\nVehicleUsedDisplayPriority , \nVehicleUsed.HasVehicleUsedThumbnail as\nHasVehicleUsedThumbnail , \nVehicleUsed.HasVehicleUsedPrice as HasVehicleUsedPrice , \nVehicleUsed.VehicleUsedPrice as VehicleUsedPrice , \nVehicleUsed.HasVehicleUsedMileage as HasVehicleUsedMileage ,\n\nVehicleUsed.VehicleUsedMileage as VehicleUsedMileage , \nVehicleUsed.IsCPO as IsCPO , VehicleUsed.IsMTCA as IsMTCA \nfrom VehicleUsed \nwhere ( VehicleUsed.VehicleMakeId = 28 ) \norder by VehicleUsed.VehicleUsedDisplayPriority ,\nVehicleUsed.VehicleYear desc , VehicleUsed.HasVehicleUsedThumbnail desc ,\nVehicleUsed.HasVehicleUsedPrice desc , VehicleUsed.VehicleUsedPrice ,\nVehicleUsed.HasVehicleUsedMileage desc , VehicleUsed.VehicleUsedMileage , \nVehicleUsed.IsCPO desc , VehicleUsed.IsMTCA desc\n \n \nThe explain is also very straight forward\n \n\"Sort (cost=104491.48..105656.24 rows=116476\nwidth=41) (actual time=1288.413..1325.457 rows=116412 loops=1)\"\n\" Sort Key: vehicleuseddisplaypriority, vehicleyear,\nhasvehicleusedthumbnail, hasvehicleusedprice, vehicleusedprice,\nhasvehicleusedmileage, vehicleusedmileage, iscpo, ismtca\"\n\" Sort Method: quicksort Memory:\n19443kB\"\n\" -> Bitmap Heap Scan on\nvehicleused (cost=7458.06..65286.42 rows=116476 width=41) (actual\ntime=34.982..402.164 rows=116412 loops=1)\"\n\" Recheck\nCond: (vehiclemakeid = 28)\"\n\" -> \nBitmap Index Scan on vehicleused_i08 (cost=0.00..7341.59 rows=116476\nwidth=0) (actual time=22.854..22.854 rows=116412 loops=1)\"\n\" Index\nCond: (vehiclemakeid = 28)\"\n\"Total runtime: 1348.487 ms\"\n \nCan someone tell me why after it runs the index scan it hen\nruns a bitmap heap scan? It should not take this long to run should\nit? If I limit the results it comes back in 300ms.\n \nI have recently run a vacuum analyze on the VehicleUsed\ntable.\nAny help would be appreciated.\n \n\nPam Ozer",
"msg_date": "Tue, 19 Oct 2010 11:21:02 -0700",
"msg_from": "\"Ozer, Pam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Query- Simple taking "
},
{
"msg_contents": "\"Ozer, Pam\" <[email protected]> wrote:\n \n> I have the following query running on 8.4, which takes 3516 ms. \n> It is very straight forward. It brings back 116412 records. The\n> explain only takes 1348ms\n \nThe EXPLAIN ANALYZE doesn't have to return 116412 rows to the\nclient. It doesn't seem too out of line to me that it takes two\nseconds to do that.\n \n> Can someone tell me why after it runs the index scan it hen runs a\n> bitmap heap scan?\n \nWithout visiting the heap it can't tell whether the tuples it has\nfound are visible to your query. Also, it needs to get the actual\nvalues out of the heap.\n \n> It should not take this long to run should it?\n \nIf you want an answer to that, we need more information. See this\npage for ideas:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n> If I limit the results it comes back in 300ms.\n \nI don't find that surprising. Wouldn't you think that reading and\ntransmitting more rows would take more time?\n \n-Kevin\n",
"msg_date": "Tue, 19 Oct 2010 15:58:58 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 8:21 PM, Ozer, Pam <[email protected]> wrote:\n> I have the following query running on 8.4, which takes 3516 ms. It is very\n> straight forward. It brings back 116412 records. The explain only takes\n> 1348ms\n\n> \"Sort (cost=104491.48..105656.24 rows=116476 width=41) (actual\n> time=1288.413..1325.457 rows=116412 loops=1)\"\n>\n> \" Sort Key: vehicleuseddisplaypriority, vehicleyear,\n> hasvehicleusedthumbnail, hasvehicleusedprice, vehicleusedprice,\n> hasvehicleusedmileage, vehicleusedmileage, iscpo, ismtca\"\n>\n> \" Sort Method: quicksort Memory: 19443kB\"\n>\n> \" -> Bitmap Heap Scan on vehicleused (cost=7458.06..65286.42 rows=116476\n> width=41) (actual time=34.982..402.164 rows=116412 loops=1)\"\n>\n> \" Recheck Cond: (vehiclemakeid = 28)\"\n>\n> \" -> Bitmap Index Scan on vehicleused_i08 (cost=0.00..7341.59\n> rows=116476 width=0) (actual time=22.854..22.854 rows=116412 loops=1)\"\n>\n> \" Index Cond: (vehiclemakeid = 28)\"\n>\n> \"Total runtime: 1348.487 ms\"\n>\n>\n>\n> Can someone tell me why after it runs the index scan it hen runs a bitmap\n> heap scan?\n\nHi,\n\nAs far as I understand, the bitmap index scan only marks which pages\ncontain rows matching the conditions. The bitmap heap scan will read\nthese marked pages sequentially and recheck the condition as some\npages will contain more data than requested.\n\nPgsql will use a 'nomal' index scan if it believes that there's no\nadded value in reading it sequentially instead of according to the\nindex. In this case the planner is expecting a lot of matches, so it\nmakes sense that it will optimize for I/O throughput.\n\nI'm wondering why you need to run a query that returns that many rows though.\n\n\nKind regards,\nMathieu\n",
"msg_date": "Tue, 19 Oct 2010 23:51:21 +0200",
"msg_from": "Mathieu De Zutter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On mysql the same query only takes milliseconds not seconds. That's a\nbig difference.\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: Tuesday, October 19, 2010 1:59 PM\nTo: Ozer, Pam; [email protected]\nSubject: Re: [PERFORM] Slow Query- Simple taking\n\n\"Ozer, Pam\" <[email protected]> wrote:\n \n> I have the following query running on 8.4, which takes 3516 ms. \n> It is very straight forward. It brings back 116412 records. The\n> explain only takes 1348ms\n \nThe EXPLAIN ANALYZE doesn't have to return 116412 rows to the\nclient. It doesn't seem too out of line to me that it takes two\nseconds to do that.\n \n> Can someone tell me why after it runs the index scan it hen runs a\n> bitmap heap scan?\n \nWithout visiting the heap it can't tell whether the tuples it has\nfound are visible to your query. Also, it needs to get the actual\nvalues out of the heap.\n \n> It should not take this long to run should it?\n \nIf you want an answer to that, we need more information. See this\npage for ideas:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n> If I limit the results it comes back in 300ms.\n \nI don't find that surprising. Wouldn't you think that reading and\ntransmitting more rows would take more time?\n \n-Kevin\n",
"msg_date": "Tue, 19 Oct 2010 15:05:42 -0700",
"msg_from": "\"Ozer, Pam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 2:21 PM, Ozer, Pam <[email protected]> wrote:\n> I have the following query running on 8.4, which takes 3516 ms. It is very\n> straight forward. It brings back 116412 records. The explain only takes\n> 1348ms\n>\n> select VehicleUsed.VehicleUsedId as VehicleUsedId ,\n>\n> VehicleUsed.VehicleUsedDisplayPriority as VehicleUsedDisplayPriority ,\n>\n> VehicleUsed.VehicleYear as VehicleYear ,\n>\n> VehicleUsed.VehicleUsedDisplayPriority as VehicleUsedDisplayPriority ,\n>\n> VehicleUsed.HasVehicleUsedThumbnail as HasVehicleUsedThumbnail ,\n>\n> VehicleUsed.HasVehicleUsedPrice as HasVehicleUsedPrice ,\n>\n> VehicleUsed.VehicleUsedPrice as VehicleUsedPrice ,\n>\n> VehicleUsed.HasVehicleUsedMileage as HasVehicleUsedMileage ,\n>\n> VehicleUsed.VehicleUsedMileage as VehicleUsedMileage ,\n>\n> VehicleUsed.IsCPO as IsCPO , VehicleUsed.IsMTCA as IsMTCA\n>\n> from VehicleUsed\n>\n> where ( VehicleUsed.VehicleMakeId = 28 )\n>\n> order by VehicleUsed.VehicleUsedDisplayPriority , VehicleUsed.VehicleYear\n> desc , VehicleUsed.HasVehicleUsedThumbnail desc ,\n> VehicleUsed.HasVehicleUsedPrice desc , VehicleUsed.VehicleUsedPrice ,\n> VehicleUsed.HasVehicleUsedMileage desc , VehicleUsed.VehicleUsedMileage ,\n>\n> VehicleUsed.IsCPO desc , VehicleUsed.IsMTCA desc\n>\n>\n>\n>\n>\n> The explain is also very straight forward\n>\n>\n>\n> \"Sort (cost=104491.48..105656.24 rows=116476 width=41) (actual\n> time=1288.413..1325.457 rows=116412 loops=1)\"\n>\n> \" Sort Key: vehicleuseddisplaypriority, vehicleyear,\n> hasvehicleusedthumbnail, hasvehicleusedprice, vehicleusedprice,\n> hasvehicleusedmileage, vehicleusedmileage, iscpo, ismtca\"\n>\n> \" Sort Method: quicksort Memory: 19443kB\"\n>\n> \" -> Bitmap Heap Scan on vehicleused (cost=7458.06..65286.42 rows=116476\n> width=41) (actual time=34.982..402.164 rows=116412 loops=1)\"\n>\n> \" Recheck Cond: (vehiclemakeid = 28)\"\n>\n> \" -> Bitmap Index Scan on vehicleused_i08 (cost=0.00..7341.59\n> rows=116476 width=0) (actual time=22.854..22.854 rows=116412 loops=1)\"\n>\n> \" Index Cond: (vehiclemakeid = 28)\"\n>\n> \"Total runtime: 1348.487 ms\"\n>\n>\n>\n> Can someone tell me why after it runs the index scan it hen runs a bitmap\n> heap scan? It should not take this long to run should it? If I limit the\n> results it comes back in 300ms.\n\nIt doesn't. The EXPLAIN output shows it running the bitmap index scan\nfirst and then bitmap heap scan. The bitmap index scan is taking 22\nms, and the bitmap index and bitmap heap scans combined are taking 402\nms. The sort is then taking another 800+ ms for a total of 1325 ms.\nAny additional time is spent returning rows to the client.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 28 Oct 2010 10:39:20 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On Thu, Oct 28, 2010 at 10:39 AM, Robert Haas <[email protected]> wrote:\n>> Can someone tell me why after it runs the index scan it hen runs a bitmap\n>> heap scan? It should not take this long to run should it? If I limit the\n>> results it comes back in 300ms.\n>\n> It doesn't. The EXPLAIN output shows it running the bitmap index scan\n> first and then bitmap heap scan. The bitmap index scan is taking 22\n> ms, and the bitmap index and bitmap heap scans combined are taking 402\n> ms. The sort is then taking another 800+ ms for a total of 1325 ms.\n> Any additional time is spent returning rows to the client.\n\nDoh! I misread your email. You had it right, and I'm all wet.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 28 Oct 2010 10:40:05 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On Tue, Oct 19, 2010 at 6:05 PM, Ozer, Pam <[email protected]> wrote:\n> On mysql the same query only takes milliseconds not seconds. That's a\n> big difference.\n\nI can believe that MySQL is faster, because they probably don't need\nto do the bitmap heap scan. There is a much-anticipated feature\ncalled index-only scans that we don't have yet in PG, which would help\ncases like this a great deal.\n\nBut I don't see how MySQL could send back 116,000 rows to the client\nin milliseconds, or sort them that quickly.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 28 Oct 2010 10:42:18 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On 10/28/2010 10:42 AM, Robert Haas wrote:\n> I can believe that MySQL is faster, because they probably don't need\n> to do the bitmap heap scan. There is a much-anticipated feature\n> called index-only scans that we don't have yet in PG, which would help\n> cases like this a great deal.\nYyesss! Any time frame on that? Can you make it into 9.0.2?\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 28 Oct 2010 10:51:10 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On Thu, Oct 28, 2010 at 7:51 AM, Mladen Gogala\n<[email protected]> wrote:\n\n> Yyesss! Any time frame on that? Can you make it into 9.0.2?\n\nMaybe 9.1.0 or 9.2.0 :) 9.0's features are already frozen.\n\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angeles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n",
"msg_date": "Thu, 28 Oct 2010 07:53:42 -0700",
"msg_from": "Richard Broersma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On 10/28/2010 10:53 AM, Richard Broersma wrote:\n> On Thu, Oct 28, 2010 at 7:51 AM, Mladen Gogala\n> <[email protected]> wrote:\n>\n>> Yyesss! Any time frame on that? Can you make it into 9.0.2?\n> Maybe 9.1.0 or 9.2.0 :) 9.0's features are already frozen.\n>\n>\nWell, with all this global warming around us, index scans may still thaw \nin time to make it into 9.0.2\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 28 Oct 2010 11:23:13 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
},
{
"msg_contents": "On Thu, Oct 28, 2010 at 11:23 AM, Mladen Gogala\n<[email protected]> wrote:\n> On 10/28/2010 10:53 AM, Richard Broersma wrote:\n>>\n>> On Thu, Oct 28, 2010 at 7:51 AM, Mladen Gogala\n>> <[email protected]> wrote:\n>>\n>>> Yyesss! Any time frame on that? Can you make it into 9.0.2?\n>>\n>> Maybe 9.1.0 or 9.2.0 :) 9.0's features are already frozen.\n>>\n>>\n> Well, with all this global warming around us, index scans may still thaw in\n> time to make it into 9.0.2\n\nI fear this is not going to happen for 9.1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 28 Oct 2010 12:18:31 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Simple taking"
}
] |
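Since the remaining time in the thread above is dominated by sorting and shipping all 116k rows, one hedged, untested idea that the thread itself does not spell out: an index whose leading column matches the filter and whose remaining columns follow the ORDER BY can let a LIMITed page of results come straight off the index without the explicit sort. The column list below is a truncated illustration; the full nine-key ordering from the thread would need the remaining columns appended in the same order:

CREATE INDEX vehicleused_make_order_idx
    ON vehicleused (vehiclemakeid,
                    vehicleuseddisplaypriority,
                    vehicleyear DESC,
                    hasvehicleusedthumbnail DESC,
                    hasvehicleusedprice DESC,
                    vehicleusedprice);

-- With a matching ORDER BY prefix and a LIMIT, the sort step can disappear:
SELECT vehicleusedid, vehicleyear, vehicleusedprice
FROM vehicleused
WHERE vehiclemakeid = 28
ORDER BY vehicleuseddisplaypriority, vehicleyear DESC,
         hasvehicleusedthumbnail DESC, hasvehicleusedprice DESC,
         vehicleusedprice
LIMIT 50;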
[
{
"msg_contents": "Folks,\n\nI am running into a problem with the postmaster: from time to time, it\nruns for a long time. E.g., from top:\n\n23425 postgres 20 0 22008 10m 10m R 99.9 0.5 21:45.87 postmaster\n\nI'd like to figure out what it is doing. How can I figure out what\nstatement causes the problem? \n\nis there a way I can log all SQL statements to a file, together with the\ntime it took to execute them?\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n",
"msg_date": "Wed, 20 Oct 2010 14:44:15 -0400",
"msg_from": "Dimi Paun <[email protected]>",
"msg_from_op": true,
"msg_subject": "What is postmaster doing?"
},
{
"msg_contents": "On Wed, 2010-10-20 at 14:44 -0400, Dimi Paun wrote:\r\n> Folks,\r\n\r\n> is there a way I can log all SQL statements to a file, together with the\r\n> time it took to execute them?\r\n> \r\n> -- \r\n> Dimi Paun <[email protected]>\r\n> Lattica, Inc.\r\n\r\nThis is controlled by settings in the postgresql.conf file.\r\nsee the appropriate doc page vv for your version\r\nhttp://www.postgresql.org/docs/8.2/static/runtime-config-logging.html\r\n\n\n\n\n\nRe: [PERFORM] What is postmaster doing?\n\n\n\nOn Wed, 2010-10-20 at 14:44 -0400, Dimi Paun wrote:\r\n> Folks,\n\r\n> is there a way I can log all SQL statements to a file, together with the\r\n> time it took to execute them?\r\n>\r\n> --\r\n> Dimi Paun <[email protected]>\r\n> Lattica, Inc.\n\r\nThis is controlled by settings in the postgresql.conf file.\r\nsee the appropriate doc page vv for your version\nhttp://www.postgresql.org/docs/8.2/static/runtime-config-logging.html",
"msg_date": "Wed, 20 Oct 2010 15:24:08 -0400",
"msg_from": "\"Reid Thompson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing?"
},
{
"msg_contents": "On Wed, 2010-10-20 at 15:24 -0400, Reid Thompson wrote:\n> This is controlled by settings in the postgresql.conf file.\n> see the appropriate doc page vv for your version\n> http://www.postgresql.org/docs/8.2/static/runtime-config-logging.html\n\nThanks for the link Reid, this seems to be doing what I need.\n\nToo bad I couldn't figure out what was going on when I was experiencing\nthe high load, but now that I have the logging enabled, it shouldn't be\na problem to figure things out.\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n",
"msg_date": "Wed, 20 Oct 2010 15:51:43 -0400",
"msg_from": "Dimi Paun <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What is postmaster doing?"
},
{
"msg_contents": "On Wed, 2010-10-20 at 14:44 -0400, Dimi Paun wrote:\n> 23425 postgres 20 0 22008 10m 10m R 99.9 0.5 21:45.87 postmaster\n> \n> I'd like to figure out what it is doing. How can I figure out what\n> statement causes the problem? \n> \n\nIt seems strange that the postmaster is eating 99% cpu. Is there a\nchance that it's flooded with connection attempts?\n\nUsually the work is done by backend processes, not the postmaster. The\npostmaster just does some management like accepting connections and\nstarting new processes.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Wed, 20 Oct 2010 12:59:41 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing?"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Wed, 2010-10-20 at 14:44 -0400, Dimi Paun wrote:\n>> 23425 postgres 20 0 22008 10m 10m R 99.9 0.5 21:45.87 postmaster\n>> \n>> I'd like to figure out what it is doing. How can I figure out what\n>> statement causes the problem? \n\n> It seems strange that the postmaster is eating 99% cpu. Is there a\n> chance that it's flooded with connection attempts?\n\nIt's probably a backend process, not the postmaster --- I suspect the\nOP is using a version of ps that only tells you the original process\nname by default. \"ps auxww\" or \"ps -ef\" (depending on platform)\nis likely to be more informative. Looking into pg_stat_activity,\neven more so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Oct 2010 16:26:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing? "
},
{
"msg_contents": "Dimi Paun wrote:\n> Folks,\n>\n> I am running into a problem with the postmaster: from time to time, it\n> runs for a long time. E.g., from top:\n>\n> 23425 postgres 20 0 22008 10m 10m R 99.9 0.5 21:45.87 postmaster\n>\n> I'd like to figure out what it is doing. How can I figure out what\n> statement causes the problem? \n>\n> is there a way I can log all SQL statements to a file, together with the\n> time it took to execute them?\n>\n> \nYou can do one better: you can even explain the statements, based on the \nexecution time. There is a module called auto explain:\n\nhttp://www.postgresql.org/docs/8.4/static/auto-explain.html\n\nFor the log files, you can parse them using pgfouine and quickly find \nout the most expensive SQL statements.\n\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 20 Oct 2010 16:30:19 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing?"
},
{
"msg_contents": "On Wed, 2010-10-20 at 16:26 -0400, Tom Lane wrote:\n> > It seems strange that the postmaster is eating 99% cpu. Is there a\n> > chance that it's flooded with connection attempts?\n\nMaybe, I'll try to figure that one out next time it happens.\n\n> It's probably a backend process, not the postmaster --- I suspect the\n> OP is using a version of ps that only tells you the original process\n> name by default. \"ps auxww\" or \"ps -ef\" (depending on platform)\n> is likely to be more informative. Looking into pg_stat_activity,\n> even more so.\n\nI'm running CentOS 5.5, using procps-3.2.7-16.el5. I cannot check\nmore at this point as postmaster seems to have finished whatever it\nwas doing, but I'll try to investigate better next time.\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n",
"msg_date": "Wed, 20 Oct 2010 16:39:33 -0400",
"msg_from": "Dimi Paun <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What is postmaster doing?"
},
{
"msg_contents": "Dimi Paun <[email protected]> writes:\n> On Wed, 2010-10-20 at 16:26 -0400, Tom Lane wrote:\n>> It's probably a backend process, not the postmaster --- I suspect the\n>> OP is using a version of ps that only tells you the original process\n>> name by default.\n\n> I'm running CentOS 5.5, using procps-3.2.7-16.el5.\n\nHm, what ps options did you use? I'm having a hard time reproducing\nyour display format on Fedora 13 (procps-3.2.8-7.fc13.x86_64).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Oct 2010 16:45:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing? "
},
{
"msg_contents": "On Wed, 2010-10-20 at 16:45 -0400, Tom Lane wrote:\n> Hm, what ps options did you use? I'm having a hard time reproducing\n> your display format on Fedora 13 (procps-3.2.8-7.fc13.x86_64).\n\nSorry, it wasn't a ps output, it was a line from top(1).\nMy to header says:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n23425 postgres 20 0 22008 10m 10m R 99.9 0.5 21:45.87 postmaster\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n",
"msg_date": "Wed, 20 Oct 2010 16:55:50 -0400",
"msg_from": "Dimi Paun <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What is postmaster doing?"
},
{
"msg_contents": "Dimi Paun <[email protected]> writes:\n> On Wed, 2010-10-20 at 16:45 -0400, Tom Lane wrote:\n>> Hm, what ps options did you use? I'm having a hard time reproducing\n>> your display format on Fedora 13 (procps-3.2.8-7.fc13.x86_64).\n\n> Sorry, it wasn't a ps output, it was a line from top(1).\n\nOh, yeah, top typically doesn't give you the up-to-date process command\nline. Next time try ps, or pg_stat_activity.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Oct 2010 16:57:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing? "
},
{
"msg_contents": "On Wed, Oct 20, 2010 at 2:57 PM, Tom Lane <[email protected]> wrote:\n> Dimi Paun <[email protected]> writes:\n>> On Wed, 2010-10-20 at 16:45 -0400, Tom Lane wrote:\n>>> Hm, what ps options did you use? I'm having a hard time reproducing\n>>> your display format on Fedora 13 (procps-3.2.8-7.fc13.x86_64).\n>\n>> Sorry, it wasn't a ps output, it was a line from top(1).\n>\n> Oh, yeah, top typically doesn't give you the up-to-date process command\n> line. Next time try ps, or pg_stat_activity.\n\nOr use htop. it identifies all the basic postgresql processes by job,\nlike logger process, writer process and so on.\n",
"msg_date": "Wed, 20 Oct 2010 15:47:21 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing?"
},
{
"msg_contents": "On Wed, Oct 20, 2010 at 3:47 PM, Scott Marlowe <[email protected]> wrote:\n> Or use htop. it identifies all the basic postgresql processes by job,\n> like logger process, writer process and so on.\n\nFYI, htop is available from the epel repo.\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Wed, 20 Oct 2010 15:51:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing?"
},
{
"msg_contents": "Dimi Paun wrote:\n> Sorry, it wasn't a ps output, it was a line from top(1).\n> My to header says:\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n> 23425 postgres 20 0 22008 10m 10m R 99.9 0.5 21:45.87 postmaster\n> \n\nUse \"top -c\" instead. On Linux that will show you what each of the \nclients is currently doing most of the time, the ones that are running \nfor a long time at least.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\": \nhttp://www.2ndquadrant.com/books\n\n\n",
"msg_date": "Wed, 20 Oct 2010 22:39:32 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is postmaster doing?"
}
] |
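To close out the thread above, a short sketch of the two things it settles on: watching live backends and logging slow statements. The column names below are the pre-9.2 ones (procpid, current_query), which match the 8.x-era server under discussion; the 250 ms threshold is only an example, and the setting is superuser-only (put it in postgresql.conf to capture every session):

-- What each backend is running right now
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;

-- Log any statement slower than 250 milliseconds for this session
SET log_min_duration_statement = 250;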
[
{
"msg_contents": "I don't know why seq scan is running on the following query where the same\nquery is giving index scan on other servers:\nexplain analyze\nselect *\nfrom act\nwhere act.acttype in ( 'Meeting','Call','Task');\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on act (cost=0.00..13386.78 rows=259671 width=142) (actual\ntime=0.013..484.572 rows=263639 loops=1)\n Filter: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text =\n'Call'::text) OR ((acttype)::text = 'Task'::text))\n Total runtime: 732.956 ms\n(3 rows)\n\nThe above query is giving index scan on other servers and even if I rewrite\nthe query as follows I got index scan:\nexplain analyze\nselect *\nfrom act\nwhere act.acttype = 'Meeting'\nor act.acttype = 'Call';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on act (cost=17.98..1083.80 rows=2277 width=142) (actual\ntime=1.901..9.722 rows=4808 loops=1)\n Recheck Cond: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text =\n'Call'::text))\n -> BitmapOr (cost=17.98..17.98 rows=2281 width=0) (actual\ntime=1.262..1.262 rows=0 loops=1)\n -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 rows=1141\nwidth=0) (actual time=0.790..0.790 rows=3181 loops=1)\n Index Cond: ((acttype)::text = 'Meeting'::text)\n -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 rows=1141\nwidth=0) (actual time=0.469..0.469 rows=1630 loops=1)\n Index Cond: ((acttype)::text = 'Call'::text)\n Total runtime: 14.227 ms\n(8 rows)\n\n\n\\d act\n Table \"public.act\"\n Column | Type | Modifiers\n------------------+------------------------+-------------------------------------------\n actid | integer | not null default 0\n subject | character varying(250) | not null\n semodule | character varying(20) |\n acttype | character varying(200) | not null\n date_start | date | not null\n due_date | date |\n time_start | character varying(50) |\n time_end | character varying(50) |\n sendnotification | character varying(3) | not null default '0'::character\nvarying\n duration_hours | character varying(2) |\n duration_minutes | character varying(200) |\n status | character varying(200) |\n eventstatus | character varying(200) |\n priority | character varying(200) |\n location | character varying(150) |\n notime | character varying(3) | not null default '0'::character varying\n visibility | character varying(50) | not null default 'all'::character\nvarying\n recurringtype | character varying(200) |\n end_date | date |\n end_time | character varying(50) |\nIndexes:\n \"act_pkey\" PRIMARY KEY, btree (actid)\n \"act_acttype_idx\" btree (acttype)\n \"act_date_start_idx\" btree (date_start)\n \"act_due_date_idx\" btree (due_date)\n \"act_eventstatus_idx\" btree (eventstatus)\n \"act_status_idx\" btree (status)\n \"act_subject_idx\" btree (subject)\n \"act_time_start_idx\" btree (time_start)\n\nAny idea please.\n\nI don't know why seq scan is running on the following query where the same query is giving index scan on other servers:explain analyzeselect *from actwhere act.acttype in ( 'Meeting','Call','Task');\n QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on act (cost=0.00..13386.78 rows=259671 width=142) (actual time=0.013..484.572 rows=263639 loops=1) 
Filter: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text = 'Call'::text) OR ((acttype)::text = 'Task'::text))\n Total runtime: 732.956 ms(3 rows)The above query is giving index scan on other servers and even if I rewrite the query as follows I got index scan:explain analyzeselect *\nfrom actwhere act.acttype = 'Meeting'or act.acttype = 'Call'; QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on act (cost=17.98..1083.80 rows=2277 width=142) (actual time=1.901..9.722 rows=4808 loops=1)\n Recheck Cond: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text = 'Call'::text)) -> BitmapOr (cost=17.98..17.98 rows=2281 width=0) (actual time=1.262..1.262 rows=0 loops=1) -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 rows=1141 width=0) (actual time=0.790..0.790 rows=3181 loops=1)\n Index Cond: ((acttype)::text = 'Meeting'::text) -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 rows=1141 width=0) (actual time=0.469..0.469 rows=1630 loops=1) Index Cond: ((acttype)::text = 'Call'::text)\n Total runtime: 14.227 ms(8 rows)\\d act Table \"public.act\" Column | Type | Modifiers ------------------+------------------------+-------------------------------------------\n actid | integer | not null default 0 subject | character varying(250) | not null semodule | character varying(20) | acttype | character varying(200) | not null date_start | date | not null\n due_date | date | time_start | character varying(50) | time_end | character varying(50) | sendnotification | character varying(3) | not null default '0'::character varying\n duration_hours | character varying(2) | duration_minutes | character varying(200) | status | character varying(200) | eventstatus | character varying(200) | priority | character varying(200) | \n location | character varying(150) | notime | character varying(3) | not null default '0'::character varying visibility | character varying(50) | not null default 'all'::character varying\n recurringtype | character varying(200) | end_date | date | end_time | character varying(50) | Indexes: \"act_pkey\" PRIMARY KEY, btree (actid) \"act_acttype_idx\" btree (acttype)\n \"act_date_start_idx\" btree (date_start) \"act_due_date_idx\" btree (due_date) \"act_eventstatus_idx\" btree (eventstatus) \"act_status_idx\" btree (status) \"act_subject_idx\" btree (subject)\n \"act_time_start_idx\" btree (time_start)Any idea please.",
"msg_date": "Thu, 21 Oct 2010 11:25:06 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index scan is not working, why??"
},
{
"msg_contents": "please provide non-default config options on this host plus the same from a\nhost which is using an index scan, please. Also, postgresql version, OS,\nand all of the other stuff that is asked for in this document:\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions. It is impossible to say\nwhy the query planner might be choosing a particular plan without any\ninsight whatsoever as to how the server is configured.\n\n\n\nOn Wed, Oct 20, 2010 at 10:25 PM, AI Rumman <[email protected]> wrote:\n\n> I don't know why seq scan is running on the following query where the same\n> query is giving index scan on other servers:\n> explain analyze\n> select *\n> from act\n> where act.acttype in ( 'Meeting','Call','Task');\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on act (cost=0.00..13386.78 rows=259671 width=142) (actual\n> time=0.013..484.572 rows=263639 loops=1)\n> Filter: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text =\n> 'Call'::text) OR ((acttype)::text = 'Task'::text))\n> Total runtime: 732.956 ms\n> (3 rows)\n>\n> The above query is giving index scan on other servers and even if I rewrite\n> the query as follows I got index scan:\n> explain analyze\n> select *\n> from act\n> where act.acttype = 'Meeting'\n> or act.acttype = 'Call';\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on act (cost=17.98..1083.80 rows=2277 width=142) (actual\n> time=1.901..9.722 rows=4808 loops=1)\n> Recheck Cond: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text =\n> 'Call'::text))\n> -> BitmapOr (cost=17.98..17.98 rows=2281 width=0) (actual\n> time=1.262..1.262 rows=0 loops=1)\n> -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 rows=1141\n> width=0) (actual time=0.790..0.790 rows=3181 loops=1)\n> Index Cond: ((acttype)::text = 'Meeting'::text)\n> -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 rows=1141\n> width=0) (actual time=0.469..0.469 rows=1630 loops=1)\n> Index Cond: ((acttype)::text = 'Call'::text)\n> Total runtime: 14.227 ms\n> (8 rows)\n>\n>\n> \\d act\n> Table \"public.act\"\n> Column | Type | Modifiers\n>\n> ------------------+------------------------+-------------------------------------------\n> actid | integer | not null default 0\n> subject | character varying(250) | not null\n> semodule | character varying(20) |\n> acttype | character varying(200) | not null\n> date_start | date | not null\n> due_date | date |\n> time_start | character varying(50) |\n> time_end | character varying(50) |\n> sendnotification | character varying(3) | not null default '0'::character\n> varying\n> duration_hours | character varying(2) |\n> duration_minutes | character varying(200) |\n> status | character varying(200) |\n> eventstatus | character varying(200) |\n> priority | character varying(200) |\n> location | character varying(150) |\n> notime | character varying(3) | not null default '0'::character varying\n> visibility | character varying(50) | not null default 'all'::character\n> varying\n> recurringtype | character varying(200) |\n> end_date | date |\n> end_time | character varying(50) |\n> Indexes:\n> \"act_pkey\" PRIMARY KEY, btree (actid)\n> \"act_acttype_idx\" btree (acttype)\n> \"act_date_start_idx\" btree (date_start)\n> \"act_due_date_idx\" btree (due_date)\n> \"act_eventstatus_idx\" btree 
(eventstatus)\n> \"act_status_idx\" btree (status)\n> \"act_subject_idx\" btree (subject)\n> \"act_time_start_idx\" btree (time_start)\n>\n> Any idea please.\n>\n\nplease provide non-default config options on this host plus the same from a host which is using an index scan, please. Also, postgresql version, OS, and all of the other stuff that is asked for in this document: http://wiki.postgresql.org/wiki/SlowQueryQuestions. It is impossible to say why the query planner might be choosing a particular plan without any insight whatsoever as to how the server is configured.\nOn Wed, Oct 20, 2010 at 10:25 PM, AI Rumman <[email protected]> wrote:\nI don't know why seq scan is running on the following query where the same query is giving index scan on other servers:explain analyzeselect *from actwhere act.acttype in ( 'Meeting','Call','Task');\n\n QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------\n\n Seq Scan on act (cost=0.00..13386.78 rows=259671 width=142) (actual time=0.013..484.572 rows=263639 loops=1) Filter: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text = 'Call'::text) OR ((acttype)::text = 'Task'::text))\n\n Total runtime: 732.956 ms(3 rows)The above query is giving index scan on other servers and even if I rewrite the query as follows I got index scan:explain analyzeselect *\n\nfrom actwhere act.acttype = 'Meeting'or act.acttype = 'Call'; QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on act (cost=17.98..1083.80 rows=2277 width=142) (actual time=1.901..9.722 rows=4808 loops=1)\n\n Recheck Cond: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text = 'Call'::text)) -> BitmapOr (cost=17.98..17.98 rows=2281 width=0) (actual time=1.262..1.262 rows=0 loops=1) -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 rows=1141 width=0) (actual time=0.790..0.790 rows=3181 loops=1)\n\n Index Cond: ((acttype)::text = 'Meeting'::text) -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 rows=1141 width=0) (actual time=0.469..0.469 rows=1630 loops=1) Index Cond: ((acttype)::text = 'Call'::text)\n\n Total runtime: 14.227 ms(8 rows)\\d act Table \"public.act\" Column | Type | Modifiers ------------------+------------------------+-------------------------------------------\n\n actid | integer | not null default 0 subject | character varying(250) | not null semodule | character varying(20) | acttype | character varying(200) | not null\n date_start | date | not null\n due_date | date | time_start | character varying(50) | time_end | character varying(50) | sendnotification | character varying(3) | not null default '0'::character varying\n\n duration_hours | character varying(2) | duration_minutes | character varying(200) | status | character varying(200) | eventstatus | character varying(200) | priority | character varying(200) | \n\n location | character varying(150) | notime | character varying(3) | not null default '0'::character varying visibility | character varying(50) | not null default 'all'::character varying\n\n recurringtype | character varying(200) | end_date | date | end_time | character varying(50) | Indexes: \"act_pkey\" PRIMARY KEY, btree (actid) \"act_acttype_idx\" btree (acttype)\n\n \"act_date_start_idx\" btree (date_start) \"act_due_date_idx\" btree (due_date) \"act_eventstatus_idx\" btree (eventstatus) \"act_status_idx\" btree (status) 
\"act_subject_idx\" btree (subject)\n\n \"act_time_start_idx\" btree (time_start)Any idea please.",
"msg_date": "Thu, 21 Oct 2010 00:51:21 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan is not working, why??"
},
{
"msg_contents": "On Thu, Oct 21, 2010 at 1:51 AM, Samuel Gendler\n<[email protected]> wrote:\n> please provide non-default config options on this host plus the same from a\n> host which is using an index scan, please. Also, postgresql version, OS,\n> and all of the other stuff that is asked for in this\n> document: http://wiki.postgresql.org/wiki/SlowQueryQuestions. It is\n> impossible to say why the query planner might be choosing a particular plan\n> without any insight whatsoever as to how the server is configured.\n\nI know it's mentioned in that wiki doc, but the ddl for the table and\nits indexes, or the output of \\d tablename is quite useful and should\nbe included as well.\n",
"msg_date": "Thu, 21 Oct 2010 01:54:00 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan is not working, why??"
},
{
"msg_contents": "AI Rumman wrote:\n> I don't know why seq scan is running on the following query where the \n> same query is giving index scan on other servers:\n> explain analyze\n> select *\n> from act\n> where act.acttype in ( 'Meeting','Call','Task');\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on act (cost=0.00..13386.78 rows=259671 width=142) (actual \n> time=0.013..484.572 rows=263639 loops=1)\n> Filter: (((acttype)::text = 'Meeting'::text) OR ((acttype)::text = \n> 'Call'::text) OR ((acttype)::text = 'Task'::text))\n> Total runtime: 732.956 ms\n> (3 rows)\nAl, what percentage of the rows fits the above criteria? How big are \nyour histograms?\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n",
"msg_date": "Thu, 21 Oct 2010 09:27:44 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan is not working, why??"
},
{
"msg_contents": " \n\n> -----Original Message-----\n> From: AI Rumman [mailto:[email protected]] \n> Sent: Thursday, October 21, 2010 1:25 AM\n> To: [email protected]\n> Subject: Index scan is not working, why??\n> \n> I don't know why seq scan is running on the following query \n> where the same query is giving index scan on other servers:\n> explain analyze\n> \n> select *\n> from act\n> where act.acttype in ( 'Meeting','Call','Task');\n> QUERY PLAN\n> --------------------------------------------------------------\n> --------------------------------------------------------------\n> ------------\n> Seq Scan on act (cost=0.00..13386.78 rows=259671 width=142) \n> (actual time=0.013..484.572 rows=263639 loops=1)\n> Filter: (((acttype)::text = 'Meeting'::text) OR \n> ((acttype)::text = 'Call'::text) OR ((acttype)::text = \n> 'Task'::text)) Total runtime: 732.956 ms\n> (3 rows)\n> \n> \n> The above query is giving index scan on other servers and \n> even if I rewrite the query as follows I got index scan:\n> explain analyze\n> \n> select *\n> from act\n> where act.acttype = 'Meeting'\n> or act.acttype = 'Call';\n> QUERY PLAN\n> --------------------------------------------------------------\n> --------------------------------------------------------------\n> ------------------\n> Bitmap Heap Scan on act (cost=17.98..1083.80 rows=2277 \n> width=142) (actual time=1.901..9.722 rows=4808 loops=1)\n> Recheck Cond: (((acttype)::text = 'Meeting'::text) OR \n> ((acttype)::text = 'Call'::text))\n> -> BitmapOr (cost=17.98..17.98 rows=2281 width=0) (actual \n> time=1.262..1.262 rows=0 loops=1)\n> -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 \n> rows=1141 width=0) (actual time=0.790..0.790 rows=3181 loops=1)\n> Index Cond: ((acttype)::text = 'Meeting'::text)\n> -> Bitmap Index Scan on act_acttype_idx (cost=0.00..8.99 \n> rows=1141 width=0) (actual time=0.469..0.469 rows=1630 loops=1)\n> Index Cond: ((acttype)::text = 'Call'::text) Total \n> runtime: 14.227 ms\n> (8 rows)\n> \n> \n\n\"Index Scan\" is not alwayes prefarable to \"Seq Scan\", it depends on\nselectivity of your query.\nWhen retrieving substancial portion of big table seq scan is usually\nfaster, that's why optimizer chooses it.\n\nYour queries (and possibly data sets in the tables on different servers)\nare not the same.\nYour first query (which uses seq scan) returns 259671 which is probably\nsubstantial part of the whole table.\nYour second query (which uses index scan) returns only 4808 rows, which\nmakes index access less costly in this case.\n\nRegards,\nIgor Neyman\n",
"msg_date": "Thu, 21 Oct 2010 16:13:52 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan is not working, why??"
}
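The selectivity point above is easy to verify directly. A minimal diagnostic sketch, assuming the act/acttype names from the thread (the seq-scan plan already reports roughly 260k matching rows, so unless the table is much larger than that a sequential scan is expected to win; enable_seqscan is toggled here only to compare plans and should not stay off):

-- What fraction of the table actually matches the IN-list?
SELECT (SELECT count(*) FROM act WHERE acttype IN ('Meeting', 'Call', 'Task'))::float8
     / (SELECT count(*) FROM act) AS matching_fraction;

-- What the planner believes, from the statistics gathered by ANALYZE:
SELECT most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'act' AND attname = 'acttype';

-- Force the index-based plan in this session only and compare runtimes:
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM act WHERE acttype IN ('Meeting', 'Call', 'Task');
RESET enable_seqscan;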
] |
[
{
"msg_contents": "\nHello,\n\nWe are using PostgreSQL for storing data and full-text search indexes\nfor the webiste of a daily newspaper. We are very happy overall with the\nresults, but we have one \"weird\" behaviour that we would like to solve.\n\nThe problem is when we index objects into the full-text search part of\nthe database (which a DELETE and then an INSERT into a specific table),\nthe INSERT sometimes take a long time (from 10s to 20s), but the same\ninsert (and many other similar ones) are fast (below 0.2s).\n\nThis slowness comes regularly, about every 200 objects indexed,\nregardless of the frequency of the inserts. If I reindex one object\nevery 5 seconds for one hour, or one object every second for 10 minutes,\nI've the same kind of results : around 0.5% of the time, indexing took\nmore than 10s.\n\nThe positive point is that this slowness doesn't block the rest of\nqueries to the database, but it's still painful to have to wait (even if\nonly once in a while) for 10s or 20s when the end-user hits the \"save\"\nbutton.\n\nThis slowness is associated with very high IO load on the operating\nsystem. I tried playing with checkpoint parameters (making them more\nfrequent or less frequent, but I didn't notice any siginificant\ndifference).\n\nDo you have any hint on how to smooth the process, so we don't have this\nregular huge slowdown ?\n\n\n\nIf you want more details about the setup :\n\n- server is a Xen virtual machine with 8Gb of memory, disks being 15000\n rpm SAS disks on RAID 1, and CPU being one core of a Nehalem processor\n (but CPU load is low anyway).\n\n- the database schema is like :\n\nCREATE TABLE sesql_index (\n classname varchar(255),\n id integer,\n created_at timestamp,\n modified_at timestamp,\n created_by integer,\n modified_by integer,\n workflow_state integer,\n site_id integer,\n title_text text,\n title_tsv tsvector,\n subtitle_text text,\n subtitle_tsv tsvector,\n fulltext_text text,\n fulltext_tsv tsvector,\n authors integer[],\n folders integer[],\n [...]\n indexed_at timestamp DEFAULT NOW(),\n PRIMARY KEY (classname, id)\n);\n\nCREATE TABLE sesql_author (CHECK (classname = 'Author'), \n PRIMARY KEY (classname, id)) INHERITS (sesql_index);\n\nCREATE TABLE sesql_program (CHECK (classname = 'Program'), \n PRIMARY KEY (classname, id)) INHERITS (sesql_index);\n\nCREATE TABLE sesql_default (CHECK (classname = 'Slideshow' OR classname\n= 'Book' OR classname = 'Article' OR classname = 'Publication' OR\nclassname = 'Forum'), PRIMARY KEY (classname, id)) INHERITS (sesql_index);\n\n(with a few other similar tables for different objects).\n\nInserts/deletes are done directly into the child tables, searches are\ndone either on the master table (sesql_index) or on the child tables\ndepending of the use case (but search works fine anyway).\n\nIn addition to that we have several indexes, created on each child\ntables :\n\nCREATE INDEX sesql_default_classname_index ON sesql_default (classname);\nCREATE INDEX sesql_default_id_index ON sesql_default (id);\nCREATE INDEX sesql_default_created_at_index ON sesql_default (created_at);\nCREATE INDEX sesql_default_modified_at_index ON sesql_default (modified_at);\nCREATE INDEX sesql_default_created_by_index ON sesql_default (created_by);\nCREATE INDEX sesql_default_modified_by_index ON sesql_default (modified_by);\nCREATE INDEX sesql_default_workflow_state_index ON sesql_default (workflow_state);\nCREATE INDEX sesql_default_site_id_index ON sesql_default (site_id);\nCREATE INDEX sesql_default_publication_date_index ON 
sesql_default (publication_date);\nCREATE INDEX sesql_default_authors_index ON sesql_default USING GIN (authors);\nCREATE INDEX sesql_default_folders_index ON sesql_default USING GIN (folders);\n\nAnd the heavy ones, for each fulltext field, we have two columns, the\ntext and the tsv, with an index on the tsv, and the tsv itself is\nupdated via a trigger :\n\nCREATE INDEX sesql_default_fulltext_index ON sesql_default USING GIN (fulltext_tsv);\n\nCREATE TRIGGER sesql_default_fulltext_update BEFORE INSERT OR UPDATE\nON sesql_default FOR EACH ROW EXECUTE PROCEDURE\ntsvector_update_trigger(fulltext_tsv, 'public.lem_french', fulltext_text);\n\nThanks a lot for reading me until here ;)\n\nRegards,\n\n-- \nGaël Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nGérez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Thu, 21 Oct 2010 14:25:44 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Periodically slow inserts "
},
{
"msg_contents": "Hi,\n\nThere are a lot of details missing about your system:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nCheers,\nKen\n\nOn Thu, Oct 21, 2010 at 02:25:44PM +0200, Gael Le Mignot wrote:\n> \n> Hello,\n> \n> We are using PostgreSQL for storing data and full-text search indexes\n> for the webiste of a daily newspaper. We are very happy overall with the\n> results, but we have one \"weird\" behaviour that we would like to solve.\n> \n> ...\n",
"msg_date": "Thu, 21 Oct 2010 08:03:08 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": " \n> We are using PostgreSQL for storing data and full-text search indexes\n> for the webiste of a daily newspaper. We are very happy overall with the\n> results, but we have one \"weird\" behaviour that we would like to solve.\n\n\nI think there's a lot of missing info worth knowing:\n\n1) checkpoints logs? Enable them, maybe the \"slowness\" happens\nat checkpoints:\n\nlog_checkpoints=true\n\n2) How many rows does each table contain?\n\n3) HW: how many discs you have, and which controller you're using (and:\ndoes it use a BBU?)\n\nThe more you tell the list, the better help you'll get...\n\n\n \n",
"msg_date": "Thu, 21 Oct 2010 14:15:40 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": "Gael Le Mignot <[email protected]> writes:\n> The problem is when we index objects into the full-text search part of\n> the database (which a DELETE and then an INSERT into a specific table),\n> the INSERT sometimes take a long time (from 10s to 20s), but the same\n> insert (and many other similar ones) are fast (below 0.2s).\n\n> This slowness comes regularly, about every 200 objects indexed,\n> regardless of the frequency of the inserts.\n\nHm. You didn't say which PG version you're using, but if it's >= 8.4,\nI think this may be caused by GIN's habit of queuing index insertions\nuntil it's accumulated a reasonable-size batch:\nhttp://www.postgresql.org/docs/9.0/static/gin-implementation.html#GIN-FAST-UPDATE\n\nWhile you can turn that off, I think that doing so will reduce the\nindex's search efficiency over time. It might be better to schedule\nregular vacuums on the table so that the work is done by vacuum rather\nthan foreground queries.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Oct 2010 10:55:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts "
},
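For reference, the fast-update queue Tom mentions can be controlled per index, and any vacuum of the table also drains it. A minimal sketch, assuming the sesql_default table and GIN index names from the opening message; the trade-off is as he describes, since with fastupdate off every INSERT pays the full GIN maintenance cost immediately:

-- Make inserts update the GIN index directly instead of queueing entries
-- (no more periodic flush stalls, but each individual INSERT gets slower):
ALTER INDEX sesql_default_fulltext_index SET (fastupdate = off);

-- Or keep fastupdate on and rely on (auto)vacuum to empty the pending list
-- in the background; a plain vacuum of the table is enough:
VACUUM ANALYZE sesql_default;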
{
"msg_contents": "Hello Leonardo!\n\nThu, 21 Oct 2010 14:15:40 +0100 (BST), you wrote: \n\n >> We are using PostgreSQL for storing data and full-text search indexes\n >> for the webiste of a daily newspaper. We are very happy overall with the\n >> results, but we have one \"weird\" behaviour that we would like to solve.\n\n > I think there's a lot of missing info worth knowing:\n\n > 1) checkpoints logs? Enable them, maybe the \"slowness\" happens\n > at checkpoints:\n\n > log_checkpoints=true\n\nYes, it's the checkpoints. The delay is the delay of the \"sync\" part of\nthe checkpoints :\n\n2010-10-21 16:39:15 CEST LOG: checkpoint complete: wrote 365 buffers\n(11.9%); 0 transaction log file(s) added, 0 removed, 3 recycled;\nwrite=0.403 s, sync=21.312 s, total=21.829 s\n\nMaybe there is something I misunderstood, but aren't the checkpoints\nsupposed to run smoothly over the checkpoint_completion_target interval ?\n\nIs there any way to smooth it over time ? \n\n > 2) How many rows does each table contain?\n\nThe problems occur on the \"big\" table with around 570 000 rows. Sorry I\nforgot that information.\n\n > 3) HW: how many discs you have, and which controller you're using (and:\n > does it use a BBU?)\n\n2 SAS 15K disks in RAID1 (Linux software RAID). The controller is LSI\nSAS1068E PCI-Express Fusion-MPT SAS, and we did enable the write cache\n(sdparm says :\n WCE 1 [cha: y]\n). \n\nNot sure if it has a BBU, but we have redundant power supply, and when\nwe'll go live, we'll have a warm standby on different hardware through\nWAL log shipping (it's not in place right now), and we can afford a few\nminutes of dataloss in case of exceptional failure.\n\n > The more you tell the list, the better help you'll get...\n\nOf course, thanks for your feedback.\n\nAs for the othe questions of the Wiki page :\n\n- I don't think explain/explain analyze will provide any information for\n inserts with no subqueries/...\n\n- We have (Debian) default config for autovacuum, and I tried a \"vacuum\n analyze;\" just before running a bench, it didn't change anything.\n\n- I tried moving the WAL to another pair of similar RAID1 SAS disks, but\n it didn't have any significant effect.\n\nAnd I also forgot to give the PostgreSQL version, it's 8.4.4 from Debian\nbackports.\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Thu, 21 Oct 2010 17:07:13 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": "\n> > does it use a BBU?)\n\n\nSorry, this was supposed to read \"do you have cache on the controller\", of\ncourse a battery can't change the performance... but you got it anyway...\n\n\n \n",
"msg_date": "Thu, 21 Oct 2010 16:11:15 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": "> 2010-10-21 16:39:15 CEST LOG: checkpoint complete: wrote 365 buffers\n> (11.9%); 0 transaction log file(s) added, 0 removed, 3 recycled;\n> write=0.403 s, sync=21.312 s, total=21.829 s\n\n\nI'm no expert, but isn't 21s to sync 365 buffers a big amount of time?\n\n\n\n \n",
"msg_date": "Thu, 21 Oct 2010 16:30:15 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": "2010/10/21 Leonardo Francalanci <[email protected]>:\n>> 2010-10-21 16:39:15 CEST LOG: checkpoint complete: wrote 365 buffers\n>> (11.9%); 0 transaction log file(s) added, 0 removed, 3 recycled;\n>> write=0.403 s, sync=21.312 s, total=21.829 s\n>\n>\n> I'm no expert, but isn't 21s to sync 365 buffers a big amount of time?\n\nIt is.\n\nI suggest to look at /proc/meminfo about dirty buffers and the results\nof 'iostat -x 2' runing for some moment\n\n\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 21 Oct 2010 17:40:12 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": "Gael Le Mignot wrote:\n> The delay is the delay of the \"sync\" part of\n> the checkpoints :\n>\n> 2010-10-21 16:39:15 CEST LOG: checkpoint complete: wrote 365 buffers\n> (11.9%); 0 transaction log file(s) added, 0 removed, 3 recycled;\n> write=0.403 s, sync=21.312 s, total=21.829 s\n>\n> Maybe there is something I misunderstood, but aren't the checkpoints\n> supposed to run smoothly over the checkpoint_completion_target interval ?\n> \n\nWell, first off you have to get the checkpoints spaced out in time \nenough for that to work. Both checkpoint_segments and possibly \ncheckpoint_timeout may also need to be increased in order for the \ncheckpoint write spreading code to work. When I start seeing long sync \ntimes, I'll usually shoot for >64 segments and >10 minutes for the \ntimeout to give that smoothing work some room to do what it's supposed to.\n\nHowever, only the main checkpoint writes are spread over time. The hope \nis that by the time the sync phase starts, the operating system will \nhave already written most of them out. Sometimes, particularly in \nservers with lots of RAM for caching writes, this doesn't happen. In \nthat case, you can have gigabytes of data queued up at the beginning of \nthe sync phase--which is not spread out at all.\n\nWe are currently working on a \"spread sync\" feature for PostgreSQL that \nmakes this problem better on platforms/filesystems it's possible to \nimprove behavior on (you can't do anything about this issue on ext3 for \nexample). I'll be talking about that development at the PgWest \nconference in a two weeks: \nhttps://www.postgresqlconference.org/content/righting-your-writes and \nhope to submit a patch with a first version of this feature to the \nNovember development CommitFest, in hopes of it making it into version 9.1.\n\nIf you can't come up with any solution here and need help with your \ncurrent version sooner than that, we've already backported this \nimprovement all the way to V8.3; drop me an off-list note if you want to \ndiscuss consulting services in this area we have available. If you're \nlucky, just adjusting the segment and timeout values may be enough for you.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\"\nhttp://www.2ndquadrant.com/books\n\n\n",
"msg_date": "Thu, 21 Oct 2010 12:03:32 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts"
},
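To see how much room the spreading logic has to work with, it helps to check the current checkpoint settings and how often checkpoints are being requested early. A rough sketch; the suggested values are purely illustrative, and checkpoint_segments applies to the 8.x/9.0 servers discussed here (it was later replaced by max_wal_size):

-- Current checkpoint-related settings:
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('checkpoint_segments', 'checkpoint_timeout',
               'checkpoint_completion_target', 'log_checkpoints');

-- Checkpoints triggered by the timer vs. by running out of WAL segments;
-- a high "requested" count usually means checkpoint_segments is too small:
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

-- Then, in postgresql.conf (reload required), something in the spirit of:
--   checkpoint_segments = 64
--   checkpoint_timeout = 15min
--   checkpoint_completion_target = 0.8
--   log_checkpoints = on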
{
"msg_contents": "Hello Tom!\n\nThu, 21 Oct 2010 10:55:48 -0400, you wrote: \n\n > Gael Le Mignot <[email protected]> writes:\n >> The problem is when we index objects into the full-text search part of\n >> the database (which a DELETE and then an INSERT into a specific table),\n >> the INSERT sometimes take a long time (from 10s to 20s), but the same\n >> insert (and many other similar ones) are fast (below 0.2s).\n\n >> This slowness comes regularly, about every 200 objects indexed,\n >> regardless of the frequency of the inserts.\n\n > Hm. You didn't say which PG version you're using, but if it's >= 8.4,\n > I think this may be caused by GIN's habit of queuing index insertions\n > until it's accumulated a reasonable-size batch:\n > http://www.postgresql.org/docs/9.0/static/gin-implementation.html#GIN-FAST-UPDATE\n\n > While you can turn that off, I think that doing so will reduce the\n > index's search efficiency over time. It might be better to schedule\n > regular vacuums on the table so that the work is done by vacuum rather\n > than foreground queries.\n\nThanks for your feedback.\n\nIt seems to be related, at least, if I increase the work_mem variable,\nthe slowness becomes bigger (up to 1 minute for a work_mem of 8mb) but\nmuch less frequent (around 0.05% instead of 0.5% of the requests for 8mb\ninstead of 1mb).\n\nSo a big work_mem and a regular vacuum would do the trick, I think. Does\nautovacuum trigger the gin index vacuuming too, or does it require a\nmanual vacuum?\n\nRegards, \n\n-- \nGaël Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nGérez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Thu, 21 Oct 2010 18:07:47 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": "Gael Le Mignot wrote:\n> Hello,\n>\n> We are using PostgreSQL for storing data and full-text search indexes\n> for the webiste of a daily newspaper. We are very happy overall with the\n> results, but we have one \"weird\" behaviour that we would like to solve.\n>\n> The problem is when we index objects into the full-text search part of\n> the database (which a DELETE and then an INSERT into a specific table),\n> the INSERT sometimes take a long time (from 10s to 20s), but the same\n> insert (and many other similar ones) are fast (below 0.2s).\n> \nHave you tried with strace, just to see where the time is spent?\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Thu, 21 Oct 2010 12:15:03 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": "Gael Le Mignot <[email protected]> writes:\n> Thu, 21 Oct 2010 10:55:48 -0400, you wrote: \n>>> I think this may be caused by GIN's habit of queuing index insertions\n>>> until it's accumulated a reasonable-size batch:\n>>> http://www.postgresql.org/docs/9.0/static/gin-implementation.html#GIN-FAST-UPDATE\n\n> So a big work_mem and a regular vacuum would do the tick, I think. Does\n> auto_vacuum triggers the gin index vacuuming too, or does it require a\n> manual vacuum ?\n\nAutovacuum will handle it too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Oct 2010 12:50:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts "
},
{
"msg_contents": ">>> I think this may be caused by GIN's habit of queuing index insertions\n>>> until it's accumulated a reasonable-size batch:\n\nSo the fact that it takes 21s to sync 365 buffers in this case is normal?\n\n\n \n",
"msg_date": "Fri, 22 Oct 2010 08:45:02 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Periodically slow inserts"
},
{
"msg_contents": "Hello,\n\nThanks to everyone who gave me hints and feedback. I managed to solve\nthe problem.\n\nMy understanding of what was happening is the following :\n\n- The gin index (as explained in [1]), stores temporary lists, and when\n they grow big enough, those are dispatched into the real index. Vacuum\n also does this index flush, in background.\n\n- This index flush, on a table with 500k rows, means making changes to a\n lot of disk pages, filling the WAL in one big burst, forcing an\n immediate checkpoint, and blocking the INSERT that triggered it.\n\nI managed to solve the problem by adjusting two sets of parameters :\n\n- The work_mem variable, which specifies the maximal size of the temporary\n list before the gin index is \"flushed\". \n\n- The autovacuum parameters.\n\nThe main idea was to increase the size of temporary lists (through\nwork_mem) and increase the frequency of autovacuums, to ensure that\nunder real life load (even heavy real life load), the \"index flush\" is\nalways done by the autovacuum, and never by the \"list is full\" trigger.\n\nWith this setup, I managed to handle indexing 10 000 objects in 2 hours\nwithout any stall, which is much more than we'll have to handle under\nreal life load.\n\nRegards,\n\n\n[1] http://www.postgresql.org/docs/8.4/static/gin-implementation.html\n\n-- \nGaël Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nGérez vos contacts et vos newsletters : www.cockpit-mailing.com\n",
"msg_date": "Fri, 22 Oct 2010 17:10:52 +0200",
"msg_from": "Gael Le Mignot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Periodically slow inserts"
}
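Expressed as statements, the fix described above comes down to something like the following sketch; the numbers are illustrative rather than the ones actually used (in 8.4 work_mem caps the size of the GIN pending list, and the per-table storage parameters make autovacuum visit this table often enough that it, rather than a foreground INSERT, performs the flush):

-- Larger pending list for the session(s) doing the indexing:
SET work_mem = '32MB';

-- Much more eager autovacuum on just this table:
ALTER TABLE sesql_default SET (
    autovacuum_vacuum_scale_factor = 0.0,
    autovacuum_vacuum_threshold    = 1000
);

Since each re-index here is a DELETE followed by an INSERT, the extra vacuums also keep dead rows from accumulating.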
] |
[
{
"msg_contents": "Now that some of my recent writing has gone from NDA protected to public \nsample, I've added a new page to the PostgreSQL wiki that provides a \ngood starting set of resources to learn about an ever popular topic \nhere, how write cache problems can lead to database corruption: \nhttp://wiki.postgresql.org/wiki/Reliable_Writes\n\nBruce also has a presentation he's been working on that adds pictures \nshowing the flow of data through the various cache levels, to help \npeople visualize the whole thing, that should get added into there once \nhe's finished tweaking it.\n\nI'd like to get some feedback from the members of this list about what's \nstill missing after this expanded data dump. Ultimately I'd like to get \nthis page to be an authoritative enough resource that the \"Reliability\" \nsection of the official documentation could point back to this as a \nrecommendation for additional information. So much of this material \nrequires singling out specific vendors and staying up to date with \nhardware changes, both things that the official docs are not a good \nplace for.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\nAuthor, \"PostgreSQL 9.0 High Performance\":\nhttp://www.2ndquadrant.com/books\n\n\n",
"msg_date": "Thu, 21 Oct 2010 10:08:05 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "New wiki page on write reliability"
},
{
"msg_contents": "On 10-10-21 10:08 AM, Greg Smith wrote:\n> Now that some of my recent writing has gone from NDA protected to \n> public sample, I've added a new page to the PostgreSQL wiki that \n> provides a good starting set of resources to learn about an ever \n> popular topic here, how write cache problems can lead to database \n> corruption: http://wiki.postgresql.org/wiki/Reliable_Writes\n>\n> Bruce also has a presentation he's been working on that adds pictures \n> showing the flow of data through the various cache levels, to help \n> people visualize the whole thing, that should get added into there \n> once he's finished tweaking it.\n>\n> I'd like to get some feedback from the members of this list about \n> what's still missing after this expanded data dump. Ultimately I'd \n> like to get this page to be an authoritative enough resource that the \n> \"Reliability\" section of the official documentation could point back \n> to this as a recommendation for additional information. So much of \n> this material requires singling out specific vendors and staying up to \n> date with hardware changes, both things that the official docs are not \n> a good place for.\n>\n\nLooks like a good start.\n\nI think a warning turning fsync off, the dangers of async_commit, and \nthe potential problems with disabling full_page_writes might be worth \nmentioning on this page, unless you want to leave that buried in the \nattached references.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Thu, 21 Oct 2010 10:34:49 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New wiki page on write reliability"
},
{
"msg_contents": "Brad Nicholson wrote:\n> I think a warning turning fsync off, the dangers of async_commit, and \n> the potential problems with disabling full_page_writes might be worth \n> mentioning on this page, unless you want to leave that buried in the \n> attached references.\n\nGood idea to highlight that. What I just did here was point out which \nof the references covered that specific topic, which is as good as I \ncould do for now. \n\nIt's hard for me to justify spending time writing more about those when \nthey are covered in the attached references, and I personally can't do \nit because of my publishing agreement. The fact that the information \nabout this topic is what ended up being released as the sample material \nfrom my book is not coincidence--I wanted to be able to share what I'd \ndone here as a free community resources because this topic is so \nimportant to me. But I can't go much further than what I've already put \nup there myself.\n\n-- \nGreg Smith, 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\n\n",
"msg_date": "Thu, 21 Oct 2010 11:45:11 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New wiki page on write reliability"
},
{
"msg_contents": "Greg Smith wrote:\n> Now that some of my recent writing has gone from NDA protected to public \n> sample, I've added a new page to the PostgreSQL wiki that provides a \n> good starting set of resources to learn about an ever popular topic \n> here, how write cache problems can lead to database corruption: \n> http://wiki.postgresql.org/wiki/Reliable_Writes\n> \n> Bruce also has a presentation he's been working on that adds pictures \n> showing the flow of data through the various cache levels, to help \n> people visualize the whole thing, that should get added into there once \n> he's finished tweaking it.\n\nMy presentation is done and is now on the wiki too:\n\n\thttp://momjian.us/main/writings/pgsql/hw_selection.pdf\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 21 Oct 2010 12:09:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New wiki page on write reliability"
}
] |
[
{
"msg_contents": "Hello,\n\nDoes anyone have any experience with running postgreSQL on Blue Arc's \nnetwork storage products? In particular, we are interested in the the \nTitan and Mercury series:\n\nhttp://www.bluearc.com/data-storage-products/titan-3000-network-storage-system.shtml\nhttp://www.bluearc.com/data-storage-products/mercury-network-storage-system.shtml\n\n-- \nTim Goodaire 416-673-4126 [email protected]\nDatabase Team Lead, Afilias Canada Corp.\n\n",
"msg_date": "Thu, 21 Oct 2010 14:54:48 -0400",
"msg_from": "Tim Goodaire <[email protected]>",
"msg_from_op": true,
"msg_subject": "Experiences with running PostgreSQL on Blue Arc Network Storage"
}
] |
[
{
"msg_contents": "Rob Wultsch wrote:\n \n> I really would like to work with PG more and this seems like\n> [full_page_writes] would be a significant hindrance for certain\n> usage patterns. Lots of replication does not take place over gig...\n \nCertainly most of the Wisconsin State Courts replication takes place\nover WAN connections at a few Mbps. I haven't seen any evidence that\nhaving full_page_writes on has caused us problems, personally.\n \nIn the PostgreSQL community you generally need to show some hard\nnumbers from a repeatable test case for the community to believe that\nthere's a problem which needs fixing, much less to buy in to some\nparticular fix for the purported problem. On the other hand, if you\ncan show that there actually *is* a problem, I've never seen a group\nwhich responds so quickly and effectively to solve it as the\nPostgreSQL community. Don't get too attached to a particular\nsolution without proof that it's better than the alternatives,\nthough....\n \n-Kevin\n",
"msg_date": "Sat, 23 Oct 2010 11:41:33 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BBU Cache vs. spindles"
}
] |
[
{
"msg_contents": "Hello,\n\nWhat is the general view of performance CPU's nowadays when it comes to \nPostgreSQL performance? Which CPU is the better choice, in regards to \nRAM access-times, stream speed, cache synchronization etc. Which is the \nbetter CPU given the limitation of using AMD64 (x86-64)?\n\nWe're getting ready to replace our (now) aging db servers with some \nbrand new with higher core count. The old ones are 4-socket dual-core \nOpteron 8218's with 48GB RAM. Right now the disk-subsystem is not the \nlimiting factor so we're aiming for higher core-count and as well as \nfaster and more RAM. We're also moving into the territory of version 9.0 \nwith streaming replication to be able to offload at least a part of the \nread-only queries to the slave database. The connection count on the \ndatabase usually lies in the region of ~2500 connections and the \ndatabase is small enough that it can be kept entirely in RAM (dump is \nabout 2,5GB).\n\nRegards,\nChristian Elmerot\n",
"msg_date": "Mon, 25 Oct 2010 11:53:17 +0200",
"msg_from": "\"Christian Elmerot @ One.com\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "CPUs for new databases"
}
] |
[
{
"msg_contents": "Hello Experts,\nMy application uses Oracle DB, and makes use of OCI interface.\nI have been able to develop similar interface using postgreSQL library.\nHowever, I have done some tests but results for PostgreSQL have not been \nencouraging for a few of them.\n\nMy questions/scenarios are:\n\n1. How does PostgreSQL perform when inserting data into an indexed (type: btree) \ntable? Is it true that as you add the indexes on a table, the performance \ndeteriorates significantly whereas Oracle does not show that much performance \ndecrease. I have tried almost all postgreSQL performance tips available. I want \nto have very good \"insert\" performance (with indexes), \"select\" performance is \nnot that important at this point of time.\n\n2. What are the average storage requirements of postgres compared to Oracle? I \ninserted upto 1 million records. The storage requirement of postgreSQL is almost \ndouble than that of Oracle.\n\nThanks in anticipation.\n",
"msg_date": "Mon, 25 Oct 2010 11:12:40 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres insert performance and storage requirement compared to\n Oracle"
},
{
"msg_contents": "On Mon, 2010-10-25 at 11:12 -0700, Divakar Singh wrote:\n\n> My questions/scenarios are:\n> \n> 1. How does PostgreSQL perform when inserting data into an indexed\n> (type: btree) \n> table? Is it true that as you add the indexes on a table, the\n> performance \n> deteriorates significantly whereas Oracle does not show that much\n> performance \n> decrease. I have tried almost all postgreSQL performance tips\n> available. I want \n> to have very good \"insert\" performance (with indexes), \"select\"\n> performance is \n> not that important at this point of time.\n\nDid you test?\n\n> \n> 2. What are the average storage requirements of postgres compared to\n> Oracle? I \n> inserted upto 1 million records. The storage requirement of postgreSQL\n> is almost \n> double than that of Oracle.\n\nWhat was your table structure?\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Mon, 25 Oct 2010 11:20:33 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Mon, Oct 25, 2010 at 12:12 PM, Divakar Singh <[email protected]> wrote:\n> Hello Experts,\n> My application uses Oracle DB, and makes use of OCI interface.\n> I have been able to develop similar interface using postgreSQL library.\n> However, I have done some tests but results for PostgreSQL have not been\n> encouraging for a few of them.\n\nTell us more about your tests and results please.\n",
"msg_date": "Mon, 25 Oct 2010 12:26:27 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "> My questions/scenarios are:\n> \n> 1. How does PostgreSQL perform when inserting data into an indexed\n> (type: btree) \n> table? Is it true that as you add the indexes on a table, the\n> performance \n> deteriorates significantly whereas Oracle does not show that much\n> performance \n> decrease. I have tried almost all postgreSQL performance tips\n> available. I want \n> to have very good \"insert\" performance (with indexes), \"select\"\n> performance is \n> not that important at this point of time.\n\n-- Did you test?\n\nYes. the performance was comparable when using SQL procedure. However, When I \nused libpq, PostgreSQL performed very bad. There was some difference in \nenvironment also between these 2 tests, but I am assuming libpq vs SQL was the \nreal cause. Or it was something else?\n \n> \n> 2. What are the average storage requirements of postgres compared to\n> Oracle? I \n> inserted upto 1 million records. The storage requirement of postgreSQL\n> is almost \n> double than that of Oracle.\n\n -- What was your table structure?\n\nSome 10-12 columns ( like 2 timestamp, 4 int and 4 varchar), with 5 indexes on \nvarchar and int fields including 1 implicit index coz of PK. \n\n\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n\n \n> My questions/scenarios are:> > 1. How does PostgreSQL perform when inserting data into an indexed> (type: btree) > table? Is it true that as you add the indexes on a table, the> performance > deteriorates significantly whereas Oracle does not show that much> performance > decrease. I have tried almost all postgreSQL performance tips> available. I want > to have very good \"insert\" performance (with indexes), \"select\"> performance is > not that important at this point of time.-- Did you test?Yes. the performance was comparable when using SQL procedure. However, When I used libpq, PostgreSQL performed very bad. There was some\n difference in environment also between these 2 tests, but I am assuming libpq vs SQL was the real cause. Or it was something else? > > 2. What are the average storage requirements of postgres compared to> Oracle? I > inserted upto 1 million records. The storage requirement of postgreSQL> is almost > double than that of Oracle. -- What was your table structure?Some 10-12 columns ( like 2 timestamp, 4 int and 4 varchar), with 5 indexes on varchar and int fields including 1 implicit index coz of PK. Joshua D. Drake-- PostgreSQL.org Major ContributorCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579Consulting, Training, Support, Custom Development, Engineeringhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt",
"msg_date": "Mon, 25 Oct 2010 11:31:22 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Mon, Oct 25, 2010 at 2:12 PM, Divakar Singh <[email protected]> wrote:\n> 1. How does PostgreSQL perform when inserting data into an indexed (type:\n> btree) table? Is it true that as you add the indexes on a table, the\n> performance deteriorates significantly whereas Oracle does not show that\n> much performance decrease. I have tried almost all postgreSQL performance\n> tips available. I want to have very good \"insert\" performance (with\n> indexes), \"select\" performance is not that important at this point of time.\n\nI don't claim to have any experience with Oracle, but this boast\nsmells fishy. See for example Figure 3-2 (pp. 57-58) in \"The Art of\nSQL\", where the author presents simple charts showing the performance\nimpact upon INSERTs of adding indexes to a table in Oracle and MySQL:\nthey're both in the same ballpark, and the performance impact is\nindeed significant. As Joshua Drake suggests, table schemas and test\nresults would help your case.\n\nJosh\n",
"msg_date": "Mon, 25 Oct 2010 14:33:10 -0400",
"msg_from": "Josh Kupershmidt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "Storage test was simple, but the data (seconds taken) for INSERT test for PG vs \nOracle for 1, 2, 3,4 and 5 indexes was:\nPG:\n\n25 \n30 \n37 \n42 \n45 \n\nOracle:\n\n\n33 \n43 \n50 \n65 \n68 Rows inserted: 100,000 \nAbove results show good INSERT performance of PG when using SQL procedures. But \nperformance when I use C++ lib is very bad. I did that test some time back so I \ndo not have data for that right now.\n\n\n\n\n________________________________\nFrom: Scott Marlowe <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]\nSent: Mon, October 25, 2010 11:56:27 PM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\nOn Mon, Oct 25, 2010 at 12:12 PM, Divakar Singh <[email protected]> wrote:\n> Hello Experts,\n> My application uses Oracle DB, and makes use of OCI interface.\n> I have been able to develop similar interface using postgreSQL library.\n> However, I have done some tests but results for PostgreSQL have not been\n> encouraging for a few of them.\n\nTell us more about your tests and results please.\n\n\n\n \nStorage test was simple, but the data (seconds taken) for INSERT test for PG vs Oracle for 1, 2, 3,4 and 5 indexes was:PG:\n\n\n25\n\n\n30\n\n\n37\n\n\n42\n\n\n45\n\nOracle:\n\n\n33\n\n\n43\n\n\n50\n\n\n65\n\n\n68\n\nRows inserted: 100,000 Above results show good INSERT performance of PG when using SQL procedures. But performance when I use C++ lib is very bad. I did that test some time back so I do not have data for that right now.From: Scott Marlowe <[email protected]>To: Divakar Singh <[email protected]>Cc: [email protected]: Mon, October 25, 2010 11:56:27 PMSubject: Re: [PERFORM] Postgres insert performance and storage requirement compared to OracleOn\n Mon, Oct 25, 2010 at 12:12 PM, Divakar Singh <[email protected]> wrote:> Hello Experts,> My application uses Oracle DB, and makes use of OCI interface.> I have been able to develop similar interface using postgreSQL library.> However, I have done some tests but results for PostgreSQL have not been> encouraging for a few of them.Tell us more about your tests and results please.",
"msg_date": "Mon, 25 Oct 2010 11:36:24 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Mon, 2010-10-25 at 11:36 -0700, Divakar Singh wrote:\n> \n> 68 Rows inserted: 100,000 \n> Above results show good INSERT performance of PG when using SQL\n> procedures. But \n> performance when I use C++ lib is very bad. I did that test some time\n> back so I \n> do not have data for that right now.\n\nThis is interesting, are you using libpq or libpqXX?\n\nJoshua D. Drake\n\n\n> \n> \n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Mon, 25 Oct 2010 11:38:52 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "Hi Joshua,\nI have been only using libpq.\nIs libpqXX better than the other?\nIs there any notable facility in libpqxx which could aid in fast inserts or \nbetter performance in general?\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Joshua D. Drake <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: Scott Marlowe <[email protected]>; [email protected]\nSent: Tue, October 26, 2010 12:08:52 AM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\nOn Mon, 2010-10-25 at 11:36 -0700, Divakar Singh wrote:\n> \n> 68 Rows inserted: 100,000 \n> Above results show good INSERT performance of PG when using SQL\n> procedures. But \n> performance when I use C++ lib is very bad. I did that test some time\n> back so I \n> do not have data for that right now.\n\nThis is interesting, are you using libpq or libpqXX?\n\nJoshua D. Drake\n\n\n> \n> \n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nHi Joshua,I have been only using libpq.Is libpqXX better than the other?Is there any notable facility in libpqxx which could aid in fast inserts or better performance in general? Best Regards,DivakarFrom: Joshua D. Drake <[email protected]>To: Divakar Singh <[email protected]>Cc: Scott Marlowe <[email protected]>; [email protected]: Tue, October 26, 2010 12:08:52 AMSubject: Re: [PERFORM] Postgres insert performance and storage requirement compared to OracleOn Mon, 2010-10-25 at 11:36 -0700, Divakar Singh wrote:> > 68 Rows inserted: 100,000 > Above results show good INSERT performance of PG when using SQL> procedures. But > performance when I use C++ lib is very bad. I did that test some time> back so I > do not have data for that right now.This is interesting, are you using libpq or libpqXX?Joshua D. Drake> > -- PostgreSQL.org Major ContributorCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579Consulting, Training, Support, Custom Development, Engineeringhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 25 Oct 2010 11:42:48 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On 10-10-25 02:31 PM, Divakar Singh wrote:\n>\n> > My questions/scenarios are:\n> >\n> > 1. How does PostgreSQL perform when inserting data into an indexed\n> > (type: btree)\n> > table? Is it true that as you add the indexes on a table, the\n> > performance\n> > deteriorates significantly whereas Oracle does not show that much\n> > performance\n> > decrease. I have tried almost all postgreSQL performance tips\n> > available. I want\n> > to have very good \"insert\" performance (with indexes), \"select\"\n> > performance is\n> > not that important at this point of time.\n>\n> -- Did you test?\n>\n> Yes. the performance was comparable when using SQL procedure. However,\n> When I used libpq, PostgreSQL performed very bad. There was some\n> difference in environment also between these 2 tests, but I am assuming\n> libpq vs SQL was the real cause. Or it was something else?\n\nSo your saying that when you load the data with psql it loads fine, but \nwhen you load it using libpq it takes much longer?\n\nHow are you using libpq?\n-Are you opening and closing the database connection between each insert?\n-Are you doing all of your inserts as one big transaction or are you \ndoing a transaction per insert\n-Are you using prepared statements for your inserts?\n-Are you using the COPY command to load your data or the INSERT command?\n-Are you running your libpq program on the same server as postgresql?\n-How is your libpq program connecting to postgresql, is it using ssl?\n\n>\n> Some 10-12 columns ( like 2 timestamp, 4 int and 4 varchar), with 5\n> indexes on varchar and int fields including 1 implicit index coz of PK.\n\nIf your run \"VACUUM VERBOSE tablename\" on the table, what does it say?\n\nYou also don't mention which version of postgresql your using.\n\n>\n>\n> Joshua D. Drake\n>\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n>\n>\n\n",
"msg_date": "Mon, 25 Oct 2010 14:46:46 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
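The items on that checklist that usually dominate libpq insert speed are: committing one row per transaction (each COMMIT waits for a WAL flush), re-planning the INSERT for every row instead of preparing it once (PQprepare/PQexecPrepared on the libpq side), and not using COPY (PQputCopyData) for bulk loads. A minimal psql-style sketch of the same ideas in SQL, against a hypothetical test_tbl:

-- One transaction and one prepared statement for many rows:
BEGIN;
PREPARE ins (timestamptz, integer, varchar) AS
    INSERT INTO test_tbl (created_at, val, name) VALUES ($1, $2, $3);
EXECUTE ins (now(), 1, 'row 1');
EXECUTE ins (now(), 2, 'row 2');
-- ... thousands more EXECUTE calls ...
COMMIT;
DEALLOCATE ins;

-- Bulk path: COPY (tab-separated columns) is normally far faster than
-- row-by-row INSERT:
COPY test_tbl (created_at, val, name) FROM STDIN;
2010-10-25 12:00:00+00	3	row 3
2010-10-25 12:00:01+00	4	row 4
\.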
{
"msg_contents": "Answers:\n\nHow are you using libpq?\n-Are you opening and closing the database connection between each insert?\n\n[Need to check, will come back on this]\n\n-Are you doing all of your inserts as one big transaction or are you doing a \ntransaction per insert\n\n[Answer: for C++ program, one insert per transaction in PG as well as Oracle. \nBut in stored proc, I think both use only 1 transaction for all inserts]\n\n-Are you using prepared statements for your inserts?\n\n[Need to check, will come back on this]\n\n-Are you using the COPY command to load your data or the INSERT command?\n\n[No]\n\n-Are you running your libpq program on the same server as postgresql?\n\n[Yes]\n\n-How is your libpq program connecting to postgresql, is it using ssl?\n\n[No]\n\nIf your run \"VACUUM VERBOSE tablename\" on the table, what does it say?\n\n[Need to check, will come back on this]\n\nYou also don't mention which version of postgresql your using.\n\n[Latest, 9.x]\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Steve Singer <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]; [email protected]\nSent: Tue, October 26, 2010 12:16:46 AM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\nOn 10-10-25 02:31 PM, Divakar Singh wrote:\n>\n> > My questions/scenarios are:\n> >\n> > 1. How does PostgreSQL perform when inserting data into an indexed\n> > (type: btree)\n> > table? Is it true that as you add the indexes on a table, the\n> > performance\n> > deteriorates significantly whereas Oracle does not show that much\n> > performance\n> > decrease. I have tried almost all postgreSQL performance tips\n> > available. I want\n> > to have very good \"insert\" performance (with indexes), \"select\"\n> > performance is\n> > not that important at this point of time.\n>\n> -- Did you test?\n>\n> Yes. the performance was comparable when using SQL procedure. However,\n> When I used libpq, PostgreSQL performed very bad. There was some\n> difference in environment also between these 2 tests, but I am assuming\n> libpq vs SQL was the real cause. Or it was something else?\n\nSo your saying that when you load the data with psql it loads fine, but \nwhen you load it using libpq it takes much longer?\n\nHow are you using libpq?\n-Are you opening and closing the database connection between each insert?\n-Are you doing all of your inserts as one big transaction or are you \ndoing a transaction per insert\n-Are you using prepared statements for your inserts?\n-Are you using the COPY command to load your data or the INSERT command?\n-Are you running your libpq program on the same server as postgresql?\n-How is your libpq program connecting to postgresql, is it using ssl?\n\n>\n> Some 10-12 columns ( like 2 timestamp, 4 int and 4 varchar), with 5\n> indexes on varchar and int fields including 1 implicit index coz of PK.\n\nIf your run \"VACUUM VERBOSE tablename\" on the table, what does it say?\n\nYou also don't mention which version of postgresql your using.\n\n>\n>\n> Joshua D. 
Drake\n>\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 25 Oct 2010 11:52:31 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "Profiling could tell you where is the time lost and where is your \nprogram spending time. Having experience with both Oracle and Postgres, \nI don't feel that there is much of a difference in the insert speed. I \nam not using C++, I am using scripting languages like Perl and PHP and, \nas far as inserts go, I don't see much of a difference. I have an \napplication which inserts approximately 600,000 records into a \nPostgreSQL 9.0.1 per day, in chunks of up to 60,000 records every hour. \nThe table is partitioned and there are indexes on the underlying \npartitions. I haven't noticed any problems with inserts. Also, if I use \n\"copy\" instead of the \"insert\" command, I can be even faster. In \naddition to that, PostgreSQL doesn't support index organized tables.\n\nDivakar Singh wrote:\n> Storage test was simple, but the data (seconds taken) for INSERT test \n> for PG vs Oracle for 1, 2, 3,4 and 5 indexes was:\n> PG:\n> 25\n> 30\n> 37\n> 42\n> 45\n>\n>\n>\n> Oracle:\n>\n> 33\n> 43\n> 50\n> 65\n> 68\n>\n> Rows inserted: 100,000\n> Above results show good INSERT performance of PG when using SQL \n> procedures. But performance when I use C++ lib is very bad. I did that \n> test some time back so I do not have data for that right now.\n>\n> ------------------------------------------------------------------------\n> *From:* Scott Marlowe <[email protected]>\n> *To:* Divakar Singh <[email protected]>\n> *Cc:* [email protected]\n> *Sent:* Mon, October 25, 2010 11:56:27 PM\n> *Subject:* Re: [PERFORM] Postgres insert performance and storage \n> requirement compared to Oracle\n>\n> On Mon, Oct 25, 2010 at 12:12 PM, Divakar Singh <[email protected] \n> <mailto:[email protected]>> wrote:\n> > Hello Experts,\n> > My application uses Oracle DB, and makes use of OCI interface.\n> > I have been able to develop similar interface using postgreSQL library.\n> > However, I have done some tests but results for PostgreSQL have not been\n> > encouraging for a few of them.\n>\n> Tell us more about your tests and results please.\n>\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Mon, 25 Oct 2010 14:56:13 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "\nOn Mon, Oct 25, 2010 at 11:39:30AM -0700, Divakar Singh wrote:\n> Thanks Ray,\n> Already seen that, but it does not tell about storage requirement compared to \n> Oracle. I find it takes 2 times space than oracle. \n> \n> \n> Best Regards,\n> Divakar\n> ________________________________\n> From: Ray Stell <[email protected]>\n> To: Divakar Singh <[email protected]>\n> Sent: Tue, October 26, 2010 12:05:23 AM\n> Subject: Re: [PERFORM] Postgres insert performance and storage requirement \n> compared to Oracle\n> \n> On Mon, Oct 25, 2010 at 11:12:40AM -0700, Divakar Singh wrote:\n> > \n> > 2. What are the average storage requirements of postgres compared to Oracle? I \n> \n> > inserted upto 1 million records. The storage requirement of postgreSQL is \n> >almost \n> >\n> > double than that of Oracle.\n> \n> there's a fine manual:\n> http://www.postgresql.org/docs/9.0/interactive/storage.html\n\n\nMaybe compare to oracle's storage documentation:\n\n http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/sql_elements001.htm#SQLRF30020\n http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/schema007.htm#ADMIN11622\n\nI don't believe for a second the byte count is double in pg, but that's just\na religious expression, I've never counted.\n",
"msg_date": "Mon, 25 Oct 2010 15:21:57 -0400",
"msg_from": "Ray Stell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Mon, Oct 25, 2010 at 12:36 PM, Divakar Singh <[email protected]> wrote:\n>\n> Storage test was simple, but the data (seconds taken) for INSERT test for PG vs Oracle for 1, 2, 3,4 and 5 indexes was:\n> PG:\n> 25\n> 30\n> 37\n> 42\n> 45\n>\n> Oracle:\n>\n> 33\n> 43\n> 50\n> 65\n> 68\n> Rows inserted: 100,000\n> Above results show good INSERT performance of PG when using SQL procedures. But performance when I use C++ lib is very bad. I did that test some time back so I do not have data for that right now.\n\nSo, assuming I wanted to reproduce your results, can you provide a\nself contained test case that shows these differences? I have always\ngotten really good performance using libpq myself, so I'm looking for\nwhat it is you might be doing differently from me that would make it\nso slow.\n",
"msg_date": "Mon, 25 Oct 2010 13:26:41 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On October 25, 2010 11:36:24 am Divakar Singh wrote:\n> Above results show good INSERT performance of PG when using SQL procedures.\n> But performance when I use C++ lib is very bad. I did that test some time\n> back so I do not have data for that right now.\n\nWrap it in a transaction.\n",
"msg_date": "Mon, 25 Oct 2010 12:51:41 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Mon, Oct 25, 2010 at 2:12 PM, Divakar Singh <[email protected]> wrote:\n> Hello Experts,\n> My application uses Oracle DB, and makes use of OCI interface.\n> I have been able to develop similar interface using postgreSQL library.\n> However, I have done some tests but results for PostgreSQL have not been\n> encouraging for a few of them.\n>\n> My questions/scenarios are:\n>\n> 1. How does PostgreSQL perform when inserting data into an indexed (type:\n> btree) table? Is it true that as you add the indexes on a table, the\n> performance deteriorates significantly whereas Oracle does not show that\n> much performance decrease. I have tried almost all postgreSQL performance\n> tips available. I want to have very good \"insert\" performance (with\n> indexes), \"select\" performance is not that important at this point of time.\n>\n> 2. What are the average storage requirements of postgres compared to Oracle?\n> I inserted upto 1 million records. The storage requirement of postgreSQL is\n> almost double than that of Oracle.\n>u\n> Thanks in anticipation.\n\nI ran the following tests w/libpqtypes. While you probably wont end\nup using libpqtypes, it's illustrative to mention it because it's\ngenerally the easiest way to get data into postgres and by far the\nfastest (excluding 'COPY'). source code follows after the sig (I\nbanged it out quite quickly, it's messy!) :-). I am not seeing your\nresults.\n\nvia libpqtypes: Inserting, begin..insert..(repeat 1000000x) commit;\nlocal workstation: 2m24s\nremote server: 8m8s\n\nvia libpqtypes, but stacking array and unstacking on server (this\ncould be optimized further by using local prepare):\nlocal workstation: 43s (io bound)\nremote server: 29s (first million)\nremote server: 29s (second million)\ncreate index (1.8s) remote\nremote server: 33s (third million, w/index)\n\nobviously insert at a time tests are network bound. 
throw a couple of\nindexes in there and you should see some degradation, but nothing too\nterrible.\n\nmerlin\nlibpqtypes.esilo.com\n\nins1.c (insert at a time)\n#include \"libpq-fe.h\"\n#include \"libpqtypes.h\"\n\n#define INS_COUNT 1000000\n\nint main()\n{\n int i;\n\n PGconn *conn = PQconnectdb(\"host=devdb dbname=postgres port=8071\");\n if(PQstatus(conn) != CONNECTION_OK)\n {\n printf(\"bad connection\");\n return -1;\n }\n\n PQtypesRegister(conn);\n\n PQexec(conn, \"begin\");\n\n for(i=0; i<INS_COUNT; i++)\n {\n PGint4 a=i;\n PGtext b = \"some_text\";\n PGtimestamp c;\n PGbytea d;\n\n d.len = 8;\n d.data = b;\n\n c.date.isbc = 0;\n c.date.year = 2000;\n c.date.mon = 0;\n c.date.mday = 19;\n c.time.hour = 10;\n c.time.min = 41;\n c.time.sec = 6;\n c.time.usec = 0;\n c.time.gmtoff = -18000;\n\n PGresult *res = PQexecf(conn, \"insert into ins_test(a,b,c,d)\nvalues(%int4, %text, %timestamptz, %bytea)\", a, b, &c, &d);\n\n if(!res)\n {\n printf(\"got %s\\n\", PQgeterror());\n return -1;\n }\n PQclear(res);\n }\n\n PQexec(conn, \"commit\");\n\n PQfinish(conn);\n}\n\n\nins2.c (array stack/unstack)\n#include \"libpq-fe.h\"\n#include \"libpqtypes.h\"\n\n#define INS_COUNT 1000000\n\nint main()\n{\n int i;\n\n PGconn *conn = PQconnectdb(\"host=devdb dbname=postgres port=8071\");\n PGresult *res;\n if(PQstatus(conn) != CONNECTION_OK)\n {\n printf(\"bad connection\");\n return -1;\n }\n\n PQtypesRegister(conn);\n\n PGregisterType type = {\"ins_test\", NULL, NULL};\n PQregisterComposites(conn, &type, 1);\n\n PGparam *p = PQparamCreate(conn);\n PGarray arr;\n arr.param = PQparamCreate(conn);\n arr.ndims = 0;\n\n for(i=0; i<INS_COUNT; i++)\n {\n PGint4 a=i;\n PGtext b = \"some_text\";\n PGtimestamp c;\n PGbytea d;\n PGparam *i = PQparamCreate(conn);\n\n d.len = 8;\n d.data = b;\n\n c.date.isbc = 0;\n c.date.year = 2000;\n c.date.mon = 0;\n c.date.mday = 19;\n c.time.hour = 10;\n c.time.min = 41;\n c.time.sec = 6;\n c.time.usec = 0;\n c.time.gmtoff = -18000;\n\n PQputf(i, \"%int4 %text %timestamptz %bytea\", a, b, &c, &d);\n PQputf(arr.param, \"%ins_test\", i);\n }\n\n if(!PQputf(p, \"%ins_test[]\", &arr))\n {\n printf(\"putf failed: %s\\n\", PQgeterror());\n return -1;\n }\n res = PQparamExec(conn, p, \"insert into ins_test select (unnest($1)).*\", 1);\n\n if(!res)\n {\n printf(\"got %s\\n\", PQgeterror());\n return -1;\n }\n PQclear(res);\n PQfinish(conn);\n}\n",
"msg_date": "Mon, 25 Oct 2010 16:28:57 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Mon, Oct 25, 2010 at 4:28 PM, Merlin Moncure <[email protected]> wrote:\n> I ran the following tests w/libpqtypes. While you probably wont end\n> up using libpqtypes, it's illustrative to mention it because it's\n> generally the easiest way to get data into postgres and by far the\n> fastest (excluding 'COPY'). source code follows after the sig (I\n> banged it out quite quickly, it's messy!) :-). I am not seeing your\n> results.\n\nI had a really horrible bug in there -- leaking a param inside the\narray push loop. cleaning it up dropped another 5 seconds or so from\nthe 4th million inserted to the remote server!. Using local prepare\n(PQspecPrepare) prob another second or two could be shaved off.\n\n PGparam *t = PQparamCreate(conn);\n\n for(i=0; i<INS_COUNT; i++)\n {\n PGint4 a=i;\n PGtext b = \"some_text\";\n PGtimestamp c;\n PGbytea d;\n\n d.len = 8;\n d.data = b;\n\n c.date.isbc = 0;\n c.date.year = 2000;\n c.date.mon = 0;\n c.date.mday = 19;\n c.time.hour = 10;\n c.time.min = 41;\n c.time.sec = 6;\n c.time.usec = 0;\n c.time.gmtoff = -18000;\n\n PQputf(t, \"%int4 %text %timestamptz %bytea\", a, b, &c, &d);\n PQputf(arr.param, \"%ins_test\", t);\n PQparamReset(t);\n }\n\nmerlin\n",
"msg_date": "Mon, 25 Oct 2010 16:51:02 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "Hi Merlin,\nThanks for your quick input.\nWell 1 difference worth mentioning:\nI am inserting each row in a separate transaction, due to design of my program. \n\n -Divakar\n\n\n\n\n________________________________\nFrom: Merlin Moncure <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]\nSent: Tue, October 26, 2010 2:21:02 AM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\nOn Mon, Oct 25, 2010 at 4:28 PM, Merlin Moncure <[email protected]> wrote:\n> I ran the following tests w/libpqtypes. While you probably wont end\n> up using libpqtypes, it's illustrative to mention it because it's\n> generally the easiest way to get data into postgres and by far the\n> fastest (excluding 'COPY'). source code follows after the sig (I\n> banged it out quite quickly, it's messy!) :-). I am not seeing your\n> results.\n\nI had a really horrible bug in there -- leaking a param inside the\narray push loop. cleaning it up dropped another 5 seconds or so from\nthe 4th million inserted to the remote server!. Using local prepare\n(PQspecPrepare) prob another second or two could be shaved off.\n\nPGparam *t = PQparamCreate(conn);\n\nfor(i=0; i<INS_COUNT; i++)\n{\n PGint4 a=i;\n PGtext b = \"some_text\";\n PGtimestamp c;\n PGbytea d;\n\n d.len = 8;\n d.data = b;\n\n c.date.isbc = 0;\n c.date.year = 2000;\n c.date.mon = 0;\n c.date.mday = 19;\n c.time.hour = 10;\n c.time.min = 41;\n c.time.sec = 6;\n c.time.usec = 0;\n c.time.gmtoff = -18000;\n\n PQputf(t, \"%int4 %text %timestamptz %bytea\", a, b, &c, &d);\n PQputf(arr.param, \"%ins_test\", t);\n PQparamReset(t);\n}\n\nmerlin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nHi Merlin,Thanks for your quick input.Well 1 difference worth mentioning:I am inserting each row in a separate transaction, due to design of my program. -DivakarFrom: Merlin Moncure <[email protected]>To: Divakar Singh <[email protected]>Cc: [email protected]: Tue, October 26, 2010 2:21:02 AMSubject: Re: [PERFORM] Postgres insert performance and storage requirement compared to OracleOn Mon, Oct 25, 2010 at 4:28 PM, Merlin Moncure <[email protected]> wrote:> I ran the following tests w/libpqtypes. While you probably wont end> up using libpqtypes, it's illustrative to mention it because it's> generally the easiest way to get data into postgres and by far the> fastest (excluding 'COPY'). source code follows after the sig (I> banged it out quite quickly, it's messy!) :-). I am not seeing your> results.I had a really horrible bug in there -- leaking a param inside thearray push loop. cleaning it up dropped another 5 seconds or so fromthe 4th million inserted to the remote server!. Using local prepare(PQspecPrepare) prob another second or two\n could be shaved off. PGparam *t = PQparamCreate(conn); for(i=0; i<INS_COUNT; i++) { PGint4 a=i; PGtext b = \"some_text\"; PGtimestamp c; PGbytea d; d.len = 8; d.data = b; c.date.isbc = 0; c.date.year = 2000; c.date.mon = 0; c.date.mday = 19; c.time.hour = 10; c.time.min = 41; c.time.sec = 6; c.time.usec = 0; c.time.gmtoff = -18000; PQputf(t, \"%int4 %text %timestamptz %bytea\", a, b, &c, &d); PQputf(arr.param, \"%ins_test\", t); PQparamReset(t); }merlin-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 26 Oct 2010 04:44:28 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 7:44 AM, Divakar Singh <[email protected]> wrote:\n> Hi Merlin,\n> Thanks for your quick input.\n> Well 1 difference worth mentioning:\n> I am inserting each row in a separate transaction, due to design of my\n> program.\n\nWell, that right there is going to define your application\nperformance. You have basically three major issues -- postgresql\nexecutes each query synchronously through the protocol, transaction\noverhead, and i/o issues coming from per transaction sync. libpq\nsupports asynchronous queries, but only from the clients point of view\n-- so that this only helps if you have non trivial work to do setting\nup each query. The database is inherently capable of doing what you\nwant it to do...you may just have to rethink certain things if you\nwant to unlock the true power of postgres...\n\nYou have several broad areas of attack:\n*) client side: use prepared queries (PQexecPrepared) possibly\nasynchronously (PQsendPrepared). Reasonably you can expect 5-50%\nspeedup if not i/o bound\n*) Stage data to a temp table: temp tables are not wal logged or\nsynced. Periodically they can be flushed to a permanent table.\nPossible data loss\n*) Relax sync policy (synchronous_commit/fsync) -- be advised these\nsettings are dangerous\n*) Multiple client writers -- as long as you are not i/o bound, you\nwill see big improvements in tps from multiple clients\n*) Stage/queue application data before inserting it -- requires\nretooling application, but you can see orders of magnitude jump insert\nperformance\n\nmerlin\n",
"msg_date": "Tue, 26 Oct 2010 09:20:38 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "> temp tables are not wal logged or\n> synced. Periodically they can be flushed to a permanent table.\n\n\nWhat do you mean with \"Periodically they can be flushed to\na permanent table\"? Just doing \n\ninsert into tabb select * from temptable\n\nor using a proper, per-temporary table command???\n\n\n \n",
"msg_date": "Tue, 26 Oct 2010 16:08:20 +0100 (BST)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 11:08 AM, Leonardo Francalanci <[email protected]> wrote:\n>> temp tables are not wal logged or\n>> synced. Periodically they can be flushed to a permanent table.\n>\n>\n> What do you mean with \"Periodically they can be flushed to\n> a permanent table\"? Just doing\n>\n> insert into tabb select * from temptable\n>\n\nyup, that's exactly what I mean -- this will give you more uniform\ninsert performance (your temp table doesn't even need indexes). Every\nN records (say 10000) you send to permanent and truncate the temp\ntable. Obviously, this is more fragile approach so weigh the\npros/cons carefully.\n\nmerlin\n",
"msg_date": "Tue, 26 Oct 2010 11:41:59 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On 10/26/2010 11:41 AM, Merlin Moncure wrote:\n> yup, that's exactly what I mean -- this will give you more uniform\n> insert performance (your temp table doesn't even need indexes). Every\n> N records (say 10000) you send to permanent and truncate the temp\n> table. Obviously, this is more fragile approach so weigh the\n> pros/cons carefully.\n>\n> merlin\n\nTruncate temporary table? What a horrible advice! All that you need is \nthe temporary table to delete rows on commit.\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 26 Oct 2010 17:02:55 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 4:02 PM, Mladen Gogala\n<[email protected]> wrote:\n> On 10/26/2010 11:41 AM, Merlin Moncure wrote:\n>>\n>> yup, that's exactly what I mean -- this will give you more uniform\n>> insert performance (your temp table doesn't even need indexes). Every\n>> N records (say 10000) you send to permanent and truncate the temp\n>> table. Obviously, this is more fragile approach so weigh the\n>> pros/cons carefully.\n>>\n>> merlin\n>\n> Truncate temporary table? What a horrible advice! All that you need is the\n> temporary table to delete rows on commit.\n\nI believe Merlin was suggesting that, after doing 10000 inserts into\nthe temporary table, that something like this might work better:\n\nstart loop:\n populate rows in temporary table\n insert from temporary table into permanent table\n truncate temporary table\n loop\n\nI do something similar, where I COPY data to a temporary table, do\nlots of manipulations, and then perform a series of INSERTS from the\ntemporary table into a permanent table.\n\n-- \nJon\n",
"msg_date": "Tue, 26 Oct 2010 16:27:23 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On 10/26/2010 5:27 PM, Jon Nelson wrote:\n> start loop:\n> populate rows in temporary table\n> insert from temporary table into permanent table\n> truncate temporary table\n> loop\n>\n> I do something similar, where I COPY data to a temporary table, do\n> lots of manipulations, and then perform a series of INSERTS from the\n> temporary table into a permanent table.\n>\n\n1) It's definitely not faster because you have to insert into the \ntemporary table, in addition to inserting into the permanent table.\n2) This is what I had in mind:\n\nmgogala=# create table a(c1 int);\nCREATE TABLE\nmgogala=# create temporary table t1(c1 int) on commit delete rows;\nCREATE TABLE\nmgogala=# begin;\nBEGIN\nmgogala=# insert into t1 select generate_series(1,1000);\nINSERT 0 1000\nmgogala=# insert into a select * from t1;\nINSERT 0 1000\nmgogala=# commit;\nCOMMIT\nmgogala=# select count(*) from a;\n count\n-------\n 1000\n(1 row)\n\nmgogala=# select count(*) from t1;\n count\n-------\n 0\n(1 row)\n\nThe table is created with \"on commit obliterate rows\" option which means \nthat there is no need to do \"truncate\". The \"truncate\" command is a \nheavy artillery. Truncating a temporary table is like shooting ducks in \na duck pond, with a howitzer.\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 26 Oct 2010 17:54:57 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 5:54 PM, Mladen Gogala\n<[email protected]> wrote:\n> On 10/26/2010 5:27 PM, Jon Nelson wrote:\n>>\n>> start loop:\n>> populate rows in temporary table\n>> insert from temporary table into permanent table\n>> truncate temporary table\n>> loop\n>>\n>> I do something similar, where I COPY data to a temporary table, do\n>> lots of manipulations, and then perform a series of INSERTS from the\n>> temporary table into a permanent table.\n>>\n>\n> 1) It's definitely not faster because you have to insert into the temporary\n> table, in addition to inserting into the permanent table.\n> 2) This is what I had in mind:\n>\n> mgogala=# create table a(c1 int);\n> CREATE TABLE\n> mgogala=# create temporary table t1(c1 int) on commit delete rows;\n> CREATE TABLE\n> mgogala=# begin;\n> BEGIN\n> mgogala=# insert into t1 select generate_series(1,1000);\n> INSERT 0 1000\n> mgogala=# insert into a select * from t1;\n> INSERT 0 1000\n> mgogala=# commit;\n> COMMIT\n> mgogala=# select count(*) from a;\n> count\n> -------\n> 1000\n> (1 row)\n>\n> mgogala=# select count(*) from t1;\n> count\n> -------\n> 0\n> (1 row)\n>\n> The table is created with \"on commit obliterate rows\" option which means\n> that there is no need to do \"truncate\". The \"truncate\" command is a heavy\n> artillery. Truncating a temporary table is like shooting ducks in a duck\n> pond, with a howitzer.\n\nYou are not paying attention ;-). Look upthread: \"I am inserting each\nrow in a separate transaction, due to design of my program.\" (also on\ncommit/drop is no picnic either, but I digress...)\n\nmerlin\n",
"msg_date": "Tue, 26 Oct 2010 18:14:15 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 5:54 PM, Mladen Gogala\n<[email protected]> wrote:\n> The table is created with \"on commit obliterate rows\" option which means\n> that there is no need to do \"truncate\". The \"truncate\" command is a heavy\n> artillery. Truncating a temporary table is like shooting ducks in a duck\n> pond, with a howitzer.\n\nThis is just not true. ON COMMIT DELETE ROWS simply arranges for a\nTRUNCATE to happen immediately before each commit. See\nPreCommit_on_commit_actions() in tablecmds.c.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 26 Oct 2010 18:50:37 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 6:50 PM, Robert Haas <[email protected]> wrote:\n> On Tue, Oct 26, 2010 at 5:54 PM, Mladen Gogala\n> <[email protected]> wrote:\n>> The table is created with \"on commit obliterate rows\" option which means\n>> that there is no need to do \"truncate\". The \"truncate\" command is a heavy\n>> artillery. Truncating a temporary table is like shooting ducks in a duck\n>> pond, with a howitzer.\n>\n> This is just not true. ON COMMIT DELETE ROWS simply arranges for a\n> TRUNCATE to happen immediately before each commit. See\n> PreCommit_on_commit_actions() in tablecmds.c.\n\nquite so. If you are doing anything performance sensitive with 'on\ncommit drop', you are better off organizing a cache around\ntxid_current() (now(), pid for older pg versions). Skips the writes\nto the system catalogs and truncate.\n\nmerlin\n",
"msg_date": "Tue, 26 Oct 2010 19:16:53 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "Dear All,\nThanks for your inputs on the insert performance part.\nAny suggestion on storage requirement?\nVACUUM is certainly not an option, because this is something related to \nmaintenance AFTER insertion. \n\nI am talking about the plain storage requirement w.r. to Oracle, which I \nobserved is twice of Oracle in case millions of rows are inserted.\nAnybody who tried to analyze the average storage requirement of PG w.r. to \nOracle?\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Merlin Moncure <[email protected]>\nTo: Robert Haas <[email protected]>\nCc: Mladen Gogala <[email protected]>; [email protected]\nSent: Wed, October 27, 2010 4:46:53 AM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\nOn Tue, Oct 26, 2010 at 6:50 PM, Robert Haas <[email protected]> wrote:\n> On Tue, Oct 26, 2010 at 5:54 PM, Mladen Gogala\n> <[email protected]> wrote:\n>> The table is created with \"on commit obliterate rows\" option which means\n>> that there is no need to do \"truncate\". The \"truncate\" command is a heavy\n>> artillery. Truncating a temporary table is like shooting ducks in a duck\n>> pond, with a howitzer.\n>\n> This is just not true. ON COMMIT DELETE ROWS simply arranges for a\n> TRUNCATE to happen immediately before each commit. See\n> PreCommit_on_commit_actions() in tablecmds.c.\n\nquite so. If you are doing anything performance sensitive with 'on\ncommit drop', you are better off organizing a cache around\ntxid_current() (now(), pid for older pg versions). Skips the writes\nto the system catalogs and truncate.\n\nmerlin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nDear All,Thanks for your inputs on the insert performance part.Any suggestion on storage requirement?VACUUM is certainly not an option, because this is something related to maintenance AFTER insertion. I am talking about the plain storage requirement w.r. to Oracle, which I observed is twice of Oracle in case millions of rows are inserted.Anybody who tried to analyze the average storage requirement of PG w.r. to Oracle? Best Regards,DivakarFrom: Merlin Moncure <[email protected]>To: Robert Haas <[email protected]>Cc: Mladen Gogala <[email protected]>; [email protected]: Wed, October 27, 2010 4:46:53 AMSubject: Re: [PERFORM] Postgres insert performance and storage requirement compared to OracleOn Tue, Oct 26, 2010 at 6:50 PM, Robert Haas <[email protected]> wrote:> On Tue, Oct 26, 2010 at 5:54 PM, Mladen Gogala> <[email protected]> wrote:>> The table is created with \"on commit obliterate rows\" option which means>> that there is no need to do \"truncate\". The \"truncate\" command is\n a heavy>> artillery. Truncating a temporary table is like shooting ducks in a duck>> pond, with a howitzer.>> This is just not true. ON COMMIT DELETE ROWS simply arranges for a> TRUNCATE to happen immediately before each commit. See> PreCommit_on_commit_actions() in tablecmds.c.quite so. If you are doing anything performance sensitive with 'oncommit drop', you are better off organizing a cache aroundtxid_current() (now(), pid for older pg versions). Skips the writesto the system catalogs and truncate.merlin-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 26 Oct 2010 20:10:56 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On 10/26/10 17:41, Merlin Moncure wrote:\n> On Tue, Oct 26, 2010 at 11:08 AM, Leonardo Francalanci <[email protected]> wrote:\n>>> temp tables are not wal logged or\n>>> synced. Periodically they can be flushed to a permanent table.\n>>\n>>\n>> What do you mean with \"Periodically they can be flushed to\n>> a permanent table\"? Just doing\n>>\n>> insert into tabb select * from temptable\n>>\n> \n> yup, that's exactly what I mean -- this will give you more uniform\n\nIn effect, when so much data is in temporary storage, a better option\nwould be to simply configure \"synchronous_commit = off\" (better in the\nsense that the application would not need to be changed). The effects\nare almost the same - in both cases transactions might be lost but the\ndatabase will survive.\n\n\n",
"msg_date": "Wed, 27 Oct 2010 12:13:10 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement compared\n\tto Oracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 6:13 AM, Ivan Voras <[email protected]> wrote:\n> On 10/26/10 17:41, Merlin Moncure wrote:\n>> On Tue, Oct 26, 2010 at 11:08 AM, Leonardo Francalanci <[email protected]> wrote:\n>>>> temp tables are not wal logged or\n>>>> synced. Periodically they can be flushed to a permanent table.\n>>>\n>>>\n>>> What do you mean with \"Periodically they can be flushed to\n>>> a permanent table\"? Just doing\n>>>\n>>> insert into tabb select * from temptable\n>>>\n>>\n>> yup, that's exactly what I mean -- this will give you more uniform\n>\n> In effect, when so much data is in temporary storage, a better option\n> would be to simply configure \"synchronous_commit = off\" (better in the\n> sense that the application would not need to be changed). The effects\n> are almost the same - in both cases transactions might be lost but the\n> database will survive.\n\nright -- although that's a system wide setting and perhaps other\ntables still require full synchronous fsync. Still -- fair point\n(although I bet you are still going to get better performance going by\nthe temp route if only by a hair).\n\nmerlin\n",
"msg_date": "Wed, 27 Oct 2010 07:05:44 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "\nOn Oct 26, 2010, at 2:54 PM, Mladen Gogala wrote:\n\n> On 10/26/2010 5:27 PM, Jon Nelson wrote:\n>> start loop:\n>> populate rows in temporary table\n>> insert from temporary table into permanent table\n>> truncate temporary table\n>> loop\n>> \n>> I do something similar, where I COPY data to a temporary table, do\n>> lots of manipulations, and then perform a series of INSERTS from the\n>> temporary table into a permanent table.\n>> \n> \n> 1) It's definitely not faster because you have to insert into the \n> temporary table, in addition to inserting into the permanent table.\n\nIt is almost always significantly faster than a direct bulk load into a table. \n* The temp table has no indexes, the final table usually does, bulk operations on indexes are faster than per row operations.\n* The final table might require both updates and inserts, doing these in bulk from a temp stage table is far faster than per row.\n* You don't even have to commit after the merge from the temp table, and can loop until its all done, then commit -- though this can have table/index bloat implications if doing updates.\n\n> 2) This is what I had in mind:\n> \n> mgogala=# create table a(c1 int);\n> CREATE TABLE\n> mgogala=# create temporary table t1(c1 int) on commit delete rows;\n> CREATE TABLE\n> mgogala=# begin;\n> BEGIN\n> mgogala=# insert into t1 select generate_series(1,1000);\n> INSERT 0 1000\n> mgogala=# insert into a select * from t1;\n> INSERT 0 1000\n> mgogala=# commit;\n> COMMIT\n> mgogala=# select count(*) from a;\n> count\n> -------\n> 1000\n> (1 row)\n> \n> mgogala=# select count(*) from t1;\n> count\n> -------\n> 0\n> (1 row)\n> \n> The table is created with \"on commit obliterate rows\" option which means \n> that there is no need to do \"truncate\". The \"truncate\" command is a \n> heavy artillery. Truncating a temporary table is like shooting ducks in \n> a duck pond, with a howitzer.\n\n??? Test it. DELETE is slow, truncate is nearly instantaneous for normal tables. For temp tables its the same thing. Maybe in Oracle TRUNCATE is a howitzer, in Postgres its lightweight. Your loop above requires a commit after every 1000 rows. What if you require that all rows are seen at once or not at all? What if you fail part way through? One big transaction is often a better idea and/or required. Especially in postgres, with no undo-log, bulk inserts in one large transaction work out very well -- usually better than multiple smaller transactions.\n> \n> -- \n> \n> Mladen Gogala\n> Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com\n> The Leader in Integrated Media Intelligence Solutions\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 27 Oct 2010 10:48:35 -0700",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 11:10 PM, Divakar Singh <[email protected]> wrote:\n> Dear All,\n> Thanks for your inputs on the insert performance part.\n> Any suggestion on storage requirement?\n> VACUUM is certainly not an option, because this is something related to\n> maintenance AFTER insertion.\n> I am talking about the plain storage requirement w.r. to Oracle, which I\n> observed is twice of Oracle in case millions of rows are inserted.\n> Anybody who tried to analyze the average storage requirement of PG w.r. to\n> Oracle?\n\nThere isn't much you can to about storage use other than avoid stupid\nthings (like using char() vs varchar()), smart table layout, toast\ncompression, etc. Are you sure this is a problem?\n\nmerlin\n",
"msg_date": "Wed, 27 Oct 2010 14:06:00 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On 10/27/2010 1:48 PM, Scott Carey wrote:\n>\n> It is almost always significantly faster than a direct bulk load into a table.\n> * The temp table has no indexes, the final table usually does, bulk operations on indexes are faster than per row operations.\n> * The final table might require both updates and inserts, doing these in bulk from a temp stage table is far faster than per row.\n> * You don't even have to commit after the merge from the temp table, and can loop until its all done, then commit -- though this can have table/index bloat implications if doing updates.\n\nScott, I find this very hard to believe. If you are inserting into a \ntemporary table and then into the target table, you will do 2 inserts \ninstead of just one. What you are telling me is that it is faster for me \nto drive from NYC to Washington DC by driving first to Miami and then \nfrom Miami to DC.\n\n>> 2) This is what I had in mind:\n>>\n>> mgogala=# create table a(c1 int);\n>> CREATE TABLE\n>> mgogala=# create temporary table t1(c1 int) on commit delete rows;\n>> CREATE TABLE\n>> mgogala=# begin;\n>> BEGIN\n>> mgogala=# insert into t1 select generate_series(1,1000);\n>> INSERT 0 1000\n>> mgogala=# insert into a select * from t1;\n>> INSERT 0 1000\n>> mgogala=# commit;\n>> COMMIT\n>> mgogala=# select count(*) from a;\n>> count\n>> -------\n>> 1000\n>> (1 row)\n>>\n>> mgogala=# select count(*) from t1;\n>> count\n>> -------\n>> 0\n>> (1 row)\n>>\n>> The table is created with \"on commit obliterate rows\" option which means\n>> that there is no need to do \"truncate\". The \"truncate\" command is a\n>> heavy artillery. Truncating a temporary table is like shooting ducks in\n>> a duck pond, with a howitzer.\n> ??? Test it. DELETE is slow, truncate is nearly instantaneous for normal tables. For temp tables its the same thing. Maybe in Oracle TRUNCATE is a howitzer, in Postgres its lightweight.\n\nTruncate has specific list of tasks to do:\n1) lock the table in the exclusive mode to prevent concurrent \ntransactions on the table.\n2) Release the file space and update the table headers.\n3) Flush any buffers possibly residing in shared memory.\n4) Repeat the procedures on the indexes.\n\nOf course, in case of the normal table, all of these changes are logged, \npossibly producing WAL archives. That is still much faster than delete \nwhich depends on the number of rows that need to be deleted, but not \nexactly lightweight, either. In Postgres, truncate recognizes that the \ntable is a temporary table so it makes a few shortcuts, which makes the \ntruncate faster.\n\n1) No need to flush buffers.\n2) Locking requirements are much less stringent.\n3) No WAL archives are produced.\n\nTemporary tables are completely different beasts in Oracle and Postgres. \nYes, you are right, truncate of a temporary table is a big no-no in the \nOracle world, especially in the RAC environment. However, I do find \"ON \nCOMMIT DELETE ROWS\" trick to be more elegant than the truncate. Here is \nthe classic Tom Kyte, on the topic of truncating the temporary tables: \n*http://tinyurl.com/29kph3p\n\n\"*NO. truncate is DDL. DDL is expensive. Truncation is something that \nshould be done very infrequently.\n Now, I don't mean \"turn your truncates into DELETE's\" -- that would \nbe even worse. I mean -- avoid having\n to truncate or delete every row in the first place. Use a transaction \nbased temporary table and upon commit, it'll empty itself.\"\n\n> Your loop above requires a commit after every 1000 rows. 
What if you require that all rows are seen at once or not at all? What if you fail part way through? One big transaction is often a better idea and/or required. Especially in postgres, with no undo-log, bulk inserts in one large transaction work out very well -- usually better than multiple smaller transactions.\n\nI don't contest that. I also prefer to do things in one big transaction, \nif possible.\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions",
"msg_date": "Wed, 27 Oct 2010 14:06:53 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 2:06 PM, Mladen Gogala\n<[email protected]> wrote:\n> Scott, I find this very hard to believe. If you are inserting into a\n> temporary table and then into the target table, you will do 2 inserts\n> instead of just one. What you are telling me is that it is faster for me to\n> drive from NYC to Washington DC by driving first to Miami and then from\n> Miami to DC.\n\nThe reason why in one transaction per insert environment staging to\ntemp table first is very simple...non temp table inserts have to be\nwal logged and fsync'd. When you batch them into the main table, you\nget more efficient use of WAL and ONE sync operation. This is\nespecially advantageous if the inserts are coming fast and furious and\nthere are other things going on in the database at the time, or there\nare multiple inserters.\n\nIf you have luxury of batching data in a transaction, you don't have\nto worry about it.\n\nmerlin\n",
"msg_date": "Wed, 27 Oct 2010 14:13:21 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "yes this is a very clearly visible problem.\nThe difference b/w oracle and PG increases with more rows.\nwhen oracle takes 3 GB, PG takes around 6 GB.\nI only use varchar.\nI will try to use your tips on \"smart table layout, toast compression\".\nAssuming these suggested options do not have any performance penalty?\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Merlin Moncure <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: Robert Haas <[email protected]>; Mladen Gogala \n<[email protected]>; [email protected]\nSent: Wed, October 27, 2010 11:36:00 PM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\nOn Tue, Oct 26, 2010 at 11:10 PM, Divakar Singh <[email protected]> wrote:\n> Dear All,\n> Thanks for your inputs on the insert performance part.\n> Any suggestion on storage requirement?\n> VACUUM is certainly not an option, because this is something related to\n> maintenance AFTER insertion.\n> I am talking about the plain storage requirement w.r. to Oracle, which I\n> observed is twice of Oracle in case millions of rows are inserted.\n> Anybody who tried to analyze the average storage requirement of PG w.r. to\n> Oracle?\n\nThere isn't much you can to about storage use other than avoid stupid\nthings (like using char() vs varchar()), smart table layout, toast\ncompression, etc. Are you sure this is a problem?\n\nmerlin\n\n\n\n \nyes this is a very clearly visible problem.The difference b/w oracle and PG increases with more rows.when oracle takes 3 GB, PG takes around 6 GB.I only use varchar.I will try to use your tips on \"smart table layout, toast compression\".Assuming these suggested options do not have any performance penalty? Best Regards,DivakarFrom: Merlin Moncure <[email protected]>To: Divakar Singh <[email protected]>Cc:\n Robert Haas <[email protected]>; Mladen Gogala <[email protected]>; [email protected]: Wed, October 27, 2010 11:36:00 PMSubject: Re: [PERFORM] Postgres insert performance and storage requirement compared to OracleOn Tue, Oct 26, 2010 at 11:10 PM, Divakar Singh <[email protected]> wrote:> Dear All,> Thanks for your inputs on the insert performance part.> Any suggestion on storage requirement?> VACUUM is certainly not an option, because this is something related to> maintenance AFTER insertion.> I am talking about the plain storage requirement w.r. to Oracle, which I> observed is twice of Oracle in case millions of rows are inserted.> Anybody who tried to\n analyze the average storage requirement of PG w.r. to> Oracle?There isn't much you can to about storage use other than avoid stupidthings (like using char() vs varchar()), smart table layout, toastcompression, etc. Are you sure this is a problem?merlin",
"msg_date": "Wed, 27 Oct 2010 11:14:30 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 2:14 PM, Divakar Singh <[email protected]> wrote:\n> yes this is a very clearly visible problem.\n> The difference b/w oracle and PG increases with more rows.\n> when oracle takes 3 GB, PG takes around 6 GB.\n> I only use varchar.\n> I will try to use your tips on \"smart table layout, toast compression\".\n> Assuming these suggested options do not have any performance penalty?\n\nThese will only be helpful in particular cases, for example if your\nlayout is bad :-). toast compression is for dealing with large datums\n(on by default iirc). Also it's very hard to get apples to apples\ncomparison test via synthetic insertion benchmark. It's simply not\nthe whole story.\n\nThe deal with postgres is that things are pretty optimized and fairly\nunlikely to get a whole lot better than they are today. The table\nlayout is pretty optimal already, nulls are bitmaps, data lengths are\nusing fancy bitwise length mechanism, etc. Each record in postgres\nhas a 20 byte header that has to be factored in to any storage\nestimation, plus the index usage.\n\nPostgres indexes are pretty compact, and oracle (internals I am not\nfamiliar with) also has to do MVCC type management, so I am suspecting\nyour measurement is off (aka, operator error) or oracle is cheating\nsomehow by optimizing away storage requirements somehow via some sort\nof tradeoff. However you still fail to explain why storage size is a\nproblem. Are planning to port oracle to postgres on a volume that is\n>50% full? :-)\n\nmerlin\n",
"msg_date": "Wed, 27 Oct 2010 14:28:06 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On 2010-10-27 20:28, Merlin Moncure wrote:\n> Postgres indexes are pretty compact, and oracle (internals I am not\n> familiar with) also has to do MVCC type management, so I am suspecting\n> your measurement is off (aka, operator error) or oracle is cheating\n> somehow by optimizing away storage requirements somehow via some sort\n> of tradeoff. However you still fail to explain why storage size is a\n> problem. Are planning to port oracle to postgres on a volume that is\n> 50% full? :-)\n> \nPretty ignorant comment.. sorry ..\n\nBut when your database approaches something that is not mainly\nfitting in memory, space directly translates into speed and a more\ncompact table utillizes the OS-page cache better. This is both\ntrue for index and table page caching.\n\nAnd the more compact your table the later you hit the stage where\nyou cant fit into memory anymore.\n\n.. but if above isn't issues, then your statements are true.\n\n-- \nJesper\n",
"msg_date": "Wed, 27 Oct 2010 20:42:19 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On 10-10-27 02:14 PM, Divakar Singh wrote:\n> yes this is a very clearly visible problem.\n> The difference b/w oracle and PG increases with more rows.\n> when oracle takes 3 GB, PG takes around 6 GB.\n> I only use varchar.\n> I will try to use your tips on \"smart table layout, toast compression\".\n> Assuming these suggested options do not have any performance penalty?\n> Best Regards,\n> Divakar\n\nIn between test runs are you cleaning out the tables with a \"DELETE FROM \naaaaa\" or are you using the TRUNCATE command? Or dropping the table and \nrecreating it.\n\nIf your just using DELETE it might be that disk space is still being \nused by the old versions of the rows.\n\nAlso is postgresql using more space than oracle for storing the index \ndata or the main table data? and is any particular index larger on \npostgresql compared to Oracle.\n\n\n\n\n>\n>\n> ------------------------------------------------------------------------\n> *From:* Merlin Moncure <[email protected]>\n> *To:* Divakar Singh <[email protected]>\n> *Cc:* Robert Haas <[email protected]>; Mladen Gogala\n> <[email protected]>; [email protected]\n> *Sent:* Wed, October 27, 2010 11:36:00 PM\n> *Subject:* Re: [PERFORM] Postgres insert performance and storage\n> requirement compared to Oracle\n>\n> On Tue, Oct 26, 2010 at 11:10 PM, Divakar Singh <[email protected]\n> <mailto:[email protected]>> wrote:\n> > Dear All,\n> > Thanks for your inputs on the insert performance part.\n> > Any suggestion on storage requirement?\n> > VACUUM is certainly not an option, because this is something related to\n> > maintenance AFTER insertion.\n> > I am talking about the plain storage requirement w.r. to Oracle, which I\n> > observed is twice of Oracle in case millions of rows are inserted.\n> > Anybody who tried to analyze the average storage requirement of PG\n> w.r. to\n> > Oracle?\n>\n> There isn't much you can to about storage use other than avoid stupid\n> things (like using char() vs varchar()), smart table layout, toast\n> compression, etc. Are you sure this is a problem?\n>\n> merlin\n>\n\n",
"msg_date": "Wed, 27 Oct 2010 14:51:02 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 2:42 PM, Jesper Krogh <[email protected]> wrote:\n> On 2010-10-27 20:28, Merlin Moncure wrote:\n>>\n>> Postgres indexes are pretty compact, and oracle (internals I am not\n>> familiar with) also has to do MVCC type management, so I am suspecting\n>> your measurement is off (aka, operator error) or oracle is cheating\n>> somehow by optimizing away storage requirements somehow via some sort\n>> of tradeoff. However you still fail to explain why storage size is a\n>> problem. Are planning to port oracle to postgres on a volume that is\n>> 50% full? :-)\n>>\n>\n> Pretty ignorant comment.. sorry ..\n>\n> But when your database approaches something that is not mainly\n> fitting in memory, space directly translates into speed and a more\n> compact table utillizes the OS-page cache better. This is both\n> true for index and table page caching.\n>\n> And the more compact your table the later you hit the stage where\n> you cant fit into memory anymore.\n>\n> .. but if above isn't issues, then your statements are true.\n\nYes, I am quite aware of how the o/s page cache works. All else being\nequal, I more compact database obviously would be preferred. However\n'all else' is not necessarily equal. I can mount my database on bzip\nvolume, that must make it faster, right? wrong. I understand the\npostgres storage architecture pretty well, and the low hanging fruit\nhaving been grabbed further layout compression is only going to come\nas a result of tradeoffs.\n\nNow, comparing oracle vs postgres, mvcc works differently because\noracle uses rollback logs while postgres maintains extra/old versions\nin the heap. This will add up to big storage usage based on various\nthings, but should not so much be reflected via insert only test.\n\nmerlin\n",
"msg_date": "Wed, 27 Oct 2010 14:51:23 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On 2010-10-27 20:51, Merlin Moncure wrote:\n\n>> Yes, I am quite aware of how the o/s page cache works. All else being\n>> equal, I more compact database obviously would be preferred. However\n>> 'all else' is not necessarily equal. I can mount my database on bzip\n>> volume, that must make it faster, right? wrong. I understand the\n>> postgres storage architecture pretty well, and the low hanging fruit\n>> having been grabbed further layout compression is only going to come\n>> as a result of tradeoffs.\n>> \nOr configureabillity.. Not directly related to overall space consumption\nbut I have been working on a patch that would make TOAST* kick in\nearlier in the process, giving a \"slimmer\" main table with visibillity \ninformation\nand simple columns and moving larger colums more aggressively to TOAST.\n\nThe overall disadvantage of TOAST is the need for an extra disk seek if\nyou actually need the data. If the application rarely needs the large\ncolumns but often do count/filtering on simple values this will eventually\nlead to a better utillization of the OS-page-cache with a very small \noverhead\nto PG (in terms of code) and 0 overhead in the applications that benefit.\n\nKeeping in mind that as SSD-drives get more common the \"the extra disk seek\"\ndrops dramatically, but the drive is by itself probably still 100-1000x \nslower than\nmain memory, so keeping \"the right data\" in the OS-cache is also a \nparameter.\n\nIf you deal with data where the individual tuple-size goes up, currently \nTOAST\nfirst kicks in at 2KB (compressed size) which leads to a very sparse \nmain table\nin terms of visibillity information and count and selects on simple values\nwill drag a huge amount of data into the cache-layers thats not needed \nthere.\n\nAnother suggestion could be to make the compression of text columns kick in\nearlier .. if thats possible. (I dont claim that its achiveable)\n\nUnless the tuple-header is hugely bloated I have problems creating a \nsituation in my\nhead where hammering that one can change anything significantly.\n\n* http://www.mail-archive.com/[email protected]/msg159726.html\n\n-- \nJesper\n",
"msg_date": "Wed, 27 Oct 2010 21:47:23 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 6:13 AM, Ivan Voras <[email protected]> wrote:\n> On 10/26/10 17:41, Merlin Moncure wrote:\n>> On Tue, Oct 26, 2010 at 11:08 AM, Leonardo Francalanci <[email protected]> wrote:\n>>>> temp tables are not wal logged or\n>>>> synced. Periodically they can be flushed to a permanent table.\n>>>\n>>>\n>>> What do you mean with \"Periodically they can be flushed to\n>>> a permanent table\"? Just doing\n>>>\n>>> insert into tabb select * from temptable\n>>>\n>>\n>> yup, that's exactly what I mean -- this will give you more uniform\n>\n> In effect, when so much data is in temporary storage, a better option\n> would be to simply configure \"synchronous_commit = off\" (better in the\n> sense that the application would not need to be changed). The effects\n> are almost the same - in both cases transactions might be lost but the\n> database will survive.\n\nGee, I wonder if it would possible for PG to automatically do an\nasynchronous commit of any transaction which touches only temp tables.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 27 Oct 2010 22:01:07 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Gee, I wonder if it would possible for PG to automatically do an\n> asynchronous commit of any transaction which touches only temp tables.\n\nHmm ... do we need a commit at all in such a case? If our XID has only\ngone into temp tables, I think we need to write to clog, but we don't\nreally need a WAL entry, synced or otherwise.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 23:32:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Postgres insert performance and storage requirement compared\n\tto Oracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 3:47 PM, Jesper Krogh <[email protected]> wrote:\n> On 2010-10-27 20:51, Merlin Moncure wrote:\n>\n>>> Yes, I am quite aware of how the o/s page cache works. All else being\n>>> equal, I more compact database obviously would be preferred. However\n>>> 'all else' is not necessarily equal. I can mount my database on bzip\n>>> volume, that must make it faster, right? wrong. I understand the\n>>> postgres storage architecture pretty well, and the low hanging fruit\n>>> having been grabbed further layout compression is only going to come\n>>> as a result of tradeoffs.\n>>>\n>\n> Or configureabillity.. Not directly related to overall space consumption\n> but I have been working on a patch that would make TOAST* kick in\n> earlier in the process, giving a \"slimmer\" main table with visibillity\n> information\n> and simple columns and moving larger colums more aggressively to TOAST.\n\nDo you have any benchmarks supporting if/when such a change would be beneficial?\n\nmerlin\n",
"msg_date": "Thu, 28 Oct 2010 09:13:22 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "[moving to -hackers, from -performance]\n\nOn Wed, Oct 27, 2010 at 11:32 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> Gee, I wonder if it would possible for PG to automatically do an\n>> asynchronous commit of any transaction which touches only temp tables.\n>\n> Hmm ... do we need a commit at all in such a case? If our XID has only\n> gone into temp tables, I think we need to write to clog, but we don't\n> really need a WAL entry, synced or otherwise.\n\nI think we might need a commit record anyway to keep Hot Standby's\nKnownAssignedXids tracking from getting confused. It might be\npossible to suppress it when wal_level is less than hot_standby, but\nI'm not sure it's worth it.\n\nYou definitely need to write to CLOG, because otherwise a subsequent\ntransaction from within the same backend might fail to see those\ntuples.\n\nAlso, I think that the right test is probably \"Have we done anything\nthat needs to be WAL-logged?\". We can get that conveniently by\nchecking whether XactLastRecEnd.xrecoff. One option looks to be to\njust change this:\n\n if (XactSyncCommit || forceSyncCommit || nrels > 0)\n\n...to say ((XactSyncCommit && XactLastRecEnd.recoff != 0) ||\nforceSyncCommit || nrels > 0).\n\nBut I'm wondering if we can instead rejigger things so that this test\nmoves out of the !markXidCommitted branch of the if statement and\ndrops down below the whole if statement.\n\n /*\n * If we didn't create XLOG entries, we're done here;\notherwise we\n * should flush those entries the same as a commit\nrecord. (An\n * example of a possible record that wouldn't cause an XID to be\n * assigned is a sequence advance record due to\nnextval() --- we want\n * to flush that to disk before reporting commit.)\n */\n if (XactLastRecEnd.xrecoff == 0)\n goto cleanup;\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Compan\n",
"msg_date": "Thu, 28 Oct 2010 10:23:05 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On 2010-10-28 15:13, Merlin Moncure wrote:\n> On Wed, Oct 27, 2010 at 3:47 PM, Jesper Krogh<[email protected]> wrote:\n> \n>> On 2010-10-27 20:51, Merlin Moncure wrote:\n>>\n>> \n>>>> Yes, I am quite aware of how the o/s page cache works. All else being\n>>>> equal, I more compact database obviously would be preferred. However\n>>>> 'all else' is not necessarily equal. I can mount my database on bzip\n>>>> volume, that must make it faster, right? wrong. I understand the\n>>>> postgres storage architecture pretty well, and the low hanging fruit\n>>>> having been grabbed further layout compression is only going to come\n>>>> as a result of tradeoffs.\n>>>>\n>>>> \n>> Or configureabillity.. Not directly related to overall space consumption\n>> but I have been working on a patch that would make TOAST* kick in\n>> earlier in the process, giving a \"slimmer\" main table with visibillity\n>> information\n>> and simple columns and moving larger colums more aggressively to TOAST.\n>> \n> Do you have any benchmarks supporting if/when such a change would be beneficial?\n>\n> \nOn, IO-bound queries it pretty much translates to the ration between\nthe toast-table-size vs. the main-table-size.\n\nTrying to aggressively speed up \"select count(*) from table\" gives this:\nhttp://www.mail-archive.com/[email protected]/msg146153.html\nwith shutdown of pg and drop caches inbetween... the \"default\" select \ncount (*) on 50K tuples\ngives 4.613ms (2 tuples pr page) vs. 318ms... (8 tuples pr page).\n\nPG default is inbetween...\n\n\n-- \nJesper\n\n",
"msg_date": "Thu, 28 Oct 2010 17:28:53 +0200",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
}
] |
[
{
"msg_contents": "Hi all, \n\nWe are tuning a PostgreSQL box with AIX 5.3 and got stucked in a very odd situation.\nWhen a query got ran for the second time, the system seems to deliver the results to slow.\n\nHere´s some background info:\n\nAIX Box: \nPostgreSQL 8.4.4, AIX 5.3-9 64bits, SAN IBM DS3400, 8x450GB SAS 15K Raid-5\n8GB RAM, 2.3GB Shared buffers\n\nDebian Box:\nPostgreSQL 8.4.4, Debian 4.3.2 64bits, SAN IBM DS3400, 5x300GB SAS 15K Raid-0\n7GB RAM, 2.1GB Shared buffers\n\nRight now, we changed lots of AIX tunables to increase disk and SO performance. \nOf course, postgres got tunned as well. I can post all changes made until now if needed.\n\nTo keep it simple, I will try to explain only the buffer read issue.\nThis query [1] took like 14s to run at AIX, and almost the same time at Debian.\nThe issue is when I run it for the second time:\nAIX - 8s\nDebian - 0.3s\n\nThese times keep repeating after the second run, and I can ensure AIX isn´t touching the disks anymore.\nI´ve never seen this behaviour before. I heard about Direct I/O and I was thinking about givng it a shot.\nAny ideas?\n\n\n1 - http://explain.depesz.com/s/5oz\n\n\n[]´s, André Volpato \n\n",
"msg_date": "Mon, 25 Oct 2010 16:21:34 -0200 (BRST)",
"msg_from": "=?utf-8?Q?Andr=C3=A9_Volpato?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "AIX slow buffer reads"
},
{
"msg_contents": "On Mon, Oct 25, 2010 at 2:21 PM, André Volpato\n<[email protected]> wrote:\n> Hi all,\n>\n> We are tuning a PostgreSQL box with AIX 5.3 and got stucked in a very odd situation.\n> When a query got ran for the second time, the system seems to deliver the results to slow.\n>\n> Here´s some background info:\n>\n> AIX Box:\n> PostgreSQL 8.4.4, AIX 5.3-9 64bits, SAN IBM DS3400, 8x450GB SAS 15K Raid-5\n> 8GB RAM, 2.3GB Shared buffers\n>\n> Debian Box:\n> PostgreSQL 8.4.4, Debian 4.3.2 64bits, SAN IBM DS3400, 5x300GB SAS 15K Raid-0\n> 7GB RAM, 2.1GB Shared buffers\n>\n> Right now, we changed lots of AIX tunables to increase disk and SO performance.\n> Of course, postgres got tunned as well. I can post all changes made until now if needed.\n>\n> To keep it simple, I will try to explain only the buffer read issue.\n> This query [1] took like 14s to run at AIX, and almost the same time at Debian.\n> The issue is when I run it for the second time:\n> AIX - 8s\n> Debian - 0.3s\n>\n> These times keep repeating after the second run, and I can ensure AIX isn´t touching the disks anymore.\n> I´ve never seen this behaviour before. I heard about Direct I/O and I was thinking about givng it a shot.\n> Any ideas?\n>\n\nI doubt disk/io is the problem.\n\n*) Are the plans *exactly* the same?\n\n*) Are you running explain analyze? There are some platform specific\ninteractions caused by timing.\n\n*) Are you transferring the data across the network? rule out\n(horribly difficult to diagnose/fix) network effects.\n\nmerlin\n",
"msg_date": "Mon, 25 Oct 2010 14:50:42 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "\n| On Mon, Oct 25, 2010 at 2:21 PM, André Volpato\n| <[email protected]> wrote:\n| > Hi all,\n| >\n| > We are tuning a PostgreSQL box with AIX 5.3 and got stucked in a\n| > very odd situation.\n| > When a query got ran for the second time, the system seems to\n| > deliver the results to slow.\n| >\n| > Here´s some background info:\n| >\n| > AIX Box:\n| > PostgreSQL 8.4.4, AIX 5.3-9 64bits, SAN IBM DS3400, 8x450GB SAS 15K\n| > Raid-5\n| > 8GB RAM, 2.3GB Shared buffers\n| >\n| > Debian Box:\n| > PostgreSQL 8.4.4, Debian 4.3.2 64bits, SAN IBM DS3400, 5x300GB SAS\n| > 15K Raid-0\n| > 7GB RAM, 2.1GB Shared buffers\n| >\n| > Right now, we changed lots of AIX tunables to increase disk and SO\n| > performance.\n| > Of course, postgres got tunned as well. I can post all changes made\n| > until now if needed.\n| >\n| > To keep it simple, I will try to explain only the buffer read issue.\n| > This query [1] took like 14s to run at AIX, and almost the same time\n| > at Debian.\n| > The issue is when I run it for the second time:\n| > AIX - 8s\n| > Debian - 0.3s\n| >\n| > These times keep repeating after the second run, and I can ensure\n| > AIX isn´t touching the disks anymore.\n| > I´ve never seen this behaviour before. I heard about Direct I/O and\n| > I was thinking about givng it a shot.\n| > Any ideas?\n| >\n| \n| I doubt disk/io is the problem.\n \nMe either.\nLike I said, AIX do not touch the storage when runing the query.\nIt became CPU-bound after data got into cache.\n\n\n| *) Are the plans *exactly* the same?\n\n\nThe plan I sent refers to the AIX box:\nhttp://explain.depesz.com/s/5oz\nAt Debian, the plan looks pretty much the same.\n\n\n| *) Are you running explain analyze? There are some platform specific\n| interactions caused by timing.\n\nYes. I´m not concerned about timing because the difference (8s against 0.3s) is huge.\n\n\n| *) Are you transferring the data across the network? rule out\n| (horribly difficult to diagnose/fix) network effects.\n\nNot likely... Both boxes are in the same Bladecenter, using the same storage.\n\n\n| \n| merlin\n\n\n[]´s, Andre Volpato\n",
"msg_date": "Mon, 25 Oct 2010 17:26:29 -0200 (BRST)",
"msg_from": "=?utf-8?Q?Andr=C3=A9_Volpato?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "On 10-10-25 03:26 PM, André Volpato wrote:\n> | On Mon, Oct 25, 2010 at 2:21 PM, André Volpato\n> |<[email protected]> wrote:\n> |> Hi all,\n> |>\n> |> We are tuning a PostgreSQL box with AIX 5.3 and got stucked in a\n> |> very odd situation.\n> |> When a query got ran for the second time, the system seems to\n> |> deliver the results to slow.\n> |>\n> |> Here´s some background info:\n> |>\n> |> AIX Box:\n> |> PostgreSQL 8.4.4, AIX 5.3-9 64bits, SAN IBM DS3400, 8x450GB SAS 15K\n> |> Raid-5\n> |> 8GB RAM, 2.3GB Shared buffers\n> |>\n> |> Debian Box:\n> |> PostgreSQL 8.4.4, Debian 4.3.2 64bits, SAN IBM DS3400, 5x300GB SAS\n> |> 15K Raid-0\n> |> 7GB RAM, 2.1GB Shared buffers\n> |>\n> |> Right now, we changed lots of AIX tunables to increase disk and SO\n> |> performance.\n> |> Of course, postgres got tunned as well. I can post all changes made\n> |> until now if needed.\n> |>\n> |> To keep it simple, I will try to explain only the buffer read issue.\n> |> This query [1] took like 14s to run at AIX, and almost the same time\n> |> at Debian.\n> |> The issue is when I run it for the second time:\n> |> AIX - 8s\n> |> Debian - 0.3s\n> |>\n> |> These times keep repeating after the second run, and I can ensure\n> |> AIX isn´t touching the disks anymore.\n> |> I´ve never seen this behaviour before. I heard about Direct I/O and\n> |> I was thinking about givng it a shot.\n> |> Any ideas?\n> |>\n> |\n> | I doubt disk/io is the problem.\n>\n> Me either.\n> Like I said, AIX do not touch the storage when runing the query.\n> It became CPU-bound after data got into cache.\n\nHave you confirmed that the hardware is ok on both servers?\n\nHave both OS's been tuned by people that know how to tune the respective \nOS's? AIX is very different than Linux, and needs to be tuned accordingly.\n\nOn AIX can you trace why it is CPU bound? What else is taking the CPU \ntime, anything?\n\nAlso, can you provide the output of pg_config from your AIX build?\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Tue, 26 Oct 2010 10:06:52 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "----- Mensagem original -----\n| On 10-10-25 03:26 PM, André Volpato wrote:\n| > | On Mon, Oct 25, 2010 at 2:21 PM, André Volpato\n| > |<[email protected]> wrote:\n\n(...)\n\n| > |> These times keep repeating after the second run, and I can\n| > |> ensure AIX isn´t touching the disks anymore.\n| > |> I´ve never seen this behaviour before. I heard about Direct I/O\n| > |> and I was thinking about givng it a shot.\n| > |> \n| > |> Any ideas?\n| > |>\n| > |\n| > | I doubt disk/io is the problem.\n| >\n| > Me either.\n| > Like I said, AIX do not touch the storage when runing the query.\n| > It became CPU-bound after data got into cache.\n| \n| Have you confirmed that the hardware is ok on both servers?\n| \n\nThe hardware was recently instaled and checked by the vendor team.\nAIX box is on JS22:\nPostgreSQL 8.4.4, AIX 5.3-9 64bits, SAN IBM DS3400, 8x450GB SAS 15K Raid-5\n8GB RAM (DDR2 667)\n\n# lsconf\nSystem Model: IBM,7998-61X\nProcessor Type: PowerPC_POWER6\nProcessor Implementation Mode: POWER 6\nProcessor Version: PV_6\nNumber Of Processors: 4\nProcessor Clock Speed: 4005 MHz\nCPU Type: 64-bit\nKernel Type: 64-bit\nMemory Size: 7680 MB\n\nDebian box is on HS21:\nPostgreSQL 8.4.4, Debian 4.3.2 64bits, SAN IBM DS3400, 5x300GB SAS 15K Raid-0\n7GB RAM (DDR2 667)\nWe are forced to use RedHat on this machine, so we are virtualizing the Debian box.\n\n# cpuinfo\nprocessor : [0-7]\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 23\nmodel name : Intel(R) Xeon(R) CPU E5420 @ 2.50GHz\nstepping : 6\ncpu MHz : 2500.148\ncache size : 6144 KB\n\n\n\n| Have both OS's been tuned by people that know how to tune the\n| respective OS's? AIX is very different than Linux, and needs to be tuned\n| accordingly.\n\nWe´ve been tuning AIX for the last 3 weeks, and lots of tuneables got changed.\nOn Debian, we have far more experience, and it´s been a chalenge to understand how AIX works.\n\nMost important tunes:\npage_steal_method=1\nlru_file_repage=0\nkernel_heap_psize=64k\nmaxperm%=90\nmaxclient%=90\nminperm%=20\n\nDisk:\nchdev -l hdisk8 -a queue_depth=24\nchdev -l hdisk8 -a reserve_policy=no_reserve\nchdev -l hdisk8 -a algorithm=round_robin\nchdev -l hdisk8 -a max_transfer=0x400000\n\nHBA:\nchdev -l fcs0 -P -a max_xfer_size=0x400000 -a num_cmd_elems=1024\n\nPostgres:\nshared_buffers = 2304MB\neffective_io_concurrency = 5\nwal_sync_method = fdatasync\nwal_buffers = 2MB\ncheckpoint_segments = 32\ncheckpoint_timeout = 10min\nrandom_page_cost = 2.5\neffective_cache_size = 7144MB\n\nLike I said, there´s more but this is the most important.\n\n\n| \n| On AIX can you trace why it is CPU bound? 
What else is taking the CPU\n| time, anything?| \n\n\nWe´re using iostat, svmon and vmstat to trace CPU, swap and IO activity.\nOn 'topas' we saw no disk activity at all, but we get a Wait% about 70%, and about 700 pages/s read in PageIn, no PageOut, no PgspIn and no PgspOut.\nIt´s a dedicated server, no process runing besides postgres.\n\n\n\n| Also, can you provide the output of pg_config from your AIX build?\n\n# pg_config\nBINDIR = /usr/local/pgsql/bin\nDOCDIR = /usr/local/pgsql/share/doc\nHTMLDIR = /usr/local/pgsql/share/doc\nINCLUDEDIR = /usr/local/pgsql/include\nPKGINCLUDEDIR = /usr/local/pgsql/include\nINCLUDEDIR-SERVER = /usr/local/pgsql/include/server\nLIBDIR = /usr/local/pgsql/lib\nPKGLIBDIR = /usr/local/pgsql/lib\nLOCALEDIR = /usr/local/pgsql/share/locale\nMANDIR = /usr/local/pgsql/share/man\nSHAREDIR = /usr/local/pgsql/share\nSYSCONFDIR = /usr/local/pgsql/etc\nPGXS = /usr/local/pgsql/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = '--enable-integer-datetimes' '--with-readline' '--with-threads' '--with-zlib' '--with-html' 'CC=gcc -maix64' 'LDFLAGS=-Wl,-bbigtoc'\nCC = gcc -maix64\nCPPFLAGS =\nCFLAGS = -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fwrapv\nCFLAGS_SL =\nLDFLAGS = -Wl,-bbigtoc -Wl,-blibpath:/usr/local/pgsql/lib:/usr/lib:/lib\nLDFLAGS_SL = -Wl,-bnoentry -Wl,-H512 -Wl,-bM:SRE\nLIBS = -lpgport -lz -lreadline -lld -lm\nVERSION = PostgreSQL 8.4.4\n\n\n| \n| --\n| Brad Nicholson 416-673-4106\n| Database Administrator, Afilias Canada Corp.\n\n[]´s, Andre Volpato\n",
"msg_date": "Tue, 26 Oct 2010 19:04:10 -0200 (BRST)",
"msg_from": "=?utf-8?Q?Andr=C3=A9_Volpato?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "On 10-10-26 05:04 PM, André Volpato wrote:\n> ----- Mensagem original -----\n> | On 10-10-25 03:26 PM, André Volpato wrote:\n> |> | On Mon, Oct 25, 2010 at 2:21 PM, André Volpato\n> |> |<[email protected]> wrote:\n>\n> (...)\n>\n> |> |> These times keep repeating after the second run, and I can\n> |> |> ensure AIX isn´t touching the disks anymore.\n> |> |> I´ve never seen this behaviour before. I heard about Direct I/O\n> |> |> and I was thinking about givng it a shot.\n> |> |>\n> |> |> Any ideas?\n> |> |>\n> |> |\n> |> | I doubt disk/io is the problem.\n> |>\n> |> Me either.\n> |> Like I said, AIX do not touch the storage when runing the query.\n> |> It became CPU-bound after data got into cache.\n> |\n> | Have you confirmed that the hardware is ok on both servers?\n> |\n>\n> The hardware was recently instaled and checked by the vendor team.\n> AIX box is on JS22:\n> PostgreSQL 8.4.4, AIX 5.3-9 64bits, SAN IBM DS3400, 8x450GB SAS 15K Raid-5\n> 8GB RAM (DDR2 667)\n>\n> # lsconf\n> System Model: IBM,7998-61X\n> Processor Type: PowerPC_POWER6\n> Processor Implementation Mode: POWER 6\n> Processor Version: PV_6\n> Number Of Processors: 4\n> Processor Clock Speed: 4005 MHz\n> CPU Type: 64-bit\n> Kernel Type: 64-bit\n> Memory Size: 7680 MB\n>\n> Debian box is on HS21:\n> PostgreSQL 8.4.4, Debian 4.3.2 64bits, SAN IBM DS3400, 5x300GB SAS 15K Raid-0\n> 7GB RAM (DDR2 667)\n> We are forced to use RedHat on this machine, so we are virtualizing the Debian box.\n>\n> # cpuinfo\n> processor : [0-7]\n> vendor_id : GenuineIntel\n> cpu family : 6\n> model : 23\n> model name : Intel(R) Xeon(R) CPU E5420 @ 2.50GHz\n> stepping : 6\n> cpu MHz : 2500.148\n> cache size : 6144 KB\n>\n>\n>\n> | Have both OS's been tuned by people that know how to tune the\n> | respective OS's? AIX is very different than Linux, and needs to be tuned\n> | accordingly.\n>\n> We´ve been tuning AIX for the last 3 weeks, and lots of tuneables got changed.\n> On Debian, we have far more experience, and it´s been a chalenge to understand how AIX works.\n>\n> Most important tunes:\n> page_steal_method=1\n> lru_file_repage=0\n> kernel_heap_psize=64k\n> maxperm%=90\n> maxclient%=90\n> minperm%=20\n>\n> Disk:\n> chdev -l hdisk8 -a queue_depth=24\n> chdev -l hdisk8 -a reserve_policy=no_reserve\n> chdev -l hdisk8 -a algorithm=round_robin\n> chdev -l hdisk8 -a max_transfer=0x400000\n>\n> HBA:\n> chdev -l fcs0 -P -a max_xfer_size=0x400000 -a num_cmd_elems=1024\n>\n> Postgres:\n> shared_buffers = 2304MB\n> effective_io_concurrency = 5\n\nI wonder if effective_io_concurrency has anything to do with it. It was \nimplemented and mainly tested on Linux, and I am unsure if it will do \nanything on AIX. The plan you posted for the query does a bitmap index \nscans which is what effective_io_concurrency will speed up.\n\nCan you post the output of explain analyze for that query on both AIX \nand Linux? That will show where the time is being spent.\n\nIf it is being spent in the bitmap index scan, try setting \neffective_io_concurrency to 0 for Linux, and see what effect that has.\n\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Wed, 27 Oct 2010 08:42:33 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "\n----- Mensagem original -----\n| On 10-10-26 05:04 PM, André Volpato wrote:\n| > ----- Mensagem original -----\n| > | On 10-10-25 03:26 PM, André Volpato wrote:\n| > |> | On Mon, Oct 25, 2010 at 2:21 PM, André Volpato\n| > |> |<[email protected]> wrote:\n| >\n| > (...)\n| >\n| > |> |> These times keep repeating after the second run, and I can\n| > |> |> ensure AIX isn´t touching the disks anymore.\n| > |> |> I´ve never seen this behaviour before. I heard about Direct\n| > |> |> I/O\n| > |> |> and I was thinking about givng it a shot.\n| > |> |>\n| > |> |> Any ideas?\n| > |> |>\n| > |> |\n| > |> | I doubt disk/io is the problem.\n| > |>\n| > |> Me either.\n| > |> Like I said, AIX do not touch the storage when runing the query.\n| > |> It became CPU-bound after data got into cache.\n| > |\n| > | Have you confirmed that the hardware is ok on both servers?\n| > |\n| >\n| > The hardware was recently instaled and checked by the vendor team.\n| > AIX box is on JS22:\n| > PostgreSQL 8.4.4, AIX 5.3-9 64bits, SAN IBM DS3400, 8x450GB SAS 15K\n| > Raid-5\n| > 8GB RAM (DDR2 667)\n| >\n| > # lsconf\n| > System Model: IBM,7998-61X\n| > Processor Type: PowerPC_POWER6\n| > Processor Implementation Mode: POWER 6\n| > Processor Version: PV_6\n| > Number Of Processors: 4\n| > Processor Clock Speed: 4005 MHz\n| > CPU Type: 64-bit\n| > Kernel Type: 64-bit\n| > Memory Size: 7680 MB\n| >\n| > Debian box is on HS21:\n| > PostgreSQL 8.4.4, Debian 4.3.2 64bits, SAN IBM DS3400, 5x300GB SAS\n| > 15K Raid-0\n| > 7GB RAM (DDR2 667)\n| > We are forced to use RedHat on this machine, so we are virtualizing\n| > the Debian box.\n| >\n| > # cpuinfo\n| > processor : [0-7]\n| > vendor_id : GenuineIntel\n| > cpu family : 6\n| > model : 23\n| > model name : Intel(R) Xeon(R) CPU E5420 @ 2.50GHz\n| > stepping : 6\n| > cpu MHz : 2500.148\n| > cache size : 6144 KB\n| >\n| >\n| >\n| > | Have both OS's been tuned by people that know how to tune the\n| > | respective OS's? AIX is very different than Linux, and needs to be\n| > | tuned\n| > | accordingly.\n| >\n| > We´ve been tuning AIX for the last 3 weeks, and lots of tuneables\n| > got changed.\n| > On Debian, we have far more experience, and it´s been a chalenge to\n| > understand how AIX works.\n| >\n| > Most important tunes:\n| > page_steal_method=1\n| > lru_file_repage=0\n| > kernel_heap_psize=64k\n| > maxperm%=90\n| > maxclient%=90\n| > minperm%=20\n| >\n| > Disk:\n| > chdev -l hdisk8 -a queue_depth=24\n| > chdev -l hdisk8 -a reserve_policy=no_reserve\n| > chdev -l hdisk8 -a algorithm=round_robin\n| > chdev -l hdisk8 -a max_transfer=0x400000\n| >\n| > HBA:\n| > chdev -l fcs0 -P -a max_xfer_size=0x400000 -a num_cmd_elems=1024\n| >\n| > Postgres:\n| > shared_buffers = 2304MB\n| > effective_io_concurrency = 5\n| \n| I wonder if effective_io_concurrency has anything to do with it. It\n| was\n| implemented and mainly tested on Linux, and I am unsure if it will do\n| anything on AIX. The plan you posted for the query does a bitmap index\n| scans which is what effective_io_concurrency will speed up.\n| \n| Can you post the output of explain analyze for that query on both AIX\n| and Linux? 
That will show where the time is being spent.\n\n\nI changed the queries in order to make a more valuable comparison.\n\nDebian first run (23s):\nhttp://explain.depesz.com/s/1fT\n\nAIX first run (40s):\nhttp://explain.depesz.com/s/CRG\n\nDebian cached consecutive runs (8s)\nhttp://explain.depesz.com/s/QAi\n\nAIX cached consecutive runs (12s)\nhttp://explain.depesz.com/s/xJU\n\nBoth boxes are running with DDR2 667, so RAM speed seems to be the bottleneck now.\nWe´re about to try RedHat EL6 in the next few days.\n\n| \n| If it is being spent in the bitmap index scan, try setting\n| effective_io_concurrency to 0 for Linux, and see what effect that has.\n\nI disabled effective_io_concurrency at AIX but it made no changes on bitmap index times.\n\n\n\n\n| --\n| Brad Nicholson 416-673-4106\n| Database Administrator, Afilias Canada Corp.\n| \n\n[]´s, Andre Volpato\n",
"msg_date": "Wed, 27 Oct 2010 15:05:09 -0200 (BRST)",
"msg_from": "=?utf-8?Q?Andr=C3=A9_Volpato?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "André Volpato wrote:\n> | \n> | If it is being spent in the bitmap index scan, try setting\n> | effective_io_concurrency to 0 for Linux, and see what effect that has.\n>\n> I disabled effective_io_concurrency at AIX but it made no changes on bitmap index times.\n> \n\nBrad's point is that it probably doesn't do anything at all on AIX, and \nis already disabled accordingly. But on Linux, it is doing something, \nand that might be contributing to why it's executing so much better on \nthat platform. If you disable that parameter on your Debian box, that \nshould give you an idea whether that particular speed-up is a major \ncomponent to the difference you're seeing or not.\n\nAlso, if the system check was done by the \"vendor team\" team, don't \ntrust them at all. It doesn't sound like a disk problem is involved in \nyour case yet, but be sure to do your own basic disk benchmarking too \nrather than believing what you're sold. There's a quick intro to that \nat http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm \nand a much longer treatment of the subject in my book if you want a lot \nmore details. I don't have any AIX-specific tuning advice in there though.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 27 Oct 2010 15:24:25 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> André Volpato wrote:\n>> I disabled effective_io_concurrency at AIX but it made no changes on bitmap index times.\n\n> Brad's point is that it probably doesn't do anything at all on AIX, and \n> is already disabled accordingly.\n\nAFAICT from googling, AIX does have posix_fadvise, though maybe it\ndoesn't do anything useful ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 16:10:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AIX slow buffer reads "
},
{
"msg_contents": "----- Mensagem original -----\n| André Volpato wrote:\n| > |\n| > | If it is being spent in the bitmap index scan, try setting\n| > | effective_io_concurrency to 0 for Linux, and see what effect that\n| > | has.\n| >\n| > I disabled effective_io_concurrency at AIX but it made no changes on\n| > bitmap index times.\n| >\n| \n| Brad's point is that it probably doesn't do anything at all on AIX,\n| and is already disabled accordingly. But on Linux, it is doing something,\n| and that might be contributing to why it's executing so much better on\n| that platform. If you disable that parameter on your Debian box, that\n| should give you an idea whether that particular speed-up is a major\n| component to the difference you're seeing or not.\n\n\nCant do it right now, but will do it ASAP and post here.\n\n\n| Also, if the system check was done by the \"vendor team\" team, don't\n| trust them at all. It doesn't sound like a disk problem is involved in\n| your case yet, but be sure to do your own basic disk benchmarking too\n| rather than believing what you're sold. There's a quick intro to that\n| at\n| http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n| and a much longer treatment of the subject in my book if you want a\n| lot\n| more details. I don't have any AIX-specific tuning advice in there\n| though.| \n\n\nI´m gonna read your sugestion, thanks.\nWe tested the disks also, and we did a lot of tuning to get acceptable transfer rates at AIX.\n\nYesterday I tried your \"stream-scaling\" and get around 7000MB/s (single thread) and 10000MB/s (eight threads) at AIX, and a little less than that at Debian, since its a virtual box.\nI found that even my notebook is close to that transfer rates, and both boxes are limited by DDR2 speeds.\n\n\n| --\n| Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n| PostgreSQL Training, Services and Support www.2ndQuadrant.us\n| \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n[]´s, André Volpato\n",
"msg_date": "Wed, 27 Oct 2010 18:56:52 -0200 (BRST)",
"msg_from": "=?utf-8?Q?Andr=C3=A9_Volpato?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "On 10/27/2010 4:10 PM, Tom Lane wrote:\n> Greg Smith<[email protected]> writes:\n>> André Volpato wrote:\n>>> I disabled effective_io_concurrency at AIX but it made no changes on bitmap index times.\n>> Brad's point is that it probably doesn't do anything at all on AIX, and\n>> is already disabled accordingly.\n> AFAICT from googling, AIX does have posix_fadvise, though maybe it\n> doesn't do anything useful ...\n>\n> \t\t\tregards, tom lane\n\nIf there is an easy way to check if it does do anything useful? If so, \nI can check it out.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n",
"msg_date": "Wed, 27 Oct 2010 18:37:23 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AIX slow buffer reads"
},
{
"msg_contents": "Brad Nicholson <[email protected]> writes:\n> On 10/27/2010 4:10 PM, Tom Lane wrote:\n>> AFAICT from googling, AIX does have posix_fadvise, though maybe it\n>> doesn't do anything useful ...\n\n> If there is an easy way to check if it does do anything useful? If so, \n> I can check it out.\n\nIf you don't see any performance change in bitmap scans between\neffective_io_concurrency = 0 and effective_io_concurrency = maybe 4 or\nso, then you could probably conclude it's a no-op.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 18:44:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: AIX slow buffer reads "
},
{
"msg_contents": "\n----- Mensagem original -----\n| André Volpato wrote:\n| > |\n| > | If it is being spent in the bitmap index scan, try setting\n| > | effective_io_concurrency to 0 for Linux, and see what effect that\n| > | has.\n| >\n| > I disabled effective_io_concurrency at AIX but it made no changes on\n| > bitmap index times.\n| >\n| \n| Brad's point is that it probably doesn't do anything at all on AIX,\n| and\n| is already disabled accordingly. But on Linux, it is doing something,\n| and that might be contributing to why it's executing so much better on\n| that platform. If you disable that parameter on your Debian box, that\n| should give you an idea whether that particular speed-up is a major\n| component to the difference you're seeing or not.\n\n\nHere´s new explains based on Debian box:\n\n(1) effective_io_concurrency = 5\n# /etc/init.d/postgresql stop\n# echo 3 > /proc/sys/vm/drop_caches\n# /etc/init.d/postgresql start\n\nhttp://explain.depesz.com/s/br\n\n(2) effective_io_concurrency = 0\n# /etc/init.d/postgresql stop\n# echo 3 > /proc/sys/vm/drop_caches\n# /etc/init.d/postgresql start\n\nhttp://explain.depesz.com/s/3A0\n\nBitmapAnd really gets improved a little bit in (1), but Bitmap index scans got a lot worse.\n\n\n| --\n| Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n| PostgreSQL Training, Services and Support www.2ndQuadrant.us\n| \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n[]´s, Andre Volpato\n",
"msg_date": "Thu, 28 Oct 2010 10:33:07 -0200 (BRST)",
"msg_from": "=?utf-8?Q?Andr=C3=A9_Volpato?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: AIX slow buffer reads"
}
] |
[
{
"msg_contents": "Hi experts,\n\nI have a (near) real-time application in which inserts into the database needs \nto be visible to queries from other threads with minimal delay. The inserts are \ntriggered by real-time events and are therefore asynchronous (i.e. many \nperformance tips I read related to batch inserts or copy do not apply here, \nsince these events cannot be predicted or batched), and the inserted data need \nto be available within a couple of seconds to other threads (for example, an \ninserted row that only appears to other query threads 5 seconds or more after \nthe insert is not acceptable). The delay should be under 2 seconds maximum, \nsub-1 second would be great.\n\nMy questions are: (1) Does the MVCC architecture introduce significant delays \nbetween insert by a thread and visibility by other threads (I am unclear about \nhow multiple versions are \"collapsed\" or reconciled, as well as how different \nquery threads are seeing which version)? (2) Are there any available benchmarks \nthat can measure this delay? (3) What are relevant config parameters that will \nreduce this delay?\n\nThanks for your patience with my ignorance of MVCC (still learning more about \nit),\nSteve\n\n\n \nHi experts,I have a (near) real-time application in which inserts into the database needs to be visible to queries from other threads with minimal delay. The inserts are triggered by real-time events and are therefore asynchronous (i.e. many performance tips I read related to batch inserts or copy do not apply here, since these events cannot be predicted or batched), and the inserted data need to be available within a couple of seconds to other threads (for example, an inserted row that only appears to other query threads 5 seconds or more after the insert is not acceptable). The delay should be under 2 seconds maximum, sub-1 second would be great.My questions are: (1) Does the MVCC architecture introduce significant delays between\n insert by a thread and visibility by other threads (I am unclear about how multiple versions are \"collapsed\" or reconciled, as well as how different query threads are seeing which version)? (2) Are there any available benchmarks that can measure this delay? (3) What are relevant config parameters that will reduce this delay?Thanks for your patience with my ignorance of MVCC (still learning more about it),Steve",
"msg_date": "Mon, 25 Oct 2010 11:46:23 -0700 (PDT)",
"msg_from": "Steve Wong <[email protected]>",
"msg_from_op": true,
"msg_subject": "MVCC and Implications for (Near) Real-Time Application"
},
{
"msg_contents": "\nOn Oct 25, 2010, at 2:46 PM, Steve Wong wrote:\n\n> Hi experts,\n> \n> I have a (near) real-time application in which inserts into the database needs \n> to be visible to queries from other threads with minimal delay. The inserts are \n> triggered by real-time events and are therefore asynchronous (i.e. many \n> performance tips I read related to batch inserts or copy do not apply here, \n> since these events cannot be predicted or batched), and the inserted data need \n> to be available within a couple of seconds to other threads (for example, an \n> inserted row that only appears to other query threads 5 seconds or more after \n> the insert is not acceptable). The delay should be under 2 seconds maximum, \n> sub-1 second would be great.\n> \n> My questions are: (1) Does the MVCC architecture introduce significant delays \n> between insert by a thread and visibility by other threads (I am unclear about \n> how multiple versions are \"collapsed\" or reconciled, as well as how different \n> query threads are seeing which version)? (2) Are there any available benchmarks \n> that can measure this delay? (3) What are relevant config parameters that will \n> reduce this delay?\n\nThere is no way to know without testing whether your hardware, OS, database schema, and database load can meet your demands. However, there is no technical reason why PostgreSQL could not meet your timing goals- MVCC does not inherently introduce delays, however the PostgreSQL implementation requires a cleanup process which can introduce latency.\n\nIf you find that your current architecture is not up to the task, consider using LISTEN/NOTIFY with a payload (new in 9.0), which we are using for a similar \"live-update\" system.\n\nCheers,\nM\n\n\n",
"msg_date": "Fri, 29 Oct 2010 13:05:48 -0400",
"msg_from": "\"A.M.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MVCC and Implications for (Near) Real-Time Application"
},
{
"msg_contents": "Steve Wong <[email protected]> wrote:\n \n> (1) Does the MVCC architecture introduce significant delays\n> between insert by a thread and visibility by other threads (I am\n> unclear about how multiple versions are \"collapsed\" or reconciled,\n> as well as how different query threads are seeing which version)?\n \nAs soon as the inserting transaction commits the inserted row is\nvisible to new snapshots. If you are in an explicit transaction the\ncommit will have occurred before the return from the COMMIT request;\notherwise it will have completed before the return from the INSERT\nrequest.\n \nYou will get a new snapshot for every statement in READ COMMITTED\n(or lower) transaction isolation. You will get a new snapshot for\neach database transaction in higher isolation levels.\n \n-Kevin\n",
"msg_date": "Fri, 29 Oct 2010 12:18:20 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MVCC and Implications for (Near) Real-Time\n\t Application"
}
] |
[
{
"msg_contents": "Hey\n\nTurned on log_min_duration_statement today and started getting timings on sql statements (version 8.3.10).\nCan anyone please tell me how to interpret the (S_nn/C_nn) information in the log line.\n\nLOG: duration: 19817.211 ms execute S_73/C_74: ....(statement text) .....\n\nThanks for your time\nMr\n\n\nHey Turned on log_min_duration_statement today and started getting timings on sql statements (version 8.3.10).Can anyone please tell me how to interpret the (S_nn/C_nn) information in the log line. LOG: duration: 19817.211 ms execute S_73/C_74: ….(statement text) ….. Thanks for your timeMr",
"msg_date": "Mon, 25 Oct 2010 15:33:47 -0700",
"msg_from": "Mark Rostron <[email protected]>",
"msg_from_op": true,
"msg_subject": "interpret statement log duration information "
},
{
"msg_contents": "Mark Rostron <[email protected]> writes:\n> Can anyone please tell me how to interpret the (S_nn/C_nn) information in the log line.\n\n> LOG: duration: 19817.211 ms execute S_73/C_74: ....(statement text) .....\n\nIt's prepared statement name slash portal name. You'd have to look at\nyour client-side code to find out what the names refer to.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Oct 2010 18:54:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: interpret statement log duration information "
}
] |
[
{
"msg_contents": "Which one is faster?\nselect count(*) from talble\nor\nselect count(id) from table\nwhere id is the primary key.\n\nWhich one is faster?select count(*) from talbleorselect count(id) from tablewhere id is the primary key.",
"msg_date": "Tue, 26 Oct 2010 16:56:48 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "which one is faster"
},
{
"msg_contents": "On 26 October 2010 12:56, AI Rumman <[email protected]> wrote:\n\n> Which one is faster?\n> select count(*) from talble\n> or\n> select count(id) from table\n> where id is the primary key.\n>\n\n\nCheck the query plan, both queries are the same.\n\nregards\nSzymon\n\nOn 26 October 2010 12:56, AI Rumman <[email protected]> wrote:\nWhich one is faster?select count(*) from talbleorselect count(id) from tablewhere id is the primary key.\nCheck the query plan, both queries are the same.regardsSzymon",
"msg_date": "Tue, 26 Oct 2010 12:59:05 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which one is faster"
},
{
"msg_contents": "W dniu 26.10.2010 12:59, Szymon Guz pisze:\n> both queries are the same.\n\nIMHO they aren't the same, but they returns the same value in this case.\nI mean count(field) doesn't count NULL values, count(*) does it.\nI'm writing this only for note:)\nRegards\n",
"msg_date": "Tue, 26 Oct 2010 13:59:03 +0200",
"msg_from": "=?UTF-8?B?TWFyY2luIE1pcm9zxYJhdw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which one is faster"
},
{
"msg_contents": "2010/10/26 Marcin Mirosław <[email protected]>\n\n> W dniu 26.10.2010 12:59, Szymon Guz pisze:\n> > both queries are the same.\n>\n> IMHO they aren't the same, but they returns the same value in this case.\n> I mean count(field) doesn't count NULL values, count(*) does it.\n> I'm writing this only for note:)\n> Regards\n>\n>\nYup, indeed. I omitted that note, as it was written that the field is\nprimary key :).\n\nregards\nSzymon\n\n2010/10/26 Marcin Mirosław <[email protected]>\nW dniu 26.10.2010 12:59, Szymon Guz pisze:\n> both queries are the same.\n\nIMHO they aren't the same, but they returns the same value in this case.\nI mean count(field) doesn't count NULL values, count(*) does it.\nI'm writing this only for note:)\nRegards\nYup, indeed. I omitted that note, as it was written that the field is primary key :).regardsSzymon",
"msg_date": "Tue, 26 Oct 2010 14:05:36 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which one is faster"
},
{
"msg_contents": "implementation wise, count(*) is faster. Very easy to test:\n\nSELECT COUNT(*) FROM generate_series(1,100) a, generate_series(1,1000) b;\n\nSELECT COUNT(a) FROM generate_series(1,100) a, generate_series(1,1000) b;\n\n\n;]\n",
"msg_date": "Tue, 26 Oct 2010 13:08:37 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which one is faster"
},
{
"msg_contents": "2010/10/26 Grzegorz Jaśkiewicz <[email protected]>\n\n> implementation wise, count(*) is faster. Very easy to test:\n>\n> SELECT COUNT(*) FROM generate_series(1,100) a, generate_series(1,1000) b;\n>\n> SELECT COUNT(a) FROM generate_series(1,100) a, generate_series(1,1000) b;\n>\n>\n> ;]\n>\n\nWell, strange. Why is that slower?\n\n2010/10/26 Grzegorz Jaśkiewicz <[email protected]>\nimplementation wise, count(*) is faster. Very easy to test:\n\nSELECT COUNT(*) FROM generate_series(1,100) a, generate_series(1,1000) b;\n\nSELECT COUNT(a) FROM generate_series(1,100) a, generate_series(1,1000) b;\n\n\n;]\nWell, strange. Why is that slower?",
"msg_date": "Tue, 26 Oct 2010 14:16:31 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which one is faster"
},
{
"msg_contents": "2010/10/26 Szymon Guz <[email protected]>:\n>\n> Well, strange. Why is that slower?\n\nTo answer that fully, you would need to see the implementation.\nsuffice to say,\n\ncount(a) does:\n\nif (a <> NULL)\n{\n count++;\n}\n\nand count(*) does:\n\n count++;\n\n\n\n-- \nGJ\n",
"msg_date": "Tue, 26 Oct 2010 13:20:33 +0100",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which one is faster"
},
{
"msg_contents": "2010/10/26 Grzegorz Jaśkiewicz <[email protected]>\n\n> 2010/10/26 Szymon Guz <[email protected]>:\n> >\n> > Well, strange. Why is that slower?\n>\n> To answer that fully, you would need to see the implementation.\n> suffice to say,\n>\n> count(a) does:\n>\n> if (a <> NULL)\n> {\n> count++;\n> }\n>\n> and count(*) does:\n>\n> count++;\n>\n>\n>\nYup, I was afraid of that, even if there is not null on the column... but I\nthink usually nobody notices the difference with count.\n\nregards\nSzymon\n\n2010/10/26 Grzegorz Jaśkiewicz <[email protected]>\n2010/10/26 Szymon Guz <[email protected]>:\n>\n> Well, strange. Why is that slower?\n\nTo answer that fully, you would need to see the implementation.\nsuffice to say,\n\ncount(a) does:\n\nif (a <> NULL)\n{\n count++;\n}\n\nand count(*) does:\n\n count++;\nYup, I was afraid of that, even if there is not null on the column... but I think usually nobody notices the difference with count.regardsSzymon",
"msg_date": "Tue, 26 Oct 2010 14:23:21 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which one is faster"
},
{
"msg_contents": "On 10/26/2010 6:56 AM, AI Rumman wrote:\n> Which one is faster?\n> select count(*) from talble\n> or\n> select count(id) from table\n> where id is the primary key.\nPostgreSQL doesn't utilize the access methods known as \"FULL INDEX SCAN\" \nand \"FAST FULL INDEX SCAN\", so the optimizer will generate the \nsequential scan in both cases. In other words, PostgreSQL will read the \nentire table when counting, no matter what.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n",
"msg_date": "Tue, 26 Oct 2010 08:32:54 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: which one is faster"
}
] |
[
{
"msg_contents": "Hello,\n\nWhat is the general view of performance CPU's nowadays when it comes to \nPostgreSQL performance? Which CPU is the better choice, in regards to \nRAM access-times, stream speed, cache synchronization etc. Which is the \nbetter CPU given the limitation of using AMD64 (x86-64)?\n\nWe're getting ready to replace our (now) aging db servers with some \nbrand new with higher core count. The old ones are 4-socket dual-core \nOpteron 8218's with 48GB RAM. Right now the disk-subsystem is not the \nlimiting factor so we're aiming for higher core-count and as well as \nfaster and more RAM. We're also moving into the territory of version 9.0 \nwith streaming replication to be able to offload at least a part of the \nread-only queries to the slave database. The connection count on the \ndatabase usually lies in the region of ~2500 connections and the \ndatabase is small enough that it can be kept entirely in RAM (dump is \nabout 2,5GB).\n\nRegards,\nChristian Elmerot\n",
"msg_date": "Tue, 26 Oct 2010 14:55:06 +0200",
"msg_from": "Christian Elmerot <[email protected]>",
"msg_from_op": true,
"msg_subject": "CPUs for new databases"
},
{
"msg_contents": "Christian Elmerot <[email protected]> wrote:\n \n> What is the general view of performance CPU's nowadays when it\n> comes to PostgreSQL performance? Which CPU is the better choice,\n> in regards to RAM access-times, stream speed, cache\n> synchronization etc. Which is the better CPU given the limitation\n> of using AMD64 (x86-64)?\n \nYou might want to review recent posts by Greg Smith on this. One\nsuch thread starts here:\n \nhttp://archives.postgresql.org/pgsql-performance/2010-09/msg00120.php\n \n> We're getting ready to replace our (now) aging db servers with\n> some brand new with higher core count. The old ones are 4-socket\n> dual-core Opteron 8218's with 48GB RAM. Right now the disk-subsystem\n> is not the limiting factor so we're aiming for higher core-count\n> and as well as faster and more RAM. We're also moving into the\n> territory of version 9.0 with streaming replication to be able to\n> offload at least a part of the read-only queries to the slave\n> database. The connection count on the database usually lies in the\n> region of ~2500 connections and the database is small enough that\n> it can be kept entirely in RAM (dump is about 2,5GB).\n \nYou really should try connection pooling. Even though many people\nfind it counterintuitive, it is likely to improve both throughput\nand response time significantly. See any of the many previous\nthreads on the topic for reasons.\n \n-Kevin\n",
"msg_date": "Tue, 26 Oct 2010 09:27:49 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 6:55 AM, Christian Elmerot <[email protected]> wrote:\n> Hello,\n>\n> What is the general view of performance CPU's nowadays when it comes to\n> PostgreSQL performance? Which CPU is the better choice, in regards to RAM\n> access-times, stream speed, cache synchronization etc. Which is the better\n> CPU given the limitation of using AMD64 (x86-64)?\n\nFor faster but fewer individual cores the Intels are pretty good. For\nway more cores, each being pretty fast and having enough memory\nbandwidth to use all those cores, the AMDs are very impressive. The\nMagny Cours AMDs are probably the best 4 socket cpus made.\n\n> We're getting ready to replace our (now) aging db servers with some brand\n> new with higher core count. The old ones are 4-socket dual-core Opteron\n> 8218's with 48GB RAM.\n\nA single AMD 12 core Magny Cours or Intel Nehalem 8 core cpu would be\ntwice as fast or more than the old machine.\n\n> The connection count on the database usually lies in\n> the region of ~2500 connections and the database is small enough that it can\n> be kept entirely in RAM (dump is about 2,5GB).\n\nAs another poster mentioned, you should really look at connection pooling.\n",
"msg_date": "Tue, 26 Oct 2010 08:50:48 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On 2010-10-26 16:27, Kevin Grittner wrote:\n> Christian Elmerot<[email protected]> wrote:\n>\n>> What is the general view of performance CPU's nowadays when it\n>> comes to PostgreSQL performance? Which CPU is the better choice,\n>> in regards to RAM access-times, stream speed, cache\n>> synchronization etc. Which is the better CPU given the limitation\n>> of using AMD64 (x86-64)?\n>\n> You might want to review recent posts by Greg Smith on this. One\n> such thread starts here:\n>\n> http://archives.postgresql.org/pgsql-performance/2010-09/msg00120.php\n\nI've read those posts before and they are interresting but only part of \nthe puzzle.\n\n\n> \n>> We're getting ready to replace our (now) aging db servers with\n>> some brand new with higher core count. The old ones are 4-socket\n>> dual-core Opteron 8218's with 48GB RAM. Right now the disk-subsystem\n>> is not the limiting factor so we're aiming for higher core-count\n>> and as well as faster and more RAM. We're also moving into the\n>> territory of version 9.0 with streaming replication to be able to\n>> offload at least a part of the read-only queries to the slave\n>> database. The connection count on the database usually lies in the\n>> region of ~2500 connections and the database is small enough that\n>> it can be kept entirely in RAM (dump is about 2,5GB).\n>\n> You really should try connection pooling. Even though many people\n> find it counterintuitive, it is likely to improve both throughput\n> and response time significantly. See any of the many previous\n> threads on the topic for reasons.\n\nI believe you are right as this is actually something we're looking into \nas we're making read-only queries pass through a dedicated set of \nlookup-hosts as well as having writes that are not time critical to pass \nthrough another set of hosts.\n\nRegards,\nChristian Elmerot\n",
"msg_date": "Tue, 26 Oct 2010 17:23:34 +0200",
"msg_from": "Christian Elmerot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On 10/26/10 7:50 AM, Scott Marlowe wrote:\n> For faster but fewer individual cores the Intels are pretty good. For\n> way more cores, each being pretty fast and having enough memory\n> bandwidth to use all those cores, the AMDs are very impressive. The\n> Magny Cours AMDs are probably the best 4 socket cpus made.\n\nIn a general workload, fewer faster cores are better. We do not scale\nperfectly across cores. The only case where that's not true is\nmaintaining lots of idle connections, and that's really better dealt\nwith in software.\n\nSo I've been buying Intel for the last couple years. Would love to see\nsome testing on database workloads.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Tue, 26 Oct 2010 13:54:07 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": ">>>>> \"JB\" == Josh Berkus <[email protected]> writes:\n\nJB> In a general workload, fewer faster cores are better. We do not scale\nJB> perfectly across cores. The only case where that's not true is\nJB> maintaining lots of idle connections, and that's really better dealt\nJB> with in software.\n\nI've found that ram speed is the most limiting factor I've run into for\nthose cases where the db fits in RAM. The less efficient lookups run\njust as fast when the CPU is in powersving mode as in performance, which\nimplies that the cores are mostly waiting on RAM (cache or main).\n\nI suspect cache size and ram speed will be the most important factors\nuntil the point where disk i/o speed and capacity take over.\n\nI'm sure some db applications run computaionally expensive queries on\nthe server, but most queries seem light on computaion and heavy on\ngathering and comparing.\n\nIt can help to use recent versions of gcc with -march=native. And\nrecent versions of glibc offer improved string ops on recent hardware.\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 1024D/ED7DAEA6\n",
"msg_date": "Tue, 26 Oct 2010 19:45:12 -0400",
"msg_from": "James Cloos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On 10/27/10 01:45, James Cloos wrote:\n>>>>>> \"JB\" == Josh Berkus<[email protected]> writes:\n>\n> JB> In a general workload, fewer faster cores are better. We do not scale\n> JB> perfectly across cores. The only case where that's not true is\n> JB> maintaining lots of idle connections, and that's really better dealt\n> JB> with in software.\n>\n> I've found that ram speed is the most limiting factor I've run into for\n> those cases where the db fits in RAM. The less efficient lookups run\n> just as fast when the CPU is in powersving mode as in performance, which\n> implies that the cores are mostly waiting on RAM (cache or main).\n>\n> I suspect cache size and ram speed will be the most important factors\n> until the point where disk i/o speed and capacity take over.\n\nFWIW, yes - once the IO is fast enough or not necessary (e.g. the \nread-mostly database fits in RAM), RAM bandwidth *is* the next \nbottleneck and it really, really can be observed in actual loads. Buying \na QPI-based CPU instead of the cheaper DMI-based ones (if talking about \nIntel chips), and faster memory modules (DDR3-1333+) really makes a \ndifference in this case.\n\n(QPI and DMI are basically the evolution the front side bus; AMD had HT \n- HyperTransport for years now. Wikipedia of course has more information \nfor the interested.)\n\n\n",
"msg_date": "Wed, 27 Oct 2010 02:18:43 +0200",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On Tue, Oct 26, 2010 at 6:18 PM, Ivan Voras <[email protected]> wrote:\n> FWIW, yes - once the IO is fast enough or not necessary (e.g. the\n> read-mostly database fits in RAM), RAM bandwidth *is* the next bottleneck\n> and it really, really can be observed in actual loads. Buying a QPI-based\n> CPU instead of the cheaper DMI-based ones (if talking about Intel chips),\n> and faster memory modules (DDR3-1333+) really makes a difference in this\n> case.\n>\n> (QPI and DMI are basically the evolution the front side bus; AMD had HT -\n> HyperTransport for years now. Wikipedia of course has more information for\n> the interested.)\n\nNote that there are greatly different speeds in HyperTransport from\none AMD chipset to the next. The newest ones, currently Magny Cours\nare VERY fast with 1333MHz memory in 64 banks on my 4 cpu x 12 core\nmachine. And it does scale with each thread I throw at it through\nright at 48. Note that those CPUs have 12Megs L3 cache, which makes a\nbig difference if a lot can fit in cache, but even if it can't the\nspeed to main memory is very good. There was an earlier thread with\nGreg and I in it where we posted the memory bandwidth numbers for that\nmachine and it was insane how much data all 48 cores could pump into /\nout of memory at the same time.\n",
"msg_date": "Tue, 26 Oct 2010 19:14:26 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "Scott Marlowe wrote:\n> There was an earlier thread with\n> Greg and I in it where we posted the memory bandwidth numbers for that\n> machine and it was insane how much data all 48 cores could pump into /\n> out of memory at the same time.\n> \nYeah, it was insane. Building a economical 'that generation opteron' \ndatabase server has been on my wishlist since that thread, my current \nfavorite is the 8-core 6128 opteron, for $275,- at newegg \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16819105266\n\nAh might as well drop the whole config on my wishlist as well:\n\n2 times that 8 core processor\nSupermicro H8DGU-F motherboard - 16 dimm slots, dual socket, dual Intel \nethernet and additional ethernet for IPMI.\n2 times KVR1333D3D4R9SK8/32G memory - 4GB dimms seem to be at the GB/$ \nsweet spot at the moment for DDR3\n1 time OCZ Vertex 2 Pro 100GB (there was a thread about this sandforce \ndisk as well: a SSD with supercap that acts as battery backup)\nmaybe another one or two spindled 2.5\" drives for archive/backup.\nSupermicro 113TQ-563UB chassis\n\nAt the time I looked this up, I could buy it for just over �3000,-\n\nregards\nYeb Havinga\n\nPS: I'm in no way involved with either of the manufacturers, nor one of \ntheir fanboys. I'm just interested, like the OP, what is good \nhardware/config for a PG related server.\n\n",
"msg_date": "Wed, 27 Oct 2010 09:37:35 +0200",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 1:37 AM, Yeb Havinga <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> There was an earlier thread with\n>> Greg and I in it where we posted the memory bandwidth numbers for that\n>> machine and it was insane how much data all 48 cores could pump into /\n>> out of memory at the same time.\n>>\n>\n> Yeah, it was insane. Building a economical 'that generation opteron'\n> database server has been on my wishlist since that thread, my current\n> favorite is the 8-core 6128 opteron, for $275,- at newegg\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16819105266\n>\n> Ah might as well drop the whole config on my wishlist as well:\n>\n> 2 times that 8 core processor\n> Supermicro H8DGU-F motherboard - 16 dimm slots, dual socket, dual Intel\n> ethernet and additional ethernet for IPMI.\n> 2 times KVR1333D3D4R9SK8/32G memory - 4GB dimms seem to be at the GB/$ sweet\n> spot at the moment for DDR3\n> 1 time OCZ Vertex 2 Pro 100GB (there was a thread about this sandforce disk\n> as well: a SSD with supercap that acts as battery backup)\n> maybe another one or two spindled 2.5\" drives for archive/backup.\n> Supermicro 113TQ-563UB chassis\n>\n> At the time I looked this up, I could buy it for just over €3000,-\n\nIt's important to remember that often we're talking about a machine\nthat has to run dozens of concurrent requests when you start needing\nthis many cores, and consequently, how man spindles (SSD or HD) to\nsustain a certain throughput rate.\n\nIf you're looking at that many cores make sure you can put enough SSDs\nand / or HDs underneath it to keep up. Just being able to go from 4\nto 8 drives can extend the life of a db server by years. Supermicro\nmakes some nice 2U enclosures that hold either 8 or 16 2.5\" drives.\n\n> PS: I'm in no way involved with either of the manufacturers, nor one of\n> their fanboys. I'm just interested, like the OP, what is good\n> hardware/config for a PG related server.\n\nMe either really. Both times I bought db servers were right after AMD\nhad taken a lead in SMP. Got a fair number of intel cpu machines in\nthe farm that work great, but not as database servers. But I am keen\non the 8 core AMDs to come down. Those things have crazy good memory\nbandwidth and you can actually use all 16 cores in a server. I've got\na previous intermediate AMD with the old 6 core cpus, and that thing\ncan't run more than 8 processes before it starts slowing down.\n\nI don't know your projected data usage needs, but if they are at all\non a positive slope, consider the machines with 8 drive bays at least,\neven if you only need 2 or 4 drives now. Those chassis let you extend\nthe IO of a db server at will to 2 to 4 times it's original setup\npretty easily.\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Wed, 27 Oct 2010 01:52:20 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "One last note. Our vendor at the time we ordered our quad 12 core\nmachines could only provide that mobo in a 1U chassis. Consequently\nwe bought all external arrays for that machine. Since you're looking\nat a dual 8 core machine, you should be able to get a mobo like that\nin almost any chassis you want.\n",
"msg_date": "Wed, 27 Oct 2010 02:06:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On 10/26/10 6:14 PM, Scott Marlowe wrote:\n> There was an earlier thread with\n> Greg and I in it where we posted the memory bandwidth numbers for that\n> machine and it was insane how much data all 48 cores could pump into /\n> out of memory at the same time.\n\nWell, the next step then is to do some database server benchmarking.\n\nMy experience has been that PostgreSQL scales poorly past 30 cores, or\neven at lower levels depending on the workload. So it would be\ninteresting to see if the memory bandwidth on the AMDs makes up for our\nscaling issues.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Wed, 27 Oct 2010 11:03:36 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 12:03 PM, Josh Berkus <[email protected]> wrote:\n> On 10/26/10 6:14 PM, Scott Marlowe wrote:\n>> There was an earlier thread with\n>> Greg and I in it where we posted the memory bandwidth numbers for that\n>> machine and it was insane how much data all 48 cores could pump into /\n>> out of memory at the same time.\n>\n> Well, the next step then is to do some database server benchmarking.\n>\n> My experience has been that PostgreSQL scales poorly past 30 cores, or\n> even at lower levels depending on the workload. So it would be\n> interesting to see if the memory bandwidth on the AMDs makes up for our\n> scaling issues.\n\nWhich OSes have you tested it on? And what hardware? For smaller\noperations, like pgbench, where a large amount of what you're working\non fits in cache, I get near linear scaling right up to 48 cores.\nOverall performance increases til about 50 threads, then drops off to\nabout 60 to 70% peak for the next hundred or so threads I add on.\n",
"msg_date": "Wed, 27 Oct 2010 12:28:02 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 12:28 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Oct 27, 2010 at 12:03 PM, Josh Berkus <[email protected]> wrote:\n>> On 10/26/10 6:14 PM, Scott Marlowe wrote:\n>>> There was an earlier thread with\n>>> Greg and I in it where we posted the memory bandwidth numbers for that\n>>> machine and it was insane how much data all 48 cores could pump into /\n>>> out of memory at the same time.\n>>\n>> Well, the next step then is to do some database server benchmarking.\n>>\n>> My experience has been that PostgreSQL scales poorly past 30 cores, or\n>> even at lower levels depending on the workload. So it would be\n>> interesting to see if the memory bandwidth on the AMDs makes up for our\n>> scaling issues.\n>\n> Which OSes have you tested it on? And what hardware? For smaller\n> operations, like pgbench, where a large amount of what you're working\n> on fits in cache, I get near linear scaling right up to 48 cores.\n> Overall performance increases til about 50 threads, then drops off to\n> about 60 to 70% peak for the next hundred or so threads I add on.\n\nAnd that's with 8.3.latest on ubuntu 10.04 with latest updates on HW RAID.\n",
"msg_date": "Wed, 27 Oct 2010 13:17:20 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "Ivan Voras wrote:\n> FWIW, yes - once the IO is fast enough or not necessary (e.g. the \n> read-mostly database fits in RAM), RAM bandwidth *is* the next \n> bottleneck and it really, really can be observed in actual loads.\n\nThis is exactly what I've concluded, after many rounds of correlating \nmemory speed tests with pgbench tests against in-RAM databases. And \nit's the reason why I've written the stream-scaling utility and been \ncollecting test results from as many systems as possible. That seemed \nto get dismissed upthread as not being the answer the poster was looking \nfor, but I think you have to get a handle on that part before the rest \nof the trivia involved even matters.\n\nI have a bunch more results that have been flowing in that I need to \npublish there soon. Note that there is a bug in stream-scaling where \nsufficiently large systems can hit a compiler problem where it reports \n\"relocation truncated to fit: R_X86_64_PC32 against `.bss'\". I have two \nof those reports and am working on resolving.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 27 Oct 2010 15:58:46 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "On 2010-10-27 21:58, Greg Smith wrote:\n> Ivan Voras wrote:\n>> FWIW, yes - once the IO is fast enough or not necessary (e.g. the \n>> read-mostly database fits in RAM), RAM bandwidth *is* the next \n>> bottleneck and it really, really can be observed in actual loads.\n>\n> This is exactly what I've concluded, after many rounds of correlating \n> memory speed tests with pgbench tests against in-RAM databases. And \n> it's the reason why I've written the stream-scaling utility and been \n> collecting test results from as many systems as possible. That seemed \n> to get dismissed upthread as not being the answer the poster was \n> looking for, but I think you have to get a handle on that part before \n> the rest of the trivia involved even matters.\n>\n> I have a bunch more results that have been flowing in that I need to \n> publish there soon. Note that there is a bug in stream-scaling where \n> sufficiently large systems can hit a compiler problem where it reports \n> \"relocation truncated to fit: R_X86_64_PC32 against `.bss'\". I have \n> two of those reports and am working on resolving.\n>\n\nJust to chime in after the new systems were purchased and installed. We \nended up with buying a 4x Opteron 6168 (12core Magny-cours,12MB cache @ \n1.9Ghz) with 128GB 1333Mhz DDR3 RAM. That's an insane 48 cores. That is \nperhaps slightly beyond the scaling horizon for Postgres at the moment \nbut we're confident that scaling will improve over the lifetime with \nthese servers.\n\nUsing the stream-scaling test we see some very impressive numbers:\n\nHighest results comes at 32 threads:\n\nNumber of Threads requested = 32\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 81013.5506 0.0378 0.0377 0.0379\n\nThe pattern is quite clear in that any multiple of 4 (the number of \nphysical CPU packages) get a higher value but thinking about how the \nmemory is connected and utilized this makes perfect sense.\n\nFull output below\n\nRegards,\nChristian Elmerot, One.com\n\n\n\n\n=== CPU cache information ===\nCPU /sys/devices/system/cpu/cpu0 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu0 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu0 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu0 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu1 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu1 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu1 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu1 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu10 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu10 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu10 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu10 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu11 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu11 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu11 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu11 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu12 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu12 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu12 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu12 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu13 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu13 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu13 Level 2 Cache: 512K (Unified)\nCPU 
/sys/devices/system/cpu/cpu13 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu14 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu14 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu14 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu14 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu15 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu15 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu15 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu15 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu16 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu16 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu16 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu16 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu17 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu17 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu17 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu17 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu18 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu18 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu18 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu18 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu19 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu19 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu19 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu19 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu2 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu2 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu2 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu2 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu20 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu20 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu20 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu20 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu21 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu21 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu21 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu21 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu22 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu22 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu22 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu22 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu23 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu23 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu23 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu23 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu24 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu24 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu24 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu24 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu25 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu25 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu25 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu25 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu26 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu26 Level 1 Cache: 64K 
(Instruction)\nCPU /sys/devices/system/cpu/cpu26 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu26 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu27 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu27 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu27 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu27 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu28 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu28 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu28 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu28 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu29 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu29 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu29 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu29 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu3 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu3 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu3 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu3 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu30 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu30 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu30 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu30 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu31 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu31 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu31 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu31 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu32 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu32 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu32 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu32 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu33 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu33 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu33 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu33 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu34 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu34 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu34 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu34 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu35 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu35 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu35 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu35 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu36 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu36 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu36 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu36 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu37 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu37 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu37 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu37 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu38 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu38 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu38 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu38 Level 3 Cache: 5118K (Unified)\nCPU 
/sys/devices/system/cpu/cpu39 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu39 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu39 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu39 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu4 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu4 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu4 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu4 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu40 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu40 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu40 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu40 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu41 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu41 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu41 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu41 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu42 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu42 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu42 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu42 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu43 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu43 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu43 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu43 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu44 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu44 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu44 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu44 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu45 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu45 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu45 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu45 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu46 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu46 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu46 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu46 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu47 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu47 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu47 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu47 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu5 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu5 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu5 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu5 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu6 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu6 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu6 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu6 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu7 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu7 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu7 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu7 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu8 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu8 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu8 Level 2 Cache: 512K (Unified)\nCPU 
/sys/devices/system/cpu/cpu8 Level 3 Cache: 5118K (Unified)\nCPU /sys/devices/system/cpu/cpu9 Level 1 Cache: 64K (Data)\nCPU /sys/devices/system/cpu/cpu9 Level 1 Cache: 64K (Instruction)\nCPU /sys/devices/system/cpu/cpu9 Level 2 Cache: 512K (Unified)\nCPU /sys/devices/system/cpu/cpu9 Level 3 Cache: 5118K (Unified)\nTotal CPU system cache: 279871488 bytes\nComputed minimum array elements needed: 127214312\nMinimum array elements used: 127214312\n\n=== Check and build stream ===\n\n=== Testing up to 48 cores ===\n\n-------------------------------------------------------------\nSTREAM version $Revision: 5.9 $\n-------------------------------------------------------------\nThis system uses 8 bytes per DOUBLE PRECISION word.\n-------------------------------------------------------------\nArray size = 127214312, Offset = 0\nTotal memory required = 2911.7 MB.\nEach test is run 10 times, but only\nthe *best* time for each is used.\n-------------------------------------------------------------\nNumber of Threads requested = 1\n-------------------------------------------------------------\nPrinting one line per active thread....\n-------------------------------------------------------------\nYour clock granularity/precision appears to be 1 microseconds.\nEach test below will take on the order of 254582 microseconds.\n (= 254582 clock ticks)\nIncrease the size of the arrays if this shows that\nyou are not getting at least 20 clock ticks per test.\n-------------------------------------------------------------\nWARNING -- The above is only a rough guideline.\nFor best results, please be sure you know the\nprecision of your system timer.\n-------------------------------------------------------------\nFunction Rate (MB/s) Avg time Min time Max time\nCopy: 6057.6445 0.3418 0.3360 0.3750\nScale: 6028.3481 0.3442 0.3376 0.3786\nAdd: 6304.5394 0.4900 0.4843 0.5142\nTriad: 6236.0693 0.4968 0.4896 0.5219\n-------------------------------------------------------------\nSolution Validates\n-------------------------------------------------------------\n\nNumber of Threads requested = 2\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 12471.5252 0.2448 0.2448 0.2449\n\nNumber of Threads requested = 3\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 15952.3092 0.1914 0.1914 0.1915\n\nNumber of Threads requested = 4\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 24935.8135 0.1225 0.1224 0.1225\n\nNumber of Threads requested = 5\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 26223.8995 0.1165 0.1164 0.1166\n\nNumber of Threads requested = 6\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 36886.6048 0.0828 0.0828 0.0828\n\nNumber of Threads requested = 7\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 36930.7515 0.0827 0.0827 0.0828\n\nNumber of Threads requested = 8\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 38068.1227 0.0826 0.0802 0.0833\n\nNumber of Threads requested = 9\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 21442.7286 0.1506 0.1424 0.1639\n\nNumber of Threads requested = 10\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 22577.0833 0.1356 0.1352 0.1359\n\nNumber of Threads requested = 11\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 23312.3289 0.1311 0.1310 0.1311\n\nNumber of Threads requested = 12\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 40323.1058 0.0760 0.0757 0.0763\n\nNumber of Threads requested = 13\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 47004.6724 0.0652 0.0650 
0.0654\n\nNumber of Threads requested = 14\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 44424.2111 0.0687 0.0687 0.0688\n\nNumber of Threads requested = 15\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 52259.2348 0.0585 0.0584 0.0587\n\nNumber of Threads requested = 16\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 64229.4556 0.0476 0.0475 0.0477\n\nNumber of Threads requested = 17\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 34654.6042 0.0969 0.0881 0.0989\n\nNumber of Threads requested = 18\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 43236.4397 0.0846 0.0706 0.0985\n\nNumber of Threads requested = 19\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 40173.4578 0.0783 0.0760 0.0799\n\nNumber of Threads requested = 20\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 52418.1724 0.0585 0.0582 0.0587\n\nNumber of Threads requested = 21\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 59309.7805 0.0517 0.0515 0.0518\n\nNumber of Threads requested = 22\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 55953.3174 0.0547 0.0546 0.0548\n\nNumber of Threads requested = 23\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 69792.5266 0.0439 0.0437 0.0443\n\nNumber of Threads requested = 24\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 78001.9366 0.0393 0.0391 0.0393\n\nNumber of Threads requested = 25\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 56661.3804 0.0670 0.0539 0.0740\n\nNumber of Threads requested = 26\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 51899.3931 0.0624 0.0588 0.0674\n\nNumber of Threads requested = 27\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 48560.3902 0.0681 0.0629 0.0704\n\nNumber of Threads requested = 28\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 63773.3287 0.0485 0.0479 0.0498\n\nNumber of Threads requested = 29\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 67561.1570 0.0456 0.0452 0.0457\n\nNumber of Threads requested = 30\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 59568.8426 0.0514 0.0513 0.0515\n\nNumber of Threads requested = 31\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 54612.0337 0.0565 0.0559 0.0567\n\nNumber of Threads requested = 32\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 81013.5506 0.0378 0.0377 0.0379\n\nNumber of Threads requested = 33\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 58938.5382 0.0570 0.0518 0.0594\n\nNumber of Threads requested = 34\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 58142.9574 0.0555 0.0525 0.0591\n\nNumber of Threads requested = 35\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 52356.8789 0.0590 0.0583 0.0594\n\nNumber of Threads requested = 36\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 64303.6362 0.0481 0.0475 0.0485\n\nNumber of Threads requested = 37\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 63251.3840 0.0483 0.0483 0.0484\n\nNumber of Threads requested = 38\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 74401.3522 0.0411 0.0410 0.0412\n\nNumber of Threads requested = 39\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 77623.2130 0.0394 0.0393 0.0394\n\nNumber of Threads requested = 40\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 80152.0442 0.0383 0.0381 0.0384\n\nNumber of Threads requested = 41\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 68952.6217 0.0443 0.0443 0.0443\n\nNumber of Threads requested = 42\nFunction 
Rate (MB/s) Avg time Min time Max time\nTriad: 69971.7614 0.0437 0.0436 0.0437\n\nNumber of Threads requested = 43\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 71488.5304 0.0428 0.0427 0.0430\n\nNumber of Threads requested = 44\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 72992.9602 0.0419 0.0418 0.0420\n\nNumber of Threads requested = 45\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 75000.9485 0.0408 0.0407 0.0409\n\nNumber of Threads requested = 46\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 76208.7407 0.0401 0.0401 0.0402\n\nNumber of Threads requested = 47\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 77969.6418 0.0392 0.0392 0.0393\n\nNumber of Threads requested = 48\nFunction Rate (MB/s) Avg time Min time Max time\nTriad: 79731.3522 0.0384 0.0383 0.0385\n\n\n",
"msg_date": "Fri, 26 Nov 2010 17:38:51 +0100",
"msg_from": "\"Christian Elmerot @ One.com\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "Christian Elmerot @ One.com wrote:\n> Highest results comes at 32 threads:\n> Number of Threads requested = 32\n> Function Rate (MB/s) Avg time Min time Max time\n> Triad: 81013.5506 0.0378 0.0377 0.0379\n\nThere is some run-to-run variation in the results of this test, and \naccordingly some margin for error in each individual result. I wouldn't \nconsider the difference between the speed at 32 threads (81) and 48 \n(79.7) to be statistically significant, and based on the overall shape \nof the results curve that 32 result looks suspicious. I would bet that \nif you run the test multiple times, you'd sometimes seen the 48 core one \nrun faster than the 32.\n\n> The pattern is quite clear in that any multiple of 4 (the number of \n> physical CPU packages) get a higher value but thinking about how the \n> memory is connected and utilized this makes perfect sense.\n\nIn addition to the memory issues, there's also thread CPU scheduling \ninvolved here. Ideally the benchmark would pin each thread to a single \ncore and keep it there for the runtime of the test, but it's not there \nyet. I suspect one source of variation at odd numbers of threads \ninvolves processes that bounce between CPUs more than in the more even \ncases.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 26 Nov 2010 17:30:16 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
},
{
"msg_contents": "\nOn Nov 26, 2010, at 2:30 PM, Greg Smith wrote:\n\n> \n> In addition to the memory issues, there's also thread CPU scheduling \n> involved here. Ideally the benchmark would pin each thread to a single \n> core and keep it there for the runtime of the test, but it's not there \n> yet. I suspect one source of variation at odd numbers of threads \n> involves processes that bounce between CPUs more than in the more even \n> cases.\n> \n\nDepends on what you're interested in.\n\nPostgres doesn't pin threads to processors. Postgres doesn't use threads. A STREAM benchmark that used multiple processes, with half SYSV shared and half in-process memory access, would be better. How the OS schedules the processes and memory access is critical. One server might score higher on an optimized 'pin the processes' STREAM test, but be slower in the real world for Postgres because its not testing anything that Postgres can do.\n\n\n> -- \n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 3 Dec 2010 09:36:43 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPUs for new databases"
}
] |
[
{
"msg_contents": "I have the following query:\n\n \n\nselect distinct Region.RegionShort as RegionShort\n\n,County.County as County \n\nfrom Region \n\njoin PostalCodeRegionCountyCity on\n(PostalCodeRegionCountyCity.RegionId=Region.RegionId) \n\njoin DealerGroupGeoCache on\n(DealerGroupGeoCache.RegionId=PostalCodeRegionCountyCity.RegionId) \n\n and\n(DealerGroupGeoCache.CountyId=PostalCodeRegionCountyCity.CountyId) \n\n and\n(DealerGroupGeoCache.CityId=PostalCodeRegionCountyCity.CityId) \n\njoin County on (PostalCodeRegionCountyCity.CountyId=County.CountyId) \n\nwhere (DealerGroupGeoCache.DealerGroupId=13) and\n(PostalCodeRegionCountyCity.RegionId=5)\n\n \n\nWith the following Explain:\n\n \n\n\"HashAggregate (cost=6743.96..6747.36 rows=34 width=11) (actual\ntime=854.407..854.425 rows=57 loops=1)\"\n\n\" -> Nested Loop (cost=0.00..6743.28 rows=34 width=11) (actual\ntime=0.062..762.698 rows=163491 loops=1)\"\n\n\" -> Nested Loop (cost=0.00..6679.19 rows=34 width=11) (actual\ntime=0.053..260.001 rows=163491 loops=1)\"\n\n\" -> Index Scan using region_i00 on region\n(cost=0.00..3.36 rows=1 width=5) (actual time=0.009..0.011 rows=1\nloops=1)\"\n\n\" Index Cond: (regionid = 5)\"\n\n\" -> Merge Join (cost=0.00..6672.43 rows=34 width=10)\n(actual time=0.040..189.654 rows=163491 loops=1)\"\n\n\" Merge Cond: ((postalcoderegioncountycity.countyid =\ndealergroupgeocache.countyid) AND (postalcoderegioncountycity.cityid =\ndealergroupgeocache.cityid))\"\n\n\" -> Index Scan using postalcoderegioncountycity_i06\non postalcoderegioncountycity (cost=0.00..716.05 rows=2616 width=10)\n(actual time=0.018..1.591 rows=2615 loops=1)\"\n\n\" Index Cond: (regionid = 5)\"\n\n\" -> Index Scan using dealergroupgeocache_i01 on\ndealergroupgeocache (cost=0.00..5719.56 rows=9055 width=10) (actual\ntime=0.015..87.689 rows=163491 loops=1)\"\n\n\" Index Cond:\n((dealergroupgeocache.dealergroupid = 13) AND\n(dealergroupgeocache.regionid = 5))\"\n\n\" -> Index Scan using county_i00 on county (cost=0.00..1.77\nrows=1 width=12) (actual time=0.002..0.002 rows=1 loops=163491)\"\n\n\" Index Cond: (county.countyid =\ndealergroupgeocache.countyid)\"\n\n\"Total runtime: 854.513 ms\"\n\n \n\nThe statistics have been recently updated and it does not change the bad\nestimates. \n\n \n\nThe DealerGroupGeoCache Table has 765392 Rows, And the query returns 57\nrows. 
\n\n \n\nI am not at all involved in the way the server is set up so being able\nto change the settings is not very likely unless it will make a huge\ndifference.\n\n \n\nIs there any way for me to speed up this query without changing the\nsettings?\n\n \n\nIf not what would you think the changes that would be needed?\n\n \n\nWe are currently running Postgres8.4 with the following settings.\n\n \n\nshared_buffers = 500MB                               #\nmin 128kB\n\neffective_cache_size = 1000MB\n\n \n\nmax_connections = 100\n\ntemp_buffers = 100MB\n\nwork_mem = 100MB\n\nmaintenance_work_mem = 500MB\n\nmax_files_per_process = 10000\n\nseq_page_cost = 1.0\n\nrandom_page_cost = 1.1\n\ncpu_tuple_cost = 0.1\n\ncpu_index_tuple_cost = 0.05\n\ncpu_operator_cost = 0.01\n\ndefault_statistics_target = 1000\n\nautovacuum_max_workers = 1\n\n \n\n#log_min_messages = DEBUG1\n\n#log_min_duration_statement = 1000\n\n#log_statement = all\n\n#log_temp_files = 128\n\n#log_lock_waits = on\n\n#log_line_prefix = '%m %u %d %h %p %i %c %l %s'\n\n#log_duration = on\n\n#debug_print_plan = on\n\n \n\nAny help is appreciated,\n\n \n\nPam\n",
"msg_date": "Tue, 26 Oct 2010 16:27:28 -0700",
"msg_from": "\"Ozer, Pam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Query- Bad Row Estimate"
}
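A hedged first step for an estimate this far off is to look at what the planner actually knows about the join and filter columns; the table and column names below are simply lifted from the posted query text, and the lowercase spellings assume the tables were created unquoted.

-- Illustrative look at the per-column statistics behind the estimate.
SELECT tablename, attname, n_distinct, null_frac, correlation
FROM pg_stats
WHERE tablename IN ('dealergroupgeocache', 'postalcoderegioncountycity')
  AND attname IN ('dealergroupid', 'regionid', 'countyid', 'cityid');

Strong cross-column correlation between regionid, countyid and cityid would explain why the planner's 34-row guess turns into 163491 actual rows: 8.4 keeps only single-column statistics, so it multiplies the selectivities of the three equality conditions as if they were independent.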
] |
[
{
"msg_contents": "CentOS 5.4 and 5.5\n\nQuery\nSELECT sum(usramt) as givensum,\n sum (case when usedid > 0 then usramt else 0 end) as usedsum\n FROM argrades\n WHERE userstatus in (5) and\n membid in (select distinct members.membid from members, cards\nwhere members.membid = cards.membid and members.membershipid = 40 and useraccount in\n( select useraccount from cards where membid in\n(select membid from members where commonid = 3594)))\n\n\nRun on 8.3.7. Result below was from second run. First run took 2 seconds.\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=205095.01..205095.02 rows=1 width=10) (actual time=1.430..1.431 rows=1 loops=1)\n -> Nested Loop (cost=948.04..204823.07 rows=54387 width=10) (actual time=0.153..1.016 rows=472 loops=1)\n -> Unique (cost=948.04..948.42 rows=76 width=4) (actual time=0.126..0.128 rows=2 loops=1)\n -> Sort (cost=948.04..948.23 rows=76 width=4) (actual time=0.126..0.126 rows=2 loops=1)\n Sort Key: public.members.membid\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=362.66..945.66 rows=76 width=4) (actual time=0.089..0.113 rows=2 loops=1)\n -> Nested Loop (cost=362.66..891.84 rows=76 width=4) (actual time=0.082..0.097 rows=2 loops=1)\n -> HashAggregate (cost=362.66..363.07 rows=41 width=29) (actual time=0.055..0.056 rows=2 loops=1)\n -> Nested Loop (cost=11.79..362.55 rows=41 width=29) (actual time=0.044..0.054 rows=2 loops=1)\n -> HashAggregate (cost=11.79..12.20 rows=41 width=4) (actual time=0.032..0.032 rows=2 loops=1)\n -> Index Scan using members_commonid on members (cost=0.00..11.69 rows=41 width=4) (actual time=0.010..0.013 rows=2 loops=1)\n Index Cond: (commonid = 3594)\n -> Index Scan using cards_membid on cards (cost=0.00..8.53 rows=1 width=33) (actual time=0.007..0.007 rows=1 loops=2)\n Index Cond: (public.cards.membid = public.members.membid)\n -> Index Scan using cards_useraccount on cards (cost=0.00..12.87 rows=2 width=33) (actual time=0.019..0.019 rows=1 loops=2)\n Index Cond: (public.cards.useraccount = public.cards.useraccount)\n -> Index Scan using members_pkey on members (cost=0.00..0.70 rows=1 width=4) (actual time=0.006..0.007 rows=1 loops=2)\n Index Cond: (public.members.membid = public.cards.membid)\n Filter: (public.members.membershipid = 40)\n -> Index Scan using argrades_membid on argrades (cost=0.00..2673.60 rows=716 width=14) (actual time=0.020..0.319 rows=236 loops=2)\n Index Cond: (argrades.membid = public.members.membid)\n Filter: (argrades.userstatus = 5)\n Total runtime: 1.576 ms\n\n\nQuery on 8.4.4 shown below. Unfortunately the RPMs used for that install had enable-cassert. 
\nSame run on identical machine with 8.4.5 was 120 seconds.\n\t\t\t\t\t\t\t QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=8726078.65..8726078.66 rows=1 width=10) (actual time=280704.491..280704.492 rows=1 loops=1)\n -> Hash Semi Join (cost=295565.48..8725460.91 rows=123547 width=10) (actual time=1995.191..280698.091 rows=472 loops=1)\n Hash Cond: (argrades.membid = public.members.membid)\n -> Seq Scan on argrades (cost=0.00..7705748.40 rows=197758125 width=14) (actual time=0.033..245808.983 rows=197504297 loops=1)\n Filter: (userstatus = 5)\n -> Hash (cost=265133.95..265133.95 rows=2434522 width=4) (actual time=1837.777..1837.777 rows=2 loops=1)\n -> HashAggregate (cost=216443.51..240788.73 rows=2434522 width=4) (actual time=1834.352..1837.760 rows=2 loops=1)\n -> Hash Join (cost=31151.39..210357.21 rows=2434522 width=4) (actual time=1620.620..1830.788 rows=2 loops=1)\n Hash Cond: (public.members.membid = public.cards.membid)\n -> Seq Scan on members (cost=0.00..121379.95 rows=2434956 width=4) (actual time=0.024..1085.143 rows=2435153 loops=1)\n Filter: (membershipid = 40)\n -> Hash (cost=719.87..719.87 rows=2434522 width=4) (actual time=241.921..241.921 rows=2 loops=1)\n -> Nested Loop (cost=293.80..719.87 rows=2434522 width=4) (actual time=228.867..241.909 rows=2 loops=1)\n -> HashAggregate (cost=293.80..294.13 rows=33 width=29) (actual time=169.551..169.553 rows=2 loops=1)\n -> Nested Loop (cost=11.33..293.71 rows=33 width=29) (actual time=145.940..169.543 rows=2 loops=1)\n -> HashAggregate (cost=11.33..11.66 rows=33 width=4) (actual time=64.730..64.732 rows=2 loops=1)\n -> Index Scan using members_commonid on members (cost=0.00..11.25 rows=33 width=4) (actual time = 64.688..64.703 rows=2 loops=1)\n Index Cond: (commonid = 3594)\n -> Index Scan using cards_membid on cards (cost=0.00..8.53 rows=1 width=33) (actual time= 52.400..52.401 rows=1 loops=2)\n Index Cond: (public.cards.membid = public.members.membid)\n -> Index Scan using cards_useraccount on cards (cost=0.00..12.88 rows=2 width=33) (actual time=36.172.. 36.173 rows=1 loops=2)\n Index Cond: (public.cards.useraccount = public.cards.useraccount)\n Total runtime: 280730.327 ms\n\nThe 8.4 machines have more memory than the 8.3.7 and are in general much \nbetter machines.\n8.4 settings\nShared_buffers 18GB\neffective_cache_size 18GB\n\nMachines have 72GB of RAM\nTried turning off sequential scan on the 8.4.5 and that did not help.\n\nAny ideas/suggestions?\n",
"msg_date": "Wed, 27 Oct 2010 08:41:23 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regression: 8.3 2 seconds -> 8.4 100+ seconds"
},
{
"msg_contents": "Hello\n\n>\n> The 8.4 machines have more memory than the 8.3.7 and are in general much\n> better machines.\n> 8.4 settings\n> Shared_buffers 18GB\n> effective_cache_size 18GB\n>\n> Machines have 72GB of RAM\n> Tried turning off sequential scan on the 8.4.5 and that did not help.\n>\n> Any ideas/suggestions?\n>\n\nincrease statistics on columns. The estimation is totally out.\n\nRegards\n\nPavel Stehule\n\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 27 Oct 2010 15:16:54 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds"
},
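For anyone wanting to apply that advice, a minimal sketch looks like the following; the target of 1000 and the choice of columns (the join and filter columns from the posted query) are illustrative, not values taken from the original servers.

-- Raise the per-column statistics target, then re-analyze so the planner
-- rebuilds its histograms and most-common-value lists from a larger sample.
ALTER TABLE argrades ALTER COLUMN membid SET STATISTICS 1000;
ALTER TABLE argrades ALTER COLUMN userstatus SET STATISTICS 1000;
ALTER TABLE members ALTER COLUMN membershipid SET STATISTICS 1000;
ANALYZE argrades;
ANALYZE members;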
{
"msg_contents": "Pavel Stehule writes:\n\n> increase statistics on columns. The estimation is totally out.\n\nStats when I ran the above was at 250.\nWill try 500.\n",
"msg_date": "Wed, 27 Oct 2010 10:53:22 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds"
},
{
"msg_contents": "W dniu 27.10.2010 16:53, Francisco Reyes pisze:\n> Pavel Stehule writes:\n> \n>> increase statistics on columns. The estimation is totally out.\n> \n> Stats when I ran the above was at 250.\n> Will try 500.\n\n\nPlease paste: ANALYZE VERBOSE argrades\nRegards\n\n",
"msg_date": "Wed, 27 Oct 2010 16:56:44 +0200",
"msg_from": "=?UTF-8?B?TWFyY2luIE1pcm9zxYJhdw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds"
},
{
"msg_contents": " [UTF-8]Marcin Miros�aw writes:\n\n> W dniu 27.10.2010 16:53, Francisco Reyes pisze:\n>> Pavel Stehule writes:\n>> \n>>> increase statistics on columns. The estimation is totally out.\n>> \n>> Stats when I ran the above was at 250.\n>> Will try 500.\n\nDid 1000.\n \n \n> Please paste: ANALYZE VERBOSE argrades\n\n\nanalyze verbose argrades;\nINFO: analyzing \"public.argrades\"\nINFO: \"argrades\": scanned 75000 of 5122661 pages, containing 3054081 live \nrows and 4152 dead rows; 75000 rows in sample, 208600288 estimated total rows\nANALYZE\n",
"msg_date": "Wed, 27 Oct 2010 11:14:05 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds"
},
{
"msg_contents": "Francisco Reyes <[email protected]> writes:\n> CentOS 5.4 and 5.5\n> Query\n> SELECT sum(usramt) as givensum,\n> sum (case when usedid > 0 then usramt else 0 end) as usedsum\n> FROM argrades\n> WHERE userstatus in (5) and\n> membid in (select distinct members.membid from members, cards\n> where members.membid = cards.membid and members.membershipid = 40 and useraccount in\n> ( select useraccount from cards where membid in\n> (select membid from members where commonid = 3594)))\n\nPlease provide a self-contained test case for this. In testing here,\n8.4 can still figure out that it ought to use an indexscan against the\nlarge table ... so there is some aspect of your tables that you're not\nshowing us.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 12:09:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds "
},
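For readers wondering what such a test case looks like, the skeleton below is only a guess at the minimum being asked for: schema-only copies of the tables from the query (names copied from the thread, everything else invented), filled with generated data until EXPLAIN reproduces the same misestimate.

-- Hypothetical self-contained test case skeleton; tune the row counts and value
-- distributions until the plan shows the same bad estimate as production.
CREATE TABLE members  (membid int PRIMARY KEY, membershipid int, commonid int);
CREATE TABLE cards    (membid int, useraccount text);
CREATE TABLE argrades (membid int, userstatus int, usedid int, usramt numeric);
INSERT INTO members
    SELECT g, 40, g % 100000 FROM generate_series(1, 2500000) g;
-- ... similar generate_series() INSERTs for cards and argrades ...
ANALYZE;
EXPLAIN ANALYZE
SELECT sum(usramt)
FROM argrades
WHERE userstatus = 5
  AND membid IN (SELECT membid FROM members WHERE commonid = 3594);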
{
"msg_contents": "Tom Lane writes:\n\n> Please provide a self-contained test case for this. In testing here,\n> 8.4 can still figure out that it ought to use an indexscan against the\n> large table ... so there is some aspect of your tables that you're not\n> showing us.\n\nIt is possible, and very likely, that it is an issue with the distribution \nof values.\n\nThe data is 40GB+ and the largest table is 200M+ rows.. \nSetting a standalone setup may be difficult, but what we could do is \nsomething like webex where you, or someone else we would trust, would \nconnect to a web conference and tell us what to type.\n\n",
"msg_date": "Wed, 27 Oct 2010 13:02:47 -0400",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 8:41 AM, Francisco Reyes <[email protected]> wrote:\n> -> Nested Loop (cost=293.80..719.87\n> rows=2434522 width=4) (actual time=228.867..241.909 rows=2 loops=1)\n> -> HashAggregate (cost=293.80..294.13\n> rows=33 width=29) (actual time=169.551..169.553 rows=2 loops=1)\n> -> Nested Loop\n> (cost=11.33..293.71 rows=33 width=29) (actual time=145.940..169.543 rows=2\n> loops=1)\n> -> HashAggregate\n> (cost=11.33..11.66 rows=33 width=4) (actual time=64.730..64.732 rows=2\n> loops=1)\n> -> Index Scan using\n> members_commonid on members (cost=0.00..11.25 rows=33 width=4) (actual time\n> = 64.688..64.703 rows=2 loops=1)\n> Index Cond:\n> (commonid = 3594)\n> -> Index Scan using\n> cards_membid on cards (cost=0.00..8.53 rows=1 width=33) (actual time=\n> 52.400..52.401 rows=1 loops=2)\n> Index Cond:\n> (public.cards.membid = public.members.membid)\n> -> Index Scan using cards_useraccount\n> on cards (cost=0.00..12.88 rows=2 width=33) (actual time=36.172.. 36.173\n> rows=1 loops=2)\n> Index Cond:\n> (public.cards.useraccount = public.cards.useraccount)\n\nThis part looks really strange to me. Here we have a nested loop\nwhose outer side is estimated to produce 33 rows and whose outer side\nis estimated to produce 2 rows. Given that, one would think that the\nestimate for the loop as a whole shouldn't be more than 33 * 2 = 66\nrows (or maybe a bit more if 33 is really 33.4999 rounded down, and 2\nis really 2.49999 rounded down). But the actual estimate is 5 orders\nof magnitude larger. How is that possible?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Sat, 6 Nov 2010 21:23:28 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds"
},
{
"msg_contents": "Robert Haas writes:\n\n> This part looks really strange to me. Here we have a nested loop\n> whose outer side is estimated to produce 33 rows and whose outer side\n> is estimated to produce 2 rows.\n\nWe have retained someone to help us troubleshoot the issue.\nOnce the issue has been resolved I will make sure to post our \nfindings.\n",
"msg_date": "Mon, 08 Nov 2010 04:13:35 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds"
},
{
"msg_contents": "Reviving a thread from two months ago...Francisco asked about a query \nthat was much slower in 8.3 at \nhttp://archives.postgresql.org/message-id/cone.1288183283.263738.5695.1000@shelca\n\nThere was some theorizing about a stats problem. I've now pulled \npg_stats data from the production server, and that alone doesn't seem to \nbe enough to account for what's hapenning. Robert asked a good question \nat \nhttp://archives.postgresql.org/message-id/[email protected] \nthat never was followed up on. That specificially is what I'm still \nconfused about, even after collecting a lot more data from this system, \nso continuing from there:\n\nRobert Haas wrote:\n> This part looks really strange to me. Here we have a nested loop\n> whose outer side is estimated to produce 33 rows and whose [inner] side\n> is estimated to produce 2 rows. Given that, one would think that the\n> estimate for the loop as a whole shouldn't be more than 33 * 2 = 66\n> rows (or maybe a bit more if 33 is really 33.4999 rounded down, and 2\n> is really 2.49999 rounded down). But the actual estimate is 5 orders\n> of magnitude larger. How is that possible? \n\nThat part stuck out to me too. I have no idea how that particular bit \nof Nested Loop ends up with that giant count for estimated output rows. \nI thought it would be about 33 * 2 too, so how is that turning into an \nestimate of 2434522 output rows? I believe this is the part that's \nexecuting \"cards.useraccount IN\".\n\nHere's a cut-down subset of the query that just does the suspicious part \nwithout any nodes above it, run on the 8.4 system having the problem. \nThis is basically the same plan that was seen in a lower-level node \nbefore, just the number of matching rows for the one loop has gone from \n33 to 34 due to more data being added to the table since the previous \nrun. I think it's easier to follow the logic it's trying to execute \nwhen simplified like this:\n\nSELECT\n members.membid\nFROM cards\nWHERE cards.useraccount IN\n (SELECT useraccount FROM cards WHERE membid IN\n (SELECT membid from members where commonid = 3594)\n )\n;\n\nNested Loop (cost=303.01..742.28 rows=2469120 width=4) (actual \ntime=0.066..0.082 rows=2 loops=1)\n -> HashAggregate (cost=303.01..303.35 rows=34 width=29) (actual \ntime=0.048..0.048 rows=2 loops=1)\n -> Nested Loop (cost=11.86..302.93 rows=34 width=29) (actual \ntime=0.034..0.045 rows=2 loops=1)\n -> HashAggregate (cost=11.86..12.20 rows=34 width=4) \n(actual time=0.023..0.024 rows=2 loops=1)\n -> Index Scan using members_commonid on members \n(cost=0.00..11.77 rows=34 width=4) (actual time=0.014..0.016 rows=2 loops=1)\n Index Cond: (commonid = 3594)\n -> Index Scan using cards_membid on cards \n(cost=0.00..8.54 rows=1 width=33) (actual time=0.009..0.010 rows=1 loops=2)\n Index Cond: (public.creditcards.membid = members.membid)\n -> Index Scan using cards_useraccount on cards (cost=0.00..12.88 \nrows=2 width=33) (actual time=0.015..0.016 rows=1 loops=2)\n Index Cond: (public.cards.useraccount = public.cards.useraccount)\n\nIt's possible to rewrite this whole thing using a join instead of IN, \nand sure enough doing so gives a better plan. That's how they got \naround this being a crippling issue. I'm still suspicious of what \ncaused such a performance regression from 8.3 though, where this query \nexecuted so much better.\n\nStepping back from that part of the query for a second, the main time \nrelated difference between the 8.3 and 8.4 plans involves how much of \nthe members table gets scanned. 
When 8.3 looks up a matching item in \nthat table, in order to implement this part of the larger query:\n\nWHERE members.membid = cards.membid AND members.membershipid = 40\n\nIt uses the membid index and gets a quick plan, followed by filtering on \nmembershipid:\n\n-> Index Scan using members_pkey on members (cost=0.00..0.70 rows=1 \nwidth=4) (actual time=0.006..0.007 rows=1 loops=2)\n Index Cond: (public.members.membid = public.cards.membid)\n Filter: (public.members.membershipid = 40)\n\n8.4 is scanning the whole table instead:\n\n-> Seq Scan on members (cost=0.00..121379.95 rows=2434956 width=4) \n(actual time=0.024..1085.143 rows=2435153 loops=1)\n Filter: (membershipid = 40)\n\nWhich gives you essentially every single member ID available, to then \nmatch against in a Hash join. The filter on membershipid isn't \nconsidered selective at all. I'm not sure why 8.4 isn't also \nrecognizing the value of being selective on the membid here, to reduce \nthe number of output rows that come out of that.\n\nIs the mis-estimation of the Nested Loop part causing this sequential \nscan to happen, because there are so many more potential values to join \nagainst in the estimate than in reality? If that's the case, it just \ncomes full circle back to how the node already discussed above is coming \nabout.\n\nWhile there are some statistics anonomlies due to data skew on the \nproduction system I don't see how they could explain this Nested Loop \nrow explosion. I can tell you in detail why some of the lower-level \ndata is misestimated by a single order of magnitude. For example, if \nyou focus on this inner part:\n\nSELECT useraccount FROM cards WHERE membid IN\n (SELECT membid from members where commonid = 3594));\n\n-> Nested Loop (cost=11.33..293.71 rows=33 width=29) (actual \ntime=145.940..169.543 rows=2 loops=1)\n -> HashAggregate (cost=11.33..11.66 rows=33 width=4) (actual \ntime=64.730..64.732 rows=2 loops=1)\n -> Index Scan using members_commonid on members \n(cost=0.00..11.25 rows=33 width=4) (actual time = 64.688..64.703 rows=2 \nloops=1)\n Index Cond: (commonid = 3594)\n -> Index Scan using cards_membid on cards (cost=0.00..8.53 \nrows=1 width=33) (actual time= 52.400..52.401 rows=1 loops=2)\n Index Cond: (public.cards.membid = public.members.membid)\n\nThe index scan on members_commonid here is estimating 33 rows when there \nare actually 2 that match. Looking at the table stats for this \nrelation, the distribution is a bit skewed because 99.7% of the rows are \nset to the sole named MCV: the value \"0\", that's used as a flag for no \ndata here instead of NULL (that's a standard legacy system import \ncompatibility issue). My guess is that the 250 points of histogram data \naren't quite enough to really capture the distribution of the non-zero \nvalues very well in the wake of that, so it's off by a factor of ~16. \nThat alone isn't enough of an error to switch any of the efficient index \nscans into other forms though. The actual runtime in this part of the \nplan isn't suffering that badly from this error, it's more that other \nplan decisions aren't being made well around it.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n",
"msg_date": "Tue, 28 Dec 2010 21:20:02 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regression: 8.3 2 seconds -> 8.4 100+ seconds"
}
] |
[
{
"msg_contents": "Hi Steve and other friends,\nSome information you would be interested in:\nI did some further tests using libpq in my code.\nI used a stored proc to insert 100 thousand rows in a table, it took 25 sec \n(almost same as time taken by Oracle PL/SQL and OCI interface).\nSame inserts through libpq take 70 seconds.\nI am inserting all records in a single transaction.\nSo, the problem seems to be optimization of usage of libpq in my code.\nI am attaching my code below.\nIs any optimization possible in this?\nDo prepared statements help in cutting down the insert time to half for this \nkind of inserts? One of the major problems with libpq usage is lack of good \ndocumentation and examples. \n\nI could not get any good example of prepared stmt usage anywhere.\n\n//----------------------------------------------------------------------------------------------------------------------------\n\n/*\n * testlibpq.c\n *\n * Test the C version of libpq, the PostgreSQL frontend library.\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <libpq-fe.h>\n#include \"iostream.h\"\n#include \"stdio.h\"\n#include <time.h>\n\nstatic void\nexit_nicely(PGconn *conn)\n{\n PQfinish(conn);\n exit(1);\n}\n\nint\nmain(int argc, char **argv)\n{\n const char *conninfo;\n PGconn *conn;\n PGresult *res;\n int nFields;\n int i=0;\n int howmany=0;\n if (argc<2)\n {\n cout<<\"please pass no of records as parameter\"<<endl;\n return -1;\n }\n sscanf(argv[1], \"%d\", &howmany);\n cout<<\"inserting \"<<howmany<<\" records\"<<endl;\n\n time_t mytime1 = time(0);\n cout<<\"starting at \"<<asctime(localtime(&mytime1))<<endl;\n\n\n\n /*\n * If the user supplies a parameter on the command line, use it as the\n * conninfo string; otherwise default to setting dbname=postgres and using\n * environment variables or defaults for all other connection parameters.\n */\n conninfo = \"host=x.y.z.a dbname=xyz port=5432 user=sd password=fg\" ;\n\n\n /* Make a connection to the database */\n conn = PQconnectdb(conninfo);\n\n /* Check to see that the backend connection was successfully made */\n if (PQstatus(conn) != CONNECTION_OK)\n {\n fprintf(stderr, \"Connection to database failed: %s\",\n PQerrorMessage(conn));\n exit_nicely(conn);\n }\n\n /*\n * Our test case here involves using a cursor, for which we must be inside\n * a transaction block. We could do the whole thing with a single\n * PQexec() of \"select * from pg_database\", but that's too trivial to make\n * a good example.\n */\n\n /* Start a transaction block */\n res = PQexec(conn, \"BEGIN\");\n if (PQresultStatus(res) != PGRES_COMMAND_OK)\n {\n fprintf(stderr, \"BEGIN command failed: %s\", PQerrorMessage(conn));\n PQclear(res);\n exit_nicely(conn);\n }\n\n /*\n * Should PQclear PGresult whenever it is no longer needed to avoid memory\n * leaks\n */\n PQclear(res);\n\n char query[1024]={0};\n\n for (; i<howmany;i++ )\n {\n\n sprintf (query, \"INSERT INTO aaaa(a, b, c, d, e, f, g, h, j, k, l, m, n, p) \nVALUES (67, 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', \n'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', 99999, 99999, %d, \n9999, \n'ABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRA')\",\n i);\n\n res = PQexec(conn, query);\n\n if (PQresultStatus(res) != PGRES_COMMAND_OK)\n {\n cout<<\"error at iteration \"<<i<<\":\"<<PQresultErrorMessage(res)<<endl;\n PQclear(res);\n break;\n }\n //PQclear(res);\n\n\n }\n\n /* close the portal ... 
we don't bother to check for errors ... */\n /*res = PQexec(conn, \"CLOSE myportal\");\n PQclear(res);*/\n\n /* end the transaction */\n res = PQexec(conn, \"END\");\n PQclear(res);\n\n cout<<i<<\" records inserted!\"<<endl;\n\n mytime1 = time(0);\n cout<<\"Finished at \"<<asctime(localtime(&mytime1))<<endl;\n\n\n /* close the connection to the database and cleanup */\n PQfinish(conn);\n\n return 0;\n}\n\n//----------------------------------------------------------------------------------------------------------------------------\n\n\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Divakar Singh <[email protected]>\nTo: Steve Singer <[email protected]>\nCc: [email protected]; [email protected]\nSent: Tue, October 26, 2010 12:22:31 AM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\n\n\nAnswers:\n\nHow are you using libpq?\n-Are you opening and closing the database connection between each insert?\n\n[Need to check, will come back on this]\n\n-Are you doing all of your inserts as one big transaction or are you doing a \ntransaction per insert\n\n[Answer: for C++ program, one insert per transaction in PG as well as Oracle. \nBut in stored proc, I think both use only 1 transaction for all inserts]\n\n-Are you using prepared statements for your inserts?\n\n[Need to check, will come back on this]\n\n-Are you using the COPY command to load your data or the INSERT command?\n\n[No]\n\n-Are you running your libpq program on the same server as postgresql?\n\n[Yes]\n\n-How is your libpq program connecting to postgresql, is it using ssl?\n\n[No]\n\nIf your run \"VACUUM VERBOSE tablename\" on the table, what does it say?\n\n[Need to check, will come back on this]\n\nYou also don't mention which version of postgresql your using.\n\n[Latest, 9.x]\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Steve Singer <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]; [email protected]\nSent: Tue, October 26, 2010 12:16:46 AM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\nOn 10-10-25 02:31 PM, Divakar Singh wrote:\n>\n> > My questions/scenarios are:\n> >\n> > 1. How does PostgreSQL perform when inserting data into an indexed\n> > (type: btree)\n> > table? Is it true that as you add the indexes on a table, the\n> > performance\n> > deteriorates significantly whereas Oracle does not show that much\n> > performance\n> > decrease. I have tried almost all postgreSQL performance tips\n> > available. I want\n> > to have very good \"insert\" performance (with indexes), \"select\"\n> > performance is\n> > not that important at this point of time.\n>\n> -- Did you test?\n>\n> Yes. the performance was comparable when using SQL procedure. However,\n> When I used libpq, PostgreSQL performed very bad. There was some\n> difference in environment also between these 2 tests, but I am assuming\n> libpq vs SQL was the real cause. 
Or it was something else?\n\nSo your saying that when you load the data with psql it loads fine, but \nwhen you load it using libpq it takes much longer?\n\nHow are you using libpq?\n-Are you opening and closing the database connection between each insert?\n-Are you doing all of your inserts as one big transaction or are you \ndoing a transaction per insert\n-Are you using prepared statements for your inserts?\n-Are you using the COPY command to load your data or the INSERT command?\n-Are you running your libpq program on the same server as postgresql?\n-How is your libpq program connecting to postgresql, is it using ssl?\n\n>\n> Some 10-12 columns ( like 2 timestamp, 4 int and 4 varchar), with 5\n> indexes on varchar and int fields including 1 implicit index coz of PK.\n\nIf your run \"VACUUM VERBOSE tablename\" on the table, what does it say?\n\nYou also don't mention which version of postgresql your using.\n\n>\n>\n> Joshua D. Drake\n>\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 27 Oct 2010 07:00:15 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On 10-10-27 10:00 AM, Divakar Singh wrote:\n> Hi Steve and other friends,\n> Some information you would be interested in:\n> I did some further tests using libpq in my code.\n> I used a stored proc to insert 100 thousand rows in a table, it took 25\n> sec (almost same as time taken by Oracle PL/SQL and OCI interface).\n> Same inserts through libpq take 70 seconds.\n> I am inserting all records in a single transaction.\n> So, the problem seems to be optimization of usage of libpq in my code.\n> I am attaching my code below.\n> Is any optimization possible in this?\n> Do prepared statements help in cutting down the insert time to half for\n> this kind of inserts? One of the major problems with libpq usage is lack\n> of good documentation and examples.\n> I could not get any good example of prepared stmt usage anywhere.\n\n\nYes using prepared statements should make this go faster, but your best \nbet might be to use the COPY command. I don't have a PQprepare example \nhandy though we probably should add one to the docs.\n\nThe copy command would be used similar to\n\nPQexec(conn,\"COPY TO aaaa (a,b,c,d,e,f,g,h,j,k,l,m,n,p) FROM STDIN WITH \n(DELIMITER ',') \");\nfor(; i < howmany;i++)\n{\nsprintf(query,\"67,'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec \n2010', 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', \n99999, 99999, %d, 9999, \n'ABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRA'\\n\", \n\ni);\nres = PQputCopyData(conn,query,strlen(query);\n\n}\nPQputCopyEnd(conn,NULL);\n\nI have not actually tried the above code snippet, it is just to give you \nthe general idea.\n\nYou call PQexec with the COPY command outside the loop then at each loop \niteration you call PQputCopyData with some of the data that gets passed \nto the server.\n\n\nYou can combine multiple lines on a single PQputCopyData call if you want.\n\nhttp://www.postgresql.org/docs/9.0/interactive/sql-copy.html\nhttp://www.postgresql.org/docs/9.0/interactive/libpq-copy.html\n\n\n\n\n\n\n>\n>\n> char query[1024]={0};\n>\n> for (; i<howmany;i++ )\n> {\n>\n> sprintf (query, \"INSERT INTO aaaa(a, b, c, d, e, f, g, h, j, k, l, m, n,\n> p) VALUES (67, 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA',\n> '1-Dec-2010', 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010',\n> 99999, 99999, %d, 9999,\n> 'ABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRA')\",\n> i);\n>\n> res = PQexec(conn, query);\n>\n> if (PQresultStatus(res) != PGRES_COMMAND_OK)\n> {\n> cout<<\"error at iteration \"<<i<<\":\"<<PQresultErrorMessage(res)<<endl;\n> PQclear(res);\n> break;\n> }\n> //PQclear(res);\n>\n>\n> }\n>\n> /* close the portal ... we don't bother to check for errors ... */\n> /*res = PQexec(conn, \"CLOSE myportal\");\n> PQclear(res);*/\n<snip>\n\n\n",
"msg_date": "Wed, 27 Oct 2010 11:42:09 -0400",
"msg_from": "Steve Singer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "On Wed, 2010-10-27 at 11:42 -0400, Steve Singer wrote:\r\n> I don't have a PQprepare example \r\n> handy though we probably should add one to the docs.\r\n> \r\nI would like to see this. A several minutes web search didn't turn up\r\nan example for me either.\r\n\r\nthanks,\r\nreid\r\n\n\n\n\n\nRe: [PERFORM] Postgres insert performance and storage requirement compared to Oracle\n\n\n\nOn Wed, 2010-10-27 at 11:42 -0400, Steve Singer wrote:\r\n> I don't have a PQprepare example\r\n> handy though we probably should add one to the docs.\r\n>\r\nI would like to see this. A several minutes web search didn't turn up\r\nan example for me either.\n\r\nthanks,\r\nreid",
"msg_date": "Wed, 27 Oct 2010 11:49:41 -0400",
"msg_from": "\"Reid Thompson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 10:00 AM, Divakar Singh <[email protected]> wrote:\n\n>\n> for (; i<howmany;i++ )\n> {\n>\n> sprintf (query, \"INSERT INTO aaaa(a, b, c, d, e, f, g, h, j, k, l, m,\n> n, p) VALUES (67, 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010',\n> 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', 99999, 99999,\n> %d, 9999,\n> 'ABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRA')\",\n> i);\n>\n> res = PQexec(conn, query);\n>\n> if (PQresultStatus(res) != PGRES_COMMAND_OK)\n> {\n> cout<<\"error at iteration\n> \"<<i<<\":\"<<PQresultErrorMessage(res)<<endl;\n> PQclear(res);\n> break;\n> }\n> //PQclear(res);\n>\n>\n> }\n>\n\nWhy is that PQclear(res) commented out? You're leaking result status for\nevery insert.\n\n\n-- \n- David T. Wilson\[email protected]\n\nOn Wed, Oct 27, 2010 at 10:00 AM, Divakar Singh <[email protected]> wrote:\n for (;\n i<howmany;i++ ) { sprintf (query, \"INSERT INTO aaaa(a, b, c, d, e, f, g, h, j, k, l, m, n, p) VALUES (67, 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', 99999, 99999, %d, 9999, 'ABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRA')\", i);\n res = PQexec(conn, query); if (PQresultStatus(res) != PGRES_COMMAND_OK) { cout<<\"error at iteration \"<<i<<\":\"<<PQresultErrorMessage(res)<<endl;\n PQclear(res); break; \n } //PQclear(res); }Why is that PQclear(res) commented out? You're leaking result status for every insert.-- \n- David T. [email protected]",
"msg_date": "Wed, 27 Oct 2010 11:50:46 -0400",
"msg_from": "David Wilson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "> for (; i<howmany;i++ )\n> {\n>\n> sprintf (query, \"INSERT INTO aaaa(a, b, c, d, e, f, g, h, j, k, l, m, n, p) \n>VALUES (67, 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', \n>'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', 99999, 99999, %d, \n>9999, \n>'ABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRA')\",\n> i);\n>\n> res = PQexec(conn, query);\n>\n> if (PQresultStatus(res) != PGRES_COMMAND_OK)\n> {\n> cout<<\"error at iteration \"<<i<<\":\"<<PQresultErrorMessage(res)<<endl;\n> PQclear(res);\n> break;\n> }\n> //PQclear(res);\n>\n>\n> }\n>\n\n>Why is that PQclear(res) commented out? You're leaking result status for every \n>insert.\n\n\nI did that purposely to see if cleanup part is contributing to any performance \nloss.\nRight now in my test, memory leak is not a concern for me but performance is. \nThough I understand that memory leak can also result in performance loss if \nleak is too much.\nHowever, in this case, commenting or uncommenting this statement did not cause \nany change in performance.\n\n\n \n\n for (;\n i<howmany;i++ ) { sprintf (query, \"INSERT INTO aaaa(a, b, c, d, e, f, g, h, j, k, l, m, n, p) VALUES (67, 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', 'ABRAKADABRA', 'ABRAKADABRA', 'ABRAKADABRA', '1-Dec-2010', 99999, 99999, %d, 9999, 'ABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRAABRAKADABRA')\", i);\n res = PQexec(conn, query); if (PQresultStatus(res) != PGRES_COMMAND_OK) { cout<<\"error at iteration \"<<i<<\":\"<<PQresultErrorMessage(res)<<endl;\n PQclear(res); break; \n } //PQclear(res); }>Why is that PQclear(res) commented out? You're leaking result status for every insert.I did that purposely to see if cleanup part is contributing to \nany performance loss.\nRight now in my test, memory leak is not a concern for me but performance is. \nThough I understand that memory leak can also result in performance loss\n if leak is too much.\nHowever, in this case, commenting or uncommenting this statement did not\n cause any change in performance.",
"msg_date": "Wed, 27 Oct 2010 09:12:24 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 08:00, Divakar Singh <[email protected]> wrote:\n> I am attaching my code below.\n> Is any optimization possible in this?\n> Do prepared statements help in cutting down the insert time to half for this\n> kind of inserts?\n\nIn half? not for me. Optimization possible? Sure, using the code you\npasted (time ./a.out 100000 <method>):\nPQexec: 41s\nPQexecPrepared: 36s\n1 insert statement: 7s\nCOPY: 1s\npsql: 256ms\n\nBasically the above echoes the suggestions of others, use COPY if you can.\n\nFind the source for the above attached. Its just a very quick\nmodified version of what you posted. [ disclaimer the additions I\nadded are almost certainly missing some required error checking... ]\n\n[ psql is fast because the insert is really dumb: insert into aaaa (a,\nb, c, d, e, f, g, h, j, k, l, m, n, p) select 1, 'asdf', 'asdf',\n'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf',\n'asdf', 'asdf', 'asdf' from generate_series(1, 100000); ]",
"msg_date": "Wed, 27 Oct 2010 13:45:06 -0600",
"msg_from": "Alex Hunsaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "So another question pops up: What method in PostgreSQL does the stored proc use \nwhen I issue multiple insert (for loop for 100 thousand records) in the stored \nproc?\nIt takes half the time compared to the consecutive \"insert\" using libpq.\nIn the backend, does it use COPY or prepared statement? or something else?\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Alex Hunsaker <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: Steve Singer <[email protected]>; [email protected]; \[email protected]\nSent: Thu, October 28, 2010 1:15:06 AM\nSubject: Re: [PERFORM] Postgres insert performance and storage requirement \ncompared to Oracle\n\nOn Wed, Oct 27, 2010 at 08:00, Divakar Singh <[email protected]> wrote:\n> I am attaching my code below.\n> Is any optimization possible in this?\n> Do prepared statements help in cutting down the insert time to half for this\n> kind of inserts?\n\nIn half? not for me. Optimization possible? Sure, using the code you\npasted (time ./a.out 100000 <method>):\nPQexec: 41s\nPQexecPrepared: 36s\n1 insert statement: 7s\nCOPY: 1s\npsql: 256ms\n\nBasically the above echoes the suggestions of others, use COPY if you can.\n\nFind the source for the above attached. Its just a very quick\nmodified version of what you posted. [ disclaimer the additions I\nadded are almost certainly missing some required error checking... ]\n\n[ psql is fast because the insert is really dumb: insert into aaaa (a,\nb, c, d, e, f, g, h, j, k, l, m, n, p) select 1, 'asdf', 'asdf',\n'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf',\n'asdf', 'asdf', 'asdf' from generate_series(1, 100000); ]\n\n\n\n \nSo another question pops up: What method in PostgreSQL does the stored proc use when I issue multiple insert (for loop for 100 thousand records) in the stored proc?It takes half the time compared to the consecutive \"insert\" using libpq.In the backend, does it use COPY or prepared statement? or something else? Best Regards,DivakarFrom: Alex Hunsaker <[email protected]>To: Divakar Singh <[email protected]>Cc: Steve\n Singer <[email protected]>; [email protected]; [email protected]: Thu, October 28, 2010 1:15:06 AMSubject: Re: [PERFORM] Postgres insert performance and storage requirement compared to OracleOn Wed, Oct 27, 2010 at 08:00, Divakar Singh <[email protected]> wrote:> I am attaching my code below.> Is any optimization possible in this?> Do prepared statements help in cutting down the insert time to half for this> kind of inserts?In half? not for me. Optimization possible? Sure, using the code youpasted (time ./a.out 100000 <method>):PQexec: 41sPQexecPrepared: 36s1 insert statement: 7sCOPY: 1spsql: 256msBasically the above echoes the suggestions of\n others, use COPY if you can.Find the source for the above attached. Its just a very quickmodified version of what you posted. [ disclaimer the additions Iadded are almost certainly missing some required error checking... ][ psql is fast because the insert is really dumb: insert into aaaa (a,b, c, d, e, f, g, h, j, k, l, m, n, p) select 1, 'asdf', 'asdf','asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf','asdf', 'asdf', 'asdf' from generate_series(1, 100000); ]",
"msg_date": "Wed, 27 Oct 2010 20:08:53 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres insert performance and storage requirement compared to\n\tOracle"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 21:08, Divakar Singh <[email protected]> wrote:\n> So another question pops up: What method in PostgreSQL does the stored proc\n> use when I issue multiple insert (for loop for 100 thousand records) in the\n> stored proc?\n\nIt uses prepared statements (unless you are using execute). There is\nalso the benefit of not being on the network. Assuming 0.3ms avg\nlatency, 1 packet per query and 100,000 queries-- thats 30s just from\nlatency! Granted this is just a silly estimate that happens to (more\nor less) fit my numbers...\n",
"msg_date": "Wed, 27 Oct 2010 22:23:44 -0600",
"msg_from": "Alex Hunsaker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
},
{
"msg_contents": "Hello\n\n2010/10/28 Divakar Singh <[email protected]>:\n> So another question pops up: What method in PostgreSQL does the stored proc\n> use when I issue multiple insert (for loop for 100 thousand records) in the\n> stored proc?\n\nnothing special - but it run as inprocess inside server backend. The\nare no data casting, there are no overhead from communication, there\nare no overhead from content switch.\n\nRegards\n\nPavel Stehule\n\n> It takes half the time compared to the consecutive \"insert\" using libpq.\n> In the backend, does it use COPY or prepared statement? or something else?\n>\n> Best Regards,\n> Divakar\n>\n> ________________________________\n> From: Alex Hunsaker <[email protected]>\n> To: Divakar Singh <[email protected]>\n> Cc: Steve Singer <[email protected]>; [email protected];\n> [email protected]\n> Sent: Thu, October 28, 2010 1:15:06 AM\n> Subject: Re: [PERFORM] Postgres insert performance and storage requirement\n> compared to Oracle\n>\n> On Wed, Oct 27, 2010 at 08:00, Divakar Singh <[email protected]> wrote:\n>> I am attaching my code below.\n>> Is any optimization possible in this?\n>> Do prepared statements help in cutting down the insert time to half for\n>> this\n>> kind of inserts?\n>\n> In half? not for me. Optimization possible? Sure, using the code you\n> pasted (time ./a.out 100000 <method>):\n> PQexec: 41s\n> PQexecPrepared: 36s\n> 1 insert statement: 7s\n> COPY: 1s\n> psql: 256ms\n>\n> Basically the above echoes the suggestions of others, use COPY if you can.\n>\n> Find the source for the above attached. Its just a very quick\n> modified version of what you posted. [ disclaimer the additions I\n> added are almost certainly missing some required error checking... ]\n>\n> [ psql is fast because the insert is really dumb: insert into aaaa (a,\n> b, c, d, e, f, g, h, j, k, l, m, n, p) select 1, 'asdf', 'asdf',\n> 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf', 'asdf',\n> 'asdf', 'asdf', 'asdf' from generate_series(1, 100000); ]\n>\n>\n",
"msg_date": "Thu, 28 Oct 2010 06:26:08 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres insert performance and storage requirement\n\tcompared to Oracle"
}
] |
[
{
"msg_contents": "I have an app which imports a lot of data into a temporary table, does\na number of updates, creates some indexes, and then does a bunch more\nupdates and deletes, and then eventually inserts some of the columns\nfrom the transformed table into a permanent table.\n\nThings were not progressing in a performant manner - specifically,\nafter creating an index on a column (INTEGER) that is unique, I\nexpected statements like this to use an index scan:\n\nupdate foo set colA = 'some value' where indexed_colB = 'some other value'\n\nbut according to the auto_explain module (yay!) the query plan\n(always) results in a sequential scan, despite only 1 row getting the\nupdate.\n\nIn summary, the order goes like this:\n\nBEGIN;\nCREATE TEMPORARY TABLE foo ...;\ncopy into foo ....\nUPDATE foo .... -- 4 or 5 times, updating perhaps 1/3 of the table all told\nCREATE INDEX ... -- twice - one index each for two columns\nANALYZE foo; -- didn't seem to help\nUPDATE foo SET ... WHERE indexed_column_B = 'some value'; -- seq scan?\nOut of 10 million rows only one is updated!\n...\n\nWhat might be going on here?\n\n-- \nJon\n",
"msg_date": "Wed, 27 Oct 2010 12:29:44 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "temporary tables, indexes, and query plans"
},
{
"msg_contents": "On 10/27/2010 1:29 PM, Jon Nelson wrote:\n> I have an app which imports a lot of data into a temporary table, does\n> a number of updates, creates some indexes, and then does a bunch more\n> updates and deletes, and then eventually inserts some of the columns\n> from the transformed table into a permanent table.\n>\n> Things were not progressing in a performant manner - specifically,\n> after creating an index on a column (INTEGER) that is unique, I\n> expected statements like this to use an index scan:\n>\n> update foo set colA = 'some value' where indexed_colB = 'some other value'\n>\n> but according to the auto_explain module (yay!) the query plan\n> (always) results in a sequential scan, despite only 1 row getting the\n> update.\n>\n> In summary, the order goes like this:\n>\n> BEGIN;\n> CREATE TEMPORARY TABLE foo ...;\n> copy into foo ....\n> UPDATE foo .... -- 4 or 5 times, updating perhaps 1/3 of the table all told\n> CREATE INDEX ... -- twice - one index each for two columns\n> ANALYZE foo; -- didn't seem to help\n> UPDATE foo SET ... WHERE indexed_column_B = 'some value'; -- seq scan?\n> Out of 10 million rows only one is updated!\n> ...\n>\n> What might be going on here?\n>\nHow big is your default statistics target? The default is rather small, \nit doesn't produce very good or usable histograms.\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 27 Oct 2010 13:44:24 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 12:44 PM, Mladen Gogala\n<[email protected]> wrote:\n> On 10/27/2010 1:29 PM, Jon Nelson wrote:\n> How big is your default statistics target? The default is rather small, it\n> doesn't produce very good or usable histograms.\n\nCurrently, default_statistics_target is 50.\n\nI note that if I create a indexes earlier in the process (before the\ncopy) then they are used.\nI'm not trying creating them after the first UPDATE (which updates\n2.8million of the 10million rows).\nThe subsequent UPDATE statements update very few (3-4 thousand for 2\nof them, less than a few dozen for the others) and the ones that use\nthe index only update *1* row.\n\nI'll also try setting a higher default_statistics_target and let you know!\n\n-- \nJon\n",
"msg_date": "Wed, 27 Oct 2010 12:59:40 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 12:59 PM, Jon Nelson <[email protected]> wrote:\n> On Wed, Oct 27, 2010 at 12:44 PM, Mladen Gogala\n> <[email protected]> wrote:\n>> On 10/27/2010 1:29 PM, Jon Nelson wrote:\n>> How big is your default statistics target? The default is rather small, it\n>> doesn't produce very good or usable histograms.\n>\n> Currently, default_statistics_target is 50.\n\nI set it to 500 and restarted postgres. No change in (most of) the query plans!\nThe update statement that updates 7 rows? No change.\nThe one that updates 242 rows? No change.\n3714? No change.\nI killed the software before it got to the 1-row-only statements.\n\n> I'm not trying creating them after the first UPDATE (which updates\n> 2.8million of the 10million rows).\n\nI mean to say that I (tried) creating the indexes after the first\nUPDATE statement. This did not improve things.\nI am now trying to see how creating the indexes before between the\nCOPY and the UPDATE performs.\nI didn't really want to do this because I know that the first UPDATE\nstatement touches about 1/3 of the table, and this would bloat the\nindex and slow the UPDATE (which should be a full table scan anyway).\nIt's every subsequent UPDATE that touches (at most) 4000 rows (out of\n10 million) that I'm interested in.\n\n-- \nJon\n",
"msg_date": "Wed, 27 Oct 2010 13:23:05 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:\r\n> set it to 500 and restarted postgres.\r\n\r\ndid you re-analyze?\r\n\n\n\n\n\nRe: [PERFORM] temporary tables, indexes, and query plans\n\n\n\nOn Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:\r\n> set it to 500 and restarted postgres.\n\r\ndid you re-analyze?",
"msg_date": "Wed, 27 Oct 2010 14:32:58 -0400",
"msg_from": "\"Reid Thompson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson <[email protected]> wrote:\n> On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:\n>> set it to 500 and restarted postgres.\n>\n> did you re-analyze?\n\nNot recently. I tried that, initially, and there was no improvement.\nI'll try it again now that I've set the stats to 500.\nThe most recent experiment shows me that, unless I create whatever\nindexes I would like to see used *before* the large (first) update,\nthen they just don't get used. At all. Why would I need to ANALYZE the\ntable immediately following index creation? Isn't that part of the\nindex creation process?\n\nCurrently executing is a test where I place an \"ANALYZE foo\" after the\nCOPY, first UPDATE, and first index, but before the other (much\nsmaller) updates.\n\n..\n\nNope. The ANALYZE made no difference. This is what I just ran:\n\nBEGIN;\nCREATE TEMPORARY TABLE foo\nCOPY ...\nUPDATE ... -- 1/3 of table, approx\nCREATE INDEX foo_rowB_idx on foo (rowB);\nANALYZE ...\n-- queries from here to 'killed' use WHERE rowB = 'someval'\nUPDATE ... -- 7 rows. seq scan!\nUPDATE ... -- 242 rows, seq scan!\nUPDATE .. -- 3700 rows, seq scan!\nUPDATE .. -- 3100 rows, seq scan!\nkilled.\n\n\n-- \nJon\n",
"msg_date": "Wed, 27 Oct 2010 13:52:05 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "If you alter the default_statistics_target or any of the specific\nstatistics targets ( via ALTER TABLE SET STATISTICS ) , the change\nwill not have an effect until an analyze is performed.\n\nThis is implied by\nhttp://www.postgresql.org/docs/9.0/static/planner-stats.html and\nhttp://www.postgresql.org/docs/9.0/static/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET,\nbut it might save questions like this if it were much more explicit.\n\nOn Wed, Oct 27, 2010 at 2:52 PM, Jon Nelson <[email protected]> wrote:\n> On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson <[email protected]> wrote:\n>> On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:\n>>> set it to 500 and restarted postgres.\n>>\n>> did you re-analyze?\n>\n> Not recently. I tried that, initially, and there was no improvement.\n> I'll try it again now that I've set the stats to 500.\n> The most recent experiment shows me that, unless I create whatever\n> indexes I would like to see used *before* the large (first) update,\n> then they just don't get used. At all. Why would I need to ANALYZE the\n> table immediately following index creation? Isn't that part of the\n> index creation process?\n>\n> Currently executing is a test where I place an \"ANALYZE foo\" after the\n> COPY, first UPDATE, and first index, but before the other (much\n> smaller) updates.\n>\n> ..\n>\n> Nope. The ANALYZE made no difference. This is what I just ran:\n>\n> BEGIN;\n> CREATE TEMPORARY TABLE foo\n> COPY ...\n> UPDATE ... -- 1/3 of table, approx\n> CREATE INDEX foo_rowB_idx on foo (rowB);\n> ANALYZE ...\n> -- queries from here to 'killed' use WHERE rowB = 'someval'\n> UPDATE ... -- 7 rows. seq scan!\n> UPDATE ... -- 242 rows, seq scan!\n> UPDATE .. -- 3700 rows, seq scan!\n> UPDATE .. -- 3100 rows, seq scan!\n> killed.\n>\n>\n> --\n> Jon\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 27 Oct 2010 15:23:31 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 1:52 PM, Jon Nelson <[email protected]> wrote:\n> On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson <[email protected]> wrote:\n>> On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:\n>>> set it to 500 and restarted postgres.\n>>\n>> did you re-analyze?\n>\n> Not recently. I tried that, initially, and there was no improvement.\n> I'll try it again now that I've set the stats to 500.\n> The most recent experiment shows me that, unless I create whatever\n> indexes I would like to see used *before* the large (first) update,\n> then they just don't get used. At all. Why would I need to ANALYZE the\n> table immediately following index creation? Isn't that part of the\n> index creation process?\n>\n> Currently executing is a test where I place an \"ANALYZE foo\" after the\n> COPY, first UPDATE, and first index, but before the other (much\n> smaller) updates.\n>\n> ..\n>\n> Nope. The ANALYZE made no difference. This is what I just ran:\n>\n> BEGIN;\n> CREATE TEMPORARY TABLE foo\n> COPY ...\n> UPDATE ... -- 1/3 of table, approx\n> CREATE INDEX foo_rowB_idx on foo (rowB);\n> ANALYZE ...\n> -- queries from here to 'killed' use WHERE rowB = 'someval'\n> UPDATE ... -- 7 rows. seq scan!\n> UPDATE ... -- 242 rows, seq scan!\n> UPDATE .. -- 3700 rows, seq scan!\n> UPDATE .. -- 3100 rows, seq scan!\n> killed.\n>\n\nEven generating the index beforehand (sans ANALYZE) was no help.\nIf I generate *all* of the indexes ahead of time, before the COPY,\nthat's the only time index usage jives with my expectations.\n\nHere is an example of the output from auto analyze (NOTE: the WHERE\nclause in this statement specifies a single value in the same column\nthat has a UNIQUE index on it):\n\nSeq Scan on foo_table (cost=0.00..289897.04 rows=37589 width=486)\n\nand yet the actual row count is exactly 1.\n\nIf I change the order so that the index creation *and* analyze happen\n*before* the first (large) update, then things appear to proceed\nnormally and the indexes are used when expected, although in some\ncases the stats are still way off:\n\n Bitmap Heap Scan on foo_table (cost=40.96..7420.39 rows=1999 width=158)\n\nand yet there are only 7 rows that match. The others seem closer (only\noff by 2x rather than 250x).\n\nIt seems as though creating an index is not enough. It seems as though\nANALYZE after index creation is not enough, either. I am theorizing\nthat I have to touch (or just scan?) some percentage of the table in\norder for the index to be used? If that's true, then what is ANALYZE\nfor? I've got the stats cranked up to 500. Should I try 1000?\n\n\nJason Pitts:\nRE: changing default_statistics_target (or via ALTER TABLE SET STATS)\nnot taking effect until ANALYZE is performed.\n\nI did already know that, but it's probably good to put into this\nthread. However, you'll note that this is a temporary table created at\nthe beginning of a transaction.\n\n\n-- \nJon\n",
"msg_date": "Wed, 27 Oct 2010 14:29:08 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> The most recent experiment shows me that, unless I create whatever\n> indexes I would like to see used *before* the large (first) update,\n> then they just don't get used. At all.\n\nYou're making a whole lot of assertions here that don't square with\nusual experience. I think there is some detail about what you're\ndoing that affects the outcome, but since you haven't shown a concrete\nexample, it's pretty hard to guess what the critical detail is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 15:43:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans "
},
{
"msg_contents": "> Jason Pitts:\n> RE: changing default_statistics_target (or via ALTER TABLE SET STATS)\n> not taking effect until ANALYZE is performed.\n>\n> I did already know that, but it's probably good to put into this\n> thread. However, you'll note that this is a temporary table created at\n> the beginning of a transaction.\n>\n\n( giving up on replying to the group; the list will not accept my posts )\nI've been following the thread so long I had forgotten that. I rather\nstrongly doubt that analyze can reach that table's content inside that\ntransaction, if you are creating, populating, and querying it all\nwithin that single transaction.\n",
"msg_date": "Wed, 27 Oct 2010 15:44:52 -0400",
"msg_from": "Justin Pitts <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 2:43 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> The most recent experiment shows me that, unless I create whatever\n>> indexes I would like to see used *before* the large (first) update,\n>> then they just don't get used. At all.\n>\n> You're making a whole lot of assertions here that don't square with\n> usual experience. I think there is some detail about what you're\n> doing that affects the outcome, but since you haven't shown a concrete\n> example, it's pretty hard to guess what the critical detail is.\n\nFirst, let me supply all of the changed (from the default) params:\n\ndefault_statistics_target = 500\nmaintenance_work_mem = 240MB\nwork_mem = 256MB\neffective_cache_size = 1GB\ncheckpoint_segments = 128\nshared_buffers = 1GB\nmax_connections = 30\nwal_buffers = 64MB\nshared_preload_libraries = 'auto_explain'\n\nThe machine is a laptop with 4GB of RAM running my desktop. Kernel is\n2.6.36, filesystem is ext4 (for data) and ext2 (for WAL logs). The\ndisk is a really real disk, not an SSD.\n\nThe sequence goes exactly like this:\n\nBEGIN;\nCREATE TEMPORARY TABLE (20 columns, mostly text, a few int).\nCOPY (approx 8 million rows, ~900 MB)[1]\nUPDATE (2.8 million of the rows)\nUPDATE (7 rows)\nUPDATE (250 rows)\nUPDATE (3500 rows)\nUPDATE (3100 rows)\na bunch of UPDATE (1 row)\n...\n\nExperimentally, I noticed that performance was not especially great.\nSo, I added some indexes (three indexes on one column each). One index\nis UNIQUE.\nThe first UPDATE can't use any of the indexes. The rest should be able to.\n\nIn my experiments, I found that:\n\nIf I place the index creation *before* the copy, the indexes are used.\nIf I place the index creation *after* the copy but before first\nUPDATE, the indexes are used.\nIf I place the index creation at any point after the first UPDATE,\nregardless of whether ANALYZE is run, the indexes are not used (at\nleast, according to auto_analyze).\n\nDoes that help?\n\n\n[1] I've been saying 10 million. It's really more like 8 million.\n-- \nJon\n",
"msg_date": "Wed, 27 Oct 2010 15:29:13 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> The sequence goes exactly like this:\n\n> BEGIN;\n> CREATE TEMPORARY TABLE (20 columns, mostly text, a few int).\n> COPY (approx 8 million rows, ~900 MB)[1]\n> UPDATE (2.8 million of the rows)\n> UPDATE (7 rows)\n> UPDATE (250 rows)\n> UPDATE (3500 rows)\n> UPDATE (3100 rows)\n> a bunch of UPDATE (1 row)\n> ...\n\n> Experimentally, I noticed that performance was not especially great.\n> So, I added some indexes (three indexes on one column each). One index\n> is UNIQUE.\n> The first UPDATE can't use any of the indexes. The rest should be able to.\n\nPlease ... there is *nothing* exact about that. It's not even clear\nwhat the datatypes of the indexed columns are, let alone what their\nstatistics are, or whether there's something specific about how you're\ndeclaring the table or the indexes.\n\nHere's an exact test case, which is something I just tried to see if\nI could easily reproduce your results:\n\nbegin;\ncreate temp table foo (f1 int, f2 text, f3 text);\ninsert into foo select x, 'xyzzy', x::text from generate_series(1,1000000) x;\nupdate foo set f2 = 'bogus' where f1 < 500000;\nexplain update foo set f2 = 'zzy' where f1 = 42;\ncreate index fooi on foo(f1);\nexplain update foo set f2 = 'zzy' where f1 = 42;\nanalyze foo;\nexplain update foo set f2 = 'zzy' where f1 = 42;\nrollback;\n\nI get a seqscan, a bitmap index scan, then a plain indexscan, which\nis about what I'd expect. Clearly there's something you're doing\nthat deviates from this, but you are failing to provide the detail\nnecessary to figure out what the critical difference is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 17:45:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans "
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 4:45 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> The sequence goes exactly like this:\n>\n>> BEGIN;\n>> CREATE TEMPORARY TABLE (20 columns, mostly text, a few int).\n>> COPY (approx 8 million rows, ~900 MB)[1]\n>> UPDATE (2.8 million of the rows)\n>> UPDATE (7 rows)\n>> UPDATE (250 rows)\n>> UPDATE (3500 rows)\n>> UPDATE (3100 rows)\n>> a bunch of UPDATE (1 row)\n>> ...\n>\n>> Experimentally, I noticed that performance was not especially great.\n>> So, I added some indexes (three indexes on one column each). One index\n>> is UNIQUE.\n>> The first UPDATE can't use any of the indexes. The rest should be able to.\n>\n> Please ... there is *nothing* exact about that. It's not even clear\n> what the datatypes of the indexed columns are, let alone what their\n> statistics are, or whether there's something specific about how you're\n> declaring the table or the indexes.\n\nThe indexed data types are:\n- an INT (this is a unique ID, and it is declared so)\n- two TEXT fields. The initial value of one of the text fields is\nNULL, and it is updated to be not longer than 10 characters long. The\nother text field is not more than 4 characters long. My guesstimate as\nto the distribution of values in this column is not more than 2 dozen.\n\nI am not doing anything when I define the table except using TEMPORARY.\nThe indexes are as bog-standard as one can get. No where clause, no\nfunctions, nothing special at all.\n\nI'd like to zoom out a little bit and, instead of focusing on the\nspecifics, ask more general questions:\n\n- does the table being temporary effect anything? Another lister\nemailed me and wondered if ANALYZE on a temporary table might behave\ndifferently.\n- is there some way for me to determine /why/ the planner chooses a\nsequential scan over other options? I'm already using auto explain.\n- in the general case, are indexes totally ready to use after creation\nor is an analyze step necessary?\n- do hint bits come into play here at all?\n\n\n\n-- \nJon\n",
"msg_date": "Wed, 27 Oct 2010 17:02:43 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> I'd like to zoom out a little bit and, instead of focusing on the\n> specifics, ask more general questions:\n\n> - does the table being temporary effect anything? Another lister\n> emailed me and wondered if ANALYZE on a temporary table might behave\n> differently.\n\nWell, the autovacuum daemon can't do anything with temp tables, so\nyou're reliant on doing a manual ANALYZE if you want the planner to\nhave stats. Otherwise it should be the same.\n\n> - is there some way for me to determine /why/ the planner chooses a\n> sequential scan over other options?\n\nIt thinks it's faster, or there is some reason why it *can't* use the\nindex, like a datatype mismatch. You could tell which by trying \"set\nenable_seqscan = off\" to see if that will make it change to another\nplan; if so, the estimated costs of that plan versus the original\nseqscan would be valuable information.\n\n> - in the general case, are indexes totally ready to use after creation\n> or is an analyze step necessary?\n\nThey are unless you said CREATE INDEX CONCURRENTLY, which doesn't seem\nlike it's relevant here; but since you keep on not showing us your code,\nwho knows?\n\n> - do hint bits come into play here at all?\n\nNo.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Oct 2010 18:36:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans "
},
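A minimal sketch of the diagnostic suggested above, assuming a hypothetical table foo_table with an index on column b; the idea is to compare the estimated costs with and without sequential scans allowed:

EXPLAIN UPDATE foo_table SET c = 'x' WHERE b = 'A';
SET enable_seqscan = off;   -- planner then avoids seqscans unless there is no alternative
EXPLAIN UPDATE foo_table SET c = 'x' WHERE b = 'A';
RESET enable_seqscan;       -- restore the session default

If the second plan switches to an index scan, the cost difference between the two plans is the interesting number; if it stays a seqscan, the planner thinks it cannot use the index at all.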
{
"msg_contents": "On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> I'd like to zoom out a little bit and, instead of focusing on the\n>> specifics, ask more general questions:\n..\n>> - is there some way for me to determine /why/ the planner chooses a\n>> sequential scan over other options?\n>\n> It thinks it's faster, or there is some reason why it *can't* use the\n> index, like a datatype mismatch. You could tell which by trying \"set\n> enable_seqscan = off\" to see if that will make it change to another\n> plan; if so, the estimated costs of that plan versus the original\n> seqscan would be valuable information.\n\nWhen I place the index creation and ANALYZE right after the bulk\nupdate, follow it with 'set enable_seqscan = false', the next query\n(also an UPDATE - should be about 7 rows) results in this plan:\n\nSeq Scan on foo_table (cost=10000000000.00..10000004998.00 rows=24 width=236)\n\nThe subsequent queries all have the same first-row cost and similar\nlast-row costs, and of course the rows value varies some as well. All\nof them, even the queries which update exactly 1 row, have similar\ncost:\n\nSeq Scan on foo_table (cost=10000000000.00..10000289981.17 rows=1 width=158)\n\nI cranked the logging up a bit, but I don't really know what to fiddle\nthere, and while I got a lot of output, I didn't see much in the way\nof cost comparisons.\n\n-- \nJon\n",
"msg_date": "Thu, 28 Oct 2010 09:08:06 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane <[email protected]> wrote:\n>> It thinks it's faster, or there is some reason why it *can't* use the\n>> index, like a datatype mismatch. You could tell which by trying \"set\n>> enable_seqscan = off\" to see if that will make it change to another\n>> plan; if so, the estimated costs of that plan versus the original\n>> seqscan would be valuable information.\n\n> When I place the index creation and ANALYZE right after the bulk\n> update, follow it with 'set enable_seqscan = false', the next query\n> (also an UPDATE - should be about 7 rows) results in this plan:\n\n> Seq Scan on foo_table (cost=10000000000.00..10000004998.00 rows=24 width=236)\n\nOK, so it thinks it can't use the index. (The \"cost=10000000000\" bit is\nthe effect of enable_seqscan = off: it's not possible to just never use\nseqscans, but we assign an artificially high cost to discourage the\nplanner from selecting them if there's any other alternative.)\n\nSo we're back to wondering why it can't use the index. I will say\nonce more that we could probably figure this out quickly if you'd\npost an exact example instead of handwaving.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Oct 2010 10:23:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans "
},
{
"msg_contents": "On Wed, Oct 27, 2010 at 3:44 PM, Justin Pitts <[email protected]> wrote:\n>> Jason Pitts:\n>> RE: changing default_statistics_target (or via ALTER TABLE SET STATS)\n>> not taking effect until ANALYZE is performed.\n>>\n>> I did already know that, but it's probably good to put into this\n>> thread. However, you'll note that this is a temporary table created at\n>> the beginning of a transaction.\n>>\n>\n> ( giving up on replying to the group; the list will not accept my posts )\n\nEvidently it's accepting some of them...\n\n> I've been following the thread so long I had forgotten that. I rather\n> strongly doubt that analyze can reach that table's content inside that\n> transaction, if you are creating, populating, and querying it all\n> within that single transaction.\n\nActually I don't think that's a problem, at least for a manual ANALYZE.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 29 Oct 2010 14:16:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
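A quick self-contained way to convince yourself of this (hypothetical table name, any scratch database): a manual ANALYZE issued inside the same transaction does update the statistics the planner sees for a temporary table.

BEGIN;
CREATE TEMPORARY TABLE t_check AS
    SELECT g AS i FROM generate_series(1, 10000) g;
ANALYZE t_check;                                   -- manual ANALYZE, same transaction
SELECT reltuples FROM pg_class
 WHERE oid = 't_check'::regclass;                  -- reflects the freshly analyzed row count
ROLLBACK;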
{
"msg_contents": "On Thu, Oct 28, 2010 at 9:23 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane <[email protected]> wrote:\n>>> It thinks it's faster, or there is some reason why it *can't* use the\n>>> index, like a datatype mismatch. You could tell which by trying \"set\n>>> enable_seqscan = off\" to see if that will make it change to another\n>>> plan; if so, the estimated costs of that plan versus the original\n>>> seqscan would be valuable information.\n>\n>> When I place the index creation and ANALYZE right after the bulk\n>> update, follow it with 'set enable_seqscan = false', the next query\n>> (also an UPDATE - should be about 7 rows) results in this plan:\n>\n>> Seq Scan on foo_table (cost=10000000000.00..10000004998.00 rows=24 width=236)\n>\n> OK, so it thinks it can't use the index. (The \"cost=10000000000\" bit is\n> the effect of enable_seqscan = off: it's not possible to just never use\n> seqscans, but we assign an artificially high cost to discourage the\n> planner from selecting them if there's any other alternative.)\n>\n> So we're back to wondering why it can't use the index. I will say\n> once more that we could probably figure this out quickly if you'd\n> post an exact example instead of handwaving.\n\nOK. This is a highly distilled example that shows the behavior.\nThe ANALYZE doesn't appear to change anything, nor the SET STATISTICS\n(followed by ANALYZE), nor disabling seqential scans. Re-writing the\ntable with ALTER TABLE does, though.\nIf the initial UPDATE (the one before the index creation) is commented\nout, then the subsequent updates don't use sequential scans.\n\n\\timing off\nBEGIN;\nCREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,\n''::text AS c from generate_series(1,500) AS x;\nUPDATE foo SET c = 'foo' WHERE b = 'A' ;\nCREATE INDEX foo_b_idx on foo (b);\n\n-- let's see what it looks like\nEXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';\n\n-- does forcing a seqscan off help?\nset enable_seqscan = false;\nEXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';\n\n-- what about analyze?\nANALYZE VERBOSE foo;\nEXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';\n\n-- what about statistics?\nALTER TABLE foo ALTER COLUMN b SET STATISTICS 10000;\nANALYZE VERBOSE foo;\nEXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';\n\n-- let's re-write the table\nALTER TABLE foo ALTER COLUMN a TYPE int;\nEXPLAIN UPDATE foo SET c='bar' WHERE b = 'C';\n\nROLLBACK;\n\n-- \nJon\n",
"msg_date": "Fri, 12 Nov 2010 21:31:55 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> OK. This is a highly distilled example that shows the behavior.\n\n> BEGIN;\n> CREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,\n> ''::text AS c from generate_series(1,500) AS x;\n> UPDATE foo SET c = 'foo' WHERE b = 'A' ;\n> CREATE INDEX foo_b_idx on foo (b);\n> [ and the rest of the transaction can't use that index ]\n\nOK, this is an artifact of the \"HOT update\" optimization. Before\ncreating the index, you did updates on the table that would have been\nexecuted differently if the index had existed. When the index does get\ncreated, its entries for those updates are incomplete, so the index\ncan't be used in transactions that could in principle see the unmodified\nrows.\n\nYou could avoid this effect either by creating the index before you do\nany updates on the table, or by not wrapping the entire process into a\nsingle transaction.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Nov 2010 10:41:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans "
},
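Applied to the distilled example above, the first workaround is just a reordering; a sketch:

BEGIN;
CREATE TEMPORARY TABLE foo AS SELECT x AS a, chr(x % 75 + 32) AS b,
''::text AS c FROM generate_series(1,500) AS x;
CREATE INDEX foo_b_idx ON foo (b);        -- index exists before any UPDATE, so no broken HOT chains
UPDATE foo SET c = 'foo' WHERE b = 'A';
ANALYZE foo;
EXPLAIN UPDATE foo SET c = 'bar' WHERE b = 'C';   -- foo_b_idx is now usable in this transaction
ROLLBACK;

(On a table this small the planner may still prefer a seqscan on pure cost grounds; the point is only that the index is no longer disqualified.)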
{
"msg_contents": "On Sat, Nov 13, 2010 at 9:41 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> OK. This is a highly distilled example that shows the behavior.\n>\n>> BEGIN;\n>> CREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,\n>> ''::text AS c from generate_series(1,500) AS x;\n>> UPDATE foo SET c = 'foo' WHERE b = 'A' ;\n>> CREATE INDEX foo_b_idx on foo (b);\n>> [ and the rest of the transaction can't use that index ]\n>\n> OK, this is an artifact of the \"HOT update\" optimization. Before\n> creating the index, you did updates on the table that would have been\n> executed differently if the index had existed. When the index does get\n> created, its entries for those updates are incomplete, so the index\n> can't be used in transactions that could in principle see the unmodified\n> rows.\n\nAha! When you indicated that HOT updates were part of the problem, I\ngoogled HOT updates for more detail and ran across this article:\nhttp://pgsql.tapoueh.org/site/html/misc/hot.html\nwhich was very useful in helping me to understand things.\n\nIf I understand things correctly, after a tuple undergoes a HOT-style\nupdate, there is a chain from the original tuple to the updated tuple.\nIf an index already exists on the relation (and involves the updated\ncolumn), a *new entry* in the index is created. However, if an index\ndoes not already exist and one is created (which involves a column\nwith tuples that underwent HOT update) then it seems as though the\nindex doesn't see either version. Is that description inaccurate?\n\nWhat would the effect be of patching postgresql to allow indexes to\nsee and follow the HOT chains during index creation?\n\nThe reason I did the update before the index creation is that the\ninitial update (in the actual version, not this test version) updates\n2.8 million of some 7.5 million rows (or a bit under 40% of the entire\ntable), and such a large update seems like it would have a deleterious\neffect on the index (although in either case the planner properly\nchooses a sequential scan for this update).\n\n> You could avoid this effect either by creating the index before you do\n> any updates on the table, or by not wrapping the entire process into a\n> single transaction.\n\nI need the whole thing in a single transaction because I make\n/extensive/ use of temporary tables and many dozens of statements that\nneed to either succeed or fail as one.\n\nIs this \"HOT update\" optimization interaction with indexes documented\nanywhere? It doesn't appear to be common knowledge as there are now 20\nmessages in this topic and this is the first mention of the HOT\nupdates / index interaction. I would like to suggest that an update to\nthe CREATE INDEX documentation might contain some caveats about\ncreating indexes in transactions on relations that might have HOT\nupdates.\n\nAgain, I'd like to thank everybody for helping me to figure this out.\nIt's not a huge burden to create the index before the updates, but\nunderstanding *why* it wasn't working (even if it violates the\nprinciple-of-least-surprise) helps quite a bit.\n\n\n-- \nJon\n",
"msg_date": "Sat, 13 Nov 2010 10:14:54 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
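For an ordinary (non-temporary) table, one way to see whether updates are actually taking the HOT path is the statistics view; a sketch, with a hypothetical table name:

SELECT relname, n_tup_upd, n_tup_hot_upd
  FROM pg_stat_user_tables
 WHERE relname = 'foo';

n_tup_hot_upd counts updates that stayed on the same page without requiring new index entries, which is the behaviour being discussed here.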
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> What would the effect be of patching postgresql to allow indexes to\n> see and follow the HOT chains during index creation?\n\nIt would break things. We did a *lot* of thinking about this when\nHOT was implemented; there are not simple improvements to be made.\n\nThe particular case you have here might be improvable because you\nactually don't have any indexes at all during the UPDATE, and so\nmaybe there's no need for it to create HOT-update chains. But that\nwould still fall over if you made an index, did the update, then\nmade more indexes.\n\n> Is this \"HOT update\" optimization interaction with indexes documented\n> anywhere? It doesn't appear to be common knowledge as there are now 20\n> messages in this topic and this is the first mention of the HOT\n> updates / index interaction.\n\nThe reason it wasn't mentioned before was that you kept on not showing\nus what you did, and there was no reason for anyone to guess that you\nwere mixing updates and index creations in a single transaction. We\nhave seen people run into this type of issue once or twice since 8.3\ncame out, but it's sufficiently uncommon that it doesn't spend time at\nthe front of anybody's mind.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Nov 2010 11:42:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans "
},
{
"msg_contents": "On Sat, Nov 13, 2010 at 10:41 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> OK. This is a highly distilled example that shows the behavior.\n>\n>> BEGIN;\n>> CREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,\n>> ''::text AS c from generate_series(1,500) AS x;\n>> UPDATE foo SET c = 'foo' WHERE b = 'A' ;\n>> CREATE INDEX foo_b_idx on foo (b);\n>> [ and the rest of the transaction can't use that index ]\n>\n> OK, this is an artifact of the \"HOT update\" optimization. Before\n> creating the index, you did updates on the table that would have been\n> executed differently if the index had existed. When the index does get\n> created, its entries for those updates are incomplete, so the index\n> can't be used in transactions that could in principle see the unmodified\n> rows.\n\nIs the \"in principle\" here because there might be an open snapshot\nother than the one under which CREATE INDEX is running, like a cursor?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Sat, 13 Nov 2010 19:46:05 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Sat, Nov 13, 2010 at 10:41 AM, Tom Lane <[email protected]> wrote:\n>> OK, this is an artifact of the \"HOT update\" optimization. �Before\n>> creating the index, you did updates on the table that would have been\n>> executed differently if the index had existed. �When the index does get\n>> created, its entries for those updates are incomplete, so the index\n>> can't be used in transactions that could in principle see the unmodified\n>> rows.\n\n> Is the \"in principle\" here because there might be an open snapshot\n> other than the one under which CREATE INDEX is running, like a cursor?\n\nWell, the test is based on xmin alone, not cmin, so it can't really tell\nthe difference. It's unclear that it'd be worth trying.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Nov 2010 19:54:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans "
},
{
"msg_contents": "On Sat, Nov 13, 2010 at 7:54 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Sat, Nov 13, 2010 at 10:41 AM, Tom Lane <[email protected]> wrote:\n>>> OK, this is an artifact of the \"HOT update\" optimization. Before\n>>> creating the index, you did updates on the table that would have been\n>>> executed differently if the index had existed. When the index does get\n>>> created, its entries for those updates are incomplete, so the index\n>>> can't be used in transactions that could in principle see the unmodified\n>>> rows.\n>\n>> Is the \"in principle\" here because there might be an open snapshot\n>> other than the one under which CREATE INDEX is running, like a cursor?\n>\n> Well, the test is based on xmin alone, not cmin, so it can't really tell\n> the difference. It's unclear that it'd be worth trying.\n\nYeah, I'm not familiar with the logic in that area of the code, so I\ncan't comment all that intelligently. However, I feel like there's a\nclass of things that could potentially be optimized if we know that\nthe only snapshot they could affect is the one we're currently using.\nFor example, when bulk loading a newly created table with COPY or\nCTAS, we could set the xmin-committed hint bit if it weren't for the\npossibility that some snapshot with a command-ID equal to or lower\nthan our own might take a look and get confused. That seems to\nrequire a BEFORE trigger or another open snapshot. And, if we\nHOT-update a tuple created by our own transaction that can't be of\ninterest to anyone else ever again, it would be nice to either mark it\nfor pruning or maybe even overwrite it in place; similarly if we\ndelete such a tuple it would be nice to schedule its execution. There\nare problems with all of these ideas, and I'm not totally sure how to\nmake any of it work, but to me this sounds suspiciously like another\ninstance of a somewhat more general problem.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Sat, 13 Nov 2010 21:54:58 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Yeah, I'm not familiar with the logic in that area of the code, so I\n> can't comment all that intelligently. However, I feel like there's a\n> class of things that could potentially be optimized if we know that\n> the only snapshot they could affect is the one we're currently using.\n\nYeah, perhaps. The other thing I noticed while looking at the code is\nthat CREATE INDEX's test to see whether there are broken HOT chains is\nborderline brain-dead: if there are any recently-dead HOT-updated tuples\nin the table, it assumes they represent broken HOT chains, whether they\nreally do or not. In principle you could find the live member of the\nchain and see whether or not it is really different from the dead member\nin the columns used by the new index. In Jon's example that would win\nbecause his update didn't actually change the indexed column. It's\nunclear though that it would be useful often enough to be worth the\nextra code and cycles.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 14 Nov 2010 10:55:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: temporary tables, indexes, and query plans "
}
] |
[
{
"msg_contents": "Hi,\n\nI have a Linux Server (Debian) with Postgres 8.3 and I have problems with a\nmassive update, about 400000 updates/inserts.\nIf I execute about 100000 it seems all ok, but when I execute 400000, I have\nthe same problem with or without a transaction (I need to do with a\ntransaction) increase memory usage and disk usage.\nWith a execution of 400.000 inserts/update server begin woring well, but\nafter 100 seconds of executions increase usage of RAM, and then Swap and\nfinally all RAM and swap are used and execution can't finish.\nI have made some tuning in server, I have modified:\n-shared_buffers 1024 Mb\n-work_mem 512 Mb\n-effective_cache_size 2048Mb\n-random_page_cost 2.0\n-checkpoint_segments 64\n-wal_buffers 8Mb\n-max_prepared_transaction 100\n-synchronous_commit off\n\nwhat is wrong in this configuration to executes this inserts/update?\n\nServer has: 4Gb RAM, 3GB Swap and SATA Disk with RAID5\n\n\nThanks\n\nHi,I have a Linux Server (Debian) with Postgres 8.3 and I have problems with a massive update, about 400000 updates/inserts.If I execute about 100000 it seems all ok, but when I execute 400000, I have the same problem with or without a transaction (I need to do with a transaction) increase memory usage and disk usage.\n\n\nWith a execution of 400.000 inserts/update server begin woring well, but after 100 seconds of executions increase usage of RAM, and then Swap and finally all RAM and swap are used and execution can't finish.I have made some tuning in server, I have modified:\n\n\n-shared_buffers 1024 Mb-work_mem 512 Mb-effective_cache_size 2048Mb-random_page_cost 2.0 -checkpoint_segments 64 -wal_buffers 8Mb-max_prepared_transaction 100-synchronous_commit offwhat is wrong in this configuration to executes this inserts/update?\nServer has: 4Gb RAM, 3GB Swap and SATA Disk with RAID5Thanks",
"msg_date": "Wed, 27 Oct 2010 20:38:20 +0200",
"msg_from": "Trenta sis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive update, memory usage"
},
{
"msg_contents": "Trenta sis <[email protected]> wrote:\n\n> \n> Hi,\n> \n> I have a Linux Server (Debian) with Postgres 8.3 and I have problems with a\n> massive update, about 400000 updates/inserts.\n\nUpdates or Inserts?\n\n\n> If I execute about 100000 it seems all ok, but when I execute 400000, I have\n> the same problem with or without a transaction (I need to do with a\n> transaction) increase memory usage and disk usage.\n> With a execution of 400.000 inserts/update server begin woring well, but after\n> 100 seconds of executions increase usage of RAM, and then Swap and finally all\n> RAM and swap are used and execution can't finish.\n> I have made some tuning in server, I have modified:\n> -shared_buffers 1024 Mb\n> -work_mem 512 Mb\n\nWay too high, but that's not the problem here... (i guess, depends on\nthe real query, see below about explain analyse)\n\n> -effective_cache_size 2048Mb\n\nYou have 4GB, but you are defined only 1 GByte for shared_mem and you\nhave defined only 2GB for shared_mem and os-cache together. What about\nthe other 2 GByte?\n\n\n> -random_page_cost 2.0\n\nyou have changed the default, why?\n\n\n> -checkpoint_segments 64\n> -wal_buffers 8Mb\n> -max_prepared_transaction 100\n> -synchronous_commit off\n> \n> what is wrong in this configuration to executes this inserts/update?\n\nHard to guess, can you provide the output generated from \nEXPLAIN ANALYSE <your query>?\n\n\n> \n> Server has: 4Gb RAM, 3GB Swap and SATA Disk with RAID5\n\nRAID5 isn't a good choise for a database server...\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Wed, 27 Oct 2010 20:58:04 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive update, memory usage"
},
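For illustration only (the numbers are guesses for a 4 GB machine, not recommendations), a more conservative starting point along the lines suggested above might look like:

shared_buffers = 1GB            # roughly a quarter of RAM
work_mem = 16MB                 # per sort/hash, per backend; 512MB can easily exhaust RAM
maintenance_work_mem = 256MB
effective_cache_size = 3GB      # shared_buffers plus the OS cache you expect
checkpoint_segments = 64

The real fix, though, depends on what the statements are actually doing, hence the request for EXPLAIN ANALYSE output.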
{
"msg_contents": "On 10/28/2010 02:38 AM, Trenta sis wrote:\n>\n> Hi,\n>\n> I have a Linux Server (Debian) with Postgres 8.3 and I have problems\n> with a massive update, about 400000 updates/inserts.\n> If I execute about 100000 it seems all ok, but when I execute 400000, I\n> have the same problem with or without a transaction (I need to do with a\n> transaction) increase memory usage and disk usage.\n> With a execution of 400.000 inserts/update server begin woring well, but\n> after 100 seconds of executions increase usage of RAM, and then Swap and\n> finally all RAM and swap are used and execution can't finish.\n\nDo you have lots of triggers on the table? Or foreign key relationships \nthat're DEFERRABLE ?\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 28 Oct 2010 07:24:33 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive update, memory usage"
},
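A quick way to check for the deferrable foreign keys being asked about here (deferred constraint triggers queue their events in backend memory until commit, which could explain RAM growing with the size of the transaction); this sketch only assumes the standard system catalogs:

SELECT conname,
       conrelid::regclass AS on_table,
       condeferrable,
       condeferred
  FROM pg_constraint
 WHERE contype = 'f'
   AND condeferrable;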
{
"msg_contents": "There are about 100.000 inserts and 300000 updates. Without transaction it\nseems that works, but with a transaction no. Witt about only 300.000 updates\nit seems that can finish correctly, but last 20% is slow because is using\nswap...\n\nAny tunning to do in this configuration or it is correct?\n\nthanks\n\n2010/10/28 Craig Ringer <[email protected]>\n\nOn 10/28/2010 02:38 AM, Trenta sis wrote:\n>\n>>\n>> Hi,\n>>\n>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems\n>> with a massive update, about 400000 updates/inserts.\n>> If I execute about 100000 it seems all ok, but when I execute 400000, I\n>> have the same problem with or without a transaction (I need to do with a\n>> transaction) increase memory usage and disk usage.\n>> With a execution of 400.000 inserts/update server begin woring well, but\n>> after 100 seconds of executions increase usage of RAM, and then Swap and\n>> finally all RAM and swap are used and execution can't finish.\n>>\n>\n> Do you have lots of triggers on the table? Or foreign key relationships\n> that're DEFERRABLE ?\n>\n> --\n> Craig Ringer\n>\n\nThere are about 100.000 inserts and 300000 updates. Without transaction it seems that works, but with a transaction no. Witt about only 300.000 updates it seems that can finish correctly, but last 20% is slow because is using swap...\nAny tunning to do in this configuration or it is correct?thanks2010/10/28 Craig Ringer <[email protected]>\n\nOn 10/28/2010 02:38 AM, Trenta sis wrote:\n\n\nHi,\n\nI have a Linux Server (Debian) with Postgres 8.3 and I have problems\nwith a massive update, about 400000 updates/inserts.\nIf I execute about 100000 it seems all ok, but when I execute 400000, I\nhave the same problem with or without a transaction (I need to do with a\ntransaction) increase memory usage and disk usage.\nWith a execution of 400.000 inserts/update server begin woring well, but\nafter 100 seconds of executions increase usage of RAM, and then Swap and\nfinally all RAM and swap are used and execution can't finish.\n\n\nDo you have lots of triggers on the table? Or foreign key relationships that're DEFERRABLE ?\n\n--\nCraig Ringer",
"msg_date": "Thu, 28 Oct 2010 10:16:20 +0200",
"msg_from": "Trenta sis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive update, memory usage"
},
{
"msg_contents": "2010/10/28 Trenta sis <[email protected]>:\n>\n>\n> There are about 100.000 inserts and 300000 updates. Without transaction it\n> seems that works, but with a transaction no. Witt about only 300.000 updates\n> it seems that can finish correctly, but last 20% is slow because is using\n> swap...\n>\n> Any tunning to do in this configuration or it is correct?\n\nYou should post your queries, and tables definitions involved.\n\n>\n> thanks\n>\n> 2010/10/28 Craig Ringer <[email protected]>\n>>\n>> On 10/28/2010 02:38 AM, Trenta sis wrote:\n>>>\n>>> Hi,\n>>>\n>>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems\n>>> with a massive update, about 400000 updates/inserts.\n>>> If I execute about 100000 it seems all ok, but when I execute 400000, I\n>>> have the same problem with or without a transaction (I need to do with a\n>>> transaction) increase memory usage and disk usage.\n>>> With a execution of 400.000 inserts/update server begin woring well, but\n>>> after 100 seconds of executions increase usage of RAM, and then Swap and\n>>> finally all RAM and swap are used and execution can't finish.\n>>\n>> Do you have lots of triggers on the table? Or foreign key relationships\n>> that're DEFERRABLE ?\n>>\n>> --\n>> Craig Ringer\n>\n>\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Thu, 28 Oct 2010 17:37:28 +0200",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive update, memory usage"
},
{
"msg_contents": "Well, I have solved executing with more RAM, and then works correctly\n\nThanks\n\n\n\n2010/10/28 Cédric Villemain <[email protected]>\n\n> 2010/10/28 Trenta sis <[email protected]>:\n> >\n> >\n> > There are about 100.000 inserts and 300000 updates. Without transaction\n> it\n> > seems that works, but with a transaction no. Witt about only 300.000\n> updates\n> > it seems that can finish correctly, but last 20% is slow because is using\n> > swap...\n> >\n> > Any tunning to do in this configuration or it is correct?\n>\n> You should post your queries, and tables definitions involved.\n>\n> >\n> > thanks\n> >\n> > 2010/10/28 Craig Ringer <[email protected]>\n> >>\n> >> On 10/28/2010 02:38 AM, Trenta sis wrote:\n> >>>\n> >>> Hi,\n> >>>\n> >>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems\n> >>> with a massive update, about 400000 updates/inserts.\n> >>> If I execute about 100000 it seems all ok, but when I execute 400000, I\n> >>> have the same problem with or without a transaction (I need to do with\n> a\n> >>> transaction) increase memory usage and disk usage.\n> >>> With a execution of 400.000 inserts/update server begin woring well,\n> but\n> >>> after 100 seconds of executions increase usage of RAM, and then Swap\n> and\n> >>> finally all RAM and swap are used and execution can't finish.\n> >>\n> >> Do you have lots of triggers on the table? Or foreign key relationships\n> >> that're DEFERRABLE ?\n> >>\n> >> --\n> >> Craig Ringer\n> >\n> >\n> >\n>\n>\n>\n> --\n> Cédric Villemain 2ndQuadrant\n> http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n>\n\nWell, I have solved executing with more RAM, and then works correctlyThanks2010/10/28 Cédric Villemain <[email protected]>\n2010/10/28 Trenta sis <[email protected]>:\n>\n>\n> There are about 100.000 inserts and 300000 updates. Without transaction it\n> seems that works, but with a transaction no. Witt about only 300.000 updates\n> it seems that can finish correctly, but last 20% is slow because is using\n> swap...\n>\n> Any tunning to do in this configuration or it is correct?\n\nYou should post your queries, and tables definitions involved.\n\n>\n> thanks\n>\n> 2010/10/28 Craig Ringer <[email protected]>\n>>\n>> On 10/28/2010 02:38 AM, Trenta sis wrote:\n>>>\n>>> Hi,\n>>>\n>>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems\n>>> with a massive update, about 400000 updates/inserts.\n>>> If I execute about 100000 it seems all ok, but when I execute 400000, I\n>>> have the same problem with or without a transaction (I need to do with a\n>>> transaction) increase memory usage and disk usage.\n>>> With a execution of 400.000 inserts/update server begin woring well, but\n>>> after 100 seconds of executions increase usage of RAM, and then Swap and\n>>> finally all RAM and swap are used and execution can't finish.\n>>\n>> Do you have lots of triggers on the table? Or foreign key relationships\n>> that're DEFERRABLE ?\n>>\n>> --\n>> Craig Ringer\n>\n>\n>\n\n\n\n--\nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support",
"msg_date": "Thu, 28 Oct 2010 23:48:15 +0200",
"msg_from": "Trenta sis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive update, memory usage"
},
{
"msg_contents": "Scusa, scadenze a parte, ma non vi è sembrato il caso di chiedere a chi sta gestendo il progetto prima di rimuovere una risorsa?\nGrazie comunque.\nEmanuele\n\nIl giorno 28/ott/2010, alle ore 23.48, Trenta sis ha scritto:\n\n> Well, I have solved executing with more RAM, and then works correctly\n> \n> Thanks\n> \n> \n> \n> 2010/10/28 Cédric Villemain <[email protected]>\n> 2010/10/28 Trenta sis <[email protected]>:\n> >\n> >\n> > There are about 100.000 inserts and 300000 updates. Without transaction it\n> > seems that works, but with a transaction no. Witt about only 300.000 updates\n> > it seems that can finish correctly, but last 20% is slow because is using\n> > swap...\n> >\n> > Any tunning to do in this configuration or it is correct?\n> \n> You should post your queries, and tables definitions involved.\n> \n> >\n> > thanks\n> >\n> > 2010/10/28 Craig Ringer <[email protected]>\n> >>\n> >> On 10/28/2010 02:38 AM, Trenta sis wrote:\n> >>>\n> >>> Hi,\n> >>>\n> >>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems\n> >>> with a massive update, about 400000 updates/inserts.\n> >>> If I execute about 100000 it seems all ok, but when I execute 400000, I\n> >>> have the same problem with or without a transaction (I need to do with a\n> >>> transaction) increase memory usage and disk usage.\n> >>> With a execution of 400.000 inserts/update server begin woring well, but\n> >>> after 100 seconds of executions increase usage of RAM, and then Swap and\n> >>> finally all RAM and swap are used and execution can't finish.\n> >>\n> >> Do you have lots of triggers on the table? Or foreign key relationships\n> >> that're DEFERRABLE ?\n> >>\n> >> --\n> >> Craig Ringer\n> >\n> >\n> >\n> \n> \n> \n> --\n> Cédric Villemain 2ndQuadrant\n> http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n> \n\n\nScusa, scadenze a parte, ma non vi è sembrato il caso di chiedere a chi sta gestendo il progetto prima di rimuovere una risorsa?Grazie comunque.EmanueleIl giorno 28/ott/2010, alle ore 23.48, Trenta sis ha scritto:Well, I have solved executing with more RAM, and then works correctlyThanks2010/10/28 Cédric Villemain <[email protected]>\n2010/10/28 Trenta sis <[email protected]>:\n>\n>\n> There are about 100.000 inserts and 300000 updates. Without transaction it\n> seems that works, but with a transaction no. Witt about only 300.000 updates\n> it seems that can finish correctly, but last 20% is slow because is using\n> swap...\n>\n> Any tunning to do in this configuration or it is correct?\n\nYou should post your queries, and tables definitions involved.\n\n>\n> thanks\n>\n> 2010/10/28 Craig Ringer <[email protected]>\n>>\n>> On 10/28/2010 02:38 AM, Trenta sis wrote:\n>>>\n>>> Hi,\n>>>\n>>> I have a Linux Server (Debian) with Postgres 8.3 and I have problems\n>>> with a massive update, about 400000 updates/inserts.\n>>> If I execute about 100000 it seems all ok, but when I execute 400000, I\n>>> have the same problem with or without a transaction (I need to do with a\n>>> transaction) increase memory usage and disk usage.\n>>> With a execution of 400.000 inserts/update server begin woring well, but\n>>> after 100 seconds of executions increase usage of RAM, and then Swap and\n>>> finally all RAM and swap are used and execution can't finish.\n>>\n>> Do you have lots of triggers on the table? 
Or foreign key relationships\n>> that're DEFERRABLE ?\n>>\n>> --\n>> Craig Ringer\n>\n>\n>\n\n\n\n--\nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support",
"msg_date": "Fri, 29 Oct 2010 00:07:29 +0200",
"msg_from": "Emanuele Bracci Poste <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive update, memory usage"
}
] |
[
{
"msg_contents": "hello --\n\nmy last email was apparently too long to respond to so i'll split it up into shorter pieces. my first question :\n\nmy understanding of how range partitioning and constraint exclusion works leads me to believe that it does not buy any query performance that a clustered index doesn't already give you -- the advantages are all in maintainability. an index is able to eliminate pages just as well as constraint exclusion is able to eliminate table partitions. the I/O advantages of having queries target small subtables are the same as the I/O advantages of clustering the index : result pages in a small range are very close to each other on disk.\n\nfinally, since constraint exclusion isn't as flexible as indexing (i've seen old mailing list posts that say that constraint exclusion only works with static constants in where clauses, and only works with simple operators like >, < which basically forces btree indexes when i want to use gist) it is indeed likely that partitioning can be slower than one big table with a clustered index.\n\nis my intuition completely off on this?\n\nbest regards, ben",
"msg_date": "Thu, 28 Oct 2010 09:36:54 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "partitioning question 1"
},
{
"msg_contents": "On Thu, 2010-10-28 at 09:36 -0700, Ben wrote:\n> hello --\n> \n> my last email was apparently too long to respond to so i'll split it up into shorter pieces. my first question :\n> \n> my understanding of how range partitioning and constraint exclusion works leads me to believe that it does not buy any query performance that a clustered index doesn't already give you -- the advantages are all in maintainability. an index is able to eliminate pages just as well as constraint exclusion is able to eliminate table partitions. the I/O advantages of having queries target small subtables are the same as the I/O advantages of clustering the index : result pages in a small range are very close to each other on disk.\n\nNot entirely true. One a clustered index will not stay clustered if you\nare still updating data that is in the partition. You shouldn't\nunderestimate the benefit of smaller relations in terms of maintenance\neither.\n\n> \n> finally, since constraint exclusion isn't as flexible as indexing (i've seen old mailing list posts that say that constraint exclusion only works with static constants in where clauses, and only works with simple operators like >, < which basically forces btree indexes when i want to use gist) it is indeed likely that partitioning can be slower than one big table with a clustered index.\n\nYes the constraints have to be static. Not sure about the operator\nquestion honestly.\n\n\n> is my intuition completely off on this?\n\nYou may actually want to look into expression indexes, not clustered\nones.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Thu, 28 Oct 2010 10:31:32 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning question 1"
},
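For reference, clustering in PostgreSQL is a one-time physical reorder rather than a maintained property; a sketch with hypothetical names:

CREATE INDEX foo_ts_idx ON foo (ts);
CLUSTER foo USING foo_ts_idx;   -- rewrites the table in index order; takes an exclusive lock
ANALYZE foo;                    -- refresh statistics (including correlation) afterwards

Rows inserted or updated later are not kept in that order, which is the point being made above; CLUSTER has to be re-run periodically if the ordering matters.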
{
"msg_contents": "thanks for the prompt response. some comments / questions below :\n\nOn Oct 28, 2010, at 10:31 AM, Joshua D. Drake wrote:\n>> ...constraint exclusion is able to eliminate table partitions. the I/O advantages of having queries target small subtables are the same as the I/O advantages of clustering the index : result pages in a small range are very close to each other on disk.\n> \n> Not entirely true. One a clustered index will not stay clustered if you\n> are still updating data that is in the partition. You shouldn't\n> underestimate the benefit of smaller relations in terms of maintenance\n> either.\n\nin my situation, the update come in-order (it is timeseries data and the clustered index is on time.) so the table should remain relatively clustered. updates also happen relatively infrequently (once a day in one batch.) so it appears that we will continue to get the I/O benefits described above.\n\nare there any other benefits which partitioning provides for query performance (as opposed to update performance) besides the ones which i have mentioned?\n\n\n> Yes the constraints have to be static. Not sure about the operator\n> question honestly.\n\nthis seems to severely restrict their usefulness -- our queries are data warehouse analytical -type queries, so the constraints are usually data-driven (come from joining against other tables.)\n\n>> is my intuition completely off on this?\n> \n> You may actually want to look into expression indexes, not clustered\n> ones.\n\n\nwhat would expression indexes give me?\n\nthanks and best regards, ben\n\n",
"msg_date": "Thu, 28 Oct 2010 11:44:41 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": "On Thu, 2010-10-28 at 11:44 -0700, Ben wrote:\n\n> > Yes the constraints have to be static. Not sure about the operator\n> > question honestly.\n> \n> this seems to severely restrict their usefulness -- our queries are data warehouse analytical -type queries, so the constraints are usually data-driven (come from joining against other tables.)\n\nWell it does and it doesn't. Keep in mind that the constraint can be:\n\ndate >= '2010-10-01\" and date <= '2010-10-31'\n\nWhat it can't be is something that contains date_part() or extract() (as\nan example) \n\n> \n> >> is my intuition completely off on this?\n> > \n> > You may actually want to look into expression indexes, not clustered\n> > ones.\n\nTake a look at the docs:\n\nhttp://www.postgresql.org/docs/8.4/interactive/indexes-expressional.html\n\nIt \"could\" be considered partitioning without breaking up the table,\njust the indexes.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Thu, 28 Oct 2010 11:50:14 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning question 1"
},
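Two related ideas, sketched against a hypothetical table foo(i integer, ...): an expression index (what the linked page describes) and a partial index, which is closer to "partitioning just the index":

-- expression index: index a computed value rather than the raw column
CREATE INDEX foo_i_bucket_idx ON foo ((i / 10));

-- partial index: index only a slice of the table
CREATE INDEX foo_i_0_9_idx ON foo (i) WHERE i >= 0 AND i < 10;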
{
"msg_contents": "\nOn Oct 28, 2010, at 11:50 AM, Joshua D. Drake wrote:\n>>> Yes the constraints have to be static. Not sure about the operator\n>>> question honestly.\n>> \n>> this seems to severely restrict their usefulness -- our queries are data warehouse analytical -type queries, so the constraints are usually data-driven (come from joining against other tables.)\n> \n> Well it does and it doesn't. Keep in mind that the constraint can be:\n> \n> date >= '2010-10-01\" and date <= '2010-10-31'\n> \n> What it can't be is something that contains date_part() or extract() (as\n> an example) \n\ni think we are talking about two different things here: the constraints on the table, and the where-clause constraints in a query which may or may not trigger constraint exclusion. i understand that table constraints have to be constants -- it doesn't make much sense otherwise. what i am wondering about is, will constraint exclusion be triggered for queries where the column that is being partitioned on is being constrained things that are not static constants, for instance, in a join. (i'm pretty sure the answer is no, because i think constraint exclusion happens before real query planning.) a concrete example :\n\ncreate table foo (i integer not null, j float not null);\ncreate table foo_1 (check ( i >= 0 and i < 10) ) inherits (foo);\ncreate table foo_2 (check ( i >= 10 and i < 20) ) inherits (foo);\ncreate table foo_3 (check ( i >= 20 and i < 30) ) inherits (foo);\netc..\n\ncreate table bar (i integer not null, k float not null);\n\nmy understanding is that a query like\n\nselect * from foo, bar using (i);\n\ncan't use constraint exclusion, even if the histogram of i-values on table bar says they only live in the range 0-9, and so the query will touch all of the tables. i think this is not favorable compared to a single foo table with a well-maintained btree index on i.\n\n>>>> is my intuition completely off on this?\n>>> \n>>> You may actually want to look into expression indexes, not clustered\n>>> ones.\n> \n> Take a look at the docs:\n> \n> http://www.postgresql.org/docs/8.4/interactive/indexes-expressional.html\n> \n> It \"could\" be considered partitioning without breaking up the table,\n> just the indexes.\n\ndo you mean partial indexes? i have to confess to not understanding how this is relevant -- how could partial indexes give any advantage over a full clustered index?\n\nb ",
"msg_date": "Thu, 28 Oct 2010 12:25:16 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": "On Thu, 2010-10-28 at 12:25 -0700, Ben wrote:\n\n> i think we are talking about two different things here: the constraints on the table, and the where-clause constraints in a query which may or may not trigger constraint exclusion. i understand that table constraints have to be constants -- it doesn't make much sense otherwise. what i am wondering about is, will constraint exclusion be triggered for queries where the column that is being partitioned on is being constrained things that are not static constants, for instance, in a join. (i'm pretty sure the answer is no, because i think constraint exclusion happens before real query planning.) a concrete example :\n> \n> create table foo (i integer not null, j float not null);\n> create table foo_1 (check ( i >= 0 and i < 10) ) inherits (foo);\n> create table foo_2 (check ( i >= 10 and i < 20) ) inherits (foo);\n> create table foo_3 (check ( i >= 20 and i < 30) ) inherits (foo);\n> etc..\n> \n> create table bar (i integer not null, k float not null);\n> \n> my understanding is that a query like\n> \n> select * from foo, bar using (i);\n> \n> can't use constraint exclusion, even if the histogram of i-values on table bar says they only live in the range 0-9, and so the query will touch all of the tables. i think this is not favorable compared to a single foo table with a well-maintained btree index on i.\n> \n\nMy tests show you are incorrect:\n\n\npart_test=# explain analyze select * from foo join bar using (i) where\ni=9;\n QUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=34.26..106.76 rows=200 width=20) (actual\ntime=0.004..0.004 rows=0 loops=1)\n -> Append (cost=0.00..68.50 rows=20 width=12) (actual\ntime=0.004..0.004 rows=0 loops=1)\n -> Seq Scan on foo (cost=0.00..34.25 rows=10 width=12)\n(actual time=0.001..0.001 rows=0 loops=1)\n Filter: (i = 9)\n -> Seq Scan on foo_1 foo (cost=0.00..34.25 rows=10 width=12)\n(actual time=0.000..0.000 rows=0 loops=1)\n Filter: (i = 9)\n -> Materialize (cost=34.26..34.36 rows=10 width=12) (never\nexecuted)\n -> Seq Scan on bar (cost=0.00..34.25 rows=10 width=12) (never\nexecuted)\n Filter: (i = 9)\n Total runtime: 0.032 ms\n(10 rows)\n\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Thu, 28 Oct 2010 12:44:12 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": "\nOn Oct 28, 2010, at 12:44 PM, Joshua D. Drake wrote:\n> \n> My tests show you are incorrect:\n> \n> \n> part_test=# explain analyze select * from foo join bar using (i) where\n> i=9;\n> QUERY\n> PLAN \n> ------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=34.26..106.76 rows=200 width=20) (actual\n> time=0.004..0.004 rows=0 loops=1)\n> -> Append (cost=0.00..68.50 rows=20 width=12) (actual\n> time=0.004..0.004 rows=0 loops=1)\n> -> Seq Scan on foo (cost=0.00..34.25 rows=10 width=12)\n> (actual time=0.001..0.001 rows=0 loops=1)\n> Filter: (i = 9)\n> -> Seq Scan on foo_1 foo (cost=0.00..34.25 rows=10 width=12)\n> (actual time=0.000..0.000 rows=0 loops=1)\n> Filter: (i = 9)\n> -> Materialize (cost=34.26..34.36 rows=10 width=12) (never\n> executed)\n> -> Seq Scan on bar (cost=0.00..34.25 rows=10 width=12) (never\n> executed)\n> Filter: (i = 9)\n> Total runtime: 0.032 ms\n> (10 rows)\n\nstrange. my tests don't agree with your tests :\n\ncreate table foo (i integer not null, j float not null);\ncreate table foo_1 ( check (i >= 0 and i < 10) ) inherits (foo);\ncreate table foo_2 ( check (i >= 10 and i < 20) ) inherits (foo);\ncreate table foo_3 ( check (i >= 20 and i < 30) ) inherits (foo);\ncreate index foo_1_idx on foo_1 (i);\ncreate index foo_2_idx on foo_2 (i);\ncreate index foo_3_idx on foo_3 (i);\ninsert into foo_1 select generate_series, generate_series from generate_series(0,9);\ninsert into foo_2 select generate_series, generate_series from generate_series(10,19);\ninsert into foo_3 select generate_series, generate_series from generate_series(20,29);\ncreate table bar (i integer not null, k float not null);\ncreate index bar_idx on bar (i);\ninsert into bar select generate_series, -generate_series from generate_series(0,9);\nvacuum analyze;\nexplain analyze select * from foo join bar using (i);\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.23..42.29 rows=98 width=20) (actual time=0.056..0.118 rows=10 loops=1)\n Hash Cond: (public.foo.i = bar.i)\n -> Append (cost=0.00..32.70 rows=1970 width=12) (actual time=0.008..0.043 rows=30 loops=1)\n -> Seq Scan on foo (cost=0.00..29.40 rows=1940 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n -> Seq Scan on foo_1 foo (cost=0.00..1.10 rows=10 width=12) (actual time=0.005..0.008 rows=10 loops=1)\n -> Seq Scan on foo_2 foo (cost=0.00..1.10 rows=10 width=12) (actual time=0.003..0.006 rows=10 loops=1)\n -> Seq Scan on foo_3 foo (cost=0.00..1.10 rows=10 width=12) (actual time=0.003..0.006 rows=10 loops=1)\n -> Hash (cost=1.10..1.10 rows=10 width=12) (actual time=0.025..0.025 rows=10 loops=1)\n -> Seq Scan on bar (cost=0.00..1.10 rows=10 width=12) (actual time=0.005..0.013 rows=10 loops=1)\n Total runtime: 0.205 ms\n(10 rows)\n\n\ni'm running pg 8.4.3 with constraint_exclusion=on (just to be safe.)\n\nbest, b",
"msg_date": "Thu, 28 Oct 2010 12:59:50 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": "On Thu, 2010-10-28 at 12:59 -0700, Ben wrote:\r\n> explain analyze select * from foo join bar using (i);\r\nvs\r\nexplain analyze select * from foo join bar using (i) where i=9;\r\n\n\n\n\n\nRe: [PERFORM] partitioning question 1\n\n\n\nOn Thu, 2010-10-28 at 12:59 -0700, Ben wrote:\r\n> explain analyze select * from foo join bar using (i);\r\nvs\r\nexplain analyze select * from foo join bar using (i) where i=9;",
"msg_date": "Thu, 28 Oct 2010 16:08:43 -0400",
"msg_from": "\"Reid Thompson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": "On Thu, 2010-10-28 at 12:59 -0700, Ben wrote:\n> On Oct 28, 2010, at 12:44 PM, Joshua D. Drake wrote:\n> > \n> > My tests show you are incorrect:\n> > \n> > \n> > part_test=# explain analyze select * from foo join bar using (i) where\n> > i=9;\n> > QUERY\n> > PLAN \n> > ------------------------------------------------------------------------------------------------------------------\n> > Nested Loop (cost=34.26..106.76 rows=200 width=20) (actual\n> > time=0.004..0.004 rows=0 loops=1)\n> > -> Append (cost=0.00..68.50 rows=20 width=12) (actual\n> > time=0.004..0.004 rows=0 loops=1)\n> > -> Seq Scan on foo (cost=0.00..34.25 rows=10 width=12)\n> > (actual time=0.001..0.001 rows=0 loops=1)\n> > Filter: (i = 9)\n> > -> Seq Scan on foo_1 foo (cost=0.00..34.25 rows=10 width=12)\n> > (actual time=0.000..0.000 rows=0 loops=1)\n> > Filter: (i = 9)\n> > -> Materialize (cost=34.26..34.36 rows=10 width=12) (never\n> > executed)\n> > -> Seq Scan on bar (cost=0.00..34.25 rows=10 width=12) (never\n> > executed)\n> > Filter: (i = 9)\n> > Total runtime: 0.032 ms\n> > (10 rows)\n> \n> strange. my tests don't agree with your tests :\n\nDo you have constraint_exclusion turned on? You should verify with show\nconstraint_exclusion (I saw what you wrote below).\n\nJD\n\nP.S. Blatant plug, you coming to http://www.postgresqlconference.org ?\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n",
"msg_date": "Thu, 28 Oct 2010 13:48:32 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": "whoops, didn't see the i=9 (linebreak! linebreak!)\n\nnonetheless that is a static constant constraint on the column i, and i was asking if constraint exclusions would work for dynamic constraints (like those derived from a table joined against.) so for example the bar table has only 0-9 in its histogram for i, but constraint exclusion can't use that to eliminate tables foo_2 and foo_3. this is precisely the kind of information an index can use via join selectivity.\n\ni am not going to the pg conference, sorry to say.\n\nb\n\n\nOn Oct 28, 2010, at 1:48 PM, Joshua D. Drake wrote:\n\n> On Thu, 2010-10-28 at 12:59 -0700, Ben wrote:\n>> On Oct 28, 2010, at 12:44 PM, Joshua D. Drake wrote:\n>>> \n>>> My tests show you are incorrect:\n>>> \n>>> \n>>> part_test=# explain analyze select * from foo join bar using (i) where\n>>> i=9;\n>>> QUERY\n>>> PLAN \n>>> ------------------------------------------------------------------------------------------------------------------\n>>> Nested Loop (cost=34.26..106.76 rows=200 width=20) (actual\n>>> time=0.004..0.004 rows=0 loops=1)\n>>> -> Append (cost=0.00..68.50 rows=20 width=12) (actual\n>>> time=0.004..0.004 rows=0 loops=1)\n>>> -> Seq Scan on foo (cost=0.00..34.25 rows=10 width=12)\n>>> (actual time=0.001..0.001 rows=0 loops=1)\n>>> Filter: (i = 9)\n>>> -> Seq Scan on foo_1 foo (cost=0.00..34.25 rows=10 width=12)\n>>> (actual time=0.000..0.000 rows=0 loops=1)\n>>> Filter: (i = 9)\n>>> -> Materialize (cost=34.26..34.36 rows=10 width=12) (never\n>>> executed)\n>>> -> Seq Scan on bar (cost=0.00..34.25 rows=10 width=12) (never\n>>> executed)\n>>> Filter: (i = 9)\n>>> Total runtime: 0.032 ms\n>>> (10 rows)\n>> \n>> strange. my tests don't agree with your tests :\n> \n> Do you have constraint_exclusion turned on? You should verify with show\n> constraint_exclusion (I saw what you wrote below).\n> \n> JD\n> \n> P.S. Blatant plug, you coming to http://www.postgresqlconference.org ?\n> \n> \n> -- \n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n> \n\n",
"msg_date": "Thu, 28 Oct 2010 14:06:57 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": "> -----Original Message-----\n> From: Ben [mailto:[email protected]] \n> Sent: Thursday, October 28, 2010 12:37 PM\n> To: [email protected]\n> Subject: partitioning question 1\n> \n> hello --\n> \n> my last email was apparently too long to respond to so i'll \n> split it up into shorter pieces. my first question :\n> \n> my understanding of how range partitioning and constraint \n> exclusion works leads me to believe that it does not buy any \n> query performance that a clustered index doesn't already give \n> you -- the advantages are all in maintainability. an index \n> is able to eliminate pages just as well as constraint \n> exclusion is able to eliminate table partitions. the I/O \n> advantages of having queries target small subtables are the \n> same as the I/O advantages of clustering the index : result \n> pages in a small range are very close to each other on disk.\n> \n> finally, since constraint exclusion isn't as flexible as \n> indexing (i've seen old mailing list posts that say that \n> constraint exclusion only works with static constants in \n> where clauses, and only works with simple operators like >, < \n> which basically forces btree indexes when i want to use gist) \n> it is indeed likely that partitioning can be slower than one \n> big table with a clustered index.\n> \n> is my intuition completely off on this?\n> \n> best regards, ben\n> \n\nIf your SELECT retrieves substantial amount of records, table scan could\nbe more efficient than index access.\n\nNow, if while retrieving large amount of records \"WHERE clause\" of this\nSELECT still satisfies constraints on some partition(s), then obviously\none (or few) partition scans will be more efficient than full table scan\nof non-partitioned table.\n\nSo, yes partitioning provides performance improvements, not only\nmaintenance convenience.\n\nRegards,\nIgor Neyman\n",
"msg_date": "Fri, 29 Oct 2010 10:38:47 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": "On Oct 29, 2010, at 7:38 AM, Igor Neyman wrote:\n\n>> is my intuition completely off on this?\n>> \n>> best regards, ben\n>> \n> \n> If your SELECT retrieves substantial amount of records, table scan could\n> be more efficient than index access.\n> \n> Now, if while retrieving large amount of records \"WHERE clause\" of this\n> SELECT still satisfies constraints on some partition(s), then obviously\n> one (or few) partition scans will be more efficient than full table scan\n> of non-partitioned table.\n> \n> So, yes partitioning provides performance improvements, not only\n> maintenance convenience.\n\nmy impression was that a *clustered* index would give a lot of the same I/O benefits, in a more flexible way. if you're clustered on the column in question, then an index scan for a range is much like a sequential scan over a partition (as far as i understand.)\n\nb",
"msg_date": "Fri, 29 Oct 2010 09:16:13 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioning question 1"
},
{
"msg_contents": " \n\n> -----Original Message-----\n> From: Ben [mailto:[email protected]] \n> Sent: Friday, October 29, 2010 12:16 PM\n> To: Igor Neyman\n> Cc: [email protected]\n> Subject: Re: partitioning question 1\n> \n> On Oct 29, 2010, at 7:38 AM, Igor Neyman wrote:\n> \n> >> is my intuition completely off on this?\n> >> \n> >> best regards, ben\n> >> \n> > \n> > If your SELECT retrieves substantial amount of records, table scan \n> > could be more efficient than index access.\n> > \n> > Now, if while retrieving large amount of records \"WHERE clause\" of \n> > this SELECT still satisfies constraints on some partition(s), then \n> > obviously one (or few) partition scans will be more efficient than \n> > full table scan of non-partitioned table.\n> > \n> > So, yes partitioning provides performance improvements, not only \n> > maintenance convenience.\n> \n> my impression was that a *clustered* index would give a lot \n> of the same I/O benefits, in a more flexible way. if you're \n> clustered on the column in question, then an index scan for a \n> range is much like a sequential scan over a partition (as far \n> as i understand.)\n> \n> b\n> \n\nEven with clustered index you still read index+table, which is more\nexpensive than just table scan (in situation I described above).\nPG clustered index is not the same as SQL Server clustered index (which\nincludes actual table pages on the leaf level).\n\nIgor Neyman\n",
"msg_date": "Fri, 29 Oct 2010 12:28:20 -0400",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning question 1"
}
] |
[
{
"msg_contents": "I've been having trouble with a query.\nThe query is a cross join between two tables.\nInitially, I mis-typed the query, and one of the columns specified in\nthe query doesn't exist, however the query ran nonetheless.\n\nThe actual query:\nselect gid from t2, t3 where t2.name = t3.name and t3.scope = 'city'\nand t3.hierarchy = 'STANDARD' and t2.adiv = t3.adiv limit 1 ;\nHowever, there *is* no column 'name' in table 't2'.\nWhen I ran the query, it took a *really* long time to run (670 seconds).\nWhen I corrected the query to use the right column name (city_name),\nthe query ran in 28ms.\n\nThe question, then, is why didn't the postgres grump about the\nnon-existent column name?\n\nThe version is 8.4.5 on x86_64, openSUSE 11.3\n\n PostgreSQL 8.4.5 on x86_64-unknown-linux-gnu, compiled by GCC gcc\n(SUSE Linux) 4.5.0 20100604 [gcc-4_5-branch revision 160292], 64-bit\n\n\n-- \nJon\n",
"msg_date": "Fri, 29 Oct 2010 11:40:24 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "typoed column name, but postgres didn't grump"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> Initially, I mis-typed the query, and one of the columns specified in\n> the query doesn't exist, however the query ran nonetheless.\n\n> The actual query:\n> select gid from t2, t3 where t2.name = t3.name and t3.scope = 'city'\n> and t3.hierarchy = 'STANDARD' and t2.adiv = t3.adiv limit 1 ;\n> However, there *is* no column 'name' in table 't2'.\n\nThis is the old automatic-cast-from-record-to-text-string issue,\nie it treats this like \"(t2.*)::name\".\n\nWe've been over this a few times before, but it's not clear that\nwe can make this throw an error without introducing unpleasant\nasymmetry into the casting behavior, as in you couldn't get the\ncast when you did want it.\n\nBTW this seems pretty far off-topic for pgsql-performance.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 13:48:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: typoed column name, but postgres didn't grump "
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n \n> BTW this seems pretty far off-topic for pgsql-performance.\n \nIt is once you understand what's happening. It was probably the 11+\nminutes for the mistyped query run, versus the 28 ms without the\ntypo, that led them to this list.\n \nI remembered this as an issued that has come up before, but couldn't\ncome up with good search criteria for finding the old thread before\nyou posted. If you happen to have a reference or search criteria\nfor a previous thread, could you post it? Otherwise, a brief\nexplanation of why this is considered a feature worth keeping would\nbe good. I know it has been explained before, but it just looks\nwrong, on the face of it.\n \nPlaying around with it a little, it seems like a rather annoying\nfoot-gun which could confuse people and burn a lot of development\ntime:\n \ntest=# create domain make text;\nCREATE DOMAIN\ntest=# create domain model text;\nCREATE DOMAIN\ntest=# create table vehicle (id int primary key, make make);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"vehicle_pkey\" for table \"vehicle\"\nCREATE TABLE\ntest=# insert into vehicle values (1,\n'Toyota'),(2,'Ford'),(3,'Rambler');\nINSERT 0 3\ntest=# select v.make, v.model from vehicle v;\n make | model\n---------+-------------\n Toyota | (1,Toyota)\n Ford | (2,Ford)\n Rambler | (3,Rambler)\n(3 rows)\n \nIf someone incorrectly thinks they've added a column, and the\npurported column name happens to match any character-based type or\ndomain name, they can get a query which behaves in a rather\nunexpected way. In this simple query it's pretty easy to spot, but\nit could surface in a much more complex query. If a mistyped query\nruns for 11 days instead of 11 minutes, they may have a hard time\nspotting the problem.\n \nA typo like this could be particularly hazardous in a DELETE or\nUPDATE statement.\n \n-Kevin\n",
"msg_date": "Fri, 29 Oct 2010 13:38:54 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: typoed column name, but postgres didn't grump"
},
{
"msg_contents": "[ please continue any further discussion in pgsql-bugs only ]\n\n\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> BTW this seems pretty far off-topic for pgsql-performance.\n \n> It is once you understand what's happening. It was probably the 11+\n> minutes for the mistyped query run, versus the 28 ms without the\n> typo, that led them to this list.\n \n> I remembered this as an issued that has come up before, but couldn't\n> come up with good search criteria for finding the old thread before\n> you posted. If you happen to have a reference or search criteria\n> for a previous thread, could you post it? Otherwise, a brief\n> explanation of why this is considered a feature worth keeping would\n> be good. I know it has been explained before, but it just looks\n> wrong, on the face of it.\n\nWhat's going on here is an unpleasant interaction of several different\nfeatures:\n\n1. The notations a.b and b(a) are equivalent: either one can mean the\ncolumn b of a table a, or an invocation of a function b() that takes\na's composite type as parameter. This is an ancient PostQUEL-ism,\nbut we've preserved it because it is helpful for things like\nemulating computed columns via functions.\n\n2. The notation t(x) will be taken to mean x::t if there's no function\nt() taking x's type, but there is a cast from x's type to t. This is\njust as ancient as #1. It doesn't really add any functionality, but\nI believe we would break a whole lot of users' code if we took it away.\nBecause of #1, this also means that x.t could mean x::t.\n\n3. As of 8.4 or so, there are built-in casts available from pretty much\nany type (including composites) to all the built-in string types, viz\ntext, varchar, bpchar, name.\n\nUpshot is that t.name is a cast to type \"name\" if there's no column or\nuser-defined function that can match the call. We've seen bug reports\non this with respect to both the \"name\" and \"text\" cases, though I'm\ntoo lazy to trawl the archives for them just now.\n\nSo, if you want to throw an error for this, you have to choose which\nof these other things you want to break. I think if I had to pick a\nproposal, I'd say we should disable #2 for the specific case of casting\na composite type to something else. The intentional uses I've seen were\nall scalar types; and before 8.4 there was no built-in functionality\nthat such a call could match. If we slice off some other part of the\nfunctionality, we risk breaking apps that've worked for many years.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 15:07:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump "
},
{
"msg_contents": "On Fri, Oct 29, 2010 at 3:07 PM, Tom Lane <[email protected]> wrote:\n> [ please continue any further discussion in pgsql-bugs only ]\n>\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Tom Lane <[email protected]> wrote:\n>>> BTW this seems pretty far off-topic for pgsql-performance.\n>\n>> It is once you understand what's happening. It was probably the 11+\n>> minutes for the mistyped query run, versus the 28 ms without the\n>> typo, that led them to this list.\n>\n>> I remembered this as an issued that has come up before, but couldn't\n>> come up with good search criteria for finding the old thread before\n>> you posted. If you happen to have a reference or search criteria\n>> for a previous thread, could you post it? Otherwise, a brief\n>> explanation of why this is considered a feature worth keeping would\n>> be good. I know it has been explained before, but it just looks\n>> wrong, on the face of it.\n>\n> What's going on here is an unpleasant interaction of several different\n> features:\n>\n> 1. The notations a.b and b(a) are equivalent: either one can mean the\n> column b of a table a, or an invocation of a function b() that takes\n> a's composite type as parameter. This is an ancient PostQUEL-ism,\n> but we've preserved it because it is helpful for things like\n> emulating computed columns via functions.\n>\n> 2. The notation t(x) will be taken to mean x::t if there's no function\n> t() taking x's type, but there is a cast from x's type to t. This is\n> just as ancient as #1. It doesn't really add any functionality, but\n> I believe we would break a whole lot of users' code if we took it away.\n> Because of #1, this also means that x.t could mean x::t.\n>\n> 3. As of 8.4 or so, there are built-in casts available from pretty much\n> any type (including composites) to all the built-in string types, viz\n> text, varchar, bpchar, name.\n>\n> Upshot is that t.name is a cast to type \"name\" if there's no column or\n> user-defined function that can match the call. We've seen bug reports\n> on this with respect to both the \"name\" and \"text\" cases, though I'm\n> too lazy to trawl the archives for them just now.\n>\n> So, if you want to throw an error for this, you have to choose which\n> of these other things you want to break. I think if I had to pick a\n> proposal, I'd say we should disable #2 for the specific case of casting\n> a composite type to something else. The intentional uses I've seen were\n> all scalar types; and before 8.4 there was no built-in functionality\n> that such a call could match. If we slice off some other part of the\n> functionality, we risk breaking apps that've worked for many years.\n\nWell, then let's do that. It's not the exact fix I'd pick, but it's\nclearly better than nothing, so I'm willing to sign on to it as a\ncompromise position.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 29 Oct 2010 15:15:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump"
},
{
"msg_contents": "Robert Haas <[email protected]> wrote:\n \n>> 2. The notation t(x) will be taken to mean x::t if there's no\n>> function t() taking x's type, but there is a cast from x's type\n>> to t.\n \n>> I think if I had to pick a proposal, I'd say we should disable #2\n>> for the specific case of casting a composite type to something\n>> else.\n \n> Well, then let's do that. It's not the exact fix I'd pick, but\n> it's clearly better than nothing, so I'm willing to sign on to it\n> as a compromise position.\n \nIt seems a bad idea to have so many different syntaxes for identical\nCAST semantics, but there they are, and it's bad to break things. \nOne of the reasons #2 seems like the place to fix it is that it's\npretty flaky anyway -- \"it will be taken to mean x unless there no y\nbut there is a z\" is pretty fragile to start with. Adding one more\ncondition to the places it kicks in doesn't seem as good to me as\ndropping it entirely, but then I don't have any code which depends\non type(value) as a cast syntax -- those who do will likely feel\ndifferently.\n \nSo, I'd rather scrap #2 entirely; but if that really would break\nmuch working code, +1 for ignoring it when it would cast a composite\nto something else.\n \n-Kevin\n",
"msg_date": "Fri, 29 Oct 2010 14:46:28 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't\n\t grump"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Robert Haas <[email protected]> wrote:\n>>> I think if I had to pick a proposal, I'd say we should disable #2\n>>> for the specific case of casting a composite type to something\n>>> else.\n \n>> Well, then let's do that. It's not the exact fix I'd pick, but\n>> it's clearly better than nothing, so I'm willing to sign on to it\n>> as a compromise position.\n\n> So, I'd rather scrap #2 entirely; but if that really would break\n> much working code, +1 for ignoring it when it would cast a composite\n> to something else.\n\nWell, assuming for the sake of argument that we have consensus on fixing\nit like that, is this something we should just do in HEAD, or should we\nback-patch into 8.4 and 9.0? We'll be hearing about it nigh\nindefinitely if we don't, but on the other hand this isn't the kind of\nthing we like to change in released branches.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 16:12:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump "
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Robert Haas <[email protected]> wrote:\n>>>> I think if I had to pick a proposal, I'd say we should disable\n>>>> #2 for the specific case of casting a composite type to\n>>>> something else.\n> \n>>> Well, then let's do that. It's not the exact fix I'd pick, but\n>>> it's clearly better than nothing, so I'm willing to sign on to\n>>> it as a compromise position.\n> \n>> So, I'd rather scrap #2 entirely; but if that really would break\n>> much working code, +1 for ignoring it when it would cast a\n>> composite to something else.\n> \n> Well, assuming for the sake of argument that we have consensus on\n> fixing it like that, is this something we should just do in HEAD,\n> or should we back-patch into 8.4 and 9.0? We'll be hearing about\n> it nigh indefinitely if we don't, but on the other hand this isn't\n> the kind of thing we like to change in released branches.\n \nI can't see back-patching it -- it's a behavior change.\n \nOn the bright side, in five years after the release where it's\nremoved, it will be out of support. Problem reports caused by it\nshould be tapering off before that....\n \n-Kevin\n",
"msg_date": "Fri, 29 Oct 2010 15:21:09 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't\n\t grump"
},
{
"msg_contents": "On Oct 29, 2010, at 4:21 PM, \"Kevin Grittner\" <[email protected]> wrote:\n> Tom Lane <[email protected]> wrote:\n>> \"Kevin Grittner\" <[email protected]> writes:\n>>> Robert Haas <[email protected]> wrote:\n>>>>> I think if I had to pick a proposal, I'd say we should disable\n>>>>> #2 for the specific case of casting a composite type to\n>>>>> something else.\n>> \n>>>> Well, then let's do that. It's not the exact fix I'd pick, but\n>>>> it's clearly better than nothing, so I'm willing to sign on to\n>>>> it as a compromise position.\n>> \n>>> So, I'd rather scrap #2 entirely; but if that really would break\n>>> much working code, +1 for ignoring it when it would cast a\n>>> composite to something else.\n>> \n>> Well, assuming for the sake of argument that we have consensus on\n>> fixing it like that, is this something we should just do in HEAD,\n>> or should we back-patch into 8.4 and 9.0? We'll be hearing about\n>> it nigh indefinitely if we don't, but on the other hand this isn't\n>> the kind of thing we like to change in released branches.\n> \n> I can't see back-patching it -- it's a behavior change.\n> \n> On the bright side, in five years after the release where it's\n> removed, it will be out of support. Problem reports caused by it\n> should be tapering off before that....\n\nYeah, I think we're going to have to live with it, at least for 8.4. One could make an argument that 9.0 is new enough we could get away with a small behavior change to avoid a large amount of user confusion. But that may be a self-serving argument based on wanting to tamp down the bug reports rather than a wisely considered policy decision... so I'm not sure I quite buy it.\n\n...Robert",
"msg_date": "Fri, 29 Oct 2010 17:48:59 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> Yeah, I think we're going to have to live with it, at least for 8.4. One could make an argument that 9.0 is new enough we could get away with a small behavior change to avoid a large amount of user confusion. But that may be a self-serving argument based on wanting to tamp down the bug reports rather than a wisely considered policy decision... so I'm not sure I quite buy it.\n\nWell, tamping down the bug reports is good from the users' point of view\ntoo.\n\nThe argument for not changing it in the back branches is that there\nmight be someone depending on the 8.4/9.0 behavior. However, that seems\nmoderately unlikely. Also, if we wait, that just increases the chances\nthat someone will come to depend on it, and then have a problem when\nthey migrate to 9.1. I think the \"risk of breakage\" argument has a lot\nmore force when considering long-standing behaviors than things we just\nrecently introduced.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 17:53:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump "
},
{
"msg_contents": "On Oct 29, 2010, at 5:53 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> Yeah, I think we're going to have to live with it, at least for 8.4. One could make an argument that 9.0 is new enough we could get away with a small behavior change to avoid a large amount of user confusion. But that may be a self-serving argument based on wanting to tamp down the bug reports rather than a wisely considered policy decision... so I'm not sure I quite buy it.\n> \n> Well, tamping down the bug reports is good from the users' point of view\n> too.\n> \n> The argument for not changing it in the back branches is that there\n> might be someone depending on the 8.4/9.0 behavior. However, that seems\n> moderately unlikely. Also, if we wait, that just increases the chances\n> that someone will come to depend on it, and then have a problem when\n> they migrate to 9.1. I think the \"risk of breakage\" argument has a lot\n> more force when considering long-standing behaviors than things we just\n> recently introduced.\n\nI'm not entirely sure that a behavior we released well over a year ago can be considered \"just recently introduced\"...\n\n...Robert",
"msg_date": "Fri, 29 Oct 2010 20:24:54 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump"
},
{
"msg_contents": "On Fri, Oct 29, 2010 at 2:07 PM, Tom Lane <[email protected]> wrote:\n> [ please continue any further discussion in pgsql-bugs only ]\n>\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Tom Lane <[email protected]> wrote:\n>>> BTW this seems pretty far off-topic for pgsql-performance.\n>\n>> It is once you understand what's happening. It was probably the 11+\n>> minutes for the mistyped query run, versus the 28 ms without the\n>> typo, that led them to this list.\n\nThat is correct. Indeed, at this point, I'm not even sure whether I\nshould have included -performance, here.\n\n>> I remembered this as an issued that has come up before, but couldn't\n>> come up with good search criteria for finding the old thread before\n>> you posted. If you happen to have a reference or search criteria\n>> for a previous thread, could you post it? Otherwise, a brief\n>> explanation of why this is considered a feature worth keeping would\n>> be good. I know it has been explained before, but it just looks\n>> wrong, on the face of it.\n>\n..\n\nI've spent some time thinking about this. Now, please remember that\nI'm not a seasoned postgresql veteran like many of you, but I've been\ndoing one kind of programming or another for the better part of 20\nyears. I am also a strong believer in the principle of least surprise.\nI say this only so that you might understand better the perspective\nI'm coming from. With that said, when I read the first part of your\nfirst item:\n\n> 1. The notations a.b and b(a) are equivalent: either one can mean the\n> column b of a table a, or an invocation of a function b() that takes\n> a's composite type as parameter.\n\nI feel that, while there may be a fair bit of history here, it's\ncertainly a bit of a surprise. From my perspective, a.b usually means,\nin most other languages (as it does here), \"access the named-thing 'b'\nfrom the named-thing 'a' and returns it's value\", and whenever\nparentheses are involved (especially when in the form \"b(a)\") it means\n\"call function 'b' on named-thing 'a' and return the result\".\n\nFurthermore, regarding your second point:\n\n> 2. The notation t(x) will be taken to mean x::t if there's no function\n> t() taking x's type, but there is a cast from x's type to t. This is\n> just as ancient as #1. It doesn't really add any functionality, but\n> I believe we would break a whole lot of users' code if we took it away.\n> Because of #1, this also means that x.t could mean x::t.\n\nI've always found the form b(a) to have an implicit (if there is a\n*type* b that can take a thing of type a, then do so (essentially an\nalternate form of casting). For example, Python and some other\nlanguages behave this way. 
I'm not sure what I might be doing wrong,\nbut there appears to be some sort of inconsistency here, however, as\nselect int(10.1) gives me a syntax error and select 10.1::int does\nnot.\n\nSo what I'm saying is that for people that do not have a significant\nbackground in postgresql that the postquel behavior of treating 'a.b'\nthe same as b(a) is quite a surprise, whereas treating b(a) the same\nas a::b is not (since frequently \"types\" are treated like functions in\nmany languages).\n\nTherefore, I suggest that you bear these things in mind when\ndiscussing or contemplating how the syntax should work - you probably\nhave many more people coming *to* postgresql from other languages than\nyou have users relying on syntax features of postquel.\n\nIf I saw this behavior ( a.b also meaning b(a) ) in another SQL\nengine, I would consider it a thoroughly unintuitive wart, however I\nalso understand the need to balance this with existing applications.\n\n-- \nJon\n",
"msg_date": "Sun, 31 Oct 2010 20:48:32 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump"
},
{
"msg_contents": "Jon Nelson <[email protected]> wrote:\n \n> If I saw this behavior ( a.b also meaning b(a) ) in another SQL\n> engine, I would consider it a thoroughly unintuitive wart\n \nI think the main reason it has been kept is the converse -- if you\ndefine a function \"b\" which takes record \"a\" as its only parameter,\nyou have effectively created a \"generated column\" on any relation\nusing record type \"a\". Kind of. It won't show up in the display of\nthe relation's structure or in a SELECT *, and you can't use it in\nan unqualified reference; but you can use a.b to reference it, which\ncan be convenient.\n \nIt seems to me that this would be most useful in combination with\nthe inheritance model of PostgreSQL (when used for modeling object\nhierarchies rather than partitioning).\n \n-Kevin\n",
"msg_date": "Tue, 02 Nov 2010 16:34:25 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't\n\t grump"
},
{
"msg_contents": "On Tue, Nov 2, 2010 at 4:34 PM, Kevin Grittner\n<[email protected]> wrote:\n> Jon Nelson <[email protected]> wrote:\n>\n>> If I saw this behavior ( a.b also meaning b(a) ) in another SQL\n>> engine, I would consider it a thoroughly unintuitive wart\n>\n> I think the main reason it has been kept is the converse -- if you\n> define a function \"b\" which takes record \"a\" as its only parameter,\n> you have effectively created a \"generated column\" on any relation\n> using record type \"a\". Kind of. It won't show up in the display of\n> the relation's structure or in a SELECT *, and you can't use it in\n> an unqualified reference; but you can use a.b to reference it, which\n> can be convenient.\n\nAha. I think I understand, now. I also read up on CAST behavior\nchanges between 8.1 and 8.4 (what I'm using), and I found section\n34.4.2 \"SQL Functions on Composite Types\" quite useful.\n\nThanks!\n\n-- \nJon\n",
"msg_date": "Tue, 2 Nov 2010 17:17:00 -0500",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump"
},
{
"msg_contents": "On Fri, Oct 29, 2010 at 4:12 PM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Robert Haas <[email protected]> wrote:\n>>>> I think if I had to pick a proposal, I'd say we should disable #2\n>>>> for the specific case of casting a composite type to something\n>>>> else.\n>\n>>> Well, then let's do that. It's not the exact fix I'd pick, but\n>>> it's clearly better than nothing, so I'm willing to sign on to it\n>>> as a compromise position.\n>\n>> So, I'd rather scrap #2 entirely; but if that really would break\n>> much working code, +1 for ignoring it when it would cast a composite\n>> to something else.\n>\n> Well, assuming for the sake of argument that we have consensus on fixing\n> it like that, is this something we should just do in HEAD, or should we\n> back-patch into 8.4 and 9.0? We'll be hearing about it nigh\n> indefinitely if we don't, but on the other hand this isn't the kind of\n> thing we like to change in released branches.\n\nTrying to understand real world cases that this would break...would\nthe following now fail w/o explicit cast?\n\ncreate type x as (a int, b int);\nselect f((1,2));\n\nmerlin\n",
"msg_date": "Thu, 4 Nov 2010 11:24:05 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump"
},
{
"msg_contents": "Merlin Moncure <[email protected]> wrote:\n \n> Trying to understand real world cases that this would\n> break...would the following now fail w/o explicit cast?\n> \n> create type x as (a int, b int);\n> select f((1,2));\n \nIt already does:\n \ntest=# create type x as (a int, b int);\nCREATE TYPE\ntest=# select f((1,2));\nERROR: function f(record) does not exist\nLINE 1: select f((1,2));\n ^\nHINT: No function matches the given name and argument types. You\nmight need to add explicit type casts.\n\n-Kevin\n",
"msg_date": "Thu, 04 Nov 2010 10:35:08 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't\n\t grump"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Merlin Moncure <[email protected]> wrote:\n>> Trying to understand real world cases that this would\n>> break...would the following now fail w/o explicit cast?\n>> \n>> create type x as (a int, b int);\n>> select f((1,2));\n \n> It already does:\n\nI think Merlin probably meant to write \"select x((1,2))\", but that\ndoesn't work out-of-the-box either. What would be affected is\nsomething like\n\n\tselect text((1,2));\n\nwhich you'd now be forced to write as\n\n\tselect (1,2)::text;\n\n(or you could use CAST notation; but not text(row) or row.text).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Nov 2010 12:14:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump "
},
{
"msg_contents": "On Thu, Nov 4, 2010 at 12:14 PM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Merlin Moncure <[email protected]> wrote:\n>>> Trying to understand real world cases that this would\n>>> break...would the following now fail w/o explicit cast?\n>>>\n>>> create type x as (a int, b int);\n>>> select f((1,2));\n>\n>> It already does:\n>\n> I think Merlin probably meant to write \"select x((1,2))\", but that\n> doesn't work out-of-the-box either. What would be affected is\n> something like\n\nActually I didn't -- I left out that there was a function f taking x.\nI misunderstood your assertion above: \"The notation t(x) will be taken\nto mean x::t if there's no function t() taking x's type, but there is\na cast from x's type to t\".\n\nI thought you meant that it would no longer implicitly cast where it\nused to for record types, rather than the expression rewrite it was\ndoing (it just clicked). Anyways, no objection to the change, or even\nthe backpatch if you'd like to do that. FWIW.\n\nIf we ever have an IOCCCish contest for postgresql variant of SQL,\nthere are some real gems in this thread :-).\n\nmerlin\n",
"msg_date": "Thu, 4 Nov 2010 12:48:08 -0400",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump"
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n \n> What would be affected is something like\n> \n> \tselect text((1,2));\n> \n> which you'd now be forced to write as\n> \n> \tselect (1,2)::text;\n> \n> (or you could use CAST notation; but not text(row) or row.text).\n \nRight. As far as I'm aware, there are currently four ways to spell\n\"cast record to text\":\n \nselect cast((1,2) as text);\nselect (1,2)::text;\nselect text((1,2));\nselect ((1,2)).text;\n \nWe would be disallowing the last two spellings. They aren't that\nreliable as casts anyway, since whether they are taken as a cast\ndepends on the field names of the record.\n \ntest=# create type x as (a int, b int, c text);\nCREATE TYPE\ntest=# select cast((1,2,'three')::x as text);\n row\n-------------\n (1,2,three)\n(1 row)\n\ntest=# select (1,2,'three')::x::text;\n row\n-------------\n (1,2,three)\n(1 row)\n\ntest=# select text((1,2,'three')::x);\n text\n-------------\n (1,2,three)\n(1 row)\n\ntest=# select ((1,2,'three')::x).text;\n text\n-------------\n (1,2,three)\n(1 row)\n\ntest=# drop type x;\nDROP TYPE\ntest=# create type x as (a int, b int, text text);\nCREATE TYPE\ntest=# select cast((1,2,'three')::x as text);\n row\n-------------\n (1,2,three)\n(1 row)\n\ntest=# select (1,2,'three')::x::text;\n row\n-------------\n (1,2,three)\n(1 row)\n\ntest=# select text((1,2,'three')::x);\n text\n-------\n three\n(1 row)\n\ntest=# select ((1,2,'three')::x).text;\n text\n-------\n three\n(1 row)\n \nSo we would only be keeping cast syntax which can be counted on to\nretain cast semantics in the face of a column name change.\n \n-Kevin\n",
"msg_date": "Thu, 04 Nov 2010 11:49:28 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't\n\t grump"
},
{
"msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Thu, Nov 4, 2010 at 12:14 PM, Tom Lane <[email protected]> wrote:\n>> \"Kevin Grittner\" <[email protected]> writes:\n>>> Merlin Moncure <[email protected]> wrote:\n>>>> create type x as (a int, b int);\n>>>> select f((1,2));\n\n>> I think Merlin probably meant to write \"select x((1,2))\", but that\n>> doesn't work out-of-the-box either. �What would be affected is\n>> something like\n\n> Actually I didn't -- I left out that there was a function f taking x.\n\nAh. No, that would still work after the change. The case that I'm\nproposing to break is using function-ish notation to invoke a cast\nfrom a composite type to some other type whose name you use as if it\nwere a function. Even there, if you've created such a cast following\nthe usual convention of naming the cast function after the target type,\nit'll still act the same. It's just the built-in I/O-based casts that\nwill stop working this way (for lack of a matching underlying function).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Nov 2010 12:56:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump "
},
{
"msg_contents": "I wrote:\n> Ah. No, that would still work after the change. The case that I'm\n> proposing to break is using function-ish notation to invoke a cast\n> from a composite type to some other type whose name you use as if it\n> were a function. Even there, if you've created such a cast following\n> the usual convention of naming the cast function after the target type,\n> it'll still act the same. It's just the built-in I/O-based casts that\n> will stop working this way (for lack of a matching underlying function).\n\nHere's a proposed patch, sans documentation as yet.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 05 Nov 2010 15:17:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump "
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Here's a proposed patch, sans documentation as yet.\n \nI see you took the surgical approach -- only a cast from a record to\na character string type is affected. I agree that will fix the\ncomplaints I've seen, and I imagine you're keeping the change narrow\nto minimize the risk of breaking existing code, but this still looks\nweird to me:\n \ntest=# select ('2010-11-05'::date).text;\n text\n------------\n 2010-11-05\n(1 row)\n \nOh, well -- I guess you have to go well out of your way to shoot\nyour foot with such cases, so that's probably for the best.\n \n-Kevin\n",
"msg_date": "Fri, 05 Nov 2010 15:15:38 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't\n\t grump"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> Here's a proposed patch, sans documentation as yet.\n \n> I see you took the surgical approach -- only a cast from a record to\n> a character string type is affected. I agree that will fix the\n> complaints I've seen, and I imagine you're keeping the change narrow\n> to minimize the risk of breaking existing code, but this still looks\n> weird to me:\n \n> test=# select ('2010-11-05'::date).text;\n> text\n> ------------\n> 2010-11-05\n> (1 row)\n\nPerhaps, but it's been accepted since 7.3, with few complaints.\nI think we should only remove the behavior that was added in 8.4.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Nov 2010 16:23:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] typoed column name, but postgres didn't grump "
}
] |
[
{
"msg_contents": "Unfortunately I have not received a response on this question. Is more\ninformation needed? Does anyone have any ideas why the estimates may be\nbad? Or what I might be able to do to speed this up?\n\n \n\nthanks\n\n \n\nFrom: Ozer, Pam \nSent: Tuesday, October 26, 2010 4:27 PM\nTo: '[email protected]'\nSubject: Slow Query- Bad Row Estimate\n\n \n\nI have the following query:\n\n \n\nselect distinct Region.RegionShort as RegionShort\n\n,County.County as County \n\nfrom Region \n\njoin PostalCodeRegionCountyCity on\n(PostalCodeRegionCountyCity.RegionId=Region.RegionId) \n\njoin DealerGroupGeoCache on\n(DealerGroupGeoCache.RegionId=PostalCodeRegionCountyCity.RegionId) \n\n and\n(DealerGroupGeoCache.CountyId=PostalCodeRegionCountyCity.CountyId) \n\n and\n(DealerGroupGeoCache.CityId=PostalCodeRegionCountyCity.CityId) \n\njoin County on (PostalCodeRegionCountyCity.CountyId=County.CountyId) \n\nwhere (DealerGroupGeoCache.DealerGroupId=13) and\n(PostalCodeRegionCountyCity.RegionId=5)\n\n \n\nWith the following Explain:\n\n \n\n\"HashAggregate (cost=6743.96..6747.36 rows=34 width=11) (actual\ntime=854.407..854.425 rows=57 loops=1)\"\n\n\" -> Nested Loop (cost=0.00..6743.28 rows=34 width=11) (actual\ntime=0.062..762.698 rows=163491 loops=1)\"\n\n\" -> Nested Loop (cost=0.00..6679.19 rows=34 width=11) (actual\ntime=0.053..260.001 rows=163491 loops=1)\"\n\n\" -> Index Scan using region_i00 on region\n(cost=0.00..3.36 rows=1 width=5) (actual time=0.009..0.011 rows=1\nloops=1)\"\n\n\" Index Cond: (regionid = 5)\"\n\n\" -> Merge Join (cost=0.00..6672.43 rows=34 width=10)\n(actual time=0.040..189.654 rows=163491 loops=1)\"\n\n\" Merge Cond: ((postalcoderegioncountycity.countyid =\ndealergroupgeocache.countyid) AND (postalcoderegioncountycity.cityid =\ndealergroupgeocache.cityid))\"\n\n\" -> Index Scan using postalcoderegioncountycity_i06\non postalcoderegioncountycity (cost=0.00..716.05 rows=2616 width=10)\n(actual time=0.018..1.591 rows=2615 loops=1)\"\n\n\" Index Cond: (regionid = 5)\"\n\n\" -> Index Scan using dealergroupgeocache_i01 on\ndealergroupgeocache (cost=0.00..5719.56 rows=9055 width=10) (actual\ntime=0.015..87.689 rows=163491 loops=1)\"\n\n\" Index Cond:\n((dealergroupgeocache.dealergroupid = 13) AND\n(dealergroupgeocache.regionid = 5))\"\n\n\" -> Index Scan using county_i00 on county (cost=0.00..1.77\nrows=1 width=12) (actual time=0.002..0.002 rows=1 loops=163491)\"\n\n\" Index Cond: (county.countyid =\ndealergroupgeocache.countyid)\"\n\n\"Total runtime: 854.513 ms\"\n\n \n\nThe statistics have been recently updated and it does not change the bad\nestimates. \n\n \n\nThe DealerGroupGeoCache Table has 765392 Rows, And the query returns 57\nrows. 
\n\n \n\nI am not at all involved in the way the server is set up so being able\nto change the settings is not very likely unless it will make a huge\ndifference.\n\n \n\nIs there any way for me to speed up this query without changing the\nsettings?\n\n \n\nIf not what would you think the changes that would be needed?\n\n \n\nWe are currently running Postgres8.4 with the following settings.\n\n \n\nshared_buffers = 500MB #\nmin 128kB\n\neffective_cache_size = 1000MB\n\n \n\nmax_connections = 100\n\ntemp_buffers = 100MB\n\nwork_mem = 100MB\n\nmaintenance_work_mem = 500MB\n\nmax_files_per_process = 10000\n\nseq_page_cost = 1.0\n\nrandom_page_cost = 1.1\n\ncpu_tuple_cost = 0.1\n\ncpu_index_tuple_cost = 0.05\n\ncpu_operator_cost = 0.01\n\ndefault_statistics_target = 1000\n\nautovacuum_max_workers = 1\n\n \n\n#log_min_messages = DEBUG1\n\n#log_min_duration_statement = 1000\n\n#log_statement = all\n\n#log_temp_files = 128\n\n#log_lock_waits = on\n\n#log_line_prefix = '%m %u %d %h %p %i %c %l %s'\n\n#log_duration = on\n\n#debug_print_plan = on\n\n \n\nAny help is appreciated,\n\n \n\nPam\n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\nUnfortunately I have not\nreceived a response on this question. Is more information needed? Does anyone\nhave any ideas why the estimates may be bad? Or what I might be able to do to\nspeed this up?\n \nthanks\n \n\n\nFrom: Ozer, Pam \nSent: Tuesday, October 26, 2010 4:27 PM\nTo: '[email protected]'\nSubject: Slow Query- Bad Row Estimate\n\n\n \nI have the following query:\n \nselect distinct Region.RegionShort as RegionShort\n,County.County as County \nfrom Region \njoin PostalCodeRegionCountyCity on\n(PostalCodeRegionCountyCity.RegionId=Region.RegionId) \njoin DealerGroupGeoCache on (DealerGroupGeoCache.RegionId=PostalCodeRegionCountyCity.RegionId)\n\n \nand (DealerGroupGeoCache.CountyId=PostalCodeRegionCountyCity.CountyId) \n \nand (DealerGroupGeoCache.CityId=PostalCodeRegionCountyCity.CityId) \njoin County on\n(PostalCodeRegionCountyCity.CountyId=County.CountyId) \nwhere (DealerGroupGeoCache.DealerGroupId=13) and\n(PostalCodeRegionCountyCity.RegionId=5)\n \nWith the following Explain:\n \n\"HashAggregate (cost=6743.96..6747.36 rows=34\nwidth=11) (actual time=854.407..854.425 rows=57 loops=1)\"\n\" -> Nested Loop \n(cost=0.00..6743.28 rows=34 width=11) (actual time=0.062..762.698 rows=163491\nloops=1)\"\n\" -> \nNested Loop (cost=0.00..6679.19 rows=34 width=11) (actual\ntime=0.053..260.001 rows=163491 loops=1)\"\n\" \n-> Index Scan using region_i00 on region (cost=0.00..3.36 rows=1\nwidth=5) (actual time=0.009..0.011 rows=1 loops=1)\"\n\" \nIndex Cond: (regionid = 5)\"\n\" \n -> \nMerge Join (cost=0.00..6672.43 rows=34 width=10) (actual\ntime=0.040..189.654 rows=163491 loops=1)\"\n\" \nMerge Cond: ((postalcoderegioncountycity.countyid =\ndealergroupgeocache.countyid) AND (postalcoderegioncountycity.cityid =\ndealergroupgeocache.cityid))\"\n\" \n-> Index Scan using postalcoderegioncountycity_i06 on\npostalcoderegioncountycity (cost=0.00..716.05 rows=2616 width=10) (actual\ntime=0.018..1.591 rows=2615 loops=1)\"\n\" \nIndex Cond: (regionid = 5)\"\n\" \n-> Index Scan using dealergroupgeocache_i01 on\ndealergroupgeocache (cost=0.00..5719.56 rows=9055 width=10) (actual\ntime=0.015..87.689 rows=163491 loops=1)\"\n\" \nIndex Cond: ((dealergroupgeocache.dealergroupid = 13) AND\n(dealergroupgeocache.regionid = 5))\"\n\" -> \nIndex Scan using county_i00 on county (cost=0.00..1.77 rows=1 width=12)\n(actual time=0.002..0.002 rows=1 loops=163491)\"\n\" \nIndex Cond: 
(county.countyid = dealergroupgeocache.countyid)\"\n\"Total runtime: 854.513 ms\"\n \nThe statistics have been recently updated and it does not\nchange the bad estimates. \n \nThe DealerGroupGeoCache Table has 765392 Rows, And the\nquery returns 57 rows. \n \nI am not at all involved in the way the server is set up so\nbeing able to change the settings is not very likely unless it will make a huge\ndifference.\n \nIs there any way for me to speed up this query without\nchanging the settings?\n \nIf not what would you think the changes that would be\nneeded?\n \nWe are currently running Postgres8.4 with the following\nsettings.\n \nshared_buffers =\n500MB \n# min 128kB\neffective_cache_size = 1000MB\n \nmax_connections = 100\ntemp_buffers = 100MB\nwork_mem = 100MB\nmaintenance_work_mem = 500MB\nmax_files_per_process = 10000\nseq_page_cost = 1.0\nrandom_page_cost = 1.1\ncpu_tuple_cost = 0.1\ncpu_index_tuple_cost = 0.05\ncpu_operator_cost = 0.01\ndefault_statistics_target = 1000\nautovacuum_max_workers = 1\n \n#log_min_messages = DEBUG1\n#log_min_duration_statement = 1000\n#log_statement = all\n#log_temp_files = 128\n#log_lock_waits = on\n#log_line_prefix = '%m %u %d %h %p %i %c %l %s'\n#log_duration = on\n#debug_print_plan = on\n \nAny help is appreciated,\n \nPam",
"msg_date": "Fri, 29 Oct 2010 13:54:04 -0700",
"msg_from": "\"Ozer, Pam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Query- Bad Row Estimate"
},
{
"msg_contents": "On 10/29/10 1:54 PM, Ozer, Pam wrote:\n> \" -> Index Scan using dealergroupgeocache_i01 on\n> dealergroupgeocache (cost=0.00..5719.56 rows=9055 width=10) (actual\n> time=0.015..87.689 rows=163491 loops=1)\"\n\nThis appears to be your problem here.\n\na) when was dealergroupgeocache last ANALYZED?\n\nb) try increasing the stats_target on dealergroupid and regionid, to say\n500 and re-analyzing.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Fri, 29 Oct 2010 14:09:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Bad Row Estimate"
},
{
"msg_contents": "\"Ozer, Pam\" <[email protected]> writes:\n> Unfortunately I have not received a response on this question. Is more\n> information needed? Does anyone have any ideas why the estimates may be\n> bad? Or what I might be able to do to speed this up?\n\nThe most likely explanation for the bad rowcount estimates is that there\nis correlation between the regionid/countyid/cityid columns, only the\nplanner doesn't know it. Can you reformulate that data representation\nat all, or at least avoid depending on it as a join key?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 17:17:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Bad Row Estimate "
},
{
"msg_contents": "\"Ozer, Pam\" <[email protected]> wrote:\n \n> Is more information needed?\n \nTable layouts of the tables involved (including indexes) would be\ninteresting. A description of the machine would be useful,\nincluding OS, CPUs, RAM, and disk system.\n \nI know you said you might have trouble changing the config, but some\nof these seem problematic.\n \n> shared_buffers = 500MB\n> effective_cache_size = 1000MB\n> max_connections = 100\n> temp_buffers = 100MB\n \nSo you will allow up to 10GB to be tied up in space reserved for\ntemporary tables, but only expect to cache 1GB of your database? \nThat hardly seems optimal.\n \n> work_mem = 100MB\n \nThat could be another 10GB or more in work memory at any moment, if\neach connection was running a query which needed one work_mem\nallocation.\n \n> seq_page_cost = 1.0\n> random_page_cost = 1.1\n> cpu_tuple_cost = 0.1\n> cpu_index_tuple_cost = 0.05\n> cpu_operator_cost = 0.01\n \nThose settings are OK if the active portion of the database is fully\ncached. Is it?\n \n> default_statistics_target = 1000\n \nIf plan times get long with complex queries, you might want to back\nthat off; otherwise, OK.\n \n> autovacuum_max_workers = 1\n \nThat seems like a bad idea. Allowing multiple workers helps reduce\nbloat and improve statistics. If autovacuum is affecting\nperformance, you would be better off tweaking the autovacuum cost\nlimits.\n \n-Kevin\n",
"msg_date": "Fri, 29 Oct 2010 16:39:56 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Bad Row Estimate"
},
{
"msg_contents": "I am not sure what you mean by reformulate the data representation. Do\nyou mean do I have to join on all three columns? \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, October 29, 2010 2:18 PM\nTo: Ozer, Pam\nCc: [email protected]\nSubject: Re: [PERFORM] Slow Query- Bad Row Estimate \n\n\"Ozer, Pam\" <[email protected]> writes:\n> Unfortunately I have not received a response on this question. Is\nmore\n> information needed? Does anyone have any ideas why the estimates may\nbe\n> bad? Or what I might be able to do to speed this up?\n\nThe most likely explanation for the bad rowcount estimates is that there\nis correlation between the regionid/countyid/cityid columns, only the\nplanner doesn't know it. Can you reformulate that data representation\nat all, or at least avoid depending on it as a join key?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 14:45:52 -0700",
"msg_from": "\"Ozer, Pam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query- Bad Row Estimate "
},
{
"msg_contents": "I had just analyzed the dealergroupgeochache table. Wow. Thank you. That did the trick. Can you give me an explanation of the default_stats work? I don't think I completely understand what it means when you set it to 500 instead of 1000?\r\n\r\nthanks\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Josh Berkus\r\nSent: Friday, October 29, 2010 2:10 PM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] Slow Query- Bad Row Estimate\r\n\r\nOn 10/29/10 1:54 PM, Ozer, Pam wrote:\r\n> \" -> Index Scan using dealergroupgeocache_i01 on\r\n> dealergroupgeocache (cost=0.00..5719.56 rows=9055 width=10) (actual\r\n> time=0.015..87.689 rows=163491 loops=1)\"\r\n\r\nThis appears to be your problem here.\r\n\r\na) when was dealergroupgeocache last ANALYZED?\r\n\r\nb) try increasing the stats_target on dealergroupid and regionid, to say\r\n500 and re-analyzing.\r\n\r\n-- \r\n -- Josh Berkus\r\n PostgreSQL Experts Inc.\r\n http://www.pgexperts.com\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n",
"msg_date": "Fri, 29 Oct 2010 14:47:55 -0700",
"msg_from": "\"Ozer, Pam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query- Bad Row Estimate"
},
{
"msg_contents": "\"Ozer, Pam\" <[email protected]> writes:\n> I am not sure what you mean by reformulate the data representation. Do\n> you mean do I have to join on all three columns? \n\nNo, I was wondering if you could change things so that you join on just\none column, instead of two that each tell part of the truth.\n\nBTW, did you check your current statistics target? If it's small\nthen raising it might possibly fix the problem by itself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 17:50:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Bad Row Estimate "
},
{
"msg_contents": "On 10/29/10 2:47 PM, Ozer, Pam wrote:\n> I had just analyzed the dealergroupgeochache table. Wow. Thank you. That did the trick. Can you give me an explanation of the default_stats work? I don't think I completely understand what it means when you set it to 500 instead of 1000?\n\nYou're already at 1000?\n\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Fri, 29 Oct 2010 14:54:49 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Bad Row Estimate"
},
{
"msg_contents": "Yes. The default statistics target was at 1000. So that would be what the column was using correct?\r\n\r\n-----Original Message-----\r\nFrom: Josh Berkus [mailto:[email protected]] \r\nSent: Friday, October 29, 2010 2:55 PM\r\nTo: Ozer, Pam\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Slow Query- Bad Row Estimate\r\n\r\nOn 10/29/10 2:47 PM, Ozer, Pam wrote:\r\n> I had just analyzed the dealergroupgeochache table. Wow. Thank you. That did the trick. Can you give me an explanation of the default_stats work? I don't think I completely understand what it means when you set it to 500 instead of 1000?\r\n\r\nYou're already at 1000?\r\n\r\n\r\n-- \r\n -- Josh Berkus\r\n PostgreSQL Experts Inc.\r\n http://www.pgexperts.com\r\n",
"msg_date": "Fri, 29 Oct 2010 14:55:59 -0700",
"msg_from": "\"Ozer, Pam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query- Bad Row Estimate"
},
{
"msg_contents": "\"Ozer, Pam\" <[email protected]> writes:\n> Yes. The default statistics target was at 1000. So that would be what the column was using correct?\n\nBut you evidently didn't have stats. Perhaps you have autovacuum turned\noff? What PG version is this anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 18:03:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow Query- Bad Row Estimate "
},
{
"msg_contents": "Its 8.4. On the column stats_target=-1 before I changed it. AutoVacuum\nis set to on. I actually did a full analyze of the database and then\nran it again. So what am I missing?\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, October 29, 2010 3:03 PM\nTo: Ozer, Pam\nCc: Josh Berkus; [email protected]\nSubject: Re: [PERFORM] Slow Query- Bad Row Estimate \n\n\"Ozer, Pam\" <[email protected]> writes:\n> Yes. The default statistics target was at 1000. So that would be\nwhat the column was using correct?\n\nBut you evidently didn't have stats. Perhaps you have autovacuum turned\noff? What PG version is this anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Oct 2010 15:10:16 -0700",
"msg_from": "\"Ozer, Pam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow Query- Bad Row Estimate "
}
] |
[
{
"msg_contents": "Hi pgsql-performance,\n\nI was doing mass insertions on my desktop machine and getting at most\n1 MB/s disk writes (apart from occasional bursts of 16MB). Inserting 1\nmillion rows with a single integer (data+index 56 MB total) took over\n2 MINUTES! The only tuning I had done was shared_buffers=256MB. So I\ngot around to tuning the WAL writer and found that wal_buffers=16MB\nworks MUCH better. wal_sync_method=fdatasync also got similar results.\n\nFirst of all, I'm running PostgreSQL 9.0.1 on Arch Linux\n* Linux kernel 2.6.36 (also tested with 2.6.35.\n* Quad-core Phenom II\n* a single Seagate 7200RPM SATA drive (write caching on)\n* ext4 FS over LVM, with noatime, data=writeback\n\nI am creating a table like: create table foo(id integer primary key);\nThen measuring performance with the query: insert into foo (id) select\ngenerate_series(1, 1000000);\n\n130438,011 ms wal_buffers=64kB, wal_sync_method=open_datasync (all defaults)\n29306,847 ms wal_buffers=1MB, wal_sync_method=open_datasync\n4641,113 ms wal_buffers=16MB, wal_sync_method=open_datasync\n^ from 130s to 4.6 seconds by just changing wal_buffers.\n\n5528,534 ms wal_buffers=64kB, wal_sync_method=fdatasync\n4856,712 ms wal_buffers=16MB, wal_sync_method=fdatasync\n^ fdatasync works well even with small wal_buffers\n\n2911,265 ms wal_buffers=16MB, fsync=off\n^ Not bad, getting 60% of ideal throughput\n\nThese defaults are not just hurting bulk-insert performance, but also\neveryone who uses synchronus_commit=off\n\nUnless fdatasync is unsafe, I'd very much want to see it as the\ndefault for 9.1 on Linux (I don't know about other platforms). I\ncan't see any reasons why each write would need to be sync-ed if I\ndon't commit that often. Increasing wal_buffers probably has the same\neffect wrt data safety.\n\nAlso, the tuning guide on wiki is understating the importance of these\ntunables. Reading it I got the impression that some people change\nwal_sync_method but it's dangerous and it even literally claims about\nwal_buffers that \"1MB is enough for some large systems\"\n\nBut the truth is that if you want any write throughput AT ALL on a\nregular Linux desktop, you absolutely have to change one of these. If\nthe defaults were better, it would be enough to set\nsynchronous_commit=off to get all that your hardware has to offer.\n\nI was reading mailing list archives and didn't find anything against\nit either. Can anyone clarify the safety of wal_sync_method=fdatasync?\nAre there any reasons why it shouldn't be the default?\n\nRegards,\nMarti\n",
"msg_date": "Sun, 31 Oct 2010 14:13:39 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
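For anyone trying to reproduce these numbers, the settings being compared can be read back from a running server (a sketch; changing them still means editing postgresql.conf, and wal_buffers additionally requires a restart):

    SHOW wal_sync_method;      -- the platform-chosen default unless overridden
    SHOW wal_buffers;          -- 64kB on a stock build
    SHOW synchronous_commit;
    SHOW fsync;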
{
"msg_contents": "Marti Raudsepp wrote:\n> Unless fdatasync is unsafe, I'd very much want to see it as the\n> default for 9.1 on Linux (I don't know about other platforms). I\n> can't see any reasons why each write would need to be sync-ed if I\n> don't commit that often. Increasing wal_buffers probably has the same\n> effect wrt data safety.\n> \n\nWrites only are sync'd out when you do a commit, or the database does a \ncheckpoint.\n\nThis issue is a performance difference introduced by a recent change to \nLinux. open_datasync support was just added to Linux itself very \nrecently. It may be more safe than fdatasync on your platform. As new \ncode it may have bugs so that it doesn't really work at all under heavy \nload. No one has really run those tests yet. See \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for some background, and \nwelcome to the fun of being an early adopter. The warnings in the \ntuning guide are there for a reason--you're in untested territory now. \nI haven't finished validating whether I consider 2.6.32 safe for \nproduction use or not yet, and 2.6.36 is a solid year away from being on \nmy list for even considering it as a production database kernel. You \nshould proceed presuming that all writes are unreliable until proven \notherwise.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Sun, 31 Oct 2010 15:59:31 -0400",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "On Sunday 31 October 2010 20:59:31 Greg Smith wrote:\n> Writes only are sync'd out when you do a commit, or the database does a \n> checkpoint.\nHm? WAL is written out to disk after an the space provided by wal_buffers(def \n8) * XLOG_BLCKSZ (def 8192) is used. The default is 64kb which you reach \npretty quickly - especially after a checkpoint. With O_D?SYNC that will \nsynchronously get written out during a normal XLogInsert if hits a page \nboundary.\n*Additionally* its gets written out at a commit if sync commit is not on.\n\nNot having a real O_DSYNC on linux until recently makes it even more dubious \nto have it as a default...\n\n\nAndres\n",
"msg_date": "Mon, 1 Nov 2010 00:10:28 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
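The arithmetic behind that 64 kB figure, spelled out (assuming the stock 8192-byte XLOG_BLCKSZ and the default of 8 buffers mentioned above):

    SELECT 8 * 8192 AS wal_buffer_bytes;   -- 65536 bytes = 64 kB, exhausted after eight WAL pages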
{
"msg_contents": "On Sun, Oct 31, 2010 at 21:59, Greg Smith <[email protected]> wrote:\n> open_datasync support was just added to Linux itself very recently.\n\nOh I didn't realize it was a new feature. Indeed O_DSYNC support was\nadded in 2.6.33\n\nIt seems like bad behavior on PostgreSQL's part to default to new,\nuntested features.\n\nI have updated the tuning wiki page with my understanding of the problem:\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#wal_sync_method_wal_buffers\n\nRegards,\nMarti\n",
"msg_date": "Mon, 1 Nov 2010 03:29:41 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "On 01/11/10 08:59, Greg Smith wrote:\n> Marti Raudsepp wrote:\n>> Unless fdatasync is unsafe, I'd very much want to see it as the\n>> default for 9.1 on Linux (I don't know about other platforms). I\n>> can't see any reasons why each write would need to be sync-ed if I\n>> don't commit that often. Increasing wal_buffers probably has the same\n>> effect wrt data safety.\n>\n> Writes only are sync'd out when you do a commit, or the database does \n> a checkpoint.\n>\n> This issue is a performance difference introduced by a recent change \n> to Linux. open_datasync support was just added to Linux itself very \n> recently. It may be more safe than fdatasync on your platform. As \n> new code it may have bugs so that it doesn't really work at all under \n> heavy load. No one has really run those tests yet. See \n> http://wiki.postgresql.org/wiki/Reliable_Writes for some background, \n> and welcome to the fun of being an early adopter. The warnings in the \n> tuning guide are there for a reason--you're in untested territory \n> now. I haven't finished validating whether I consider 2.6.32 safe for \n> production use or not yet, and 2.6.36 is a solid year away from being \n> on my list for even considering it as a production database kernel. \n> You should proceed presuming that all writes are unreliable until \n> proven otherwise.\n>\n\nGreg,\n\nYour reply is possibly a bit confusingly worded - Marti was suggesting \nthat fdatasync be the default - so he wouldn't be a new adopter, since \nthis call has been implemented in the kernel for ages. I guess you were \nwanting to stress that *open_datasync* is the new kid, so watch out to \nsee if he bites...\n\nCheers\n\nMark\n\n\n\n\n\n\nOn 01/11/10 08:59, Greg Smith wrote:\nMarti\nRaudsepp wrote:\n \nUnless fdatasync is unsafe, I'd very much\nwant to see it as the\n \ndefault for 9.1 on Linux (I don't know about other platforms). I\n \ncan't see any reasons why each write would need to be sync-ed if I\n \ndon't commit that often. Increasing wal_buffers probably has the same\n \neffect wrt data safety.\n \n \n\nWrites only are sync'd out when you do a commit, or the database does a\ncheckpoint.\n \n\nThis issue is a performance difference introduced by a recent change to\nLinux. open_datasync support was just added to Linux itself very\nrecently. It may be more safe than fdatasync on your platform. As new\ncode it may have bugs so that it doesn't really work at all under heavy\nload. No one has really run those tests yet. See\nhttp://wiki.postgresql.org/wiki/Reliable_Writes for some background,\nand welcome to the fun of being an early adopter. The warnings in the\ntuning guide are there for a reason--you're in untested territory now. \nI haven't finished validating whether I consider 2.6.32 safe for\nproduction use or not yet, and 2.6.36 is a solid year away from being\non my list for even considering it as a production database kernel. \nYou should proceed presuming that all writes are unreliable until\nproven otherwise.\n \n\n\n\nGreg,\n\nYour reply is possibly a bit confusingly worded - Marti was suggesting\nthat fdatasync be the default - so he wouldn't be a new adopter, since\nthis call has been implemented in the kernel for ages. I guess you were\nwanting to stress that *open_datasync* is the new kid, so watch out to\nsee if he bites...\n\nCheers\n\nMark",
"msg_date": "Mon, 01 Nov 2010 18:03:33 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "Andres Freund wrote:\n> On Sunday 31 October 2010 20:59:31 Greg Smith wrote:\n> \n>> Writes only are sync'd out when you do a commit, or the database does a \n>> checkpoint.\n>> \n> Hm? WAL is written out to disk after an the space provided by wal_buffers(def \n> 8) * XLOG_BLCKSZ (def 8192) is used. The default is 64kb which you reach \n> pretty quickly - especially after a checkpoint.\n\nFair enough; I'm so used to bumping wal_buffers up to 16MB nowadays that \nI forget sometimes that people actually run with the default where this \nbecomes an important consideration.\n\n\n> Not having a real O_DSYNC on linux until recently makes it even more dubious \n> to have it as a default...\n> \n\nIf Linux is now defining O_DSYNC, and it's buggy, that's going to break \nmore software than just PostgreSQL. It wasn't defined before because it \ndidn't work. If the kernel developers have made changes to claim it's \nworking now, but it doesn't really, I would think they'd consider any \nreports of actual bugs here as important to fix. There's only so much \nthe database can do in the face of incorrect information reported by the \noperating system.\n\nAnyway, I haven't actually seen reports that proves there's any problem \nhere, I was just pointing out that we haven't seen any positive reports \nabout database stress testing on these kernel versions yet either. The \nchanges here are theoretically the right ones, and defaulting to safe \nwrites that flush out write caches is a long-term good thing.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 05 Nov 2010 14:10:36 -0700",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "On Fri, Nov 5, 2010 at 23:10, Greg Smith <[email protected]> wrote:\n>> Not having a real O_DSYNC on linux until recently makes it even more\n>> dubious to have it as a default...\n>>\n>\n> If Linux is now defining O_DSYNC\n\nWell, Linux always defined both O_SYNC and O_DSYNC, but they used to\nhave the same value. The defaults changed due to an unfortunate\nheuristic in PostgreSQL, which boils down to:\n\n#if O_DSYNC != O_SYNC\n#define DEFAULT_SYNC_METHOD SYNC_METHOD_OPEN_DSYNC\n#else\n#define DEFAULT_SYNC_METHOD SYNC_METHOD_FDATASYNC\n\n(see src/include/access/xlogdefs.h for details)\n\nIn fact, I was wrong in my earlier post. Linux always offered O_DSYNC\nbehavior. What's new is POSIX-compliant O_SYNC, and the fact that\nthese flags are now distinguished.\n\nHere's the change in Linux:\nhttp://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=6b2f3d1f769be5779b479c37800229d9a4809fc3\n\nRegards,\nMarti\n",
"msg_date": "Fri, 5 Nov 2010 23:21:50 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "On Friday 05 November 2010 22:10:36 Greg Smith wrote:\n> Andres Freund wrote:\n> > On Sunday 31 October 2010 20:59:31 Greg Smith wrote:\n> >> Writes only are sync'd out when you do a commit, or the database does a\n> >> checkpoint.\n> > \n> > Hm? WAL is written out to disk after an the space provided by\n> > wal_buffers(def 8) * XLOG_BLCKSZ (def 8192) is used. The default is 64kb\n> > which you reach pretty quickly - especially after a checkpoint.\n> Fair enough; I'm so used to bumping wal_buffers up to 16MB nowadays that\n> I forget sometimes that people actually run with the default where this\n> becomes an important consideration.\nIf you have relatively frequent checkpoints (quite a sensible in some \nenvironments given the burstiness/response time problems you can get) even a \n16MB wal_buffers can cause significantly more synchronous writes with O_DSYNC \nbecause of the amounts of wal traffic due to full_page_writes. For one the \nbackground wal writer wont keep up and for another all its writes will be \nsynchronous...\n\nIts simply a pointless setting.\n\n> > Not having a real O_DSYNC on linux until recently makes it even more\n> > dubious to have it as a default...\n> If Linux is now defining O_DSYNC, and it's buggy, that's going to break\n> more software than just PostgreSQL. It wasn't defined before because it\n> didn't work. If the kernel developers have made changes to claim it's\n> working now, but it doesn't really, I would think they'd consider any\n> reports of actual bugs here as important to fix. There's only so much\n> the database can do in the face of incorrect information reported by the\n> operating system.\nI don't see it being buggy so far. Its just doing what it should. Which is \nsimply a terrible thing for our implementation. Generally. Independent from \nlinux.\n\n> Anyway, I haven't actually seen reports that proves there's any problem\n> here, I was just pointing out that we haven't seen any positive reports\n> about database stress testing on these kernel versions yet either. The\n> changes here are theoretically the right ones, and defaulting to safe\n> writes that flush out write caches is a long-term good thing.\nI have seen several database which run under 2.6.33 with moderate to high load \nfor some time now. And two 2.6.35.\nLoads of problems, but none kernel related so far ;-)\n\nAndres\n",
"msg_date": "Fri, 5 Nov 2010 22:24:54 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "Marti Raudsepp wrote:\n> In fact, I was wrong in my earlier post. Linux always offered O_DSYNC\n> behavior. What's new is POSIX-compliant O_SYNC, and the fact that\n> these flags are now distinguished.\n> \n\nWhile I appreciate that you're trying to help here, I'm unconvinced \nyou've correctly diagnosed a couple of components to what's going on \nhere properly yet. Please refrain from making changes to popular \ndocuments like the tuning guide on the wiki based on speculation about \nwhat's happening. There's definitely at least one mistake in what you \nwrote there, and I just reverted the whole set of changes you made \naccordingly until this is sorted out better.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 05 Nov 2010 15:06:05 -0700",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "\n> Fair enough; I'm so used to bumping wal_buffers up to 16MB nowadays that\n> I forget sometimes that people actually run with the default where this\n> becomes an important consideration.\n\nDo you have any testing in favor of 16mb vs. lower/higher?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Fri, 05 Nov 2010 15:07:09 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "On Sat, Nov 6, 2010 at 00:06, Greg Smith <[email protected]> wrote:\n> Please refrain from making changes to popular documents like the\n> tuning guide on the wiki based on speculation about what's happening.\n\nI will grant you that the details were wrong, but I stand by the conclusion.\n\nI can state for a fact that PostgreSQL's default wal_sync_method\nvaries depending on the <fcntl.h> header.\nI have two PostgreSQL 9.0.1 builds, one with older\n/usr/include/bits/fcntl.h and one with newer.\n\nWhen I run \"show wal_sync_method;\" on one instance, I get fdatasync.\nOn the other one I get open_datasync.\n\nSo let's get down to code.\n\nOlder fcntl.h has:\n#define O_SYNC\t\t 010000\n# define O_DSYNC\tO_SYNC\t/* Synchronize data. */\n\nNewer has:\n#define O_SYNC\t 04010000\n# define O_DSYNC\t010000\t/* Synchronize data. */\n\nSo you can see that in the older header, O_DSYNC and O_SYNC are equal.\n\nsrc/include/access/xlogdefs.h does:\n\n#if defined(O_SYNC)\n#define OPEN_SYNC_FLAG O_SYNC\n...\n#if defined(OPEN_SYNC_FLAG)\n/* O_DSYNC is distinct? */\n#if O_DSYNC != OPEN_SYNC_FLAG\n#define OPEN_DATASYNC_FLAG O_DSYNC\n\n^ it's comparing O_DSYNC != O_SYNC\n\n#if defined(OPEN_DATASYNC_FLAG)\n#define DEFAULT_SYNC_METHOD SYNC_METHOD_OPEN_DSYNC\n#elif defined(HAVE_FDATASYNC)\n#define DEFAULT_SYNC_METHOD SYNC_METHOD_FDATASYNC\n\n^ depending on whether O_DSYNC and O_SYNC were equal, the default\nwal_sync_method will change.\n\nRegards,\nMarti\n",
"msg_date": "Sat, 6 Nov 2010 01:32:26 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "\n>> Fair enough; I'm so used to bumping wal_buffers up to 16MB nowadays that\n>> I forget sometimes that people actually run with the default where this\n>> becomes an important consideration.\n>\n> Do you have any testing in favor of 16mb vs. lower/higher?\n\n From some tests I had done some time ago, using separate spindles (RAID1) \nfor xlog, no battery, on 8.4, with stuff that generates lots of xlog \n(INSERT INTO SELECT) :\n\nWhen using a small wal_buffers, there was a problem when switching from \none xlog file to the next. Basically a fsync was issued, but most of the \nprevious log segment was still not written. So, postgres was waiting for \nthe fsync to finish. Of course, the default 64 kB of wal_buffers is \nquickly filled up, and all writes wait for the end of this fsync. This \ncaused hiccups in the xlog traffic, and xlog throughput wassn't nearly as \nhigh as the disks would allow. Sticking a sthetoscope on the xlog \nharddrives revealed a lot more random accesses that I would have liked \n(this is a much simpler solution than tracing the IOs, lol)\n\nI set wal writer delay to a very low setting (I dont remember which, \nperhaps 1 ms) so the walwriter was in effect constantly flushing the wal \nbuffers to disk. I also used fdatasync instead of fsync. Then I set \nwal_buffers to a rather high value, like 32-64 MB. Throughput and \nperformance were a lot better, and the xlog drives made a much more \n\"linear-access\" noise.\n\nWhat happened is that, since wal_buffers was larger than what the drives \ncan write in 1-2 rotations, it could absorb wal traffic during the time \npostgres waits for fdatasync / wal segment change, so the inserts would \nnot have to wait. And lowering the walwriter delay made it write something \non each disk rotation, so that when a COMMIT or segment switch came, most \nof the time, the WAL was already synced and there was no wait.\n\nJust my 2 c ;)\n",
"msg_date": "Sat, 06 Nov 2010 00:39:05 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "Marti Raudsepp wrote:\n> I will grant you that the details were wrong, but I stand by the conclusion.\n> I can state for a fact that PostgreSQL's default wal_sync_method\n> varies depending on the <fcntl.h> header.\n> \n\nYes; it's supposed to, and that logic works fine on some other \nplatforms. The question is exactly what the new Linux O_DSYNC behavior \nis doing, in regards to whether it flushes drive caches out or not. \nUntil you've quantified which of the cases do that--which is required \nfor reliable operation of PostgreSQL--and which don't, you don't have \nany data that can be used to draw a conclusion from. If some setups are \nfaster because they write less reliably, that doesn't automatically make \nthem the better choice.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Sun, 07 Nov 2010 18:35:29 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "On Monday 08 November 2010 00:35:29 Greg Smith wrote:\n> Marti Raudsepp wrote:\n> > I will grant you that the details were wrong, but I stand by the\n> > conclusion. I can state for a fact that PostgreSQL's default\n> > wal_sync_method varies depending on the <fcntl.h> header.\n> \n> Yes; it's supposed to, and that logic works fine on some other\n> platforms. The question is exactly what the new Linux O_DSYNC behavior\n> is doing, in regards to whether it flushes drive caches out or not.\n> Until you've quantified which of the cases do that--which is required\n> for reliable operation of PostgreSQL--and which don't, you don't have\n> any data that can be used to draw a conclusion from. If some setups are\n> faster because they write less reliably, that doesn't automatically make\n> them the better choice.\nI think thats FUD. Sorry.\n\nCan you explain to me why fsync() may/should/could be *any* less reliable than \nO_DSYNC? On *any* platform. Or fdatasync() in the special way its used with \npg, namely completely preallocated files.\n\nI think the reasons why O_DSYNC is, especially, but not only, in combination \nwith a small wal_buffers setting, slow in most circumstances are pretty clear.\n\nMaking a setting which is only supported on a small range of systems highest \nin the preferences list is even more doubtfull than the already strange choice \nof making O_DSYNC the default given the way it works (i.e. no reordering, \nsynchronous writes in the bgwriter, synchronous writes on wal_buffers pressure \netc).\n\nGreetings,\n\nAndres\n",
"msg_date": "Mon, 8 Nov 2010 00:45:23 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "Andres Freund wrote:\n> I think thats FUD. Sorry.\n> \n\nYes, there's plenty of uncertainty and doubt here, but not from me. The \ntest reports given so far have been so riddled with errors I don't trust \nany of them. \n\nAs a counter example showing my expectations here, the \"Testing \nSandforce SSD\" tests done by Yeb Havinga: \nhttp://archives.postgresql.org/message-id/[email protected] \nfollowed the right method for confirming both write integrity and \nperformance including pull the plug situations. Those I trusted. What \nMarti had posted, and what Phoronix investigated, just aren't that thorough.\n\n> Can you explain to me why fsync() may/should/could be *any* less reliable than \n> O_DSYNC? On *any* platform. Or fdatasync() in the special way its used with \n> pg, namely completely preallocated files.\n> \n\nIf the Linux kernel has done extra work so that O_DSYNC writes are \nforced to disk including a cache flush, but that isn't done for just \nfdatasync() calls, there could be difference here. The database still \nwouldn't work right in that case, because checkpoint writes are still \ngoing to be using fdatasync.\n\nI'm not sure what the actual behavior is supposed to be, but ultimately \nit doesn't matter. The history of the Linux kernel developers in this \narea has been so completely full of bugs and incomplete implementations \nthat I am working from the assumption that we know nothing about what \nactually works and what doesn't without doing careful real-world testing.\n\n> I think the reasons why O_DSYNC is, especially, but not only, in combination \n> with a small wal_buffers setting, slow in most circumstances are pretty clear.\n> \n\nWhere's your benchmarks proving it then? If you're right about this, \nand I'm not saying you aren't, it should be obvious in simple bechmarks \nby stepping through various sizes for wal_buffers and seeing the \nthroughput/latency situation improve. But since I haven't seen that \ndone, this one is still in the uncertainty & doubt bucket too. You're \nassuming one of the observed problems corresponds to this theorized \ncause. But you can't prove a performance change on theory. You have to \nisolate it and then you'll know. So long as there are multiple \nuncertainties going on here, I don't have any conclusion yet, just a \nlist of things to investigate that's far longer than the list of what's \nbeen looked at so far.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Sun, 07 Nov 2010 19:05:19 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "On Mon, Nov 8, 2010 at 01:35, Greg Smith <[email protected]> wrote:\n> Yes; it's supposed to, and that logic works fine on some other platforms.\n\nNo, the logic was broken to begin with. Linux technically supported\nO_DSYNC all along. PostgreSQL used fdatasync as the default. Now,\nbecause Linux added proper O_SYNC support, PostgreSQL suddenly prefers\nO_DSYNC over fdatasync?\n\n> Until you've\n> quantified which of the cases do that--which is required for reliable\n> operation of PostgreSQL--and which don't, you don't have any data that can\n> be used to draw a conclusion from. If some setups are faster because they\n> write less reliably, that doesn't automatically make them the better choice.\n\nI don't see your point. If fdatasync worked on Linux, AS THE DEFAULT,\nall the time until recently, then how does it all of a sudden need\nproof NOW?\n\nIf anything, the new open_datasync should be scrutinized because it\nWASN'T the default before and it hasn't gotten as much testing on\nLinux.\n\nRegards,\nMarti\n",
"msg_date": "Mon, 8 Nov 2010 04:35:46 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "On Mon, Nov 8, 2010 at 02:05, Greg Smith <[email protected]> wrote:\n> Where's your benchmarks proving it then? If you're right about this, and\n> I'm not saying you aren't, it should be obvious in simple bechmarks by\n> stepping through various sizes for wal_buffers and seeing the\n> throughput/latency situation improve.\n\nSince benchmarking is the easy part, I did that. I plotted the time\ntaken by inserting 2 million rows to a table with a single integer\ncolumn and no indexes (total 70MB). Entire script is attached. If you\ndon't agree with something in this benchmark, please suggest\nimprovements.\n\nChart: http://ompldr.org/vNjNiNQ/wal_sync_method1.png\nSpreadsheet: http://ompldr.org/vNjNiNg/wal_sync_method1.ods (the 2nd\nworksheet has exact measurements)\n\nThis is a different machine from the original post, but similar\nconfiguration. One 1TB 7200RPM Seagate Barracuda, no disk controller\ncache, 4G RAM, Phenom X4, Linux 2.6.36, PostgreSQL 9.0.1, Arch Linux.\n\nThis time I created a separate 20GB ext4 partition specially for\nPostgreSQL, with all default settings (shared_buffers=32MB). The\npartition is near the end of the disk, so hdparm gives a sequential\nread throughput of ~72 MB/s. I'm getting frequent checkpoint warnings,\nshould I try larger checkpoing_segments too?\n\nThe partition is re-created and 'initdb' is re-ran for each test, to\nprevent file system allocation from affecting results. I did two runs\nof all benchmarks. The points on the graph show a sum of INSERT time +\nCOMMIT time in seconds.\n\nOne surprising thing on the graph is a \"plateau\", where open_datasync\nperforms almost equally with wal_buffers=128kB and 256kB.\n\nAnother noteworthy difference (not visible on the graph) is that with\nopen_datasync -- but not fdatasync -- and wal_buffers=128M, INSERT\ntime keeps shrinking, but COMMIT takes longer. The total INSERT+COMMIT\ntime remains the same, however.\n\n----\n\nI have a few expendable hard drives here so I can test reliability by\npulling the SATA cable as well. Is this kind of testing useful? What\nworkloads do you suggest?\n\nRegards,\nMarti",
"msg_date": "Mon, 8 Nov 2010 16:23:27 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
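For readers without the attachment, the shape of what is being timed is roughly the following. This is not the attached script itself (which also recreates the partition and re-runs initdb between runs), and the table name is purely illustrative:

    CREATE TABLE t (n integer);                          -- single integer column, no indexes
    BEGIN;
    INSERT INTO t SELECT generate_series(1, 2000000);    -- INSERT time measured here
    COMMIT;                                              -- COMMIT time measured here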
{
"msg_contents": "\nOn Nov 7, 2010, at 6:35 PM, Marti Raudsepp wrote:\n\n> On Mon, Nov 8, 2010 at 01:35, Greg Smith <[email protected]> wrote:\n>> Yes; it's supposed to, and that logic works fine on some other platforms.\n> \n> No, the logic was broken to begin with. Linux technically supported\n> O_DSYNC all along. PostgreSQL used fdatasync as the default. Now,\n> because Linux added proper O_SYNC support, PostgreSQL suddenly prefers\n> O_DSYNC over fdatasync?\n> \n>> Until you've\n>> quantified which of the cases do that--which is required for reliable\n>> operation of PostgreSQL--and which don't, you don't have any data that can\n>> be used to draw a conclusion from. If some setups are faster because they\n>> write less reliably, that doesn't automatically make them the better choice.\n> \n> I don't see your point. If fdatasync worked on Linux, AS THE DEFAULT,\n> all the time until recently, then how does it all of a sudden need\n> proof NOW?\n> \n> If anything, the new open_datasync should be scrutinized because it\n> WASN'T the default before and it hasn't gotten as much testing on\n> Linux.\n> \n\nI agree. Im my opinion, the burden of proof lies with those contending that the default value should _change_ from fdatasync to O_DSYNC on linux. If the default changes, all power-fail testing and other reliability tests done prior on a hardware configuration may become invalid without users even knowing.\n\nUnfortunately, a code change in postgres is required to _prevent_ the default from changing when compiled and run against the latest kernels.\n\nSummary:\nUntil recently, there was code with a code comment in the Linux kernel that said \"For now, when the user asks for O_SYNC, we'll actually give O_DSYNC\". Linux has had O_DSYNC forever and ever, but not O_SYNC. \nIf O_DSYNC is preferred over fdatasync for Postgres xlog (as the code indicates), it should have been the preferred for years on Linux as well. If fdatasync has been the preferred method on Linux, and the O_SYNC = O_DSYNC test was for that, then the purpose behind the test has broken. \n\nNo matter how you slice it, the default on Linux is implicitly changing and the choice is to either:\n * Return the default to fdatasync\n * Let it implicitly change to O_DSYNC\n\nThe latter choice is the one that requires testing to prove that it is the proper and preferred default from the performance and data reliability POV. The former is the status quo -- but requires a code change.\n\n\n\n\n\n\n> Regards,\n> Marti\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 8 Nov 2010 10:13:43 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "Scott Carey <[email protected]> writes:\n> No matter how you slice it, the default on Linux is implicitly changing and the choice is to either:\n> * Return the default to fdatasync\n> * Let it implicitly change to O_DSYNC\n\n> The latter choice is the one that requires testing to prove that it is the proper and preferred default from the performance and data reliability POV.\n\nAnd, in fact, the game plan is to do that testing and see which default\nwe want. I think it's premature to argue further about this until we\nhave some test results.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Nov 2010 13:40:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1? "
},
{
"msg_contents": "Scott Carey wrote:\n> Im my opinion, the burden of proof lies with those contending that the default value should _change_ from fdatasync to O_DSYNC on linux. If the default changes, all power-fail testing and other reliability tests done prior on a hardware configuration may become invalid without users even knowing.\n> \n\nThis seems to be ignoring the fact that unless you either added a \nnon-volatile cache or specifically turned off all write caching on your \ndrives, the results of all power-fail testing done on earlier versions \nof Linux was that it failed. The default configuration of PostgreSQL on \nLinux has been that any user who has a simple SATA drive gets unsafe \nwrites, unless they go out of their way to prevent them.\n\nWhatever newer kernels do by default cannot be worse. The open question \nis whether it's still broken, in which case we might as well favor the \nknown buggy behavior rather than the new one, or whether everything has \nimproved enough to no longer be unsafe with the new defaults.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Mon, 08 Nov 2010 17:12:57 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "Hi,\n\nOn Monday 08 November 2010 23:12:57 Greg Smith wrote:\n> This seems to be ignoring the fact that unless you either added a \n> non-volatile cache or specifically turned off all write caching on your \n> drives, the results of all power-fail testing done on earlier versions \n> of Linux was that it failed. The default configuration of PostgreSQL on \n> Linux has been that any user who has a simple SATA drive gets unsafe \n> writes, unless they go out of their way to prevent them.\nWhich is about *no* argument in favor of any of the options, right?\n\n> Whatever newer kernels do by default cannot be worse. The open question \n> is whether it's still broken, in which case we might as well favor the \n> known buggy behavior rather than the new one, or whether everything has \n> improved enough to no longer be unsafe with the new defaults.\nEither I majorly misunderstand you, or ... I dont know.\n\nThere simply *is* no new implementation relevant for this discussion. Full \nStop. What changed is that O_DSYNC is defined differently from O_SYNC these days \nand O_SYNC actually does what it should. Which causes pg to move open_datasync \nfirst in the preference list doing what the option with the lowest preference \ndid up to now.\n\nThat does not *at all* change the earlier fdatasync() or fsync() \nimplementations/tests. It simply makes open_datasync the default doing what \nopen_sync did earlier.\nFor that note that open_sync was the method of *least* preference till now... \nAnd that fdatasync() thus was the default till now. Which it is not anymore.\n\nI don't argue *at all* that we have to test the change moving fdatasync before \nopen_datasync on the *other* operating systems. What I completely don't get is \nall that talking about data consistency on linux. Its simply irrelevant in \nthat context.\n\nAndres\n\n\n\n",
"msg_date": "Mon, 8 Nov 2010 23:32:10 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "On Mon, Nov 8, 2010 at 20:40, Tom Lane <[email protected]> wrote:\n>> The latter choice is the one that requires testing to prove that it is the proper and preferred default from the performance and data reliability POV.\n>\n> And, in fact, the game plan is to do that testing and see which default\n> we want. I think it's premature to argue further about this until we\n> have some test results.\n\nWho will be doing that testing? You said you're relying on Greg Smith\nto manage the testing, but he's obviously uninterested, so it seems\nunlikely that this will go anywhere.\n\nI posted my results with the simple INSERT test, but nobody cared. I\ncould do some pgbench runs, but I have no idea what parameters would\ngive useful results.\n\nMeanwhile, PostgreSQL performance is regressing and there's still no\nevidence that open_datasync is any safer.\n\nRegards,\nMarti\n",
"msg_date": "Sat, 13 Nov 2010 19:38:07 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "Marti Raudsepp <[email protected]> writes:\n> On Mon, Nov 8, 2010 at 20:40, Tom Lane <[email protected]> wrote:\n>> And, in fact, the game plan is to do that testing and see which default\n>> we want. I think it's premature to argue further about this until we\n>> have some test results.\n\n> Who will be doing that testing? You said you're relying on Greg Smith\n> to manage the testing, but he's obviously uninterested, so it seems\n> unlikely that this will go anywhere.\n\nWhat's your basis for asserting he's uninterested? Please have a little\npatience.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Nov 2010 13:01:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1? "
},
{
"msg_contents": "On Sat, Nov 13, 2010 at 20:01, Tom Lane <[email protected]> wrote:\n> What's your basis for asserting he's uninterested? Please have a little\n> patience.\n\nMy apologies, I was under the impression that he hadn't answered your\nrequest, but he did in the -hackers thread.\n\nRegards,\nMarti\n",
"msg_date": "Sun, 14 Nov 2010 11:47:16 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "Time for a deeper look at what's going on here...I installed RHEL6 Beta \n2 yesterday, on the presumption that since the release version just came \nout this week it was likely the same version Marti tested against. \nAlso, it was the one I already had a DVD to install for. This was on a \nlaptop with 7200 RPM hard drive, already containing an Ubuntu \ninstallation for comparison sake.\n\nInitial testing was done with the PostgreSQL test_fsync utility, just to \nget a gross idea of what situations the drives involved were likely \nflushing data to disk correctly during, and which it was impossible for \nthat to be true. 7200 RPM = 120 rotations/second, which puts an upper \nlimit of 120 true fsync executions per second. The test_fsync released \nwith PostgreSQL 9.0 now reports its value on the right scale that you \ncan directly compare against that (earlier versions reported \nseconds/commit, not commits/second).\n\nFirst I built test_fsync from inside of an existing PostgreSQL 9.1 HEAD \ncheckout:\n\n$ cd [PostgreSQL source code tree]\n$ cd src/tools/fsync/\n$ make\n\nAnd I started with looking at the Ubuntu system running ext3, which \nrepresents the status quo we've been seeing the past few years. \nInitially the drive write cache was turned on:\n\nLinux meddle 2.6.28-19-generic #61-Ubuntu SMP Wed May 26 23:35:15 UTC \n2010 i686 GNU/Linux\n$ cat /etc/lsb-release\nDISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=9.04\nDISTRIB_CODENAME=jaunty\nDISTRIB_DESCRIPTION=\"Ubuntu 9.04\"\n\n/dev/sda5 on / type ext3 (rw,relatime,errors=remount-ro)\n\n$ ./test_fsync\nLoops = 10000\n\nSimple write:\n 8k write 88476.784/second\n\nCompare file sync methods using one write:\n (unavailable: open_datasync)\n open_sync 8k write 1192.135/second\n 8k write, fdatasync 1222.158/second\n 8k write, fsync 1097.980/second\n\nCompare file sync methods using two writes:\n (unavailable: open_datasync)\n 2 open_sync 8k writes 527.361/second\n 8k write, 8k write, fdatasync 1105.204/second\n 8k write, 8k write, fsync 1084.050/second\n\nCompare open_sync with different sizes:\n open_sync 16k write 966.047/second\n 2 open_sync 8k writes 529.565/second\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written\non a different descriptor.)\n 8k write, fsync, close 1064.177/second\n 8k write, close, fsync 1042.337/second\n\nTwo notable things here. One, there is no open_datasync defined in this \nolder kernel. Two, all methods of commit give equally inflated commit \nrates, far faster than the drive is capable of. 
This proves this setup \nisn't flushing the drive's write cache after commit.\n\nYou can get safe behavior out of the old kernel by disabling its write \ncache:\n\n$ sudo /sbin/hdparm -W0 /dev/sda\n\n/dev/sda:\n setting drive write-caching to 0 (off)\n write-caching = 0 (off)\n\nLoops = 10000\n\nSimple write:\n 8k write 89023.413/second\n\nCompare file sync methods using one write:\n (unavailable: open_datasync)\n open_sync 8k write 106.968/second\n 8k write, fdatasync 108.106/second\n 8k write, fsync 104.238/second\n\nCompare file sync methods using two writes:\n (unavailable: open_datasync)\n 2 open_sync 8k writes 51.637/second\n 8k write, 8k write, fdatasync 109.256/second\n 8k write, 8k write, fsync 103.952/second\n\nCompare open_sync with different sizes:\n open_sync 16k write 109.562/second\n 2 open_sync 8k writes 52.752/second\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written\non a different descriptor.)\n 8k write, fsync, close 107.179/second\n 8k write, close, fsync 106.923/second\n\nAnd now results are as expected: just under 120/second.\n\nOnto RHEL6. Setup for this initial test was:\n\n$ uname -a\nLinux meddle 2.6.32-44.1.el6.x86_64 #1 SMP Wed Jul 14 18:51:29 EDT 2010 \nx86_64 x86_64 x86_64 GNU/Linux\n$ cat /etc/redhat-release\nRed Hat Enterprise Linux Server release 6.0 Beta (Santiago)\n$ mount\n/dev/sda7 on / type ext4 (rw)\n\nAnd I started with the write cache off to see a straight comparison \nagainst the above:\n\n$ sudo hdparm -W0 /dev/sda\n\n/dev/sda:\n setting drive write-caching to 0 (off)\n write-caching = 0 (off)\n$ ./test_fsync\nLoops = 10000\n\nSimple write:\n 8k write 104194.886/second\n\nCompare file sync methods using one write:\n open_datasync 8k write 97.828/second\n open_sync 8k write 109.158/second\n 8k write, fdatasync 109.838/second\n 8k write, fsync 20.872/second\n\nCompare file sync methods using two writes:\n 2 open_datasync 8k writes 53.902/second\n 2 open_sync 8k writes 53.721/second\n 8k write, 8k write, fdatasync 109.731/second\n 8k write, 8k write, fsync 20.918/second\n\nCompare open_sync with different sizes:\n open_sync 16k write 109.552/second\n 2 open_sync 8k writes 54.116/second\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written\non a different descriptor.)\n 8k write, fsync, close 20.800/second\n 8k write, close, fsync 20.868/second\n\nA few changes then. open_datasync is available now. It looks slightly \nslower than the alternatives on this test, but I didn't see that on the \nlater tests so I'm thinking that's just occasional run to run \nvariation. For some reason regular fsync is dramatically slower in this \nkernel than earlier ones. Perhaps a lot more metadata being flushed all \nthe way to the disk in that case now?\n\nThe issue that I think Marti has been concerned about is highlighted in \nthis interesting subset of the data:\n\nCompare file sync methods using two writes:\n 2 open_datasync 8k writes 53.902/second\n 8k write, 8k write, fdatasync 109.731/second\n\nThe results here aren't surprising; if you do two dsync writes, that \nwill take two disk rotations, while two writes followed a single sync \nonly takes one. 
But that does mean that in the case of small values for \nwal_buffers, like the default, you could easily end up paying a rotation \nsync penalty more than once per commit.\n\nNext question is what happens if I turn the drive's write cache back on:\n\n$ sudo hdparm -W1 /dev/sda\n\n/dev/sda:\n setting drive write-caching to 1 (on)\n write-caching = 1 (on)\n\n$ ./test_fsync\n\n[gsmith@meddle fsync]$ ./test_fsync\nLoops = 10000\n\nSimple write:\n 8k write 104198.143/second\n\nCompare file sync methods using one write:\n open_datasync 8k write 110.707/second\n open_sync 8k write 110.875/second\n 8k write, fdatasync 110.794/second\n 8k write, fsync 28.872/second\n\nCompare file sync methods using two writes:\n 2 open_datasync 8k writes 55.731/second\n 2 open_sync 8k writes 55.618/second\n 8k write, 8k write, fdatasync 110.551/second\n 8k write, 8k write, fsync 28.843/second\n\nCompare open_sync with different sizes:\n open_sync 16k write 110.176/second\n 2 open_sync 8k writes 55.785/second\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written\non a different descriptor.)\n 8k write, fsync, close 28.779/second\n 8k write, close, fsync 28.855/second\n\nThis is nice to see from a reliability perspective. On all three of the \nviable sync methods here, the speed seen suggests the drive's volatile \nwrite cache is being flushed after every commit. This is going to be \nbad for people who have gotten used to doing development on systems \nwhere that's not honored and they don't care, because this looks like a \n90% drop in performance on those systems. But since the new behavior is \nsafe and the earlier one was not, it's hard to get mad about it. \nDevelopers probably just need to be taught to turn synchronous_commit \noff to speed things up when playing with test data.\n\ntest_fsync writes to /var/tmp/test_fsync.out by default, not paying \nattention to what directory you're in. So to use it to test another \nfilesystem, you have to make sure to give it an explicit full path. \nNext I tested against the old Ubuntu partition that was formatted with \next3, with the write cache still on:\n\n# mount | grep /ext3\n/dev/sda5 on /ext3 type ext3 (rw)\n# ./test_fsync -f /ext3/test_fsync.out\nLoops = 10000\n\nSimple write:\n 8k write 100943.825/second\n\nCompare file sync methods using one write:\n open_datasync 8k write 106.017/second\n open_sync 8k write 108.318/second\n 8k write, fdatasync 108.115/second\n 8k write, fsync 105.270/second\n\nCompare file sync methods using two writes:\n 2 open_datasync 8k writes 53.313/second\n 2 open_sync 8k writes 54.045/second\n 8k write, 8k write, fdatasync 55.291/second\n 8k write, 8k write, fsync 53.243/second\n\nCompare open_sync with different sizes:\n open_sync 16k write 54.980/second\n 2 open_sync 8k writes 53.563/second\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written\non a different descriptor.)\n 8k write, fsync, close 105.032/second\n 8k write, close, fsync 103.987/second\n\nStrange...it looks like ext3 is executing cache flushes, too. Note that \nall of the \"Compare file sync methods using two writes\" results are half \nspeed now; it's as if ext3 is flushing the first write out immediately? \nThis result was unexpected, and I don't trust it yet; I want to validate \nthis elsewhere.\n\nWhat about XFS? 
That's a first class filesystem on RHEL6 too:\n\n[root@meddle fsync]# ./test_fsync -f /xfs/test_fsync.out\nLoops = 10000\n\nSimple write:\n 8k write 71878.324/second\n\nCompare file sync methods using one write:\n open_datasync 8k write 36.303/second\n open_sync 8k write 35.714/second\n 8k write, fdatasync 35.985/second\n 8k write, fsync 35.446/second\n\nI stopped that there, sick of waiting for it, as there's obviously some \nserious work (mounting options or such at a minimum) that needs to be \ndone before XFS matches the other two. Will return to that later.\n\nSo, what have we learned so far:\n\n1) On these newer kernels, both ext4 and ext3 seem to be pushing data \nout through the drive write caches correctly.\n\n2) On single writes, there's no performance difference between the main \nthree methods you might use, with the straight fsync method having a \nserious regression in this use case.\n\n3) WAL writes that are forced by wal_buffers filling will turn into a \ncommit-length write when using the new, default open_datasync. Using \nthe older default of fdatasync avoids that problem, in return for \ncausing WAL writes to pollute the OS cache. The main benefit of O_DSYNC \nwrites over fdatasync ones is avoiding the OS cache.\n\nI want to next go through and replicate some of the actual database \nlevel tests before giving a full opinion on whether this data proves \nit's worth changing the wal_sync_method detection. So far I'm torn \nbetween whether that's the right approach, or if we should just increase \nthe default value for wal_buffers to something more reasonable.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Tue, 16 Nov 2010 15:39:20 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
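A concrete form of the workaround mentioned at the end, for development boxes where honest cache flushing caps throughput near the drive's rotation rate (a sketch; the table matches the earlier generate_series test, and asynchronous commit risks losing the last few transactions after a crash, not corrupting the database):

    CREATE TABLE foo (id integer primary key);
    SET synchronous_commit = off;   -- per session; commits no longer wait for the WAL flush
    BEGIN;
    INSERT INTO foo (id) SELECT generate_series(1, 1000000);
    COMMIT;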
{
"msg_contents": "On Tue, Nov 16, 2010 at 3:39 PM, Greg Smith <[email protected]> wrote:\n> I want to next go through and replicate some of the actual database level\n> tests before giving a full opinion on whether this data proves it's worth\n> changing the wal_sync_method detection. So far I'm torn between whether\n> that's the right approach, or if we should just increase the default value\n> for wal_buffers to something more reasonable.\n\nHow about both?\n\nopen_datasync seems problematic for a number of reasons - you get an\nimmediate write-through whether you need it or not, including, as you\npoint out, the case where the you want to write several blocks at once\nand then force them all out together.\n\nAnd 64kB for a ring buffer just seems awfully small.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 16 Nov 2010 18:10:12 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "On 11/16/10 12:39 PM, Greg Smith wrote:\n> I want to next go through and replicate some of the actual database\n> level tests before giving a full opinion on whether this data proves\n> it's worth changing the wal_sync_method detection. So far I'm torn\n> between whether that's the right approach, or if we should just increase\n> the default value for wal_buffers to something more reasonable.\n\nWe'd love to, but wal_buffers uses sysV shmem.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Tue, 16 Nov 2010 15:25:13 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> On 11/16/10 12:39 PM, Greg Smith wrote:\n>> I want to next go through and replicate some of the actual database\n>> level tests before giving a full opinion on whether this data proves\n>> it's worth changing the wal_sync_method detection. So far I'm torn\n>> between whether that's the right approach, or if we should just increase\n>> the default value for wal_buffers to something more reasonable.\n\n> We'd love to, but wal_buffers uses sysV shmem.\n\nWell, we're not going to increase the default to gigabytes, but we could\nvery probably increase it by a factor of 10 or so without anyone\nsquawking. It's been awhile since I heard of anyone trying to run PG in\n4MB shmmax. How much would a change of that size help?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Nov 2010 18:31:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1? "
},
{
"msg_contents": "On Wed, Nov 17, 2010 at 01:31, Tom Lane <[email protected]> wrote:\n> Well, we're not going to increase the default to gigabytes, but we could\n> very probably increase it by a factor of 10 or so without anyone\n> squawking. It's been awhile since I heard of anyone trying to run PG in\n> 4MB shmmax. How much would a change of that size help?\n\nIn my testing, when running a large bulk insert query with fdatasync\non ext4, changing wal_buffers has very little effect:\nhttp://ompldr.org/vNjNiNQ/wal_sync_method1.png\n\n(More details at\nhttp://archives.postgresql.org/pgsql-performance/2010-11/msg00094.php\n)\n\nIt would take some more testing to say this conclusively, but looking\nat the raw data, there only seems to be an effect when moving from 8\nto 16MB. Could be different on other file systems though.\n\nRegards,\nMarti\n",
"msg_date": "Wed, 17 Nov 2010 02:01:21 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "Josh Berkus wrote:\n> On 11/16/10 12:39 PM, Greg Smith wrote:\n> \n>> I want to next go through and replicate some of the actual database\n>> level tests before giving a full opinion on whether this data proves\n>> it's worth changing the wal_sync_method detection. So far I'm torn\n>> between whether that's the right approach, or if we should just increase\n>> the default value for wal_buffers to something more reasonable.\n>> \n>\n> We'd love to, but wal_buffers uses sysV shmem.\n>\n> \nSpeaking of the SYSV SHMEM, is it possible to use huge pages?\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 16 Nov 2010 19:05:05 -0500",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "On Wednesday 17 November 2010 00:31:34 Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> > On 11/16/10 12:39 PM, Greg Smith wrote:\n> >> I want to next go through and replicate some of the actual database\n> >> level tests before giving a full opinion on whether this data proves\n> >> it's worth changing the wal_sync_method detection. So far I'm torn\n> >> between whether that's the right approach, or if we should just increase\n> >> the default value for wal_buffers to something more reasonable.\n> > \n> > We'd love to, but wal_buffers uses sysV shmem.\n> \n> Well, we're not going to increase the default to gigabytes\nEspecially not as I don't think it will have any effect after wal_segment_size \nas that will force a write-out anyway. Or am I misremembering the \nimplementation?\n\nAndres\n",
"msg_date": "Wed, 17 Nov 2010 01:30:09 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On Wednesday 17 November 2010 00:31:34 Tom Lane wrote:\n>> Well, we're not going to increase the default to gigabytes\n\n> Especially not as I don't think it will have any effect after wal_segment_size \n> as that will force a write-out anyway. Or am I misremembering the \n> implementation?\n\nWell, there's a forced fsync after writing the last page of an xlog\nfile, but I don't believe that proves that more than 16MB of xlog\nbuffers is useless. Other processes could still be busy filling the\nbuffers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Nov 2010 19:51:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1? "
},
{
"msg_contents": "On Wednesday 17 November 2010 01:51:28 Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On Wednesday 17 November 2010 00:31:34 Tom Lane wrote:\n> >> Well, we're not going to increase the default to gigabytes\n> > \n> > Especially not as I don't think it will have any effect after\n> > wal_segment_size as that will force a write-out anyway. Or am I\n> > misremembering the implementation?\n> \n> Well, there's a forced fsync after writing the last page of an xlog\n> file, but I don't believe that proves that more than 16MB of xlog\n> buffers is useless. Other processes could still be busy filling the\n> buffers.\nMaybe I am missing something, but I think the relevant AdvanceXLInsertBuffer() \nis currently called with WALInsertLock held?\n\nAndres\n\n",
"msg_date": "Wed, 17 Nov 2010 02:01:24 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n> On Wednesday 17 November 2010 01:51:28 Tom Lane wrote:\n>> Well, there's a forced fsync after writing the last page of an xlog\n>> file, but I don't believe that proves that more than 16MB of xlog\n>> buffers is useless. Other processes could still be busy filling the\n>> buffers.\n\n> Maybe I am missing something, but I think the relevant AdvanceXLInsertBuffer() \n> is currently called with WALInsertLock held?\n\nThe fsync is associated with the write, which is not done with insert\nlock held. We're not quite that dumb.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Nov 2010 20:04:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1? "
},
{
"msg_contents": "On Wednesday 17 November 2010 02:04:28 Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On Wednesday 17 November 2010 01:51:28 Tom Lane wrote:\n> >> Well, there's a forced fsync after writing the last page of an xlog\n> >> file, but I don't believe that proves that more than 16MB of xlog\n> >> buffers is useless. Other processes could still be busy filling the\n> >> buffers.\n> > \n> > Maybe I am missing something, but I think the relevant\n> > AdvanceXLInsertBuffer() is currently called with WALInsertLock held?\n> \n> The fsync is associated with the write, which is not done with insert\n> lock held. We're not quite that dumb.\nAh, I see. The XLogWrite in AdvanceXLInsertBuffer is only happening if the head \nof the buffer gets to the tail - which is more likely if the wal buffers are \nsmall...\n\nAndres\n\n",
"msg_date": "Wed, 17 Nov 2010 02:12:29 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "\n> Well, we're not going to increase the default to gigabytes, but we could\n> very probably increase it by a factor of 10 or so without anyone\n> squawking. It's been awhile since I heard of anyone trying to run PG in\n> 4MB shmmax. How much would a change of that size help?\n\nLast I checked, though, this comes out of the allocation available to\nshared_buffers. And there definitely are several OSes (several linuxes,\nOSX) still limited to 32MB by default.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n",
"msg_date": "Tue, 16 Nov 2010 17:17:42 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "I wrote:\n> The fsync is associated with the write, which is not done with insert\n> lock held. We're not quite that dumb.\n\nBut wait --- are you thinking of the call path where a write (and\npossible fsync) is forced during AdvanceXLInsertBuffer because there's\nno WAL buffer space left? If so, that's *exactly* the scenario that\ncan be expected to be less common with more buffer space.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Nov 2010 20:17:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1? "
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> Well, we're not going to increase the default to gigabytes, but we could\n>> very probably increase it by a factor of 10 or so without anyone\n>> squawking. It's been awhile since I heard of anyone trying to run PG in\n>> 4MB shmmax. How much would a change of that size help?\n\n> Last I checked, though, this comes out of the allocation available to\n> shared_buffers. And there definitely are several OSes (several linuxes,\n> OSX) still limited to 32MB by default.\n\nSure, but the current default is a measly 64kB. We could increase that\n10x for a relatively small percentage hit in the size of shared_buffers,\nif you suppose that there's 32MB available. The current default is set\nto still work if you've got only a couple of MB in SHMMAX.\n\nWhat we'd want is for initdb to adjust the setting as part of its\nprobing to see what SHMMAX is set to.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Nov 2010 20:22:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1? "
},
{
"msg_contents": "On Tue, Nov 16, 2010 at 6:25 PM, Josh Berkus <[email protected]> wrote:\n> On 11/16/10 12:39 PM, Greg Smith wrote:\n>> I want to next go through and replicate some of the actual database\n>> level tests before giving a full opinion on whether this data proves\n>> it's worth changing the wal_sync_method detection. So far I'm torn\n>> between whether that's the right approach, or if we should just increase\n>> the default value for wal_buffers to something more reasonable.\n>\n> We'd love to, but wal_buffers uses sysV shmem.\n\n<places tongue firmly in cheek>\n\nGee, too bad there's not some other shared-memory implementation we could use...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 16 Nov 2010 22:07:49 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "\nOn Nov 16, 2010, at 4:05 PM, Mladen Gogala wrote:\n\n> Josh Berkus wrote:\n>> On 11/16/10 12:39 PM, Greg Smith wrote:\n>> \n>>> I want to next go through and replicate some of the actual database\n>>> level tests before giving a full opinion on whether this data proves\n>>> it's worth changing the wal_sync_method detection. So far I'm torn\n>>> between whether that's the right approach, or if we should just increase\n>>> the default value for wal_buffers to something more reasonable.\n>>> \n>> \n>> We'd love to, but wal_buffers uses sysV shmem.\n>> \n>> \n> Speaking of the SYSV SHMEM, is it possible to use huge pages?\n\nRHEL 6 and friends have transparent hugepage support. I'm not sure if they yet transparently do it for SYSV SHMEM, but they do for most everything else. Sequential traversal of a process heap is several times faster with hugepages. Unfortunately, postgres doesn't organize its blocks in its shared_mem to be sequential for a relation. So it might not matter much.\n\n> \n> -- \n> \n> Mladen Gogala \n> Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com \n> The Leader in Integrated Media Intelligence Solutions\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 17 Nov 2010 11:26:30 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "\nOn Nov 16, 2010, at 12:39 PM, Greg Smith wrote:\n> \n> $ ./test_fsync\n> Loops = 10000\n> \n> Simple write:\n> 8k write 88476.784/second\n> \n> Compare file sync methods using one write:\n> (unavailable: open_datasync)\n> open_sync 8k write 1192.135/second\n> 8k write, fdatasync 1222.158/second\n> 8k write, fsync 1097.980/second\n> \n> Compare file sync methods using two writes:\n> (unavailable: open_datasync)\n> 2 open_sync 8k writes 527.361/second\n> 8k write, 8k write, fdatasync 1105.204/second\n> 8k write, 8k write, fsync 1084.050/second\n> \n> Compare open_sync with different sizes:\n> open_sync 16k write 966.047/second\n> 2 open_sync 8k writes 529.565/second\n> \n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> 8k write, fsync, close 1064.177/second\n> 8k write, close, fsync 1042.337/second\n> \n> Two notable things here. One, there is no open_datasync defined in this\n> older kernel. Two, all methods of commit give equally inflated commit\n> rates, far faster than the drive is capable of. This proves this setup\n> isn't flushing the drive's write cache after commit.\n\nNit: there is no open_sync, only open_dsync. Prior to recent kernels, only (semantically) open_dsync exists, labeled as open_sync. New kernels move that code to open_datasync and nave a NEW open_sync that supposedly flushes metadata properly. \n\n> \n> You can get safe behavior out of the old kernel by disabling its write\n> cache:\n> \n> $ sudo /sbin/hdparm -W0 /dev/sda\n> \n> /dev/sda:\n> setting drive write-caching to 0 (off)\n> write-caching = 0 (off)\n> \n> Loops = 10000\n> \n> Simple write:\n> 8k write 89023.413/second\n> \n> Compare file sync methods using one write:\n> (unavailable: open_datasync)\n> open_sync 8k write 106.968/second\n> 8k write, fdatasync 108.106/second\n> 8k write, fsync 104.238/second\n> \n> Compare file sync methods using two writes:\n> (unavailable: open_datasync)\n> 2 open_sync 8k writes 51.637/second\n> 8k write, 8k write, fdatasync 109.256/second\n> 8k write, 8k write, fsync 103.952/second\n> \n> Compare open_sync with different sizes:\n> open_sync 16k write 109.562/second\n> 2 open_sync 8k writes 52.752/second\n> \n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> 8k write, fsync, close 107.179/second\n> 8k write, close, fsync 106.923/second\n> \n> And now results are as expected: just under 120/second.\n> \n> Onto RHEL6. Setup for this initial test was:\n> \n> $ uname -a\n> Linux meddle 2.6.32-44.1.el6.x86_64 #1 SMP Wed Jul 14 18:51:29 EDT 2010\n> x86_64 x86_64 x86_64 GNU/Linux\n> $ cat /etc/redhat-release\n> Red Hat Enterprise Linux Server release 6.0 Beta (Santiago)\n> $ mount\n> /dev/sda7 on / type ext4 (rw)\n> \n> And I started with the write cache off to see a straight comparison\n> against the above:\n> \n> $ sudo hdparm -W0 /dev/sda\n> \n> /dev/sda:\n> setting drive write-caching to 0 (off)\n> write-caching = 0 (off)\n> $ ./test_fsync\n> Loops = 10000\n> \n> Simple write:\n> 8k write 104194.886/second\n> \n> Compare file sync methods using one write:\n> open_datasync 8k write 97.828/second\n> open_sync 8k write 109.158/second\n> 8k write, fdatasync 109.838/second\n> 8k write, fsync 20.872/second\n\nfsync is working now! 
flushing metadata properly reduces performance.\nHowever, shouldn't open_sync slow down vs open_datasync too and be similar to fsync?\n\nDid you recompile your test on the RHEL6 system? \nCode compiled on newer kernels will see O_DSYNC and O_SYNC as two separate sentinel values, lets call them 1 and 2 respectively. Code compiled against earlier kernels will see both O_DSYNC and O_SYNC as the same value, 1. So code compiled against older kernels, asking for O_SYNC on a newer kernel will actually get O_DSYNC behavior! This was intended. I can't find the link to the mail, but it was Linus' idea to make old code that expected the 'faster but incorrect' behavior to retain it on newer kernels. Only a recompile with newer header files will trigger the new behavior and expose the 'correct' open_sync behavior.\n\nThis will be 'fun' for postgres packagers and users -- data reliability behavior differs based on what kernel it is compiled against. Luckily, the xlogs only need open_datasync semantics.\n\n> \n> Compare file sync methods using two writes:\n> 2 open_datasync 8k writes 53.902/second\n> 2 open_sync 8k writes 53.721/second\n> 8k write, 8k write, fdatasync 109.731/second\n> 8k write, 8k write, fsync 20.918/second\n> \n> Compare open_sync with different sizes:\n> open_sync 16k write 109.552/second\n> 2 open_sync 8k writes 54.116/second\n> \n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> 8k write, fsync, close 20.800/second\n> 8k write, close, fsync 20.868/second\n> \n> A few changes then. open_datasync is available now. \n\nAgain, noting the detail that it is open_sync that is new (depending on where it is compiled). The old open_sync is relabeled to the new open_datasync. \n\n> It looks slightly\n> slower than the alternatives on this test, but I didn't see that on the\n> later tests so I'm thinking that's just occasional run to run\n> variation. For some reason regular fsync is dramatically slower in this\n> kernel than earlier ones. Perhaps a lot more metadata being flushed all\n> the way to the disk in that case now?\n> \n> The issue that I think Marti has been concerned about is highlighted in\n> this interesting subset of the data:\n> \n> Compare file sync methods using two writes:\n> 2 open_datasync 8k writes 53.902/second\n> 8k write, 8k write, fdatasync 109.731/second\n> \n> The results here aren't surprising; if you do two dsync writes, that\n> will take two disk rotations, while two writes followed a single sync\n> only takes one. 
But that does mean that in the case of small values for\n> wal_buffers, like the default, you could easily end up paying a rotation\n> sync penalty more than once per commit.\n> \n> Next question is what happens if I turn the drive's write cache back on:\n> \n> $ sudo hdparm -W1 /dev/sda\n> \n> /dev/sda:\n> setting drive write-caching to 1 (on)\n> write-caching = 1 (on)\n> \n> $ ./test_fsync\n> \n> [gsmith@meddle fsync]$ ./test_fsync\n> Loops = 10000\n> \n> Simple write:\n> 8k write 104198.143/second\n> \n> Compare file sync methods using one write:\n> open_datasync 8k write 110.707/second\n> open_sync 8k write 110.875/second\n> 8k write, fdatasync 110.794/second\n> 8k write, fsync 28.872/second\n> \n> Compare file sync methods using two writes:\n> 2 open_datasync 8k writes 55.731/second\n> 2 open_sync 8k writes 55.618/second\n> 8k write, 8k write, fdatasync 110.551/second\n> 8k write, 8k write, fsync 28.843/second\n> \n> Compare open_sync with different sizes:\n> open_sync 16k write 110.176/second\n> 2 open_sync 8k writes 55.785/second\n> \n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> 8k write, fsync, close 28.779/second\n> 8k write, close, fsync 28.855/second\n> \n> This is nice to see from a reliability perspective. On all three of the\n> viable sync methods here, the speed seen suggests the drive's volatile\n> write cache is being flushed after every commit. This is going to be\n> bad for people who have gotten used to doing development on systems\n> where that's not honored and they don't care, because this looks like a\n> 90% drop in performance on those systems.\n> But since the new behavior is\n> safe and the earlier one was not, it's hard to get mad about it.\n\nI would love to see the same tests in this detail for RHEL 5.5 (which has ext3, ext4, and xfs). I think this data reliability issue that requires turning off write cache was in the kernel ~2.6.26 to 2.6.31 range. Ubuntu doesn't really care about this stuff which is one reason I avoid it for a prod db. I know that xfs with the right settings on RHEL 5.5 does not require disabling the write cache.\n\n> Developers probably just need to be taught to turn synchronous_commit\n> off to speed things up when playing with test data.\n> \n\nAbsolutely.\n\n> test_fsync writes to /var/tmp/test_fsync.out by default, not paying\n> attention to what directory you're in. 
So to use it to test another\n> filesystem, you have to make sure to give it an explicit full path.\n> Next I tested against the old Ubuntu partition that was formatted with\n> ext3, with the write cache still on:\n> \n> # mount | grep /ext3\n> /dev/sda5 on /ext3 type ext3 (rw)\n> # ./test_fsync -f /ext3/test_fsync.out\n> Loops = 10000\n> \n> Simple write:\n> 8k write 100943.825/second\n> \n> Compare file sync methods using one write:\n> open_datasync 8k write 106.017/second\n> open_sync 8k write 108.318/second\n> 8k write, fdatasync 108.115/second\n> 8k write, fsync 105.270/second\n> \n> Compare file sync methods using two writes:\n> 2 open_datasync 8k writes 53.313/second\n> 2 open_sync 8k writes 54.045/second\n> 8k write, 8k write, fdatasync 55.291/second\n> 8k write, 8k write, fsync 53.243/second\n> \n> Compare open_sync with different sizes:\n> open_sync 16k write 54.980/second\n> 2 open_sync 8k writes 53.563/second\n> \n> Test if fsync on non-write file descriptor is honored:\n> (If the times are similar, fsync() can sync data written\n> on a different descriptor.)\n> 8k write, fsync, close 105.032/second\n> 8k write, close, fsync 103.987/second\n> \n> Strange...it looks like ext3 is executing cache flushes, too. Note that\n> all of the \"Compare file sync methods using two writes\" results are half\n> speed now; it's as if ext3 is flushing the first write out immediately?\n> This result was unexpected, and I don't trust it yet; I want to validate\n> this elsewhere.\n> \n> What about XFS? That's a first class filesystem on RHEL6 too:\nand available on later RHEL 5's.\n> \n> [root@meddle fsync]# ./test_fsync -f /xfs/test_fsync.out\n> Loops = 10000\n> \n> Simple write:\n> 8k write 71878.324/second\n> \n> Compare file sync methods using one write:\n> open_datasync 8k write 36.303/second\n> open_sync 8k write 35.714/second\n> 8k write, fdatasync 35.985/second\n> 8k write, fsync 35.446/second\n> \n> I stopped that there, sick of waiting for it, as there's obviously some\n> serious work (mounting options or such at a minimum) that needs to be\n> done before XFS matches the other two. Will return to that later.\n> \n\nYes, XFS requires some fiddling. Its metadata operations are also very slow.\n\n> So, what have we learned so far:\n> \n> 1) On these newer kernels, both ext4 and ext3 seem to be pushing data\n> out through the drive write caches correctly.\n> \n\nI suspect that some older kernels are partially OK here too. The kernel not flushing properly appeared near 2.6.25 ish.\n\n> 2) On single writes, there's no performance difference between the main\n> three methods you might use, with the straight fsync method having a\n> serious regression in this use case.\n\nI'll ask again -- did you compile the test on RHEL6 for the RHEL6 tests? The behavior in later kernels for this depends on what kernel it was compiled against for open_sync. For fsync, its not a regression, its actually flushing metadata properly and therefore actually robust if there is a power failure during a write. Even the write cache disabled case on the ubuntu kernel could leave a filesystem with corrupt data if the power failed in a metadata intensive write situation. \n\n> \n> 3) WAL writes that are forced by wal_buffers filling will turn into a\n> commit-length write when using the new, default open_datasync. Using\n> the older default of fdatasync avoids that problem, in return for\n> causing WAL writes to pollute the OS cache. 
The main benefit of O_DSYNC\n> writes over fdatasync ones is avoiding the OS cache.\n> \n> I want to next go through and replicate some of the actual database\n> level tests before giving a full opinion on whether this data proves\n> it's worth changing the wal_sync_method detection. So far I'm torn\n> between whether that's the right approach, or if we should just increase\n> the default value for wal_buffers to something more reasonable.\n> \n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 17 Nov 2010 12:19:10 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
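Since the quoted advice is to have developers turn off synchronous_commit rather than weaken wal_sync_method, a minimal sketch of what that looks like in practice may help; the table name here is made up purely for illustration:

SET synchronous_commit = off;  -- per session (or per transaction); for throwaway dev data only
BEGIN;
INSERT INTO scratch_load SELECT g FROM generate_series(1, 100000) g;  -- hypothetical test table
COMMIT;  -- returns without waiting for the WAL flush, so the ~120 commits/sec ceiling no longer applies

The trade-off is bounded: a crash can lose the last few hundred milliseconds of commits, but it cannot corrupt the database the way an unflushed drive write cache can.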
{
"msg_contents": "Scott Carey wrote:\n> Did you recompile your test on the RHEL6 system? \n\nOn both systems I showed, I checked out a fresh copy of the PostgreSQL \n9.1 HEAD from the git repo, and compiled that on the server, to make \nsure I was pulling in the appropriate kernel headers. I wasn't aware of \nexactly how the kernel sync stuff was refactored though, thanks for the \nconcise update on that. I can do similar tests on a RHEL5 system, but \nnot on the same hardware. Can only make my laptop boot so many \noperating systems at a time usefully.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 17 Nov 2010 16:24:54 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "On Wed, Nov 17, 2010 at 3:24 PM, Greg Smith <[email protected]> wrote:\n> Scott Carey wrote:\n>>\n>> Did you recompile your test on the RHEL6 system?\n>\n> On both systems I showed, I checked out a fresh copy of the PostgreSQL 9.1\n> HEAD from the git repo, and compiled that on the server, to make sure I was\n> pulling in the appropriate kernel headers. I wasn't aware of exactly how\n> the kernel sync stuff was refactored though, thanks for the concise update\n> on that. I can do similar tests on a RHEL5 system, but not on the same\n> hardware. Can only make my laptop boot so many operating systems at a time\n> usefully.\n\nOne thing to note is that where on a disk things sit can make a /huge/\ndifference - depending on if Ubuntu is /here/ and RHEL is /there/ and\nso on can make a factor of 2 or more difference. The outside tracks\nof most modern SATA disks can do around 120MB/s. The inside tracks\naren't even half of that.\n\n-- \nJon\n",
"msg_date": "Wed, 17 Nov 2010 15:36:44 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
},
{
"msg_contents": "Jon Nelson wrote:\n> One thing to note is that where on a disk things sit can make a /huge/\n> difference - depending on if Ubuntu is /here/ and RHEL is /there/ and\n> so on can make a factor of 2 or more difference. The outside tracks\n> of most modern SATA disks can do around 120MB/s. The inside tracks\n> aren't even half of that.\n> \n\nYou're talking about changes in sequential read and write speed due to \nZone Bit Recording (ZBR) AKA Zone Constant Angular Velocity (ZCAV). \nWhat I was measuring was commit latency time on small writes. That \ndoesn't change as you move around the disk, since it's tied to the raw \nrotation speed of the drive rather than density of storage in any zone. \nIf I get to something that's impacted by sequential transfers rather \nthan rotation time, I'll be sure to use the same section of disk for \nthat. It wasn't really necessary to get these initial gross numbers \nanyway. What I was looking for is the about 10:1 speedup seen on this \nhardware when the write cache is used, which could easily be seen even \nwere there ZBR differences involved.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Wed, 17 Nov 2010 17:48:31 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
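The repeated ceiling of "just under 120/second" in these runs is simply the spindle speed showing through: a commit that must reach the platter can complete at most once per revolution. Assuming the test drive is a 7200 RPM SATA disk (which is what the observed numbers suggest), the arithmetic is:

SELECT 7200 / 60.0 AS revolutions_per_second;  -- = 120, the synced-commit ceiling seen in test_fsync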
{
"msg_contents": "\nOn Nov 17, 2010, at 1:24 PM, Greg Smith wrote:\n\n> Scott Carey wrote:\n>> Did you recompile your test on the RHEL6 system? \n> \n> On both systems I showed, I checked out a fresh copy of the PostgreSQL \n> 9.1 HEAD from the git repo, and compiled that on the server, to make \n> sure I was pulling in the appropriate kernel headers. I wasn't aware of \n> exactly how the kernel sync stuff was refactored though, thanks for the \n> concise update on that. \n\nThanks!\n\nSo this could be another bug in Linux. Not entirely surprising.\nSince fsync/fdatasync relative performance isn't similar to open_sync/open_datasync relative performance on this test there is probably a bug that either hurts fsync, or one that is preventing open_sync from dealing with metadata properly. Luckily for the xlog, both of those can be avoided -- the real choice is fdatasync vs open_datasync. And both work in newer kernels or break in certain older ones.\n\n\n> I can do similar tests on a RHEL5 system, but \n> not on the same hardware. Can only make my laptop boot so many \n> operating systems at a time usefully.\n\nYeah, I understand. I might throw this at a RHEL5 system if I get a chance but I need one without a RAID card that is not in use. Hopefully it doesn't turn out that fdatasync is write-cache safe but open_sync/open_datasync isn't on that platform. It could impact the choice of a default value.\n\n> \n> -- \n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n> \n\n",
"msg_date": "Wed, 17 Nov 2010 15:20:15 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for\n 9.1?"
},
{
"msg_contents": "On Tue, Nov 16, 2010 at 8:22 PM, Tom Lane <[email protected]> wrote:\n> Josh Berkus <[email protected]> writes:\n>>> Well, we're not going to increase the default to gigabytes, but we could\n>>> very probably increase it by a factor of 10 or so without anyone\n>>> squawking. It's been awhile since I heard of anyone trying to run PG in\n>>> 4MB shmmax. How much would a change of that size help?\n>\n>> Last I checked, though, this comes out of the allocation available to\n>> shared_buffers. And there definitely are several OSes (several linuxes,\n>> OSX) still limited to 32MB by default.\n>\n> Sure, but the current default is a measly 64kB. We could increase that\n> 10x for a relatively small percentage hit in the size of shared_buffers,\n> if you suppose that there's 32MB available. The current default is set\n> to still work if you've got only a couple of MB in SHMMAX.\n>\n> What we'd want is for initdb to adjust the setting as part of its\n> probing to see what SHMMAX is set to.\n>\n> regards, tom lane\n>\n>\n\nIn all the performance tests that I have done, generally I get a good\nbang for the buck with wal_buffers set to 512kB in low memory cases\nand mostly I set it to 1MB which is probably enough for most of the\ncases even with high memory.\n\nThat 1/2 MB wont make drastic change on shared_buffers anyway (except\nfor edge cases) but will relieve the stress quite a bit on wal\nbuffers.\n\nRegards,\nJignesh\n",
"msg_date": "Fri, 19 Nov 2010 09:52:47 -0500",
"msg_from": "Jignesh Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Defaulting wal_sync_method to fdatasync on Linux for 9.1?"
}
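Pulling the wal_buffers side of the thread together: the shipped default under discussion is a measly 64kB, Tom suggests roughly a 10x bump, and Jignesh reports good results at 512kB to 1MB. A sketch of trying that (values taken from the messages above, not an official default; wal_buffers needs a server restart to change):

SHOW wal_buffers;
-- postgresql.conf, then restart:
--   wal_buffers = 512kB    -- low-memory systems
--   wal_buffers = 1MB      -- typical case per Jignesh's tests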
] |
[
{
"msg_contents": "Hi,\nI am trying to tune my libpq program for insert performance.\nWhen I tried inserting 1M rows into a table with a Primary Key, it took almost \n62 seconds.\nAfter adding a composite index of 2 columns, the performance degrades to 125 \nseconds.\nI am using COPY to insert all data in 1 transaction.\n\nthe table definition is \n\nCREATE TABLE ABC\n(\n event integer,\n innodeid character varying(80),\n innodename character varying(80),\n sourceid character varying(300),\n intime timestamp(3) without time zone,\n outnodeid character varying(80),\n outnodename character varying(80),\n destinationid character varying(300),\n outtime timestamp(3) without time zone,\n bytes integer,\n cdrs integer,\n tableindex integer NOT NULL,\n noofsubfilesinfile integer,\n recordsequenceintegerlist character varying(1000),\n CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)\n)\n\nthe index definition is \n\n\nCREATE INDEX \"PK_AT2\"\n ON ABC\n USING btree\n (event, tableindex)\nTABLESPACE sample;\n\nAny tip to increase the insert performance in this case?\n\nIt would also be helpful if someone can send comprehensive libpq programming \nguide for PG 9.x. Online doc of libpq is not much helpful for a newbie like me.\n\n\n Best Regards,\nDivakar\n\n\n\n \nHi,I am trying to tune my libpq program for insert performance.When I tried inserting 1M rows into a table with a Primary Key, it took almost 62 seconds.After adding a composite index of 2 columns, the performance degrades to 125 seconds.I am using COPY to insert all data in 1 transaction.the table definition is CREATE TABLE ABC( event integer, innodeid character varying(80), innodename character varying(80), sourceid character varying(300), intime timestamp(3) without time zone, outnodeid character varying(80), outnodename character varying(80), destinationid character varying(300), outtime timestamp(3) without time zone, bytes integer, cdrs\n integer, tableindex integer NOT NULL, noofsubfilesinfile integer, recordsequenceintegerlist character varying(1000), CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex))the index definition is CREATE INDEX \"PK_AT2\" ON ABC USING btree (event, tableindex)TABLESPACE sample;Any tip to increase the insert performance in this case?It would also be helpful if someone can send comprehensive libpq programming guide for PG 9.x. Online doc of libpq is not much helpful for a newbie like me. Best Regards,Divakar",
"msg_date": "Mon, 1 Nov 2010 05:49:14 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert performance with composite index"
},
{
"msg_contents": "On Mon, Nov 1, 2010 at 14:49, Divakar Singh <[email protected]> wrote:\n> I am trying to tune my libpq program for insert performance.\n> When I tried inserting 1M rows into a table with a Primary Key, it took\n> almost 62 seconds.\n> After adding a composite index of 2 columns, the performance degrades to 125\n> seconds.\n\nThis sounds a lot like the bottleneck I was hitting. What Linux kernel\nversion are you running?\n\nIf it's 2.6.33 or later, see:\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#wal_sync_method_wal_buffers\nhttp://archives.postgresql.org/pgsql-performance/2010-10/msg00602.php\n\nRegards,\nMarti\n",
"msg_date": "Mon, 1 Nov 2010 14:53:17 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "Hi Marti,\nThanks for your tips. i will try those.\nI am on Solaris Sparc 5.10\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Marti Raudsepp <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]\nSent: Mon, November 1, 2010 6:23:17 PM\nSubject: Re: [PERFORM] Insert performance with composite index\n\nOn Mon, Nov 1, 2010 at 14:49, Divakar Singh <[email protected]> wrote:\n> I am trying to tune my libpq program for insert performance.\n> When I tried inserting 1M rows into a table with a Primary Key, it took\n> almost 62 seconds.\n> After adding a composite index of 2 columns, the performance degrades to 125\n> seconds.\n\nThis sounds a lot like the bottleneck I was hitting. What Linux kernel\nversion are you running?\n\nIf it's 2.6.33 or later, see:\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#wal_sync_method_wal_buffers\n\nhttp://archives.postgresql.org/pgsql-performance/2010-10/msg00602.php\n\nRegards,\nMarti\n\n\n\n \nHi Marti,Thanks for your tips. i will try those.I am on Solaris Sparc 5.10 Best Regards,DivakarFrom: Marti Raudsepp <[email protected]>To: Divakar Singh <[email protected]>Cc: [email protected]: Mon, November 1, 2010 6:23:17 PMSubject: Re: [PERFORM] Insert performance with composite\n indexOn Mon, Nov 1, 2010 at 14:49, Divakar Singh <[email protected]> wrote:> I am trying to tune my libpq program for insert performance.> When I tried inserting 1M rows into a table with a Primary Key, it took> almost 62 seconds.> After adding a composite index of 2 columns, the performance degrades to 125> seconds.This sounds a lot like the bottleneck I was hitting. What Linux kernelversion are you running?If it's 2.6.33 or later, see:http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#wal_sync_method_wal_buffershttp://archives.postgresql.org/pgsql-performance/2010-10/msg00602.phpRegards,Marti",
"msg_date": "Mon, 1 Nov 2010 05:56:13 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "On Mon, Nov 1, 2010 at 14:56, Divakar Singh <[email protected]> wrote:\n> Thanks for your tips. i will try those.\n> I am on Solaris Sparc 5.10\n\nSorry, I assumed you were running Linux. But still it could be the\nsame problem as I had.\n\nBe careful changing your wal_sync_method, as it has the potential to\ncorrupt your database. I have no experience with Solaris.\n\nFor what it's worth, Jignesh Shah recommends using\nwal_sync_method=fsync on Solaris:\nhttp://blogs.sun.com/jkshah/entry/postgresql_on_solaris_better_use\nhttp://blogs.sun.com/jkshah/entry/postgresql_wal_sync_method_and\n\nRegards,\nMarti\n",
"msg_date": "Mon, 1 Nov 2010 15:04:46 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
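If someone does follow the Solaris links above, the change is one line in postgresql.conf, and it is worth verifying afterwards which value the running server actually picked up; sketch only, given Marti's warning that a wrong wal_sync_method can be unsafe:

-- postgresql.conf, per the blog posts referenced above:
--   wal_sync_method = fsync
SELECT name, setting, source
FROM pg_settings
WHERE name = 'wal_sync_method';  -- confirm what the running server is using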
{
"msg_contents": "2010/11/1 Divakar Singh <[email protected]>:\n> Hi,\n> I am trying to tune my libpq program for insert performance.\n> When I tried inserting 1M rows into a table with a Primary Key, it took\n> almost 62 seconds.\n> After adding a composite index of 2 columns, the performance degrades to 125\n> seconds.\n> I am using COPY to insert all data in 1 transaction.\n>\n> the table definition is\n>\n> CREATE TABLE ABC\n> (\n> event integer,\n> innodeid character varying(80),\n> innodename character varying(80),\n> sourceid character varying(300),\n> intime timestamp(3) without time zone,\n> outnodeid character varying(80),\n> outnodename character varying(80),\n> destinationid character varying(300),\n> outtime timestamp(3) without time zone,\n> bytes integer,\n> cdrs integer,\n> tableindex integer NOT NULL,\n> noofsubfilesinfile integer,\n> recordsequenceintegerlist character varying(1000),\n> CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)\n> )\n>\n> the index definition is\n>\n>\n> CREATE INDEX \"PK_AT2\"\n> ON ABC\n> USING btree\n> (event, tableindex)\n> TABLESPACE sample;\n\nIndexing twice the same column is useless. (perhaps move your PK to\nthe tablespace 'sample' is good too ?)\n\n>\n> Any tip to increase the insert performance in this case?\n\nIf you create or truncate table then copy to it, you should create\nindex after the copy order.\n\n>\n> It would also be helpful if someone can send comprehensive libpq programming\n> guide for PG 9.x. Online doc of libpq is not much helpful for a newbie like\n> me.\n>\n>\n> Best Regards,\n> Divakar\n>\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Mon, 1 Nov 2010 14:57:56 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
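Cédric's load-then-index suggestion is easy to try against the table from the first message; a rough sketch of the ordering, assuming the data really does arrive through COPY as described (the file path and format below are placeholders):

DROP INDEX IF EXISTS "PK_AT2";                -- drop the secondary index before the bulk load
COPY abc FROM '/path/to/data.csv' WITH CSV;   -- hypothetical input file
CREATE INDEX "PK_AT2" ON abc USING btree (event, tableindex) TABLESPACE sample;

Building the index once over the finished table is usually much cheaper than maintaining it row by row during the load; the primary key is a different question, since it also has to enforce uniqueness while the data streams in.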
{
"msg_contents": "Hi,\n\nOn Monday 01 November 2010 13:49:14 Divakar Singh wrote:\n> When I tried inserting 1M rows into a table with a Primary Key, it took\n> almost 62 seconds.\n> After adding a composite index of 2 columns, the performance degrades to\n> 125 seconds.\n> I am using COPY to insert all data in 1 transaction.\nWithout seeing your config its hard to suggest anything here. Did you do basic \ntuning of your pg installation?\n\nwal_buffers, shared_buffers, checkpoint_segments, maintenance_work_mem are \nlikely most relevant for that specific case.\n\nAndres\n",
"msg_date": "Mon, 1 Nov 2010 15:03:51 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
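Of the four settings Andres names, maintenance_work_mem is the only one that can be raised for just the loading session, which matters here because rebuilding the composite index is part of the cost being measured; the figure below is an arbitrary example, not a value taken from this thread:

SET maintenance_work_mem = '256MB';  -- session-local; helps CREATE INDEX sort in memory
-- then recreate "PK_AT2" as in the original post; the other three settings
-- (shared_buffers, wal_buffers, checkpoint_segments) live in postgresql.conf.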
{
"msg_contents": "On Monday 01 November 2010 15:08:10 Divakar Singh wrote:\n> here are my parameters:\nWhich pg version is that?\n",
"msg_date": "Mon, 1 Nov 2010 15:14:31 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "I am using 9.0.1\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Andres Freund <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]\nSent: Mon, November 1, 2010 7:44:31 PM\nSubject: Re: [PERFORM] Insert performance with composite index\n\nOn Monday 01 November 2010 15:08:10 Divakar Singh wrote:\n> here are my parameters:\nWhich pg version is that?\n\n\n\n \nI am using 9.0.1 Best Regards,DivakarFrom: Andres Freund <[email protected]>To: Divakar Singh <[email protected]>Cc: [email protected]: Mon, November 1, 2010 7:44:31 PMSubject: Re: [PERFORM] Insert performance with composite indexOn Monday 01 November 2010 15:08:10 Divakar Singh\n wrote:> here are my parameters:Which pg version is that?",
"msg_date": "Mon, 1 Nov 2010 07:16:49 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "On Monday 01 November 2010 15:16:49 Divakar Singh wrote:\n> I am using 9.0.1\nEither thats not true or you cargo culted loads of your config from a \nsignificantly older pg version.\n\nThings like:\n\n#bgwriter_delay = 200 # 10-10000 milliseconds between rounds\nbgwriter_lru_percent = 0 # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round\nbgwriter_all_maxpages = 0 # 0-1000 buffers max written/round\n\nmake me very suspicious.\n\nAs I said, I would check the variables I referenced in my first post...\n\nAndres\n",
"msg_date": "Mon, 1 Nov 2010 15:20:59 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "Do you mean these parameters have been removed starting 9.X?\nAs I see on \nhttp://www.network-theory.co.uk/docs/postgresql/vol3/BackgroundWriter.html \n,these parameters were added starting from 8.0 right?\n\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Andres Freund <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]\nSent: Mon, November 1, 2010 7:50:59 PM\nSubject: Re: [PERFORM] Insert performance with composite index\n\nOn Monday 01 November 2010 15:16:49 Divakar Singh wrote:\n> I am using 9.0.1\nEither thats not true or you cargo culted loads of your config from a \nsignificantly older pg version.\n\nThings like:\n\n#bgwriter_delay = 200 # 10-10000 milliseconds between rounds\nbgwriter_lru_percent = 0 # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round\nbgwriter_all_maxpages = 0 # 0-1000 buffers max written/round\n\nmake me very suspicious.\n\nAs I said, I would check the variables I referenced in my first post...\n\nAndres\n\n\n\n \nDo you mean these parameters have been removed starting 9.X?As I see on http://www.network-theory.co.uk/docs/postgresql/vol3/BackgroundWriter.html ,these parameters were added starting from 8.0 right? Best Regards,DivakarFrom: Andres Freund <[email protected]>To: Divakar Singh <[email protected]>Cc:\n [email protected]: Mon, November 1, 2010 7:50:59 PMSubject: Re: [PERFORM] Insert performance with composite indexOn Monday 01 November 2010 15:16:49 Divakar Singh wrote:> I am using 9.0.1Either thats not true or you cargo culted loads of your config from a significantly older pg version.Things like:#bgwriter_delay = 200 # 10-10000 milliseconds between roundsbgwriter_lru_percent = 0 # 0-100% of LRU buffers scanned/round#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/roundbgwriter_all_maxpages = 0 # 0-1000 buffers max written/roundmake me very\n suspicious.As I said, I would check the variables I referenced in my first post...Andres",
"msg_date": "Mon, 1 Nov 2010 07:28:19 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "On Monday 01 November 2010 15:28:19 Divakar Singh wrote:\n> Do you mean these parameters have been removed starting 9.X?\n> As I see on \n> http://www.network-theory.co.uk/docs/postgresql/vol3/BackgroundWriter.html \n> ,these parameters were added starting from 8.0 right?\nNo, I mean setting to 0 is a bit of a strange value in many situations.\n\nAnd you have comments like:\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n\nWhich reference config options which do not exist anymore. And you have \nshared_buffers = 81920\nWhich indicates that you started from 8.1/8.2 or so...\n\nAndres\n",
"msg_date": "Mon, 1 Nov 2010 15:34:17 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
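A quick way to spot a config file that was carried over from an older release, as Andres suspects here, is to list exactly which settings the running server read from it:

SELECT name, setting
FROM pg_settings
WHERE source = 'configuration file'
ORDER BY name;
-- Shows every setting the server actually took from postgresql.conf; it is often
-- easier to port just those few lines onto a fresh 9.0 sample config than to keep
-- editing a file inherited from 8.1/8.2.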
{
"msg_contents": "On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:\n> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)\n> > )\n> > the index definition is\n> > CREATE INDEX \"PK_AT2\"\n> > ON ABC\n> > USING btree\n> > (event, tableindex)\n> > TABLESPACE sample;\n> \n> Indexing twice the same column is useless. (perhaps move your PK to\n> the tablespace 'sample' is good too ?)\n\nwhy do you say that?\nthese are not the same indexes and they serve different purposes.\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n",
"msg_date": "Tue, 2 Nov 2010 10:45:39 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "2010/11/2 hubert depesz lubaczewski <[email protected]>:\n> On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:\n>> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)\n>> > )\n>> > the index definition is\n>> > CREATE INDEX \"PK_AT2\"\n>> > ON ABC\n>> > USING btree\n>> > (event, tableindex)\n>> > TABLESPACE sample;\n>>\n>> Indexing twice the same column is useless. (perhaps move your PK to\n>> the tablespace 'sample' is good too ?)\n>\n> why do you say that?\n> these are not the same indexes and they serve different purposes.\n\nGiven that tableindex is the PK column, I really like to now the usage\npattern for having it indexed twice.\n\n>\n> Best regards,\n>\n> depesz\n>\n> --\n> Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\n> jid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Tue, 2 Nov 2010 12:04:42 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "On Tue, Nov 02, 2010 at 12:04:42PM +0100, Cédric Villemain wrote:\n> 2010/11/2 hubert depesz lubaczewski <[email protected]>:\n> > On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:\n> >> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)\n> >> > )\n> >> > the index definition is\n> >> > CREATE INDEX \"PK_AT2\"\n> >> > ON ABC\n> >> > USING btree\n> >> > (event, tableindex)\n> >> > TABLESPACE sample;\n> >>\n> >> Indexing twice the same column is useless. (perhaps move your PK to\n> >> the tablespace 'sample' is good too ?)\n> >\n> > why do you say that?\n> > these are not the same indexes and they serve different purposes.\n> \n> Given that tableindex is the PK column, I really like to now the usage\n> pattern for having it indexed twice.\n\nselect * from table where event = 123 order by tableindex desc limit 50;\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n",
"msg_date": "Tue, 2 Nov 2010 12:53:27 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
},
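That query is exactly the pattern the two-column index serves, and the distinction from the primary key is easy to see in the plan; a sketch against the table from the start of the thread (the plan shape is what matters, not the costs):

EXPLAIN
SELECT * FROM abc
 WHERE event = 123
 ORDER BY tableindex DESC
 LIMIT 50;
-- A backward scan of "PK_AT2" (event, tableindex) satisfies both the filter and the
-- ORDER BY ... LIMIT in one pass; the primary key on tableindex alone would have to
-- walk and discard rows until 50 matches for that event turn up.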
{
"msg_contents": "May be a query that is filtering based on these 2 columns?\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Cédric Villemain <[email protected]>\nTo: [email protected]\nCc: Divakar Singh <[email protected]>; [email protected]\nSent: Tue, November 2, 2010 4:34:42 PM\nSubject: Re: [PERFORM] Insert performance with composite index\n\n2010/11/2 hubert depesz lubaczewski <[email protected]>:\n> On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:\n>> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)\n>> > )\n>> > the index definition is\n>> > CREATE INDEX \"PK_AT2\"\n>> > ON ABC\n>> > USING btree\n>> > (event, tableindex)\n>> > TABLESPACE sample;\n>>\n>> Indexing twice the same column is useless. (perhaps move your PK to\n>> the tablespace 'sample' is good too ?)\n>\n> why do you say that?\n> these are not the same indexes and they serve different purposes.\n\nGiven that tableindex is the PK column, I really like to now the usage\npattern for having it indexed twice.\n\n>\n> Best regards,\n>\n> depesz\n>\n> --\n> Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\n> jid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n>\n\n\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nMay be a query that is filtering based on these 2 columns? Best Regards,DivakarFrom: Cédric Villemain <[email protected]>To: [email protected]: Divakar Singh <[email protected]>; [email protected]: Tue, November 2, 2010 4:34:42 PMSubject: Re: [PERFORM] Insert performance with composite\n index2010/11/2 hubert depesz lubaczewski <[email protected]>:> On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:>> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)>> > )>> > the index definition is>> > CREATE INDEX \"PK_AT2\">> > ON ABC>> > USING btree>> > (event, tableindex)>> > TABLESPACE sample;>>>> Indexing twice the same column is useless. (perhaps move your PK to>> the tablespace 'sample' is good too ?)>> why do you say that?> these are not the same indexes and they serve different purposes.Given that tableindex is the PK column, I really like to now the usagepattern for having it indexed twice.>> Best\n regards,>> depesz>> --> Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/> jid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007>-- Cédric Villemain 2ndQuadranthttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 2 Nov 2010 05:51:07 -0700 (PDT)",
"msg_from": "Divakar Singh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert performance with composite index"
},
{
"msg_contents": "2010/11/2 hubert depesz lubaczewski <[email protected]>:\n> On Tue, Nov 02, 2010 at 12:04:42PM +0100, Cédric Villemain wrote:\n>> 2010/11/2 hubert depesz lubaczewski <[email protected]>:\n>> > On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:\n>> >> > CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)\n>> >> > )\n>> >> > the index definition is\n>> >> > CREATE INDEX \"PK_AT2\"\n>> >> > ON ABC\n>> >> > USING btree\n>> >> > (event, tableindex)\n>> >> > TABLESPACE sample;\n>> >>\n>> >> Indexing twice the same column is useless. (perhaps move your PK to\n>> >> the tablespace 'sample' is good too ?)\n>> >\n>> > why do you say that?\n>> > these are not the same indexes and they serve different purposes.\n>>\n>> Given that tableindex is the PK column, I really like to now the usage\n>> pattern for having it indexed twice.\n>\n> select * from table where event = 123 order by tableindex desc limit 50;\n\nCorrect. Thanks Hubert.\n\n-- \nCédric Villemain 2ndQuadrant\nhttp://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support\n",
"msg_date": "Tue, 2 Nov 2010 21:32:02 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance with composite index"
}
] |
[
{
"msg_contents": "Hello\n\nWe have an application that needs to do bulk reads of ENTIRE Postgres tables very quickly (i.e. select * from table). We have \nobserved that such sequential scans run two orders of magnitude slower than observed raw disk reads (5 MB/s versus 100 MB/s). Part \nof this is due to the storage overhead we have observed in Postgres. In the example below, it takes 1 GB to store 350 MB of nominal \ndata. However that suggests we would expect to get 35 MB/s bulk read rates.\n\nObservations using iostat and top during these bulk reads suggest that the queries are CPU bound, not I/O bound. In fact, repeating \nthe queries yields similar response times. Presumably if it were an I/O issue the response times would be much shorter the second \ntime through with the benefit of caching.\n\nWe have tried these simple queries using psql, JDBC, pl/java stored procedures, and libpq. In all cases the client code ran on the \nsame box as the server.\nWe have experimented with Postgres 8.1, 8.3 and 9.0.\n\nWe also tried playing around with some of the server tuning parameters such as shared_buffers to no avail.\n\nHere is uname -a for a machine we have tested on:\n\nLinux nevs-bdb1.fsl.noaa.gov 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux\n\nA sample dataset that reproduces these results looks like the following (there are no indexes):\n\nTable \"bulk_performance.counts\"\n Column | Type | Modifiers\n--------+---------+-----------\n i1 | integer |\n i2 | integer |\n i3 | integer |\n i4 | integer |\n\nThere are 22 million rows in this case.\n\nWe HAVE observed that summation queries run considerably faster. In this case,\n\nselect sum(i1), sum(i2), sum(i3), sum(i4) from bulk_performance.counts\n\nruns at 35 MB/s.\n\n\nOur business logic does operations on the resulting data such that the output is several orders of magnitude smaller than the input. \n So we had hoped that by putting our business logic into stored procedures (and thus drastically reducing the amount of data \nflowing to the client) our throughput would go way up. This did not happen.\n\nSo our questions are as follows:\n\nIs there any way using stored procedures (maybe C code that calls SPI directly) or some other approach to get close to the expected \n35 MB/s doing these bulk reads? Or is this the price we have to pay for using SQL instead of some NoSQL solution. (We actually \ntried Tokyo Cabinet and found it to perform quite well. However it does not measure up to Postgres in terms of replication, data \ninterrogation, community support, acceptance, etc).\n\nThanks\n\nDan Schaffer\nPaul Hamer\nNick Matheson",
"msg_date": "Mon, 01 Nov 2010 14:15:05 +0000",
"msg_from": "Dan Schaffer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help with bulk read performance"
},
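For anyone who wants to reproduce these measurements without the original data, a synthetic copy of the table is straightforward to build; the layout and row count follow the description above, while the generated values themselves are arbitrary:

CREATE SCHEMA bulk_performance;
CREATE TABLE bulk_performance.counts (i1 integer, i2 integer, i3 integer, i4 integer);

INSERT INTO bulk_performance.counts
SELECT g, g, g, g
FROM generate_series(1, 22000000) AS g;   -- ~22 million rows, roughly 350 MB of nominal data

SELECT sum(i1), sum(i2), sum(i3), sum(i4) FROM bulk_performance.counts;  -- the "fast" case above
SELECT * FROM bulk_performance.counts;                                   -- the slow full-table read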
{
"msg_contents": "On Nov 1, 2010, at 9:15 AM, Dan Schaffer wrote:\n> We have an application that needs to do bulk reads of ENTIRE Postgres tables very quickly (i.e. select * from table). We have observed that such sequential scans run two orders of magnitude slower than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is due to the storage overhead we have observed in Postgres. In the example below, it takes 1 GB to store 350 MB of nominal data. However that suggests we would expect to get 35 MB/s bulk read rates.\n> \n> Observations using iostat and top during these bulk reads suggest that the queries are CPU bound, not I/O bound. In fact, repeating the queries yields similar response times. Presumably if it were an I/O issue the response times would be much shorter the second time through with the benefit of caching.\n> \n> We have tried these simple queries using psql, JDBC, pl/java stored procedures, and libpq. In all cases the client code ran on the same box as the server.\n> We have experimented with Postgres 8.1, 8.3 and 9.0.\n> \n> We also tried playing around with some of the server tuning parameters such as shared_buffers to no avail.\n> \n> Here is uname -a for a machine we have tested on:\n> \n> Linux nevs-bdb1.fsl.noaa.gov 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux\n> \n> A sample dataset that reproduces these results looks like the following (there are no indexes):\n> \n> Table \"bulk_performance.counts\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> i1 | integer |\n> i2 | integer |\n> i3 | integer |\n> i4 | integer |\n> \n> There are 22 million rows in this case.\n> \n> We HAVE observed that summation queries run considerably faster. In this case,\n> \n> select sum(i1), sum(i2), sum(i3), sum(i4) from bulk_performance.counts\n> \n> runs at 35 MB/s.\n> \n> \n> Our business logic does operations on the resulting data such that the output is several orders of magnitude smaller than the input. So we had hoped that by putting our business logic into stored procedures (and thus drastically reducing the amount of data flowing to the client) our throughput would go way up. This did not happen.\n> \n> So our questions are as follows:\n> \n> Is there any way using stored procedures (maybe C code that calls SPI directly) or some other approach to get close to the expected 35 MB/s doing these bulk reads? Or is this the price we have to pay for using SQL instead of some NoSQL solution. (We actually tried Tokyo Cabinet and found it to perform quite well. However it does not measure up to Postgres in terms of replication, data interrogation, community support, acceptance, etc).\n\nHave you by chance tried EXPLAIN ANALYZE SELECT * FROM bulk_performance.counts? That will throw away the query results, which removes client-server considerations.\n\nAlso, when you tested raw disk IO, did you do it with an 8k block size? That's the default size of a Postgres block, so all of it's IO is done that way.\n\nWhat does iostat show you? Are you getting a decent number of read requests/second?\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Tue, 14 Dec 2010 01:54:04 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with bulk read performance"
},
{
"msg_contents": "On 11/1/2010 9:15 AM, Dan Schaffer wrote:\n> Hello\n>\n> We have an application that needs to do bulk reads of ENTIRE Postgres\n> tables very quickly (i.e. select * from table). We have observed that\n> such sequential scans run two orders of magnitude slower than observed\n> raw disk reads (5 MB/s versus 100 MB/s). Part of this is due to the\n> storage overhead we have observed in Postgres. In the example below, it\n> takes 1 GB to store 350 MB of nominal data. However that suggests we\n> would expect to get 35 MB/s bulk read rates.\n>\n> Observations using iostat and top during these bulk reads suggest that\n> the queries are CPU bound, not I/O bound. In fact, repeating the queries\n> yields similar response times. Presumably if it were an I/O issue the\n> response times would be much shorter the second time through with the\n> benefit of caching.\n>\n> We have tried these simple queries using psql, JDBC, pl/java stored\n> procedures, and libpq. In all cases the client code ran on the same box\n> as the server.\n> We have experimented with Postgres 8.1, 8.3 and 9.0.\n>\n> We also tried playing around with some of the server tuning parameters\n> such as shared_buffers to no avail.\n>\n> Here is uname -a for a machine we have tested on:\n>\n> Linux nevs-bdb1.fsl.noaa.gov 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20\n> 07:12:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux\n>\n> A sample dataset that reproduces these results looks like the following\n> (there are no indexes):\n>\n> Table \"bulk_performance.counts\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> i1 | integer |\n> i2 | integer |\n> i3 | integer |\n> i4 | integer |\n>\n> There are 22 million rows in this case.\n>\n> We HAVE observed that summation queries run considerably faster. In this\n> case,\n>\n> select sum(i1), sum(i2), sum(i3), sum(i4) from bulk_performance.counts\n>\n> runs at 35 MB/s.\n>\n>\n> Our business logic does operations on the resulting data such that the\n> output is several orders of magnitude smaller than the input. So we had\n> hoped that by putting our business logic into stored procedures (and\n> thus drastically reducing the amount of data flowing to the client) our\n> throughput would go way up. This did not happen.\n>\n> So our questions are as follows:\n>\n> Is there any way using stored procedures (maybe C code that calls SPI\n> directly) or some other approach to get close to the expected 35 MB/s\n> doing these bulk reads? Or is this the price we have to pay for using\n> SQL instead of some NoSQL solution. (We actually tried Tokyo Cabinet and\n> found it to perform quite well. However it does not measure up to\n> Postgres in terms of replication, data interrogation, community support,\n> acceptance, etc).\n>\n> Thanks\n>\n> Dan Schaffer\n> Paul Hamer\n> Nick Matheson\n>\n>\n>\n>\n\nWhoa... Deja Vu\n\nIs this the same thing Nick is working on? How'd he get along?\n\nhttp://archives.postgresql.org/message-id/[email protected]\n\n\n-Andy\n",
"msg_date": "Tue, 14 Dec 2010 09:27:19 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with bulk read performance"
},
{
"msg_contents": "On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:\n> Is this the same thing Nick is working on? How'd he get along?\n> \n> http://archives.postgresql.org/message-id/[email protected]\n\nSo it is. The one I replied to stood out because no one had replied to it; I didn't see the earlier email.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Tue, 14 Dec 2010 09:41:15 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with bulk read performance"
},
{
"msg_contents": "On 12/14/2010 9:41 AM, Jim Nasby wrote:\n> On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:\n>> Is this the same thing Nick is working on? How'd he get along?\n>>\n>> http://archives.postgresql.org/message-id/[email protected]\n>\n> So it is. The one I replied to stood out because no one had replied to it; I didn't see the earlier email.\n> --\n> Jim C. Nasby, Database Architect [email protected]\n> 512.569.9461 (cell) http://jim.nasby.net\n>\n>\n>\n\nOh.. I didn't even notice the date... I thought it was a new post.\n\nBut still... (and I'll cc Nick on this) I'd love to hear an update on \nhow this worked out.\n\nDid you get it to go fast? What'd you use? Did the project go over \nbudget and did you all get fired? COME ON MAN! We need to know! :-)\n\n-Andy\n",
"msg_date": "Tue, 14 Dec 2010 09:51:39 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with bulk read performance"
},
{
"msg_contents": "Hey all-\n\nGlad to know you are still interested... ;)\n\nDidn't mean to leave you hanging, the holiday and all have put some \nbumps in the road.\n\nDan my co-worker might be able to post some more detailed information \nhere, but here is a brief summary of what I am aware of:\n\n1. We have not tested any stored procedure/SPI based solutions to date.\n2. The COPY API has been the best of the possible solutions explored to \ndate.\n3. We were able to get rates on the order of 35 MB/s with the original \nproblem this way.\n4. Another variant of the problem we were working on included some \nmetadata fields and 300 float values (for this we tried three variants)\n a. 300 float values as columns\n b. 300 float in a float array column\n c. 300 floats packed into a bytea column\nLong story short on these three variants a and b largely performed the \nsame. C was the winner and seems to have improved the throughput on \nmultiple counts. 1. it reduces the data transmitted over the wire by a \nfactor of two (float columns and float arrays have a 2x overhead over \nthe raw data requirement.) 2. this reduction seems to have reduced the \ncpu burdens on the server side thus producing a better than the expected \n2x speed. I think the final numbers left us somewhere in the 80-90 MB/s.\n\nThanks again for all the input. If you have any other questions let us \nknow. Also if we get results for the stored procedure/SPI route we will \ntry and post, but the improvements via standard JDBC are such that we \naren't really pressed at this point in time to get more throughput so it \nmay not happen.\n\nCheers,\n\nNick\n> On 12/14/2010 9:41 AM, Jim Nasby wrote:\n>> On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:\n>>> Is this the same thing Nick is working on? How'd he get along?\n>>>\n>>> http://archives.postgresql.org/message-id/[email protected]\n>>\n>> So it is. The one I replied to stood out because no one had replied \n>> to it; I didn't see the earlier email.\n>> -- \n>> Jim C. Nasby, Database Architect [email protected]\n>> 512.569.9461 (cell) http://jim.nasby.net\n>>\n>>\n>>\n>\n> Oh.. I didn't even notice the date... I thought it was a new post.\n>\n> But still... (and I'll cc Nick on this) I'd love to hear an update on \n> how this worked out.\n>\n> Did you get it to go fast? What'd you use? Did the project go over \n> budget and did you all get fired? COME ON MAN! We need to know! :-)\n>\n> -Andy\n\n",
"msg_date": "Tue, 14 Dec 2010 16:07:24 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with bulk read performance"
},
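A minimal sketch of the packing idea in variant (c) above, using java.nio.ByteBuffer on the client side. The table and column names (obs, payload) and the connection details are invented for illustration; the point is only that the 300 floats travel and are stored as raw bytes in a bytea column rather than as text or a float array value.

import java.nio.ByteBuffer;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ByteaPackingSketch {
    // Pack a float[] into a byte[] (4 bytes per float, big-endian by default).
    static byte[] pack(float[] values) {
        ByteBuffer buf = ByteBuffer.allocate(values.length * 4);
        buf.asFloatBuffer().put(values);
        return buf.array();
    }

    // Unpack a byte[] read back from the bytea column into a float[].
    static float[] unpack(byte[] raw) {
        float[] values = new float[raw.length / 4];
        ByteBuffer.wrap(raw).asFloatBuffer().get(values);
        return values;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical table: CREATE TABLE obs (id int, payload bytea)
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            float[] row = new float[300];
            for (int i = 0; i < row.length; i++) row[i] = i * 0.5f;

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO obs (id, payload) VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setBytes(2, pack(row));   // stored as raw bytes, ~4 bytes/value
                ps.executeUpdate();
            }
        }
    }
}

ByteBuffer defaults to big-endian; as long as pack() and unpack() agree (or the order is pinned explicitly with order(ByteOrder.LITTLE_ENDIAN) on both sides), the round trip is lossless.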
{
"msg_contents": "BTW, have you tried prepared statements? bytea is most likely faster (in part) due to less parsing in the backend. Prepared statements would eliminate that parsing step.\n\nOn Dec 14, 2010, at 10:07 AM, Nick Matheson wrote:\n\n> Hey all-\n> \n> Glad to know you are still interested... ;)\n> \n> Didn't mean to leave you hanging, the holiday and all have put some bumps in the road.\n> \n> Dan my co-worker might be able to post some more detailed information here, but here is a brief summary of what I am aware of:\n> \n> 1. We have not tested any stored procedure/SPI based solutions to date.\n> 2. The COPY API has been the best of the possible solutions explored to date.\n> 3. We were able to get rates on the order of 35 MB/s with the original problem this way.\n> 4. Another variant of the problem we were working on included some metadata fields and 300 float values (for this we tried three variants)\n> a. 300 float values as columns\n> b. 300 float in a float array column\n> c. 300 floats packed into a bytea column\n> Long story short on these three variants a and b largely performed the same. C was the winner and seems to have improved the throughput on multiple counts. 1. it reduces the data transmitted over the wire by a factor of two (float columns and float arrays have a 2x overhead over the raw data requirement.) 2. this reduction seems to have reduced the cpu burdens on the server side thus producing a better than the expected 2x speed. I think the final numbers left us somewhere in the 80-90 MB/s.\n> \n> Thanks again for all the input. If you have any other questions let us know. Also if we get results for the stored procedure/SPI route we will try and post, but the improvements via standard JDBC are such that we aren't really pressed at this point in time to get more throughput so it may not happen.\n> \n> Cheers,\n> \n> Nick\n>> On 12/14/2010 9:41 AM, Jim Nasby wrote:\n>>> On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:\n>>>> Is this the same thing Nick is working on? How'd he get along?\n>>>> \n>>>> http://archives.postgresql.org/message-id/[email protected]\n>>> \n>>> So it is. The one I replied to stood out because no one had replied to it; I didn't see the earlier email.\n>>> -- \n>>> Jim C. Nasby, Database Architect [email protected]\n>>> 512.569.9461 (cell) http://jim.nasby.net\n>>> \n>>> \n>>> \n>> \n>> Oh.. I didn't even notice the date... I thought it was a new post.\n>> \n>> But still... (and I'll cc Nick on this) I'd love to hear an update on how this worked out.\n>> \n>> Did you get it to go fast? What'd you use? Did the project go over budget and did you all get fired? COME ON MAN! We need to know! :-)\n>> \n>> -Andy\n> \n\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n",
"msg_date": "Tue, 14 Dec 2010 10:39:38 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with bulk read performance"
},
{
"msg_contents": "Hi,\nMy name is Dan and I'm a co-worker of Nick Matheson who initially submitted this question (because the mail group had me blacklisted \nfor awhile for some reason).\n\n\nThank you for all of the suggestions. We were able to improve out bulk read performance from 3 MB/s to 60 MB/s (assuming the data \nare NOT in cache in both cases) by doing the following:\n\n1. Storing the data in a \"bytea\" column instead of an \"array\" column.\n2. Retrieving the data via the Postgres 9 CopyManager#copyOut(String sql, OutputStream stream) method\n\nThe key to the dramatic improvement appears to be the reduction in packing and unpacking time on the server and client, \nrespectively. The server packing occurs when the retrieved data are packed into a bytestream for sending across the network. \nStoring the data as a simple byte array reduces this time substantially. The client-side unpacking time is spent generating a \nResultSet object. By unpacking the bytestream into the desired arrays of floats by hand instead, this time became close to negligible.\n\nThe only downside of storing the data in byte arrays is the loss of transparency. That is, a simple \"select *\" of a few rows shows \nbytes instead of floats. We hope to mitigate this by writing a simple stored procedures that unpacks the bytes into floats.\n\nA couple of other results:\n\nIf the data are stored as a byte array but retrieve into a ResultSet, the unpacking time goes up by an order of magnitude and the \nobserved total throughput is 25 MB/s. If the data are stored in a Postgres float array and unpacked into a byte stream, the \nobserved throughput is 20 MB/s.\n\nDan (and Nick)\n\nAndy Colson wrote:\n> On 12/14/2010 9:41 AM, Jim Nasby wrote:\n>> On Dec 14, 2010, at 9:27 AM, Andy Colson wrote:\n>>> Is this the same thing Nick is working on? How'd he get along?\n>>>\n>>> http://archives.postgresql.org/message-id/[email protected]\n>>\n>> So it is. The one I replied to stood out because no one had replied to \n>> it; I didn't see the earlier email.\n>> -- \n>> Jim C. Nasby, Database Architect [email protected]\n>> 512.569.9461 (cell) http://jim.nasby.net\n>>\n>>\n>>\n> \n> Oh.. I didn't even notice the date... I thought it was a new post.\n> \n> But still... (and I'll cc Nick on this) I'd love to hear an update on \n> how this worked out.\n> \n> Did you get it to go fast? What'd you use? Did the project go over \n> budget and did you all get fired? COME ON MAN! We need to know! :-)\n> \n> -Andy\n>",
"msg_date": "Wed, 15 Dec 2010 20:15:14 +0000",
"msg_from": "Dan Schaffer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help with bulk read performance"
},
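A rough sketch of the retrieval-and-unpacking path described above, under a few assumptions that are not from the original post: it uses the driver's CopyManager with CSV output rather than the exact copyOut(String, OutputStream) call Dan mentions, it assumes a 9.0-era server where bytea_output defaults to hex (on 8.4 the escape format would need different decoding), and the obs(payload bytea) table is made up.

import java.io.BufferedReader;
import java.io.StringReader;
import java.io.StringWriter;
import java.nio.ByteBuffer;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyOutUnpackSketch {
    // Decode a bytea value printed in hex form ("\x0102...") into raw bytes.
    static byte[] fromHex(String field) {
        String hex = field.startsWith("\\x") ? field.substring(2) : field;
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret");
        CopyManager copy = ((PGConnection) conn).getCopyAPI();

        // Pull the hypothetical payload column in CSV form; buffered in a
        // StringWriter only for brevity, a file or pipe fits real data better.
        StringWriter sink = new StringWriter();
        copy.copyOut("COPY (SELECT payload FROM obs) TO STDOUT WITH CSV", sink);

        BufferedReader lines = new BufferedReader(new StringReader(sink.toString()));
        String line;
        while ((line = lines.readLine()) != null) {
            byte[] raw = fromHex(line);
            float[] values = new float[raw.length / 4];
            ByteBuffer.wrap(raw).asFloatBuffer().get(values);
            // ... hand the float[] to the business logic here ...
        }
        conn.close();
    }
}

CSV output is convenient here because the hex bytea string comes through without the backslash doubling that plain text-format COPY would apply.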
{
"msg_contents": "\n> If the data are stored as a byte array but retrieve into a ResultSet, \n> the unpacking time goes up by an order of magnitude and the\n> observed total throughput is 25 MB/s. If the data are stored in a \n> Postgres float array and unpacked into a byte stream, the\n> observed throughput is 20 MB/s.\n\n\nfloat <-> text conversions are very slow, this is in fact due to the \nmismatch between base-2 (IEEE754) and base-10 (text) floating point \nrepresentation, which needs very very complex calculations.\n",
"msg_date": "Thu, 16 Dec 2010 16:22:40 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with bulk read performance"
},
{
"msg_contents": "Pierre-\n\nI agree with your observation of float <-> text conversion costs, but in \nthis case Dan is talking about storing the raw float data (ie: 4 bytes \nper float) in a bytea array so there is only the conversion from java \nfloat[n] to java byte[4*n] which is not nearly as costly as float <-> \ntext conversion (especially if you leave it in architecture byte order).\n\nNick\n>\n>> If the data are stored as a byte array but retrieve into a ResultSet, \n>> the unpacking time goes up by an order of magnitude and the\n>> observed total throughput is 25 MB/s. If the data are stored in a \n>> Postgres float array and unpacked into a byte stream, the\n>> observed throughput is 20 MB/s.\n>\n>\n> float <-> text conversions are very slow, this is in fact due to the \n> mismatch between base-2 (IEEE754) and base-10 (text) floating point \n> representation, which needs very very complex calculations.\n\n",
"msg_date": "Fri, 17 Dec 2010 13:51:27 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help with bulk read performance"
}
] |
[
{
"msg_contents": "Hi\n\nSorry, my previous post haven't shown in this list, so I repost this\none. I have a sql become very slow after upgrade to 8.4.5.\nThe table creation sql like this.\n\nbegin;\nCREATE TABLE t_a (\n id INT NOT NULL PRIMARY KEY\n);\nCREATE TABLE t_b (\n id INT NOT NULL PRIMARY KEY\n);\nCREATE TABLE t_c (\n id INT NOT NULL PRIMARY KEY,\n flag boolean\n);\n\nINSERT\nINTO t_a\nSELECT s\nFROM generate_series(1, 600) s;\n\nINSERT\nINTO t_b\nSELECT s\nFROM generate_series(1, 3000) s;\n\nSELECT SETSEED(0.1);\nINSERT\nINTO t_c\nSELECT s, RANDOM()> 0.5\nFROM generate_series(1, 12000) s;\n\n-- insert some id not in t_b into t_a\nINSERT\nINTO t_a values( 20000);\n\nANALYZE t_a;\nANALYZE t_b;\nANALYZE t_c;\nend;\n\nThe query sql is like this.\n\nSELECT t_a.id FROM t_a\nWHERE EXISTS ( SELECT t_b.id FROM t_b, t_c\n WHERE t_b.id = t_a.id AND t_c.flag = 'f')\n\nI extract this part form a big query.I known this query is not very\ngood.The query plan is different between 8.1.10 and 8.4.5, 8.1.10 use\na index scan, 8.4.5 use two table scan.\n\nPostgreSQL 8.1.10 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)\n3.4.4 (mingw special)\nSeq Scan on t_a (cost=0.00..34.67 rows=300 width=4) (actual\ntime=0.025..5.350 rows=600 loops=1)\n Filter: (subplan)\n SubPlan\n -> Nested Loop (cost=0.00..248.44 rows=6042 width=4) (actual\ntime=0.007..0.007 rows=1 loops=601)\n -> Index Scan using t_b_pkey on t_b (cost=0.00..3.02\nrows=1 width=4) (actual time=0.003..0.003 rows=1 loops=601)\n Index Cond: (id = $0)\n -> Seq Scan on t_c (cost=0.00..185.00 rows=6042 width=0)\n(actual time=0.001..0.001 rows=1 loops=600)\n Filter: (NOT flag)\nTotal runtime: 5.574 ms\n\n\nPostgreSQL 8.4.5, compiled by Visual C++ build 1400, 32-bit\nNested Loop Semi Join (cost=0.00..134044.44 rows=601 width=4) (actual\ntime=0.033..17375.045 rows=600 loops=1)\n Join Filter: (t_a.id = t_b.id)\n -> Seq Scan on t_a (cost=0.00..9.01 rows=601 width=4) (actual\ntime=0.008..0.172 rows=601 loops=1)\n -> Nested Loop (cost=0.00..447282.00 rows=18126000 width=4)\n(actual time=0.011..20.922 rows=30460 loops=601)\n -> Seq Scan on t_c (cost=0.00..174.00 rows=6042 width=0)\n(actual time=0.004..0.011 rows=11 loops=601)\n Filter: (NOT flag)\n -> Seq Scan on t_b (cost=0.00..44.00 rows=3000 width=4)\n(actual time=0.004..0.652 rows=2756 loops=6642)\nTotal runtime: 17375.247 ms\n\nIf some t_a.id not in t_b.id 8.4.5 will become very slow. I confirmed\nthis behavior on default configuration.\n\nRegards,\nYao\n",
"msg_date": "Tue, 2 Nov 2010 10:50:22 +0800",
"msg_from": "Yaocl <[email protected]>",
"msg_from_op": true,
"msg_subject": "A query become very slow after upgrade from 8.1.10 to 8.4.5"
},
{
"msg_contents": "Yaocl <[email protected]> writes:\n> SELECT t_a.id FROM t_a\n> WHERE EXISTS ( SELECT t_b.id FROM t_b, t_c\n> WHERE t_b.id = t_a.id AND t_c.flag = 'f')\n\nI have some hopes for fixing this in 9.1, but nothing is going to happen\nin 8.4 or 9.0. In the meantime, is it intentional that there is no join\nclause between t_b and t_c? That'd be a lot more efficient as two\nseparate EXISTS tests, ie\n\nWHERE EXISTS ( SELECT 1 FROM t_b WHERE t_b.id = t_a.id ) AND\n EXISTS ( SELECT 1 FROM t_c WHERE t_c.flag = 'f')\n\nbut I wonder whether this query doesn't simply reflect a logic error on\nthe client side.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Nov 2010 18:30:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A query become very slow after upgrade from 8.1.10 to 8.4.5 "
},
{
"msg_contents": "On Wed, Nov 3, 2010 at 6:30 AM, Tom Lane <[email protected]> wrote:\n> Yaocl <[email protected]> writes:\n>> SELECT t_a.id FROM t_a\n>> WHERE EXISTS ( SELECT t_b.id FROM t_b, t_c\n>> WHERE t_b.id = t_a.id AND t_c.flag = 'f')\n>\n> I have some hopes for fixing this in 9.1, but nothing is going to happen\n> in 8.4 or 9.0. In the meantime, is it intentional that there is no join\n> clause between t_b and t_c? That'd be a lot more efficient as two\n> separate EXISTS tests, ie\n>\n> WHERE EXISTS ( SELECT 1 FROM t_b WHERE t_b.id = t_a.id ) AND\n> EXISTS ( SELECT 1 FROM t_c WHERE t_c.flag = 'f')\n>\n> but I wonder whether this query doesn't simply reflect a logic error on\n> the client side.\n>\n> regards, tom lane\n>\nYes ,If I moved t_c to another clause, It can resolve this problem.\nThe original sql is generate by a orm.Has some connection between t_b\nand t_c.Like this:\nAND exists ( SELECT t_b.id from t_b, t_c\n WHERE t_b.id = t_a.id\n AND t_c.some_field <= t_b.some_field )\nHow ever this is still a poor query.\n\nselect t_a.id from t_a\nwhere exists ( select t_b.id from t_b, t_c\n where t_b.id = t_a.id and t_c.flag = 'f'\n AND t_b.id < t_c.id)\n\n8.1.10\nSeq Scan on t_a (cost=0.00..50.87 rows=300 width=4) (actual\ntime=0.021..5.367 rows=600 loops=1)\n Filter: (subplan)\n SubPlan\n -> Nested Loop (cost=0.00..137.19 rows=2014 width=4) (actual\ntime=0.007..0.007 rows=1 loops=601)\n -> Index Scan using t_b_pkey on t_b (cost=0.00..3.02\nrows=1 width=4) (actual time=0.002..0.002 rows=1 loops=601)\n Index Cond: (id = $0)\n -> Index Scan using t_c_pkey on t_c (cost=0.00..109.00\nrows=2014 width=4) (actual time=0.003..0.003 rows=1 loops=600)\n Index Cond: (outer.id <= t_c.id)\n Filter: (NOT flag)\nTotal runtime: 5.564 ms\n\n8.4.5\nNested Loop Semi Join (cost=0.00..154223.42 rows=601 width=4) (actual\ntime=0.037..38727.982 rows=600 loops=1)\n Join Filter: (t_a.id = t_b.id)\n -> Seq Scan on t_a (cost=0.00..9.01 rows=601 width=4) (actual\ntime=0.011..0.237 rows=601 loops=1)\n -> Nested Loop (cost=0.00..182995.83 rows=6042000 width=4) (actual\ntime=0.009..49.298 rows=57594 loops=601)\n -> Seq Scan on t_c (cost=0.00..174.00 rows=6042 width=4)\n(actual time=0.005..0.085 rows=169 loops=601)\n Filter: (NOT flag)\n -> Index Scan using t_b_pkey on t_b (cost=0.00..17.76\nrows=1000 width=4) (actual time=0.007..0.132 rows=342 loops=101296)\n Index Cond: (t_b.id <= t_c.id)\nTotal runtime: 38728.263 ms\n\nfinally I rewritten the orm query to generate a different sql.\n\nRegards,\nYao\n",
"msg_date": "Wed, 3 Nov 2010 09:47:06 +0800",
"msg_from": "Yaocl <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: A query become very slow after upgrade from 8.1.10 to 8.4.5"
}
] |
[
{
"msg_contents": "I wrote a little Perl script, intended to test the difference that array \ninsert makes with PostgreSQL. Imagine my surprise when a single record \ninsert into a local database was faster than batches of 100 records. \nHere are the two respective routines:\n\nsub do_ssql\n{\n my $exec_cnt = 0;\n while (<FL>)\n {\n chomp;\n my @row = split /$sep/;\n $sth->execute(@row);\n $exec_cnt++;\n }\n $dbh->commit();\n print \"Insert executed $exec_cnt times.\\n\";\n}\n\nsub do_msql\n{\n my $bsz = shift;\n die(\"Batch size must be >0!\\n\") unless $bsz > 0;\n my $exec_cnt = 0;\n my @tstat;\n my (@col1, @col2, @col3);\n while (<FL>)\n {\n chomp;\n my @row = split /$sep/;\n push @col1, $row[0];\n push @col2, $row[1];\n push @col3, $row[2];\n if ($. % $bsz == 0)\n {\n my $tuples = $sth->execute_array({ArrayTupleStatus => \\@tstat},\n \\@col1, \\@col2, \\@col3);\n die(\"Multiple insert failed!\\n\") if (!$tuples);\n @col1 = ();\n @col2 = ();\n @col3 = ();\n $exec_cnt++;\n }\n\n }\n if ($#col1 >= 0)\n {\n my $tuples = $sth->execute_array({ArrayTupleStatus => \\@tstat},\n \\@col1, \\@col2, \\@col3);\n die(\"Multiple insert failed!\\n\") if (!$tuples);\n $exec_cnt++;\n }\n $dbh->commit();\n print \"Insert executed $exec_cnt times.\\n\";\n}\n\n\nThe variable \"$sth\" is a prepared statement handle.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 02 Nov 2010 15:46:09 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Array interface"
},
{
"msg_contents": "On 03/11/10 08:46, Mladen Gogala wrote:\n> I wrote a little Perl script, intended to test the difference that \n> array insert makes with PostgreSQL. Imagine my surprise when a single \n> record insert into a local database was faster than batches of 100 \n> records. Here are the two respective routines:\n\nInteresting - I'm seeing a modest but repeatable improvement with bigger \narray sizes (using attached program to insert pgbench_accounts) on an \nolder dual core AMD box with a single SATA drive running Ubuntu 10.04 i686.\n\n rows arraysize elapsed(s)\n1000000 1 161\n1000000 10 115\n1000000 100 110\n1000000 1000 109\n\nThis is *despite* the fact that tracing the executed sql (setting \nlog_min_duration_statement = 0) shows that there is *no* difference (i.e \n1000000 INSERT executions are performed) for each run. I'm guessing that \nsome perl driver overhead is being saved here.\n\nI'd be interested to see if you can reproduce the same or similar effect.\n\nWhat might also be interesting is doing each INSERT with an array-load \nof bind variables appended to the VALUES clause - as this will only do 1 \ninsert call per \"array\" of values.\n\nCheers\n\nMark",
"msg_date": "Wed, 10 Nov 2010 22:10:39 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array interface"
},
{
"msg_contents": "On 10/11/10 22:10, Mark Kirkwood wrote:\n>\n> What might also be interesting is doing each INSERT with an array-load \n> of bind variables appended to the VALUES clause - as this will only do \n> 1 insert call per \"array\" of values.\n\nThis is probably more like what you were expecting:\n\nrows num values tuples(i.e array size) elapsed\n1000000 1 106\n1000000 10 14\n1000000 100 13\n1000000 1000 14\n\nI didn't try to use PREPARE + EXECUTE here, just did \"do\" with the \nINSERT + array size number of VALUES tuples (execute could well be \nfaster). The obvious difference here is we only do rows/(array size) \nnumber of insert calls.\n\nCheers\n\nMark",
"msg_date": "Wed, 10 Nov 2010 22:42:39 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array interface"
}
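The same multi-row VALUES idea carries over to other client libraries as well; below is a small JDBC sketch of it, offered only as an analogue to the DBI test above. The table t(c1, c2, c3) and the batch size are invented for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class MultiValuesInsertSketch {
    public static void main(String[] args) throws Exception {
        int batch = 1000;   // tuples per INSERT statement (3000 bind parameters)
        // Build "INSERT INTO t (c1, c2, c3) VALUES (?,?,?),(?,?,?),..." once.
        StringBuilder sql = new StringBuilder("INSERT INTO t (c1, c2, c3) VALUES ");
        for (int i = 0; i < batch; i++) {
            sql.append(i == 0 ? "(?,?,?)" : ",(?,?,?)");
        }

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
                // Dummy data: bind 1000 rows of three integers, then execute once.
                for (int row = 0; row < batch; row++) {
                    ps.setInt(3 * row + 1, row);
                    ps.setInt(3 * row + 2, row * 2);
                    ps.setInt(3 * row + 3, row * 3);
                }
                ps.executeUpdate();   // one round trip for the whole batch
            }
            conn.commit();
        }
    }
}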
] |
[
{
"msg_contents": "I wrote a little Perl script, intended to test the difference that array \ninsert makes with PostgreSQL. Imagine my surprise when a single record \ninsert into a local database was faster than batches of 100 records. \nHere are the two respective routines:\n\nsub do_ssql\n{\n my $exec_cnt = 0;\n while (<FL>)\n {\n chomp;\n my @row = split /$sep/;\n $sth->execute(@row);\n $exec_cnt++;\n }\n $dbh->commit();\n print \"Insert executed $exec_cnt times.\\n\";\n}\n\nsub do_msql\n{\n my $bsz = shift;\n die(\"Batch size must be >0!\\n\") unless $bsz > 0;\n my $exec_cnt = 0;\n my @tstat;\n my (@col1, @col2, @col3);\n while (<FL>)\n {\n chomp;\n my @row = split /$sep/;\n push @col1, $row[0];\n push @col2, $row[1];\n push @col3, $row[2];\n if ($. % $bsz == 0)\n {\n my $tuples = $sth->execute_array({ArrayTupleStatus => \\@tstat},\n \\@col1, \\@col2, \\@col3);\n die(\"Multiple insert failed!\\n\") if (!$tuples);\n @col1 = ();\n @col2 = ();\n @col3 = ();\n $exec_cnt++;\n }\n\n }\n if ($#col1 >= 0)\n {\n my $tuples = $sth->execute_array({ArrayTupleStatus => \\@tstat},\n \\@col1, \\@col2, \\@col3);\n die(\"Multiple insert failed!\\n\") if (!$tuples);\n $exec_cnt++;\n }\n $dbh->commit();\n print \"Insert executed $exec_cnt times.\\n\";\n}\n\n\nThe variable \"$sth\" is a prepared statement handle for the insert statement.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 02 Nov 2010 17:18:51 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Array interface"
}
] |
[
{
"msg_contents": "Can you hear me now?\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 02 Nov 2010 17:21:05 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Test"
},
{
"msg_contents": "\n\nOn 2010-11-02 22.21, Mladen Gogala wrote:\n> Can you hear me now?\n\nsure\n\n\n-- \nRegards,\nRobert \"roppert\" Gravsjö\n",
"msg_date": "Wed, 03 Nov 2010 16:50:58 +0100",
"msg_from": "=?UTF-8?B?Um9iZXJ0IEdyYXZzasO2?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Test"
}
] |
[
{
"msg_contents": "I sent 2 emails, both containing a Perl code sample, but none of them \nwent through. Essentially, I was testing Perl array bind & execute. \nEverything went well, except for the fact that array execute is no \nfaster than the row-by-row way of executing things. I was surprised \nbecause I expected array bind to produce better results over the network \nthan the row-by-row operations, yet it didn't. Can anybody elaborate a \nbit? It seems that some kind of email scanner has quarantined my emails \ncontaining Perl code samples as dangerous, so I can't really show the \ncode sample.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Tue, 02 Nov 2010 17:32:28 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Array interface"
},
{
"msg_contents": "On Tue, Nov 2, 2010 at 2:32 PM, Mladen Gogala <[email protected]> wrote:\n> I was surprised because I expected array bind to produce better\n> results over the network than the row-by-row operations, yet it\n> didn't. Can anybody elaborate a bit?\n\nWhile all of the bulk-execute functions are likely to have\nimplementations, they are not necessarily likely to actually be\nefficient implementations.\n\nI ran into this with DBD::ODBC a while back because DBD::ODBC\nimplements execute_array() as \"execute($_) foreach(@_)\". DBD::Pg\ndoesn't appear to implement execute_array() at all, so perhaps it's\nfalling back on a similar default implementation in the superclass.\n\nI generally suspect this is a Perl problem rather than a Postgres\nproblem, but can't say more without code. Maybe try pastebin if\nyou're having email censorship issues.\n\n-Conor\n",
"msg_date": "Tue, 2 Nov 2010 15:40:32 -0700",
"msg_from": "Conor Walsh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array interface"
},
{
"msg_contents": "Conor Walsh wrote:\n>\n> I generally suspect this is a Perl problem rather than a Postgres\n> problem, \n\nSo do I. I had the same situation with Oracle, until John Scoles had the \nDBD::Oracle driver fixed and started utilizing the Oracle array interface.\n\n> but can't say more without code. Maybe try pastebin if\n> you're having email censorship issues.\n>\n> -Conor\n>\n> \nI posted it to comp.databases.postgresql.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 03 Nov 2010 10:56:51 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Array interface"
}
] |
[
{
"msg_contents": "Hello\n\nWe have an application that needs to do bulk reads of ENTIRE\nPostgres tables very quickly (i.e. select * from table). We have\nobserved that such sequential scans run two orders of magnitude slower\nthan observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is\ndue to the storage overhead we have observed in Postgres. In the\nexample below, it takes 1 GB to store 350 MB of nominal data. However\nthat suggests we would expect to get 35 MB/s bulk read rates.\n\nObservations using iostat and top during these bulk reads suggest\nthat the queries are CPU bound, not I/O bound. In fact, repeating the\nqueries yields similar response times. Presumably if it were an I/O\nissue the response times would be much shorter the second time through\nwith the benefit of caching.\n\nWe have tried these simple queries using psql, JDBC, pl/java stored\nprocedures, and libpq. In all cases the client code ran on the same\nbox as the server. We have experimented with Postgres 8.1, 8.3 and 9.0.\n\nWe also tried playing around with some of the server tuning parameters such as shared_buffers to no avail.\n\nHere is uname -a for a machine we have tested on:\n\nLinux nevs-bdb1.fsl.noaa.gov 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux\n\nA sample dataset that reproduces these results looks like the following (there are no indexes):\n\nTable \"bulk_performance.counts\"\n Column | Type | Modifiers\n--------+---------+-----------\n i1 | integer |\n i2 | integer |\n i3 | integer |\n i4 | integer |\n\nThere are 22 million rows in this case.\n\nWe HAVE observed that summation queries run considerably faster. In this case,\n\nselect sum(i1), sum(i2), sum(i3), sum(i4) from bulk_performance.counts\n\nruns at 35 MB/s.\n\nOur business logic does operations on the resulting data such that\nthe output is several orders of magnitude smaller than the input. So\nwe had hoped that by putting our business logic into stored procedures\n(and thus drastically reducing the amount of data flowing to the\nclient) our throughput would go way up. This did not happen.\n\nSo our questions are as follows:\n\nIs there any way using stored procedures (maybe C code that calls\nSPI directly) or some other approach to get close to the expected 35\nMB/s doing these bulk reads? Or is this the price we have to pay for\nusing SQL instead of some NoSQL solution. (We actually tried Tokyo\nCabinet and found it to perform quite well. However it does not measure\nup to Postgres in terms of replication, data interrogation, community\nsupport, acceptance, etc).\n\nThanks\n\nDan Schaffer\nPaul Hamer\nNick Matheson\n\n",
"msg_date": "Wed, 03 Nov 2010 15:52:31 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple (hopefully) throughput question?"
},
{
"msg_contents": "On 03.11.2010 17:52, Nick Matheson wrote:\n> We have an application that needs to do bulk reads of ENTIRE\n> Postgres tables very quickly (i.e. select * from table). We have\n> observed that such sequential scans run two orders of magnitude slower\n> than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is\n> due to the storage overhead we have observed in Postgres. In the\n> example below, it takes 1 GB to store 350 MB of nominal data. However\n> that suggests we would expect to get 35 MB/s bulk read rates.\n>\n> Observations using iostat and top during these bulk reads suggest\n> that the queries are CPU bound, not I/O bound. In fact, repeating the\n> queries yields similar response times. Presumably if it were an I/O\n> issue the response times would be much shorter the second time through\n> with the benefit of caching.\n>\n> We have tried these simple queries using psql, JDBC, pl/java stored\n> procedures, and libpq. In all cases the client code ran on the same\n> box as the server. We have experimented with Postgres 8.1, 8.3 and 9.0.\n\nTry COPY, ie. \"COPY bulk_performance.counts TO STDOUT BINARY\".\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 03 Nov 2010 19:10:29 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "Just some ideas that went through my mind when reading your post.\n\nOn Wed, Nov 3, 2010 at 17:52, Nick Matheson <[email protected]> wrote:\n> than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is\n> due to the storage overhead we have observed in Postgres. In the\n> example below, it takes 1 GB to store 350 MB of nominal data.\n\nPostgreSQL 8.3 and later have 22 bytes of overhead per row, plus\npage-level overhead and internal fragmentation. You can't do anything\nabout row overheads, but you can recompile the server with larger\npages to reduce page overhead.\n\n> Is there any way using stored procedures (maybe C code that calls\n> SPI directly) or some other approach to get close to the expected 35\n> MB/s doing these bulk reads?\n\nPerhaps a simpler alternative would be writing your own aggregate\nfunction with four arguments.\n\nIf you write this aggregate function in C, it should have similar\nperformance as the sum() query.\n\nRegards,\nMarti\n",
"msg_date": "Wed, 3 Nov 2010 19:17:09 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "On 11/3/2010 10:52 AM, Nick Matheson wrote:\n> Hello\n>\n> We have an application that needs to do bulk reads of ENTIRE\n> Postgres tables very quickly (i.e. select * from table). We have\n> observed that such sequential scans run two orders of magnitude slower\n> than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is\n> due to the storage overhead we have observed in Postgres. In the\n> example below, it takes 1 GB to store 350 MB of nominal data. However\n> that suggests we would expect to get 35 MB/s bulk read rates.\n>\n> Observations using iostat and top during these bulk reads suggest\n> that the queries are CPU bound, not I/O bound. In fact, repeating the\n> queries yields similar response times. Presumably if it were an I/O\n> issue the response times would be much shorter the second time through\n> with the benefit of caching.\n>\n> We have tried these simple queries using psql, JDBC, pl/java stored\n> procedures, and libpq. In all cases the client code ran on the same\n> box as the server. We have experimented with Postgres 8.1, 8.3 and 9.0.\n>\n> We also tried playing around with some of the server tuning parameters\n> such as shared_buffers to no avail.\n>\n> Here is uname -a for a machine we have tested on:\n>\n> Linux nevs-bdb1.fsl.noaa.gov 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20\n> 07:12:06 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux\n>\n> A sample dataset that reproduces these results looks like the following\n> (there are no indexes):\n>\n> Table \"bulk_performance.counts\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> i1 | integer |\n> i2 | integer |\n> i3 | integer |\n> i4 | integer |\n>\n> There are 22 million rows in this case.\n>\n> We HAVE observed that summation queries run considerably faster. In this\n> case,\n>\n> select sum(i1), sum(i2), sum(i3), sum(i4) from bulk_performance.counts\n>\n> runs at 35 MB/s.\n>\n> Our business logic does operations on the resulting data such that\n> the output is several orders of magnitude smaller than the input. So\n> we had hoped that by putting our business logic into stored procedures\n> (and thus drastically reducing the amount of data flowing to the\n> client) our throughput would go way up. This did not happen.\n>\n> So our questions are as follows:\n>\n> Is there any way using stored procedures (maybe C code that calls\n> SPI directly) or some other approach to get close to the expected 35\n> MB/s doing these bulk reads? Or is this the price we have to pay for\n> using SQL instead of some NoSQL solution. (We actually tried Tokyo\n> Cabinet and found it to perform quite well. However it does not measure\n> up to Postgres in terms of replication, data interrogation, community\n> support, acceptance, etc).\n>\n> Thanks\n>\n> Dan Schaffer\n> Paul Hamer\n> Nick Matheson\n>\n>\n\nI have no idea if this would be helpful or not, never tried it, but when \nyou fire off \"select * from bigtable\" pg will create the entire \nresultset in memory (and maybe swap?) and then send it all to the client \nin one big lump. You might try a cursor and fetch 100-1000 at a time \nfrom the cursor. No idea if it would be faster or slower.\n\n-Andy\n",
"msg_date": "Wed, 03 Nov 2010 12:40:20 -0500",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
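With the JDBC driver, the usual way to get this cursor-style behaviour without writing DECLARE/FETCH by hand is setFetchSize(); the driver only switches to a portal-based fetch when autocommit is off and the result set is forward-only. A minimal sketch against the bulk_performance.counts table from the original post (connection details invented):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            // A server-side portal (cursor) is only used when autocommit is off
            // and a fetch size is set.
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                st.setFetchSize(1000);   // rows pulled per round trip
                long[] sums = new long[4];
                try (ResultSet rs = st.executeQuery(
                        "SELECT i1, i2, i3, i4 FROM bulk_performance.counts")) {
                    while (rs.next()) {
                        sums[0] += rs.getInt(1);
                        sums[1] += rs.getInt(2);
                        sums[2] += rs.getInt(3);
                        sums[3] += rs.getInt(4);
                    }
                }
                System.out.printf("%d %d %d %d%n", sums[0], sums[1], sums[2], sums[3]);
            }
            conn.commit();
        }
    }
}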
{
"msg_contents": "\n> Is there any way using stored procedures (maybe C code that calls\n> SPI directly) or some other approach to get close to the expected 35\n> MB/s doing these bulk reads? Or is this the price we have to pay for\n> using SQL instead of some NoSQL solution. (We actually tried Tokyo\n> Cabinet and found it to perform quite well. However it does not measure\n> up to Postgres in terms of replication, data interrogation, community\n> support, acceptance, etc).\n\nReading from the tables is very fast, what bites you is that postgres has \nto convert the data to wire format, send it to the client, and the client \nhas to decode it and convert it to a format usable by your application. \nWriting a custom aggregate in C should be a lot faster since it has direct \naccess to the data itself. The code path from actual table data to an \naggregate is much shorter than from table data to the client...\n",
"msg_date": "Thu, 04 Nov 2010 10:12:11 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "Heikki-\n>\n> Try COPY, ie. \"COPY bulk_performance.counts TO STDOUT BINARY\".\n>\nThanks for the suggestion. A preliminary test shows an improvement \ncloser to our expected 35 MB/s.\n\nAre you familiar with any Java libraries for decoding the COPY format? \nThe spec is clear and we could clearly write our own, but figured I \nwould ask. ;)\n\nNick\n",
"msg_date": "Thu, 04 Nov 2010 14:31:25 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
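For the write-it-ourselves route, the binary COPY layout is small enough that a hand-rolled reader is not much code. A sketch for the four-int counts table, written straight from the format description in the COPY documentation (11-byte signature, flags, header extension, per-tuple field count, length-prefixed fields, -1 trailer, all network byte order, which is what DataInputStream reads); buffering the whole stream in memory is only for brevity, not a recommendation for 22 million rows.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class BinaryCopyReaderSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret");
        CopyManager copy = ((PGConnection) conn).getCopyAPI();

        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        copy.copyOut("COPY bulk_performance.counts TO STDOUT BINARY", buf);
        conn.close();

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        byte[] signature = new byte[11];         // "PGCOPY\n\377\r\n\0"
        in.readFully(signature);
        in.readInt();                            // flags field
        byte[] ext = new byte[in.readInt()];     // header extension area
        in.readFully(ext);

        long[] sums = new long[4];
        while (true) {
            short fields = in.readShort();       // per-tuple field count, -1 = end
            if (fields == -1) break;
            for (int f = 0; f < fields; f++) {
                int len = in.readInt();          // field length in bytes, -1 = NULL
                if (len == -1) continue;
                sums[f] += in.readInt();         // works because every column is int4
            }
        }
        System.out.printf("%d %d %d %d%n", sums[0], sums[1], sums[2], sums[3]);
    }
}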
{
"msg_contents": "Marti-\n> Just some ideas that went through my mind when reading your post\n> PostgreSQL 8.3 and later have 22 bytes of overhead per row, plus\n> page-level overhead and internal fragmentation. You can't do anything\n> about row overheads, but you can recompile the server with larger\n> pages to reduce page overhead.\n>\n> \n>> Is there any way using stored procedures (maybe C code that calls\n>> SPI directly) or some other approach to get close to the expected 35\n>> MB/s doing these bulk reads?\n>> \n>\n> Perhaps a simpler alternative would be writing your own aggregate\n> function with four arguments.\n>\n> If you write this aggregate function in C, it should have similar\n> performance as the sum() query.\n> \nYou comments seem to confirm some of our foggy understanding of the \nstorage 'overhead' and nudge us in the direction of C stored procedures.\n\nDo you have any results or personal experiences from moving calculations \nin this way? I think we are trying to get an understanding of how much \nwe might stand to gain by the added investment.\n\nThanks,\n\nNick\n\n\n\n\n\n\nMarti-\n\nJust some ideas that went through my mind when reading your post\n\n\nPostgreSQL 8.3 and later have 22 bytes of overhead per row, plus\npage-level overhead and internal fragmentation. You can't do anything\nabout row overheads, but you can recompile the server with larger\npages to reduce page overhead.\n\n \n\nIs there any way using stored procedures (maybe C code that calls\nSPI directly) or some other approach to get close to the expected 35\nMB/s doing these bulk reads?\n \n\n\nPerhaps a simpler alternative would be writing your own aggregate\nfunction with four arguments.\n\nIf you write this aggregate function in C, it should have similar\nperformance as the sum() query.\n \n\nYou comments seem to confirm some of our foggy understanding of the\nstorage 'overhead' and nudge us in the direction of C stored\nprocedures. \n\nDo you have any results or personal experiences from moving\ncalculations in this way? I think we are trying to get an understanding\nof how much we might stand to gain by the added investment.\n\nThanks,\n\nNick",
"msg_date": "Thu, 04 Nov 2010 14:34:55 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "Andy-\n> I have no idea if this would be helpful or not, never tried it, but \n> when you fire off \"select * from bigtable\" pg will create the entire \n> resultset in memory (and maybe swap?) and then send it all to the \n> client in one big lump. You might try a cursor and fetch 100-1000 at \n> a time from the cursor. No idea if it would be faster or slower.\nI am pretty sure we have tried paged datasets and didn't see any \nimprovement. But we will put this on our list of things to double check, \nbetter safe than sorry you know.\n\nThanks,\n\nNick\n",
"msg_date": "Thu, 04 Nov 2010 14:38:23 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "Pierre-\n\nReading from the tables is very fast, what bites you is that postgres \nhas to convert the data to wire format, send it to the client, and the \nclient has to decode it and convert it to a format usable by your \napplication. Writing a custom aggregate in C should be a lot faster \nsince it has direct access to the\ndata itself. The code path from actual table data to an aggregate is \nmuch shorter than from table data to the client...\n\n\nI think your comments really get at what our working hypothesis was, but \ngiven that our experience is limited compared to you all here on the \nmailing lists we really wanted to make sure we weren't missing any \nalternatives. Also the writing of custom aggregators will likely \nleverage any improvements we make to our storage throughput.\n\nThanks,\n\nNick\n\n",
"msg_date": "Thu, 04 Nov 2010 14:42:08 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "04.11.10 16:31, Nick Matheson написав(ла):\n> Heikki-\n>>\n>> Try COPY, ie. \"COPY bulk_performance.counts TO STDOUT BINARY\".\n>>\n> Thanks for the suggestion. A preliminary test shows an improvement \n> closer to our expected 35 MB/s.\n>\n> Are you familiar with any Java libraries for decoding the COPY format? \n> The spec is clear and we could clearly write our own, but figured I \n> would ask. ;)\nJDBC driver has some COPY support, but I don't remember details. You'd \nbetter ask in JDBC list.\n\nBest regards, Vitalii Tymchyshyn\n",
"msg_date": "Thu, 04 Nov 2010 17:07:28 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "mark-\n\nThanks for all the good questions/insights. \n> People are probably going to want more detail on the list to give alternate\n> ways of attacking the problem. That said....\n> \nI am going to try and fill in some of the gaps where I can...\n> The copy suggestion is a good one if you are unloading to another\n> application for actual data processing. \n>\n> Are you watching an avg of all cores or just one core when you see this cpu\n> bound issue ? I would expect a simple \"select * from table \" to get some IO\n> wait. \n> \nOur application doesn't have many concurrent requests so this \nobservation is of a single query resulting in a single cpu being maxed. \nWe have played with making multiple requests each for a subset of the \ndata and this scales fairly well, but again with the process seemingly \nvery cpu bound for a conceptually simple data retrieval. Our problem is \nreally about answering a single large question as quickly as possible vs \nthe more typical requests per/s type throughput (more OLAP than OLTP).\n> Does the table your reading from have most of the data stored in TOAST? I\n> ask because as a default there is some compression with TOAST and maybe your\n> spending more time with decompression that expected. Not the first thing\n> that I would think to change or where I suspect the problem comes from. \n> \nNo the example table isn't too far from the real thing (simple I know) \njust add a few int/short metadata columns and you have it. And the \nexample table exhibits the same performance issue so I think it is a \nreasonable test case.\n> More detailed hardware list and more detailed example case will probably\n> help a lot with getting some of the really smart PG people on it, (like Tom\n> Lane or some of people who work for a postgresql paid support company)\n> \nHardware\n------------------------------------\n2 x Dual Core 2.4GHz Opteron\n8G Ram\n4-Drive Raid 5\n> For the record: 35MB/s seq reads isn't that fast so a lot of people are\n> going to wonder why that number is so low.\n> \nI completely agree. (With the large RAID arrays or even SSDs arrays I \nhave seen here on the boards 3G/s isn't crazy any longer)\n\nI think our dilemma was that we weren't seemingly able to make use of \nwhat we had for IO throughput. This was most evident when we did some \ntests with smaller datasets that could fit entirely into the disk cache, \nwe saw (via iostat) that indeed the query ran almost entirely from the \ndisk cache, but yielded nearly the same 5 MB/s throughput. This seemed \nto indicate that our problem was somewhere other than the storage \ninfrastructure and lead us to the single CPU bottleneck discovery.\n> Anyways since I suspect that improving IO some will actually speed up your\n> select * case I submit the following for you to consider.\n>\n> My suggestions to improve a \"select * from table\" case (and copy\n> performance): \n>\n> First, if you haven't, bump your read ahead value, this should improve\n> things some - however probably not enough by itself. \n>\n> blockdev --setra 16384 /dev/<devicename>\n> \nWill check into this one.\n> The default on most linux installs is IMO woefully small at 256. 16384 might\n> be a little high for you but it has worked well for us and our hardware.\n>\n>\n> If your data directory is mounted as its own partition or logical disk you\n> might consider mounting it with the noatime flag. \n>\n> Also are you running ext3? 
If you can pull in XFS (and do noatime there as\n> well) you should see about a 20% increase. It looks like this is a redhat or\n> cent box. If this is RHEL-ES you will need to either do a custom import for\n> xfsdump and xfsprogs your self and risk support issues from RH, or if it is\n> cent you can pull in the \"extras\". (if Redhat and not Adv. Server you can\n> buy it from Redhat for ES servers) CENT/RH 6 should have XFS support by\n> default that might be too far off for you.\n>\n> (this 20% number came from our own inhouse testing of sequential read tests\n> with dd) but there are plenty of other posts on web showing how much of an\n> improvement XFS is over ext3. \n>\n>\n> If you haven't broken out the data directory to it's own partition (and\n> hopefully spindles) there are some problems with using noatime on a system,\n> be aware of what they are. \n>\n> You will probably still be annoyed with a single long running query getting\n> bottlenecked at a single cpu core but without a more detailed example case\n> people might have a hard time helping with solving that problem.. \n>\n> Anyways try these and see if that gets you anywhere. \n> \next3 currently, our support/IT layer may balk at non-Redhat RPM stuff, \nhave to see on that one.\n\nI think if we can get past the seeming CPU bound then all of the above \nwould be candidates for optimizing the IO. We had actually slated to buy \na system with an SSD array separate spindles for OS, data, etc, but when \nI couldn't make a convincing case for the end user response time \nimprovements it was put on hold, and hence the source of our questions \nhere. ;)\n> You could always play with the greenplum singlenode addition if you want to\n> see a way to still sort-of be on postgres and use all cores... but that\n> introduces a whole host of other issues to solve. \n> \nInteresting. I had heard of Greenplum, but thought it was more about \nscaling to clusters rather than single node improvements. We will have \nto look into that.\n\nThanks again for all the ideas, questions and things to look into I \nthink you have opened a number of new possibilities.\n\nCheers,\n\nNick\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Nick Matheson\n> Sent: Wednesday, November 03, 2010 9:53 AM\n> To: [email protected]\n> Subject: [PERFORM] Simple (hopefully) throughput question?\n>\n> Hello\n>\n> We have an application that needs to do bulk reads of ENTIRE\n> Postgres tables very quickly (i.e. select * from table). We have\n> observed that such sequential scans run two orders of magnitude slower\n> than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is\n> due to the storage overhead we have observed in Postgres. In the\n> example below, it takes 1 GB to store 350 MB of nominal data. However\n> that suggests we would expect to get 35 MB/s bulk read rates.\n>\n> Observations using iostat and top during these bulk reads suggest\n> that the queries are CPU bound, not I/O bound. In fact, repeating the\n> queries yields similar response times. Presumably if it were an I/O\n> issue the response times would be much shorter the second time through\n> with the benefit of caching.\n>\n> We have tried these simple queries using psql, JDBC, pl/java stored\n> procedures, and libpq. In all cases the client code ran on the same\n> box as the server. 
We have experimented with Postgres 8.1, 8.3 and 9.0.\n>\n> We also tried playing around with some of the server tuning parameters such\n> as shared_buffers to no avail.\n>\n> Here is uname -a for a machine we have tested on:\n>\n> Linux nevs-bdb1.fsl.noaa.gov 2.6.18-194.17.1.el5 #1 SMP Mon Sep 20 07:12:06\n> EDT 2010 x86_64 x86_64 x86_64 GNU/Linux\n>\n> A sample dataset that reproduces these results looks like the following\n> (there are no indexes):\n>\n> Table \"bulk_performance.counts\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> i1 | integer |\n> i2 | integer |\n> i3 | integer |\n> i4 | integer |\n>\n> There are 22 million rows in this case.\n>\n> We HAVE observed that summation queries run considerably faster. In this\n> case,\n>\n> select sum(i1), sum(i2), sum(i3), sum(i4) from bulk_performance.counts\n>\n> runs at 35 MB/s.\n>\n> Our business logic does operations on the resulting data such that\n> the output is several orders of magnitude smaller than the input. So\n> we had hoped that by putting our business logic into stored procedures\n> (and thus drastically reducing the amount of data flowing to the\n> client) our throughput would go way up. This did not happen.\n>\n> So our questions are as follows:\n>\n> Is there any way using stored procedures (maybe C code that calls\n> SPI directly) or some other approach to get close to the expected 35\n> MB/s doing these bulk reads? Or is this the price we have to pay for\n> using SQL instead of some NoSQL solution. (We actually tried Tokyo\n> Cabinet and found it to perform quite well. However it does not measure\n> up to Postgres in terms of replication, data interrogation, community\n> support, acceptance, etc).\n>\n> Thanks\n>\n> Dan Schaffer\n> Paul Hamer\n> Nick Matheson\n>\n>\n> \n\n",
"msg_date": "Thu, 04 Nov 2010 15:08:42 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "> JDBC driver has some COPY support, but I don't remember details. You'd\n> better ask in JDBC list.\n\nAs long as we're here: yes, the JDBC driver has COPY support as of\n8.4(?) via the CopyManager PostgreSQL-specific API. You can call\n((PGConnection)conn).getCopyManager() and do either push- or\npull-based COPY IN or OUT. We've been using it for several years and\nit works like a charm. For more details, ask the JDBC list or check\nout the docs: http://jdbc.postgresql.org/documentation/publicapi/index.html\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n",
"msg_date": "Thu, 4 Nov 2010 08:13:03 -0700",
"msg_from": "Maciek Sakrejda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "On Thu, 04 Nov 2010 15:42:08 +0100, Nick Matheson \n<[email protected]> wrote:\n> I think your comments really get at what our working hypothesis was, but \n> given that our experience is limited compared to you all here on the \n> mailing lists we really wanted to make sure we weren't missing any \n> alternatives. Also the writing of custom aggregators will likely \n> leverage any improvements we make to our storage throughput.\n\nQuick test : SELECT sum(x) FROM a table with 1 INT column, 3M rows, cached\n=> 244 MB/s\n=> 6.7 M rows/s\n\nSame on MySQL :\n\n size SELECT sum(x) (cached)\npostgres 107 MB 0.44 s\nmyisam 20 MB 0.42 s\ninnodb 88 MB 1.98 s\n\nAs you can see, even though myisam is much smaller (no transaction data to \nstore !) the aggregate performance isn't any better, and for innodb it is \nmuch worse.\n\nEven though pg's per-row header is large, seq scan / aggregate performance \nis very good.\n\nYou can get performance in this ballpark by writing a custom aggregate in \nC ; it isn't very difficult, the pg source code is clean and full of \ninsightful comments.\n\n- take a look at how contrib/intagg works\n- http://www.postgresql.org/files/documentation/books/aw_pgsql/node168.html\n- and the pg manual of course\n",
"msg_date": "Fri, 05 Nov 2010 10:05:25 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "On 11/03/2010 04:52 PM, Nick Matheson wrote:\n\n> We have an application that needs to do bulk reads of ENTIRE\n> Postgres tables very quickly (i.e. select * from table). We have\n> observed that such sequential scans run two orders of magnitude slower\n> than observed raw disk reads (5 MB/s versus 100 MB/s). Part of this is\n> due to the storage overhead we have observed in Postgres. In the\n> example below, it takes 1 GB to store 350 MB of nominal data. However\n> that suggests we would expect to get 35 MB/s bulk read rates.\n\n> Our business logic does operations on the resulting data such that\n> the output is several orders of magnitude smaller than the input. So\n> we had hoped that by putting our business logic into stored procedures\n> (and thus drastically reducing the amount of data flowing to the\n> client) our throughput would go way up. This did not happen.\n\nCan you disclose what kinds of manipulations you want to do on the data? \n I am asking because maybe there is a fancy query (possibly using \nwindowing functions and / or aggregation functions) that gets you the \nspeed that you need without transferring the whole data set to the client.\n\n> So our questions are as follows:\n>\n> Is there any way using stored procedures (maybe C code that calls\n> SPI directly) or some other approach to get close to the expected 35\n> MB/s doing these bulk reads? Or is this the price we have to pay for\n> using SQL instead of some NoSQL solution. (We actually tried Tokyo\n> Cabinet and found it to perform quite well. However it does not measure\n> up to Postgres in terms of replication, data interrogation, community\n> support, acceptance, etc).\n\nKind regards\n\n\trobert\n\n",
"msg_date": "Fri, 05 Nov 2010 18:30:54 +0100",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "On Thu, Nov 4, 2010 at 8:07 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n\n> 04.11.10 16:31, Nick Matheson написав(ла):\n>\n> Heikki-\n>>\n>>>\n>>> Try COPY, ie. \"COPY bulk_performance.counts TO STDOUT BINARY\".\n>>>\n>>> Thanks for the suggestion. A preliminary test shows an improvement\n>> closer to our expected 35 MB/s.\n>>\n>> Are you familiar with any Java libraries for decoding the COPY format? The\n>> spec is clear and we could clearly write our own, but figured I would ask.\n>> ;)\n>>\n> JDBC driver has some COPY support, but I don't remember details. You'd\n> better ask in JDBC list.\n>\n>\n>\nThe JDBC driver support works fine. You can pass a Reader or InputStream\n(if I recall correctly, the InputStream path is more efficient. Or maybe\nthe Reader path was buggy. Regardless, I wound up using an InputStream in\nthe driver which I then wrap in a Reader in order to get it line-by-line.\n\nYou can write a COPY statement to send standard CSV format - take a look at\nthe postgres docs for the COPY statement to see the full syntax. I then\nhave a subclass of BufferedReader which parses each line of CSV and does\nsomething interesting with it. I've had it working very reliably for many\nmonths now, processing about 500 million rows per day (I'm actually COPYing\nout, rather than in, but the concept is the same, rgardless - my\noutputstream is wrapper in a writer, which reformats data on the fly).\n\nOn Thu, Nov 4, 2010 at 8:07 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n04.11.10 16:31, Nick Matheson написав(ла):\n\nHeikki-\n\n\nTry COPY, ie. \"COPY bulk_performance.counts TO STDOUT BINARY\".\n\n\nThanks for the suggestion. A preliminary test shows an improvement closer to our expected 35 MB/s.\n\nAre you familiar with any Java libraries for decoding the COPY format? The spec is clear and we could clearly write our own, but figured I would ask. ;)\n\nJDBC driver has some COPY support, but I don't remember details. You'd better ask in JDBC list.\nThe JDBC driver support works fine. You can pass a Reader or InputStream (if I recall correctly, the InputStream path is more efficient. Or maybe the Reader path was buggy. Regardless, I wound up using an InputStream in the driver which I then wrap in a Reader in order to get it line-by-line.\nYou can write a COPY statement to send standard CSV format - take a look at the postgres docs for the COPY statement to see the full syntax. I then have a subclass of BufferedReader which parses each line of CSV and does something interesting with it. I've had it working very reliably for many months now, processing about 500 million rows per day (I'm actually COPYing out, rather than in, but the concept is the same, rgardless - my outputstream is wrapper in a writer, which reformats data on the fly).",
"msg_date": "Fri, 5 Nov 2010 12:23:59 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
},
{
"msg_contents": "On Fri, Nov 5, 2010 at 12:23 PM, Samuel Gendler\n<[email protected]>wrote:\n\n> On Thu, Nov 4, 2010 at 8:07 AM, Vitalii Tymchyshyn <[email protected]>wrote:\n>\n>> 04.11.10 16:31, Nick Matheson написав(ла):\n>>\n>> Heikki-\n>>>\n>>>>\n>>>> Try COPY, ie. \"COPY bulk_performance.counts TO STDOUT BINARY\".\n>>>>\n>>>> Thanks for the suggestion. A preliminary test shows an improvement\n>>> closer to our expected 35 MB/s.\n>>>\n>>> Are you familiar with any Java libraries for decoding the COPY format?\n>>> The spec is clear and we could clearly write our own, but figured I would\n>>> ask. ;)\n>>>\n>> JDBC driver has some COPY support, but I don't remember details. You'd\n>> better ask in JDBC list.\n>>\n>>\n>>\n> The JDBC driver support works fine. You can pass a Reader or InputStream\n> (if I recall correctly, the InputStream path is more efficient. Or maybe\n> the Reader path was buggy. Regardless, I wound up using an InputStream in\n> the driver which I then wrap in a Reader in order to get it line-by-line.\n>\n> You can write a COPY statement to send standard CSV format - take a look at\n> the postgres docs for the COPY statement to see the full syntax. I then\n> have a subclass of BufferedReader which parses each line of CSV and does\n> something interesting with it. I've had it working very reliably for many\n> months now, processing about 500 million rows per day (I'm actually COPYing\n> out, rather than in, but the concept is the same, rgardless - my\n> outputstream is wrapper in a writer, which reformats data on the fly).\n>\n>\n>\nI should mention that I found basically no documentation of the copy api in\nthe jdbc driver in 8.4. I have no idea if that has changed with 9.x. I had\nto figure it out by reading the source code. Fortunately, it is very\nsimple:\n\nreturn ((PGConnection) con).getCopyAPI().copyIn(sql, this.fis);\n\n\nWhere this.fis is an InputStream. There's an alternative copyIn\nimplementation that takes a Reader instead. I'm sure the copyOut methods\nare the same.\n\n\nNote: my earlier email was confusing. copyIn, copies into the db and\nreceives an InputStream that will deliver data when it is read. copyOut\ncopies data from the db and receives an OutputStream which will receive the\ndata. I inverted those in my earlier email.\n\n\nYou can look at the source code to the CopyAPI to learn more about the\nmechanism.\n\nOn Fri, Nov 5, 2010 at 12:23 PM, Samuel Gendler <[email protected]> wrote:\nOn Thu, Nov 4, 2010 at 8:07 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n\n04.11.10 16:31, Nick Matheson написав(ла):\n\nHeikki-\n\n\nTry COPY, ie. \"COPY bulk_performance.counts TO STDOUT BINARY\".\n\n\nThanks for the suggestion. A preliminary test shows an improvement closer to our expected 35 MB/s.\n\nAre you familiar with any Java libraries for decoding the COPY format? The spec is clear and we could clearly write our own, but figured I would ask. ;)\n\nJDBC driver has some COPY support, but I don't remember details. You'd better ask in JDBC list.\nThe JDBC driver support works fine. You can pass a Reader or InputStream (if I recall correctly, the InputStream path is more efficient. Or maybe the Reader path was buggy. Regardless, I wound up using an InputStream in the driver which I then wrap in a Reader in order to get it line-by-line.\nYou can write a COPY statement to send standard CSV format - take a look at the postgres docs for the COPY statement to see the full syntax. 
I then have a subclass of BufferedReader which parses each line of CSV and does something interesting with it. I've had it working very reliably for many months now, processing about 500 million rows per day (I'm actually COPYing out, rather than in, but the concept is the same, rgardless - my outputstream is wrapper in a writer, which reformats data on the fly).\nI should mention that I found basically no documentation of the copy api in the jdbc driver in 8.4. I have no idea if that has changed with 9.x. I had to figure it out by reading the source code. Fortunately, it is very simple:\nreturn ((PGConnection) con).getCopyAPI().copyIn(sql, this.fis);\nWhere this.fis is an InputStream. There's an alternative copyIn implementation that takes a Reader instead. I'm sure the copyOut methods are the same.\n\nNote: my earlier email was confusing. copyIn, copies into the db and receives an InputStream that will deliver data when it is read. copyOut copies data from the db and receives an OutputStream which will receive the data. I inverted those in my earlier email.\n\nYou can look at the source code to the CopyAPI to learn more about the mechanism.",
"msg_date": "Fri, 5 Nov 2010 12:29:48 -0700",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple (hopefully) throughput question?"
}
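A matching sketch for the copyIn direction that the one-liner above compresses: bulk-loading a CSV file into the table discussed earlier. Connection details and the file name are again invented for the example, and the driver is assumed to be the stock PostgreSQL JDBC driver.

// Sketch only: bulk-load a CSV file through CopyManager instead of INSERTs.
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyInSketch {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/bulk", "user", "secret");
        try (InputStream fis = new FileInputStream("counts.csv")) {
            CopyManager copier = ((PGConnection) con).getCopyAPI();
            // The file's bytes go to the server as a single COPY data stream.
            copier.copyIn("COPY bulk_performance.counts FROM STDIN WITH CSV", fis);
        } finally {
            con.close();
        }
    }
}

Going through CopyManager keeps the rows in the COPY wire format end to end, which is where the bulk-throughput advantage over individual INSERT statements comes from.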
] |
[
{
"msg_contents": "Where can I find the documentation describing the buffer replacement \npolicy? Are there any parameters governing the page replacement policy?\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n",
"msg_date": "Wed, 03 Nov 2010 12:35:33 -0400",
"msg_from": "Mladen Gogala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bufer cache replacement LRU algorithm?"
},
{
"msg_contents": "Mladen,\n\nYou would need to check the mailing lists. The release notes\nhave it as being a clock sweep algorithm starting in version\n8. Then additional changes were added to eliminate the cache\nblowout caused by a sequential scan and by vacuum/autovacuum.\nI do not believe that there are any parameters available other\nthan total size of the pool and whether sequential scans are\nsynchronized.\n\nRegards,\nKen\n\nOn Wed, Nov 03, 2010 at 12:35:33PM -0400, Mladen Gogala wrote:\n> Where can I find the documentation describing the buffer replacement \n> policy? Are there any parameters governing the page replacement policy?\n>\n> -- \n> Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com The Leader in Integrated Media Intelligence \n> Solutions\n>\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 3 Nov 2010 11:52:16 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bufer cache replacement LRU algorithm?"
},
{
"msg_contents": "Kenneth Marshall <[email protected]> wrote:\n> On Wed, Nov 03, 2010 at 12:35:33PM -0400, Mladen Gogala wrote:\n>> Where can I find the documentation describing the buffer\n>> replacement policy? Are there any parameters governing the page\n>> replacement policy?\n \n> You would need to check the mailing lists. The release notes\n> have it as being a clock sweep algorithm starting in version\n> 8. Then additional changes were added to eliminate the cache\n> blowout caused by a sequential scan and by vacuum/autovacuum.\n> I do not believe that there are any parameters available other\n> than total size of the pool and whether sequential scans are\n> synchronized.\n \nThe background writer settings might be considered relevant, too.\n \nAlso keep in mind that PostgreSQL goes through the OS cache and\nfilesystems; the filesystem choice and OS settings will have an\nimpact on how that level of caching behaves. Since there is often\nmuch more cache at the OS level than in PostgreSQL shared buffers,\nyou don't want to overlook that aspect of things.\n \n-Kevin\n",
"msg_date": "Wed, 03 Nov 2010 12:09:42 -0500",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bufer cache replacement LRU algorithm?"
},
{
"msg_contents": "Mladen Gogala wrote:\n> Where can I find the documentation describing the buffer replacement \n> policy? Are there any parameters governing the page replacement policy?\n\nI wrote a pretty detailed description of this in my \"Inside the \nPostgreSQL Buffer Cache\" presentation at \nhttp://projects.2ndquadrant.com/talks and nothing in this specific area \nhas changed significantly since then. There aren't any specific \ntunables in this area beyond the ones that cover sizing of the buffer \npool and how often checkpoints happen. There's a TODO item to benchmark \ntoward whether there's any gain to increasing the maximum usage count an \nindividual page can accumulate, currently hard coded at 5. That's the \nmain thing that could be tunable in this area that currently isn't.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n",
"msg_date": "Fri, 05 Nov 2010 13:58:36 -0700",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bufer cache replacement LRU algorithm?"
}
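To make the description above concrete, here is a toy, stand-alone illustration of a clock-sweep policy with usage counts capped at 5. It is a simplification for reading alongside the slides, not PostgreSQL's actual buffer manager code, and all names are invented.

// Toy clock-sweep cache: usage counts are bumped on hits (capped at 5) and
// decremented by the sweeping hand until a victim with count 0 is found.
import java.util.HashMap;
import java.util.Map;

public class ClockSweepCache {
    private static final int MAX_USAGE = 5;   // the hard-coded cap mentioned above
    private final int[] pageOf;               // page held by each buffer slot (-1 = empty)
    private final int[] usage;                // per-slot usage count
    private final Map<Integer, Integer> slotOf = new HashMap<>();
    private int hand = 0;                     // the clock hand

    public ClockSweepCache(int nBuffers) {
        pageOf = new int[nBuffers];
        usage = new int[nBuffers];
        java.util.Arrays.fill(pageOf, -1);
    }

    // Returns the slot holding the page, evicting another page if necessary.
    public int access(int page) {
        Integer slot = slotOf.get(page);
        if (slot != null) {                    // hit: bump the usage count, capped
            usage[slot] = Math.min(MAX_USAGE, usage[slot] + 1);
            return slot;
        }
        while (true) {                         // miss: sweep until a victim is found
            if (pageOf[hand] == -1 || usage[hand] == 0) {
                if (pageOf[hand] != -1) slotOf.remove(pageOf[hand]);
                pageOf[hand] = page;
                usage[hand] = 1;
                slotOf.put(page, hand);
                int chosen = hand;
                hand = (hand + 1) % pageOf.length;
                return chosen;
            }
            usage[hand]--;                     // recently used buffers get another lap
            hand = (hand + 1) % pageOf.length;
        }
    }
}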
] |
[
{
"msg_contents": "Maciek/Vitalii-\n\nThanks for the pointers to the JDBC work. \n\nLuckily, we had already found the COPY support in the pg driver, but \nwere wondering if anyone had already written the complimentary unpacking \ncode for the raw data returned from the copy.\n\nAgain the spec is clear enough that we could write it, but we just \ndidn't want to re-invent the wheel if it wasn't necessary.\n\nCheers,\n\nNick\n",
"msg_date": "Thu, 04 Nov 2010 19:24:59 +0000",
"msg_from": "Nick Matheson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple (hopefully) throughput question?"
}
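Since the thread ends here, a rough sketch of what such unpacking code could look like, written against the binary file-format description in the COPY documentation rather than any existing library: it assumes a table with a single int4 column and simply sums it. Class and method names are invented, and wiring its InputStream up to the driver's COPY support is left out.

// Sketch only: decode COPY BINARY output for a single-int4-column table.
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CopyBinaryReader {
    // 11-byte signature from the COPY BINARY file-format description.
    private static final byte[] SIGNATURE =
            {'P', 'G', 'C', 'O', 'P', 'Y', '\n', (byte) 0xFF, '\r', '\n', 0};

    public static long sumInt4Column(InputStream raw) throws IOException {
        DataInputStream in = new DataInputStream(raw);
        byte[] sig = new byte[SIGNATURE.length];
        in.readFully(sig);
        for (int i = 0; i < sig.length; i++) {
            if (sig[i] != SIGNATURE[i]) {
                throw new IOException("not a COPY BINARY stream");
            }
        }
        in.readInt();                           // 32-bit flags field
        int extLen = in.readInt();              // header extension length
        if (extLen > 0) in.readFully(new byte[extLen]);
        long sum = 0;
        while (true) {
            short fieldCount = in.readShort();  // -1 marks the end-of-data trailer
            if (fieldCount == -1) break;
            for (int f = 0; f < fieldCount; f++) {
                int len = in.readInt();         // -1 means NULL
                if (len == 4) {
                    sum += in.readInt();        // int4 arrives big-endian, as readInt expects
                } else if (len > 0) {
                    in.readFully(new byte[len]); // skip fields we don't recognize
                }
            }
        }
        return sum;
    }
}

A real decoder would need per-type handling, since each field arrives in the binary send format of its column's data type.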
] |