[
{
"msg_contents": "I have one query which does not run very often. Sometimes it may be\nmonths between runs.\nHowever, when it does get executed, it scans approximately 100\nidentically-structured tables (a form of partitioning), extracts and\ngroups on a subset of the columns, and creates a new table. The\nindividual table queries have no where clauses, this is a full table\nscan for every table.\n\nI've tried all sorts of things to try to improve the performance,\nwhich can take a /very/ long time.\nWe are talking about approximately 175GB of data before grouping/summarizing.\n\nThis is on PG 8.4.8 on Linux, 16GB of \"real\" RAM.\nMost recently, I enabled trace_sort, disabled hash aggregation[1], and\nset a large work_mem (normally very small, in this case I tried\nanything from 8MB to 256MB. I even tried 1GB and 2GB).\n\nIn the logs, I saw this:\n\nexternal sort ended, 7708696 disk blocks used: CPU 359.84s/57504.66u\nsec elapsed 58966.76 sec\n\nAm I to understand that the CPU portion of the sorting only took 6\nminutes but the sort itself took almost 16.5 hours and used approx\n60GB of disk space?\nThe resulting summary table is about 5GB in size as reported by \\d+ in\npsql (and pg_relation_size).\n\nThe underlying storage is ext4 on a hardware raid 10 with a BBU.\n\nWhat sorts of things should I be looking at to improve the performance\nof this query? Is my interpretation of that log line totally off base?\n\n\n\n[1] if I don't disable hash aggregation and the work_mem is over 8MB\nin size, the memory allocation explodes to the point where postgresql\nwants dozens of gigs of memory. I've tried setting the statistics as\nhigh as 1000 without benefit.\n\n-- \nJon\n",
"msg_date": "Thu, 17 Nov 2011 11:10:56 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "external sort performance"
},
{
"msg_contents": "On Thu, Nov 17, 2011 at 2:10 PM, Jon Nelson <[email protected]> wrote:\n> What sorts of things should I be looking at to improve the performance\n> of this query? Is my interpretation of that log line totally off base?\n\nYou'll have to post some more details.\nLike a query and an explain/explain analyze.\n\nMemory consumption probably skyrockets since you'll need at least one\nsort per table, so if you have 100+, then that's (at least) 100+\nsorts.\nWithout the explain output it's impossible to be sure.\n",
"msg_date": "Thu, 17 Nov 2011 14:16:59 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external sort performance"
},
{
"msg_contents": "On 11/17/11 9:10 AM, Jon Nelson wrote:\n> I have one query which does not run very often. Sometimes it may be\n> months between runs.\n> However, when it does get executed, it scans approximately 100\n> identically-structured tables (a form of partitioning), extracts and\n> groups on a subset of the columns, and creates a new table. The\n> individual table queries have no where clauses, this is a full table\n> scan for every table.\n>\n> I've tried all sorts of things to try to improve the performance,\n> which can take a /very/ long time.\n> We are talking about approximately 175GB of data before grouping/summarizing.\n>\n> This is on PG 8.4.8 on Linux, 16GB of \"real\" RAM.\n> Most recently, I enabled trace_sort, disabled hash aggregation[1], and\n> set a large work_mem (normally very small, in this case I tried\n> anything from 8MB to 256MB. I even tried 1GB and 2GB).\n>\n> In the logs, I saw this:\n>\n> external sort ended, 7708696 disk blocks used: CPU 359.84s/57504.66u\n> sec elapsed 58966.76 sec\n>\n> Am I to understand that the CPU portion of the sorting only took 6\n> minutes but the sort itself took almost 16.5 hours and used approx\n> 60GB of disk space?\n> The resulting summary table is about 5GB in size as reported by \\d+ in\n> psql (and pg_relation_size).\n>\n> The underlying storage is ext4 on a hardware raid 10 with a BBU.\n>\n> What sorts of things should I be looking at to improve the performance\n> of this query? Is my interpretation of that log line totally off base?\nYou don't give any details about how and why you are sorting. Are you actually using all of the columns in your aggregated-data table in the sort operation? Or just a few of them?\n\nYou're making the sort operation work with 175 GB of data. If most of that data is only needed for the report (not the sort), then separate it into two tables - one of just the data that the sorting/grouping needs, and the other with the rest of the data. Then create a view that joins it all back together for reporting purposes.\n\nCraig\n",
"msg_date": "Thu, 17 Nov 2011 09:28:15 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external sort performance"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> This is on PG 8.4.8 on Linux, 16GB of \"real\" RAM.\n> Most recently, I enabled trace_sort, disabled hash aggregation[1], and\n> set a large work_mem (normally very small, in this case I tried\n> anything from 8MB to 256MB. I even tried 1GB and 2GB).\n\nFWIW, I think hash aggregation is your best shot at getting reasonable\nperformance. Sorting 175GB of data is going to hurt no matter what.\n\nIf the grouped table amounts to 5GB, I wouldn't have expected the hash\ntable to be more than maybe 2-3X that size (although this does depend on\nwhat aggregates you're running...). Letting the hash aggregation have\nall your RAM might be the best answer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Nov 2011 12:55:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external sort performance "
},
{
"msg_contents": "I'll try to compile multiple questions/answers into a single response.\n\nOn Thu, Nov 17, 2011 at 11:16 AM, Claudio Freire <[email protected]> wrote:\n> On Thu, Nov 17, 2011 at 2:10 PM, Jon Nelson <[email protected]> wrote:\n>> What sorts of things should I be looking at to improve the performance\n>> of this query? Is my interpretation of that log line totally off base?\n\n> You'll have to post some more details.\n> Like a query and an explain/explain analyze.\n\nPlease see below, however, I am also very interested to know if I'm\ninterpreting that log line correctly.\n\n> Memory consumption probably skyrockets since you'll need at least one\n> sort per table, so if you have 100+, then that's (at least) 100+\n> sorts.\n\nRight, that much I had understood.\n\n\nOn Thu, Nov 17, 2011 at 11:28 AM, Craig James\n<[email protected]> wrote:\n> You don't give any details about how and why you are sorting. Are you\n> actually using all of the columns in your aggregated-data table in the sort\n> operation? Or just a few of them?\n\n> You're making the sort operation work with 175 GB of data. If most of that\n> data is only needed for the report (not the sort), then separate it into two\n> tables - one of just the data that the sorting/grouping needs, and the other\n> with the rest of the data. Then create a view that joins it all back\n> together for reporting purposes.\n\nI'm not actually using any ORDER BY at all. This is purely a GROUP BY.\nThe sort happens because of the group aggregate (vs. hash aggregate).\nTwo of the columns are used to group, the other two are aggregates (SUM).\n\nOn Thu, Nov 17, 2011 at 11:55 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> This is on PG 8.4.8 on Linux, 16GB of \"real\" RAM.\n>> Most recently, I enabled trace_sort, disabled hash aggregation[1], and\n>> set a large work_mem (normally very small, in this case I tried\n>> anything from 8MB to 256MB. I even tried 1GB and 2GB).\n>\n> FWIW, I think hash aggregation is your best shot at getting reasonable\n> performance. Sorting 175GB of data is going to hurt no matter what.\n\n> If the grouped table amounts to 5GB, I wouldn't have expected the hash\n> table to be more than maybe 2-3X that size (although this does depend on\n> what aggregates you're running...). Letting the hash aggregation have\n> all your RAM might be the best answer.\n\nI'm re-running the query with work_mem set to 16GB (for just that query).\n\nThe query (with table and column names changed):\n\nSELECT anon_1.columnA, sum(anon_1.columnB) AS columnB,\nsum(anon_1.columnC) AS columnC, anon_1.columnD\nFROM (\n SELECT columnA, columnB, columnC, columnD FROM tableA\n UNION ALL\n .... same select/union all pattern but from 90-ish other tables\n) AS anon_1\nGROUP BY anon_1.columnA, anon_1.columnD\nHAVING (anon_1.columnB) > 0\n\nThe explain verbose with work_mem = 16GB\n\n HashAggregate (cost=54692162.83..54692962.83 rows=40000 width=28)\n Output: columnA, sum(columnB), sum(columnC), columnD\n Filter: (sum(columnB) > 0)\n -> Append (cost=0.00..34547648.48 rows=1611561148 width=28)\n -> Seq Scan on tableA (cost=0.00..407904.40 rows=19045540 width=28)\n Output: columnA, columnB, columnC, columnD\n .... 90-ish more tables here\n\n12 minutes into the query it is consuming 10.1GB of memory.\n21 minutes into the query it is consuming 12.9GB of memory.\nAfter just under 34 minutes it completed with about 15GB of memory being used.\nThat is a rather impressive improvement. 
Previously, I had been\nadvised against using a large work_mem value. I had never thought to\nuse one 3 times the size of the resulting table.\n\nThe explain verbose with enable_hashagg = false:\n\n GroupAggregate (cost=319560040.24..343734257.46 rows=40000 width=28)\n Output: columnA, sum(columnB), sum(columnC), columnD\n Filter: (sum(columnB) > 0)\n -> Sort (cost=319560040.24..323588943.11 rows=1611561148 width=28)\n Output: columnA, columnB, columnC, columnD\n Sort Key: columnA, columnD\n -> Result (cost=0.00..34547648.48 rows=1611561148 width=28)\n Output: columnA, columnB, columnC, columnD\n -> Append (cost=0.00..34547648.48 rows=1611561148 width=28)\n -> Seq Scan on tableA (cost=0.00..407904.40\nrows=19045540 width=28)\n Output: columnA, columnB, columnC, columnD\n .... 90-ish more tables here\n\n\n\n--\nJon\n",
"msg_date": "Thu, 17 Nov 2011 13:32:26 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: external sort performance"
},
{
"msg_contents": "A follow-up question.\nEven with both work_mem and maintenance_work_mem equal to 16GB, I see this:\n\nLOG: 00000: begin index sort: unique = f, workMem = 16777216, randomAccess = f\nand shortly thereafter:\nLOG: 00000: switching to external sort with 59919 tapes: CPU\n2.59s/13.20u sec elapsed 16.85 sec\nand a while later:\nLOG: 00000: finished writing run 1 to tape 0: CPU 8.16s/421.45u sec\nelapsed 433.83 sec\nLOG: 00000: performsort done (except 2-way final merge): CPU\n9.53s/561.56u sec elapsed 576.54 sec\nLOG: 00000: external sort ended, 181837 disk blocks used: CPU\n12.90s/600.45u sec elapsed 625.05 sec\n\n\nThe first log statement is expected. The second log statement, however, isn't.\nThe total table size is (as noted earlier) about 5GB and, in fact, fit\ninto one nice hash table (approx 15GB in size).\nIs the sorting that is necessary for index creation unable to use a\nhash table? (This is a standard btree index).\n\n-- \nJon\n",
"msg_date": "Thu, 17 Nov 2011 14:11:30 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: external sort performance"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> Is the sorting that is necessary for index creation unable to use a\n> hash table? (This is a standard btree index).\n\nHash aggregation isn't sorting --- it's only useful for grouping.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Nov 2011 19:33:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external sort performance "
},
{
"msg_contents": "\n> The first log statement is expected. The second log statement, however, isn't.\n> The total table size is (as noted earlier) about 5GB and, in fact, fit\n> into one nice hash table (approx 15GB in size).\n> Is the sorting that is necessary for index creation unable to use a\n> hash table? (This is a standard btree index).\n\nHow big is the source table? You're not sorting the *result* table,\nyou're sorting the source table if you're summarizing it.\n\nIf the original source data is only 5GB, I'd check your code for a\ncartesian join.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 17 Nov 2011 16:46:05 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external sort performance"
},
{
"msg_contents": "On 2011-11-17 17:10, Jon Nelson wrote:\n> external sort ended, 7708696 disk blocks used: CPU 359.84s/57504.66u\n> sec elapsed 58966.76 sec\n>\n> Am I to understand that the CPU portion of the sorting only took 6\n> minutes but the sort itself took almost 16.5 hours and used approx\n> 60GB of disk space?\n\n\nI realise you've had helpful answers by now, but.... that reads\nas 16 hours of cpu time to me; mostly user-mode but with 6 minute\nof system-mode. 98% cpu usage for the 16 hours elapsed.\n\n-- \nJeremy\n",
"msg_date": "Sun, 20 Nov 2011 13:56:14 +0000",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external sort performance"
},
{
"msg_contents": "On Sun, Nov 20, 2011 at 7:56 AM, Jeremy Harris <[email protected]> wrote:\n> On 2011-11-17 17:10, Jon Nelson wrote:\n>>\n>> external sort ended, 7708696 disk blocks used: CPU 359.84s/57504.66u\n>> sec elapsed 58966.76 sec\n>>\n>> Am I to understand that the CPU portion of the sorting only took 6\n>> minutes but the sort itself took almost 16.5 hours and used approx\n>> 60GB of disk space?\n>\n>\n> I realise you've had helpful answers by now, but.... that reads\n> as 16 hours of cpu time to me; mostly user-mode but with 6 minute\n> of system-mode. 98% cpu usage for the 16 hours elapsed.\n\nThank you very much!\nI was going to post a followup asking for help interpreting the log\nline, but now I don't have to. Do you happen to recall if disk I/O is\ncounted as user or system time? If it's counted as system time, then I\nhave more questions, namely:\n\nIf using a hash table (hash aggregation) shows that the GROUPing can\ntake place in 35 minutes, but a Group Aggregation takes 16 hours, how\nmuch of that is CPU and how much is waiting for I/O?\n\n\n-- \nJon\n",
"msg_date": "Sun, 20 Nov 2011 09:00:41 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: external sort performance"
},
{
"msg_contents": "On 2011-11-20 15:00, Jon Nelson wrote:\n> Do you happen to recall if disk I/O is\n> counted as user or system time?\n\nNeither, traditionally. Those times are cpu times;\nthey only account for what the cpu was doing.\nThe disks could be working in parallel as a result\nof cpu actions, and probably were - but the software\nfound work to do for the cpu. You'd want to be\nlooking at iostat during the run to see how busy the\ndisks were.\n\nAs to why it takes 16 hours cpu to do the external\nversion but only 34 minutes for internal - some of that\nwill be down to data-shuffling in- and out- of disk files\nwhich is nonetheless accounted to user-mode cpu time,\nbut some will be the plain inefficiency of the external\nversion having to effectively do work over many times\nbecause it can't have a complete view of the problem at\nhand at any one time.\n\n-- \nJeremy\n",
"msg_date": "Sun, 20 Nov 2011 16:11:53 +0000",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: external sort performance"
}
]
[
{
"msg_contents": "Hi folks,\n\nI'm running PG 8.3.15 on an itanium box and was seeing lots of \nfloating-point assist faults by the kernel. Searched around, found a \ncouple references/discussions here and there:\n\nhttp://archives.postgresql.org/pgsql-general/2008-08/msg00244.php\nhttp://archives.postgresql.org/pgsql-performance/2011-06/msg00093.php\nhttp://archives.postgresql.org/pgsql-performance/2011-06/msg00102.php\n\nI took up Tom's challenge and found that the buffer allocation prediction \ncode in BgBufferSync() is the likely culprit:\n\n \tif (smoothed_alloc <= (float) recent_alloc)\n \t\tsmoothed_alloc = recent_alloc;\n \telse\n \t\tsmoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n \t\t\tsmoothing_samples;\n\nsmoothed_alloc (float) is moving towards 0 during any extended period of \ntime when recent_alloc (uint32) remains 0. In my case it takes just a \nminute or two before it becomes small enough to start triggering the \nfault.\n\nGiven how smoothed_alloc is used just after this place in the code it \nseems overkill to allow it to continue to shrink so small, so I made a \nlittle mod:\n\n \tif (smoothed_alloc <= (float) recent_alloc)\n \t\tsmoothed_alloc = recent_alloc;\n \telse if (smoothed_alloc >= 0.00001)\n \t\tsmoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n \t\t\tsmoothing_samples;\n\n\nThis seems to have done the trick. From what I can tell this section of \ncode is unchanged in 9.1.1 - perhaps in a future version a similar mod \ncould be made?\n\nFWIW, I don't think it's really much of a performance impact for the \ndatabase, because if recent_alloc remains 0 for a long while it probably \nmeans the DB isn't doing much anyway. However it is annoying when system \nlogs fill up, and the extra floating point handling may affect some other \nprocess(es).\n\n-Greg\n",
"msg_date": "Thu, 17 Nov 2011 17:07:51 -0800",
"msg_from": "Greg Matthews <[email protected]>",
"msg_from_op": true,
"msg_subject": "probably cause (and fix) for floating-point assist faults on itanium"
},
{
"msg_contents": "On Thu, Nov 17, 2011 at 10:07 PM, Greg Matthews\n<[email protected]> wrote:\n> if (smoothed_alloc <= (float) recent_alloc)\n> smoothed_alloc = recent_alloc;\n> else if (smoothed_alloc >= 0.00001)\n> smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n> smoothing_samples;\n>\n\nI don't think that logic is sound.\n\nRather,\n\n if (smoothed_alloc <= (float) recent_alloc) {\n smoothed_alloc = recent_alloc;\n } else {\n if (smoothed_alloc < 0.000001)\n smoothed_alloc = 0;\n smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n smoothing_samples;\n }\n",
"msg_date": "Fri, 18 Nov 2011 11:21:52 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: probably cause (and fix) for floating-point assist\n\tfaults on itanium"
},
{
"msg_contents": "Claudio Freire <[email protected]> writes:\n> On Thu, Nov 17, 2011 at 10:07 PM, Greg Matthews\n> <[email protected]> wrote:\n>> if (smoothed_alloc <= (float) recent_alloc)\n>> smoothed_alloc = recent_alloc;\n>> else if (smoothed_alloc >= 0.00001)\n>> smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n>> smoothing_samples;\n>> \n\n> I don't think that logic is sound.\n\n> Rather,\n\n> if (smoothed_alloc <= (float) recent_alloc) {\n> smoothed_alloc = recent_alloc;\n> } else {\n> if (smoothed_alloc < 0.000001)\n> smoothed_alloc = 0;\n> smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n> smoothing_samples;\n> }\n\nThe real problem with either of these is the cutoff number is totally\narbitrary. I'm thinking of something like this:\n\n /*\n * Track a moving average of recent buffer allocations. Here, rather than\n * a true average we want a fast-attack, slow-decline behavior: we\n * immediately follow any increase.\n */\n if (smoothed_alloc <= (float) recent_alloc)\n smoothed_alloc = recent_alloc;\n else\n smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n smoothing_samples;\n\n /* Scale the estimate by a GUC to allow more aggressive tuning. */\n upcoming_alloc_est = smoothed_alloc * bgwriter_lru_multiplier;\n\n+ /*\n+ * If recent_alloc remains at zero for many cycles,\n+ * smoothed_alloc will eventually underflow to zero, and the\n+ * underflows produce annoying kernel warnings on some platforms.\n+ * Once upcoming_alloc_est has gone to zero, there's no point in\n+ * tracking smaller and smaller values of smoothed_alloc, so just\n+ * reset it to exactly zero to avoid this syndrome.\n+ */\n+ if (upcoming_alloc_est == 0)\n+ smoothed_alloc = 0;\n\n /*\n * Even in cases where there's been little or no buffer allocation\n * activity, we want to make a small amount of progress through the buffer\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Nov 2011 11:11:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: probably cause (and fix) for floating-point assist faults on\n\titanium"
},
{
"msg_contents": "Looks good to me. I built PG with this change, no kernel warnings after \n~10 minutes of running. I'll continue to monitor but I think this fixes \nthe syndrome. Thanks Tom.\n\n-Greg\n\n\nOn Fri, 18 Nov 2011, Tom Lane wrote:\n\n> Claudio Freire <[email protected]> writes:\n>> On Thu, Nov 17, 2011 at 10:07 PM, Greg Matthews\n>> <[email protected]> wrote:\n>>> if (smoothed_alloc <= (float) recent_alloc)\n>>> smoothed_alloc = recent_alloc;\n>>> else if (smoothed_alloc >= 0.00001)\n>>> smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n>>> smoothing_samples;\n>>>\n>\n>> I don't think that logic is sound.\n>\n>> Rather,\n>\n>> if (smoothed_alloc <= (float) recent_alloc) {\n>> smoothed_alloc = recent_alloc;\n>> } else {\n>> if (smoothed_alloc < 0.000001)\n>> smoothed_alloc = 0;\n>> smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n>> smoothing_samples;\n>> }\n>\n> The real problem with either of these is the cutoff number is totally\n> arbitrary. I'm thinking of something like this:\n>\n> /*\n> * Track a moving average of recent buffer allocations. Here, rather than\n> * a true average we want a fast-attack, slow-decline behavior: we\n> * immediately follow any increase.\n> */\n> if (smoothed_alloc <= (float) recent_alloc)\n> smoothed_alloc = recent_alloc;\n> else\n> smoothed_alloc += ((float) recent_alloc - smoothed_alloc) /\n> smoothing_samples;\n>\n> /* Scale the estimate by a GUC to allow more aggressive tuning. */\n> upcoming_alloc_est = smoothed_alloc * bgwriter_lru_multiplier;\n>\n> + /*\n> + * If recent_alloc remains at zero for many cycles,\n> + * smoothed_alloc will eventually underflow to zero, and the\n> + * underflows produce annoying kernel warnings on some platforms.\n> + * Once upcoming_alloc_est has gone to zero, there's no point in\n> + * tracking smaller and smaller values of smoothed_alloc, so just\n> + * reset it to exactly zero to avoid this syndrome.\n> + */\n> + if (upcoming_alloc_est == 0)\n> + smoothed_alloc = 0;\n>\n> /*\n> * Even in cases where there's been little or no buffer allocation\n> * activity, we want to make a small amount of progress through the buffer\n>\n>\n> \t\t\tregards, tom lane\n>\n",
"msg_date": "Fri, 18 Nov 2011 08:37:07 -0800",
"msg_from": "Greg Matthews <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: probably cause (and fix) for floating-point assist\n\tfaults on itanium"
},
{
"msg_contents": "Greg Matthews <[email protected]> writes:\n> Looks good to me. I built PG with this change, no kernel warnings after \n> ~10 minutes of running. I'll continue to monitor but I think this fixes \n> the syndrome. Thanks Tom.\n\nPatch committed -- thanks for checking it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 Nov 2011 00:39:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: probably cause (and fix) for floating-point assist faults on\n\titanium"
}
]
[
{
"msg_contents": "I have two queries in PG 9.1. One uses an index like I would like, the other does not. Is this expected behavior? If so, is there any way around it? \n\n\npostgres=# explain analyze select min(id) from delayed_jobs where strand='sis_batch:account:15' group by strand;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..8918.59 rows=66 width=29) (actual time=226.759..226.760 rows=1 loops=1)\n -> Seq Scan on delayed_jobs (cost=0.00..8553.30 rows=72927 width=29) (actual time=0.014..169.941 rows=72268 loops=1)\n Filter: ((strand)::text = 'sis_batch:account:15'::text)\n Total runtime: 226.817 ms\n(4 rows)\n\npostgres=# explain analyze select id from delayed_jobs where strand='sis_batch:account:15' order by id limit 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.33 rows=1 width=8) (actual time=0.097..0.098 rows=1 loops=1)\n -> Index Scan using index_delayed_jobs_on_strand on delayed_jobs (cost=0.00..24181.74 rows=72927 width=8) (actual time=0.095..0.095 rows=1 loops=1)\n Index Cond: ((strand)::text = 'sis_batch:account:15'::text)\n Total runtime: 0.129 ms\n(4 rows)\n\n",
"msg_date": "Thu, 17 Nov 2011 17:12:38 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": true,
"msg_subject": "index usage for min() vs. \"order by asc limit 1\""
},
{
"msg_contents": "\nOn Nov 17, 2011, at 5:12 PM, Ben Chobot wrote:\n\n> I have two queries in PG 9.1. One uses an index like I would like, the other does not. Is this expected behavior? If so, is there any way around it? \n\nI don't think you want the group by in that first query.\n\nCheers,\n Steve\n\n> \n> \n> postgres=# explain analyze select min(id) from delayed_jobs where strand='sis_batch:account:15' group by strand;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.00..8918.59 rows=66 width=29) (actual time=226.759..226.760 rows=1 loops=1)\n> -> Seq Scan on delayed_jobs (cost=0.00..8553.30 rows=72927 width=29) (actual time=0.014..169.941 rows=72268 loops=1)\n> Filter: ((strand)::text = 'sis_batch:account:15'::text)\n> Total runtime: 226.817 ms\n> (4 rows)\n> \n> postgres=# explain analyze select id from delayed_jobs where strand='sis_batch:account:15' order by id limit 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.33 rows=1 width=8) (actual time=0.097..0.098 rows=1 loops=1)\n> -> Index Scan using index_delayed_jobs_on_strand on delayed_jobs (cost=0.00..24181.74 rows=72927 width=8) (actual time=0.095..0.095 rows=1 loops=1)\n> Index Cond: ((strand)::text = 'sis_batch:account:15'::text)\n> Total runtime: 0.129 ms\n> (4 rows)\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Thu, 17 Nov 2011 17:20:34 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage for min() vs. \"order by asc limit 1\""
},
{
"msg_contents": "On Nov 17, 2011, at 5:20 PM, Steve Atkins wrote:\n> \n> I don't think you want the group by in that first query.\n\nHeh, I tried to simply the example, but in reality that = becomes an in clause of multiple values. So the group by is needed.\n\n\n>> \n>> \n>> postgres=# explain analyze select min(id) from delayed_jobs where strand='sis_batch:account:15' group by strand;\n>> QUERY PLAN\n>> --------------------------------------------------------------------------------------------------------------------------\n>> GroupAggregate (cost=0.00..8918.59 rows=66 width=29) (actual time=226.759..226.760 rows=1 loops=1)\n>> -> Seq Scan on delayed_jobs (cost=0.00..8553.30 rows=72927 width=29) (actual time=0.014..169.941 rows=72268 loops=1)\n>> Filter: ((strand)::text = 'sis_batch:account:15'::text)\n>> Total runtime: 226.817 ms\n>> (4 rows)\n>> \n>> postgres=# explain analyze select id from delayed_jobs where strand='sis_batch:account:15' order by id limit 1;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..0.33 rows=1 width=8) (actual time=0.097..0.098 rows=1 loops=1)\n>> -> Index Scan using index_delayed_jobs_on_strand on delayed_jobs (cost=0.00..24181.74 rows=72927 width=8) (actual time=0.095..0.095 rows=1 loops=1)\n>> Index Cond: ((strand)::text = 'sis_batch:account:15'::text)\n>> Total runtime: 0.129 ms\n>> (4 rows)\n>> \n>> \n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\nOn Nov 17, 2011, at 5:20 PM, Steve Atkins wrote:I don't think you want the group by in that first query.Heh, I tried to simply the example, but in reality that = becomes an in clause of multiple values. So the group by is needed.postgres=# explain analyze select min(id) from delayed_jobs where strand='sis_batch:account:15' group by strand; QUERY PLAN--------------------------------------------------------------------------------------------------------------------------GroupAggregate (cost=0.00..8918.59 rows=66 width=29) (actual time=226.759..226.760 rows=1 loops=1) -> Seq Scan on delayed_jobs (cost=0.00..8553.30 rows=72927 width=29) (actual time=0.014..169.941 rows=72268 loops=1) Filter: ((strand)::text = 'sis_batch:account:15'::text)Total runtime: 226.817 ms(4 rows)postgres=# explain analyze select id from delayed_jobs where strand='sis_batch:account:15' order by id limit 1; QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------Limit (cost=0.00..0.33 rows=1 width=8) (actual time=0.097..0.098 rows=1 loops=1) -> Index Scan using index_delayed_jobs_on_strand on delayed_jobs (cost=0.00..24181.74 rows=72927 width=8) (actual time=0.095..0.095 rows=1 loops=1) Index Cond: ((strand)::text = 'sis_batch:account:15'::text)Total runtime: 0.129 ms(4 rows)-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 17 Nov 2011 17:23:38 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index usage for min() vs. \"order by asc limit 1\""
},
{
"msg_contents": "can you run an analyze command first and then post here the results of:\nselect * FROM pg_stats WHERE tablename = 'delayed_jobs'; \n?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/index-usage-for-min-vs-order-by-asc-limit-1-tp5002928p5004410.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Fri, 18 Nov 2011 06:16:39 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage for min() vs. \"order by asc limit 1\""
}
]
[
{
"msg_contents": "Hello,\n\nI'm going to be testing some new hardware (see http://archives.postgresql.org/pgsql-performance/2011-11/msg00230.php) and while I've done some very rudimentary before/after tests with pgbench, I'm looking to pull more info than I have in the past, and I'd really like to automate things further.\n\nI'll be starting with basic disk benchmarks (bonnie++ and iozone) and then moving on to pgbench.\n\nI'm running FreeBSD and I'm interested in getting some baseline info on UFS2 single disk (SATA 7200/WD RE4), gmirror, zfs mirror, zfs raidz1, zfs set of two mirrors (ie: two mirrored vdevs in a mirror). Then I'm repeating that with the 4 Intel 320 SSDs, and just to satisfy my curiosity, a zfs mirror with two of the SSDs mirrored as the ZIL.\n\nOnce that's narrowed down to a few practical choices, I'm moving on to pgbench. I've found some good info here regarding pgbench that is unfortunately a bit dated: http://www.westnet.com/~gsmith/content/postgresql/\n\nA few questions:\n\n-Any favorite automation or graphing tools beyond what's on Greg's site?\n-Any detailed information on creating \"custom\" pgbench tests?\n-Any other postgres benchmarking tools?\n\nI'm also curious about benchmarking using my own data. I tried something long ago that at least gave the illusion of working, but didn't seem quite right to me. I enabled basic query logging on one of our busier servers, dumped the db, and let it run for 24 hours. That gave me the normal random data from users throughout the day as well as our batch jobs that run overnight. I had to grep out and reformat the actual queries from the logfile, but that was not difficult. I then loaded the dump into the test server and basically fed the saved queries into it and timed the result. I also hacked together a script to sample cpu and disk stats every 2S and had that feeding into an rrd database so I could see how \"busy\" things were.\n\nIn theory, this sounded good (to me), but I'm not sure I trust the results. Any suggestions on the general concept? Is it sound? Is there a better way to do it? I really like the idea of using (our) real data.\n\nLastly, any general suggestions on tools to collect system data during tests and graph it are more than welcome. I can homebrew, but I'm sure I'd be reinventing the wheel.\n\nOh, and if anyone wants any tests run that would not take an insane amount of time and would be valuable to those on this list, please let me know. Since SSDs have been a hot topic lately and not everyone has a 4 SSDs laying around, I'd like to sort of focus on anything that would shed some light on the whole SSD craze.\n\nThe box under test ultimately will have 32GB RAM, 2 quad core 2.13GHz Xeon 5506 cpus and 4 Intel 320 160GB SSDs. I'm recycling some older boxes as well, so I have much more RAM on hand until those are finished.\n\nThanks,\n\nCharles\n\nps - considering the new PostgreSQL Performance book that Packt has, any strong feelings about that one way or the other? Does it go very far beyond what's on the wiki?",
"msg_date": "Fri, 18 Nov 2011 04:55:54 -0500",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Benchmarking tools, methods"
},
{
"msg_contents": "On 18 Listopad 2011, 10:55, CSS wrote:\n> Hello,\n>\n> I'm going to be testing some new hardware (see\n> http://archives.postgresql.org/pgsql-performance/2011-11/msg00230.php) and\n> while I've done some very rudimentary before/after tests with pgbench, I'm\n> looking to pull more info than I have in the past, and I'd really like to\n> automate things further.\n>\n> I'll be starting with basic disk benchmarks (bonnie++ and iozone) and then\n> moving on to pgbench.\n>\n> I'm running FreeBSD and I'm interested in getting some baseline info on\n> UFS2 single disk (SATA 7200/WD RE4), gmirror, zfs mirror, zfs raidz1, zfs\n> set of two mirrors (ie: two mirrored vdevs in a mirror). Then I'm\n> repeating that with the 4 Intel 320 SSDs, and just to satisfy my\n> curiosity, a zfs mirror with two of the SSDs mirrored as the ZIL.\n>\n> Once that's narrowed down to a few practical choices, I'm moving on to\n> pgbench. I've found some good info here regarding pgbench that is\n> unfortunately a bit dated:\n> http://www.westnet.com/~gsmith/content/postgresql/\n>\n> A few questions:\n>\n> -Any favorite automation or graphing tools beyond what's on Greg's site?\n\nThere are talks not listed on that westnet page - for example a recent\n\"Bottom-up Database Benchmarking\" talk, available for example here:\n\n http://pgbr.postgresql.org.br/2011/palestras.php?id=60\n\nIt probably contains more recent info about benchmarking tools and testing\nnew hardware.\n\n> -Any detailed information on creating \"custom\" pgbench tests?\n\nThe technical info at\nhttp://www.postgresql.org/docs/9.1/interactive/pgbench.html should be\nsufficient I guess, it's fairly simple. The most difficult thing is\ndetermining what the script should do - what queries to execute etc. And\nthat depends on the application.\n\n> -Any other postgres benchmarking tools?\n\nNot really. The pgbench is a nice stress testing tool and the scripting is\nquite flexible. I've done some TPC-H-like testing recently, but it's\nrather a bunch of scripts executed manually.\n\n> I'm also curious about benchmarking using my own data. I tried something\n> long ago that at least gave the illusion of working, but didn't seem quite\n> right to me. I enabled basic query logging on one of our busier servers,\n> dumped the db, and let it run for 24 hours. That gave me the normal\n> random data from users throughout the day as well as our batch jobs that\n> run overnight. I had to grep out and reformat the actual queries from the\n> logfile, but that was not difficult. I then loaded the dump into the\n> test server and basically fed the saved queries into it and timed the\n> result. I also hacked together a script to sample cpu and disk stats\n> every 2S and had that feeding into an rrd database so I could see how\n> \"busy\" things were.\n>\n> In theory, this sounded good (to me), but I'm not sure I trust the\n> results. Any suggestions on the general concept? Is it sound? Is there\n> a better way to do it? I really like the idea of using (our) real data.\n\nIt's definitely a step in the right direction. An application-specific\nbenchmark is usually much more useful that a generic stress test. It\nsimply is going to tell you more about your workload and you can use it to\nasses the capacity more precisely.\n\nThere are some issues though - mostly about transactions and locking. 
For\nexample if the client starts a transaction, locks a bunch of records and\nthen performs a time-consuming processing task outside the database, the\nother clients may be locked. You won't see this during the stress test,\nbecause in reality it looks like this\n\n1) A: BEGIN\n2) A: LOCK (table, row, ...)\n3) A: perform something expensive\n4) B: attempt to LOCK the same resource (blocks)\n5) A: release the LOCK\n6) B: obtains the LOCK and continues\n\nbut when replaying the workload, you'll see this\n\n1) A: BEGIN\n2) A: LOCK (table, row, ...)\n3) B: attempt to LOCK the same resource (blocks)\n4) A: release the LOCK\n5) B: obtains the LOCK and continues\n\nso B waits for a very short period of time (or not at all).\n\nTo identify this problem, you'd have to actually behave like the client.\nFor example with a web application, you could use apache bench\n(https://httpd.apache.org/docs/2.0/programs/ab.html) or something like\nthat.\n\n> Lastly, any general suggestions on tools to collect system data during\n> tests and graph it are more than welcome. I can homebrew, but I'm sure\n> I'd be reinventing the wheel.\n\nSystem stats or database stats? There's a plenty of tools for system stats\n (e.g. sar). For database stat's it's a bit more difficult - there's\npgwatch, pgstatspack and maybe some other tools (I've written pg_monitor).\n\nTomas\n\n\n",
"msg_date": "Fri, 18 Nov 2011 12:59:02 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking tools, methods"
},
{
"msg_contents": "2011/11/18 Tomas Vondra <[email protected]>:\n> On 18 Listopad 2011, 10:55, CSS wrote:\n>> Hello,\n>>\n>> I'm going to be testing some new hardware (see\n>> http://archives.postgresql.org/pgsql-performance/2011-11/msg00230.php) and\n>> while I've done some very rudimentary before/after tests with pgbench, I'm\n>> looking to pull more info than I have in the past, and I'd really like to\n>> automate things further.\n>>\n>> I'll be starting with basic disk benchmarks (bonnie++ and iozone) and then\n>> moving on to pgbench.\n>>\n>> I'm running FreeBSD and I'm interested in getting some baseline info on\n>> UFS2 single disk (SATA 7200/WD RE4), gmirror, zfs mirror, zfs raidz1, zfs\n>> set of two mirrors (ie: two mirrored vdevs in a mirror). Then I'm\n>> repeating that with the 4 Intel 320 SSDs, and just to satisfy my\n>> curiosity, a zfs mirror with two of the SSDs mirrored as the ZIL.\n>>\n>> Once that's narrowed down to a few practical choices, I'm moving on to\n>> pgbench. I've found some good info here regarding pgbench that is\n>> unfortunately a bit dated:\n>> http://www.westnet.com/~gsmith/content/postgresql/\n>>\n>> A few questions:\n>>\n>> -Any favorite automation or graphing tools beyond what's on Greg's site?\n>\n> There are talks not listed on that westnet page - for example a recent\n> \"Bottom-up Database Benchmarking\" talk, available for example here:\n>\n> http://pgbr.postgresql.org.br/2011/palestras.php?id=60\n>\n> It probably contains more recent info about benchmarking tools and testing\n> new hardware.\n>\n>> -Any detailed information on creating \"custom\" pgbench tests?\n>\n> The technical info at\n> http://www.postgresql.org/docs/9.1/interactive/pgbench.html should be\n> sufficient I guess, it's fairly simple. The most difficult thing is\n> determining what the script should do - what queries to execute etc. And\n> that depends on the application.\n>\n>> -Any other postgres benchmarking tools?\n>\n> Not really. The pgbench is a nice stress testing tool and the scripting is\n> quite flexible. I've done some TPC-H-like testing recently, but it's\n> rather a bunch of scripts executed manually.\n>\n>> I'm also curious about benchmarking using my own data. I tried something\n>> long ago that at least gave the illusion of working, but didn't seem quite\n>> right to me. I enabled basic query logging on one of our busier servers,\n>> dumped the db, and let it run for 24 hours. That gave me the normal\n>> random data from users throughout the day as well as our batch jobs that\n>> run overnight. I had to grep out and reformat the actual queries from the\n>> logfile, but that was not difficult. I then loaded the dump into the\n>> test server and basically fed the saved queries into it and timed the\n>> result. I also hacked together a script to sample cpu and disk stats\n>> every 2S and had that feeding into an rrd database so I could see how\n>> \"busy\" things were.\n>>\n>> In theory, this sounded good (to me), but I'm not sure I trust the\n>> results. Any suggestions on the general concept? Is it sound? Is there\n>> a better way to do it? I really like the idea of using (our) real data.\n>\n> It's definitely a step in the right direction. An application-specific\n> benchmark is usually much more useful that a generic stress test. It\n> simply is going to tell you more about your workload and you can use it to\n> asses the capacity more precisely.\n>\n> There are some issues though - mostly about transactions and locking. 
For\n> example if the client starts a transaction, locks a bunch of records and\n> then performs a time-consuming processing task outside the database, the\n> other clients may be locked. You won't see this during the stress test,\n> because in reality it looks like this\n>\n> 1) A: BEGIN\n> 2) A: LOCK (table, row, ...)\n> 3) A: perform something expensive\n> 4) B: attempt to LOCK the same resource (blocks)\n> 5) A: release the LOCK\n> 6) B: obtains the LOCK and continues\n>\n> but when replaying the workload, you'll see this\n>\n> 1) A: BEGIN\n> 2) A: LOCK (table, row, ...)\n> 3) B: attempt to LOCK the same resource (blocks)\n> 4) A: release the LOCK\n> 5) B: obtains the LOCK and continues\n>\n> so B waits for a very short period of time (or not at all).\n>\n> To identify this problem, you'd have to actually behave like the client.\n> For example with a web application, you could use apache bench\n> (https://httpd.apache.org/docs/2.0/programs/ab.html) or something like\n> that.\n\nI like Tsung: http://tsung.erlang-projects.org/\nIt is very efficient (you can achieve tens or hundreds of thousands\nconnections per core)\nAnd you can script scenario in xml (there is also a sql proxy to\nrecord session, and pgfouine as an option to build tsung scenario from\nits parsed log).\n\nYou can add dynamic stuff in the xml (core function provided by tsung)\nand also write your own erland modules to add complexity to your\nscenario.\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Fri, 18 Nov 2011 13:44:30 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking tools, methods"
},
{
"msg_contents": "On Fri, Nov 18, 2011 at 2:55 AM, CSS <[email protected]> wrote:\n\n> ps - considering the new PostgreSQL Performance book that Packt has, any strong feelings about that one way or the other? Does it go very far beyond what's on the wiki?\n\nSince others have provided perfectly good answers to all your other\nquestions, I'll take this one. The book is fantastic. I was a\nreviewer for it and had read it all before it was published but still\nin rough form. Got a copy and read most of it all over again. It's a\nmust have for postgresql production DBAs.\n",
"msg_date": "Fri, 18 Nov 2011 10:38:52 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking tools, methods"
},
{
"msg_contents": "On 11/18/2011 04:55 AM, CSS wrote:\n> I'm also curious about benchmarking using my own data. I tried something long ago that at least gave the illusion of working, but didn't seem quite right to me. I enabled basic query logging on one of our busier servers, dumped the db, and let it run for 24 hours. That gave me the normal random data from users throughout the day as well as our batch jobs that run overnight. I had to grep out and reformat the actual queries from the logfile, but that was not difficult. I then loaded the dump into the test server and basically fed the saved queries into it and timed the result. I also hacked together a script to sample cpu and disk stats every 2S and had that feeding into an rrd database so I could see how \"busy\" things were.\n>\n> In theory, this sounded good (to me), but I'm not sure I trust the results. Any suggestions on the general concept? Is it sound? Is there a better way to do it? I really like the idea of using (our) real data.\n> \n\nThe thing that's hard to do here is replay the activity with the right \ntiming. Some benchmarks, such as pgbench, will hit the database as fast \nas it will process work. That's not realistic. You really need to \nconsider that real applications have pauses in them, and worry about \nthat both in playback speed and in results analysis.\n\nSee http://wiki.postgresql.org/wiki/Statement_Playback for some more \ninfo on this.\n\n> ps - considering the new PostgreSQL Performance book that Packt has, any strong feelings about that one way or the other? Does it go very far beyond what's on the wiki?\n> \n\nPages 21 through 97 are about general benchmarking and hardware setup; \n189 through 208 cover just pgbench. There's almost no overlap between \nthose sections and the wiki, which is mainly focused on PostgreSQL usage \nissues. Unless you're much smarter than me, you can expect to spent \nmonths to years reinventing wheels described there before reaching new \nground in the areas it covers. From the questions you've been asking, \nyou may not find as much about ZFS tuning and SSDs as you'd like though.\n\nhttp://www.2ndquadrant.com/en/talks/ has some updated material about \nthings discovered since the book was published. The \"Bottom-Up Database \nBenchmarking\" there shows the tests I'm running nowadays, which have \nevolved a bit in the last year.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Sat, 19 Nov 2011 11:21:43 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking tools, methods"
},
{
"msg_contents": "On Nov 19, 2011, at 11:21 AM, Greg Smith wrote:\n\n> On 11/18/2011 04:55 AM, CSS wrote:\n>> I'm also curious about benchmarking using my own data. I tried something long ago that at least gave the illusion of working, but didn't seem quite right to me. I enabled basic query logging on one of our busier servers, dumped the db, and let it run for 24 hours. That gave me the normal random data from users throughout the day as well as our batch jobs that run overnight. I had to grep out and reformat the actual queries from the logfile, but that was not difficult. I then loaded the dump into the test server and basically fed the saved queries into it and timed the result. I also hacked together a script to sample cpu and disk stats every 2S and had that feeding into an rrd database so I could see how \"busy\" things were.\n>> \n>> In theory, this sounded good (to me), but I'm not sure I trust the results. Any suggestions on the general concept? Is it sound? Is there a better way to do it? I really like the idea of using (our) real data.\n>> \n> \n> The thing that's hard to do here is replay the activity with the right timing. Some benchmarks, such as pgbench, will hit the database as fast as it will process work. That's not realistic. You really need to consider that real applications have pauses in them, and worry about that both in playback speed and in results analysis.\n> \n> See http://wiki.postgresql.org/wiki/Statement_Playback for some more info on this.\n\nThanks so much for this, and thanks to Cédric for also pointing out Tsung specifically on that page. I had no idea any of these tools existed. I really like the idea of \"application specific\" testing, it makes total sense for the kind of things we're trying to measure.\n\nI also wanted to thank everyone else that posted in this thread, all of this info is tremendously helpful. This is a really excellent list, and I really appreciate all the people posting here that make their living doing paid consulting taking the time to monitor and post on this list. Yet another way for me to validate choosing postgres over that \"other\" open source db.\n\n\n>> ps - considering the new PostgreSQL Performance book that Packt has, any strong feelings about that one way or the other? Does it go very far beyond what's on the wiki?\n>> \n> \n> Pages 21 through 97 are about general benchmarking and hardware setup; 189 through 208 cover just pgbench. There's almost no overlap between those sections and the wiki, which is mainly focused on PostgreSQL usage issues. Unless you're much smarter than me, you can expect to spent months to years reinventing wheels described there before reaching new ground in the areas it covers. From the questions you've been asking, you may not find as much about ZFS tuning and SSDs as you'd like though.\n\nWe're grabbing a copy of it for the office. Packt is running a sale, so we're also going to grab the \"cookbook\", it looks intriguing.\n\n> http://www.2ndquadrant.com/en/talks/ has some updated material about things discovered since the book was published. 
The \"Bottom-Up Database Benchmarking\" there shows the tests I'm running nowadays, which have evolved a bit in the last year.\n\nLooks like good stuff, thanks.\n\nCharles\n\n> -- \n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 28 Nov 2011 17:32:59 -0500",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking tools, methods"
}
]
[
{
"msg_contents": "Hey there,\r\n\r\n \r\nWe are looking at beefing up our servers with SSD's. Some of you did some interesting tests with the Intel 320. So the idea came to make a RAID10 with four 600GB models.\r\n\r\n \r\nI did however do some calculations with the current database server (220GB database, expected to grow to 1TB by the end of next year). I specifically looked at /proc/diskstat at the read/write figures. From there I could see a read/write ratio of 3:1, and I also saw a wopping 170GB of writes per day (for a database that currently grows 1GB per dag). That seems like an insanely high figure to me! How come? We do mostly inserts, hardly any updates, virtually no deletes.\r\n\r\n \r\nSecondly, I also looked at the reliability figures of the Intel 320. They show 5 years of 20GB per day, meaning that it will hold up for about 200 days in our system. RAID 10 wil make 400 days of that, but this seems hardly a lot.. Am I missing something here?\r\n\r\n \r\nKind regards,\r\n\r\n \r\nChristiaan\r\n\r\n \r\n \r\n \r\n\n\n\n\n\nSSD endurance calculations\n\n\n\nHey there, We are looking at beefing up our servers with SSD's. Some of you did some interesting tests with the Intel 320. So the idea came to make a RAID10 with four 600GB models. I did however do some calculations with the current database server (220GB database, expected to grow to 1TB by the end of next year). I specifically looked at /proc/diskstat at the read/write figures. From there I could see a read/write ratio of 3:1, and I also saw a wopping 170GB of writes per day (for a database that currently grows 1GB per dag). That seems like an insanely high figure to me! How come? We do mostly inserts, hardly any updates, virtually no deletes. Secondly, I also looked at the reliability figures of the Intel 320. They show 5 years of 20GB per day, meaning that it will hold up for about 200 days in our system. RAID 10 wil make 400 days of that, but this seems hardly a lot.. Am I missing something here? Kind regards, Christiaan",
"msg_date": "Mon, 21 Nov 2011 22:03:46 +0100",
"msg_from": "=?utf-8?Q?Christiaan_Willemsen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSD endurance calculations"
},
{
"msg_contents": "On 2011-11-21, Christiaan Willemsen <[email protected]> wrote:\n>We=\n> are looking at beefing up our servers with SSD's. Some of you did so=\n> me interesting tests with the Intel 320. So the idea came to make a RAID1=\n> 0 with four 600GB models.</p><p> </p><p>I did however do some calcul=\n> ations with the current database server (220GB database, expected to grow=\n> to 1TB by the end of next year). I specifically looked at /proc/diskstat=\n> at the read/write figures. From there I could see a read/write ratio of =\n> 3:1, and I also saw a wopping 170GB of writes per day (for a database tha=\n> t currently grows 1GB per dag). That seems like an insanely high figure t=\n> o me! How come=3F We do mostly inserts, hardly any updates, virtually no =\n> deletes.</p><p> </p><p>Secondly, I also looked at the reliability fi=\n> gures of the Intel 320. They show 5 years of 20GB per day, meaning that i=\n> t will hold up for about 200 days in our system. RAID 10 wil make 400 day=\n> s of that, but this seems hardly a lot.. Am I missing something here=3F</=\n> p><p> </p><p>Kind regards,</p><p> </p><p>Christiaan</p><div><p =\n> style=3D\"font-family: monospace; \"> </p></div><p> </p><p> =\n></p>=0A</body>=0A</html>\n\nIs your WAL on a separate disk (or set of disks)?\n\nAlso, not sure you can fairly conclude that \"RAID 10 will make 400 days\nof that\" -- I had read some posts here a few months back suggesting\nthat SSDs have been observed to fail very close to each\nother in time in a RAID configuration.\n",
"msg_date": "Tue, 22 Nov 2011 16:59:17 +0000 (UTC)",
"msg_from": "Edgardo Portal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD endurance calculations"
},
{
"msg_contents": "On 11/21/2011 04:03 PM, Christiaan Willemsen wrote:\n>\n> Secondly, I also looked at the reliability figures of the Intel 320. \n> They show 5 years of 20GB per day, meaning that it will hold up for \n> about 200 days in our system. RAID 10 wil make 400 days of that, but \n> this seems hardly a lot.. Am I missing something here?\n>\n\nThe 320 series drives are not intended for things that write as heavily \nas you do. If your database is growing fast enough that you're going to \nhit a terabyte in a short period of time, just cross that right off the \nlist of possibilities. Intel's 710 series is the one aimed at your sort \nof workload.\n\nYou can probably pull down the total write volume on your system by more \naggressively running VACUUM FREEZE shortly after new data is loaded. \nPostgreSQL tends to write blocks even in INSERT-only tables several \ntimes; forcing them to freeze early can eliminate several of them.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 23 Nov 2011 10:19:04 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SSD endurance calculations"
}
] |
[
{
"msg_contents": "Hi,\n\nMy application is performing 1600 inserts per second and 7 updates per\nsecond. The updates occurred only in a small table which has only 6 integer\ncolumns. The inserts occurred in all other daily tables. My application\ncreates around 75 tables per day. No updates/deletes occurred in those 75\ndaily tables (only inserts and drop tables if older than 40 days). Since\nonly inserts in the daily tables, I disabled autovacuum in the conf file\nand I can see it is off stat using show command.\n\n*sasdb=# show \"autovacuum\";\n autovacuum\n------------\n off\n(1 row)*\n\nBut the autovacuum is running frequently and it impact the performance of\nmy system(high CPU). You can see the autovacuum in the pg_stat_activity.\n*\nsasdb=# select current_query from pg_stat_activity where current_query like\n'autovacuum%';\n ** current_query\n**\n\n------------------------------**------------------------------**\n---------------------------\n autovacuum: VACUUM public.xxxxx**_17_Oct_11 (to prevent wraparound)\n autovacuum: VACUUM public.**xxxxx**_17_Oct_11 (to prevent wraparound)\n autovacuum: VACUUM public.**xxxxx**_17_Oct_11 (to prevent wraparound)\n(3 rows)\n\n\n*Why the autovacuum is running even though, I disabled ? Am I miss anything\n?\n\nAnd also please share your views on my decision about disable autovacuum\nfor my application. I am planning to run vacuum command daily on that small\ntable which has frequent updates.\n\nThanks,\nRamesh\n\nHi,My application is performing 1600 inserts per second and 7 updates per second. The updates occurred only in a small table which has only 6 integer columns. The inserts occurred in all other daily tables. My application creates around 75 tables per day. No updates/deletes occurred in those 75 daily tables (only inserts and drop tables if older than 40 days). Since only inserts in the daily tables, I disabled autovacuum in the conf file and I can see it is off stat using show command. \nsasdb=# show \"autovacuum\"; autovacuum ------------ off(1 row)But the autovacuum is running frequently and it impact the performance of my system(high CPU). You can see the autovacuum in the pg_stat_activity. \nsasdb=# select current_query from pg_stat_activity where current_query like 'autovacuum%'; current_query \n---------------------------------------------------------------------------------------\n autovacuum: VACUUM public.xxxxx_17_Oct_11 (to prevent wraparound) autovacuum: VACUUM public.xxxxx_17_Oct_11 (to prevent wraparound)\n autovacuum: VACUUM public.xxxxx_17_Oct_11 (to prevent wraparound)\n(3 rows)Why the autovacuum is running even though, I disabled ? Am I miss anything ? And also please share your views on my decision about disable autovacuum for my application. I am planning to run vacuum command daily on that small table which has frequent updates. \nThanks,Ramesh",
"msg_date": "Wed, 23 Nov 2011 11:25:07 +0530",
"msg_from": "J Ramesh Kumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autovacuum Issue"
},
{
"msg_contents": "On Wed, Nov 23, 2011 at 11:25 AM, J Ramesh Kumar <[email protected]>wrote:\n\n> Hi,\n>\n> My application is performing 1600 inserts per second and 7 updates per\n> second. The updates occurred only in a small table which has only 6 integer\n> columns. The inserts occurred in all other daily tables. My application\n> creates around 75 tables per day. No updates/deletes occurred in those 75\n> daily tables (only inserts and drop tables if older than 40 days). Since\n> only inserts in the daily tables, I disabled autovacuum in the conf file\n> and I can see it is off stat using show command.\n>\n> *sasdb=# show \"autovacuum\";\n> autovacuum\n> ------------\n> off\n> (1 row)*\n>\n> But the autovacuum is running frequently and it impact the performance of\n> my system(high CPU). You can see the autovacuum in the pg_stat_activity.\n> *\n> sasdb=# select current_query from pg_stat_activity where current_query\n> like 'autovacuum%';\n> ** current_query **\n>\n> ------------------------------**------------------------------**\n> ---------------------------\n> autovacuum: VACUUM public.xxxxx**_17_Oct_11 (to prevent wraparound)\n> autovacuum: VACUUM public.**xxxxx**_17_Oct_11 (to prevent wraparound)\n> autovacuum: VACUUM public.**xxxxx**_17_Oct_11 (to prevent wraparound)\n> (3 rows)\n>\n>\n> *\n\n\nIts pretty clear, its to prevent tranx wrap-around.\n\nautovacuum_freeze_max_age (integer)\n\nSpecifies the maximum age (in transactions) that a table's pg_class.\nrelfrozenxid field can attain before a VACUUM operation is forced to\nprevent transaction ID wraparound within the table. Note that the system\nwill launch autovacuum processes to prevent wraparound even when autovacuum\nis otherwise disabled.\n\nhttp://developer.postgresql.org/pgdocs/postgres/runtime-config-autovacuum.html\n---\nRegards,\nRaghavendra\nEnterpriseDB Corporation\nBlog: http://raghavt.blogspot.com/\n\n\n\n\n\n> **Why the autovacuum is running even though, I disabled ? Am I miss\n> anything ?\n>\n> And also please share your views on my decision about disable autovacuum\n> for my application. I am planning to run vacuum command daily on that small\n> table which has frequent updates.\n>\n> Thanks,\n> Ramesh\n>\n\nOn Wed, Nov 23, 2011 at 11:25 AM, J Ramesh Kumar <[email protected]> wrote:\nHi,My application is performing 1600 inserts per second and 7 updates per second. The updates occurred only in a small table which has only 6 integer columns. The inserts occurred in all other daily tables. My application creates around 75 tables per day. No updates/deletes occurred in those 75 daily tables (only inserts and drop tables if older than 40 days). Since only inserts in the daily tables, I disabled autovacuum in the conf file and I can see it is off stat using show command. \nsasdb=# show \"autovacuum\"; autovacuum ------------ off(1 row)But the autovacuum is running frequently and it impact the performance of my system(high CPU). You can see the autovacuum in the pg_stat_activity. \nsasdb=# select current_query from pg_stat_activity where current_query like 'autovacuum%'; current_query \n\n\n---------------------------------------------------------------------------------------\n autovacuum: VACUUM public.xxxxx_17_Oct_11 (to prevent wraparound) autovacuum: VACUUM public.xxxxx_17_Oct_11 (to prevent wraparound)\n\n\n autovacuum: VACUUM public.xxxxx_17_Oct_11 (to prevent wraparound)\n(3 rows)Its pretty clear, its to prevent tranx wrap-around. 
\nautovacuum_freeze_max_age (integer)Specifies the maximum age (in transactions) that a table's pg_class.relfrozenxid field can attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled.\nhttp://developer.postgresql.org/pgdocs/postgres/runtime-config-autovacuum.html\n\n---Regards,RaghavendraEnterpriseDB CorporationBlog: http://raghavt.blogspot.com/\n Why the autovacuum is running even though, I disabled ? Am I miss anything ? \nAnd also please share your views on my decision about disable autovacuum for my application. I am planning to run vacuum command daily on that small table which has frequent updates. \nThanks,Ramesh",
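To see which tables are close to the threshold that triggers these forced vacuums, one common check (a sketch -- compare the ages against your autovacuum_freeze_max_age setting, 200 million by default) is:

    SELECT relname, age(relfrozenxid) AS xid_age
      FROM pg_class
     WHERE relkind = 'r'
     ORDER BY age(relfrozenxid) DESC
     LIMIT 10;

    SHOW autovacuum_freeze_max_age;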
"msg_date": "Wed, 23 Nov 2011 11:40:38 +0530",
"msg_from": "Raghavendra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum Issue"
},
{
"msg_contents": "On Wed, Nov 23, 2011 at 12:55 AM, J Ramesh Kumar <[email protected]> wrote:\n> Why the autovacuum is running even though, I disabled ? Am I miss anything ?\n\nAs Raghavendra says, anti-wraparound vacuum will always kick in to\nprevent a database shutdown.\n\n> And also please share your views on my decision about disable autovacuum for\n> my application. I am planning to run vacuum command daily on that small\n> table which has frequent updates.\n\nSounds like a bad plan.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Thu, 1 Dec 2011 13:18:53 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum Issue"
},
{
"msg_contents": "On 02/12/11 07:18, Robert Haas wrote:\n>\n> And also please share your views on my decision about disable autovacuum for\n> my application. I am planning to run vacuum command daily on that small\n> table which has frequent updates.\n> Sounds like a bad plan.\n>\n\nIf the table has frequent updates vacuuming once a day will not control \nspace bloat from dead rows... so your small table's storage will become \na very large (even though there are only a few undeleted rows), and \nperformance will become terrible.\n\nI would suggest tuning autovacuum to wakeup more frequently (c.f \nautovacuum_naptime parameter), so your small table stays small.\n\nAlso you didn't mention what version of Postgres you are running. In 8.4 \nand later vacuum (hence autovacuum) is much smarter about finding dead \nrows to clean up, and should have less impact. You can also control the \nload autovacuum puts on your system (c.f autovacuum_vacuum_cost_delay \nparameter).\n\nregards\n\nMark\n",
"msg_date": "Fri, 02 Dec 2011 11:35:20 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum Issue"
},
{
"msg_contents": "On Tue, Nov 22, 2011 at 10:55 PM, J Ramesh Kumar <[email protected]> wrote:\n> But the autovacuum is running frequently and it impact the performance of my\n> system(high CPU). You can see the autovacuum in the pg_stat_activity.\n\nCould you show us the system metrics that led you to believe it was\nhigh CPU usage? Sometimes people misinterpret the numbers from\nutilities like top, iostat, or vmstat, so I'd rather see them myself\nif you got them.\n",
"msg_date": "Thu, 1 Dec 2011 20:02:48 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autovacuum Issue"
}
] |
[
{
"msg_contents": "Very Fast Version:\n\nRecently my database stopped respecting one of my indexes, which took a query that should run in \"subsecond response time\" and turning it into something that with small data sets runs in the 7-10 minute range and with large data sets runs in the 30 minute - eternity range.\n\nExplain Analyze tells me that what used to be an Index Scan has become a Seq Scan, doing a full table scan through 140 million records.\n\nAny thoughts on why that happens?\n\n+++++++++++++++++++\nLonger version:\n\nI have a very large database (billion+ records) that has begun not respecting indexes on some queries, resulting in what used to be \"instant answers\" now taking many minutes and I'm trying to figure out why.\n\nI'll try to simplify the request this way:\n\nI have a database of many hundreds of millions of email messages, and information about those messages. There is a table that tells me when I received emails and from what source. There is another table that tells what URLs were seen in the bodies of those emails and there is a table that links those two tables together. Since many emails can contain the same URL, and many URLs can be seen in each email, we divide like this:\n\nTables\n===========\nemails\n\nlink_urls\n\nemail_links\n\nemail.message_id = link_url.message_id\nlinkurl.urlid = email_links.urlid\n\n\nOne attribute of the URL is the \"hostname\" portion.\n\nIn my puzzle, I have a table of hostnames, and I want to get statistics about which emails contained URLs that pointed to those hostnames.\n\nA very simplified version of the query could be:\n\nselect email.stuff from email natural join link_url natural join email_links where hostname = 'foo.bar.com';\n\n\nWe know that there were two emails that advertised foo.bar.com, so the objective is to link the presence of foo.bar.com to a URLID, use the link_url table to find the respective message_id and then go ask the email table for details about those two messages.\n\n\nMy \"explain analyze\" of that query takes seven minutes to complete, despite the fact that I have an index on message_id, and urlid, and machine where they occur in each of the three tables.\n\nWe broke the query down into it's components, and confirmed that what we consider the three natural pieces of the query each run \"blazing fast\", each using the proper indexes. But when we put them all together, the index gets ignored, and we sequential scan a 140 million row table.\n\nWhich is slow.\n\n\n===============================\nIf we break it into pieces ... 
a human might think of the pieces as:\n\n\n============\n\nexplain analyze select urlid from email_links where machine = 'foo.bar.com';\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------\n Bitmap Heap Scan on email_links (cost=97.10..9650.58 rows=2457 width=33) (actual time=0.049..0.049 rows=1 loops=1)\n Recheck Cond: (machine = 'foo.bar.com'::text)\n -> Bitmap Index Scan on hostdex (cost=0.00..96.49 rows=2457 width=0) (actual time=0.039..0.039 rows=1 loops=1)\n Index Cond: (machine = 'foo.bar.com'::text)\n Total runtime: 0.066 ms\n(5 rows)\n\n\n=============\n\n=> explain analyze select message_id from link_url where urlid = '9de6440fcc089c654806bd4c853c76f1';\n QUERY PLAN \n-------------------------------------------------------------------------------------------\n Bitmap Heap Scan on link_url (cost=65.20..5680.14 rows=1437 width=4) (actual time=0.074..0.083 rows=2 loops=1)\n Recheck Cond: (urlid = '9de6440fcc089c654806bd4c853c76f1'::text)\n -> Bitmap Index Scan on link_url_hkey (cost=0.00..64.84 rows=1437 width=0) (actual time=0.021..0.021 rows=2 loops=1)\n Index Cond: (urlid = '9de6440fcc089c654806bd4c853c76f1'::text)\n Total runtime: 0.109 ms\n(5 rows)\n\n=================\n\nexplain analyze select stuff from email where message_id in (78085350, 78088168);\n QUERY PLAN \n-------------------------------------------------------------------------------------------\n Bitmap Heap Scan on email (cost=17.19..25.21 rows=2 width=8) (actual time=0.068..0.077 rows=2 loops=1)\n Recheck Cond: (message_id = ANY ('{78085350,78088168}'::integer[]))\n -> Bitmap Index Scan on email_pkey (cost=0.00..17.19 rows=2 width=0) (actual time=0.054..0.054 rows=2 loops=1)\n Index Cond: (message_id = ANY ('{78085350,78088168}'::integer[]))\n Total runtime: 0.100 ms\n(5 rows)\n\n++++++++++++++++++++++\n\n\nIn my simple mind that should take (.1 + .109 + .066) = FAST.\n\nHere's the \"Explain Analyze\" for the combined version:\n\n++++++++++++++++++++++++\nexplain analyze select email.stuff from email natural join link_url natural join email_link where machine = 'foo.bar.com';\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------\n Merge Join (cost=3949462.38..8811048.82 rows=4122698 width=7) (actual time=771578.076..777749.755 rows=3 loops=1)\n Merge Cond: (email.message_id = link_url.message_id)\n -> Index Scan using email_pkey on email (cost=0.00..4561330.19 rows=79154951 width=11) (actual time=0.041..540883.445 rows=79078427 loops=1)\n -> Materialize (cost=3948986.49..4000520.21 rows=4122698 width=4) (actual time=227023.820..227023.823 rows=3 loops=1)\n -> Sort (cost=3948986.49..3959293.23 rows=4122698 width=4) (actual time=227023.816..227023.819 rows=3 loops=1)\n Sort Key: link_url.message_id\n Sort Method: quicksort Memory: 25kB\n -> Hash Join (cost=9681.33..3326899.30 rows=4122698 width=4) (actual time=216443.617..227023.798 rows=3 loops=1)\n Hash Cond: (link_url.urlid = email_link.urlid)\n -> Seq Scan on link_url (cost=0.00..2574335.33 rows=140331133 width=37) (actual time=0.013..207980.261 rows=140330592 lo\nops=1)\n -> Hash (cost=9650.62..9650.62 rows=2457 width=33) (actual time=0.074..0.074 rows=1 loops=1)\n -> Bitmap Heap Scan on email_link (cost=97.10..9650.62 rows=2457 width=33) (actual time=0.072..0.072 rows=1 loops=1\n)\n Recheck Cond: (hostname = 'foo.bar.com'::text)\n -> Bitmap Index Scan on hostdex (cost=0.00..96.49 rows=2457 width=0) (actual time=0.060..0.060 rows=1 loops=1\n)\n Index 
Cond: (hostname = 'foo.bar.com'::text)\n Total runtime: 777749.820 ms\n(16 rows)\n\n++++++++++++++++++\n\nSee that \"Seq Scan on link_url\"? We can't figure out why that is there! We should be scanning for a matching \"urlid\" and we have an index on \"urlid\"?\n\nWhen this is happening in a \"two table\" version of this problem, we can get temporary relief by giving the statement:\n\nset enable_seqscan = false;\n\nBut in the \"three table\" version (which is itself a simplification of the real problem, which is that instead of looking for 'foo.bar.com', I'm looking for all the hostnames in the table \"testurls\") the \"enable_seqscan = false\" doesn't seem to do anything.\n\n\n++++++++++++++++++\n\nDoes anyone have suggestions as to why my database has started ignoring this index in certain circumstances? We've tried a variety of different joins and subselects and can't seem to beat the \"combined query\" index failure.\n\nThe work-around has been to actually write code that makes the query for urlids, and then manually does the query into the table for message_ids, and then manually does the query for email.stuff. It's bad fast, but we can't figure out why 'the old way' quit working.\n\nSuggestions welcome!\n\n\n\n\n--\n\n----------------------------------------------------------\n\nGary Warner\n\n-----------------------------------------------------------\n\n",
"msg_date": "Wed, 23 Nov 2011 16:24:36 -0600 (CST)",
"msg_from": "Gary Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seq Scan used instead of Index Scan"
},
{
"msg_contents": "On Wed, Nov 23, 2011 at 7:24 PM, Gary Warner <[email protected]> wrote:\n> See that \"Seq Scan on link_url\"? We can't figure out why that is there! We should be scanning for a matching \"urlid\" and we have an index on \"urlid\"?\n>\n> When this is happening in a \"two table\" version of this problem, we can get temporary relief by giving the statement:\n>\n> set enable_seqscan = false;\n\nObviously, because it thinks the index scan will perform worse.\n\nIt would be interesting to see the explain analyze with\nenable_seqscan=false to see why\n",
"msg_date": "Wed, 23 Nov 2011 19:42:52 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq Scan used instead of Index Scan"
},
{
"msg_contents": "Can you post your non-default postgresql.conf settings? (I'd hazard a \nguess that you have effective_cache_size set to the default 128MB).\n\nBest wishes\n\nMark\n\nOn 24/11/11 11:24, Gary Warner wrote:\n> Very Fast Version:\n>\n> Recently my database stopped respecting one of my indexes, which took a query that should run in \"subsecond response time\" and turning it into something that with small data sets runs in the 7-10 minute range and with large data sets runs in the 30 minute - eternity range.\n>\n> Explain Analyze tells me that what used to be an Index Scan has become a Seq Scan, doing a full table scan through 140 million records.\n>\n> Any thoughts on why that happens?\n>\n\n",
"msg_date": "Thu, 24 Nov 2011 14:30:01 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq Scan used instead of Index Scan"
},
{
"msg_contents": "Gary Warner <[email protected]> writes:\n> Recently my database stopped respecting one of my indexes, which took a query that should run in \"subsecond response time\" and turning it into something that with small data sets runs in the 7-10 minute range and with large data sets runs in the 30 minute - eternity range.\n\n> Explain Analyze tells me that what used to be an Index Scan has become a Seq Scan, doing a full table scan through 140 million records.\n\n> Any thoughts on why that happens?\n\nI'd bet it has a lot to do with the nigh-three-orders-of-magnitude\noverestimates of the numbers of matching rows. You might find that\nincreasing the statistics targets for the indexed columns helps ---\nI'm guessing that these particular key values are out in the long\ntail of a highly skewed distribution, and the planner needs a larger MCV\nlist to convince it that non-MCV values will not occur very many times.\n\nIf that is an accurate guess, then trying to force the matter with\nsomething like enable_seqscan = off is not a good production solution,\nbecause it will result in horrid plans whenever you decide to query\na not-so-infrequent value.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Nov 2011 22:06:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq Scan used instead of Index Scan "
}
] |
[
{
"msg_contents": "Is here any reason why Postgresql calculates subqueries/storable procedures\nin select list before applying ORDER BY / LIMIT?\n\nI talking about cases like:\n\nSELECT *,\n(some very slow subquery or slow storable stable/immutable procedure like\nxml processing)\nFROM\nsome_table\nORDER BY\nsome_field (unrelated to subquery results)\nLIMIT N\n?\n\nI seen cases where that lead to 3-6 orders of slowdown.\n\nSimpliest test case:\n\nCREATE TABLE test (id integer);\nINSERT INTO test SELECT * FROM generate_series(1,1000);\n\nSlow query (note LOOPS=1000 around subplan):\n EXPLAIN ANALYZE select id,(select count(*) from test t1 where t1.id=t.id)\nfrom test t order by id limit 10;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=13044.61..13044.63 rows=10 width=4) (actual\ntime=158.636..158.641 rows=10 loops=1)\n -> Sort (cost=13044.61..13047.11 rows=1000 width=4) (actual\ntime=158.636..158.639 rows=10 loops=1)\n Sort Key: t.id\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on test t (cost=0.00..13023.00 rows=1000 width=4)\n(actual time=0.188..158.242 rows=1000 loops=1)\n SubPlan 1\n -> Aggregate (cost=13.00..13.01 rows=1 width=0) (actual\ntime=0.157..0.157 rows=1 loops=1000)\n -> Seq Scan on test t1 (cost=0.00..13.00 rows=1\nwidth=0) (actual time=0.081..0.156 rows=1 loops=1000)\n Filter: (id = t.id)\n Total runtime: 158.676 ms\n\nFast query:\nEXPLAIN ANALYZE select id,(select count(*) from test t1 where t1.id=t.id)\nfrom (select id from test order by id limit 10) as t order by id;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Subquery Scan on t (cost=32.11..162.36 rows=10 width=4) (actual\ntime=1.366..4.770 rows=10 loops=1)\n -> Limit (cost=32.11..32.13 rows=10 width=4) (actual time=0.971..0.983\nrows=10 loops=1)\n -> Sort (cost=32.11..34.61 rows=1000 width=4) (actual\ntime=0.970..0.975 rows=10 loops=1)\n Sort Key: test.id\n Sort Method: top-N heapsort Memory: 25kB\n -> Seq Scan on test (cost=0.00..10.50 rows=1000 width=4)\n(actual time=0.027..0.455 rows=1000 loops=1)\n SubPlan 1\n -> Aggregate (cost=13.00..13.01 rows=1 width=0) (actual\ntime=0.375..0.375 rows=1 loops=10)\n -> Seq Scan on test t1 (cost=0.00..13.00 rows=1 width=0)\n(actual time=0.017..0.371 rows=1 loops=10)\n Filter: (id = t.id)\n Total runtime: 4.845 ms\n\nUsing second way is reasonable workaround for sure, but half year ago I\nhappen to meet project where I was forced ask developers to rewrite huge\npile of analitical queries on that way\nto get reasonable performance (and there was a lot outcry and complaints in\nthe process).\n\nAnd ofcourse there is not always possible to create additional indexes so\nquery will be go through index scan/backward indexscan instead of\nsort/limit in the top level.\n\nRegards,\nMaksym\n\n-- \nMaxim Boguk\nSenior Postgresql DBA.\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nSkype: maxim.boguk\nJabber: [email protected]\n\nLinkedIn profile: http://nz.linkedin.com/in/maximboguk\nIf they can send one man to the moon... 
why can't they send them all?\n\nМойКруг: http://mboguk.moikrug.ru/\nСила солому ломит, но не все в нашей жизни - солома, да и сила далеко не\nвсе.\n",
"msg_date": "Thu, 24 Nov 2011 13:56:58 +1100",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some question about lazy subquery/procedures execution in SELECT ...\n\tORDER BY... LIMIT N queries"
},
{
"msg_contents": "Maxim Boguk <[email protected]> writes:\n> Is here any reason why Postgresql calculates subqueries/storable procedures\n> in select list before applying ORDER BY / LIMIT?\n\nWell, that's the definition of ORDER BY --- it happens after computing\nthe select list, according to the SQL standard. We try to optimize this\nin some cases but you can't really complain when we don't. Consider\nputting the expensive function outside the ORDER BY/LIMIT, ie\n\nselect ..., expensive_fn() from (select ... order by ... limit ...) ss;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Nov 2011 12:05:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some question about lazy subquery/procedures execution in SELECT\n\t... ORDER BY... LIMIT N queries"
},
{
"msg_contents": "I understand that position.\nHowever if assumption: \" the definition of ORDER BY --- it happens after\ncomputing the select list, according to the SQL standard\"\nis correct,\nthen plans like:\n\npostgres=# EXPLAIN ANALYZE SELECT * from test order by _data limit 10\noffset 1000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2884.19..2913.03 rows=10 width=8) (actual time=3.584..3.620\nrows=10 loops=1)\n -> Index Scan using random_key on test (cost=0.00..2884190.16\nrows=1000000 width=8) (actual time=0.103..3.354 rows=1010 loops=1)\n Total runtime: 3.663 ms\n(3 rows)\nshould not be used at all.\n\n\nIn realty I was bite by next scenario (that is simplified case):\n\npostgres=# CREATE TABLE test as (select random() as _data from (select *\nfrom generate_series(1,1000000)) as t);\nSELECT 1000000\npostgres=# CREATE INDEX random_key on test(_data);\nCREATE INDEX\npostgres=# analyze test;\nANALYZE\npostgres=# set seq_page_cost to 1;\nSET\npostgres=# set random_page_cost to 4;\nSET\npostgres=# set effective_cache_size to '16MB';\nSET\n\nNow:\npostgres=# EXPLAIN analyze SELECT *,(select pg_sleep(10)) from test order\nby _data limit 10;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.01..28.85 rows=10 width=8) (actual\ntime=10001.132..10001.198 rows=10 loops=1)\n InitPlan 1 (returns $0)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=10001.076..10001.078 rows=1 loops=1)\n -> Index Scan using random_key on test (cost=0.00..2884190.16\nrows=1000000 width=8) (actual time=10001.129..10001.188 rows=10 loops=1)\n Total runtime: 10001.252 ms\n(5 rows)\n\nIs ok.\n\npostgres=# EXPLAIN analyze SELECT *,(select pg_sleep(10)) from test order\nby _data limit 10 offset 10000;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=28841.91..28870.76 rows=10 width=8) (actual\ntime=10037.850..10037.871 rows=10 loops=1)\n InitPlan 1 (returns $0)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=10001.040..10001.041 rows=1 loops=1)\n -> Index Scan using random_key on test (cost=0.00..2884190.16\nrows=1000000 width=8) (actual time=10001.094..10036.022 rows=10010 loops=1)\n Total runtime: 10037.919 ms\n(5 rows)\n\nIs still ok.\n\n\npostgres=# EXPLAIN SELECT *,(select pg_sleep(10)) from test order by _data\nlimit 10 offset 100000;\n QUERY PLAN\n--------------------------------------------------------------------------\n Limit (cost=102723.94..102723.96 rows=10 width=8)\n InitPlan 1 (returns $0)\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Sort (cost=102473.92..104973.92 rows=1000000 width=8)\n Sort Key: _data\n -> Seq Scan on test (cost=0.00..14425.00 rows=1000000 width=8)\n(6 rows)\n\nOoops, there project screwed.\n\nAnd it is not possible to predict in advance where and when you get hit by\nthat problem.\nE.g. 
all the usually fast statements with some arguments become slow as a snail\nonce the DB switches from an index scan to a top-node sort.\n\nThe only way to prevent that is to always write all queries the way you suggested.\n\nKind Regards,\nMaksym\n\nOn Fri, Nov 25, 2011 at 4:05 AM, Tom Lane <[email protected]> wrote:\n\n> Maxim Boguk <[email protected]> writes:\n> > Is here any reason why Postgresql calculates subqueries/storable\n> procedures\n> > in select list before applying ORDER BY / LIMIT?\n>\n> Well, that's the definition of ORDER BY --- it happens after computing\n> the select list, according to the SQL standard.  We try to optimize this\n> in some cases but you can't really complain when we don't.  Consider\n> putting the expensive function outside the ORDER BY/LIMIT, ie\n>\n> select ..., expensive_fn() from (select ... order by ... limit ...) ss;\n>\n> regards, tom lane\n>\n\n-- \nMaxim Boguk\nSenior Postgresql DBA.\n\nPhone RU: +7 910 405 4718\nPhone AU: +61 45 218 5678\n\nSkype: maxim.boguk\nJabber: [email protected]\n\nLinkedIn profile: http://nz.linkedin.com/in/maximboguk\nIf they can send one man to the moon... why can't they send them all?\n\nМойКруг: http://mboguk.moikrug.ru/\nСила солому ломит, но не все в нашей жизни - солома, да и сила далеко не\nвсе.\n",
"msg_date": "Fri, 25 Nov 2011 09:53:49 +1100",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some question about lazy subquery/procedures execution\n\tin SELECT ... ORDER BY... LIMIT N queries"
},
{
"msg_contents": "On 11/25/2011 06:53 AM, Maxim Boguk wrote:\n> I understand that position.\n> However if assumption: \" the definition of ORDER BY --- it happens after\n> computing the select list, according to the SQL standard\"\n> is correct,\n> then plans like:\n>\n> postgres=# EXPLAIN ANALYZE SELECT * from test order by _data limit 10\n> offset 1000;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=2884.19..2913.03 rows=10 width=8) (actual\n> time=3.584..3.620 rows=10 loops=1)\n> -> Index Scan using random_key on test (cost=0.00..2884190.16\n> rows=1000000 width=8) (actual time=0.103..3.354 rows=1010 loops=1)\n> Total runtime: 3.663 ms\n> (3 rows)\n> should not be used at all.\n\n\n`LIMIT' and `OFFSET' are explicitly defined to compute only that part of \nthe SELECT list that is required. If they weren't specifically defined \nwith that exception then you'd be right.\n\nLIMIT and OFFSET aren't standard anyway, so Pg can define them to mean \nwhatever is most appropriate. The SQL standard is adding new and (as \nusual) painfully clumsily worded features that work like LIMIT and \nOFFSET, but I don't know whether they have the same rules about whether \nexecution of functions can be skipped or not.\n\n> And it is not possible to predict in advance where and when you get hit\n> by that problem.\n\nThat's the biggest problem with statistics- and heuristics-based query \nplanners in general, but this does seem to be a particularly difficult case.\n\nSetting a cost on the function call that more accurately reflects how \nexpensive it is so PostgreSQL will work harder to avoid calling it might \nhelp. See \nhttp://www.postgresql.org/docs/current/static/sql-createfunction.html .\n\n--\nCraig Ringer\n",
"msg_date": "Mon, 28 Nov 2011 06:50:59 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some question about lazy subquery/procedures execution\n\tin SELECT ... ORDER BY... LIMIT N queries"
},
{
"msg_contents": "On Mon, Nov 28, 2011 at 9:50 AM, Craig Ringer <[email protected]> wrote:\n\n> On 11/25/2011 06:53 AM, Maxim Boguk wrote:\n>\n>> I understand that position.\n>> However if assumption: \" the definition of ORDER BY --- it happens after\n>> computing the select list, according to the SQL standard\"\n>> is correct,\n>> then plans like:\n>>\n>> postgres=# EXPLAIN ANALYZE SELECT * from test order by _data limit 10\n>> offset 1000;\n>> QUERY PLAN\n>> ------------------------------**------------------------------**\n>> ------------------------------**------------------------------**\n>> --------------\n>> Limit (cost=2884.19..2913.03 rows=10 width=8) (actual\n>> time=3.584..3.620 rows=10 loops=1)\n>> -> Index Scan using random_key on test (cost=0.00..2884190.16\n>> rows=1000000 width=8) (actual time=0.103..3.354 rows=1010 loops=1)\n>> Total runtime: 3.663 ms\n>> (3 rows)\n>> should not be used at all.\n>>\n>\n>\n> `LIMIT' and `OFFSET' are explicitly defined to compute only that part of\n> the SELECT list that is required. If they weren't specifically defined with\n> that exception then you'd be right.\n>\n> LIMIT and OFFSET aren't standard anyway, so Pg can define them to mean\n> whatever is most appropriate. The SQL standard is adding new and (as usual)\n> painfully clumsily worded features that work like LIMIT and OFFSET, but I\n> don't know whether they have the same rules about whether execution of\n> functions can be skipped or not.\n>\n>\n> And it is not possible to predict in advance where and when you get hit\n>> by that problem.\n>>\n>\n> That's the biggest problem with statistics- and heuristics-based query\n> planners in general, but this does seem to be a particularly difficult case.\n>\n> Setting a cost on the function call that more accurately reflects how\n> expensive it is so PostgreSQL will work harder to avoid calling it might\n> help. See http://www.postgresql.org/**docs/current/static/sql-**\n> createfunction.html<http://www.postgresql.org/docs/current/static/sql-createfunction.html>.\n>\n> --\n> Craig Ringer\n>\n\nChange cost for the functions in that case simple ignored by\nplanner/executor.\n\nI think it should be possible always delay execution functions/subqueries\nunrelated to order by list untill limit/offset were applied (even in the\nworst case that will provide same performance as today), and no heuristics\nneed at all.\n\n\nHm, one more idea: lets say I call the next sql query -\n'SELECT ...,very_log_sure_toasted_field FROM ... ORDER BY (something but\nnot very_log_toasted_field) LIMIT N'\nwhich will use sort as top node.\n\nIs detoasting of very_log_sure_toasted_field will be performed after\napplying ORDER BY... 
LIMIT N, or before it?\n\nIf detoasting is performed before applying the order by/limit, then there exists\na large class of queries where delayed/lazy detoasting could be a huge\nperformance win.\nIf detoasting is performed after applying the order by/limit, then the same\nmechanics could be used to delay subquery/function execution.\n\nPS: Yes, I know the usual response to my complaints: 'patches welcome', but I have\nonly just started studying the postgresql source code and recovering my C coding\nskills. Unfortunately, I don't think I will be ready to start hacking\nplanner/executor code in the short term (the planner/executor is the most\ncomplicated and easiest-to-break part of the postgresql code, definitely not a\nnewbie task).\n\n-- \nMaxim Boguk\n",
"msg_date": "Mon, 28 Nov 2011 11:05:57 +1100",
"msg_from": "Maxim Boguk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some question about lazy subquery/procedures execution\n\tin SELECT ... ORDER BY... LIMIT N queries"
}
] |
[
{
"msg_contents": "hello to all,\n\ni would like your advice on the following matter. i have a table with 150\nmillion rows. there are some indexes on this table but the one that is\nreally important is one that has 3 columns (a,b,c). one application\nconstantly makes queries and the query planner uses this index to narrow\ndown the final set of results. so usually from 150 millions, when the 3\nconditions have been applied, the remaining rows to be checked are about\n20-300. So these queries are very fast, and take from 10-100 ms usually.\nThere is a special case where these 3 conditions narrow down the final set\nto 15.000 rows so the server must check all these rows. The result is that\nthe query takes around 1 minute to complete. Is that a normal time for the\nexecution of the query? \n\ni know that most of you will send me the link with the guide to reporting\nslow queries but that's not the point at the moment. i am not looking for a\nspecific answer why this is happening. \ni just want to know if that seems strange to more people than just me and if\ni should look into that. \n\nbut if for the above you need to have a clearer picture of the server then:\n-red hat 5.6\n-32 cores,\n-96GB ram\n-fiber storage (4GBps)\n-postgresql 9.0.5\n-shared_buffers : 25 GB\n-not i/o bound (too many disks, different partitions for backup, archives,\nxlogs, indexes)\n-not cpu bound (the cpu util was about 5% when i performed the tests)\n-the query planner values on postgresql.conf are the default\n-i also performed the tests on the hot-standby with the same results\n-the query plan is the correct one, indicating that it should use the\ncorrect index\n-i forced index_scan to off and then it used bitmap heap scan with similar\nresults.\n-i forced bitmap heap scan to off and then it did a seq scan\n\nany ideas? thx in advance for your insight\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/query-uses-index-but-takes-too-much-time-tp5020742p5020742.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 24 Nov 2011 09:20:31 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "query uses index but takes too much time?"
},
{
"msg_contents": "On Thu, Nov 24, 2011 at 12:20 PM, MirrorX <[email protected]> wrote:\n\n> -32 cores,\n\n> -not cpu bound (the cpu util was about 5% when i performed the tests)\n\nA single query will only use a single CPU.\n\n5% of 32 cores is 100% of 1.6 cores.\n\nAre you sure that the 1 core doing the 1 postgresql query wasn't 100% utilized?\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Thu, 24 Nov 2011 12:51:34 -0500",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query uses index but takes too much time?"
},
{
"msg_contents": "Le 24 novembre 2011 18:20, MirrorX <[email protected]> a écrit :\n> hello to all,\n>\n> i would like your advice on the following matter. i have a table with 150\n> million rows. there are some indexes on this table but the one that is\n> really important is one that has 3 columns (a,b,c). one application\n> constantly makes queries and the query planner uses this index to narrow\n> down the final set of results. so usually from 150 millions, when the 3\n> conditions have been applied, the remaining rows to be checked are about\n> 20-300. So these queries are very fast, and take from 10-100 ms usually.\n> There is a special case where these 3 conditions narrow down the final set\n> to 15.000 rows so the server must check all these rows. The result is that\n> the query takes around 1 minute to complete. Is that a normal time for the\n> execution of the query?\n>\n> i know that most of you will send me the link with the guide to reporting\n> slow queries but that's not the point at the moment. i am not looking for a\n> specific answer why this is happening.\n> i just want to know if that seems strange to more people than just me and if\n> i should look into that.\n>\n> but if for the above you need to have a clearer picture of the server then:\n> -red hat 5.6\n> -32 cores,\n> -96GB ram\n> -fiber storage (4GBps)\n> -postgresql 9.0.5\n> -shared_buffers : 25 GB\n> -not i/o bound (too many disks, different partitions for backup, archives,\n> xlogs, indexes)\n> -not cpu bound (the cpu util was about 5% when i performed the tests)\n> -the query planner values on postgresql.conf are the default\n> -i also performed the tests on the hot-standby with the same results\n> -the query plan is the correct one, indicating that it should use the\n> correct index\n> -i forced index_scan to off and then it used bitmap heap scan with similar\n> results.\n> -i forced bitmap heap scan to off and then it did a seq scan\n>\n> any ideas? thx in advance for your insight\n>\n\nnot it is not that strange. It can be several things that lead you to\nthis situation.\n\n>\n>\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/query-uses-index-but-takes-too-much-time-tp5020742p5020742.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Thu, 24 Nov 2011 20:24:33 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query uses index but takes too much time?"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have a table with 665605 rows (counted, vacuum-ed):\nCREATE TABLE unique_words\n( filename text NOT NULL,\n filetype text NOT NULL,\n word text NOT NULL,\n count integer,)\n\nThe query is:\nselect f.word , count(f.word) from \nunique_words as f, \nunique_words as s ,\nunique_words as n \nwhere\n(f.word = s.word and s.word = n.word)\nand\n(f.filetype = 'f' and s.filetype = 's' and n.filetype = 'n')\ngroup by f.word\n\nExplain says:\n\"GroupAggregate (cost=0.00..67237557.88 rows=1397 width=6)\"\n\" -> Nested Loop (cost=0.00..27856790.31 rows=7876150720 width=6)\"\n\" -> Nested Loop (cost=0.00..118722.04 rows=14770776 width=12)\"\n\" -> Index Scan using idx_unique_words_filetype_word on unique_words f (cost=0.00..19541.47 rows=92098 width=6)\"\n\" Index Cond: (filetype = 'f'::text)\"\n\" -> Index Scan using idx_unique_words_filetype_word on unique_words s (cost=0.00..0.91 rows=13 width=6)\"\n\" Index Cond: ((filetype = 's'::text) AND (word = f.word))\"\n\" -> Index Scan using idx_unique_words_filetype_word on unique_words n (cost=0.00..1.33 rows=44 width=6)\"\n\" Index Cond: ((filetype = 'n'::text) AND (word = f.word))\"\n\n\nThe right answer should be 3808 different words (according to a Java\nprogram I wrote).\n\nThis query takes more than 1 hour (after which I cancelled the query).\nMy questions are:\n- Is this to be expected?\n- Especially as the query over just 1 join takes 32 secs? (on f.word =\ns.word omitting everything for n )\n- Why does explain say it takes \"7876150720 rows\"? \n- Is there a way to rephrase the query that makes it faster?\n- Could another table layout help (f,s,n are all possibilities for\nfiletype)?\n- Anything else?????\n\nTIA\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n\n",
"msg_date": "Mon, 28 Nov 2011 17:42:06 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 9.1 : why is this query slow?"
},
{
"msg_contents": "On Mon, 2011-11-28 at 17:42 +0100, Joost Kraaijeveld wrote:\n> - Is there a way to rephrase the query that makes it faster?\nThis query goes faster (6224 ms, but I am not sure it gives the correct\nanswer as the result differs from my Java program):\n\nselect word, count (word) from unique_words \nwhere\nword in (select word from unique_words where \n\t word in ( select word from unique_words where filetype = 'f')\n\t and\n\t filetype = 's')\nand\nfiletype = 'n'\ngroup by word\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n\n",
"msg_date": "Mon, 28 Nov 2011 17:57:49 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.1 : why is this query slow?"
},
{
"msg_contents": "Joost Kraaijeveld <[email protected]> wrote:\n \n> This query goes faster (6224 ms, but I am not sure it gives the\n> correct answer as the result differs from my Java program):\n \nIt seems clear that you want to see words which appear with all\nthree types of files, but it's not clear what you want the count to\nrepresent. The number of times the word appears in filetype 'n'\nreferences (as specified in your second query)? The number of\npermutations of documents which incorporate one 'f' document, one\n's' document, and one 'n' document (as specified in your first\nquery). Something else, like the total number of times the word\nappears?\n \n-Kevin\n",
"msg_date": "Mon, 28 Nov 2011 11:05:20 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.1 : why is this query slow?"
},
{
"msg_contents": "On Mon, 2011-11-28 at 11:05 -0600, Kevin Grittner wrote:\n> Joost Kraaijeveld <[email protected]> wrote:\n> \n> > This query goes faster (6224 ms, but I am not sure it gives the\n> > correct answer as the result differs from my Java program):\n> \n> It seems clear that you want to see words which appear with all\n> three types of files, but it's not clear what you want the count to\n> represent. The number of times the word appears in filetype 'n'\n> references (as specified in your second query)? The number of\n> permutations of documents which incorporate one 'f' document, one\n> 's' document, and one 'n' document (as specified in your first\n> query). Something else, like the total number of times the word\n> appears?\nI would like the answer to be \"the number of times the word appears in\nall three the queries\", the intersection of the three queries. \n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n\n",
"msg_date": "Mon, 28 Nov 2011 18:23:02 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.1 : why is this query slow?"
},
{
"msg_contents": "Joost Kraaijeveld <[email protected]> wrote:\n \n> I would like the answer to be \"the number of times the word\n> appears in all three the queries\", the intersection of the three\n> queries. \n \nThat's still not entirely clear to me. If there are two 'f' rows,\nthree 's' rows, and four 'n' rows, do you want to see an answer of 2\n(which seems like the intersection you request here), 9 (which is\nthe sum), 24 (which is the product), or something else?\n \nIf you really want the intersection, perhaps:\n \nwith x as\n (\n select\n word,\n count(*) as countall,\n count(case when filetype = 'f' then 1 else null end)\n as countf,\n count(case when filetype = 's' then 1 else null end) as\n as counts,\n count(case when filetype = 'n' then 1 else null end) as\n as countn\n from unique_words\n )\nselect word, least(countf, counts, countn) from x\n where countf > 0 and counts > 0 and countn > 0\n order by word;\n \n-Kevin\n",
"msg_date": "Mon, 28 Nov 2011 11:32:15 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.1 : why is this query slow?"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> If you really want the intersection, perhaps:\n \nOr maybe closer:\n \nwith x as\n (\n select\n word,\n count(*) as countall,\n count(case when filetype = 'f' then 1 else null end)\n as countf,\n count(case when filetype = 's' then 1 else null end)\n as counts,\n count(case when filetype = 'n' then 1 else null end)\n as countn\n from unique_words\n group by word\n )\nselect word, least(countf, counts, countn) from x\n where countf > 0 and counts > 0 and countn > 0\n order by word;\n \nCranked out rather quickly and untested.\n \n-Kevin\n",
"msg_date": "Mon, 28 Nov 2011 11:36:57 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.1 : why is this query slow?"
},
{
"msg_contents": "On 28.11.2011 17:42, Joost Kraaijeveld wrote:\n> - Why does explain say it takes \"7876150720 rows\"? \n\nAny idea where this number came from? No matter what I do, the nested\nloop row estimates are alway very close to the product of the two\nestimates (outer rows * inner rows).\n\nTomas\n",
"msg_date": "Mon, 28 Nov 2011 20:32:20 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 9.1 : why is this query slow?"
},
{
"msg_contents": "On Mon, 2011-11-28 at 11:36 -0600, Kevin Grittner wrote:\n> \"Kevin Grittner\" <[email protected]> wrote:\n> \n> > If you really want the intersection, perhaps:\n> \n> Or maybe closer:\n> \n> with x as\n> (\n> select\n> word,\n> count(*) as countall,\n> count(case when filetype = 'f' then 1 else null end)\n> as countf,\n> count(case when filetype = 's' then 1 else null end)\n> as counts,\n> count(case when filetype = 'n' then 1 else null end)\n> as countn\n> from unique_words\n> group by word\n> )\n> select word, least(countf, counts, countn) from x\n> where countf > 0 and counts > 0 and countn > 0\n> order by word;\n> \n> Cranked out rather quickly and untested.\n\nI tested it and it worked as advertised. Takes ~ 3 secs to complete.\nThanks.\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n\n",
"msg_date": "Tue, 29 Nov 2011 16:39:46 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 9.1 : why is this query slow?"
}
] |
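The single-pass rewrite Kevin arrives at above can be exercised end to end with a couple of invented rows; the table layout follows Joost's definition, and the temp table and sample data are purely illustrative:

-- Hypothetical sample data in the unique_words layout from this thread.
CREATE TEMP TABLE unique_words (filename text, filetype text, word text, count integer);
INSERT INTO unique_words (filename, filetype, word, count) VALUES
  ('a.txt', 'f', 'postgres', 3),
  ('b.txt', 's', 'postgres', 1),
  ('c.txt', 'n', 'postgres', 2),
  ('a.txt', 'f', 'vacuum',   1);   -- present only as filetype 'f', so it should drop out

-- One scan of the table instead of a three-way self-join: per-filetype counts via
-- CASE (9.1 has no FILTER clause), keeping only words present in all three filetypes.
WITH x AS (
    SELECT word,
           count(CASE WHEN filetype = 'f' THEN 1 END) AS countf,
           count(CASE WHEN filetype = 's' THEN 1 END) AS counts,
           count(CASE WHEN filetype = 'n' THEN 1 END) AS countn
    FROM unique_words
    GROUP BY word
)
SELECT word, least(countf, counts, countn) AS times_in_all_three
FROM x
WHERE countf > 0 AND counts > 0 AND countn > 0
ORDER BY word;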
[
{
"msg_contents": "dear all,\n\ni am trying to understand if i am missing something on how vacuum works. i\nve read the manual, and did some research on the web about that but i am\nstill not sure. \n\nto my understanding, vacuum just marks the dead rows of a table so that from\nthat point on that space would be re-used for new inserts and new updates on\nthat specific table. however, if there is an open transaction, vacuum can\nonly do what is described above up to the point that the open transaction\nwas started. so if for example there is a query running for 1 day, no matter\nhow many times i will have vacuumed the table (manual or auto), the dead\nrows wont be possible to be marked as re-usable space.\n-is the above correct?\n-is there something more about vacuum in that case i am describing? would\nfor example mark the rows as 'semi-dead' so that when a scan would be made\nthese rows wouldn't be checked and so the queries would be faster? is there\nanything else for this specific case?\n-would there be any effect from the vacuum on the indexes of the table?like\ni said above for the table, would the entries of the index not be scanned\nfor a query, due to some reason?\n\nif there is a something i could read to answer these questions plz point me\nto that direction, otherwise i would really appreciate any information you\nmay have. thx in advance\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/vacuum-internals-and-performance-affect-tp5033043p5033043.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 29 Nov 2011 09:12:13 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum internals and performance affect"
},
{
"msg_contents": "MX,\n\n> to my understanding, vacuum just marks the dead rows of a table so that from\n> that point on that space would be re-used for new inserts and new updates on\n> that specific table. however, if there is an open transaction, vacuum can\n> only do what is described above up to the point that the open transaction\n> was started. so if for example there is a query running for 1 day, no matter\n> how many times i will have vacuumed the table (manual or auto), the dead\n> rows wont be possible to be marked as re-usable space.\n> -is the above correct?\n\nMore or less. The transactionID isn't a timestamp, so the \"stop point\"\nis based on snapshots rather than a point-in-time. But that's a fine\ndistinction.\n\n> -is there something more about vacuum in that case i am describing? would\n> for example mark the rows as 'semi-dead' so that when a scan would be made\n> these rows wouldn't be checked and so the queries would be faster? is there\n> anything else for this specific case?\n\nWell, vacuum does some other work, yes.\n\n> -would there be any effect from the vacuum on the indexes of the table?like\n> i said above for the table, would the entries of the index not be scanned\n> for a query, due to some reason?\n\nVacuum also does some pruning dead index pointers.\n\nOtherwise, I'm not sure what you're asking.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 30 Nov 2011 13:19:09 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum internals and performance affect"
},
{
"msg_contents": "thx a lot for your answer :)\n\nso when a transaction is still open from a while back (according to the\ntransactionID), no 'new dead' tuples can be marked as re-usable space for\nnew rows, right? by 'new dead' i mean that for example there is a\ntransaction running from 10.00am(with a specific transactionID). when i\ndelete rows at 11.00am these are the ones i am referring to.\n\nthe same thing happens with the index, right? the dead enties for the rows\nthat were deleted at 11.00am cannot be removed yet (this is not based on the\ntimestamp, i get it, i just want to point out that due to MVCC these rows\nshould be visible to the old transaction and by using timestamps this is\nmore obvious)\n\nbut, for these rows, the 'deleted' ones. does vacuum do anything at all at\nthat time? and if so, what is it? thx in advance\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/vacuum-internals-and-performance-affect-tp5033043p5036800.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 30 Nov 2011 13:34:20 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum internals and performance affect"
},
{
"msg_contents": "MirrorX,\n\n> so when a transaction is still open from a while back (according to the\n> transactionID), no 'new dead' tuples can be marked as re-usable space for\n> new rows, right? by 'new dead' i mean that for example there is a\n> transaction running from 10.00am(with a specific transactionID). when i\n> delete rows at 11.00am these are the ones i am referring to.\n\nWith the understanding that what we're actually checking is snapshots\n(which are not completely linear) and not timestamps, yes, that's a good\nsimplification for what happens.\n\n> but, for these rows, the 'deleted' ones. does vacuum do anything at all at\n> that time? and if so, what is it? thx in advance\n\nNo, it does nothing. What would it do?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 01 Dec 2011 10:01:57 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum internals and performance affect"
},
{
"msg_contents": "from what i ve read and have i ve seen in practice, i expected it to do\nnothing at all. i just wanted to be absolutely sure and that's why i asked\nhere. \nthank you very much for the clarification\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/vacuum-internals-and-performance-affect-tp5033043p5039677.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 1 Dec 2011 11:23:56 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum internals and performance affect"
}
] |
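The behaviour discussed in this thread is easy to observe directly; the table name and row range below are invented, and the exact VACUUM VERBOSE wording varies a little between versions:

-- Session 1: hold one snapshot for the whole transaction.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM t;                 -- snapshot taken here, kept until COMMIT

-- Session 2: delete rows, then vacuum while session 1 is still open.
DELETE FROM t WHERE id < 1000;
VACUUM VERBOSE t;                       -- reports dead row versions that "cannot be removed yet"
SELECT n_dead_tup FROM pg_stat_user_tables WHERE relname = 't';

-- Session 1: close the old transaction.
COMMIT;

-- Session 2: vacuum again; the dead rows are now reclaimed, i.e. the space is
-- marked reusable within the table (it is not given back to the operating system).
VACUUM VERBOSE t;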
[
{
"msg_contents": "Experts,\n\nQuick Summary: data can now be inserted very quickly via COPY + removing\nindexes, but is there a design or some tricks to still allow someone to\nquery while the partition is still active and 'hot' ?\n\n- Postgres 9.1\n- Windows 7 (64-bit) , although this is just for the current test and\ncould vary depending on situation\n- We have 4 main tables with daily partitions\n- Each table/partition has multiple indexes on it\n- Streaming logs from client machines into our server app which\nprocesses the logs and tries to shove all that data into these daily\npartitions as fast as it can. \n- Using COPY and removed original primary key unique constraints to try\nto get it to be as fast as possible (some duplicates are possible)\n- Will remove duplicates in a later step (disregard for this post)\n\nWe now found (thanks Andres and Snow-Man in #postgresql) that in our\ntests, after the indexes get too large performance drops signficantly\nand our system limps forward due to disk reads (presumably for the\nindexes). If we remove the indexes, performance for our entire sample\ntest is great and everything is written to postgresql very quickly. \nThis allows us to shove lots and lots of data in (for production\npossibly 100 GB or a TB per day!)\n\nMy question is, what possible routes can I take where we can have both\nfast inserts (with indexes removed until the end of the day), but still\nallow a user to query against today's data? Is this even possible? One\nidea would be possibly have hourly tables for today and as soon as we\ncan try to re-add indexes. Another possible solution might be to stream\nthe data to another \"reader\" postgres instance that has indexes,\nalthough I'm not very versed in replication.\n\n\nAny ideas would be greatly appreciated.\n\nThanks!\n\nBen\n\n\n-- \nBenjamin Johnson \nhttp://getcarbonblack.com/ | @getcarbonblack\ncell: 312.933.3612 | @chicagoben\n\n",
"msg_date": "Wed, 30 Nov 2011 09:27:35 -0600",
"msg_from": "Benjamin Johnson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Guidance Requested - Bulk Inserting + Queries"
},
{
"msg_contents": "On Wed, Nov 30, 2011 at 7:27 AM, Benjamin Johnson\n<[email protected]> wrote:\n> Experts,\n>\n> Quick Summary: data can now be inserted very quickly via COPY + removing\n> indexes, but is there a design or some tricks to still allow someone to\n> query while the partition is still active and 'hot' ?\n>\n> - Postgres 9.1\n> - Windows 7 (64-bit) , although this is just for the current test and\n> could vary depending on situation\n> - We have 4 main tables with daily partitions\n\nHow long are the daily partitions kept for?\n\n> - Each table/partition has multiple indexes on it\n> - Streaming logs from client machines into our server app which\n> processes the logs and tries to shove all that data into these daily\n> partitions as fast as it can.\n\nWhy shove it in as fast as you can? If you want to both read and\nwrite at the same time, then focusing first only on writing and\nworrying about reading as an after thought seems like the wrong thing\nto do.\n\n> - Using COPY and removed original primary key unique constraints to try\n> to get it to be as fast as possible (some duplicates are possible)\n> - Will remove duplicates in a later step (disregard for this post)\n>\n> We now found (thanks Andres and Snow-Man in #postgresql) that in our\n> tests, after the indexes get too large performance drops signficantly\n> and our system limps forward due to disk reads (presumably for the\n> indexes).\n\nHow many hours worth of data can be loaded into the new partition\nbefore the performance knee hits?\n\nAfter the knee, how does the random disk read activity you see compare\nto the maximum random disk reads your IO system can support? How many\nCOPYs were you doing at the same time?\n\nDuring this test, was there background select activity going on, or\nwas the system only used for COPY?\n\n> If we remove the indexes, performance for our entire sample\n> test is great and everything is written to postgresql very quickly.\n> This allows us to shove lots and lots of data in (for production\n> possibly 100 GB or a TB per day!)\n\nHow much do you need to shove in per day? If you need to insert it,\nand index it, and run queries, and deal with maintenance of the older\npartitions, then you will need a lot of spare capacity, relative to\njust inserting, to do all of those things. Do you have windows where\nthere is less insert activity in which other things can get done?\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 1 Dec 2011 07:06:42 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guidance Requested - Bulk Inserting + Queries"
},
{
"msg_contents": "Jeff,\n\nSorry for the delayed response. Please see (some) answers inline.\n\nOn 12/1/2011 9:06 AM, Jeff Janes wrote:\n> On Wed, Nov 30, 2011 at 7:27 AM, Benjamin Johnson\n> <[email protected]> wrote:\n>> Experts,\n>>\n>> Quick Summary: data can now be inserted very quickly via COPY + removing\n>> indexes, but is there a design or some tricks to still allow someone to\n>> query while the partition is still active and 'hot' ?\n>>\n>> - Postgres 9.1\n>> - Windows 7 (64-bit) , although this is just for the current test and\n>> could vary depending on situation\n>> - We have 4 main tables with daily partitions\n> How long are the daily partitions kept for?\nWe want this to be user-configurable but ideally 30 - 90 days, possibly\nlonger for (order of magnitude) smaller customers.\n>> - Each table/partition has multiple indexes on it\n>> - Streaming logs from client machines into our server app which\n>> processes the logs and tries to shove all that data into these daily\n>> partitions as fast as it can.\n> Why shove it in as fast as you can? If you want to both read and\n> write at the same time, then focusing first only on writing and\n> worrying about reading as an after thought seems like the wrong thing\n> to do.\nYou're probably correct in that we need to think about the entire system\nas a whole. We're concerned with getting the data\nfrom our host-based to our server where it is processed and stored. \nBecause our system is essentially a giant logging service for\nyour enterprise, most of the time we're collecting data and writing it. \nThe main times it will be read is when some security incident\noccurs, but between those we expect it to be very write heavy.\n\nWe're probably most concerned with write performance because we were\noriginally seeing poor times and were scared by how well\nit would scale. We've improved it a lot so we might just need to take a\nstep back and see what else we can do for the overall system.\n\n>> - Using COPY and removed original primary key unique constraints to try\n>> to get it to be as fast as possible (some duplicates are possible)\n>> - Will remove duplicates in a later step (disregard for this post)\n>>\n>> We now found (thanks Andres and Snow-Man in #postgresql) that in our\n>> tests, after the indexes get too large performance drops signficantly\n>> and our system limps forward due to disk reads (presumably for the\n>> indexes).\n> How many hours worth of data can be loaded into the new partition\n> before the performance knee hits?\nIn simulations, if I try to simulate the amount of data a large customer\nwould send, then it is just about an hour worth of data before the indexes\nget to be several gigabytes in size and performance really goes downhill\n-- the \"knee\" if you will.\n> After the knee, how does the random disk read activity you see compare\n> to the maximum random disk reads your IO system can support? How many\n> COPYs were you doing at the same time?\nI don't have exact statistics, but we had 4 writer threads all doing\ncopy into 4 tables as fast as they receive data. \nThe system is very much NOT ideal -- Windows 7 Developer-Class\nWorkstation with (one) 7200 RPM Harddrive. I want to find bottlebecks\nin this\nsystem and then see what real servers can handle. 
(We're a small\ncompany and only now are starting to be able to invest in dev/test servers.\n\n>\n> During this test, was there background select activity going on, or\n> was the system only used for COPY?\nI pretty much stripped it entirely down to just doing the writes. Data\nwas coming in over HTTP to a python web stack, but that was pretty much\njust passing these logfiles to the (C++) writer threads.\n>> If we remove the indexes, performance for our entire sample\n>> test is great and everything is written to postgresql very quickly.\n>> This allows us to shove lots and lots of data in (for production\n>> possibly 100 GB or a TB per day!)\n> How much do you need to shove in per day? If you need to insert it,\n> and index it, and run queries, and deal with maintenance of the older\n> partitions, then you will need a lot of spare capacity, relative to\n> just inserting, to do all of those things. Do you have windows where\n> there is less insert activity in which other things can get done?\nThat's something we keep asking ourselves. Right now it's about 10 MB /\nclient per day. Some customers want 50,000 clients which would\nbe 500 GB per day if my math is correct. We know we will never handle\nthis with a single server, but we want to get up as high as we can (say\n5000 - 10000)\nbefore saying that our customers have to add more hardware.\n\n> Cheers,\n>\n> Jeff\n\nWe managed to sort of get around the issue by having hourly tables\ninherit from our daily tables. This makes our indexes smaller and the\nwrites in our tests don't\nseem to hit this same limit (at least so far.) I have a couple\nfollow-up questions:\n\n1) Would it be acceptable to have let's say 60 daily partitions and then\neach of those has 24 hourly partitions? Would it be better to after a\nday or two (so that data is now old and mostly unchanged) \"rollup\" the\nhourly tables into their respective daily table and then remove the\nhourly tables?\n\n2) Some of our indexes are on an identifier that is a hash of some event\nattributes, so it's basically a random BIGINT. We believe part of the\nproblem is that each row could be in an entirely different location in\nthe index thus causing lots of seeking and thrashing. Would doing\nsomething like having our index become a multi-column index by doing\n(event_timestamp, originally_index_column) be better so that they closer\nin proximity to other events coming in around the same time? I have to\nadmit that I don't really know how indexes are stored / paged.\n\n3) Does anyone else have similar systems where they have a ton of data\ncoming in that they also want to query? Any tips you can provide or\nalternative designs? Once the data is in, it will 99.9% of the time\n(100% of the time for some tables) be static. Part of the issue is that\nthe user wants to be able to search based on all sorts of attributes --\nthis leads to lots of indexes and more disk/memory usage when writing.\n\nBen\n\n-- \nBenjamin Johnson\nhttp://getcarbonblack.com/ | @getcarbonblack\ncell: 312.933.3612\n\n",
"msg_date": "Wed, 21 Dec 2011 20:30:44 -0600",
"msg_from": "Benjamin Johnson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guidance Requested - Bulk Inserting + Queries"
},
{
"msg_contents": "On Wed, Dec 21, 2011 at 6:30 PM, Benjamin Johnson\n<[email protected]> wrote:\n> Jeff,\n>\n> Sorry for the delayed response. Please see (some) answers inline.\n>\n> On 12/1/2011 9:06 AM, Jeff Janes wrote:\n>> On Wed, Nov 30, 2011 at 7:27 AM, Benjamin Johnson\n\n>> Why shove it in as fast as you can? If you want to both read and\n>> write at the same time, then focusing first only on writing and\n>> worrying about reading as an after thought seems like the wrong thing\n>> to do.\n\n> You're probably correct in that we need to think about the entire system\n> as a whole. We're concerned with getting the data\n> from our host-based to our server where it is processed and stored.\n> Because our system is essentially a giant logging service for\n> your enterprise, most of the time we're collecting data and writing it.\n> The main times it will be read is when some security incident\n> occurs, but between those we expect it to be very write heavy.\n\nOK, that does have an interesting flavor, in which the typical row\nwill be queried about read zero times, but you can't predict in\nadvance which ones are more likely to ever be queried.\n\nDo you know how necessary all your indexes are for supporting the\nqueries? If the queries are relatively rare, maybe you could support\nthem simply with seq scans on unindexed tables/partitions, at least on\nthe leading edge partitions.\n\n>> How many hours worth of data can be loaded into the new partition\n>> before the performance knee hits?\n> In simulations, if I try to simulate the amount of data a large customer\n> would send, then it is just about an hour worth of data before the indexes\n> get to be several gigabytes in size and performance really goes downhill\n> -- the \"knee\" if you will.\n\nSo having hourly partitions with live indexes might be cutting it\npretty close. Once something pushes you over the edge into degraded\nperformance, you would never be able to recover.\n\n>> After the knee, how does the random disk read activity you see compare\n>> to the maximum random disk reads your IO system can support? How many\n>> COPYs were you doing at the same time?\n> I don't have exact statistics, but we had 4 writer threads all doing\n> copy into 4 tables as fast as they receive data.\n\nAre they receiving data at the rate they would naturally? I.e. does\nit take an hour to simulate an hour's worth of data?\n\nIf they go into different tables, then they are going into different\nindices and so are all competing with each other for cache space for\nthe index leaf blocks\n(rather than sharing that cache space as they might possibly if they\nwere going into the same table). So you run out of cache space and\nyour performance collapses at one forth the total size as if you made\nthem take turns. Of course if you make them take turns, you have to\neither throttle or buffer their data retrieval. Also there is a\nquestion of how often you would have to rotate turns, and how long it\nwould take to exchange out the buffers upon a turn rotation. (There\nare stupid OS tricks you can pull outside of PG to help that process\nalong, but trying to coordinate that would be painful.)\n\n\n> The system is very much NOT ideal -- Windows 7 Developer-Class\n> Workstation with (one) 7200 RPM Harddrive. I want to find bottlebecks\n> in this\n> system and then see what real servers can handle. 
(We're a small\n> company and only now are starting to be able to invest in dev/test servers.\n\nI think you said that for loading into large-grained partitions with\nlive indexes, the bottleneck was the random reads needed to pull in\nthe leaf blocks. In that case, if you change to RAID with striping\nyou should be able to scale with the effective number of spindles,\nprovided you have enough parallel copies going on to keep each spindle\nbusy with its own random read. Of course those parallel copies would\nmake the RAM issues worse, but by saying large-grained partitions I\nmean that you've already given up on the notion having the indices fit\nin RAM, so at that point you might as well get the spindle-scaling.\n\n...\n\n>\n> We managed to sort of get around the issue by having hourly tables\n> inherit from our daily tables. This makes our indexes smaller and the\n> writes in our tests don't\n> seem to hit this same limit (at least so far.) I have a couple\n> follow-up questions:\n>\n> 1) Would it be acceptable to have let's say 60 daily partitions and then\n> each of those has 24 hourly partitions?\n\nIt sounds like each client gets their own hardware, but of each client\ncan have several thousand customers, how is that handled? All dumped\ninto one giant partitioned (on time) table, or does each customer get\ntheir own table? 60*24*thousands would certainly add up! If it is\njust 60*24, it will certainly slow down your queries (the ones not\nusing constraint exclusion anyway) some as it has to do a look up in\n1440 btrees for each query, but if queries are fast enough then they\nare fast enough. It should be pretty easy to test, if you know the\ntypes of queries you will be seeing.\n\n> Would it be better to after a\n> day or two (so that data is now old and mostly unchanged) \"rollup\" the\n> hourly tables into their respective daily table and then remove the\n> hourly tables?\n\nThat would generate an awful lot of extraneous IO (mostly sequential\nrather than random, so more efficient, but still IO) which is going to\ncompete with the rest of the IO going on, in order to solve a problem\nthat you don't yet know that you have.\n\n>\n> 2) Some of our indexes are on an identifier that is a hash of some event\n> attributes, so it's basically a random BIGINT. We believe part of the\n> problem is that each row could be in an entirely different location in\n> the index thus causing lots of seeking and thrashing. Would doing\n> something like having our index become a multi-column index by doing\n> (event_timestamp, originally_index_column) be better so that they closer\n> in proximity to other events coming in around the same time? I have to\n> admit that I don't really know how indexes are stored / paged.\n\nWhat if you just drop this index but keep the others while loading?\nIf dropping just that index has a big effect, then changing it as you\ndescribe would almost certainly help on the loading, but would the new\nindex still efficiently support the same queries that the old one did?\n I.e. could all queries based on the hash code be reformulated to\nquery on both exact time stamp and the hash code? Otherwise you would\nbe throwing the baby out with the bath water.\n\n\n>\n> 3) Does anyone else have similar systems where they have a ton of data\n> coming in that they also want to query? Any tips you can provide or\n> alternative designs? Once the data is in, it will 99.9% of the time\n> (100% of the time for some tables) be static. 
Part of the issue is that\n> the user wants to be able to search based on all sorts of attributes --\n> this leads to lots of indexes and more disk/memory usage when writing.\n\nHave you experimentally verified that all of the indexes really are\nneeded to get acceptable query performance? I tend to error on the\nside of adding more indices just in case it might be useful, but you\nalready know you have a problem caused by index maintenance so\ndefaulting to not having them until you have proof that it is needed\nmight be better in that case.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 22 Dec 2011 00:04:30 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guidance Requested - Bulk Inserting + Queries"
}
] |
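A sketch of the hourly-children-under-a-daily-parent layout Benjamin describes, using the inheritance-style partitioning available in 9.1; every name, column and constant here is illustrative rather than the poster's actual schema:

-- Daily parent table.
CREATE TABLE events_20111221 (
    event_time timestamptz NOT NULL,
    event_hash bigint      NOT NULL,
    payload    text,
    CHECK (event_time >= '2011-12-21' AND event_time < '2011-12-22')
);

-- Hourly child: small enough that its indexes stay cache-resident while it is "hot".
CREATE TABLE events_20111221_h13 (
    CHECK (event_time >= '2011-12-21 13:00' AND event_time < '2011-12-21 14:00')
) INHERITS (events_20111221);

-- A leading timestamp column keeps new index entries clustered near each other
-- instead of landing at a random spot for every insert (Benjamin's question 2).
CREATE INDEX events_20111221_h13_time_hash_idx
    ON events_20111221_h13 (event_time, event_hash);

-- Once the hour has gone cold, roll it up into the daily table and drop the child
-- (Benjamin's question 1); one transaction, so readers see a consistent data set.
BEGIN;
INSERT INTO events_20111221 SELECT * FROM events_20111221_h13;
DROP TABLE events_20111221_h13;
COMMIT;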
[
{
"msg_contents": "> We now found (thanks Andres and Snow-Man in #postgresql) that in our\n> tests, after the indexes get too large performance drops signficantly\n> and our system limps forward due to disk reads (presumably for the\n> indexes). If we remove the indexes, performance for our entire sample\n> test is great and everything is written to postgresql very quickly. \nIt's usually the fact that the data you index is \"random\" as opposed to, \nsay, an always incremented value (could be a timestamp, or a sequence) \nthat leads to insert problems with btrees. \n> My question is, what possible routes can I take where we can have both\n> fast inserts (with indexes removed until the end of the day), but still\n> allow a user to query against today's data? Is this even possible? One\n> idea would be possibly have hourly tables for today and as soon as we\n> can try to re-add indexes. \nYep, that's the only way I've found: use smaller partitions. That leads \nto slower reads (due to the fact that you have to visit more indexes to \nread the same amount of data). But you'll get faster writes. \n\n> Another possible solution might be to stream\n> the data to another \"reader\" postgres instance that has indexes,\n> although I'm not very versed in replication. \nI don't think you can do that. \nAnother option that you have is to use ssd instead of HD for the indexes \nonly (that is, having the indexes in a separate tablespace of ssds). The \nproblem is that your disks usually can't keep up with the number of \nrandom writes it takes to update N \"random values\" btrees; ssd might help. \nCan you post some numbers, such as # of indexes, # of rows you're trying \nto insert per hour etc etc? \n\n\n",
"msg_date": "Wed, 30 Nov 2011 16:17:33 +0000 (GMT)",
"msg_from": "Leonardo Francalanci <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Guidance Requested - Bulk Inserting + Queries"
},
{
"msg_contents": "We're trying to split the current day into hourly tables so that the\nsize of the indexes that are popular is much lower and therefore we can\nsupport more rows across the day. We also are using numerics where we\ncould be using bigints, so we're going to also work on that to see how\nmuch smaller we can get it. Once a daily table is not \"today\", we will\nremove duplicates, so we can combine that step with rolling up the\nhourly tables into one daily table.\n\nIn a *small* test (1-2 orders of magnitude smaller than some potential\ncustomer environments), the cumulative size of the daily indexes is 3.6\nGB and that's for only about half of the test.\n\nWe're talking 4 different daily partitioned tables with each table\nhaving 1 - 6 indexes (yes, a lot!).\n\nI'll post another update when I have it.\n\nThanks Leonardo.\n\nOn 11/30/2011 10:17 AM, Leonardo Francalanci wrote:\n>> We now found (thanks Andres and Snow-Man in #postgresql) that in our\n>> tests, after the indexes get too large performance drops signficantly\n>> and our system limps forward due to disk reads (presumably for the\n>> indexes). If we remove the indexes, performance for our entire sample\n>> test is great and everything is written to postgresql very quickly.\n> It's usually the fact that the data you index is \"random\" as opposed to,\n> say, an always incremented value (could be a timestamp, or a sequence)\n> that leads to insert problems with btrees.\n>> My question is, what possible routes can I take where we can have both\n>> fast inserts (with indexes removed until the end of the day), but still\n>> allow a user to query against today's data? Is this even possible? One\n>> idea would be possibly have hourly tables for today and as soon as we\n>> can try to re-add indexes.\n> Yep, that's the only way I've found: use smaller partitions. That leads\n> to slower reads (due to the fact that you have to visit more indexes to\n> read the same amount of data). But you'll get faster writes.\n>\n>> Another possible solution might be to stream\n>> the data to another \"reader\" postgres instance that has indexes,\n>> although I'm not very versed in replication.\n> I don't think you can do that.\n> Another option that you have is to use ssd instead of HD for the indexes\n> only (that is, having the indexes in a separate tablespace of ssds). The\n> problem is that your disks usually can't keep up with the number of\n> random writes it takes to update N \"random values\" btrees; ssd might help.\n> Can you post some numbers, such as # of indexes, # of rows you're trying\n> to insert per hour etc etc?\n>\n>\n\n-- \nBenjamin Johnson\nhttp://getcarbonblack.com/ | @getcarbonblack\ncell: 312.933.3612\n\n\n\n\n\n\n\n We're trying to split the current day into hourly tables so that the\n size of the indexes that are popular is much lower and therefore we\n can support more rows across the day. We also are using numerics\n where we could be using bigints, so we're going to also work on that\n to see how much smaller we can get it. Once a daily table is not\n \"today\", we will remove duplicates, so we can combine that step with\n rolling up the hourly tables into one daily table.\n\n In a *small* test (1-2 orders of magnitude smaller than some\n potential customer environments), the cumulative size of the daily\n indexes is 3.6 GB and that's for only about half of the test.\n\n We're talking 4 different daily partitioned tables with each table\n having 1 - 6 indexes (yes, a lot!). 
\n\n I'll post another update when I have it. \n\n Thanks Leonardo.\n\n On 11/30/2011 10:17 AM, Leonardo Francalanci wrote:\n>> We now found (thanks Andres\n and Snow-Man in #postgresql) that in our\n >> tests, after the indexes get too large performance drops\n signficantly\n >> and our system limps forward due to disk reads\n (presumably for the\n >> indexes). If we remove the indexes, performance for our\n entire sample\n >> test is great and everything is written to postgresql\n very quickly. \n > It's usually the fact that the data you index is \"random\" as\n opposed to, \n > say, an always incremented value (could be a timestamp, or a\n sequence) \n > that leads to insert problems with btrees. \n >> My question is, what possible routes can I take where we\n can have both\n >> fast inserts (with indexes removed until the end of the\n day), but still\n >> allow a user to query against today's data? Is this even\n possible? One\n >> idea would be possibly have hourly tables for today and\n as soon as we\n >> can try to re-add indexes. \n > Yep, that's the only way I've found: use smaller partitions.\n That leads \n > to slower reads (due to the fact that you have to visit more\n indexes to \n > read the same amount of data). But you'll get faster writes.\n \n >\n >> Another possible solution might be to stream\n >> the data to another \"reader\" postgres instance that has\n indexes,\n >> although I'm not very versed in replication. \n > I don't think you can do that. \n > Another option that you have is to use ssd instead of HD for\n the indexes \n > only (that is, having the indexes in a separate tablespace of\n ssds). The \n > problem is that your disks usually can't keep up with the\n number of \n > random writes it takes to update N \"random values\" btrees;\n ssd might help. \n > Can you post some numbers, such as # of indexes, # of rows\n you're trying \n > to insert per hour etc etc? \n >\n >\n\n -- \n Benjamin Johnson\nhttp://getcarbonblack.com/ | @getcarbonblack\n cell: 312.933.3612",
"msg_date": "Wed, 30 Nov 2011 20:00:56 -0600",
"msg_from": "Benjamin Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Guidance Requested - Bulk Inserting + Queries"
}
] |
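The load-then-index pattern this thread keeps circling around looks roughly like the sketch below; the file path and object names are illustrative (carried over from the sketch above), not a tested recipe:

-- While the partition is hot: no indexes on it, just fast COPY.
COPY events_20111221_h13 FROM '/tmp/batch_1300.csv' WITH (FORMAT csv);

-- Once the hour is over, build the indexes in one sequential pass each;
-- CONCURRENTLY avoids blocking readers at the cost of a slower build.
CREATE INDEX CONCURRENTLY events_20111221_h13_hash_idx
    ON events_20111221_h13 (event_hash);
ANALYZE events_20111221_h13;

-- Leonardo's SSD suggestion maps onto keeping only the indexes on faster storage:
-- CREATE INDEX ... ON events_20111221_h13 (event_hash) TABLESPACE fast_ssd;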
[
{
"msg_contents": "Hi all,\n\nI found this presentation from B. Momjian:\n\nhttp://momjian.us/main/writings/pgsql/performance.pdf\n\nI'm interested in what he said about \" Intersect/Union X AND/OR \" , Can I\nfind a transcription or a video of this presentation? Can anyone explain it\nto me?\n\nThanks,\n\nThiago Godoi\n\nHi all,I found this presentation from B. Momjian:http://momjian.us/main/writings/pgsql/performance.pdfI'm interested in what he said about \" Intersect/Union X AND/OR \" , Can I find a transcription or a video of this presentation? Can anyone explain it to me?\nThanks,Thiago Godoi",
"msg_date": "Fri, 2 Dec 2011 13:55:11 -0200",
"msg_from": "Thiago Godoi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intersect/Union X AND/OR"
},
{
"msg_contents": "Thiago Godoi wrote:\n> Hi all,\n> \n> I found this presentation from B. Momjian:\n> \n> http://momjian.us/main/writings/pgsql/performance.pdf\n> \n> I'm interested in what he said about \" Intersect/Union X AND/OR \" , Can I\n> find a transcription or a video of this presentation? Can anyone explain it\n> to me?\n\nWell, there is a recording of the webcast on the EnterpriseDB web site,\nbut I am afraid they only allow viewing of 3+ hour webcasts by\nEnterpriseDB customers.\n\nThe idea is that a query that uses an OR can be rewritten as two SELECTs\nwith a UNION between them. I have seen rare cases where this is a win,\nso I mentioned it in that talk. Intersection is similarly possible for\nAND in WHERE clauses.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Fri, 2 Dec 2011 14:49:42 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intersect/Union X AND/OR"
},
{
"msg_contents": "On Fri, Dec 2, 2011 at 1:49 PM, Bruce Momjian <[email protected]> wrote:\n> Thiago Godoi wrote:\n>> Hi all,\n>>\n>> I found this presentation from B. Momjian:\n>>\n>> http://momjian.us/main/writings/pgsql/performance.pdf\n>>\n>> I'm interested in what he said about \" Intersect/Union X AND/OR \" , Can I\n>> find a transcription or a video of this presentation? Can anyone explain it\n>> to me?\n>\n> Well, there is a recording of the webcast on the EnterpriseDB web site,\n> but I am afraid they only allow viewing of 3+ hour webcasts by\n> EnterpriseDB customers.\n>\n> The idea is that a query that uses an OR can be rewritten as two SELECTs\n> with a UNION between them. I have seen rare cases where this is a win,\n> so I mentioned it in that talk. Intersection is similarly possible for\n> AND in WHERE clauses.\n\nI've seen this as well. Also boolean set EXCEPT is useful as well in\nthe occasional oddball case.\n\nmerlin\n",
"msg_date": "Fri, 2 Dec 2011 14:52:34 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intersect/Union X AND/OR"
},
{
"msg_contents": "Thanks for the answers.\n\nI found one of these cases , but I'm trying to understand this. Why the\nperformance is better? The number of tuples is making the difference?\n\nMy original query :\n\nselect table1.id\nfrom table1, (select function(12345) id) table2\nwhere table1.kind = 1234\nand table1.id = table2.id\n\n\"Nested Loop (cost=0.00..6.68 rows=1 width=12)\"\n\" Join Filter: ()\"\n\" -> Seq Scan on recorte (cost=0.00..6.39 rows=1 width=159)\"\n\" Filter: (id = 616)\"\n\" -> Result (cost=0.00..0.26 rows=1 width=0)\"\n\n\n-- function() returns a resultset\n\nI tryed with explicit join and \"in\" , but the plan is the same.\n\nWhen I changed the query to use intersect :\n\n\n(select table1.id from table1 where table1.kind = 1234)\nIntersect\n(select function(12345) id)\n\nThe new plan is :\n\n\"HashSetOp Intersect (cost=0.00..6.67 rows=1 width=80)\"\n\" -> Append (cost=0.00..6.67 rows=2 width=80)\"\n\" -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..6.40 rows=1\nwidth=159)\"\n\" -> Seq Scan on recorte (cost=0.00..6.39 rows=1 width=159)\"\n\" Filter: (id = 616)\"\n\" -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.27 rows=1\nwidth=0)\"\n\" -> Result (cost=0.00..0.26 rows=1 width=0)\"\n\nThe second plan is about 10 times faster than the first one.\n\n\n\n\n2011/12/2 Merlin Moncure <[email protected]>\n\n> On Fri, Dec 2, 2011 at 1:49 PM, Bruce Momjian <[email protected]> wrote:\n> > Thiago Godoi wrote:\n> >> Hi all,\n> >>\n> >> I found this presentation from B. Momjian:\n> >>\n> >> http://momjian.us/main/writings/pgsql/performance.pdf\n> >>\n> >> I'm interested in what he said about \" Intersect/Union X AND/OR \" , Can\n> I\n> >> find a transcription or a video of this presentation? Can anyone\n> explain it\n> >> to me?\n> >\n> > Well, there is a recording of the webcast on the EnterpriseDB web site,\n> > but I am afraid they only allow viewing of 3+ hour webcasts by\n> > EnterpriseDB customers.\n> >\n> > The idea is that a query that uses an OR can be rewritten as two SELECTs\n> > with a UNION between them. I have seen rare cases where this is a win,\n> > so I mentioned it in that talk. Intersection is similarly possible for\n> > AND in WHERE clauses.\n>\n> I've seen this as well. Also boolean set EXCEPT is useful as well in\n> the occasional oddball case.\n>\n> merlin\n>\n\n\n\n-- \nThiago Godoi\n\nThanks for the answers.I found one of these cases , but I'm trying to understand this. Why the performance is better? 
The number of tuples is making the difference?My original query :select table1.id \n\nfrom table1, (select function(12345) id) table2where table1.kind = 1234and table1.id = table2.id\"Nested Loop (cost=0.00..6.68 rows=1 width=12)\"\n\n\" Join Filter: ()\"\" -> Seq Scan on recorte (cost=0.00..6.39 rows=1 width=159)\"\" Filter: (id = 616)\"\" -> Result (cost=0.00..0.26 rows=1 width=0)\"\n\n-- function() returns a resultsetI tryed with explicit join and \"in\" , but the plan is the same.When I changed the query to use intersect :(select table1.id from table1 where table1.kind = 1234)\n\nIntersect (select function(12345) id) The new plan is : \"HashSetOp Intersect (cost=0.00..6.67 rows=1 width=80)\"\" -> Append (cost=0.00..6.67 rows=2 width=80)\"\" -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..6.40 rows=1 width=159)\"\n\n\" -> Seq Scan on recorte (cost=0.00..6.39 rows=1 width=159)\"\" Filter: (id = 616)\"\" -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.27 rows=1 width=0)\"\n\n\" -> Result (cost=0.00..0.26 rows=1 width=0)\"The second plan is about 10 times faster than the first one. 2011/12/2 Merlin Moncure <[email protected]>\nOn Fri, Dec 2, 2011 at 1:49 PM, Bruce Momjian <[email protected]> wrote:\n\n\n> Thiago Godoi wrote:\n>> Hi all,\n>>\n>> I found this presentation from B. Momjian:\n>>\n>> http://momjian.us/main/writings/pgsql/performance.pdf\n>>\n>> I'm interested in what he said about \" Intersect/Union X AND/OR \" , Can I\n>> find a transcription or a video of this presentation? Can anyone explain it\n>> to me?\n>\n> Well, there is a recording of the webcast on the EnterpriseDB web site,\n> but I am afraid they only allow viewing of 3+ hour webcasts by\n> EnterpriseDB customers.\n>\n> The idea is that a query that uses an OR can be rewritten as two SELECTs\n> with a UNION between them. I have seen rare cases where this is a win,\n> so I mentioned it in that talk. Intersection is similarly possible for\n> AND in WHERE clauses.\n\nI've seen this as well. Also boolean set EXCEPT is useful as well in\nthe occasional oddball case.\n\nmerlin\n-- Thiago Godoi",
"msg_date": "Mon, 5 Dec 2011 10:14:04 -0200",
"msg_from": "Thiago Godoi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intersect/Union X AND/OR"
},
{
"msg_contents": "Thiago Godoi wrote:\n> Thanks for the answers.\n> \n> I found one of these cases , but I'm trying to understand this. Why the\n> performance is better? The number of tuples is making the difference?\n> \n> My original query :\n> \n> select table1.id\n> from table1, (select function(12345) id) table2\n> where table1.kind = 1234\n> and table1.id = table2.id\n> \n> \"Nested Loop (cost=0.00..6.68 rows=1 width=12)\"\n> \" Join Filter: ()\"\n> \" -> Seq Scan on recorte (cost=0.00..6.39 rows=1 width=159)\"\n> \" Filter: (id = 616)\"\n> \" -> Result (cost=0.00..0.26 rows=1 width=0)\"\n> \n> \n> -- function() returns a resultset\n> \n> I tryed with explicit join and \"in\" , but the plan is the same.\n> \n> When I changed the query to use intersect :\n> \n> \n> (select table1.id from table1 where table1.kind = 1234)\n> Intersect\n> (select function(12345) id)\n> \n> The new plan is :\n> \n> \"HashSetOp Intersect (cost=0.00..6.67 rows=1 width=80)\"\n> \" -> Append (cost=0.00..6.67 rows=2 width=80)\"\n> \" -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..6.40 rows=1\n> width=159)\"\n> \" -> Seq Scan on recorte (cost=0.00..6.39 rows=1 width=159)\"\n> \" Filter: (id = 616)\"\n> \" -> Subquery Scan on \"*SELECT* 2\" (cost=0.00..0.27 rows=1\n> width=0)\"\n> \" -> Result (cost=0.00..0.26 rows=1 width=0)\"\n> \n> The second plan is about 10 times faster than the first one.\n\nWell, there are usually several ways to execute a query internally,\nintsersect is using a different, and faster, method.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Mon, 5 Dec 2011 10:19:15 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intersect/Union X AND/OR"
},
{
"msg_contents": "On Mon, Dec 5, 2011 at 14:14, Thiago Godoi <[email protected]> wrote:\n> My original query :\n>\n> select table1.id\n> from table1, (select function(12345) id) table2\n> where table1.kind = 1234\n> and table1.id = table2.id\n>\n> \"Nested Loop (cost=0.00..6.68 rows=1 width=12)\"\n> \" Join Filter: ()\"\n> \" -> Seq Scan on recorte (cost=0.00..6.39 rows=1 width=159)\"\n> \" Filter: (id = 616)\"\n> \" -> Result (cost=0.00..0.26 rows=1 width=0)\"\n\nNote that this EXPLAIN output is quite different from your query.\nIntead of a \"kind=1234\" clause there's \"id=616\". Also, please post\nEXPLAIN ANALYZE results instead whenever possible.\n\n> When I changed the query to use intersect :\n[...]\n> The second plan is about 10 times faster than the first one.\n\nJudging by these plans, the 1st one should not be slower.\n\nNote that just running the query once and comparing times is often\nmisleading, especially for short queries, since noise often dominates\nthe query time -- depending on how busy the server was at the moment,\nwhat kind of data was cached, CPU power management/frequency scaling,\netc. ESPECIALLY don't compare pgAdmin timings since those also include\nnetwork variance, the time taken to render results on your screen and\nwho knows what else.\n\nA simple way to benchmark is with pgbench. Just write the query to a\ntext file (it needs to be a single line and not more than ~4000\ncharacters).\nThen run 'pgbench -n -f pgbench_script -T 5' to run it for 5 seconds.\nThese results are still not entirely reliable, but much better than\npgAdmin timings.\n\nRegards,\nMarti\n",
"msg_date": "Wed, 7 Dec 2011 16:56:53 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intersect/Union X AND/OR"
}
] |
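For reference, the rewrites Bruce describes look like this in their simplest form; t, a and b are placeholders, and because UNION and INTERSECT de-duplicate rows the transformation is only safe when that cannot change the answer:

-- OR rewritten as a UNION of two SELECTs, each of which can use its own index.
SELECT * FROM t WHERE a = 1 OR b = 2;

SELECT * FROM t WHERE a = 1
UNION
SELECT * FROM t WHERE b = 2;

-- AND in the WHERE clause rewritten as an INTERSECT, the same idea in reverse.
SELECT * FROM t WHERE a = 1 AND b = 2;

SELECT * FROM t WHERE a = 1
INTERSECT
SELECT * FROM t WHERE b = 2;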
[
{
"msg_contents": "Hi friends\n\nI want to know if it's possible to predict (calculate), how long a\nVACUUM FULL process will consume in a table?\n\ncan I apply some formula to calculate this?\n\nthanks\n\n\n\n-- \n----------------------------------------------------------\nVisita : http://www.eqsoft.net\n----------------------------------------------------------\nSigueme en Twitter : http://www.twitter.com/ernestoq\n",
"msg_date": "Fri, 2 Dec 2011 22:32:01 -0500",
"msg_from": "=?ISO-8859-1?Q?Ernesto_Qui=F1ones?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about VACUUM"
},
{
"msg_contents": "On Fri, Dec 2, 2011 at 8:32 PM, Ernesto Quiñones <[email protected]> wrote:\n> Hi friends\n>\n> I want to know if it's possible to predict (calculate), how long a\n> VACUUM FULL process will consume in a table?\n>\n> can I apply some formula to calculate this?\n\nIf you look at what iostat is doing while the vacuum full is running,\nand divide the size of the table by that k/sec you can get a good\napproximation of how long it will take. Do you have naptime set to\nanything above 0?\n",
"msg_date": "Fri, 2 Dec 2011 20:42:12 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about VACUUM"
},
{
"msg_contents": "Thanks for the answer Scott, actually my autovacuum_naptime is 1h ..\nbut I don't find naptime parameter for a manual vacuum\n\nthanks again\n\n2011/12/2 Scott Marlowe <[email protected]>:\n> On Fri, Dec 2, 2011 at 8:32 PM, Ernesto Quiñones <[email protected]> wrote:\n>> Hi friends\n>>\n>> I want to know if it's possible to predict (calculate), how long a\n>> VACUUM FULL process will consume in a table?\n>>\n>> can I apply some formula to calculate this?\n>\n> If you look at what iostat is doing while the vacuum full is running,\n> and divide the size of the table by that k/sec you can get a good\n> approximation of how long it will take. Do you have naptime set to\n> anything above 0?\n\n\n\n-- \n----------------------------------------------------------\nVisita : http://www.eqsoft.net\n----------------------------------------------------------\nSigueme en Twitter : http://www.twitter.com/ernestoq\n",
"msg_date": "Sat, 3 Dec 2011 08:11:49 -0500",
"msg_from": "=?ISO-8859-1?Q?Ernesto_Qui=F1ones?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about VACUUM"
},
{
"msg_contents": "On Sat, Dec 3, 2011 at 6:11 AM, Ernesto Quiñones <[email protected]> wrote:\n> Thanks for the answer Scott, actually my autovacuum_naptime is 1h ..\n> but I don't find naptime parameter for a manual vacuum\n\nThat's really high, but what I meant to as was what your\nvacuum_cost_delay was set to. Also vacuum_cost_limit.\n",
"msg_date": "Sat, 3 Dec 2011 13:05:04 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about VACUUM"
}
] |
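A back-of-the-envelope version of Scott's suggestion; the 25 MB/s figure is only an example (use the rate iostat actually shows while the vacuum runs), vacuum_cost_delay throttling lowers the real rate, and pre-9.0 VACUUM FULL moves tuples around inside the table rather than rewriting it, so treat the result as a lower bound:

-- Size of the table including indexes and TOAST, which VACUUM FULL touches as well.
SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));

-- Rough duration in seconds: bytes divided by observed throughput (here 25 MB/s).
SELECT pg_total_relation_size('my_big_table') / (25 * 1024 * 1024.0) AS est_seconds;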
[
{
"msg_contents": "So we are making progress on our performance issues, we are splitting\nthe data, changing the index value etc. So far having some success,\nbut we also want to test out some of the options and changes in the 9\nbranch, but trying to dump and restore 750gb of data is not all that\nfun, so I'm trying to avoid that.\n\nSo upgraded from 8.4.4 64 bit to 9.1.1 64bit.\n\nIf we upgrade a database that just uses the public table space there\nare no issues, works fine. However when we try to upgrade a db that\nhas tablespaces defined it errors out trying to load the data from the\nthen now new db.\n\nThe tablespaces are hardcoded with a path, so that seems to cause issues.\n\nSteps I'm taking\n\nStandard location of data /data/db\nStandard binary location /pgsql/bin\n\nI'm moving the standard location to /data1/db and moving the binaries\nto /pgsql8/bin\n\nWHY: because my build scripts put my binaries and data in these\nlocations, so without recreating my build process, I have to move the\ncurrent data and binary locations before I install 9.11\n\nSo I move olddata to /data1/db\noldbinary to /pgsql8/bin\n\nnew 9.1.1 db goes to /data/db\nnewbinary installs at /pgsql/\n\nSo when I run pg_upgrade (check validates the config), however trying\nto the upgrade nets;\nRestoring user relation files\n /data/queue/16384/16406\nerror while copying queue.adm_version (/data/queue/16384/16406 to\n/data/queue/PG_9.1_201105231/16407/16406): No such file or directory\nFailure, exiting\n\nAs you can see, it's sticking with it's original path and not\nrealizing that I'm trying now to install into /data from /data1\n\nWhat is the flaw here? Do I have to rebuild my build process to\ninstall in a different location?, not sure what my choices are here. I\nmean I'm telling the upgrade process where new and old are located, I\nbelieve it should be overriding something and not allowing the\nincluded error.\n\nSlaps and or pointers are welcome\n\nTory\n",
"msg_date": "Fri, 2 Dec 2011 20:09:25 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_upgrade"
},
{
"msg_contents": "ever tried symlinking?\nOn Dec 3, 2011 5:09 AM, \"Tory M Blue\" <[email protected]> wrote:\n\n> So we are making progress on our performance issues, we are splitting\n> the data, changing the index value etc. So far having some success,\n> but we also want to test out some of the options and changes in the 9\n> branch, but trying to dump and restore 750gb of data is not all that\n> fun, so I'm trying to avoid that.\n>\n> So upgraded from 8.4.4 64 bit to 9.1.1 64bit.\n>\n> If we upgrade a database that just uses the public table space there\n> are no issues, works fine. However when we try to upgrade a db that\n> has tablespaces defined it errors out trying to load the data from the\n> then now new db.\n>\n> The tablespaces are hardcoded with a path, so that seems to cause issues.\n>\n> Steps I'm taking\n>\n> Standard location of data /data/db\n> Standard binary location /pgsql/bin\n>\n> I'm moving the standard location to /data1/db and moving the binaries\n> to /pgsql8/bin\n>\n> WHY: because my build scripts put my binaries and data in these\n> locations, so without recreating my build process, I have to move the\n> current data and binary locations before I install 9.11\n>\n> So I move olddata to /data1/db\n> oldbinary to /pgsql8/bin\n>\n> new 9.1.1 db goes to /data/db\n> newbinary installs at /pgsql/\n>\n> So when I run pg_upgrade (check validates the config), however trying\n> to the upgrade nets;\n> Restoring user relation files\n> /data/queue/16384/16406\n> error while copying queue.adm_version (/data/queue/16384/16406 to\n> /data/queue/PG_9.1_201105231/16407/16406): No such file or directory\n> Failure, exiting\n>\n> As you can see, it's sticking with it's original path and not\n> realizing that I'm trying now to install into /data from /data1\n>\n> What is the flaw here? Do I have to rebuild my build process to\n> install in a different location?, not sure what my choices are here. I\n> mean I'm telling the upgrade process where new and old are located, I\n> believe it should be overriding something and not allowing the\n> included error.\n>\n> Slaps and or pointers are welcome\n>\n> Tory\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\never tried symlinking?\nOn Dec 3, 2011 5:09 AM, \"Tory M Blue\" <[email protected]> wrote:\nSo we are making progress on our performance issues, we are splitting\nthe data, changing the index value etc. So far having some success,\nbut we also want to test out some of the options and changes in the 9\nbranch, but trying to dump and restore 750gb of data is not all that\nfun, so I'm trying to avoid that.\n\nSo upgraded from 8.4.4 64 bit to 9.1.1 64bit.\n\nIf we upgrade a database that just uses the public table space there\nare no issues, works fine. 
However when we try to upgrade a db that\nhas tablespaces defined it errors out trying to load the data from the\nthen now new db.\n\nThe tablespaces are hardcoded with a path, so that seems to cause issues.\n\nSteps I'm taking\n\nStandard location of data /data/db\nStandard binary location /pgsql/bin\n\nI'm moving the standard location to /data1/db and moving the binaries\nto /pgsql8/bin\n\nWHY: because my build scripts put my binaries and data in these\nlocations, so without recreating my build process, I have to move the\ncurrent data and binary locations before I install 9.11\n\nSo I move olddata to /data1/db\noldbinary to /pgsql8/bin\n\nnew 9.1.1 db goes to /data/db\nnewbinary installs at /pgsql/\n\nSo when I run pg_upgrade (check validates the config), however trying\nto the upgrade nets;\nRestoring user relation files\n /data/queue/16384/16406\nerror while copying queue.adm_version (/data/queue/16384/16406 to\n/data/queue/PG_9.1_201105231/16407/16406): No such file or directory\nFailure, exiting\n\nAs you can see, it's sticking with it's original path and not\nrealizing that I'm trying now to install into /data from /data1\n\nWhat is the flaw here? Do I have to rebuild my build process to\ninstall in a different location?, not sure what my choices are here. I\nmean I'm telling the upgrade process where new and old are located, I\nbelieve it should be overriding something and not allowing the\nincluded error.\n\nSlaps and or pointers are welcome\n\nTory\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 3 Dec 2011 15:03:12 +0100",
"msg_from": "Klaus Ita <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Tory M Blue wrote:\n> So we are making progress on our performance issues, we are splitting\n> the data, changing the index value etc. So far having some success,\n> but we also want to test out some of the options and changes in the 9\n> branch, but trying to dump and restore 750gb of data is not all that\n> fun, so I'm trying to avoid that.\n> \n> So upgraded from 8.4.4 64 bit to 9.1.1 64bit.\n> \n> If we upgrade a database that just uses the public table space there\n> are no issues, works fine. However when we try to upgrade a db that\n> has tablespaces defined it errors out trying to load the data from the\n> then now new db.\n> \n> The tablespaces are hardcoded with a path, so that seems to cause issues.\n> \n> Steps I'm taking\n> \n> Standard location of data /data/db\n> Standard binary location /pgsql/bin\n> \n> I'm moving the standard location to /data1/db and moving the binaries\n> to /pgsql8/bin\n> \n> WHY: because my build scripts put my binaries and data in these\n> locations, so without recreating my build process, I have to move the\n> current data and binary locations before I install 9.11\n> \n> So I move olddata to /data1/db\n> oldbinary to /pgsql8/bin\n> \n> new 9.1.1 db goes to /data/db\n> newbinary installs at /pgsql/\n> \n> So when I run pg_upgrade (check validates the config), however trying\n> to the upgrade nets;\n> Restoring user relation files\n> /data/queue/16384/16406\n> error while copying queue.adm_version (/data/queue/16384/16406 to\n> /data/queue/PG_9.1_201105231/16407/16406): No such file or directory\n> Failure, exiting\n> \n> As you can see, it's sticking with it's original path and not\n> realizing that I'm trying now to install into /data from /data1\n> \n> What is the flaw here? Do I have to rebuild my build process to\n> install in a different location?, not sure what my choices are here. I\n> mean I'm telling the upgrade process where new and old are located, I\n> believe it should be overriding something and not allowing the\n> included error.\n> \n> Slaps and or pointers are welcome\n\nWell, I am not totally clear how you are moving things around, but I do\nknow pg_upgrade isn't happy to have the old and new cluster be very\ndifferent.\n\nWhat I think is happening is that you didn't properly move the\ntablespace in the old cluster. We don't give you a very easy way to do\nthat. You need to not only move the directory, but you need to update\nthe symlinks in data/pg_tblspc/, and update the pg_tablespace system\ntable. Did you do all of that? Does the 8.4 server see the tablespace\nproperly after the move, but before pg_upgrade?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Sat, 3 Dec 2011 09:04:22 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
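A minimal sketch of the manual tablespace move described above, for a pre-9.2 cluster such as the 8.4 one in this thread. The tablespace name ('queue'), its OID (16385), and the paths are illustrative assumptions rather than values taken from Tory's catalog; the symlink to repoint is whichever entry in the old cluster's pg_tblspc/ directory points at the old location.

    -- Filesystem steps first, with the old cluster stopped (shell, assumed paths):
    --   mv /data/queue /data1/queue
    --   ln -sfn /data1/queue /data1/db/pg_tblspc/16385
    -- Then start the old 8.4 server and, as a superuser, make the catalog agree
    -- with the filesystem. spclocation exists through 9.1 but was removed in 9.2,
    -- so this direct update only applies to older clusters:
    UPDATE pg_tablespace
       SET spclocation = '/data1/queue'
     WHERE spcname = 'queue';
    -- Sanity check before re-running pg_upgrade:
    SELECT spcname, spclocation FROM pg_tablespace;

Tory's later message in this thread ("changed the symlink in pg_tblspaces, and changed the path inside the db") reports that essentially these two adjustments let the upgrade complete.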
{
"msg_contents": "On Sat, Dec 3, 2011 at 6:04 AM, Bruce Momjian <[email protected]> wrote:\n\n> Well, I am not totally clear how you are moving things around, but I do\n> know pg_upgrade isn't happy to have the old and new cluster be very\n> different.\n>\n> What I think is happening is that you didn't properly move the\n> tablespace in the old cluster. We don't give you a very easy way to do\n> that. You need to not only move the directory, but you need to update\n> the symlinks in data/pg_tblspc/, and update the pg_tablespace system\n> table. Did you do all of that? Does the 8.4 server see the tablespace\n> properly after the move, but before pg_upgrade?\n\n\nSimple answer is umm no..\n\"http://www.postgresql.org/docs/current/static/pgupgrade.html\" is\nobviously lacking than :)\n\nSoooo I can take what you have told me and see if I can't attempt to\nmake those things happen and try again. Makes sense, but boy that's a\nlarge piece of info missing in the document!\n\nThanks again\n\nTory\n",
"msg_date": "Sat, 3 Dec 2011 11:35:22 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Tory M Blue wrote:\n> On Sat, Dec 3, 2011 at 6:04 AM, Bruce Momjian <[email protected]> wrote:\n> \n> > Well, I am not totally clear how you are moving things around, but I do\n> > know pg_upgrade isn't happy to have the old and new cluster be very\n> > different.\n> >\n> > What I think is happening is that you didn't properly move the\n> > tablespace in the old cluster. ?We don't give you a very easy way to do\n> > that. ?You need to not only move the directory, but you need to update\n> > the symlinks in data/pg_tblspc/, and update the pg_tablespace system\n> > table. ?Did you do all of that? ?Does the 8.4 server see the tablespace\n> > properly after the move, but before pg_upgrade?\n> \n> \n> Simple answer is umm no..\n\nThe \"no\" is an answer to which question?\n\n> \"http://www.postgresql.org/docs/current/static/pgupgrade.html\" is\n> obviously lacking than :)\n> \n> Soooo I can take what you have told me and see if I can't attempt to\n> make those things happen and try again. Makes sense, but boy that's a\n> large piece of info missing in the document!\n\nYou mean moving tablespaces? That isn't something pg_upgrade deals\nwith. If we need docs to move tablespaces, it is a missing piece of our\nmain docs, not something pg_upgrade would ever mention.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Sat, 3 Dec 2011 18:42:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Tory M Blue wrote:\n> > On Sat, Dec 3, 2011 at 6:04 AM, Bruce Momjian <[email protected]> wrote:\n> > \n> > > Well, I am not totally clear how you are moving things around, but I do\n> > > know pg_upgrade isn't happy to have the old and new cluster be very\n> > > different.\n> > >\n> > > What I think is happening is that you didn't properly move the\n> > > tablespace in the old cluster. ?We don't give you a very easy way to do\n> > > that. ?You need to not only move the directory, but you need to update\n> > > the symlinks in data/pg_tblspc/, and update the pg_tablespace system\n> > > table. ?Did you do all of that? ?Does the 8.4 server see the tablespace\n> > > properly after the move, but before pg_upgrade?\n> > \n> > \n> > Simple answer is umm no..\n> \n> The \"no\" is an answer to which question?\n> \n> > \"http://www.postgresql.org/docs/current/static/pgupgrade.html\" is\n> > obviously lacking than :)\n> > \n> > Soooo I can take what you have told me and see if I can't attempt to\n> > make those things happen and try again. Makes sense, but boy that's a\n> > large piece of info missing in the document!\n> \n> You mean moving tablespaces? That isn't something pg_upgrade deals\n> with. If we need docs to move tablespaces, it is a missing piece of our\n> main docs, not something pg_upgrade would ever mention.\n\nFYI, I have asked on the docs list about getting this documented.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Sat, 3 Dec 2011 21:20:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Bruce Momjian\n> Sent: Saturday, December 03, 2011 6:42 PM\n> To: Tory M Blue\n> Cc: [email protected]\n> Subject: Re: [PERFORM] pg_upgrade\n> \n> Tory M Blue wrote:\n> > On Sat, Dec 3, 2011 at 6:04 AM, Bruce Momjian <[email protected]>\n> wrote:\n> >\n> > > Well, I am not totally clear how you are moving things around, but\n> I do\n> > > know pg_upgrade isn't happy to have the old and new cluster be very\n> > > different.\n> > >\n> > > What I think is happening is that you didn't properly move the\n> > > tablespace in the old cluster. ?We don't give you a very easy way\n> to do\n> > > that. ?You need to not only move the directory, but you need to\n> update\n> > > the symlinks in data/pg_tblspc/, and update the pg_tablespace\n> system\n> > > table. ?Did you do all of that? ?Does the 8.4 server see the\n> tablespace\n> > > properly after the move, but before pg_upgrade?\n> >\n> >\n> > Simple answer is umm no..\n> \n> The \"no\" is an answer to which question?\n> \n> > \"http://www.postgresql.org/docs/current/static/pgupgrade.html\" is\n> > obviously lacking than :)\n> >\n> > Soooo I can take what you have told me and see if I can't attempt to\n> > make those things happen and try again. Makes sense, but boy that's a\n> > large piece of info missing in the document!\n> \n> You mean moving tablespaces? That isn't something pg_upgrade deals\n> with. If we need docs to move tablespaces, it is a missing piece of\n> our\n> main docs, not something pg_upgrade would ever mention.\n\nIf I'm reading the issue correctly, and pg_upgrade gets part way through an upgrade then fails if it hits a tablespace - it seems to me like the pg_upgrade should check for such a condition at the initial validation stage not proceed if found.\n\nBrad.\n\n",
"msg_date": "Mon, 5 Dec 2011 13:36:36 +0000",
"msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Nicholson, Brad (Toronto, ON, CA) wrote:\n> > You mean moving tablespaces? That isn't something pg_upgrade deals\n> > with. If we need docs to move tablespaces, it is a missing piece of\n> > our\n> > main docs, not something pg_upgrade would ever mention.\n> \n> If I'm reading the issue correctly, and pg_upgrade gets part way through\n> an upgrade then fails if it hits a tablespace - it seems to me like\n> the pg_upgrade should check for such a condition at the initial\n> validation stage not proceed if found.\n\nChecking for all such cases would make pg_upgrade huge and unusable. If\nyou messed up your configuration, pg_upgrade can't check for every such\ncase. There are thosands of ways people can mess up their configuration.\n\nI think you should read up on how pg_upgrade attempts to be minimal:\n\n\thttp://momjian.us/main/blogs/pgblog/2011.html#June_15_2011_2\n\nOn a related note, Magnus is working on code for Postgres 9.2 that would\nallow for easier moving of tablespaces.\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Mon, 5 Dec 2011 10:24:06 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Monday, December 05, 2011 10:24 AM\n> To: Nicholson, Brad (Toronto, ON, CA)\n> Cc: Tory M Blue; [email protected]; Magnus Hagander\n> Subject: Re: [PERFORM] pg_upgrade\n> \n> Nicholson, Brad (Toronto, ON, CA) wrote:\n> > > You mean moving tablespaces? That isn't something pg_upgrade deals\n> > > with. If we need docs to move tablespaces, it is a missing piece\n> of\n> > > our\n> > > main docs, not something pg_upgrade would ever mention.\n> >\n> > If I'm reading the issue correctly, and pg_upgrade gets part way\n> through\n> > an upgrade then fails if it hits a tablespace - it seems to me like\n> > the pg_upgrade should check for such a condition at the initial\n> > validation stage not proceed if found.\n> \n> Checking for all such cases would make pg_upgrade huge and unusable.\n> If\n> you messed up your configuration, pg_upgrade can't check for every such\n> case. There are thosands of ways people can mess up their\n> configuration.\n\nBased on the OP this does not seem like a messed up configuration. It sounds like the OP used a fully supported core feature of Postgres (tablespaces) and pg_upgrade failed as a result. I think having our upgrade utility fail under such circumstances is a bad thing.\n\nBrad.\n",
"msg_date": "Mon, 5 Dec 2011 15:29:00 +0000",
"msg_from": "\"Nicholson, Brad (Toronto, ON, CA)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Nicholson, Brad (Toronto, ON, CA) wrote:\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:[email protected]]\n> > Sent: Monday, December 05, 2011 10:24 AM\n> > To: Nicholson, Brad (Toronto, ON, CA)\n> > Cc: Tory M Blue; [email protected]; Magnus Hagander\n> > Subject: Re: [PERFORM] pg_upgrade\n> >\n> > Nicholson, Brad (Toronto, ON, CA) wrote:\n> > > > You mean moving tablespaces? That isn't something pg_upgrade deals\n> > > > with. If we need docs to move tablespaces, it is a missing piece\n> > of\n> > > > our\n> > > > main docs, not something pg_upgrade would ever mention.\n> > >\n> > > If I'm reading the issue correctly, and pg_upgrade gets part way\n> > through\n> > > an upgrade then fails if it hits a tablespace - it seems to me like\n> > > the pg_upgrade should check for such a condition at the initial\n> > > validation stage not proceed if found.\n> >\n> > Checking for all such cases would make pg_upgrade huge and unusable.\n> > If\n> > you messed up your configuration, pg_upgrade can't check for every such\n> > case. There are thosands of ways people can mess up their\n> > configuration.\n> \n> Based on the OP this does not seem like a messed up configuration. It\n> sounds like the OP used a fully supported core feature of Postgres\n> (tablespaces) and pg_upgrade failed as a result. I think having our\n> upgrade utility fail under such circumstances is a bad thing.\n\nThe OP has not indicated exactly what he did to move the tablespaces, so\nI have to assume he changed the SQL location but not the symbolic link\nlocation, or some other misconfiguration. Can someone provide the steps\nthat caused the failure?\n\npg_upgrade works fine for tablespaces so there must be something\ndifferent about his configuration. Unless I hear details, I have to\nassume the tablespace move was done incorrectly. This is the first\ntablespace failure like this I have ever gotten, and I do test\ntablespaces. Perhaps something is wrong, but I can't guess what it is.\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Mon, 5 Dec 2011 10:34:54 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "On Mon, Dec 5, 2011 at 7:34 AM, Bruce Momjian <[email protected]> wrote:\n> Nicholson, Brad (Toronto, ON, CA) wrote:\n>>\n>> Based on the OP this does not seem like a messed up configuration. It\n>> sounds like the OP used a fully supported core feature of Postgres\n>> (tablespaces) and pg_upgrade failed as a result. I think having our\n>> upgrade utility fail under such circumstances is a bad thing.\n>\n> The OP has not indicated exactly what he did to move the tablespaces, so\n> I have to assume he changed the SQL location but not the symbolic link\n> location, or some other misconfiguration. Can someone provide the steps\n> that caused the failure?\n>\n> pg_upgrade works fine for tablespaces so there must be something\n> different about his configuration. Unless I hear details, I have to\n> assume the tablespace move was done incorrectly. This is the first\n> tablespace failure like this I have ever gotten, and I do test\n> tablespaces. Perhaps something is wrong, but I can't guess what it is.\n>\n\n\nSorry for the late response, I didn't mean to host a party and step out!\n\nBruce is right, I didn't move tablespaces (I didn't know to be honest\nI had to, but it makes sense). I simply moved the location of the data\nfiles, from /data to /data1. But I did \"not\", change any sym links or\ndo any other pre-steps, other than install the new binary, make sure\nthat there was a new and old data location as well as a new and old\nbinary location.\n\nSince my build processes installs data files at /data and binary at\n/pgsql/, I simply moved the old Data and binaries, before installing\nmy new build. So /pgsql/ became /pgsql8/ and /data/ became /data1/\n\nI do understand what you are all saying in regards to the tablespace\nlinks and tablespace locations.It made total sense when Bruce pointed\nit out initially. However I'm not sure if I should of known that\npg_upgrade doesn't handle this, and or this would be a concern.\npg_upgrade asks for old and new locations, so one would think that\nthis information would be used for the upgrade process, including\npotentially changing tablespace paths during the migration step\n<shrug>, this is above my pay grade.\n\nBut initial response to all this, is umm we have not really made a\ndump/restore unnecessary with the latest releases of Postgres than, as\nI would have to think that there is a high percentage of users whom\nuse tablespaces.\n\nTory\n",
"msg_date": "Mon, 5 Dec 2011 09:21:19 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Tory M Blue wrote:\n> Bruce is right, I didn't move tablespaces (I didn't know to be honest\n> I had to, but it makes sense). I simply moved the location of the data\n> files, from /data to /data1. But I did \"not\", change any sym links or\n\nI was unclear if you moved the data directory or the tablespace. Your\nexample showed you moving something that didn't look like data\ndirectories:\n\n\t> So I move olddata to /data1/db\n\t> oldbinary to /pgsql8/bin\n\t>\n\t> new 9.1.1 db goes to /data/db\n\t> newbinary installs at /pgsql/\n\t>\n\t> So when I run pg_upgrade (check validates the config), however trying\n\t> to the upgrade nets;\n\t> Restoring user relation files\n\t> /data/queue/16384/16406\n\t> error while copying queue.adm_version (/data/queue/16384/16406 to\n\t> /data/queue/PG_9.1_201105231/16407/16406): No such file or directory\n\t> Failure, exiting\n\n/data/db and /data/queue are not data locations, or at least they are\nnot ones we create during the install. Was the real data directory and\nthe tablespaces all under /data1? Did you define these tablespace\nlocations using relative paths?\n\n> do any other pre-steps, other than install the new binary, make sure\n> that there was a new and old data location as well as a new and old\n> binary location.\n\nYou can definitely move data directories around. \n\n> Since my build processes installs data files at /data and binary at\n> /pgsql/, I simply moved the old Data and binaries, before installing\n> my new build. So /pgsql/ became /pgsql8/ and /data/ became /data1/\n\nI think you can do that but your error messages don't say that.\n \n> I do understand what you are all saying in regards to the tablespace\n> links and tablespace locations.It made total sense when Bruce pointed\n> it out initially. However I'm not sure if I should of known that\n> pg_upgrade doesn't handle this, and or this would be a concern.\n> pg_upgrade asks for old and new locations, so one would think that\n> this information would be used for the upgrade process, including\n> potentially changing tablespace paths during the migration step\n> <shrug>, this is above my pay grade.\n\nThere is no Postgres support for moving tablespaces, so it isn't\nsurprising that pg_upgrade doesn't handle it.\n\n> But initial response to all this, is umm we have not really made a\n> dump/restore unnecessary with the latest releases of Postgres than, as\n> I would have to think that there is a high percentage of users whom\n> use tablespaces.\n\nYes, but they don't change tablespace locations during the upgrade. In\nfact, we have had surprisingly few (zero) request for moving\ntablespaces, and now we are trying to implement this for Postgres 9.2. \nThe normal API will be to have the user move the tablespace before the\nupgrade, but as I said before, it isn't easy to do now in Postgres.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Mon, 5 Dec 2011 13:22:56 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "On Mon, Dec 5, 2011 at 10:22 AM, Bruce Momjian <[email protected]> wrote:\n>> But initial response to all this, is umm we have not really made a\n>> dump/restore unnecessary with the latest releases of Postgres than, as\n>> I would have to think that there is a high percentage of users whom\n>> use tablespaces.\n>\n> Yes, but they don't change tablespace locations during the upgrade. In\n> fact, we have had surprisingly few (zero) request for moving\n> tablespaces, and now we are trying to implement this for Postgres 9.2.\n> The normal API will be to have the user move the tablespace before the\n> upgrade, but as I said before, it isn't easy to do now in Postgres.\n\nOkay think here is where I'm confused. \"they don't change tablespace\",\nokay how are they doing the upgrade? Do they leave the olddatadir in\nthe default location and create a new one elsewhere, vs where I'm kind\nof doing the opposite?\n\nThanks again!\nTory\n",
"msg_date": "Mon, 5 Dec 2011 10:31:50 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "On Mon, Dec 5, 2011 at 10:31 AM, Tory M Blue <[email protected]> wrote:\n> On Mon, Dec 5, 2011 at 10:22 AM, Bruce Momjian <[email protected]> wrote:\n>>> But initial response to all this, is umm we have not really made a\n>>> dump/restore unnecessary with the latest releases of Postgres than, as\n>>> I would have to think that there is a high percentage of users whom\n>>> use tablespaces.\n>>\n>> Yes, but they don't change tablespace locations during the upgrade. In\n>> fact, we have had surprisingly few (zero) request for moving\n>> tablespaces, and now we are trying to implement this for Postgres 9.2.\n>> The normal API will be to have the user move the tablespace before the\n>> upgrade, but as I said before, it isn't easy to do now in Postgres.\n>\n> Okay think here is where I'm confused. \"they don't change tablespace\",\n> okay how are they doing the upgrade? Do they leave the olddatadir in\n> the default location and create a new one elsewhere, vs where I'm kind\n> of doing the opposite?\n\nOkay right\n\nSo changed the symlink in pg_tblspaces, and changed the path inside\nthe db, and it appears to have worked. These were either the \"doh\npieces\" or the missing components that you helped point me to. Thank\nyou!\n\nTory\n\n-bash-4.0$ /logs-all/temp/pg_upgrade --old-datadir \"/data1/db\"\n--new-datadir \"/data/db\" --old-bindir \"/ipix/pgsql8/bin\" --new-bindir\n\"/ipix/pgsql/bin\"\nPerforming Consistency Checks\n-----------------------------\nChecking current, bin, and data directories ok\nChecking cluster versions ok\nChecking database user is a superuser ok\nChecking for prepared transactions ok\nChecking for reg* system oid user data types ok\nChecking for contrib/isn with bigint-passing mismatch ok\nChecking for large objects ok\nCreating catalog dump ok\nChecking for prepared transactions ok\nChecking for presence of required libraries ok\n\n| If pg_upgrade fails after this point, you must\n| re-initdb the new cluster before continuing.\n| You will also need to remove the \".old\" suffix\n| from /data1/db/global/pg_control.old.\n\nPerforming Upgrade\n------------------\nAdding \".old\" suffix to old global/pg_control ok\nAnalyzing all rows in the new cluster ok\nFreezing all rows on the new cluster ok\nDeleting new commit clogs ok\nCopying old commit clogs to new server ok\nSetting next transaction id for new cluster ok\nResetting WAL archives ok\nSetting frozenxid counters in new cluster ok\nCreating databases in the new cluster ok\nAdding support functions to new cluster ok\nRestoring database schema to new cluster ok\nRemoving support functions from new cluster ok\nRestoring user relation files\n ok\nSetting next oid for new cluster ok\nCreating script to delete old cluster ok\nChecking for large objects ok\n\nUpgrade complete\n----------------\n| Optimizer statistics are not transferred by pg_upgrade\n| so consider running:\n| \tvacuumdb --all --analyze-only\n| on the newly-upgraded cluster.\n\n| Running this script will delete the old cluster's data files:\n| \t/data/pgsql/delete_old_cluster.sh\n-bash-4.0$\n",
"msg_date": "Mon, 5 Dec 2011 11:00:03 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Tory M Blue wrote:\n> On Mon, Dec 5, 2011 at 10:22 AM, Bruce Momjian <[email protected]> wrote:\n> >> But initial response to all this, is umm we have not really made a\n> >> dump/restore unnecessary with the latest releases of Postgres than, as\n> >> I would have to think that there is a high percentage of users whom\n> >> use tablespaces.\n> >\n> > Yes, but they don't change tablespace locations during the upgrade. ?In\n> > fact, we have had surprisingly few (zero) request for moving\n> > tablespaces, and now we are trying to implement this for Postgres 9.2.\n> > The normal API will be to have the user move the tablespace before the\n> > upgrade, but as I said before, it isn't easy to do now in Postgres.\n> \n> Okay think here is where I'm confused. \"they don't change tablespace\",\n> okay how are they doing the upgrade? Do they leave the olddatadir in\n> the default location and create a new one elsewhere, vs where I'm kind\n> of doing the opposite?\n\nIf you look in a 9.0+ tablespace directory, you will see that each\ncluster has its own subdirectory:\n\n\ttest=> create tablespace tb1 location '/u/pg/tb1';\n\tCREATE TABLESPACE\n\ttest=> \\q\n\t$ lf /u/pg/tb1\n\tPG_9.2_201111231/\n\nThat means if I upgrade to 9.3, there will be another subdirectory for\n9.3, _inside_ the same tablespace location. This change was added in\nPostgres 9.0 to allow for upgrades without having to move tablespaces. \n\nNow, since you are upgrading from 8.4, and don't have a subdirectory,\nthe 9.1 cluster will be created inside the tablespace directory, so it\nwill look like:\n\n\t323234/ 423411/ 932323/ PG_9.1_201105231/\n\t ----------------\n\nI realize that is kind of confusing, but it works just fine, and\npg_upgrade will provide you with a script to delete the old cluster, and\nits subdirectories, when you are ready.\n\nI hope this helps clarify things.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Mon, 5 Dec 2011 14:08:26 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Tory M Blue wrote:\n> On Mon, Dec 5, 2011 at 10:31 AM, Tory M Blue <[email protected]> wrote:\n> > On Mon, Dec 5, 2011 at 10:22 AM, Bruce Momjian <[email protected]> wrote:\n> >>> But initial response to all this, is umm we have not really made a\n> >>> dump/restore unnecessary with the latest releases of Postgres than, as\n> >>> I would have to think that there is a high percentage of users whom\n> >>> use tablespaces.\n> >>\n> >> Yes, but they don't change tablespace locations during the upgrade. ?In\n> >> fact, we have had surprisingly few (zero) request for moving\n> >> tablespaces, and now we are trying to implement this for Postgres 9.2.\n> >> The normal API will be to have the user move the tablespace before the\n> >> upgrade, but as I said before, it isn't easy to do now in Postgres.\n> >\n> > Okay think here is where I'm confused. \"they don't change tablespace\",\n> > okay how are they doing the upgrade? ?Do they leave the olddatadir in\n> > the default location and create a new one elsewhere, vs where I'm kind\n> > of doing the opposite?\n> \n> Okay right\n> \n> So changed the symlink in pg_tblspaces, and changed the path inside\n> the db, and it appears to have worked. These were either the \"doh\n> pieces\" or the missing components that you helped point me to. Thank\n> you!\n\nSee my other email --- this might not be necessary.\n\n---------------------------------------------------------------------------\n\n\n> \n> Tory\n> \n> -bash-4.0$ /logs-all/temp/pg_upgrade --old-datadir \"/data1/db\"\n> --new-datadir \"/data/db\" --old-bindir \"/ipix/pgsql8/bin\" --new-bindir\n> \"/ipix/pgsql/bin\"\n> Performing Consistency Checks\n> -----------------------------\n> Checking current, bin, and data directories ok\n> Checking cluster versions ok\n> Checking database user is a superuser ok\n> Checking for prepared transactions ok\n> Checking for reg* system oid user data types ok\n> Checking for contrib/isn with bigint-passing mismatch ok\n> Checking for large objects ok\n> Creating catalog dump ok\n> Checking for prepared transactions ok\n> Checking for presence of required libraries ok\n> \n> | If pg_upgrade fails after this point, you must\n> | re-initdb the new cluster before continuing.\n> | You will also need to remove the \".old\" suffix\n> | from /data1/db/global/pg_control.old.\n> \n> Performing Upgrade\n> ------------------\n> Adding \".old\" suffix to old global/pg_control ok\n> Analyzing all rows in the new cluster ok\n> Freezing all rows on the new cluster ok\n> Deleting new commit clogs ok\n> Copying old commit clogs to new server ok\n> Setting next transaction id for new cluster ok\n> Resetting WAL archives ok\n> Setting frozenxid counters in new cluster ok\n> Creating databases in the new cluster ok\n> Adding support functions to new cluster ok\n> Restoring database schema to new cluster ok\n> Removing support functions from new cluster ok\n> Restoring user relation files\n> ok\n> Setting next oid for new cluster ok\n> Creating script to delete old cluster ok\n> Checking for large objects ok\n> \n> Upgrade complete\n> ----------------\n> | Optimizer statistics are not transferred by pg_upgrade\n> | so consider running:\n> | \tvacuumdb --all --analyze-only\n> | on the newly-upgraded cluster.\n> \n> | Running this script will delete the old cluster's data files:\n> | \t/data/pgsql/delete_old_cluster.sh\n> -bash-4.0$\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be 
true. +\n",
"msg_date": "Mon, 5 Dec 2011 14:08:51 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "On Mon, Dec 5, 2011 at 11:08 AM, Bruce Momjian <[email protected]> wrote:\n\n>\n> If you look in a 9.0+ tablespace directory, you will see that each\n> cluster has its own subdirectory:\n>\n> test=> create tablespace tb1 location '/u/pg/tb1';\n> CREATE TABLESPACE\n> test=> \\q\n> $ lf /u/pg/tb1\n> PG_9.2_201111231/\n>\n> That means if I upgrade to 9.3, there will be another subdirectory for\n> 9.3, _inside_ the same tablespace location. This change was added in\n> Postgres 9.0 to allow for upgrades without having to move tablespaces.\n>\n> Now, since you are upgrading from 8.4, and don't have a subdirectory,\n> the 9.1 cluster will be created inside the tablespace directory, so it\n> will look like:\n>\n> 323234/ 423411/ 932323/ PG_9.1_201105231/\n> ----------------\n>\n> I realize that is kind of confusing, but it works just fine, and\n> pg_upgrade will provide you with a script to delete the old cluster, and\n> its subdirectories, when you are ready.\n>\n> I hope this helps clarify things.\n>\n\nWell I could see the PG_9.1 or whatever directory being created,\nhowever I would still get a fail. Once I modified the internal\ntablespace path and the filesystem symlink, it worked just fine.\nHaving to create 6-10 symlinks is kind of cruddy and altering the\npaths (although that is not bad). But it's working.\n\nSo I at least have a method to make this work :)\n\nTory\n",
"msg_date": "Mon, 5 Dec 2011 15:25:23 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": " From my last report I had success but it was successful due to lots of\nmanual steps. I figured it may be safer to just create a new rpm,\ninstalling to pgsql9 specific directories and a new data directory.\n\nThis allows pg_upgrade to complete successfully (so it says). However\nmy new data directory is empty and the old data directory now has what\nappears to be 8.4 data and the 9.1 data.\n\n/data is olddatadir original data dir\n\n[root@devqdb03 queue]# ll /data/queue\ntotal 12\ndrwx------ 2 postgres dba 4096 2011-12-07 09:44 16384\ndrwx------ 3 postgres dba 4096 2011-12-07 11:34 PG_9.1_201105231\n-rw------- 1 postgres dba 4 2011-12-07 09:44 PG_VERSION\n\n/data1 is the new 9.1 installed location.\n[root@devqdb03 queue]# ll /data1/queue/\ntotal 0\n\nDo I have to manually move the new PG_9.1..... data to /data1 or. I'm\njust confused at what I'm looking at here.\n\nIf I don't move anything and start up the DB , I get this\n\npsql (8.4.4, server 9.1.1)\nWARNING: psql version 8.4, server version 9.1.\n Some psql features might not work.\nType \"help\" for help.\n\nSorry my upgrade process has been an ugly mess :)\n\nTory\n",
"msg_date": "Wed, 7 Dec 2011 12:13:26 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade"
},
{
"msg_contents": "Tory M Blue wrote:\n> >From my last report I had success but it was successful due to lots of\n> manual steps. I figured it may be safer to just create a new rpm,\n> installing to pgsql9 specific directories and a new data directory.\n> \n> This allows pg_upgrade to complete successfully (so it says). However\n> my new data directory is empty and the old data directory now has what\n> appears to be 8.4 data and the 9.1 data.\n> \n> /data is olddatadir original data dir\n> \n> [root@devqdb03 queue]# ll /data/queue\n> total 12\n> drwx------ 2 postgres dba 4096 2011-12-07 09:44 16384\n> drwx------ 3 postgres dba 4096 2011-12-07 11:34 PG_9.1_201105231\n> -rw------- 1 postgres dba 4 2011-12-07 09:44 PG_VERSION\n\nThat sure looks like a tablespace to me, not a data directory.\n\n> \n> /data1 is the new 9.1 installed location.\n> [root@devqdb03 queue]# ll /data1/queue/\n> total 0\n> \n> Do I have to manually move the new PG_9.1..... data to /data1 or. I'm\n> just confused at what I'm looking at here.\n> \n> If I don't move anything and start up the DB , I get this\n> \n> psql (8.4.4, server 9.1.1)\n> WARNING: psql version 8.4, server version 9.1.\n> Some psql features might not work.\n> Type \"help\" for help.\n> \n> Sorry my upgrade process has been an ugly mess :)\n\nYou are using an 8.4.4 psql to connect to a 9.1.1 server.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n",
"msg_date": "Thu, 8 Dec 2011 15:14:52 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade"
}
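The banner "psql (8.4.4, server 9.1.1)" only means the client found on the PATH is still the old 8.4 psql while the server answering is 9.1; it says nothing about where the data ended up. A hedged check, using the binary layout described earlier in the thread (Tory's layout, not anything pg_upgrade requires):

    -- Ask the server itself what it is, from any client version:
    SELECT version();
    SHOW server_version;
    -- If the psql banner still reports 8.4.x, invoke the new client explicitly,
    -- e.g. /pgsql/bin/psql, instead of the /pgsql8/bin/psql the PATH picks up.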
] |
[
{
"msg_contents": "hello to all,\n\nthe situation i am facing is this->\ntable X-> 200 mil rows\nindex A (date, columnA)\nindex B (date,columnB,columnC)\n\nthe query planner is working properly and for a specific query that selects\nfrom table X where all 3 columns of index B are set, it uses index B.\n\nbut, at some point there are some bulk inserts with a different date. when\nthis happens and i run the query mentioned above the planner is using index\nA and not index B.\ni guess this happens b/c the planner due to the last analyze statistics has\nno values of the new date and so it thinks that it is faster to use index A\nthan index B since the rows that it will search are few. but that's not the\ncase so this query takes much longer to finish than it would take if it used\nthe index B.\n\ni have thought of some work-arounds to resolve this situation. for example i\ncould change the definition of index A to (columnA,date) and i could also\nrun an analyze command after every bulk insert. Another option would be to\nreduce autovacuum_analyze_scale_factor to a very low value so that analyze\nwould be forced to be made much more often.\n\nbut, instead of these solutions, is there anything else that could lead to a\n'better' query plan for this specific case? thx in advance\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/manually-force-planner-to-use-of-index-A-vs-index-B-tp5044616p5044616.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sat, 3 Dec 2011 06:34:29 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "manually force planner to use of index A vs index B"
},
{
"msg_contents": "On 3.12.2011 15:34, MirrorX wrote:\n> \n> but, at some point there are some bulk inserts with a different date. when\n> this happens and i run the query mentioned above the planner is using index\n> A and not index B.\n> i guess this happens b/c the planner due to the last analyze statistics has\n> no values of the new date and so it thinks that it is faster to use index A\n> than index B since the rows that it will search are few. but that's not the\n> case so this query takes much longer to finish than it would take if it used\n> the index B.\n\nProbably. But you haven't posted any explain plans and I've broken my\ncrystall ball yesterday, so I can only guess.\n\nDo this:\n\n1) get EXPLAIN ANALYZE of the query running fine\n2) do the bulk update\n3) get EXPLAIN ANALYZE of the query (this time it uses the wrong index)\n4) run ANALYZE on the table\n5) get EXPLAIN ANALYZE of the query (should be using the right index)\n\nand post the tree explain plans.\n\n> i have thought of some work-arounds to resolve this situation. for example i\n> could change the definition of index A to (columnA,date) and i could also\n> run an analyze command after every bulk insert. Another option would be to\n> reduce autovacuum_analyze_scale_factor to a very low value so that analyze\n> would be forced to be made much more often.\n\nThat is not a workaround, that is a solution. The database needs\nreasonably accurate statistics to prepare good plans, that's how it works.\n\nIf you know that the bulk insert is going to make the statistics\ninaccurate, you should run ANALYZE manually at the end. Or you might let\nautovacuum take care of that. But the autovacuum won't fix that\nimmediately - it's running each minute or so, so the queries executed\nbefore that will see the stale stats.\n\nYou may lower the autocacuum naptime, you may make it more aggressive,\nbut that means more overhead.\n\nTomas\n",
"msg_date": "Sat, 03 Dec 2011 16:10:05 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: manually force planner to use of index A vs index B"
},
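A sketch of the two remedies suggested above: an explicit ANALYZE at the end of each bulk load, or per-table autovacuum settings (available since 8.4) so the new dates reach the statistics sooner. The table name table_x is a placeholder, since the thread never names the real table, and the numbers are illustrative, not recommendations:

    -- Option 1: refresh the statistics as the last step of the bulk insert job
    ANALYZE table_x;

    -- Option 2: make autovacuum analyze this one table more eagerly than the
    -- global defaults, without changing autovacuum_analyze_scale_factor globally
    ALTER TABLE table_x
      SET (autovacuum_analyze_scale_factor = 0.02,
           autovacuum_analyze_threshold = 1000);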
{
"msg_contents": "thx a lot for the reply. i will post the query plans when a new bulk insert\nwill take place :)\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/manually-force-planner-to-use-of-index-A-vs-index-B-tp5044616p5044691.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sat, 3 Dec 2011 07:15:51 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: manually force planner to use of index A vs index B"
}
] |
[
{
"msg_contents": "Ernesto Quiñones wrote:\n> Scott Marlowe wrote:\n>> Ernesto Quiñones wrote:\n \n>>> I want to know if it's possible to predict (calculate), how long\n>>> a VACUUM FULL process will consume in a table?\n \nI don't think you said what version of PostgreSQL you're using. \nVACUUM FULL prior to version 9.0 is not recommended for most\nsituations, and can take days or weeks to complete where other\nmethods of achieving the same end may take hours. If you have\nautovacuum properly configured, you will probably never need to run\nVACUUM FULL.\n \n>> If you look at what iostat is doing while the vacuum full is\n>> running, and divide the size of the table by that k/sec you can\n>> get a good approximation of how long it will take. Do you have\n>> naptime set to anything above 0?\n> \n> Thanks for the answer Scott, actually my autovacuum_naptime is 1h\n \nAh, well that right there is likely to put you into a position where\nyou need to do painful extraordinary cleanup like VACUUM FULL. In\nmost situation the autovacuum defaults are pretty good. Where they\nneed to be adjusted, the normal things which are actually beneficial\nare to change the thresholds to allow more aggressive cleanup or (on\nlow-powered hardware) to adjust the cost ratios so that performance\nis less affected by the autovacuum runs. When autovacuum is disabled\nor changed to a long interval, it almost always results in bloat\nand/or outdated statistics which cause much more pain than a more\naggressive autovacuum regimine does.\n \n> but I don't find naptime parameter for a manual vacuum\n \nI'm guessing that Scott was thinking of the vacuum_cost_delay\nsetting:\n \nhttp://www.postgresql.org/docs/current/interactive/runtime-config-resource.html#GUC-VACUUM-COST-DELAY\n \n-Kevin\n",
"msg_date": "Sat, 03 Dec 2011 10:00:10 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about VACUUM"
},
{
"msg_contents": "Hi Kevin, comments after your comments\n\n2011/12/3 Kevin Grittner <[email protected]>:\n> Ernesto Quiñones wrote:\n>> Scott Marlowe wrote:\n>>> Ernesto Quiñones wrote:\n>\n>>>> I want to know if it's possible to predict (calculate), how long\n>>>> a VACUUM FULL process will consume in a table?\n>\n> I don't think you said what version of PostgreSQL you're using.\n> VACUUM FULL prior to version 9.0 is not recommended for most\n> situations, and can take days or weeks to complete where other\n> methods of achieving the same end may take hours. If you have\n> autovacuum properly configured, you will probably never need to run\n> VACUUM FULL.\n\nI'm working with PostgreSQL 8.3 running in Solaris 10, my autovacuum\nparamaters are:\n\nautovacuum\ton\t\nautovacuum_analyze_scale_factor\t\t0,5\nautovacuum_analyze_threshold50000\nautovacuum_freeze_max_age \t200000000\nautovacuum_max_workers\t3\nautovacuum_naptime\t\t1h\nautovacuum_vacuum_cost_delay\t -1\nautovacuum_vacuum_cost_limit\t-1\nautovacuum_vacuum_scale_factor 0,5\nautovacuum_vacuum_threshold 50000\n\nmy vacuums parameters are:\n\nvacuum_cost_delay\t1s\nvacuum_cost_limit\t200\nvacuum_cost_page_dirty\t20\nvacuum_cost_page_hit\t1\nvacuum_cost_page_miss\t10\nvacuum_freeze_min_age\t100000000\n\n\n> Ah, well that right there is likely to put you into a position where\n> you need to do painful extraordinary cleanup like VACUUM FULL. In\n> most situation the autovacuum defaults are pretty good. Where they\n> need to be adjusted, the normal things which are actually beneficial\n> are to change the thresholds to allow more aggressive cleanup or (on\n> low-powered hardware) to adjust the cost ratios so that performance\n> is less affected by the autovacuum runs.\n\nI have a good performance in my hard disks, I have a good amount of\nmemory, but my cores are very poor, only 1ghz each one.\n\nI have some questions here:\n\n1. autovacuum_max_workers= 3 , each work processes is using only one\n\"core\" or one \"core\" it's sharing por 3 workers?\n\n2. when I run a \"explain analyze\" in a very big table (30millons of\nrows) , explain returning me 32 millons of rows moved, I am assuming\nthat my statistics are not updated in 2 millons of rows, but, is it a\nvery important number? or maybe, it's a regular result.\n\n\nthanks for your help?\n",
"msg_date": "Mon, 5 Dec 2011 12:19:42 -0500",
"msg_from": "=?ISO-8859-1?Q?Ernesto_Qui=F1ones?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about VACUUM"
},
{
"msg_contents": "On Mon, Dec 5, 2011 at 10:19 AM, Ernesto Quiñones <[email protected]> wrote:\n> Hi Kevin, comments after your comments\n>\n> 2011/12/3 Kevin Grittner <[email protected]>:\n>> Ernesto Quiñones wrote:\n>>> Scott Marlowe wrote:\n>>>> Ernesto Quiñones wrote:\n>>\n>>>>> I want to know if it's possible to predict (calculate), how long\n>>>>> a VACUUM FULL process will consume in a table?\n>>\n>> I don't think you said what version of PostgreSQL you're using.\n>> VACUUM FULL prior to version 9.0 is not recommended for most\n>> situations, and can take days or weeks to complete where other\n>> methods of achieving the same end may take hours. If you have\n>> autovacuum properly configured, you will probably never need to run\n>> VACUUM FULL.\n>\n> I'm working with PostgreSQL 8.3 running in Solaris 10, my autovacuum\n> paramaters are:\n>\n> autovacuum on\n> autovacuum_analyze_scale_factor 0,5\n> autovacuum_analyze_threshold50000\n> autovacuum_freeze_max_age 200000000\n> autovacuum_max_workers 3\n> autovacuum_naptime 1h\n> autovacuum_vacuum_cost_delay -1\n> autovacuum_vacuum_cost_limit -1\n> autovacuum_vacuum_scale_factor 0,5\n> autovacuum_vacuum_threshold 50000\n>\n> my vacuums parameters are:\n>\n> vacuum_cost_delay 1s\n> vacuum_cost_limit 200\n\nThose are insane settings for vacuum costing, even on a very slow\nmachine. Basically you're starving vacuum and autovacuum so much that\nthey can never keep up.\n\n> I have a good performance in my hard disks, I have a good amount of\n> memory, but my cores are very poor, only 1ghz each one.\n\nIf so then your settings for vacuum costing are doubly bad.\n\nI'd start by setting the cost_delay to 1ms and raising your cost limit\nby a factor of 10 or more.\n\n> I have some questions here:\n>\n> 1. autovacuum_max_workers= 3 , each work processes is using only one\n> \"core\" or one \"core\" it's sharing por 3 workers?\n\nEach worker uses a single process and can use one core basically.\nRight now your vacuum costing is such that it's using 1/100000th or so\nof a CPU.\n\n> 2. when I run a \"explain analyze\" in a very big table (30millons of\n> rows) , explain returning me 32 millons of rows moved, I am assuming\n> that my statistics are not updated in 2 millons of rows, but, is it a\n> very important number? or maybe, it's a regular result.\n\nLook for projections being off by factors of 10 or more before it\nstarts to make a big difference. 32M versus 30M is no big deal. 30k\nversus 30M is a big deal.\n",
"msg_date": "Mon, 5 Dec 2011 10:42:56 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about VACUUM"
},
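What the suggestion above could look like in practice; the numbers are only an illustration of "1ms delay, ten times the limit", not values taken from the thread:

    -- For manual VACUUM runs in the current session:
    SET vacuum_cost_delay = 1;      -- in milliseconds; was 1s
    SET vacuum_cost_limit = 2000;   -- was 200

    -- autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit are -1 here,
    -- meaning they inherit the two values above, so changing postgresql.conf and
    -- reloading covers both manual and autovacuum runs:
    --   vacuum_cost_delay = 1ms
    --   vacuum_cost_limit = 2000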
{
"msg_contents": "On Mon, Dec 5, 2011 at 10:42 AM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Dec 5, 2011 at 10:19 AM, Ernesto Quiñones <[email protected]> wrote:\n>> vacuum_cost_delay 1s\n>> vacuum_cost_limit 200\n>\n> Those are insane settings for vacuum costing, even on a very slow\n> machine. Basically you're starving vacuum and autovacuum so much that\n> they can never keep up.\n\nsorry, the word I meant there was pathological. No insult intended.\n",
"msg_date": "Mon, 5 Dec 2011 10:44:07 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about VACUUM"
},
{
"msg_contents": "no problem Scott, thanks for your appreciations\n\n\n\n2011/12/5 Scott Marlowe <[email protected]>:\n> On Mon, Dec 5, 2011 at 10:42 AM, Scott Marlowe <[email protected]> wrote:\n>> On Mon, Dec 5, 2011 at 10:19 AM, Ernesto Quiñones <[email protected]> wrote:\n>>> vacuum_cost_delay 1s\n>>> vacuum_cost_limit 200\n>>\n>> Those are insane settings for vacuum costing, even on a very slow\n>> machine. Basically you're starving vacuum and autovacuum so much that\n>> they can never keep up.\n>\n> sorry, the word I meant there was pathological. No insult intended.\n\n\n\n-- \n----------------------------------------------------------\nVisita : http://www.eqsoft.net\n----------------------------------------------------------\nSigueme en Twitter : http://www.twitter.com/ernestoq\n",
"msg_date": "Mon, 5 Dec 2011 12:46:48 -0500",
"msg_from": "=?ISO-8859-1?Q?Ernesto_Qui=F1ones?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about VACUUM"
},
{
"msg_contents": "Ernesto Quiñones<[email protected]> wrote:\n \nI understand the impulse to run autovacuum less frequently or less\naggressively. When we first started running PostgreSQL the default\nconfiguration was very cautious. A lot of bloat would accumulate\nbefore it kicked in, at which point there was a noticeable\nperformance hit, as it worked though a large number of dead pages. \nThe first thing I did was to make it run less often, which only made\nthings worse. The numbers we settled on through testing as optimal\nfor us are very close to current default values (for recent major\nreleases).\n \nNot only do queries run more quickly between autovacuum runs,\nbecause there is less dead space to wade through to get the current\ntuples, but the autovacuum runs just don't have the same degree of\nimpact -- presumably because they find less to do. Some small,\nfrequently updated tables when from having hundreds of pages down to\none or two.\n \n> autovacuum_analyze_scale_factor 0,5\n> autovacuum_analyze_threshold 50000\n \nWe use 0.10 + 10 in production. Defaults are now 0.10 + 50. That's\nthe portion of the table plus a number of rows. Analyze just does a\nrandom sample from the table; it doesn't pass the whole table.\n \n> autovacuum_vacuum_scale_factor 0,5\n> autovacuum_vacuum_threshold 50000\n \nWe use 0.20 + 10 in production. Defaults are now 0.20 + 50. Again,\na proportion of the table (in this case what is expected to have\nbecome unusable dead space) plus a number of unusable dead tuples.\n \n> autovacuum_naptime 1h\n \nA one-page table could easily bloat to hundreds (or thousands) of\npages within an hour. You will wonder where all your CPU time is\ngoing because it will constantly be scanning the same (cached) pages\nto find the one version of the row which matters. I recommend 1min.\n \n> vacuum_cost_delay 1s\n \nA vacuum run will never get much done at that rate. I recommend\n10ms.\n \n> vacuum_cost_limit 200\n \nWe've boosted this to 600. Once you're in a \"steady state\", this is\nthe setting you might want to adjust up or down as needed to make\ncleanup aggressive enough without putting a noticeable dent in\nperformance while it is running.\n \nOn 8.3 I believe you still need to worry about the fsm settings. \nRun your regular database vacuum with the VERBOSE option, and check\nwhat the last few lines say. If you don't have enough memory set\naside to track free space, no vacuum regimen will prevent bloat.\n \n-Kevin\n",
"msg_date": "Mon, 05 Dec 2011 12:36:34 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about VACUUM"
},
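Two checks that go with the advice above, both available on 8.3: watching whether dead-tuple counts stay low once autovacuum is made more aggressive, and the VERBOSE vacuum whose final lines report free space map usage against max_fsm_pages:

    -- Which tables autovacuum is (or is not) keeping up with:
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 20;

    -- Database-wide vacuum; the last few lines of the output summarize free
    -- space map usage, as mentioned above:
    VACUUM VERBOSE;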
{
"msg_contents": "On Mon, Dec 5, 2011 at 11:36 AM, Kevin Grittner\n<[email protected]> wrote:\n> Ernesto Quiñones<[email protected]> wrote:\n>> vacuum_cost_limit 200\n\n> We've boosted this to 600. Once you're in a \"steady state\", this is\n> the setting you might want to adjust up or down as needed to make\n> cleanup aggressive enough without putting a noticeable dent in\n> performance while it is running.\n\nOn the busy production systems I've worked on in the past, we had this\ncranked up to several thousand along with 10 or so workers to keep up\non a busy machine. The more IO your box has, the more you can afford\nto make vacuum / autovacuum aggressive.\n",
"msg_date": "Mon, 5 Dec 2011 13:29:43 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about VACUUM"
},
{
"msg_contents": "On 12/5/11 1:36 PM, Kevin Grittner wrote:\n> I understand the impulse to run autovacuum less frequently or less\n> aggressively. When we first started running PostgreSQL the default\n> configuration was very cautious.\n\nThe default settings are deliberately cautious, as default settings\nshould be.\n\nBut yes, anyone with a really large/high-traffic database will often\nwant to make autovac more aggressive.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 06 Dec 2011 23:13:54 -0500",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about VACUUM"
},
{
"msg_contents": "Josh Berkus <[email protected]> wrote:\n> On 12/5/11 1:36 PM, Kevin Grittner wrote:\n>> I understand the impulse to run autovacuum less frequently or\n>> less aggressively. When we first started running PostgreSQL the\n>> default configuration was very cautious.\n> \n> The default settings are deliberately cautious, as default\n> settings should be.\n \nI was talking historically, about the defaults in 8.1:\n \nhttp://www.postgresql.org/docs/8.1/interactive/runtime-config-autovacuum.html\n \nThose defaults were *over*-cautious to the point that we experienced\nserious problems. My point was that many people's first instinct in\nthat case is to make the setting less aggressive, as I initially did\nand the OP has done. The problem is actually solved by making them\n*more* aggressive. Current defaults are pretty close to what we\nfound, through experimentation, worked well for us for most\ndatabases.\n \n> But yes, anyone with a really large/high-traffic database will\n> often want to make autovac more aggressive.\n \nI think we're in agreement: current defaults are good for a typical\nenvironment; high-end setups still need to tune to more aggressive\nsettings. This is an area where incremental changes with monitoring\nworks well.\n \n-Kevin\n",
"msg_date": "Wed, 07 Dec 2011 09:14:11 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about VACUUM"
}
] |
[
{
"msg_contents": "I have a fairly simple query:\n\nSELECT <some columns>\nFROM \"tubesite_image\"\nINNER JOIN \"tubesite_object\"\n\tON (\"tubesite_image\".\"object_ptr_id\" = \"tubesite_object\".\"id\")\nWHERE\n\t\"tubesite_object\".\"site_id\" = 8\nORDER BY\n\t\"tubesite_object\".\"pub_date\" ASC LIMIT 21;\n\n\n\nThat query is having a bad query plan on production server:\n\n Limit (cost=0.00..1938.67 rows=21 width=275) (actual\ntime=3270.000..3270.000 rows=0 loops=1)\n -> Nested Loop (cost=0.00..792824.51 rows=8588 width=275) (actual\ntime=3269.997..3269.997 rows=0 loops=1)\n -> Index Scan using tubesite_object_pub_date_idx on\ntubesite_object (cost=0.00..789495.13 rows=9711 width=271) (actual\ntime=0.011..3243.629 rows=9905 loops=1)\n Filter: (site_id = 8)\n -> Index Scan using tubesite_image_pkey on tubesite_image\n(cost=0.00..0.33 rows=1 width=4) (actual time=0.002..0.002 rows=0\nloops=9905)\n Index Cond: (tubesite_image.object_ptr_id =\ntubesite_object.id)\n Total runtime: 3270.071 ms\n\nBut, when I turn off nested loops, the query flies:\n\n\nQUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=31384.35..31384.40 rows=21 width=275) (actual\ntime=37.988..37.988 rows=0 loops=1)\n -> Sort (cost=31384.35..31405.82 rows=8588 width=275) (actual\ntime=37.986..37.986 rows=0 loops=1)\n Sort Key: tubesite_object.pub_date\n Sort Method: quicksort Memory: 25kB\n -> Hash Join (cost=857.00..31152.80 rows=8588 width=275)\n(actual time=37.968..37.968 rows=0 loops=1)\n Hash Cond: (tubesite_object.id =\ntubesite_image.object_ptr_id)\n -> Bitmap Heap Scan on tubesite_object\n(cost=596.77..30685.30 rows=9711 width=271) (actual time=7.414..25.132\nrows=9905 loops=1)\n Recheck Cond: (site_id = 8)\n -> Bitmap Index Scan on tubesite_object_site_id\n(cost=0.00..594.34 rows=9711 width=0) (actual time=4.943..4.943\nrows=9905 loops=1)\n Index Cond: (site_id = 8)\n -> Hash (cost=152.88..152.88 rows=8588 width=4) (actual\ntime=4.620..4.620 rows=8588 loops=1)\n -> Seq Scan on tubesite_image (cost=0.00..152.88\nrows=8588 width=4) (actual time=0.005..2.082 rows=8588 loops=1)\n Total runtime: 38.071 ms\n\n\nI have rsynced the database from the prod server to the test server,\nthat has same configuration (shared buffers, work mem, estimated cache\nsize, and so on), and there it chooses bitmap heap scan with hash join\nwithout disabling the nested loops.\n\nI have 8.4.8 on producion and 8.4.9 on test, could that explain the\ndifference in plans chosen?\n\n",
"msg_date": "Tue, 06 Dec 2011 20:48:10 +0100",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Different query plans on same servers"
},
{
"msg_contents": "Mario Splivalo <[email protected]> writes:\n> I have 8.4.8 on producion and 8.4.9 on test, could that explain the\n> difference in plans chosen?\n\nI'd wonder first if you have the same statistics settings on both.\nThe big problem here is that the estimation of the join size is bad\n(8588 versus 0).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Dec 2011 15:00:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different query plans on same servers "
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n \n> I'd wonder first if you have the same statistics settings on both.\n> The big problem here is that the estimation of the join size is\n> bad (8588 versus 0).\n \nBut both servers develop that estimate for the join size. I was\nwondering more about whether the costing factors were really the\nsame:\n \nslow:\n \n -> Nested Loop\n (cost=0.00..792824.51 rows=8588 width=275)\n (actual time=3269.997..3269.997 rows=0 loops=1)\n \nversus fast:\n \n -> Hash Join\n (cost=857.00..31152.80 rows=8588 width=275)\n (actual time=37.968..37.968 rows=0 loops=1)\n \nThe hash join path must look more expensive on the first machine,\nfor some reason.\n \nMario, could you post the result of running this query from both\nservers?:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \n-Kevin\n",
"msg_date": "Tue, 06 Dec 2011 14:17:30 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different query plans on same servers"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> But both servers develop that estimate for the join size.\n \n[sigh] Those *were* both from the production server. Please show\nus the EXPLAIN ANALYZE from the other server.\n \n-Kevin\n",
"msg_date": "Tue, 06 Dec 2011 14:29:03 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different query plans on same servers"
},
{
"msg_contents": "On 12/06/2011 09:00 PM, Tom Lane wrote:\n> Mario Splivalo <[email protected]> writes:\n>> I have 8.4.8 on producion and 8.4.9 on test, could that explain the\n>> difference in plans chosen?\n> \n> I'd wonder first if you have the same statistics settings on both.\n> The big problem here is that the estimation of the join size is bad\n> (8588 versus 0).\n\nThey do, I guess. I did rsync postgres datadir from the prod server to\nthe test server. The only difference is that prod server was a bit more\nloaded than the test server.\n\n\tMario\n",
"msg_date": "Wed, 07 Dec 2011 01:23:57 +0100",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Different query plans on same servers"
},
{
"msg_contents": "On 12/06/2011 09:17 PM, Kevin Grittner wrote:\n> \n> The hash join path must look more expensive on the first machine,\n> for some reason.\n> \n> Mario, could you post the result of running this query from both\n> servers?:\n> \n> http://wiki.postgresql.org/wiki/Server_Configuration\n\nSure. Here is from the prod server:\n\n name |\n current_setting\n-----------------------------+--------------------------------------------------------------------------------------------------------\n version | PostgreSQL 8.4.8 on x86_64-pc-linux-gnu,\ncompiled by GCC gcc-4.3.real (Debian 4.3.2-1.1) 4.3.2, 64-bit\n checkpoint_segments | 64\n default_statistics_target | 2000\n effective_cache_size | 36GB\n external_pid_file | /var/run/postgresql/8.4-main.pid\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_autovacuum_min_duration | 0\n log_checkpoints | on\n log_line_prefix | %t [%p]: [%l-1] [%d]\n log_min_duration_statement | 1s\n maintenance_work_mem | 256MB\n max_connections | 1500\n max_stack_depth | 3MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 4GB\n statement_timeout | 30min\n temp_buffers | 4096\n TimeZone | localtime\n track_activity_query_size | 2048\n unix_socket_directory | /var/run/postgresql\n wal_buffers | 128MB\n work_mem | 64MB\n\n\nAnd here is from the test server:\n name |\ncurrent_setting\n----------------------------+------------------------------------------------------------------------------------------------------\n version | PostgreSQL 8.4.9 on x86_64-pc-linux-gnu,\ncompiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit\n checkpoint_segments | 64\n default_statistics_target | 2000\n effective_cache_size | 36GB\n external_pid_file | /var/run/postgresql/8.4-main.pid\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_connections | on\n log_disconnections | on\n log_line_prefix | %t [%p]: [%l-1] [%d]\n log_min_duration_statement | 0\n maintenance_work_mem | 256MB\n max_connections | 40\n max_stack_depth | 3MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 4GB\n ssl | on\n temp_buffers | 4096\n TimeZone | localtime\n unix_socket_directory | /var/run/postgresql\n wal_buffers | 128MB\n work_mem | 64MB\n(24 rows)\n\nAt the time of doing 'explain analyze' on the prod server there were cca\n80 connections on the server.\n\n\tMario\n",
"msg_date": "Wed, 07 Dec 2011 01:27:15 +0100",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Different query plans on same servers"
},
{
"msg_contents": "On 12/06/2011 09:29 PM, Kevin Grittner wrote:\n> \"Kevin Grittner\" <[email protected]> wrote:\n> \n>> But both servers develop that estimate for the join size.\n> \n> [sigh] Those *were* both from the production server. Please show\n> us the EXPLAIN ANALYZE from the other server.\n\nHuh, right... missed that one. Here is the 'explain analyze' from the\nother server:\n\n\n\nQUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=31531.75..31531.80 rows=21 width=275) (actual\ntime=45.584..45.584 rows=0 loops=1)\n -> Sort (cost=31531.75..31531.84 rows=36 width=275) (actual\ntime=45.579..45.579 rows=0 loops=1)\n Sort Key: tubesite_object.pub_date\n Sort Method: quicksort Memory: 25kB\n -> Hash Join (cost=866.34..31530.82 rows=36 width=275)\n(actual time=45.544..45.544 rows=0 loops=1)\n Hash Cond: (tubesite_object.id =\ntubesite_image.object_ptr_id)\n -> Bitmap Heap Scan on tubesite_object\n(cost=606.11..31146.68 rows=9884 width=271) (actual time=6.861..37.497\nrows=9905 loops=1)\n Recheck Cond: (site_id = 8)\n -> Bitmap Index Scan on tubesite_object_site_id\n(cost=0.00..603.64 rows=9884 width=0) (actual time=4.792..4.792\nrows=9905 loops=1)\n Index Cond: (site_id = 8)\n -> Hash (cost=152.88..152.88 rows=8588 width=4) (actual\ntime=3.816..3.816 rows=8588 loops=1)\n -> Seq Scan on tubesite_image (cost=0.00..152.88\nrows=8588 width=4) (actual time=0.003..1.740 rows=8588 loops=1)\n Total runtime: 45.798 ms\n\n\n\n\nThis is also a query from the prod server, but without LIMIT:\n\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=31713.95..31735.42 rows=8588 width=275) (actual\ntime=60.311..60.311 rows=0 loops=1)\n Sort Key: tubesite_object.pub_date\n Sort Method: quicksort Memory: 25kB\n -> Hash Join (cost=857.00..31152.80 rows=8588 width=275) (actual\ntime=60.255..60.255 rows=0 loops=1)\n Hash Cond: (tubesite_object.id = tubesite_image.object_ptr_id)\n -> Bitmap Heap Scan on tubesite_object (cost=596.77..30685.30\nrows=9711 width=271) (actual time=8.682..49.721 rows=9905 loops=1)\n Recheck Cond: (site_id = 8)\n -> Bitmap Index Scan on tubesite_object_site_id\n(cost=0.00..594.34 rows=9711 width=0) (actual time=5.705..5.705\nrows=9905 loops=1)\n Index Cond: (site_id = 8)\n -> Hash (cost=152.88..152.88 rows=8588 width=4) (actual\ntime=4.281..4.281 rows=8588 loops=1)\n -> Seq Scan on tubesite_image (cost=0.00..152.88\nrows=8588 width=4) (actual time=0.005..1.437 rows=8588 loops=1)\n Total runtime: 60.483 ms\n(12 rows)\n\n\nI will try to rsync prod database to 8.4.8 on test server tomorrow, and\nsee what happens. Hopefully upgrade to 8.4.9 (or even 8.4.10 if Debian\npackages is by tomorrow) will solve the issue...\n\n\tMario\n",
"msg_date": "Wed, 07 Dec 2011 01:35:18 +0100",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Different query plans on same servers"
},
{
"msg_contents": "On 12/06/2011 09:00 PM, Tom Lane wrote:\n> Mario Splivalo <[email protected]> writes:\n>> I have 8.4.8 on producion and 8.4.9 on test, could that explain the\n>> difference in plans chosen?\n> \n> I'd wonder first if you have the same statistics settings on both.\n> The big problem here is that the estimation of the join size is bad\n> (8588 versus 0).\n\nJust an update here. I did downgrade postgres on testbox to 8.4.8 and\nnow it's choosing bad plan there too.\n\nSo we upgraded postgres on production server and the bad plan went away.\nWe're preparing for upgrade to 9.1 now, we hope to offload some of the\nSELECTs to the slave server, we'll see how that will work.\n\nThank you for your inputs!\n\n\tMario\n",
"msg_date": "Wed, 07 Dec 2011 10:34:56 +0100",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Different query plans on same servers"
}
] |
[
{
"msg_contents": "Hi all,\n\n\nI am running a load simulation on Debian with PostgreSQL 8.4.9 (standard\nDebian package).\n\nCertain number of clients do the following stepsin a transaction (read\ncommited level) periodically (about 1.1 transaction per second / client)\nand concurrently:\n\n-reads a record of table Machine and State (they each have about 300\nrecords, read size is about 1.4 KB)\n-reads a record of table Card (it has about 1200 records)\n-reads some other records from other tables, all these are straightforward,\nsingle line queries (here there are even less records in the tables)\n-updates Machine (1 record, updates 2-3 integer values)\n-updates State (1 record, updates a bytea field, about 1,3KB)\n-updates Card (1 record, updates an integer)\n-inserts 1-1 record into 2 log tables\n\nIts important, that each client updates different records, so there is no\nupdate conflict. There are no triggers or rules. Tables have simple\nindexes, 2 at most.\n\nAs I run the simulation with more and more clients, I can observe, that at\nthe beginning of the simulation the transaction times are quite acceptable\n(20-30 ms) and quite uniform/smooth, but as the simultion progresses, it\nbecomes higher (30-40-50-60 ms) and more and more non-uniform, but the tps\ninterestingly remains the same during the simulation. With 100 clients this\nkind of behaviour can be seen very well. The simulation's duration is 500\nsec.\nI wonder why this happens on this server, and how I can keep the response\ntime as low as at the beginning.\n\nJust for comparison, I ran the same simulation on a Windows 7 notebook\nmachine but with PostgreSQL 9.1.2 (downloaded from EnterpriseDB's site, not\nPostgreSQL Plus), and it did not show this problem even with 120 clients.\nIt's transaction times were surprisingly smooth and consistent. The client\ncode was the same in the 2 cases.\nActually I ran first the test on the Windows machine, and after that on the\nbetter Debian. I expected that it would be even better there. 
Network\nlatency is quite minimal, because the clients and the database server run\non VMs on a server machine in the Linux case.\n\nHere is some important config variables from the 8.4 (9.1.2 is configured\nsimilarly):\n\n\n\nssl=false\n\nshared_buffers=24MB (OS max currently, but should not be a problem because\n9.1.2 performed quite well on Windows with 24 MB)\n\nwork_mem=1MB\n\nmaintainance_work_mem=16MB\n\n\n\nfsync=on\n\nsync_commit=on\n\nwal_sync_method=fsync\n\nfull_page_writes=on\n\nwal_buffers=1MB\n\ncommit_delay=0\n\ncheckpoint segments=8\n\n\n\neffective_cache_size=256MB\n\n\n\nvacuum: default\nbgwriter: default\n\n\nI suspected that due to the lot of update, the tables get bloated with dead\nrows, but vacuum analyze verbose did not show that.\nIt seems that something cannot keep up with the load, but tps does not\nchange, just the response time gets higher.\nCould you please help me with what can cause this kind of behaviour on\nLinux?\nWhat setting should I change perhaps?\nIs there so much difference between 8.4 and 9.1, or is this something else?\nPlease tell me if any other info is needed.\n\nThanks in advance,\nOtto\n\nHi all,I am running a load simulation on Debian with PostgreSQL 8.4.9 (standard Debian package).Certain number of clients do the following stepsin a transaction (read commited level) periodically (about 1.1 transaction per second / client) and concurrently:\n-reads a record of table Machine and State (they each have about 300 records, read size is about 1.4 KB)-reads a record of table Card (it has about 1200 records)-reads some other records from other tables, all these are straightforward, single line queries (here there are even less records in the tables)\n-updates Machine (1 record, updates 2-3 integer values)-updates State (1 record, updates a bytea field, about 1,3KB)-updates Card (1 record, updates an integer) -inserts 1-1 record into 2 log tablesIts important, that each client updates different records, so there is no update conflict. There are no triggers or rules. Tables have simple indexes, 2 at most.\nAs I run the simulation with more and more clients, I can observe, that at the beginning of the simulation the transaction times are quite acceptable (20-30 ms) and quite uniform/smooth, but as the simultion progresses, it becomes higher (30-40-50-60 ms) and more and more non-uniform, but the tps interestingly remains the same during the simulation. With 100 clients this kind of behaviour can be seen very well. The simulation's duration is 500 sec.\nI wonder why this happens on this server, and how I can keep the response time as low as at the beginning.Just for comparison, I ran the same simulation on a Windows 7 notebook machine but with PostgreSQL 9.1.2 (downloaded from EnterpriseDB's site, not PostgreSQL Plus), and it did not show this problem even with 120 clients. It's transaction times were surprisingly smooth and consistent. The client code was the same in the 2 cases. \nActually I ran first the test on the Windows machine, and after that on the better Debian. I expected that it would be even better there. 
Network latency is quite minimal, because the clients and the database server run on VMs on a server machine in the Linux case.\nHere is some important config variables from the 8.4 (9.1.2 is configured similarly):\n\n \nssl=false\nshared_buffers=24MB (OS max currently, but should not be a problem because 9.1.2 performed quite well on Windows with 24 MB)\nwork_mem=1MB\nmaintainance_work_mem=16MB\n \nfsync=on\nsync_commit=on\nwal_sync_method=fsync\nfull_page_writes=on\nwal_buffers=1MB\ncommit_delay=0\ncheckpoint segments=8\n \neffective_cache_size=256MB\n \nvacuum: default\n\nbgwriter: defaultI suspected that due to the lot of update, the tables get bloated with dead rows, but vacuum analyze verbose did not show that.It seems that something cannot keep up with the load, but tps does not change, just the response time gets higher.\nCould you please help me with what can cause this kind of behaviour on Linux?What setting should I change perhaps?Is there so much difference between 8.4 and 9.1, or is this something else?Please tell me if any other info is needed.\nThanks in advance,Otto",
"msg_date": "Tue, 6 Dec 2011 22:30:11 +0100",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Response time increases over time"
},
{
"msg_contents": "On 12/6/11 4:30 PM, Havasvölgyi Ottó wrote:\n> Is there so much difference between 8.4 and 9.1, or is this something else?\n> Please tell me if any other info is needed.\n\nIt is fairly likely that the difference you're seeing here is due to\nimprovements made in checkpointing and other operations made between 8.4\nand 9.1.\n\nIs there some reason you didn't test 9.1 on Linux to compare the two?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 06 Dec 2011 23:11:45 -0500",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "Thanks, Josh.\nThe only reason I tried 8.4 first is that it was available for Debian as\ncompiled package, so it was simpler for me to do it. Anyway I am going to\ntest 9.1 too. I will post about the results.\n\nBest reagrds,\nOtto\n\n\n2011/12/7 Josh Berkus <[email protected]>\n\n> On 12/6/11 4:30 PM, Havasvölgyi Ottó wrote:\n> > Is there so much difference between 8.4 and 9.1, or is this something\n> else?\n> > Please tell me if any other info is needed.\n>\n> It is fairly likely that the difference you're seeing here is due to\n> improvements made in checkpointing and other operations made between 8.4\n> and 9.1.\n>\n> Is there some reason you didn't test 9.1 on Linux to compare the two?\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks, Josh.The only reason I tried 8.4 first is that it was available for Debian as compiled package, so it was simpler for me to do it. Anyway I am going to test 9.1 too. I will post about the results.Best reagrds,\nOtto2011/12/7 Josh Berkus <[email protected]>\nOn 12/6/11 4:30 PM, Havasvölgyi Ottó wrote:\n> Is there so much difference between 8.4 and 9.1, or is this something else?\n> Please tell me if any other info is needed.\n\nIt is fairly likely that the difference you're seeing here is due to\nimprovements made in checkpointing and other operations made between 8.4\nand 9.1.\n\nIs there some reason you didn't test 9.1 on Linux to compare the two?\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 7 Dec 2011 09:23:39 +0100",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "On 12/07/2011 09:23 AM, Havasvölgyi Ottó wrote:\n> Thanks, Josh.\n> The only reason I tried 8.4 first is that it was available for Debian as\n> compiled package, so it was simpler for me to do it. Anyway I am going\n> to test 9.1 too. I will post about the results.\n> \n\nIf you're using squeeze, you can get 9.1 from the debian backports.\n\n\tMario\n",
"msg_date": "Wed, 07 Dec 2011 10:35:43 +0100",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "Thanks for that Mario, I will check it out.\n\n@All:\nAnyway, I have compiled 9.1.2 from source, and unfortunately the\nperformance haven't got better at the same load, it is consistently quite\nlow (~70 ms average transaction time with 100 clients) on this Debian. I am\nquite surprised about this, it is unrealistically high.\nI have run pg_test_fsync, and showed about 2600 fsync/sec, which means HDD\nhas write caching on (it is a 7200 rpm drive, there is no HW RAID\ncontroller). However my other machine, the simple Win7 one, on which\nperformance was so good and consistent, fsync/sec was a lot lower, only\nabout 100 as I can remember, so it probably really flushed each transaction\nto disk.\nI have also run load simulation on this Debian machine with InnoDb, and it\nperformed quite well, so the machine itself is good enough to handle this.\nOn the other hand it is quite poor on Win7, but that's another story...\n\nSo there seems to be something on this Debian machine that hinders\nPostgreSQL to perform better. With 8.4 I logged slow queries (with 9.1 not\nyet), and almost all were COMMIT, taking 10-20-30 or even more ms. But at\nthe same time the fsync rate can be quite high based on pg_test_fsync, so\nprobably not fsync is what makes it slow. Performance seems to degrade\ndrastically as I increase the concurrency, mainly concurrent commit has\nproblems as I can see.\nI also checked that connection pooling works well, and clients don't\nclose/open connections.\nI also have a graph about outstanding transaction count over time, and it\nis quite strange: it shows that low performce (20-30 xacts at a time) and\nhigh-performace (<5 xact at a time) parts are alternating quite frequently\ninstead of being more even.\nDo anybody have any idea based on this info about what can cause such\nbehaviour, or what I could check or try?\n\nThanks in advance,\nOtto\n\n2011/12/7 Mario Splivalo <[email protected]>\n\n> On 12/07/2011 09:23 AM, Havasvölgyi Ottó wrote:\n> > Thanks, Josh.\n> > The only reason I tried 8.4 first is that it was available for Debian as\n> > compiled package, so it was simpler for me to do it. Anyway I am going\n> > to test 9.1 too. I will post about the results.\n> >\n>\n> If you're using squeeze, you can get 9.1 from the debian backports.\n>\n> Mario\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks for that Mario, I will check it out.@All:Anyway, I \nhave compiled 9.1.2 from source, and unfortunately the performance \nhaven't got better at the same load, it is consistently quite low (~70 \nms average transaction time with 100 clients) on this Debian. I am quite\n surprised about this, it is unrealistically high.\nI have run pg_test_fsync, and showed about 2600 fsync/sec, which means \nHDD has write caching on (it is a 7200 rpm drive, there is no HW RAID \ncontroller). However my other machine, the simple Win7 one, on which \nperformance was so good and consistent, fsync/sec was a lot lower, only \nabout 100 as I can remember, so it probably really flushed each \ntransaction to disk.\nI have also run load simulation on this Debian machine with InnoDb, and \nit performed quite well, so the machine itself is good enough to handle \nthis. On the other hand it is quite poor on Win7, but that's another \nstory...\nSo there seems to be something on this Debian machine that hinders \nPostgreSQL to perform better. 
With 8.4 I logged slow queries (with 9.1 \nnot yet), and almost all were COMMIT, taking 10-20-30 or even more ms. \nBut at the same time the fsync rate can be quite high based on \npg_test_fsync, so probably not fsync is what makes it slow. Performance \nseems to degrade drastically as I increase the concurrency, mainly \nconcurrent commit has problems as I can see.\nI also checked that connection pooling works well, and clients don't close/open connections.I\n also have a graph about outstanding transaction count over time, and it\n is quite strange: it shows that low performce (20-30 xacts at a time) \nand high-performace (<5 xact at a time) parts are alternating quite \nfrequently instead of being more even.\nDo anybody have any idea based on this info about what can cause such behaviour, or what I could check or try?Thanks in advance,Otto2011/12/7 Mario Splivalo <[email protected]>\nOn 12/07/2011 09:23 AM, Havasvölgyi Ottó wrote:\n> Thanks, Josh.\n> The only reason I tried 8.4 first is that it was available for Debian as\n> compiled package, so it was simpler for me to do it. Anyway I am going\n> to test 9.1 too. I will post about the results.\n>\n\nIf you're using squeeze, you can get 9.1 from the debian backports.\n\n Mario\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 7 Dec 2011 23:13:31 +0100",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "On Wed, Dec 7, 2011 at 5:13 PM, Havasvölgyi Ottó\n<[email protected]> wrote:\n\n> So there seems to be something on this Debian machine that hinders\n> PostgreSQL to perform better. With 8.4 I logged slow queries (with 9.1 not\n> yet), and almost all were COMMIT, taking 10-20-30 or even more ms. But at\n> the same time the fsync rate can be quite high based on pg_test_fsync, so\n> probably not fsync is what makes it slow. Performance seems to degrade\n> drastically as I increase the concurrency, mainly concurrent commit has\n> problems as I can see.\n\n> Do anybody have any idea based on this info about what can cause such\n> behaviour, or what I could check or try?\n\nLet me guess, debian squeeze, with data and xlog on both on a single\next3 filesystem, and the fsync done by your commit (xlog) is flushing\nall the dirty data of the entire filesystem (including PG data writes)\nout before it can return...\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.\n",
"msg_date": "Wed, 7 Dec 2011 23:37:34 -0500",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "Yes, ext3 is the global file system, and you are right, PG xlog and data\nare on this one.\nIs this really what happens Aidan at fsync?\nWhat is be the best I can do?\nMount xlog directory to a separate file system?\nIf so, which file system fits the best for this purpose?\nShould I also mount the data separately, or is that not so important?\n\nThe strange thing is that InnoDb data and xlog are also on the same\nfilesystem, but on a separate one (ext4) from the global one.\n\nThanks,\nOtto\n\n\n\n\n2011/12/8 Aidan Van Dyk <[email protected]>\n\n> On Wed, Dec 7, 2011 at 5:13 PM, Havasvölgyi Ottó\n> <[email protected]> wrote:\n>\n> > So there seems to be something on this Debian machine that hinders\n> > PostgreSQL to perform better. With 8.4 I logged slow queries (with 9.1\n> not\n> > yet), and almost all were COMMIT, taking 10-20-30 or even more ms. But at\n> > the same time the fsync rate can be quite high based on pg_test_fsync, so\n> > probably not fsync is what makes it slow. Performance seems to degrade\n> > drastically as I increase the concurrency, mainly concurrent commit has\n> > problems as I can see.\n>\n> > Do anybody have any idea based on this info about what can cause such\n> > behaviour, or what I could check or try?\n>\n> Let me guess, debian squeeze, with data and xlog on both on a single\n> ext3 filesystem, and the fsync done by your commit (xlog) is flushing\n> all the dirty data of the entire filesystem (including PG data writes)\n> out before it can return...\n>\n> a.\n>\n> --\n> Aidan Van Dyk Create like a\n> god,\n> [email protected] command like a\n> king,\n> http://www.highrise.ca/ work like a\n> slave.\n>\n\nYes, ext3 is the global file system, and you are right, PG xlog and data are on this one.Is this really what happens Aidan at fsync?What is be the best I can do?Mount xlog directory to a separate file system? \nIf so, which file system fits the best for this purpose?Should I also mount the data separately, or is that not so important?The strange thing is that InnoDb data and xlog are also on the same filesystem, but on a separate one (ext4) from the global one.\nThanks,Otto2011/12/8 Aidan Van Dyk <[email protected]>\nOn Wed, Dec 7, 2011 at 5:13 PM, Havasvölgyi Ottó\n<[email protected]> wrote:\n\n> So there seems to be something on this Debian machine that hinders\n> PostgreSQL to perform better. With 8.4 I logged slow queries (with 9.1 not\n> yet), and almost all were COMMIT, taking 10-20-30 or even more ms. But at\n> the same time the fsync rate can be quite high based on pg_test_fsync, so\n> probably not fsync is what makes it slow. Performance seems to degrade\n> drastically as I increase the concurrency, mainly concurrent commit has\n> problems as I can see.\n\n> Do anybody have any idea based on this info about what can cause such\n> behaviour, or what I could check or try?\n\nLet me guess, debian squeeze, with data and xlog on both on a single\next3 filesystem, and the fsync done by your commit (xlog) is flushing\nall the dirty data of the entire filesystem (including PG data writes)\nout before it can return...\n\na.\n\n--\nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.",
"msg_date": "Thu, 8 Dec 2011 09:50:14 +0100",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "On Thu, Dec 8, 2011 at 06:37, Aidan Van Dyk <[email protected]> wrote:\n> Let me guess, debian squeeze, with data and xlog on both on a single\n> ext3 filesystem, and the fsync done by your commit (xlog) is flushing\n> all the dirty data of the entire filesystem (including PG data writes)\n> out before it can return...\n\nThis is fixed with the data=writeback mount option, right?\n(If it's the root file system, you need to add\nrootfsflags=data=writeback to your kernel boot flags)\n\nWhile this setting is safe and recommended for PostgreSQL and other\ntransactional databases, it can cause garbage to appear in recently\nwritten files after a crash/power loss -- for applications that don't\ncorrectly fsync data to disk.\n\nRegards,\nMarti\n",
"msg_date": "Thu, 8 Dec 2011 15:44:42 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "I have moved the data directory (xlog, base, global, and everything) to an\next4 file system. The result hasn't changed unfortuately. With the same\nload test the average response time: 80ms; from 40ms to 120 ms everything\noccurs.\nThis ext4 has default settings in fstab.\nHave you got any other idea what is going on here?\n\nThanks,\nOtto\n\n\n\n\n2011/12/8 Marti Raudsepp <[email protected]>\n\n> On Thu, Dec 8, 2011 at 06:37, Aidan Van Dyk <[email protected]> wrote:\n> > Let me guess, debian squeeze, with data and xlog on both on a single\n> > ext3 filesystem, and the fsync done by your commit (xlog) is flushing\n> > all the dirty data of the entire filesystem (including PG data writes)\n> > out before it can return...\n>\n> This is fixed with the data=writeback mount option, right?\n> (If it's the root file system, you need to add\n> rootfsflags=data=writeback to your kernel boot flags)\n>\n> While this setting is safe and recommended for PostgreSQL and other\n> transactional databases, it can cause garbage to appear in recently\n> written files after a crash/power loss -- for applications that don't\n> correctly fsync data to disk.\n>\n> Regards,\n> Marti\n>\n\nI have moved the data directory (xlog, base, global, and everything) to an ext4 file system. The result hasn't changed unfortuately. With the same load test the average response time: 80ms; from 40ms to 120 ms everything occurs.\nThis ext4 has default settings in fstab.Have you got any other idea what is going on here?Thanks,Otto2011/12/8 Marti Raudsepp <[email protected]>\nOn Thu, Dec 8, 2011 at 06:37, Aidan Van Dyk <[email protected]> wrote:\n\n> Let me guess, debian squeeze, with data and xlog on both on a single\n> ext3 filesystem, and the fsync done by your commit (xlog) is flushing\n> all the dirty data of the entire filesystem (including PG data writes)\n> out before it can return...\n\nThis is fixed with the data=writeback mount option, right?\n(If it's the root file system, you need to add\nrootfsflags=data=writeback to your kernel boot flags)\n\nWhile this setting is safe and recommended for PostgreSQL and other\ntransactional databases, it can cause garbage to appear in recently\nwritten files after a crash/power loss -- for applications that don't\ncorrectly fsync data to disk.\n\nRegards,\nMarti",
"msg_date": "Thu, 8 Dec 2011 16:48:50 +0100",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "Otto,\n\nSeparate the pg_xlog directory onto its own filesystem and retry your tests.\n\nBob Lunney\n\n\n________________________________\n From: Havasvölgyi Ottó <[email protected]>\nTo: Marti Raudsepp <[email protected]> \nCc: Aidan Van Dyk <[email protected]>; [email protected] \nSent: Thursday, December 8, 2011 9:48 AM\nSubject: Re: [PERFORM] Response time increases over time\n \n\nI have moved the data directory (xlog, base, global, and everything) to an ext4 file system. The result hasn't changed unfortuately. With the same load test the average response time: 80ms; from 40ms to 120 ms everything occurs.\nThis ext4 has default settings in fstab.\nHave you got any other idea what is going on here?\n\nThanks,\nOtto\n\n\n\n\n\n2011/12/8 Marti Raudsepp <[email protected]>\n\nOn Thu, Dec 8, 2011 at 06:37, Aidan Van Dyk <[email protected]> wrote:\n>> Let me guess, debian squeeze, with data and xlog on both on a single\n>> ext3 filesystem, and the fsync done by your commit (xlog) is flushing\n>> all the dirty data of the entire filesystem (including PG data writes)\n>> out before it can return...\n>\n>This is fixed with the data=writeback mount option, right?\n>(If it's the root file system, you need to add\n>rootfsflags=data=writeback to your kernel boot flags)\n>\n>While this setting is safe and recommended for PostgreSQL and other\n>transactional databases, it can cause garbage to appear in recently\n>written files after a crash/power loss -- for applications that don't\n>correctly fsync data to disk.\n>\n>Regards,\n>Marti\n>\nOtto,Separate the pg_xlog directory onto its own filesystem and retry your tests.Bob Lunney From: Havasvölgyi Ottó <[email protected]> To: Marti Raudsepp <[email protected]> Cc: Aidan Van Dyk <[email protected]>; [email protected] Sent: Thursday, December\n 8, 2011 9:48 AM Subject: Re: [PERFORM] Response time increases over time \nI have moved the data directory (xlog, base, global, and everything) to an ext4 file system. The result hasn't changed unfortuately. With the same load test the average response time: 80ms; from 40ms to 120 ms everything occurs.\nThis ext4 has default settings in fstab.Have you got any other idea what is going on here?Thanks,Otto2011/12/8 Marti Raudsepp <[email protected]>\nOn Thu, Dec 8, 2011 at 06:37, Aidan Van Dyk <[email protected]> wrote:\n\n> Let me guess, debian squeeze, with data and xlog on both on a single\n> ext3 filesystem, and the fsync done by your commit (xlog) is flushing\n> all the dirty data of the entire filesystem (including PG data writes)\n> out before it can return...\n\nThis is fixed with the data=writeback mount option, right?\n(If it's the root file system, you need to add\nrootfsflags=data=writeback to your kernel boot flags)\n\nWhile this setting is safe and recommended for PostgreSQL and other\ntransactional databases, it can cause garbage to appear in recently\nwritten files after a crash/power loss -- for applications that don't\ncorrectly fsync data to disk.\n\nRegards,\nMarti",
"msg_date": "Thu, 8 Dec 2011 07:58:30 -0800 (PST)",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Response time increases over time"
},
{
"msg_contents": "I have put pg_xlog back to the ext3 partition, but nothing changed.\nI have also switched off sync_commit, but nothing. This is quite\ninteresting...\nHere is a graph about the transaction time (sync_commit off, pg_xlog on\nseparate file system): Graph <http://uploadpic.org/v.php?img=qIjfWBkHyE>\nOn the graph the red line up there is the tranaction/sec, it is about 110,\nand does not get lower as the transaction time gets higher.\nBased on this, am I right that it is not the commit, that causes these high\ntransaction times?\nKernel version is 2.6.32.\nAny idea is appreciated.\n\nThanks,\nOtto\n\n\n\n\n2011/12/8 Bob Lunney <[email protected]>\n\n> Otto,\n>\n> Separate the pg_xlog directory onto its own filesystem and retry your\n> tests.\n>\n> Bob Lunney\n>\n> ------------------------------\n> *From:* Havasvölgyi Ottó <[email protected]>\n> *To:* Marti Raudsepp <[email protected]>\n> *Cc:* Aidan Van Dyk <[email protected]>; [email protected]\n> *Sent:* Thursday, December 8, 2011 9:48 AM\n>\n> *Subject:* Re: [PERFORM] Response time increases over time\n>\n> I have moved the data directory (xlog, base, global, and everything) to an\n> ext4 file system. The result hasn't changed unfortuately. With the same\n> load test the average response time: 80ms; from 40ms to 120 ms everything\n> occurs.\n> This ext4 has default settings in fstab.\n> Have you got any other idea what is going on here?\n>\n> Thanks,\n> Otto\n>\n>\n>\n>\n> 2011/12/8 Marti Raudsepp <[email protected]>\n>\n> On Thu, Dec 8, 2011 at 06:37, Aidan Van Dyk <[email protected]> wrote:\n> > Let me guess, debian squeeze, with data and xlog on both on a single\n> > ext3 filesystem, and the fsync done by your commit (xlog) is flushing\n> > all the dirty data of the entire filesystem (including PG data writes)\n> > out before it can return...\n>\n> This is fixed with the data=writeback mount option, right?\n> (If it's the root file system, you need to add\n> rootfsflags=data=writeback to your kernel boot flags)\n>\n> While this setting is safe and recommended for PostgreSQL and other\n> transactional databases, it can cause garbage to appear in recently\n> written files after a crash/power loss -- for applications that don't\n> correctly fsync data to disk.\n>\n> Regards,\n> Marti\n>\n>\n>\n>\n>\n\nI have put pg_xlog back to the ext3 partition, but nothing changed.I have also switched off sync_commit, but nothing. This is quite interesting...Here is a graph about the transaction time (sync_commit off, pg_xlog on separate file system): Graph\nOn the graph the red line up there is the tranaction/sec, it is about 110, and does not get lower as the transaction time gets higher.Based on this, am I right that it is not the commit, that causes these high transaction times?\nKernel version is 2.6.32.Any idea is appreciated.Thanks,Otto2011/12/8 Bob Lunney <[email protected]>\n\nOtto,Separate the pg_xlog directory onto its own filesystem and retry your tests.Bob Lunney\n \nFrom: Havasvölgyi Ottó <[email protected]> To: Marti Raudsepp <[email protected]> \nCc: Aidan Van Dyk <[email protected]>; [email protected] \nSent: Thursday, December\n 8, 2011 9:48 AM Subject: Re: [PERFORM] Response time increases over time \nI have moved the data directory (xlog, base, global, and everything) to an ext4 file system. The result hasn't changed unfortuately. 
With the same load test the average response time: 80ms; from 40ms to 120 ms everything occurs.\n\nThis ext4 has default settings in fstab.Have you got any other idea what is going on here?Thanks,Otto2011/12/8 Marti Raudsepp <[email protected]>\nOn Thu, Dec 8, 2011 at 06:37, Aidan Van Dyk <[email protected]> wrote:\n\n\n> Let me guess, debian squeeze, with data and xlog on both on a single\n> ext3 filesystem, and the fsync done by your commit (xlog) is flushing\n> all the dirty data of the entire filesystem (including PG data writes)\n> out before it can return...\n\nThis is fixed with the data=writeback mount option, right?\n(If it's the root file system, you need to add\nrootfsflags=data=writeback to your kernel boot flags)\n\nWhile this setting is safe and recommended for PostgreSQL and other\ntransactional databases, it can cause garbage to appear in recently\nwritten files after a crash/power loss -- for applications that don't\ncorrectly fsync data to disk.\n\nRegards,\nMarti",
"msg_date": "Thu, 8 Dec 2011 18:21:53 +0100",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Response time increases over time"
}
] |
[
{
"msg_contents": "Hi there,\r\n\r\n \r\n\r\n \r\n \r\n\r\nCurrently, we are running into serious performance problems with our paritioning setup, because index lookups are mostly done on allpartions, in stead of the one partition it should know that it can find the needed row.\r\n\r\n \r\n\r\n \r\n \r\n\r\nSimple example, were we have a partitioned tables named part_table. So here it goes:\r\n\r\n \r\n\r\n \r\n \r\n\r\nselect * from part_table where id = 12123231\r\n\r\n \r\n\r\n \r\n \r\n\r\nWill do an index lookup only in the partition that it knows it can find the id there. However:\r\n\r\n \r\n\r\n \r\n \r\n\r\nselect * from part_table where id = (select 12123231)\r\n\r\n \r\n\r\n \r\n \r\n\r\nWill do an index lookup in ALL partitions, meaning it is significantly slower, even more since the database will not fit into memory.\r\n\r\n \r\n\r\n \r\n \r\n\r\nSo okay, we could just not use parameterized queries... Well.. not so fast. Consider a second table referencing to the first:\r\n\r\n \r\n\r\n \r\n \r\n\r\nref_table:\r\n\r\n \r\n\r\ngroup_id bigint\r\n\r\n \r\n\r\npart_table_id bigint\r\n\r\n \r\n\r\n \r\n \r\n\r\nNow when I join the two:\r\n\r\n \r\n\r\nselect part_table.* from part_table\r\n\r\n \r\n\r\njoin ref_table on (ref_table.part_table_id = part_table.id and group_id = 12321)\r\n\r\n \r\n\r\n \r\n \r\n\r\nIt will also do index loopups on ALL partitions. \r\n\r\n \r\n\r\n \r\n \r\n\r\nHow do we handle this? Above queries are simplified versions of the things gooing on but the idea is clear. I tried dooing this in 9.1 (we are currently using 9.0), but this does not matter. So what is actually the practicial use of partitioning if you can't even use it effectively for simple joins?\r\n\r\n \r\n\r\n \r\n \r\n\r\nconstraint_exclusion is enabled correctly, and as far as I can see, this behaviour is according to the book.\r\n\r\n \r\n\r\n \r\n \r\n\r\nAre there any progresses in maybe 9.2 to make this any better? If not, how schould we handle this? We can also not choose to parition, but how will that perform on a 100 GB table?\r\n\r\n \r\n\r\n \r\n \r\n\r\nKind regards,\r\n\r\n \r\n\r\n \r\n \r\n\r\nChristiaan Willemsen\r\n\r\n \r\n\r\n \r\n \r\n\r\n \r\n \r\n\r\n \r\n \r\n\r\n \r\n \r\n\r\n \r\n\n\n\n\n\nPartitions and joins lead to index lookups on all partitions\n\n\n\nHi there, Currently, we are running into serious performance problems with our paritioning setup, because index lookups are mostly done on allpartions, in stead of the one partition it should know that it can find the needed row. Simple example, were we have a partitioned tables named part_table. So here it goes: select * from part_table where id = 12123231 Will do an index lookup only in the partition that it knows it can find the id there. However: select * from part_table where id = (select 12123231) Will do an index lookup in ALL partitions, meaning it is significantly slower, even more since the database will not fit into memory. So okay, we could just not use parameterized queries... Well.. not so fast. Consider a second table referencing to the first: ref_table: group_id bigint part_table_id bigint Now when I join the two: select part_table.* from part_table join ref_table on (ref_table.part_table_id = part_table.id and group_id = 12321) It will also do index loopups on ALL partitions. How do we handle this? Above queries are simplified versions of the things gooing on but the idea is clear. I tried dooing this in 9.1 (we are currently using 9.0), but this does not matter. 
So what is actually the practicial use of partitioning if you can't even use it effectively for simple joins? constraint_exclusion is enabled correctly, and as far as I can see, this behaviour is according to the book. Are there any progresses in maybe 9.2 to make this any better? If not, how schould we handle this? We can also not choose to parition, but how will that perform on a 100 GB table? Kind regards, Christiaan Willemsen",
"msg_date": "Wed, 7 Dec 2011 16:15:52 +0100",
"msg_from": "=?utf-8?Q?Christiaan_Willemsen?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitions and joins lead to index lookups on all partitions"
},
{
"msg_contents": "Hi,\n\nOn 8 December 2011 02:15, Christiaan Willemsen <[email protected]> wrote:\n> Currently, we are running into serious performance problems with our\n> paritioning setup, because index lookups are mostly done on allpartions, in\n> stead of the one partition it should know that it can find the needed row.\n\nPlanner is not very smart about partitions. If expression can't be\nevaluated to constant (or you use stable/volatile function) during\nplanning time then you get index/seq scan across all partitions.\n\n> Now when I join the two:\n>\n> select part_table.* from part_table\n>\n> join ref_table on (ref_table.part_table_id = part_table.id and group_id =\n> 12321)\n\nI had to add extra where conditions which help to decide the right\npartitions i.e. where part_col between X and Y. It would be quite hard\nto this in your case. You can execute another query like\n- select part_table_id from ref_table where group_id = 12321\n- or select min(part_table_id), max(part_table_id) from ref_table\nwhere group_id = 12321\nand the use in() or between X and Y in second query (so have to\nexecute two queries).\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Thu, 8 Dec 2011 08:36:40 +1100",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and joins lead to index lookups on all partitions"
},
{
"msg_contents": "Hi Ondrej,\n\nYour solution has occurred to me, and wil even work in some cases. But in\nmore advanced queries, where for example, I would need the group ID again to\ndo some window function magic, this will sadly not work, without again doing\na reverse lookup to the ref_table to find it again. This scheme might still\nbe faster though even though it would take more queries.\n\nIm now testing some of queries against a non-paritioned version of our\ndataset to see what the difference is.\n\nI'm wondering how much the insert performance wil be impacted when\nnot-paritioning our data. We do have a few indexes and constriants on these\ntables, but not whole lot. I'll so some measurements to see how this wil\nwork out.\n\nThe general dilemma would be as follows:\n\nWhat if the suggested max of 100 partions would mean that a partition table\nwill also not fit into memory efficiently, and/or that the access pattern is\nsuch that because of the query planner, it needs to work it's way though all\nthe partitions for virtually most of the serious queries being done on the\ndata set.\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Partitions-and-joins-lead-to-index-lookups-on-all-partitions-tp5055965p5058853.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 8 Dec 2011 05:57:38 -0800 (PST)",
"msg_from": "voodooless <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and joins lead to index lookups on all partitions"
},
{
"msg_contents": "Back again,\n\nI did some tests with our test machine, having a difficult query doing some\nfancy stuff ;)\n\nI made two versions, one using partitioned data, one, using unpartitioned\ndata, both having the same equivalent indexes. It's using two of those big\ntables, one 28GB data and 17GB index, one 25GB data and 41GB indexes (both\nfor the unpartitioned versions). Our test machine has 32GB of memory, short\nconfig:\n\nmaintenance_work_mem = 1GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 22GB\nwork_mem = 80MB\nwal_buffers = 8MB\ncheckpoint_segments = 16\nshared_buffers = 7680MB\nmax_connections = 400\n\nAt first I tested the query performance. It turned out that the\nunpartitioned version was about 36 times faster, of course for the obvious\nreason stated in my initial post. both are fully using the indexes they\nhave, and the partitioned version even has it's indexes on SSD.\n\nThen I did some insert tests using generate_series to insert 100000 rows\ninto one of the tables. It turns out that the unpartitioned version is again\nfaster, this time 30.9 vs 1.8 seconds. This is a huge difference. For the\nsecond table, with the huge 41GB index it's 30.5 vs 5.2 seconds, still a big\ndifference.\n\nConclusion: partitioning does not benefit us, and probably others, specially\nwhen doing lots of joins and using parameterized queries.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Partitions-and-joins-lead-to-index-lookups-on-all-partitions-tp5055965p5074907.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Wed, 14 Dec 2011 08:06:20 -0800 (PST)",
"msg_from": "voodooless <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and joins lead to index lookups on all partitions"
}
] |
[
{
"msg_contents": "Hello, I have a postgres 9.0.2 installation.\n\nEvery works fine, but in some hours of day I got several timeout in my\napplication (my application wait X seconds before throw a timeout).\n\nNormally hours are not of intensive use, so I think that the autovacuum\ncould be the problem.\n\n \n\nIs threre any log where autovacuum write information about it self like\n\"duration for each table\" or any other relevante information.\n\n \n\nAnother inline question, should I exclude bigger tables from autovacuum or\nthere are some mechanism to tell autovacuum to not run often on bigger\ntables (tables with more than 400 millions of rows)\n\n \n\nThanks!\n\n\nHello, I have a postgres 9.0.2 installation.Every works fine, but in some hours of day I got several timeout in my application (my application wait X seconds before throw a timeout).Normally hours are not of intensive use, so I think that the autovacuum could be the problem. Is threre any log where autovacuum write information about it self like “duration for each table” or any other relevante information. Another inline question, should I exclude bigger tables from autovacuum or there are some mechanism to tell autovacuum to not run often on bigger tables (tables with more than 400 millions of rows) Thanks!",
"msg_date": "Wed, 7 Dec 2011 12:34:30 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum, any log?"
},
{
"msg_contents": "On Wed, Dec 7, 2011 at 8:34 AM, Anibal David Acosta <[email protected]> wrote:\n> Hello, I have a postgres 9.0.2 installation.\n>\n> Every works fine, but in some hours of day I got several timeout in my\n> application (my application wait X seconds before throw a timeout).\n>\n> Normally hours are not of intensive use, so I think that the autovacuum\n> could be the problem.\n>\n>\n>\n> Is threre any log where autovacuum write information about it self like\n> “duration for each table” or any other relevante information.\n>\n>\n>\n> Another inline question, should I exclude bigger tables from autovacuum or\n> there are some mechanism to tell autovacuum to not run often on bigger\n> tables (tables with more than 400 millions of rows)\n\nMore often than not not the problem will be checkpoint segments not\nautovacuum. log vacuum and checkpoints, and then run something like\niostat in the background and keep an eye on %util to see if one or the\nother is slamming your IO subsystem. Default tuning for autovac is\npretty conservative, to the point that it won't usually hurt your IO,\nbut may not keep up with vaccuming, leading to table bloating.\n",
"msg_date": "Wed, 7 Dec 2011 13:19:29 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum, any log?"
}
] |
[
{
"msg_contents": "Well thought it was maybe just going from 8.4.4 to 9.1.1 so upgraded\nto 8.4.9 and tried pg_upgrade again (this is 64bit) and it's failing\n\n-bash-4.0$ /tmp/pg_upgrade --check --old-datadir \"/data/db\"\n--new-datadir \"/data1/db\" --old-bindir \"/ipix/pgsql/bin\" --new-bindir\n\"/ipix/pgsql9/bin\"\nPerforming Consistency Checks\n-----------------------------\nChecking current, bin, and data directories ok\nChecking cluster versions ok\nChecking database user is a superuser ok\nChecking for prepared transactions ok\nChecking for reg* system oid user data types ok\nChecking for contrib/isn with bigint-passing mismatch ok\nChecking for large objects ok\n\nThere were problems executing \"/ipix/pgsql/bin/pg_ctl\" -w -l\n\"/dev/null\" -D \"/data/db\" stop >> \"/dev/null\" 2>&1\nFailure, exiting\n\n\nI've read some re pg_migrator and issues with contribs, but wondered\nif there is something \"Else\" I need to know here\n\nThanks again\nTory\n",
"msg_date": "Wed, 7 Dec 2011 15:53:06 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade failure \"contrib\" issue?"
},
{
"msg_contents": "On Wed, Dec 7, 2011 at 6:53 PM, Tory M Blue <[email protected]> wrote:\n> Well thought it was maybe just going from 8.4.4 to 9.1.1 so upgraded\n> to 8.4.9 and tried pg_upgrade again (this is 64bit) and it's failing\n>\n> -bash-4.0$ /tmp/pg_upgrade --check --old-datadir \"/data/db\"\n> --new-datadir \"/data1/db\" --old-bindir \"/ipix/pgsql/bin\" --new-bindir\n> \"/ipix/pgsql9/bin\"\n> Performing Consistency Checks\n> -----------------------------\n> Checking current, bin, and data directories ok\n> Checking cluster versions ok\n> Checking database user is a superuser ok\n> Checking for prepared transactions ok\n> Checking for reg* system oid user data types ok\n> Checking for contrib/isn with bigint-passing mismatch ok\n> Checking for large objects ok\n>\n> There were problems executing \"/ipix/pgsql/bin/pg_ctl\" -w -l\n> \"/dev/null\" -D \"/data/db\" stop >> \"/dev/null\" 2>&1\n> Failure, exiting\n>\n>\n> I've read some re pg_migrator and issues with contribs, but wondered\n> if there is something \"Else\" I need to know here\n\nI'm not sure that this is on-topic for pgsql-performance, and my reply\nhere is horribly behind-the-times anyway, but my experience with\npg_upgrade is that it's entirely willing to send all the critically\nimportant information you need to solve the problem to the bit bucket,\nas in your example. If you knew WHY it was having trouble running\npg_ctl, you would probably be able to fix it easily, but since\neverything's been redirected to /dev/null, you can't. I believe that\nthis gets considerably better if you run pg_upgrade with the \"-l\nlogfile\" option, and then check the log file.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 10 Jan 2012 16:21:26 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade failure \"contrib\" issue?"
}
] |
[
{
"msg_contents": "Hello All.\nWe recently upgrade our server from PG8.2 to 8.4.\nOur current version is:\ndatabase=> SELECT version();\n \nversion \n--------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.8 on amd64-portbld-freebsd8.2, compiled by GCC cc (GCC) \n4.2.2 20070831 prerelease [FreeBSD], 64-bit\n(1 row)\n \nI read and setup most of the tuning advices.\nSince the upgrade we have some very complicated reports that start \nworking too slow. Form tens of seconds to 1 hours.\nI execute vacuum analyze before start the query.\n\nHere I will post explain analyze. If you think it is necessary I will \npost the exact query:\nhttp://explain.depesz.com/s/J0O\n\nI think the planner didn't choose the best plan. I will try to I rewrite \nthe query and set join_collapse_limit to 1 and see what will happen. \nMeanwhile any suggestions are welcome.\n\nBest regards and thanks in advance for the help.\n Kaloyan Iliev\n",
"msg_date": "Thu, 08 Dec 2011 19:29:24 +0200",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query after upgrade from 8.2 to 8.4"
},
{
"msg_contents": "On 12/08/2011 11:29 AM, Kaloyan Iliev Iliev wrote:\n\n> I think the planner didn't choose the best plan. I will try to I rewrite\n> the query and set join_collapse_limit to 1 and see what will happen.\n> Meanwhile any suggestions are welcome.\n\nTry rewriting the query, definitely. But make sure your statistical \ntargets are high enough for an analyze to make a difference. I see way \ntoo many nested loops with wrong row estimates.\n\nLike these:\n\nNested Loop (cost=0.00..8675.62 rows=2263 width=4) (actual \ntime=0.456..5991.749 rows=68752 loops=167)\nJoin Filter: (dd.debtid = ad.debtid)\n\nNested Loop (cost=0.00..7864.54 rows=1160 width=4) (actual \ntime=0.374..2781.762 rows=34384 loops=167)\n\nIndex Scan using config_office_idx on config cf (cost=0.00..7762.56 \nrows=50 width=8) (actual time=0.199..1623.366 rows=2460 loops=167)\nIndex Cond: (office = 6)\nFilter: (id = (SubPlan 6))\n\nThere are several spots where the row estimates are off by one or two \norders of magnitude. Instead of doing a sequence scan for such large \ntables, it's nest looping over an index scan, sometimes millions of times.\n\nAnd then you have these:\n\nIndex Scan using config_confid_idx on config (cost=0.00..0.66 rows=6 \nwidth=12) (actual time=0.023..0.094 rows=10 loops=1655853)\nIndex Cond: (confid = $3)\n\nIndex Scan using debts_desc_refid_idx on debts_desc dd (cost=0.00..1.66 \nrows=30 width=8) (actual time=0.061..0.381 rows=14 loops=410867)\nIndex Cond: (dd.refid = cf.confid)\n\nIndex Scan using acc_debts_debtid_idx on acc_debts ad (cost=0.00..0.39 \nrows=2 width=8) (actual time=0.034..0.053 rows=2 loops=5742191)\nIndex Cond: (ad.debtid = dd.debtid)\n\nIndex Scan using acc_debtscl_debtid_idx on acc_debts_cleared ad \n(cost=0.00..0.27 rows=1 width=8) (actual time=0.005..0.005 rows=0 \nloops=5742183)\nIndex Cond: (ad.debtid = dd.debtid)\n\nHaving index scans that big embedded in nested loops is going to murder \nyour CPU even if every table involved is cached in memory. I'm not \nsurprised this takes an hour or more to run. Increase the statistics on \nthese tables, and pay special attention to the debtid and refid columns, \nand then analyze them again.\n\nWhat's your default_statistics_target?\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 8 Dec 2011 11:52:38 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after upgrade from 8.2 to 8.4"
},
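One way to act on the advice above is to raise the per-column statistics targets on the misestimated join columns and then re-analyze. A minimal sketch, using the table and column names visible in the posted plan; the target of 500 is only an illustrative value (8.4 accepts up to 10000), not a recommendation:

    -- Raise the targets for the columns the planner is misestimating,
    -- then re-analyze so the new targets actually take effect.
    ALTER TABLE debts_desc ALTER COLUMN debtid SET STATISTICS 500;
    ALTER TABLE debts_desc ALTER COLUMN refid  SET STATISTICS 500;
    ALTER TABLE acc_debts  ALTER COLUMN debtid SET STATISTICS 500;
    ANALYZE debts_desc;
    ANALYZE acc_debts;

Alternatively, raising default_statistics_target in postgresql.conf and re-analyzing affects every column, at the cost of larger pg_statistic entries and slower ANALYZE runs.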
{
"msg_contents": "Kaloyan Iliev Iliev <[email protected]> writes:\n> We recently upgrade our server from PG8.2 to 8.4.\n> ...\n> Here I will post explain analyze. If you think it is necessary I will \n> post the exact query:\n> http://explain.depesz.com/s/J0O\n\nYeah, you need to show the query. It looks like the performance problem\nis stemming from a lot of subselects, but it's not clear why 8.4 would\nbe handling those worse than 8.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Dec 2011 22:28:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after upgrade from 8.2 to 8.4 "
},
{
"msg_contents": "\n\n\n\n\nHi,\r\nActually I think the problem is with this sub query:\r\nexplain analyze select 1\r\n from acc_clients AC,\r\n acc_debts AD,\r\n debts_desc DD,\r\n config CF\r\n where AC.ino = 1200000 AND\n\r\n CF.id = (select id\r\n from config\r\n where\r\nconfid=CF.confid ORDER BY archived_at DESC LIMIT 1) AND\r\n AD.transact_no =\r\nAC.transact_no AND\r\n AD.debtid = DD.debtid AND\r\n CF.office = 18 AND\r\n DD.refid = CF.confid LIMIT\r\n1;\n\r\nInstead of starting from 'AC.ino = 1200000' and limit the rows IT\r\nstart with 'CF.office = 18' which returns much more rows:\r\nSO: This is the query plan of the upper query.\n\nhttp://explain.depesz.com/s/ATN\n\n\r\nIf I remove the condition 'CF.office = 18' the\r\nplanner chose the correct plan and result is fast.\nexplain analyze select 1\r\n from acc_clients AC,\r\n acc_debts AD,\r\n debts_desc DD,\r\n config CF\r\n where AC.ino = 1200000 AND\n\r\n CF.id = (select id\r\n from config\r\n where\r\nconfid=CF.confid ORDER BY archived_at DESC LIMIT 1) AND\r\n AD.transact_no =\r\nAC.transact_no AND\r\n AD.debtid = DD.debtid AND\r\n DD.refid = CF.confid LIMIT\r\n1;\n\nhttp://explain.depesz.com/s/4zb\n\r\nI want this plan and this query but with the additional condition 'CF.office\r\n= 18'.\r\nHow could I force the planner to use this plan and just filter the\r\nresult.\n\r\nBest regards,\r\n Kaloyan Iliev\n\n\r\nTom Lane wrote:\r\n\nKaloyan Iliev Iliev <[email protected]> writes:\r\n \n\nWe recently upgrade our server from PG8.2 to 8.4.\r\n...\r\nHere I will post explain analyze. If you think it is necessary I will \r\npost the exact query:\r\nhttp://explain.depesz.com/s/J0O\r\n \n\n\r\nYeah, you need to show the query. It looks like the performance problem\r\nis stemming from a lot of subselects, but it's not clear why 8.4 would\r\nbe handling those worse than 8.2.\r\n\r\n\t\t\tregards, tom lane\r\n\r\n \n\n\n\n",
"msg_date": "Fri, 09 Dec 2011 14:23:35 +0200",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query after upgrade from 8.2 to 8.4"
},
{
"msg_contents": "Kaloyan Iliev Iliev <[email protected]> writes:\n> <!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n> <html>\n> <head>\n> <meta content=\"text/html;charset=windows-1251\"\n> http-equiv=\"Content-Type\">\n> </head>\n> <body bgcolor=\"#ffffff\" text=\"#000000\">\n> <tt>Hi,<br>\n> Actually I think the problem is with this sub query:<br>\n> explain analyze select 1<br>\n> from acc_clients AC,<br>\n> acc_debts AD,<br>\n> debts_desc DD,<br>\n> config CF<br>\n> where AC.ino = 1200000 AND<br>\n> <br>\n> CF.id = (select id<br>\n> from config<br>\n> where\n> confid=CF.confid ORDER BY archived_at DESC LIMIT 1) AND<br>\n> AD.transact_no =\n> AC.transact_no AND<br>\n> AD.debtid = DD.debtid AND<br>\n> CF.office = 18 AND<br>\n> DD.refid = CF.confid LIMIT\n> 1;</tt><br>\n> <br>\n> Instead of starting from '<tt>AC.ino = 1200000' and limit the rows IT\n> start with '</tt><tt>CF.office = 18' which returns much more rows:<br>\n\nPlease don't post HTML mail.\n\nI think the real issue is that you've got an astonishingly expensive\napproach to keeping obsolete \"config\" rows around. You should get rid\nof that \"ORDER BY archived_at\" sub-select, either by not storing\nobsolete rows at all (you could move them to a history table instead),\nor by marking valid rows with a boolean flag.\n\nHowever, it's not apparent to me why you would see any difference\nbetween 8.2 and 8.4 on this type of query. I tried a query analogous\nto this one on both, and got identical plans. I'm guessing that your\nslowdown is due to not having updated statistics on the new\ninstallation, or perhaps failing to duplicate some relevant settings.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Dec 2011 10:30:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after upgrade from 8.2 to 8.4 "
},
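A rough sketch of the "mark the valid rows with a boolean flag" idea suggested above, using hypothetical column and index names; the backfill assumes archived_at identifies the newest row per confid, just as the original sub-select does:

    -- Flag column plus a one-time backfill of the newest row per confid.
    ALTER TABLE config ADD COLUMN is_current boolean NOT NULL DEFAULT false;

    UPDATE config c
       SET is_current = true
     WHERE c.id = (SELECT id
                     FROM config c2
                    WHERE c2.confid = c.confid
                    ORDER BY archived_at DESC
                    LIMIT 1);

    -- A partial unique index both enforces "one current row per confid"
    -- and keeps the lookup cheap.
    CREATE UNIQUE INDEX config_current_confid_idx
        ON config (confid) WHERE is_current;

The expensive correlated sub-select in the query then collapses to a plain "CF.is_current" condition; the application (or a trigger) has to clear the flag on the old row whenever a new config version is inserted.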
{
"msg_contents": "On 10/12/11 04:30, Tom Lane wrote:\n> However, it's not apparent to me why you would see any difference \n> between 8.2 and 8.4 on this type of query. I tried a query analogous \n> to this one on both, and got identical plans. I'm guessing that your \n> slowdown is due to not having updated statistics on the new \n> installation, or perhaps failing to duplicate some relevant settings.\n\nI notice he has 8.4.*8*... I wonder if he's running into the poor \nestimation bug for sub-selects/semi joins that was fixed in 8.4.9.\n\nKaloyan, can you try the query in 8.4.9?\n\nregards\n\nMark\n",
"msg_date": "Wed, 14 Dec 2011 11:13:03 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after upgrade from 8.2 to 8.4"
},
{
"msg_contents": "Hi,\nThanks for Replay. Actually I finally find a solution. If I rewrite the \nquery in this way:\nexplain analyze select 1\n from acc_clients AC,\n acc_debts AD,\n debts_desc DD,\n config CF\n where AC.ino = 204627 AND\n CF.id = (select id\n from config\n where \nconfid=CF.confid AND office = 18 ORDER BY archived_at DESC LIMIT 1) AND\n AD.transact_no = \nAC.transact_no AND\n AD.debtid = DD.debtid AND\n DD.refid = CF.confid LIMIT 1;\n\nthe plan and execution time really approves.\nhttp://explain.depesz.com/s/Nkj\n\nAnd for comparison I will repost the old way the query was written.\nexplain analyze select 1\n from acc_clients AC,\n acc_debts AD,\n debts_desc DD,\n config CF\n where AC.ino = 1200000 AND\n CF.id = (select id\n from config\n where \nconfid=CF.confid ORDER BY archived_at DESC LIMIT 1) AND\n AD.transact_no = \nAC.transact_no AND\n AD.debtid = DD.debtid AND\n CF.office = 18 AND\n DD.refid = CF.confid LIMIT 1;\n\nThis is the query plan of the upper query.\nhttp://explain.depesz.com/s/ATN\n\nWhen we have 8.4.9 installed I will try the query and post the result.\n\nBest regards,\n Kaloyan Iliev\n\n\nMark Kirkwood wrote:\n> On 10/12/11 04:30, Tom Lane wrote:\n>> However, it's not apparent to me why you would see any difference \n>> between 8.2 and 8.4 on this type of query. I tried a query analogous \n>> to this one on both, and got identical plans. I'm guessing that your \n>> slowdown is due to not having updated statistics on the new \n>> installation, or perhaps failing to duplicate some relevant settings.\n>\n> I notice he has 8.4.*8*... I wonder if he's running into the poor \n> estimation bug for sub-selects/semi joins that was fixed in 8.4.9.\n>\n> Kaloyan, can you try the query in 8.4.9?\n>\n> regards\n>\n> Mark\n>\n",
"msg_date": "Wed, 14 Dec 2011 19:48:17 +0200",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query after upgrade from 8.2 to 8.4"
}
] |
[
{
"msg_contents": "Hi all,\n\nI'm trying to figure out some common slow queries running on the server, by\nanalyzing the slow queries log.\n\nI found debug_print_parse, debug_print_rewritten, debug_print_plan, which\nare too much verbose and logs all queries.\n\nI was thinking in something like a simple explain analyze just for queries\nlogged with log_min_duration_statement with the query too.\n\nIs there a way to configure PostgreSQL to get this kind of information,\nmaybe I'm missing something? Is it too hard to hack into sources and do it\nby hand? I never touched PostgreSQL sources.\n\nI'm thinking to write a paper that needs this information for my\npostgraduate course. The focus of my work will be the log data, not\nPostgreSQL itself. If I succeed, maybe it can be a tool to help all of us.\n\nThank you,\n-- \nDaniel Cristian Cruz\nクルズ クリスチアン ダニエル\n\nHi all,I'm trying to figure out some common slow queries running on the server, by analyzing the slow queries log.I found debug_print_parse, debug_print_rewritten, debug_print_plan, which are too much verbose and logs all queries.\nI was thinking in something like a simple explain analyze just for queries logged with log_min_duration_statement with the query too.Is there a way to configure PostgreSQL to get this kind of information, maybe I'm missing something? Is it too hard to hack into sources and do it by hand? I never touched PostgreSQL sources.\nI'm thinking to write a paper that needs this information for my postgraduate course. The focus of my work will be the log data, not PostgreSQL itself. If I succeed, maybe it can be a tool to help all of us.\nThank you,-- Daniel Cristian Cruzクルズ クリスチアン ダニエル",
"msg_date": "Sat, 10 Dec 2011 14:52:09 -0200",
"msg_from": "Daniel Cristian Cruz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Common slow query reasons - help with a special log"
},
{
"msg_contents": "Daniel Cristian Cruz <[email protected]> wrote:\n\n> Hi all,\n> \n> I'm trying to figure out some common slow queries running on the server, by\n> analyzing the slow queries log.\n> \n> I found�debug_print_parse,�debug_print_rewritten,�debug_print_plan, which are\n> too much verbose and logs all queries.\n> \n> I was thinking in something like a simple explain analyze just for queries\n> logged with�log_min_duration_statement with the query too.\n> \n> Is there a way to configure PostgreSQL to get this kind of information, maybe\n> I'm missing something? Is it too hard to hack into sources and do it by hand? I\n> never touched PostgreSQL sources.\n\nConsider auto_explain, it's a contrib-modul, see\nhttp://www.postgresql.org/docs/9.1/interactive/auto-explain.html\n\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Sat, 10 Dec 2011 18:25:43 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Common slow query reasons - help with a special log"
},
{
"msg_contents": "There's auto_explain contrib module that does exactly what you're asking\nfor. Anyway, explain analyze is quite expensive - think twice before\nenabling that on production server where you already have performance\nissues.\n\nTomas\n\nOn 10.12.2011 17:52, Daniel Cristian Cruz wrote:\n> Hi all,\n> \n> I'm trying to figure out some common slow queries running on the server,\n> by analyzing the slow queries log.\n> \n> I found debug_print_parse, debug_print_rewritten, debug_print_plan,\n> which are too much verbose and logs all queries.\n> \n> I was thinking in something like a simple explain analyze just for\n> queries logged with log_min_duration_statement with the query too.\n> \n> Is there a way to configure PostgreSQL to get this kind of information,\n> maybe I'm missing something? Is it too hard to hack into sources and do\n> it by hand? I never touched PostgreSQL sources.\n> \n> I'm thinking to write a paper that needs this information for my\n> postgraduate course. The focus of my work will be the log data, not\n> PostgreSQL itself. If I succeed, maybe it can be a tool to help all of us.\n> \n> Thank you,\n> -- \n> Daniel Cristian Cruz\n> クルズ クリスチアン ダニエル\n\n",
"msg_date": "Sat, 10 Dec 2011 18:25:52 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Common slow query reasons - help with a special log"
},
{
"msg_contents": "At work we have a 24 cores server, with a load average around 2.5.\n\nI don't know yet if a system which use some unused CPU to minimize the load\nof a bad query identified early is bad or worse.\n\nIndeed, I don't know if my boss would let me test this at production too,\nbut it could be good to know how things work in \"auto-pilot\" mode.\n\n2011/12/10 Tomas Vondra <[email protected]>\n\n> There's auto_explain contrib module that does exactly what you're asking\n> for. Anyway, explain analyze is quite expensive - think twice before\n> enabling that on production server where you already have performance\n> issues.\n>\n> Tomas\n>\n> On 10.12.2011 17:52, Daniel Cristian Cruz wrote:\n> > Hi all,\n> >\n> > I'm trying to figure out some common slow queries running on the server,\n> > by analyzing the slow queries log.\n> >\n> > I found debug_print_parse, debug_print_rewritten, debug_print_plan,\n> > which are too much verbose and logs all queries.\n> >\n> > I was thinking in something like a simple explain analyze just for\n> > queries logged with log_min_duration_statement with the query too.\n> >\n> > Is there a way to configure PostgreSQL to get this kind of information,\n> > maybe I'm missing something? Is it too hard to hack into sources and do\n> > it by hand? I never touched PostgreSQL sources.\n> >\n> > I'm thinking to write a paper that needs this information for my\n> > postgraduate course. The focus of my work will be the log data, not\n> > PostgreSQL itself. If I succeed, maybe it can be a tool to help all of\n> us.\n> >\n> > Thank you,\n> > --\n> > Daniel Cristian Cruz\n> > クルズ クリスチアン ダニエル\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nDaniel Cristian Cruz\nクルズ クリスチアン ダニエル\n\nAt work we have a 24 cores server, with a load average around 2.5.I don't know yet if a system which use some unused CPU to minimize the load of a bad query identified early is bad or worse.\nIndeed, I don't know if my boss would let me test this at production too, but it could be good to know how things work in \"auto-pilot\" mode.2011/12/10 Tomas Vondra <[email protected]>\nThere's auto_explain contrib module that does exactly what you're asking\nfor. Anyway, explain analyze is quite expensive - think twice before\nenabling that on production server where you already have performance\nissues.\n\nTomas\n\nOn 10.12.2011 17:52, Daniel Cristian Cruz wrote:\n> Hi all,\n>\n> I'm trying to figure out some common slow queries running on the server,\n> by analyzing the slow queries log.\n>\n> I found debug_print_parse, debug_print_rewritten, debug_print_plan,\n> which are too much verbose and logs all queries.\n>\n> I was thinking in something like a simple explain analyze just for\n> queries logged with log_min_duration_statement with the query too.\n>\n> Is there a way to configure PostgreSQL to get this kind of information,\n> maybe I'm missing something? Is it too hard to hack into sources and do\n> it by hand? I never touched PostgreSQL sources.\n>\n> I'm thinking to write a paper that needs this information for my\n> postgraduate course. The focus of my work will be the log data, not\n> PostgreSQL itself. 
If I succeed, maybe it can be a tool to help all of us.\n>\n> Thank you,\n> --\n> Daniel Cristian Cruz\n> クルズ クリスチアン ダニエル\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Daniel Cristian Cruzクルズ クリスチアン ダニエル",
"msg_date": "Sat, 10 Dec 2011 20:40:23 -0200",
"msg_from": "Daniel Cristian Cruz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Common slow query reasons - help with a special log"
},
{
"msg_contents": "On 10.12.2011 23:40, Daniel Cristian Cruz wrote:\n> At work we have a 24 cores server, with a load average around 2.5.\n\nA single query is processes by a single CPU, so even if the system is\nnot busy a single query may hit CPU bottleneck. The real issue is the\ninstrumentation overhead - timing etc. On some systems (with slow\ngettimeofday) this may be a significant problem as the query hits the\nCPU boundary sooner.\n\n> I don't know yet if a system which use some unused CPU to minimize the\n> load of a bad query identified early is bad or worse.\n\nNot really, due to the \"single query / single CPU\" rule.\n\n> Indeed, I don't know if my boss would let me test this at production\n> too, but it could be good to know how things work in \"auto-pilot\" mode.\n\nWhat I was pointing out is that you probably should not enable loggin\n\"explain analyze\" output by \"auto_explain.log_analyze = true\". There are\nthree levels of detail:\n\n1) basic, just log_min_duration_statement\n\n2) auto_explain, without 'analyze' - just explain plain\n\n3) auto_explain, with 'analyze' - explain plan with actual values\n\nLevels (1) and (2) are quite safe (unless the minimum execution time is\ntoo low).\n\nTomas\n\n> \n> 2011/12/10 Tomas Vondra <[email protected] <mailto:[email protected]>>\n> \n> There's auto_explain contrib module that does exactly what you're asking\n> for. Anyway, explain analyze is quite expensive - think twice before\n> enabling that on production server where you already have performance\n> issues.\n> \n> Tomas\n> \n> On 10.12.2011 17:52, Daniel Cristian Cruz wrote:\n> > Hi all,\n> >\n> > I'm trying to figure out some common slow queries running on the\n> server,\n> > by analyzing the slow queries log.\n> >\n> > I found debug_print_parse, debug_print_rewritten, debug_print_plan,\n> > which are too much verbose and logs all queries.\n> >\n> > I was thinking in something like a simple explain analyze just for\n> > queries logged with log_min_duration_statement with the query too.\n> >\n> > Is there a way to configure PostgreSQL to get this kind of\n> information,\n> > maybe I'm missing something? Is it too hard to hack into sources\n> and do\n> > it by hand? I never touched PostgreSQL sources.\n> >\n> > I'm thinking to write a paper that needs this information for my\n> > postgraduate course. The focus of my work will be the log data, not\n> > PostgreSQL itself. If I succeed, maybe it can be a tool to help\n> all of us.\n> >\n> > Thank you,\n> > --\n> > Daniel Cristian Cruz\n> > クルズ クリスチアン ダニエル\n> \n> \n> --\n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> \n> -- \n> Daniel Cristian Cruz\n> クルズ クリスチアン ダニエル\n\n",
"msg_date": "Sun, 11 Dec 2011 00:50:56 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Common slow query reasons - help with a special log"
},
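For reference, a minimal sketch of what the three levels above look like in practice, run as a superuser in a single session; the 5-second threshold is only an example value. To cover every session the module can instead be preloaded via shared_preload_libraries and configured in postgresql.conf (see the auto_explain documentation for the server-wide setup):

    -- Level (1): no module needed, just log statements over the threshold.
    SET log_min_duration_statement = '5s';

    -- Levels (2) and (3): load the contrib module, then pick the detail.
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '5s';
    SET auto_explain.log_analyze = false;   -- false = plan only (level 2),
                                            -- true  = actual rows and timings
                                            --         (level 3), at a cost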
{
"msg_contents": "2011/12/10 Tomas Vondra <[email protected]>\n\n> On 10.12.2011 23:40, Daniel Cristian Cruz wrote:\n> > At work we have a 24 cores server, with a load average around 2.5.\n>\n> A single query is processes by a single CPU, so even if the system is\n> not busy a single query may hit CPU bottleneck. The real issue is the\n> instrumentation overhead - timing etc. On some systems (with slow\n> gettimeofday) this may be a significant problem as the query hits the\n> CPU boundary sooner.\n>\n\nYes, I forgot it will run on the same PID. Since analyze will cause all\nqueries to slow down, maybe the 24 cores could became overloaded.\n\n\n>\n> > I don't know yet if a system which use some unused CPU to minimize the\n> > load of a bad query identified early is bad or worse.\n>\n> Not really, due to the \"single query / single CPU\" rule.\n\n\nI guess it will be a nice tool to run in the validation server.\n\n\n> > Indeed, I don't know if my boss would let me test this at production\n> > too, but it could be good to know how things work in \"auto-pilot\" mode.\n>\n> What I was pointing out is that you probably should not enable loggin\n> \"explain analyze\" output by \"auto_explain.log_analyze = true\". There are\n> three levels of detail:\n>\n> 1) basic, just log_min_duration_statement\n>\n> 2) auto_explain, without 'analyze' - just explain plain\n>\n> 3) auto_explain, with 'analyze' - explain plan with actual values\n>\n> Levels (1) and (2) are quite safe (unless the minimum execution time is\n> too low).\n>\n\nI would start with 5 seconds.\n\nReading the manual again and I saw that enabling analyze, it analyze all\nqueries, even the ones that wasn't 5 second slower. And understood that\nthere is no way to disable for slower queries, since there is no way to\nknow it before it ends...\n\nI read Bruce blog about some features going to multi-core. Could explain\nanalyze go multi-core too?\n\nAnother thing I saw is that I almost never look at times in explain\nanalyze. I always look for rows divergence and methods used for scan and\njoins when looking for something to get better performance.\n\nI had the nasty idea of putting a // before de gettimeofday in the code for\nexplain analyze (I guess it could be very more complicated than this).\n\nSure, its ugly, but I think it could be an option for an explain analyze\n\"with no time\", and in concept, it's what I'm looking for.\n\n-- \nDaniel Cristian Cruz\nクルズ クリスチアン ダニエル\n\n2011/12/10 Tomas Vondra <[email protected]>\nOn 10.12.2011 23:40, Daniel Cristian Cruz wrote:\n> At work we have a 24 cores server, with a load average around 2.5.\n\nA single query is processes by a single CPU, so even if the system is\nnot busy a single query may hit CPU bottleneck. The real issue is the\ninstrumentation overhead - timing etc. On some systems (with slow\ngettimeofday) this may be a significant problem as the query hits the\nCPU boundary sooner.Yes, I forgot it will run on the same PID. Since analyze will cause all queries to slow down, maybe the 24 cores could became overloaded. \n\n> I don't know yet if a system which use some unused CPU to minimize the\n> load of a bad query identified early is bad or worse.\n\nNot really, due to the \"single query / single CPU\" rule.I guess it will be a nice tool to run in the validation server. 
\n\n> Indeed, I don't know if my boss would let me test this at production\n> too, but it could be good to know how things work in \"auto-pilot\" mode.\n\nWhat I was pointing out is that you probably should not enable loggin\n\"explain analyze\" output by \"auto_explain.log_analyze = true\". There are\nthree levels of detail:\n\n1) basic, just log_min_duration_statement\n\n2) auto_explain, without 'analyze' - just explain plain\n\n3) auto_explain, with 'analyze' - explain plan with actual values\n\nLevels (1) and (2) are quite safe (unless the minimum execution time is\ntoo low).I would start with 5 seconds.Reading the manual again and I saw that enabling analyze, it analyze all queries, even the ones that wasn't 5 second slower. And understood that there is no way to disable for slower queries, since there is no way to know it before it ends...\nI read Bruce blog about some features going to multi-core. Could explain analyze go multi-core too?Another thing I saw is that I almost never look at times in explain analyze. I always look for rows divergence and methods used for scan and joins when looking for something to get better performance.\nI had the nasty idea of putting a // before de gettimeofday in the code for explain analyze (I guess it could be very more complicated than this).Sure, its ugly, but I think it could be an option for an explain analyze \"with no time\", and in concept, it's what I'm looking for.\n-- Daniel Cristian Cruzクルズ クリスチアン ダニエル",
"msg_date": "Sat, 10 Dec 2011 23:27:18 -0200",
"msg_from": "Daniel Cristian Cruz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Common slow query reasons - help with a special log"
},
{
"msg_contents": "On 11.12.2011 02:27, Daniel Cristian Cruz wrote:\n> I would start with 5 seconds.\n> \n> Reading the manual again and I saw that enabling analyze, it analyze all\n> queries, even the ones that wasn't 5 second slower. And understood that\n> there is no way to disable for slower queries, since there is no way to\n> know it before it ends...\n\nYes, you can't predict how long a query will run until it actually\nfinishes, so you have to instrument all of them. Maybe this will change\nbecause of the \"faster than light\" neutrinos, but let's stick with the\ncurrent laws of physics for now.\n\n> I read Bruce blog about some features going to multi-core. Could explain\n> analyze go multi-core too?\n\nI don't think so. This is what Bruce mentioned as \"parallel execution\"\nand that's the very hard part requiring rearchitecting parts of the system.\n\n> Another thing I saw is that I almost never look at times in explain\n> analyze. I always look for rows divergence and methods used for scan and\n> joins when looking for something to get better performance.\n> \n> I had the nasty idea of putting a // before de gettimeofday in the code\n> for explain analyze (I guess it could be very more complicated than this).\n\nI was thinking about that too, but I've never done it. So I'm not sure\nwhat else is needed.\n\nTomas\n",
"msg_date": "Sun, 11 Dec 2011 04:11:22 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Common slow query reasons - help with a special log"
}
] |
[
{
"msg_contents": "I was experimenting with a few different methods of taking a line of\ntext, parsing it, into a set of fields, and then getting that info\ninto a table.\n\nThe first method involved writing a C program to parse a file, parse\nthe lines and output newly-formatted lines in a format that\npostgresql's COPY function can use.\nEnd-to-end, this takes 15 seconds for about 250MB (read 250MB, parse,\noutput new data to new file -- 4 seconds, COPY new file -- 10\nseconds).\n\nThe next approach I took was to write a C function in postgresql to\nparse a single TEXT datum into an array of C strings, and then use\nBuildTupleFromCStrings. There are 8 columns involved.\nEliding the time it takes to COPY the (raw) file into a temporary\ntable, this method took 120 seconds, give or take.\n\nThe difference was /quite/ a surprise to me. What is the probability\nthat I am doing something very, very wrong?\n\nNOTE: the code that does the parsing is actually the same,\nline-for-line, the only difference is whether the routine is called by\na postgresql function or by a C program via main, so obviously the\noverhead is elsewhere.\nNOTE #2: We are talking about approximately 2.6 million lines.\n\nI was testing:\n\n\\copy some_table from 'some_file.csv' with csv\nvs.\ninsert into some_table select (some_func(line)).* from some_temp_table;\n\nwhere some_func had been defined with (one) IN TEXT and (8) OUT params\nof varying types.\n\nPostgreSQL 9.1.1 on Linux, x86_64\n\n-- \nJon\n",
"msg_date": "Sat, 10 Dec 2011 19:27:11 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "copy vs. C function"
},
{
"msg_contents": "On 12/11/2011 09:27 AM, Jon Nelson wrote:\n> The first method involved writing a C program to parse a file, parse\n> the lines and output newly-formatted lines in a format that\n> postgresql's COPY function can use.\n> End-to-end, this takes 15 seconds for about 250MB (read 250MB, parse,\n> output new data to new file -- 4 seconds, COPY new file -- 10\n> seconds).\nWhy not `COPY tablename FROM /path/to/myfifo' ?\n\nJust connect your import program up to a named pipe (fifo) created with \n`mknod myfifo p` either by redirecting stdout or by open()ing the fifo \nfor write. Then have Pg read from the fifo. You'll save a round of disk \nwrites and reads.\n> The next approach I took was to write a C function in postgresql to\n> parse a single TEXT datum into an array of C strings, and then use\n> BuildTupleFromCStrings. There are 8 columns involved.\n> Eliding the time it takes to COPY the (raw) file into a temporary\n> table, this method took 120 seconds, give or take.\n>\n> The difference was /quite/ a surprise to me. What is the probability\n> that I am doing something very, very wrong?\nHave a look at how COPY does it within the Pg sources, see if that's any \nhelp. I don't know enough about Pg's innards to answer this one beyond \nthat suggestion, sorry.\n\n--\nCraig Ringer\n",
"msg_date": "Sun, 11 Dec 2011 10:32:50 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function"
},
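One hypothetical way to wire that up (the parser name, fifo path and table name are made up): create the fifo, point the parser's output at it, and have the server read from it directly, so the reformatted data never touches the disk.

    -- On the database host, as a superuser (server-side COPY opens the
    -- path as the postgres user):
    --   $ mkfifo /tmp/load_fifo
    --   $ ./my_parser < raw_input.log > /tmp/load_fifo &
    COPY some_table FROM '/tmp/load_fifo' WITH CSV;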
{
"msg_contents": "Start a transaction before the first insert and commit it after the last one and it will be much better, but I believe that the copy code path is optimized to perform better than any set of queries can, even in a single transaction\n\nSent from my iPhone\n\nOn Dec 10, 2011, at 5:27 PM, Jon Nelson <[email protected]> wrote:\n\n> I was experimenting with a few different methods of taking a line of\n> text, parsing it, into a set of fields, and then getting that info\n> into a table.\n> \n> The first method involved writing a C program to parse a file, parse\n> the lines and output newly-formatted lines in a format that\n> postgresql's COPY function can use.\n> End-to-end, this takes 15 seconds for about 250MB (read 250MB, parse,\n> output new data to new file -- 4 seconds, COPY new file -- 10\n> seconds).\n> \n> The next approach I took was to write a C function in postgresql to\n> parse a single TEXT datum into an array of C strings, and then use\n> BuildTupleFromCStrings. There are 8 columns involved.\n> Eliding the time it takes to COPY the (raw) file into a temporary\n> table, this method took 120 seconds, give or take.\n> \n> The difference was /quite/ a surprise to me. What is the probability\n> that I am doing something very, very wrong?\n> \n> NOTE: the code that does the parsing is actually the same,\n> line-for-line, the only difference is whether the routine is called by\n> a postgresql function or by a C program via main, so obviously the\n> overhead is elsewhere.\n> NOTE #2: We are talking about approximately 2.6 million lines.\n> \n> I was testing:\n> \n> \\copy some_table from 'some_file.csv' with csv\n> vs.\n> insert into some_table select (some_func(line)).* from some_temp_table;\n> \n> where some_func had been defined with (one) IN TEXT and (8) OUT params\n> of varying types.\n> \n> PostgreSQL 9.1.1 on Linux, x86_64\n> \n> -- \n> Jon\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Dec 2011 18:35:09 -0800",
"msg_from": "Sam Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "On Sat, Dec 10, 2011 at 8:32 PM, Craig Ringer <[email protected]> wrote:\n> On 12/11/2011 09:27 AM, Jon Nelson wrote:\n>>\n>> The first method involved writing a C program to parse a file, parse\n>> the lines and output newly-formatted lines in a format that\n>> postgresql's COPY function can use.\n>> End-to-end, this takes 15 seconds for about 250MB (read 250MB, parse,\n>> output new data to new file -- 4 seconds, COPY new file -- 10\n>> seconds).\n>\n> Why not `COPY tablename FROM /path/to/myfifo' ?\n\nIf I were to do this in any sort of production environment, that's\nexactly what I would do. I was much more concerned about the /huge/\ndifference -- 10 seconds for COPY and 120 seconds for (INSERT INTO /\nCREATE TABLE AS / whatever).\n\n>> The next approach I took was to write a C function in postgresql to\n>> parse a single TEXT datum into an array of C strings, and then use\n>> BuildTupleFromCStrings. There are 8 columns involved.\n>> Eliding the time it takes to COPY the (raw) file into a temporary\n>> table, this method took 120 seconds, give or take.\n>>\n>> The difference was /quite/ a surprise to me. What is the probability\n>> that I am doing something very, very wrong?\n>\n> Have a look at how COPY does it within the Pg sources, see if that's any\n> help. I don't know enough about Pg's innards to answer this one beyond that\n> suggestion, sorry.\n\nAck.\n\nRegarding a subsequent email, I was using full transactions.\n\n\n-- \nJon\n",
"msg_date": "Sat, 10 Dec 2011 21:08:39 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "On Sat, Dec 10, 2011 at 7:27 PM, Jon Nelson <[email protected]> wrote:\n> I was experimenting with a few different methods of taking a line of\n> text, parsing it, into a set of fields, and then getting that info\n> into a table.\n>\n> The first method involved writing a C program to parse a file, parse\n> the lines and output newly-formatted lines in a format that\n> postgresql's COPY function can use.\n> End-to-end, this takes 15 seconds for about 250MB (read 250MB, parse,\n> output new data to new file -- 4 seconds, COPY new file -- 10\n> seconds).\n>\n> The next approach I took was to write a C function in postgresql to\n> parse a single TEXT datum into an array of C strings, and then use\n> BuildTupleFromCStrings. There are 8 columns involved.\n> Eliding the time it takes to COPY the (raw) file into a temporary\n> table, this method took 120 seconds, give or take.\n>\n> The difference was /quite/ a surprise to me. What is the probability\n> that I am doing something very, very wrong?\n>\n> NOTE: the code that does the parsing is actually the same,\n> line-for-line, the only difference is whether the routine is called by\n> a postgresql function or by a C program via main, so obviously the\n> overhead is elsewhere.\n> NOTE #2: We are talking about approximately 2.6 million lines.\n\n\nLet me throw out an interesting third method I've been using to parse\ndelimited text files that might be useful in your case. This is\nuseful when parsing text that is bad csv where values are not escaped\nor there are lines, incomplete and/or missing records, or a huge\namount of columns that you want to rotate into a more normalized\nstructure based on columns position.\n\n1. Import the data into a single column (containing the entire line)\nstaging table, feeding the COPY parser a bogus delimiter\n2. 'Parse' the record with regexp_split_to_array (maybe in plpgsql function).\n3. Either loop the array (in 9.1 use FOR-IN-ARRAY construct), or, if\nyou can work it into your problem, INSERT/SELECT, expanding the array\nwith a trick like used in information_schema._pg_expandarray so you\ncan hook logic on the array (column position).\n\nmerlin\n",
"msg_date": "Mon, 12 Dec 2011 10:38:04 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function"
},
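A rough sketch of the staging-table approach described above, with invented file, table and column names, and assuming comma-separated input with three fields (text, integer, inet):

    -- 1. Stage each raw line in a single text column; E'\x01' is just a
    --    delimiter assumed never to appear in the data.
    CREATE TEMP TABLE raw_lines (line text);
    COPY raw_lines FROM '/tmp/input.txt' WITH DELIMITER E'\x01';

    -- 2./3. Split on the real separator and pick fields by position.
    CREATE TEMP TABLE parsed (col_a text, col_b integer, col_c inet);
    INSERT INTO parsed (col_a, col_b, col_c)
    SELECT f[1], f[2]::integer, f[3]::inet
      FROM (SELECT regexp_split_to_array(line, ',') AS f
              FROM raw_lines) s;

Note that text-format COPY still interprets backslashes, so truly raw input containing them may need the escapes doubled or a different staging format.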
{
"msg_contents": "On Mon, Dec 12, 2011 at 10:38 AM, Merlin Moncure <[email protected]> wrote:\n> On Sat, Dec 10, 2011 at 7:27 PM, Jon Nelson <[email protected]> wrote:\n>> I was experimenting with a few different methods of taking a line of\n>> text, parsing it, into a set of fields, and then getting that info\n>> into a table.\n>>\n>> The first method involved writing a C program to parse a file, parse\n>> the lines and output newly-formatted lines in a format that\n>> postgresql's COPY function can use.\n>> End-to-end, this takes 15 seconds for about 250MB (read 250MB, parse,\n>> output new data to new file -- 4 seconds, COPY new file -- 10\n>> seconds).\n>>\n>> The next approach I took was to write a C function in postgresql to\n>> parse a single TEXT datum into an array of C strings, and then use\n>> BuildTupleFromCStrings. There are 8 columns involved.\n>> Eliding the time it takes to COPY the (raw) file into a temporary\n>> table, this method took 120 seconds, give or take.\n>>\n>> The difference was /quite/ a surprise to me. What is the probability\n>> that I am doing something very, very wrong?\n>>\n>> NOTE: the code that does the parsing is actually the same,\n>> line-for-line, the only difference is whether the routine is called by\n>> a postgresql function or by a C program via main, so obviously the\n>> overhead is elsewhere.\n>> NOTE #2: We are talking about approximately 2.6 million lines.\n>\n>\n> Let me throw out an interesting third method I've been using to parse\n> delimited text files that might be useful in your case. This is\n> useful when parsing text that is bad csv where values are not escaped\n> or there are lines, incomplete and/or missing records, or a huge\n> amount of columns that you want to rotate into a more normalized\n> structure based on columns position.\n>\n> 1. Import the data into a single column (containing the entire line)\n> staging table, feeding the COPY parser a bogus delimiter\n> 2. 'Parse' the record with regexp_split_to_array (maybe in plpgsql function).\n> 3. Either loop the array (in 9.1 use FOR-IN-ARRAY construct), or, if\n> you can work it into your problem, INSERT/SELECT, expanding the array\n> with a trick like used in information_schema._pg_expandarray so you\n> can hook logic on the array (column position).\n\nIf you replace [2] with my C function (which can process all of the\ndata, *postgresql overhead not included*, in about 1 second) then\nthat's what I did. It returns a composite type making [3] unnecessary.\n\nI know it's not parsing, so I started a time honored debugging\napproach: I returned early.\n\nIs the function-call overhead that high? That's outrageously high.\nWhat else could it be? 
Is returning a composite type outragously\nexpensive?\nSo here is what I did: I modified the code so that it immediately returns NULL.\nResult: 2 seconds.\nExtract arguments, allocate temporary work buffer: another 0.5 seconds.\nAdd parsing: another 1.5 seconds [total: 4.1 seconds]\n\nand so on...\n\nTwo of the items require base conversion, so:\nCalling strtol (twice) and snprintf (twice) -- adds *6 seconds.\n\nand to format one of the items as an array (a strcpy and a strcat) --\nadd 1.5 seconds for a total of 11.5.\n\nThe only thing I have left are these statements:\n\nget_call_result_type\nTupleDescGetAttInMetadata\nBuildTupleFromCStrings\nHeapTupleGetDatum\nand finally PG_RETURN_DATUM\n\nIt turns out that:\nget_call_result_type adds 43 seconds [total: 54],\nTupleDescGetAttInMetadata adds 19 seconds [total: 73],\nBuildTypleFromCStrings accounts for 43 seconds [total: 116].\n\nSo those three functions account for 90% of the total time spent.\nWhat alternatives exist? Do I have to call get_call_result_type /every\ntime/ through the function?\n\n-- \nJon\n",
"msg_date": "Tue, 13 Dec 2011 08:29:52 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> The only thing I have left are these statements:\n\n> get_call_result_type\n> TupleDescGetAttInMetadata\n> BuildTupleFromCStrings\n> HeapTupleGetDatum\n> and finally PG_RETURN_DATUM\n\n> It turns out that:\n> get_call_result_type adds 43 seconds [total: 54],\n> TupleDescGetAttInMetadata adds 19 seconds [total: 73],\n> BuildTypleFromCStrings accounts for 43 seconds [total: 116].\n\n> So those three functions account for 90% of the total time spent.\n> What alternatives exist? Do I have to call get_call_result_type /every\n> time/ through the function?\n\nWell, if you're concerned about performance then I think you're going\nabout this in entirely the wrong way, because as far as I can tell from\nthis you're converting all the field values to text and back again.\nYou should be trying to keep the values in Datum format and then\ninvoking heap_form_tuple. And yeah, you probably could cache the\ntype information across calls.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Dec 2011 01:18:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function "
},
{
"msg_contents": "Hi guys,\nA nub question here since I could not figure it out on my own:\nI'm using Hamachi to connect different sites into a VPN and their address always starts with 5.*.*.* - the problem I'm facing is that I cannot make the access restricted to that particular range only.\nCurrently I got :\nhost all all 0.0.0.0/32 md5\nwhich allows all the IP's, and the try:\nhost all all 5.0.0.0/32 md5 \n\ndoes not work.\nSo what I am suppose to add in \"pg_hba.conf\" in order to achieve my restriction? Please help me,\nThank you,\nDanny\n\n\n\n________________________________\n From: Tom Lane <[email protected]>\nTo: Jon Nelson <[email protected]> \nCc: [email protected] \nSent: Wednesday, December 14, 2011 8:18 AM\nSubject: Re: [PERFORM] copy vs. C function \n \nJon Nelson <[email protected]> writes:\n> The only thing I have left are these statements:\n\n> get_call_result_type\n> TupleDescGetAttInMetadata\n> BuildTupleFromCStrings\n> HeapTupleGetDatum\n> and finally PG_RETURN_DATUM\n\n> It turns out that:\n> get_call_result_type adds 43 seconds [total: 54],\n> TupleDescGetAttInMetadata adds 19 seconds [total: 73],\n> BuildTypleFromCStrings accounts for 43 seconds [total: 116].\n\n> So those three functions account for 90% of the total time spent.\n> What alternatives exist? Do I have to call get_call_result_type /every\n> time/ through the function?\n\nWell, if you're concerned about performance then I think you're going\nabout this in entirely the wrong way, because as far as I can tell from\nthis you're converting all the field values to text and back again.\nYou should be trying to keep the values in Datum format and then\ninvoking heap_form_tuple. And yeah, you probably could cache the\ntype information across calls.\n\n regards, tom lane\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nHi guys,A nub question here since I could not figure it out on my own:I'm using Hamachi to connect different sites into a VPN and their address always starts with 5.*.*.* - the problem I'm facing is that I cannot make the access restricted to that particular range only.Currently I got :host all all 0.0.0.0/32 md5which allows all the IP's, and the try:host all all 5.0.0.0/32 md5 does not work.So what I am suppose to add in \"pg_hba.conf\" in order to achieve my restriction? Please help me,Thank you,Danny From: Tom Lane <[email protected]> To: Jon Nelson <[email protected]> Cc: [email protected] Sent: Wednesday, December 14, 2011 8:18 AM Subject: Re: [PERFORM] copy vs. C function \nJon Nelson <[email protected]> writes:> The only thing I have left are these statements:> get_call_result_type> TupleDescGetAttInMetadata> BuildTupleFromCStrings> HeapTupleGetDatum> and finally PG_RETURN_DATUM> It turns out that:> get_call_result_type adds 43 seconds [total: 54],> TupleDescGetAttInMetadata adds 19 seconds [total: 73],> BuildTypleFromCStrings accounts for 43 seconds [total: 116].> So those three functions account for 90% of the total time spent.> What alternatives exist? Do I have to call get_call_result_type /every> time/ through the function?Well, if you're concerned about performance then I think you're goingabout this in entirely the wrong way, because as far as I can tell fromthis you're converting all the field values to text and\n back again.You should be trying to keep the values in Datum format and theninvoking heap_form_tuple. 
And yeah, you probably could cache thetype information across calls. regards, tom lane-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 13 Dec 2011 23:02:10 -0800 (PST)",
"msg_from": "idc danny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function "
},
{
"msg_contents": "try\nhost all all 5.0.0.0/8 md5\n\nOn Wed, Dec 14, 2011 at 2:02 AM, idc danny <[email protected]> wrote:\n\n> Hi guys,\n> A nub question here since I could not figure it out on my own:\n> I'm using Hamachi to connect different sites into a VPN and their address\n> always starts with 5.*.*.* - the problem I'm facing is that I cannot make\n> the access restricted to that particular range only.\n> Currently I got :\n> host all all 0.0.0.0/32 md5\n> which allows all the IP's, and the try:\n> host all all 5.0.0.0/32 md5\n> does not work.\n> So what I am suppose to add in \"pg_hba.conf\" in order to achieve my\n> restriction? Please help me,\n> Thank you,\n> Danny\n>\n> ------------------------------\n> *From:* Tom Lane <[email protected]>\n> *To:* Jon Nelson <[email protected]>\n> *Cc:* [email protected]\n> *Sent:* Wednesday, December 14, 2011 8:18 AM\n> *Subject:* Re: [PERFORM] copy vs. C function\n>\n> Jon Nelson <[email protected]> writes:\n> > The only thing I have left are these statements:\n>\n> > get_call_result_type\n> > TupleDescGetAttInMetadata\n> > BuildTupleFromCStrings\n> > HeapTupleGetDatum\n> > and finally PG_RETURN_DATUM\n>\n> > It turns out that:\n> > get_call_result_type adds 43 seconds [total: 54],\n> > TupleDescGetAttInMetadata adds 19 seconds [total: 73],\n> > BuildTypleFromCStrings accounts for 43 seconds [total: 116].\n>\n> > So those three functions account for 90% of the total time spent.\n> > What alternatives exist? Do I have to call get_call_result_type /every\n> > time/ through the function?\n>\n> Well, if you're concerned about performance then I think you're going\n> about this in entirely the wrong way, because as far as I can tell from\n> this you're converting all the field values to text and back again.\n> You should be trying to keep the values in Datum format and then\n> invoking heap_form_tuple. And yeah, you probably could cache the\n> type information across calls.\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n\n\n-- \nKevin P Martyn, CISSP\nPrincipal Sales Engineer\n(914) 819 8795 mobile\[email protected]\nSkype: kevin.martyn4\n\nWebsite: www.enterprisedb.com\nEnterpriseDB Blog: http://blogs.enterprisedb.com/\nFollow us on Twitter: http://www.twitter.com/enterprisedb\n\nThis e-mail message (and any attachment) is intended for the use of the\nindividual or entity to whom it is addressed. This message contains\ninformation from EnterpriseDB Corporation that may be privileged,\nconfidential, or exempt from disclosure under applicable law. If you are\nnot the intended recipient or authorized to receive this for the intended\nrecipient, any use, dissemination, distribution, retention, archiving, or\ncopying of this communication is strictly prohibited. 
If you have received\nthis e-mail in error, please notify the sender immediately by reply e-mail\nand delete this message.\n\ntryhost all all 5.0.0.0/8 md5On Wed, Dec 14, 2011 at 2:02 AM, idc danny <[email protected]> wrote:\n\nHi guys,A nub question here since I could not figure it out on my own:I'm using Hamachi to connect different sites into a VPN and their address always starts with 5.*.*.* - the problem I'm facing is that I cannot make the access restricted to that particular range only.\nCurrently I got :host all all 0.0.0.0/32 md5which allows all the IP's, and the try:host all all 5.0.0.0/32 md5 \ndoes not work.So what I am suppose to add in \"pg_hba.conf\" in order to achieve my restriction? Please help me,Thank you,Danny\n From: Tom Lane <[email protected]>\nTo: Jon Nelson <[email protected]> Cc: [email protected] \nSent: Wednesday, December 14, 2011 8:18 AM Subject: Re: [PERFORM] copy vs. C function \nJon Nelson <[email protected]> writes:> The only thing I have left are these statements:> get_call_result_type> TupleDescGetAttInMetadata\n> BuildTupleFromCStrings> HeapTupleGetDatum> and finally PG_RETURN_DATUM> It turns out that:> get_call_result_type adds 43 seconds [total: 54],> TupleDescGetAttInMetadata adds 19 seconds [total: 73],\n> BuildTypleFromCStrings accounts for 43 seconds [total: 116].> So those three functions account for 90% of the total time spent.> What alternatives exist? Do I have to call get_call_result_type /every\n> time/ through the function?Well, if you're concerned about performance then I think you're goingabout this in entirely the wrong way, because as far as I can tell fromthis you're converting all the field values to text and\n back again.You should be trying to keep the values in Datum format and theninvoking heap_form_tuple. And yeah, you probably could cache thetype information across calls. regards, tom lane\n-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n -- Kevin P Martyn, CISSPPrincipal Sales Engineer(914) 819 8795 [email protected]: kevin.martyn4\nWebsite: www.enterprisedb.comEnterpriseDB Blog: http://blogs.enterprisedb.com/\n\nFollow us on Twitter: http://www.twitter.com/enterprisedbThis\n e-mail message (and any attachment) is intended for the use of the \nindividual or entity to whom it is addressed. This message contains \ninformation from EnterpriseDB Corporation that may be privileged, \nconfidential, or exempt from disclosure under applicable law. If you are\n not the intended recipient or authorized to receive this for the \nintended recipient, any use, dissemination, distribution, retention, \narchiving, or copying of this communication is strictly prohibited. If \nyou have received this e-mail in error, please notify the sender \nimmediately by reply e-mail and delete this message.",
"msg_date": "Wed, 14 Dec 2011 08:14:12 -0500",
"msg_from": "Kevin Martyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "On Wed, Dec 14, 2011 at 12:18 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> The only thing I have left are these statements:\n>\n>> get_call_result_type\n>> TupleDescGetAttInMetadata\n>> BuildTupleFromCStrings\n>> HeapTupleGetDatum\n>> and finally PG_RETURN_DATUM\n>\n>> It turns out that:\n>> get_call_result_type adds 43 seconds [total: 54],\n>> TupleDescGetAttInMetadata adds 19 seconds [total: 73],\n>> BuildTypleFromCStrings accounts for 43 seconds [total: 116].\n>\n>> So those three functions account for 90% of the total time spent.\n>> What alternatives exist? Do I have to call get_call_result_type /every\n>> time/ through the function?\n>\n> Well, if you're concerned about performance then I think you're going\n> about this in entirely the wrong way, because as far as I can tell from\n> this you're converting all the field values to text and back again.\n> You should be trying to keep the values in Datum format and then\n> invoking heap_form_tuple. And yeah, you probably could cache the\n> type information across calls.\n\nThe parsing/conversion (except BuildTupleFromCStrings) is only a small\nfraction of the overall time spent in the function and could probably\nbe made slightly faster. It's the overhead that's killing me.\n\nRemember: I'm not converting multiple field values to text and back\nagain, I'm turning a *single* TEXT into 8 columns of varying types\n(INET, INTEGER, and one INTEGER array, among others). I'll re-write\nthe code to use Tuples but given that 53% of the time is spent in just\ntwo functions (the two I'd like to cache) I'm not sure how much of a\ngain it's likely to be.\n\nRegarding caching, I tried caching it across calls by making the\nTupleDesc static and only initializing it once.\nWhen I tried that, I got:\n\nERROR: number of columns (6769856) exceeds limit (1664)\n\nI tried to find some documentation or examples that cache the\ninformation, but couldn't find any.\n\n-- \nJon\n",
"msg_date": "Wed, 14 Dec 2011 08:06:09 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "Ah, that did the trick, thank you Kevin,\nDanny\n\n\n\n________________________________\n From: Kevin Martyn <[email protected]>\nTo: idc danny <[email protected]> \nCc: \"[email protected]\" <[email protected]> \nSent: Wednesday, December 14, 2011 3:14 PM\nSubject: Re: [PERFORM] copy vs. C function\n \n\ntry\nhost all all 5.0.0.0/8 md5\n\n\nOn Wed, Dec 14, 2011 at 2:02 AM, idc danny <[email protected]> wrote:\n\nHi guys,\n>A nub question here since I could not figure it out on my own:\n>I'm using Hamachi to connect different sites into a VPN and their address always starts with 5.*.*.* - the problem I'm facing is that I cannot make the access restricted to that particular range only.\n>Currently I got :\n>host all all 0.0.0.0/32 md5\n>which allows all the IP's, and the try:\n>host all all 5.0.0.0/32 md5 \n>\n>does not work.\n>So what I am suppose to add in \"pg_hba.conf\" in order to achieve my restriction? Please help me,\n>Thank you,\n>Danny\n>\n>\n>\n>\n>________________________________\n> From: Tom Lane <[email protected]>\n>To: Jon Nelson <[email protected]> \n>Cc: [email protected] \n>Sent: Wednesday, December 14, 2011 8:18 AM\n>Subject: Re: [PERFORM] copy vs. C function \n> \n>Jon Nelson <[email protected]> writes:\n>> The only thing I have left are these statements:\n>\n>> get_call_result_type\n>> TupleDescGetAttInMetadata\n>> BuildTupleFromCStrings\n>> HeapTupleGetDatum\n>> and finally PG_RETURN_DATUM\n>\n>> It turns out that:\n>> get_call_result_type adds 43 seconds [total: 54],\n>> TupleDescGetAttInMetadata adds 19 seconds [total: 73],\n>> BuildTypleFromCStrings accounts for 43 seconds [total: 116].\n>\n>> So those three functions account for 90% of the total time spent.\n>> What alternatives exist? Do I have to call get_call_result_type /every\n>> time/ through the function?\n>\n>Well, if you're concerned about performance then I think you're going\n>about this in entirely the wrong way, because as far as I can tell from\n>this you're converting all the field values to text and\n back again.\n>You should be trying to keep the values in Datum format and then\n>invoking heap_form_tuple. And yeah, you probably could cache the\n>type information across calls.\n>\n> regards, tom lane\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n\n\n-- \nKevin P Martyn, CISSP\nPrincipal Sales Engineer\n(914) 819 8795 mobile\[email protected]\nSkype: kevin.martyn4\n\nWebsite: www.enterprisedb.com\nEnterpriseDB Blog: http://blogs.enterprisedb.com/\nFollow us on Twitter: http://www.twitter.com/enterprisedb\n\nThis\n e-mail message (and any attachment) is intended for the use of the \nindividual or entity to whom it is addressed. This message contains \ninformation from EnterpriseDB Corporation that may be privileged, \nconfidential, or exempt from disclosure under applicable law. If you are\n not the intended recipient or authorized to receive this for the \nintended recipient, any use, dissemination, distribution, retention, \narchiving, or copying of this communication is strictly prohibited. If \nyou have received this e-mail in error, please notify the sender \nimmediately by reply e-mail and delete this message. \nAh, that did the trick, thank you Kevin,Danny From: Kevin Martyn <[email protected]> To: idc danny <[email protected]> Cc: \"[email protected]\" <[email protected]> Sent: Wednesday, December 14, 2011 3:14 PM Subject: Re: [PERFORM] copy vs. 
C\n function \ntryhost all all 5.0.0.0/8 md5On Wed, Dec 14, 2011 at 2:02 AM, idc danny <[email protected]> wrote:\n\nHi guys,A nub question here since I could not figure it out on my own:I'm using Hamachi to connect different sites into a VPN and their address always starts with 5.*.*.* - the problem I'm facing is that I cannot make the access restricted to that particular range only.\nCurrently I got :host all all 0.0.0.0/32 md5which allows all the IP's, and the try:host all all 5.0.0.0/32 md5 \ndoes not work.So what I am suppose to add in \"pg_hba.conf\" in order to achieve my restriction? Please help me,Thank you,Danny\n From: Tom Lane <[email protected]>\nTo: Jon Nelson <[email protected]> Cc: [email protected] \nSent: Wednesday, December 14, 2011 8:18 AM Subject: Re: [PERFORM] copy vs. C function \nJon Nelson <[email protected]> writes:> The only thing I have left are these statements:> get_call_result_type> TupleDescGetAttInMetadata\n> BuildTupleFromCStrings> HeapTupleGetDatum> and finally PG_RETURN_DATUM> It turns out that:> get_call_result_type adds 43 seconds [total: 54],> TupleDescGetAttInMetadata adds 19 seconds [total: 73],\n> BuildTypleFromCStrings accounts for 43 seconds [total: 116].> So those three functions account for 90% of the total time spent.> What alternatives exist? Do I have to call get_call_result_type /every\n> time/ through the function?Well, if you're concerned about performance then I think you're goingabout this in entirely the wrong way, because as far as I can tell fromthis you're converting all the field values to text and\n back again.You should be trying to keep the values in Datum format and theninvoking heap_form_tuple. And yeah, you probably could cache thetype information across calls. regards, tom lane\n-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n -- Kevin P Martyn, CISSPPrincipal Sales Engineer(914) 819 8795 [email protected]: kevin.martyn4\nWebsite: www.enterprisedb.comEnterpriseDB Blog: http://blogs.enterprisedb.com/\n\nFollow us on Twitter: http://www.twitter.com/enterprisedbThis\n e-mail message (and any attachment) is intended for the use of the \nindividual or entity to whom it is addressed. This message contains \ninformation from EnterpriseDB Corporation that may be privileged, \nconfidential, or exempt from disclosure under applicable law. If you are\n not the intended recipient or authorized to receive this for the \nintended recipient, any use, dissemination, distribution, retention, \narchiving, or copying of this communication is strictly prohibited. If \nyou have received this e-mail in error, please notify the sender \nimmediately by reply e-mail and delete this message.",
"msg_date": "Wed, 14 Dec 2011 06:32:03 -0800 (PST)",
"msg_from": "idc danny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> Regarding caching, I tried caching it across calls by making the\n> TupleDesc static and only initializing it once.\n> When I tried that, I got:\n\n> ERROR: number of columns (6769856) exceeds limit (1664)\n\n> I tried to find some documentation or examples that cache the\n> information, but couldn't find any.\n\nYou might find reading record_in to be helpful. What it caches is not\nexactly what you need to, I think, but it shows the general principles.\nThere are lots of other functions that use fn_extra to cache info, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Dec 2011 10:25:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function "
},
{
"msg_contents": "On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> Regarding caching, I tried caching it across calls by making the\n>> TupleDesc static and only initializing it once.\n>> When I tried that, I got:\n>\n>> ERROR: number of columns (6769856) exceeds limit (1664)\n>\n>> I tried to find some documentation or examples that cache the\n>> information, but couldn't find any.\n>\n> You might find reading record_in to be helpful. What it caches is not\n> exactly what you need to, I think, but it shows the general principles.\n> There are lots of other functions that use fn_extra to cache info, too.\n\nI will definitely look into those. I'm probably doing it wrong, but in\nthe meantime, I allocated enough space (by way of MemoryContextAlloc)\nin TopMemoryContext for an AttInMetadata pointer, switched to that\nmemory context (just the first time through), used CreateTupleDescCopy\n+ TupleDescGetAttInMetadata to duplicate (in the new memory context)\nthe TupleDesc, and then switched back. This approach seems to have\ndropped the total run time to about 54 seconds, the bulk of which is\nBuildTupleFromCStrings, a rather significant improvement.\n\n....\n\nLooking at record_in, I think I see what I could be doing better.\n\nAgain, thanks for the pointers.\n\n\n-- \nJon\n",
"msg_date": "Wed, 14 Dec 2011 09:40:01 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "On Wed, Dec 14, 2011 at 9:40 AM, Jon Nelson <[email protected]> wrote:\n> On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane <[email protected]> wrote:\n>> Jon Nelson <[email protected]> writes:\n>>> Regarding caching, I tried caching it across calls by making the\n>>> TupleDesc static and only initializing it once.\n>>> When I tried that, I got:\n>>\n>>> ERROR: number of columns (6769856) exceeds limit (1664)\n>>\n>>> I tried to find some documentation or examples that cache the\n>>> information, but couldn't find any.\n>>\n>> You might find reading record_in to be helpful. What it caches is not\n>> exactly what you need to, I think, but it shows the general principles.\n>> There are lots of other functions that use fn_extra to cache info, too.\n>\n> I will definitely look into those. I'm probably doing it wrong, but in\n> the meantime, I allocated enough space (by way of MemoryContextAlloc)\n> in TopMemoryContext for an AttInMetadata pointer, switched to that\n> memory context (just the first time through), used CreateTupleDescCopy\n> + TupleDescGetAttInMetadata to duplicate (in the new memory context)\n> the TupleDesc, and then switched back. This approach seems to have\n> dropped the total run time to about 54 seconds, the bulk of which is\n> BuildTupleFromCStrings, a rather significant improvement.\n>\n> ....\n>\n> Looking at record_in, I think I see what I could be doing better.\n\nIndeed. I revised the code to make use of fcinfo->flinfo->fn_extra for\nstorage and fcinfo->flinfo->fn_mcxt for the MemoryContext and\neverything seemed to work just fine.\n\nAssuming one *starts* with a char *some_var[8], would building Datum\nmyself be faster than using BuildTupleFromCStrings?\n\n-- \nJon\n",
"msg_date": "Wed, 14 Dec 2011 09:51:05 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "On Wed, Dec 14, 2011 at 9:51 AM, Jon Nelson <[email protected]> wrote:\n> On Wed, Dec 14, 2011 at 9:40 AM, Jon Nelson <[email protected]> wrote:\n>> On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane <[email protected]> wrote:\n>>> Jon Nelson <[email protected]> writes:\n>>>> Regarding caching, I tried caching it across calls by making the\n>>>> TupleDesc static and only initializing it once.\n>>>> When I tried that, I got:\n>>>\n>>>> ERROR: number of columns (6769856) exceeds limit (1664)\n>>>\n>>>> I tried to find some documentation or examples that cache the\n>>>> information, but couldn't find any.\n>>>\n>>> You might find reading record_in to be helpful. What it caches is not\n>>> exactly what you need to, I think, but it shows the general principles.\n>>> There are lots of other functions that use fn_extra to cache info, too.\n>>\n>> I will definitely look into those. I'm probably doing it wrong, but in\n>> the meantime, I allocated enough space (by way of MemoryContextAlloc)\n>> in TopMemoryContext for an AttInMetadata pointer, switched to that\n>> memory context (just the first time through), used CreateTupleDescCopy\n>> + TupleDescGetAttInMetadata to duplicate (in the new memory context)\n>> the TupleDesc, and then switched back. This approach seems to have\n>> dropped the total run time to about 54 seconds, the bulk of which is\n>> BuildTupleFromCStrings, a rather significant improvement.\n>>\n>> ....\n>>\n>> Looking at record_in, I think I see what I could be doing better.\n>\n> Indeed. I revised the code to make use of fcinfo->flinfo->fn_extra for\n> storage and fcinfo->flinfo->fn_mcxt for the MemoryContext and\n> everything seemed to work just fine.\n>\n> Assuming one *starts* with a char *some_var[8], would building Datum\n> myself be faster than using BuildTupleFromCStrings?\n\nThe answer is: yes. At least, in my case it is.\nThe total run time is now down to about 32 seconds.\nVersus the BuildTupleFromCStrings which takes about 54 seconds.\n32 seconds is more than 10-15 seconds, but it's livable.\n\nThis experiment has been very worthwhile - thank you all for the help.\n\n-- \nJon\n",
"msg_date": "Wed, 14 Dec 2011 21:18:38 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: copy vs. C function"
},
{
"msg_contents": "Hi Jon\n\n \n\nThis is exactly, what I was looking for. Need to read the data from\ndelimited file with no header, and do few transformation as described below\nusing Postgres C function and load it using pg_bulkload utility.\n\n \n\nTransformation below, can be handled with query after loading all the data\nas varchar and nullable. But we need to handle this before loading as like\nwe do in Oracle. I'm converting the code from Oracle to Postgres. Both\nversion of code(Oracle & Postgres) will be available for different users.\n\n \n\nIn Oracle, doing these kind of transformation in SQL loader. Need to follow\nthe same kind of approach in Postgres. SQL filter approach was very easy in\nterms of coding. From documentation found, C filter was very much faster\nthan SQL.\n\n \n\nI'm very much new to C. Looking for your options as you mentioned here in\npost.Some standard syntax for writing these functionalities would be greatly\nhelpful. Kindly help me.\n\n \n\n \n\n Sample Data:\n\n ABC|20170101|DEF ||GHIJ|KLM\n\n \n\n Target Table Definition:\n\n COLA numeric(5,0)\n\n COLB date\n\n COLC text\n\n COLD text\n\n COLE text\n\n \n\n First column should be mapped to COLA\n\n Second column should be mapped to COLB\n\n Third column should be mapped to COLD\n\n Fourth column should be mapped to COLC\n\n Fifth column should be mapped to Some default value(column is not\npresent in source)\n\n \n\n Transformation:\n\n a)First column should be mapped to COLA. It is numeric in target table.\nIf any alpha-characters were present, default this column with '0'.\nOtherwise, source value should be moved to table.\n\n b)Second column should be mapped to COLB. TO_DATE function from text\nformat. File will have date format as YYYYMMDD. It should be converted to\ndate.\n\n c)Third column should be mapped to COLD.Need to Trim both leading and\ntrailing spaces.\n\n d)Fourth column should be mapped to COLC. If it NULL, some value should\nbe defaulted.\n\n e)Only few columns from source file should be loaded. In this case, only\nfirst four columns should be loaded.\n\n f)Different ordering in source files & target columns.In this case,\n\n Third column should be mapped to COLD\n\n Fourth column should be mapped to COLC\n\n g)COLE should be loaded with default value. This column is not present\nin source file.\n \n\nThanks\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/copy-vs-C-function-tp5065298p5936796.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 Dec 2016 04:35:52 -0700 (MST)",
"msg_from": "rajmhn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: copy vs. C function"
}
] |
[
{
"msg_contents": "I have a couple of tables with about 400millions of records increasing about\n5 millions per day.\n\n \n\nI think that disabling autovac over those tables, and enabling daily manual\nvacuum (in some idle hour) will be better.\n\n \n\nI am right?\n\n \n\nIs possible to exclude autovacuum over some tables?\n\n \n\nThanks!\n\n \n\nAnibal\n\n\nI have a couple of tables with about 400millions of records increasing about 5 millions per day. I think that disabling autovac over those tables, and enabling daily manual vacuum (in some idle hour) will be better. I am right? Is possible to exclude autovacuum over some tables? Thanks! Anibal",
"msg_date": "Mon, 12 Dec 2011 11:25:43 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "autovacuum, exclude table"
},
{
"msg_contents": "Top-posting because this is context free:\n\nYou need to provide more info for anybody to help you. Are the tables\nappend-only or are deletes/updates also performed? Also this:\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n On Dec 12, 2011 10:26 PM, \"Anibal David Acosta\" <[email protected]> wrote:\n\n> I have a couple of tables with about 400millions of records increasing\n> about 5 millions per day.****\n>\n> ** **\n>\n> I think that disabling autovac over those tables, and enabling daily\n> manual vacuum (in some idle hour) will be better.****\n>\n> ** **\n>\n> I am right?****\n>\n> ** **\n>\n> Is possible to exclude autovacuum over some tables?****\n>\n> ** **\n>\n> Thanks!****\n>\n> ** **\n>\n> Anibal****\n>\n\nTop-posting because this is context free:\nYou need to provide more info for anybody to help you. Are the tables append-only or are deletes/updates also performed? Also this:\n http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nOn Dec 12, 2011 10:26 PM, \"Anibal David Acosta\" <[email protected]> wrote:\nI have a couple of tables with about 400millions of records increasing about 5 millions per day. \nI think that disabling autovac over those tables, and enabling daily manual vacuum (in some idle hour) will be better. I am right?\n Is possible to exclude autovacuum over some tables? Thanks!\n Anibal",
"msg_date": "Mon, 12 Dec 2011 22:45:24 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum, exclude table"
},
{
"msg_contents": "The postgres version is 9.0.4 on a Windows Server 2008 (planning to upgrade to 9.1)\n\nTables has insert (in bulk every 3 minutes) and delete one per day (delete records older than XX days)\n\n \n\nThere are not much additional relevant information.\n\n \n\nThanks!\n\n \n\n \n\nDe: Craig Ringer [mailto:[email protected]] \nEnviado el: lunes, 12 de diciembre de 2011 11:45 a.m.\nPara: Anibal David Acosta\nCC: [email protected]\nAsunto: Re: [PERFORM] autovacuum, exclude table\n\n \n\nTop-posting because this is context free:\n\nYou need to provide more info for anybody to help you. Are the tables append-only or are deletes/updates also performed? Also this:\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nOn Dec 12, 2011 10:26 PM, \"Anibal David Acosta\" <[email protected]> wrote:\n\nI have a couple of tables with about 400millions of records increasing about 5 millions per day.\n\n \n\nI think that disabling autovac over those tables, and enabling daily manual vacuum (in some idle hour) will be better.\n\n \n\nI am right?\n\n \n\nIs possible to exclude autovacuum over some tables?\n\n \n\nThanks!\n\n \n\nAnibal\n\n\nThe postgres version is 9.0.4 on a Windows Server 2008 (planning to upgrade to 9.1)Tables has insert (in bulk every 3 minutes) and delete one per day (delete records older than XX days) There are not much additional relevant information. Thanks! De: Craig Ringer [mailto:[email protected]] Enviado el: lunes, 12 de diciembre de 2011 11:45 a.m.Para: Anibal David AcostaCC: [email protected]: Re: [PERFORM] autovacuum, exclude table Top-posting because this is context free:You need to provide more info for anybody to help you. Are the tables append-only or are deletes/updates also performed? Also this:http://wiki.postgresql.org/wiki/Guide_to_reporting_problemsOn Dec 12, 2011 10:26 PM, \"Anibal David Acosta\" <[email protected]> wrote:I have a couple of tables with about 400millions of records increasing about 5 millions per day. I think that disabling autovac over those tables, and enabling daily manual vacuum (in some idle hour) will be better. I am right? Is possible to exclude autovacuum over some tables? Thanks! Anibal",
"msg_date": "Mon, 12 Dec 2011 11:55:00 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum, exclude table"
},
{
"msg_contents": "On 12.12.2011 16:25, Anibal David Acosta wrote:\n> I have a couple of tables with about 400millions of records increasing about\n> 5 millions per day.\n>\n> I think that disabling autovac over those tables, and enabling daily manual\n> vacuum (in some idle hour) will be better.\n>\n> I am right?\n\nPossibly. If the system is otherwise idle, it sounds sensible to do \nroutine maintenance at that time.\n\n> Is possible to exclude autovacuum over some tables?\n\nSure, see \nhttp://www.postgresql.org/docs/9.1/static/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS\n\nALTER TABLE foo SET (autovacuum_enabled=false, toast.autovacuum_enabled \n= false);\n\nIt might be better, though, to let autovacuum enabled, and just do the \nadditional manual VACUUM in the idle period. If the daily manual VACUUM \nis enough to keep the bloat within the autovacuum thresholds, autovacuum \nwill never kick in. If it's not enough, then you probably want \nautovacuum to run.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Mon, 12 Dec 2011 17:16:08 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum, exclude table"
},
{
"msg_contents": "\"Anibal David Acosta\" <[email protected]> writes:\n> Tables has insert (in bulk every 3 minutes) and delete one per day (delete records older than XX days)\n\nNo updates at all, just inserts and a daily delete?\n\nIf so, you're wasting your time even thinking about suppressing\nautovacuum, because it won't fire on this table except after the daily\ndelete, which is exactly when you need it to.\n\nAlso, if you suppress autovacuum you also suppress autoanalyze,\nwhich is something that *will* fire after large inserts, and probably\nshould. At least, this usage pattern doesn't suggest to me that it's\nclearly safe to run without up-to-date stats.\n\nRight offhand, I'm not convinced either that you have a problem, or that\nturning off autovacuum would fix it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Dec 2011 10:26:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum, exclude table "
}
] |
[
{
"msg_contents": "Hello, I wanted to ask according such a problem which we had faced with.\nWe are widely using postgres arrays like key->value array by doing like \nthis:\n\n{{1,5},{2,6},{3,7}}\n\nwhere 1,2,3 are keys, and 5,6,7 are values. In our pgSql functions we \nare using self written array_input(array::numeric[], key::numeric) \nfunction which makes a loop on whole array and searches for key like\nFOR i IN 1 .. size LOOP\n if array[i][1] = key then\n return array[i][2];\n end if;\nEND LOOP;\n\nBut this was a good solution until our arrays and database had grown. So \nnow FOR loop takes a lot of time to find value of an array.\n\nAnd my question is, how this problem of performance could be solved? We \nhad tried pgperl for string parsing, but it takes much more time than \nour current solution. Also we are thinking about self-written C++ \nfunction, may be someone had implemented this algorithm before?\n\n-- \nBest regards\n\nAleksej Trofimov\n\n",
"msg_date": "Tue, 13 Dec 2011 15:55:02 +0200",
"msg_from": "Aleksej Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres array parser"
},
{
"msg_contents": "Hello\n\ndo you know FOREACH IN ARRAY statement in 9.1\n\nthis significantly accelerate iteration over array\n\nhttp://www.depesz.com/index.php/2011/03/07/waiting-for-9-1-foreach-in-array/\n\n\n\n2011/12/13 Aleksej Trofimov <[email protected]>:\n> Hello, I wanted to ask according such a problem which we had faced with.\n> We are widely using postgres arrays like key->value array by doing like\n> this:\n>\n> {{1,5},{2,6},{3,7}}\n>\n> where 1,2,3 are keys, and 5,6,7 are values. In our pgSql functions we are\n> using self written array_input(array::numeric[], key::numeric) function\n> which makes a loop on whole array and searches for key like\n> FOR i IN 1 .. size LOOP\n> if array[i][1] = key then\n> return array[i][2];\n> end if;\n> END LOOP;\n>\n> But this was a good solution until our arrays and database had grown. So now\n> FOR loop takes a lot of time to find value of an array.\n>\n> And my question is, how this problem of performance could be solved? We had\n> tried pgperl for string parsing, but it takes much more time than our\n> current solution. Also we are thinking about self-written C++ function, may\n> be someone had implemented this algorithm before?\n>\n\nyou can use indexes or you can use hstore\n\nRegards\n\nPavel Stehule\n\n> --\n> Best regards\n>\n> Aleksej Trofimov\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 Dec 2011 15:02:29 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres array parser"
},
{
"msg_contents": "We have tried foreach syntax, but we have noticed performance degradation:\nFunction with for: 203ms\nFunction with foreach: ~250ms:\n\nthere is functions code:\nCREATE OR REPLACE FUNCTION input_value_fe(in_inputs numeric[], \nin_input_nr numeric)\n RETURNS numeric AS\n$BODY$\ndeclare i numeric[];\nBEGIN\n FOREACH i SLICE 1 IN ARRAY in_inputs\n LOOP\n if i[1] = in_input_nr then\n return i[2];\n end if;\n END LOOP;\n\n return null;\nEND;\n$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\n\nCREATE OR REPLACE FUNCTION input_value(in_inputs numeric[], in_input_nr \nnumeric)\n RETURNS numeric AS\n$BODY$\ndeclare\n size int;\nBEGIN\n size = array_upper(in_inputs, 1);\n IF size IS NOT NULL THEN\n FOR i IN 1 .. size LOOP\n if in_inputs[i][1] = in_input_nr then\n return in_inputs[i][2];\n end if;\n END LOOP;\n END IF;\n\n return null;\nEND;\n$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\n\nOn 12/13/2011 04:02 PM, Pavel Stehule wrote:\n> Hello\n>\n> do you know FOREACH IN ARRAY statement in 9.1\n>\n> this significantly accelerate iteration over array\n>\n> http://www.depesz.com/index.php/2011/03/07/waiting-for-9-1-foreach-in-array/\n>\n>\n>\n> 2011/12/13 Aleksej Trofimov<[email protected]>:\n>> Hello, I wanted to ask according such a problem which we had faced with.\n>> We are widely using postgres arrays like key->value array by doing like\n>> this:\n>>\n>> {{1,5},{2,6},{3,7}}\n>>\n>> where 1,2,3 are keys, and 5,6,7 are values. In our pgSql functions we are\n>> using self written array_input(array::numeric[], key::numeric) function\n>> which makes a loop on whole array and searches for key like\n>> FOR i IN 1 .. size LOOP\n>> if array[i][1] = key then\n>> return array[i][2];\n>> end if;\n>> END LOOP;\n>>\n>> But this was a good solution until our arrays and database had grown. So now\n>> FOR loop takes a lot of time to find value of an array.\n>>\n>> And my question is, how this problem of performance could be solved? We had\n>> tried pgperl for string parsing, but it takes much more time than our\n>> current solution. Also we are thinking about self-written C++ function, may\n>> be someone had implemented this algorithm before?\n>>\n> you can use indexes or you can use hstore\n>\n> Regards\n>\n> Pavel Stehule\n>\n>> --\n>> Best regards\n>>\n>> Aleksej Trofimov\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nBest regards\n\nAleksej Trofimov\n\nUAB \"Ruptela\"\n\nPhone: +370 657 80475\n\nE-Mail: [email protected]\nWeb: http://www.ruptela.lt\n\nRuptela - the most successful IT company in Lithuania 2011\nRuptela - sekmingiausia Lietuvos aukštųjų technologijų įmonė 2011\nhttp://www.prezidentas.lt/lt/spaudos_centras_392/pranesimai_spaudai/inovatyvus_verslas_-_konkurencingos_lietuvos_pagrindas.html\nhttp://www.ruptela.lt/news/37/121/Ruptela-sekmingiausia-jauna-aukstuju-technologiju-imone-Lietuvoje\n\n",
"msg_date": "Tue, 13 Dec 2011 16:28:32 +0200",
"msg_from": "Aleksej Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres array parser"
},
{
"msg_contents": "Hello\n\n2011/12/13 Aleksej Trofimov <[email protected]>:\n> We have tried foreach syntax, but we have noticed performance degradation:\n> Function with for: 203ms\n> Function with foreach: ~250ms:\n>\n> there is functions code:\n> CREATE OR REPLACE FUNCTION input_value_fe(in_inputs numeric[], in_input_nr\n> numeric)\n> RETURNS numeric AS\n> $BODY$\n> declare i numeric[];\n> BEGIN\n> FOREACH i SLICE 1 IN ARRAY in_inputs\n> LOOP\n> if i[1] = in_input_nr then\n> return i[2];\n> end if;\n> END LOOP;\n>\n> return null;\n> END;\n> $BODY$\n> LANGUAGE plpgsql VOLATILE\n> COST 100;\n>\n> CREATE OR REPLACE FUNCTION input_value(in_inputs numeric[], in_input_nr\n> numeric)\n> RETURNS numeric AS\n> $BODY$\n> declare\n> size int;\n> BEGIN\n> size = array_upper(in_inputs, 1);\n> IF size IS NOT NULL THEN\n>\n> FOR i IN 1 .. size LOOP\n> if in_inputs[i][1] = in_input_nr then\n> return in_inputs[i][2];\n> end if;\n> END LOOP;\n> END IF;\n>\n> return null;\n> END;\n> $BODY$\n> LANGUAGE plpgsql VOLATILE\n> COST 100;\n>\n>\n> On 12/13/2011 04:02 PM, Pavel Stehule wrote:\n>>\n>> Hello\n>>\n>> do you know FOREACH IN ARRAY statement in 9.1\n>>\n>> this significantly accelerate iteration over array\n>>\n>>\n>> http://www.depesz.com/index.php/2011/03/07/waiting-for-9-1-foreach-in-array/\n>>\n>>\n>>\n>> 2011/12/13 Aleksej Trofimov<[email protected]>:\n>>>\n>>> Hello, I wanted to ask according such a problem which we had faced with.\n>>> We are widely using postgres arrays like key->value array by doing like\n>>> this:\n>>>\n>>> {{1,5},{2,6},{3,7}}\n>>>\n>>> where 1,2,3 are keys, and 5,6,7 are values. In our pgSql functions we are\n>>> using self written array_input(array::numeric[], key::numeric) function\n>>> which makes a loop on whole array and searches for key like\n>>> FOR i IN 1 .. size LOOP\n>>> if array[i][1] = key then\n>>> return array[i][2];\n>>> end if;\n>>> END LOOP;\n>>>\n>>> But this was a good solution until our arrays and database had grown. So\n>>> now\n>>> FOR loop takes a lot of time to find value of an array.\n>>>\n>>> And my question is, how this problem of performance could be solved? We\n>>> had\n>>> tried pgperl for string parsing, but it takes much more time than our\n>>> current solution. 
Also we are thinking about self-written C++ function,\n>>> may\n>>> be someone had implemented this algorithm before?\n>>>\n>> you can use indexes or you can use hstore\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>>> --\n>>> Best regards\n>>>\n>>> Aleksej Trofimov\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list\n>>> ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nIt is strange - on my comp FOREACH is about 2x faster\n\npostgres=# select input_value(array(select\ngenerate_series(1,1000000)::numeric), 100000);\n input_value\n-------------\n\n(1 row)\n\nTime: 495.426 ms\n\npostgres=# select input_value_fe(array(select\ngenerate_series(1,1000000)::numeric), 100000);\n input_value_fe\n----------------\n\n(1 row)\n\nTime: 248.980 ms\n\nRegards\n\nPavel\n\n\n>\n> --\n> Best regards\n>\n> Aleksej Trofimov\n>\n> UAB \"Ruptela\"\n>\n> Phone: +370 657 80475\n>\n> E-Mail: [email protected]\n> Web: http://www.ruptela.lt\n>\n> Ruptela - the most successful IT company in Lithuania 2011\n> Ruptela - sekmingiausia Lietuvos aukštųjų technologijų įmonė 2011\n> http://www.prezidentas.lt/lt/spaudos_centras_392/pranesimai_spaudai/inovatyvus_verslas_-_konkurencingos_lietuvos_pagrindas.html\n> http://www.ruptela.lt/news/37/121/Ruptela-sekmingiausia-jauna-aukstuju-technologiju-imone-Lietuvoje\n>\n",
"msg_date": "Tue, 13 Dec 2011 15:42:58 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres array parser"
},
{
"msg_contents": "Hello,\r\n\r\nFor such cases (see below), it would be nice to have an unnest function that only affect the first array dimension.\r\n\r\nSomething like \r\n\r\nunnest(ARRAY[[1,2],[2,3]], SLICE=1)\r\n=>\r\nunnest\r\n------\r\n[1,2]\r\n[2,3]\r\n\r\n\r\nWith this function, I imagine that following sql function\r\nmight beat the plpgsql FOREACH version. \r\n\r\n\r\nCREATE OR REPLACE FUNCTION input_value_un (in_inputs numeric[], in_input_nr numeric)\r\n RETURNS numeric AS\r\n$BODY$\r\n \r\n SELECT u[1][2]\r\n FROM unnest($1, SLICE =1) u\r\n WHERE u[1][1]=in_input_nr\r\n LIMIT 1;\r\n\r\n$BODY$\r\n LANGUAGE sql IMMUTABLE;\r\n\r\n \r\n \r\nbest regards,\r\n\r\nMarc Mamin\r\n \r\n\r\n> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-performance-\r\n> [email protected]] On Behalf Of Pavel Stehule\r\n> Sent: Dienstag, 13. Dezember 2011 15:43\r\n> To: Aleksej Trofimov\r\n> Cc: [email protected]\r\n> Subject: Re: [PERFORM] Postgres array parser\r\n> \r\n> Hello\r\n> \r\n> 2011/12/13 Aleksej Trofimov <[email protected]>:\r\n> > We have tried foreach syntax, but we have noticed performance\r\n> degradation:\r\n> > Function with for: 203ms\r\n> > Function with foreach: ~250ms:\r\n> >\r\n> > there is functions code:\r\n> > CREATE OR REPLACE FUNCTION input_value_fe(in_inputs numeric[],\r\n> in_input_nr\r\n> > numeric)\r\n> > RETURNS numeric AS\r\n> > $BODY$\r\n> > declare i numeric[];\r\n> > BEGIN\r\n> > FOREACH i SLICE 1 IN ARRAY in_inputs\r\n> > LOOP\r\n> > if i[1] = in_input_nr then\r\n> > return i[2];\r\n> > end if;\r\n> > END LOOP;\r\n> >\r\n> > return null;\r\n> > END;\r\n> > $BODY$\r\n> > LANGUAGE plpgsql VOLATILE\r\n> > COST 100;\r\n> >\r\n> > CREATE OR REPLACE FUNCTION input_value(in_inputs numeric[],\r\n> in_input_nr\r\n> > numeric)\r\n> > RETURNS numeric AS\r\n> > $BODY$\r\n> > declare\r\n> > size int;\r\n> > BEGIN\r\n> > size = array_upper(in_inputs, 1);\r\n> > IF size IS NOT NULL THEN\r\n> >\r\n> > FOR i IN 1 .. size LOOP\r\n> > if in_inputs[i][1] = in_input_nr then\r\n> > return in_inputs[i][2];\r\n> > end if;\r\n> > END LOOP;\r\n> > END IF;\r\n> >\r\n> > return null;\r\n> > END;\r\n> > $BODY$\r\n> > LANGUAGE plpgsql VOLATILE\r\n> > COST 100;\r\n> >\r\n> >\r\n> > On 12/13/2011 04:02 PM, Pavel Stehule wrote:\r\n> >>\r\n> >> Hello\r\n> >>\r\n> >> do you know FOREACH IN ARRAY statement in 9.1\r\n> >>\r\n> >> this significantly accelerate iteration over array\r\n> >>\r\n> >>\r\n> >> http://www.depesz.com/index.php/2011/03/07/waiting-for-9-1-foreach-\r\n> in-array/\r\n> >>\r\n> >>\r\n> >>\r\n> >> 2011/12/13 Aleksej Trofimov<[email protected]>:\r\n> >>>\r\n> >>> Hello, I wanted to ask according such a problem which we had faced\r\n> with.\r\n> >>> We are widely using postgres arrays like key->value array by doing\r\n> like\r\n> >>> this:\r\n> >>>\r\n> >>> {{1,5},{2,6},{3,7}}\r\n> >>>\r\n> >>> where 1,2,3 are keys, and 5,6,7 are values. In our pgSql functions\r\n> we are\r\n> >>> using self written array_input(array::numeric[], key::numeric)\r\n> function\r\n> >>> which makes a loop on whole array and searches for key like\r\n> >>> FOR i IN 1 .. size LOOP\r\n> >>> if array[i][1] = key then\r\n> >>> return array[i][2];\r\n> >>> end if;\r\n> >>> END LOOP;\r\n> >>>\r\n> >>> But this was a good solution until our arrays and database had\r\n> grown. So\r\n> >>> now\r\n> >>> FOR loop takes a lot of time to find value of an array.\r\n> >>>\r\n> >>> And my question is, how this problem of performance could be\r\n> solved? 
We\r\n> >>> had\r\n> >>> tried pgperl for string parsing, but it takes much more time than\r\n> our\r\n> >>> current solution. Also we are thinking about self-written C++\r\n> function,\r\n> >>> may\r\n> >>> be someone had implemented this algorithm before?\r\n> >>>\r\n> >> you can use indexes or you can use hstore\r\n> >>\r\n> >> Regards\r\n> >>\r\n> >> Pavel Stehule\r\n> >>\r\n> >>> --\r\n> >>> Best regards\r\n> >>>\r\n> >>> Aleksej Trofimov\r\n> >>>\r\n> >>>\r\n> >>> --\r\n> >>> Sent via pgsql-performance mailing list\r\n> >>> ([email protected])\r\n> >>> To make changes to your subscription:\r\n> >>> http://www.postgresql.org/mailpref/pgsql-performance\r\n> >\r\n> >\r\n> \r\n> It is strange - on my comp FOREACH is about 2x faster\r\n> \r\n> postgres=# select input_value(array(select\r\n> generate_series(1,1000000)::numeric), 100000);\r\n> input_value\r\n> -------------\r\n> \r\n> (1 row)\r\n> \r\n> Time: 495.426 ms\r\n> \r\n> postgres=# select input_value_fe(array(select\r\n> generate_series(1,1000000)::numeric), 100000);\r\n> input_value_fe\r\n> ----------------\r\n> \r\n> (1 row)\r\n> \r\n> Time: 248.980 ms\r\n> \r\n> Regards\r\n> \r\n> Pavel\r\n> \r\n> \r\n> >\r\n> > --\r\n> > Best regards\r\n> >\r\n> > Aleksej Trofimov\r\n> >\r\n> > UAB \"Ruptela\"\r\n> >\r\n> > Phone: +370 657 80475\r\n> >\r\n> > E-Mail: [email protected]\r\n> > Web: http://www.ruptela.lt\r\n> >\r\n> > Ruptela - the most successful IT company in Lithuania 2011\r\n> > Ruptela - sekmingiausia Lietuvos aukštųjų technologijų įmonė 2011\r\n> >\r\n> http://www.prezidentas.lt/lt/spaudos_centras_392/pranesimai_spaudai/ino\r\n> vatyvus_verslas_-_konkurencingos_lietuvos_pagrindas.html\r\n> > http://www.ruptela.lt/news/37/121/Ruptela-sekmingiausia-jauna-\r\n> aukstuju-technologiju-imone-Lietuvoje\r\n> >\r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list (pgsql-\r\n> [email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n",
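Short of such a SLICE argument, the same walk over the first array dimension can already be written in plain SQL with generate_subscripts. A rough sketch (untested; whether it actually beats the plpgsql loop would need to be measured on the real data):

    CREATE OR REPLACE FUNCTION input_value_gs(in_inputs numeric[], in_input_nr numeric)
      RETURNS numeric AS
    $BODY$
      -- $1 is the key/value array, $2 the key; scan only the first dimension
      SELECT $1[i][2]
        FROM generate_subscripts($1, 1) AS i
       WHERE $1[i][1] = $2
       LIMIT 1;
    $BODY$
      LANGUAGE sql IMMUTABLE;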
"msg_date": "Wed, 14 Dec 2011 10:21:56 +0100",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres array parser"
},
{
"msg_contents": "Yes, it would be great, but I haven't found such a function, which \nsplits 2 dimensional array into rows =) Maybe we'll modify existing \nfunction, but unfortunately we have tried hstore type and function in \npostgres and we see a significant performance improvements. So we only \nneed to convert existing data into hstore and I think this is a good \nsolution.\n\nOn 12/14/2011 11:21 AM, Marc Mamin wrote:\n> Hello,\n>\n> For such cases (see below), it would be nice to have an unnest function that only affect the first array dimension.\n>\n> Something like\n>\n> unnest(ARRAY[[1,2],[2,3]], SLICE=1)\n> =>\n> unnest\n> ------\n> [1,2]\n> [2,3]\n>\n>\n> With this function, I imagine that following sql function\n> might beat the plpgsql FOREACH version.\n>\n>\n> CREATE OR REPLACE FUNCTION input_value_un (in_inputs numeric[], in_input_nr numeric)\n> RETURNS numeric AS\n> $BODY$\n>\n> SELECT u[1][2]\n> FROM unnest($1, SLICE =1) u\n> WHERE u[1][1]=in_input_nr\n> LIMIT 1;\n>\n> $BODY$\n> LANGUAGE sql IMMUTABLE;\n>\n>\n>\n> best regards,\n>\n> Marc Mamin\n>\n>\n>> -----Original Message-----\n>> From: [email protected] [mailto:pgsql-performance-\n>> [email protected]] On Behalf Of Pavel Stehule\n>> Sent: Dienstag, 13. Dezember 2011 15:43\n>> To: Aleksej Trofimov\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] Postgres array parser\n>>\n>> Hello\n>>\n>> 2011/12/13 Aleksej Trofimov<[email protected]>:\n>>> We have tried foreach syntax, but we have noticed performance\n>> degradation:\n>>> Function with for: 203ms\n>>> Function with foreach: ~250ms:\n>>>\n>>> there is functions code:\n>>> CREATE OR REPLACE FUNCTION input_value_fe(in_inputs numeric[],\n>> in_input_nr\n>>> numeric)\n>>> RETURNS numeric AS\n>>> $BODY$\n>>> declare i numeric[];\n>>> BEGIN\n>>> FOREACH i SLICE 1 IN ARRAY in_inputs\n>>> LOOP\n>>> if i[1] = in_input_nr then\n>>> return i[2];\n>>> end if;\n>>> END LOOP;\n>>>\n>>> return null;\n>>> END;\n>>> $BODY$\n>>> LANGUAGE plpgsql VOLATILE\n>>> COST 100;\n>>>\n>>> CREATE OR REPLACE FUNCTION input_value(in_inputs numeric[],\n>> in_input_nr\n>>> numeric)\n>>> RETURNS numeric AS\n>>> $BODY$\n>>> declare\n>>> size int;\n>>> BEGIN\n>>> size = array_upper(in_inputs, 1);\n>>> IF size IS NOT NULL THEN\n>>>\n>>> FOR i IN 1 .. size LOOP\n>>> if in_inputs[i][1] = in_input_nr then\n>>> return in_inputs[i][2];\n>>> end if;\n>>> END LOOP;\n>>> END IF;\n>>>\n>>> return null;\n>>> END;\n>>> $BODY$\n>>> LANGUAGE plpgsql VOLATILE\n>>> COST 100;\n>>>\n>>>\n>>> On 12/13/2011 04:02 PM, Pavel Stehule wrote:\n>>>> Hello\n>>>>\n>>>> do you know FOREACH IN ARRAY statement in 9.1\n>>>>\n>>>> this significantly accelerate iteration over array\n>>>>\n>>>>\n>>>> http://www.depesz.com/index.php/2011/03/07/waiting-for-9-1-foreach-\n>> in-array/\n>>>>\n>>>>\n>>>> 2011/12/13 Aleksej Trofimov<[email protected]>:\n>>>>> Hello, I wanted to ask according such a problem which we had faced\n>> with.\n>>>>> We are widely using postgres arrays like key->value array by doing\n>> like\n>>>>> this:\n>>>>>\n>>>>> {{1,5},{2,6},{3,7}}\n>>>>>\n>>>>> where 1,2,3 are keys, and 5,6,7 are values. In our pgSql functions\n>> we are\n>>>>> using self written array_input(array::numeric[], key::numeric)\n>> function\n>>>>> which makes a loop on whole array and searches for key like\n>>>>> FOR i IN 1 .. size LOOP\n>>>>> if array[i][1] = key then\n>>>>> return array[i][2];\n>>>>> end if;\n>>>>> END LOOP;\n>>>>>\n>>>>> But this was a good solution until our arrays and database had\n>> grown. 
So\n>>>>> now\n>>>>> FOR loop takes a lot of time to find value of an array.\n>>>>>\n>>>>> And my question is, how this problem of performance could be\n>> solved? We\n>>>>> had\n>>>>> tried pgperl for string parsing, but it takes much more time than\n>> our\n>>>>> current solution. Also we are thinking about self-written C++\n>> function,\n>>>>> may\n>>>>> be someone had implemented this algorithm before?\n>>>>>\n>>>> you can use indexes or you can use hstore\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel Stehule\n>>>>\n>>>>> --\n>>>>> Best regards\n>>>>>\n>>>>> Aleksej Trofimov\n>>>>>\n>>>>>\n>>>>> --\n>>>>> Sent via pgsql-performance mailing list\n>>>>> ([email protected])\n>>>>> To make changes to your subscription:\n>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>> It is strange - on my comp FOREACH is about 2x faster\n>>\n>> postgres=# select input_value(array(select\n>> generate_series(1,1000000)::numeric), 100000);\n>> input_value\n>> -------------\n>>\n>> (1 row)\n>>\n>> Time: 495.426 ms\n>>\n>> postgres=# select input_value_fe(array(select\n>> generate_series(1,1000000)::numeric), 100000);\n>> input_value_fe\n>> ----------------\n>>\n>> (1 row)\n>>\n>> Time: 248.980 ms\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list (pgsql-\n>> [email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nBest regards\n\nAleksej Trofimov\n\n",
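For reference, a sketch of how that conversion and lookup could look, assuming a table named readings holding the key/value pairs in a numeric[] column named inputs (both names invented here) and the hstore extension installed:

    -- CREATE EXTENSION hstore;   -- once per database (9.1 syntax)
    ALTER TABLE readings ADD COLUMN inputs_h hstore;

    UPDATE readings r
       SET inputs_h = (SELECT hstore(array_agg(r.inputs[i][1]::text),
                                     array_agg(r.inputs[i][2]::text))
                         FROM generate_subscripts(r.inputs, 1) AS i);

    -- the old input_value(inputs, 2) lookup then becomes:
    SELECT (inputs_h -> '2')::numeric FROM readings;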
"msg_date": "Wed, 14 Dec 2011 11:59:34 +0200",
"msg_from": "Aleksej Trofimov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres array parser"
},
{
"msg_contents": "> Yes, it would be great, but I haven't found such a function, which\r\n> splits 2 dimensional array into rows =) Maybe we'll modify existing\r\n> function, but unfortunately we have tried hstore type and function in\r\n> postgres and we see a significant performance improvements. So we only\r\n> need to convert existing data into hstore and I think this is a good\r\n> solution.\r\n\r\n\r\n\r\nI haven't tested hstore yet, but I would be interested to find out if it still better perform with custom \"numeric\" aggregates on the hstore values.\r\n\r\nI've made a short \"proof of concept\" test with a custom key/value type to achieve such an aggregation.\r\nSomething like:\r\n\r\n\r\n SELECT x, distinct_sum( (currency,amount)::keyval ) overview FROM ... GROUP BY x\r\n\r\n x currency amount\r\n a EUR 15.0\r\n a EUR 5.0\r\n a CHF 7.5\r\n b USD 12.0\r\n =>\r\n\r\n x overview\r\n - --------\r\n a {(EUR,20.0), (CHF,7.5)}\r\n b {(USD,10.0)}\r\n\r\n\r\nregards,\r\n\r\nMarc Mamin\r\n\r\n \r\n> On 12/14/2011 11:21 AM, Marc Mamin wrote:\r\n> > Hello,\r\n> >\r\n> > For such cases (see below), it would be nice to have an unnest\r\n> function that only affect the first array dimension.\r\n> >\r\n> > Something like\r\n> >\r\n> > unnest(ARRAY[[1,2],[2,3]], SLICE=1)\r\n> > =>\r\n> > unnest\r\n> > ------\r\n> > [1,2]\r\n> > [2,3]\r\n> >\r\n> >\r\n> > With this function, I imagine that following sql function\r\n> > might beat the plpgsql FOREACH version.\r\n> >\r\n> >\r\n> > CREATE OR REPLACE FUNCTION input_value_un (in_inputs numeric[],\r\n> in_input_nr numeric)\r\n> > RETURNS numeric AS\r\n> > $BODY$\r\n> >\r\n> > SELECT u[1][2]\r\n> > FROM unnest($1, SLICE =1) u\r\n> > WHERE u[1][1]=in_input_nr\r\n> > LIMIT 1;\r\n> >\r\n> > $BODY$\r\n> > LANGUAGE sql IMMUTABLE;\r\n> >\r\n> >\r\n> >\r\n> > best regards,\r\n> >\r\n> > Marc Mamin\r\n> >\r\n> >\r\n> >> -----Original Message-----\r\n> >> From: [email protected] [mailto:pgsql-\r\n> performance-\r\n> >> [email protected]] On Behalf Of Pavel Stehule\r\n> >> Sent: Dienstag, 13. Dezember 2011 15:43\r\n> >> To: Aleksej Trofimov\r\n> >> Cc: [email protected]\r\n> >> Subject: Re: [PERFORM] Postgres array parser\r\n> >>\r\n> >> Hello\r\n> >>\r\n> >> 2011/12/13 Aleksej Trofimov<[email protected]>:\r\n> >>> We have tried foreach syntax, but we have noticed performance\r\n> >> degradation:\r\n> >>> Function with for: 203ms\r\n> >>> Function with foreach: ~250ms:\r\n> >>>\r\n> >>> there is functions code:\r\n> >>> CREATE OR REPLACE FUNCTION input_value_fe(in_inputs numeric[],\r\n> >> in_input_nr\r\n> >>> numeric)\r\n> >>> RETURNS numeric AS\r\n> >>> $BODY$\r\n> >>> declare i numeric[];\r\n> >>> BEGIN\r\n> >>> FOREACH i SLICE 1 IN ARRAY in_inputs\r\n> >>> LOOP\r\n> >>> if i[1] = in_input_nr then\r\n> >>> return i[2];\r\n> >>> end if;\r\n> >>> END LOOP;\r\n> >>>\r\n> >>> return null;\r\n> >>> END;\r\n> >>> $BODY$\r\n> >>> LANGUAGE plpgsql VOLATILE\r\n> >>> COST 100;\r\n> >>>\r\n> >>> CREATE OR REPLACE FUNCTION input_value(in_inputs numeric[],\r\n> >> in_input_nr\r\n> >>> numeric)\r\n> >>> RETURNS numeric AS\r\n> >>> $BODY$\r\n> >>> declare\r\n> >>> size int;\r\n> >>> BEGIN\r\n> >>> size = array_upper(in_inputs, 1);\r\n> >>> IF size IS NOT NULL THEN\r\n> >>>\r\n> >>> FOR i IN 1 .. 
size LOOP\r\n> >>> if in_inputs[i][1] = in_input_nr then\r\n> >>> return in_inputs[i][2];\r\n> >>> end if;\r\n> >>> END LOOP;\r\n> >>> END IF;\r\n> >>>\r\n> >>> return null;\r\n> >>> END;\r\n> >>> $BODY$\r\n> >>> LANGUAGE plpgsql VOLATILE\r\n> >>> COST 100;\r\n> >>>\r\n> >>>\r\n> >>> On 12/13/2011 04:02 PM, Pavel Stehule wrote:\r\n> >>>> Hello\r\n> >>>>\r\n> >>>> do you know FOREACH IN ARRAY statement in 9.1\r\n> >>>>\r\n> >>>> this significantly accelerate iteration over array\r\n> >>>>\r\n> >>>>\r\n> >>>> http://www.depesz.com/index.php/2011/03/07/waiting-for-9-1-\r\n> foreach-\r\n> >> in-array/\r\n> >>>>\r\n> >>>>\r\n> >>>> 2011/12/13 Aleksej Trofimov<[email protected]>:\r\n> >>>>> Hello, I wanted to ask according such a problem which we had\r\n> faced\r\n> >> with.\r\n> >>>>> We are widely using postgres arrays like key->value array by\r\n> doing\r\n> >> like\r\n> >>>>> this:\r\n> >>>>>\r\n> >>>>> {{1,5},{2,6},{3,7}}\r\n> >>>>>\r\n> >>>>> where 1,2,3 are keys, and 5,6,7 are values. In our pgSql\r\n> functions\r\n> >> we are\r\n> >>>>> using self written array_input(array::numeric[], key::numeric)\r\n> >> function\r\n> >>>>> which makes a loop on whole array and searches for key like\r\n> >>>>> FOR i IN 1 .. size LOOP\r\n> >>>>> if array[i][1] = key then\r\n> >>>>> return array[i][2];\r\n> >>>>> end if;\r\n> >>>>> END LOOP;\r\n> >>>>>\r\n> >>>>> But this was a good solution until our arrays and database had\r\n> >> grown. So\r\n> >>>>> now\r\n> >>>>> FOR loop takes a lot of time to find value of an array.\r\n> >>>>>\r\n> >>>>> And my question is, how this problem of performance could be\r\n> >> solved? We\r\n> >>>>> had\r\n> >>>>> tried pgperl for string parsing, but it takes much more time than\r\n> >> our\r\n> >>>>> current solution. Also we are thinking about self-written C++\r\n> >> function,\r\n> >>>>> may\r\n> >>>>> be someone had implemented this algorithm before?\r\n> >>>>>\r\n> >>>> you can use indexes or you can use hstore\r\n> >>>>\r\n> >>>> Regards\r\n> >>>>\r\n> >>>> Pavel Stehule\r\n> >>>>\r\n> >>>>> --\r\n> >>>>> Best regards\r\n> >>>>>\r\n> >>>>> Aleksej Trofimov\r\n> >>>>>\r\n> >>>>>\r\n> >>>>> --\r\n> >>>>> Sent via pgsql-performance mailing list\r\n> >>>>> ([email protected])\r\n> >>>>> To make changes to your subscription:\r\n> >>>>> http://www.postgresql.org/mailpref/pgsql-performance\r\n> >>>\r\n> >> It is strange - on my comp FOREACH is about 2x faster\r\n> >>\r\n> >> postgres=# select input_value(array(select\r\n> >> generate_series(1,1000000)::numeric), 100000);\r\n> >> input_value\r\n> >> -------------\r\n> >>\r\n> >> (1 row)\r\n> >>\r\n> >> Time: 495.426 ms\r\n> >>\r\n> >> postgres=# select input_value_fe(array(select\r\n> >> generate_series(1,1000000)::numeric), 100000);\r\n> >> input_value_fe\r\n> >> ----------------\r\n> >>\r\n> >> (1 row)\r\n> >>\r\n> >> Time: 248.980 ms\r\n> >>\r\n> >> Regards\r\n> >>\r\n> >> Pavel\r\n> >>\r\n> >>\r\n> >> --\r\n> >> Sent via pgsql-performance mailing list (pgsql-\r\n> >> [email protected])\r\n> >> To make changes to your subscription:\r\n> >> http://www.postgresql.org/mailpref/pgsql-performance\r\n> \r\n> \r\n> --\r\n> Best regards\r\n> \r\n> Aleksej Trofimov\r\n> \r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list (pgsql-\r\n> [email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n",
"msg_date": "Wed, 14 Dec 2011 11:27:10 +0100",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres array parser"
}
] |
[
{
"msg_contents": "I've got a 5GB table with about 12 million rows.\nRecently, I had to select the distinct values from just one column.\nThe planner chose an index scan. The query took almost an hour.\nWhen I forced index scan off, the query took 90 seconds (full table scan).\n\nThe planner estimated 70,000 unique values when, in fact, there are 12\nmillion (the value for this row is *almost* but not quite unique).\nWhat's more, despite bumping the statistics on that column up to 1000\nand re-analyzing, the planner now thinks that there are 300,000 unique\nvalues.\nHow can I tell the planner that a given column is much more unique\nthan, apparently, it thinks it is?\nThe column type is INET.\nThis is on PG 8.4.10 on Linux x86_64, with\n81f4e6cd27d538bc27e9714a9173e4df353a02e5 applied.\n\n-- \nJon\n",
"msg_date": "Tue, 13 Dec 2011 12:12:47 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "select distinct uses index scan vs full table scan"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> I've got a 5GB table with about 12 million rows.\n> Recently, I had to select the distinct values from just one column.\n> The planner chose an index scan. The query took almost an hour.\n> When I forced index scan off, the query took 90 seconds (full table scan).\n\nUsually, we hear complaints about the opposite. Are you using\nnondefault cost settings?\n\n> The planner estimated 70,000 unique values when, in fact, there are 12\n> million (the value for this row is *almost* but not quite unique).\n> What's more, despite bumping the statistics on that column up to 1000\n> and re-analyzing, the planner now thinks that there are 300,000 unique\n> values.\n\nAccurate ndistinct estimates are hard, but that wouldn't have much of\nanything to do with this particular choice, AFAICS.\n\n> How can I tell the planner that a given column is much more unique\n> than, apparently, it thinks it is?\n\n9.0 and up have ALTER TABLE ... ALTER COLUMN ... SET n_distinct.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Dec 2011 14:57:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select distinct uses index scan vs full table scan "
},
{
"msg_contents": "On Tue, Dec 13, 2011 at 1:57 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> I've got a 5GB table with about 12 million rows.\n>> Recently, I had to select the distinct values from just one column.\n>> The planner chose an index scan. The query took almost an hour.\n>> When I forced index scan off, the query took 90 seconds (full table scan).\n>\n> Usually, we hear complaints about the opposite. Are you using\n> nondefault cost settings?\n\nCost settings had not been changed until a few minutes ago when your\nresponse prompted me to try a few things.\n\nI ended up changing the random_page_cost to 16.0 (from 4.0), partly\nbecause the H/W raid I'm using is awful bad at random I/O. I'll\nexperiment and keep tabs on performance to see if this has a negative\neffect on other aspects.\n\n>> The planner estimated 70,000 unique values when, in fact, there are 12\n>> million (the value for this row is *almost* but not quite unique).\n>> What's more, despite bumping the statistics on that column up to 1000\n>> and re-analyzing, the planner now thinks that there are 300,000 unique\n>> values.\n>\n> Accurate ndistinct estimates are hard, but that wouldn't have much of\n> anything to do with this particular choice, AFAICS.\n>\n>> How can I tell the planner that a given column is much more unique\n>> than, apparently, it thinks it is?\n>\n> 9.0 and up have ALTER TABLE ... ALTER COLUMN ... SET n_distinct.\n\nD'oh! I'm on 8.4.10+patches.\nThis may provide the necessary push.\n\n-- \nJon\n",
"msg_date": "Tue, 13 Dec 2011 14:17:58 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select distinct uses index scan vs full table scan"
}
] |
[
{
"msg_contents": "for example, the where condition is: where 'aaaa' ~ col1. I created a \nnormal index on col1 but seems it is not used.\n",
"msg_date": "Thu, 15 Dec 2011 00:05:31 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is it possible to use index on column for regexp match operator '~'?"
},
{
"msg_contents": "2011/12/14 Rural Hunter <[email protected]>:\n> for example, the where condition is: where 'aaaa' ~ col1. I created a normal\n> index on col1 but seems it is not used.\n\nI assume you want to search values that match one particular pattern,\nthat would be col1 ~ 'aaaa'\n\nThe answer is, only very simple patterns that start with '^'. Note\nthat you MUST use the text_pattern_ops index opclass:\n\n# create table words (word text);\n# copy words from '/usr/share/dict/words';\n# create index on words (word text_pattern_ops);\n# explain select * from words where word ~ '^post';\nIndex Scan using words_word_idx on words (cost=0.00..8.28 rows=10 width=9)\n Index Cond: ((word ~>=~ 'post'::text) AND (word ~<~ 'posu'::text))\n Filter: (word ~ '^post'::text)\n\n----\n\nIf you just want to search for arbitrary strings, in PostgreSQL 9.1+\nyou can use pg_trgm extension with a LIKE expression:\n\n# create extension pg_trgm;\n# create index on words using gist (word gist_trgm_ops);\n# explain select * from words where word like '%post%';\nBitmap Heap Scan on words (cost=4.36..40.23 rows=10 width=9)\n Recheck Cond: (word ~~ '%post%'::text)\n -> Bitmap Index Scan on words_word_idx1 (cost=0.00..4.36 rows=10 width=0)\n Index Cond: (word ~~ '%post%'::text)\n\n----\n\nThere's also the \"wildspeed\" external module which is somewhat faster\nat this: http://www.sai.msu.su/~megera/wiki/wildspeed\n\nAnd someone is working to get pg_trgm support for arbitrary regular\nexpression searches. This *may* become part of the next major\nPostgreSQL release (9.2)\nhttp://archives.postgresql.org/message-id/CAPpHfduD6EGNise5codBz0KcdDahp7--MhFz_JDD_FRPC7-i=A@mail.gmail.com\n\nRegards,\nMarti\n",
"msg_date": "Wed, 14 Dec 2011 22:43:37 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is it possible to use index on column for regexp match\n\toperator '~'?"
},
{
"msg_contents": "actually I stored the pattern in col1. I want to get the row whose col1 \npattern matches one string 'aaa'.\n\n于2011年12月15日 4:43:37,Marti Raudsepp写到:\n> 2011/12/14 Rural Hunter<[email protected]>:\n>> for example, the where condition is: where 'aaaa' ~ col1. I created a normal\n>> index on col1 but seems it is not used.\n>\n> I assume you want to search values that match one particular pattern,\n> that would be col1 ~ 'aaaa'\n>\n> The answer is, only very simple patterns that start with '^'. Note\n> that you MUST use the text_pattern_ops index opclass:\n>\n> # create table words (word text);\n> # copy words from '/usr/share/dict/words';\n> # create index on words (word text_pattern_ops);\n> # explain select * from words where word ~ '^post';\n> Index Scan using words_word_idx on words (cost=0.00..8.28 rows=10 width=9)\n> Index Cond: ((word ~>=~ 'post'::text) AND (word ~<~ 'posu'::text))\n> Filter: (word ~ '^post'::text)\n>\n> ----\n>\n> If you just want to search for arbitrary strings, in PostgreSQL 9.1+\n> you can use pg_trgm extension with a LIKE expression:\n>\n> # create extension pg_trgm;\n> # create index on words using gist (word gist_trgm_ops);\n> # explain select * from words where word like '%post%';\n> Bitmap Heap Scan on words (cost=4.36..40.23 rows=10 width=9)\n> Recheck Cond: (word ~~ '%post%'::text)\n> -> Bitmap Index Scan on words_word_idx1 (cost=0.00..4.36 rows=10 width=0)\n> Index Cond: (word ~~ '%post%'::text)\n>\n> ----\n>\n> There's also the \"wildspeed\" external module which is somewhat faster\n> at this: http://www.sai.msu.su/~megera/wiki/wildspeed\n>\n> And someone is working to get pg_trgm support for arbitrary regular\n> expression searches. This *may* become part of the next major\n> PostgreSQL release (9.2)\n> http://archives.postgresql.org/message-id/CAPpHfduD6EGNise5codBz0KcdDahp7--MhFz_JDD_FRPC7-i=A@mail.gmail.com\n>\n> Regards,\n> Marti\n>\n\n\n",
"msg_date": "Thu, 15 Dec 2011 09:54:06 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is it possible to use index on column for regexp match\n\toperator '~'?"
}
] |
[
{
"msg_contents": "Forwarding this on from someone who may pop up on the list too. This is \na new server that's running a query slowly. The rate at which new \nsemops happen may be related. OS is standard RHEL 5.5, kernel \n2.6.18-194.32.1.el5. One possibly interesting part is that this is \nleading edge hardware: four 8-core processes with HyperThreading, for \n64 threads total:\n\nprocessor : 63\nmodel name : Intel(R) Xeon(R) CPU X7550 @ 2.00GHz\nphysical id : 3\nsiblings : 16\ncpu cores : 8\n\nMy only vague memory of issues around systems like this that comes to \nmind here is Scott Marlowe talking about some kernel tweaks he had to do \non his 48 core AMD boxes to get them work right. That seems like a \nsketchy association, and I don't seem to have the details handy.\n\nI do have the full query text and associated plans, but it's mostly \nnoise. Here's the fun part:\n\n-> Nested Loop (cost=581.52..3930.81 rows=2 width=51) (actual \ntime=25.681..17562.198 rows=21574 loops=1)\n -> Merge Join (cost=581.52..783.07 rows=2 width=37) (actual \ntime=25.650..98.268 rows=21574 loops=1)\n Merge Cond: (rim.instrumentid = ri.instrumentid)\n -> Index Scan using reportinstruments_part_125_pkey on \nreportinstruments_part_125 rim (cost=0.00..199.83 rows=110 width=8) \n(actual time=0.033..27.180 rows=20555 loops=1)\n Index Cond: (reportid = 105668)\n -> Sort (cost=581.52..582.31 rows=316 width=33) (actual \ntime=25.608..34.931 rows=21574 loops=1)\n Sort Key: ri.instrumentid\n Sort Method: quicksort Memory: 2454kB\n -> Index Scan using riskbreakdown_part_51_pkey on \nriskbreakdown_part_51 ri (cost=0.00..568.40 rows=316 width=33) (actual \ntime=0.019..11.599 rows=21574 loops=1)\n Index Cond: (reportid = 105668)\n -> Index Scan using energymarketlist_pkey on energymarketlist \nip (cost=0.00..1573.86 rows=1 width=18) (actual time=0.408..0.808 \nrows=1 loops=21574)\n Index Cond: ((reportid = 105668) AND (instrumentid = \nri.instrumentid))\n...\nTotal runtime: 21250.377 ms\n\nThe stats are terrible and the estimates off by many orders of \nmagnitude. But that's not the point. It expected 2 rows and 21574 came \nout; fine. Why is it taking this server 17 seconds to process 21K rows \nof tiny width through a Nested Loop? Is it bouncing to a new CPU every \ntime the thing processes a row or something? I'm finding it hard to \nimagine how this could be a PostgreSQL problem; seems more like a kernel \nbug aggrevated on this system. I wonder if we could produce a \nstandalone test case with a similar plan from what the bad query looks \nlike, and ask the person with this strange system to try it. See if \nit's possible to make it misbehave in the same way with something \nsimpler others can try, too.\n\nThe only slow semops thread I found in the archives turned out to be I/O \nbound. 
This query is all CPU; combining a few examples here since this \nis repeatable and I'm told acts the same each time:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n12634 xxx 25 0 131g 1.4g 1.4g R 100.1 0.6 11:24.20 postgres: \nxxx xxx [local] SELECT\n\n$ ./pidstat -d -T ALL -p 27683 1 100\n15:28:46 PID kB_rd/s kB_wr/s kB_ccwr/s Command\n15:28:47 27683 0.00 0.00 0.00 postgres\n\nThe PostgreSQL 9.1.1 is a custom build, but looking at pg_config I don't \nsee anything special that might be related; it was optimized like this:\n\nCFLAGS=-march=core2 -O2 -pipe\n\nHere's a chunk of the observed and seemingly strange semop rate:\n\n$ strace -fttp 12634\nProcess 12634 attached - interrupt to quit\n15:38:55.966047 semop(16646247, 0x7fffd09dac20, 1) = 0\n15:39:17.490133 semop(16810092, 0x7fffd09dac20, 1) = 0\n15:39:17.532522 semop(16810092, 0x7fffd09dac20, 1) = 0\n15:39:17.534874 semop(16777323, 0x7fffd09dac00, 1) = 0\n15:39:17.603637 semop(16777323, 0x7fffd09dac00, 1) = 0\n15:39:17.640646 semop(16810092, 0x7fffd09dac20, 1) = 0\n15:39:17.658230 semop(16810092, 0x7fffd09dac20, 1) = 0\n15:39:18.905137 semop(16646247, 0x7fffd09dac20, 1) = 0\n15:39:33.396657 semop(16810092, 0x7fffd09dac20, 1) = 0\n15:39:50.208516 semop(16777323, 0x7fffd09dac00, 1) = 0\n15:39:54.640712 semop(16646247, 0x7fffd09dac20, 1) = 0\n15:39:55.468458 semop(16777323, 0x7fffd09dac00, 1) = 0\n15:39:55.488364 semop(16777323, 0x7fffd09dac00, 1) = 0\n15:39:55.489344 semop(16777323, 0x7fffd09dac00, 1) = 0\nProcess 12634 detached\n\npg_locks for this 12634 shows all granted ones, nothing exciting there. \nI asked how well this executes with enable_nestloop turned off, hoping \nto see that next.\n\nThis all seems odd, and I get interested and concerned when that start \nshowing up specifically on newer hardware.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\n",
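A sketch of the follow-up test mentioned at the end, run from psql, with problem_query.sql standing in for the real query (which is not reproduced here):

    \timing on
    SET enable_nestloop = off;
    \i problem_query.sql
    RESET enable_nestloop;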
"msg_date": "Fri, 16 Dec 2011 13:27:57 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow nested loop execution on larger server"
},
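Greg's idea of a standalone test case might look roughly like the sketch below. This is an assumption-heavy illustration, not the original workload: the table and column names are invented, and hash/merge joins are disabled purely to force a nested loop with an inner index scan over roughly the same number of rows as the slow plan.

CREATE TABLE outer_t (reportid int, instrumentid int, payload text);
CREATE TABLE inner_t (reportid int, instrumentid int, val numeric,
                      PRIMARY KEY (reportid, instrumentid));
-- roughly the row count seen in the slow plan
INSERT INTO outer_t SELECT 105668, g, 'x' FROM generate_series(1, 21574) g;
INSERT INTO inner_t SELECT 105668, g, random() FROM generate_series(1, 21574) g;
ANALYZE outer_t;
ANALYZE inner_t;

SET enable_hashjoin = off;   -- only to force the nested-loop shape for the test
SET enable_mergejoin = off;
EXPLAIN ANALYZE
SELECT o.instrumentid, i.val
FROM outer_t o
JOIN inner_t i USING (reportid, instrumentid)
WHERE o.reportid = 105668;

If a ~21k-iteration inner index scan takes milliseconds here but seconds on the affected box, that points at the platform rather than at the query.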
{
"msg_contents": "On Fri, Dec 16, 2011 at 11:27 AM, Greg Smith <[email protected]> wrote:\n> Forwarding this on from someone who may pop up on the list too. This is a\n> new server that's running a query slowly. The rate at which new semops\n> happen may be related. OS is standard RHEL 5.5, kernel 2.6.18-194.32.1.el5.\n> One possibly interesting part is that this is leading edge hardware: four\n> 8-core processes with HyperThreading, for 64 threads total:\n>\n> processor : 63\n> model name : Intel(R) Xeon(R) CPU X7550 @ 2.00GHz\n> physical id : 3\n> siblings : 16\n> cpu cores : 8\n>\n> My only vague memory of issues around systems like this that comes to mind\n> here is Scott Marlowe talking about some kernel tweaks he had to do on his\n> 48 core AMD boxes to get them work right. That seems like a sketchy\n> association, and I don't seem to have the details handy.\n>\n> I do have the full query text and associated plans, but it's mostly noise.\n> Here's the fun part:\n>\n> -> Nested Loop (cost=581.52..3930.81 rows=2 width=51) (actual\n> time=25.681..17562.198 rows=21574 loops=1)\n> -> Merge Join (cost=581.52..783.07 rows=2 width=37) (actual\n> time=25.650..98.268 rows=21574 loops=1)\n> Merge Cond: (rim.instrumentid = ri.instrumentid)\n> -> Index Scan using reportinstruments_part_125_pkey on\n> reportinstruments_part_125 rim (cost=0.00..199.83 rows=110 width=8) (actual\n> time=0.033..27.180 rows=20555 loops=1)\n> Index Cond: (reportid = 105668)\n> -> Sort (cost=581.52..582.31 rows=316 width=33) (actual\n> time=25.608..34.931 rows=21574 loops=1)\n> Sort Key: ri.instrumentid\n> Sort Method: quicksort Memory: 2454kB\n> -> Index Scan using riskbreakdown_part_51_pkey on\n> riskbreakdown_part_51 ri (cost=0.00..568.40 rows=316 width=33) (actual\n> time=0.019..11.599 rows=21574 loops=1)\n> Index Cond: (reportid = 105668)\n> -> Index Scan using energymarketlist_pkey on energymarketlist ip\n> (cost=0.00..1573.86 rows=1 width=18) (actual time=0.408..0.808 rows=1\n> loops=21574)\n> Index Cond: ((reportid = 105668) AND (instrumentid =\n> ri.instrumentid))\n> ...\n> Total runtime: 21250.377 ms\n>\n> The stats are terrible and the estimates off by many orders of magnitude.\n> But that's not the point. It expected 2 rows and 21574 came out; fine.\n> Why is it taking this server 17 seconds to process 21K rows of tiny width\n> through a Nested Loop? Is it bouncing to a new CPU every time the thing\n> processes a row or something? I'm finding it hard to imagine how this could\n> be a PostgreSQL problem; seems more like a kernel bug aggrevated on this\n> system. I wonder if we could produce a standalone test case with a similar\n> plan from what the bad query looks like, and ask the person with this\n> strange system to try it. See if it's possible to make it misbehave in the\n> same way with something simpler others can try, too.\n\nWhat's the difference in speed of running the query naked and with\nexplain analyze? It's not uncommon to run into a situation where the\ninstrumentation of explain analyze costs significantly more than\nanything in the query, and for some loops this gets especially bad.\nIf the naked query runs in say 1 second, and the explain analyze runs\nin 17 seconds, then the explain analyze time keeping is costing you\n16/17ths of the time to do the accounting. If that's the case then\nthe real query is about 16 times faster than the explain analyzed one,\nand the rows are being processed at about 25k/second.\n",
"msg_date": "Fri, 16 Dec 2011 14:16:21 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow nested loop execution on larger server"
},
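A quick way to measure how much of the runtime is EXPLAIN ANALYZE instrumentation, along the lines Scott describes (a generic sketch, not the original query):

\timing on
SELECT count(*) FROM generate_series(1, 1000000);   -- naked run; psql reports wall-clock time
EXPLAIN ANALYZE
SELECT count(*) FROM generate_series(1, 1000000);   -- instrumented run; compare Total runtime

If the gap between the two is large, per-row timing overhead is inflating the reported node times.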
{
"msg_contents": "The naked query runs a very long time, and stuck in a slow semop() syscalls from my strace output, the explain plans are probably more to do with our statistic problems that we are clubbing through slowly but surely. The main concern was way the slow semop() calls consistently, a restart of the server seemed to have stopped the issue for now.\r\n\r\n- John\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Scott Marlowe\r\nSent: Friday, December 16, 2011 3:16 PM\r\nTo: Greg Smith\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Slow nested loop execution on larger server\r\n\r\nOn Fri, Dec 16, 2011 at 11:27 AM, Greg Smith <[email protected]> wrote:\r\n> Forwarding this on from someone who may pop up on the list too. This \r\n> is a new server that's running a query slowly. The rate at which new \r\n> semops happen may be related. OS is standard RHEL 5.5, kernel 2.6.18-194.32.1.el5.\r\n> One possibly interesting part is that this is leading edge hardware: \r\n> four 8-core processes with HyperThreading, for 64 threads total:\r\n>\r\n> processor : 63\r\n> model name : Intel(R) Xeon(R) CPU X7550 @ 2.00GHz \r\n> physical id : 3 siblings : 16 cpu cores : 8\r\n>\r\n> My only vague memory of issues around systems like this that comes to \r\n> mind here is Scott Marlowe talking about some kernel tweaks he had to \r\n> do on his\r\n> 48 core AMD boxes to get them work right. That seems like a sketchy \r\n> association, and I don't seem to have the details handy.\r\n>\r\n> I do have the full query text and associated plans, but it's mostly noise.\r\n> Here's the fun part:\r\n>\r\n> -> Nested Loop (cost=581.52..3930.81 rows=2 width=51) (actual\r\n> time=25.681..17562.198 rows=21574 loops=1)\r\n> -> Merge Join (cost=581.52..783.07 rows=2 width=37) (actual\r\n> time=25.650..98.268 rows=21574 loops=1)\r\n> Merge Cond: (rim.instrumentid = ri.instrumentid)\r\n> -> Index Scan using reportinstruments_part_125_pkey on\r\n> reportinstruments_part_125 rim (cost=0.00..199.83 rows=110 width=8) \r\n> (actual\r\n> time=0.033..27.180 rows=20555 loops=1)\r\n> Index Cond: (reportid = 105668)\r\n> -> Sort (cost=581.52..582.31 rows=316 width=33) (actual\r\n> time=25.608..34.931 rows=21574 loops=1)\r\n> Sort Key: ri.instrumentid\r\n> Sort Method: quicksort Memory: 2454kB\r\n> -> Index Scan using riskbreakdown_part_51_pkey on\r\n> riskbreakdown_part_51 ri (cost=0.00..568.40 rows=316 width=33) \r\n> (actual\r\n> time=0.019..11.599 rows=21574 loops=1)\r\n> Index Cond: (reportid = 105668)\r\n> -> Index Scan using energymarketlist_pkey on energymarketlist ip\r\n> (cost=0.00..1573.86 rows=1 width=18) (actual time=0.408..0.808 rows=1\r\n> loops=21574)\r\n> Index Cond: ((reportid = 105668) AND (instrumentid =\r\n> ri.instrumentid))\r\n> ...\r\n> Total runtime: 21250.377 ms\r\n>\r\n> The stats are terrible and the estimates off by many orders of magnitude.\r\n> But that's not the point. It expected 2 rows and 21574 came out; fine.\r\n> Why is it taking this server 17 seconds to process 21K rows of tiny \r\n> width through a Nested Loop? Is it bouncing to a new CPU every time \r\n> the thing processes a row or something? I'm finding it hard to \r\n> imagine how this could be a PostgreSQL problem; seems more like a \r\n> kernel bug aggrevated on this system. I wonder if we could produce a \r\n> standalone test case with a similar plan from what the bad query looks \r\n> like, and ask the person with this strange system to try it. 
See if \r\n> it's possible to make it misbehave in the same way with something simpler others can try, too.\r\n\r\nWhat's the difference in speed of running the query naked and with explain analyze? It's not uncommon to run into a situation where the instrumentation of explain analyze costs significantly more than anything in the query, and for some loops this gets especially bad.\r\nIf the naked query runs in say 1 second, and the explain analyze runs in 17 seconds, then the explain analyze time keeping is costing you 16/17ths of the time to do the accounting. If that's the case then the real query is about 16 times faster than the explain analyzed one, and the rows are being processed at about 25k/second.\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\nThis email is confidential and subject to important disclaimers and\r\nconditions including on offers for the purchase or sale of\r\nsecurities, accuracy and completeness of information, viruses,\r\nconfidentiality, legal privilege, and legal entity disclaimers,\r\navailable at http://www.jpmorgan.com/pages/disclosures/email. \n",
"msg_date": "Tue, 3 Jan 2012 13:23:44 -0500",
"msg_from": "\"Strange, John W\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow nested loop execution on larger server"
},
{
"msg_contents": "On Fri, Dec 16, 2011 at 1:27 PM, Greg Smith <[email protected]> wrote:\n> Forwarding this on from someone who may pop up on the list too. This is a\n> new server that's running a query slowly. The rate at which new semops\n> happen may be related. OS is standard RHEL 5.5, kernel 2.6.18-194.32.1.el5.\n> One possibly interesting part is that this is leading edge hardware: four\n> 8-core processes with HyperThreading, for 64 threads total:\n>\n> processor : 63\n> model name : Intel(R) Xeon(R) CPU X7550 @ 2.00GHz\n> physical id : 3\n> siblings : 16\n> cpu cores : 8\n>\n> My only vague memory of issues around systems like this that comes to mind\n> here is Scott Marlowe talking about some kernel tweaks he had to do on his\n> 48 core AMD boxes to get them work right. That seems like a sketchy\n> association, and I don't seem to have the details handy.\n>\n> I do have the full query text and associated plans, but it's mostly noise.\n> Here's the fun part:\n>\n> -> Nested Loop (cost=581.52..3930.81 rows=2 width=51) (actual\n> time=25.681..17562.198 rows=21574 loops=1)\n> -> Merge Join (cost=581.52..783.07 rows=2 width=37) (actual\n> time=25.650..98.268 rows=21574 loops=1)\n> Merge Cond: (rim.instrumentid = ri.instrumentid)\n> -> Index Scan using reportinstruments_part_125_pkey on\n> reportinstruments_part_125 rim (cost=0.00..199.83 rows=110 width=8) (actual\n> time=0.033..27.180 rows=20555 loops=1)\n> Index Cond: (reportid = 105668)\n> -> Sort (cost=581.52..582.31 rows=316 width=33) (actual\n> time=25.608..34.931 rows=21574 loops=1)\n> Sort Key: ri.instrumentid\n> Sort Method: quicksort Memory: 2454kB\n> -> Index Scan using riskbreakdown_part_51_pkey on\n> riskbreakdown_part_51 ri (cost=0.00..568.40 rows=316 width=33) (actual\n> time=0.019..11.599 rows=21574 loops=1)\n> Index Cond: (reportid = 105668)\n> -> Index Scan using energymarketlist_pkey on energymarketlist ip\n> (cost=0.00..1573.86 rows=1 width=18) (actual time=0.408..0.808 rows=1\n> loops=21574)\n> Index Cond: ((reportid = 105668) AND (instrumentid =\n> ri.instrumentid))\n> ...\n> Total runtime: 21250.377 ms\n>\n> The stats are terrible and the estimates off by many orders of magnitude.\n> But that's not the point. It expected 2 rows and 21574 came out; fine.\n> Why is it taking this server 17 seconds to process 21K rows of tiny width\n> through a Nested Loop? Is it bouncing to a new CPU every time the thing\n> processes a row or something? I'm finding it hard to imagine how this could\n> be a PostgreSQL problem; seems more like a kernel bug aggrevated on this\n> system. I wonder if we could produce a standalone test case with a similar\n> plan from what the bad query looks like, and ask the person with this\n> strange system to try it. See if it's possible to make it misbehave in the\n> same way with something simpler others can try, too.\n>\n> The only slow semops thread I found in the archives turned out to be I/O\n> bound. 
This query is all CPU; combining a few examples here since this is\n> repeatable and I'm told acts the same each time:\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 12634 xxx 25 0 131g 1.4g 1.4g R 100.1 0.6 11:24.20 postgres: xxx\n> xxx [local] SELECT\n>\n> $ ./pidstat -d -T ALL -p 27683 1 100\n> 15:28:46 PID kB_rd/s kB_wr/s kB_ccwr/s Command\n> 15:28:47 27683 0.00 0.00 0.00 postgres\n>\n> The PostgreSQL 9.1.1 is a custom build, but looking at pg_config I don't see\n> anything special that might be related; it was optimized like this:\n>\n> CFLAGS=-march=core2 -O2 -pipe\n>\n> Here's a chunk of the observed and seemingly strange semop rate:\n>\n> $ strace -fttp 12634\n> Process 12634 attached - interrupt to quit\n> 15:38:55.966047 semop(16646247, 0x7fffd09dac20, 1) = 0\n> 15:39:17.490133 semop(16810092, 0x7fffd09dac20, 1) = 0\n> 15:39:17.532522 semop(16810092, 0x7fffd09dac20, 1) = 0\n> 15:39:17.534874 semop(16777323, 0x7fffd09dac00, 1) = 0\n> 15:39:17.603637 semop(16777323, 0x7fffd09dac00, 1) = 0\n> 15:39:17.640646 semop(16810092, 0x7fffd09dac20, 1) = 0\n> 15:39:17.658230 semop(16810092, 0x7fffd09dac20, 1) = 0\n> 15:39:18.905137 semop(16646247, 0x7fffd09dac20, 1) = 0\n> 15:39:33.396657 semop(16810092, 0x7fffd09dac20, 1) = 0\n> 15:39:50.208516 semop(16777323, 0x7fffd09dac00, 1) = 0\n> 15:39:54.640712 semop(16646247, 0x7fffd09dac20, 1) = 0\n> 15:39:55.468458 semop(16777323, 0x7fffd09dac00, 1) = 0\n> 15:39:55.488364 semop(16777323, 0x7fffd09dac00, 1) = 0\n> 15:39:55.489344 semop(16777323, 0x7fffd09dac00, 1) = 0\n> Process 12634 detached\n>\n> pg_locks for this 12634 shows all granted ones, nothing exciting there. I\n> asked how well this executes with enable_nestloop turned off, hoping to see\n> that next.\n>\n> This all seems odd, and I get interested and concerned when that start\n> showing up specifically on newer hardware.\n\nRidiculously late response here, but, IME, semop() calls typically\nindicate LWLock contention, but with a stock build it's pretty well\nimpossible to figure out which LWLock is being contended; compiling\nwith LWLOCK_STATS could tell ou that.\n\nShooting from the hip, the first thing that comes to mind is that the\nindex isn't fully cached in shared_buffers, and every time you hit a\npage that isn't there you have to acquire BufFreelistLock to run the\nclock sweep. If the lock is uncontended then you wouldn't get system\ncalls, but if there's other activity on the system you might get\nsomething like this.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 3 Feb 2012 12:15:17 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow nested loop execution on larger server"
}
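On a stock build (without LWLOCK_STATS) one rough way to probe Robert's cache hypothesis is to watch shared-buffer hit ratios for the index driving the inner scan; a sketch, using the index name from the plan above (the counters are cumulative since the last stats reset):

SELECT indexrelname,
       idx_blks_read,
       idx_blks_hit,
       round(idx_blks_hit::numeric
             / nullif(idx_blks_hit + idx_blks_read, 0), 4) AS hit_ratio
FROM pg_statio_user_indexes
WHERE indexrelname = 'energymarketlist_pkey';

A low hit ratio while the slow query runs would be consistent with buffer-replacement (clock sweep) work on every miss.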
] |
[
{
"msg_contents": "I have a query that used <> against an indexed column. In this\ncase I can use the reverse and use in or = and get the performance\nI need... but \"in general\"... will the planner ever use an index when \nthe related column is compared using <>?\n\nI feel like the answer is no, but wanted to ask.\n\nRoxanne\nPostgres Version 8.4.9 PostGIS version 1.5.2\n\n\n\nContext for question:\n\nI have the following query:\n\nselect *\nfrom op_region opr, yield_segment_info ysi, data_location dl\nwhere opr.op_region_id in\n (select distinct op_region_id\n from yield_point\n where yield > 0\n and area > 0\n and ST_GeometryType(location) <> 'ST_Point'\n )\nand ysi.op_region_id = opr.op_region_id\nand dl.data_set_id = opr.data_set_id\n\nYield_Point has 161,575,599 records\nwhere yield >0 and area > 0 has 161,263,193 records,\nwhere ST_GeometryType(location)<> 'ST_Point' has just 231 records\n\nyield_segment_info has 165,929 records\nop_region has 566,212 records\ndata_location has 394,763\n\nAll of these have a high volume of insert/delete's.\nThe tables have recently been vacuum full'd and the indexes reindexed.\n[they are under the management of the autovacuum, but we forced a \ncleanup on the chance that things had degraded...]\n\nIf I run an explain analyze:\n\n\"Nested Loop\n (cost=5068203.00..5068230.31 rows=3 width=225308)\n (actual time=192571.730..193625.728 rows=236 loops=1)\"\n\"->Nested Loop\n (cost=5068203.00..5068219.66 rows=1 width=57329)\n (actual time=192522.573..192786.698 rows=230 loops=1)\"\n\" ->Nested Loop\n\t(cost=5068203.00..5068211.36 rows=1 width=57268)\n\t(actual time=192509.822..192638.446 rows=230 loops=1)\"\n\" ->HashAggregate\n (cost=5068203.00..5068203.01 rows=1 width=4)\n (actual time=192471.507..192471.682 rows=230 loops=1)\"\n\" ->Seq Scan on yield_point\n (cost=0.00..5068203.00 rows=1 width=4)\n (actual time=602.174..192471.177 rows=230 loops=1)\"\n\" Filter: ((yield > 0::double precision) AND\n (area > 0::double precision) AND\n (st_geometrytype(location) <> 'ST_Point'::text))\"\n\" ->Index Scan using op_region_pkey on op_region opr\n (cost=0.00..8.33 rows=1 width=57264)\n (actual time=0.723..0.723 rows=1 loops=230)\"\n\" Index Cond: (opr.op_region_id = yield_point.op_region_id)\"\n\" ->Index Scan using yield_segment_info_key on yield_segment_info ysi\n (cost=0.00..8.29 rows=1 width=65)\n (actual time=0.643..0.643 rows=1 loops=230)\"\n\" Index Cond: (ysi.op_region_id = opr.op_region_id)\"\n\"->Index Scan using data_location_data_set_idx on data_location dl\n (cost=0.00..10.61 rows=3 width=167979)\n (actual time=3.611..3.646 rows=1 loops=230)\"\n\"Index Cond: (dl.data_set_id = opr.data_set_id)\"\n\"Total runtime: 193625.955 ms\"\n\nyield_point has the following indexes:\n btree on ST_GeometryType(location)\n gist on location\n btree on op_region_id\n\nI've also tried an index on\n ((yield > 0::double precision) AND (area > 0::double precision) \nAND (st_geometrytype(location) <> 'ST_Point'::text))\n... it still goes for the sequential scan.\n\nBut if I change it to st_geometrytype(location) = 'ST_Polygon' or\neven in ('ST_Polygon','ST_MultiPolygon')\n\nthe planner uses the index.\n\nRoxanne\n",
"msg_date": "Sat, 17 Dec 2011 10:30:07 -0500",
"msg_from": "Roxanne Reid-Bennett <[email protected]>",
"msg_from_op": true,
"msg_subject": "will the planner ever use an index when the condition is <> ?"
},
{
"msg_contents": "Normally there is no chance it could work,\nbecause (a) the planner does not know all possible values of a column,\nand (b) btree indexes cannot search on \"not equal\" operator.\n\n\nBTW I've just made a case where - logically - it could work, but it\nstill does not:\n\ncreate table nums ( num int4 not null, check(num=1 or num=2) );\ninsert into nums select case when random()<=0.99 then 1 else 2 end\nfrom generate_series(1,1000000);\ncreate index nums_idx on nums(num);\nanalyze nums;\nset constraint_exclusion to 'on';\nexplain select * from nums where num<>1;\n--planner could estimate selectivity as 1%, and use index with \"=2\"\nfilter basing on check constraint?\n\n\n\n\n2011/12/17 Roxanne Reid-Bennett <[email protected]>:\n> I have a query that used <> against an indexed column. In this\n> case I can use the reverse and use in or = and get the performance\n> I need... but \"in general\"... will the planner ever use an index when the\n> related column is compared using <>?\n>\n> I feel like the answer is no, but wanted to ask.\n>\n> Roxanne\n> Postgres Version 8.4.9 PostGIS version 1.5.2\n>\n>\n>\n> Context for question:\n>\n> I have the following query:\n>\n> select *\n> from op_region opr, yield_segment_info ysi, data_location dl\n> where opr.op_region_id in\n> (select distinct op_region_id\n> from yield_point\n> where yield > 0\n> and area > 0\n> and ST_GeometryType(location) <> 'ST_Point'\n> )\n> and ysi.op_region_id = opr.op_region_id\n> and dl.data_set_id = opr.data_set_id\n>\n> Yield_Point has 161,575,599 records\n> where yield >0 and area > 0 has 161,263,193 records,\n> where ST_GeometryType(location)<> 'ST_Point' has just 231 records\n>\n> yield_segment_info has 165,929 records\n> op_region has 566,212 records\n> data_location has 394,763\n>\n> All of these have a high volume of insert/delete's.\n> The tables have recently been vacuum full'd and the indexes reindexed.\n> [they are under the management of the autovacuum, but we forced a cleanup on\n> the chance that things had degraded...]\n>\n> If I run an explain analyze:\n>\n> \"Nested Loop\n> (cost=5068203.00..5068230.31 rows=3 width=225308)\n> (actual time=192571.730..193625.728 rows=236 loops=1)\"\n> \"->Nested Loop\n> (cost=5068203.00..5068219.66 rows=1 width=57329)\n> (actual time=192522.573..192786.698 rows=230 loops=1)\"\n> \" ->Nested Loop\n> (cost=5068203.00..5068211.36 rows=1 width=57268)\n> (actual time=192509.822..192638.446 rows=230 loops=1)\"\n> \" ->HashAggregate\n> (cost=5068203.00..5068203.01 rows=1 width=4)\n> (actual time=192471.507..192471.682 rows=230 loops=1)\"\n> \" ->Seq Scan on yield_point\n> (cost=0.00..5068203.00 rows=1 width=4)\n> (actual time=602.174..192471.177 rows=230 loops=1)\"\n> \" Filter: ((yield > 0::double precision) AND\n> (area > 0::double precision) AND\n> (st_geometrytype(location) <> 'ST_Point'::text))\"\n> \" ->Index Scan using op_region_pkey on op_region opr\n> (cost=0.00..8.33 rows=1 width=57264)\n> (actual time=0.723..0.723 rows=1 loops=230)\"\n> \" Index Cond: (opr.op_region_id = yield_point.op_region_id)\"\n> \" ->Index Scan using yield_segment_info_key on yield_segment_info ysi\n> (cost=0.00..8.29 rows=1 width=65)\n> (actual time=0.643..0.643 rows=1 loops=230)\"\n> \" Index Cond: (ysi.op_region_id = opr.op_region_id)\"\n> \"->Index Scan using data_location_data_set_idx on data_location dl\n> (cost=0.00..10.61 rows=3 width=167979)\n> (actual time=3.611..3.646 rows=1 loops=230)\"\n> \"Index Cond: (dl.data_set_id = opr.data_set_id)\"\n> \"Total runtime: 
193625.955 ms\"\n>\n> yield_point has the following indexes:\n> btree on ST_GeometryType(location)\n> gist on location\n> btree on op_region_id\n>\n> I've also tried an index on\n> ((yield > 0::double precision) AND (area > 0::double precision) AND\n> (st_geometrytype(location) <> 'ST_Point'::text))\n> ... it still goes for the sequential scan.\n>\n> But if I change it to st_geometrytype(location) = 'ST_Polygon' or\n> even in ('ST_Polygon','ST_MultiPolygon')\n>\n> the planner uses the index.\n>\n> Roxanne\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 17 Dec 2011 17:24:15 +0100",
"msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: will the planner ever use an index when the condition\n is <> ?"
},
{
"msg_contents": "17.12.2011 18:25 пользователь \"Filip Rembiałkowski\" <[email protected]>\nнаписал:\n>\n> Normally there is no chance it could work,\n> because (a) the planner does not know all possible values of a column,\n> and (b) btree indexes cannot search on \"not equal\" operator.\n>\n\nWhy so? a<>b is same as (a<b or a>b), so, planner should chech this option.\n\n\n17.12.2011 18:25 пользователь \"Filip Rembiałkowski\" <[email protected]> написал:\n>\n> Normally there is no chance it could work,\n> because (a) the planner does not know all possible values of a column,\n> and (b) btree indexes cannot search on \"not equal\" operator.\n>\nWhy so? a<>b is same as (a<b or a>b), so, planner should chech this option.",
"msg_date": "Sun, 18 Dec 2011 12:41:21 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: will the planner ever use an index when the condition\n is <> ?"
},
{
"msg_contents": "On 12/17/2011 11:24 AM, Filip Rembiałkowski wrote:\n> Normally there is no chance it could work,\n> because (a) the planner does not know all possible values of a column,\n> and (b) btree indexes cannot search on \"not equal\" operator.\n\nIs there an index type that can check \"not equal\"?\nThis specific column has a limited number of possible values - it is \nessentially an enumerated list.\n\nRoxanne\n>\n>\n> BTW I've just made a case where - logically - it could work, but it\n> still does not:\n>\n> create table nums ( num int4 not null, check(num=1 or num=2) );\n> insert into nums select case when random()<=0.99 then 1 else 2 end\n> from generate_series(1,1000000);\n> create index nums_idx on nums(num);\n> analyze nums;\n> set constraint_exclusion to 'on';\n> explain select * from nums where num<>1;\n> --planner could estimate selectivity as 1%, and use index with \"=2\"\n> filter basing on check constraint?\n>\n>\n>\n>\n> 2011/12/17 Roxanne Reid-Bennett<[email protected]>:\n>> I have a query that used<> against an indexed column. In this\n>> case I can use the reverse and use in or = and get the performance\n>> I need... but \"in general\"... will the planner ever use an index when the\n>> related column is compared using<>?\n>>\n>> I feel like the answer is no, but wanted to ask.\n>>\n>> Roxanne\n>> Postgres Version 8.4.9 PostGIS version 1.5.2\n>>\n>>\n>>\n>> Context for question:\n>>\n>> I have the following query:\n>>\n>> select *\n>> from op_region opr, yield_segment_info ysi, data_location dl\n>> where opr.op_region_id in\n>> (select distinct op_region_id\n>> from yield_point\n>> where yield> 0\n>> and area> 0\n>> and ST_GeometryType(location)<> 'ST_Point'\n>> )\n>> and ysi.op_region_id = opr.op_region_id\n>> and dl.data_set_id = opr.data_set_id\n>>\n>> Yield_Point has 161,575,599 records\n>> where yield>0 and area> 0 has 161,263,193 records,\n>> where ST_GeometryType(location)<> 'ST_Point' has just 231 records\n>>\n>> yield_segment_info has 165,929 records\n>> op_region has 566,212 records\n>> data_location has 394,763\n>>\n>> All of these have a high volume of insert/delete's.\n>> The tables have recently been vacuum full'd and the indexes reindexed.\n>> [they are under the management of the autovacuum, but we forced a cleanup on\n>> the chance that things had degraded...]\n>>\n>> If I run an explain analyze:\n>>\n>> \"Nested Loop\n>> (cost=5068203.00..5068230.31 rows=3 width=225308)\n>> (actual time=192571.730..193625.728 rows=236 loops=1)\"\n>> \"->Nested Loop\n>> (cost=5068203.00..5068219.66 rows=1 width=57329)\n>> (actual time=192522.573..192786.698 rows=230 loops=1)\"\n>> \" ->Nested Loop\n>> (cost=5068203.00..5068211.36 rows=1 width=57268)\n>> (actual time=192509.822..192638.446 rows=230 loops=1)\"\n>> \" ->HashAggregate\n>> (cost=5068203.00..5068203.01 rows=1 width=4)\n>> (actual time=192471.507..192471.682 rows=230 loops=1)\"\n>> \" ->Seq Scan on yield_point\n>> (cost=0.00..5068203.00 rows=1 width=4)\n>> (actual time=602.174..192471.177 rows=230 loops=1)\"\n>> \" Filter: ((yield> 0::double precision) AND\n>> (area> 0::double precision) AND\n>> (st_geometrytype(location)<> 'ST_Point'::text))\"\n>> \" ->Index Scan using op_region_pkey on op_region opr\n>> (cost=0.00..8.33 rows=1 width=57264)\n>> (actual time=0.723..0.723 rows=1 loops=230)\"\n>> \" Index Cond: (opr.op_region_id = yield_point.op_region_id)\"\n>> \" ->Index Scan using yield_segment_info_key on yield_segment_info ysi\n>> (cost=0.00..8.29 rows=1 width=65)\n>> (actual time=0.643..0.643 rows=1 
loops=230)\"\n>> \" Index Cond: (ysi.op_region_id = opr.op_region_id)\"\n>> \"->Index Scan using data_location_data_set_idx on data_location dl\n>> (cost=0.00..10.61 rows=3 width=167979)\n>> (actual time=3.611..3.646 rows=1 loops=230)\"\n>> \"Index Cond: (dl.data_set_id = opr.data_set_id)\"\n>> \"Total runtime: 193625.955 ms\"\n>>\n>> yield_point has the following indexes:\n>> btree on ST_GeometryType(location)\n>> gist on location\n>> btree on op_region_id\n>>\n>> I've also tried an index on\n>> ((yield> 0::double precision) AND (area> 0::double precision) AND\n>> (st_geometrytype(location)<> 'ST_Point'::text))\n>> ... it still goes for the sequential scan.\n>>\n>> But if I change it to st_geometrytype(location) = 'ST_Polygon' or\n>> even in ('ST_Polygon','ST_MultiPolygon')\n>>\n>> the planner uses the index.\n>>\n>> Roxanne\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Sun, 18 Dec 2011 09:52:14 -0500",
"msg_from": "Roxanne Reid-Bennett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: will the planner ever use an index when the condition\n is <> ?"
},
{
"msg_contents": "Roxanne Reid-Bennett <[email protected]> writes:\n> On 12/17/2011 11:24 AM, Filip Rembiałkowski wrote:\n>> Normally there is no chance it could work,\n>> because (a) the planner does not know all possible values of a column,\n>> and (b) btree indexes cannot search on \"not equal\" operator.\n\n> Is there an index type that can check \"not equal\"?\n\nThere is not. It's not so much that it's logically impossible as that\nit doesn't seem worth the trouble to implement and maintain, because\nmost of the time a query like \"where x <> constant\" is going to fetch\nmost of the table, and so it would be better done as a seqscan anyway.\n\nIf you have a specific case where that's not true, you might consider\na partial index (CREATE INDEX ... WHERE x <> constant). But the details\nof that would depend a lot on the queries you're concerned about.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Dec 2011 13:31:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: will the planner ever use an index when the condition is <> ? "
},
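Applied to the tables from the start of this thread, Tom's suggestion might look like the sketch below (the index name is made up, and whether the planner actually picks it still depends on its row estimates):

CREATE INDEX yield_point_not_point_idx
    ON yield_point (op_region_id)
    WHERE st_geometrytype(location) <> 'ST_Point';

-- A query that repeats the index predicate verbatim lets the planner
-- prove the predicate and consider the partial index:
SELECT DISTINCT op_region_id
FROM yield_point
WHERE yield > 0
  AND area > 0
  AND st_geometrytype(location) <> 'ST_Point';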
{
"msg_contents": "On Sun, Dec 18, 2011 at 16:52, Roxanne Reid-Bennett <[email protected]> wrote:\n> Is there an index type that can check \"not equal\"?\n> This specific column has a limited number of possible values - it is\n> essentially an enumerated list.\n\nInstead of writing WHERE foo<>3 you could rewrite it as WHERE foo IN\n(1,2,4,...) or WHERE foo < 3 OR foo > 3. Both of these are indexable\nqueries, but obviously the planner may choose not to use index scan if\nit's not worth it.\n\nRegards,\nMarti\n",
"msg_date": "Sun, 18 Dec 2011 22:11:28 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: will the planner ever use an index when the condition\n is <> ?"
},
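Against the same table, Marti's rewrites would look something like this (a sketch using the column names from the original post):

-- enumerate the values you do want:
SELECT op_region_id
FROM yield_point
WHERE st_geometrytype(location) IN ('ST_Polygon', 'ST_MultiPolygon');

-- or express <> as two open-ended ranges:
SELECT op_region_id
FROM yield_point
WHERE st_geometrytype(location) < 'ST_Point'
   OR st_geometrytype(location) > 'ST_Point';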
{
"msg_contents": "On 12/18/2011 1:31 PM, Tom Lane wrote:\n> If you have a specific case where that's not true, you might consider \n> a partial index (CREATE INDEX ... WHERE x <> constant). But the \n> details of that would depend a lot on the queries you're concerned \n> about. regards, tom lane \n\nWhich I had tried in the form of (st_geometrytype(location) <> \n'ST_Point'::text)... planner never picked it (for the scenario given \nbefore). But this thread was all pretty much .. design/plan/future use. \nThis specific instance I've handled with in \n(\"ST_Polygon\",\"ST_MultiPolygon\").\n\nThank you for the feedback.\n\nRoxanne\n",
"msg_date": "Sun, 18 Dec 2011 15:18:49 -0500",
"msg_from": "Roxanne Reid-Bennett <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: will the planner ever use an index when the condition\n is <> ?"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello\n\nI am sending this email to ask if anyone has noticed a change in how\na server running postgreSQL 9.1 uses and allocates memory compared to\nolder versions.\n\nWe upgraded all our systems from 8.3 to 9.1 a couple of weeks ago, and\nwe have experienced a radical change in how our servers make use of\nmemory. How memory is allocated has become more unstable and the swap\nusage has increased dramatically.\n\nThe pattern that we have started seeing is:\n\n* Sudden decrease of swap when running backup/vacuum+analyze jobs\n* Full use of cached memory when running backup/vacuum+analyze jobs\n* Sudden increase of swap and unused memory when backup/vacuum+analyze\njobs are finnished.\n* Progressive decrease of swap during the day.\n\n\nHere is a list of things about this upgrade to version 9.1 that can be\ninteresting when analyzing this change of behavior:\n\n* The servers are running the samme OS version and linux kernel as\nwith 8.3.\n\n* We are running the same values for parameters related to memory\nallocation as we used in 8.3.\n\n* We are running the same backups and maintenance jobs as with version\n8.3. These jobs are running at the exactly same time as with 8.3.\n\n* Backups (PITR, pg_dumps) and maintenances (vacuum, analyze) jobs are\nexecuted between midnight and early morning.\n\n* We run several postgreSQL clusters per server, running in different\nIPs and disks.\n\n* We have not seen any significant change in how databases are\nused/accessed after the upgrade to 9.1.\n\n* We upgraded in the first time from 8.3.12 to 9.1.2, but because this\nbug: http://archives.postgresql.org/pgsql-bugs/2011-12/msg00068.php\nwe had to downgrade to 9.1.1. We thought in the begynning that our\nmemory problems were related to this bug, but everything is the same\nwith 9.1.1.\n\n* A couple of days ago we decreased the values of maintenance_work_mem\nand work_mem over a 50% in relation to values used with 8.3. The only\nchange we have seen is even more unused memory after backup/vacuum\n+analyze jobs are finnished.\n\nHere you have some graphs that can help to get a picture about what we\nare talking about:\n\n* Overview of how memory use changed in one of our servers after the\nupgrade in the begynning og week 49:\nhttp://folk.uio.no/rafael/upgrade_to_9.1/server-1/memory-month.png\nhttp://folk.uio.no/rafael/upgrade_to_9.1/server-1/memory-year.png\n\n* We could think that all this happens because we are running to much\nin one server. Here are some graphs from a server with 30GB+ running\nonly one postgres cluster (shared_memory = 6GB,\nmaintenance_work_memory = 512MB, work_mem = 32MB) for a couple of days:\n\nhttp://folk.uio.no/rafael/upgrade_to_9.1/server-2/memory-week.png\n\nThe memory pattern is the same even when running only one postgres\ncluster in a server with enough memory.\n\nAny ideas about why this dramatic change in memory usage when the only\nthing apparently changed from our side is the postgres version?\n\nThanks in advance for any help.\n\nregards,\n- -- \nRafael Martinez Guerrero\nCenter for Information Technology\nUniversity of Oslo, Norway\n\nPGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.11 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/\n\niEYEARECAAYFAk7vUpYACgkQBhuKQurGihTvjACff5J08pNJuRDgkegYdtQ5zp52\nGeoAnRaaU+F/C/udQ7lMl/TkvRKX2WnP\n=VcDk\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 19 Dec 2011 16:04:54 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "Wow, upgrading 3 major releases at a go. :) It would probably be\nuseful to use the helpful:\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nto get the information that is needed to the right people.\n\nRegards,\nKen\n\nOn Mon, Dec 19, 2011 at 04:04:54PM +0100, Rafael Martinez wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Hello\n> \n> I am sending this email to ask if anyone has noticed a change in how\n> a server running postgreSQL 9.1 uses and allocates memory compared to\n> older versions.\n> \n> We upgraded all our systems from 8.3 to 9.1 a couple of weeks ago, and\n> we have experienced a radical change in how our servers make use of\n> memory. How memory is allocated has become more unstable and the swap\n> usage has increased dramatically.\n",
"msg_date": "Mon, 19 Dec 2011 09:54:11 -0600",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "On Mon, Dec 19, 2011 at 17:04, Rafael Martinez <[email protected]> wrote:\n> * Sudden decrease of swap when running backup/vacuum+analyze jobs\n\nDo you know for certain that this memory use is attributed to\nvacuum/analyze/backup, or are you just guessing? You should isolate\nwhether it's the vacuum or a backup process/backend that takes this\nmemory.\n\nDo you launch vacuum/analyze manually or are you just relying on autovacuum?\nHow many parallel vacuum jobs are there?\nWhat's your autovacuum_max_workers set to?\nHow large is your database?\nHow did you perform the upgrade -- via pg_upgrade or pg_dump?\n\n> Any ideas about why this dramatic change in memory usage when the only\n> thing apparently changed from our side is the postgres version?\n\nWell, for one, there have been many planner changes that make it use\nmemory more aggressively, these probably being the most significant:\n* Materialize for nested loop queries in 9.0:\nhttp://rhaas.blogspot.com/2010/04/materialization-in-postgresql-90.html\n* Hash join usage for RIGHT and FULL OUTER JOINs in 9.0\n\nHowever, none of these would apply to vacuum, analyze or backups.\n\nRegards,\nMarti\n",
"msg_date": "Mon, 19 Dec 2011 18:02:56 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
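Most of the numbers Marti asks for can be pulled straight from the server; a sketch (the last query assumes 9.1, where pg_stat_activity still exposes current_query):

SHOW autovacuum_max_workers;
SHOW maintenance_work_mem;
SELECT pg_size_pretty(pg_database_size(current_database()));
SELECT count(*) AS running_autovacuum_workers
FROM pg_stat_activity
WHERE current_query LIKE 'autovacuum:%';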
{
"msg_contents": "Le 19 décembre 2011 16:04, Rafael Martinez <[email protected]> a écrit :\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Hello\n>\n> I am sending this email to ask if anyone has noticed a change in how\n> a server running postgreSQL 9.1 uses and allocates memory compared to\n> older versions.\n>\n> We upgraded all our systems from 8.3 to 9.1 a couple of weeks ago, and\n> we have experienced a radical change in how our servers make use of\n> memory. How memory is allocated has become more unstable and the swap\n> usage has increased dramatically.\n>\n> The pattern that we have started seeing is:\n>\n> * Sudden decrease of swap when running backup/vacuum+analyze jobs\n> * Full use of cached memory when running backup/vacuum+analyze jobs\n> * Sudden increase of swap and unused memory when backup/vacuum+analyze\n> jobs are finnished.\n> * Progressive decrease of swap during the day.\n>\n>\n> Here is a list of things about this upgrade to version 9.1 that can be\n> interesting when analyzing this change of behavior:\n>\n> * The servers are running the samme OS version and linux kernel as\n> with 8.3.\n>\n> * We are running the same values for parameters related to memory\n> allocation as we used in 8.3.\n>\n> * We are running the same backups and maintenance jobs as with version\n> 8.3. These jobs are running at the exactly same time as with 8.3.\n>\n> * Backups (PITR, pg_dumps) and maintenances (vacuum, analyze) jobs are\n> executed between midnight and early morning.\n>\n> * We run several postgreSQL clusters per server, running in different\n> IPs and disks.\n>\n> * We have not seen any significant change in how databases are\n> used/accessed after the upgrade to 9.1.\n>\n> * We upgraded in the first time from 8.3.12 to 9.1.2, but because this\n> bug: http://archives.postgresql.org/pgsql-bugs/2011-12/msg00068.php\n> we had to downgrade to 9.1.1. We thought in the begynning that our\n> memory problems were related to this bug, but everything is the same\n> with 9.1.1.\n>\n> * A couple of days ago we decreased the values of maintenance_work_mem\n> and work_mem over a 50% in relation to values used with 8.3. The only\n> change we have seen is even more unused memory after backup/vacuum\n> +analyze jobs are finnished.\n>\n> Here you have some graphs that can help to get a picture about what we\n> are talking about:\n>\n> * Overview of how memory use changed in one of our servers after the\n> upgrade in the begynning og week 49:\n> http://folk.uio.no/rafael/upgrade_to_9.1/server-1/memory-month.png\n> http://folk.uio.no/rafael/upgrade_to_9.1/server-1/memory-year.png\n>\n> * We could think that all this happens because we are running to much\n> in one server. Here are some graphs from a server with 30GB+ running\n> only one postgres cluster (shared_memory = 6GB,\n> maintenance_work_memory = 512MB, work_mem = 32MB) for a couple of days:\n>\n> http://folk.uio.no/rafael/upgrade_to_9.1/server-2/memory-week.png\n>\n> The memory pattern is the same even when running only one postgres\n> cluster in a server with enough memory.\n>\n> Any ideas about why this dramatic change in memory usage when the only\n> thing apparently changed from our side is the postgres version?\n>\n> Thanks in advance for any help.\n\nCan you report what is filling the cache and the swap ?\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Tue, 20 Dec 2011 12:15:18 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "On 19/12/2011 11:04 PM, Rafael Martinez wrote:\n> Any ideas about why this dramatic change in memory usage when the only\n> thing apparently changed from our side is the postgres version?\n>\nIt'd be interesting to know how much of your workload operates with \nSERIALIZABLE transactions, as the behavior of those has changed \nsignificantly in 9.1 and they _are_ more expensive in RAM terms now.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 21 Dec 2011 07:48:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 12/20/2011 12:15 PM, C�dric Villemain wrote:\n> Le 19 d�cembre 2011 16:04, Rafael Martinez <[email protected]> a �crit :\n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n>>\n>> Hello\n>>\n>> I am sending this email to ask if anyone has noticed a change in how\n>> a server running postgreSQL 9.1 uses and allocates memory compared to\n>> older versions.\n>>\n>> We upgraded all our systems from 8.3 to 9.1 a couple of weeks ago, and\n>> we have experienced a radical change in how our servers make use of\n>> memory. How memory is allocated has become more unstable and the swap\n>> usage has increased dramatically.\n>>\n[.......]\n> \n> Can you report what is filling the cache and the swap ?\n> \n\nHello\n\nWe are running RHEL4 with a 2.6.9 kernels and we do not know how to\ncheck how much swap a particular process is using. It looks like with\nkernels > 2.6.16 you can get this informaton via /proc/PID/smaps.\n\nWe have been able to run some tests and we think we have found a reason\nfor the change in memory usage with version 9.1\n\nIt looks like it is a combination of how pg_dump works now and how the\noperative system manages memory.\n\nWhat we have found out is that the server process attending to pg_dump\nuses much more memory with 9.1 than with 8.3 dumping the same database.\n\nThis is the test we have done with 8.3 and 9.1:\n\n* Clean reboot of the server.\n* Clean start of postgres server\n* One unique process running against postgres:\npgdump -c --verbose <dbname> | gzip > dump_file.dump.gz\n\n* DBsize = 51GB+\n* shared_buffers = 2GB\n* work_mem = 16MB\n* maintenance_work_mem = 256MB\n* Total server memory = 8GB\n\n* We have collected data via /proc of how the system has been using\nmemory and VSIZE, RSS and SHARE memory values for all postgres processes.\n\nSome graphs showing what happens during the dump of the database with\n9.1 and 8.3 can be consulted here:\n\nhttp://folk.uio.no/rafael/upgrade_to_9.1/test/\n\nAs you can see, the server process with 9.1 memory usage grows more than\nthe dobbel of the value defined with shared_buffers. With 8.3 is half of\nthis.\n\nWhat we have seen in these tests corresponds with what we have seen in\nproduction Ref:[1]. The 'cached' memory follows the 'inactive' memory\nwhen this one gets over a certain limit. And 'active' and 'inactive'\nmemory cross their paths and exchange roles.\n\nWe have not experienced the use of swap under these tests as we do in\nproduction probably because we are not running several jobs in parallel.\n\nSo the drop in 'cached' memory we see in production is not related to\nthe termination of a backup or maintenance job, it is related to how\nmuch 'inactive' memory the system has. It looks like some kernel limit\nis reached and the kernel starts to reallocate how the memory is used.\n\nWhat it's clear is that:\n\n* Running pg_dump needs/uses much more memory with 9.1 than with 8.3\n(33% more). The same job takes 15min.(18%) more with 9.1 than 8.3\n\n* With 9.1 the assignation the system does of wich memory is 'active'\nand wich one is 'inactive' has changed Ref:[2].\n\nWe still has some things to find out:\n\n* We are not sure why swap usage has increased dramatically. 
We have in\ntheory a lot of memory 'cached' that could be used instead of swap.\n\n* We still do not understand why the assignation of which memory is\n'active' and which one is 'inactive' has such an impact in how memory is\nmanaged.\n\n* We are trying to find out if the kernel has some memory parameters\nthat can be tunned to change the behavior we are seeing.\n\n[1] http://folk.uio.no/rafael/upgrade_to_9.1/server-1/memory-week.png\n[2] http://folk.uio.no/rafael/upgrade_to_9.1/server-1/memory-month.png\n\nThanks in advance to anyone trying to find an explanation.\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAk7yGjYACgkQBhuKQurGihTeHwCggv0yjskln8OkW2g5Kj6T4YGR\njekAn3FhUbCUR0RjXS+LLJpyzAGNQjys\n=lBqa\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 21 Dec 2011 18:41:10 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 12/21/2011 12:48 AM, Craig Ringer wrote:\n> On 19/12/2011 11:04 PM, Rafael Martinez wrote:\n>> Any ideas about why this dramatic change in memory usage when the only\n>> thing apparently changed from our side is the postgres version?\n>>\n> It'd be interesting to know how much of your workload operates with\n> SERIALIZABLE transactions, as the behavior of those has changed\n> significantly in 9.1 and they _are_ more expensive in RAM terms now.\n> \n\nHello\n\nAs long as I know, all the databases are using the default, \"read\ncommitted\".\n\nWe have almost 500 databases across all our servers, but we are only\ndbas. We provide the infrastructure necessary to run this and help users\nwhen they need it but we have not 100% control over how they are using\nthe databases ;-)\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAk7yHHAACgkQBhuKQurGihQz1gCdGJY6vk89lHKMldkYlkxOeJYJ\nGSMAoKDRCRo1UpqlUgItzCm/XV9aCbb8\n=7f6R\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 21 Dec 2011 18:50:40 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "On Wed, Dec 21, 2011 at 10:50 AM, Rafael Martinez\n<[email protected]> wrote:\n> As long as I know, all the databases are using the default, \"read\n> committed\".\n\nNote that backups run in serializable mode.\n",
"msg_date": "Wed, 21 Dec 2011 11:18:21 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "Scott Marlowe <[email protected]> wrote:\n> On Wed, Dec 21, 2011 at 10:50 AM, Rafael Martinez\n> <[email protected]> wrote:\n>> As long as I know, all the databases are using the default, \"read\n>> committed\".\n> \n> Note that backups run in serializable mode.\n \nIn 9.1 they default to running in \"repeatable read\". You can choose\nthe --serializable-deferrable option, which runs at the serializable\ntransaction isolation level, sort of. It does that by waiting for a\n\"safe\" snapshot and then running the same as a repeatable read\ntransaction -- so either way you have none of the overhead of the\nnew serializable transactions.\n \nBesides that, almost all of the additional RAM usage for the new\nserializable implementation is in shared memory. As you can see in\nthe graphs from Rafael, the difference isn't very dramatic as a\npercentage of a typical production configuration.\n \n-Kevin\n",
"msg_date": "Wed, 21 Dec 2011 12:54:32 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "Hello,\n\nCan you find some relation between the memory usage and insert statements?\n9.1.2 has memory problems with inserts (even the simplest ones) on Linux\nand Windows too, I could produce it. Using pgbench also shows it. Some\nmemory is not reclaimed.\nI could produce it also with 8.4.9 on Linux, I haven't tried 8.4.10 yet.\n\nBest regards,\nOtto\n\n\n2011/12/21 Rafael Martinez <[email protected]>\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On 12/21/2011 12:48 AM, Craig Ringer wrote:\n> > On 19/12/2011 11:04 PM, Rafael Martinez wrote:\n> >> Any ideas about why this dramatic change in memory usage when the only\n> >> thing apparently changed from our side is the postgres version?\n> >>\n> > It'd be interesting to know how much of your workload operates with\n> > SERIALIZABLE transactions, as the behavior of those has changed\n> > significantly in 9.1 and they _are_ more expensive in RAM terms now.\n> >\n>\n> Hello\n>\n> As long as I know, all the databases are using the default, \"read\n> committed\".\n>\n> We have almost 500 databases across all our servers, but we are only\n> dbas. We provide the infrastructure necessary to run this and help users\n> when they need it but we have not 100% control over how they are using\n> the databases ;-)\n>\n> regards,\n> - --\n> Rafael Martinez Guerrero\n> Center for Information Technology\n> University of Oslo, Norway\n>\n> PGP Public Key: http://folk.uio.no/rafael/\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v2.0.14 (GNU/Linux)\n>\n> iEYEARECAAYFAk7yHHAACgkQBhuKQurGihQz1gCdGJY6vk89lHKMldkYlkxOeJYJ\n> GSMAoKDRCRo1UpqlUgItzCm/XV9aCbb8\n> =7f6R\n> -----END PGP SIGNATURE-----\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHello,Can you find some relation between the memory usage and insert statements? 9.1.2 has memory problems with inserts (even the simplest ones) on Linux and Windows too, I could produce it. Using pgbench also shows it. Some memory is not reclaimed.\nI could produce it also with 8.4.9 on Linux, I haven't tried 8.4.10 yet.Best regards,Otto2011/12/21 Rafael Martinez <[email protected]>\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 12/21/2011 12:48 AM, Craig Ringer wrote:\n> On 19/12/2011 11:04 PM, Rafael Martinez wrote:\n>> Any ideas about why this dramatic change in memory usage when the only\n>> thing apparently changed from our side is the postgres version?\n>>\n> It'd be interesting to know how much of your workload operates with\n> SERIALIZABLE transactions, as the behavior of those has changed\n> significantly in 9.1 and they _are_ more expensive in RAM terms now.\n>\n\nHello\n\nAs long as I know, all the databases are using the default, \"read\ncommitted\".\n\nWe have almost 500 databases across all our servers, but we are only\ndbas. 
We provide the infrastructure necessary to run this and help users\nwhen they need it but we have not 100% control over how they are using\nthe databases ;-)\n\nregards,\n- --\n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAk7yHHAACgkQBhuKQurGihQz1gCdGJY6vk89lHKMldkYlkxOeJYJ\nGSMAoKDRCRo1UpqlUgItzCm/XV9aCbb8\n=7f6R\n-----END PGP SIGNATURE-----\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 22 Dec 2011 00:29:16 +0100",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 12/22/2011 12:29 AM, Havasv�lgyi Ott� wrote:\n> Hello,\n> \n> Can you find some relation between the memory usage and insert\n> statements? 9.1.2 has memory problems with inserts (even the simplest\n> ones) on Linux and Windows too, I could produce it. Using pgbench also\n> shows it. Some memory is not reclaimed.\n> I could produce it also with 8.4.9 on Linux, I haven't tried 8.4.10 yet.\n> \n[...]\n\nHello\n\nAre you thinking about this bug?:\nhttp://archives.postgresql.org/pgsql-bugs/2011-12/msg00068.php\n\nOur problem should not have anything to do with this bug (it was\nintroduced in 9.1.2)\n\nWe could not finish a full import of some of our databases with 9.1.2\nbecause all ram+swap was used in a matter of minuttes. We are using\n9.1.1 and we haven't seen the 9.1.2 behavior.\n\nregards,\n- -- \n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAk7y8aUACgkQBhuKQurGihTD8gCgk0Frrd/mEjQrIgG9K0dzhNxN\nHzcAnRiQKWBgwZaNSmY+zrGjYSJFva9o\n=zcv3\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 22 Dec 2011 10:00:21 +0100",
"msg_from": "Rafael Martinez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
},
{
"msg_contents": "Yes, perhaps it is related to it, and the cause is the same. But they\nmention here a special type inet.\n\nBest regards,\nOtto\n\n2011/12/22 Rafael Martinez <[email protected]>\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> On 12/22/2011 12:29 AM, Havasvölgyi Ottó wrote:\n> > Hello,\n> >\n> > Can you find some relation between the memory usage and insert\n> > statements? 9.1.2 has memory problems with inserts (even the simplest\n> > ones) on Linux and Windows too, I could produce it. Using pgbench also\n> > shows it. Some memory is not reclaimed.\n> > I could produce it also with 8.4.9 on Linux, I haven't tried 8.4.10 yet.\n> >\n> [...]\n>\n> Hello\n>\n> Are you thinking about this bug?:\n> http://archives.postgresql.org/pgsql-bugs/2011-12/msg00068.php\n>\n> Our problem should not have anything to do with this bug (it was\n> introduced in 9.1.2)\n>\n> We could not finish a full import of some of our databases with 9.1.2\n> because all ram+swap was used in a matter of minuttes. We are using\n> 9.1.1 and we haven't seen the 9.1.2 behavior.\n>\n> regards,\n> - --\n> Rafael Martinez Guerrero\n> Center for Information Technology\n> University of Oslo, Norway\n>\n> PGP Public Key: http://folk.uio.no/rafael/\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v2.0.14 (GNU/Linux)\n>\n> iEYEARECAAYFAk7y8aUACgkQBhuKQurGihTD8gCgk0Frrd/mEjQrIgG9K0dzhNxN\n> HzcAnRiQKWBgwZaNSmY+zrGjYSJFva9o\n> =zcv3\n> -----END PGP SIGNATURE-----\n>\n\nYes, perhaps it is related to it, and the cause is the same. But they mention here a special type inet.Best regards,Otto2011/12/22 Rafael Martinez <[email protected]>\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 12/22/2011 12:29 AM, Havasvölgyi Ottó wrote:\n> Hello,\n>\n> Can you find some relation between the memory usage and insert\n> statements? 9.1.2 has memory problems with inserts (even the simplest\n> ones) on Linux and Windows too, I could produce it. Using pgbench also\n> shows it. Some memory is not reclaimed.\n> I could produce it also with 8.4.9 on Linux, I haven't tried 8.4.10 yet.\n>\n[...]\n\nHello\n\nAre you thinking about this bug?:\nhttp://archives.postgresql.org/pgsql-bugs/2011-12/msg00068.php\n\nOur problem should not have anything to do with this bug (it was\nintroduced in 9.1.2)\n\nWe could not finish a full import of some of our databases with 9.1.2\nbecause all ram+swap was used in a matter of minuttes. We are using\n9.1.1 and we haven't seen the 9.1.2 behavior.\n\nregards,\n- --\n Rafael Martinez Guerrero\n Center for Information Technology\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.14 (GNU/Linux)\n\niEYEARECAAYFAk7y8aUACgkQBhuKQurGihTD8gCgk0Frrd/mEjQrIgG9K0dzhNxN\nHzcAnRiQKWBgwZaNSmY+zrGjYSJFva9o\n=zcv3\n-----END PGP SIGNATURE-----",
"msg_date": "Thu, 22 Dec 2011 10:58:57 +0100",
"msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dramatic change in memory usage with version 9.1"
}
] |
[
{
"msg_contents": "Hi - I'm running into an OOM-killer issue when running a specific query (no\nvirtual machine running) and, based on researching the issue, I can\nprobably fix by making the following sysctl adjustments:\n vm.overcommit_memory = 2\n vm.overcommit_ratio = 0\nHowever, I am perplexed as to why I am running into the issue in the first\nplace. The machine (running Linux 2.6.34.7-61.fc13.x86_64) is dedicated to\nPostgres (v9.0.0 [RPM package: postgresql90-9.0.0-1PGDG.fc13.1.x86_64]) and\nthe following memory usage is pretty typical for the system (via \"top\"):\n Mem: 8121992k total, 2901960k used, 5220032k free, 237408k buffers\n Swap: 1048572k total, 235940k used, 812632k free, 2053768k cached\nUnder steady-state conditions, the following shows the virtual memory size\nfor postgres backend processes:\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 8506 postgres 20 0 2327m 3084 1792 S 0.0 0.0 0:00.33 postgres\n 8504 postgres 20 0 2326m 14m 13m S 0.0 0.2 0:01.32 postgres\n 8505 postgres 20 0 2326m 728 452 S 0.0 0.0 0:00.91 postgres\n 3582 postgres 20 0 2325m 54m 53m S 0.0 0.7 0:02.03 postgres\nMy current relevant postgresql.conf settings are the following:\n shared_buffers = 2100MB\n temp_buffers = 8MB\n work_mem = 32MB\n maintenance_work_mem = 16MB\n max_stack_depth = 2MB\n constraint_exclusion = partition\nWhen executing the query, I've been watching the \"top\" activity, sorted by\nresident memory. Upon execution, no other processes appear to take\nadditional resident memory, except a postgres backend process servicing the\nquery, which goes to +6Gb (triggering the OOM-killer). Given the settings\nin postgresql.conf, and my anecdotal understanding of Postgres memory\nmanagement functions, I am uncertain why Postgres exhausts physical memory\ninstead of swapping to temporary files. Do I need to lower my work_mem\nsetting since the subquery involves a partitioned table, causing a\nmultiplier effect to the memory used (I have tried per-connection settings\nof 6MB)? Would tweaking query planning settings help?\n\nThanks in advance!\n\nIf it helps, I have included the query (with column names aliased to their\ndata type), a brief description of the applicable table's contents, and an\nabridged copy of the EXPLAIN ANALYZE output\n\nSELECT \"bigint\", \"date\", \"text\"\nFROM tableA AS A\nWHERE A.\"boolean\" = 'true' AND\n(A.\"text\" = 'abc' OR A.\"text\" = 'xyz') AND\nA.\"bigint\" NOT IN (SELECT \"bigint\" FROM tableB)\nORDER BY A.\"date\" DESC;\n\ntableA:\n - total table contains ~11 million records (total width: 109 bytes)\n - partitioned by month (180 partitions)\n - each table partition contains ~100k records\ntableB:\n - total table contains ~400k records (total width: 279 bytes)\n - partitioned by month (96 partitions)\n - each table partition contains ~30k records\n\n\nEXPLAIN ANALYZE output:\n Note: could not produce output for exact query due to OOM-killer, but\nran query by limiting the subquery to the first 50 results. 
The planner\niterates over all partitions, but only the first two partitions are noted\nfor brevity.\n\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=451279.67..451279.70 rows=10 width=55) (actual\ntime=18343.085..18343.090 rows=10 loops=1)\n -> Sort (cost=451279.67..456398.37 rows=2047480 width=55) (actual\ntime=18343.083..18343.087 rows=10 loops=1)\n Sort Key: A.\"Date\"\n Sort Method: top-N heapsort Memory: 26kB\n -> Result (cost=1.21..407034.37 rows=2047480 width=55) (actual\ntime=0.793..17014.726 rows=4160606 loops=1)\n -> Append (cost=1.21..407034.37 rows=2047480 width=55)\n(actual time=0.792..16119.298 rows=4160606 loops=1)\n -> Seq Scan on tableA A (cost=1.21..19.08 rows=1\nwidth=44) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: (\"boolean\" AND (NOT (hashed SubPlan 1))\nAND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1\n -> Limit (cost=0.00..1.08 rows=50 width=8)\n(actual time=0.010..0.054 rows=50 loops=210)\n -> Result (cost=0.00..9249.46\nrows=427846 width=8) (actual time=0.009..0.044 rows=50 loops=210)\n -> Append (cost=0.00..9249.46\nrows=427846 width=8) (actual time=0.008..0.031 rows=50 loops=210)\n -> Seq Scan on tableB\n(cost=0.00..15.30 rows=530 width=8) (actual time=0.001..0.001 rows=0\nloops=210)\n -> Seq Scan on\ntableB_201201 tableB (cost=0.00..15.30 rows=530 width=8) (actual\ntime=0.000..0.000 rows=0 loops=210)\n -> Seq Scan on\ntableB_201112 tableB (cost=0.00..251.25 rows=12125 width=8) (actual\ntime=0.006..0.019 rows=50 loops=210)\n -> ...\n -> Seq Scan on tableA_201201 A (cost=1.21..19.08\nrows=1 width=44) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (\"boolean\" AND (NOT (hashed SubPlan 1))\nAND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1\n -> Limit (cost=0.00..1.08 rows=50 width=8)\n(actual time=0.010..0.054 rows=50 loops=210)\n -> Result (cost=0.00..9249.46\nrows=427846 width=8) (actual time=0.009..0.044 rows=50 loops=210)\n -> Append (cost=0.00..9249.46\nrows=427846 width=8) (actual time=0.008..0.031 rows=50 loops=210)\n -> Seq Scan on tableB\n(cost=0.00..15.30 rows=530 width=8) (actual time=0.001..0.001 rows=0\nloops=210)\n -> Seq Scan on\ntableB_201201 tableB (cost=0.00..15.30 rows=530 width=8) (actual\ntime=0.000..0.000 rows=0 loops=210)\n -> Seq Scan on\ntableB_201112 tableB (cost=0.00..251.25 rows=12125 width=8) (actual\ntime=0.006..0.019 rows=50 loops=210)\n -> ...\n -> Seq Scan on tableA_201112 A (cost=1.21..794.69\nrows=5980 width=55) (actual time=0.789..12.686 rows=12075 loops=1)\n Filter: (\"boolean\" AND (NOT (hashed SubPlan 1))\nAND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1\n -> Limit (cost=0.00..1.08 rows=50 width=8)\n(actual time=0.010..0.054 rows=50 loops=210)\n -> Result (cost=0.00..9249.46\nrows=427846 width=8) (actual time=0.009..0.044 rows=50 loops=210)\n -> Append (cost=0.00..9249.46\nrows=427846 width=8) (actual time=0.008..0.031 rows=50 loops=210)\n -> Seq Scan on tableB\n(cost=0.00..15.30 rows=530 width=8) (actual time=0.001..0.001 rows=0\nloops=210)\n -> Seq Scan on\ntableB_201201 tableB (cost=0.00..15.30 rows=530 width=8) (actual\ntime=0.000..0.000 rows=0 loops=210)\n -> Seq Scan on\ntableB_201112 tableB (cost=0.00..251.25 rows=12125 width=8) (actual\ntime=0.006..0.019 rows=50 loops=210)\n -> Seq Scan on\ntableB_201111 tableB (cost=0.00..604.89 rows=29189 width=8) (never\nexecuted)\n -> ...\n -> Seq Scan 
on tableA_201111 A (cost=1.21..2666.12\nrows=14670 width=55) (actual time=0.441..36.680 rows=29189 loops=1)\n Filter: (\"boolean\" AND (NOT (hashed SubPlan 1))\nAND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1\n -> Limit (cost=0.00..1.08 rows=50 width=8)\n(actual time=0.010..0.054 rows=50 loops=210)\n -> Result (cost=0.00..9249.46\nrows=427846 width=8) (actual time=0.009..0.044 rows=50 loops=210)\n -> Append (cost=0.00..9249.46\nrows=427846 width=8) (actual time=0.008..0.031 rows=50 loops=210)\n -> Seq Scan on tableB\n(cost=0.00..15.30 rows=530 width=8) (actual time=0.001..0.001 rows=0\nloops=210)\n -> Seq Scan on\ntableB_201201 tableB (cost=0.00..15.30 rows=530 width=8) (actual\ntime=0.000..0.000 rows=0 loops=210)\n -> Seq Scan on\ntableB_201112 tableB (cost=0.00..251.25 rows=12125 width=8) (actual\ntime=0.006..0.019 rows=50 loops=210)\n -> Seq Scan on\ntableB_201111 tableB (cost=0.00..604.89 rows=29189 width=8) (never\nexecuted)\n -> ...\n -> ...\n Total runtime: 18359.851 ms\n(23327 rows)\n\nHi - I'm running into an OOM-killer issue when running a specific query (no virtual machine running) and, based on researching the issue, I can probably fix by making the following sysctl adjustments: vm.overcommit_memory = 2\n vm.overcommit_ratio = 0However, I am perplexed as to why I am running into the issue in the first place. The machine (running Linux 2.6.34.7-61.fc13.x86_64) is dedicated to Postgres (v9.0.0 [RPM package: postgresql90-9.0.0-1PGDG.fc13.1.x86_64]) and the following memory usage is pretty typical for the system (via \"top\"):\n Mem: 8121992k total, 2901960k used, 5220032k free, 237408k buffers Swap: 1048572k total, 235940k used, 812632k free, 2053768k cachedUnder steady-state conditions, the following shows the virtual memory size for postgres backend processes:\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8506 postgres 20 0 2327m 3084 1792 S 0.0 0.0 0:00.33 postgres 8504 postgres 20 0 2326m 14m 13m S 0.0 0.2 0:01.32 postgres\n 8505 postgres 20 0 2326m 728 452 S 0.0 0.0 0:00.91 postgres 3582 postgres 20 0 2325m 54m 53m S 0.0 0.7 0:02.03 postgresMy current relevant postgresql.conf settings are the following: shared_buffers = 2100MB\n temp_buffers = 8MB work_mem = 32MB maintenance_work_mem = 16MB max_stack_depth = 2MB constraint_exclusion = partitionWhen executing the query, I've been watching the \"top\" activity, sorted by resident memory. Upon execution, no other processes appear to take additional resident memory, except a postgres backend process servicing the query, which goes to +6Gb (triggering the OOM-killer). Given the settings in postgresql.conf, and my anecdotal understanding of Postgres memory management functions, I am uncertain why Postgres exhausts physical memory instead of swapping to temporary files. Do I need to lower my work_mem setting since the subquery involves a partitioned table, causing a multiplier effect to the memory used (I have tried per-connection settings of 6MB)? 
Would tweaking query planning settings help?\nThanks in advance!If it helps, I have included the query (with column names aliased to their data type), a brief description of the applicable table's contents, and an abridged copy of the EXPLAIN ANALYZE output\nSELECT \"bigint\", \"date\", \"text\"FROM tableA AS AWHERE A.\"boolean\" = 'true' AND(A.\"text\" = 'abc' OR A.\"text\" = 'xyz') AND\nA.\"bigint\" NOT IN (SELECT \"bigint\" FROM tableB)ORDER BY A.\"date\" DESC;tableA: - total table contains ~11 million records (total width: 109 bytes) - partitioned by month (180 partitions)\n - each table partition contains ~100k recordstableB: - total table contains ~400k records (total width: 279 bytes) - partitioned by month (96 partitions) - each table partition contains ~30k records\nEXPLAIN ANALYZE output: Note: could not produce output for exact query due to OOM-killer, but ran query by limiting the subquery to the first 50 results. The planner iterates over all partitions, but only the first two partitions are noted for brevity. \n QUERY PLAN-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=451279.67..451279.70 rows=10 width=55) (actual time=18343.085..18343.090 rows=10 loops=1) -> Sort (cost=451279.67..456398.37 rows=2047480 width=55) (actual time=18343.083..18343.087 rows=10 loops=1)\n Sort Key: A.\"Date\" Sort Method: top-N heapsort Memory: 26kB -> Result (cost=1.21..407034.37 rows=2047480 width=55) (actual time=0.793..17014.726 rows=4160606 loops=1) -> Append (cost=1.21..407034.37 rows=2047480 width=55) (actual time=0.792..16119.298 rows=4160606 loops=1)\n -> Seq Scan on tableA A (cost=1.21..19.08 rows=1 width=44) (actual time=0.002..0.002 rows=0 loops=1) Filter: (\"boolean\" AND (NOT (hashed SubPlan 1)) AND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1 -> Limit (cost=0.00..1.08 rows=50 width=8) (actual time=0.010..0.054 rows=50 loops=210) -> Result (cost=0.00..9249.46 rows=427846 width=8) (actual time=0.009..0.044 rows=50 loops=210)\n -> Append (cost=0.00..9249.46 rows=427846 width=8) (actual time=0.008..0.031 rows=50 loops=210) -> Seq Scan on tableB (cost=0.00..15.30 rows=530 width=8) (actual time=0.001..0.001 rows=0 loops=210)\n -> Seq Scan on tableB_201201 tableB (cost=0.00..15.30 rows=530 width=8) (actual time=0.000..0.000 rows=0 loops=210) -> Seq Scan on tableB_201112 tableB (cost=0.00..251.25 rows=12125 width=8) (actual time=0.006..0.019 rows=50 loops=210)\n -> ... -> Seq Scan on tableA_201201 A (cost=1.21..19.08 rows=1 width=44) (actual time=0.001..0.001 rows=0 loops=1) Filter: (\"boolean\" AND (NOT (hashed SubPlan 1)) AND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1 -> Limit (cost=0.00..1.08 rows=50 width=8) (actual time=0.010..0.054 rows=50 loops=210) -> Result (cost=0.00..9249.46 rows=427846 width=8) (actual time=0.009..0.044 rows=50 loops=210)\n -> Append (cost=0.00..9249.46 rows=427846 width=8) (actual time=0.008..0.031 rows=50 loops=210) -> Seq Scan on tableB (cost=0.00..15.30 rows=530 width=8) (actual time=0.001..0.001 rows=0 loops=210)\n -> Seq Scan on tableB_201201 tableB (cost=0.00..15.30 rows=530 width=8) (actual time=0.000..0.000 rows=0 loops=210) -> Seq Scan on tableB_201112 tableB (cost=0.00..251.25 rows=12125 width=8) (actual time=0.006..0.019 rows=50 loops=210)\n -> ... 
-> Seq Scan on tableA_201112 A (cost=1.21..794.69 rows=5980 width=55) (actual time=0.789..12.686 rows=12075 loops=1) Filter: (\"boolean\" AND (NOT (hashed SubPlan 1)) AND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1 -> Limit (cost=0.00..1.08 rows=50 width=8) (actual time=0.010..0.054 rows=50 loops=210) -> Result (cost=0.00..9249.46 rows=427846 width=8) (actual time=0.009..0.044 rows=50 loops=210)\n -> Append (cost=0.00..9249.46 rows=427846 width=8) (actual time=0.008..0.031 rows=50 loops=210) -> Seq Scan on tableB (cost=0.00..15.30 rows=530 width=8) (actual time=0.001..0.001 rows=0 loops=210)\n -> Seq Scan on tableB_201201 tableB (cost=0.00..15.30 rows=530 width=8) (actual time=0.000..0.000 rows=0 loops=210) -> Seq Scan on tableB_201112 tableB (cost=0.00..251.25 rows=12125 width=8) (actual time=0.006..0.019 rows=50 loops=210)\n -> Seq Scan on tableB_201111 tableB (cost=0.00..604.89 rows=29189 width=8) (never executed) -> ... -> Seq Scan on tableA_201111 A (cost=1.21..2666.12 rows=14670 width=55) (actual time=0.441..36.680 rows=29189 loops=1)\n Filter: (\"boolean\" AND (NOT (hashed SubPlan 1)) AND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text))) SubPlan 1 -> Limit (cost=0.00..1.08 rows=50 width=8) (actual time=0.010..0.054 rows=50 loops=210)\n -> Result (cost=0.00..9249.46 rows=427846 width=8) (actual time=0.009..0.044 rows=50 loops=210) -> Append (cost=0.00..9249.46 rows=427846 width=8) (actual time=0.008..0.031 rows=50 loops=210)\n -> Seq Scan on tableB (cost=0.00..15.30 rows=530 width=8) (actual time=0.001..0.001 rows=0 loops=210) -> Seq Scan on tableB_201201 tableB (cost=0.00..15.30 rows=530 width=8) (actual time=0.000..0.000 rows=0 loops=210)\n -> Seq Scan on tableB_201112 tableB (cost=0.00..251.25 rows=12125 width=8) (actual time=0.006..0.019 rows=50 loops=210) -> Seq Scan on tableB_201111 tableB (cost=0.00..604.89 rows=29189 width=8) (never executed)\n -> ... -> ... Total runtime: 18359.851 ms(23327 rows)",
"msg_date": "Mon, 19 Dec 2011 10:52:40 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "OOM-killer issue with a specific query"
},
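A minimal sketch of the per-session tuning the poster alludes to ("I have tried per-connection settings of 6MB"): work_mem can be capped with SET LOCAL for just this query, since each hashed NOT IN sub-plan may claim its own work_mem across the roughly 180 child tables. Table and column names follow the aliased names in the post and are illustrative only.

    BEGIN;
    SET LOCAL work_mem = '4MB';  -- budget per hash/sort node, not per query
    EXPLAIN                      -- plain EXPLAIN plans the statement without executing it
    SELECT "bigint", "date", "text"
    FROM tableA AS A
    WHERE A."boolean"
      AND (A."text" = 'abc' OR A."text" = 'xyz')
      AND A."bigint" NOT IN (SELECT "bigint" FROM tableB)
    ORDER BY A."date" DESC;
    COMMIT;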
{
"msg_contents": "On Mon, Dec 19, 2011 at 8:52 AM, <[email protected]> wrote:\n> Under steady-state conditions, the following shows the virtual memory size\n> for postgres backend processes:\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 8506 postgres 20 0 2327m 3084 1792 S 0.0 0.0 0:00.33 postgres\n> 8504 postgres 20 0 2326m 14m 13m S 0.0 0.2 0:01.32 postgres\n> 8505 postgres 20 0 2326m 728 452 S 0.0 0.0 0:00.91 postgres\n> 3582 postgres 20 0 2325m 54m 53m S 0.0 0.7 0:02.03 postgres\n\nFYI, this is not swap usage. VIRT is all the memory a process has\nhandles open to everywhere, including libs that it's linked against\nthat might not even be loaded. Generally speaking, VIRT is close to\nworthless for troubleshooting.\n\n> My current relevant postgresql.conf settings are the following:\n> shared_buffers = 2100MB\n> temp_buffers = 8MB\n> work_mem = 32MB\n> maintenance_work_mem = 16MB\n> max_stack_depth = 2MB\n> constraint_exclusion = partition\n\nWhat's max_connections?\n\n> When executing the query, I've been watching the \"top\" activity, sorted by\n> resident memory. Upon execution, no other processes appear to take\n> additional resident memory, except a postgres backend process servicing the\n> query, which goes to +6Gb (triggering the OOM-killer). Given the settings in\n> postgresql.conf, and my anecdotal understanding of Postgres memory\n> management functions, I am uncertain why Postgres exhausts physical memory\n> instead of swapping to temporary files.\n\n> EXPLAIN ANALYZE output:\n> Note: could not produce output for exact query due to OOM-killer, but\n> ran query by limiting the subquery to the first 50 results. The planner\n> iterates over all partitions, but only the first two partitions are noted\n> for brevity.\n\nThis may be one instance where the regular explain will be more\nuseful. it's quite likely that the query changes when there is no\nlimit. If you compare what explain for the full query says, and what\nexplain (analyze) for the abridged one says, the part that's causing\nyou to run out of memory may be more obvious.\n",
"msg_date": "Tue, 20 Dec 2011 06:24:12 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OOM-killer issue with a specific query"
},
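A hedged convenience for gathering the settings Scott asks about in one place, instead of reading postgresql.conf by hand:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('max_connections', 'shared_buffers', 'work_mem',
                   'temp_buffers', 'maintenance_work_mem');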
{
"msg_contents": "On Tue, Dec 20, 2011 at 8:24 AM, Scott Marlowe -\[email protected]\n<+nabble+miller_2555+3b65e832a3.scott.marlowe#[email protected]>\nwrote:\n>\n> On Mon, Dec 19, 2011 at 8:52 AM, <[email protected]> wrote:\n> > I can probably fix by making the following sysctl adjustments:\n> > vm.overcommit_memory = 2\n> > vm.overcommit_ratio = 0\n\nFYI - for the sake of others visiting this post, disabling the OS\nmemory overcommit does not appear an easy solution in my case as the\nbox fails to bootstrap due to insufficient memory.\n\n> > Under steady-state conditions, the following shows the virtual memory size\n> > for postgres backend processes:\n> > PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> > 8506 postgres 20 0 2327m 3084 1792 S 0.0 0.0 0:00.33 postgres\n> > 8504 postgres 20 0 2326m 14m 13m S 0.0 0.2 0:01.32 postgres\n> > 8505 postgres 20 0 2326m 728 452 S 0.0 0.0 0:00.91 postgres\n> > 3582 postgres 20 0 2325m 54m 53m S 0.0 0.7 0:02.03 postgres\n>\n> FYI, this is not swap usage. VIRT is all the memory a process has\n> handles open to everywhere, including libs that it's linked against\n> that might not even be loaded. Generally speaking, VIRT is close to\n> worthless for troubleshooting.\n>\nThanks - I misunderstood the meaning of VIRT (its been ahwile since\nI've consulted the man page).\n\n> > My current relevant postgresql.conf settings are the following:\n> > shared_buffers = 2100MB\n> > temp_buffers = 8MB\n> > work_mem = 32MB\n> > maintenance_work_mem = 16MB\n> > max_stack_depth = 2MB\n> > constraint_exclusion = partition\n>\n> What's max_connections?\n>\nmax_connections=20. As a sidenote, this is a development box and there\nare no other active connections to the database while this test case\nwas run.\n\n> > When executing the query, I've been watching the \"top\" activity, sorted by\n> > resident memory. Upon execution, no other processes appear to take\n> > additional resident memory, except a postgres backend process servicing the\n> > query, which goes to +6Gb (triggering the OOM-killer). Given the settings in\n> > postgresql.conf, and my anecdotal understanding of Postgres memory\n> > management functions, I am uncertain why Postgres exhausts physical memory\n> > instead of swapping to temporary files.\n>\n> > EXPLAIN ANALYZE output:\n> > Note: could not produce output for exact query due to OOM-killer, but\n> > ran query by limiting the subquery to the first 50 results. The planner\n> > iterates over all partitions, but only the first two partitions are noted\n> > for brevity.\n>\n> This may be one instance where the regular explain will be more\n> useful. it's quite likely that the query changes when there is no\n> limit. If you compare what explain for the full query says, and what\n> explain (analyze) for the abridged one says, the part that's causing\n> you to run out of memory may be more obvious.\n>\nI've run EXPLAIN on the query, but AFAICS the query plan does not\nappear significantly different than the abridged version for this\nparticular query (output attached below). 
In an effort to analyze the\nbase case, I re-ran the query (without LIMIT) for a selected partition\nof tableA and tableB (both tables are partitioned by \"Date\" and the\n\"Date\" column on each partition of tableB references the \"Date\" column\nof the corresponding partition of tableA as a foreign key constraint).\nThe tableA partition holds 82,939 records (record width is 108 bytes,\nper EXPLAIN) and the tableB partition holds 13,718 records (record\nwidth is 312 bytes, per EXPLAIN) For a single table partition, `top`\nshows the following resource usage of running postmaster processes:\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1818 postgres 20 0 2325m 56m 55m S 0.0 0.7 0:32.10 postmaster\n 2810 postgres 20 0 156m 1044 420 S 0.0 0.0 0:01.50 postmaster\n 2813 postgres 20 0 2326m 256m 255m S 0.0 3.2 0:09.61 postmaster\n 2814 postgres 20 0 2326m 2220 1592 S 0.0 0.0 0:04.30 postmaster\n 2815 postgres 20 0 2327m 3996 2148 S 0.0 0.0 0:00.66 postmaster\n 2816 postgres 20 0 156m 1272 504 S 0.0 0.0 0:09.14 postmaster\n29661 postgres 20 0 2335m 49m 40m S 0.0 0.6 0:00.24 postmaster\n\nWhile I could run the query partition-by-partition, I'd still like to\nbe able to run a full query across all partitions.\n\nEXPLAIN output excerpt:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=3125664.13..3130782.83 rows=2047480 width=55)\n Sort Key: A.\"Date\"\n -> Result (cost=11553.15..2856046.51 rows=2047480 width=55)\n -> Append (cost=11553.15..2856046.51 rows=2047480 width=55)\n -> Seq Scan on tableA A (cost=11553.15..11571.02\nrows=1 width=44)\n Filter: (\"Boolean\" AND (NOT (hashed SubPlan 1))\nAND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1\n -> Result (cost=0.00..10357.52 rows=478252 width=8)\n -> Append (cost=0.00..10357.52\nrows=478252 width=8)\n -> Seq Scan on tableB\n(cost=0.00..15.30 rows=530 width=8)\n -> Seq Scan on tableB_201201\ntableB (cost=0.00..15.30 rows=530 width=8)\n -> Seq Scan on tableB_201112\ntableB (cost=0.00..251.25 rows=12125 width=8)\n -> Seq Scan on tableB_201111\ntableB (cost=0.00..604.89 rows=29189 width=8)\n -> Seq Scan on tableB_201110\ntableB (cost=0.00..490.30 rows=23630 width=8)\n -> ...\n -> Seq Scan on tableA_201201 A\n(cost=11553.15..11571.02 rows=1 width=44)\n Filter: (\"Boolean\" AND (NOT (hashed SubPlan 1))\nAND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1\n -> Result (cost=0.00..10357.52 rows=478252 width=8)\n -> Append (cost=0.00..10357.52\nrows=478252 width=8)\n -> Seq Scan on tableB\n(cost=0.00..15.30 rows=530 width=8)\n -> Seq Scan on tableB_201201\ntableB (cost=0.00..15.30 rows=530 width=8)\n -> Seq Scan on tableB_201112\ntableB (cost=0.00..251.25 rows=12125 width=8)\n -> Seq Scan on tableB_201111\ntableB (cost=0.00..604.89 rows=29189 width=8)\n -> Seq Scan on tableB_201110\ntableB (cost=0.00..490.30 rows=23630 width=8)\n -> ...\n -> Seq Scan on tableA_201112 A\n(cost=11553.15..12346.63 rows=5980 width=55)\n Filter: (\"Boolean\" AND (NOT (hashed SubPlan 1))\nAND ((\"text\" = 'abc'::text) OR (\"text\" = 'xyz'::text)))\n SubPlan 1\n -> Result (cost=0.00..10357.52 rows=478252 width=8)\n -> Append (cost=0.00..10357.52\nrows=478252 width=8)\n -> Seq Scan on tableB\n(cost=0.00..15.30 rows=530 width=8)\n -> Seq Scan on tableB_201201\ntableB (cost=0.00..15.30 rows=530 width=8)\n -> Seq Scan on tableB_201112\ntableB (cost=0.00..251.25 rows=12125 width=8)\n -> Seq Scan on tableB_201111\ntableB 
(cost=0.00..604.89 rows=29189 width=8)\n -> Seq Scan on tableB_201110\ntableB (cost=0.00..490.30 rows=23630 width=8)\n -> ...\n -> ...\n(23112 rows)\n\n",
"msg_date": "Tue, 20 Dec 2011 11:46:02 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: OOM-killer issue with a specific query\n 9 of 20)"
},
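A sketch of the partition-by-partition fallback described above, assuming the _YYYYMM child-table naming that appears in the plans; the real partition names may differ.

    EXPLAIN ANALYZE
    SELECT "bigint", "date", "text"
    FROM tableA_201112 AS A
    WHERE A."boolean"
      AND (A."text" = 'abc' OR A."text" = 'xyz')
      AND A."bigint" NOT IN (SELECT "bigint" FROM tableB_201112)
    ORDER BY A."date" DESC;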
{
"msg_contents": "On Tue, Dec 20, 2011 at 11:15 AM, F. BROUARD / SQLpro -\[email protected]\n<+nabble+miller_2555+ca434688eb.sqlpro#[email protected]>\nwrote:\n> I should think your query is not correct.\n>\n> Le 19/12/2011 16:52, [email protected] a écrit :\n>>\n>> SELECT \"bigint\", \"date\", \"text\"\n>> FROM tableA AS A\n>> WHERE A.\"boolean\" = 'true' AND\n>> (A.\"text\" = 'abc' OR A.\"text\" = 'xyz') AND\n>> A.\"bigint\" NOT IN (SELECT \"bigint\" FROM tableB)\n>> ORDER BY A.\"date\" DESC;\n>\n>\n> Why do you cast the true as a string ?\n> Can't you be more simple like :\n> WHERE A.\"boolean\" = true\n> and that's all ?\n> or much more simple :\n> WHERE A.\"boolean\"\n>\nThat is true - I was actually quoting the value as a literal to make\nthe query more explicit in the post ... probably not the best judgment\nin hindsight given posting to a performance-based mailing list :-). I\ndo use the WHERE A.\"boolean\" clause in the actual SQL query to avoid\nthe unneccesary parsing & type casting. Apologies for any confusion.\n\n",
"msg_date": "Tue, 20 Dec 2011 13:33:10 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: OOM-killer issue with a specific query\n 11 of 20)"
},
{
"msg_contents": "[email protected] writes:\n> I've run EXPLAIN on the query, but AFAICS the query plan does not\n> appear significantly different than the abridged version for this\n> particular query (output attached below).\n\nI think what's happening is that you've got the hashed NOT IN being\npushed down separately to each of the 180 child tables, so each of those\nhashtables thinks it can use work_mem (32MB), which means you're pushing\n6GB of memory usage before accounting for anything else.\n\nNOT IN is really hard to optimize because of its weird behavior for\nnulls, so the planner doesn't have much of any intelligence about it.\nI'd suggest seeing if you can transform it to a NOT EXISTS, if you\ndon't have any nulls in the bigint columns or don't really want the\nspec-mandated behavior for them anyway. A quick check suggests that 9.0\nshould give you a vastly better plan from a NOT EXISTS.\n\nAnother suggestion is that you ought to be running something newer than\n9.0.0; you're missing over a year's worth of bug fixes (some of which\nwere memory leaks...). If you are going to pick a PG version to sit on\nand not bother to update, a dot-zero release is about your worst\npossible choice; it will always have more bugs than a more mature\nrelease series. With my red fedora on, I'd also mutter that F13 is well\npast its use-by date.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Dec 2011 15:46:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OOM-killer issue with a specific query 9 of 20) "
}
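A minimal sketch of the NOT IN to NOT EXISTS transformation Tom suggests, valid here only because the bigint columns contain no NULLs; with this form 9.0 can plan a hashed anti-join instead of one hashed sub-plan per child table.

    SELECT A."bigint", A."date", A."text"
    FROM tableA AS A
    WHERE A."boolean"
      AND (A."text" = 'abc' OR A."text" = 'xyz')
      AND NOT EXISTS (SELECT 1 FROM tableB B WHERE B."bigint" = A."bigint")
    ORDER BY A."date" DESC;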
] |
[
{
"msg_contents": "SOLVED\nOn Tue, Dec 20, 2011 at 3:46 PM, Tom Lane - [email protected]\n<+nabble+miller_2555+c5a65c2e1a.tgl#[email protected]>\nwrote:\n> [email protected] writes:\n>> I've run EXPLAIN on the query, but AFAICS the query plan does not\n>> appear significantly different than the abridged version for this\n>> particular query (output attached below).\n>\n> I think what's happening is that you've got the hashed NOT IN being\n> pushed down separately to each of the 180 child tables, so each of those\n> hashtables thinks it can use work_mem (32MB), which means you're pushing\n> 6GB of memory usage before accounting for anything else.\n>\n> NOT IN is really hard to optimize because of its weird behavior for\n> nulls, so the planner doesn't have much of any intelligence about it.\n> I'd suggest seeing if you can transform it to a NOT EXISTS, if you\n> don't have any nulls in the bigint columns or don't really want the\n> spec-mandated behavior for them anyway. A quick check suggests that 9.0\n> should give you a vastly better plan from a NOT EXISTS.\n>\nI've updated the query to use NOT EXISTS, which does produce a vastly\nmore efficient plan and barely moves memory consumption when running.\nSince NULLS are not permitted in the bigint columns, this works really\nwell. Thanks Tom - this has saved me a lot of head bashing!\n\n> Another suggestion is that you ought to be running something newer than\n> 9.0.0; you're missing over a year's worth of bug fixes (some of which\n> were memory leaks...). If you are going to pick a PG version to sit on\n> and not bother to update, a dot-zero release is about your worst\n> possible choice; it will always have more bugs than a more mature\n> release series. With my red fedora on, I'd also mutter that F13 is well\n> past its use-by date.\n>\nha - true...I've been pretty remiss in updating development\nenvironment system components - might be a project for the holidays :)\n\nThanks again\n\n",
"msg_date": "Wed, 21 Dec 2011 00:09:14 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: OOM-killer issue with a specific query SOLVED"
}
] |
[
{
"msg_contents": "I have IBM x3560 with 2G RAM - RAID 5 3 disk - PostgreSQL 9.0.6 64bit on\nWindows 2003 64bit\nI had read some tuning guide, it recomment not use RAID 5. So Raid 5 is\nbestter than 3 disk independent or not.\n\nHere is my pgbench -h %HOST% -p 5433 -U postgres -c 10 -T 1800 -s 10\npgbench\n\npgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\nScale option ignored, using pgbench_branches table count = 10\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nduration: 1800 s\nnumber of transactions actually processed: 775366\ntps = 430.736191 (including connections establishing)\ntps = 430.780400 (excluding connections establishing)\n\nSorry for my English.\n\nTuan Hoang Anh\n\nI have IBM x3560 with 2G RAM - RAID 5 3 disk - PostgreSQL 9.0.6 64bit on Windows 2003 64bit\nI had read some tuning guide, it recomment not use RAID 5. So Raid 5 is bestter than 3 disk independent or not.Here is my pgbench -h %HOST% -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench \npgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbenchScale option ignored, using pgbench_branches table count = 10starting vacuum...end.transaction type: TPC-B (sort of)\nscaling factor: 10query mode: simplenumber of clients: 10number of threads: 1duration: 1800 snumber of transactions actually processed: 775366tps = 430.736191 (including connections establishing)\ntps = 430.780400 (excluding connections establishing)Sorry for my English.Tuan Hoang Anh",
"msg_date": "Fri, 23 Dec 2011 10:36:24 +0700",
"msg_from": "tuanhoanganh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql 9.0.6 Raid 5 or not please help."
},
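For reference, a sketch of how the scale option is normally applied: -s only takes effect during initialization (-i), which is why the run above reports "Scale option ignored" and derives the scale from pgbench_branches.

    pgbench -h 127.0.0.1 -p 5433 -U postgres -i -s 10 pgbench       # build the test tables at scale 10
    pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 pgbench  # mixed read/write (TPC-B-like) run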
{
"msg_contents": "On Thu, Dec 22, 2011 at 8:36 PM, tuanhoanganh <[email protected]> wrote:\n> I have IBM x3560 with 2G RAM - RAID 5 3 disk - PostgreSQL 9.0.6 64bit on\n> Windows 2003 64bit\n> I had read some tuning guide, it recomment not use RAID 5. So Raid 5 is\n> bestter than 3 disk independent or not.\n>\n> Here is my pgbench -h %HOST% -p 5433 -U postgres -c 10 -T 1800 -s 10\n> pgbench\n>\n> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\n> Scale option ignored, using pgbench_branches table count = 10\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 1\n> duration: 1800 s\n> number of transactions actually processed: 775366\n> tps = 430.736191 (including connections establishing)\n> tps = 430.780400 (excluding connections establishing)\n\nRAID 5 is aweful. Look up RAID 1E for 3 disks:\nhttp://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID_1E\n",
"msg_date": "Thu, 22 Dec 2011 20:55:01 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "On Thu, Dec 22, 2011 at 8:55 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Dec 22, 2011 at 8:36 PM, tuanhoanganh <[email protected]> wrote:\n>> I have IBM x3560 with 2G RAM - RAID 5 3 disk - PostgreSQL 9.0.6 64bit on\n>> Windows 2003 64bit\n>> I had read some tuning guide, it recomment not use RAID 5. So Raid 5 is\n>> bestter than 3 disk independent or not.\n>>\n>> Here is my pgbench -h %HOST% -p 5433 -U postgres -c 10 -T 1800 -s 10\n>> pgbench\n>>\n>> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\n>> Scale option ignored, using pgbench_branches table count = 10\n>> starting vacuum...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 10\n>> query mode: simple\n>> number of clients: 10\n>> number of threads: 1\n>> duration: 1800 s\n>> number of transactions actually processed: 775366\n>> tps = 430.736191 (including connections establishing)\n>> tps = 430.780400 (excluding connections establishing)\n>\n> RAID 5 is aweful. Look up RAID 1E for 3 disks:\n> http://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID_1E\n\nIf Windows doesn't support RAID 1E then setup a mirror set and use the\nthird drive as a hot spare. Still faster than RAID-5.\n",
"msg_date": "Thu, 22 Dec 2011 21:00:42 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "Thanks for your answer. But how performance between raid5 and one disk.\n\nPlease help me.\nThanks in advance\n\nTuan Hoang Anh\n\n\nOn Fri, Dec 23, 2011 at 11:00 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, Dec 22, 2011 at 8:55 PM, Scott Marlowe <[email protected]>\n> wrote:\n> > On Thu, Dec 22, 2011 at 8:36 PM, tuanhoanganh <[email protected]>\n> wrote:\n> >> I have IBM x3560 with 2G RAM - RAID 5 3 disk - PostgreSQL 9.0.6 64bit on\n> >> Windows 2003 64bit\n> >> I had read some tuning guide, it recomment not use RAID 5. So Raid 5 is\n> >> bestter than 3 disk independent or not.\n> >>\n> >> Here is my pgbench -h %HOST% -p 5433 -U postgres -c 10 -T 1800 -s 10\n> >> pgbench\n> >>\n> >> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\n> >> Scale option ignored, using pgbench_branches table count = 10\n> >> starting vacuum...end.\n> >> transaction type: TPC-B (sort of)\n> >> scaling factor: 10\n> >> query mode: simple\n> >> number of clients: 10\n> >> number of threads: 1\n> >> duration: 1800 s\n> >> number of transactions actually processed: 775366\n> >> tps = 430.736191 (including connections establishing)\n> >> tps = 430.780400 (excluding connections establishing)\n> >\n> > RAID 5 is aweful. Look up RAID 1E for 3 disks:\n> > http://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID_1E\n>\n> If Windows doesn't support RAID 1E then setup a mirror set and use the\n> third drive as a hot spare. Still faster than RAID-5.\n>\n\nThanks for your answer. But how performance between raid5 and one disk.Please help me.Thanks in advanceTuan Hoang AnhOn Fri, Dec 23, 2011 at 11:00 AM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Dec 22, 2011 at 8:55 PM, Scott Marlowe <[email protected]> wrote:\n\n> On Thu, Dec 22, 2011 at 8:36 PM, tuanhoanganh <[email protected]> wrote:\n>> I have IBM x3560 with 2G RAM - RAID 5 3 disk - PostgreSQL 9.0.6 64bit on\n>> Windows 2003 64bit\n>> I had read some tuning guide, it recomment not use RAID 5. So Raid 5 is\n>> bestter than 3 disk independent or not.\n>>\n>> Here is my pgbench -h %HOST% -p 5433 -U postgres -c 10 -T 1800 -s 10\n>> pgbench\n>>\n>> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\n>> Scale option ignored, using pgbench_branches table count = 10\n>> starting vacuum...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 10\n>> query mode: simple\n>> number of clients: 10\n>> number of threads: 1\n>> duration: 1800 s\n>> number of transactions actually processed: 775366\n>> tps = 430.736191 (including connections establishing)\n>> tps = 430.780400 (excluding connections establishing)\n>\n> RAID 5 is aweful. Look up RAID 1E for 3 disks:\n> http://en.wikipedia.org/wiki/Non-standard_RAID_levels#RAID_1E\n\nIf Windows doesn't support RAID 1E then setup a mirror set and use the\nthird drive as a hot spare. Still faster than RAID-5.",
"msg_date": "Fri, 23 Dec 2011 13:18:20 +0700",
"msg_from": "tuanhoanganh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "On Thu, Dec 22, 2011 at 11:18 PM, tuanhoanganh <[email protected]> wrote:\n> Thanks for your answer. But how performance between raid5 and one disk.\n\nOne disk will usually win, 2 disks (in a mirror) will definitely win.\nRAID-5 has the highest overhead and the poorest performance,\nespecially if it's degraded (1 drive out) that simple mirroring\nmethods don't suffer from. But even in an undegraded state it is\nusually the slowest method. RAID-10 is generally the fastest with\nredundancy, and of course pure RAID-0 is fastest of all but has no\nredundancy.\n\nYou should do some simple benchmarks with something like pgbench and\nvarious configs to see for yourself. For extra bonus points, break a\nmirror (2 disk -> 1 disk) and compare it to RAID-5 (3 disk -> 2 disk\ndegraded) for performance. The change in performance for a RAID-1 to\nsingle disk degraded situation is usually reads are half as fast and\nwrites are just as fast. For RAID-5 expect to see it drop by a lot.\n",
"msg_date": "Fri, 23 Dec 2011 00:05:31 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "Hi,\n\nIn addition, when you have multiple hard drive, it needs to be considered to put a database cluster and wal files separately on different spindles (hard drives), because of different I/O charasteristics, in particular update intensive workload.\n\nGenerally speaking, having 4 disks, one RAID-1 pair for a database cluster and another RAID-1 pair for WAL files, would be fine.\n\n-- \nNAGAYASU Satoshi <[email protected]>\n \n\n-----Original Message-----\nFrom: Scott Marlowe <[email protected]>\nSender: [email protected]: Fri, 23 Dec 2011 00:05:31 \nTo: tuanhoanganh<[email protected]>\nCc: <[email protected]>\nSubject: Re: [PERFORM] Postgresql 9.0.6 Raid 5 or not please help.\n\nOn Thu, Dec 22, 2011 at 11:18 PM, tuanhoanganh <[email protected]> wrote:\n> Thanks for your answer. But how performance between raid5 and one disk.\n\nOne disk will usually win, 2 disks (in a mirror) will definitely win.\nRAID-5 has the highest overhead and the poorest performance,\nespecially if it's degraded (1 drive out) that simple mirroring\nmethods don't suffer from. But even in an undegraded state it is\nusually the slowest method. RAID-10 is generally the fastest with\nredundancy, and of course pure RAID-0 is fastest of all but has no\nredundancy.\n\nYou should do some simple benchmarks with something like pgbench and\nvarious configs to see for yourself. For extra bonus points, break a\nmirror (2 disk -> 1 disk) and compare it to RAID-5 (3 disk -> 2 disk\ndegraded) for performance. The change in performance for a RAID-1 to\nsingle disk degraded situation is usually reads are half as fast and\nwrites are just as fast. For RAID-5 expect to see it drop by a lot.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 23 Dec 2011 07:58:52 +0000",
"msg_from": "\"Satoshi Nagayasu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
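One common way to give WAL its own mirror on an existing cluster, sketched for a Linux layout; the paths are examples, the server must be stopped first, and on Windows (the original poster's platform) a junction point plays the role of the symlink.

    pg_ctl -D /var/lib/pgsql/9.0/data stop
    mv /var/lib/pgsql/9.0/data/pg_xlog /wal_mirror/pg_xlog
    ln -s /wal_mirror/pg_xlog /var/lib/pgsql/9.0/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/9.0/data start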
{
"msg_contents": "Am 23.12.2011 08:05, schrieb Scott Marlowe:\n> On Thu, Dec 22, 2011 at 11:18 PM, tuanhoanganh<[email protected]> wrote:\n>> Thanks for your answer. But how performance between raid5 and one disk.\n> One disk will usually win, 2 disks (in a mirror) will definitely win.\n> RAID-5 has the highest overhead and the poorest performance,\n> especially if it's degraded (1 drive out) that simple mirroring\n> methods don't suffer from. But even in an undegraded state it is\n> usually the slowest method. RAID-10 is generally the fastest with\n> redundancy, and of course pure RAID-0 is fastest of all but has no\n> redundancy.\n>\n> You should do some simple benchmarks with something like pgbench and\n> various configs to see for yourself. For extra bonus points, break a\n> mirror (2 disk -> 1 disk) and compare it to RAID-5 (3 disk -> 2 disk\n> degraded) for performance. The change in performance for a RAID-1 to\n> single disk degraded situation is usually reads are half as fast and\n> writes are just as fast. For RAID-5 expect to see it drop by a lot.\n>\nI'm not so confident that a RAID-1 will win over a single disk. When it \ncomes to writes, the latency should be ~50 higher (if both disk must \nsync), since the spindles are not running synchronously. This applies to \nsoftraid, not something like a battery-backend raid controller of course.\n\nOr am I wrong here?\n\n\n\n",
"msg_date": "Fri, 23 Dec 2011 10:20:05 +0100",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "> I'm not so confident that a RAID-1 will win over a single disk. When it\n> comes to writes, the latency should be ~50 higher (if both disk must\n> sync), since the spindles are not running synchronously. This applies to\n> softraid, not something like a battery-backend raid controller of course.\n>\n> Or am I wrong here?\n>\n\nSoftware RAID-1 in Linux, can read data in all disks and generally \nincrease a lot the data rate in reads. In writes, for sure, the overhead \nis great compared with a single disk, but not too much.\n\n",
"msg_date": "Fri, 23 Dec 2011 10:15:35 -0200",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "On Fri, Dec 23, 2011 at 5:15 AM, alexandre - aldeia digital\n<[email protected]> wrote:\n>> I'm not so confident that a RAID-1 will win over a single disk. When it\n>> comes to writes, the latency should be ~50 higher (if both disk must\n>> sync), since the spindles are not running synchronously. This applies to\n>> softraid, not something like a battery-backend raid controller of course.\n>>\n>> Or am I wrong here?\n>>\n>\n> Software RAID-1 in Linux, can read data in all disks and generally increase\n> a lot the data rate in reads. In writes, for sure, the overhead is great\n> compared with a single disk, but not too much.\n\nExactly. Unless you spend a great deal of time writing data out to\nthe disks, the faster reads will more than make up for a tiny increase\nin latency for the writes to the drives.\n\nAs regards the other recommendation in this thread to use two mirror\nsets one for xlog and one for everything else, unless you're doing a\nlot of writing, it's often still a winner to just run one big 4 disk\nRAID-10.\n\nOf course the real winner is to put a hardware RAID controller with\nbattery backed cache between your OS and the hard drives, then the\nperformance of even just a pair of drives in RAID-1 will be quite\nfast.\n",
"msg_date": "Fri, 23 Dec 2011 08:25:54 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "Thanks for all. I change to RAID 1 and here is new pg_bench result:\n\npgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\nScale option ignored, using pgbench_branches table count = 10\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 1\nduration: 1800 s\nnumber of transactions actually processed: 4373177\ntps = 2429.396876 (including connections establishing)\ntps = 2429.675016 (excluding connections establishing)\nPress any key to continue . . .\n\nTuan Hoang ANh\nOn Fri, Dec 23, 2011 at 10:25 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Fri, Dec 23, 2011 at 5:15 AM, alexandre - aldeia digital\n> <[email protected]> wrote:\n> >> I'm not so confident that a RAID-1 will win over a single disk. When it\n> >> comes to writes, the latency should be ~50 higher (if both disk must\n> >> sync), since the spindles are not running synchronously. This applies to\n> >> softraid, not something like a battery-backend raid controller of\n> course.\n> >>\n> >> Or am I wrong here?\n> >>\n> >\n> > Software RAID-1 in Linux, can read data in all disks and generally\n> increase\n> > a lot the data rate in reads. In writes, for sure, the overhead is great\n> > compared with a single disk, but not too much.\n>\n> Exactly. Unless you spend a great deal of time writing data out to\n> the disks, the faster reads will more than make up for a tiny increase\n> in latency for the writes to the drives.\n>\n> As regards the other recommendation in this thread to use two mirror\n> sets one for xlog and one for everything else, unless you're doing a\n> lot of writing, it's often still a winner to just run one big 4 disk\n> RAID-10.\n>\n> Of course the real winner is to put a hardware RAID controller with\n> battery backed cache between your OS and the hard drives, then the\n> performance of even just a pair of drives in RAID-1 will be quite\n> fast.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks for all. I change to RAID 1 and here is new pg_bench result:pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbenchScale option ignored, using pgbench_branches table count = 10\nstarting vacuum...end.transaction type: TPC-B (sort of)scaling factor: 10query mode: simplenumber of clients: 10number of threads: 1duration: 1800 s\nnumber of transactions actually processed: 4373177tps = 2429.396876 (including connections establishing)tps = 2429.675016 (excluding connections establishing)Press any key to continue . . .\nTuan Hoang ANhOn Fri, Dec 23, 2011 at 10:25 PM, Scott Marlowe <[email protected]> wrote:\nOn Fri, Dec 23, 2011 at 5:15 AM, alexandre - aldeia digital\n<[email protected]> wrote:\n>> I'm not so confident that a RAID-1 will win over a single disk. When it\n>> comes to writes, the latency should be ~50 higher (if both disk must\n>> sync), since the spindles are not running synchronously. This applies to\n>> softraid, not something like a battery-backend raid controller of course.\n>>\n>> Or am I wrong here?\n>>\n>\n> Software RAID-1 in Linux, can read data in all disks and generally increase\n> a lot the data rate in reads. In writes, for sure, the overhead is great\n> compared with a single disk, but not too much.\n\nExactly. 
Unless you spend a great deal of time writing data out to\nthe disks, the faster reads will more than make up for a tiny increase\nin latency for the writes to the drives.\n\nAs regards the other recommendation in this thread to use two mirror\nsets one for xlog and one for everything else, unless you're doing a\nlot of writing, it's often still a winner to just run one big 4 disk\nRAID-10.\n\nOf course the real winner is to put a hardware RAID controller with\nbattery backed cache between your OS and the hard drives, then the\nperformance of even just a pair of drives in RAID-1 will be quite\nfast.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 23 Dec 2011 22:32:52 +0700",
"msg_from": "tuanhoanganh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "On Fri, Dec 23, 2011 at 8:32 AM, tuanhoanganh <[email protected]> wrote:\n> Thanks for all. I change to RAID 1 and here is new pg_bench result:\n>\n> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\n> Scale option ignored, using pgbench_branches table count = 10\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 1\n> duration: 1800 s\n> number of transactions actually processed: 4373177\n> tps = 2429.396876 (including connections establishing)\n> tps = 2429.675016 (excluding connections establishing)\n> Press any key to continue . . .\n\nNote that those numbers are really only possible if your drives are\nlying about fsync or you have fsync turned off or you have a battery\nbacked caching RAID controller. I.e. your database is likely not\ncrash-proof.\n",
"msg_date": "Fri, 23 Dec 2011 11:06:51 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
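A quick way to confirm the durability-related settings behind numbers like these, from psql:

    SHOW fsync;
    SHOW synchronous_commit;
    SHOW wal_sync_method;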
{
"msg_contents": "Thank for your information.\nMy postgresql config fsync default\n#fsync = on # turns forced synchronization on or off\nMy RAID is ServeRAID M5015 SAS/SATA controller, in MegaRaid Store Manager\nit show BBU Present = YES.\nDoes it have battery backed caching RAID controller?\nPlease help me, I am newbie of RAID card manager.\n\nTuan Hoang Anh.\n\n\nOn Sat, Dec 24, 2011 at 1:06 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Fri, Dec 23, 2011 at 8:32 AM, tuanhoanganh <[email protected]> wrote:\n> > Thanks for all. I change to RAID 1 and here is new pg_bench result:\n> >\n> > pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\n> > Scale option ignored, using pgbench_branches table count = 10\n> > starting vacuum...end.\n> > transaction type: TPC-B (sort of)\n> > scaling factor: 10\n> > query mode: simple\n> > number of clients: 10\n> > number of threads: 1\n> > duration: 1800 s\n> > number of transactions actually processed: 4373177\n> > tps = 2429.396876 (including connections establishing)\n> > tps = 2429.675016 (excluding connections establishing)\n> > Press any key to continue . . .\n>\n> Note that those numbers are really only possible if your drives are\n> lying about fsync or you have fsync turned off or you have a battery\n> backed caching RAID controller. I.e. your database is likely not\n> crash-proof.\n>\n\nThank for your information.My postgresql config fsync default#fsync = on # turns forced synchronization on or offMy RAID is ServeRAID M5015 SAS/SATA controller, in MegaRaid Store Manager it show BBU Present = YES.\nDoes it have battery backed caching RAID controller?Please help me, I am newbie of RAID card manager.Tuan Hoang Anh.On Sat, Dec 24, 2011 at 1:06 AM, Scott Marlowe <[email protected]> wrote:\nOn Fri, Dec 23, 2011 at 8:32 AM, tuanhoanganh <[email protected]> wrote:\n\n> Thanks for all. I change to RAID 1 and here is new pg_bench result:\n>\n> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 1800 -s 10 pgbench\n> Scale option ignored, using pgbench_branches table count = 10\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 10\n> query mode: simple\n> number of clients: 10\n> number of threads: 1\n> duration: 1800 s\n> number of transactions actually processed: 4373177\n> tps = 2429.396876 (including connections establishing)\n> tps = 2429.675016 (excluding connections establishing)\n> Press any key to continue . . .\n\nNote that those numbers are really only possible if your drives are\nlying about fsync or you have fsync turned off or you have a battery\nbacked caching RAID controller. I.e. your database is likely not\ncrash-proof.",
"msg_date": "Sun, 25 Dec 2011 10:13:19 +0700",
"msg_from": "tuanhoanganh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
},
{
"msg_contents": "On Sat, Dec 24, 2011 at 8:13 PM, tuanhoanganh <[email protected]> wrote:\n> Thank for your information.\n> My postgresql config fsync default\n> #fsync = on # turns forced synchronization on or off\n> My RAID is ServeRAID M5015 SAS/SATA controller, in MegaRaid Store Manager it\n> show BBU Present = YES.\n> Does it have battery backed caching RAID controller?\n> Please help me, I am newbie of RAID card manager.\n\nYep you've got battery backed caching RAID. So regular pgbench tells\nyou how much faster RAID-1 is than RAID-5 at a read/write mixed load.\nYou can run it with a -s switch for a read only benchmark to get an\nidea how much, if any, of a difference there is between the two.\n",
"msg_date": "Sun, 25 Dec 2011 00:15:01 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 9.0.6 Raid 5 or not please help."
}
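A sketch of that read-only comparison (pgbench's select-only mode is the capital -S switch); run it once per candidate disk layout:

    pgbench -h 127.0.0.1 -p 5433 -U postgres -c 10 -T 300 -S pgbench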
] |
[
{
"msg_contents": "I'd like to find some measurements/figures of query preparation and\nplanning time to justify the use of prepared statements and stored\nprocedures.\nI know that complex queries have larger preparation time. Though, is it\npossible to explicitly measure the time the optimizer spends parsing and\nplanning for query execution?\n\nThank you,\nJames\n\nI'd like to find some measurements/figures of query preparation and planning time to justify the use of prepared statements and stored procedures.I know that complex queries have larger preparation time. Though, is it possible to explicitly measure the time the optimizer spends parsing and planning for query execution?\nThank you,James",
"msg_date": "Fri, 23 Dec 2011 11:27:13 -0800",
"msg_from": "Igor Schtein <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to clock the time spent for query parsing and planning?"
},
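One low-tech way to approximate this from psql: with \timing enabled, EXPLAIN parses, rewrites and plans the statement without executing it, so its elapsed time is roughly the parse-plus-plan cost. The catalog query below is only a self-contained stand-in for a real query.

    \timing
    EXPLAIN SELECT c.relname, n.nspname
    FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace;
    SELECT count(*)
    FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace;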
{
"msg_contents": "Hello\n\n2011/12/23 Igor Schtein <[email protected]>:\n> I'd like to find some measurements/figures of query preparation and planning\n> time to justify the use of prepared statements and stored procedures.\n> I know that complex queries have larger preparation time. Though, is it\n> possible to explicitly measure the time the optimizer spends parsing and\n> planning for query execution?\n\nYou can use time for EXPLAIN statement\n\nRegards\n\nPavel Stehule\n\n>\n> Thank you,\n> James\n",
"msg_date": "Tue, 27 Dec 2011 10:38:18 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to clock the time spent for query parsing and planning?"
},
{
"msg_contents": "Hi Pavel,\n\nThanks for your reply.\n\nMy understanding is that Explain provides measurements of query run time but does not include or specifies the timing for planning and for running optimization algorithms. \n\nPls let me know if my understanding is not correct. In that case, how do I find out how much of query time is spent to prepare the query and how much time is spent executing it. \n\nThanks,\nJames\n\nOn Dec 27, 2011, at 1:38 AM, Pavel Stehule <[email protected]> wrote:\n\n> Hello\n> \n> 2011/12/23 Igor Schtein <[email protected]>:\n>> I'd like to find some measurements/figures of query preparation and planning\n>> time to justify the use of prepared statements and stored procedures.\n>> I know that complex queries have larger preparation time. Though, is it\n>> possible to explicitly measure the time the optimizer spends parsing and\n>> planning for query execution?\n> \n> You can use time for EXPLAIN statement\n> \n> Regards\n> \n> Pavel Stehule\n> \n>> \n>> Thank you,\n>> James\n",
"msg_date": "Fri, 30 Dec 2011 16:59:57 -0800",
"msg_from": "Igor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to clock the time spent for query parsing and planning?"
},
{
"msg_contents": "2011/12/31 Igor <[email protected]>:\n> Hi Pavel,\n>\n> Thanks for your reply.\n>\n> My understanding is that Explain provides measurements of query run time but does not include or specifies the timing for planning and for running optimization algorithms.\n\nthe a result of explain is visualisation of execution plan - so all\nnecessary steps (parsing, analysing, optimization) was processed. So\ntime of EXPLAIN is time that you looking. Other similar value should\nby time of PREPARE statement.\n\nRegards\n\nPavel\n\n>\n> Pls let me know if my understanding is not correct. In that case, how do I find out how much of query time is spent to prepare the query and how much time is spent executing it.\n>\n> Thanks,\n> James\n>\n> On Dec 27, 2011, at 1:38 AM, Pavel Stehule <[email protected]> wrote:\n>\n>> Hello\n>>\n>> 2011/12/23 Igor Schtein <[email protected]>:\n>>> I'd like to find some measurements/figures of query preparation and planning\n>>> time to justify the use of prepared statements and stored procedures.\n>>> I know that complex queries have larger preparation time. Though, is it\n>>> possible to explicitly measure the time the optimizer spends parsing and\n>>> planning for query execution?\n>>\n>> You can use time for EXPLAIN statement\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>>>\n>>> Thank you,\n>>> James\n",
"msg_date": "Sat, 31 Dec 2011 08:49:37 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to clock the time spent for query parsing and planning?"
}
] |
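To make the suggestion in the thread above concrete, here is a minimal psql session for separating parse/plan time from run time. The table and query are hypothetical placeholders, and note that on releases before 9.2 PREPARE plans the statement immediately, while newer releases may defer planning to EXECUTE:

    \timing on
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;          -- parse + rewrite + plan only, no execution
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;  -- parse + plan + (instrumented) execution
    PREPARE q(int) AS SELECT * FROM orders WHERE customer_id = $1;
    EXPLAIN EXECUTE q(42);                                        -- plan used for the prepared statement

To a first approximation, the \timing figure for the plain EXPLAIN is the parse + plan cost; EXPLAIN ANALYZE adds the instrumented execution on top, so comparing the two gives a rough split between planning and execution.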
[
{
"msg_contents": "Hello,\n\n I'm running a fairly complex query on my postgres-8.4.9 Ubuntu box. The box has 8-core CPU, 18G of RAM and no virtualization layer. The query takes many hours to run. The query essentially involves a join of two large tables on a common string column, but it also includes joins with other smaller tables. I looked at the query plan, as produced by EXPLAIN and found it reasonable.\n\n When the query starts, Linux `top' shows fairly low RAM usage (in the hundreds of MB range) and about 100% CPU usage for the relevant postgres process. I'm also seeing some temp files being generated by this postgres process in the postgres temp directory. All this makes sense to me.\n\n As the query processes further, the memory usage by this postgres process shoots up to 12G resident and 17G virtual, while the CPU usage falls down to single-digit percents. The need to utilize more memory at some point during query execution seems in agreement with the query plan.\n\n I feel that my server configuration is not optimal: I would like to observe close to 100% CPU utilization on my queries, but seeing 20 times lower values.\n\n My query forks a single large-RAM process running on the server. There are other queries running on the same server, but they are quick and light on memory.\n\n I cannot explain the following observations:\n\n * Postgres is not writing temp files into its temp directory once the RAM usage goes up, but vmstat shows heavy disk usage, mostly the \"swap in\" field is high. Top shows 6G of swap space in use.\n\n * All my attempts to limit postgres' memory usage by playing with postgres config parameters failed.\n\n Here are the relevant parameters from postgresql.conf (I did use SHOW parameter to check that the parameters have been read by the server). I think I'm using the defaults for all other memory-related configurations.\n\n shared_buffers = 2GB (tried 8GB, didn't change anything)\n work_mem = 128MB (tried 257MB, didn't change anything)\n wal_buffers = 16MB\n effective_cache_size = 12GB (tried 2GB didn't change anything)\n\n In order to resolve my issue, I tried to search for postgres profiling tools and found no relevant ones. This is rather disappointing. That's what I expected to find:\n\n * A tool that could explain to me why postgres is swapping.\n\n * A tool that showed what kind of memory (work mem vs buffers, etc) was taking all that virtual memory space.\n\n * A tool for examining plans of the running queries. It would be helpful to see what stage of the query plan the server is stuck on (e.g. mark the query plans with some symbols that indicate \"currently running\", \"completed\", \"results in memory/disk\", etc).\n\n I realize that postgres is a free software and one cannot demand new features from people who invest their own free time in developing and maintaining it. I am hoping that my feedback could be useful for future development.\n\n Thanks!\n\nHello,\n \nI'm running a fairly complex query on my postgres-8.4.9 Ubuntu box. The box has 8-core CPU, 18G of RAM and no virtualization layer. The query takes many hours to run. The query essentially involves a join of two large tables on a common string column, but it also includes joins with other smaller tables. I looked at the query plan, as produced by EXPLAIN and found it reasonable.\n \nWhen the query starts, Linux `top' shows fairly low RAM usage (in the hundreds of MB range) and about 100% CPU usage for the relevant postgres process. 
I'm also seeing some temp files being generated by this postgres process in the postgres temp directory. All this makes sense to me.\n \nAs the query processes further, the memory usage by this postgres process shoots up to 12G resident and 17G virtual, while the CPU usage falls down to single-digit percents. The need to utilize more memory at some point during query execution seems in agreement with the query plan.\n \nI feel that my server configuration is not optimal: I would like to observe close to 100% CPU utilization on my queries, but seeing 20 times lower values.\n \nMy query forks a single large-RAM process running on the server. There are other queries running on the same server, but they are quick and light on memory.\n \nI cannot explain the following observations:\n \n* Postgres is not writing temp files into its temp directory once the RAM usage goes up, but vmstat shows heavy disk usage, mostly the \"swap in\" field is high. Top shows 6G of swap space in use.\n \n* All my attempts to limit postgres' memory usage by playing with postgres config parameters failed.\n \nHere are the relevant parameters from postgresql.conf (I did use SHOW parameter to check that the parameters have been read by the server). I think I'm using the defaults for all other memory-related configurations.\n \nshared_buffers = 2GB (tried 8GB, didn't change anything) \nwork_mem = 128MB (tried 257MB, didn't change anything) \nwal_buffers = 16MB \neffective_cache_size = 12GB (tried 2GB didn't change anything)\n \nIn order to resolve my issue, I tried to search for postgres profiling tools and found no relevant ones. This is rather disappointing. That's what I expected to find:\n \n* A tool that could explain to me why postgres is swapping.\n \n* A tool that showed what kind of memory (work mem vs buffers, etc) was taking all that virtual memory space.\n \n* A tool for examining plans of the running queries. It would be helpful to see what stage of the query plan the server is stuck on (e.g. mark the query plans with some symbols that indicate \"currently running\", \"completed\", \"results in memory/disk\", etc).\n \nI realize that postgres is a free software and one cannot demand new features from people who invest their own free time in developing and maintaining it. I am hoping that my feedback could be useful for future development.\n \nThanks!",
"msg_date": "Sat, 24 Dec 2011 14:22:39 -0500",
"msg_from": "\"Michael Smolsky\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Exploring memory usage"
},
{
"msg_contents": "On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky <[email protected]> wrote:\n> work_mem = 128MB (tried 257MB, didn't change anything)\n\nThis is probably your problem.\n\nWithout an EXPLAIN output, I cannot be sure, but 'work_mem' is not the\ntotal amount of memory a query can use, it's the amount of memory it\ncan use for *one* sort/hash/whatever operation. A complex query can\nhave many of those, so your machine is probably swapping due to\nexcessive memory requirements.\n\nTry *lowering* it. You can do so only for that query, by executing:\n\nset work_mem = '8MB'; <your query>\n",
"msg_date": "Tue, 27 Dec 2011 12:33:30 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Exploring memory usage"
},
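A sketch of how the per-query override described above can be scoped, so the lowered (or raised) setting cannot leak into other work on the same connection; the 8MB figure and the query itself are placeholders:

    -- session scope: stays in effect until RESET or disconnect
    SET work_mem = '8MB';
    SELECT ... ;   -- the big reporting query
    RESET work_mem;

    -- transaction scope: reverts automatically at COMMIT or ROLLBACK
    BEGIN;
    SET LOCAL work_mem = '8MB';
    SELECT ... ;
    COMMIT;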
{
"msg_contents": "On Tue, Dec 27, 2011 at 8:33 AM, Claudio Freire <[email protected]> wrote:\n> On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky <[email protected]> wrote:\n>> work_mem = 128MB (tried 257MB, didn't change anything)\n>\n> This is probably your problem.\n>\n> Without an EXPLAIN output, I cannot be sure, but 'work_mem' is not the\n> total amount of memory a query can use, it's the amount of memory it\n> can use for *one* sort/hash/whatever operation. A complex query can\n> have many of those, so your machine is probably swapping due to\n> excessive memory requirements.\n>\n> Try *lowering* it. You can do so only for that query, by executing:\n>\n> set work_mem = '8MB'; <your query>\n\nHe can lower it for just that query but honestly, even on a machine\nwith much more memory I'd never set it as high as he has it. On a\nbusy machine with 128G RAM the max I ever had it set to was 16M, and\nthat was high enough I kept a close eye on it (well, nagios did\nanway.)\n",
"msg_date": "Tue, 27 Dec 2011 09:00:20 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Exploring memory usage"
},
{
"msg_contents": "On Tue, Dec 27, 2011 at 1:00 PM, Scott Marlowe <[email protected]> wrote:\n> He can lower it for just that query but honestly, even on a machine\n> with much more memory I'd never set it as high as he has it. On a\n> busy machine with 128G RAM the max I ever had it set to was 16M, and\n> that was high enough I kept a close eye on it (well, nagios did\n> anway.)\n\nI have it quite high, because I know the blend of queries going into\nthe server allows it.\n\nBut yes, it's not a sensible setting if you didn't analyze the\nactivity carefully.\n",
"msg_date": "Tue, 27 Dec 2011 13:06:46 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Exploring memory usage"
},
{
"msg_contents": "\n\nOn 12/27/2011 11:00 AM, Scott Marlowe wrote:\n> On Tue, Dec 27, 2011 at 8:33 AM, Claudio Freire<[email protected]> wrote:\n>> On Sat, Dec 24, 2011 at 4:22 PM, Michael Smolsky<[email protected]> wrote:\n>>> work_mem = 128MB (tried 257MB, didn't change anything)\n>> This is probably your problem.\n>>\n>> Without an EXPLAIN output, I cannot be sure, but 'work_mem' is not the\n>> total amount of memory a query can use, it's the amount of memory it\n>> can use for *one* sort/hash/whatever operation. A complex query can\n>> have many of those, so your machine is probably swapping due to\n>> excessive memory requirements.\n>>\n>> Try *lowering* it. You can do so only for that query, by executing:\n>>\n>> set work_mem = '8MB';<your query>\n> He can lower it for just that query but honestly, even on a machine\n> with much more memory I'd never set it as high as he has it. On a\n> busy machine with 128G RAM the max I ever had it set to was 16M, and\n> that was high enough I kept a close eye on it (well, nagios did\n> anway.)\n\n\n\nIt depends on the workload. Your 16M setting would make many of my \nclients' systems slow to an absolute crawl for some queries, and they \ndon't run into swap issues, because we've made educated guesses about \nusage patterns.\n\ncheers\n\nandrew\n",
"msg_date": "Tue, 27 Dec 2011 11:14:40 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Exploring memory usage"
},
{
"msg_contents": "On Tue, Dec 27, 2011 at 9:14 AM, Andrew Dunstan <[email protected]> wrote:\n> It depends on the workload. Your 16M setting would make many of my clients'\n> systems slow to an absolute crawl for some queries, and they don't run into\n> swap issues, because we've made educated guesses about usage patterns.\n\nExactly. I've had an old Pentium4 machine that did reporting and only\nhad 2G RAM with a 256M work_mem setting, while the heavily loaded\nmachine I mentioned earlier handles something on the order of several\nhundred concurrent users and thousands of queries a second, and 16Meg\nwas a pretty big setting on that machine, but since most of the\nqueries were of the select * from sometable where pkid=123456 it\nwasn't too dangerous.\n\nIt's all about the workload. For that, we need more info from the OP.\n",
"msg_date": "Tue, 27 Dec 2011 09:17:02 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Exploring memory usage"
},
{
"msg_contents": "On Sat, Dec 24, 2011 at 12:22 PM, Michael Smolsky <[email protected]> wrote:\n> shared_buffers = 2GB (tried 8GB, didn't change anything)\n> work_mem = 128MB (tried 257MB, didn't change anything)\n\nAs someone mentioned, lower is better here. 128M is quite high.\n\n> effective_cache_size = 12GB (tried 2GB didn't change anything)\n\nThis doesn't affect memory usage. It only tells the planner about how\nbig the OS and pg caches are for the db. It's a very coarse\nadjustment knob, so don't get too worried about it.\n\n> In order to resolve my issue, I tried to search for postgres profiling tools\n> and found no relevant ones. This is rather disappointing. That's what I\n> expected to find:\n\nLook for pg_buffercache. I'm sue there's some others I'm forgetting.\nGrab a copy of Greg Smith's Performance PostgreSQL, it's got a lot of\ngreat info in it on handling heavy load servers.\n\n> I realize that postgres is a free software and one cannot demand new\n> features from people who invest their own free time in developing and\n> maintaining it. I am hoping that my feedback could be useful for future\n> development.\n\nIt's not just free as in beer. It's free as in do what you will with\nit. So, if you whip out your checkbook and start waving it around,\nyou can certainly pay someone to write the code to instrument this\nstuff. Whether you release it back into the wild is up to you. But\nyea, first see if someone's already done some work on that, like the\npg_bufffercache modules before spending money reinventing the wheel.\n",
"msg_date": "Tue, 27 Dec 2011 09:45:06 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Exploring memory usage"
}
] |
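For the pg_buffercache module mentioned above, a rough recipe: on 9.1 and later it is installed with CREATE EXTENSION, while on 8.4 it is installed by running the contrib SQL script shipped with the server packages. The summary query below is adapted from the module's documentation:

    CREATE EXTENSION pg_buffercache;   -- 9.1 or later only

    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = c.relfilenode
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 20;

This only shows what is sitting in shared_buffers; it does not account for work_mem allocations or the OS page cache, so it answers the "what is in the buffer cache" part of the question rather than the swapping part.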
[
{
"msg_contents": "there are some performance issues on a server and by searching in the logs i\nnoticed that the phases of parse and bind take considerably more time than\nexecute for most of the queries. i guess that the right thing to do in this\ncase is to use functions or prepare statements but in any case, what could\nbe the cause of this?\n\ninformation about the server->\n-CentOS 5.6\n-4-cores\n-12GB ram\n\n\nshared_buffers: 1 GB\ntemp_buffers = 100MB\nwork_mem : 30 MB\nmaintenance_mem: 512 MB\n\ndatabase_size: 1,5 GB\narchive_mode is ON\nvacuum/analyze (vacuum_scale_factor 0.1, analyze 0.05)\n\n\nthis behaviour is not related with checkpoints on the database (as indicated\nby the logs, i dont see this latency when a checkpoint occurs, i see it most\nof the time)\n\nso my question is the following; what can cause the bind/parse phases to\ntake so much longer than the execute? if you need any more info the server i\nll be glad to provide it. thank you in advance for your advice\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/parse-bind-take-more-time-than-execute-tp5102940p5102940.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 27 Dec 2011 02:52:13 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "parse - bind take more time than execute"
},
{
"msg_contents": "the version of postgres is 8.4.7 :)\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/parse-bind-take-more-time-than-execute-tp5102940p5102954.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 27 Dec 2011 03:01:59 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: parse - bind take more time than execute"
},
{
"msg_contents": "hello.\n\n1. planning time > execute time, it can happen normally, for some\nfast-executing queries, so it is not bad per se.\n\n2. what are your statistics settings? they influence planning time. I\nmean default_statistics_target and per-column SET STATISTICS?\n\n3. upgrade to 8.4.10, it's quick upgrade (minimal downtime) and there\nwere some planner improvements.\n\n4. what is \"considerably more time\" in absolute units?\n\n\nFilip\n\n\n2011/12/27 MirrorX <[email protected]>:\n> there are some performance issues on a server and by searching in the logs i\n> noticed that the phases of parse and bind take considerably more time than\n> execute for most of the queries. i guess that the right thing to do in this\n> case is to use functions or prepare statements but in any case, what could\n> be the cause of this?\n>\n> information about the server->\n> -CentOS 5.6\n> -4-cores\n> -12GB ram\n>\n>\n> shared_buffers: 1 GB\n> temp_buffers = 100MB\n> work_mem : 30 MB\n> maintenance_mem: 512 MB\n>\n> database_size: 1,5 GB\n> archive_mode is ON\n> vacuum/analyze (vacuum_scale_factor 0.1, analyze 0.05)\n>\n>\n> this behaviour is not related with checkpoints on the database (as indicated\n> by the logs, i dont see this latency when a checkpoint occurs, i see it most\n> of the time)\n>\n> so my question is the following; what can cause the bind/parse phases to\n> take so much longer than the execute? if you need any more info the server i\n> ll be glad to provide it. thank you in advance for your advice\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/parse-bind-take-more-time-than-execute-tp5102940p5102940.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 Dec 2011 13:01:04 +0100",
"msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: parse - bind take more time than execute"
},
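For the statistics settings asked about above, a sketch of how the target can be inspected globally and raised for a single column; the table and column names are placeholders and the value 500 is arbitrary:

    -- current global default
    SHOW default_statistics_target;

    -- per column, then refresh the stats
    ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 500;
    ANALYZE some_table;

A larger target gives the planner better estimates, but it also makes ANALYZE and planning itself somewhat more expensive, which matters in a thread about parse/plan time.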
{
"msg_contents": "thx a lot for your answer :)\n\n2) default_statistics_target is set to (default) 100 and there no special\nstatistics per-column\n\n3) i will do that very soon\n\n4) in absolute units i can see the same query having similar stats to these:\nparse -> 600 ms\nbind -> 300 ms\nexecute -> 50 ms\n\nthe query mentioned above is a simple select from one table using using two\nwhere conditions. and this table has 1 additional index (except the primary\nkey) on the columns that are part of the where clause\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/parse-bind-take-more-time-than-execute-tp5102940p5103116.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 27 Dec 2011 05:34:27 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: parse - bind take more time than execute"
},
{
"msg_contents": "Hello\n\n2011/12/27 MirrorX <[email protected]>:\n> there are some performance issues on a server and by searching in the logs i\n> noticed that the phases of parse and bind take considerably more time than\n> execute for most of the queries. i guess that the right thing to do in this\n> case is to use functions or prepare statements but in any case, what could\n> be the cause of this?\n>\n\nA reason should be a blind optimization of prepared statement.\nPrepared statements are optimized to most frequent values.\n\ntry to look on plan - statement EXPLAIN should be used for prepared\nstatements too.\n\nRegards\n\nPavel Stehule\n\n> information about the server->\n> -CentOS 5.6\n> -4-cores\n> -12GB ram\n>\n>\n> shared_buffers: 1 GB\n> temp_buffers = 100MB\n> work_mem : 30 MB\n> maintenance_mem: 512 MB\n>\n> database_size: 1,5 GB\n> archive_mode is ON\n> vacuum/analyze (vacuum_scale_factor 0.1, analyze 0.05)\n>\n>\n> this behaviour is not related with checkpoints on the database (as indicated\n> by the logs, i dont see this latency when a checkpoint occurs, i see it most\n> of the time)\n>\n> so my question is the following; what can cause the bind/parse phases to\n> take so much longer than the execute? if you need any more info the server i\n> ll be glad to provide it. thank you in advance for your advice\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/parse-bind-take-more-time-than-execute-tp5102940p5102940.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 Dec 2011 15:01:31 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: parse - bind take more time than execute"
},
{
"msg_contents": "i am not using prepared statements for now :)\ni just said that probably, if i do use them, i will get rid of that extra\ntime since the plan will be already 'decided' in advance \n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/parse-bind-take-more-time-than-execute-tp5102940p5103182.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Tue, 27 Dec 2011 06:21:02 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: parse - bind take more time than execute"
},
{
"msg_contents": "MirrorX wrote:\n \n> default_statistics_target is set to (default) 100 and there no\n> special statistics per-column\n \n> in absolute units i can see the same query having similar stats to\n> these:\n> parse -> 600 ms\n> bind -> 300 ms\n> execute -> 50 ms\n \nHow did you determine those timings?\n \n> the query mentioned above is a simple select from one table using\n> using two where conditions. and this table has 1 additional index\n> (except the primary key) on the columns that are part of the where\n> clause\n \nAre you saying that a simple query against a single table with only\ntwo indexes is taking 600 ms to plan and 300 ms to bind? I have\nnever seen anything remotely like that. Could you post the psql \\d\noutput for the table and the actual query?\n \nIt would also be good to include a description of the hardware and\nthe output of running the query on this page:\n \nhttp://wiki.postgresql.org/wiki/Server_Configuration\n \n-Kevin\n",
"msg_date": "Wed, 28 Dec 2011 17:22:10 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: parse - bind take more time than execute"
},
{
"msg_contents": "thx for your reply :)\n\n-the timings come from the log\n-the table is this ->\n\\d configurations\n Table \"public.configurationcontext\"\n Column | Type | Modifiers \n-------------------+------------------------+-----------\n id | numeric(18,0) | not null\n category | numeric(18,0) | \n pr_oid | numeric(18,0) | \n var_attrs | character varying(255) | \n num_value | numeric(18,0) | \nIndexes:\n \"pk_configurations\" PRIMARY KEY, btree (id)\n \"conf_index\" btree (category, pr_oid, num_value)\n\nand one query is this ->\nSELECT * FROM configurations WHERE pr_oid=$1 AND num_value=$2\n\n-the table has only 2500 rows\n-this messages used to appear a lot after i created a new index for the 2\ncolumns mentioned above in the query, since i thought that the 3-column\nindex wouldnt be of much help since the first column was not defined in the\nquery. now i have dropped this extra index and i see much less records in\nthe log about the bind/parse phase of the query\n\n-the server has 4 cores, 12 GB ram, and fata disks. the settings from the\nquery are these ->\n name | \ncurrent_setting \n---------------------------------+------------------------------------------------------------------------------------------------------------------\n version | PostgreSQL 8.4.7 on\nx86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat\n4.1.2-50), 64-bit\n archive_command | cp -i %p /var/lib/pgsql/wals/%f\n</dev/null\n archive_mode | on\n autovacuum_analyze_scale_factor | 0.05\n autovacuum_vacuum_scale_factor | 0.1\n bgwriter_delay | 50ms\n bgwriter_lru_maxpages | 200\n bgwriter_lru_multiplier | 4\n checkpoint_completion_target | 0.9\n checkpoint_segments | 30\n checkpoint_timeout | 15min\n effective_cache_size | 9GB\n lc_collate | en_US.UTF-8\n lc_ctype | en_US.UTF-8\n listen_addresses | *\n log_checkpoints | off\n log_directory | pg_log\n log_filename | postgresql-%a.log\n log_min_duration_statement | 50ms\n log_rotation_age | 1d\n log_rotation_size | 0\n log_truncate_on_rotation | on\n logging_collector | on\n maintenance_work_mem | 512MB\n max_connections | 100\n max_prepared_transactions | 20\n max_stack_depth | 8MB\n port | 5432\n server_encoding | UTF8\n shared_buffers | 2GB\n synchronous_commit | off\n temp_buffers | 12800\n TimeZone | Europe/Athens\n wal_buffers | 16MB\n work_mem | 30MB\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/parse-bind-take-more-time-than-execute-tp5102940p5107985.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 29 Dec 2011 08:12:49 -0800 (PST)",
"msg_from": "MirrorX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: parse - bind take more time than execute"
}
] |
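Following the earlier suggestion in this thread, the statement posted above could be prepared once per session and its plan inspected; this is only a sketch using the posted query, with made-up parameter values, and on pre-9.2 releases the plan is generic, built without knowledge of the actual parameter values:

    PREPARE conf_lookup(numeric, numeric) AS
      SELECT * FROM configurations WHERE pr_oid = $1 AND num_value = $2;

    EXPLAIN EXECUTE conf_lookup(12345, 1);   -- parameter values are made up
    EXECUTE conf_lookup(12345, 1);

If the application reuses the prepared statement, the parse/plan work that currently shows up in the parse and bind phases is done once per session rather than on every execution.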
[
{
"msg_contents": "PostgreSQL 9.0.2\nMac OS X Server 10.6.8\nAutovacuum is on, and I have a script that runs vacuum analyze verbose every night along with the backup.\n\nI have a situation where I'm experiencing a seq scan on a table with almost 3M rows when my condition is based on a subquery. A google search turned up a way to prevent flattening the subquery into a join using OFFSET 0. This does work, reducing the query from around 1s to around 250ms, most of which is the subquery. \n\nMy question is why does it do a seq scan when it flattens this subquery into a JOIN? Is it because the emsg_messages table is around 1M rows? Are there some guidelines to when the planner will prefer not to use an available index? I just had a look through postgresql.conf and noticed that I forgot to set effective_cache_size to something reasonable for a machine with 16GB of memory. Would the default setting of 128MB cause this behavior? I can't bounce the production server midday to test that change.\n\n\n\nEXPLAIN ANALYZE\nSELECT ema.message_id, ema.email_address_id, ema.address_type\nFROM emsg_message_addresses ema\nWHERE ema.message_id IN (\n\tSELECT id \n\tFROM emsg_messages msg \n\tWHERE msg.account_id = 314 AND msg.outgoing = FALSE \n\t AND msg.message_type = 1 AND msg.spam_level < 2 \n\t AND msg.deleted_at IS NULL \n\t AND msg.id NOT IN (\n\t\t\tSELECT emf.message_id \n\t\t\tFROM emsg_message_folders emf \n\t\t\twhere emf.account_id = 314\n\t)\n)\n\n\nQUERY PLAN\t\nHash Semi Join (cost=84522.74..147516.35 rows=49545 width=12) (actual time=677.058..1083.685 rows=2 loops=1)\t\n Hash Cond: (ema.message_id = msg.id)\t\n -> Seq Scan on emsg_message_addresses ema (cost=0.00..53654.78 rows=2873478 width=12) (actual time=0.020..424.241 rows=2875437 loops=1)\t\n -> Hash (cost=84475.45..84475.45 rows=3783 width=4) (actual time=273.392..273.392 rows=1 loops=1)\t\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\t\n -> Bitmap Heap Scan on emsg_messages msg (cost=7979.35..84475.45 rows=3783 width=4) (actual time=273.224..273.387 rows=1 loops=1)\t\n Recheck Cond: (account_id = 314)\t\n Filter: ((NOT outgoing) AND (deleted_at IS NULL) AND (spam_level < 2) AND (NOT (hashed SubPlan 1)) AND (message_type = 1))\t\n -> Bitmap Index Scan on index_emsg_messages_on_account_id (cost=0.00..867.98 rows=34611 width=0) (actual time=9.633..9.633 rows=34997 loops=1)\t\n Index Cond: (account_id = 314)\t\n SubPlan 1\t\n -> Bitmap Heap Scan on emsg_message_folders emf (cost=704.90..7022.51 rows=35169 width=4) (actual time=5.684..38.016 rows=34594 loops=1)\t\n Recheck Cond: (account_id = 314)\t\n -> Bitmap Index Scan on index_emsg_message_folders_on_account_id (cost=0.00..696.10 rows=35169 width=0) (actual time=5.175..5.175 rows=34594 loops=1)\t\n Index Cond: (account_id = 314)\t\nTotal runtime: 1083.890 ms\t\n\n\n\n\nEXPLAIN ANALYZE\nSELECT ema.message_id, ema.email_address_id, ema.address_type\nFROM emsg_message_addresses ema\nWHERE ema.message_id IN (\n\tSELECT id \n\tFROM emsg_messages msg \n\tWHERE msg.account_id = 314 AND msg.outgoing = FALSE \n\t AND msg.message_type = 1 AND msg.spam_level < 2 \n\t AND msg.deleted_at IS NULL \n\t AND msg.id NOT IN (\n\t\t\tSELECT emf.message_id \n\t\t\tFROM emsg_message_folders emf \n\t\t\twhere emf.account_id = 314\n\t)\n\tOFFSET 0\n)\n\n\nQUERY PLAN\t\nNested Loop (cost=84524.89..87496.74 rows=2619 width=12) (actual time=273.409..273.412 rows=2 loops=1)\t\n -> HashAggregate (cost=84524.89..84526.89 rows=200 width=4) (actual time=273.345..273.346 rows=1 loops=1)\t\n -> Limit 
(cost=7979.36..84477.60 rows=3783 width=4) (actual time=273.171..273.335 rows=1 loops=1)\t\n -> Bitmap Heap Scan on emsg_messages msg (cost=7979.36..84477.60 rows=3783 width=4) (actual time=273.169..273.333 rows=1 loops=1)\t\n Recheck Cond: (account_id = 314)\t\n Filter: ((NOT outgoing) AND (deleted_at IS NULL) AND (spam_level < 2) AND (NOT (hashed SubPlan 1)) AND (message_type = 1))\t\n -> Bitmap Index Scan on index_emsg_messages_on_account_id (cost=0.00..867.99 rows=34612 width=0) (actual time=9.693..9.693 rows=34998 loops=1)\t\n Index Cond: (account_id = 314)\t\n SubPlan 1\t\n -> Bitmap Heap Scan on emsg_message_folders emf (cost=704.90..7022.51 rows=35169 width=4) (actual time=5.795..39.420 rows=34594 loops=1)\t\n Recheck Cond: (account_id = 314)\t\n -> Bitmap Index Scan on index_emsg_message_folders_on_account_id (cost=0.00..696.10 rows=35169 width=0) (actual time=5.266..5.266 rows=34594 loops=1)\t\n Index Cond: (account_id = 314)\t\n -> Index Scan using index_emsg_message_addresses_on_message_id on emsg_message_addresses ema (cost=0.00..14.69 rows=13 width=12) (actual time=0.056..0.058 rows=2 loops=1)\t\n Index Cond: (ema.message_id = msg.id)\t\nTotal runtime: 273.679 ms\t\n\n\nJim Crate\n\n",
"msg_date": "Tue, 27 Dec 2011 12:29:14 -0500",
"msg_from": "Jim Crate <[email protected]>",
"msg_from_op": true,
"msg_subject": "Subquery flattening causing sequential scan"
},
{
"msg_contents": "Jim Crate <[email protected]> writes:\n> My question is why does it do a seq scan when it flattens this\n> subquery into a JOIN?\n\nBecause it thinks there will be 3783 rows out of the msg scan, which if\ntrue would make your desired nestloop join a serious loser. You need to\nsee about getting that estimate to be off by less than three orders of\nmagnitude. Possibly raising the stats target on emsg_messages would\nhelp. I'd also try converting the inner NOT IN into a NOT EXISTS, just\nto see if that makes the estimate any better. Using something newer\nthan 9.0.2 might help too, as we fixed some outer-join estimation bugs a\nfew months ago.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Dec 2011 13:12:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery flattening causing sequential scan "
},
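A sketch of the first suggestion, raising the statistics target for the column whose estimate is off; picking account_id here is a guess at the most relevant column, and the value 1000 is arbitrary:

    ALTER TABLE emsg_messages ALTER COLUMN account_id SET STATISTICS 1000;
    ANALYZE emsg_messages;

    -- then re-check the row estimate for the problem scan
    EXPLAIN SELECT id FROM emsg_messages
    WHERE account_id = 314 AND outgoing = FALSE
      AND message_type = 1 AND spam_level < 2 AND deleted_at IS NULL;

The NOT IN to NOT EXISTS rewrite is shown by the original poster further down in the thread.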
{
"msg_contents": "Hi,\n\nOn 28 December 2011 05:12, Tom Lane <[email protected]> wrote:\n> Possibly raising the stats target on emsg_messages would help.\n\nIn the function std_typanalyze() is this comment:\n\n /*--------------------\n * The following choice of minrows is based on the paper\n * \"Random sampling for histogram construction: how much is enough?\"\n * by Surajit Chaudhuri, Rajeev Motwani and Vivek Narasayya, in\n * Proceedings of ACM SIGMOD International Conference on Management\n * of Data, 1998, Pages 436-447. Their Corollary 1 to Theorem 5\n * says that for table size n, histogram size k, maximum relative\n * error in bin size f, and error probability gamma, the minimum\n * random sample size is\n * r = 4 * k * ln(2*n/gamma) / f^2\n * Taking f = 0.5, gamma = 0.01, n = 10^6 rows, we obtain\n * r = 305.82 * k\n * Note that because of the log function, the dependence on n is\n * quite weak; even at n = 10^12, a 300*k sample gives <= 0.66\n * bin size error with probability 0.99. So there's no real need to\n * scale for n, which is a good thing because we don't necessarily\n * know it at this point.\n *--------------------\n */\n\nThe question is why the parameter f is not exposed as a GUC? Sometimes\nit could make sense to have few bins with better estimation (for same\nr).\n\n-- \nOndrej Ivanic\n([email protected])\n",
"msg_date": "Wed, 28 Dec 2011 09:21:00 +1100",
"msg_from": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery flattening causing sequential scan"
},
{
"msg_contents": "=?UTF-8?Q?Ondrej_Ivani=C4=8D?= <[email protected]> writes:\n> The question is why the parameter f is not exposed as a GUC?\n\nWhat would that accomplish that default_statistics_target doesn't?\n(Other than being much harder to explain...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Dec 2011 19:28:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery flattening causing sequential scan "
},
{
"msg_contents": "'\n27.12.2011 20:13 пользователь \"Tom Lane\" <[email protected]> написал:\n>\n> Jim Crate <[email protected]> writes:\n> > My question is why does it do a seq scan when it flattens this\n> > subquery into a JOIN?\n>\n> Because it thinks there will be 3783 rows out of the msg scan, which if\n> true would make your desired nestloop join a serious loser.\n\nBut second plan is evaluated cheapier by analyze. I thought this should\nmake it being used unless it is not evaluated. Can it be collapse limit\nproblem or like?\n\n'\n27.12.2011 20:13 пользователь \"Tom Lane\" <[email protected]> написал:\n>\n> Jim Crate <[email protected]> writes:\n> > My question is why does it do a seq scan when it flattens this\n> > subquery into a JOIN?\n>\n> Because it thinks there will be 3783 rows out of the msg scan, which if\n> true would make your desired nestloop join a serious loser. \nBut second plan is evaluated cheapier by analyze. I thought this should make it being used unless it is not evaluated. Can it be collapse limit problem or like?",
"msg_date": "Wed, 28 Dec 2011 10:30:59 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery flattening causing sequential scan"
},
{
"msg_contents": "On Dec 27, 2011, at 1:12 PM, Tom Lane wrote:\n\n> Jim Crate <[email protected]> writes:\n>> My question is why does it do a seq scan when it flattens this\n>> subquery into a JOIN?\n> \n> Because it thinks there will be 3783 rows out of the msg scan, which if\n> true would make your desired nestloop join a serious loser. You need to\n> see about getting that estimate to be off by less than three orders of\n> magnitude. Possibly raising the stats target on emsg_messages would\n> help. I'd also try converting the inner NOT IN into a NOT EXISTS, just\n> to see if that makes the estimate any better. \n\n\nThe planner does choose the nested loop after converting the NOT IN to NOT EXISTS. Using LEFT JOIN / IS NULL also generated the same plan as NOT EXISTS. I guess I really need to learn more about reading explain plans, and expand my use of different constructs. It's so easy to fall into the trap of using the same construct in all situations just because it works well enough most of the time and is easy to read. \n\nAs for default_statistics_target, I read the docs and I'm not sure how increasing that value would help in this case. There are only a couple hundred accounts, and less than 5 values for message_type and spam_level. In the emsg_message_folders table, the message_id is considered unique (pg_stats has n_distinct = -1), which would also be correct. \n\n\n\nEXPLAIN ANALYZE\nSELECT ema.message_id, ema.email_address_id, ema.address_type\nFROM emsg_message_addresses ema\nWHERE ema.message_id IN (\n\tSELECT id \n\tFROM emsg_messages msg \n\tWHERE msg.account_id = 314 AND msg.outgoing = FALSE \n\t AND msg.message_type = 1 AND msg.spam_level < 2 \n\t AND msg.deleted_at IS NULL \n\t AND NOT EXISTS (\n\t\t\tSELECT emf.message_id \n\t\t\tFROM emsg_message_folders emf \n\t\t\tWHERE emf.account_id = 314 AND emf.message_id = msg.id\n\t)\n)\n\n\nQUERY PLAN\t\nNested Loop (cost=84785.80..84806.43 rows=100455 width=12) (actual time=262.507..262.528 rows=6 loops=1)\t\n -> HashAggregate (cost=84785.80..84785.81 rows=1 width=4) (actual time=262.445..262.446 rows=3 loops=1)\t\n -> Hash Anti Join (cost=8285.87..84785.80 rows=1 width=4) (actual time=254.363..262.426 rows=3 loops=1)\t\n Hash Cond: (msg.id = emf.message_id)\t\n -> Bitmap Heap Scan on emsg_messages msg (cost=869.66..77274.56 rows=7602 width=4) (actual time=13.622..204.879 rows=12387 loops=1)\t\n Recheck Cond: (account_id = 314)\t\n Filter: ((NOT outgoing) AND (deleted_at IS NULL) AND (spam_level < 2) AND (message_type = 1))\t\n -> Bitmap Index Scan on index_emsg_messages_on_account_id (cost=0.00..867.76 rows=34582 width=0) (actual time=8.756..8.756 rows=35091 loops=1)\t\n Index Cond: (account_id = 314)\t\n -> Hash (cost=6990.69..6990.69 rows=34042 width=4) (actual time=45.785..45.785 rows=34647 loops=1)\t\n Buckets: 4096 Batches: 1 Memory Usage: 1219kB\t\n -> Bitmap Heap Scan on emsg_message_folders emf (cost=680.16..6990.69 rows=34042 width=4) (actual time=5.465..35.842 rows=34647 loops=1)\t\n Recheck Cond: (account_id = 314)\t\n -> Bitmap Index Scan on index_emsg_message_folders_on_account_id (cost=0.00..671.65 rows=34042 width=0) (actual time=4.966..4.966 rows=34647 loops=1)\t\n Index Cond: (account_id = 314)\t\n -> Index Scan using index_emsg_message_addresses_on_message_id on emsg_message_addresses ema (cost=0.00..20.45 rows=13 width=12) (actual time=0.023..0.023 rows=2 loops=3)\t\n Index Cond: (ema.message_id = msg.id)\t\nTotal runtime: 262.742 ms\t\n\n\nJim Crate\n\n",
"msg_date": "Wed, 28 Dec 2011 12:22:49 -0500",
"msg_from": "Jim Crate <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Subquery flattening causing sequential scan "
},
{
"msg_contents": "On Tue, Dec 27, 2011 at 12:29 PM, Jim Crate <[email protected]> wrote:\n> My question is why does it do a seq scan when it flattens this subquery into a JOIN? Is it because the emsg_messages table is around 1M rows? Are there some guidelines to when the planner will prefer not to use an available index? I just had a look through postgresql.conf and noticed that I forgot to set effective_cache_size to something reasonable for a machine with 16GB of memory. Would the default setting of 128MB cause this behavior? I can't bounce the production server midday to test that change.\n\nYou wouldn't need to bounce the production server to test that. You\ncould just use SET in the session you were testing from.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 10 Jan 2012 19:19:00 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Subquery flattening causing sequential scan"
}
] |
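As noted above, the setting can be tried from a single session without touching the running server; a minimal sketch, where 12GB is only a guess (a value around three-quarters of the 16GB of RAM mentioned is a common starting point):

    SET effective_cache_size = '12GB';
    EXPLAIN ANALYZE SELECT ... ;   -- the problem query, re-run to see if the plan changes
    RESET effective_cache_size;

effective_cache_size only influences the planner's cost estimates; it does not allocate any memory, so experimenting with it per session is safe.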
[
{
"msg_contents": "I am doing POC on Posgtresql replication. I am using latest version of\npostgresql i.e. 9.1. There are multiple replication solutions avaliable in\nthe market (PGCluster, Pgpool-II, Slony-I). Postgresql also provide in-built\nreplication solutions (Streaming replication, Warm Standby and hot standby).\nI am confused which solution is best for the financial application for which\nI am doing POC. The application will write around 160 million records with\nrow size of 2.5 KB in database. My questions is for following scenarios\nwhich replication solution will be suitable:\n\nIf I would require replication for backup purpose only\nIf I would require to scale the reads\nIf I would require High Avaliability and Consistency\nAlso It will be very helpful if you can share the perfomance or experience\nwith postgresql replication solutions.\n\nThanks\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Postgresql-Replication-Performance-tp5107278p5107278.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Thu, 29 Dec 2011 01:33:04 -0800 (PST)",
"msg_from": "sgupta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql Replication Performance"
},
{
"msg_contents": "On 12/29/2011 11:33 AM, sgupta wrote:\n> I am doing POC on Posgtresql replication. I am using latest version of\n> postgresql i.e. 9.1. There are multiple replication solutions avaliable in\n> the market (PGCluster, Pgpool-II, Slony-I). Postgresql also provide in-built\n> replication solutions (Streaming replication, Warm Standby and hot standby).\n> I am confused which solution is best for the financial application for which\n> I am doing POC. The application will write around 160 million records with\n> row size of 2.5 KB in database. My questions is for following scenarios\n> which replication solution will be suitable:\n>\n> If I would require replication for backup purpose only\n> If I would require to scale the reads\n> If I would require High Avaliability and Consistency\n> Also It will be very helpful if you can share the perfomance or experience\n> with postgresql replication solutions.\n>\n> Thanks\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Postgresql-Replication-Performance-tp5107278p5107278.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n\nWhich replication solution will be suitable depends on your needs and \ndatabase architecture.\nStarting with PGCluster, I can say only, that PGCluster last released in \n2005 year, so you can not use it with Postgres 9.0 =)\nSlony-I is a good solution if you want to have cascade replication from \nSlave to Slave or you want to replicate only several parts of your \ndatabase (because Slony performs table level like replication)\nPGPool-II is an advanced load balancer and pooling solution. Which also \nhas replication support. Pgpool-II is query based replication utility, \nwhich performs queries on several database servers. If you are looking \nfor performance and stability I do not recommend using PGPool as \nreplication software.\nPostgres Streaming replication is WAL based replication, so using this \ntype of replication you will have absolutely identical database servers, \nwhat is best choice for HA and scaling reads. Also this choice is not \npractically affecting performance, because it is not adding any latency \nto database layer.\n\nAlso you could read about difference between Slony and Streaming \nreplications here \nhttp://scanningpages.wordpress.com/2010/10/09/9-0-streaming-replication-vs-slony/\n\n\n-- \nBest regards\n\nAleksej Trofimov\n\n\n\n\n\n\n\n On 12/29/2011 11:33 AM, sgupta wrote:\n \nI am doing POC on Posgtresql replication. I am using latest version of\npostgresql i.e. 9.1. There are multiple replication solutions avaliable in\nthe market (PGCluster, Pgpool-II, Slony-I). Postgresql also provide in-built\nreplication solutions (Streaming replication, Warm Standby and hot standby).\nI am confused which solution is best for the financial application for which\nI am doing POC. The application will write around 160 million records with\nrow size of 2.5 KB in database. 
My questions is for following scenarios\nwhich replication solution will be suitable:\n\nIf I would require replication for backup purpose only\nIf I would require to scale the reads\nIf I would require High Avaliability and Consistency\nAlso It will be very helpful if you can share the perfomance or experience\nwith postgresql replication solutions.\n\nThanks\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Postgresql-Replication-Performance-tp5107278p5107278.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n\n\n Which replication solution will be suitable depends on your needs\n and database architecture.\n Starting with PGCluster, I can say only, that PGCluster last\n released in 2005 year, so you can not use it\n with Postgres 9.0 =)\n Slony-I is a good solution if you want to have cascade replication\n from Slave to Slave or you want to replicate only several parts of\n your database (because Slony performs table level like\n replication)\n PGPool-II is an advanced load balancer and pooling solution. Which\n also has replication support. Pgpool-II is query based replication\n utility, which performs queries on several database servers. If\n you are looking for performance and stability I do not recommend\n using PGPool as replication software. \n Postgres Streaming replication is WAL based replication, so using\n this type of replication you will have absolutely identical\n database servers, what is best choice for HA and scaling reads.\n Also this choice is not practically affecting performance, because\n it is not adding any latency to database layer.\n\n Also you could read about difference between Slony and Streaming\n replications here\nhttp://scanningpages.wordpress.com/2010/10/09/9-0-streaming-replication-vs-slony/\n\n\n-- \nBest regards\n\nAleksej Trofimov",
"msg_date": "Thu, 29 Dec 2011 16:33:45 +0200",
"msg_from": "Aleksej Trofimov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Replication Performance"
},
{
"msg_contents": "On Thu, Dec 29, 2011 at 11:33 AM, Aleksej Trofimov\n<[email protected]> wrote:\n> Postgres Streaming replication is WAL based replication, so using this type\n> of replication you will have absolutely identical database servers, what is\n> best choice for HA and scaling reads. Also this choice is not practically\n> affecting performance, because it is not adding any latency to database\n> layer.\n\nLet me chime in, because I'm in a similar situation. I'm preparing a\nPOC WAL-replicated environment, and testing up until now has been\ninconclusive since we lack the kind of hardware in our test\nenvironment. I know I should require it, testing on similar hardware\nis the only way to get reliable results, but getting the budget\napproved would take way too long, and right now we're in a hurry to\nscale reads.\n\nSo getting the hardware is not an option, my option is asking those\nwho have the experience :-)\n\nI gather WAL replication introduces only a few possible bottlenecks.\n\nFirst, network bandwidth between master and slaves, and my app does\nwrite a lot - our monitoring tools show, today, an average of 1MB/s\nwrites on the WAL array, with peaks exceeding 8MB/s, which can easily\nsaturate our lowly 100Mb/s links. No worries, we can upgrade to 1Gb/s\nlinks.\n\nSecond, is that WAL activity on streaming replication or WAL shipping\nis documented to contain more data than on non-replicated setups. What\nis not clear is how much more data. This not only affects our network\nbandwidth estimations, but also I/O load on the master server, slowing\nwrites (and some reads that cannot happen on the slave).\n\nSo, my question is, in your experience, how much of an increase in WAL\nactivity can be expected?\n",
"msg_date": "Thu, 29 Dec 2011 12:00:33 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Replication Performance"
},
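One way to put a number on the WAL-volume question above, for an existing 9.x master, is to sample the WAL insert position and difference the samples over a representative interval; a rough sketch (the helper function for the arithmetic differs by version, and the LSN values in the comment are made up):

    SELECT now(), pg_current_xlog_insert_location();
    -- ... wait a representative interval, then sample again ...
    SELECT now(), pg_current_xlog_insert_location();

    -- On 9.2 and later the byte difference can be computed directly, e.g.:
    -- SELECT pg_xlog_location_diff('2/5D000000', '2/5C000000');

The growth of that position over time is the WAL volume the standbys will have to receive; it includes full-page images after each checkpoint, so it is usually noticeably larger than the logical row changes.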
{
"msg_contents": "On Thu, Dec 29, 2011 at 3:33 AM, sgupta <[email protected]> wrote:\n> I am doing POC on Posgtresql replication. I am using latest version of\n> postgresql i.e. 9.1. There are multiple replication solutions avaliable in\n> the market (PGCluster, Pgpool-II, Slony-I). Postgresql also provide in-built\n> replication solutions (Streaming replication, Warm Standby and hot standby).\n> I am confused which solution is best for the financial application for which\n> I am doing POC. The application will write around 160 million records with\n> row size of 2.5 KB in database. My questions is for following scenarios\n> which replication solution will be suitable:\n>\n> If I would require replication for backup purpose only\n> If I would require to scale the reads\n> If I would require High Avaliability and Consistency\n> Also It will be very helpful if you can share the perfomance or experience\n> with postgresql replication solutions.\n\nThe built in HS/SR integrates with the postgres engine (over the WAL\nsystem) at a very low level and is going to be generally faster and\nmore robust. More importantly, it has a very low administrative\noverhead -- the underlying mechanism of log shipping has been tweaked\nand refined continually since PITR was released in 8.0. Once you've\ndone it a few times, it's a five minute procedure to replicate a\ndatabase (not counting, heh, the base database copy).\n\nThe main disadvantage of HS/SR is inflexibility: you get an exact\nreplica of a database cluster. Slony (which is a trigger based\nsystem) and pgpool (which is statement replication) can do a lot of\nfunky things that hs/sr can't do -- so they definitely fill a niche\ndepending on what your requirements are.\n\nmerlin\n",
"msg_date": "Thu, 29 Dec 2011 09:05:30 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Replication Performance"
},
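For reference, a minimal 9.1-era streaming replication sketch to go with the HS/SR discussion; host names, the replication role, passwords and paths are placeholders, and production setups usually add WAL archiving on top:

    # primary postgresql.conf
    wal_level = hot_standby
    max_wal_senders = 3
    wal_keep_segments = 256          # how much WAL to retain for lagging standbys

    # primary pg_hba.conf
    host  replication  repuser  192.168.0.2/32  md5

    # take the base copy from the standby (pg_basebackup is available from 9.1)
    pg_basebackup -h primary-host -U repuser -D /var/lib/postgresql/9.1/main -P

    # standby postgresql.conf
    hot_standby = on

    # standby recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=primary-host port=5432 user=repuser password=secret'

Once the standby connects, the pg_stat_replication view on the master shows its state and how far behind it is.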
{
"msg_contents": "On 12/29/2011 05:00 PM, Claudio Freire wrote:\n> Second, is that WAL activity on streaming replication or WAL shipping\n> is documented to contain more data than on non-replicated setups. What\n> is not clear is how much more data. This not only affects our network\n> bandwidth estimations, but also I/O load on the master server, slowing\n> writes (and some reads that cannot happen on the slave).\nOur database has about 2MB/s writes on the WAL array, we had about 160 \nIOPS in average when replications was switched off, and 165-170 IOPS in \nreplication. This I think could be explained with statistical error, so \nwe have not experienced any I/O load on our master server since \nreplication was configured.\n\n\n-- \nBest regards\n\nAleksej Trofimov\n\n",
"msg_date": "Thu, 29 Dec 2011 18:05:36 +0200",
"msg_from": "Aleksej Trofimov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Replication Performance"
},
{
"msg_contents": "Thank you all for the valuable information. Now we have decide to go\nwith streaming replication. I did the setup on machine and it is\nworking good. Now I have to implement the automatic failover. Please\nshare a solution for the same.\n\nSaurabh Gupta\n",
"msg_date": "Wed, 4 Jan 2012 02:41:40 -0800 (PST)",
"msg_from": "Saurabh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Replication Performance"
},
{
"msg_contents": "On 01/04/2012 12:41 PM, Saurabh wrote:\n> Thank you all for the valuable information. Now we have decide to go\n> with streaming replication. I did the setup on machine and it is\n> working good. Now I have to implement the automatic failover. Please\n> share a solution for the same.\n>\n> Saurabh Gupta\n>\nYou ca use pgpool-II for automatic failover and connection cache. This \narticle is good enough\nhttp://pgpool.projects.postgresql.org/contrib_docs/simple_sr_setting/index.html\n\nAlso do not forget to configure Postgres max_connections >= (pgpool) \nnum_init_children*max_pool if you'll use connections cache.\n\n-- \nBest regards\n\nAleksej Trofimov\n\n\n",
"msg_date": "Wed, 04 Jan 2012 13:42:26 +0200",
"msg_from": "Aleksej Trofimov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Replication Performance"
}
] |
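To make the sizing rule above concrete, a sketch with made-up numbers and paths, plus the pgpool-II hook that usually drives automatic failover (the failover script itself is site-specific and not shown):

    # pgpool.conf -- illustrative values only
    num_init_children = 32        # pgpool worker processes
    max_pool = 4                  # cached backend connections per worker
    # => each backend must allow at least 32 * 4 = 128 connections
    #    (postgresql.conf: max_connections = 130 or more, leaving superuser slots)

    failover_command = '/etc/pgpool/failover.sh %d %H /tmp/trigger_file'

    # on the standby, recovery.conf names the trigger file so that
    # creating it promotes the standby:
    trigger_file = '/tmp/trigger_file'

Whichever watchdog does the promotion, on 9.1 the promotion itself is just creating that trigger file on the standby (or running pg_ctl promote).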
[
{
"msg_contents": "MirrorX wrote:\n \n> -the table is this ->\n> \\d configurations\n> Table \"public.configurationcontext\"\n \nOne of my concerns was that you might actually be selecting against a\nview rather than a table, and the above doesn't reassure me that\nyou're not. Why the difference between \"configurations\" and\n\"configurationcontext\"?\n \n-Kevin\n\n",
"msg_date": "Thu, 29 Dec 2011 10:51:21 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: parse - bind take more time than execute"
}
] |
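A quick way to answer the question above about whether "configurations" is the table itself or a view over configurationcontext; psql's \d output already shows this, but it can also be read straight from the catalog:

    SELECT relname, relkind
    FROM pg_class
    WHERE relname IN ('configurations', 'configurationcontext');
    -- relkind: 'r' = ordinary table, 'v' = view

If it turned out to be a view, the planner would also be expanding the view definition during parse/plan, which is worth ruling out given the timings being discussed.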
[
{
"msg_contents": "Hi,\n\nI have a two tables that are partitioned by month.\n\nI have different results for the same query (query A/query B), the only thing that differ from A and B is the customer id.\n\n\nQuery A:\n\nSELECT sms.id AS id_sms\n \n FROM \n sms_messaggio AS sms,\n sms_messaggio_dlr AS dlr\n WHERE sms.id = dlr.id_sms_messaggio\n AND sms.timestamp_todeliver >= '1/3/2010'::timestamp\n AND sms.timestamp_todeliver < '30/4/2010'::timestamp \n AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp\n AND dlr.timestamp_todeliver < '30/4/2010'::timestamp\n AND sms.id_cliente = '13'\n ORDER BY dlr.timestamp_todeliver DESC LIMIT 50;\n\nPLAN:\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.02..943.11 rows=50 width=16) (actual time=0.616..83.103 rows=50 loops=1)\n -> Nested Loop (cost=0.02..107279143.34 rows=5687651 width=16) (actual time=0.615..83.045 rows=50 loops=1)\n Join Filter: (sms.id = dlr.id_sms_messaggio)\n -> Merge Append (cost=0.02..20289460.70 rows=5687651 width=16) (actual time=0.046..15.379 rows=5874 loops=1)\n Sort Key: dlr.timestamp_todeliver\n -> Index Scan Backward using sms_messaggio_dlr_todeliver on sms_messaggio_dlr dlr (cost=0.00..8.27 rows=1 width=16) (actual time=0.004..0.004 rows=0 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201003 on sms_messaggio_dlr_201003 dlr (cost=0.00..12428664.98 rows=3502530 width=16) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201004 on sms_messaggio_dlr_201004 dlr (cost=0.00..7756421.17 rows=2185120 width=16) (actual time=0.023..8.458 rows=5874 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Append (cost=0.00..15.26 rows=3 width=8) (actual time=0.010..0.010 rows=0 loops=5874)\n -> Index Scan using sms_messaggio_pkey1 on sms_messaggio sms (cost=0.00..0.28 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=5874)\n Index Cond: (id = dlr.id_sms_messaggio)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 13))\n -> Index Scan using sms_messaggio_201003_pkey on sms_messaggio_201003 sms (cost=0.00..7.54 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=5874)\n Index Cond: (id = dlr.id_sms_messaggio)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 13))\n -> Index Scan using sms_messaggio_201004_pkey on sms_messaggio_201004 sms (cost=0.00..7.45 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=5874)\n Index Cond: (id = dlr.id_sms_messaggio)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente 
= 13))\n Total runtime: 83.201 ms\n\nQuery B:\nEXPLAIN ANALYZE SELECT sms.id AS id_sms,\n dlr.msisdn,\n to_char(dlr.timestamp_stato,'DD/MM/YYYY HH24:MI:SS') AS timestamp_stato,\n dlr.stato,\n dlr.id AS id_dlr,\n dlr.numero_pdu,\n dlr.costo_cli\n FROM\n sms_messaggio AS sms,\n sms_messaggio_dlr AS dlr\n WHERE sms.id = dlr.id_sms_messaggio\n AND sms.timestamp_todeliver >= '1/3/2010'::timestamp\n AND sms.timestamp_todeliver < '30/4/2010'::timestamp\n AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp\n AND dlr.timestamp_todeliver < '30/4/2010'::timestamp\n AND sms.id_cliente = '7'\n ORDER BY dlr.timestamp_todeliver ASC LIMIT 50;\n\nPLAN:\n\n Limit (cost=0.02..78345.78 rows=50 width=54) (actual time=8852.661..269509.298 rows=50 loops=1)\n -> Nested Loop (cost=0.02..58256338.38 rows=37179 width=54) (actual time=8852.658..269509.225 rows=50 loops=1)\n Join Filter: (sms.id = dlr.id_sms_messaggio)\n -> Merge Append (cost=0.02..20289460.70 rows=5687651 width=54) (actual time=0.067..4016.421 rows=1568544 loops=1)\n Sort Key: dlr.timestamp_todeliver\n -> Index Scan using sms_messaggio_dlr_todeliver on sms_messaggio_dlr dlr (cost=0.00..8.27 rows=1 width=101) (actual time=0.005..0.005 rows=0 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Index Scan using sms_messaggio_dlr_timestamp_todeliver_201003 on sms_messaggio_dlr_201003 dlr (cost=0.00..12428664.98 rows=3502530 width=54) (actual time=0.030..2405.200 rows=1568544 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Index Scan using sms_messaggio_dlr_timestamp_todeliver_201004 on sms_messaggio_dlr_201004 dlr (cost=0.00..7756421.17 rows=2185120 width=55) (actual time=0.028..0.028 rows=1 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Materialize (cost=0.00..1715.42 rows=445 width=8) (actual time=0.001..0.080 rows=161 loops=1568544)\n -> Append (cost=0.00..1713.20 rows=445 width=8) (actual time=0.034..0.337 rows=161 loops=1)\n -> Seq Scan on sms_messaggio sms (cost=0.00..0.00 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 7))\n -> Bitmap Heap Scan on sms_messaggio_201003 sms (cost=6.85..1199.49 rows=313 width=8) (actual time=0.032..0.122 rows=94 loops=1)\n Recheck Cond: (id_cliente = 7)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on sms_messaggio_id_cliente_201003 (cost=0.00..6.78 rows=313 width=0) (actual time=0.022..0.022 rows=94 loops=1)\n Index Cond: (id_cliente = 7)\n -> Index Scan using sms_messaggio_id_cliente_timestamp_201004 on sms_messaggio_201004 sms (cost=0.00..513.71 rows=131 width=8) (actual time=0.016..0.072 rows=67 loops=1)\n Index Cond: ((id_cliente = 7) AND (timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n Total runtime: 269510.002 ms\n\nI'm using pg 9.1\n\nCan someone explain me why the 
planner does this?\n\nThanks\n\nMatteo\n",
"msg_date": "Fri, 30 Dec 2011 17:01:00 +0100 (CET)",
"msg_from": "Matteo Sgalaberni <[email protected]>",
"msg_from_op": true,
"msg_subject": "partitioned table: differents plans, slow on some situations"
},
{
"msg_contents": "W dniu 30.12.2011 17:01, Matteo Sgalaberni pisze:\n> Hi,\n\nHello,\n\n> I have a two tables that are partitioned by month.\n> \n> I have different results for the same query (query A/query B), the only thing that differ from A and B is the customer id.\n\nNot only:\n\n> Query A:\n> \n> SELECT sms.id AS id_sms\n> \n> FROM\n> sms_messaggio AS sms,\n> sms_messaggio_dlr AS dlr\n> WHERE sms.id = dlr.id_sms_messaggio\n> AND sms.timestamp_todeliver >= '1/3/2010'::timestamp\n> AND sms.timestamp_todeliver < '30/4/2010'::timestamp\n> AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp\n> AND dlr.timestamp_todeliver < '30/4/2010'::timestamp\n> AND sms.id_cliente = '13'\n> ORDER BY dlr.timestamp_todeliver DESC LIMIT 50;\n ^^^^^^^\n\n> Query B:\n> EXPLAIN ANALYZE SELECT sms.id AS id_sms,\n> dlr.msisdn,\n> to_char(dlr.timestamp_stato,'DD/MM/YYYY HH24:MI:SS') AS timestamp_stato,\n> dlr.stato,\n> dlr.id AS id_dlr,\n> dlr.numero_pdu,\n> dlr.costo_cli\n> FROM\n> sms_messaggio AS sms,\n> sms_messaggio_dlr AS dlr\n> WHERE sms.id = dlr.id_sms_messaggio\n> AND sms.timestamp_todeliver >= '1/3/2010'::timestamp\n> AND sms.timestamp_todeliver < '30/4/2010'::timestamp\n> AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp\n> AND dlr.timestamp_todeliver < '30/4/2010'::timestamp\n> AND sms.id_cliente = '7'\n> ORDER BY dlr.timestamp_todeliver ASC LIMIT 50;\n ^^^^^\n> I'm using pg 9.1\n> \n> Can someone explain me why the planner do this?\n\nThose queries are diffrent.\nRegards.\n\n",
"msg_date": "Fri, 30 Dec 2011 17:23:25 +0100",
"msg_from": "=?UTF-8?B?TWFyY2luIE1pcm9zxYJhdw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioned table: differents plans, slow on some situations"
},
{
"msg_contents": "I'm sorry, I pasted the wrong ones, but the results are the same, here A and B again:\n\nQuery A\n\n# EXPLAIN ANALYZE SELECT sms.id AS id_sms\n\n FROM\n sms_messaggio AS sms,\n sms_messaggio_dlr AS dlr\n WHERE sms.id = dlr.id_sms_messaggio\n AND sms.timestamp_todeliver >= '1/3/2010'::timestamp\n AND sms.timestamp_todeliver < '30/4/2010'::timestamp\n AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp\n AND dlr.timestamp_todeliver < '30/4/2010'::timestamp\n AND sms.id_cliente = '13'\n ORDER BY dlr.timestamp_todeliver DESC LIMIT 50;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.02..943.11 rows=50 width=16) (actual time=0.603..79.729 rows=50 loops=1)\n -> Nested Loop (cost=0.02..107279143.34 rows=5687651 width=16) (actual time=0.601..79.670 rows=50 loops=1)\n Join Filter: (sms.id = dlr.id_sms_messaggio)\n -> Merge Append (cost=0.02..20289460.70 rows=5687651 width=16) (actual time=0.048..14.556 rows=5874 loops=1)\n Sort Key: dlr.timestamp_todeliver\n -> Index Scan Backward using sms_messaggio_dlr_todeliver on sms_messaggio_dlr dlr (cost=0.00..8.27 rows=1 width=16) (actual time=0.005..0.005 rows=0 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201003 on sms_messaggio_dlr_201003 dlr (cost=0.00..12428664.98 rows=3502530 width=16) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201004 on sms_messaggio_dlr_201004 dlr (cost=0.00..7756421.17 rows=2185120 width=16) (actual time=0.022..8.408 rows=5874 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Append (cost=0.00..15.26 rows=3 width=8) (actual time=0.010..0.010 rows=0 loops=5874)\n -> Index Scan using sms_messaggio_pkey1 on sms_messaggio sms (cost=0.00..0.28 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=5874)\n Index Cond: (id = dlr.id_sms_messaggio)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 13))\n -> Index Scan using sms_messaggio_201003_pkey on sms_messaggio_201003 sms (cost=0.00..7.54 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=5874)\n Index Cond: (id = dlr.id_sms_messaggio)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 13))\n -> Index Scan using sms_messaggio_201004_pkey on sms_messaggio_201004 sms (cost=0.00..7.45 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=5874)\n Index Cond: (id = dlr.id_sms_messaggio)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 13))\n Total runtime: 79.821 ms\n(22 rows)\n\nQuery B:\n# EXPLAIN ANALYZE SELECT 
sms.id AS id_sms\n \n FROM \n sms_messaggio AS sms,\n sms_messaggio_dlr AS dlr\n WHERE sms.id = dlr.id_sms_messaggio\n AND sms.timestamp_todeliver >= '1/3/2010'::timestamp\n AND sms.timestamp_todeliver < '30/4/2010'::timestamp \n AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp\n AND dlr.timestamp_todeliver < '30/4/2010'::timestamp\n AND sms.id_cliente = '7'\n ORDER BY dlr.timestamp_todeliver DESC LIMIT 50;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.02..78345.66 rows=50 width=16) (actual time=183.547..257383.459 rows=50 loops=1)\n -> Nested Loop (cost=0.02..58256245.44 rows=37179 width=16) (actual time=183.544..257383.379 rows=50 loops=1)\n Join Filter: (sms.id = dlr.id_sms_messaggio)\n -> Merge Append (cost=0.02..20289460.70 rows=5687651 width=16) (actual time=0.047..4040.930 rows=1490783 loops=1)\n Sort Key: dlr.timestamp_todeliver\n -> Index Scan Backward using sms_messaggio_dlr_todeliver on sms_messaggio_dlr dlr (cost=0.00..8.27 rows=1 width=16) (actual time=0.005..0.005 rows=0 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201003 on sms_messaggio_dlr_201003 dlr (cost=0.00..12428664.98 rows=3502530 width=16) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201004 on sms_messaggio_dlr_201004 dlr (cost=0.00..7756421.17 rows=2185120 width=16) (actual time=0.022..2511.283 rows=1490783 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Materialize (cost=0.00..1715.42 rows=445 width=8) (actual time=0.001..0.081 rows=161 loops=1490783)\n -> Append (cost=0.00..1713.20 rows=445 width=8) (actual time=0.111..0.502 rows=161 loops=1)\n -> Seq Scan on sms_messaggio sms (cost=0.00..0.00 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 7))\n -> Bitmap Heap Scan on sms_messaggio_201003 sms (cost=6.85..1199.49 rows=313 width=8) (actual time=0.108..0.245 rows=94 loops=1)\n Recheck Cond: (id_cliente = 7)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on sms_messaggio_id_cliente_201003 (cost=0.00..6.78 rows=313 width=0) (actual time=0.083..0.083 rows=94 loops=1)\n Index Cond: (id_cliente = 7)\n -> Index Scan using sms_messaggio_id_cliente_timestamp_201004 on sms_messaggio_201004 sms (cost=0.00..513.71 rows=131 width=8) (actual time=0.059..0.113 rows=67 loops=1)\n Index Cond: ((id_cliente = 7) AND (timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n Total runtime: 257383.922 ms\n\n\nThanks\n\nM.\n\n",
"msg_date": "Fri, 30 Dec 2011 17:35:35 +0100 (CET)",
"msg_from": "Matteo Sgalaberni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioned table: differents plans, slow on some situations"
},
{
"msg_contents": "2011/12/30 Matteo Sgalaberni <[email protected]>:\n> I'm sorry, I pasted the wrong ones, but the results are the same, here A and B again:\n>\n> Query A\n>\n> # EXPLAIN ANALYZE SELECT sms.id AS id_sms\n>\n> FROM\n> sms_messaggio AS sms,\n> sms_messaggio_dlr AS dlr\n> WHERE sms.id = dlr.id_sms_messaggio\n> AND sms.timestamp_todeliver >= '1/3/2010'::timestamp\n> AND sms.timestamp_todeliver < '30/4/2010'::timestamp\n> AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp\n> AND dlr.timestamp_todeliver < '30/4/2010'::timestamp\n> AND sms.id_cliente = '13'\n> ORDER BY dlr.timestamp_todeliver DESC LIMIT 50;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.02..943.11 rows=50 width=16) (actual time=0.603..79.729 rows=50 loops=1)\n> -> Nested Loop (cost=0.02..107279143.34 rows=5687651 width=16) (actual time=0.601..79.670 rows=50 loops=1)\n> Join Filter: (sms.id = dlr.id_sms_messaggio)\n> -> Merge Append (cost=0.02..20289460.70 rows=5687651 width=16) (actual time=0.048..14.556 rows=5874 loops=1)\n> Sort Key: dlr.timestamp_todeliver\n> -> Index Scan Backward using sms_messaggio_dlr_todeliver on sms_messaggio_dlr dlr (cost=0.00..8.27 rows=1 width=16) (actual time=0.005..0.005 rows=0 loops=1)\n> Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n> -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201003 on sms_messaggio_dlr_201003 dlr (cost=0.00..12428664.98 rows=3502530 width=16) (actual time=0.018..0.018 rows=1 loops=1)\n> Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n> -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201004 on sms_messaggio_dlr_201004 dlr (cost=0.00..7756421.17 rows=2185120 width=16) (actual time=0.022..8.408 rows=5874 loops=1)\n> Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n> -> Append (cost=0.00..15.26 rows=3 width=8) (actual time=0.010..0.010 rows=0 loops=5874)\n> -> Index Scan using sms_messaggio_pkey1 on sms_messaggio sms (cost=0.00..0.28 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=5874)\n> Index Cond: (id = dlr.id_sms_messaggio)\n> Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 13))\n> -> Index Scan using sms_messaggio_201003_pkey on sms_messaggio_201003 sms (cost=0.00..7.54 rows=1 width=8) (actual time=0.002..0.002 rows=0 loops=5874)\n> Index Cond: (id = dlr.id_sms_messaggio)\n> Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 13))\n> -> Index Scan using sms_messaggio_201004_pkey on sms_messaggio_201004 sms (cost=0.00..7.45 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=5874)\n> Index Cond: (id = dlr.id_sms_messaggio)\n> Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND 
(id_cliente = 13))\n> Total runtime: 79.821 ms\n> (22 rows)\n>\n> Query B:\n> # EXPLAIN ANALYZE SELECT sms.id AS id_sms\n>\n> FROM\n> sms_messaggio AS sms,\n> sms_messaggio_dlr AS dlr\n> WHERE sms.id = dlr.id_sms_messaggio\n> AND sms.timestamp_todeliver >= '1/3/2010'::timestamp\n> AND sms.timestamp_todeliver < '30/4/2010'::timestamp\n> AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp\n> AND dlr.timestamp_todeliver < '30/4/2010'::timestamp\n> AND sms.id_cliente = '7'\n> ORDER BY dlr.timestamp_todeliver DESC LIMIT 50;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.02..78345.66 rows=50 width=16) (actual time=183.547..257383.459 rows=50 loops=1)\n> -> Nested Loop (cost=0.02..58256245.44 rows=37179 width=16) (actual time=183.544..257383.379 rows=50 loops=1)\n> Join Filter: (sms.id = dlr.id_sms_messaggio)\n> -> Merge Append (cost=0.02..20289460.70 rows=5687651 width=16) (actual time=0.047..4040.930 rows=1490783 loops=1)\n> Sort Key: dlr.timestamp_todeliver\n> -> Index Scan Backward using sms_messaggio_dlr_todeliver on sms_messaggio_dlr dlr (cost=0.00..8.27 rows=1 width=16) (actual time=0.005..0.005 rows=0 loops=1)\n> Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n> -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201003 on sms_messaggio_dlr_201003 dlr (cost=0.00..12428664.98 rows=3502530 width=16) (actual time=0.018..0.018 rows=1 loops=1)\n> Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n> -> Index Scan Backward using sms_messaggio_dlr_timestamp_todeliver_201004 on sms_messaggio_dlr_201004 dlr (cost=0.00..7756421.17 rows=2185120 width=16) (actual time=0.022..2511.283 rows=1490783 loops=1)\n> Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n> -> Materialize (cost=0.00..1715.42 rows=445 width=8) (actual time=0.001..0.081 rows=161 loops=1490783)\n> -> Append (cost=0.00..1713.20 rows=445 width=8) (actual time=0.111..0.502 rows=161 loops=1)\n> -> Seq Scan on sms_messaggio sms (cost=0.00..0.00 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=1)\n> Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone) AND (id_cliente = 7))\n> -> Bitmap Heap Scan on sms_messaggio_201003 sms (cost=6.85..1199.49 rows=313 width=8) (actual time=0.108..0.245 rows=94 loops=1)\n> Recheck Cond: (id_cliente = 7)\n> Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n> -> Bitmap Index Scan on sms_messaggio_id_cliente_201003 (cost=0.00..6.78 rows=313 width=0) (actual time=0.083..0.083 rows=94 loops=1)\n> Index Cond: (id_cliente = 7)\n> -> Index Scan using sms_messaggio_id_cliente_timestamp_201004 on sms_messaggio_201004 sms (cost=0.00..513.71 rows=131 width=8) (actual time=0.059..0.113 rows=67 loops=1)\n> Index Cond: ((id_cliente = 7) AND (timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND 
(timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n> Total runtime: 257383.922 ms\n\nHmm. In the first (good) plan, the planner is using a parameterized\nnestloop. So for each row it finds in dlr, it looks up\ndlr.id_sms_messaggio and passes that down to the index scans, which\nthen pull out just the rows where sms.id takes that specific value.\nIn the second (bad) plan, the planner is using an unparameterized\nnestloop: it's fetching all 445 rows that match the remaining criteria\non sms_messagio (i.e. date and id_cliente) and then repeatedly\nrescanning the output of that calculation. My guess is that the\nplanner figures that repeated index scans are going to cause too much\nI/O, and that caching the results is better; you might want to check\nyour values for random_page_cost, seq_page_cost, and\neffective_cache_size.\n\nThat having been said, if the planner doesn't like the idea of\nrepeatedly index-scanning, why not use a hash join instead of a nested\nloop? That seems likely to be a whole lot faster for the 445 rows the\nplanner is estimating. Can you show us all of your non-default\nconfiguration settings?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Tue, 10 Jan 2012 20:53:32 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioned table: differents plans, slow on some situations"
}
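A hedged sketch of how one might follow up on Robert's suggestions: list the settings that differ from their defaults, then re-run the slow query with nested loops discouraged to see whether a hash or merge join does better. The table and column names come from query B above; everything else is an assumption, not something posted in the thread.

    -- Settings changed away from their defaults (the information Robert asks for).
    SELECT name, setting, source
      FROM pg_settings
     WHERE source NOT IN ('default', 'override');

    -- Re-run the slow query with nested loops discouraged, scoped to one
    -- transaction so the session keeps its normal behaviour afterwards.
    BEGIN;
    SET LOCAL enable_nestloop = off;
    EXPLAIN ANALYZE
    SELECT sms.id AS id_sms
      FROM sms_messaggio AS sms
      JOIN sms_messaggio_dlr AS dlr ON sms.id = dlr.id_sms_messaggio
     WHERE sms.timestamp_todeliver >= '1/3/2010'::timestamp
       AND sms.timestamp_todeliver <  '30/4/2010'::timestamp
       AND dlr.timestamp_todeliver >= '1/3/2010'::timestamp
       AND dlr.timestamp_todeliver <  '30/4/2010'::timestamp
       AND sms.id_cliente = '7'
     ORDER BY dlr.timestamp_todeliver DESC
     LIMIT 50;
    ROLLBACK;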
] |
[
{
"msg_contents": "Hi all!\n\nI've ran into a performance problem a few time ago and I've been trying \nto figure out a solution until now. But since I've failed to come up \nwith anything conclusive, it's time to ask some help from people with \nmore understanding of how postgresql works.\n\nHere's the big picture.\nI work for a software company that has it's main program installed on \nover 200 clients. This program uses a small local database in \npostgresql. Always installed with the one-click installer and \npostgresql.conf left on default settings. This structure allows us to \nalways install the latest version of postgresql both in new clients and \nolder clients (when they are updated). And all was well for over 7 years.\nBut with postgresql version 9.0.5 (in version 9.0.4 all was fine), we \nnoticed the program was taking longer to start. In fact, in some clients \nthat had older hardware, it could take around 20 minutes when it usually \ntakes only a few seconds. To make a long story short, the problem was \ntraced and narrowed down to a single auto generated query. Here it is:\n\n\"SELECT\n NULL::text AS PKTABLE_CAT,\n pkn.nspname AS PKTABLE_SCHEM,\n pkc.relname AS PKTABLE_NAME,\n pka.attname AS PKCOLUMN_NAME,\n NULL::text AS FKTABLE_CAT,\n fkn.nspname AS FKTABLE_SCHEM,\n fkc.relname AS FKTABLE_NAME,\n fka.attname AS FKCOLUMN_NAME,\n pos.n AS KEY_SEQ,\n CASE con.confupdtype WHEN 'c' THEN 0 WHEN 'n' THEN 2 WHEN 'd' THEN \n4 WHEN 'r' THEN 1 WHEN 'a' THEN 3 ELSE NULL END AS UPDATE_RULE,\n CASE con.confdeltype WHEN 'c' THEN 0 WHEN 'n' THEN 2 WHEN 'd' THEN \n4 WHEN 'r' THEN 1 WHEN 'a' THEN 3 ELSE NULL END AS DELETE_RULE,\n con.conname AS FK_NAME,\n pkic.relname AS PK_NAME,\n CASE WHEN con.condeferrable AND con.condeferred THEN 5 WHEN \ncon.condeferrable THEN 6 ELSE 7 END AS DEFERRABILITY\nFROM\n pg_catalog.pg_namespace pkn,\n pg_catalog.pg_class pkc,\n pg_catalog.pg_attribute pka,\n pg_catalog.pg_namespace fkn,\n pg_catalog.pg_class fkc,\n pg_catalog.pg_attribute fka,\n pg_catalog.pg_constraint con,\n pg_catalog.generate_series(1, 32) pos(n),\n pg_catalog.pg_depend dep,\n pg_catalog.pg_class pkic\nWHERE pkn.oid = pkc.relnamespace\n AND pkc.oid = pka.attrelid\n AND pka.attnum = con.confkey[pos.n]\n AND con.confrelid = pkc.oid\n AND fkn.oid = fkc.relnamespace\n AND fkc.oid = fka.attrelid\n AND fka.attnum = con.conkey[pos.n]\n AND con.conrelid = fkc.oid\n AND con.contype = 'f'\n AND con.oid = dep.objid\n AND pkic.oid = dep.refobjid\n AND pkic.relkind = 'i'\n AND dep.classid = 'pg_constraint'::regclass::oid\n AND dep.refclassid = 'pg_class'::regclass::oid\n AND pkn.nspname = 'public'\n AND fkn.nspname = 'public'\nORDER BY\n pkn.nspname,\n pkc.relname,\n pos.n;\"\n\n\n From this point on, in all the tests I did, I directly typed this query \non psql command line. I tried everything. Vaccuming and analyzing \n(although this is already automatic on postgresql 9.0), updating \npostgresql to version 9.1, tuning the database as explained on \npostgresql.org documentation (with various values to every parameter, \ndifferent possible combinations), nothing worked, EXCEPT switching the \n\"enable_material\" parameter to OFF. That reduces the query time from \naround 25 seconds on my system (Intel Core2 Duo 2.93GHz 32bit running \nWindows 7 Enterprise Service Pack 1) to around 5 seconds. 
Here are the \nEXPLAIN ANALYZE results.\n\nenable_material ON: http://explain.depesz.com/s/wen\nenable_material OFF: http://explain.depesz.com/s/Zaa\n\nThen, to narrow it down a bit further, I tried running the query on \nanother database. It ran much faster.\nSo I made a script that creates tables and foreign keys on a database, \nto find out at what number of tables/foreign keys the query starts to \nslow down. I managed to get identically slow performance with 1000 \ntables and 5000 foreign keys, which didn't help at all, since the \ndatabase in which the problem occurs has only 292 tables and 521 foreign \nkeys.\n\nOf course, it is possible to change the code and use a (different) \nmanual query that does the same thing and runs perfectly fine; I've \nalready done that. But why does this happen from 9.0.5 on? Are there \nany ideas? Is this situation already known?\nI hope someone can enlighten me on this subject.\n\nThanks in advance! Best regards,\n\nMiguel Silva\n",
"msg_date": "Fri, 30 Dec 2011 16:39:04 +0000",
"msg_from": "Miguel Silva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance - normal on 9.0.4, slow from 9.0.5 onwards"
},
{
"msg_contents": "Miguel Silva <[email protected]> writes:\n> But with postgresql version 9.0.5 (in version 9.0.4 all was fine), we \n> noticed the program was taking longer to start. In fact, in some clients \n> that had older hardware, it could take around 20 minutes when it usually \n> takes only a few seconds. To make a long story short, the problem was \n> traced and narrowed down to a single auto generated query. Here it is:\n\n> \"SELECT [ snip ]\"\n\n> ... Here are the explain analyzes.\n\n> enable_material ON: http://explain.depesz.com/s/wen\n> enable_material OFF: http://explain.depesz.com/s/Zaa\n\nIt doesn't really accomplish anything to post anonymized explain output\nwhen you've already shown us the actual query, does it? Especially when\nsaid query involves only the system catalogs and by no stretch of the\nimagination could be thought to contain anything proprietary?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Dec 2011 12:40:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance - normal on 9.0.4, slow from 9.0.5 onwards "
},
{
"msg_contents": "On 30-12-2011 17:40, Tom Lane wrote:\n> Miguel Silva<[email protected]> writes:\n>> But with postgresql version 9.0.5 (in version 9.0.4 all was fine), we\n>> noticed the program was taking longer to start. In fact, in some clients\n>> that had older hardware, it could take around 20 minutes when it usually\n>> takes only a few seconds. To make a long story short, the problem was\n>> traced and narrowed down to a single auto generated query. Here it is:\n>> \"SELECT [ snip ]\"\n>> ... Here are the explain analyzes.\n>> enable_material ON: http://explain.depesz.com/s/wen\n>> enable_material OFF: http://explain.depesz.com/s/Zaa\n> It doesn't really accomplish anything to post anonymized explain output\n> when you've already shown us the actual query, does it? Especially when\n> said query involves only the system catalogs and by no stretch of the\n> imagination could be thought to contain anything proprietary?\n>\n> \t\t\tregards, tom lane\n>\nIndeed you are right. Those are explains I created some time ago, when I \ndidn't really know what that webpage did. I just kept them, and used \nthem now, didn't even think about that. But the explains are still \nthere, still useful. Anyway, if it is really necessary, I can post new ones.\n\nBest regards,\n\nMiguel Silva\n",
"msg_date": "Fri, 30 Dec 2011 17:50:08 +0000",
"msg_from": "Miguel Silva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance - normal on 9.0.4, slow from 9.0.5\n onwards"
},
{
"msg_contents": "On Fri, Dec 30, 2011 at 10:39 AM, Miguel Silva <[email protected]> wrote:\n> Hi all!\n>\n> I've ran into a performance problem a few time ago and I've been trying to\n> figure out a solution until now. But since I've failed to come up with\n> anything conclusive, it's time to ask some help from people with more\n> understanding of how postgresql works.\n>\n> Here's the big picture.\n> I work for a software company that has it's main program installed on over\n> 200 clients. This program uses a small local database in postgresql. Always\n> installed with the one-click installer and postgresql.conf left on default\n> settings. This structure allows us to always install the latest version of\n> postgresql both in new clients and older clients (when they are updated).\n> And all was well for over 7 years.\n> But with postgresql version 9.0.5 (in version 9.0.4 all was fine), we\n> noticed the program was taking longer to start. In fact, in some clients\n> that had older hardware, it could take around 20 minutes when it usually\n> takes only a few seconds. To make a long story short, the problem was traced\n> and narrowed down to a single auto generated query. Here it is:\n>\n> \"SELECT\n> NULL::text AS PKTABLE_CAT,\n> pkn.nspname AS PKTABLE_SCHEM,\n> pkc.relname AS PKTABLE_NAME,\n> pka.attname AS PKCOLUMN_NAME,\n> NULL::text AS FKTABLE_CAT,\n> fkn.nspname AS FKTABLE_SCHEM,\n> fkc.relname AS FKTABLE_NAME,\n> fka.attname AS FKCOLUMN_NAME,\n> pos.n AS KEY_SEQ,\n> CASE con.confupdtype WHEN 'c' THEN 0 WHEN 'n' THEN 2 WHEN 'd' THEN 4\n> WHEN 'r' THEN 1 WHEN 'a' THEN 3 ELSE NULL END AS UPDATE_RULE,\n> CASE con.confdeltype WHEN 'c' THEN 0 WHEN 'n' THEN 2 WHEN 'd' THEN 4\n> WHEN 'r' THEN 1 WHEN 'a' THEN 3 ELSE NULL END AS DELETE_RULE,\n> con.conname AS FK_NAME,\n> pkic.relname AS PK_NAME,\n> CASE WHEN con.condeferrable AND con.condeferred THEN 5 WHEN\n> con.condeferrable THEN 6 ELSE 7 END AS DEFERRABILITY\n> FROM\n> pg_catalog.pg_namespace pkn,\n> pg_catalog.pg_class pkc,\n> pg_catalog.pg_attribute pka,\n> pg_catalog.pg_namespace fkn,\n> pg_catalog.pg_class fkc,\n> pg_catalog.pg_attribute fka,\n> pg_catalog.pg_constraint con,\n> pg_catalog.generate_series(1, 32) pos(n),\n> pg_catalog.pg_depend dep,\n> pg_catalog.pg_class pkic\n> WHERE pkn.oid = pkc.relnamespace\n> AND pkc.oid = pka.attrelid\n> AND pka.attnum = con.confkey[pos.n]\n> AND con.confrelid = pkc.oid\n> AND fkn.oid = fkc.relnamespace\n> AND fkc.oid = fka.attrelid\n> AND fka.attnum = con.conkey[pos.n]\n> AND con.conrelid = fkc.oid\n> AND con.contype = 'f'\n> AND con.oid = dep.objid\n> AND pkic.oid = dep.refobjid\n> AND pkic.relkind = 'i'\n> AND dep.classid = 'pg_constraint'::regclass::oid\n> AND dep.refclassid = 'pg_class'::regclass::oid\n> AND pkn.nspname = 'public'\n> AND fkn.nspname = 'public'\n> ORDER BY\n> pkn.nspname,\n> pkc.relname,\n> pos.n;\"\n>\n>\n> From this point on, in all the tests I did, I directly typed this query on\n> psql command line. I tried everything. Vaccuming and analyzing (although\n> this is already automatic on postgresql 9.0), updating postgresql to version\n> 9.1, tuning the database as explained on postgresql.org documentation (with\n> various values to every parameter, different possible combinations), nothing\n> worked, EXCEPT switching the \"enable_material\" parameter to OFF. That\n> reduces the query time from around 25 seconds on my system (Intel Core2 Duo\n> 2.93GHz 32bit running Windows 7 Enterprise Service Pack 1) to around 5\n> seconds. 
Here are the explain analyzes.\n>\n> enable_material ON: http://explain.depesz.com/s/wen\n> enable_material OFF: http://explain.depesz.com/s/Zaa\n>\n> Then, to narrow it down a bit further, I tried running the query on another\n> database. It ran much faster.\n> So I made a script that creates tables and foreign keys on a database, to\n> find out at which number of tables/foreign keys the query started to slow\n> down. I managed to get identically slow performance when I had 1000 tables\n> and 5000 foreign keys. Which didn't help at all, since the database in which\n> the problem occurs has only 292 tables and 521 foreign keys.\n>\n> Of course, it is possible to change the code and use a (different) manual\n> query that does the same and runs perfectly fine, I've already done that.\n> But why does this happen, from 9.0.5 on? Is there any idea? Is this\n> situation already known?\n> I hope someone can enlighten me on this subject..\n\ntry this (curious):\ncreate table pos as select n from generate_series(1,32) n;\n\nand swap that for the in-query generate series call. your statistics\nin the query are completely off (not 100% sure why), so I'm thinking\nto replace that since it lies to the planner about the # rows\nreturned. also the join on the array element probably isn't helping.\n\nmerlin\n",
"msg_date": "Fri, 30 Dec 2011 13:35:03 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance - normal on 9.0.4, slow from 9.0.5 onwards"
},
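A short sketch of the substitution Merlin describes, spelled out for readers who want to try it; the ANALYZE step is an added assumption so the new table carries real statistics.

    -- Materialize the series once instead of calling generate_series() in the query.
    CREATE TABLE pos AS SELECT n FROM generate_series(1, 32) n;
    ANALYZE pos;

    -- In the driver-generated query, replace the FROM item
    --     pg_catalog.generate_series(1, 32) pos(n)
    -- with the plain table
    --     pos
    -- and leave the predicates on con.confkey[pos.n] / con.conkey[pos.n]
    -- and the ORDER BY pos.n unchanged.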
{
"msg_contents": "Miguel Silva <[email protected]> writes:\n> I work for a software company that has it's main program installed on \n> over 200 clients. This program uses a small local database in \n> postgresql. Always installed with the one-click installer and \n> postgresql.conf left on default settings. This structure allows us to \n> always install the latest version of postgresql both in new clients and \n> older clients (when they are updated). And all was well for over 7 years.\n> But with postgresql version 9.0.5 (in version 9.0.4 all was fine), we \n> noticed the program was taking longer to start.\n\nI poked at this a little bit. AFAICS the only potentially relevant\nplanner change between 9.0.4 and 9.0.5 was the removal of eqjoinsel's\nndistinct-clamping heuristic,\nhttp://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3505862a8d3e3b389ab926346061b7135fa44f79\n\nNow that's something we took out because it seemed to be making more\ncases worse than better, but there were cases where it helped (for the\nwrong reasons, but nonetheless it sometimes adjusted the estimates to be\ncloser to reality), and apparently you've got one such case. However,\nremoving that logic just brought the behavior back to what it was\npre-8.4, so I'm a bit dubious of the claim that this query has worked\nwell for \"over 7 years\". Perhaps you had lots fewer tables and/or FKs\nback in pre-8.4 days?\n\nI experimented with a toy database having 1000 tables of 30 columns\neach, with one foreign key per table, all in the \"public\" schema, and\nindeed this query is pretty slow on current releases. A big part of the\nproblem is that the planner is unaware that the one row you're selecting\nfrom pg_namespace will join to almost all the rows in pg_class; so it\nunderestimates the sizes of those join results, and that leads to\npicking a nestloop plan style where it's not appropriate.\n\nI tried removing these WHERE conditions:\n\n> AND pkn.nspname = 'public'\n> AND fkn.nspname = 'public'\n\nand got a decently fast plan. If those are, as I suspect, also no-ops\nin your real database, perhaps that will do as a workaround.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Dec 2011 17:29:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance - normal on 9.0.4, slow from 9.0.5 onwards "
},
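For anyone who wants to reproduce Tom's measurements, a rough sketch of a comparable toy schema; his exact script is not shown in the thread, and the 30 columns per table are omitted here for brevity.

    -- 1000 tables, one foreign key each, all in the public schema.
    DO $$
    BEGIN
        FOR i IN 1..1000 LOOP
            EXECUTE 'CREATE TABLE t' || i
                 || ' (id int PRIMARY KEY, parent int REFERENCES t'
                 || greatest(i - 1, 1) || '(id))';
        END LOOP;
    END
    $$;
    ANALYZE;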
{
"msg_contents": "On 30-12-2011 19:35, Merlin Moncure wrote:\n> try this (curious):\n> create table pos as select n from generate_series(1,32) n;\n>\n> and swap that for the in-query generate series call. your statistics\n> in the query are completely off (not 100% sure why), so I'm thinking\n> to replace that since it lies to the planner about the # rows\n> returned. also the join on the array element probably isn't helping.\n>\n> merlin\n>\nTried it. The query still takes around the same amount of time but, out \nof curiosity, here's the explain analyze of it:\nhttp://explain.depesz.com/s/MvE .\n",
"msg_date": "Mon, 02 Jan 2012 11:44:43 +0000",
"msg_from": "Miguel Silva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance - normal on 9.0.4, slow from 9.0.5\n onwards"
},
{
"msg_contents": "On 30-12-2011 22:29, Tom Lane wrote:\n> I poked at this a little bit. AFAICS the only potentially relevant\n> planner change between 9.0.4 and 9.0.5 was the removal of eqjoinsel's\n> ndistinct-clamping heuristic,\n> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3505862a8d3e3b389ab926346061b7135fa44f79\n>\n> Now that's something we took out because it seemed to be making more\n> cases worse than better, but there were cases where it helped (for the\n> wrong reasons, but nonetheless it sometimes adjusted the estimates to be\n> closer to reality), and apparently you've got one such case. However,\n> removing that logic just brought the behavior back to what it was\n> pre-8.4, so I'm a bit dubious of the claim that this query has worked\n> well for \"over 7 years\". Perhaps you had lots fewer tables and/or FKs\n> back in pre-8.4 days?\nWell, thanks, that clarifies the reason why this happens!\nPerhaps you are right. I mean, that's what I've been told, and I believe \nit really worked well for all that time. But since this is an \nauto-generated query, maybe it hasn't always been exactly like this. Or \nmaybe there really were fewer tables/FKs, back then.\n>\n> I experimented with a toy database having 1000 tables of 30 columns\n> each, with one foreign key per table, all in the \"public\" schema, and\n> indeed this query is pretty slow on current releases. A big part of the\n> problem is that the planner is unaware that the one row you're selecting\n> from pg_namespace will join to almost all the rows in pg_class; so it\n> underestimates the sizes of those join results, and that leads to\n> picking a nestloop plan style where it's not appropriate.\n>\n> I tried removing these WHERE conditions:\n>\n>> AND pkn.nspname = 'public'\n>> AND fkn.nspname = 'public'\n> and got a decently fast plan. If those are, as I suspect, also no-ops\n> in your real database, perhaps that will do as a workaround.\n>\n> \t\t\tregards, tom lane\n>\nI tried running the query with that change, but it still takes around 25 \nsecs. What I did as a workaround, was use this query instead of an \nauto-generated one:\n\nSELECT\n tc.constraint_name AS FK_NAME,\n tc.table_name AS PKTABLE_NAME,\n kcu.column_name AS PKCOLUMN_NAME,\n ccu.table_name AS FKTABLE_NAME,\n ccu.column_name AS FKCOLUMN_NAME,\n CASE con.confupdtype WHEN 'c' THEN 0 WHEN 'n' THEN 2 WHEN 'd' THEN \n4 WHEN 'r' THEN 1 WHEN 'a' THEN 3 ELSE NULL END AS UPDATE_RULE,\n CASE con.confdeltype WHEN 'c' THEN 0 WHEN 'n' THEN 2 WHEN 'd' THEN \n4 WHEN 'r' THEN 1 WHEN 'a' THEN 3 ELSE NULL END AS DELETE_RULE\n\nFROM information_schema.table_constraints AS tc\n JOIN information_schema.key_column_usage AS kcu ON \ntc.constraint_name = kcu.constraint_name\n JOIN information_schema.constraint_column_usage AS ccu ON \nccu.constraint_name = tc.constraint_name\n JOIN pg_catalog.pg_constraint AS con ON con.conname = \ntc.constraint_name\n\nWHERE constraint_type = 'FOREIGN KEY';\n\nThanks for looking into this!\n\nBest regards,\n\nMiguel Silva\n",
"msg_date": "Mon, 02 Jan 2012 11:57:24 +0000",
"msg_from": "Miguel Silva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance - normal on 9.0.4, slow from 9.0.5\n onwards"
}
] |
[
{
"msg_contents": "I gather that a big part of making queries performant is making sure the \nplanner's estimates reflect reality.\n\nGiven a random explain analyze line:\n\nLimit (cost=0.02..943.11 rows=50 width=16) (actual time=0.603..79.729 \nrows=50 loops=1)\n\nwhich is the truer statement?\n\n1. As long as costs go up with actual time, you're fine.\n\n2. You should try to ensure that costs go up linearly with actual time.\n\n3. You should try to ensure that costs are as close as possible to actual time.\n\n4. The number \"4\".\n\nJay Levitt\n",
"msg_date": "Sun, 01 Jan 2012 13:59:25 -0500",
"msg_from": "Jay Levitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cost estimate vs. actual - do I care?"
},
{
"msg_contents": "3), 2), 1).\n\nThe planner needs the right information to make the right decision. \nHowever, the planner rarely has perfect information, so the algorithms \nneed to be able to cope with some amount of imperfection while still \ngenerally making the right decision. There are also a limited \n(relatively) set of possible plans and the plans often enough have \ndifferent enough characteristics that the truly different plans (i.e. \nfactor of 10 difference in terms of run time) won't be selected by \naccident, even with fairly bad estimates.\n\nIt probably depends mostly on your data set. For many data sets, the \nestimates might be possible to be off by a factor of 10 and still come \nup with the same \"right\" plan. For many other data sets, for example, \nthe difference between an index scan or a sequential scan could result \nin performance differences by a factor of 10 or more and picking the \n\"right one\" is incredibly important.\n\nIf you are worried, you can test your theory with \"set enable_seqscan=0\" \nand other such things to see what alternative plans might be generated \nand whether they are indeed better or not. Generally, I only worry about \nit for queries that I know to be bottlenecks, and many of the times - \nthe time investment I make into trying to prove that a better query is \npossible ends up only educating me on why I am wrong... :-)\n\nThe majority of the time for me, anyways, I don't find that the \nestimates are that bad or that the planner is wrong. It's usually the \ntypical scenario where somebody added a query but forgot to make sure \nthe query was efficient by ensuring that the indexes properly accelerate \ntheir query.\n\nGood luck.\n\nmark\n\n\nOn 01/01/2012 01:59 PM, Jay Levitt wrote:\n> I gather that a big part of making queries performant is making sure \n> the planner's estimates reflect reality.\n>\n> Given a random explain analyze line:\n>\n> Limit (cost=0.02..943.11 rows=50 width=16) (actual time=0.603..79.729 \n> rows=50 loops=1)\n>\n> which is the truer statement?\n>\n> 1. As long as costs go up with actual time, you're fine.\n>\n> 2. You should try to ensure that costs go up linearly with actual time.\n>\n> 3. You should try to ensure that costs are as close as possible to \n> actual time.\n>\n> 4. The number \"4\".\n>\n> Jay Levitt\n>\n\n\n-- \nMark Mielke<[email protected]>\n\n",
"msg_date": "Sun, 01 Jan 2012 15:00:10 -0500",
"msg_from": "Mark Mielke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost estimate vs. actual - do I care?"
}
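A minimal illustration of the experiment Mark mentions; the table and predicate are placeholders rather than anything from this thread.

    -- The plan the planner chooses on its own.
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE my_column = 42;

    -- The plan it falls back to when sequential scans are penalized, scoped
    -- to one transaction so the setting does not leak into the session.
    BEGIN;
    SET LOCAL enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE my_column = 42;
    ROLLBACK;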
] |
[
{
"msg_contents": "Hello,\n\nI've a table with approximately 50 million rows with a schema like \nthis:\n\n id bigint NOT NULL DEFAULT nextval('stats_5mn'::regclass),\n t_value integer NOT NULL DEFAULT 0,\n t_record integer NOT NULL DEFAULT 0,\n output_id integer NOT NULL DEFAULT 0,\n count bigint NOT NULL DEFAULT 0,\n CONSTRAINT stats_mcs_5min_pkey PRIMARY KEY (id)\n\nEvery 5 minutes, a process have to insert a few thousand of rows in \nthis table,\nbut sometime, the process have to insert an already existing row (based \non\nvalues in the triplet (t_value, t_record, output_id). In this case, the \nrow\nmust be updated with the new count value. I've tried some solution \ngiven on this\nstackoverflow question [1] but the insertion rate is always too low for \nmy needs.\n\nSo, I've decided to do it in two times:\n\n - I insert all my new data with a COPY command\n - When it's done, I run a delete query to remove oldest duplicates\n\nRight now, my delete query look like this:\n\n SELECT min(id) FROM stats_5mn\n GROUP BY t_value, t_record, output_id\n HAVING count(*) > 1;\n\nThe duration of the query on my test machine with approx. 16 million \nrows is ~18s.\n\nTo reduce this duration, I've tried to add an index on my triplet:\n\n CREATE INDEX test\n ON stats_5mn\n USING btree\n (t_value , t_record , output_id );\n\nBy default, the PostgreSQL planner doesn't want to use my index and do \na sequential\nscan [2], but if I force it with \"SET enable_seqscan = off\", the index \nis used [3]\nand query duration is lowered to ~5s.\n\n\nMy questions:\n\n - Why the planner refuse to use my index?\n - Is there a better method for my problem?\n\n\nThanks by advance for your help,\nAntoine Millet.\n\n\n[1] \nhttp://stackoverflow.com/questions/1109061/insert-on-duplicate-update-postgresql\n \nhttp://stackoverflow.com/questions/3464750/postgres-upsert-insert-or-update-only-if-value-is-different\n\n[2] http://explain.depesz.com/s/UzW :\n GroupAggregate (cost=1167282.380..1294947.770 rows=762182 \nwidth=20) (actual time=20067.661..20067.661 rows=0 loops=1)\n Filter: (five(*) > 1)\n -> Sort (cost=1167282.380..1186336.910 rows=7621814 width=20) \n(actual time=15663.549..17463.458 rows=7621805 loops=1)\n Sort Key: delta, kilo, four\n Sort Method: external merge Disk: 223512kB\n -> Seq Scan on three (cost=0.000..139734.140 rows=7621814 \nwidth=20) (actual time=0.041..2093.434 rows=7621805 loops=1)\n\n[3] http://explain.depesz.com/s/o9P :\n GroupAggregate (cost=0.000..11531349.190 rows=762182 width=20) \n(actual time=5307.734..5307.734 rows=0 loops=1)\n Filter: (five(*) > 1)\n -> Index Scan using charlie on three (cost=0.000..11422738.330 \nrows=7621814 width=20) (actual time=0.046..2062.952 rows=7621805 \nloops=1)\n",
"msg_date": "Fri, 06 Jan 2012 15:35:36 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Duplicate deletion optimizations"
},
{
"msg_contents": "On Fri, 06 Jan 2012 15:35:36 +0100, [email protected] wrote:\n> Hello,\n>\n> I've a table with approximately 50 million rows with a schema like \n> this:\n>\n> id bigint NOT NULL DEFAULT nextval('stats_5mn'::regclass),\n> t_value integer NOT NULL DEFAULT 0,\n> t_record integer NOT NULL DEFAULT 0,\n> output_id integer NOT NULL DEFAULT 0,\n> count bigint NOT NULL DEFAULT 0,\n> CONSTRAINT stats_mcs_5min_pkey PRIMARY KEY (id)\n>\n> Every 5 minutes, a process have to insert a few thousand of rows in\n> this table,\n> but sometime, the process have to insert an already existing row \n> (based on\n> values in the triplet (t_value, t_record, output_id). In this case, \n> the row\n> must be updated with the new count value. I've tried some solution\n> given on this\n> stackoverflow question [1] but the insertion rate is always too low\n> for my needs.\n>\n> So, I've decided to do it in two times:\n>\n> - I insert all my new data with a COPY command\n> - When it's done, I run a delete query to remove oldest duplicates\n>\n> Right now, my delete query look like this:\n>\n> SELECT min(id) FROM stats_5mn\n> GROUP BY t_value, t_record, output_id\n> HAVING count(*) > 1;\n\nCorrection:\n\n DELETE FROM stats_5mn WHERE id in (\n SELECT min(id) FROM stats_5mn\n GROUP BY t_value, t_record, output_id\n HAVING count(*) > 1;\n );\n\nSorry :-)\n\n>\n> The duration of the query on my test machine with approx. 16 million\n> rows is ~18s.\n>\n> To reduce this duration, I've tried to add an index on my triplet:\n>\n> CREATE INDEX test\n> ON stats_5mn\n> USING btree\n> (t_value , t_record , output_id );\n>\n> By default, the PostgreSQL planner doesn't want to use my index and\n> do a sequential\n> scan [2], but if I force it with \"SET enable_seqscan = off\", the\n> index is used [3]\n> and query duration is lowered to ~5s.\n>\n>\n> My questions:\n>\n> - Why the planner refuse to use my index?\n> - Is there a better method for my problem?\n>\n>\n> Thanks by advance for your help,\n> Antoine Millet.\n>\n>\n> [1]\n> \n> http://stackoverflow.com/questions/1109061/insert-on-duplicate-update-postgresql\n>\n> \n> http://stackoverflow.com/questions/3464750/postgres-upsert-insert-or-update-only-if-value-is-different\n>\n> [2] http://explain.depesz.com/s/UzW :\n> GroupAggregate (cost=1167282.380..1294947.770 rows=762182\n> width=20) (actual time=20067.661..20067.661 rows=0 loops=1)\n> Filter: (five(*) > 1)\n> -> Sort (cost=1167282.380..1186336.910 rows=7621814 width=20)\n> (actual time=15663.549..17463.458 rows=7621805 loops=1)\n> Sort Key: delta, kilo, four\n> Sort Method: external merge Disk: 223512kB\n> -> Seq Scan on three (cost=0.000..139734.140\n> rows=7621814 width=20) (actual time=0.041..2093.434 rows=7621805\n> loops=1)\n>\n> [3] http://explain.depesz.com/s/o9P :\n> GroupAggregate (cost=0.000..11531349.190 rows=762182 width=20)\n> (actual time=5307.734..5307.734 rows=0 loops=1)\n> Filter: (five(*) > 1)\n> -> Index Scan using charlie on three\n> (cost=0.000..11422738.330 rows=7621814 width=20) (actual\n> time=0.046..2062.952 rows=7621805 loops=1)\n\n",
"msg_date": "Fri, 06 Jan 2012 16:21:06 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Duplicate deletion optimizations"
},
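For reference, a cleaned-up sketch of the same clean-up statement: the stray semicolon inside the subquery is removed, and the enable_seqscan override is scoped to a single transaction instead of the whole session (that scoping is a suggestion, not something posted above).

    BEGIN;
    SET LOCAL enable_seqscan = off;   -- only this transaction is affected
    DELETE FROM stats_5mn
     WHERE id IN (
           SELECT min(id)
             FROM stats_5mn
            GROUP BY t_value, t_record, output_id
           HAVING count(*) > 1);
    COMMIT;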
{
"msg_contents": "On Fri, Jan 6, 2012 at 6:35 AM, <[email protected]> wrote:\n\n> Hello,\n>\n> I've a table with approximately 50 million rows with a schema like this:\n>\n> id bigint NOT NULL DEFAULT nextval('stats_5mn'::regclass)**,\n> t_value integer NOT NULL DEFAULT 0,\n> t_record integer NOT NULL DEFAULT 0,\n> output_id integer NOT NULL DEFAULT 0,\n> count bigint NOT NULL DEFAULT 0,\n> CONSTRAINT stats_mcs_5min_pkey PRIMARY KEY (id)\n>\n> Every 5 minutes, a process have to insert a few thousand of rows in this\n> table,\n> but sometime, the process have to insert an already existing row (based on\n> values in the triplet (t_value, t_record, output_id). In this case, the row\n> must be updated with the new count value. I've tried some solution given\n> on this\n> stackoverflow question [1] but the insertion rate is always too low for my\n> needs.\n>\n> So, I've decided to do it in two times:\n>\n> - I insert all my new data with a COPY command\n> - When it's done, I run a delete query to remove oldest duplicates\n>\n> Right now, my delete query look like this:\n>\n> SELECT min(id) FROM stats_5mn\n> GROUP BY t_value, t_record, output_id\n> HAVING count(*) > 1;\n>\n> The duration of the query on my test machine with approx. 16 million rows\n> is ~18s.\n>\n\nHave you considered doing the insert by doing a bulk insert into a temp\ntable and then pulling rows that don't exist across to the final table in\none query and updating rows that do exist in another query? I did a very\nbrief scan of the SO thread and didn't see it suggested. Something like\nthis:\n\nupdate stats_5mn set count = count + t.count\nfrom temp_table t\nwhere stats_5mn.t_value = t.t_value and stats_5mn.t_record and\nstats_5mn.output_id = t.output_id;\n\ninsert into stats_5mn\nselect * from temp_table t\nwhere not exists (\nselect 1 from stats_5mn s\nwhere s.t_value = t.t_value and s.t_record = t.t_record and s.output_id =\nt.output_id\n);\n\ndrop table temp_table;\n\nNote - you must do the update before the insert because doing it the other\nway around will cause every row you just inserted to also be updated.\n\nI'm not sure it'd be markedly faster, but you'd at least be able to retain\na unique constraint on the triplet, if desired. And, to my eye, the logic\nis easier to comprehend. The different query structure may make better use\nof your index, but I imagine that it is not using it currently because your\ndb isn't configured to accurately reflect the real cost of index use vs\nsequential scan, so it is incorrectly determining the cost of looking up\n7.5 million rows. Its estimate of the row count is correct, so the\nestimate of the cost must be the problem. We'd need to know more about\nyour current config and hardware specs to be able to even start making\nsuggestions about config changes to correct the problem.\n\nOn Fri, Jan 6, 2012 at 6:35 AM, <[email protected]> wrote:\nHello,\n\nI've a table with approximately 50 million rows with a schema like this:\n\n id bigint NOT NULL DEFAULT nextval('stats_5mn'::regclass),\n t_value integer NOT NULL DEFAULT 0,\n t_record integer NOT NULL DEFAULT 0,\n output_id integer NOT NULL DEFAULT 0,\n count bigint NOT NULL DEFAULT 0,\n CONSTRAINT stats_mcs_5min_pkey PRIMARY KEY (id)\n\nEvery 5 minutes, a process have to insert a few thousand of rows in this table,\nbut sometime, the process have to insert an already existing row (based on\nvalues in the triplet (t_value, t_record, output_id). In this case, the row\nmust be updated with the new count value. 
I've tried some solution given on this\nstackoverflow question [1] but the insertion rate is always too low for my needs.\n\nSo, I've decided to do it in two times:\n\n - I insert all my new data with a COPY command\n - When it's done, I run a delete query to remove oldest duplicates\n\nRight now, my delete query look like this:\n\n SELECT min(id) FROM stats_5mn\n GROUP BY t_value, t_record, output_id\n HAVING count(*) > 1;\n\nThe duration of the query on my test machine with approx. 16 million rows is ~18s.Have you considered doing the insert by doing a bulk insert into a temp table and then pulling rows that don't exist across to the final table in one query and updating rows that do exist in another query? I did a very brief scan of the SO thread and didn't see it suggested. Something like this:\nupdate stats_5mn set count = count + t.count from temp_table t where stats_5mn.t_value = t.t_value and stats_5mn.t_record and stats_5mn.output_id = t.output_id;\ninsert into stats_5mn select * from temp_table t where not exists (select 1 from stats_5mn s where s.t_value = t.t_value and s.t_record = t.t_record and s.output_id = t.output_id\n); drop table temp_table;Note - you must do the update before the insert because doing it the other way around will cause every row you just inserted to also be updated.\nI'm not sure it'd be markedly faster, but you'd at least be able to retain a unique constraint on the triplet, if desired. And, to my eye, the logic is easier to comprehend. The different query structure may make better use of your index, but I imagine that it is not using it currently because your db isn't configured to accurately reflect the real cost of index use vs sequential scan, so it is incorrectly determining the cost of looking up 7.5 million rows. Its estimate of the row count is correct, so the estimate of the cost must be the problem. We'd need to know more about your current config and hardware specs to be able to even start making suggestions about config changes to correct the problem.",
"msg_date": "Fri, 6 Jan 2012 12:02:24 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
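A self-contained version of the pattern Samuel outlines, offered as a hedged sketch rather than tested code: it adds the comparison against t.t_record that appears to have been dropped from his UPDATE, uses explicit column lists instead of SELECT *, and drops the staging table automatically at commit. Column names follow the schema given at the top of the thread.

    BEGIN;

    -- Staging table for the incoming batch; disappears at COMMIT.
    CREATE TEMP TABLE temp_table (
        t_value   integer NOT NULL,
        t_record  integer NOT NULL,
        output_id integer NOT NULL,
        count     bigint  NOT NULL
    ) ON COMMIT DROP;

    -- COPY temp_table (t_value, t_record, output_id, count) FROM STDIN;

    -- Update the rows that already exist (note the t_record comparison).
    UPDATE stats_5mn s
       SET count = s.count + t.count
      FROM temp_table t
     WHERE s.t_value   = t.t_value
       AND s.t_record  = t.t_record
       AND s.output_id = t.output_id;

    -- Insert the rows that do not exist yet.
    INSERT INTO stats_5mn (t_value, t_record, output_id, count)
    SELECT t.t_value, t.t_record, t.output_id, t.count
      FROM temp_table t
     WHERE NOT EXISTS (
           SELECT 1
             FROM stats_5mn s
            WHERE s.t_value   = t.t_value
              AND s.t_record  = t.t_record
              AND s.output_id = t.output_id);

    COMMIT;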
{
"msg_contents": "Hi Samuel!\n\nOn 6 January 2012 20:02, Samuel Gendler <[email protected]> wrote:\n> Have you considered doing the insert by doing a bulk insert into a temp\n> table and then pulling rows that don't exist across to the final table in\n> one query and updating rows that do exist in another query? I did a very\n> brief scan of the SO thread and didn't see it suggested. Something like\n> this:\n>\n> update stats_5mn set count = count + t.count\n> from temp_table t\n> where stats_5mn.t_value = t.t_value and stats_5mn.t_record and\n> stats_5mn.output_id = t.output_id;\n>\n> insert into stats_5mn\n> select * from temp_table t\n> where not exists (\n> select 1 from stats_5mn s\n> where s.t_value = t.t_value and s.t_record = t.t_record and s.output_id =\n> t.output_id\n> );\n>\n> drop table temp_table;\n\nAm I right to assume that the update/insert needs to be placed into a\nbegin / end transaction block if such batch uploads might happen\nconcurrently? Doesn't seem to be the case for this question here, but\nI like the solution and wonder if it works under more general\ncircumstances.\n\nWhat's the overhead of creating and dropping a temporary table? Is it\nonly worth doing this for a large number of inserted/updated elements?\nWhat if the number of inserts/updates is only a dozen at a time for a\nlarge table (>10M entries)?\n\nThanks,\nMarc\n",
"msg_date": "Fri, 6 Jan 2012 20:22:46 +0000",
"msg_from": "Marc Eberhard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
{
"msg_contents": "On Fri, Jan 6, 2012 at 12:22 PM, Marc Eberhard <[email protected]>wrote:\n\n> Hi Samuel!\n>\n> On 6 January 2012 20:02, Samuel Gendler <[email protected]> wrote:\n> > Have you considered doing the insert by doing a bulk insert into a temp\n> > table and then pulling rows that don't exist across to the final table in\n> > one query and updating rows that do exist in another query? I did a very\n> > brief scan of the SO thread and didn't see it suggested. Something like\n> > this:\n> >\n> > update stats_5mn set count = count + t.count\n> > from temp_table t\n> > where stats_5mn.t_value = t.t_value and stats_5mn.t_record and\n> > stats_5mn.output_id = t.output_id;\n> >\n> > insert into stats_5mn\n> > select * from temp_table t\n> > where not exists (\n> > select 1 from stats_5mn s\n> > where s.t_value = t.t_value and s.t_record = t.t_record and s.output_id =\n> > t.output_id\n> > );\n> >\n> > drop table temp_table;\n>\n> Am I right to assume that the update/insert needs to be placed into a\n> begin / end transaction block if such batch uploads might happen\n> concurrently? Doesn't seem to be the case for this question here, but\n> I like the solution and wonder if it works under more general\n> circumstances.\n>\n\nyes, assuming you are concerned about making the insertion atomic.\n Obviously, a failure in the second query after success in the 1st query\nwould be problematic outside of a transaction, since any attempt to repeat\nthe entire operation would result in repeated updates.\n\n\n> What's the overhead of creating and dropping a temporary table? Is it\n> only worth doing this for a large number of inserted/updated elements?\n> What if the number of inserts/updates is only a dozen at a time for a\n> large table (>10M entries)?\n>\n\npretty minimal, but enough that doing a handful of rows at a time probably\nwouldn't be worth it. You'd surely get index usage on a plain insert in\nsuch a case, so I'd probably just use an upsert stored proc for doing small\nnumbers of rows - unless you are doing large numbers of inserts, just a few\nat a time. In that case, I'd try to accumulate them and then do them in\nbulk. Those are tough questions to answer without a specific context. My\nreal answer is 'try it and see.' You'll always get an answer that is\nspecific to your exact circumstance that way.\n\nBy the way, there is definitely a difference between creating a temp table\nand creating a table temporarily. See the postgres docs about temp tables\nfor specifics, but many databases treat temp tables differently from\nordinary tables, so it is worth understanding what those differences are.\n Temp tables are automatically dropped when a connection (or transaction)\nis closed. Temp table names are local to the connection, so multiple\nconnections can each create a temp table with the same name without\nconflict, which is convenient. I believe they are also created in a\nspecific tablespace on disk, etc.\n\nOn Fri, Jan 6, 2012 at 12:22 PM, Marc Eberhard <[email protected]> wrote:\nHi Samuel!\n\nOn 6 January 2012 20:02, Samuel Gendler <[email protected]> wrote:\n> Have you considered doing the insert by doing a bulk insert into a temp\n> table and then pulling rows that don't exist across to the final table in\n> one query and updating rows that do exist in another query? I did a very\n> brief scan of the SO thread and didn't see it suggested. 
Something like\n> this:\n>\n> update stats_5mn set count = count + t.count\n> from temp_table t\n> where stats_5mn.t_value = t.t_value and stats_5mn.t_record and\n> stats_5mn.output_id = t.output_id;\n>\n> insert into stats_5mn\n> select * from temp_table t\n> where not exists (\n> select 1 from stats_5mn s\n> where s.t_value = t.t_value and s.t_record = t.t_record and s.output_id =\n> t.output_id\n> );\n>\n> drop table temp_table;\n\nAm I right to assume that the update/insert needs to be placed into a\nbegin / end transaction block if such batch uploads might happen\nconcurrently? Doesn't seem to be the case for this question here, but\nI like the solution and wonder if it works under more general\ncircumstances.yes, assuming you are concerned about making the insertion atomic. Obviously, a failure in the second query after success in the 1st query would be problematic outside of a transaction, since any attempt to repeat the entire operation would result in repeated updates.\n What's the overhead of creating and dropping a temporary table? Is it\nonly worth doing this for a large number of inserted/updated elements?\nWhat if the number of inserts/updates is only a dozen at a time for a\nlarge table (>10M entries)?pretty minimal, but enough that doing a handful of rows at a time probably wouldn't be worth it. You'd surely get index usage on a plain insert in such a case, so I'd probably just use an upsert stored proc for doing small numbers of rows - unless you are doing large numbers of inserts, just a few at a time. In that case, I'd try to accumulate them and then do them in bulk. Those are tough questions to answer without a specific context. My real answer is 'try it and see.' You'll always get an answer that is specific to your exact circumstance that way.\nBy the way, there is definitely a difference between creating a temp table and creating a table temporarily. See the postgres docs about temp tables for specifics, but many databases treat temp tables differently from ordinary tables, so it is worth understanding what those differences are. Temp tables are automatically dropped when a connection (or transaction) is closed. Temp table names are local to the connection, so multiple connections can each create a temp table with the same name without conflict, which is convenient. I believe they are also created in a specific tablespace on disk, etc.",
"msg_date": "Fri, 6 Jan 2012 12:38:30 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
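The "upsert stored proc" suggested above for small batches is not spelled out anywhere in the thread. A minimal PL/pgSQL sketch against the stats_5mn table discussed here might look like the following; it assumes a unique constraint on (t_value, t_record, output_id) so that the unique_violation branch can fire, and it follows the count = count + ... convention of the example above (the original poster may want a plain assignment instead). The function and parameter names are illustrative, not code from the thread.

    CREATE OR REPLACE FUNCTION upsert_stats_5mn(p_value integer, p_record integer,
                                                p_output integer, p_count bigint)
    RETURNS void AS $$
    BEGIN
        LOOP
            -- first try to fold the new count into an existing row
            UPDATE stats_5mn
               SET count = count + p_count
             WHERE t_value = p_value AND t_record = p_record AND output_id = p_output;
            IF FOUND THEN
                RETURN;
            END IF;
            -- no row yet: try to insert one; if a concurrent session inserts the
            -- same key first, the unique_violation sends us back to the UPDATE
            BEGIN
                INSERT INTO stats_5mn (t_value, t_record, output_id, count)
                VALUES (p_value, p_record, p_output, p_count);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- loop and retry the UPDATE
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

This retry-loop shape is the UPDATE-or-INSERT pattern documented in the PostgreSQL manual; it trades per-row overhead for safety under concurrent writers, which is why it only makes sense for small batches.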
{
"msg_contents": "On 6 January 2012 20:38, Samuel Gendler <[email protected]> wrote:\n> On Fri, Jan 6, 2012 at 12:22 PM, Marc Eberhard <[email protected]>\n> wrote:\n>> On 6 January 2012 20:02, Samuel Gendler <[email protected]> wrote:\n>> > Have you considered doing the insert by doing a bulk insert into a temp\n>> > table and then pulling rows that don't exist across to the final table\n>> > in\n>> > one query and updating rows that do exist in another query? I did a\n>> > very\n>> > brief scan of the SO thread and didn't see it suggested. Something like\n>> > this:\n>> >\n>> > update stats_5mn set count = count + t.count\n>> > from temp_table t\n>> > where stats_5mn.t_value = t.t_value and stats_5mn.t_record and\n>> > stats_5mn.output_id = t.output_id;\n>> >\n>> > insert into stats_5mn\n>> > select * from temp_table t\n>> > where not exists (\n>> > select 1 from stats_5mn s\n>> > where s.t_value = t.t_value and s.t_record = t.t_record and s.output_id\n>> > =\n>> > t.output_id\n>> > );\n>> >\n>> > drop table temp_table;\n>>\n>> Am I right to assume that the update/insert needs to be placed into a\n>> begin / end transaction block if such batch uploads might happen\n>> concurrently? Doesn't seem to be the case for this question here, but\n>> I like the solution and wonder if it works under more general\n>> circumstances.\n>\n>\n> yes, assuming you are concerned about making the insertion atomic.\n> Obviously, a failure in the second query after success in the 1st query\n> would be problematic outside of a transaction, since any attempt to repeat\n> the entire operation would result in repeated updates.\n\nTrue, but I was more concerned about concurrency, where a second\nupsert inserts an element between update/insert from the first. That\nwould then skip the element in the first upsert as it is neither\nupdated (doesn't exist at that point in time) nor inserted (does\nexists at that later point). Or would that be impossible anyway?\n\n>> What's the overhead of creating and dropping a temporary table? Is it\n>> only worth doing this for a large number of inserted/updated elements?\n>> What if the number of inserts/updates is only a dozen at a time for a\n>> large table (>10M entries)?\n>\n> pretty minimal, but enough that doing a handful of rows at a time probably\n> wouldn't be worth it. You'd surely get index usage on a plain insert in\n> such a case, so I'd probably just use an upsert stored proc for doing small\n> numbers of rows - unless you are doing large numbers of inserts, just a few\n> at a time. In that case, I'd try to accumulate them and then do them in\n> bulk. Those are tough questions to answer without a specific context. My\n> real answer is 'try it and see.' You'll always get an answer that is\n> specific to your exact circumstance that way.\n\nIt's a fairly tricky problem. I have a number of sensors producing\nenergy data about every 5 minutes, but at random times between 1 and\n15 minutes. I can't change that as that's the way the hardware of the\nsensors works. These feed into another unit, which accumulates them\nand forwards them in batches over the Internet to my PostgreSQL\ndatabase server every few minutes (again at random times outside my\ncontrol and with random batch sizes). To make things worse, if the\nInternet connection between the unit and the database server fails, it\nwill send the latest data first to provide a quick update to the\ncurrent values and then send the backlog of stored values. 
Thus, data\ndo not always arrive in correct time order.\n\nAt the moment I only look at the latest data for each sensor and these\nshould be as close to real time as possible. Thus, collecting data for\nsome time to get a larger size for a batch update isn't preferable.\nWhat I want to do, and this is where the upsert problem starts, is to\nbuild a table with energy values at fixed times. These should be\ncalculated as a linear interpolation between the nearest reported\nvalues from the sensors. Please note each sensor is reporting a\nmeasured energy value (not instant power), which always increases\nmonotonically with time. To compare the performance of the different\ndevices that are measured, I need to have the energy values at the\nsame time and not at the random times when the sensors report. This\nalso allows the calculation of average power for the devices by taking\nthe difference of the energy values over longer periods, like 30\nminutes.\n\nWhat I simply haven't got my head around is how to do this in an\nefficient way. When new values arrive, the table of interpolated\nvalues needs to be updated. For some times, there will already be\nvalues in the table, but for other times there won't. Thus, the\nupsert. If there was a communication failure, the reported sensor\ntimes will go backwards as the most recent is transmitted first until\nthe backlog is cleared. In that case the interpolation table will be\npopulated with intermediate values from the first insert with the\nlatest timestamp and then these values will be refined by the backlog\ndata as they trickle in. Under normal circumstances, reported\ntimestamps will be monotonically increasing and the interpolation\ntable will simply extend to later times. There are more reads from the\ninterpolation table than updates as there are many clients watching\nthe data live via a COMET web frontend (or better will be once I get\nthis working).\n\nI could try to code all of this in the application code (Tomcat\nservlets in my case), but I'd much rather like to find an elegant way\nto let the database server populated the interpolation table from the\ninserted sensor values. I can find the nearest relevant entries in the\ninterpolation table to be upserted by using date_trunc() on the\ntimestamp from the sensor value. But I then also need to find out the\nclosest sensor value in the database with an earlier and possibly\nlater timestamp around the fixed times in the interpolation table.\nSometimes a new value will result in an update and sometimes not.\nSometimes a new value needs to be added to the interpolation table and\nsometimes not.\n\nI know I'm pushing SQL a bit hard with this type of problem, but doing\nit in the application logic would result in quite a few round trips\nbetween the database server and the application code. It's sort of an\nintellectual challenge for me to see how much I can offload onto the\ndatabase server. Thus, my interest in batch upserts.\n\nAnother reason is that I don't want to hold any state or intermediate\ndata in the application code. I want this in the database as it is\nmuch better in storing things persistently than my own code could ever\nbe. It was designed to do that properly after all!\n\n> By the way, there is definitely a difference between creating a temp table\n> and creating a table temporarily. 
See the postgres docs about temp tables\n\nYes, I'm aware of that and meant a temporary/temp table, but being old\nfashioned I prefer the long form, which is also valid syntax.\n\n From the docs (v9.1.2): CREATE ... { TEMPORARY | TEMP } ... TABLE\n\nThanks,\nMarc\n",
"msg_date": "Fri, 6 Jan 2012 22:20:35 +0000",
"msg_from": "Marc Eberhard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
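A note on the concurrency question above: under the default READ COMMITTED level, wrapping the UPDATE and the INSERT in one transaction does not by itself close the window Marc describes; a row committed by a concurrent batch between the two statements is neither updated nor inserted by the first batch. The thread predates it, but on PostgreSQL 9.5 and later the same merge can be done atomically with INSERT ... ON CONFLICT. A sketch using the column names from this thread, assuming a unique index on the triplet and a batch that contains each key at most once:

    INSERT INTO stats_5mn AS s (t_value, t_record, output_id, count)
    SELECT t_value, t_record, output_id, count
      FROM temp_table
    ON CONFLICT (t_value, t_record, output_id)
    DO UPDATE SET count = s.count + EXCLUDED.count;

On 9.1, the practical options are a retry loop like the stored-procedure sketch earlier, or a lock that serializes competing batch loads.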
{
"msg_contents": "Are your stats updated on the table after you added the index?\r\n\r\n- run the bad query with explain verbose on (you should send this anyways)\r\n- check to see what the difference is in expected rows vs. actual rows\r\n- make sure that your work_mem is high enough if you are sorting, if not you'll see it write out a temp file which will be slow.\r\n- if there is different analyze the table and rerun the query to see if you get the expected results.\r\n- I do believe having COUNT(*) > 1 will never use an index, but someone more experience can comment here.\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\r\nSent: Friday, January 06, 2012 8:36 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] Duplicate deletion optimizations\r\n\r\nHello,\r\n\r\nI've a table with approximately 50 million rows with a schema like\r\nthis:\r\n\r\n id bigint NOT NULL DEFAULT nextval('stats_5mn'::regclass),\r\n t_value integer NOT NULL DEFAULT 0,\r\n t_record integer NOT NULL DEFAULT 0,\r\n output_id integer NOT NULL DEFAULT 0,\r\n count bigint NOT NULL DEFAULT 0,\r\n CONSTRAINT stats_mcs_5min_pkey PRIMARY KEY (id)\r\n\r\nEvery 5 minutes, a process have to insert a few thousand of rows in this table, but sometime, the process have to insert an already existing row (based on values in the triplet (t_value, t_record, output_id). In this case, the row must be updated with the new count value. I've tried some solution given on this stackoverflow question [1] but the insertion rate is always too low for my needs.\r\n\r\nSo, I've decided to do it in two times:\r\n\r\n - I insert all my new data with a COPY command\r\n - When it's done, I run a delete query to remove oldest duplicates\r\n\r\nRight now, my delete query look like this:\r\n\r\n SELECT min(id) FROM stats_5mn\r\n GROUP BY t_value, t_record, output_id\r\n HAVING count(*) > 1;\r\n\r\nThe duration of the query on my test machine with approx. 
16 million rows is ~18s.\r\n\r\nTo reduce this duration, I've tried to add an index on my triplet:\r\n\r\n CREATE INDEX test\r\n ON stats_5mn\r\n USING btree\r\n (t_value , t_record , output_id );\r\n\r\nBy default, the PostgreSQL planner doesn't want to use my index and do a sequential scan [2], but if I force it with \"SET enable_seqscan = off\", the index is used [3] and query duration is lowered to ~5s.\r\n\r\n\r\nMy questions:\r\n\r\n - Why the planner refuse to use my index?\r\n - Is there a better method for my problem?\r\n\r\n\r\nThanks by advance for your help,\r\nAntoine Millet.\r\n\r\n\r\n[1]\r\nhttp://stackoverflow.com/questions/1109061/insert-on-duplicate-update-postgresql\r\n \r\nhttp://stackoverflow.com/questions/3464750/postgres-upsert-insert-or-update-only-if-value-is-different\r\n\r\n[2] http://explain.depesz.com/s/UzW :\r\n GroupAggregate (cost=1167282.380..1294947.770 rows=762182\r\nwidth=20) (actual time=20067.661..20067.661 rows=0 loops=1)\r\n Filter: (five(*) > 1)\r\n -> Sort (cost=1167282.380..1186336.910 rows=7621814 width=20) (actual time=15663.549..17463.458 rows=7621805 loops=1)\r\n Sort Key: delta, kilo, four\r\n Sort Method: external merge Disk: 223512kB\r\n -> Seq Scan on three (cost=0.000..139734.140 rows=7621814\r\nwidth=20) (actual time=0.041..2093.434 rows=7621805 loops=1)\r\n\r\n[3] http://explain.depesz.com/s/o9P :\r\n GroupAggregate (cost=0.000..11531349.190 rows=762182 width=20) (actual time=5307.734..5307.734 rows=0 loops=1)\r\n Filter: (five(*) > 1)\r\n -> Index Scan using charlie on three (cost=0.000..11422738.330\r\nrows=7621814 width=20) (actual time=0.046..2062.952 rows=7621805\r\nloops=1)\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\r\nThis email is confidential and subject to important disclaimers and\r\nconditions including on offers for the purchase or sale of\r\nsecurities, accuracy and completeness of information, viruses,\r\nconfidentiality, legal privilege, and legal entity disclaimers,\r\navailable at http://www.jpmorgan.com/pages/disclosures/email. ",
"msg_date": "Fri, 6 Jan 2012 19:02:01 -0500",
"msg_from": "\"Strange, John W\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
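For the checklist above, all three checks can be run from psql in one go. EXPLAIN (ANALYZE, BUFFERS) needs PostgreSQL 9.0 or later, and the plan already posted shows "Sort Method: external merge Disk: 223512kB", so a work_mem above roughly that size would be needed to keep the sort in memory (the 256MB below is only an illustration, not a recommendation for the poster's hardware):

    -- refresh planner statistics after creating the index
    ANALYZE stats_5mn;

    -- give the sort enough memory, for this session only
    SET work_mem = '256MB';

    -- compare estimated vs. actual rows and see whether the sort still spills to disk
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT min(id) FROM stats_5mn
    GROUP BY t_value, t_record, output_id
    HAVING count(*) > 1;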
{
"msg_contents": "Friday, January 6, 2012, 4:21:06 PM you wrote:\n\n>> Every 5 minutes, a process have to insert a few thousand of rows in this\n>> table, but sometime, the process have to insert an already existing row\n>> (based on values in the triplet (t_value, t_record, output_id). In this\n>> case, the row must be updated with the new count value. I've tried some\n>> solution given on this stackoverflow question [1] but the insertion rate\n>> is always too low for my needs.\n\nI did check the following in a loop, starting with an empty table, and\ninserting/updating 50000 random unique entries. After 15 minutes I've got\nabout 10 million records, each loop takes about 3 seconds. After 30 minutes\nthe table contains approx. 18 million entries, time per loop only slightly\nincreased. After 90 minutes the database has about 30 million entries. The\nspeed has dropped to about 15-20 seconds per loop, but the server is doing\nlots of other queries in parallel, so with an unloaded server the updates\nshould still take less than 10 seconds.\n\nThe generator runs in perl, and generates records for a maximum of 100 \nmillion different entries:\n\nuse strict;\n\nsrand time;\nmy $i = 0;\nopen FD, \">data.in\";\nfor (1..50000)\n{\n $i += rand(2000);\n print FD sprintf(\"%d\\t%d\\t%d\\t%d\\n\", $i/65536, ($i/256)%255, $i%255, rand(1000));\n}\nclose FD;\n\nThe SQL-script looks like this:\n\n\\timing on\nbegin;\ncreate temp table t_imp(id bigint,t_value integer,t_record integer,output_id integer,count bigint);\n\\copy t_imp (t_value, t_record, output_id, count) from 'data.in'\n--an index is not really needed, table is in memory anyway\n--create index t_imp_ix on t_imp(t_value,t_record,output_id);\n\n-- find matching rows\nupdate t_imp\n set id=test.id\n from test\n where (t_imp.t_value,t_imp.t_record,t_imp.output_id)=(test.t_value,test.t_record,test.output_id);\n-- update matching rows using primary key\nupdate test\n set count=t_imp.count\n from t_imp\n where t_imp.id is null and test.id=t_imp.id;\n-- insert missing rows\ninsert into test(t_value,t_record,output_id,count)\n select t_value,t_record,output_id,count\n from t_imp\n where id is null;\ncommit;\n\nAdvantages of this solution:\n\n- all updates are done in-place, no index modifications (except for the \n inserts, of course)\n- big table only gets inserts\n- no dead tuples from deletes\n- completely avoids sequential scans on the big table\n\nTested on my home server (8GB RAM, 3GB shared memory, Dual-Xeon 5110, 1.6 \nGHz, table and indices stored on a SSD)\n\nTable statistics:\n\nrelid | 14332525\nschemaname | public\nrelname | test\nseq_scan | 8\nseq_tup_read | 111541821\nidx_scan | 149240169\nidx_tup_fetch | 117901695\nn_tup_ins | 30280175\nn_tup_upd | 0\nn_tup_del | 0\nn_tup_hot_upd | 0\nn_live_tup | 30264431\nn_dead_tup | 0\nlast_vacuum |\nlast_autovacuum |\nlast_analyze |\nlast_autoanalyze | 2012-01-07 12:38:49.593651+01\nvacuum_count | 0\nautovacuum_count | 0\nanalyze_count | 0\nautoanalyze_count | 31\n\nThe sequential scans were from some 'select count(*)' in between.\n\nHTH.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Sat, 7 Jan 2012 12:57:26 +0100",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
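One detail the script above leaves implicit: autovacuum never analyzes temporary tables, so the planner has no statistics for t_imp unless they are collected by hand. If the two UPDATEs or the final INSERT pick a bad plan as the batch size grows, an explicit ANALYZE right after the COPY is a cheap safeguard (a suggested addition, not part of the posted script):

    \copy t_imp (t_value, t_record, output_id, count) from 'data.in'
    -- temp tables get no autovacuum/autoanalyze; give the planner row counts
    ANALYZE t_imp;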
{
"msg_contents": "\n> It's a fairly tricky problem. I have a number of sensors producing\n> energy data about every 5 minutes, but at random times between 1 and\n> 15 minutes. I can't change that as that's the way the hardware of the\n> sensors works. These feed into another unit, which accumulates them\n> and forwards them in batches over the Internet to my PostgreSQL\n> database server every few minutes (again at random times outside my\n> control and with random batch sizes). To make things worse, if the\n> Internet connection between the unit and the database server fails, it\n> will send the latest data first to provide a quick update to the\n> current values and then send the backlog of stored values. Thus, data\n> do not always arrive in correct time order.\n\nI'm stuck home with flu, so I'm happy to help ;)\n\nI'll build an example setup to make it clearer...\n\n-- A list of all sensors\ncreate table sensors( sensor_id integer primary key );\ninsert into sensors select generate_series(1,100);\n\n-- A table to contain raw sensor data\ncreate table log(\n sensor_id integer not null references sensors(sensor_id),\n time integer not null,\n value float not null\n);\n\n-- Fill it up with test data\ninsert into log\nselect sensor_id, time, time from (\n select distinct sensor_id,\n (n+random()*10)::INTEGER as time\n from generate_series(0,50000,5) n\n cross join sensors\n) d;\n\n-- index it\nalter table log add primary key( time, sensor_id );\ncreate index log_sensor_time on log( sensor_id, time );\n\nselect * from log where sensor_id=1 order by time;\n sensor_id | time | value\n-----------+-------+-------\n 1 | 12 | 12\n 1 | 14 | 14\n 1 | 21 | 21\n 1 | 29 | 29\n 1 | 30 | 30\n(....)\n 1 | 49996 | 49996\n 1 | 50001 | 50001\n\n-- create a table which will contain the time ticks\n-- which will be used as x-axis for interpolation\n-- (in this example, one tick every 10 time units)\n\ncreate table ticks( time integer primary key,\n check( time%10 = 0 ) );\ninsert into ticks select\n generate_series( 0, (select max(time) from log), 10 );\n\n-- create interpolated values table\ncreate table interp(\n sensor_id integer not null references sensors( sensor_id ),\n time integer not null references ticks( time ),\n value float,\n distance integer not null\n);\n\n-- fill interpolated values table\n-- (pretty slow)\n\ninsert into interp\nselect\n sensor_id,\n t.time,\n start_value + \n(end_value-start_value)*(t.time-start_time)/(end_time-start_time),\n greatest( t.time - start_time, end_time-t.time )\n from\n (select\n sensor_id,\n lag(time) over (partition by sensor_id order by time) as start_time,\n time as end_time,\n lag(value) over (partition by sensor_id order by time) as \nstart_value,\n value as end_value\n from log\n ) as l\n join ticks t on (t.time >= start_time and t.time < end_time);\n\n-- alternate query if you don't like the ticks table (same sesult) :\ninsert into interp\nselect\n sensor_id,\n time,\n start_value + \n(end_value-start_value)*(time-start_time)/(end_time-start_time),\n greatest( time - start_time, end_time-time )\n from\n (select\n *,\n generate_series( ((start_time+9)/10)*10, ((end_time-1)/10)*10, 10 ) AS \ntime\n from\n (select\n sensor_id,\n lag(time) over (partition by sensor_id order by time) as \nstart_time,\n time as end_time,\n lag(value) over (partition by sensor_id order by time) as \nstart_value,\n value as end_value\n from log\n ) as l\n ) l;\n\nalter table interp add primary key( time,sensor_id );\ncreate index interp_sensor_time on interp( sensor_id, time 
);\n\nFor each interval in the log table that contains a time tick, this query \ngenerates the interpolated data at that tick.\n\nNote that the \"distance\" field represents the distance (in time) between \nthe interpolated value and the farthest real data point that was used to \ncalculate it. Therefore, it can be used as a measure of the quality of the \ninterpolated point ; if the distance is greater than some threshold, the \nvalue might not be that precise.\n\nNow, suppose we receive a bunch of data. The data isn't ordered according \nto time.\nThere are two possibilities :\n\n- the new data starts right where we left off (ie, just after the last \ntime for each sensor in table log)\n- the new data starts later in time, and we want to process the results \nright away, expecting to receive, at some later point, older data to fill \nthe holes\n\nThe second one is hairier, lets' do that.\n\nAnyway, let's create a packet :\n\n-- A table to contain raw sensor data\ncreate temporary table packet(\n sensor_id integer not null,\n time integer not null,\n value float not null\n);\n\n-- Fill it up with test data\ninsert into packet\nselect sensor_id, time, time from (\n select distinct sensor_id,\n (n+random()*10)::INTEGER as time\n from generate_series(50200,50400) n\n cross join sensors\n) d;\n\nNote that I deliberately inserted a hole : the log table contains times \n0-50000 and the packet contains times 50200-50400.\n\nWe'll need to decide if we want the hole to appear in the \"interp\" table \nor not. Let's say we don't want it to appear, we'll just interpolate over \nthe hole. he \"distance\" column will be there so we don't forget this data \nis some sort of guess. If we receive data to fill that hole later, we can \nalways use it.\n\nFor each sensor in the packet, we need to grab some entries from table \n\"log\", at least the most recent one, to be able to do some interpolation \nwith the first (oldest) value in the packet. To be more general, in case \nwe receive old data that will plug a hole, we'll also grab the oldest log \nentry that is more recent than the most recent one in the packet for this \nsensor (hum... 
i have to re-read that...)\n\nAnyway, first let's create the missing ticks :\n\nINSERT INTO ticks\n SELECT generate_series(\n (SELECT max(time) FROM ticks)+10,\n (SELECT max(time) FROM packet),\n 10);\n\nAnd ...\n\nCREATE TEMPORARY TABLE new_interp(\n sensor_id INTEGER NOT NULL,\n time INTEGER NOT NULL,\n value FLOAT NOT NULL,\n distance INTEGER NOT NULL\n);\n\n-- time range in the packet for each sensor\nWITH ranges AS (\n SELECT sensor_id, min(time) AS packet_start_time, max(time) AS \npacket_end_time\n FROM packet\n GROUP BY sensor_id\n),\n-- time ranges for records already in table log that will be needed to \ninterpolate packet records\nlog_boundaries AS (\n SELECT\n sensor_id,\n COALESCE(\n (SELECT max(l.time) FROM log l WHERE l.sensor_id=r.sensor_id AND \nl.time < r.packet_start_time),\n r.packet_start_time\n ) AS packet_start_time,\n COALESCE(\n (SELECT min(l.time) FROM log l WHERE l.sensor_id=r.sensor_id AND \nl.time > r.packet_end_time),\n r.packet_end_time\n ) AS packet_end_time\n FROM ranges r\n),\n-- merge existing and new data\nextended_packet AS (\n SELECT log.* FROM log JOIN log_boundaries USING (sensor_id)\n WHERE log.time BETWEEN packet_start_time AND packet_end_time\n UNION ALL\n SELECT * FROM packet\n),\n-- zip current and next records\npre_interp AS (\n SELECT\n sensor_id,\n lag(time) OVER (PARTITION BY sensor_id ORDER BY time) AS \nstart_time,\n time AS end_time,\n lag(value) over (PARTITION BY sensor_id ORDER BY time) AS \nstart_value,\n value AS end_value\n FROM extended_packet\n),\n-- add tick info\npre_interp2 AS (\n SELECT *, generate_series( ((start_time+9)/10)*10, ((end_time-1)/10)*10, \n10 ) AS time\n FROM pre_interp\n)\n-- interpolate\nINSERT INTO new_interp SELECT\n sensor_id,\n time,\n start_value + \n(end_value-start_value)*(time-start_time)/(end_time-start_time) AS value,\n greatest( time - start_time, end_time-time ) AS distance\n FROM pre_interp2;\n\nAlthough this query is huge, it's very fast, since it doesn't hit the big \ntables with any seq scans (hence the max() and min() tricks to use the \nindexes instead).\n\nI love how postgres can blast that huge pile of SQL in, like, 50 ms...\n\nIf there is some overlap between packet data and data already in the log, \nyou might get some division by zero errors, in this case you'll need to \napply a DISTINCT somewhere (or simply replace the UNION ALL with an UNION, \nwhich might be wiser anyway...)\n\nAnyway, that doesn't solve the \"upsert\" problem, so here we go :\n\n-- Update the existing rows\nUPDATE interp\n SET value = new_interp.value, distance = new_interp.distance\n FROM new_interp\n WHERE interp.sensor_id = new_interp.sensor_id\n AND interp.time = new_interp.time\n AND interp.distance > new_interp.distance;\n\n-- insert new rows\nINSERT INTO interp\nSELECT new_interp.* FROM new_interp\n LEFT JOIN interp USING (sensor_id,time)\n WHERE interp.sensor_id IS NULL;\n\n-- also insert data into log (don't forget this !)\nINSERT INTO log SELECT * FROM packet;\n\nTada.\n\nselect * from interp where sensor_id=1 and time > 49950 order by time;\n sensor_id | time | value | distance\n-----------+-------+-------+----------\n 1 | 49960 | 49960 | 7\n 1 | 49970 | 49970 | 4\n 1 | 49980 | 49980 | 3\n 1 | 49990 | 49990 | 5\n 1 | 50000 | 50000 | 2\n 1 | 50010 | 50010 | 190\n 1 | 50020 | 50020 | 180\n 1 | 50030 | 50030 | 170\n(...)\n 1 | 50180 | 50180 | 178\n 1 | 50190 | 50190 | 188\n 1 | 50200 | 50200 | 2\n 1 | 50210 | 50210 | 1\n 1 | 50220 | 50220 | 1\n 1 | 50230 | 50230 | 1\n 1 | 50240 | 50240 | 2\n\nNote that the 
hole was interpolated over, but the \"distance\" column shows \nthis data is a guess, not real.\n\nWhat happens if we receive some data later to plug the hole ?\n\n-- plug the previously left hole\ntruncate packet;\ntruncate new_interp;\ninsert into packet\nselect sensor_id, time, time from (\n select distinct sensor_id,\n (n+random()*10)::INTEGER as time\n from generate_series(50050,50150) n\n cross join sensors\n) d;\n\n(re-run huge query and upsert)\n\nselect * from interp where sensor_id=1 and time > 49950 order by time;\nsensor_id | time | value | distance\n-----------+-------+-------+----------\n 1 | 49960 | 49960 | 7\n 1 | 49970 | 49970 | 4\n 1 | 49980 | 49980 | 3\n 1 | 49990 | 49990 | 5\n 1 | 50000 | 50000 | 2\n 1 | 50010 | 50010 | 45\n 1 | 50020 | 50020 | 35\n 1 | 50030 | 50030 | 28\n 1 | 50040 | 50040 | 38\n 1 | 50050 | 50050 | 48\n 1 | 50060 | 50060 | 1\n 1 | 50070 | 50070 | 1\n 1 | 50080 | 50080 | 2\n(...)\n 1 | 50130 | 50130 | 1\n 1 | 50140 | 50140 | 3\n 1 | 50150 | 50150 | 1\n 1 | 50160 | 50160 | 40\n 1 | 50170 | 50170 | 30\n 1 | 50180 | 50180 | 26\n 1 | 50190 | 50190 | 36\n 1 | 50200 | 50200 | 2\n 1 | 50210 | 50210 | 1\n 1 | 50220 | 50220 | 1\n 1 | 50230 | 50230 | 1\n 1 | 50240 | 50240 | 2\n\nIt has used the new data to rewrite new values over the entire hole, and \nthose values should have better precision.\n\nEnjoy !\n",
"msg_date": "Sat, 07 Jan 2012 13:20:03 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
{
"msg_contents": "Yes, but it should become a bit slower if you fix your code :-)\n\n where t_imp.id is null and test.id=t_imp.id;\n =>\n where t_imp.id is not null and test.id=t_imp.id;\n\nand a partial index on matching rows might help (should be tested):\n\n (after the first updat)\n create index t_imp_ix on t_imp(t_value,t_record,output_id) where t_imp.id is not null.\n\nregards,\nMarc Mamin\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Jochen Erwied\nGesendet: Sa 1/7/2012 12:57\nAn: [email protected]\nCc: [email protected]\nBetreff: Re: [PERFORM] Duplicate deletion optimizations\n \nFriday, January 6, 2012, 4:21:06 PM you wrote:\n\n>> Every 5 minutes, a process have to insert a few thousand of rows in this\n>> table, but sometime, the process have to insert an already existing row\n>> (based on values in the triplet (t_value, t_record, output_id). In this\n>> case, the row must be updated with the new count value. I've tried some\n>> solution given on this stackoverflow question [1] but the insertion rate\n>> is always too low for my needs.\n\nI did check the following in a loop, starting with an empty table, and\ninserting/updating 50000 random unique entries. After 15 minutes I've got\nabout 10 million records, each loop takes about 3 seconds. After 30 minutes\nthe table contains approx. 18 million entries, time per loop only slightly\nincreased. After 90 minutes the database has about 30 million entries. The\nspeed has dropped to about 15-20 seconds per loop, but the server is doing\nlots of other queries in parallel, so with an unloaded server the updates\nshould still take less than 10 seconds.\n\nThe generator runs in perl, and generates records for a maximum of 100 \nmillion different entries:\n\nuse strict;\n\nsrand time;\nmy $i = 0;\nopen FD, \">data.in\";\nfor (1..50000)\n{\n $i += rand(2000);\n print FD sprintf(\"%d\\t%d\\t%d\\t%d\\n\", $i/65536, ($i/256)%255, $i%255, rand(1000));\n}\nclose FD;\n\nThe SQL-script looks like this:\n\n\\timing on\nbegin;\ncreate temp table t_imp(id bigint,t_value integer,t_record integer,output_id integer,count bigint);\n\\copy t_imp (t_value, t_record, output_id, count) from 'data.in'\n--an index is not really needed, table is in memory anyway\n--create index t_imp_ix on t_imp(t_value,t_record,output_id);\n\n-- find matching rows\nupdate t_imp\n set id=test.id\n from test\n where (t_imp.t_value,t_imp.t_record,t_imp.output_id)=(test.t_value,test.t_record,test.output_id);\n-- update matching rows using primary key\nupdate test\n set count=t_imp.count\n from t_imp\n where t_imp.id is null and test.id=t_imp.id;\n-- insert missing rows\ninsert into test(t_value,t_record,output_id,count)\n select t_value,t_record,output_id,count\n from t_imp\n where id is null;\ncommit;\n\nAdvantages of this solution:\n\n- all updates are done in-place, no index modifications (except for the \n inserts, of course)\n- big table only gets inserts\n- no dead tuples from deletes\n- completely avoids sequential scans on the big table\n\nTested on my home server (8GB RAM, 3GB shared memory, Dual-Xeon 5110, 1.6 \nGHz, table and indices stored on a SSD)\n\nTable statistics:\n\nrelid | 14332525\nschemaname | public\nrelname | test\nseq_scan | 8\nseq_tup_read | 111541821\nidx_scan | 149240169\nidx_tup_fetch | 117901695\nn_tup_ins | 30280175\nn_tup_upd | 0\nn_tup_del | 0\nn_tup_hot_upd | 0\nn_live_tup | 30264431\nn_dead_tup | 0\nlast_vacuum |\nlast_autovacuum |\nlast_analyze |\nlast_autoanalyze | 2012-01-07 12:38:49.593651+01\nvacuum_count | 
0\nautovacuum_count | 0\nanalyze_count | 0\nautoanalyze_count | 31\n\nThe sequential scans were from some 'select count(*)' in between.\n\nHTH.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 7 Jan 2012 13:21:02 +0100",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
{
"msg_contents": "Saturday, January 7, 2012, 1:21:02 PM you wrote:\n\n> where t_imp.id is null and test.id=t_imp.id;\n> =>\n> where t_imp.id is not null and test.id=t_imp.id;\n\nYou're right, overlooked that one. But the increase to execute the query is\n- maybe not completely - suprisingly minimal.\n\nBecause the query updating the id-column of t_imp fetches all rows from\ntest to be updated, they are already cached, and the second query is run \ncompletely from cache. I suppose you will get a severe performance hit when \nthe table cannot be cached...\n\nI ran the loop again, after 30 minutes I'm at about 3-5 seconds per loop,\nas long as the server isn't doing something else. Under load it's at about\n10-20 seconds, with a ratio of 40% updates, 60% inserts.\n\n> and a partial index on matching rows might help (should be tested):\n\n> (after the first updat)\n> create index t_imp_ix on t_imp(t_value,t_record,output_id) where t_imp.id is not null.\n\nI don't think this will help much since t_imp is scanned sequentially\nanyway, so creating an index is just unneeded overhead.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Sat, 7 Jan 2012 15:18:37 +0100",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
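For readers skimming the thread, the second and third statements of the import script read as follows once Marc Mamin's correction is applied (a restatement of the code already posted, with the is null / is not null fix folded in):

    -- update matching rows using the primary key found in the first step
    UPDATE test
       SET count = t_imp.count
      FROM t_imp
     WHERE t_imp.id IS NOT NULL
       AND test.id = t_imp.id;

    -- insert the rows that had no match in test
    INSERT INTO test (t_value, t_record, output_id, count)
    SELECT t_value, t_record, output_id, count
      FROM t_imp
     WHERE id IS NULL;

The IS NOT NULL test is largely cosmetic, since a NULL id can never satisfy the equality join; the substantive fix is that the original "is null" condition prevented the UPDATE from matching any rows at all.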
{
"msg_contents": "\n> It's a fairly tricky problem. I have a number of sensors producing\n> energy data about every 5 minutes, but at random times between 1 and\n> 15 minutes. I can't change that as that's the way the hardware of the\n> sensors works. These feed into another unit, which accumulates them\n> and forwards them in batches over the Internet to my PostgreSQL\n> database server every few minutes (again at random times outside my\n> control and with random batch sizes). To make things worse, if the\n> Internet connection between the unit and the database server fails, it\n> will send the latest data first to provide a quick update to the\n> current values and then send the backlog of stored values. Thus, data\n> do not always arrive in correct time order.\n\nI'm stuck home with flu, so I'm happy to help ;)\n\nI'll build an example setup to make it clearer...\n\n-- A list of all sensors\ncreate table sensors( sensor_id integer primary key );\ninsert into sensors select generate_series(1,100);\n\n-- A table to contain raw sensor data\ncreate table log(\n sensor_id integer not null references sensors(sensor_id),\n time integer not null,\n value float not null\n);\n\n-- Fill it up with test data\ninsert into log\nselect sensor_id, time, time from (\n select distinct sensor_id,\n (n+random()*10)::INTEGER as time\n from generate_series(0,50000,5) n\n cross join sensors\n) d;\n\n-- index it\nalter table log add primary key( time, sensor_id );\ncreate index log_sensor_time on log( sensor_id, time );\n\nselect * from log where sensor_id=1 order by time;\n sensor_id | time | value\n-----------+-------+-------\n 1 | 12 | 12\n 1 | 14 | 14\n 1 | 21 | 21\n 1 | 29 | 29\n 1 | 30 | 30\n(....)\n 1 | 49996 | 49996\n 1 | 50001 | 50001\n\n-- create a table which will contain the time ticks\n-- which will be used as x-axis for interpolation\n-- (in this example, one tick every 10 time units)\n\ncreate table ticks( time integer primary key,\n check( time%10 = 0 ) );\ninsert into ticks select\n generate_series( 0, (select max(time) from log), 10 );\n\n-- create interpolated values table\ncreate table interp(\n sensor_id integer not null references sensors( sensor_id ),\n time integer not null references ticks( time ),\n value float,\n distance integer not null\n);\n\n-- fill interpolated values table\n-- (pretty slow)\n\ninsert into interp\nselect\n sensor_id,\n t.time,\n start_value +\n(end_value-start_value)*(t.time-start_time)/(end_time-start_time),\n greatest( t.time - start_time, end_time-t.time )\n from\n (select\n sensor_id,\n lag(time) over (partition by sensor_id order by time) as \nstart_time,\n time as end_time,\n lag(value) over (partition by sensor_id order by time) as\nstart_value,\n value as end_value\n from log\n ) as l\n join ticks t on (t.time >= start_time and t.time < end_time);\n\n-- alternate query if you don't like the ticks table (same sesult) :\ninsert into interp\nselect\n sensor_id,\n time,\n start_value +\n(end_value-start_value)*(time-start_time)/(end_time-start_time),\n greatest( time - start_time, end_time-time )\n from\n (select\n *,\n generate_series( ((start_time+9)/10)*10, ((end_time-1)/10)*10, 10 ) \nAS\ntime\n from\n (select\n sensor_id,\n lag(time) over (partition by sensor_id order by time) as\nstart_time,\n time as end_time,\n lag(value) over (partition by sensor_id order by time) as\nstart_value,\n value as end_value\n from log\n ) as l\n ) l;\n\nalter table interp add primary key( time,sensor_id );\ncreate index interp_sensor_time on interp( sensor_id, time 
);\n\nFor each interval in the log table that contains a time tick, this query\ngenerates the interpolated data at that tick.\n\nNote that the \"distance\" field represents the distance (in time) between\nthe interpolated value and the farthest real data point that was used to\ncalculate it. Therefore, it can be used as a measure of the quality of the\ninterpolated point ; if the distance is greater than some threshold, the\nvalue might not be that precise.\n\nNow, suppose we receive a bunch of data. The data isn't ordered according\nto time.\nThere are two possibilities :\n\n- the new data starts right where we left off (ie, just after the last\ntime for each sensor in table log)\n- the new data starts later in time, and we want to process the results\nright away, expecting to receive, at some later point, older data to fill\nthe holes\n\nThe second one is hairier, lets' do that.\n\nAnyway, let's create a packet :\n\n-- A table to contain raw sensor data\ncreate temporary table packet(\n sensor_id integer not null,\n time integer not null,\n value float not null\n);\n\n-- Fill it up with test data\ninsert into packet\nselect sensor_id, time, time from (\n select distinct sensor_id,\n (n+random()*10)::INTEGER as time\n from generate_series(50200,50400) n\n cross join sensors\n) d;\n\nNote that I deliberately inserted a hole : the log table contains times\n0-50000 and the packet contains times 50200-50400.\n\nWe'll need to decide if we want the hole to appear in the \"interp\" table\nor not. Let's say we don't want it to appear, we'll just interpolate over\nthe hole. he \"distance\" column will be there so we don't forget this data\nis some sort of guess. If we receive data to fill that hole later, we can\nalways use it.\n\nFor each sensor in the packet, we need to grab some entries from table\n\"log\", at least the most recent one, to be able to do some interpolation\nwith the first (oldest) value in the packet. To be more general, in case\nwe receive old data that will plug a hole, we'll also grab the oldest log\nentry that is more recent than the most recent one in the packet for this\nsensor (hum... 
i have to re-read that...)\n\nAnyway, first let's create the missing ticks :\n\nINSERT INTO ticks\n SELECT generate_series(\n (SELECT max(time) FROM ticks)+10,\n (SELECT max(time) FROM packet),\n 10);\n\nAnd ...\n\nCREATE TEMPORARY TABLE new_interp(\n sensor_id INTEGER NOT NULL,\n time INTEGER NOT NULL,\n value FLOAT NOT NULL,\n distance INTEGER NOT NULL\n);\n\n-- time range in the packet for each sensor\nWITH ranges AS (\n SELECT sensor_id, min(time) AS packet_start_time, max(time) AS\npacket_end_time\n FROM packet\n GROUP BY sensor_id\n),\n-- time ranges for records already in table log that will be needed to\ninterpolate packet records\nlog_boundaries AS (\n SELECT\n sensor_id,\n COALESCE(\n (SELECT max(l.time) FROM log l WHERE l.sensor_id=r.sensor_id AND\nl.time < r.packet_start_time),\n r.packet_start_time\n ) AS packet_start_time,\n COALESCE(\n (SELECT min(l.time) FROM log l WHERE l.sensor_id=r.sensor_id AND\nl.time > r.packet_end_time),\n r.packet_end_time\n ) AS packet_end_time\n FROM ranges r\n),\n-- merge existing and new data\nextended_packet AS (\n SELECT log.* FROM log JOIN log_boundaries USING (sensor_id)\n WHERE log.time BETWEEN packet_start_time AND packet_end_time\n UNION ALL\n SELECT * FROM packet\n),\n-- zip current and next records\npre_interp AS (\n SELECT\n sensor_id,\n lag(time) OVER (PARTITION BY sensor_id ORDER BY time) AS\nstart_time,\n time AS end_time,\n lag(value) over (PARTITION BY sensor_id ORDER BY time) AS\nstart_value,\n value AS end_value\n FROM extended_packet\n),\n-- add tick info\npre_interp2 AS (\n SELECT *, generate_series( ((start_time+9)/10)*10, \n((end_time-1)/10)*10,\n10 ) AS time\n FROM pre_interp\n)\n-- interpolate\nINSERT INTO new_interp SELECT\n sensor_id,\n time,\n start_value +\n(end_value-start_value)*(time-start_time)/(end_time-start_time) AS value,\n greatest( time - start_time, end_time-time ) AS distance\n FROM pre_interp2;\n\nAlthough this query is huge, it's very fast, since it doesn't hit the big\ntables with any seq scans (hence the max() and min() tricks to use the\nindexes instead).\n\nI love how postgres can blast that huge pile of SQL in, like, 50 ms...\n\nIf there is some overlap between packet data and data already in the log,\nyou might get some division by zero errors, in this case you'll need to\napply a DISTINCT somewhere (or simply replace the UNION ALL with an UNION,\nwhich might be wiser anyway...)\n\nAnyway, that doesn't solve the \"upsert\" problem, so here we go :\n\n-- Update the existing rows\nUPDATE interp\n SET value = new_interp.value, distance = new_interp.distance\n FROM new_interp\n WHERE interp.sensor_id = new_interp.sensor_id\n AND interp.time = new_interp.time\n AND interp.distance > new_interp.distance;\n\n-- insert new rows\nINSERT INTO interp\nSELECT new_interp.* FROM new_interp\n LEFT JOIN interp USING (sensor_id,time)\n WHERE interp.sensor_id IS NULL;\n\n-- also insert data into log (don't forget this !)\nINSERT INTO log SELECT * FROM packet;\n\nTada.\n\nselect * from interp where sensor_id=1 and time > 49950 order by time;\n sensor_id | time | value | distance\n-----------+-------+-------+----------\n 1 | 49960 | 49960 | 7\n 1 | 49970 | 49970 | 4\n 1 | 49980 | 49980 | 3\n 1 | 49990 | 49990 | 5\n 1 | 50000 | 50000 | 2\n 1 | 50010 | 50010 | 190\n 1 | 50020 | 50020 | 180\n 1 | 50030 | 50030 | 170\n(...)\n 1 | 50180 | 50180 | 178\n 1 | 50190 | 50190 | 188\n 1 | 50200 | 50200 | 2\n 1 | 50210 | 50210 | 1\n 1 | 50220 | 50220 | 1\n 1 | 50230 | 50230 | 1\n 1 | 50240 | 50240 | 2\n\nNote that the hole was 
interpolated over, but the \"distance\" column shows\nthis data is a guess, not real.\n\nWhat happens if we receive some data later to plug the hole ?\n\n-- plug the previously left hole\ntruncate packet;\ntruncate new_interp;\ninsert into packet\nselect sensor_id, time, time from (\n select distinct sensor_id,\n (n+random()*10)::INTEGER as time\n from generate_series(50050,50150) n\n cross join sensors\n) d;\n\n(re-run huge query and upsert)\n\nselect * from interp where sensor_id=1 and time > 49950 order by time;\nsensor_id | time | value | distance\n-----------+-------+-------+----------\n 1 | 49960 | 49960 | 7\n 1 | 49970 | 49970 | 4\n 1 | 49980 | 49980 | 3\n 1 | 49990 | 49990 | 5\n 1 | 50000 | 50000 | 2\n 1 | 50010 | 50010 | 45\n 1 | 50020 | 50020 | 35\n 1 | 50030 | 50030 | 28\n 1 | 50040 | 50040 | 38\n 1 | 50050 | 50050 | 48\n 1 | 50060 | 50060 | 1\n 1 | 50070 | 50070 | 1\n 1 | 50080 | 50080 | 2\n(...)\n 1 | 50130 | 50130 | 1\n 1 | 50140 | 50140 | 3\n 1 | 50150 | 50150 | 1\n 1 | 50160 | 50160 | 40\n 1 | 50170 | 50170 | 30\n 1 | 50180 | 50180 | 26\n 1 | 50190 | 50190 | 36\n 1 | 50200 | 50200 | 2\n 1 | 50210 | 50210 | 1\n 1 | 50220 | 50220 | 1\n 1 | 50230 | 50230 | 1\n 1 | 50240 | 50240 | 2\n\nIt has used the new data to rewrite new values over the entire hole, and\nthose values should have better precision.\n\nEnjoy !\n",
"msg_date": "Sat, 07 Jan 2012 16:31:23 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
{
"msg_contents": "On Fri, Jan 6, 2012 at 6:35 AM, <[email protected]> wrote:\n> Hello,\n>\n> I've a table with approximately 50 million rows with a schema like this:\n>\n> id bigint NOT NULL DEFAULT nextval('stats_5mn'::regclass),\n> t_value integer NOT NULL DEFAULT 0,\n> t_record integer NOT NULL DEFAULT 0,\n> output_id integer NOT NULL DEFAULT 0,\n> count bigint NOT NULL DEFAULT 0,\n> CONSTRAINT stats_mcs_5min_pkey PRIMARY KEY (id)\n>\n> Every 5 minutes, a process have to insert a few thousand of rows in this\n> table,\n> but sometime, the process have to insert an already existing row (based on\n> values in the triplet (t_value, t_record, output_id). In this case, the row\n> must be updated with the new count value. I've tried some solution given on\n> this\n> stackoverflow question [1] but the insertion rate is always too low for my\n> needs.\n\nWhat are your needs? It should take no special hardware or coding to\nbe able to manage a few thousand rows over 5 minutes.\n\n\n> So, I've decided to do it in two times:\n>\n> - I insert all my new data with a COPY command\n> - When it's done, I run a delete query to remove oldest duplicates\n>\n> Right now, my delete query look like this:\n>\n> SELECT min(id) FROM stats_5mn\n> GROUP BY t_value, t_record, output_id\n> HAVING count(*) > 1;\n>\n> The duration of the query on my test machine with approx. 16 million rows is\n> ~18s.\n>\n> To reduce this duration, I've tried to add an index on my triplet:\n>\n> CREATE INDEX test\n> ON stats_5mn\n> USING btree\n> (t_value , t_record , output_id );\n>\n> By default, the PostgreSQL planner doesn't want to use my index and do a\n> sequential\n> scan [2], but if I force it with \"SET enable_seqscan = off\", the index is\n> used [3]\n> and query duration is lowered to ~5s.\n>\n>\n> My questions:\n>\n> - Why the planner refuse to use my index?\n\nIt thinks that using the index will be about 9 times more expensive\nthan the full scan. Probably your settings for seq_page_cost and\nrandom_page_cost are such that the planner thinks that nearly every\nbuffer read is going to be from disk. But in reality (in this case)\nyour data is all in memory. So the planner is mis-estimating. (It\nwould help verify this if you did your EXPLAIN ANALYZE with BUFFERS as\nwell). But before trying to fix this by tweaking settings, will the\nreal case always be like your test case? If the data stops being all\nin memory, either because the problem size increases or because you\nhave to compete for buffer space with other things going on, then\nusing the index scan could be catastrophic.\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 7 Jan 2012 10:54:51 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
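A way to test Jeff's cost-estimate hypothesis without resorting to enable_seqscan = off is to tell the planner, for one session only, that page fetches are cheap because the data is cached, and then re-run the query. The values below are commonly used for mostly-cached databases; they are illustrative and should be validated before being made permanent.

    -- session-local experiment: model a mostly-cached database
    SET random_page_cost = 1.1;
    SET seq_page_cost = 1.0;
    SET effective_cache_size = '6GB';

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT min(id) FROM stats_5mn
    GROUP BY t_value, t_record, output_id
    HAVING count(*) > 1;

If the index scan now wins and the BUFFERS output confirms mostly shared-buffer hits, the estimates were the problem; Jeff's warning still applies if the data later stops fitting in memory.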
{
"msg_contents": "Hi Pierre!\n\nOn 7 January 2012 12:20, Pierre C <[email protected]> wrote:\n> I'm stuck home with flu, so I'm happy to help ;)\n[...]\n> I'll build an example setup to make it clearer...\n[...]\n\nThat's almost identical to my tables. :-)\n\n> Note that the \"distance\" field represents the distance (in time) between the\n> interpolated value and the farthest real data point that was used to\n> calculate it. Therefore, it can be used as a measure of the quality of the\n> interpolated point ; if the distance is greater than some threshold, the\n> value might not be that precise.\n\nNice idea!\n\n> Although this query is huge, it's very fast, since it doesn't hit the big\n> tables with any seq scans (hence the max() and min() tricks to use the\n> indexes instead).\n\nAnd it can easily be tamed by putting parts of it into stored pgpsql functions.\n\n> I love how postgres can blast that huge pile of SQL in, like, 50 ms...\n\nYes, indeed. It's incredible fast. Brilliant!\n\n> If there is some overlap between packet data and data already in the log,\n> you might get some division by zero errors, in this case you'll need to\n> apply a DISTINCT somewhere (or simply replace the UNION ALL with an UNION,\n> which might be wiser anyway...)\n\nI do have a unique constraint on the actual table to prevent duplicate\ndata in case of retransmission after a failed connect. It's easy\nenough to delete the rows from packet that already exist in the main\ntable with a short one line SQL delete statement before the\ninterpolation and merge.\n\n> Tada.\n\n:-))))\n\n> Enjoy !\n\nI certainly will. Many thanks for those great lines of SQL!\n\nHope you recover from your flu quickly!\n\nAll the best,\nMarc\n",
"msg_date": "Sat, 7 Jan 2012 22:54:20 +0000",
"msg_from": "Marc Eberhard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
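The "short one-line SQL delete statement" Marc mentions for discarding retransmitted readings is not shown in the thread. Against the example schema from the previous messages (packet and log keyed by sensor_id and time), it could look like this sketch:

    -- drop packet rows that are already stored in log
    DELETE FROM packet p
     USING log l
     WHERE l.sensor_id = p.sensor_id
       AND l.time = p.time;

Run before the interpolation query, this also sidesteps the division-by-zero case Pierre pointed out when packet data overlaps data already in log.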
{
"msg_contents": "> That's almost identical to my tables.\n\nYou explained your problem very well ;)\n\n> I certainly will. Many thanks for those great lines of SQL!\n\nYou're welcome !\nStrangely I didn't receive the mail I posted to the list (received yours \nthough).\n",
"msg_date": "Sun, 08 Jan 2012 19:09:57 +0100",
"msg_from": "\"Pierre C\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
},
{
"msg_contents": "Hello,\n\nThanks for your numerous and complete answers!\n\nFor those who have asked for more information about the process and \nhardware:\n\nThe goal of the process is to compute data from a nosql cluster and \nwrite results in a PostgreSQL database. This process is triggered every \n5 minutes for the latest 5 minutes data. 80% of data can be wrote in the \ndatabase with a simple copy, which is the fastest solution we found for \nbulk insertion. But for some data, duplicates are possible (but very \nunusual), and the new data must replace the old one in database. I'm \nlooking for the fastest solution to do this upsert.\n\nAbout the hardware:\n\nThe PostgreSQL database run on a KVM virtual machine, configured with \n8GB of ram and 4 cores of a L5640 CPU. The hypervisor have two 7,2k \nstandard SAS disks working in linux software raid 1. Disks are shared by \nVMs, and obviously, this PostgreSQL VM doesn't share its hypervisor with \nanother \"write-intensive\" VM.\n\nAlso, this database is dedicated to store the data outgoing the \nprocess, so I'm really free for its configuration and tuning. I also \nplan to add a replicated slave database for read operations, and maybe \ndo a partitioning of data, if needed.\n\nIf I summarize your solutions:\n\n - Add an \"order by\" statement to my initial query can help the planner \nto use the index.\n - Temporary tables, with a COPY of new data to the temporary table and \na merge of data (you proposed different ways for the merge).\n - Use EXISTS statement in the delete (but not recommended by another \nreply)\n\nI'll try your ideas this week, and I'll give you results.\n\nAntoine.\n",
"msg_date": "Mon, 09 Jan 2012 13:59:59 +0100",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Duplicate deletion optimizations"
}
] |
[
{
"msg_contents": "hi,\n\nMaybe these thoughts could help....\n\n1) order by those three columns in your select min query could force\nindex usage...\n\n2) or\n\nDELETE FROM table\nWHERE EXISTS(SELECT id FROM table t WHERE t.id > table.id AND t.col1 =\ntable.col1 AND t.col2 = table.col2 AND col3 = table.col3)\n\n\n\nSent from my Windows Phone\nFrom: [email protected]\nSent: 06/01/2012 15:36\nTo: [email protected]\nSubject: [PERFORM] Duplicate deletion optimizations\nHello,\n\nI've a table with approximately 50 million rows with a schema like\nthis:\n\n id bigint NOT NULL DEFAULT nextval('stats_5mn'::regclass),\n t_value integer NOT NULL DEFAULT 0,\n t_record integer NOT NULL DEFAULT 0,\n output_id integer NOT NULL DEFAULT 0,\n count bigint NOT NULL DEFAULT 0,\n CONSTRAINT stats_mcs_5min_pkey PRIMARY KEY (id)\n\nEvery 5 minutes, a process have to insert a few thousand of rows in\nthis table,\nbut sometime, the process have to insert an already existing row (based\non\nvalues in the triplet (t_value, t_record, output_id). In this case, the\nrow\nmust be updated with the new count value. I've tried some solution\ngiven on this\nstackoverflow question [1] but the insertion rate is always too low for\nmy needs.\n\nSo, I've decided to do it in two times:\n\n - I insert all my new data with a COPY command\n - When it's done, I run a delete query to remove oldest duplicates\n\nRight now, my delete query look like this:\n\n SELECT min(id) FROM stats_5mn\n GROUP BY t_value, t_record, output_id\n HAVING count(*) > 1;\n\nThe duration of the query on my test machine with approx. 16 million\nrows is ~18s.\n\nTo reduce this duration, I've tried to add an index on my triplet:\n\n CREATE INDEX test\n ON stats_5mn\n USING btree\n (t_value , t_record , output_id );\n\nBy default, the PostgreSQL planner doesn't want to use my index and do\na sequential\nscan [2], but if I force it with \"SET enable_seqscan = off\", the index\nis used [3]\nand query duration is lowered to ~5s.\n\n\nMy questions:\n\n - Why the planner refuse to use my index?\n - Is there a better method for my problem?\n\n\nThanks by advance for your help,\nAntoine Millet.\n\n\n[1]\nhttp://stackoverflow.com/questions/1109061/insert-on-duplicate-update-postgresql\n\nhttp://stackoverflow.com/questions/3464750/postgres-upsert-insert-or-update-only-if-value-is-different\n\n[2] http://explain.depesz.com/s/UzW :\n GroupAggregate (cost=1167282.380..1294947.770 rows=762182\nwidth=20) (actual time=20067.661..20067.661 rows=0 loops=1)\n Filter: (five(*) > 1)\n -> Sort (cost=1167282.380..1186336.910 rows=7621814 width=20)\n(actual time=15663.549..17463.458 rows=7621805 loops=1)\n Sort Key: delta, kilo, four\n Sort Method: external merge Disk: 223512kB\n -> Seq Scan on three (cost=0.000..139734.140 rows=7621814\nwidth=20) (actual time=0.041..2093.434 rows=7621805 loops=1)\n\n[3] http://explain.depesz.com/s/o9P :\n GroupAggregate (cost=0.000..11531349.190 rows=762182 width=20)\n(actual time=5307.734..5307.734 rows=0 loops=1)\n Filter: (five(*) > 1)\n -> Index Scan using charlie on three (cost=0.000..11422738.330\nrows=7621814 width=20) (actual time=0.046..2062.952 rows=7621805\nloops=1)\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Jan 2012 19:28:06 -0800",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Duplicate deletion optimizations"
}
] |
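Spelled out against the stats_5mn schema from the question, the two delete variants discussed above look like this. Both assume id reflects insertion order, so the newest copy of each triplet is the one kept; note that the min(id) form removes only one duplicate per group per pass, while the EXISTS form removes every older copy in a single pass:

-- keep the newest row of each duplicated triplet, drop the oldest
DELETE FROM stats_5mn
 WHERE id IN (SELECT min(id)
                FROM stats_5mn
               GROUP BY t_value, t_record, output_id
              HAVING count(*) > 1);

-- same idea with EXISTS, as suggested above
DELETE FROM stats_5mn s
 WHERE EXISTS (SELECT 1
                 FROM stats_5mn t
                WHERE t.id > s.id
                  AND t.t_value   = s.t_value
                  AND t.t_record  = s.t_record
                  AND t.output_id = s.output_id);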
[
{
"msg_contents": "If solution with temp table is acceptable - i think steps could be\nreduced...\n\n• copy to temp_imp ( temp table does not have id column)\n\n• update live set count = temp_imp.count from temp_imp using (\ncol1,col2,col3)\n\n• insert into live from temp where col1, col2 and col3 not exists in\nlive\n\nKind Regards,\n\nMisa\n\nSent from my Windows Phone\nFrom: Jochen Erwied\nSent: 07/01/2012 12:58\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Duplicate deletion optimizations\nFriday, January 6, 2012, 4:21:06 PM you wrote:\n\n>> Every 5 minutes, a process have to insert a few thousand of rows in this\n>> table, but sometime, the process have to insert an already existing row\n>> (based on values in the triplet (t_value, t_record, output_id). In this\n>> case, the row must be updated with the new count value. I've tried some\n>> solution given on this stackoverflow question [1] but the insertion rate\n>> is always too low for my needs.\n\nI did check the following in a loop, starting with an empty table, and\ninserting/updating 50000 random unique entries. After 15 minutes I've got\nabout 10 million records, each loop takes about 3 seconds. After 30 minutes\nthe table contains approx. 18 million entries, time per loop only slightly\nincreased. After 90 minutes the database has about 30 million entries. The\nspeed has dropped to about 15-20 seconds per loop, but the server is doing\nlots of other queries in parallel, so with an unloaded server the updates\nshould still take less than 10 seconds.\n\nThe generator runs in perl, and generates records for a maximum of 100\nmillion different entries:\n\nuse strict;\n\nsrand time;\nmy $i = 0;\nopen FD, \">data.in\";\nfor (1..50000)\n{\n $i += rand(2000);\n print FD sprintf(\"%d\\t%d\\t%d\\t%d\\n\", $i/65536, ($i/256)%255,\n$i%255, rand(1000));\n}\nclose FD;\n\nThe SQL-script looks like this:\n\n\\timing on\nbegin;\ncreate temp table t_imp(id bigint,t_value integer,t_record\ninteger,output_id integer,count bigint);\n\\copy t_imp (t_value, t_record, output_id, count) from 'data.in'\n--an index is not really needed, table is in memory anyway\n--create index t_imp_ix on t_imp(t_value,t_record,output_id);\n\n-- find matching rows\nupdate t_imp\n set id=test.id\n from test\n where (t_imp.t_value,t_imp.t_record,t_imp.output_id)=(test.t_value,test.t_record,test.output_id);\n-- update matching rows using primary key\nupdate test\n set count=t_imp.count\n from t_imp\n where t_imp.id is null and test.id=t_imp.id;\n-- insert missing rows\ninsert into test(t_value,t_record,output_id,count)\n select t_value,t_record,output_id,count\n from t_imp\n where id is null;\ncommit;\n\nAdvantages of this solution:\n\n- all updates are done in-place, no index modifications (except for the\n inserts, of course)\n- big table only gets inserts\n- no dead tuples from deletes\n- completely avoids sequential scans on the big table\n\nTested on my home server (8GB RAM, 3GB shared memory, Dual-Xeon 5110, 1.6\nGHz, table and indices stored on a SSD)\n\nTable statistics:\n\nrelid | 14332525\nschemaname | public\nrelname | test\nseq_scan | 8\nseq_tup_read | 111541821\nidx_scan | 149240169\nidx_tup_fetch | 117901695\nn_tup_ins | 30280175\nn_tup_upd | 0\nn_tup_del | 0\nn_tup_hot_upd | 0\nn_live_tup | 30264431\nn_dead_tup | 0\nlast_vacuum |\nlast_autovacuum |\nlast_analyze |\nlast_autoanalyze | 2012-01-07 12:38:49.593651+01\nvacuum_count | 0\nautovacuum_count | 0\nanalyze_count | 0\nautoanalyze_count | 31\n\nThe sequential scans were from some 
'select count(*)' in between.\n\nHTH.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 7 Jan 2012 06:02:10 -0800",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Duplicate deletion optimizations"
},
{
"msg_contents": "Saturday, January 7, 2012, 3:02:10 PM you wrote:\n\n> • insert into live from temp where col1, col2 and col3 not exists in\n> live\n\n'not exists' is something I'm trying to avoid, even if the optimizer is \nable to handle it. \n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Sat, 7 Jan 2012 15:17:58 +0100",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Duplicate deletion optimizations"
}
] |
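One detail in the script above looks inverted: in the second UPDATE, the condition 'where t_imp.id is null and test.id=t_imp.id' can never be true for any row, so the matched rows would never receive the new count. A corrected sketch of the three merge steps, under the same staging-table layout, would be:

-- 1) tag staged rows that already exist in the big table
UPDATE t_imp
   SET id = test.id
  FROM test
 WHERE (t_imp.t_value, t_imp.t_record, t_imp.output_id)
     = (test.t_value, test.t_record, test.output_id);

-- 2) update the matched rows via their primary key
UPDATE test
   SET count = t_imp.count
  FROM t_imp
 WHERE t_imp.id IS NOT NULL
   AND test.id = t_imp.id;

-- 3) insert the rows that found no match
INSERT INTO test (t_value, t_record, output_id, count)
SELECT t_value, t_record, output_id, count
  FROM t_imp
 WHERE id IS NULL;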
[
{
"msg_contents": "It was not query... Just sentence where some index values in one table\nnot exist in another...\n\nSo query could be with:\n• WHERE (col1,col2,col2) NOT IN\n• WHERE NOT EXISTS\n• LEFT JOIN live USING (col1,col2,col2) WHERE live.id IS NULL\n\nwhat ever whoever prefer more or what gives better results... But I\nthink it is more personal feelings which is better then real...\n\nSent from my Windows Phone\nFrom: Jochen Erwied\nSent: 07/01/2012 15:18\nTo: Misa Simic\nCc: [email protected]\nSubject: Re: [PERFORM] Duplicate deletion optimizations\nSaturday, January 7, 2012, 3:02:10 PM you wrote:\n\n> • insert into live from temp where col1, col2 and col3 not exists in\n> live\n\n'not exists' is something I'm trying to avoid, even if the optimizer is\nable to handle it.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n",
"msg_date": "Sat, 7 Jan 2012 12:16:13 -0800",
"msg_from": "Misa Simic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Duplicate deletion optimizations"
}
] |
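For reference, the three anti-join spellings mentioned above, written against the 'live' / col1..col3 shorthand used in this thread (the column names stand in for t_value, t_record and output_id). NOT IN behaves surprisingly if any of the compared columns can be NULL, so the other two forms are usually the safer choice; which one plans best is worth checking with EXPLAIN:

-- NOT IN
INSERT INTO live (col1, col2, col3, count)
SELECT col1, col2, col3, count FROM t_imp
 WHERE (col1, col2, col3) NOT IN (SELECT col1, col2, col3 FROM live);

-- NOT EXISTS
INSERT INTO live (col1, col2, col3, count)
SELECT i.col1, i.col2, i.col3, i.count
  FROM t_imp i
 WHERE NOT EXISTS (SELECT 1 FROM live l
                    WHERE (l.col1, l.col2, l.col3) = (i.col1, i.col2, i.col3));

-- LEFT JOIN anti-join
INSERT INTO live (col1, col2, col3, count)
SELECT i.col1, i.col2, i.col3, i.count
  FROM t_imp i
  LEFT JOIN live l USING (col1, col2, col3)
 WHERE l.id IS NULL;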
[
{
"msg_contents": "Some info:\nPostgreSQL version: 9.1.2\n\nTable \"cache\":\nRows count: 3 471 081\nColumn \"tsv\" tsvector\nIndex \"cache_tsv\" USING gin (tsv)\n\nIf i do query like THIS:\n\n*SELECT id FROM table WHERE tsv @@ to_tsquery('test:*');*\nIt uses index and returns results immediately:\n\nexplain analyze\n'Bitmap Heap Scan on cache (cost=1441.78..63802.63 rows=19843 width=4)\n(actual time=29.309..31.518 rows=1358 loops=1)'\n' Recheck Cond: (tsv @@ to_tsquery('test:*'::text))'\n' -> Bitmap Index Scan on cache_tsv (cost=0.00..1436.82 rows=19843\nwidth=0) (actual time=28.966..28.966 rows=1559 loops=1)'\n' Index Cond: (tsv @@ to_tsquery('test:*'::text))'\n'Total runtime: 31.789 ms'\n\n\nBut the performance problems starts when i do the same query specifying\nLIMIT.\n*SELECT id FROM cache WHERE tsv @@ to_tsquery('test:*') limit 20;*\n\nBy some reason index is not used.\n\nexplain analyze\n'Limit (cost=0.00..356.23 rows=20 width=4) (actual time=7.984..765.550\nrows=20 loops=1)'\n' -> Seq Scan on cache (cost=0.00..353429.50 rows=19843 width=4) (actual\ntime=7.982..765.536 rows=20 loops=1)'\n' Filter: (tsv @@ to_tsquery('test:*'::text))'\n'Total runtime: 765.620 ms'\n\n\nSome more debug notes:\n1) If i set SET enable_seqscan=off; then query uses indexes correctly\n2) Also i notified, if i use: to_tsquery('test') without wildcard search\n:*, then index is used correctly in both queries, with or without LIMIT\n\nAny ideas how to fix the problem?\nThank you\n\nSome info:PostgreSQL version: 9.1.2Table \"cache\":Rows count: 3 471 081Column \"tsv\" tsvectorIndex \"cache_tsv\" USING gin (tsv)\nIf i do query like THIS:SELECT id FROM table WHERE tsv @@ to_tsquery('test:*');It uses index and returns results immediately:\nexplain analyze 'Bitmap Heap Scan on cache (cost=1441.78..63802.63 rows=19843 width=4) (actual time=29.309..31.518 rows=1358 loops=1)'' Recheck Cond: (tsv @@ to_tsquery('test:*'::text))'\n' -> Bitmap Index Scan on cache_tsv (cost=0.00..1436.82 rows=19843 width=0) (actual time=28.966..28.966 rows=1559 loops=1)'' Index Cond: (tsv @@ to_tsquery('test:*'::text))'\n'Total runtime: 31.789 ms'But the performance problems starts when i do the same query specifying LIMIT. SELECT id FROM cache WHERE tsv @@ to_tsquery('test:*') limit 20;\nBy some reason index is not used. explain analyze 'Limit (cost=0.00..356.23 rows=20 width=4) (actual time=7.984..765.550 rows=20 loops=1)'' -> Seq Scan on cache (cost=0.00..353429.50 rows=19843 width=4) (actual time=7.982..765.536 rows=20 loops=1)'\n' Filter: (tsv @@ to_tsquery('test:*'::text))''Total runtime: 765.620 ms'Some more debug notes:1) If i set SET enable_seqscan=off; then query uses indexes correctly\n2) Also i notified, if i use: to_tsquery('test') without wildcard search :*, then index is used correctly in both queries, with or without LIMITAny ideas how to fix the problem?\nThank you",
"msg_date": "Tue, 10 Jan 2012 13:30:53 +0200",
"msg_from": "darklow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query planner doesn't use index scan on tsvector GIN index if LIMIT\n\tis specified"
}
] |
[
{
"msg_contents": "Some info:\nPostgreSQL version: 9.1.2\n\nTable \"cache\":\nRows count: 3 471 081\nColumn \"tsv\" tsvector\nIndex \"cache_tsv\" USING gin (tsv)\n\nIf i do query like THIS:\n\n*SELECT id FROM table WHERE tsv @@ to_tsquery('test:*');*\nIt uses index and returns results immediately:\n\nexplain analyze\n'Bitmap Heap Scan on cache (cost=1441.78..63802.63 rows=19843 width=4)\n(actual time=29.309..31.518 rows=1358 loops=1)'\n' Recheck Cond: (tsv @@ to_tsquery('test:*'::text))'\n' -> Bitmap Index Scan on cache_tsv (cost=0.00..1436.82 rows=19843\nwidth=0) (actual time=28.966..28.966 rows=1559 loops=1)'\n' Index Cond: (tsv @@ to_tsquery('test:*'::text))'\n'Total runtime: 31.789 ms'\n\n\nBut the performance problems starts when i do the same query specifying\nLIMIT.\n*SELECT id FROM cache WHERE tsv @@ to_tsquery('test:*') limit 20;*\n\nBy some reason index is not used.\n\nexplain analyze\n'Limit (cost=0.00..356.23 rows=20 width=4) (actual time=7.984..765.550\nrows=20 loops=1)'\n' -> Seq Scan on cache (cost=0.00..353429.50 rows=19843 width=4) (actual\ntime=7.982..765.536 rows=20 loops=1)'\n' Filter: (tsv @@ to_tsquery('test:*'::text))'\n'Total runtime: 765.620 ms'\n\n\nSome more debug notes:\n1) If i set SET enable_seqscan=off; then query uses indexes correctly\n2) Also i notified, if i use: to_tsquery('test') without wildcard search\n:*, then index is used correctly in both queries, with or without LIMIT\n\nAny ideas how to fix the problem?\nThank you\n\nSome info:PostgreSQL version: 9.1.2Table \"cache\":Rows count: 3 471 081Column \"tsv\" tsvectorIndex \"cache_tsv\" USING gin (tsv)\nIf i do query like THIS:SELECT id FROM table WHERE tsv @@ to_tsquery('test:*');It uses index and returns results immediately:\nexplain analyze 'Bitmap Heap Scan on cache (cost=1441.78..63802.63 rows=19843 width=4) (actual time=29.309..31.518 rows=1358 loops=1)'' Recheck Cond: (tsv @@ to_tsquery('test:*'::text))'\n' -> Bitmap Index Scan on cache_tsv (cost=0.00..1436.82 rows=19843 width=0) (actual time=28.966..28.966 rows=1559 loops=1)'\n' Index Cond: (tsv @@ to_tsquery('test:*'::text))''Total runtime: 31.789 ms'But the performance problems starts when i do the same query specifying LIMIT. \nSELECT id FROM cache WHERE tsv @@ to_tsquery('test:*') limit 20;By some reason index is not used. explain analyze 'Limit (cost=0.00..356.23 rows=20 width=4) (actual time=7.984..765.550 rows=20 loops=1)'\n' -> Seq Scan on cache (cost=0.00..353429.50 rows=19843 width=4) (actual time=7.982..765.536 rows=20 loops=1)'' Filter: (tsv @@ to_tsquery('test:*'::text))''Total runtime: 765.620 ms'\nSome more debug notes:1) If i set SET enable_seqscan=off; then query uses indexes correctly2) Also i notified, if i use: to_tsquery('test') without wildcard search :*, then index is used correctly in both queries, with or without LIMIT\nAny ideas how to fix the problem?Thank you",
"msg_date": "Tue, 10 Jan 2012 14:30:41 +0200",
"msg_from": "darklow <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query planner doesn't use index scan on tsvector GIN index if LIMIT\n\tis specifiedQuery planner doesn't use index scan on tsvector GIN index\n\tif LIMIT is specified"
},
{
"msg_contents": "darklow <[email protected]> writes:\n> But the performance problems starts when i do the same query specifying\n> LIMIT.\n> *SELECT id FROM cache WHERE tsv @@ to_tsquery('test:*') limit 20;*\n> By some reason index is not used.\n\nIt apparently thinks there are enough matches that it might as well just\nseqscan the table and expect to find some matches at random, in less\ntime than using the index would take.\n\nThe estimate seems to be off quite a bit, so maybe raising the stats\ntarget for this column would help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Jan 2012 12:04:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner doesn't use index scan on tsvector GIN index if\n\tLIMIT is specifiedQuery planner doesn't use index scan on\n\ttsvector GIN index if LIMIT is specified"
},
{
"msg_contents": "On 2012-01-10 18:04, Tom Lane wrote:\n> darklow<[email protected]> writes:\n>> But the performance problems starts when i do the same query specifying\n>> LIMIT.\n>> *SELECT id FROM cache WHERE tsv @@ to_tsquery('test:*') limit 20;*\n>> By some reason index is not used.\n> It apparently thinks there are enough matches that it might as well just\n> seqscan the table and expect to find some matches at random, in less\n> time than using the index would take.\n>\n> The estimate seems to be off quite a bit, so maybe raising the stats\n> target for this column would help.\nThe cost of matching ts_match_vq against a toasted column\nis not calculated correctly. This is completely parallel with\nhttp://archives.postgresql.org/pgsql-hackers/2011-11/msg01754.php\n\nTry raising the cost for ts_match_vq(tsvector,tsquery) that help a bit, but\nits hard to get the cost high enough.\n\nRaising statistics target helps too..\n\n-- \nJesper\n",
"msg_date": "Tue, 10 Jan 2012 22:42:35 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query planner doesn't use index scan on tsvector GIN\n\tindex if LIMIT is specifiedQuery planner doesn't use index scan on\n\ttsvector GIN index if LIMIT is specified"
}
] |
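The two adjustments suggested above, as SQL against the table from the original post. The statistics target and the function cost are only starting values to experiment with, not recommendations:

-- give the planner better selectivity information for the tsv column
ALTER TABLE cache ALTER COLUMN tsv SET STATISTICS 1000;
ANALYZE cache;

-- make the @@ match function look expensive enough that a seq scan
-- evaluating it on many toasted rows is penalised
ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 500;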
[
{
"msg_contents": "We have a set of large tables. One of the columns is a status indicator\n(active / archived). The queries against these tables almost always\ninclude the status, so partitioning against that seems to makes sense from\na logical standpoint, especially given most of the data is \"archived\" and\nmost of the processes want active records.\n\nIs it practical to partition on the status column and, eg, use triggers to\nmove a row between the two partitions when status is updated? Any\nsurprises to watch for, given the status column is actually NULL for active\ndata and contains a value when archived?\n\nMike\n\nWe have a set of large tables. One of the columns is a status indicator (active / archived). The queries against these tables almost always include the status, so partitioning against that seems to makes sense from a logical standpoint, especially given most of the data is \"archived\" and most of the processes want active records.\nIs it practical to partition on the status column and, eg, use triggers to move a row between the two partitions when status is updated? Any surprises to watch for, given the status column is actually NULL for active data and contains a value when archived?\nMike",
"msg_date": "Tue, 10 Jan 2012 10:57:04 -0600",
"msg_from": "Mike Blackwell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning by status?"
},
{
"msg_contents": "Mike Blackwell <[email protected]> wrote:\n\n> We have a set of large tables. �One of the columns is a status indicator\n> (active / archived). �The queries against these tables almost always include\n> the status, so partitioning against that seems to makes sense from a logical\n> standpoint, especially given most of the data is \"archived\" and most of the\n> processes want active records.\n> \n> Is it practical to partition on the status column and, eg, use triggers to move\n> a row between the two partitions when status is updated? �Any surprises to\n> watch for, given the status column is actually NULL for active data and\n> contains a value when archived?\n\nIf i where you, i would try a partial index where status is null. But\nyes, partitioning is an other option, depends on your workload.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Tue, 10 Jan 2012 18:09:37 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning by status?"
},
{
"msg_contents": "Mike,\n\n> Is it practical to partition on the status column and, eg, use triggers to\n> move a row between the two partitions when status is updated? Any\n> surprises to watch for, given the status column is actually NULL for active\n> data and contains a value when archived?\n\nWhen I've done this before, I've had a setup like the following:\n\n1. One \"active\" partition\n\n2. Multiple \"archive\" partitions, also partitioned by time (month or year)\n\n3. stored procedure for archiving a record or records.\n\nI'd recommend against triggers because they'll be extremely inefficient\nif you need to archive a large number of rows at once.\n\nAlso, (2) only really works if you're going to obsolesce (remove)\narchive records after a certain period of time. Otherwise the\nsub-partitioning hurts performance.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 12 Jan 2012 10:24:36 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning by status?"
},
{
"msg_contents": ">\n> Also, (2) only really works if you're going to obsolesce (remove)\n> archive records after a certain period of time. Otherwise the\n> sub-partitioning hurts performance.\n>\n\nIs there any moves to include the \"easy\" table partitioning in the 9.2 \nversion ?\n\n",
"msg_date": "Fri, 13 Jan 2012 08:44:01 -0200",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning by status?"
},
{
"msg_contents": "On 1/13/12 2:44 AM, alexandre - aldeia digital wrote:\n>>\n>> Also, (2) only really works if you're going to obsolesce (remove)\n>> archive records after a certain period of time. Otherwise the\n>> sub-partitioning hurts performance.\n>>\n> \n> Is there any moves to include the \"easy\" table partitioning in the 9.2\n> version ?\n\nNobody has been submitting patches.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 13 Jan 2012 11:05:56 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning by status?"
},
{
"msg_contents": "Em 13-01-2012 17:05, Josh Berkus escreveu:\n> On 1/13/12 2:44 AM, alexandre - aldeia digital wrote:\n>>>\n>>> Also, (2) only really works if you're going to obsolesce (remove)\n>>> archive records after a certain period of time. Otherwise the\n>>> sub-partitioning hurts performance.\n>>>\n>>\n>> Is there any moves to include the \"easy\" table partitioning in the 9.2\n>> version ?\n>\n> Nobody has been submitting patches.\n>\n\nI'm sorry hear this. Table partitioning is a very good helper in a large \nnumber of performance issues. If there was a bounty to help anyone to \nmake this, I would be a happy contributor. :)\n",
"msg_date": "Mon, 23 Jan 2012 15:22:51 -0200",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning by status?"
}
] |
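A sketch of the partial-index suggestion from this thread. The table name and the extra indexed column are invented for illustration; the point is that "active" rows are the ones where status IS NULL, and that queries must repeat that predicate to be able to use the index:

CREATE INDEX orders_active_idx
    ON orders (order_date)
 WHERE status IS NULL;

-- only queries repeating the predicate can use the partial index
SELECT *
  FROM orders
 WHERE status IS NULL
   AND order_date >= current_date - 7;

If partitioning is still wanted later, the batch-archiving stored procedure approach described above avoids firing a row-level trigger for every archived row.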
[
{
"msg_contents": "----- Original Message -----\n> From: \"Robert Haas\" <[email protected]>\n\nHi Robert,\n\nI solved the problem by modifying the query:\n\nbefore:\n ORDER BY dlr.timestamp_todeliver DESC LIMIT\n\nafter:\n ORDER BY sms.timestamp_todeliver DESC LIMIT\n\nmodifying this, the planner changed and computed the result in few ms (500ms before caching, 5ms after caching)...I really don't understand why but is fine...\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.02..2196.62 rows=50 width=16) (actual time=0.423..4.010 rows=50 loops=1)\n -> Nested Loop (cost=0.02..250390629.32 rows=5699521 width=16) (actual time=0.422..3.954 rows=50 loops=1)\n Join Filter: (sms.id = dlr.id_sms_messaggio)\n -> Merge Append (cost=0.02..11758801.28 rows=470529 width=16) (actual time=0.384..2.977 rows=50 loops=1)\n Sort Key: sms.timestamp_todeliver\n -> Index Scan Backward using sms_messaggio_todeliver on sms_messaggio sms (cost=0.00..8.27 rows=1 width=16) (actual time=0.006..0.006 rows=0 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n Filter: (id_cliente = 13)\n -> Index Scan Backward using sms_messaggio_timestamp_todeliver_201003 on sms_messaggio_201003 sms (cost=0.00..7645805.79 rows=273298 width=16) (actual time=0.313..0.313 rows=1 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n Filter: (id_cliente = 13)\n -> Index Scan Backward using sms_messaggio_timestamp_todeliver_201004 on sms_messaggio_201004 sms (cost=0.00..4104353.16 rows=197230 width=16) (actual time=0.062..2.600 rows=50 loops=1)\n Index Cond: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n Filter: (id_cliente = 13)\n -> Append (cost=0.00..505.46 rows=136 width=8) (actual time=0.016..0.017 rows=1 loops=50)\n -> Index Scan using sms_messaggio_dlr_id_sms on sms_messaggio_dlr dlr (cost=0.00..0.27 rows=1 width=8) (actual time=0.001..0.001 rows=0 loops=50)\n Index Cond: (id_sms_messaggio = sms.id)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Bitmap Heap Scan on sms_messaggio_dlr_201003 dlr (cost=4.89..274.56 rows=73 width=8) (actual time=0.004..0.004 rows=0 loops=50)\n Recheck Cond: (id_sms_messaggio = sms.id)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on sms_messaggio_dlr_id_sms_201003 (cost=0.00..4.88 rows=73 width=0) (actual time=0.003..0.003 rows=0 loops=50)\n Index Cond: (id_sms_messaggio = sms.id)\n -> Bitmap Heap Scan on sms_messaggio_dlr_201004 dlr (cost=4.69..230.62 rows=62 width=8) (actual time=0.006..0.007 rows=1 loops=50)\n Recheck Cond: (id_sms_messaggio = sms.id)\n Filter: ((timestamp_todeliver >= '2010-03-01 00:00:00'::timestamp without time zone) AND (timestamp_todeliver < '2010-04-30 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on sms_messaggio_dlr_id_sms_201004 (cost=0.00..4.68 
rows=62 width=0) (actual time=0.003..0.003 rows=1 loops=50)\n Index Cond: (id_sms_messaggio = sms.id)\n Total runtime: 4.112 ms\n\nRegards,\n\nMatteo\n",
"msg_date": "Wed, 11 Jan 2012 12:57:42 +0100 (CET)",
"msg_from": "Matteo Sgalaberni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioned table: differents plans, slow on some situations"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've run a series fo pgbench benchmarks with the aim to see the effect\nof moving the WAL logs to a separate drive, and one thing that really\nsurprised me is that the archive log level seems to give much better\nperformance than minimal log level.\n\nOn spinning drives this is not noticeable, but on SSDs it's quite clear.\nSee for example this:\n\n http://www.fuzzy.cz/tmp/tps-rw-minimal.png\n http://www.fuzzy.cz/tmp/tps-rw-archive.png\n\nThat minimal log level gives about 1600 tps all the time, while archive\nlog level gives about the same performance at the start but then it\ncontinuously increases up to about 2000 tps.\n\nThis seems very suspicious, because AFAIK the wal level should not\nreally matter for pgbench and if it does I'd expect exactly the opposite\nbehaviour (i.e. 'archive' performing worse than 'minimal').\n\nThis was run on 9.1.2 with two SSDs (Intel 320) and EXT4, but I do see\nexactly the same behaviour with a single SSD drive.\n\nThe config files are here (the only difference is the wal_level line at\nthe very end)\n\n http://www.fuzzy.cz/tmp/postgresql-minimal.conf\n http://www.fuzzy.cz/tmp/postgresql-archive.conf\n\npgbench results and logs are here:\n\n http://www.fuzzy.cz/tmp/pgbench.minimal.log.gz\n http://www.fuzzy.cz/tmp/pgbench.archive.log.gz\n\n http://www.fuzzy.cz/tmp/results.minimal.log\n http://www.fuzzy.cz/tmp/results.archive.log\n\nI do plan to rerun the whole benchmark, but is there any reasonable\nexplanation or something that might cause such behaviour?\n\nkind regards\nTomas\n",
"msg_date": "Fri, 13 Jan 2012 00:17:22 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "wal_level=archive gives better performance than minimal - why?"
},
{
"msg_contents": "On 01/12/2012 06:17 PM, Tomas Vondra wrote:\n> I've run a series fo pgbench benchmarks with the aim to see the effect\n> of moving the WAL logs to a separate drive, and one thing that really\n> surprised me is that the archive log level seems to give much better\n> performance than minimal log level.\n\nHow repeatable is this? If you always run minimal first and then \narchive, that might be the actual cause of the difference. In this \nsituation I would normally run this 12 times, with this sort of pattern:\n\nminimal\nminimal\nminimal\narchive\narchive\narchive\nminimal\nminimal\nminimal\narchive\narchive\narchive\n\nTo make sure the difference wasn't some variation on \"gets slower after \neach run\". pgbench suffers a lot from problems in that class.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 16 Jan 2012 17:35:53 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal_level=archive gives better performance than minimal\n - why?"
},
{
"msg_contents": "On 16.1.2012 23:35, Greg Smith wrote:\n> On 01/12/2012 06:17 PM, Tomas Vondra wrote:\n>> I've run a series fo pgbench benchmarks with the aim to see the effect\n>> of moving the WAL logs to a separate drive, and one thing that really\n>> surprised me is that the archive log level seems to give much better\n>> performance than minimal log level.\n> \n> How repeatable is this? If you always run minimal first and then\n> archive, that might be the actual cause of the difference. In this\n> situation I would normally run this 12 times, with this sort of pattern:\n> \n> minimal\n> minimal\n> minimal\n> archive\n> archive\n> archive\n> minimal\n> minimal\n> minimal\n> archive\n> archive\n> archive\n> \n> To make sure the difference wasn't some variation on \"gets slower after\n> each run\". pgbench suffers a lot from problems in that class.\n\nAFAIK it's well repeatable - the primary goal of the benchmark was to\nsee the benefir of moving the WAL to a separate device (with various WAL\nlevels and device types - SSD and HDD).\n\nI plan to rerun the whole thing this week with a bit more details logged\nto rule out basic configuration mistakes etc.\n\nEach run is completely separate (rebuilt from scratch) and takes about 1\nhour to complete. Each pgbench run consists of these steps\n\n 1) rebuild the data from scratch\n 2) 10-minute warmup (read-only run)\n 3) 20-minute read-only run\n 4) checkpoint\n 5) 20-minute read-write run\n\nand the results are very stable.\n\nTomas\n",
"msg_date": "Tue, 17 Jan 2012 01:29:53 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wal_level=archive gives better performance than minimal\n - why?"
},
{
"msg_contents": "On 17.1.2012 01:29, Tomas Vondra wrote:\n> On 16.1.2012 23:35, Greg Smith wrote:\n>> On 01/12/2012 06:17 PM, Tomas Vondra wrote:\n>>> I've run a series fo pgbench benchmarks with the aim to see the effect\n>>> of moving the WAL logs to a separate drive, and one thing that really\n>>> surprised me is that the archive log level seems to give much better\n>>> performance than minimal log level.\n>>\n>> How repeatable is this? If you always run minimal first and then\n>> archive, that might be the actual cause of the difference. In this\n>> situation I would normally run this 12 times, with this sort of pattern:\n>>\n>> minimal\n>> minimal\n>> minimal\n>> archive\n>> archive\n>> archive\n>> minimal\n>> minimal\n>> minimal\n>> archive\n>> archive\n>> archive\n>>\n>> To make sure the difference wasn't some variation on \"gets slower after\n>> each run\". pgbench suffers a lot from problems in that class.\n\nSo, I've rerun the whole benchmark (varying fsync method and wal level),\nand the results are exactly the same as before ...\n\nSee this:\n\n http://www.fuzzy.cz/tmp/fsync/tps.html\n http://www.fuzzy.cz/tmp/fsync/latency.html\n\nEach row represents one of the fsync methods, first column is archive\nlevel, second column is minimal level. Notice that the performance with\narchive level continuously increases and is noticeably better than the\nminimal wal level. In some cases (e.g. fdatasync) the difference is up\nto 15%. That's a lot.\n\nThis is a 20-minute pgbench read-write run that is executed after a\n20-minute read-only pgbench run (to warm up the caches etc.)\n\nThe latencies seem generaly the same, except that with minimal WAL level\nthere's a 4-minute interval of significantly higher latencies at the\nbeginning.\n\nThat's suspiciously similar to the checkpoint timeout (which was set to\n4 minutes), but why should this matter for minimal WAL level and not for\narchive?\n\nTomas\n",
"msg_date": "Mon, 23 Jan 2012 00:07:01 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wal_level=archive gives better performance than minimal\n - why?"
},
{
"msg_contents": "2012/1/22 Tomas Vondra <[email protected]>:\n> That's suspiciously similar to the checkpoint timeout (which was set to\n> 4 minutes), but why should this matter for minimal WAL level and not for\n> archive?\n\nI went through and looked at all the places where we invoke\nXLogIsNeeded(). When XLogIsNeeded(), we:\n\n1. WAL log creation of the _init fork of an unlogged table or an index\non an unlogged table (otherwise, an fsync is enough)\n2. WAL log index builds\n3. WAL log changes to max_connections, max_prepared_xacts,\nmax_locks_per_xact, and/or wal_level\n4. skip calling posix_fadvise(POSIX_FADV_DONTNEED) when closing a WAL file\n5. skip supplying O_DIRECT when writing WAL, if wal_sync_method is\nopen_sync or open_datasync\n6. refuse to create named restore points\n7. WAL log CLUSTER\n8. WAL log COPY FROM into a newly created/truncated relation\n9. WAL log ALTER TABLE .. SET TABLESPACE\n9. WAL log cleanup info before doing an index vacuum (this one should\nprobably be changed to happen only in HS mode)\n10. WAL log SELECT INTO\n\nIt's hard to see how generating more WAL could cause a performance\nimprovement, unless there's something about full page flushes being\nmore efficient than partial page flushes or something like that. But\nnone of the stuff above looks likely to happen very often anyway. But\nitems #4 and #5 on that list like things that could potentially be\ncausing a problem - if WAL files are being reused regularly, then\ncalling POSIX_FADV_DONTNEED on them could represent a regression. It\nmight be worth compiling with POSIX_FADV_DONTNEED undefined and see\nwhether that changes anything.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 3 Feb 2012 13:48:25 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal_level=archive gives better performance than minimal\n - why?"
},
{
"msg_contents": "Le 3 février 2012 19:48, Robert Haas <[email protected]> a écrit :\n> 2012/1/22 Tomas Vondra <[email protected]>:\n>> That's suspiciously similar to the checkpoint timeout (which was set to\n>> 4 minutes), but why should this matter for minimal WAL level and not for\n>> archive?\n>\n> I went through and looked at all the places where we invoke\n> XLogIsNeeded(). When XLogIsNeeded(), we:\n>\n> 1. WAL log creation of the _init fork of an unlogged table or an index\n> on an unlogged table (otherwise, an fsync is enough)\n> 2. WAL log index builds\n> 3. WAL log changes to max_connections, max_prepared_xacts,\n> max_locks_per_xact, and/or wal_level\n> 4. skip calling posix_fadvise(POSIX_FADV_DONTNEED) when closing a WAL file\n> 5. skip supplying O_DIRECT when writing WAL, if wal_sync_method is\n> open_sync or open_datasync\n> 6. refuse to create named restore points\n> 7. WAL log CLUSTER\n> 8. WAL log COPY FROM into a newly created/truncated relation\n> 9. WAL log ALTER TABLE .. SET TABLESPACE\n> 9. WAL log cleanup info before doing an index vacuum (this one should\n> probably be changed to happen only in HS mode)\n> 10. WAL log SELECT INTO\n>\n> It's hard to see how generating more WAL could cause a performance\n> improvement, unless there's something about full page flushes being\n> more efficient than partial page flushes or something like that. But\n> none of the stuff above looks likely to happen very often anyway. But\n> items #4 and #5 on that list like things that could potentially be\n> causing a problem - if WAL files are being reused regularly, then\n> calling POSIX_FADV_DONTNEED on them could represent a regression. It\n> might be worth compiling with POSIX_FADV_DONTNEED undefined and see\n> whether that changes anything.\n\nit should be valuable to have the kernel version and also confirm the\nsame behavior happens with XFS.\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Sat, 4 Feb 2012 17:04:29 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: wal_level=archive gives better performance than minimal\n - why?"
},
{
"msg_contents": "On 4.2.2012 17:04, Cédric Villemain wrote:\n> Le 3 février 2012 19:48, Robert Haas <[email protected]> a écrit :\n>> 2012/1/22 Tomas Vondra <[email protected]>:\n>>> That's suspiciously similar to the checkpoint timeout (which was set to\n>>> 4 minutes), but why should this matter for minimal WAL level and not for\n>>> archive?\n>>\n>> I went through and looked at all the places where we invoke\n>> XLogIsNeeded(). When XLogIsNeeded(), we:\n>>\n>> 1. WAL log creation of the _init fork of an unlogged table or an index\n>> on an unlogged table (otherwise, an fsync is enough)\n>> 2. WAL log index builds\n>> 3. WAL log changes to max_connections, max_prepared_xacts,\n>> max_locks_per_xact, and/or wal_level\n>> 4. skip calling posix_fadvise(POSIX_FADV_DONTNEED) when closing a WAL file\n>> 5. skip supplying O_DIRECT when writing WAL, if wal_sync_method is\n>> open_sync or open_datasync\n>> 6. refuse to create named restore points\n>> 7. WAL log CLUSTER\n>> 8. WAL log COPY FROM into a newly created/truncated relation\n>> 9. WAL log ALTER TABLE .. SET TABLESPACE\n>> 9. WAL log cleanup info before doing an index vacuum (this one should\n>> probably be changed to happen only in HS mode)\n>> 10. WAL log SELECT INTO\n>>\n>> It's hard to see how generating more WAL could cause a performance\n>> improvement, unless there's something about full page flushes being\n>> more efficient than partial page flushes or something like that. But\n>> none of the stuff above looks likely to happen very often anyway. But\n>> items #4 and #5 on that list like things that could potentially be\n>> causing a problem - if WAL files are being reused regularly, then\n>> calling POSIX_FADV_DONTNEED on them could represent a regression. It\n>> might be worth compiling with POSIX_FADV_DONTNEED undefined and see\n>> whether that changes anything.\n> \n> it should be valuable to have the kernel version and also confirm the\n> same behavior happens with XFS.\n\nThe kernel is 3.1.5, more precisely the \"uname -a\" gives this:\n\nLinux rimmer 3.1.5-gentoo #1 SMP PREEMPT Sun Dec 25 14:11:19 CET 2011\nx86_64 Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz GenuineIntel GNU/Linux\n\nI plan to rerun the test with various settings, I'll add there XFS\nresults (so far everything was on EXT4) and I'll post an update to this\nthread.\n\nTmoas\n",
"msg_date": "Sat, 04 Feb 2012 17:20:04 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: wal_level=archive gives better performance than minimal\n - why?"
}
] |
[
{
"msg_contents": "Hi, \n\nyesterday I delete about 200 million rows of a table (about 150GB of data),\nafter delete completes the autovacuum process start.\n\nThe autovacuum is running for about 11 hours but no space is released\n\nAutovacuum parameters are with default values in postgresql.conf\n\n \n\nThe postgres version is 9.0.3\n\n \n\nThe pg activity reports:\n\nselect (now()-query_start) as duration, waiting, current_query from\npg_stat_activity where current_query ilike '%auto%'\n\n \n\n10:42:19.829 f \"autovacuum: VACUUM ANALYZE\npublic.myTable\"\n\n \n\n \n\nHow can I release the space used by deleted rows? Without block the table.\n\n \n\nThanks!\n\n \n\n \n\n \n\n\nHi, yesterday I delete about 200 million rows of a table (about 150GB of data), after delete completes the autovacuum process start.The autovacuum is running for about 11 hours but no space is releasedAutovacuum parameters are with default values in postgresql.conf The postgres version is 9.0.3 The pg activity reports:select (now()-query_start) as duration, waiting, current_query from pg_stat_activity where current_query ilike '%auto%' 10:42:19.829 f \"autovacuum: VACUUM ANALYZE public.myTable\" How can I release the space used by deleted rows? Without block the table. Thanks!",
"msg_date": "Fri, 13 Jan 2012 09:08:36 -0300",
"msg_from": "\"Anibal David Acosta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "auto vacuum, not working?"
},
{
"msg_contents": "Am 13.01.2012 13:08, schrieb Anibal David Acosta:\n>\n> Hi,\n>\n> yesterday I delete about 200 million rows of a table (about 150GB of \n> data), after delete completes the autovacuum process start.\n>\n> The autovacuum is running for about 11 hours but no space is released\n>\n> Autovacuum parameters are with default values in postgresql.conf\n>\n> The postgres version is 9.0.3\n>\n> The pg activity reports:\n>\n> select (now()-query_start) as duration, waiting, current_query from \n> pg_stat_activity where current_query ilike '%auto%'\n>\n> 10:42:19.829 f \"autovacuum: VACUUM ANALYZE \n> public.myTable\"\n>\n> How can I release the space used by deleted rows? Without block the table.\n>\n> Thanks!\n>\n\nvacuum does not reclaim space, just marks tuples dead. You need vacuum full.\n\n\n\n\n\n\n Am 13.01.2012 13:08, schrieb Anibal David Acosta:\n \n\n\n\n\nHi, \nyesterday I delete about 200 million rows\n of a table (about 150GB of data), after delete completes the\n autovacuum process start.\nThe autovacuum is running for about 11\n hours but no space is released\nAutovacuum parameters are with default\n values in postgresql.conf\n \nThe postgres version is 9.0.3\n \nThe pg activity reports:\nselect (now()-query_start) as duration,\n waiting, current_query from pg_stat_activity where\n current_query ilike '%auto%'\n \n10:42:19.829 f \n \"autovacuum: VACUUM ANALYZE public.myTable\"\n \n \nHow can I release the space used by deleted\n rows? Without block the table.\n \nThanks!\n \n \n \n\n\n\n vacuum does not reclaim space, just marks tuples dead. You need\n vacuum full.",
"msg_date": "Fri, 13 Jan 2012 15:26:08 +0100",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto vacuum, not working?"
},
{
"msg_contents": "Mario Weilguni <[email protected]> wrote:\n \n>> yesterday I delete about 200 million rows of a table\n \n>> How can I release the space used by deleted rows?\n>> Without block the table.\n \n> vacuum does not reclaim space, just marks tuples dead. You need\n> vacuum full.\n \nVACUUM FULL will lock the table, blocking all other access, and it\ncan run for quite a while. If you expect to be adding 200 million\nnew rows to the table in the foreseeable future, a regular VACUUM\n(or autovacuum) will make that space available for reuse by that\ntable. The space won't show in the file system; it will still be\nallocated to the database but available for new rows.\n \n-Kevin\n",
"msg_date": "Fri, 13 Jan 2012 08:50:21 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto vacuum, not working?"
},
{
"msg_contents": "On 01/13/2012 07:08 AM, Anibal David Acosta wrote:\n>\n> How can I release the space used by deleted rows? Without block the table.\n>\n>\n\nThe database can only reduce the size tables by returning space to the \noperating system in one situation: there is free space at the very end \nof the table. In that case, if it's possible to get a brief exclusive \nlock on the table, it can shrink in size.\n\nThere are some programs available that reorganize table for goals like \nthis, without having any long-lasting locks on the tables. pg_reorg is \nthe most popular example: http://pgfoundry.org/projects/reorg/\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n\n\n\n\n\n\n On 01/13/2012 07:08 AM, Anibal David Acosta wrote:\n \n\n\n\n\nHow can I release the space used by deleted\n rows? Without block the table.\n\n\n\n\n The database can only reduce the size tables by returning space to\n the operating system in one situation: there is free space at the\n very end of the table. In that case, if it's possible to get a\n brief exclusive lock on the table, it can shrink in size.\n\n There are some programs available that reorganize table for goals\n like this, without having any long-lasting locks on the tables. \n pg_reorg is the most popular example: \n http://pgfoundry.org/projects/reorg/\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com",
"msg_date": "Sun, 15 Jan 2012 07:09:50 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto vacuum, not working?"
}
] |
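A few queries that can help confirm what plain (auto)vacuum actually achieved before reaching for a blocking VACUUM FULL or an online reorganisation such as pg_reorg; 'mytable' is a placeholder for the real table name:

-- on-disk size, which a plain vacuum will usually not shrink
SELECT pg_size_pretty(pg_total_relation_size('mytable'));

-- dead vs. live tuples and the last (auto)vacuum runs
SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
 WHERE relname = 'mytable';

-- reclaims the space for the OS, but holds an exclusive lock while it runs
-- VACUUM FULL mytable;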
[
{
"msg_contents": "Hi,\n\nIs there a simple way (or a tool) to discover the most searched values \nin a field from a table ?\n\nIn the pg_stats, I can see the most common values generated by ANALYZE, \nbut I want to know how many queries are using this values. With this \ninformation and the other statistics, I want to create partial indexes \nor use table partitioning to create some benchmarks to speed up the \ndatabase access.\n\nBest regards.\n",
"msg_date": "Fri, 13 Jan 2012 16:08:59 -0200",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Discovering the most searched values for a field"
},
{
"msg_contents": "On 1/13/12 10:08 AM, alexandre - aldeia digital wrote:\n> Hi,\n> \n> Is there a simple way (or a tool) to discover the most searched values\n> in a field from a table ?\n> \n> In the pg_stats, I can see the most common values generated by ANALYZE,\n> but I want to know how many queries are using this values. With this\n> information and the other statistics, I want to create partial indexes\n> or use table partitioning to create some benchmarks to speed up the\n> database access.\n\nNo simple + fast way.\n\nThe way to do this is:\n\n1) log all queries\n2) load query log into a database\n3) filter to queries which only run against that table\n4) analyze queries for values against that column.\n\nFor (4), we've had the best luck with generating explain plans in XML\nand then digesting the XML to look for filter conditions. Finding\ncolumn matches by regex was a lot less successful.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 13 Jan 2012 11:08:18 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Discovering the most searched values for a field"
},
{
"msg_contents": "Em 13-01-2012 17:08, Josh Berkus escreveu:\n> On 1/13/12 10:08 AM, alexandre - aldeia digital wrote:\n>> Hi,\n>>\n>> Is there a simple way (or a tool) to discover the most searched values\n>> in a field from a table ?\n>>\n>> In the pg_stats, I can see the most common values generated by ANALYZE,\n>> but I want to know how many queries are using this values. With this\n>> information and the other statistics, I want to create partial indexes\n>> or use table partitioning to create some benchmarks to speed up the\n>> database access.\n>\n> No simple + fast way.\n>\n> The way to do this is:\n>\n> 1) log all queries\n> 2) load query log into a database\n> 3) filter to queries which only run against that table\n> 4) analyze queries for values against that column.\n>\n> For (4), we've had the best luck with generating explain plans in XML\n> and then digesting the XML to look for filter conditions. Finding\n> column matches by regex was a lot less successful.\n>\n\nThanks Josh ! I will try this. The only problem is the size of the LOGs. \nOne day with logs turned on generates 100 GB log file in the most of my \ncustomers...\n",
"msg_date": "Mon, 23 Jan 2012 15:26:47 -0200",
"msg_from": "alexandre - aldeia digital <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Discovering the most searched values for a field"
}
] |
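If the full query-log pipeline above is too heavy because of the log volume mentioned, the pg_stat_statements contrib module may give a rough first cut: on pre-9.2 servers it does not normalise query text, so frequently repeated literal values show up as separate, countable entries. It only sees whole statements rather than individual column values, so it is a shortcut, not a replacement for step 4 above. A sketch, with 'my_table' as a placeholder:

-- needs shared_preload_libraries = 'pg_stat_statements' and a restart;
-- CREATE EXTENSION works on 9.1+, older releases install the contrib SQL script
CREATE EXTENSION pg_stat_statements;

SELECT calls, query
  FROM pg_stat_statements
 WHERE query ILIKE '%my_table%'
 ORDER BY calls DESC
 LIMIT 50;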
[
{
"msg_contents": "Hi,\n\nwhen benchmarking insert , can there be caching effects?\ni insert, delete again, and insert again.\ndoes anything cache the things that i deleted?\n\n(postgres 8.4 on debian)\n\ncheers,\n\nWBL\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nHi,when benchmarking insert , can there be caching effects?i insert, delete again, and insert again.does anything cache the things that i deleted?(postgres 8.4 on debian)cheers,\nWBL-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Fri, 20 Jan 2012 16:36:35 +0100",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": true,
"msg_subject": "when benchmarking insert , can there be caching effects?"
},
{
"msg_contents": "Willy-Bas Loos <[email protected]> wrote:\n \n> when benchmarking insert , can there be caching effects?\n \nYes.\n \n> i insert, delete again, and insert again.\n> does anything cache the things that i deleted?\n \nYes.\n \nHow long you wait before another attempt could have a significant\neffect on timings. There are background processes writing dirty\nbuffers, checkpointing with fsync to disk, and vacuuming to clean up\nthose deleted rows. Until vacuum cleans them up, the deleted rows\nwill still have index entries which might be searched. Competition\nwith background processes could affect performance. You might have\na table with space already allocated versus needing to ask the OS to\nadd more space to it. You might be reusing WAL files versus\ncreating new ones. Etc.\n \nGetting good benchmarks is hard to do. For starters, you need to\ndecide how many of those things *should* be included in the\nbenchmark. Then you need to manage things to measure what should be\nmeasured.\n \n-Kevin\n",
"msg_date": "Fri, 20 Jan 2012 11:53:55 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: when benchmarking insert , can there be caching\n\t effects?"
}
] |
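To make repeated insert runs comparable, it can help to reset as much of the state listed above as possible between iterations; a minimal sketch, with bench_t standing in for the benchmark table:

TRUNCATE bench_t;        -- unlike DELETE, leaves no dead tuples or stale index entries behind
VACUUM ANALYZE bench_t;  -- refresh statistics and the free space map
CHECKPOINT;              -- flush dirty buffers so each run starts from a similar point (superuser)
SELECT pg_sleep(60);     -- give the background writer and the OS cache a moment to settle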
[
{
"msg_contents": "Hi,\n\nI'm working on a benchmark that demonstrates the effects of moving\ntables or indexes to separate devices (SSD and HDD), and one thing that\nreally caught my eye are spikes in the tps charts. See this:\n\n http://www.fuzzy.cz/tmp/data-indexes/indexes.html\n\nThe first one is a database with data on an SSD and indexes on 7.2k HDD.\nAround 2:00, the performance significantly grows (over 4k tps) and then\nfalls to about 500 tps (which is maintained for the remainder of the\nbenchmark).\n\nI've seen similar spikes on HDD (both data and indexes on the same\ndevice) - that's the second chart. The difference is not that huge, but\nthe spike at around 6:00 is noticeable.\n\nInterestingly, by separating the data and indexes to two 7.2k drives,\nthe spike disappears - that's the third chart.\n\nAny ideas why this happens? Is this a pgbench-only anomaly that does not\nhappen in real-world scenarios?\n\nMy theory is that it's related to the strategy that chooses what to keep\nin shared_buffers (or page cache), and that somehow does not work too\nwell in this case.\n\nregards\nTomas\n",
"msg_date": "Mon, 23 Jan 2012 00:17:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "spikes in pgbench read-only results"
},
{
"msg_contents": "2012/1/22 Tomas Vondra <[email protected]>:\n> Hi,\n>\n> I'm working on a benchmark that demonstrates the effects of moving\n> tables or indexes to separate devices (SSD and HDD), and one thing that\n> really caught my eye are spikes in the tps charts. See this:\n>\n> http://www.fuzzy.cz/tmp/data-indexes/indexes.html\n>\n> The first one is a database with data on an SSD and indexes on 7.2k HDD.\n> Around 2:00, the performance significantly grows (over 4k tps) and then\n> falls to about 500 tps (which is maintained for the remainder of the\n> benchmark).\n>\n> I've seen similar spikes on HDD (both data and indexes on the same\n> device) - that's the second chart. The difference is not that huge, but\n> the spike at around 6:00 is noticeable.\n>\n> Interestingly, by separating the data and indexes to two 7.2k drives,\n> the spike disappears - that's the third chart.\n>\n> Any ideas why this happens? Is this a pgbench-only anomaly that does not\n> happen in real-world scenarios?\n>\n> My theory is that it's related to the strategy that chooses what to keep\n> in shared_buffers (or page cache), and that somehow does not work too\n> well in this case.\n\nISTM the spike is coming from luck such that all data is read from\neither the SSD or ram. Neither the o/s or pg are smart enough to try\nand push all the buffering over the spinning disk so you are going to\nsee some anomalies coming from the reading patterns of the test -- you\nare mainly measuring the %iops that are getting send to data vs index.\n\nI bet if the index and data both were moved to the ssd you'd see no\npronounced spike just as they are when both are on hdd.\n\nFor huge, high traffic, mostly read workloads that are not cost bound\non storage, ssd is a no-brainer.\n\nmerlin\n",
"msg_date": "Mon, 23 Jan 2012 11:03:17 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: spikes in pgbench read-only results"
},
{
"msg_contents": "On 23 Leden 2012, 18:03, Merlin Moncure wrote:\n> 2012/1/22 Tomas Vondra <[email protected]>:\n>> Hi,\n>>\n>> I'm working on a benchmark that demonstrates the effects of moving\n>> tables or indexes to separate devices (SSD and HDD), and one thing that\n>> really caught my eye are spikes in the tps charts. See this:\n>>\n>> http://www.fuzzy.cz/tmp/data-indexes/indexes.html\n>>\n>> The first one is a database with data on an SSD and indexes on 7.2k HDD.\n>> Around 2:00, the performance significantly grows (over 4k tps) and then\n>> falls to about 500 tps (which is maintained for the remainder of the\n>> benchmark).\n>>\n>> I've seen similar spikes on HDD (both data and indexes on the same\n>> device) - that's the second chart. The difference is not that huge, but\n>> the spike at around 6:00 is noticeable.\n>>\n>> Interestingly, by separating the data and indexes to two 7.2k drives,\n>> the spike disappears - that's the third chart.\n>>\n>> Any ideas why this happens? Is this a pgbench-only anomaly that does not\n>> happen in real-world scenarios?\n>>\n>> My theory is that it's related to the strategy that chooses what to keep\n>> in shared_buffers (or page cache), and that somehow does not work too\n>> well in this case.\n>\n> ISTM the spike is coming from luck such that all data is read from\n> either the SSD or ram. Neither the o/s or pg are smart enough to try\n> and push all the buffering over the spinning disk so you are going to\n> see some anomalies coming from the reading patterns of the test -- you\n> are mainly measuring the %iops that are getting send to data vs index.\n\nNot sure what you mean. Could you describe that a bit more thoroughly?\n\n> I bet if the index and data both were moved to the ssd you'd see no\n> pronounced spike just as they are when both are on hdd.\n\nNo, the spike is there. It's not as apparent as with the HDD drives, but\nit's there.\n\nboth on the same HDD => spike (img. 2)\nboth on the same SSD => spike\ntwo HDDs => no spike (img. 3)\ntwo SSDs => no spike\ndata on HDD, index on SSD => no spike\ndata on SSD, index on HDD => spike (img. 1)\n\nI've realized I haven't set the random_page_cost for the index tablespace\nproperly, but if that would be the cause it would not affect the \"both on\nthe same HDD\" case.\n\n> For huge, high traffic, mostly read workloads that are not cost bound\n> on storage, ssd is a no-brainer.\n\nYes, sure. No doubt about that, unless the workload consists mostly of\nhuge seq scans ...\n\nTomas\n\n",
"msg_date": "Mon, 23 Jan 2012 18:39:14 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: spikes in pgbench read-only results"
}
] |
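Related to the random_page_cost remark above: when indexes and data sit on devices with very different random-I/O behaviour, the per-tablespace cost options (available since 9.0) let the planner know about it. A sketch with made-up mount points and the standard pgbench index name:

CREATE TABLESPACE ssd_space LOCATION '/mnt/ssd/pgdata';
CREATE TABLESPACE hdd_space LOCATION '/mnt/hdd/pgdata';

-- random reads are nearly as cheap as sequential ones on the SSD
ALTER TABLESPACE ssd_space SET (random_page_cost = 1.5, seq_page_cost = 1.0);

-- move an index without touching the table itself
ALTER INDEX pgbench_accounts_pkey SET TABLESPACE ssd_space;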
[
{
"msg_contents": "Hi folks\n\nThis could be a sheer volume issue, but I though I would ask the wisdom of\nthis forum as to next investigative steps.\n\n----\n\nWe use PostgreSQL 8.4.4 which is bundled with our application as a VMware\nvirtual appliance. The bulk of the app's database activity is recording\nperformance data points which arrive in farily large sustained bursts of\nperhaps 10,000 rows a minute at a medium sized customer, each of which are\nlogically separate items and being committed as individual transactions\n(JDBC auto-commit mode). Our offshore QA team was assigned to track an\nintermittent issue with speed of some large queries on other tables, and\nthey believe based on correlation the two activities may be contending.\n\nThe large query is coming off of different tables from the ones being\nwritten to ... the raw data goes into a table named by day (partitioning is\nall within the app, not PG) e.g. PERF_RAW_2012_01_24 and then there are a\nbunch of rollup statements which run hourly to do the aggregations, e.g.\n\ninsert into PERF_HOURLY_2012_01_24 select key_columns, avg(data), now()\nfrom perf_raw_2012_01_24 where time_stamp between (now() - interval '1\nhour') and now() group by key_columns\n\nThe big queries are hitting multiple of the PERF_HOURLY tables and pulling\na few dozen rows from each.\n\nWe are using a 64-bit VM with 8 virtual cores and 8GB RAM, of which Java\ntakes a bit over half, and Linux XXXXX with CentOS 5.x .... PG has 1GB of\nbuffer cache and reasonable (AFAICT) resource limits for everything else,\nwhich are intended to be workable for a range of client sizes out of the\nbox. True transactional consistency is disabled for performance reasons,\nvirtual environments do not take kindly to lots of small writes.\n\n---\n\nIs there any tweaking we should do on the PG settings, or on the pattern in\nwhich the app is writing - we currently use 10 writer threads on the Java\nside and they keep PG going pretty good.\n\nI considered bundling the writes into larger transactions, will that really\nhelp much with commit consistency off?\n\nIs there some specific \"usual suspect\" stuff I should look at on the PG\nside to look for efficiency issues such as index lock contention or a poor\nbuffer cache hit ratio? Will doing EXPLAIN ANALYSE on the big query be\ninformative, and if so, does it need to be done while the write load is\napplied?\n\nThe other whacky idea I had was to have the writer threads pause or\nthrottle themselves when a big query is happening (it's all in one JVM and\nwe are using a connection pooler, so it's easy to intercept and track if\nneeded) however that strikes me as a rather ugly hack and I'd prefer to do\nsomething more robust and based on config tweaks that leverage existing\nresource management in PG.\n\nRelevant schema and config attached, all comments and advice welcome,\nincluding general tuning tips and rationale for moving to PG 9.x .... I'm\nwell aware this isn't the acme of PG tuning :)\n\nCheers\nDave",
"msg_date": "Tue, 24 Jan 2012 14:16:19 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can lots of small writes badly hamper reads from other tables?"
},
{
"msg_contents": "On 1/24/2012 2:16 PM, Dave Crooke wrote:\n> Hi folks\n>\n> This could be a sheer volume issue, but I though I would ask the wisdom\n> of this forum as to next investigative steps.\n>\n> ----\n>\n> We use PostgreSQL 8.4.4 which is bundled with our application as a\n> VMware virtual appliance. The bulk of the app's database activity is\n> recording performance data points which arrive in farily large sustained\n> bursts of perhaps 10,000 rows a minute at a medium sized customer, each\n> of which are logically separate items and being committed as individual\n> transactions (JDBC auto-commit mode). Our offshore QA team was assigned\n> to track an intermittent issue with speed of some large queries on other\n> tables, and they believe based on correlation the two activities may be\n> contending.\n\nYou have 10 connections, all doing:\n\nbegin\ninsert into PERF_RAW_2012_01_24.... -- one record\ncommit\n\n\nIf that's what you're doing, yes, I'd say that's the slowest way possible.\n\nDoing this would be faster:\n\nbegin\ninsert into PERF_RAW_2012_01_24.... -- one record\ninsert into PERF_RAW_2012_01_24.... -- one record\n...\ninsert into PERF_RAW_2012_01_24.... -- one record\ncommit\n\nDoing this would be even faster:\n\n\nbegin\n-- one insert, multiple rows\ninsert into PERF_RAW_2012_01_24 values (...) (...) (...) ... (...);\ninsert into PERF_RAW_2012_01_24 values (...) (...) (...) ... (...);\ncommit\n\nAnd, fastest of all fastest, use COPY. But be careful, its so fast \nit'll melt your face off :-)\n\n\nI didnt even bother trying to pick out the uncommented settings from \nyour .conf file. Way to much work.\n\nVM usually have pretty slow IO, so you might wanna watch vmstat and \niostat to see if you are IO bound or CPU bound.\n\nAlso watching iostat before and after the change might be interesting.\n\nIf you you keep having lots and lots of transaction, look into \ncommit_delay, it'll help batch commits out to disk (if I remember \ncorrectly).\n\n-Andy\n",
"msg_date": "Tue, 24 Jan 2012 15:06:41 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can lots of small writes badly hamper reads from other\n tables?"
},
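For reference, a runnable version of the batching Andy sketches (note the commas between row tuples); the column names and the CSV file are assumptions, since the real schema was only attached to the original post:

    BEGIN;
    -- one multi-row INSERT instead of many single-row autocommit statements
    INSERT INTO perf_raw_2012_01_24 (key_col, data_value, time_stamp) VALUES
        (1, 42.0, now()),
        (2, 17.5, now()),
        (3, 99.1, now());
    COMMIT;

    -- or bulk-load with COPY; shown here as psql's client-side \copy,
    -- reading a hypothetical CSV file
    \copy perf_raw_2012_01_24 (key_col, data_value, time_stamp) from 'datapoints.csv' with csv

COPY removes the per-row parse, plan and commit overhead entirely, which is why Andy calls it the fastest option.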
{
"msg_contents": "On 24 Leden 2012, 21:16, Dave Crooke wrote:\n> Hi folks\n>\n> This could be a sheer volume issue, but I though I would ask the wisdom of\n> this forum as to next investigative steps.\n>\n> ----\n>\n> We use PostgreSQL 8.4.4 which is bundled with our application as a VMware\n> virtual appliance. The bulk of the app's database activity is recording\n> performance data points which arrive in farily large sustained bursts of\n> perhaps 10,000 rows a minute at a medium sized customer, each of which are\n> logically separate items and being committed as individual transactions\n> (JDBC auto-commit mode). Our offshore QA team was assigned to track an\n> intermittent issue with speed of some large queries on other tables, and\n> they believe based on correlation the two activities may be contending.\n>\n> The large query is coming off of different tables from the ones being\n> written to ... the raw data goes into a table named by day (partitioning\n> is\n> all within the app, not PG) e.g. PERF_RAW_2012_01_24 and then there are a\n> bunch of rollup statements which run hourly to do the aggregations, e.g.\n\nEach storage device has some basic I/O limits - sequential speed (read/write)\nand the maximum number or I/O operations it can handle. For example a 7.2k\ndrives can do up to 160MB/s sequential reads/writes, but not more than 120\nI/O ops per second. Similarly for other devices - 15k drives can do up to\n250 I/Os. SSDs can handle much more I/Os, e.g. Intel 320 can handle about\n8k I/Os.\n\nI have no idea what kind of storage device you're using and what amount of\nsequential and random operations it can handle. But my guess you're hitting\nthe limit of random I/Os - each commit requires a fsync, and you're doing\n10.000 of them per minute, i.e. about 160 per second. If the queries need\nto read data from the drive (e.g. randomly), this just adds more I/Os.\n\n> Is there any tweaking we should do on the PG settings, or on the pattern\n> in\n> which the app is writing - we currently use 10 writer threads on the Java\n> side and they keep PG going pretty good.\n\nThe first thing you should do is grouping the inserts to one transaction.\nThat'll lower the number of I/Os the database needs to do. Besides that,\nyou can move the WAL to a separate (physical) device, thus spreading the\nI/Os to more drives.\n\n> I considered bundling the writes into larger transactions, will that\n> really\n> help much with commit consistency off?\n\nWhat do you mean by \"commit consistency off\"?\n\n> Is there some specific \"usual suspect\" stuff I should look at on the PG\n> side to look for efficiency issues such as index lock contention or a poor\n> buffer cache hit ratio? Will doing EXPLAIN ANALYSE on the big query be\n> informative, and if so, does it need to be done while the write load is\n> applied?\n\nThe first thing you should do is gathering some basic I/O stats.\n\nRun pg_test_fsync (a contrib module) to see how many fsync operations the\nI/O subsystem can handle (if it reports more than 500, use \"-o\" to get it\nrunning for a longer time).\n\nThen gather \"vmstat 1\" and \"iostat -x 1\" for a few seconds when the workload\n(inserts and queries) are actually running. That should tell you how the\ndrives are actually utilized.\n\nPost these results to this list.\n\n> Relevant schema and config attached, all comments and advice welcome,\n> including general tuning tips and rationale for moving to PG 9.x .... 
I'm\n> well aware this isn't the acme of PG tuning :)\n\nThere's a nice page about tuning at the wiki:\n\n http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nI'd recommend significantly increasing the number of checkpoint segments,\ne.g. to 64 (1GB) and setting completion target to 0.9. This usually helps\nwrite-heavy workloads. And enable log_checkpoints.\n\nTomas\n\n",
"msg_date": "Tue, 24 Jan 2012 22:09:06 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can lots of small writes badly hamper reads from other\n tables?"
},
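In postgresql.conf terms, the checkpoint suggestions above amount to roughly the following (values taken straight from the message; treat them as starting points, not tuned settings):

    checkpoint_segments = 64            # roughly 1GB of WAL between checkpoints
    checkpoint_completion_target = 0.9  # spread checkpoint writes over more time
    log_checkpoints = on                # make checkpoint impact visible in the server log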
{
"msg_contents": "Hi guys\n\nThanks for the quick followups folks .... please note I am deliberately\nrunning a setup without commit guarantees, so a lot of the conventional\nadvice about not doing small writes isn't applicable, but I do want to\nunderstand more about how this affects PG internals even if the I/O is\nsmoothed out.\n\nBy \"commit consistency off\" I am referring to the setting\n\"synchronous_commit = off\" in postgresql.conf .... IIRC this should mean\nsemantically that a DB crash may lose data that was reported back to the\napp as successfully committed, but will result in a consistent state on\nreboot and recovery. In this case IIUC the \"120 commits per second per\ndrive\" limit does not apply, and I hope the advice about testing fsync is\nsimilarly not applicable to my case. Also, IIUC that settings like\ncommit_siblings and commit_delay should be ignored by PG in my case.\n\nI would be interested in learning what the **in-memory** constraints and\ncosts are on the PG server side of doing a lot of small commits when sync\nwrties are *off*, e.g. the implications for the locking system, and whether\nthis can affect the long queries on the other tables apart from general\nresource contention.\n\nThe pattern of lots of tiny transactions is semantically correct for the\napp, and I am using a JDBC prepared statement on the Java side, which I\nbelieve the PG driver will turn in to a pre-compiled statement with enough\nuses (it does NOT do so on the first few hits). This should in theory be\neven cheaper than a multiple INSERT VALUES which is all text and has to be\nparsed.\n\nHowever, if necessary for performance I can bundle the inserts into\nslightly larger transactions - cases where writes fail are due only to\noccasional duplicates (same primary key) coming from upstream and are\npretty rare, and in practice losing a batch of say 100 of these records\noccasionally is not a big deal in my world (ignoring sound of cringing DBAs\n:) so I could afford to bundle into transactions and then just drop a whole\nbundle if any single write has a primary key collision.\n\nStorage setup varies by customer, but a typical setup is to take RAID\ngroups of about 5-10TB each net from something like an EMC Clariion and\nslice each group into 1TB LUNs which become VMWare datastores, which are\nwritten simultaneously from multiple hosts. A mid-size Clariion would host\nperhaps 50-100 of these small LUNs, and a customer running a high\nperformance environment might have Fibrechannel disks and RAID-10, but SATA\nand RAID-5/6 would also be normal, albeit with a substantial write-back\ncache (maybe 1GB, IIRC a current Clariion SP has 4GB total). Each file on\nthe datastore corresponds to a virtual disk on a VM, and the datastore is\nformatted with VMFS (concurrent writer filesystem, uses SCSI locking to\ncontrol access to block allocation and directory entries).\n\nThe other type of VMWare datastore works at the filesystem layer - instead\nof a shared SAN with iSCSI / FC-AL, the VMware hosts are all pointed at a\nshared NFS server directory. NetApp is the popular back end for this\nconfiguration.\n\nOn top of this virtualization, I have PG laid out on two virtual disks -\nWAL and log files are on the main system partition, index and table data on\na second partition. 
Both formatted with ext3fs.\n\nOne of my larger customers had his SAN guy complain to him that our app was\nwriting more data to the NetApp it was on than every other app combined, so\nI am mindful of the volume being more than some of these systems were\nplanned for :)\n\nCheers\nDave\n\nOn Tue, Jan 24, 2012 at 3:09 PM, Tomas Vondra <[email protected]> wrote:\n\n> On 24 Leden 2012, 21:16, Dave Crooke wrote:\n> > Hi folks\n> >\n> > This could be a sheer volume issue, but I though I would ask the wisdom\n> of\n> > this forum as to next investigative steps.\n> >\n> > ----\n> >\n> > We use PostgreSQL 8.4.4 which is bundled with our application as a VMware\n> > virtual appliance. The bulk of the app's database activity is recording\n> > performance data points which arrive in farily large sustained bursts of\n> > perhaps 10,000 rows a minute at a medium sized customer, each of which\n> are\n> > logically separate items and being committed as individual transactions\n> > (JDBC auto-commit mode). Our offshore QA team was assigned to track an\n> > intermittent issue with speed of some large queries on other tables, and\n> > they believe based on correlation the two activities may be contending.\n> >\n> > The large query is coming off of different tables from the ones being\n> > written to ... the raw data goes into a table named by day (partitioning\n> > is\n> > all within the app, not PG) e.g. PERF_RAW_2012_01_24 and then there are a\n> > bunch of rollup statements which run hourly to do the aggregations, e.g.\n>\n> Each storage device has some basic I/O limits - sequential speed\n> (read/write)\n> and the maximum number or I/O operations it can handle. For example a 7.2k\n> drives can do up to 160MB/s sequential reads/writes, but not more than 120\n> I/O ops per second. Similarly for other devices - 15k drives can do up to\n> 250 I/Os. SSDs can handle much more I/Os, e.g. Intel 320 can handle about\n> 8k I/Os.\n>\n> I have no idea what kind of storage device you're using and what amount of\n> sequential and random operations it can handle. But my guess you're hitting\n> the limit of random I/Os - each commit requires a fsync, and you're doing\n> 10.000 of them per minute, i.e. about 160 per second. If the queries need\n> to read data from the drive (e.g. randomly), this just adds more I/Os.\n>\n> > Is there any tweaking we should do on the PG settings, or on the pattern\n> > in\n> > which the app is writing - we currently use 10 writer threads on the Java\n> > side and they keep PG going pretty good.\n>\n> The first thing you should do is grouping the inserts to one transaction.\n> That'll lower the number of I/Os the database needs to do. Besides that,\n> you can move the WAL to a separate (physical) device, thus spreading the\n> I/Os to more drives.\n>\n> > I considered bundling the writes into larger transactions, will that\n> > really\n> > help much with commit consistency off?\n>\n> What do you mean by \"commit consistency off\"?\n>\n> > Is there some specific \"usual suspect\" stuff I should look at on the PG\n> > side to look for efficiency issues such as index lock contention or a\n> poor\n> > buffer cache hit ratio? 
Will doing EXPLAIN ANALYSE on the big query be\n> > informative, and if so, does it need to be done while the write load is\n> > applied?\n>\n> The first thing you should do is gathering some basic I/O stats.\n>\n> Run pg_test_fsync (a contrib module) to see how many fsync operations the\n> I/O subsystem can handle (if it reports more than 500, use \"-o\" to get it\n> running for a longer time).\n>\n> Then gather \"vmstat 1\" and \"iostat -x 1\" for a few seconds when the\n> workload\n> (inserts and queries) are actually running. That should tell you how the\n> drives are actually utilized.\n>\n> Post these results to this list.\n>\n> > Relevant schema and config attached, all comments and advice welcome,\n> > including general tuning tips and rationale for moving to PG 9.x .... I'm\n> > well aware this isn't the acme of PG tuning :)\n>\n> There's a nice page about tuning at the wiki:\n>\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n> I'd recommend significantly increasing the number of checkpoint segments,\n> e.g. to 64 (1GB) and setting completion target to 0.9. This usually helps\n> write-heavy workloads. And enable log_checkpoints.\n>\n> Tomas\n>\n>\n",
"msg_date": "Tue, 24 Jan 2012 15:36:36 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can lots of small writes badly hamper reads from other tables?"
},
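A minimal sketch of the "drop the whole bundle on a collision" behaviour Dave describes; no special handling is needed, since an error anywhere in the transaction aborts all of it (table and column names are assumed, as in the earlier sketch, and key_col is assumed to be the primary key):

    BEGIN;
    INSERT INTO perf_raw_2012_01_24 (key_col, data_value, time_stamp) VALUES (101, 3.14, now());
    -- ... roughly a hundred more rows ...
    INSERT INTO perf_raw_2012_01_24 (key_col, data_value, time_stamp) VALUES (101, 2.72, now());  -- duplicate key
    ROLLBACK;  -- after the error the whole bundle is discarded; a COMMIT here would roll back too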
{
"msg_contents": "On 24.1.2012 22:36, Dave Crooke wrote:\n> Hi guys\n>\n> Thanks for the quick followups folks .... please note I am deliberately\n> running a setup without commit guarantees, so a lot of the conventional\n> advice about not doing small writes isn't applicable, but I do want to\n> understand more about how this affects PG internals even if the I/O is\n> smoothed out.\n>\n> By \"commit consistency off\" I am referring to the setting\n> \"synchronous_commit = off\" in postgresql.conf .... IIRC this should mean\n> semantically that a DB crash may lose data that was reported back to the\n> app as successfully committed, but will result in a consistent state on\n> reboot and recovery. In this case IIUC the \"120 commits per second per\n> drive\" limit does not apply, and I hope the advice about testing fsync\n> is similarly not applicable to my case. Also, IIUC that settings like\n> commit_siblings and commit_delay should be ignored by PG in my case.\nOh, I haven't noticed the synchronous_commit=off bit. You're right about\nthe consistency guarantees (possibility of lost transactions but no\ncorruption).\n\nIIRC the async commit issues fsync for each commit, but does not wait\nfor it to finish. The question is whether this improves the way the I/O\nis used or not. That's difficult to answer without more detailed info\n(vmstat/iostat).\n\nIn some cases this may actually hammer the system even worse, killing\nthe performance, because you're removing the \"wait time\" so the INSERT\nprocesses are submitting more fsync operations than it can handle.\n\nThere are cases when this may actually improve the I/O utilization (e.g.\nwhen there's a lot of drives in RAID).\n\nYou need to watch the drive and CPU stats to identify the causes. Is it\nCPU bound (100% cpu utilization)? Is it I/O bound (drives 100% utilized)?\n\nMoreover, it's not just about the fsync operations. If there are\nconstraints that need to be checked (e.g. foreign keys, unique\nconstrains etc.), that may cause additional I/O operations.\n\nMaybe you could get better results with commit_delay/commit_siblings.\nThat effectively groups commits into a single fsync operation. (Which\nsynchronous_commit=off does not do IIRC).\n\nI've seen really good results with large amounts of concurrent clients.\nHow many of those \"insert\" processes are there?\n\n> I would be interested in learning what the **in-memory** constraints and\n> costs are on the PG server side of doing a lot of small commits when\n> sync wrties are _off_, e.g. the implications for the locking system, and\n> whether this can affect the long queries on the other tables apart from\n> general resource contention.\nI really doubt this is the case. If you're interested in watching these\nissues, set up a pgbench database with small scaling factor (so that the\nDB fits into memory) and maybe set fsync=off. Then you'll be able to\nobserve the locking issues etc.\n\nBut this all is just a hypothesis, and my suggestion is that you really\nverify if before trying to fix it - if the bottleneck really is inside\nPostgreSQL (locking or whatever).\n\nEliminate all the other usual bottlenecks first - I/O and CPU. Show us\nsome stats, e.g. vmstat, iostat etc.\n\n> The pattern of lots of tiny transactions is semantically correct for the\n> app, and I am using a JDBC prepared statement on the Java side, which I\n> believe the PG driver will turn in to a pre-compiled statement with\n> enough uses (it does NOT do so on the first few hits). 
This should in\n> theory be even cheaper than a multiple INSERT VALUES which is all text\n> and has to be parsed.\n>\n> However, if necessary for performance I can bundle the inserts into\n> slightly larger transactions - cases where writes fail are due only to\n> occasional duplicates (same primary key) coming from upstream and are\n> pretty rare, and in practice losing a batch of say 100 of these records\n> occasionally is not a big deal in my world (ignoring sound of cringing\n> DBAs so I could afford to bundle into transactions and then just drop\n> a whole bundle if any single write has a primary key collision.\nIf it's semantically correct, let's try to keep it that way.\n\n> Storage setup varies by customer, but a typical setup is to take RAID\n> groups of about 5-10TB each net from something like an EMC Clariion and\n> slice each group into 1TB LUNs which become VMWare datastores, which are\n> written simultaneously from multiple hosts. A mid-size Clariion would\n> host perhaps 50-100 of these small LUNs, and a customer running a high\n> performance environment might have Fibrechannel disks and RAID-10, but\n> SATA and RAID-5/6 would also be normal, albeit with a substantial\n> write-back cache (maybe 1GB, IIRC a current Clariion SP has 4GB total).\n> Each file on the datastore corresponds to a virtual disk on a VM, and\n> the datastore is formatted with VMFS (concurrent writer filesystem, uses\n> SCSI locking to control access to block allocation and directory entries).\n>\n> The other type of VMWare datastore works at the filesystem layer -\n> instead of a shared SAN with iSCSI / FC-AL, the VMware hosts are all\n> pointed at a shared NFS server directory. NetApp is the popular back end\n> for this configuration.\nHmmmm, tuning such workloads is usually tightly bound to the I/O layout.\nWhat works great on one setup is going to fail miserably on another one.\n\nEspecially RAID-5/6 are well known to suck at write-intensive workloads.\nThe usual tuning advice in this case is \"OMG, get rid of RAID-5/6!\"\n\nYou really need to gather some data from each setup, see where's the\nbottleneck and fix it. It might be in a different place for each setup.\n\nYou need to see \"inside\" the storage, not just the first level. One\nfairly frequent mistake (and I've done that repeatedly) is the belief\nthat when iostat tell's you a device is 100% utilized it can't handle\nmore I/Os.\n\nWith a RAID array that's not true - what matters is how the individual\ndevices are used, not the virtual device on top of them. Consider for\nexample a RAID-1 with two drives. The array may report 100% utilization\nbut the devices are in fact 50% utilized because half of the requests is\nhanded to the first device, the other half to the second one.\n\n> On top of this virtualization, I have PG laid out on two virtual disks -\n> WAL and log files are on the main system partition, index and table data\n> on a second partition. Both formatted with ext3fs.\nOne suggestion - try to increase the effective_io_concurrency. There're\nsome recommendations here\n\n http://www.postgresql.org/docs/8.4/static/runtime-config-resource.html\n\nUse the number of drives as a starting point or experiment a bit.\n\nAnd increase the checkpoint parameters as I recommended before. 
You may\neven increase the checkpoint timeout - that may significantly lower the\namount of data that's written during checkpoints.\n\n> One of my larger customers had his SAN guy complain to him that our app\n> was writing more data to the NetApp it was on than every other app\n> combined, so I am mindful of the volume being more than some of these\n> systems were planned for\nI'm not familiar with NetApp - AFAIK they use RAID-DP which is a somehow\nimproved version of RAID 4, that should perform better than RAID 6. But\nin my experience these claims usually miss the \"for most workloads\" part.\n\ncheers\nTomas\n\n",
"msg_date": "Wed, 25 Jan 2012 00:46:01 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can lots of small writes badly hamper reads from other\n tables?"
},
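The session-level knobs mentioned above can be tried directly from psql; the values below are purely illustrative starting points:

    SET commit_delay = 1000;           -- microseconds; only matters when commits are synchronous
    SET commit_siblings = 5;           -- delay only if at least this many other transactions are active
    SET effective_io_concurrency = 4;  -- e.g. the number of drives backing the tablespace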
{
"msg_contents": "On Tue, Jan 24, 2012 at 12:16 PM, Dave Crooke <[email protected]> wrote:\n> Hi folks\n>\n> This could be a sheer volume issue, but I though I would ask the wisdom of\n> this forum as to next investigative steps.\n\nTo answers the question in your subject, yes. If the disk head is\npositioned to write in one place, it can't be reading from some other\nplace. The various levels of caches and re-ordering and other tricks\ncan improve the situation, but they have a finite capacity to do so.\n\n> We use PostgreSQL 8.4.4 which is bundled with our application as a VMware\n> virtual appliance. The bulk of the app's database activity is recording\n> performance data points which arrive in farily large sustained bursts of\n> perhaps 10,000 rows a minute at a medium sized customer, each of which are\n> logically separate items and being committed as individual transactions\n> (JDBC auto-commit mode). Our offshore QA team was assigned to track an\n> intermittent issue with speed of some large queries on other tables, and\n> they believe based on correlation the two activities may be contending.\n>\n> The large query is coming off of different tables from the ones being\n> written to ... the raw data goes into a table named by day (partitioning is\n> all within the app, not PG) e.g. PERF_RAW_2012_01_24 and then there are a\n> bunch of rollup statements which run hourly to do the aggregations, e.g.\n\nIn your attached schema there are two perf_raw tables, and they have\ndifferent sets of indexes on them.\nWhich set is in operation during the inserts?\n\n\n> insert into PERF_HOURLY_2012_01_24 select key_columns, avg(data), now() from\n> perf_raw_2012_01_24 where time_stamp between (now() - interval '1 hour') and\n> now() group by key_columns\n>\n> The big queries are hitting multiple of the PERF_HOURLY tables and pulling a\n> few dozen rows from each.\n\nHow big are they those big queries, really? A few dozen tables times\na few dozen rows?\n\n...\n>\n> Is there any tweaking we should do on the PG settings, or on the pattern in\n> which the app is writing - we currently use 10 writer threads on the Java\n> side and they keep PG going pretty good.\n\nDo you need 10 writer threads? What happens if you use fewer?\n\n>\n> I considered bundling the writes into larger transactions, will that really\n> help much with commit consistency off?\n\nWith synchronous_commit=off, I wouldn't expect the transaction\nstructure to make much difference. Especially not if the target of\nthe mass inserts is indexed.\n\n> Is there some specific \"usual suspect\" stuff I should look at on the PG side\n> to look for efficiency issues such as index lock contention or a poor buffer\n> cache hit ratio? Will doing EXPLAIN ANALYSE on the big query be informative,\n> and if so, does it need to be done while the write load is applied?\n\nEXPLAIN would probably help, EXPLAIN ANALYSE while the problem is in\naction would help more.\n\nEven better would be to see where the queries are blocking during the\nproblem, but there is no easy way to get that in postgres. 
I'd strace\n-ttt -T the query process (although the mere act of stracing it can\nslow it down enough to relieve the bottleneck you are trying to\nidentify)\n\n>\n> The other whacky idea I had was to have the writer threads pause or throttle\n> themselves when a big query is happening (it's all in one JVM and we are\n> using a connection pooler, so it's easy to intercept and track if needed)\n> however that strikes me as a rather ugly hack and I'd prefer to do something\n> more robust and based on config tweaks that leverage existing resource\n> management in PG.\n\nWhy not just always throttle them? If you slam the data in as fast as\npossible during brief bursts, you are probably just setting yourself\nup for this type of issue. (The brief bursts can be useful if they\nmake better use of cache, but then you have to accept that other\nthings will be disrupted during those bursts.)\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 25 Jan 2012 08:19:21 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can lots of small writes badly hamper reads from other tables?"
}
] |
[
{
"msg_contents": "We are migrating our Oracle warehouse to Postgres 9.\n\nThis function responds well:\n\npg=# select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n getmemberadminprevious_sp2 \n----------------------------\n <unnamed portal 1>\n(1 row)\n\nTime: 7.549 ms\n\nHowever, when testing, this fetch takes upwards of 38 minutes:\n\nBEGIN;\nselect public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\nFETCH ALL IN \"<unnamed portal 2>\";\n\nHow can I diagnose any performance issues with the fetch in the cursor?\n\nThanks.\nTony\n\n",
"msg_date": "Tue, 24 Jan 2012 15:41:40 -0500",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cursor fetch performance issue"
},
{
"msg_contents": "Hello\n\n2012/1/24 Tony Capobianco <[email protected]>:\n> We are migrating our Oracle warehouse to Postgres 9.\n>\n> This function responds well:\n>\n> pg=# select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n> getmemberadminprevious_sp2\n> ----------------------------\n> <unnamed portal 1>\n> (1 row)\n>\n> Time: 7.549 ms\n>\n> However, when testing, this fetch takes upwards of 38 minutes:\n>\n> BEGIN;\n> select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n> FETCH ALL IN \"<unnamed portal 2>\";\n>\n> How can I diagnose any performance issues with the fetch in the cursor?\n>\n\nCursors are optimized to returns small subset of result - if you plan\nto read complete result, then set\n\nset cursor_tuple_fraction to 1.0;\n\nthis is session config value, you can set it before selected cursors queries\n\nRegards\n\nPavel Stehule\n\n> Thanks.\n> Tony\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 24 Jan 2012 21:47:35 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor fetch performance issue"
},
{
"msg_contents": "Running just the sql of the function returns only 10 rows:\n\npg=# SELECT m.memberid, m.websiteid, m.emailaddress, \npg-# m.firstname, m.lastname, m.regcomplete, m.emailok\npg-# FROM members m\npg-# WHERE m.emailaddress LIKE '[email protected]'\npg-# AND m.changedate_id < 5868 ORDER BY m.emailaddress, m.websiteid;\n memberid | websiteid | emailaddress | firstname | lastname | regcomplete | emailok \n-----------+-----------+------------------------+-----------+----------+-------------+---------\n 247815829 | 1 | [email protected] | email | test | 1 | 1\n 300960335 | 62 | [email protected] | | | 1 | 1\n 300959937 | 625 | [email protected] | | | 1 | 1\n 260152830 | 1453 | [email protected] | | | 1 | 1\n 300960163 | 1737 | [email protected] | email | test | 1 | 1\n 300960259 | 1824 | [email protected] | email | test | 1 | 1\n 300959742 | 1928 | [email protected] | email | test | 1 | 1\n 368122699 | 2457 | [email protected] | email | test | 1 | 1\n 403218613 | 2464 | [email protected] | email | test | 1 | 0\n 378951994 | 2656 | [email protected] | | | 1 | 1\n(10 rows)\n\nTime: 132.626 ms\n\nSo, it would seem that's a small enough number of rows. Unfortunately, issuing: \n\nset cursor_tuple_fraction to 1.0;\n\nDid not have an effect on performance. Is it common to modify this\ncursor_tuple_fraction parameter each time we execute the function?\n\n\nOn Tue, 2012-01-24 at 21:47 +0100, Pavel Stehule wrote:\n> Hello\n> \n> 2012/1/24 Tony Capobianco <[email protected]>:\n> > We are migrating our Oracle warehouse to Postgres 9.\n> >\n> > This function responds well:\n> >\n> > pg=# select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n> > getmemberadminprevious_sp2\n> > ----------------------------\n> > <unnamed portal 1>\n> > (1 row)\n> >\n> > Time: 7.549 ms\n> >\n> > However, when testing, this fetch takes upwards of 38 minutes:\n> >\n> > BEGIN;\n> > select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n> > FETCH ALL IN \"<unnamed portal 2>\";\n> >\n> > How can I diagnose any performance issues with the fetch in the cursor?\n> >\n> \n> Cursors are optimized to returns small subset of result - if you plan\n> to read complete result, then set\n> \n> set cursor_tuple_fraction to 1.0;\n> \n> this is session config value, you can set it before selected cursors queries\n> \n> Regards\n> \n> Pavel Stehule\n> \n> > Thanks.\n> > Tony\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n",
"msg_date": "Tue, 24 Jan 2012 15:57:37 -0500",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cursor fetch performance issue"
},
{
"msg_contents": "> On Tue, 2012-01-24 at 21:47 +0100, Pavel Stehule wrote:\n>> Hello\n>>\n>> 2012/1/24 Tony Capobianco<[email protected]>:\n>>> We are migrating our Oracle warehouse to Postgres 9.\n>>>\n>>> This function responds well:\n>>>\n>>> pg=# select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n>>> getmemberadminprevious_sp2\n>>> ----------------------------\n>>> <unnamed portal 1>\n>>> (1 row)\n>>>\n>>> Time: 7.549 ms\n>>>\n>>> However, when testing, this fetch takes upwards of 38 minutes:\n>>>\n>>> BEGIN;\n>>> select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n>>> FETCH ALL IN \"<unnamed portal 2>\";\n>>>\n>>> How can I diagnose any performance issues with the fetch in the cursor?\n>>>\n>>\n>> Cursors are optimized to returns small subset of result - if you plan\n>> to read complete result, then set\n>>\n>> set cursor_tuple_fraction to 1.0;\n>>\n>> this is session config value, you can set it before selected cursors queries\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>>> Thanks.\n>>> Tony\n>>>\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n>\n\n\nOn 1/24/2012 2:57 PM, Tony Capobianco wrote:\n > Running just the sql of the function returns only 10 rows:\n >\n > pg=# SELECT m.memberid, m.websiteid, m.emailaddress,\n > pg-# m.firstname, m.lastname, m.regcomplete, m.emailok\n > pg-# FROM members m\n > pg-# WHERE m.emailaddress LIKE '[email protected]'\n > pg-# AND m.changedate_id< 5868 ORDER BY m.emailaddress, \nm.websiteid;\n > memberid | websiteid | emailaddress | firstname | \nlastname | regcomplete | emailok\n > \n-----------+-----------+------------------------+-----------+----------+-------------+---------\n > 247815829 | 1 | [email protected] | email | test \n | 1 | 1\n > 300960335 | 62 | [email protected] | | \n | 1 | 1\n > 300959937 | 625 | [email protected] | | \n | 1 | 1\n > 260152830 | 1453 | [email protected] | | \n | 1 | 1\n > 300960163 | 1737 | [email protected] | email | test \n | 1 | 1\n > 300960259 | 1824 | [email protected] | email | test \n | 1 | 1\n > 300959742 | 1928 | [email protected] | email | test \n | 1 | 1\n > 368122699 | 2457 | [email protected] | email | test \n | 1 | 1\n > 403218613 | 2464 | [email protected] | email | test \n | 1 | 0\n > 378951994 | 2656 | [email protected] | | \n | 1 | 1\n > (10 rows)\n >\n > Time: 132.626 ms\n >\n > So, it would seem that's a small enough number of rows. \nUnfortunately, issuing:\n >\n > set cursor_tuple_fraction to 1.0;\n >\n > Did not have an effect on performance. 
Is it common to modify this\n > cursor_tuple_fraction parameter each time we execute the function?\n >\n >\n\n\nSo, is getMemberAdminPrevious_sp2() preparing a statement with wildcards?\n\nSELECT m.memberid, m.websiteid, m.emailaddress,\n m.firstname, m.lastname, m.regcomplete, m.emailok\n FROM members m\n WHERE m.emailaddress LIKE $1\n AND m.changedate_id < $2\n ORDER BY m.emailaddress, m.websiteid;\n\nOr is it creating the string and executing it:\n\nsql = 'SELECT m.memberid, m.websiteid, m.emailaddress, '\n || ' m.firstname, m.lastname, m.regcomplete, m.emailok '\n || ' FROM members m\n || ' WHERE m.emailaddress LIKE ' || arg1\n || ' AND m.changedate_id < ' || arg2\n || ' ORDER BY m.emailaddress, m.websiteid ';\nexecute(sql);\n\nMaybe its the planner doesnt plan so well with $1 arguments vs actual \narguments thing.\n\n-Andy\n\n\n",
"msg_date": "Tue, 24 Jan 2012 15:11:22 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor fetch performance issue"
},
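One way to test Andy's hypothesis from psql is to force a parameterised plan with PREPARE and compare it with the plan for literal values; the statement name is made up here, and the literals are taken from the earlier test:

    PREPARE member_prev (varchar, numeric) AS
        SELECT m.memberid, m.websiteid, m.emailaddress,
               m.firstname, m.lastname, m.regcomplete, m.emailok
          FROM members m
         WHERE m.emailaddress LIKE $1
           AND m.changedate_id < $2
         ORDER BY m.emailaddress, m.websiteid;

    EXPLAIN EXECUTE member_prev('[email protected]', 5868);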
{
"msg_contents": "2012/1/24 Tony Capobianco <[email protected]>:\n> Running just the sql of the function returns only 10 rows:\n>\n> pg=# SELECT m.memberid, m.websiteid, m.emailaddress,\n> pg-# m.firstname, m.lastname, m.regcomplete, m.emailok\n> pg-# FROM members m\n> pg-# WHERE m.emailaddress LIKE '[email protected]'\n> pg-# AND m.changedate_id < 5868 ORDER BY m.emailaddress, m.websiteid;\n> memberid | websiteid | emailaddress | firstname | lastname | regcomplete | emailok\n> -----------+-----------+------------------------+-----------+----------+-------------+---------\n> 247815829 | 1 | [email protected] | email | test | 1 | 1\n> 300960335 | 62 | [email protected] | | | 1 | 1\n> 300959937 | 625 | [email protected] | | | 1 | 1\n> 260152830 | 1453 | [email protected] | | | 1 | 1\n> 300960163 | 1737 | [email protected] | email | test | 1 | 1\n> 300960259 | 1824 | [email protected] | email | test | 1 | 1\n> 300959742 | 1928 | [email protected] | email | test | 1 | 1\n> 368122699 | 2457 | [email protected] | email | test | 1 | 1\n> 403218613 | 2464 | [email protected] | email | test | 1 | 0\n> 378951994 | 2656 | [email protected] | | | 1 | 1\n> (10 rows)\n>\n> Time: 132.626 ms\n>\n> So, it would seem that's a small enough number of rows. Unfortunately, issuing:\n>\n> set cursor_tuple_fraction to 1.0;\n>\n> Did not have an effect on performance. Is it common to modify this\n> cursor_tuple_fraction parameter each time we execute the function?\n>\n\nno, usually only before some strange query. Check execution plan,\nplease - but I don't think so your slow query depends on cursor usage.\n\npostgres=# set cursor_tuple_fraction TO 1.0;\nSET\npostgres=# explain declare x cursor for select * from foo where a % 2\n= 0 order by a;\n QUERY PLAN\n────────────────────────────────────────────────────────────────\n Sort (cost=19229.19..19241.69 rows=5000 width=4)\n Sort Key: a\n -> Seq Scan on foo (cost=0.00..18922.00 rows=5000 width=4)\n Filter: ((a % 2) = 0)\n(4 rows)\n\npostgres=# set cursor_tuple_fraction TO 1.0;\nSET\npostgres=# explain declare x cursor for select * from foo where a % 2\n= 0 order by a;\n QUERY PLAN\n────────────────────────────────────────────────────────────────\n Sort (cost=19229.19..19241.69 rows=5000 width=4)\n Sort Key: a\n -> Seq Scan on foo (cost=0.00..18922.00 rows=5000 width=4)\n Filter: ((a % 2) = 0)\n(4 rows)\n\npostgres=# set cursor_tuple_fraction TO 0.1;\nSET\npostgres=# explain declare x cursor for select * from foo where a % 2\n= 0 order by a;\n QUERY PLAN\n───────────────────────────────────────────────────────────────────────────\n Index Scan using foo_pkey on foo (cost=0.00..32693.34 rows=5000 width=4)\n Filter: ((a % 2) = 0)\n(2 rows)\n\nRegards\n\nPavel Stehule\n>\n> On Tue, 2012-01-24 at 21:47 +0100, Pavel Stehule wrote:\n>> Hello\n>>\n>> 2012/1/24 Tony Capobianco <[email protected]>:\n>> > We are migrating our Oracle warehouse to Postgres 9.\n>> >\n>> > This function responds well:\n>> >\n>> > pg=# select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n>> > getmemberadminprevious_sp2\n>> > ----------------------------\n>> > <unnamed portal 1>\n>> > (1 row)\n>> >\n>> > Time: 7.549 ms\n>> >\n>> > However, when testing, this fetch takes upwards of 38 minutes:\n>> >\n>> > BEGIN;\n>> > select public.getMemberAdminPrevious_sp2(247815829, 1,'[email protected]', 'email', 'test');\n>> > FETCH ALL IN \"<unnamed portal 2>\";\n>> >\n>> > How can I diagnose any performance issues with the fetch in the cursor?\n>> >\n>>\n>> Cursors are 
optimized to returns small subset of result - if you plan\n>> to read complete result, then set\n>>\n>> set cursor_tuple_fraction to 1.0;\n>>\n>> this is session config value, you can set it before selected cursors queries\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> > Thanks.\n>> > Tony\n>> >\n>> >\n>> > --\n>> > Sent via pgsql-performance mailing list ([email protected])\n>> > To make changes to your subscription:\n>> > http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n",
"msg_date": "Tue, 24 Jan 2012 22:11:29 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor fetch performance issue"
},
{
"msg_contents": "Hello\n\n>\n> So, is getMemberAdminPrevious_sp2() preparing a statement with wildcards?\n>\n> SELECT m.memberid, m.websiteid, m.emailaddress,\n> m.firstname, m.lastname, m.regcomplete, m.emailok\n> FROM members m\n> WHERE m.emailaddress LIKE $1\n> AND m.changedate_id < $2\n> ORDER BY m.emailaddress, m.websiteid;\n>\n> Or is it creating the string and executing it:\n>\n> sql = 'SELECT m.memberid, m.websiteid, m.emailaddress, '\n> || ' m.firstname, m.lastname, m.regcomplete, m.emailok '\n> || ' FROM members m\n> || ' WHERE m.emailaddress LIKE ' || arg1\n> || ' AND m.changedate_id < ' || arg2\n> || ' ORDER BY m.emailaddress, m.websiteid ';\n> execute(sql);\n>\n> Maybe its the planner doesnt plan so well with $1 arguments vs actual\n> arguments thing.\n>\n\nsure, it could be blind optimization problem in plpgsql. Maybe you\nhave to use a dynamic SQL - OPEN FOR EXECUTE stmt probably\n\nhttp://www.postgresql.org/docs/9.1/interactive/plpgsql-cursors.html\n\nRegards\n\nPavel Stehule\n\n> -Andy\n>\n>\n",
"msg_date": "Tue, 24 Jan 2012 22:17:11 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor fetch performance issue"
},
{
"msg_contents": "Tony Capobianco <[email protected]> writes:\n> Running just the sql of the function returns only 10 rows:\n> pg=# SELECT m.memberid, m.websiteid, m.emailaddress, \n> pg-# m.firstname, m.lastname, m.regcomplete, m.emailok\n> pg-# FROM members m\n> pg-# WHERE m.emailaddress LIKE '[email protected]'\n> pg-# AND m.changedate_id < 5868 ORDER BY m.emailaddress, m.websiteid;\n\nBased on that, I'd bet your problem is that the function is executing\n\tWHERE m.emailaddress LIKE $1\n(for some spelling of $1) and you are therefore not getting the benefit\nof the index optimizations that can happen when LIKE's pattern is\nconstant. Do you actually need LIKE rather than just \"=\" here?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Jan 2012 16:28:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor fetch performance issue "
},
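If the application never passes wildcard patterns, Tom's suggestion amounts to swapping LIKE for plain equality inside the function; a quick check that the planner then uses an index on emailaddress (assuming one exists), with literals from the earlier test:

    EXPLAIN
    SELECT m.memberid, m.websiteid, m.emailaddress,
           m.firstname, m.lastname, m.regcomplete, m.emailok
      FROM members m
     WHERE m.emailaddress = '[email protected]'
       AND m.changedate_id < 5868
     ORDER BY m.emailaddress, m.websiteid;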
{
"msg_contents": "Here's the explain:\n\npg=# explain select getMemberAdminPrevious_sp(247815829, 1,'[email protected]', 'Email', 'Test');\n QUERY PLAN \n------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0)\n(1 row)\n\nTime: 1.167 ms\n\nThere was discussion of 'LIKE' v. '=' and wildcard characters are not\nbeing entered into the $1 parameter.\n\nThis is not generating a sql string. I feel it's something to do with\nthe fetch of the refcursor. The cursor is a larger part of a function:\n\nCREATE OR REPLACE FUNCTION PUBLIC.GETMEMBERADMINPREVIOUS_SP2 ( \n p_memberid IN numeric,\n p_websiteid IN numeric,\n p_emailaddress IN varchar,\n p_firstname IN varchar,\n p_lastname IN varchar)\nRETURNS refcursor AS $$\nDECLARE\n ref refcursor;\n l_sysdateid numeric;\nBEGIN\n l_sysdateid := sysdateid();\n if (p_memberid != 0) then\n if (p_emailaddress IS NOT NULL) then\n OPEN ref FOR\n SELECT m.memberid, m.websiteid, m.emailaddress,\n m.firstname, m.lastname, m.regcomplete, m.emailok\n FROM members m\n WHERE m.emailaddress LIKE p_emailaddress\n AND m.changedate_id < l_sysdateid ORDER BY m.emailaddress, \nm.websiteid;\n end if;\n end if;\n Return ref;\nEXCEPTION\nWHEN NO_DATA_FOUND THEN\n Return null;\nEND;\n$$ LANGUAGE 'plpgsql';\n\n\nOn Tue, 2012-01-24 at 22:17 +0100, Pavel Stehule wrote:\n> Hello\n> \n> >\n> > So, is getMemberAdminPrevious_sp2() preparing a statement with wildcards?\n> >\n> > SELECT m.memberid, m.websiteid, m.emailaddress,\n> > m.firstname, m.lastname, m.regcomplete, m.emailok\n> > FROM members m\n> > WHERE m.emailaddress LIKE $1\n> > AND m.changedate_id < $2\n> > ORDER BY m.emailaddress, m.websiteid;\n> >\n> > Or is it creating the string and executing it:\n> >\n> > sql = 'SELECT m.memberid, m.websiteid, m.emailaddress, '\n> > || ' m.firstname, m.lastname, m.regcomplete, m.emailok '\n> > || ' FROM members m\n> > || ' WHERE m.emailaddress LIKE ' || arg1\n> > || ' AND m.changedate_id < ' || arg2\n> > || ' ORDER BY m.emailaddress, m.websiteid ';\n> > execute(sql);\n> >\n> > Maybe its the planner doesnt plan so well with $1 arguments vs actual\n> > arguments thing.\n> >\n> \n> sure, it could be blind optimization problem in plpgsql. Maybe you\n> have to use a dynamic SQL - OPEN FOR EXECUTE stmt probably\n> \n> http://www.postgresql.org/docs/9.1/interactive/plpgsql-cursors.html\n> \n> Regards\n> \n> Pavel Stehule\n> \n> > -Andy\n> >\n> >\n> \n\n\n",
"msg_date": "Tue, 24 Jan 2012 16:34:06 -0500",
"msg_from": "Tony Capobianco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cursor fetch performance issue"
},
{
"msg_contents": "On 24.01.2012 23:34, Tony Capobianco wrote:\n> Here's the explain:\n>\n> pg=# explain select getMemberAdminPrevious_sp(247815829, 1,'[email protected]', 'Email', 'Test');\n> QUERY PLAN\n> ------------------------------------------\n> Result (cost=0.00..0.26 rows=1 width=0)\n> (1 row)\n>\n> Time: 1.167 ms\n\nThat's not very helpful. We'd need to see the plan of the query within \nthe function, not the plan on invoking the function. The auto_explain \ncontrib module with auto_explain_log_nested_statements=on might be \nuseful to get that.\n\n> There was discussion of 'LIKE' v. '=' and wildcard characters are not\n> being entered into the $1 parameter.\n>\n> This is not generating a sql string. I feel it's something to do with\n> the fetch of the refcursor. The cursor is a larger part of a function:\n>\n> CREATE OR REPLACE FUNCTION PUBLIC.GETMEMBERADMINPREVIOUS_SP2 (\n> p_memberid IN numeric,\n> p_websiteid IN numeric,\n> p_emailaddress IN varchar,\n> p_firstname IN varchar,\n> p_lastname IN varchar)\n> RETURNS refcursor AS $$\n> DECLARE\n> ref refcursor;\n> l_sysdateid numeric;\n> BEGIN\n> l_sysdateid := sysdateid();\n> if (p_memberid != 0) then\n> if (p_emailaddress IS NOT NULL) then\n> OPEN ref FOR\n> SELECT m.memberid, m.websiteid, m.emailaddress,\n> m.firstname, m.lastname, m.regcomplete, m.emailok\n> FROM members m\n> WHERE m.emailaddress LIKE p_emailaddress\n> AND m.changedate_id< l_sysdateid ORDER BY m.emailaddress,\n> m.websiteid;\n> end if;\n> end if;\n> Return ref;\n> EXCEPTION\n> WHEN NO_DATA_FOUND THEN\n> Return null;\n> END;\n> $$ LANGUAGE 'plpgsql';\n\nThe theory that the query takes a long time because \"LIKE \np_emailaddress\" is not optimizeable by the planner seems the most likely \nto me.\n\nIf you don't actually use any wildcards in the email, try replacing LIKE \nwith =. If you do, then you can try the \"OPEN ref FOR EXECUTE\" syntax. \nThat way the query is re-planned every time, and the planner can take \nadvantage of the parameter value. That enables it to use an index on the \nemail address column, when there isn't in fact any wildcards in the \nvalue, and also estimate the selectivities better which can lead to a \nbetter plan. Like this:\n\nCREATE OR REPLACE FUNCTION public.getmemberadminprevious_sp2(p_memberid \nnumeric, p_websiteid numeric, p_emailaddress character varying, \np_firstname character varying, p_lastname character varying)\n RETURNS refcursor\n LANGUAGE plpgsql\nAS $function$\nDECLARE\n ref refcursor;\n l_sysdateid numeric;\nBEGIN\n l_sysdateid := sysdateid();\n if (p_memberid != 0) then\n if (p_emailaddress IS NOT NULL) then\n OPEN ref FOR EXECUTE $query$\n SELECT m.memberid, m.websiteid, m.emailaddress,\n m.firstname, m.lastname, m.regcomplete, m.emailok\n FROM members m\n WHERE m.emailaddress LIKE $1\n AND m.changedate_id < $2 ORDER BY m.emailaddress,\nm.websiteid;\n $query$ USING p_emailaddress, l_sysdateid;\n end if;\n end if;\n Return ref;\nEXCEPTION\nWHEN NO_DATA_FOUND THEN\n Return null;\nEND;\n$function$\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 25 Jan 2012 10:58:16 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursor fetch performance issue"
}
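Heikki's auto_explain suggestion can be tried in a single (superuser) session without touching postgresql.conf; a minimal sketch, reusing the call from the start of the thread:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;        -- log every statement's plan
    SET auto_explain.log_analyze = on;            -- optional: include actual rows/timings
    SET auto_explain.log_nested_statements = on;  -- include statements run inside the function

    BEGIN;
    SELECT public.getMemberAdminPrevious_sp2(247815829, 1, '[email protected]', 'email', 'test');
    FETCH ALL IN "<unnamed portal 2>";            -- use the portal name returned by the call above
    COMMIT;

The inner query's plan then shows up in the server log rather than in the psql session.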
] |
[
{
"msg_contents": "Hi Everyone\n\nI just want to illustrate an idea may possible for bringing up\nparallel process in PostgreSQL at SQL-Query level\n\nThe PARALLEL option in Oracle really give great improvment in\nperformance, multi-thread concept has great possibilities\n\nIn Oracle we have hints ( see below ) :\nSELECT /*+PARALLEL( e, 2 )*/ e.* FROM EMP e ;\n\nPostgreSQL ( may if possible in future ) :\nSELECT e.* FROM EMP PARALLEL ( e, 2) ;\n\n\n*Note: The below syntax does not work with any PostgreSQL versions\nPostgreSQL Syntax for SELECT ( with PARALLEL )\n\n[ WITH [ RECURSIVE ] with_query [, ...] ]\nSELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]\n * | expression [ [ AS ] output_name ] [, ...]\n [ FROM from_item [, ...] ]\n [ WHERE condition ]\n [ GROUP BY expression [, ...] ]\n [ HAVING condition [, ...] ]\n [ WINDOW window_name AS ( window_definition ) [, ...] ]\n [ PARALLEL (<alias> | <table> | <index> |<segment> , < no. of threads> ) ]\n [ { UNION | INTERSECT | EXCEPT } [ ALL ] select ]\n [ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS {\nFIRST | LAST } ] [, ...] ]\n [ LIMIT { count | ALL } ]\n [ OFFSET start [ ROW | ROWS ] ]\n [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]\n [ FOR { UPDATE | SHARE } [ OF table_name [, ...] ] [ NOWAIT ] [...] ]\n\n\n\n\nOn 1/24/12, ashish nauriyal <[email protected]> wrote:\n> Thoughts by Bruce Momjian on Parallel execution in PostgreSQL...\n>\n> http://momjian.us/main/blogs/pgblog/2011.html#December_5_2011\n>\n> You can give your thoughts on the blog itself....\n>\n> Thanks,\n> Ashish Nauriyal\n>\n",
"msg_date": "Wed, 25 Jan 2012 14:48:49 +0530",
"msg_from": "sridhar bamandlapally <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": "On Wed, Jan 25, 2012 at 6:18 AM, sridhar bamandlapally\n<[email protected]> wrote:\n> I just want to illustrate an idea may possible for bringing up\n> parallel process in PostgreSQL at SQL-Query level\n>\n> The PARALLEL option in Oracle really give great improvment in\n> performance, multi-thread concept has great possibilities\n>\n> In Oracle we have hints ( see below ) :\n> SELECT /*+PARALLEL( e, 2 )*/ e.* FROM EMP e ;\n>\n> PostgreSQL ( may if possible in future ) :\n> SELECT e.* FROM EMP PARALLEL ( e, 2) ;\n\nIt makes little sense (and is contrary to pg policy of no hinting) to\ndo it like that.\n\nIn fact, I've been musing for a long time on leveraging pg's\nsophisticated planner to do the parallelization:\n * Synchroscan means whenever a table has to be scanned twice, it can\nbe done with two threads.\n * Knowing whether a scan will hit mostly disk or memory can help in\ndeciding whether to do them in parallel or not (memory can be\nparallelized, interleaved memory access isn't so bad, but interleaved\ndisk access is disastrous)\n * Big sorts can be parallelized quite easily\n * Number of threads to use can be a tunable or automatically set to\nthe number of processors on the system\n * Pipelining is another useful plan transformation: parallelize\nI/O-bound nodes with CPU-bound ones.\n\nI know squat about how to implement this, but I've been considering\npicking the low hanging fruit on that tree and patching up PG to try\nthe concept. Many of the items above would require a thread-safe\nexecution engine, which may be quite hard to get and have a\nsignificant performance hit. Some don't, like parallel sort.\n\nAlso, it is necessary to notice that parallelization will create some\npriority inversion issues. Simple, non-parallelizable queries will\nsuffer from resource starvation when contending against more complex,\nparallelizable ones.\n",
"msg_date": "Wed, 25 Jan 2012 10:43:23 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
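The synchronized-scan behaviour mentioned in the first point is already available and can be checked through a configuration parameter; a quick sketch (big_table is a placeholder name):

SHOW synchronize_seqscans;   -- 'on' by default since 8.3
-- with it on, two sessions that start
--   SELECT count(*) FROM big_table;
-- at about the same time piggyback on one physical pass over the heap,
-- so the second scan adds little extra I/O.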
{
"msg_contents": "Yes\n\n\"Hint method\" is an alternative solution which does not appear to be\nexclusive parallelism solution as it is included in comment block and have\nno error handling,\nand this could be one of the reason against PG policy\n\n\"Parameter method\" ( which we are thinking about ) can be very exclusive\nparallelism solution\nwith proper error handling as it is part of SQL-Query syntax\n\nOn Wed, Jan 25, 2012 at 7:13 PM, Claudio Freire <[email protected]>wrote:\n\n> On Wed, Jan 25, 2012 at 6:18 AM, sridhar bamandlapally\n> <[email protected]> wrote:\n> > I just want to illustrate an idea may possible for bringing up\n> > parallel process in PostgreSQL at SQL-Query level\n> >\n> > The PARALLEL option in Oracle really give great improvment in\n> > performance, multi-thread concept has great possibilities\n> >\n> > In Oracle we have hints ( see below ) :\n> > SELECT /*+PARALLEL( e, 2 )*/ e.* FROM EMP e ;\n> >\n> > PostgreSQL ( may if possible in future ) :\n> > SELECT e.* FROM EMP PARALLEL ( e, 2) ;\n>\n> It makes little sense (and is contrary to pg policy of no hinting) to\n> do it like that.\n>\n> In fact, I've been musing for a long time on leveraging pg's\n> sophisticated planner to do the parallelization:\n> * Synchroscan means whenever a table has to be scanned twice, it can\n> be done with two threads.\n> * Knowing whether a scan will hit mostly disk or memory can help in\n> deciding whether to do them in parallel or not (memory can be\n> parallelized, interleaved memory access isn't so bad, but interleaved\n> disk access is disastrous)\n> * Big sorts can be parallelized quite easily\n> * Number of threads to use can be a tunable or automatically set to\n> the number of processors on the system\n> * Pipelining is another useful plan transformation: parallelize\n> I/O-bound nodes with CPU-bound ones.\n>\n> I know squat about how to implement this, but I've been considering\n> picking the low hanging fruit on that tree and patching up PG to try\n> the concept. Many of the items above would require a thread-safe\n> execution engine, which may be quite hard to get and have a\n> significant performance hit. Some don't, like parallel sort.\n>\n> Also, it is necessary to notice that parallelization will create some\n> priority inversion issues. 
Simple, non-parallelizable queries will\n> suffer from resource starvation when contending against more complex,\n> parallelizable ones.\n>\n",
"msg_date": "Wed, 25 Jan 2012 21:48:43 +0530",
"msg_from": "sridhar bamandlapally <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": "On Wed, Jan 25, 2012 at 7:43 AM, Claudio Freire <[email protected]> wrote:\n> I know squat about how to implement this, but I've been considering\n> picking the low hanging fruit on that tree and patching up PG to try\n> the concept. Many of the items above would require a thread-safe\n> execution engine, which may be quite hard to get and have a\n> significant performance hit. Some don't, like parallel sort.\n\nThis was just discussed on -hackers yesterday -- see thread\n'multithreaded query planner'. In short, judging by the comments of\nsome of the smartest people working on this project, it sounds like\nusing threads to attack this is not going to happen, ever. Note you\ncan probably still get parallel execution in other ways, using\nprocesses, shared memory, etc, so I'd consider researching in that\ndirection.\n\nmerlin\n",
"msg_date": "Wed, 25 Jan 2012 14:16:51 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
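One way to get that process-based parallelism today is the dblink extension's asynchronous calls, which farm parts of a query out to extra backends. This is only a hedged sketch; the database, table and column names are made up:

CREATE EXTENSION dblink;

SELECT dblink_connect('w1', 'dbname=mydb');
SELECT dblink_connect('w2', 'dbname=mydb');

-- fire both halves without waiting
SELECT dblink_send_query('w1', 'SELECT count(*) FROM big_table WHERE id % 2 = 0');
SELECT dblink_send_query('w2', 'SELECT count(*) FROM big_table WHERE id % 2 = 1');

-- collect and combine the partial results
SELECT sum(cnt) FROM (
    SELECT * FROM dblink_get_result('w1') AS t(cnt bigint)
    UNION ALL
    SELECT * FROM dblink_get_result('w2') AS t(cnt bigint)
) s;

SELECT dblink_disconnect('w1');
SELECT dblink_disconnect('w2');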
{
"msg_contents": "On Wed, Jan 25, 2012 at 5:16 PM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Jan 25, 2012 at 7:43 AM, Claudio Freire <[email protected]> wrote:\n>> I know squat about how to implement this, but I've been considering\n>> picking the low hanging fruit on that tree and patching up PG to try\n>> the concept. Many of the items above would require a thread-safe\n>> execution engine, which may be quite hard to get and have a\n>> significant performance hit. Some don't, like parallel sort.\n>\n> This was just discussed on -hackers yesterday -- see thread\n> 'multithreaded query planner'. In short, judging by the comments of\n> some of the smartest people working on this project, it sounds like\n> using threads to attack this is not going to happen, ever. Note you\n> can probably still get parallel execution in other ways, using\n> processes, shared memory, etc, so I'd consider researching in that\n> direction.\n\nIf you mean this[0] thread, it doesn't show anything conclusive\nagainst, say, parallel sort or pipelining.\n\nBut I agree, checking the code, it would be really tough to get any\nmore than parallel sorting by primitive types with threads.\n\nProcesses, however, show promise.\n\n[0] http://archives.postgresql.org/pgsql-hackers/2012-01/msg00734.php\n",
"msg_date": "Wed, 25 Jan 2012 17:54:21 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": "*Hi ALL*\n**\n*Please have a look into this,*\n*this may help us to think on PARALLEL option*\n**\n*WITHOUT PARALLEL Option*\nSQL> explain plan for select * from hr.emp ;\nExplained.\nPLAN\n--------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n--------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 7444K| 944M| 16077 (4)| 00:03:13 |\n| 1 | TABLE ACCESS FULL| EMP | 7444K| 944M| 16077 (4)| 00:03:13 |\n--------------------------------------------------------------------------\n\n\n*WITH PARALLEL Option*\nSQL> explain plan for select /*+parallel(emp,4)*/ * from hr.emp ;\nExplained.\nPLAN\n---------------------------------------------------------------------------------\n| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|\nTime |\n---------------------------------------------------------------------------------\n| 0 | SELECT STATEMENT | | 7444K| 944M| 4442 (3)|\n00:00:54 |\n| 1 | PX COORDINATOR | | | |\n| |\n| 2 | PX SEND QC (RANDOM)| :TQ10000 | 7444K| 944M| 4442 (3)|\n00:00:54 |\n| 3 | PX BLOCK ITERATOR | | 7444K| 944M| 4442 (3)|\n00:00:54 |\n| 4 | TABLE ACCESS FULL| EMP | 7444K| 944M| 4442 (3)|\n00:00:54 |\n---------------------------------------------------------------------------------\n\nIn the above plan ( WITH PARALLEL Option )\n1. \"Cost\" has been nearly reduced to 1/4th\n2. \"CPU\" has been reduced\n3. \"Time\" has been nearly reduced to 1/3rd\n\n\n\n\nOn Thu, Jan 26, 2012 at 2:24 AM, Claudio Freire <[email protected]>wrote:\n\n> On Wed, Jan 25, 2012 at 5:16 PM, Merlin Moncure <[email protected]>\n> wrote:\n> > On Wed, Jan 25, 2012 at 7:43 AM, Claudio Freire <[email protected]>\n> wrote:\n> >> I know squat about how to implement this, but I've been considering\n> >> picking the low hanging fruit on that tree and patching up PG to try\n> >> the concept. Many of the items above would require a thread-safe\n> >> execution engine, which may be quite hard to get and have a\n> >> significant performance hit. Some don't, like parallel sort.\n> >\n> > This was just discussed on -hackers yesterday -- see thread\n> > 'multithreaded query planner'. In short, judging by the comments of\n> > some of the smartest people working on this project, it sounds like\n> > using threads to attack this is not going to happen, ever. 
Note you\n> > can probably still get parallel execution in other ways, using\n> > processes, shared memory, etc, so I'd consider researching in that\n> > direction.\n>\n> If you mean this[0] thread, it doesn't show anything conclusive\n> against, say, parallel sort or pipelining.\n>\n> But I agree, checking the code, it would be really tough to get any\n> more than parallel sorting by primitive types with threads.\n>\n> Processes, however, show promise.\n>\n> [0] http://archives.postgresql.org/pgsql-hackers/2012-01/msg00734.php\n>\n",
"msg_date": "Fri, 27 Jan 2012 10:01:09 +0530",
"msg_from": "sridhar bamandlapally <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
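For anyone comparing against PostgreSQL, note that the Oracle numbers above are planner estimates. The PostgreSQL-side equivalent check would be an actual run, for example EXPLAIN (ANALYZE, BUFFERS), which reports measured time and buffer traffic rather than a cost guess (table name as in the example, output shape only, numbers elided):

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM emp;
--  Seq Scan on emp  (cost=0.00.. rows= width=)
--                   (actual time=.. rows= loops=1)
--    Buffers: shared hit= read=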
{
"msg_contents": "On Fri, Jan 27, 2012 at 06:31, sridhar bamandlapally\n<[email protected]> wrote:\n> --------------------------------------------------------------------------\n> | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n> --------------------------------------------------------------------------\n> | 0 | SELECT STATEMENT | | 7444K| 944M| 16077 (4)| 00:03:13 |\n> | 1 | TABLE ACCESS FULL| EMP | 7444K| 944M| 16077 (4)| 00:03:13 |\n> --------------------------------------------------------------------------\n\nSorry to take this off topic, but... Seriously, over 3 minutes to read\n944 MB of data? That's less than 5 MB/s, what's wrong with your\ndatabase? :)\n\nRegards,\nMarti\n",
"msg_date": "Fri, 27 Jan 2012 11:06:14 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": "27.01.12 11:06, Marti Raudsepp написав(ла):\n> On Fri, Jan 27, 2012 at 06:31, sridhar bamandlapally\n> <[email protected]> wrote:\n>> --------------------------------------------------------------------------\n>> | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n>> --------------------------------------------------------------------------\n>> | 0 | SELECT STATEMENT | | 7444K| 944M| 16077 (4)| 00:03:13 |\n>> | 1 | TABLE ACCESS FULL| EMP | 7444K| 944M| 16077 (4)| 00:03:13 |\n>> --------------------------------------------------------------------------\n> Sorry to take this off topic, but... Seriously, over 3 minutes to read\n> 944 MB of data? That's less than 5 MB/s, what's wrong with your\n> database? :)\nActually I'd ask how parallel CPU may help table sequence scan? Usually \nsequence scan does not take large amount of cpu time, so I see no point \nin parallelism.\n",
"msg_date": "Fri, 27 Jan 2012 11:19:25 +0200",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": "On 27 Leden 2012, 10:06, Marti Raudsepp wrote:\n> On Fri, Jan 27, 2012 at 06:31, sridhar bamandlapally\n> <[email protected]> wrote:\n>> --------------------------------------------------------------------------\n>> | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time \n>> |\n>> --------------------------------------------------------------------------\n>> | 0 | SELECT STATEMENT | | 7444K| 944M| 16077 (4)| 00:03:13\n>> |\n>> | 1 | TABLE ACCESS FULL| EMP | 7444K| 944M| 16077 (4)| 00:03:13\n>> |\n>> --------------------------------------------------------------------------\n>\n> Sorry to take this off topic, but... Seriously, over 3 minutes to read\n> 944 MB of data? That's less than 5 MB/s, what's wrong with your\n> database? :)\n\nYes, those results are quite suspicious. There's probably something\ninterfering with the queries (other queries, different processes, block\ncleanout, ...) or maybe this is purely due to caching.\n\nsridhar, run the queries repeatedly and my quess is the difference will\ndisappear (and the fist query will be a bit faster I guess).\n\nTomas\n\n",
"msg_date": "Fri, 27 Jan 2012 10:46:03 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": "sridhar bamandlapally, 27.01.2012 05:31:\n> SQL> explain plan for select * from hr.emp ;\n> Explained.\n> PLAN\n> --------------------------------------------------------------------------\n> | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n> --------------------------------------------------------------------------\n> | 0 | SELECT STATEMENT | | 7444K| 944M| 16077 (4)| 00:03:13 |\n> | 1 | TABLE ACCESS FULL| EMP | 7444K| 944M| 16077 (4)| 00:03:13 |\n> --------------------------------------------------------------------------\n> *WITH PARALLEL Option*\n> SQL> explain plan for select /*+parallel(emp,4)*/ * from hr.emp ;\n> Explained.\n> PLAN\n> ---------------------------------------------------------------------------------\n> | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n> ---------------------------------------------------------------------------------\n> | 0 | SELECT STATEMENT | | 7444K| 944M| 4442 (3)| 00:00:54 |\n> | 1 | PX COORDINATOR | | | | | |\n> | 2 | PX SEND QC (RANDOM)| :TQ10000 | 7444K| 944M| 4442 (3)| 00:00:54 |\n> | 3 | PX BLOCK ITERATOR | | 7444K| 944M| 4442 (3)| 00:00:54 |\n> | 4 | TABLE ACCESS FULL| EMP | 7444K| 944M| 4442 (3)| 00:00:54 |\n> ---------------------------------------------------------------------------------\n>\n> In the above plan ( WITH PARALLEL Option )\n> 1. \"Cost\" has been nearly reduced to 1/4th\n> 2. \"CPU\" has been reduced\n> 3. \"Time\" has been nearly reduced to 1/3rd\n\nI have *never* seen the \"time\" column in the explain plan output come anywhere near the actual execution time in Oracle.\n\n\n\n",
"msg_date": "Fri, 27 Jan 2012 10:52:44 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": "For security reasons, I cannot put real-time senario into loop,\n\nthe one which I gave is an example, if I have solution for this then\nsame will be applied for real-time senario\n\nWe have implemented PARALLEL technology into our database and same\nexpecting after migration to PostgreSQL,\n\nThe real-time SQL-Query is hiting 18000 times per day, and PARALLEL\noption gave us great performance and big window for all other process\n\nConcept is, we need window for every process on database and all\ntogether should fit in our window and time-line.\n\nWe think PostgreSQL should also upgrade PARALLEL technology at SQL-Query level\n\n\n\n\n\nOn 1/27/12, Thomas Kellerer <[email protected]> wrote:\n> sridhar bamandlapally, 27.01.2012 05:31:\n>> SQL> explain plan for select * from hr.emp ;\n>> Explained.\n>> PLAN\n>> --------------------------------------------------------------------------\n>> | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |\n>> --------------------------------------------------------------------------\n>> | 0 | SELECT STATEMENT | | 7444K| 944M| 16077 (4)| 00:03:13 |\n>> | 1 | TABLE ACCESS FULL| EMP | 7444K| 944M| 16077 (4)| 00:03:13 |\n>> --------------------------------------------------------------------------\n>> *WITH PARALLEL Option*\n>> SQL> explain plan for select /*+parallel(emp,4)*/ * from hr.emp ;\n>> Explained.\n>> PLAN\n>> ---------------------------------------------------------------------------------\n>> | Id | Operation | Name | Rows | Bytes | Cost (%CPU)|\n>> Time |\n>> ---------------------------------------------------------------------------------\n>> | 0 | SELECT STATEMENT | | 7444K| 944M| 4442 (3)|\n>> 00:00:54 |\n>> | 1 | PX COORDINATOR | | | | |\n>> |\n>> | 2 | PX SEND QC (RANDOM)| :TQ10000 | 7444K| 944M| 4442 (3)|\n>> 00:00:54 |\n>> | 3 | PX BLOCK ITERATOR | | 7444K| 944M| 4442 (3)|\n>> 00:00:54 |\n>> | 4 | TABLE ACCESS FULL| EMP | 7444K| 944M| 4442 (3)|\n>> 00:00:54 |\n>> ---------------------------------------------------------------------------------\n>>\n>> In the above plan ( WITH PARALLEL Option )\n>> 1. \"Cost\" has been nearly reduced to 1/4th\n>> 2. \"CPU\" has been reduced\n>> 3. \"Time\" has been nearly reduced to 1/3rd\n>\n> I have *never* seen the \"time\" column in the explain plan output come\n> anywhere near the actual execution time in Oracle.\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Fri, 27 Jan 2012 17:46:35 +0530",
"msg_from": "sridhar bamandlapally <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": "On 27 Leden 2012, 13:16, sridhar bamandlapally wrote:\n> For security reasons, I cannot put real-time senario into loop,\n\nThe point of Marti's comment was that the estimates (as presented by\nEXPLAIN PLAN FOR in Oracle) are inherently imprecise. Don't trust what\nOracle is telling you about the expected runtime of the queries. Run the\nqueries repeatedly to see.\n\n> the one which I gave is an example, if I have solution for this then\n> same will be applied for real-time senario\n\nThere's no way to execute a single query in a parallel manner and it won't\nbe available anytime soon.\n\nThis is not an issue unless you have a CPU bound query and you have unused\nCPUs. That's not the case of your example, because the sequential scan is\nlikely to be I/O bound, thus executing it in parallel won't fix the issue.\n\n> We have implemented PARALLEL technology into our database and same\n> expecting after migration to PostgreSQL,\n\nWhy? Have you tried to run the query on PostgreSQL?\n\n> The real-time SQL-Query is hiting 18000 times per day, and PARALLEL\n> option gave us great performance and big window for all other process\n\nAre we still discussing the example you've posted? Because this 18k hits\nper day means running the query every 5 seconds. And if the query takes\nmore than a few seconds, there will be multiple queries running\nconcurrently, thus eating CPUs.\n\n> Concept is, we need window for every process on database and all\n> together should fit in our window and time-line.\n\nNot sure what you mean by window or time-line?\n\n> We think PostgreSQL should also upgrade PARALLEL technology at SQL-Query\n> level\n\nThat is currently discussed in other threads, but it won't happen any time\nsoon (a few years in the future, maybe).\n\nTomas\n\n",
"msg_date": "Fri, 27 Jan 2012 13:43:49 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
},
{
"msg_contents": ">> the one which I gave is an example, if I have solution for this then\n>> same will be applied for real-time senario\n>\n> There's no way to execute a single query in a parallel manner and it won't\n> be available anytime soon.\n>\n> This is not an issue unless you have a CPU bound query and you have unused\n> CPUs. That's not the case of your example, because the sequential scan is\n> likely to be I/O bound, thus executing it in parallel won't fix the issue.\n\nit is possible to emulate with plproxy for example.\n\n>\n>> We have implemented PARALLEL technology into our database and same\n>> expecting after migration to PostgreSQL,\n>\n> Why? Have you tried to run the query on PostgreSQL?\n\npremature optimization ...\n\n>\n>> The real-time SQL-Query is hiting 18000 times per day, and PARALLEL\n>> option gave us great performance and big window for all other process\n>\n> Are we still discussing the example you've posted? Because this 18k hits\n> per day means running the query every 5 seconds. And if the query takes\n> more than a few seconds, there will be multiple queries running\n> concurrently, thus eating CPUs.\n\nagreed.\n\n>\n>> Concept is, we need window for every process on database and all\n>> together should fit in our window and time-line.\n>\n> Not sure what you mean by window or time-line?\n>\n>> We think PostgreSQL should also upgrade PARALLEL technology at SQL-Query\n>> level\n>\n> That is currently discussed in other threads, but it won't happen any time\n> soon (a few years in the future, maybe).\n\nat the SQL level, I don't see the immediate benefit given that the\nfeature is not implemented: SQL level stuff (planner hint) are here\nto workaround what the server can not handle on its own. And\nPostgreSQL policiy is not to allow planner hint, but to fix/improve\nthe server.\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Fri, 27 Jan 2012 15:38:53 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL Parallel Processing !"
}
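The PL/Proxy route mentioned above would look roughly like the following. This is a sketch only, assuming PL/Proxy is installed and a cluster named 'parallel' has been configured to point at several partitions; all names here are invented:

CREATE FUNCTION count_big_table(OUT cnt bigint) RETURNS SETOF bigint
LANGUAGE plproxy AS $$
    CLUSTER 'parallel';
    RUN ON ALL;
    SELECT count(*) FROM big_table;
$$;

-- RUN ON ALL ships the SELECT to every partition concurrently;
-- the caller then combines the per-partition counts:
SELECT sum(cnt) FROM count_big_table();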
] |
[
{
"msg_contents": "Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of\nstored functions s in straight SQL. Each stored proc was calling the next,\nso to get the full effect I had to track down all the pl/pgsql stored\nfunctions and convert them to sql. However, I was surprised to find after\nall of the rewrites, the LANGUAGE sql procs caused the queries to run slower\nthan the LANGUAGE plpgsql.\n\n \n\nNone of the stored functions selected from tables, the operated on and\nreturned scalar values - it was all assign variables, if/then/else - not\neven any looping.\n\n \n\nFor those who need the dirty details, here they are. If you happen to think\nthis behavior is expected, I needn't bore you - just let me know!\n\n \n\nThanks,\n\n \n\nCarlo\n\n \n\nThis was all triggered during the optimization of a query like this:\n\n \n\nSELECT myVar\n\nFROM myTable\n\nWHERE myFunc(myVar);\n\n \n\nLooking at EXPLAIN ANALYSE I saw something like this:\n\n \n\nFilter: myFunc(myVar)\n\n \n\nI rewrote the body of myFunc(myVar) something like this:\n\n \n\nSELECT CASE WHEN myVar IS NULL THEN false ELSE myOtherFunc(myVar) END\n\n \n\nWhen I reran EXPLAIN ANALYZE I got this:\n\n \n\nFilter: SELECT CASE WHEN myVar IS NULL THEN false ELSE myOtherFunc(myVar)\nEND\n\n \n\nNice. So, I did the same treatment to myOtherFunc() (converted to straight\nsql) but the EXPLAIN ANALYZE didn't change (reasonable, I guess - how deep\nwould I expect it to go?)\n\n \n\nAll of the procs were IMMUTABLE.\n\n \n\nI was very surprised to find that the query now ran much slower by a factor\nof 4.\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nAssuming there was some sort of cost to pl/pgsql, I\nrewrote a bunch of stored functions s in straight SQL. Each stored proc was\ncalling the next, so to get the full effect I had to track down all the\npl/pgsql stored functions and convert them to sql. However, I was surprised to\nfind after all of the rewrites, the LANGUAGE sql procs caused the queries to\nrun slower than the LANGUAGE plpgsql.\n \nNone of the stored functions selected from tables, the\noperated on and returned scalar values - it was all assign variables, if/then/else\n- not even any looping.\n \nFor those who need the dirty details, here they are.\nIf you happen to think this behavior is expected, I needn’t bore you –\njust let me know!\n \nThanks,\n \nCarlo\n \nThis was all triggered during the optimization of a query\nlike this:\n \nSELECT myVar\nFROM myTable\nWHERE myFunc(myVar);\n \nLooking at EXPLAIN ANALYSE I saw something like this:\n \nFilter: myFunc(myVar)\n \nI rewrote the body of myFunc(myVar) something like\nthis:\n \nSELECT CASE WHEN myVar IS NULL THEN false ELSE myOtherFunc(myVar)\nEND\n \nWhen I reran EXPLAIN ANALYZE I got this:\n \nFilter: SELECT CASE WHEN myVar IS NULL THEN false ELSE\nmyOtherFunc(myVar) END\n \nNice. So, I did the same treatment to myOtherFunc()\n(converted to straight sql) but the EXPLAIN ANALYZE didn’t change (reasonable,\nI guess – how deep would I expect it to go?)\n \nAll of the procs were IMMUTABLE.\n \nI was very surprised to find that the query now ran\nmuch slower by a factor of 4.",
"msg_date": "Thu, 26 Jan 2012 19:09:13 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pl/pgsql functions outperforming sql ones?"
},
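For readers trying to reproduce the comparison, the two flavours being discussed look roughly like the following. The bodies are reconstructed from the description above, and the integer argument type is an assumption:

-- original pl/pgsql version
CREATE FUNCTION myFunc(myVar integer) RETURNS boolean AS $$
BEGIN
    IF myVar IS NULL THEN
        RETURN false;
    END IF;
    RETURN myOtherFunc(myVar);
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- straight-SQL rewrite ($1 is how a 9.1 SQL function references its argument)
CREATE FUNCTION myFunc(myVar integer) RETURNS boolean AS $$
    SELECT CASE WHEN $1 IS NULL THEN false ELSE myOtherFunc($1) END;
$$ LANGUAGE sql IMMUTABLE;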
{
"msg_contents": "On Thu, Jan 26, 2012 at 6:09 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of\n> stored functions s in straight SQL. Each stored proc was calling the next,\n> so to get the full effect I had to track down all the pl/pgsql stored\n> functions and convert them to sql. However, I was surprised to find after\n> all of the rewrites, the LANGUAGE sql procs caused the queries to run slower\n> than the LANGUAGE plpgsql.\n\nOne reason that plpgsql can outperform sql functions is that plpgsql\ncaches plans. That said, I don't think that's what's happening here.\nDid you confirm the performance difference outside of EXPLAIN ANALYZE?\n In particular cases EXPLAIN ANALYZE can skew times, either by\ninjecting time calls or in how it discards results.\n\nmerlin\n",
"msg_date": "Fri, 27 Jan 2012 09:47:09 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
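A simple way to confirm the numbers outside of EXPLAIN ANALYZE is psql's \timing over a call that exercises the function many times; generate_series and the row count here are arbitrary:

\timing on
SELECT count(*) FROM generate_series(1, 100000) AS g(i) WHERE myFunc(i);
-- run this a few times against each version of myFunc and compare the reported times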
{
"msg_contents": "Yes, I did test it - i.e. I ran the functions on their own as I had always\nnoticed a minor difference between EXPLAIN ANALYZE results and direct query\ncalls.\n\nInteresting, so sql functions DON'T cache plans? Will plan-caching be of any\nbenefit to SQL that makes no reference to any tables? The SQL is emulating\nthe straight non-set-oriented procedural logic of the original plpgsql.\n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: January 27, 2012 10:47 AM\nTo: Carlo Stonebanks\nCc: [email protected]\nSubject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n\nOn Thu, Jan 26, 2012 at 6:09 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of\n> stored functions s in straight SQL. Each stored proc was calling the next,\n> so to get the full effect I had to track down all the pl/pgsql stored\n> functions and convert them to sql. However, I was surprised to find after\n> all of the rewrites, the LANGUAGE sql procs caused the queries to run\nslower\n> than the LANGUAGE plpgsql.\n\nOne reason that plpgsql can outperform sql functions is that plpgsql\ncaches plans. That said, I don't think that's what's happening here.\nDid you confirm the performance difference outside of EXPLAIN ANALYZE?\n In particular cases EXPLAIN ANALYZE can skew times, either by\ninjecting time calls or in how it discards results.\n\nmerlin\n\n",
"msg_date": "Fri, 27 Jan 2012 13:36:49 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "You can use PREPARE... EXECUTE to \"cache\" the plan (as well as\nparsing). However, I find it unlikely this will would explain the\nloss in performance you experienced.\n\nDeron\n\n\nOn Fri, Jan 27, 2012 at 11:36 AM, Carlo Stonebanks\n<[email protected]> wrote:\n> Yes, I did test it - i.e. I ran the functions on their own as I had always\n> noticed a minor difference between EXPLAIN ANALYZE results and direct query\n> calls.\n>\n> Interesting, so sql functions DON'T cache plans? Will plan-caching be of any\n> benefit to SQL that makes no reference to any tables? The SQL is emulating\n> the straight non-set-oriented procedural logic of the original plpgsql.\n>\n> -----Original Message-----\n> From: Merlin Moncure [mailto:[email protected]]\n> Sent: January 27, 2012 10:47 AM\n> To: Carlo Stonebanks\n> Cc: [email protected]\n> Subject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n>\n> On Thu, Jan 26, 2012 at 6:09 PM, Carlo Stonebanks\n> <[email protected]> wrote:\n>> Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of\n>> stored functions s in straight SQL. Each stored proc was calling the next,\n>> so to get the full effect I had to track down all the pl/pgsql stored\n>> functions and convert them to sql. However, I was surprised to find after\n>> all of the rewrites, the LANGUAGE sql procs caused the queries to run\n> slower\n>> than the LANGUAGE plpgsql.\n>\n> One reason that plpgsql can outperform sql functions is that plpgsql\n> caches plans. That said, I don't think that's what's happening here.\n> Did you confirm the performance difference outside of EXPLAIN ANALYZE?\n> In particular cases EXPLAIN ANALYZE can skew times, either by\n> injecting time calls or in how it discards results.\n>\n> merlin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 27 Jan 2012 12:29:05 -0700",
"msg_from": "Deron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
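For completeness, the PREPARE/EXECUTE form mentioned above looks like this; the statement name and body are only illustrative:

PREPARE check_var(integer) AS
    SELECT CASE WHEN $1 IS NULL THEN false ELSE myOtherFunc($1) END;

EXECUTE check_var(42);    -- parsing and planning were done once, at PREPARE time
EXECUTE check_var(NULL);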
{
"msg_contents": "Was I even right in thinking I would gain any performance by converting to\nSQL?\n\n-----Original Message-----\nFrom: Deron [mailto:[email protected]] \nSent: January 27, 2012 2:29 PM\nTo: Carlo Stonebanks\nCc: [email protected]\nSubject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n\nYou can use PREPARE... EXECUTE to \"cache\" the plan (as well as\nparsing). However, I find it unlikely this will would explain the\nloss in performance you experienced.\n\nDeron\n\n\nOn Fri, Jan 27, 2012 at 11:36 AM, Carlo Stonebanks\n<[email protected]> wrote:\n> Yes, I did test it - i.e. I ran the functions on their own as I had\nalways\n> noticed a minor difference between EXPLAIN ANALYZE results and direct\nquery\n> calls.\n>\n> Interesting, so sql functions DON'T cache plans? Will plan-caching be of\nany\n> benefit to SQL that makes no reference to any tables? The SQL is emulating\n> the straight non-set-oriented procedural logic of the original plpgsql.\n>\n> -----Original Message-----\n> From: Merlin Moncure [mailto:[email protected]]\n> Sent: January 27, 2012 10:47 AM\n> To: Carlo Stonebanks\n> Cc: [email protected]\n> Subject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n>\n> On Thu, Jan 26, 2012 at 6:09 PM, Carlo Stonebanks\n> <[email protected]> wrote:\n>> Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of\n>> stored functions s in straight SQL. Each stored proc was calling the\nnext,\n>> so to get the full effect I had to track down all the pl/pgsql stored\n>> functions and convert them to sql. However, I was surprised to find after\n>> all of the rewrites, the LANGUAGE sql procs caused the queries to run\n> slower\n>> than the LANGUAGE plpgsql.\n>\n> One reason that plpgsql can outperform sql functions is that plpgsql\n> caches plans. That said, I don't think that's what's happening here.\n> Did you confirm the performance difference outside of EXPLAIN ANALYZE?\n> In particular cases EXPLAIN ANALYZE can skew times, either by\n> injecting time calls or in how it discards results.\n>\n> merlin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Fri, 27 Jan 2012 14:59:08 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "\nOn Jan 27, 2012, at 2:59 PM, Carlo Stonebanks wrote:\n\n> Was I even right in thinking I would gain any performance by converting to\n> SQL?\n\nAs always, it depends. I converted an immutable pl/pgsql function to an SQL function and the body of the function barely changed. However, I experienced an order-of-magnitude speed-up because the SQL function could be folded into the plan (like a view) while a pl/pgsql function will never be folded (and the planner punts and assumes the function will return 100 rows for set-returning functions). However, not all SQL functions can be folded into the plan.\n\nOn the other hand, a pl/pgsql function can make use of memoization for number-crunching routines and make business-logical short-circuiting decisions.\n\nCheers,\nM",
"msg_date": "Fri, 27 Jan 2012 15:09:26 -0500",
"msg_from": "\"A.M.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "2012/1/27 Carlo Stonebanks <[email protected]>:\n> Yes, I did test it - i.e. I ran the functions on their own as I had always\n> noticed a minor difference between EXPLAIN ANALYZE results and direct query\n> calls.\n>\n> Interesting, so sql functions DON'T cache plans? Will plan-caching be of any\n> benefit to SQL that makes no reference to any tables? The SQL is emulating\n> the straight non-set-oriented procedural logic of the original plpgsql.\n>\n\nIt is not necessary usually - simple SQL functions are merged to outer\nquery - there are e few cases where this optimization cannot be\nprocessed and then there are performance lost.\n\nFor example this optimization is not possible (sometimes) when some\nparameter is volatile\n\nRegards\n\nPavel Stehule\n",
"msg_date": "Sat, 28 Jan 2012 07:37:36 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "Update: The main stored function in question and all of its sub\nsub-functions were recoded to new pure sql functions. \n\nI then stub tested the sub functions sql vs. plpgsql.\n\nHere were the results for new sql vs old plpgsql:\n\nIndividual sub functions tested 20-30% faster\n\nBut the main function calling new sql sub functions ran 100% slower\n\nSo I tried this:\n\nI modified the old plpgsql function to call the new sql sub functions.\n\nTHAT ran 20-30% faster then the unmodified version.\n\nThat modified function is listed below. All the functions ending in 2 are\nthe new SQL versions.\n\nAny thoughts or insight would be much appreciated.\n\nCarlo\n\n\nCREATE OR REPLACE FUNCTION mdx_lib.lex_compare_candidate3(character varying,\ncharacter varying)\n RETURNS numeric AS\n$BODY$\n/*\nRate two strings candidacy for lex_compare.\nparam 1: first string to compare\nparam 2: 2nd string to compare\nreturns: numeric result like mdx_lib.lex_distance\n0 is a failure, 1 a perfect match\n*/\ndeclare\n str1 varchar = $1;\n str2 varchar = $2;\n acro1 varchar;\n acro2 varchar;\n str_dist numeric;\n acro_dist numeric;\n result numeric;\nbegin\n if str1 = str2 then\n result = 0;\n else\n str1 = lower(regexp_replace(str1, '[^[:alnum:]]', '', 'g'));\n str2 = lower(regexp_replace(str2, '[^[:alnum:]]', '', 'g'));\n\n if str1 = str2 then\n result = 0.1;\n else\n str_dist = mdx_lib.lex_distance2(str1, str2);\n acro1 = mdx_lib.lex_acronym2(str1);\n acro2 = mdx_lib.lex_acronym2(str2);\n acro_dist = mdx_lib.lex_distance2(acro1, acro2);\n result = (acro_dist + (str_dist * 2)) / 2;\n end if;\n end if;\n\n result = 1 - result;\n if result < 0 then\n result = 0;\n end if;\n return result;\nend;\n$BODY$\n LANGUAGE plpgsql IMMUTABLE\n COST 100;\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Pavel Stehule\nSent: January 28, 2012 1:38 AM\nTo: Carlo Stonebanks\nCc: Merlin Moncure; [email protected]\nSubject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n\n2012/1/27 Carlo Stonebanks <[email protected]>:\n> Yes, I did test it - i.e. I ran the functions on their own as I had\nalways\n> noticed a minor difference between EXPLAIN ANALYZE results and direct\nquery\n> calls.\n>\n> Interesting, so sql functions DON'T cache plans? Will plan-caching be of\nany\n> benefit to SQL that makes no reference to any tables? The SQL is emulating\n> the straight non-set-oriented procedural logic of the original plpgsql.\n>\n\nIt is not necessary usually - simple SQL functions are merged to outer\nquery - there are e few cases where this optimization cannot be\nprocessed and then there are performance lost.\n\nFor example this optimization is not possible (sometimes) when some\nparameter is volatile\n\nRegards\n\nPavel Stehule\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Sat, 28 Jan 2012 23:20:40 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "Pavel, are you saying that the code of the stored function is actually being\nadded to the SQL query, instead of a call to it? For example, I have seen\nthis:\n\nSELECT myVar\nFROM myTable\nWHERE myVar > 0 AND myFunc(myVar)\n\nAnd seen the SQL body of myVar appended to the outer query:\n\n... Filter: SELECT CASE WHERE myVar < 10 THEN true ELSE false END\n\nIs this what we are talking about? Two questions:\n\n1) Is this also done when the function is called as a SELECT column; \n e.g. would:\n SELECT myFunc(myVar) AS result \n - become:\n SELECT (\n SELECT CASE WHERE myVar < 10 THEN true ELSE false END\n ) AS result?\n\n2) Does that not bypass the benefits of IMMUTABLE?\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Pavel Stehule\nSent: January 28, 2012 1:38 AM\nTo: Carlo Stonebanks\nCc: Merlin Moncure; [email protected]\nSubject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n\n2012/1/27 Carlo Stonebanks <[email protected]>:\n> Yes, I did test it - i.e. I ran the functions on their own as I had\nalways\n> noticed a minor difference between EXPLAIN ANALYZE results and direct\nquery\n> calls.\n>\n> Interesting, so sql functions DON'T cache plans? Will plan-caching be of\nany\n> benefit to SQL that makes no reference to any tables? The SQL is emulating\n> the straight non-set-oriented procedural logic of the original plpgsql.\n>\n\nIt is not necessary usually - simple SQL functions are merged to outer\nquery - there are e few cases where this optimization cannot be\nprocessed and then there are performance lost.\n\nFor example this optimization is not possible (sometimes) when some\nparameter is volatile\n\nRegards\n\nPavel Stehule\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Sun, 29 Jan 2012 18:04:53 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "Hello\n\n2012/1/30 Carlo Stonebanks <[email protected]>:\n> Pavel, are you saying that the code of the stored function is actually being\n> added to the SQL query, instead of a call to it? For example, I have seen\n> this:\n>\n> SELECT myVar\n> FROM myTable\n> WHERE myVar > 0 AND myFunc(myVar)\n>\n> And seen the SQL body of myVar appended to the outer query:\n>\n> ... Filter: SELECT CASE WHERE myVar < 10 THEN true ELSE false END\n>\n> Is this what we are talking about? Two questions:\n\nyes - it is SQL function \"inlining\"\n\n>\n> 1) Is this also done when the function is called as a SELECT column;\n> e.g. would:\n> SELECT myFunc(myVar) AS result\n> - become:\n> SELECT (\n> SELECT CASE WHERE myVar < 10 THEN true ELSE false END\n> ) AS result?\n>\n\nyes\n\nCREATE OR REPLACE FUNCTION public.fx(integer, integer)\n RETURNS integer\n LANGUAGE sql\nAS $function$\nselect coalesce($1, $2)\n$function$\n\npostgres=# explain verbose select fx(random()::int, random()::int);\n QUERY PLAN\n--------------------------------------------------------------\n Result (cost=0.00..0.02 rows=1 width=0)\n Output: COALESCE((random())::integer, (random())::integer)\n(2 rows)\n\n\n> 2) Does that not bypass the benefits of IMMUTABLE?\n>\n\nno - optimizator works with expanded query - usually is preferred\nstyle a writing SQL functions without flags, because optimizer can\nwork with definition of SQL function and can set well flags. SQL\nfunction is not black box for optimizer like plpgsql does. And SQL\noptimizer chooses a inlining or some other optimizations. Sometimes\nexplicit flags are necessary, but usually not for scalar SQL\nfunctions.\n\npostgres=# create or replace function public.fxs(int)\npostgres-# returns setof int as $$\npostgres$# select * from generate_series(1,$1)\npostgres$# $$ language sql;\nCREATE FUNCTION\npostgres=# explain verbose select * from fxs(10);\n QUERY PLAN\n-------------------------------------------------------------------\n Function Scan on public.fxs (cost=0.25..10.25 rows=1000 width=4)\n Output: fxs\n Function Call: fxs(10)\n(3 rows)\n\npostgres=# create or replace function public.fxs(int)\nreturns setof int as $$\nselect * from generate_series(1,$1)\n$$ language sql IMMUTABLE;\nCREATE FUNCTION\npostgres=# explain verbose select * from fxs(10);\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Function Scan on pg_catalog.generate_series (cost=0.00..10.00\nrows=1000 width=4)\n Output: generate_series.generate_series\n Function Call: generate_series(1, 10) --<<<< inlined query\n(3 rows)\n\nRegards\n\nPavel Stehule\n\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Pavel Stehule\n> Sent: January 28, 2012 1:38 AM\n> To: Carlo Stonebanks\n> Cc: Merlin Moncure; [email protected]\n> Subject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n>\n> 2012/1/27 Carlo Stonebanks <[email protected]>:\n>> Yes, I did test it - i.e. I ran the functions on their own as I had\n> always\n>> noticed a minor difference between EXPLAIN ANALYZE results and direct\n> query\n>> calls.\n>>\n>> Interesting, so sql functions DON'T cache plans? Will plan-caching be of\n> any\n>> benefit to SQL that makes no reference to any tables? 
The SQL is emulating\n>> the straight non-set-oriented procedural logic of the original plpgsql.\n>>\n>\n> It is not necessary usually - simple SQL functions are merged to outer\n> query - there are e few cases where this optimization cannot be\n> processed and then there are performance lost.\n>\n> For example this optimization is not possible (sometimes) when some\n> parameter is volatile\n>\n> Regards\n>\n> Pavel Stehule\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Mon, 30 Jan 2012 08:56:58 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "Pavel, thank you very much for your explanation.\n\nIs it possible to define under what conditions that sql procs will\noutperform plpgsql ones, and vice-versa?\n\n-----Original Message-----\nFrom: Pavel Stehule [mailto:[email protected]] \nSent: January 30, 2012 2:57 AM\nTo: Carlo Stonebanks\nCc: Merlin Moncure; [email protected]\nSubject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n\nHello\n\n2012/1/30 Carlo Stonebanks <[email protected]>:\n> Pavel, are you saying that the code of the stored function is actually\nbeing\n> added to the SQL query, instead of a call to it? For example, I have seen\n> this:\n>\n> SELECT myVar\n> FROM myTable\n> WHERE myVar > 0 AND myFunc(myVar)\n>\n> And seen the SQL body of myVar appended to the outer query:\n>\n> ... Filter: SELECT CASE WHERE myVar < 10 THEN true ELSE false END\n>\n> Is this what we are talking about? Two questions:\n\nyes - it is SQL function \"inlining\"\n\n>\n> 1) Is this also done when the function is called as a SELECT column;\n> e.g. would:\n> SELECT myFunc(myVar) AS result\n> - become:\n> SELECT (\n> SELECT CASE WHERE myVar < 10 THEN true ELSE false END\n> ) AS result?\n>\n\nyes\n\nCREATE OR REPLACE FUNCTION public.fx(integer, integer)\n RETURNS integer\n LANGUAGE sql\nAS $function$\nselect coalesce($1, $2)\n$function$\n\npostgres=# explain verbose select fx(random()::int, random()::int);\n QUERY PLAN\n--------------------------------------------------------------\n Result (cost=0.00..0.02 rows=1 width=0)\n Output: COALESCE((random())::integer, (random())::integer)\n(2 rows)\n\n\n> 2) Does that not bypass the benefits of IMMUTABLE?\n>\n\nno - optimizator works with expanded query - usually is preferred\nstyle a writing SQL functions without flags, because optimizer can\nwork with definition of SQL function and can set well flags. SQL\nfunction is not black box for optimizer like plpgsql does. And SQL\noptimizer chooses a inlining or some other optimizations. Sometimes\nexplicit flags are necessary, but usually not for scalar SQL\nfunctions.\n\npostgres=# create or replace function public.fxs(int)\npostgres-# returns setof int as $$\npostgres$# select * from generate_series(1,$1)\npostgres$# $$ language sql;\nCREATE FUNCTION\npostgres=# explain verbose select * from fxs(10);\n QUERY PLAN\n-------------------------------------------------------------------\n Function Scan on public.fxs (cost=0.25..10.25 rows=1000 width=4)\n Output: fxs\n Function Call: fxs(10)\n(3 rows)\n\npostgres=# create or replace function public.fxs(int)\nreturns setof int as $$\nselect * from generate_series(1,$1)\n$$ language sql IMMUTABLE;\nCREATE FUNCTION\npostgres=# explain verbose select * from fxs(10);\n QUERY PLAN\n----------------------------------------------------------------------------\n-------\n Function Scan on pg_catalog.generate_series (cost=0.00..10.00\nrows=1000 width=4)\n Output: generate_series.generate_series\n Function Call: generate_series(1, 10) --<<<< inlined query\n(3 rows)\n\nRegards\n\nPavel Stehule\n\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Pavel Stehule\n> Sent: January 28, 2012 1:38 AM\n> To: Carlo Stonebanks\n> Cc: Merlin Moncure; [email protected]\n> Subject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?\n>\n> 2012/1/27 Carlo Stonebanks <[email protected]>:\n>> Yes, I did test it - i.e. 
I ran the functions on their own as I had\n> always\n>> noticed a minor difference between EXPLAIN ANALYZE results and direct\n> query\n>> calls.\n>>\n>> Interesting, so sql functions DON'T cache plans? Will plan-caching be of\n> any\n>> benefit to SQL that makes no reference to any tables? The SQL is\nemulating\n>> the straight non-set-oriented procedural logic of the original plpgsql.\n>>\n>\n> It is not necessary usually - simple SQL functions are merged to outer\n> query - there are e few cases where this optimization cannot be\n> processed and then there are performance lost.\n>\n> For example this optimization is not possible (sometimes) when some\n> parameter is volatile\n>\n> Regards\n>\n> Pavel Stehule\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n",
"msg_date": "Mon, 30 Jan 2012 18:15:17 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "2012/1/31 Carlo Stonebanks <[email protected]>:\n> Pavel, thank you very much for your explanation.\n>\n> Is it possible to define under what conditions that sql procs will\n> outperform plpgsql ones, and vice-versa?\n\nyes, little bit :)\n\nwhen inlining is possible, then SQL function will be faster - typical\nuse case is simple scalar functions (with nonvolatile real\nparameters).\n\nRegards\n\nPavel\n\n>\n> -----Original Message-----\n",
"msg_date": "Tue, 31 Jan 2012 05:56:27 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
},
{
"msg_contents": "On Sat, Jan 28, 2012 at 11:20 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Update: The main stored function in question and all of its sub\n> sub-functions were recoded to new pure sql functions.\n>\n> I then stub tested the sub functions sql vs. plpgsql.\n>\n> Here were the results for new sql vs old plpgsql:\n>\n> Individual sub functions tested 20-30% faster\n>\n> But the main function calling new sql sub functions ran 100% slower\n>\n> So I tried this:\n>\n> I modified the old plpgsql function to call the new sql sub functions.\n>\n> THAT ran 20-30% faster then the unmodified version.\n>\n> That modified function is listed below. All the functions ending in 2 are\n> the new SQL versions.\n\nOne advantage of PL/pgsql for code like this is that you can compute\nvalues once and save them in variables. SQL doesn't have variables,\nso you can end up repeating the same SQL in multiple places (causing\nmultiple evaluation), or even if you manage to avoid that, the system\ncan inline things in multiple places and produce the same effect.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 8 Feb 2012 15:33:08 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql functions outperforming sql ones?"
}
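To illustrate Robert's point, the usual workaround in a pure SQL function is to compute the intermediate values once in a sub-select or CTE and reference them from the outer query. Below is a rough, untested sketch of what lex_compare_candidate3 could look like in that style; the new function name is invented:

CREATE FUNCTION mdx_lib.lex_compare_candidate4(varchar, varchar)
RETURNS numeric AS $$
    WITH s AS (   -- strip punctuation and lower-case both inputs exactly once
        SELECT lower(regexp_replace($1, '[^[:alnum:]]', '', 'g')) AS str1,
               lower(regexp_replace($2, '[^[:alnum:]]', '', 'g')) AS str2
    )
    SELECT greatest(0, 1 - CASE
               WHEN $1 = $2     THEN 0
               WHEN str1 = str2 THEN 0.1
               ELSE (mdx_lib.lex_distance2(mdx_lib.lex_acronym2(str1),
                                           mdx_lib.lex_acronym2(str2))
                     + mdx_lib.lex_distance2(str1, str2) * 2) / 2
           END)
    FROM s;
$$ LANGUAGE sql IMMUTABLE;

Note that a body like this is too complex for the planner to inline, so it still executes as a separate SQL-function call, which may be part of why the plain-SQL rewrites in this thread did not come out ahead.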
] |
[
{
"msg_contents": "Hi,\n\nWe are having an embedded system with a freescale m68k architecture based micro-controller, 256MB RAM running a customized version of Slackware 12 linux.\nIt's a relatively modest Hardware.\nWe have installed postgres 9.1 as our database engine. While testing, we found that the Postgres operations take more than 70% of CPU and the average also stays above 40%.\nThis is suffocating the various other processes running on the system. Couple of them are very critical ones.\nThe testing involves inserting bulk number of records (approx. 10000 records having between 10 and 20 columns).\nPlease let us know how we can reduce CPU usage for the postgres.\n\nThanks and Regards\nJayashankar\n\n\n\nLarsen & Toubro Limited\n\nwww.larsentoubro.com\n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n\n\n\n\n\n\n\n\nHi,\n \nWe are having an embedded system with a freescale m68k architecture based micro-controller, 256MB RAM running a customized version of Slackware 12 linux.\nIt’s a relatively modest Hardware.\nWe have installed postgres 9.1 as our database engine. While testing, we found that the Postgres operations take more than 70% of CPU and the average also stays above 40%.\nThis is suffocating the various other processes running on the system. Couple of them are very critical ones.\n\nThe testing involves inserting bulk number of records (approx. 10000 records having between 10 and 20 columns).\nPlease let us know how we can reduce CPU usage for the postgres.\n \nThanks and Regards\nJayashankar\n \n\n\n\nLarsen & Toubro Limited \n\nwww.larsentoubro.com \n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.",
"msg_date": "Fri, 27 Jan 2012 13:34:15 +0000",
"msg_from": "Jayashankar K B <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgress is taking lot of CPU on our embedded hardware."
},
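Two cheap changes that usually cut per-row CPU on constrained hardware are wrapping the batch in one transaction and using multi-row INSERTs, so commit overhead and per-statement parsing are paid once per batch instead of once per record. A sketch only; the table and columns are placeholders:

BEGIN;
INSERT INTO sensor_log (ts, channel, value) VALUES
    ('2012-01-27 10:00:00', 1, 42.0),
    ('2012-01-27 10:00:01', 1, 42.5),
    ('2012-01-27 10:00:02', 2, 17.3);
COMMIT;
-- alternatively, a single prepared INSERT executed repeatedly inside the
-- transaction avoids re-parsing each statement, if the client driver allows it.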
{
"msg_contents": "On 27.01.2012 15:34, Jayashankar K B wrote:\n> Hi,\n>\n> We are having an embedded system with a freescale m68k architecture based micro-controller, 256MB RAM running a customized version of Slackware 12 linux.\n> It's a relatively modest Hardware.\n\nFascinating!\n\n> We have installed postgres 9.1 as our database engine. While testing, we found that the Postgres operations take more than 70% of CPU and the average also stays above 40%.\n> This is suffocating the various other processes running on the system. Couple of them are very critical ones.\n> The testing involves inserting bulk number of records (approx. 10000 records having between 10 and 20 columns).\n> Please let us know how we can reduce CPU usage for the postgres.\n\nThe first step would be to figure out where all the time is spent. Are \nthere unnecessary indexes you could remove? Are you using INSERT \nstatements or COPY? Sending the data in binary format instead of text \nmight shave some cycles.\n\nIf you can run something like oprofile on the system, that would be \nhelpful to pinpoint the expensive part.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 27 Jan 2012 18:47:36 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
{
"msg_contents": "On 1/27/2012 10:47 AM, Heikki Linnakangas wrote:\n> On 27.01.2012 15:34, Jayashankar K B wrote:\n>> Hi,\n>>\n>> We are having an embedded system with a freescale m68k architecture\n>> based micro-controller, 256MB RAM running a customized version of\n>> Slackware 12 linux.\n>> It's a relatively modest Hardware.\n>\n> Fascinating!\n>\n>> We have installed postgres 9.1 as our database engine. While testing,\n>> we found that the Postgres operations take more than 70% of CPU and\n>> the average also stays above 40%.\n>> This is suffocating the various other processes running on the system.\n>> Couple of them are very critical ones.\n>> The testing involves inserting bulk number of records (approx. 10000\n>> records having between 10 and 20 columns).\n>> Please let us know how we can reduce CPU usage for the postgres.\n>\n> The first step would be to figure out where all the time is spent. Are\n> there unnecessary indexes you could remove? Are you using INSERT\n> statements or COPY? Sending the data in binary format instead of text\n> might shave some cycles.\n>\n\nDo you have triggers on the table?\n\n\n",
"msg_date": "Fri, 27 Jan 2012 11:14:50 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
{
"msg_contents": "Hi Heikki Linnakangas: We are using series of Insert statements to insert the records into database.\nSending data in binary is not an option as the module that writes into DB has been finalized.\nWe do not have control over that.\n\nHi Andy: As of now, there are no triggers in the table.\n\nPlease let me know how we can proceed. On the net I couldn't get hold of any good example where Postgres has been used on limited Hardware system.\nWe are starting to feel if Postgres was a good choice for us..!\n\nThanks and Regards\nJay\n\n-----Original Message-----\nFrom: Andy Colson [mailto:[email protected]]\nSent: Friday, January 27, 2012 10:45 PM\nTo: Heikki Linnakangas\nCc: Jayashankar K B; [email protected]\nSubject: Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.\n\nOn 1/27/2012 10:47 AM, Heikki Linnakangas wrote:\n> On 27.01.2012 15:34, Jayashankar K B wrote:\n>> Hi,\n>>\n>> We are having an embedded system with a freescale m68k architecture\n>> based micro-controller, 256MB RAM running a customized version of\n>> Slackware 12 linux.\n>> It's a relatively modest Hardware.\n>\n> Fascinating!\n>\n>> We have installed postgres 9.1 as our database engine. While testing,\n>> we found that the Postgres operations take more than 70% of CPU and\n>> the average also stays above 40%.\n>> This is suffocating the various other processes running on the system.\n>> Couple of them are very critical ones.\n>> The testing involves inserting bulk number of records (approx. 10000\n>> records having between 10 and 20 columns).\n>> Please let us know how we can reduce CPU usage for the postgres.\n>\n> The first step would be to figure out where all the time is spent. Are\n> there unnecessary indexes you could remove? Are you using INSERT\n> statements or COPY? Sending the data in binary format instead of text\n> might shave some cycles.\n>\n\nDo you have triggers on the table?\n\n\n\n\nLarsen & Toubro Limited\n\nwww.larsentoubro.com\n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n",
"msg_date": "Fri, 27 Jan 2012 18:30:00 +0000",
"msg_from": "Jayashankar K B <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded\n hardware."
},
{
"msg_contents": "On 27.01.2012 20:30, Jayashankar K B wrote:\n> Hi Heikki Linnakangas: We are using series of Insert statements to insert the records into database.\n> Sending data in binary is not an option as the module that writes into DB has been finalized.\n> We do not have control over that.\n\nThat certainly limits your options.\n\n> Please let me know how we can proceed. On the net I couldn't get hold of any good example where Postgres has been used on limited Hardware system.\n\nI don't think there's anything particular in postgres that would make it \na poor choice on a small system, as far as CPU usage is concerned \nanyway. But inserting rows in a database is certainly slower than, say, \nwriting them into a flat file.\n\nAt what rate are you doing the INSERTs? And how fast would they need to \nbe? Remember that it's normal that while the INSERTs are running, \npostgres will use all the CPU it can to process them as fast as \npossible. So the question is, at what rate do they need to be processed \nto meet your target. Lowering the process priority with 'nice' might \nhelp too, to give the other important processes priority over postgres.\n\nThe easiest way to track down where the time is spent would be to run a \nprofiler, if that's possible on your platform.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 27 Jan 2012 21:56:53 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
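For illustration, a minimal sketch of the 'nice' idea mentioned in the message above; the data directory path and the nice levels are assumptions and depend on how postgres is actually started on the board:

# start the postmaster at a lower CPU priority (level 10 is only an example)
nice -n 10 pg_ctl -D /path/to/pgdata start

# or lower the priority of backends that are already running
renice +10 -p $(pgrep -u postgres -d ' ' postgres)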
{
"msg_contents": "On Fri, Jan 27, 2012 at 4:56 PM, Heikki Linnakangas\n<[email protected]> wrote:\n> I don't think there's anything particular in postgres that would make it a\n> poor choice on a small system, as far as CPU usage is concerned anyway. But\n> inserting rows in a database is certainly slower than, say, writing them\n> into a flat file.\n\nHow did you install postgres?\nDid you build it?\nWhich configure flags did you use?\nExactly which m68k cpu is it? (it does matter)\n\nFor instance...\n\nwiki: \"However, a significant difference is that the 68060 FPU is not\npipelined and is therefore up to three times slower than the Pentium\nin floating point applications\"\n\nThis means, if you don't configure the build correctly, you will get\nreally sub-optimal code. Modern versions are optimized for modern\ncpus.\nOf utmost importance, I would imagine, is the binary format chosen for\npg data types (floating types especially, if you use them).\n",
"msg_date": "Fri, 27 Jan 2012 23:23:45 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
{
"msg_contents": "On Fri, Jan 27, 2012 at 6:34 AM, Jayashankar K B\n<[email protected]> wrote:\n> Hi,\n>\n> We are having an embedded system with a freescale m68k architecture based\n> micro-controller, 256MB RAM running a customized version of Slackware 12\n> linux.\n>\n> It’s a relatively modest Hardware.\n>\n> We have installed postgres 9.1 as our database engine. While testing, we\n> found that the Postgres operations take more than 70% of CPU and the average\n> also stays above 40%.\n\nNot to dissuade you from using pgsql, but have you tried other dbs\nlike the much simpler SQL Lite?\n",
"msg_date": "Fri, 27 Jan 2012 20:07:31 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
{
"msg_contents": "Hi,\n\nThe number of inserts into the database would be a minimum of 3000 records in one operation.. We do not have any stringent requirement of writing speed.\nSo we could make do with a slower write speed as long as the CPU usage is not heavy... :)\nWe will try reducing the priority and check once.\nOur database file is located on a class 2 SD Card. So it is understandable if there is lot of IO activity and speed is less.\nBut we are stumped by the amount of CPU Postgres is eating up.\nAny configuration settings we could check up? Given our Hardware config, are following settings ok?\nShared Buffers: 24MB\nEffective Cache Size: 128MB\n\nWe are not experienced with database stuff. So some expert suggestions would be helpful :)\n\nThanks and Regards\nJayashankar\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Heikki Linnakangas\nSent: Saturday, January 28, 2012 1:27 AM\nTo: Jayashankar K B\nCc: Andy Colson; [email protected]\nSubject: Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.\n\nOn 27.01.2012 20:30, Jayashankar K B wrote:\n> Hi Heikki Linnakangas: We are using series of Insert statements to insert the records into database.\n> Sending data in binary is not an option as the module that writes into DB has been finalized.\n> We do not have control over that.\n\nThat certainly limits your options.\n\n> Please let me know how we can proceed. On the net I couldn't get hold of any good example where Postgres has been used on limited Hardware system.\n\nI don't think there's anything particular in postgres that would make it a poor choice on a small system, as far as CPU usage is concerned anyway. But inserting rows in a database is certainly slower than, say, writing them into a flat file.\n\nAt what rate are you doing the INSERTs? And how fast would they need to be? Remember that it's normal that while the INSERTs are running, postgres will use all the CPU it can to process them as fast as possible. So the question is, at what rate do they need to be processed to meet your target. Lowering the process priority with 'nice' might help too, to give the other important processes priority over postgres.\n\nThe easiest way to track down where the time is spent would be to run a profiler, if that's possible on your platform.\n\n--\n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\nLarsen & Toubro Limited\n\nwww.larsentoubro.com\n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n",
"msg_date": "Sat, 28 Jan 2012 17:11:53 +0000",
"msg_from": "Jayashankar K B <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded\n hardware."
},
{
"msg_contents": "Hi,\n\nI downloaded the source code and cross compiled it into a relocatable package and copied it to the device.\nLTIB was the cross-compile tool chain that was used. Controller is coldfire MCF54418 CPU.\nHere is the configure options I used.\n\n./configure CC=/opt/freescale/usr/local/gcc-4.4.54-eglibc-2.10.54/m68k-linux/bin/m68k-linux-gnu-gcc CFLAGS='-fmessage-length=0 -fpack-struct -mcpu=54418 -msoft-float' --host=i686-pc-linux-gnu --target=m68k-linux-gnu --prefix=/home/jayashankar/databases/Postgre_8.4.9_relocatable/\n\nAny other special flags that could be of help to us?\n\nThanks and Regards\nJayashankar\n\n-----Original Message-----\nFrom: Claudio Freire [mailto:[email protected]]\nSent: Saturday, January 28, 2012 7:54 AM\nTo: Heikki Linnakangas\nCc: Jayashankar K B; Andy Colson; [email protected]\nSubject: Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.\n\nOn Fri, Jan 27, 2012 at 4:56 PM, Heikki Linnakangas <[email protected]> wrote:\n> I don't think there's anything particular in postgres that would make\n> it a poor choice on a small system, as far as CPU usage is concerned\n> anyway. But inserting rows in a database is certainly slower than,\n> say, writing them into a flat file.\n\nHow did you install postgres?\nDid you build it?\nWhich configure flags did you use?\nExactly which m68k cpu is it? (it does matter)\n\nFor instance...\n\nwiki: \"However, a significant difference is that the 68060 FPU is not pipelined and is therefore up to three times slower than the Pentium in floating point applications\"\n\nThis means, if you don't configure the build correctly, you will get really sub-optimal code. Modern versions are optimized for modern cpus.\nOf utmost importance, I would imagine, is the binary format chosen for pg data types (floating types especially, if you use them).\n\n\nLarsen & Toubro Limited\n\nwww.larsentoubro.com\n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n",
"msg_date": "Sat, 28 Jan 2012 17:21:18 +0000",
"msg_from": "Jayashankar K B <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded\n hardware."
},
{
"msg_contents": "Hello,\n\nOne thing you may look at are the index and constraints on the\nrelations. If you have multiple constraints or index this may add\nCPU time on each insert. You may try to drop the index, do a bulk\nload, and then recreate the index. This may (or may not) reduce the\ntotal time / CPU but it could allow you to push a bulk insert to a\nspecific time. It would be good to use \"COPY\", or at least give it\na test to see if it is worth it.\n\nIf removing the index does significantly help with the insert, then\nyou may also try a different index (HASH or B-Tree, GIST). It may be\npossible that a specific index creation does not work efficiently on\nthat architecture...\nhttp://www.postgresql.org/docs/9.1/static/sql-createindex.html\n\nDeron\n\n\n\nOn Sat, Jan 28, 2012 at 10:21 AM, Jayashankar K B\n<[email protected]> wrote:\n> Hi,\n>\n> I downloaded the source code and cross compiled it into a relocatable package and copied it to the device.\n> LTIB was the cross-compile tool chain that was used. Controller is coldfire MCF54418 CPU.\n> Here is the configure options I used.\n>\n> ./configure CC=/opt/freescale/usr/local/gcc-4.4.54-eglibc-2.10.54/m68k-linux/bin/m68k-linux-gnu-gcc CFLAGS='-fmessage-length=0 -fpack-struct -mcpu=54418 -msoft-float' --host=i686-pc-linux-gnu --target=m68k-linux-gnu --prefix=/home/jayashankar/databases/Postgre_8.4.9_relocatable/\n>\n> Any other special flags that could be of help to us?\n>\n> Thanks and Regards\n> Jayashankar\n>\n> -----Original Message-----\n> From: Claudio Freire [mailto:[email protected]]\n> Sent: Saturday, January 28, 2012 7:54 AM\n> To: Heikki Linnakangas\n> Cc: Jayashankar K B; Andy Colson; [email protected]\n> Subject: Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.\n>\n> On Fri, Jan 27, 2012 at 4:56 PM, Heikki Linnakangas <[email protected]> wrote:\n>> I don't think there's anything particular in postgres that would make\n>> it a poor choice on a small system, as far as CPU usage is concerned\n>> anyway. But inserting rows in a database is certainly slower than,\n>> say, writing them into a flat file.\n>\n> How did you install postgres?\n> Did you build it?\n> Which configure flags did you use?\n> Exactly which m68k cpu is it? (it does matter)\n>\n> For instance...\n>\n> wiki: \"However, a significant difference is that the 68060 FPU is not pipelined and is therefore up to three times slower than the Pentium in floating point applications\"\n>\n> This means, if you don't configure the build correctly, you will get really sub-optimal code. Modern versions are optimized for modern cpus.\n> Of utmost importance, I would imagine, is the binary format chosen for pg data types (floating types especially, if you use them).\n>\n>\n> Larsen & Toubro Limited\n>\n> www.larsentoubro.com\n>\n> This Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 28 Jan 2012 10:56:54 -0700",
"msg_from": "Deron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
{
"msg_contents": "On Fri, Jan 27, 2012 at 10:30 AM, Jayashankar K B\n<[email protected]> wrote:\n> Hi Heikki Linnakangas: We are using series of Insert statements to insert the records into database.\n> Sending data in binary is not an option as the module that writes into DB has been finalized.\n> We do not have control over that.\n>\n> Hi Andy: As of now, there are no triggers in the table.\n\nWhat about indexes?\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 28 Jan 2012 10:16:27 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
{
"msg_contents": "On Sat, Jan 28, 2012 at 2:21 PM, Jayashankar K B\n<[email protected]> wrote:\n>\n> ./configure CC=/opt/freescale/usr/local/gcc-4.4.54-eglibc-2.10.54/m68k-linux/bin/m68k-linux-gnu-gcc CFLAGS='-fmessage-length=0 -fpack-struct -mcpu=54418 -msoft-float' --host=i686-pc-linux-gnu --target=m68k-linux-gnu --prefix=/home/jayashankar/databases/Postgre_8.4.9_relocatable/\n>\n> Any other special flags that could be of help to us?\n\nWell, it's a tough issue, because you'll have to test every change to\nsee if it really makes a difference or not.\nBut you might try --disable-float8-byval, --disable-spinlocks. On the\ncompiler front (CFLAGS), you should have -mtune=54418 (or perhaps\n-mtune=cfv4) (-march and -mcpu don't imply -mtune), and even perhaps\n-O2 or -O3.\n\nI also see you're specifying -msoft-float. So that's probably your\nproblem, any floating point arithmetic you're doing is killing you.\nBut without access to the software in order to change the data types,\nyou're out of luck in that department.\n",
"msg_date": "Sat, 28 Jan 2012 16:37:14 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
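For illustration, one way the build-flag suggestions above could be folded into the original invocation; every added flag here is an untested assumption and would have to be re-verified against the actual ColdFire toolchain and PostgreSQL version:

./configure CC=/opt/freescale/usr/local/gcc-4.4.54-eglibc-2.10.54/m68k-linux/bin/m68k-linux-gnu-gcc \
  CFLAGS='-O2 -fmessage-length=0 -mcpu=54418 -mtune=cfv4 -msoft-float' \
  --host=i686-pc-linux-gnu --target=m68k-linux-gnu \
  --disable-spinlocks --disable-float8-byval \
  --prefix=/home/jayashankar/databases/Postgre_8.4.9_relocatable/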
{
"msg_contents": "If you can batch the inserts into groups (of say 10 to 100) it might \nhelp performance - i.e:\n\nInstead of\n\nINSERT INTO table VALUES(...);\nINSERT INTO table VALUES(...);\n...\nINSERT INTO table VALUES(...);\n\ndo\n\nINSERT INTO table VALUES(...),(...),...,(...);\n\nThis reduces the actual number of INSERT calls, which can be quite a win.\n\nRegards\n\nMark\n\n\nOn 28/01/12 07:30, Jayashankar K B wrote:\n> Hi Heikki Linnakangas: We are using series of Insert statements to insert the records into database.\n> Sending data in binary is not an option as the module that writes into DB has been finalized.\n> We do not have control over that.\n>\n>\n",
"msg_date": "Sun, 29 Jan 2012 12:58:04 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
{
"msg_contents": "Greetings,\n\nOn Sat, Jan 28, 2012 at 12:51 PM, Jayashankar K B\n<[email protected]> wrote:\n> Hi,\n>\n> I downloaded the source code and cross compiled it into a relocatable package and copied it to the device.\n> LTIB was the cross-compile tool chain that was used. Controller is coldfire MCF54418 CPU.\n> Here is the configure options I used.\n\nOk, no floating point, and just ~250MHz... small. Anyway, lets not\ntalk about hardware options, because you already have it.\n\nAbout kernel, I'm not sure if on this arch you have the option, but\ndid you enable \"PREEMPT\" kernel config option? (on menuconfig:\n\"Preemptible Kernel (Low-Latency Desktop)\").... Or, is that a RT\nkernel?\n\nWith such a small CPU, almost any DB engine you put there will be\nCPU-hungry, but if your CPU usage is under 95%, you know you still\nhave some CPU to spare, on the other hand, if you are 100% CPU, you\nhave to evaluate required response time, and set priorities\naccordingly.. However, I have found that, even with processes with\nnice level 19 using 100% CPU, other nice level 0 processes will\nslow-down unless I set PREEMPT option to on kernel compile options\n(other issue are IO wait times, at least on my application that uses\nCF can get quite high).\n\nSincerely,\n\nIldefonso Camargo\n",
"msg_date": "Sun, 29 Jan 2012 10:16:35 -0430",
"msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
},
{
"msg_contents": "On Sat, Jan 28, 2012 at 19:11, Jayashankar K B\n<[email protected]> wrote:\n> But we are stumped by the amount of CPU Postgres is eating up.\n\nYou still haven't told us *how* slow it actually is and how fast you\nneed it to be? What's your database layout like (tables, columns,\nindexes, foreign keys)? What do the queries look like that you have\nproblems with?\n\n> Our database file is located on a class 2 SD Card. So it is understandable if there is lot of IO activity and speed is less.\n\nBeware that most SD cards are unfit for database write workloads,\nsince they only perform very basic wear levelling (in my experience\nanyway -- things might have changed, but I'm doubtful). It's a matter\nof time before you wear out some frequently-written blocks and they\nstart returning I/O errors or corrupted data.\n\nIf you can spare the disk space, increase checkpoint_segments, as that\nmeans at least WAL writes are spread out over a larger number of\nblocks. (But heap/index writes are still a problem)\n\nThey can also corrupt your data if you lose power in the middle of a\nwrite -- since they use much larger physical block sizes than regular\nhard drives and it can lose the whole block, which file systems or\nPostgres are not designed to handle. They also tend to not respect\nflush/barrier requests that are required for database consistency.\n\nCertainly you should do such power-loss tests before you release your\nproduct. I've built an embedded platform with a database. Due to disk\ncorruptions, in the end I opted for mounting all file systems\nread-only and keeping the database only in RAM.\n\n> Any configuration settings we could check up?\n\nFor one, you should reduce max_connections to a more reasonable number\n-- I'd guess you don't need more than 5 or 10 concurrent connections.\n\nAlso set synchronous_commit=off; this means that you may lose some\ncommitted transactions after power loss, but I think with SD cards all\nbets are off anyway.\n\nRegards,\nMarti\n",
"msg_date": "Mon, 30 Jan 2012 12:06:56 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgress is taking lot of CPU on our embedded hardware."
}
] |
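A postgresql.conf sketch of the settings discussed in the thread above for the 256MB board; the numbers are illustrative starting points rather than recommendations, and synchronous_commit = off trades durability for CPU and I/O:

max_connections = 10            # a handful of concurrent connections is enough
shared_buffers = 24MB           # as already configured on the device
checkpoint_segments = 16        # spreads WAL writes over more blocks
synchronous_commit = off        # may lose recently committed transactions on power loss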
[
{
"msg_contents": "Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.\nFurthermore, let's say I have a machine with sufficient memory for me\nto set the work_mem and maintenance_work_mem to 20GB (just for this\nsession).\nWhen I issue a CLUSTER using one of the indices, I see PostgreSQL (by\nway of strace) performing an index scan which amounts to large\nquantities of random I/O.\nIn my case, that means it takes a very, very long time. PostgreSQL is\nlargely at defaults, except for a 2GB shared_buffers and a few\nunrelated changes. The system itself has 32GB of physical RAM and has\nplenty free.\nWhy didn't PostgreSQL just read the table into memory (and the\ninteresting index) as a sequential scan, sort, and then write it out?\nIt seems like there would be more than enough memory for that. The\nsequential I/O rate on this machine is 50-100x the random I/O rate.\n\nI'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux 6.1.\n\n-- \nJon\n",
"msg_date": "Fri, 27 Jan 2012 11:43:10 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "regarding CLUSTER and HUGE work_mem / maintenance_work_mem"
},
{
"msg_contents": "On 27.01.2012 19:43, Jon Nelson wrote:\n> Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.\n> Furthermore, let's say I have a machine with sufficient memory for me\n> to set the work_mem and maintenance_work_mem to 20GB (just for this\n> session).\n> When I issue a CLUSTER using one of the indices, I see PostgreSQL (by\n> way of strace) performing an index scan which amounts to large\n> quantities of random I/O.\n> In my case, that means it takes a very, very long time. PostgreSQL is\n> largely at defaults, except for a 2GB shared_buffers and a few\n> unrelated changes. The system itself has 32GB of physical RAM and has\n> plenty free.\n> Why didn't PostgreSQL just read the table into memory (and the\n> interesting index) as a sequential scan, sort, and then write it out?\n> It seems like there would be more than enough memory for that. The\n> sequential I/O rate on this machine is 50-100x the random I/O rate.\n>\n> I'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux 6.1.\n\nThe suppport for doing a seqscan+sort in CLUSTER was introduced in \nversion 9.1. Before that, CLUSTER always did an indexscan. See release \nnotes: http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN107416\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 27 Jan 2012 20:05:06 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regarding CLUSTER and HUGE work_mem / maintenance_work_mem"
},
{
"msg_contents": "On Fri, Jan 27, 2012 at 12:05 PM, Heikki Linnakangas\n<[email protected]> wrote:\n> On 27.01.2012 19:43, Jon Nelson wrote:\n>>\n>> Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.\n>> Furthermore, let's say I have a machine with sufficient memory for me\n>> to set the work_mem and maintenance_work_mem to 20GB (just for this\n>> session).\n>> When I issue a CLUSTER using one of the indices, I see PostgreSQL (by\n>> way of strace) performing an index scan which amounts to large\n>> quantities of random I/O.\n>> In my case, that means it takes a very, very long time. PostgreSQL is\n>> largely at defaults, except for a 2GB shared_buffers and a few\n>> unrelated changes. The system itself has 32GB of physical RAM and has\n>> plenty free.\n>> Why didn't PostgreSQL just read the table into memory (and the\n>> interesting index) as a sequential scan, sort, and then write it out?\n>> It seems like there would be more than enough memory for that. The\n>> sequential I/O rate on this machine is 50-100x the random I/O rate.\n>>\n>> I'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux\n>> 6.1.\n>\n>\n> The suppport for doing a seqscan+sort in CLUSTER was introduced in version\n> 9.1. Before that, CLUSTER always did an indexscan. See release notes:\n> http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN107416\n\nThat's what I get for digging through the source (git) but working\nwith 8.4.10, on a Friday, at the end of a long week.\nThanks for pointing that out to somebody that should have known better.\n\n\n-- \nJon\n",
"msg_date": "Fri, 27 Jan 2012 20:34:16 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: regarding CLUSTER and HUGE work_mem / maintenance_work_mem"
},
{
"msg_contents": "On Fri, Jan 27, 2012 at 7:34 PM, Jon Nelson <[email protected]> wrote:\n> On Fri, Jan 27, 2012 at 12:05 PM, Heikki Linnakangas\n> <[email protected]> wrote:\n>> On 27.01.2012 19:43, Jon Nelson wrote:\n>>>\n>>> Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.\n>>> Furthermore, let's say I have a machine with sufficient memory for me\n>>> to set the work_mem and maintenance_work_mem to 20GB (just for this\n>>> session).\n>>> When I issue a CLUSTER using one of the indices, I see PostgreSQL (by\n>>> way of strace) performing an index scan which amounts to large\n>>> quantities of random I/O.\n>>> In my case, that means it takes a very, very long time. PostgreSQL is\n>>> largely at defaults, except for a 2GB shared_buffers and a few\n>>> unrelated changes. The system itself has 32GB of physical RAM and has\n>>> plenty free.\n>>> Why didn't PostgreSQL just read the table into memory (and the\n>>> interesting index) as a sequential scan, sort, and then write it out?\n>>> It seems like there would be more than enough memory for that. The\n>>> sequential I/O rate on this machine is 50-100x the random I/O rate.\n>>>\n>>> I'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux\n>>> 6.1.\n>>\n>>\n>> The suppport for doing a seqscan+sort in CLUSTER was introduced in version\n>> 9.1. Before that, CLUSTER always did an indexscan. See release notes:\n>> http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN107416\n>\n> That's what I get for digging through the source (git) but working\n> with 8.4.10, on a Friday, at the end of a long week.\n> Thanks for pointing that out to somebody that should have known better.\n\nBut if you're stuck on < 9.1 for a while, the workaround is to cluster\nthe table yourself by using a select * ... order by pkey. For\nrandomly distributed tables this is far faster for a first time\ncluster. After that, subsequent clusters won't have as much work to\ndo and the older method for clustering should work ok.\n\nIt's kinda funny to have a complaint against pgsql for NOT using a\nsequential scan. Most DBAs that come from other DBAs are upset when\nit doesn't use an index.\n",
"msg_date": "Fri, 27 Jan 2012 20:04:00 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: regarding CLUSTER and HUGE work_mem / maintenance_work_mem"
}
] |
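A rough sketch of the pre-9.1 manual "cluster" workaround described in the thread above; the table and column names are made up, and any indexes, constraints and permissions on the copy have to be recreated afterwards:

BEGIN;
CREATE TABLE big_table_sorted (LIKE big_table INCLUDING DEFAULTS);
INSERT INTO big_table_sorted SELECT * FROM big_table ORDER BY pkey_column;
DROP TABLE big_table;                              -- takes an exclusive lock
ALTER TABLE big_table_sorted RENAME TO big_table;
COMMIT;
-- then CREATE INDEX ... and ANALYZE big_table;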
[
{
"msg_contents": "Hi list,\n\nI am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP (Xenserver) host.\nThis is a HP server with 8GB, Dual Quad Core, and 2 SATA in RAID-1.\n\nThe problem is: it's running very slow compared to running it on bare metal, and\nthe VM is starving for I/O bandwidht, so other processes (slow to a crawl.\nThis does not happen on bare metal.\n\nI had to replace the server with a bare-metal one, I could not troubleshoot in production.\nAlso it was hard to emulte the workload for that VM in a test environment, so I\nconcentrated on PostgreSQLand why it apparently generated so much I/O.\n\nBefore I start I should confess having only spotty experience with Xen and PostgreSQL\nperformance testing.\n\nI setup a test Xen server created a CentOS5.7 VM with out-of-the-box PostgreSQL and ran:\npgbench -i pgbench ; time pgbench -t 100000 pgbench\nThis ran for 3:28. Then I replaced the SATA HD with an SSD disk, and reran the test.\nIt ran for 2:46. This seemed strange as I expected the run to finish much faster.\n\nI reran the first test on the SATA, and looked at CPU and I/O use. The CPU was not used\ntoo much in both the VM (30%) and in dom0 (10%). The I/O use was not much as well,\naround 8MB/sec in the VM. (Couldn't use iotop in dom0, because of missing kernel support\nin XCP 1.1).\n\nIt reran the second test on SSD, and experienced almost the same CPU, and I/O load.\n\n(I now probably need to run the same test on bare metal, but didn't get to that yet,\nall this already ruined my weekend.)\n\nNow I came this far, can anybody give me some pointers? Why doesn't pgbench saturate\neither the CPU or the I/O? Why does using SSD only change the performance this much?\n\nThanks,\nRon\n\n\n\n\n",
"msg_date": "Sun, 29 Jan 2012 23:48:52 +0100",
"msg_from": "Ron Arts <[email protected]>",
"msg_from_op": true,
"msg_subject": "Having I/O problems in simple virtualized environment"
},
{
"msg_contents": "On Sun, Jan 29, 2012 at 7:48 PM, Ron Arts <[email protected]> wrote:\n> Hi list,\n>\n> I am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP (Xenserver) host.\n> This is a HP server with 8GB, Dual Quad Core, and 2 SATA in RAID-1.\n>\n> The problem is: it's running very slow compared to running it on bare metal, and\n> the VM is starving for I/O bandwidht, so other processes (slow to a crawl.\n> This does not happen on bare metal.\n\nMy experience with xen and postgres, which we use for testing upgrades\nbefore doing them on production servers, never in production per-se,\nis that I/O is very costly on CPU cycles because of the necessary talk\nbetween domU and dom0.\n\nIt's is worthwhile to pin at least one core for exclusive use of the\ndom0, or at least only let low-load VMs use that core. That frees up\ncycles on the dom0, which is the one handling all I/O.\n\nYou'll still have lousy I/O. But it will suck a little less.\n",
"msg_date": "Sun, 29 Jan 2012 20:01:08 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Having I/O problems in simple virtualized environment"
},
{
"msg_contents": "On Sun, Jan 29, 2012 at 6:18 PM, Ron Arts <[email protected]> wrote:\n> Hi list,\n>\n> I am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP (Xenserver) host.\n> This is a HP server with 8GB, Dual Quad Core, and 2 SATA in RAID-1.\n>\n> The problem is: it's running very slow compared to running it on bare metal, and\n> the VM is starving for I/O bandwidht, so other processes (slow to a crawl.\n> This does not happen on bare metal.\n>\n> I had to replace the server with a bare-metal one, I could not troubleshoot in production.\n> Also it was hard to emulte the workload for that VM in a test environment, so I\n> concentrated on PostgreSQLand why it apparently generated so much I/O.\n>\n> Before I start I should confess having only spotty experience with Xen and PostgreSQL\n> performance testing.\n>\n> I setup a test Xen server created a CentOS5.7 VM with out-of-the-box PostgreSQL and ran:\n> pgbench -i pgbench ; time pgbench -t 100000 pgbench\n> This ran for 3:28. Then I replaced the SATA HD with an SSD disk, and reran the test.\n> It ran for 2:46. This seemed strange as I expected the run to finish much faster.\n>\n> I reran the first test on the SATA, and looked at CPU and I/O use. The CPU was not used\n> too much in both the VM (30%) and in dom0 (10%). The I/O use was not much as well,\n> around 8MB/sec in the VM. (Couldn't use iotop in dom0, because of missing kernel support\n> in XCP 1.1).\n>\n> It reran the second test on SSD, and experienced almost the same CPU, and I/O load.\n>\n> (I now probably need to run the same test on bare metal, but didn't get to that yet,\n> all this already ruined my weekend.)\n>\n> Now I came this far, can anybody give me some pointers? Why doesn't pgbench saturate\n> either the CPU or the I/O? Why does using SSD only change the performance this much?\n\nOk, one point: Which IO scheduler are you using? (on dom0 and on the VM).\n",
"msg_date": "Sun, 29 Jan 2012 21:22:26 -0430",
"msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Having I/O problems in simple virtualized environment"
},
{
"msg_contents": "Op 30-01-12 02:52, Jose Ildefonso Camargo Tolosa schreef:\n> On Sun, Jan 29, 2012 at 6:18 PM, Ron Arts <[email protected]> wrote:\n>> Hi list,\n>>\n>> I am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP (Xenserver) host.\n>> This is a HP server with 8GB, Dual Quad Core, and 2 SATA in RAID-1.\n>>\n>> The problem is: it's running very slow compared to running it on bare metal, and\n>> the VM is starving for I/O bandwidht, so other processes (slow to a crawl.\n>> This does not happen on bare metal.\n>>\n>> I had to replace the server with a bare-metal one, I could not troubleshoot in production.\n>> Also it was hard to emulte the workload for that VM in a test environment, so I\n>> concentrated on PostgreSQLand why it apparently generated so much I/O.\n>>\n>> Before I start I should confess having only spotty experience with Xen and PostgreSQL\n>> performance testing.\n>>\n>> I setup a test Xen server created a CentOS5.7 VM with out-of-the-box PostgreSQL and ran:\n>> pgbench -i pgbench ; time pgbench -t 100000 pgbench\n>> This ran for 3:28. Then I replaced the SATA HD with an SSD disk, and reran the test.\n>> It ran for 2:46. This seemed strange as I expected the run to finish much faster.\n>>\n>> I reran the first test on the SATA, and looked at CPU and I/O use. The CPU was not used\n>> too much in both the VM (30%) and in dom0 (10%). The I/O use was not much as well,\n>> around 8MB/sec in the VM. (Couldn't use iotop in dom0, because of missing kernel support\n>> in XCP 1.1).\n>>\n>> It reran the second test on SSD, and experienced almost the same CPU, and I/O load.\n>>\n>> (I now probably need to run the same test on bare metal, but didn't get to that yet,\n>> all this already ruined my weekend.)\n>>\n>> Now I came this far, can anybody give me some pointers? Why doesn't pgbench saturate\n>> either the CPU or the I/O? Why does using SSD only change the performance this much?\n> \n> Ok, one point: Which IO scheduler are you using? (on dom0 and on the VM).\n\nOk, first dom0:\n\nFor the SSD (hda):\n# cat /sys/block/sda/queue/scheduler\n[noop] anticipatory deadline cfq\n\nFor the SATA:\n# cat /sys/block/sdb/queue/scheduler\nnoop anticipatory deadline [cfq]\n\nThen in the VM:\n\n# cat /sys/block/xvda/queue/scheduler\n[noop] anticipatory deadline cfq\n\nRon\n",
"msg_date": "Mon, 30 Jan 2012 08:41:49 +0100",
"msg_from": "Ron Arts <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Having I/O problems in simple virtualized environment"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 3:11 AM, Ron Arts <[email protected]> wrote:\n> Op 30-01-12 02:52, Jose Ildefonso Camargo Tolosa schreef:\n>> On Sun, Jan 29, 2012 at 6:18 PM, Ron Arts <[email protected]> wrote:\n>>> Hi list,\n>>>\n>>> I am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP (Xenserver) host.\n>>> This is a HP server with 8GB, Dual Quad Core, and 2 SATA in RAID-1.\n>>>\n>>> The problem is: it's running very slow compared to running it on bare metal, and\n>>> the VM is starving for I/O bandwidht, so other processes (slow to a crawl.\n>>> This does not happen on bare metal.\n>>>\n>>> I had to replace the server with a bare-metal one, I could not troubleshoot in production.\n>>> Also it was hard to emulte the workload for that VM in a test environment, so I\n>>> concentrated on PostgreSQLand why it apparently generated so much I/O.\n>>>\n>>> Before I start I should confess having only spotty experience with Xen and PostgreSQL\n>>> performance testing.\n>>>\n>>> I setup a test Xen server created a CentOS5.7 VM with out-of-the-box PostgreSQL and ran:\n>>> pgbench -i pgbench ; time pgbench -t 100000 pgbench\n>>> This ran for 3:28. Then I replaced the SATA HD with an SSD disk, and reran the test.\n>>> It ran for 2:46. This seemed strange as I expected the run to finish much faster.\n>>>\n>>> I reran the first test on the SATA, and looked at CPU and I/O use. The CPU was not used\n>>> too much in both the VM (30%) and in dom0 (10%). The I/O use was not much as well,\n>>> around 8MB/sec in the VM. (Couldn't use iotop in dom0, because of missing kernel support\n>>> in XCP 1.1).\n>>>\n>>> It reran the second test on SSD, and experienced almost the same CPU, and I/O load.\n>>>\n>>> (I now probably need to run the same test on bare metal, but didn't get to that yet,\n>>> all this already ruined my weekend.)\n>>>\n>>> Now I came this far, can anybody give me some pointers? Why doesn't pgbench saturate\n>>> either the CPU or the I/O? Why does using SSD only change the performance this much?\n>>\n>> Ok, one point: Which IO scheduler are you using? (on dom0 and on the VM).\n>\n> Ok, first dom0:\n>\n> For the SSD (hda):\n> # cat /sys/block/sda/queue/scheduler\n> [noop] anticipatory deadline cfq\n\nUse deadline.\n\n>\n> For the SATA:\n> # cat /sys/block/sdb/queue/scheduler\n> noop anticipatory deadline [cfq]\n\nUse deadline too (this is specially true if sdb is a raid array).\n\n>\n> Then in the VM:\n>\n> # cat /sys/block/xvda/queue/scheduler\n> [noop] anticipatory deadline cfq\n\nShould be ok for the VM.\n",
"msg_date": "Tue, 31 Jan 2012 16:43:12 -0430",
"msg_from": "Jose Ildefonso Camargo Tolosa <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Having I/O problems in simple virtualized environment"
}
] |
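For reference, a sketch of applying the deadline scheduler in dom0 at runtime, using the device names shown earlier in the thread; the change does not persist across reboots unless added to the boot configuration:

echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler
cat /sys/block/sda/queue/scheduler /sys/block/sdb/queue/scheduler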
[
{
"msg_contents": "Hi all,\n\nI am using Postgresql database for our project and doing some\nperformance testing. We need to insert millions of record with indexed\ncolumns. We have 5 columns in table. I created index on integer only\nthen performance is good but when I created index on text column as\nwell then the performance reduced to 1/8th times. My question is how I\ncan improve performance when inserting data using index on text\ncolumn?\n\nThanks,\nSaurabh\n",
"msg_date": "Mon, 30 Jan 2012 01:27:30 -0800 (PST)",
"msg_from": "Saurabh <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to improve insert speed with index on text column"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 6:27 AM, Saurabh <[email protected]> wrote:\n> Hi all,\n>\n> I am using Postgresql database for our project and doing some\n> performance testing. We need to insert millions of record with indexed\n> columns. We have 5 columns in table. I created index on integer only\n> then performance is good but when I created index on text column as\n> well then the performance reduced to 1/8th times. My question is how I\n> can improve performance when inserting data using index on text\n> column?\n\nPost all the necessary details. Schema, table and index sizes, some config...\n\nAssuming your text column is a long one (long text), this results in\nreally big indices.\nAssuming you only search by equality, you can make it a lot faster by hashing.\nLast time I checked, hash indices were quite limited and performed\nbadly, but I've heard they improved quite a bit. If hash indices don't\nwork for you, you can always build them on top of btree indices by\nindexing on the expression hash(column) and comparing as hash(value) =\nhash(column) and value = column.\nOn a table indexed by URL I have, this improved things immensely. Both\nlookup and insertion times improved.\n",
"msg_date": "Mon, 30 Jan 2012 11:10:20 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
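A minimal sketch of the "btree over a hash" idea described in the message above, using PostgreSQL's built-in (but undocumented) hashtext() function and an assumed table companies(company_name text); the extra equality test guards against hash collisions:

CREATE INDEX companies_name_hash_idx ON companies (hashtext(company_name));

SELECT *
  FROM companies
 WHERE hashtext(company_name) = hashtext('Acme Corp')
   AND company_name = 'Acme Corp';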
{
"msg_contents": "On Mon, Jan 30, 2012 at 1:27 AM, Saurabh <[email protected]> wrote:\n> Hi all,\n>\n> I am using Postgresql database for our project and doing some\n> performance testing. We need to insert millions of record with indexed\n> columns. We have 5 columns in table. I created index on integer only\n> then performance is good but when I created index on text column as\n> well then the performance reduced to 1/8th times.\n\nInserting into a indexed table causes a lot of random access to the\nunderlying index (unless the data is inserted in an order which\ncorresponds to the index order of all indexes, which is not likely to\nhappen with multiple indexes). As soon as your indexes don't fit in\ncache, your performance will collapse.\n\nWhat if you don't have the integer index but just the text? What is\nthe average length of the data in the text field? Is your system CPU\nlimited or IO limited during the load?\n\n> My question is how I\n> can improve performance when inserting data using index on text\n> column?\n\nThe only \"magic\" answer is to drop the index and rebuild after the\ninsert. If that doesn't work for you, then you have to identify your\nbottleneck and fix it. That can't be done with just the information\nyou provide.\n\nCheers,\n\nJeff\n",
"msg_date": "Mon, 30 Jan 2012 07:24:36 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "Thank you for the information.\n\nSchema of table is:\n\nID bigint\ncompany_name text\ndata_set text\ntime timestamp\nDate date\n\nLength of company_name is not known so it is of datatype text. I need\nto build the index on company_name and ID. And then insert the\nrecords. I can not create the index after insertion because user can\nsearch the data as well while insertion.\n\nMachine is of 8 core, os centos6 and 8 GB of RAM.\n\nHere is my configuration:\n\nmax_connections = 100\nshared_buffers = 32MB\nwal_buffers = 1024KB\ncheckpoint_segments = 3\n\n",
"msg_date": "Mon, 30 Jan 2012 09:46:21 -0800 (PST)",
"msg_from": "Saurabh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 2:46 PM, Saurabh <[email protected]> wrote:\n> max_connections = 100\n> shared_buffers = 32MB\n> wal_buffers = 1024KB\n> checkpoint_segments = 3\n\nThat's a default config isn't it?\n\nYou'd do well to try and optimize it for your system. The defaults are\nreally, reeallly conservative.\n\nYou should also consider normalizing. I'm assuming company_name could\nbe company_id ? (ie: each will have many rows). Otherwise I cannot see\nhow you'd expect to be *constantly* inserting millions of rows. If\nit's a one-time initialization thing, just drop the indices and\nrecreate them as you've been suggested. If you create new records all\nthe time, I'd bet you'll also have many rows with the same\ncompany_name, so normalizing would be a clear win.\n",
"msg_date": "Mon, 30 Jan 2012 15:20:28 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On 1/30/2012 3:27 AM, Saurabh wrote:\n> Hi all,\n>\n> I am using Postgresql database for our project and doing some\n> performance testing. We need to insert millions of record with indexed\n> columns. We have 5 columns in table. I created index on integer only\n> then performance is good but when I created index on text column as\n> well then the performance reduced to 1/8th times. My question is how I\n> can improve performance when inserting data using index on text\n> column?\n>\n> Thanks,\n> Saurabh\n>\n\nDo it in a single transaction, and use COPY.\n\n-Andy\n",
"msg_date": "Mon, 30 Jan 2012 12:33:00 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
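A sketch of the single-transaction COPY approach suggested above; the table name and file path are assumptions (the thread's table has columns ID, company_name, data_set, time and date), and the file must be readable by the server process:

BEGIN;
COPY company_data (id, company_name, data_set, "time", "date")
    FROM '/path/to/load_file.csv' WITH CSV;
COMMIT;
-- from a client, psql's \copy variant reads the file locally instead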
{
"msg_contents": "On Mon, Jan 30, 2012 at 9:46 AM, Saurabh <[email protected]> wrote:\n> Thank you for the information.\n>\n> Schema of table is:\n>\n> ID bigint\n> company_name text\n> data_set text\n> time timestamp\n> Date date\n>\n> Length of company_name is not known so it is of datatype text. I need\n> to build the index on company_name and ID. And then insert the\n> records. I can not create the index after insertion because user can\n> search the data as well while insertion.\n>\n> Machine is of 8 core, os centos6 and 8 GB of RAM.\n>\n> Here is my configuration:\n>\n> shared_buffers = 32MB\n\nThat is very small for your server. I'd use at least 512MB, and maybe 2GB\n\n> wal_buffers = 1024KB\n\nIf you are using 9.1, I would have this set to the default of -1 and\nlet the database decide for itself what to use.\n",
"msg_date": "Mon, 30 Jan 2012 12:32:25 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "I changed the configuration in postgresql.conf. Following are the\nchanged parameters:\n\nshared_buffers = 1GB\nmaintenance_work_mem = 50MB\ncheckpoint_segments = 64\nwal_buffers = 5MB\nautovacuum = off\n\nInsert the records in the database and got a very good performance it\nis increased by 6 times.\n\nCan you please tell me the purpose of shared_buffer and\nmaintenance_work_mem parameter?\n\nThanks,\nSaurabh\n",
"msg_date": "Tue, 31 Jan 2012 01:29:35 -0800 (PST)",
"msg_from": "Saurabh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On 31 Leden 2012, 10:29, Saurabh wrote:\n> I changed the configuration in postgresql.conf. Following are the\n> changed parameters:\n>\n> shared_buffers = 1GB\n> maintenance_work_mem = 50MB\n> checkpoint_segments = 64\n> wal_buffers = 5MB\n> autovacuum = off\n>\n> Insert the records in the database and got a very good performance it\n> is increased by 6 times.\n>\n> Can you please tell me the purpose of shared_buffer and\n> maintenance_work_mem parameter?\n\nShared buffers is the cache maintained by PostgreSQL. All all the data\nthat you read/write need to go through shared buffers.\n\nMaintenance_work_mem specifies how much memory can \"maintenance tasks\"\n(e.g. autovacuum, reindex, etc.) use. This is similar to work_mem for\ncommon queries (sorting, grouping, ...).\n\nTomas\n\n",
"msg_date": "Tue, 31 Jan 2012 12:21:38 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "\n> Shared buffers is the cache maintained by PostgreSQL. All all the data\n> that you read/write need to go through shared buffers.\n\nWhile this is technically true, I need to point out that you generally\nincrease shared_buffers for high concurrency, and for reads, not for\nwrites, especially for row-at-a-time inserts. There's just not that\nmuch memory required (although more than the out-of-the-box defaults).\n\nI'd suggest increasing wal_buffers to 16MB, which is the maximum useful\namount, rather than 5MB.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 31 Jan 2012 10:46:40 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On Tue, Jan 31, 2012 at 12:46 PM, Josh Berkus <[email protected]> wrote:\n>\n>> Shared buffers is the cache maintained by PostgreSQL. All all the data\n>> that you read/write need to go through shared buffers.\n>\n> While this is technically true, I need to point out that you generally\n> increase shared_buffers for high concurrency, and for reads, not for\n> writes, especially for row-at-a-time inserts. There's just not that\n> much memory required (although more than the out-of-the-box defaults).\n>\n> I'd suggest increasing wal_buffers to 16MB, which is the maximum useful\n> amount, rather than 5MB.\n\nyeah -- postgresql.conf settings are not going to play a big role here unless:\n*) you defer index build to the end of the load, and do CREATE INDEX\nand crank maintenance_work_mem\n*) you are doing lots of transactions and relax your sync policy via\nsynchronous_commit\n\nwhat's almost certainly happening here is that the text index is\nwriting out a lot more data. what's the average length of your key?\n\nIf I'm inspecting sizes of tables/indexes which start with 'foo', I can do this:\npostgres=# select relname,\npg_size_pretty(pg_relation_size(relname::text)) from pg_class where\nrelname like 'foo%';\n relname | pg_size_pretty\n-----------+----------------\n foo | 40 kB\n foo_i_idx | 40 kB\n foo_t_idx | 40 kB\n\nWe'd like to see the numbers for your table/indexes in question.\n\nmerlin\n",
"msg_date": "Tue, 31 Jan 2012 14:20:45 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On Tue, Jan 31, 2012 at 10:46 AM, Josh Berkus <[email protected]> wrote:\n>\n>> Shared buffers is the cache maintained by PostgreSQL. All all the data\n>> that you read/write need to go through shared buffers.\n>\n> While this is technically true, I need to point out that you generally\n> increase shared_buffers for high concurrency, and for reads, not for\n> writes, especially for row-at-a-time inserts. There's just not that\n> much memory required (although more than the out-of-the-box defaults).\n\nWhen inserting rows in bulk (even just with inserts in a tight loop)\ninto indexed tables, I often see the performance collapse soon after\nthe active index size exceeds shared_buffers. Or at least,\nshared_buffers + however much dirty data the kernel is willing to\ntolerate. But that later value is hard to predict. Increasing the\nshared_buffers can really help a lot here. I'm sure this behavior\ndepends greatly on your IO subsystem.\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 31 Jan 2012 18:11:57 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 9:46 AM, Saurabh <[email protected]> wrote:\n> I can not create the index after insertion because user can\n> search the data as well while insertion.\n\nRemember, DDL is transactional in PostgreSQL. In principle, you\nshould be able to drop the index, do your inserts, and re-create the\nindex without affecting concurrent users, if you do all of that inside\nan explicit transaction. Doing the inserts inside a transaction may\nspeed them up, as well.\n\nrls\n\n-- \n:wq\n",
"msg_date": "Tue, 31 Jan 2012 19:29:18 -0800",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On Wed, Feb 1, 2012 at 12:29 AM, Rosser Schwarz\n<[email protected]> wrote:\n> Remember, DDL is transactional in PostgreSQL. In principle, you\n> should be able to drop the index, do your inserts, and re-create the\n> index without affecting concurrent users, if you do all of that inside\n> an explicit transaction. Doing the inserts inside a transaction may\n> speed them up, as well.\n\nCreating an index requires an update lock on the table, and an\nexclusive lock on the system catalog.\nEven though with \"CONCURRENTLY\" it's only for a short while.\nSo it does affect concurrent users.\n",
"msg_date": "Wed, 1 Feb 2012 00:49:09 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On Wed, Feb 1, 2012 at 12:49 AM, Claudio Freire <[email protected]> wrote:\n> On Wed, Feb 1, 2012 at 12:29 AM, Rosser Schwarz\n> <[email protected]> wrote:\n>> Remember, DDL is transactional in PostgreSQL. In principle, you\n>> should be able to drop the index, do your inserts, and re-create the\n>> index without affecting concurrent users, if you do all of that inside\n>> an explicit transaction. Doing the inserts inside a transaction may\n>> speed them up, as well.\n>\n> Creating an index requires an update lock on the table, and an\n> exclusive lock on the system catalog.\n> Even though with \"CONCURRENTLY\" it's only for a short while.\n> So it does affect concurrent users.\n\nForgot to mention that if you don't commit the drop, you see no\nperformance increase.\nSo:\n\nbegin\ndrop\ninsert\ncreate\ncommit\n\nDoes not work to improve performance. At all.\n",
"msg_date": "Wed, 1 Feb 2012 00:51:03 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On Tue, Jan 31, 2012 at 1:20 PM, Merlin Moncure <[email protected]> wrote:\n> yeah -- postgresql.conf settings are not going to play a big role here unless:\n> *) you defer index build to the end of the load, and do CREATE INDEX\n> and crank maintenance_work_mem\n> *) you are doing lots of transactions and relax your sync policy via\n> synchronous_commit\n\ncheckpoint segments sometimes helps for loading large amounts of data.\n\n> what's almost certainly happening here is that the text index is\n> writing out a lot more data. what's the average length of your key?\n\nYeah, the OP really needs to switch to hashes for the indexes. And\nlike another poster mentioned, it's often faster to use a btree of\nhashes than to use the built in hash index type.\n",
"msg_date": "Fri, 3 Feb 2012 13:54:36 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
}
] |
[
{
"msg_contents": "So, here's the query:\n\nSELECT private, COUNT(block_id) FROM blocks WHERE created > 'yesterday' AND\nshared IS FALSE GROUP BY private\n\nWhat confuses me is that though this is a largish table (millions of rows)\nwith constant writes, the query is over indexed columns of types timestamp\nand boolean so I would expect it to be very fast. The clause where created\n> 'yesterday' is there mostly to speed it up, but apparently it doesn't\nhelp much.\n\nHere's the *Full Table and Index Schema*:\n\nCREATE TABLE blocks\n(\n block_id character(24) NOT NULL,\n user_id character(24) NOT NULL,\n created timestamp with time zone,\n locale character varying,\n shared boolean,\n private boolean,\n moment_type character varying NOT NULL,\n user_agent character varying,\n inserted timestamp without time zone NOT NULL DEFAULT now(),\n networks character varying[],\n lnglat point,\n CONSTRAINT blocks_pkey PRIMARY KEY (block_id )\n)\n\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX blocks_created_idx\n ON blocks\n USING btree\n (created DESC NULLS LAST);\n\nCREATE INDEX blocks_lnglat_idx\n ON blocks\n USING gist\n (lnglat );\n\nCREATE INDEX blocks_networks_idx\n ON blocks\n USING btree\n (networks );\n\nCREATE INDEX blocks_private_idx\n ON blocks\n USING btree\n (private );\n\nCREATE INDEX blocks_shared_idx\n ON blocks\n USING btree\n (shared );\n\nHere's the results from *EXPLAIN ANALYZE:*\n\n\"HashAggregate (cost=156619.01..156619.02 rows=2 width=26) (actual\ntime=43131.154..43131.156 rows=2 loops=1)\"\n*\" -> Seq Scan on blocks (cost=0.00..156146.14 rows=472871 width=26)\n(actual time=274.881..42124.505 rows=562888 loops=1)\"\n**\" Filter: ((shared IS FALSE) AND (created > '2012-01-29\n00:00:00+00'::timestamp with time zone))\"\n**\"Total runtime: 43131.221 ms\"*\nI'm using *Postgres version:* 9.0.5 (courtesy of Heroku)\n\nAs for *History:* I've only recently started using this query, so there\nreally isn't any.\n\nAs for *Hardware*: I'm using Heroku's \"Ronin\" setup which involves 1.7 GB\nCache. Beyond that I don't really know.\n\nAs for *Maintenance Setup*: I let Heroku handle that, so I again, I don't\nreally know. FWIW though, vacuuming should not really be an issue (as I\nunderstand it) since I don't really do any updates or deletions. It's\npretty much all inserts and selects.\n\nAs for *WAL Configuration*: I'm afraid I don't even know what that is. The\nquery is normally run from a Python web server though the above explain was\nrun using pgAdmin3, though I doubt that's relevant.\n\nAs for *GUC Settings*: Again, I don't know what this is. Whatever Heroku\ndefaults to is what I'm using.\n\nThank you in advance!\n-Alessandro Gagliardi\n\nSo, here's the query:\nSELECT private, COUNT(block_id) FROM blocks WHERE created > 'yesterday' AND shared IS FALSE GROUP BY privateWhat confuses me is that though this is a largish table (millions of rows) with constant writes, the query is over indexed columns of types timestamp and boolean so I would expect it to be very fast. The clause where created > 'yesterday' is there mostly to speed it up, but apparently it doesn't help much. 
\nHere's the Full Table and Index Schema:\nCREATE TABLE blocks(\n block_id character(24) NOT NULL, user_id character(24) NOT NULL, created timestamp with time zone,\n locale character varying, shared boolean, private boolean,\n moment_type character varying NOT NULL, user_agent character varying, inserted timestamp without time zone NOT NULL DEFAULT now(),\n networks character varying[], lnglat point, CONSTRAINT blocks_pkey PRIMARY KEY (block_id )\n)WITH ( OIDS=FALSE\n);CREATE INDEX blocks_created_idx ON blocks\n USING btree (created DESC NULLS LAST);CREATE INDEX blocks_lnglat_idx\n ON blocks USING gist (lnglat );\nCREATE INDEX blocks_networks_idx ON blocks USING btree\n (networks );CREATE INDEX blocks_private_idx ON blocks\n USING btree (private );CREATE INDEX blocks_shared_idx\n ON blocks USING btree (shared );\nHere's the results from EXPLAIN ANALYZE:\n\"HashAggregate (cost=156619.01..156619.02 rows=2 width=26) (actual time=43131.154..43131.156 rows=2 loops=1)\"\" -> Seq Scan on blocks (cost=0.00..156146.14 rows=472871 width=26) (actual time=274.881..42124.505 rows=562888 loops=1)\"\n\" Filter: ((shared IS FALSE) AND (created > '2012-01-29 00:00:00+00'::timestamp with time zone))\"\"Total runtime: 43131.221 ms\"\nI'm using Postgres version: 9.0.5 (courtesy of Heroku)As for History: I've only recently started using this query, so there really isn't any.\nAs for Hardware: I'm using Heroku's \"Ronin\" setup which involves 1.7 GB Cache. Beyond that I don't really know.\nAs for Maintenance Setup: I let Heroku handle that, so I again, I don't really know. FWIW though, vacuuming should not really be an issue (as I understand it) since I don't really do any updates or deletions. It's pretty much all inserts and selects.\nAs for WAL Configuration: I'm afraid I don't even know what that is. The query is normally run from a Python web server though the above explain was run using pgAdmin3, though I doubt that's relevant.\nAs for GUC Settings: Again, I don't know what this is. Whatever Heroku defaults to is what I'm using.\nThank you in advance!-Alessandro Gagliardi",
"msg_date": "Mon, 30 Jan 2012 11:13:08 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why should such a simple query over indexed columns be so slow?"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 4:13 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> So, here's the query:\n>\n> SELECT private, COUNT(block_id) FROM blocks WHERE created > 'yesterday' AND\n> shared IS FALSE GROUP BY private\n>\n> What confuses me is that though this is a largish table (millions of rows)\n> with constant writes, the query is over indexed columns of types timestamp\n> and boolean so I would expect it to be very fast. The clause where created >\n> 'yesterday' is there mostly to speed it up, but apparently it doesn't help\n> much.\n\nThe number of rows touched is ~0.5M, and is correctly estimated, which\nwould lead me to believe PG estimates the index plan to be slower.\n\nYou could try by executing first \"set enable_seqscan=false;\" and then\nyour query with explain analyze again. You'll probably get an index\nscan, and you'll see both how it performs and how PG thought it would\nperform. Any mismatch between the two probably means you'll have to\nchange the planner tunables (the x_tuple_cost ones) to better match\nyour hardware.\n\n\n> As for Hardware: I'm using Heroku's \"Ronin\" setup which involves 1.7 GB\n> Cache. Beyond that I don't really know.\nsnip\n> As for GUC Settings: Again, I don't know what this is. Whatever Heroku\n> defaults to is what I'm using.\n\nAnd there's your problem. Without knowing/understanding those, you\nwon't get anywhere. I don't know what Heroku is, but you should find\nout both hardware details and PG configuration details.\n\n> As for Maintenance Setup: I let Heroku handle that, so I again, I don't\n> really know. FWIW though, vacuuming should not really be an issue (as I\n> understand it) since I don't really do any updates or deletions. It's pretty\n> much all inserts and selects.\n\nMaintainance also includes analyzing the table, to gather stats that\nfeed the optimizer, and it's very important to keep the stats\naccurate. You can do it manually - just perform an ANALYZE. However,\nthe plan doesn't show any serious mismatch between expected and actual\nrowcounts, which suggests stats aren't your problem.\n",
"msg_date": "Mon, 30 Jan 2012 16:24:54 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
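A minimal sketch of the diagnostic session suggested above, using the query from the original post; enable_seqscan is only flipped for the current session and reset afterwards, since it is a debugging aid rather than a tuning knob.

    SET enable_seqscan = false;   -- diagnosis only, not for production use
    EXPLAIN ANALYZE
    SELECT private, COUNT(block_id)
      FROM blocks
     WHERE created > 'yesterday' AND shared IS FALSE
     GROUP BY private;
    RESET enable_seqscan;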
{
"msg_contents": "Well that was a *lot* faster:\n\n\"HashAggregate (cost=156301.82..156301.83 rows=2 width=26) (actual\ntime=2692.806..2692.807 rows=2 loops=1)\"\n\" -> Bitmap Heap Scan on blocks (cost=14810.54..155828.95 rows=472871\nwidth=26) (actual time=289.828..1593.893 rows=575186 loops=1)\"\n\" Recheck Cond: (created > '2012-01-29 00:00:00+00'::timestamp with\ntime zone)\"\n\" Filter: (shared IS FALSE)\"\n\" -> Bitmap Index Scan on blocks_created_idx (cost=0.00..14786.89\nrows=550404 width=0) (actual time=277.407..277.407 rows=706663 loops=1)\"\n\" Index Cond: (created > '2012-01-29 00:00:00+00'::timestamp\nwith time zone)\"\n\"Total runtime: 2693.107 ms\"\n\nTo answer your (non-)question about Heroku, it's a cloud service, so I\ndon't host PostgreSQL myself. I'm not sure how much I can mess with things\nlike GUC since I don't even have access to the \"postgres\" database on the\nserver. I am a long time SQL user but new to Postgres so I welcome\nsuggestions on where to start with that sort of thing.\nSetting enable_seqscan=false made a huge difference, so I think I'll start\nthere.\n\nThank you very much!\n-Alessandro\n\nOn Mon, Jan 30, 2012 at 11:24 AM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Jan 30, 2012 at 4:13 PM, Alessandro Gagliardi\n> <[email protected]> wrote:\n> > So, here's the query:\n> >\n> > SELECT private, COUNT(block_id) FROM blocks WHERE created > 'yesterday'\n> AND\n> > shared IS FALSE GROUP BY private\n> >\n> > What confuses me is that though this is a largish table (millions of\n> rows)\n> > with constant writes, the query is over indexed columns of types\n> timestamp\n> > and boolean so I would expect it to be very fast. The clause where\n> created >\n> > 'yesterday' is there mostly to speed it up, but apparently it doesn't\n> help\n> > much.\n>\n> The number of rows touched is ~0.5M, and is correctly estimated, which\n> would lead me to believe PG estimates the index plan to be slower.\n>\n> You could try by executing first \"set enable_seqscan=false;\" and then\n> your query with explain analyze again. You'll probably get an index\n> scan, and you'll see both how it performs and how PG thought it would\n> perform. Any mismatch between the two probably means you'll have to\n> change the planner tunables (the x_tuple_cost ones) to better match\n> your hardware.\n>\n>\n> > As for Hardware: I'm using Heroku's \"Ronin\" setup which involves 1.7 GB\n> > Cache. Beyond that I don't really know.\n> snip\n> > As for GUC Settings: Again, I don't know what this is. Whatever Heroku\n> > defaults to is what I'm using.\n>\n> And there's your problem. Without knowing/understanding those, you\n> won't get anywhere. I don't know what Heroku is, but you should find\n> out both hardware details and PG configuration details.\n>\n> > As for Maintenance Setup: I let Heroku handle that, so I again, I don't\n> > really know. FWIW though, vacuuming should not really be an issue (as I\n> > understand it) since I don't really do any updates or deletions. It's\n> pretty\n> > much all inserts and selects.\n>\n> Maintainance also includes analyzing the table, to gather stats that\n> feed the optimizer, and it's very important to keep the stats\n> accurate. You can do it manually - just perform an ANALYZE. 
However,\n> the plan doesn't show any serious mismatch between expected and actual\n> rowcounts, which suggests stats aren't your problem.\n>\n\nWell that was a lot faster:\"HashAggregate (cost=156301.82..156301.83 rows=2 width=26) (actual time=2692.806..2692.807 rows=2 loops=1)\"\" -> Bitmap Heap Scan on blocks (cost=14810.54..155828.95 rows=472871 width=26) (actual time=289.828..1593.893 rows=575186 loops=1)\"\n\" Recheck Cond: (created > '2012-01-29 00:00:00+00'::timestamp with time zone)\"\" Filter: (shared IS FALSE)\"\" -> Bitmap Index Scan on blocks_created_idx (cost=0.00..14786.89 rows=550404 width=0) (actual time=277.407..277.407 rows=706663 loops=1)\"\n\" Index Cond: (created > '2012-01-29 00:00:00+00'::timestamp with time zone)\"\"Total runtime: 2693.107 ms\"To answer your (non-)question about Heroku, it's a cloud service, so I don't host PostgreSQL myself. I'm not sure how much I can mess with things like GUC since I don't even have access to the \"postgres\" database on the server. I am a long time SQL user but new to Postgres so I welcome suggestions on where to start with that sort of thing. Setting enable_seqscan=false made a huge difference, so I think I'll start there.\nThank you very much!-AlessandroOn Mon, Jan 30, 2012 at 11:24 AM, Claudio Freire <[email protected]> wrote:\nOn Mon, Jan 30, 2012 at 4:13 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> So, here's the query:\n>\n> SELECT private, COUNT(block_id) FROM blocks WHERE created > 'yesterday' AND\n> shared IS FALSE GROUP BY private\n>\n> What confuses me is that though this is a largish table (millions of rows)\n> with constant writes, the query is over indexed columns of types timestamp\n> and boolean so I would expect it to be very fast. The clause where created >\n> 'yesterday' is there mostly to speed it up, but apparently it doesn't help\n> much.\n\nThe number of rows touched is ~0.5M, and is correctly estimated, which\nwould lead me to believe PG estimates the index plan to be slower.\n\nYou could try by executing first \"set enable_seqscan=false;\" and then\nyour query with explain analyze again. You'll probably get an index\nscan, and you'll see both how it performs and how PG thought it would\nperform. Any mismatch between the two probably means you'll have to\nchange the planner tunables (the x_tuple_cost ones) to better match\nyour hardware.\n\n\n> As for Hardware: I'm using Heroku's \"Ronin\" setup which involves 1.7 GB\n> Cache. Beyond that I don't really know.\nsnip\n> As for GUC Settings: Again, I don't know what this is. Whatever Heroku\n> defaults to is what I'm using.\n\nAnd there's your problem. Without knowing/understanding those, you\nwon't get anywhere. I don't know what Heroku is, but you should find\nout both hardware details and PG configuration details.\n\n> As for Maintenance Setup: I let Heroku handle that, so I again, I don't\n> really know. FWIW though, vacuuming should not really be an issue (as I\n> understand it) since I don't really do any updates or deletions. It's pretty\n> much all inserts and selects.\n\nMaintainance also includes analyzing the table, to gather stats that\nfeed the optimizer, and it's very important to keep the stats\naccurate. You can do it manually - just perform an ANALYZE. However,\nthe plan doesn't show any serious mismatch between expected and actual\nrowcounts, which suggests stats aren't your problem.",
"msg_date": "Mon, 30 Jan 2012 12:35:04 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 5:35 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> To answer your (non-)question about Heroku, it's a cloud service, so I don't\n> host PostgreSQL myself. I'm not sure how much I can mess with things like\n> GUC since I don't even have access to the \"postgres\" database on the server.\n> I am a long time SQL user but new to Postgres so I welcome suggestions on\n> where to start with that sort of thing. Setting enable_seqscan=false made a\n> huge difference, so I think I'll start there.\n\nIt's not a good idea to abuse of the enable_stuff settings, they're\nfor debugging, not for general use. In particular, disable sequential\nscans everywhere can have a disastrous effect on performance.\n\nIt sounds as if PG had a misconfigured effective_cache_size. What does\n\"show effective_cache_size\" tell you?\n",
"msg_date": "Mon, 30 Jan 2012 17:50:21 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
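For readers who, like the original poster, are on a managed service and cannot edit postgresql.conf, the planner-related settings discussed in this thread can still be inspected from any SQL session; a short sketch:

    SHOW effective_cache_size;
    SHOW random_page_cost;
    -- or several at once:
    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('effective_cache_size', 'random_page_cost', 'seq_page_cost');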
{
"msg_contents": "Hm. Well, it looks like setting enable_seqscan=false is session specific,\nso it seems like I can use it with this query alone; but it sounds like\neven if that works, it's a bad practice. (Is that true?)\n\nMy effective_cache_size is 1530000kB\n\nOn Mon, Jan 30, 2012 at 12:50 PM, Claudio Freire <[email protected]>wrote:\n\n> On Mon, Jan 30, 2012 at 5:35 PM, Alessandro Gagliardi\n> <[email protected]> wrote:\n> > To answer your (non-)question about Heroku, it's a cloud service, so I\n> don't\n> > host PostgreSQL myself. I'm not sure how much I can mess with things like\n> > GUC since I don't even have access to the \"postgres\" database on the\n> server.\n> > I am a long time SQL user but new to Postgres so I welcome suggestions on\n> > where to start with that sort of thing. Setting enable_seqscan=false\n> made a\n> > huge difference, so I think I'll start there.\n>\n> It's not a good idea to abuse of the enable_stuff settings, they're\n> for debugging, not for general use. In particular, disable sequential\n> scans everywhere can have a disastrous effect on performance.\n>\n> It sounds as if PG had a misconfigured effective_cache_size. What does\n> \"show effective_cache_size\" tell you?\n>\n\nHm. Well, it looks like setting enable_seqscan=false is session specific, so it seems like I can use it with this query alone; but it sounds like even if that works, it's a bad practice. (Is that true?)\nMy effective_cache_size is 1530000kBOn Mon, Jan 30, 2012 at 12:50 PM, Claudio Freire <[email protected]> wrote:\nOn Mon, Jan 30, 2012 at 5:35 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> To answer your (non-)question about Heroku, it's a cloud service, so I don't\n> host PostgreSQL myself. I'm not sure how much I can mess with things like\n> GUC since I don't even have access to the \"postgres\" database on the server.\n> I am a long time SQL user but new to Postgres so I welcome suggestions on\n> where to start with that sort of thing. Setting enable_seqscan=false made a\n> huge difference, so I think I'll start there.\n\nIt's not a good idea to abuse of the enable_stuff settings, they're\nfor debugging, not for general use. In particular, disable sequential\nscans everywhere can have a disastrous effect on performance.\n\nIt sounds as if PG had a misconfigured effective_cache_size. What does\n\"show effective_cache_size\" tell you?",
"msg_date": "Mon, 30 Jan 2012 12:55:18 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 5:55 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> Hm. Well, it looks like setting enable_seqscan=false is session specific, so\n> it seems like I can use it with this query alone; but it sounds like even if\n> that works, it's a bad practice. (Is that true?)\n\nYep\n\n> My effective_cache_size is 1530000kB\n\nUm... barring some really bizarre GUC setting, I cannot imagine how it\ncould be preferring the sequential scan.\nMaybe some of the more knowedgeable folks has a hint.\n\nIn the meanwhile, you can use the seqscan stuff on that query alone.\nBe sure to use it on that query alone - ie, re-enable it afterwards,\nor discard the connection.\n",
"msg_date": "Mon, 30 Jan 2012 17:59:10 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 17:35, Alessandro Gagliardi <[email protected]>wrote:\n\n> Well that was a *lot* faster:\n>\n> \"HashAggregate (cost=156301.82..156301.83 rows=2 width=26) (actual\n> time=2692.806..2692.807 rows=2 loops=1)\"\n> \" -> Bitmap Heap Scan on blocks (cost=14810.54..155828.95 rows=472871\n> width=26) (actual time=289.828..1593.893 rows=575186 loops=1)\"\n> \" Recheck Cond: (created > '2012-01-29 00:00:00+00'::timestamp\n> with time zone)\"\n> \" Filter: (shared IS FALSE)\"\n> \" -> Bitmap Index Scan on blocks_created_idx (cost=0.00..14786.89\n> rows=550404 width=0) (actual time=277.407..277.407 rows=706663 loops=1)\"\n> \" Index Cond: (created > '2012-01-29 00:00:00+00'::timestamp\n> with time zone)\"\n> \"Total runtime: 2693.107 ms\"\n>\n>\nU sure the new timing isn't owed to cached data? If I am reading it\ncorrectly, from the latest explain you posted the Index Scan shouldn't have\nmade a difference as it is reporting pretty much all rows in the table have\ncreated > 'yesterday'.\nIf the number of rows with created < 'yesterday' isn't significant (~ over\n25% with default config) a full scan will be chosen and it will probably be\nthe better choice too.\n\nOn Mon, Jan 30, 2012 at 17:35, Alessandro Gagliardi <[email protected]> wrote:\nWell that was a lot faster:\"HashAggregate (cost=156301.82..156301.83 rows=2 width=26) (actual time=2692.806..2692.807 rows=2 loops=1)\"\" -> Bitmap Heap Scan on blocks (cost=14810.54..155828.95 rows=472871 width=26) (actual time=289.828..1593.893 rows=575186 loops=1)\"\n\" Recheck Cond: (created > '2012-01-29 00:00:00+00'::timestamp with time zone)\"\" Filter: (shared IS FALSE)\"\n\" -> Bitmap Index Scan on blocks_created_idx (cost=0.00..14786.89 rows=550404 width=0) (actual time=277.407..277.407 rows=706663 loops=1)\"\n\" Index Cond: (created > '2012-01-29 00:00:00+00'::timestamp with time zone)\"\"Total runtime: 2693.107 ms\"\nU sure the new timing isn't owed to cached data? If I am reading it correctly, from the latest explain you posted the Index Scan shouldn't have made a difference as it is reporting pretty much all rows in the table have created > 'yesterday'.\nIf the number of rows with created < 'yesterday' isn't significant (~ over 25% with default config) a full scan will be chosen and it will probably be the better choice too.",
"msg_date": "Mon, 30 Jan 2012 18:13:19 -0300",
"msg_from": "Fernando Hevia <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "On 1/30/12 12:59 PM, Claudio Freire wrote:\n> On Mon, Jan 30, 2012 at 5:55 PM, Alessandro Gagliardi\n> <[email protected]> wrote:\n>> Hm. Well, it looks like setting enable_seqscan=false is session specific, so\n>> it seems like I can use it with this query alone; but it sounds like even if\n>> that works, it's a bad practice. (Is that true?)\n> \n> Yep\n\nThe issue with that is that the enable_seqscan setting is not limited to\nthat one table in that query, and won't change over time. So by all\nmeans use it as a hotfix right now, but it's not a long-term solution to\nyour problem.\n\n> \n>> My effective_cache_size is 1530000kB\n\nThat seems appropriate for the Ronin; I'll test one out and see what\nrandom_page_cost is set to as well, possibly Heroku needs to adjust the\nbasic template for the Ronin. For Heroku, we want to favor index scans\na bit more than you would on regular hardware because the underlying\nstorage is Amazon, which has good seeks but crap throughput.\n\nYou can do \"SHOW random_page_cost\" yourself right now, too.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Mon, 30 Jan 2012 13:25:05 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns\n be so slow?"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 1:25 PM, Josh Berkus <[email protected]> wrote:\n\n> You can do \"SHOW random_page_cost\" yourself right now, too.\n>\n> 4\n\nI also tried \"SHOW seq_page_cost\" and that's 1.\n\nLooking at\nhttp://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COSTI\nwonder if I should try reducing random_page_cost?\n\nSomething that might help when it comes to advice on performance tuning is\nthat this database is used only for analytics. It's essentially a partial\nreplication of a production (document-oriented) database. So a lot of\nnormal operations that might employ a series of sequential fetches may not\nactually be the norm in my case. Rather, I'm doing a lot of counts on data\nthat is typically randomly distributed.\n\nThanks,\n\n-Alessandro\n\nOn Mon, Jan 30, 2012 at 1:25 PM, Josh Berkus <[email protected]> wrote:\nYou can do \"SHOW random_page_cost\" yourself right now, too.\n4I also tried \"SHOW seq_page_cost\" and that's 1. Looking at http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COST I wonder if I should try reducing random_page_cost?\nSomething that might help when it comes to advice on performance tuning is that this database is used only for analytics. It's essentially a partial replication of a production (document-oriented) database. So a lot of normal operations that might employ a series of sequential fetches may not actually be the norm in my case. Rather, I'm doing a lot of counts on data that is typically randomly distributed.\nThanks,-Alessandro",
"msg_date": "Mon, 30 Jan 2012 13:39:14 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 2:39 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> On Mon, Jan 30, 2012 at 1:25 PM, Josh Berkus <[email protected]> wrote:\n>>\n>> You can do \"SHOW random_page_cost\" yourself right now, too.\n>>\n> 4\n>\n> I also tried \"SHOW seq_page_cost\" and that's 1.\n>\n> Looking\n> at http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COST\n> I wonder if I should try reducing random_page_cost?\n>\n> Something that might help when it comes to advice on performance tuning is\n> that this database is used only for analytics. It's essentially a partial\n> replication of a production (document-oriented) database. So a lot of normal\n> operations that might employ a series of sequential fetches may not actually\n> be the norm in my case. Rather, I'm doing a lot of counts on data that is\n> typically randomly distributed.\n\nYes try lowering it. Generally speaking, random page cost should\nalways be >= seq page cost. Start with a number between 1.5 and 2.0\nto start with and see if that helps. You can make it \"sticky\" for\nyour user or database with alter user or alter database...\n",
"msg_date": "Mon, 30 Jan 2012 14:45:34 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
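A sketch of the "sticky" per-user variant mentioned above (the role name is hypothetical); the per-database form appears a few messages further down in this thread.

    -- try it out in the current session first:
    SET random_page_cost = 1.5;
    -- then persist it for one role:
    ALTER ROLE analytics_user SET random_page_cost = 1.5;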
{
"msg_contents": "Pretty sure. I just ran the same query twice in a row\nwith enable_seqscan=true and the \"actual time\" was on the order of 42\nseconds both times. With enable_seqscan=false, it was on the order 3\nseconds. Going back to enable_seqscan=true, it's back to 42 seconds. Unless\nyou're saying that enable_seqscan is determining whether or not the data is\nbeing cached....\n\nOn Mon, Jan 30, 2012 at 1:13 PM, Fernando Hevia <[email protected]> wrote:\n>\n> On Mon, Jan 30, 2012 at 17:35, Alessandro Gagliardi <[email protected]>wrote:\n>\n>> Well that was a *lot* faster:\n>>\n>> \"HashAggregate (cost=156301.82..156301.83 rows=2 width=26) (actual\n>> time=2692.806..2692.807 rows=2 loops=1)\"\n>> \" -> Bitmap Heap Scan on blocks (cost=14810.54..155828.95 rows=472871\n>> width=26) (actual time=289.828..1593.893 rows=575186 loops=1)\"\n>> \" Recheck Cond: (created > '2012-01-29 00:00:00+00'::timestamp\n>> with time zone)\"\n>> \" Filter: (shared IS FALSE)\"\n>> \" -> Bitmap Index Scan on blocks_created_idx\n>> (cost=0.00..14786.89 rows=550404 width=0) (actual time=277.407..277.407\n>> rows=706663 loops=1)\"\n>> \" Index Cond: (created > '2012-01-29 00:00:00+00'::timestamp\n>> with time zone)\"\n>> \"Total runtime: 2693.107 ms\"\n>>\n>>\n> U sure the new timing isn't owed to cached data? If I am reading it\n> correctly, from the latest explain you posted the Index Scan shouldn't have\n> made a difference as it is reporting pretty much all rows in the table have\n> created > 'yesterday'.\n> If the number of rows with created < 'yesterday' isn't significant (~ over\n> 25% with default config) a full scan will be chosen and it will probably be\n> the better choice too.\n>\n>\n>\n\nPretty sure. I just ran the same query twice in a row with enable_seqscan=true and the \"actual time\" was on the order of 42 seconds both times. With enable_seqscan=false, it was on the order 3 seconds. Going back to enable_seqscan=true, it's back to 42 seconds. Unless you're saying that enable_seqscan is determining whether or not the data is being cached....\nOn Mon, Jan 30, 2012 at 1:13 PM, Fernando Hevia <[email protected]> wrote:\nOn Mon, Jan 30, 2012 at 17:35, Alessandro Gagliardi <[email protected]> wrote:\n\nWell that was a lot faster:\"HashAggregate (cost=156301.82..156301.83 rows=2 width=26) (actual time=2692.806..2692.807 rows=2 loops=1)\"\" -> Bitmap Heap Scan on blocks (cost=14810.54..155828.95 rows=472871 width=26) (actual time=289.828..1593.893 rows=575186 loops=1)\"\n\" Recheck Cond: (created > '2012-01-29 00:00:00+00'::timestamp with time zone)\"\" Filter: (shared IS FALSE)\"\n\" -> Bitmap Index Scan on blocks_created_idx (cost=0.00..14786.89 rows=550404 width=0) (actual time=277.407..277.407 rows=706663 loops=1)\"\n\" Index Cond: (created > '2012-01-29 00:00:00+00'::timestamp with time zone)\"\"Total runtime: 2693.107 ms\"\nU sure the new timing isn't owed to cached data? If I am reading it correctly, from the latest explain you posted the Index Scan shouldn't have made a difference as it is reporting pretty much all rows in the table have created > 'yesterday'.\nIf the number of rows with created < 'yesterday' isn't significant (~ over 25% with default config) a full scan will be chosen and it will probably be the better choice too.",
"msg_date": "Mon, 30 Jan 2012 13:45:35 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "I set random_page_cost to 2 (with enable_seqscan on) and get the same\nperformance I got with enable_seqscan off.\nSo far so good. Now I just need to figure out how to set it globally. :-/\n\nOn Mon, Jan 30, 2012 at 1:45 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Jan 30, 2012 at 2:39 PM, Alessandro Gagliardi\n> <[email protected]> wrote:\n> > Looking\n> > at\n> http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COST\n> > I wonder if I should try reducing random_page_cost?\n> >\n>\n> Yes try lowering it. Generally speaking, random page cost should\n> always be >= seq page cost. Start with a number between 1.5 and 2.0\n> to start with and see if that helps. You can make it \"sticky\" for\n> your user or database with alter user or alter database...\n>\n\nI set random_page_cost to 2 (with enable_seqscan on) and get the same performance I got with enable_seqscan off. So far so good. Now I just need to figure out how to set it globally. :-/\nOn Mon, Jan 30, 2012 at 1:45 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Jan 30, 2012 at 2:39 PM, Alessandro Gagliardi\n<[email protected]> wrote:> Looking\n> at http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COST\n\n> I wonder if I should try reducing random_page_cost?\n>\nYes try lowering it. Generally speaking, random page cost should\nalways be >= seq page cost. Start with a number between 1.5 and 2.0\nto start with and see if that helps. You can make it \"sticky\" for\nyour user or database with alter user or alter database...",
"msg_date": "Mon, 30 Jan 2012 13:55:00 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 2:55 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> I set random_page_cost to 2 (with enable_seqscan on) and get the same\n> performance I got with enable_seqscan off.\n> So far so good. Now I just need to figure out how to set it globally. :-/\n\nalter database set random_page_cost=2.0;\n",
"msg_date": "Mon, 30 Jan 2012 15:19:16 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "On Mon, Jan 30, 2012 at 3:19 PM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Jan 30, 2012 at 2:55 PM, Alessandro Gagliardi\n> <[email protected]> wrote:\n>> I set random_page_cost to 2 (with enable_seqscan on) and get the same\n>> performance I got with enable_seqscan off.\n>> So far so good. Now I just need to figure out how to set it globally. :-/\n>\n> alter database set random_page_cost=2.0;\n\nThat should be:\n\nalter database dbnamegoeshere set random_page_cost=2.0;\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n",
"msg_date": "Mon, 30 Jan 2012 15:24:24 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "Got it (with a little bit of klutzing around). :) Thanks!\n\nOn Mon, Jan 30, 2012 at 2:24 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Jan 30, 2012 at 3:19 PM, Scott Marlowe <[email protected]>\n> wrote:\n> > On Mon, Jan 30, 2012 at 2:55 PM, Alessandro Gagliardi\n> > <[email protected]> wrote:\n> >> I set random_page_cost to 2 (with enable_seqscan on) and get the same\n> >> performance I got with enable_seqscan off.\n> >> So far so good. Now I just need to figure out how to set it globally.\n> :-/\n> >\n> > alter database set random_page_cost=2.0;\n>\n> That should be:\n>\n> alter database dbnamegoeshere set random_page_cost=2.0;\n>\n>\n>\n> --\n> To understand recursion, one must first understand recursion.\n>\n\nGot it (with a little bit of klutzing around). :) Thanks!On Mon, Jan 30, 2012 at 2:24 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Jan 30, 2012 at 3:19 PM, Scott Marlowe <[email protected]> wrote:\n\n> On Mon, Jan 30, 2012 at 2:55 PM, Alessandro Gagliardi\n> <[email protected]> wrote:\n>> I set random_page_cost to 2 (with enable_seqscan on) and get the same\n>> performance I got with enable_seqscan off.\n>> So far so good. Now I just need to figure out how to set it globally. :-/\n>\n> alter database set random_page_cost=2.0;\n\nThat should be:\n\nalter database dbnamegoeshere set random_page_cost=2.0;\n\n\n\n--\nTo understand recursion, one must first understand recursion.",
"msg_date": "Mon, 30 Jan 2012 14:26:47 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why should such a simple query over indexed columns be\n so slow?"
},
{
"msg_contents": "\n> Looking at\n> http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COSTI\n> wonder if I should try reducing random_page_cost?\n\nYes, and I should speak to Heroku about reducing it by default. RPC\nrepresents the ratio between the cost of a sequential lookup of a single\nrow vs. the cost of a random lookup. On standard spinning media on a\ndedicated server 4.0 is a pretty good estimate of this. However, you\nare running on shared storage in a cloud, which has different math.\n\n> Something that might help when it comes to advice on performance tuning is\n> that this database is used only for analytics. It's essentially a partial\n> replication of a production (document-oriented) database. So a lot of\n> normal operations that might employ a series of sequential fetches may not\n> actually be the norm in my case. Rather, I'm doing a lot of counts on data\n> that is typically randomly distributed.\n\nIn that case, you might consider increasing default_statistics_target to\n1000 and ANALYZEing your whole database. That increases the sample size\nfor the database statstics collector, and most of the time will result\nin somewhat better plans on large tables and data with skewed\ndistributions. This is not something which Heroku would do as standard,\nsince most of their users are doing basic transactional webapps.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Mon, 30 Jan 2012 16:42:02 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why should such a simple query over indexed columns\n be so slow?"
}
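A hedged sketch of the statistics-target suggestion above (the database name is hypothetical). The database-level setting applies to new sessions, after which a database-wide ANALYZE re-samples with the larger target; the per-column form limits the extra sampling cost to the columns that matter.

    ALTER DATABASE analyticsdb SET default_statistics_target = 1000;
    -- reconnect, then:
    ANALYZE;

    -- or raise it only for selected columns, e.g.:
    ALTER TABLE blocks ALTER COLUMN created SET STATISTICS 1000;
    ANALYZE blocks;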
] |
[
{
"msg_contents": "Hello all,\n\nJust wanted to share some results from some very basic benchmarking\nruns comparing three disk configurations on the same hardware:\n\nhttp://morefoo.com/bench.html\n\nBefore I launch into any questions about the results (I don't see\nanything particularly shocking here), I'll describe the hardware and\nconfigurations in use here.\n\nHardware:\n\n*Tyan B7016 mainboard w/onboard LSI SAS controller\n*2x4 core xeon E5506 (2.13GHz)\n*64GB ECC RAM (8GBx8 ECC, 1033MHz)\n*2x250GB Seagate SATA 7200.9 (ST3250824AS) drives (yes, old and slow)\n*2x160GB Intel 320 SSD drives\n\nSoftware:\n\n*FreeBSD 8.2 STABLE snapshot from 6/2011 (includes zfsv28, this is\nour production snapshot) \n*PostgreSQL 9.0.6 (also what we run in production) \n*pgbench-tools 0.5 (to automate the test runs and make nice graphs)\n\nI was mainly looking to compare three variations of drive\ncombinations and verify that we don't see any glaring performance\nissues with Postgres running on ZFS. We mostly run 1U boxes and\nwe're looking for ways to get better performance without having to\ninvest in some monster box that can hold a few dozen cheap SATA\ndrives. SSDs or SATA with SSDs hosting the \"ZIL\" (ZFS Intent Log).\nThe ZIL is a bit of a cheat, as it allows you to throw all the\nsynchronous writes to the SSD - I was particularly curious about how\nthis would benchmark even though we will not likely use ZIL in\nproduction (at least not on this db box).\n\nbackground thread: http://archives.postgresql.org/pgsql-performance/2011-10/msg00137.php\n\nSo the three sets of results I've linked are all pgbench-tools runs\nof the \"tpc-b\" benchmark. One using the two SATA drives in a ZFS\nmirror, one with the same two drives in a ZFS mirror with two of the\nIntel 320s as ZIL for that pool, and one with just two Intel 320s in\na ZFS mirror. Note that I also included graphs in the pgbench\nresults of some basic system metrics. That's from a few simple\nscripts that collect some vmstat, iostat and \"zpool iostat\" info\nduring the runs at 1 sample/second. They are a bit ugly, but give a\ngood enough visual representation of how swamped the drives are\nduring the course of the tests.\n\nWhy ZFS? Well, we adopted it pretty early for other tasks and it\nmakes a number of tasks easy. It's been stable for us for the most\npart and our latest wave of boxes all use cheap SATA disks, which\ngives us two things - a ton of cheap space (in 1U) for snapshots and\nall the other space-consuming toys ZFS gives us, and on this cheaper\ndisk type, a guarantee that we're not dealing with silent data\ncorruption (these are probably the normal fanboy talking points).\nZFS snapshots are also a big time-saver when benchmarking. For our\nown application testing I load the data once, shut down postgres,\nsnapshot pgsql + the app homedir and start postgres. After each run\nthat changes on-disk data, I simply rollback the snapshot.\n\nI don't have any real questions for the list, but I'd love to get\nsome feedback, especially on the ZIL results. The ZIL results\ninterest me because I have not settled on what sort of box we'll be\nusing as a replication slave for this one - I was going to either go\nthe somewhat risky route of another all-SSD box or looking at just\nhow cheap I can go with lots of 2.5\" SAS drives in a 2U.\n\nI'm hoping that the general \"call for discussion\" is an acceptable\nrequest for this list, which seems to cater more often to very\nspecific tuning questions. 
If not, let me know.\n\nIf you have any test requests that can be quickly run on the above\nhardware, let me know. I'll have the box easily accessible for the\nnext few days at least (and I wouldn't mind pushing more writes\nthrough to two of my four SSDs before deploying the whole mess in\ncase it is true that SSDs fail at the same write cycle count). I'll\nbe doing more tests for my own curiosity such as making sure UFS2\ndoesn't wildly outperform ZFS on the SSD-only setup, testing with\nthe expected final config of 4 Intel 320s, and then lots of\napplication-specific tests, and finally digging a bit more\nthoroughly into Greg's book to make sure I squeeze all I can out of\nthis thing.\n\nThanks,\n\nCharles",
"msg_date": "Tue, 31 Jan 2012 03:07:51 -0500",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "rough benchmarks, sata vs. ssd"
},
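A sketch of the SSD ZIL setup and the snapshot/rollback benchmarking workflow described above, with hypothetical pool, dataset, device, and rc-script names (FreeBSD-style commands); treat it as an illustration of the approach rather than the poster's exact configuration.

    # mirrored SATA pool with a mirrored SSD log device (the ZIL / "slog")
    zpool create tank mirror ada0 ada1
    zpool add tank log mirror ada2 ada3

    # benchmark workflow: load data once, snapshot, roll back after each run
    service postgresql stop
    zfs snapshot tank/pgsql@loaded
    service postgresql start
    # ... run a test that modifies on-disk data ...
    service postgresql stop
    zfs rollback tank/pgsql@loaded
    service postgresql start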
{
"msg_contents": "On 31/01/2012 09:07, CSS wrote:\n> Hello all,\n>\n> Just wanted to share some results from some very basic benchmarking\n> runs comparing three disk configurations on the same hardware:\n>\n> http://morefoo.com/bench.html\n\nThat's great!\n\n> *Tyan B7016 mainboard w/onboard LSI SAS controller\n> *2x4 core xeon E5506 (2.13GHz)\n> *64GB ECC RAM (8GBx8 ECC, 1033MHz)\n> *2x250GB Seagate SATA 7200.9 (ST3250824AS) drives (yes, old and slow)\n> *2x160GB Intel 320 SSD drives\n\nIt shows that you can have large cheap SATA drives and small fast SSD-s, \nand up to a point have best of both worlds. Could you send me \n(privately) a tgz of the results (i.e. the pages+images from the above \nURL), I'd like to host them somewhere more permanently.\n\n> The ZIL is a bit of a cheat, as it allows you to throw all the\n> synchronous writes to the SSD\n\nThis is one of the main reasons it was made. It's not a cheat, it's by \ndesign.\n\n> Why ZFS? Well, we adopted it pretty early for other tasks and it\n> makes a number of tasks easy. It's been stable for us for the most\n> part and our latest wave of boxes all use cheap SATA disks, which\n> gives us two things - a ton of cheap space (in 1U) for snapshots and\n> all the other space-consuming toys ZFS gives us, and on this cheaper\n> disk type, a guarantee that we're not dealing with silent data\n> corruption (these are probably the normal fanboy talking points).\n> ZFS snapshots are also a big time-saver when benchmarking. For our\n> own application testing I load the data once, shut down postgres,\n> snapshot pgsql + the app homedir and start postgres. After each run\n> that changes on-disk data, I simply rollback the snapshot.\n\nDid you tune ZFS block size for the postgresql data directory (you'll \nneed to re-create the file system to do this)? When I investigated it in \nthe past, it really did help performance.\n\n> I don't have any real questions for the list, but I'd love to get\n> some feedback, especially on the ZIL results. The ZIL results\n> interest me because I have not settled on what sort of box we'll be\n> using as a replication slave for this one - I was going to either go\n> the somewhat risky route of another all-SSD box or looking at just\n> how cheap I can go with lots of 2.5\" SAS drives in a 2U.\n\nYou probably know the answer to that: if you need lots of storage, \nyou'll probably be better off using large SATA drives with small SSDs \nfor the ZIL. 160 GB is probably more than you need for ZIL.\n\nOne thing I never tried is mirroring a SATA drive and a SSD (only makes \nsense if you don't trust SSDs to be reliable yet) - I don't know if ZFS \nwould recognize the assymetry and direct most of the read requests to \nthe SSD.\n\n> If you have any test requests that can be quickly run on the above\n> hardware, let me know.\n\nBlogbench (benchmarks/blogbench) results are always nice to see in a \ncomparison.\n\n",
"msg_date": "Fri, 03 Feb 2012 12:23:08 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rough benchmarks, sata vs. ssd"
},
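A sketch of the recordsize change suggested above (the dataset name is hypothetical). The property only affects files written after it is set, which is why the data directory has to be recreated, or copied out and back, for it to take effect.

    # on a new dataset:
    zfs create -o recordsize=8k tank/pgdata
    # or on an existing one, before re-copying the PostgreSQL data directory:
    zfs set recordsize=8k tank/pgdata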
{
"msg_contents": "\nOn Feb 3, 2012, at 6:23 AM, Ivan Voras wrote:\n\n> On 31/01/2012 09:07, CSS wrote:\n>> Hello all,\n>> \n>> Just wanted to share some results from some very basic benchmarking\n>> runs comparing three disk configurations on the same hardware:\n>> \n>> http://morefoo.com/bench.html\n> \n> That's great!\n\nThanks. I did spend a fair amount of time on it. It was also a\ngood excuse to learn a little about gnuplot, which I used to draw\nthe (somewhat oddly combined) system stats. I really wanted to see\nIO and CPU info over the duration of a test even if I couldn't\nreally know what part of the test was running. Don't ask me why\niostat sometimes shows greater than 100% in the \"busy\" column\nthough. It is in the raw iostat output I used to create the graphs.\n\n> \n>> *Tyan B7016 mainboard w/onboard LSI SAS controller\n>> *2x4 core xeon E5506 (2.13GHz)\n>> *64GB ECC RAM (8GBx8 ECC, 1033MHz)\n>> *2x250GB Seagate SATA 7200.9 (ST3250824AS) drives (yes, old and slow)\n>> *2x160GB Intel 320 SSD drives\n> \n> It shows that you can have large cheap SATA drives and small fast SSD-s, and up to a point have best of both worlds. Could you send me (privately) a tgz of the results (i.e. the pages+images from the above URL), I'd like to host them somewhere more permanently.\n\nSent offlist, including raw vmstat, iostat and zpool iostat output.\n\n> \n>> The ZIL is a bit of a cheat, as it allows you to throw all the\n>> synchronous writes to the SSD\n> \n> This is one of the main reasons it was made. It's not a cheat, it's by design.\n\nI meant that only in the best way. Some of my proudest achievements\nare cheats. :)\n\nIt's a clever way of moving cache to something non-volatile and\nproviding a fallback, although the fallback would be insanely slow\nin comparison.\n\n> \n>> Why ZFS? Well, we adopted it pretty early for other tasks and it\n>> makes a number of tasks easy. It's been stable for us for the most\n>> part and our latest wave of boxes all use cheap SATA disks, which\n>> gives us two things - a ton of cheap space (in 1U) for snapshots and\n>> all the other space-consuming toys ZFS gives us, and on this cheaper\n>> disk type, a guarantee that we're not dealing with silent data\n>> corruption (these are probably the normal fanboy talking points).\n>> ZFS snapshots are also a big time-saver when benchmarking. For our\n>> own application testing I load the data once, shut down postgres,\n>> snapshot pgsql + the app homedir and start postgres. After each run\n>> that changes on-disk data, I simply rollback the snapshot.\n> \n> Did you tune ZFS block size for the postgresql data directory (you'll need to re-create the file system to do this)? When I investigated it in the past, it really did help performance.\n\nI actually did not. A year or so ago I was doing some basic tests\non cheap SATA drives with ZFS and at least with pgbench, I could see\nno difference at all. I actually still have some of that info, so\nI'll include it here. 
This was a 4-core xeon, E5506 2.1GHZ, 4 1TB\nWD RE3 drives in a RAIDZ1 array, 8GB RAM.\n\nI tested three things - time to load an 8.5GB dump of one of our\ndbs, time to run through a querylog of real data (1.4M queries), and\nthen pgbench with a scaling factor of 100, 20 clients, 10K\ntransactions per client.\n\ndefault 128K zfs recordsize:\n\n-9 minutes to load data\n-17 minutes to run query log\n-pgbench output\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 20\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 200000/200000\ntps = 100.884540 (including connections establishing)\ntps = 100.887593 (excluding connections establishing)\n\n8K zfs recordsize (wipe data dir and reinit db)\n\n-10 minutes to laod data\n-21 minutes to run query log\n-pgbench output\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nquery mode: simple\nnumber of clients: 20\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 200000/200000\ntps = 97.896038 (including connections establishing)\ntps = 97.898279 (excluding connections establishing)\n\nJust thought I'd include that since I have the data.\n\n> \n>> I don't have any real questions for the list, but I'd love to get\n>> some feedback, especially on the ZIL results. The ZIL results\n>> interest me because I have not settled on what sort of box we'll be\n>> using as a replication slave for this one - I was going to either go\n>> the somewhat risky route of another all-SSD box or looking at just\n>> how cheap I can go with lots of 2.5\" SAS drives in a 2U.\n> \n> You probably know the answer to that: if you need lots of storage, you'll probably be better off using large SATA drives with small SSDs for the ZIL. 160 GB is probably more than you need for ZIL.\n> \n> One thing I never tried is mirroring a SATA drive and a SSD (only makes sense if you don't trust SSDs to be reliable yet) - I don't know if ZFS would recognize the assymetry and direct most of the read requests to the SSD.\n\nOur databases are pretty tiny. We could squeeze them on a pair of 160GB mirrored SSDs.\n\nTo be honest, the ZIL results really threw me for a loop. I had supposed that it would work well with bursty usage but that eventually the SATA drives would still be a choke point during heavy sustained sync writes since the difference in random sync write performance between the ZIL drives (SSD) and the actual data drives (SATA) was so huge. The benchmarks ran for quite some time and I am not spotting a point in the system graphs where the SATA gets truly saturated to the point that performance suffers.\n\nI now have to think about whether a safe replication slave/backup could be built in 1U with 4 2.5 SAS drives and a small mirrored pair of SSDs for ZIL. We've been trying to avoid building monster boxes - not only are 2.5\" SAS drives expensive, but so is whatever case you find to hold a dozen or so of them. Outside of some old Sun blog posts, I am finding little evidence of people running PostgreSQL on ZFS with SATA drives augmented with SSD ZIL. 
I'd love to hear more feedback on that.\n\n> \n>> If you have any test requests that can be quickly run on the above\n>> hardware, let me know.\n> \n> Blogbench (benchmarks/blogbench) results are always nice to see in a comparison.\n\nI don't know much about it, but here's what I get on the zfs mirrored SSD pair:\n\n[root@bltest1 /usr/ports/benchmarks/blogbench]# blogbench -d /tmp/bbench \n\nFrequency = 10 secs\nScratch dir = [/tmp/bbench]\nSpawning 3 writers...\nSpawning 1 rewriters...\nSpawning 5 commenters...\nSpawning 100 readers...\nBenchmarking for 30 iterations.\nThe test will run during 5 minutes.\n[…]\n\nFinal score for writes: 182\nFinal score for reads : 316840\n\nThanks, \n\nCharles\n\n\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Sat, 11 Feb 2012 01:35:17 -0500",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: rough benchmarks, sata vs. ssd"
},
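For reference, a pgbench invocation matching the parameters quoted above (scaling factor 100, 20 clients, 10,000 transactions per client); the database name is hypothetical.

    pgbench -i -s 100 bench
    pgbench -c 20 -t 10000 bench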
{
"msg_contents": "For the top-post scanners, I updated the ssd test to include\nchanging the zfs recordsize to 8k.\n\nOn Feb 11, 2012, at 1:35 AM, CSS wrote:\n\n> \n> On Feb 3, 2012, at 6:23 AM, Ivan Voras wrote:\n> \n>> On 31/01/2012 09:07, CSS wrote:\n>>> Hello all,\n>>> \n>>> Just wanted to share some results from some very basic benchmarking\n>>> runs comparing three disk configurations on the same hardware:\n>>> \n>>> http://morefoo.com/bench.html\n>> \n>> That's great!\n> \n> Thanks. I did spend a fair amount of time on it. It was also a\n> good excuse to learn a little about gnuplot, which I used to draw\n> the (somewhat oddly combined) system stats. I really wanted to see\n> IO and CPU info over the duration of a test even if I couldn't\n> really know what part of the test was running. Don't ask me why\n> iostat sometimes shows greater than 100% in the \"busy\" column\n> though. It is in the raw iostat output I used to create the graphs.\n> \n>> \n>>> *Tyan B7016 mainboard w/onboard LSI SAS controller\n>>> *2x4 core xeon E5506 (2.13GHz)\n>>> *64GB ECC RAM (8GBx8 ECC, 1033MHz)\n>>> *2x250GB Seagate SATA 7200.9 (ST3250824AS) drives (yes, old and slow)\n>>> *2x160GB Intel 320 SSD drives\n>> \n>> It shows that you can have large cheap SATA drives and small fast SSD-s, and up to a point have best of both worlds. Could you send me (privately) a tgz of the results (i.e. the pages+images from the above URL), I'd like to host them somewhere more permanently.\n> \n> Sent offlist, including raw vmstat, iostat and zpool iostat output.\n> \n>> \n>>> The ZIL is a bit of a cheat, as it allows you to throw all the\n>>> synchronous writes to the SSD\n>> \n>> This is one of the main reasons it was made. It's not a cheat, it's by design.\n> \n> I meant that only in the best way. Some of my proudest achievements\n> are cheats. :)\n> \n> It's a clever way of moving cache to something non-volatile and\n> providing a fallback, although the fallback would be insanely slow\n> in comparison.\n> \n>> \n>>> Why ZFS? Well, we adopted it pretty early for other tasks and it\n>>> makes a number of tasks easy. It's been stable for us for the most\n>>> part and our latest wave of boxes all use cheap SATA disks, which\n>>> gives us two things - a ton of cheap space (in 1U) for snapshots and\n>>> all the other space-consuming toys ZFS gives us, and on this cheaper\n>>> disk type, a guarantee that we're not dealing with silent data\n>>> corruption (these are probably the normal fanboy talking points).\n>>> ZFS snapshots are also a big time-saver when benchmarking. For our\n>>> own application testing I load the data once, shut down postgres,\n>>> snapshot pgsql + the app homedir and start postgres. After each run\n>>> that changes on-disk data, I simply rollback the snapshot.\n>> \n>> Did you tune ZFS block size for the postgresql data directory (you'll need to re-create the file system to do this)? When I investigated it in the past, it really did help performance.\n> \n\nWell now I did, added the results to\nhttp://ns.morefoo.com/bench.html and it looks like there's\ncertainly an improvement. 
That's with the only change from the\nprevious test being to copy the postgres data dir, wipe the\noriginal, set the zfs recordsize to 8K (default is 128K), and then\ncopy the data dir back.\n\nThings that stand out on first glance:\n\n-at a scaling factor of 10 or greater, there is a much more gentle\n decline in TPS than with the default zfs recordsize\n-on the raw *disk* IOPS graph, I now see writes peaking at around \n 11K/second compared to 1.5K/second.\n-on the zpool iostat graph, I do not see those huge write peaks, \n which is a bit confusing\n-on both iostat graphs, I see the datapoints look more scattered\n with the 8K recordsize\n\nAny comments are certainly welcome. I understand 8K recordsize\nshould perform better since that's the size of the chunks of data\npostgresql is dealing with, but the effects on the system graphs\nare interesting and I'm not quite following how it all relates.\n\nI wonder if the recordsize impacts the ssd write amplification at\nall...\n\nThanks,\n\nCharles\n\n\n> I actually did not. A year or so ago I was doing some basic tests\n> on cheap SATA drives with ZFS and at least with pgbench, I could see\n> no difference at all. I actually still have some of that info, so\n> I'll include it here. This was a 4-core xeon, E5506 2.1GHZ, 4 1TB\n> WD RE3 drives in a RAIDZ1 array, 8GB RAM.\n> \n> I tested three things - time to load an 8.5GB dump of one of our\n> dbs, time to run through a querylog of real data (1.4M queries), and\n> then pgbench with a scaling factor of 100, 20 clients, 10K\n> transactions per client.\n> \n> default 128K zfs recordsize:\n> \n> -9 minutes to load data\n> -17 minutes to run query log\n> -pgbench output\n> \n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> query mode: simple\n> number of clients: 20\n> number of transactions per client: 10000\n> number of transactions actually processed: 200000/200000\n> tps = 100.884540 (including connections establishing)\n> tps = 100.887593 (excluding connections establishing)\n> \n> 8K zfs recordsize (wipe data dir and reinit db)\n> \n> -10 minutes to laod data\n> -21 minutes to run query log\n> -pgbench output\n> \n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> query mode: simple\n> number of clients: 20\n> number of transactions per client: 10000\n> number of transactions actually processed: 200000/200000\n> tps = 97.896038 (including connections establishing)\n> tps = 97.898279 (excluding connections establishing)\n> \n> Just thought I'd include that since I have the data.\n> \n>> \n>>> I don't have any real questions for the list, but I'd love to get\n>>> some feedback, especially on the ZIL results. The ZIL results\n>>> interest me because I have not settled on what sort of box we'll be\n>>> using as a replication slave for this one - I was going to either go\n>>> the somewhat risky route of another all-SSD box or looking at just\n>>> how cheap I can go with lots of 2.5\" SAS drives in a 2U.\n>> \n>> You probably know the answer to that: if you need lots of storage, you'll probably be better off using large SATA drives with small SSDs for the ZIL. 160 GB is probably more than you need for ZIL.\n>> \n>> One thing I never tried is mirroring a SATA drive and a SSD (only makes sense if you don't trust SSDs to be reliable yet) - I don't know if ZFS would recognize the assymetry and direct most of the read requests to the SSD.\n> \n> Our databases are pretty tiny. 
We could squeeze them on a pair of 160GB mirrored SSDs.\n> \n> To be honest, the ZIL results really threw me for a loop. I had supposed that it would work well with bursty usage but that eventually the SATA drives would still be a choke point during heavy sustained sync writes since the difference in random sync write performance between the ZIL drives (SSD) and the actual data drives (SATA) was so huge. The benchmarks ran for quite some time and I am not spotting a point in the system graphs where the SATA gets truly saturated to the point that performance suffers.\n> \n> I now have to think about whether a safe replication slave/backup could be built in 1U with 4 2.5 SAS drives and a small mirrored pair of SSDs for ZIL. We've been trying to avoid building monster boxes - not only are 2.5\" SAS drives expensive, but so is whatever case you find to hold a dozen or so of them. Outside of some old Sun blog posts, I am finding little evidence of people running PostgreSQL on ZFS with SATA drives augmented with SSD ZIL. I'd love to hear more feedback on that.\n> \n>> \n>>> If you have any test requests that can be quickly run on the above\n>>> hardware, let me know.\n>> \n>> Blogbench (benchmarks/blogbench) results are always nice to see in a comparison.\n> \n> I don't know much about it, but here's what I get on the zfs mirrored SSD pair:\n> \n> [root@bltest1 /usr/ports/benchmarks/blogbench]# blogbench -d /tmp/bbench \n> \n> Frequency = 10 secs\n> Scratch dir = [/tmp/bbench]\n> Spawning 3 writers...\n> Spawning 1 rewriters...\n> Spawning 5 commenters...\n> Spawning 100 readers...\n> Benchmarking for 30 iterations.\n> The test will run during 5 minutes.\n> […]\n> \n> Final score for writes: 182\n> Final score for reads : 316840\n> \n> Thanks, \n> \n> Charles\n> \n> \n>> \n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n",
"msg_date": "Mon, 13 Feb 2012 16:49:38 -0500",
"msg_from": "CSS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: rough benchmarks, sata vs. ssd"
},
{
"msg_contents": "On 13 February 2012 22:49, CSS <[email protected]> wrote:\n> For the top-post scanners, I updated the ssd test to include\n> changing the zfs recordsize to 8k.\n\n> Well now I did, added the results to\n> http://ns.morefoo.com/bench.html and it looks like there's\n> certainly an improvement. That's with the only change from the\n> previous test being to copy the postgres data dir, wipe the\n> original, set the zfs recordsize to 8K (default is 128K), and then\n> copy the data dir back.\n\nThis makes sense simply because it reduces the amount of data read\nand/or written for non-sequential transactions.\n\n> Things that stand out on first glance:\n>\n> -at a scaling factor of 10 or greater, there is a much more gentle\n> decline in TPS than with the default zfs recordsize\n> -on the raw *disk* IOPS graph, I now see writes peaking at around\n> 11K/second compared to 1.5K/second.\n> -on the zpool iostat graph, I do not see those huge write peaks,\n> which is a bit confusing\n\nCould be that \"iostat\" and \"zpool iostat\" average raw data differently.\n\n> -on both iostat graphs, I see the datapoints look more scattered\n> with the 8K recordsize\n\nAs an educated guess, it could be that smaller transaction sizes can\n\"fit in\" (in buffers or controller processing paths) where large\ndidn't allowing more bursts of performance.\n",
"msg_date": "Mon, 13 Feb 2012 23:12:01 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: rough benchmarks, sata vs. ssd"
}
] |
[
{
"msg_contents": "Hello, \n\nI have a weird table, upon with the queries are much faster when no\nstatics were collected. \n\nIs there a way to delete statistics information for a table ?\nI've tried ALTER.. set STATISTICS 0 and then run ANALYZE, but it seems\nthat old statistics are kept this way.\nCan I delete entries directly in pg_statistic ?\n(Postgresql 9.1)\n\n\n short backgroud Info:\n \n One of the table index is a GIN on a tsvector returning function, which\nis very costy.\n once analyzed, the query planner often ignore this index in favour of\nother one, hence triggering this function too often.\n \n I'll fix that model, but am first looking for a quick way to restore\nperformance on our production servers.\n \n \n best regards,\n \n Marc Mamin\n\n\n\n\n\nHow to remove a table statistics ?\n\n\n\nHello, \nI have a weird table, upon with the queries are much faster when no statics were collected. \n\nIs there a way to delete statistics information for a table ?\nI've tried ALTER.. set STATISTICS 0 and then run ANALYZE, but it seems that old statistics are kept this way.\nCan I delete entries directly in pg_statistic ?\n(Postgresql 9.1)\n\n\n short backgroud Info:\n \n One of the table index is a GIN on a tsvector returning function, which is very costy.\n once analyzed, the query planner often ignore this index in favour of other one, hence triggering this function too often.\n \n I'll fix that model, but am first looking for a quick way to restore performance on our production servers.\n \n \n best regards,\n \n Marc Mamin",
"msg_date": "Tue, 31 Jan 2012 12:50:26 +0100",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to remove a table statistics ?"
},
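For reference, the per-column statistics-target change Marc describes trying would look roughly like the sketch below; the table and column names are placeholders, since the real ones are not given in the thread. Note that a target of 0 only makes ANALYZE skip the column going forward, which is consistent with Marc's observation that the old statistics are kept:

ALTER TABLE my_table ALTER COLUMN my_col SET STATISTICS 0;
ANALYZE my_table;  -- skips columns with target 0, but does not remove rows already in pg_statistic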
{
"msg_contents": "On 1/31/12 3:50 AM, Marc Mamin wrote:\n> Hello, \n> \n> I have a weird table, upon with the queries are much faster when no\n> statics were collected. \n> \n> Is there a way to delete statistics information for a table ?\n> I've tried ALTER.. set STATISTICS 0 and then run ANALYZE, but it seems\n> that old statistics are kept this way.\n> Can I delete entries directly in pg_statistic ?\n> (Postgresql 9.1)\n\nYou can, but it won't do any good; autovaccum will replace them.\n\nIt would be better to fix the actual query plan issue. If you can, post\nthe query plans with and without statistics (EXPLAIN ANALYZE, please) here.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Tue, 31 Jan 2012 10:44:02 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to remove a table statistics ?"
},
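A hedged sketch of what Josh describes: deleting the rows directly (superuser only), plus one way to keep autoanalyze from putting them back by raising the per-table analyze threshold (a detail not mentioned in the thread). "my_table" is a placeholder, and this is a workaround rather than a recommendation:

DELETE FROM pg_statistic WHERE starelid = 'my_table'::regclass;
ALTER TABLE my_table SET (autovacuum_analyze_threshold = 2000000000);  -- effectively never auto-analyze this table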
{
"msg_contents": "Hello,\r\nSome more tests have shown that removing the statistics just move the performance issue to other places.\r\nThe main issue here is a bad design, so I'd better focus on this than losing too much time with the current situation.\r\nBut this raises an interesting question on how/where does Postgres store statistics on functional indexes. \r\nin pg_statistics there are information on the column content, but I couldn't find stats on the function result which is fully computed only during the index creation.\r\nI guess that the planner would need to know at least the function cost to weight the benefit of such an index. \r\nIn my case I would set the function cost to 200 ...\r\n\r\n\r\nI have also tried to reduce random_page_cost to \"2\", and it seems to help in a few cases.\r\n\r\n\r\n(anonymized)\r\n\r\nexplain analyze\r\nSELECT min(msoffset) as t, coalesce(pipelinecall_id,-2) as pid\r\n from aserrorlist_20120125 l\r\n WHERE 1 = 1\r\n AND msoffset >= 1327503000000\r\n AND my_func('foo',20120125,l.id, l.header_9_10_id, l.categories_id, l.firstline_id) @@ to_aserrcfg_search_tsq($KUKU$lexeme_1 ! lexeme_2$KUKU$)\r\n group by ridcount,pipelinecall_id,coalesce(toplevelrid,msoffset::varchar);\r\n\r\n\r\nwithout stats: http://explain.depesz.com/s/qPg\r\nwith stats: http://explain.depesz.com/s/88q\r\n\r\naserr_20120125_tvi: GIN Index on my_func(.,.,.,.,.,.)\r\n\r\nbest regards,\r\n\r\nMarc Mamin\r\n\r\n> -----Original Message-----\r\n> From: [email protected] [mailto:pgsql-performance-\r\n> [email protected]] On Behalf Of Josh Berkus\r\n> Sent: Dienstag, 31. Januar 2012 19:44\r\n> To: [email protected]\r\n> Subject: Re: [PERFORM] How to remove a table statistics ?\r\n> \r\n> On 1/31/12 3:50 AM, Marc Mamin wrote:\r\n> > Hello,\r\n> >\r\n> > I have a weird table, upon with the queries are much faster when no\r\n> > statics were collected.\r\n> >\r\n> > Is there a way to delete statistics information for a table ?\r\n> > I've tried ALTER.. set STATISTICS 0 and then run ANALYZE, but it\r\n> seems\r\n> > that old statistics are kept this way.\r\n> > Can I delete entries directly in pg_statistic ?\r\n> > (Postgresql 9.1)\r\n> \r\n> You can, but it won't do any good; autovaccum will replace them.\r\n> \r\n> It would be better to fix the actual query plan issue. If you can,\r\n> post\r\n> the query plans with and without statistics (EXPLAIN ANALYZE, please)\r\n> here.\r\n> \r\n> --\r\n> Josh Berkus\r\n> PostgreSQL Experts Inc.\r\n> http://pgexperts.com\r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list (pgsql-\r\n> [email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n",
"msg_date": "Tue, 31 Jan 2012 20:36:09 +0100",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to remove a table statistics ?"
},
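The function-cost change Marc mentions would look roughly like the statement below; the argument types are guesses based on the call in the query above, not the real signature:

ALTER FUNCTION my_func(text, integer, bigint, bigint, bigint, bigint) COST 200;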
{
"msg_contents": "On Tue, Jan 31, 2012 at 2:36 PM, Marc Mamin <[email protected]> wrote:\n> But this raises an interesting question on how/where does Postgres store statistics on functional indexes.\n> in pg_statistics there are information on the column content, but I couldn't find stats on the function result which is fully computed only during the index creation.\n\nLook for rows where starelid is equal to the OID of the index.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Fri, 3 Feb 2012 16:56:15 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to remove a table statistics ?"
}
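An illustration of Robert's pointer, assuming the index name that appears in Marc's plans (aserr_20120125_tvi): statistics for an expression index are stored in pg_statistic under the index's own OID, one row per index column:

SELECT a.attname, s.staattnum, s.stanullfrac, s.stawidth, s.stadistinct
FROM pg_statistic s
JOIN pg_attribute a ON a.attrelid = s.starelid AND a.attnum = s.staattnum
WHERE s.starelid = 'aserr_20120125_tvi'::regclass;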
] |
[
{
"msg_contents": "My slow query today is somewhat more complex than yesterday's, but I'm\nhopeful it can be improved. Here's the query:\n\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\nJOIN emotions USING (moment_id)\nWHERE moments.inserted > 'today' AND moments.tableoid = pg_class.oid\nGROUP BY relname, emotion ORDER BY relname, emotion;\n\nAs you'll see below, moments is inherited by a number of other tables\nand the purpose of relname is to see which one. Meanwhile, emotions\ninherits feedback.\n\nHere's the Full Table and Index Schema:\n\nCREATE TABLE moments\n(\n moment_id character(24) NOT NULL DEFAULT to_char(now(), 'JHH24MISSUS'::text),\n block_id character(24) NOT NULL,\n inserted timestamp without time zone NOT NULL DEFAULT now(),\n CONSTRAINT moments_pkey PRIMARY KEY (moment_id )\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX moments_block_id_idx\n ON moments\n USING btree\n (block_id );\n\nCREATE INDEX moments_inserted_idx\n ON moments\n USING btree\n (inserted );\n\nCREATE TABLE feedback\n(\n feedback_id character(24) NOT NULL,\n user_id character(24) NOT NULL,\n moment_id character(24) NOT NULL,\n created timestamp without time zone,\n inserted timestamp without time zone NOT NULL DEFAULT now(),\n lnglat point,\n CONSTRAINT feedback_pkey PRIMARY KEY (feedback_id )\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX feedback_lnglat_idx\n ON feedback\n USING gist\n (lnglat );\n\nCREATE INDEX feedback_moment_id_idx\n ON feedback\n USING btree\n (moment_id );\n\nCREATE TABLE emotions\n(\n-- Inherited from table feedback: feedback_id character(24) NOT NULL,\n-- Inherited from table feedback: user_id character(24) NOT NULL,\n-- Inherited from table feedback: moment_id character(24) NOT NULL,\n-- Inherited from table feedback: created timestamp without time zone,\n-- Inherited from table feedback: inserted timestamp without time\nzone NOT NULL DEFAULT now(),\n emotion character varying NOT NULL,\n-- Inherited from table : lnglat point,\n CONSTRAINT emotions_pkey PRIMARY KEY (feedback_id )\n)\nINHERITS (feedback)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX emotions_emotion_idx\n ON emotions\n USING btree\n (emotion );\n\nHere's the results from EXPLAIN ANALYZE:\n\n\"Sort (cost=309717.70..309718.43 rows=1460 width=94) (actual\ntime=60462.534..60462.544 rows=25 loops=1)\"\n\" Sort Key: pg_class.relname, emotions.emotion\"\n\" Sort Method: quicksort Memory: 20kB\"\n\" -> HashAggregate (cost=309697.24..309702.35 rows=1460 width=94)\n(actual time=60462.457..60462.476 rows=25 loops=1)\"\n\" -> Hash Join (cost=133154.62..308963.70 rows=489024\nwidth=94) (actual time=26910.488..60031.589 rows=194642 loops=1)\"\n\" Hash Cond: (public.moments.tableoid = pg_class.oid)\"\n\" -> Hash Join (cost=133144.72..307119.96 rows=489024\nwidth=34) (actual time=26909.984..59434.137 rows=194642 loops=1)\"\n\" Hash Cond: (public.moments.moment_id = emotions.moment_id)\"\n\" -> Append (cost=0.00..114981.64 rows=119665\nwidth=29) (actual time=883.153..21696.939 rows=357565 loops=1)\"\n\" -> Seq Scan on moments (cost=0.00..0.00\nrows=1 width=104) (actual time=0.000..0.000 rows=0 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on thoughts moments\n(cost=0.00..38856.88 rows=44388 width=29) (actual\ntime=883.150..9040.959 rows=115436 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on photos moments\n(cost=0.00..29635.78 rows=194 width=29) (actual\ntime=5329.700..5827.447 rows=116420 
loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on music moments\n(cost=0.00..9371.88 rows=19070 width=29) (actual time=354.147..383.266\nrows=37248 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on people moments\n(cost=0.00..5945.26 rows=27 width=29) (actual time=185.393..185.393\nrows=0 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on places moments\n(cost=0.00..24551.03 rows=54961 width=29) (actual\ntime=5224.044..5324.517 rows=85564 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on videos moments\n(cost=0.00..981.31 rows=734 width=29) (actual time=21.075..28.735\nrows=2897 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on facebook_people moments\n(cost=0.00..10.84 rows=80 width=104) (actual time=0.001..0.001 rows=0\nloops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on address_people moments\n(cost=0.00..10.84 rows=80 width=104) (actual time=0.005..0.005 rows=0\nloops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on path_people moments\n(cost=0.00..5606.79 rows=30 width=29) (actual time=211.166..211.166\nrows=0 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on sleep moments\n(cost=0.00..11.05 rows=100 width=104) (actual time=0.002..0.002 rows=0\nloops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Hash (cost=79292.49..79292.49 rows=4059496\nwidth=55) (actual time=25757.998..25757.998 rows=4058642 loops=1)\"\n\" Buckets: 262144 Batches: 4 Memory Usage: 75211kB\"\n\" -> Seq Scan on emotions\n(cost=0.00..79292.49 rows=4059496 width=55) (actual\ntime=0.012..15969.981 rows=4058642 loops=1)\"\n\" -> Hash (cost=8.88..8.88 rows=292 width=68) (actual\ntime=0.487..0.487 rows=319 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 28kB\"\n\" -> Seq Scan on pg_class (cost=0.00..8.88\nrows=292 width=68) (actual time=0.013..0.234 rows=319 loops=1)\"\n\"Total runtime: 60601.612 ms\"\n\nPostgres version: is still 9.0.5\n\nHistory: N/A (This is the first time I've run this query.)\n\nHardware: 1.7 GB Cache and other things you'd expect from a Ronin\ninstance of a Heroku Postgres database.\n\nMaintenance Setup: What Heroku does. As before, vacuum should not be\nrelevant as there are no deletes or even updates (just inserts and\nselects)\n\nWAL Configuration: I still don't know. Heroku hosts the database on\nAmazon's servers, so maybe that answers the question?\n\nGUC Settings: As per the yesterday's discussion, I reduced\nrandom_page_cost to 2. Other than that, it's all default.\n\nBonus question: If that was too simple, here's something even more\ncomplex I'd like to do: I have another table that inherits feedback\ncalled \"comments\". Ideally, rather than an \"emotion\" column coming\nout, I would like to have a \"feedback_type\" column that would be\neither the value in the emotion column of the emotions table, or\n\"comment\" if it's from the comments table. I'm thinking I'm going to\nhave to simply put that together on the client, but if I can do that\nin a single query (that doesn't take an hour to run) that would be\nsuper cool. 
But that's definitely secondary compared to getting the\nabove query to run faster.\n\nThank you very much for any help!\n-Alessandro Gagliardi\n",
"msg_date": "Tue, 31 Jan 2012 13:22:12 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "From Simple to Complex"
},
{
"msg_contents": "Looks like I missed a key sentence in\nhttp://www.postgresql.org/docs/9.0/static/ddl-inherit.html which states: \"A\nserious limitation of the inheritance feature is that indexes (including\nunique constraints) and foreign key constraints only apply to single\ntables, not to their inheritance children.\"\nI should have realized that as I exploited that \"limitation\" in three of my\ntables. Gradually adding those indices now; will report on what kind of\ndifference it makes....\n\nOn Tue, Jan 31, 2012 at 1:22 PM, Alessandro Gagliardi\n<[email protected]>wrote:\n\n> My slow query today is somewhat more complex than yesterday's, but I'm\n> hopeful it can be improved. Here's the query:\n>\n> SELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\n> JOIN emotions USING (moment_id)\n> WHERE moments.inserted > 'today' AND moments.tableoid = pg_class.oid\n> GROUP BY relname, emotion ORDER BY relname, emotion;\n>\n> As you'll see below, moments is inherited by a number of other tables\n> and the purpose of relname is to see which one. Meanwhile, emotions\n> inherits feedback.\n>\n> Here's the Full Table and Index Schema:\n>\n> CREATE TABLE moments\n> (\n> moment_id character(24) NOT NULL DEFAULT to_char(now(),\n> 'JHH24MISSUS'::text),\n> block_id character(24) NOT NULL,\n> inserted timestamp without time zone NOT NULL DEFAULT now(),\n> CONSTRAINT moments_pkey PRIMARY KEY (moment_id )\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX moments_block_id_idx\n> ON moments\n> USING btree\n> (block_id );\n>\n> CREATE INDEX moments_inserted_idx\n> ON moments\n> USING btree\n> (inserted );\n>\n> CREATE TABLE feedback\n> (\n> feedback_id character(24) NOT NULL,\n> user_id character(24) NOT NULL,\n> moment_id character(24) NOT NULL,\n> created timestamp without time zone,\n> inserted timestamp without time zone NOT NULL DEFAULT now(),\n> lnglat point,\n> CONSTRAINT feedback_pkey PRIMARY KEY (feedback_id )\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX feedback_lnglat_idx\n> ON feedback\n> USING gist\n> (lnglat );\n>\n> CREATE INDEX feedback_moment_id_idx\n> ON feedback\n> USING btree\n> (moment_id );\n>\n> CREATE TABLE emotions\n> (\n> -- Inherited from table feedback: feedback_id character(24) NOT NULL,\n> -- Inherited from table feedback: user_id character(24) NOT NULL,\n> -- Inherited from table feedback: moment_id character(24) NOT NULL,\n> -- Inherited from table feedback: created timestamp without time zone,\n> -- Inherited from table feedback: inserted timestamp without time\n> zone NOT NULL DEFAULT now(),\n> emotion character varying NOT NULL,\n> -- Inherited from table : lnglat point,\n> CONSTRAINT emotions_pkey PRIMARY KEY (feedback_id )\n> )\n> INHERITS (feedback)\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX emotions_emotion_idx\n> ON emotions\n> USING btree\n> (emotion );\n>\n> Here's the results from EXPLAIN ANALYZE:\n>\n> \"Sort (cost=309717.70..309718.43 rows=1460 width=94) (actual\n> time=60462.534..60462.544 rows=25 loops=1)\"\n> \" Sort Key: pg_class.relname, emotions.emotion\"\n> \" Sort Method: quicksort Memory: 20kB\"\n> \" -> HashAggregate (cost=309697.24..309702.35 rows=1460 width=94)\n> (actual time=60462.457..60462.476 rows=25 loops=1)\"\n> \" -> Hash Join (cost=133154.62..308963.70 rows=489024\n> width=94) (actual time=26910.488..60031.589 rows=194642 loops=1)\"\n> \" Hash Cond: (public.moments.tableoid = pg_class.oid)\"\n> \" -> Hash Join (cost=133144.72..307119.96 rows=489024\n> width=34) (actual time=26909.984..59434.137 rows=194642 
loops=1)\"\n> \" Hash Cond: (public.moments.moment_id =\n> emotions.moment_id)\"\n> \" -> Append (cost=0.00..114981.64 rows=119665\n> width=29) (actual time=883.153..21696.939 rows=357565 loops=1)\"\n> \" -> Seq Scan on moments (cost=0.00..0.00\n> rows=1 width=104) (actual time=0.000..0.000 rows=0 loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on thoughts moments\n> (cost=0.00..38856.88 rows=44388 width=29) (actual\n> time=883.150..9040.959 rows=115436 loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on photos moments\n> (cost=0.00..29635.78 rows=194 width=29) (actual\n> time=5329.700..5827.447 rows=116420 loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on music moments\n> (cost=0.00..9371.88 rows=19070 width=29) (actual time=354.147..383.266\n> rows=37248 loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on people moments\n> (cost=0.00..5945.26 rows=27 width=29) (actual time=185.393..185.393\n> rows=0 loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on places moments\n> (cost=0.00..24551.03 rows=54961 width=29) (actual\n> time=5224.044..5324.517 rows=85564 loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on videos moments\n> (cost=0.00..981.31 rows=734 width=29) (actual time=21.075..28.735\n> rows=2897 loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on facebook_people moments\n> (cost=0.00..10.84 rows=80 width=104) (actual time=0.001..0.001 rows=0\n> loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on address_people moments\n> (cost=0.00..10.84 rows=80 width=104) (actual time=0.005..0.005 rows=0\n> loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on path_people moments\n> (cost=0.00..5606.79 rows=30 width=29) (actual time=211.166..211.166\n> rows=0 loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Seq Scan on sleep moments\n> (cost=0.00..11.05 rows=100 width=104) (actual time=0.002..0.002 rows=0\n> loops=1)\"\n> \" Filter: (inserted > '2012-01-31\n> 00:00:00'::timestamp without time zone)\"\n> \" -> Hash (cost=79292.49..79292.49 rows=4059496\n> width=55) (actual time=25757.998..25757.998 rows=4058642 loops=1)\"\n> \" Buckets: 262144 Batches: 4 Memory Usage:\n> 75211kB\"\n> \" -> Seq Scan on emotions\n> (cost=0.00..79292.49 rows=4059496 width=55) (actual\n> time=0.012..15969.981 rows=4058642 loops=1)\"\n> \" -> Hash (cost=8.88..8.88 rows=292 width=68) (actual\n> time=0.487..0.487 rows=319 loops=1)\"\n> \" Buckets: 1024 Batches: 1 Memory Usage: 28kB\"\n> \" -> Seq Scan on pg_class (cost=0.00..8.88\n> rows=292 width=68) (actual time=0.013..0.234 rows=319 loops=1)\"\n> \"Total runtime: 60601.612 ms\"\n>\n> Postgres version: is still 9.0.5\n>\n> History: N/A (This is the first time I've run this query.)\n>\n> Hardware: 1.7 GB Cache and other things you'd expect from a Ronin\n> instance of a Heroku Postgres database.\n>\n> Maintenance Setup: What Heroku does. 
As before, vacuum should not be\n> relevant as there are no deletes or even updates (just inserts and\n> selects)\n>\n> WAL Configuration: I still don't know. Heroku hosts the database on\n> Amazon's servers, so maybe that answers the question?\n>\n> GUC Settings: As per the yesterday's discussion, I reduced\n> random_page_cost to 2. Other than that, it's all default.\n>\n> Bonus question: If that was too simple, here's something even more\n> complex I'd like to do: I have another table that inherits feedback\n> called \"comments\". Ideally, rather than an \"emotion\" column coming\n> out, I would like to have a \"feedback_type\" column that would be\n> either the value in the emotion column of the emotions table, or\n> \"comment\" if it's from the comments table. I'm thinking I'm going to\n> have to simply put that together on the client, but if I can do that\n> in a single query (that doesn't take an hour to run) that would be\n> super cool. But that's definitely secondary compared to getting the\n> above query to run faster.\n>\n> Thank you very much for any help!\n> -Alessandro Gagliardi\n>\n\nLooks like I missed a key sentence in http://www.postgresql.org/docs/9.0/static/ddl-inherit.html which states: \"A serious limitation of the inheritance feature is that indexes (including unique constraints) and foreign key constraints only apply to single tables, not to their inheritance children.\"\nI should have realized that as I exploited that \"limitation\" in three of my tables. Gradually adding those indices now; will report on what kind of difference it makes....\nOn Tue, Jan 31, 2012 at 1:22 PM, Alessandro Gagliardi <[email protected]> wrote:\nMy slow query today is somewhat more complex than yesterday's, but I'm\nhopeful it can be improved. Here's the query:\n\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\nJOIN emotions USING (moment_id)\nWHERE moments.inserted > 'today' AND moments.tableoid = pg_class.oid\nGROUP BY relname, emotion ORDER BY relname, emotion;\n\nAs you'll see below, moments is inherited by a number of other tables\nand the purpose of relname is to see which one. 
Meanwhile, emotions\ninherits feedback.\n\nHere's the Full Table and Index Schema:\n\nCREATE TABLE moments\n(\n moment_id character(24) NOT NULL DEFAULT to_char(now(), 'JHH24MISSUS'::text),\n block_id character(24) NOT NULL,\n inserted timestamp without time zone NOT NULL DEFAULT now(),\n CONSTRAINT moments_pkey PRIMARY KEY (moment_id )\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX moments_block_id_idx\n ON moments\n USING btree\n (block_id );\n\nCREATE INDEX moments_inserted_idx\n ON moments\n USING btree\n (inserted );\n\nCREATE TABLE feedback\n(\n feedback_id character(24) NOT NULL,\n user_id character(24) NOT NULL,\n moment_id character(24) NOT NULL,\n created timestamp without time zone,\n inserted timestamp without time zone NOT NULL DEFAULT now(),\n lnglat point,\n CONSTRAINT feedback_pkey PRIMARY KEY (feedback_id )\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX feedback_lnglat_idx\n ON feedback\n USING gist\n (lnglat );\n\nCREATE INDEX feedback_moment_id_idx\n ON feedback\n USING btree\n (moment_id );\n\nCREATE TABLE emotions\n(\n-- Inherited from table feedback: feedback_id character(24) NOT NULL,\n-- Inherited from table feedback: user_id character(24) NOT NULL,\n-- Inherited from table feedback: moment_id character(24) NOT NULL,\n-- Inherited from table feedback: created timestamp without time zone,\n-- Inherited from table feedback: inserted timestamp without time\nzone NOT NULL DEFAULT now(),\n emotion character varying NOT NULL,\n-- Inherited from table : lnglat point,\n CONSTRAINT emotions_pkey PRIMARY KEY (feedback_id )\n)\nINHERITS (feedback)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX emotions_emotion_idx\n ON emotions\n USING btree\n (emotion );\n\nHere's the results from EXPLAIN ANALYZE:\n\n\"Sort (cost=309717.70..309718.43 rows=1460 width=94) (actual\ntime=60462.534..60462.544 rows=25 loops=1)\"\n\" Sort Key: pg_class.relname, emotions.emotion\"\n\" Sort Method: quicksort Memory: 20kB\"\n\" -> HashAggregate (cost=309697.24..309702.35 rows=1460 width=94)\n(actual time=60462.457..60462.476 rows=25 loops=1)\"\n\" -> Hash Join (cost=133154.62..308963.70 rows=489024\nwidth=94) (actual time=26910.488..60031.589 rows=194642 loops=1)\"\n\" Hash Cond: (public.moments.tableoid = pg_class.oid)\"\n\" -> Hash Join (cost=133144.72..307119.96 rows=489024\nwidth=34) (actual time=26909.984..59434.137 rows=194642 loops=1)\"\n\" Hash Cond: (public.moments.moment_id = emotions.moment_id)\"\n\" -> Append (cost=0.00..114981.64 rows=119665\nwidth=29) (actual time=883.153..21696.939 rows=357565 loops=1)\"\n\" -> Seq Scan on moments (cost=0.00..0.00\nrows=1 width=104) (actual time=0.000..0.000 rows=0 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on thoughts moments\n(cost=0.00..38856.88 rows=44388 width=29) (actual\ntime=883.150..9040.959 rows=115436 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on photos moments\n(cost=0.00..29635.78 rows=194 width=29) (actual\ntime=5329.700..5827.447 rows=116420 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on music moments\n(cost=0.00..9371.88 rows=19070 width=29) (actual time=354.147..383.266\nrows=37248 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on people moments\n(cost=0.00..5945.26 rows=27 width=29) (actual time=185.393..185.393\nrows=0 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time 
zone)\"\n\" -> Seq Scan on places moments\n(cost=0.00..24551.03 rows=54961 width=29) (actual\ntime=5224.044..5324.517 rows=85564 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on videos moments\n(cost=0.00..981.31 rows=734 width=29) (actual time=21.075..28.735\nrows=2897 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on facebook_people moments\n(cost=0.00..10.84 rows=80 width=104) (actual time=0.001..0.001 rows=0\nloops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on address_people moments\n(cost=0.00..10.84 rows=80 width=104) (actual time=0.005..0.005 rows=0\nloops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on path_people moments\n(cost=0.00..5606.79 rows=30 width=29) (actual time=211.166..211.166\nrows=0 loops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Seq Scan on sleep moments\n(cost=0.00..11.05 rows=100 width=104) (actual time=0.002..0.002 rows=0\nloops=1)\"\n\" Filter: (inserted > '2012-01-31\n00:00:00'::timestamp without time zone)\"\n\" -> Hash (cost=79292.49..79292.49 rows=4059496\nwidth=55) (actual time=25757.998..25757.998 rows=4058642 loops=1)\"\n\" Buckets: 262144 Batches: 4 Memory Usage: 75211kB\"\n\" -> Seq Scan on emotions\n(cost=0.00..79292.49 rows=4059496 width=55) (actual\ntime=0.012..15969.981 rows=4058642 loops=1)\"\n\" -> Hash (cost=8.88..8.88 rows=292 width=68) (actual\ntime=0.487..0.487 rows=319 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 28kB\"\n\" -> Seq Scan on pg_class (cost=0.00..8.88\nrows=292 width=68) (actual time=0.013..0.234 rows=319 loops=1)\"\n\"Total runtime: 60601.612 ms\"\n\nPostgres version: is still 9.0.5\n\nHistory: N/A (This is the first time I've run this query.)\n\nHardware: 1.7 GB Cache and other things you'd expect from a Ronin\ninstance of a Heroku Postgres database.\n\nMaintenance Setup: What Heroku does. As before, vacuum should not be\nrelevant as there are no deletes or even updates (just inserts and\nselects)\n\nWAL Configuration: I still don't know. Heroku hosts the database on\nAmazon's servers, so maybe that answers the question?\n\nGUC Settings: As per the yesterday's discussion, I reduced\nrandom_page_cost to 2. Other than that, it's all default.\n\nBonus question: If that was too simple, here's something even more\ncomplex I'd like to do: I have another table that inherits feedback\ncalled \"comments\". Ideally, rather than an \"emotion\" column coming\nout, I would like to have a \"feedback_type\" column that would be\neither the value in the emotion column of the emotions table, or\n\"comment\" if it's from the comments table. I'm thinking I'm going to\nhave to simply put that together on the client, but if I can do that\nin a single query (that doesn't take an hour to run) that would be\nsuper cool. But that's definitely secondary compared to getting the\nabove query to run faster.\n\nThank you very much for any help!\n-Alessandro Gagliardi",
"msg_date": "Tue, 31 Jan 2012 14:10:11 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: From Simple to Complex"
},
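The per-child indexes being added here would look like the statements below; the index and table names are taken from the index scans that show up in the next message's plan, and the list is abridged:

CREATE INDEX thoughts_inserted_idx  ON thoughts (inserted);
CREATE INDEX photos_inserted_idx    ON photos   (inserted);
CREATE INDEX places_inserted_idx    ON places   (inserted);
CREATE INDEX emotions_moment_id_idx ON emotions (moment_id);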
{
"msg_contents": "I changed the query a bit so the results would not change over the\ncourse of the day to:\n\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\nJOIN emotions USING (moment_id)\nWHERE moments.inserted BETWEEN 'yesterday' AND 'today' AND\nmoments.tableoid = pg_class.oid\nGROUP BY relname, emotion ORDER BY relname, emotion;\n\nAdding the indices means that I am now doing index scans instead of\nseq scans but it doesn't seem to help with speed. Here are the new\nEXPLAIN ANALYZE results:\n\n\"Sort (cost=174432.85..174433.58 rows=1460 width=94) (actual\ntime=73440.079..73440.088 rows=25 loops=1)\"\n\" Sort Key: pg_class.relname, emotions.emotion\"\n\" Sort Method: quicksort Memory: 20kB\"\n\" -> HashAggregate (cost=174412.39..174417.50 rows=1460 width=94)\n(actual time=73437.905..73437.940 rows=25 loops=1)\"\n\" -> Merge Join (cost=27888.98..172032.86 rows=1586355\nwidth=94) (actual time=65563.027..72763.848 rows=245917 loops=1)\"\n\" Merge Cond: (emotions.moment_id = public.moments.moment_id)\"\n\" -> Index Scan using emotions_moment_id_idx on emotions\n (cost=0.00..135759.78 rows=4077358 width=55) (actual\ntime=1.283..43894.799 rows=3841095 loops=1)\"\n\" -> Sort (cost=27888.98..28083.07 rows=388184\nwidth=89) (actual time=16556.348..17384.537 rows=521025 loops=1)\"\n\" Sort Key: public.moments.moment_id\"\n\" Sort Method: quicksort Memory: 60865kB\"\n\" -> Hash Join (cost=9.90..20681.81 rows=388184\nwidth=89) (actual time=2.612..4309.131 rows=396594 loops=1)\"\n\" Hash Cond: (public.moments.tableoid = pg_class.oid)\"\n\" -> Append (cost=0.00..19216.22\nrows=388184 width=29) (actual time=2.066..2851.885 rows=396594\nloops=1)\"\n\" -> Seq Scan on moments\n(cost=0.00..0.00 rows=1 width=104) (actual time=0.002..0.002 rows=0\nloops=1)\"\n\" Filter: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using\nthoughts_inserted_idx on thoughts moments (cost=0.00..6146.96\nrows=136903 width=29) (actual time=2.063..606.584 rows=130884\nloops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using\nphotos_inserted_idx on photos moments (cost=0.00..4975.46 rows=109900\nwidth=29) (actual time=1.542..836.063 rows=128286 loops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using\nmusic_inserted_idx on music moments (cost=0.00..3102.69 rows=40775\nwidth=29) (actual time=0.756..308.031 rows=41176 loops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using\npeople_inserted_idx on people moments (cost=0.00..4.07 rows=1\nwidth=29) (actual time=0.015..0.015 rows=0 loops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using\nplaces_inserted_idx on places moments (cost=0.00..4125.65 rows=96348\nwidth=29) (actual time=0.066..263.853 rows=92756 loops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Bitmap Heap Scan on videos\nmoments 
(cost=29.56..835.20 rows=3660 width=29) (actual\ntime=3.122..87.889 rows=3492 loops=1)\"\n\" Recheck Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on\nvideos_inserted_idx (cost=0.00..29.37 rows=3660 width=0) (actual\ntime=0.696..0.696 rows=3492 loops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Seq Scan on facebook_people\nmoments (cost=0.00..1.04 rows=1 width=104) (actual time=0.040..0.040\nrows=0 loops=1)\"\n\" Filter: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using\naddress_people_inserted_idx on address_people moments\n(cost=0.00..4.06 rows=1 width=29) (actual time=0.017..0.017 rows=0\nloops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using\npath_people_inserted_idx on path_people moments (cost=0.00..17.03\nrows=593 width=29) (actual time=1.758..1.758 rows=0 loops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Index Scan using\nsleep_inserted_idx on sleep moments (cost=0.00..4.06 rows=1 width=29)\n(actual time=0.012..0.012 rows=0 loops=1)\"\n\" Index Cond: ((inserted >=\n'2012-01-30 00:00:00'::timestamp without time zone) AND (inserted <=\n'2012-01-31 00:00:00'::timestamp without time zone))\"\n\" -> Hash (cost=8.88..8.88 rows=292\nwidth=68) (actual time=0.520..0.520 rows=334 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 29kB\"\n\" -> Seq Scan on pg_class\n(cost=0.00..8.88 rows=292 width=68) (actual time=0.007..0.257 rows=334\nloops=1)\"\n\"Total runtime: 73511.072 ms\"\n\nPlease let me know if there is any way to make this more efficient.\n\nThank you,\n-Alessandro\n\nOn Tue, Jan 31, 2012 at 2:10 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n>\n> Looks like I missed a key sentence in http://www.postgresql.org/docs/9.0/static/ddl-inherit.html which states: \"A serious limitation of the inheritance feature is that indexes (including unique constraints) and foreign key constraints only apply to single tables, not to their inheritance children.\"\n> I should have realized that as I exploited that \"limitation\" in three of my tables. Gradually adding those indices now; will report on what kind of difference it makes....\n>\n> On Tue, Jan 31, 2012 at 1:22 PM, Alessandro Gagliardi <[email protected]> wrote:\n>>\n>> My slow query today is somewhat more complex than yesterday's, but I'm\n>> hopeful it can be improved. Here's the query:\n>>\n>> SELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\n>> JOIN emotions USING (moment_id)\n>> WHERE moments.inserted > 'today' AND moments.tableoid = pg_class.oid\n>> GROUP BY relname, emotion ORDER BY relname, emotion;\n>>\n>> As you'll see below, moments is inherited by a number of other tables\n>> and the purpose of relname is to see which one. 
Meanwhile, emotions\n>> inherits feedback.\n>>\n>> Here's the Full Table and Index Schema:\n>>\n>> CREATE TABLE moments\n>> (\n>> moment_id character(24) NOT NULL DEFAULT to_char(now(), 'JHH24MISSUS'::text),\n>> block_id character(24) NOT NULL,\n>> inserted timestamp without time zone NOT NULL DEFAULT now(),\n>> CONSTRAINT moments_pkey PRIMARY KEY (moment_id )\n>> )\n>> WITH (\n>> OIDS=FALSE\n>> );\n>>\n>> CREATE INDEX moments_block_id_idx\n>> ON moments\n>> USING btree\n>> (block_id );\n>>\n>> CREATE INDEX moments_inserted_idx\n>> ON moments\n>> USING btree\n>> (inserted );\n>>\n>> CREATE TABLE feedback\n>> (\n>> feedback_id character(24) NOT NULL,\n>> user_id character(24) NOT NULL,\n>> moment_id character(24) NOT NULL,\n>> created timestamp without time zone,\n>> inserted timestamp without time zone NOT NULL DEFAULT now(),\n>> lnglat point,\n>> CONSTRAINT feedback_pkey PRIMARY KEY (feedback_id )\n>> )\n>> WITH (\n>> OIDS=FALSE\n>> );\n>>\n>> CREATE INDEX feedback_lnglat_idx\n>> ON feedback\n>> USING gist\n>> (lnglat );\n>>\n>> CREATE INDEX feedback_moment_id_idx\n>> ON feedback\n>> USING btree\n>> (moment_id );\n>>\n>> CREATE TABLE emotions\n>> (\n>> -- Inherited from table feedback: feedback_id character(24) NOT NULL,\n>> -- Inherited from table feedback: user_id character(24) NOT NULL,\n>> -- Inherited from table feedback: moment_id character(24) NOT NULL,\n>> -- Inherited from table feedback: created timestamp without time zone,\n>> -- Inherited from table feedback: inserted timestamp without time\n>> zone NOT NULL DEFAULT now(),\n>> emotion character varying NOT NULL,\n>> -- Inherited from table : lnglat point,\n>> CONSTRAINT emotions_pkey PRIMARY KEY (feedback_id )\n>> )\n>> INHERITS (feedback)\n>> WITH (\n>> OIDS=FALSE\n>> );\n>>\n>> CREATE INDEX emotions_emotion_idx\n>> ON emotions\n>> USING btree\n>> (emotion );\n>>\n>> Here's the results from EXPLAIN ANALYZE:\n>>\n>> \"Sort (cost=309717.70..309718.43 rows=1460 width=94) (actual\n>> time=60462.534..60462.544 rows=25 loops=1)\"\n>> \" Sort Key: pg_class.relname, emotions.emotion\"\n>> \" Sort Method: quicksort Memory: 20kB\"\n>> \" -> HashAggregate (cost=309697.24..309702.35 rows=1460 width=94)\n>> (actual time=60462.457..60462.476 rows=25 loops=1)\"\n>> \" -> Hash Join (cost=133154.62..308963.70 rows=489024\n>> width=94) (actual time=26910.488..60031.589 rows=194642 loops=1)\"\n>> \" Hash Cond: (public.moments.tableoid = pg_class.oid)\"\n>> \" -> Hash Join (cost=133144.72..307119.96 rows=489024\n>> width=34) (actual time=26909.984..59434.137 rows=194642 loops=1)\"\n>> \" Hash Cond: (public.moments.moment_id = emotions.moment_id)\"\n>> \" -> Append (cost=0.00..114981.64 rows=119665\n>> width=29) (actual time=883.153..21696.939 rows=357565 loops=1)\"\n>> \" -> Seq Scan on moments (cost=0.00..0.00\n>> rows=1 width=104) (actual time=0.000..0.000 rows=0 loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on thoughts moments\n>> (cost=0.00..38856.88 rows=44388 width=29) (actual\n>> time=883.150..9040.959 rows=115436 loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on photos moments\n>> (cost=0.00..29635.78 rows=194 width=29) (actual\n>> time=5329.700..5827.447 rows=116420 loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on music moments\n>> (cost=0.00..9371.88 rows=19070 width=29) (actual time=354.147..383.266\n>> rows=37248 loops=1)\"\n>> \" 
Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on people moments\n>> (cost=0.00..5945.26 rows=27 width=29) (actual time=185.393..185.393\n>> rows=0 loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on places moments\n>> (cost=0.00..24551.03 rows=54961 width=29) (actual\n>> time=5224.044..5324.517 rows=85564 loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on videos moments\n>> (cost=0.00..981.31 rows=734 width=29) (actual time=21.075..28.735\n>> rows=2897 loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on facebook_people moments\n>> (cost=0.00..10.84 rows=80 width=104) (actual time=0.001..0.001 rows=0\n>> loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on address_people moments\n>> (cost=0.00..10.84 rows=80 width=104) (actual time=0.005..0.005 rows=0\n>> loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on path_people moments\n>> (cost=0.00..5606.79 rows=30 width=29) (actual time=211.166..211.166\n>> rows=0 loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Seq Scan on sleep moments\n>> (cost=0.00..11.05 rows=100 width=104) (actual time=0.002..0.002 rows=0\n>> loops=1)\"\n>> \" Filter: (inserted > '2012-01-31\n>> 00:00:00'::timestamp without time zone)\"\n>> \" -> Hash (cost=79292.49..79292.49 rows=4059496\n>> width=55) (actual time=25757.998..25757.998 rows=4058642 loops=1)\"\n>> \" Buckets: 262144 Batches: 4 Memory Usage: 75211kB\"\n>> \" -> Seq Scan on emotions\n>> (cost=0.00..79292.49 rows=4059496 width=55) (actual\n>> time=0.012..15969.981 rows=4058642 loops=1)\"\n>> \" -> Hash (cost=8.88..8.88 rows=292 width=68) (actual\n>> time=0.487..0.487 rows=319 loops=1)\"\n>> \" Buckets: 1024 Batches: 1 Memory Usage: 28kB\"\n>> \" -> Seq Scan on pg_class (cost=0.00..8.88\n>> rows=292 width=68) (actual time=0.013..0.234 rows=319 loops=1)\"\n>> \"Total runtime: 60601.612 ms\"\n>>\n>> Postgres version: is still 9.0.5\n>>\n>> History: N/A (This is the first time I've run this query.)\n>>\n>> Hardware: 1.7 GB Cache and other things you'd expect from a Ronin\n>> instance of a Heroku Postgres database.\n>>\n>> Maintenance Setup: What Heroku does. As before, vacuum should not be\n>> relevant as there are no deletes or even updates (just inserts and\n>> selects)\n>>\n>> WAL Configuration: I still don't know. Heroku hosts the database on\n>> Amazon's servers, so maybe that answers the question?\n>>\n>> GUC Settings: As per the yesterday's discussion, I reduced\n>> random_page_cost to 2. Other than that, it's all default.\n>>\n>> Bonus question: If that was too simple, here's something even more\n>> complex I'd like to do: I have another table that inherits feedback\n>> called \"comments\". Ideally, rather than an \"emotion\" column coming\n>> out, I would like to have a \"feedback_type\" column that would be\n>> either the value in the emotion column of the emotions table, or\n>> \"comment\" if it's from the comments table. I'm thinking I'm going to\n>> have to simply put that together on the client, but if I can do that\n>> in a single query (that doesn't take an hour to run) that would be\n>> super cool. 
But that's definitely secondary compared to getting the\n>> above query to run faster.\n>>\n>> Thank you very much for any help!\n>> -Alessandro Gagliardi\n>\n>\n",
"msg_date": "Tue, 31 Jan 2012 14:53:46 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: From Simple to Complex"
},
{
"msg_contents": "I just got a pointer on presenting EXPLAIN ANALYZE in a more human friendly\nfashion (thanks, Agent M!): http://explain.depesz.com/s/A9S\n\n From this it looks like the bottleneck happens when Postgres does an Index\nScan using emotions_moment_id_idx on emotions before filtering on\nmoments.inserted so I thought I'd try filtering on emotions.inserted\ninstead but that only made it worse. At the same time, I noticed that \"FROM\npg_class, moments WHERE moments.tableoid = pg_class.oid\" tends to run a bit\nfaster than \"FROM pg_class JOIN moments ON moments.tableoid =\npg_class.oid\". So I tried:\n\nSELECT relname, emotion, COUNT(feedback_id)\n FROM pg_class, moments, emotions\n WHERE moments.tableoid = pg_class.oid\n AND emotions.inserted > 'yesterday'\n AND moments.inserted BETWEEN 'yesterday' AND 'today'\n AND emotions.moment_id = moments.moment_id\n GROUP BY relname, emotion\n ORDER BY relname, emotion;\n\nThat was a bit faster, but still very slow. Here's the EXPLAIN:\nhttp://explain.depesz.com/s/ZdF\n\nOn Tue, Jan 31, 2012 at 2:53 PM, Alessandro Gagliardi\n<[email protected]>wrote:\n\n> I changed the query a bit so the results would not change over the\n> course of the day to:\n>\n> SELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\n> JOIN emotions USING (moment_id)\n> WHERE moments.inserted BETWEEN 'yesterday' AND 'today' AND\n> moments.tableoid = pg_class.oid\n> GROUP BY relname, emotion ORDER BY relname, emotion;\n>\n\nI just got a pointer on presenting EXPLAIN ANALYZE in a more human friendly fashion (thanks, Agent M!): http://explain.depesz.com/s/A9SFrom this it looks like the bottleneck happens when Postgres does an Index Scan using emotions_moment_id_idx on emotions before filtering on moments.inserted so I thought I'd try filtering on emotions.inserted instead but that only made it worse. At the same time, I noticed that \"FROM pg_class, moments WHERE moments.tableoid = pg_class.oid\" tends to run a bit faster than \"FROM pg_class JOIN moments ON moments.tableoid = pg_class.oid\". So I tried:\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments, emotions WHERE moments.tableoid = pg_class.oid AND emotions.inserted > 'yesterday' \n AND moments.inserted BETWEEN 'yesterday' AND 'today' AND emotions.moment_id = moments.moment_id GROUP BY relname, emotion ORDER BY relname, emotion;\nThat was a bit faster, but still very slow. Here's the EXPLAIN: http://explain.depesz.com/s/ZdFOn Tue, Jan 31, 2012 at 2:53 PM, Alessandro Gagliardi <[email protected]> wrote:\nI changed the query a bit so the results would not change over the\ncourse of the day to:\n\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\nJOIN emotions USING (moment_id)\nWHERE moments.inserted BETWEEN 'yesterday' AND 'today' AND\nmoments.tableoid = pg_class.oid\nGROUP BY relname, emotion ORDER BY relname, emotion;",
"msg_date": "Tue, 31 Jan 2012 15:43:10 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: From Simple to Complex"
},
{
"msg_contents": "Final update on this thread: since it is only necessary for me to get a\nrough ratio of the distribution (and not the absolute count), I refactored\nthe query to include a subquery that samples from the moments table\nthus: SELECT moment_id, block_id FROM moments WHERE inserted BETWEEN\n'yesterday' AND 'today' ORDER BY RANDOM() LIMIT 10000; I also took\nadvantage of another table called blocks that happens to contain the\nmoment_type as well (thus making it so I don't need to reference pg_class).\nThe final query looks like:\n\nSELECT moment_type, emotion, COUNT(feedback_id)\n FROM (SELECT moment_id, block_id\n FROM moments\n WHERE inserted BETWEEN 'yesterday' AND 'today'\n ORDER BY RANDOM() LIMIT 10000) AS sample_moments\n JOIN blocks USING (block_id)\n JOIN emotions USING (moment_id)\n GROUP BY moment_type, emotion\n ORDER BY moment_type, emotion\n\nThe explain is at http://explain.depesz.com/s/lYh\n\nInterestingly, increasing the limit does not seem to increase the runtime\nin a linear fashion. When I run it with a limit of 60000 I get a runtime\nof 14991 ms. But if I run it with a limit of 70000 I get a runtime of 77744\nms. I assume that that's because I'm hitting a memory limit and paging out.\nIs that right?\n\nOn Tue, Jan 31, 2012 at 3:43 PM, Alessandro Gagliardi\n<[email protected]>wrote:\n\n> I just got a pointer on presenting EXPLAIN ANALYZE in a more human\n> friendly fashion (thanks, Agent M!): http://explain.depesz.com/s/A9S\n>\n> From this it looks like the bottleneck happens when Postgres does an Index\n> Scan using emotions_moment_id_idx on emotions before filtering on\n> moments.inserted so I thought I'd try filtering on emotions.inserted\n> instead but that only made it worse. At the same time, I noticed that \"FROM\n> pg_class, moments WHERE moments.tableoid = pg_class.oid\" tends to run a bit\n> faster than \"FROM pg_class JOIN moments ON moments.tableoid =\n> pg_class.oid\". So I tried:\n>\n> SELECT relname, emotion, COUNT(feedback_id)\n> FROM pg_class, moments, emotions\n> WHERE moments.tableoid = pg_class.oid\n> AND emotions.inserted > 'yesterday'\n> AND moments.inserted BETWEEN 'yesterday' AND 'today'\n> AND emotions.moment_id = moments.moment_id\n> GROUP BY relname, emotion\n> ORDER BY relname, emotion;\n>\n> That was a bit faster, but still very slow. Here's the EXPLAIN:\n> http://explain.depesz.com/s/ZdF\n>\n> On Tue, Jan 31, 2012 at 2:53 PM, Alessandro Gagliardi <[email protected]\n> > wrote:\n>\n>> I changed the query a bit so the results would not change over the\n>> course of the day to:\n>>\n>> SELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\n>> JOIN emotions USING (moment_id)\n>> WHERE moments.inserted BETWEEN 'yesterday' AND 'today' AND\n>> moments.tableoid = pg_class.oid\n>> GROUP BY relname, emotion ORDER BY relname, emotion;\n>>\n>\n\nFinal update on this thread: since it is only necessary for me to get a rough ratio of the distribution (and not the absolute count), I refactored the query to include a subquery that samples from the moments table thus: SELECT moment_id, block_id FROM moments WHERE inserted BETWEEN 'yesterday' AND 'today' ORDER BY RANDOM() LIMIT 10000; I also took advantage of another table called blocks that happens to contain the moment_type as well (thus making it so I don't need to reference pg_class). 
The final query looks like:\nSELECT moment_type, emotion, COUNT(feedback_id) FROM (SELECT moment_id, block_id FROM moments WHERE inserted BETWEEN 'yesterday' AND 'today' \n ORDER BY RANDOM() LIMIT 10000) AS sample_moments JOIN blocks USING (block_id) JOIN emotions USING (moment_id) GROUP BY moment_type, emotion ORDER BY moment_type, emotion\nThe explain is at http://explain.depesz.com/s/lYhInterestingly, increasing the limit does not seem to increase the runtime in a linear fashion. When I run it with a limit of 60000 I get a runtime of 14991 ms. But if I run it with a limit of 70000 I get a runtime of 77744 ms. I assume that that's because I'm hitting a memory limit and paging out. Is that right?\nOn Tue, Jan 31, 2012 at 3:43 PM, Alessandro Gagliardi <[email protected]> wrote:\nI just got a pointer on presenting EXPLAIN ANALYZE in a more human friendly fashion (thanks, Agent M!): http://explain.depesz.com/s/A9SFrom this it looks like the bottleneck happens when Postgres does an Index Scan using emotions_moment_id_idx on emotions before filtering on moments.inserted so I thought I'd try filtering on emotions.inserted instead but that only made it worse. At the same time, I noticed that \"FROM pg_class, moments WHERE moments.tableoid = pg_class.oid\" tends to run a bit faster than \"FROM pg_class JOIN moments ON moments.tableoid = pg_class.oid\". So I tried:\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments, emotions WHERE moments.tableoid = pg_class.oid AND emotions.inserted > 'yesterday' \n AND moments.inserted BETWEEN 'yesterday' AND 'today' AND emotions.moment_id = moments.moment_id GROUP BY relname, emotion ORDER BY relname, emotion;\n\nThat was a bit faster, but still very slow. Here's the EXPLAIN: http://explain.depesz.com/s/ZdF\nOn Tue, Jan 31, 2012 at 2:53 PM, Alessandro Gagliardi <[email protected]> wrote:\nI changed the query a bit so the results would not change over the\ncourse of the day to:\n\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\nJOIN emotions USING (moment_id)\nWHERE moments.inserted BETWEEN 'yesterday' AND 'today' AND\nmoments.tableoid = pg_class.oid\nGROUP BY relname, emotion ORDER BY relname, emotion;",
"msg_date": "Wed, 1 Feb 2012 10:19:28 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: From Simple to Complex"
},
{
"msg_contents": "On Wed, Feb 1, 2012 at 11:19 AM, Alessandro Gagliardi\n<[email protected]> wrote:\n> Final update on this thread: since it is only necessary for me to get a\n> rough ratio of the distribution (and not the absolute count), I refactored\n> the query to include a subquery that samples from the moments table\n> thus: SELECT moment_id, block_id FROM moments WHERE inserted BETWEEN\n> 'yesterday' AND 'today' ORDER BY RANDOM() LIMIT 10000; I also took advantage\n> of another table called blocks that happens to contain the moment_type as\n> well (thus making it so I don't need to reference pg_class). The final query\n> looks like:\n>\n> SELECT moment_type, emotion, COUNT(feedback_id)\n> FROM (SELECT moment_id, block_id\n> FROM moments\n> WHERE inserted BETWEEN 'yesterday' AND 'today'\n> ORDER BY RANDOM() LIMIT 10000) AS sample_moments\n> JOIN blocks USING (block_id)\n> JOIN emotions USING (moment_id)\n> GROUP BY moment_type, emotion\n> ORDER BY moment_type, emotion\n>\n> The explain is at http://explain.depesz.com/s/lYh\n>\n> Interestingly, increasing the limit does not seem to increase the runtime in\n> a linear fashion. When I run it with a limit of 60000 I get a runtime\n> of 14991 ms. But if I run it with a limit of 70000 I get a runtime of 77744\n> ms. I assume that that's because I'm hitting a memory limit and paging out.\n> Is that right?\n\nHard to say. more likely your query plan changes at that point. Run\nthe queries with \"explain analyze\" in front of them to find out.\n",
"msg_date": "Wed, 1 Feb 2012 11:35:31 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: From Simple to Complex"
},
{
"msg_contents": "LIMIT 65536; Total query runtime: 14846 ms. -\nhttp://explain.depesz.com/s/I3E\nLIMIT 69632: Total query runtime: 80141 ms. -\nhttp://explain.depesz.com/s/9hp\n\nSo it looks like when the limit crosses a certain threshold (somewhere\nnorth of 2^16), Postgres decides to do a Seq Scan instead of an Index Scan.\nI've already lowered random_page_cost to 2. Maybe I should lower it to 1.5?\nActually 60K should be plenty for my purposes anyway.\n\nOn Wed, Feb 1, 2012 at 10:35 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Feb 1, 2012 at 11:19 AM, Alessandro Gagliardi\n> <[email protected]> wrote:\n> > Interestingly, increasing the limit does not seem to increase the\n> runtime in\n> > a linear fashion. When I run it with a limit of 60000 I get a runtime\n> > of 14991 ms. But if I run it with a limit of 70000 I get a runtime\n> of 77744\n> > ms. I assume that that's because I'm hitting a memory limit and paging\n> out.\n> > Is that right?\n>\n> Hard to say. more likely your query plan changes at that point. Run\n> the queries with \"explain analyze\" in front of them to find out.\n>\n\nLIMIT 65536; Total query runtime: 14846 ms. - http://explain.depesz.com/s/I3E\nLIMIT 69632: Total query runtime: 80141 ms. - http://explain.depesz.com/s/9hp\nSo it looks like when the limit crosses a certain threshold (somewhere north of 2^16), Postgres decides to do a Seq Scan instead of an Index Scan. I've already lowered random_page_cost to 2. Maybe I should lower it to 1.5? Actually 60K should be plenty for my purposes anyway.\nOn Wed, Feb 1, 2012 at 10:35 AM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Feb 1, 2012 at 11:19 AM, Alessandro Gagliardi\n<[email protected]> wrote:> Interestingly, increasing the limit does not seem to increase the runtime in\n> a linear fashion. When I run it with a limit of 60000 I get a runtime\n> of 14991 ms. But if I run it with a limit of 70000 I get a runtime of 77744\n> ms. I assume that that's because I'm hitting a memory limit and paging out.\n> Is that right?\n\nHard to say. more likely your query plan changes at that point. Run\nthe queries with \"explain analyze\" in front of them to find out.",
"msg_date": "Wed, 1 Feb 2012 10:48:33 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: From Simple to Complex"
},
{
"msg_contents": "On Wed, Feb 1, 2012 at 11:48 AM, Alessandro Gagliardi\n<[email protected]> wrote:\n> LIMIT 65536; Total query runtime: 14846 ms.\n> - http://explain.depesz.com/s/I3E\n> LIMIT 69632: Total query runtime: 80141 ms.\n> - http://explain.depesz.com/s/9hp\n>\n> So it looks like when the limit crosses a certain threshold (somewhere north\n> of 2^16), Postgres decides to do a Seq Scan instead of an Index Scan.\n> I've already lowered random_page_cost to 2. Maybe I should lower it to 1.5?\n> Actually 60K should be plenty for my purposes anyway.\n\nIt's important to set random_page_cost according to more than just one\nquery, but yeah, at this point it's likely a good idea to set it\ncloser to 1.0. You're on heroku right? Something closer to 1.0 is\nlikely called for if so. 1.2 to 1.4 or so.\n\nIf you've got other queries you can test the change on all the better.\n",
"msg_date": "Wed, 1 Feb 2012 11:53:06 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: From Simple to Complex"
},
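If a lower setting tests out well across other queries too, it can be persisted per database or per role rather than in postgresql.conf, which is useful on hosted environments where the config file is not editable; the object names below are placeholders.

ALTER DATABASE my_database SET random_page_cost = 1.2;  -- picked up by new sessions
-- or, tied to the application's login role:
ALTER ROLE my_app_user SET random_page_cost = 1.2;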
{
"msg_contents": "Possibly. What does\n\n psql > show work_mem;\n\nsay?\n\nBob Lunney\n\n\n________________________________\n From: Alessandro Gagliardi <[email protected]>\nTo: [email protected] \nSent: Wednesday, February 1, 2012 12:19 PM\nSubject: Re: [PERFORM] From Simple to Complex\n \n\nFinal update on this thread: since it is only necessary for me to get a rough ratio of the distribution (and not the absolute count), I refactored the query to include a subquery that samples from the moments table thus: SELECT moment_id, block_id FROM moments WHERE inserted BETWEEN 'yesterday' AND 'today' ORDER BY RANDOM() LIMIT 10000; I also took advantage of another table called blocks that happens to contain the moment_type as well (thus making it so I don't need to reference pg_class). The final query looks like:\n\nSELECT moment_type, emotion, COUNT(feedback_id) \n FROM (SELECT moment_id, block_id \n FROM moments \n WHERE inserted BETWEEN 'yesterday' AND 'today' \n ORDER BY RANDOM() LIMIT 10000) AS sample_moments\n JOIN blocks USING (block_id)\n JOIN emotions USING (moment_id)\n GROUP BY moment_type, emotion\n ORDER BY moment_type, emotion\n\nThe explain is at http://explain.depesz.com/s/lYh\n\nInterestingly, increasing the limit does not seem to increase the runtime in a linear fashion. When I run it with a limit of 60000 I get a runtime of 14991 ms. But if I run it with a limit of 70000 I get a runtime of 77744 ms. I assume that that's because I'm hitting a memory limit and paging out. Is that right?\n\nOn Tue, Jan 31, 2012 at 3:43 PM, Alessandro Gagliardi <[email protected]> wrote:\n\nI just got a pointer on presenting EXPLAIN ANALYZE in a more human friendly fashion (thanks, Agent M!): http://explain.depesz.com/s/A9S\n>\n>\n>From this it looks like the bottleneck happens when Postgres does an Index Scan using emotions_moment_id_idx on emotions before filtering on moments.inserted so I thought I'd try filtering on emotions.inserted instead but that only made it worse. At the same time, I noticed that \"FROM pg_class, moments WHERE moments.tableoid = pg_class.oid\" tends to run a bit faster than \"FROM pg_class JOIN moments ON moments.tableoid = pg_class.oid\". So I tried:\n>\n>\n>SELECT relname, emotion, COUNT(feedback_id) \n> FROM pg_class, moments, emotions\n> WHERE moments.tableoid = pg_class.oid \n> AND emotions.inserted > 'yesterday' \n> AND moments.inserted BETWEEN 'yesterday' AND 'today' \n> AND emotions.moment_id = moments.moment_id\n> GROUP BY relname, emotion \n> ORDER BY relname, emotion;\n>\n>\n>That was a bit faster, but still very slow. Here's the EXPLAIN: http://explain.depesz.com/s/ZdF\n>\n>\n>On Tue, Jan 31, 2012 at 2:53 PM, Alessandro Gagliardi <[email protected]> wrote:\n>\n>I changed the query a bit so the results would not change over the\n>>course of the day to:\n>>\n>>\n>>SELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\n>>JOIN emotions USING (moment_id)\n>>WHERE moments.inserted BETWEEN 'yesterday' AND 'today' AND\n>>\n>>moments.tableoid = pg_class.oid\n>>GROUP BY relname, emotion ORDER BY relname, emotion;\n>>\nPossibly. 
What does psql > show work_mem;say?Bob Lunney From: Alessandro Gagliardi <[email protected]> To: [email protected] Sent: Wednesday, February 1, 2012 12:19 PM Subject: Re: [PERFORM]\n From Simple to Complex \nFinal update on this thread: since it is only necessary for me to get a rough ratio of the distribution (and not the absolute count), I refactored the query to include a subquery that samples from the moments table thus: SELECT moment_id, block_id FROM moments WHERE inserted BETWEEN 'yesterday' AND 'today' ORDER BY RANDOM() LIMIT 10000; I also took advantage of another table called blocks that happens to contain the moment_type as well (thus making it so I don't need to reference pg_class). The final query looks like:\nSELECT moment_type, emotion, COUNT(feedback_id) FROM (SELECT moment_id, block_id FROM moments WHERE inserted BETWEEN 'yesterday' AND 'today' \n ORDER BY RANDOM() LIMIT 10000) AS sample_moments JOIN blocks USING (block_id) JOIN emotions USING (moment_id) GROUP BY moment_type, emotion ORDER BY moment_type, emotion\nThe explain is at http://explain.depesz.com/s/lYhInterestingly, increasing the limit does not seem to increase the runtime in a linear fashion. When I run it with a limit of 60000 I get a runtime of 14991 ms. But if I run it with a limit of 70000 I get a runtime of 77744 ms. I assume that that's because I'm hitting a memory limit and paging out. Is that right?\nOn Tue, Jan 31, 2012 at 3:43 PM, Alessandro Gagliardi <[email protected]> wrote:\nI just got a pointer on presenting EXPLAIN ANALYZE in a more human friendly fashion (thanks, Agent M!): http://explain.depesz.com/s/A9SFrom this it looks like the bottleneck happens when Postgres does an Index Scan using emotions_moment_id_idx on emotions before filtering on moments.inserted so I thought I'd try filtering on emotions.inserted instead but that only made it worse. At the same time, I noticed that \"FROM pg_class, moments WHERE moments.tableoid = pg_class.oid\" tends to run a bit faster than \"FROM pg_class JOIN moments ON moments.tableoid = pg_class.oid\". So I tried:\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments, emotions WHERE moments.tableoid = pg_class.oid AND emotions.inserted > 'yesterday' \n AND moments.inserted BETWEEN 'yesterday' AND 'today' AND emotions.moment_id = moments.moment_id GROUP BY relname, emotion ORDER BY relname, emotion;\n\nThat was a bit faster, but still very slow. Here's the EXPLAIN: http://explain.depesz.com/s/ZdF\nOn Tue, Jan 31, 2012 at 2:53 PM, Alessandro Gagliardi <[email protected]> wrote:\nI changed the query a bit so the results would not change over the\ncourse of the day to:\n\nSELECT relname, emotion, COUNT(feedback_id) FROM pg_class, moments\nJOIN emotions USING (moment_id)\nWHERE moments.inserted BETWEEN 'yesterday' AND 'today' AND\nmoments.tableoid = pg_class.oid\nGROUP BY relname, emotion ORDER BY relname, emotion;",
"msg_date": "Wed, 1 Feb 2012 11:04:04 -0800 (PST)",
"msg_from": "Bob Lunney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: From Simple to Complex"
},
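To answer Bob's question and test the paging-out theory in one step, a per-session sketch; 64MB is illustrative, and the statement is the sampled query from earlier in the thread.

SHOW work_mem;
SET work_mem = '64MB';        -- session-only experiment
EXPLAIN ANALYZE
SELECT moment_type, emotion, COUNT(feedback_id)
  FROM (SELECT moment_id, block_id
          FROM moments
         WHERE inserted BETWEEN 'yesterday' AND 'today'
         ORDER BY RANDOM() LIMIT 70000) AS sample_moments
  JOIN blocks USING (block_id)
  JOIN emotions USING (moment_id)
 GROUP BY moment_type, emotion
 ORDER BY moment_type, emotion;
RESET work_mem;

If the sort or hash nodes report spilling to disk at the small setting and stop spilling at the larger one, memory really is the limit; otherwise a plan change is the more likely culprit.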
{
"msg_contents": "On Wed, Feb 1, 2012 at 11:04 AM, Bob Lunney <[email protected]> wrote:\n\n> Possibly. What does\n>\n> psql > show work_mem;\n>\n> say?\n>\n> 100MB\n\nOn Wed, Feb 1, 2012 at 11:04 AM, Bob Lunney <[email protected]> wrote:\nPossibly. What does psql > show work_mem;\nsay?100MB",
"msg_date": "Wed, 1 Feb 2012 11:19:27 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: From Simple to Complex"
},
{
"msg_contents": "On Wed, Feb 1, 2012 at 12:48 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> LIMIT 65536; Total query runtime: 14846 ms.\n> - http://explain.depesz.com/s/I3E\n> LIMIT 69632: Total query runtime: 80141 ms.\n> - http://explain.depesz.com/s/9hp\n>\n> So it looks like when the limit crosses a certain threshold (somewhere north\n> of 2^16), Postgres decides to do a Seq Scan instead of an Index Scan.\n> I've already lowered random_page_cost to 2. Maybe I should lower it to 1.5?\n> Actually 60K should be plenty for my purposes anyway.\n\n\nalso, is effective_cache_size set to a reasonable value?\n\nmerlin\n",
"msg_date": "Thu, 2 Feb 2012 08:52:17 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: From Simple to Complex"
},
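For reference, effective_cache_size is only a planner hint and allocates no memory; checking it is a one-liner, and it is commonly sized to roughly the RAM the operating system can devote to caching.

SHOW effective_cache_size;
-- session-level experiment if the value looks too small for the machine:
SET effective_cache_size = '4GB';   -- illustrative value only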
{
"msg_contents": "On Thu, Feb 2, 2012 at 6:52 AM, Merlin Moncure <[email protected]> wrote:\n\n> also, is effective_cache_size set to a reasonable value?\n>\n> Yeah, it's 1530000kB\n\nOn Thu, Feb 2, 2012 at 6:52 AM, Merlin Moncure <[email protected]> wrote:\nalso, is effective_cache_size set to a reasonable value?\n\nYeah, it's 1530000kB",
"msg_date": "Thu, 2 Feb 2012 11:38:13 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: From Simple to Complex"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a table in Postgres like:\nCREATE TABLE test\n(\n id integer,\n dtstamp timestamp without time zone,\n rating real\n)\nCREATE INDEX test_all\n ON test\n USING btree\n (id , dtstamp , rating);\n\nMy db has around 200M rows and I have reduced my test select statement down\nto:\nSELECT count(1) FROM test\nWHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\nAND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and\ncast('2011-10-19 16:00:00' as timestamp)\n\nIn Postgres this takes about 23 sec.\nIn MSSQL this takes about 1 sec.\n\nMSSQL only accesses the index and does not access the table it self (uses\nonly index scan)\n\nPostgres has the following plan:\n\"Aggregate (cost=130926.24..130926.25 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on test (cost=1298.97..130832.92 rows=37330\nwidth=0)\"\n\" Recheck Cond: ((id = ANY\n('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\nAND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n(dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on test_all (cost=0.00..1289.64 rows=37330\nwidth=0)\"\n\" Index Cond: ((id = ANY\n('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\nAND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n(dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\nThe results are disappointing since I want to switch to Postgres but I have\nnot been able to force Postgres to only use the index :-(\n\nAny hints that may lead me back on track?\n\nThanks,\n - Gummi\n\nHi,I have a table in Postgres like:CREATE TABLE test( id integer, dtstamp timestamp without time zone, rating real)CREATE INDEX test_all ON test USING btree (id , dtstamp , rating);\nMy db has around 200M rows and I have reduced my test select statement down to:SELECT count(1) FROM testWHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)AND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and cast('2011-10-19 16:00:00' as timestamp)\nIn Postgres this takes about 23 sec.In MSSQL this takes about 1 sec.MSSQL only accesses the index and does not access the table it self (uses only index scan)\nPostgres has the following plan:\"Aggregate (cost=130926.24..130926.25 rows=1 width=0)\"\" -> Bitmap Heap Scan on test (cost=1298.97..130832.92 rows=37330 width=0)\"\" Recheck Cond: ((id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on test_all (cost=0.00..1289.64 rows=37330 width=0)\"\" Index Cond: ((id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\nThe results are disappointing since I want to switch to Postgres but I have not been able to force Postgres to only use the index :-(Any hints that may lead me back on track?Thanks, - Gummi",
"msg_date": "Wed, 1 Feb 2012 17:10:56 +0000",
"msg_from": "Gudmundur Johannesson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index with all necessary columns - Postgres vs MSSQL"
},
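One way to see where those 23 seconds go on 9.1, before changing any schema: EXPLAIN (ANALYZE, BUFFERS) shows how many blocks the bitmap heap scan reads from disk versus finds in shared buffers. The statement below is the one from the question, unchanged.

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(1) FROM test
WHERE id IN (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)
  AND dtstamp BETWEEN '2011-10-19 08:00:00'::timestamp
                  AND '2011-10-19 16:00:00'::timestamp;

A large "read" count on the heap scan node on the first run, dropping to mostly "hit" on repeat runs, would match the cold-versus-warm cache behaviour reported later in the thread.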
{
"msg_contents": "On Wed, Feb 1, 2012 at 11:10 AM, Gudmundur Johannesson\n<[email protected]> wrote:\n> Hi,\n>\n> I have a table in Postgres like:\n> CREATE TABLE test\n> (\n> id integer,\n> dtstamp timestamp without time zone,\n> rating real\n> )\n> CREATE INDEX test_all\n> ON test\n> USING btree\n> (id , dtstamp , rating);\n>\n> My db has around 200M rows and I have reduced my test select statement down\n> to:\n> SELECT count(1) FROM test\n> WHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n> AND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and\n> cast('2011-10-19 16:00:00' as timestamp)\n>\n> In Postgres this takes about 23 sec.\n> In MSSQL this takes about 1 sec.\n>\n> MSSQL only accesses the index and does not access the table it self (uses\n> only index scan)\n>\n> Postgres has the following plan:\n> \"Aggregate (cost=130926.24..130926.25 rows=1 width=0)\"\n> \" -> Bitmap Heap Scan on test (cost=1298.97..130832.92 rows=37330\n> width=0)\"\n> \" Recheck Cond: ((id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n> \" -> Bitmap Index Scan on test_all (cost=0.00..1289.64 rows=37330\n> width=0)\"\n> \" Index Cond: ((id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>\n> The results are disappointing since I want to switch to Postgres but I have\n> not been able to force Postgres to only use the index :-(\n>\n> Any hints that may lead me back on track?\n\n*) are the times in postgres stable across calls?\n*) where is the 'id list' coming from?\n*) how long does this query take?\n\nSELECT count(1) FROM test WHERE id = 202 AND AND dtstamp between\n'2011-10-19 08:00:00'::timestamp and '2011-10-19\n16:00:00'::timestamp; ?\n\nThe feature you're looking for in postgres is called 'index only\nscans' and an 9.2 will contain an implementation of that feature (see:\nhttp://rhaas.blogspot.com/2011/10/index-only-scans-weve-got-em.html).\n\nmerlin\n",
"msg_date": "Wed, 1 Feb 2012 11:52:11 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
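Merlin's suggested probe has a doubled AND in it; with that removed, and with psql's timer switched on, it would look like the following (run it twice to get an uncached and a cached timing):

\timing on
SELECT count(1) FROM test
 WHERE id = 202
   AND dtstamp BETWEEN '2011-10-19 08:00:00'::timestamp
                   AND '2011-10-19 16:00:00'::timestamp;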
{
"msg_contents": "On Wed, Feb 1, 2012 at 10:10 AM, Gudmundur Johannesson\n<[email protected]> wrote:\n> Hi,\n>\n> I have a table in Postgres like:\n> CREATE TABLE test\n> (\n> id integer,\n> dtstamp timestamp without time zone,\n> rating real\n> )\n> CREATE INDEX test_all\n> ON test\n> USING btree\n> (id , dtstamp , rating);\n>\n> My db has around 200M rows and I have reduced my test select statement down\n> to:\n> SELECT count(1) FROM test\n> WHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n> AND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and\n> cast('2011-10-19 16:00:00' as timestamp)\n>\n> In Postgres this takes about 23 sec.\n> In MSSQL this takes about 1 sec.\n>\n> MSSQL only accesses the index and does not access the table it self (uses\n> only index scan)\n>\n> Postgres has the following plan:\n> \"Aggregate (cost=130926.24..130926.25 rows=1 width=0)\"\n> \" -> Bitmap Heap Scan on test (cost=1298.97..130832.92 rows=37330\n> width=0)\"\n> \" Recheck Cond: ((id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n> \" -> Bitmap Index Scan on test_all (cost=0.00..1289.64 rows=37330\n> width=0)\"\n> \" Index Cond: ((id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>\n> The results are disappointing since I want to switch to Postgres but I have\n> not been able to force Postgres to only use the index :-(\n>\n> Any hints that may lead me back on track?\n\nAs Merlin mentioned postgres doesn't have \"covering\" indexes yet. I\nwas wondering what explain ANALYZE of your query looks like, and what\nversion of pgsql you're running. It might be that we can at least get\nthat 23 seconds down to something closer to 1 second rather than\nwaiting for pg 9.2 to get here.\n\nFirst try individual indexes on the two fields, and also try a two\ncolumn index on the two fields, both with id first and with date\nfirst. Use explain analyze to see if this does any better. also look\nat this wiki page and see if there's anything there that helps:\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions Especially this\npart: http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n",
"msg_date": "Wed, 1 Feb 2012 11:32:31 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
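The candidate indexes Scott describes, spelled out; the names are placeholders, building each of them on a 200M-row table is expensive, and in practice only the variant that EXPLAIN ANALYZE shows the planner actually benefiting from would be kept.

CREATE INDEX test_id_idx         ON test (id);
CREATE INDEX test_dtstamp_idx    ON test (dtstamp);
CREATE INDEX test_id_dtstamp_idx ON test (id, dtstamp);   -- id first
CREATE INDEX test_dtstamp_id_idx ON test (dtstamp, id);   -- date first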
{
"msg_contents": "Hi,\n\nHere are the answers to your questions:\n1) I change the select statement so I am refering to 1 day at a time. In\nthat case the response time is similar. Basically, the data is not in\ncache when I do that and the response time is about 23 seconds.\n\n2) The list of IDs is provided by the middle layer and represents a logical\ngroup.\nbtw: There are about 360 devices there. The distribution of dtStamp is\napprox 200.000.000 rows / 360 devices / (4 months) which gives approx 4600\ndtStamp values per device per day.\n\n3) The query takes 23 sec vs 1 sec or lower in mssql.\n\nWe never update/delete and therefore the data is alway correct in the index\n(never dirty). Therefore, Postgres could have used the data in it.\n\nI started to add columns into indexes in Oracle for approx 15 years ago and\nit was a brilliant discovery. This looks like a show stopper for me but I\nwill\n\nThanks,\n - Gummi\n\nOn Wed, Feb 1, 2012 at 5:52 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Wed, Feb 1, 2012 at 11:10 AM, Gudmundur Johannesson\n> <[email protected]> wrote:\n> > Hi,\n> >\n> > I have a table in Postgres like:\n> > CREATE TABLE test\n> > (\n> > id integer,\n> > dtstamp timestamp without time zone,\n> > rating real\n> > )\n> > CREATE INDEX test_all\n> > ON test\n> > USING btree\n> > (id , dtstamp , rating);\n> >\n> > My db has around 200M rows and I have reduced my test select statement\n> down\n> > to:\n> > SELECT count(1) FROM test\n> > WHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n> > AND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and\n> > cast('2011-10-19 16:00:00' as timestamp)\n> >\n> > In Postgres this takes about 23 sec.\n> > In MSSQL this takes about 1 sec.\n> >\n> > MSSQL only accesses the index and does not access the table it self (uses\n> > only index scan)\n> >\n> > Postgres has the following plan:\n> > \"Aggregate (cost=130926.24..130926.25 rows=1 width=0)\"\n> > \" -> Bitmap Heap Scan on test (cost=1298.97..130832.92 rows=37330\n> > width=0)\"\n> > \" Recheck Cond: ((id = ANY\n> > ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> > AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> > (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n> > \" -> Bitmap Index Scan on test_all (cost=0.00..1289.64\n> rows=37330\n> > width=0)\"\n> > \" Index Cond: ((id = ANY\n> > ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> > AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> > (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n> >\n> > The results are disappointing since I want to switch to Postgres but I\n> have\n> > not been able to force Postgres to only use the index :-(\n> >\n> > Any hints that may lead me back on track?\n>\n> *) are the times in postgres stable across calls?\n> *) where is the 'id list' coming from?\n> *) how long does this query take?\n>\n> SELECT count(1) FROM test WHERE id = 202 AND AND dtstamp between\n> '2011-10-19 08:00:00'::timestamp and '2011-10-19\n> 16:00:00'::timestamp; ?\n>\n> The feature you're looking for in postgres is called 'index only\n> scans' and an 9.2 will contain an implementation of that feature (see:\n> http://rhaas.blogspot.com/2011/10/index-only-scans-weve-got-em.html).\n>\n> merlin\n>\n\nHi,Here are the answers to your questions:1) I change the select statement so I am refering to 1 day at a time. In that case the response time is similar. 
Basically, the data is not in cache when I do that and the response time is about 23 seconds.\n2) The list of IDs is provided by the middle layer and represents a logical group.btw: There are about 360 devices there. The distribution of dtStamp is approx 200.000.000 rows / 360 devices / (4 months) which gives approx 4600 dtStamp values per device per day.\n3) The query takes 23 sec vs 1 sec or lower in mssql.We never update/delete and therefore the data is alway correct in the index (never dirty). Therefore, Postgres could have used the data in it.I started to add columns into indexes in Oracle for approx 15 years ago and it was a brilliant discovery. This looks like a show stopper for me but I will \nThanks, - GummiOn Wed, Feb 1, 2012 at 5:52 PM, Merlin Moncure <[email protected]> wrote:\n\nOn Wed, Feb 1, 2012 at 11:10 AM, Gudmundur Johannesson\n<[email protected]> wrote:\n> Hi,\n>\n> I have a table in Postgres like:\n> CREATE TABLE test\n> (\n> id integer,\n> dtstamp timestamp without time zone,\n> rating real\n> )\n> CREATE INDEX test_all\n> ON test\n> USING btree\n> (id , dtstamp , rating);\n>\n> My db has around 200M rows and I have reduced my test select statement down\n> to:\n> SELECT count(1) FROM test\n> WHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n> AND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and\n> cast('2011-10-19 16:00:00' as timestamp)\n>\n> In Postgres this takes about 23 sec.\n> In MSSQL this takes about 1 sec.\n>\n> MSSQL only accesses the index and does not access the table it self (uses\n> only index scan)\n>\n> Postgres has the following plan:\n> \"Aggregate (cost=130926.24..130926.25 rows=1 width=0)\"\n> \" -> Bitmap Heap Scan on test (cost=1298.97..130832.92 rows=37330\n> width=0)\"\n> \" Recheck Cond: ((id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n> \" -> Bitmap Index Scan on test_all (cost=0.00..1289.64 rows=37330\n> width=0)\"\n> \" Index Cond: ((id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (dtstamp >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (dtstamp <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>\n> The results are disappointing since I want to switch to Postgres but I have\n> not been able to force Postgres to only use the index :-(\n>\n> Any hints that may lead me back on track?\n\n*) are the times in postgres stable across calls?\n*) where is the 'id list' coming from?\n*) how long does this query take?\n\nSELECT count(1) FROM test WHERE id = 202 AND AND dtstamp between\n'2011-10-19 08:00:00'::timestamp and '2011-10-19\n16:00:00'::timestamp; ?\n\nThe feature you're looking for in postgres is called 'index only\nscans' and an 9.2 will contain an implementation of that feature (see:\nhttp://rhaas.blogspot.com/2011/10/index-only-scans-weve-got-em.html).\n\nmerlin",
"msg_date": "Wed, 1 Feb 2012 18:50:09 +0000",
"msg_from": "Gudmundur Johannesson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
{
"msg_contents": "On Wed, Feb 1, 2012 at 12:50 PM, Gudmundur Johannesson\n<[email protected]> wrote:\n> Here are the answers to your questions:\n> 1) I change the select statement so I am refering to 1 day at a time. In\n> that case the response time is similar. Basically, the data is not in cache\n> when I do that and the response time is about 23 seconds.\n\nwhat's the difference between the first and the second run time?\nNote, if you are only interested in the date the dtStamp falls on, you\ncan exploit that in the index to knock 4 bytes off your index entry:\n\nCREATE INDEX test_all\n ON test\n USING btree\n (id , (dtstamp::date) , rating);\n\nand then use a similar expression to query it back out.\n\n> 3) The query takes 23 sec vs 1 sec or lower in mssql.\n\nI asked you to time a different query. Look again (and I'd like to\nsee cached and uncached times).\n\n> We never update/delete and therefore the data is alway correct in the index\n> (never dirty). Therefore, Postgres could have used the data in it.\n>\n> I started to add columns into indexes in Oracle for approx 15 years ago and\n> it was a brilliant discovery. This looks like a show stopper for me but I\n\nI doubt covering indexes is going to make that query 23x faster.\nHowever, I bet we can get something worked out.\n\nmerlin\n",
"msg_date": "Wed, 1 Feb 2012 13:35:35 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
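Merlin's expression-index idea, written out as a sketch. It only applies if whole-day granularity is acceptable (the original query filters 08:00 to 16:00), and the query must repeat the same dtstamp::date expression for the planner to use the index.

CREATE INDEX test_id_day_rating ON test (id, (dtstamp::date), rating);

SELECT count(1) FROM test
 WHERE id IN (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)
   AND dtstamp::date = DATE '2011-10-19';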
{
"msg_contents": "Hi,\n\nI want to start by thanking you guys for a quick response and I will try to\nprovide all the information you request.\n\n1) What version am I running:\n\"PostgreSQL 9.1.2, compiled by Visual C++ build 1500, 64-bit\"\n\n2) Schema:\nCREATE TABLE test( id integer, dtstamp timestamp without time zone,\nrating real) WITH ( OIDS=FALSE);\nCREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n200M rows\nTable size 9833MB\nIndex size 7653 MB\n\n3) Difference between the first and the second run time?\nThe statement executed is:\nSELECT count(1) FROM test\nWHERE id in (58,83,88,98,124,141,170,195,\n202,252,265,293,305,331,348)\nAND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and\ncast('2011-10-19 16:00:00' as timestamp)\na) 1st run = 26 seconds\nb) 2nd run = 0.234 seconds\nc) 3rd-6th run = 0.06 seconds\n\nIf I perform the query above for another day then I get 26 seconds for the\n1st query.\n\n4) What was the execution plan of it\n\"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82\nrows=43974 width=0)\"\n\" Recheck Cond: ((virtual_id = ANY\n('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\nAND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n(\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on data_cbm_reading_all (cost=0.00..1492.70\nrows=43974 width=0)\"\n\" Index Cond: ((virtual_id = ANY\n('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\nAND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n(\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\n5) In this case, I shut down the mssql server/machine and restart it. To\nbe on the safe side, I ensured the cache is empty using dbcc freeproccache\nand dbcc dropcleanbuffers.\nThen I tried the same statement as above:\na) 1st run = 0.8 seconds\nb) 2nd, 3rd, ... run = 0.04 seconds\nc) change the select statement for any another other day and run it again\ngive 1st run 0.5 seconds\nd) 2nd, 3rd, ... run = 0.04 seconds\n\n\n6) You wrote \"I doubt covering indexes is going to make that query 23x\nfaster.\"\nI decided to check out how mssql performs if it cannot use a covering\nindex. In order to do that, I drop my current index and create it again on\n*id, dtstamp.* That forces mssql to look into the data file and the index\nis no longer sufficient.\nRunning the following statement force the \"rating\" columns to be accessed:\nselect sum(rating)\nFROM test\n WHERE id in\n(58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n AND dtstamp >= '2011-10-19 08:00:00' AND dtstamp <=\n'2011-10-19 16:00:00'\na) 1st run = 20 seconds\nb) 2nd run = 0.6\nc) 3rd, ... run = 0.3 seconds\nAs you can see the response time gets just as bad as in Postgres.\nNow lets recreate the mssql index with all the columns and double check the\nresponse time:\na) 1st run = 2 seconds\nb) 2nd run = 0.12\nc) 3rd, ... 
run = 0.3 seconds\n\n\nTherefore, I must conclude that in the case of mssql the \"covering\" index\nis making a huge impact.\n\nI have spent the whole day providing this data (takes a while to shuffle\n200M rows) and tomorrow I will try your suggestion regarding two indexes.\n\n*Do you think I should try using the latest build of the source for 9.2\nsince index-only-scan is \"ready\" according to\nhttp://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n?*\n\nThanks,\n - Gummi\n\n\n\nOn Wed, Feb 1, 2012 at 7:35 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Wed, Feb 1, 2012 at 12:50 PM, Gudmundur Johannesson\n> <[email protected]> wrote:\n> > Here are the answers to your questions:\n> > 1) I change the select statement so I am refering to 1 day at a time. In\n> > that case the response time is similar. Basically, the data is not in\n> cache\n> > when I do that and the response time is about 23 seconds.\n>\n> what's the difference between the first and the second run time?\n> Note, if you are only interested in the date the dtStamp falls on, you\n> can exploit that in the index to knock 4 bytes off your index entry:\n>\n> CREATE INDEX test_all\n> ON test\n> USING btree\n> (id , (dtstamp::date) , rating);\n>\n> and then use a similar expression to query it back out.\n>\n> > 3) The query takes 23 sec vs 1 sec or lower in mssql.\n>\n> I asked you to time a different query. Look again (and I'd like to\n> see cached and uncached times).\n>\n> > We never update/delete and therefore the data is alway correct in the\n> index\n> > (never dirty). Therefore, Postgres could have used the data in it.\n> >\n> > I started to add columns into indexes in Oracle for approx 15 years ago\n> and\n> > it was a brilliant discovery. This looks like a show stopper for me but\n> I\n>\n> I doubt covering indexes is going to make that query 23x faster.\n> However, I bet we can get something worked out.\n>\n> merlin\n>\n\nHi,I want to start by thanking you guys for a quick response and I will try to provide all the information you request. 
1) What version am I running:\"PostgreSQL 9.1.2, compiled by Visual C++ build 1500, 64-bit\"\n2) Schema:CREATE TABLE test( id integer, dtstamp timestamp without time zone, rating real) WITH ( OIDS=FALSE);CREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n\n200M rowsTable size 9833MBIndex size 7653 MB3) Difference between the first and the second run time?The statement executed is:SELECT count(1) FROM test\nWHERE id in (58,83,88,98,124,141,170,195,\n202,252,265,293,305,331,348)AND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and cast('2011-10-19 16:00:00' as timestamp)\n\na) 1st run = 26 secondsb) 2nd run = 0.234 secondsc) 3rd-6th run = 0.06 secondsIf I perform the query above for another day then I get 26 seconds for the 1st query.4) What was the execution plan of it\n\"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82 rows=43974 width=0)\"\" Recheck Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\n\" -> Bitmap Index Scan on data_cbm_reading_all (cost=0.00..1492.70 rows=43974 width=0)\"\" Index Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n5) In this case, I shut down the mssql server/machine and restart it. To be on the safe side, I ensured the cache is empty using dbcc freeproccache and dbcc dropcleanbuffers.Then I tried the same statement as above:\na) 1st run = 0.8 secondsb) 2nd, 3rd, ... run = 0.04 secondsc) change the select statement for any another other day and run it again give 1st run 0.5 secondsd) 2nd, 3rd, ... run = 0.04 seconds\n6) You wrote \"I doubt covering indexes is going to make that query 23x faster.\"I decided to check out how mssql performs if it cannot use a covering index. In order to do that, I drop my current index and create it again on id, dtstamp. That forces mssql to look into the data file and the index is no longer sufficient.\nRunning the following statement force the \"rating\" columns to be accessed:select sum(rating)FROM test WHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n AND dtstamp >= '2011-10-19 08:00:00' AND dtstamp <= '2011-10-19 16:00:00'a) 1st run = 20 secondsb) 2nd run = 0.6c) 3rd, ... run = 0.3 secondsAs you can see the response time gets just as bad as in Postgres.\nNow lets recreate the mssql index with all the columns and double check the response time:a) 1st run = 2 secondsb) 2nd run = 0.12\nc) 3rd, ... 
run = 0.3 seconds\nTherefore, I must conclude that in the case of mssql the \"covering\" index is making a huge impact.I have spent the whole day providing this data (takes a while to shuffle 200M rows) and tomorrow I will try your suggestion regarding two indexes.\nDo you think I should try using the latest build of the source for 9.2 since index-only-scan is \"ready\" according to http://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n?Thanks, - Gummi\nOn Wed, Feb 1, 2012 at 7:35 PM, Merlin Moncure <[email protected]> wrote:\nOn Wed, Feb 1, 2012 at 12:50 PM, Gudmundur Johannesson\n<[email protected]> wrote:\n> Here are the answers to your questions:\n> 1) I change the select statement so I am refering to 1 day at a time. In\n> that case the response time is similar. Basically, the data is not in cache\n> when I do that and the response time is about 23 seconds.\n\nwhat's the difference between the first and the second run time?\nNote, if you are only interested in the date the dtStamp falls on, you\ncan exploit that in the index to knock 4 bytes off your index entry:\n\nCREATE INDEX test_all\n ON test\n USING btree\n (id , (dtstamp::date) , rating);\n\nand then use a similar expression to query it back out.\n\n> 3) The query takes 23 sec vs 1 sec or lower in mssql.\n\nI asked you to time a different query. Look again (and I'd like to\nsee cached and uncached times).\n\n> We never update/delete and therefore the data is alway correct in the index\n> (never dirty). Therefore, Postgres could have used the data in it.\n>\n> I started to add columns into indexes in Oracle for approx 15 years ago and\n> it was a brilliant discovery. This looks like a show stopper for me but I\n\nI doubt covering indexes is going to make that query 23x faster.\nHowever, I bet we can get something worked out.\n\nmerlin",
"msg_date": "Thu, 2 Feb 2012 16:41:37 +0000",
"msg_from": "Gudmundur Johannesson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
{
"msg_contents": "On Thu, Feb 2, 2012 at 10:41 AM, Gudmundur Johannesson\n<[email protected]> wrote:\n> Do you think I should try using the latest build of the source for 9.2 since\n> index-only-scan is \"ready\" according to\n> http://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n> ?\n\nhm, interesting.\n\nYou are simply welcome to try that and we would definitely like to see\nyour results. I looked around and didn't see any binaries for the\ndevelopment snapshots for windows to test. That means you have to\ncompile postgres in order to test 9.2 at this point in time. Testing\nand feedback of index only scan feature would be very much\nappreciated.\n\nGenerally speaking, postgresql source tree is very high quality --\nstuff should mostly work. The biggest annoyance is that you get lots\nof catalog version bumps when pulling new versions of the sources\nforcing a dump/reload.\n\nmerlin\n",
"msg_date": "Thu, 2 Feb 2012 14:30:54 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
{
"msg_contents": "May be I should first try to partition the table by date and see if that\nhelps.\n\nThanks,\n - Gummi\n\nOn Thu, Feb 2, 2012 at 8:30 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Feb 2, 2012 at 10:41 AM, Gudmundur Johannesson\n> <[email protected]> wrote:\n> > Do you think I should try using the latest build of the source for 9.2\n> since\n> > index-only-scan is \"ready\" according to\n> >\n> http://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n> > ?\n>\n> hm, interesting.\n>\n> You are simply welcome to try that and we would definitely like to see\n> your results. I looked around and didn't see any binaries for the\n> development snapshots for windows to test. That means you have to\n> compile postgres in order to test 9.2 at this point in time. Testing\n> and feedback of index only scan feature would be very much\n> appreciated.\n>\n> Generally speaking, postgresql source tree is very high quality --\n> stuff should mostly work. The biggest annoyance is that you get lots\n> of catalog version bumps when pulling new versions of the sources\n> forcing a dump/reload.\n>\n> merlin\n>\n\nMay be I should first try to partition the table by date and see if that helps.Thanks, - GummiOn Thu, Feb 2, 2012 at 8:30 PM, Merlin Moncure <[email protected]> wrote:\nOn Thu, Feb 2, 2012 at 10:41 AM, Gudmundur Johannesson\n<[email protected]> wrote:\n> Do you think I should try using the latest build of the source for 9.2 since\n> index-only-scan is \"ready\" according to\n> http://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n> ?\n\nhm, interesting.\n\nYou are simply welcome to try that and we would definitely like to see\nyour results. I looked around and didn't see any binaries for the\ndevelopment snapshots for windows to test. That means you have to\ncompile postgres in order to test 9.2 at this point in time. Testing\nand feedback of index only scan feature would be very much\nappreciated.\n\nGenerally speaking, postgresql source tree is very high quality --\nstuff should mostly work. The biggest annoyance is that you get lots\nof catalog version bumps when pulling new versions of the sources\nforcing a dump/reload.\n\nmerlin",
"msg_date": "Fri, 3 Feb 2012 08:22:01 +0000",
"msg_from": "Gudmundur Johannesson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
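A rough sketch of what one-partition-per-day looks like on 9.1 with table inheritance; the child-table name, the CHECK range, and the routing of inserts (a trigger or application-side logic) are all assumptions here, not part of the original thread.

CREATE TABLE test_20111019 (
    CHECK (dtstamp >= '2011-10-19' AND dtstamp < '2011-10-20')
) INHERITS (test);

CREATE INDEX test_20111019_all ON test_20111019 (id, dtstamp, rating);

-- With constraint_exclusion = partition (the default), a query that filters
-- on dtstamp only scans the children whose CHECK range matches.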
{
"msg_contents": "From: Gudmundur Johannesson [mailto:[email protected]] \nSent: Thursday, February 02, 2012 11:42 AM\nTo: Merlin Moncure\nCc: [email protected]\nSubject: Re: Index with all necessary columns - Postgres vs MSSQL\n\nHi,\n\nI want to start by thanking you guys for a quick response and I will try to provide all the information you request. \n\n1) What version am I running:\n\"PostgreSQL 9.1.2, compiled by Visual C++ build 1500, 64-bit\"\n\n2) Schema:\nCREATE TABLE test( id integer, dtstamp timestamp without time zone, rating real) WITH ( OIDS=FALSE);\nCREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n200M rows\nTable size 9833MB\nIndex size 7653 MB\n\n3) Difference between the first and the second run time?\nThe statement executed is:\nSELECT count(1) FROM test\nWHERE id in (58,83,88,98,124,141,170,195,\n202,252,265,293,305,331,348)\nAND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and cast('2011-10-19 16:00:00' as timestamp)\na) 1st run = 26 seconds\nb) 2nd run = 0.234 seconds\nc) 3rd-6th run = 0.06 seconds\n\nIf I perform the query above for another day then I get 26 seconds for the 1st query.\n\n4) What was the execution plan of it\n\"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82 rows=43974 width=0)\"\n\" Recheck Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on data_cbm_reading_all (cost=0.00..1492.70 rows=43974 width=0)\"\n\" Index Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\n5) In this case, I shut down the mssql server/machine and restart it. To be on the safe side, I ensured the cache is empty using dbcc freeproccache and dbcc dropcleanbuffers.\nThen I tried the same statement as above:\na) 1st run = 0.8 seconds\nb) 2nd, 3rd, ... run = 0.04 seconds\nc) change the select statement for any another other day and run it again give 1st run 0.5 seconds\nd) 2nd, 3rd, ... run = 0.04 seconds\n\n6) You wrote \"I doubt covering indexes is going to make that query 23x faster.\"\nI decided to check out how mssql performs if it cannot use a covering index. In order to do that, I drop my current index and create it again on id, dtstamp. That forces mssql to look into the data file and the index is no longer sufficient.\nRunning the following statement force the \"rating\" columns to be accessed:\nselect sum(rating)\nFROM test\n WHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n AND dtstamp >= '2011-10-19 08:00:00' AND dtstamp <= '2011-10-19 16:00:00'\na) 1st run = 20 seconds\nb) 2nd run = 0.6\nc) 3rd, ... run = 0.3 seconds\nAs you can see the response time gets just as bad as in Postgres.\nNow lets recreate the mssql index with all the columns and double check the response time:\na) 1st run = 2 seconds\nb) 2nd run = 0.12\nc) 3rd, ... 
run = 0.3 seconds\n\n\nTherefore, I must conclude that in the case of mssql the \"covering\" index is making a huge impact.\n\nI have spent the whole day providing this data (takes a while to shuffle 200M rows) and tomorrow I will try your suggestion regarding two indexes.\n\nDo you think I should try using the latest build of the source for 9.2 since index-only-scan is \"ready\" according to http://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n?\n\nThanks,\n - Gummi\n\n\nGudmundur,\n\nJust for clarification purposes:\n\nThis schema:\n\nCREATE TABLE test( id integer, dtstamp timestamp without time zone, rating real) WITH ( OIDS=FALSE);\nCREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n\nand this query plan:\n\n\"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82 rows=43974 width=0)\"\n\" Recheck Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\" -> Bitmap Index Scan on data_cbm_reading_all (cost=0.00..1492.70 rows=43974 width=0)\"\n\" Index Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\nreference different table and index names.\nAlso, EXPLAIN ANALYZE would provide additional info compared to just EXPLAIN.\n\nOne option you could try, is to cluster your table based on \" test_all\" index, and see if it makes a difference.\nBTW., in SQL Server your \"covering\" index - is it clustered?\n\nRegards,\nIgor Neyman\n\n",
"msg_date": "Tue, 7 Feb 2012 10:11:37 -0500",
"msg_from": "\"Igor Neyman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
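Igor's CLUSTER suggestion, spelled out. CLUSTER rewrites the whole table in index order under an exclusive lock, so it needs roughly the table's size in free disk space and a maintenance window, and the physical ordering is not maintained for rows inserted afterwards.

CLUSTER test USING test_all;
ANALYZE test;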
{
"msg_contents": "On Tue, Feb 7, 2012 at 3:11 PM, Igor Neyman <[email protected]> wrote:\n\n> From: Gudmundur Johannesson [mailto:[email protected]]\n> Sent: Thursday, February 02, 2012 11:42 AM\n> To: Merlin Moncure\n> Cc: [email protected]\n> Subject: Re: Index with all necessary columns - Postgres vs MSSQL\n>\n> Hi,\n>\n> I want to start by thanking you guys for a quick response and I will try\n> to provide all the information you request.\n>\n> 1) What version am I running:\n> \"PostgreSQL 9.1.2, compiled by Visual C++ build 1500, 64-bit\"\n>\n> 2) Schema:\n> CREATE TABLE test( id integer, dtstamp timestamp without time zone,\n> rating real) WITH ( OIDS=FALSE);\n> CREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n> 200M rows\n> Table size 9833MB\n> Index size 7653 MB\n>\n> 3) Difference between the first and the second run time?\n> The statement executed is:\n> SELECT count(1) FROM test\n> WHERE id in (58,83,88,98,124,141,170,195,\n> 202,252,265,293,305,331,348)\n> AND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and\n> cast('2011-10-19 16:00:00' as timestamp)\n> a) 1st run = 26 seconds\n> b) 2nd run = 0.234 seconds\n> c) 3rd-6th run = 0.06 seconds\n>\n> If I perform the query above for another day then I get 26 seconds for the\n> 1st query.\n>\n> 4) What was the execution plan of it\n> \"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n> \" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82\n> rows=43974 width=0)\"\n> \" Recheck Cond: ((virtual_id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n> \" -> Bitmap Index Scan on data_cbm_reading_all\n> (cost=0.00..1492.70 rows=43974 width=0)\"\n> \" Index Cond: ((virtual_id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>\n> 5) In this case, I shut down the mssql server/machine and restart it. To\n> be on the safe side, I ensured the cache is empty using dbcc freeproccache\n> and dbcc dropcleanbuffers.\n> Then I tried the same statement as above:\n> a) 1st run = 0.8 seconds\n> b) 2nd, 3rd, ... run = 0.04 seconds\n> c) change the select statement for any another other day and run it again\n> give 1st run 0.5 seconds\n> d) 2nd, 3rd, ... run = 0.04 seconds\n>\n> 6) You wrote \"I doubt covering indexes is going to make that query 23x\n> faster.\"\n> I decided to check out how mssql performs if it cannot use a covering\n> index. In order to do that, I drop my current index and create it again on\n> id, dtstamp. That forces mssql to look into the data file and the index is\n> no longer sufficient.\n> Running the following statement force the \"rating\" columns to be accessed:\n> select sum(rating)\n> FROM test\n> WHERE id in\n> (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n> AND dtstamp >= '2011-10-19 08:00:00' AND dtstamp <=\n> '2011-10-19 16:00:00'\n> a) 1st run = 20 seconds\n> b) 2nd run = 0.6\n> c) 3rd, ... run = 0.3 seconds\n> As you can see the response time gets just as bad as in Postgres.\n> Now lets recreate the mssql index with all the columns and double check\n> the response time:\n> a) 1st run = 2 seconds\n> b) 2nd run = 0.12\n> c) 3rd, ... 
run = 0.3 seconds\n>\n>\n> Therefore, I must conclude that in the case of mssql the \"covering\" index\n> is making a huge impact.\n>\n> I have spent the whole day providing this data (takes a while to shuffle\n> 200M rows) and tomorrow I will try your suggestion regarding two indexes.\n>\n> Do you think I should try using the latest build of the source for 9.2\n> since index-only-scan is \"ready\" according to\n> http://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n> ?\n>\n> Thanks,\n> - Gummi\n>\n>\n> Gudmundur,\n>\n> Just for clarification purposes:\n>\n> This schema:\n>\n> CREATE TABLE test( id integer, dtstamp timestamp without time zone,\n> rating real) WITH ( OIDS=FALSE);\n> CREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n>\n> and this query plan:\n>\n> \"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n> \" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82\n> rows=43974 width=0)\"\n> \" Recheck Cond: ((virtual_id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n> \" -> Bitmap Index Scan on data_cbm_reading_all\n> (cost=0.00..1492.70 rows=43974 width=0)\"\n> \" Index Cond: ((virtual_id = ANY\n> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n> AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n> (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>\n> reference different table and index names.\n> Also, EXPLAIN ANALYZE would provide additional info compared to just\n> EXPLAIN.\n>\n> One option you could try, is to cluster your table based on \" test_all\"\n> index, and see if it makes a difference.\n> BTW., in SQL Server your \"covering\" index - is it clustered?\n>\n> Regards,\n> Igor Neyman\n>\n>\n\nHi Igor,\n\n1) I \"simplified\" the names when posting originally and forgot to replace\nthe names in the analyze output. Sorry about the confusion.\n\n2) The index in mssql is not clustered.\n\n3) I am now testing to partition the 200 million table into one partition\nper day and see how it performs.\n\n4) I compiled and installed Postgres 9.2 and proved to my self that\nPostgres does not look up into the table and relies only on the index.\nTherefore, this is looking bright at the moment.\n\n5) I must deliver the db for production in june and it does not sound wise\nto do that in 9.2 (unless it has been released by then).\n\nThanks,\n - Gummi\n\nThanks,\n - Gummi\n\nOn Tue, Feb 7, 2012 at 3:11 PM, Igor Neyman <[email protected]> wrote:\nFrom: Gudmundur Johannesson [mailto:[email protected]]\nSent: Thursday, February 02, 2012 11:42 AM\nTo: Merlin Moncure\nCc: [email protected]\nSubject: Re: Index with all necessary columns - Postgres vs MSSQL\n\nHi,\n\nI want to start by thanking you guys for a quick response and I will try to provide all the information you request. 
\n\n1) What version am I running:\n\"PostgreSQL 9.1.2, compiled by Visual C++ build 1500, 64-bit\"\n\n2) Schema:\nCREATE TABLE test( id integer, dtstamp timestamp without time zone, rating real) WITH ( OIDS=FALSE);\nCREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n200M rows\nTable size 9833MB\nIndex size 7653 MB\n\n3) Difference between the first and the second run time?\nThe statement executed is:\nSELECT count(1) FROM test\nWHERE id in (58,83,88,98,124,141,170,195,\n202,252,265,293,305,331,348)\nAND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and cast('2011-10-19 16:00:00' as timestamp)\na) 1st run = 26 seconds\nb) 2nd run = 0.234 seconds\nc) 3rd-6th run = 0.06 seconds\n\nIf I perform the query above for another day then I get 26 seconds for the 1st query.\n\n4) What was the execution plan of it\n\"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82 rows=43974 width=0)\"\n\" Recheck Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\n\" -> Bitmap Index Scan on data_cbm_reading_all (cost=0.00..1492.70 rows=43974 width=0)\"\n\" Index Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\n5) In this case, I shut down the mssql server/machine and restart it. To be on the safe side, I ensured the cache is empty using dbcc freeproccache and dbcc dropcleanbuffers.\nThen I tried the same statement as above:\na) 1st run = 0.8 seconds\nb) 2nd, 3rd, ... run = 0.04 seconds\nc) change the select statement for any another other day and run it again give 1st run 0.5 seconds\nd) 2nd, 3rd, ... run = 0.04 seconds\n\n6) You wrote \"I doubt covering indexes is going to make that query 23x faster.\"\nI decided to check out how mssql performs if it cannot use a covering index. In order to do that, I drop my current index and create it again on id, dtstamp. That forces mssql to look into the data file and the index is no longer sufficient.\n\nRunning the following statement force the \"rating\" columns to be accessed:\nselect sum(rating)\nFROM test\n WHERE id in (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n AND dtstamp >= '2011-10-19 08:00:00' AND dtstamp <= '2011-10-19 16:00:00'\na) 1st run = 20 seconds\nb) 2nd run = 0.6\nc) 3rd, ... run = 0.3 seconds\nAs you can see the response time gets just as bad as in Postgres.\nNow lets recreate the mssql index with all the columns and double check the response time:\na) 1st run = 2 seconds\nb) 2nd run = 0.12\nc) 3rd, ... 
run = 0.3 seconds\n\n\nTherefore, I must conclude that in the case of mssql the \"covering\" index is making a huge impact.\n\nI have spent the whole day providing this data (takes a while to shuffle 200M rows) and tomorrow I will try your suggestion regarding two indexes.\n\nDo you think I should try using the latest build of the source for 9.2 since index-only-scan is \"ready\" according to http://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n\n?\n\nThanks,\n - Gummi\n\n\nGudmundur,\n\nJust for clarification purposes:\n\nThis schema:\n\nCREATE TABLE test( id integer, dtstamp timestamp without time zone, rating real) WITH ( OIDS=FALSE);\nCREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n\nand this query plan:\n\n\"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n\" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82 rows=43974 width=0)\"\n\" Recheck Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\n\" -> Bitmap Index Scan on data_cbm_reading_all (cost=0.00..1492.70 rows=43974 width=0)\"\n\" Index Cond: ((virtual_id = ANY ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[])) AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n\nreference different table and index names.\nAlso, EXPLAIN ANALYZE would provide additional info compared to just EXPLAIN.\n\nOne option you could try, is to cluster your table based on \" test_all\" index, and see if it makes a difference.\nBTW., in SQL Server your \"covering\" index - is it clustered?\n\nRegards,\nIgor Neyman\n\nHi Igor,1) I \"simplified\" the names when posting originally and forgot to replace the names in the analyze output. Sorry about the confusion.2) The index in mssql is not clustered.\n3) I am now testing to partition the 200 million table into one partition per day and see how it performs.4) I compiled and installed Postgres 9.2 and proved to my self that Postgres does not look up into the table and relies only on the index. Therefore, this is looking bright at the moment.\n5) I must deliver the db for production in june and it does not sound wise to do that in 9.2 (unless it has been released by then).Thanks, - GummiThanks, - Gummi",
"msg_date": "Tue, 7 Feb 2012 17:59:33 +0000",
"msg_from": "Gudmundur Johannesson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
},
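One caveat worth noting for the 9.2 index-only-scan test: the executor can only skip heap fetches for pages the visibility map marks all-visible, and it is VACUUM that sets those bits. On a freshly loaded, insert-only table, a vacuum after loading is assumed here before the index-only behaviour shows its full effect.

VACUUM ANALYZE test;   -- sets visibility-map bits and refreshes planner statistics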
{
"msg_contents": "On Tue, Feb 7, 2012 at 11:59 AM, Gudmundur Johannesson\n<[email protected]> wrote:\n> On Tue, Feb 7, 2012 at 3:11 PM, Igor Neyman <[email protected]> wrote:\n>>\n>> From: Gudmundur Johannesson [mailto:[email protected]]\n>> Sent: Thursday, February 02, 2012 11:42 AM\n>> To: Merlin Moncure\n>> Cc: [email protected]\n>> Subject: Re: Index with all necessary columns - Postgres vs MSSQL\n>>\n>> Hi,\n>>\n>> I want to start by thanking you guys for a quick response and I will try\n>> to provide all the information you request.\n>>\n>> 1) What version am I running:\n>> \"PostgreSQL 9.1.2, compiled by Visual C++ build 1500, 64-bit\"\n>>\n>> 2) Schema:\n>> CREATE TABLE test( id integer, dtstamp timestamp without time zone,\n>> rating real) WITH ( OIDS=FALSE);\n>> CREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n>> 200M rows\n>> Table size 9833MB\n>> Index size 7653 MB\n>>\n>> 3) Difference between the first and the second run time?\n>> The statement executed is:\n>> SELECT count(1) FROM test\n>> WHERE id in (58,83,88,98,124,141,170,195,\n>> 202,252,265,293,305,331,348)\n>> AND dtstamp between cast('2011-10-19 08:00:00' as timestamp) and\n>> cast('2011-10-19 16:00:00' as timestamp)\n>> a) 1st run = 26 seconds\n>> b) 2nd run = 0.234 seconds\n>> c) 3rd-6th run = 0.06 seconds\n>>\n>> If I perform the query above for another day then I get 26 seconds for the\n>> 1st query.\n>>\n>> 4) What was the execution plan of it\n>> \"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n>> \" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82\n>> rows=43974 width=0)\"\n>> \" Recheck Cond: ((virtual_id = ANY\n>> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n>> AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n>> (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>> \" -> Bitmap Index Scan on data_cbm_reading_all\n>> (cost=0.00..1492.70 rows=43974 width=0)\"\n>> \" Index Cond: ((virtual_id = ANY\n>> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n>> AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n>> (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>>\n>> 5) In this case, I shut down the mssql server/machine and restart it. To\n>> be on the safe side, I ensured the cache is empty using dbcc freeproccache\n>> and dbcc dropcleanbuffers.\n>> Then I tried the same statement as above:\n>> a) 1st run = 0.8 seconds\n>> b) 2nd, 3rd, ... run = 0.04 seconds\n>> c) change the select statement for any another other day and run it again\n>> give 1st run 0.5 seconds\n>> d) 2nd, 3rd, ... run = 0.04 seconds\n>>\n>> 6) You wrote \"I doubt covering indexes is going to make that query 23x\n>> faster.\"\n>> I decided to check out how mssql performs if it cannot use a covering\n>> index. In order to do that, I drop my current index and create it again on\n>> id, dtstamp. That forces mssql to look into the data file and the index is\n>> no longer sufficient.\n>> Running the following statement force the \"rating\" columns to be accessed:\n>> select sum(rating)\n>> FROM test\n>> WHERE id in\n>> (58,83,88,98,124,141,170,195,202,252,265,293,305,331,348)\n>> AND dtstamp >= '2011-10-19 08:00:00' AND dtstamp <=\n>> '2011-10-19 16:00:00'\n>> a) 1st run = 20 seconds\n>> b) 2nd run = 0.6\n>> c) 3rd, ... 
run = 0.3 seconds\n>> As you can see the response time gets just as bad as in Postgres.\n>> Now lets recreate the mssql index with all the columns and double check\n>> the response time:\n>> a) 1st run = 2 seconds\n>> b) 2nd run = 0.12\n>> c) 3rd, ... run = 0.3 seconds\n>>\n>>\n>> Therefore, I must conclude that in the case of mssql the \"covering\" index\n>> is making a huge impact.\n>>\n>> I have spent the whole day providing this data (takes a while to shuffle\n>> 200M rows) and tomorrow I will try your suggestion regarding two indexes.\n>>\n>> Do you think I should try using the latest build of the source for 9.2\n>> since index-only-scan is \"ready\" according to\n>> http://www.depesz.com/index.php/2011/10/08/waiting-for-9-2-index-only-scans/\n>> ?\n>>\n>> Thanks,\n>> - Gummi\n>>\n>>\n>> Gudmundur,\n>>\n>> Just for clarification purposes:\n>>\n>> This schema:\n>>\n>> CREATE TABLE test( id integer, dtstamp timestamp without time zone,\n>> rating real) WITH ( OIDS=FALSE);\n>> CREATE INDEX test_all ON test USING btree (id , dtstamp, rating);\n>>\n>> and this query plan:\n>>\n>> \"Aggregate (cost=151950.75..151950.76 rows=1 width=0)\"\n>> \" -> Bitmap Heap Scan on data_cbm_reading cbm (cost=1503.69..151840.82\n>> rows=43974 width=0)\"\n>> \" Recheck Cond: ((virtual_id = ANY\n>> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n>> AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n>> (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>> \" -> Bitmap Index Scan on data_cbm_reading_all\n>> (cost=0.00..1492.70 rows=43974 width=0)\"\n>> \" Index Cond: ((virtual_id = ANY\n>> ('{58,83,88,98,124,141,170,195,202,252,265,293,305,331,348}'::integer[]))\n>> AND (\"timestamp\" >= '2011-10-19 08:00:00'::timestamp without time zone) AND\n>> (\"timestamp\" <= '2011-10-19 16:00:00'::timestamp without time zone))\"\n>>\n>> reference different table and index names.\n>> Also, EXPLAIN ANALYZE would provide additional info compared to just\n>> EXPLAIN.\n>>\n>> One option you could try, is to cluster your table based on \" test_all\"\n>> index, and see if it makes a difference.\n>> BTW., in SQL Server your \"covering\" index - is it clustered?\n>>\n>> Regards,\n>> Igor Neyman\n>>\n>\n>\n> Hi Igor,\n>\n> 1) I \"simplified\" the names when posting originally and forgot to replace\n> the names in the analyze output. Sorry about the confusion.\n>\n> 2) The index in mssql is not clustered.\n>\n> 3) I am now testing to partition the 200 million table into one partition\n> per day and see how it performs.\n>\n> 4) I compiled and installed Postgres 9.2 and proved to my self that Postgres\n> does not look up into the table and relies only on the index. Therefore,\n> this is looking bright at the moment.\n>\n> 5) I must deliver the db for production in june and it does not sound wise\n> to do that in 9.2 (unless it has been released by then).\n\nyeah -- I just started to do some performance testing on index only\nscan as well and am finding the speedup to be really dramatic when the\noptimization kicks in, especially when your query passes over the heap\nin a random-ish fashion. note results in the field will vary wildly\n-- index only scan optimization does visibility checks at the page\nlevel so write once or read mostly tables will see a lot more benefit\nthan high traffic oltp type tables.\n\nregarding postgresql 9.2 by june, the official release schedule has\n9.2 going into beta by april. 
if we actually make that date, then a\nproduction worthy build (which I define as release candidate or\nbetter) by june is plausible as long as you are willing to do binary\nswap in the field post release and will have some tolerance for early\nrelease type bugs. a key thing to watch for is the current commit\nfest (https://commitfest.postgresql.org/action/commitfest_view?id=13)\nto be wrapped up in a timely fashion -- it is scheduled to be wrapped\nup by feb 14 which is starting to look highly optimistic at best. on\na positive note, 9.2 seems to be somewhat lighter on big controversial\npatches than previous releases but if I were in your shoes I wouldn't\nunfortunately bank on 9.2 for june.\n\nmerlin\n",
"msg_date": "Tue, 7 Feb 2012 13:26:40 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index with all necessary columns - Postgres vs MSSQL"
}
] |
[
{
"msg_contents": "I created a table with two columns: an id SERIAL (primary key) and a\ntext (not null), and then added a unique index on the text field.\nThen I ran the following query (with a huge work_mem - 20GB):\n\ninsert into tableA (text_field) select distinct other_text_field from\nsome_huge_set_of_tables\n\nAfter 36 hours it had only written 3 GB (determined by looking at what\nfiles it was writing to).\nI started over with a TRUNCATE, and then removed the index and tried again.\nThis time it took 3807270.780 ms (a bit over an hour).\nTotal number of records: approx 227 million, comprising 16GB of storage.\n\nWhy the huge discrepancy?\n\n-- \nJon\n",
"msg_date": "Thu, 2 Feb 2012 11:28:09 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "*really* bad insert performance on table with unique index"
},
{
"msg_contents": "On Thu, Feb 2, 2012 at 9:28 AM, Jon Nelson <[email protected]> wrote:\n> I created a table with two columns: an id SERIAL (primary key) and a\n> text (not null), and then added a unique index on the text field.\n> Then I ran the following query (with a huge work_mem - 20GB):\n>\n> insert into tableA (text_field) select distinct other_text_field from\n> some_huge_set_of_tables\n\nI bet the distinct is being implemented by a hashAggregate. So then\nyou are inserting the records in a random order, causing the index to\nhave terrible locality of reference.\n\nTry adding \"order by other_text_field\" to the select. Or don't create\nthe index until afterwards\n\n>\n> After 36 hours it had only written 3 GB (determined by looking at what\n> files it was writing to).\n> I started over with a TRUNCATE, and then removed the index and tried again.\n> This time it took 3807270.780 ms (a bit over an hour).\n> Total number of records: approx 227 million, comprising 16GB of storage.\n>\n> Why the huge discrepancy?\n\nMaintaining indices when rows are inserted in a random order generates\na huge amount of scattered I/O.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 2 Feb 2012 12:59:40 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *really* bad insert performance on table with unique index"
}
] |
[
{
"msg_contents": "Saurabh wrote:\n \n> wal_buffers = 5MB\n \nAs as already been suggested, use 16MB (or if the version you're\nusing supports it, the default of -1);\n \n> autovacuum = off\n \nIf the only activity while this is off is a bulk load, that might be\nOK, but be sure *not* to leave this off. You will almost certainly\nregret very much later. Your tables will tend to bloat and slowly\nget very slow and very big. At that point it will be much more\npainful to do aggressive maintenance to clean things up. If you\nthink you have some particular reason to turn it off, please discuss\nit here -- you might have better options.\n \n-Kevin\n\n",
"msg_date": "Fri, 03 Feb 2012 13:03:18 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve insert speed with index on text\n\t column"
},
{
"msg_contents": "Kelvin,\n\nMy intention to keep autovacuum as off is bulk loading only. I was\nthinking after bullk load I will change it.\n\nI changed wal_buffer from 5MB to 16MB but I got same performance that\nI got with 5MB (even less).\n\nThanks,\nSaurabh\n",
"msg_date": "Sun, 5 Feb 2012 09:29:22 -0800 (PST)",
"msg_from": "Saurabh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
},
{
"msg_contents": "On Sun, Feb 5, 2012 at 12:29 PM, Saurabh <[email protected]> wrote:\n> My intention to keep autovacuum as off is bulk loading only. I was\n> thinking after bullk load I will change it.\n>\n> I changed wal_buffer from 5MB to 16MB but I got same performance that\n> I got with 5MB (even less).\n\nDoes it help if you create the index using COLLATE \"C\"? Assuming\nyou're on 9.1.x...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 29 Feb 2012 13:35:37 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve insert speed with index on text column"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've been running a lot of benchmarks recently (I'll publish the results\nonce I properly analyze them). One thing I'd like to demonstrate is the\neffect of direct I/O when the wal_fsync_method is set to\nopen_sync/open_datasync.\n\nI.e. I'd like to see cases when this improves/hurts performance\n(compared to fsync/fdatasync) and if/how this works on SSD compared to\nold-fashioned HDD. But no matter what, I see no significant differences\nin performance.\n\nThis is what pg_test_fsync gives on the SSD (Intel 320):\n\n open_datasync 12492.192 ops/sec\n fdatasync 11646.257 ops/sec\n fsync 9839.101 ops/sec\n fsync_writethrough n/a\n open_sync 10420.971 ops/sec\n\nand this is what I get on the HDD (7.2k SATA)\n\n open_datasync 120.041 ops/sec\n fdatasync 120.042 ops/sec\n fsync 48.000 ops/sec\n fsync_writethrough n/a\n open_sync 48.116 ops/sec\n\nI can post the rest of the pg_test_fsync output if needed.\n\nWhat should I do to see the effect of direct I/O? I'm wondering if I\nneed something like a RAID array or a controller with write cache to see\nthe difference.\n\nAll this was run on a kernel 3.1.5 using an ext4 filesystem.\n\nthanks\nTomas\n",
"msg_date": "Sun, 05 Feb 2012 00:25:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to demonstrate the effect of direct I/O ?"
},
{
"msg_contents": "On 5.2.2012 00:25, Tomas Vondra wrote:\n> Hi all,\n> \n> I've been running a lot of benchmarks recently (I'll publish the results\n> once I properly analyze them). One thing I'd like to demonstrate is the\n> effect of direct I/O when the wal_fsync_method is set to\n> open_sync/open_datasync.\n> \n> I.e. I'd like to see cases when this improves/hurts performance\n> (compared to fsync/fdatasync) and if/how this works on SSD compared to\n> old-fashioned HDD. But no matter what, I see no significant differences\n> in performance.\n\nBTW the benchmark suite I run consists of two parts:\n\n (a) read-write pgbench\n (b) TPC-H-like benchmark that loads a few GBs of data (and then\n queries them)\n\nI'd expect to see the effect on the TPC-H load part, and maybe on the\npgbench (not sure if positive or negative).\n\n> All this was run on a kernel 3.1.5 using an ext4 filesystem.\n\nAnd the Pg versions tested were 9.1.2 and the current 9.2dev snapshot.\n\n\nTomas\n",
"msg_date": "Sun, 05 Feb 2012 00:34:23 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to demonstrate the effect of direct I/O ?"
},
{
"msg_contents": "On 02/04/2012 06:25 PM, Tomas Vondra wrote:\n> What should I do to see the effect of direct I/O?\n\nTest something other than a mainstream Linux filesystem. The two times \nI've either measured an improvement myself for direct I/O were a) \nVeritas VxFS on Linux, which has some documented acceleration here and \nb) on Solaris. You won't find a compelling performance improvement \nlisted at \nhttps://ext4.wiki.kernel.org/articles/c/l/a/Clarifying_Direct_IO%27s_Semantics_fd79.html \nand Linux has generally ignored direct I/O as something important to \noptimize for.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com\n\n",
"msg_date": "Tue, 07 Feb 2012 22:13:24 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to demonstrate the effect of direct I/O ?"
}
] |
[
{
"msg_contents": "Hello,\n\nI have quite systematically better performance with the text search when\nI disable the statistics collection for the tsvector column.\nSo I wonder if such statistics ever make sense.\n\nHere a testcase:\n\nThe table contains 200'000 tsvector, whereas the lexeme 'fooblablabla'\nexists in all tsvector:\nWithout statistics, the planner decide as expected for the gin index.\nAfter analyze, it switch to a table scan which is also expected, but the\nquery is 3 times slower.\n\nMy first thought was that the Bitmap Heap Scan was really fast as the\nsearched term is always at the first position.\nSo I repeated the test with an additional search term at the last\nposition, but without significant change:\n\n(result from the 6. test below)\n\nwithout analyze: http://explain.depesz.com/s/6At\nwith analyze: http://explain.depesz.com/s/r3B\n\n\nbest regards,\n\nMarc Mamin\n\n\n\n\nHere all my results, always one of the fastest from a few runs.\n\n\nCREATE TABLE tsv_test\n(\n id bigserial NOT NULL,\n v tsvector\n);\n\n\n<The code to fill the table with test data can be found below>\n\n\nThe test query:\n\nexplain analyze\nselect id from tsv_test where v @@ 'lexeme3179'::tsquery \nUNION ALL\nselect id from tsv_test where v @@ 'lexeme5'::tsquery\nUNION ALL\nselect id from tsv_test where v @@ 'fooblablabla'::tsquery\n\nThe results\n\nA) on first lexeme\n\n1) without indexes without analyze:\n http://explain.depesz.com/s/bOv\n\n2) alter table tsv_test add constraint tsv_test_pk primary key(id);\n http://explain.depesz.com/s/9QQ (same as previous);\n\n3) create index tsv_gin on tsv_test using gin(v);\n http://explain.depesz.com/s/r4M <= fastest\n\n4) ANALYZE tsv_test (id);\n http://explain.depesz.com/s/MyC (same as previous);\n\n5) ANALYZE tsv_test;\n http://explain.depesz.com/s/qu3S \n \n\nB) on lastlexeme \n\n6) create table tsv_test2 as select id,\n v||'zzthisisalongerlexemethisisalongerlexeme'::tsvector \n from tsv_test;\n \n explain analyze\n select id from tsv_test2 where v @@\n'zzthisisalongerlexemethisisalongerlexeme'::tsquery \n \n http://explain.depesz.com/s/6At \n \n ANALYZE tsv_test2;\n \n http://explain.depesz.com/s/r3B \n\n\n\ntest data:\n\ninsert into tsv_test (v) \nselect\ncast('fooblablabla' ||\n' lexeme'||s%2|| ' lexeme'||s%3|| ' lexeme'||s%4||\n' lexeme'||s%4|| ' lexeme'||s%5|| ' lexeme'||s%6||\n' lexeme'||s%7|| ' lexeme'||s%8|| ' lexeme'||s%9||\n' lexeme'||s%10 || ' lexeme2'||s%11 || ' lexeme3'||s%12 ||\n' lexeme'||s%11 || ' lexeme2'||s%12 || ' lexeme3'||s%22 ||\n' lexeme'||s%12 || ' lexeme2'||s%13 || ' lexeme3'||s%32 ||\n' lexeme'||s%13 || ' lexeme2'||s%14 || ' lexeme3'||s%42 ||\n' lexeme'||s%14 || ' lexeme2'||s%15 || ' lexeme3'||s%52 ||\n' lexeme'||s%15 || ' lexeme2'||s%16 || ' lexeme3'||s%62 ||\n' lexeme'||s%16 || ' lexeme2'||s%17 || ' lexeme3'||s%72 ||\n' lexeme'||s%17 || ' lexeme2'||s%18 || ' lexeme3'||s%82 ||\n' lexeme'||s%18 || ' lexeme2'||s%19 || ' lexeme3'||s%92 ||\n' lexeme'||s%19 || ' lexeme2'||s%10 || ' lexeme3'||s%15 ||\n' lexeme'||s%12 || ' lexeme2'||s%71 || ' lexeme3'||s%16 ||\n' lexeme'||s%20 || ' lexeme2'||s%81 || ' lexeme3'||s%17 ||\n' lexeme'||s%35 || ' lexeme2'||s%91 || ' lexeme3'||s%18 ||\n' lexeme'||s%100 || ' lexeme2'||s%110 || ' lexeme3'||s%120 ||\n' lexeme'||s%110 || ' lexeme2'||s%120 || ' lexeme3'||s%220 ||\n' lexeme'||s%120 || ' lexeme2'||s%130 || ' lexeme3'||s%320 ||\n' lexeme'||s%130 || ' lexeme2'||s%140 || ' lexeme3'||s%420 ||\n' lexeme'||s%140 || ' lexeme2'||s%150 || ' lexeme3'||s%520 ||\n' lexeme'||s%150 || ' lexeme2'||s%160 || ' 
lexeme3'||s%620 ||\n' lexeme'||s%160 || ' lexeme2'||s%170 || ' lexeme3'||s%720 ||\n' lexeme'||s%170 || ' lexeme2'||s%180 || ' lexeme3'||s%820 ||\n' lexeme'||s%180 || ' lexeme2'||s%190 || ' lexeme3'||s%920 ||\n' lexeme'||s%190 || ' lexeme2'||s%100 || ' lexeme3'||s%150 ||\n' lexeme'||s%120 || ' lexeme2'||s%710 || ' lexeme3'||s%160 ||\n' lexeme'||s%200 || ' lexeme2'||s%810 || ' lexeme3'||s%170 ||\n' lexeme'||s%350 || ' lexeme2'||s%910 || ' lexeme3'||s%180 \nas tsvector)\nFROM generate_series(1,100000) s\nUNION ALL\nselect\ncast('fooblablabla' ||\n' thisisalongerlexemethisisalongerlexeme'||s%2|| '\nthisisalongerlexemethisisalongerlexeme'||s%3|| '\nthisisalongerlexemethisisalongerlexeme'||s%4||\n' thisisalongerlexemethisisalongerlexeme'||s%4|| '\nthisisalongerlexemethisisalongerlexeme'||s%5|| '\nthisisalongerlexemethisisalongerlexeme'||s%6||\n' thisisalongerlexemethisisalongerlexeme'||s%7|| '\nthisisalongerlexemethisisalongerlexeme'||s%8|| '\nthisisalongerlexemethisisalongerlexeme'||s%9||\n' thisisalongerlexemethisisalongerlexeme'||s%10 || '\nthisisalongerlexemethisisalongerlexeme2'||s%11 || '\nthisisalongerlexemethisisalongerlexeme3'||s%12 ||\n' thisisalongerlexemethisisalongerlexeme'||s%11 || '\nthisisalongerlexemethisisalongerlexeme2'||s%12 || '\nthisisalongerlexemethisisalongerlexeme3'||s%22 ||\n' thisisalongerlexemethisisalongerlexeme'||s%12 || '\nthisisalongerlexemethisisalongerlexeme2'||s%13 || '\nthisisalongerlexemethisisalongerlexeme3'||s%32 ||\n' thisisalongerlexemethisisalongerlexeme'||s%13 || '\nthisisalongerlexemethisisalongerlexeme2'||s%14 || '\nthisisalongerlexemethisisalongerlexeme3'||s%42 ||\n' thisisalongerlexemethisisalongerlexeme'||s%14 || '\nthisisalongerlexemethisisalongerlexeme2'||s%15 || '\nthisisalongerlexemethisisalongerlexeme3'||s%52 ||\n' thisisalongerlexemethisisalongerlexeme'||s%15 || '\nthisisalongerlexemethisisalongerlexeme2'||s%16 || '\nthisisalongerlexemethisisalongerlexeme3'||s%62 ||\n' thisisalongerlexemethisisalongerlexeme'||s%16 || '\nthisisalongerlexemethisisalongerlexeme2'||s%17 || '\nthisisalongerlexemethisisalongerlexeme3'||s%72 ||\n' thisisalongerlexemethisisalongerlexeme'||s%17 || '\nthisisalongerlexemethisisalongerlexeme2'||s%18 || '\nthisisalongerlexemethisisalongerlexeme3'||s%82 ||\n' thisisalongerlexemethisisalongerlexeme'||s%18 || '\nthisisalongerlexemethisisalongerlexeme2'||s%19 || '\nthisisalongerlexemethisisalongerlexeme3'||s%92 ||\n' thisisalongerlexemethisisalongerlexeme'||s%19 || '\nthisisalongerlexemethisisalongerlexeme2'||s%10 || '\nthisisalongerlexemethisisalongerlexeme3'||s%15 ||\n' thisisalongerlexemethisisalongerlexeme'||s%12 || '\nthisisalongerlexemethisisalongerlexeme2'||s%71 || '\nthisisalongerlexemethisisalongerlexeme3'||s%16 ||\n' thisisalongerlexemethisisalongerlexeme'||s%20 || '\nthisisalongerlexemethisisalongerlexeme2'||s%81 || '\nthisisalongerlexemethisisalongerlexeme3'||s%17 ||\n' thisisalongerlexemethisisalongerlexeme'||s%35 || '\nthisisalongerlexemethisisalongerlexeme2'||s%91 || '\nthisisalongerlexemethisisalongerlexeme3'||s%18 ||\n' thisisalongerlexemethisisalongerlexeme'||s%100 || '\nthisisalongerlexemethisisalongerlexeme2'||s%110 || '\nthisisalongerlexemethisisalongerlexeme3'||s%120 ||\n' thisisalongerlexemethisisalongerlexeme'||s%110 || '\nthisisalongerlexemethisisalongerlexeme2'||s%120 || '\nthisisalongerlexemethisisalongerlexeme3'||s%220 ||\n' thisisalongerlexemethisisalongerlexeme'||s%120 || '\nthisisalongerlexemethisisalongerlexeme2'||s%130 || '\nthisisalongerlexemethisisalongerlexeme3'||s%320 ||\n' 
thisisalongerlexemethisisalongerlexeme'||s%130 || '\nthisisalongerlexemethisisalongerlexeme2'||s%140 || '\nthisisalongerlexemethisisalongerlexeme3'||s%420 ||\n' thisisalongerlexemethisisalongerlexeme'||s%140 || '\nthisisalongerlexemethisisalongerlexeme2'||s%150 || '\nthisisalongerlexemethisisalongerlexeme3'||s%520 ||\n' thisisalongerlexemethisisalongerlexeme'||s%150 || '\nthisisalongerlexemethisisalongerlexeme2'||s%160 || '\nthisisalongerlexemethisisalongerlexeme3'||s%620 ||\n' thisisalongerlexemethisisalongerlexeme'||s%160 || '\nthisisalongerlexemethisisalongerlexeme2'||s%170 || '\nthisisalongerlexemethisisalongerlexeme3'||s%720 ||\n' thisisalongerlexemethisisalongerlexeme'||s%170 || '\nthisisalongerlexemethisisalongerlexeme2'||s%180 || '\nthisisalongerlexemethisisalongerlexeme3'||s%820 ||\n' thisisalongerlexemethisisalongerlexeme'||s%180 || '\nthisisalongerlexemethisisalongerlexeme2'||s%190 || '\nthisisalongerlexemethisisalongerlexeme3'||s%920 ||\n' thisisalongerlexemethisisalongerlexeme'||s%190 || '\nthisisalongerlexemethisisalongerlexeme2'||s%100 || '\nthisisalongerlexemethisisalongerlexeme3'||s%150 ||\n' thisisalongerlexemethisisalongerlexeme'||s%120 || '\nthisisalongerlexemethisisalongerlexeme2'||s%710 || '\nthisisalongerlexemethisisalongerlexeme3'||s%160 ||\n' thisisalongerlexemethisisalongerlexeme'||s%200 || '\nthisisalongerlexemethisisalongerlexeme2'||s%810 || '\nthisisalongerlexemethisisalongerlexeme3'||s%170 ||\n' thisisalongerlexemethisisalongerlexeme'||s%350 || '\nthisisalongerlexemethisisalongerlexeme2'||s%910 || '\nthisisalongerlexemethisisalongerlexeme3'||s%180 \nas tsvector)\nFROM generate_series(1,100000) s\n\n\n",
"msg_date": "Mon, 6 Feb 2012 12:05:28 +0100",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "text search: tablescan cost for a tsvector"
},
{
"msg_contents": "On Mon, Feb 6, 2012 at 6:05 AM, Marc Mamin <[email protected]> wrote:\n> without analyze: http://explain.depesz.com/s/6At\n> with analyze: http://explain.depesz.com/s/r3B\n\nI think this is the same issue complained about here:\n\nhttp://archives.postgresql.org/message-id/[email protected]\n\nAnd here:\n\nhttp://archives.postgresql.org/message-id/CANxtv6XiuiqEkXRJU2vk=xKAFXrLeP7uVhgR-XMCyjgQz29EFQ@mail.gmail.com\n\nThe problem seems to be that the cost estimator doesn't know that\ndetoasting is expensive.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 29 Feb 2012 13:32:48 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: text search: tablescan cost for a tsvector"
},
{
"msg_contents": "> Von: Robert Haas [mailto:[email protected]]\n> Gesendet: Mi 2/29/2012 7:32\n\n> \n> On Mon, Feb 6, 2012 at 6:05 AM, Marc Mamin <[email protected]> wrote:\n> > without analyze: http://explain.depesz.com/s/6At\n> > with analyze: http://explain.depesz.com/s/r3B\n... \n> The problem seems to be that the cost estimator doesn't know that\n> detoasting is expensive.\n\nHello,\n\nTom Lane has started a follow up thread in the hacker list.\nDetoasting is indeed the main obstacle, but I've repeated my test using plain storage\nand the planer still choose (systematically?) the slowest query.\nIt seems that I bumped into 2 different issues at the same time.\n\nhttp://archives.postgresql.org/pgsql-hackers/2012-02/msg00896.php\n\nBackround: \nOur reporting system offers amongst others time histograms \ncombined with a FTS filtering on error occurences (imported from error logs), \nIt is hence not unusual that given search terms are found within a majority of the documents...\n\nbest regards,\n\nMarc Mamin\n\n\n\n\n\nAW: [PERFORM] text search: tablescan cost for a tsvector\n\n\n\n> Von: Robert Haas [mailto:[email protected]]\n> Gesendet: Mi 2/29/2012 7:32\n\n> \n> On Mon, Feb 6, 2012 at 6:05 AM, Marc Mamin <[email protected]> wrote:\n> > without analyze: http://explain.depesz.com/s/6At\n> > with analyze: http://explain.depesz.com/s/r3B\n...\n> The problem seems to be that the cost estimator doesn't know that\n> detoasting is expensive.\n\nHello,\n\nTom Lane has started a follow up thread in the hacker list.\nDetoasting is indeed the main obstacle, but I've repeated my test using plain storage\nand the planer still choose (systematically?) the slowest query.\nIt seems that I bumped into 2 different issues at the same time.\n\nhttp://archives.postgresql.org/pgsql-hackers/2012-02/msg00896.php\n\nBackround:\nOur reporting system offers amongst others time histograms\ncombined with a FTS filtering on error occurences (imported from error logs),\nIt is hence not unusual that given search terms are found within a majority of the documents...\n\nbest regards,\n\nMarc Mamin",
"msg_date": "Wed, 29 Feb 2012 21:40:22 +0100",
"msg_from": "\"Marc Mamin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: text search: tablescan cost for a tsvector"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe are currently \"stuck\" with a performance bottleneck in our server using PG and we are thinking of two potential solutions which I would be happy to hear your opinion about.\n\nOur system has a couple of tables that hold client generated information. The clients communicate every minute with the server and thus we perform an update on these two tables every minute. We are talking about ~50K clients (and therefore records).\n\nThese constant updates have made the table sizes to grow drastically and index bloating. So the two solutions that we are talking about are:\n\n 1. Configure autovacuum to work more intensively in both time and cost parameters.\nPros:\nNot a major architectural change.\nCons:\nAutovacuum does not handle index bloating and thus we will need to periodically reindex the tables.\nPerhaps we will also need to run vacuum full periodically if the autovacuum cleaning is not at the required pace and therefore defragmentation of the tables is needed?\n\n\n 1. Creating a new table every minute and inserting the data into this new temporary table (only inserts). This process will happen every minute. Note that in this process we will also need to copy missing data (clients that didn't communicate) from older table.\nPros:\nTables are always compact.\nWe will not reach a limit of autovacuum.\nCons:\nMajor architectural change.\n\nSo to sum it up, we would be happy to refrain from performing a major change to the system (solution #2), but we are not certain that the correct way to work in our situation, constant updates of records, is to configure an aggressive autovacuum or perhaps the \"known methodology\" is to work with temporary tables that are always inserted into?\n\n\nThank you,\nOfer\n\n\n\n\n\n\n\n\n\nHi all,\n \nWe are currently “stuck” with a performance\nbottleneck in our server using PG and we are thinking of two potential\nsolutions which I would be happy to hear your opinion about.\n \nOur system has a couple of tables that hold client generated\ninformation. The clients communicate every\nminute with the server and thus we perform an update on these two tables every\nminute. We are talking about ~50K clients (and therefore records).\n \nThese constant updates have made the table sizes to grow\ndrastically and index bloating. So the two solutions that we are talking\nabout are:\n\nConfigure autovacuum to work\n more intensively in both time\n and cost parameters.\n\nPros:\nNot a major architectural change.\nCons:\nAutovacuum does not handle index\nbloating and thus we will need to periodically reindex the tables.\nPerhaps we will also need to run vacuum\nfull periodically if the autovacuum cleaning is not at the required pace and therefore\ndefragmentation of the tables is needed?\n \n\nCreating a new table every\n minute and inserting the data into this new temporary table (only\n inserts). This process will happen every minute. Note that in\n this process we will also need to copy missing data (clients that didn’t\n communicate) from older table.\n\nPros:\nTables are always compact.\nWe will not reach a limit of autovacuum.\nCons:\nMajor architectural change.\n \nSo to sum it up, we would be happy to refrain from\nperforming a major change to the system (solution #2), but we are not certain\nthat the correct way to work in our situation, constant updates of records, is\nto configure an aggressive autovacuum or perhaps the “known methodology”\nis to work with temporary tables that are always inserted into?\n \n \nThank you,\nOfer",
"msg_date": "Tue, 7 Feb 2012 12:18:35 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inserts or Updates"
},
{
"msg_contents": "On 2/7/2012 4:18 AM, Ofer Israeli wrote:\n> Hi all,\n>\n> We are currently “stuck” with a performance bottleneck in our server\n> using PG and we are thinking of two potential solutions which I would be\n> happy to hear your opinion about.\n>\n> Our system has a couple of tables that hold client generated\n> information. The clients communicate *every* minute with the server and\n> thus we perform an update on these two tables every minute. We are\n> talking about ~50K clients (and therefore records).\n>\n> These constant updates have made the table sizes to grow drastically and\n> index bloating. So the two solutions that we are talking about are:\n>\n\nYou dont give any table details, so I'll have to guess. Maybe you have \ntoo many indexes on your table? Or, you dont have a good primary index, \nwhich means your updates are changing the primary key?\n\nIf you only have a primary index, and you are not changing it, Pg should \nbe able to do HOT updates.\n\nIf you have lots of indexes, you should review them, you probably don't \nneed half of them.\n\n\nAnd like Kevin said, try the simple one first. Wont hurt anything, and \nif it works, great!\n\n-Andy\n",
"msg_date": "Tue, 07 Feb 2012 08:47:22 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Hi Andy,\n\nThe two tables I am referring to have the following specs:\nTable 1:\n46 columns\n23 indexes on fields of the following types:\nINTEGER - 7\nTIMESTAMP - 2\nVARCHAR - 12\nUUID - 2\n\n23 columns\n12 indexes on fields of the following types:\nINTEGER - 3\nTIMESTAMP - 1\nVARCHAR - 6\nUUID - 2\n\nAll indexes are default indexes. \n\nThe primary index is INTERGER and is not updated.\n\nThe indexes are used for sorting and filtering purposes in our UI.\n\n\nI will be happy to hear your thoughts on this.\n\nThanks,\nOfer\n\n-----Original Message-----\nFrom: Andy Colson [mailto:[email protected]] \nSent: Tuesday, February 07, 2012 4:47 PM\nTo: Ofer Israeli\nCc: [email protected]; Olga Vingurt; Netta Kabala\nSubject: Re: [PERFORM] Inserts or Updates\n\nOn 2/7/2012 4:18 AM, Ofer Israeli wrote:\n> Hi all,\n>\n> We are currently \"stuck\" with a performance bottleneck in our server\n> using PG and we are thinking of two potential solutions which I would be\n> happy to hear your opinion about.\n>\n> Our system has a couple of tables that hold client generated\n> information. The clients communicate *every* minute with the server and\n> thus we perform an update on these two tables every minute. We are\n> talking about ~50K clients (and therefore records).\n>\n> These constant updates have made the table sizes to grow drastically and\n> index bloating. So the two solutions that we are talking about are:\n>\n\nYou dont give any table details, so I'll have to guess. Maybe you have \ntoo many indexes on your table? Or, you dont have a good primary index, \nwhich means your updates are changing the primary key?\n\nIf you only have a primary index, and you are not changing it, Pg should \nbe able to do HOT updates.\n\nIf you have lots of indexes, you should review them, you probably don't \nneed half of them.\n\n\nAnd like Kevin said, try the simple one first. Wont hurt anything, and \nif it works, great!\n\n-Andy\n\nScanned by Check Point Total Security Gateway.\n",
"msg_date": "Tue, 7 Feb 2012 19:40:11 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "> -----Original Message-----\n> From: Andy Colson [mailto:[email protected]]\n> Sent: Tuesday, February 07, 2012 4:47 PM\n> To: Ofer Israeli\n> Cc: [email protected]; Olga Vingurt; Netta Kabala\n> Subject: Re: [PERFORM] Inserts or Updates\n>\n> On 2/7/2012 4:18 AM, Ofer Israeli wrote:\n>> Hi all,\n>>\n>> We are currently \"stuck\" with a performance bottleneck in our server\n>> using PG and we are thinking of two potential solutions which I would be\n>> happy to hear your opinion about.\n>>\n>> Our system has a couple of tables that hold client generated\n>> information. The clients communicate *every* minute with the server and\n>> thus we perform an update on these two tables every minute. We are\n>> talking about ~50K clients (and therefore records).\n>>\n>> These constant updates have made the table sizes to grow drastically and\n>> index bloating. So the two solutions that we are talking about are:\n>>\n>\n> You dont give any table details, so I'll have to guess. Maybe you have\n> too many indexes on your table? Or, you dont have a good primary index,\n> which means your updates are changing the primary key?\n>\n> If you only have a primary index, and you are not changing it, Pg should\n> be able to do HOT updates.\n>\n> If you have lots of indexes, you should review them, you probably don't\n> need half of them.\n>\n>\n> And like Kevin said, try the simple one first. Wont hurt anything, and\n> if it works, great!\n>\n> -Andy\n>\n\n\nOn 2/7/2012 11:40 AM, Ofer Israeli wrote:\n > Hi Andy,\n >\n > The two tables I am referring to have the following specs:\n > Table 1:\n > 46 columns\n > 23 indexes on fields of the following types:\n > INTEGER - 7\n > TIMESTAMP - 2\n > VARCHAR - 12\n > UUID - 2\n >\n > 23 columns\n > 12 indexes on fields of the following types:\n > INTEGER - 3\n > TIMESTAMP - 1\n > VARCHAR - 6\n > UUID - 2\n >\n > All indexes are default indexes.\n >\n > The primary index is INTERGER and is not updated.\n >\n > The indexes are used for sorting and filtering purposes in our UI.\n >\n >\n > I will be happy to hear your thoughts on this.\n >\n > Thanks,\n > Ofer\n >\n\nFixed that top post for ya.\n\nWow, so out of 46 columns, half of them have indexes? That's a lot. \nI'd bet you could drop a bunch of them. You should review them and see \nif they are actually helping you. You already found out that maintain \nall those indexes is painful. If they are not speeding up your SELECT's \nby a huge amount, you should drop them.\n\nSounds like you went thru your sql statements and any field that was \neither in the where or order by clause you added an index for?\n\nYou need to find the columns that are the most selective. An index \nshould be useful at cutting the number of rows down. Once you have it \ncut down, an index on another field wont really help that much. And \nafter a result set has been collected, an index may or may not help for \nsorting.\n\nRunning some queries with EXPLAIN ANALYZE would be helpful. Give it a \nrun, drop an index, try it again to see if its about the same, or if \nthat index made a difference.\n\n-Andy\n",
"msg_date": "Tue, 07 Feb 2012 13:30:06 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Andy Colson <[email protected]> wrote:\n \n> Wow, so out of 46 columns, half of them have indexes? That's a\n> lot. I'd bet you could drop a bunch of them. You should review\n> them and see if they are actually helping you. You already found\n> out that maintain all those indexes is painful. If they are not\n> speeding up your SELECT's by a huge amount, you should drop them.\n \nYou might want to review usage counts in pg_stat_user_indexes.\n \n-Kevin\n",
"msg_date": "Tue, 07 Feb 2012 13:40:41 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Oh, I knew I'd seen index usage stats someplace.\n\ngive this a run:\n\nselect * from pg_stat_user_indexes where relname = 'SuperBigTable';\n\nhttp://www.postgresql.org/docs/current/static/monitoring-stats.html\n\n-Andy\n",
"msg_date": "Tue, 07 Feb 2012 13:43:13 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Andy Colson wrote:\n> Oh, I knew I'd seen index usage stats someplace.\n> \n> give this a run:\n> \n> select * from pg_stat_user_indexes where relname = 'SuperBigTable';\n> \n> http://www.postgresql.org/docs/current/static/monitoring-stats.html\n> \n> -Andy\n> \n> Scanned by Check Point Total Security Gateway.\n\n\nThanks. We have begun analyzing the indexes and indeed found many are pretty useless and will be removed.\n",
"msg_date": "Wed, 8 Feb 2012 21:22:48 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 20:22, Ofer Israeli <[email protected]> wrote:\n\n> Andy Colson wrote:\n> > Oh, I knew I'd seen index usage stats someplace.\n> >\n> > give this a run:\n> >\n> > select * from pg_stat_user_indexes where relname = 'SuperBigTable';\n> >\n> > http://www.postgresql.org/docs/current/static/monitoring-stats.html\n> >\n> > -Andy\n> >\n> > Scanned by Check Point Total Security Gateway.\n>\n>\n> Thanks. We have begun analyzing the indexes and indeed found many are\n> pretty useless and will be removed.\n>\n\nA quick word of warning: not all indexes are used for querying, some are\nused for maintaining constraints and foreign keys. These show up as\n\"useless\" in the above query.\n\nOn Wed, Feb 8, 2012 at 20:22, Ofer Israeli <[email protected]> wrote:\n\nAndy Colson wrote:\n> Oh, I knew I'd seen index usage stats someplace.\n>\n> give this a run:\n>\n> select * from pg_stat_user_indexes where relname = 'SuperBigTable';\n>\n> http://www.postgresql.org/docs/current/static/monitoring-stats.html\n>\n> -Andy\n>\n> Scanned by Check Point Total Security Gateway.\n\n\nThanks. We have begun analyzing the indexes and indeed found many are pretty useless and will be removed.A quick word of warning: not all indexes are used for querying, some are used for maintaining constraints and foreign keys. These show up as \"useless\" in the above query.",
"msg_date": "Thu, 9 Feb 2012 11:09:25 +0100",
"msg_from": "Vik Reykja <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Am 07.02.2012 18:40, schrieb Ofer Israeli:\n> Table 1:\n> 46 columns\n> 23 indexes on fields of the following types:\n> INTEGER - 7\n> TIMESTAMP - 2\n> VARCHAR - 12\n> UUID - 2\n> \n> 23 columns\n> 12 indexes on fields of the following types:\n> INTEGER - 3\n> TIMESTAMP - 1\n> VARCHAR - 6\n> UUID - 2\n\nAre you regularly updating all columns? If not, maybe a good idea to\nsplit the tables so highly updated columns don't effect complete line.\n\ncheers,\nFrank\n",
"msg_date": "Thu, 09 Feb 2012 14:28:35 +0100",
"msg_from": "Frank Lanitz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Frank Lanitz wrote:\n> Am 07.02.2012 18:40, schrieb Ofer Israeli:\n>> Table 1:\n>> 46 columns\n>> 23 indexes on fields of the following types:\n>> INTEGER - 7\n>> TIMESTAMP - 2\n>> VARCHAR - 12\n>> UUID - 2\n>> \n>> 23 columns\n>> 12 indexes on fields of the following types:\n>> INTEGER - 3\n>> TIMESTAMP - 1\n>> VARCHAR - 6\n>> UUID - 2\n> \n> Are you regularly updating all columns? If not, maybe a good idea to\n> split the tables so highly updated columns don't effect complete\n> line. \n\nWe're not always updating all of the columns, but the reason for consolidating all the columns into one table is for UI purposes - in the past, they had done benchmarks and found the JOINs to be extremely slow and so all data was consolidated into one table.\n\nThanks,\nOfer",
"msg_date": "Sun, 12 Feb 2012 12:48:48 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Am 12.02.2012 11:48, schrieb Ofer Israeli:\n> Frank Lanitz wrote:\n>>> Am 07.02.2012 18:40, schrieb Ofer Israeli:\n>>>>> Table 1: 46 columns 23 indexes on fields of the following\n>>>>> types: INTEGER - 7 TIMESTAMP - 2 VARCHAR - 12 UUID - 2\n>>>>> \n>>>>> 23 columns 12 indexes on fields of the following types: \n>>>>> INTEGER - 3 TIMESTAMP - 1 VARCHAR - 6 UUID - 2\n>>> \n>>> Are you regularly updating all columns? If not, maybe a good idea\n>>> to split the tables so highly updated columns don't effect\n>>> complete line.\n> We're not always updating all of the columns, but the reason for\n> consolidating all the columns into one table is for UI purposes - in\n> the past, they had done benchmarks and found the JOINs to be\n> extremely slow and so all data was consolidated into one table.\n\nAh... I see. Maybe you can check whether all of the data are really\nneeded to fetch with one select but this might end up in tooo much\nguessing and based on your feedback you already did this step.\n\nCheers,\nFrank",
"msg_date": "Sun, 12 Feb 2012 19:29:13 +0100",
"msg_from": "Frank Lanitz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Frank Lanitz wrote:\n> Am 12.02.2012 11:48, schrieb Ofer Israeli:\n>> Frank Lanitz wrote:\n>>>> Am 07.02.2012 18:40, schrieb Ofer Israeli:\n>>>>>> Table 1: 46 columns 23 indexes on fields of the following\n>>>>>> types: INTEGER - 7 TIMESTAMP - 2 VARCHAR - 12 UUID - 2\n>>>>>> \n>>>>>> 23 columns 12 indexes on fields of the following types:\n>>>>>> INTEGER - 3 TIMESTAMP - 1 VARCHAR - 6 UUID - 2\n>>>> \n>>>> Are you regularly updating all columns? If not, maybe a good idea\n>>>> to split the tables so highly updated columns don't effect complete\n>>>> line.\n>> We're not always updating all of the columns, but the reason for\n>> consolidating all the columns into one table is for UI purposes - in\n>> the past, they had done benchmarks and found the JOINs to be\n>> extremely slow and so all data was consolidated into one table.\n> \n> Ah... I see. Maybe you can check whether all of the data are really\n> needed to fetch with one select but this might end up in tooo much\n> guessing and based on your feedback you already did this step. \n \n\nThis was indeed checked, but I'm not sure it was thorough enough so we're having a go at it again. In the meanwhile, the autovacuum configurations have proved to help us immensely so for now we're good (will probably be asking around soon when we hit our next bottleneck :)). Thanks for your help!",
"msg_date": "Sun, 12 Feb 2012 20:32:11 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts or Updates"
}
] |
[
{
"msg_contents": "Ofer Israeli wrote:\n \n> Our system has a couple of tables that hold client generated\n> information. The clients communicate every minute with the server\n> and thus we perform an update on these two tables every minute. We\n> are talking about ~50K clients (and therefore records).\n> \n> These constant updates have made the table sizes to grow\n> drastically and index bloating. So the two solutions that we are\n> talking about are:\n> \n> 1. Configure autovacuum to work more intensively in both time and\n> cost parameters.\n> Pros:\n> Not a major architectural change.\n> Cons:\n> Autovacuum does not handle index bloating and thus we will need to\n> periodically reindex the tables.\n \nDone aggressively enough, autovacuum should prevent index bloat, too.\n \n> Perhaps we will also need to run vacuum full periodically if the\n> autovacuum cleaning is not at the required pace and therefore\n> defragmentation of the tables is needed?\n \nThe other thing that can cause bloat in this situation is a\nlong-running transaction. To correct occasional bloat due to that on\nsmall frequently-updated tables we run CLUSTER on them daily during\noff-peak hours. If you are on version 9.0 or later, VACUUM FULL\ninstead would be fine. While this locks the table against other\naction while it runs, on a small table it is a small enough fraction\nof a second that nobody notices.\n \n> 1. Creating a new table every minute and inserting the data into\n> this new temporary table (only inserts). This process will happen\n> every minute. Note that in this process we will also need to copy\n> missing data (clients that didn't communicate) from older table.\n> Pros:\n> Tables are always compact.\n> We will not reach a limit of autovacuum.\n> Cons:\n> Major architectural change.\n \nI would try the other alternative first.\n \n-Kevin\n",
"msg_date": "Tue, 07 Feb 2012 06:27:33 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Thanks Kevin for the ideas. Now that you have corrected our misconception regarding the autovacuum not handling index bloating, we are looking into running autovacuum frequently enough to make sure we don't have significant increase in table size or index size. We intend to keep our transactions short enough not to reach the situation where vacuum full or CLUSTER is needed.\n\nThanks,\nOfer\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: Tuesday, February 07, 2012 2:28 PM\nTo: Ofer Israeli; [email protected]\nCc: Netta Kabala; Olga Vingurt\nSubject: Re: [PERFORM] Inserts or Updates\n\nOfer Israeli wrote:\n \n> Our system has a couple of tables that hold client generated\n> information. The clients communicate every minute with the server\n> and thus we perform an update on these two tables every minute. We\n> are talking about ~50K clients (and therefore records).\n> \n> These constant updates have made the table sizes to grow\n> drastically and index bloating. So the two solutions that we are\n> talking about are:\n> \n> 1. Configure autovacuum to work more intensively in both time and\n> cost parameters.\n> Pros:\n> Not a major architectural change.\n> Cons:\n> Autovacuum does not handle index bloating and thus we will need to\n> periodically reindex the tables.\n \nDone aggressively enough, autovacuum should prevent index bloat, too.\n \n> Perhaps we will also need to run vacuum full periodically if the\n> autovacuum cleaning is not at the required pace and therefore\n> defragmentation of the tables is needed?\n \nThe other thing that can cause bloat in this situation is a\nlong-running transaction. To correct occasional bloat due to that on\nsmall frequently-updated tables we run CLUSTER on them daily during\noff-peak hours. If you are on version 9.0 or later, VACUUM FULL\ninstead would be fine. While this locks the table against other\naction while it runs, on a small table it is a small enough fraction\nof a second that nobody notices.\n \n> 1. Creating a new table every minute and inserting the data into\n> this new temporary table (only inserts). This process will happen\n> every minute. Note that in this process we will also need to copy\n> missing data (clients that didn't communicate) from older table.\n> Pros:\n> Tables are always compact.\n> We will not reach a limit of autovacuum.\n> Cons:\n> Major architectural change.\n \nI would try the other alternative first.\n \n-Kevin\n\nScanned by Check Point Total Security Gateway.\n",
"msg_date": "Tue, 7 Feb 2012 19:27:33 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "On Tue, Feb 7, 2012 at 2:27 PM, Ofer Israeli <[email protected]> wrote:\n> Thanks Kevin for the ideas. Now that you have corrected our misconception regarding the autovacuum not handling index bloating, we are looking into running autovacuum frequently enough to make sure we don't have significant increase in table size or index size. We intend to keep our transactions short enough not to reach the situation where vacuum full or CLUSTER is needed.\n\nAlso, rather than going overboard with autovacuum settings, do make it\nmore aggressive, but also set up a regular, manual vacuum of either\nthe whole database or whatever tables you need to vacuum at\nknown-low-load hours.\n",
"msg_date": "Tue, 7 Feb 2012 14:31:20 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Hi Claudio,\n\nYou mean running a VACUUM statement manually? I would basically try to avoid such a situation as the way I see it, the database should be configured in such a manner that it will be able to handle the load at any given moment and so I wouldn't want to manually intervene here. If you think differently, I'll be happy to stand corrected.\n\n\nThanks,\nOfer\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Claudio Freire\nSent: Tuesday, February 07, 2012 7:31 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Inserts or Updates\n\nOn Tue, Feb 7, 2012 at 2:27 PM, Ofer Israeli <[email protected]> wrote:\n> Thanks Kevin for the ideas. Now that you have corrected our misconception regarding the autovacuum not handling index bloating, we are looking into running autovacuum frequently enough to make sure we don't have significant increase in table size or index size. We intend to keep our transactions short enough not to reach the situation where vacuum full or CLUSTER is needed.\n\nAlso, rather than going overboard with autovacuum settings, do make it\nmore aggressive, but also set up a regular, manual vacuum of either\nthe whole database or whatever tables you need to vacuum at\nknown-low-load hours.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nScanned by Check Point Total Security Gateway.\n",
"msg_date": "Tue, 7 Feb 2012 19:43:19 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "On Tue, Feb 7, 2012 at 2:43 PM, Ofer Israeli <[email protected]> wrote:\n> You mean running a VACUUM statement manually? I would basically try to avoid such a situation as the way I see it, the database should be configured in such a manner that it will be able to handle the load at any given moment and so I wouldn't want to manually intervene here. If you think differently, I'll be happy to stand corrected.\n\nI do think differently.\n\nAutovacuum isn't perfect, and you shouldn't make it too aggressive\nsince it does generate a lot of I/O activity. If you can pick a time\nwhere it will be able to run without interfering too much, running\nvacuum \"manually\" (where manually could easily be a cron task, ie,\nautomatically but coming from outside the database software itself),\nyou'll be able to dial down autovacuum and have more predictable load\noverall.\n",
"msg_date": "Tue, 7 Feb 2012 14:57:37 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": ">> You mean running a VACUUM statement manually? I would basically try to\n>> avoid such a situation as the way I see it, the database should be\n>> configured in such a manner that it will be able to handle the load at \n>> any given moment and so I wouldn't want to manually intervene here. If \n>> you think differently, I'll be happy to stand corrected.\n> \n> I do think differently.\n> \n> Autovacuum isn't perfect, and you shouldn't make it too aggressive\n> since it does generate a lot of I/O activity. If you can pick a time\n> where it will be able to run without interfering too much, running\n> vacuum \"manually\" (where manually could easily be a cron task, ie,\n> automatically but coming from outside the database software itself),\n> you'll be able to dial down autovacuum and have more predictable load\n> overall.\n> \n\n\nSomething specific that you refer to in autovacuum's non-perfection, that is, what types of issues are you aware of?\n\nAs for the I/O - this is indeed true that it can generate much activity, but the way I see it, if you run performance tests and the tests succeed in all parameters even with heavy I/O, then you are good to go. That is, I don't mind the server doing lots of I/O as long as it's not causing lags in processing the messages that it handles.\n\n\nThanks,\nOfer\n\n",
"msg_date": "Tue, 7 Feb 2012 21:12:01 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "On Tue, Feb 7, 2012 at 4:12 PM, Ofer Israeli <[email protected]> wrote:\n> Something specific that you refer to in autovacuum's non-perfection, that is, what types of issues are you aware of?\n\nI refer to its criteria for when to perform vacuum/analyze. Especially\nanalyze. It usually fails to detect the requirement to analyze a table\n- sometimes value distributions change without triggering an\nautoanalyze. It's expected, as the autoanalyze works on number of\ntuples updates/inserted relative to table size, which is too generic\nto catch business-specific conditions.\n\nAs everything, it depends on your business. The usage pattern, the\nkinds of updates performed, how data varies in time... but in essence,\nI've found that forcing a periodic vacuum/analyze of tables beyond\nwhat autovacuum does improves stability. You know a lot more about the\nbusiness and access/update patterns than autovacuum, so you can\nschedule them where they are needed and autovacuum wouldn't.\n\n> As for the I/O - this is indeed true that it can generate much activity, but the way I see it, if you run performance tests and the tests succeed in all parameters even with heavy I/O, then you are good to go. That is, I don't mind the server doing lots of I/O as long as it's not causing lags in processing the messages that it handles.\n\nIf you don't mind the I/O, by all means, crank it up.\n",
"msg_date": "Tue, 7 Feb 2012 16:20:46 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
},
{
"msg_contents": "Claudio Freire wrote:\n> On Tue, Feb 7, 2012 at 4:12 PM, Ofer Israeli <[email protected]>\n> wrote: \n>> Something specific that you refer to in autovacuum's non-perfection,\n>> that is, what types of issues are you aware of? \n> \n> I refer to its criteria for when to perform vacuum/analyze.\n> Especially analyze. It usually fails to detect the requirement to\n> analyze a table - sometimes value distributions change without\n> triggering an autoanalyze. It's expected, as the autoanalyze works on\n> number of tuples updates/inserted relative to table size, which is\n> too generic to catch business-specific conditions. \n> \n> As everything, it depends on your business. The usage pattern, the\n> kinds of updates performed, how data varies in time... but in\n> essence, I've found that forcing a periodic vacuum/analyze of tables\n> beyond what autovacuum does improves stability. You know a lot more\n> about the business and access/update patterns than autovacuum, so you\n> can schedule them where they are needed and autovacuum wouldn't. \n> \n>> As for the I/O - this is indeed true that it can generate much\n>> activity, but the way I see it, if you run performance tests and the\n>> tests succeed in all parameters even with heavy I/O, then you are\n>> good to go. That is, I don't mind the server doing lots of I/O as\n>> long as it's not causing lags in processing the messages that it\n>> handles. \n> \n> If you don't mind the I/O, by all means, crank it up.\n\n\nThanks for the lep Claudio. We're looking into both these options.",
"msg_date": "Wed, 8 Feb 2012 21:20:22 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserts or Updates"
}
] |
[
{
"msg_contents": "PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2\n20080704 (Red Hat 4.1.2-51), 64-bit\n\nDedicated DB server\n\n4GB ram\n\nShared_Buffers = 1 GB\n\nEffective_cache_size = 3GB\n\nWork_mem = 32GB\n\nAnalyze done\n\nQueries ran multiple times, same differences/results\n\nDefault Statistics = 1000\n\n\nQuery (5366ms) :\n\nexplain analyze select initcap (fullname), initcap(issuer),upper(rsymbol),\ninitcap(industry),\nactivity,to_char(shareschange,'FM9,999,999,999,999,999'),sharespchange ||+\nE'\\%' from changes where activity in (4,5) and mfiled >= (select\nmax(mfiled) from changes) order by shareschange asc limit 15\n\n\nSlow Ascending explain Analyze:\n\nhttp://explain.depesz.com/s/zFz\n\n\nQuery (15ms) :\n\nexplain analyze select initcap (fullname), initcap(issuer),upper(rsymbol),\ninitcap(industry),\nactivity,to_char(shareschange,'FM9,999,999,999,999,999'),sharespchange ||+\nE'\\%' from changes where activity in (4,5) and mfiled >= (select\nmax(mfiled) from changes) order by shareschange desc limit 15\n\n\nFast descending explain analyze:\n\nhttp://explain.depesz.com/s/OP7\n\n\n\nThe index: changes_shareschange is a btree index created with default\nascending order\n\n\nThe query plan and estimates are exactly the same, except desc has index\nscan backwards instead of index scan for changes_shareschange.\n\n\nYet, actual runtime performance is different by 357x slower for the\nascending version instead of descending.\n\n\nWhy and how do I fix it?\n\nPostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-51), 64-bit\nDedicated DB server4GB ram\nShared_Buffers = 1 GBEffective_cache_size = 3GB\nWork_mem = 32GBAnalyze done\nQueries ran multiple times, same differences/results\nDefault Statistics = 1000\nQuery (5366ms) :explain analyze select initcap (fullname), initcap(issuer),upper(rsymbol), initcap(industry), activity,to_char(shareschange,'FM9,999,999,999,999,999'),sharespchange ||+ E'\\%' from changes where activity in (4,5) and mfiled >= (select max(mfiled) from changes) order by shareschange asc limit 15 \n\nSlow Ascending explain Analyze:http://explain.depesz.com/s/zFz\n\nQuery (15ms) :\nexplain analyze select initcap (fullname), initcap(issuer),upper(rsymbol), initcap(industry), activity,to_char(shareschange,'FM9,999,999,999,999,999'),sharespchange ||+ E'\\%' from changes where activity in (4,5) and mfiled >= (select max(mfiled) from changes) order by shareschange desc limit 15 \nFast descending explain analyze:\nhttp://explain.depesz.com/s/OP7\n\nThe index: changes_shareschange is a btree index created with default ascending order\n\nThe query plan and estimates are exactly the same, except desc has index scan backwards instead of index scan for changes_shareschange.\nYet, actual runtime performance is different by 357x slower for the ascending version instead of descending.\nWhy and how do I fix it?",
"msg_date": "Tue, 7 Feb 2012 08:49:05 -0800",
"msg_from": "Kevin Traster <[email protected]>",
"msg_from_op": true,
"msg_subject": "index scan forward vs backward = speed difference of 357X slower!"
},
{
"msg_contents": "\n\n\n\n\n what's the size of the index? is it too big to fit in\n shared_buffers? maybe the firt 15 rows by asc order are in buffer\n but the ones of desc order are not, while your disk IO is very slow?\n btw, your mem configuration of work_men is very strange. \n\n 于 2012/2/8 0:49, Kevin Traster 写道:\n \nPostgreSQL 9.1.2 on\n x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2\n 20080704 (Red Hat 4.1.2-51), 64-bit\nDedicated DB server\n4GB ram\nShared_Buffers = 1 GB\nEffective_cache_size = 3GB\nWork_mem = 32GB\nAnalyze done\nQueries ran multiple times, same\n differences/results\nDefault Statistics = 1000\n\n\nQuery (5366ms) :\nexplain\n analyze select initcap (fullname),\n initcap(issuer),upper(rsymbol), initcap(industry),\n activity,to_char(shareschange,'FM9,999,999,999,999,999'),sharespchange\n ||+ E'\\%' from changes where activity in (4,5) and mfiled\n >= (select max(mfiled) from changes) order by\n shareschange asc limit 15 \n\n\n\nSlow\n Ascending explain Analyze:\nhttp://explain.depesz.com/s/zFz\n\n\n\nQuery (15ms) :\nexplain\n analyze select initcap (fullname),\n initcap(issuer),upper(rsymbol), initcap(industry),\n activity,to_char(shareschange,'FM9,999,999,999,999,999'),sharespchange\n ||+ E'\\%' from changes where activity in (4,5) and mfiled\n >= (select max(mfiled) from changes) order by\n shareschange desc limit 15 \n\n\nFast descending explain analyze:\nhttp://explain.depesz.com/s/OP7\n\n\n\n\nThe\n index: changes_shareschange\n is\n a btree\n index created with default ascending order\n\n\nThe\n query plan and estimates are exactly the same, except desc\n has index scan backwards instead of index scan for\n changes_shareschange.\n\n\nYet, actual runtime performance is\n different by 357x slower for the ascending version instead\n of descending.\n\n\nWhy and how do I fix it?\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Wed, 08 Feb 2012 16:01:44 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index scan forward vs backward = speed difference of 357X slower!"
},
{
"msg_contents": "what's the size of the index? is it too big to fit in shared_buffers? \nmaybe the firt 15 rows by asc order are in buffer but the ones of desc \norder are not, while your disk IO is very slow?\nbtw, your mem configuration of work_men is very strange.\n\n于 2012/2/8 0:49, Kevin Traster 写道:\n>\n> PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) \n> 4.1.2 20080704 (Red Hat 4.1.2-51), 64-bit\n>\n> Dedicated DB server\n>\n> 4GB ram\n>\n> Shared_Buffers = 1 GB\n>\n> Effective_cache_size = 3GB\n>\n> Work_mem = 32GB\n>\n> Analyze done\n>\n> Queries ran multiple times, same differences/results\n>\n> Default Statistics = 1000\n>\n>\n> Query (5366ms) :\n>\n> explain analyze select initcap (fullname), \n> initcap(issuer),upper(rsymbol), initcap(industry), \n> activity,to_char(shareschange,'FM9,999,999,999,999,999'),sharespchange \n> ||+ E'\\%' from changes where activity in (4,5) and mfiled >= (select \n> max(mfiled) from changes) order by shareschange asc limit 15\n>\n>\n> Slow Ascending explain Analyze:\n>\n> http://explain.depesz.com/s/zFz\n>\n>\n>\n> Query (15ms) :\n>\n> explain analyze select initcap (fullname), \n> initcap(issuer),upper(rsymbol), initcap(industry), \n> activity,to_char(shareschange,'FM9,999,999,999,999,999'),sharespchange \n> ||+ E'\\%' from changes where activity in (4,5) and mfiled >= (select \n> max(mfiled) from changes) order by shareschange desc limit 15\n>\n>\n> Fast descending explain analyze:\n>\n> http://explain.depesz.com/s/OP7\n>\n>\n>\n> The index: changes_shareschange is a btree index created with default \n> ascending order\n>\n>\n> The query plan and estimates are exactly the same, except desc has \n> index scan backwards instead of index scan for changes_shareschange.\n>\n>\n> Yet, actual runtime performance is different by 357x slower for the \n> ascending version instead of descending.\n>\n>\n> Why and how do I fix it?\n>\n>\n>\n>\n\n\n",
"msg_date": "Wed, 08 Feb 2012 22:36:30 +0800",
"msg_from": "Rural Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index scan forward vs backward = speed difference of 357X slower!"
},
{
"msg_contents": "Kevin Traster <[email protected]> writes:\n> The query plan and estimates are exactly the same, except desc has index\n> scan backwards instead of index scan for changes_shareschange.\n> Yet, actual runtime performance is different by 357x slower for the\n> ascending version instead of descending.\n\nApparently, there are some rows passing the filter condition that are\nclose to the end of the index, but none that are close to the start.\nSo it takes a lot longer to find the first 15 matches in one case than\nthe other. You haven't shown us the index definition, but I gather from\nthe fact that the scan condition is just a Filter (not an Index Cond)\nthat the index itself doesn't offer any clue as to whether a given row\nmeets those conditions. So this plan is going to be doing a lot of\nrandom-access heap probes until it finds a match.\n\n> Why and how do I fix it?\n\nProbably, you need an index better suited to the query condition.\nIf you have one and the problem is that the planner's not choosing it,\nthen this is going to take more information to resolve.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Feb 2012 14:31:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index scan forward vs backward = speed difference of 357X slower!"
},
{
"msg_contents": "Typo: Work_mem = 32 MB\n\nThe definition for both column and index:\n shareschange | numeric |\n\"changes_shareschange\" btree (shareschange)\n\nIndex created using: CREATE INDEX changes_shareschange ON changes(shareschange);\n\nThe entire table is created nightly (and analyzed afterwords), and\nused only for reporting - there no updates/deletes, so there shouldn't\nbe any dead rows in the table.\nLikewise, there is no nulls in the column.\n\nPlease elaborate on:\n\n>You haven't shown us the index definition, but I gather from\n> the fact that the scan condition is just a Filter (not an Index Cond)\n> that the index itself doesn't offer any clue as to whether a given row\n> meets those conditions\n\nAre you saying it is the retrieval of the physically random located 15\nrows to meet the ascending condition that causes the 5 sec difference?\nThe table is not-clustered, so it is \"random\" for descending also.\n\nThe condition is shareschange ascending, I have an index for that\ncondition and the planner is using it.\n\nWhat else can I look at?\n\n\n\nOn Wed, Feb 8, 2012 at 11:31 AM, Tom Lane <[email protected]> wrote:\n> Kevin Traster <[email protected]> writes:\n>> The query plan and estimates are exactly the same, except desc has index\n>> scan backwards instead of index scan for changes_shareschange.\n>> Yet, actual runtime performance is different by 357x slower for the\n>> ascending version instead of descending.\n>\n> Apparently, there are some rows passing the filter condition that are\n> close to the end of the index, but none that are close to the start.\n> So it takes a lot longer to find the first 15 matches in one case than\n> the other. You haven't shown us the index definition, but I gather from\n> the fact that the scan condition is just a Filter (not an Index Cond)\n> that the index itself doesn't offer any clue as to whether a given row\n> meets those conditions. So this plan is going to be doing a lot of\n> random-access heap probes until it finds a match.\n>\n>> Why and how do I fix it?\n>\n> Probably, you need an index better suited to the query condition.\n> If you have one and the problem is that the planner's not choosing it,\n> then this is going to take more information to resolve.\n>\n> regards, tom lane\n",
"msg_date": "Wed, 8 Feb 2012 11:58:57 -0800",
"msg_from": "Kevin Traster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index scan forward vs backward = speed difference of 357X slower!"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 1:58 PM, Kevin Traster\n<[email protected]> wrote:\n> Typo: Work_mem = 32 MB\n>\n> The definition for both column and index:\n> shareschange | numeric |\n> \"changes_shareschange\" btree (shareschange)\n>\n> Index created using: CREATE INDEX changes_shareschange ON changes(shareschange);\n>\n> The entire table is created nightly (and analyzed afterwords), and\n> used only for reporting - there no updates/deletes, so there shouldn't\n> be any dead rows in the table.\n> Likewise, there is no nulls in the column.\n>\n> Please elaborate on:\n>\n>>You haven't shown us the index definition, but I gather from\n>> the fact that the scan condition is just a Filter (not an Index Cond)\n>> that the index itself doesn't offer any clue as to whether a given row\n>> meets those conditions\n>\n> Are you saying it is the retrieval of the physically random located 15\n> rows to meet the ascending condition that causes the 5 sec difference?\n> The table is not-clustered, so it is \"random\" for descending also.\n>\n> The condition is shareschange ascending, I have an index for that\n> condition and the planner is using it.\n\nThis is not a problem with dead rows, but the index is not really\nsatisfying your query and the database has to look through an\nindeterminate amount of rows until the 'limit 15' is satisfied. Yeah,\nbackwards scans are slower, especially for disk bound scans but you\nalso have to consider how many filter misses your have. The smoking\ngun is here:\n\n\"Index Scan Backward using changes_shareschange on changes\n(cost=0.00..925150.26 rows=181997 width=98) (actual time=3.161..15.843\nrows=15 loops=1)\nFilter: ((activity = ANY ('{4,5}'::integer[])) AND (mfiled >= $1))\"\n\nWhen you see Filter: xyz, xyz is what each record has to be compared\nagainst after the index pointed you to an area(s) in the heap. It's\npure luck going forwards or backwards that determines how many records\nyou have to look through to get 15 good ones as defined by satisfying\nthe filter. To prove that one way or the other you can convert your\nwhere to a boolean returning (and bump the limit appropriately)\nexpression to see how many records get filtered out.\n\nmerlin\n",
"msg_date": "Wed, 8 Feb 2012 17:27:00 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index scan forward vs backward = speed difference of 357X slower!"
},
{
"msg_contents": "> This is not a problem with dead rows, but the index is not really\n> satisfying your query and the database has to look through an\n> indeterminate amount of rows until the 'limit 15' is satisfied. Yeah,\n> backwards scans are slower, especially for disk bound scans but you\n> also have to consider how many filter misses your have. The smoking\n> gun is here:\n>\n> \"Index Scan Backward using changes_shareschange on changes\n> (cost=0.00..925150.26 rows=181997 width=98) (actual time=3.161..15.843\n> rows=15 loops=1)\n> Filter: ((activity = ANY ('{4,5}'::integer[])) AND (mfiled >= $1))\"\n>\n> When you see Filter: xyz, xyz is what each record has to be compared\n> against after the index pointed you to an area(s) in the heap. It's\n> pure luck going forwards or backwards that determines how many records\n> you have to look through to get 15 good ones as defined by satisfying\n> the filter. To prove that one way or the other you can convert your\n> where to a boolean returning (and bump the limit appropriately)\n> expression to see how many records get filtered out.\n>\n> merlin\n\nI have indexes also on activity and mfiled (both btree) - wouldn't the\ndatabase use them? - Kevin\n",
"msg_date": "Wed, 8 Feb 2012 15:31:50 -0800",
"msg_from": "Kevin Traster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index scan forward vs backward = speed difference of 357X slower!"
},
{
"msg_contents": "Kevin Traster <[email protected]> wrote:\n \n> I have indexes also on activity and mfiled (both btree) - wouldn't\n> the database use them? - Kevin\n \nIt will use them if they are part of the plan which had the lowest\ncost when it compared the costs of all possible plans.\n \nYou haven't really shown us the schema, so there's more guesswork\ninvolved in trying to help you than there could be. This page might\nbe worth reviewing:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nIn particular, if there are indexes that aren't being used which you\nthink should be, there is a good chance that either there is a type\nmismatch or your costing factors may need adjustment.\n \n-Kevin\n",
"msg_date": "Wed, 08 Feb 2012 17:41:06 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index scan forward vs backward = speed difference of 357X slower!"
}
] |
[
{
"msg_contents": "Per the thread from last month, I've updated the default\nrandom_page_cost on Heroku Postgres to reduce the expected cost of a\nrandom_page on all new databases.\n\nThanks to everyone who helped come to this conclusion!\n\nPeter\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Tue, 7 Feb 2012 16:59:58 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On 2/7/12 4:59 PM, Peter van Hardenberg wrote:\n> Per the thread from last month, I've updated the default\n> random_page_cost on Heroku Postgres to reduce the expected cost of a\n> random_page on all new databases.\n\nThis is because Heroku uses AWS storage, which has fast seeks but poor\nthroughput compared to internal disk on a standard system, BTW.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Wed, 08 Feb 2012 16:50:46 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 5:50 PM, Josh Berkus <[email protected]> wrote:\n> On 2/7/12 4:59 PM, Peter van Hardenberg wrote:\n>> Per the thread from last month, I've updated the default\n>> random_page_cost on Heroku Postgres to reduce the expected cost of a\n>> random_page on all new databases.\n>\n> This is because Heroku uses AWS storage, which has fast seeks but poor\n> throughput compared to internal disk on a standard system, BTW.\n\nAlso judging by the other thread, it might be something to stop closer\nto 1.2 to 1.4 or something.\n",
"msg_date": "Wed, 8 Feb 2012 18:39:14 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "Having read the thread, I don't really see how I could study what a\nmore principled value would be.\n\nThat said, I have access to a very large fleet in which to can collect\ndata so I'm all ears for suggestions about how to measure and would\ngladly share the results with the list.\n\nPeter\n\nOn Wed, Feb 8, 2012 at 5:39 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Feb 8, 2012 at 5:50 PM, Josh Berkus <[email protected]> wrote:\n>> On 2/7/12 4:59 PM, Peter van Hardenberg wrote:\n>>> Per the thread from last month, I've updated the default\n>>> random_page_cost on Heroku Postgres to reduce the expected cost of a\n>>> random_page on all new databases.\n>>\n>> This is because Heroku uses AWS storage, which has fast seeks but poor\n>> throughput compared to internal disk on a standard system, BTW.\n>\n> Also judging by the other thread, it might be something to stop closer\n> to 1.2 to 1.4 or something.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Wed, 8 Feb 2012 17:45:50 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On 08/02/12 21:15, Peter van Hardenberg wrote:\n> Having read the thread, I don't really see how I could study what a\n> more principled value would be.\n>\n> That said, I have access to a very large fleet in which to can collect\n> data so I'm all ears for suggestions about how to measure and would\n> gladly share the results with the list.\n>\n> Peter\n>\n> On Wed, Feb 8, 2012 at 5:39 PM, Scott Marlowe<[email protected]> wrote:\n>> On Wed, Feb 8, 2012 at 5:50 PM, Josh Berkus<[email protected]> wrote:\n>>> On 2/7/12 4:59 PM, Peter van Hardenberg wrote:\n>>>> Per the thread from last month, I've updated the default\n>>>> random_page_cost on Heroku Postgres to reduce the expected cost of a\n>>>> random_page on all new databases.\n>>> This is because Heroku uses AWS storage, which has fast seeks but poor\n>>> throughput compared to internal disk on a standard system, BTW.\n>> Also judging by the other thread, it might be something to stop closer\n>> to 1.2 to 1.4 or something.\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\nYou can execute several queries with the three different values provided \nby Scott and Josh.\n- SET random_page_cost = 2.0\nFirst execution of the queries with EXPLAIN ANALYZE\n- SET random_page_cost = 1.4\nSecond execution of the queries with EXPLAIN ANALYZE\n- SET random_page_cost = 1.2\nSecond execution of the queries with EXPLAIN ANALYZE\n\nAnd then, you can compare the pattern behind these queries executions\nRegards,\n\n-- \nMarcos Luis Ort�z Valmaseda\n Sr. Software Engineer (UCI)\n http://marcosluis2186.posterous.com\n http://www.linkedin.com/in/marcosluis2186\n Twitter: @marcosluis2186\n\n\n\n\nFin a la injusticia, LIBERTAD AHORA A NUESTROS CINCO COMPATRIOTAS QUE SE ENCUENTRAN INJUSTAMENTE EN PRISIONES DE LOS EEUU!\nhttp://www.antiterroristas.cu\nhttp://justiciaparaloscinco.wordpress.com\n",
"msg_date": "Wed, 08 Feb 2012 21:35:19 -0430",
"msg_from": "Marcos Ortiz Valmaseda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <[email protected]> wrote:\n> Having read the thread, I don't really see how I could study what a\n> more principled value would be.\n\nAgreed. Just pointing out more research needs to be done.\n\n> That said, I have access to a very large fleet in which to can collect\n> data so I'm all ears for suggestions about how to measure and would\n> gladly share the results with the list.\n\nI wonder if some kind of script that grabbed random queries and ran\nthem with explain analyze and various random_page_cost to see when\nthey switched and which plans are faster would work?\n",
"msg_date": "Wed, 8 Feb 2012 19:28:12 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <[email protected]> wrote:\n>> That said, I have access to a very large fleet in which to can collect\n>> data so I'm all ears for suggestions about how to measure and would\n>> gladly share the results with the list.\n>\n> I wonder if some kind of script that grabbed random queries and ran\n> them with explain analyze and various random_page_cost to see when\n> they switched and which plans are faster would work?\n\nWe aren't exactly in a position where we can adjust random_page_cost\non our users' databases arbitrarily to see what breaks. That would\nbe... irresponsible of us.\n\nHow would one design a meta-analyzer which we could run across many\ndatabases and collect data? Could we perhaps collect useful\ninformation from pg_stat_user_indexes, for example?\n\n-p\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Wed, 8 Feb 2012 18:47:50 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg <[email protected]> wrote:\n> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <[email protected]> wrote:\n>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <[email protected]> wrote:\n>>> That said, I have access to a very large fleet in which to can collect\n>>> data so I'm all ears for suggestions about how to measure and would\n>>> gladly share the results with the list.\n>>\n>> I wonder if some kind of script that grabbed random queries and ran\n>> them with explain analyze and various random_page_cost to see when\n>> they switched and which plans are faster would work?\n>\n> We aren't exactly in a position where we can adjust random_page_cost\n> on our users' databases arbitrarily to see what breaks. That would\n> be... irresponsible of us.\n>\n\nOh, of course we could do this on the session, but executing\npotentially expensive queries would still be unneighborly.\n\nPerhaps another way to think of this problem would be that we want to\nfind queries where the cost estimate is inaccurate.\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Wed, 8 Feb 2012 18:54:10 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 7:54 PM, Peter van Hardenberg <[email protected]> wrote:\n> On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg <[email protected]> wrote:\n>> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <[email protected]> wrote:\n>>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <[email protected]> wrote:\n>>>> That said, I have access to a very large fleet in which to can collect\n>>>> data so I'm all ears for suggestions about how to measure and would\n>>>> gladly share the results with the list.\n>>>\n>>> I wonder if some kind of script that grabbed random queries and ran\n>>> them with explain analyze and various random_page_cost to see when\n>>> they switched and which plans are faster would work?\n>>\n>> We aren't exactly in a position where we can adjust random_page_cost\n>> on our users' databases arbitrarily to see what breaks. That would\n>> be... irresponsible of us.\n>>\n>\n> Oh, of course we could do this on the session, but executing\n> potentially expensive queries would still be unneighborly.\n>\n> Perhaps another way to think of this problem would be that we want to\n> find queries where the cost estimate is inaccurate.\n\nYeah, have a script the user runs for you heroku guys in their spare\ntime to see what queries are using the most time and then to jangle\nthe random_page_cost while running them to get an idea what's faster\nand why.\n",
"msg_date": "Wed, 8 Feb 2012 21:38:34 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\n> You can execute several queries with the three different values provided \n> by Scott and Josh.\n> - SET random_page_cost = 2.0\n> First execution of the queries with EXPLAIN ANALYZE\n> - SET random_page_cost = 1.4\n> Second execution of the queries with EXPLAIN ANALYZE\n> - SET random_page_cost = 1.2\n> Second execution of the queries with EXPLAIN ANALYZE\n\nWell, such a tool would ideally be smarter than that, such that \nyou would run EXPLAIN and compare to the previous plan and \nonly run EXPLAIN ANALYZE if the plan changed. One could even \ndecrement rpc slowly and find out at one points it changes, \nwhich would be more interesting than testing arbitrary numbers.\nWould lead to some really sweet graphs as well. :)\n\n- -- \nGreg Sabino Mullane [email protected]\nEnd Point Corporation http://www.endpoint.com/\nPGP Key: 0x14964AC8 201202082338\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAk8zTewACgkQvJuQZxSWSsiprACfTlYKiC4SS1UnERU+1N/2EGhJ\ns9AAoIXLJk88hoNHEkWKhUTqikDBtC/B\n=S65l\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Thu, 9 Feb 2012 04:39:50 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On 09/02/12 00:09, Greg Sabino Mullane wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: RIPEMD160\n>\n>\n>> You can execute several queries with the three different values provided\n>> by Scott and Josh.\n>> - SET random_page_cost = 2.0\n>> First execution of the queries with EXPLAIN ANALYZE\n>> - SET random_page_cost = 1.4\n>> Second execution of the queries with EXPLAIN ANALYZE\n>> - SET random_page_cost = 1.2\n>> Second execution of the queries with EXPLAIN ANALYZE\n> Well, such a tool would ideally be smarter than that, such that\n> you would run EXPLAIN and compare to the previous plan and\n> only run EXPLAIN ANALYZE if the plan changed. One could even\n> decrement rpc slowly and find out at one points it changes,\n> which would be more interesting than testing arbitrary numbers.\n> Would lead to some really sweet graphs as well. :)\n>\nWell, the MyYearBook.com�s guys built something seemed called Posuta, I \ndon�t know is this project is alive, but we can ask to them \n([email protected]).\n\nhttp://area51.myyearbook.com\nPosuta can be a starting point for it. It uses Ruby and Clojure for core \nfunctionalities, jQuery/Flot for graphics,\n\n-- \nMarcos Luis Ort�z Valmaseda\n Sr. Software Engineer (UCI)\n http://marcosluis2186.posterous.com\n http://www.linkedin.com/in/marcosluis2186\n Twitter: @marcosluis2186\n\n\n\n\nFin a la injusticia, LIBERTAD AHORA A NUESTROS CINCO COMPATRIOTAS QUE SE ENCUENTRAN INJUSTAMENTE EN PRISIONES DE LOS EEUU!\nhttp://www.antiterroristas.cu\nhttp://justiciaparaloscinco.wordpress.com\n",
"msg_date": "Thu, 09 Feb 2012 00:41:43 -0430",
"msg_from": "Marcos Ortiz Valmaseda <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <[email protected]> wrote:\n>> Having read the thread, I don't really see how I could study what a\n>> more principled value would be.\n>\n> Agreed. Just pointing out more research needs to be done.\n>\n>> That said, I have access to a very large fleet in which to can collect\n>> data so I'm all ears for suggestions about how to measure and would\n>> gladly share the results with the list.\n>\n> I wonder if some kind of script that grabbed random queries and ran\n> them with explain analyze and various random_page_cost to see when\n> they switched and which plans are faster would work?\n\nBut if you grab a random query and execute it repeatedly, you\ndrastically change the caching.\n\nResults from any execution after the first one are unlikely to give\nyou results which are meaningful to the actual production situation.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 9 Feb 2012 07:32:19 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "Hmm, perhaps we could usefully aggregate auto_explain output.\n\nOn Thu, Feb 9, 2012 at 7:32 AM, Jeff Janes <[email protected]> wrote:\n> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <[email protected]> wrote:\n>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <[email protected]> wrote:\n>>> Having read the thread, I don't really see how I could study what a\n>>> more principled value would be.\n>>\n>> Agreed. Just pointing out more research needs to be done.\n>>\n>>> That said, I have access to a very large fleet in which to can collect\n>>> data so I'm all ears for suggestions about how to measure and would\n>>> gladly share the results with the list.\n>>\n>> I wonder if some kind of script that grabbed random queries and ran\n>> them with explain analyze and various random_page_cost to see when\n>> they switched and which plans are faster would work?\n>\n> But if you grab a random query and execute it repeatedly, you\n> drastically change the caching.\n>\n> Results from any execution after the first one are unlikely to give\n> you results which are meaningful to the actual production situation.\n>\n> Cheers,\n>\n> Jeff\n\n\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Thu, 9 Feb 2012 14:41:34 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Thu, Feb 9, 2012 at 3:41 PM, Peter van Hardenberg <[email protected]> wrote:\n> Hmm, perhaps we could usefully aggregate auto_explain output.\n\nHow about something where you run a site at random_page cost of x,\nthen y, then z and you do some aggregating of query times in each. A\nscatter plot should tell you lots.\n",
"msg_date": "Thu, 9 Feb 2012 18:29:10 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On 2/9/12 2:41 PM, Peter van Hardenberg wrote:\n> Hmm, perhaps we could usefully aggregate auto_explain output.\n\nThe other option is to take a statistical approach. After all, what you\nwant to do is optimize average response times across all your user's\ndatabases, not optimize for a few specific queries.\n\nSo one thought would be to add in pg_stat_statements to your platform\n... something I'd like to see Heroku do anyway. Then you can sample\nthis across dozens (or hundreds) of user databases, each with RPC set to\na slightly different level, and aggregate it into a heat map.\n\nThat's the way I'd do it, anyway.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 10 Feb 2012 11:32:50 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "Le vendredi 10 février 2012 20:32:50, Josh Berkus a écrit :\n> On 2/9/12 2:41 PM, Peter van Hardenberg wrote:\n> > Hmm, perhaps we could usefully aggregate auto_explain output.\n> \n> The other option is to take a statistical approach. After all, what you\n> want to do is optimize average response times across all your user's\n> databases, not optimize for a few specific queries.\n> \n> So one thought would be to add in pg_stat_statements to your platform\n> ... something I'd like to see Heroku do anyway. Then you can sample\n> this across dozens (or hundreds) of user databases, each with RPC set to\n> a slightly different level, and aggregate it into a heat map.\n> \n> That's the way I'd do it, anyway.\n\nin such set up, I sometime build a ratio between transactions processed and \nCPU usage, many indicators exists, inside and outside DB, that are useful to \ncombine and use just as a 'this is normal behavior'. It turns to be easy in \nthe long term to see if things go better or worse.\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Fri, 10 Feb 2012 20:48:07 +0100",
"msg_from": "=?utf-8?q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Fri, Feb 10, 2012 at 11:32 AM, Josh Berkus <[email protected]> wrote:\n> On 2/9/12 2:41 PM, Peter van Hardenberg wrote:\n> So one thought would be to add in pg_stat_statements to your platform\n> ... something I'd like to see Heroku do anyway. Then you can sample\n> this across dozens (or hundreds) of user databases, each with RPC set to\n> a slightly different level, and aggregate it into a heat map.\n>\n\nWe've funded some work by Peter Geoghegan to make pg_stat_statements\nmore useful, but the patch is currently sitting in the commitfest in\nneed of a champion. I'd very much like to see it landed.\n\nBetween that work, 9.2, and Dimitri's extension whitelist module,\npg_stat_statements should be usefully installable by everyone. We'd\nseriously consider installing it by default, but that would make us\nNot Vanilla, which is something we avoid very diligently.\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Fri, 10 Feb 2012 13:30:58 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "Peter,\n\n> We've funded some work by Peter Geoghegan to make pg_stat_statements\n> more useful, but the patch is currently sitting in the commitfest in\n> need of a champion. I'd very much like to see it landed.\n\nOk, let me review it then ...\n\n> Between that work, 9.2, and Dimitri's extension whitelist module,\n> pg_stat_statements should be usefully installable by everyone. We'd\n> seriously consider installing it by default, but that would make us\n> Not Vanilla, which is something we avoid very diligently.\n\nBummer. You can get why this would be useful for autotuning, though, yes?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Fri, 10 Feb 2012 17:40:32 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Fri, Feb 10, 2012 at 5:40 PM, Josh Berkus <[email protected]> wrote:\n> Peter,\n>\n>> We've funded some work by Peter Geoghegan to make pg_stat_statements\n>> more useful, but the patch is currently sitting in the commitfest in\n>> need of a champion. I'd very much like to see it landed.\n>\n> Ok, let me review it then ...\n>\n>> Between that work, 9.2, and Dimitri's extension whitelist module,\n>> pg_stat_statements should be usefully installable by everyone. We'd\n>> seriously consider installing it by default, but that would make us\n>> Not Vanilla, which is something we avoid very diligently.\n>\n> Bummer. You can get why this would be useful for autotuning, though, yes?\n>\n\nAbsolutely! Everything we can do to better monitor users' database\nperformance without interfering (unduly) with that performance or\naffecting their dump/restore cycle is very exciting for us.\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Fri, 10 Feb 2012 18:13:07 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Thu, Feb 9, 2012 at 2:41 PM, Peter van Hardenberg <[email protected]> wrote:\n\n> Hmm, perhaps we could usefully aggregate auto_explain output.\n\nBy the time you realize the query is running long, it is too late to\nstart analyzing it. And without analyzing it, you probably can't get\nthe information you need.\n\nMaybe with the timing = off feature,it would might make sense to just\npreemptively analyze everything.\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 11 Feb 2012 08:26:35 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Sat, Feb 11, 2012 at 8:26 AM, Jeff Janes <[email protected]> wrote:\n> By the time you realize the query is running long, it is too late to\n> start analyzing it. And without analyzing it, you probably can't get\n> the information you need.\n>\n> Maybe with the timing = off feature,it would might make sense to just\n> preemptively analyze everything.\n>\n\nAs before, I'm reluctant to introduce structural performance costs\nacross the fleet, though I suppose giving users the option to opt out\nmight ameliorate that. I don't think I have time right now to\nseriously explore these ideas, but I'll keep it in the back of my mind\nfor a while.\n\nIf anyone is interested in seriously exploring the idea of researching\nquery planner accuracy across an enormous fleet of production\ndatabases with the goal of feeding that information back to the\nproject please feel free to contact me off-list.\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Sat, 11 Feb 2012 11:52:28 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Thu, Feb 9, 2012 at 5:29 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Feb 9, 2012 at 3:41 PM, Peter van Hardenberg <[email protected]> wrote:\n>> Hmm, perhaps we could usefully aggregate auto_explain output.\n>\n> How about something where you run a site at random_page cost of x,\n> then y, then z and you do some aggregating of query times in each. A\n> scatter plot should tell you lots.\n\nIs there an easy and unintrusive way to get such a metric as the\naggregated query times? And to normalize it for how much work happens\nto have been doing on at the time?\n\nWithout a good way to do normalization, you could just do lots of\ntests with randomized settings, to average out any patterns in\nworkload, but that means you need an awful lot of tests to have enough\ndata to rely on randomization. But it would be desirable to do this\nanyway, in case the normalization isn't as effective as we think.\n\nBut how long should each setting be tested for? If a different\nsetting causes certain index to start being used, then performance\nwould go down until those indexes get cached and then increase from\nthere. But how long is long enough to allow this to happen?\n\nThanks,\n\nJeff\n",
"msg_date": "Sun, 12 Feb 2012 11:49:33 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "\n> Is there an easy and unintrusive way to get such a metric as the\n> aggregated query times? And to normalize it for how much work\n> happens\n> to have been doing on at the time?\n\nYou'd pretty much need to do large-scale log harvesting combined with samples of query concurrency taken several times per minute. Even that won't \"normalize\" things the way you want, though, since all queries are not equal in terms of the amount of data they hit.\n\nGiven that, I'd personally take a statistical approach. Sample query execution times across a large population of servers and over a moderate amount of time. Then apply common tests of statistical significance. This is why Heroku has the opportunity to do this in a way that smaller sites could not; they have enough servers to (probably) cancel out any random activity effects.\n\n--Josh Berkus\n",
"msg_date": "Sun, 12 Feb 2012 14:01:59 -0600 (CST)",
"msg_from": "Joshua Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On Sun, Feb 12, 2012 at 12:01 PM, Joshua Berkus <[email protected]> wrote:\n> You'd pretty much need to do large-scale log harvesting combined with samples of query concurrency taken several times per minute. Even that won't \"normalize\" things the way you want, though, since all queries are not equal in terms of the amount of data they hit.\n>\n> Given that, I'd personally take a statistical approach. Sample query execution times across a large population of servers and over a moderate amount of time. Then apply common tests of statistical significance. This is why Heroku has the opportunity to do this in a way that smaller sites could not; they have enough servers to (probably) cancel out any random activity effects.\n>\n\nYes, I think if we could normalize, anonymize, and randomly EXPLAIN\nANALYZE 0.1% of all queries that run on our platform we could look for\nbad choices by the planner. I think the potential here could be quite\nremarkable.\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Sun, 12 Feb 2012 14:28:27 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
},
{
"msg_contents": "On 12 February 2012 22:28, Peter van Hardenberg <[email protected]> wrote:\n> Yes, I think if we could normalize, anonymize, and randomly EXPLAIN\n> ANALYZE 0.1% of all queries that run on our platform we could look for\n> bad choices by the planner. I think the potential here could be quite\n> remarkable.\n\nTom Lane suggested that plans, rather than the query tree, might be a\nmore appropriate thing for the new pg_stat_statements to be hashing,\nas plans should be directly blamed for execution costs. While I don't\nthink that that's appropriate for normalisation (consider that there'd\noften be duplicate pg_stat_statements entries per query), it does seem\nlike an idea that could be worked into a future revision, to detect\nproblematic plans. Maybe it could be usefully combined with\nauto_explain or something like that (in a revision of auto_explain\nthat doesn't necessarily explain every plan, and therefore doesn't pay\nthe considerable overhead of that instrumentation across the board).\n\n-- \nPeter Geoghegan http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training and Services\n",
"msg_date": "Sun, 12 Feb 2012 23:37:14 +0000",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: random_page_cost = 2.0 on Heroku Postgres"
}
] |
[
{
"msg_contents": "Hi there,\n\nWe've got a pretty large table that sees millions of new rows a day, and\nwe're trying our best to optimize queries against it. We're hoping to find\nsome guidance on this list.\n\nThankfully, the types of queries that we perform against this table are\npretty constrained. We never update rows and we never join against other\ntables. The table essentially looks like this:\n\n| id | group_id | created_at | everything elseŠ\n\nWhere `id' is the primary key, auto-incrementing, `group_id' is the\nforeign key that we always scope against, and `created_at' is the\ninsertion time. We have indices against the primary key and the group_id.\nOur queries essentially fall into the following cases:\n\n * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;\n * Š WHERE group_id = ? AND id > ? ORDER BY created_at DESC;\n * Š WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT 20;\n * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET ?;\n\nIn human words, we're looking for:\n\n * The most recent (20) rows.\n * The most recent rows after a given `id'.\n * Twenty rows before a given `id'.\n * Pages of twenty rows.\n\nOriginally, this table was part of our primary database, but recently we\nsaw queries take upwards of thirty seconds or more to complete. Since\nwe're serving web requests, that's basically unacceptable, and caused a\nlot of requests to backup. Our interim solution has been to simply carve\nout a new database that hosts only this table, and that has worked to some\ndegree. We aren't seeing thirty seconds plus database response times\nanymore, but some queries still take many seconds and the cost of spinning\nup a new master-slave configuration hasn't been cheap.\n\nIn the meantime, we're hoping to investigate other ways to optimize this\ntable and the queries against it. Heroku's data team has suggested balling\nup these rows into arrays, where a single row would represent a group_id,\nand the data would occupy a single column as an array. We don't have any\nexperience with this and were wondering if anyone here has tried it.\n\nAnd finally, we're also trying out alternative stores, since it seems like\nthis data and its retrieval could be well suited to document-oriented\nbackends. Redis and DynamoDB are currently the best contenders.\n\nThanks in advance for any help,\n\nRegards,\n\nDave Yeu & Neil Sarkar\nGroupMe\n\n\n",
"msg_date": "Wed, 8 Feb 2012 18:03:04 +0000",
"msg_from": "David Yeu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance on large, append-only tables"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 12:03 PM, David Yeu <[email protected]> wrote:\n> Hi there,\n>\n> We've got a pretty large table that sees millions of new rows a day, and\n> we're trying our best to optimize queries against it. We're hoping to find\n> some guidance on this list.\n>\n> Thankfully, the types of queries that we perform against this table are\n> pretty constrained. We never update rows and we never join against other\n> tables. The table essentially looks like this:\n>\n> | id | group_id | created_at | everything elseŠ\n>\n> Where `id' is the primary key, auto-incrementing, `group_id' is the\n> foreign key that we always scope against, and `created_at' is the\n> insertion time. We have indices against the primary key and the group_id.\n> Our queries essentially fall into the following cases:\n>\n> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;\n> * Š WHERE group_id = ? AND id > ? ORDER BY created_at DESC;\n> * Š WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT 20;\n> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET ?;\n>\n> In human words, we're looking for:\n>\n> * The most recent (20) rows.\n> * The most recent rows after a given `id'.\n> * Twenty rows before a given `id'.\n> * Pages of twenty rows.\n\nYou can probably significantly optimize this. But first, can we see\nsome explain analyze for the affected queries?\n\nmerlin\n",
"msg_date": "Fri, 10 Feb 2012 09:19:10 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on large, append-only tables"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 3:03 PM, David Yeu <[email protected]> wrote:\n> Thankfully, the types of queries that we perform against this table are\n> pretty constrained. We never update rows and we never join against other\n> tables. The table essentially looks like this:\n>\n> | id | group_id | created_at | everything elseŠ\n...\n> Our queries essentially fall into the following cases:\n>\n> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;\n> * Š WHERE group_id = ? AND id > ? ORDER BY created_at DESC;\n> * Š WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT 20;\n> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET ?;\n\nI think you have something to gain from partitioning.\nYou could partition on group_id, which is akin to sharding only on a\nsingle server, and that would significantly decrease each partition's\nindex size. Since those queries' performance is highly dependent on\nindex size, and since you seem to have such a huge table, I would\nimagine such partitioning would help keep the indices performant.\n\nNow, we do need statistics. How many groups are there? Do they grow\nwith your table, or is the number of groups constant? Which values of\noffsets do you use? (offset is quite expensive)\n\nAnd of course... explain analyze.\n",
"msg_date": "Fri, 10 Feb 2012 12:33:33 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on large, append-only tables"
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 20:03, David Yeu <[email protected]> wrote:\n> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET ?;\n> * Pages of twenty rows.\n\nA good improvement for this sort of queries is the \"scalable paging\"\ntrick. Instead of increasing the OFFSET argument -- which means that\nPostgres has to scan more and more rows -- you should remember an\nindex key where the last page ended.\n\nIn other words, you get the first page using:\nWHERE group_id = ? ORDER BY created_at DESC LIMIT 20\n\nSay, this page returns created_at values between 2012-01-01 and\n2012-01-10. If the user clicks \"next page\", you run a query like this\ninstead:\nWHERE group_id = ? AND created_at>'2012-01-10' ORDER BY created_at DESC LIMIT 20\n\nThus, every \"next page\" fetch always takes a constant time. Of course\nthere's a small problem when two rows have equal times. Then, you can\nadd primary key to the sort key to disambiguate those rows:\n\nWHERE group_id = ? AND (created_at, pkey_col) > ('2012-01-10', 712)\nORDER BY created_at, pkey_col DESC LIMIT 20\n\nOf course an index on (group_id, created_at) or (group_id, created_at,\npkey_col) is necessary for these to work well\n\nRegards,\nMarti\n",
"msg_date": "Fri, 10 Feb 2012 17:34:39 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on large, append-only tables"
},
{
"msg_contents": "David Yeu <[email protected]> writes:\n> Our queries essentially fall into the following cases:\n\n> * � WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;\n> * � WHERE group_id = ? AND id > ? ORDER BY created_at DESC;\n> * � WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT 20;\n> * � WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET ?;\n\nAll of those should be extremely cheap if you've got the right indexes,\nwith the exception of the last one. Large OFFSET values are never a\ngood idea, because Postgres always has to scan and discard that many\nrows. If you need to fetch successive pages, consider using a cursor\nwith a series of FETCH commands. Another possibility, if the data is\nsufficiently constrained, is to move the limit point with each new\nquery, ie instead of OFFSET use something like\n\n\tWHERE group_id = ? AND created_at < last-previous-value\n\tORDER BY created_at DESC LIMIT 20;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Feb 2012 10:35:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on large, append-only tables "
},
{
"msg_contents": "David Yeu <[email protected]> wrote:\n \n> We have indices against the primary key and the group_id.\n> Our queries essentially fall into the following cases:\n> \n> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;\n> * Š WHERE group_id = ? AND id > ? ORDER BY created_at DESC;\n> * Š WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT\n> 20;\n> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET\n> ?;\n> \n> In human words, we're looking for:\n> \n> * The most recent (20) rows.\n> * The most recent rows after a given `id'.\n> * Twenty rows before a given `id'.\n> * Pages of twenty rows.\n \nThe first thing I would try is building an index (perhaps\nCONCURRENTLY to avoid disrupting production) on (group_id,\ncreated_at). It might also be worth creating an index on (group_id,\nid, created_at), but that's a less-sure win.\n \n> Originally, this table was part of our primary database, but\n> recently we saw queries take upwards of thirty seconds or more to\n> complete. Since we're serving web requests, that's basically\n> unacceptable, and caused a lot of requests to backup.\n \nWith only the indexes you mention, it had to be doing either\ncomplete table scans for each request, or a lot of random access to\nrows it didn't need.\n \n> Our interim solution has been to simply carve out a new database\n> that hosts only this table, and that has worked to some degree. We\n> aren't seeing thirty seconds plus database response times anymore,\n> but some queries still take many seconds and the cost of spinning\n> up a new master-slave configuration hasn't been cheap.\n \nWell, throwing hardware at something doesn't generally hurt, but\nit's not the first solution I would try, especially when the product\nyou're using has ways to tune performance.\n \n> In the meantime, we're hoping to investigate other ways to\n> optimize this table and the queries against it. Heroku's data team\n> has suggested balling up these rows into arrays, where a single\n> row would represent a group_id, and the data would occupy a single\n> column as an array.\n \nUgh. You're a long way from needing to give up on the relational\nmodel here.\n \n> And finally, we're also trying out alternative stores, since it\n> seems like this data and its retrieval could be well suited to\n> document-oriented backends. Redis and DynamoDB are currently the\n> best contenders.\n \nYour current use of PostgreSQL is more or less equivalent to driving\na car around in first gear. You might consider a tuned PostgreSQL\nas another alternative store. :-)\n \n-Kevin\n",
"msg_date": "Fri, 10 Feb 2012 09:44:02 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on large, append-only tables"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe have a 4-columned table that is also split up into a TOAST table, where the TOASTed entries are ~35KB each.\nThe table size is 10K records.\nThe table is updated at a rate of ~100 updates a minute.\n\nDuring our testing we see that the table size increases substantially. When looking at the autovacuum log, set with default configuration, it seems that it ran for around 60 seconds (see below and note that this was a 1-minute test, i.e. only 100 updates)!\n\npid:2148 tid:0 sid:4f32a8de.864 sln:2 sst:2012-02-08 18:54:54 IST [2012-02-08 19:24:06.967 IST]DEBUG: autovac_balance_cost(pid=4560 db=16385, rel=17881, cost_limit=200, cost_delay=20)\npid:2148 tid:0 sid:4f32a8de.864 sln:3 sst:2012-02-08 18:54:54 IST [2012-02-08 19:24:37.622 IST]DEBUG: autovac_balance_cost(pid=4560 db=16385, rel=17881, cost_limit=200, cost_delay=20)\npid:4560 tid:0 sid:4f32af99.11d0 sln:14 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.518 IST]DEBUG: scanned index \"pg_toast_17881_index\" to remove 1700 row versions\npid:4560 tid:0 sid:4f32af99.11d0 sln:15 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.518 IST]DETAIL: CPU 0.00s/0.00u sec elapsed 0.74 sec.\npid:4560 tid:0 sid:4f32af99.11d0 sln:16 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 IST]DEBUG: \"pg_toast_17881\": removed 1700 row versions in 494 pages\npid:4560 tid:0 sid:4f32af99.11d0 sln:17 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 IST]DETAIL: CPU 0.00s/0.00u sec elapsed 0.06 sec.\npid:4560 tid:0 sid:4f32af99.11d0 sln:18 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 IST]DEBUG: index \"pg_toast_17881_index\" now contains 169983 row versions in 473 pages\npid:4560 tid:0 sid:4f32af99.11d0 sln:19 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 IST]DETAIL: 1700 index row versions were removed.\n 4 index pages have been deleted, 0 are currently reusable.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\npid:4560 tid:0 sid:4f32af99.11d0 sln:20 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 IST]DEBUG: \"pg_toast_17881\": found 1471 removable, 169983 nonremovable row versions in 42921 pages\npid:4560 tid:0 sid:4f32af99.11d0 sln:21 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 IST]DETAIL: 0 dead row versions cannot be removed yet.\n There were 0 unused item pointers.\n 495 pages contain useful free space.\n 0 pages are entirely empty.\n CPU 0.00s/0.00u sec elapsed 66.36 sec.\npid:4560 tid:0 sid:4f32af99.11d0 sln:22 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 IST]LOG: automatic vacuum of table \"uepm.pg_toast.pg_toast_17881\": index scans: 1\n pages: 0 removed, 42921 remain\n tuples: 1471 removed, 169983 remain\n system usage: CPU 0.00s/0.00u sec elapsed 66.36 sec\n\nWhen setting a higher cost for the autovacuum, tried values of 2000, it ran for even longer: ~400 seconds!\n\n\nThe only other information that I have here is that the TOASTed data is split up into 17 sections (which makes sense considering it splits it up into 2KB sections).\n\nAnd one more thing that seems a bit strange - after a 1-minute run, we would expect to see 1700 Tuples Updated (100*17), but instead we see 1700 Tuples Inserted (and no deletes).\n\n\nAnyone have a clue on this phenomenon?\n\n\nThanks,\nOfer\n\n\n\n\n\nHi \nall,\n \nWe have a 4-columned \ntable that is also split up into a TOAST table, where the TOASTed entries are \n~35KB each. \nThe table size is \n10K records.\nThe table is updated \nat a rate of ~100 updates a minute.\n \nDuring our testing \nwe see that the table size increases substantially. 
When looking at the \nautovacuum log, set with default configuration, it seems that it ran for around \n60 seconds (see below and note that this was a 1-minute test, i.e. only 100 \nupdates)!\n \npid:2148 tid:0 \nsid:4f32a8de.864 sln:2 sst:2012-02-08 18:54:54 IST [2012-02-08 19:24:06.967 \nIST]DEBUG: autovac_balance_cost(pid=4560 db=16385, rel=17881, \ncost_limit=200, cost_delay=20)pid:2148 tid:0 sid:4f32a8de.864 sln:3 \nsst:2012-02-08 18:54:54 IST [2012-02-08 19:24:37.622 IST]DEBUG: \nautovac_balance_cost(pid=4560 db=16385, rel=17881, cost_limit=200, \ncost_delay=20)pid:4560 tid:0 sid:4f32af99.11d0 sln:14 sst:2012-02-08 \n19:23:37 IST [2012-02-08 19:24:43.518 IST]DEBUG: scanned index \n\"pg_toast_17881_index\" to remove 1700 row versionspid:4560 tid:0 \nsid:4f32af99.11d0 sln:15 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.518 \nIST]DETAIL: CPU 0.00s/0.00u sec elapsed 0.74 sec.pid:4560 tid:0 \nsid:4f32af99.11d0 sln:16 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 \nIST]DEBUG: \"pg_toast_17881\": removed 1700 row versions in 494 \npagespid:4560 tid:0 sid:4f32af99.11d0 sln:17 sst:2012-02-08 19:23:37 IST \n[2012-02-08 19:24:43.581 IST]DETAIL: CPU 0.00s/0.00u sec elapsed 0.06 \nsec.pid:4560 tid:0 sid:4f32af99.11d0 sln:18 sst:2012-02-08 19:23:37 IST \n[2012-02-08 19:24:43.581 IST]DEBUG: index \"pg_toast_17881_index\" now \ncontains 169983 row versions in 473 pagespid:4560 tid:0 sid:4f32af99.11d0 \nsln:19 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 IST]DETAIL: \n1700 index row versions were removed. 4 index pages have been deleted, \n0 are currently reusable. CPU 0.00s/0.00u sec elapsed 0.00 \nsec.pid:4560 tid:0 sid:4f32af99.11d0 sln:20 sst:2012-02-08 19:23:37 IST \n[2012-02-08 19:24:43.581 IST]DEBUG: \"pg_toast_17881\": found 1471 \nremovable, 169983 nonremovable row versions in 42921 pagespid:4560 tid:0 \nsid:4f32af99.11d0 sln:21 sst:2012-02-08 19:23:37 IST [2012-02-08 19:24:43.581 \nIST]DETAIL: 0 dead row versions cannot be removed yet. There were \n0 unused item pointers. 495 pages contain useful free space. 0 \npages are entirely empty. CPU 0.00s/0.00u sec elapsed 66.36 \nsec.pid:4560 tid:0 sid:4f32af99.11d0 sln:22 sst:2012-02-08 19:23:37 IST \n[2012-02-08 19:24:43.581 IST]LOG: automatic vacuum of table \n\"uepm.pg_toast.pg_toast_17881\": index scans: 1 pages: 0 removed, 42921 \nremain tuples: 1471 removed, 169983 remain system usage: CPU \n0.00s/0.00u sec elapsed 66.36 sec\n \nWhen setting a \nhigher cost for the autovacuum, tried values of 2000, it ran for even longer: \n~400 seconds!\n \n \nThe only other \ninformation that I have here is that the TOASTed data is split up into 17 \nsections (which makes sense considering it splits it up into 2KB \nsections).\n \nAnd one more thing \nthat seems a bit strange - after a 1-minute run, we would expect to see 1700 \nTuples Updated (100*17), but instead we see 1700 Tuples Inserted (and no \ndeletes).\n \n \nAnyone have a clue \non this phenomenon?\n \n \nThanks,\nOfer",
"msg_date": "Wed, 8 Feb 2012 21:33:59 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuuming problems on TOAST table"
},
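A hedged sketch of how to watch the growth Ofer describes, using the TOAST relation named in the log excerpt (pg_toast_17881; substitute your own toast table and main table names):

SELECT pg_size_pretty(pg_relation_size('pg_toast.pg_toast_17881')) AS toast_size;

-- or, for the main table including its TOAST and index parts:
SELECT pg_size_pretty(pg_total_relation_size('your_table')) AS total_size;

Sampling these before and after a test run quantifies the bloat independently of what the autovacuum log reports.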
{
"msg_contents": "Ofer Israeli <[email protected]> writes:\n> During our testing we see that the table size increases substantially. When looking at the autovacuum log, set with default configuration, it seems that it ran for around 60 seconds (see below and note that this was a 1-minute test, i.e. only 100 updates)!\n\nautovacuum is intended to run fairly slowly, so as to not consume too\nmuch resources. If you think it's too slow you can adjust the\nautovacuum_cost tunables.\n\n> When setting a higher cost for the autovacuum, tried values of 2000, it ran for even longer: ~400 seconds!\n\nThat's the wrong direction, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 08 Feb 2012 14:44:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming problems on TOAST table "
},
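A hedged sketch of the kind of cost tuning Tom is pointing at. The parameter names are the real autovacuum cost settings; the table name is hypothetical, and the per-table ALTER TABLE form only exists on 8.4 and later (on 8.3, as this thread goes on to show, per-table values live in the pg_autovacuum catalog instead):

-- globally, in postgresql.conf: make autovacuum less throttled
--   autovacuum_vacuum_cost_delay = 10ms   (default 20ms; lower = faster)
--   autovacuum_vacuum_cost_limit = 1000   (default -1 falls back to vacuum_cost_limit, i.e. 200)

-- per table, 8.4 and later, so only the heavily-updated table is affected:
ALTER TABLE my_hot_table SET (
    autovacuum_vacuum_cost_delay = 10,
    autovacuum_vacuum_cost_limit = 1000
);

Lowering the delay or raising the limit lets each vacuum pass finish sooner at the price of more I/O pressure; note that when several autovacuum workers run at once, the cost-balancing logic discussed later in the thread can still scale these values down.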
{
"msg_contents": "Tom Lane wrote:\n> Ofer Israeli <[email protected]> writes:\n>> During our testing we see that the table size increases\n>> substantially. When looking at the autovacuum log, set with default\n>> configuration, it seems that it ran for around 60 seconds (see below\n>> and note that this was a 1-minute test, i.e. only 100 updates)! \n> \n> autovacuum is intended to run fairly slowly, so as to not consume too\n> much resources. If you think it's too slow you can adjust the\n> autovacuum_cost tunables. \n> \n>> When setting a higher cost for the autovacuum, tried values of 2000,\n>> it ran for even longer: ~400 seconds! \n> \n> That's the wrong direction, no?\n\nThe settings we used were not in the postgresql.conf file, but rather an update of the pg_autovacuum table where we set the vac_cost_limit to 2000. The reason for this being that we wanted this definition only for the big (TOASTed) table I was referring to.\n\nThe logged settings in the ~400 second case were:\nautovac_balance_cost(pid=6224 db=16385, rel=17881, cost_limit=10, cost_delay=1)\n\nWhich comes as quite a surprise as it seems that the cost_limit is not set or am I missing something?\n\n\nThanks,\nOfer\n\n",
"msg_date": "Wed, 8 Feb 2012 21:59:52 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuuming problems on TOAST table "
},
{
"msg_contents": "On Wed, Feb 8, 2012 at 2:59 PM, Ofer Israeli <[email protected]> wrote:\n> The settings we used were not in the postgresql.conf file, but rather an update of the pg_autovacuum table where we set the vac_cost_limit to 2000. The reason for this being that we wanted this definition only for the big (TOASTed) table I was referring to.\n>\n> The logged settings in the ~400 second case were:\n> autovac_balance_cost(pid=6224 db=16385, rel=17881, cost_limit=10, cost_delay=1)\n>\n> Which comes as quite a surprise as it seems that the cost_limit is not set or am I missing something?\n\nThat doesn't look right, but without step-by-step directions it will\nbe hard for anyone to reproduce this. Also, what version are you\ntesting on? pg_autovacuum was removed in PostgreSQL 8.4, so you must\nbe using PostgreSQL 8.3 or earlier.\n\nYou might at least want to make sure you're running a late enough\nminor version to have this fix:\n\nAuthor: Tom Lane <[email protected]>\nBranch: master Release: REL9_1_BR [b58c25055] 2010-11-19 22:29:44 -0500\nBranch: REL9_0_STABLE Release: REL9_0_2 [b5efc0940] 2010-11-19 22:28:25 -0500\nBranch: REL8_4_STABLE Release: REL8_4_6 [fab2af30d] 2010-11-19 22:28:30 -0500\nBranch: REL8_3_STABLE Release: REL8_3_13 [6cb9d5113] 2010-11-19 22:28:35 -0500\n\n Fix leakage of cost_limit when multiple autovacuum workers are active.\n\n When using default autovacuum_vac_cost_limit, autovac_balance_cost relied\n on VacuumCostLimit to contain the correct global value ... but after the\n first time through in a particular worker process, it didn't, because we'd\n trashed it in previous iterations. Depending on the state of other autovac\n workers, this could result in a steady reduction of the effective\n cost_limit setting as a particular worker processed more and more tables,\n causing it to go slower and slower. Spotted by Simon Poole (bug #5759).\n Fix by saving and restoring the GUC variables in the loop in do_autovacuum.\n\n In passing, improve a few comments.\n\n Back-patch to 8.3 ... the cost rebalancing code has been buggy since it was\n put in.\n\nAlso:\n\n> And one more thing that seems a bit strange - after a 1-minute run, we would\n> expect to see 1700 Tuples Updated (100*17), but instead we see 1700 Tuples\n> Inserted (and no deletes).\n\nI don't think TOAST ever updates chunks in place. It just inserts and\ndeletes; or at least I think that's what it does.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 29 Feb 2012 13:45:25 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuuming problems on TOAST table"
}
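A trivial sketch of how to check whether the server already carries the minor-release fix Robert cites (no assumptions beyond a working psql session):

SELECT version();
-- or just the version number:
SHOW server_version;

Per the commit banner quoted above, the fix first shipped in 8.3.13, 8.4.6 and 9.0.2, so anything at or above the corresponding minor release for your branch includes it.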
] |
[
{
"msg_contents": "Here's my query:\n\nSELECT foursquare.name, foursquare.city, COUNT(moment_id) AS popularity\nFROM foursq_categories\nJOIN foursquare USING (foursq_id)\nJOIN places USING (foursq_id)\nJOIN blocks USING (block_id)\nWHERE \"primary\"\n AND (created at time zone timezone)::date = 'yesterday'\n AND (country = 'USA' OR country = 'United States')\n AND foursq_categories.name @@ to_tsquery('Restaurant')\nGROUP BY foursq_id, foursquare.name, foursquare.city ORDER BY popularity\nDESC LIMIT 12;\n\nHere's my explain: http://explain.depesz.com/s/xoH\n\nTo my surprise, it was not the tsquery that made this slow (which is\nawesome, because I was worried about that) but rather the filter: (created\nat time zone timezone)::date = 'yesterday'\ncreated has an index (btree if it matters). timezone does not. I'm\nwondering if the solution to my problem is to create a joint index between\ncreated and timezone (and if so, if there is a particular way to do that to\nmake it work the way I want).\n\nThanks in advance.\n\n-Alessandro\n\nHere's my query:SELECT foursquare.name, foursquare.city, COUNT(moment_id) AS popularityFROM foursq_categories JOIN foursquare USING (foursq_id)\nJOIN places USING (foursq_id)JOIN blocks USING (block_id)WHERE \"primary\" AND (created at time zone timezone)::date = 'yesterday' AND (country = 'USA' OR country = 'United States')\n AND foursq_categories.name @@ to_tsquery('Restaurant') GROUP BY foursq_id, foursquare.name, foursquare.city ORDER BY popularity DESC LIMIT 12;\nHere's my explain: http://explain.depesz.com/s/xoHTo my surprise, it was not the tsquery that made this slow (which is awesome, because I was worried about that) but rather the filter: (created at time zone timezone)::date = 'yesterday'\ncreated has an index (btree if it matters). timezone does not. I'm wondering if the solution to my problem is to create a joint index between created and timezone (and if so, if there is a particular way to do that to make it work the way I want).\nThanks in advance.-Alessandro",
"msg_date": "Thu, 9 Feb 2012 10:42:15 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "timestamp with time zone"
},
{
"msg_contents": "Alessandro Gagliardi <[email protected]> writes:\n> WHERE ... (created at time zone timezone)::date = 'yesterday'\n\n> created has an index (btree if it matters). timezone does not. I'm\n> wondering if the solution to my problem is to create a joint index between\n> created and timezone (and if so, if there is a particular way to do that to\n> make it work the way I want).\n\nThe only way to make that indexable is to create an expression index on\nthe whole expression \"(created at time zone timezone)::date\". Seems\npretty special-purpose, though it might be worthwhile if you do that a\nlot.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 09 Feb 2012 14:46:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timestamp with time zone "
},
{
"msg_contents": "I tried: CREATE INDEX blocks_created_at_timezone_idx ON blocks USING btree\n((created at time zone timezone));\n\n(Actually, I originally did try one on \"(created at time zone\ntimezone)::date\" but couldn't figure out how to phrase it in a way\nPostgreSQL would accept.)\n\nAnyway, no difference: http://explain.depesz.com/s/Zre\n\nI even tried changing the filter to (created at time zone timezone) >\n 'yesterday' AND (created at time zone timezone) < 'today' to see if that\nmight make a difference. Sadly, no: http://explain.depesz.com/s/dfh\n\nHere's the definition for the offending table:\n\nCREATE TABLE blocks\n(\n block_id character(24) NOT NULL,\n user_id character(24) NOT NULL,\n created timestamp with time zone,\n locale character varying,\n shared boolean,\n private boolean,\n moment_type character varying NOT NULL,\n user_agent character varying,\n inserted timestamp without time zone NOT NULL DEFAULT now(),\n networks character varying[],\n lnglat point,\n timezone character varying,\n CONSTRAINT blocks_pkey PRIMARY KEY (block_id )\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX blocks_created_at_timezone_idx\n ON blocks\n USING btree\n (timezone(timezone::text, created) );\n\nCREATE INDEX blocks_created_idx\n ON blocks\n USING btree\n (created DESC NULLS LAST);\n\nCREATE INDEX blocks_lnglat_idx\n ON blocks\n USING gist\n (lnglat );\n\nCREATE INDEX blocks_moment_type_idx\n ON blocks\n USING btree\n (moment_type );\n\nCREATE INDEX blocks_networks_idx\n ON blocks\n USING btree\n (networks );\n\nCREATE INDEX blocks_private_idx\n ON blocks\n USING btree\n (private );\n\nCREATE INDEX blocks_shared_idx\n ON blocks\n USING btree\n (shared );\n\nCREATE INDEX blocks_timezone_idx\n ON blocks\n USING btree\n (timezone );\n\n\nOn Thu, Feb 9, 2012 at 11:46 AM, Tom Lane <[email protected]> wrote:\n\n> Alessandro Gagliardi <[email protected]> writes:\n> > WHERE ... (created at time zone timezone)::date = 'yesterday'\n>\n> > created has an index (btree if it matters). timezone does not. I'm\n> > wondering if the solution to my problem is to create a joint index\n> between\n> > created and timezone (and if so, if there is a particular way to do that\n> to\n> > make it work the way I want).\n>\n> The only way to make that indexable is to create an expression index on\n> the whole expression \"(created at time zone timezone)::date\". Seems\n> pretty special-purpose, though it might be worthwhile if you do that a\n> lot.\n>\n> regards, tom lane\n>\n\nI tried: CREATE INDEX blocks_created_at_timezone_idx ON blocks USING btree ((created at time zone timezone));(Actually, I originally did try one on \"(created at time zone timezone)::date\" but couldn't figure out how to phrase it in a way PostgreSQL would accept.)\nAnyway, no difference: http://explain.depesz.com/s/ZreI even tried changing the filter to (created at time zone timezone) > 'yesterday' AND (created at time zone timezone) < 'today' to see if that might make a difference. 
Sadly, no: http://explain.depesz.com/s/dfh\nHere's the definition for the offending table:CREATE TABLE blocks( block_id character(24) NOT NULL, user_id character(24) NOT NULL,\n created timestamp with time zone, locale character varying, shared boolean, private boolean, moment_type character varying NOT NULL, user_agent character varying,\n inserted timestamp without time zone NOT NULL DEFAULT now(), networks character varying[], lnglat point, timezone character varying, CONSTRAINT blocks_pkey PRIMARY KEY (block_id )\n)WITH ( OIDS=FALSE);CREATE INDEX blocks_created_at_timezone_idx ON blocks USING btree (timezone(timezone::text, created) );\nCREATE INDEX blocks_created_idx ON blocks USING btree (created DESC NULLS LAST);CREATE INDEX blocks_lnglat_idx ON blocks\n USING gist (lnglat );CREATE INDEX blocks_moment_type_idx ON blocks USING btree (moment_type );CREATE INDEX blocks_networks_idx\n ON blocks USING btree (networks );CREATE INDEX blocks_private_idx ON blocks USING btree (private );CREATE INDEX blocks_shared_idx\n ON blocks USING btree (shared );CREATE INDEX blocks_timezone_idx ON blocks USING btree (timezone );\nOn Thu, Feb 9, 2012 at 11:46 AM, Tom Lane <[email protected]> wrote:\nAlessandro Gagliardi <[email protected]> writes:\n> WHERE ... (created at time zone timezone)::date = 'yesterday'\n\n> created has an index (btree if it matters). timezone does not. I'm\n> wondering if the solution to my problem is to create a joint index between\n> created and timezone (and if so, if there is a particular way to do that to\n> make it work the way I want).\n\nThe only way to make that indexable is to create an expression index on\nthe whole expression \"(created at time zone timezone)::date\". Seems\npretty special-purpose, though it might be worthwhile if you do that a\nlot.\n\n regards, tom lane",
"msg_date": "Thu, 9 Feb 2012 12:00:56 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: timestamp with time zone"
},
{
"msg_contents": "Alessandro Gagliardi <[email protected]> wrote:\n \n> (Actually, I originally did try one on \"(created at time zone\n> timezone)::date\" but couldn't figure out how to phrase it in a way\n> PostgreSQL would accept.)\n \nCREATE INDEX blocks_created_date_idx\n ON blocks\n USING btree\n (((created at time zone timezone)::date));\n \n-Kevin\n",
"msg_date": "Thu, 09 Feb 2012 14:15:26 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timestamp with time zone"
},
{
"msg_contents": "Still slow as mud: http://explain.depesz.com/s/Zfn\n\nNow I've got indices on created, timezone, created at time zone timezone,\nand (created at time zone timezone)::date. Clearly the problem isn't a lack\nof indices!...except, wait, it's not actually using blocks_created_date_idx\n(or blocks_created_at_timezone_idx). How do I make that happen?\n\n\nOn Thu, Feb 9, 2012 at 12:15 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Alessandro Gagliardi <[email protected]> wrote:\n>\n> > (Actually, I originally did try one on \"(created at time zone\n> > timezone)::date\" but couldn't figure out how to phrase it in a way\n> > PostgreSQL would accept.)\n>\n> CREATE INDEX blocks_created_date_idx\n> ON blocks\n> USING btree\n> (((created at time zone timezone)::date));\n>\n> -Kevin\n>\n\nStill slow as mud: http://explain.depesz.com/s/ZfnNow I've got indices on created, timezone, created at time zone timezone, and (created at time zone timezone)::date. Clearly the problem isn't a lack of indices!...except, wait, it's not actually using blocks_created_date_idx (or blocks_created_at_timezone_idx). How do I make that happen?\nOn Thu, Feb 9, 2012 at 12:15 PM, Kevin Grittner <[email protected]> wrote:\nAlessandro Gagliardi <[email protected]> wrote:\n\n> (Actually, I originally did try one on \"(created at time zone\n> timezone)::date\" but couldn't figure out how to phrase it in a way\n> PostgreSQL would accept.)\n\nCREATE INDEX blocks_created_date_idx\n ON blocks\n USING btree\n (((created at time zone timezone)::date));\n\n-Kevin",
"msg_date": "Thu, 9 Feb 2012 12:38:22 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: timestamp with time zone"
},
{
"msg_contents": "Alessandro Gagliardi <[email protected]> writes:\n> Still slow as mud: http://explain.depesz.com/s/Zfn\n> Now I've got indices on created, timezone, created at time zone timezone,\n> and (created at time zone timezone)::date. Clearly the problem isn't a lack\n> of indices!...except, wait, it's not actually using blocks_created_date_idx\n> (or blocks_created_at_timezone_idx). How do I make that happen?\n\nDid you ANALYZE the table after creating those indexes? Generally you\nneed an ANALYZE so that the planner will have some stats about an\nexpression index.\n\nIt might still think that the other index is a better option. In that\ncase you can experiment to see if it's right or not; the general idea\nis\n\n\tbegin;\n\tdrop index index_that_planner_prefers;\n\texplain analyze your_query;\n\trollback;\t-- revert the index drop\n\nIf that EXPLAIN isn't actually any better than what you had, then the\nplanner was right. If it is better, let's see 'em both.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Feb 2012 01:19:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: timestamp with time zone "
},
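Pulling Kevin's index and Tom's advice together, a hedged sketch of the full test loop (table, column and index names are the ones used in this thread; the EXPLAIN is only there to confirm which index the planner picks):

CREATE INDEX blocks_created_date_idx
    ON blocks (((created at time zone timezone)::date));

ANALYZE blocks;  -- gives the planner statistics on the indexed expression

EXPLAIN
SELECT * FROM blocks
WHERE (created at time zone timezone)::date = 'yesterday';

The WHERE clause has to spell out the same expression the index was built on; once it does and the table has been analyzed, the planner can at least consider blocks_created_date_idx. Whether it actually chooses it still depends on its row estimates, which is what the rest of the thread is about.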
{
"msg_contents": "Hm. Tried running ANALYZE. Took almost 10 minutes to run. (Don't know if it\nwould have been run automatically since I last tried this yesterday, but\nfigured it couldn't hurt.) Still, no difference:\nhttp://explain.depesz.com/s/xHq\nActually, it's 10x worse (maybe because this is my first time running this\nquery today, whereas last time I had run it, or a version of it, several\ntimes before running that EXPLAIN). Anyway, good tip on dropping the index,\nbut I don't think that would be a good idea in this case because the index\nit appears to be choosing is the primary key!\n\nOn Thu, Feb 9, 2012 at 10:19 PM, Tom Lane <[email protected]> wrote:\n\n> Alessandro Gagliardi <[email protected]> writes:\n> > Still slow as mud: http://explain.depesz.com/s/Zfn\n> > Now I've got indices on created, timezone, created at time zone timezone,\n> > and (created at time zone timezone)::date. Clearly the problem isn't a\n> lack\n> > of indices!...except, wait, it's not actually using\n> blocks_created_date_idx\n> > (or blocks_created_at_timezone_idx). How do I make that happen?\n>\n> Did you ANALYZE the table after creating those indexes? Generally you\n> need an ANALYZE so that the planner will have some stats about an\n> expression index.\n>\n> It might still think that the other index is a better option. In that\n> case you can experiment to see if it's right or not; the general idea\n> is\n>\n> begin;\n> drop index index_that_planner_prefers;\n> explain analyze your_query;\n> rollback; -- revert the index drop\n>\n> If that EXPLAIN isn't actually any better than what you had, then the\n> planner was right. If it is better, let's see 'em both.\n>\n> regards, tom lane\n>\n\nHm. Tried running ANALYZE. Took almost 10 minutes to run. (Don't know if it would have been run automatically since I last tried this yesterday, but figured it couldn't hurt.) Still, no difference: http://explain.depesz.com/s/xHq\nActually, it's 10x worse (maybe because this is my first time running this query today, whereas last time I had run it, or a version of it, several times before running that EXPLAIN). Anyway, good tip on dropping the index, but I don't think that would be a good idea in this case because the index it appears to be choosing is the primary key!\nOn Thu, Feb 9, 2012 at 10:19 PM, Tom Lane <[email protected]> wrote:\nAlessandro Gagliardi <[email protected]> writes:\n> Still slow as mud: http://explain.depesz.com/s/Zfn\n> Now I've got indices on created, timezone, created at time zone timezone,\n> and (created at time zone timezone)::date. Clearly the problem isn't a lack\n> of indices!...except, wait, it's not actually using blocks_created_date_idx\n> (or blocks_created_at_timezone_idx). How do I make that happen?\n\nDid you ANALYZE the table after creating those indexes? Generally you\nneed an ANALYZE so that the planner will have some stats about an\nexpression index.\n\nIt might still think that the other index is a better option. In that\ncase you can experiment to see if it's right or not; the general idea\nis\n\n begin;\n drop index index_that_planner_prefers;\n explain analyze your_query;\n rollback; -- revert the index drop\n\nIf that EXPLAIN isn't actually any better than what you had, then the\nplanner was right. If it is better, let's see 'em both.\n\n regards, tom lane",
"msg_date": "Fri, 10 Feb 2012 11:53:54 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: timestamp with time zone"
}
] |
[
{
"msg_contents": "Yeah, Reply-All...\n\nBegin forwarded message:\n\n> From: David Yeu <[email protected]>\n> Subject: Re: [PERFORM] Performance on large, append-only tables\n> Date: February 10, 2012 10:59:04 AM EST\n> To: Merlin Moncure <[email protected]>\n> \n> On Feb 10, 2012, at 10:19 AM, Merlin Moncure wrote:\n> \n>> You can probably significantly optimize this. But first, can we see\n>> some explain analyze for the affected queries?\n> \n> Sorry, we should have included these in the original post. Here's the EXPLAIN output for a \"id < ?\" query:\n> \n> \n> => EXPLAIN ANALYZE SELECT \"lines\".* FROM \"lines\" WHERE (lines.deleted_at IS NULL) AND (\"lines\".group_id = ?) AND (id < ?) ORDER BY id DESC LIMIT 20 OFFSET 0;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=9267.44..9267.45 rows=20 width=1321) (actual time=348.844..348.877 rows=20 loops=1)\n> -> Sort (cost=9267.44..9269.76 rows=4643 width=1321) (actual time=348.840..348.852 rows=20 loops=1)\n> Sort Key: id\n> Sort Method: top-N heapsort Memory: 29kB\n> -> Index Scan using index_lines_on_group_id on lines (cost=0.00..9242.73 rows=4643 width=1321) (actual time=6.131..319.835 rows=23038 loops=1)\n> Index Cond: (group_id = ?)\n> Filter: ((deleted_at IS NULL) AND (id < ?))\n> Total runtime: 348.987 ms\n> \n> \n> A quick suggestion from Heroku yesterday was a new index on (group_id, id). After adding it to a database fork, we ended up with:\n> \n> \n> => EXPLAIN ANALYZE SELECT \"lines\".* FROM \"lines\" WHERE (lines.deleted_at IS NULL) AND (\"lines\".group_id = ?) AND (id < ?) ORDER BY id DESC LIMIT 20 OFFSET 0;\n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..28.88 rows=20 width=1321) (actual time=17.216..109.905 rows=20 loops=1)\n> -> Index Scan Backward using index_lines_on_group_id_and_id on lines (cost=0.00..6416.04 rows=4443 width=1321) (actual time=17.207..109.867 rows=20 loops=1)\n> Index Cond: ((group_id = ?) AND (id < ?))\n> Filter: (deleted_at IS NULL)\n> Total runtime: 110.039 ms\n> \n> \n> The result has been pretty dramatic for the \"id <> ?\" queries, which make up the bulk of the queries. Running a whole bunch of EXPLAIN ANAYLZE queries also showed that some queries were actually choosing to use the index on `id' instead of `group_id', and that performed about as poorly as expected. Thankfully, the new index on (group_id, id) seems to be preferable nearly always.\n> \n> And for reference, here's the EXPLAIN for the LIMIT, OFFSET query:\n> \n> \n> => EXPLAIN ANALYZE SELECT \"lines\".* FROM \"lines\" WHERE (lines.deleted_at IS NULL) AND (\"lines\".group_id = ?) 
ORDER BY id DESC LIMIT 20 OFFSET 60;\n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=9274.45..9274.46 rows=20 width=1321) (actual time=109.674..109.708 rows=20 loops=1)\n> -> Sort (cost=9274.42..9276.75 rows=4646 width=1321) (actual time=109.606..109.657 rows=80 loops=1)\n> Sort Key: id\n> Sort Method: top-N heapsort Memory: 43kB\n> -> Index Scan using index_lines_on_group_id on lines (cost=0.00..9240.40 rows=4646 width=1321) (actual time=0.117..98.905 rows=7999 loops=1)\n> Index Cond: (group_id = ?)\n> Filter: (deleted_at IS NULL)\n> Total runtime: 109.753 ms\n> \n> \n> - Dave\n> \n\n\n",
"msg_date": "Fri, 10 Feb 2012 16:19:57 +0000",
"msg_from": "David Yeu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on large, append-only tables"
},
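For reference, a sketch of the two-column index behind the faster plan above. The definition is inferred from the index name in the EXPLAIN output, so treat it as an assumption rather than the exact DDL that was used:

CREATE INDEX index_lines_on_group_id_and_id
    ON lines (group_id, id);

Because the key is (group_id, id), a backward scan of this index satisfies both group_id = ? and id < ? and hands back rows already in id DESC order, which is why the top-N sort node disappears from the second plan.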
{
"msg_contents": "On Fri, Feb 10, 2012 at 1:19 PM, David Yeu <[email protected]> wrote:\n>> => EXPLAIN ANALYZE SELECT \"lines\".* FROM \"lines\" WHERE (lines.deleted_at IS NULL) AND (\"lines\".group_id = ?) AND (id < ?) ORDER BY id DESC LIMIT 20 OFFSET 0;\n\nInteresting...\n\nDo you have many \"deleted\" rows?\nDo you always filter them out like this?\n\nBecause in that case, you can add the condition to the indices to\nexclude deleted rows from the index. This is a big win if you have\nmany deleted rows, only the index expression has to be exactly the\nsame (verbatim) as the one used in the query.\n\nThat, and an index on \"(group_id, created_at) where (deleted_at IS\nNULL)\" to catch the sorted by date kind of query, and you'll be done I\nthink.\n",
"msg_date": "Fri, 10 Feb 2012 13:26:54 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on large, append-only tables"
},
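A hedged sketch of the partial indexes Claudio is describing (table and column names follow the thread, the index names are made up; how much this helps depends on how many soft-deleted rows there really are):

CREATE INDEX lines_live_group_id_id_idx
    ON lines (group_id, id)
    WHERE deleted_at IS NULL;

CREATE INDEX lines_live_group_id_created_at_idx
    ON lines (group_id, created_at)
    WHERE deleted_at IS NULL;

A query only qualifies for these indexes when it repeats the deleted_at IS NULL condition, and because deleted rows are left out the indexes stay smaller and cheaper to scan than their non-partial equivalents.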
{
"msg_contents": "On Feb 10, 2012, at 11:26 AM, Claudio Freire wrote:\n> That, and an index on \"(group_id, created_at) where (deleted_at IS\n> NULL)\" to catch the sorted by date kind of query, and you'll be done I\n> think.\n\nYeah, I didn't quite get that right -- we're actually sorting all these queries by \"id DESC\", not \"created_at DESC\", so that seems to obviate the need for any index on created_at. \n\nDave\n\n\n\n",
"msg_date": "Fri, 10 Feb 2012 16:45:39 +0000",
"msg_from": "David Yeu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on large, append-only tables"
},
{
"msg_contents": "On Fri, Feb 10, 2012 at 1:45 PM, David Yeu <[email protected]> wrote:\n> On Feb 10, 2012, at 11:26 AM, Claudio Freire wrote:\n>> That, and an index on \"(group_id, created_at) where (deleted_at IS\n>> NULL)\" to catch the sorted by date kind of query, and you'll be done I\n>> think.\n>\n> Yeah, I didn't quite get that right -- we're actually sorting all these queries by \"id DESC\", not \"created_at DESC\", so that seems to obviate the need for any index on created_at.\n\n From your OP:\n\n> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;\n",
"msg_date": "Fri, 10 Feb 2012 13:58:13 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on large, append-only tables"
},
{
"msg_contents": "On Feb 10, 2012, at 11:58 AM, Claudio Freire wrote:\n\n> From your OP:\n> \n>> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;\n\nYup, sorry.\n\nDave\n\n\n",
"msg_date": "Fri, 10 Feb 2012 17:00:50 +0000",
"msg_from": "David Yeu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance on large, append-only tables"
},
{
"msg_contents": "On Fri, Feb 10, 2012 at 2:00 PM, David Yeu <[email protected]> wrote:\n>> From your OP:\n>>\n>>> * Š WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;\n>\n> Yup, sorry.\n\nAh, ok, so that should do it.\nIf you need further improvement, remember to take a look at the deleted stuff.\n",
"msg_date": "Fri, 10 Feb 2012 14:12:11 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance on large, append-only tables"
}
] |
[
{
"msg_contents": "Hi,\n\nI am checking a performance problem encountered after porting old embeded\nDB to postgreSQL. While the system is real-time sensitive, we are\nconcerning for per-query cost. In our environment sequential scanning\n(select * from ...) for a table with tens of thousands of record costs 1 -\n2 seconds, regardless of using ODBC driver or the \"timing\" result shown in\npsql client (which in turn, relies on libpq). However, using EXPLAIN\nANALYZE, or checking the statistics in pg_stat_statement view, the query\ncosts only less than 100ms.\n\nSo, is it client interface (ODBC, libpq) 's cost mainly due to TCP? Has the\npg_stat_statement or EXPLAIN ANALYZE included the cost of copying tuples\nfrom shared buffers to result sets?\n\nCould you experts share your views on this big gap? And any suggestions to\noptimise?\n\nP.S. In our original embeded DB a \"fastpath\" interface is provided to read\ndirectly from shared memory for the records, thus provides extremely\nrealtime access (of course sacrifice some other features such as\nconsistency).\n\nBest regards,\nHan\n\nHi,I am checking a performance problem encountered after porting old embeded DB to postgreSQL. While the system is real-time sensitive, we are concerning for per-query cost. In our environment sequential scanning (select * from ...) for a table with tens of thousands of record costs 1 - 2 seconds, regardless of using ODBC driver or the \"timing\" result shown in psql client (which in turn, relies on libpq). However, using EXPLAIN ANALYZE, or checking the statistics in pg_stat_statement view, the query costs only less than 100ms.\nSo, is it client interface (ODBC, libpq) 's cost mainly due to TCP? Has the pg_stat_statement or EXPLAIN ANALYZE included the cost of copying tuples from shared buffers to result sets?Could you experts share your views on this big gap? And any suggestions to optimise?\nP.S. In our original embeded DB a \"fastpath\" interface is provided to read directly from shared memory for the records, thus provides extremely realtime access (of course sacrifice some other features such as consistency).\nBest regards,Han",
"msg_date": "Wed, 15 Feb 2012 11:59:36 +0800",
"msg_from": "Zhou Han <[email protected]>",
"msg_from_op": true,
"msg_subject": "client performance v.s. server statistics"
},
{
"msg_contents": ">>So, is it client interface (ODBC, libpq) 's cost mainly due to TCP?\n\n \n\nThe difference as compare to your embedded DB you are seeing is mainly seems\nto be due to TCP.\n\nOne optimization you can use is to use Unix-domain socket mode of\nPostgreSQL. You can refer unix_socket_directory parameter in postgresql.conf\nand other related parameters. \n\nI am suggesting you this as earlier you were using embedded DB, so your\nclient/server should be on same machine. If now this is not the case then it\nwill not work.\n\n \n\nCan you please clarify some more things like\n\n1. After doing sequence scan, do you need all the records in client for\nwhich seq. scan is happening. If less records then why you have not created\nindex.\n\n2. What is exact scenario for fetching records\n\n \n\n \n\n \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Zhou Han\nSent: Wednesday, February 15, 2012 9:30 AM\nTo: [email protected]\nSubject: [HACKERS] client performance v.s. server statistics\n\n \n\nHi,\n\nI am checking a performance problem encountered after porting old embeded DB\nto postgreSQL. While the system is real-time sensitive, we are concerning\nfor per-query cost. In our environment sequential scanning (select * from\n...) for a table with tens of thousands of record costs 1 - 2 seconds,\nregardless of using ODBC driver or the \"timing\" result shown in psql client\n(which in turn, relies on libpq). However, using EXPLAIN ANALYZE, or\nchecking the statistics in pg_stat_statement view, the query costs only less\nthan 100ms.\n\nSo, is it client interface (ODBC, libpq) 's cost mainly due to TCP? Has the\npg_stat_statement or EXPLAIN ANALYZE included the cost of copying tuples\nfrom shared buffers to result sets?\n\nCould you experts share your views on this big gap? And any suggestions to\noptimise?\n\nP.S. In our original embeded DB a \"fastpath\" interface is provided to read\ndirectly from shared memory for the records, thus provides extremely\nrealtime access (of course sacrifice some other features such as\nconsistency).\n\nBest regards,\nHan\n\n\n>>So, is it client interface (ODBC, libpq) 's cost mainly due to TCP? The difference as compare to your embedded DB you are seeing is mainly seems to be due to TCP.One optimization you can use is to use Unix-domain socket mode of PostgreSQL. You can refer unix_socket_directory parameter in postgresql.conf and other related parameters. I am suggesting you this as earlier you were using embedded DB, so your client/server should be on same machine. If now this is not the case then it will not work. Can you please clarify some more things like1. After doing sequence scan, do you need all the records in client for which seq. scan is happening. If less records then why you have not created index.2. What is exact scenario for fetching records [email protected] [mailto:[email protected]] On Behalf Of Zhou HanSent: Wednesday, February 15, 2012 9:30 AMTo: [email protected]: [HACKERS] client performance v.s. server statistics Hi,I am checking a performance problem encountered after porting old embeded DB to postgreSQL. While the system is real-time sensitive, we are concerning for per-query cost. In our environment sequential scanning (select * from ...) for a table with tens of thousands of record costs 1 - 2 seconds, regardless of using ODBC driver or the \"timing\" result shown in psql client (which in turn, relies on libpq). 
However, using EXPLAIN ANALYZE, or checking the statistics in pg_stat_statement view, the query costs only less than 100ms.\nrface (ODBC, libpq) 's cost mainly due to TCP? Has the pg_stat_statement or EXPLAIN ANALYZE included the cost of copying tuples from shared buffers to result sets?Could you experts share your views on this big gap? And any suggestions to optimise?P.S. In our original embeded DB a \"fastpath\" interface is provided to read directly from shared memory for the records, thus provides extremely realtime access (of course sacrifice some other features such as consistency).Best regards,Han",
"msg_date": "Wed, 15 Feb 2012 10:53:04 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: client performance v.s. server statistics"
},
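A minimal sketch of the Unix-domain-socket setup Amit suggests. The socket directory shown is an assumption (a Debian-style path); use whatever the server's unix_socket_directory actually points at, and note the parameter is spelled unix_socket_directories from 9.3 on:

# postgresql.conf (8.x and 9.0-9.2)
unix_socket_directory = '/var/run/postgresql'

# client side: an absolute path as the host makes psql/libpq use the socket
psql -h /var/run/postgresql -d mydb
# or, in a libpq/ODBC-style connection string:
#   host=/var/run/postgresql dbname=mydb

As Han reports a bit later, on this system the socket type made little difference, which is a hint that the extra time is not primarily in TCP itself.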
{
"msg_contents": "Hi,\n\nI have tried unix domain socket and the performance is similar with TCP\nsocket. It is MIPS architecture so memory copy to/from kernel can occupy\nmuch time, and apparently using unit domain socket has no difference than\nTCP in terms of memory copy.\n\nBut it is still unbelievable for the ten-fold gap between the client side\nstatistic and the server side statistics. So I want to know what exactly\nthe operations are involved in the server side statistics in EXPLAIN\nANALYZE. May I check the code later on when I get time.\n\nFor the query itself, it was just for performance comparison. There are\nother index based queries, which are of course much faster, but still\nresult in similar ten-fold of time gap between client side and server side\nstatistics.\n\nI am thinking of non-kernel involved client interface, is there such an\noption, or do I have to develop one from scratch?\n\nBest regards,\nHan\n\nOn Wed, Feb 15, 2012 at 1:23 PM, Amit Kapila <[email protected]> wrote:\n\n> >>So, is it client interface (ODBC, libpq) 's cost mainly due to TCP?****\n>\n> ** **\n>\n> The difference as compare to your embedded DB you are seeing is mainly\n> seems to be due to TCP.****\n>\n> One optimization you can use is to use Unix-domain socket mode of\n> PostgreSQL. You can refer unix_socket_directory parameter in\n> postgresql.conf and other related parameters. ****\n>\n> I am suggesting you this as earlier you were using embedded DB, so your\n> client/server should be on same machine. If now this is not the case then\n> it will not work.****\n>\n> ** **\n>\n> Can you please clarify some more things like****\n>\n> **1. **After doing sequence scan, do you need all the records in\n> client for which seq. scan is happening. If less records then why you have\n> not created index.****\n>\n> **2. **What is exact scenario for fetching records****\n>\n> ** **\n>\n> ** **\n>\n> ** **\n>\n> * [email protected] [mailto:\n> [email protected]] On Behalf Of Zhou Han\n> Sent: Wednesday, February 15, 2012 9:30 AM\n> To: [email protected]\n> Subject: [HACKERS] client performance v.s. server statistics*\n>\n> ** **\n>\n> Hi,\n>\n> I am checking a performance problem encountered after porting old embeded\n> DB to postgreSQL. While the system is real-time sensitive, we are\n> concerning for per-query cost. In our environment sequential scanning\n> (select * from ...) for a table with tens of thousands of record costs 1 -\n> 2 seconds, regardless of using ODBC driver or the \"timing\" result shown in\n> psql client (which in turn, relies on libpq). However, using EXPLAIN\n> ANALYZE, or checking the statistics in pg_stat_statement view, the query\n> costs only less than 100ms.\n> rface (ODBC, libpq) 's cost mainly due to TCP? Has the pg_stat_statement\n> or EXPLAIN ANALYZE included the cost of copying tuples from shared buffers\n> to result sets?\n>\n> Could you experts share your views on this big gap? And any suggestions to\n> optimise?\n>\n> P.S. In our original embeded DB a \"fastpath\" interface is provided to read\n> directly from shared memory for the records, thus provides extremely\n> realtime access (of course sacrifice some other features such as\n> consistency).\n>\n> Best regards,\n> Han****\n>\n>\n\nHi,I have tried unix domain socket and the performance is similar with TCP socket. It is MIPS architecture so memory copy to/from kernel can occupy much time, and apparently using unit domain socket has no difference than TCP in terms of memory copy. 
\nBut it is still unbelievable for the ten-fold gap between the client side statistic and the server side statistics. So I want to know what exactly the operations are involved in the server side statistics in EXPLAIN ANALYZE. May I check the code later on when I get time.\nFor the query itself, it was just for performance comparison. There are other index based queries, which are of course much faster, but still result in similar ten-fold of time gap between client side and server side statistics.\nI am thinking of non-kernel involved client interface, is there such an option, or do I have to develop one from scratch?Best regards,HanOn Wed, Feb 15, 2012 at 1:23 PM, Amit Kapila <[email protected]> wrote:\n\n>>So, is it client interface (ODBC, libpq) 's cost mainly due to TCP? \nThe difference as compare to your embedded DB you are seeing is mainly seems to be due to TCP.One optimization you can use is to use Unix-domain socket mode of PostgreSQL. You can refer unix_socket_directory parameter in postgresql.conf and other related parameters. \nI am suggesting you this as earlier you were using embedded DB, so your client/server should be on same machine. If now this is not the case then it will not work.\n Can you please clarify some more things like1. After doing sequence scan, do you need all the records in client for which seq. scan is happening. If less records then why you have not created index.\n2. What is exact scenario for fetching records \n \n \n [email protected] [mailto:[email protected]] On Behalf Of Zhou Han\nSent: Wednesday, February 15, 2012 9:30 AMTo: [email protected]: [HACKERS] client performance v.s. server statistics\n Hi,I am checking a performance problem encountered after porting old embeded DB to postgreSQL. While the system is real-time sensitive, we are concerning for per-query cost. In our environment sequential scanning (select * from ...) for a table with tens of thousands of record costs 1 - 2 seconds, regardless of using ODBC driver or the \"timing\" result shown in psql client (which in turn, relies on libpq). However, using EXPLAIN ANALYZE, or checking the statistics in pg_stat_statement view, the query costs only less than 100ms.\n\nrface (ODBC, libpq) 's cost mainly due to TCP? Has the pg_stat_statement or EXPLAIN ANALYZE included the cost of copying tuples from shared buffers to result sets?Could you experts share your views on this big gap? And any suggestions to optimise?\nP.S. In our original embeded DB a \"fastpath\" interface is provided to read directly from shared memory for the records, thus provides extremely realtime access (of course sacrifice some other features such as consistency).\nBest regards,Han",
"msg_date": "Wed, 15 Feb 2012 15:01:42 +0800",
"msg_from": "Zhou Han <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: client performance v.s. server statistics"
},
{
"msg_contents": "Hi,\n\nForward my question here.\n\nBest regards,\nHan\n\n---------- Forwarded message ----------\nFrom: Zhou Han <[email protected]>\nDate: Wed, Feb 15, 2012 at 3:01 PM\nSubject: Re: [HACKERS] client performance v.s. server statistics\nTo: Amit Kapila <[email protected]>\nCc: [email protected]\n\n\nHi,\n\nI have tried unix domain socket and the performance is similar with TCP\nsocket. It is MIPS architecture so memory copy to/from kernel can occupy\nmuch time, and apparently using unit domain socket has no difference than\nTCP in terms of memory copy.\n\nBut it is still unbelievable for the ten-fold gap between the client side\nstatistic and the server side statistics. So I want to know what exactly\nthe operations are involved in the server side statistics in EXPLAIN\nANALYZE. May I check the code later on when I get time.\n\nFor the query itself, it was just for performance comparison. There are\nother index based queries, which are of course much faster, but still\nresult in similar ten-fold of time gap between client side and server side\nstatistics.\n\nI am thinking of non-kernel involved client interface, is there such an\noption, or do I have to develop one from scratch?\n\nBest regards,\nHan\n\n\nOn Wed, Feb 15, 2012 at 1:23 PM, Amit Kapila <[email protected]> wrote:\n\n> >>So, is it client interface (ODBC, libpq) 's cost mainly due to TCP?****\n>\n> ** **\n>\n> The difference as compare to your embedded DB you are seeing is mainly\n> seems to be due to TCP.****\n>\n> One optimization you can use is to use Unix-domain socket mode of\n> PostgreSQL. You can refer unix_socket_directory parameter in\n> postgresql.conf and other related parameters. ****\n>\n> I am suggesting you this as earlier you were using embedded DB, so your\n> client/server should be on same machine. If now this is not the case then\n> it will not work.****\n>\n> ** **\n>\n> Can you please clarify some more things like****\n>\n> **1. **After doing sequence scan, do you need all the records in\n> client for which seq. scan is happening. If less records then why you have\n> not created index.****\n>\n> **2. **What is exact scenario for fetching records****\n>\n> ** **\n>\n> ** **\n>\n> ** **\n>\n> * [email protected] [mailto:\n> [email protected]] On Behalf Of Zhou Han\n> Sent: Wednesday, February 15, 2012 9:30 AM\n> To: [email protected]\n> Subject: [HACKERS] client performance v.s. server statistics*\n>\n> ** **\n>\n> Hi,\n>\n> I am checking a performance problem encountered after porting old embeded\n> DB to postgreSQL. While the system is real-time sensitive, we are\n> concerning for per-query cost. In our environment sequential scanning\n> (select * from ...) for a table with tens of thousands of record costs 1 -\n> 2 seconds, regardless of using ODBC driver or the \"timing\" result shown in\n> psql client (which in turn, relies on libpq). However, using EXPLAIN\n> ANALYZE, or checking the statistics in pg_stat_statement view, the query\n> costs only less than 100ms.\n> rface (ODBC, libpq) 's cost mainly due to TCP? Has the pg_stat_statement\n> or EXPLAIN ANALYZE included the cost of copying tuples from shared buffers\n> to result sets?\n>\n> Could you experts share your views on this big gap? And any suggestions to\n> optimise?\n>\n> P.S. 
In our original embeded DB a \"fastpath\" interface is provided to read\n> directly from shared memory for the records, thus provides extremely\n> realtime access (of course sacrifice some other features such as\n> consistency).\n>\n> Best regards,\n> Han****\n>\n>\n\nHi,Forward my question here.Best regards,Han---------- Forwarded message ----------From: Zhou Han <[email protected]>\nDate: Wed, Feb 15, 2012 at 3:01 PMSubject: Re: [HACKERS] client performance v.s. server statisticsTo: Amit Kapila <[email protected]>Cc: [email protected]\nHi,I have tried unix domain socket and the performance is similar with TCP socket. It is MIPS architecture so memory copy to/from kernel can occupy much time, and apparently using unit domain socket has no difference than TCP in terms of memory copy. \nBut it is still unbelievable for the ten-fold gap between the client side statistic and the server side statistics. So I want to know what exactly the operations are involved in the server side statistics in EXPLAIN ANALYZE. May I check the code later on when I get time.\nFor the query itself, it was just for performance comparison. There are other index based queries, which are of course much faster, but still result in similar ten-fold of time gap between client side and server side statistics.\nI am thinking of non-kernel involved client interface, is there such an option, or do I have to develop one from scratch?Best regards,Han\nOn Wed, Feb 15, 2012 at 1:23 PM, Amit Kapila <[email protected]> wrote:\n\n>>So, is it client interface (ODBC, libpq) 's cost mainly due to TCP? \nThe difference as compare to your embedded DB you are seeing is mainly seems to be due to TCP.One optimization you can use is to use Unix-domain socket mode of PostgreSQL. You can refer unix_socket_directory parameter in postgresql.conf and other related parameters. \nI am suggesting you this as earlier you were using embedded DB, so your client/server should be on same machine. If now this is not the case then it will not work.\n Can you please clarify some more things like1. After doing sequence scan, do you need all the records in client for which seq. scan is happening. If less records then why you have not created index.\n2. What is exact scenario for fetching records \n \n \n [email protected] [mailto:[email protected]] On Behalf Of Zhou Han\nSent: Wednesday, February 15, 2012 9:30 AMTo: [email protected]: [HACKERS] client performance v.s. server statistics\n Hi,I am checking a performance problem encountered after porting old embeded DB to postgreSQL. While the system is real-time sensitive, we are concerning for per-query cost. In our environment sequential scanning (select * from ...) for a table with tens of thousands of record costs 1 - 2 seconds, regardless of using ODBC driver or the \"timing\" result shown in psql client (which in turn, relies on libpq). However, using EXPLAIN ANALYZE, or checking the statistics in pg_stat_statement view, the query costs only less than 100ms.\n\nrface (ODBC, libpq) 's cost mainly due to TCP? Has the pg_stat_statement or EXPLAIN ANALYZE included the cost of copying tuples from shared buffers to result sets?Could you experts share your views on this big gap? And any suggestions to optimise?\nP.S. In our original embeded DB a \"fastpath\" interface is provided to read directly from shared memory for the records, thus provides extremely realtime access (of course sacrifice some other features such as consistency).\nBest regards,Han",
"msg_date": "Wed, 15 Feb 2012 17:38:18 +0800",
"msg_from": "Zhou Han <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: [HACKERS] client performance v.s. server statistics"
},
{
"msg_contents": "Hi,\n\nForward my question from HACKERS list to here (and added some more notes):\n\nI have tried unix domain socket and the performance is similar with\nTCP socket. It is MIPS architecture so memory copy to/from kernel can\noccupy much time, and apparently using unit domain socket has no\ndifference than TCP in terms of memory copy.\n\nBut it is still unbelievable for the ten-fold gap between the client\nside statistic and the server side statistics. So I want to know what\nexactly the operations are involved in the server side statistics in\nEXPLAIN ANALYZE. May I check the code later on when I get time.\n\nFor the query itself, it was just for performance comparison. There\nare other index based queries, which are of course much faster, but\nstill result in similar ten-fold of time gap between client side and\nserver side statistics.\n\nI am thinking of non-kernel involved client interface, is there such\nan option, or do I have to develop one from scratch?\n\nBesides, the test was done on the same host (without network cost).\nAnd even considering the memory copying cost it is not reasonable,\nbecause the client did similar job using another IPC mechanism via\nkernel space to transfer the data again to another program, which\nappears to be quite fast - costed even much less than the time shown\nby EXPLAIN ANALYZE.\n\nIs there anyone can help me to explain this?\n\nBest regards,\nHan\n\n\nOn Wed, Feb 15, 2012 at 1:23 PM, Amit Kapila <[email protected]> wrote:\n>\n> >>So, is it client interface (ODBC, libpq) 's cost mainly due to TCP?\n>\n>\n>\n> The difference as compare to your embedded DB you are seeing is mainly seems to be due to TCP.\n>\n> One optimization you can use is to use Unix-domain socket mode of PostgreSQL. You can refer unix_socket_directory parameter in postgresql.conf and other related parameters.\n>\n> I am suggesting you this as earlier you were using embedded DB, so your client/server should be on same machine. If now this is not the case then it will not work.\n>\n>\n>\n> Can you please clarify some more things like\n>\n> 1. After doing sequence scan, do you need all the records in client for which seq. scan is happening. If less records then why you have not created index.\n>\n> 2. What is exact scenario for fetching records\n>\n>\n>\n>\n>\n>\n>\n> [email protected] [mailto:[email protected]] On Behalf Of Zhou Han\n> Sent: Wednesday, February 15, 2012 9:30 AM\n> To: [email protected]\n> Subject: [HACKERS] client performance v.s. server statistics\n>\n>\n>\n> Hi,\n>\n> I am checking a performance problem encountered after porting old embeded DB to postgreSQL. While the system is real-time sensitive, we are concerning for per-query cost. In our environment sequential scanning (select * from ...) for a table with tens of thousands of record costs 1 - 2 seconds, regardless of using ODBC driver or the \"timing\" result shown in psql client (which in turn, relies on libpq). However, using EXPLAIN ANALYZE, or checking the statistics in pg_stat_statement view, the query costs only less than 100ms.\n> rface (ODBC, libpq) 's cost mainly due to TCP? Has the pg_stat_statement or EXPLAIN ANALYZE included the cost of copying tuples from shared buffers to result sets?\n>\n> Could you experts share your views on this big gap? And any suggestions to optimise?\n>\n> P.S. 
In our original embeded DB a \"fastpath\" interface is provided to read directly from shared memory for the records, thus provides extremely realtime access (of course sacrifice some other features such as consistency).\n>\n> Best regards,\n> Han\n\n\n\n\n\n-- \nBest regards,\nHan\n",
"msg_date": "Wed, 15 Feb 2012 18:19:00 +0800",
"msg_from": "Zhou Han <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: [HACKERS] client performance v.s. server statistics"
},
{
"msg_contents": "Hi,\nOn Wednesday, February 15, 2012 11:19:00 AM Zhou Han wrote:\n> I have tried unix domain socket and the performance is similar with\n> TCP socket. It is MIPS architecture so memory copy to/from kernel can\n> occupy much time, and apparently using unit domain socket has no\n> difference than TCP in terms of memory copy.\n\n> But it is still unbelievable for the ten-fold gap between the client\n> side statistic and the server side statistics. So I want to know what\n> exactly the operations are involved in the server side statistics in\n> EXPLAIN ANALYZE. May I check the code later on when I get time.\nMy guess is that the time difference youre seing is actually the planning time. \nThe timing shown at the end of EXPLAIN ANALYZE is just the execution, not the \nplanning time. You can use \"\\timing on\" in psql to let it display timing \ninformation that include planning.\n\nWhats the query?\n> For the query itself, it was just for performance comparison. There\n> are other index based queries, which are of course much faster, but\n> still result in similar ten-fold of time gap between client side and\n> server side statistics.\n> \n> I am thinking of non-kernel involved client interface, is there such\n> an option, or do I have to develop one from scratch?\nIts unlikely thats possible in a sensible amount of time. But I don't think \nthats your problem anyway.\n\nAndres\n",
"msg_date": "Wed, 15 Feb 2012 11:55:19 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: [HACKERS] client performance v.s. server statistics"
},
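A quick way to see the split Andres describes, assuming a psql session against the table in question (the table name is hypothetical):

\timing on

EXPLAIN ANALYZE SELECT * FROM big_table;
-- the "Total runtime" line covers executor time only

SELECT * FROM big_table;
-- psql's "Time:" line for this one covers parse + plan + execute,
-- plus converting every row to text and shipping it to the client

Comparing the EXPLAIN ANALYZE runtime with psql's Time: line for the same query shows how much of the wall-clock cost sits outside the executor (planning, tuple output conversion, and the transfer to the client), which is exactly the gap being discussed here.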
{
"msg_contents": "Hi Andres,\n\nMay you missed my first post, and I paste it here again:\nIn our environment sequential scanning (select * from ...) for a table\nwith tens of thousands of record costs 1 - 2 seconds, regardless of\nusing ODBC driver or the \"timing\" result shown in psql client (which\nin turn, relies on libpq). However, using EXPLAIN ANALYZE, or checking\nthe statistics in pg_stat_statement view, the query costs only less\nthan 100ms.\n\nHas the pg_stat_statement or EXPLAIN ANALYZE included the cost of\ncopying tuples from shared buffers to result sets?\n\nBest regards,\nHan\n\nOn Wed, Feb 15, 2012 at 6:55 PM, Andres Freund <[email protected]> wrote:\n> Hi,\n> On Wednesday, February 15, 2012 11:19:00 AM Zhou Han wrote:\n>> I have tried unix domain socket and the performance is similar with\n>> TCP socket. It is MIPS architecture so memory copy to/from kernel can\n>> occupy much time, and apparently using unit domain socket has no\n>> difference than TCP in terms of memory copy.\n>\n>> But it is still unbelievable for the ten-fold gap between the client\n>> side statistic and the server side statistics. So I want to know what\n>> exactly the operations are involved in the server side statistics in\n>> EXPLAIN ANALYZE. May I check the code later on when I get time.\n> My guess is that the time difference youre seing is actually the planning time.\n> The timing shown at the end of EXPLAIN ANALYZE is just the execution, not the\n> planning time. You can use \"\\timing on\" in psql to let it display timing\n> information that include planning.\n>\n> Whats the query?\n>> For the query itself, it was just for performance comparison. There\n>> are other index based queries, which are of course much faster, but\n>> still result in similar ten-fold of time gap between client side and\n>> server side statistics.\n>>\n>> I am thinking of non-kernel involved client interface, is there such\n>> an option, or do I have to develop one from scratch?\n> Its unlikely thats possible in a sensible amount of time. But I don't think\n> thats your problem anyway.\n>\n> Andres\n\n\n\n-- \nBest regards,\nHan\n",
"msg_date": "Wed, 15 Feb 2012 19:02:53 +0800",
"msg_from": "Han Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: [HACKERS] client performance v.s. server statistics"
},
{
"msg_contents": "Hi,\n\nTo be more specific, I list my calculation here:\nThe timing shown in psql may include: plan + execution + copying to\nresult set in backend (does this step exist?) + transferring data to\nclient via socket.\n\nThen I want to know what's the time shown in pg_stat_statement and\nEXPLAIN ANALYZE in terms of the above mentioned parts. And why are the\ngap is almost 10 times (100 ms v.s. 1 second)? As a comparison,\ntransferring same amount of data with unix domain socket should cost\nonly a very small fraction of this (almost negligible), according to\nmy other performance tests.\n\nAnd I don't think the plan time plays an important role here in\nEXPLAIN ANALYZE, because the command itself costs similar time to the\n\"Total runtime\" as shown in psql (timing on), which means the plan is\ntoo simple to take any significant part of time in this case.\n\nBest regards,\nHan\n\nOn Wed, Feb 15, 2012 at 7:02 PM, Han Zhou <[email protected]> wrote:\n> Hi Andres,\n>\n> May you missed my first post, and I paste it here again:\n> In our environment sequential scanning (select * from ...) for a table\n> with tens of thousands of record costs 1 - 2 seconds, regardless of\n> using ODBC driver or the \"timing\" result shown in psql client (which\n> in turn, relies on libpq). However, using EXPLAIN ANALYZE, or checking\n> the statistics in pg_stat_statement view, the query costs only less\n> than 100ms.\n>\n> Has the pg_stat_statement or EXPLAIN ANALYZE included the cost of\n> copying tuples from shared buffers to result sets?\n>\n> Best regards,\n> Han\n>\n> On Wed, Feb 15, 2012 at 6:55 PM, Andres Freund <[email protected]> wrote:\n>> Hi,\n>> On Wednesday, February 15, 2012 11:19:00 AM Zhou Han wrote:\n>>> I have tried unix domain socket and the performance is similar with\n>>> TCP socket. It is MIPS architecture so memory copy to/from kernel can\n>>> occupy much time, and apparently using unit domain socket has no\n>>> difference than TCP in terms of memory copy.\n>>\n>>> But it is still unbelievable for the ten-fold gap between the client\n>>> side statistic and the server side statistics. So I want to know what\n>>> exactly the operations are involved in the server side statistics in\n>>> EXPLAIN ANALYZE. May I check the code later on when I get time.\n>> My guess is that the time difference youre seing is actually the planning time.\n>> The timing shown at the end of EXPLAIN ANALYZE is just the execution, not the\n>> planning time. You can use \"\\timing on\" in psql to let it display timing\n>> information that include planning.\n>>\n>> Whats the query?\n>>> For the query itself, it was just for performance comparison. There\n>>> are other index based queries, which are of course much faster, but\n>>> still result in similar ten-fold of time gap between client side and\n>>> server side statistics.\n>>>\n>>> I am thinking of non-kernel involved client interface, is there such\n>>> an option, or do I have to develop one from scratch?\n>> Its unlikely thats possible in a sensible amount of time. But I don't think\n>> thats your problem anyway.\n>>\n>> Andres\n>\n>\n>\n> --\n> Best regards,\n> Han\n\n\n\n-- \nBest regards,\nHan\n",
"msg_date": "Wed, 15 Feb 2012 19:33:13 +0800",
"msg_from": "Han Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: [HACKERS] client performance v.s. server statistics"
},
{
"msg_contents": "On Wednesday, February 15, 2012 12:33:13 PM Han Zhou wrote:\n> Hi,\n> \n> To be more specific, I list my calculation here:\n> The timing shown in psql may include: plan + execution + copying to\n> result set in backend (does this step exist?) + transferring data to\n> client via socket.\nCorrect.\n\n> Then I want to know what's the time shown in pg_stat_statement and\n> EXPLAIN ANALYZE in terms of the above mentioned parts. And why are the\n> gap is almost 10 times (100 ms v.s. 1 second)? As a comparison,\n> transferring same amount of data with unix domain socket should cost\n> only a very small fraction of this (almost negligible), according to\n> my other performance tests.\nYea, you proved my quick theory wrong.\n\n> And I don't think the plan time plays an important role here in\n> EXPLAIN ANALYZE, because the command itself costs similar time to the\n> \"Total runtime\" as shown in psql (timing on), which means the plan is\n> too simple to take any significant part of time in this case.\nSounds like that.\n\nIt would be interesting to see the time difference between:\nCOPY (SELECT * FROM blub) TO '/tmp/somefile';\nCOPY (SELECT * FROM blub) TO '/tmp/somefile' BINARY;\nEXPLAIN ANALYZE SELECT * FROM blub;\n\nAndres\n",
"msg_date": "Wed, 15 Feb 2012 12:36:01 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: [HACKERS] client performance v.s. server statistics"
},
{
"msg_contents": ">>So I want to know what exactly the operations are involved in the server\nside statistics in EXPLAIN ANALYZE\n\nIt gives the time for execution of Query on server. According to my\nknowledge, it doesn't account for data to send over TCP.\n\n \n\nFrom: Zhou Han [mailto:[email protected]] \nSent: Wednesday, February 15, 2012 12:32 PM\nTo: Amit Kapila\nCc: [email protected]\nSubject: Re: [HACKERS] client performance v.s. server statistics\n\n \n\nHi,\n\nI have tried unix domain socket and the performance is similar with TCP\nsocket. It is MIPS architecture so memory copy to/from kernel can occupy\nmuch time, and apparently using unit domain socket has no difference than\nTCP in terms of memory copy. \n\nBut it is still unbelievable for the ten-fold gap between the client side\nstatistic and the server side statistics. So I want to know what exactly the\noperations are involved in the server side statistics in EXPLAIN ANALYZE.\nMay I check the code later on when I get time.\n\nFor the query itself, it was just for performance comparison. There are\nother index based queries, which are of course much faster, but still result\nin similar ten-fold of time gap between client side and server side\nstatistics.\n\nI am thinking of non-kernel involved client interface, is there such an\noption, or do I have to develop one from scratch?\n\nBest regards,\nHan\n\nOn Wed, Feb 15, 2012 at 1:23 PM, Amit Kapila <[email protected]> wrote:\n\n>>So, is it client interface (ODBC, libpq) 's cost mainly due to TCP?\n\n \n\nThe difference as compare to your embedded DB you are seeing is mainly seems\nto be due to TCP.\n\nOne optimization you can use is to use Unix-domain socket mode of\nPostgreSQL. You can refer unix_socket_directory parameter in postgresql.conf\nand other related parameters. \n\nI am suggesting you this as earlier you were using embedded DB, so your\nclient/server should be on same machine. If now this is not the case then it\nwill not work.\n\n \n\nCan you please clarify some more things like\n\n1. After doing sequence scan, do you need all the records in client for\nwhich seq. scan is happening. If less records then why you have not created\nindex.\n\n2. What is exact scenario for fetching records\n\n \n\n \n\n \n\[email protected]\n[mailto:[email protected]] On Behalf Of Zhou Han\nSent: Wednesday, February 15, 2012 9:30 AM\nTo: [email protected]\nSubject: [HACKERS] client performance v.s. server statistics\n\n \n\nHi,\n\nI am checking a performance problem encountered after porting old embeded DB\nto postgreSQL. While the system is real-time sensitive, we are concerning\nfor per-query cost. In our environment sequential scanning (select * from\n...) for a table with tens of thousands of record costs 1 - 2 seconds,\nregardless of using ODBC driver or the \"timing\" result shown in psql client\n(which in turn, relies on libpq). However, using EXPLAIN ANALYZE, or\nchecking the statistics in pg_stat_statement view, the query costs only less\nthan 100ms.\n\nrface (ODBC, libpq) 's cost mainly due to TCP? Has the pg_stat_statement or\nEXPLAIN ANALYZE included the cost of copying tuples from shared buffers to\nresult sets?\n\nCould you experts share your views on this big gap? And any suggestions to\noptimise?\n\nP.S. 
In our original embeded DB a \"fastpath\" interface is provided to read\ndirectly from shared memory for the records, thus provides extremely\nrealtime access (of course sacrifice some other features such as\nconsistency).\n\nBest regards,\nHan\n",
"msg_date": "Wed, 15 Feb 2012 20:54:27 +0530",
"msg_from": "Amit Kapila <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: client performance v.s. server statistics"
},
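A quick way to separate the two components Amit describes (server-side execution versus converting and shipping the result rows) is to time the same scan with and without returning the full result set. This is only a minimal sketch, assuming a hypothetical table t; \timing in psql reports the elapsed time as seen by the client:

\timing on

-- client view: includes output conversion and socket transfer of every row
SELECT * FROM t;

-- same scan, but only a single value crosses the wire
SELECT count(*) FROM t;

-- server view: EXPLAIN ANALYZE executes the plan but discards the rows,
-- so nothing is converted or transmitted, and planning time is not included either
EXPLAIN ANALYZE SELECT * FROM t;

The gap between the first two timings approximates the per-row output and transfer cost that the thread is trying to pin down.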
{
"msg_contents": "Hi Andres,\n\nGood hint!\n\nDBRNWHSB=# COPY (SELECT * FROM my_large) TO '/tmp/somefile';\nCOPY 73728\nTime: 1405.976 ms\nDBRNWHSB=# COPY (SELECT * FROM my_large) TO '/tmp/somefile_binary' BINARY ;\nCOPY 73728\nTime: 840.987 ms\nDBRNWHSB=# EXPLAIN ANALYZE SELECT * FROM my_large;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Seq Scan on my_large (cost=0.00..1723.78 rows=80678 width=59)\n(actual time=0.036..114.400 rows=73728 loops=1)\n Total runtime: 171.561 ms\n(2 rows)\n\nTime: 172.523 ms\nDBRNWHSB=# SELECT * FROM my_large;\n...\nTime: 1513.274 ms\n\nIn this test the record number is 73728, each with tens of bytes. The\nsize of somefile is 5,455,872, and the size of somefile_binary is even\nmore: 6,782,997. However, BINARY COPY to memory file costs lower, so\nit means something else, e.g. result preparing is taking CPU time. But\neven the BINARY COPY still takes much more time than the ANALYZE:\n840ms v.s. 172ms. So I guess most time is spent in preparing +\ntransferring result in backend, and this part of time is not counted\nin the ANALYZE or pg_stat_statement statistics.\n\nIf this assumption is true, then is it possible to optimise towards\nthe result preparing and transferring in backend? Or is there any\n\"bulk\" output operation already supported in some existing PostgreSQL\noptions?\n\nBest regards,\nHan\n\nOn Wed, Feb 15, 2012 at 7:36 PM, Andres Freund <[email protected]> wrote:\n> On Wednesday, February 15, 2012 12:33:13 PM Han Zhou wrote:\n>> Hi,\n>>\n>> To be more specific, I list my calculation here:\n>> The timing shown in psql may include: plan + execution + copying to\n>> result set in backend (does this step exist?) + transferring data to\n>> client via socket.\n> Correct.\n>\n>> Then I want to know what's the time shown in pg_stat_statement and\n>> EXPLAIN ANALYZE in terms of the above mentioned parts. And why are the\n>> gap is almost 10 times (100 ms v.s. 1 second)? As a comparison,\n>> transferring same amount of data with unix domain socket should cost\n>> only a very small fraction of this (almost negligible), according to\n>> my other performance tests.\n> Yea, you proved my quick theory wrong.\n>\n>> And I don't think the plan time plays an important role here in\n>> EXPLAIN ANALYZE, because the command itself costs similar time to the\n>> \"Total runtime\" as shown in psql (timing on), which means the plan is\n>> too simple to take any significant part of time in this case.\n> Sounds like that.\n>\n> It would be interesting to see the time difference between:\n> COPY (SELECT * FROM blub) TO '/tmp/somefile';\n> COPY (SELECT * FROM blub) TO '/tmp/somefile' BINARY;\n> EXPLAIN ANALYZE SELECT * FROM blub;\n>\n> Andres\n\n\n\n-- \nBest regards,\nHan\n",
"msg_date": "Thu, 16 Feb 2012 09:44:53 +0800",
"msg_from": "Han Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: [HACKERS] client performance v.s. server statistics"
}
] |
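Following up on Han's closing question in the thread above about a bulk output path: two mechanisms that already exist are COPY streamed to the client and cursor-based fetching, both of which let the client consume the result in larger chunks. Whether either helps on this MIPS target would need measuring; the sketch below reuses the my_large table from the thread, with an arbitrary batch size and a placeholder file name:

-- psql's \copy runs COPY ... TO STDOUT on the server and writes the stream to a client-side file
\copy (SELECT * FROM my_large) TO '/tmp/local_copy'

-- or pull the rows through a cursor in batches that the client controls
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM my_large;
FETCH 10000 FROM c;   -- repeat until no rows are returned
CLOSE c;
COMMIT;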
[
{
"msg_contents": "Hello,\n\nI'm working on a system that detects changes in someone's filesystem.\nWe first scan the entire filesystem which is dumped into the table \n'filesystem' containing the full_path of every file and it's \ncorresponding md5 hash.\nWe then do subsequent scans of the filesystem which are dumped into a \ntemporary table and we would like to find out which files are not \npresent anymore. To do so, we update the field 'dead' in the \n'filesystem' table for every row that does not exist in the 'temporary' \ntable.\n\n\\d filesystem;\n Table � public.filesystem �\n Colonne | Type | Modificateurs\n------------+------------------------+----------------------\n hash | character(32) |\n full_name | text |\n dead | integer | default 0\nIndex :\n � filesystem_unique � UNIQUE, btree (hash)\n\n\\d temporary;\n Table � public.filesystem �\n Colonne | Type | Modificateurs\n------------+------------------------+----------------------\n hash | character(32) |\n full_name | text |\nIndex :\n � temporary_unique � UNIQUE, btree (hash)\n\nCurrently, i use the following query to update the filesystem table with \nthe missing files :\nUPDATE filesystem SET dead=some_value WHERE dead=0 AND (SELECT 1 FROM \ntemporary AS t WHERE t.hash=filesystem.hash LIMIT 1) IS NULL\n\nThis works correctly for regular filesystems. But when the 'filesystem' \ntable contains more than a few million rows, the update query can take days.\n\nHere's an explain of the query :\n=# UPDATE filesystem SET dead=some_value WHERE dead=0 AND (SELECT 1 FROM \ntemporary AS t WHERE t.hash=filesystem.hash LIMIT 1) IS NULL\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Seq Scan on filesystem (cost=0.00..97536479.02 rows=25747 width=241)\n Filter: ((dead = 0) AND ((subplan) IS NULL))\n SubPlan\n -> Limit (cost=0.00..9.53 rows=1 width=0)\n -> Index Scan using temporary_hash on temporary t \n(cost=0.00..9.53 rows=1 width=0)\n Index Cond: (hash = $0)\n(6 lignes)\n\nIs there a better way to update a table if it doesn't join another table ?\n\nBest Regards,\n\nGabriel Biberian\n",
"msg_date": "Wed, 15 Feb 2012 19:33:49 +0100",
"msg_from": "Gabriel Biberian <[email protected]>",
"msg_from_op": true,
"msg_subject": "UPDATE on NOT JOIN"
},
{
"msg_contents": "On Wed, Feb 15, 2012 at 20:33, Gabriel Biberian\n<[email protected]> wrote:\n> Currently, i use the following query to update the filesystem table with the\n> missing files :\n> UPDATE filesystem SET dead=some_value WHERE dead=0 AND (SELECT 1 FROM\n> temporary AS t WHERE t.hash=filesystem.hash LIMIT 1) IS NULL\n\nI don't know if this solves your problem entirely, but an obvious\nimprovement would be using the NOT EXISTS (SELECT ...) construct:\n\nUPDATE filesystem SET dead=some_value WHERE dead=0 AND NOT EXISTS\n(SELECT 1 FROM temporary AS t WHERE t.hash=filesystem.hash);\n\nPostgreSQL 8.4+ can optimize this into an \"anti join\" query (you\ndidn't mention what version you are using).\n\nAlso, if your hardware isn't very limited, you should increase the\nwork_mem setting from the default (1MB).\n\nIf the above doesn't help significantly, please post the full EXPLAIN\nANALYZE output.\n\nRegards,\nMarti\n",
"msg_date": "Wed, 15 Feb 2012 21:12:04 +0200",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATE on NOT JOIN"
}
] |
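To check whether Marti's NOT EXISTS rewrite really plans as an anti join on the server in question, the statement can be explained inside a transaction and rolled back, since EXPLAIN ANALYZE actually executes an UPDATE. This is a hedged sketch against the tables from the thread; dead = 1 stands in for the some_value placeholder and the work_mem figure is arbitrary:

BEGIN;
SET LOCAL work_mem = '256MB';   -- give the hash anti join room to build its hash table in memory
EXPLAIN ANALYZE
UPDATE filesystem f
SET dead = 1
WHERE f.dead = 0
  AND NOT EXISTS (SELECT 1 FROM temporary t WHERE t.hash = f.hash);
ROLLBACK;                       -- discard the trial run

On 8.4 or later the plan should show a Hash Anti Join in place of the per-row subplan from the original post.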
[
{
"msg_contents": "hi all\n\nIn my query I have two tables (edges 3,600,000 tuples and nodes 1,373,00 tuples), where I want to obtain all edges,whose target vertex is within a given euclidean range starting from a query point q:\n\nthe query is formulated as following:\nSELECT \n E.ID, E.SOURCE,E.SOURCE_MODE,E.TARGET,E.TARGET_MODE,E.LENGTH,E.EDGE_MODE,E.ROUTE_ID,E.SOURCE_OUTDEGREE \nFROM it_edges E, it_nodes N \nWHERE \n E.TARGET=N.ID AND \n ST_DWITHIN(N.GEOMETRY,ST_PointFromText('POINT( 706924.6775765815 -509252.7248541778)',31370),1860.0)\n\nthe index are set on:\n- nodes: unique index on ID defined as primary key and a spatial index on the geometry column\n- edges: btree index on TARGET\n\nthe selectivity of the ST_DWITHIN is 0.07 of the total nodes table \nand 0.08% of the total edes table\n\nThe query plan says, that a sequential scan is performed on the edge table. I consider it strange that he is not accessing on the (btree) index one the edge table.\n\nAny idea or suggestion?\n\nOutput Query plan: \n\n\"Hash Join (cost=7007.11..149884.46 rows=1 width=34) (actual time=6.219..3254.692 rows=3126 loops=1)\"\n\" Hash Cond: ((e.target)::numeric = n.id)\"\n\" -> Seq Scan on it_edges e (cost=0.00..124621.23 rows=3651223 width=34) (actual time=0.012..2403.982 rows=3651223 loops=1)\"\n\" -> Hash (cost=7007.09..7007.09 rows=1 width=8) (actual time=5.613..5.613 rows=1028 loops=1)\"\n\" -> Bitmap Heap Scan on it_nodes n (cost=63.94..7007.09 rows=1 width=8) (actual time=1.213..5.025 rows=1028 loops=1)\"\n\" Recheck Cond: (geometry && '01030000208A7A000001000000050000005451EB5A51842541702C40E622321FC15451EB5A51842541702C40E602F81EC15451EB5A61A12541702C40E602F81EC15451EB5A61A12541702C40E622321FC15451EB5A51842541702C40E622321FC1'::geometry)\"\n\" Filter: (('01010000208A7A00005451EB5AD9922541702C40E612151FC1'::geometry && st_expand(geometry, 1860::double precision)) AND _st_dwithin(geometry, '01010000208A7A00005451EB5AD9922541702C40E612151FC1'::geometry, 1860::double precision))\"\n\" -> Bitmap Index Scan on it_nodes_geometry_gist (cost=0.00..63.94 rows=1959 width=0) (actual time=1.153..1.153 rows=1237 loops=1)\"\n\" Index Cond: (geometry && '01030000208A7A000001000000050000005451EB5A51842541702C40E622321FC15451EB5A51842541702C40E602F81EC15451EB5A61A12541702C40E602F81EC15451EB5A61A12541702C40E622321FC15451EB5A51842541702C40E622321FC1'::geometry)\"\n\"Total runtime: 3254.927 ms\"\n\n\n\n",
"msg_date": "Wed, 15 Feb 2012 23:40:13 +0100",
"msg_from": "Markus Innerebner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer is not choosing index"
},
{
"msg_contents": "Markus Innerebner <[email protected]> writes:\n> The query plan says, that a sequential scan is performed on the edge table. I consider it strange that he is not accessing on the (btree) index one the edge table.\n\nThis suggests that you have a datatype mismatch:\n\n> \" Hash Cond: ((e.target)::numeric = n.id)\"\n\nYour index is presumably on e.target, not e.target::numeric, so it's not\napplicable. Try to make the join columns the same datatype.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Feb 2012 18:11:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer is not choosing index "
},
{
"msg_contents": "Hi Tom,\n\nthanks for your suggestion:\n\n\n> Markus Innerebner <[email protected]> writes:\n>> The query plan says, that a sequential scan is performed on the edge table. I consider it strange that he is not accessing on the (btree) index one the edge table.\n> \n> This suggests that you have a datatype mismatch:\n> \n>> \" Hash Cond: ((e.target)::numeric = n.id)\"\n> \n> Your index is presumably on e.target, not e.target::numeric, so it's not\n> applicable. Try to make the join columns the same datatype.\n\nindeed: the id column in the node table had as type numeric, while in edges the target is integer.\n\nAfter changing it, the index is used again.\n\nmany thanks\n\n\ncheers Markus",
"msg_date": "Thu, 16 Feb 2012 07:54:37 +0100",
"msg_from": "Markus Innerebner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer is not choosing index "
}
] |
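For anyone hitting the same symptom, the fix from this thread can be applied in two ways: align the column types, or leave them alone and index the expression the planner is actually comparing. A hedged sketch against the tables described above; it assumes the numeric ids are whole numbers that fit in an integer, and the index name is made up:

-- option 1: what the thread settled on, make the join keys the same type
ALTER TABLE it_nodes ALTER COLUMN id TYPE integer;

-- option 2: keep the numeric column and index the cast that appears in the join condition
CREATE INDEX it_edges_target_numeric_idx ON it_edges ((target::numeric));

Either way the condition stops being (e.target)::numeric = n.id against a plain integer index, and the planner can use an index scan on the edge table again.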
[
{
"msg_contents": "Comparing\nSELECT DISTINCT(user_id) FROM blocks JOIN seen_its USING (user_id) WHERE\nseen_its.created BETWEEN (now()::date - interval '8 days')::timestamp AND\nnow()::date::timestamp\nto\nSELECT DISTINCT(user_id) FROM seen_its WHERE created BETWEEN (now()::date -\ninterval '8 days')::timestamp AND now()::date::timestamp\nthe difference is 100x.\n\nHere are my tables:\n\nCREATE TABLE seen_its (\n user_id character(24) NOT NULL,\n moment_id character(24) NOT NULL,\n created timestamp without time zone,\n inserted timestamp without time zone DEFAULT now(),\n CONSTRAINT seen_its_pkey PRIMARY KEY (user_id , moment_id )\n) WITH ( OIDS=FALSE );\n\nCREATE INDEX seen_its_created_idx ON seen_its USING btree (created );\n\nCREATE INDEX seen_its_user_id_idx ON seen_its USING btree (user_id );\n\nCREATE TABLE blocks (\n block_id character(24) NOT NULL,\n user_id character(24) NOT NULL,\n created timestamp with time zone,\n locale character varying,\n shared boolean,\n private boolean,\n moment_type character varying NOT NULL,\n user_agent character varying,\n inserted timestamp without time zone NOT NULL DEFAULT now(),\n networks character varying[],\n lnglat point,\n timezone character varying,\n geohash character varying(20),\n CONSTRAINT blocks_pkey PRIMARY KEY (block_id )\n) WITH ( OIDS=FALSE );\n\nCREATE INDEX blocks_created_at_timezone_idx ON blocks USING btree\n(timezone(timezone::text, created) );\n\nCREATE INDEX blocks_created_idx ON blocks USING btree (created DESC\nNULLS LAST);\nCREATE INDEX blocks_geohash_idx ON blocks USING btree (geohash );\nCREATE INDEX blocks_timezone_idx ON blocks USING btree (timezone );\nCREATE INDEX blocks_user_id_idx ON blocks USING btree (user_id );\n\nMy blocks table has about 17M rows in it. My seen_its table has 1.9M rows\nin it (though that is expected to grow into the billions).\n\nHere is the EXPLAIN: *http://explain.depesz.com/s/ley*\n\nI'm using PostgreSQL 9.0.6 on i486-pc-linux-gnu, compiled by GCC\ngcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 32-bit\n\nMy random_page_cost is 2 and yet it still insists on using Seq Scan on\nblocks.\n\nWhenever I use my blocks table, this seems to happen. I'm not sure what's\nwrong.\n\nAny help would be much appreciated.\n\nThank you,\n-Alessandro\n\nComparing SELECT DISTINCT(user_id) FROM blocks JOIN seen_its USING (user_id) WHERE seen_its.created BETWEEN (now()::date - interval '8 days')::timestamp AND now()::date::timestamptoSELECT DISTINCT(user_id) FROM seen_its WHERE created BETWEEN (now()::date - interval '8 days')::timestamp AND now()::date::timestamp\nthe difference is 100x. 
\nHere are my tables:CREATE TABLE seen_its ( user_id character(24) NOT NULL, moment_id character(24) NOT NULL, created timestamp without time zone,\n inserted timestamp without time zone DEFAULT now(), CONSTRAINT seen_its_pkey PRIMARY KEY (user_id , moment_id )) WITH ( OIDS=FALSE );CREATE INDEX seen_its_created_idx ON seen_its USING btree (created );\nCREATE INDEX seen_its_user_id_idx ON seen_its USING btree (user_id );CREATE TABLE blocks ( block_id character(24) NOT NULL, user_id character(24) NOT NULL,\n created timestamp with time zone, locale character varying, shared boolean, private boolean, moment_type character varying NOT NULL, user_agent character varying,\n inserted timestamp without time zone NOT NULL DEFAULT now(), networks character varying[], lnglat point, timezone character varying, geohash character varying(20),\n CONSTRAINT blocks_pkey PRIMARY KEY (block_id )) WITH ( OIDS=FALSE );CREATE INDEX blocks_created_at_timezone_idx ON blocks USING btree (timezone(timezone::text, created) );\nCREATE INDEX blocks_created_idx ON blocks USING btree (created DESC NULLS LAST);CREATE INDEX blocks_geohash_idx ON blocks USING btree (geohash );CREATE INDEX blocks_timezone_idx ON blocks USING btree (timezone );\nCREATE INDEX blocks_user_id_idx ON blocks USING btree (user_id );My blocks table has about 17M rows in it. My seen_its table has 1.9M rows in it (though that is expected to grow into the billions). \nHere is the EXPLAIN: http://explain.depesz.com/s/leyI'm using PostgreSQL 9.0.6 on i486-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 32-bit\nMy random_page_cost is 2 and yet it still insists on using Seq Scan on blocks.Whenever I use my blocks table, this seems to happen. I'm not sure what's wrong. \nAny help would be much appreciated.Thank you,-Alessandro",
"msg_date": "Fri, 17 Feb 2012 10:34:50 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why so slow?"
},
{
"msg_contents": "On 02/17/2012 10:34 AM, Alessandro Gagliardi wrote:\n> Comparing\n> SELECT DISTINCT(user_id) FROM blocks JOIN seen_its USING (user_id) \n> WHERE seen_its.created BETWEEN (now()::date - interval '8 \n> days')::timestamp AND now()::date::timestamp\n> to\n> SELECT DISTINCT(user_id) FROM seen_its WHERE created BETWEEN \n> (now()::date - interval '8 days')::timestamp AND now()::date::timestamp\n> the difference is 100x.\n> ...\nThough I could figure it out, it would be helpful to actually specify \nwhich query is faster and to post the explain of *both* queries.\n\nBut in general, it is not terribly unusual to find that rewriting a \nquery can lead the planner to generate a superior plan. Trying and \ntesting different ways of writing a query is a standard tuning technique.\n\nThere are also version-specific issues with some versions of PostgreSQL \npreferring ...where foo in (select... and others preferring ...where \nexists (select...\n\nIf you are planning to ramp up to high volumes it is also *very* \nimportant to test and tune using the size of database you plan to have \non the hardware you will use in production. You cannot extrapolate from \na dev database on an i486 (?!?) machine to a production server with more \nspindles, different RAID setup, different CPU, more cores, vastly more \nmemory, etc.\n\nIn the case of your queries, the second one eliminates a join and gives \nthe planner an easy way to optimize using the available indexes so I'm \nnot surprised it's faster.\n\nNote: I am guessing that your seen_its table just grows and grows but is \nrarely, if ever, modified. If it is basically a log-type table it will \nbe a prime candidate for partitioning on date and queries like this will \nonly need to access a couple relatively small child tables instead of \none massive one.\n\nCheers,\nSteve\n\n",
"msg_date": "Fri, 17 Feb 2012 11:21:47 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why so slow?"
},
{
"msg_contents": "Your guess about the seen_its table growing is accurate and applies to the\nblocks table as well. Partitioning on date is probably a good idea and\nsomething that I've been meaning to investigate. I'm not surprised that the\nJOIN makes it slower, I'm surprised by the magnitude of how much slower it\nis.\n\nThis is my analytics database (not dev) so no extrapolation is necessary\nexcept in that I know the tables will grow in size. The database is hosted\non AWS and maintained by Heroku.\n\nOn Fri, Feb 17, 2012 at 11:21 AM, Steve Crawford <\[email protected]> wrote:\n\n> On 02/17/2012 10:34 AM, Alessandro Gagliardi wrote:\n>\n>> Comparing\n>> SELECT DISTINCT(user_id) FROM blocks JOIN seen_its USING (user_id) WHERE\n>> seen_its.created BETWEEN (now()::date - interval '8 days')::timestamp AND\n>> now()::date::timestamp\n>> to\n>> SELECT DISTINCT(user_id) FROM seen_its WHERE created BETWEEN (now()::date\n>> - interval '8 days')::timestamp AND now()::date::timestamp\n>> the difference is 100x.\n>> ...\n>>\n> Though I could figure it out, it would be helpful to actually specify\n> which query is faster and to post the explain of *both* queries.\n>\n> But in general, it is not terribly unusual to find that rewriting a query\n> can lead the planner to generate a superior plan. Trying and testing\n> different ways of writing a query is a standard tuning technique.\n>\n> There are also version-specific issues with some versions of PostgreSQL\n> preferring ...where foo in (select... and others preferring ...where exists\n> (select...\n>\n> If you are planning to ramp up to high volumes it is also *very* important\n> to test and tune using the size of database you plan to have on the\n> hardware you will use in production. You cannot extrapolate from a dev\n> database on an i486 (?!?) machine to a production server with more\n> spindles, different RAID setup, different CPU, more cores, vastly more\n> memory, etc.\n>\n> In the case of your queries, the second one eliminates a join and gives\n> the planner an easy way to optimize using the available indexes so I'm not\n> surprised it's faster.\n>\n> Note: I am guessing that your seen_its table just grows and grows but is\n> rarely, if ever, modified. If it is basically a log-type table it will be a\n> prime candidate for partitioning on date and queries like this will only\n> need to access a couple relatively small child tables instead of one\n> massive one.\n>\n> Cheers,\n> Steve\n>\n>\n\nYour guess about the seen_its table growing is accurate and applies to the blocks table as well. Partitioning on date is probably a good idea and something that I've been meaning to investigate. I'm not surprised that the JOIN makes it slower, I'm surprised by the magnitude of how much slower it is.\nThis is my analytics database (not dev) so no extrapolation is necessary except in that I know the tables will grow in size. The database is hosted on AWS and maintained by Heroku. 
\nOn Fri, Feb 17, 2012 at 11:21 AM, Steve Crawford <[email protected]> wrote:\nOn 02/17/2012 10:34 AM, Alessandro Gagliardi wrote:\n\nComparing\nSELECT DISTINCT(user_id) FROM blocks JOIN seen_its USING (user_id) WHERE seen_its.created BETWEEN (now()::date - interval '8 days')::timestamp AND now()::date::timestamp\nto\nSELECT DISTINCT(user_id) FROM seen_its WHERE created BETWEEN (now()::date - interval '8 days')::timestamp AND now()::date::timestamp\nthe difference is 100x.\n...\n\nThough I could figure it out, it would be helpful to actually specify which query is faster and to post the explain of *both* queries.\n\nBut in general, it is not terribly unusual to find that rewriting a query can lead the planner to generate a superior plan. Trying and testing different ways of writing a query is a standard tuning technique.\n\nThere are also version-specific issues with some versions of PostgreSQL preferring ...where foo in (select... and others preferring ...where exists (select...\n\nIf you are planning to ramp up to high volumes it is also *very* important to test and tune using the size of database you plan to have on the hardware you will use in production. You cannot extrapolate from a dev database on an i486 (?!?) machine to a production server with more spindles, different RAID setup, different CPU, more cores, vastly more memory, etc.\n\nIn the case of your queries, the second one eliminates a join and gives the planner an easy way to optimize using the available indexes so I'm not surprised it's faster.\n\nNote: I am guessing that your seen_its table just grows and grows but is rarely, if ever, modified. If it is basically a log-type table it will be a prime candidate for partitioning on date and queries like this will only need to access a couple relatively small child tables instead of one massive one.\n\nCheers,\nSteve",
"msg_date": "Fri, 17 Feb 2012 14:20:29 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why so slow?"
},
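Since date partitioning of seen_its comes up in the reply above as a likely next step, here is a minimal sketch of what that looks like on the 9.0 series (inheritance plus CHECK constraints); the child name and date range are illustrative only:

-- one child table per month, with a CHECK constraint the planner can use for exclusion
CREATE TABLE seen_its_2012_02 (
    CHECK (created >= DATE '2012-02-01' AND created < DATE '2012-03-01')
) INHERITS (seen_its);

CREATE INDEX seen_its_2012_02_created_idx ON seen_its_2012_02 (created);

With constraint_exclusion at its default of partition, a query restricted on created only scans the children whose ranges overlap the WHERE clause; new rows have to be routed to the right child by a trigger or by the application.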
{
"msg_contents": "On Feb 17, 2012 8:35 PM, \"Alessandro Gagliardi\" <[email protected]> wrote:\n> Here is the EXPLAIN: http://explain.depesz.com/s/ley\n>\n> I'm using PostgreSQL 9.0.6 on i486-pc-linux-gnu, compiled by GCC\ngcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 32-bit\n>\n> My random_page_cost is 2 and yet it still insists on using Seq Scan on\nblocks.\n\nAs could be inferred from the row counts, it's slow because its joining and\nthen aggregating a quarter of the blocks table. The hash join with its\nsequential scan is probably the correct choice for that type of join, it's\nthe join itself that should be optimized out. The optimizer doesn't figure\nout that the join can be turned into a semi join if the output is\naggregated with distinct and is from only one of the tables (in this case,\nbecause the output is the join key, it can be from either table).\n\nTo make the optimizers job easier you can rewrite it as a semi-join\nexplicitly:\nSELECT DISTINCT(user_id) FROM seen_its WHERE EXISTS (SELECT 1 FROM blocks\nWHERE blocks.user_id = seen_its.user_id) AND seen_its.created BETWEEN\n(now()::date - interval '8 days')::timestamp AND now()::date::timestamp\n\n--\nAnts Aasma\n",
"msg_date": "Sat, 18 Feb 2012 07:29:15 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why so slow?"
},
{
"msg_contents": "Ah, that did make a big difference! It went from taking 10x as long to\ntaking only 1.5x as long (about what I would have expected, if not\nbetter.) Thank you!\n\nOn Fri, Feb 17, 2012 at 9:29 PM, Ants Aasma <[email protected]> wrote:\n\n> On Feb 17, 2012 8:35 PM, \"Alessandro Gagliardi\" <[email protected]>\n> wrote:\n> > Here is the EXPLAIN: http://explain.depesz.com/s/ley\n> >\n> > I'm using PostgreSQL 9.0.6 on i486-pc-linux-gnu, compiled by GCC\n> gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 32-bit\n> >\n> > My random_page_cost is 2 and yet it still insists on using Seq Scan on\n> blocks.\n>\n> As could be inferred from the row counts, it's slow because its joining\n> and then aggregating a quarter of the blocks table. The hash join with its\n> sequential scan is probably the correct choice for that type of join, it's\n> the join itself that should be optimized out. The optimizer doesn't figure\n> out that the join can be turned into a semi join if the output is\n> aggregated with distinct and is from only one of the tables (in this case,\n> because the output is the join key, it can be from either table).\n>\n> To make the optimizers job easier you can rewrite it as a semi-join\n> explicitly:\n> SELECT DISTINCT(user_id) FROM seen_its WHERE EXISTS (SELECT 1 FROM blocks\n> WHERE blocks.user_id = seen_its.user_id) AND seen_its.created BETWEEN\n> (now()::date - interval '8 days')::timestamp AND now()::date::timestamp\n>\n> --\n> Ants Aasma\n>\n",
"msg_date": "Mon, 20 Feb 2012 13:14:37 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why so slow?"
}
] |
[
{
"msg_contents": "Hello all!\n\nI have a very simple query that I am trying to wrap into a function:\n\nSELECT gs.geo_shape_id AS gid,\ngs.geocode\nFROM geo_shapes gs\nWHERE gs.geocode = 'xyz'\nAND geo_type = 1\nGROUP BY gs.geography, gs.geo_shape_id, gs.geocode;\n\nThis query runs in about 10 milliseconds.\n\nNow my goal is to wrap the query in a function:\n\nI create a return type:\nCREATE TYPE geocode_carrier_route_by_geocode_result AS\n (gid integer,\n geocode character varying(9));\nALTER TYPE geocode_carrier_route_by_geocode_result\n OWNER TO root;\n\n..and the function\nCREATE OR REPLACE FUNCTION geocode_carrier_route_by_geocode(geo_code\ncharacter(9))\n RETURNS SETOF geocode_carrier_route_by_geocode_result AS\n$BODY$\n\nBEGIN\n\nRETURN QUERY EXECUTE\n'SELECT gs.geo_shape_id AS gid,\ngs.geocode\nFROM geo_shapes gs\nWHERE gs.geocode = $1\nAND geo_type = 1\nGROUP BY gs.geography, gs.geo_shape_id, gs.geocode'\nUSING geo_code;\n\nEND;\n\n$BODY$\n LANGUAGE plpgsql STABLE;\nALTER FUNCTION geocode_carrier_route_by_geocode(character)\n OWNER TO root;\n\nExecute the function: select * from geocode_carrier_route_by_geocode('xyz');\n\nThis query takes 500 milliseconds to run. My question of course is why?\n\nRelated: If I create a function and assign LANGUAGE 'sql', my function runs\nin the expected 10 milliseconds. Is there some overhead to using the\nplpgsql language?\n\nThanks for any help in clarifying my understanding!\n\nHello all!I have a very simple query that I am trying to wrap into a function:SELECT gs.geo_shape_id AS gid, gs.geocode\nFROM geo_shapes gsWHERE gs.geocode = 'xyz'AND geo_type = 1 GROUP BY gs.geography, gs.geo_shape_id, gs.geocode;This query runs in about 10 milliseconds.\nNow my goal is to wrap the query in a function:I create a return type:CREATE TYPE geocode_carrier_route_by_geocode_result AS (gid integer,\n geocode character varying(9));ALTER TYPE geocode_carrier_route_by_geocode_result OWNER TO root;..and the functionCREATE OR REPLACE FUNCTION geocode_carrier_route_by_geocode(geo_code character(9))\n RETURNS SETOF geocode_carrier_route_by_geocode_result AS$BODY$BEGINRETURN QUERY EXECUTE'SELECT gs.geo_shape_id AS gid,\n gs.geocodeFROM geo_shapes gsWHERE gs.geocode = $1AND geo_type = 1 GROUP BY gs.geography, gs.geo_shape_id, gs.geocode'\nUSING geo_code;END;$BODY$ LANGUAGE plpgsql STABLE;ALTER FUNCTION geocode_carrier_route_by_geocode(character) OWNER TO root;\nExecute the function: select * from geocode_carrier_route_by_geocode('xyz');This query takes 500 milliseconds to run. My question of course is why?\nRelated: If I create a function and assign LANGUAGE 'sql', my function runs in the expected 10 milliseconds. Is there some overhead to using the plpgsql language?Thanks for any help in clarifying my understanding!",
"msg_date": "Sat, 18 Feb 2012 09:50:28 -0500",
"msg_from": "Steve Horn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query slow as function"
},
{
"msg_contents": "On Sat, Feb 18, 2012 at 8:50 AM, Steve Horn <[email protected]> wrote:\n> Hello all!\n>\n> I have a very simple query that I am trying to wrap into a function:\n>\n> SELECT gs.geo_shape_id AS gid,\n> gs.geocode\n> FROM geo_shapes gs\n> WHERE gs.geocode = 'xyz'\n> AND geo_type = 1\n> GROUP BY gs.geography, gs.geo_shape_id, gs.geocode;\n>\n> This query runs in about 10 milliseconds.\n>\n> Now my goal is to wrap the query in a function:\n>\n> I create a return type:\n> CREATE TYPE geocode_carrier_route_by_geocode_result AS\n> (gid integer,\n> geocode character varying(9));\n> ALTER TYPE geocode_carrier_route_by_geocode_result\n> OWNER TO root;\n>\n> ..and the function\n> CREATE OR REPLACE FUNCTION geocode_carrier_route_by_geocode(geo_code\n> character(9))\n> RETURNS SETOF geocode_carrier_route_by_geocode_result AS\n> $BODY$\n>\n> BEGIN\n>\n> RETURN QUERY EXECUTE\n> 'SELECT gs.geo_shape_id AS gid,\n> gs.geocode\n> FROM geo_shapes gs\n> WHERE gs.geocode = $1\n> AND geo_type = 1\n> GROUP BY gs.geography, gs.geo_shape_id, gs.geocode'\n> USING geo_code;\n>\n> END;\n>\n> $BODY$\n> LANGUAGE plpgsql STABLE;\n> ALTER FUNCTION geocode_carrier_route_by_geocode(character)\n> OWNER TO root;\n>\n> Execute the function: select * from geocode_carrier_route_by_geocode('xyz');\n>\n> This query takes 500 milliseconds to run. My question of course is why?\n>\n> Related: If I create a function and assign LANGUAGE 'sql', my function runs\n> in the expected 10 milliseconds. Is there some overhead to using the plpgsql\n> language?\n>\n> Thanks for any help in clarifying my understanding!\n\n\nnot overhead. it's how the plans are generated. plpgsql builds out\nthe query plan and caches it. sql language function replan the query\non every execution. caching the plan can help or hurt depending on how\nsensitive the plan is to the supplied parameters -- plpgsql can't (and\nshouldn't) use the actual parameter value when generating the plan.\nOTOH, for very long functions especially the amount of time spent in\nplan generation can really add up so plpgsql can often be faster than\nvanilla sql.\n\nto force plpgsql smarter plans, you can maybe attempt 'SET LOCAL\nenable_xxx' planner directives. pretty hacky, but maybe might help in\nyour case. also better statistics might help.\n\nmerlin\n",
"msg_date": "Mon, 20 Feb 2012 11:32:06 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow as function"
}
] |
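One way to reproduce, outside plpgsql, the parameterized-plan behaviour Merlin describes is to PREPARE the statement and compare its plan with the one produced for a literal value. A rough sketch using the query from the thread (the statement name by_geocode is made up):

PREPARE by_geocode(character varying) AS
SELECT gs.geo_shape_id AS gid, gs.geocode
FROM geo_shapes gs
WHERE gs.geocode = $1
AND geo_type = 1
GROUP BY gs.geography, gs.geo_shape_id, gs.geocode;

EXPLAIN EXECUTE by_geocode('xyz');   -- plan built without knowing the parameter value
EXPLAIN SELECT gs.geo_shape_id AS gid, gs.geocode FROM geo_shapes gs WHERE gs.geocode = 'xyz' AND geo_type = 1 GROUP BY gs.geography, gs.geo_shape_id, gs.geocode;   -- plan for the literal

If the two plans differ, the plan cached inside the plpgsql function is the likely culprit; if they match, the slowdown lies elsewhere, as it turned out to be in the follow-up thread below, where the parameter type was the problem.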
[
{
"msg_contents": "Hello all!\n\nI have a very simple query that I am trying to wrap into a function:\n\nSELECT gs.geo_shape_id AS gid,\ngs.geocode\nFROM geo_shapes gs\nWHERE gs.geocode = 'xyz'\nAND geo_type = 1\nGROUP BY gs.geography, gs.geo_shape_id, gs.geocode;\n\nThis query runs in about 10 milliseconds.\n\nNow my goal is to wrap the query in a function:\n\nI create a return type:\nCREATE TYPE geocode_carrier_route_by_geocode_result AS\n (gid integer,\n geocode character varying(9));\nALTER TYPE geocode_carrier_route_by_geocode_result\n OWNER TO root;\n\n..and the function\nCREATE OR REPLACE FUNCTION geocode_carrier_route_by_geocode(geo_code\ncharacter(9))\n RETURNS SETOF geocode_carrier_route_by_geocode_result AS\n$BODY$\n\nBEGIN\n\nRETURN QUERY EXECUTE\n'SELECT gs.geo_shape_id AS gid,\ngs.geocode\nFROM geo_shapes gs\nWHERE gs.geocode = $1\nAND geo_type = 1\nGROUP BY gs.geography, gs.geo_shape_id, gs.geocode'\nUSING geo_code;\n\nEND;\n\n$BODY$\n LANGUAGE plpgsql STABLE;\nALTER FUNCTION geocode_carrier_route_by_geocode(character)\n OWNER TO root;\n\nExecute the function: select * from geocode_carrier_route_by_geocode('xyz');\n\nThis query takes 500 milliseconds to run. My question of course is why?\n\nRelated: If I create a function and assign LANGUAGE 'sql', my function runs\nin the expected 10 milliseconds. Is there some overhead to using the\nplpgsql language?\n\nThanks for any help in clarifying my understanding!\n\nHello all!I have a very simple query that I am trying to wrap into a function:SELECT gs.geo_shape_id AS gid, gs.geocode\nFROM geo_shapes gsWHERE gs.geocode = 'xyz'AND geo_type = 1 GROUP BY gs.geography, gs.geo_shape_id, gs.geocode;This query runs in about 10 milliseconds.\nNow my goal is to wrap the query in a function:I create a return type:CREATE TYPE geocode_carrier_route_by_geocode_result AS\n (gid integer, geocode character varying(9));ALTER TYPE geocode_carrier_route_by_geocode_result OWNER TO root;..and the function\nCREATE OR REPLACE FUNCTION geocode_carrier_route_by_geocode(geo_code character(9)) RETURNS SETOF geocode_carrier_route_by_geocode_result AS$BODY$BEGIN\nRETURN QUERY EXECUTE'SELECT gs.geo_shape_id AS gid, gs.geocodeFROM geo_shapes gs\nWHERE gs.geocode = $1AND geo_type = 1 GROUP BY gs.geography, gs.geo_shape_id, gs.geocode'USING geo_code;END;$BODY$ LANGUAGE plpgsql STABLE;\nALTER FUNCTION geocode_carrier_route_by_geocode(character) OWNER TO root;Execute the function: select * from geocode_carrier_route_by_geocode('xyz');\nThis query takes 500 milliseconds to run. My question of course is why?Related: If I create a function and assign LANGUAGE 'sql', my function runs in the expected 10 milliseconds. Is there some overhead to using the plpgsql language?\nThanks for any help in clarifying my understanding!",
"msg_date": "Sat, 18 Feb 2012 10:03:46 -0500",
"msg_from": "Steve Horn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query slow as Function"
},
{
"msg_contents": "Steve Horn <[email protected]> wrote:\n\n> Execute the function: select * from geocode_carrier_route_by_geocode('xyz');\n> \n> This query takes 500 milliseconds to run. My question of course is why?\n\nWild guess:\n\nThe planner doesn't know the actual value of the input-parameter, so the\nplanner doesn't use the Index.\n\n> \n> Related: If I create a function and assign LANGUAGE 'sql', my function runs in\n> the expected 10 milliseconds. Is there some overhead to using the plpgsql\n> language?\n\nThe planner, in this case, knows the actual value.\n\n> \n> Thanks for any help in clarifying my understanding!\n\nYou can check the plan with the auto_explain - Extension, and you can\nforce the planner to create a plan based on the actual input-value by\nusing dynamic SQL (EXECUTE 'your query string' inside the function)\n\n\nAs i said, wild guess ...\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Sat, 18 Feb 2012 17:02:44 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow as Function"
},
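Andreas's auto_explain suggestion is worth spelling out, because by default the module does not log plans for statements run inside functions. A hedged sketch of a session-level setup (LOAD normally needs superuser; the same parameters can instead be set globally via shared_preload_libraries):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every plan in this session
SET auto_explain.log_nested_statements = on;  -- include statements executed inside functions
SET auto_explain.log_analyze = on;            -- include actual row counts and timings

SELECT * FROM geocode_carrier_route_by_geocode('xyz');
-- the plan chosen inside the function now shows up in the server log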
{
"msg_contents": "Andreas Kretschmer <[email protected]> writes:\n> You can check the plan with the auto_explain - Extension, and you can\n> force the planner to create a plan based on the actual input-value by\n> using dynamic SQL (EXECUTE 'your query string' inside the function)\n\nSteve *is* using EXECUTE, so that doesn't seem to be the answer. I'm\nwondering about datatype mismatches myself --- the function form is\nforcing the parameter to be char(9), which is not a constraint imposed\nin the written-out query. There are lots of other possibilities\nthough. It would be hard to say much without a self-contained example\nto try.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Feb 2012 11:37:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow as Function "
},
{
"msg_contents": "\n\nOn 02/18/2012 11:37 AM, Tom Lane wrote:\n> Andreas Kretschmer<[email protected]> writes:\n>> You can check the plan with the auto_explain - Extension, and you can\n>> force the planner to create a plan based on the actual input-value by\n>> using dynamic SQL (EXECUTE 'your query string' inside the function)\n> Steve *is* using EXECUTE, so that doesn't seem to be the answer. I'm\n> wondering about datatype mismatches myself --- the function form is\n> forcing the parameter to be char(9), which is not a constraint imposed\n> in the written-out query. There are lots of other possibilities\n> though. It would be hard to say much without a self-contained example\n> to try.\n>\n> \t\t\t\n\nHe's using EXECUTE ... USING. Does that plan with the used parameter?\n\ncheers\n\nandrew\n",
"msg_date": "Sat, 18 Feb 2012 11:41:18 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow as Function"
},
{
"msg_contents": "Tom,\nThank you for your thoughts as it lead me to the solution. My column\n\"geocode\" is defined as character varying (9), and my function parameter as\ncharacter(9). Changing the input parameter type to match the column\ndefinition caused my procedure to execute in 10 milliseconds.\n\nI was even able to refactor the method to a non-dynamic SQL call (allowing\nme to take the SQL out of a string and dropping the EXECUTE):\n\nRETURN QUERY\nSELECT gs.geo_shape_id AS gid,\ngs.geocode\nFROM geo_shapes gs\nWHERE gs.geocode = geo_code\nAND geo_type = 1\nGROUP BY gs.geography, gs.geo_shape_id, gs.geocode;\n\nThanks for your help!\n\nOn Sat, Feb 18, 2012 at 11:37 AM, Tom Lane <[email protected]> wrote:\n\n> Andreas Kretschmer <[email protected]> writes:\n> > You can check the plan with the auto_explain - Extension, and you can\n> > force the planner to create a plan based on the actual input-value by\n> > using dynamic SQL (EXECUTE 'your query string' inside the function)\n>\n> Steve *is* using EXECUTE, so that doesn't seem to be the answer. I'm\n> wondering about datatype mismatches myself --- the function form is\n> forcing the parameter to be char(9), which is not a constraint imposed\n> in the written-out query. There are lots of other possibilities\n> though. It would be hard to say much without a self-contained example\n> to try.\n>\n> regards, tom lane\n>\n\n\n\n-- \nSteve Horn\nhttp://www.stevehorn.cc\[email protected]\nhttp://twitter.com/stevehorn\n740-503-2300\n",
"msg_date": "Sat, 18 Feb 2012 12:15:05 -0500",
"msg_from": "Steve Horn <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query slow as Function"
},
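Putting Steve's two changes together (a parameter type that matches the column, and a plain RETURN QUERY instead of EXECUTE), the corrected function would look roughly like this; it is a reconstruction from the thread rather than the author's exact final code:

CREATE OR REPLACE FUNCTION geocode_carrier_route_by_geocode(geo_code character varying(9))
  RETURNS SETOF geocode_carrier_route_by_geocode_result AS
$BODY$
BEGIN
  RETURN QUERY
  SELECT gs.geo_shape_id AS gid,
         gs.geocode
  FROM geo_shapes gs
  WHERE gs.geocode = geo_code
  AND geo_type = 1
  GROUP BY gs.geography, gs.geo_shape_id, gs.geocode;
END;
$BODY$
  LANGUAGE plpgsql STABLE;

-- note: because the argument type changed, this creates a new overload; the old
-- character(9) version has to be dropped separately with DROP FUNCTION.

Since the query is no longer built as a string, plpgsql will also cache its plan across calls in the same session.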
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> He's using EXECUTE ... USING. Does that plan with the used parameter?\n\nYeah, it's supposed to. One of the open possibilities is that that's\nmalfunctioning for some reason, but without a test case to trace through\nI wouldn't jump to that as the most likely possibility.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Feb 2012 12:20:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow as Function "
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n\n> Andreas Kretschmer <[email protected]> writes:\n> > You can check the plan with the auto_explain - Extension, and you can\n> > force the planner to create a plan based on the actual input-value by\n> > using dynamic SQL (EXECUTE 'your query string' inside the function)\n> \n> Steve *is* using EXECUTE, so that doesn't seem to be the answer. I'm\n\nOhhh yes, my fault, sorry...\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Sat, 18 Feb 2012 18:32:55 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query slow as Function"
}
] |
[
{
"msg_contents": "Hi all,\n\nIn performance testing we're doing we are currently running two scenarios:\n\n 1. Starting from an empty db, therefore all operations are INSERTs.\n 2. Starting from an existing db - thereby UPDATing all records.\n\nI should also mention that the tables we are dealing with are heavily indexed.\n\nI would expect that the first scenario would give much better results than the second one as:\n\n 1. INSERT should be cheaper than UPDATE due to only dealing with one record instead of two.\n 2. We also have SELECT queries before the operation and in the first configuration, the SELECTs will be dealing with much less data for most of the run.\n\nTo our surprise, we see that the second scenario gives better results with an average processing time of an event at around %70 of the time run in the first scenario.\n\nAnyone have any ideas on why the empty db is giving worse results??\n\nMany Thanks,\nOfer\n",
"msg_date": "Mon, 20 Feb 2012 20:59:36 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insertions slower than Updates?"
},
{
"msg_contents": "Ofer Israeli <[email protected]> wrote:\n \n> INSERT should be cheaper than UPDATE due to only dealing with one \n> record instead of two.\n \n... unless the UPDATE is a HOT update, in which case the indexes\ndon't need to be touched.\n \n> Anyone have any ideas on why the empty db is giving worse\n> results??\n \nBesides the HOT updates being fast, there is the issue of having\nspace already allocated and ready for the database to use, rather\nthan needing to make calls to the OS to create and extend files as\nspace is needed.\n \n-Kevin\n",
"msg_date": "Mon, 20 Feb 2012 13:13:11 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertions slower than Updates?"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Ofer Israeli <[email protected]> wrote:\n>> Anyone have any ideas on why the empty db is giving worse results??\n> \n> Besides the HOT updates being fast, there is the issue of having\n> space already allocated and ready for the database to use, rather\n> than needing to make calls to the OS to create and extend files as\n> space is needed. \n> \n\nI thought about this direction as well, but on UPDATES, some of them will need to ask the OS for more space anyhow at least at the beginning of the run, additional pages will be needed. Do you expect that the OS level allocations are so expensive as to show an ~%40 increase of processing time in average?\n\n\nThanks,\nOfer",
"msg_date": "Mon, 20 Feb 2012 21:21:12 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertions slower than Updates?"
},
{
"msg_contents": "Ofer Israeli <[email protected]> wrote:\n> Kevin Grittner wrote:\n>> Ofer Israeli <[email protected]> wrote:\n>>> Anyone have any ideas on why the empty db is giving worse\n>>> results??\n>> \n>> Besides the HOT updates being fast, there is the issue of having\n>> space already allocated and ready for the database to use, rather\n>> than needing to make calls to the OS to create and extend files\n>> as space is needed. \n> \n> I thought about this direction as well, but on UPDATES, some of\n> them will need to ask the OS for more space anyhow at least at the\n> beginning of the run, additional pages will be needed. Do you\n> expect that the OS level allocations are so expensive as to show\n> an ~%40 increase of processing time in average?\n \nGut feel, 40% does seem high for just that; but HOT updates could\neasily account for that, especially since you said that the tables\nare \"heavily indexed\". That is, as long as there are enough updates\nwhich don't modify indexed columns.\n \n-Kevin\n",
"msg_date": "Mon, 20 Feb 2012 13:29:43 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertions slower than Updates?"
},
{
"msg_contents": "If the updates don't hit indexed columns (so the indexes don't need to be\nrebuilt), then the update would be very fast.\n\nInserts would always affect the index causing it to constantly need\nmodifying.\n\nIf you're doing a lot of INSERTs in a batch operation, you may want to\nconsider dropping the indexes and recreating at the end.\n\nOn Mon, Feb 20, 2012 at 2:29 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Ofer Israeli <[email protected]> wrote:\n> > Kevin Grittner wrote:\n> >> Ofer Israeli <[email protected]> wrote:\n> >>> Anyone have any ideas on why the empty db is giving worse\n> >>> results??\n> >>\n> >> Besides the HOT updates being fast, there is the issue of having\n> >> space already allocated and ready for the database to use, rather\n> >> than needing to make calls to the OS to create and extend files\n> >> as space is needed.\n> >\n> > I thought about this direction as well, but on UPDATES, some of\n> > them will need to ask the OS for more space anyhow at least at the\n> > beginning of the run, additional pages will be needed. Do you\n> > expect that the OS level allocations are so expensive as to show\n> > an ~%40 increase of processing time in average?\n>\n> Gut feel, 40% does seem high for just that; but HOT updates could\n> easily account for that, especially since you said that the tables\n> are \"heavily indexed\". That is, as long as there are enough updates\n> which don't modify indexed columns.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nSteve Horn\nhttp://www.stevehorn.cc\[email protected]\nhttp://twitter.com/stevehorn\n740-503-2300\n\nIf the updates don't hit indexed columns (so the indexes don't need to be rebuilt), then the update would be very fast.Inserts would always affect the index causing it to constantly need modifying.\nIf you're doing a lot of INSERTs in a batch operation, you may want to consider dropping the indexes and recreating at the end.On Mon, Feb 20, 2012 at 2:29 PM, Kevin Grittner <[email protected]> wrote:\nOfer Israeli <[email protected]> wrote:\n> Kevin Grittner wrote:\n>> Ofer Israeli <[email protected]> wrote:\n>>> Anyone have any ideas on why the empty db is giving worse\n>>> results??\n>>\n>> Besides the HOT updates being fast, there is the issue of having\n>> space already allocated and ready for the database to use, rather\n>> than needing to make calls to the OS to create and extend files\n>> as space is needed.\n>\n> I thought about this direction as well, but on UPDATES, some of\n> them will need to ask the OS for more space anyhow at least at the\n> beginning of the run, additional pages will be needed. Do you\n> expect that the OS level allocations are so expensive as to show\n> an ~%40 increase of processing time in average?\n\nGut feel, 40% does seem high for just that; but HOT updates could\neasily account for that, especially since you said that the tables\nare \"heavily indexed\". That is, as long as there are enough updates\nwhich don't modify indexed columns.\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Steve Hornhttp://[email protected]://twitter.com/stevehorn\n740-503-2300",
"msg_date": "Mon, 20 Feb 2012 14:32:35 -0500",
"msg_from": "Steve Horn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insertions slower than Updates?"
},
{
"msg_contents": "Ofer Israeli <[email protected]> wrote:\n\n>Hi all,\n\n> In performance testing we're doing we are currently running two scenarios:\n> 1. Starting from an empty db, therefore all operations are INSERTs.\n> 2. Starting from an existing db - thereby UPDATing all records.\n\n> I should also mention that the tables we are dealing with are heavily indexed.\n\n> I would expect that the first scenario would give much better results than the second one as:\n> 1. INSERT should be cheaper than UPDATE due to only dealing with one record instead of two.\n> 2. We also have SELECT queries before the operation and in the first configuration, the SELECTs will be dealing with much less data for most of the run.\n\n> To our surprise, we see that the second scenario gives better results with an average processing time of an event at around %70 of the time run in the first scenario.\n\n> Anyone have any ideas on why the empty db is giving worse results??\n\nA little googleing led me to this thought, will be happy to hear you're input on this. If the database is initially empty, the analyzer will probably decide to query the tables by full table scan as opposed to index searching. Now supposedly, during our test run, the analyzer does not run frequently enough and so at some point we are using the wrong method for SELECTs.\n\nThe version we are using is 8.3.7.\n\nMy questions are:\n1. Does the above seem reasonable to you?\n2. How often does the analyzer run? To my understanding it will always run with autovacuum, is that right? Is it triggered at other times as well?\n3. Does the autoanalyzer work only partially on the db like autovacuum going to sleep after a certain amount of work was done or does it work until finished? If it is partial work, maybe it does not reach our relevant tables. What dictates the order in which it will work?\n\n\nMany thanks,\nOfer\n\n\n\n\n\n\n\n\nOfer Israeli <[email protected]> \nwrote:\n>Hi all,\n \n> In performance testing we’re doing we are \ncurrently running two scenarios:\n> 1. Starting from an empty \ndb, therefore all operations are INSERTs. \n> 2. Starting from an existing db – thereby \nUPDATing all records. \n \n> I should also mention that the \ntables we are dealing with are heavily indexed.\n \n> I would expect that the first scenario \nwould give much better results than the second one \nas:\n> 1. INSERT \nshould be cheaper than UPDATE due to only dealing with one record instead of \ntwo. \n> 2. We also have SELECT queries before \nthe operation and in the first configuration, the SELECTs will be dealing with \nmuch less data for most of the run. \n \n> To our surprise, we see that the second \nscenario gives better results with an average processing time of an event at \naround %70 of the time run in the first scenario.\n \n> Anyone have any ideas on why the empty db \nis giving worse results??\n \nA little googleing led me to this thought, will be \nhappy to hear you're input on this. If the database is initially empty, \nthe analyzer will probably decide to query the tables by full table scan as \nopposed to index searching. Now supposedly, during our test run, the \nanalyzer does not run frequently enough and so at some point we are using the \nwrong method for SELECTs.\n \nThe version we are using is \n8.3.7.\n \nMy questions are:\n1. Does the above seem reasonable to \nyou?\n2. How often does the analyzer run? To my \nunderstanding it will always run with autovacuum, is that right? Is it \ntriggered at other times as well?\n3. 
Does the autoanalyzer work only partially on the db \nlike autovacuum going to sleep after a certain amount of work was done or does \nit work until finished? If it is partial work, maybe it does not reach our \nrelevant tables. What dictates the order in which it will \nwork?\n \n \nMany thanks,\nOfer",
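For reference (this is PostgreSQL's documented behaviour rather than something stated in the thread): autoanalyze is triggered per table once roughly autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples rows have been inserted, updated or deleted since the last analyze, and the autovacuum launcher only wakes up every autovacuum_naptime to check. The current values can be inspected with:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('autovacuum',
               'autovacuum_naptime',
               'autovacuum_analyze_threshold',
               'autovacuum_analyze_scale_factor',
               'autovacuum_vacuum_threshold',
               'autovacuum_vacuum_scale_factor');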
"msg_date": "Mon, 20 Feb 2012 22:16:16 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertions slower than Updates?"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Ofer Israeli <[email protected]> wrote:\n>> Kevin Grittner wrote:\n>>> Ofer Israeli <[email protected]> wrote:\n>>>> Anyone have any ideas on why the empty db is giving worse results??\n>>> \n>>> Besides the HOT updates being fast, there is the issue of having\n>>> space already allocated and ready for the database to use, rather\n>>> than needing to make calls to the OS to create and extend files\n>>> as space is needed.\n>> \n>> I thought about this direction as well, but on UPDATES, some of them\n>> will need to ask the OS for more space anyhow at least at the\n>> beginning of the run, additional pages will be needed. Do you expect\n>> that the OS level allocations are so expensive as to show an ~%40\n>> increase of processing time in average?\n> \n> Gut feel, 40% does seem high for just that; but HOT updates could\n> easily account for that, especially since you said that the tables\n> are \"heavily indexed\". That is, as long as there are enough updates\n> which don't modify indexed columns. \n\nMost, if not all of our UPDATEs, involve updating an indexed column, so HOT updates are actually not performed at all :(",
"msg_date": "Mon, 20 Feb 2012 22:16:39 +0200",
"msg_from": "Ofer Israeli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insertions slower than Updates?"
}
] |
[
{
"msg_contents": "New question regarding this seen_its table: It gets over 100 inserts per\nsecond. Probably many more if you include every time unique_violation occurs.\nThis flood of data is constant. The commits take too long (upwards of 100\nms, ten times slower than it needs to be!) What I'm wondering is if it\nwould be better to insert all of these rows into a separate table with no\nconstraints (call it daily_seen_its) and then batch insert them into a\ntable with something like: INSERT INTO seen_its SELECT user_id, moment_id,\nMIN(created) FROM daily_seen_its GROUP BY user_id, moment_id WHERE created\nBETWEEN 'yesterday' AND 'today'; the idea being that a table with no\nconstraints would be able to accept insertions much faster and then the\nprimary key could be enforced later. Even better would be if this could\nhappen hourly instead of daily. But first I just want to know if people\nthink that this might be a viable solution or if I'm barking up the wrong\ntree.\n\nThanks!\n-Alessandro\n\nOn Fri, Feb 17, 2012 at 10:34 AM, Alessandro Gagliardi\n<[email protected]>wrote:\n\n> CREATE TABLE seen_its (\n> user_id character(24) NOT NULL,\n> moment_id character(24) NOT NULL,\n> created timestamp without time zone,\n> inserted timestamp without time zone DEFAULT now(),\n> CONSTRAINT seen_its_pkey PRIMARY KEY (user_id , moment_id )\n> ) WITH ( OIDS=FALSE );\n>\n> CREATE INDEX seen_its_created_idx ON seen_its USING btree (created );\n>\n> CREATE INDEX seen_its_user_id_idx ON seen_its USING btree (user_id );\n>\n>\n\nNew question regarding this seen_its table: It gets over 100 inserts per second. Probably many more if you include every time unique_violation occurs. This flood of data is constant. The commits take too long (upwards of 100 ms, ten times slower than it needs to be!) What I'm wondering is if it would be better to insert all of these rows into a separate table with no constraints (call it daily_seen_its) and then batch insert them into a table with something like: INSERT INTO seen_its SELECT user_id, moment_id, MIN(created) FROM daily_seen_its GROUP BY user_id, moment_id WHERE created BETWEEN 'yesterday' AND 'today'; the idea being that a table with no constraints would be able to accept insertions much faster and then the primary key could be enforced later. Even better would be if this could happen hourly instead of daily. But first I just want to know if people think that this might be a viable solution or if I'm barking up the wrong tree.\nThanks!-AlessandroOn Fri, Feb 17, 2012 at 10:34 AM, Alessandro Gagliardi <[email protected]> wrote:\nCREATE TABLE seen_its ( user_id character(24) NOT NULL, moment_id character(24) NOT NULL,\n created timestamp without time zone,\n inserted timestamp without time zone DEFAULT now(), CONSTRAINT seen_its_pkey PRIMARY KEY (user_id , moment_id )) WITH ( OIDS=FALSE );CREATE INDEX seen_its_created_idx ON seen_its USING btree (created );\nCREATE INDEX seen_its_user_id_idx ON seen_its USING btree (user_id );",
"msg_date": "Mon, 20 Feb 2012 14:06:28 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexes and Primary Keys on Rapidly Growing Tables"
},
{
"msg_contents": "On 2/20/12 2:06 PM, Alessandro Gagliardi wrote:\n> . But first I just want to know if people\n> think that this might be a viable solution or if I'm barking up the wrong\n> tree.\n\nBatching is usually helpful for inserts, especially if there's a unique\nkey on a very large table involved.\n\nI suggest also making the buffer table UNLOGGED, if you can afford to.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Mon, 20 Feb 2012 15:30:06 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes and Primary Keys on Rapidly Growing Tables"
},
{
"msg_contents": "I was thinking about that (as per your presentation last week) but my\nproblem is that when I'm building up a series of inserts, if one of them\nfails (very likely in this case due to a unique_violation) I have to\nrollback the entire commit. I asked about this in the\nnovice<http://postgresql.1045698.n5.nabble.com/execute-many-for-each-commit-td5494218.html>forum\nand was advised to use\nSAVEPOINTs. That seems a little clunky to me but may be the best way. Would\nit be realistic to expect this to increase performance by ten-fold?\n\nOn Mon, Feb 20, 2012 at 3:30 PM, Josh Berkus <[email protected]> wrote:\n\n> On 2/20/12 2:06 PM, Alessandro Gagliardi wrote:\n> > . But first I just want to know if people\n> > think that this might be a viable solution or if I'm barking up the wrong\n> > tree.\n>\n> Batching is usually helpful for inserts, especially if there's a unique\n> key on a very large table involved.\n>\n> I suggest also making the buffer table UNLOGGED, if you can afford to.\n>\n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI was thinking about that (as per your presentation last week) but my problem is that when I'm building up a series of inserts, if one of them fails (very likely in this case due to a unique_violation) I have to rollback the entire commit. I asked about this in the novice forum and was advised to use SAVEPOINTs. That seems a little clunky to me but may be the best way. Would it be realistic to expect this to increase performance by ten-fold?\nOn Mon, Feb 20, 2012 at 3:30 PM, Josh Berkus <[email protected]> wrote:\nOn 2/20/12 2:06 PM, Alessandro Gagliardi wrote:\n> . But first I just want to know if people\n> think that this might be a viable solution or if I'm barking up the wrong\n> tree.\n\nBatching is usually helpful for inserts, especially if there's a unique\nkey on a very large table involved.\n\nI suggest also making the buffer table UNLOGGED, if you can afford to.\n\n--\nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 21 Feb 2012 09:59:40 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Indexes and Primary Keys on Rapidly Growing Tables"
},
{
"msg_contents": "On Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi\n<[email protected]>wrote:\n\n> I was thinking about that (as per your presentation last week) but my\n> problem is that when I'm building up a series of inserts, if one of them\n> fails (very likely in this case due to a unique_violation) I have to\n> rollback the entire commit. I asked about this in the novice<http://postgresql.1045698.n5.nabble.com/execute-many-for-each-commit-td5494218.html>forum and was advised to use\n> SAVEPOINTs. That seems a little clunky to me but may be the best way.\n> Would it be realistic to expect this to increase performance by ten-fold?\n>\n>\nif you insert into a different table before doing a bulk insert later, you\ncan de-dupe before doing the insertion, eliminating the issue entirely.\n\nOn Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi <[email protected]> wrote:\nI was thinking about that (as per your presentation last week) but my problem is that when I'm building up a series of inserts, if one of them fails (very likely in this case due to a unique_violation) I have to rollback the entire commit. I asked about this in the novice forum and was advised to use SAVEPOINTs. That seems a little clunky to me but may be the best way. Would it be realistic to expect this to increase performance by ten-fold?\nif you insert into a different table before doing a bulk insert later, you can de-dupe before doing the insertion, eliminating the issue entirely.",
"msg_date": "Tue, 21 Feb 2012 15:53:01 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes and Primary Keys on Rapidly Growing Tables"
},
{
"msg_contents": "True. I implemented the SAVEPOINTs solution across the board. We'll see\nwhat kind of difference it makes. If it's fast enough, I may be able to do\nwithout that.\n\nOn Tue, Feb 21, 2012 at 3:53 PM, Samuel Gendler\n<[email protected]>wrote:\n\n>\n> On Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi <[email protected]\n> > wrote:\n>\n>> I was thinking about that (as per your presentation last week) but my\n>> problem is that when I'm building up a series of inserts, if one of them\n>> fails (very likely in this case due to a unique_violation) I have to\n>> rollback the entire commit. I asked about this in the novice<http://postgresql.1045698.n5.nabble.com/execute-many-for-each-commit-td5494218.html>forum and was advised to use\n>> SAVEPOINTs. That seems a little clunky to me but may be the best way.\n>> Would it be realistic to expect this to increase performance by ten-fold?\n>>\n>>\n> if you insert into a different table before doing a bulk insert later, you\n> can de-dupe before doing the insertion, eliminating the issue entirely.\n>\n>\n>\n\nTrue. I implemented the SAVEPOINTs solution across the board. We'll see what kind of difference it makes. If it's fast enough, I may be able to do without that.On Tue, Feb 21, 2012 at 3:53 PM, Samuel Gendler <[email protected]> wrote:\nOn Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi <[email protected]> wrote:\n\nI was thinking about that (as per your presentation last week) but my problem is that when I'm building up a series of inserts, if one of them fails (very likely in this case due to a unique_violation) I have to rollback the entire commit. I asked about this in the novice forum and was advised to use SAVEPOINTs. That seems a little clunky to me but may be the best way. Would it be realistic to expect this to increase performance by ten-fold?\nif you insert into a different table before doing a bulk insert later, you can de-dupe before doing the insertion, eliminating the issue entirely.",
"msg_date": "Tue, 21 Feb 2012 16:11:04 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Indexes and Primary Keys on Rapidly Growing Tables"
}
] |
[
{
"msg_contents": "I have a database where I virtually never delete and almost never do\nupdates. (The updates might change in the future but for now it's okay to\nassume they never happen.) As such, it seems like it might be worth it to\nset autovacuum=off or at least make it so vacuuming hardly ever occurs.\nActually, the latter is probably the more robust solution, though I don't\nknow how to do that (hence me writing this list). I did try turning\nautovacuum off but got:\n\nERROR: parameter \"autovacuum\" cannot be changed now\nSQL state: 55P02\nNot sure what, if anything, I can do about that.\n\nThanks,\n-Alessandro\n\nI have a database where I virtually never delete and almost never do updates. (The updates might change in the future but for now it's okay to assume they never happen.) As such, it seems like it might be worth it to set autovacuum=off or at least make it so vacuuming hardly ever occurs. Actually, the latter is probably the more robust solution, though I don't know how to do that (hence me writing this list). I did try turning autovacuum off but got:\nERROR: parameter \"autovacuum\" cannot be changed nowSQL state: 55P02 Not sure what, if anything, I can do about that. Thanks,-Alessandro",
"msg_date": "Wed, 22 Feb 2012 15:50:57 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "set autovacuum=off"
},
{
"msg_contents": "On 22 February 2012 23:50, Alessandro Gagliardi <[email protected]> wrote:\n> I have a database where I virtually never delete and almost never do\n> updates. (The updates might change in the future but for now it's okay to\n> assume they never happen.) As such, it seems like it might be worth it to\n> set autovacuum=off or at least make it so vacuuming hardly ever occurs.\n> Actually, the latter is probably the more robust solution, though I don't\n> know how to do that (hence me writing this list). I did try turning\n> autovacuum off but got:\n>\n> ERROR: parameter \"autovacuum\" cannot be changed now\n> SQL state: 55P02\n>\n> Not sure what, if anything, I can do about that.\n\nAutovacuum is controlled by how much of a table has changed, so if a\ntable never changes, it never gets vacuumed (with the exceptional case\nbeing a forced vacuum freeze to mitigate the transaction id\nwrap-around issue). The settings which control this are\nautovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor.\nTherefore it isn't necessary to disable autovacuum.\n\nBut if you are adamant about disabling it, you need to change it in\nyour postgresql.conf file and restart the server.\n\n-- \nThom\n",
"msg_date": "Thu, 23 Feb 2012 12:34:29 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On 2/23/2012 6:34 AM, Thom Brown wrote:\n> On 22 February 2012 23:50, Alessandro Gagliardi<[email protected]> wrote:\n>> I have a database where I virtually never delete and almost never do\n>> updates. (The updates might change in the future but for now it's okay to\n>> assume they never happen.) As such, it seems like it might be worth it to\n>> set autovacuum=off or at least make it so vacuuming hardly ever occurs.\n>> Actually, the latter is probably the more robust solution, though I don't\n>> know how to do that (hence me writing this list). I did try turning\n>> autovacuum off but got:\n>>\n>> ERROR: parameter \"autovacuum\" cannot be changed now\n>> SQL state: 55P02\n>>\n>> Not sure what, if anything, I can do about that.\n>\n> Autovacuum is controlled by how much of a table has changed, so if a\n> table never changes, it never gets vacuumed (with the exceptional case\n> being a forced vacuum freeze to mitigate the transaction id\n> wrap-around issue). The settings which control this are\n> autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor.\n> Therefore it isn't necessary to disable autovacuum.\n>\n> But if you are adamant about disabling it, you need to change it in\n> your postgresql.conf file and restart the server.\n>\n\nAgreed, don't disable autovacuum. It's not that demanding, and if you \ndo need it and forget to run it, it might cause you more problems.\n\nI have a db that's on a VM that doesnt get hit very much. I've noticed \nIO is a little busy (we are talking small percents of percents less than \none) but still more that I thought should be happening on a db with next \nto no usage.\n\nI found setting autovacuum_naptime = 6min made the IO all but vanish.\n\nAnd if I ever get a wild hair and blow some stuff away, the db will \nclean up after me.\n\n-Andy\n",
"msg_date": "Thu, 23 Feb 2012 09:18:35 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "I should have been more clear. I virtually never delete or do updates, but\nI insert *a lot*. So the table does change quite a bit, but only in one\ndirection.\n\nI was unable to disable autovacuum universally (due to the\ncant_change_runtime_param error) but I was able to disable it on individual\ntables. Still, I know this is heavy handed and sub-optimal. I tried set\nautovacuum_naptime='6min' but got the same 55P02 error. Should/can I set\nthat per table?\n\nI did look at autovacuum_vacuum_threshold and autovacuum_vacuum_scale_**factor\nbut couldn't make sense out of them. (Besides, I'd probably get the\nsame 55P02 error if I tried to change them.)\n\nOn Thu, Feb 23, 2012 at 7:18 AM, Andy Colson <[email protected]> wrote:\n\n> On 2/23/2012 6:34 AM, Thom Brown wrote:\n>\n>> On 22 February 2012 23:50, Alessandro Gagliardi<[email protected]>\n>> wrote:\n>>\n>>> I have a database where I virtually never delete and almost never do\n>>> updates. (The updates might change in the future but for now it's okay to\n>>> assume they never happen.) As such, it seems like it might be worth it to\n>>> set autovacuum=off or at least make it so vacuuming hardly ever occurs.\n>>> Actually, the latter is probably the more robust solution, though I don't\n>>> know how to do that (hence me writing this list). I did try turning\n>>> autovacuum off but got:\n>>>\n>>> ERROR: parameter \"autovacuum\" cannot be changed now\n>>> SQL state: 55P02\n>>>\n>>> Not sure what, if anything, I can do about that.\n>>>\n>>\n>> Autovacuum is controlled by how much of a table has changed, so if a\n>> table never changes, it never gets vacuumed (with the exceptional case\n>> being a forced vacuum freeze to mitigate the transaction id\n>> wrap-around issue). The settings which control this are\n>> autovacuum_vacuum_threshold and autovacuum_vacuum_scale_**factor.\n>> Therefore it isn't necessary to disable autovacuum.\n>>\n>> But if you are adamant about disabling it, you need to change it in\n>> your postgresql.conf file and restart the server.\n>>\n>>\n> Agreed, don't disable autovacuum. It's not that demanding, and if you do\n> need it and forget to run it, it might cause you more problems.\n>\n> I have a db that's on a VM that doesnt get hit very much. I've noticed IO\n> is a little busy (we are talking small percents of percents less than one)\n> but still more that I thought should be happening on a db with next to no\n> usage.\n>\n> I found setting autovacuum_naptime = 6min made the IO all but vanish.\n>\n> And if I ever get a wild hair and blow some stuff away, the db will clean\n> up after me.\n>\n> -Andy\n>\n\nI should have been more clear. I virtually never delete or do updates, but I insert a lot. So the table does change quite a bit, but only in one direction. I was unable to disable autovacuum universally (due to the cant_change_runtime_param error) but I was able to disable it on individual tables. Still, I know this is heavy handed and sub-optimal. I tried set autovacuum_naptime='6min' but got the same 55P02 error. Should/can I set that per table?\nI did look at autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor but couldn't make sense out of them. (Besides, I'd probably get the same 55P02 error if I tried to change them.) 
\nOn Thu, Feb 23, 2012 at 7:18 AM, Andy Colson <[email protected]> wrote:\nOn 2/23/2012 6:34 AM, Thom Brown wrote:\n\nOn 22 February 2012 23:50, Alessandro Gagliardi<[email protected]> wrote:\n\nI have a database where I virtually never delete and almost never do\nupdates. (The updates might change in the future but for now it's okay to\nassume they never happen.) As such, it seems like it might be worth it to\nset autovacuum=off or at least make it so vacuuming hardly ever occurs.\nActually, the latter is probably the more robust solution, though I don't\nknow how to do that (hence me writing this list). I did try turning\nautovacuum off but got:\n\nERROR: parameter \"autovacuum\" cannot be changed now\nSQL state: 55P02\n\nNot sure what, if anything, I can do about that.\n\n\nAutovacuum is controlled by how much of a table has changed, so if a\ntable never changes, it never gets vacuumed (with the exceptional case\nbeing a forced vacuum freeze to mitigate the transaction id\nwrap-around issue). The settings which control this are\nautovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor.\nTherefore it isn't necessary to disable autovacuum.\n\nBut if you are adamant about disabling it, you need to change it in\nyour postgresql.conf file and restart the server.\n\n\n\nAgreed, don't disable autovacuum. It's not that demanding, and if you do need it and forget to run it, it might cause you more problems.\n\nI have a db that's on a VM that doesnt get hit very much. I've noticed IO is a little busy (we are talking small percents of percents less than one) but still more that I thought should be happening on a db with next to no usage.\n\nI found setting autovacuum_naptime = 6min made the IO all but vanish.\n\nAnd if I ever get a wild hair and blow some stuff away, the db will clean up after me.\n\n-Andy",
"msg_date": "Thu, 23 Feb 2012 09:35:07 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On 23 February 2012 17:35, Alessandro Gagliardi <[email protected]> wrote:\n> I should have been more clear. I virtually never delete or do updates, but I\n> insert a lot. So the table does change quite a bit, but only in one\n> direction.\n\nThe same thing applies. VACUUM cleans up dead tuples, which INSERTs\ndon't create, only DELETE and UPDATEs do.\n\n-- \nThom\n",
"msg_date": "Thu, 23 Feb 2012 17:45:47 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "Hm. Okay, so just to be perfectly clear, my database with all its INSERTs,\nbut no DELETEs or UPDATEs should not be VACUUMing anyway, so disabling\nauto-vacuum is redundant (and possibly hazardous).\n\nFWIW, I did notice a speed increase after disabling auto-vacuum on several\nof my tables though that could have been a coincidence. Is there any way\nthat these tables could have been getting vacuumed (or some thing else)\ndespite the fact that they are not receiving updates or deletes? Or must\nthat have been a coincidence?\n\nWhile we're on the subject, I welcome any pointers with regard to tuning a\ndatabase that is being used in this way. Any cache sizes I should be\nmessing with? Etc.\n\nThank you,\n-Alessandro\n\nOn Thu, Feb 23, 2012 at 9:45 AM, Thom Brown <[email protected]> wrote:\n\n> On 23 February 2012 17:35, Alessandro Gagliardi <[email protected]>\n> wrote:\n> > I should have been more clear. I virtually never delete or do updates,\n> but I\n> > insert a lot. So the table does change quite a bit, but only in one\n> > direction.\n>\n> The same thing applies. VACUUM cleans up dead tuples, which INSERTs\n> don't create, only DELETE and UPDATEs do.\n>\n> --\n> Thom\n>\n\nHm. Okay, so just to be perfectly clear, my database with all its INSERTs, but no DELETEs or UPDATEs should not be VACUUMing anyway, so disabling auto-vacuum is redundant (and possibly hazardous).FWIW, I did notice a speed increase after disabling auto-vacuum on several of my tables though that could have been a coincidence. Is there any way that these tables could have been getting vacuumed (or some thing else) despite the fact that they are not receiving updates or deletes? Or must that have been a coincidence?\nWhile we're on the subject, I welcome any pointers with regard to tuning a database that is being used in this way. Any cache sizes I should be messing with? Etc.Thank you,\n-AlessandroOn Thu, Feb 23, 2012 at 9:45 AM, Thom Brown <[email protected]> wrote:\nOn 23 February 2012 17:35, Alessandro Gagliardi <[email protected]> wrote:\n> I should have been more clear. I virtually never delete or do updates, but I\n> insert a lot. So the table does change quite a bit, but only in one\n> direction.\n\nThe same thing applies. VACUUM cleans up dead tuples, which INSERTs\ndon't create, only DELETE and UPDATEs do.\n\n--\nThom",
"msg_date": "Thu, 23 Feb 2012 09:58:40 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On 02/23/2012 09:35 AM, Alessandro Gagliardi wrote:\n> I should have been more clear. I virtually never delete or do updates, \n> but I insert /a lot/. So the table does change quite a bit, but only \n> in one direction.\n>\n> I was unable to disable autovacuum universally (due to the \n> cant_change_runtime_param error) but I was able to disable it on \n> individual tables. Still, I know this is heavy handed and sub-optimal. \n> I tried set autovacuum_naptime='6min' but got the same 55P02 error. \n> Should/can I set that per table?\n>\n> I did look at autovacuum_vacuum_threshold \n> and autovacuum_vacuum_scale_factor but couldn't make sense out of \n> them. (Besides, I'd probably get the same 55P02 error if I tried to \n> change them.)\n\nSee:\nhttp://www.postgresql.org/docs/current/static/runtime-config-autovacuum.html\n\nThe documentation has information like \"This parameter can only be set \nin the postgresql.conf file or on the server command line.\" that will \ntell you in advance which settings will fail when you attempt to set \nthem through SQL statements.\n\nBut autovacuum is pretty smart about not vacuuming tables until \nreasonably necessary. And beware that autovacuum is also controlling \nwhen to analyze a table. Mass inserts are probably changing the \ncharacteristics of your table such that it needs to be analyzed to allow \nthe planner to properly optimize your queries.\n\nHave you identified that vacuum is actually causing a problem? If not, \nI'd leave it alone. The system tables have a lot of information on table \nvacuuming and analyzing:\n\nselect\n relname,\n last_vacuum,\n last_autovacuum,\n last_analyze,\n last_autoanalyze,\n vacuum_count,\n autovacuum_count,\n analyze_count,\n autoanalyze_count\nfrom\n pg_stat_user_tables;\n\nCheers,\nSteve\n\n\n\n\n\n\n\n On 02/23/2012 09:35 AM, Alessandro Gagliardi wrote:\n I should have been more clear. I virtually never\n delete or do updates, but I insert a lot. So the table\n does change quite a bit, but only in one direction. \n \n\nI was unable to disable autovacuum universally (due to the cant_change_runtime_param error)\n but I was able to disable it on individual tables. Still, I know\n this is heavy handed and sub-optimal. I tried set\n autovacuum_naptime='6min' but got the same 55P02 error.\n Should/can I set that per table?\n\n\nI did look at autovacuum_vacuum_threshold\n and autovacuum_vacuum_scale_factor but couldn't make sense out\n of them. (Besides, I'd probably get the same 55P02 error if I\n tried to change them.) \n\n\n\n See:\nhttp://www.postgresql.org/docs/current/static/runtime-config-autovacuum.html\n\n The documentation has information like \"This parameter can only be\n set in the postgresql.conf file or on the\n server command line.\" that will tell you in advance which settings\n will fail when you attempt to set them through SQL statements.\n\n But autovacuum is pretty smart about not vacuuming tables until\n reasonably necessary. And beware that autovacuum is also controlling\n when to analyze a table. Mass inserts are probably changing the\n characteristics of your table such that it needs to be analyzed to\n allow the planner to properly optimize your queries.\n\n Have you identified that vacuum is actually causing a problem? If\n not, I'd leave it alone. 
The system tables have a lot of information\n on table vacuuming and analyzing:\n\n select\n relname,\n last_vacuum,\n last_autovacuum,\n last_analyze,\n last_autoanalyze,\n vacuum_count,\n autovacuum_count,\n analyze_count,\n autoanalyze_count\n from\n pg_stat_user_tables;\n\n Cheers,\n Steve",
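One addition (not from the message): the context column of pg_settings shows up front which parameters can be changed where, so the 55P02 error can be anticipated:

SELECT name, setting, context
FROM pg_settings
WHERE name LIKE 'autovacuum%';
-- context = 'sighup'     -> change in postgresql.conf plus a reload
-- context = 'postmaster' -> needs a server restart
-- neither kind can be changed with a plain SET in a session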
"msg_date": "Thu, 23 Feb 2012 10:01:03 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On Thu, Feb 23, 2012 at 10:01 AM, Steve Crawford <\[email protected]> wrote:\n\n> **\n> The documentation has information like \"This parameter can only be set in\n> the postgresql.conf file or on the server command line.\" that will tell\n> you in advance which settings will fail when you attempt to set them\n> through SQL statements.\n>\n> Ah. I missed that. Sorry for asking stupid questions.\n\n\n> But autovacuum is pretty smart about not vacuuming tables until reasonably\n> necessary. And beware that autovacuum is also controlling when to analyze a\n> table. Mass inserts are probably changing the characteristics of your table\n> such that it needs to be analyzed to allow the planner to properly optimize\n> your queries.\n>\n> Okay, that makes more sense to me; because the stats would be changing\nquickly and so while vacuuming may not be necessary, analyzing would be. At\nthe same time, I can't afford to analyze if it's causing my inserts to take\nover 50 ms. Something else I should add: if my selects are slow, that's\nannoying; but if my inserts are slow, that could be disastrous. Does\nanalyze increase the efficiency of inserts or just selects? (I assumed the\nlatter.) Obviously, I will need to analyze sometimes, but perhaps not\nnearly as often as postgres would predict under the circumstances.\n\n\n> Have you identified that vacuum is actually causing a problem? If not, I'd\n> leave it alone. The system tables have a lot of information on table\n> vacuuming and analyzing:\n>\n> Not indubitably, but circumstantially, I did notice that significantly\nfewer of my commits were taking over 50 ms after I set\nautovacuum_enabled=off on many of my tables. Unfortunately, it was not an\nisolated experiment, so I can't really be sure. At the same time, I'm\nhesitant to turn it back on until I'm sure it either didn't make a\ndifference or I've got a better plan for how to deal with this.\n\n\n> select\n> relname,\n> last_vacuum,\n> last_autovacuum,\n> last_analyze,\n> last_autoanalyze,\n> vacuum_count,\n> autovacuum_count,\n> analyze_count,\n> autoanalyze_count\n> from\n> pg_stat_user_tables;\n>\n> Apparently the last four columns don't exist in my database. As for the\nfirst four, that is somewhat illuminating. It looks like the\nlast_autovacuum that occurred on any of my tables was late Monday evening\n(almost two days before I set autovacuum_enabled=off). The last_autoanalyze\non one of the tables where I set autovacuum_enabled=off was yesterday at\n10:30, several hours before I disabled auto-vacuum. (I've had others since\nthen on tables where I didn't disable auto-vacuum.) It looks like\ndisabling auto-vacuum also disabled auto-analyze (did it?) but it also\nlooks like that might not have been the continuous problem I thought it was.\n\nSo if it's not auto-vacuuming that's making my inserts so slow, what is it?\nI'm batching my inserts (that didn't seem to help at all actually, but\nmaybe cause I had already turned off synchronous_commit anyway). I've\ngotten rid of a bunch of indices (especially those with low\ncardinality–that I did around the same time as disabling auto-vacuum, so\nthat could account for the coincidental speed up). I'm not sure what else I\ncould be doing wrong. 
It's definitely better than it was a few days ago,\nbut I still see \"LOG: duration: 77.315 ms statement: COMMIT\" every minute\nor two.\n\nThank you,\n-Alessandro\n\nOn Thu, Feb 23, 2012 at 10:01 AM, Steve Crawford <[email protected]> wrote:\n\nThe documentation has information like \"This parameter can only be\n set in the postgresql.conf file or on the\n server command line.\" that will tell you in advance which settings\n will fail when you attempt to set them through SQL statements.\n\nAh. I missed that. Sorry for asking stupid questions. \nBut autovacuum is pretty smart about not vacuuming tables until\n reasonably necessary. And beware that autovacuum is also controlling\n when to analyze a table. Mass inserts are probably changing the\n characteristics of your table such that it needs to be analyzed to\n allow the planner to properly optimize your queries.\n\nOkay, that makes more sense to me; because the stats would be changing quickly and so while vacuuming may not be necessary, analyzing would be. At the same time, I can't afford to analyze if it's causing my inserts to take over 50 ms. Something else I should add: if my selects are slow, that's annoying; but if my inserts are slow, that could be disastrous. Does analyze increase the efficiency of inserts or just selects? (I assumed the latter.) Obviously, I will need to analyze sometimes, but perhaps not nearly as often as postgres would predict under the circumstances.\n Have you identified that vacuum is actually causing a problem? If\n not, I'd leave it alone. The system tables have a lot of information\n on table vacuuming and analyzing:\n\nNot indubitably, but circumstantially, I did notice that significantly fewer of my commits were taking over 50 ms after I set autovacuum_enabled=off on many of my tables. Unfortunately, it was not an isolated experiment, so I can't really be sure. At the same time, I'm hesitant to turn it back on until I'm sure it either didn't make a difference or I've got a better plan for how to deal with this.\n select\n relname,\n last_vacuum,\n last_autovacuum,\n last_analyze,\n last_autoanalyze,\n vacuum_count,\n autovacuum_count,\n analyze_count,\n autoanalyze_count\n from\n pg_stat_user_tables;\n\nApparently the last four columns don't exist in my database. As for the first four, that is somewhat illuminating. It looks like the last_autovacuum that occurred on any of my tables was late Monday evening (almost two days before I set autovacuum_enabled=off). The last_autoanalyze on one of the tables where I set autovacuum_enabled=off was yesterday at 10:30, several hours before I disabled auto-vacuum. (I've had others since then on tables where I didn't disable auto-vacuum.) It looks like disabling auto-vacuum also disabled auto-analyze (did it?) but it also looks like that might not have been the continuous problem I thought it was.\nSo if it's not auto-vacuuming that's making my inserts so slow, what is it? I'm batching my inserts (that didn't seem to help at all actually, but maybe cause I had already turned off synchronous_commit anyway). I've gotten rid of a bunch of indices (especially those with low cardinality–that I did around the same time as disabling auto-vacuum, so that could account for the coincidental speed up). I'm not sure what else I could be doing wrong. It's definitely better than it was a few days ago, but I still see \"LOG: duration: 77.315 ms statement: COMMIT\" every minute or two.\nThank you,-Alessandro",
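On the question of which remaining indexes are worth their insert cost, pg_stat_user_indexes records how often each index is actually used (a suggested check, not something from the thread):

SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan, pg_relation_size(indexrelid) DESC;

An index with idx_scan = 0 over a long observation window is usually pure write overhead, unless it exists to enforce a constraint.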
"msg_date": "Thu, 23 Feb 2012 10:38:29 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On Thu, Feb 23, 2012 at 10:38 AM, Alessandro Gagliardi\n<[email protected]> wrote:\n> On Thu, Feb 23, 2012 at 10:01 AM, Steve Crawford\n> <[email protected]> wrote:\n> So if it's not auto-vacuuming that's making my inserts so slow, what is it?\n> I'm batching my inserts (that didn't seem to help at all actually, but maybe\n> cause I had already turned off synchronous_commit anyway). I've gotten rid\n> of a bunch of indices (especially those with low cardinality–that I did\n> around the same time as disabling auto-vacuum, so that could account for the\n> coincidental speed up). I'm not sure what else I could be doing wrong. It's\n> definitely better than it was a few days ago, but I still see \"LOG:\n> duration: 77.315 ms statement: COMMIT\" every minute or two.\n>\n\nHave you considered that you may have lock contention? Sampling\npg_locks may be illuminating; based on your description the lock\ncontention would be intermittent, so I wouldn't trust an n=1 test.\n\n-p\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Thu, 23 Feb 2012 10:42:05 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On 2/23/2012 12:38 PM, Alessandro Gagliardi wrote:\n\n> Does analyze increase the efficiency of inserts or just selects? (I\n> assumed the latter.) Obviously, I will need to analyze sometimes, but\n\nThat depends on if you have triggers that are doing selects. But in \ngeneral you are correct, analyze wont help inserts.\n\ncheckpoint_segments can help insert speed, what do you have that set to?\n\nAlso how you insert can make things faster too. (insert vs prepared vs COPY)\n\nAlso, if you have too many indexes on a table that can cause things to \nslow down.\n\nYour IO layer needs to be fast too. Have you watched vmstat and iostat?\n\nHave you read up on synchronous_commit?\n\n-Andy\n",
"msg_date": "Thu, 23 Feb 2012 13:07:07 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On 02/23/2012 10:38 AM, Alessandro Gagliardi wrote:\n> On Thu, Feb 23, 2012 at 10:01 AM, Steve Crawford \n> <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> The documentation has information like \"This parameter can only be\n> set in the postgresql.conf file or on the server command line.\"\n> that will tell you in advance which settings will fail when you\n> attempt to set them through SQL statements.\n>\n> Ah. I missed that. Sorry for asking stupid questions.\nNo problem and not stupid. With the manual running to hundreds of pages \nplus information on wikis and mailing-list histories spanning hundreds \nof thousands of messages sometimes knowing where to look is 90% of the \nbattle.\n>\n> But autovacuum is pretty smart about not vacuuming tables until\n> reasonably necessary. And beware that autovacuum is also\n> controlling when to analyze a table. Mass inserts are probably\n> changing the characteristics of your table such that it needs to\n> be analyzed to allow the planner to properly optimize your queries.\n>\n> Okay, that makes more sense to me; because the stats would be changing \n> quickly and so while vacuuming may not be necessary, analyzing would \n> be. At the same time, I can't afford to analyze if it's causing my \n> inserts to take over 50 ms. Something else I should add: if my selects \n> are slow, that's annoying; but if my inserts are slow, that could \n> be disastrous...\n\nYou need to rethink things a bit. Databases can fail in all sorts of \nways and can slow down during bursts of activity, data dumps, etc. You \nmay need to investigate some form of intermediate buffering.\n\n> ...Apparently the last four columns don't exist in my database. As for \n> the first four, that is somewhat illuminating....\nThen you are not running a current version of PostgreSQL so the first \nstep to performance enhancement is to upgrade. (As a general rule - \nthere are occasionally specific cases where performance decreases.)\n> So if it's not auto-vacuuming that's making my inserts so slow, what \n> is it? I'm batching my inserts (that didn't seem to help at all \n> actually, but maybe cause I had already turned off synchronous_commit \n> anyway).\nHow are you batching them? Into a temp table that is copied to the main \ntable? As a bunch of insert statements within a single connection (saves \nprobably considerable time due to eliminating multiple connection \nsetups)? With one PREPARE and multiple EXECUTE (to save repeated \nplanning time - I'm not sure this will buy you much for simple inserts, \nthough)? With COPY (much faster as many records are inserted in a single \nstatement but if one fails, all fail)?\n\nAnd what is the 50ms limit? Is that an average? Since you are batching, \nit doesn't sound like you need every statement to complete in 50ms. \nThere is always a tradeoff between overall maximum throughput and \nmaximum allowed latency.\n\n> I've gotten rid of a bunch of indices (especially those with low \n> cardinality–that I did around the same time as disabling auto-vacuum, \n> so that could account for the coincidental speed up).\nYes, inserts require the indexes to be updated so they can slow down \ninserts and updates.\n\n> I'm not sure what else I could be doing wrong. 
It's definitely better \n> than it was a few days ago, but I still see \"LOG: duration: 77.315 ms \n> statement: COMMIT\" every minute or two.\n\nThat's a huge topic ranging from hardware (CPU speed, RAM, \nspindle-count, disk-type, battery-backed write caching), OS (you *are* \nrunning on some sort of *nix, right?), OS tuning, PG tuning, etc. \nFortunately the biggest benefit comes from some basic tuning.\n\nI recommend you abandon this thread as it presupposes a now seemingly \nincorrect cause of the problem and start a new one titled something like \n\"Tuning for high insert rate\" where you describe the problem you want to \nsolve. See http://wiki.postgresql.org/wiki/Guide_to_reporting_problems \nfor a good guide to the information that will be helpful in diagnosis.\n\nCheers,\nSteve\n\n\n\n\n\n\n\n On 02/23/2012 10:38 AM, Alessandro Gagliardi wrote:\n On Thu, Feb 23, 2012 at 10:01 AM, Steve Crawford <[email protected]>\n wrote:\n\n\n\nThe documentation has information like \"This\n parameter can only be set in the postgresql.conf\n file or on the server command line.\" that will tell you in\n advance which settings will fail when you attempt to set\n them through SQL statements.\n\n\n\nAh. I missed that. Sorry for asking stupid questions.\n\n\n No problem and not stupid. With the manual running to hundreds of\n pages plus information on wikis and mailing-list histories spanning\n hundreds of thousands of messages sometimes knowing where to look is\n 90% of the battle.\n\n\n \n\n\n But autovacuum is pretty smart about not vacuuming tables\n until reasonably necessary. And beware that autovacuum is\n also controlling when to analyze a table. Mass inserts are\n probably changing the characteristics of your table such\n that it needs to be analyzed to allow the planner to\n properly optimize your queries.\n\n\n\nOkay, that makes more sense to me; because the stats would\n be changing quickly and so while vacuuming may not be\n necessary, analyzing would be. At the same time, I can't\n afford to analyze if it's causing my inserts to take over 50\n ms. Something else I should add: if my selects are slow,\n that's annoying; but if my inserts are slow, that could\n be disastrous...\n\n\n\n You need to rethink things a bit. Databases can fail in all sorts of\n ways and can slow down during bursts of activity, data dumps, etc.\n You may need to investigate some form of intermediate buffering.\n\n\n...Apparently the last four columns don't\n exist in my database. As for the first four, that is somewhat\n illuminating....\n\n Then you are not running a current version of PostgreSQL so the\n first step to performance enhancement is to upgrade. (As a general\n rule - there are occasionally specific cases where performance\n decreases.)\n\nSo if it's not auto-vacuuming that's\n making my inserts so slow, what is it? I'm batching my inserts\n (that didn't seem to help at all actually, but maybe cause I had\n already turned off synchronous_commit anyway).\n\n How are you batching them? Into a temp table that is copied to the\n main table? As a bunch of insert statements within a single\n connection (saves probably considerable time due to eliminating\n multiple connection setups)? With one PREPARE and multiple EXECUTE\n (to save repeated planning time - I'm not sure this will buy you\n much for simple inserts, though)? With COPY (much faster as many\n records are inserted in a single statement but if one fails, all\n fail)?\n\n And what is the 50ms limit? Is that an average? 
Since you are\n batching, it doesn't sound like you need every statement to complete\n in 50ms. There is always a tradeoff between overall maximum\n throughput and maximum allowed latency.\n\n\n I've gotten rid of a\n bunch of indices (especially those with low cardinality–that I\n did around the same time as disabling auto-vacuum, so that\n could account for the coincidental speed up).\n\n Yes, inserts require the indexes to be updated so they can slow down\n inserts and updates.\n\n\n I'm not sure what else I could be doing\n wrong. It's definitely better than it was a few days ago, but I\n still see \"LOG: duration: 77.315 ms statement: COMMIT\" every\n minute or two.\n\n\n\n That's a huge topic ranging from hardware (CPU speed, RAM,\n spindle-count, disk-type, battery-backed write caching), OS (you\n *are* running on some sort of *nix, right?), OS tuning, PG tuning,\n etc. Fortunately the biggest benefit comes from some basic tuning.\n\n I recommend you abandon this thread as it presupposes a now\n seemingly incorrect cause of the problem and start a new one titled\n something like \"Tuning for high insert rate\" where you describe the\n problem you want to solve. See\n http://wiki.postgresql.org/wiki/Guide_to_reporting_problems for a\n good guide to the information that will be helpful in diagnosis.\n\n Cheers,\n Steve",
"msg_date": "Thu, 23 Feb 2012 11:26:55 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "I'm unable to make sense of pg_locks. The vast majority are\nlocktype='transactionid', mode='ExclusiveLock', granted=t. There are some\n'relation' locks with mode='RowExclusiveLock' and fewer with\n'AccessShareLock'. I have no idea what I should be looking for here.\n\nOn Thu, Feb 23, 2012 at 10:42 AM, Peter van Hardenberg <[email protected]> wrote:\n\n> On Thu, Feb 23, 2012 at 10:38 AM, Alessandro Gagliardi\n> <[email protected]> wrote:\n> > around the same time as disabling auto-vacuum, so that could account for\n> the\n> > coincidental speed up). I'm not sure what else I could be doing wrong.\n> It's\n> > definitely better than it was a few days ago, but I still see \"LOG:\n> > duration: 77.315 ms statement: COMMIT\" every minute or two.\n> >\n>\n> Have you considered that you may have lock contention? Sampling\n> pg_locks may be illuminating; based on your description the lock\n> contention would be intermittent, so I wouldn't trust an n=1 test.\n>\n> -p\n>\n> --\n> Peter van Hardenberg\n> San Francisco, California\n> \"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n>\n\nI'm unable to make sense of pg_locks. The vast majority are locktype='transactionid', mode='ExclusiveLock', granted=t. There are some 'relation' locks with mode='RowExclusiveLock' and fewer with 'AccessShareLock'. I have no idea what I should be looking for here.\nOn Thu, Feb 23, 2012 at 10:42 AM, Peter van Hardenberg <[email protected]> wrote:\nOn Thu, Feb 23, 2012 at 10:38 AM, Alessandro Gagliardi\n<[email protected]> wrote:\n> around the same time as disabling auto-vacuum, so that could account for the\n> coincidental speed up). I'm not sure what else I could be doing wrong. It's\n> definitely better than it was a few days ago, but I still see \"LOG:\n> duration: 77.315 ms statement: COMMIT\" every minute or two.\n>\n\nHave you considered that you may have lock contention? Sampling\npg_locks may be illuminating; based on your description the lock\ncontention would be intermittent, so I wouldn't trust an n=1 test.\n\n-p\n\n--\nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut",
"msg_date": "Thu, 23 Feb 2012 12:28:30 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On Thu, Feb 23, 2012 at 11:07 AM, Andy Colson <[email protected]> wrote:\n\n> That depends on if you have triggers that are doing selects. But in\n> general you are correct, analyze wont help inserts.\n>\n> I do have some, actually. I have a couple trigger functions like:\n\nCREATE OR REPLACE FUNCTION locations_quiet_unique_violation()\n RETURNS trigger AS\n$BODY$\nBEGIN\n IF EXISTS (SELECT 1 FROM public.locations WHERE geohash = NEW.geohash)\nTHEN\n RETURN NULL;\n ELSE\n RETURN NEW;\n END IF;\nEND;\n$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\n\nthat are triggered thusly:\n\nCREATE TRIGGER locations_check_unique_violation\n BEFORE INSERT\n ON locations\n FOR EACH ROW\n EXECUTE PROCEDURE locations_quiet_unique_violation();\n\nI left auto-vacuum enabled for those tables.\n\ncheckpoint_segments can help insert speed, what do you have that set to?\n>\n> 40. Checking http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Serverit looks like setting that as high as 256 would not necessarily be\nunreasonable. What do you think?\n\n\n> Also how you insert can make things faster too. (insert vs prepared vs\n> COPY)\n>\n> I'm doing this all with INSERT. Is COPY that much faster? I don't know\nanything about prepared.\n\n\n> Also, if you have too many indexes on a table that can cause things to\n> slow down.\n>\n> Yeah, got that. I removed a bunch. I'd rather not remove what's left\nunless I have to.\n\n\n> Your IO layer needs to be fast too. Have you watched vmstat and iostat?\n>\n> I don't know if I have access to vmstat and iostat. Heroku is hosting this\nfor me on AWS.\n\n\n> Have you read up on synchronous_commit?\n>\n> Only a tiny bit. A couple people suggested disabling it since my database\nis being hosted on AWS so I did that. It seems a bit risky but perhaps\nworth it.\n\nOn Thu, Feb 23, 2012 at 11:07 AM, Andy Colson <[email protected]> wrote:\nThat depends on if you have triggers that are doing selects. But in general you are correct, analyze wont help inserts.\n\nI do have some, actually. I have a couple trigger functions like:CREATE OR REPLACE FUNCTION locations_quiet_unique_violation()\n RETURNS trigger AS$BODY$BEGIN\n IF EXISTS (SELECT 1 FROM public.locations WHERE geohash = NEW.geohash) THEN RETURN NULL;\n ELSE RETURN NEW; END IF;\nEND;$BODY$ LANGUAGE plpgsql VOLATILE\n COST 100;that are triggered thusly:CREATE TRIGGER locations_check_unique_violation\n BEFORE INSERT ON locations FOR EACH ROW\n EXECUTE PROCEDURE locations_quiet_unique_violation();I left auto-vacuum enabled for those tables.\ncheckpoint_segments can help insert speed, what do you have that set to?\n40. Checking http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server it looks like setting that as high as 256 would not necessarily be unreasonable. What do you think? \n \nAlso how you insert can make things faster too. (insert vs prepared vs COPY)\n\nI'm doing this all with INSERT. Is COPY that much faster? I don't know anything about prepared. \nAlso, if you have too many indexes on a table that can cause things to slow down.\n\nYeah, got that. I removed a bunch. I'd rather not remove what's left unless I have to. \nYour IO layer needs to be fast too. Have you watched vmstat and iostat?\n\nI don't know if I have access to vmstat and iostat. Heroku is hosting this for me on AWS. \nHave you read up on synchronous_commit?\n\nOnly a tiny bit. A couple people suggested disabling it since my database is being hosted on AWS so I did that. It seems a bit risky but perhaps worth it.",
"msg_date": "Thu, 23 Feb 2012 12:40:45 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: set autovacuum=off"
},
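A quick way to read those settings back over SQL, which works from pgAdmin as well (checkpoint_segments exists only on pre-9.5 servers such as the 9.0 instance discussed here):

-- Current values of the write-related settings mentioned in this thread.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('checkpoint_segments', 'synchronous_commit', 'wal_buffers');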
{
"msg_contents": "On 2/23/2012 2:40 PM, Alessandro Gagliardi wrote:\n>\n> checkpoint_segments can help insert speed, what do you have that set to?\n>\n> 40. Checking\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server it looks\n> like setting that as high as 256 would not necessarily be unreasonable.\n> What do you think?\n\nI'd say go slow. Try a little bit and see if it helps. I don't \nactually have high insert rate problems, so I don't actually know from \nexperience.\n\n>\n> Also how you insert can make things faster too. (insert vs prepared\n> vs COPY)\n>\n> I'm doing this all with INSERT. Is COPY that much faster? I don't know\n> anything about prepared.\n\nIf you can batch multiple records then COPY is the fastest method. (Of \ncourse your triggers might be the cause for the slowness and not insert \nspeed).\n\nDepending on the language you are using to insert records, you can \nprepare a query and only send the arguments vs sending the entire sql \nstatement every time.\n\nIn pseudo-perl code I'd:\nmy $q = $db->prepare('insert into table(col1, vol2) values ($1, $2)');\n\n$q->execute('one', 'two');\n$q->execute('three', 'four');\n$q->execute('five', 'six');\n\nThis is faster because the \"insert...\" is only sent over the wire and \nparsed once. Then only the arguments are sent for each execute.\n\nSpeed wise, I think it'll go:\n1) slowest: individual insert statements\n2) prepared statements\n3) fastest: COPY\n\nAgain.. assuming the triggers are not the bottleneck.\n\nHave you run an insert by hand with 'EXPLAIN ANALYZE'?\n\n-Andy\n\n\n>\n> Have you read up on synchronous_commit?\n>\n> Only a tiny bit. A couple people suggested disabling it since my\n> database is being hosted on AWS so I did that. It seems a bit risky but\n> perhaps worth it.\n>\n\nI would think they are running on battery backed IO, with boxes on UPS, \nso I'd guess its pretty safe. It would also depend on your commit size. \n If you are batching a million records into one commit, you might loose \nall of them.\n\n-Andy\n",
"msg_date": "Thu, 23 Feb 2012 14:57:29 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
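The same prepared-statement idea expressed purely at the SQL level, for anyone trying it from psql (the table is the thread's locations table, but the second column is made up for illustration). Many drivers do the equivalent under the hood when you use their parameterized APIs.

-- Parse and plan the INSERT once, then send only the parameters.
PREPARE ins_location (text, text) AS
  INSERT INTO locations (geohash, label) VALUES ($1, $2);

EXECUTE ins_location('9q8yykv3', 'one');
EXECUTE ins_location('9q8yykv4', 'two');

DEALLOCATE ins_location;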
{
"msg_contents": "On Thu, Feb 23, 2012 at 11:26 AM, Steve Crawford <\[email protected]> wrote:\n\n> **\n> You need to rethink things a bit. Databases can fail in all sorts of ways\n> and can slow down during bursts of activity, data dumps, etc. You may need\n> to investigate some form of intermediate buffering.\n>\n> Currently my \"buffer\" (such as it is) is Kestrel<http://robey.github.com/kestrel/> which\nqueues up INSERTs and then executes them one at a time. This keeps the rest\nof the app from being held back, but it becomes a problem when the queue\nfills up faster than it can drain. For one particularly heavy logger, I\ntried writing it all to an unconstrained table with the idea that I would\ncopy that table (using INSERT . . . SELECT . . .) into another table with\nconstraints, reducing the data in the process (deduping and such). Problem\nwas, even my constraint-less table wasn't fast enough. Perhaps buffering to\na local file and then using COPY would do the trick.\n\n> ...Apparently the last four columns don't exist in my database. As for\n> the first four, that is somewhat illuminating....\n>\n> Then you are not running a current version of PostgreSQL so the first step\n> to performance enhancement is to upgrade. (As a general rule - there are\n> occasionally specific cases where performance decreases.)\n>\n> We're using 9.0.6. Peter, how do you feel about upgrading? :)\n\nHow are you batching them? Into a temp table that is copied to the main\n> table? As a bunch of insert statements within a single connection (saves\n> probably considerable time due to eliminating multiple connection setups)?\n> With one PREPARE and multiple EXECUTE (to save repeated planning time - I'm\n> not sure this will buy you much for simple inserts, though)? With COPY\n> (much faster as many records are inserted in a single statement but if one\n> fails, all fail)?\n>\n> The second one (a bunch of insert statements within a single connection).\nAs I mentioned above, I was going to try the temp table thing, but that\nwasn't fast enough. COPY might be my next attempt.\n\n\n> And what is the 50ms limit? Is that an average? Since you are batching, it\n> doesn't sound like you need every statement to complete in 50ms. There is\n> always a tradeoff between overall maximum throughput and maximum allowed\n> latency.\n>\n> No, not average. I want to be able to do 100-200 INSERTs per second (90%\nof those would go to one of two tables, the other 10% would go to any of a\ncouple dozen tables). If 1% of my INSERTs take 100 ms, then the other 99%\nmust take no more than 9 ms to complete.\n...actually, it occurs to me that since I'm now committing batches of 1000,\na 100ms latency per commit wouldn't be bad at all! I'll have to look into\nthat.... (Either that or my batching isn't working like I thought it was.)\n\n\n> I recommend you abandon this thread as it presupposes a now seemingly\n> incorrect cause of the problem and start a new one titled something like\n> \"Tuning for high insert rate\" where you describe the problem you want to\n> solve. See http://wiki.postgresql.org/wiki/Guide_to_reporting_problemsfor a good guide to the information that will be helpful in diagnosis.\n>\n> I'll leave the title as is since I think simply renaming this message\nwould cause more confusion than it would prevent. 
But this gives me\nsomething to chew on and when I need to return to this topic, I'll do just\nthat.\n\nThanks,\n-Alessandro\n\nOn Thu, Feb 23, 2012 at 11:26 AM, Steve Crawford <[email protected]> wrote:\n\nYou need to rethink things a bit. Databases can fail in all sorts of\n ways and can slow down during bursts of activity, data dumps, etc.\n You may need to investigate some form of intermediate buffering.\nCurrently my \"buffer\" (such as it is) is Kestrel which queues up INSERTs and then executes them one at a time. This keeps the rest of the app from being held back, but it becomes a problem when the queue fills up faster than it can drain. For one particularly heavy logger, I tried writing it all to an unconstrained table with the idea that I would copy that table (using INSERT . . . SELECT . . .) into another table with constraints, reducing the data in the process (deduping and such). Problem was, even my constraint-less table wasn't fast enough. Perhaps buffering to a local file and then using COPY would do the trick. \n\n\n...Apparently the last four columns don't\n exist in my database. As for the first four, that is somewhat\n illuminating....\n\n Then you are not running a current version of PostgreSQL so the\n first step to performance enhancement is to upgrade. (As a general\n rule - there are occasionally specific cases where performance\n decreases.)We're using 9.0.6. Peter, how do you feel about upgrading? :)\nHow are you batching them? Into a temp table that is copied to the\n main table? As a bunch of insert statements within a single\n connection (saves probably considerable time due to eliminating\n multiple connection setups)? With one PREPARE and multiple EXECUTE\n (to save repeated planning time - I'm not sure this will buy you\n much for simple inserts, though)? With COPY (much faster as many\n records are inserted in a single statement but if one fails, all\n fail)?\n\nThe second one (a bunch of insert statements within a single connection). As I mentioned above, I was going to try the temp table thing, but that wasn't fast enough. COPY might be my next attempt.\n And what is the 50ms limit? Is that an average? Since you are\n batching, it doesn't sound like you need every statement to complete\n in 50ms. There is always a tradeoff between overall maximum\n throughput and maximum allowed latency.No, not average. I want to be able to do 100-200 INSERTs per second (90% of those would go to one of two tables, the other 10% would go to any of a couple dozen tables). If 1% of my INSERTs take 100 ms, then the other 99% must take no more than 9 ms to complete.\n...actually, it occurs to me that since I'm now committing batches of 1000, a 100ms latency per commit wouldn't be bad at all! I'll have to look into that.... (Either that or my batching isn't working like I thought it was.)\n I recommend you abandon this thread as it presupposes a now\n seemingly incorrect cause of the problem and start a new one titled\n something like \"Tuning for high insert rate\" where you describe the\n problem you want to solve. See\n http://wiki.postgresql.org/wiki/Guide_to_reporting_problems for a\n good guide to the information that will be helpful in diagnosis.\n\nI'll leave the title as is since I think simply renaming this message would cause more confusion than it would prevent. But this gives me something to chew on and when I need to return to this topic, I'll do just that.\nThanks,-Alessandro",
"msg_date": "Thu, 23 Feb 2012 13:07:46 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: set autovacuum=off"
},
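A rough sketch of the spool-to-a-file-then-COPY idea (the file path, table and column names are placeholders). Server-side COPY reads from the database server's filesystem and needs superuser on this generation of PostgreSQL, so on a hosted setup the client-side \copy form, or the driver's COPY FROM STDIN support (psycopg2's copy_from/copy_expert), is the usable one.

-- Server-side COPY, if you control the database host:
COPY block_log (session_id, geohash, created_at)
FROM '/var/spool/app/block_log.csv' WITH CSV;

-- Client-side equivalent from psql, reading a local file:
-- \copy block_log (session_id, geohash, created_at) from 'block_log.csv' with csv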
{
"msg_contents": "On Thu, Feb 23, 2012 at 1:07 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n>>\n>> ...Apparently the last four columns don't exist in my database. As for the\n>> first four, that is somewhat illuminating....\n>>\n>> Then you are not running a current version of PostgreSQL so the first step\n>> to performance enhancement is to upgrade. (As a general rule - there are\n>> occasionally specific cases where performance decreases.)\n>>\n> We're using 9.0.6. Peter, how do you feel about upgrading? :)\n>\n\n9.1's in beta; we're working on writing an upgrade system before\ncalling it GA, but it works fine. Feel free.\n\nMy hunch is still that your issue is lock contention.\n\n> No, not average. I want to be able to do 100-200 INSERTs per second (90% of\n> those would go to one of two tables, the other 10% would go to any of a\n> couple dozen tables). If 1% of my INSERTs take 100 ms, then the other 99%\n> must take no more than 9 ms to complete.\n> ...actually, it occurs to me that since I'm now committing batches of 1000,\n> a 100ms latency per commit wouldn't be bad at all! I'll have to look into\n> that.... (Either that or my batching isn't working like I thought it was.)\n>\n\nWe have many customers who do much more than this throughput, though\nI'm not sure what level of resourcing you're current at. You might\nconsider experimenting with a larger system if you're having\nperformance problems.\n\nPeter\n",
"msg_date": "Thu, 23 Feb 2012 13:11:14 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On 02/23/2012 01:07 PM, Alessandro Gagliardi wrote:\n\n> The second one (a bunch of insert statements within a single \n> connection). As I mentioned above, I was going to try the temp table \n> thing, but that wasn't fast enough. COPY might be my next attempt.\ninsert into...;\ninsert into...;\ninsert into...;\n... is really (ignoring statement preparation time):\nbegin;\ninsert into...;\ncommit;\nbegin;\ninsert into...;\ncommit;\nbegin;\ninsert into...;\ncommit;\n\nIt's possible that you might get a nice boost by wrapping the inserts \ninto a transaction:\nbegin;\ninsert into...;\ninsert into...;\ninsert into...;\n...\ncommit;\n\nThis only requires all that disk-intensive stuff that protects your data \nonce at the end instead of 1000 times for you batch of 1000.\n\nCOPY is even better. I just ran a quick test by restoring a table on my \ndesktop hacking db (untuned, few years old PC, single SATA disk, modest \nRAM and lots of resource competition). The 22+ million rows restored in \n282 seconds which is a rate somewhat north of 78,000 records/second or \nabout 0.13ms/record.\n\nYou may want to eliminate that trigger, which only seems to exist to \nsilence errors from uniqueness violations, and copy the incoming data \ninto a temp table then move the data with a variant of:\nINSERT INTO main_table (SELECT ... FROM incoming_table WHERE NOT EXISTS \n((SELECT 1 from main_table WHERE ...))\n\nCheers,\nSteve\n",
"msg_date": "Thu, 23 Feb 2012 13:37:54 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
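A slightly fuller sketch of that staging-table pattern, with placeholder column names. It assumes the whole batch runs in one transaction and that the batch itself contains no duplicate geohash values; a DISTINCT ON (geohash) in the SELECT would cover that case as well.

BEGIN;

CREATE TEMP TABLE locations_in (LIKE locations INCLUDING DEFAULTS) ON COMMIT DROP;

COPY locations_in (geohash, label) FROM STDIN WITH CSV;
-- ...rows streamed by the client here, terminated by \. ...

INSERT INTO locations (geohash, label)
SELECT i.geohash, i.label
FROM locations_in i
WHERE NOT EXISTS (SELECT 1 FROM locations l WHERE l.geohash = i.geohash);

COMMIT;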
{
"msg_contents": "On Thu, Feb 23, 2012 at 1:37 PM, Steve Crawford <\[email protected]> wrote:\n\n> It's possible that you might get a nice boost by wrapping the inserts into\n> a transaction:\n> begin;\n> insert into...;\n> insert into...;\n> insert into...;\n> ...\n> commit;\n>\n> This only requires all that disk-intensive stuff that protects your data\n> once at the end instead of 1000 times for you batch of 1000.\n>\n> I think that is essentially what I am doing. I'm using psycopg2 in a\npython script that runs continuously on a queue. It opens a connection and\ncreates a cursor when it begins. It then passes that cursor into a function\nalong with the data (read off the queue) that needs to be inserted. I\nrun cur.execute(\"SAVEPOINT insert_savepoint;\") followed by cur.execute(q)\n(where q is the insert statement). If there's an error I\nrun cur.execute(\"ROLLBACK TO SAVEPOINT insert_savepoint;\") otherwise I\nincrement a counter. Once the counter exceeds 999, I run conn.commit() and\nreset the counter. I believe that psycopg2 is essentially doing what you\nare suggesting. The fact that the data does not appear in the database\nuntil conn.commit() tells me that it's not committing anything until then.\n\n\n> COPY is even better. I just ran a quick test by restoring a table on my\n> desktop hacking db (untuned, few years old PC, single SATA disk, modest RAM\n> and lots of resource competition). The 22+ million rows restored in 282\n> seconds which is a rate somewhat north of 78,000 records/second or about\n> 0.13ms/record.\n>\n> I'll try that. Of course, the fact that the database is stored in AWS\ncomplicates matters. Regardless, it sounds like COPY should be considerably\nfaster.\n\nOn Thu, Feb 23, 2012 at 1:37 PM, Steve Crawford <[email protected]> wrote:\n\nIt's possible that you might get a nice boost by wrapping the inserts into a transaction:\nbegin;\ninsert into...;\ninsert into...;\ninsert into...;\n...\ncommit;\n\nThis only requires all that disk-intensive stuff that protects your data once at the end instead of 1000 times for you batch of 1000.\n\nI think that is essentially what I am doing. I'm using psycopg2 in a python script that runs continuously on a queue. It opens a connection and creates a cursor when it begins. It then passes that cursor into a function along with the data (read off the queue) that needs to be inserted. I run cur.execute(\"SAVEPOINT insert_savepoint;\") followed by cur.execute(q) (where q is the insert statement). If there's an error I run cur.execute(\"ROLLBACK TO SAVEPOINT insert_savepoint;\") otherwise I increment a counter. Once the counter exceeds 999, I run conn.commit() and reset the counter. I believe that psycopg2 is essentially doing what you are suggesting. The fact that the data does not appear in the database until conn.commit() tells me that it's not committing anything until then.\n COPY is even better. I just ran a quick test by restoring a table on my desktop hacking db (untuned, few years old PC, single SATA disk, modest RAM and lots of resource competition). The 22+ million rows restored in 282 seconds which is a rate somewhat north of 78,000 records/second or about 0.13ms/record.\n\nI'll try that. Of course, the fact that the database is stored in AWS complicates matters. Regardless, it sounds like COPY should be considerably faster.",
"msg_date": "Thu, 23 Feb 2012 14:30:37 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: set autovacuum=off"
},
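On the wire, the script described above amounts to roughly the following per batch (a hedged reconstruction, not a quote of the actual traffic; psycopg2 opens the transaction implicitly at the first statement). Issuing RELEASE SAVEPOINT after each successful row keeps the savepoint stack from growing across a 1000-row batch.

BEGIN;
SAVEPOINT insert_savepoint;
INSERT INTO locations (geohash) VALUES ('9q8yykv3');  -- succeeds
SAVEPOINT insert_savepoint;        -- re-declaring the name shadows the earlier savepoint
INSERT INTO locations (geohash) VALUES (NULL);        -- suppose this row violates a constraint
ROLLBACK TO SAVEPOINT insert_savepoint;               -- only the failed row is discarded
-- ...continues for roughly 1000 rows...
COMMIT;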
{
"msg_contents": "On Thu, Feb 23, 2012 at 1:11 PM, Peter van Hardenberg <[email protected]>wrote:\n\n> My hunch is still that your issue is lock contention.\n>\n> How would I check that? I tried looking at pg_locks but I don't know what\nto look for.\n\n\n> We have many customers who do much more than this throughput, though\n> I'm not sure what level of resourcing you're current at. You might\n> consider experimenting with a larger system if you're having\n> performance problems.\n>\n> Heh. I thought you might say that. :) It's definitely worth considering,\nbut as youmight expect, I want to exhaust other options first. For\ncustomers who do much more (or even comparable) throughput, can you tell me\nhow big of a system they require?\n\nAlso, as per Andy's suggestion, I'd like to try\ndoubling checkpoint_segments. However, it appears that that is one of those\nvariables that I cannot change from pgAdmin. I don't suppose there's any\nway to change this without rebooting the database?\n\nOn Thu, Feb 23, 2012 at 1:11 PM, Peter van Hardenberg <[email protected]> wrote:\nMy hunch is still that your issue is lock contention.\n\nHow would I check that? I tried looking at pg_locks but I don't know what to look for. \nWe have many customers who do much more than this throughput, though\nI'm not sure what level of resourcing you're current at. You might\nconsider experimenting with a larger system if you're having\nperformance problems.\n\nHeh. I thought you might say that. :) It's definitely worth considering, but as youmight expect, I want to exhaust other options first. For customers who do much more (or even comparable) throughput, can you tell me how big of a system they require?\n Also, as per Andy's suggestion, I'd like to try doubling checkpoint_segments. However, it appears that that is one of those variables that I cannot change from pgAdmin. I don't suppose there's any way to change this without rebooting the database?",
"msg_date": "Thu, 23 Feb 2012 14:54:05 -0800",
"msg_from": "Alessandro Gagliardi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "yup there is. the parameter checkpoint_segments does not require a restart\nof the server, just a reload :)\n\nOn Fri, Feb 24, 2012 at 12:54 AM, Alessandro Gagliardi\n<[email protected]>wrote:\n\n> On Thu, Feb 23, 2012 at 1:11 PM, Peter van Hardenberg <[email protected]>wrote:\n>\n>> My hunch is still that your issue is lock contention.\n>>\n>> How would I check that? I tried looking at pg_locks but I don't know what\n> to look for.\n>\n>\n>> We have many customers who do much more than this throughput, though\n>> I'm not sure what level of resourcing you're current at. You might\n>> consider experimenting with a larger system if you're having\n>> performance problems.\n>>\n>> Heh. I thought you might say that. :) It's definitely worth considering,\n> but as youmight expect, I want to exhaust other options first. For\n> customers who do much more (or even comparable) throughput, can you tell me\n> how big of a system they require?\n>\n> Also, as per Andy's suggestion, I'd like to try\n> doubling checkpoint_segments. However, it appears that that is one of those\n> variables that I cannot change from pgAdmin. I don't suppose there's any\n> way to change this without rebooting the database?\n>\n\nyup there is. the parameter checkpoint_segments does not require a restart of the server, just a reload :)On Fri, Feb 24, 2012 at 12:54 AM, Alessandro Gagliardi <[email protected]> wrote:\nOn Thu, Feb 23, 2012 at 1:11 PM, Peter van Hardenberg <[email protected]> wrote:\n\nMy hunch is still that your issue is lock contention.\n\nHow would I check that? I tried looking at pg_locks but I don't know what to look for. \nWe have many customers who do much more than this throughput, though\nI'm not sure what level of resourcing you're current at. You might\nconsider experimenting with a larger system if you're having\nperformance problems.\n\nHeh. I thought you might say that. :) It's definitely worth considering, but as youmight expect, I want to exhaust other options first. For customers who do much more (or even comparable) throughput, can you tell me how big of a system they require?\n\n Also, as per Andy's suggestion, I'd like to try doubling checkpoint_segments. However, it appears that that is one of those variables that I cannot change from pgAdmin. I don't suppose there's any way to change this without rebooting the database?",
"msg_date": "Mon, 27 Feb 2012 23:55:14 +0200",
"msg_from": "Filippos Kalamidas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
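pg_settings shows which kind of change each parameter needs: a context of 'sighup' means a configuration reload is enough, 'postmaster' means a full restart.

SELECT name, setting, context
FROM pg_settings
WHERE name IN ('checkpoint_segments', 'wal_buffers');
-- checkpoint_segments reports context = 'sighup' (reload is enough);
-- wal_buffers reports context = 'postmaster' (restart required).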
{
"msg_contents": "you might also consider increasing the wal_buffers value if it's still the\ndefault (64KB)\n\nBR\n\nyou might also consider increasing the wal_buffers value if it's still the default (64KB)BR",
"msg_date": "Mon, 27 Feb 2012 23:59:56 +0200",
"msg_from": "Filippos Kalamidas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
},
{
"msg_contents": "On Thu, Feb 23, 2012 at 3:28 PM, Alessandro Gagliardi\n<[email protected]> wrote:\n> I'm unable to make sense of pg_locks. The vast majority are\n> locktype='transactionid', mode='ExclusiveLock', granted=t. There are some\n> 'relation' locks with mode='RowExclusiveLock' and fewer with\n> 'AccessShareLock'. I have no idea what I should be looking for here.\n\nIf you have lock contention, you'll see locks with granted='f', at\nleast from time to time. Those are the ones you want to worry about.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n",
"msg_date": "Wed, 21 Mar 2012 10:32:26 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: set autovacuum=off"
}
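A common way to sample for that, joining the waiters back to their sessions (the pg_stat_activity column names below are the 9.0-era ones, procpid/waiting/current_query; on 9.2 and later they are pid and query):

-- Show ungranted locks and the sessions waiting on them.
SELECT l.locktype, l.relation::regclass AS relation, l.mode, l.granted,
       a.procpid, a.waiting, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;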
] |
[
{
"msg_contents": "Hi, everyone. I'm maintaining an application that exists as a \"black \nbox\" in manufacturing plants. The system is based on Windows, .NET, and \nPostgreSQL 8.3. I'm a Unix kind of guy myself, but the application \nlayer and system administration are being handled by other people; I'm \njust the PostgreSQL guy.\n\nBecause of the nature of the application, we don't have direct control \nover what happens. And it turns out that at one installation, we're \nquickly running out of disk space. The database is already taking up \nabout 200 GB of space, and is growing by 1 GB or so a day. Switching \ndisks, either to a larger/faster traditional drive, or even to a SSD, is \nnot an option. (And yes, I know that SSDs have their own risks, but I'm \njust throwing that out as one option.)\n\nRight now, the best solution to the space problem is to delete \ninformation associated with old records, where \"old\" is from at least 30 \ndays ago. The old records are spread across a few tables, including \nmany large objects. (The application was written by people who were new \nto PostgreSQL, and didn't realize that they could use BYTEA.) \nBasically, given a foreign key B.a_id that points to table A, I want to \nDELETE all in B where A's creation date is at least 30 days ago.\n\nUnfortunately, when we implemented this simple delete, it executed \nslower than molasses, taking about 9 hours to do its thing. Not only \ndoes this seem like a really, really long time to do such deleting, but \nwe have only a 4-hour window in which to run this maintenance activity, \nbefore the factory starts to use our black box again.\n\nI've tried a few approaches so far, none of which have been hugely \nsuccessful. The fact that it takes several hours to test each theory is \nobviously a bit of a pain, and so I'm curious to hear suggestions from \npeople here.\n\nI should note that my primary concern is available RAM. The database, \nas I wrote, is about 200 GB in size, and PostgreSQL is reporting \n(according to Windows) use of about 5 GB RAM, plus another 25 GB of \nvirtual memory. I've told the Windows folks on this project that \nvirtual memory kills a database, and that it shouldn't surprise us to \nhave horrible performance if the database and operating system are both \ntransferring massive amounts of data back and forth. But there doesn't \nseem to be a good way to handle this\n\nThis is basically what I'm trying to execute:\n\nDELETE FROM B\nWHERE r_id IN (SELECT R.id\n FROM R, B\n WHERE r.end_date < (NOW() - (interval '1 day' * 30))\n AND r.id = b.r_id\n\n(1) I tried to write this as a join, rather than a subselect. But B has \nan oid column that points to large objects, and on which we have a rule \nthat removes the associated large object when a row in B is removed. \nDoing the delete as a join resulted in \"no such large object with an oid \nof xxx\" errors. (I'm not sure why, although it might have to do with \nthe rule.)\n\n(2) I tried to grab the rows that *do* interest me, put them into a \ntemporary table, TRUNCATE the existing table, and then copy the rows \nback. I only tested that with a 1 GB subset of the data, but that took \nlonger than other options.\n\n(3) There are some foreign-key constraints on the B table. I thought \nthat perhaps doing a mass DELETE was queueing up all of those \nconstraints, and possibly using up lots of memory and/or taking a long \ntime to execute. 
I thus rewrote my queries such that they first removed \nthe constraints, then executed the DELETE, and then restored the \nconstraints. That didn't seem to improve things much either, and took a \nlong time (30 minutes) just to remove the constraints. I expected \nre-adding the constraints to take a while, but shouldn't removing them \nbe relatively quick?\n\n(4) I tried \"chunking\" the deletes, such that instead of trying to \ndelete all of the records from the B table, I would instead delete just \nthose associated with 100 or 200 rows from the R table. On a 1 GB \nsubset of the data, this seemed to work just fine. But on the actual \ndatabase, it was still far too slow.\n\nI've been surprised by the time it takes to delete the records in \nquestion. I keep trying to tell the others on this project that \nPostgreSQL isn't inherently slow, but that a 200 GB database running on \na non-dedicated machine, with an old version (8.3), and while it's \nswapping RAM, will be slow regardless of the database software we're \nusing. But even so, 9 hours to delete 100 GB of data strikes me as a \nvery long process.\n\nAgain, I continue to believe that given our hard time deadlines, and the \nfact that we're using a large amount of virtual memory, that there isn't \nreally a solution that will work quickly and easily. But I'd be \ndelighted to be wrong, and welcome any and all comments and suggestions \nfor how to deal with this.\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n",
"msg_date": "Thu, 23 Feb 2012 10:39:49 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very long deletion time on a 200 GB database"
},
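Since the immediate pressure is disk space, a standard first step is to see where the 200 GB actually lives. Note that large objects are all stored in the system table pg_largeobject, so they show up under that name rather than under the table that references them.

SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 15;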
{
"msg_contents": "Do you have any more detailed information about the hardware, what sort of disk configuration does it have?\n\nCan you get onto the machine to look at what is using those resources? You mention the 25gb of virtual memory; is that being used? If so is it being used by postgres or something else? If it's being used by postgres you should change postgresql.conf to work within your 5gb, otherwise can you stop the other applications to do your delete? A snapshot from task manager or process monitor of process resource usage would be useful, even better somelogging from perfmon including physical disk usage.\n\n\nWhat would be even more useful is the table definitions; you mention trying to drop constraints to speed it up but is there anything else at play, e.g. triggers?\n\n\n\n>________________________________\n> From: Reuven M. Lerner <[email protected]>\n>To: [email protected] \n>Sent: Thursday, 23 February 2012, 8:39\n>Subject: [PERFORM] Very long deletion time on a 200 GB database\n> \n>\n>\n>Hi, everyone. I'm maintaining an application that exists as a \"black box\" in manufacturing plants. The system is based on Windows, .NET, and PostgreSQL 8.3. I'm a Unix kind of guy myself, but the application layer and system administration are being handled by other people; I'm just the PostgreSQL guy.\n>\n>Because of the nature of the application, we don't have direct control over what happens. And it turns out that at one installation, we're quickly running out of disk space. The database is already taking up about 200 GB of space, and is growing by 1 GB or so a day. Switching disks, either to a larger/faster traditional drive, or even to a SSD, is not an option. (And yes, I know that SSDs have their own risks, but I'm just throwing that out as one option.)\n>\n>Right now, the best solution to the space problem is to delete information associated with old records, where \"old\" is from at least 30 days ago. The old records are spread across a few tables, including many large objects. (The application was written by people who were new to PostgreSQL, and didn't realize that they could use BYTEA.) Basically, given a foreign key B.a_id that points to table A, I want to DELETE all in B where A's creation date is at least 30 days ago.\n>\n>Unfortunately, when we implemented this simple delete, it executed slower than molasses, taking about 9 hours to do its thing. Not only does this seem like a really, really long time to do such deleting, but we have only a 4-hour window in which to run this maintenance activity, before the factory starts to use our black box again.\n>\n>I've tried a few approaches so far, none of which have been hugely successful. The fact that it takes several hours to test each theory is obviously a bit of a pain, and so I'm curious to hear suggestions from people here.\n>\n>I should note that my primary concern is available RAM. The database, as I wrote, is about 200 GB in size, and PostgreSQL is reporting (according to Windows) use of about 5 GB RAM, plus another 25 GB of virtual memory. I've told the Windows folks on this project that virtual memory kills a database, and that it shouldn't surprise us to have horrible performance if the database and operating system are both transferring massive amounts of data back and forth. 
But there doesn't seem to be a good way to handle this\n>\n>This is basically what I'm trying to execute:\n>\n>DELETE FROM B\n>WHERE r_id IN (SELECT R.id\n> FROM R, B\n> WHERE r.end_date < (NOW() - (interval '1 day' * 30))\n> AND r.id = b.r_id\n>\n>(1) I tried to write this as a join, rather than a subselect. But B has an oid column that points to large objects, and on which we have a rule that removes the associated large object when a row in B is removed. Doing the delete as a join resulted in \"no such large object with an oid of xxx\" errors. (I'm not sure why, although it might have to do with the rule.)\n>\n>(2) I tried to grab the rows that *do* interest me, put them into a temporary table, TRUNCATE the existing table, and then copy the rows back. I only tested that with a 1 GB subset of the data, but that took longer than other options.\n>\n>(3) There are some foreign-key constraints on the B table. I thought that perhaps doing a mass DELETE was queueing up all of those constraints, and possibly using up lots of memory and/or taking a long time to execute. I thus rewrote my queries such that they first removed the constraints, then executed the DELETE, and then restored the constraints. That didn't seem to improve things much either, and took a long time (30 minutes) just to remove the constraints. I expected re-adding the constraints to take a while, but shouldn't removing them be relatively quick?\n>\n>(4) I tried \"chunking\" the deletes, such that instead of trying to delete all of the records from the B table, I would instead delete just those associated with 100 or 200 rows from the R table. On a 1 GB subset of the data, this seemed to work just fine. But on the actual database, it was still far too slow.\n>\n>I've been surprised by the time it takes to delete the records in question. I keep trying to tell the others on this project that PostgreSQL isn't inherently slow, but that a 200 GB database running on a non-dedicated machine, with an old version (8.3), and while it's swapping RAM, will be slow regardless of the database software we're using. But even so, 9 hours to delete 100 GB of data strikes me as a very long process.\n>\n>Again, I continue to believe that given our hard time deadlines, and the fact that we're using a large amount of virtual memory, that there isn't really a solution that will work quickly and easily. But I'd be delighted to be wrong, and welcome any and all comments and suggestions for how to deal with this.\n>\n>Reuven\n>\n>-- Reuven M. Lerner -- Web development, consulting, and training\n>Mobile: +972-54-496-8405 * US phone: 847-230-9795\n>Skype/AIM: reuvenlerner\n>\n>-- Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\nDo you have any more detailed information about the hardware, what sort of disk configuration does it have?Can you get onto the machine to look at what is using those resources? You mention the 25gb of virtual memory; is that being used? If so is it being used by postgres or something else? If it's being used by postgres you should change postgresql.conf to work within your 5gb, otherwise can you stop the other applications to do your delete? 
A snapshot from task manager or process monitor of process resource usage would be useful, even better some logging from perfmon including physical disk usage.What would be\n even more useful is the table definitions; you mention trying to drop constraints to speed it up but is there anything else at play, e.g. triggers? From: Reuven M. Lerner <[email protected]> To: [email protected] Sent: Thursday, 23 February 2012, 8:39 Subject: [PERFORM] Very long deletion time on a 200 GB database\n Hi, everyone. I'm maintaining an application that exists as a \"black box\" in manufacturing plants. The system is based on Windows, .NET, and PostgreSQL 8.3. I'm a Unix kind of guy myself, but the application layer and system administration are being handled by other people; I'm just the PostgreSQL guy.Because of the nature of the application, we don't have direct control over what happens. And it turns out that at one installation, we're quickly running out of disk space. The database is already taking up about 200 GB of space, and is growing by 1 GB or so a day. Switching disks, either to a larger/faster traditional drive, or even to a SSD, is not an option. (And yes, I know that SSDs have their own risks, but I'm just throwing that out as one option.)Right now, the best solution to the space problem is to delete information associated with old\n records, where \"old\" is from at least 30 days ago. The old records are spread across a few tables, including many large objects. (The application was written by people who were new to PostgreSQL, and didn't realize that they could use BYTEA.) Basically, given a foreign key B.a_id that points to table A, I want to DELETE all in B where A's creation date is at least 30 days ago.Unfortunately, when we implemented this simple delete, it executed slower than molasses, taking about 9 hours to do its thing. Not only does this seem like a really, really long time to do such deleting, but we have only a 4-hour window in which to run this maintenance activity, before the factory starts to use our black box again.I've tried a few approaches so far, none of which have been hugely successful. The fact that it takes several hours to test each theory is obviously a bit of a pain, and so I'm curious to hear suggestions from\n people here.I should note that my primary concern is available RAM. The database, as I wrote, is about 200 GB in size, and PostgreSQL is reporting (according to Windows) use of about 5 GB RAM, plus another 25 GB of virtual memory. I've told the Windows folks on this project that virtual memory kills a database, and that it shouldn't surprise us to have horrible performance if the database and operating system are both transferring massive amounts of data back and forth. But there doesn't seem to be a good way to handle thisThis is basically what I'm trying to execute:DELETE FROM BWHERE r_id IN (SELECT R.id FROM R, B WHERE r.end_date < (NOW() - (interval '1 day' * 30)) AND r.id = b.r_id(1) I tried to write this as a join, rather than a subselect. But B has an oid column that points to large objects, and on which we have a rule\n that removes the associated large object when a row in B is removed. Doing the delete as a join resulted in \"no such large object with an oid of xxx\" errors. (I'm not sure why, although it might have to do with the rule.)(2) I tried to grab the rows that *do* interest me, put them into a temporary table, TRUNCATE the existing table, and then copy the rows back. 
I only tested that with a 1 GB subset of the data, but that took longer than other options.(3) There are some foreign-key constraints on the B table. I thought that perhaps doing a mass DELETE was queueing up all of those constraints, and possibly using up lots of memory and/or taking a long time to execute. I thus rewrote my queries such that they first removed the constraints, then executed the DELETE, and then restored the constraints. That didn't seem to improve things much either, and took a long time (30 minutes) just to remove the\n constraints. I expected re-adding the constraints to take a while, but shouldn't removing them be relatively quick?(4) I tried \"chunking\" the deletes, such that instead of trying to delete all of the records from the B table, I would instead delete just those associated with 100 or 200 rows from the R table. On a 1 GB subset of the data, this seemed to work just fine. But on the actual database, it was still far too slow.I've been surprised by the time it takes to delete the records in question. I keep trying to tell the others on this project that PostgreSQL isn't inherently slow, but that a 200 GB database running on a non-dedicated machine, with an old version (8.3), and while it's swapping RAM, will be slow regardless of the database software we're using. But even so, 9 hours to delete 100 GB of data strikes me as a very long process.Again, I continue to believe that given our hard time\n deadlines, and the fact that we're using a large amount of virtual memory, that there isn't really a solution that will work quickly and easily. But I'd be delighted to be wrong, and welcome any and all comments and suggestions for how to deal with this.Reuven-- Reuven M. Lerner -- Web development, consulting, and trainingMobile: +972-54-496-8405 * US phone: 847-230-9795Skype/AIM: reuvenlerner-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 23 Feb 2012 09:50:20 +0000 (GMT)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "> DELETE FROM B\n> WHERE r_id IN (SELECT R.id\n> FROM R, B\n> WHERE r.end_date < (NOW() - (interval '1 day' * 30))\n> AND r.id = b.r_id\n>\n\nHow about:\n\n DELETE FROM B\n WHERE r_id IN (SELECT distinct R.id\n FROM R WHERE r.end_date < (NOW() - (interval '1 day' * 30))\n\n?\n\nGreetings\nMarcin\n",
"msg_date": "Thu, 23 Feb 2012 11:07:47 +0100",
"msg_from": "=?UTF-8?B?TWFyY2luIE1hxYRr?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 23/02/12 08:39, Reuven M. Lerner wrote:\n> (4) I tried \"chunking\" the deletes, such that instead of trying to \n> delete all of the records from the B table, I would instead delete \n> just those associated with 100 or 200 rows from the R table. On a 1 \n> GB subset of the data, this seemed to work just fine. But on the \n> actual database, it was still far too slow.\n\nThis is the approach I'd take. You don't have enough control / access to \ncome up with a better solution. Build a temp table with 100 ids to \ndelete. Time that, and then next night you can increase to 200 etc until \nit takes around 3 hours.\n\nOh - and get the Windows admins to take a look at disk activity - the \nstandard performance monitor can tell you more than enough. If it is \nswapping constantly, performance will be atrocious but even if the disks \nare just constantly busy then updates and deletes can be very slow.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n",
"msg_date": "Thu, 23 Feb 2012 11:23:09 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
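A hedged sketch of that batching approach, using the R/B names from the original post; the LIMIT is the knob to grow from night to night once timings are known. If the old rows in R itself should go as well, a second DELETE against R driven by the same temp table fits inside the same transaction.

BEGIN;

CREATE TEMP TABLE r_to_delete ON COMMIT DROP AS
SELECT id
FROM r
WHERE end_date < NOW() - INTERVAL '30 days'
LIMIT 100;

DELETE FROM b
WHERE b.r_id IN (SELECT id FROM r_to_delete);

COMMIT;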
{
"msg_contents": "On Thu, Feb 23, 2012 at 5:39 AM, Reuven M. Lerner <[email protected]> wrote:\n> Unfortunately, when we implemented this simple delete, it executed slower\n> than molasses, taking about 9 hours to do its thing. Not only does this\n> seem like a really, really long time to do such deleting, but we have only a\n> 4-hour window in which to run this maintenance activity, before the factory\n> starts to use our black box again.\n\nPG 8.3 had horrible hash joins for big tables, so you might try \"set\nenable_hashjoin=false\" prior to your query. I suspect this is not your\nbiggest problem, though...\n\n> I should note that my primary concern is available RAM. The database, as I\n> wrote, is about 200 GB in size, and PostgreSQL is reporting (according to\n> Windows) use of about 5 GB RAM, plus another 25 GB of virtual memory.\n\nYou really *have* to look into that situation. 25GB of *active*\nvirtual memory? That would mean a thrashing server, and an utterly\nunresponsive one from my experience. Your data is probably wrong,\nbecause if I had a server *using* 30G of RAM only 5 of which are\nphysical, I wouldn't even be able to open a remote desktop to it.\n\nI bet your numbers are wrong. In any case, you have to look into the\nmatter. Any amount of swapping will kill performance for postgres,\nthrowing plans out of whack, turning sequential I/O into random I/O, a\nmess.\n\n> told the Windows folks on this project that virtual memory kills a database,\n> and that it shouldn't surprise us to have horrible performance if the\n> database and operating system are both transferring massive amounts of data\n> back and forth. But there doesn't seem to be a good way to handle this\n\nThere is, tune postgresql.conf to use less memory.\n\n> This is basically what I'm trying to execute:\n>\n> DELETE FROM B\n> WHERE r_id IN (SELECT R.id\n> FROM R, B\n> WHERE r.end_date < (NOW() - (interval '1 day' * 30))\n> AND r.id = b.r_id\n\nDELETE is way faster when you have no constraints, no primary key, no indices.\n\nIf you could stop all database use in that 4-hour period, it's\npossible dropping indices, FKs and PK, delete, and recreating the\nindices/FKs/PK will run fast enough.\nYou'll have to test that on some test server though...\n\n> (3) There are some foreign-key constraints on the B table. I thought that\n> perhaps doing a mass DELETE was queueing up all of those constraints, and\n> possibly using up lots of memory and/or taking a long time to execute. I\n> thus rewrote my queries such that they first removed the constraints, then\n> executed the DELETE, and then restored the constraints.\n\nThat's not enough. As I said, you have to drop the PK and all indices too.\n\n> That didn't seem to\n> improve things much either, and took a long time (30 minutes) just to remove\n> the constraints. I expected re-adding the constraints to take a while, but\n> shouldn't removing them be relatively quick?\n\nThat means your database is locked (a lot of concurrent access), or\nthrashing, because dropping a constraint is a very quick task.\nIf you stop your application, I'd expect dropping constraints and\nindices to take almost no time.\n\n> (4) I tried \"chunking\" the deletes, such that instead of trying to delete\n> all of the records from the B table, I would instead delete just those\n> associated with 100 or 200 rows from the R table. On a 1 GB subset of the\n> data, this seemed to work just fine. 
But on the actual database, it was\n> still far too slow.\n\nCheck the hash join thing.\nCheck/post an explain of the delete query, to see if it uses hash\njoins, and which tables are hash-joined. If they're big ones, 8.3 will\nperform horribly.\n\n> I've been surprised by the time it takes to delete the records in question.\n> I keep trying to tell the others on this project that PostgreSQL isn't\n> inherently slow, but that a 200 GB database running on a non-dedicated\n> machine, with an old version (8.3), and while it's swapping RAM, will be\n> slow regardless of the database software we're using. But even so, 9 hours\n> to delete 100 GB of data strikes me as a very long process.\n\nDeletes in MVCC is more like an update. It's a complex procedure to\nmake it transactional, that's why truncate is so much faster.\n",
"msg_date": "Thu, 23 Feb 2012 11:07:21 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
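Both suggestions are cheap to try in a single session. EXPLAIN without ANALYZE only plans the statement, so the DELETE below (the simplified form without B in the subquery) is safe to inspect before running anything for real.

SET enable_hashjoin = off;   -- session-local; RESET enable_hashjoin undoes it

EXPLAIN
DELETE FROM b
WHERE r_id IN (SELECT r.id
               FROM r
               WHERE r.end_date < NOW() - INTERVAL '30 days');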
{
"msg_contents": "\n\nOn 02/23/2012 05:07 AM, Marcin Mańk wrote:\n>> DELETE FROM B\n>> WHERE r_id IN (SELECT R.id\n>> FROM R, B\n>> WHERE r.end_date< (NOW() - (interval '1 day' * 30))\n>> AND r.id = b.r_id\n>>\n> How about:\n>\n> DELETE FROM B\n> WHERE r_id IN (SELECT distinct R.id\n> FROM R WHERE r.end_date< (NOW() - (interval '1 day' * 30))\n>\n> ?\n>\n\nOr possibly without the DISTINCT. But I agree that the original query \nshouldn't have B in the subquery - that alone could well make it crawl.\n\nWhat is the distribution of end_dates? It might be worth running this in \nseveral steps, deleting records older than, say, 90 days, 60 days, 30 days.\n\ncheers\n\nandrew\n",
"msg_date": "Thu, 23 Feb 2012 09:25:13 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
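Staging it by age is just the same statement run with a shrinking cutoff, for example (then repeat with 60 and finally 30 days):

DELETE FROM b
WHERE r_id IN (SELECT id FROM r WHERE end_date < NOW() - INTERVAL '90 days');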
{
"msg_contents": "Hi, everyone. Thanks for all of the help and suggestions so far; I'll \ntry to respond to some of them soon. Andrew wrote:\n\n>> How about:\n>>\n>> DELETE FROM B\n>> WHERE r_id IN (SELECT distinct R.id\n>> FROM R WHERE r.end_date< (NOW() - (interval '1 day' * 30))\n>>\n>> ?\n>>\n>\n> Or possibly without the DISTINCT. But I agree that the original query\n> shouldn't have B in the subquery - that alone could well make it crawl.\n\nI put B in the subquery so as to reduce the number of rows that would be \nreturned, but maybe that was indeed backfiring on me. Now that I think \nabout it, B is a huge table, and R is a less-huge one, so including B in \nthe subselect was probably a mistake.\n\n>\n> What is the distribution of end_dates? It might be worth running this in\n> several steps, deleting records older than, say, 90 days, 60 days, 30 days.\n\nI've suggested something similar, but was told that we have limited time \nto execute the DELETE, and that doing it in stages might not be possible.\n\nReuven\n\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n",
"msg_date": "Thu, 23 Feb 2012 17:25:46 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On Thu, Feb 23, 2012 at 05:25:46PM +0200, Reuven M. Lerner wrote:\n> >\n> >What is the distribution of end_dates? It might be worth running this in\n> >several steps, deleting records older than, say, 90 days, 60 days, 30 days.\n> \n> I've suggested something similar, but was told that we have limited\n> time to execute the DELETE, and that doing it in stages might not be\n> possible.\n> \n> Reuven\n> \n\nIn cases like this, I have often found that doing the delete in smaller\npieces goes faster, sometimes much faster, than the bigger delete.\n\nRegards,\nKen\n",
"msg_date": "Thu, 23 Feb 2012 09:28:37 -0600",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/23/2012 07:28 AM, [email protected] wrote:\n> On Thu, Feb 23, 2012 at 05:25:46PM +0200, Reuven M. Lerner wrote:\n>>>\n>>> What is the distribution of end_dates? It might be worth running this in\n>>> several steps, deleting records older than, say, 90 days, 60 days, 30 days.\n>>\n>> I've suggested something similar, but was told that we have limited\n>> time to execute the DELETE, and that doing it in stages might not be\n>> possible.\n>\n> In cases like this, I have often found that doing the delete in smaller\n> pieces goes faster, sometimes much faster, than the bigger delete.\n\nFor some reason it is common for a conversation with a software manager to go \nlike this:\n\nProgrammer: Let's go with option \"A\"; it'll be much faster than what we're doing.\nManager: We don't have time to do that.\n\nWe don't have time to be faster? When I've had this conversation, the payback \nwas usually immediate, like it's Wednesday and it'll be faster by Thursday the \nnext day, and we'll get more done by Friday of the same week the new way. But \nwe don't have time.\n\nI have had this conversation dozens of times over the years. (I was always \n\"Programmer\".)\n\n-- \nLew\nHoni soit qui mal y pense.\nhttp://upload.wikimedia.org/wikipedia/commons/c/cf/Friz.jpg\n",
"msg_date": "Thu, 23 Feb 2012 07:42:12 -0800",
"msg_from": "Lew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/23/2012 12:39 AM, Reuven M. Lerner wrote:\n> Hi, everyone...\n> This is basically what I'm trying to execute:\n>\n> DELETE FROM B\n> WHERE r_id IN (SELECT R.id\n> FROM R, B\n> WHERE r.end_date < (NOW() - (interval '1 day' * 30))\n> AND r.id = b.r_id\n\nI don't recall which versions like which approach, but have you tried \n...WHERE EXISTS (SELECT... instead of WHERE IN? Depending on the version \nof PostgreSQL, one or the other may yield a superior result.\n\n\n> (2) I tried to grab the rows that *do* interest me, put them into a \n> temporary table, TRUNCATE the existing table, and then copy the rows \n> back. I only tested that with a 1 GB subset of the data, but that \n> took longer than other options.\n>\n\nWas the 1GB subset the part you were keeping or the part you were \ndeleting? Which part was slow (creating the temp table or copying it back)?\n\nTry running EXPLAIN on the SELECT query that creates the temporary table \nand try to optimize that. Also, when copying the data back, you are \nprobably having to deal with index and foreign keys maintenance. It will \nprobably be faster to drop those, copy the data back then recreate them.\n\nI know you are a *nix-guy in a Windows org so your options are limited, \nbut word-on-the-street is that for high-performance production use, \ninstall PostgreSQL on *nix.\n\nCheers,\nSteve\n\n",
"msg_date": "Thu, 23 Feb 2012 07:47:36 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
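The EXISTS shape of the same delete, for comparison; on 8.3 it is worth checking EXPLAIN for both forms, since the planner can choose quite different plans for them.

DELETE FROM b
WHERE EXISTS (SELECT 1
              FROM r
              WHERE r.id = b.r_id
                AND r.end_date < NOW() - INTERVAL '30 days');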
{
"msg_contents": "On 02/23/2012 02:39 AM, Reuven M. Lerner wrote:\n\n> I should note that my primary concern is available RAM. The database, as\n> I wrote, is about 200 GB in size, and PostgreSQL is reporting (according\n> to Windows) use of about 5 GB RAM, plus another 25 GB of virtual memory.\n\nO_o\n\nThat... that would probably swap just constantly. No end. Just swap all \nday long. But maybe not. Please tell us the values for these settings:\n\n* shared_buffers\n* work_mem\n* maintenance_work_mem\n* checkpoint_segments\n* checkpoint_timeout\n\nIt also wouldn't be a bad idea to see how many concurrent connections \nthere are, because that may determine how much memory all the backends \nare consuming. In any case, if it's actually using 25GB of virtual \nmemory, any command you run that doesn't happen to be in cache, will \njust immediately join a giant logjam.\n\n> I've told the Windows folks on this project that virtual memory kills a\n> database, and that it shouldn't surprise us to have horrible performance\n> if the database and operating system are both transferring massive\n> amounts of data back and forth. But there doesn't seem to be a good way\n> to handle this\n\nYou kinda can, by checking those settings and sanitizing them. If \nthey're out of line, or too large, they'll create the need for more \nvirtual memory. Having the virtual memory there isn't necessarily bad, \nbut using it is.\n\n> DELETE FROM B\n> WHERE r_id IN (SELECT R.id\n> FROM R, B\n> WHERE r.end_date < (NOW() - (interval '1 day' * 30))\n> AND r.id = b.r_id\n\nJust to kinda help you out syntactically, have you ever tried a DELETE \nFROM ... USING? You can also collapse your interval notation.\n\nDELETE FROM B\n USING R\n WHERE R.id = B.r_id\n AND R.end_date < CURRENT_DATE - INTERVAL '30 days';\n\nBut besides that, the other advise you've received is sound. Since your \nselect->truncate->insert attempt was also slow, I suspect you're having \nproblems with foreign key checks, and updating the index trees. \nMaintaining an existing index can be multiples slower than filling an \nempty table and creating the indexes afterwards.\n\nSo far as your foreign keys, if any of the child tables don't have an \nindex on the referring column, your delete performance will be \natrocious. You also need to make sure the types of the columns are \nidentical. Even a numeric/int difference will be enough to render an \nindex unusable.\n\nWe have a 100GB *table* with almost 200M rows and even deleting from \nthat in many of our archive tests doesn't take anywhere near 9 hours. \nBut I *have* seen a delete take that long when we had a numeric primary \nkey, and an integer foreign key. Even a handful of records can cause a \nnested loop sequence scan, which will vastly inflate delete time.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 23 Feb 2012 10:37:31 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
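As a hedged sketch of Shaun's point about referencing columns: if some other table referenced B, say a hypothetical c.b_id pointing at b.id (later in the thread it turns out nothing actually references B), deletes on B would only behave well with an index on the referencing column, and with matching column types so that index is usable.

    -- hypothetical child table c referencing b(id); names are assumptions
    CREATE INDEX CONCURRENTLY c_b_id_idx ON c (b_id);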
{
"msg_contents": "On Thu, Feb 23, 2012 at 8:25 AM, Reuven M. Lerner <[email protected]>wrote:\n\n>\n> I've suggested something similar, but was told that we have limited time\n> to execute the DELETE, and that doing it in stages might not be possible.\n>\n>\nJust so happens I had this exact problem last week on a rather large table.\n * DELETE FROM x WHERE id IN (SELECT id FROM y ...) was horrible at best.\n * DELETE FROM x USING y WHERE ... was nearly as bad\nBoth of the above were taking hours and looking more like it would stretch\ninto days.\n\nWhat I discovered however was this little technique that I'm sure all the\nPg gods will smote me for using however it worked for me.\n\nBEGIN;\nLOCK x IN SHARE UPDATE EXCLUSIVE; -- to prevent VACUUM's\nSELECT x.ctid INTO TEMPORARY TABLE recs_to_delete FROM x,y WHERE x.id=y.id;\nDELETE FROM x USING recs_to_delete r WHERE r.ctid=x.ctid;\nCOMMIT;\n\nI know there are perils in using ctid but with the LOCK it should be safe.\n This transaction took perhaps 30 minutes and removed 100k rows and once\nthe table was VACUUM'd afterward it freed up close to 20 GB on the file\nsystem.\n\nHTH\n-Greg\n\nOn Thu, Feb 23, 2012 at 8:25 AM, Reuven M. Lerner <[email protected]> wrote:\n\nI've suggested something similar, but was told that we have limited time to execute the DELETE, and that doing it in stages might not be possible.Just so happens I had this exact problem last week on a rather large table.\n * DELETE FROM x WHERE id IN (SELECT id FROM y ...) was horrible at best. * DELETE FROM x USING y WHERE ... was nearly as badBoth of the above were taking hours and looking more like it would stretch into days.\nWhat I discovered however was this little technique that I'm sure all the Pg gods will smote me for using however it worked for me.BEGIN;LOCK x IN SHARE UPDATE EXCLUSIVE; -- to prevent VACUUM's\nSELECT x.ctid INTO TEMPORARY TABLE recs_to_delete FROM x,y WHERE x.id=y.id;DELETE FROM x USING recs_to_delete r WHERE r.ctid=x.ctid;COMMIT;\nI know there are perils in using ctid but with the LOCK it should be safe. This transaction took perhaps 30 minutes and removed 100k rows and once the table was VACUUM'd afterward it freed up close to 20 GB on the file system.\nHTH-Greg",
"msg_date": "Thu, 23 Feb 2012 10:56:14 -0700",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/23/2012 11:56 AM, Greg Spiegelberg wrote:\n\n> I know there are perils in using ctid but with the LOCK it should be\n> safe. This transaction took perhaps 30 minutes and removed 100k rows\n> and once the table was VACUUM'd afterward it freed up close to 20 GB\n> on the file system.\n\nIt took *30 minutes* to delete 100k rows? And 100k rows were using 20GB? \nIs that off by an order of magnitude?\n\nUsing the ctid is a cute trick, though. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 23 Feb 2012 12:05:01 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 2/23/2012 12:05 PM, Shaun Thomas wrote:\n> On 02/23/2012 11:56 AM, Greg Spiegelberg wrote:\n>\n>> I know there are perils in using ctid but with the LOCK it should be\n>> safe. This transaction took perhaps 30 minutes and removed 100k rows\n>> and once the table was VACUUM'd afterward it freed up close to 20 GB\n>> on the file system.\n>\n> It took *30 minutes* to delete 100k rows? And 100k rows were using 20GB?\n> Is that off by an order of magnitude?\n>\n> Using the ctid is a cute trick, though. :)\n>\n\nAnd I'm not sure the LOCK is necessary, while googling for \"delete from \ntable limit 10\" I ran across this thread:\n\nhttp://archives.postgresql.org/pgsql-hackers/2010-11/msg02028.php\n\nThey use it without locks.\n\n-Andy\n",
"msg_date": "Thu, 23 Feb 2012 12:11:12 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On Thu, Feb 23, 2012 at 11:11 AM, Andy Colson <[email protected]> wrote:\n\n> On 2/23/2012 12:05 PM, Shaun Thomas wrote:\n>\n>> On 02/23/2012 11:56 AM, Greg Spiegelberg wrote:\n>>\n>> I know there are perils in using ctid but with the LOCK it should be\n>>> safe. This transaction took perhaps 30 minutes and removed 100k rows\n>>> and once the table was VACUUM'd afterward it freed up close to 20 GB\n>>> on the file system.\n>>>\n>>\n>> It took *30 minutes* to delete 100k rows? And 100k rows were using 20GB?\n>> Is that off by an order of magnitude?\n>>\n>> Using the ctid is a cute trick, though. :)\n>>\n>>\n> And I'm not sure the LOCK is necessary, while googling for \"delete from\n> table limit 10\" I ran across this thread:\n>\n> http://archives.postgresql.**org/pgsql-hackers/2010-11/**msg02028.php<http://archives.postgresql.org/pgsql-hackers/2010-11/msg02028.php>\n>\n> They use it without locks.\n>\n>\nI used LOCK simply because if a VACUUM FULL x; slipped in between the\nSELECT and the DELETE the ctid's could conceivably change.\n\n-Greg\n\nOn Thu, Feb 23, 2012 at 11:11 AM, Andy Colson <[email protected]> wrote:\nOn 2/23/2012 12:05 PM, Shaun Thomas wrote:\n\nOn 02/23/2012 11:56 AM, Greg Spiegelberg wrote:\n\n\nI know there are perils in using ctid but with the LOCK it should be\nsafe. This transaction took perhaps 30 minutes and removed 100k rows\nand once the table was VACUUM'd afterward it freed up close to 20 GB\non the file system.\n\n\nIt took *30 minutes* to delete 100k rows? And 100k rows were using 20GB?\nIs that off by an order of magnitude?\n\nUsing the ctid is a cute trick, though. :)\n\n\n\nAnd I'm not sure the LOCK is necessary, while googling for \"delete from table limit 10\" I ran across this thread:\n\nhttp://archives.postgresql.org/pgsql-hackers/2010-11/msg02028.php\n\nThey use it without locks.I used LOCK simply because if a VACUUM FULL x; slipped in between the SELECT and the DELETE the ctid's could conceivably change. \n-Greg",
"msg_date": "Thu, 23 Feb 2012 11:13:39 -0700",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "Greg Spiegelberg <[email protected]> writes:\n> I used LOCK simply because if a VACUUM FULL x; slipped in between the\n> SELECT and the DELETE the ctid's could conceivably change.\n\nVACUUM FULL can't \"slip in\" there, because you'd have AccessShareLock\njust from the SELECT. The real problem goes like this:\n\n\t1. You SELECT some ctid and save it in the other table.\n\t2. Somebody else updates or deletes that row.\n\t3. Plain VACUUM comes along and frees the dead TID.\n\t4. Somebody else (maybe not same somebody as #2) inserts a new\n\t row at that TID position.\n\t5. You DELETE that TID. Ooops.\n\nSo you might say \"okay, the point of the lock is to block plain vacuum,\nnot vacuum full\". I'm still a bit worried about whether the technique\nis entirely safe, though, because of page pruning which can happen\nanyway. What this really boils down to is: how sure are you that no\nother userland activity is going to update or delete any of the targeted\nrows between the SELECT INTO and the DELETE? If you're sure, then this\nis safe without the extra lock. Otherwise, I wouldn't trust it.\n\nIt might be worth having the SELECT that creates the temp table be a\nSELECT FOR UPDATE on the target table, so as to ensure you've locked\ndown the targeted rows against anybody else. This is not free though,\nas it'll mean extra writes of all the modified tuples.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Feb 2012 13:30:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database "
},
{
"msg_contents": "I just reread the original post and noted this:\n\n\"Reuven M. Lerner\" <[email protected]> writes:\n> (1) I tried to write this as a join, rather than a subselect. But B has \n> an oid column that points to large objects, and on which we have a rule \n> that removes the associated large object when a row in B is removed. \n\nA rule? Really? That's probably bad enough in itself, but when you\nwrite an overcomplicated join delete query, I bet the resulting plan\nis spectacularly bad. Have you looked at the EXPLAIN output for this?\n\nI'd strongly recommend getting rid of the rule in favor of a trigger.\nAlso, as already noted, the extra join inside the IN sub-select is\nprobably hurting far more than it helps.\n\n> (3) There are some foreign-key constraints on the B table.\n\nIf those are FK references *to* the B table, make sure the other end\n(the referencing column) is indexed. Postgres doesn't require an index\non a referencing column, but deletes in the referenced table will suck\nif you haven't got one.\n\nI don't think any of the fancy stuff being discussed in the thread is\nworth worrying about until you've got these basic issues dealt with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Feb 2012 14:04:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database "
},
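A minimal sketch of what Tom's rule-to-trigger suggestion could look like. The column name (blob_oid) and the exact cleanup are assumptions, not the OP's schema; lo_unlink() is the server-side function for dropping a large object.

    CREATE OR REPLACE FUNCTION b_unlink_lo() RETURNS trigger AS $$
    BEGIN
        IF OLD.blob_oid IS NOT NULL THEN
            PERFORM lo_unlink(OLD.blob_oid);  -- remove the orphaned large object
        END IF;
        RETURN OLD;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER b_unlink_lo_trg
        AFTER DELETE ON b
        FOR EACH ROW EXECUTE PROCEDURE b_unlink_lo();

Unlike a rule, the trigger fires per deleted row without rewriting the DELETE statement itself, so it does not distort the query plan.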
{
"msg_contents": "On 23/02/12 09:39, Reuven M. Lerner wrote:\n> Hi, everyone. I'm maintaining an application that exists as a \"black \n> box\" in manufacturing plants. The system is based on Windows, .NET, \n> and PostgreSQL 8.3. I'm a Unix kind of guy myself, but the \n> application layer and system administration are being handled by other \n> people; I'm just the PostgreSQL guy.\n\nJust thinking loud. It looks like (just guessing)\nthat the application needs store data worth 1 month back and\nit was put into production under the assumption that it would\nnever fill up or deletion easily could be done under maintaince\nwindows. And that now turns out not to be the case.\n\nI would stuff in a trigger function on the table that automatically\ndoes the cleanup.. It could be a BEFORE INSERT OR UPDATE\nTRIGGER that just tries to prune 2-3 rows of the table if they\nhave exceeded the keep-back time. Just installing that in the\nmaintance window would allow the system to self-heal over time.\n\nIf the maintaince window allows for more cleanup, then manually\ndo some deletions. Now the black-box is self-healing.\n\n-- \nJesper\n",
"msg_date": "Thu, 23 Feb 2012 20:46:11 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
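A rough sketch of Jesper's self-healing idea, with names (b, r, r_id, end_date) and the 30-day cutoff taken as assumptions from the thread. Note this simplification prunes all child rows for up to two expired parent records per insert, and it assumes an index on r.end_date so each insert pays only a tiny cost.

    CREATE OR REPLACE FUNCTION b_prune_old() RETURNS trigger AS $$
    BEGIN
        DELETE FROM b
         WHERE r_id IN (SELECT id
                          FROM r
                         WHERE end_date < NOW() - INTERVAL '30 days'
                         LIMIT 2);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER b_prune_old_trg
        BEFORE INSERT ON b
        FOR EACH ROW EXECUTE PROCEDURE b_prune_old();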
{
"msg_contents": "Hi, everyone.\n\nSo it turns out that we're not using 25 GB of virtual memory. (That's \nwhat I had been shown yesterday, and it was a bit surprising, to say the \nleast...)\n\nA few statistics that I managed to get from the Windows \ndevelopers/system administrators:\n\n- The machine has a total of 3.5 GB of RAM\n- shared_buffers was set to 256 MB (yes, MB!)\n- Virtual memory usage by our process is 3 MB (yes, MB)\n- CPU is virtually idle when running the deletes, using about 1% of CPU\n- No other processes are accessing the database when we're running the \nmaintenance; there are a total of three server processes, but two are idle.\n\n(I was a bit surprised, to say the least, by the low number on \nshared_buffers, given that I believe it's one of the first things I told \nthem to increase about 18 months ago.)\n\nAs for Tom's point about rules, I know that rules are bad, and I'm not \nsure why the system is using a rule rather than a trigger. I'll see \nif I can change that to a trigger, but I have very indirect control over \nthe machines, and every change requires (believe it or not) writing a \n.NET program that runs my changes, rather than just a textual script \nthat deploys them.\n\nThe only foreign keys are from the B table (i.e., the table whose \nrecords I want to remove) to other tables. There are no REFERENCES \npointing to the B table. That said, I hadn't realized that primary keys \nand indexes can also delay the DELETE.\n\nFor the latest round of testing, I quadrupled shared_buffers to 1 GB, \nturned off hash joins (as suggested by someone), and also simplified the \nquery (based on everyone's suggestions). In the tests on my own \ncomputer (with a somewhat random 1 GB snapshot of the 200 GB database), \nthe simplified query was indeed much faster, so I'm optimistic.\n\nSeveral people suggested that chunking the deletes might indeed help, \nwhich makes me feel a bit better. Unfortunately, given the time that it \ntakes to run the queries, it's hard to figure out the right chunk size. \n Whoever suggested doing it in time slices had an interesting idea, but \nI'm not sure if it'll be implementable given our constraints.\n\nThanks again to everyone for your help. I'll let you know what happens...\n\nReuven\n",
"msg_date": "Fri, 24 Feb 2012 08:39:48 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On Thu, Feb 23, 2012 at 10:39 PM, Reuven M. Lerner <[email protected]>wrote:\n\n> Hi, everyone.\n>\n> So it turns out that we're not using 25 GB of virtual memory. (That's\n> what I had been shown yesterday, and it was a bit surprising, to say the\n> least...)\n>\n> A few statistics that I managed to get from the Windows developers/system\n> administrators:\n>\n> - The machine has a total of 3.5 GB of RAM\n> - shared_buffers was set to 256 MB (yes, MB!)\n> - Virtual memory usage by our process is 3 MB (yes, MB)\n> - CPU is virtually idle when running the deletes, using about 1% of CPU\n> - No other processes are accessing the database when we're running the\n> maintenance; there are a total of three server processes, but two are idle.\n>\n\nWhat is work_mem set to? If all the other values were set so low, I'd\nexpect work_mem to also be small, which could be causing all kind of disk\nactivity when steps don't fit into a work_mem segment.\n\nOn Thu, Feb 23, 2012 at 10:39 PM, Reuven M. Lerner <[email protected]> wrote:\nHi, everyone.\n\nSo it turns out that we're not using 25 GB of virtual memory. (That's what I had been shown yesterday, and it was a bit surprising, to say the least...)\n\nA few statistics that I managed to get from the Windows developers/system administrators:\n\n- The machine has a total of 3.5 GB of RAM\n- shared_buffers was set to 256 MB (yes, MB!)\n- Virtual memory usage by our process is 3 MB (yes, MB)\n- CPU is virtually idle when running the deletes, using about 1% of CPU\n- No other processes are accessing the database when we're running the maintenance; there are a total of three server processes, but two are idle.What is work_mem set to? If all the other values were set so low, I'd expect work_mem to also be small, which could be causing all kind of disk activity when steps don't fit into a work_mem segment.",
"msg_date": "Fri, 24 Feb 2012 00:22:21 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "Hi, everyone. Samuel wrote:\n>\n> What is work_mem set to? If all the other values were set so low, I'd\n> expect work_mem to also be small, which could be causing all kind of\n> disk activity when steps don't fit into a work_mem segment.\n\nI just checked, and work_mem is set to 30 MB. That seems a bit low to \nme, given the size of the database and the fact that we're doing so much \nsorting and subselecting. Am I right that we should push that up some more?\n\nReuven\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n",
"msg_date": "Fri, 24 Feb 2012 14:37:30 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On Fri, Feb 24, 2012 at 9:37 AM, Reuven M. Lerner <[email protected]> wrote:\n> I just checked, and work_mem is set to 30 MB. That seems a bit low to me,\n> given the size of the database and the fact that we're doing so much sorting\n> and subselecting. Am I right that we should push that up some more?\n\nYou can certainly increase work_mem **for the delete** (which you can\ndo by issuing \"set work_mem='something'\" just before the delete - it\nwill only apply to that connection), but bear in mind that work_mem is\nthe amount of memory each connection can use for each operation. Total\nusage can go way higher than max_connections * work_mem, depending on\nthe kind of queries you have.\n",
"msg_date": "Fri, 24 Feb 2012 10:07:00 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
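Concretely, the per-connection override Claudio describes might look like the sketch below. The 256MB figure and the delete statement are just placeholders; SET LOCAL confines the change to the transaction, which is even narrower than a plain SET on the session.

    BEGIN;
    SET LOCAL work_mem = '256MB';   -- applies only inside this transaction
    DELETE FROM b
     USING r
     WHERE r.id = b.r_id
       AND r.end_date < CURRENT_DATE - INTERVAL '30 days';
    COMMIT;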
{
"msg_contents": "On 02/24/2012 12:39 AM, Reuven M. Lerner wrote:\n\n> - CPU is virtually idle when running the deletes, using about 1% of\n> CPU\n\nI think you found your problem.\n\nSee if you can get the Windows admins to give you some info on how busy \nthe disks are (percent utilization, IOPS, something) the next time you \ntry this. Increasing your memory settings may help, but a 1% CPU usage \nusually suggests it's waiting for disk blocks to be read before it can \nactually do something.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Fri, 24 Feb 2012 08:34:06 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "Hi, everyone. Shaun said:\n> On 02/24/2012 12:39 AM, Reuven M. Lerner wrote:\n>\n>> - CPU is virtually idle when running the deletes, using about 1% of\n>> CPU\n>\n> I think you found your problem.\n>\n> See if you can get the Windows admins to give you some info on how busy\n> the disks are (percent utilization, IOPS, something) the next time you\n> try this. Increasing your memory settings may help, but a 1% CPU usage\n> usually suggests it's waiting for disk blocks to be read before it can\n> actually do something.\n\nI asked them for disk readings, but I'm not sure how to contextualize \nthe numbers I got:\n\nI/O writes: process1: 820,000, process2: 1Milion Process3: 33,000\n\nAny suggestions for what I can do to improve performance with such a \nslow disk, and a lack of additional RAM?\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n",
"msg_date": "Fri, 24 Feb 2012 16:54:04 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/24/2012 08:54 AM, Reuven M. Lerner wrote:\n\n> I/O writes: process1: 820,000, process2: 1Milion Process3: 33,000\n\nThat's not especially helpful, unfortunately. That doesn't really tell \nus how saturated the controller is. However I suspect it's being \neffectively slammed based simply on your CPU usage.\n\nThe main problem you're going to run into is that your table is larger \nthan the memory in that server. 4GB is really pretty small for a server \nhosting a 200+GB database. That they didn't mean it to get that big \ndoesn't really help you clean it up.\n\nBut as a consequence, deleting from that table, or creating a temp table \nwith 30 days of data, truncating the table, and re-inserting, it's still \ngoing to cause a lot of disk activity. Especially since the database is \nconstantly writing out transaction logs. But you do have a few things on \nyour side.\n\nYou say you're deleting from table B, which has no foreign keys \nreferencing it. That's good. You need to go back to your truncate \napproach, and do this:\n\nCREATE TABLE keep_b_data AS\nSELECT *\n FROM B\n WHERE some_date >= CURRENT_DATE - INTERVAL '30 days';\n\nTRUNCATE TABLE B;\n\nDROP INDEX idx_something_on_b_1;\nDROP INDEX idx_something_on_b_2;\nDROP INDEX idx_something_on_b_3;\n\nALTER TABLE B DROP CONSTRAINT whatever_pk;\n\nINSERT INTO B\nSELECT *\n FROM keep_b_data;\n\nALTER TABLE B ADD CONSTRAINT whatever_pk PRIMARY KEY (some_col);\n\nCREATE INDEX idx_something_on_b_1 ON B (col_a);\nCREATE INDEX idx_something_on_b_2 ON B (col_b);\nCREATE INDEX idx_something_on_b_3 ON B (col_c);\n\nYou need to make sure nothing is reading from the table while you're \ndoing this, because the missing indexes will make selects increase your \ndisk utilization, which you definitely don't want. Get a window to work in.\n\nBut this should be much faster than your original attempts. Inserting \nthe 30-day window into table B should be just as fast as creating the \nholding table, and creating the primary key and recreating the indexes \nshould take about the same amount of time each.\n\nSo to get a *rough* idea of how long it will take, do the first step, \nand create the holding table. Multiply that by the number of indexes and \nthe primary key, plus 1. So if it takes 20 minutes, and you have three \nindexes, and the primary key, multiply by five.\n\nI guess the other question is: Is PostgreSQL the only thing running on \nthis server? If not, that may be the source of your disk IO, and it's \nchoking the database and your ability to clean it up. Try to get them to \ntemporarily disable all non-essential services while you do the cleanup. \nI'm wondering if they're also running the app on the Windows machine, \nbased on your original story. That in itself isn't a very good design, \nbut they're also running a PostgreSQL server on Windows, so who knows \nwhat they're thinking over there. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Fri, 24 Feb 2012 09:16:50 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/23/2012 12:39 AM, Reuven M. Lerner wrote:\n> Hi, everyone. ...\n> ...at one installation, we're quickly running out of disk space. The \n> database is already taking up about 200 GB of space, and is growing by \n> 1 GB or so a day.\n\nI've been following the discussion of approaches and tuning for bulk \ndeletes and suddenly wondered if you have checked a couple other basics.\n\nDo you know the source of the increases in DB size? Is it due strictly \nto inserted data or are there lots of updates as well?\n\nIs autovacuum running properly?\n\nCould you, due to bulk deletes and other issues, be suffering from \ntable- or index-bloat? Heavily bloated tables/indexes will exacerbate \nboth your disk-usage and performance problems.\n\nIf possible you might try clustering your tables and see what happens to \ndisk usage and bulk-delete performance. Clusters are generally \nreasonably fast - way faster than VACUUM FULL, though they could still \ntake a while on your very large tables.\n\nAs a bonus, cluster gives you shiny, new non-bloated indexes. They do \nrequire an exclusive lock and they do require sufficient disk-space to \nbuild the new, albeit smaller, table/indexes so it may not be an option \nif you are short on disk-space. You may be able to start by clustering \nyour smaller tables and move toward the larger ones as you free \ndisk-space. Be sure to run ANALYZE on any table that you have CLUSTERed.\n\nYou might find it useful to make CLUSTER part of your regular maintenance.\n\nCheers,\nSteve\n\n",
"msg_date": "Fri, 24 Feb 2012 09:20:04 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
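A sketch of the CLUSTER-based cleanup Steve describes; the index name is assumed, and CLUSTER takes an exclusive lock and needs enough free disk for a full copy of the table while it rebuilds it.

    CLUSTER b_pkey ON b;   -- 8.3-era syntax; newer releases also accept CLUSTER b USING b_pkey
    ANALYZE b;             -- refresh statistics on the rewritten table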
{
"msg_contents": "On Thu, Feb 23, 2012 at 12:39 AM, Reuven M. Lerner <[email protected]> wrote:\n> Hi, everyone. I'm maintaining an application that exists as a \"black box\"\n> in manufacturing plants. The system is based on Windows, .NET, and\n> PostgreSQL 8.3. I'm a Unix kind of guy myself, but the application layer\n> and system administration are being handled by other people; I'm just the\n> PostgreSQL guy.\n>\n> Because of the nature of the application, we don't have direct control over\n> what happens. And it turns out that at one installation, we're quickly\n> running out of disk space. The database is already taking up about 200 GB\n> of space, and is growing by 1 GB or so a day. Switching disks, either to a\n> larger/faster traditional drive, or even to a SSD, is not an option. (And\n> yes, I know that SSDs have their own risks, but I'm just throwing that out\n> as one option.)\n>\n> Right now, the best solution to the space problem is to delete information\n> associated with old records, where \"old\" is from at least 30 days ago. The\n> old records are spread across a few tables, including many large objects.\n> (The application was written by people who were new to PostgreSQL, and\n> didn't realize that they could use BYTEA.) Basically, given a foreign key\n> B.a_id that points to table A, I want to DELETE all in B where A's creation\n> date is at least 30 days ago.\n>\n> Unfortunately, when we implemented this simple delete, it executed slower\n> than molasses, taking about 9 hours to do its thing.\n\nIs this 9 hours run time for deleting one day worth of data, or for\ndeleting the entire accumulation of cruft that filled up the hard\ndrive in the first place (which would be 170 days, if you have 200GB\nthat accumulated at 1GB per day and you only need 30 days) ?\n\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 25 Feb 2012 11:17:57 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "Hi, everyone. Jeff wrote:\n>\n> Is this 9 hours run time for deleting one day worth of data, or for\n> deleting the entire accumulation of cruft that filled up the hard\n> drive in the first place (which would be 170 days, if you have 200GB\n> that accumulated at 1GB per day and you only need 30 days) ?\n\nUnfortunately, it took 9 hours to delete all of the rows associated with \nthe older-than-30-days records.\n\nReuven\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n",
"msg_date": "Sun, 26 Feb 2012 09:33:05 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "Hi again, everyone.\n\nWow, I can't get over how helpful everyone has been.\n\nShaun wrote:\n\n> The main problem you're going to run into is that your table is larger than the memory in that server. 4GB is really pretty small for a server hosting a 200+GB database. That they didn't mean it to get that big doesn't really help you clean it up.\n\nYep! And as you pointed out later in you note, PostgreSQL isn't the \nonly thing running on this computer. There's also a full-fledged \nWindows application normally running on it. And the nature of the \nmanufacturing, black-box context means that maintenance is supposed to \nbe rare, and that anything which gets us off of a 24/7 work schedule is \nenormously expensive.\n\nThis has been a fun problem to fix, for sure... We're not there yet, \nbut I feel like we're really close.\n\nI'm currently trying a hybrid approach, based on several suggestions \nthat were posted to this list:\n\nGiven that during this maintenance operation, nothing else should \nrunning, I'm going to bump up the shared_buffers. Even after we run our \nmaintenance, the fact that shared_buffers was so ridiculously low \ncouldn't be helping anything, and I'll push it up.\n\nI finally remembered why I had such a tortured set of subselects in my \noriginal query: If you're going to do a query with LIMIT in it, you had \nbetter be sure that you know what you're doing, because without an ORDER \nBY clause, you might be in for surprises. And sure enough, in our \ntesting, I realized that when we asked the database for up to 5 rows, we \nwere getting the same rows again and again, thus stopping after it \ndeleted a few bunches of rows.\n\nSo I changed tactics somewhat, and it appears to be working much, much \nfaster: I first created a table (not a temp table, simply because my \nfunctions are getting invoked by the .NET application in a new \nconnection each time, and I obviously don't want my table to go away) \nwith the IDs of the R table that are older than n days old. This \ntable has about 200,000 rows in it, but each column is an int, so it's \npretty small.\n\nI then have a separate function that takes a parameter, the chunk size. \n I loop through the table created in the first function \n(old_report_ids), deleting all of the records in the B table that \nreferences the R table. I then remove the row from the old_report_ids \ntable, and then loop again, until I've reached the chunk size. There \nare undoubtedly more elegant ways to do this, but we just gotta get it \nworking at this point. :-)\n\nWe're about to test this, but from my small tests on my computer, it ran \nmuch, much faster than other options. We'll see what happens when we \ntry it now on the 200 GB monster...\n\nReuven\n",
"msg_date": "Sun, 26 Feb 2012 12:46:28 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
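Purely as an illustration of the chunked approach Reuven describes, something along these lines. The table and column names (old_report_ids, report_id, b, r_id) are guesses based on his description, not his actual code.

    CREATE OR REPLACE FUNCTION delete_old_reports(chunk_size integer)
    RETURNS integer AS $$
    DECLARE
        rid     integer;
        deleted integer := 0;
    BEGIN
        FOR rid IN SELECT report_id FROM old_report_ids LIMIT chunk_size LOOP
            DELETE FROM b WHERE r_id = rid;                    -- purge the child rows
            DELETE FROM old_report_ids WHERE report_id = rid;  -- mark this report as handled
            deleted := deleted + 1;
        END LOOP;
        RETURN deleted;
    END;
    $$ LANGUAGE plpgsql;

Each call works through chunk_size reports and can simply be rerun (for example from psql -f) until old_report_ids is empty.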
{
"msg_contents": "I have a question.\n\nYour data is growing 1Gb by 1 day.\n\nCan we use another Disk or partition to continue archive data ?\nI mean, do postgreSql support a Layering System for archive data ?\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Very-long-deletion-time-on-a-200-GB-database-tp5507359p5517941.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n",
"msg_date": "Sun, 26 Feb 2012 20:33:08 -0800 (PST)",
"msg_from": "lephongvu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
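PostgreSQL has no built-in archive "layering" tier as such, but a tablespace on a second disk is the usual way to place data elsewhere. The path and names below are purely illustrative, and ALTER TABLE ... SET TABLESPACE rewrites the whole table under a lock, so it is not free on a 200 GB table.

    CREATE TABLESPACE archive_space LOCATION '/mnt/archive/pgdata';
    ALTER TABLE b SET TABLESPACE archive_space;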
{
"msg_contents": "Hi, everyone. I wanted to thank you again for your help on the huge \ndelete problem that I was experiencing.\n\nAfter a lot of trial and error, we finally came to the conclusion that \ndeleting this much data in the time frame that they need, on \nunderpowered hardware that is shared with an application, with each test \niteration taking 5-9 hours to run (but needing to run in 2-3), is just \nnot going to happen. We tried many of the options that people helpfully \nsuggested here, but none of them gave us the performance that we needed.\n\n(One of the developers kept asking me how it can possibly take so long \nto delete 200 GB, when he can delete files of that size in much less \ntime. I had to explain to him that deleting rows from a database, is a \nfar more complicated task, and can't really be compared to deleting a \nfew files.)\n\nIn the end, it was agreed that we could execute the deletes over time, \ndeleting items in the background, or in parallel with the application's \nwork. After all, if the disk is filling up at the rate of 2 GB/day, \nthen so long as we delete 4 GB/day (which is pretty easy to do), we \nshould be fine. Adding RAM or another disk are simply out of the \nquestion, which is really a shame for a database of this size.\n\nI should add that it was interesting/amusing to see the difference \nbetween the Unix and Windows philosophies. Each time I would update my \npl/pgsql functions, the Windows guys would wrap it into a string, inside \nof a .NET program, which then needed to be compiled, installed, and run. \n (Adding enormous overhead to our already long testing procedure.) I \nfinally managed to show them that we could get equivalent functionality, \nwith way less overhead, by just running psql -f FILENAME. This version \ndoesn't have fancy GUI output, but it works just fine...\n\nI always tell people that PostgreSQL is not just a great database, but a \nfantastic, helpful community. Thanks to everyone for their suggestions \nand advice.\n\nReuven\n",
"msg_date": "Mon, 27 Feb 2012 10:08:15 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/27/2012 02:08 AM, Reuven M. Lerner wrote:\n\n> In the end, it was agreed that we could execute the deletes over\n> time, deleting items in the background, or in parallel with the\n> application's work. After all, if the disk is filling up at the rate\n> of 2 GB/day, then so long as we delete 4 GB/day (which is pretty easy\n> to do), we should be fine.\n\nPlease tell me you understand deleting rows from a PostgreSQL database \ndoesn't work like this. :) The MVCC storage system means you'll \nbasically just be marking all those deleted rows as reusable, so your \ndatabase will stop growing, but you'll eventually want to purge all the \naccumulated dead rows.\n\nOne way to see how many there are is to use the pgstattuple contrib \nmodule. You can just call it on the table name in question:\n\nSELECT * FROM pgstattuple('my_table');\n\nYou may find that after your deletes are done, you'll have a free_pct of \n80+%. In order to get rid of all that, you'll need to either run CLUSTER \non your table(s) or use the select->truncate->insert method anyway.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 27 Feb 2012 08:45:53 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "\n\nOn 02/27/2012 09:45 AM, Shaun Thomas wrote:\n> On 02/27/2012 02:08 AM, Reuven M. Lerner wrote:\n>\n>> In the end, it was agreed that we could execute the deletes over\n>> time, deleting items in the background, or in parallel with the\n>> application's work. After all, if the disk is filling up at the rate\n>> of 2 GB/day, then so long as we delete 4 GB/day (which is pretty easy\n>> to do), we should be fine.\n>\n> Please tell me you understand deleting rows from a PostgreSQL database \n> doesn't work like this. :) The MVCC storage system means you'll \n> basically just be marking all those deleted rows as reusable, so your \n> database will stop growing, but you'll eventually want to purge all \n> the accumulated dead rows.\n>\n> One way to see how many there are is to use the pgstattuple contrib \n> module. You can just call it on the table name in question:\n>\n> SELECT * FROM pgstattuple('my_table');\n>\n> You may find that after your deletes are done, you'll have a free_pct \n> of 80+%. In order to get rid of all that, you'll need to either run \n> CLUSTER on your table(s) or use the select->truncate->insert method \n> anyway.\n>\n\nIf he has autovacuum on he could well be just fine with his proposed \nstrategy. Or he could have tables partitioned by time and do the delete \nby just dropping partitions. There are numerous way he could get this to \nwork.\n\ncheers\n\nandrew\n",
"msg_date": "Mon, 27 Feb 2012 09:53:59 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "Hi, Shaun. You wrote:\n>\n>> In the end, it was agreed that we could execute the deletes over\n>> time, deleting items in the background, or in parallel with the\n>> application's work. After all, if the disk is filling up at the rate\n>> of 2 GB/day, then so long as we delete 4 GB/day (which is pretty easy\n>> to do), we should be fine.\n>\n> Please tell me you understand deleting rows from a PostgreSQL database\n> doesn't work like this. :) The MVCC storage system means you'll\n> basically just be marking all those deleted rows as reusable, so your\n> database will stop growing, but you'll eventually want to purge all the\n> accumulated dead rows.\n\nOh, I understand that all right. I've had many, *many* conversations \nwith this company explaining MVCC. It doesn't seem to work; when they \nrefer to \"vacuuming the database,\" I remind them that we have autovacuum \nworking, to which they respond, \"Oh, we mean VACUUM FULL.\" At which \npoint I remind them that VACUUM FULL is almost certainly not what they \nwant to do, and then they say, \"Yes, we know, but we still like to do it \nevery so often.\"\n\n From what I understand, the issue isn't one of current disk space, but \nrather of how quickly the disk space is being used up. Maybe they want \nto reclaim disk space, but it's more crucial to stop the rate at which \ndisk space is being taken. If we were to delete all of the existing \nrows, and let vacuum mark them as dead and available for reuse, then \nthat would probably be just fine.\n\nI wouldn't be surprised if we end up doing a CLUSTER at some point. The \nproblem is basically that this machine is in 24/7 operation at \nhigh-speed manufacturing plants, and the best-case scenario is for a \n4-hour maintenance window. I've suggested that we might be able to help \nthe situation somewhat by attaching a portable USB-based hard disk, and \nadding a new tablespace that'll let us keep running while we divide up \nthe work that the disk is doing, but they've made it clear that the \ncurrent hardware configuration cannot and will not change. Period.\n\nSo for now, we'll just try to DELETE faster than we INSERT, and combined \nwith autovacuum, I'm hoping that this crisis will be averted. That \nsaid, the current state of affairs with these machines is pretty \nfragile, and I think that we might want to head off such problems in the \nfuture, rather than be surprised by them.\n\nReuven\n\n-- \nReuven M. Lerner -- Web development, consulting, and training\nMobile: +972-54-496-8405 * US phone: 847-230-9795\nSkype/AIM: reuvenlerner\n",
"msg_date": "Mon, 27 Feb 2012 16:59:57 +0200",
"msg_from": "\"Reuven M. Lerner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/27/2012 08:53 AM, Andrew Dunstan wrote:\n\n> If he has autovacuum on he could well be just fine with his proposed\n> strategy. Or he could have tables partitioned by time and do the delete\n> by just dropping partitions. There are numerous way he could get this to\n> work.\n\nHe isn't using partitions though. That's the whole reason for this \nthread. Having autovacuum turned on (which should be the case for 8.4 \nand above anyway) will not magically remove the old rows. VACUUM marks \nrows as dead/reusable, so INSERT and UPDATE statements will take the \ndead spots instead of creating new extents.\n\nLike I said, this will stop his tables from growing further so long as \nhe keeps his maintenance functions running regularly from now on, but \nthe existing rows he's trying to delete will never go away until he runs \na CLUSTER or some other system of actually purging the dead rows.\n\nNotice how I don't suggest using VACUUM FULL. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 27 Feb 2012 09:01:13 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/27/2012 08:59 AM, Reuven M. Lerner wrote:\n\n> From what I understand, the issue isn't one of current disk space,\n> but rather of how quickly the disk space is being used up.\n\nNoted. Just keep in mind that dead rows are not free. In the case of \nsequence scans, the rows still have to be read from disk and then \nignored by the engine. Vacuums also act as sequence scans, so the more \ndata they're reading, the longer that takes. This is especially true on \nan overloaded system.\n\n> I wouldn't be surprised if we end up doing a CLUSTER at some point.\n> The problem is basically that this machine is in 24/7 operation at\n> high-speed manufacturing plants, and the best-case scenario is for a\n> 4-hour maintenance window.\n\nThe best case scenario is for them to buy a second server. If operation \nof this app stack really is critical to business, they need to spend the \nmoney to keep it working, or they'll end up paying much more for it when \nit fails. You also said that server has other stuff running on it, and \nit already has very little memory. That tells me they have no DR node. \nI'm afraid to even ask how they're doing backups. That one machine is a \ngiant, red, flashing single point of failure. I really hope they \nunderstand that.\n\n> I've suggested that we might be able to help the situation somewhat\n> by attaching a portable USB-based hard disk, and adding a new\n> tablespace that'll let us keep running while we divide up the work\n> that the disk is doing, but they've made it clear that the current\n> hardware configuration cannot and will not change. Period.\n\nAnd that's it, then. You have yourself a bad client. If it were me, I'd \nget through this contract and never do business with them again. They \nhave a system that's basically 100% guaranteed to fail some time in the \nfuture (and yet is critical for operation!) and are putting Band-Aids on \nit. I think there's a parable somewhere about eggs and baskets, but I \ncan't recall it at this moment. ;)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Mon, 27 Feb 2012 09:14:13 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/27/2012 07:14 AM, Shaun Thomas wrote:\n> On 02/27/2012 08:59 AM, Reuven M. Lerner wrote:\n>\n>> From what I understand, the issue isn't one of current disk space,\n>> but rather of how quickly the disk space is being used up.\n>\n> Noted. Just keep in mind that dead rows are not free. In the case of sequence\n> scans, the rows still have to be read from disk and then ignored by the\n> engine. Vacuums also act as sequence scans, so the more data they're reading,\n> the longer that takes. This is especially true on an overloaded system.\n>\n>> I wouldn't be surprised if we end up doing a CLUSTER at some point.\n>> The problem is basically that this machine is in 24/7 operation at\n>> high-speed manufacturing plants, and the best-case scenario is for a\n>> 4-hour maintenance window.\n>\n> The best case scenario is for them to buy a second server. If operation of\n> this app stack really is critical to business, they need to spend the money to\n> keep it working, or they'll end up paying much more for it when it fails. You\n> also said that server has other stuff running on it, and it already has very\n> little memory. That tells me they have no DR node. I'm afraid to even ask how\n> they're doing backups. That one machine is a giant, red, flashing single point\n> of failure. I really hope they understand that.\n>\n>> I've suggested that we might be able to help the situation somewhat\n>> by attaching a portable USB-based hard disk, and adding a new\n>> tablespace that'll let us keep running while we divide up the work\n>> that the disk is doing, but they've made it clear that the current\n>> hardware configuration cannot and will not change. Period.\n>\n> And that's it, then. You have yourself a bad client. If it were me, I'd get\n> through this contract and never do business with them again. They have a\n> system that's basically 100% guaranteed to fail some time in the future (and\n> yet is critical for operation!) and are putting Band-Aids on it. I think\n> there's a parable somewhere about eggs and baskets, but I can't recall it at\n> this moment. ;)\n\nThere is more than one parable here.\n\nFor the client - don't be a damn fool. When you go to a doctor for a broken \narm, you don't refuse the splint and insist on using just aspirin to manage \nthe problem.\n\nFor the consultant/employee - stop buying into the bullshit. This is a common \nsituation, where you tell your client, \"You need X\" and they refuse the \nadvice. You need to be crystal clear with them that they are therefore NOT \nsolving their problem.\n\nI stopped giving in to the client's bullshit in this regard years ago when a \ncustomer tried to withhold over eight thousand dollars because I agreed to my \nmanager's refusal to normalize a database and thus didn't fix a performance \nproblem. I got paid when their programmer whom I'd secretly informed of the \nproblem and how to fix it took over as the project manager, after using my \nadvice to become the hero. The lesson I took is not to gloss over real \nproblems because the client is recalcitrant. They don't win, you don't win, \nnobody wins. (Unless you use a workaround as I did, but politics is the court \nof last resort for an engineer.)\n\nI'd rather have my bosses think I'm a little snarky (as long as I'm not fired \nfor it), than have them hate me and try not to pay me. 
I am just loud about \nwhat is correct and what the consequences of incorrect are; then when they get \nthose consequences I make sure to draw the connection.\n\nI'm not there to make friends, I'm there to make solutions. It is fiduciary \nirresponsibility to let your clients go down in flames without at least \ninforming them of the alternative.\n\n-- \nLew\nHoni soit qui mal y pense.\nhttp://upload.wikimedia.org/wikipedia/commons/c/cf/Friz.jpg\n",
"msg_date": "Mon, 27 Feb 2012 09:07:04 -0800",
"msg_from": "Lew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On Mon, Feb 27, 2012 at 12:01 PM, Shaun Thomas <[email protected]> wrote:\n>\n> Like I said, this will stop his tables from growing further so long as he\n> keeps his maintenance functions running regularly from now on, but the\n> existing rows he's trying to delete will never go away until he runs a\n> CLUSTER or some other system of actually purging the dead rows.\n\nActually, given the usage and deletion pattern, it's quite probable\nthat by doing only regular vacuuming disk space will be returned to\nthe OS within 30 days. Assuming the free space map can contain all\nthat free space (where progressive deletion would help in comparison\nto full deletion at once), new rows will be assigned to reusable\npages, and eventually trailing pages will become free and be purged.\n\nI'd expect that process to take around 30 days, 60 at worst. Though,\nclearly, the best option is to cluster. Cluster is a lot faster than\nvacuum full in 8.3, so it's worth considering, but it does require a\nlot of free disk space which the system may not have.\n",
"msg_date": "Mon, 27 Feb 2012 16:01:10 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On Mon, Feb 27, 2012 at 6:59 AM, Reuven M. Lerner <[email protected]>wrote:\n\n>\n> So for now, we'll just try to DELETE faster than we INSERT, and combined\n> with autovacuum, I'm hoping that this crisis will be averted. That said,\n> the current state of affairs with these machines is pretty fragile, and I\n> think that we might want to head off such problems in the future, rather\n> than be surprised by them.\n>\n>\n>\nFor the record, one very effective long term solution for doing this and\ncontinuing to be able to do this no matter how many rows have accumulated\nis to partition the data tables over time so that you can just drop older\npartitions. It does require code changes since relying on a trigger on the\nparent table to distribute the inserts to the correct partition is much\nslower than simply modifying your code to insert/copy into the correct\npartition directly. But it is well worth doing if you are accumulating\nlarge volumes of data. You can even leave old partitions around if you\ndon't need the disk space, since well-constructed queries will simply\nignore their existence, anyway, if you are only ever going back 30 days or\nless. Indexes are on individual partitions, so you needn't worry about\nindexes getting too large, either.\n\nOn Mon, Feb 27, 2012 at 6:59 AM, Reuven M. Lerner <[email protected]> wrote:\n\nSo for now, we'll just try to DELETE faster than we INSERT, and combined with autovacuum, I'm hoping that this crisis will be averted. That said, the current state of affairs with these machines is pretty fragile, and I think that we might want to head off such problems in the future, rather than be surprised by them.\n\nFor the record, one very effective long term solution for doing this and continuing to be able to do this no matter how many rows have accumulated is to partition the data tables over time so that you can just drop older partitions. It does require code changes since relying on a trigger on the parent table to distribute the inserts to the correct partition is much slower than simply modifying your code to insert/copy into the correct partition directly. But it is well worth doing if you are accumulating large volumes of data. You can even leave old partitions around if you don't need the disk space, since well-constructed queries will simply ignore their existence, anyway, if you are only ever going back 30 days or less. Indexes are on individual partitions, so you needn't worry about indexes getting too large, either.",
"msg_date": "Mon, 27 Feb 2012 13:13:57 -0800",
"msg_from": "Samuel Gendler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
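A rough sketch of the inheritance-style time partitioning Samuel describes. Table, column, and partition names here are assumptions; pre-9.x partitioning is built from INHERITS plus CHECK constraints, with constraint_exclusion enabled so queries skip irrelevant children.

    -- one child table per month, carrying a CHECK constraint on the date column
    CREATE TABLE b_2012_02 (
        CHECK (created_at >= DATE '2012-02-01' AND created_at < DATE '2012-03-01')
    ) INHERITS (b);

    -- the application inserts straight into the current partition
    INSERT INTO b_2012_02 (r_id, created_at, payload)
    VALUES (42, NOW(), '...');

    -- retiring old data then costs almost nothing
    DROP TABLE b_2011_12;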
{
"msg_contents": "On Mon, Feb 27, 2012 at 2:13 PM, Samuel Gendler\n<[email protected]> wrote:\n>\n>\n> On Mon, Feb 27, 2012 at 6:59 AM, Reuven M. Lerner <[email protected]>\n> wrote:\n>>\n>>\n>> So for now, we'll just try to DELETE faster than we INSERT, and combined\n>> with autovacuum, I'm hoping that this crisis will be averted. That said,\n>> the current state of affairs with these machines is pretty fragile, and I\n>> think that we might want to head off such problems in the future, rather\n>> than be surprised by them.\n>>\n>>\n>\n> For the record, one very effective long term solution for doing this and\n> continuing to be able to do this no matter how many rows have accumulated is\n> to partition the data tables over time so that you can just drop older\n> partitions. It does require code changes since relying on a trigger on the\n> parent table to distribute the inserts to the correct partition is much\n> slower than simply modifying your code to insert/copy into the correct\n> partition directly. But it is well worth doing if you are accumulating\n> large volumes of data. You can even leave old partitions around if you\n> don't need the disk space, since well-constructed queries will simply ignore\n> their existence, anyway, if you are only ever going back 30 days or less.\n> Indexes are on individual partitions, so you needn't worry about indexes\n> getting too large, either.\n\nIf they're only inserting ~1 or 2G a day, a trigger is likely plenty\nfast. I've had stats dbs that grew up 10s or 20s of gigs a day and\nthe triggers were never a performance problem there.\n",
"msg_date": "Mon, 27 Feb 2012 14:28:33 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
{
"msg_contents": "On 02/27/2012 12:08 AM, Reuven M. Lerner wrote:\n> Hi, everyone. I wanted to thank you again for your help on the huge\n> delete problem that I was experiencing.\n>\n> After a lot of trial and error, we finally came to the conclusion that\n> deleting this much data in the time frame that they need, on\n> underpowered hardware that is shared with an application, with each test\n> iteration taking 5-9 hours to run (but needing to run in 2-3), is just\n> not going to happen. We tried many of the options that people helpfully\n> suggested here, but none of them gave us the performance that we needed.\n>\n> (One of the developers kept asking me how it can possibly take so long\n> to delete 200 GB, when he can delete files of that size in much less\n> time. I had to explain to him that deleting rows from a database, is a\n> far more complicated task, and can't really be compared to deleting a\n> few files.)\n>\n> In the end, it was agreed that we could execute the deletes over time,\n> deleting items in the background, or in parallel with the application's\n> work. After all, if the disk is filling up at the rate of 2 GB/day, then\n> so long as we delete 4 GB/day (which is pretty easy to do), we should be\n> fine. Adding RAM or another disk are simply out of the question, which\n> is really a shame for a database of this size.\n>\n\nHowdy,\n\nI'm coming a little late to the tread but i didn't see anyone propose \nsome tricks I've used in the past to overcome the slow delete problem.\n\nFirst - if you can drop your FKs, delete, re-create your FKs you'll find \nthat you can delete an amazing amount of data very quickly.\n\nsecond - if you can't do that - you can try function that loops and \ndeletes a small amount at a time, this gets around the deleting more \ndata then you can fit into memory problem. It's still slow but just not \nas slow.\n\nthird - don't delete, instead,\ncreate new_table as select * from old_table where <records are not the \nones you want to delete>\nrename new_table to old_table;\ncreate indexes and constraints\ndrop old_table;\n\nfourth - I think some folks mentioned this, but just for completeness, \npartition the table and make sure that your partition key is such that \nyou can just drop an entire partition.\n\nHope that helps and wasn't redundant.\n\nDave\n",
"msg_date": "Tue, 28 Feb 2012 09:06:08 -0800",
"msg_from": "David Kerr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
},
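David's third trick, spelled out as a hedged sketch against the thread's tables. The join condition, key column, and index names are assumptions, and it presumes nothing else (views, foreign keys from other tables, the rule or trigger on B) depends on the old table, or that those objects are recreated afterwards as well.

    BEGIN;
    CREATE TABLE b_new AS
        SELECT b.*
          FROM b
          JOIN r ON r.id = b.r_id
         WHERE r.end_date >= NOW() - INTERVAL '30 days';  -- keep only recent rows
    DROP TABLE b;
    ALTER TABLE b_new RENAME TO b;
    ALTER TABLE b ADD PRIMARY KEY (id);
    CREATE INDEX b_r_id_idx ON b (r_id);
    COMMIT;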
{
"msg_contents": "On 29/02/12 06:06, David Kerr wrote:\n> On 02/27/2012 12:08 AM, Reuven M. Lerner wrote:\n>> Hi, everyone. I wanted to thank you again for your help on the huge\n>> delete problem that I was experiencing.\n>>\n>> After a lot of trial and error, we finally came to the conclusion that\n>> deleting this much data in the time frame that they need, on\n>> underpowered hardware that is shared with an application, with each test\n>> iteration taking 5-9 hours to run (but needing to run in 2-3), is just\n>> not going to happen. We tried many of the options that people helpfully\n>> suggested here, but none of them gave us the performance that we needed.\n>>\n>> (One of the developers kept asking me how it can possibly take so long\n>> to delete 200 GB, when he can delete files of that size in much less\n>> time. I had to explain to him that deleting rows from a database, is a\n>> far more complicated task, and can't really be compared to deleting a\n>> few files.)\n>>\n>> In the end, it was agreed that we could execute the deletes over time,\n>> deleting items in the background, or in parallel with the application's\n>> work. After all, if the disk is filling up at the rate of 2 GB/day, then\n>> so long as we delete 4 GB/day (which is pretty easy to do), we should be\n>> fine. Adding RAM or another disk are simply out of the question, which\n>> is really a shame for a database of this size.\n>>\n>\n> Howdy,\n>\n> I'm coming a little late to the tread but i didn't see anyone propose \n> some tricks I've used in the past to overcome the slow delete problem.\n>\n> First - if you can drop your FKs, delete, re-create your FKs you'll \n> find that you can delete an amazing amount of data very quickly.\n>\n> second - if you can't do that - you can try function that loops and \n> deletes a small amount at a time, this gets around the deleting more \n> data then you can fit into memory problem. It's still slow but just \n> not as slow.\n>\n> third - don't delete, instead,\n> create new_table as select * from old_table where <records are not the \n> ones you want to delete>\n> rename new_table to old_table;\n> create indexes and constraints\n> drop old_table;\n>\n> fourth - I think some folks mentioned this, but just for completeness, \n> partition the table and make sure that your partition key is such that \n> you can just drop an entire partition.\n>\n> Hope that helps and wasn't redundant.\n>\n> Dave\n>\n Hi,\n\nI think your first and third points are very obvious - but only after I \nhad read them! :-)\n\nYour third point is not bad either!\n\nBrilliant simplicity, I hope I can remember them if I run into a similar \nsituation.\n\n\nThanks,\nGavin\n\n\n\n",
"msg_date": "Wed, 29 Feb 2012 18:31:29 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very long deletion time on a 200 GB database"
}
] |
[
{
"msg_contents": "Hello,\n\nI am trying to understand the analysis behind the \"cost\" attribute in\nEXPLAIN output.\n\npostgres = # explain select * from table_event where seq_id=8520960;\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Index Scan using te_pk on table_event (cost=0.00..13.88 rows=1 width=62)\n Index Cond: (sequence_id = 8520960)\n\nThe cost is \"13.88\" to fetch 1 row by scanning an Primary Key indexed\ncolumn.\n\nIsn't the cost for fetching 1 row is too high ?\n\nOn the same table, the cost calculation for scanning the full table is\nlooking justified --\n\npostgres=# explain select * from table_event;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on table_event (cost=0.00..853043.44 rows=38679544 width=62)\n(1 row)\n\n(disk pages read * seq_page_cost) + (rows scanned * cpu_tuple_cost) = (466248 *\n1) + (38679544 * 0.01) = 853043.44\n\nBy the way below are the details -\n\nVersion - Postgres-9.0\n\nTable size is - 3643 MB\n+Indexes the size is - 8898 MB\n\nI am looking for a way to reduce cost as much as possible because the query\nexecutes 100000+ times a day.\n\nAny thoughts ?\n\nThanks,\nVB\n\nHello,I am trying to understand the analysis behind the \"cost\" attribute in EXPLAIN output.postgres = # explain select * from table_event where seq_id=8520960;\n QUERY PLAN----------------------------------------------------------------------------------------------------------------- Index Scan using te_pk on table_event (cost=0.00..13.88 rows=1 width=62)\n Index Cond: (sequence_id = 8520960)The cost is \"13.88\" to fetch 1 row by scanning an Primary Key indexed column.Isn't the cost for fetching 1 row is too high ?\nOn the same table, the cost calculation for scanning the full table is looking justified --postgres=# explain select * from table_event; QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------ Seq Scan on table_event (cost=0.00..853043.44 rows=38679544 width=62)\n(1 row)(disk pages read * seq_page_cost) + (rows scanned * cpu_tuple_cost) = (466248 * 1) + (38679544 * 0.01) = 853043.44\nBy the way below are the details -Version - Postgres-9.0Table size is - 3643 MB+Indexes the size is - 8898 MB\nI am looking for a way to reduce cost as much as possible because the query executes 100000+ times a day.Any thoughts ?Thanks,VB",
"msg_date": "Thu, 23 Feb 2012 17:51:26 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": ": Cost calculation for EXPLAIN output"
},
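The seq scan arithmetic in the question can be reproduced from the planner statistics themselves; a minimal sketch, assuming the default cost factors (seq_page_cost = 1, cpu_tuple_cost = 0.01) and the table name from the question:

SELECT relpages,                                   -- disk pages in the table
       reltuples,                                  -- estimated row count
       relpages * 1.0 + reltuples * 0.01 AS estimated_seqscan_cost
FROM pg_class
WHERE relname = 'table_event';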
{
"msg_contents": "Venkat Balaji <[email protected]> wrote:\n \n\n> The cost is \"13.88\" to fetch 1 row by scanning an Primary Key\n> indexed column.\n> \n> Isn't the cost for fetching 1 row is too high ?\n \nI don't know, how many index pages will need to be randomly accessed\nin addition to the random heap access? How many dead versions of\nthe row will need to be visited besides the row which is actually\nvisible? How many of these pages are in shared_buffers? How many\nof these pages are in OS cache?\n \n> I am looking for a way to reduce cost as much as possible because\n> the query executes 100000+ times a day.\n \nWell, you can reduce the cost all you want by dividing all of the\ncosting factors in postgresql.conf by the same value, but that won't\naffect query run time. That depends on the query plan which is\nchosen. The cost is just an abstract number used for comparing the\napparent resources needed to run a query through each of the\navailable plans. What matters is that the cost factors accurately\nreflect the resources used; if not you should adjust them.\n \nIf you calculate a ratio between run time and estimated cost, you\nshould find that it remains relatively constant (like within an\norder of magnitude) for various queries. Since you didn't show\nactual run times, we can't tell whether anything need adjustment.\n \n-Kevin\n",
"msg_date": "Thu, 23 Feb 2012 10:51:25 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Cost calculation for EXPLAIN output"
},
{
"msg_contents": "On 02/23/2012 06:21 AM, Venkat Balaji wrote:\n\n> The cost is \"13.88\" to fetch 1 row by scanning an Primary Key\n> indexed column.\n>\n> Isn't the cost for fetching 1 row is too high ?\n\nNot really. The \"cost\" is really just an estimate to rank alternate \nquery plans so the database picks the least expensive plan. The number \n'13.88' is basically meaningless. It doesn't translate to any real-world \nequivalent. What you actually care about is the execution time. If it \ntakes 0.25ms or something per row, that's what really matters.\n\nFor what it's worth, it looks like you have the right query plan, there. \nScan the primary key for one row. What's wrong with that? Our systems \nhave tables far larger than yours, handling 300M queries per day that \nare far more expensive than a simple primary key index scan. You'll be \nfine. :)\n\nI suggest you set log_min_duration_statement to something like 1000, to \nsend any query that takes longer than 1 second to the PG logs. \nConcentrate on those queries, because ones like this are already working \nright.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604\n312-444-8534\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n",
"msg_date": "Thu, 23 Feb 2012 10:56:22 -0600",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: : Cost calculation for EXPLAIN output"
},
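A minimal sketch of the logging suggestion above; the database name is hypothetical, and the same setting can instead be put in postgresql.conf:

-- log every statement slower than 1 second (takes effect for new sessions)
ALTER DATABASE mydb SET log_min_duration_statement = 1000;
-- or, as a superuser, just for the current session while experimenting
SET log_min_duration_statement = 1000;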
{
"msg_contents": ">\n> > The cost is \"13.88\" to fetch 1 row by scanning an Primary Key\n> > indexed column.\n> >\n> > Isn't the cost for fetching 1 row is too high ?\n>\n> I don't know, how many index pages will need to be randomly accessed\n> in addition to the random heap access? How many dead versions of\n> the row will need to be visited besides the row which is actually\n> visible? How many of these pages are in shared_buffers? How many\n> of these pages are in OS cache?\n>\n\nTotal Index pages are 140310. Yes. I suspect most of the times the required\npage is found in either OS cache or disk (shared_buffers is .9 GB) as we\nhave 200+ GB of highly active database and the Index is on a 10GB table.\n\n> The cost is \"13.88\" to fetch 1 row by scanning an Primary Key\n\n> indexed column.\n>\n> Isn't the cost for fetching 1 row is too high ?\n\nI don't know, how many index pages will need to be randomly accessed\nin addition to the random heap access? How many dead versions of\nthe row will need to be visited besides the row which is actually\nvisible? How many of these pages are in shared_buffers? How many\nof these pages are in OS cache?Total Index pages are 140310. Yes. I suspect most of the times the required page is found in either OS cache or disk (shared_buffers is .9 GB) as we have 200+ GB of highly active database and the Index is on a 10GB table.",
"msg_date": "Sun, 26 Feb 2012 22:44:51 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Cost calculation for EXPLAIN output"
},
{
"msg_contents": "Thanks for your valuable inputs !\n\nThe cost is \"13.88\" to fetch 1 row by scanning an Primary Key\n>> indexed column.\n>>\n>> Isn't the cost for fetching 1 row is too high ?\n>>\n>\n> Not really. The \"cost\" is really just an estimate to rank alternate query\n> plans so the database picks the least expensive plan. The number '13.88' is\n> basically meaningless. It doesn't translate to any real-world equivalent.\n> What you actually care about is the execution time. If it takes 0.25ms or\n> something per row, that's what really matters.\n>\n> For what it's worth, it looks like you have the right query plan, there.\n> Scan the primary key for one row. What's wrong with that? Our systems have\n> tables far larger than yours, handling 300M queries per day that are far\n> more expensive than a simple primary key index scan. You'll be fine. :)\n\n\nExecution time is 0.025 ms per row. I am quite happy with the execution\ntime, even the plan is going the correct way. The total execution time\nreduced from 2.5 hrs to 5.5 seconds. I am looking for a room for further\nimprovement -- probably quite ambitious :-) ..\n\nBut, when i reduced random_page_cost to 2, the cost reduced to 7.03 and\nexecution also have reduced by almost 50%. Is this an gain ?\n\nPlease comment !\n\nThanks,\nVB\n\n_________________**________________\n\n>\n> See http://www.peak6.com/email_**disclaimer/<http://www.peak6.com/email_disclaimer/>for terms and conditions related to this email\n>\n\nThanks for your valuable inputs !\nThe cost is \"13.88\" to fetch 1 row by scanning an Primary Key\nindexed column.\n\nIsn't the cost for fetching 1 row is too high ?\n\n\nNot really. The \"cost\" is really just an estimate to rank alternate query plans so the database picks the least expensive plan. The number '13.88' is basically meaningless. It doesn't translate to any real-world equivalent. What you actually care about is the execution time. If it takes 0.25ms or something per row, that's what really matters.\n\nFor what it's worth, it looks like you have the right query plan, there. Scan the primary key for one row. What's wrong with that? Our systems have tables far larger than yours, handling 300M queries per day that are far more expensive than a simple primary key index scan. You'll be fine. :)\nExecution time is 0.025 ms per row. I am quite happy with the execution time, even the plan is going the correct way. The total execution time reduced from 2.5 hrs to 5.5 seconds. I am looking for a room for further improvement -- probably quite ambitious :-) ..\nBut, when i reduced random_page_cost to 2, the cost reduced to 7.03 and execution also have reduced by almost 50%. Is this an gain ?Please comment !Thanks,\nVB_________________________________\n\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email",
"msg_date": "Sun, 26 Feb 2012 22:52:45 +0530",
"msg_from": "Venkat Balaji <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: : Cost calculation for EXPLAIN output"
}
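One way to try the random_page_cost change discussed above without changing it cluster-wide, using the query from the start of the thread; the tablespace name is hypothetical and per-tablespace cost settings assume PostgreSQL 9.0 or later:

-- test in a single session first
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM table_event WHERE seq_id = 8520960;

-- if it helps, limit the change to the tablespace holding this data
ALTER TABLESPACE my_tablespace SET (random_page_cost = 2);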
] |
[
{
"msg_contents": "Hi All,\n\nI am trying to compile Postgres Source code for ARM cortex A8 architecture.\nWhile compiling, I got an error message which read \"selected processor does not support `swpb r4,r4,[r3]' \"\nOne of the Postgres forums at the location \"http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html\"\nMentioned that by using -disable-spinlocks, we can overcome the error at the cost of performance. I did the same and it compiled successfully.\nBut the INSTALL guide in Postgres source code mentioned that I should inform the Postgres community in case I am forced to use -disable spinlocks to let the code compile.\nHence this email. So please suggest me what I should do now. What sort of performance penalty will be there if I use this option? What actually is the significance of this parameter?\nPlease guide me.\n\nThis is the configure command I used\n./configure CC=/opt/toolchain/bin/armv7l-timesys-linux-gnueabi-gcc --target=armv7l-timesys-linux-gnueabi --prefix=/home/jayashankar/WorkingDirectory/Postgres9.1_Cortex --host=x86_64-unknown-linux-gnu CFLAGS='-march=armv7-a -mtune=cortex-a8 -mfpu=vfpv3 -mthumb' --disable-spinlocks\n\nThanks and Regards\nJayashankar\n\n\n\nLarsen & Toubro Limited\n\nwww.larsentoubro.com\n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n\n\n\n\n\n\n\n\nHi All,\n \nI am trying to compile Postgres Source code for ARM cortex A8 architecture.\nWhile compiling, I got an error message which read “selected processor does not support `swpb r4,r4,[r3]' “\nOne of the Postgres forums at the location “http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html”\nMentioned that by using –disable-spinlocks, we can overcome the error at the cost of performance. I did the same and it compiled successfully.\nBut the INSTALL guide in Postgres source code mentioned that I should inform the Postgres community in case I am forced to use –disable spinlocks to let the code compile.\nHence this email. So please suggest me what I should do now. What sort of performance penalty will be there if I use this option? What actually is the significance of this parameter?\nPlease guide me.\n \nThis is the configure command I used\n./configure CC=/opt/toolchain/bin/armv7l-timesys-linux-gnueabi-gcc --target=armv7l-timesys-linux-gnueabi --prefix=/home/jayashankar/WorkingDirectory/Postgres9.1_Cortex --host=x86_64-unknown-linux-gnu CFLAGS='-march=armv7-a -mtune=cortex-a8\n -mfpu=vfpv3 -mthumb' --disable-spinlocks\n \nThanks and Regards\nJayashankar\n \n\n\n\nLarsen & Toubro Limited \n\nwww.larsentoubro.com \n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.",
"msg_date": "Thu, 23 Feb 2012 20:14:30 +0000",
"msg_from": "Jayashankar K B <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8"
},
{
"msg_contents": "Jayashankar K B <[email protected]> writes:\n> Hi All,\n> I am trying to compile Postgres Source code for ARM cortex A8 architecture.\n> While compiling, I got an error message which read \"selected processor does not support `swpb r4,r4,[r3]' \"\n> One of the Postgres forums at the location \"http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html\"\n> Mentioned that by using -disable-spinlocks, we can overcome the error at the cost of performance. I did the same and it compiled successfully.\n> But the INSTALL guide in Postgres source code mentioned that I should inform the Postgres community in case I am forced to use -disable spinlocks to let the code compile.\n> Hence this email. So please suggest me what I should do now.\n\nTry this patch:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=068e08eebbb2204f525647daad3fe15063b77820\n\nBTW, please don't cross-post to multiple PG mailing lists; there's very\nseldom a good reason to do that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Feb 2012 15:19:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM\n\tCortex A8"
},
{
"msg_contents": "Hi Tom,\n\nSorry about the cross-post.\nI am not aware of the procedures for patch etc.\nCould you please tell me how to use the patch ?\nI have already compiled and got the postgres server.\nSo please let me know the process of patching or kindly point me to a link which explain this.\n\nThanks and Regards\nJayashankar\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 24 February 2012 AM 09:20\nTo: Jayashankar K B\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8\n\nJayashankar K B <[email protected]> writes:\n> Hi All,\n> I am trying to compile Postgres Source code for ARM cortex A8 architecture.\n> While compiling, I got an error message which read \"selected processor does not support `swpb r4,r4,[r3]' \"\n> One of the Postgres forums at the location \"http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html\"\n> Mentioned that by using -disable-spinlocks, we can overcome the error at the cost of performance. I did the same and it compiled successfully.\n> But the INSTALL guide in Postgres source code mentioned that I should inform the Postgres community in case I am forced to use -disable spinlocks to let the code compile.\n> Hence this email. So please suggest me what I should do now.\n\nTry this patch:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=068e08eebbb2204f525647daad3fe15063b77820\n\nBTW, please don't cross-post to multiple PG mailing lists; there's very seldom a good reason to do that.\n\n regards, tom lane\n\n\nLarsen & Toubro Limited\n\nwww.larsentoubro.com\n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n",
"msg_date": "Thu, 23 Feb 2012 21:09:10 +0000",
"msg_from": "Jayashankar K B <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for\n\tARM Cortex A8"
},
{
"msg_contents": "On Thu, Feb 23, 2012 at 3:09 PM, Jayashankar K B\n<[email protected]> wrote:\n> Hi Tom,\n>\n> Sorry about the cross-post.\n> I am not aware of the procedures for patch etc.\n> Could you please tell me how to use the patch ?\n\nsee general instructions here:\nhttp://jungels.net/articles/diff-patch-ten-minutes.html\n\nmerlin\n",
"msg_date": "Thu, 23 Feb 2012 16:58:01 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] Disable-spinlocks while compiling\n\tpostgres 9.1 for ARM Cortex A8"
},
{
"msg_contents": "Hi Tom,\n\nI tried to apply the patch. I succeeded in patching configure, configure.in and src/include/pg_config.h.in files.\nBut while applying the patch for src/include/storage/s_lock.h , I am getting an error.\n\nThis is how I am doing the patch,\n1. I copied the diff output given in the link mentioned in the below email.\n2. Changed the file names appropriately (patch file path is different from source file path. But I have initialized --- and +++ appropriately\n3. Executed the patch command in the Postgres 9.1.1 directory as \"patch src/include/storage/s_lock.h -i s_lock.h_Patch\"\n4. Got the following output\n Hunk #1 succeeded at 252 with fuzz 1.\n Hunk #2 FAILED at 292.\n 1 out of 2 hunks FAILED -- saving rejects to file src/include/storage/s_lock.h.rej\n\n Thought of doing the failed patch manually. But couldn't understand what to do.\n\nPlease let me know what I am doing wrong and what I should be doing.\n\nThanks and Regards\nJayashankar\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 24 February 2012 AM 09:20\nTo: Jayashankar K B\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8\n\nJayashankar K B <[email protected]<mailto:[email protected]>> writes:\n> Hi All,\n> I am trying to compile Postgres Source code for ARM cortex A8 architecture.\n> While compiling, I got an error message which read \"selected processor does not support `swpb r4,r4,[r3]' \"\n> One of the Postgres forums at the location \"http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html\"\n> Mentioned that by using -disable-spinlocks, we can overcome the error at the cost of performance. I did the same and it compiled successfully.\n> But the INSTALL guide in Postgres source code mentioned that I should inform the Postgres community in case I am forced to use -disable spinlocks to let the code compile.\n> Hence this email. So please suggest me what I should do now.\n\nTry this patch:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=068e08eebbb2204f525647daad3fe15063b77820\n\nBTW, please don't cross-post to multiple PG mailing lists; there's very seldom a good reason to do that.\n\n regards, tom lane\n\n\n\nLarsen & Toubro Limited\n\nwww.larsentoubro.com\n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n\n\n\n\n\n\n\n\n\n\nHi Tom,\n \nI tried to apply the patch. I succeeded in patching configure, configure.in and src/include/pg_config.h.in files.\nBut while applying the patch for src/include/storage/s_lock.h , I am getting an error. \n \nThis is how I am doing the patch,\n\nI copied the diff output given in the link mentioned in the below email.Changed the file names appropriately (patch file path is different from source file path. But I have initialized --- and +++ appropriatelyExecuted the patch command in the Postgres 9.1.1 directory as “patch src/include/storage/s_lock.h -i s_lock.h_Patch”Got the following output\nHunk #1 succeeded at 252 with fuzz 1.\nHunk #2 FAILED at 292.\n1 out of 2 hunks FAILED -- saving rejects to file src/include/storage/s_lock.h.rej \n \nThought of doing the failed patch manually. 
But couldn’t understand what to do.\n \nPlease let me know what I am doing wrong and what I should be doing.\n \nThanks and Regards\nJayashankar\n \n-----Original Message-----\n\nFrom: Tom Lane [mailto:[email protected]] \n\nSent: 24 February 2012 AM 09:20\n\nTo: Jayashankar K B\n\nCc: [email protected]; [email protected]\n\nSubject: Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8 \n \nJayashankar K B <[email protected]> writes:\n> Hi All,\n> I am trying to compile Postgres Source code for ARM cortex A8 architecture.\n> While compiling, I got an error message which read \"selected processor does not support `swpb r4,r4,[r3]' \"\n> One of the Postgres forums at the location \"http://postgresql.1045698.n5.nabble.com/BUG-6331-Cross-compile-error-aborts-Works-if-disable-spinlock-is-used-td5068738.html\"\n> Mentioned that by using -disable-spinlocks, we can overcome the error at the cost of performance. I did the same and it compiled successfully.\n> But the INSTALL guide in Postgres source code mentioned that I should inform the Postgres community in case I am forced to use -disable spinlocks to let the code compile.\n> Hence this email. So please suggest me what I should do now.\n \nTry this patch:\nhttp://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=068e08eebbb2204f525647daad3fe15063b77820\n \nBTW, please don't cross-post to multiple PG mailing lists; there's very seldom a good reason to do that.\n \n regards, tom lane\n \n\n\nLarsen & Toubro Limited \n\nwww.larsentoubro.com \n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.",
"msg_date": "Fri, 24 Feb 2012 22:42:29 +0000",
"msg_from": "Jayashankar K B <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for\n\tARM Cortex A8"
},
{
"msg_contents": "Jayashankar K B <[email protected]> writes:\n> I tried to apply the patch. I succeeded in patching configure, configure.in and src/include/pg_config.h.in files.\n> But while applying the patch for src/include/storage/s_lock.h , I am getting an error.\n\nThat patch should apply exactly to 9.1.0 or later. I think either you\nmessed up copying the patch from the web page (note that patch is not\nforgiving about white space...) or else perhaps fooling with the file\nnames messed it up. You shouldn't have to modify the file taken from\nthe \"patch\" link at all. The right way to do it is to cd into the\ntop source directory and use\n\tpatch -p1 <patchfile\nwhich will tell patch how much of the filename to pay attention to\n(viz, not the \"a/\" or \"b/\" parts).\n\nIf you get too frustrated, just wait till Monday and grab 9.1.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Feb 2012 18:54:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for\n\tARM Cortex A8"
},
{
"msg_contents": "Ok. I did a manual patch and it Postgres 9.1.1 compiled for me without using the --disable-spinlocks option.\nThanks a lot for the patch. :)\nBy the way, could you please point me to the explanation on the significance of spinlocks for Postgres?\n\nThanks and Regards\nJayashankar\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 25 February 2012 PM 12:54\nTo: Jayashankar K B\nCc: [email protected]\nSubject: Re: [GENERAL] Re: [PERFORM] Disable-spinlocks while compiling postgres 9.1 for ARM Cortex A8\n\nJayashankar K B <[email protected]> writes:\n> I tried to apply the patch. I succeeded in patching configure, configure.in and src/include/pg_config.h.in files.\n> But while applying the patch for src/include/storage/s_lock.h , I am getting an error.\n\nThat patch should apply exactly to 9.1.0 or later. I think either you messed up copying the patch from the web page (note that patch is not forgiving about white space...) or else perhaps fooling with the file names messed it up. You shouldn't have to modify the file taken from the \"patch\" link at all. The right way to do it is to cd into the top source directory and use\n patch -p1 <patchfile\nwhich will tell patch how much of the filename to pay attention to (viz, not the \"a/\" or \"b/\" parts).\n\nIf you get too frustrated, just wait till Monday and grab 9.1.3.\n\n regards, tom lane\n\n\nLarsen & Toubro Limited\n\nwww.larsentoubro.com\n\nThis Email may contain confidential or privileged information for the intended recipient (s) If you are not the intended recipient, please do not use or disseminate the information, notify the sender and delete it from your system.\n",
"msg_date": "Sun, 26 Feb 2012 12:16:05 +0000",
"msg_from": "Jayashankar K B <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: [PERFORM] Disable-spinlocks while compiling\n\tpostgres 9.1 for ARM Cortex A8"
},
{
"msg_contents": "On Sun, Feb 26, 2012 at 6:16 AM, Jayashankar K B\n<[email protected]> wrote:\n> Ok. I did a manual patch and it Postgres 9.1.1 compiled for me without using the --disable-spinlocks option.\n> Thanks a lot for the patch. :)\n> By the way, could you please point me to the explanation on the significance of spinlocks for Postgres?\n\nspinlocks are used all over the place to synchronize access to shared\ndata structures (see here: http://en.wikipedia.org/wiki/Spinlock also\nsee here: http://rhaas.blogspot.com/2011/01/locking-in-postgresql.html).\n you can awkwardly implement them in high level languages like C but\ntypically there are hardware primitives that are much faster and\nbetter to use.\n\nvery generally speaking, spinlocks are a better than semaphores when\nthe lock duration is very short, contention isn't terrible, and the\ntime taken to acquire the lock matters.\n\nmerlin\n",
"msg_date": "Mon, 27 Feb 2012 08:39:23 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [PERFORM] Disable-spinlocks while compiling\n\tpostgres 9.1 for ARM Cortex A8"
}
] |
[
{
"msg_contents": "We are experiencing an unusual slowdown when using UUID field in JOIN when\nupdating a table. SQL looks like this:\n\nUPDATE dst\nSET data_field = src.data_field\nFROM src\nWHERE dst.uuid_field = src.uuid_field;\n\nThis statement takes over 6 times longer than a similar statement against\nthe same table except the join is done by a integer field, e.g.\n\nUPDATE dst\nSET data_field = src.data_field\nFROM src\nWHERE dst.integer_field = src.integer_field;\n\nI can't get rid of UUID in the \"src\" table since it comes from another\ndatabase that we can't change. The table has around 1 mil rows. I tried\nvacuuming it. Tried creating indexes on src table (it ignores them and\nbuilds hash join anyway). It takes faster to rebuild the whole table than\nto update it while joining by UUID. Has anyone experienced this before and\nwhat was the solution for you?\n\nHelp is greatly appreciated.\n\nWe are experiencing an unusual slowdown when using UUID field in JOIN when updating a table. SQL looks like this:UPDATE dstSET data_field = src.data_fieldFROM srcWHERE dst.uuid_field = src.uuid_field;\nThis statement takes over 6 times longer than a similar statement against the same table except the join is done by a integer field, e.g.UPDATE dst\nSET data_field = src.data_field\nFROM src\nWHERE dst.integer_field = src.integer_field;\nI can't get rid of UUID in the \"src\" table since it comes from another database that we can't change. The table has around 1 mil rows. I tried vacuuming it. Tried creating indexes on src table (it ignores them and builds hash join anyway). It takes faster to rebuild the whole table than to update it while joining by UUID. Has anyone experienced this before and what was the solution for you?\nHelp is greatly appreciated.",
"msg_date": "Fri, 24 Feb 2012 17:46:23 -0500",
"msg_from": "Cherio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Joining tables by UUID field - very slow"
},
{
"msg_contents": "On Fri, Feb 24, 2012 at 4:46 PM, Cherio <[email protected]> wrote:\n> We are experiencing an unusual slowdown when using UUID field in JOIN when\n> updating a table. SQL looks like this:\n>\n> UPDATE dst\n> SET data_field = src.data_field\n> FROM src\n> WHERE dst.uuid_field = src.uuid_field;\n>\n> This statement takes over 6 times longer than a similar statement against\n> the same table except the join is done by a integer field, e.g.\n>\n> UPDATE dst\n> SET data_field = src.data_field\n> FROM src\n> WHERE dst.integer_field = src.integer_field;\n>\n> I can't get rid of UUID in the \"src\" table since it comes from another\n> database that we can't change. The table has around 1 mil rows. I tried\n> vacuuming it. Tried creating indexes on src table (it ignores them and\n> builds hash join anyway). It takes faster to rebuild the whole table than to\n> update it while joining by UUID. Has anyone experienced this before and what\n> was the solution for you?\n\nIf you're updating every field in the table, you're basically\nrebuilding the whole table anyways. Also, both the heap and the\nindexes have to track both row versions. HOT helps for non indexed\nfield updates, but the HOT optimization tends to only really shine\nwhen the updates are small and frequent. In postgres it's good to try\nand avoid large updates when reasonable to do so.\n\nThe UUID is slower because it adds lots of bytes to both the heap and\nthe index although 6 times slower does seem like a lot. Can you\nsimulate a similar update with a text column to see if the performance\ndifferences is related to row/key size?\n\nmerlin\n",
"msg_date": "Mon, 27 Feb 2012 08:53:15 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joining tables by UUID field - very slow"
},
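A rough way to run the comparison suggested above, reusing the table and column names from the first message; the uuid_as_text columns are invented for the test, and EXPLAIN's BUFFERS option assumes PostgreSQL 9.0 or later:

-- copy the join key into a text column of comparable content
ALTER TABLE src ADD COLUMN uuid_as_text text;
ALTER TABLE dst ADD COLUMN uuid_as_text text;
UPDATE src SET uuid_as_text = uuid_field::text;
UPDATE dst SET uuid_as_text = uuid_field::text;
ANALYZE src;
ANALYZE dst;

-- repeat the same update joined on the text copy and compare timings
EXPLAIN (ANALYZE, BUFFERS)
UPDATE dst
SET data_field = src.data_field
FROM src
WHERE dst.uuid_as_text = src.uuid_as_text;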
{
"msg_contents": "Cherio <[email protected]> writes:\n> This statement takes over 6 times longer than a similar statement against\n> the same table except the join is done by a integer field, e.g.\n\nCould we see EXPLAIN ANALYZE data for both cases?\n\nHow are you representing the UUIDs, exactly (ie what's the column data\ntype)?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Feb 2012 10:25:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joining tables by UUID field - very slow "
}
] |
[
{
"msg_contents": "Hi,\n\n2011/10/24 Stephen Frost <[email protected]> wrote\n> Now, we've also been discussing ways to have PG automatically\n> re-populate shared buffers and possibly OS cache based on what was in\n> memory at the time of the last shut-down, but I'm not sure that would\n> help your case either since you're rebuilding everything every night and\n> that's what's trashing your buffers (because everything ends up getting\n> moved around). You might actually want to consider if that's doing more\n> harm than good for you. If you weren't doing that, then the cache\n> wouldn't be getting destroyed every night..\n\nI'd like to come back on the issue of aka of in-memory key-value database.\n\nTo remember, it contains table definition and queries as indicated in\nthe appendix [0]. There exist 4 other tables of similar structure.\nThere are indexes on each column. The tables contain around 10 million\ntuples. The database is \"read-only\"; it's completely updated every\nday. I don't expect more than 5 concurrent users at any time. A\ntypical query looks like [1] and varies in an unforeseable way (that's\nwhy hstore is used). EXPLAIN tells me that the indexes are used [2].\n\nThe problem is that the initial queries are too slow - and there is no\nsecond chance. I do have to trash the buffer every night. There is\nenough main memory to hold all table contents.\n\n1. How can I warm up or re-populate shared buffers of Postgres?\n2. Are there any hints on how to tell Postgres to read in all table\ncontents into memory?\n\nYours, Stefan\n\n\nAPPENDIX\n\n[0]\nCREATE TABLE osm_point (\n osm_id integer,\n name text,\n tags hstore\n geom geometry(Point,4326)\n);\n\n\n[1]\nSELECT osm_id, name FROM osm_point\n WHERE tags @> 'tourism=>viewpoint'\n AND ST_Contains(\n GeomFromText('BOX(8.42 47.072, 9.088 47.431)'::box2d, 4326),\n geom)\n\n[2]\nEXPLAIN ANALYZE returns:\n Bitmap Heap Scan on osm_point (cost=402.15..40465.85 rows=430\nwidth=218) (actual time=121.888..137.\n Recheck Cond: (tags @> '\"tourism\"=>\"viewpoint\"'::hstore)\n Filter: (('01030...'::geometry && geom) AND\n_st_contains('01030'::geometry, geom))\n -> Bitmap Index Scan on osm_point_tags_idx (cost=0.00..402.04\nrows=11557 width=0) (actual time=1 6710 loops=1)\n Index Cond: (tags @> '\"tourism\"=>\"viewpoint\"'::hstore)\n Total runtime: 137.881 ms\n(6 rows)\n",
"msg_date": "Sun, 26 Feb 2012 01:16:08 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG as in-memory db? How to warm up and re-populate buffers? How to\n\tread in all tuples into memory?"
},
{
"msg_contents": "On Sat, Feb 25, 2012 at 4:16 PM, Stefan Keller <[email protected]> wrote:\n>\n> I'd like to come back on the issue of aka of in-memory key-value database.\n>\n> To remember, it contains table definition and queries as indicated in\n> the appendix [0]. There exist 4 other tables of similar structure.\n> There are indexes on each column. The tables contain around 10 million\n> tuples. The database is \"read-only\"; it's completely updated every\n> day. I don't expect more than 5 concurrent users at any time. A\n> typical query looks like [1] and varies in an unforeseable way (that's\n> why hstore is used). EXPLAIN tells me that the indexes are used [2].\n>\n> The problem is that the initial queries are too slow - and there is no\n> second chance. I do have to trash the buffer every night. There is\n> enough main memory to hold all table contents.\n\nJust that table, or the entire database?\n\n>\n> 1. How can I warm up or re-populate shared buffers of Postgres?\n\nInstead, warm the OS cache. Then data will get transferred into the\npostgres shared_buffers pool from the OS cache very quickly.\n\ntar -c $PGDATA/base/ |wc -c\n\nIf you need to warm just one table, because the entire base directory\nwon't fit in OS cache, then you need to do a bit more work to find out\nwhich files to use.\n\nYou might feel clever and try this instead:\n\ntar -c /dev/null $PGDATA/base/ > /dev/null\n\nBut my tar program is too clever by half. It detects that it is\nwriting to /dev/null, and just does not actually read the data.\n\n> 2. Are there any hints on how to tell Postgres to read in all table\n> contents into memory?\n\nI don't think so, at least not in core. I've wondered if it would\nmake sense to suppress ring-buffer strategy when there are buffers on\nthe free-list. That way a sequential scan would populate\nshared_buffers after a restart. But it wouldn't help you get the\nindexes into cache.\n\nCheers,\n\nJeff\n",
"msg_date": "Sat, 25 Feb 2012 18:13:49 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "You can try PostgreSQL 9.x master/slave replication, then try run slave on persistent RAM Fileystem(tmpfs)\nSo, access your all data from slave PostgreSQL that run on tmpfs..\n \n\n________________________________\n 发件人: Jeff Janes <[email protected]>\n收件人: Stefan Keller <[email protected]> \n抄送: [email protected]; Stephen Frost <[email protected]> \n发送日期: 2012年2月26日, 星期日, 上午 10:13\n主题: Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?\n \nOn Sat, Feb 25, 2012 at 4:16 PM, Stefan Keller <[email protected]> wrote:\n>\n> I'd like to come back on the issue of aka of in-memory key-value database.\n>\n> To remember, it contains table definition and queries as indicated in\n> the appendix [0]. There exist 4 other tables of similar structure.\n> There are indexes on each column. The tables contain around 10 million\n> tuples. The database is \"read-only\"; it's completely updated every\n> day. I don't expect more than 5 concurrent users at any time. A\n> typical query looks like [1] and varies in an unforeseable way (that's\n> why hstore is used). EXPLAIN tells me that the indexes are used [2].\n>\n> The problem is that the initial queries are too slow - and there is no\n> second chance. I do have to trash the buffer every night. There is\n> enough main memory to hold all table contents.\n\nJust that table, or the entire database?\n\n>\n> 1. How can I warm up or re-populate shared buffers of Postgres?\n\nInstead, warm the OS cache. Then data will get transferred into the\npostgres shared_buffers pool from the OS cache very quickly.\n\ntar -c $PGDATA/base/ |wc -c\n\nIf you need to warm just one table, because the entire base directory\nwon't fit in OS cache, then you need to do a bit more work to find out\nwhich files to use.\n\nYou might feel clever and try this instead:\n\ntar -c /dev/null $PGDATA/base/ > /dev/null\n\nBut my tar program is too clever by half. It detects that it is\nwriting to /dev/null, and just does not actually read the data.\n\n> 2. Are there any hints on how to tell Postgres to read in all table\n> contents into memory?\n\nI don't think so, at least not in core. I've wondered if it would\nmake sense to suppress ring-buffer strategy when there are buffers on\nthe free-list. That way a sequential scan would populate\nshared_buffers after a restart. But it wouldn't help you get the\nindexes into cache.\n\nCheers,\n\nJeff\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nYou can try PostgreSQL 9.x master/slave replication, then try run slave on persistent RAM Fileystem(tmpfs)So, access your all data from slave PostgreSQL that run on tmpfs.. 发件人: Jeff Janes <[email protected]> 收件人: Stefan Keller <[email protected]> 抄送: [email protected]; Stephen Frost <[email protected]> 发送日期: 2012年2月26日, 星期日, 上午 10:13 主题: Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory? On Sat, Feb 25, 2012 at 4:16 PM, Stefan Keller <[email protected]> wrote:>> I'd like to come back on the issue of aka of in-memory key-value database.>> To remember, it contains table definition and queries as indicated in> the appendix [0]. There exist 4 other tables of similar structure.> There are indexes on each column. The\n tables contain around 10 million> tuples. The database is \"read-only\"; it's completely updated every> day. 
I don't expect more than 5 concurrent users at any time. A> typical query looks like [1] and varies in an unforeseable way (that's> why hstore is used). EXPLAIN tells me that the indexes are used [2].>> The problem is that the initial queries are too slow - and there is no> second chance. I do have to trash the buffer every night. There is> enough main memory to hold all table contents.Just that table, or the entire database?>> 1. How can I warm up or re-populate shared buffers of Postgres?Instead, warm the OS cache. Then data will get transferred into thepostgres shared_buffers pool from the OS cache very quickly.tar -c $PGDATA/base/ |wc -cIf you need to warm just one table, because the entire base directorywon't fit in OS cache,\n then you need to do a bit more work to find outwhich files to use.You might feel clever and try this instead:tar -c /dev/null $PGDATA/base/ > /dev/nullBut my tar program is too clever by half. It detects that it iswriting to /dev/null, and just does not actually read the data.> 2. Are there any hints on how to tell Postgres to read in all table> contents into memory?I don't think so, at least not in core. I've wondered if it wouldmake sense to suppress ring-buffer strategy when there are buffers onthe free-list. That way a sequential scan would populateshared_buffers after a restart. But it wouldn't help you get theindexes into cache.Cheers,Jeff-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sun, 26 Feb 2012 00:52:07 -0800 (PST)",
"msg_from": "Wales Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?UmXvvJogW1BFUkZPUk1dIFBHIGFzIGluLW1lbW9yeSBkYj8gSG93IHRvIHdh?=\n\t=?utf-8?B?cm0gdXAgYW5kIHJlLXBvcHVsYXRlIGJ1ZmZlcnM/IEhvdyB0byByZWFkIGlu?=\n\t=?utf-8?B?IGFsbCB0dXBsZXMgaW50byBtZW1vcnk/?="
},
{
"msg_contents": "Hi Jeff and Wales,\n\n2012/2/26 Jeff Janes <[email protected]> wrote:\n>> The problem is that the initial queries are too slow - and there is no\n>> second chance. I do have to trash the buffer every night. There is\n>> enough main memory to hold all table contents.\n>\n> Just that table, or the entire database?\n\nThe entire database consisting of only about 5 tables which are\nsimilar but with different geometry types plus a relations table (as\nOpenStreetMap calls it).\n\n>> 1. How can I warm up or re-populate shared buffers of Postgres?\n>\n> Instead, warm the OS cache. Then data will get transferred into the\n> postgres shared_buffers pool from the OS cache very quickly.\n>\n> tar -c $PGDATA/base/ |wc -c\n\nOk. So with \"OS cache\" you mean the files which to me are THE database itself?\nA cache to me is a second storage with \"controlled redudancy\" because\nof performance reasons.\n\n>> 2. Are there any hints on how to tell Postgres to read in all table\n>> contents into memory?\n>\n> I don't think so, at least not in core. I've wondered if it would\n> make sense to suppress ring-buffer strategy when there are buffers on\n> the free-list. That way a sequential scan would populate\n> shared_buffers after a restart. But it wouldn't help you get the\n> indexes into cache.\n\nSo, are there any developments going on with PostgreSQL as Stephen\nsuggested in the former thread?\n\n2012/2/26 Wales Wang <[email protected]>:\n> You can try PostgreSQL 9.x master/slave replication, then try run slave\n> on persistent RAM Fileystem (tmpfs)\n> So, access your all data from slave PostgreSQL that run on tmpfs..\n\nNice idea.\nI do have a single upscaled server and up to now I hesitated to\nallocate say 48 Gigabytes (out of 72) to such a RAM Fileystem (tmpfs).\n\nStill, would'nt it be more flexible when I could dynamically instruct\nPostgreSQL to behave like an in-memory database?\n\nYours, Stefan\n",
"msg_date": "Sun, 26 Feb 2012 11:56:44 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "Stefan Keller wrote on 26.02.2012 01:16:\n> 2. Are there any hints on how to tell Postgres to read in all table\n> contents into memory?\n\nWhat about creating tablespace on a RAM Fileystem (tmpfs), then create a second schema in your database where all tables are located in the that \"temp\" tablespace.\n\nThen upon startup (or using triggers) you can copy all data from the persistent tables to the memory tables.\n\nIt would probably make sense to change the value of random_page_cost for that tablespace to 1\n\nI'm not sure though how PostgreSQL handles a system-restart with tables on a tablespace that might not be there.\n\nThomas\n\n\n\n\n\n\n",
"msg_date": "Sun, 26 Feb 2012 12:46:03 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate buffers? How\n\tto read in all tuples into memory?"
},
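A sketch of the tablespace idea above, assuming a tmpfs mount at /mnt/ramdisk and the osm_point table from the start of the thread; everything in this tablespace disappears on reboot, so the copy (and the tablespace directory itself) has to be recreated at startup:

CREATE TABLESPACE ramspace LOCATION '/mnt/ramdisk/pgdata';
ALTER TABLESPACE ramspace SET (random_page_cost = 1, seq_page_cost = 1);

SET default_tablespace = ramspace;
CREATE SCHEMA mem;
CREATE TABLE mem.osm_point
    (LIKE public.osm_point INCLUDING DEFAULTS INCLUDING INDEXES);
INSERT INTO mem.osm_point SELECT * FROM public.osm_point;
ANALYZE mem.osm_point;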
{
"msg_contents": "* Stefan Keller ([email protected]) wrote:\n> So, are there any developments going on with PostgreSQL as Stephen\n> suggested in the former thread?\n\nWhile the idea has been getting kicked around, I don't know of anyone\nactively working on developing code to implement it.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Sun, 26 Feb 2012 08:55:11 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On 02/25/2012 06:16 PM, Stefan Keller wrote:\n>\n> 1. How can I warm up or re-populate shared buffers of Postgres?\n> 2. Are there any hints on how to tell Postgres to read in all table\n> contents into memory?\n>\n> Yours, Stefan\n>\n\nHow about after you load the data, vacuum freeze it, then do something like:\n\nSELECT count(*) FROM osm_point WHERE tags @> 'tourism=>junk'\n\n-Andy\n\n\n",
"msg_date": "Sun, 26 Feb 2012 09:20:54 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
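A slightly wider variant of the warming query above, assuming the osm_point schema from the first message; the first statement drags the heap through the cache and the other two touch the hstore and geometry indexes (whether the planner really uses the indexes for these probe values is worth checking with EXPLAIN):

SELECT count(*) FROM osm_point;
SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>viewpoint';
SELECT count(*) FROM osm_point
WHERE geom && ST_MakeEnvelope(8.42, 47.072, 9.088, 47.431, 4326);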
{
"msg_contents": "2012/2/26 Andy Colson <[email protected]> wrote:\n> On 02/25/2012 06:16 PM, Stefan Keller wrote:\n>> 1. How can I warm up or re-populate shared buffers of Postgres?\n>> 2. Are there any hints on how to tell Postgres to read in all table\n>> contents into memory?\n>>\n>> Yours, Stefan\n>\n> How about after you load the data, vacuum freeze it, then do something like:\n>\n> SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>junk'\n>\n> -Andy\n\nThat good idea is what I proposed elsewhere on one of the PG lists and\ngot told that this does'nt help.\n\nI can accept this approach that users should'nt directly interfere\nwith the optimizer. But I think it's still worth to discuss a\nconfiguration option (per table) or so which tells PG that this table\ncontents should fit into memory so that it tries to load a table into\nmemory and keeps it there. This option probably only makes sense in\ncombination with unlogged tables.\n\nYours, Stefan\n",
"msg_date": "Sun, 26 Feb 2012 20:11:24 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On 02/26/2012 01:11 PM, Stefan Keller wrote:\n> 2012/2/26 Andy Colson<[email protected]> wrote:\n>> On 02/25/2012 06:16 PM, Stefan Keller wrote:\n>>> 1. How can I warm up or re-populate shared buffers of Postgres?\n>>> 2. Are there any hints on how to tell Postgres to read in all table\n>>> contents into memory?\n>>>\n>>> Yours, Stefan\n>>\n>> How about after you load the data, vacuum freeze it, then do something like:\n>>\n>> SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>junk'\n>>\n>> -Andy\n>\n> That good idea is what I proposed elsewhere on one of the PG lists and\n> got told that this does'nt help.\n>\n> I can accept this approach that users should'nt directly interfere\n> with the optimizer. But I think it's still worth to discuss a\n> configuration option (per table) or so which tells PG that this table\n> contents should fit into memory so that it tries to load a table into\n> memory and keeps it there. This option probably only makes sense in\n> combination with unlogged tables.\n>\n> Yours, Stefan\n>\n\nI don't buy that. Did you test it? Who/where did you hear this? And... how long does it take after you replace the entire table until things are good and cached? One or two queries?\n\nAfter a complete reload of the data, do you vacuum freeze it?\n\nAfter a complete reload of the data, how long until its fast?\n\n-Andy\n",
"msg_date": "Sun, 26 Feb 2012 13:20:58 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "Le dimanche 26 février 2012 01:16:08, Stefan Keller a écrit :\n> Hi,\n> \n> 2011/10/24 Stephen Frost <[email protected]> wrote\n> \n> > Now, we've also been discussing ways to have PG automatically\n> > re-populate shared buffers and possibly OS cache based on what was in\n> > memory at the time of the last shut-down, but I'm not sure that would\n> > help your case either since you're rebuilding everything every night and\n> > that's what's trashing your buffers (because everything ends up getting\n> > moved around). You might actually want to consider if that's doing more\n> > harm than good for you. If you weren't doing that, then the cache\n> > wouldn't be getting destroyed every night..\n> \n> I'd like to come back on the issue of aka of in-memory key-value database.\n> \n> To remember, it contains table definition and queries as indicated in\n> the appendix [0]. There exist 4 other tables of similar structure.\n> There are indexes on each column. The tables contain around 10 million\n> tuples. The database is \"read-only\"; it's completely updated every\n> day. I don't expect more than 5 concurrent users at any time. A\n> typical query looks like [1] and varies in an unforeseable way (that's\n> why hstore is used). EXPLAIN tells me that the indexes are used [2].\n> \n> The problem is that the initial queries are too slow - and there is no\n> second chance. I do have to trash the buffer every night. There is\n> enough main memory to hold all table contents.\n> \n> 1. How can I warm up or re-populate shared buffers of Postgres?\n\nThere was a patch proposed for postgresql which purpose was to \nsnapshot/Restore postgresql buffers, but it is still not sure how far that \nreally help to have that part loaded.\n\n> 2. Are there any hints on how to tell Postgres to read in all table\n> contents into memory?\n\nI wrote pgfincore for the OS part: you can use it to preload table/index in OS \ncache, and do snapshot/restore if you want fine grain control of what part of \nthe object you want to warm.\nhttps://github.com/klando/pgfincore\n\n\n> \n> Yours, Stefan\n> \n> \n> APPENDIX\n> \n> [0]\n> CREATE TABLE osm_point (\n> osm_id integer,\n> name text,\n> tags hstore\n> geom geometry(Point,4326)\n> );\n> \n> \n> [1]\n> SELECT osm_id, name FROM osm_point\n> WHERE tags @> 'tourism=>viewpoint'\n> AND ST_Contains(\n> GeomFromText('BOX(8.42 47.072, 9.088 47.431)'::box2d, 4326),\n> geom)\n> \n> [2]\n> EXPLAIN ANALYZE returns:\n> Bitmap Heap Scan on osm_point (cost=402.15..40465.85 rows=430\n> width=218) (actual time=121.888..137.\n> Recheck Cond: (tags @> '\"tourism\"=>\"viewpoint\"'::hstore)\n> Filter: (('01030...'::geometry && geom) AND\n> _st_contains('01030'::geometry, geom))\n> -> Bitmap Index Scan on osm_point_tags_idx (cost=0.00..402.04\n> rows=11557 width=0) (actual time=1 6710 loops=1)\n> Index Cond: (tags @> '\"tourism\"=>\"viewpoint\"'::hstore)\n> Total runtime: 137.881 ms\n> (6 rows)\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Sun, 26 Feb 2012 20:35:44 +0100",
"msg_from": "=?utf-8?q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate buffers? How\n\tto read in all tuples into memory?"
},
{
"msg_contents": "2012/2/26 Andy Colson <[email protected]> wrote:\n>>> How about after you load the data, vacuum freeze it, then do something\n>>> like:\n>>>\n>>> SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>junk'\n>>>\n>>> -Andy\n>>\n>>\n>> That good idea is what I proposed elsewhere on one of the PG lists and\n>> got told that this does'nt help.\n>>\n...\n> I don't buy that. Did you test it? Who/where did you hear this? And...\n> how long does it take after you replace the entire table until things are\n> good and cached? One or two queries?\n>\n> After a complete reload of the data, do you vacuum freeze it?\n\nYes.\n\n> After a complete reload of the data, how long until its fast?\n\nJust after the second query. You can try it yourself online here:\nhttp://bit.ly/A8duyB\n\n-Stefan\n",
"msg_date": "Sun, 26 Feb 2012 21:37:45 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "Hi,\n\n2012/2/26 Cédric Villemain <[email protected]> wrote:\n>> 1. How can I warm up or re-populate shared buffers of Postgres?\n>\n> There was a patch proposed for postgresql which purpose was to\n\nWhich patch are you referring to?\n\n> snapshot/Restore postgresql buffers, but it is still not sure how far that\n> really help to have that part loaded.\n\nWhat's not sure and why?\n\n>> 2. Are there any hints on how to tell Postgres to read in all table\n>> contents into memory?\n>\n> I wrote pgfincore for the OS part: you can use it to preload table/index in OS\n> cache, and do snapshot/restore if you want fine grain control of what part of\n> the object you want to warm.\n> https://github.com/klando/pgfincore\n\nYes, now I remember. I have a look at that.\n\nI'd still like to see something where PG really preloads tuples and\ntreats them \"always in-memory\" (given they fit into RAM).\nSince I have a \"read-only\" database there's no WAL and locking needed.\nBut as soon as we allow writes I realize that the in-memory feature\nneeds to be coupled with other enhancements like replication (which\nsomehow would avoid WAL).\n\nYours, Stefan\n",
"msg_date": "Sun, 26 Feb 2012 21:56:35 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Sun, Feb 26, 2012 at 2:56 AM, Stefan Keller <[email protected]> wrote:\n> Hi Jeff and Wales,\n>\n> 2012/2/26 Jeff Janes <[email protected]> wrote:\n>>> The problem is that the initial queries are too slow - and there is no\n>>> second chance. I do have to trash the buffer every night. There is\n>>> enough main memory to hold all table contents.\n>>\n>> Just that table, or the entire database?\n>\n> The entire database consisting of only about 5 tables which are\n> similar but with different geometry types plus a relations table (as\n> OpenStreetMap calls it).\n\nAnd all of those combined fit in RAM? With how much to spare?\n\n>\n>>> 1. How can I warm up or re-populate shared buffers of Postgres?\n>>\n>> Instead, warm the OS cache. Then data will get transferred into the\n>> postgres shared_buffers pool from the OS cache very quickly.\n>>\n>> tar -c $PGDATA/base/ |wc -c\n>\n> Ok. So with \"OS cache\" you mean the files which to me are THE database itself?\n\nMost operating systems will use any otherwise unused RAM to cache\n\"recently\" accessed file-system data. That is the OS cache. The\npurpose of the tar is to populate the OS cache with the \"database\nitself\". That way, when postgres wants something that isn't already\nin shared_buffers, it doesn't require a disk read to get it, just a\nrequest to the OS.\n\nBut this trick is most useful after the OS has been restarted so the\nOS cache is empty. If the OS has been up for a long time, then why\nisn't it already populated with the data you need? Maybe the data\ndoesn't fit, maybe some other process has trashed the cache (in which\ncase, why would it not continue to trash the cache on an ongoing\nbasis?)\n\nSince you just recently created the tables and indexes, they must have\npassed through the OS cache on the way to disk. So why aren't they\nstill there? Is shared_buffers so large that little RAM is left over\nfor the OS? Did you reboot the OS? Are there other processes running\nthat drive the database-specific files out of the OS cache?\n\n> A cache to me is a second storage with \"controlled redudancy\" because\n> of performance reasons.\n\nYeah. But there are multiple caches, with different parties in\ncontrol and different opinions of what is redundant.\n\n>>> 2. Are there any hints on how to tell Postgres to read in all table\n>>> contents into memory?\n>>\n>> I don't think so, at least not in core. I've wondered if it would\n>> make sense to suppress ring-buffer strategy when there are buffers on\n>> the free-list. That way a sequential scan would populate\n>> shared_buffers after a restart. But it wouldn't help you get the\n>> indexes into cache.\n>\n> So, are there any developments going on with PostgreSQL as Stephen\n> suggested in the former thread?\n\nI don't see any active development for the upcoming release, and most\nof what has been suggested wouldn't help you because they are about\nre-populating the cache with previously hot data, while you are\ndestroying your previously hot data and wanting to specify the\nfuture-hot data.\n\nBy the way, your explain plan would be more useful if it included\nbuffers. \"Explain (analyze, buffers) select...\"\n\nI don't know that it is ever better to run analyze without buffers,\nother than for backwards compatibility. I'm trying to get in the\nhabit of just automatically doing it.\n\nCheers,\n\nJeff\n",
"msg_date": "Sun, 26 Feb 2012 14:34:35 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "There are many approach for PostgreSQL in-memory.\n\nThe quick and easy way is making slave pgsql run on persistent RAM filesystem, the slave is part of master/slave replication cluster.\n \nThe fstab and script make RAM file system persistent is below:\nSetup:\nFirst, create a mountpoint for the disk : \nmkdir /mnt/ramdisk\nSecondly, add this line to /etc/fstab in to mount the drive at boot-time. \ntmpfs /mnt/ramdisk tmpfs defaults,size=65536M 0 0\n\n#! /bin/sh \n# /etc/init.d/ramdisk.sh\n#\n \ncase \"$1\" in\n start)\n echo \"Copying files to ramdisk\"\n rsync -av /data/ramdisk-backup/ /mnt/ramdisk/\n echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched from HD >> /var/log/ramdisk_sync.log\n ;;\n sync)\n echo \"Synching files from ramdisk to Harddisk\"\n echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log\n rsync -av --delete --recursive --force /mnt/ramdisk/ /data/ramdisk-backup/\n ;;\n stop)\n echo \"Synching logfiles from ramdisk to Harddisk\"\n echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log\n rsync -av --delete --recursive --force /mnt/ramdisk/ /data/ramdisk-backup/\n ;;\n *)\n echo \"Usage: /etc/init.d/ramdisk {start|stop|sync}\"\n exit 1\n ;;\nesac\nexit 0\n \nyou can run it when startup and shutdown and crontabe hoursly.\n \nWales Wang \n\n________________________________\n 发件人: Jeff Janes <[email protected]>\n收件人: Stefan Keller <[email protected]> \n抄送: Wales Wang <[email protected]>; [email protected]; Stephen Frost <[email protected]> \n发送日期: 2012年2月27日, 星期一, 上午 6:34\n主题: Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?\n \nOn Sun, Feb 26, 2012 at 2:56 AM, Stefan Keller <[email protected]> wrote:\n> Hi Jeff and Wales,\n>\n> 2012/2/26 Jeff Janes <[email protected]> wrote:\n>>> The problem is that the initial queries are too slow - and there is no\n>>> second chance. I do have to trash the buffer every night. There is\n>>> enough main memory to hold all table contents.\n>>\n>> Just that table, or the entire database?\n>\n> The entire database consisting of only about 5 tables which are\n> similar but with different geometry types plus a relations table (as\n> OpenStreetMap calls it).\n\nAnd all of those combined fit in RAM? With how much to spare?\n\n>\n>>> 1. How can I warm up or re-populate shared buffers of Postgres?\n>>\n>> Instead, warm the OS cache. 燭hen data will get transferred into the\n>> postgres shared_buffers pool from the OS cache very quickly.\n>>\n>> tar -c $PGDATA/base/ |wc -c\n>\n> Ok. So with \"OS cache\" you mean the files which to me are THE database itself?\n\nMost operating systems will use any otherwise unused RAM to cache\n\"recently\" accessed file-system data. That is the OS cache. The\npurpose of the tar is to populate the OS cache with the \"database\nitself\". That way, when postgres wants something that isn't already\nin shared_buffers, it doesn't require a disk read to get it, just a\nrequest to the OS.\n\nBut this trick is most useful after the OS has been restarted so the\nOS cache is empty. If the OS has been up for a long time, then why\nisn't it already populated with the data you need? Maybe the data\ndoesn't fit, maybe some other process has trashed the cache (in which\ncase, why would it not continue to trash the cache on an ongoing\nbasis?)\n\nSince you just recently created the tables and indexes, they must have\npassed through the OS cache on the way to disk. So why aren't they\nstill there? 
Is shared_buffers so large that little RAM is left over\nfor the OS? Did you reboot the OS? Are there other processes running\nthat drive the database-specific files out of the OS cache?\n\n> A cache to me is a second storage with \"controlled redudancy\" because\n> of performance reasons.\n\nYeah. But there are multiple caches, with different parties in\ncontrol and different opinions of what is redundant.\n\n>>> 2. Are there any hints on how to tell Postgres to read in all table\n>>> contents into memory?\n>>\n>> I don't think so, at least not in core. 營've wondered if it would\n>> make sense to suppress ring-buffer strategy when there are buffers on\n>> the free-list. 燭hat way a sequential scan would populate\n>> shared_buffers after a restart. 燘ut it wouldn't help you get the\n>> indexes into cache.\n>\n> So, are there any developments going on with PostgreSQL as Stephen\n> suggested in the former thread?\n\nI don't see any active development for the upcoming release, and most\nof what has been suggested wouldn't help you because they are about\nre-populating the cache with previously hot data, while you are\ndestroying your previously hot data and wanting to specify the\nfuture-hot data.\n\nBy the way, your explain plan would be more useful if it included\nbuffers. \"Explain (analyze, buffers) select...\"\n\nI don't know that it is ever better to run analyze without buffers,\nother than for backwards compatibility. I'm trying to get in the\nhabit of just automatically doing it.\n\nCheers,\n\nJeff\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nThere are many approach for PostgreSQL in-memory.The quick and easy way is making slave pgsql run on persistent RAM filesystem, the slave is part of master/slave replication cluster. The fstab and script make RAM file system persistent is below:Setup:First, create a mountpoint for the disk : mkdir /mnt/ramdiskSecondly, add this line to /etc/fstab in to mount the drive at boot-time. tmpfs /mnt/ramdisk tmpfs defaults,size=65536M 0 0#! /bin/sh # /etc/init.d/ramdisk.sh# case \"$1\" in start) echo \"Copying files to ramdisk\" rsync -av /data/ramdisk-backup/\n /mnt/ramdisk/ echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched from HD >> /var/log/ramdisk_sync.log ;; sync) echo \"Synching files from ramdisk to Harddisk\" echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log rsync -av --delete --recursive --force /mnt/ramdisk/ /data/ramdisk-backup/ ;; stop) echo \"Synching logfiles from ramdisk to Harddisk\" echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched to HD >> /var/log/ramdisk_sync.log rsync -av --delete --recursive --force /mnt/ramdisk/ /data/ramdisk-backup/ ;; *) echo \"Usage: /etc/init.d/ramdisk {start|stop|sync}\" exit 1 ;;esacexit\n 0 you can run it when startup and shutdown and crontabe hoursly. Wales Wang 发件人: Jeff Janes <[email protected]> 收件人: Stefan Keller <[email protected]> 抄送: Wales Wang <[email protected]>; [email protected]; Stephen Frost <[email protected]> 发送日期: 2012年2月27日, 星期一, 上午 6:34 主题: Re: [PERFORM] PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory? On Sun, Feb 26, 2012 at 2:56 AM, Stefan Keller <[email protected]> wrote:> Hi Jeff and Wales,>> 2012/2/26 Jeff Janes <[email protected]> wrote:>>> The problem is that the initial queries are too slow - and there is no>>> second chance. I do have to trash the buffer every night. 
There is>>> enough main memory to hold all table contents.>>>> Just that table, or the entire database?>> The entire database consisting of only about 5\n tables which are> similar but with different geometry types plus a relations table (as> OpenStreetMap calls it).And all of those combined fit in RAM? With how much to spare?>>>> 1. How can I warm up or re-populate shared buffers of Postgres?>>>> Instead, warm the OS cache. 燭hen data will get transferred into the>> postgres shared_buffers pool from the OS cache very quickly.>>>> tar -c $PGDATA/base/ |wc -c>> Ok. So with \"OS cache\" you mean the files which to me are THE database itself?Most operating systems will use any otherwise unused RAM to cache\"recently\" accessed file-system data. That is the OS cache. Thepurpose of the tar is to populate the OS cache with the \"databaseitself\". That way, when postgres wants something that isn't alreadyin shared_buffers, it doesn't require a disk read to get\n it, just arequest to the OS.But this trick is most useful after the OS has been restarted so theOS cache is empty. If the OS has been up for a long time, then whyisn't it already populated with the data you need? Maybe the datadoesn't fit, maybe some other process has trashed the cache (in whichcase, why would it not continue to trash the cache on an ongoingbasis?)Since you just recently created the tables and indexes, they must havepassed through the OS cache on the way to disk. So why aren't theystill there? Is shared_buffers so large that little RAM is left overfor the OS? Did you reboot the OS? Are there other processes runningthat drive the database-specific files out of the OS cache?> A cache to me is a second storage with \"controlled redudancy\" because> of performance reasons.Yeah. But there are multiple caches, with\n different parties incontrol and different opinions of what is redundant.>>> 2. Are there any hints on how to tell Postgres to read in all table>>> contents into memory?>>>> I don't think so, at least not in core. 營've wondered if it would>> make sense to suppress ring-buffer strategy when there are buffers on>> the free-list. 燭hat way a sequential scan would populate>> shared_buffers after a restart. 燘ut it wouldn't help you get the>> indexes into cache.>> So, are there any developments going on with PostgreSQL as Stephen> suggested in the former thread?I don't see any active development for the upcoming release, and mostof what has been suggested wouldn't help you because they are aboutre-populating the cache with previously hot data, while you aredestroying your previously hot data and wanting to specify\n thefuture-hot data.By the way, your explain plan would be more useful if it includedbuffers. \"Explain (analyze, buffers) select...\"I don't know that it is ever better to run analyze without buffers,other than for backwards compatibility. I'm trying to get in thehabit of just automatically doing it.Cheers,Jeff-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 27 Feb 2012 04:46:38 -0800 (PST)",
"msg_from": "Wales Wang <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?5Zue5aSN77yaIFtQRVJGT1JNXSBQRyBhcyBpbi1tZW1vcnkgZGI/IEhvdyB0?=\n\t=?utf-8?B?byB3YXJtIHVwIGFuZCByZS1wb3B1bGF0ZSBidWZmZXJzPyBIb3cgdG8gcmVh?=\n\t=?utf-8?B?ZCBpbiBhbGwgdHVwbGVzIGludG8gbWVtb3J5Pw==?="
},
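A note on the message above: it says the sync script can be run hourly from cron but does not show the entry. A minimal sketch, assuming the script really is installed as /etc/init.d/ramdisk.sh and that an hourly flush is acceptable for the workload:

    # /etc/crontab -- flush the ramdisk copy back to persistent storage once an hour
    0 * * * *  root  /etc/init.d/ramdisk.sh sync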
{
"msg_contents": "Hi Wales\n\n2012/2/27 Wales Wang <[email protected]> wrote:\n> There are many approach for PostgreSQL in-memory.\n> The quick and easy way is making slave pgsql run on persistent RAM\n> filesystem, the slave is part of master/slave replication cluster.\n>\n> The fstab and script make RAM file system persistent is below:\n> Setup:\n> First, create a mountpoint for the disk :\n> mkdir /mnt/ramdisk\n> Secondly, add this line to /etc/fstab in to mount the drive at boot-time.\n> tmpfs /mnt/ramdisk tmpfs defaults,size=65536M 0 0\n> #! /bin/sh\n> # /etc/init.d/ramdisk.sh\n> #\n>\n> case \"$1\" in\n> start)\n> echo \"Copying files to ramdisk\"\n> rsync -av /data/ramdisk-backup/ /mnt/ramdisk/\n> echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched from HD >>\n> /var/log/ramdisk_sync.log\n> ;;\n> sync)\n> echo \"Synching files from ramdisk to Harddisk\"\n> echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched to HD >>\n> /var/log/ramdisk_sync.log\n> rsync -av --delete --recursive --force /mnt/ramdisk/\n> /data/ramdisk-backup/\n> ;;\n> stop)\n> echo \"Synching logfiles from ramdisk to Harddisk\"\n> echo [`date +\"%Y-%m-%d %H:%M\"`] Ramdisk Synched to HD >>\n> /var/log/ramdisk_sync.log\n> rsync -av --delete --recursive --force /mnt/ramdisk/\n> /data/ramdisk-backup/\n> ;;\n> *)\n> echo \"Usage: /etc/init.d/ramdisk {start|stop|sync}\"\n> exit 1\n> ;;\n> esac\n> exit 0\n>\n> you can run it when startup and shutdown and crontabe hoursly.\n>\n> Wales Wang\n\nThank you for the tipp.\nMaking slave pgsql run on persistent RAM filesystem is surely at least\na possibility which I'll try out.\n\nBut what I'm finally after is a solution, where records don't get\npushed back to disk a.s.a.p. but rather got hold in memory as long as\npossible assuming that there is enough memory.\nI suspect that currently there is quite some overhead because of that\n(besides disk-oriented structures).\n\n-Stefan\n",
"msg_date": "Tue, 28 Feb 2012 09:30:44 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9A_=5BPERFORM=5D_PG_as_in=2Dmemory_db=3F_How_to_w?=\n\t=?UTF-8?Q?arm_up_and_re=2Dpopulate_buffers=3F_How_to_read_in_all_tuples_in?=\n\t=?UTF-8?Q?to_memory=3F?="
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 5:30 AM, Stefan Keller <[email protected]> wrote:\n>\n> But what I'm finally after is a solution, where records don't get\n> pushed back to disk a.s.a.p. but rather got hold in memory as long as\n> possible assuming that there is enough memory.\n\nfsync = off ?\n",
"msg_date": "Tue, 28 Feb 2012 10:08:25 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9A_=5BPERFORM=5D_PG_as_in=2Dmemor?=\n\t=?UTF-8?Q?y_db=3F_How_to_warm_up_and_re=2Dpopulate_buffers=3F_How_to_read_in?=\n\t=?UTF-8?Q?_all_tuples_into_memory=3F?="
},
{
"msg_contents": "On 28 Únor 2012, 14:08, Claudio Freire wrote:\n> On Tue, Feb 28, 2012 at 5:30 AM, Stefan Keller <[email protected]> wrote:\n>>\n>> But what I'm finally after is a solution, where records don't get\n>> pushed back to disk a.s.a.p. but rather got hold in memory as long as\n>> possible assuming that there is enough memory.\n>\n> fsync = off ?\n\nI don't think this is a viable idea, unless you don't care about the data.\n\nMoreover, \"fsyn=off\" does not mean \"not writing\" and writing does not mean\n\"removing from shared buffers\". A page written/fsynced during a checkpoint\nmay stay in shared buffers.\n\nAFAIK the pages are not removed from shared buffers without a reason. So a\ndirty buffer is written to a disk (because it needs to, to keep ACID) but\nstays in shared buffers as \"clean\" (unless it was written by a backend,\nwhich means there's not enough memory).\n\nTomas\n\n",
"msg_date": "Tue, 28 Feb 2012 14:38:57 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?UmU6IFtQRVJGT1JNXSBSZTogW1BFUkZPUk1dIFJlOiDlm57lpI3vvJogW1BF?=\n\t=?utf-8?B?UkZPUk1dIFBHIGFzIGluLW1lbW9yeSBkYj8gSG93IHRvIHdhcm0gdXAgYW5k?=\n\t=?utf-8?B?IHJlLXBvcHVsYXRlIGJ1ZmZlcnM/IEhvdyB0byByZWFkIGluIGFsbCB0dXBs?=\n\t=?utf-8?B?ZXMgaW50byBtZW1vcnk/?="
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 10:38 AM, Tomas Vondra <[email protected]> wrote:\n> On 28 Únor 2012, 14:08, Claudio Freire wrote:\n>> On Tue, Feb 28, 2012 at 5:30 AM, Stefan Keller <[email protected]> wrote:\n>>>\n>>> But what I'm finally after is a solution, where records don't get\n>>> pushed back to disk a.s.a.p. but rather got hold in memory as long as\n>>> possible assuming that there is enough memory.\n>>\n>> fsync = off ?\n>\n> I don't think this is a viable idea, unless you don't care about the data.\n\nWell, if you \"keep things in memory as long as possible\" (as per the\nquoted message), then you don't care about memory. There's no way\nmemory-only DBs can provide ACID guarantees.\n\nsynchronous_commit=off goes half way there without sacrificing crash\nrecovery, which is another option.\n\n> Moreover, \"fsyn=off\" does not mean \"not writing\" and writing does not mean\n> \"removing from shared buffers\". A page written/fsynced during a checkpoint\n> may stay in shared buffers.\n\nThe OS will write in the background (provided there's enough memory,\nwhich was an assumption on the quoted message). It will not interfere\nwith other operations, so, in any case, writing or not, you get what\nyou want.\n\n> AFAIK the pages are not removed from shared buffers without a reason. So a\n> dirty buffer is written to a disk (because it needs to, to keep ACID) but\n> stays in shared buffers as \"clean\" (unless it was written by a backend,\n> which means there's not enough memory).\n\nJust writing is not enough. ACID requires fsync. If you don't fsync\n(be it with synchronous_commit=off or fsync=off), then it's not full\nACID already.\nBecause a crash at a bad moment can always make your data nonpersistent.\n\nThat's an unavoidable result of keeping things in memory.\n",
"msg_date": "Tue, 28 Feb 2012 10:52:28 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9A_=5BPERFORM=5D_?=\n\t=?UTF-8?Q?PG_as_in=2Dmemory_db=3F_How_to_warm_up_and_re=2Dpopulate_buffers=3F_?=\n\t=?UTF-8?Q?How_to_read_in_all_tuples_into_memory=3F?="
},
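For reference, the durability trade-off discussed above maps onto these postgresql.conf settings; a sketch with illustrative values, not a recommendation:

    # stop waiting for WAL flushes at commit time, but keep crash recovery intact
    synchronous_commit = off
    # fsync = off abandons crash safety entirely -- only defensible for data
    # that is rebuilt from scratch anyway
    #fsync = off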
{
"msg_contents": "On 28 Únor 2012, 14:52, Claudio Freire wrote:\n> On Tue, Feb 28, 2012 at 10:38 AM, Tomas Vondra <[email protected]> wrote:\n>> On 28 Únor 2012, 14:08, Claudio Freire wrote:\n>>> On Tue, Feb 28, 2012 at 5:30 AM, Stefan Keller <[email protected]>\n>>> wrote:\n>>>>\n>>>> But what I'm finally after is a solution, where records don't get\n>>>> pushed back to disk a.s.a.p. but rather got hold in memory as long as\n>>>> possible assuming that there is enough memory.\n>>>\n>>> fsync = off ?\n>>\n>> I don't think this is a viable idea, unless you don't care about the\n>> data.\n>\n> Well, if you \"keep things in memory as long as possible\" (as per the\n> quoted message), then you don't care about memory. There's no way\n> memory-only DBs can provide ACID guarantees.\n>\n> synchronous_commit=off goes half way there without sacrificing crash\n> recovery, which is another option.\n>\n>> Moreover, \"fsyn=off\" does not mean \"not writing\" and writing does not\n>> mean\n>> \"removing from shared buffers\". A page written/fsynced during a\n>> checkpoint\n>> may stay in shared buffers.\n>\n> The OS will write in the background (provided there's enough memory,\n> which was an assumption on the quoted message). It will not interfere\n> with other operations, so, in any case, writing or not, you get what\n> you want.\n>\n>> AFAIK the pages are not removed from shared buffers without a reason. So\n>> a\n>> dirty buffer is written to a disk (because it needs to, to keep ACID)\n>> but\n>> stays in shared buffers as \"clean\" (unless it was written by a backend,\n>> which means there's not enough memory).\n>\n> Just writing is not enough. ACID requires fsync. If you don't fsync\n> (be it with synchronous_commit=off or fsync=off), then it's not full\n> ACID already.\n> Because a crash at a bad moment can always make your data nonpersistent.\n\nI haven't said writing is sufficient for ACID, I said it's required. Which\nis kind of obvious because of the \"durability\" part.\n\n> That's an unavoidable result of keeping things in memory.\n\nWhy? IIRC the OP was interested in keeping the data in memory for querying\nand that the database is read-only after it's populated with data (once a\nday). How does writing the transactional logs / data files properly\ninterfere with that?\n\nI haven't investigated why exactly the data are not cached initially, but\nnone of the options that I can think of could be \"fixed\" by setting\n\"fsync=off\". That's something that influences writes (not read-only\ndatabase) and I don't think it influences how buffers are evicted from\nshared buffers / page cache.\n\nIt might speed up the initial load of data, but that's not what the OP was\nasking.\n\nkind regards\nTomas\n\n",
"msg_date": "Tue, 28 Feb 2012 15:15:24 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?UmU6IFtQRVJGT1JNXSBSZTogW1BFUkZPUk1dIFJlOiBbUEVSRk9STV0gUmU6?=\n\t=?utf-8?B?IOWbnuWkje+8miBbUEVSRk9STV0gUEcgYXMgaW4tbWVtb3J5IGRiPyBIb3cg?=\n\t=?utf-8?B?dG8gd2FybSB1cCBhbmQgcmUtcG9wdWxhdGUgYnVmZmVycz8gSG93IHRvIHJl?=\n\t=?utf-8?B?YWQgaW4gYWxsIHR1cGxlcyBpbnRvIG1lbW9yeT8=?="
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 11:15 AM, Tomas Vondra <[email protected]> wrote:\n> I haven't investigated why exactly the data are not cached initially, but\n> none of the options that I can think of could be \"fixed\" by setting\n> \"fsync=off\". That's something that influences writes (not read-only\n> database) and I don't think it influences how buffers are evicted from\n> shared buffers / page cache.\n>\n> It might speed up the initial load of data, but that's not what the OP was\n> asking.\n\nIt speeds a lot more than the initial load of data.\n\nAssuming the database is read-only, but not the filesystem (ie: it's\nnot a slave, in which case all this is moot, as you said, there are no\nwrites on a slave). That is, assuming this is a read-only master, then\nread-only queries don't mean read-only filesystem. Bookkeeping tasks\nlike updating catalog dbs, statistics tables, page cleanup, stuff like\nthat can actually result in writes.\n\nWrites that go through the WAL and then the filesystem.\n\nWith fsync=off, those writes happen on the background, and are carried\nout by the OS. Effectively releasing postgres from having to wait on\nthem, and, assuming there's enough RAM, merging repeated writes to the\nsame sectors in one operation in the end. For stats, bookkeeping, and\nwho knows what else, the merging would be quite effective. With enough\nRAM to hold the entire DB, the merging would effectively keep\neverything in RAM (in system buffers) until there's enough I/O\nbandwidth to transparently push that to persistent storage.\n\nIn essence, what was required, to keep everything in RAM for as much\nas possible.\n\nIt *does* in the same way affect buffer eviction - it makes eviction\n*very* quick, and re-population equally as quick, if everything fits\ninto memory.\n",
"msg_date": "Tue, 28 Feb 2012 11:24:09 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Re=3A_=E5=9B=9E=E5=A4=8D?=\n\t=?UTF-8?Q?=EF=BC=9A_=5BPERFORM=5D_PG_as_in=2Dmemory_db=3F_How_to_warm_up_and_re=2Dpopu?=\n\t=?UTF-8?Q?late_buffers=3F_How_to_read_in_all_tuples_into_memory=3F?="
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 12:30 AM, Stefan Keller <[email protected]> wrote:\n>\n> Thank you for the tipp.\n> Making slave pgsql run on persistent RAM filesystem is surely at least\n> a possibility which I'll try out.\n>\n> But what I'm finally after is a solution, where records don't get\n> pushed back to disk a.s.a.p. but rather got hold in memory as long as\n> possible assuming that there is enough memory.\n\nThat is already the case. There are two separate issues, when dirty\ndata is written to disk, and when clean data is dropped from memory.\nThe only connection between them is that dirty data can't just be\ndropped, it must be written first. But have written it, there is no\nreason to immediately drop it. When a checkpoint cleans data from the\nshard_buffers, that now-clean data remains in shared_buffers. And at\nthe OS level, when an fsync forces dirty data out to disk, the\nnow-clean data generally remains in cache (although I've seen nfs\nimplementations where that was not the case).\n\nIt is hard to figure out what problem you are facing. Is your data\nnot getting loaded into cache, or is it not staying there?\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 28 Feb 2012 07:14:46 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9A_=5BPERFORM=5D_PG_as_in=2Dmemory_db=3F_How_to_w?=\n\t=?UTF-8?Q?arm_up_and_re=2Dpopulate_buffers=3F_How_to_read_in_all_tuples_in?=\n\t=?UTF-8?Q?to_memory=3F?="
},
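One way to answer "is it not getting loaded, or not staying?" directly is the contrib module pg_buffercache, which exposes what currently sits in shared_buffers. A sketch (on pre-9.1 releases the module is installed from its SQL script rather than CREATE EXTENSION):

    CREATE EXTENSION pg_buffercache;

    -- buffers held per relation of the current database, largest first
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;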
{
"msg_contents": "On Sun, Feb 26, 2012 at 12:37 PM, Stefan Keller <[email protected]> wrote:\n> 2012/2/26 Andy Colson <[email protected]> wrote:\n>>>> How about after you load the data, vacuum freeze it, then do something\n>>>> like:\n>>>>\n>>>> SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>junk'\n>>>>\n>>>> -Andy\n>>>\n>>>\n>>> That good idea is what I proposed elsewhere on one of the PG lists and\n>>> got told that this does'nt help.\n>>>\n> ...\n>> I don't buy that. Did you test it? Who/where did you hear this? And...\n>> how long does it take after you replace the entire table until things are\n>> good and cached? One or two queries?\n>>\n>> After a complete reload of the data, do you vacuum freeze it?\n>\n> Yes.\n>\n>> After a complete reload of the data, how long until its fast?\n>\n> Just after the second query. You can try it yourself online here:\n> http://bit.ly/A8duyB\n\nThe second instance of the exact same query is fast. How long until\nall similar but not identical queries are fast?\n\nCheers,\n\nJeff\n",
"msg_date": "Tue, 28 Feb 2012 07:22:27 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On 28 Únor 2012, 15:24, Claudio Freire wrote:\n> On Tue, Feb 28, 2012 at 11:15 AM, Tomas Vondra <[email protected]> wrote:\n>> I haven't investigated why exactly the data are not cached initially,\n>> but\n>> none of the options that I can think of could be \"fixed\" by setting\n>> \"fsync=off\". That's something that influences writes (not read-only\n>> database) and I don't think it influences how buffers are evicted from\n>> shared buffers / page cache.\n>>\n>> It might speed up the initial load of data, but that's not what the OP\n>> was\n>> asking.\n>\n> It speeds a lot more than the initial load of data.\n>\n> Assuming the database is read-only, but not the filesystem (ie: it's\n> not a slave, in which case all this is moot, as you said, there are no\n> writes on a slave). That is, assuming this is a read-only master, then\n> read-only queries don't mean read-only filesystem. Bookkeeping tasks\n> like updating catalog dbs, statistics tables, page cleanup, stuff like\n> that can actually result in writes.\n>\n> Writes that go through the WAL and then the filesystem.\n\nI'm not sure what maintenance tasks you mean. Sure, there are tasks that\nneed to be performed after the load (stats, hint bits, updating system\ncatalogs etc.) but this may happen once right after the load and then\nthere's effectively zero write activity. Unless the database needs to\nwrite temp files, but that contradicts the 'fits into RAM' assumption ...\n\n> With fsync=off, those writes happen on the background, and are carried\n> out by the OS. Effectively releasing postgres from having to wait on\n> them, and, assuming there's enough RAM, merging repeated writes to the\n> same sectors in one operation in the end. For stats, bookkeeping, and\n> who knows what else, the merging would be quite effective. With enough\n> RAM to hold the entire DB, the merging would effectively keep\n> everything in RAM (in system buffers) until there's enough I/O\n> bandwidth to transparently push that to persistent storage.\n\nThe writes are always carried out by the OS - except when dirty_ratio is\nexceeded (but that's a different story) and WAL with direct I/O enabled.\nThe best way to allow merging the writes in shared buffers or page cache\nis to set the checkpoint_segments / checkpoint_timeout high enough.\n\nThat way the transactions won't need to wait for writes to data files\n(which is the part related to evictions of buffers from cache). And\nread-only transactions won't need to wait at all because they don't need\nto wait for fsync on WAL.\n\n> In essence, what was required, to keep everything in RAM for as much\n> as possible.\n>\n> It *does* in the same way affect buffer eviction - it makes eviction\n> *very* quick, and re-population equally as quick, if everything fits\n> into memory.\n\nNo it doesn't. Only a write caused by a background process (due to full\nshared buffers) means immediate eviction. A simple write (caused by a\ncheckpoint) does not evict the page from shared buffers. Not even a\nbackground writer evicts a page from shared buffers, it merely marks them\nas 'clean' and leaves them there. And all those writes happen on the\nbackground, so the clients don't need to wait for them to complete (except\nfor xlog checkpoints).\n\nkind regards\nTomas\n\n",
"msg_date": "Tue, 28 Feb 2012 17:05:08 +0100",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?UmU6IFtQRVJGT1JNXSBSZTogW1BFUkZPUk1dIFJlOiBbUEVSRk9STV0gUmU6?=\n\t=?utf-8?B?IFtQRVJGT1JNXSBSZTog5Zue5aSN77yaIFtQRVJGT1JNXSBQRyBhcyBpbi1t?=\n\t=?utf-8?B?ZW1vcnkgZGI/IEhvdyB0byB3YXJtIHVwIGFuZCByZS1wb3B1bGF0ZSBidWZm?=\n\t=?utf-8?B?ZXJzPyBIb3cgdG8gcmVhZCBpbiBhbGwgdHVwbGVzIGludG8gbWVtb3J5Pw==?="
},
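A sketch of the checkpoint settings being referred to above (postgresql.conf, values purely illustrative):

    checkpoint_segments = 64            # fewer, larger checkpoints
    checkpoint_timeout = 30min
    checkpoint_completion_target = 0.9  # spread the data-file writes out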
{
"msg_contents": "On Tue, Feb 28, 2012 at 1:05 PM, Tomas Vondra <[email protected]> wrote:\n> On 28 Únor 2012, 15:24, Claudio Freire wrote:\n>> It speeds a lot more than the initial load of data.\n>>\n>> Assuming the database is read-only, but not the filesystem (ie: it's\n>> not a slave, in which case all this is moot, as you said, there are no\n>> writes on a slave). That is, assuming this is a read-only master, then\n>> read-only queries don't mean read-only filesystem. Bookkeeping tasks\n>> like updating catalog dbs, statistics tables, page cleanup, stuff like\n>> that can actually result in writes.\n>>\n>> Writes that go through the WAL and then the filesystem.\n>\n> I'm not sure what maintenance tasks you mean. Sure, there are tasks that\n> need to be performed after the load (stats, hint bits, updating system\n> catalogs etc.) but this may happen once right after the load and then\n> there's effectively zero write activity. Unless the database needs to\n> write temp files, but that contradicts the 'fits into RAM' assumption ...\n\nAFAIK, stats need to be constantly updated.\nNot sure about the rest.\n\nAnd yes, it's quite possible to require temp files without a database\nthat doesn't fit in memory, only big OLAP-style queries and small\nenough work_mem.\n\n> The writes are always carried out by the OS - except when dirty_ratio is\n> exceeded (but that's a different story) and WAL with direct I/O enabled.\n> The best way to allow merging the writes in shared buffers or page cache\n> is to set the checkpoint_segments / checkpoint_timeout high enough.\n> That way the transactions won't need to wait for writes to data files\n> (which is the part related to evictions of buffers from cache). And\n> read-only transactions won't need to wait at all because they don't need\n> to wait for fsync on WAL.\n\nExactly\n\n>> In essence, what was required, to keep everything in RAM for as much\n>> as possible.\n>>\n>> It *does* in the same way affect buffer eviction - it makes eviction\n>> *very* quick, and re-population equally as quick, if everything fits\n>> into memory.\n>\n> No it doesn't. Only a write caused by a background process (due to full\n> shared buffers) means immediate eviction. A simple write (caused by a\n> checkpoint) does not evict the page from shared buffers. Not even a\n> background writer evicts a page from shared buffers, it merely marks them\n> as 'clean' and leaves them there. And all those writes happen on the\n> background, so the clients don't need to wait for them to complete (except\n> for xlog checkpoints).\n\nSo, we're saying the same.\n\nWith all that, and enough RAM, it already does what was requested.\n\nMaybe it would help to tune shared_buffers-to-os-cache ratio, and\ndirty_ratio to allow a big portion of RAM used for write caching (if\nthere were enough writes which I doubt), but, in essence, un\nunmodified postgres installation with enough RAM to hold the whole DB\n+ shared buffers in RAM should perform quite optimally.\n",
"msg_date": "Tue, 28 Feb 2012 13:42:11 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFO?=\n\t=?UTF-8?Q?RM=5D_Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9A_=5BPERFORM=5D_PG_as_in=2Dmemory_db=3F_How_to_warm_?=\n\t=?UTF-8?Q?up_and_re=2Dpopulate_buffers=3F_How_to_read_in_all_tuples_into_m?=\n\t=?UTF-8?Q?emory=3F?="
},
{
"msg_contents": "Hi\n\n2012/2/28 Jeff Janes <[email protected]> wrote:\n> It is hard to figure out what problem you are facing. Is your data\n> not getting loaded into cache, or is it not staying there?\n\nOne could say both:\nI'd like to warm up the cache befor hand in order to speed up the\nfirst query right away.\nAnd it's not staying there because when there comes a second slightly\ndifferent query it's slow again and I would expect that the tuples of\nthat table stay.\n\n>> Just after the second query. You can try it yourself online here:\n>> http://bit.ly/A8duyB\n\nI should have said after the first query.\n\n> The second instance of the exact same query is fast.\n\nRight.\n\n> How long until all similar but not identical queries are fast?\n\nGood question. Can't tell for sure because it not so easy to make it repeatable.\nI tested the following:\n\nSELECT count(*) FROM osm_point WHERE tags @> 'amenity=>restaurant'\n\nSELECT count(*) FROM osm_point WHERE tags @> 'cuisine=>pizza'\n\nSELECT count(*) FROM osm_point WHERE tags @> 'tourism=>hotel'\n\nSELECT count(*) FROM osm_point WHERE tags @> 'historic=>castle'\n\nSELECT count(*) FROM osm_point WHERE tags @> 'natural=>peak'\nAND to_number(ele, '9999') >= 4000\n\nI would say that after the 4th query it remains fast (meaning less\nthan a second).\n\n-Stefan\n\nP.S. And yes, the database is aka 'read-only' and truncated and\nre-populated from scratch every night. fsync is off so I don't care\nabout ACID. After the indexes on name, hstore and geometry are\ngenerated I do a VACUUM FULL FREEZE. The current installation is a\nvirtual machine with 4GB memory and the filesystem is \"read/write\".\nThe future machine will be a pizza box with 72GB memory.\n",
"msg_date": "Tue, 28 Feb 2012 21:48:45 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
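For what it's worth, later PostgreSQL releases (9.4 and up, i.e. after this thread) ship a contrib module, pg_prewarm, that loads a relation into shared_buffers on demand -- essentially the warm-up being asked for above. A sketch; the index name is hypothetical:

    CREATE EXTENSION pg_prewarm;
    SELECT pg_prewarm('osm_point');           -- the heap
    SELECT pg_prewarm('osm_point_tags_idx');  -- the hstore index, name assumed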
{
"msg_contents": "On Tue, Feb 28, 2012 at 5:48 PM, Stefan Keller <[email protected]> wrote:\n> P.S. And yes, the database is aka 'read-only' and truncated and\n> re-populated from scratch every night. fsync is off so I don't care\n> about ACID. After the indexes on name, hstore and geometry are\n> generated I do a VACUUM FULL FREEZE. The current installation is a\n> virtual machine with 4GB memory and the filesystem is \"read/write\".\n> The future machine will be a pizza box with 72GB memory.\n\nI don't get this. Something's wrong.\n\nIn the OP, you say \"There is enough main memory to hold all table\ncontents.\". I'm assuming, there you refer to your current system, with\n4GB memory.\n\nSo your data is less than 4GB, but then you'll be throwing a 72GB\nserver? It's either tremendous overkill, or your data simply isn't\nless than 4GB.\n\nIt's quite possible the vacuum full is thrashing your disk cache due\nto maintainance_work_mem. You can overcome this issue with the tar\ntrick, which is more easily performed as:\n\ntar cf /dev/null $PG_DATA/base\n\ntar will read all the table's contents and populate the OS cache. From\nthere to shared_buffers it should be very very quick. If it is true\nthat your data fits in 4GB, then that should fix it all. Beware,\nwhatever you allocate to shared buffers will be redundantly loaded\ninto RAM, first in shared buffers, then in the OS cache. So your data\nhas to fit in 4GB - shared buffers.\n\nI don't think query-based tricks will load everything into RAM,\nbecause you will get sequential scans and not index scans - the\nindices will remain uncached. If you forced an index scan, it would\nhave to read the whole index in random order (random I/O), and that\nwould be horribly slow. The best way is to tar the whole database into\n/dev/null and be done with it.\n\nAnother option is to issue a simple vacuum after the vacuum full.\nSimple vacuum will just scan the tables and indices, I'm hoping doing\nnothing since the vacuum full will have cleaned everything already,\nbut loading everything both in the OS cache and into shared_buffers.\n",
"msg_date": "Tue, 28 Feb 2012 19:41:34 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "2012/2/28 Claudio Freire <[email protected]>:\n> On Tue, Feb 28, 2012 at 5:48 PM, Stefan Keller <[email protected]> wrote:\n>> P.S. And yes, the database is aka 'read-only' and truncated and\n>> re-populated from scratch every night. fsync is off so I don't care\n>> about ACID. After the indexes on name, hstore and geometry are\n>> generated I do a VACUUM FULL FREEZE. The current installation is a\n>> virtual machine with 4GB memory and the filesystem is \"read/write\".\n>> The future machine will be a pizza box with 72GB memory.\n>\n> I don't get this. Something's wrong.\n>\n> In the OP, you say \"There is enough main memory to hold all table\n> contents.\". I'm assuming, there you refer to your current system, with\n> 4GB memory.\n\nSorry for the confusion: I'm doing these tests on this machine with\none table (osm_point) and one country. This table has a size of 2.6GB\nand 10 million tuples. The other machine has to deal with at least 5\ntables in total and will be hold more than one country plus routing\netc..\n\n-Stefan\n",
"msg_date": "Wed, 29 Feb 2012 00:46:40 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 2:41 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Feb 28, 2012 at 5:48 PM, Stefan Keller <[email protected]> wrote:\n>> P.S. And yes, the database is aka 'read-only' and truncated and\n>> re-populated from scratch every night. fsync is off so I don't care\n>> about ACID. After the indexes on name, hstore and geometry are\n>> generated I do a VACUUM FULL FREEZE. The current installation is a\n>> virtual machine with 4GB memory and the filesystem is \"read/write\".\n>> The future machine will be a pizza box with 72GB memory.\n>\n> I don't get this. Something's wrong.\n>\n> In the OP, you say \"There is enough main memory to hold all table\n> contents.\". I'm assuming, there you refer to your current system, with\n> 4GB memory.\n>\n> So your data is less than 4GB, but then you'll be throwing a 72GB\n> server? It's either tremendous overkill, or your data simply isn't\n> less than 4GB.\n>\n> It's quite possible the vacuum full is thrashing your disk cache due\n> to maintainance_work_mem. You can overcome this issue with the tar\n> trick, which is more easily performed as:\n>\n> tar cf /dev/null $PG_DATA/base\n\nBut on many implementations, that will not work. tar detects the\noutput is going to the bit bucket, and so doesn't bother to actually\nread the data.\n\n...\n>\n> Another option is to issue a simple vacuum after the vacuum full.\n> Simple vacuum will just scan the tables and indices, I'm hoping doing\n> nothing since the vacuum full will have cleaned everything already,\n> but loading everything both in the OS cache and into shared_buffers.\n\nDoesn't it use a ring buffer strategy, so it would load to OS, but\nprobably not to shared_buffers?\n\nCheers,\n\nJeff\n",
"msg_date": "Wed, 29 Feb 2012 07:16:09 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Wed, Feb 29, 2012 at 12:16 PM, Jeff Janes <[email protected]> wrote:\n> But on many implementations, that will not work. tar detects the\n> output is going to the bit bucket, and so doesn't bother to actually\n> read the data.\n\nReally? Getting smart on us?\n\nShame on it. Who asked it to be smart?\n",
"msg_date": "Wed, 29 Feb 2012 12:18:12 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "2012/2/29 Jeff Janes <[email protected]>:\n>> It's quite possible the vacuum full is thrashing your disk cache due\n>> to maintainance_work_mem. You can overcome this issue with the tar\n>> trick, which is more easily performed as:\n>>\n>> tar cf /dev/null $PG_DATA/base\n>\n> But on many implementations, that will not work. tar detects the\n> output is going to the bit bucket, and so doesn't bother to actually\n> read the data.\n\nRight.\nBut what about the commands cp $PG_DATA/base /dev/null or cat\n$PG_DATA/base > /dev/null ?\nThey seem to do something.\n\n-Stefan\n",
"msg_date": "Wed, 29 Feb 2012 16:24:15 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "2012/2/29 Stefan Keller <[email protected]>:\n> 2012/2/29 Jeff Janes <[email protected]>:\n>>> It's quite possible the vacuum full is thrashing your disk cache due\n>>> to maintainance_work_mem. You can overcome this issue with the tar\n>>> trick, which is more easily performed as:\n>>>\n>>> tar cf /dev/null $PG_DATA/base\n>>\n>> But on many implementations, that will not work. tar detects the\n>> output is going to the bit bucket, and so doesn't bother to actually\n>> read the data.\n>\n> Right.\n> But what about the commands cp $PG_DATA/base /dev/null or cat\n> $PG_DATA/base > /dev/null ?\n> They seem to do something.\n\n...or let's try /dev/zero instead /dev/null:\ntar cf /dev/zero $PG_DATA/base\n\n-Stefan\n",
"msg_date": "Wed, 29 Feb 2012 16:28:25 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 12:48 PM, Stefan Keller <[email protected]> wrote:\n> Hi\n>\n> 2012/2/28 Jeff Janes <[email protected]> wrote:\n>> It is hard to figure out what problem you are facing. Is your data\n>> not getting loaded into cache, or is it not staying there?\n>\n> One could say both:\n> I'd like to warm up the cache befor hand in order to speed up the\n> first query right away.\n> And it's not staying there because when there comes a second slightly\n> different query it's slow again and I would expect that the tuples of\n> that table stay.\n\nOnly the pages needed for a given query are loaded in the first place.\n So even if they do stay, a new query that needs different pages\n(because it accesses a different part of the index, and of the table)\nwon't find them already loaded, except by accident.\n\n>\n>>> Just after the second query. You can try it yourself online here:\n>>> http://bit.ly/A8duyB\n>\n> I should have said after the first query.\n>\n>> The second instance of the exact same query is fast.\n>\n> Right.\n>\n>> How long until all similar but not identical queries are fast?\n>\n> Good question. Can't tell for sure because it not so easy to make it repeatable.\n> I tested the following:\n>\n> SELECT count(*) FROM osm_point WHERE tags @> 'amenity=>restaurant'\n>\n> SELECT count(*) FROM osm_point WHERE tags @> 'cuisine=>pizza'\n>\n> SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>hotel'\n>\n> SELECT count(*) FROM osm_point WHERE tags @> 'historic=>castle'\n>\n> SELECT count(*) FROM osm_point WHERE tags @> 'natural=>peak'\n> AND to_number(ele, '9999') >= 4000\n>\n> I would say that after the 4th query it remains fast (meaning less\n> than a second).\n\nHmm. I ran out of example queries before they started being fast.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 1 Mar 2012 08:34:32 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 3:46 PM, Stefan Keller <[email protected]> wrote:\n> 2012/2/28 Claudio Freire <[email protected]>:\n>>\n>> In the OP, you say \"There is enough main memory to hold all table\n>> contents.\". I'm assuming, there you refer to your current system, with\n>> 4GB memory.\n>\n> Sorry for the confusion: I'm doing these tests on this machine with\n> one table (osm_point) and one country. This table has a size of 2.6GB\n> and 10 million tuples. The other machine has to deal with at least 5\n> tables in total and will be hold more than one country plus routing\n> etc..\n\nWhat is your shared_buffers set to? 2.6GB is uncomfortably close to\n4GB, considering the computer has other things it needs to use memory\nfor as well.\n\nA problem is that often the shared_buffers and the OS cache end up\nbeing basically copies of one another, rather than complementing each\nother. So on read-only applications, the actually useful size of the\ntotal cache turns out to be max(shared_buffers, RAM - 2*shared_buffers\n- unknown_overhead).\n\nSo one choice is setting shared_buffers low (<0.5GB) and let the OS\ncache be your main cache. Advantages of this are that the OS cache\nsurvives PG server restarts, gets populated even by sequential scans,\nand can be pre-warmed by the tar trick. Disadvantages are that pages\ncan be driven out of the OS cache by non-PG related activity, which\ncan be hard to monitor and control. Also, there is some small cost to\nconstantly transferring data from OS cache to PG cache, but in your\ncase I htink that would be negligible.\n\nThe other choice is setting shared_buffers high (>3GB) and having it\nbe your main cache. The advantage is that non-PG activity generally\nwon't drive it out. The disadvantages are that it is hard to\npre-populate as the tar trick won't work, and neither will sequential\nscans on tables due to the ring buffer.\n\nActually, the tar trick might work somewhat if applied either shortly\nbefore or shortly after the database is started. If the database\nstarts out not using its full allotment of memory, the OS will use it\nfor cache, and you can pre-populate that cache. Then as the database\nruns, the PG cache gets larger by copying needed data from the OS\ncache into it. As the PG cache grows, pages need to get evicted from\nOS cache to make room for it. Ideally, the pages evicted from the OS\ncache would be the ones just copied into PG, but the kernel is not\naware of that. So the whole thing is rather sub-optimal.\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 1 Mar 2012 08:57:44 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
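A sketch of the first option described above (small shared_buffers, OS cache as the main cache) for a 4GB box, with purely illustrative numbers:

    # postgresql.conf
    shared_buffers = 512MB        # small PG cache, let the OS page cache do the bulk
    effective_cache_size = 3GB    # tell the planner roughly what the OS can cache

    # after an OS restart, pre-warm the OS cache (piping to wc forces a real read)
    tar -c $PGDATA/base/ | wc -c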
{
"msg_contents": "On Thu, Mar 1, 2012 at 9:57 AM, Jeff Janes <[email protected]> wrote:\n> On Tue, Feb 28, 2012 at 3:46 PM, Stefan Keller <[email protected]> wrote:\n>> 2012/2/28 Claudio Freire <[email protected]>:\n>>>\n>>> In the OP, you say \"There is enough main memory to hold all table\n>>> contents.\". I'm assuming, there you refer to your current system, with\n>>> 4GB memory.\n>>\n>> Sorry for the confusion: I'm doing these tests on this machine with\n>> one table (osm_point) and one country. This table has a size of 2.6GB\n>> and 10 million tuples. The other machine has to deal with at least 5\n>> tables in total and will be hold more than one country plus routing\n>> etc..\n>\n> What is your shared_buffers set to? 2.6GB is uncomfortably close to\n> 4GB, considering the computer has other things it needs to use memory\n> for as well.\n\nThe real danger here is that the kernel will happily swap ut\nshared_buffers memory to make room to cache more from the hard disks,\nespecially if that shared_mem hasn't been touched in a while. On a\nstock kernel with swappinness of 60 etc, it's quite likely the OP is\nseeing the DB go to get data from shared_buffers, and the OS is\nactually paging in for shared_buffers. At that point reading from\nkernel cache is MUCH faster, and reading from the HDs is still\nprobably faster than swapping in shared_buffers.\n",
"msg_date": "Thu, 1 Mar 2012 10:35:23 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
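The swappiness value mentioned above can be inspected and lowered as follows (whether lowering it is appropriate depends on the rest of the workload):

    cat /proc/sys/vm/swappiness                    # stock kernels usually report 60
    sysctl -w vm.swappiness=1                      # make the kernel much less eager to swap
    echo 'vm.swappiness = 1' >> /etc/sysctl.conf   # persist across reboots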
{
"msg_contents": "Just curious ... has anyone tried using a ram disk as the PG primary and\nDRBD as the means to make it persistent?\nOn Mar 1, 2012 11:35 AM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> On Thu, Mar 1, 2012 at 9:57 AM, Jeff Janes <[email protected]> wrote:\n> > On Tue, Feb 28, 2012 at 3:46 PM, Stefan Keller <[email protected]>\n> wrote:\n> >> 2012/2/28 Claudio Freire <[email protected]>:\n> >>>\n> >>> In the OP, you say \"There is enough main memory to hold all table\n> >>> contents.\". I'm assuming, there you refer to your current system, with\n> >>> 4GB memory.\n> >>\n> >> Sorry for the confusion: I'm doing these tests on this machine with\n> >> one table (osm_point) and one country. This table has a size of 2.6GB\n> >> and 10 million tuples. The other machine has to deal with at least 5\n> >> tables in total and will be hold more than one country plus routing\n> >> etc..\n> >\n> > What is your shared_buffers set to? 2.6GB is uncomfortably close to\n> > 4GB, considering the computer has other things it needs to use memory\n> > for as well.\n>\n> The real danger here is that the kernel will happily swap ut\n> shared_buffers memory to make room to cache more from the hard disks,\n> especially if that shared_mem hasn't been touched in a while. On a\n> stock kernel with swappinness of 60 etc, it's quite likely the OP is\n> seeing the DB go to get data from shared_buffers, and the OS is\n> actually paging in for shared_buffers. At that point reading from\n> kernel cache is MUCH faster, and reading from the HDs is still\n> probably faster than swapping in shared_buffers.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nJust curious ... has anyone tried using a ram disk as the PG primary and DRBD as the means to make it persistent?\nOn Mar 1, 2012 11:35 AM, \"Scott Marlowe\" <[email protected]> wrote:\nOn Thu, Mar 1, 2012 at 9:57 AM, Jeff Janes <[email protected]> wrote:\n> On Tue, Feb 28, 2012 at 3:46 PM, Stefan Keller <[email protected]> wrote:\n>> 2012/2/28 Claudio Freire <[email protected]>:\n>>>\n>>> In the OP, you say \"There is enough main memory to hold all table\n>>> contents.\". I'm assuming, there you refer to your current system, with\n>>> 4GB memory.\n>>\n>> Sorry for the confusion: I'm doing these tests on this machine with\n>> one table (osm_point) and one country. This table has a size of 2.6GB\n>> and 10 million tuples. The other machine has to deal with at least 5\n>> tables in total and will be hold more than one country plus routing\n>> etc..\n>\n> What is your shared_buffers set to? 2.6GB is uncomfortably close to\n> 4GB, considering the computer has other things it needs to use memory\n> for as well.\n\nThe real danger here is that the kernel will happily swap ut\nshared_buffers memory to make room to cache more from the hard disks,\nespecially if that shared_mem hasn't been touched in a while. On a\nstock kernel with swappinness of 60 etc, it's quite likely the OP is\nseeing the DB go to get data from shared_buffers, and the OS is\nactually paging in for shared_buffers. At that point reading from\nkernel cache is MUCH faster, and reading from the HDs is still\nprobably faster than swapping in shared_buffers.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 1 Mar 2012 11:38:39 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "2012/3/1 Jeff Janes <[email protected]>:\n> On Tue, Feb 28, 2012 at 3:46 PM, Stefan Keller <[email protected]> wrote:\n>> 2012/2/28 Claudio Freire <[email protected]>:\n>>>\n>>> In the OP, you say \"There is enough main memory to hold all table\n>>> contents.\". I'm assuming, there you refer to your current system, with\n>>> 4GB memory.\n>>\n>> Sorry for the confusion: I'm doing these tests on this machine with\n>> one table (osm_point) and one country. This table has a size of 2.6GB\n>> and 10 million tuples. The other machine has to deal with at least 5\n>> tables in total and will be hold more than one country plus routing\n>> etc..\n>\n> What is your shared_buffers set to? 2.6GB is uncomfortably close to\n> 4GB, considering the computer has other things it needs to use memory\n> for as well.\n\nThese are the current modified settings in postgresql.conf:\nshared_buffers = 128MB\nwork_mem = 3MB\nmaintenance_work_mem = 30MB\neffective_cache_size = 352MB\nwal_buffers = 8MB\ndefault_statistics_target = 50\nconstraint_exclusion = on\ncheckpoint_completion_target = 0.9\ncheckpoint_segments = 16\nmax_connections = 80\n\n-Stefan\n",
"msg_date": "Thu, 1 Mar 2012 23:52:48 +0100",
"msg_from": "Stefan Keller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "\n\nOn 03/01/2012 05:52 PM, Stefan Keller wrote:\n> These are the current modified settings in postgresql.conf:\n> shared_buffers = 128MB\n> work_mem = 3MB\n\nThese are extremely low settings on virtually any modern computer. I \nusually look to set shared buffers in numbers of Gb and work_mem at \nleast in tens if not hundreds of Mb for any significantly sized database.\n\ncheers\n\nandrew\n",
"msg_date": "Thu, 01 Mar 2012 18:08:19 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 8:08 PM, Andrew Dunstan <[email protected]> wrote:\n> These are extremely low settings on virtually any modern computer. I usually\n> look to set shared buffers in numbers of Gb and work_mem at least in tens if\n> not hundreds of Mb for any significantly sized database.\n\nFor a read-only database, as was discussed, a lower shared_buffers\nsettings makes sense. And 128M is low enough, I'd guess.\n\nSetting work_mem to hundreds of MB in a 4G system is suicide. Tens\neven is dangerous.\n",
"msg_date": "Thu, 1 Mar 2012 21:23:20 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 4:23 PM, Claudio Freire <[email protected]> wrote:\n> For a read-only database, as was discussed, a lower shared_buffers\n> settings makes sense. And 128M is low enough, I'd guess.\n>\n> Setting work_mem to hundreds of MB in a 4G system is suicide. Tens\n> even is dangerous.\n>\n\nWhy do you say that? We've had work_mem happily at 100MB for years. Is\nthere a particular degenerate case you're concerned about?\n\n-p\n\n-- \nPeter van Hardenberg\nSan Francisco, California\n\"Everything was beautiful, and nothing hurt.\" -- Kurt Vonnegut\n",
"msg_date": "Thu, 1 Mar 2012 16:28:02 -0800",
"msg_from": "Peter van Hardenberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Wed, Feb 29, 2012 at 7:28 AM, Stefan Keller <[email protected]> wrote:\n> 2012/2/29 Stefan Keller <[email protected]>:\n>> 2012/2/29 Jeff Janes <[email protected]>:\n>>>> It's quite possible the vacuum full is thrashing your disk cache due\n>>>> to maintainance_work_mem. You can overcome this issue with the tar\n>>>> trick, which is more easily performed as:\n>>>>\n>>>> tar cf /dev/null $PG_DATA/base\n>>>\n>>> But on many implementations, that will not work. tar detects the\n>>> output is going to the bit bucket, and so doesn't bother to actually\n>>> read the data.\n>>\n>> Right.\n>> But what about the commands cp $PG_DATA/base /dev/null or cat\n>> $PG_DATA/base > /dev/null ?\n>> They seem to do something.\n\nFor me they both give errors, because neither of them works on an\ndirectory rather than ordinary files.\n\n>\n> ...or let's try /dev/zero instead /dev/null:\n> tar cf /dev/zero $PG_DATA/base\n\nThat does seem to work.\n\nSo, does it solve your problem?\n\nCheers,\n\nJeff\n",
"msg_date": "Thu, 1 Mar 2012 16:35:23 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
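Where reading the data directory from the shell is awkward, a rough SQL-only alternative is to force a full sequential read of the table in question; this is only a sketch against the osm_point table discussed earlier in the thread, it warms the heap but not the indexes, and newer PostgreSQL releases ship the pg_prewarm extension for the same job:

-- The count is thrown away; the side effect of pulling every heap page
-- of osm_point through the OS page cache is the point.
SELECT count(*) FROM osm_point;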
{
"msg_contents": "On Thu, Mar 1, 2012 at 9:28 PM, Peter van Hardenberg <[email protected]> wrote:\n>> Setting work_mem to hundreds of MB in a 4G system is suicide. Tens\n>> even is dangerous.\n>>\n>\n> Why do you say that? We've had work_mem happily at 100MB for years. Is\n> there a particular degenerate case you're concerned about?\n\nMe too.\n\nBut I've analyzed the queries I'll be sending to the database and I've\ncarefully bound the effective amount of memory used given the load\nI'll be experiencing.\n\nSaying that it should be set to 100M without consideration for those\nmatters is the suicide part. work_mem applies to each sort operation.\nSuppose, just for the sake of argument, that each connection is\nperforming 5 such sorts (ie, 5 joins of big tables - not unthinkable),\nthen suppose you have your max_connections to the default of 100, then\nthe system could request as much as 50G of ram.\n\nI set work_mem higher in my database system since I *know* most of the\nconnections will not perform any merge or hash joins, nor will they\nsort the output, so they won't use work_mem even once. The ones that\nwill, I have limited on the application side to a handful, hence I\n*know* that 50G theoretical maximum will not be reached.\n\nCan the OP say that? I have no reason to think so. Hence I don't\nsuggest 100M is OK on a 4G system.\n",
"msg_date": "Thu, 1 Mar 2012 21:58:17 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
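A minimal sketch of the worst-case arithmetic Claudio describes above, together with the usual way of confining a large work_mem to the sessions or transactions that actually need it; the 100MB figure and the placeholder query are illustrative only:

-- work_mem is a per-sort/per-hash limit, not a per-connection limit:
--   100 connections * 5 concurrent sorts each * 100MB = ~50GB requested
-- on a machine with 4GB of RAM.

-- Raise it only in the session that runs the heavy reports:
SET work_mem = '100MB';

-- Or scope it to a single transaction so it cannot leak elsewhere:
BEGIN;
SET LOCAL work_mem = '100MB';
-- ... the one big sorting/aggregating query goes here ...
COMMIT;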
{
"msg_contents": "On 28.2.2012 17:42, Claudio Freire wrote:\n> On Tue, Feb 28, 2012 at 1:05 PM, Tomas Vondra <[email protected]> wrote:\n>> On 28 Únor 2012, 15:24, Claudio Freire wrote:\n>>> It speeds a lot more than the initial load of data.\n>>>\n>>> Assuming the database is read-only, but not the filesystem (ie: it's\n>>> not a slave, in which case all this is moot, as you said, there are no\n>>> writes on a slave). That is, assuming this is a read-only master, then\n>>> read-only queries don't mean read-only filesystem. Bookkeeping tasks\n>>> like updating catalog dbs, statistics tables, page cleanup, stuff like\n>>> that can actually result in writes.\n>>>\n>>> Writes that go through the WAL and then the filesystem.\n>>\n>> I'm not sure what maintenance tasks you mean. Sure, there are tasks that\n>> need to be performed after the load (stats, hint bits, updating system\n>> catalogs etc.) but this may happen once right after the load and then\n>> there's effectively zero write activity. Unless the database needs to\n>> write temp files, but that contradicts the 'fits into RAM' assumption ...\n> \n> AFAIK, stats need to be constantly updated.\n\nErr, what kind of stats are we talking about? Statistics capturing\ncharacteristics of the data or runtime stats? There's no point in\nupdating data stats (histograms, MCV, ...) for read-only data and\nPostgreSQL doesn't do that.\n\nRuntime stats OTOH are collected and written continuously, that's true.\nBut in most cases this is not a write-heavy task, and if it is then it's\nrecommended to place the pg_stat_tmp on ramdrive (it's usually just few\nMBs, written repeatedly).\n\n> Not sure about the rest.\n\nAFAIK it's like this:\n\n updating catalog tables - no updates on read-only data\n\n updating statistics - data stats: no, runtime stats: yes\n\n page cleanup - no (just once after the load)\n\n> And yes, it's quite possible to require temp files without a database\n> that doesn't fit in memory, only big OLAP-style queries and small\n> enough work_mem.\n\nRight. I'm not exactly sure how I arrived to the crazy conclusion that\nwriting temp files somehow contradicts the 'fits into RAM' assumption.\nThat's clearly nonsense ...\n\n> \n>> The writes are always carried out by the OS - except when dirty_ratio is\n>> exceeded (but that's a different story) and WAL with direct I/O enabled.\n>> The best way to allow merging the writes in shared buffers or page cache\n>> is to set the checkpoint_segments / checkpoint_timeout high enough.\n>> That way the transactions won't need to wait for writes to data files\n>> (which is the part related to evictions of buffers from cache). And\n>> read-only transactions won't need to wait at all because they don't need\n>> to wait for fsync on WAL.\n> \n> Exactly\n> \n>>> In essence, what was required, to keep everything in RAM for as much\n>>> as possible.\n>>>\n>>> It *does* in the same way affect buffer eviction - it makes eviction\n>>> *very* quick, and re-population equally as quick, if everything fits\n>>> into memory.\n>>\n>> No it doesn't. Only a write caused by a background process (due to full\n>> shared buffers) means immediate eviction. A simple write (caused by a\n>> checkpoint) does not evict the page from shared buffers. Not even a\n>> background writer evicts a page from shared buffers, it merely marks them\n>> as 'clean' and leaves them there. 
And all those writes happen on the\n>> background, so the clients don't need to wait for them to complete (except\n>> for xlog checkpoints).\n> \n> So, we're saying the same.\n\nMaybe. I still am not sure how fsync=off affects the eviction in your\nopinion. I think it does not (or just very remotely) and you were saying\nthe opposite. IMHO the eviction of (dirty) buffers is either very fast\nor slow, no matter what the fsync setting is.\n\n> With all that, and enough RAM, it already does what was requested.\n> \n> Maybe it would help to tune shared_buffers-to-os-cache ratio, and\n> dirty_ratio to allow a big portion of RAM used for write caching (if\n> there were enough writes which I doubt), but, in essence, un\n> unmodified postgres installation with enough RAM to hold the whole DB\n> + shared buffers in RAM should perform quite optimally.\n\nProbably, for a read-write database that fits into memory. In case of a\nread-only database I don't think this really matters because the main\nissue there are temp files and if you can stuff them into page cache\nthen you can just increase the work_mem instead and you're golden.\n\nTomas\n",
"msg_date": "Fri, 02 Mar 2012 02:13:36 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IFtQRVJGT1JNXSBSZTogW1BFUkZPUk1dIFJlOiBbUEVSRk9STV0=?=\n\t=?UTF-8?B?IFJlOiBbUEVSRk9STV0gUmU6IOWbnuWkje+8miBbUEVSRk9STV0gUEcgYXMgaW4=?=\n\t=?UTF-8?B?LW1lbW9yeSBkYj8gSG93IHRvIHdhcm0gdXAgYW5kIHJlLXBvcHVsYXRlIGJ1ZmY=?=\n\t=?UTF-8?B?ZXJzPyBIb3cgdG8gcmVhZCBpbiBhbGwgdHVwbGVzIGludG8gbWVtb3J5Pw==?="
},
{
"msg_contents": "\n\nOn 03/01/2012 07:58 PM, Claudio Freire wrote:\n> On Thu, Mar 1, 2012 at 9:28 PM, Peter van Hardenberg<[email protected]> wrote:\n>>> Setting work_mem to hundreds of MB in a 4G system is suicide. Tens\n>>> even is dangerous.\n>>>\n>> Why do you say that? We've had work_mem happily at 100MB for years. Is\n>> there a particular degenerate case you're concerned about?\n> Me too.\n>\n> But I've analyzed the queries I'll be sending to the database and I've\n> carefully bound the effective amount of memory used given the load\n> I'll be experiencing.\n>\n> Saying that it should be set to 100M without consideration for those\n> matters is the suicide part. work_mem applies to each sort operation.\n> Suppose, just for the sake of argument, that each connection is\n> performing 5 such sorts (ie, 5 joins of big tables - not unthinkable),\n> then suppose you have your max_connections to the default of 100, then\n> the system could request as much as 50G of ram.\n>\n> I set work_mem higher in my database system since I *know* most of the\n> connections will not perform any merge or hash joins, nor will they\n> sort the output, so they won't use work_mem even once. The ones that\n> will, I have limited on the application side to a handful, hence I\n> *know* that 50G theoretical maximum will not be reached.\n>\n> Can the OP say that? I have no reason to think so. Hence I don't\n> suggest 100M is OK on a 4G system.\n\nWell, obviously you need to know your workload. Nobody said otherwise.\n\ncheers\n\nandrew\n",
"msg_date": "Thu, 01 Mar 2012 20:17:14 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate\n\tbuffers? How to read in all tuples into memory?"
},
{
"msg_contents": "On Thu, Mar 1, 2012 at 10:13 PM, Tomas Vondra <[email protected]> wrote:\n>\n> Maybe. I still am not sure how fsync=off affects the eviction in your\n> opinion. I think it does not (or just very remotely) and you were saying\n> the opposite. IMHO the eviction of (dirty) buffers is either very fast\n> or slow, no matter what the fsync setting is.\n\nI was thinking page cleanup, but if you're confident it doesn't happen\non a read-only database, I'd have to agree on all your other points.\n\nI have seen a small amount of writes on a read-only devel DB I work\nwith, though. Usually in the order of 100kb/s writes per 10mb/s reads\n- I attributed that to page cleanup. In that case, it can add some\nwait time to fsync, even though it's really a slow volume of writes.\nIf you're right, I'm thinking, it may be some other thing... atime\nupdates maybe, I'd have to check the filesystem configuration I guess.\n",
"msg_date": "Thu, 1 Mar 2012 23:05:15 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFORM=5D_Re=3A_=5BPERFO?=\n\t=?UTF-8?Q?RM=5D_Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9A_=5BPERFORM=5D_PG_as_in=2Dmemory_db=3F_How_to_warm_?=\n\t=?UTF-8?Q?up_and_re=2Dpopulate_buffers=3F_How_to_read_in_all_tuples_into_m?=\n\t=?UTF-8?Q?emory=3F?="
},
{
"msg_contents": "On 2.3.2012 03:05, Claudio Freire wrote:\n> On Thu, Mar 1, 2012 at 10:13 PM, Tomas Vondra <[email protected]> wrote:\n>>\n>> Maybe. I still am not sure how fsync=off affects the eviction in your\n>> opinion. I think it does not (or just very remotely) and you were saying\n>> the opposite. IMHO the eviction of (dirty) buffers is either very fast\n>> or slow, no matter what the fsync setting is.\n> \n> I was thinking page cleanup, but if you're confident it doesn't happen\n> on a read-only database, I'd have to agree on all your other points.\n> \n> I have seen a small amount of writes on a read-only devel DB I work\n> with, though. Usually in the order of 100kb/s writes per 10mb/s reads\n> - I attributed that to page cleanup. In that case, it can add some\n> wait time to fsync, even though it's really a slow volume of writes.\n> If you're right, I'm thinking, it may be some other thing... atime\n> updates maybe, I'd have to check the filesystem configuration I guess.\n\nI'd guess those writes were caused by hint bits (~ page cleanup, but\nthat's a one-time thing and should be fixed by VACUUM FREEZE right after\nthe load). Or maybe it was related to runtime stats (i.e. pgstat).\n\nT.\n",
"msg_date": "Sat, 03 Mar 2012 01:30:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IFtQRVJGT1JNXSBSZTogW1BFUkZPUk1dIFJlOiBbUEVSRk9STV0=?=\n\t=?UTF-8?B?IFJlOiBbUEVSRk9STV0gUmU6IOWbnuWkje+8miBbUEVSRk9STV0gUEcgYXMgaW4=?=\n\t=?UTF-8?B?LW1lbW9yeSBkYj8gSG93IHRvIHdhcm0gdXAgYW5kIHJlLXBvcHVsYXRlIGJ1ZmY=?=\n\t=?UTF-8?B?ZXJzPyBIb3cgdG8gcmVhZCBpbiBhbGwgdHVwbGVzIGludG8gbWVtb3J5Pw==?="
}
] |
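Tomas's point about hint bits suggests doing that one-time page cleanup immediately after the nightly reload, so the daytime read-only traffic generates no writes at all; a minimal sketch, assuming the osm_point table from earlier in the thread:

-- Run once, right after the bulk load: sets hint bits, freezes tuples
-- and refreshes planner statistics in one pass, so later SELECT-only
-- traffic does not dirty any pages.
VACUUM FREEZE ANALYZE osm_point;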
[
{
"msg_contents": "I found in a production system that mostly the performance is being \ncrippled by the handling of one particular case.\n\nI hope I've reduced this to a minimal example, below.\n\nSummary: when more than one key is given (or the data is sourced from a \ntable) the planner is forced to a join of the whole data set.\n\nThe query aggregates a small amount of the data in a large data set. It \nlooks like lookups via a Nested Loop would be considerably quicker. I did \nthis explicitly in the UNION query.\n\nWhat is that prevents the index condition from being used in earlier parts \nof the query? Only where a single condition is present is it be used below \nthe final join.\n\nIs it that a Nested Loop cannot make several independent lookups via an \nindex?\n\nIs my input preventing the planner doing this, or would it need to be \nsmarter about something?\n\nIt seems interesting that it is able to do this successfully in the third \nplan: \"As above but without the join to the job table\".\n\nThanks\n\n-- \nMark\n\n\n--\n-- Create schema: job -E task -E resource\n--\n\nCREATE TABLE job (\n jid integer PRIMARY KEY\n);\n\nCREATE TABLE task (\n jid integer REFERENCES job (jid),\n tid integer PRIMARY KEY\n);\n\nCREATE TABLE resource (\n tid integer REFERENCES task (tid),\n name text\n);\n\nCREATE INDEX idx_task ON task (jid, tid);\nCREATE INDEX idx_resource_tid ON resource (tid, name);\nCREATE INDEX idx_resource_name ON resource (name, tid);\n\n--\n-- Populate with data:\n-- 10000 jobs,\n-- 1000 tasks per job,\n-- 0-4 resources per task\n--\n\nCREATE OR REPLACE FUNCTION\npopulate()\nRETURNS VOID\nAS $$\nDECLARE\n t integer;\nBEGIN\n FOR t IN 0..10000 LOOP\n INSERT INTO job VALUES (t);\n END LOOP;\n\n FOR t IN 0..10000000 LOOP\n INSERT INTO task VALUES (random() * 10000, t);\n\n IF random() > 0.1 THEN\n INSERT INTO resource VALUES (t, 'wallace');\n INSERT INTO resource VALUES (t, 'gromit');\n END IF;\n\n IF random() > 0.9 THEN\n INSERT INTO resource VALUES (t, 'shaun');\n END IF;\n\n IF random() > 0.6 THEN\n INSERT INTO resource VALUES (t, 'wendolene');\n END IF;\n END LOOP;\nEND\n$$ LANGUAGE plpgsql;\n\nSELECT populate();\nVACUUM ANALYZE;\n\n-- Define some simple aggregation with a left join\n\nCREATE VIEW middle AS\n SELECT task.jid,\n task.tid,\n COUNT(resource.name) AS nresource\n FROM task\n LEFT JOIN resource ON task.tid = resource.tid\n GROUP BY task.jid,\n task.tid;\n\n-- Aggregate again for a single key: fast\n-- \"Nested Loop\" is used\n\nSELECT job.jid,\n sum(nresource)\nFROM job\n INNER JOIN middle ON job.jid = middle.jid\nWHERE job.jid IN (1234)\nGROUP BY job.jid;\n\n-- GroupAggregate (cost=0.00..35026.04 rows=1 width=12)\n-- -> Nested Loop (cost=0.00..35021.13 rows=980 width=12)\n-- -> Index Only Scan using job_pkey on job (cost=0.00..4.28 rows=1 width=4)\n-- Index Cond: (jid = 1234)\n-- -> GroupAggregate (cost=0.00..34997.25 rows=980 width=15)\n-- -> Nested Loop Left Join (cost=0.00..34970.55 rows=2254 width=15)\n-- -> Index Only Scan using idx_task on task (cost=0.00..55.98 rows=980 width=8)\n-- Index Cond: (jid = 1234)\n-- -> Index Only Scan using idx_resource_tid on resource (cost=0.00..35.54 rows=7 width=11)\n-- Index Cond: (tid = task.tid)\n-- (10 rows)\n\n-- As above, but with two keys: slow\n-- \"Merge Join\" is attempted; this is the 'bad' case\n\nEXPLAIN\nSELECT job.jid,\n sum(nresource)\nFROM job\n INNER JOIN middle ON job.jid = middle.jid\nWHERE job.jid IN (1234, 5678)\nGROUP BY job.jid;\n\n-- GroupAggregate (cost=5636130.95..6091189.12 rows=2 width=12)\n-- 
-> Merge Join (cost=5636130.95..6091179.10 rows=2000 width=12)\n-- Merge Cond: (task.jid = job.jid)\n-- -> GroupAggregate (cost=5636130.95..5966140.73 rows=9999986 width=15)\n-- -> Sort (cost=5636130.95..5693633.43 rows=23000992 width=15)\n-- Sort Key: task.jid, task.tid\n-- -> Merge Left Join (cost=0.00..1251322.49 rows=23000992 width=15)\n-- Merge Cond: (task.tid = resource.tid)\n-- -> Index Scan using task_pkey on task (cost=0.00..281847.80 rows=9999986 width=8)\n-- -> Index Only Scan using idx_resource_tid on resource (cost=0.00..656962.32 rows=23000992 width=11)\n-- -> Materialize (cost=0.00..8.55 rows=2 width=4)\n-- -> Index Only Scan using job_pkey on job (cost=0.00..8.54 rows=2 width=4)\n-- Index Cond: (jid = ANY ('{1234,5678}'::integer[]))\n-- (13 rows)\n\n-- As above but without the join to the job table: fast\n\nSELECT jid,\n sum(nresource)\nFROM middle\nWHERE jid IN (1234, 5678)\nGROUP BY jid;\n\n-- GroupAggregate (cost=0.00..69995.03 rows=200 width=12)\n-- -> GroupAggregate (cost=0.00..69963.62 rows=1961 width=15)\n-- -> Nested Loop Left Join (cost=0.00..69910.18 rows=4511 width=15)\n-- -> Index Only Scan using idx_task on task (cost=0.00..93.39 rows=1961 width=8)\n-- Index Cond: (jid = ANY ('{1234,5678}'::integer[]))\n-- -> Index Only Scan using idx_resource_tid on resource (cost=0.00..35.52 rows=7 width=11)\n-- Index Cond: (tid = task.tid)\n-- (7 rows)\n\n-- Kludge to lookup two keys: fast (cost 70052)\n\n SELECT job.jid,\n sum(nresource)\n FROM job\n INNER JOIN middle ON job.jid = middle.jid\n WHERE job.jid IN (1234)\n GROUP BY job.jid\nUNION\n SELECT job.jid,\n sum(nresource)\n FROM job\n INNER JOIN middle ON job.jid = middle.jid\n WHERE job.jid IN (5678)\n GROUP BY job.jid;\n\n-- \n-- Repeat with job keys from a table instead of 'IN' clause.\n-- \n\nCREATE TABLE one_job (\n jid integer PRIMARY KEY\n);\n\nCREATE TABLE two_jobs (\n jid integer PRIMARY KEY\n);\n\nINSERT INTO one_job VALUES (1234);\nINSERT INTO two_jobs VALUES (1234), (5678);\n\nANALYZE one_job;\nANALYZE two_jobs;\n\n-- Joining against one row: slow (cost 5636131.97..6092141.59)\n-- \"Merge Join\" is attempted\n\nEXPLAIN\nSELECT job.jid,\n sum(nresource)\nFROM one_job job\n INNER JOIN middle ON job.jid = middle.jid\nGROUP BY job.jid;\n\n-- Joining against two rows: slow (cost 5636131.98..6093141.60)\n-- \"Merge Join\" is attempted\n\nEXPLAIN\nSELECT job.jid,\n sum(nresource)\nFROM two_jobs job\n INNER JOIN middle ON job.jid = middle.jid\nGROUP BY job.jid;\n\n",
"msg_date": "Sun, 26 Feb 2012 14:16:47 +0000 (GMT)",
"msg_from": "Mark Hills <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index condition in a Nested Loop"
},
{
"msg_contents": "Mark Hills <[email protected]> writes:\n> What is that prevents the index condition from being used in earlier parts \n> of the query? Only where a single condition is present is it be used below \n> the final join.\n\n\"WHERE job.jid IN (1234)\" is simplified to \"WHERE job.jid = 1234\", and\nthat in combination with \"JOIN ON job.jid = middle.jid\" allows deduction\nof \"middle.jid = 1234\" a/k/a \"task.jid = 1234\", leading to the\nrecognition that only one row from \"task\" is needed. There is no such\ntransitive propagation of general IN clauses. The problem with your\nslower queries is not that they're using merge joins, it's that there's\nno scan-level restriction on the task table so that whole table has to\nbe scanned.\n\nAnother thing that's biting you is that the GROUP BY in the view acts as\na partial optimization fence: there's only a limited amount of stuff\nthat can get pushed down through that. You might consider rewriting the\nview to avoid that, along the lines of\n\ncreate view middle2 as\n SELECT task.jid, task.tid,\n (select count(resource.name) from resource where task.tid = resource.tid) AS nresource\n FROM task;\n\nThis is not perfect: this formulation forces the system into essentially\na nestloop join between task and resource. In cases where you actually\nwant results for a lot of task rows, that's going to lose badly. But in\nthe examples you're showing here, it's going to work better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 26 Feb 2012 16:00:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index condition in a Nested Loop "
},
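A short usage sketch of the middle2 view Tom suggests above, mirroring Mark's original two-key query; with the correlated subquery the jid restriction should reach the index scan on task, so only the rows for the two jobs are read:

SELECT jid, sum(nresource)
FROM middle2
WHERE jid IN (1234, 5678)
GROUP BY jid;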
{
"msg_contents": "On Sun, 26 Feb 2012, Tom Lane wrote:\n\n> Mark Hills <[email protected]> writes:\n> > What is that prevents the index condition from being used in earlier parts \n> > of the query? Only where a single condition is present is it be used below \n> > the final join.\n> \n> \"WHERE job.jid IN (1234)\" is simplified to \"WHERE job.jid = 1234\", and\n> that in combination with \"JOIN ON job.jid = middle.jid\" allows deduction\n> of \"middle.jid = 1234\" a/k/a \"task.jid = 1234\", leading to the\n> recognition that only one row from \"task\" is needed. There is no such\n> transitive propagation of general IN clauses. The problem with your\n> slower queries is not that they're using merge joins, it's that there's\n> no scan-level restriction on the task table so that whole table has to\n> be scanned.\n> \n> Another thing that's biting you is that the GROUP BY in the view acts as\n> a partial optimization fence: there's only a limited amount of stuff\n> that can get pushed down through that. You might consider rewriting the\n> view to avoid that, along the lines of\n> \n> create view middle2 as\n> SELECT task.jid, task.tid,\n> (select count(resource.name) from resource where task.tid = resource.tid) AS nresource\n> FROM task;\n> \n> This is not perfect: this formulation forces the system into essentially\n> a nestloop join between task and resource. In cases where you actually\n> want results for a lot of task rows, that's going to lose badly. But in\n> the examples you're showing here, it's going to work better.\n\nThanks for this. Indeed it does work better, and it's exactly the method I \nwas hoping the planner could use to execute the query.\n\nI modified the report on the previous week's data, and it now runs 6x \nfaster (in a database containing approx. 2 years of data). There are \nseveral similar reports. Some queries work on only a hanful of jobs and \nthis change ensures they are instant.\n\nI hadn't realised that sub-queries restrict the planner so much. Although \nat some point I've picked up a habit of avoiding them, presumably for this \nreason.\n\nIf you have time to explain, I'd be interested in a suggestion for any \nchange to the planner that could make a small contribution towards \nimproving this. eg. a small project that could get me into the planner \ncode.\n\nMany thanks for your help,\n\n-- \nMark\n",
"msg_date": "Mon, 27 Feb 2012 23:13:57 +0000 (GMT)",
"msg_from": "Mark Hills <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index condition in a Nested Loop "
},
{
"msg_contents": "Mark Hills <[email protected]> writes:\n> I hadn't realised that sub-queries restrict the planner so much. Although \n> at some point I've picked up a habit of avoiding them, presumably for this \n> reason.\n\n> If you have time to explain, I'd be interested in a suggestion for any \n> change to the planner that could make a small contribution towards \n> improving this. eg. a small project that could get me into the planner \n> code.\n\nWell, if it were easy to do, we'd probably have done it already ...\n\nPlain subqueries might perhaps be turned into joins (with special join\ntypes no doubt), but I'm not sure what we'd do about subqueries with\ngrouping or aggregation, as your examples had. There was some talk a\nmonth or three back about allowing such subqueries to have parameterized\npaths a la the recently-added parameterized path mechanism, but it\ndidn't get further than idle speculation.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Feb 2012 19:16:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index condition in a Nested Loop "
}
] |
[
{
"msg_contents": "Le dimanche 26 février 2012 01:16:08, Stefan Keller a écrit :\n> Hi,\n> \n> 2011/10/24 Stephen Frost <[email protected]> wrote\n> \n> > Now, we've also been discussing ways to have PG automatically\n> > re-populate shared buffers and possibly OS cache based on what was in\n> > memory at the time of the last shut-down, but I'm not sure that would\n> > help your case either since you're rebuilding everything every night and\n> > that's what's trashing your buffers (because everything ends up getting\n> > moved around). You might actually want to consider if that's doing more\n> > harm than good for you. If you weren't doing that, then the cache\n> > wouldn't be getting destroyed every night..\n> \n> I'd like to come back on the issue of aka of in-memory key-value database.\n> \n> To remember, it contains table definition and queries as indicated in\n> the appendix [0]. There exist 4 other tables of similar structure.\n> There are indexes on each column. The tables contain around 10 million\n> tuples. The database is \"read-only\"; it's completely updated every\n> day. I don't expect more than 5 concurrent users at any time. A\n> typical query looks like [1] and varies in an unforeseable way (that's\n> why hstore is used). EXPLAIN tells me that the indexes are used [2].\n> \n> The problem is that the initial queries are too slow - and there is no\n> second chance. I do have to trash the buffer every night. There is\n> enough main memory to hold all table contents.\n> \n> 1. How can I warm up or re-populate shared buffers of Postgres?\n\nThere was a patch proposed for postgresql which purpose was to \nsnapshot/Restore postgresql buffers, but it is still not sure how far that \nreally help to have that part loaded.\n\n> 2. Are there any hints on how to tell Postgres to read in all table\n> contents into memory?\n\nI wrote pgfincore for the OS part: you can use it to preload table/index in OS \ncache, and do snapshot/restore if you want fine grain control of what part of \nthe object you want to warm.\nhttps://github.com/klando/pgfincore\n\n\n> \n> Yours, Stefan\n> \n> \n> APPENDIX\n> \n> [0]\n> CREATE TABLE osm_point (\n> osm_id integer,\n> name text,\n> tags hstore\n> geom geometry(Point,4326)\n> );\n> \n> \n> [1]\n> SELECT osm_id, name FROM osm_point\n> WHERE tags @> 'tourism=>viewpoint'\n> AND ST_Contains(\n> GeomFromText('BOX(8.42 47.072, 9.088 47.431)'::box2d, 4326),\n> geom)\n> \n> [2]\n> EXPLAIN ANALYZE returns:\n> Bitmap Heap Scan on osm_point (cost=402.15..40465.85 rows=430\n> width=218) (actual time=121.888..137.\n> Recheck Cond: (tags @> '\"tourism\"=>\"viewpoint\"'::hstore)\n> Filter: (('01030...'::geometry && geom) AND\n> _st_contains('01030'::geometry, geom))\n> -> Bitmap Index Scan on osm_point_tags_idx (cost=0.00..402.04\n> rows=11557 width=0) (actual time=1 6710 loops=1)\n> Index Cond: (tags @> '\"tourism\"=>\"viewpoint\"'::hstore)\n> Total runtime: 137.881 ms\n> (6 rows)\n\n-- \nCédric Villemain +33 (0)6 20 30 22 52\nhttp://2ndQuadrant.fr/\nPostgreSQL: Support 24x7 - Développement, Expertise et Formation\n",
"msg_date": "Sun, 26 Feb 2012 20:36:19 +0100",
"msg_from": "=?utf-8?q?C=C3=A9dric_Villemain?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG as in-memory db? How to warm up and re-populate buffers? How\n\tto read in all tuples into memory?"
}
] |
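A hedged sketch of the pgfincore approach Cédric mentions, applied to the osm_point table and index from Stefan's earlier posts; the function names below (pgfadvise_willneed, pgfincore) are the ones documented by the pgfincore project, but signatures vary between versions, so treat this as an outline rather than a verified recipe:

-- Ask the kernel to prefetch the relation files into the OS page cache
-- before the first user query of the day.
SELECT pgfadvise_willneed('osm_point');
SELECT pgfadvise_willneed('osm_point_tags_idx');

-- Report how much of a relation is currently resident in the OS cache.
SELECT * FROM pgfincore('osm_point');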
[
{
"msg_contents": "I happened to be looking in the PostgreSQL logs (8.4.10, x86_64,\nScientificLinux 6.1) and noticed that an app was doing some sorting\n(group by, order by, index creation) that ended up on disk rather than\nstaying in memory.\nSo I enabled trace_sort and restarted the app.\nWhat followed confused me.\n\nI know that the app is setting the work_mem and maintenance_work_mem\nto 1GB, at the start of the session, with the following calls:\n\nselect set_config(work_mem, 1GB, False);\nselect set_config(maintenance_work_mem, 1GB, False);\n\nBy timestamps, I know that these statements take place before the next\nlog items, generated by PostgreSQL (note: I also log the PID of the\nbackend and all of these are from the same PID):\n\nLOG: 00000: begin tuple sort: nkeys = 2, workMem = 1048576, randomAccess = f\nLOG: 00000: begin tuple sort: nkeys = 1, workMem = 1048576, randomAccess = f\nLOG: 00000: begin tuple sort: nkeys = 1, workMem = 1048576, randomAccess = f\nLOG: 00000: begin tuple sort: nkeys = 1, workMem = 1048576, randomAccess = f\nLOG: 00000: begin tuple sort: nkeys = 2, workMem = 1048576, randomAccess = f\n^ these make sense\n\nLOG: 00000: begin tuple sort: nkeys = 2, workMem = 131072, randomAccess = f\nLOG: 00000: begin tuple sort: nkeys = 1, workMem = 131072, randomAccess = f\nLOG: 00000: begin tuple sort: nkeys = 1, workMem = 131072, randomAccess = f\n....\n^^ these do not (but 128MB is the globally-configured work_mem value)\n\nLOG: 00000: begin index sort: unique = t, workMem = 2097152, randomAccess = f\n^ this kinda does (2GB is the globally-configured maintenance_work_mem value)\n\nLOG: 00000: begin index sort: unique = f, workMem = 131072, randomAccess = f\nLOG: 00000: begin tuple sort: nkeys = 2, workMem = 131072, randomAccess = f\n..\n\n\nThe config shows 128MB for work_mem and 2GB for maintenance_work_mem.\nWhy does PostgreSQL /sometimes/ use the globally-configured values and\nsometimes use the values that come from the connection?\nAm I wrong in misunderstanding what 'session' variables are? I thought\nthat session (versus transaction) config items were set for /all/\ntransactions in a given backend, until changed or until that backend\nterminates. Is that not so?\n\nIf I reconfigure the app to call out to set_config(item, value, True)\nafter each 'BEGIN' statement then workMem seems to be correct (at\nleast more of the time -- the process takes some time to run and I\nhaven't done an exhaustive check as yet).\n\n-- \nJon\n",
"msg_date": "Tue, 28 Feb 2012 13:16:27 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "problems with set_config, work_mem, maintenance_work_mem, and sorting"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> The config shows 128MB for work_mem and 2GB for maintenance_work_mem.\n> Why does PostgreSQL /sometimes/ use the globally-configured values and\n> sometimes use the values that come from the connection?\n\nYou sure those log entries are all from the same process?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Feb 2012 14:28:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 1:28 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> The config shows 128MB for work_mem and 2GB for maintenance_work_mem.\n>> Why does PostgreSQL /sometimes/ use the globally-configured values and\n>> sometimes use the values that come from the connection?\n>\n> You sure those log entries are all from the same process?\n\nIf I am understanding this correctly, yes. They all share the same pid.\nThe logline format is:\n\nlog_line_prefix = '%t %d %u [%p]'\n\nand I believe %p represents the pid, and also that a pid corresponds\nto a backend. Therefore, same pid == same backend == same connection\n== same session. Many transactions within a session.\n\n\n-- \nJon\n",
"msg_date": "Tue, 28 Feb 2012 13:33:33 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> On Tue, Feb 28, 2012 at 1:28 PM, Tom Lane <[email protected]> wrote:\n>> Jon Nelson <[email protected]> writes:\n>>> Why does PostgreSQL /sometimes/ use the globally-configured values and\n>>> sometimes use the values that come from the connection?\n\n>> You sure those log entries are all from the same process?\n\n> If I am understanding this correctly, yes. They all share the same pid.\n\nHmph ... does seem a bit weird. Can you turn on log_statements and\nidentify which operations aren't using the session values?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Feb 2012 15:51:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 2:51 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> On Tue, Feb 28, 2012 at 1:28 PM, Tom Lane <[email protected]> wrote:\n>>> Jon Nelson <[email protected]> writes:\n>>>> Why does PostgreSQL /sometimes/ use the globally-configured values and\n>>>> sometimes use the values that come from the connection?\n>\n>>> You sure those log entries are all from the same process?\n>\n>> If I am understanding this correctly, yes. They all share the same pid.\n>\n> Hmph ... does seem a bit weird. Can you turn on log_statements and\n> identify which operations aren't using the session values?\n\nI had log_min_duration_statement = 1000.\n\nAn example:\n\nLOG: 00000: begin tuple sort: nkeys = 3, workMem = 131072, randomAccess = f\nLOCATION: tuplesort_begin_heap, tuplesort.c:573\nSTATEMENT: INSERT INTO (new table) SELECT (bunch of stuff here) FROM\n.. ORDER BY ...\n\nand also some CREATE TABLE ... statements:\n\nLOG: 00000: begin index sort: unique = f, workMem = 131072, randomAccess = f\nLOCATION: tuplesort_begin_index_btree, tuplesort.c:642\nSTATEMENT: CREATE TABLE <tablename> (LIKE some_other_tablename)\n\nI also see this:\n\nLOG: 00000: begin tuple sort: nkeys = 2, workMem = 131072, randomAccess = f\nLOCATION: tuplesort_begin_heap, tuplesort.c:573\nSTATEMENT: SELECT <bunch of stuff from system catalogs>\n\nwhich is the ORM library (SQLAlchemy) doing a reflection of the\ntable(s) involved.\nThe statement is from the same backend (pid) and takes place\nchronologically *after* the following:\n\nLOG: 00000: begin tuple sort: nkeys = 2, workMem = 1048576, randomAccess = f\nLOCATION: tuplesort_begin_heap, tuplesort.c:573\nSTATEMENT: <more reflection stuff>\n\nIs that useful?\n\nIf that's not enough, I can crank the logging up.\nWhat would you like to see for 'log_statements' (if what I've provided\nyou above is not enough).\n\n-- \nJon\n",
"msg_date": "Tue, 28 Feb 2012 15:38:12 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
},
{
"msg_contents": "Quoting Jon Nelson:\n\"\"\"\nThe config shows 128MB for work_mem and 2GB for maintenance_work_mem.\nWhy does PostgreSQL /sometimes/ use the globally-configured values and\nsometimes use the values that come from the connection?\nAm I wrong in misunderstanding what 'session' variables are? I thought\nthat session (versus transaction) config items were set for /all/\ntransactions in a given backend, until changed or until that backend\nterminates. Is that not so?\n\"\"\"\n\nCould it be that the transaction which does the set_config is rolled back? If that is\nthe case, the set_config is rolled back, too. However, if the transaction commits,\nthen the set_config should be in effect for the whole session. It seems this is not\ndocumented at all for set_config, just for SET SQL command.\n\nI think it would be nice to have a way to force the connection to use the provided\nsettings even if the transaction in which they are done is rolled back. In single statement\nif possible. Otherwise you might be forced to do a transaction just to be sure the SET\nis actually in effect for the connection's life-time.\n\nDjango was bitten by this for example, it is now fixed by using this:\nhttps://github.com/django/django/blob/master/django/db/backends/postgresql_psycopg2/base.py#L189\n\n - Anssi",
"msg_date": "Tue, 28 Feb 2012 23:48:52 +0200",
"msg_from": "=?iso-8859-1?Q?K=E4=E4ri=E4inen_Anssi?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with set_config, work_mem,\n\tmaintenance_work_mem, and sorting"
},
{
"msg_contents": "Jon Nelson <[email protected]> writes:\n> On Tue, Feb 28, 2012 at 2:51 PM, Tom Lane <[email protected]> wrote:\n>> Hmph ... does seem a bit weird. Can you turn on log_statements and\n>> identify which operations aren't using the session values?\n\n> I had log_min_duration_statement = 1000.\n\nThat's not really going to prove much, as you won't be able to see any\ncommands that might be setting or resetting the work_mem parameters.\n\n> ... which is the ORM library (SQLAlchemy) doing a reflection of the\n> table(s) involved.\n\nOh, there's an ORM involved? I'll bet a nickel it's doing something\nsurprising, like not issuing your SET until much later than you thought.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Feb 2012 16:54:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 6:54 PM, Tom Lane <[email protected]> wrote:\n>\n>> ... which is the ORM library (SQLAlchemy) doing a reflection of the\n>> table(s) involved.\n>\n> Oh, there's an ORM involved? I'll bet a nickel it's doing something\n> surprising, like not issuing your SET until much later than you thought.\n\nI'd rather go for an auto-rollback at some point within the\ntransaction that issued the set work_mem. SQLA tends to do that if,\nfor instance, an exception is risen within a transaction block (ie,\nflushing).\n\nYou can issue the set work_mem in its own transaction, and commit it,\nand in that way avoid that rollback.\n",
"msg_date": "Tue, 28 Feb 2012 19:46:52 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
},
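A minimal sketch of the failure mode Anssi describes and of the fix Claudio suggests above; the 1GB values mirror what Jon's application sets, and the assumed behaviour (settings changed inside an aborted transaction are discarded with it) is spelled out in the comments:

-- Pitfall: if the surrounding transaction rolls back, the setting is
-- discarded with it, even though is_local was false (session scope).
BEGIN;
SELECT set_config('work_mem', '1GB', false);
ROLLBACK;
SHOW work_mem;   -- back to the server default

-- Fix: commit the configuration in its own small transaction before the
-- real work starts, so later transactions on this connection keep it.
BEGIN;
SELECT set_config('work_mem', '1GB', false);
SELECT set_config('maintenance_work_mem', '1GB', false);
COMMIT;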
{
"msg_contents": "On Tue, Feb 28, 2012 at 4:46 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Feb 28, 2012 at 6:54 PM, Tom Lane <[email protected]> wrote:\n>>\n>>> ... which is the ORM library (SQLAlchemy) doing a reflection of the\n>>> table(s) involved.\n>>\n>> Oh, there's an ORM involved? I'll bet a nickel it's doing something\n>> surprising, like not issuing your SET until much later than you thought.\n>\n> I'd rather go for an auto-rollback at some point within the\n> transaction that issued the set work_mem. SQLA tends to do that if,\n> for instance, an exception is risen within a transaction block (ie,\n> flushing).\n>\n> You can issue the set work_mem in its own transaction, and commit it,\n> and in that way avoid that rollback.\n\nI cranked the logging /all/ the way up and isolated the server.\nI suspect that your theory is correct.\nI'll spend a bit more time investigating.\n\n\n-- \nJon\n",
"msg_date": "Tue, 28 Feb 2012 17:00:34 -0600",
"msg_from": "Jon Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
},
{
"msg_contents": "On Tue, Feb 28, 2012 at 8:00 PM, Jon Nelson <[email protected]> wrote:\n> I cranked the logging /all/ the way up and isolated the server.\n> I suspect that your theory is correct.\n\nAnother option, depending on your SQLA version, when connections are\nsent back to the pool, I seem to remember they were reset. That would\nalso reset the work_mem, you'd still see the same pid on PG logs, but\nit's not the same session.\n",
"msg_date": "Tue, 28 Feb 2012 20:43:37 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
},
{
"msg_contents": "On Feb 29, 2012 1:44 AM, \"Claudio Freire\" <[email protected]> wrote:\n> Another option, depending on your SQLA version, when connections are\n> sent back to the pool, I seem to remember they were reset. That would\n> also reset the work_mem, you'd still see the same pid on PG logs, but\n> it's not the same session.\n\nExcept that any open transactions are rolled back no other reset is done.\nThe correct way to handle this would be to set the options and commit the\ntransaction in Pool connect or checkout events. The event choice depends on\nwhether application scope or request scope parameters are wanted.\n\n--\nAnts Aasma\n\n\nOn Feb 29, 2012 1:44 AM, \"Claudio Freire\" <[email protected]> wrote:\n> Another option, depending on your SQLA version, when connections are\n> sent back to the pool, I seem to remember they were reset. That would\n> also reset the work_mem, you'd still see the same pid on PG logs, but\n> it's not the same session.\nExcept that any open transactions are rolled back no other reset is done. The correct way to handle this would be to set the options and commit the transaction in Pool connect or checkout events. The event choice depends on whether application scope or request scope parameters are wanted.\n--\nAnts Aasma",
"msg_date": "Wed, 29 Feb 2012 09:30:21 +0200",
"msg_from": "Ants Aasma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: problems with set_config, work_mem, maintenance_work_mem,\n\tand sorting"
}
] |