[ { "msg_contents": "I'm aware you already know that information_schema is slow [1] [2], so I\njust want to expose/document another case and tests I did.\n\nI'm using the following view to check what tables depend on what other\ntables.\n\nCREATE VIEW raw_relation_tree AS\nSELECT\n tc_p.table_catalog AS parent_catalog,\n tc_p.table_schema AS parent_schema,\n tc_p.table_name AS parent_table,\n tc_c.table_catalog AS child_catalog,\n tc_c.table_schema AS child_schema,\n tc_c.table_name AS child_table\nFROM\n information_schema.referential_constraints AS rc\n NATURAL JOIN information_schema.table_constraints AS tc_c\n LEFT JOIN information_schema.table_constraints AS tc_p ON\n rc.unique_constraint_catalog = tc_p.constraint_catalog AND\n rc.unique_constraint_schema = tc_p.constraint_schema AND\n rc.unique_constraint_name = tc_p.constraint_name\n;\n\ntest=# select count(*) from raw_relation_tree;\ncount \n-------\n 11\n(1 row)\n\nAn EXPLAIN ANALYZE for a simple SELECT on each of the FROM tables give:\nreferential_constraints: ~9ms.\ntable_constraints: ~24ms.\n\nThe result, on the above view: ~80ms. Fair enough. But if I apply a\ncondition:\n\nSELECT * FROM ___pgnui_relation_tree.raw_relation_tree WHERE\nparent_schema <> child_schema;\n\nit takes ~2 seconds (!) to complete.\n\nI tried using an alternate table_constraints definition by creating my\nown view and changing UNION to UNION ALL (as per [2]) The results were:\n\ntable_constraints using UNION ALL has the same number of rows as the\nUNION version.\n\ntable_constraints now take about 4 ms (as expected).\nVIEW raw_relation_tree is now 110 ms.\nVIEW raw_relation_tree WHERE parent_schema <> child_schema: 3.3 sec.\n\nEXPLAIN results are way too long to post here. If it is ok, I'll gladly\npost them.\n\nUsing 8.3.6.\n\n[1] http://archives.postgresql.org/pgsql-bugs/2008-12/msg00144.php\n[2]\nhttp://archives.postgresql.org/pgsql-performance/2008-05/msg00062.php\n\n\n", "msg_date": "Sat, 14 Feb 2009 10:13:07 -0800", "msg_from": "Octavio Alvarez <[email protected]>", "msg_from_op": true, "msg_subject": "Slow queries from information schema" }, { "msg_contents": "Octavio Alvarez <[email protected]> writes:\n> The result, on the above view: ~80ms. Fair enough. But if I apply a\n> condition:\n> SELECT * FROM ___pgnui_relation_tree.raw_relation_tree WHERE\n> parent_schema <> child_schema;\n> it takes ~2 seconds (!) to complete.\n\nI'm not sure I'm seeing the exact same case as you, but what I see here\nis that 8.3 puts the join condition involving _pg_keysequal() at the\ntop of the tree where it will be executed quite a lot of times (way\nmore than the planner expects, because of bad rowcount estimates below)\n... 
and _pg_keysequal() is implemented in a depressingly inefficient way.\n\nCVS HEAD seems to avoid this trap in the same case, but I'm not entirely\nconvinced whether it's getting better rowcount estimates or just got\nlucky.\n\nAnyway it seems to help a great deal if you use a less sucky definition\nof the function, such as\n\ncreate or replace function information_schema._pg_keysequal(smallint[], smallint[]) RETURNS boolean\nLANGUAGE sql STRICT IMMUTABLE AS\n'select $1 <@ $2 and $2 <@ $1';\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Feb 2009 15:02:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow queries from information schema " }, { "msg_contents": "On Sat, 2009-02-14 at 15:02 -0500, Tom Lane wrote:\n> Octavio Alvarez <[email protected]> writes:\n> > The result, on the above view: ~80ms. Fair enough. But if I apply a\n> > condition:\n> > SELECT * FROM ___pgnui_relation_tree.raw_relation_tree WHERE\n> > parent_schema <> child_schema;\n> > it takes ~2 seconds (!) to complete.\n> \n> I'm not sure I'm seeing the exact same case as you, but what I see here\n> is that 8.3 puts the join condition involving _pg_keysequal() at the\n> top of the tree where it will be executed quite a lot of times (way\n> more than the planner expects, because of bad rowcount estimates below)\n> ... and _pg_keysequal() is implemented in a depressingly inefficient way.\n> \n> CVS HEAD seems to avoid this trap in the same case, but I'm not entirely\n> convinced whether it's getting better rowcount estimates or just got\n> lucky.\n> \n> Anyway it seems to help a great deal if you use a less sucky definition\n> of the function, such as\n> \n> create or replace function information_schema._pg_keysequal(smallint[], smallint[]) RETURNS boolean\n> LANGUAGE sql STRICT IMMUTABLE AS\n> 'select $1 <@ $2 and $2 <@ $1';\n\nWow! Just tried it with the UNION (the original) version of\ninformation_schema.table_constraints and it drastically reduced the\ntotal runtime to 309 ms!\n\nI also tested it with UNION ALL and it took 1.6 sec. (and yet, 50% of\nthe previous time with UNION ALL).\n\n\n\n", "msg_date": "Sat, 14 Feb 2009 12:15:17 -0800", "msg_from": "Octavio Alvarez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow queries from information schema" } ]
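The speedup reported above comes from replacing a row-by-row key comparison with array containment tested in both directions, which behaves as set equality for the constraint key arrays. As a quick sanity check (a sketch added here, not taken from the thread, using the same containment operators Tom's replacement function relies on):

    -- two-way containment is true exactly when both arrays hold the same set of values
    SELECT ARRAY[1,2,3]::smallint[] <@ ARRAY[3,2,1]::smallint[]
       AND ARRAY[3,2,1]::smallint[] <@ ARRAY[1,2,3]::smallint[];   -- true
    SELECT ARRAY[1,2]::smallint[] <@ ARRAY[1,3]::smallint[]
       AND ARRAY[1,3]::smallint[] <@ ARRAY[1,2]::smallint[];       -- false

Containment ignores element multiplicity (ARRAY[1,1] and ARRAY[1] compare as equal this way), which is harmless here because a constraint's key-column list does not repeat column numbers.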
[ { "msg_contents": "This dinky little query takes about 4 seconds to run:\n\n select event_occurrences.*\n from event_occurrences\n join section_items on section_items.subject_id = event_occurrences.event_id\n and section_items.subject_type = 'Event'\n and section_items.sandbox_id = 16399\n where event_occurrences.start_time > '2009-02-14 18:15:14.739411 +0100'\n order by event_occurrences.start_time\n limit 4;\n\nOutput from \"explain analyze\":\n\n Limit (cost=0.00..973.63 rows=4 width=48) (actual\ntime=61.554..4039.704 rows=1 loops=1)\n -> Nested Loop (cost=0.00..70101.65 rows=288 width=48) (actual\ntime=61.552..4039.700 rows=1 loops=1)\n -> Nested Loop (cost=0.00..68247.77 rows=297 width=52)\n(actual time=61.535..4039.682 rows=1 loops=1)\n -> Index Scan using\nindex_event_occurrences_on_start_time on event_occurrences\n(cost=0.00..13975.01 rows=159718 width=48) (actual time=0.024..398.152\nrows=155197 loops=1)\n Index Cond: (start_time > '2009-02-14\n18:15:14.739411+01'::timestamp with time zone)\n -> Index Scan using\nindex_section_items_on_subject_type_and_subject_id on section_items\n(cost=0.00..0.33 rows=1 width=4) (actual time=0.023..0.023 rows=0\nloops=155197)\n Index Cond: (((section_items.subject_type)::text\n= 'Event'::text) AND (section_items.subject_id =\nevent_occurrences.event_id))\n Filter: (section_items.sandbox_id = 16399)\n -> Index Scan using event_instances_pkey on events\n(cost=0.00..6.23 rows=1 width=4) (actual time=0.014..0.015 rows=1\nloops=1)\n Index Cond: (events.id = event_occurrences.event_id)\n Filter: (events.read_permissions = (-1))\n Total runtime: 4039.788 ms\n\nNow, if I use \"limit 50\" it uses a plan that is several orders of\nmagnitude more efficient:\n\n Limit (cost=6202.38..6202.51 rows=50 width=48) (actual\ntime=0.170..0.171 rows=1 loops=1)\n -> Sort (cost=6202.38..6203.20 rows=326 width=48) (actual\ntime=0.170..0.170 rows=1 loops=1)\n Sort Key: event_occurrences.start_time\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=5.09..6191.55 rows=326 width=48)\n(actual time=0.160..0.161 rows=1 loops=1)\n -> Bitmap Heap Scan on section_items\n(cost=5.09..328.94 rows=96 width=4) (actual time=0.024..0.087 rows=7\nloops=1)\n Recheck Cond: (sandbox_id = 16399)\n Filter: ((subject_type)::text = 'Event'::text)\n -> Bitmap Index Scan on\nindex_section_items_on_sandbox_id (cost=0.00..5.06 rows=107 width=0)\n(actual time=0.018..0.018 rows=7 loops=1)\n Index Cond: (sandbox_id = 16399)\n -> Index Scan using\nindex_event_occurrences_on_event_id on event_occurrences\n(cost=0.00..60.14 rows=74 width=48) (actual time=0.010..0.010 rows=0\nloops=7)\n Index Cond: (event_occurrences.event_id =\nsection_items.subject_id)\n Filter: (event_occurrences.start_time >\n'2009-02-14 18:15:14.739411+01'::timestamp with time zone)\n Total runtime: 0.210 ms\n\nSimilarly if I disable nested joins with \"set enable_nestloop = off\":\n\n Limit (cost=10900.13..10900.14 rows=4 width=48) (actual\ntime=191.476..191.478 rows=1 loops=1)\n -> Sort (cost=10900.13..10900.95 rows=326 width=48) (actual\ntime=191.474..191.475 rows=1 loops=1)\n Sort Key: event_occurrences.start_time\n Sort Method: quicksort Memory: 25kB\n -> Hash Join (cost=8944.52..10895.24 rows=326 width=48)\n(actual time=162.104..191.463 rows=1 loops=1)\n Hash Cond: (section_items.subject_id =\nevent_occurrences.event_id)\n -> Bitmap Heap Scan on section_items\n(cost=5.09..328.94 rows=96 width=4) (actual time=0.026..0.050 rows=7\nloops=1)\n Recheck Cond: (sandbox_id = 16399)\n Filter: ((subject_type)::text = 
'Event'::text)\n -> Bitmap Index Scan on\nindex_section_items_on_sandbox_id (cost=0.00..5.06 rows=107 width=0)\n(actual time=0.019..0.019 rows=7 loops=1)\n Index Cond: (sandbox_id = 16399)\n -> Hash (cost=5580.54..5580.54 rows=157752 width=48)\n(actual time=161.832..161.832 rows=155197 loops=1)\n -> Seq Scan on event_occurrences\n(cost=0.00..5580.54 rows=157752 width=48) (actual time=0.030..75.406\nrows=155197 loops=1)\n Filter: (start_time > '2009-02-14\n18:15:14.739411+01'::timestamp with time zone)\n Total runtime: 192.496 ms\n\nSome statistics:\n\n# # select attname, n_distinct from pg_stats where tablename =\n'event_occurrences';\n attname | n_distinct\n------------+------------\n id | -1\n created_at | -0.291615\n updated_at | -0.294081\n created_by | 715\n updated_by | 715\n event_id | 2146\n start_time | -0.10047\n end_time | 5602\n\n# select attname, n_distinct from pg_stats where tablename = 'section_items';\n attname | n_distinct\n----------------------+------------\n id | -1\n created_by | 1612\n created_at | -0.708649\n updated_at | -0.83635\n updated_by | 1190\n posted_at | -0.930831\n section_id | 456\n sandbox_id | 455\n reference | 2\n subject_id | -0.546929\n subject_type | 5\n conversation_id | 1981\n read_permissions | 8\n permission_policy_id | 11\n\nAnything I can do to fix the query?\n\nThis is PostgreSQL 8.3.5. Standard planner configs. Before testing I\nreindexed, vacuumed and analyzed the tables.\n\nAlexander.\n", "msg_date": "Sat, 14 Feb 2009 23:25:05 +0100", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": true, "msg_subject": "Bad plan for nested loop + limit" }, { "msg_contents": "On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo <[email protected]> wrote:\n>\n> Output from \"explain analyze\":\n>\n> Limit (cost=0.00..973.63 rows=4 width=48) (actual\n> time=61.554..4039.704 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..70101.65 rows=288 width=48) (actual\n> time=61.552..4039.700 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..68247.77 rows=297 width=52)\n> (actual time=61.535..4039.682 rows=1 loops=1)\n\nThose estimates are pretty far off. Did you try increasing the\nstatistics target? Also, is the first query repeatable (that is, is it\nalready in cache when you do the test, or alternately, are all queries\n*out* of cache when you test?)\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Sat, 14 Feb 2009 23:29:52 -0500", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan for nested loop + limit" }, { "msg_contents": "On Sun, Feb 15, 2009 at 5:29 AM, David Wilson <[email protected]> wrote:\n> On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo <[email protected]> wrote:\n>>\n>> Output from \"explain analyze\":\n>>\n>> Limit (cost=0.00..973.63 rows=4 width=48) (actual\n>> time=61.554..4039.704 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..70101.65 rows=288 width=48) (actual\n>> time=61.552..4039.700 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..68247.77 rows=297 width=52)\n>> (actual time=61.535..4039.682 rows=1 loops=1)\n>\n> Those estimates are pretty far off. Did you try increasing the\n> statistics target? Also, is the first query repeatable (that is, is it\n> already in cache when you do the test, or alternately, are all queries\n> *out* of cache when you test?)\n\nAll in the cache when I do the test. 
Ok, so upping the statistics to\n100 on section_items.subject_id fixed it:\n\n Limit (cost=3530.95..3530.96 rows=4 width=48) (actual\ntime=0.107..0.107 rows=1 loops=1)\n -> Sort (cost=3530.95..3531.12 rows=66 width=48) (actual\ntime=0.106..0.106 rows=1 loops=1)\n Sort Key: event_occurrences.start_time\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=0.00..3529.96 rows=66 width=48)\n(actual time=0.098..0.100 rows=1 loops=1)\n -> Index Scan using index_section_items_on_sandbox_id\non section_items (cost=0.00..104.29 rows=22 width=4) (actual\ntime=0.017..0.033 rows=7 loops=1)\n Index Cond: (sandbox_id = 16399)\n Filter: ((subject_type)::text = 'Event'::text)\n -> Index Scan using\nindex_event_occurrences_on_event_id on event_occurrences\n(cost=0.00..154.79 rows=74 width=48) (actual time=0.008..0.008 rows=0\nloops=7)\n Index Cond: (event_occurrences.event_id =\nsection_items.subject_id)\n Filter: (event_occurrences.start_time >\n'2009-02-14 18:15:14.739411+01'::timestamp with time zone)\n Total runtime: 0.142 ms\n\nThanks.\n\nAlexander.\n", "msg_date": "Sun, 15 Feb 2009 17:45:42 +0100", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan for nested loop + limit" }, { "msg_contents": "On Sun, Feb 15, 2009 at 5:45 PM, Alexander Staubo <[email protected]> wrote:\n> On Sun, Feb 15, 2009 at 5:29 AM, David Wilson <[email protected]> wrote:\n>> On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo <[email protected]> wrote:\n>>>\n>>> Output from \"explain analyze\":\n>>>\n>>>  Limit  (cost=0.00..973.63 rows=4 width=48) (actual\n>>> time=61.554..4039.704 rows=1 loops=1)\n>>>   ->  Nested Loop  (cost=0.00..70101.65 rows=288 width=48) (actual\n>>> time=61.552..4039.700 rows=1 loops=1)\n>>>         ->  Nested Loop  (cost=0.00..68247.77 rows=297 width=52)\n>>> (actual time=61.535..4039.682 rows=1 loops=1)\n>>\n>> Those estimates are pretty far off. Did you try increasing the\n>> statistics target? Also, is the first query repeatable (that is, is it\n>> already in cache when you do the test, or alternately, are all queries\n>> *out* of cache when you test?)\n\nAll right, this query keeps coming back to bite me. If this part of the join:\n\n ... and section_items.sandbox_id = 16399\n\nyields a sufficiently large number of matches, then performance goes\n'boink', like so:\n\n Limit (cost=0.00..34.86 rows=4 width=48) (actual\ntime=4348.696..4348.696 rows=0 loops=1)\n -> Nested Loop (cost=0.00..60521.56 rows=6944 width=48) (actual\ntime=4348.695..4348.695 rows=0 loops=1)\n -> Index Scan using index_event_occurrences_on_start_time on\nevent_occurrences (cost=0.00..11965.38 rows=145712 width=48) (actual\ntime=0.093..138.029 rows=145108 loops=1)\n Index Cond: (start_time > '2009-02-27\n18:01:14.739411+01'::timestamp with time zone)\n -> Index Scan using\nindex_section_items_on_subject_type_and_subject_id on section_items\n(cost=0.00..0.32 rows=1 width=4) (actual time=0.029..0.029 rows=0\nloops=145108)\n Index Cond: (((section_items.subject_type)::text =\n'Event'::text) AND (section_items.subject_id =\nevent_occurrences.event_id))\n Filter: (section_items.sandbox_id = 9)\n Total runtime: 4348.777 ms\n\nIn this case:\n\n# select count(*) from section_items where sandbox_id = 9;\n count\n-------\n 3126\n\nIf I remove the start_time > ... clause, performance is fine. 
Upping\nthe statistics setting on any of the columns involved seems to have no\neffect.\n\nIs this a pathological border case, or is there something I can do to\n*generally* make this query run fast? Keep in mind that the query\nitself returns no rows at all. I want to avoid doing an initial\n\"select count(...)\" just to avoid the bad plan. Suffice to say, having\na web request take 5 seconds is asking too much from our users.\n\nAlexander.\n", "msg_date": "Fri, 27 Feb 2009 21:18:42 +0100", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan for nested loop + limit" }, { "msg_contents": "On Fri, Feb 27, 2009 at 3:18 PM, Alexander Staubo <[email protected]> wrote:\n> On Sun, Feb 15, 2009 at 5:45 PM, Alexander Staubo <[email protected]> wrote:\n>> On Sun, Feb 15, 2009 at 5:29 AM, David Wilson <[email protected]> wrote:\n>>> On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo <[email protected]> wrote:\n>>>>\n>>>> Output from \"explain analyze\":\n>>>>\n>>>>  Limit  (cost=0.00..973.63 rows=4 width=48) (actual\n>>>> time=61.554..4039.704 rows=1 loops=1)\n>>>>   ->  Nested Loop  (cost=0.00..70101.65 rows=288 width=48) (actual\n>>>> time=61.552..4039.700 rows=1 loops=1)\n>>>>         ->  Nested Loop  (cost=0.00..68247.77 rows=297 width=52)\n>>>> (actual time=61.535..4039.682 rows=1 loops=1)\n>>>\n>>> Those estimates are pretty far off. Did you try increasing the\n>>> statistics target? Also, is the first query repeatable (that is, is it\n>>> already in cache when you do the test, or alternately, are all queries\n>>> *out* of cache when you test?)\n>\n> All right, this query keeps coming back to bite me. If this part of the join:\n>\n>  ... and section_items.sandbox_id = 16399\n>\n> yields a sufficiently large number of matches, then performance goes\n> 'boink', like so:\n>\n>  Limit  (cost=0.00..34.86 rows=4 width=48) (actual\n> time=4348.696..4348.696 rows=0 loops=1)\n>   ->  Nested Loop  (cost=0.00..60521.56 rows=6944 width=48) (actual\n> time=4348.695..4348.695 rows=0 loops=1)\n>         ->  Index Scan using index_event_occurrences_on_start_time on\n> event_occurrences  (cost=0.00..11965.38 rows=145712 width=48) (actual\n> time=0.093..138.029 rows=145108 loops=1)\n>               Index Cond: (start_time > '2009-02-27\n> 18:01:14.739411+01'::timestamp with time zone)\n>         ->  Index Scan using\n> index_section_items_on_subject_type_and_subject_id on section_items\n> (cost=0.00..0.32 rows=1 width=4) (actual time=0.029..0.029 rows=0\n> loops=145108)\n>               Index Cond: (((section_items.subject_type)::text =\n> 'Event'::text) AND (section_items.subject_id =\n> event_occurrences.event_id))\n>               Filter: (section_items.sandbox_id = 9)\n>  Total runtime: 4348.777 ms\n>\n> In this case:\n>\n> # select count(*) from section_items where sandbox_id = 9;\n>  count\n> -------\n>  3126\n>\n> If I remove the start_time > ... clause, performance is fine. Upping\n> the statistics setting on any of the columns involved seems to have no\n> effect.\n>\n> Is this a pathological border case, or is there something I can do to\n> *generally* make this query run fast? Keep in mind that the query\n> itself returns no rows at all. I want to avoid doing an initial\n> \"select count(...)\" just to avoid the bad plan. 
Suffice to say, having\n> a web request take 5 seconds is asking too much from our users.\n\nThe problem here is that the planner estimates the cost of a Limit\nplan node by adding up (1) the startup cost of the underlying plan\nnode, in this case 0 for the nestjoin, and (2) a percentage of the run\ncost, based on the ratio of the number of rows expected to be returned\nto the total number of rows. In this case, the nested loop is\nexpected to return 6944 rows, so it figures it won't have to get very\nfar to find the 4 you requested.\n\nSo when the LIMIT clause is a little bigger, or missing, the planner\ntries to minimize the cost of the whole operation, whereas when the\nlimit is very small, it picks a plan that is much slower overall on\ntheory that it will be able to quit long before finishing the whole\nthing. When that turns out to be false, you get burned.\n\nThat means that the root cause of the problem is the fact that the\njoin is estimated to return hundreds or thousands of rows. But it's\nhard to think that you can make that estimate any better. The\nnestloop is expected to output 6944 rows, and the index scan on\nevent_occurrences is expected to return 145712 rows. So the planner\nknows that only a tiny fraction of the rows in event_occurrences are\ngoing to have a match in section_items - it just doesn't think the\nfraction is quite tiny enough to keep it from making a bad decision.\n\nInterestingly, I think the solution Tom and I were talking about to\nanother problem in this area would make your case MUCH WORSE.\n\nhttp://archives.postgresql.org/message-id/[email protected]\n\nI will think about this some more but nothing is occurring to me off\nthe top of my head.\n\n...Robert\n", "msg_date": "Fri, 27 Feb 2009 17:54:47 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan for nested loop + limit" }, { "msg_contents": "On Fri, Feb 27, 2009 at 11:54 PM, Robert Haas <[email protected]> wrote:\n> The problem here is that the planner estimates the cost of a Limit\n> plan node by adding up (1) the startup cost of the underlying plan\n> node, in this case 0 for the nestjoin, and (2) a percentage of the run\n> cost, based on the ratio of the number of rows expected to be returned\n> to the total number of rows.  In this case, the nested loop is\n> expected to return 6944 rows, so it figures it won't have to get very\n> far to find the 4 you requested.\n[...]\n> I will think about this some more but nothing is occurring to me off\n> the top of my head.\n\nThanks for explaining. Is there any way to rewrite the query in a way\nthat will avoid the nested loop join -- other than actually disabling\nnested loop joins? 
If I do the latter, the resulting query uses a hash\njoin and completes in 80-100 ms, which is still pretty horrible,\nespecially for a query that returns nothing, but extremely auspicious\ncompared to the unthinkable 4-5 seconds for the current query.\n\nAlexander.\n", "msg_date": "Sat, 28 Feb 2009 17:20:22 +0100", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan for nested loop + limit" }, { "msg_contents": "On Sat, Feb 28, 2009 at 11:20 AM, Alexander Staubo <[email protected]> wrote:\n> On Fri, Feb 27, 2009 at 11:54 PM, Robert Haas <[email protected]> wrote:\n>> The problem here is that the planner estimates the cost of a Limit\n>> plan node by adding up (1) the startup cost of the underlying plan\n>> node, in this case 0 for the nestjoin, and (2) a percentage of the run\n>> cost, based on the ratio of the number of rows expected to be returned\n>> to the total number of rows.  In this case, the nested loop is\n>> expected to return 6944 rows, so it figures it won't have to get very\n>> far to find the 4 you requested.\n> [...]\n>> I will think about this some more but nothing is occurring to me off\n>> the top of my head.\n>\n> Thanks for explaining. Is there any way to rewrite the query in a way\n> that will avoid the nested loop join -- other than actually disabling\n> nested loop joins? If I do the latter, the resulting query uses a hash\n> join and completes in 80-100 ms, which is still pretty horrible,\n> especially for a query that returns nothing, but extremely auspicious\n> compared to the unthinkable 4-5 seconds for the current query.\n\nCan you post the schema for the two tables in question? Feel free to\nomit any columns that aren't included in the query, but make sure to\ninclude any unique indices, etc.\n\nWhat do you have default_statistics_target set to? If it's less than\n100, you should probably raise it to 100 and re-analyze (the default\nvalue for 8.4 will be 100, but for 8.3 and prior it is 10).\n\nWhat is the approximate total number of rows in each of these two\ntables? Of the rows in section_items, how many have subject_type =\n'Event'?\n\n...Robert\n", "msg_date": "Sat, 28 Feb 2009 22:32:57 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan for nested loop + limit" }, { "msg_contents": "On Sun, Mar 1, 2009 at 4:32 AM, Robert Haas <[email protected]> wrote:\n> What do you have default_statistics_target set to?  If it's less than\n> 100, you should probably raise it to 100 and re-analyze (the default\n> value for 8.4 will be 100, but for 8.3 and prior it is 10).\n\nChanging it to 100 fixed the problem. Thanks for alerting me to the\nexistence of default_statistics_target.\n\nAlexander.\n", "msg_date": "Mon, 30 Mar 2009 12:24:55 +0100", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan for nested loop + limit" } ]
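Both fixes that worked in this thread amount to giving the planner a larger statistics sample for the columns it misestimates. The exact commands are not spelled out above, so the following is only a sketch; the table and column names are the ones Alexander reported raising (section_items.subject_id), and the version defaults are the ones Robert quotes:

    -- per-column: enlarge the sample for the join column, then refresh its stats
    ALTER TABLE section_items ALTER COLUMN subject_id SET STATISTICS 100;
    ANALYZE section_items;

    -- or raise the global default (10 in 8.3 and earlier, 100 from 8.4) and re-analyze;
    -- SET only affects the current session, postgresql.conf makes it permanent
    SET default_statistics_target = 100;
    ANALYZE;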
[ { "msg_contents": "Hi All,\n\nI have these indexes on a table:\n\n\nCREATE INDEX uidx_product_partno_producer_id\n ON product\n USING btree\n (partno, producer_id);\n\n\nCREATE INDEX idx_product_partno\n ON product\n USING btree\n (partno);\n\nCan I safely delete the second one? Will postgresql use \n(partno,producer_id) when it only needs to order by partno? (partno is a \ntext field, producer_id is int4). Index sizes: 172MB and 137MB. I guess \nif I only had one index, it would save memory and increase performance.\n\nAnother pair of incides, 144MB and 81MB respecively:\n\n\nCREATE INDEX idx_product_producer_uploads\n ON product\n USING btree\n (producer_id, am_upload_status_id);\n\n\nCREATE INDEX idx_product_producer_id\n ON product\n USING btree\n (producer_id);\n\n\nam_upload_status_id is also an int4. Can I delete the second index \nwithout performance drawback?\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Mon, 16 Feb 2009 09:54:07 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Partial index usage" }, { "msg_contents": "Laszlo Nagy wrote:\n> Hi All,\n> \n> I have these indexes on a table:\n> \n> \n> CREATE INDEX uidx_product_partno_producer_id\n> ON product\n> USING btree\n> (partno, producer_id);\n> \n> \n> CREATE INDEX idx_product_partno\n> ON product\n> USING btree\n> (partno);\n> \n> Can I safely delete the second one?\n\nYou can safely delete BOTH in that it won't hurt your data, only\npotentially hurt performance.\n\nDeleting the index on (partno) should somewhat improve insert\nperformance and performance on updates that can't be done via HOT.\n\nHowever, the index on (partno, producer_id) is requires more storage and\nmemory than the index on just (partno). AFAIK it's considerably slower\nto scan.\n\nWithin a transaction, drop the second index then run the query of\ninterest with EXPLAIN ANALYZE to determine just how much slower - then\nROLLBACK to undo the index drop. You'll lock out other transactions\nwhile you're doing this, but you won't make any permanent changes and\nyou can cancel it at any time.\n\n> Will postgresql use\n> (partno,producer_id) when it only needs to order by partno?\n\nYes.\n\n> I guess\n> if I only had one index, it would save memory and increase performance.\n\nMaybe. If they both fit into memory along with the main table data, then\nyou might end up losing instead since the second index is smaller and\nshould be somewhat faster to scan.\n\n> am_upload_status_id is also an int4. Can I delete the second index\n> without performance drawback?\n\nSame answer as above - test it and find out. 
You may win or lose\ndepending on your workload, table sizes, available memory, etc.\n\n--\nCraig Ringer\n", "msg_date": "Tue, 17 Feb 2009 00:07:50 +0900", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index usage" }, { "msg_contents": "On Feb 16, 2009, at 9:07 AM, Craig Ringer wrote:\n>> CREATE INDEX uidx_product_partno_producer_id\n>> ON product\n>> USING btree\n>> (partno, producer_id);\n>>\n>>\n>> CREATE INDEX idx_product_partno\n>> ON product\n>> USING btree\n>> (partno);\n>>\n>> Can I safely delete the second one?\n>\n> You can safely delete BOTH in that it won't hurt your data, only\n> potentially hurt performance.\n>\n> Deleting the index on (partno) should somewhat improve insert\n> performance and performance on updates that can't be done via HOT.\n>\n> However, the index on (partno, producer_id) is requires more \n> storage and\n> memory than the index on just (partno). AFAIK it's considerably slower\n> to scan.\n\n\nActually, that's not necessarily true. If both partno and procuder_id \nare ints and you're on a 64bit platform, there won't be any change in \nindex size, due to alignment issues.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Fri, 20 Feb 2009 21:22:11 -0600", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index usage" } ]
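Craig's drop-it-in-a-transaction test can be sketched as follows; the SELECT is only a hypothetical placeholder for whatever real query is expected to use the single-column index:

    BEGIN;
    DROP INDEX idx_product_partno;                    -- the candidate redundant index
    EXPLAIN ANALYZE
    SELECT * FROM product WHERE partno = 'ABC-123';   -- hypothetical part number; use a real workload query
    ROLLBACK;                                         -- the index is back, nothing permanent changed

As Craig notes, the open transaction blocks other transactions against product (DROP INDEX holds an exclusive lock until the ROLLBACK), so this is best tried on a quiet system.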
[ { "msg_contents": "Recently I've been working on improving the performance of a system that\ndelivers files stored in postgresql as bytea data. I was surprised at\njust how much a penalty I find moving from a domain socket connection to\na TCP connection, even localhost. For one particular 40MB file (nothing\noutragous) I see ~ 2.5 sec. to download w/ the domain socket, but ~ 45 sec\nfor a TCP connection (either localhost, name of localhost, or from\nanother machine 5 hops away (on campus - gigabit LAN) Similar numbers\nfor 8.2.3 or 8.3.6 (on Linux/Debian etch + backports)\n\nSo, why the 20 fold penalty for using TCP? Any clues on how to trace\nwhat's up in the network IO stack?\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nThe Connexions Project http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n", "msg_date": "Tue, 17 Feb 2009 01:04:03 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "TCP network cost" }, { "msg_contents": "On Feb 17, 2009, at 12:04 AM, Ross J. Reedstrom wrote:\n\n> Recently I've been working on improving the performance of a system \n> that\n> delivers files stored in postgresql as bytea data. I was surprised at\n> just how much a penalty I find moving from a domain socket \n> connection to\n> a TCP connection, even localhost. For one particular 40MB file \n> (nothing\n> outragous) I see ~ 2.5 sec. to download w/ the domain socket, but ~ \n> 45 sec\n> for a TCP connection (either localhost, name of localhost, or from\n> another machine 5 hops away (on campus - gigabit LAN) Similar numbers\n> for 8.2.3 or 8.3.6 (on Linux/Debian etch + backports)\n>\n> So, why the 20 fold penalty for using TCP? Any clues on how to trace\n> what's up in the network IO stack?\n\nTry running tests with ttcp to eliminate any PostgreSQL overhead and \nfind out the real bandwidth between the two machines. If its results \nare also slow, you know the problem is TCP related and not PostgreSQL \nrelated.\n\nCheers,\n\nRusty\n--\nRusty Conover\[email protected]\nInfoGears Inc / GearBuyer.com / FootwearBuyer.com\nhttp://www.infogears.com\nhttp://www.gearbuyer.com\nhttp://www.footwearbuyer.com\n\n\n\n\n\n\n\n\nOn Feb 17, 2009, at 12:04 AM, Ross J. Reedstrom wrote:Recently I've been working on improving the performance of a system thatdelivers files stored in postgresql as bytea data. I was surprised atjust how much a penalty I find moving from a domain socket connection toa TCP connection, even localhost. For one particular 40MB file (nothingoutragous) I see ~ 2.5 sec. to download w/ the domain socket, but ~ 45 secfor a TCP connection (either localhost, name of localhost, or fromanother machine 5 hops away (on campus - gigabit LAN) Similar numbersfor 8.2.3 or 8.3.6 (on Linux/Debian etch + backports)So, why the 20 fold penalty for using TCP? Any clues on how to tracewhat's up in the network IO stack?Try running tests with ttcp to eliminate any PostgreSQL overhead and find out the real bandwidth between the two machines.  
If its results are also slow, you know the problem is TCP related and not PostgreSQL related.Cheers,Rusty--Rusty [email protected] Inc / GearBuyer.com / FootwearBuyer.comhttp://www.infogears.comhttp://www.gearbuyer.comhttp://www.footwearbuyer.com", "msg_date": "Tue, 17 Feb 2009 00:20:02 -0700", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "On Tue, 17 Feb 2009, Rusty Conover wrote:\n\n> On Feb 17, 2009, at 12:04 AM, Ross J. Reedstrom wrote:\n>\n>> Recently I've been working on improving the performance of a system that\n>> delivers files stored in postgresql as bytea data. I was surprised at\n>> just how much a penalty I find moving from a domain socket connection to\n>> a TCP connection, even localhost. For one particular 40MB file (nothing\n>> outragous) I see ~ 2.5 sec. to download w/ the domain socket, but ~ 45 sec\n>> for a TCP connection (either localhost, name of localhost, or from\n>> another machine 5 hops away (on campus - gigabit LAN) Similar numbers\n>> for 8.2.3 or 8.3.6 (on Linux/Debian etch + backports)\n>> \n>> So, why the 20 fold penalty for using TCP? Any clues on how to trace\n>> what's up in the network IO stack?\n>\n> Try running tests with ttcp to eliminate any PostgreSQL overhead and find out \n> the real bandwidth between the two machines. If its results are also slow, \n> you know the problem is TCP related and not PostgreSQL related.\n\nnote that he saw problems even on localhost.\n\nin the last couple of months I've seen a lot of discussin on the \nlinux-kernel list about the performance of localhost. unfortunantly those \nfixes are only in the 2.6.27.x and 2.6.28.x -stable kernels.\n\nDavid Lang\n", "msg_date": "Tue, 17 Feb 2009 00:34:15 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "On Mon, Feb 16, 2009 at 11:04 PM, Ross J. Reedstrom <[email protected]> wrote:\n> Recently I've been working on improving the performance of a system that\n> delivers files stored in postgresql as bytea data. I was surprised at\n> just how much a penalty I find moving from a domain socket connection to\n> a TCP connection, even localhost. For one particular 40MB file (nothing\n> outragous) I see ~ 2.5 sec. to download w/ the domain socket, but ~ 45 sec\n> for a TCP connection (either localhost, name of localhost, or from\n> another machine 5 hops away (on campus - gigabit LAN) Similar numbers\n> for 8.2.3 or 8.3.6 (on Linux/Debian etch + backports)\n>\n> So, why the 20 fold penalty for using TCP? Any clues on how to trace\n> what's up in the network IO stack?\n\nTCP has additional overhead as well as going through the IP stack\nwhich for non-tuned Linux kernels is pretty limiting.\n\nlong story short, there are things in /proc you can use to increase\nbuffers and window sizes which will help with large TCP streams (like\na 40MB file for example). There's a lot of documentation on the net\nfor how to tune the Linux IP stack so I won't repeat it here.\n\nNow, having your DB box 5 hops away is going to add a lot of latency\nand any packet loss is going to kill TCP throughput- especially if you\nincrease window sizes. 
I'd recommend something like \"mtr\" to map the\nnetwork traffic (make sure you run it both ways in case you have an\nasymmetric routing situation) for a long period of time to look for\nhiccups.\n\n-- \nAaron Turner\nhttp://synfin.net/\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety,\ndeserve neither Liberty nor Safety.\n -- Benjamin Franklin\n", "msg_date": "Tue, 17 Feb 2009 10:13:40 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote:\n> \n> \n> Try running tests with ttcp to eliminate any PostgreSQL overhead and \n> find out the real bandwidth between the two machines. If its results \n> are also slow, you know the problem is TCP related and not PostgreSQL \n> related.\n\nI did in fact run a simple netcat client/server pair and verified that I\ncan transfer that file on 0.12 sec localhost (or hostname), 0.35 over the\nnet, so TCP stack and network are not to blame. This is purely inside\nthe postgresql code issue, I believe.\n\n\nOn Tue, Feb 17, 2009 at 10:13:40AM -0800, Aaron Turner wrote:\n> \n> TCP has additional overhead as well as going through the IP stack\n> which for non-tuned Linux kernels is pretty limiting.\n\nRight. Already tuned those so long ago, I failed to mention it. Note the\n'bare' transfer times added above. Nothing to write home about\n(~3Mb/sec) but another order of magnitude faster than the postgresql\ntransfer.\n\n> long story short, there are things in /proc you can use to increase\n> buffers and window sizes which will help with large TCP streams (like\n> a 40MB file for example). There's a lot of documentation on the net\n> for how to tune the Linux IP stack so I won't repeat it here.\n> \n> Now, having your DB box 5 hops away is going to add a lot of latency\n> and any packet loss is going to kill TCP throughput- especially if you\n> increase window sizes. I'd recommend something like \"mtr\" to map the\n> network traffic (make sure you run it both ways in case you have an\n> asymmetric routing situation) for a long period of time to look for\n> hiccups.\n\nThe 5-hops in on campus, gigabit all the way, w/ reasonable routing -\nand not the issue: I see the same times from another machine attaached\nto the same switch (which is the real use-case, actually.)\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nThe Connexions Project http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n", "msg_date": "Tue, 17 Feb 2009 14:04:17 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "On Feb 17, 2009, at 1:04 PM, Ross J. Reedstrom wrote:\n\n> On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote:\n>>\n>>\n>> Try running tests with ttcp to eliminate any PostgreSQL overhead and\n>> find out the real bandwidth between the two machines. If its results\n>> are also slow, you know the problem is TCP related and not PostgreSQL\n>> related.\n>\n> I did in fact run a simple netcat client/server pair and verified \n> that I\n> can transfer that file on 0.12 sec localhost (or hostname), 0.35 \n> over the\n> net, so TCP stack and network are not to blame. 
This is purely inside\n> the postgresql code issue, I believe.\n>\n>\n\n\nWhat is the client software you're using? libpq?\n\nRusty\n--\nRusty Conover\[email protected]\nInfoGears Inc / GearBuyer.com / FootwearBuyer.com\nhttp://www.infogears.com\nhttp://www.gearbuyer.com\nhttp://www.footwearbuyer.com\n\n\n\n\n\n\n\n\nOn Feb 17, 2009, at 1:04 PM, Ross J. Reedstrom wrote:On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote:Try running tests with ttcp to eliminate any PostgreSQL overhead and  find out the real bandwidth between the two machines.  If its results  are also slow, you know the problem is TCP related and not PostgreSQL  related.I did in fact run a simple netcat client/server pair and verified that Ican transfer that file on 0.12 sec localhost (or hostname), 0.35 over thenet, so TCP stack and network are not to blame. This is purely insidethe postgresql code issue, I believe.What is the client software you're using?  libpq?  Rusty --Rusty [email protected] Inc / GearBuyer.com / FootwearBuyer.comhttp://www.infogears.comhttp://www.gearbuyer.comhttp://www.footwearbuyer.com", "msg_date": "Tue, 17 Feb 2009 13:59:55 -0700", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "On Tue, Feb 17, 2009 at 01:59:55PM -0700, Rusty Conover wrote:\n> \n> On Feb 17, 2009, at 1:04 PM, Ross J. Reedstrom wrote:\n> \n> \n> What is the client software you're using? libpq?\n> \n\npython w/ psycopg (or psycopg2), which wraps libpq. Same results w/\neither version.\n\nI think I'll try network sniffing to see if I can find where the\ndelays are happening.\n\nRoss\n", "msg_date": "Tue, 17 Feb 2009 15:14:55 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "On Tue, Feb 17, 2009 at 03:14:55PM -0600, Ross J. Reedstrom wrote:\n> On Tue, Feb 17, 2009 at 01:59:55PM -0700, Rusty Conover wrote:\n> > \n> > What is the client software you're using? libpq?\n> > \n> \n> python w/ psycopg (or psycopg2), which wraps libpq. Same results w/\n> either version.\n> \n\nIt's not python networking per se's fault: sending the file via a \nSimpleHTTPServer, adn fetching w/ wget takes on the order of 0.5 sec.\nas well.\n\n> I think I'll try network sniffing to see if I can find where the\n> delays are happening.\n\nI'm no TCP/IP expert, but some packet capturing, and wireshark analysis\nmakes me suspicious about flow control. the 'netcat' transfer shows lots\nof packets from server -> client, w/ deltaTs of 8 - 200 usec (that's\nmicro-sec), mostly in the 10-20 range. The client -> server 'ack's seem\nbursty, happening only every 50-100 packets, then a few back-to-back,\nall taking 10-20 usec.\n\nI also see occasional lost packets, retransmits, and TCP Window Updates\nin this stream. FIN packet is after 8553 packets.\n\nFor the libpq driven transfer, I see lots of packets flowing both ways.\nSeems about every other packet from server to client is 'ack'ed. Each of\nthese 'ack's takes 10 uS to send, but seem to cause the transfer to\n'reset', since the next packet from the server doesn't arrive for 2-2.5\nms (that's milli-sec!) FIN happens at 63155 packets.\n\nNo lost packets, no renegotiation, etc.\n\nCapturing a localhost transfer shows the same pattern, although now\nalmost every single packet from server -> client takes ~ 3 ms\n\nSo, TCP experts out there, what's the scoop? Is libpq/psycopg being very\nconservative, or am I barking up the wrong tree? 
Are there network\nsocket properities I need to be tweaking?\n\nDoes framing up for TCP just take that long when the bits are coming\nfrom the DB? I assume the unix-domain socket case still uses the full\npostgresql messaging protocol, but wouldn't need to worry about\nnetwork-byte-order, etc.\n\nAll the postgres tunable knobs I can see seem to talk about disk IO,\nrather than net IO. Can someone point me at some doco about net IO?\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nThe Connexions Project http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n\n", "msg_date": "Tue, 17 Feb 2009 16:30:02 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "On Tue, Feb 17, 2009 at 2:30 PM, Ross J. Reedstrom <[email protected]> wrote:\n> On Tue, Feb 17, 2009 at 03:14:55PM -0600, Ross J. Reedstrom wrote:\n>> On Tue, Feb 17, 2009 at 01:59:55PM -0700, Rusty Conover wrote:\n>> >\n>> > What is the client software you're using? libpq?\n>> >\n>>\n>> python w/ psycopg (or psycopg2), which wraps libpq. Same results w/\n>> either version.\n>>\n>\n> It's not python networking per se's fault: sending the file via a\n> SimpleHTTPServer, adn fetching w/ wget takes on the order of 0.5 sec.\n> as well.\n>\n>> I think I'll try network sniffing to see if I can find where the\n>> delays are happening.\n>\n> I'm no TCP/IP expert, but some packet capturing, and wireshark analysis\n> makes me suspicious about flow control. the 'netcat' transfer shows lots\n> of packets from server -> client, w/ deltaTs of 8 - 200 usec (that's\n> micro-sec), mostly in the 10-20 range. The client -> server 'ack's seem\n> bursty, happening only every 50-100 packets, then a few back-to-back,\n> all taking 10-20 usec.\n>\n> I also see occasional lost packets, retransmits, and TCP Window Updates\n> in this stream. FIN packet is after 8553 packets.\n>\n> For the libpq driven transfer, I see lots of packets flowing both ways.\n> Seems about every other packet from server to client is 'ack'ed. Each of\n> these 'ack's takes 10 uS to send, but seem to cause the transfer to\n> 'reset', since the next packet from the server doesn't arrive for 2-2.5\n> ms (that's milli-sec!) FIN happens at 63155 packets.\n>\n> No lost packets, no renegotiation, etc.\n>\n> Capturing a localhost transfer shows the same pattern, although now\n> almost every single packet from server -> client takes ~ 3 ms\n>\n> So, TCP experts out there, what's the scoop? Is libpq/psycopg being very\n> conservative, or am I barking up the wrong tree? Are there network\n> socket properities I need to be tweaking?\n>\n> Does framing up for TCP just take that long when the bits are coming\n> from the DB? I assume the unix-domain socket case still uses the full\n> postgresql messaging protocol, but wouldn't need to worry about\n> network-byte-order, etc.\n>\n> All the postgres tunable knobs I can see seem to talk about disk IO,\n> rather than net IO. Can someone point me at some doco about net IO?\n\nWhat's the negotiated window size? That's the amount of data allowed\n\"in flight\" without an ack. The fact that acks happen regularly\nshouldn't be a problem, but if the sender is stalling because it has a\nsmall window, waiting for an ack to be received that could cause a\nlarge slow down.\n\nDo the ack's include any data? 
If so it's indicative of the PG\nnetworking protocol overhead- probably not much you can do about that.\n\nWithout looking at a pcap myself, I'm not sure I can help out any more.\n\n-- \nAaron Turner\nhttp://synfin.net/\nhttp://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety,\ndeserve neither Liberty nor Safety.\n -- Benjamin Franklin\n", "msg_date": "Tue, 17 Feb 2009 15:05:31 -0800", "msg_from": "Aaron Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n\n> On Tue, Feb 17, 2009 at 12:20:02AM -0700, Rusty Conover wrote:\n>> \n>> Try running tests with ttcp to eliminate any PostgreSQL overhead and \n>> find out the real bandwidth between the two machines. If its results \n>> are also slow, you know the problem is TCP related and not PostgreSQL \n>> related.\n>\n> I did in fact run a simple netcat client/server pair and verified that I\n> can transfer that file on 0.12 sec localhost (or hostname), 0.35 over the\n> net, so TCP stack and network are not to blame. This is purely inside\n> the postgresql code issue, I believe.\n\nThere's not much Postgres can do to mess up TCP/IP. The only things that come\nto mind are a) making lots of short-lived connections and b) getting caught by\nNagle when doing lots of short operations and blocking waiting on results.\n\nWhat libpq (or other interface) operations are you doing exactly?\n\n[also, your Mail-Followup-To has a bogus email address in it. Please don't do\nthat]\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Wed, 18 Feb 2009 13:44:23 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "\n> python w/ psycopg (or psycopg2), which wraps libpq. Same results w/\n> either version.\n\n\tI've seen psycopg2 saturate a 100 Mbps ethernet connection (direct \nconnection with crossover cable) between postgres server and client during \na benchmark... I had to change the benchmark to not retrieve a large TEXT \ncolumn to remove this bottleneck... this was last year so versions are \nprobably different, but I don't think this matters a lot...\n\n> Note the 'bare' transfer times added above. Nothing to write home about\n> (~3Mb/sec) but another order of magnitude faster than the postgresql\n> transfer.\n\n\tYou should test with sending a large (>100 MB) amount of data through \nNetcat. This should give you your maximum wire speed. Use /dev/null as the \ntest file, and use \"pv\" (pipe viewer) to measure throughput :\n\nbox 1 : pv < /dev/zero | nc -lp 12345\nbox 2 : nc (ip) 12345 >/dev/null\n\n\tOn gigabit lan you should get 100 MB/s, on 100BaseT about 10 MB/s. If you \ndont get that, there is a problem somewhere (bad cable, bad NIC, slow \nswitch/router, etc). Monitor CPU during this test (vmstat). Usage should \nbe low.\n\n", "msg_date": "Thu, 19 Feb 2009 14:09:04 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "[note: sending a message that's been sitting in 'drafts' since last week]\n\nSummary: C client and large-object API python both send bits in\nreasonable time, but I suspect there's still room for improvement in\nlibpq over TCP: I'm suspicious of the 6x difference. 
Detailed analysis\nwill probably find it's all down to memory allocation and extra copying\nof bits around (client side)\n\nRoss\n\nOn Wed, Feb 18, 2009 at 01:44:23PM +0000, Gregory Stark wrote:\n> \n> There's not much Postgres can do to mess up TCP/IP. The only things that come\n> to mind are a) making lots of short-lived connections and b) getting caught by\n> Nagle when doing lots of short operations and blocking waiting on results.\n\nThe hint re: Nagle sent to off hunting. It looks like libpq _should_ be\nsetting NODELAY on both sides of the socket. However, tcptrace output\ndoes show (what I understand to be) the stereotypical\nevery-other-packet-acked stairstep of a delayed-ack/Nagle interaction.\n(as described here: http://www.stuartcheshire.org/papers/NagleDelayedAck/ )\n\nWalking through the libpq code, though, it sets NODELAY, so Nagle should\nbe out of the picture. This may be a red herring, though. See below.\n\n> What libpq (or other interface) operations are you doing exactly?\n\nI'm using psycopg from python. My cut down test case is:\n\ncon=psycopg.connect('dbname=mydb user=myuser port=5433 host=myhost')\ncur=con.cursor()\nstart=DateTime.now()\ncur.execute(\"\"\"select file from files where fileid=1\"\"\")\ndata = cur.fetchone()[0]\nend=DateTime.now()\nf=open('/dev/null','w')\nf.write(data)\nf.close()\ncur.close()\nprint \"tcp socket: %s\" % str(end - start)\n\nI've since written a minimal C app, and it's doing much better, down to\nabout 7 sec for a local TCP connection (either localhost or hostname)\n\nSo, I get to blame the psycopg wrapper for ~ 30 sec of delay. I'm\nsuspicous of memory allocation, myself.\n\nThe tcp traces (tcpdump + tcptrace + xplot are cool set of tools, btw)\nindicate that the backend's taking ~ 0.35 sec to process the query and\nstart sending bits, and using a domain socket w/ that code gets the file\nin 1.3 - 1.4 sec, so I'm still seeing a 6-fold slowdown for going via\nTCP (6 sec. vs. 1 sec.) Sending the raw file via apache (localhost)\ntakes ~200 ms.\n\nMoving to a large-object based implementation would seem to confirm\nthat: psycopg2 (snapshot of svn head) manages to pull a lo version of\nthe file in times equivalent to the C client (7 sec local)\n\nI'll probably move the system to use that, since there's really almost\nno use-case for access to the insides of these files from SQL.\n\n> [also, your Mail-Followup-To has a bogus email address in it. Please don't do\n> that]\n\nHmm, not on purpose. I'll take a look.\n", "msg_date": "Mon, 23 Feb 2009 13:42:12 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "On Thu, Feb 19, 2009 at 02:09:04PM +0100, PFC wrote:\n> \n> >python w/ psycopg (or psycopg2), which wraps libpq. Same results w/\n> >either version.\n> \n> \tI've seen psycopg2 saturate a 100 Mbps ethernet connection (direct \n> connection with crossover cable) between postgres server and client during \n> a benchmark... I had to change the benchmark to not retrieve a large TEXT \n> column to remove this bottleneck... this was last year so versions are \n> probably different, but I don't think this matters a lot...\n\nHere's the core of the problem: I in fact need to transfer exactly that:\na large single field (bytea in my case). 
I suspect psycopg[12] is having\nissues w/ memory allocation, but that's just an unsupported gut feeling.\n\nThe final upshot is that I need to restructure my config to use the\nlarge-object API (and hence a snapshot of psycopg2) to get decent\nthroughput. \n\n> \tYou should test with sending a large (>100 MB) amount of data \n> \tthrough Netcat. This should give you your maximum wire speed. Use \n> /dev/null as the test file, and use \"pv\" (pipe viewer) to measure \n> throughput :\n> \n> box 1 : pv < /dev/zero | nc -lp 12345\n> box 2 : nc (ip) 12345 >/dev/null\n> \n> \tOn gigabit lan you should get 100 MB/s, on 100BaseT about 10 MB/s. \n\n112 MB/s, and 233 MB/s for localhost. Thanks for the pointer to pv:\nlooks like a nice tool. Investigating this problem has lead me to a\nnumber of nice 'old school' tools: the other is tcptrace and xplot.org.\nI've been hand reading tcpdump output, or clicking around in\nethereal/wireshark. I like tcptrace's approach.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nThe Connexions Project http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n\n", "msg_date": "Mon, 23 Feb 2009 13:43:25 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> Summary: C client and large-object API python both send bits in\n> reasonable time, but I suspect there's still room for improvement in\n> libpq over TCP: I'm suspicious of the 6x difference. Detailed analysis\n> will probably find it's all down to memory allocation and extra copying\n> of bits around (client side)\n\nI wonder if the backend isn't contributing to the problem too. It chops\nits sends up into 8K units, which doesn't seem to create huge overhead\nin my environment but maybe it does in yours. 
It'd be interesting to see\nwhat results you get from the attached quick-and-dirty patch (against\nHEAD, but it should apply back to at least 8.1).\n\n\t\t\tregards, tom lane\n\n\nIndex: src/backend/libpq/pqcomm.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/libpq/pqcomm.c,v\nretrieving revision 1.199\ndiff -c -r1.199 pqcomm.c\n*** src/backend/libpq/pqcomm.c\t1 Jan 2009 17:23:42 -0000\t1.199\n--- src/backend/libpq/pqcomm.c\t23 Feb 2009 21:09:45 -0000\n***************\n*** 124,129 ****\n--- 124,130 ----\n static void pq_close(int code, Datum arg);\n static int\tinternal_putbytes(const char *s, size_t len);\n static int\tinternal_flush(void);\n+ static int\tinternal_send(const char *bufptr, size_t len);\n \n #ifdef HAVE_UNIX_SOCKETS\n static int\tLock_AF_UNIX(unsigned short portNumber, char *unixSocketName);\n***************\n*** 1041,1046 ****\n--- 1042,1056 ----\n \t\tif (PqSendPointer >= PQ_BUFFER_SIZE)\n \t\t\tif (internal_flush())\n \t\t\t\treturn EOF;\n+ \n+ \t\t/*\n+ \t\t * If buffer is empty and we'd fill it, just push the data immediately\n+ \t\t * rather than copying it into PqSendBuffer.\n+ \t\t */\n+ \t\tif (PqSendPointer == 0 && len >= PQ_BUFFER_SIZE)\n+ \t\t\treturn internal_send(s, len);\n+ \n+ \t\t/* Else put (some of) the data into the buffer */\n \t\tamount = PQ_BUFFER_SIZE - PqSendPointer;\n \t\tif (amount > len)\n \t\t\tamount = len;\n***************\n*** 1075,1090 ****\n static int\n internal_flush(void)\n {\n \tstatic int\tlast_reported_send_errno = 0;\n \n! \tchar\t *bufptr = PqSendBuffer;\n! \tchar\t *bufend = PqSendBuffer + PqSendPointer;\n \n \twhile (bufptr < bufend)\n \t{\n \t\tint\t\t\tr;\n \n! \t\tr = secure_write(MyProcPort, bufptr, bufend - bufptr);\n \n \t\tif (r <= 0)\n \t\t{\n--- 1085,1115 ----\n static int\n internal_flush(void)\n {\n+ \tint\t\t\tr;\n+ \n+ \tr = internal_send(PqSendBuffer, PqSendPointer);\n+ \n+ \t/*\n+ \t * On error, we drop the buffered data anyway so that processing can\n+ \t * continue, even though we'll probably quit soon.\n+ \t */\n+ \tPqSendPointer = 0;\n+ \n+ \treturn r;\n+ }\n+ \n+ static int\n+ internal_send(const char *bufptr, size_t len)\n+ {\n \tstatic int\tlast_reported_send_errno = 0;\n \n! \tconst char *bufend = bufptr + len;\n \n \twhile (bufptr < bufend)\n \t{\n \t\tint\t\t\tr;\n \n! \t\tr = secure_write(MyProcPort, (void *) bufptr, bufend - bufptr);\n \n \t\tif (r <= 0)\n \t\t{\n***************\n*** 1108,1118 ****\n \t\t\t\t\t\t errmsg(\"could not send data to client: %m\")));\n \t\t\t}\n \n- \t\t\t/*\n- \t\t\t * We drop the buffered data anyway so that processing can\n- \t\t\t * continue, even though we'll probably quit soon.\n- \t\t\t */\n- \t\t\tPqSendPointer = 0;\n \t\t\treturn EOF;\n \t\t}\n \n--- 1133,1138 ----\n***************\n*** 1120,1126 ****\n \t\tbufptr += r;\n \t}\n \n- \tPqSendPointer = 0;\n \treturn 0;\n }\n \n--- 1140,1145 ----", "msg_date": "Mon, 23 Feb 2009 16:17:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost " }, { "msg_contents": "Excellent. I'll take a look at this and report back here.\n\nRoss\n\n\nOn Mon, Feb 23, 2009 at 04:17:00PM -0500, Tom Lane wrote:\n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> > Summary: C client and large-object API python both send bits in\n> > reasonable time, but I suspect there's still room for improvement in\n> > libpq over TCP: I'm suspicious of the 6x difference. 
Detailed analysis\n> > will probably find it's all down to memory allocation and extra copying\n> > of bits around (client side)\n> \n> I wonder if the backend isn't contributing to the problem too. It chops\n> its sends up into 8K units, which doesn't seem to create huge overhead\n> in my environment but maybe it does in yours. It'd be interesting to see\n> what results you get from the attached quick-and-dirty patch (against\n> HEAD, but it should apply back to at least 8.1).\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Feb 2009 11:02:01 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "Ross J. Reedstrom escribi�:\n> Excellent. I'll take a look at this and report back here.\n> \n> Ross\n> \n> \n> On Mon, Feb 23, 2009 at 04:17:00PM -0500, Tom Lane wrote:\n>> \"Ross J. Reedstrom\" <[email protected]> writes:\n>>> Summary: C client and large-object API python both send bits in\n>>> reasonable time, but I suspect there's still room for improvement in\n>>> libpq over TCP: I'm suspicious of the 6x difference. Detailed analysis\n>>> will probably find it's all down to memory allocation and extra copying\n>>> of bits around (client side)\n>> I wonder if the backend isn't contributing to the problem too. It chops\n>> its sends up into 8K units, which doesn't seem to create huge overhead\n>> in my environment but maybe it does in yours. It'd be interesting to see\n>> what results you get from the attached quick-and-dirty patch (against\n>> HEAD, but it should apply back to at least 8.1).\n>>\n>> \t\t\tregards, tom lane\n> \n\nHello, i have been having a problem like this in debian machines and i have \ndiscovered that (almost in my case), the problem only arises when i am using \n\"ssl = true\" in postgresql.conf although i am using clear tcp connections to \nlocalhost to perform my query, if i disable ssl in configuration my localhost \nquery times goes from 4200ms to 110ms, the same parameter does not have this \neffect in my Arch Linux development machine, so maybe you should see how this \nparameter affect your setup Ross. My original post to general list is in \nhttp://archives.postgresql.org/pgsql-general/2009-02/msg01297.php for more \ninformation.\n\nRegards,\nMiguel Angel.\n", "msg_date": "Sun, 01 Mar 2009 18:32:28 +0100", "msg_from": "Linos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "Linos <[email protected]> writes:\n> Hello, i have been having a problem like this in debian machines and i have \n> discovered that (almost in my case), the problem only arises when i am using \n> \"ssl = true\" in postgresql.conf although i am using clear tcp connections to \n> localhost to perform my query, if i disable ssl in configuration my localhost \n> query times goes from 4200ms to 110ms,\n\nDoes that number include connection startup overhead? (If it doesn't,\nit'd be pretty strange.) 
Ross's problem is not about startup overhead,\nunless I've misread him completely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Mar 2009 12:43:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost " }, { "msg_contents": "Tom Lane escribi�:\n> Linos <[email protected]> writes:\n>> Hello, i have been having a problem like this in debian machines and i have \n>> discovered that (almost in my case), the problem only arises when i am using \n>> \"ssl = true\" in postgresql.conf although i am using clear tcp connections to \n>> localhost to perform my query, if i disable ssl in configuration my localhost \n>> query times goes from 4200ms to 110ms,\n> \n> Does that number include connection startup overhead? (If it doesn't,\n> it'd be pretty strange.) Ross's problem is not about startup overhead,\n> unless I've misread him completely.\n> \n> \t\t\tregards, tom lane\n\nThis difference it is from the runtime of the query, i get this with \\timing \nparameter in psql, it is from a table that have 300 small png (one for every row \nin table) on a bytea column but the problem grows with any large result anyway, \ni have attacted pcap files in general list but the differences are like this:\n\nssl enabled:\n`psql -d database`: SELECT * FROM TABLE (110 ms with \\timing)\n`psql -d database -h localhost`: SELECT * FROM TABLE (4200 ms with \\timing)\n\nssl disabled:\n`psql -d database`: SELECT * FROM TABLE (110 ms with \\timing)\n`psql -d database -h localhost`: SELECT * FROM TABLE (120 ~ 130 ms with \\timing)\n\nAnyway i dont know if this apply to Ross problem but reading his post and after \nsee that he is using debian and have problem with speed on tcp localhost i \nsuppose that maybe have the same problem.\n\nRegards,\nMiguel Angel\n", "msg_date": "Sun, 01 Mar 2009 18:57:03 +0100", "msg_from": "Linos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "Linos <[email protected]> writes:\n> Tom Lane escribi�:\n>> Does that number include connection startup overhead? (If it doesn't,\n>> it'd be pretty strange.)\n\n> This difference it is from the runtime of the query, i get this with \\timing \n> parameter in psql,\n\nThat's just weird --- ssl off should be ssl off no matter which knob you\nuse to turn it off. Are you sure it's really off in the slow connections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Mar 2009 13:10:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost " }, { "msg_contents": "Tom Lane escribi�:\n> Linos <[email protected]> writes:\n>> Tom Lane escribi�:\n>>> Does that number include connection startup overhead? (If it doesn't,\n>>> it'd be pretty strange.)\n> \n>> This difference it is from the runtime of the query, i get this with \\timing \n>> parameter in psql,\n> \n> That's just weird --- ssl off should be ssl off no matter which knob you\n> use to turn it off. Are you sure it's really off in the slow connections?\n> \n> \t\t\tregards, tom lane\n\nMaybe i am missing something, i use the same command to connect to it from \nlocalhost \"psql -d database -h localhost\" and in the pcap files i have captured \nthe protocol it is clear (with \"ssl = false\" or \"ssl = true\" either), but in the \ndebian machine with \"ssl = true\" in postgresql.conf you can see in the pcap file \nbig time jumps between data packets, psql commandline enables automatically ssl \nif the server supports it? 
but if this is the case i should see encrypted \ntraffic in pcap files, and the problem should be the same in Arch Linux, no? If \nyou want that i make a local test explain me the steps and i will try here.\n\nRegards,\nMiguel Angel.\n", "msg_date": "Sun, 01 Mar 2009 19:19:59 +0100", "msg_from": "Linos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "Linos <[email protected]> writes:\n> Tom Lane escribi�:\n>> That's just weird --- ssl off should be ssl off no matter which knob you\n>> use to turn it off. Are you sure it's really off in the slow connections?\n\n> Maybe i am missing something, i use the same command to connect to it\n> from localhost \"psql -d database -h localhost\" and in the pcap files i\n> have captured the protocol it is clear (with \"ssl = false\" or \"ssl =\n> true\" either), but in the debian machine with \"ssl = true\" in\n> postgresql.conf you can see in the pcap file big time jumps between\n> data packets, psql commandline enables automatically ssl if the server\n> supports it?\n\nYeah, the default behavior is to do SSL if supported; see PGSSLMODE.\nNon-TCP connections never do SSL, though. One possibility to check\nis that one of the two distros has altered the default value of\nPGSSLMODE.\n\n> but if this is the case i should see encrypted traffic in\n> pcap files,\n\nI would suppose so, so there's something that doesn't quite add up here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Mar 2009 13:28:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost " }, { "msg_contents": "Tom Lane wrote:\n> Linos <[email protected]> writes:\n>> Tom Lane escribi�:\n>>> That's just weird --- ssl off should be ssl off no matter which knob you\n>>> use to turn it off. Are you sure it's really off in the slow connections?\n> \n>> Maybe i am missing something, i use the same command to connect to it\n>> from localhost \"psql -d database -h localhost\" and in the pcap files i\n>> have captured the protocol it is clear (with \"ssl = false\" or \"ssl =\n>> true\" either), but in the debian machine with \"ssl = true\" in\n>> postgresql.conf you can see in the pcap file big time jumps between\n>> data packets, psql commandline enables automatically ssl if the server\n>> supports it?\n> \n> Yeah, the default behavior is to do SSL if supported; see PGSSLMODE.\n> Non-TCP connections never do SSL, though. One possibility to check\n> is that one of the two distros has altered the default value of\n> PGSSLMODE.\n\nIIRC, debian ships with a default certificate for the postgres\ninstallation, so it can actually *use* SSL by default. I don't know if\nother distros do that - I think most require you to actually create a\ncertificate yourself.\n\n//Magnus\n\n", "msg_date": "Sun, 01 Mar 2009 19:40:10 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" }, { "msg_contents": "Magnus Hagander escribi�:\n> Tom Lane wrote:\n>> Linos <[email protected]> writes:\n>>> Tom Lane escribi�:\n>>>> That's just weird --- ssl off should be ssl off no matter which knob you\n>>>> use to turn it off. 
Are you sure it's really off in the slow connections?\n>>> Maybe i am missing something, i use the same command to connect to it\n>>> from localhost \"psql -d database -h localhost\" and in the pcap files i\n>>> have captured the protocol it is clear (with \"ssl = false\" or \"ssl =\n>>> true\" either), but in the debian machine with \"ssl = true\" in\n>>> postgresql.conf you can see in the pcap file big time jumps between\n>>> data packets, psql commandline enables automatically ssl if the server\n>>> supports it?\n>> Yeah, the default behavior is to do SSL if supported; see PGSSLMODE.\n>> Non-TCP connections never do SSL, though. One possibility to check\n>> is that one of the two distros has altered the default value of\n>> PGSSLMODE.\n> \n> IIRC, debian ships with a default certificate for the postgres\n> installation, so it can actually *use* SSL by default. I don't know if\n> other distros do that - I think most require you to actually create a\n> certificate yourself.\n> \n> //Magnus\n\nYeah i have tested with PGSSLMODE environment and it makes the difference when \nit is activated, debian ships with a cert that makes it enabled by default but \nArch Linux no, i get with wireshark in the data packets from postgresql \n\"unreassembled packet\" so i thought that was the same but obviously one it is \nusing ssl and the other not, and before now i have not noticed but psql gives me \nthe hint that it is connect by ssl with the line \"conexi�n SSL (cifrado: \nDHE-RSA-AES256-SHA, bits: 256)\" after connect, i did not know that ssl activated \nwould have this speed penalty, goes from 110 ms to 4200ms, Thanks Tom and Magnus \nfor the help.\n\nRegards,\nMiguel Angel.\n", "msg_date": "Sun, 01 Mar 2009 19:52:21 +0100", "msg_from": "Linos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TCP network cost" } ]
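A note on the thread above: since the localhost slowdown turned out to be SSL silently negotiated on the loopback connection (Debian shipping a default certificate), a quick way to confirm what a session is actually doing is the sslinfo contrib module (in this era it is installed by running its contrib SQL script, not CREATE EXTENSION). This is only a hedged sketch: the table and column names below are hypothetical, and the 110 ms vs ~4 s figures are the ones Linos reported, not a general expectation.

    -- inside psql, with sslinfo installed:
    SELECT ssl_is_used();

    -- repeat the timing test twice, starting psql once as
    --   PGSSLMODE=disable psql -d database -h localhost
    -- and once as
    --   PGSSLMODE=require psql -d database -h localhost
    \timing
    SELECT data FROM some_bytea_table;  -- any query returning a few MB of bytea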
[ { "msg_contents": "Hi,\n\nLet's say I have a table (tbl) with two columns: id1, id2.\nI have an index on (id1,id2)\nAnd I would like to query the (12;34) - (56;78) range (so it also may\ncontain (12;58), (13;10), (40;80) etc.). With the index this can be done\nquite efficiently in theory, but I cannot find a way to make this happen. I\ntriy this in the WHERE clause:\n\nWHERE (id1>12 or id1=12 and id2>=34) and (id1<56 or id1=56 and id2<=78)\n\nI created a big enough table (131072 records, and it had also a 3rd field\nwith about 120 character text data).\nBut Postgres performs a SeqScan. I have analyzed the table before it.\nI also tried Row constructors with a Between expression, but in this case\nPostgres handled the elements of the row independently, and this led to\nfalse query result.\n\nWhat should I write in the Where clause to get Postgres to perform an\nIndexScan?\n\nI would like to apply this to other datatypes also, not just ints.\n\nThanks in advance,\nOtto\n\nHi,Let's say I have a table (tbl) with two columns: id1, id2.I have an index on (id1,id2)And I would like to query the (12;34) - (56;78) range (so it also may contain (12;58), (13;10), (40;80) etc.). With the index this can be done quite efficiently in theory, but I cannot find a way to make this happen. I triy this in the WHERE clause:\nWHERE (id1>12 or id1=12 and id2>=34) and (id1<56 or id1=56 and id2<=78)I created a big enough table (131072 records, and it had also a 3rd field with about 120 character text data). But Postgres performs a SeqScan. I have analyzed the table before it.\nI also tried Row constructors with a Between expression, but in this case Postgres handled the elements of the row independently, and this led to false query result.\nWhat should I write in the Where clause to get Postgres to perform an IndexScan?I would like to apply this to other datatypes also, not just ints.Thanks in advance,Otto", "msg_date": "Tue, 17 Feb 2009 13:44:48 +0100", "msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>", "msg_from_op": true, "msg_subject": "Query composite index range in an efficient way" }, { "msg_contents": "On Tue, 17 Feb 2009, Havasv�lgyi Ott� wrote:\n> I created a big enough table (131072 records, and it had also a 3rd \n> field with about 120 character text data). But Postgres performs a \n> SeqScan.\n\nFirstly, you should always post EXPLAIN ANALYSE results when asking about \na planning problem.\n\nSecondly, you can't \"get\" Postgres to choose a particular plan (without \ndisruptive fiddling with the planner). Postgres will try to choose the \nplan that answers the query fastest, and this may be a sequential scan.\n\nWhat happens if you use the following WHERE clause?\n\nWHERE id1 > 12 AND id1 < 56\n\nDoes Postgres use a sequential scan then?\n\nHow many rows does your query return? 
If it's more than about 10% of the \ntotal rows in the table, then a sequential scan is probably the fastest \nmethod.\n\nMatthew\n\n-- \n Matthew: That's one of things about Cambridge - all the roads keep changing\n names as you walk along them, like Hills Road in particular.\n Sagar: Yes, Sidney Street is a bit like that too.\n Matthew: Sidney Street *is* Hills Road.", "msg_date": "Tue, 17 Feb 2009 12:56:54 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query composite index range in an efficient way" }, { "msg_contents": "\nHavasvölgyi Ottó <[email protected]> writes:\n\n> I also tried Row constructors with a Between expression, but in this case\n> Postgres handled the elements of the row independently, and this led to\n> false query result.\n\nWhat version of Postgres is this? row constructors were fixed a long time ago\nto not do that and the main benefit of that was precisely that this type of\nexpression could use a multi-column index effectively.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Tue, 17 Feb 2009 13:10:11 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query composite index range in an efficient way" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> Havasv�lgyi Ott� <[email protected]> writes:\n>> I also tried Row constructors with a Between expression, but in this case\n>> Postgres handled the elements of the row independently, and this led to\n>> false query result.\n\n> What version of Postgres is this? row constructors were fixed a long time ago\n> to not do that and the main benefit of that was precisely that this type of\n> expression could use a multi-column index effectively.\n\nThat depends on whether you think 8.2 is \"a long time ago\" ;-). 
But\nyeah, row comparisons in a modern Postgres version are the way to handle\nthis.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Feb 2009 10:53:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query composite index range in an efficient way " }, { "msg_contents": ">>> Havasvᅵlgyi Ottᅵ <[email protected]> wrote: \n \n> WHERE (id1>12 or id1=12 and id2>=34)\n> and (id1<56 or id1=56 and id2<=78)\n \nAs others have pointed out, if you are using 8.2 or later, you should\nwrite this as:\n \nWHERE (id1, id2) >= (12, 34) and (id1, id2) <= (56, 78)\n \nOn earlier versions you might want to try the logically equivalent:\n \nWHERE (id1 >= 12 and (id1 > 12 or id2 >= 34))\n and (id1 <= 56 and (id1 < 56 or id2 <= 78))\n \n-Kevin\n", "msg_date": "Tue, 17 Feb 2009 10:16:13 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query composite index range in an efficient way" }, { "msg_contents": "Thanks, it's a very good idea!\nOtto\n\n\n2009/2/17 Kevin Grittner <[email protected]>\n\n> >>> Havasvölgyi Ottó <[email protected]> wrote:\n>\n> > WHERE (id1>12 or id1=12 and id2>=34)\n> > and (id1<56 or id1=56 and id2<=78)\n>\n> As others have pointed out, if you are using 8.2 or later, you should\n> write this as:\n>\n> WHERE (id1, id2) >= (12, 34) and (id1, id2) <= (56, 78)\n>\n> On earlier versions you might want to try the logically equivalent:\n>\n> WHERE (id1 >= 12 and (id1 > 12 or id2 >= 34))\n> and (id1 <= 56 and (id1 < 56 or id2 <= 78))\n>\n> -Kevin\n>\n\nThanks, it's a very good idea!Otto2009/2/17 Kevin Grittner <[email protected]>\n>>> Havasvölgyi Ottó <[email protected]> wrote:\n\n> WHERE (id1>12 or id1=12 and id2>=34)\n>   and (id1<56 or id1=56 and id2<=78)\n\nAs others have pointed out, if you are using 8.2 or later, you should\nwrite this as:\n\nWHERE (id1, id2) >= (12, 34) and (id1, id2) <= (56, 78)\n\nOn earlier versions you might want to try the logically equivalent:\n\nWHERE (id1 >= 12 and (id1 > 12 or id2 >= 34))\n  and (id1 <= 56 and (id1 < 56 or id2 <= 78))\n\n-Kevin", "msg_date": "Tue, 17 Feb 2009 19:55:50 +0100", "msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query composite index range in an efficient way" } ]
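To pull the outcome of the thread above together, here is a minimal runnable sketch of the row-comparison form on 8.2 or later. The table name, index columns and constant values follow the original post; the integer column types and NOT NULL constraints are assumptions made for the example, and the exact EXPLAIN output will vary with the data.

    CREATE TABLE tbl (id1 integer NOT NULL, id2 integer NOT NULL, filler text);
    CREATE INDEX tbl_id1_id2_idx ON tbl (id1, id2);
    -- load data, then:
    ANALYZE tbl;

    EXPLAIN ANALYZE
    SELECT * FROM tbl
    WHERE (id1, id2) >= (12, 34) AND (id1, id2) <= (56, 78);
    -- With a selective enough range this can show an index scan whose
    -- Index Cond uses both row comparisons, instead of the seq scan with a
    -- Filter that the OR/AND formulation in the first message produced.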
[ { "msg_contents": "Hi,\n\nI have a query SELECT * FROM myTable WHERE ((myCol = var1) OR (myCol = \nvar2))\n\n.. which produces the following EXPLAIN output:\n\n\nIndex Scan using myIndex on myTable (cost=0.00..8.28 rows=1 width=537)\n Filter: ((myCol = $1) OR (myCol = $2))\n\n\nThe index \"myIndex\" is an index on column \"myCol\". However, I expected an \n\"Index Cond:\" row instead of a \"Filter:\" row, which normally indicates a \nperformance problem.\nShould I rather split the OR and use two SELECT statements to get my \nresult (var1!=var2)? In that case, \"Index Cond:\" is output by \nEXPLAIN.\n\n--Jörg Kiegeland\n", "msg_date": "Tue, 17 Feb 2009 15:33:40 +0100", "msg_from": "=?ISO-8859-1?Q?J=F6rg_Kiegeland?= <[email protected]>", "msg_from_op": true, "msg_subject": "Cannot interpret EXPLAIN" } ]
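The message above has no follow-up in this archive. As a hedged sketch only, here are two common rewrites of an OR of equalities on a single indexed column that usually let the planner report an Index Cond (or a BitmapOr of index scans) rather than a Filter. The integer parameter type is an assumption; the table, column and index names are the ones from the message, and a plain btree index on myCol is assumed.

    -- 1. Collapse the OR into IN / = ANY, which can be used as an index qual:
    PREPARE q1(integer, integer) AS
        SELECT * FROM myTable WHERE myCol IN ($1, $2);
    EXPLAIN EXECUTE q1(1, 2);

    -- 2. The explicit split the poster asks about, equivalent when $1 <> $2:
    PREPARE q2(integer, integer) AS
        SELECT * FROM myTable WHERE myCol = $1
        UNION ALL
        SELECT * FROM myTable WHERE myCol = $2;
    EXPLAIN EXECUTE q2(1, 2);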
[ { "msg_contents": "Hi,\n\nI have table containing bytea and text columns. It is my storage for\nimage files and it's labels. Labels can be 'original' and 'thumbnail'.\nI've C-function defined in *.so library and corresponding declaration in\npostgres for scaling image. This function scale image and insert it into\nthe same table with the label 'thumbnail'. I have trigger on before\ninsert or update on the table which make thumbnail for image labeled as\n'original'.\n\nInserting single image into the table takes about 3 SECONDS!. But call\nof scaling function directly in psql command prompt is approximately 20\ntimes faster. If I comment out scaling function call in the trigger,\ninsertion, and it is evident, becomes immediate (very fast).\n\nHere my somehow pseudo code:\n\nCREATE TABLE images_meta\n(\n data bytea,\n label text\n);\n\nCREATE FUNCTION imscale(data bytea, width integer)\n RETURNS integer AS 'libmylib.so', 'imscale' LANGUAGE 'c';\n\nCREATE FUNCTION auto_scale() RETURNS trigger AS $$\n DECLARE\n notused integer;\n BEGIN\n IF NEW.label = 'original' THEN\n notused := imscale(NEW.data, 128);\n END IF;\n RETURN NEW;\n END;\n$$ LANGUAGE PLPGSQL;\n\n", "msg_date": "Tue, 17 Feb 2009 19:17:14 +0300", "msg_from": "Alexander Gorban <[email protected]>", "msg_from_op": true, "msg_subject": "Call of function inside trigger much slower than explicit function\n\tcall" }, { "msg_contents": "On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban\n<[email protected]> wrote:\n> Hi,\n>\n> I have table containing bytea and text columns. It is my storage for\n> image files and it's labels. Labels can be 'original' and 'thumbnail'.\n> I've C-function defined in *.so library and corresponding declaration in\n> postgres for scaling image. This function scale image and insert it into\n> the same table with the label 'thumbnail'. I have trigger on before\n> insert or update on the table which make thumbnail for image labeled as\n> 'original'.\n>\n> Inserting single image into the table takes about 3 SECONDS!. But call\n> of scaling function directly in psql command prompt is approximately 20\n> times faster. If I comment out scaling function call in the trigger,\n> insertion, and it is evident, becomes immediate (very fast).\n>\n> Here my somehow pseudo code:\n>\n> CREATE TABLE images_meta\n> (\n> data bytea,\n> label text\n> );\n>\n> CREATE FUNCTION imscale(data bytea, width integer)\n> RETURNS integer AS 'libmylib.so', 'imscale' LANGUAGE 'c';\n>\n> CREATE FUNCTION auto_scale() RETURNS trigger AS $$\n> DECLARE\n> notused integer;\n> BEGIN\n> IF NEW.label = 'original' THEN\n> notused := imscale(NEW.data, 128);\n> END IF;\n> RETURN NEW;\n> END;\n> $$ LANGUAGE PLPGSQL;\n\nWell my first guess is that when you actually do the insertion you\nhave to transfer the file from the client to the database, but when\nyou subsequently call the function by hand you're calling it on data\nthat is already in the database, so there's no transfer time... how\nbig are these images, anyway?\n\n...Robert\n", "msg_date": "Tue, 17 Feb 2009 12:24:58 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call of function inside trigger much slower than\n\texplicit function call" }, { "msg_contents": "В Втр, 17/02/2009 в 12:24 -0500, Robert Haas пишет:\n> On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban\n> <[email protected]> wrote:\n> > Hi,\n> >\n> > I have table containing bytea and text columns. It is my storage for\n> > image files and it's labels. 
Labels can be 'original' and 'thumbnail'.\n> > I've C-function defined in *.so library and corresponding declaration in\n> > postgres for scaling image. This function scale image and insert it into\n> > the same table with the label 'thumbnail'. I have trigger on before\n> > insert or update on the table which make thumbnail for image labeled as\n> > 'original'.\n> >\n> > Inserting single image into the table takes about 3 SECONDS!. But call\n> > of scaling function directly in psql command prompt is approximately 20\n> > times faster. If I comment out scaling function call in the trigger,\n> > insertion, and it is evident, becomes immediate (very fast).\n> >\n> > Here my somehow pseudo code:\n> >\n> > CREATE TABLE images_meta\n> > (\n> > data bytea,\n> > label text\n> > );\n> >\n> > CREATE FUNCTION imscale(data bytea, width integer)\n> > RETURNS integer AS 'libmylib.so', 'imscale' LANGUAGE 'c';\n> >\n> > CREATE FUNCTION auto_scale() RETURNS trigger AS $$\n> > DECLARE\n> > notused integer;\n> > BEGIN\n> > IF NEW.label = 'original' THEN\n> > notused := imscale(NEW.data, 128);\n> > END IF;\n> > RETURN NEW;\n> > END;\n> > $$ LANGUAGE PLPGSQL;\n> \n> Well my first guess is that when you actually do the insertion you\n> have to transfer the file from the client to the database, but when\n> you subsequently call the function by hand you're calling it on data\n> that is already in the database, so there's no transfer time... how\n> big are these images, anyway?\n> \n> ...Robert\n\nAlso I've defined function to load images from disk directly inside sql\nquery:\n\nCREATE FUNCTION bytea_load_from_file(path text) RETURNS BYTEA \nAS 'libmylib.so','bytea_load_from_file' LANGUAGE C;\n\nand use it in both cases - for insertion of image and to call function\ndirectly. So, there is no difference it times spent for image loading.\nHere is code that I use\n1. Insertion example:\ntest_base=# insert INTO images_meta(label,data) VALUES('original',\nbytea_load_from_file('/tmp/test.jpg')); \n\n2. Direct call:\ntest_base=#select imscale(bytea_load_from_file('/tmp/test.jpg'),128);\n\nI realize, that insertion require more operations to perform (insert\ninitial image, fire after insert trigger, insert thumbnail, fire trigger\nagain after insertion thumbnail). But these operations not seems very\nhard.\n\nSize of image, that I use for tests is about 2MB. That is why 3sec. it\nis very long time to process it. \n\n", "msg_date": "Tue, 17 Feb 2009 20:46:43 +0300", "msg_from": "Alexander Gorban <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Call of function inside trigger much slower than\n\texplicit function call" }, { "msg_contents": "On Tue, Feb 17, 2009 at 12:46 PM, Alexander Gorban\n<[email protected]> wrote:\n> В Втр, 17/02/2009 в 12:24 -0500, Robert Haas пишет:\n>> On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban\n>> <[email protected]> wrote:\n>> > Hi,\n>> >\n>> > I have table containing bytea and text columns. It is my storage for\n>> > image files and it's labels. Labels can be 'original' and 'thumbnail'.\n>> > I've C-function defined in *.so library and corresponding declaration in\n>> > postgres for scaling image. This function scale image and insert it into\n>> > the same table with the label 'thumbnail'. I have trigger on before\n>> > insert or update on the table which make thumbnail for image labeled as\n>> > 'original'.\n>> >\n>> > Inserting single image into the table takes about 3 SECONDS!. 
But call\n>> > of scaling function directly in psql command prompt is approximately 20\n>> > times faster. If I comment out scaling function call in the trigger,\n>> > insertion, and it is evident, becomes immediate (very fast).\n>> >\n>> > Here my somehow pseudo code:\n>> >\n>> > CREATE TABLE images_meta\n>> > (\n>> > data bytea,\n>> > label text\n>> > );\n>> >\n>> > CREATE FUNCTION imscale(data bytea, width integer)\n>> > RETURNS integer AS 'libmylib.so', 'imscale' LANGUAGE 'c';\n>> >\n>> > CREATE FUNCTION auto_scale() RETURNS trigger AS $$\n>> > DECLARE\n>> > notused integer;\n>> > BEGIN\n>> > IF NEW.label = 'original' THEN\n>> > notused := imscale(NEW.data, 128);\n>> > END IF;\n>> > RETURN NEW;\n>> > END;\n>> > $$ LANGUAGE PLPGSQL;\n>>\n>> Well my first guess is that when you actually do the insertion you\n>> have to transfer the file from the client to the database, but when\n>> you subsequently call the function by hand you're calling it on data\n>> that is already in the database, so there's no transfer time... how\n>> big are these images, anyway?\n>>\n>> ...Robert\n>\n> Also I've defined function to load images from disk directly inside sql\n> query:\n>\n> CREATE FUNCTION bytea_load_from_file(path text) RETURNS BYTEA\n> AS 'libmylib.so','bytea_load_from_file' LANGUAGE C;\n>\n> and use it in both cases - for insertion of image and to call function\n> directly. So, there is no difference it times spent for image loading.\n> Here is code that I use\n> 1. Insertion example:\n> test_base=# insert INTO images_meta(label,data) VALUES('original',\n> bytea_load_from_file('/tmp/test.jpg'));\n>\n> 2. Direct call:\n> test_base=#select imscale(bytea_load_from_file('/tmp/test.jpg'),128);\n>\n> I realize, that insertion require more operations to perform (insert\n> initial image, fire after insert trigger, insert thumbnail, fire trigger\n> again after insertion thumbnail). But these operations not seems very\n> hard.\n>\n> Size of image, that I use for tests is about 2MB. That is why 3sec. it\n> is very long time to process it.\n\nWell, that does sound weird... can you post the full definition for\nthe images_meta table? Are there any other triggers on that table?\nIs it referenced by any foreign keys? How fast is the insert if you\ndrop the trigger?\n\n...Robert\n", "msg_date": "Tue, 17 Feb 2009 14:31:04 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call of function inside trigger much slower than\n\texplicit function call" }, { "msg_contents": "I do no, but you really need rescale the image when he comes to database? or\ncan you doing this after, in a schudeled job?\nIf you need resize the image en it comes, I believe you pay a price related\nabout performance because the this is working to save image, the toast\nstrtucture are receiving the data and the server need sync this... all at\nsame time...\ncan you mail the times for the insert with and without this trigger?\n\n2009/2/17 Robert Haas <[email protected]>\n\n> On Tue, Feb 17, 2009 at 12:46 PM, Alexander Gorban\n> <[email protected]> wrote:\n> > В Втр, 17/02/2009 в 12:24 -0500, Robert Haas пишет:\n> >> On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban\n> >> <[email protected]> wrote:\n> >> > Hi,\n> >> >\n> >> > I have table containing bytea and text columns. It is my storage for\n> >> > image files and it's labels. Labels can be 'original' and 'thumbnail'.\n> >> > I've C-function defined in *.so library and corresponding declaration\n> in\n> >> > postgres for scaling image. 
This function scale image and insert it\n> into\n> >> > the same table with the label 'thumbnail'. I have trigger on before\n> >> > insert or update on the table which make thumbnail for image labeled\n> as\n> >> > 'original'.\n> >> >\n> >> > Inserting single image into the table takes about 3 SECONDS!. But call\n> >> > of scaling function directly in psql command prompt is approximately\n> 20\n> >> > times faster. If I comment out scaling function call in the trigger,\n> >> > insertion, and it is evident, becomes immediate (very fast).\n> >> >\n> >> > Here my somehow pseudo code:\n> >> >\n> >> > CREATE TABLE images_meta\n> >> > (\n> >> > data bytea,\n> >> > label text\n> >> > );\n> >> >\n> >> > CREATE FUNCTION imscale(data bytea, width integer)\n> >> > RETURNS integer AS 'libmylib.so', 'imscale' LANGUAGE 'c';\n> >> >\n> >> > CREATE FUNCTION auto_scale() RETURNS trigger AS $$\n> >> > DECLARE\n> >> > notused integer;\n> >> > BEGIN\n> >> > IF NEW.label = 'original' THEN\n> >> > notused := imscale(NEW.data, 128);\n> >> > END IF;\n> >> > RETURN NEW;\n> >> > END;\n> >> > $$ LANGUAGE PLPGSQL;\n> >>\n> >> Well my first guess is that when you actually do the insertion you\n> >> have to transfer the file from the client to the database, but when\n> >> you subsequently call the function by hand you're calling it on data\n> >> that is already in the database, so there's no transfer time... how\n> >> big are these images, anyway?\n> >>\n> >> ...Robert\n> >\n> > Also I've defined function to load images from disk directly inside sql\n> > query:\n> >\n> > CREATE FUNCTION bytea_load_from_file(path text) RETURNS BYTEA\n> > AS 'libmylib.so','bytea_load_from_file' LANGUAGE C;\n> >\n> > and use it in both cases - for insertion of image and to call function\n> > directly. So, there is no difference it times spent for image loading.\n> > Here is code that I use\n> > 1. Insertion example:\n> > test_base=# insert INTO images_meta(label,data) VALUES('original',\n> > bytea_load_from_file('/tmp/test.jpg'));\n> >\n> > 2. Direct call:\n> > test_base=#select imscale(bytea_load_from_file('/tmp/test.jpg'),128);\n> >\n> > I realize, that insertion require more operations to perform (insert\n> > initial image, fire after insert trigger, insert thumbnail, fire trigger\n> > again after insertion thumbnail). But these operations not seems very\n> > hard.\n> >\n> > Size of image, that I use for tests is about 2MB. That is why 3sec. it\n> > is very long time to process it.\n>\n> Well, that does sound weird... can you post the full definition for\n> the images_meta table? Are there any other triggers on that table?\n> Is it referenced by any foreign keys? How fast is the insert if you\n> drop the trigger?\n>\n> ...Robert\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nIvo Nascimento - Iann\n-------------------------------------\n| twitter: ivonascimento . |\n| http://ianntech.com.br. |\n| ZCE ID 227463685 |\n-------------------------------------\n\nI do no, but you really need rescale the image when he comes to database? or can you doing this after, in a schudeled job?\nIf you need resize the image  en it comes, I believe you pay a price\nrelated about performance because the this is working to save image,\nthe toast strtucture are receiving the data and  the server need sync\nthis... 
all at same time...\ncan you mail the times for the insert with and without this trigger?2009/2/17 Robert Haas <[email protected]>\nOn Tue, Feb 17, 2009 at 12:46 PM, Alexander Gorban\n<[email protected]> wrote:\n> В Втр, 17/02/2009 в 12:24 -0500, Robert Haas пишет:\n>> On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban\n>> <[email protected]> wrote:\n>> > Hi,\n>> >\n>> > I have table containing bytea and text columns. It is my storage for\n>> > image files and it's labels. Labels can be 'original' and 'thumbnail'.\n>> > I've C-function defined in *.so library and corresponding declaration in\n>> > postgres for scaling image. This function scale image and insert it into\n>> > the same table with the label 'thumbnail'. I have trigger on before\n>> > insert or update on the table which make thumbnail for image labeled as\n>> > 'original'.\n>> >\n>> > Inserting single image into the table takes about 3 SECONDS!. But call\n>> > of scaling function directly in psql command prompt is approximately 20\n>> > times faster. If I comment out scaling function call in the trigger,\n>> > insertion, and it is evident, becomes immediate (very fast).\n>> >\n>> > Here my somehow pseudo code:\n>> >\n>> > CREATE TABLE images_meta\n>> > (\n>> >  data bytea,\n>> >  label text\n>> > );\n>> >\n>> > CREATE FUNCTION imscale(data bytea, width integer)\n>> >  RETURNS integer AS 'libmylib.so', 'imscale' LANGUAGE 'c';\n>> >\n>> > CREATE FUNCTION auto_scale() RETURNS trigger AS $$\n>> >  DECLARE\n>> >    notused integer;\n>> >  BEGIN\n>> >    IF NEW.label = 'original' THEN\n>> >      notused := imscale(NEW.data, 128);\n>> >    END IF;\n>> >    RETURN NEW;\n>> >  END;\n>> > $$ LANGUAGE PLPGSQL;\n>>\n>> Well my first guess is that when you actually do the insertion you\n>> have to transfer the file from the client to the database, but when\n>> you subsequently call the function by hand you're calling it on data\n>> that is already in the database, so there's no transfer time...  how\n>> big are these images, anyway?\n>>\n>> ...Robert\n>\n> Also I've defined function to load images from disk directly inside sql\n> query:\n>\n> CREATE FUNCTION bytea_load_from_file(path text) RETURNS BYTEA\n> AS 'libmylib.so','bytea_load_from_file' LANGUAGE C;\n>\n> and use it in both cases - for insertion of image and to call function\n> directly. So, there is no difference it times spent for image loading.\n> Here is code that I use\n> 1. Insertion example:\n> test_base=# insert INTO images_meta(label,data) VALUES('original',\n> bytea_load_from_file('/tmp/test.jpg'));\n>\n> 2. Direct call:\n> test_base=#select imscale(bytea_load_from_file('/tmp/test.jpg'),128);\n>\n> I realize, that insertion require more operations to perform (insert\n> initial image, fire after insert trigger, insert thumbnail, fire trigger\n> again after insertion thumbnail). But these operations not seems very\n> hard.\n>\n> Size of image, that I use for tests is about 2MB. That is why 3sec. it\n> is very long time to process it.\n\nWell, that does sound weird... can you post the full definition for\nthe images_meta table?  Are there any other triggers on that table?\nIs it referenced by any foreign keys?  How fast is the insert if you\ndrop the trigger?\n\n...Robert\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Ivo Nascimento - Iann-------------------------------------|   twitter: ivonascimento .     ||   http://ianntech.com.br.      
|\n|   ZCE ID 227463685            |-------------------------------------", "msg_date": "Tue, 17 Feb 2009 21:43:28 -0300", "msg_from": "ivo nascimento <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Call of function inside trigger much slower than\n\texplicit function call" }, { "msg_contents": "\n> Well, that does sound weird... can you post the full definition for\n> the images_meta table? Are there any other triggers on that table?\n> Is it referenced by any foreign keys? How fast is the insert if you\n> drop the trigger?\n> \n> ...Robert\n\nYes, weird. Something was wrong in my own code, after I've rewrite it to\nsend you full sources of problem example, execution times of image\ninsertion and direct scaling function call became the same. Insertion of\n4000x2667px (2MB) image and direct function call for downscaling\noriginal image to 800x600px and 128x128px both takes 1.6 sec. Sorry for\nconfusion. And it is almost the same time that takes command line\nutility to do the task. So, practically there is no overhead of using\ntriggers for such purposes.\n\nNevertheless here is my sources, maybe there is a better way to solve\nthe task?\nhttp://www.filedropper.com/imscalepgexample\n\n", "msg_date": "Wed, 18 Feb 2009 13:24:07 +0300", "msg_from": "Alexander Gorban <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Call of function inside trigger much slower than\n\texplicit function call" } ]
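One piece the pseudo code in the thread above never shows is the statement that actually attaches the trigger. A minimal sketch, assuming the table and the C-language functions exist exactly as posted (the library name, the /tmp/test.jpg path and the image sizes are the poster's):

    CREATE TRIGGER images_meta_auto_scale
        BEFORE INSERT OR UPDATE ON images_meta
        FOR EACH ROW EXECUTE PROCEDURE auto_scale();

    -- timing the two paths the way the poster did, from psql:
    \timing
    INSERT INTO images_meta(label, data)
        VALUES ('original', bytea_load_from_file('/tmp/test.jpg'));
    SELECT imscale(bytea_load_from_file('/tmp/test.jpg'), 128);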
[ { "msg_contents": "Just as a question to Tom and team,\nI saw a post a while ago about plans for 8.4 in which Tom said it is very\nlikely that 8.4 will rewrite subselects into left joins. Is that still\nplanned?\n\nI mean a query like:\nselect id from foo where id not in ( select id from bar);\nrewritten into:\n\nselect f.id from foo f left join bar b on f.id=b.id where b.id is null;\n\nthe latter is most often much faster on 8.1-8.3;\n\nthanks.\n\n-- \nGJ\n", "msg_date": "Fri, 20 Feb 2009 09:56:05 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "not in(subselect) in 8.4" }, { "msg_contents": "> Just as a question to Tom and team,\n\nmaybe it`s time for asktom.postgresql.org? 
Oracle has it :)\n\n+1\n", "msg_date": "Fri, 20 Feb 2009 19:02:36 -0500", "msg_from": "=?UTF-8?Q?Rodrigo_E=2E_De_Le=C3=B3n_Plicet?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not in(subselect) in 8.4" }, { "msg_contents": "after your recent commit Tom, the cost is sky-high, and also it takes\nages again with subselect version. In case of two table join. I have\nto try the three way one.\n", "msg_date": "Sat, 21 Feb 2009 11:49:31 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: not in(subselect) in 8.4" }, { "msg_contents": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]> writes:\n> after your recent commit Tom, the cost is sky-high, and also it takes\n> ages again with subselect version. In case of two table join. I have\n> to try the three way one.\n\nWhich commit, and what example are you talking about?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Feb 2009 11:56:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not in(subselect) in 8.4 " }, { "msg_contents": "the foo bar example above, with notion that all columns are NOT NULL\nbehaves much different now. I noticed, that some of the 'anti join'\nstuff has changed in cvs recently, but I don't know if that's to\nblame.\nBasically, what I can see, is that the subselect case is no longer of\nlower cost, to the left join - but is quite substantially more\nexpensive.\n\nJust an observation, I don't intend to use subselects anyhow, because\nthese are very much slower on 8.3, which we use in production here.\n", "msg_date": "Sat, 21 Feb 2009 17:10:16 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: not in(subselect) in 8.4" }, { "msg_contents": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]> writes:\n> the foo bar example above, with notion that all columns are NOT NULL\n> behaves much different now.\n\nAFAIK the treatment of NOT IN subselects hasn't changed a bit since 8.3.\nSo I still find your complaint uninformative.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Feb 2009 12:42:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not in(subselect) in 8.4 " }, { "msg_contents": "Are there any optimizations planned for the case where columns are defined as NOT NULL? Or other special path filtering for cases where the planner can know that the set of values in the subselect won't contain NULLs (such as in (select a from b where (a > 0 and a < 10000).\n\nIt turns out to be a rare use case for someone to write a subselect for a NOT IN or IN clause that will have NULL values. In the common case, the subselect does not contain nulls. 
I would like to see Postgres optimize for the common case.\n\n________________________________________\nFrom: [email protected] [[email protected]] On Behalf Of Tom Lane [[email protected]]\nSent: Friday, February 20, 2009 7:33 AM\nTo: Grzegorz Jaśkiewicz\nCc: [email protected]\nSubject: Re: [PERFORM] not in(subselect) in 8.4\n\n=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]> writes:\n> I mean query like:\n> select id from foo where id not in ( select id from bar);\n> into:\n> select f.id from foo f left join bar b on f.id=b.id where b.id is null;\n\nPostgres does not do that, because they don't mean the same thing ---\nthe behavior for NULLs in bar.id is different.\n\n8.4 does understand that NOT EXISTS is an antijoin, though.\n\n regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 21 Feb 2009 11:29:10 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not in(subselect) in 8.4 " }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> Are there any optimizations planned for the case where columns are\n> defined as NOT NULL?\n\nWe might get around to recognizing that case as an antijoin sometime.\nIt's nontrivial though, because you have to check for an intermediate\nouter join causing the column to be possibly nullable after all.\n\n> It turns out to be a rare use case for someone to write a subselect\n> for a NOT IN or IN clause that will have NULL values.\n\nJudging from the steady flow of \"why doesn't my NOT IN query work\"\nnewbie questions, I don't think it's so rare as all that.\n\nThere's surely some population of people who know enough or could be\ntrained to be careful about using NOT NULL columns, but they could also\nbe trained to use NOT EXISTS, and dodge the whole bullet from the start.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Feb 2009 22:41:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not in(subselect) in 8.4 " }, { "msg_contents": "On Sat, Feb 21, 2009 at 10:41 PM, Tom Lane <[email protected]> wrote:\n> Scott Carey <[email protected]> writes:\n>> Are there any optimizations planned for the case where columns are\n>> defined as NOT NULL?\n>\n> We might get around to recognizing that case as an antijoin sometime.\n> It's nontrivial though, because you have to check for an intermediate\n> outer join causing the column to be possibly nullable after all.\n>\n>> It turns out to be a rare use case for someone to write a subselect\n>> for a NOT IN or IN clause that will have NULL values.\n>\n> Judging from the steady flow of \"why doesn't my NOT IN query work\"\n> newbie questions, I don't think it's so rare as all that.\n\nI think it's rare to do it on purpose, precisely because of the weird\nsemantics we all hate. I have done it by accident, more than once,\nand then fixed it by adding WHERE blah IS NOT NULL to the subquery.\nSo I think Scott is basically right.\n\n> There's surely some population of people who know enough or could be\n> trained to be careful about using NOT NULL columns, but they could also\n> be trained to use NOT EXISTS, and dodge the whole bullet from the start.\n\nThere are far more important reasons to make columns NOT NULL than\navoiding strange results from NOT IN. Personally, I have gotten used\nto the fact that the planner sucks at handling NOT IN and so always\nwrite LEFT JOIN ... 
WHERE pk IS NULL, so it's not important to me that\nwe fix it. But it's certainly a foot-gun for the inexperienced, as it\nis both the most compact and (at least IMO) the most intuitive\nformulation of an anti-join.\n\n...Robert\n", "msg_date": "Sat, 21 Feb 2009 23:39:49 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not in(subselect) in 8.4" }, { "msg_contents": "but then you have 10 questions a week from windows people about\npassword, and yet you haven't remove that :P\n", "msg_date": "Sun, 22 Feb 2009 19:10:16 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: not in(subselect) in 8.4" } ]
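To summarise the three formulations discussed in the thread above, using the foo/bar example from the first message and assuming id is declared NOT NULL in both tables (otherwise NOT IN has different NULL semantics, as Tom points out, and the rewrites are not equivalent):

    -- 1. NOT IN: not planned as an anti-join, and the slow form on 8.1-8.3:
    SELECT id FROM foo WHERE id NOT IN (SELECT id FROM bar);

    -- 2. NOT EXISTS: recognised as an anti-join on 8.4:
    SELECT f.id FROM foo f
    WHERE NOT EXISTS (SELECT 1 FROM bar b WHERE b.id = f.id);

    -- 3. LEFT JOIN ... IS NULL: the workaround that already performs well
    --    on the earlier releases:
    SELECT f.id FROM foo f LEFT JOIN bar b ON f.id = b.id WHERE b.id IS NULL;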
[ { "msg_contents": "Hi,\n\nI've made a benchmark comparing PostgreSQL, MySQL and Oracle under three environments: GNU/Linux-x86, Solaris-x86 (same machine as GNU/Linux) and Solaris-SPARC. I think you might find it interesting:\n\nhttp://blogs.nologin.es/slopez/archives/17-Benchmarking-Databases-I.-Volatile-Storage..html\n\n", "msg_date": "Fri, 20 Feb 2009 12:28:55 +0100", "msg_from": "Sergio Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "El Fri, 20 Feb 2009 08:36:44 -0800\nAlan Hodgson <[email protected]> escribió:\n\n> On Friday 20 February 2009, Sergio Lopez <[email protected]>\n> wrote:\n> > Hi,\n> >\n> > I've made a benchmark comparing PostgreSQL, MySQL and Oracle under\n> > three environments: GNU/Linux-x86, Solaris-x86 (same machine as\n> > GNU/Linux) and Solaris-SPARC. I think you might find it interesting:\n> >\n> > http://blogs.nologin.es/slopez/archives/17-Benchmarking-Databases-I.-Vola\n> >tile-Storage..html\n> \n> How did you get permission from Oracle to publish benchmarks?\n> \n\nDamn, my Oracle's Evaluation License should be void now ;-). Sometimes,\nsoftware licenses are somewhat funny.\n\n", "msg_date": "Fri, 20 Feb 2009 16:57:27 +0100", "msg_from": "Sergio Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Friday 20 February 2009, Sergio Lopez <[email protected]> wrote:\n> Hi,\n>\n> I've made a benchmark comparing PostgreSQL, MySQL and Oracle under three\n> environments: GNU/Linux-x86, Solaris-x86 (same machine as GNU/Linux) and\n> Solaris-SPARC. I think you might find it interesting:\n>\n> http://blogs.nologin.es/slopez/archives/17-Benchmarking-Databases-I.-Vola\n>tile-Storage..html\n\nHow did you get permission from Oracle to publish benchmarks?\n\n-- \nEven a sixth-grader can figure out that you can’t borrow money to pay off \nyour debt\n", "msg_date": "Fri, 20 Feb 2009 08:36:44 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 6:28 AM, Sergio Lopez <[email protected]>wrote:\n\n> Hi,\n>\n> I've made a benchmark comparing PostgreSQL, MySQL and Oracle under three\n> environments: GNU/Linux-x86, Solaris-x86 (same machine as GNU/Linux) and\n> Solaris-SPARC. I think you might find it interesting:\n>\n>\n> http://blogs.nologin.es/slopez/archives/17-Benchmarking-Databases-I.-Volatile-Storage..html\n\n\nSorry Segio,\n\nIn addition to violating your Oracle license, you need to learn a couple\nthings about benchmarking.\n\nFirst of all, you need to do some research on the benchmark kit itself,\nrather than blindly downloading and using one. BenchmarkSQL has significant\nbugs in it which affect the result. I can say that authoritatively as I\nworked on/with it for quite awhile. Don't trust any result that comes from\nBenchmarkSQL. If you fix the bugs, Oracle (out of the box in OLTP config)\nwill come out 60%.\n\nOracle comes out twice as fast as PG on Linux. And, unless you're using a\nsignificant number of warehouses, MySQL+InnoDB will come out better than PG\nas well.\n\nSecond, I didn't see anything in your Oracle settings for parallelism and\nI/O tuning. Did you set them? And, based on what you presented, you didn't\nset configure the SGA appropriately given the hardware mentioned. 
What was\nyour log buffer set to?\n\nThird, did you manually analyze the Oracle/MySQL databases, because\nBenchmarkSQL will automatically analyze Postgres' tables to help the\noptimizer... did you do the same for the other databases?\n\nFourth, it didn't look like you tuned PG properly either. What was\nshared_buffers, wal_buffers, and wal_sync_method set to?\n\nFifth, did you do an out-of-the-box install of Oracle, or a custom one? If\nout of the box, did you choose OLTP or General?\n\nThere's lots of other things I could go on about in regard to flushing all\nthe caches prior to starting the benchmarks, filesystem options, etc.\n\nNot trying to be rude, but *THIS* is why Oracle, IBM, Microsoft, et al.\ndon't want people running benchmarks without their permission. When\nperforming benchmarks, there are a lot of things to take into\nconsideration. If you're just performing out-of-the-box tests, then that's\nfine, but you have to make sure the benchmark kit doesn't optimize itself\nfor any one of those databases (which it does for PG).\n\n-- \nJonah H. Harris, Senior DBA\nmyYearbook.com\n\nOn Fri, Feb 20, 2009 at 6:28 AM, Sergio Lopez <[email protected]> wrote:\nHi,\n\nI've made a benchmark comparing PostgreSQL, MySQL and Oracle under three environments: GNU/Linux-x86, Solaris-x86 (same machine as GNU/Linux) and Solaris-SPARC. I think you might find it interesting:\n\nhttp://blogs.nologin.es/slopez/archives/17-Benchmarking-Databases-I.-Volatile-Storage..html\nSorry Segio,In addition to violating your Oracle license, you need to learn a couple things about benchmarking.First of all, you need to do some research on the benchmark kit itself, rather than blindly downloading and using one.  BenchmarkSQL has significant bugs in it which affect the result.  I can say that authoritatively as I worked on/with it for quite awhile.  Don't trust any result that comes from BenchmarkSQL.  If you fix the bugs, Oracle (out of the box in OLTP config) will come out 60%.\nOracle comes out twice as fast as PG on Linux.  And, unless you're using a significant number of warehouses, MySQL+InnoDB will come out better than PG as well.Second, I didn't see anything in your Oracle settings for parallelism and I/O tuning.  Did you set them?  And, based on what you presented, you didn't set configure the SGA appropriately given the hardware mentioned.  What was your log buffer set to?\nThird, did you manually analyze the Oracle/MySQL databases, because BenchmarkSQL will automatically analyze Postgres' tables to help the optimizer... did you do the same for the other databases?\nFourth, it didn't look like you tuned PG properly either.  What was shared_buffers, wal_buffers, and wal_sync_method set to?Fifth, did you do an out-of-the-box install of Oracle, or a custom one?  If out of the box, did you choose OLTP or General?\nThere's lots of other things I could go on about in regard to flushing all the caches prior to starting the benchmarks, filesystem options, etc.Not trying to be rude, but *THIS* is why Oracle, IBM, Microsoft, et al. don't want people running benchmarks without their permission.  When performing benchmarks, there are a lot of things to take into consideration.  If you're just performing out-of-the-box tests, then that's fine, but you have to make sure the benchmark kit doesn't optimize itself for any one of those databases (which it does for PG).\n-- Jonah H. Harris, Senior DBAmyYearbook.com", "msg_date": "Fri, 20 Feb 2009 12:39:41 -0500", "msg_from": "\"Jonah H. 
Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "El Fri, 20 Feb 2009 12:39:41 -0500\n\"Jonah H. Harris\" <[email protected]> escribió:\n\n> On Fri, Feb 20, 2009 at 6:28 AM, Sergio Lopez\n> <[email protected]>wrote:\n> \n> > Hi,\n> >\n> > I've made a benchmark comparing PostgreSQL, MySQL and Oracle under\n> > three environments: GNU/Linux-x86, Solaris-x86 (same machine as\n> > GNU/Linux) and Solaris-SPARC. I think you might find it interesting:\n> >\n> >\n> > http://blogs.nologin.es/slopez/archives/17-Benchmarking-Databases-I.-Volatile-Storage..html\n> \n> \n> Sorry Segio,\n> \n> In addition to violating your Oracle license, you need to learn a\n> couple things about benchmarking.\n> \n> First of all, you need to do some research on the benchmark kit\n> itself, rather than blindly downloading and using one. BenchmarkSQL\n> has significant bugs in it which affect the result. I can say that\n> authoritatively as I worked on/with it for quite awhile. Don't trust\n> any result that comes from BenchmarkSQL. If you fix the bugs, Oracle\n> (out of the box in OLTP config) will come out 60%.\n> \n> Oracle comes out twice as fast as PG on Linux. And, unless you're\n> using a significant number of warehouses, MySQL+InnoDB will come out\n> better than PG as well.\n> \n> Second, I didn't see anything in your Oracle settings for parallelism\n> and I/O tuning. Did you set them? And, based on what you presented,\n> you didn't set configure the SGA appropriately given the hardware\n> mentioned. What was your log buffer set to?\n> \n> Third, did you manually analyze the Oracle/MySQL databases, because\n> BenchmarkSQL will automatically analyze Postgres' tables to help the\n> optimizer... did you do the same for the other databases?\n> \n> Fourth, it didn't look like you tuned PG properly either. What was\n> shared_buffers, wal_buffers, and wal_sync_method set to?\n> \n> Fifth, did you do an out-of-the-box install of Oracle, or a custom\n> one? If out of the box, did you choose OLTP or General?\n> \n> There's lots of other things I could go on about in regard to\n> flushing all the caches prior to starting the benchmarks, filesystem\n> options, etc.\n> \n> Not trying to be rude, but *THIS* is why Oracle, IBM, Microsoft, et\n> al. don't want people running benchmarks without their permission.\n> When performing benchmarks, there are a lot of things to take into\n> consideration. If you're just performing out-of-the-box tests, then\n> that's fine, but you have to make sure the benchmark kit doesn't\n> optimize itself for any one of those databases (which it does for PG).\n> \n\nFirst, thanks for your thoughts, I found them very interesting.\n\nOn the other hand, I've neved said that what I've done is the\nPerfect-Marvelous-Definitive Benchmark, it's just a personal project,\nand I don't have an infinite amount of time to invest on it.\n\nHaving this said, the benchmark is not as unfair as you thought. 
I've\ntaken care to prepare all databases to meet similar values for their\ncache, buffers and I/O configuration (to what's possible given their\ndifferences), and the I've left the rest as comes by default (for\nOracle I've used the OLTP template).\n\nYes, BenchmarkSQL is NOT the perfect tool for database benchmarking and\nit is NOT a valid TPC-C test (I've made this clear in the article), but\nI've looked at its source (you assume I blindly used it, but actually\nI've even made some changes to make it work with Ingres for other\npurposes) and I find it fair enough due to the simplicity of the\nqueries it executes. I found no other evident optimization than the\n\"vacuum analyze\" in the LoadData application.\n\nObviously, you can optimize the queries to perform better in Oracle,\nthe same way you can do with any other DB, but doing that would be\ncheating. The key here is to keep the queries as simple as possible,\nand BenchmarkSQL does this nicely.\n\nOf course, my benchmark it's somewhat peculiar by the fact (that you\nhaven't mentioned) that all databases files reside in volatile storage\n(RAM) by using tmpfs, which makes something similar (but not the\nsame) as using DIRECT_IO with an extremly fast storage. But, again, all\ndatabases are given equal consideration.\n\nFinally, about the license issue, (also) not trying to be rude,\nforbiding people to publish benchmark of their products is simply\nstupid (and it lacks for legal basis in most countries). The only reason\nthey do this is to scare kids and be able to make up their own results.\nOf course, if you allow people to publish benchmarks there will be\nsome loosely done, but also there'll be others properly made (and made\nby people non-related with any database vendor).\n\nIMHO, worse than having loosely done benchmarks is having people saying\nthings like \"if you fix the bugs, Oracle (out of the box in OLTP\nconfig) will come out 60%\" or \"Oracle comes out twice as fast as PG on\nLinux\" without any proof to support this words. At least, benchmarks\nare refutable by using logic.\n", "msg_date": "Fri, 20 Feb 2009 19:15:01 +0100", "msg_from": "Sergio Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "> First of all, you need to do some research on the benchmark kit itself,\n> rather than blindly downloading and using one. BenchmarkSQL has significant\n> bugs in it which affect the result. I can say that authoritatively as I\n> worked on/with it for quite awhile. Don't trust any result that comes from\n> BenchmarkSQL. If you fix the bugs, Oracle (out of the box in OLTP config)\n> will come out 60%.\n\n60% what?\n\n> Oracle comes out twice as fast as PG on Linux. And, unless you're using a\n> significant number of warehouses, MySQL+InnoDB will come out better than PG\n> as well.\n\nI can believe that MySQL could come out faster than PG because I've\nhad previous experience with it being blindingly fast. Of course I've\nalso had experience with it having amazingly poor data integrity. I\nwould be pretty surprised if Oracle were in general twice as fast as\nPG - what are they doing that much better than what we're doing? I\ncould certainly imagine it being true in cases that rely on specific\nfeatures we lack (e.g. 
join removal)?\n\n...Robert\n", "msg_date": "Fri, 20 Feb 2009 14:35:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 1:15 PM, Sergio Lopez <[email protected]>wrote:\n\n> On the other hand, I've neved said that what I've done is the\n> Perfect-Marvelous-Definitive Benchmark, it's just a personal project,\n> and I don't have an infinite amount of time to invest on it.\n\n\nWhen you make comments such as \"As for databases, both Oracle and MySQL show\nnice numbers, but it's PostgreSQL who stands in the top, giving consistent\nresults with each environment and workload\", you should make sure that your\ntest is correct. Otherwise you're making statements without any real\nbasis-in-fact.\n\nHaving this said, the benchmark is not as unfair as you thought. I've\n> taken care to prepare all databases to meet similar values for their\n> cache, buffers and I/O configuration (to what's possible given their\n> differences), and the I've left the rest as comes by default (for\n> Oracle I've used the OLTP template).\n\n\nOracle's buffer cache is different than Postgres'. And there are several\nother tuning paramaters which control how the buffer cache and I/O between\ncache and disk is performed. Making them the same size means nothing. And,\nas I said, you still didn't mention other important tuning parameters in\nMySQL, Postgres, or Oracle. So either you don't know about them, or you\ndidn't bother to tune them, which is odd if you were trying to run a truly\ncomparative benchmark.\n\n\n> Yes, BenchmarkSQL is NOT the perfect tool for database benchmarking and\n> it is NOT a valid TPC-C test (I've made this clear in the article), but\n> I've looked at its source (you assume I blindly used it, but actually\n> I've even made some changes to make it work with Ingres for other\n> purposes) and I find it fair enough due to the simplicity of the\n> queries it executes. I found no other evident optimization than the\n> \"vacuum analyze\" in the LoadData application.\n\n\nDid you fix the bug in, I believe, the Order Status transaction that can\ncause an endless loop? I would call giving the Postgres optimizer correct\nstatistics and leaving Oracle and MySQL with defaults an optimization.\n\n\n> Obviously, you can optimize the queries to perform better in Oracle,\n> the same way you can do with any other DB, but doing that would be\n> cheating. The key here is to keep the queries as simple as possible,\n> and BenchmarkSQL does this nicely.\n\n\nBenchmarkSQL is flawed. You need to review the code more closely.\n\nOf course, my benchmark it's somewhat peculiar by the fact (that you\n> haven't mentioned) that all databases files reside in volatile storage\n> (RAM) by using tmpfs, which makes something similar (but not the\n> same) as using DIRECT_IO with an extremly fast storage. But, again, all\n> databases are given equal consideration.\n\n\nYou're right, it's not the same. Oracle can benefit by using real direct\nI/O, not half-baked simulations which still cause double-buffering between\nthe linux page cache and the database buffer cache.\n\n\n> Finally, about the license issue, (also) not trying to be rude,\n> forbiding people to publish benchmark of their products is simply\n> stupid (and it lacks for legal basis in most countries). 
The only reason\n> they do this is to scare kids and be able to make up their own results.\n> Of course, if you allow people to publish benchmarks there will be\n> some loosely done, but also there'll be others properly made (and made\n> by people non-related with any database vendor).\n\n\nYour benchmark was flawed. You made condescending statements about Oracle\nand MySQL based on your bad data. That's why they don't let you do it.\n\nIMHO, worse than having loosely done benchmarks is having people saying\n> things like \"if you fix the bugs, Oracle (out of the box in OLTP\n> config) will come out 60%\" or \"Oracle comes out twice as fast as PG on\n> Linux\" without any proof to support this words. At least, benchmarks\n> are refutable by using logic.\n\n\nYour benchmark was flawed, you didn't tune correctly, and you made\nstatements based on bad data; refute that logic :)\n\n-- \nJonah H. Harris, Senior DBA\nmyYearbook.com\n\nOn Fri, Feb 20, 2009 at 1:15 PM, Sergio Lopez <[email protected]> wrote:\n\nOn the other hand, I've neved said that what I've done is the\nPerfect-Marvelous-Definitive Benchmark, it's just a personal project,\nand I don't have an infinite amount of time to invest on it.When you make comments such as \"As for databases, both Oracle and MySQL show nice numbers, but it's\nPostgreSQL who stands in the top, giving consistent results with each\nenvironment and workload\", you should make sure that your test is correct.  Otherwise you're making statements without any real basis-in-fact.\n\nHaving this said, the benchmark is not as unfair as you thought. I've\ntaken care to prepare all databases to meet similar values for their\ncache, buffers and I/O configuration (to what's possible given their\ndifferences), and the I've left the rest as comes by default (for\nOracle I've used the OLTP template).Oracle's buffer cache is different than Postgres'.  And there are several other tuning paramaters which control how the buffer cache and I/O between cache and disk is performed.  Making them the same size means nothing.  And, as I said, you still didn't mention other important tuning parameters in MySQL, Postgres, or Oracle.  So either you don't know about them, or you didn't bother to tune them, which is odd if you were trying to run a truly comparative benchmark.\n \nYes, BenchmarkSQL is NOT the perfect tool for database benchmarking and\nit is NOT a valid TPC-C test (I've made this clear in the article), but\nI've looked at its source (you assume I blindly used it, but actually\nI've even made some changes to make it work with Ingres for other\npurposes) and I find it fair enough due to the simplicity of the\nqueries it executes. I found no other evident optimization than the\n\"vacuum analyze\" in the LoadData application.Did you fix the bug in, I believe, the Order Status transaction that can cause an endless loop?  I would call giving the Postgres optimizer correct statistics and leaving Oracle and MySQL with defaults an optimization.\n \nObviously, you can optimize the queries to perform better in Oracle,\nthe same way you can do with any other DB, but doing that would be\ncheating. The key here is to keep the queries as simple as possible,\nand BenchmarkSQL does this nicely. BenchmarkSQL is flawed.  
You need to review the code more closely.\n\nOf course, my benchmark it's somewhat peculiar by the fact (that you\nhaven't mentioned) that all databases files reside in volatile storage\n(RAM) by using tmpfs, which makes something similar (but not the\nsame) as using DIRECT_IO with an extremly fast storage. But, again, all\ndatabases are given equal consideration.You're right, it's not the same.  Oracle can benefit by using real direct I/O, not half-baked simulations which still cause double-buffering between the linux page cache and the database buffer cache.\n \nFinally, about the license issue, (also) not trying to be rude,\nforbiding people to publish benchmark of their products is simply\nstupid (and it lacks for legal basis in most countries). The only reason\nthey do this is to scare kids and be able to make up their own results.\nOf course, if you allow people to publish benchmarks there will be\nsome loosely done, but also there'll be others properly made (and made\nby people non-related with any database vendor).Your benchmark was flawed.  You made condescending statements about Oracle and MySQL based on your bad data.  That's why they don't let you do it.\n\n\nIMHO, worse than having loosely done benchmarks is having people saying\nthings like \"if you fix the bugs, Oracle (out of the box in OLTP\nconfig) will come out 60%\" or \"Oracle comes out twice as fast as PG on\nLinux\" without any proof to support this words. At least, benchmarks\nare refutable by using logic.Your benchmark was flawed, you didn't tune correctly, and you made statements based on bad data; refute that logic :)-- Jonah H. Harris, Senior DBA\nmyYearbook.com", "msg_date": "Fri, 20 Feb 2009 14:48:06 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 2:35 PM, Robert Haas <[email protected]> wrote:\n\n> > First of all, you need to do some research on the benchmark kit itself,\n> > rather than blindly downloading and using one. BenchmarkSQL has\n> significant\n> > bugs in it which affect the result. I can say that authoritatively as I\n> > worked on/with it for quite awhile. Don't trust any result that comes\n> from\n> > BenchmarkSQL. If you fix the bugs, Oracle (out of the box in OLTP\n> config)\n> > will come out 60%.\n>\n> 60% what?\n\n\nFaster than PG 8.3-dev with 100 warehouses (when I last tested it).\n\n\n> > Oracle comes out twice as fast as PG on Linux. And, unless you're using\n> a\n> > significant number of warehouses, MySQL+InnoDB will come out better than\n> PG\n> > as well.\n>\n> I can believe that MySQL could come out faster than PG because I've\n> had previous experience with it being blindingly fast. Of course I've\n> also had experience with it having amazingly poor data integrity.\n\n\nThat was MySQL+InnoDB. I haven't really had any integrity problems in that\nconfiguration.\n\n\n> I would be pretty surprised if Oracle were in general twice as fast as\n> PG - what are they doing that much better than what we're doing? I\n> could certainly imagine it being true in cases that rely on specific\n> features we lack (e.g. join removal)?\n\n\nDIO + AIO + multiple DBWR processes + large buffer cache + properly sized\nlogs/log buffers makes a big difference. There are also several other\nconcurrency-related tunables which contribute to it as well.\n\n-- \nJonah H. 
Harris, Senior DBA\nmyYearbook.com\n\nOn Fri, Feb 20, 2009 at 2:35 PM, Robert Haas <[email protected]> wrote:\n> First of all, you need to do some research on the benchmark kit itself,\n> rather than blindly downloading and using one.  BenchmarkSQL has significant\n> bugs in it which affect the result.  I can say that authoritatively as I\n> worked on/with it for quite awhile.  Don't trust any result that comes from\n> BenchmarkSQL.  If you fix the bugs, Oracle (out of the box in OLTP config)\n> will come out 60%.\n\n60% what?Faster than PG 8.3-dev with 100 warehouses (when I last tested it). \n\n> Oracle comes out twice as fast as PG on Linux.  And, unless you're using a\n> significant number of warehouses, MySQL+InnoDB will come out better than PG\n> as well.\nI can believe that MySQL could come out faster than PG because I've\nhad previous experience with it being blindingly fast.  Of course I've\nalso had experience with it having amazingly poor data integrity.That was MySQL+InnoDB.  I haven't really had any integrity problems in that configuration. \nI would be pretty surprised if Oracle were in general twice as fast as\nPG - what are they doing that much better than what we're doing?  I\ncould certainly imagine it being true in cases that rely on specific\nfeatures we lack (e.g. join removal)?DIO + AIO + multiple DBWR processes + large buffer cache + properly sized logs/log buffers makes a big difference.  There are also several other concurrency-related tunables which contribute to it as well.\n-- Jonah H. Harris, Senior DBAmyYearbook.com", "msg_date": "Fri, 20 Feb 2009 14:51:53 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 2:48 PM, Jonah H. Harris <[email protected]>wrote:\n\n> Having this said, the benchmark is not as unfair as you thought. I've\n>> taken care to prepare all databases to meet similar values for their\n>> cache, buffers and I/O configuration (to what's possible given their\n>> differences), and the I've left the rest as comes by default (for\n>> Oracle I've used the OLTP template).\n>\n>\n> Oracle's buffer cache is different than Postgres'. And there are several\n> other tuning paramaters which control how the buffer cache and I/O between\n> cache and disk is performed. Making them the same size means nothing. And,\n> as I said, you still didn't mention other important tuning parameters in\n> MySQL, Postgres, or Oracle. So either you don't know about them, or you\n> didn't bother to tune them, which is odd if you were trying to run a truly\n> comparative benchmark.\n>\n\nAlso forgot to ask, what block size did you use in Oracle? You mentioned\ntuning the shared pool, but you didn't specify db_cache_size or whether you\nwere using automatic SGA tuning. Were those not tuned?\n\n-- \nJonah H. Harris, Senior DBA\nmyYearbook.com\n\nOn Fri, Feb 20, 2009 at 2:48 PM, Jonah H. Harris <[email protected]> wrote:\nHaving this said, the benchmark is not as unfair as you thought. I've\n\ntaken care to prepare all databases to meet similar values for their\ncache, buffers and I/O configuration (to what's possible given their\ndifferences), and the I've left the rest as comes by default (for\nOracle I've used the OLTP template).Oracle's buffer cache is different than Postgres'.  And there are several other tuning paramaters which control how the buffer cache and I/O between cache and disk is performed.  
Making them the same size means nothing.  And, as I said, you still didn't mention other important tuning parameters in MySQL, Postgres, or Oracle.  So either you don't know about them, or you didn't bother to tune them, which is odd if you were trying to run a truly comparative benchmark.\nAlso forgot to ask, what block size did you use in Oracle?  You mentioned tuning the shared pool, but you didn't specify db_cache_size or whether you were using automatic SGA tuning.  Were those not tuned?\n-- Jonah H. Harris, Senior DBAmyYearbook.com", "msg_date": "Fri, 20 Feb 2009 14:55:05 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 2:48 PM, Jonah H. Harris <[email protected]> wrote:\n> On Fri, Feb 20, 2009 at 1:15 PM, Sergio Lopez <[email protected]>\n> wrote:\n>>\n>> On the other hand, I've neved said that what I've done is the\n>> Perfect-Marvelous-Definitive Benchmark, it's just a personal project,\n>> and I don't have an infinite amount of time to invest on it.\n>\n> When you make comments such as \"As for databases, both Oracle and MySQL show\n> nice numbers, but it's PostgreSQL who stands in the top, giving consistent\n> results with each environment and workload\", you should make sure that your\n> test is correct. Otherwise you're making statements without any real\n> basis-in-fact.\n\nISTM you are the one throwing out unsubstantiated assertions without\ndata to back it up. OP ran benchmark. showed hardware/configs, and\ndemonstrated result. He was careful to hedge expectations and gave\nrationale for his analysis methods.\n\nIf you think he's wrong, instead of picking on him why don't you run\nsome tests showing alternative results and publish them...leave off\nthe oracle results or use a pseudo-name or something.\n\nmerlin\n", "msg_date": "Fri, 20 Feb 2009 15:40:16 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure <[email protected]> wrote:\n\n> ISTM you are the one throwing out unsubstantiated assertions without\n> data to back it up. OP ran benchmark. showed hardware/configs, and\n> demonstrated result. He was careful to hedge expectations and gave\n> rationale for his analysis methods.\n\n\nAs I pointed out in my last email, he makes claims about PG being faster\nthan Oracle and MySQL based on his results. I've already pointed out\nsignificant tuning considerations, for both Postgres and Oracle, which his\nbenchmark did not take into account.\n\nThis group really surprises me sometimes. For such a smart group of people,\nI'm not sure why everyone seems to have a problem pointing out design flaws,\netc. in -hackers, yet when we want to look good, we'll overlook blatant\nflaws where benchmarks are concerned.\n\n\n> If you think he's wrong, instead of picking on him why don't you run\n> some tests showing alternative results and publish them...leave off\n> the oracle results or use a pseudo-name or something.\n\n\nOne of these days I'll get some time and post my results. I'm just pointing\nout obvious flaws in this benchmark. If Sergio wants to correct them and/or\nqualify them, that's cool with me. I just don't like people relying on\nquestionable and/or unclear data.\n\n-- \nJonah H. 
Harris, Senior DBA\nmyYearbook.com\n\nOn Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure <[email protected]> wrote:\nISTM you are the one throwing out unsubstantiated assertions without\ndata to back it up.  OP ran benchmark. showed hardware/configs, and\ndemonstrated result.  He was careful to hedge expectations and gave\nrationale for his analysis methods.As I pointed out in my last email, he makes claims about PG being faster than Oracle and MySQL based on his results.  I've already pointed out significant tuning considerations, for both Postgres and Oracle, which his benchmark did not take into account.\nThis group really surprises me sometimes.  For such a smart group of\npeople, I'm not sure why everyone seems to have a problem pointing out\ndesign flaws, etc. in -hackers, yet when we want to look good, we'll\noverlook blatant flaws where benchmarks are concerned. \nIf you think he's wrong, instead of picking on him why don't you run\nsome tests showing alternative results and publish them...leave off\nthe oracle results or use a pseudo-name or something.One of these days I'll get some time and post my results.  I'm just pointing out obvious flaws in this benchmark.  If Sergio wants to correct them and/or qualify them, that's cool with me.  I just don't like people relying on questionable and/or unclear data.\n-- Jonah H. Harris, Senior DBAmyYearbook.com", "msg_date": "Fri, 20 Feb 2009 16:34:56 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris <[email protected]> wrote:\n> On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure <[email protected]> wrote:\n>>\n>> ISTM you are the one throwing out unsubstantiated assertions without\n>> data to back it up. OP ran benchmark. showed hardware/configs, and\n>> demonstrated result. He was careful to hedge expectations and gave\n>> rationale for his analysis methods.\n>\n> As I pointed out in my last email, he makes claims about PG being faster\n> than Oracle and MySQL based on his results. I've already pointed out\n> significant tuning considerations, for both Postgres and Oracle, which his\n> benchmark did not take into account.\n>\n> This group really surprises me sometimes. For such a smart group of people,\n> I'm not sure why everyone seems to have a problem pointing out design flaws,\n> etc. in -hackers, yet when we want to look good, we'll overlook blatant\n> flaws where benchmarks are concerned.\n\nThe biggest flaw in the benchmark by far has got to be that it was\ndone with a ramdisk, so it's really only measuring CPU consumption.\nMeasuring CPU consumption is interesting, but it doesn't have a lot to\ndo with throughput in real-life situations. The benchmark was\nobviously constructed to make PG look good, since the OP even mentions\non the page that the reason he went to ramdisk was that all of the\ndatabases, *but particularly PG*, had trouble handling all those\nlittle writes. (I wonder how much it would help to fiddle with the\nsynchronous_commit settings. How do MySQL and Oracle alleviate this\nproblem and we can usefully imitate any of it?)\n\nStill, if you read his conclusions, he admits that he's just trying to\nshow that they're in the same ballpark, and that might well be true,\neven with the shortcomings of the tests.\n\nPersonally, I'm not as upset as you seem to be about the lack of\nperfect tuning. 
Real-world tuning is rarely perfect, either, and we\ndon't know that his tuning was bad. We do know that whatever tuning\nhe did was not adequately documented, and we can suspect that items\nmentioned were not tuned, but we really don't know that. We have\nplenty of evidence from these lists that fiddling with shared_buffers\n(either raising or even sometimes lowering it), page and tuple costs,\netc. can sometimes produce dramatic performance changes. But that\ndoesn't necessarily tell you anything about what will happen in a real\nlife application with a more complex mix of queries where you can't\noptimize for the benchmark.\n\n>> If you think he's wrong, instead of picking on him why don't you run\n>> some tests showing alternative results and publish them...leave off\n>> the oracle results or use a pseudo-name or something.\n>\n> One of these days I'll get some time and post my results. I'm just pointing\n> out obvious flaws in this benchmark. If Sergio wants to correct them and/or\n> qualify them, that's cool with me. I just don't like people relying on\n> questionable and/or unclear data.\n\nI'd love to see more results. Even if they're not 100% complete and\ncorrect they would give us more of a notion than we have now of where\nmore work is needed. I was interested to see that Oracle was the\nrunaway winner for bulk data load because I did some work on that a\nfew months back. I suspect a lot more is needed there, because the\nwork I did would only help with create-table-as-select or copy, not\nretail insert, and even at that I know that the cases I did handle\nhave room for further improvement.\n\nI am not certain which database is the fastest and suspect there is no\none answer. But if we get some information that helps us figure out\nwhere we can improve, that is all to the good.\n\n...Robert\n", "msg_date": "Fri, 20 Feb 2009 16:54:58 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 2:54 PM, Robert Haas <[email protected]> wrote:\n> On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris <[email protected]> wrote:\n>> On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure <[email protected]> wrote:\n>>>\n>>> ISTM you are the one throwing out unsubstantiated assertions without\n>>> data to back it up. OP ran benchmark. showed hardware/configs, and\n>>> demonstrated result. He was careful to hedge expectations and gave\n>>> rationale for his analysis methods.\n>>\n>> As I pointed out in my last email, he makes claims about PG being faster\n>> than Oracle and MySQL based on his results. I've already pointed out\n>> significant tuning considerations, for both Postgres and Oracle, which his\n>> benchmark did not take into account.\n>>\n>> This group really surprises me sometimes. For such a smart group of people,\n>> I'm not sure why everyone seems to have a problem pointing out design flaws,\n>> etc. in -hackers, yet when we want to look good, we'll overlook blatant\n>> flaws where benchmarks are concerned.\n>\n> The biggest flaw in the benchmark by far has got to be that it was\n> done with a ramdisk, so it's really only measuring CPU consumption.\n> Measuring CPU consumption is interesting, but it doesn't have a lot to\n\nAgreed. 
As soon as I saw that I pretty much threw the results out the window.\n", "msg_date": "Fri, 20 Feb 2009 14:57:35 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "El Fri, 20 Feb 2009 14:48:06 -0500\n\"Jonah H. Harris\" <[email protected]> escribió:\n\n> On Fri, Feb 20, 2009 at 1:15 PM, Sergio Lopez\n> <[email protected]>wrote:\n> \n> Having this said, the benchmark is not as unfair as you thought. I've\n> > taken care to prepare all databases to meet similar values for their\n> > cache, buffers and I/O configuration (to what's possible given their\n> > differences), and the I've left the rest as comes by default (for\n> > Oracle I've used the OLTP template).\n> \n> \n> Oracle's buffer cache is different than Postgres'. And there are\n> several other tuning paramaters which control how the buffer cache\n> and I/O between cache and disk is performed. Making them the same\n> size means nothing. And, as I said, you still didn't mention other\n> important tuning parameters in MySQL, Postgres, or Oracle. So either\n> you don't know about them, or you didn't bother to tune them, which\n> is odd if you were trying to run a truly comparative benchmark.\n> \n\nAs I written in the article, I only tuned a few parameters and let the\nother out-the-box. More info:\n\n - Oracle: \n * AMM, sga_max_size/sga_target_size=4GB (yes, it's pretty low\nfor a 20 GB RAM machine, but remember I needed to run the tests in\nanother 10 GB RAM SPARC server and still need some more memory for\ndatabase and redo (10 warehouses == about 1 GB of data)\n * db_block_size=8k (this also answers the other email)\n * filesystem_io=setall (which souldn't make difference, anyway)\n * db_writer_processes=2 (with a extremly fast tmpfs, incresing this\nwill obviously be counterproductive)\n\n - MySQL:\n * innodb_buffer_pool_size=4GB\n * innodb_log_file_size=512MB\n\n - PostgreSQL:\n * effective_cache_size=4GB\n * shared_pool_size=512MB\n * fsync = on\n * synchronous_commit = on\n * wal_sync_method = fsync\n * checkpoint_segments = 100\n * checkpoint_completion_target = 0.7\n\nIf you have some suggestions to do about this configurations, please\ntell me so I can put them in the next benchmark (which, hopefully, will\nuse a nice performing SAN instead of tmpfs).\n\n> > Yes, BenchmarkSQL is NOT the perfect tool for database benchmarking\n> > and it is NOT a valid TPC-C test (I've made this clear in the\n> > article), but I've looked at its source (you assume I blindly used\n> > it, but actually I've even made some changes to make it work with\n> > Ingres for other purposes) and I find it fair enough due to the\n> > simplicity of the queries it executes. I found no other evident\n> > optimization than the \"vacuum analyze\" in the LoadData application.\n> \n> \n> Did you fix the bug in, I believe, the Order Status transaction that\n> can cause an endless loop? I would call giving the Postgres\n> optimizer correct statistics and leaving Oracle and MySQL with\n> defaults an optimization.\n> \n\nThe bug was in the Delivery transaction, and yes, I fixed it. It was a\nsimple bad locking behaviour, solved by properly using the \"FOR UPDATE\"\nclause.\n\n> > Obviously, you can optimize the queries to perform better in Oracle,\n> > the same way you can do with any other DB, but doing that would be\n> > cheating. 
The key here is to keep the queries as simple as possible,\n> > and BenchmarkSQL does this nicely.\n> \n> \n> BenchmarkSQL is flawed. You need to review the code more closely.\n> \n\nPlease, could you point the bugs (or at least some of them) you're\nreferring to? That would be very helpful for me, so I can fix them for\nthe next benchmark.\n\n> Of course, my benchmark it's somewhat peculiar by the fact (that you\n> > haven't mentioned) that all databases files reside in volatile\n> > storage (RAM) by using tmpfs, which makes something similar (but\n> > not the same) as using DIRECT_IO with an extremly fast storage.\n> > But, again, all databases are given equal consideration.\n> \n> \n> You're right, it's not the same. Oracle can benefit by using real\n> direct I/O, not half-baked simulations which still cause\n> double-buffering between the linux page cache and the database buffer\n> cache.\n> \n\n_All_ databases can benefit from direct I/O, specially for their redo\nfiles. But, in this benchmark we don't have double buffering (nor\nread-ahead) issues, or do you expect Linux or Solaris to cache data\nwhich is already in RAM (tmpfs)?\n\n", "msg_date": "Sat, 21 Feb 2009 00:04:45 +0100", "msg_from": "Sergio Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "Robert Haas wrote:\n> \n> The biggest flaw in the benchmark by far has got to be that it was\n> done with a ramdisk, so it's really only measuring CPU consumption.\n> Measuring CPU consumption is interesting, but it doesn't have a lot to\n> do with throughput in real-life situations. \n> \n... and memory access. Measuring these two in isolation from any \n(real/usual) io system is interesting but perhaps only as a curiosity - \nhowever, it would become much more interesting if we could see how the \nresults change when a disk based filesystem is used (or even raw for the \nbig O and innodb and filesystem for postgres...).\n\nregards\n\nMark\n\n", "msg_date": "Sat, 21 Feb 2009 12:17:02 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "El Fri, 20 Feb 2009 16:54:58 -0500\nRobert Haas <[email protected]> escribió:\n\n> On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris\n> <[email protected]> wrote:\n> > On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure\n> > <[email protected]> wrote:\n> >>\n> >> ISTM you are the one throwing out unsubstantiated assertions\n> >> without data to back it up. OP ran benchmark. showed\n> >> hardware/configs, and demonstrated result. He was careful to\n> >> hedge expectations and gave rationale for his analysis methods.\n> >\n> > As I pointed out in my last email, he makes claims about PG being\n> > faster than Oracle and MySQL based on his results. I've already\n> > pointed out significant tuning considerations, for both Postgres\n> > and Oracle, which his benchmark did not take into account.\n> >\n> > This group really surprises me sometimes. For such a smart group\n> > of people, I'm not sure why everyone seems to have a problem\n> > pointing out design flaws, etc. 
in -hackers, yet when we want to\n> > look good, we'll overlook blatant flaws where benchmarks are\n> > concerned.\n> \n> The biggest flaw in the benchmark by far has got to be that it was\n> done with a ramdisk, so it's really only measuring CPU consumption.\n> Measuring CPU consumption is interesting, but it doesn't have a lot to\n> do with throughput in real-life situations. The benchmark was\n> obviously constructed to make PG look good, since the OP even mentions\n> on the page that the reason he went to ramdisk was that all of the\n> databases, *but particularly PG*, had trouble handling all those\n> little writes. (I wonder how much it would help to fiddle with the\n> synchronous_commit settings. How do MySQL and Oracle alleviate this\n> problem and we can usefully imitate any of it?)\n> \n\nThe benchmark is NOT constructed to make PostgreSQL look good, that\nnever was my intention. All databases suffered the I/O bottleneck for\ntheir redo/xlog/binary_log files, specially PostgreSQL but closely\nfollowed by Oracle. For some reason MySQL seems to deal better with I/O\ncontention, but still gives numbers that are less than the half it gives\nwith tmpfs.\n\nWhile using the old array (StorageTek T3), I've played with\nsynchronous_commit, wal_sync_method, commit_delay... and only setting\nwal_sync_method = open_datasync (which, in Solaris, completly disables\nI/O syncing) gave better results, for obvious reasons.\n\nAnyway, I think that in the next few months I'll be able to repeat the\ntests with a nice SAN, and then we'll have new numbers that will be\nmore near to \"real-world situations\" (but synthetic benchmarks always\nare synthetic benchmarks) and also we'll be able to compare them with\nthis ones to see how I/O contetion impacts on each database.\n", "msg_date": "Sat, 21 Feb 2009 00:30:02 +0100", "msg_from": "Sergio Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "Hi all,\n\nAs the author of BenchmarkSQL and the founder of EnterpriseDB.... I\ncan assure you that BenchmarkSQL was NOT written specifically for\nPostgreSQL. It is intended to be a completely database agnostic\ntpc-c like java based benchmark.\n\nHowever; as Jonah correctly points out in painstaking detail:\nPostgreSQL is good, but... Oracle, MySQL/Innodb and and and don't\nnecessarily suck. :-)\n\n--Luss\n\nPS: Submit a patch to BenchmarkSQL and I'll update it.\n\n\nOn 2/20/09, Sergio Lopez <[email protected]> wrote:\n> El Fri, 20 Feb 2009 16:54:58 -0500\n> Robert Haas <[email protected]> escribió:\n>\n>> On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris\n>> <[email protected]> wrote:\n>> > On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure\n>> > <[email protected]> wrote:\n>> >>\n>> >> ISTM you are the one throwing out unsubstantiated assertions\n>> >> without data to back it up. OP ran benchmark. showed\n>> >> hardware/configs, and demonstrated result. He was careful to\n>> >> hedge expectations and gave rationale for his analysis methods.\n>> >\n>> > As I pointed out in my last email, he makes claims about PG being\n>> > faster than Oracle and MySQL based on his results. I've already\n>> > pointed out significant tuning considerations, for both Postgres\n>> > and Oracle, which his benchmark did not take into account.\n>> >\n>> > This group really surprises me sometimes. For such a smart group\n>> > of people, I'm not sure why everyone seems to have a problem\n>> > pointing out design flaws, etc. 
in -hackers, yet when we want to\n>> > look good, we'll overlook blatant flaws where benchmarks are\n>> > concerned.\n>>\n>> The biggest flaw in the benchmark by far has got to be that it was\n>> done with a ramdisk, so it's really only measuring CPU consumption.\n>> Measuring CPU consumption is interesting, but it doesn't have a lot to\n>> do with throughput in real-life situations. The benchmark was\n>> obviously constructed to make PG look good, since the OP even mentions\n>> on the page that the reason he went to ramdisk was that all of the\n>> databases, *but particularly PG*, had trouble handling all those\n>> little writes. (I wonder how much it would help to fiddle with the\n>> synchronous_commit settings. How do MySQL and Oracle alleviate this\n>> problem and we can usefully imitate any of it?)\n>>\n>\n> The benchmark is NOT constructed to make PostgreSQL look good, that\n> never was my intention. All databases suffered the I/O bottleneck for\n> their redo/xlog/binary_log files, specially PostgreSQL but closely\n> followed by Oracle. For some reason MySQL seems to deal better with I/O\n> contention, but still gives numbers that are less than the half it gives\n> with tmpfs.\n>\n> While using the old array (StorageTek T3), I've played with\n> synchronous_commit, wal_sync_method, commit_delay... and only setting\n> wal_sync_method = open_datasync (which, in Solaris, completly disables\n> I/O syncing) gave better results, for obvious reasons.\n>\n> Anyway, I think that in the next few months I'll be able to repeat the\n> tests with a nice SAN, and then we'll have new numbers that will be\n> more near to \"real-world situations\" (but synthetic benchmarks always\n> are synthetic benchmarks) and also we'll be able to compare them with\n> this ones to see how I/O contetion impacts on each database.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 20 Feb 2009 20:40:02 -0500", "msg_from": "Denis Lussier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Fri, Feb 20, 2009 at 8:40 PM, Denis Lussier <\[email protected]> wrote:\n\n> Hi all,\n>\n> As the author of BenchmarkSQL and the founder of EnterpriseDB.... I\n> can assure you that BenchmarkSQL was NOT written specifically for\n> PostgreSQL. It is intended to be a completely database agnostic\n> tpc-c like java based benchmark.\n\n\nWith the exception that it analyzes Postgres tables but not Oracle or\nInnoDB, I agree with that. The goal of BenchmarkSQL was to be a database\nagnostic benchmark kit.\n\n-- \nJonah H. Harris, Senior DBA\nmyYearbook.com\n\nOn Fri, Feb 20, 2009 at 8:40 PM, Denis Lussier <[email protected]> wrote:\nHi all,\n\nAs the author of BenchmarkSQL and the founder of EnterpriseDB....  I\ncan assure you that BenchmarkSQL was NOT written specifically for\nPostgreSQL.    It is intended to be a completely database agnostic\ntpc-c like java based benchmark.With the exception that it analyzes Postgres tables but not Oracle or InnoDB, I agree with that.  The goal of BenchmarkSQL was to be a database agnostic benchmark kit.\n-- Jonah H. Harris, Senior DBAmyYearbook.com", "msg_date": "Sat, 21 Feb 2009 21:04:49 -0500", "msg_from": "\"Jonah H. 
Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "El Sat, 21 Feb 2009 21:04:49 -0500\n\"Jonah H. Harris\" <[email protected]> escribió:\n\n> On Fri, Feb 20, 2009 at 8:40 PM, Denis Lussier <\n> [email protected]> wrote:\n> \n> > Hi all,\n> >\n> > As the author of BenchmarkSQL and the founder of EnterpriseDB.... I\n> > can assure you that BenchmarkSQL was NOT written specifically for\n> > PostgreSQL. It is intended to be a completely database agnostic\n> > tpc-c like java based benchmark.\n> \n> \n> With the exception that it analyzes Postgres tables but not Oracle or\n> InnoDB, I agree with that. The goal of BenchmarkSQL was to be a\n> database agnostic benchmark kit.\n> \n\nI've just made the same tests analyzing Oracle (with the dbms.stats\npackage) and not analyzing Postgres, and results are almost the same\nas the ones obtained before. The queries and schema used by BenchmarkSQL\nseem to be too simple to let place for plan optimization.\n\nOn the other hand... you were right. My benchmark has a serious flaw,\nbut it isn't in database configuration, but in the client which runs the\ntests, which is a bottleneck for all the environments.\n\nI've just solved this issue, and I'm now running again the tests and\nOracle defeats PostgreSQL by far.\n\nI've taken down the article and I'll bring up it again when I've\ncollected new numbers.\n\nI must say thanks to your skepticism ;-)\n", "msg_date": "Mon, 23 Feb 2009 19:29:06 +0100", "msg_from": "Sergio Lopez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" }, { "msg_contents": "On Mon, Feb 23, 2009 at 1:29 PM, Sergio Lopez <[email protected]> wrote:\n> El Sat, 21 Feb 2009 21:04:49 -0500\n> I've taken down the article and I'll bring up it again when I've\n> collected new numbers.\n\nPlease do, this subject is very interesting.\n\nRegards.\n", "msg_date": "Wed, 25 Feb 2009 15:52:40 -0500", "msg_from": "=?UTF-8?Q?Rodrigo_E=2E_De_Le=C3=B3n_Plicet?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark comparing PostgreSQL, MySQL and Oracle" } ]
[ { "msg_contents": "I have a server box that has 4GB of RAM, Quad core CPU AMD Opteron 200.152\nMhz (1024 KB cache size each) with plenty of hard drive space.\n\nI installed both postgresql 8.2.6 and 8.3.3 on it. I've created a basic\ntest db and used\npgbench -i -s 1 -U test -h localhost test\nto create a sample test db.\n\nThen, to benchmark the postgreSQLs, I executed this separately on each of\nthem:\npg_bench -h localhost -d test -t 2000 -c 50 -s 50 -U test\n(2000 transactions per client, 50 clients, scalability factor of 50)\n\nUsing the above,\nI get on postgreSQL 8.2.6:\nLoad average: Between 3.4 and 4.3\ntps = 589 (including connections establishing)\ntps = 590 (excluding connections establishing)\n\nI get on postgreSQL 8.3.3\nLoad: Between 4.5 and 5.6\ntps = 949 (including connections establishing)\ntps = 951 (excluding connections establishing)\n\nThe amount of tps almost doubled, which is good, but i'm worried about the\nload. For my application, a load increase is bad and I'd like to keep it\njust like in 8.2.6 (a load average between 3.4 and 4.3). What parameters\nshould I work with to decrease the resulting load average at the expense of\ntps?\n\nDown below is my 8.3.3 configuration file. I removed everything that is\ncommented since if it's commented, it's default value. I also removed from\nthe sample below parameters related to logging.\n\n===== postgresql.conf begins =====\n\nport = 5432 # (change requires restart)\nmax_connections = 180 # (change requires restart)\nsuperuser_reserved_connections = 5 # (change requires restart)\nunix_socket_directory = '/var/run/postgresql' # (change requires\nrestart)\nssl = off # (change requires restart)\n\nshared_buffers = 512MB # min 128kB or max_connections*16kB\n\ntemp_buffers = 8MB # min 800kB\nmax_prepared_transactions = 5 # can be 0 or more\n\nwork_mem = 16MB # min 64kB\nmaintenance_work_mem = 512MB # min 1MB\nmax_stack_depth = 2MB # min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 2400000 # min max_fsm_relations*16, 6 bytes each\n\nvacuum_cost_delay = 0 # 0-1000 milliseconds\nvacuum_cost_page_hit = 1 # 0-10000 credits\nvacuum_cost_page_miss = 10 # 0-10000 credits\nvacuum_cost_page_dirty = 20 # 0-10000 credits\nvacuum_cost_limit = 200 # 1-10000 credits\n\n\nfsync = off # turns forced synchronization on or off\n\n#------------------------------------------------------------------------------\n# QUERY TUNING\n#------------------------------------------------------------------------------\n\nseq_page_cost = 1.0 # measured on an arbitrary scale\nrandom_page_cost = 3.0 # same scale as above\neffective_cache_size = 1024MB\n#------------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#------------------------------------------------------------------------------\n\nautovacuum = on # Enable autovacuum subprocess? 
'on'\nautovacuum_naptime = 1min # time between autovacuum runs\nautovacuum_vacuum_threshold = 500 # min number of row updates before\nautovacuum_analyze_threshold = 250 # min number of row updates before\nautovacuum_vacuum_scale_factor = 0.2 # fraction of table size before\nvacuum\nautovacuum_analyze_scale_factor = 0.1 # fraction of table size before\nanalyze\nautovacuum_vacuum_cost_delay = 0 # default vacuum cost delay for\nautovacuum_vacuum_cost_limit = 200 # default vacuum cost limit for\n\n\n#------------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#------------------------------------------------------------------------------\ndatestyle = 'iso, mdy'\ntimezone = UTC # actually, defaults to TZ environment\nlc_messages = 'en_US.UTF-8' # locale for system error message\n # strings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\n\n#------------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#------------------------------------------------------------------------------\n\nescape_string_warning = off\n\n\n\n===== postgresql.conf ends =====\n", "msg_date": "Fri, 20 Feb 2009 16:34:23 -0500", "msg_from": "Battle Mage <[email protected]>", "msg_from_op": true, "msg_subject": "postgreSQL performance 8.2.6 vs 8.3.3" }, { "msg_contents": "On Fri, Feb 20, 2009 at 04:34:23PM -0500, Battle Mage wrote:\n> I have a server box that has 4GB of RAM, Quad core CPU AMD Opteron 200.152\n> Mhz (1024 KB cache size each) with plenty of hard drive space.\n> \n> I installed both postgresql 8.2.6 and 8.3.3 on it. 
I've created a basic\n> test db and used\n> pgbench -i -s 1 -U test -h localhost test\n> to create a sample test db.\n> \n> Then, to benchmark the postgreSQLs, I executed this separately on each of\n> them:\n> pg_bench -h localhost -d test -t 2000 -c 50 -s 50 -U test\n> (2000 transactions per client, 50 clients, scalability factor of 50)\n> \n> Using the above,\n> I get on postgreSQL 8.2.6:\n> Load average: Between 3.4 and 4.3\n> tps = 589 (including connections establishing)\n> tps = 590 (excluding connections establishing)\n> \n> I get on postgreSQL 8.3.3\n> Load: Between 4.5 and 5.6\n> tps = 949 (including connections establishing)\n> tps = 951 (excluding connections establishing)\n> \n> The amount of tps almost doubled, which is good, but i'm worried about the\n> load. For my application, a load increase is bad and I'd like to keep it\n> just like in 8.2.6 (a load average between 3.4 and 4.3). What parameters\n> should I work with to decrease the resulting load average at the expense of\n> tps?\n> \n> Down below is my 8.3.3 configuration file. I removed everything that is\n> commented since if it's commented, it's default value. I also removed from\n> the sample below parameters related to logging.\n\nPlease evaluate your load on the 8.3.3 box at 590 tps. If the load is\nproportional to the tps than the scaled load will be: 2.8 to 3.5 for\nan equivalent tps. There is no free lunch but 8.3 performs much better than\n8.2 and I suspect that this trend will continue. :)\n\nCheers,\nKen\n\n", "msg_date": "Fri, 20 Feb 2009 15:45:50 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgreSQL performance 8.2.6 vs 8.3.3" }, { "msg_contents": "On Fri, Feb 20, 2009 at 1:34 PM, Battle Mage <[email protected]> wrote:\n> The amount of tps almost doubled, which is good, but i'm worried about the\n> load. For my application, a load increase is bad and I'd like to keep it\n> just like in 8.2.6 (a load average between 3.4 and 4.3). What parameters\n> should I work with to decrease the resulting load average at the expense of\n> tps?\n\nWhy is it bad? High load can mean a number of things.\n\nThe only way to reduce the load is to get the client to submit\nrequests slower. I don't think you'll be successful in tuning the\ndatabase to run slower. I think you're headed in the wrong direction.\n\n-Dave\n", "msg_date": "Fri, 20 Feb 2009 13:46:00 -0800", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgreSQL performance 8.2.6 vs 8.3.3" }, { "msg_contents": "On Fri, Feb 20, 2009 at 2:34 PM, Battle Mage <[email protected]> wrote:\n> I have a server box that has 4GB of RAM, Quad core CPU AMD Opteron 200.152\n> Mhz (1024 KB cache size each) with plenty of hard drive space.\n>\n> I installed both postgresql 8.2.6 and 8.3.3 on it. I've created a basic\n> test db and used\n> pgbench -i -s 1 -U test -h localhost test\n> to create a sample test db.\n>\n> Then, to benchmark the postgreSQLs, I executed this separately on each of\n> them:\n> pg_bench -h localhost -d test -t 2000 -c 50 -s 50 -U test\n> (2000 transactions per client, 50 clients, scalability factor of 50)\n\nIf you're goint to test with -c50 you should initialize with -s50. -s\n50 after initialization doesn't mean anything. 
It's the first pgbench\n-i -s nnn where you need to set nnn to 50 (or higher) if you're gonna\ntest with it.\n\n> Using the above,\n> I get on postgreSQL 8.2.6:\n> Load average: Between 3.4 and 4.3\n> tps = 589 (including connections establishing)\n> tps = 590 (excluding connections establishing)\n>\n> I get on postgreSQL 8.3.3\n> Load: Between 4.5 and 5.6\n> tps = 949 (including connections establishing)\n> tps = 951 (excluding connections establishing)\n\nNice improvement.\n\n> The amount of tps almost doubled, which is good, but i'm worried about the\n> load. For my application, a load increase is bad and I'd like to keep it\n> just like in 8.2.6 (a load average between 3.4 and 4.3). What parameters\n> should I work with to decrease the resulting load average at the expense of\n> tps?\n\nI agree with the other poster. Why is a load increase bad? What does\nit mean here. I've got one load that runs smoothly with a load factor\nof 60 to 150 on a server, while the same server with a different load\nstarts to bog down with load factors between 10 and 15. It's a very\nbroad measurement. Don't try to tune to your load factor, try to tune\nto the real load being applied, and opimtize there.\n\n> Down below is my 8.3.3 configuration file. I removed everything that is\n> commented since if it's commented, it's default value. I also removed from\n> the sample below parameters related to logging.\n>\n> ===== postgresql.conf begins =====\n> fsync = off # turns forced synchronization on or off\n\nSo, I assume either your data is easily reproduceable, unimportant, or\nreplicated in such a way that you can survive sudden power loss /\nkernel crash?\n\nAlso, is there are reason you're running two different out of date\nreleases of postgresql?\n", "msg_date": "Fri, 20 Feb 2009 14:56:54 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgreSQL performance 8.2.6 vs 8.3.3" }, { "msg_contents": "On Fri, 20 Feb 2009, David Rees wrote:\n\n> On Fri, Feb 20, 2009 at 1:34 PM, Battle Mage <[email protected]> wrote:\n>> The amount of tps almost doubled, which is good, but i'm worried about the\n>> load. For my application, a load increase is bad and I'd like to keep it\n>> just like in 8.2.6 (a load average between 3.4 and 4.3). What parameters\n>> should I work with to decrease the resulting load average at the expense of\n>> tps?\n>\n> Why is it bad? High load can mean a number of things.\n>\n> The only way to reduce the load is to get the client to submit\n> requests slower. I don't think you'll be successful in tuning the\n> database to run slower. I think you're headed in the wrong direction.\n\nnote that on linux the loadave includes processes that are stalled waiting \nfor I/O to complete. as a result loadave isn't the entire picture. 
you \nneed to also look to see what the cpu idle time looks like.\n\nthat being said, I am generally very happy with loadave <= # cores and \nconsider loadave <= 2x # cores to be acceptable\n\nit's nowhere near perfect, but it seems to serve me well as a rule of \nthumb.\n\nDavid Lang\n", "msg_date": "Mon, 23 Feb 2009 13:02:38 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: postgreSQL performance 8.2.6 vs 8.3.3" }, { "msg_contents": "On Mon, Feb 23, 2009 at 2:02 PM, <[email protected]> wrote:\n> On Fri, 20 Feb 2009, David Rees wrote:\n>\n>> On Fri, Feb 20, 2009 at 1:34 PM, Battle Mage <[email protected]> wrote:\n>>>\n>>> The amount of tps almost doubled, which is good, but i'm worried about\n>>> the\n>>> load. For my application, a load increase is bad and I'd like to keep it\n>>> just like in 8.2.6 (a load average between 3.4 and 4.3). What parameters\n>>> should I work with to decrease the resulting load average at the expense\n>>> of\n>>> tps?\n>>\n>> Why is it bad? High load can mean a number of things.\n>>\n>> The only way to reduce the load is to get the client to submit\n>> requests slower. I don't think you'll be successful in tuning the\n>> database to run slower. I think you're headed in the wrong direction.\n>\n> note that on linux the loadave includes processes that are stalled waiting\n> for I/O to complete. as a result loadave isn't the entire picture. you need\n> to also look to see what the cpu idle time looks like.\n>\n> that being said, I am generally very happy with loadave <= # cores and\n> consider loadave <= 2x # cores to be acceptable\n>\n> it's nowhere near perfect, but it seems to serve me well as a rule of thumb.\n\nAnd it's very dependent on type of load. For our primary customer\ndata database a load of 80 to 120 is not uncommon during certain\noperations (like adding a slave back to the fark and it gets a ton of\nrequests while it's loading up its cache) and it stays responsive.\nOTOH, a load of 20 on a reporting server doing tons of sequential\nscans and allocating a lot of memory is way overloaded for the same\nserver type.\n\nI had responsive behaviour into the 300 or 400 load range running\npgbench in \"destroy all servers mode (-c 500 -t 10000000 or something\nlike that) on that machine. Sure, it wasn't exactly peppy or\nanything, but most small queries were still running in well under a\nsecond.\n", "msg_date": "Mon, 23 Feb 2009 16:33:49 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgreSQL performance 8.2.6 vs 8.3.3" } ]
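A minimal sketch of the initialization step described in the thread above, assuming the same database, user and host names the original poster used (adjust as needed). The point made in the thread is that the scaling factor is baked in at pgbench -i time; passing -s to a later benchmark run does not resize the tables.

  # re-initialize with a scaling factor at least equal to the client count
  pgbench -i -s 50 -h localhost -U test test

  # then benchmark with 50 clients; -s is redundant at this stage
  pgbench -c 50 -t 2000 -h localhost -U test test

With -s 1 at initialization there is only one branch row to update, so 50 clients largely serialize on that single row instead of exercising the server.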
[ { "msg_contents": "Hi,\n\nI have a 8 GB database, and 2 GB table. In a query i use the 2 GB table and\nseveral other tables where it takes around 90 minutes for execution.\n\nIn different places, it takes drastically different time. Say everywhere i\nhave the same,\nOS - Debian.\nPrimary memory - 3 GB\nPostgreSQL configuration.\n\nBut in one machine it takes 3.5 minutes, and in other machine 90 minutes\nwhich confuses me much. So i did a test of hard disk read speed, in the\nmachine where it takes 90 minutes resulted in the following.,\n\ndd if=/var/lib/postgresql/8.1/main/base/16385/17283 of=/dev/null bs=1M\ncount=1024\n1024+0 records in\n1024+0 records out\n1073741824 bytes (1.1 GB) copied, 8.32928 seconds, 129 MB/s\n\nUnfortunately, i cannot execute this same on that 3.5 minute execution\nmachine.\n\nBut i had the previous write speed test, output which is\ndd if=/dev/zero of=/tmp/test bs=1M count=1024\n1073741824 bytes (1.1 GB) copied, 2.37701 seconds, 452 MB/s\n\nand the same write speed test in the 90 minutes machine is\ndd if=/dev/zero of=/tmp/test bs=1M count=1024\n1024+0 records in\n1024+0 records out\n1073741824 bytes (1.1 GB) copied, 19.5375 seconds, 55.0 MB/s\n\nSo i assume that there should be a 9 times faster execution. Because 55 MB\nwrite per second, and 450 MB write per second.\nBut am i doing some thing silly here. Or what i can do better confirm the\nproblem ??\n\ncan some body give me ideas on what to do for confirming what is the issue\nfor consuming much time for the query execution ?\n\nHi,I have a 8 GB database, and 2 GB table. In a query i use the 2 GB table and several other tables where it takes around 90 minutes for execution.In different places, it takes drastically different time. Say everywhere i have the same, \nOS - Debian.Primary memory - 3 GBPostgreSQL configuration.But in one machine it takes 3.5 minutes, and in other machine 90 minutes which confuses me much. So i did a test of hard disk read speed, in the machine where it takes 90 minutes resulted in the following.,\ndd if=/var/lib/postgresql/8.1/main/base/16385/17283 of=/dev/null bs=1M count=10241024+0 records in\n1024+0 records out1073741824 bytes (1.1 GB) copied, 8.32928 seconds, 129 MB/s\nUnfortunately, i cannot execute this same on that 3.5 minute execution machine.But i had the previous write speed test, output which is\ndd if=/dev/zero of=/tmp/test bs=1M count=1024\n1073741824 bytes (1.1 GB) copied, 2.37701 seconds, 452 MB/sand the same write speed test in the 90 minutes machine isdd if=/dev/zero of=/tmp/test bs=1M count=1024\n1024+0 records in\n1024+0 records out\n1073741824 bytes (1.1 GB) copied, 19.5375 seconds, 55.0 MB/sSo i assume that there should be a 9 times faster execution. Because 55 MB write per second, and 450 MB write per second.But am i doing some thing silly here. Or what i can do better confirm the problem ??\ncan some body give me ideas on what to do for confirming what is the issue for consuming much time for the query execution ?", "msg_date": "Sat, 21 Feb 2009 12:44:38 +0530", "msg_from": "sathiya psql <[email protected]>", "msg_from_op": true, "msg_subject": "how the hdd read speed is related to the query execution speed." 
}, { "msg_contents": "sathiya psql <[email protected]> schrieb:\n> can some body give me ideas on what to do for confirming what is the issue for\n> consuming much time for the query execution ?\n\nRun \n\nEXPLAIN ANALYSE <your query> on both machines and compare the output or\nshow the output here.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°\n", "msg_date": "Sat, 21 Feb 2009 09:36:04 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how the hdd read speed is related to the query\n\texecution speed." } ]
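A small follow-up sketch on the dd comparison above: both figures are easy to distort with the OS page cache, so the raw numbers may not say much about the disks themselves. This assumes GNU dd and a kernel recent enough to expose /proc/sys/vm/drop_caches; the file paths are simply the ones already used in the thread.

  # write test that waits for the data to reach disk, so the page cache
  # cannot inflate the result (452 MB/s is almost certainly cached)
  dd if=/dev/zero of=/tmp/test bs=1M count=1024 conv=fdatasync

  # drop the page cache (as root) before re-running the read test,
  # otherwise a recently-read relation file is served from RAM
  sync
  echo 3 > /proc/sys/vm/drop_caches
  dd if=/var/lib/postgresql/8.1/main/base/16385/17283 of=/dev/null bs=1M count=1024

Even with comparable disk numbers, EXPLAIN ANALYSE output from both machines, as suggested above, is what will actually show where the 90 minutes go.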
[ { "msg_contents": "Hello,\n\nI'm experiencing a strange issue. I have a table with around 11 million \nrecords (11471762 to be exact), storing login attempts to a web site. \nThanks to the index I have created on username, looking into that table \nby username is very fast:\n\n\n\ndb=# EXPLAIN ANALYZE\nSELECT\n *\nFROM\n login_attempt\nWHERE\n username='kouber'\nORDER BY\n login_attempt_sid DESC;\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1415.15..1434.93 rows=7914 width=38) (actual \ntime=0.103..0.104 rows=2 loops=1)\n Sort Key: login_attempt_sid\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using login_attempt_username_idx on login_attempt \n(cost=0.00..902.71 rows=7914 width=38) (actual time=0.090..0.091 rows=2 \nloops=1)\n Index Cond: ((username)::text = 'kouber'::text)\n Total runtime: 0.140 ms\n(6 rows)\n\n\n\nAs you can see, there are only 2 records for that particular username.\n\nHowever when I add a LIMIT clause to the same query the planner no \nlonger uses the right index, hence the query becomes very slow:\n\n\n\ndb=# EXPLAIN ANALYZE\nSELECT\n *\nFROM\n login_attempt\nWHERE\n username='kouber'\nORDER BY\n login_attempt_sid DESC LIMIT 20;\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..770.45 rows=20 width=38) (actual \ntime=0.064..3797.660 rows=2 loops=1)\n -> Index Scan Backward using login_attempt_pkey on login_attempt \n(cost=0.00..304866.46 rows=7914 width=38) (actual time=0.062..3797.657 \nrows=2 loops=1)\n Filter: ((username)::text = 'kouber'::text)\n Total runtime: 3797.691 ms\n(4 rows)\n\n\n\nNow, recently I have altered some of the default parameters in order to \nget as much as possible out of the hardware - 12 GB of RAM, 8 \nprocessors. So, I guess I have done something wrong, thus the planner is \ntaking that wrong decision. Here's what I have changed in \npostgresql.conf (from the default one):\n\nmax_connections = 200\nshared_buffers = 256MB\nwork_mem = 64MB\nmaintenance_work_mem = 128MB\nmax_stack_depth = 6MB\nmax_fsm_pages = 100000\nsynchronous_commit = off\nwal_buffers = 1MB\ncommit_delay = 100\ncommit_siblings = 5\ncheckpoint_segments = 10\ncheckpoint_timeout = 10min\nrandom_page_cost = 0.1\neffective_cache_size = 2048MB\n\nAny idea what's wrong here?\n\nRegards,\n-- \nKouber Saparev\nhttp://kouber.saparev.com\n", "msg_date": "Mon, 23 Feb 2009 14:26:24 +0200", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "LIMIT confuses the planner" }, { "msg_contents": "Kouber Saparev wrote:\n\n> db=# EXPLAIN ANALYZE\n> SELECT\n> *\n> FROM\n> login_attempt\n> WHERE\n> username='kouber'\n> ORDER BY\n> login_attempt_sid DESC;\n> \n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> \n> Sort (cost=1415.15..1434.93 rows=7914 width=38) (actual\n> time=0.103..0.104 rows=2 loops=1)\n> Sort Key: login_attempt_sid\n> Sort Method: quicksort Memory: 25kB\n> -> Index Scan using login_attempt_username_idx on login_attempt\n> (cost=0.00..902.71 rows=7914 width=38) (actual time=0.090..0.091 rows=2\n> loops=1)\n> Index Cond: ((username)::text = 'kouber'::text)\n> Total runtime: 0.140 ms\n\nIt's expecting 7914 rows returned and is getting only 2. 
That is\nprobably the root of the problem.\n\n> However when I add a LIMIT clause to the same query the planner no\n> longer uses the right index, hence the query becomes very slow:\n> \n> \n> db=# EXPLAIN ANALYZE\n> SELECT\n> *\n> FROM\n> login_attempt\n> WHERE\n> username='kouber'\n> ORDER BY\n> login_attempt_sid DESC LIMIT 20;\n\nSince it's expecting 7914 rows for \"kouber\" it thinks it will find the\n20 rows you want fairly quickly by just looking backward through the\nlogin_attempt_pkey index.\n\nTry increasing the stats on the username column.\n\nALTER TABLE login_attempt ALTER COLUMN username SET STATISTICS 100;\nANALYZE login_attempt;\n\nYou can try different values of statistics up to 1000, but there's no\npoint in setting it too high.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 23 Feb 2009 13:27:44 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner" }, { "msg_contents": "On Mon, Feb 23, 2009 at 7:26 AM, Kouber Saparev <[email protected]> wrote:\n> Now, recently I have altered some of the default parameters in order to get\n> as much as possible out of the hardware - 12 GB of RAM, 8 processors. So, I\n> guess I have done something wrong, thus the planner is taking that wrong\n> decision. Here's what I have changed in postgresql.conf (from the default\n> one):\n>\n> max_connections = 200\n> shared_buffers = 256MB\n> work_mem = 64MB\n> maintenance_work_mem = 128MB\n> max_stack_depth = 6MB\n> max_fsm_pages = 100000\n> synchronous_commit = off\n> wal_buffers = 1MB\n> commit_delay = 100\n> commit_siblings = 5\n> checkpoint_segments = 10\n> checkpoint_timeout = 10min\n> random_page_cost = 0.1\n> effective_cache_size = 2048MB\n>\n> Any idea what's wrong here?\n\nIf you left seq_page_cost (which isn't mentioned here) at the default\nvalue but reduced random_page_cost to 0.1, then you have\nrandom_page_cost < seq_page_cost. That's probably Bad.\n\n...Robert\n", "msg_date": "Mon, 23 Feb 2009 09:53:41 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> If you left seq_page_cost (which isn't mentioned here) at the default\n> value but reduced random_page_cost to 0.1, then you have\n> random_page_cost < seq_page_cost. That's probably Bad.\n\n... well, it's certainly going to push the planner to believe indexscans\nare cheaper than sorts no matter what.\n\nThe previously noted rowcount estimation problem might be a bigger issue\nin this particular case, but I agree this is a Bad Idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Feb 2009 10:09:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner " }, { "msg_contents": "Richard Huxton wrote:\n> Since it's expecting 7914 rows for \"kouber\" it thinks it will find the\n> 20 rows you want fairly quickly by just looking backward through the\n> login_attempt_pkey index.\n> \n> Try increasing the stats on the username column.\n> \n> ALTER TABLE login_attempt ALTER COLUMN username SET STATISTICS 100;\n> ANALYZE login_attempt;\n> \n> You can try different values of statistics up to 1000, but there's no\n> point in setting it too high.\n> \n\nHmmm, that did the trick, thank you. 
I updated the statistics of the \ncolumn to 300, so now the query plan changed to:\n\n\nLimit (cost=127.65..127.70 rows=20 width=38) (actual time=0.085..0.086 \nrows=3 loops=1)\n-> Sort (cost=127.65..129.93 rows=910 width=38) (actual \ntime=0.084..0.085 rows=3 loops=1)\n Sort Key: login_attempt_sid\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on login_attempt (cost=7.74..103.44 rows=910 \nwidth=38) (actual time=0.075..0.078 rows=3 loops=1)\n Recheck Cond: ((username)::text = 'kouber'::text)\n -> Bitmap Index Scan on login_attempt_username_idx \n(cost=0.00..7.51 rows=910 width=0) (actual time=0.069..0.069 rows=3 loops=1)\n\t Index Cond: ((username)::text = 'kouber'::text)\nTotal runtime: 0.114 ms\n\n\nNow the planner believes there're 910 rows, which is a bit closer to the \nreal data:\n\nswing=# select avg(length) from (select username, count(*) as length \nfrom login_attempt group by username) as freq;\n avg\n----------------------\n 491.6087310427555479\n(1 row)\n\n\n-- \nKouber Saparev\nhttp://kouber.saparev.com\n", "msg_date": "Mon, 23 Feb 2009 19:42:18 +0200", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIMIT confuses the planner" }, { "msg_contents": "Kouber Saparev <[email protected]> writes:\n> Now the planner believes there're 910 rows, which is a bit closer to the \n> real data:\n\n> swing=# select avg(length) from (select username, count(*) as length \n> from login_attempt group by username) as freq;\n> avg\n> ----------------------\n> 491.6087310427555479\n> (1 row)\n\nHmph, that's still not real good. Ideally it should be estimating\n*less* than the average frequency, because the estimate is made after\nexcluding all the most-common-values, which evidently 'kouber' is not\none of. I suppose there's quite a large number of infrequently-seen\nusernames and the ndistinct estimate is much less than reality? (Look\nat the pg_stats row for this column.) It might be worth going all the\nway to stats target 1000 for this column.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Feb 2009 13:01:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner " }, { "msg_contents": "Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> If you left seq_page_cost (which isn't mentioned here) at the default\n>> value but reduced random_page_cost to 0.1, then you have\n>> random_page_cost < seq_page_cost. That's probably Bad.\n> \n> ... well, it's certainly going to push the planner to believe indexscans\n> are cheaper than sorts no matter what.\n> \n> The previously noted rowcount estimation problem might be a bigger issue\n> in this particular case, but I agree this is a Bad Idea.\n\nSo I've set it wrong, I guess. :-)\n\nNow I put it to:\n\nseq_page_cost = 1\nrandom_page_cost = 2\n\nRegards,\n-- \nKouber Saparev\nhttp://kouber.saparev.com\n", "msg_date": "Mon, 23 Feb 2009 20:44:22 +0200", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIMIT confuses the planner" }, { "msg_contents": "Tom Lane wrote:\n> Kouber Saparev <[email protected]> writes:\n>> Now the planner believes there're 910 rows, which is a bit closer to the \n>> real data:\n> \n>> swing=# select avg(length) from (select username, count(*) as length \n>> from login_attempt group by username) as freq;\n>> avg\n>> ----------------------\n>> 491.6087310427555479\n>> (1 row)\n> \n> Hmph, that's still not real good. 
Ideally it should be estimating\n> *less* than the average frequency, because the estimate is made after\n> excluding all the most-common-values, which evidently 'kouber' is not\n> one of. I suppose there's quite a large number of infrequently-seen\n> usernames and the ndistinct estimate is much less than reality? (Look\n> at the pg_stats row for this column.) It might be worth going all the\n> way to stats target 1000 for this column.\n\n\nI altered the statistics for that column to 1000, so now the planner \nassumes exactly 492 rows for the fore-mentioned query, which is indeed \nthe average. It never went *less* than that value, it was always higher, \ni.e. for a statistics value of 600, it was 588, for 800, it became 540.\n\nThe current value of n_distinct (given statistics=1000) is:\n\ndb=# SELECT n_distinct FROM pg_stats WHERE tablename='login_attempt' AND \nattname='username';\n n_distinct\n------------\n 5605\n(1 row)\n\ndb=# SELECT COUNT(DISTINCT username) FROM login_attempt;\n count\n-------\n 23391\n(1 row)\n\n\nIn fact, what is n_distinct standing for, apart the famous formula:\nn*d / (n - f1 + f1*n/N)\n\n;-)\n\nRegards,\n-- \nKouber Saparev\nhttp://kouber.saparev.com\n", "msg_date": "Tue, 24 Feb 2009 19:08:51 +0200", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIMIT confuses the planner" }, { "msg_contents": "Kouber Saparev <[email protected]> writes:\n> Tom Lane wrote:\n>> Hmph, that's still not real good. Ideally it should be estimating\n>> *less* than the average frequency, because the estimate is made after\n>> excluding all the most-common-values, which evidently 'kouber' is not\n>> one of.\n\n> I altered the statistics for that column to 1000, so now the planner \n> assumes exactly 492 rows for the fore-mentioned query, which is indeed \n> the average. It never went *less* than that value, it was always higher, \n> i.e. for a statistics value of 600, it was 588, for 800, it became 540.\n\nI got some time to think more about this and experiment a bit further.\nAs far as I can tell there is no fundamental bug here --- given\nreasonably accurate stats the rowcount estimate behaves as expected, ie,\nyou get an estimate that's less than the actual average number of values\nif the target value is not one of the known MCVs. However, as the\nn_distinct estimate falls below the actual number of distinct values,\nthat rowcount estimate necessarily rises. What had surprised me about\nthis report is that the estimate matched the true average number of rows\nso closely; I wondered if there was some property of the way we estimate\nn_distinct that would make that happen. But I now think that that was\njust chance: there doesn't seem to be any underlying behavior that would\ncause it. I did some experiments with random data matching a Zipfian\ndistribution (1/k law) and did not observe that the rowcount estimate\nconverged to the true average when the n_distinct value was too low.\n\nSo the bottom line here is just that the estimated n_distinct is too\nlow. We've seen before that the equation we use tends to do that more\noften than not. I doubt that consistently erring on the high side would\nbe better though :-(. 
Estimating n_distinct from a limited sample of\nthe population is known to be a statistically hard problem, so we'll\nprobably not ever have perfect answers, but doing better is on the\nto-do list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Mar 2009 17:29:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner " }, { "msg_contents": "> So the bottom line here is just that the estimated n_distinct is too\n> low.  We've seen before that the equation we use tends to do that more\n> often than not.  I doubt that consistently erring on the high side would\n> be better though :-(.  Estimating n_distinct from a limited sample of\n> the population is known to be a statistically hard problem, so we'll\n> probably not ever have perfect answers, but doing better is on the\n> to-do list.\n>\n\nI hit an interestinhg paper on n_distinct calculation:\n\nhttp://www.pittsburgh.intel-research.net/people/gibbons/papers/distinct-values-chapter.pdf\n\nthe PCSA algorithm described there requires O(1) calculation per\nvalue. Page 22 describes what to do with updates streams.\n\nThis I think (disclaimer: I know little about PG internals) means that\nthe n_distinct estimation can be done during vacuum time (it would\nplay well with the visibility map addon).\n\nWhat do You think?\n\nGreetings\nMarcin\n", "msg_date": "Mon, 23 Mar 2009 01:12:14 +0100", "msg_from": "marcin mank <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner" }, { "msg_contents": "> I hit an interestinhg paper on n_distinct calculation:\n>\n> http://www.pittsburgh.intel-research.net/people/gibbons/papers/distinct-values-chapter.pdf\n>\n> the PCSA algorithm described there requires O(1) calculation per\n> value. Page 22 describes what to do with updates streams.\n>\n> This I think (disclaimer: I know little about PG internals) means that\n> the n_distinct estimation can be done during vacuum time (it would\n> play well with the visibility map addon).\n>\n> What do You think?\n\nok, if You think that calculating a has function of every data field\nfor each insert or delete is prohibitive, just say so and don`t bother\nreading the paper :]\n\nGreetings\nMarcin\n", "msg_date": "Mon, 23 Mar 2009 01:56:50 +0100", "msg_from": "marcin mank <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner" }, { "msg_contents": "marcin mank <[email protected]> writes:\n> I hit an interestinhg paper on n_distinct calculation:\n> http://www.pittsburgh.intel-research.net/people/gibbons/papers/distinct-values-chapter.pdf\n\nI don't think we're quite ready to make ANALYZE read every row of a\ntable in order to estimate n_distinct. 
It is an interesting paper\nin that it says that you have to do that in order to get *provably*\ngood estimates, but I've not abandoned the hope of getting *usually*\ngood estimates without so much work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Mar 2009 22:18:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT confuses the planner " }, { "msg_contents": "Now I am experiencing similar issue with another table, called \n\"message\", for which there's a conditional index:\n\nCREATE TABLE message (\n message_sid SERIAL PRIMARY KEY,\n from_profile_sid INT NOT NULL REFERENCES profile,\n to_profile_sid INT NOT NULL REFERENCES profile,\n sender_has_deleted BOOLEAN NOT NULL DEFAULT FALSE,\n receiver_has_deleted BOOLEAN NOT NULL DEFAULT FALSE,\n datetime TIMESTAMP NOT NULL DEFAULT NOW(),\n body TEXT\n);\n\nCREATE INDEX message_from_profile_idx (from_profile_sid) WHERE NOT \nsender_has_deleted;\n\n\nSo, again... adding a LIMIT makes the planner choose the \"wrong\" index.\n\n\ndb=# EXPLAIN ANALYZE SELECT \n message_sid\nFROM\n message\nWHERE\n from_profile_sid = 1296 AND NOT sender_has_deleted\nORDER BY\n message_sid DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2307.70..2310.74 rows=1215 width=4) (actual \ntime=0.040..0.040 rows=2 loops=1)\n Sort Key: message_sid\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on message (cost=23.59..2245.45 rows=1215 \nwidth=4) (actual time=0.029..0.033 rows=2 loops=1)\n Recheck Cond: ((from_profile_sid = 1296) AND (NOT \nsender_has_deleted))\n -> Bitmap Index Scan on message_from_profile_idx \n(cost=0.00..23.28 rows=1215 width=0) (actual time=0.022..0.022 rows=2 \nloops=1)\n Index Cond: (from_profile_sid = 1296)\n Total runtime: 0.068 ms\n(8 rows)\n\n\n\n\ndb=# EXPLAIN ANALYZE SELECT\n message_sid\nFROM\n message\nWHERE\n from_profile_sid = 1296 AND NOT sender_has_deleted\nORDER BY\n message_sid DESC LIMIT 20;\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1461.12 rows=20 width=4) (actual \ntime=0.817..932.398 rows=2 loops=1)\n -> Index Scan Backward using message_pkey on message \n(cost=0.00..88762.80 rows=1215 width=4) (actual time=0.816..932.395 \nrows=2 loops=1)\n Filter: ((NOT sender_has_deleted) AND (from_profile_sid = 1296))\n Total runtime: 932.432 ms\n(4 rows)\n\n\n\nI had already increased STATISTICS to 1000 for both from_profile_sid and \nsender_has_deleted, and vacuum analyzed respectively (also did reindex), \nbut still statistical data is confusing me:\n\n\ndb=# SELECT n_distinct FROM pg_stats WHERE tablename='message' AND \nattname='from_profile_sid';\n\n n_distinct\n------------\n 4068\n(1 row)\n\ndb=# select avg(length) from (select from_profile_sid, count(*) as \nlength from message group by from_profile_sid) as freq;\n\n avg\n----------------------\n 206.5117822008693663\n(1 row)\n\n\n\nAny ideas/thoughts?\n\n\n-- \nKouber Saparev\nhttp://kouber.saparev.com\n", "msg_date": "Tue, 24 Mar 2009 12:52:24 +0200", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIMIT confuses the planner" } ]
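One possible follow-up to the last plan in the thread, offered only as a sketch and not tested against this schema: give the planner an index that matches both the filter and the ORDER BY, so the LIMIT no longer tempts it into a backward scan of message_pkey.

  CREATE INDEX message_from_profile_sid_idx
      ON message (from_profile_sid, message_sid)
      WHERE NOT sender_has_deleted;

With equality on from_profile_sid, a backward scan of this index returns rows already in message_sid DESC order, so the query can stop after the first 20 matches regardless of how far off the row-count estimate is.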
[ { "msg_contents": "Hello,\n\nI am doing a performance comparison between running\nJena<http://jena.sourceforge.net/>with MySQL and Postgres. I used the\n8.3-community version of Postgres and\nMySQL 5.0.67. I have run several queries to both MySQL and Postgres and all\nof them took similar amount of time to execute except one. For the following\nquery to a table having 10,003,728 rows, MySQL takes 0.11 seconds to return\nresults whereas Postgres takes like 1 hour and 20 minutes!\n\nQuery:\n\nselect A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,\njena_g1t1_stmt A2 Where A0.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1<http://www.utdallas.edu/%7Efarhan.husain/IngentaConnect/issue1_1>'\nAND A0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::\nhttp://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND\nA0.Subj=A2.Subj AND A2.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage' AND\nA2.GraphID=1;\n\nTable:\n\n Table \"public.jena_g1t1_stmt\"\n Column | Type | Modifiers\n---------+------------------------+-----------\n subj | character varying(250) | not null\n prop | character varying(250) | not null\n obj | character varying(250) | not null\n graphid | integer |\nIndexes:\n \"jena_g1t1_stmt_ixo\" btree (obj)\n \"jena_g1t1_stmt_ixsp\" btree (subj, prop)\n\nMachine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\nMemory: 4 GB\nNumber of physical processors: 2\n\nI tried to re-arrage the query but each time the amount of time needed is\nthe same. Can anyone help me find the answer to why Postgres is taking so\nmuch time?\n\nI can provide any other information needed and also the data if anyone\nwants.\n\nThanks and regards,\n\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nHello,I am doing a performance comparison between running Jena\nwith MySQL and Postgres. I used the 8.3-community version of Postgres\nand MySQL 5.0.67. I have run several queries to both MySQL and Postgres\nand all of them took similar amount of time to execute except one. 
For\nthe following query to a table having 10,003,728 rows, MySQL takes 0.11\nseconds to return results whereas Postgres takes like 1 hour and 20\nminutes!\nQuery:select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND A0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND A0.Subj=A2.Subj AND A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND A2.GraphID=1;\nTable:        Table \"public.jena_g1t1_stmt\" Column  |          Type          | Modifiers ---------+------------------------+----------- subj    | character varying(250) | not null\n prop    | character varying(250) | not null\n obj     | character varying(250) | not null graphid | integer                | Indexes:    \"jena_g1t1_stmt_ixo\" btree (obj)    \"jena_g1t1_stmt_ixsp\" btree (subj, prop)Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n\nMemory: 4 GBNumber of physical processors: 2I\ntried to re-arrage the query but each time the amount of time needed is\nthe same. Can anyone help me find the answer to why Postgres is taking\nso much time?\nI can provide any other information needed and also the data if anyone wants.Thanks and regards,-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas", "msg_date": "Mon, 23 Feb 2009 17:16:22 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Abnormal performance difference between Postgres and MySQL" }, { "msg_contents": "Farhan Husain <[email protected]> writes:\n\n> I can provide any other information needed and also the data if anyone\n> wants.\n\nWhat did the query plans look like in both databases?\n\nIn Postgres you can get the query plan with\n\nEXPLAIN ANALYZE select ...\n\nYou can leave out the ANALYZE if you can't wait until the query completes but\nit will have much less information to diagnosis any problems.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Mon, 23 Feb 2009 23:27:18 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and MySQL" }, { "msg_contents": "On Mon, Feb 23, 2009 at 4:16 PM, Farhan Husain <[email protected]> wrote:\n> Hello,\n>\n> I am doing a performance comparison between running Jena with MySQL and\n> Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67. I\n> have run several queries to both MySQL and Postgres and all of them took\n> similar amount of time to execute except one. 
For the following query to a\n> table having 10,003,728 rows, MySQL takes 0.11 seconds to return results\n> whereas Postgres takes like 1 hour and 20 minutes!\n>\n> Query:\n>\n> select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,\n> jena_g1t1_stmt A2 Where\n> A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND\n> A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'\n> AND A0.GraphID=1 AND A0.Subj=A1.Subj AND\n> A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\n> A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\n> A1.GraphID=1 AND A0.Subj=A2.Subj AND\n> A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND\n> A2.GraphID=1;\n>\n> Table:\n>\n> Table \"public.jena_g1t1_stmt\"\n> Column | Type | Modifiers\n> ---------+--------------------\n> ----+-----------\n> subj | character varying(250) | not null\n> prop | character varying(250) | not null\n> obj | character varying(250) | not null\n> graphid | integer |\n> Indexes:\n> \"jena_g1t1_stmt_ixo\" btree (obj)\n> \"jena_g1t1_stmt_ixsp\" btree (subj, prop)\n>\n> Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n> Memory: 4 GB\n> Number of physical processors: 2\n>\n> I tried to re-arrage the query but each time the amount of time needed is\n> the same. Can anyone help me find the answer to why Postgres is taking so\n> much time?\n>\n> I can provide any other information needed and also the data if anyone\n> wants.\n\nWhat is the locale of your database? I.e.:\n\n# show lc_collate ;\n lc_collate\n-------------\n en_US.UTF-8\n(1 row)\n\nIf it's not C then string compares are going to probably need special\nindexes to work the way you expect them. (varchar pattern ops). Look\nhere for more information:\n\nhttp://www.postgresql.org/docs/8.3/static/indexes-opclass.html\n", "msg_date": "Mon, 23 Feb 2009 16:27:52 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Mon, Feb 23, 2009 at 5:27 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Feb 23, 2009 at 4:16 PM, Farhan Husain <[email protected]> wrote:\n> > Hello,\n> >\n> > I am doing a performance comparison between running Jena with MySQL and\n> > Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67.\n> I\n> > have run several queries to both MySQL and Postgres and all of them took\n> > similar amount of time to execute except one. 
For the following query to\n> a\n> > table having 10,003,728 rows, MySQL takes 0.11 seconds to return results\n> > whereas Postgres takes like 1 hour and 20 minutes!\n> >\n> > Query:\n> >\n> > select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,\n> > jena_g1t1_stmt A2 Where\n> > A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND\n> > A0.Obj='Uv::\n> http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1<http://www.utdallas.edu/%7Efarhan.husain/IngentaConnect/issue1_1>\n> '\n> > AND A0.GraphID=1 AND A0.Subj=A1.Subj AND\n> > A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\n> > A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\n> > A1.GraphID=1 AND A0.Subj=A2.Subj AND\n> > A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'\n> AND\n> > A2.GraphID=1;\n> >\n> > Table:\n> >\n> > Table \"public.jena_g1t1_stmt\"\n> > Column | Type | Modifiers\n> > ---------+--------------------\n> > ----+-----------\n> > subj | character varying(250) | not null\n> > prop | character varying(250) | not null\n> > obj | character varying(250) | not null\n> > graphid | integer |\n> > Indexes:\n> > \"jena_g1t1_stmt_ixo\" btree (obj)\n> > \"jena_g1t1_stmt_ixsp\" btree (subj, prop)\n> >\n> > Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n> > Memory: 4 GB\n> > Number of physical processors: 2\n> >\n> > I tried to re-arrage the query but each time the amount of time needed is\n> > the same. Can anyone help me find the answer to why Postgres is taking so\n> > much time?\n> >\n> > I can provide any other information needed and also the data if anyone\n> > wants.\n>\n> What is the locale of your database? I.e.:\n>\n> # show lc_collate ;\n> lc_collate\n> -------------\n> en_US.UTF-8\n> (1 row)\n>\n> If it's not C then string compares are going to probably need special\n> indexes to work the way you expect them. (varchar pattern ops). Look\n> here for more information:\n>\n> http://www.postgresql.org/docs/8.3/static/indexes-opclass.html\n>\n\nHere it is:\n\ningentadb=# show lc_collate;\n lc_collate\n-----------------\n en_US.ISO8859-1\n(1 row)\n\nDo you think this is the source of the problem?\n\nThanks,\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Mon, Feb 23, 2009 at 5:27 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Feb 23, 2009 at 4:16 PM, Farhan Husain <[email protected]> wrote:\n> Hello,\n>\n> I am doing a performance comparison between running Jena with MySQL and\n> Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67. I\n> have run several queries to both MySQL and Postgres and all of them took\n> similar amount of time to execute except one. 
For the following query to a\n> table having 10,003,728 rows, MySQL takes 0.11 seconds to return results\n> whereas Postgres takes like 1 hour and 20 minutes!\n>\n> Query:\n>\n> select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,\n> jena_g1t1_stmt A2 Where\n> A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND\n> A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'\n> AND A0.GraphID=1 AND A0.Subj=A1.Subj AND\n> A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\n> A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\n> A1.GraphID=1 AND A0.Subj=A2.Subj AND\n> A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND\n> A2.GraphID=1;\n>\n> Table:\n>\n>         Table \"public.jena_g1t1_stmt\"\n>  Column  |          Type          | Modifiers\n> ---------+--------------------\n> ----+-----------\n>  subj    | character varying(250) | not null\n>  prop    | character varying(250) | not null\n>  obj     | character varying(250) | not null\n>  graphid | integer                |\n> Indexes:\n>     \"jena_g1t1_stmt_ixo\" btree (obj)\n>     \"jena_g1t1_stmt_ixsp\" btree (subj, prop)\n>\n> Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n> Memory: 4 GB\n> Number of physical processors: 2\n>\n> I tried to re-arrage the query but each time the amount of time needed is\n> the same. Can anyone help me find the answer to why Postgres is taking so\n> much time?\n>\n> I can provide any other information needed and also the data if anyone\n> wants.\n\nWhat is the locale of your database?  I.e.:\n\n# show lc_collate ;\n lc_collate\n-------------\n en_US.UTF-8\n(1 row)\n\nIf it's not C then string compares are going to probably need special\nindexes to work the way you expect them. (varchar pattern ops).  Look\nhere for more information:\n\nhttp://www.postgresql.org/docs/8.3/static/indexes-opclass.html\nHere it is:ingentadb=# show lc_collate;   lc_collate    ----------------- en_US.ISO8859-1(1 row)Do you think this is the source of the problem?Thanks,\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Mon, 23 Feb 2009 17:33:06 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Tue, Feb 24, 2009 at 12:27 AM, Scott Marlowe <[email protected]> wrote:\n> If it's not C then string compares are going to probably need special\n> indexes to work the way you expect them. (varchar pattern ops). Look\n> here for more information:\n>\n> http://www.postgresql.org/docs/8.3/static/indexes-opclass.html\n\nIt's only relevant for pattern matching (eg LIKE or regexp). 
AFAICS,\nthe OP only uses plain equals in his query.\n\nAn EXPLAIN ANALYZE output would be nice to diagnose the problem.\n\n-- \nGuillaume\n", "msg_date": "Tue, 24 Feb 2009 00:33:33 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark <[email protected]>wrote:\n\n> Farhan Husain <[email protected]> writes:\n>\n> > I can provide any other information needed and also the data if anyone\n> > wants.\n>\n> What did the query plans look like in both databases?\n>\n> In Postgres you can get the query plan with\n>\n> EXPLAIN ANALYZE select ...\n>\n> You can leave out the ANALYZE if you can't wait until the query completes\n> but\n> it will have much less information to diagnosis any problems.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's Slony Replication support!\n>\n\nI am doing the EXPLAIN ANALYZE now, it will take about 1 hour and 20 minutes\nagain. I will get back to you once it is finished. Do you know how to get\nthe query plan in MySQL?\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark <[email protected]> wrote:\nFarhan Husain <[email protected]> writes:\n\n> I can provide any other information needed and also the data if anyone\n> wants.\n\nWhat did the query plans look like in both databases?\n\nIn Postgres you can get the query plan with\n\nEXPLAIN ANALYZE select ...\n\nYou can leave out the ANALYZE if you can't wait until the query completes but\nit will have much less information to diagnosis any problems.\n\n--\n  Gregory Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Ask me about EnterpriseDB's Slony Replication support!\nI am doing the EXPLAIN ANALYZE now, it will take about 1 hour and 20 minutes again. I will get back to you once it is finished. Do you know how to get the query plan in MySQL?\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Mon, 23 Feb 2009 17:35:21 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and MySQL" }, { "msg_contents": "On Mon, Feb 23, 2009 at 4:33 PM, Guillaume Smet\n<[email protected]> wrote:\n> On Tue, Feb 24, 2009 at 12:27 AM, Scott Marlowe <[email protected]> wrote:\n>> If it's not C then string compares are going to probably need special\n>> indexes to work the way you expect them. (varchar pattern ops). Look\n>> here for more information:\n>>\n>> http://www.postgresql.org/docs/8.3/static/indexes-opclass.html\n>\n> It's only relevant for pattern matching (eg LIKE or regexp). 
AFAICS,\n> the OP only uses plain equals in his query.\n\nTrue, I had a bit of a headache trying to read that unindented query.\n(OP here's a hint, if you want people to read your queries / code,\nindent it in some way that makes it fairly readable Note that\nvarchar_pattern_ops indexes can't be used for straight equal compares\neither.\n", "msg_date": "Mon, 23 Feb 2009 16:37:04 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Mon, Feb 23, 2009 at 4:35 PM, Farhan Husain <[email protected]> wrote:\n>\n>\n> On Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark <[email protected]>\n> wrote:\n>>\n>> Farhan Husain <[email protected]> writes:\n>>\n>> > I can provide any other information needed and also the data if anyone\n>> > wants.\n>>\n>> What did the query plans look like in both databases?\n>>\n>> In Postgres you can get the query plan with\n>>\n>> EXPLAIN ANALYZE select ...\n>>\n>> You can leave out the ANALYZE if you can't wait until the query completes\n>> but\n>> it will have much less information to diagnosis any problems.\n>>\n>> --\n>> Gregory Stark\n>> EnterpriseDB http://www.enterprisedb.com\n>> Ask me about EnterpriseDB's Slony Replication support!\n>\n> I am doing the EXPLAIN ANALYZE now, it will take about 1 hour and 20 minutes\n> again. I will get back to you once it is finished. Do you know how to get\n> the query plan in MySQL?\n\nExplain works in mysql. It just doesn't tell you a whole lot, because\nthe query planner's dumb as a brick. Note that often that stupid\nquery planner makes queries run really fast. When it doesn't, there's\nnot a lot of tuning you can do to fix it.\n\nWhat does plain explain on pgsql tell you? 
Please attach output back\nto the list from it for us to peruse.\n", "msg_date": "Mon, 23 Feb 2009 16:38:27 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark <[email protected]>wrote:\n\n> Farhan Husain <[email protected]> writes:\n>\n> > I can provide any other information needed and also the data if anyone\n> > wants.\n>\n> What did the query plans look like in both databases?\n>\n> In Postgres you can get the query plan with\n>\n> EXPLAIN ANALYZE select ...\n>\n> You can leave out the ANALYZE if you can't wait until the query completes\n> but\n> it will have much less information to diagnosis any problems.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's Slony Replication support!\n>\n\nHere is the output:\n\ningentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0,\njena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND\nA0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::\nhttp://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND\nA0.Subj=A2.Subj AND A2.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage' AND\nA2.GraphID=1;\n\nQUERY\nPLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=652089.37..665004.47 rows=733195 width=134) (actual\ntime=5410683.129..5410690.033 rows=30 loops=1)\n Merge Cond: ((a0.subj)::text = (a1.subj)::text)\n -> Sort (cost=86716.91..86796.78 rows=31949 width=208) (actual\ntime=76.395..76.423 rows=30 loops=1)\n Sort Key: a0.subj\n Sort Method: quicksort Memory: 24kB\n -> Nested Loop (cost=0.00..84326.57 rows=31949 width=208) (actual\ntime=4.146..65.409 rows=30 loops=1)\n -> Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0\n(cost=0.00..5428.34 rows=487 width=74) (actual time=1.980..2.142 rows=30\nloops=1)\n Index Cond: ((obj)::text = 'Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n Filter: (((prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid\n= 1))\n -> Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt\na2 (cost=0.00..161.37 rows=51 width=134) (actual time=2.101..2.104 rows=1\nloops=30)\n Index Cond: (((a2.subj)::text = (a0.subj)::text) AND\n((a2.prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n Filter: (a2.graphid = 1)\n -> Sort (cost=565372.46..568084.16 rows=1084680 width=74) (actual\ntime=5410606.604..5410606.628 rows=31 loops=1)\n Sort Key: a1.subj\n Sort Method: quicksort Memory: 489474kB\n -> Seq Scan on jena_g1t1_stmt a1 (cost=0.00..456639.59\nrows=1084680 width=74) (actual time=0.043..44005.780 rows=3192000 loops=1)\n Filter: ((graphid = 1) AND ((prop)::text = 'Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text) AND ((obj)::text =\n'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))\n Total runtime: 5410691.012 ms\n(18 rows)\n\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering 
and Computer Science\nUniversity of Texas at Dallas\n\nOn Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark <[email protected]> wrote:\nFarhan Husain <[email protected]> writes:\n\n> I can provide any other information needed and also the data if anyone\n> wants.\n\nWhat did the query plans look like in both databases?\n\nIn Postgres you can get the query plan with\n\nEXPLAIN ANALYZE select ...\n\nYou can leave out the ANALYZE if you can't wait until the query completes but\nit will have much less information to diagnosis any problems.\n\n--\n  Gregory Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Ask me about EnterpriseDB's Slony Replication support!\nHere is the output:\n\ningentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt\nA0, jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where\nA0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf'\nAND\nA0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'\nAND A0.GraphID=1 AND A0.Subj=A1.Subj AND\nA1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\nA1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\nA1.GraphID=1 AND A0.Subj=A2.Subj AND\nA2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'\nAND A2.GraphID=1;\n                                                                                               \nQUERY\nPLAN                                                                                               \n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join  (cost=652089.37..665004.47 rows=733195 width=134) (actual time=5410683.129..5410690.033 rows=30 loops=1)\n   Merge Cond: ((a0.subj)::text = (a1.subj)::text)\n   ->  Sort  (cost=86716.91..86796.78 rows=31949 width=208) (actual time=76.395..76.423 rows=30 loops=1)\n         Sort Key: a0.subj\n         Sort Method:  quicksort  Memory: 24kB\n         ->  Nested Loop  (cost=0.00..84326.57 rows=31949 width=208) (actual time=4.146..65.409 rows=30 loops=1)\n               ->  Index Scan using jena_g1t1_stmt_ixo on\njena_g1t1_stmt a0  (cost=0.00..5428.34 rows=487 width=74) (actual\ntime=1.980..2.142 rows=30 loops=1)\n                     Index Cond: ((obj)::text = 'Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n\n                     Filter: (((prop)::text =\n'Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND\n(graphid = 1))\n               ->  Index Scan using jena_g1t1_stmt_ixsp on\njena_g1t1_stmt a2  (cost=0.00..161.37 rows=51 width=134) (actual\ntime=2.101..2.104 rows=1 loops=30)\n                     Index Cond: (((a2.subj)::text = (a0.subj)::text)\nAND ((a2.prop)::text =\n'Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n                     Filter: (a2.graphid = 1)\n   ->  Sort  (cost=565372.46..568084.16 rows=1084680 width=74) (actual time=5410606.604..5410606.628 rows=31 loops=1)\n         Sort Key: a1.subj\n         Sort Method:  quicksort  Memory: 489474kB\n         ->  Seq Scan on jena_g1t1_stmt a1  (cost=0.00..456639.59\nrows=1084680 width=74) (actual time=0.043..44005.780 rows=3192000\nloops=1)\n               Filter: ((graphid = 1) AND ((prop)::text =\n'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text) AND\n((obj)::text =\n'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))\n Total runtime: 5410691.012 ms\n(18 rows)\n-- 
Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Mon, 23 Feb 2009 19:24:49 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and MySQL" }, { "msg_contents": "On Mon, Feb 23, 2009 at 6:24 PM, Farhan Husain <[email protected]> wrote:\nThis sort here:\n\n> -> Sort (cost=565372.46..568084.16 rows=1084680 width=74) (actual\n> time=5410606.604..5410606.628 rows=31 loops=1)\n> Sort Key: a1.subj\n> Sort Method: quicksort Memory: 489474kB\n> -> Seq Scan on jena_g1t1_stmt a1 (cost=0.00..456639.59\n> rows=1084680 width=74) (actual time=0.043..44005.780 rows=3192000 loops=1)\n\nSeems to be the problem. There are a few things that seem odd, the\nfirst is that it estimates it will return 1M ros, but returns only 31.\n The other is that sorting 31 rows is taking 5410606 milliseconds.\n\nMy first guess would be to crank up the statistics on a1.subj to a few\nhundred (going up to a thousand if necessary) re-analyzing and seeing\nif the query plan changes.\n\nI'm not expert enough on explain analyze to offer any more.\n", "msg_date": "Mon, 23 Feb 2009 18:53:50 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Mon, Feb 23, 2009 at 6:24 PM, Farhan Husain <[email protected]> wrote:\n> This sort here:\n\n>> -> Sort (cost=565372.46..568084.16 rows=1084680 width=74) (actual\n>> time=5410606.604..5410606.628 rows=31 loops=1)\n>> Sort Key: a1.subj\n>> Sort Method: quicksort Memory: 489474kB\n>> -> Seq Scan on jena_g1t1_stmt a1 (cost=0.00..456639.59\n>> rows=1084680 width=74) (actual time=0.043..44005.780 rows=3192000 loops=1)\n\n> Seems to be the problem. There are a few things that seem odd, the\n> first is that it estimates it will return 1M ros, but returns only 31.\n> The other is that sorting 31 rows is taking 5410606 milliseconds.\n\nUh, no, it's sorting 3192000 rows --- look at the input scan. Evidently\nonly the first 31 of those rows are getting fetched out of the sort,\nthough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Feb 2009 21:00:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and MySQL " }, { "msg_contents": "Farhan Husain <[email protected]> writes:\n> Here is the output:\n\nI see a couple of things going on here:\n\n* The planner is choosing to use sort-and-mergejoin for the second join.\nThis requires sorting all of jena_g1t1_stmt. If it had accurately\nestimated the output size of the first join (ie 30 rows not 30K rows)\nit'd likely have gone for a nestloop join instead, assuming that you\nhave an index on jena_g1t1_stmt.subj. You need to try to reduce the\n1000X error factor in that estimate. I'm not sure how much it will\nhelp to increase the stats targets on the input tables, but that's\nthe first thing to try.\n\n* Considering that the sort is all in memory, 5400 seconds seems like\na heck of a long time even for sorting 3M rows. Since we already found\nout you're using a non-C locale, the sort comparisons are ultimately\nstrcoll() calls, and I'm betting that you are on a platform where\nstrcoll() is really really slow. 
Another possibility is that you don't\nreally have 500MB of memory to spare, and the sort workspace is getting\nswapped in and out (read thrashed...). Setting work_mem higher than\nyou can afford is a bad idea.\n\nIn comparison to mysql, I think that their planner will use a indexed\nnestloop if it possibly can, which makes it look great on this type\nof query (but not so hot if the query actually does need to join a\nlot of rows).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Feb 2009 21:18:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and MySQL " }, { "msg_contents": "On Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark <[email protected]>wrote:\n\n> Farhan Husain <[email protected]> writes:\n>\n> > I can provide any other information needed and also the data if anyone\n> > wants.\n>\n> What did the query plans look like in both databases?\n>\n> In Postgres you can get the query plan with\n>\n> EXPLAIN ANALYZE select ...\n>\n> You can leave out the ANALYZE if you can't wait until the query completes\n> but\n> it will have much less information to diagnosis any problems.\n>\n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's Slony Replication support!\n>\n\nHere is what I got from MySQL:\n\nmysql> explain Select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt\nA1, jena_g1t1_stmt A2 Where A0.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf:' AND A0.Obj='Uv::\nhttp://ww\nw.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1:' AND A0.GraphID=1 AND\nA0.Subj=A1.Subj AND A1.Prop='Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type:' AND A1.Obj='Uv::\nhttp://metastore.ingenta\n.com/ns/structure/Article:' AND A1.GraphID=1 AND A0.Subj=A2.Subj AND\nA2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage:'\nAND A2.GraphID=1;\n+----+-------------+-------+------+------------------------------------+-------------------+---------+-------------------------+------+-------------+\n| id | select_type | table | type | possible_keys |\nkey | key_len | ref | rows | Extra |\n+----+-------------+-------+------+------------------------------------+-------------------+---------+-------------------------+------+-------------+\n| 1 | SIMPLE | A0 | ref | jena_g1t1_stmtXSP,jena_g1t1_stmtXO |\njena_g1t1_stmtXO | 102 | const | 628 | Using where |\n| 1 | SIMPLE | A1 | ref | jena_g1t1_stmtXSP,jena_g1t1_stmtXO |\njena_g1t1_stmtXSP | 204 | ingentadb.A0.Subj,const | 1 | Using where |\n| 1 | SIMPLE | A2 | ref | jena_g1t1_stmtXSP |\njena_g1t1_stmtXSP | 204 | ingentadb.A0.Subj,const | 1 | Using where |\n+----+-------------+-------+------+------------------------------------+-------------------+---------+-------------------------+------+-------------+\n3 rows in set (0.00 sec)\n\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark <[email protected]> wrote:\nFarhan Husain <[email protected]> writes:\n\n> I can provide any other information needed and also the data if anyone\n> wants.\n\nWhat did the query plans look like in both databases?\n\nIn Postgres you can get the query plan with\n\nEXPLAIN ANALYZE select ...\n\nYou can leave out the ANALYZE if you can't wait until the query completes but\nit will have much less information to diagnosis any problems.\n\n--\n  Gregory 
Stark\n  EnterpriseDB          http://www.enterprisedb.com\n  Ask me about EnterpriseDB's Slony Replication support!\nHere is what I got from MySQL:mysql> explain Select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf:' AND A0.Obj='Uv::http://ww\nw.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1:' AND A0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type:' AND A1.Obj='Uv::http://metastore.ingenta\n.com/ns/structure/Article:' AND A1.GraphID=1 AND A0.Subj=A2.Subj AND A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage:' AND A2.GraphID=1;\n+----+-------------+-------+------+------------------------------------+-------------------+---------+-------------------------+------+-------------+| id | select_type | table | type | possible_keys                      | key               | key_len | ref                     | rows | Extra       |\n+----+-------------+-------+------+------------------------------------+-------------------+---------+-------------------------+------+-------------+|  1 | SIMPLE      | A0    | ref  | jena_g1t1_stmtXSP,jena_g1t1_stmtXO | jena_g1t1_stmtXO  | 102     | const                   |  628 | Using where |\n|  1 | SIMPLE      | A1    | ref  | jena_g1t1_stmtXSP,jena_g1t1_stmtXO | jena_g1t1_stmtXSP | 204     | ingentadb.A0.Subj,const |    1 | Using where ||  1 | SIMPLE      | A2    | ref  | jena_g1t1_stmtXSP                  | jena_g1t1_stmtXSP | 204     | ingentadb.A0.Subj,const |    1 | Using where |\n+----+-------------+-------+------+------------------------------------+-------------------+---------+-------------------------+------+-------------+3 rows in set (0.00 sec)-- Mohammad Farhan Husain\nResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Mon, 23 Feb 2009 22:55:46 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and MySQL" }, { "msg_contents": "> I am doing a performance comparison between running Jena with MySQL and\n> Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67. I\n> have run several queries to both MySQL and Postgres and all of them took\n> similar amount of time to execute except one. 
For the following query to a\n> table having 10,003,728 rows, MySQL takes 0.11 seconds to return results\n> whereas Postgres takes like 1 hour and 20 minutes!\n>\n> Query:\n>\n> select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,\n> jena_g1t1_stmt A2 Where\n> A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND\n> A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'\n> AND A0.GraphID=1 AND A0.Subj=A1.Subj AND\n> A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\n> A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\n> A1.GraphID=1 AND A0.Subj=A2.Subj AND\n> A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND\n> A2.GraphID=1;\n>\n> Table:\n>\n>         Table \"public.jena_g1t1_stmt\"\n>  Column  |          Type          | Modifiers\n> ---------+--------------------\n> ----+-----------\n>  subj    | character varying(250) | not null\n>  prop    | character varying(250) | not null\n>  obj     | character varying(250) | not null\n>  graphid | integer                |\n> Indexes:\n>     \"jena_g1t1_stmt_ixo\" btree (obj)\n>     \"jena_g1t1_stmt_ixsp\" btree (subj, prop)\n\nIsn't it missing an index on the column prop?\n\nselect ... where A0.Prop='foo' and ...\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Tue, 24 Feb 2009 08:28:38 +0100", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "The result set should have 31 rows, that is correct.\n\nOn Mon, Feb 23, 2009 at 7:53 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Feb 23, 2009 at 6:24 PM, Farhan Husain <[email protected]> wrote:\n> This sort here:\n>\n> > -> Sort (cost=565372.46..568084.16 rows=1084680 width=74) (actual\n> > time=5410606.604..5410606.628 rows=31 loops=1)\n> > Sort Key: a1.subj\n> > Sort Method: quicksort Memory: 489474kB\n> > -> Seq Scan on jena_g1t1_stmt a1 (cost=0.00..456639.59\n> > rows=1084680 width=74) (actual time=0.043..44005.780 rows=3192000\n> loops=1)\n>\n> Seems to be the problem. There are a few things that seem odd, the\n> first is that it estimates it will return 1M ros, but returns only 31.\n> The other is that sorting 31 rows is taking 5410606 milliseconds.\n>\n> My first guess would be to crank up the statistics on a1.subj to a few\n> hundred (going up to a thousand if necessary) re-analyzing and seeing\n> if the query plan changes.\n>\n> I'm not expert enough on explain analyze to offer any more.\n>\n\n\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nThe result set should have 31 rows, that is correct.On Mon, Feb 23, 2009 at 7:53 PM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Feb 23, 2009 at 6:24 PM, Farhan Husain <[email protected]> wrote:\n\nThis sort here:\n\n>    ->  Sort  (cost=565372.46..568084.16 rows=1084680 width=74) (actual\n> time=5410606.604..5410606.628 rows=31 loops=1)\n>          Sort Key: a1.subj\n>          Sort Method:  quicksort  Memory: 489474kB\n>          ->  Seq Scan on jena_g1t1_stmt a1  (cost=0.00..456639.59\n> rows=1084680 width=74) (actual time=0.043..44005.780 rows=3192000 loops=1)\n\nSeems to be the problem.  
There are a few things that seem odd, the\nfirst is that it estimates it will return 1M ros, but returns only 31.\n The other is that sorting 31 rows is taking 5410606 milliseconds.\n\nMy first guess would be to crank up the statistics on a1.subj to a few\nhundred (going up to a thousand if necessary) re-analyzing and seeing\nif the query plan changes.\n\nI'm not expert enough on explain analyze to offer any more.\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Tue, 24 Feb 2009 13:51:38 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Tue, Feb 24, 2009 at 1:28 AM, Claus Guttesen <[email protected]> wrote:\n\n> > I am doing a performance comparison between running Jena with MySQL and\n> > Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67.\n> I\n> > have run several queries to both MySQL and Postgres and all of them took\n> > similar amount of time to execute except one. For the following query to\n> a\n> > table having 10,003,728 rows, MySQL takes 0.11 seconds to return results\n> > whereas Postgres takes like 1 hour and 20 minutes!\n> >\n> > Query:\n> >\n> > select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,\n> > jena_g1t1_stmt A2 Where\n> > A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND\n> > A0.Obj='Uv::\n> http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1<http://www.utdallas.edu/%7Efarhan.husain/IngentaConnect/issue1_1>\n> '\n> > AND A0.GraphID=1 AND A0.Subj=A1.Subj AND\n> > A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\n> > A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\n> > A1.GraphID=1 AND A0.Subj=A2.Subj AND\n> > A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'\n> AND\n> > A2.GraphID=1;\n> >\n> > Table:\n> >\n> > Table \"public.jena_g1t1_stmt\"\n> > Column | Type | Modifiers\n> > ---------+--------------------\n> > ----+-----------\n> > subj | character varying(250) | not null\n> > prop | character varying(250) | not null\n> > obj | character varying(250) | not null\n> > graphid | integer |\n> > Indexes:\n> > \"jena_g1t1_stmt_ixo\" btree (obj)\n> > \"jena_g1t1_stmt_ixsp\" btree (subj, prop)\n>\n> Isn't it missing an index on the column prop?\n>\n> select ... where A0.Prop='foo' and ...\n>\n> --\n> regards\n> Claus\n>\n> When lenity and cruelty play for a kingdom,\n> the gentler gamester is the soonest winner.\n>\n> Shakespeare\n>\n\nCan you please elaborate a bit?\n\nThanks,\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Tue, Feb 24, 2009 at 1:28 AM, Claus Guttesen <[email protected]> wrote:\n> I am doing a performance comparison between running Jena with MySQL and\n> Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67. I\n> have run several queries to both MySQL and Postgres and all of them took\n> similar amount of time to execute except one. 
For the following query to a\n> table having 10,003,728 rows, MySQL takes 0.11 seconds to return results\n> whereas Postgres takes like 1 hour and 20 minutes!\n>\n> Query:\n>\n> select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,\n> jena_g1t1_stmt A2 Where\n> A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND\n> A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'\n> AND A0.GraphID=1 AND A0.Subj=A1.Subj AND\n> A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\n> A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\n> A1.GraphID=1 AND A0.Subj=A2.Subj AND\n> A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND\n> A2.GraphID=1;\n>\n> Table:\n>\n>         Table \"public.jena_g1t1_stmt\"\n>  Column  |          Type          | Modifiers\n> ---------+--------------------\n> ----+-----------\n>  subj    | character varying(250) | not null\n>  prop    | character varying(250) | not null\n>  obj     | character varying(250) | not null\n>  graphid | integer                |\n> Indexes:\n>     \"jena_g1t1_stmt_ixo\" btree (obj)\n>     \"jena_g1t1_stmt_ixsp\" btree (subj, prop)\n\nIsn't it missing an index on the column prop?\n\nselect ... where A0.Prop='foo' and ...\n\n--\nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\nCan you please elaborate a bit?Thanks,-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas", "msg_date": "Tue, 24 Feb 2009 13:52:14 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": ">> > Query:\n>> >\n>> > select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,\n>> > jena_g1t1_stmt A2 Where\n>> > A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND\n>> >\n>> > A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'\n>> > AND A0.GraphID=1 AND A0.Subj=A1.Subj AND\n>> > A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\n>> > A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\n>> > A1.GraphID=1 AND A0.Subj=A2.Subj AND\n>> > A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'\n>> > AND\n>> > A2.GraphID=1;\n>> >\n>> > Table:\n>> >\n>> >         Table \"public.jena_g1t1_stmt\"\n>> >  Column  |          Type          | Modifiers\n>> > ---------+--------------------\n>> > ----+-----------\n>> >  subj    | character varying(250) | not null\n>> >  prop    | character varying(250) | not null\n>> >  obj     | character varying(250) | not null\n>> >  graphid | integer                |\n>> > Indexes:\n>> >     \"jena_g1t1_stmt_ixo\" btree (obj)\n>> >     \"jena_g1t1_stmt_ixsp\" btree (subj, prop)\n>>\n>> Isn't it missing an index on the column prop?\n>>\n>> select ... 
where A0.Prop='foo' and ...\n>> --\n> Can you please elaborate a bit?\n\nI thought that A0.Prop would ignore the composite index created on the\ncolumns subj and prop but this does not seem to be the case.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Tue, 24 Feb 2009 21:55:22 +0100", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": ">> Can you please elaborate a bit?\n>\n> I thought that A0.Prop would ignore the composite index created on the\n> columns subj and prop but this does not seem to be the case.\n\nYeah, I think you're barking up the wrong tree here. I think Tom had\nthe correct diagnosis - what do you get from \"show work_mem\"?\n\nWhat kind of machine are you running this on? If it's a UNIX-ish\nmachine, what do you get from \"free -m\"and \"uname -a\"?\n\n...Robert\n", "msg_date": "Tue, 24 Feb 2009 21:21:11 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 12:49 PM, Robert Haas <[email protected]> wrote:\n\n> You still haven't answered the work_mem question, and you probably\n> want to copy the list, rather than just sending this to me.\n>\n> ...Robert\n>\n> On Wed, Feb 25, 2009 at 1:34 PM, Farhan Husain <[email protected]> wrote:\n> >\n> >\n> > On Tue, Feb 24, 2009 at 8:21 PM, Robert Haas <[email protected]>\n> wrote:\n> >>\n> >> >> Can you please elaborate a bit?\n> >> >\n> >> > I thought that A0.Prop would ignore the composite index created on the\n> >> > columns subj and prop but this does not seem to be the case.\n> >>\n> >> Yeah, I think you're barking up the wrong tree here. I think Tom had\n> >> the correct diagnosis - what do you get from \"show work_mem\"?\n> >>\n> >> What kind of machine are you running this on? If it's a UNIX-ish\n> >> machine, what do you get from \"free -m\"and \"uname -a\"?\n> >>\n> >> ...Robert\n> >\n> > Here is the machine info:\n> >\n> > Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n> > Memory: 4 GB\n> > Number of physical processors: 2\n> >\n> >\n> > --\n> > Mohammad Farhan Husain\n> > Research Assistant\n> > Department of Computer Science\n> > Erik Jonsson School of Engineering and Computer Science\n> > University of Texas at Dallas\n> >\n>\n\nDid you mean the work_mem field in the config file?\n\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 12:49 PM, Robert Haas <[email protected]> wrote:\nYou still haven't answered the work_mem question, and you probably\nwant to copy the list, rather than just sending this to me.\n\n...Robert\n\nOn Wed, Feb 25, 2009 at 1:34 PM, Farhan Husain <[email protected]> wrote:\n>\n>\n> On Tue, Feb 24, 2009 at 8:21 PM, Robert Haas <[email protected]> wrote:\n>>\n>> >> Can you please elaborate a bit?\n>> >\n>> > I thought that A0.Prop would ignore the composite index created on the\n>> > columns subj and prop but this does not seem to be the case.\n>>\n>> Yeah, I think you're barking up the wrong tree here.  I think Tom had\n>> the correct diagnosis - what do you get from \"show work_mem\"?\n>>\n>> What kind of machine are you running this on?  
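(Both of those can also be answered from a psql prompt without shell access -- an illustrative check that assumes nothing about the values actually in effect on this server:)

SHOW work_mem;
SELECT version();  -- the version string also shows the platform the server was built on
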
If it's a UNIX-ish\n>> machine, what do you get from \"free -m\"and \"uname -a\"?\n>>\n>> ...Robert\n>\n> Here is the machine info:\n>\n> Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n> Memory: 4 GB\n> Number of physical processors: 2\n>\n>\n> --\n> Mohammad Farhan Husain\n> Research Assistant\n> Department of Computer Science\n> Erik Jonsson School of Engineering and Computer Science\n> University of Texas at Dallas\n>\nDid you mean the work_mem field in the config file?-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 12:53:53 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "Just start up psql and type:\n\nshow work_mem;\n\n(You could look in the config file too I suppose.)\n\n...Robert\n\nOn Wed, Feb 25, 2009 at 1:53 PM, Farhan Husain <[email protected]> wrote:\n>\n>\n> On Wed, Feb 25, 2009 at 12:49 PM, Robert Haas <[email protected]> wrote:\n>>\n>> You still haven't answered the work_mem question, and you probably\n>> want to copy the list, rather than just sending this to me.\n>>\n>> ...Robert\n>>\n>> On Wed, Feb 25, 2009 at 1:34 PM, Farhan Husain <[email protected]> wrote:\n>> >\n>> >\n>> > On Tue, Feb 24, 2009 at 8:21 PM, Robert Haas <[email protected]>\n>> > wrote:\n>> >>\n>> >> >> Can you please elaborate a bit?\n>> >> >\n>> >> > I thought that A0.Prop would ignore the composite index created on\n>> >> > the\n>> >> > columns subj and prop but this does not seem to be the case.\n>> >>\n>> >> Yeah, I think you're barking up the wrong tree here.  I think Tom had\n>> >> the correct diagnosis - what do you get from \"show work_mem\"?\n>> >>\n>> >> What kind of machine are you running this on?  
If it's a UNIX-ish\n>> >> machine, what do you get from \"free -m\"and \"uname -a\"?\n>> >>\n>> >> ...Robert\n>> >\n>> > Here is the machine info:\n>> >\n>> > Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n>> > Memory: 4 GB\n>> > Number of physical processors: 2\n>> >\n>> >\n>> > --\n>> > Mohammad Farhan Husain\n>> > Research Assistant\n>> > Department of Computer Science\n>> > Erik Jonsson School of Engineering and Computer Science\n>> > University of Texas at Dallas\n>> >\n>\n> Did you mean the work_mem field in the config file?\n>\n>\n> --\n> Mohammad Farhan Husain\n> Research Assistant\n> Department of Computer Science\n> Erik Jonsson School of Engineering and Computer Science\n> University of Texas at Dallas\n>\n", "msg_date": "Wed, 25 Feb 2009 13:58:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 12:58 PM, Robert Haas <[email protected]> wrote:\n\n> Just start up psql and type:\n>\n> show work_mem;\n>\n> (You could look in the config file too I suppose.)\n>\n> ...Robert\n>\n> On Wed, Feb 25, 2009 at 1:53 PM, Farhan Husain <[email protected]> wrote:\n> >\n> >\n> > On Wed, Feb 25, 2009 at 12:49 PM, Robert Haas <[email protected]>\n> wrote:\n> >>\n> >> You still haven't answered the work_mem question, and you probably\n> >> want to copy the list, rather than just sending this to me.\n> >>\n> >> ...Robert\n> >>\n> >> On Wed, Feb 25, 2009 at 1:34 PM, Farhan Husain <[email protected]>\n> wrote:\n> >> >\n> >> >\n> >> > On Tue, Feb 24, 2009 at 8:21 PM, Robert Haas <[email protected]>\n> >> > wrote:\n> >> >>\n> >> >> >> Can you please elaborate a bit?\n> >> >> >\n> >> >> > I thought that A0.Prop would ignore the composite index created on\n> >> >> > the\n> >> >> > columns subj and prop but this does not seem to be the case.\n> >> >>\n> >> >> Yeah, I think you're barking up the wrong tree here. I think Tom had\n> >> >> the correct diagnosis - what do you get from \"show work_mem\"?\n> >> >>\n> >> >> What kind of machine are you running this on? If it's a UNIX-ish\n> >> >> machine, what do you get from \"free -m\"and \"uname -a\"?\n> >> >>\n> >> >> ...Robert\n> >> >\n> >> > Here is the machine info:\n> >> >\n> >> > Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n> >> > Memory: 4 GB\n> >> > Number of physical processors: 2\n> >> >\n> >> >\n> >> > --\n> >> > Mohammad Farhan Husain\n> >> > Research Assistant\n> >> > Department of Computer Science\n> >> > Erik Jonsson School of Engineering and Computer Science\n> >> > University of Texas at Dallas\n> >> >\n> >\n> > Did you mean the work_mem field in the config file?\n> >\n> >\n> > --\n> > Mohammad Farhan Husain\n> > Research Assistant\n> > Department of Computer Science\n> > Erik Jonsson School of Engineering and Computer Science\n> > University of Texas at Dallas\n> >\n>\n\nI did it, it does not show anything. 
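(When psql appears to print nothing for this, the usual cause is a missing terminating semicolon -- psql silently waits for more input until the statement is ended. Illustrative only:)

show work_mem;        -- the trailing ';' is required; psql then prints the current value
show shared_buffers;
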
Here is what I have got from the config\nfile:\n\n\n#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 32MB # min 128kB or max_connections*16kB\n # (change requires restart)\ntemp_buffers = 1024MB # min 800kB\n#max_prepared_transactions = 5 # can be 0 or more\n # (change requires restart)\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 1792MB # min 64kB\n#maintenance_work_mem = 16MB # min 1MB\n#max_stack_depth = 32MB # min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes\neach\n # (change requires restart)\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n # (change requires restart)\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n # (change requires restart)\n#shared_preload_libraries = '' # (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 1-10000 credits\n\n# - Background Writer -\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\n#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\nscanned/round\n\n\nPlease note that this (1792MB) is the highest that I could set for work_mem.\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 12:58 PM, Robert Haas <[email protected]> wrote:\nJust start up psql and type:\n\nshow work_mem;\n\n(You could look in the config file too I suppose.)\n\n...Robert\n\nOn Wed, Feb 25, 2009 at 1:53 PM, Farhan Husain <[email protected]> wrote:\n>\n>\n> On Wed, Feb 25, 2009 at 12:49 PM, Robert Haas <[email protected]> wrote:\n>>\n>> You still haven't answered the work_mem question, and you probably\n>> want to copy the list, rather than just sending this to me.\n>>\n>> ...Robert\n>>\n>> On Wed, Feb 25, 2009 at 1:34 PM, Farhan Husain <[email protected]> wrote:\n>> >\n>> >\n>> > On Tue, Feb 24, 2009 at 8:21 PM, Robert Haas <[email protected]>\n>> > wrote:\n>> >>\n>> >> >> Can you please elaborate a bit?\n>> >> >\n>> >> > I thought that A0.Prop would ignore the composite index created on\n>> >> > the\n>> >> > columns subj and prop but this does not seem to be the case.\n>> >>\n>> >> Yeah, I think you're barking up the wrong tree here.  I think Tom had\n>> >> the correct diagnosis - what do you get from \"show work_mem\"?\n>> >>\n>> >> What kind of machine are you running this on?  
If it's a UNIX-ish\n>> >> machine, what do you get from \"free -m\"and \"uname -a\"?\n>> >>\n>> >> ...Robert\n>> >\n>> > Here is the machine info:\n>> >\n>> > Machine: SunOS 5.10 Generic_127111-11 sun4u sparc SUNW, Sun-Fire-880\n>> > Memory: 4 GB\n>> > Number of physical processors: 2\n>> >\n>> >\n>> > --\n>> > Mohammad Farhan Husain\n>> > Research Assistant\n>> > Department of Computer Science\n>> > Erik Jonsson School of Engineering and Computer Science\n>> > University of Texas at Dallas\n>> >\n>\n> Did you mean the work_mem field in the config file?\n>\n>\n> --\n> Mohammad Farhan Husain\n> Research Assistant\n> Department of Computer Science\n> Erik Jonsson School of Engineering and Computer Science\n> University of Texas at Dallas\n>\nI did it, it does not show anything. Here is what I have got from the config file:#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)#------------------------------------------------------------------------------# - Memory -shared_buffers = 32MB                   # min 128kB or max_connections*16kB                                        # (change requires restart)\ntemp_buffers = 1024MB                   # min 800kB#max_prepared_transactions = 5          # can be 0 or more                                        # (change requires restart)# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).work_mem = 1792MB                               # min 64kB#maintenance_work_mem = 16MB            # min 1MB#max_stack_depth = 32MB                 # min 100kB\n# - Free Space Map -max_fsm_pages = 204800                  # min max_fsm_relations*16, 6 bytes each                                        # (change requires restart)#max_fsm_relations = 1000               # min 100, ~70 bytes each\n                                        # (change requires restart)# - Kernel Resource Usage -#max_files_per_process = 1000           # min 25                                        # (change requires restart)\n#shared_preload_libraries = ''          # (change requires restart)# - Cost-Based Vacuum Delay -#vacuum_cost_delay = 0                  # 0-1000 milliseconds#vacuum_cost_page_hit = 1               # 0-10000 credits\n#vacuum_cost_page_miss = 10             # 0-10000 credits#vacuum_cost_page_dirty = 20            # 0-10000 credits#vacuum_cost_limit = 200                # 1-10000 credits# - Background Writer -#bgwriter_delay = 200ms                 # 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100            # 0-1000 max buffers written/round#bgwriter_lru_multiplier = 2.0          # 0-10.0 multipler on buffers scanned/roundPlease note that this (1792MB) is the highest that I could set for work_mem.\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 13:05:20 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "> Please note that this (1792MB) is the highest that I could set for work_mem.\n\nYeah, that's almost certainly part of your problem.\n\nYou need to make that number MUCH smaller. 
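(As a rough sketch of more conventional starting points for a 4 GB machine -- the exact numbers are illustrative, not tuned to this workload:)

shared_buffers = 256MB   # shared cache; a change here requires a server restart
work_mem = 16MB          # per sort/hash, per backend; can be raised per session
                         # with "SET work_mem = '256MB';" for one heavy query
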
You probably want a value\nlike 1MB or 5MB or maybe if you have really a lot of memory 20MB.\n\nThat's insanely high.\n\n...Robert\n", "msg_date": "Wed, 25 Feb 2009 14:52:50 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 12:05 PM, Farhan Husain <[email protected]> wrote:\n>\n> On Wed, Feb 25, 2009 at 12:58 PM, Robert Haas <[email protected]> wrote:\n>>\n>> Just start up psql and type:\n>>\n>> show work_mem;\n>\n> I did it, it does not show anything.\n\nDid you remember the ; symbol?\n\n> Here is what I have got from the config\n> file:\n>\n> shared_buffers = 32MB                   # min 128kB or max_connections*16kB\n>                                         # (change requires restart)\n\nAssuming your machine has more than a few hundred megs of ram in it,\nthis is kinda small. Generally setting it to 10 to 25% of total\nmemory is about right.\n\n> work_mem = 1792MB                               # min 64kB\n\nThat's crazy high. work_mem is the amount of memory a single sort can\nuse. Each query can have multiple sorts. So, if you have 10 users\nrunning 10 sorts, you could use 100*1792MB memory.\n\nLook to set it into the 1 to 16Meg. If testing shows a few queries\ncan do better with more work_mem, then look at setting it higher for\njust that one connection.\n", "msg_date": "Wed, 25 Feb 2009 13:28:10 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 1:52 PM, Robert Haas <[email protected]> wrote:\n\n> > Please note that this (1792MB) is the highest that I could set for\n> work_mem.\n>\n> Yeah, that's almost certainly part of your problem.\n>\n> You need to make that number MUCH smaller. You probably want a value\n> like 1MB or 5MB or maybe if you have really a lot of memory 20MB.\n>\n> That's insanely high.\n>\n> ...Robert\n>\n\nInitially, it was the default value (32MB). Later I played with that value\nthinking that it might improve the performance. But all the values resulted\nin same amount of time.\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 1:52 PM, Robert Haas <[email protected]> wrote:\n> Please note that this (1792MB) is the highest that I could set for work_mem.\n\nYeah, that's almost certainly part of your problem.\n\nYou need to make that number MUCH smaller.  You probably want a value\nlike 1MB or 5MB or maybe if you have really a lot of memory 20MB.\n\nThat's insanely high.\n\n...Robert\nInitially, it was the default value (32MB). Later I played with that value thinking that it might improve the performance. But all the values resulted in same amount of time.\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 14:44:26 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]> wrote:\n> Initially, it was the default value (32MB). 
Later I played with that value\n> thinking that it might improve the performance. But all the values resulted\n> in same amount of time.\n\nWell, if you set it back to what we consider to be a reasonable value,\nrerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\nwhat to do next.\n\n...Robert\n", "msg_date": "Wed, 25 Feb 2009 16:30:00 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]> wrote:\n\n> On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]> wrote:\n> > Initially, it was the default value (32MB). Later I played with that\n> value\n> > thinking that it might improve the performance. But all the values\n> resulted\n> > in same amount of time.\n>\n> Well, if you set it back to what we consider to be a reasonable value,\n> rerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\n> what to do next.\n>\n> ...Robert\n>\n\nRight now I am running the query again with 32MB work_mem. It is taking a\nlong time as before. However, I have kept the following values unchanged:\n\nshared_buffers = 32MB # min 128kB or max_connections*16kB\n temp_buffers = 1024MB # min\n800kB\n\nDo you think I should change them to something else?\n\nThanks,\n\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]> wrote:\nOn Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]> wrote:\n> Initially, it was the default value (32MB). Later I played with that value\n> thinking that it might improve the performance. But all the values resulted\n> in same amount of time.\n\nWell, if you set it back to what we consider to be a reasonable value,\nrerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\nwhat to do next.\n\n...Robert\nRight now I am running the query again with 32MB work_mem. It is taking\na long time as before. However, I have kept the following values\nunchanged:\n\nshared_buffers = 32MB                   # min 128kB or max_connections*16kB\n                              \ntemp_buffers = 1024MB                   # min 800kB\n\nDo you think I should change them to something else?\n\nThanks,\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 15:32:49 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 2:32 PM, Farhan Husain <[email protected]> wrote:\n>\n> On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]> wrote:\n>>\n>> On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]> wrote:\n>> > Initially, it was the default value (32MB). Later I played with that\n>> > value\n>> > thinking that it might improve the performance. But all the values\n>> > resulted\n>> > in same amount of time.\n>>\n>> Well, if you set it back to what we consider to be a reasonable value,\n>> rerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\n>> what to do next.\n>>\n>> ...Robert\n>\n> Right now I am running the query again with 32MB work_mem. 
It is taking a\n> long time as before. However, I have kept the following values unchanged:\n>\n> shared_buffers = 32MB                   # min 128kB or max_connections*16kB\n\nThat's REALLY small for pgsql. Assuming your machine has at least 1G\nof ram, I'd set it to 128M to 256M as a minimum.\n", "msg_date": "Wed, 25 Feb 2009 14:35:54 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 4:32 PM, Farhan Husain <[email protected]> wrote:\n> On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]> wrote:\n>> On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]> wrote:\n>> > Initially, it was the default value (32MB). Later I played with that\n>> > value\n>> > thinking that it might improve the performance. But all the values\n>> > resulted\n>> > in same amount of time.\n>>\n>> Well, if you set it back to what we consider to be a reasonable value,\n>> rerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\n>> what to do next.\n>>\n>> ...Robert\n>\n> Right now I am running the query again with 32MB work_mem. It is taking a\n> long time as before. However, I have kept the following values unchanged:\n>\n> shared_buffers = 32MB                   # min 128kB or max_connections*16kB\n>\n> temp_buffers = 1024MB                   # min 800kB\n>\n> Do you think I should change them to something else?\n\nIt would probably be good to change them, but I don't think it's going\nto fix the problem you're having with this query.\n\n...Robert\n", "msg_date": "Wed, 25 Feb 2009 16:36:03 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 3:35 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Feb 25, 2009 at 2:32 PM, Farhan Husain <[email protected]> wrote:\n> >\n> > On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]>\n> wrote:\n> >>\n> >> On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]>\n> wrote:\n> >> > Initially, it was the default value (32MB). Later I played with that\n> >> > value\n> >> > thinking that it might improve the performance. But all the values\n> >> > resulted\n> >> > in same amount of time.\n> >>\n> >> Well, if you set it back to what we consider to be a reasonable value,\n> >> rerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\n> >> what to do next.\n> >>\n> >> ...Robert\n> >\n> > Right now I am running the query again with 32MB work_mem. It is taking a\n> > long time as before. However, I have kept the following values unchanged:\n> >\n> > shared_buffers = 32MB # min 128kB or\n> max_connections*16kB\n>\n> That's REALLY small for pgsql. Assuming your machine has at least 1G\n> of ram, I'd set it to 128M to 256M as a minimum.\n>\n\nAs I wrote in a previous email, I had the value set to 1792MB (the highest I\ncould set) and had the same execution time. 
This value is not helping me to\nbring down the execution time.\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 3:35 PM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Feb 25, 2009 at 2:32 PM, Farhan Husain <[email protected]> wrote:\n>\n> On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]> wrote:\n>>\n>> On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]> wrote:\n>> > Initially, it was the default value (32MB). Later I played with that\n>> > value\n>> > thinking that it might improve the performance. But all the values\n>> > resulted\n>> > in same amount of time.\n>>\n>> Well, if you set it back to what we consider to be a reasonable value,\n>> rerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\n>> what to do next.\n>>\n>> ...Robert\n>\n> Right now I am running the query again with 32MB work_mem. It is taking a\n> long time as before. However, I have kept the following values unchanged:\n>\n> shared_buffers = 32MB                   # min 128kB or max_connections*16kB\n\nThat's REALLY small for pgsql.  Assuming your machine has at least 1G\nof ram, I'd set it to 128M to 256M as a minimum.\nAs I wrote in a previous email, I had the value set to 1792MB (the highest I could set) and had the same execution time. This value is not helping me to bring down the execution time.\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 15:38:10 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "What is random_page_cost set to? You could try to lower it to 1.5 if set higher.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Wed, 25 Feb 2009 22:39:56 +0100", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": ">> > shared_buffers = 32MB                   # min 128kB or\n>> > max_connections*16kB\n>>\n>> That's REALLY small for pgsql.  Assuming your machine has at least 1G\n>> of ram, I'd set it to 128M to 256M as a minimum.\n>\n> As I wrote in a previous email, I had the value set to 1792MB (the highest I\n> could set) and had the same execution time. This value is not helping me to\n> bring down the execution time.\n\nNo, you increased work_mem, not shared_buffers. You might want to go\nand read the documentation:\n\nhttp://www.postgresql.org/docs/current/interactive/runtime-config-resource.html\n\nBut at any rate, the large work_mem was producing a very strange plan.\n It may help to see what the system does without that setting. But\nchanging shared_buffers will not change the plan, so let's not worry\nabout that right now.\n\n...Robert\n", "msg_date": "Wed, 25 Feb 2009 16:40:19 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "It was only after I got this high execution time when I started to look into\nthe configuration file and change those values. 
I tried several combinations\nin which all those values were higher than the default values. I got no\nimprovement in runtime. The machine postgres is running on has 4 GB of RAM.\n\nOn Wed, Feb 25, 2009 at 3:40 PM, Robert Haas <[email protected]> wrote:\n\n> >> > shared_buffers = 32MB # min 128kB or\n> >> > max_connections*16kB\n> >>\n> >> That's REALLY small for pgsql. Assuming your machine has at least 1G\n> >> of ram, I'd set it to 128M to 256M as a minimum.\n> >\n> > As I wrote in a previous email, I had the value set to 1792MB (the\n> highest I\n> > could set) and had the same execution time. This value is not helping me\n> to\n> > bring down the execution time.\n>\n> No, you increased work_mem, not shared_buffers. You might want to go\n> and read the documentation:\n>\n>\n> http://www.postgresql.org/docs/current/interactive/runtime-config-resource.html\n>\n> But at any rate, the large work_mem was producing a very strange plan.\n> It may help to see what the system does without that setting. But\n> changing shared_buffers will not change the plan, so let's not worry\n> about that right now.\n>\n> ...Robert\n>\n\n\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nIt was only after I got this high execution time when I started to look into the configuration file and change those values. I tried several combinations in which all those values were higher than the default values. I got no improvement in runtime. The machine postgres is running on has 4 GB of RAM.\nOn Wed, Feb 25, 2009 at 3:40 PM, Robert Haas <[email protected]> wrote:\n>> > shared_buffers = 32MB                   # min 128kB or\n>> > max_connections*16kB\n>>\n>> That's REALLY small for pgsql.  Assuming your machine has at least 1G\n>> of ram, I'd set it to 128M to 256M as a minimum.\n>\n> As I wrote in a previous email, I had the value set to 1792MB (the highest I\n> could set) and had the same execution time. This value is not helping me to\n> bring down the execution time.\n\nNo, you increased work_mem, not shared_buffers.  You might want to go\nand read the documentation:\n\nhttp://www.postgresql.org/docs/current/interactive/runtime-config-resource.html\n\nBut at any rate, the large work_mem was producing a very strange plan.\n It may help to see what the system does without that setting.  But\nchanging shared_buffers will not change the plan, so let's not worry\nabout that right now.\n\n...Robert\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 15:43:49 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 2:38 PM, Farhan Husain <[email protected]> wrote:\n>\n>\n> On Wed, Feb 25, 2009 at 3:35 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Wed, Feb 25, 2009 at 2:32 PM, Farhan Husain <[email protected]> wrote:\n>> >\n>> > On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]>\n>> > wrote:\n>> >>\n>> >> On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]>\n>> >> wrote:\n>> >> > Initially, it was the default value (32MB). Later I played with that\n>> >> > value\n>> >> > thinking that it might improve the performance. 
But all the values\n>> >> > resulted\n>> >> > in same amount of time.\n>> >>\n>> >> Well, if you set it back to what we consider to be a reasonable value,\n>> >> rerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\n>> >> what to do next.\n>> >>\n>> >> ...Robert\n>> >\n>> > Right now I am running the query again with 32MB work_mem. It is taking\n>> > a\n>> > long time as before. However, I have kept the following values\n>> > unchanged:\n>> >\n>> > shared_buffers = 32MB                   # min 128kB or\n>> > max_connections*16kB\n>>\n>> That's REALLY small for pgsql.  Assuming your machine has at least 1G\n>> of ram, I'd set it to 128M to 256M as a minimum.\n>\n> As I wrote in a previous email, I had the value set to 1792MB (the highest I\n> could set) and had the same execution time. This value is not helping me to\n> bring down the execution time.\n\nNo, that was work_mem. This is shared_buffers.\n", "msg_date": "Wed, 25 Feb 2009 14:55:29 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "Wed, 25 Feb 2009 15:43:49 -0600 -n\nFarhan Husain <[email protected]> írta:\n\nOK, you have two options:\n\n1. Learn to read carefully, and differentiate between work_mem and\nshared_buffers options. Lower work_mem and rise shared_buffers as\nothers wrote.\n2. Leave Postgresql alone and go for Oracle or Microsoft SQL...\n\nRgds,\nAkos\n\n> It was only after I got this high execution time when I started to\n> look into the configuration file and change those values. I tried\n> several combinations in which all those values were higher than the\n> default values. I got no improvement in runtime. The machine postgres\n> is running on has 4 GB of RAM.\n> \n> On Wed, Feb 25, 2009 at 3:40 PM, Robert Haas <[email protected]>\n> wrote:\n> \n> > >> > shared_buffers = 32MB # min 128kB or\n> > >> > max_connections*16kB\n> > >>\n> > >> That's REALLY small for pgsql. Assuming your machine has at\n> > >> least 1G of ram, I'd set it to 128M to 256M as a minimum.\n> > >\n> > > As I wrote in a previous email, I had the value set to 1792MB (the\n> > highest I\n> > > could set) and had the same execution time. This value is not\n> > > helping me\n> > to\n> > > bring down the execution time.\n> >\n> > No, you increased work_mem, not shared_buffers. You might want to\n> > go and read the documentation:\n> >\n> >\n> > http://www.postgresql.org/docs/current/interactive/runtime-config-resource.html\n> >\n> > But at any rate, the large work_mem was producing a very strange\n> > plan. It may help to see what the system does without that\n> > setting. 
But changing shared_buffers will not change the plan, so\n> > let's not worry about that right now.\n> >\n> > ...Robert\n> >\n> \n> \n> \n\n\n-- \nÜdvözlettel,\nGábriel Ákos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n", "msg_date": "Wed, 25 Feb 2009 22:55:56 +0100", "msg_from": "Akos Gabriel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n MySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 3:55 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Wed, Feb 25, 2009 at 2:38 PM, Farhan Husain <[email protected]> wrote:\n> >\n> >\n> > On Wed, Feb 25, 2009 at 3:35 PM, Scott Marlowe <[email protected]>\n> > wrote:\n> >>\n> >> On Wed, Feb 25, 2009 at 2:32 PM, Farhan Husain <[email protected]>\n> wrote:\n> >> >\n> >> > On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]>\n> >> > wrote:\n> >> >>\n> >> >> On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]>\n> >> >> wrote:\n> >> >> > Initially, it was the default value (32MB). Later I played with\n> that\n> >> >> > value\n> >> >> > thinking that it might improve the performance. But all the values\n> >> >> > resulted\n> >> >> > in same amount of time.\n> >> >>\n> >> >> Well, if you set it back to what we consider to be a reasonable\n> value,\n> >> >> rerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\n> >> >> what to do next.\n> >> >>\n> >> >> ...Robert\n> >> >\n> >> > Right now I am running the query again with 32MB work_mem. It is\n> taking\n> >> > a\n> >> > long time as before. However, I have kept the following values\n> >> > unchanged:\n> >> >\n> >> > shared_buffers = 32MB # min 128kB or\n> >> > max_connections*16kB\n> >>\n> >> That's REALLY small for pgsql. Assuming your machine has at least 1G\n> >> of ram, I'd set it to 128M to 256M as a minimum.\n> >\n> > As I wrote in a previous email, I had the value set to 1792MB (the\n> highest I\n> > could set) and had the same execution time. This value is not helping me\n> to\n> > bring down the execution time.\n>\n> No, that was work_mem. This is shared_buffers.\n>\n\nOh, sorry for the confusion. I will change the shared_buffer once the\ncurrent run finishes.\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 3:55 PM, Scott Marlowe <[email protected]> wrote:\nOn Wed, Feb 25, 2009 at 2:38 PM, Farhan Husain <[email protected]> wrote:\n>\n>\n> On Wed, Feb 25, 2009 at 3:35 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Wed, Feb 25, 2009 at 2:32 PM, Farhan Husain <[email protected]> wrote:\n>> >\n>> > On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas <[email protected]>\n>> > wrote:\n>> >>\n>> >> On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain <[email protected]>\n>> >> wrote:\n>> >> > Initially, it was the default value (32MB). Later I played with that\n>> >> > value\n>> >> > thinking that it might improve the performance. But all the values\n>> >> > resulted\n>> >> > in same amount of time.\n>> >>\n>> >> Well, if you set it back to what we consider to be a reasonable value,\n>> >> rerun EXPLAIN ANALYZE, and post that plan, it might help us tell you\n>> >> what to do next.\n>> >>\n>> >> ...Robert\n>> >\n>> > Right now I am running the query again with 32MB work_mem. It is taking\n>> > a\n>> > long time as before. 
However, I have kept the following values\n>> > unchanged:\n>> >\n>> > shared_buffers = 32MB                   # min 128kB or\n>> > max_connections*16kB\n>>\n>> That's REALLY small for pgsql.  Assuming your machine has at least 1G\n>> of ram, I'd set it to 128M to 256M as a minimum.\n>\n> As I wrote in a previous email, I had the value set to 1792MB (the highest I\n> could set) and had the same execution time. This value is not helping me to\n> bring down the execution time.\n\nNo, that was work_mem.  This is shared_buffers.\nOh, sorry for the confusion. I will change the shared_buffer once the current run finishes.-- Mohammad Farhan HusainResearch AssistantDepartment of Computer Science\nErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 15:56:58 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "I am trying to find the reason of the problem so going to Oracle or\nsomething else is not the solution. I tried with several combinations of\nthose parameters before posting the problem here. I have read\nhttp://www.postgresql.org/docs/current/interactive/runtime-config-resource.htmlbefore\nand I think I understood what it said.\n\n2009/2/25 Akos Gabriel <[email protected]>\n\n> Wed, 25 Feb 2009 15:43:49 -0600 -n\n> Farhan Husain <[email protected]> írta:\n>\n> OK, you have two options:\n>\n> 1. Learn to read carefully, and differentiate between work_mem and\n> shared_buffers options. Lower work_mem and rise shared_buffers as\n> others wrote.\n> 2. Leave Postgresql alone and go for Oracle or Microsoft SQL...\n>\n> Rgds,\n> Akos\n>\n> > It was only after I got this high execution time when I started to\n> > look into the configuration file and change those values. I tried\n> > several combinations in which all those values were higher than the\n> > default values. I got no improvement in runtime. The machine postgres\n> > is running on has 4 GB of RAM.\n> >\n> > On Wed, Feb 25, 2009 at 3:40 PM, Robert Haas <[email protected]>\n> > wrote:\n> >\n> > > >> > shared_buffers = 32MB # min 128kB or\n> > > >> > max_connections*16kB\n> > > >>\n> > > >> That's REALLY small for pgsql. Assuming your machine has at\n> > > >> least 1G of ram, I'd set it to 128M to 256M as a minimum.\n> > > >\n> > > > As I wrote in a previous email, I had the value set to 1792MB (the\n> > > highest I\n> > > > could set) and had the same execution time. This value is not\n> > > > helping me\n> > > to\n> > > > bring down the execution time.\n> > >\n> > > No, you increased work_mem, not shared_buffers. You might want to\n> > > go and read the documentation:\n> > >\n> > >\n> > >\n> http://www.postgresql.org/docs/current/interactive/runtime-config-resource.html\n> > >\n> > > But at any rate, the large work_mem was producing a very strange\n> > > plan. It may help to see what the system does without that\n> > > setting. 
But changing shared_buffers will not change the plan, so\n> > > let's not worry about that right now.\n> > >\n> > > ...Robert\n> > >\n> >\n> >\n> >\n>\n>\n> --\n> Üdvözlettel,\n> Gábriel Ákos\n> -=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n> -=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nI am trying to find the reason of the problem so going to Oracle or something else is not the solution. I tried with several combinations of those parameters before posting the problem here. I have read http://www.postgresql.org/docs/current/interactive/runtime-config-resource.html before and I think I understood what it said.\n2009/2/25 Akos Gabriel <[email protected]>\nWed, 25 Feb 2009 15:43:49 -0600 -n\nFarhan Husain <[email protected]> írta:\n\nOK, you have two options:\n\n1. Learn to read carefully, and differentiate between work_mem and\nshared_buffers options. Lower work_mem and rise shared_buffers as\nothers wrote.\n2. Leave Postgresql alone and go for Oracle or Microsoft SQL...\n\nRgds,\nAkos\n\n> It was only after I got this high execution time when I started to\n> look into the configuration file and change those values. I tried\n> several combinations in which all those values were higher than the\n> default values. I got no improvement in runtime. The machine postgres\n> is running on has 4 GB of RAM.\n>\n> On Wed, Feb 25, 2009 at 3:40 PM, Robert Haas <[email protected]>\n> wrote:\n>\n> > >> > shared_buffers = 32MB                   # min 128kB or\n> > >> > max_connections*16kB\n> > >>\n> > >> That's REALLY small for pgsql.  Assuming your machine has at\n> > >> least 1G of ram, I'd set it to 128M to 256M as a minimum.\n> > >\n> > > As I wrote in a previous email, I had the value set to 1792MB (the\n> > highest I\n> > > could set) and had the same execution time. This value is not\n> > > helping me\n> > to\n> > > bring down the execution time.\n> >\n> > No, you increased work_mem, not shared_buffers.  You might want to\n> > go and read the documentation:\n> >\n> >\n> > http://www.postgresql.org/docs/current/interactive/runtime-config-resource.html\n> >\n> > But at any rate, the large work_mem was producing a very strange\n> > plan. It may help to see what the system does without that\n> > setting.  
But changing shared_buffers will not change the plan, so\n> > let's not worry about that right now.\n> >\n> > ...Robert\n> >\n>\n>\n>\n\n\n--\nÜdvözlettel,\nGábriel Ákos\n-=E-Mail :[email protected]|Web:  http://www.i-logic.hu=-\n-=Tel/fax:+3612367353            |Mobil:+36209278894         =-\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 16:08:36 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": ">>> Farhan Husain <[email protected]> wrote: \n> The machine postgres is running on has 4 GB of RAM.\n \nIn addition to the other suggestions, you should be sure that\neffective_cache_size is set to a reasonable value, which would\nprobably be somewhere in the neighborhood of '3GB'. This doesn't\naffect actual RAM allocation, but gives the optimizer a rough idea how\nmuch data is going to be kept in cache, between both the PostgreSQL\nshared_memory setting and the OS cache. It can make better choices\nwith more accurate information.\n \n-Kevin\n", "msg_date": "Wed, 25 Feb 2009 16:10:36 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres\n\tand MySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 4:10 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> >>> Farhan Husain <[email protected]> wrote:\n> > The machine postgres is running on has 4 GB of RAM.\n>\n> In addition to the other suggestions, you should be sure that\n> effective_cache_size is set to a reasonable value, which would\n> probably be somewhere in the neighborhood of '3GB'. This doesn't\n> affect actual RAM allocation, but gives the optimizer a rough idea how\n> much data is going to be kept in cache, between both the PostgreSQL\n> shared_memory setting and the OS cache. 
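(In postgresql.conf terms that is a single line along these lines -- the 3GB figure is just the ballpark suggested here for a 4 GB machine, and since nothing is actually allocated it can also be tried per session with SET:)

effective_cache_size = 3GB   # planner hint only; no shared memory is allocated
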
It can make better choices\n> with more accurate information.\n>\n> -Kevin\n>\n\nHere is the latest output:\n\ningentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0,\njena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND\nA0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::\nhttp://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND\nA0.Subj=A2.Subj AND A2.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage' AND\nA2.GraphID=1;\n\nQUERY\nPLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=799852.37..812767.47 rows=733195 width=134) (actual\ntime=5941553.710..5941569.192 rows=30 loops=1)\n Merge Cond: ((a0.subj)::text = (a1.subj)::text)\n -> Sort (cost=89884.41..89964.28 rows=31949 width=208) (actual\ntime=243.711..243.731 rows=30 loops=1)\n Sort Key: a0.subj\n Sort Method: quicksort Memory: 24kB\n -> Nested Loop (cost=0.00..84326.57 rows=31949 width=208) (actual\ntime=171.255..232.765 rows=30 loops=1)\n -> Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0\n(cost=0.00..5428.34 rows=487 width=74) (actual time=96.735..97.070 rows=30\nloops=1)\n Index Cond: ((obj)::text = 'Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n Filter: (((prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid\n= 1))\n -> Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt\na2 (cost=0.00..161.37 rows=51 width=134) (actual time=4.513..4.518 rows=1\nloops=30)\n Index Cond: (((a2.subj)::text = (a0.subj)::text) AND\n((a2.prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n Filter: (a2.graphid = 1)\n -> Materialize (cost=709967.96..723526.46 rows=1084680 width=74)\n(actual time=5941309.876..5941318.552 rows=31 loops=1)\n -> Sort (cost=709967.96..712679.66 rows=1084680 width=74) (actual\ntime=5941309.858..5941318.488 rows=31 loops=1)\n Sort Key: a1.subj\n Sort Method: external merge Disk: 282480kB\n -> Seq Scan on jena_g1t1_stmt a1 (cost=0.00..456639.59\nrows=1084680 width=74) (actual time=0.054..44604.597 rows=3192000 loops=1)\n Filter: ((graphid = 1) AND ((prop)::text = 'Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text) AND ((obj)::text =\n'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))\n Total runtime: 5941585.248 ms\n(19 rows)\n\ningentadb=# show work_mem;\n work_mem\n----------\n 1MB\n(1 row)\n\ningentadb=# show shared_buffers;\n shared_buffers\n----------------\n 32MB\n(1 row)\n\ningentadb=# show temp_buffers;\n temp_buffers\n--------------\n 131072\n(1 row)\n\n\nThe execution time has not improved. 
I am going to increase the\nshared_buffers now keeping the work_mem same.\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 4:10 PM, Kevin Grittner <[email protected]> wrote:\n>>> Farhan Husain <[email protected]> wrote:\n> The machine postgres is running on has 4 GB of RAM.\n\nIn addition to the other suggestions, you should be sure that\neffective_cache_size is set to a reasonable value, which would\nprobably be somewhere in the neighborhood of '3GB'.  This doesn't\naffect actual RAM allocation, but gives the optimizer a rough idea how\nmuch data is going to be kept in cache, between both the PostgreSQL\nshared_memory setting and the OS cache.  It can make better choices\nwith more accurate information.\n\n-Kevin\nHere is the latest output:ingentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND A0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND A0.Subj=A2.Subj AND A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND A2.GraphID=1;\n                                                                                                   QUERY PLAN                                                                                                   ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join  (cost=799852.37..812767.47 rows=733195 width=134) (actual time=5941553.710..5941569.192 rows=30 loops=1)   Merge Cond: ((a0.subj)::text = (a1.subj)::text)   ->  Sort  (cost=89884.41..89964.28 rows=31949 width=208) (actual time=243.711..243.731 rows=30 loops=1)\n         Sort Key: a0.subj         Sort Method:  quicksort  Memory: 24kB         ->  Nested Loop  (cost=0.00..84326.57 rows=31949 width=208) (actual time=171.255..232.765 rows=30 loops=1)               ->  Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0  (cost=0.00..5428.34 rows=487 width=74) (actual time=96.735..97.070 rows=30 loops=1)\n                     Index Cond: ((obj)::text = 'Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n                     Filter: (((prop)::text = 'Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid = 1))\n               ->  Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a2  (cost=0.00..161.37 rows=51 width=134) (actual time=4.513..4.518 rows=1 loops=30)                     Index Cond: (((a2.subj)::text = (a0.subj)::text) AND ((a2.prop)::text = 'Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n                     Filter: (a2.graphid = 1)   ->  Materialize  (cost=709967.96..723526.46 rows=1084680 width=74) (actual time=5941309.876..5941318.552 rows=31 loops=1)         ->  Sort  (cost=709967.96..712679.66 rows=1084680 width=74) (actual time=5941309.858..5941318.488 rows=31 loops=1)\n               Sort Key: a1.subj               Sort Method:  external merge  Disk: 282480kB               ->  Seq Scan on jena_g1t1_stmt a1  (cost=0.00..456639.59 
rows=1084680 width=74) (actual time=0.054..44604.597 rows=3192000 loops=1)\n                     Filter: ((graphid = 1) AND ((prop)::text = 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text) AND ((obj)::text = 'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))\n Total runtime: 5941585.248 ms(19 rows)ingentadb=# show work_mem; work_mem ---------- 1MB(1 row)ingentadb=# show shared_buffers; shared_buffers ---------------- 32MB(1 row)\ningentadb=# show temp_buffers; temp_buffers -------------- 131072(1 row)The execution time has not improved. I am going to increase the shared_buffers now keeping the work_mem same.\n-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Wed, 25 Feb 2009 16:29:31 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": ">>> Farhan Husain <[email protected]> wrote: \n> Kevin Grittner <[email protected] wrote:\n>> >>> Farhan Husain <[email protected]> wrote:\n>> > The machine postgres is running on has 4 GB of RAM.\n>>\n>> In addition to the other suggestions, you should be sure that\n>> effective_cache_size is set to a reasonable value, which would\n>> probably be somewhere in the neighborhood of '3GB'.\n \n> The execution time has not improved. I am going to increase the\n> shared_buffers now keeping the work_mem same.\n \nIncreasing shared_buffers is good, but it has been mentioned that this\nwill not change the plan, which currently scans and sorts the whole\ntable for a1. Nobody will be surprised when you report minimal\nchange, if any. If you have not changed effective_cache_size (be sure\nnot to confuse this with any of the other configuration values) it\nwill think you only have 128MB of cache, which will be off by a factor\nof about 24 from reality.\n \nAlso, I'm going to respectfully differ with some of the other posts on\nthe best setting for work_mem. Most benchmarks I've run and can\nremember seeing posted found best performance for this at somewhere\nbetween 16MB and 32MB. You do have to be careful if you have a large\nnumber of concurrent queries, however, and keep it lower. In most\nsuch cases, though, you're better off using a connection pool to limit\nconcurrent queries instead.\n \n-Kevin\n", "msg_date": "Wed, 25 Feb 2009 17:11:52 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres\n\tand MySQL" }, { "msg_contents": "I will second Kevin's suggestion. Unless you think you will have more than a few dozen concurrent queries, start with work_mem around 32MB.\nFor the query here, a very large work_mem might help it hash join depending on the data... But that's not the real problem here.\n\nThe real problem is that it does a huge scan of all of the a1 table, and sorts it. Its pretty clear that this table has incorrect statistics. It thinks that it will get about 1 million rows back in the scan, but it is actually 3 million in the scan.\n\nCrank up the statistics target on that table from the default to at least 100, perhaps even 1000. This is a large table, the default statistics target of 10 is not good for large tables with skewed column data. Those to try increasing the target on are the columns filtered in the explain: graphid, prop, and obj. Then run vacuum analzye on that table (a1). 
The planner should then have better stats and will likely be able to use a better plan for the join.\n\nThe other tables involved in the join also seem to have bad statistics. You might just take the easiest solution and change the global statistics target and vacuum analyze the tables involved:\n\nset default_statistics_target = 50;\nvacuum analyze jena_g1t1_stmt ;\n\n(test the query)\n\nRepeat for several values of the default statistics target. You can run \"explain\" before running the actual query, to see if the plan changed. If it has not, the time will not likely change.\nThe max value for the statistics target is 1000, which makes analyzing and query planning slower, but more accurate. In most cases, dramatic differences can happen between the default of 10 and values of 25 or 50. Sometimes, you have to go into the hundreds, and it is safer to do this on a per-column basis once you get to larger values.\n\nFor larger database, I recommend increasing the default to 20 to 40 and re-analyzing all the tables.\n\n\n\n\n\nOn 2/25/09 3:11 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>>> Farhan Husain <[email protected]> wrote:\n> Kevin Grittner <[email protected] wrote:\n>> >>> Farhan Husain <[email protected]> wrote:\n>> > The machine postgres is running on has 4 GB of RAM.\n>>\n>> In addition to the other suggestions, you should be sure that\n>> effective_cache_size is set to a reasonable value, which would\n>> probably be somewhere in the neighborhood of '3GB'.\n\n> The execution time has not improved. I am going to increase the\n> shared_buffers now keeping the work_mem same.\n\nIncreasing shared_buffers is good, but it has been mentioned that this\nwill not change the plan, which currently scans and sorts the whole\ntable for a1. Nobody will be surprised when you report minimal\nchange, if any. If you have not changed effective_cache_size (be sure\nnot to confuse this with any of the other configuration values) it\nwill think you only have 128MB of cache, which will be off by a factor\nof about 24 from reality.\n\nAlso, I'm going to respectfully differ with some of the other posts on\nthe best setting for work_mem. Most benchmarks I've run and can\nremember seeing posted found best performance for this at somewhere\nbetween 16MB and 32MB. You do have to be careful if you have a large\nnumber of concurrent queries, however, and keep it lower. In most\nsuch cases, though, you're better off using a connection pool to limit\nconcurrent queries instead.\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\nRe: [PERFORM] Abnormal performance difference between Postgres and MySQL\n\n\nI will second Kevin’s suggestion.  Unless you think you will have more than a few dozen concurrent queries, start with work_mem around 32MB. \nFor the query here, a very large work_mem might help it hash join depending on the data... But that’s not the real problem here.\n\nThe real problem is that it does a huge scan of all of the a1 table, and sorts it.  Its pretty clear that this table has incorrect statistics.  It thinks that it will get about 1 million rows back in the scan, but it is actually 3 million in the scan. \n\nCrank up the statistics target on that table from the default to at least 100, perhaps even 1000.  This is a large table, the default statistics target of 10 is not good for large tables with skewed column data.  
Those to try increasing the target on are the columns filtered in the explain: graphid, prop, and obj.  Then run vacuum analzye on that table (a1).  The planner should then have better stats and will likely be able to use a better plan for the join.\n\nThe other tables involved in the join also seem to have  bad statistics.  You might just take the easiest solution and change the global statistics target and vacuum analyze the tables involved:\n\nset default_statistics_target = 50;\nvacuum analyze jena_g1t1_stmt ;\n\n(test the query)\n\nRepeat for several values of the default statistics target.  You can run “explain” before running the actual query, to see if the plan changed.  If it has not, the time will not likely change.\nThe max value for the statistics target is 1000, which makes analyzing and query planning slower, but more accurate.  In most cases, dramatic differences can happen between the default of 10 and values of 25 or 50.  Sometimes, you have to go into the hundreds, and it is safer to do this on a per-column basis once you get to larger values.\n\nFor larger database, I recommend increasing the default to 20 to 40 and re-analyzing all the tables.\n\n\n\n\n\nOn 2/25/09 3:11 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>>> Farhan Husain <[email protected]> wrote:\n> Kevin Grittner <[email protected] wrote:\n>> >>> Farhan Husain <[email protected]> wrote:\n>> > The machine postgres is running on has 4 GB of RAM.\n>>\n>> In addition to the other suggestions, you should be sure that\n>> effective_cache_size is set to a reasonable value, which would\n>> probably be somewhere in the neighborhood of '3GB'.\n\n> The execution time has not improved. I am going to increase the\n> shared_buffers now keeping the work_mem same.\n\nIncreasing shared_buffers is good, but it has been mentioned that this\nwill not change the plan, which currently scans and sorts the whole\ntable for a1.  Nobody will be surprised when you report minimal\nchange, if any.  If you have not changed effective_cache_size (be sure\nnot to confuse this with any of the other configuration values) it\nwill think you only have 128MB of cache, which will be off by a factor\nof about 24 from reality.\n\nAlso, I'm going to respectfully differ with some of the other posts on\nthe best setting for work_mem.  Most benchmarks I've run and can\nremember seeing posted found best performance for this at somewhere\nbetween 16MB and 32MB.  You do have to be careful if you have a large\nnumber of concurrent queries, however, and keep it lower.  
In most\nsuch cases, though, you're better off using a connection pool to limit\nconcurrent queries instead.\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 25 Feb 2009 16:07:42 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n MySQL" }, { "msg_contents": "> Here is the latest output:\n>\n> ingentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0,\n> jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where\n> A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND\n> A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'\n> AND A0.GraphID=1 AND A0.Subj=A1.Subj AND\n> A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND\n> A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND\n> A1.GraphID=1 AND A0.Subj=A2.Subj AND\n> A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND\n> A2.GraphID=1;\n>\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Merge Join  (cost=799852.37..812767.47 rows=733195 width=134) (actual\n> time=5941553.710..5941569.192 rows=30 loops=1)\n>    Merge Cond: ((a0.subj)::text = (a1.subj)::text)\n>    ->  Sort  (cost=89884.41..89964.28 rows=31949 width=208) (actual\n> time=243.711..243.731 rows=30 loops=1)\n>          Sort Key: a0.subj\n>          Sort Method:  quicksort  Memory: 24kB\n>          ->  Nested Loop  (cost=0.00..84326.57 rows=31949 width=208) (actual\n> time=171.255..232.765 rows=30 loops=1)\n>                ->  Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0\n> (cost=0.00..5428.34 rows=487 width=74) (actual time=96.735..97.070 rows=30\n> loops=1)\n>                      Index Cond: ((obj)::text =\n> 'Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n>                      Filter: (((prop)::text =\n> 'Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND\n> (graphid = 1))\n>                ->  Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt\n> a2  (cost=0.00..161.37 rows=51 width=134) (actual time=4.513..4.518 rows=1\n> loops=30)\n>                      Index Cond: (((a2.subj)::text = (a0.subj)::text) AND\n> ((a2.prop)::text =\n> 'Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n>                      Filter: (a2.graphid = 1)\n>    ->  Materialize  (cost=709967.96..723526.46 rows=1084680 width=74)\n> (actual time=5941309.876..5941318.552 rows=31 loops=1)\n>          ->  Sort  (cost=709967.96..712679.66 rows=1084680 width=74) (actual\n> time=5941309.858..5941318.488 rows=31 loops=1)\n>                Sort Key: a1.subj\n>                Sort Method:  external merge  Disk: 282480kB\n>                ->  Seq Scan on jena_g1t1_stmt a1  (cost=0.00..456639.59\n> rows=1084680 width=74) (actual time=0.054..44604.597 rows=3192000 loops=1)\n>                      Filter: ((graphid = 1) AND ((prop)::text =\n> 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text) AND\n> ((obj)::text =\n> 'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))\n>  Total runtime: 5941585.248 ms\n> (19 rows)\n\nCan you do this:\n\nselect * from pg_statistic where starelid = 
'jena_g1t1_stmt'::regclass;\n\nThanks,\n\n...Robert\n", "msg_date": "Wed, 25 Feb 2009 19:59:55 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "> The execution time has not improved. I am going to increase the\n> shared_buffers now keeping the work_mem same.\n\nHave you performed a vacuum analyze?\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentler gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Thu, 26 Feb 2009 09:00:07 +0100", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "Thu, 26 Feb 2009 09:00:07 +0100 -n\nClaus Guttesen <[email protected]> írta:\n\n> > The execution time has not improved. I am going to increase the\n> > shared_buffers now keeping the work_mem same.\n> \n> Have you performed a vacuum analyze?\n> \n\nand reindex\n\n-- \nÜdvözlettel,\nGábriel Ákos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n", "msg_date": "Thu, 26 Feb 2009 09:38:30 +0100", "msg_from": "Akos Gabriel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n MySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 4:10 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> >>> Farhan Husain <[email protected]> wrote:\n> > The machine postgres is running on has 4 GB of RAM.\n>\n> In addition to the other suggestions, you should be sure that\n> effective_cache_size is set to a reasonable value, which would\n> probably be somewhere in the neighborhood of '3GB'. This doesn't\n> affect actual RAM allocation, but gives the optimizer a rough idea how\n> much data is going to be kept in cache, between both the PostgreSQL\n> shared_memory setting and the OS cache. It can make better choices\n> with more accurate information.\n>\n> -Kevin\n>\n\nI reran the query with new values of work_mem, effective_cache_size and\nshared_buffers. There is no change in runtime. 
Here is the output:\n\ningentadb=# show work_mem;\n work_mem\n----------\n 16MB\n(1 row)\n\ningentadb=# show shared_buffers;\n shared_buffers\n----------------\n 64MB\n(1 row)\n\ningentadb=# show effective_cache_size;\n effective_cache_size\n----------------------\n 2GB\n(1 row)\n\ningentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0,\njena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND\nA0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::\nhttp://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND\nA0.Subj=A2.Subj AND A2.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage' AND\nA2.GraphID=1;\n\nQUERY\nPLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=698313.99..711229.09 rows=733195 width=134) (actual\ntime=7659407.195..7659418.630 rows=30 loops=1)\n Merge Cond: ((a0.subj)::text = (a1.subj)::text)\n -> Sort (cost=84743.03..84822.90 rows=31949 width=208) (actual\ntime=77.269..77.300 rows=30 loops=1)\n Sort Key: a0.subj\n Sort Method: quicksort Memory: 24kB\n -> Nested Loop (cost=0.00..82352.69 rows=31949 width=208) (actual\ntime=4.821..66.390 rows=30 loops=1)\n -> Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0\n(cost=0.00..5428.34 rows=487 width=74) (actual time=2.334..2.675 rows=30\nloops=1)\n Index Cond: ((obj)::text = 'Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n Filter: (((prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid\n= 1))\n -> Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt\na2 (cost=0.00..157.32 rows=51 width=134) (actual time=2.114..2.119 rows=1\nloops=30)\n Index Cond: (((a2.subj)::text = (a0.subj)::text) AND\n((a2.prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n Filter: (a2.graphid = 1)\n -> Materialize (cost=613570.96..627129.46 rows=1084680 width=74)\n(actual time=7659329.799..7659334.251 rows=31 loops=1)\n -> Sort (cost=613570.96..616282.66 rows=1084680 width=74) (actual\ntime=7659329.781..7659334.185 rows=31 loops=1)\n Sort Key: a1.subj\n Sort Method: external merge Disk: 282480kB\n -> Seq Scan on jena_g1t1_stmt a1 (cost=0.00..456639.59\nrows=1084680 width=74) (actual time=0.042..46465.020 rows=3192000 loops=1)\n Filter: ((graphid = 1) AND ((prop)::text = 'Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text) AND ((obj)::text =\n'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))\n Total runtime: 7659420.128 ms\n(19 rows)\n\n\nI will try out other suggestions posted yesterday now.\n\nThanks,\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 4:10 PM, Kevin Grittner <[email protected]> wrote:\n>>> Farhan Husain <[email protected]> wrote:\n> The machine postgres is running on has 4 GB of RAM.\n\nIn addition to the other suggestions, you should be sure that\neffective_cache_size is set to a reasonable value, which would\nprobably be somewhere in the neighborhood of '3GB'.  
This doesn't\naffect actual RAM allocation, but gives the optimizer a rough idea how\nmuch data is going to be kept in cache, between both the PostgreSQL\nshared_memory setting and the OS cache.  It can make better choices\nwith more accurate information.\n\n-Kevin\nI reran the query with new values of work_mem, effective_cache_size and shared_buffers. There is no change in runtime. Here is the output:ingentadb=# show work_mem; work_mem \n---------- 16MB(1 row)ingentadb=# show shared_buffers; shared_buffers ---------------- 64MB(1 row)ingentadb=# show effective_cache_size; effective_cache_size ----------------------\n 2GB(1 row)ingentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND A0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND A0.Subj=A2.Subj AND A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND A2.GraphID=1;\n                                                                                                   QUERY PLAN                                                                                                   ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join  (cost=698313.99..711229.09 rows=733195 width=134) (actual time=7659407.195..7659418.630 rows=30 loops=1)   Merge Cond: ((a0.subj)::text = (a1.subj)::text)   ->  Sort  (cost=84743.03..84822.90 rows=31949 width=208) (actual time=77.269..77.300 rows=30 loops=1)\n         Sort Key: a0.subj         Sort Method:  quicksort  Memory: 24kB         ->  Nested Loop  (cost=0.00..82352.69 rows=31949 width=208) (actual time=4.821..66.390 rows=30 loops=1)               ->  Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0  (cost=0.00..5428.34 rows=487 width=74) (actual time=2.334..2.675 rows=30 loops=1)\n                     Index Cond: ((obj)::text = 'Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n                     Filter: (((prop)::text = 'Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid = 1))\n               ->  Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a2  (cost=0.00..157.32 rows=51 width=134) (actual time=2.114..2.119 rows=1 loops=30)                     Index Cond: (((a2.subj)::text = (a0.subj)::text) AND ((a2.prop)::text = 'Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n                     Filter: (a2.graphid = 1)   ->  Materialize  (cost=613570.96..627129.46 rows=1084680 width=74) (actual time=7659329.799..7659334.251 rows=31 loops=1)         ->  Sort  (cost=613570.96..616282.66 rows=1084680 width=74) (actual time=7659329.781..7659334.185 rows=31 loops=1)\n               Sort Key: a1.subj               Sort Method:  external merge  Disk: 282480kB               ->  Seq Scan on jena_g1t1_stmt a1  (cost=0.00..456639.59 rows=1084680 width=74) (actual time=0.042..46465.020 rows=3192000 loops=1)\n                     Filter: ((graphid = 1) AND ((prop)::text = 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text) AND ((obj)::text = 
'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))\n Total runtime: 7659420.128 ms(19 rows)I will try out other suggestions posted yesterday now.Thanks,-- Mohammad Farhan HusainResearch AssistantDepartment of Computer Science\nErik Jonsson School of Engineering and Computer ScienceUniversity of Texas at Dallas", "msg_date": "Thu, 26 Feb 2009 11:17:43 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": "On Wed, Feb 25, 2009 at 6:07 PM, Scott Carey <[email protected]>wrote:\n\n> I will second Kevin’s suggestion. Unless you think you will have more\n> than a few dozen concurrent queries, start with work_mem around 32MB.\n> For the query here, a very large work_mem might help it hash join depending\n> on the data... But that’s not the real problem here.\n>\n> The real problem is that it does a huge scan of all of the a1 table, and\n> sorts it. Its pretty clear that this table has incorrect statistics. It\n> thinks that it will get about 1 million rows back in the scan, but it is\n> actually 3 million in the scan.\n>\n> Crank up the statistics target on that table from the default to at least\n> 100, perhaps even 1000. This is a large table, the default statistics\n> target of 10 is not good for large tables with skewed column data. Those to\n> try increasing the target on are the columns filtered in the explain:\n> graphid, prop, and obj. Then run vacuum analzye on that table (a1). The\n> planner should then have better stats and will likely be able to use a\n> better plan for the join.\n>\n> The other tables involved in the join also seem to have bad statistics.\n> You might just take the easiest solution and change the global statistics\n> target and vacuum analyze the tables involved:\n>\n> set default_statistics_target = 50;\n> vacuum analyze jena_g1t1_stmt ;\n>\n> (test the query)\n>\n> Repeat for several values of the default statistics target. You can run\n> “explain” before running the actual query, to see if the plan changed. If\n> it has not, the time will not likely change.\n> The max value for the statistics target is 1000, which makes analyzing and\n> query planning slower, but more accurate. In most cases, dramatic\n> differences can happen between the default of 10 and values of 25 or 50.\n> Sometimes, you have to go into the hundreds, and it is safer to do this on\n> a per-column basis once you get to larger values.\n>\n> For larger database, I recommend increasing the default to 20 to 40 and\n> re-analyzing all the tables.\n>\n>\n>\n>\n>\n>\n> On 2/25/09 3:11 PM, \"Kevin Grittner\" <[email protected]> wrote:\n>\n> >>> Farhan Husain <[email protected]> wrote:\n> > Kevin Grittner <[email protected] wrote:\n> >> >>> Farhan Husain <[email protected]> wrote:\n> >> > The machine postgres is running on has 4 GB of RAM.\n> >>\n> >> In addition to the other suggestions, you should be sure that\n> >> effective_cache_size is set to a reasonable value, which would\n> >> probably be somewhere in the neighborhood of '3GB'.\n>\n> > The execution time has not improved. I am going to increase the\n> > shared_buffers now keeping the work_mem same.\n>\n> Increasing shared_buffers is good, but it has been mentioned that this\n> will not change the plan, which currently scans and sorts the whole\n> table for a1. Nobody will be surprised when you report minimal\n> change, if any. 
If you have not changed effective_cache_size (be sure\n> not to confuse this with any of the other configuration values) it\n> will think you only have 128MB of cache, which will be off by a factor\n> of about 24 from reality.\n>\n> Also, I'm going to respectfully differ with some of the other posts on\n> the best setting for work_mem. Most benchmarks I've run and can\n> remember seeing posted found best performance for this at somewhere\n> between 16MB and 32MB. You do have to be careful if you have a large\n> number of concurrent queries, however, and keep it lower. In most\n> such cases, though, you're better off using a connection pool to limit\n> concurrent queries instead.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> Thanks a lot Scott! I think that was the problem. I just changed the\ndefault statistics target to 50 and ran explain. The plan changed and I ran\nexplain analyze. Now it takes a fraction of a second!\n\nThanks to all of you who wanted to help me. I would be happy if someone does\nme one last favor. I want to know how these query plans are generated and\nhow the parameters you suggested to change affects it. If there is any\narticle, paper or book on it please give me the name or url.\n\nHere is the output of my latest tasks:\n\ningentadb=# set default_statistics_target=50;\nSET\ningentadb=# show default_statistics_target;\n default_statistics_target\n---------------------------\n 50\n(1 row)\n\ningentadb=# vacuum analyze jena_g1t1_stmt;\nVACUUM\ningentadb=# EXPLAIN select A0.Subj, A2.Obj From jena_g1t1_stmt A0,\njena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND\nA0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::\nhttp://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND\nA0.Subj=A2.Subj AND A2.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage' AND\nA2.GraphID=1;\n\nQUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..37838.46 rows=7568 width=134)\n -> Nested Loop (cost=0.00..7485.09 rows=495 width=148)\n -> Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0\n(cost=0.00..1160.62 rows=97 width=74)\n Index Cond: ((obj)::text = 'Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n Filter: (((prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid\n= 1))\n -> Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a1\n(cost=0.00..65.15 rows=4 width=74)\n Index Cond: (((a1.subj)::text = (a0.subj)::text) AND\n((a1.prop)::text = 'Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text))\n Filter: ((a1.graphid = 1) AND ((a1.obj)::text = 'Uv::\nhttp://metastore.ingenta.com/ns/structure/Article'::text))\n -> Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a2\n(cost=0.00..61.17 rows=12 width=134)\n Index Cond: (((a2.subj)::text = (a0.subj)::text) AND\n((a2.prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n Filter: (a2.graphid = 1)\n(11 rows)\n\ningentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0,\njena_g1t1_stmt 
A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND\nA0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::\nhttp://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND\nA0.Subj=A2.Subj AND A2.Prop='Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage' AND\nA2.GraphID=1;\n\nQUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..37838.46 rows=7568 width=134) (actual\ntime=6.535..126.791 rows=30 loops=1)\n -> Nested Loop (cost=0.00..7485.09 rows=495 width=148) (actual\ntime=4.404..64.078 rows=30 loops=1)\n -> Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0\n(cost=0.00..1160.62 rows=97 width=74) (actual time=2.127..2.270 rows=30\nloops=1)\n Index Cond: ((obj)::text = 'Uv::\nhttp://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)\n Filter: (((prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid\n= 1))\n -> Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a1\n(cost=0.00..65.15 rows=4 width=74) (actual time=2.054..2.056 rows=1\nloops=30)\n Index Cond: (((a1.subj)::text = (a0.subj)::text) AND\n((a1.prop)::text = 'Uv::\nhttp://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text))\n Filter: ((a1.graphid = 1) AND ((a1.obj)::text = 'Uv::\nhttp://metastore.ingenta.com/ns/structure/Article'::text))\n -> Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a2\n(cost=0.00..61.17 rows=12 width=134) (actual time=2.083..2.086 rows=1\nloops=30)\n Index Cond: (((a2.subj)::text = (a0.subj)::text) AND\n((a2.prop)::text = 'Uv::\nhttp://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n Filter: (a2.graphid = 1)\n Total runtime: 127.065 ms\n(12 rows)\n\n\nThanks and regards,\n\n-- \nMohammad Farhan Husain\nResearch Assistant\nDepartment of Computer Science\nErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas\n\nOn Wed, Feb 25, 2009 at 6:07 PM, Scott Carey <[email protected]> wrote:\n\nI will second Kevin’s suggestion.  Unless you think you will have more than a few dozen concurrent queries, start with work_mem around 32MB. \n\nFor the query here, a very large work_mem might help it hash join depending on the data... But that’s not the real problem here.\n\nThe real problem is that it does a huge scan of all of the a1 table, and sorts it.  Its pretty clear that this table has incorrect statistics.  It thinks that it will get about 1 million rows back in the scan, but it is actually 3 million in the scan. \n\nCrank up the statistics target on that table from the default to at least 100, perhaps even 1000.  This is a large table, the default statistics target of 10 is not good for large tables with skewed column data.  Those to try increasing the target on are the columns filtered in the explain: graphid, prop, and obj.  Then run vacuum analzye on that table (a1).  The planner should then have better stats and will likely be able to use a better plan for the join.\n\nThe other tables involved in the join also seem to have  bad statistics.  
You might just take the easiest solution and change the global statistics target and vacuum analyze the tables involved:\n\nset default_statistics_target = 50;\nvacuum analyze jena_g1t1_stmt ;\n\n(test the query)\n\nRepeat for several values of the default statistics target.  You can run “explain” before running the actual query, to see if the plan changed.  If it has not, the time will not likely change.\nThe max value for the statistics target is 1000, which makes analyzing and query planning slower, but more accurate.  In most cases, dramatic differences can happen between the default of 10 and values of 25 or 50.  Sometimes, you have to go into the hundreds, and it is safer to do this on a per-column basis once you get to larger values.\n\nFor larger database, I recommend increasing the default to 20 to 40 and re-analyzing all the tables.\n\n\n\n\n\nOn 2/25/09 3:11 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>>> Farhan Husain <[email protected]> wrote:\n\n> Kevin Grittner <[email protected] wrote:\n>> >>> Farhan Husain <[email protected]> wrote:\n>> > The machine postgres is running on has 4 GB of RAM.\n>>\n>> In addition to the other suggestions, you should be sure that\n>> effective_cache_size is set to a reasonable value, which would\n>> probably be somewhere in the neighborhood of '3GB'.\n\n> The execution time has not improved. I am going to increase the\n> shared_buffers now keeping the work_mem same.\n\nIncreasing shared_buffers is good, but it has been mentioned that this\nwill not change the plan, which currently scans and sorts the whole\ntable for a1.  Nobody will be surprised when you report minimal\nchange, if any.  If you have not changed effective_cache_size (be sure\nnot to confuse this with any of the other configuration values) it\nwill think you only have 128MB of cache, which will be off by a factor\nof about 24 from reality.\n\nAlso, I'm going to respectfully differ with some of the other posts on\nthe best setting for work_mem.  Most benchmarks I've run and can\nremember seeing posted found best performance for this at somewhere\nbetween 16MB and 32MB.  You do have to be careful if you have a large\nnumber of concurrent queries, however, and keep it lower.  In most\nsuch cases, though, you're better off using a connection pool to limit\nconcurrent queries instead.\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\nThanks a lot Scott! I think that was the problem. I just changed the default statistics target to 50 and ran explain. The plan changed and I ran explain analyze. Now it takes a fraction of a second!\nThanks to all of you who wanted to help me. I would be happy if someone does me one last favor. I want to know how these query plans are generated and how the parameters you suggested to change affects it. 
If there is any article, paper or book on it please give me the name or url.\nHere is the output of my latest tasks:ingentadb=# set default_statistics_target=50;SETingentadb=# show default_statistics_target; default_statistics_target --------------------------- 50\n(1 row)ingentadb=# vacuum analyze jena_g1t1_stmt;VACUUMingentadb=# EXPLAIN select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND A0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND A0.Subj=A2.Subj AND A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND A2.GraphID=1;\n                                                                        QUERY PLAN                                                                        ----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..37838.46 rows=7568 width=134)   ->  Nested Loop  (cost=0.00..7485.09 rows=495 width=148)         ->  Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0  (cost=0.00..1160.62 rows=97 width=74)\n               Index Cond: ((obj)::text = 'Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)               Filter: (((prop)::text = 'Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid = 1))\n         ->  Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a1  (cost=0.00..65.15 rows=4 width=74)               Index Cond: (((a1.subj)::text = (a0.subj)::text) AND ((a1.prop)::text = 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text))\n               Filter: ((a1.graphid = 1) AND ((a1.obj)::text = 'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))   ->  Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a2  (cost=0.00..61.17 rows=12 width=134)\n         Index Cond: (((a2.subj)::text = (a0.subj)::text) AND ((a2.prop)::text = 'Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n         Filter: (a2.graphid = 1)(11 rows)ingentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1' AND A0.GraphID=1 AND A0.Subj=A1.Subj AND A1.Prop='Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type' AND A1.Obj='Uv::http://metastore.ingenta.com/ns/structure/Article' AND A1.GraphID=1 AND A0.Subj=A2.Subj AND A2.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage' AND A2.GraphID=1;\n                                                                        QUERY PLAN                                                                        ----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..37838.46 rows=7568 width=134) (actual time=6.535..126.791 rows=30 loops=1)   ->  Nested Loop  (cost=0.00..7485.09 rows=495 width=148) (actual time=4.404..64.078 rows=30 loops=1)         ->  Index Scan using jena_g1t1_stmt_ixo on jena_g1t1_stmt a0  (cost=0.00..1160.62 rows=97 width=74) (actual 
time=2.127..2.270 rows=30 loops=1)\n               Index Cond: ((obj)::text = 'Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'::text)               Filter: (((prop)::text = 'Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf'::text) AND (graphid = 1))\n         ->  Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a1  (cost=0.00..65.15 rows=4 width=74) (actual time=2.054..2.056 rows=1 loops=30)               Index Cond: (((a1.subj)::text = (a0.subj)::text) AND ((a1.prop)::text = 'Uv::http://www.w3.org/1999/02/22-rdf-syntax-ns#type'::text))\n               Filter: ((a1.graphid = 1) AND ((a1.obj)::text = 'Uv::http://metastore.ingenta.com/ns/structure/Article'::text))   ->  Index Scan using jena_g1t1_stmt_ixsp on jena_g1t1_stmt a2  (cost=0.00..61.17 rows=12 width=134) (actual time=2.083..2.086 rows=1 loops=30)\n         Index Cond: (((a2.subj)::text = (a0.subj)::text) AND ((a2.prop)::text = 'Uv::http://prismstandard.org/namespaces/1.2/basic/startingPage'::text))\n         Filter: (a2.graphid = 1) Total runtime: 127.065 ms(12 rows)Thanks and regards,-- Mohammad Farhan HusainResearch AssistantDepartment of Computer ScienceErik Jonsson School of Engineering and Computer Science\nUniversity of Texas at Dallas", "msg_date": "Thu, 26 Feb 2009 11:45:26 -0600", "msg_from": "Farhan Husain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" }, { "msg_contents": ">>> Farhan Husain <[email protected]> wrote: \n> Thanks a lot Scott! I think that was the problem. I just changed the\n> default statistics target to 50 and ran explain. The plan changed\n> and I ran explain analyze. Now it takes a fraction of a second!\n \nYeah, the default of 10 has been too low. In 8.4 it is being raised\nto 100.\n \n> Thanks to all of you who wanted to help me. I would be happy if\n> someone does me one last favor. I want to know how these query plans\n> are generated and how the parameters you suggested to change affects\n> it. If there is any article, paper or book on it please give me the\n> name or url.\n \nIn terms of tuning in general, you might start with these:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html\n \nTo understand the mechanics of the optimizer you might be best off\ndownloading the source code and reading through the README files and\ncomments in the source code.\n \n-Kevin\n", "msg_date": "Thu, 26 Feb 2009 12:09:54 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres\n\tand MySQL" }, { "msg_contents": "Kevin Grittner wrote:\n>>>> Farhan Husain <[email protected]> wrote: \n>> Thanks a lot Scott! I think that was the problem. I just changed the\n>> default statistics target to 50 and ran explain. The plan changed\n>> and I ran explain analyze. Now it takes a fraction of a second!\n> \n> Yeah, the default of 10 has been too low. In 8.4 it is being raised\n> to 100.\n> \n>> Thanks to all of you who wanted to help me. I would be happy if\n>> someone does me one last favor. I want to know how these query plans\n>> are generated and how the parameters you suggested to change affects\n>> it. 
If there is any article, paper or book on it please give me the\n>> name or url.\n> \n> In terms of tuning in general, you might start with these:\n> \n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n> \n> http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html\n> \n> To understand the mechanics of the optimizer you might be best off\n> downloading the source code and reading through the README files and\n> comments in the source code.\n> \n> -Kevin\n> \nHello List,\n\nCan this be set in the postgresql.conf file?\ndefault_statistics_target = 50\n\nThanks,\nSteve\n", "msg_date": "Thu, 26 Feb 2009 14:10:29 -0500", "msg_from": "Steve Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres\tand\n MySQL" }, { "msg_contents": "On Thu, Feb 26, 2009 at 12:10 PM, Steve Clark <[email protected]> wrote:\n>\n> Can this be set in the postgresql.conf file?\n> default_statistics_target = 50\n\nYep. It will take affect after a reload and after the current\nconnection has been reset.\n\nIf you want to you also set a default for a database or a role. Fine\ntuning as needed.\n", "msg_date": "Thu, 26 Feb 2009 12:16:54 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Abnormal performance difference between Postgres and\n\tMySQL" } ]
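Pulling together the fix that resolved this thread: the planner's row estimates for jena_g1t1_stmt only became accurate once the statistics target was raised and the table re-analyzed. A minimal sketch of the options discussed above, combining the cluster-wide, per-database/per-role and per-column approaches; "some_role" is a placeholder, and the target values simply echo the figures tried in the thread rather than being recommendations:

-- cluster-wide default: set default_statistics_target = 50 in postgresql.conf and reload

-- per-database or per-role default, as mentioned at the end of the thread
ALTER DATABASE ingentadb SET default_statistics_target = 50;
ALTER ROLE some_role SET default_statistics_target = 50;

-- per-column targets for the skewed columns named in the explain output,
-- which is the safer route once you go to larger values
ALTER TABLE jena_g1t1_stmt ALTER COLUMN graphid SET STATISTICS 100;
ALTER TABLE jena_g1t1_stmt ALTER COLUMN prop    SET STATISTICS 100;
ALTER TABLE jena_g1t1_stmt ALTER COLUMN obj     SET STATISTICS 100;

-- re-gather statistics so the new targets actually take effect
ANALYZE jena_g1t1_stmt;

The per-database and per-role settings only apply to new sessions, and the per-column targets only matter after the ANALYZE has run.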
[ { "msg_contents": "explain select ss, ARRAY(select id from foo where ss>0 and id between\n7 and 156 order by random() limit 3) as v from\ngenerate_series(1,1000000) ss;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Function Scan on generate_series ss (cost=0.00..9381.22 rows=1000 width=4)\n SubPlan\n -> Limit (cost=9.36..9.37 rows=3 width=8)\n -> Sort (cost=9.36..9.74 rows=150 width=8)\n Sort Key: (random())\n -> Result (cost=0.00..7.42 rows=150 width=8)\n One-Time Filter: ($0 > 0)\n -> Seq Scan on foo (cost=0.00..7.05 rows=150 width=8)\n Filter: ((id >= 7) AND (id <= 156))\n(9 rows)\n\n:(\n\nno matter if I change last generate_series's range, it will always\nestimate 1000 rows...\n\n\n-- \nGJ\n", "msg_date": "Tue, 24 Feb 2009 14:33:55 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "planner's midjudge number of rows resulting, despite pretty obvious\n\tjoin" }, { "msg_contents": "Hello\n\n2009/2/24 Grzegorz Jaśkiewicz <[email protected]>:\n> explain select ss, ARRAY(select id from foo where ss>0 and id between\n> 7 and 156 order by random() limit 3) as v from\n> generate_series(1,1000000) ss;\n>                                     QUERY PLAN\n> ------------------------------------------------------------------------------------\n>  Function Scan on generate_series ss  (cost=0.00..9381.22 rows=1000 width=4)\n>   SubPlan\n>     ->  Limit  (cost=9.36..9.37 rows=3 width=8)\n>           ->  Sort  (cost=9.36..9.74 rows=150 width=8)\n>                 Sort Key: (random())\n>                 ->  Result  (cost=0.00..7.42 rows=150 width=8)\n>                       One-Time Filter: ($0 > 0)\n>                       ->  Seq Scan on foo  (cost=0.00..7.05 rows=150 width=8)\n>                             Filter: ((id >= 7) AND (id <= 156))\n> (9 rows)\n>\n> :(\n>\n> no matter if I change last generate_series's range, it will always\n> estimate 1000 rows...\n>\n>\n\nThere are not dynamic estimator for SRF function. You can change it\nstatically via ROWS flag - default is 1000 rows.\n\npostgres=# create or replace function fooo() returns setof int as $$\nselect * from (values(10),(20)) x$$ language sql;\nCREATE FUNCTION\npostgres=# explain select * from fooo();\n QUERY PLAN\n--------------------------------------------------------------\n Function Scan on fooo (cost=0.00..260.00 rows=1000 width=4)\n(1 row)\n\npostgres=# create or replace function fooo() returns setof int as $$\nselect * from (values(10),(20)) x$$ language sql rows 432;\nCREATE FUNCTION\npostgres=# explain select * from fooo();\n QUERY PLAN\n-------------------------------------------------------------\n Function Scan on fooo (cost=0.00..112.32 rows=432 width=4)\n(1 row)\n\npostgres=#\n\nregards\nPavel Stehule\n\n> --\n> GJ\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 24 Feb 2009 15:43:01 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: planner's midjudge number of rows resulting, despite\n\tpretty obvious join" } ]
[ { "msg_contents": "Question to core developers\nif I rank() a table, grouping by foo - but only will want to get first\nX result for every rank.\nWill postgresql be able to optimize that, or is it something left over\nfor 8.5 in general?\n\n\n-- \nGJ\n", "msg_date": "Tue, 24 Feb 2009 15:07:18 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "will 8.4 be able to optmize rank() windows ?" } ]
[ { "msg_contents": "Hello\n\nDoes Postgres have ability to keep .dict and .affix files cached \nglobally for all client sessions?\n\nEvery time I connect to test server - it takes 3 seconds to load 4MB \ndictionary when executing first FTS query.\n\n-- \nRegards,\nTomasz Myrta\n", "msg_date": "Wed, 25 Feb 2009 11:20:02 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": true, "msg_subject": "full text search - dictionary caching" }, { "msg_contents": "Tomasz Myrta <[email protected]> writes:\n> Does Postgres have ability to keep .dict and .affix files cached \n> globally for all client sessions?\n\nNo, there's no provision for that.\n\n> Every time I connect to test server - it takes 3 seconds to load 4MB \n> dictionary when executing first FTS query.\n\nYou might consider using connection pooling, so that you can re-use\na backend process that already has everything loaded in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Feb 2009 11:36:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: full text search - dictionary caching " } ]
[ { "msg_contents": "Hi,\nI was reading a benchmark that sets out block sizes against raw IO performance for a number of different RAID configurations involving high end SSDs (the Mtron 7535) on a powerful RAID controller (the Areca 1680IX with 4GB RAM). See http://jdevelopment.nl/hardware/one-dvd-per-second/\n>From the figures being given it seems to be the case that a 16KB block size is the ideal size. Namely, in the graphs its clear that a high amount of IOPS (60000) is maintained until the 16KB block size, but drops sharply after that. MB/sec however still increases until a block size of ~128KB. I would say that the sweet spot is therefor either 16KB (if you emphasis many IOPS) or something between 16KB and 128KB if you want to optimize for both a large number of IOPS and a large number of MB/sec. It seems to me that MB/sec is less important for most database operations, but since we're talking about random IO MB/sec might still be an important figure.\nPostgreSQL however defaults to using a block size of 8KB. From the observations made in the benchmark this seems to be a less than optimal size (at least for such an SSD setup). The block size in PG only seems to be changeable by means of a recompile, so it's not something for a quick test.\nNevertheless, the numbers given in the benchmark intrigue me and I wonder if anyone has already tried setting PG's block size to 16KB for such a setup as used in the SSD benchmark.\nThanks in advance for all help,Henk\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n\nHi,I was reading a benchmark that sets out block sizes against raw IO performance for a number of different RAID configurations involving high end SSDs (the Mtron 7535) on a powerful RAID controller (the Areca 1680IX with 4GB RAM). See http://jdevelopment.nl/hardware/one-dvd-per-second/From the figures being given it seems to be the case that a 16KB block size is the ideal size. Namely, in the graphs its clear that a high amount of IOPS (60000) is maintained until the 16KB block size, but drops sharply after that. MB/sec however still increases until a block size of ~128KB. I would say that the sweet spot is therefor either 16KB (if you emphasis many IOPS) or something between 16KB and 128KB if you want to optimize for both a large number of IOPS and a large number of MB/sec. It seems to me that MB/sec is less important for most database operations, but since we're talking about random IO MB/sec might still be an important figure.PostgreSQL however defaults to using a block size of 8KB. From the observations made in the benchmark this seems to be a less than optimal size (at least for such an SSD setup). The block size in PG only seems to be changeable by means of a recompile, so it's not something for a quick test.Nevertheless, the numbers given in the benchmark intrigue me and I wonder if anyone has already tried setting PG's block size to 16KB for such a setup as used in the SSD benchmark.Thanks in advance for all help,HenkExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Wed, 25 Feb 2009 15:58:43 +0100", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL block size for SSD RAID setup?" 
}, { "msg_contents": "henk de wit <[email protected]> writes:\n\n> Hi,\n> I was reading a benchmark that sets out block sizes against raw IO performance\n> for a number of different RAID configurations involving high end SSDs (the\n> Mtron 7535) on a powerful RAID controller (the Areca 1680IX with 4GB RAM). See\n> http://jdevelopment.nl/hardware/one-dvd-per-second/\n\nYou might also be interested in:\n\nhttp://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/\n\nhttp://thunk.org/tytso/blog/2009/02/22/should-filesystems-be-optimized-for-ssds/\n\nIt seems you have to do more work than just look at the application. You want\nthe application, the filesystem, the partition layout, and the raid device\ngeometry to all consistently maintain alignment with erase blocks.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Wed, 25 Feb 2009 15:42:14 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL block size for SSD RAID setup?" }, { "msg_contents": ">You might also be interested in:\n> \n> http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/\n> \n> http://thunk.org/tytso/blog/2009/02/22/should-filesystems-be-optimized-for-ssds/\n\nThanks a lot for the pointers. I'll definitely check these out.\n> It seems you have to do more work than just look at the application. You want\n> the application, the filesystem, the partition layout, and the raid device\n> geometry to all consistently maintain alignment with erase blocks.\nSo it seems. PG is just a factor in this equation, but nevertheless an important one.\n_________________________________________________________________\nSee all the ways you can stay connected to friends and family\nhttp://www.microsoft.com/windows/windowslive/default.aspx\n\n\n\n\n\n>You might also be interested in:> > http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/> > http://thunk.org/tytso/blog/2009/02/22/should-filesystems-be-optimized-for-ssds/Thanks a lot for the pointers. I'll definitely check these out.> It seems you have to do more work than just look at the application. You want> the application, the filesystem, the partition layout, and the raid device> geometry to all consistently maintain alignment with erase blocks.So it seems. PG is just a factor in this equation, but nevertheless an important one.See all the ways you can stay connected to friends and family", "msg_date": "Wed, 25 Feb 2009 17:13:34 +0100", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL block size for SSD RAID setup?" }, { "msg_contents": "\n> Hi,\n> I was reading a benchmark that sets out block sizes against raw IO \n> performance for a number of different RAID configurations involving high \n> end SSDs (the Mtron 7535) on a powerful RAID controller (the Areca \n> 1680IX with 4GB RAM). See \n> http://jdevelopment.nl/hardware/one-dvd-per-second/\n\n\tLucky guys ;)\n\n\tSomething that bothers me about SSDs is the interface... The latest flash\nchips from Micron (32Gb = 4GB per chip) have something like 25 us \"access\ntime\" (lol) and push data at 166 MB/s (yes megabytes per second) per chip.\nSo two of these chips are enough to bottleneck a SATA 3Gbps link... there\nwould be 8 of those chips in a 32GB SSD. 
Parallelizing would depend on the\nblock size : putting all chips in parallel would increase the block size,\nso in practice I don't know how it's implemented, probably depends on the\nmake and model of SSD.\n\n\tAnd then RAIDing those (to get back the lost throughput from using SATA)\nwill again increase the block size which is bad for random writes. So it's\na bit of a chicken and egg problem. Also since harddisks have high\nthroughput but slow seeks, all the OS'es and RAID cards, drivers, etc are\nprobably optimized for throughput, not IOPS. You need a very different\nstrategy for 100K/s 8kbyte IOs versus 1K/s 1MByte IOs. Like huge queues,\nsmarter hardware, etc.\n\n\tFusionIO got an interesting product by using the PCI-e interface which\nbrings lots of benefits like much higher throughput and the possibility of\nusing custom drivers optimized for handling much more IO requests per\nsecond than what the OS and RAID cards, and even SATA protocol, were\ndesigned for.\n\n\tIntrigued by this I looked at the FusionIO benchmarks : more than 100.000\nIOPS, really mindboggling, but in random access over a 10MB file. A little\nbit of google image search reveals the board contains a lot of Flash chips\n(expected) and a fat FPGA (expected) probably a high-end chip from X or A,\nand two DDR RAM chips from Samsung, probably acting as cache. So I wonder\nif the 10 MB file used as benchmark to reach those humongous IOPS was\nactually in the Flash ?... or did they actually benchmark the device's\nonboard cache ?...\n\n\tIt probably has writeback cache so on a random writes benchmark this is\nan interesting question. A good RAID card with BBU cache would have the\nsame benchmarking gotcha (ie if you go crazy on random writes on a 10 MB\nfile which is very small, and the device is smart, possibly at the end of\nthe benchmark nothing at all was written to the disks !)\n\n\tAnyway in a database use case if random writes are going to be a pain\nthey are probably not going to be distributed in a tiny 10MB zone which\nthe controller cache would handle...\n\n\t(just rambling XDD)\n", "msg_date": "Wed, 25 Feb 2009 19:28:31 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL block size for SSD RAID setup?" }, { "msg_contents": "Most benchmarks and reviews out there are very ignorant on SSD design. I suggest you start by reading some white papers and presentations on the research side that are public:\n(pdf) http://research.microsoft.com/pubs/63596/USENIX-08-SSD.pdf\n(html) http://www.usenix.org/events/usenix08/tech/full_papers/agrawal/agrawal_html/index.html\nPdf presentation power point style: http://institute.lanl.gov/hec-fsio/workshops/2008/presentations/day3/Prabhakaran-Panel-SSD.pdf\n\nBenchmarks by EasyCo (software layer that does what the hardware should if your ssd's controller stinks):\nhttp://www.storagesearch.com/easyco-flashperformance-art.pdf\n\n\nOn 2/25/09 10:28 AM, \"PFC\" <[email protected]> wrote:\n> Hi,\n> I was reading a benchmark that sets out block sizes against raw IO\n> performance for a number of different RAID configurations involving high\n> end SSDs (the Mtron 7535) on a powerful RAID controller (the Areca\n> 1680IX with 4GB RAM). See\n> http://jdevelopment.nl/hardware/one-dvd-per-second/\n\n Lucky guys ;)\n\n Something that bothers me about SSDs is the interface... 
The latest flash\nchips from Micron (32Gb = 4GB per chip) have something like 25 us \"access\ntime\" (lol) and push data at 166 MB/s (yes megabytes per second) per chip.\nSo two of these chips are enough to bottleneck a SATA 3Gbps link... there\nwould be 8 of those chips in a 32GB SSD. Parallelizing would depend on the\nblock size : putting all chips in parallel would increase the block size,\nso in practice I don't know how it's implemented, probably depends on the\nmake and model of SSD.\n\nNo, you would need at least 10 to 12 of those chips for such a SSD (that does good wear leveling), since overprovisioning is required for wear leveling and write amplification factor.\n\n And then RAIDing those (to get back the lost throughput from using SATA)\nwill again increase the block size which is bad for random writes. So it's\na bit of a chicken and egg problem.\n\nWith cheap low end SSD's that don't deal with random writes properly, and can't remap LBA 's to physical blocks in small chunks, and raid stripes smaller than erase blocks, yes. But for SSD's you want large RAID block sizes, no raid 5, without pre-loading the whole block on a small read. This is since random access inside one block is fast, unlike hard drives.\n\nAlso since harddisks have high\nthroughput but slow seeks, all the OS'es and RAID cards, drivers, etc are\nprobably optimized for throughput, not IOPS. You need a very different\nstrategy for 100K/s 8kbyte IOs versus 1K/s 1MByte IOs. Like huge queues,\nsmarter hardware, etc.\n\nYes. I get better performance with software raid 10, multiple plain SAS adapters, and SSD's than any raid card I've tried because the raid card can't keep up with the i/o's and tries to do a lot of scheduling work. Furthermore, a Battery backed memory caching card is forced to prioritize writes at the expense of reads, which causes problems when you want to keep read latency low during a large batch write. Throw the same requests at a good SSD, and it works (90% of them are bad schedulers and with concurrent read/write at the moment though).\n\n FusionIO got an interesting product by using the PCI-e interface which\nbrings lots of benefits like much higher throughput and the possibility of\nusing custom drivers optimized for handling much more IO requests per\nsecond than what the OS and RAID cards, and even SATA protocol, were\ndesigned for.\n\n Intrigued by this I looked at the FusionIO benchmarks : more than 100.000\nIOPS, really mindboggling, but in random access over a 10MB file. A little\nbit of google image search reveals the board contains a lot of Flash chips\n(expected) and a fat FPGA (expected) probably a high-end chip from X or A,\nand two DDR RAM chips from Samsung, probably acting as cache. So I wonder\nif the 10 MB file used as benchmark to reach those humongous IOPS was\nactually in the Flash ?... or did they actually benchmark the device's\nonboard cache ?...\n\nIntel's SSD, and AFAIK FusionIO's device, do not cache writes in RAM (a tiny bit is buffered in SRAM, 256K=erase block size, on the intel controller; unknown in FusionIO's FPGA).\nThat ram is the working space cache for the LBA -> physical block remapping. When a request comes in for a read, looking up what physical block contains the LBA would take a long time if it was going through the flash (its the block that claims to be mapped that way, with the highest transaction number - or some other similar algorithm). The lookup table is cached in RAM. 
The wear leveling and other tasks need working set memory to operate as well.\n\n It probably has writeback cache so on a random writes benchmark this is\nan interesting question. A good RAID card with BBU cache would have the\nsame benchmarking gotcha (ie if you go crazy on random writes on a 10 MB\nfile which is very small, and the device is smart, possibly at the end of\nthe benchmark nothing at all was written to the disks !)\n\nThe numbers are slower, but not as dramatic as you would expect, for a 10GB file. Its clearly not a writeback cache.\n\n Anyway in a database use case if random writes are going to be a pain\nthey are probably not going to be distributed in a tiny 10MB zone which\nthe controller cache would handle...\n\n (just rambling XDD)\n\nCertain write load mixes can fragment the LBA > physical block map and make wear leveling and write amplification reduction expensive and slow things down. This effect is usually temporary and highly workload dependant.\nThe solution (see white papers above) is more over-provisioning of flash. This can be achieved manually by making sure that more of the LBA's are NEVER ever written to - partition just 75% of the drive and leave the last 25% untouched, then there will be that much more extra to work with which makes even insanely crazy continuous random writes over the whole space perform at very high iops with low latency. This is only necessary for particular loads, and all flash devices over-provision to some extent. I'm pretty sure that the Intel X25-M, which provides 80GB to the user, has at least 100GB of actual flash in there - perhaps 120GB. That overprovision may be internal to the actual flash chip, since Intel makes both the chip and controller. There is absolutely extra ECC and block metadata in there (this is not new, again, see the whitepaper).\nThe X25-E certainly is over-provisioned.\n\nIn the future, there are two things that will help flash a lot:\n*File systems that avoid writing to a region as long as possible, preferring to write to areas previously freed at some point.\n*New OS block device semantics. Currently its 'read' and 'write'. The device, once all LBA's have had a write to them once, is always \"100%\" full. A 'deallocate' command would help SSD random writes, wear leveling, and write amplification algorithms significantly.", "msg_date": "Wed, 25 Feb 2009 11:23:07 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL block size for SSD RAID setup?" } ]
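As a follow-up to the block-size question that opened this thread: the page size a server was compiled with can be checked at run time, which is a useful sanity check before and after the rebuild the original poster mentions. This is only a read-only sketch; actually changing the value still requires recompiling PostgreSQL (the size is fixed at build time via BLCKSZ), so the exact build procedure should be confirmed against the documentation for the version in use.

-- show the compiled-in page size of the running server (8 kB by default)
SHOW block_size;

-- equivalent form, usable inside larger queries
SELECT current_setting('block_size') AS block_size;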
[ { "msg_contents": "Actually, they're all deadlocked. The question is why?\n\nHere's a brief background. The ts_defects table is partitioned by \noccurrence date; each partition contains the rows for 1 day. When the \ndata gets old enough, the partition is dropped. Since the correct \npartition can be determined from the occurrence date, there is no \ntrigger: inserts are done directly into the correct partition. Multiple \nthreads may be inserting into a partition at the same time. The thread \nthat checks for old data to be dropped runs at 00:30 each night. It also \ncreates the partition for the next day.\n\nBelow is the output from:\nselect xact_start,query_start,substring(current_query from 0 for 40) \nfrom pg_stat_activity order by xact_start;\n\nrun at 18:40 on 28 Feb 2009 (i.e. these queries have been running for\n > 6 hours). The 1st select is not on any of the ts_defect partitions\nnor is the CREATE VIEW. The SELECT's shown last are not (directly) \ngenerated by the java program that is running the drop table, inserts,\nthe 1st select and the CREATE VIEW.\n\nThanks for your ideas,\nBrian\n\n\n 2009-02-28 00:30:00.01572-08 | 2009-02-28 00:30:00.015758-08 | drop \ntable ts_defects_20090225\n 2009-02-28 00:30:00.693353-08 | 2009-02-28 00:30:00.69337-08 | select \ntransetdef0_.ts_id as ts1_85_0_,\n 2009-02-28 00:30:01.875671-08 | 2009-02-28 00:30:01.875911-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:01.875673-08 | 2009-02-28 00:30:01.875911-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:01.875907-08 | 2009-02-28 00:30:01.87611-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:01.87615-08 | 2009-02-28 00:30:01.876334-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:01.87694-08 | 2009-02-28 00:30:01.877153-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:01.876952-08 | 2009-02-28 00:30:01.877171-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:01.876965-08 | 2009-02-28 00:30:01.87716-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:01.877267-08 | 2009-02-28 00:30:01.877483-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:01.877928-08 | 2009-02-28 00:30:01.878101-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 00:30:06.822733-08 | 2009-02-28 00:30:06.822922-08 | insert \ninto ts_defects_20090228 (ts_id,\n 2009-02-28 01:01:00.95051-08 | 2009-02-28 01:01:00.950605-08 | CREATE \nVIEW TranSetGroupSlaPerformanceA\n 2009-02-28 09:12:33.181039-08 | 2009-02-28 09:12:33.181039-08 | SELECT \nc.oid, c.relname, pg_get_userbyi\n 2009-02-28 09:19:47.335621-08 | 2009-02-28 09:19:47.335621-08 | SELECT \nc.oid, c.relname, pg_get_userbyi\n 2009-02-28 10:52:36.638467-08 | 2009-02-28 10:52:36.638467-08 | SELECT \nc.oid, c.relname, pg_get_userbyi\n 2009-02-28 11:01:05.023126-08 | 2009-02-28 11:01:05.023126-08 | SELECT \nc.oid, c.relname, pg_get_userbyi\n", "msg_date": "Sat, 28 Feb 2009 18:51:32 -0800", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "\"slow\" queries" }, { "msg_contents": "On Sat, Feb 28, 2009 at 9:51 PM, Brian Cox <[email protected]> wrote:\n> Actually, they're all deadlocked. The question is why?\n>\n> Here's a brief background. The ts_defects table is partitioned by occurrence\n> date; each partition contains the rows for 1 day. When the data gets old\n> enough, the partition is dropped. 
> Since the correct partition can be\n> determined from the occurrence date, there is no trigger: inserts are done\n> directly into the correct partition. Multiple threads may be inserting into\n> a partition at the same time. The thread that checks for old data to be\n> dropped runs at 00:30 each night. It also creates the partition for the next\n> day.\n>\n> Below is the output from:\n> select xact_start,query_start,substring(current_query from 0 for 40) from\n> pg_stat_activity order by xact_start;\n\nCan you post this again with procpid added to the column list and\nwithout truncating current_query? And then also post the results of\n\"select * from pg_locks\"?\n\nIs there anything interesting in the postmaster log?\n\n...Robert\n", "msg_date": "Sat, 28 Feb 2009 22:15:17 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> Actually, they're all deadlocked. The question is why?\n\nProbably because the DROP is trying to acquire exclusive lock on its\ntarget table, and some other transaction already has a read or write\nlock on that table, and everything else is queuing up behind the DROP.\n\nIt's not a true deadlock that is visible to the database, or else\nPostgres would have failed enough of the transactions to remove the\ndeadlock. Rather, what you've got is some very-long-running transaction\nthat is still making progress, or else is sitting idle because its\nclient is neglecting to close it; and everything else is blocked behind\nthat.\n\nIf it is not clear to you exactly who is waiting for what, a look into\nthe pg_locks view might help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Mar 2009 12:05:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries " }, { "msg_contents": ">Probably because the DROP is trying to acquire exclusive lock on its\n>target table, and some other transaction already has a read or write\n>lock on that table, and everything else is queuing up behind the DROP.\n\n>It's not a true deadlock that is visible to the database, or else\n>Postgres would have failed enough of the transactions to remove the\n>deadlock. Rather, what you've got is some very-long-running transaction\n>that is still making progress, or else is sitting idle because its\n>client is neglecting to close it; and everything else is blocked behind\n>that.\n\nThis \"deadlock\" finished after 18h and 48m. As there is only 1 select\non a table with 400 rows and 10 inserts into a separate partition than\nthe one being dropped, what could possible take 18:48 to do?\n\nI also don't understand why inserts into a separate partition or a select on\nan unrelated table should cause any locks on the table being dropped in\nthe 1st place. I assume that the CREATE VIEW, which started 1 hour\nafter the DROP, can't possibly be the cause of this \"deadlock\".\n\nBrian", "msg_date": "Sun, 1 Mar 2009 14:21:54 -0500", "msg_from": "\"Cox, Brian\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries " }, { "msg_contents": "\"Cox, Brian\" <[email protected]> writes:\n>> Probably because the DROP is trying to acquire exclusive lock on its\n>> target table, and some other transaction already has a read or write\n>> lock on that table, and everything else is queuing up behind the DROP.\n\n>> It's not a true deadlock that is visible to the database, or else\n>> Postgres would have failed enough of the transactions to remove the\n>> deadlock. Rather, what you've got is some very-long-running transaction\n>> that is still making progress, or else is sitting idle because its\n>> client is neglecting to close it; and everything else is blocked behind\n>> that.\n\n> This \"deadlock\" finished after 18h and 48m. As there is only 1 select\n> on a table with 400 rows and 10 inserts into a separate partition than\n> the one being dropped, what could possible take 18:48 to do?\n\n[ shrug... ] You tell us. To me it sounds a whole lot like some client\nprogram sitting on an open transaction that has a nonexclusive lock on\nthe table to be dropped. That transaction wasn't necessarily doing any\nuseful work; it might have just been waiting on the client.\n\nAt this point I suppose arguing about it is moot because the evidence\nis all gone. If it happens again, capture the contents of pg_locks and\npg_stat_activity while things are still stuck.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 01 Mar 2009 20:57:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries " } ]
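A sketch of the capture Tom Lane asks for in the last message, joining pg_locks to pg_stat_activity so the granted column and the associated statements can be read side by side while things are still stuck. Column names (procpid, current_query, waiting, xact_start) follow the 8.2/8.3-era catalogs used elsewhere in these threads.

SELECT l.pid, a.usename, l.locktype, l.relation::regclass AS relation,
       l.mode, l.granted, a.waiting, a.xact_start,
       substring(a.current_query from 0 for 60) AS current_query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;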
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> [ shrug... ] You tell us. To me it sounds a whole lot like some client\n> program sitting on an open transaction that has a nonexclusive lock on\n> the table to be dropped. That transaction wasn't necessarily doing any\n> useful work; it might have just been waiting on the client.\n\nI wish I could... And, in any event, aren't all transactions listed in \nthe pg_stat_activity select?\n\n> At this point I suppose arguing about it is moot because the evidence\n> is all gone. If it happens again, capture the contents of pg_locks and\n> pg_stat_activity while things are still stuck.\n\nThis happened again last night. This time I'd added a lock (in the java \ncode) to prevent inserts into other partitions of ts_defects while the \ndrop is in progress. Below is the output from:\nselect xact_start,datid,datname,procpid,usesysid,substring(current_query \nfrom 0 for 40),waiting,client_addr from pg_stat_activity order by \nxact_start;\n\nand\n\nselect locktype,database,relation,virtualxid,virtualtransaction,pid,mode \nfrom pg_locks order by mode;\n\nAs you can see there are only 3 transactions and 1 starts 1 hour after\nthe drop begins. I'm still trying to figure out how to interpret the\npg_locks output, but (presumably) you/others on this forum have more \nexperience at this than I.\n\nThanks,\nBrian\n\n\n\ncemdb=> select \nxact_start,datid,datname,procpid,usesysid,substring(current_query from 0 \nfor 40),waiting,client_addr from pg_stat_activity order by xact_start;\n xact_start | datid | datname | procpid | \nusesysid | substring | waiting | client_addr\n-------------------------------+----------+---------+---------+----------+-----------------------------------------+---------+----------------\n 2009-03-01 14:10:42.606592-08 | 26472437 | cemdb | 13833 | \n16392 | <IDLE> in transaction | f | 130.200.164.15\n 2009-03-02 00:30:00.039977-08 | 26472437 | cemdb | 13842 | \n16392 | drop table ts_defects_20090227 | t | 127.0.0.1\n 2009-03-02 00:30:00.066728-08 | 26472437 | cemdb | 13865 | \n16392 | select transetdef0_.ts_id as ts1_85_0_, | t | 127.0.0.1\n 2009-03-02 01:01:00.992486-08 | 26472437 | cemdb | 13840 | \n16392 | CREATE VIEW TranSetGroupSlaPerformanceA | t | 127.0.0.1\n 2009-03-02 10:16:21.252969-08 | 26472437 | cemdb | 29985 | \n16392 | select xact_start,datid,datname,procpid | f |\n | 26472437 | cemdb | 13735 | \n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13744 | \n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13857 | \n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13861 | \n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13864 | \n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13855 | \n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13740 | \n16392 | <IDLE> | f | 127.0.0.1\n(12 rows)\n\ncemdb=> select \nlocktype,database,relation,virtualxid,virtualtransaction,pid,mode from \npg_locks order by mode;\nlocktype | database | relation | virtualxid | virtualtransaction | \npid | mode\n---------------+----------+----------+------------+--------------------+-------+---------------------\n relation | 26472437 | 26592616 | | 15/69749 \n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26592608 | | 15/69749 \n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26592615 | | 15/69749 \n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26592613 | | 15/69749 \n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26472508 | | 15/69749 \n| 13842 | AccessExclusiveLock\n relation | 26472437 | 
26493706 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26473141 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 10969 | | 1/77414 \n| 29985 | AccessShareLock\n relation | 26472437 | 26473176 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493307 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493271 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493704 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493711 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 2674 | | 15/69749 \n| 13842 | AccessShareLock\n relation | 26472437 | 26493279 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26473227 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493705 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26472869 | | 14/70049 \n| 13840 | AccessShareLock\n relation | 26472437 | 26493306 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493712 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26472508 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493709 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26472508 | | 14/70049 \n| 13840 | AccessShareLock\n relation | 26472437 | 26472595 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493269 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493710 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 2702 | | 15/69749 \n| 13842 | AccessShareLock\n relation | 26472437 | 26493267 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493700 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26472508 | | 29/69612 \n| 13865 | AccessShareLock\n relation | 26472437 | 26493259 | | 11/131 \n| 13833 | AccessShareLock\n relation | 26472437 | 26493103 | | 11/131 \n| 13833 | AccessShareLock\n virtualxid | | | 14/70049 | 14/70049 \n| 13840 | ExclusiveLock\n transactionid | | | | 15/69749 \n| 13842 | ExclusiveLock\n virtualxid | | | 29/69612 | 29/69612 \n| 13865 | ExclusiveLock\n virtualxid | | | 15/69749 | 15/69749 \n| 13842 | ExclusiveLock\n virtualxid | | | 1/77414 | 1/77414 \n| 29985 | ExclusiveLock\n virtualxid | | | 11/131 | 11/131 \n| 13833 | ExclusiveLock\n relation | 26472437 | 2620 | | 15/69749 \n| 13842 | RowExclusiveLock\n relation | 26472437 | 2608 | | 15/69749 \n| 13842 | RowExclusiveLock\n(40 rows)\n", "msg_date": "Mon, 02 Mar 2009 10:22:55 -0800", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"slow\" queries" }, { "msg_contents": "In my experience, 13833, \"<IDLE> in transaction\" is your culprit. It is a transaction that has been there for 10 hours longer than all others, and is doing nothing at all. It has locks on a lot of objects in there. You'll have to take the oid's in the lock table and look them up in the pg_class table to figure out what those are. Alternatively, the procpid (13833) may be all you need to track down the user or program that needs to have a talking-to.\n\nSomething as dumb as:\n\nOpen psql\nBEGIN;\n// do some stuff\n\n//.. Go home with the terminal open, get some dinner, go to sleep\n//.. Wake up, dorp the kids off at school\n//.. Arrive at work, get some coffee\n//.. 
Realize your psql terminal is open and close it\n\nCan be your culprit.\nCommon culprits are applications that don't open and close their transactions properly when errors occur and pool connections forever.\n\nPg_locks, even on a really busy db, should not have that many locks in the view. If there are a lot, and they aren't 'churning and changing' then you have some long running transactions.\nThe pid column in the locks table corresponds with procpid in the activity table. The 'relation' column in the lock table corresponds with stuff in pg_class.\n\n\n\nOn 3/2/09 10:22 AM, \"Brian Cox\" <[email protected]> wrote:\n\nTom Lane [[email protected]] wrote:\n> [ shrug... ] You tell us. To me it sounds a whole lot like some client\n> program sitting on an open transaction that has a nonexclusive lock on\n> the table to be dropped. That transaction wasn't necessarily doing any\n> useful work; it might have just been waiting on the client.\n\nI wish I could... And, in any event, aren't all transactions listed in\nthe pg_stat_activity select?\n\n> At this point I suppose arguing about it is moot because the evidence\n> is all gone. If it happens again, capture the contents of pg_locks and\n> pg_stat_activity while things are still stuck.\n\nThis happened again last night. This time I'd added a lock (in the java\ncode) to prevent inserts into other partitions of ts_defects while the\ndrop is in progress. Below is the output from:\nselect xact_start,datid,datname,procpid,usesysid,substring(current_query\nfrom 0 for 40),waiting,client_addr from pg_stat_activity order by\nxact_start;\n\nand\n\nselect locktype,database,relation,virtualxid,virtualtransaction,pid,mode\nfrom pg_locks order by mode;\n\nAs you can see there are only 3 transactions and 1 starts 1 hour after\nthe drop begins. 
I'm still trying to figure out how to interpret the\npg_locks output, but (presumably) you/others on this forum have more\nexperience at this than I.\n\nThanks,\nBrian\n\n\n\ncemdb=> select\nxact_start,datid,datname,procpid,usesysid,substring(current_query from 0\nfor 40),waiting,client_addr from pg_stat_activity order by xact_start;\n xact_start | datid | datname | procpid |\nusesysid | substring | waiting | client_addr\n-------------------------------+----------+---------+---------+----------+-----------------------------------------+---------+----------------\n 2009-03-01 14:10:42.606592-08 | 26472437 | cemdb | 13833 |\n16392 | <IDLE> in transaction | f | 130.200.164.15\n 2009-03-02 00:30:00.039977-08 | 26472437 | cemdb | 13842 |\n16392 | drop table ts_defects_20090227 | t | 127.0.0.1\n 2009-03-02 00:30:00.066728-08 | 26472437 | cemdb | 13865 |\n16392 | select transetdef0_.ts_id as ts1_85_0_, | t | 127.0.0.1\n 2009-03-02 01:01:00.992486-08 | 26472437 | cemdb | 13840 |\n16392 | CREATE VIEW TranSetGroupSlaPerformanceA | t | 127.0.0.1\n 2009-03-02 10:16:21.252969-08 | 26472437 | cemdb | 29985 |\n16392 | select xact_start,datid,datname,procpid | f |\n | 26472437 | cemdb | 13735 |\n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13744 |\n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13857 |\n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13861 |\n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13864 |\n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13855 |\n16392 | <IDLE> | f | 127.0.0.1\n | 26472437 | cemdb | 13740 |\n16392 | <IDLE> | f | 127.0.0.1\n(12 rows)\n\ncemdb=> select\nlocktype,database,relation,virtualxid,virtualtransaction,pid,mode from\npg_locks order by mode;\nlocktype | database | relation | virtualxid | virtualtransaction |\npid | mode\n---------------+----------+----------+------------+--------------------+-------+---------------------\n relation | 26472437 | 26592616 | | 15/69749\n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26592608 | | 15/69749\n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26592615 | | 15/69749\n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26592613 | | 15/69749\n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26472508 | | 15/69749\n| 13842 | AccessExclusiveLock\n relation | 26472437 | 26493706 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26473141 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 10969 | | 1/77414\n| 29985 | AccessShareLock\n relation | 26472437 | 26473176 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493307 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493271 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493704 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493711 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 2674 | | 15/69749\n| 13842 | AccessShareLock\n relation | 26472437 | 26493279 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26473227 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493705 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26472869 | | 14/70049\n| 13840 | AccessShareLock\n relation | 26472437 | 26493306 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493712 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26472508 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493709 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26472508 | | 
14/70049\n| 13840 | AccessShareLock\n relation | 26472437 | 26472595 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493269 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493710 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 2702 | | 15/69749\n| 13842 | AccessShareLock\n relation | 26472437 | 26493267 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493700 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26472508 | | 29/69612\n| 13865 | AccessShareLock\n relation | 26472437 | 26493259 | | 11/131\n| 13833 | AccessShareLock\n relation | 26472437 | 26493103 | | 11/131\n| 13833 | AccessShareLock\n virtualxid | | | 14/70049 | 14/70049\n| 13840 | ExclusiveLock\n transactionid | | | | 15/69749\n| 13842 | ExclusiveLock\n virtualxid | | | 29/69612 | 29/69612\n| 13865 | ExclusiveLock\n virtualxid | | | 15/69749 | 15/69749\n| 13842 | ExclusiveLock\n virtualxid | | | 1/77414 | 1/77414\n| 29985 | ExclusiveLock\n virtualxid | | | 11/131 | 11/131\n| 13833 | ExclusiveLock\n relation | 26472437 | 2620 | | 15/69749\n| 13842 | RowExclusiveLock\n relation | 26472437 | 2608 | | 15/69749\n| 13842 | RowExclusiveLock\n(40 rows)", "msg_date": "Mon, 2 Mar 2009 10:58:28 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries" }, { "msg_contents": "On Mon, Mar 2, 2009 at 1:22 PM, Brian Cox <[email protected]> wrote:\n> As you can see there are only 3 transactions and 1 starts 1 hour after\n> the drop begins. 
I'm still trying to figure out how to interpret the\n> pg_locks output, but (presumably) you/others on this forum have more\n> experience at this than I.\n\nI'm rather suspicious of that line that says <IDLE> in transaction.\nConnections that are idle, but in a transaction, can be holding locks.\n And since they are idle, things can stay that way for a very long\ntime... hours, days... coincidentally, that idle-in-transaction\nprocpid is holding AccessShareLocks on a whole boatload of relations.\n\nIt's a little hard to decode this output because the \"relation\" column\nfrom pg_locks is an OID, and we don't know what relation it\nrepresents. It's helpful to cast that column to \"regclass\": select\nlocktype,database,relation::regclass,virtualxid,virtualtransaction,pid,mode\nfrom pg_locks order by mode;\n\nFor now, though, try this:\n\nselect oid, relname from pg_class where relname like 'ts_defects%';\n\nI suspect you'll find that the oid of the table that you were trying\nto drop is one of the ones on which the idle in transaction process is\nholding an AccessShareLock on...\n\n...Robert\n", "msg_date": "Mon, 2 Mar 2009 14:04:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> select locktype,database,relation,virtualxid,virtualtransaction,pid,mode \n> from pg_locks order by mode;\n\nIf you hadn't left out the \"granted\" column we could be more sure,\nbut what it looks like to me is the DROP (pid 13842) is stuck behind\nthe <IDLE> transaction (pid 13833). In particular these two rows of\npg_locks look like a possible conflict:\n\n> relation | 26472437 | 26472508 | | 15/69749 \n> | 13842 | AccessExclusiveLock\n\n> relation | 26472437 | 26472508 | | 11/131 \n> | 13833 | AccessShareLock\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Mar 2009 14:29:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries " }, { "msg_contents": "On Mon, Mar 02, 2009 at 02:29:31PM -0500, Tom Lane wrote:\n> Brian Cox <[email protected]> writes:\n> > select locktype,database,relation,virtualxid,virtualtransaction,pid,mode \n> > from pg_locks order by mode;\n> \n> If you hadn't left out the \"granted\" column we could be more sure,\n> but what it looks like to me is the DROP (pid 13842) is stuck behind\n> the <IDLE> transaction (pid 13833). In particular these two rows of\n> pg_locks look like a possible conflict:\n> \n> > relation | 26472437 | 26472508 | | 15/69749 \n> > | 13842 | AccessExclusiveLock\n> \n> > relation | 26472437 | 26472508 | | 11/131 \n> > | 13833 | AccessShareLock\n\nWould it be possible to write a stored procedure that would read\npg_locks, and other relevant tables, and list what's blocking what\nin a simplified form?\n\nTim.\n", "msg_date": "Mon, 2 Mar 2009 21:24:33 +0000", "msg_from": "Tim Bunce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries" }, { "msg_contents": "On Mon, Mar 2, 2009 at 2:24 PM, Tim Bunce <[email protected]> wrote:\n> On Mon, Mar 02, 2009 at 02:29:31PM -0500, Tom Lane wrote:\n>> Brian Cox <[email protected]> writes:\n>> > select locktype,database,relation,virtualxid,virtualtransaction,pid,mode\n>> > from pg_locks order by mode;\n>>\n>> If you hadn't left out the \"granted\" column we could be more sure,\n>> but what it looks like to me is the DROP (pid 13842) is stuck behind\n>> the <IDLE> transaction (pid 13833).  
In particular these two rows of\n>> pg_locks look like a possible conflict:\n>>\n>> >   relation      | 26472437 | 26472508 |            | 15/69749\n>> > | 13842 | AccessExclusiveLock\n>>\n>> >   relation      | 26472437 | 26472508 |            | 11/131\n>> > | 13833 | AccessShareLock\n>\n> Would it be possible to write a stored procedure that would read\n> pg_locks, and other relevant tables, and list what's blocking what\n> in a simplified form?\n\nI'm sure a query could use just connect by from tablefuncs, or WITH\nunder 8.4 and get it.\n", "msg_date": "Mon, 2 Mar 2009 14:59:47 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries" } ]
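A rough version of the simplified 'what is blocking what' listing Tim Bunce asks about above, written as a plain self-join on pg_locks rather than a stored procedure. It is only a sketch: it matches ungranted relation-level locks against every granted lock on the same relation and ignores the finer points of the lock conflict matrix, so it can list holders that are not actually in the way.

SELECT waiting.pid                AS waiting_pid,
       waiting.mode               AS waiting_mode,
       waiting.relation::regclass AS relation,
       blocker.pid                AS blocking_pid,
       blocker.mode               AS blocking_mode
FROM pg_locks waiting
JOIN pg_locks blocker
  ON blocker.relation = waiting.relation
 AND blocker.database = waiting.database
 AND blocker.pid <> waiting.pid
 AND blocker.granted
WHERE NOT waiting.granted
  AND waiting.relation IS NOT NULL;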
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> If you hadn't left out the \"granted\" column we could be more sure,\n> but what it looks like to me is the DROP (pid 13842) is stuck behind\n> the <IDLE> transaction (pid 13833). In particular these two rows of\n> pg_locks look like a possible conflict:\n> \n> > relation | 26472437 | 26472508 | | 15/69749\n> > | 13842 | AccessExclusiveLock\n> \n> > relation | 26472437 | 26472508 | | 11/131\n> > | 13833 | AccessShareLock\n\nselect c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c join \npg_locks l on c.oid=l.relation order by l.pid;\n\n26472508 | ts_transets | 13833 | AccessShareLock | t\n26472508 | ts_transets | 13842 | AccessExclusiveLock | f\n\npid 13833 is the idle transaction and 13842 is doing the drop table.\n\nSo, the idle transaction is the problem. Thanks to you, Scott Carey and \nRobert Haas for pointing this out. However, why does the drop of \nts_defects_20090227 need exclusive access to ts_transets? I assume it \nmust be due to this FK?\n\nalter table ts_defects_20090227 add constraint FK34AA2B629DADA24\nforeign key (ts_transet_id) references ts_transets;\n\n\nThanks again,\nBrian\n", "msg_date": "Mon, 02 Mar 2009 11:55:54 -0800", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"slow\" queries" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> So, the idle transaction is the problem. Thanks to you, Scott Carey and \n> Robert Haas for pointing this out. However, why does the drop of \n> ts_defects_20090227 need exclusive access to ts_transets? I assume it \n> must be due to this FK?\n\n> alter table ts_defects_20090227 add constraint FK34AA2B629DADA24\n> foreign key (ts_transet_id) references ts_transets;\n\nWell, that's certainly a sufficient reason, if perhaps not the only\nreason. Dropping ts_defects_20090227 will require removal of FK triggers\non ts_transets, and we can't do that concurrently with transactions that\nmight be trying to fire those triggers.\n\nNow admittedly, it would probably be sufficient to take ExclusiveLock\nrather than AccessExclusiveLock when removing triggers, since we do not\nhave triggers ON SELECT. Right now though, we just take\nAccessExclusiveLock for most any DDL on a table. There was a patch\nsubmitted last fall to reduce DDL locking in some cases, but it hasn't\nbeen reworked to fix the problems that were pointed out (and I\ndisremember if it addressed DROP TRIGGER in particular anyway).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Mar 2009 15:11:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"slow\" queries " } ]
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> Well, that's certainly a sufficient reason, if perhaps not the only\n> reason. Dropping ts_defects_20090227 will require removal of FK triggers\n> on ts_transets, and we can't do that concurrently with transactions that\n> might be trying to fire those triggers.\n> \n> Now admittedly, it would probably be sufficient to take ExclusiveLock\n> rather than AccessExclusiveLock when removing triggers, since we do not\n> have triggers ON SELECT. Right now though, we just take\n> AccessExclusiveLock for most any DDL on a table. There was a patch\n> submitted last fall to reduce DDL locking in some cases, but it hasn't\n> been reworked to fix the problems that were pointed out (and I\n> disremember if it addressed DROP TRIGGER in particular anyway).\n> \n> regards, tom lane\n\nThanks for furthering my understanding of postgres (and probably other \nSQL servers as well). I can fix this problem easily.\n\nBrian\n\n\n", "msg_date": "Mon, 02 Mar 2009 12:36:33 -0800", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"slow\" queries" } ]
[ { "msg_contents": "Hi,\n\nWe are currently running postgres 8.2 and are evaluating the upgrade to 8.3.\n\nSome of our tests are indicating that postgresql 8.3 is actually degrading\nthe\nperformance of some of our queries by a factor of 10 or more. The queries\nin\nquestion are selects that are heavy on joins (~10 tables) with a lot of\ntimestamp-based conditions in where clauses. The tables and queries are\ntuned,\nthat is, there is no issue with the table structure, or missing indexes.\nThis\nis a side-by-side query performance measurement between 8.2 and 8.3 with an\nidentical dataset and schema.\n\n\n 8.2.12 8.3.3\n Time (ms) Time (ms)\n 1st 2nd 1st 2nd\n time time time time\n\nQuery 1 759 130 3294 1758\n\nattached you will find the explain analyze for this query. Any insight into\nthis issue would be very appreciated. Thanks.", "msg_date": "Mon, 2 Mar 2009 19:18:30 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres 8.3, four times slower queries?" }, { "msg_contents": "Aaron Guyon <[email protected]> writes:\n> We are currently running postgres 8.2 and are evaluating the upgrade to 8.3.\n> Some of our tests are indicating that postgresql 8.3 is actually degrading\n> the performance of some of our queries by a factor of 10 or more.\n\nAre you sure you are comparing apples to apples here? Same configure\noptions for the builds, same parameter values in postgresql.conf, both\ndatabases ANALYZEd, etc? And are they running on the same hardware?\n\nThe rowcount estimates seem to be a bit different, which might account\nfor the difference in plan choices, but I'm not convinced that that is\nthe reason for the slowness. The parts of the plans that are exactly\ncomparable show very significant speed differences, eg\n\n> -> Index Scan using idx_skin_day_part_id on skin t2 (cost=0.00..6.28 rows=1 width=24) (actual time=2.484..2.486 rows=1 loops=7)\n> Index Cond: (t2.day_part_id = t10.id)\n> Filter: (t2.active <> 0::numeric)\n> -> Index Scan using idx_skin_slot_skin_id on skin_slot t11 (cost=0.00..6.54 rows=92 width=25) (actual time=12.726..276.412 rows=94 loops=4)\n> Index Cond: (t11.skin_id = t2.id)\n> Filter: (t11.active <> 0::numeric)\n\n> -> Index Scan using idx_skin_day_part_id on skin t2 (cost=0.00..6.28 rows=1 width=30) (actual time=0.028..0.031 rows=1 loops=7)\n> Index Cond: (t2.day_part_id = t10.id)\n> Filter: (active <> 0::numeric)\n> -> Index Scan using idx_skin_slot_skin_id on skin_slot t11 (cost=0.00..6.85 rows=93 width=30) (actual time=0.053..1.382 rows=94 loops=4)\n> Index Cond: (t2.id = t11.skin_id)\n> Filter: (active <> 0::numeric)\n\nThere's nothing in 8.3 vs 8.2 to explain that, if they're configured\nthe same and running in the same environment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Mar 2009 22:23:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.3, four times slower queries? " }, { "msg_contents": "On Mon, Mar 2, 2009 at 10:23 PM, Tom Lane <[email protected]> wrote:\n\n> Are you sure you are comparing apples to apples here? Same configure\n> options for the builds, same parameter values in postgresql.conf, both\n> databases ANALYZEd, etc? And are they running on the same hardware?\n>\n\nThank you for looking at this Tom. Yes, we have made sure we are comparing\napples to apples here. 
The postgresql.confs are identical, as are the\nconfigure flags:\n--disable-debug --enable-shared --enable-thread-safety --with-perl\n--with-pam --without-docdir --without-tcl --without-python --without-krb5\n--without-ldap --without-bonjour --enable-integer-datetimes\n--prefix=/opt/postgresql\n\nHowever, the db was not analyzed. I'll attached the new explain analyze of\nthe queries with the db analyzed, but 8.2 still beats 8.3.\n\nThe tests are both being run on the same machine, a Quad-core AMD Opteron\nProcessor 2212\n(each with 1024 KB cache) and 4GB of RAM.\n\nI find it telling that the query plan differs so much between postgres 8.2.\nand\n8.3. For example, why does the 8.3. planner choose to perform so many seq\nscans? I know seq scans are faster than index scans for small tables, but\nthese tables have 60K+ rows... surely an index scan would have been a better\nchoice here? If you look at the 8.2. query plan, it is very clean in\ncomparison, index scans all the way through. I can't help but think the 8.3\nplanner is simply failing to make the right choices in our case. Another\nquestion would be, why are there so many hash joins in the 8.3 plan now?\nAll\nour indexes are btrees...\n\nAny light that can be shed on what going on with the 8.3. planner would be\nmuch\nappreciated. Thanks in advance.", "msg_date": "Tue, 3 Mar 2009 12:31:04 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "On Tue, 3 Mar 2009, Aaron Guyon wrote:\n\n> On Mon, Mar 2, 2009 at 10:23 PM, Tom Lane <[email protected]> wrote:\n>\n>> Are you sure you are comparing apples to apples here? Same configure\n>> options for the builds, same parameter values in postgresql.conf, both\n>> databases ANALYZEd, etc?
And are they running on the same hardware?\n>>\n>\n> Thank you for looking at this Tom. Yes, we have made sure we are comparing\n> apples to apples here. The postgresql.confs are identical, as are the\n> configure flags:\n> --disable-debug --enable-shared --enable-thread-safety --with-perl\n> --with-pam --without-docdir --without-tcl --without-python --without-krb5\n> --without-ldap --without-bonjour --enable-integer-datetimes\n> --prefix=/opt/postgresql\n>\n> However, the db was not analyzed. I'll attached the new explain analyze of\n> the queries with the db analyzed, but 8.2 still beats 8.3.\n>\n> The tests are both being run on the same machine, a Quad-core AMD Opteron\n> Processor 2212\n> (each with 1024 KB cache) and 4GB of RAM.\n>\n> I find it telling that the query plan differs so much between postgres 8.2.\n> and\n> 8.3. For example, why does the 8.3. planner choose to perform so many seq\n> scans? I know seq scans are faster than index scans for small tables, but\n> these tables have 60K+ rows... surely an index scan would have been a better\n> choice here? If you look at the 8.2. query plan, it is very clean in\n> comparison, index scans all the way through. I can't help but think the 8.3\n> planner is simply failing to make the right choices in our case. Another\n> question would be, why are there so many hash joins in the 8.3 plan now?\n> All\n> our indexes are btrees...\n>\n> Any light that can be shed on what going on with the 8.3. planner would be\n> much\n> appreciated. Thanks in advance.\n\nif you haven't done a vaccum analyse on either installation then postgres' \nidea of what sort of data is in the database is unpredictable, and as a \nresult it's not surprising that the two systems guess differently about \nwhat sort of plan is going to be most efficiant.\n\ntry doing vaccum analyse on both databases and see what the results are.\n\nDavid Lang\n", "msg_date": "Tue, 3 Mar 2009 09:38:28 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "On Tue, Mar 3, 2009 at 12:38 PM, <[email protected]> wrote:\n\n> if you haven't done a vaccum analyse on either installation then postgres'\n> idea of what sort of data is in the database is unpredictable, and as a\n> result it's not surprising that the two systems guess differently about what\n> sort of plan is going to be most efficiant.\n>\n> try doing vaccum analyse on both databases and see what the results are.\n>\n> David Lang\n>\n\nThese are the results with vacuum analyze:\n8.2.12: 624.366 ms\n8.3.3: 1273.601 ms", "msg_date": "Tue, 3 Mar 2009 12:52:27 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "Aaron Guyon <[email protected]> writes:\n> I find it telling that the query plan differs so much between postgres 8.2.\n\nWell, you haven't shown us either the query or the table definitions,\nso we're just guessing in the dark. However, the occurrences of\n\"::numeric\" in the query plan make me wonder whether all of your join\nkeys are numeric type. If so, the reason 8.2 didn't use any hash joins\nis that it couldn't --- it didn't have a hash method for numerics. 8.3\ndoes and therefore has more flexibility of plan choice. Comparisons on\nnumerics aren't terribly fast though (in either release). 
I wonder\nwhether you could change the key columns to int or bigint.\n\nI also find it a tad fishy that both releases are choosing *exactly* the\nsame join order when there is hardly anything else that is identical\nabout the plans --- given the cross-release variance in rowcount\nestimates etc I'd have expected at least one difference. Are you doing\nsomething to force the join order, like running with a small\njoin_collapse_limit setting? If so maybe you shouldn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Mar 2009 17:34:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.3, four times slower queries? " }, { "msg_contents": "On Tue, Mar 3, 2009 at 5:34 PM, Tom Lane <[email protected]> wrote:\n\n> Comparisons on\n> numerics aren't terribly fast though (in either release). I wonder\n> whether you could change the key columns to int or bigint.\n\n\nI changed the affected columns from numeric to integers and I was unable to\nget any performance gain:\n8.3.3: 1195 ms\n8.2.12: 611 ms\n\nI've attached the new query plans.\n\nAre you doing\n> something to force the join order, like running with a small\n> join_collapse_limit setting? If so maybe you shouldn't.\n>\n\nNo, we left the join_collapse_limit to the default 8. We tried a higher\nvalue, but there was no difference in performance.\n\nI'll post the query and the table descriptions in separate messages to the\nlist to avoid my mail from being rejected for exceeding the size limit :)", "msg_date": "Wed, 4 Mar 2009 18:20:49 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "Query and first part of the table descriptions", "msg_date": "Wed, 4 Mar 2009 18:42:00 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "2nd part of table descriptions", "msg_date": "Wed, 4 Mar 2009 18:42:26 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "On Wed, Mar 4, 2009 at 6:20 PM, Aaron Guyon <[email protected]> wrote:\n> On Tue, Mar 3, 2009 at 5:34 PM, Tom Lane <[email protected]> wrote:\n>>\n>> Comparisons on\n>> numerics aren't terribly fast though (in either release).  I wonder\n>> whether you could change the key columns to int or bigint.\n>\n> I changed the affected columns from numeric to integers and I was unable to\n> get any performance gain:\n> 8.3.3: 1195 ms\n> 8.2.12: 611 ms\n>\n> I've attached the new query plans.\n>\n>> Are you doing\n>> something to force the join order, like running with a small\n>> join_collapse_limit setting?  If so maybe you shouldn't.\n>\n> No, we left the join_collapse_limit to the default 8.  We tried a higher\n> value, but there was no difference in performance.\n>\n> I'll post the query and the table descriptions in separate messages to the\n> list to avoid my mail from being rejected for exceeding the size limit :)\n\nWell, it looks like the problem is that 8.3 is not using the index\nidx_bundle_content_bundle_id. But I don't know why that should be\nhappening, unless there's a problem with that index.\n\n...Robert\n", "msg_date": "Thu, 5 Mar 2009 08:21:56 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.3, four times slower queries?" 
}, { "msg_contents": "Matching query plans with numerics changed to integers.\n\nI sent the wrong query plans earlier\n\n8.3.3: 1195 ms\n8.2.12: 611 ms", "msg_date": "Thu, 5 Mar 2009 11:06:21 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "On Thu, Mar 5, 2009 at 10:20 AM, Kevin Grittner\n<[email protected]> wrote:\n>>>> Robert Haas <[email protected]> wrote:\n>> Well, it looks like the problem is that 8.3 is not using the index\n>> idx_bundle_content_bundle_id.  But I don't know why that should be\n>> happening, unless there's a problem with that index.\n>\n> I didn't see that index defined.  In fact, in the query shown, t8 is\n> the payment_amount table, but in plan, I don't see any references to\n> that table, and t8 is a table called bundle_content which is not\n> included.\n\nGood point. Now that you mention it, I notice that many of the tables\nand columns seem to have been renamed. It's pretty hard to make\nanything intelligible out of a schema that doesn't resemble the plan.\n\n...Robert\n", "msg_date": "Thu, 5 Mar 2009 11:56:08 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": ">>> Aaron Guyon <[email protected]> wrote: \n> 8.3.3: 1195 ms\n> 8.2.12: 611 ms\n \nCould you send the non-commented lines from the postgresql.conf files\nfrom both installations?\n \nIf feasible, update to the latest bug-fix version of 8.3.\n \nAlso, if you haven't already done so, try setting effective_cache_size\n= '3GB' and random_page_cost = 2 for the 8.3 database and restart to\nsee what kind of plan you get.\n \n-Kevin\n", "msg_date": "Thu, 05 Mar 2009 12:57:20 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "On Thu, Mar 5, 2009 at 1:57 PM, Kevin Grittner\n<[email protected]> wrote:\n>>>> Aaron Guyon <[email protected]> wrote:\n>> 8.3.3: 1195 ms\n>> 8.2.12: 611 ms\n>\n> Could you send the non-commented lines from the postgresql.conf files\n> from both installations?\n>\n> If feasible, update to the latest bug-fix version of 8.3.\n>\n> Also, if you haven't already done so, try setting effective_cache_size\n> = '3GB' and random_page_cost = 2 for the 8.3 database and restart to\n> see what kind of plan you get.\n\nI still there's a problem with that index, now it's called\nidx_payment_amount_payment_id. What do you get from this?\n\nselect * from pg_index where indexrelid =\n'idx_payment_amount_payment_id'::regclass;\n\nHave you tried:\n\nreindex table payment_amount\n\n...Robert\n", "msg_date": "Thu, 5 Mar 2009 14:15:27 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.3, four times slower queries?" }, { "msg_contents": "On Thu, Mar 5, 2009 at 12:58 PM, Joshua D. Drake <[email protected]>wrote:\n\n> What happens if you do this:\n>\n> SET cpu_tuple_cost TO '0.5';\n> SET cpu_index_tuple_cost TO '0.5';\n> EXPLAIN ANALYZE 8.3 query....\n>\n\nRight now, I'm getting very good results with the above. I'm still running\nadditional tests but I'll keep you guys updated. I've attached the new\nexplain analyze.", "msg_date": "Thu, 5 Mar 2009 14:58:09 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.3, four times slower queries?" 
}, { "msg_contents": "On Thu, Mar 5, 2009 at 12:58 PM, Joshua D. Drake <[email protected]>wrote:\n\n> What happens if you do this:\n>\n> SET cpu_tuple_cost TO '0.5';\n> SET cpu_index_tuple_cost TO '0.5';\n> EXPLAIN ANALYZE 8.3 query....\n>\n> Next try this:\n>\n> SET cpu_tuple_cost TO '0.5';\n> SET cpu_index_tuple_cost TO '0.5';\n> SET seq_page_cost TO '4.0';\n> SET random_page_cost TO '1.0';\n> EXPLAIN ANALYZE 8.3 query....\n>\n> And then this:\n>\n> SET cpu_tuple_cost TO '0.5';\n> SET cpu_index_tuple_cost TO '0.5';\n> SET seq_page_cost TO '4.0';\n> SET random_page_cost TO '1.0';\n> SET effective_cache_size TO '3000MB';\n> EXPLAIN ANALYZE 8.3 query....\n>\n\nThese three are pretty much the same in terms of performance. I stayed with\nthe first one (cpu_tuple_cost = 0.5 and cpu_index_tuple_cost = 0.5). As\nshown earlier, it gives a result similar or slightly better than 8.2.12 in\nterms of performance and response time. The explain analyze shows that the\nquery no longer causes postgreSQL to uses hashes, but indexes instead which\nboosted the performance of the query from ~1200 ms to ~600 ms.\n\nThank you everyone for all the help and feedback on this issue.\n\nOn Thu, Mar 5, 2009 at 12:58 PM, Joshua D. Drake <[email protected]> wrote:\nWhat happens if you do this:\n\nSET cpu_tuple_cost TO '0.5';\nSET cpu_index_tuple_cost TO '0.5';\nEXPLAIN ANALYZE 8.3 query....\n\nNext try this:\n\nSET cpu_tuple_cost TO '0.5';\nSET cpu_index_tuple_cost TO '0.5';\nSET seq_page_cost TO '4.0';\nSET random_page_cost TO '1.0';\nEXPLAIN ANALYZE 8.3 query....\n\nAnd then this:\n\nSET cpu_tuple_cost TO '0.5';\nSET cpu_index_tuple_cost TO '0.5';\nSET seq_page_cost TO '4.0';\nSET random_page_cost TO '1.0';\nSET effective_cache_size TO '3000MB';\nEXPLAIN ANALYZE 8.3 query....\nThese three are pretty much the same in terms of performance.  I stayed\nwith the first one (cpu_tuple_cost = 0.5 and cpu_index_tuple_cost =\n0.5).  As shown earlier, it gives a result similar or slightly better than 8.2.12 in\nterms of performance and response time.  The explain analyze shows that the query no longer causes postgreSQL to uses hashes, but indexes instead which boosted the performance of the query from ~1200 ms to ~600 ms.\nThank you everyone for all the help and feedback on this issue.", "msg_date": "Fri, 6 Mar 2009 10:50:22 -0500", "msg_from": "Aaron Guyon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.3, four times slower queries?" } ]
[ { "msg_contents": "\n\n\n\nHi,\n\nI have come across a weird bug (i think) in postgres 8.1.11 (possibly\nothers)\n\nWithout going into my table structure detail I will demonstrate the\nproblem by showing the select statements:\n\nThe following statement:\nSELECT count(*)\nFROM object o, object_version v, object_type ot \nwhere v.id = o.active_versionid and ot.id = o.object_typeid and\no.is_active ='t' and (o.is_archived = 'f' or o.is_archived is null) \nand o.is_published = 't' and ot.object_type_typeid <> 1 \n\nand exists (\nselect ova.object_versionid from attribute_value av,\nobject_version_attribute ova where ova.attribute_valueid=av.id and\nobject_versionid = v.id \nand (upper(av.text_val) like '%KIWI%') )\n\n\nruns fine and executes with success.\nBUT now this is the strange bit, if I have a space in my search term\nthen postgres hangs for an indefinite period: eg:\n\nSELECT count(*)\nFROM object o, object_version v, object_type ot \nwhere v.id = o.active_versionid and ot.id = o.object_typeid and\no.is_active ='t' and (o.is_archived = 'f' or o.is_archived is null) \nand o.is_published = 't' and ot.object_type_typeid <> 1 \n\nand exists (\nselect ova.object_versionid from attribute_value av,\nobject_version_attribute ova where ova.attribute_valueid=av.id and\nobject_versionid = v.id \nand (upper(av.text_val) like '%KIWI FRUIT%') )\n\n\nYet, if I modify the \"exists\" to an \"in\" all works well , as follows\n\nSELECT count(*)\nFROM object o, object_version v, object_type ot \nwhere v.id = o.active_versionid and ot.id = o.object_typeid and\no.is_active ='t' and (o.is_archived = 'f' or o.is_archived is null) \nand o.is_published = 't' and ot.object_type_typeid <> 1 \n\nand v.id in (\nselect ova.object_versionid from attribute_value av,\nobject_version_attribute ova where ova.attribute_valueid=av.id \nand (upper(av.text_val) like '%KIWI FRUIT%') )\n\n\nSo my question is why would a space character cause postgres to hang\nwhen using the exists clause????\n\nI have tested this on several different servers and mostly get the same\nresult (v8.08 and v8.1.11) , when I check the execution plan for either\nquery (space or no space) they are identical.\n\nAn upgrade to 8.3 fixes this, but I am still curious as to what could\ncause such bizarre behavior.\n\nThanks\nHans\n\n\n", "msg_date": "Tue, 03 Mar 2009 14:12:45 +0200", "msg_from": "Hans Liebenberg <[email protected]>", "msg_from_op": true, "msg_subject": "Substring search using \"exists\" with a space in the search term" } ]
[ { "msg_contents": "Hey,\n\nI have a table that links content together and it currently holds\nabout 17 mio records. Typical query is a join with a content table and\nlink table:\n\nnoovo-new=# explain analyze SELECT \"core_accessor\".\"id\",\n\"core_accessor\".\"content_type_id\",\n\"core_accessor\".\"object_id\", \"core_accessor\".\"ordering\",\n\"core_accessor\".\"label\", \"core_accessor\".\"date_posted\",\n\"core_accessor\".\"publish_state\", \"core_accessor\".\"nooximity_old\",\n\"core_accessor\".\"rising\", \"core_accessor\".\"nooximity\",\n\"core_accessor\".\"nooximity_old_date_posted\",\n\"core_accessor\".\"nooximity_date_posted\", \"core_accessor\".\"user_id\",\n\"core_accessor\".\"slot_id\", \"core_accessor\".\"slot_type_id\",\n\"core_accessor\".\"role\", \"core_base\".\"object_id\",\n\"core_base\".\"content_type_id\", \"core_base\".\"abstract\",\n\"core_base\".\"abstract_title\", \"core_base\".\"image\",\n \"core_base\".\"date_posted\", \"core_base\".\"date_modified\",\n\"core_base\".\"date_expires\", \"core_base\".\"publish_state\",\n \"core_base\".\"location\", \"core_base\".\"location_x\",\n\"core_base\".\"location_y\", \"core_base\".\"raw\", \"core_base\".\"author_id\",\n \"core_base\".\"excerpt\", \"core_base\".\"state_id\",\n\"core_base\".\"country_id\", \"core_base\".\"language\",\n\"core_base\".\"_identifier\",\n \"core_base\".\"slot_url\", \"core_base\".\"source_id\",\n\"core_base\".\"source_content_type_id\", \"core_base\".\"source_type\",\n \"core_base\".\"source_value\", \"core_base\".\"source_title\",\n\"core_base\".\"direct_to_source\", \"core_base\".\"comment_count\",\n \"core_base\".\"public\" FROM \"core_accessor\" INNER JOIN core_base AS\ncore_base ON core_base.content_type_id =\n core_accessor.content_type_id AND core_base.object_id =\ncore_accessor.object_id WHERE ((\"core_accessor\".\"slot_type_id\" = 119\n AND \"core_accessor\".\"slot_id\" = 472 AND \"core_accessor\".\"label\" = E''\nAND \"core_accessor\".\"publish_state\" >= 60 AND\n \"core_accessor\".\"role\" IN (0) AND \"core_accessor\".\"user_id\" = 0))\norder by core_accessor.date_posted, core_accessor.nooximity LIMIT 5\n;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=31930.65..31930.66 rows=5 width=860) (actual\ntime=711.924..711.927 rows=5 loops=1)\n -> Sort (cost=31930.65..31937.80 rows=2861 width=860) (actual\ntime=711.923..711.923 rows=5 loops=1)\n Sort Key: core_accessor.date_posted, core_accessor.nooximity\n Sort Method: top-N heapsort Memory: 31kB\n -> Nested Loop (cost=0.00..31883.13 rows=2861 width=860)\n(actual time=0.089..543.497 rows=68505 loops=1)\n -> Index Scan using core_accessor_fresh_idx on\ncore_accessor (cost=0.00..5460.07 rows=2970 width=92) (actual\ntime=0.068..54.921 rows=69312 loops=1)\n Index Cond: ((slot_id = 472) AND (slot_type_id =\n119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n(publish_state >= 60))\n -> Index Scan using core_base_pkey on core_base\n(cost=0.00..8.88 rows=1 width=768) (actual time=0.004..0.005 rows=1\nloops=69312)\n Index Cond: ((core_base.object_id =\ncore_accessor.object_id) AND (core_base.content_type_id =\ncore_accessor.content_type_id))\n Total runtime: 712.031 ms\n(10 rows)\n\nnoovo-new=# select * from pg_stat_user_tables where relname='core_accessor';\n relid | schemaname | relname | seq_scan | seq_tup_read |\nidx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del 
|\nn_tup_hot_upd | n_live_tup | n_dead_tup | last_vacuum\n | last_autovacuum | last_analyze | last_autoanalyze\n-------+------------+---------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+-------------------------------+-----------------+-------------------------------+------------------\n 51159 | public | core_accessor | 58 | 749773516 |\n13785608 | 149165183 | 9566 | 548 | 347 |\n 206 | 17144303 | 251 | 2009-03-03 07:02:19.733778-06 |\n | 2009-03-03 06:17:47.784268-06 |\n(1 row)\n\nnoovo-new=# \\d+ core_accessor;\n Table \"public.core_accessor\"\n Column | Type |\n Modifiers | Description\n---------------------------+--------------------------+------------------------------------------------------------+-------------\n id | bigint | not null\ndefault nextval('core_accessor_id_seq'::regclass) |\n flavor | character varying(32) |\n |\n content_type_id | integer | not null\n |\n object_id | integer | not null\n |\n publish_state | smallint | not null\n |\n date_posted | timestamp with time zone | not null\n |\n user_id | integer |\n |\n slot_id | integer |\n |\n slot_type_id | integer |\n |\n role | smallint |\n |\n ordering | integer |\n |\n author_id | integer |\n |\n nooximity_old | double precision | default 0.0\n |\n rising | double precision | default 0.0\n |\n label | text |\n |\n nooximity | double precision | not null\ndefault 1.0 |\n nooximity_old_date_posted | timestamp with time zone |\n |\n nooximity_date_posted | timestamp with time zone |\n |\nIndexes:\n \"portal_metainfo_pkey\" PRIMARY KEY, btree (id)\n \"portal_metainfo_unique_constr\" UNIQUE, btree (content_type_id,\nobject_id, user_id, slot_id, slot_type_id, role, label) CLUSTER\n \"core_accessor_date_idx\" btree (date_posted, nooximity)\n \"core_accessor_dated_idx\" btree (slot_id, slot_type_id, label,\nuser_id, role, publish_state, date_posted, nooximity)\n \"core_accessor_fresh_idx\" btree (slot_id, slot_type_id, label,\nuser_id, role, publish_state)\n \"core_accessor_popularity_idx\" btree (nooximity, date_posted)\nCheck constraints:\n \"portal_metainfo_object_id_check\" CHECK (object_id >= 0)\n \"portal_metainfo_owner_id_check\" CHECK (slot_id >= 0)\nForeign-key constraints:\n \"portal_metainfo_accessor_id_fkey\" FOREIGN KEY (user_id)\nREFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED\n \"portal_metainfo_content_type_id_fkey\" FOREIGN KEY\n(content_type_id) REFERENCES django_content_type(id) DEFERRABLE\nINITIALLY DEFERRED\n \"portal_metainfo_owner_type_id_fkey\" FOREIGN KEY (slot_type_id)\nREFERENCES django_content_type(id) DEFERRABLE INITIALLY DEFERRED\nHas OIDs: no\n\n\n\nAs far as I understand the explain, it fetches 68505 rows, matches\nthem with core_base and then tries to sort them? AFAIK it would\nprobably be much more effective to just find the records in accessor\nvia core_accessor_dated_idx and then lookup the core_base table? But\nfor some reason it doesn't want to?\n\nI ran analyze, vacuum and reindex but nothing helped. Queries just eat\nall the I/O and block. There is a huge difference between cached and\nnon-cached queries, like 50.000 to 50 ms.\n\nHelp! 
:)\n\n\nThanks, Sebastjan\n", "msg_date": "Tue, 3 Mar 2009 18:05:10 +0100", "msg_from": "Sebastjan Trepca <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with ordering (can't force query planner to use an index)" }, { "msg_contents": "On Tue, Mar 3, 2009 at 12:05 PM, Sebastjan Trepca <[email protected]> wrote:\n> Hey,\n>\n> I have a table that links content together and it currently holds\n> about 17 mio records. Typical query is a join with a content table and\n> link table:\n>\n> noovo-new=# explain analyze SELECT \"core_accessor\".\"id\",\n> \"core_accessor\".\"content_type_id\",\n> \"core_accessor\".\"object_id\", \"core_accessor\".\"ordering\",\n> \"core_accessor\".\"label\", \"core_accessor\".\"date_posted\",\n> \"core_accessor\".\"publish_state\", \"core_accessor\".\"nooximity_old\",\n> \"core_accessor\".\"rising\", \"core_accessor\".\"nooximity\",\n> \"core_accessor\".\"nooximity_old_date_posted\",\n> \"core_accessor\".\"nooximity_date_posted\", \"core_accessor\".\"user_id\",\n> \"core_accessor\".\"slot_id\", \"core_accessor\".\"slot_type_id\",\n> \"core_accessor\".\"role\", \"core_base\".\"object_id\",\n> \"core_base\".\"content_type_id\", \"core_base\".\"abstract\",\n> \"core_base\".\"abstract_title\", \"core_base\".\"image\",\n>  \"core_base\".\"date_posted\", \"core_base\".\"date_modified\",\n> \"core_base\".\"date_expires\", \"core_base\".\"publish_state\",\n>  \"core_base\".\"location\", \"core_base\".\"location_x\",\n> \"core_base\".\"location_y\", \"core_base\".\"raw\", \"core_base\".\"author_id\",\n>  \"core_base\".\"excerpt\", \"core_base\".\"state_id\",\n> \"core_base\".\"country_id\", \"core_base\".\"language\",\n> \"core_base\".\"_identifier\",\n>  \"core_base\".\"slot_url\", \"core_base\".\"source_id\",\n> \"core_base\".\"source_content_type_id\", \"core_base\".\"source_type\",\n>  \"core_base\".\"source_value\", \"core_base\".\"source_title\",\n> \"core_base\".\"direct_to_source\", \"core_base\".\"comment_count\",\n>  \"core_base\".\"public\" FROM \"core_accessor\" INNER JOIN core_base AS\n> core_base ON core_base.content_type_id =\n>  core_accessor.content_type_id AND core_base.object_id =\n> core_accessor.object_id WHERE ((\"core_accessor\".\"slot_type_id\" = 119\n>  AND \"core_accessor\".\"slot_id\" = 472 AND \"core_accessor\".\"label\" = E''\n> AND \"core_accessor\".\"publish_state\" >= 60 AND\n>  \"core_accessor\".\"role\" IN (0) AND \"core_accessor\".\"user_id\" = 0))\n> order by core_accessor.date_posted, core_accessor.nooximity LIMIT 5\n> ;\n>\n>      QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=31930.65..31930.66 rows=5 width=860) (actual\n> time=711.924..711.927 rows=5 loops=1)\n>   ->  Sort  (cost=31930.65..31937.80 rows=2861 width=860) (actual\n> time=711.923..711.923 rows=5 loops=1)\n>         Sort Key: core_accessor.date_posted, core_accessor.nooximity\n>         Sort Method:  top-N heapsort  Memory: 31kB\n>         ->  Nested Loop  (cost=0.00..31883.13 rows=2861 width=860)\n> (actual time=0.089..543.497 rows=68505 loops=1)\n>               ->  Index Scan using core_accessor_fresh_idx on\n> core_accessor  (cost=0.00..5460.07 rows=2970 width=92) (actual\n> time=0.068..54.921 rows=69312 loops=1)\n>                     Index Cond: ((slot_id = 472) AND (slot_type_id =\n> 119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n> (publish_state >= 60))\n>               ->  Index 
Scan using core_base_pkey on core_base\n> (cost=0.00..8.88 rows=1 width=768) (actual time=0.004..0.005 rows=1\n> loops=69312)\n>                     Index Cond: ((core_base.object_id =\n> core_accessor.object_id) AND (core_base.content_type_id =\n> core_accessor.content_type_id))\n>  Total runtime: 712.031 ms\n> (10 rows)\n>\n> noovo-new=# select * from pg_stat_user_tables where relname='core_accessor';\n>  relid | schemaname |    relname    | seq_scan | seq_tup_read |\n> idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del |\n> n_tup_hot_upd | n_live_tup | n_dead_tup |          last_vacuum\n>  | last_autovacuum |         last_analyze          | last_autoanalyze\n> -------+------------+---------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+-------------------------------+-----------------+-------------------------------+------------------\n>  51159 | public     | core_accessor |       58 |    749773516 |\n> 13785608 |     149165183 |      9566 |       548 |       347 |\n>  206 |   17144303 |        251 | 2009-03-03 07:02:19.733778-06 |\n>           | 2009-03-03 06:17:47.784268-06 |\n> (1 row)\n>\n> noovo-new=# \\d+ core_accessor;\n>                                                  Table \"public.core_accessor\"\n>          Column           |           Type           |\n>         Modifiers                          | Description\n> ---------------------------+--------------------------+------------------------------------------------------------+-------------\n>  id                        | bigint                   | not null\n> default nextval('core_accessor_id_seq'::regclass) |\n>  flavor                    | character varying(32)    |\n>                                            |\n>  content_type_id           | integer                  | not null\n>                                            |\n>  object_id                 | integer                  | not null\n>                                            |\n>  publish_state             | smallint                 | not null\n>                                            |\n>  date_posted               | timestamp with time zone | not null\n>                                            |\n>  user_id                   | integer                  |\n>                                            |\n>  slot_id                   | integer                  |\n>                                            |\n>  slot_type_id              | integer                  |\n>                                            |\n>  role                      | smallint                 |\n>                                            |\n>  ordering                  | integer                  |\n>                                            |\n>  author_id                 | integer                  |\n>                                            |\n>  nooximity_old             | double precision         | default 0.0\n>                                            |\n>  rising                    | double precision         | default 0.0\n>                                            |\n>  label                     | text                     |\n>                                            |\n>  nooximity                 | double precision         | not null\n> default 1.0                                       |\n>  nooximity_old_date_posted | timestamp with time zone |\n>                                            |\n>  nooximity_date_posted     | timestamp with time zone |\n> 
                                           |\n> Indexes:\n>    \"portal_metainfo_pkey\" PRIMARY KEY, btree (id)\n>    \"portal_metainfo_unique_constr\" UNIQUE, btree (content_type_id,\n> object_id, user_id, slot_id, slot_type_id, role, label) CLUSTER\n>    \"core_accessor_date_idx\" btree (date_posted, nooximity)\n>    \"core_accessor_dated_idx\" btree (slot_id, slot_type_id, label,\n> user_id, role, publish_state, date_posted, nooximity)\n>    \"core_accessor_fresh_idx\" btree (slot_id, slot_type_id, label,\n> user_id, role, publish_state)\n>    \"core_accessor_popularity_idx\" btree (nooximity, date_posted)\n> Check constraints:\n>    \"portal_metainfo_object_id_check\" CHECK (object_id >= 0)\n>    \"portal_metainfo_owner_id_check\" CHECK (slot_id >= 0)\n> Foreign-key constraints:\n>    \"portal_metainfo_accessor_id_fkey\" FOREIGN KEY (user_id)\n> REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED\n>    \"portal_metainfo_content_type_id_fkey\" FOREIGN KEY\n> (content_type_id) REFERENCES django_content_type(id) DEFERRABLE\n> INITIALLY DEFERRED\n>    \"portal_metainfo_owner_type_id_fkey\" FOREIGN KEY (slot_type_id)\n> REFERENCES django_content_type(id) DEFERRABLE INITIALLY DEFERRED\n> Has OIDs: no\n>\n>\n>\n> As far as I understand the explain, it fetches 68505 rows, matches\n> them with core_base and then tries to sort them? AFAIK it would\n> probably be much more effective to just find the records in accessor\n> via core_accessor_dated_idx and then lookup the core_base table? But\n> for some reason it doesn't want to?\n>\n> I ran analyze, vacuum and reindex but nothing helped. Queries just eat\n> all the I/O and block. There is a huge difference between cached and\n> non-cached queries, like 50.000 to 50 ms.\n>\n> Help! :)\n\nPlease send the output of EXPLAIN ANALYZE for this query.\n\n...Robert\n", "msg_date": "Tue, 3 Mar 2009 12:12:48 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with ordering (can't force query planner to\n\tuse an index)" }, { "msg_contents": "But it's already attached in the first mail or am I missing something?\n\nIf you don't see it, check this: http://pastebin.com/d71b996d0\n\nSebastjan\n\n\n\nOn Tue, Mar 3, 2009 at 6:12 PM, Robert Haas <[email protected]> wrote:\n> On Tue, Mar 3, 2009 at 12:05 PM, Sebastjan Trepca <[email protected]> wrote:\n>> Hey,\n>>\n>> I have a table that links content together and it currently holds\n>> about 17 mio records. 
Typical query is a join with a content table and\n>> link table:\n>>\n>> noovo-new=# explain analyze SELECT \"core_accessor\".\"id\",\n>> \"core_accessor\".\"content_type_id\",\n>> \"core_accessor\".\"object_id\", \"core_accessor\".\"ordering\",\n>> \"core_accessor\".\"label\", \"core_accessor\".\"date_posted\",\n>> \"core_accessor\".\"publish_state\", \"core_accessor\".\"nooximity_old\",\n>> \"core_accessor\".\"rising\", \"core_accessor\".\"nooximity\",\n>> \"core_accessor\".\"nooximity_old_date_posted\",\n>> \"core_accessor\".\"nooximity_date_posted\", \"core_accessor\".\"user_id\",\n>> \"core_accessor\".\"slot_id\", \"core_accessor\".\"slot_type_id\",\n>> \"core_accessor\".\"role\", \"core_base\".\"object_id\",\n>> \"core_base\".\"content_type_id\", \"core_base\".\"abstract\",\n>> \"core_base\".\"abstract_title\", \"core_base\".\"image\",\n>>  \"core_base\".\"date_posted\", \"core_base\".\"date_modified\",\n>> \"core_base\".\"date_expires\", \"core_base\".\"publish_state\",\n>>  \"core_base\".\"location\", \"core_base\".\"location_x\",\n>> \"core_base\".\"location_y\", \"core_base\".\"raw\", \"core_base\".\"author_id\",\n>>  \"core_base\".\"excerpt\", \"core_base\".\"state_id\",\n>> \"core_base\".\"country_id\", \"core_base\".\"language\",\n>> \"core_base\".\"_identifier\",\n>>  \"core_base\".\"slot_url\", \"core_base\".\"source_id\",\n>> \"core_base\".\"source_content_type_id\", \"core_base\".\"source_type\",\n>>  \"core_base\".\"source_value\", \"core_base\".\"source_title\",\n>> \"core_base\".\"direct_to_source\", \"core_base\".\"comment_count\",\n>>  \"core_base\".\"public\" FROM \"core_accessor\" INNER JOIN core_base AS\n>> core_base ON core_base.content_type_id =\n>>  core_accessor.content_type_id AND core_base.object_id =\n>> core_accessor.object_id WHERE ((\"core_accessor\".\"slot_type_id\" = 119\n>>  AND \"core_accessor\".\"slot_id\" = 472 AND \"core_accessor\".\"label\" = E''\n>> AND \"core_accessor\".\"publish_state\" >= 60 AND\n>>  \"core_accessor\".\"role\" IN (0) AND \"core_accessor\".\"user_id\" = 0))\n>> order by core_accessor.date_posted, core_accessor.nooximity LIMIT 5\n>> ;\n>>\n>>      QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>  Limit  (cost=31930.65..31930.66 rows=5 width=860) (actual\n>> time=711.924..711.927 rows=5 loops=1)\n>>   ->  Sort  (cost=31930.65..31937.80 rows=2861 width=860) (actual\n>> time=711.923..711.923 rows=5 loops=1)\n>>         Sort Key: core_accessor.date_posted, core_accessor.nooximity\n>>         Sort Method:  top-N heapsort  Memory: 31kB\n>>         ->  Nested Loop  (cost=0.00..31883.13 rows=2861 width=860)\n>> (actual time=0.089..543.497 rows=68505 loops=1)\n>>               ->  Index Scan using core_accessor_fresh_idx on\n>> core_accessor  (cost=0.00..5460.07 rows=2970 width=92) (actual\n>> time=0.068..54.921 rows=69312 loops=1)\n>>                     Index Cond: ((slot_id = 472) AND (slot_type_id =\n>> 119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n>> (publish_state >= 60))\n>>               ->  Index Scan using core_base_pkey on core_base\n>> (cost=0.00..8.88 rows=1 width=768) (actual time=0.004..0.005 rows=1\n>> loops=69312)\n>>                     Index Cond: ((core_base.object_id =\n>> core_accessor.object_id) AND (core_base.content_type_id =\n>> core_accessor.content_type_id))\n>>  Total runtime: 712.031 ms\n>> (10 rows)\n>>\n>> noovo-new=# select * from pg_stat_user_tables 
where relname='core_accessor';\n>>  relid | schemaname |    relname    | seq_scan | seq_tup_read |\n>> idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del |\n>> n_tup_hot_upd | n_live_tup | n_dead_tup |          last_vacuum\n>>  | last_autovacuum |         last_analyze          | last_autoanalyze\n>> -------+------------+---------------+----------+--------------+----------+---------------+-----------+-----------+-----------+---------------+------------+------------+-------------------------------+-----------------+-------------------------------+------------------\n>>  51159 | public     | core_accessor |       58 |    749773516 |\n>> 13785608 |     149165183 |      9566 |       548 |       347 |\n>>  206 |   17144303 |        251 | 2009-03-03 07:02:19.733778-06 |\n>>           | 2009-03-03 06:17:47.784268-06 |\n>> (1 row)\n>>\n>> noovo-new=# \\d+ core_accessor;\n>>                                                  Table \"public.core_accessor\"\n>>          Column           |           Type           |\n>>         Modifiers                          | Description\n>> ---------------------------+--------------------------+------------------------------------------------------------+-------------\n>>  id                        | bigint                   | not null\n>> default nextval('core_accessor_id_seq'::regclass) |\n>>  flavor                    | character varying(32)    |\n>>                                            |\n>>  content_type_id           | integer                  | not null\n>>                                            |\n>>  object_id                 | integer                  | not null\n>>                                            |\n>>  publish_state             | smallint                 | not null\n>>                                            |\n>>  date_posted               | timestamp with time zone | not null\n>>                                            |\n>>  user_id                   | integer                  |\n>>                                            |\n>>  slot_id                   | integer                  |\n>>                                            |\n>>  slot_type_id              | integer                  |\n>>                                            |\n>>  role                      | smallint                 |\n>>                                            |\n>>  ordering                  | integer                  |\n>>                                            |\n>>  author_id                 | integer                  |\n>>                                            |\n>>  nooximity_old             | double precision         | default 0.0\n>>                                            |\n>>  rising                    | double precision         | default 0.0\n>>                                            |\n>>  label                     | text                     |\n>>                                            |\n>>  nooximity                 | double precision         | not null\n>> default 1.0                                       |\n>>  nooximity_old_date_posted | timestamp with time zone |\n>>                                            |\n>>  nooximity_date_posted     | timestamp with time zone |\n>>                                            |\n>> Indexes:\n>>    \"portal_metainfo_pkey\" PRIMARY KEY, btree (id)\n>>    \"portal_metainfo_unique_constr\" UNIQUE, btree (content_type_id,\n>> object_id, user_id, slot_id, slot_type_id, role, label) CLUSTER\n>>    \"core_accessor_date_idx\" btree (date_posted, nooximity)\n>> 
   \"core_accessor_dated_idx\" btree (slot_id, slot_type_id, label,\n>> user_id, role, publish_state, date_posted, nooximity)\n>>    \"core_accessor_fresh_idx\" btree (slot_id, slot_type_id, label,\n>> user_id, role, publish_state)\n>>    \"core_accessor_popularity_idx\" btree (nooximity, date_posted)\n>> Check constraints:\n>>    \"portal_metainfo_object_id_check\" CHECK (object_id >= 0)\n>>    \"portal_metainfo_owner_id_check\" CHECK (slot_id >= 0)\n>> Foreign-key constraints:\n>>    \"portal_metainfo_accessor_id_fkey\" FOREIGN KEY (user_id)\n>> REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED\n>>    \"portal_metainfo_content_type_id_fkey\" FOREIGN KEY\n>> (content_type_id) REFERENCES django_content_type(id) DEFERRABLE\n>> INITIALLY DEFERRED\n>>    \"portal_metainfo_owner_type_id_fkey\" FOREIGN KEY (slot_type_id)\n>> REFERENCES django_content_type(id) DEFERRABLE INITIALLY DEFERRED\n>> Has OIDs: no\n>>\n>>\n>>\n>> As far as I understand the explain, it fetches 68505 rows, matches\n>> them with core_base and then tries to sort them? AFAIK it would\n>> probably be much more effective to just find the records in accessor\n>> via core_accessor_dated_idx and then lookup the core_base table? But\n>> for some reason it doesn't want to?\n>>\n>> I ran analyze, vacuum and reindex but nothing helped. Queries just eat\n>> all the I/O and block. There is a huge difference between cached and\n>> non-cached queries, like 50.000 to 50 ms.\n>>\n>> Help! :)\n>\n> Please send the output of EXPLAIN ANALYZE for this query.\n>\n> ...Robert\n>\n", "msg_date": "Tue, 3 Mar 2009 18:20:57 +0100", "msg_from": "Sebastjan Trepca <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with ordering (can't force query planner to\n\tuse an index)" }, { "msg_contents": "On Tue, Mar 3, 2009 at 12:05 PM, Sebastjan Trepca <[email protected]> wrote:\n\n>         ->  Nested Loop  (cost=0.00..31883.13 rows=2861 width=860)\n> (actual time=0.089..543.497 rows=68505 loops=1)\n>               ->  Index Scan using core_accessor_fresh_idx on\n> core_accessor  (cost=0.00..5460.07 rows=2970 width=92) (actual\n> time=0.068..54.921 rows=69312 loops=1)\n>                     Index Cond: ((slot_id = 472) AND (slot_type_id =\n> 119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n> (publish_state >= 60))\n\nThat index scan on core_accessor_fresh_idx has a pretty big disparity\nbetween what the planer expects to get (2970 rows) and what it\nactually gets (69312 rows). You should try increasing the statistics\ntarget if you haven't, then re-analyze and try the query again to see\nif the planner picks something better. The default of 10 is pretty\nsmall- try 100, or higher.\n\n\n\n-- \n- David T. 
Wilson\[email protected]\n", "msg_date": "Tue, 3 Mar 2009 12:34:55 -0500", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with ordering (can't force query planner to\n\tuse an index)" }, { "msg_contents": "Set statistics to 1000, reanalyzed and got exactly same results:\n\n\nnoovo-new=# explain analyze SELECT \"core_accessor\".\"id\",\n\"core_accessor\".\"content_type_id\",\n\"core_accessor\".\"object_id\", \"core_accessor\".\"ordering\",\n\"core_accessor\".\"label\", \"core_accessor\".\"date_posted\",\n\"core_accessor\".\"publish_state\", \"core_accessor\".\"nooximity_old\",\n\"core_accessor\".\"rising\", \"core_accessor\".\"nooximity\",\n\"core_accessor\".\"nooximity_old_date_posted\",\n\"core_accessor\".\"nooximity_date_posted\", \"core_accessor\".\"user_id\",\n\"core_accessor\".\"slot_id\", \"core_accessor\".\"slot_type_id\",\n\"core_accessor\".\"role\", \"core_base\".\"object_id\",\n\"core_base\".\"content_type_id\", \"core_base\".\"abstract\",\n\"core_base\".\"abstract_title\", \"core_base\".\"image\",\n \"core_base\".\"date_posted\", \"core_base\".\"date_modified\",\n\"core_base\".\"date_expires\", \"core_base\".\"publish_state\",\n \"core_base\".\"location\", \"core_base\".\"location_x\",\n\"core_base\".\"location_y\", \"core_base\".\"raw\", \"core_base\".\"author_id\",\n \"core_base\".\"excerpt\", \"core_base\".\"state_id\",\n\"core_base\".\"country_id\", \"core_base\".\"language\",\n\"core_base\".\"_identifier\",\n \"core_base\".\"slot_url\", \"core_base\".\"source_id\",\n\"core_base\".\"source_content_type_id\", \"core_base\".\"source_type\",\n \"core_base\".\"source_value\", \"core_base\".\"source_title\",\n\"core_base\".\"direct_to_source\", \"core_base\".\"comment_count\",\n \"core_base\".\"public\" FROM \"core_accessor\" INNER JOIN core_base AS\ncore_base ON core_base.content_type_id =\n core_accessor.content_type_id AND core_base.object_id =\ncore_accessor.object_id WHERE ((\"core_accessor\".\"slot_type_id\" = 119\n AND \"core_accessor\".\"slot_id\" = 472 AND \"core_accessor\".\"label\" = E''\nAND \"core_accessor\".\"publish_state\" >= 60 AND\n \"core_accessor\".\"role\" IN (0) AND \"core_accessor\".\"user_id\" = 0))\norder by core_accessor.date_posted, core_accessor.nooximity LIMIT 5\n;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=31716.13..31716.14 rows=5 width=860) (actual\ntime=711.340..711.343 rows=5 loops=1)\n -> Sort (cost=31716.13..31722.19 rows=2424 width=860) (actual\ntime=711.339..711.339 rows=5 loops=1)\n Sort Key: core_accessor.date_posted, core_accessor.nooximity\n Sort Method: top-N heapsort Memory: 31kB\n -> Nested Loop (cost=0.00..31675.87 rows=2424 width=860)\n(actual time=0.076..544.039 rows=68505 loops=1)\n -> Index Scan using core_accessor_fresh_idx on\ncore_accessor (cost=0.00..9234.77 rows=2511 width=92) (actual\ntime=0.058..55.225 rows=69312 loops=1)\n Index Cond: ((slot_id = 472) AND (slot_type_id =\n119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n(publish_state >= 60))\n -> Index Scan using core_base_pkey on core_base\n(cost=0.00..8.92 rows=1 width=768) (actual time=0.005..0.005 rows=1\nloops=69312)\n Index Cond: ((core_base.object_id =\ncore_accessor.object_id) AND (core_base.content_type_id =\ncore_accessor.content_type_id))\n Total runtime: 711.443 ms\n(10 rows)\n\n\n\nThis is how I did it:\n\nnoovo-new=# alter table 
core_accessor alter column slot_id set statistics 1000;\nALTER TABLE\nnoovo-new=# alter table core_accessor alter column slot_type_id set\nstatistics 1000;\nALTER TABLE\nnoovo-new=# alter table core_accessor alter column label set statistics 1000;\nALTER TABLE\nnoovo-new=# alter table core_accessor alter column user_id set statistics 1000;\nALTER TABLE\nnoovo-new=# alter table core_accessor alter column role set statistics 1000;\nALTER TABLE\nnoovo-new=# alter table core_accessor alter column publish_state set\nstatistics 1000;\nALTER TABLE\nnoovo-new=# analyze core_accessor;\nANALYZE\n\n\n\nSebastjan\n\n\n\nOn Tue, Mar 3, 2009 at 6:34 PM, David Wilson <[email protected]> wrote:\n> On Tue, Mar 3, 2009 at 12:05 PM, Sebastjan Trepca <[email protected]> wrote:\n>\n>>         ->  Nested Loop  (cost=0.00..31883.13 rows=2861 width=860)\n>> (actual time=0.089..543.497 rows=68505 loops=1)\n>>               ->  Index Scan using core_accessor_fresh_idx on\n>> core_accessor  (cost=0.00..5460.07 rows=2970 width=92) (actual\n>> time=0.068..54.921 rows=69312 loops=1)\n>>                     Index Cond: ((slot_id = 472) AND (slot_type_id =\n>> 119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n>> (publish_state >= 60))\n>\n> That index scan on core_accessor_fresh_idx has a pretty big disparity\n> between what the planer expects to get (2970 rows) and what it\n> actually gets (69312 rows). You should try increasing the statistics\n> target if you haven't, then re-analyze and try the query again to see\n> if the planner picks something better. The default of 10 is pretty\n> small- try 100, or higher.\n>\n>\n>\n> --\n> - David T. Wilson\n> [email protected]\n>\n", "msg_date": "Tue, 3 Mar 2009 18:40:26 +0100", "msg_from": "Sebastjan Trepca <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with ordering (can't force query planner to\n\tuse an index)" }, { "msg_contents": "On Tue, Mar 3, 2009 at 12:20 PM, Sebastjan Trepca <[email protected]> wrote:\n> But it's already attached in the first mail or am I missing something?\n>\n> If you don't see it, check this: http://pastebin.com/d71b996d0\n\nWoops, sorry, I thought you had sent plain EXPLAIN. I see it now.\n\nThe lowest level at which I see a problem is here:\n\n-> Index Scan using core_accessor_fresh_idx on core_accessor\n(cost=0.00..5460.07 rows=2970 width=92) (actual time=0.068..54.921\nrows=69312 loops=1)\n Index Cond: ((slot_id = 472) AND (slot_type_id = 119) AND (label =\n''::text) AND (user_id = 0) AND (role = 0) AND (publish_state >= 60))\n\nFor some reason it expect 2970 rows but gets 69312.\n\nA good place to start is to change your default_statistics_target\nvalue to 100 in postgresql.conf, restart postgresql, and re-ANALYZE.\n\n...Robert\n", "msg_date": "Tue, 3 Mar 2009 12:40:44 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with ordering (can't force query planner to\n\tuse an index)" }, { "msg_contents": "Still the same :/\n\nI raised the default_statistics_target to 600 (it was already 100). 
I\nthen restarted pg, ran analyze through all tables and yet there is not\neffect.\nThis is the output for core_accessor:\nINFO: analyzing \"public.core_accessor\"\nINFO: \"core_accessor\": scanned 291230 of 291230 pages, containing\n17144315 live rows and 0 dead rows; 300000 rows in sample, 17144315\nestimated total rows\n\nIt thinks there are even less rows in the set:\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=30816.49..30816.50 rows=5 width=855) (actual\ntime=683.907..683.910 rows=5 loops=1)\n -> Sort (cost=30816.49..30822.29 rows=2321 width=855) (actual\ntime=683.906..683.907 rows=5 loops=1)\n Sort Key: core_accessor.date_posted, core_accessor.nooximity\n Sort Method: top-N heapsort Memory: 31kB\n -> Nested Loop (cost=0.00..30777.94 rows=2321 width=855)\n(actual time=0.072..517.970 rows=68505 loops=1)\n -> Index Scan using core_accessor_fresh_idx on\ncore_accessor (cost=0.00..8955.44 rows=2440 width=92) (actual\ntime=0.056..53.107 rows=69312 loops=1)\n Index Cond: ((slot_id = 472) AND (slot_type_id =\n119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n(publish_state >= 60))\n -> Index Scan using core_base_pkey on core_base\n(cost=0.00..8.93 rows=1 width=763) (actual time=0.004..0.005 rows=1\nloops=69312)\n Index Cond: ((core_base.object_id =\ncore_accessor.object_id) AND (core_base.content_type_id =\ncore_accessor.content_type_id))\n Total runtime: 684.015 ms\n(10 rows)\n\n\n\n\n\nSebastjan\n\n\n\nOn Tue, Mar 3, 2009 at 6:40 PM, Robert Haas <[email protected]> wrote:\n> On Tue, Mar 3, 2009 at 12:20 PM, Sebastjan Trepca <[email protected]> wrote:\n>> But it's already attached in the first mail or am I missing something?\n>>\n>> If you don't see it, check this: http://pastebin.com/d71b996d0\n>\n> Woops, sorry, I thought you had sent plain EXPLAIN.  
I see it now.\n>\n> The lowest level at which I see a problem is here:\n>\n> ->  Index Scan using core_accessor_fresh_idx on core_accessor\n> (cost=0.00..5460.07 rows=2970 width=92) (actual time=0.068..54.921\n> rows=69312 loops=1)\n>    Index Cond: ((slot_id = 472) AND (slot_type_id = 119) AND (label =\n> ''::text) AND (user_id = 0) AND (role = 0) AND (publish_state >= 60))\n>\n> For some reason it expect 2970 rows but gets 69312.\n>\n> A good place to start is to change your default_statistics_target\n> value to 100 in postgresql.conf, restart postgresql, and re-ANALYZE.\n>\n> ...Robert\n>\n", "msg_date": "Tue, 3 Mar 2009 20:05:53 +0100", "msg_from": "Sebastjan Trepca <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with ordering (can't force query planner to\n\tuse an index)" }, { "msg_contents": "Maybe this is useful, I removed the JOIN and it uses other\nindex(core_accessor_date_idx indexes (date_posted, nooximity)), but\nits still hardly any better:\n\nnoovo-new=# explain analyze SELECT * FROM \"core_accessor\" WHERE\n((\"core_accessor\".\"slot_type_id\" = 119\nnoovo-new(# AND \"core_accessor\".\"slot_id\" = 472 AND\n\"core_accessor\".\"label\" = E'' AND \"core_accessor\".\"publish_state\" >=\n60 AND\nnoovo-new(# \"core_accessor\".\"role\" IN (0) AND\n\"core_accessor\".\"user_id\" = 0)) ORDER BY \"core_accessor\".\"date_posted\"\nDESC, \"core_accessor\".\"nooximity\" DESC LIMIT 5\nnoovo-new-# ;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3709.56 rows=5 width=178) (actual\ntime=4593.867..4597.587 rows=5 loops=1)\n -> Index Scan Backward using core_accessor_date_idx on\ncore_accessor (cost=0.00..1810265.67 rows=2440 width=178) (actual\ntime=4593.866..4597.583 rows=5 loops=1)\n Filter: ((publish_state >= 60) AND (slot_type_id = 119) AND\n(slot_id = 472) AND (label = ''::text) AND (role = 0) AND (user_id =\n0))\n Total runtime: 4597.632 ms\n(4 rows)\n\n\nSebastjan\n\n\n\nOn Tue, Mar 3, 2009 at 8:05 PM, Sebastjan Trepca <[email protected]> wrote:\n> Still the same :/\n>\n> I raised the default_statistics_target to 600 (it was already 100). 
I\n> then restarted pg, ran analyze through all tables and yet there is not\n> effect.\n> This is the output for core_accessor:\n> INFO:  analyzing \"public.core_accessor\"\n> INFO:  \"core_accessor\": scanned 291230 of 291230 pages, containing\n> 17144315 live rows and 0 dead rows; 300000 rows in sample, 17144315\n> estimated total rows\n>\n> It thinks there are even less rows in the set:\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=30816.49..30816.50 rows=5 width=855) (actual\n> time=683.907..683.910 rows=5 loops=1)\n>   ->  Sort  (cost=30816.49..30822.29 rows=2321 width=855) (actual\n> time=683.906..683.907 rows=5 loops=1)\n>         Sort Key: core_accessor.date_posted, core_accessor.nooximity\n>         Sort Method:  top-N heapsort  Memory: 31kB\n>         ->  Nested Loop  (cost=0.00..30777.94 rows=2321 width=855)\n> (actual time=0.072..517.970 rows=68505 loops=1)\n>               ->  Index Scan using core_accessor_fresh_idx on\n> core_accessor  (cost=0.00..8955.44 rows=2440 width=92) (actual\n> time=0.056..53.107 rows=69312 loops=1)\n>                     Index Cond: ((slot_id = 472) AND (slot_type_id =\n> 119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n> (publish_state >= 60))\n>               ->  Index Scan using core_base_pkey on core_base\n> (cost=0.00..8.93 rows=1 width=763) (actual time=0.004..0.005 rows=1\n> loops=69312)\n>                     Index Cond: ((core_base.object_id =\n> core_accessor.object_id) AND (core_base.content_type_id =\n> core_accessor.content_type_id))\n>  Total runtime: 684.015 ms\n> (10 rows)\n>\n>\n>\n>\n>\n> Sebastjan\n>\n>\n>\n> On Tue, Mar 3, 2009 at 6:40 PM, Robert Haas <[email protected]> wrote:\n>> On Tue, Mar 3, 2009 at 12:20 PM, Sebastjan Trepca <[email protected]> wrote:\n>>> But it's already attached in the first mail or am I missing something?\n>>>\n>>> If you don't see it, check this: http://pastebin.com/d71b996d0\n>>\n>> Woops, sorry, I thought you had sent plain EXPLAIN.  
I see it now.\n>>\n>> The lowest level at which I see a problem is here:\n>>\n>> ->  Index Scan using core_accessor_fresh_idx on core_accessor\n>> (cost=0.00..5460.07 rows=2970 width=92) (actual time=0.068..54.921\n>> rows=69312 loops=1)\n>>    Index Cond: ((slot_id = 472) AND (slot_type_id = 119) AND (label =\n>> ''::text) AND (user_id = 0) AND (role = 0) AND (publish_state >= 60))\n>>\n>> For some reason it expect 2970 rows but gets 69312.\n>>\n>> A good place to start is to change your default_statistics_target\n>> value to 100 in postgresql.conf, restart postgresql, and re-ANALYZE.\n>>\n>> ...Robert\n>>\n>\n", "msg_date": "Tue, 3 Mar 2009 20:16:12 +0100", "msg_from": "Sebastjan Trepca <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with ordering (can't force query planner to\n\tuse an index)" }, { "msg_contents": "On Tue, Mar 3, 2009 at 2:16 PM, Sebastjan Trepca <[email protected]> wrote:\n> Maybe this is useful, I removed the JOIN and it uses other\n> index(core_accessor_date_idx indexes (date_posted, nooximity)), but\n> its still hardly any better:\n>\n> noovo-new=# explain analyze SELECT * FROM \"core_accessor\" WHERE\n> ((\"core_accessor\".\"slot_type_id\" = 119\n> noovo-new(#  AND \"core_accessor\".\"slot_id\" = 472 AND\n> \"core_accessor\".\"label\" = E'' AND \"core_accessor\".\"publish_state\" >=\n> 60 AND\n> noovo-new(#  \"core_accessor\".\"role\" IN (0) AND\n> \"core_accessor\".\"user_id\" = 0)) ORDER BY \"core_accessor\".\"date_posted\"\n> DESC, \"core_accessor\".\"nooximity\" DESC LIMIT 5\n> noovo-new-# ;\n>\n>       QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=0.00..3709.56 rows=5 width=178) (actual\n> time=4593.867..4597.587 rows=5 loops=1)\n>   ->  Index Scan Backward using core_accessor_date_idx on\n> core_accessor  (cost=0.00..1810265.67 rows=2440 width=178) (actual\n> time=4593.866..4597.583 rows=5 loops=1)\n>         Filter: ((publish_state >= 60) AND (slot_type_id = 119) AND\n> (slot_id = 472) AND (label = ''::text) AND (role = 0) AND (user_id =\n> 0))\n>  Total runtime: 4597.632 ms\n> (4 rows)\n>\n>\n> Sebastjan\n\nWell, in that case, you are being bitten by the fact that our\nmulti-column selectivity estimates are not very good. The planner has\ngood information on how each column behaves in isolation, but not how\nthey act together. I've found this to be a very difficult problem to\nfix.\n\nWhich of the parameters in this query vary and which ones are\ntypically always the same? Sometimes you can improve things by\ncreating an appropriate partial index.\n\n...Robert\n", "msg_date": "Tue, 3 Mar 2009 15:27:12 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with ordering (can't force query planner to\n\tuse an index)" }, { "msg_contents": "Sebastjan Trepca <[email protected]> writes:\n> It thinks there are even less rows in the set:\n\n> -> Index Scan using core_accessor_fresh_idx on\n> core_accessor (cost=0.00..8955.44 rows=2440 width=92) (actual\n> time=0.056..53.107 rows=69312 loops=1)\n> Index Cond: ((slot_id = 472) AND (slot_type_id =\n> 119) AND (label = ''::text) AND (user_id = 0) AND (role = 0) AND\n> (publish_state >= 60))\n\nMaybe you should get rid of this six-column index, if you'd rather the\nquery didn't use it. 
It seems a tad overspecialized anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Mar 2009 16:27:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with ordering (can't force query planner to use an\n\tindex)" } ]
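A sketch of the partial index Robert Haas alludes to above, assuming slot_type_id, label, user_id, role and the publish_state cutoff really are fixed for this report query; the index name is invented and the constants are copied from the query in the thread, so treat it as untested:

CREATE INDEX core_accessor_slot_date_idx
    ON core_accessor (slot_id, date_posted DESC, nooximity DESC)
    WHERE slot_type_id = 119
      AND label = ''
      AND user_id = 0
      AND role = 0
      AND publish_state >= 60;

With equality on slot_id, a scan of this index returns rows already in (date_posted DESC, nooximity DESC) order, so the LIMIT 5 can stop after a handful of index entries instead of sorting ~68,000 rows; 8.3 supports both DESC index columns and partial indexes.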
[ { "msg_contents": "Hello all \n\nIn a dedicated server with 16 cores and 16GB of RAM running PostgreSQL 8.2.5 we have a database with basically two kinds of transactions: \n- short transactions with a couple of updates and inserts that runs all the day; \n- batch data loads with hundreds of inserts that runs several times a day; \n- one delete for thousands of lines after each batch; \n- selects are made when users need reports, low concurrency here. \n\nToday the max_connections is ~2500 where the application is a cluster of JBoss servers with a pool a bit smaller then this total. \nwork_mem = 1GB \nmaintenance_work_mem = 1GB \nshared_buffers = 4GB \n\nautovacuum takes a lot of time running in the largest tables (3 large tables in 50) causing some connections to have to wait for it to finish to start transactioning again. \n\nI see a few processes (connections) using 10 ~ 20% of total system memory and the others using no more then 1%. \n\nWhat I want to ask is: is it better to keep the work_mem as high as it is today or is it a safe bet triyng to reduce this number, for example, to 1 or 2MB so I can keep the distribution of memory more balanced among all connections? \n\nThanks! \n\nFlavio Henrique A. Gurgel \n\n\nHello allIn a dedicated server with 16 cores and 16GB of RAM\nrunning PostgreSQL 8.2.5 we have a database with basically two kinds of\ntransactions:- short transactions with a couple of updates and inserts that runs all the day;- batch data loads with hundreds of inserts that runs several times a day;- one delete for thousands of lines after each batch;- selects are made when users need reports, low concurrency here.Today the max_connections is ~2500 where the application is a cluster of JBoss servers with a pool a bit smaller then this total.work_mem = 1GBmaintenance_work_mem = 1GBshared_buffers = 4GBautovacuum\ntakes a lot of time running in the largest tables (3 large tables in\n50) causing some connections to have to wait for it to finish to start\ntransactioning again.I see a few processes (connections) using 10 ~ 20% of total system memory and the others using no more then 1%.What I want to ask is: is it better to keep the work_mem as high as it is today\nor is it a safe bet triyng to reduce this number, for example, to 1 or\n2MB so I can keep the distribution of memory more balanced among all\nconnections?Thanks!Flavio Henrique A. Gurgel", "msg_date": "Tue, 3 Mar 2009 21:28:56 -0300 (BRT)", "msg_from": "Flavio Henrique Araque Gurgel <[email protected]>", "msg_from_op": true, "msg_subject": "work_mem in high transaction rate database" }, { "msg_contents": "On Tue, Mar 3, 2009 at 5:28 PM, Flavio Henrique Araque Gurgel\n<[email protected]> wrote:\n> Hello all\n>\n> In a dedicated server with 16 cores and 16GB of RAM running PostgreSQL 8.2.5\n> we have a database with basically two kinds of transactions:\n> - short transactions with a couple of updates and inserts that runs all the\n> day;\n> - batch data loads with hundreds of inserts that runs several times a day;\n> - one delete for thousands of lines after each batch;\n> - selects are made when users need reports, low concurrency here.\n>\n> Today the max_connections is ~2500 where the application is a cluster of\n> JBoss servers with a pool a bit smaller then this total.\n> work_mem = 1GB\n> maintenance_work_mem = 1GB\n> shared_buffers = 4GB\n\nOh my lord, that is a foot gun waiting to go off. 
Assuming 2k\nconnections, and somehow a fair number of them went active with big\nsorts, you'd be able to exhaust all physical memory with about 8 to\n16 connections. Lower work_mem now. To something like 1 to 4 Meg. Do\nnot pass go. If some oddball query really needs a lot of work_mem,\nand benchmarks show something larger work_mem helps, consider raising\nthe work_mem setting for that one query to something under 1G (way\nunder 1G) That makes it noticeably faster. Don't allocate more than a\ntest shows you helps.\n\n> autovacuum takes a lot of time running in the largest tables (3 large tables\n> in 50) causing some connections to have to wait for it to finish to start\n> transactioning again.\n\nVacuum does not block transactions. unless you're dropping tables or something.\n\n> I see a few processes (connections) using 10 ~ 20% of total system memory\n> and the others using no more then 1%.\n\nThis is commonly misread. It has to do with the vagaries of shared\nmemory allocation and accounting. The numbers quite likely don't mean\nwhat you think they mean. Post the first 20 or so lines from top to\nshow us.\n\n> What I want to ask is: is it better to keep the work_mem as high as it is\n> today or is it a safe bet triyng to reduce this number, for example, to 1 or\n> 2MB so I can keep the distribution of memory more balanced among all\n> connections?\n\nYou're work_mem is dangerously high. Your current reading of top may\nnot actually support lowering it directly. Since you've got 4G\nshared_buffers allocated, any process that's touched all or most of\nshared_buffer memory will show as using 4G of ram. That's why you\nshould post output of top, or google on linux virtual memory and top\nand what the numbers mean.\n\nLet's say that 1% of your queries can benefit from > 100Meg work_mem,\nand 5% with 60M, and 10% with 40M, and 20% with 20M, and 30% with 16M,\nand 50% 8M and 4M is enough for all the else to do well.\n\nIf, somehow, 100 queries fired off that could use > 100Meg, they\nmight, with your current settings use all your memory and start using\nswap til swap ran out and they started getting out of memory errors\nand failing. This would affect all the other queries on the machine\nas well.\n\nOTOH, if you had work_mem limited to 16M, and 100 of those same\nqueries fired off, they'd individually run a little slower, but they\nwouldn't be able to run the machine out of memory.\n\nIf your work_mem and max_connections multiplied is > than some\nfraction of memory you're doing it wrong, and setting your machine up\nfor mysterious, heavy load failures, the worst kind.\n", "msg_date": "Tue, 3 Mar 2009 18:37:42 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem in high transaction rate database" }, { "msg_contents": "Tue, 3 Mar 2009 18:37:42 -0700 -n\nScott Marlowe <[email protected]> írta:\n\n\n> Oh my lord, that is a foot gun waiting to go off. Assuming 2k\n> connections, and somehow a fair number of them went active with big\n\nI absolutely agree with Scott. Plus set effective_cache_size\naccordingly, this would help the planner. 
You can read a lot about\nsetting this in the mailing list archives.\n\n-- \nÜdvözlettel,\nGábriel Ákos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n", "msg_date": "Wed, 4 Mar 2009 08:04:57 +0100", "msg_from": "Akos Gabriel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem in high transaction rate database" }, { "msg_contents": "Hi,\n\nOn Wednesday 04 March 2009 02:37:42 Scott Marlowe wrote:\n> If some oddball query really needs a lot of work_mem,\n> and benchmarks show something larger work_mem helps, consider raising\n> the work_mem setting for that one query to something under 1G (way\n> under 1G) That makes it noticeably faster.  Don't allocate more than a\n> test shows you helps.\n\nThe probably easiest way to integrate this into an existing application is \nthis way, in my experience:\n\n BEGIN;\n SET LOCAL work_mem TO '650MB';\n SELECT -- the query requiring such a large setting\n COMMIT;\n\nRight after the commit the global configured work_mem (or the previous \nsession's one, in fact) will be in effect, you won't have to reset it yourself.\n\nRegards,\n-- \ndim", "msg_date": "Wed, 4 Mar 2009 10:16:07 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem in high transaction rate database" } ]
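A purely illustrative postgresql.conf sketch for the 16 GB, ~2500-connection box described above, combining Scott Marlowe's small-work_mem advice with Akos Gabriel's effective_cache_size point; the numbers are assumptions to benchmark against the real workload, not recommendations:

work_mem = 4MB                  # raise per query/transaction with SET LOCAL only where tests prove it helps
maintenance_work_mem = 512MB    # only a few vacuums/index builds run concurrently
shared_buffers = 4GB
effective_cache_size = 10GB     # planner hint: roughly shared_buffers plus the expected OS file cache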
[ { "msg_contents": "----- \"Scott Marlowe\" <[email protected]> escreveu: \n> Oh my lord, that is a foot gun waiting to go off. Assuming 2k \n> connections, and somehow a fair number of them went active with big \n> sorts, you'd be able to exhaust all physical memory with about 8 to \n> 16 connections. Lower work_mem now. To something like 1 to 4 Meg. Do \n> not pass go. If some oddball query really needs a lot of work_mem, \n> and benchmarks show something larger work_mem helps, consider raising \n> the work_mem setting for that one query to something under 1G (way \n> under 1G) That makes it noticeably faster. Don't allocate more than a \n> test shows you helps. \n\nThanks a lot Scott. That's what I thought in the beginning but was very doubtful since the documentation is a bit odd regarding this point and several bloggers talk about increasing this value up to 250MB. I really think that separating regular non pooled distributed applications and pooled application servers makes a lot of difference in this point. \n\n> Vacuum does not block transactions. unless you're dropping tables or something. \n\nI'll try to separate things and check if the DELETE queries have something related here. \n\n(...) \n> what you think they mean. Post the first 20 or so lines from top to \n> show us. \n\nUnfortunately I can't do it. The data there is very sensitive (it's a public company here in Brazil) and the server is operated only by selected personal. I just ask for information and give written recomendations. Anyway, I'm going to pay some more attention in this topic. \n\nThis is a very interesting implementation of PostgreSQL (3 large databases, heavy load, things growing all the time) and I'll let you all know what happened when tuning it. I'll feedback you after lowering work_mem and changing related settings. \n\nThanks \nFlavio \n\n----- \"Scott Marlowe\" <[email protected]> escreveu:\n> Oh my lord, that is a foot gun waiting to go off.  Assuming 2k> connections, and somehow a fair number of them went active with big> sorts, you'd be able to exhaust all physical memory  with about 8 to> 16 connections.  Lower work_mem now. To something like 1 to 4 Meg.  Do> not pass go.  If some oddball query really needs a lot of work_mem,> and benchmarks show something larger work_mem helps, consider raising> the work_mem setting for that one query to something under 1G (way> under 1G) That makes it noticeably faster.  Don't allocate more than a> test shows you helps.Thanks a lot Scott. That's what I thought in the beginning but was very doubtful since the documentation is a bit odd regarding this point and several bloggers talk about increasing this value up to 250MB. I really think that separating regular non pooled distributed applications and pooled application servers makes a lot of difference in this point.> Vacuum does not block transactions.  unless you're dropping tables or something.I'll try to separate things and check if the DELETE queries have something related here.(...)> what you think they mean.  Post the first 20 or so lines from top to> show us.Unfortunately I can't do it. The data there is very sensitive (it's a public company here in Brazil) and the server is operated only by selected personal. I just ask for information and give written recomendations. Anyway, I'm going to pay some more attention in this topic.This is a very interesting implementation of PostgreSQL (3 large databases, heavy load, things growing all the time) and I'll let you all know what happened when tuning it. 
I'll feedback you after lowering work_mem and changing related settings.ThanksFlavio", "msg_date": "Wed, 4 Mar 2009 09:46:27 -0300 (BRT)", "msg_from": "Flavio Henrique Araque Gurgel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: work_mem in high transaction rate database" }, { "msg_contents": "You may have decreased performance in your batch jobs with the lower work_mem setting.\nAdditionally, the fact that you haven't had swap storm issues so far means that although there is certain risk of an issue, its probably a lot lower than what has been talked about here so far.\nWithout a change in client behavior (new queries or large change in data) a change in load alone is very unlikely to cause a problem. So take your time to do it right. I disagree with the knee-jerk \"change it now!\" response. The very fact you have gotten this far means it is not as risky as the bare settings indicate.\nDefinitely plan on improving and testing out work_mem variants soon, but a hasty change to a small value might cause your batch jobs to take much longer - what is your risk if they take too long or don't complete in time? That risk is yours to assess - if its not much of a risk, then by all means lower work_mem soon. But if it is business critical for those batch jobs to complete within some time frame, be careful.\n\nIf you know what you are doing, and are careful, your work_mem is probably, but not necessarily too high.\nIt all depends on how much you know about your clients.\nFor example 2010 connections with 500MB work_mem is not always a problem. If you know 2000 of those are from an application that accesses with a user that can only see small tables, and you know what queries those are, it may be perfectly safe. For example, I've dealt with an application that had a couple thousand connections, 95% were idle at any time (connection pool much like those likely from your Jboss apps). The queries these ran were a limited set of about 20 statements that all accessed by unique key on small-ish sized tables (<30MB) with few joins. There were tons of connections, but they:\n1: hardly did anything, most were idle. On 75% of the connections, a query set was run exactly once every 15 minutes selecting * from small (sub 1MB) tables.\n2: the more active connections (20%) did small select queries on single rows accessed by primary key.\n\nSo, the calculation max connections * work_mem is utterly inappropriate for that sort of workload. Yes, in theory, those connections could use work_mem * some factor of memory - if they changed their queries, and accessed other tables. In practice - nowhere close.\n\nThe remaining few connections(~5) were batch jobs that needed ~800MB of work_mem or else the performance would stink. And they didn't need 800MB of work_mem for real (the hashes used ~250MB) they needed a SETTING of 800MB because the planner is incapable of estimating row counts properly with partitioned table access.\nBoth applications were not able to configure their own work_mem for quite some time (getting client applications to change is not always a quick process).\nBut the risk of having a large setting, even with 2000 connections was low. The risk of changing it too low was very high (batch jobs taking 3 hours instead of 10 minutes). Only 5 ish connections even accessed schemas/tables with lots of data. The remaining couple thousand were constrained in many ways other than work_mem.\n\nIn the end I did have swap storms... However it was not caused by work_mem. 
It was the query planner, which can use GBs of memory per connection planning a query on partitioned tables.\n\nSo, my point is that if you don't know a lot about the database or its clients be very wary of large work_mem settings. If you do, and have a lot of control or knowledge about your clients, the work_mem * max_connections calculation is inappropriate.\n\nThe detailed formula is along the lines of:\nSUM_i [work_mem_i * active_connecions_i] (for each 'type' of connection i).\nIf you don't know enough about your connections, then the conservative estimate is work_mem * max_connections.\n\nA single query has the potential of using multiples of work_mem depending on how many concurrent hashes / sorts are in a query, so the above is not quite right either.\n\nIs there a way to make a particular database user have a user-local work_mem setting without having the client change their code? You could then have each application have its own user, with its own default setting. The batch jobs with few connections can get much larger work_mem than the Jboss ones. This would be especially powerful for applications that can't change or that use higher level tools for db access that make it impossible or very difficult to send non-standard commands like \"SET\".\n\nOn 3/4/09 4:46 AM, \"Flavio Henrique Araque Gurgel\" <[email protected]> wrote:\n\n----- \"Scott Marlowe\" <[email protected]> escreveu:\n> Oh my lord, that is a foot gun waiting to go off. Assuming 2k\n> connections, and somehow a fair number of them went active with big\n> sorts, you'd be able to exhaust all physical memory with about 8 to\n> 16 connections. Lower work_mem now. To something like 1 to 4 Meg. Do\n> not pass go. If some oddball query really needs a lot of work_mem,\n> and benchmarks show something larger work_mem helps, consider raising\n> the work_mem setting for that one query to something under 1G (way\n> under 1G) That makes it noticeably faster. Don't allocate more than a\n> test shows you helps.\n\nThanks a lot Scott. That's what I thought in the beginning but was very doubtful since the documentation is a bit odd regarding this point and several bloggers talk about increasing this value up to 250MB. I really think that separating regular non pooled distributed applications and pooled application servers makes a lot of difference in this point.\n\n> Vacuum does not block transactions. unless you're dropping tables or something.\n\nI'll try to separate things and check if the DELETE queries have something related here.\n\n(...)\n> what you think they mean. Post the first 20 or so lines from top to\n> show us.\n\nUnfortunately I can't do it. The data there is very sensitive (it's a public company here in Brazil) and the server is operated only by selected personal. I just ask for information and give written recomendations. Anyway, I'm going to pay some more attention in this topic.\n\nThis is a very interesting implementation of PostgreSQL (3 large databases, heavy load, things growing all the time) and I'll let you all know what happened when tuning it. 
I'll feedback you after lowering work_mem and changing related settings.\n\nThanks\nFlavio\n\n\n\n\nRe: [PERFORM] work_mem in high transaction rate database\n\n\nYou may have decreased performance in your batch jobs with the lower work_mem setting.\nAdditionally, the fact that you haven’t had swap storm issues so far means that although there is certain risk of an issue, its probably a lot lower than what has been talked about here so far.\nWithout a change in client behavior (new queries or large change in data) a change in load alone is very unlikely to cause a problem.  So take your time to do it right.  I disagree with the knee-jerk “change it now!” response.  The very fact you have gotten this far means it is not as risky as the bare settings indicate.\nDefinitely plan on improving and testing out work_mem variants soon, but a hasty change to a small value might cause your batch jobs to take much longer — what is your risk if they take too long or don’t complete in time?  That risk is yours to assess — if its not much of a risk, then by all means lower work_mem soon.  But if it is business critical for those batch jobs to complete within some time frame, be careful.\n\nIf you know what you are doing, and are careful,  your work_mem is probably, but not necessarily too high. \nIt all depends on how much you know about your clients.  \nFor example 2010 connections with 500MB work_mem is not always a problem.  If you know 2000 of those are from an application that accesses with a user that can only see small tables, and you know what queries those are, it may be perfectly safe.  For example, I’ve dealt with an application that had a couple thousand connections, 95% were idle at any time (connection pool much like those likely from your Jboss apps).  The queries these ran were a limited set of about 20 statements that all accessed by unique key on small-ish sized tables (<30MB) with few joins.  There were tons of connections, but they: \n1: hardly did anything, most were idle.  On 75% of the connections, a query set was run exactly once every 15 minutes selecting * from small (sub 1MB) tables.  \n2: the more active connections (20%) did small select queries on single rows accessed by primary key.\n\nSo, the calculation  max connections * work_mem is utterly inappropriate for that sort of workload.   Yes, in theory, those connections could use work_mem * some factor of memory — if they changed their queries, and accessed other tables.  In practice — nowhere close.\n\nThe remaining few connections(~5) were batch jobs that needed ~800MB of work_mem or else the performance would stink.  And they didn’t need 800MB of work_mem for real (the hashes used ~250MB) they needed a SETTING of 800MB because the planner is incapable of estimating row counts properly with partitioned table access. \nBoth applications were not able to configure their own work_mem for quite some time (getting client applications to change is not always a quick process).\nBut the risk of having a large setting, even with 2000 connections was low.  The risk of changing it too low was very high (batch jobs taking 3 hours instead of 10 minutes).  Only 5 ish connections even accessed schemas/tables with lots of data.  The remaining couple thousand were constrained in many ways other than work_mem.\n\nIn the end I did have swap storms... However it was not caused by work_mem.  
It was the query planner, which can use GBs of memory per connection planning a query on partitioned tables.\n\nSo, my point is that if you don’t know a lot about the database or its clients be very wary of large work_mem settings.  If you do, and have a lot of control or knowledge about your clients, the work_mem * max_connections calculation is inappropriate.\n\nThe detailed formula is along the lines of:\nSUM_i [work_mem_i * active_connecions_i]    (for each ‘type’ of connection i).\nIf you don’t know enough about your connections, then the conservative estimate is work_mem * max_connections.\n\nA single query has the potential of using multiples of work_mem depending on how many concurrent hashes / sorts are in a query, so the above is not quite right either.\n\nIs there a way to make a particular database user have a user-local work_mem setting without having the client change their code? You could then have each application have its own user, with its own default setting.  The batch jobs with few connections can get much larger work_mem than the Jboss ones.  This would be especially powerful for applications that can’t change or that use higher level tools for db access that make it impossible or very difficult to send non-standard commands like “SET”.\n\nOn 3/4/09 4:46 AM, \"Flavio Henrique Araque Gurgel\" <[email protected]> wrote:\n\n----- \"Scott Marlowe\" <[email protected]> escreveu: \n> Oh my lord, that is a foot gun waiting to go off.  Assuming 2k\n> connections, and somehow a fair number of them went active with big\n> sorts, you'd be able to exhaust all physical memory  with about 8 to\n> 16 connections.  Lower work_mem now. To something like 1 to 4 Meg.  Do\n> not pass go.  If some oddball query really needs a lot of work_mem,\n> and benchmarks show something larger work_mem helps, consider raising\n> the work_mem setting for that one query to something under 1G (way\n> under 1G) That makes it noticeably faster.  Don't allocate more than a\n> test shows you helps.\n\nThanks a lot Scott. That's what I thought in the beginning but was very doubtful since the documentation is a bit odd regarding this point and several bloggers talk about increasing this value up to 250MB. I really think that separating regular non pooled distributed applications and pooled application servers makes a lot of difference in this point.\n\n> Vacuum does not block transactions.  unless you're dropping tables or something.\n\nI'll try to separate things and check if the DELETE queries have something related here.\n\n(...)\n> what you think they mean.  Post the first 20 or so lines from top to\n> show us.\n\nUnfortunately I can't do it. The data there is very sensitive (it's a public company here in Brazil) and the server is operated only by selected personal. I just ask for information and give written recomendations. Anyway, I'm going to pay some more attention in this topic.\n\nThis is a very interesting implementation of PostgreSQL (3 large databases, heavy load, things growing all the time) and I'll let you all know what happened when tuning it. 
I'll feedback you after lowering work_mem and changing related settings.\n\nThanks\nFlavio", "msg_date": "Wed, 4 Mar 2009 10:18:45 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem in high transaction rate database" }, { "msg_contents": "On Wed, Mar 4, 2009 at 11:18 AM, Scott Carey <[email protected]> wrote:\n> You may have decreased performance in your batch jobs with the lower\n> work_mem setting.\n\nThat would be why I recommended benchmarking queries that need more\nmemory and setting work_mem for those queries alone.\n\n> Additionally, the fact that you haven’t had swap storm issues so far means\n> that although there is certain risk of an issue, its probably a lot lower\n> than what has been talked about here so far.\n\nNo, it means you and the OP are guessing at what's a good number\nwithout any actual proof of it. Guessing it not a particularly good\nmethod for setting work_mem, especially on a server with 2000+\nconnections.\n\n> Without a change in client behavior (new queries or large change in data) a\n> change in load alone is very unlikely to cause a problem.\n\nThat is demonstrably incorrect. If the average number of live queries\nout of the 2000 connections is currently 10, and an increase in load\nmakes it 500, there is a very REAL chance of running the server out of\nmemory.\n\n> So take your time\n> to do it right.\n\nI believe I made mention of benchmarking queries above and in my first\npost. But doing it right does NOT mean setting work_mem to 2G then\nwhittling it down as your server crashes under load.\n\n>  I disagree with the knee-jerk “change it now!” response.\n>  The very fact you have gotten this far means it is not as risky as the bare\n> settings indicate.\n\nSorry, but I disagree back at you, and it's not a knee jerk reaction,\nit's a reaction honed from years of watching supposedly stable\npostgresql servers crash and burn under slashdot effect type loads.\n\n> Definitely plan on improving and testing out work_mem variants soon, but a\n> hasty change to a small value might cause your batch jobs to take much\n> longer — what is your risk if they take too long or don’t complete in time?\n>  That risk is yours to assess — if its not much of a risk, then by all means\n> lower work_mem soon.  But if it is business critical for those batch jobs to\n> complete within some time frame, be careful.\n\nSorry, but that's backwards. Unless the batch jobs are the only\nimportant thing on this server, running it with work_mem=2G is asking\nfor trouble under any real kind of load. It's sacrificing stability\nfor some unknown and quite probably minimal performance improvement.\n\nIt seems a lot of your post is based on either hoping for the best, or\nassuming access patterns won't change much over time. Do you test at\nhigher and higher parallel loads until failure occurs and then figure\nout how to limit that type of failure? I do, because I can't afford\nto have my db servers crash and burn midday under peak load. And you\nnever know when some app is gonna start suddenly spewing things like\nunconstrained joins due to some bug, and if you've got work_mem set to\n1G your server IS gonna have problems.\n", "msg_date": "Wed, 4 Mar 2009 14:49:29 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem in high transaction rate database" } ]
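On Scott Carey's question about a user-local work_mem that needs no client-side changes: per-role defaults can be attached with ALTER ROLE ... SET, which is applied at login. A hypothetical sketch, assuming the JBoss pool and the batch loader connect as separate database roles (both role names here are invented):

ALTER ROLE jboss_pool SET work_mem = '4MB';
ALTER ROLE batch_loader SET work_mem = '256MB';

ALTER DATABASE ... SET works the same way if the workloads are separated by database rather than by role.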
[ { "msg_contents": "Running PG 8.1.11 on AIX 5.3\n\nTrying to track down the cause of long running commits on one of our DB servers. \n\nI can rule checkpoints out (I've set log_min_messages to debug2 and the commits are not happening during checkpoints).\n\nWe have this problem over a period of ~ 35 minutes per day only. during that period we have no change in our write load, but we do have substantial\nincrease in reads on one table (pretty much all being served by index scans from the buffer pool - which most likely points to a shift\nin activity over the time period, and not the problem itself). \n\nOver the same time, we also see an abnormally large number of connections being opened and closed. We can see this in the DB connection pool \nlogs as well as the DB logs). This is most likely being driven by the activity shift that we can see in the stats. \n\nWhat we do see is connections being opened and closed rapidly during this same period of time (DB logs below). I'm wondering if the \ncommits are waiting on something at the file system level due to the processes.\n\nUlimit settings on the server:\n\nulimit -a\ncore file size (blocks, -c) soft\ndata seg size (kbytes, -d) soft\nfile size (blocks, -f) unlimited\nmax memory size (kbytes, -m) hard\nopen files (-n) 4096\npipe size (512 bytes, -p) 64\nstack size (kbytes, -s) soft\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 10240\nvirtual memory (kbytes, -v) unlimited\n\n\nDB logs:\n\n2009-03-04 09:56:13.465 CUT [561402] DEBUG: forked new backend, pid=794696 socket=9\n2009-03-04 09:56:13.645 CUT [561402] DEBUG: forked new backend, pid=2101474 socket=9\n2009-03-04 09:56:13.645 CUT [561402] DEBUG: server process (PID 516334) exited with exit code 0\n2009-03-04 09:56:13.646 CUT [561402] DEBUG: server process (PID 1458332) exited with exit code 0\n2009-03-04 09:56:13.646 CUT [561402] DEBUG: server process (PID 2174998) exited with exit code 0\n2009-03-04 09:56:13.647 CUT [561402] DEBUG: server process (PID 1519662) exited with exit code 0\n2009-03-04 09:56:13.647 CUT [561402] DEBUG: server process (PID 1646618) exited with exit code 0\n2009-03-04 09:56:13.647 CUT [561402] DEBUG: server process (PID 999648) exited with exit code 0\n2009-03-04 09:56:13.648 CUT [561402] DEBUG: server process (PID 2285774) exited with exit code 0\n2009-03-04 09:56:13.648 CUT [561402] DEBUG: server process (PID 1622108) exited with exit code 0\n2009-03-04 09:56:13.648 CUT [561402] DEBUG: server process (PID 1024216) exited with exit code 0\n2009-03-04 09:56:13.649 CUT [561402] DEBUG: server process (PID 1392706) exited with exit code 0\n2009-03-04 09:56:13.665 CUT [561402] DEBUG: forked new backend, pid=1392708 socket=9\n2009-03-04 09:56:13.666 CUT [561402] DEBUG: server process (PID 1773680) exited with exit code 0\n2009-03-04 09:56:13.669 CUT [561402] DEBUG: forked new backend, pid=1773682 socket=9\n2009-03-04 09:56:13.805 CUT [561402] DEBUG: forked new backend, pid=1024218 socket=9\n2009-03-04 09:56:13.805 CUT [561402] DEBUG: server process (PID 1589374) exited with exit code 0\n2009-03-04 09:56:13.806 CUT [561402] DEBUG: server process (PID 995536) exited with exit code 0\n2009-03-04 09:56:13.806 CUT [561402] DEBUG: server process (PID 688336) exited with exit code 0\n2009-03-04 09:56:13.806 CUT [561402] DEBUG: server process (PID 2035798) exited with exit code 0\n2009-03-04 09:56:13.807 CUT [561402] DEBUG: server process (PID 1429700) exited with exit code 0\n2009-03-04 09:56:13.807 CUT [561402] DEBUG: server process (PID 2043960) exited with 
exit code 0\n2009-03-04 09:56:13.807 CUT [561402] DEBUG: server process (PID 2158782) exited with exit code 0\n2009-03-04 09:56:13.808 CUT [561402] DEBUG: server process (PID 2306144) exited with exit code 0\n2009-03-04 09:56:13.808 CUT [561402] DEBUG: server process (PID 884978) exited with exit code 0\n2009-03-04 09:56:13.808 CUT [561402] DEBUG: server process (PID 1781930) exited with exit code 0\n2009-03-04 09:56:13.809 CUT [561402] DEBUG: server process (PID 2277572) exited with exit code 0\n2009-03-04 09:56:13.809 CUT [561402] DEBUG: server process (PID 663656) exited with exit code 0\n2009-03-04 09:56:13.809 CUT [561402] DEBUG: server process (PID 2343006) exited with exit code 0\n2009-03-04 09:56:13.810 CUT [561402] DEBUG: server process (PID 889026) exited with exit code 0\n2009-03-04 09:56:13.810 CUT [561402] DEBUG: server process (PID 2404546) exited with exit code 0\n2009-03-04 09:56:13.810 CUT [561402] DEBUG: server process (PID 581830) exited with exit code 0\n2009-03-04 09:56:13.811 CUT [561402] DEBUG: server process (PID 1433846) exited with exit code 0\n2009-03-04 09:56:13.811 CUT [561402] DEBUG: server process (PID 667690) exited with exit code 0\n2009-03-04 09:56:13.932 CUT [561402] DEBUG: forked new backend, pid=667692 socket=9\n2009-03-04 09:56:13.933 CUT [561402] DEBUG: server process (PID 864402) exited with exit code 0\n2009-03-04 09:56:13.933 CUT [561402] DEBUG: server process (PID 594124) exited with exit code 0\n2009-03-04 09:56:13.934 CUT [561402] DEBUG: server process (PID 852202) exited with exit code 0\n2009-03-04 09:56:13.935 CUT [561402] DEBUG: server process (PID 2433156) exited with exit code 0\n2009-03-04 09:56:13.935 CUT [561402] DEBUG: server process (PID 2216120) exited with exit code 0\n2009-03-04 09:56:13.936 CUT [561402] DEBUG: server process (PID 1761484) exited with exit code 0\n2009-03-04 09:56:13.936 CUT [561402] DEBUG: server process (PID 815268) exited with exit code 0\n2009-03-04 09:56:13.936 CUT [561402] DEBUG: server process (PID 876668) exited with exit code 0\n2009-03-04 09:56:13.937 CUT [561402] DEBUG: server process (PID 897060) exited with exit code 0\n2009-03-04 09:56:13.937 CUT [561402] DEBUG: server process (PID 2199742) exited with exit code 0\n2009-03-04 09:56:13.937 CUT [561402] DEBUG: server process (PID 913618) exited with exit code 0\n2009-03-04 09:56:13.937 CUT [561402] DEBUG: server process (PID 2334774) exited with exit code 0\n2009-03-04 09:56:13.984 CUT [561402] DEBUG: forked new backend, pid=2334776 socket=9\n2009-03-04 09:56:14.065 CUT [602282] appdb appuser 1.2.3.3 LOG: duration: 872.457 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2009-03-04 09:56:14.065 CUT [868552] appdb appuser 1.2.3.4 LOG: duration: 873.756 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2009-03-04 09:56:14.075 CUT [2212000] appdb appuser 1.2.3.5 LOG: duration: 586.276 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2009-03-04 09:56:14.076 CUT [561402] DEBUG: forked new backend, pid=913620 socket=9\n2009-03-04 09:56:14.095 CUT [561402] DEBUG: forked new backend, pid=782510 socket=9\n2009-03-04 09:56:14.106 CUT [561402] DEBUG: forked new backend, pid=2285776 socket=9\n2009-03-04 09:56:14.179 CUT [561402] DEBUG: forked new backend, pid=999650 socket=9\n2009-03-04 09:56:14.194 CUT [561402] DEBUG: forked new backend, pid=1646620 socket=9\n2009-03-04 09:56:14.245 CUT [561402] DEBUG: forked new backend, pid=921634 socket=9\n2009-03-04 09:56:14.255 CUT [561402] DEBUG: forked new backend, pid=1257600 socket=9\n2009-03-04 09:56:14.265 CUT 
[561402] DEBUG: forked new backend, pid=1622114 socket=9\n2009-03-04 09:56:14.325 CUT [561402] DEBUG: forked new backend, pid=1519664 socket=9\n2009-03-04 09:56:14.326 CUT [561402] DEBUG: forked new backend, pid=1839238 socket=9\n2009-03-04 09:56:14.335 CUT [561402] DEBUG: forked new backend, pid=2412680 socket=9\n2009-03-04 09:56:14.425 CUT [561402] DEBUG: forked new backend, pid=2007238 socket=9\n2009-03-04 09:56:14.426 CUT [561402] DEBUG: server process (PID 2363424) exited with exit code 0\n2009-03-04 09:56:14.426 CUT [561402] DEBUG: server process (PID 1749056) exited with exit code 0\n2009-03-04 09:56:14.426 CUT [561402] DEBUG: server process (PID 2056242) exited with exit code 0\n2009-03-04 09:56:14.427 CUT [561402] DEBUG: server process (PID 770156) exited with exit code 0\n2009-03-04 09:56:14.433 CUT [561402] DEBUG: forked new backend, pid=770158 socket=9\n2009-03-04 09:56:14.433 CUT [561402] DEBUG: server process (PID 2379944) exited with exit code 0\n2009-03-04 09:56:14.484 CUT [561402] DEBUG: forked new backend, pid=2379946 socket=9\n2009-03-04 09:56:14.485 CUT [561402] DEBUG: server process (PID 1130616) exited with exit code 0\n2009-03-04 09:56:14.494 CUT [561402] DEBUG: forked new backend, pid=1130618 socket=9\n2009-03-04 09:56:14.505 CUT [561402] DEBUG: forked new backend, pid=516338 socket=9\n2009-03-04 09:56:14.506 CUT [561402] DEBUG: forked new backend, pid=2199744 socket=9\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Wed, 04 Mar 2009 11:18:50 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Long Running Commits" } ]
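A possible next step for the report above, suggested here rather than taken from the thread: quantify the backend churn during the 35-minute window with the logging settings below (available in 8.1), then see whether the slow commits line up with bursts of connection setup and teardown. The duration threshold is only illustrative:

log_connections = on
log_disconnections = on             # logs each session's total duration at exit
log_min_duration_statement = 250    # milliseconds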
[ { "msg_contents": "I have a relatively simple query with a single index on (contract_id, time):\n\nvjtrade=> EXPLAIN SELECT * FROM ticks WHERE contract_id=1 ORDER BY time;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------\n Sort (cost=11684028.44..11761274.94 rows=30898601 width=40)\n Sort Key: \"time\"\n -> Bitmap Heap Scan on ticks (cost=715657.57..6995196.08 rows=30898601\nwidth=40)\n Recheck Cond: (contract_id = 1)\n -> Bitmap Index Scan on contract_id_time_idx\n(cost=0.00..707932.92 rows=30898601 width=0)\n Index Cond: (contract_id = 1)\n(6 rows)\n\nThis plan doesn't complete in a reasonable amount of time. I end up having\nto kill the query after it's been running for over an hour.\n\nIf I do a:\nSET enable_sort=FALSE;\nSET enable_bitmapscan=FALSE;\n\nThen it gives me this plan:\n\nIndex Scan using contract_id_time_idx on ticks (cost=0.00..117276552.51\nrows=30897044 width=40) (actual time=34.025..738583.609 rows=27858174\nloops=1)\n Index Cond: (contract_id = 1)\nTotal runtime: 742323.102 ms\n\nNotice how the estimated cost is so much different from the actual time.\nThe row estimate is pretty good, however.\n\nThis is on postgresql 8.3.5 with:\nshared_buffers = 512MB\ntemp_buffers = 256MB\nwork_mem = 256MB\nmax_fsm_pages = 153600\neffective_cache_size = 1500MB\n\nIs there any way to give postgresql a better estimate of the index scan\ntime? I tried setting random_page_cost=1, but it still gave me the bitmap\nplan.\n\nThanks,\nJonathan Hseu\n\nI have a relatively simple query with a single index on (contract_id, time):vjtrade=> EXPLAIN SELECT * FROM ticks WHERE contract_id=1 ORDER BY time;                                             QUERY PLAN                                              \n\n----------------------------------------------------------------------------------------------------- Sort  (cost=11684028.44..11761274.94 rows=30898601 width=40)   Sort Key: \"time\"   ->  Bitmap Heap Scan on ticks  (cost=715657.57..6995196.08 rows=30898601 width=40)\n\n         Recheck Cond: (contract_id = 1)         ->  Bitmap Index Scan on contract_id_time_idx  (cost=0.00..707932.92 rows=30898601 width=0)               Index Cond: (contract_id = 1)(6 rows)This\nplan doesn't complete in a reasonable amount of time.  I end up having\nto kill the query after it's been running for over an hour.\nIf I do a:SET enable_sort=FALSE;SET enable_bitmapscan=FALSE;Then it gives me this plan:Index\nScan using contract_id_time_idx on ticks  (cost=0.00..117276552.51\nrows=30897044 width=40) (actual time=34.025..738583.609 rows=27858174\nloops=1)\n  Index Cond: (contract_id = 1)Total runtime: 742323.102 msNotice how the estimated cost is so much different from the actual time.  The row estimate is pretty good, however.This is on postgresql 8.3.5 with:\n\nshared_buffers = 512MBtemp_buffers = 256MBwork_mem = 256MBmax_fsm_pages = 153600effective_cache_size = 1500MBIs there any way to give postgresql a better estimate of the index scan time?  I tried setting random_page_cost=1, but it still gave me the bitmap plan.\nThanks,Jonathan Hseu", "msg_date": "Thu, 5 Mar 2009 09:56:48 -0600", "msg_from": "Jonathan Hseu <[email protected]>", "msg_from_op": true, "msg_subject": "Index scan plan estimates way off." 
}, { "msg_contents": "Jonathan Hseu <[email protected]> writes:\n> Sort (cost=11684028.44..11761274.94 rows=30898601 width=40)\n> Sort Key: \"time\"\n> -> Bitmap Heap Scan on ticks (cost=715657.57..6995196.08 rows=30898601\n> width=40)\n> Recheck Cond: (contract_id = 1)\n> -> Bitmap Index Scan on contract_id_time_idx\n> (cost=0.00..707932.92 rows=30898601 width=0)\n> Index Cond: (contract_id = 1)\n> (6 rows)\n\n> This plan doesn't complete in a reasonable amount of time. I end up having\n> to kill the query after it's been running for over an hour.\n\nThe bitmap scan should be at least as efficient as the plain indexscan,\nso I suppose the problem is that the sort is slow. What's the datatype\nof \"time\"? Can this machine actually support 256MB+ work_mem, or is that\nlikely to be driving it into swapping?\n\nYou might learn more from enabling trace_sort and watching the\npostmaster log entries it generates. On the whole I think the planner\nisn't making a stupid choice here: sorting a large number of rows\nusually *is* preferable to making an indexscan over them, unless the\ntable is remarkably close to being in physical order for the index.\nSo it would be worth trying to figure out what the problem with the\nsort is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Mar 2009 13:30:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index scan plan estimates way off. " }, { "msg_contents": "Oops, forgot to CC my reply to the list. Sorry if this gets messed up.\n\nOn Thu, Mar 5, 2009 at 12:30 PM, Tom Lane <[email protected]> wrote:\n\n> Jonathan Hseu <[email protected]> writes:\n> > Sort (cost=11684028.44..11761274.94 rows=30898601 width=40)\n> > Sort Key: \"time\"\n> > -> Bitmap Heap Scan on ticks (cost=715657.57..6995196.08\n> rows=30898601\n> > width=40)\n> > Recheck Cond: (contract_id = 1)\n> > -> Bitmap Index Scan on contract_id_time_idx\n> > (cost=0.00..707932.92 rows=30898601 width=0)\n> > Index Cond: (contract_id = 1)\n> > (6 rows)\n>\n> > This plan doesn't complete in a reasonable amount of time. I end up\n> having\n> > to kill the query after it's been running for over an hour.\n>\n> The bitmap scan should be at least as efficient as the plain indexscan,\n> so I suppose the problem is that the sort is slow. 
What's the datatype\n> of \"time\"?\n\n\nIt's a timestamp with time zone and not null.\n\n\n> Can this machine actually support 256MB+ work_mem, or is that\n> likely to be driving it into swapping?\n\n\nYeah, the machine has 4 GB of RAM and isn't even close to swapping at all.\n\n>\n> You might learn more from enabling trace_sort and watching the\n> postmaster log entries it generates.\n\n\nI got this (I'm not sure how to interpret it, as there doesn't seem to be\nany documentation about it on the web):\n\n2009-03-05 15:28:27 CST STATEMENT: select * from ticks where contract_id=1\norder by time limit 2800000;\n2009-03-05 15:28:30 CST LOG: begin tuple sort: nkeys = 1, workMem = 262144,\nrandomAccess = f\n2009-03-05 15:28:30 CST STATEMENT: explain analyze select * from ticks\nwhere contract_id=1 order by time limit 2800000;\n2009-03-05 16:50:31 CST LOG: switching to external sort with 937 tapes: CPU\n26.57s/4835.39u sec elapsed 4921.38 sec\n2009-03-05 16:50:31 CST STATEMENT: explain analyze select * from ticks\nwhere contract_id=1 order by time limit 2800000;\n2009-03-05 17:00:46 CST LOG: performsort starting: CPU 92.51s/4955.58u sec\nelapsed 5536.57 sec\n2009-03-05 17:00:46 CST STATEMENT: explain analyze select * from ticks\nwhere contract_id=1 order by time limit 2800000;\n2009-03-05 17:00:50 CST LOG: finished writing run 1 to tape 0: CPU\n92.86s/4958.30u sec elapsed 5539.78 sec\n2009-03-05 17:00:50 CST STATEMENT: explain analyze select * from ticks\nwhere contract_id=1 order by time limit 2800000;\n2009-03-05 17:00:50 CST LOG: finished writing final run 2 to tape 1: CPU\n92.88s/4958.40u sec elapsed 5539.90 sec\n2009-03-05 17:00:50 CST STATEMENT: explain analyze select * from ticks\nwhere contract_id=1 order by time limit 2800000;\n2009-03-05 17:00:51 CST LOG: performsort done (except 2-way final merge):\nCPU 92.96s/4958.55u sec elapsed 5541.10 sec\n2009-03-05 17:00:51 CST STATEMENT: explain analyze select * from ticks\nwhere contract_id=1 order by time limit 2800000;\n2009-03-05 17:00:58 CST LOG: external sort ended, 204674 disk blocks used:\nCPU 93.36s/4960.04u sec elapsed 5548.33 sec\n2009-03-05 17:00:58 CST STATEMENT: explain analyze select * from ticks\nwhere contract_id=1 order by time limit 2800000;\n\n\n\n> On the whole I think the planner\n> isn't making a stupid choice here: sorting a large number of rows\n> usually *is* preferable to making an indexscan over them, unless the\n> table is remarkably close to being in physical order for the index.\n> So it would be worth trying to figure out what the problem with the\n> sort is.\n\n\nI don't really understand this. It seems to me that fetching and sorting 30\nmillion rows wouldn't be preferable to just fetching them in the correct\norder in the first place, even if it's in a random order.\n\nI tried another query with a much smaller result set, and the index scan\ntakes 76 seconds, but the bitmap scan & sort takes 1.5 hours. That's quite\na difference. I'm pretty sure the physical order of the index is very\ndifferent from the physical order of the table. 
The elements of the table\nare inserted in strictly time order, if that's how it ends up being on disk,\nwhereas the index, as far as I understand it, would be sorted by the first\nof the multiple columns, the contract_id, then the time.\n\nHere's both of the EXPLAIN ANALYZEs for the same query:\n\n=> explain analyze select * from ticks where contract_id=1 order by time\nlimit 2800000;\n\nQUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=10487812.41..10494812.41 rows=2800000 width=40) (actual\ntime=5541109.704..5545345.598 rows=2800000 loops=1)\n -> Sort (cost=10487812.41..10565267.29 rows=30981949 width=40) (actual\ntime=5541109.702..5544883.149 rows=2800000 loops=1)\n Sort Key: \"time\"\n Sort Method: external merge Disk: 1637392kB\n -> Bitmap Heap Scan on ticks (cost=718724.01..7015201.37\nrows=30981949 width=40) (actual time=4874084.105..5465131.997 rows=27917481\nloops=1)\n Recheck Cond: (contract_id = 1)\n -> Bitmap Index Scan on contract_id_time_idx\n(cost=0.00..710978.52 rows=30981949 width=0) (actual\ntime=4871649.240..4871649.240 rows=27918305 loops=1)\n Index Cond: (contract_id = 1)\n Total runtime: 5548440.918 ms\n(9 rows)\n\n=> explain analyze select * from ticks where contract_id=1 order by time\nlimit 2800000;\n\nQUERY\nPLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..10629028.93 rows=2800000 width=40) (actual\ntime=136.612..75717.675 rows=2800000 loops=1)\n -> Index Scan using contract_id_time_idx on ticks\n(cost=0.00..117628023.89 rows=30986694 width=40) (actual\ntime=136.611..75033.090 rows=2800000 loops=1)\n Index Cond: (contract_id = 1)\n Total runtime: 76081.634 ms\n(4 rows)\n\n\nTo me, it seems like postgresql thinks that it has to do a random page fetch\nfor each row from the index scan and so its prediction is way off. The\nprediction for the other bitmap plan is much closer.\n\nThanks,\nJonathan Hseu\n\nOops, forgot to CC my reply to the list.  Sorry if this gets messed up.On Thu, Mar 5, 2009 at 12:30 PM, Tom Lane <[email protected]> wrote:\n\nJonathan Hseu <[email protected]> writes:\n>  Sort  (cost=11684028.44..11761274.94 rows=30898601 width=40)\n>    Sort Key: \"time\"\n>    ->  Bitmap Heap Scan on ticks  (cost=715657.57..6995196.08 rows=30898601\n> width=40)\n>          Recheck Cond: (contract_id = 1)\n>          ->  Bitmap Index Scan on contract_id_time_idx\n> (cost=0.00..707932.92 rows=30898601 width=0)\n>                Index Cond: (contract_id = 1)\n> (6 rows)\n\n> This plan doesn't complete in a reasonable amount of time.  I end up having\n> to kill the query after it's been running for over an hour.\n\nThe bitmap scan should be at least as efficient as the plain indexscan,\nso I suppose the problem is that the sort is slow.  What's the datatype\nof \"time\"?It's a timestamp with time zone and not null. \n\nCan this machine actually support 256MB+ work_mem, or is that\nlikely to be driving it into swapping?Yeah, the machine has 4 GB of RAM and isn't even close to swapping at all. 
\n\nYou might learn more from enabling trace_sort and watching the\npostmaster log entries it generates.I got this (I'm not sure how to interpret it, as there doesn't seem to be any documentation about it on the web):2009-03-05 15:28:27 CST STATEMENT:  select * from ticks where contract_id=1 order by time limit 2800000;\n\n2009-03-05 15:28:30 CST LOG:  begin tuple sort: nkeys = 1, workMem = 262144, randomAccess = f2009-03-05 15:28:30 CST STATEMENT:  explain analyze select * from ticks where contract_id=1 order by time limit 2800000;\n\n2009-03-05 16:50:31 CST LOG:  switching to external sort with 937 tapes: CPU 26.57s/4835.39u sec elapsed 4921.38 sec2009-03-05 16:50:31 CST STATEMENT:  explain analyze select * from ticks where contract_id=1 order by time limit 2800000;\n\n2009-03-05 17:00:46 CST LOG:  performsort starting: CPU 92.51s/4955.58u sec elapsed 5536.57 sec2009-03-05 17:00:46 CST STATEMENT:  explain analyze select * from ticks where contract_id=1 order by time limit 2800000;\n\n2009-03-05 17:00:50 CST LOG:  finished writing run 1 to tape 0: CPU 92.86s/4958.30u sec elapsed 5539.78 sec2009-03-05 17:00:50 CST STATEMENT:  explain analyze select * from ticks where contract_id=1 order by time limit 2800000;\n\n2009-03-05 17:00:50 CST LOG:  finished writing final run 2 to tape 1: CPU 92.88s/4958.40u sec elapsed 5539.90 sec2009-03-05 17:00:50 CST STATEMENT:  explain analyze select * from ticks where contract_id=1 order by time limit 2800000;\n\n2009-03-05 17:00:51 CST LOG:  performsort done (except 2-way final merge): CPU 92.96s/4958.55u sec elapsed 5541.10 sec2009-03-05 17:00:51 CST STATEMENT:  explain analyze select * from ticks where contract_id=1 order by time limit 2800000;\n\n2009-03-05 17:00:58 CST LOG:  external sort ended, 204674 disk blocks used: CPU 93.36s/4960.04u sec elapsed 5548.33 sec2009-03-05 17:00:58 CST STATEMENT:  explain analyze select * from ticks where contract_id=1 order by time limit 2800000;\n On the whole I think the planner\nisn't making a stupid choice here: sorting a large number of rows\nusually *is* preferable to making an indexscan over them, unless the\ntable is remarkably close to being in physical order for the index.\nSo it would be worth trying to figure out what the problem with the\nsort is.I\ndon't really understand this.  It seems to me that fetching and sorting\n30 million rows wouldn't be preferable to just fetching them in the\ncorrect order in the first place, even if it's in a random order.\nI tried another query with a much smaller result set, and the index\nscan takes 76 seconds, but the bitmap scan & sort takes 1.5 hours. \nThat's quite a difference.  I'm pretty sure the physical order of the\nindex is very different from the physical order of the table.  
", "msg_date": "Thu, 5 Mar 2009 19:42:19 -0600", "msg_from": "Jonathan Hseu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index scan plan estimates way off." }, { "msg_contents": "On Thu, Mar 5, 2009 at 1:30 PM, Tom Lane <[email protected]> wrote:\n> Jonathan Hseu <[email protected]> writes:\n>>  Sort  (cost=11684028.44..11761274.94 rows=30898601 width=40)\n>>    Sort Key: \"time\"\n>>    ->  Bitmap Heap Scan on ticks  (cost=715657.57..6995196.08 rows=30898601\n>> width=40)\n>>          Recheck Cond: (contract_id = 1)\n>>          ->  Bitmap Index Scan on contract_id_time_idx\n>> (cost=0.00..707932.92 rows=30898601 width=0)\n>>                Index Cond: (contract_id = 1)\n>> (6 rows)\n>\n>> This plan doesn't complete in a reasonable amount of time.  I end up having\n>> to kill the query after it's been running for over an hour.\n>\n> The bitmap scan should be at least as efficient as the plain indexscan,\n> so I suppose the problem is that the sort is slow.  What's the datatype\n> of \"time\"?  
Can this machine actually support 256MB+ work_mem, or is that\n> likely to be driving it into swapping?\n>\n> You might learn more from enabling trace_sort and watching the\n> postmaster log entries it generates.  On the whole I think the planner\n> isn't making a stupid choice here: sorting a large number of rows\n> usually *is* preferable to making an indexscan over them, unless the\n> table is remarkably close to being in physical order for the index.\n\nIt seems like this is only likely to be true if most of the data needs\nto be read from a magnetic disk, so that many seeks are involved.\nThat might not be the case here, since the machine has an awful lot of\nRAM.\n\n...Robert\n", "msg_date": "Thu, 5 Mar 2009 21:33:06 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index scan plan estimates way off." } ]
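(Not part of the archived exchange, just a sketch of how Robert's point could be tested: if the ticks data is mostly cached, telling the planner that random reads are cheap should tip it back toward the plain index scan. The values below are illustrative assumptions, not recommendations from the thread.)

SET effective_cache_size = '3GB';   -- roughly reflect the 4 GB of RAM mentioned above
SET random_page_cost = 2;           -- closer to seq_page_cost when most reads hit cache
EXPLAIN ANALYZE SELECT * FROM ticks WHERE contract_id=1 ORDER BY time LIMIT 2800000;
RESET random_page_cost;
RESET effective_cache_size;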
[ { "msg_contents": "I have a function, looking like this:\n\nCREATE OR REPLACE FUNCTION get_memo_display_queue_size(a_service_id integer)\n RETURNS integer AS\n$BODY$\nSELECT\n\tCOUNT(*)::integer\nFROM\n\tv_messages_memo\n\tLEFT JOIN messages_memo_displayed\n\t\tON id = message_id\nWHERE\n\tservice_id = $1\n\tAND state = 1\n\tAND admin_id IS NULL;\n$BODY$\n LANGUAGE 'sql' VOLATILE SECURITY DEFINER\n COST 100;\n\nNow, when I run that function from psql, it takes around 200ms to complete:\n\npulitzer2=# explain analyze select get_memo_display_queue_size(1829);\n QUERY PLAN \n\n----------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=219.728..219.730 \nrows=1 loops=1)\n Total runtime: 219.758 ms\n(2 rows)\n\npulitzer2=#\n\nAnd it takes around 200ms each time I run the function!\n\n\nWhen I rewrite the query so I can see queryplan, I get this:\n\ncreate view _v1 as\nSELECT\n\t*\nFROM\n\tv_messages_memo\n\tLEFT JOIN messages_memo_displayed\n\t\tON id = message_id\nWHERE\n\tstate = 1\n\tAND admin_id IS NULL;\n\npulitzer2=# EXPLAIN ANALYZE select count(*) from _v1 WHERE service_id = \n1829;\n \n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=23506.14..23506.15 rows=1 width=0) (actual \ntime=6.001..6.002 rows=1 loops=1)\n -> Nested Loop (cost=150.69..23505.59 rows=216 width=0) (actual \ntime=5.744..5.971 rows=13 loops=1)\n -> Hash Left Join (cost=150.69..11035.16 rows=2104 width=4) \n(actual time=5.721..5.793 rows=13 loops=1)\n Hash Cond: (messages.id = \nmessages_memo_displayed.message_id)\n Filter: (messages_memo_displayed.admin_id IS NULL)\n -> Bitmap Heap Scan on messages (cost=97.03..10955.11 \nrows=4209 width=4) (actual time=0.042..0.075 rows=13 loops=1)\n Recheck Cond: (service_id = 1829)\n -> Bitmap Index Scan on \nmessages_uq__service_id__tan (cost=0.00..95.98 rows=4209 width=0) \n(actual time=0.032..0.032 rows=13 loops=1)\n Index Cond: (service_id = 1829)\n -> Hash (cost=28.85..28.85 rows=1985 width=8) (actual \ntime=5.666..5.666 rows=1985 loops=1)\n -> Seq Scan on messages_memo_displayed \n(cost=0.00..28.85 rows=1985 width=8) (actual time=0.009..2.697 rows=1985 \nloops=1)\n -> Index Scan using messages_memo_pk on messages_memo \n(cost=0.00..5.91 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=13)\n Index Cond: (messages_memo.message_id = messages.id)\n Filter: ((messages_memo.state)::integer = 1)\n Total runtime: 6.079 ms\n(15 rows)\n\n\nSo I noticed that postgres is using seq_scan on messages_memo_displayed, \nalthough there is a PK (and an index) on message_id in \nmessages_memo_displayed (I'll post DDL of the tables at the end of the \npost).\n\nSo, I tried EXPLAIN ANALYZE after I forced planner not to use sequential \nscans:\n\npulitzer2=# EXPLAIN ANALYZE select count(*) from _v1 WHERE service_id = \n1829;\n \n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=25403.60..25403.61 rows=1 width=0) (actual \ntime=6.546..6.547 rows=1 loops=1)\n -> Nested Loop (cost=2048.16..25403.06 rows=216 width=0) (actual \ntime=6.287..6.512 rows=13 loops=1)\n -> Hash Left Join (cost=2048.16..12932.63 rows=2104 width=4) \n(actual time=6.268..6.340 rows=13 loops=1)\n Hash Cond: (messages.id = 
\nmessages_memo_displayed.message_id)\n Filter: (messages_memo_displayed.admin_id IS NULL)\n -> Bitmap Heap Scan on messages (cost=97.03..10955.11 \nrows=4209 width=4) (actual time=0.043..0.078 rows=13 loops=1)\n Recheck Cond: (service_id = 1829)\n -> Bitmap Index Scan on \nmessages_uq__service_id__tan (cost=0.00..95.98 rows=4209 width=0) \n(actual time=0.032..0.032 rows=13 loops=1)\n Index Cond: (service_id = 1829)\n -> Hash (cost=1926.31..1926.31 rows=1985 width=8) \n(actual time=6.211..6.211 rows=1985 loops=1)\n -> Index Scan using messages_memo_displayed_pk on \nmessages_memo_displayed (cost=0.00..1926.31 rows=1985 width=8) (actual \ntime=0.069..3.221 rows=1985 loops=1)\n -> Index Scan using messages_memo_pk on messages_memo \n(cost=0.00..5.91 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=13)\n Index Cond: (messages_memo.message_id = messages.id)\n Filter: ((messages_memo.state)::integer = 1)\n Total runtime: 6.628 ms\n(15 rows)\n\n\nNo sequential scan. So I 'changed' my function so that first row says \n'SET enable_seqscan TO false'. After that, here are the times for the \nfunction call:\n\nmike@som:~$ psql -U postgres pulitzer2\nWelcome to psql 8.3.5, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help with psql commands\n \\g or terminate with semicolon to execute query\n \\q to quit\npulitzer2=# explain analyze select get_memo_display_queue_size(1829);\n QUERY PLAN \n\n----------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=300.448..300.450 \nrows=1 loops=1)\n Total runtime: 300.491 ms\n(2 rows)\n\npulitzer2=# explain analyze select get_memo_display_queue_size(1829);\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=1.940..1.941 \nrows=1 loops=1)\n Total runtime: 1.961 ms\n(2 rows)\n\npulitzer2=# explain analyze select get_memo_display_queue_size(1829);\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=1.946..1.947 \nrows=1 loops=1)\n Total runtime: 1.973 ms\n(2 rows)\n\npulitzer2=# explain analyze select get_memo_display_queue_size(1829);\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=1.936..1.937 \nrows=1 loops=1)\n Total runtime: 1.964 ms\n(2 rows)\n\npulitzer2=#\n\nSo, first query on the same connection takes 300ms, and any \nsubsequential query on the same connection takes less than 2 ms. If I \nremove 'SET enable_seqscan TO false' from the top of the function, every \ncall to the function takes around 200-300ms.\n\n\nNow, as I was explained on pg-jdbc mailinglist, that 'SET enable_seqscan \nTO false' affects all queries on that persistent connection from tomcat, \nand It's not good solution. 
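(One way to keep that setting from leaking into everything else on the pooled connection, sketched here as an assumption to verify rather than something suggested in the thread: since 8.3 a configuration value can be attached to the function itself, so it is set only while the function runs and restored afterwards.)

CREATE OR REPLACE FUNCTION get_memo_display_queue_size(a_service_id integer)
 RETURNS integer AS
$BODY$
SELECT
	COUNT(*)::integer
FROM
	v_messages_memo
	LEFT JOIN messages_memo_displayed
		ON id = message_id
WHERE
	service_id = $1
	AND state = 1
	AND admin_id IS NULL;
$BODY$
 LANGUAGE 'sql' VOLATILE SECURITY DEFINER
 SET enable_seqscan = off;

Whether disabling sequential scans is the right long-term answer is a separate question (the later replies point at the row estimates instead), but it at least scopes the experiment to this one function.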
So I wanted to post here to ask what other \noptions do I have.\n\nWhile writing this I realized that, without forcing sequential scan out, \nI get much quicker execution times when I do:\n\nSELECT count(*) FROM _v1 WHERE service_id = 1829\n\nthen when I do\n\nSELECT get_memo_display_queue_size(1829),\n\nas seen here:\n\n\n\nmike@som:~$ psql -U postgres pulitzer2\nWelcome to psql 8.3.5, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help with psql commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\npulitzer2=# explain analyze select get_memo_display_queue_size(1829);\n QUERY PLAN \n\n----------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=259.090..259.092 \nrows=1 loops=1)\n Total runtime: 259.132 ms\n(2 rows)\n\npulitzer2=# EXPLAIN ANALYZE select count(*) from _v1 WHERE service_id = \n1829;\n \n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=23517.98..23517.99 rows=1 width=0) (actual \ntime=5.942..5.943 rows=1 loops=1)\n -> Nested Loop (cost=150.70..23517.44 rows=216 width=0) (actual \ntime=5.674..5.909 rows=13 loops=1)\n -> Hash Left Join (cost=150.70..11037.87 rows=2105 width=4) \n(actual time=5.633..5.706 rows=13 loops=1)\n Hash Cond: (messages.id = \nmessages_memo_displayed.message_id)\n Filter: (messages_memo_displayed.admin_id IS NULL)\n -> Bitmap Heap Scan on messages (cost=97.04..10957.81 \nrows=4210 width=4) (actual time=0.032..0.063 rows=13 loops=1)\n Recheck Cond: (service_id = 1829)\n -> Bitmap Index Scan on \nmessages_uq__service_id__tan (cost=0.00..95.98 rows=4210 width=0) \n(actual time=0.022..0.022 rows=13 loops=1)\n Index Cond: (service_id = 1829)\n -> Hash (cost=28.85..28.85 rows=1985 width=8) (actual \ntime=5.588..5.588 rows=1985 loops=1)\n -> Seq Scan on messages_memo_displayed \n(cost=0.00..28.85 rows=1985 width=8) (actual time=0.009..2.690 rows=1985 \nloops=1)\n -> Index Scan using messages_memo_pk on messages_memo \n(cost=0.00..5.92 rows=1 width=4) (actual time=0.008..0.010 rows=1 loops=13)\n Index Cond: (messages_memo.message_id = messages.id)\n Filter: ((messages_memo.state)::integer = 1)\n Total runtime: 6.026 ms\n(15 rows)\n\npulitzer2=# explain analyze select get_memo_display_queue_size(1829);\n QUERY PLAN \n\n----------------------------------------------------------------------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=211.712..211.714 \nrows=1 loops=1)\n Total runtime: 211.742 ms\n(2 rows)\n\npulitzer2=# EXPLAIN ANALYZE select count(*) from _v1 WHERE service_id = \n1829;\n \n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=23517.98..23517.99 rows=1 width=0) (actual \ntime=5.918..5.920 rows=1 loops=1)\n -> Nested Loop (cost=150.70..23517.44 rows=216 width=0) (actual \ntime=5.659..5.885 rows=13 loops=1)\n -> Hash Left Join (cost=150.70..11037.87 rows=2105 width=4) \n(actual time=5.638..5.711 rows=13 loops=1)\n Hash Cond: (messages.id = \nmessages_memo_displayed.message_id)\n Filter: (messages_memo_displayed.admin_id IS NULL)\n -> Bitmap Heap Scan on messages (cost=97.04..10957.81 \nrows=4210 width=4) (actual time=0.043..0.078 rows=13 loops=1)\n Recheck Cond: 
(service_id = 1829)\n -> Bitmap Index Scan on \nmessages_uq__service_id__tan (cost=0.00..95.98 rows=4210 width=0) \n(actual time=0.033..0.033 rows=13 loops=1)\n Index Cond: (service_id = 1829)\n -> Hash (cost=28.85..28.85 rows=1985 width=8) (actual \ntime=5.581..5.581 rows=1985 loops=1)\n -> Seq Scan on messages_memo_displayed \n(cost=0.00..28.85 rows=1985 width=8) (actual time=0.009..2.678 rows=1985 \nloops=1)\n -> Index Scan using messages_memo_pk on messages_memo \n(cost=0.00..5.92 rows=1 width=4) (actual time=0.006..0.008 rows=1 loops=13)\n Index Cond: (messages_memo.message_id = messages.id)\n Filter: ((messages_memo.state)::integer = 1)\n Total runtime: 5.994 ms\n(15 rows)\n\npulitzer2=#\n\n\n\nNow I'm confused, why is 'sql' function much slower than 'direct' SELECT?\n\n\tMike\n\nP.S. Here are tables definition, from psql:\n\npulitzer2=# \\d messages\n Table \"public.messages\"\n Column | Type | \n Modifiers\n--------------------+--------------------------+---------------------------------------------------------------------\n id | integer | not null default \nnextval(('public.message_id_seq'::text)::regclass)\n from | character varying(15) | not null\n to | character varying(10) | not null\n receiving_time | timestamp with time zone | not null default now()\n raw_text | character varying | not null\n keyword | character varying |\n destination_id | integer | not null\n vpn_id | integer |\n service_id | integer |\n status | integer | not null default 2\n gateway_message_id | character varying | not null\n prize_id | integer |\n tan | character varying |\nIndexes:\n \"messages_pk\" PRIMARY KEY, btree (id)\n \"messages_uq__gateway_message_id\" UNIQUE, btree (gateway_message_id)\n \"messages_uq__service_id__tan\" UNIQUE, btree (service_id, tan)\n \"messages_ix_from\" btree (\"from\")\n \"messages_ix_receiving_time__service_id__status\" btree \n(receiving_time, service_id, status)\n \"messages_ix_vpn_id\" btree (vpn_id)\nForeign-key constraints:\n \"messages_fk__destinations_id\" FOREIGN KEY (destination_id) \nREFERENCES destinations(id)\n \"messages_fk__service_prizes_prize_id\" FOREIGN KEY (prize_id) \nREFERENCES service_prizes(prize_id)\n \"messages_fk__services_id\" FOREIGN KEY (service_id) REFERENCES \nservices(id)\n \"messages_fk__vpns_id\" FOREIGN KEY (vpn_id) REFERENCES vpns(id)\n\n\npulitzer2=# \\d messages_memo\n Table \"public.messages_memo\"\n Column | Type | Modifiers\n------------------------+--------------------------+-----------\n message_id | integer | not null\n memo | character varying |\n state | dom_messages_memo_state | not null\n admin_id | integer |\n admin_change_timestamp | timestamp with time zone |\nIndexes:\n \"messages_memo_pk\" PRIMARY KEY, btree (message_id)\nForeign-key constraints:\n \"messages_memo_fk__messages_id\" FOREIGN KEY (message_id) REFERENCES \nmessages(id)\n\npulitzer2=# \\d messages_memo_displayed\nTable \"public.messages_memo_displayed\"\n Column | Type | Modifiers\n------------+---------+-----------\n message_id | integer | not null\n admin_id | integer | not null\nIndexes:\n \"messages_memo_displayed_pk\" PRIMARY KEY, btree (message_id, admin_id)\nForeign-key constraints:\n \"messages_memo_displayed_fk__admins_id\" FOREIGN KEY (admin_id) \nREFERENCES admins(id)\n \"messages_memo_displayed_fk__messages_id\" FOREIGN KEY (message_id) \nREFERENCES messages(id)\n\npulitzer2=# \\d v_messages_memo\n View \"public.v_messages_memo\"\n Column | Type | Modifiers\n--------------------+--------------------------+-----------\n id | integer |\n from | 
character varying(15) |\n to | character varying(10) |\n receiving_time | timestamp with time zone |\n raw_text | character varying |\n keyword | character varying |\n destination_id | integer |\n vpn_id | integer |\n service_id | integer |\n status | integer |\n gateway_message_id | character varying |\n prize_id | integer |\n tan | character varying |\n memo | character varying |\n state | dom_messages_memo_state |\n displayed | boolean |\nView definition:\n SELECT v_messages_full.id, v_messages_full.\"from\", \nv_messages_full.\"to\", v_messages_full.receiving_time, \nv_messages_full.raw_text, v_messages_full.keyword, \nv_messages_full.destination_id, v_messages_full.vpn_id, \nv_messages_full.service_id, v_messages_full.status, \nv_messages_full.gateway_message_id, v_messages_full.prize_id, \nv_messages_full.tan, messages_memo.memo, messages_memo.state, \nNULL::boolean AS displayed\n FROM messages v_messages_full\n JOIN messages_memo ON v_messages_full.id = messages_memo.message_id;\n\npulitzer2=#\n\n\n", "msg_date": "Mon, 09 Mar 2009 14:36:32 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Query much slower when run from postgres function" }, { "msg_contents": "Mario Splivalo <[email protected]> writes:\n> Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?\n\nUsually the reason for this is that the planner chooses a different plan\nwhen it has knowledge of the particular value you are searching for than\nwhen it does not. I suppose 'service_id' has a very skewed distribution\nand you are looking for an uncommon value?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Mar 2009 12:31:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower when run from postgres function " }, { "msg_contents": "Tom Lane <tgl 'at' sss.pgh.pa.us> writes:\n\n> Mario Splivalo <[email protected]> writes:\n>> Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?\n>\n> Usually the reason for this is that the planner chooses a different plan\n> when it has knowledge of the particular value you are searching for than\n> when it does not.\n\nYes, and since Mario is coming from JDBC, I'll share my part on\nthis: I also noticed some very wrong plans in JDBC because of the\n\"optimization\" in prepared statements consisting of planning once\nfor all runs, e.g. without any parameter values to help planning.\n\nMy understanding is that practically, it's difficult for the\nplanner to opt for an index (or not) because the selectivity of a\nparameter value may be much different when the actual value\nchanges.\n\nNormally, the planner \"thinks\" that planning is so costly that\nit's better to plan once for all runs, but practically for our\nuse, this is very wrong (it may be very good for some uses,\nthough it would be interesting to know the actual uses share).\n\nUntil it's possible to specifically tell the JDBC driver (and/or\nPG?) 
to not plan once for all runs (or is there something better\nto think of?), or the whole thing would be more clever (off the\ntop of my head, PG could try to replan with the first actual\nvalues - or first xx actual values - and if the plan is\ndifferent, then flag that prepared statement for replanning each\ntime if the overall time estimate is different enough), I've\nopted to tell the JDBC driver to use the protocol version 2, as\nprepared statements were not so much prepared back then (IIRC\nparameter interpolation is performed in driver and the whole SQL\nquery is passed each time, parsed, and planned) using\nprotocolVersion=2 in the JDBC URL. So far it worked very well for\nus.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Mon, 09 Mar 2009 17:51:24 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres function" }, { "msg_contents": "On Mon, Mar 9, 2009 at 5:51 PM, Guillaume Cottenceau <[email protected]> wrote:\n> Until it's possible to specifically tell the JDBC driver (and/or\n> PG?) to not plan once for all runs (or is there something better\n> to think of?), or the whole thing would be more clever (off the\n> top of my head, PG could try to replan with the first actual\n> values - or first xx actual values - and if the plan is\n> different, then flag that prepared statement for replanning each\n> time if the overall time estimate is different enough), I've\n> opted to tell the JDBC driver to use the protocol version 2, as\n> prepared statements were not so much prepared back then (IIRC\n> parameter interpolation is performed in driver and the whole SQL\n> query is passed each time, parsed, and planned) using\n> protocolVersion=2 in the JDBC URL. So far it worked very well for\n> us.\n\nUnnamed prepared statements are planned after binding the values,\nstarting with 8.3, or more precisely starting with 8.3.2 as early 8.3\nversions were partially broken on this behalf.\n\nIt's not always possible to use protocol version 2 as it's quite\nlimited (especially considering the exceptions returned).\n\n-- \nGuillaume\n", "msg_date": "Mon, 9 Mar 2009 18:04:23 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres\n\tfunction" }, { "msg_contents": "Guillaume Smet <[email protected]> writes:\n> Unnamed prepared statements are planned after binding the values,\n> starting with 8.3, or more precisely starting with 8.3.2 as early 8.3\n> versions were partially broken on this behalf.\n\nNo, 8.2 did it too (otherwise we wouldn't have considered 8.3.0 to be\nbroken...). The thing I'm not too clear about is what \"use of an\nunnamed statement\" translates to for a JDBC user.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Mar 2009 13:16:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres function " }, { "msg_contents": "\nTom Lane schrieb:\n> Guillaume Smet <[email protected]> writes:\n>> Unnamed prepared statements are planned after binding the values,\n>> starting with 8.3, or more precisely starting with 8.3.2 as early 8.3\n>> versions were partially broken on this behalf.\n> \n> No, 8.2 did it too (otherwise we wouldn't have considered 8.3.0 to be\n> broken...). 
The thing I'm not too clear about is what \"use of an\n> unnamed statement\" translates to for a JDBC user.\n> \n> \t\t\tregards, tom lane\n>\nI followed another post in the PHP List. Andrew McMillan was talking \nabout his experiences with udf's in Oracle and PG (--> look for subject: \nRe: [PHP] pl/php for windows). He was writing that, by using udf's, the \nplanner sometimes uses strange and not performant plans. So generally I \nunderstood that using udf's is a good idea - compared with the work I \nhave to do when I code that e.g in PHP and also compared to the better \nresulting performance with udf's. So what is your experience with using \nudf's (plpgsql)? Is there something like \"use it in this case but not in \nthat case\"?\n\nYour answers are very welcome ...\n\nCheers\n\nAndy\n\n\n", "msg_date": "Mon, 09 Mar 2009 18:42:16 +0100", "msg_from": "Andreas Wenk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres function" }, { "msg_contents": "Tom Lane wrote:\n> Mario Splivalo <[email protected]> writes:\n>> Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?\n> \n> Usually the reason for this is that the planner chooses a different plan\n> when it has knowledge of the particular value you are searching for than\n> when it does not. I suppose 'service_id' has a very skewed distribution\n> and you are looking for an uncommon value?\n\nI don't think so. Here is distribution for the messages_memo_displayed\ntable (joined with messages, just to show how many messages of each\nservice_id are there in messages_memo_displayed):\n\npulitzer2=# select service_id, count(*) from messages join\nmessages_memo_displayed on id = message_id group by service_id order by\nservice_id;\n service_id | count\n------------+-------\n 504 | 2\n 1790 | 1922\n 1814 | 1\n 1816 | 57\n 1818 | 3\n(5 rows)\n\nAnd the sizes of other tables involved:\n\npulitzer2=# select count(*) from messages_memo_displayed;\n count\n-------\n 1985\n(1 row)\n\nTime: 0.602 ms\npulitzer2=#\n\npulitzer2=# select count(*) from messages;\n count\n---------\n 1096388\n(1 row)\n\nTime: 345.267 ms\npulitzer2=# select count(*) from messages_memo;\n count\n--------\n 776238\n(1 row)\n\nTime: 133.942 ms\npulitzer2=#\n\n\nAs I've mentioned earlier, I have created an view, for the sake of this\nposting:\n\nCREATE OR REPLACE VIEW _v1 AS\n SELECT messages.id, messages.\"from\", messages.\"to\",\nmessages.receiving_time, messages.raw_text, messages.keyword,\nmessages.destination_id, messages.vpn_id, messages.service_id,\nmessages.status, messages.gateway_message_id, messages.prize_id,\nmessages.tan, messages_memo.memo, messages_memo.state,\nmessages_memo.displayed, messages_memo_displayed.admin_id\n FROM messages\n JOIN messages_memo ON messages.id = messages_memo.message_id\n LEFT JOIN messages_memo_displayed ON messages.id =\nmessages_memo_displayed.message_id\n WHERE messages_memo.state::integer = 1 AND\nmessages_memo_displayed.admin_id IS NULL;\n\nAnd then I created a function:\n\nCREATE OR REPLACE FUNCTION\n__new__get_memo_display_queue_size(a_service_id integer)\n RETURNS integer AS\n$BODY$\nSELECT\n\tCOUNT(*)::int4\nFROM\n\t_v1\nWHERE\n\tservice_id = $1\n$BODY$\n LANGUAGE 'sql' VOLATILE SECURITY DEFINER;\n\n\nNow, here are the differences:\npulitzer2=# select count(*) from _v1 where service_id = 504;\n count\n-------\n 0\n(1 row)\n\nTime: 6.101 ms\npulitzer2=# select __new__get_memo_display_queue_size(504);\n 
__new__get_memo_display_queue_size\n------------------------------------\n 0\n(1 row)\n\nTime: 322.555 ms\npulitzer2=# select count(*) from _v1 where service_id = 1790;\n count\n-------\n 1\n(1 row)\n\nTime: 25.203 ms\npulitzer2=# select __new__get_memo_display_queue_size(1790);\n __new__get_memo_display_queue_size\n------------------------------------\n 1\n(1 row)\n\nTime: 225.763 ms\npulitzer2=# select count(*) from _v1 where service_id = 1814;\n count\n-------\n 2\n(1 row)\n\nTime: 13.662 ms\npulitzer2=# select __new__get_memo_display_queue_size(1814);\n __new__get_memo_display_queue_size\n------------------------------------\n 2\n(1 row)\n\nTime: 215.251 ms\npulitzer2=# select count(*) from _v1 where service_id = 1816;\n count\n-------\n 1\n(1 row)\n\nTime: 10.111 ms\npulitzer2=# select __new__get_memo_display_queue_size(1816);\n __new__get_memo_display_queue_size\n------------------------------------\n 1\n(1 row)\n\nTime: 220.457 ms\npulitzer2=# select count(*) from _v1 where service_id = 1829;\n count\n-------\n 13\n(1 row)\n\nTime: 2.023 ms\npulitzer2=# select __new__get_memo_display_queue_size(1829);\n __new__get_memo_display_queue_size\n------------------------------------\n 13\n(1 row)\n\nTime: 221.956 ms\npulitzer2=#\n\n\nIs this difference normal? I tend to have the interface between the\ndatabase and the application trough functions, and I'd like not to\ninclude 'SELECT COUNT(*)...' in my Java code (at least, if I don't have\nto! - esp. because I'm not Java developer on the project).\n\nThen, this is also interesting, I think! I'm telling the planer never to\nuse sequential scan:\n\npulitzer2=# set enable_seqscan to false;\nSET\nTime: 0.150 ms\npulitzer2=# select __new__get_memo_display_queue_size(1829);\n __new__get_memo_display_queue_size\n------------------------------------\n 13\n(1 row)\n\nTime: 2.412 ms\npulitzer2=# select count(*) from _v1 where service_id = 1829;\n count\n-------\n 13\n(1 row)\n\nTime: 2.092 ms\npulitzer2=# select __new__get_memo_display_queue_size(1816);\n __new__get_memo_display_queue_size\n------------------------------------\n 1\n(1 row)\n\nTime: 2.473 ms\npulitzer2=# select count(*) from _v1 where service_id = 1816;\n count\n-------\n 1\n(1 row)\n\nTime: 2.117 ms\npulitzer2=#\n\n\nNow the the execution times are almost the same.\n\nSo, why this difference in the first place, and, what should I do to\nhave satisfying results when calling a postgres function?\nI could rewrite the function from plain sql to plpgsql, and add 'SET\nenable_seqscan TO false' before getting the count, and add 'SET\nenable_seqscan TO true' after getting the count, but as I was explained\non pg-jdbc mailinglist that is not the way to go.\n\nAnd I still don't understand why do I have excellent times when I force\nplanner not to use sequential scan inside the function, but when\n'calling' the query from plain sql (SELECT COUNT(*) FROM _v1 WHERE),\nexecution time is always around 2-4ms, regardles of the value of\nenable_seqscan parametar.\n\n\tMike\n", "msg_date": "Mon, 09 Mar 2009 20:13:01 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query much slower when run from postgres function" }, { "msg_contents": "Guillaume Cottenceau wrote:\n>>> Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?\n>> Usually the reason for this is that the planner chooses a different plan\n>> when it has knowledge of the particular value you are searching for than\n>> when it does not.\n> \n> Yes, and since Mario is coming 
from JDBC, I'll share my part on\n> this: I also noticed some very wrong plans in JDBC because of the\n> \"optimization\" in prepared statements consisting of planning once\n> for all runs, e.g. without any parameter values to help planning.\n> \n\nFor what is worth:\n\nWhen I call postgres function via JDBC, I have almost the same execution\ntime as when calling function from psql.\n\nWhen I call SELECT COUNT(*)... WHERE... query from JDBC, I again have\nalmost the same execution time as when executing query from psql.\n\nPostgres function takes around 200ms, and SELECT query takes around 2-4ms.\n\n\tMike\n", "msg_date": "Mon, 09 Mar 2009 20:26:05 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres function" }, { "msg_contents": "Mario Splivalo <[email protected]> writes:\n> Is this difference normal?\n\nIt's hard to tell, because you aren't comparing apples to apples.\nTry a prepared statement, like\n\nprepare foo(int) as\nSELECT\n\tCOUNT(*)::int4\nFROM\n\t_v1\nWHERE\n\tservice_id = $1\n;\n\nexecute foo(504);\n\nwhich should produce results similar to the function. You could\nthen use \"explain analyze execute\" to probe further into what's\nhappening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Mar 2009 15:51:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower when run from postgres function " }, { "msg_contents": "On Mon, Mar 9, 2009 at 1:16 PM, Tom Lane <[email protected]> wrote:\n\n> Guillaume Smet <[email protected]> writes:\n> > Unnamed prepared statements are planned after binding the values,\n> > starting with 8.3, or more precisely starting with 8.3.2 as early 8.3\n> > versions were partially broken on this behalf.\n>\n> No, 8.2 did it too (otherwise we wouldn't have considered 8.3.0 to be\n> broken...). The thing I'm not too clear about is what \"use of an\n> unnamed statement\" translates to for a JDBC user.\n>\n\nTom,\n\nThe driver will use unnamed statements for all statements until it sees the\nsame statement N times where N is 5 I believe, after that it uses a named\nstatement.\n\nDave\n\nOn Mon, Mar 9, 2009 at 1:16 PM, Tom Lane <[email protected]> wrote:\nGuillaume Smet <[email protected]> writes:\n> Unnamed prepared statements are planned after binding the values,\n> starting with 8.3, or more precisely starting with 8.3.2 as early 8.3\n> versions were partially broken on this behalf.\n\nNo, 8.2 did it too (otherwise we wouldn't have considered 8.3.0 to be\nbroken...).  The thing I'm not too clear about is what \"use of an\nunnamed statement\" translates to for a JDBC user.\nTom,The driver will use unnamed statements for all statements until it sees the same statement N times where N is 5 I believe, after that it uses a named statement. 
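(To see from psql roughly what such a named statement corresponds to on the server, a sketch along the lines of Tom's suggestion above, reusing the _v1 view defined earlier in the thread.)

PREPARE q(int) AS SELECT count(*) FROM _v1 WHERE service_id = $1;
EXPLAIN ANALYZE EXECUTE q(1829);    -- generic plan, chosen without seeing the actual value
DEALLOCATE q;
EXPLAIN ANALYZE SELECT count(*) FROM _v1 WHERE service_id = 1829;   -- plan chosen with the literal value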
Dave", "msg_date": "Mon, 9 Mar 2009 15:56:53 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres\n\tfunction" }, { "msg_contents": "\n>\n> The driver will use unnamed statements for all statements until it \n> sees the same statement N times where N is 5 I believe, after that it \n> uses a named statement.\n>\n>\nShame there's no syntax for it to pass the a table of the parameters to \nthe server when it creates the named statement as planner hints.\n\nJames\n\n\n", "msg_date": "Mon, 09 Mar 2009 20:35:14 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres function" }, { "msg_contents": "1. And how do you do that from JDBC? There is no standard concept of 'unnamed' prepared statements in most database APIs, and if there were the behavior would be db specific. Telling PG to plan after binding should be more flexible than unnamed prepared statements - or at least more transparent to standard APIs. E.g. SET plan_prepared_postbind='true'.\n 2. How do you use those on a granularity other than global from jdbc? ( - I tried setting max_prepared_transactions to 0 but this didn't seem to work either, and it would be global if it did). Prepared statements are still great for things like selecting off a primary key, or especially inserts. Controls at the connection or user level would be significantly more valuable than global ones.\n 3. Is it possible to test them from psql? (documentation is weak, PREPARE requires a name, functions require names, etc .. C api has docs but that's not of use for most).\n\nI'd love to know if there were answers to the above that were workable.\n\nIn the end, we had to write our own client side code to deal with sql injection safely and avoid jdbc prepared statements to get acceptable performance in many cases (all cases involving partitioned tables, a few others). At least dollar-quotes are powerful and useful for dealing with this. Since the most important benefit of prepared statements is code clarity and sql injection protection, its sad to see weakness in control/configuration over prepared statement behavior at the parse/plan level get in the way of using them for those benefits.\n\n\n\nOn 3/9/09 9:04 AM, \"Guillaume Smet\" <[email protected]> wrote:\n\nOn Mon, Mar 9, 2009 at 5:51 PM, Guillaume Cottenceau <[email protected]> wrote:\n> Until it's possible to specifically tell the JDBC driver (and/or\n> PG?) to not plan once for all runs (or is there something better\n> to think of?), or the whole thing would be more clever (off the\n> top of my head, PG could try to replan with the first actual\n> values - or first xx actual values - and if the plan is\n> different, then flag that prepared statement for replanning each\n> time if the overall time estimate is different enough), I've\n> opted to tell the JDBC driver to use the protocol version 2, as\n> prepared statements were not so much prepared back then (IIRC\n> parameter interpolation is performed in driver and the whole SQL\n> query is passed each time, parsed, and planned) using\n> protocolVersion=2 in the JDBC URL. 
So far it worked very well for\n> us.\n\nUnnamed prepared statements are planned after binding the values,\nstarting with 8.3, or more precisely starting with 8.3.2 as early 8.3\nversions were partially broken on this behalf.\n\nIt's not always possible to use protocol version 2 as it's quite\nlimited (especially considering the exceptions returned).\n\n--\nGuillaume\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\nRe: [JDBC] [PERFORM] Query much slower when run from postgres  function\n\n\n\nAnd how do you do that from JDBC?  There is no standard concept of ‘unnamed’ prepared statements in most database APIs, and if there were the behavior would be db specific.  Telling PG to plan after binding should be more flexible than unnamed prepared statements — or at least more transparent to standard APIs.  E.g. SET plan_prepared_postbind=’true’.\nHow do you use those on a granularity other than global from jdbc?    ( — I tried setting max_prepared_transactions to 0 but this didn’t seem to work either, and it would be global if it did).  Prepared statements are still great for things like selecting off a primary key, or especially inserts.  Controls at the connection or user level would be significantly more valuable than global ones.\nIs it possible to test them from psql? (documentation is weak, PREPARE requires a name, functions require names, etc .. C api has docs but that’s not of use for most).\n\nI’d love to know if there were answers to the above that were workable.\n\nIn the end, we had to write our own client side code to deal with sql injection safely and avoid jdbc prepared statements to get acceptable performance in many cases (all cases involving partitioned tables, a few others).  At least dollar-quotes are powerful and useful for dealing with this.  Since the most important benefit of prepared statements is code clarity and sql injection protection, its sad to see weakness in control/configuration over prepared statement behavior at the parse/plan level get in the way of using them for those benefits.  \n\n\n\nOn 3/9/09 9:04 AM, \"Guillaume Smet\" <[email protected]> wrote:\n\nOn Mon, Mar 9, 2009 at 5:51 PM, Guillaume Cottenceau <[email protected]> wrote:\n> Until it's possible to specifically tell the JDBC driver (and/or\n> PG?) to not plan once for all runs (or is there something better\n> to think of?), or the whole thing would be more clever (off the\n> top of my head, PG could try to replan with the first actual\n> values - or first xx actual values - and if the plan is\n> different, then flag that prepared statement for replanning each\n> time if the overall time estimate is different enough), I've\n> opted to tell the JDBC driver to use the protocol version 2, as\n> prepared statements were not so much prepared back then (IIRC\n> parameter interpolation is performed in driver and the whole SQL\n> query is passed each time, parsed, and planned) using\n> protocolVersion=2 in the JDBC URL. 
So far it worked very well for\n> us.\n\nUnnamed prepared statements are planned after binding the values,\nstarting with 8.3, or more precisely starting with 8.3.2 as early 8.3\nversions were partially broken on this behalf.\n\nIt's not always possible to use protocol version 2 as it's quite\nlimited (especially considering the exceptions returned).\n\n--\nGuillaume\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 9 Mar 2009 13:56:01 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres\n function" }, { "msg_contents": "Dave Cramer wrote:\n> \n> \n> On Mon, Mar 9, 2009 at 1:16 PM, Tom Lane <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Guillaume Smet <[email protected]\n> <mailto:[email protected]>> writes:\n> > Unnamed prepared statements are planned after binding the values,\n> > starting with 8.3, or more precisely starting with 8.3.2 as early 8.3\n> > versions were partially broken on this behalf.\n> \n> No, 8.2 did it too (otherwise we wouldn't have considered 8.3.0 to be\n> broken...). The thing I'm not too clear about is what \"use of an\n> unnamed statement\" translates to for a JDBC user.\n> \n> \n> Tom,\n> \n> The driver will use unnamed statements for all statements until it sees\n> the same statement N times where N is 5 I believe, after that it uses a\n> named statement.\n\nRight, with the caveat that \"the same statement\" means \"exactly the same\nPreparedStatement object\". If you happen to run the same (textual) query\nvia two different PreparedStatement objects, they're still considered\ndifferent queries for the purposes of this threshold.\n\nYou can also tune the threshold via the prepareThreshold parameter in\nthe driver URL, or use org.postgresql.PGStatement.setPrepareThreshold\n(an extension interface implemented by the driver on its Statement\nobjects) on a per-statement basis.\n\nprepareThreshold=0 is a special value that means \"never use a named\nstatement\".\n\nThe idea behind the threshold is that if a PreparedStatement object is\nreused, that's a fairly good indication that the application wants to\nrun the same query many times with different parameters (since it's\ngoing to the trouble of preserving the statement object for reuse). But\nit's all tunable if needed.\n\nAlso see http://jdbc.postgresql.org/documentation/head/server-prepare.html\n\n-O\n", "msg_date": "Tue, 10 Mar 2009 10:31:42 +1300", "msg_from": "Oliver Jowett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres \tfunction" }, { "msg_contents": "Scott Carey wrote:\n> \n> 1. And how do you do that from JDBC? There is no standard concept of\n> ‘unnamed’ prepared statements in most database APIs, and if there\n> were the behavior would be db specific. Telling PG to plan after\n> binding should be more flexible than unnamed prepared statements —\n> or at least more transparent to standard APIs. E.g. SET\n> plan_prepared_postbind=’true’.\n\nI've suggested that as a protocol-level addition in the past, but it\nwould mean a new protocol version. The named vs. unnamed statement\nbehaviour was an attempt to crowbar it into the protocol without\nrequiring a version change. 
If it's really a planner behaviour thing,\nmaybe it does belong at the SET level, but I believe that there's\nusually an aversion to having to SET anything per query to get\nreasonable plans.\n\n> 2. How do you use those on a granularity other than global from jdbc?\n\nprepareThreshold=N (as part of a connection URL),\norg.postgresql.PGConnection.setPrepareThreshold() (connection-level\ngranularity), org.postgresql.PGStatement.setPrepareThreshold()\n(statement-level granularity). See the driver docs.\n\n> ( — I tried setting max_prepared_transactions to 0 but this\n> didn’t seem to work either, and it would be global if it did).\n\nmax_prepared_transactions is to do with two-phase commit, not prepared\nstatements.\n\n> In the end, we had to write our own client side code to deal with sql\n> injection safely and avoid jdbc prepared statements to get acceptable\n> performance in many cases (all cases involving partitioned tables, a few\n> others). At least dollar-quotes are powerful and useful for dealing\n> with this. Since the most important benefit of prepared statements is\n> code clarity and sql injection protection, its sad to see weakness in\n> control/configuration over prepared statement behavior at the parse/plan\n> level get in the way of using them for those benefits. \n\nIt's unfortunate that ended up doing this, because it >is< all\nconfigurable on the JDBC side. Did you ask on pgsql-jdbc?\n\n-O\n", "msg_date": "Tue, 10 Mar 2009 10:40:26 +1300", "msg_from": "Oliver Jowett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres function" }, { "msg_contents": "Tom Lane wrote:\n> Mario Splivalo <[email protected]> writes:\n>> Is this difference normal?\n> \n> It's hard to tell, because you aren't comparing apples to apples.\n> Try a prepared statement, like\n[...cut...]\n> which should produce results similar to the function. You could\n> then use \"explain analyze execute\" to probe further into what's\n> happening.\n\nHuh, thnx! :) This got me even more confused:\n\npulitzer2=# prepare foo(int) as select count(*) from _v1 where\nservice_id = $1;\nPREPARE\nTime: 4.425 ms\npulitzer2=# execute foo(1816);\n count\n-------\n 1\n(1 row)\n\nTime: 248.301 ms\npulitzer2=# select __new__get_memo_display_queue_size(1816);\n __new__get_memo_display_queue_size\n------------------------------------\n 1\n(1 row)\n\nTime: 218.914 ms\npulitzer2=#\n\nSo, it is the same. 
When I do EXPLAIN ANALYZE EXECUTE I get completely\ndifferent execution plan:\n\npulitzer2=# explain analyze execute foo(1816);\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=40713.22..40713.23 rows=1 width=0) (actual\ntime=475.649..475.650 rows=1 loops=1)\n -> Hash Join (cost=21406.91..40711.65 rows=626 width=0) (actual\ntime=183.004..475.629 rows=1 loops=1)\n Hash Cond: (messages_memo.message_id = messages.id)\n -> Seq Scan on messages_memo (cost=0.00..18630.83 rows=106825\nwidth=4) (actual time=0.083..324.607 rows=107608 loops=1)\n Filter: ((state)::integer = 1)\n -> Hash (cost=21326.61..21326.61 rows=6424 width=4) (actual\ntime=5.868..5.868 rows=5 loops=1)\n -> Hash Left Join (cost=341.64..21326.61 rows=6424\nwidth=4) (actual time=5.650..5.855 rows=5 loops=1)\n Hash Cond: (messages.id =\nmessages_memo_displayed.message_id)\n Filter: (messages_memo_displayed.admin_id IS NULL)\n -> Bitmap Heap Scan on messages\n(cost=287.98..21192.42 rows=12848 width=4) (actual time=0.049..0.169\nrows=62 loops=1)\n Recheck Cond: (service_id = $1)\n -> Bitmap Index Scan on\nmessages_uq__service_id__tan (cost=0.00..284.77 rows=12848 width=0)\n(actual time=0.038..0.038 rows=62 loops=1)\n Index Cond: (service_id = $1)\n -> Hash (cost=28.85..28.85 rows=1985 width=8)\n(actual time=5.564..5.564 rows=1985 loops=1)\n -> Seq Scan on messages_memo_displayed\n(cost=0.00..28.85 rows=1985 width=8) (actual time=0.008..2.674 rows=1985\nloops=1)\n Total runtime: 475.761 ms\n(16 rows)\n\nTime: 476.280 ms\npulitzer2=#\n\n\nThere is a sequential scan on messages_memo, a scan that doesn't show up\nwhen I just do 'SELECT COUNT(*)...'.\n\nWhen I do 'set enable_seqscan to false' before i do PREPARE, here is the\nexecution plan:\n\npulitzer2=# explain analyze execute foo(1816);\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=55624.91..55624.92 rows=1 width=0) (actual\ntime=7.122..7.123 rows=1 loops=1)\n -> Nested Loop (cost=2239.11..55623.34 rows=626 width=0) (actual\ntime=7.098..7.108 rows=1 loops=1)\n -> Hash Left Join (cost=2239.11..23224.07 rows=6424 width=4)\n(actual time=6.663..6.962 rows=5 loops=1)\n Hash Cond: (messages.id = messages_memo_displayed.message_id)\n Filter: (messages_memo_displayed.admin_id IS NULL)\n -> Bitmap Heap Scan on messages (cost=287.98..21192.42\nrows=12848 width=4) (actual time=0.138..0.373 rows=62 loops=1)\n Recheck Cond: (service_id = $1)\n -> Bitmap Index Scan on\nmessages_uq__service_id__tan (cost=0.00..284.77 rows=12848 width=0)\n(actual time=0.121..0.121 rows=62 loops=1)\n Index Cond: (service_id = $1)\n -> Hash (cost=1926.31..1926.31 rows=1985 width=8)\n(actual time=6.430..6.430 rows=1985 loops=1)\n -> Index Scan using messages_memo_displayed_pk on\nmessages_memo_displayed (cost=0.00..1926.31 rows=1985 width=8) (actual\ntime=0.063..3.320 rows=1985 loops=1)\n -> Index Scan using messages_memo_pk on messages_memo\n(cost=0.00..5.03 rows=1 width=4) (actual time=0.025..0.025 rows=0 loops=5)\n Index Cond: (messages_memo.message_id = messages.id)\n Filter: ((messages_memo.state)::integer = 1)\n Total runtime: 7.260 ms\n(15 rows)\n\nTime: 7.786 ms\n\nI have no idea why postgres chooses sequential scan over messages_memo\nwhen I PREPARE the select. 
For now I'll go with plpgsql function\n(instead of sql function) because I can do 'set enable_seqscan to true'\njust before RETURNing the value. That way, when I call the function via\nJDBC I have short execution times.\n\n\tMike\n\n\n", "msg_date": "Mon, 09 Mar 2009 23:13:32 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query much slower when run from postgres function" }, { "msg_contents": "Mario Splivalo <[email protected]> writes:\n> So, it is the same. When I do EXPLAIN ANALYZE EXECUTE I get completely\n> different execution plan:\n> ...\n> -> Bitmap Heap Scan on messages\n> (cost=287.98..21192.42 rows=12848 width=4) (actual time=0.049..0.169\n> rows=62 loops=1)\n> Recheck Cond: (service_id = $1)\n> -> Bitmap Index Scan on\n> messages_uq__service_id__tan (cost=0.00..284.77 rows=12848 width=0)\n> (actual time=0.038..0.038 rows=62 loops=1)\n> Index Cond: (service_id = $1)\n\nWell, there's the problem: without knowing what the specific service_id\nis, the planner is estimating 12848 matching rows, which is evidently\noff by a couple of orders of magnitude. And that's pushing it to adopt\na hash join instead of a nestloop. Are you sure the stats on this\ntable are up to date? Maybe you need to increase the stats target?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Mar 2009 19:36:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower when run from postgres function " }, { "msg_contents": "On 3/9/09 1:40 PM, \"Oliver Jowett\" <[email protected]> wrote:\n\nScott Carey wrote:\n>\n> 1. And how do you do that from JDBC? There is no standard concept of\n\nI've suggested that as a protocol-level addition in the past, but it\nwould mean a new protocol version. The named vs. unnamed statement\nbehaviour was an attempt to crowbar it into the protocol without\nrequiring a version change. If it's really a planner behaviour thing,\nmaybe it does belong at the SET level, but I believe that there's\nusually an aversion to having to SET anything per query to get\nreasonable plans.\n\nThere's a strong aversion, but I find myself re-writing queries to get good plans, a de-facto hint really. Its mandatory in the land of partitioned tables and large aggregates, much more rare elsewhere. I have a higher aversion to rewriting queries then telling the planner to use more information or to provide it with more information.\n\n> 2. How do you use those on a granularity other than global from jdbc?\n\nprepareThreshold=N (as part of a connection URL),\norg.postgresql.PGConnection.setPrepareThreshold() (connection-level\ngranularity), org.postgresql.PGStatement.setPrepareThreshold()\n(statement-level granularity). See the driver docs.\n\nI know I've tried the connection URL thing one time and that did not fix the performance problem. I did not know if it was user error. Without knowing how to trace what the query really was or if the setting was working properly, or having any other easy avenue to see if an unnamed prepared statement even fixed my problem, I had to resort to what would clearly fix it (there was only 1 day to fix it, and there was one proven way to fix it). I would love to be able to try out an unnamed prepared statement in psql, to prove that it even works to solve the query planning issue or not. 
In the end, it was simpler to change the code and probably less time consuming than all the options other than the connection URL setting.\n\n\n> ( - I tried setting max_prepared_transactions to 0 but this\n> didn't seem to work either, and it would be global if it did).\n\nmax_prepared_transactions is to do with two-phase commit, not prepared\nstatements.\n\nThanks! Good to know, the configuration documentation could be more clear... I got the two prepares confused.\n\n> In the end, we had to write our own client side code to deal with sql\n> injection safely and avoid jdbc prepared statements to get acceptable\n\nIt's unfortunate that ended up doing this, because it >is< all\nconfigurable on the JDBC side. Did you ask on pgsql-jdbc?\n\n-O\n\nI searched the archives, and did find a reference to the connection URL setting and recall trying that but not seeing the expected result. Rather than debugging, a decision was made to go with the solution that worked and be done with it. This was also when we were in production on 8.3.1 or 8.3.2 or so, so the bugs there might have caused some confusion in the rush to solve the issue.\n\nI'm still not sure that unnamed prepared statements will help my case. If the driver is using unnamed prepared statements for the first 5 uses of a query then naming it, I should see the first 5 uses significantly faster than those after. I'll keep an eye out for that in the places where we are still using prepared statements that can cause problems and in the old log files. Until another issue comes up, there isn't sufficient motivation to fix what is no longer broken for us.\n\nThanks for the good info on dealing with configuring unnamed prepared statements with the jdbc driver. That may come in very handy later.\n\n\n\nRe: [JDBC] [PERFORM] Query much slower when run from postgres  function\n\n\nOn 3/9/09 1:40 PM, \"Oliver Jowett\" <[email protected]> wrote:\n\nScott Carey wrote:\n>\n>    1. And how do you do that from JDBC?  There is no standard concept of\n\nI've suggested that as a protocol-level addition in the past, but it\nwould mean a new protocol version. The named vs. unnamed statement\nbehaviour was an attempt to crowbar it into the protocol without\nrequiring a version change. If it's really a planner behaviour thing,\nmaybe it does belong at the SET level, but I believe that there's\nusually an aversion to having to SET anything per query to get\nreasonable plans.\n\nThere’s a strong aversion, but I find myself re-writing queries to get good plans, a de-facto hint really.  Its mandatory in the land of partitioned tables and large aggregates, much more rare elsewhere.  I have a higher aversion to rewriting queries then telling the planner to use more information or to provide it with more information. \n\n>    2. How do you use those on a granularity other than global from jdbc?\n\nprepareThreshold=N (as part of a connection URL),\norg.postgresql.PGConnection.setPrepareThreshold() (connection-level\ngranularity), org.postgresql.PGStatement.setPrepareThreshold()\n(statement-level granularity). See the driver docs.\n\nI know I’ve tried the connection URL thing one time and that did not fix the performance problem.  I did not know if it was user error.  Without knowing how to trace what the query really was or if the setting was working properly, or having any other easy avenue to see if an unnamed prepared statement even fixed my problem, I had to resort to what would clearly fix it (there was only 1 day to fix it, and there was one proven way to fix it).  
I would love to be able to try out an unnamed prepared statement in psql, to prove that it even works to solve the query planning issue or not.  In the end, it was simpler to change the code and probably less time consuming than all the options other than the connection URL setting.\n\n\n>          ( — I tried setting max_prepared_transactions to 0 but this\n>       didn’t seem to work either, and it would be global if it did).\n\nmax_prepared_transactions is to do with two-phase commit, not prepared\nstatements.\n\nThanks! Good to know, the configuration documentation could be more clear... I got the two prepares confused.\n\n> In the end, we had to write our own client side code to deal with sql\n> injection safely and avoid jdbc prepared statements to get acceptable\n\nIt's unfortunate that ended up doing this, because it >is< all\nconfigurable on the JDBC side. Did you ask on pgsql-jdbc?\n\n-O\n\nI searched the archives, and did find a reference to the connection URL setting and recall trying that but not seeing the expected result.  Rather than debugging, a decision was made to go with the solution that worked and be done with it.  This was also when we were in production on 8.3.1 or  8.3.2 or so, so the bugs there might have caused some confusion in the rush to solve the issue.\n\nI’m still not sure that unnamed prepared statements will help my case.  If the driver is using unnamed prepared statements for the first 5 uses of a query then naming it, I should see the first 5 uses significantly faster than those after.  I’ll keep an eye out for that in the places where we are still using prepared statements that can cause problems and in the old log files.  Until another issue comes up, there isn’t sufficient motivation to fix what is no longer broken for us.\n\nThanks for the good info on dealing with configuring unnamed prepared statements with the jdbc driver.  That may come in very handy later.", "msg_date": "Mon, 9 Mar 2009 20:38:50 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres\n function" }, { "msg_contents": "Oliver Jowett <oliver 'at' opencloud.com> writes:\n\n> The idea behind the threshold is that if a PreparedStatement object is\n> reused, that's a fairly good indication that the application wants to\n> run the same query many times with different parameters (since it's\n> going to the trouble of preserving the statement object for reuse). But\n\nOr it may just need the safeness of driver/database parameter\n\"interpolation\", to get a \"free\" efficient safeguard against SQL\ninjection. As for myself, I have found no other way to obtain\ndriver/database parameter interpolation. So sometimes I use\nprepared statements even for running a query only once. 
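As a side note on the max_prepared_transactions mix-up above: that setting only caps two-phase-commit transactions, which are a different feature from prepared statements. A minimal illustration, using a hypothetical accounts table; the first half needs max_prepared_transactions > 0 to run at all:

-- Two-phase commit is what max_prepared_transactions limits:
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
PREPARE TRANSACTION 'demo_2pc';
COMMIT PREPARED 'demo_2pc';

-- A prepared statement is unrelated and not counted against that limit:
PREPARE get_account(int) AS SELECT * FROM accounts WHERE id = $1;
EXECUTE get_account(1);
DEALLOCATE get_account;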
I am\nunsure it is a widely used pattern, but SQL injection being quite\nimportant to fight against, I think I may not be the only one.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Tue, 10 Mar 2009 09:05:52 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres \tfunction" }, { "msg_contents": "Guillaume Cottenceau wrote:\n> Oliver Jowett <oliver 'at' opencloud.com> writes:\n> \n>> The idea behind the threshold is that if a PreparedStatement object is\n>> reused, that's a fairly good indication that the application wants to\n>> run the same query many times with different parameters (since it's\n>> going to the trouble of preserving the statement object for reuse). But\n> \n> Or it may just need the safeness of driver/database parameter\n> \"interpolation\", to get a \"free\" efficient safeguard against SQL\n> injection.\n\nIn which case, the application usually throws the PreparedStatement\nobject away after executing it once, and the threshold is never reached.\nAs I said, the application has to do extra work to preserve exactly the\nsame PreparedStatement object for reuse before the threshold applies, at\nwhich point it's reasonable to assume that it could be a\nperformance-sensitive query that would benefit from preserving the query\nplan and avoiding parse/plan costs on every execution.\n\nIt's just a heuristic because there *is* a tradeoff and many/most\napplications are not going to be customized specifically to know about\nthat tradeoff. And it's configurable because the tradeoff is not the\nsame in every case.\n\nDo you have a suggestion for a better way to decide when to use a named\nstatement?\n\n-O\n", "msg_date": "Tue, 10 Mar 2009 21:39:44 +1300", "msg_from": "Oliver Jowett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres \tfunction" }, { "msg_contents": "Oliver Jowett <oliver 'at' opencloud.com> writes:\n\n> Guillaume Cottenceau wrote:\n>> Oliver Jowett <oliver 'at' opencloud.com> writes:\n>> \n>>> The idea behind the threshold is that if a PreparedStatement object is\n>>> reused, that's a fairly good indication that the application wants to\n>>> run the same query many times with different parameters (since it's\n>>> going to the trouble of preserving the statement object for reuse). 
But\n>> \n>> Or it may just need the safeness of driver/database parameter\n>> \"interpolation\", to get a \"free\" efficient safeguard against SQL\n>> injection.\n>\n> In which case, the application usually throws the PreparedStatement\n> object away after executing it once, and the threshold is never reached.\n> As I said, the application has to do extra work to preserve exactly the\n> same PreparedStatement object for reuse before the threshold applies, at\n> which point it's reasonable to assume that it could be a\n> performance-sensitive query that would benefit from preserving the query\n> plan and avoiding parse/plan costs on every execution.\n\nThanks for the clarification!\n\nThat may just be me, but I see two issues here: first, parsing\nand planning are tied together, but parsing should be always done\nfirst time only as I see no point in reparsing in subsequent uses\nof the PreparedStatement?; second, it's still questionable that a\n\"performance-sensitive\" query should mean benefiting from\npreserving the query plan: I have seen dramatic use cases where\nthe preserved query plan opted for a seqscan and then the query\nwas orders of magnitude slower than it should because the actual\nthen used values would have qualified for an indexscan.\n\n> It's just a heuristic because there *is* a tradeoff and many/most\n> applications are not going to be customized specifically to know about\n> that tradeoff. And it's configurable because the tradeoff is not the\n> same in every case.\n\nYes, and it's well documented, actually. I obviously didn't read\nit carefully enough last time :/ I guess my approach of using the\nprotocol version 2 should be replaced by unsetting the prepared\nthreshold.. I think I came up with that workaround after that\npost from Kris:\n\nhttp://archives.postgresql.org/pgsql-jdbc/2008-03/msg00070.php\n\nbecause strangely, you and I intervened in that thread, but the\nprepared threshold issue was not raised, so I followed the\nprotocolVersion=2 path. Did I miss something - e.g. is the topic\ntoday different from the topic back then, for some reason? Am I\nwrong in assuming that your \"please replan this statement every\ntime you get new parameters\" suggestion is nearly-achievable with\nunsetting the prepared threshold (\"nearly\" being the difference\nbetween replanning always, and replanning only when parameters\nare new)?\n\nAnyway, documentation-wise, I've tried to think of how the\ndocumentation could be a little more aggressive with the warning:\n\nhttp://zarb.org/~gc/t/jdbc-more-cautious-preparedstatements.diff\n\nThat said, there's something more: when the documentation says:\n\n There are a number of ways to enable server side prepared\n statements depending on your application's needs. The general\n method is to set a threshold for a PreparedStatement.\n\nI assume that by default server side prepared statements are\n*not* enabled, although it seems to be the case, with a threshold\nof 5 as a simple test shows when using driver 8.3-604.jdbc3 (on\nPG 8.3.6).\n\nI think that either they should not be enabled by default\n(really, it could be better with, but it could be so much worse\nthat is it really a good idea to make a \"dropin\" use of the\ndriver use it?), or the documentation should clearly state they\nare, and add even more warnings about potential drawbacks. WDYT?\n\nhttp://zarb.org/~gc/t/jdbc-more-cautious-preparedstatements2.diff\n\nBtw, how can the doc be built? 
\"ant doc\" failed on missing\ndocbook.stylesheet but I was unable to find how to set that\nvalue.\n\n> Do you have a suggestion for a better way to decide when to use a named\n> statement?\n\nOh, I feel I don't have the qualifications to answer that\nquestion, sorry! The only thing I could think of, was what I\ntalked about in a previous mail, e.g. save all plans of the first\nxx queries before reaching the threshold, and then when the\nthreshold is reached, compare the global cost estimates of the\nsaved plans, and do not activate server side prepare if they are\ntoo different, as caching the plan for that query would probably\nyield too slow results sometimes. Ideally, I guess a new\nPG-specific method should be added to activate that feature (and\nset the value for \"are the plans too different?\"). But bear in\nmind that it may be a stupid idea :)\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Tue, 10 Mar 2009 10:41:19 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres \tfunction" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Mario Splivalo <[email protected]> writes:\n>> Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?\n>\n> Usually the reason for this is that the planner chooses a different plan\n> when it has knowledge of the particular value you are searching for than\n> when it does not. I suppose 'service_id' has a very skewed distribution\n> and you are looking for an uncommon value?\n\nFor a prepared statement, could the planner produce *several* plans,\nif it guesses great sensitivity to the parameter values? Then it\ncould choose amongst them at run time.\n\n- FChE\n", "msg_date": "Tue, 10 Mar 2009 10:40:51 -0400", "msg_from": "[email protected] (Frank Ch. Eigler)", "msg_from_op": false, "msg_subject": "Re: Query much slower when run from postgres function" }, { "msg_contents": "\n\nOn Tue, 10 Mar 2009, Guillaume Cottenceau wrote:\n\n> Btw, how can the doc be built? \"ant doc\" failed on missing\n> docbook.stylesheet but I was unable to find how to set that\n> value.\n>\n\nCreate a file named build.local.properties and put something like the \nfollowing in it:\n\ndocbook.stylesheet=/usr/share/sgml/docbook/stylesheet/xsl/nwalsh/xhtml/chunk.xsl\ndocbook.dtd=/usr/share/sgml/docbook/dtd/xml/4.2\n\nKris Jurka\n\n", "msg_date": "Tue, 10 Mar 2009 12:55:24 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query much slower when run from postgres\n function" }, { "msg_contents": "[email protected] (Frank Ch. Eigler) writes:\n> For a prepared statement, could the planner produce *several* plans,\n> if it guesses great sensitivity to the parameter values? Then it\n> could choose amongst them at run time.\n\nWe've discussed that in the past. \"Choose at runtime\" is a bit more\neasily said than done though --- you can't readily flip between plan\nchoices part way through, if you've already emitted some result rows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Mar 2009 13:20:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower when run from postgres function " }, { "msg_contents": "On Mar 9, 2009, at 8:36 AM, Mario Splivalo wrote:\n> Now, as I was explained on pg-jdbc mailinglist, that 'SET \n> enable_seqscan TO false' affects all queries on that persistent \n> connection from tomcat, and It's not good solution. 
So I wanted to \n> post here to ask what other options do I have.\n\n\nFWIW, you can avoid that with SET LOCAL (though it would still affect \nthe rest of the transaction).\n\nYou could also store whatever enable_seqscan was set to in a variable \nbefore setting it to false and then set it back when you're done.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Sat, 14 Mar 2009 09:31:31 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower when run from postgres function" }, { "msg_contents": "On Mar 10, 2009, at 12:20 PM, Tom Lane wrote:\n> [email protected] (Frank Ch. Eigler) writes:\n>> For a prepared statement, could the planner produce *several* plans,\n>> if it guesses great sensitivity to the parameter values? Then it\n>> could choose amongst them at run time.\n>\n> We've discussed that in the past. \"Choose at runtime\" is a bit more\n> easily said than done though --- you can't readily flip between plan\n> choices part way through, if you've already emitted some result rows.\n\nTrue, but what if we planned for both high and low cardinality cases, \nassuming that pg_stats indicated both were a possibility? We would \nhave to store multiple plans for one prepared statement, which \nwouldn't work well for more complex queries (if you did high and low \ncardinality estimates for each table you'd end up with 2^r plans, \nwhere r is the number of relations), so we'd need a way to cap it \nsomehow. Of course, whether that's easier than having the ability to \nthrow out a current result set and start over with a different plan \nis up for debate...\n\nOn a related note, I wish there was a way to tell plpgsql not to pre- \nplan a query. Sure, you can use EXECUTE, but building the query plan \nis a serious pain in the rear.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Sat, 14 Mar 2009 09:42:22 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower when run from postgres function " }, { "msg_contents": "2009/3/14 decibel <[email protected]>\n\n> On Mar 10, 2009, at 12:20 PM, Tom Lane wrote:\n>\n>> [email protected] (Frank Ch. Eigler) writes:\n>>\n>>> For a prepared statement, could the planner produce *several* plans,\n>>> if it guesses great sensitivity to the parameter values? Then it\n>>> could choose amongst them at run time.\n>>>\n>>\n>> We've discussed that in the past. \"Choose at runtime\" is a bit more\n>> easily said than done though --- you can't readily flip between plan\n>> choices part way through, if you've already emitted some result rows.\n>>\n>\n> True, but what if we planned for both high and low cardinality cases,\n> assuming that pg_stats indicated both were a possibility? We would have to\n> store multiple plans for one prepared statement, which wouldn't work well\n> for more complex queries (if you did high and low cardinality estimates for\n> each table you'd end up with 2^r plans, where r is the number of relations),\n> so we'd need a way to cap it somehow. Of course, whether that's easier than\n> having the ability to throw out a current result set and start over with a\n> different plan is up for debate...\n>\n> On a related note, I wish there was a way to tell plpgsql not to pre-plan a\n> query. 
Sure, you can use EXECUTE, but building the query plan is a serious\n> pain in the rear.\n>\n\nI'd say it would be great for PostgreSQL to replan each execution of query\nautomatically if execution plan tells it would take some factor (say, x100,\nconfigurable) more time to execute query then to plan. In this case it would\nnot spend many time planning for small queries, but will use the most\nefficient plan possible for long queries. And even if a query can't be run\nbetter, it would spend only 1/factor time more (1% more time for factor of\n100).\n\n2009/3/14 decibel <[email protected]>\nOn Mar 10, 2009, at 12:20 PM, Tom Lane wrote:\n\[email protected] (Frank Ch. Eigler) writes:\n\nFor a prepared statement, could the planner produce *several* plans,\nif it guesses great sensitivity to the parameter values?  Then it\ncould choose amongst them at run time.\n\n\nWe've discussed that in the past.  \"Choose at runtime\" is a bit more\neasily said than done though --- you can't readily flip between plan\nchoices part way through, if you've already emitted some result rows.\n\n\nTrue, but what if we planned for both high and low cardinality cases, assuming that pg_stats indicated both were a possibility? We would have to store multiple plans for one prepared statement, which wouldn't work well for more complex queries (if you did high and low cardinality estimates for each table you'd end up with 2^r plans, where r is the number of relations), so we'd need a way to cap it somehow. Of course, whether that's easier than having the ability to throw out a current result set and start over with a different plan is up for debate...\n\nOn a related note, I wish there was a way to tell plpgsql not to pre-plan a query. Sure, you can use EXECUTE, but building the query plan is a serious pain in the rear.I'd say it would be great for PostgreSQL to replan each execution of query automatically if execution plan tells it would take some factor (say, x100, configurable) more time to execute query then to plan. In this case it would not spend many time planning for small queries, but will use the most efficient plan possible for long queries. And even if a query can't be run better, it would spend only 1/factor time more (1% more time for factor of 100).", "msg_date": "Mon, 16 Mar 2009 15:04:18 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower when run from postgres function" } ]
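On the plpgsql pre-planning point: the usual escape hatch is EXECUTE with a dynamically built string, which is planned on every call with the actual values visible. A sketch with hypothetical names — 8.4's EXECUTE ... USING removes most of the string-building, while on 8.3 quote_literal keeps the concatenation injection-safe:

CREATE OR REPLACE FUNCTION count_for_service(p_service integer)
RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- Planned at run time on each call, instead of reusing a plan
    -- cached the first time the function runs:
    EXECUTE 'SELECT count(*) FROM fact_table WHERE service_id = '
            || quote_literal(p_service::text)
        INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

SELECT count_for_service(42);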
[ { "msg_contents": "Hi- where can I find location of the DBT presentation in Portland next week?\n\nThanks-\n\nLee\n\nHi- where can I find location of the DBT presentation in Portland next week?Thanks-Lee", "msg_date": "Mon, 9 Mar 2009 07:28:20 -0700", "msg_from": "Lee Hughes <[email protected]>", "msg_from_op": true, "msg_subject": "DBT Presentation Location?" }, { "msg_contents": "On Mar 9, 2009, at 7:28 AM, Lee Hughes wrote:\n\n> Hi- where can I find location of the DBT presentation in Portland \n> next week?\n\nIt'll be at Portland State University at 7pm Thursday March 12. It's \nin the Fourth Avenue Building (FAB) room 86-01, on 1900 SW 4th Ave. \nIt's in G-10 on the map: http://www.pdx.edu/map.html\n\nSee you soon.\n\nRegards,\nMark\n", "msg_date": "Mon, 9 Mar 2009 17:57:21 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DBT Presentation Location?" } ]
[ { "msg_contents": "Hi,\n\nIt is frequently said that for PostgreSQL the number 1 thing to pay attention to when increasing performance is the amount of IOPS a storage system is capable of. Now I wonder if there is any situation in which sequential IO performance comes into play. E.g. perhaps during a tablescan on a non-fragmented table, or during a backup or restore?\n\nThe reason I'm asking is that we're building a storage array and for some reason are unable to\nincrease the number of random IOPS beyond a certain threshold when we add more controllers or more (SSD) disks to the system. However, the sequential performance keeps increasing when we do that.\n\nWould this extra sequential performance be of any benefit to PG or would it just be wasted?\n\nKind regards\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n\nHi,It is frequently said that for PostgreSQL the number 1 thing to pay attention to when increasing performance is the amount of IOPS a storage system is capable of. Now I wonder if there is any situation in which sequential IO performance comes into play. E.g. perhaps during a tablescan on a non-fragmented table, or during a backup or restore?The reason I'm asking is that we're building a storage array and for some reason are unable to\nincrease the number of random IOPS beyond a certain threshold when we add more controllers or more (SSD) disks to the system. However, the sequential performance keeps increasing when we do that.Would this extra sequential performance be of any benefit to PG or would it just be wasted?Kind regardsExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Tue, 10 Mar 2009 15:09:10 +0100", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "When does sequential performance matter in PG?" }, { "msg_contents": "On Tue, 10 Mar 2009, henk de wit wrote:\n> It is frequently said that for PostgreSQL the number 1 thing to pay \n> attention to when increasing performance is the amount of IOPS a storage \n> system is capable of. Now I wonder if there is any situation in which \n> sequential IO performance comes into play. E.g. perhaps during a \n> tablescan on a non-fragmented table, or during a backup or restore?\n\nYes, up to a point. That point is when a single CPU can no longer handle \nthe sequential transfer rate. Yes, there are some parallel restore \npossibilities which will get you further. Generally it only takes a few \ndiscs to max out a single CPU though.\n\n> The reason I'm asking is that we're building a storage array and for \n> some reason are unable to increase the number of random IOPS beyond a \n> certain threshold when we add more controllers or more (SSD) disks to \n> the system. However, the sequential performance keeps increasing when we \n> do that.\n\nAre you sure you're measuring the maximum IOPS, rather than measuring the \nIOPS capable in a single thread? The advantage of having more discs is \nthat you can perform more operations in parallel, so if you have lots of \nsimultaneous requests they can be spread over the disc array.\n\nMatthew\n\n-- \n [About NP-completeness] These are the problems that make efficient use of\n the Fairy Godmother. 
-- Computer Science Lecturer\n", "msg_date": "Tue, 10 Mar 2009 14:28:07 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When does sequential performance matter in PG?" }, { "msg_contents": "Hi,\n\n> On Tue, 10 Mar 2009, henk de wit wrote:\n> > Now I wonder if there is any situation in which \n> > sequential IO performance comes into play. E.g. perhaps during a \n> > tablescan on a non-fragmented table, or during a backup or restore?\n> \n> Yes, up to a point. That point is when a single CPU can no longer handle \n> the sequential transfer rate. Yes, there are some parallel restore \n> possibilities which will get you further. Generally it only takes a few \n> discs to max out a single CPU though.\n\nI see, but I take it you are only referring to a backup or a restore? It's of course unlikely (even highly undesirable) that multiple processes are doing a backup, but it doesn't seem unlikely that multiple queries are doing a table scan ;)\n\n> Are you sure you're measuring the maximum IOPS, rather than measuring the \n> IOPS capable in a single thread?\n\nI'm pretty sure we're not testing the number of IOPS for a single thread, as we're testing with 1, 10 and 40 threads. There is a significant (2x) increase in the total number of IOPS when going from 1 to 10 threads, but no increase when going from 10 to 40 threads. You can read more details about the setup I used and the problems I ran into here: http://www.xtremesystems.org/forums/showthread.php?p=3707365\n\nHenk\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n\nHi,> On Tue, 10 Mar 2009, henk de wit wrote:> > Now I wonder if there is any situation in which > > sequential IO performance comes into play. E.g. perhaps during a > > tablescan on a non-fragmented table, or during a backup or restore?> > Yes, up to a point. That point is when a single CPU can no longer handle > the sequential transfer rate. Yes, there are some parallel restore > possibilities which will get you further. Generally it only takes a few > discs to max out a single CPU though.I see, but I take it you are only referring to a backup or a restore? It's of course unlikely (even highly undesirable) that multiple processes are doing a backup, but it doesn't seem unlikely that multiple queries are doing a table scan ;)> Are you sure you're measuring the maximum IOPS, rather than measuring the > IOPS capable in a single thread?I'm pretty sure we're not testing the number of IOPS for a single thread, as we're testing with 1, 10 and 40 threads. There is a significant (2x) increase in the total number of IOPS when going from 1 to 10 threads, but no increase when going from 10 to 40 threads. You can read more details about the setup I used and the problems I ran into here: http://www.xtremesystems.org/forums/showthread.php?p=3707365HenkExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Tue, 10 Mar 2009 15:57:03 +0100", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: When does sequential performance matter in PG?" }, { "msg_contents": "On Tue, 10 Mar 2009, henk de wit wrote:\n\n> Now I wonder if there is any situation in which sequential IO \n> performance comes into play. E.g. 
perhaps during a tablescan on a \n> non-fragmented table, or during a backup or restore?\n\nIf you're doing a sequential scan of data that was loaded in a fairly \nlarge batch, you can approach reading at the sequential I/O rate of the \ndrives. Doing a backup using pg_dump is one situation where you might \nactually do that.\n\nUnless your disk performance is really weak, restores in PostgreSQL are \nusually CPU bound right now. There's a new parallel restore feature in \n8.4 that may make sequential write performance a more likely upper bound \nto run into, assuming your table structure is amenable to loading in \nparallel (situations with just one giant table won't benefit as much).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 10 Mar 2009 13:50:05 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When does sequential performance matter in PG?" }, { "msg_contents": "On 3/10/09 6:28 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n\nOn Tue, 10 Mar 2009, henk de wit wrote:\n> It is frequently said that for PostgreSQL the number 1 thing to pay\n> attention to when increasing performance is the amount of IOPS a storage\n> system is capable of. Now I wonder if there is any situation in which\n> sequential IO performance comes into play. E.g. perhaps during a\n> tablescan on a non-fragmented table, or during a backup or restore?\n\nYes, up to a point. That point is when a single CPU can no longer handle\nthe sequential transfer rate. Yes, there are some parallel restore\npossibilities which will get you further. Generally it only takes a few\ndiscs to max out a single CPU though.\n\nThis is not true if you have concurrent sequential scans. Then an array can be tuned for total throughput with concurrent access. Single thread sequential measurements are similarly useful to single thread random i/o measurement - not really a test like the DB will act, but useful as a starting point for tuning.\nI'm past the point where a single thread can not keep up with the disk on a sequential scan. For the most simple select * queries, this is ~ 800MB/sec for me.\nFor any queries those with more complicated processing/filtering, its much less, usually 400MB/sec is a pretty good rate for a single thread.\nHowever our raw array does about 1200MB/sec, and can get 75% efficiency on this or so with between 4 and 8 concurrent sequential scans. It took some significant tuning and testing time to make sure this worked, and to balance that with random i/o requirements.\n\nFurthermore, higher sequential rates help your random IOPS when you have sequential access concurrent with random access. You can tune OS parameters (readahead in linux, I/O scheduler types) to bias throughput or latency towards random iops throughput or sequential MB/sec throughput. Having faster sequential disk access means less % of time doing sequential I/O, meaning more time left for random I/O. It only goes so far, but it does help with mixed loads.\n\nOverall, it depends a lot on how important sequential scans are to your use case.\n\n\n\n\n\n\nRe: [PERFORM] When does sequential performance matter in PG?\n\n\n\n\n\nOn 3/10/09 6:28 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n\nOn Tue, 10 Mar 2009, henk de wit wrote:\n> It is frequently said that for PostgreSQL the number 1 thing to pay\n> attention to when increasing performance is the amount of IOPS a storage\n> system is capable of. 
Now I wonder if there is any situation in which\n> sequential IO performance comes into play. E.g. perhaps during a\n> tablescan on a non-fragmented table, or during a backup or restore?\n\nYes, up to a point. That point is when a single CPU can no longer handle\nthe sequential transfer rate. Yes, there are some parallel restore\npossibilities which will get you further. Generally it only takes a few\ndiscs to max out a single CPU though.\n\nThis is not true if  you have concurrent sequential scans.  Then an array can be tuned for total throughput with concurrent access.  Single thread sequential measurements are similarly useful to single thread random i/o measurement — not really a test like the DB will act, but useful as a starting point for tuning.\nI’m past the point where a single thread can not keep up with the disk on a sequential scan.  For the most simple select * queries, this is ~ 800MB/sec for me.\nFor any queries those with more complicated processing/filtering, its much less, usually 400MB/sec is a pretty good rate for a single thread.  \nHowever our raw array does about 1200MB/sec, and can get 75% efficiency on this or so with between 4 and 8 concurrent sequential scans.  It took some significant tuning and testing time to make sure this worked, and to balance that with random i/o requirements.\n\nFurthermore, higher sequential rates help your random IOPS when you have sequential access concurrent with random access.  You can tune OS parameters (readahead in linux, I/O scheduler types) to bias throughput or latency towards random iops throughput or sequential MB/sec throughput.  Having faster sequential disk access means less % of time doing sequential I/O, meaning more time left for random I/O.  It only goes so far, but it does help with mixed loads.  \n\nOverall, it depends a lot on how important sequential scans are to your use case.", "msg_date": "Tue, 10 Mar 2009 11:01:00 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When does sequential performance matter in PG?" } ]
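To put a number on how much sequential throughput a single backend can actually use, one simple check is a full-table aggregate timed against the relation's on-disk size; the table name is a placeholder, and the OS cache has to be cold for the figure to mean much:

-- Size of the relation in MB:
SELECT pg_relation_size('big_table') / (1024 * 1024) AS size_mb;

-- With \timing enabled in psql, force a full sequential read:
SELECT count(*) FROM big_table;

-- size_mb divided by the elapsed seconds gives the effective scan rate;
-- if that is well below what the array delivers for raw sequential reads,
-- the scan is CPU-bound rather than I/O-bound, as described above.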
[ { "msg_contents": "Hi,\n\nI'd be grateful for any advice we can get... we recently switched from MySQL\nto PostgreSQL on the basis of some trials we carried out with datasets of\nvarying sizes and varying rates of contention. For the most part we've been\npleased with performance, but one particular application runs queries that\npull back a lot of results across what is for us a large dataset. We've\nnoticed enormous speed improvements over MySQL with complex queries, but some\nof these simpler queries are causing us some problems. We were hoping that the\nmachine would be able to cache the majority of the database in memory and be\nable to run these kinds of queries very quickly. The box doesn't seem to be\ndoing much I/O during these queries, and top usually reports the processor\nusage slowly increasing to about 75% but no higher than that (and then\ndropping once it's finished). We adjusted settings in postgresql.conf as\nrecommended in various places on the web. In particular, experimentation led\nus to turn of enable_seq_scan, because it consistently led to faster query\ntimes, but we're not sure why that's the case or if it's a good idea\ngenerally.\n\nThis example has been anonymised slightly, although I've checked it for typos.\nOur 'fact_table' has 6 million rows, each of which joins to one of 1.7 million\nrows in record_dimension, and one of 15,000 rows in 'date_dimension'. We have\nother tables that also join to 'fact_table', but for this example these three\ntables suffice. The total size (as reported on the file system, presumably\nincluding indexes) is 7.5GB. The query below pulls 12 months' worth of data\n(accounting for 20% of the rows in 'fact_table') with restrictions that\naccount for 15% of the rows in 'record_dimension'. It's a read-only database\n(we dump it fresh nightly).\n\nThe server itself is a dual-core 3.7GHz Xeon Dell (each core reporting 2\nlogical CPUs) running an amd64 build of FreeBSD 6.2, and postgres 8.3.5 built\nfrom source. It's got 400GB storage in RAID-5 (on 5 disks). It has 8GB of\nphysical RAM. 
I'm able to use about 6GB of that for my own purposes; the\nserver doesn't do much else but replicates a very low-usage mysql database.\nWhile it's running postgres only seems to use about 1.2GB of RAM.\n\nPostgres configuration is below the query and EXPLAIN.\n\nAny help would be much appreciated.\n\n=============\n\nSELECT \"record_dimension\".\"Id 1\" AS \"Id 1\", \"record_dimension\".\"Id 2\" AS\n\"fact_table\".\"Id 2\", \"Id 3\" AS \"Id 3\"\nFROM \"fact_table\"\n INNER JOIN \"record_dimension\" ON \"fact_table\".\"record_key\" =\n\"record_dimension\".\"record_key\"\n INNER JOIN \"date_dimension\" ON \"fact_table\".\"date_key\" =\n\"date_dimension\".\"date_key\"\nWHERE \"record_dimension\".\"Region\" = 'Big Region'\n AND \"date_dimension\".\"Month\" BETWEEN '110' AND '121'\n AND \"record_dimension\".\"A Common Property\"\n AND \"record_dimension\".\"Country\" = 'USA';\n\nENABLE_SEQSCAN ON\n Nested Loop (cost=466.34..192962.24 rows=15329 width=12) (actual\ntime=13653.238..31332.113 rows=131466 loops=1)\n -> Hash Join (cost=466.34..115767.54 rows=141718 width=8) (actual\ntime=13649.952..19548.019 rows=1098344 loops=1)\n Hash Cond: (fact_table.date_key = date_dimension.date_key)\n -> Seq Scan on fact_table (cost=0.00..91589.38 rows=5945238\nwidth=12) (actual time=0.014..8761.184 rows=5945238 loops=1)\n -> Hash (cost=461.99..461.99 rows=348 width=4) (actual\ntime=4.651..4.651 rows=378 loops=1)\n -> Seq Scan on date_dimension (cost=0.00..461.99 rows=348\nwidth=4) (actual time=0.044..4.007 rows=378 loops=1)\n Filter: ((\"Month\" >= 110::smallint) AND (\"Month\" <=\n121::smallint))\n -> Index Scan using record_dimension_pkey on record_dimension \n(cost=0.00..0.53 rows=1 width=12) (actual time=0.007..0.007 rows=0\nloops=1098344)\n Index Cond: (record_dimension.record_key = fact_table.record_key)\n Filter: (record_dimension.\"A Common Property\" AND\n((record_dimension.\"Region\")::text = 'Big Region'::text) AND\n((record_dimension.\"Country\")::text = 'USA'::text))\n Total runtime: 31522.166 ms\n\n(131466 rows)\n(Actual query time: 8606.963 ms)\n\nI/O during the query:\n+-----------------+-----------------------------------------+-----------------------------------+\n| \t| SEQUENTIAL I/O \t| INDEXED I/O \n |\n| \t| scans | tuples | heap_blks |cached\t| scans | tuples |\nidx_blks |cached|\n|-----------------+-------+--------+-----------+------------+-------+---------+----------+------+\n|date_dimension \t| 1 | 14599 | 0 | 243 \t| 0 | 0 | \n 0 | 0 |\n|fact_table\t\t| 1 |5945238 | 0 |32137 \t| 0 | 0 | 0\n| 0 |\n|record_dimension\t| 0 | 0 | 0 |1098344 \t|1098344 |1098344 | \n 0 |3300506 |\n\nENABLE_SEQSCAN OFF\n Nested Loop (cost=0.00..355177.96 rows=15329 width=12) (actual\ntime=14763.749..32483.625 rows=131466 loops=1)\n -> Merge Join (cost=0.00..277983.26 rows=141718 width=8) (actual\ntime=14760.467..20623.975 rows=1098344 loops=1)\n Merge Cond: (date_dimension.date_key = fact_table.date_key)\n -> Index Scan using date_dimension_pkey on date_dimension \n(cost=0.00..706.23 rows=348 width=4) (actual time=0.074..1.635\nrows=13 loops=1)\n Filter: ((\"Month\" >= 110::smallint) AND (\"Month\" <=\n121::smallint))\n -> Index Scan using date_key on fact_table (cost=0.00..261696.89\nrows=5945238 width=12) (actual time=0.016..9903.593 rows=5945238\nloops=1)\n -> Index Scan using record_dimension_pkey on record_dimension \n(cost=0.00..0.53 rows=1 width=12) (actual time=0.007..0.007 rows=0\nloops=1098344)\n Index Cond: (record_dimension.record_key = fact_table.record_key)\n Filter: (record_dimension.\"A 
Common Property\" AND\n((record_dimension.\"Region\")::text = 'Big Region'::text) AND\n((record_dimension.\"Country\")::text = 'USA'::text))\n Total runtime: 32672.995 ms\n(10 rows)\n\n(131466 rows)\n(Actual query time: 9049.854 ms)\n\npostgresql.conf\n=============\nshared_buffers=1200MB\nwork_mem = 100MB\nmaintenance_work_mem = 200MB\nmax_fsm_pages = 179200\n\nfsync = off\nsynchronous_commit = off\nfull_page_writes = off\n\nenable_seqscan = off\n\neffective_cache_size = 2000MB\n\ndefault_statistics_target = 100\n\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\n\n\n", "msg_date": "Tue, 10 Mar 2009 16:12:17 -0500 (CDT)", "msg_from": "\"Steve McLellan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance over a large proportion of data" }, { "msg_contents": "On Tue, Mar 10, 2009 at 3:12 PM, Steve McLellan <[email protected]> wrote:\n> Hi,\n>\n> I'd be grateful for any advice we can get... we recently switched from MySQL\n> to PostgreSQL on the basis of some trials we carried out with datasets of\n> varying sizes and varying rates of contention. For the most part we've been\n> pleased with performance, but one particular application runs queries that\n> pull back a lot of results across what is for us a large dataset. We've\n> noticed enormous speed improvements over MySQL with complex queries, but some\n> of these simpler queries are causing us some problems. We were hoping that the\n> machine would be able to cache the majority of the database in memory and be\n> able to run these kinds of queries very quickly. The box doesn't seem to be\n> doing much I/O during these queries, and top usually reports the processor\n> usage slowly increasing to about 75% but no higher than that (and then\n> dropping once it's finished). We adjusted settings in postgresql.conf as\n> recommended in various places on the web. In particular, experimentation led\n> us to turn of enable_seq_scan, because it consistently led to faster query\n> times, but we're not sure why that's the case or if it's a good idea\n> generally.\n\nNo, it's not. The query planner in postgresql is quite good, and\nunless you're sure it's making a pathologically bad decision, turning\non / off things like seqscan are kind of like a bullet in the brain to\ncure a headache.\n\n> This example has been anonymised slightly, although I've checked it for typos.\n> Our 'fact_table' has 6 million rows, each of which joins to one of 1.7 million\n> rows in record_dimension, and one of 15,000 rows in 'date_dimension'. We have\n> other tables that also join to 'fact_table', but for this example these three\n> tables suffice. The total size (as reported on the file system, presumably\n> including indexes) is 7.5GB. The query below pulls 12 months' worth of data\n> (accounting for 20% of the rows in 'fact_table') with restrictions that\n> account for 15% of the rows in 'record_dimension'. It's a read-only database\n> (we dump it fresh nightly).\n>\n> The server itself is a dual-core 3.7GHz Xeon Dell (each core reporting 2\n> logical CPUs) running an amd64 build of FreeBSD 6.2, and postgres 8.3.5 built\n> from source. It's got 400GB storage in RAID-5 (on 5 disks). It has 8GB of\n> physical RAM. 
I'm able to use about 6GB of that for my own purposes; the\n> server doesn't do much else but replicates a very low-usage mysql database.\n> While it's running postgres only seems to use about 1.2GB of RAM.\n\nWhat do you mean you're able to use about 6GB for your own purposes?\nNote that postgresql relies on the OS to the majority of its caching\nso if you're doing something that chews up 6G ram on the same machine\nyou are affecting pgsql performance on it.\n\nI'll let someone else look through the explain analyze and all, but as\nregards your sequential scan being turned off, you're far better off\nadjusting the cost of seqscan and random_page_cost in postgresql.conf\nto push the planner towards random access. Also increasing your\neffective cache size up will favor index scans over sequential scans.\nThen, use enable_seqscan=off / on to test if you have the best query\nplan or not. Don't just leave enable_seqscan = off.\n", "msg_date": "Tue, 10 Mar 2009 15:57:40 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance over a large proportion of data" }, { "msg_contents": ">>> \"Steve McLellan\" <[email protected]> wrote: \n> The server itself is a dual-core 3.7GHz Xeon Dell (each core\n> reporting 2 logical CPUs) running an amd64 build of FreeBSD 6.2, and\n> postgres 8.3.5 built from source. It's got 400GB storage in RAID-5\n> (on 5 disks). It has 8GB of physical RAM. I'm able to use about 6GB\n> of that for my own purposes; the server doesn't do much else but\n> replicates a very low-usage mysql database.\n \n> shared_buffers=1200MB\n \nYou might want to play with this -- that's not a bad starting point,\nbut your best performance with your load could be on either side of\nthat value.\n \n> work_mem = 100MB\n \nProbably kinda high, especially if you expect a lot of connections. \nThis much memory can be concurrently used, possibly more than once, by\neach active connection.\n \n> fsync = off\n \nDon't use this setting unless you can afford to lose your entire\ndatabase cluster. We use it for initial (repeatable) loads, but not\nmuch else.\n \n> enable_seqscan = off\n \nNot a good idea; some queries will optimize better with seqscans.\nYou can probably get the behavior you want using other adjustments.\n \n> effective_cache_size = 2000MB\n \n>From what you said above, I'd bump this up to 5GB or more.\n \nYou probably need to reduce random_page_cost. If your caching is\ncomplete enough, you might want to set it equal to seq_page_cost\n(never set it lower that seq_page_cost!) and possibly reduce both of\nthese to 0.1.\n \nSome people have had good luck with boosting cpu_tuple_cost and\ncpu_index_tuple_cost. (I've seen 0.5 for both recommended.) I've\nnever had to do that, but if the earlier suggestions don't get good\nplans, you might try that.\n \nI hope that helps.\n \n-Kevin\n", "msg_date": "Tue, 10 Mar 2009 17:06:38 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance over a large proportion of\n\tdata" }, { "msg_contents": "On Tue, Mar 10, 2009 at 3:12 PM, Steve McLellan <[email protected]> wrote:\n>\n>  Nested Loop  (cost=466.34..192962.24 rows=15329 width=12) (actual\n> time=13653.238..31332.113 rows=131466 loops=1)\n\n\nBoth your query plans end with this nested loop join which is taking\nup about half your time in your query. 
Notice the estimation of the\nresult set is off by a factor of about 10 here, which means a nested\nloop might be not so good a choice for this. Try increasing default\nstats target and re-analyzing to see if that helps. 1000 is the max\nyou can give that a shot right off to see if it helps. If it does,\ndrop it until the numbers start to go off again and stop.\n\nFor a quicker test, you can set enable_nestloop = off in the psql\ncommand line and then run the query by hand and see if that helps.\n", "msg_date": "Tue, 10 Mar 2009 17:19:53 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance over a large proportion of data" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> You probably need to reduce random_page_cost. If your caching is\n> complete enough, you might want to set it equal to seq_page_cost\n> (never set it lower that seq_page_cost!) and possibly reduce both of\n> these to 0.1.\n \n> Some people have had good luck with boosting cpu_tuple_cost and\n> cpu_index_tuple_cost. (I've seen 0.5 for both recommended.) I've\n> never had to do that, but if the earlier suggestions don't get good\n> plans, you might try that.\n\nIt might be worth pointing out here that all that matters are the\nrelative values of the various xxx_cost parameters. If your DB is\nmostly or entirely cached, you probably want to lower the estimated cost\nof I/O relative to CPU work. You can do that *either* by dropping the\nseq_/random_page_costs, *or* by raising the cpu_xxx_costs (there are\nmore than two of those BTW). Doing both, as Kevin's comments might be\nread to suggest, is not useful ... and in particular I bet that having\nseq_page_cost actually less than cpu_tuple_cost would lead to some\npretty wacko decision-making by the planner.\n\nSee\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-query.html\nfor some more info about what you're twiddling here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Mar 2009 20:09:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance over a large proportion of data " }, { "msg_contents": "\"Steve McLellan\" <[email protected]> writes:\n> lc_messages = 'en_US.UTF-8'\n> lc_monetary = 'en_US.UTF-8'\n> lc_numeric = 'en_US.UTF-8'\n> lc_time = 'en_US.UTF-8'\n\nBTW, aside from the points already made: the above indicates that you\ninitialized your database in en_US.utf8 locale. This is not necessarily\na good decision from a performance standpoint --- you might be much\nbetter off with C locale, and might even prefer it if you favor\nASCII-order sorting over \"dictionary\" sorting. utf8 encoding might\ncreate some penalties you don't need too. This all depends on a lot\nof factors you didn't mention; maybe you actually need utf8 data,\nor maybe your application doesn't do many string comparisons and so\nisn't sensitive to the speed of strcoll() anyway. 
But I've seen it\nbe a gotcha for people moving from MySQL, which AFAIK doesn't worry\nabout honoring locale-specific sort order.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Mar 2009 20:16:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance over a large proportion of data " }, { "msg_contents": "On Mar 10, 2009, at 4:12 PM, Steve McLellan wrote:\n> The server itself is a dual-core 3.7GHz Xeon Dell (each core \n> reporting 2\n> logical CPUs) running an amd64 build of FreeBSD 6.2, and postgres \n> 8.3.5 built\n> from source.\n\n\nUh, you're running an amd64 build on top of an Intel CPU? I didn't \nthink FBSD would allow that, but if it does it wouldn't surprise me \nif kernel/OS performance stunk. If Postgres then used the same \nsettings it would make matters even worse (IIRC there is some code \nthat's different in an AMD vs Intel build).\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Sat, 14 Mar 2009 09:47:53 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance over a large proportion of data" }, { "msg_contents": "decibel wrote:\n> On Mar 10, 2009, at 4:12 PM, Steve McLellan wrote:\n>> The server itself is a dual-core 3.7GHz Xeon Dell (each core reporting 2\n>> logical CPUs) running an amd64 build of FreeBSD 6.2, and postgres \n>> 8.3.5 built\n>> from source.\n> \n> Uh, you're running an amd64 build on top of an Intel CPU? I didn't think \n> FBSD would allow that, but if it does it wouldn't surprise me if \n> kernel/OS performance stunk. If Postgres then used the same settings it \n> would make matters even worse (IIRC there is some code that's different \n> in an AMD vs Intel build).\n\nUh? Amd64 just the name of the FreeBSD port for AMD/Intel 64 bit CPUs.\n\nSee: http://www.freebsd.org/platforms/amd64.html\nand: http://en.wikipedia.org/wiki/X86-64\n\n\nCheers\n\n-- \nMatteo Beccati\n\nOpenX - http://www.openx.org\n", "msg_date": "Sun, 15 Mar 2009 03:38:17 +0100", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance over a large proportion of data" } ]
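Most of the tuning advice in this thread can be trialled per-session before anything goes into postgresql.conf; the query below is a cut-down stand-in for the posted one, and the values are only starting points in the direction suggested above, not recommendations:

SET enable_seqscan = on;             -- back to the default
SET effective_cache_size = '5GB';
SET random_page_cost = 2.0;          -- closer to seq_page_cost for a well-cached database
EXPLAIN ANALYZE
SELECT f."Id 3"
FROM "fact_table" f
JOIN "record_dimension" r ON f."record_key" = r."record_key"
WHERE r."Country" = 'USA';

RESET ALL;                           -- or just end the session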
[ { "msg_contents": ">\n>\n> *Scott Marlowe <[email protected]>*\n> 03/10/2009 05:19 PM\n>\n> >\n> > Nested Loop (cost=466.34..192962.24 rows=15329 width=12) (actual\n> > time=13653.238..31332.113 rows=131466 loops=1)\n>\n>\n> Both your query plans end with this nested loop join which is taking\n> up about half your time in your query. Notice the estimation of the\n> result set is off by a factor of about 10 here, which means a nested\n> loop might be not so good a choice for this. Try increasing default\n> stats target and re-analyzing to see if that helps. 1000 is the max\n> you can give that a shot right off to see if it helps. If it does,\n> drop it until the numbers start to go off again and stop.\n>\n> For a quicker test, you can set enable_nestloop = off in the psql\n> command line and then run the query by hand and see if that helps.\n> Thanks - the nested loop is indeed causing problems - reducing\n> seq_page_cost had the same effect of removing the nested loop for this\n> query. We'd noticed the poor row count estimation. Increasing the statistics\n> doesn't seem to have much effect, but we'll have more of a go with it.\n\nScott Marlowe <[email protected]>\n03/10/2009 05:19 PM>>  Nested Loop  (cost=466.34..192962.24 rows=15329 width=12) (actual> time=13653.238..31332.113 rows=131466 loops=1)\nBoth your query plans end with this nested loop join which is takingup about half your time in your query.  Notice the estimation of theresult set is off by a factor of about 10  here, which means a nested\nloop might be not so good a choice for this.  Try increasing defaultstats target and re-analyzing to see if that helps.  1000 is the maxyou can give that a shot right off to see if it helps.  If it does,drop it until the numbers start to go off again and stop.\nFor a quicker test, you can set enable_nestloop = off in the psqlcommand line and then run the query by hand and see if that helps.Thanks - the nested loop is indeed causing problems - reducing seq_page_cost had\nthe same effect of removing the nested loop for this query. We'd\nnoticed the poor row count estimation. Increasing the statistics\ndoesn't seem to have much effect, but we'll have more of a go with it.", "msg_date": "Tue, 10 Mar 2009 22:15:13 -0500", "msg_from": "Steve McLellan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance over a large proportion of data" }, { "msg_contents": "On Tue, Mar 10, 2009 at 9:15 PM, Steve McLellan <[email protected]> wrote:\n> Thanks - the nested loop is indeed causing problems - reducing\n> seq_page_cost had the same effect of removing the nested loop for this\n> query. We'd noticed the poor row count estimation. Increasing the statistics\n> doesn't seem to have much effect, but we'll have more of a go with it.\n\nMore than likely it's the sequential page cost versus the cpu_xxx cost\nsetttings that's really making the difference. I.e. if you raised the\ncpu_xxx settings you'd get the same result. But I'm not sure, just a\nguess.\n", "msg_date": "Tue, 10 Mar 2009 22:53:56 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance over a large proportion of data" } ]
[ { "msg_contents": ">\n>\n>\n> *Tom Lane <[email protected]>*\n> Sent by: [email protected]\n> 03/10/2009 08:16 PM AST\n>\n> \"Steve McLellan\" <[email protected]> writes:\n> > lc_messages = 'en_US.UTF-8'\n> > lc_monetary = 'en_US.UTF-8'\n> > lc_numeric = 'en_US.UTF-8'\n> > lc_time = 'en_US.UTF-8'\n>\n> BTW, aside from the points already made: the above indicates that you\n> initialized your database in en_US.utf8 locale. This is not necessarily\n> a good decision from a performance standpoint --- you might be much\n> better off with C locale, and might even prefer it if you favor\n> ASCII-order sorting over \"dictionary\" sorting. utf8 encoding might\n> create some penalties you don't need too. This all depends on a lot\n> of factors you didn't mention; maybe you actually need utf8 data,\n> or maybe your application doesn't do many string comparisons and so\n> isn't sensitive to the speed of strcoll() anyway. But I've seen it\n> be a gotcha for people moving from MySQL, which AFAIK doesn't worry\n> about honoring locale-specific sort order.\n>\n> regards, tom lane\n>\n> Thanks for the reply. We did intentionally initialize it in UTF-8 locale.\nWe could get away with using C locale in this case, although we try to\nstandardise on UTF-8 in general since we do in other instances require it.\nThe only string comparisons we do are equalities, although it can be the\ncase that we're comparing a large number of rows. We may give it a try and\nsee what kind of performance hit that's giving us. Currently we're trying to\nget some big easy wins through parameter settings; I imagine we will want to\nstart shaving some more time off queries in the near future.\n\nThanks, Steve\n\n Tom Lane <[email protected]>\nSent by: [email protected]/10/2009 08:16 PM AST\"Steve McLellan\" <[email protected]> writes:\n> lc_messages = 'en_US.UTF-8'> lc_monetary = 'en_US.UTF-8'> lc_numeric = 'en_US.UTF-8'> lc_time = 'en_US.UTF-8'BTW, aside from the points already made: the above indicates that you\ninitialized your database in en_US.utf8 locale.  This is not necessarilya good decision from a performance standpoint --- you might be muchbetter off with C locale, and might even prefer it if you favorASCII-order sorting over \"dictionary\" sorting.  utf8 encoding might\ncreate some penalties you don't need too.  This all depends on a lotof factors you didn't mention; maybe you actually need utf8 data,or maybe your application doesn't do many string comparisons and so\nisn't sensitive to the speed of strcoll() anyway.  But I've seen itbe a gotcha for people moving from MySQL, which AFAIK doesn't worryabout honoring locale-specific sort order.regards, tom lane\nThanks for the reply. We did intentionally initialize it in UTF-8 locale. We could get away with using C locale in this case, although we try to standardise on UTF-8 in general since we do in other instances require it. The only string comparisons we do are equalities, although it can be the case that we're comparing a large number of rows. We may give it a try and see what kind of performance hit that's giving us. Currently we're trying to get some big easy wins through parameter settings; I imagine we will want to start shaving some more time off queries in the near future.\nThanks, Steve", "msg_date": "Tue, 10 Mar 2009 22:21:01 -0500", "msg_from": "Steve McLellan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance over a large proportion of data" } ]
[ { "msg_contents": ">\n>\n>\n> *\"Kevin Grittner\" <[email protected]>*\n> 03/10/2009 05:06 PM EST\n>\n> > enable_seqscan = off\n>\n> Not a good idea; some queries will optimize better with seqscans.\n> You can probably get the behavior you want using other adjustments.\n>\nThe bullet to cure the headache, as Scott said.\n\n>\n> You probably need to reduce random_page_cost. If your caching is\n> complete enough, you might want to set it equal to seq_page_cost\n> (never set it lower that seq_page_cost!) and possibly reduce both of\n> these to 0.1.\n>\n> Reducing seq_page_cost relative to random_page_cost seems to make an\nenormous difference for this query. Removing the nested loop seems to be\nwhat makes a difference. We'll continue to play with these and check there\nare no adverse effects on other queries.\n\nThanks again, Steve\n\n\"Kevin Grittner\" <[email protected]>\n03/10/2009 05:06 PM EST> enable_seqscan = offNot a good idea; some queries will optimize better with seqscans.\nYou can probably get the behavior you want using other adjustments.The bullet to cure the headache, as Scott said.  \nYou probably need to reduce random_page_cost.  If your caching iscomplete enough, you might want to set it equal to seq_page_cost\n(never set it lower that seq_page_cost!) and possibly reduce both ofthese to 0.1.Reducing seq_page_cost relative to random_page_cost seems to make an enormous difference for this query. Removing the nested loop seems to be what makes a difference. We'll continue to play with these and check there are no adverse effects on other queries.\nThanks again, Steve", "msg_date": "Tue, 10 Mar 2009 22:30:32 -0500", "msg_from": "Steve McLellan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance over a large proportion of data" } ]
[ { "msg_contents": "I've got a couple x25-e's in production now and they are working like \na champ. (In fact, I've got another box being built with all x25s in \nit. its going to smoke!)\n\nAnyway, I was just reading another thread on here and that made me \nwonder about random_page_cost in the world of an ssd where a seek is \nbasically free. I haven't tested this yet (I can do that next week), \nbut logically, in this scenario wouldn't lowering random_page_cost be \nideal or would it not really matter in the grand scheme of things?\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Wed, 11 Mar 2009 09:46:36 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "random_page_cost vs ssd?" }, { "msg_contents": "On Wed, Mar 11, 2009 at 1:46 PM, Jeff <[email protected]> wrote:\n> I've got a couple x25-e's in production now and they are working like a\n> champ.  (In fact, I've got another box being built with all x25s in it. its\n> going to smoke!)\n>\n> Anyway, I was just reading another thread on here and that made me wonder\n> about random_page_cost in the world of an ssd where a seek is basically\n> free.  I haven't tested this yet (I can do that next week), but logically,\n> in this scenario wouldn't lowering random_page_cost be ideal or would it not\n> really matter in the grand scheme of things?\n\nJust on a side note, random access on SSD is still more expensive than\nsequential, because it is designed in banks.\nIf you don believe me, turn off any software/OS cache , and try random\naccess timing against seq reads.\nThis gap is just much much narrower.\n\n\n-- \nGJ\n", "msg_date": "Wed, 11 Mar 2009 15:37:55 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost vs ssd?" }, { "msg_contents": "At 8k block size, you can do more iops sequential than random.\nA X25-M I was just playing with will do about 16K iops reads at 8k block size with 32 concurrent threads.\nThat is about 128MB/sec. Sequential reads will do 250MB/sec. At 16k block size it does about 220MB/sec and at 32k block size there is no penalty for random access. All tests start with 'cat 3 > /proc/sys/vm/drop_caches', and work on a 32GB data set (40% of the disk).\n\nAlso, over time the actual location of the physical blocks will not map to the LBAs requested. This means that internally a sequential read is actually a random read, and that a random write is actually a sequential write. That is how the SSD's with good write performance are doing it, with advanced LBA to physical dynamic mapping.\n\nAs for the random_page_cost I'd make sure to set it virtually the same as the sequential cost. Perhaps 1 for sequential and 1.1 for random. One may also want to lower both of those values equally to be somewhat closer to the cpu costs. You want the planner to generally conserve total block access count and not favor streaming reads too much over random reads. \n\n\n________________________________________\nFrom: [email protected] [[email protected]] On Behalf Of Grzegorz Jaśkiewicz [[email protected]]\nSent: Wednesday, March 11, 2009 8:37 AM\n\nOn Wed, Mar 11, 2009 at 1:46 PM, Jeff <[email protected]> wrote:\n> I've got a couple x25-e's in production now and they are working like a\n> champ. (In fact, I've got another box being built with all x25s in it. 
its\n> going to smoke!)\n>\n> Anyway, I was just reading another thread on here and that made me wonder\n> about random_page_cost in the world of an ssd where a seek is basically\n> free. I haven't tested this yet (I can do that next week), but logically,\n> in this scenario wouldn't lowering random_page_cost be ideal or would it not\n> really matter in the grand scheme of things?\n\nJust on a side note, random access on SSD is still more expensive than\nsequential, because it is designed in banks.\nIf you don believe me, turn off any software/OS cache , and try random\naccess timing against seq reads.\nThis gap is just much much narrower.\n\n\n--\nGJ\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 11 Mar 2009 10:06:53 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost vs ssd?" }, { "msg_contents": "2009/3/12 Scott Carey <[email protected]>:\n> [...snip...]. All tests start with 'cat 3 > /proc/sys/vm/drop_caches', and work on\n> a 32GB data set (40% of the disk).\nWhat's the content of '3' above?\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Thu, 12 Mar 2009 08:04:17 +1300", "msg_from": "Andrej <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost vs ssd?" }, { "msg_contents": "Google > “linux drop_caches” first result:\r\nhttp://www.linuxinsight.com/proc_sys_vm_drop_caches.html\r\n\r\nTo be sure a test is going to disk and not file system cache for everything in linux, run:\r\n‘sync; cat 3 > /proc/sys/vm/drop_caches’\r\n\r\nOn 3/11/09 11:04 AM, \"Andrej\" <[email protected]> wrote:\r\n\r\n2009/3/12 Scott Carey <[email protected]>:\r\n> [...snip...]. All tests start with 'cat 3 > /proc/sys/vm/drop_caches', and work on\r\n> a 32GB data set (40% of the disk).\r\nWhat's the content of '3' above?\r\n\r\n--\r\nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\r\n\r\nhttp://www.american.edu/econ/notes/htmlmail.htm\r\n\r\n\n\n\nRe: [PERFORM] random_page_cost vs ssd?\n\n\nGoogle > “linux drop_caches” first result:\nhttp://www.linuxinsight.com/proc_sys_vm_drop_caches.html\n\r\nTo be sure a test is going to disk and not file system cache for everything in linux, run:\r\n‘sync; cat 3 > /proc/sys/vm/drop_caches’\n\r\nOn 3/11/09 11:04 AM, \"Andrej\" <[email protected]> wrote:\n\n2009/3/12 Scott Carey <[email protected]>:\r\n> [...snip...].    All tests start with 'cat 3 > /proc/sys/vm/drop_caches', and work on\r\n>  a 32GB data set (40% of the disk).\r\nWhat's the content of '3' above?\n\r\n--\r\nPlease don't top post, and don't use HTML e-Mail :}  Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm", "msg_date": "Wed, 11 Mar 2009 12:28:56 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost vs ssd?" }, { "msg_contents": "On Wed, Mar 11, 2009 at 12:28:56PM -0700, Scott Carey wrote:\n> Google > “linux drop_caches” first result:\n> http://www.linuxinsight.com/proc_sys_vm_drop_caches.html\n> To be sure a test is going to disk and not file system cache for everything in linux, run:\n> ‘sync; cat 3 > /proc/sys/vm/drop_caches’\n\nwell. 
the url you showed tells to do: echo 3 > ...\ncat 3 is \"slightly\" different.\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Wed, 11 Mar 2009 20:32:27 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost vs ssd?" }, { "msg_contents": "Scott Carey <[email protected]> wrote:\n> On 3/11/09 11:04 AM, \"Andrej\" <[email protected]> wrote:\n>> 2009/3/12 Scott Carey <[email protected]>:\n \n>>> All tests start with 'cat 3 > /proc/sys/vm/drop_caches'\n \n>> What's the content of '3' above?\n \n> Google > *linux drop_caches* first result:\n> http://www.linuxinsight.com/proc_sys_vm_drop_caches.html\n> \n> To be sure a test is going to disk and not file system cache for\n> everything in linux, run:\n> *sync; cat 3 > /proc/sys/vm/drop_caches*\n \nThe cited page recommends \"echo 3\" -- is that what you used in your\ntests, or the \"cat 3\" you repeated specify? If the latter, what is in\nthe \"3\" file?\n \n>> Please don't top post, and don't use HTML e-Mail :} Make your\n>> quotes concise.\n>> \n>> http://www.american.edu/econ/notes/htmlmail.htm\n \nDid you miss this part?\n \n-Kevin\n", "msg_date": "Wed, 11 Mar 2009 14:40:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost vs ssd?" }, { "msg_contents": "Echo. It was a typo.\n\n\nOn 3/11/09 11:40 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\nScott Carey <[email protected]> wrote:\n> On 3/11/09 11:04 AM, \"Andrej\" <[email protected]> wrote:\n>> 2009/3/12 Scott Carey <[email protected]>:\n\n>>> All tests start with 'cat 3 > /proc/sys/vm/drop_caches'\n\n>> What's the content of '3' above?\n\n> Google > *linux drop_caches* first result:\n> http://www.linuxinsight.com/proc_sys_vm_drop_caches.html\n>\n> To be sure a test is going to disk and not file system cache for\n> everything in linux, run:\n> *sync; cat 3 > /proc/sys/vm/drop_caches*\n\nThe cited page recommends \"echo 3\" -- is that what you used in your\ntests, or the \"cat 3\" you repeated specify? If the latter, what is in\nthe \"3\" file?\n\n>> Please don't top post, and don't use HTML e-Mail :} Make your\n>> quotes concise.\n>>\n>> http://www.american.edu/econ/notes/htmlmail.htm\n\nDid you miss this part?\n\n-Kevin\n\n\n\n\nRe: [PERFORM] random_page_cost vs ssd?\n\n\nEcho.  It was a typo.\n\n\nOn 3/11/09 11:40 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\nScott Carey <[email protected]> wrote:\n> On 3/11/09 11:04 AM, \"Andrej\" <[email protected]> wrote:\n>> 2009/3/12 Scott Carey <[email protected]>:\n\n>>> All tests start with 'cat 3 > /proc/sys/vm/drop_caches'\n\n>> What's the content of '3' above?\n\n> Google > *linux drop_caches* first result:\n> http://www.linuxinsight.com/proc_sys_vm_drop_caches.html\n>\n> To be sure a test is going to disk and not file system cache for\n> everything in linux, run:\n> *sync; cat 3 > /proc/sys/vm/drop_caches*\n\nThe cited page recommends \"echo 3\" -- is that what you used in your\ntests, or the \"cat 3\" you repeated specify?  
If the latter, what is in\nthe \"3\" file?\n\n>> Please don't top post, and don't use HTML e-Mail :}  Make your\n>> quotes concise.\n>>\n>> http://www.american.edu/econ/notes/htmlmail.htm\n\nDid you miss this part?\n\n-Kevin", "msg_date": "Wed, 11 Mar 2009 13:31:13 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost vs ssd?" } ]
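Scott's numbers near the top of this thread come out as a very small change, which can be tried per session (comparing plans with EXPLAIN) before putting anything in postgresql.conf. A sketch for an SSD-only data directory, not a recommendation for any particular drive:

SET seq_page_cost = 1.0;
SET random_page_cost = 1.1;   -- per the figures quoted above, random reads on
                              -- the X25 are only marginally dearer than
                              -- sequential ones at 8k block size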
[ { "msg_contents": "Greetings. We're having trouble with full logging since we moved from\nan 8-core server with 16 GB memory to a machine with double that\nspec and I am wondering if this *should* be working or if there is a\npoint on larger machines where logging and scheduling seeks of\nbackground writes - or something along those lines; it might be a\ntheory - doesn't work together any more?\n\nThe box in question is a Dell PowerEdge R900 with 16 cores and 64 GB\nof RAM (16 GB of shared buffers allocated), and a not-so-great\n\nroot@db04:~# lspci|grep RAID\n19:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS\n1078 (rev 04)\n\ncontroller with 8 10k rpm disks in RAID 1+0 (one big filesystem),\nrunning Ubuntu Hardy with kernel version\n\nroot@db04:~# uname -a\nLinux db04 2.6.24-22-server #1 SMP Mon Nov 24 20:06:28 UTC 2008 x86_64 GNU/Linux\n\nLogging to the disk array actually stops working much earlier; at\noff-peak time we have around 3k transactions per second and if we set\nlog_statement = all, the server gets bogged down immediately: Load,\ncontext switches, and above all mean query duration shoot up; the\napplication slows to a crawl and becomes unusable.\n\nSo the idea came up to log to /dev/shm which is a default ram disk on\nLinux with half the available memory as a maximum size.\n\nThis works much better but once we are at about 80% of peak load -\nwhich is around 8000 transactions per second currently - the server goes\ninto a tailspin in the manner described above and we have to switch off full\nlogging.\n\nThis is a problem because we can't do proper query analysis any more.\n\nHow are others faring with full logging on bigger boxes?\n\nRegards,\n\nFrank\n", "msg_date": "Wed, 11 Mar 2009 19:27:10 +0000", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Full statement logging problematic on larger machines?" }, { "msg_contents": "On Wed, Mar 11, 2009 at 1:27 PM, Frank Joerdens <[email protected]> wrote:\n> Greetings. 
We're having trouble with full logging since we moved from\n> an 8-core server with 16 GB memory to a machine with double that\n> spec and I am wondering if this *should* be working or if there is a\n> point on larger machines where logging and scheduling seeks of\n> background writes - or something along those lines; it might be a\n> theory - doesn't work together any more?\n>\n> The box in question is a Dell PowerEdge R900 with 16 cores and 64 GB\n> of RAM (16 GB of shared buffers allocated), and a not-so-great\n>\n> root@db04:~# lspci|grep RAID\n> 19:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS\n> 1078 (rev 04)\n>\n> controller with 8 10k rpm disks in RAID 1+0 (one big filesystem),\n> running Ubuntu Hardy with kernel version\n>\n> root@db04:~# uname -a\n> Linux db04 2.6.24-22-server #1 SMP Mon Nov 24 20:06:28 UTC 2008 x86_64 GNU/Linux\n>\n> Logging to the disk array actually stops working much earlier; at\n> off-peak time we have around 3k transactions per second and if we set\n> log_statement = all, the server gets bogged down immediately: Load,\n> context switches, and above all mean query duration shoot up; the\n> application slows to a crawl and becomes unusable.\n>\n> So the idea came up to log to /dev/shm which is a default ram disk on\n> Linux with half the available memory as a maximum size.\n>\n> This works much better but once we are at about 80% of peak load -\n> which is around 8000 transactions per second currently - the server goes\n> into a tailspin in the manner described above and we have to switch off full\n> logging.\n>\n> This is a problem because we can't do proper query analysis any more.\n>\n> How are others faring with full logging on bigger boxes?\n\nWe have 16 disks in our machine, 2 in a mirror for OS / logs, 2 in a\nmirror for pg_xlog, and 12 in a RAID-10 for our main data store. By\nhaving the logs go to a mirror set that's used for little else, we get\nmuch better throughput than having it be on the same array as our\nlarge transactional store. We're not handling anywhere near as many\ntransactions per second as you though. But full query logging costs\nus almost nothing in terms of performance and load.\n\nWould logging only the slower / slowest running queries help you?\n", "msg_date": "Wed, 11 Mar 2009 13:41:24 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "Frank Joerdens <[email protected]> writes:\n> Greetings. We're having trouble with full logging since we moved from\n> an 8-core server with 16 GB memory to a machine with double that\n> spec and I am wondering if this *should* be working or if there is a\n> point on larger machines where logging and scheduling seeks of\n> background writes - or something along those lines; it might be a\n> theory - doesn't work together any more?\n\nYou didn't tell us anything interesting about *how* you are logging,\nso it's hard to comment on this. Are you just writing to stderr?\nsyslog? using PG's built-in log capture process? It would be good\nto show all of your log-related postgresql.conf settings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Mar 2009 16:46:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines? 
" }, { "msg_contents": "On Wed, Mar 11, 2009 at 8:46 PM, Tom Lane <[email protected]> wrote:\n> Frank Joerdens <[email protected]> writes:\n>> Greetings. We're having trouble with full logging since we moved from\n>> an 8-core server with 16 GB memory to a machine with double that\n>> spec and I am wondering if this *should* be working or if there is a\n>> point on larger machines where logging and scheduling seeks of\n>> background writes - or something along those lines; it might be a\n>> theory - doesn't work together any more?\n>\n> You didn't tell us anything interesting about *how* you are logging,\n> so it's hard to comment on this.  Are you just writing to stderr?\n> syslog?  using PG's built-in log capture process?  It would be good\n> to show all of your log-related postgresql.conf settings.\n\nHere's the complete postgresql.conf (I've whittled it down as much as\nI could so it's quite compact):\n\nfrank@db04:~$ cat /etc/postgresql/8.2/main/postgresql.conf\ndata_directory = '/var/lib/postgresql/8.2/main'\nhba_file = '/etc/postgresql/8.2/main/pg_hba.conf'\nident_file = '/etc/postgresql/8.2/main/pg_ident.conf'\n\nlisten_addresses = 'localhost,172.16.222.62'\nport = 5432\n\nmax_connections = 1000\nshared_buffers = 16GB\nwork_mem = 200MB\nmaintenance_work_mem = 1GB\nmax_fsm_pages = 50000\nwal_buffers = 8MB\ncheckpoint_segments = 16\n\n\nautovacuum = on\nstats_start_collector = on\nstats_row_level = on\n\n\neffective_cache_size = 4GB\ndefault_statistics_target = 10\nconstraint_exclusion = off\ncheckpoint_warning = 1h\nescape_string_warning = off\n\nlog_duration = off\nlog_min_duration_statement = 1000\nlog_statement = 'ddl'\nlog_line_prefix = '%m %p %h %u '\n\narchive_command = '/usr/bin/walmgr.py\n/var/lib/postgresql/walshipping/master.ini xarchive %p %f'\n\nredirect_stderr = on\nlog_directory = '/dev/shm/'\nlog_rotation_age = 0\nlog_rotation_size = 0\n\nThe above is what we're doing right now, only logging queries that run\nfor over a second, and that is no problem; so the answer to Scott's\nquestion in his reply to my posting is: Yes, logging only the slower\nqueries does work.\n\nYesterday I changed log_duration = on and log_statement = 'all' at\noff-peak time and left it on for 4 hours while traffic was picking up.\nEventually I had to stop it because the server got bogged down.\n\nRegards,\n\nFrank\n", "msg_date": "Wed, 11 Mar 2009 22:42:29 +0000", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "On Wed, Mar 11, 2009 at 8:27 PM, Frank Joerdens <[email protected]> wrote:\n> This works much better but once we are at about 80% of peak load -\n> which is around 8000 transactions per second currently - the server goes\n> into a tailspin in the manner described above and we have to switch off full\n> logging.\n\nFirst, don't use log_duration = on + log_statement = 'all' to log all\nthe queries, use log_min_duration_statement=0, it's less verbose.\n\nI don't know if the logging integrated into PostgreSQL can bufferize\nits output. Andrew? If not, you should try syslog instead and see if\nasynchronous logging with syslog is helping (you need to prefix the\npath with a dash to enable asynchronous logging). 
You can also try to\nsend the logs on the network via udp (and also tcp if you have an\nenhanced syslog-like).\n\nAnother option is to log the duration of every query but not the text.\nWe used to have this sort of configuration to gather comprehensive\nstatistics and slowest queries on highly loaded servers (it's not\nperfect though but it can be an acceptable compromise):\nlog_duration = on\nlog_min_duration_statement = 100\n\n-- \nGuillaume\n", "msg_date": "Thu, 12 Mar 2009 00:59:06 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "Guillaume Smet <[email protected]> writes:\n> I don't know if the logging integrated into PostgreSQL can bufferize\n> its output. Andrew?\n\nIt uses fwrite(), and normally sets its output into line-buffered mode.\nFor a high-throughput case like this it seems like using fully buffered\nmode might be an acceptable tradeoff. You could try changing _IOLBF\nto _IOFBF near the head of postmaster/syslogger.c and see if that helps.\n(If it does, we could think about exposing some way of tuning this\nwithout modifying the code, but for now you'll have to do that.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Mar 2009 21:45:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines? " }, { "msg_contents": "On Thu, Mar 12, 2009 at 2:05 AM, Andrew Dunstan <[email protected]> wrote:\n> It is buffered at the individual log message level, so that we make sure we\n> don't multiplex messages. No more than that.\n\nOK. So if the OP can afford multiplexed queries by using a log\nanalyzer supporting them, it might be a good idea to try syslog with\nfull buffering.\n\n-- \nGuillaume\n", "msg_date": "Thu, 12 Mar 2009 08:55:33 +0100", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "On Thu, Mar 12, 2009 at 1:45 AM, Tom Lane <[email protected]> wrote:\n[...]\n> You could try changing _IOLBF\n> to _IOFBF near the head of postmaster/syslogger.c and see if that helps.\n\nI just put the patched .deb on staging and we'll give it a whirl there\nfor basic sanity checking - we currently have no way to even\napproximate the load that we have on live for testing.\n\nIf all goes well I expect we'll put it on live early next week. I'll\nlet you know how it goes.\n\nRegards,\n\nFrank\n", "msg_date": "Thu, 12 Mar 2009 13:38:56 +0000", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "On Thursday 12 March 2009 14:38:56 Frank Joerdens wrote:\n> I just put the patched .deb on staging and we'll give it a whirl there\n> for basic sanity checking - we currently have no way to even\n> approximate the load that we have on live for testing.\n\nIs it a capacity problem or a tool suite problem?\nIf the latter, you could try out tsung:\n http://archives.postgresql.org/pgsql-admin/2008-12/msg00032.php\n http://tsung.erlang-projects.org/\n\nRegards,\n-- \ndim", "msg_date": "Thu, 12 Mar 2009 14:45:32 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines?" 
}, { "msg_contents": "On Wed, Mar 11, 2009 at 11:42 PM, Frank Joerdens <[email protected]> wrote:\n>\n> effective_cache_size            = 4GB\n\nOnly 4GB with 64GB of ram ?\n\nAbout logging, we have 3 partition :\n- data\n- index\n- everything else, including logging.\n\nUsually, we log on a remote syslog (a dedicated log server for the\nwhole server farm).\n\nFor profiling (pgfouine), we have a crontab that change the postgresql\nlogging configuration for just a few mn.\nand log \"all\" on the \"everything but postgresql\" partition.\n\naround 2000 query/seconds/servers, no problem.\n\n\n-- \nLaurent Laborde\nSysadmin at JFG-Networks / Over-blog\n", "msg_date": "Thu, 12 Mar 2009 22:32:59 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "for profiling, you can also use the epqa.\n\nhttp://epqa.sourceforge.net/\n\nOn Fri, Mar 13, 2009 at 3:02 AM, Laurent Laborde <[email protected]>wrote:\n\n> On Wed, Mar 11, 2009 at 11:42 PM, Frank Joerdens <[email protected]>\n> wrote:\n> >\n> > effective_cache_size = 4GB\n>\n> Only 4GB with 64GB of ram ?\n>\n> About logging, we have 3 partition :\n> - data\n> - index\n> - everything else, including logging.\n>\n> Usually, we log on a remote syslog (a dedicated log server for the\n> whole server farm).\n>\n> For profiling (pgfouine), we have a crontab that change the postgresql\n> logging configuration for just a few mn.\n> and log \"all\" on the \"everything but postgresql\" partition.\n>\n> around 2000 query/seconds/servers, no problem.\n>\n>\n> --\n> Laurent Laborde\n> Sysadmin at JFG-Networks / Over-blog\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nfor profiling, you can also use the epqa.http://epqa.sourceforge.net/On Fri, Mar 13, 2009 at 3:02 AM, Laurent Laborde <[email protected]> wrote:\nOn Wed, Mar 11, 2009 at 11:42 PM, Frank Joerdens <[email protected]> wrote:\n\n>\n> effective_cache_size            = 4GB\n\nOnly 4GB with 64GB of ram ?\n\nAbout logging, we have 3 partition :\n- data\n- index\n- everything else, including logging.\n\nUsually, we log on a remote syslog (a dedicated log server for the\nwhole server farm).\n\nFor profiling (pgfouine), we have a crontab that change the postgresql\nlogging configuration for just a few mn.\nand log \"all\" on the \"everything but postgresql\" partition.\n\naround 2000 query/seconds/servers, no problem.\n\n\n--\nLaurent Laborde\nSysadmin at JFG-Networks / Over-blog\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 13 Mar 2009 13:58:05 +0530", "msg_from": "sathiya psql <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "On Fri, Mar 13, 2009 at 9:28 AM, sathiya psql <[email protected]> wrote:\n> for profiling, you can also use the epqa.\n>\n> http://epqa.sourceforge.net/\n\nor PGSI : http://bucardo.org/pgsi/\nBut it require a syslog date format we don't use here. So i wasn't\nable to test it :/\n\n-- \nF4FQM\nKerunix Flan\nLaurent Laborde\n", "msg_date": "Fri, 13 Mar 2009 11:21:48 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full statement logging problematic on larger machines?" 
}, { "msg_contents": "On Thu, Mar 12, 2009 at 9:32 PM, Laurent Laborde <[email protected]> wrote:\n> On Wed, Mar 11, 2009 at 11:42 PM, Frank Joerdens <[email protected]> wrote:\n>>\n>> effective_cache_size            = 4GB\n>\n> Only 4GB with 64GB of ram ?\n\nI'd been overly cautious lately with config changes as it's been\ndifficult to argue for downtime and associated service risk. Changed\nit to 48 GB now since it doesn't require a restart which I'd missed.\nThanks for spotting!\n\nRegards,\n\nFrank\n", "msg_date": "Fri, 20 Mar 2009 19:01:05 +0000", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "On Thu, Mar 12, 2009 at 1:38 PM, Frank Joerdens <[email protected]> wrote:\n> On Thu, Mar 12, 2009 at 1:45 AM, Tom Lane <[email protected]> wrote:\n> [...]\n>> You could try changing _IOLBF\n>> to _IOFBF near the head of postmaster/syslogger.c and see if that helps.\n\nThe patched server is now running on live, and we'll be watching it\nover the weekend with\n\nlog_duration = off\nlog_min_duration_statement = 1000\nlog_statement = 'ddl'\n\nand then run a full logging test early next week if there are no\nproblems with the above settings.\n\nCan you explain again what the extent of multiplexed messages I'll\nhave to expect is? What exactly is the substance of the tradeoff?\nOccasionally the server will write the same line twice? Don't really\nfollow why ...\n\nAnd the next problem is that now unfortunately the entire comparison\nis obfuscated and complicated by a release we did on Monday which has\nhad a strange effect: Quite extreme load average spikes occurring\nfrequently that do not seem to impact query speed - not much anyway or\nif they do then in a random intermittent manner that's not become\napparent (yet) - CPU usage is actually marginally down, locks\nsignificantly down, and all other relevant metrics basically unchanged\nlike context switches and memory usage profile. Now, it *seems* that\nthe extra load is caused by idle (sic!) backends (*not* idle in\ntransaction even) consuming significant CPU when you look at htop. I\ndon't have a theory as to that right now. We use pgbouncer as a\nconnection pooler. What could make idle backends load the server\nsubstantially?\n\nRegards,\n\nFrank\n", "msg_date": "Fri, 20 Mar 2009 19:47:29 +0000", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "On Fri, Mar 20, 2009 at 8:21 PM, Andrew Dunstan <[email protected]> wrote:\n>\n>\n> Frank Joerdens wrote:\n>>\n>> On Thu, Mar 12, 2009 at 1:38 PM, Frank Joerdens <[email protected]> wrote:\n>>\n>>>\n>>> On Thu, Mar 12, 2009 at 1:45 AM, Tom Lane <[email protected]> wrote:\n>>> [...]\n>>>\n>>>>\n>>>> You could try changing _IOLBF\n>>>> to _IOFBF near the head of postmaster/syslogger.c and see if that helps.\n[...]\n>> Can you explain again what the extent of multiplexed messages I'll\n>> have to expect is? What exactly is the substance of the tradeoff?\n>> Occasionally the server will write the same line twice? Don't really\n>> follow why ...\n[...]\n> I don't believe changing this will result in any multiplexing. 
The\n> multiplexing problem was solved in 8.3 by the introduction of the chunking\n> protocol between backends and the syslogger, and changing the output\n> buffering of the syslogger should not cause a multiplexing problem, since\n> it's a single process.\n\nHum, we're still on 8.2 - last attempt to upgrade before xmas was\nunsuccessful; we had to roll back due to not fully understood\nperformance issues. We think we nailed the root cause(s) those though\nand will make another better planned effort to upgrade before March is\nout.\n\nOh well, maybe this all means we shouldn't try to get this running on\n8.2 and just tackle the issue again after the upgrade ...\n\nCheers,\n\nFrank\n", "msg_date": "Fri, 20 Mar 2009 22:15:01 +0000", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full statement logging problematic on larger machines?" }, { "msg_contents": "On Fri, Mar 20, 2009 at 8:47 PM, Frank Joerdens <[email protected]> wrote:\n> On Thu, Mar 12, 2009 at 1:38 PM, Frank Joerdens <[email protected]> wrote:\n>> On Thu, Mar 12, 2009 at 1:45 AM, Tom Lane <[email protected]> wrote:\n>> [...]\n>>> You could try changing _IOLBF\n>>> to _IOFBF near the head of postmaster/syslogger.c and see if that helps.\n>\n> The patched server is now running on live, and we'll be watching it\n> over the weekend with\n>\n> log_duration = off\n> log_min_duration_statement = 1000\n> log_statement = 'ddl'\n>\n> and then run a full logging test early next week if there are no\n> problems with the above settings.\n\nReporting back on this eventually (hitherto, all our experimenting\nappeared inconclusive): The patched 8.2 server did not appear to make\nany difference, it still didn't work, performance was affected in the\nsame way as before.\n\nHowever in the meantime we managed to go to 8.3 and now it does work *if*\n\nsynchronous_commit = off\n\nAnd now I am wondering what that means exactly: Does it necessarily\nfollow that it's I/O contention on the disk subsystem because delayed\nflushing to WAL - what asynchronous commit does - gives the server\njust the window to insert the log line into the disk controller's\nwrite cache, as the transaction commit's write and the log line write\nwould be otherwise simultaneous with synchronous commit? Does it\nfollow that if I put pg_xlog now on a separate spindle and/or\ncontroller, it should work?\n\nSomehow I think not, as the disk array isn't even near maxed out,\naccording to vmstat. Or is the disk cache just too small then?\n\nFrank\n", "msg_date": "Sat, 23 May 2009 01:13:08 +0100", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full statement logging problematic on larger machines?" } ]
[ { "msg_contents": "Hello All,\n\nAs you know that one of the thing that constantly that I have been \nusing benchmark kits to see how we can scale PostgreSQL on the \nUltraSPARC T2 based 1 socket (64 threads) and 2 socket (128 threads) \nservers that Sun sells.\n\nDuring last PgCon 2008 \nhttp://www.pgcon.org/2008/schedule/events/72.en.html you might remember \nthat I mentioned that ProcArrayLock is pretty hot when you have many users.\n\nRerunning similar tests on a 64-thread UltraSPARC T2plus based server \nconfig, I found that even with 8.4snap that I took I was still having \nsimilar problems (IO is not a problem... all in RAM .. no disks):\nTime:Users:Type:TPM: Response Time\n60: 100: Medium Throughput: 10552.000 Avg Medium Resp: 0.006\n120: 200: Medium Throughput: 22897.000 Avg Medium Resp: 0.006\n180: 300: Medium Throughput: 33099.000 Avg Medium Resp: 0.009\n240: 400: Medium Throughput: 44692.000 Avg Medium Resp: 0.007\n300: 500: Medium Throughput: 56455.000 Avg Medium Resp: 0.007\n360: 600: Medium Throughput: 67220.000 Avg Medium Resp: 0.008\n420: 700: Medium Throughput: 77592.000 Avg Medium Resp: 0.009\n480: 800: Medium Throughput: 87277.000 Avg Medium Resp: 0.011\n540: 900: Medium Throughput: 98029.000 Avg Medium Resp: 0.012\n600: 1000: Medium Throughput: 102547.000 Avg Medium Resp: 0.023\n660: 1100: Medium Throughput: 100503.000 Avg Medium Resp: 0.044\n720: 1200: Medium Throughput: 99506.000 Avg Medium Resp: 0.065\n780: 1300: Medium Throughput: 95474.000 Avg Medium Resp: 0.089\n840: 1400: Medium Throughput: 86254.000 Avg Medium Resp: 0.130\n900: 1500: Medium Throughput: 91947.000 Avg Medium Resp: 0.139\n960: 1600: Medium Throughput: 94838.000 Avg Medium Resp: 0.147\n1020: 1700: Medium Throughput: 92446.000 Avg Medium Resp: 0.173\n1080: 1800: Medium Throughput: 91032.000 Avg Medium Resp: 0.194\n1140: 1900: Medium Throughput: 88236.000 Avg Medium Resp: 0.221\n runDynamic: uCount = 2000delta = 1900\n runDynamic: ALL Threads Have Been created\n1200: 2000: Medium Throughput: -1352555.000 Avg Medium Resp: 0.071\n1260: 2000: Medium Throughput: 88872.000 Avg Medium Resp: 0.238\n1320: 2000: Medium Throughput: 88484.000 Avg Medium Resp: 0.248\n1380: 2000: Medium Throughput: 90777.000 Avg Medium Resp: 0.231\n1440: 2000: Medium Throughput: 90769.000 Avg Medium Resp: 0.229\n\nYou will notice that throughput drops around 1000 users.. Nothing new \nyou have already heard me mention that zillion times..\n\nNow while working on this today I was going through LWLockRelease like I \nhave probably done quite a few times before to see what can be done.. \nThe quick synopsis is that LWLockRelease releases the lock and wakes up \nthe next waiter to take over and if the next waiter is waiting for \nexclusive then it only wakes that waiter up and if next waiter is \nwaiting on shared then it goes through all shared waiters following and \nwakes them all up.\n\nEarlier last year I had tried various ways of doing intelligent waking \nup (finding all shared together and waking them up, coming up with a \ndifferent lock type and waking multiple of them up simultaneously but \nended up defining a new lock mode and of course none of them were \nstellar enough to make an impack..\n\nToday I tried something else.. Forget the distinction of exclusive and \nshared and just wake them all up so I changed the code from\n /*\n * Remove the to-be-awakened PGPROCs from the \nqueue. If the front\n * waiter wants exclusive lock, awaken him \nonly. 
Otherwise awaken\n * as many waiters as want shared access.\n */\n proc = head;\n if (!proc->lwExclusive)\n {\n while (proc->lwWaitLink != NULL &&\n !proc->lwWaitLink->lwExclusive)\n proc = proc->lwWaitLink;\n }\n /* proc is now the last PGPROC to be released */\n lock->head = proc->lwWaitLink;\n proc->lwWaitLink = NULL;\n /* prevent additional wakeups until retryer gets \nto run */\n lock->releaseOK = false;\n\n\nto basically wake them all up:\n /*\n * Remove the to-be-awakened PGPROCs from the queue. If the \nfront\n * waiter wants exclusive lock, awaken him only. Otherwise \nawaken\n * as many waiters as want shared access.\n */\n proc = head;\n //if (!proc->lwExclusive)\n if (1)\n {\n while (proc->lwWaitLink != NULL &&\n 1)\n // \n!proc->lwWaitLink->lwExclusive)\n proc = proc->lwWaitLink;\n }\n /* proc is now the last PGPROC to be released */\n lock->head = proc->lwWaitLink;\n proc->lwWaitLink = NULL;\n /* prevent additional wakeups until retryer gets \nto run */\n lock->releaseOK = false;\n\n\nWhich basically wakes them all up and let them find (technically causing \nthundering herds what the original logic was trying to avoid) I reran \nthe test and saw the results:\n\nTime:Users:Type:TPM: Response Time\n60: 100: Medium Throughput: 10457.000 Avg Medium Resp: 0.006\n120: 200: Medium Throughput: 22809.000 Avg Medium Resp: 0.006\n180: 300: Medium Throughput: 33665.000 Avg Medium Resp: 0.008\n240: 400: Medium Throughput: 45042.000 Avg Medium Resp: 0.006\n300: 500: Medium Throughput: 56655.000 Avg Medium Resp: 0.007\n360: 600: Medium Throughput: 67170.000 Avg Medium Resp: 0.007\n420: 700: Medium Throughput: 78343.000 Avg Medium Resp: 0.008\n480: 800: Medium Throughput: 87979.000 Avg Medium Resp: 0.008\n540: 900: Medium Throughput: 100369.000 Avg Medium Resp: 0.008\n600: 1000: Medium Throughput: 110697.000 Avg Medium Resp: 0.009\n660: 1100: Medium Throughput: 121255.000 Avg Medium Resp: 0.010\n720: 1200: Medium Throughput: 132915.000 Avg Medium Resp: 0.010\n780: 1300: Medium Throughput: 141505.000 Avg Medium Resp: 0.012\n840: 1400: Medium Throughput: 147084.000 Avg Medium Resp: 0.021\nlight: customer: No result set for custid 0\n900: 1500: Medium Throughput: 157906.000 Avg Medium Resp: 0.018\nlight: customer: No result set for custid 0\n960: 1600: Medium Throughput: 160289.000 Avg Medium Resp: 0.026\n1020: 1700: Medium Throughput: 152191.000 Avg Medium Resp: 0.053\n1080: 1800: Medium Throughput: 157949.000 Avg Medium Resp: 0.054\n1140: 1900: Medium Throughput: 161923.000 Avg Medium Resp: 0.063\n runDynamic: uCount = 2000delta = 1900\n runDynamic: ALL Threads Have Been created\n1200: 2000: Medium Throughput: -1781969.000 Avg Medium Resp: 0.019\nlight: customer: No result set for custid 0\n1260: 2000: Medium Throughput: 140741.000 Avg Medium Resp: 0.115\nlight: customer: No result set for custid 0\n1320: 2000: Medium Throughput: 165379.000 Avg Medium Resp: 0.070\n1380: 2000: Medium Throughput: 166585.000 Avg Medium Resp: 0.070\n1440: 2000: Medium Throughput: 169163.000 Avg Medium Resp: 0.063\n1500: 2000: Medium Throughput: 157508.000 Avg Medium Resp: 0.086\nlight: customer: No result set for custid 0\n1560: 2000: Medium Throughput: 170112.000 Avg Medium Resp: 0.063\n\nAn improvement of 1.89X in throughput and still not drastically dropping \nwhich means now I can go forward still stressing up PostgreSQL 8.4 to \nthe limits of the box.\n\nMy proposal is if we build a quick tunable for 8.4 \nwake-up-all-waiters=on (or something to that effect) in postgresql.conf \nbefore the beta then people 
can try the option and report back to see if \nthat helps improve performance on various other benchmarks that people \nare running and collect feedback. This way it will be not intrusive so \nlate in the game and also put an important scaling fix back in... Of \ncourse as usual this is open for debate.. I know avoiding thundering \nherd was the goal here.. but waking up 1 exclusive waiter who may not be \neven on CPU is pretty expensive from what I have seen till date.\n\nWhat do you all think ?\n\nRegards,\nJignesh\n\n", "msg_date": "Wed, 11 Mar 2009 16:53:49 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": ">>> \"Jignesh K. Shah\" <[email protected]> wrote: \n> Rerunning similar tests on a 64-thread UltraSPARC T2plus based\n> server config\n \n> (IO is not a problem... all in RAM .. no disks):\n> Time:Users:Type:TPM: Response Time\n> 60: 100: Medium Throughput: 10552.000 Avg Medium Resp: 0.006\n> 120: 200: Medium Throughput: 22897.000 Avg Medium Resp: 0.006\n> 180: 300: Medium Throughput: 33099.000 Avg Medium Resp: 0.009\n> 240: 400: Medium Throughput: 44692.000 Avg Medium Resp: 0.007\n> 300: 500: Medium Throughput: 56455.000 Avg Medium Resp: 0.007\n> 360: 600: Medium Throughput: 67220.000 Avg Medium Resp: 0.008\n> 420: 700: Medium Throughput: 77592.000 Avg Medium Resp: 0.009\n> 480: 800: Medium Throughput: 87277.000 Avg Medium Resp: 0.011\n> 540: 900: Medium Throughput: 98029.000 Avg Medium Resp: 0.012\n> 600: 1000: Medium Throughput: 102547.000 Avg Medium Resp: 0.023\n \nI'm wondering about the testing methodology. If there is no I/O, I\nwouldn't expect performance to improve after you have all the CPU\nthreads busy. (OK, so there might be some brief blocking that would\nmake the optimal number of connections somewhat above 64, but 1000???)\n \nWhat's the bottleneck which allows additional connections to improve\nthe throughput? Network latency?\n \nI'm a lot more interested in what's happening between 60 and 180 than\nover 1000, personally. If there was a RAID involved, I'd put it down\nto better use of the numerous spindles, but when it's all in RAM it\nmakes no sense.\n \n-Kevin\n", "msg_date": "Wed, 11 Mar 2009 17:27:12 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/11/09 18:27, Kevin Grittner wrote:\n>>>> \"Jignesh K. Shah\" <[email protected]> wrote: \n>>>> \n>> Rerunning similar tests on a 64-thread UltraSPARC T2plus based\n>> server config\n>> \n> \n> \n>> (IO is not a problem... all in RAM .. no disks):\n>> Time:Users:Type:TPM: Response Time\n>> 60: 100: Medium Throughput: 10552.000 Avg Medium Resp: 0.006\n>> 120: 200: Medium Throughput: 22897.000 Avg Medium Resp: 0.006\n>> 180: 300: Medium Throughput: 33099.000 Avg Medium Resp: 0.009\n>> 240: 400: Medium Throughput: 44692.000 Avg Medium Resp: 0.007\n>> 300: 500: Medium Throughput: 56455.000 Avg Medium Resp: 0.007\n>> 360: 600: Medium Throughput: 67220.000 Avg Medium Resp: 0.008\n>> 420: 700: Medium Throughput: 77592.000 Avg Medium Resp: 0.009\n>> 480: 800: Medium Throughput: 87277.000 Avg Medium Resp: 0.011\n>> 540: 900: Medium Throughput: 98029.000 Avg Medium Resp: 0.012\n>> 600: 1000: Medium Throughput: 102547.000 Avg Medium Resp: 0.023\n>> \n> \n> I'm wondering about the testing methodology. 
If there is no I/O, I\n> wouldn't expect performance to improve after you have all the CPU\n> threads busy. (OK, so there might be some brief blocking that would\n> make the optimal number of connections somewhat above 64, but 1000???)\n> \n> What's the bottleneck which allows additional connections to improve\n> the throughput? Network latency?\n> \n> I'm a lot more interested in what's happening between 60 and 180 than\n> over 1000, personally. If there was a RAID involved, I'd put it down\n> to better use of the numerous spindles, but when it's all in RAM it\n> makes no sense.\n> \n> -Kevin\n> \n\n\nKevin,\n\nThe problem is the CPUs are not all busy there is plenty of idle cycles \nsince PostgreSQL ends up in situations where they are all waiting for \nlockacquires for exclusive.. In cases where there is say one cpu then \nwaking up one or few waiters is more efficient.. However when you have \n64 or 128 or 256 (as in my case), waking up one waiter is inefficient \nsince only one waiter will be allowed to run while other waiters will \nstill wake up, spin acquire lock and say.. oh I am still not allowed and \ngo back to speed..\n\nTesting methology is considering we can get fast storage, can PostgreSQL \nstill scale to use say 32, 64, 128, 256 cpus... I am just ahead of the \ncurve of wide spread usage here probably but I want to make sure \nPostgreSQL is well tested already for it. And yes I still have plenty of \nunused CPU so the goal is to make sure if system can handle it, so can \nPostgreSQL.\n\n\nRegards,\nJignesh\n\n\n\n\n\n\n\n\n\nOn 03/11/09 18:27, Kevin Grittner wrote:\n\n\n\n\n\"Jignesh K. Shah\" <[email protected]> wrote: \n \n\n\nRerunning similar tests on a 64-thread UltraSPARC T2plus based\nserver config\n \n\n \n \n\n(IO is not a problem... all in RAM .. no disks):\nTime:Users:Type:TPM: Response Time\n60: 100: Medium Throughput: 10552.000 Avg Medium Resp: 0.006\n120: 200: Medium Throughput: 22897.000 Avg Medium Resp: 0.006\n180: 300: Medium Throughput: 33099.000 Avg Medium Resp: 0.009\n240: 400: Medium Throughput: 44692.000 Avg Medium Resp: 0.007\n300: 500: Medium Throughput: 56455.000 Avg Medium Resp: 0.007\n360: 600: Medium Throughput: 67220.000 Avg Medium Resp: 0.008\n420: 700: Medium Throughput: 77592.000 Avg Medium Resp: 0.009\n480: 800: Medium Throughput: 87277.000 Avg Medium Resp: 0.011\n540: 900: Medium Throughput: 98029.000 Avg Medium Resp: 0.012\n600: 1000: Medium Throughput: 102547.000 Avg Medium Resp: 0.023\n \n\n \nI'm wondering about the testing methodology. If there is no I/O, I\nwouldn't expect performance to improve after you have all the CPU\nthreads busy. (OK, so there might be some brief blocking that would\nmake the optimal number of connections somewhat above 64, but 1000???)\n \nWhat's the bottleneck which allows additional connections to improve\nthe throughput? Network latency?\n \nI'm a lot more interested in what's happening between 60 and 180 than\nover 1000, personally. If there was a RAID involved, I'd put it down\nto better use of the numerous spindles, but when it's all in RAM it\nmakes no sense.\n \n-Kevin\n \n\n\n\nKevin,\n\nThe problem is the CPUs are not all busy there is plenty of idle cycles\nsince PostgreSQL ends up in situations where they are all waiting for\nlockacquires for exclusive.. In cases where there is say one cpu then\nwaking up one or few waiters is more efficient.. 
However when you have\n64 or 128 or 256 (as in my case), waking up one waiter is inefficient\nsince only one waiter will be allowed to run while other waiters will\nstill wake up, spin acquire lock and say.. oh I am still not allowed\nand go back to speed.. \n\nTesting methology is considering we can get fast storage, can\nPostgreSQL still scale to use say 32, 64, 128, 256 cpus... I am just\nahead of the curve of wide spread usage here probably but I want to\nmake sure PostgreSQL is well tested already for it. And yes I still\nhave plenty of unused CPU so the goal is to make sure if system can\nhandle it, so can PostgreSQL.\n\n\nRegards,\nJignesh", "msg_date": "Wed, 11 Mar 2009 20:51:56 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> I'm wondering about the testing methodology.\n\nMe too. This test case seems much too far away from real world use\nto justify diddling low-level locking behavior; especially a change\nthat is obviously likely to have very negative effects in other\nscenarios. In particular, I think it would lead to complete starvation\nof would-be exclusive lockers in the face of competition from a steady\nstream of shared lockers. AFAIR the existing behavior was designed\nto reduce the odds of that, not for any other purpose.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Mar 2009 21:32:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "On 3/11/09 3:27 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\nI'm a lot more interested in what's happening between 60 and 180 than\nover 1000, personally. If there was a RAID involved, I'd put it down\nto better use of the numerous spindles, but when it's all in RAM it\nmakes no sense.\n\nIf there is enough lock contention and a common lock case is a short lived shared lock, it makes perfect sense sense. Fewer readers are blocked waiting on writers at any given time. Readers can 'cut' in line ahead of writers within a certain scope (only up to the number waiting at the time a shared lock is at the head of the queue). Essentially this clumps up shared and exclusive locks into larger streaks, and allows for higher shared lock throughput.\nExclusive locks may be delayed, but will NOT be starved, since on the next iteration, a streak of exclusive locks will occur first in the list and they will all process before any more shared locks can go.\n\nThis will even help in on a single CPU system if it is read dominated, lowering read latency and slightly increasing write latency.\n\nIf you want to make this more fair, instead of freeing all shared locks, limit the count to some number, such as the number of CPU cores. Perhaps rather than wake-up-all-waiters=true, the parameter can be an integer representing how many shared locks can be freed at once if an exclusive lock is encountered.\n\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n\nOn 3/11/09 3:27 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\nI'm a lot more interested in what's happening between 60 and 180 than\nover 1000, personally.  
If there was a RAID involved, I'd put it down\nto better use of the numerous spindles, but when it's all in RAM it\nmakes no sense.\n\nIf there is enough lock contention and a common lock case is a short lived shared lock, it makes perfect sense sense.  Fewer readers are blocked waiting on writers at any given time.  Readers can ‘cut’ in line ahead of writers within a certain scope (only up to the number waiting at the time a shared lock is at the head of the queue).  Essentially this clumps up shared and exclusive locks into larger streaks, and allows for higher shared lock throughput.  \nExclusive locks may be delayed, but will NOT be starved, since on the next iteration, a streak of exclusive locks will occur first in the list and they will all process before any more shared locks can go.\n\nThis will even help in on a single CPU system if it is read dominated, lowering read latency and slightly increasing write latency.\n\nIf you want to make this more fair, instead of freeing all shared locks, limit the count to some number, such as the number of CPU cores.  Perhaps rather than wake-up-all-waiters=true, the parameter can be an integer representing how many shared locks can be freed at once if an exclusive lock is encountered. \n\n\n-Kevin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 11 Mar 2009 19:01:57 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nTom Lane wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n> \n>> I'm wondering about the testing methodology.\n>> \n>\n> Me too. This test case seems much too far away from real world use\n> to justify diddling low-level locking behavior; especially a change\n> that is obviously likely to have very negative effects in other\n> scenarios. In particular, I think it would lead to complete starvation\n> of would-be exclusive lockers in the face of competition from a steady\n> stream of shared lockers. AFAIR the existing behavior was designed\n> to reduce the odds of that, not for any other purpose.\n>\n> \t\t\tregards, tom lane\n>\n> \n\nHi Tom,\n\nThe test case is not that far fetched from real world.. Plus if you read \nmy proposal I clearly mention a tunable for it so that we can set and \nhence obviously not impact 99% of the people who don't care about it but \nstill allow the flexibility of the 1% of the people who do care about \nscalability when they go on bigger system. The fact that it is a tunable \n(and obviously not the default way) there is no impact to existing \nbehavior.\n\nMy test case clearly shows that Exclusive lockers ARE benefited from it \notherwise I would have not seen the huge impact on throughput.\n\nA tunable does not impact existing behavior but adds flexibility for \nthose using PostgreSQL on high end systems. Plus doing it the tunable \nway on PostgreSQL 8.4 will convince many people that I know to quickly \nadopt PostgreSQL 8.4 just because of the benefit it brings on systems \nwith many cpus/cores/threads.\n\nAll I am requesting is for the beta to have that tunable. 
Its not hard, \npeople can then quickly try default (off) or on or as Scott Carey \nmentioned a more flexible of default, all or a fixed integer number \n(for people to experiment).\n\n\nRegards,\nJignesh\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Wed, 11 Mar 2009 22:20:17 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> If there is enough lock contention and a common lock case is a short lived shared lock, it makes perfect sense sense. Fewer readers are blocked waiting on writers at any given time. Readers can 'cut' in line ahead of writers within a certain scope (only up to the number waiting at the time a shared lock is at the head of the queue). Essentially this clumps up shared and exclusive locks into larger streaks, and allows for higher shared lock throughput.\n> Exclusive locks may be delayed, but will NOT be starved, since on the next iteration, a streak of exclusive locks will occur first in the list and they will all process before any more shared locks can go.\n\nThat's a lot of sunny assertions without any shred of evidence behind\nthem...\n\nThe current LWLock behavior was arrived at over multiple iterations and\nis not lightly to be toyed with IMHO. Especially not on the basis of\none benchmark that does not reflect mainstream environments.\n\nNote that I'm not saying \"no\". I'm saying that I want a lot more\nevidence *before* we go to the trouble of making this configurable\nand asking users to test it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Mar 2009 22:47:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "\n\nTom Lane wrote:\n> Scott Carey <[email protected]> writes:\n> \n>> If there is enough lock contention and a common lock case is a short lived shared lock, it makes perfect sense sense. Fewer readers are blocked waiting on writers at any given time. Readers can 'cut' in line ahead of writers within a certain scope (only up to the number waiting at the time a shared lock is at the head of the queue). Essentially this clumps up shared and exclusive locks into larger streaks, and allows for higher shared lock throughput.\n>> Exclusive locks may be delayed, but will NOT be starved, since on the next iteration, a streak of exclusive locks will occur first in the list and they will all process before any more shared locks can go.\n>> \n>\n> That's a lot of sunny assertions without any shred of evidence behind\n> them...\n>\n> The current LWLock behavior was arrived at over multiple iterations and\n> is not lightly to be toyed with IMHO. Especially not on the basis of\n> one benchmark that does not reflect mainstream environments.\n>\n> Note that I'm not saying \"no\". I'm saying that I want a lot more\n> evidence *before* we go to the trouble of making this configurable\n> and asking users to test it.\n>\n> \t\t\tregards, tom lane\n>\n> \nFair enough.. Well I am now appealing to all who has a fairly decent \nsized hardware want to try it out and see whether there are \"gains\", \n\"no-changes\" or \"regressions\" based on your workload. 
Also it will help \nif you report number of cpus when you respond back to help collect \nfeedback.\n\nRegards,\nJignesh\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Wed, 11 Mar 2009 23:48:44 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> wrote: \n> On 03/11/09 18:27, Kevin Grittner wrote:\n>> \"Jignesh K. Shah\" <[email protected]> wrote: \n \n>>> Rerunning similar tests on a 64-thread UltraSPARC T2plus based\n>>> server config\n>> \n>>> (IO is not a problem... all in RAM .. no disks):\n>>> Time:Users:Type:TPM: Response Time\n>>> 60: 100: Medium Throughput: 10552.000 Avg Medium Resp: 0.006\n>>> 120: 200: Medium Throughput: 22897.000 Avg Medium Resp: 0.006\n>>> 180: 300: Medium Throughput: 33099.000 Avg Medium Resp: 0.009\n>>> 240: 400: Medium Throughput: 44692.000 Avg Medium Resp: 0.007\n>>> 300: 500: Medium Throughput: 56455.000 Avg Medium Resp: 0.007\n>>> 360: 600: Medium Throughput: 67220.000 Avg Medium Resp: 0.008\n>>> 420: 700: Medium Throughput: 77592.000 Avg Medium Resp: 0.009\n \n>> I'm a lot more interested in what's happening between 60 and 180\nthan\n>> over 1000, personally. If there was a RAID involved, I'd put it\ndown\n>> to better use of the numerous spindles, but when it's all in RAM it\n>> makes no sense.\n \n> The problem is the CPUs are not all busy there is plenty of idle\ncycles \n> since PostgreSQL ends up in situations where they are all waiting for\n\n> lockacquires for exclusive..\n \nPrecisely. This is the area where it seems there is the most to gain.\nThe area you're looking at seems to have less than a 2X gain\navailable.\nThis part of the curve clearly has much more.\n \n-Kevin\n", "msg_date": "Thu, 12 Mar 2009 09:07:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/11/09 22:01, Scott Carey wrote:\n> On 3/11/09 3:27 PM, \"Kevin Grittner\" <[email protected]> wrote:\n>\n>\n> I'm a lot more interested in what's happening between 60 and 180 than\n> over 1000, personally. If there was a RAID involved, I'd put it down\n> to better use of the numerous spindles, but when it's all in RAM it\n> makes no sense.\n>\n> If there is enough lock contention and a common lock case is a short \n> lived shared lock, it makes perfect sense sense. Fewer readers are \n> blocked waiting on writers at any given time. Readers can 'cut' in \n> line ahead of writers within a certain scope (only up to the number \n> waiting at the time a shared lock is at the head of the queue). \n> Essentially this clumps up shared and exclusive locks into larger \n> streaks, and allows for higher shared lock throughput. \n> Exclusive locks may be delayed, but will NOT be starved, since on the \n> next iteration, a streak of exclusive locks will occur first in the \n> list and they will all process before any more shared locks can go.\n>\n> This will even help in on a single CPU system if it is read dominated, \n> lowering read latency and slightly increasing write latency.\n>\n> If you want to make this more fair, instead of freeing all shared \n> locks, limit the count to some number, such as the number of CPU \n> cores. 
Perhaps rather than wake-up-all-waiters=true, the parameter \n> can be an integer representing how many shared locks can be freed at \n> once if an exclusive lock is encountered.\n>\n>\nWell I am waking up not just shared but shared and exclusives.. However \ni like your idea of waking up the next N waiters where N matches the \nnumber of cpus available. In my case it is 64 so yes this works well \nsince the idea being of all the 64 waiters running right now one will be \nable to lock the next lock immediately and hence there are no cycles \nwasted where nobody gets a lock which is often the case when you say \nwake up only 1 waiter and hope that the process is on the CPU (which in \nmy case it is 64 processes) and it is able to acquire the lock.. The \nprobability of acquiring the lock within the next few cycles is much \nless for only 1 waiter than giving chance to 64 such processes and \nthen let them fight based on who is already on CPU and acquire the \nlock. That way the period where nobody has a lock is reduced and that \nhelps to cut out \"artifact\" idle time on the system.\n\n\nAs soon as I get more \"cycles\" I will try variations of it but it would \nhelp if others can try it out in their own environments to see if it \nhelps their instances.\n\n\n-Jignesh\n\n\n\n\n\n\n\n\n\nOn 03/11/09 22:01, Scott Carey wrote:\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\nOn 3/11/09 3:27 PM, \"Kevin Grittner\" <[email protected]>\nwrote:\n\n \nI'm a lot more interested in what's happening between 60 and 180 than\nover 1000, personally.  If there was a RAID involved, I'd put it down\nto better use of the numerous spindles, but when it's all in RAM it\nmakes no sense.\n\n \nIf there is enough lock contention and a common lock case is a\nshort lived shared lock, it makes perfect sense sense.  Fewer readers\nare blocked waiting on writers at any given time.  Readers can ‘cut’ in\nline ahead of writers within a certain scope (only up to the number\nwaiting at the time a shared lock is at the head of the queue).\n Essentially this clumps up shared and exclusive locks into larger\nstreaks, and allows for higher shared lock throughput.  \nExclusive locks may be delayed, but will NOT be starved, since on the\nnext iteration, a streak of exclusive locks will occur first in the\nlist and they will all process before any more shared locks can go.\n\nThis will even help in on a single CPU system if it is read dominated,\nlowering read latency and slightly increasing write latency.\n\nIf you want to make this more fair, instead of freeing all shared\nlocks, limit the count to some number, such as the number of CPU cores.\n Perhaps rather than wake-up-all-waiters=true, the parameter can be an\ninteger representing how many shared locks can be freed at once if an\nexclusive lock is encountered. \n\n\n \n\nWell I am waking up not just shared but shared and exclusives.. However\ni like your idea of waking up the next N waiters where N matches the\nnumber of cpus available.  In my case it is 64 so yes this works well\nsince the idea being of all the 64 waiters running right now one will\nbe able to lock the next lock  immediately and hence there are no\ncycles wasted where nobody gets a lock which is often the case when you\nsay wake up only 1 waiter and hope that the process is on the CPU\n(which in my case it is 64 processes) and it is able to acquire the\nlock.. 
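(If someone wants to experiment with N tied to the machine rather than hard-coding it, something along these lines would do -- purely illustrative, not part of the patch; sysconf() reports the online CPU count on both Solaris and Linux:

#include <stdio.h>
#include <unistd.h>

/* Illustrative only: derive the wake-up batch size N from the number
 * of online CPUs instead of hard-coding it. */
int
main(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);

    printf("wake-up batch N = %ld\n", ncpu > 0 ? ncpu : 1);
    return 0;
}

On the 64-way box described here that would print 64, which is the batch size being talked about.)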
The probability of acquiring the lock within the next few cycles\nis much less for only 1 waiter  than giving chance to 64 such\nprocesses  and then let them fight based on who is already on CPU  and\nacquire the lock. That way the period where nobody has a lock is\nreduced and that helps to cut out \"artifact\"  idle time on the system.\n\n\nAs soon as I get more \"cycles\" I will try variations of it but it would\nhelp if others can try it out in their own environments to see if it\nhelps their instances.\n\n\n-Jignesh", "msg_date": "Thu, 12 Mar 2009 10:57:04 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": ">>> Scott Carey <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> wrote:\n> \n>> I'm a lot more interested in what's happening between 60 and 180\n>> than over 1000, personally. If there was a RAID involved, I'd put\n>> it down to better use of the numerous spindles, but when it's all\n>> in RAM it makes no sense.\n> \n> If there is enough lock contention and a common lock case is a short\n> lived shared lock, it makes perfect sense sense. Fewer readers are\n> blocked waiting on writers at any given time. Readers can 'cut' in\n> line ahead of writers within a certain scope (only up to the number\n> waiting at the time a shared lock is at the head of the queue). \n> Essentially this clumps up shared and exclusive locks into larger\n> streaks, and allows for higher shared lock throughput.\n \nYou misunderstood me. I wasn't addressing the affects of his change,\nbut rather the fact that his test shows a linear improvement in TPS up\nto 1000 connections for a 64 thread machine which is dealing entirely\nwith RAM -- no disk access. Where's the bottleneck that allows this\nto happen? Without understanding that, his results are meaningless.\n \n-Kevin\n", "msg_date": "Thu, 12 Mar 2009 10:13:24 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Thu, Mar 12, 2009 at 3:13 PM, Kevin Grittner\n<[email protected]> wrote:\n>>>> Scott Carey <[email protected]> wrote:\n>> \"Kevin Grittner\" <[email protected]> wrote:\n>>\n>>> I'm a lot more interested in what's happening between 60 and 180\n>>> than over 1000, personally.  If there was a RAID involved, I'd put\n>>> it down to better use of the numerous spindles, but when it's all\n>>> in RAM it makes no sense.\n>>\n>> If there is enough lock contention and a common lock case is a short\n>> lived shared lock, it makes perfect sense sense.  Fewer readers are\n>> blocked waiting on writers at any given time.  Readers can 'cut' in\n>> line ahead of writers within a certain scope (only up to the number\n>> waiting at the time a shared lock is at the head of the queue).\n>> Essentially this clumps up shared and exclusive locks into larger\n>> streaks, and allows for higher shared lock throughput.\n>\n> You misunderstood me.  I wasn't addressing the affects of his change,\n> but rather the fact that his test shows a linear improvement in TPS up\n> to 1000 connections for a 64 thread machine which is dealing entirely\n> with RAM -- no disk access.  Where's the bottleneck that allows this\n> to happen?  Without understanding that, his results are meaningless.\n\nI think you try to argue about oranges, and he does about pears. 
Your\nargument has nothing to do with what you are saying, which you should\nunderstand.\nScalability is something that is affected by everything, and fixing\nthis makes sens as much as looking at possible fixes to make raids\nmore scalable, which is looked at by someone else I think.\nSo please, don't say that this doesn't make sense because he tested it\nagainst ram disc. That was precisely the point of exercise.\n\n\n-- \nGJ\n", "msg_date": "Thu, 12 Mar 2009 15:31:28 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": ">>> Grzegorz Jaᅵkiewicz <[email protected]> wrote: \n> Scalability is something that is affected by everything, and fixing\n> this makes sens as much as looking at possible fixes to make raids\n> more scalable, which is looked at by someone else I think.\n> So please, don't say that this doesn't make sense because he tested\nit\n> against ram disc. That was precisely the point of exercise.\n \nI'm probably more inclined to believe that his change may have merit\nthan many here, but I can't accept anything based on this test until\nsomeone answers the question, so far ignored by all responses, of\nwhere the bottleneck is at the low end which allows linear scalability\nup to 1000 users (which I assume means connections).\n \nI'm particularly inclined to be suspicious of this test since my own\nbenchmarks, with real applications replaying real URL requests from a\nproduction website that gets millions of hits per day, show that\nresponse time and throughput are improved by using a connection pool\nwith queuing to limit the concurrent active queries.\n \nMy skepticism is not helped by the fact that in a previous discussion\nwith someone about performance as connections are increased, this\npoint was covered by introducing a \"primitive\" connection pool --\nwhich used a one second sleep for a thread if the maximum number of\nconnections were already in use, rather than proper queuing and\nsemaphores. That really gives no clue how performance would be with a\nreal connection pool.\n \n-Kevin\n", "msg_date": "Thu, 12 Mar 2009 10:44:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/12/09 7:57 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n\n\n\nOn 03/11/09 22:01, Scott Carey wrote:\n Re: [PERFORM] Proposal of tunable fix for scalability of 8.4 On 3/11/09 3:27 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n\nIf you want to make this more fair, instead of freeing all shared locks, limit the count to some number, such as the number of CPU cores. Perhaps rather than wake-up-all-waiters=true, the parameter can be an integer representing how many shared locks can be freed at once if an exclusive lock is encountered.\n\n\n\nWell I am waking up not just shared but shared and exclusives.. However i like your idea of waking up the next N waiters where N matches the number of cpus available. In my case it is 64 so yes this works well since the idea being of all the 64 waiters running right now one will be able to lock the next lock immediately and hence there are no cycles wasted where nobody gets a lock which is often the case when you say wake up only 1 waiter and hope that the process is on the CPU (which in my case it is 64 processes) and it is able to acquire the lock.. 
The probability of acquiring the lock within the next few cycles is much less for only 1 waiter than giving chance to 64 such processes and then let them fight based on who is already on CPU and acquire the lock. That way the period where nobody has a lock is reduced and that helps to cut out \"artifact\" idle time on the system.\n\nIn that case, there can be some starvation of writers. If all the shareds are woken up but the exclusives are left in the front of the queued, no starvation can occur.\nThat was a bit of confusion on my part with respect to what the change was doing. Thanks for clarification.\n\n\n\nAs soon as I get more \"cycles\" I will try variations of it but it would help if others can try it out in their own environments to see if it helps their instances.\n\n\n-Jignesh\n\n\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n\nOn 3/12/09 7:57 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n\n\n\nOn 03/11/09 22:01, Scott Carey wrote: \n  Re: [PERFORM] Proposal of tunable fix for scalability of 8.4 On 3/11/09 3:27 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n \nIf you want to make this more fair, instead of freeing all shared locks, limit the count to some number, such as the number of CPU cores.  Perhaps rather than wake-up-all-waiters=true, the parameter can be an integer representing how many shared locks can be freed at once if an exclusive lock is encountered. \n  \n\n  \nWell I am waking up not just shared but shared and exclusives.. However i like your idea of waking up the next N waiters where N matches the number of cpus available.  In my case it is 64 so yes this works well since the idea being of all the 64 waiters running right now one will be able to lock the next lock  immediately and hence there are no cycles wasted where nobody gets a lock which is often the case when you say wake up only 1 waiter and hope that the process is on the CPU (which in my case it is 64 processes) and it is able to acquire the lock.. The probability of acquiring the lock within the next few cycles is much less for only 1 waiter  than giving chance to 64 such processes  and then let them fight based on who is already on CPU  and acquire the lock. That way the period where nobody has a lock is reduced and that helps to cut out \"artifact\"  idle time on the system.\n\nIn that case, there can be some starvation of writers.  If all the shareds are woken up but the exclusives are left in the front of the queued, no starvation can occur.\nThat was a bit of confusion on my part with respect to what the change was doing.  Thanks for clarification.\n\n\n\nAs soon as I get more \"cycles\" I will try variations of it but it would help if others can try it out in their own environments to see if it helps their instances.\n\n\n-Jignesh", "msg_date": "Thu, 12 Mar 2009 10:09:31 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Grzegorz Jaśkiewicz <[email protected]> writes:\n\n> So please, don't say that this doesn't make sense because he tested it\n> against ram disc. 
That was precisely the point of exercise.\n\nWhat people are tip-toeing around saying, which I'll just say right out in the\nmost provocative way, is that Jignesh has simply *misconfigured* the system.\nHe's contrived to artificially create a lot of unnecessary contention.\nOptimizing the system to reduce the cost of that artificial contention at the\nexpense of a properly configured system would be a bad idea.\n\nIt's misconfigured because there are more runnable threads than there are\ncpus. A lot more. 15 times as many as necessary. If users couldn't run\nconnection poolers on their own the right approach for us to address this\ncontention would be to build one into Postgres, not to re-engineer the\ninternals around the misuse.\n\nRam-resident use cases are entirely valid and worth testing, but in those use\ncases you would want to have about as many processes as you have processes.\n\nThe use case where having larger number of connections than processors makes\nsense is when they're blocked on disk i/o (or network i/o or whatever else\nother than cpu).\n\nAnd having it be configurable doesn't mean that it has no cost. Having a test\nof a user-settable dynamic variable in the middle of a low-level routine could\nvery well have some cost. Just the extra code would have some cost in reduced\ncache efficiency. It could be that loop prediction and so on save us but that\nremains to be proven.\n\nAnd as always the question would be whether the code designed for this\nmisconfigured setup is worth the maintenance effort if it's not helping\nproperly configured setups. Consider for example any work with dtrace to\noptimize locks under properly configured setups would lead us to make changes\nwhich would have to be tested twice, once with and once without this option.\nWhat do we do if dtrace says some unrelated change helps systems with this\noption disabled but hurts systems with it enabled?\n\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Thu, 12 Mar 2009 17:09:49 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/12/09 8:13 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>>> Scott Carey <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> wrote:\n>\n>> I'm a lot more interested in what's happening between 60 and 180\n>> than over 1000, personally. If there was a RAID involved, I'd put\n>> it down to better use of the numerous spindles, but when it's all\n>> in RAM it makes no sense.\n>\n> If there is enough lock contention and a common lock case is a short\n> lived shared lock, it makes perfect sense sense. Fewer readers are\n> blocked waiting on writers at any given time. Readers can 'cut' in\n> line ahead of writers within a certain scope (only up to the number\n> waiting at the time a shared lock is at the head of the queue).\n> Essentially this clumps up shared and exclusive locks into larger\n> streaks, and allows for higher shared lock throughput.\n\nYou misunderstood me. I wasn't addressing the affects of his change,\nbut rather the fact that his test shows a linear improvement in TPS up\nto 1000 connections for a 64 thread machine which is dealing entirely\nwith RAM -- no disk access. Where's the bottleneck that allows this\nto happen? Without understanding that, his results are meaningless.\n\n-Kevin\n\nThey are not meaningless. 
It is certainly more to understand, but the test is entirely valid without that. In a CPU bound / RAM bound case, as concurrency increases you look for the throughput trend, the %CPU use trend and the context switch rate trend. More information would be useful but the test is validated by the evidence that it is held up by lock contention.\n\nThe reasons for not scaling with user count at lower numbers are numerous: network, client limitations, or 'lock locality' (if test user blocks access data in an organized pattern rather than random distribution neighbor clients are more likely to block than non-neighbor ones).\nFurthermore, the MOST valid types of tests don't drive each user in an ASAP fashion, but with some pacing to emulate the real world. In this case you expect the user count to significantly be greater than CPU core count before saturation. We need more info about the relationship between \"users\" and active postgres backends. If each user sleeps for 100 ms between queries (or processes results and writes HTML for 100ms) your assumption that it should take about <CPU core count> users to saturate the CPUs is flawed.\n\nEither way, the result here demonstrates something powerful with respect to CPU scalability and just because 300 clients isn't where it peaks does not mean its invalid, it merely means we don't have enough information to understand the test.\n\nThe fact is very simple: Increasing concurrency does not saturate all the CPUs due to lock contention. That can be shown by the results demonstrated without more information.\nUser count is irrelevant - performance is increasing linearly with user count for quite a while and then peaks and slightly dips. This is the typical curve for all tests with a measured pacing per client.\nWe want to know more though. More data would help (active postgres backends, %CPU, context switch rate would be my top 3 extra columns in the data set). From there all that we want to know is what the locks are and if that contention is artificial. What tools are available to show what locks are most contended with Postgres? Once the locks are known, we want to know if the locking can be tuned away by one of three general types of strategies: Less locking via smart use of atomics or copy on write (non-blocking strategies, probably fully investigated already); finer grained locks (most definitely investigated); improved performance of locks (looked into for sure, but is highly hardware dependant).\n\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n\n\nOn 3/12/09 8:13 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>>> Scott Carey <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> wrote:\n>\n>> I'm a lot more interested in what's happening between 60 and 180\n>> than over 1000, personally.  If there was a RAID involved, I'd put\n>> it down to better use of the numerous spindles, but when it's all\n>> in RAM it makes no sense.\n>\n> If there is enough lock contention and a common lock case is a short\n> lived shared lock, it makes perfect sense sense.  Fewer readers are\n> blocked waiting on writers at any given time.  Readers can 'cut' in\n> line ahead of writers within a certain scope (only up to the number\n> waiting at the time a shared lock is at the head of the queue).\n> Essentially this clumps up shared and exclusive locks into larger\n> streaks, and allows for higher shared lock throughput.\n\nYou misunderstood me.  
I wasn't addressing the affects of his change,\nbut rather the fact that his test shows a linear improvement in TPS up\nto 1000 connections for a 64 thread machine which is dealing entirely\nwith RAM -- no disk access.  Where's the bottleneck that allows this\nto happen?  Without understanding that, his results are meaningless.\n\n-Kevin\n\nThey are not meaningless.  It is certainly more to understand, but the test is entirely valid without that.  In a CPU bound / RAM bound case, as concurrency increases you look for the throughput trend, the %CPU use trend and the context switch rate trend.  More information would be useful but the test is validated by the evidence that it is held up by lock contention.  \n\nThe reasons for not scaling with user count at lower numbers are numerous:  network, client limitations, or ‘lock locality’ (if test user blocks access data in an organized pattern rather than random distribution neighbor clients are more likely to block than non-neighbor ones).  \nFurthermore, the MOST valid types of tests don’t drive each user in an ASAP fashion, but with some pacing to emulate the real world.  In this case you expect the user count to significantly be greater than CPU core count before saturation.  We need more info about the relationship between “users” and active postgres backends.  If each user sleeps for 100 ms between queries (or processes results and writes HTML for 100ms) your assumption that it should take about <CPU core count> users to saturate the CPUs is flawed.\n\nEither way, the result here demonstrates something powerful with respect to CPU scalability and just because 300 clients isn’t where it peaks does not mean its invalid, it merely means we don’t have enough information to understand the test.\n\nThe  fact is very simple:  Increasing concurrency does not saturate all the CPUs due to lock contention.  That can be shown by the results demonstrated without more information.\nUser count is irrelevant — performance is increasing linearly with user count for quite a while and then peaks and slightly dips.  This is the typical curve for all tests with a measured pacing per client.\nWe want to know more though.  More data would help (active postgres backends, %CPU, context switch rate would be my top 3 extra columns in the data set). From there all that we want to know is what the locks are and if that contention is artificial.  What tools are available to show what locks are most contended with Postgres?  Once the locks are known, we want to know if the locking can be tuned away by one of three general types of strategies:  Less locking via smart use of atomics or copy on write (non-blocking strategies, probably fully investigated already); finer grained locks (most definitely investigated); improved performance of locks (looked into for sure, but is highly hardware dependant).", "msg_date": "Thu, 12 Mar 2009 10:39:05 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/11/09 7:47 PM, \"Tom Lane\" <[email protected]> wrote:\n\nScott Carey <[email protected]> writes:\n> If there is enough lock contention and a common lock case is a short lived shared lock, it makes perfect sense sense. Fewer readers are blocked waiting on writers at any given time. Readers can 'cut' in line ahead of writers within a certain scope (only up to the number waiting at the time a shared lock is at the head of the queue). 
Essentially this clumps up shared and exclusive locks into larger streaks, and allows for higher shared lock throughput.\n> Exclusive locks may be delayed, but will NOT be starved, since on the next iteration, a streak of exclusive locks will occur first in the list and they will all process before any more shared locks can go.\n\nThat's a lot of sunny assertions without any shred of evidence behind\nthem...\n\nThe current LWLock behavior was arrived at over multiple iterations and\nis not lightly to be toyed with IMHO. Especially not on the basis of\none benchmark that does not reflect mainstream environments.\n\nNote that I'm not saying \"no\". I'm saying that I want a lot more\nevidence *before* we go to the trouble of making this configurable\nand asking users to test it.\n\n regards, tom lane\n\n\nAll I'm adding, is that it makes some sense to me based on my experience in CPU / RAM bound scalability tuning. It was expressed that the test itself didn't even make sense.\n\nI was wrong in my understanding of what the change did. If it wakes ALL waiters up there is an indeterminate amount of time a lock will wait.\nHowever, if instead of waking up all of them, if it only wakes up the shared readers and leaves all the exclusive ones at the front of the queue, there is no possibility of starvation since those exclusives will be at the front of the line after the wake-up batch.\n\nAs for this being a use case that is important:\n\n* SSDs will drive the % of use cases that are not I/O bound up significantly over the next couple years. All postgres installations with less than about 100GB of data TODAY could avoid being I/O bound with current SSD technology, and those less than 2TB can do so as well but at high expense or with less proven technology like the ZFS L2ARC flash cache.\n* Intel will have a mainstream CPU that handles 12 threads (6 cores, 2 threads each) at the end of this year. Mainstream two CPU systems will have access to 24 threads and be common in 2010. Higher end 4CPU boxes will have access to 48 CPU threads. Hardware thread count is only going up. This is the future.\n\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4 \n\n\nOn 3/11/09 7:47 PM, \"Tom Lane\" <[email protected]> wrote:\n\nScott Carey <[email protected]> writes:\n> If there is enough lock contention and a common lock case is a short lived shared lock, it makes perfect sense sense.  Fewer readers are blocked waiting on writers at any given time.  Readers can 'cut' in line ahead of writers within a certain scope (only up to the number waiting at the time a shared lock is at the head of the queue).  Essentially this clumps up shared and exclusive locks into larger streaks, and allows for higher shared lock throughput.\n> Exclusive locks may be delayed, but will NOT be starved, since on the next iteration, a streak of exclusive locks will occur first in the list and they will all process before any more shared locks can go.\n\nThat's a lot of sunny assertions without any shred of evidence behind\nthem...\n\nThe current LWLock behavior was arrived at over multiple iterations and\nis not lightly to be toyed with IMHO.  Especially not on the basis of\none benchmark that does not reflect mainstream environments.\n\nNote that I'm not saying \"no\".  
I'm saying that I want a lot more\nevidence *before* we go to the trouble of making this configurable\nand asking users to test it.\n\n                        regards, tom lane\n\n\nAll I’m adding, is that it makes some sense to me based on my experience in CPU / RAM bound scalability tuning.  It was expressed that the test itself didn’t even make sense.\n\nI was wrong in my understanding of what the change did.  If it wakes ALL waiters up there is an indeterminate amount of time a lock will wait.\nHowever, if instead of waking up all of them, if it only wakes up the shared readers and leaves all the exclusive ones at the front of the queue, there is no possibility of starvation since those exclusives will be at the front of the line after the wake-up batch.\n\nAs for this being a use case that is important:\n\n*  SSDs will drive the % of use cases that are not I/O bound up significantly over the next couple years.  All postgres installations with less than about 100GB of data TODAY could avoid being I/O bound with current SSD technology, and those less than 2TB can do so as well but at high expense or with less proven technology like the ZFS L2ARC flash cache.\n*  Intel will have a mainstream CPU that handles 12 threads (6 cores, 2 threads each) at the end of this year.  Mainstream two CPU systems will have access to 24 threads and be common in 2010.  Higher end 4CPU boxes will have access to 48 CPU threads.  Hardware thread count is only going up.  This is the future.", "msg_date": "Thu, 12 Mar 2009 10:48:12 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "On 03/12/09 11:13, Kevin Grittner wrote:\n>>>> Scott Carey <[email protected]> wrote: \n>>>> \n>> \"Kevin Grittner\" <[email protected]> wrote:\n>>\n>> \n>>> I'm a lot more interested in what's happening between 60 and 180\n>>> than over 1000, personally. If there was a RAID involved, I'd put\n>>> it down to better use of the numerous spindles, but when it's all\n>>> in RAM it makes no sense.\n>>> \n>> If there is enough lock contention and a common lock case is a short\n>> lived shared lock, it makes perfect sense sense. Fewer readers are\n>> blocked waiting on writers at any given time. Readers can 'cut' in\n>> line ahead of writers within a certain scope (only up to the number\n>> waiting at the time a shared lock is at the head of the queue). \n>> Essentially this clumps up shared and exclusive locks into larger\n>> streaks, and allows for higher shared lock throughput.\n>> \n> \n> You misunderstood me. I wasn't addressing the affects of his change,\n> but rather the fact that his test shows a linear improvement in TPS up\n> to 1000 connections for a 64 thread machine which is dealing entirely\n> with RAM -- no disk access. Where's the bottleneck that allows this\n> to happen? Without understanding that, his results are meaningless.\n> \n> -Kevin\n>\n> \n\nEvery user has a think time (200ms) to wait before doing the next \ntransaction which results in idle time and theoretically allows other \nusers to run in between ..\n\n-Jignesh\n\n\n\n\n\n\n\n\n\nOn 03/12/09 11:13, Kevin Grittner wrote:\n\n\n\n\nScott Carey <[email protected]> wrote: \n \n\n\n\"Kevin Grittner\" <[email protected]> wrote:\n\n \n\nI'm a lot more interested in what's happening between 60 and 180\nthan over 1000, personally. 
If there was a RAID involved, I'd put\nit down to better use of the numerous spindles, but when it's all\nin RAM it makes no sense.\n \n\nIf there is enough lock contention and a common lock case is a short\nlived shared lock, it makes perfect sense sense. Fewer readers are\nblocked waiting on writers at any given time. Readers can 'cut' in\nline ahead of writers within a certain scope (only up to the number\nwaiting at the time a shared lock is at the head of the queue). \nEssentially this clumps up shared and exclusive locks into larger\nstreaks, and allows for higher shared lock throughput.\n \n\n \nYou misunderstood me. I wasn't addressing the affects of his change,\nbut rather the fact that his test shows a linear improvement in TPS up\nto 1000 connections for a 64 thread machine which is dealing entirely\nwith RAM -- no disk access. Where's the bottleneck that allows this\nto happen? Without understanding that, his results are meaningless.\n \n-Kevin\n\n \n\n\nEvery user has a think time (200ms) to wait before doing the next\ntransaction which results in idle time and theoretically allows other\nusers to run in between ..\n\n-Jignesh", "msg_date": "Thu, 12 Mar 2009 13:49:37 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> You misunderstood me. I wasn't addressing the affects of his change,\n> but rather the fact that his test shows a linear improvement in TPS up\n> to 1000 connections for a 64 thread machine which is dealing entirely\n> with RAM -- no disk access. Where's the bottleneck that allows this\n> to happen? Without understanding that, his results are meaningless.\n\nYeah, that is a really good point. For a CPU-bound test you would\nideally expect linear performance improvement up to the point at which\nnumber of active threads equals number of CPUs, and flat throughput\nwith more threads. The fact that his results don't look like that\nshould excite deep suspicion that something is wrong somewhere.\n\nThis does not in itself prove that the idea is wrong, but it does say\nthat there is some major effect happening in this test that we don't\nunderstand. Without understanding it, it's impossible to guess whether\nthe proposal is helpful in any other scenario.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Mar 2009 13:53:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "On 3/12/09 10:09 AM, \"Gregory Stark\" <[email protected]> wrote:\r\n\r\n\r\nRam-resident use cases are entirely valid and worth testing, but in those use\r\ncases you would want to have about as many processes as you have processes.\r\n\r\nWithin a factor of two or so, yes. However, where in his results does it show that there are 1000 active postgres connections? What if the test script is the most valid type: emulating application compute and sleep time between requests?\r\n\r\nWhat it is showing is “Users”. We don’t know the relationship between those and active postgres connections. Your contention is ONLY valid for active postgres processes.\r\n\r\nYes, the test could be invalid if it is artificially making all users bang up on the same locks by for example, having them all access the same rows. 
However, if this was what explains the results around the user count being about equal to CPU threads, then the throughput would have stopped growing around where the user count got near the CPU threads, not after a couple thousand.\r\n\r\nThe ‘fingerprint’ of this load test — linear scaling up to a point, then a peak and dropoff — is one of a test with paced users not one with artificial locking affecting results at low user counts. More data would help, but artificial lock contention with low user count would have shown up at low user count, not after 1000 users. There are some difficult to manipulate ways to fake this out (which is why CPU% and context switch rate data would help). This is most likely a ‘paced user’ profile.\r\n\r\nThe use case where having larger number of connections than processors makes\r\nsense is when they're blocked on disk i/o (or network i/o or whatever else\r\nother than cpu).\r\n\r\nUm, or are idle in a connection pool for 100ms. There is no such thing as a perfectly sized connection pool. And there is nothing wrong with some idle connections.\r\n\r\n\r\nAnd as always the question would be whether the code designed for this\r\nmisconfigured setup is worth the maintenance effort if it's not helping\r\nproperly configured setups.\r\n\r\nNow you are just assuming its misconfigured. I’d wager quite a bit it helps properly configured setups too so long as they have lots of hardware threads.\r\n\r\n\r\n\r\n\n\n\nRe: Proposal of tunable fix for scalability of 8.4\n\n\nOn 3/12/09 10:09 AM, \"Gregory Stark\" <[email protected]> wrote:\n\n\r\nRam-resident use cases are entirely valid and worth testing, but in those use\r\ncases you would want to have about as many processes as you have processes.\n\nWithin a factor of two or so, yes.  However, where in his results does it show that there are 1000 active postgres connections?  What if the test script is the most valid type:  emulating application compute and sleep time between requests?  \n\r\nWhat it is showing is “Users”.  We don’t know the relationship between those and active postgres connections.  Your contention is ONLY valid for active postgres processes.\n\r\nYes, the test could be invalid if it is artificially making all users bang up on the same locks by for example, having them all access the same rows.  However, if this was what explains the results around the user count being about equal to CPU threads, then the throughput would have stopped growing around where the user count got near the CPU threads, not after a couple thousand.\n\r\nThe ‘fingerprint’ of this load test — linear scaling up to a point, then a peak and dropoff — is one of a test with paced users not one with artificial locking affecting results at low user counts.  More data would help, but artificial lock contention with low user count would have shown up at low user count, not after 1000 users.  There are some difficult to manipulate ways to fake this out (which is why CPU% and context switch rate data would help).  This is most likely a ‘paced user’ profile.\n\r\nThe use case where having larger number of connections than processors makes\r\nsense is when they're blocked on disk i/o (or network i/o or whatever else\r\nother than cpu).\n\nUm, or are idle in a connection pool for 100ms.  There is no such thing as a perfectly sized connection pool.  
And there is nothing wrong with some idle connections.\n\n\r\nAnd as always the question would be whether the code designed for this\r\nmisconfigured setup is worth the maintenance effort if it's not helping\r\nproperly configured setups. \n\nNow you are just assuming its misconfigured.  I’d wager quite a bit it helps properly configured setups too so long as they have lots of hardware threads.", "msg_date": "Thu, 12 Mar 2009 11:08:44 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/12/09 10:53 AM, \"Tom Lane\" <[email protected]> wrote:\n\n\"Kevin Grittner\" <[email protected]> writes:\n> You misunderstood me. I wasn't addressing the affects of his change,\n> but rather the fact that his test shows a linear improvement in TPS up\n> to 1000 connections for a 64 thread machine which is dealing entirely\n> with RAM -- no disk access. Where's the bottleneck that allows this\n> to happen? Without understanding that, his results are meaningless.\n\nYeah, that is a really good point. For a CPU-bound test you would\nideally expect linear performance improvement up to the point at which\nnumber of active threads equals number of CPUs, and flat throughput\nwith more threads. The fact that his results don't look like that\nshould excite deep suspicion that something is wrong somewhere.\n\nThis does not in itself prove that the idea is wrong, but it does say\nthat there is some major effect happening in this test that we don't\nunderstand. Without understanding it, it's impossible to guess whether\nthe proposal is helpful in any other scenario.\n\n regards, tom lane\n\nOnly on the assumption that each thread in the load test is running in ASAP mode rather than a metered pace.\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4 \n\n\nOn 3/12/09 10:53 AM, \"Tom Lane\" <[email protected]> wrote:\n\n\"Kevin Grittner\" <[email protected]> writes:\n> You misunderstood me.  I wasn't addressing the affects of his change,\n> but rather the fact that his test shows a linear improvement in TPS up\n> to 1000 connections for a 64 thread machine which is dealing entirely\n> with RAM -- no disk access.  Where's the bottleneck that allows this\n> to happen?  Without understanding that, his results are meaningless.\n\nYeah, that is a really good point.  For a CPU-bound test you would\nideally expect linear performance improvement up to the point at which\nnumber of active threads equals number of CPUs, and flat throughput\nwith more threads.  The fact that his results don't look like that\nshould excite deep suspicion that something is wrong somewhere.\n\nThis does not in itself prove that the idea is wrong, but it does say\nthat there is some major effect happening in this test that we don't\nunderstand.  Without understanding it, it's impossible to guess whether\nthe proposal is helpful in any other scenario.\n\n                        regards, tom lane\n\nOnly on the assumption that each thread in the load test is running in ASAP mode rather than a metered pace.", "msg_date": "Thu, 12 Mar 2009 11:09:41 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> They are not meaningless. It is certainly more to understand, but the test is entirely valid without that. 
In a CPU bound / RAM bound case, as concurrency increases you look for the throughput trend, the %CPU use trend and the context switch rate trend. More information would be useful but the test is validated by the evidence that it is held up by lock contention.\n\nEr ... *what* evidence? There might be evidence somewhere that proves\nthat, but Jignesh hasn't shown it. The available data suggests that the\nfirst-order performance limiter in this test is something else.\nOtherwise it should be possible to max out the performance with a lot\nless than 1000 active backends.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Mar 2009 14:28:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "At 11:44 AM 3/12/2009, Kevin Grittner wrote:\n\n>I'm probably more inclined to believe that his change may have merit \n>than many here, but I can't accept anything based on this test until \n>someone answers the question, so far ignored by all responses, of \n>where the bottleneck is at the low end which allows linear \n>scalability up to 1000 users (which I assume means connections).\n>\n>I'm particularly inclined to be suspicious of this test since my own \n>benchmarks, with real applications replaying real URL requests from \n>a production website that gets millions of hits per day, show that \n>response time and throughput are improved by using a connection pool \n>with queuing to limit the concurrent active queries.\n>\n>My skepticism is not helped by the fact that in a previous \n>discussion with someone about performance as connections are \n>increased, this point was covered by introducing a \"primitive\" \n>connection pool -- which used a one second sleep for a thread if the \n>maximum number of connections were already in use, rather than \n>proper queuing and semaphores. That really gives no clue how \n>performance would be with a real connection pool.\n>\n>-Kevin\n\nIMHO, Jignesh is looking at performance for a spcialized niche in the \noverall space of pg use- that of memory resident DBs. Here's my \nthoughts on the more general problem. The following seems to explain \nall the performance phenomenon discussed so far while suggesting an \nimprovement in how pg deals with lock scaling and contention.\n\n Thoughts on lock scaling and contention\n\nlogical limits\n...for Exclusive locks\na= the number of non overlapping sets of DB entities (tables, rows, etc)\nIf every exclusive lock wants a different table,\nthen the limit is the number of tables.\nIf any exclusive lock wants the whole DB,\nthen there can only be one lock.\nb= possible HW limits\nEven if all exclusive locks in question ask for distinct DB entities, it is\npossible that the HW servicing those locks could be saturated.\n...for Shared locks\na= HW Limits\n\nHW limits\na= network IO\nb= HD IO\nNote that \"a\" and \"b\" may change relative order in some cases.\nA possibly unrealistic extreme to demonstrate the point would be a system with\n1 HD and 10G networking. 
It's likely to be HD IO bound before network \nIO bound.\nc= RAM IO\nd= Internal CPU bandwidth\n\nSince a DB must first and foremost protect the integrity of the data being\nprocessed, the above implies that we should process transactions in time order\nof resource access (thus transactions that do not share resources can always\nrun in parallel) while running as many of them in parallel as we can that\na= do not violate the exclusive criteria, and\nb= do not over saturate any resource being used for the processing.\n\nThis looks exactly like a job scheduling problem from the days of mainframes.\n(Or instruction scheduling in a CPU to maximize the IPC of a thread.)\n\nThe solution in the mainframe domain was multi-level feedback queues with\npriority aging.\nSince the concept of a time slice makes no sense in a DB, this becomes a\nmulti-level resource coloring problem with dynamic feedback based on \nexclusivity\nand resource contention.\n\nA possible algorithm might be\n1= every transaction for a given DB entity has priority over any transaction\nsubmitted at a later time that uses that same DB entity.\n2= every transaction that does not conflict with an earlier transaction can\nrun in parallel with that earlier transaction\n3= if any resource becomes saturated, we stop scheduling transactions that use\nthat resource or that are dependent on that resource until the deadlock is\nresolved.\n\nTo implement this, we need\na= to be able to count the number of locks for any given DB entity\nb= some way of detecting HW saturation\n\nHope this is useful,\nRon Peacetree\n\n\n\n\n\n\n\n", "msg_date": "Thu, 12 Mar 2009 14:32:38 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/12/09 13:48, Scott Carey wrote:\n> On 3/11/09 7:47 PM, \"Tom Lane\" <[email protected]> wrote:\n>\n> All I'm adding, is that it makes some sense to me based on my \n> experience in CPU / RAM bound scalability tuning. It was expressed \n> that the test itself didn't even make sense.\n>\n> I was wrong in my understanding of what the change did. If it wakes \n> ALL waiters up there is an indeterminate amount of time a lock will wait.\n> However, if instead of waking up all of them, if it only wakes up the \n> shared readers and leaves all the exclusive ones at the front of the \n> queue, there is no possibility of starvation since those exclusives \n> will be at the front of the line after the wake-up batch.\n>\n> As for this being a use case that is important:\n>\n> * SSDs will drive the % of use cases that are not I/O bound up \n> significantly over the next couple years. All postgres installations \n> with less than about 100GB of data TODAY could avoid being I/O bound \n> with current SSD technology, and those less than 2TB can do so as well \n> but at high expense or with less proven technology like the ZFS L2ARC \n> flash cache.\n> * Intel will have a mainstream CPU that handles 12 threads (6 cores, \n> 2 threads each) at the end of this year. Mainstream two CPU systems \n> will have access to 24 threads and be common in 2010. Higher end 4CPU \n> boxes will have access to 48 CPU threads. Hardware thread count is \n> only going up. This is the future.\n>\n\nSSDs are precisely my motivation of doing RAM based tests with \nPostgreSQL. 
While I am waiting for my SSDs to arrive, I started to \nemulate SSDs by putting the whole database on RAM which in sense are \nbetter than SSDs so if we can tune with RAM disks then SSDs will be covered.\n\nWhat we have is a pool of 2000 users and we start making each user do \nseries of transactions on different rows and see how much the database \ncan handle linearly before some bottleneck (system or database) kicks in \nand there can be no more linear increase in active users. Many times \nthere is drop after reaching some value of active users. If all 2000 \nusers can scale linearly then another test with say 2500 can be executed \n.. All to do is what's the limit we can go till typically there are no \nsystem resources still remaining to be exploited.\n\nThat said the testkit that I am using is a lightweight OLTP typish \nworkload which a user runs against a preknown schema and between various \ntransactions that it does it emulates a wait time of 200ms. That said it \nis some sense emulating a real user who clicks and then waits to see \nwhat he got and does another click which results in another transaction \nhappening. (Not exactly but you get the point). Like all workloads it \nis generally used to find bottlenecks in systems before putting \nproduction stuff on it.\n\n\nThat said my current environment I am having similar workloads and \nseeing how many users can go to the point where system has no more CPU \nresources available to do a linear growth in tpm. Generally as many of \nyou mentioned you will see disk latency, network latency, cpu resource \nproblems, etc.. And thats the work I am doing right now.. I am working \naround network latency by doing a private network, improving Operating \nsystems tunables to improve efficiency out there.. I am improving disk \nlatency by putting them on /RAM (and soon on SSDs).. However if I still \ncannot consume all CPU then it means I am probably hit by locks . 
Using \nPostgreSQL DTrace probes I can see what's happening..\n\nAt low user (100 users) counts my lock profiles from a user point of \nview are as follows:\n\n\n# dtrace -q -s 84_lwlock.d 1764\n\n Lock Id Mode State Count\n ProcArrayLock Shared Waiting 1\n CLogControlLock Shared Acquired 2\n ProcArrayLock Exclusive Waiting 3\n ProcArrayLock Exclusive Acquired 24\n XidGenLock Exclusive Acquired 24\n FirstLockMgrLock Shared Acquired 25\n CLogControlLock Exclusive Acquired 26\n FirstBufMappingLock Shared Acquired 55\n WALInsertLock Exclusive Acquired 75\n ProcArrayLock Shared Acquired 178\n SInvalReadLock Shared Acquired 378\n\n Lock Id Mode State Combined Time (ns)\n SInvalReadLock Acquired 29849\n ProcArrayLock Shared Waiting 92261\n ProcArrayLock Acquired 951470\n FirstLockMgrLock Exclusive Acquired 1069064\n CLogControlLock Exclusive Acquired 1295551\n ProcArrayLock Exclusive Waiting 1758033\n FirstBufMappingLock Exclusive Acquired 2078507\n XidGenLock Exclusive Acquired 3460800\n WALInsertLock Exclusive Acquired 12205466\n SInvalReadLock Exclusive Acquired 42684236\n ProcArrayLock Exclusive Acquired 57397139\n \nAs users grow beyond 1000 it changes to the following for the sample \nuser point of view\n# dtrace -q -s 84_lwlock.d 1764\n\n Lock Id Mode State Count\n CLogControlLock Exclusive Waiting 1\n WALInsertLock Exclusive Waiting 1\n ProcArrayLock Exclusive Acquired 7\n XidGenLock Exclusive Acquired 7\n ProcArrayLock Exclusive Waiting 10\n CLogControlLock Shared Acquired 13\n WALInsertLock Exclusive Acquired 23\n CLogControlLock Exclusive Acquired 30\n ProcArrayLock Shared Acquired 50\n FirstLockMgrLock Shared Acquired 104\n SInvalReadLock Shared Acquired 105\n FirstBufMappingLock Shared Acquired 106\n\n Lock Id Mode State Combined Time (ns)\n WALInsertLock Exclusive Waiting 73990\n CLogControlLock Exclusive Waiting 383066\n XidGenLock Exclusive Acquired 408301\n CLogControlLock Exclusive Acquired 1871642\n ProcArrayLock Acquired 2825372\n WALInsertLock Exclusive Acquired 3144580\n FirstLockMgrLock Exclusive Acquired 3799818\n FirstBufMappingLock Exclusive Acquired 4083473\n SInvalReadLock Exclusive Acquired 20611120\n ProcArrayLock Exclusive Acquired 37920098\n ProcArrayLock Exclusive Waiting 3783942020\n\n\nThats similar to what I had seen last year.. But thats the reason I am \nplaying with lwlock.c to see how changing of how LWLockRelease() can be \nmodified to do different types of wake-ups have impact on this top \nwaiting time which is basically waste of time from perspective of \napplication, operating system, cpu . All I am saying is with tuning \nflexibility we can actually reduce the time wasted and probably use that \ntime with acquired state while it is doing some useful work.\n\nI dont think I have misconfigured the system. I am just showing that hey \nthere are ways to cut down some inefficiencies here and showing test \npoints. I am also showing where it does seem to help performance. It may \nnot help in all case but I just gave you a test where it helps \nperformance where it is better than what it is. \n\nAnd again this is the third time I am saying.. the test users also have \nsome latency build up in them which is what generally is exploited to \nget more users than number of CPUS on the system but that's the point we \nwant to exploit.. Otherwise if all new users begin to do their job with \nno latency then we would need 6+ billion cpus to handle all possible \nusers. 
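To put rough numbers on that: with a 200ms think time, the number of genuinely active backends at any instant is about users * response_time / (response_time + think_time). A quick back-of-envelope calculation -- the 20ms average response time used below is an assumption for illustration, not a measured figure from these runs:

#include <stdio.h>

/* Back-of-envelope only: estimated concurrently-active backends for a
 * closed-loop test where each user thinks for think_ms between calls. */
int
main(void)
{
    double users = 1000.0;
    double think_ms = 200.0;        /* stated think time */
    double resp_ms = 20.0;          /* assumed average response time */

    printf("~%.0f active backends\n",
           users * resp_ms / (resp_ms + think_ms));
    return 0;
}

On those assumptions a 1000-user run is roughly 90 concurrently active backends rather than 1000, which is why the throughput curve can keep climbing well past 64 users on a 64-thread box.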
Typically as an administrator (System and database) I can only \ntweak/control latencies within my domain, that is network, disk, cpu's \netc and those are what I am tweaking and coming to a *Configured* \nenvironment and now trying to improve lock contentions/waits in \nPostgreSQL so that we have an optimized setup.\n\nI am trying another run where I limit the waked up threads to a \npre-configured number to see how various numbers pans out in terms of \nthroughput on this server.\n\nRegards,\nJignesh\n\n\n\n\n\n\n\n\n\nOn 03/12/09 13:48, Scott Carey wrote:\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4 \nOn 3/11/09 7:47 PM, \"Tom Lane\" <[email protected]>\nwrote:\n\n\nAll I’m adding, is that\nit makes some sense to me based on my experience in CPU / RAM bound\nscalability tuning.  It was expressed that the test itself didn’t even\nmake sense.\n\nI was wrong in my understanding of what the change did.  If it wakes\nALL waiters up there is an indeterminate amount of time a lock will\nwait.\nHowever, if instead of waking up all of them, if it only wakes up the\nshared readers and leaves all the exclusive ones at the front of the\nqueue, there is no possibility of starvation since those exclusives\nwill be at the front of the line after the wake-up batch.\n\nAs for this being a use case that is important:\n\n*  SSDs will drive the % of use cases that are not I/O bound up\nsignificantly over the next couple years.  All postgres installations\nwith less than about 100GB of data TODAY could avoid being I/O bound\nwith current SSD technology, and those less than 2TB can do so as well\nbut at high expense or with less proven technology like the ZFS L2ARC\nflash cache.\n*  Intel will have a mainstream CPU that handles 12 threads (6 cores, 2\nthreads each) at the end of this year.  Mainstream two CPU systems will\nhave access to 24 threads and be common in 2010.  Higher end 4CPU boxes\nwill have access to 48 CPU threads.  Hardware thread count is only\ngoing up.  This is the future.\n\n\n\n\nSSDs are precisely my motivation of doing RAM based tests with\nPostgreSQL. While I am waiting for my SSDs to arrive, I started to\nemulate SSDs by putting the whole database on RAM which in sense are\nbetter than SSDs so if we can tune with RAM disks then SSDs will be\ncovered.\n\nWhat we have is a\npool of 2000 users and we start making each user do series of\ntransactions on different rows and see how much the database can handle\nlinearly before some bottleneck (system or database) kicks in and there\ncan be no more linear increase in active users. Many times there is\ndrop after reaching some value of active users. If all 2000 users can\nscale linearly then another test with say 2500 can be executed .. All\nto do is what's the limit we can go till typically there are no system\nresources still remaining to be exploited.\n\nThat said the testkit that I am using is a lightweight OLTP typish\nworkload which a user runs against a preknown schema and between\nvarious transactions that it does it emulates a wait time of 200ms.\nThat said it is some sense emulating a real user who clicks and then\nwaits to see what he got and does another click which results in\nanother transaction happening.  (Not exactly but you get the point).  Like\nall workloads it is generally used to find bottlenecks in\nsystems before putting production stuff on it. 
\n\n\nThat said my current environment I am having similar workloads and\nseeing how many users can go to the point where system has no more CPU\nresources available to do a linear growth in tpm. Generally as many of\nyou  mentioned you will see disk latency, network latency, cpu resource\nproblems, etc.. And thats the work I am doing right now.. I am working\naround network latency by doing a private network, improving Operating\nsystems tunables to improve efficiency out there.. I am improving disk\nlatency by putting them on /RAM (and soon on SSDs).. However if I still\ncannot consume all CPU then it means I am probably hit by locks . Using\nPostgreSQL DTrace probes I can see what's happening..\n\nAt low user (100 users) counts my lock profiles from a user point of\nview are as follows:\n\n\n# dtrace -q -s 84_lwlock.d 1764\n\n\n              Lock Id            Mode           State           Count\n\n        ProcArrayLock          Shared         Waiting               1\n\n      CLogControlLock          Shared        Acquired               2\n\n        ProcArrayLock       Exclusive         Waiting               3\n\n        ProcArrayLock       Exclusive        Acquired              24\n\n           XidGenLock       Exclusive        Acquired              24\n\n     FirstLockMgrLock          Shared        Acquired              25\n\n      CLogControlLock       Exclusive        Acquired              26\n\n  FirstBufMappingLock          Shared        Acquired              55\n\n        WALInsertLock       Exclusive        Acquired              75\n\n        ProcArrayLock          Shared        Acquired             178\n\n       SInvalReadLock          Shared        Acquired             378\n\n\n              Lock Id            Mode           State   Combined Time\n(ns)\n\n       SInvalReadLock                        Acquired               \n29849\n\n        ProcArrayLock          Shared         Waiting               \n92261\n\n        ProcArrayLock                        Acquired              \n951470\n\n     FirstLockMgrLock       Exclusive        Acquired             \n1069064\n\n      CLogControlLock       Exclusive        Acquired             \n1295551\n\n        ProcArrayLock       Exclusive         Waiting             \n1758033\n\n  FirstBufMappingLock       Exclusive        Acquired             \n2078507\n\n           XidGenLock       Exclusive        Acquired             \n3460800\n\n        WALInsertLock       Exclusive        Acquired            \n12205466\n\n       SInvalReadLock       Exclusive        Acquired            \n42684236\n\n        ProcArrayLock       Exclusive        Acquired            \n57397139\n\n   \nAs users grow beyond 1000 it changes to the following for the sample\nuser point of view\n# dtrace -q  -s 84_lwlock.d 1764\n\n\n              Lock Id            Mode          \nState           Count\n\n      CLogControlLock       Exclusive         Waiting               1\n\n        WALInsertLock       Exclusive         Waiting               1\n\n        ProcArrayLock       Exclusive        Acquired               7\n\n           XidGenLock       Exclusive        Acquired               7\n\n        ProcArrayLock       Exclusive         Waiting              10\n\n      CLogControlLock          Shared        Acquired              13\n\n        WALInsertLock       Exclusive        Acquired              23\n\n      CLogControlLock       Exclusive        Acquired              30\n\n        ProcArrayLock          Shared        Acquired              50\n\n     FirstLockMgrLock 
         Shared        Acquired             104\n\n       SInvalReadLock          Shared        Acquired             105\n\n  FirstBufMappingLock          Shared        Acquired             106\n\n\n              Lock Id            Mode           State   Combined Time\n(ns)\n\n        WALInsertLock       Exclusive         Waiting               \n73990\n\n      CLogControlLock       Exclusive         Waiting              \n383066\n\n           XidGenLock       Exclusive        Acquired              \n408301\n\n      CLogControlLock       Exclusive        Acquired             \n1871642\n\n        ProcArrayLock                        Acquired             \n2825372\n\n        WALInsertLock       Exclusive        Acquired             \n3144580\n\n     FirstLockMgrLock       Exclusive        Acquired             \n3799818\n\n  FirstBufMappingLock       Exclusive        Acquired             \n4083473\n\n       SInvalReadLock       Exclusive        Acquired            \n20611120\n\n        ProcArrayLock       Exclusive        Acquired            \n37920098\n\n        ProcArrayLock       Exclusive         Waiting          \n3783942020\n\n\n\nThats similar to what I had seen last year.. But thats the reason I am\nplaying with lwlock.c to see how changing of how LWLockRelease() can be\nmodified to do different types of wake-ups have impact on this top \nwaiting time which is basically waste of time from perspective of\napplication, operating system, cpu .  All I am saying is with tuning\nflexibility we can actually reduce the time wasted and probably use\nthat time with acquired state while it is doing some useful work. \n\nI dont think I have misconfigured the system. I am just showing that\nhey there are ways to cut down some inefficiencies here and showing\ntest points. I am also showing where it does seem to help performance.\nIt may not help in all case but I just gave you a test where it helps\nperformance where it is better than what it is.  \n\nAnd again this is the third time I am saying.. the test users also have\nsome latency build up in them which is what generally is exploited to\nget more users than number of CPUS on the system but that's the point\nwe want to exploit.. Otherwise if all new users begin to do their job\nwith no latency then we would need 6+ billion cpus to handle all\npossible users. Typically as an administrator (System and database) I\ncan only tweak/control latencies within my domain, that is network,\ndisk, cpu's etc and those are what I am tweaking and coming to a\n*Configured* environment and now trying to improve lock\ncontentions/waits in PostgreSQL so that we have an optimized setup.\n\nI am trying another run where I limit the waked up threads to a\npre-configured number to see how various numbers pans out in terms of\nthroughput on this server.\n\nRegards,\nJignesh", "msg_date": "Thu, 12 Mar 2009 14:37:32 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Tom Lane wrote:\n> Scott Carey <[email protected]> writes:\n> > They are not meaningless. It is certainly more to understand, but the test is entirely valid without that. In a CPU bound / RAM bound case, as concurrency increases you look for the throughput trend, the %CPU use trend and the context switch rate trend. More information would be useful but the test is validated by the evidence that it is held up by lock contention.\n> \n> Er ... *what* evidence? 
There might be evidence somewhere that proves\n> that, but Jignesh hasn't shown it. The available data suggests that the\n> first-order performance limiter in this test is something else.\n> Otherwise it should be possible to max out the performance with a lot\n> less than 1000 active backends.\n\nWith 200ms of think times as Jignesh just said, 1000 users does not\nequate 1000 active backends. (It's probably closer to 100 backends,\ngiven an avg. response time of ~20ms)\n\nSomething that might be useful for him to report is the avg number of\nactive backends for each data point ...\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 12 Mar 2009 16:10:20 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/12/09 15:10, Alvaro Herrera wrote:\n> Tom Lane wrote:\n> \n>> Scott Carey <[email protected]> writes:\n>> \n>>> They are not meaningless. It is certainly more to understand, but the test is entirely valid without that. In a CPU bound / RAM bound case, as concurrency increases you look for the throughput trend, the %CPU use trend and the context switch rate trend. More information would be useful but the test is validated by the evidence that it is held up by lock contention.\n>>> \n>> Er ... *what* evidence? There might be evidence somewhere that proves\n>> that, but Jignesh hasn't shown it. The available data suggests that the\n>> first-order performance limiter in this test is something else.\n>> Otherwise it should be possible to max out the performance with a lot\n>> less than 1000 active backends.\n>> \n>\n> With 200ms of think times as Jignesh just said, 1000 users does not\n> equate 1000 active backends. (It's probably closer to 100 backends,\n> given an avg. response time of ~20ms)\n>\n> Something that might be useful for him to report is the avg number of\n> active backends for each data point ...\n> \nshort of doing select * from pg_stat_activity and removing the IDLE \nentries, any other clean way to get that information. If there is no \nother latency then active backends should be active users * 10ms/200ms \nor activeusers/20 on average. However the number is still lower than \nthat since active user can still be waiting for locks which can be \neither on CPU (spin) or sleeping (proven by increase in average response \ntime of execution which includes the wait).\n\nAlso till date I am primarily more interested in active backends which \nare waiting for acquiring the locks since I find making that more \nefficient gives me the biggest return on my buck.. Lower response time \nand higher throughput.\n\n-Jignesh\n\n\n\n\n\n\n\n\n\nOn 03/12/09 15:10, Alvaro Herrera wrote:\n\nTom Lane wrote:\n \n\nScott Carey <[email protected]> writes:\n \n\nThey are not meaningless. It is certainly more to understand, but the test is entirely valid without that. In a CPU bound / RAM bound case, as concurrency increases you look for the throughput trend, the %CPU use trend and the context switch rate trend. More information would be useful but the test is validated by the evidence that it is held up by lock contention.\n \n\nEr ... *what* evidence? There might be evidence somewhere that proves\nthat, but Jignesh hasn't shown it. 
The available data suggests that the\nfirst-order performance limiter in this test is something else.\nOtherwise it should be possible to max out the performance with a lot\nless than 1000 active backends.\n \n\n\nWith 200ms of think times as Jignesh just said, 1000 users does not\nequate 1000 active backends. (It's probably closer to 100 backends,\ngiven an avg. response time of ~20ms)\n\nSomething that might be useful for him to report is the avg number of\nactive backends for each data point ...\n \n\nshort of doing select * from pg_stat_activity and removing the IDLE\nentries, any other clean way to get that information.  If there is no\nother latency then active backends  should be active users * 10ms/200ms\nor activeusers/20   on average. However the number is still lower than\nthat since active user can still be waiting for locks which can be\neither on CPU (spin) or sleeping (proven by increase in average\nresponse time of execution which includes the wait). \n \nAlso till date I am primarily more interested in active backends which\nare waiting for acquiring the locks since I find  making that more\nefficient gives me the biggest return on my buck.. Lower response time\nand higher throughput.\n\n-Jignesh", "msg_date": "Thu, 12 Mar 2009 15:22:09 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": ">>> \"Jignesh K. Shah\" <[email protected]> wrote: \n> What we have is a pool of 2000 users and we start making each user\n> do series of transactions on different rows and see how much the\n> database can handle linearly before some bottleneck (system or\n> database) kicks in and there can be no more linear increase in\n> active users. Many times there is drop after reaching some value of\n> active users. If all 2000 users can scale linearly then another test\n> with say 2500 can be executed .. All to do is what's the limit we\n> can go till typically there are no system resources still remaining\n> to be exploited.\n \n> I dont think I have misconfigured the system.\n \nIf you're not using a queuing connection pool with that many users, I\nthink you have. Let me illustrate with a simple example.\n \nImagine you have one CPU and negligible hardware resource delays, and\nyou have 100 queries submitted at the same moment which each take one\nsecond of CPU time. If you start them all concurrently, they will all\nbe done in about 100 seconds, with an average run time of 100 seconds.\nIf you queue them and run them one at a time, the first will be done\nin one second, and the last will be done in 100 seconds, with an\naverage run time of 50.5 seconds. The context switching and extra RAM\nneeded for the multiple connections would tend to make the difference\nworse.\n \nWhat makes concurrent queries helpful is that one might block waiting\non a resource, and another can run during that time. Still, there is\na concurrency level at which the above effect comes into play. The\nmore CPUs and spindles you have, the higher the count of useful\nconcurrent sessions; but there will always be a point where you're\nbetter off queuing additional requests and scheduling them. The RAM\nusage per connection and the cost of context switching pretty much\nguarantee that.\n \nWith our hardware and workloads, I've been able to spot the pattern\nthat we settle in best with a pool which allows the number of active\nqueries to be about 2 times the CPU count plus the number of effective\nspindles. 
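To put a purely illustrative number on that heuristic: on a 64 CPU box\nwith, say, 16 effective spindles (the spindle count is only an\nassumption on my part, since this test apparently runs against\nRAM-backed storage), it works out to (2 * 64) + 16 = 144 active\nqueries, which is far below the thousands of connections being opened\nin these tests. 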
Other hardware environments and workloads will undoubtedly\nhave different \"sweet spots\"; however, 2000 concurrent queries running\non 64 CPUs with no significant latency on storage or network is almost\ncertainly *not* a sweet spot. Changing PostgreSQL to be well\noptimized for such a misconfigured system seems ill-advised to me.\n \nOn the other hand, I'd love to see numbers for your change in a more\noptimally configured environment, since we found that allowing the\n\"thundering herd\" worked pretty well in allowing threads in our\nframework's database service to compete for pulling requests off the\nprioritized queue of requests -- as long as the herd didn't get too\nbig. I just want to see some plausible evidence from a test\nenvironment which seems reasonable to me before I spend time setting\nup my own benchmarks.\n \n> I am trying another run where I limit the waked up threads to a \n> pre-configured number to see how various numbers pans out in terms\n> of throughput on this server.\n \nPlease ensure that requests are queued when all allowed connections\nare busy, and that when a connection completes a request it will\nimmediately begin serving another. Routing requests through a method\nwhich introduces an arbitrary sleep delay before waking up and\nchecking again is not going to be very convincing. It would help if\nthe number of connections used is related to your pool size, and the\nmax_connections is adjusted proportionally.\n \n-Kevin\n", "msg_date": "Thu, 12 Mar 2009 14:25:31 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Thu, 12 Mar 2009, Jignesh K. Shah wrote:\n\n> As soon as I get more \"cycles\" I will try variations of it but it would \n> help if others can try it out in their own environments to see if it \n> helps their instances.\n\nWhat you should do next is see whether you can remove the bottleneck your \ntest is running into via using a connection pooler. That's what I think \nmost informed people would do were you to ask how to setup an optimal \nenvironment using PostgreSQL that aimed to serve thousands of clients. \nIf that makes your bottleneck go away, that's what you should be \nrecommending to customers who want to scale in this fashion too. If the \nbottleneck moves to somewhere else, that new hot spot might be one people \ncare more about. Given that there are multiple good pooling solutions \nfloating around already, it's hard to justify dumping coding and testing \nresources here if that makes the problem move somewhere else.\n\nIt's great that you've identified an alternate scheduling approach that \nhelps on your problematic test case, but you're a long ways from having a \nfull model of how changes to the locking model impact other database \nworkloads. As for the idea of doing something in this area for 8.4, there \nare a significant number of performance-related changes already committed \nfor that version that deserve more focused testing during beta. 
You're \nway too late to throw another one into that already crowded area.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 12 Mar 2009 16:35:31 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/12/09 11:28 AM, \"Tom Lane\" <[email protected]> wrote:\n\nScott Carey <[email protected]> writes:\n> They are not meaningless. It is certainly more to understand, but the test is entirely valid without that. In a CPU bound / RAM bound case, as concurrency increases you look for the throughput trend, the %CPU use trend and the context switch rate trend. More information would be useful but the test is validated by the evidence that it is held up by lock contention.\n\nEr ... *what* evidence? There might be evidence somewhere that proves\nthat, but Jignesh hasn't shown it. The available data suggests that the\nfirst-order performance limiter in this test is something else.\nOtherwise it should be possible to max out the performance with a lot\nless than 1000 active backends.\n\n regards, tom lane\n\nEvidence:\n\nRamp up the concurrency, measure throughput. Throughput peaks at X with low CPU utilization, linear ramp up until then. Change lock code. Throughput scales past that point to much higher CPU load.\nThat's evidence. Please explain a scenario that proves otherwise. Your last statement above is true but not applicable here. The test is not 1000 backends, it lists 1000 users.\n\nThere is a key difference between users and backends. In fact, the evidence is that the result can't be backends (the column is labeled users). If its not I/O bound it must cap out at roughly the number of active backends near the number of CPU or less, and as noted it does not. This isn't proof that there is something wrong with the test, its proof that the 1000 number cannot be active backends.\n\nI spent a decade solving and tuning CPU scalability problems in CPU/memory bound systems. Sophisticated tests peak at a user count >> CPU count, because real users don't execute as fast as possible. Through a chain of servers several layers deep, each tier can have different levels of concurrent activity. Its useful to measure concurrency at each tier, but almost impossible in postgres (easy in oracle / mssql). Most systems have a limited thread pool but can queue much more than that number. Postgres and many databases don't do that so clients must via connection pools. But the result behavior of too much concurrency is thrashing and inefficiency - this shows up in a test that ramps up concurrency by peak throughput followed by a steep drop off in throughput as concurrency goes into the thrashing state. At this thrashing time a lot of context switching and sometimes RAM pressure is a typical symptom.\n\nThe only way to construct a test that shows the current described behavior (linear ramp up, then plateau) is to have lock contention, I/O bottlenecks, or CPU saturation. The number of users is irrelevant, the trend is the same regardless of the relationship between user count and active backend count (0 delay or 1 second delay, same result different X axis). 
If it was an I/O or client bottleneck, changing the lock code wouldn't have made it faster.\n\nThe evidence is 100% certain that the first test result is limited by locks, and that changing them increased throughput.\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4 \n\n\nOn 3/12/09 11:28 AM, \"Tom Lane\" <[email protected]> wrote:\n\nScott Carey <[email protected]> writes:\n> They are not meaningless.  It is certainly more to understand, but the test is entirely valid without that.  In a CPU bound / RAM bound case, as concurrency increases you look for the throughput trend, the %CPU use trend and the context switch rate trend.  More information would be useful but the test is validated by the evidence that it is held up by lock contention.\n\nEr ... *what* evidence?  There might be evidence somewhere that proves\nthat, but Jignesh hasn't shown it.  The available data suggests that the\nfirst-order performance limiter in this test is something else.\nOtherwise it should be possible to max out the performance with a lot\nless than 1000 active backends.\n\n                        regards, tom lane\n\nEvidence:\n\nRamp up the concurrency, measure throughput.  Throughput peaks at X with low CPU utilization, linear ramp up until then.   Change lock code.  Throughput scales past that point to much higher CPU load.\nThat’s evidence.  Please explain a scenario that proves otherwise.  Your last statement above is true but not applicable here.  The test is not 1000 backends, it lists 1000 users.\n\nThere is a key difference between users and backends.  In fact, the evidence is that the result can’t be backends (the column is labeled users).  If its not I/O bound it must cap out at roughly the number of active backends near the number of CPU or less,  and as noted it does not.  This isn’t proof that there is something wrong with the test, its proof that the 1000 number cannot be active backends.\n \nI spent a decade solving and tuning CPU scalability problems in CPU/memory bound systems.  Sophisticated tests peak at a user count >> CPU count, because real users don’t execute as fast as possible.  Through a chain of servers several layers deep, each tier can have different levels of concurrent activity.  Its useful to measure concurrency at each tier, but almost impossible in postgres (easy in oracle / mssql).  Most systems have a limited thread pool but can queue much more than that number.  Postgres and many databases don’t do that so clients must via connection pools.  But the result behavior of too much concurrency is thrashing and inefficiency — this shows up in a test that ramps up concurrency by peak throughput followed by a steep drop off in throughput as concurrency goes into the thrashing state.  At this thrashing time a lot of context switching and sometimes RAM pressure is a typical symptom.\n\nThe only way to construct a test that shows the current described behavior (linear ramp up, then plateau) is to  have lock contention, I/O bottlenecks, or CPU saturation.  The number of users is irrelevant, the trend is the same regardless of the relationship between user count and active backend count (0 delay or 1 second delay, same result different X axis).  If it was an I/O or client bottleneck, changing the lock code wouldn’t have made it faster.  
\n\nThe evidence is 100% certain that the first test result is limited by locks, and that changing them increased throughput.", "msg_date": "Thu, 12 Mar 2009 14:45:54 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "On 3/12/09 1:35 PM, \"Greg Smith\" <[email protected]> wrote:\n\nOn Thu, 12 Mar 2009, Jignesh K. Shah wrote:\n\n> As soon as I get more \"cycles\" I will try variations of it but it would\n> help if others can try it out in their own environments to see if it\n> helps their instances.\n\nWhat you should do next is see whether you can remove the bottleneck your\ntest is running into via using a connection pooler.\n\nI doubt it is running into a bottleneck due to that, the symptoms aren't right. He can change his test to have near zero delay to simulate such a connection pool.\n\nIf it was an issue due to concurrency at that level, the results would not have scaled linearly with user count to a plateau the way they did. There would be a steep drop-down from thrashing as concurrency kept going up. Context switch data would help, since the thrashing ends up as a measurable there. No evidence of concurrency thrashing yet that I see, but more tests and data would help.\n\nThe disconnect, is that the Users column in his data does not represent back-ends. It represents concurrent users on the front-end. Whether these while idle pool or not is not clear. It would be useful to rule that possibility out but that looks like an improbable diagnosis to me given the lack of performance decrease as concurrency goes up.\nFurthermore, if the problem was due to too much concurrency in the database with active connections, its hard to see how changing the lock code would change the result the way it did - increasing CPU and throughput accordingly. Again, context switch rate info would help rule out many possibilities.\n\nThat's what I think\nmost informed people would do were you to ask how to setup an optimal\nenvironment using PostgreSQL that aimed to serve thousands of clients.\nIf that makes your bottleneck go away, that's what you should be\nrecommending to customers who want to scale in this fashion too.\n\nFirst just run a test with a tiny delay (5ms? 0?) and fewer users to compare. If your theory that a connection pooler would help, that test would provide higher throughput with low user count and not be lock limited. This may be easier to run than setting up a pooler, though he should investigate one regardless.\n\nIf the\nbottleneck moves to somewhere else, that new hot spot might be one people\ncare more about. Given that there are multiple good pooling solutions\nfloating around already, it's hard to justify dumping coding and testing\nresources here if that makes the problem move somewhere else.\n\nIts worth ruling out given that even if the likelihood is small, the fix is easy. 
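For reference, one of those existing poolers, pgbouncer for example, needs only a handful of configuration lines to sit in front of the test database; every value below is purely illustrative, including the database name and paths:\n\n[databases]\nigen = host=127.0.0.1 port=5432 dbname=igen\n\n[pgbouncer]\nlisten_addr = 127.0.0.1\nlisten_port = 6432\nauth_type = trust\nauth_file = /etc/pgbouncer/userlist.txt\npool_mode = transaction\nmax_client_conn = 2000\ndefault_pool_size = 128\n\nPointing the simulated users at port 6432 instead of 5432 caps the number of active backends at roughly default_pool_size while still letting every simulated user keep its think time. 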
However, I don't see the throughput drop from peak as more concurrency is added that is the hallmark of this problem - usually with a lot of context switching and a sudden increase in CPU use per transaction.\n\nThe biggest disconnect in load testing almost always occurs over the definition of \"concurrent users\".\nThink of an HTTP app, backed by a db - about as simple as it gets these days (this is fun with 5, 6 tier fanned out stuff).\n\n\"Users\" could mean:\nNumber of application user logins used.\nNumber of test harness threads or processes that are active.\nNumber of open HTTP connections\nNumber of HTTP requests being processed\nNumber of connections from the app to the db\nNumber of active connections from the app to the db\n\nKnowing which of these is the topic, and what that means in relation to all the others, is often messy. Without knowing which one it is in a result, you can still learn a lot. The data in the results here prove its not the last one on the list above, nor the first one. It could still be any of the middle four, but is most likely #2 or the second to last one (which might be equivalent).\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n\n\nOn 3/12/09 1:35 PM, \"Greg Smith\" <[email protected]> wrote:\n\nOn Thu, 12 Mar 2009, Jignesh K. Shah wrote:\n\n> As soon as I get more \"cycles\" I will try variations of it but it would\n> help if others can try it out in their own environments to see if it\n> helps their instances.\n\nWhat you should do next is see whether you can remove the bottleneck your\ntest is running into via using a connection pooler.  \n\nI doubt it is running into a bottleneck due to that, the symptoms aren’t right.  He can change his test to have near zero delay to simulate such a connection pool.  \n\nIf it was an issue due to concurrency at that level, the results would not have scaled linearly with user count to a plateau the way they did.  There would be a steep drop-down from thrashing as concurrency kept going up.  Context switch data would help, since the thrashing ends up as a measurable there.  No evidence of concurrency thrashing yet that I see, but more tests and data would help.\n\nThe disconnect, is that the Users column in his data does not represent back-ends.  It represents concurrent users on the front-end.  Whether these while idle pool or not is not clear.  It would be useful to rule that possibility out but that looks like an improbable diagnosis to me given the lack of performance decrease as concurrency goes up.\nFurthermore, if the problem was due to too much concurrency in the database with active connections, its hard to see how changing the lock code would change the result the way it did — increasing CPU and throughput accordingly.  Again, context switch rate info would help rule out many possibilities.  \n\nThat's what I think\nmost informed people would do were you to ask how to setup an optimal\nenvironment using PostgreSQL that aimed to serve thousands of clients.\nIf that makes your bottleneck go away, that's what you should be\nrecommending to customers who want to scale in this fashion too.  \n\nFirst just run a test with a tiny delay (5ms? 0?) and fewer users to compare.  If your theory that a connection pooler would help, that test would provide higher throughput with low user count and not be lock limited.  
This may be easier to run than setting up a pooler, though he should investigate one regardless.\n\nIf the\nbottleneck moves to somewhere else, that new hot spot might be one people\ncare more about.  Given that there are multiple good pooling solutions\nfloating around already, it's hard to justify dumping coding and testing\nresources here if that makes the problem move somewhere else.\n\nIts worth ruling out given that even if the likelihood is small, the fix is easy.  However, I don’t see the throughput drop from peak as more concurrency is added that is the hallmark of this problem — usually with a lot of context switching and a sudden increase in CPU use per transaction. \n\nThe biggest disconnect in load testing almost always occurs over the definition of “concurrent users”.\nThink of an HTTP app, backed by a db — about as simple as it gets these days (this is fun with 5, 6 tier fanned out stuff).\n\n“Users” could mean:\nNumber of application user logins used.\nNumber of test harness threads or processes that are active.\nNumber of open HTTP connections\nNumber of HTTP requests being processed\nNumber of connections from the app to the db\nNumber of active connections from the app to the db\n\nKnowing which of these is the topic, and what that means in relation to all the others, is often messy.  Without knowing which one it is in a result, you can still learn a lot.  The data in the results here prove its not the last one on the list above, nor the first one.  It could still be any of the middle four, but is most likely #2 or the second to last one (which might be equivalent).", "msg_date": "Thu, 12 Mar 2009 15:15:51 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/12/09 11:37 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n\n\nAnd again this is the third time I am saying.. the test users also have some latency build up in them which is what generally is exploited to get more users than number of CPUS on the system but that's the point we want to exploit.. Otherwise if all new users begin to do their job with no latency then we would need 6+ billion cpus to handle all possible users. Typically as an administrator (System and database) I can only tweak/control latencies within my domain, that is network, disk, cpu's etc and those are what I am tweaking and coming to a *Configured* environment and now trying to improve lock contentions/waits in PostgreSQL so that we have an optimized setup.\n\nIn general, I suggest that it is useful to run tests with a few different types of pacing. Zero delay pacing will not have realistic number of connections, but will expose bottlenecks that are universal, and less controversial. Small latency (100ms to 1s) tests are easy to make from the zero delay ones, and help expose problems with connection count or other forms of 'non-active' concurrency. End-user realistic delays are app specific, and useful with larger holistic load tests (say, through the application interface). Generally, running them in this order helps because at each stage you are adding complexity. 
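As a concrete illustration only (this is generic pgbench rather than the iGen kit; every number, the file name, and the bench database name are placeholders, and the \sleep meta-command plus the -T switch assume a pgbench from roughly the 8.3/8.4 era or later), the first two kinds of pacing differ by a single line in a small custom script, say paced.sql:\n\n  \setrandom aid 1 100000\n  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n  \sleep 200 ms\n\n  pgbench -n -c 64 -T 300 -f paced.sql bench    # small-latency pacing\n  pgbench -n -c 64 -T 300 -S bench              # zero-delay, select-only pacing\n\nDropping the \sleep line (or just using the built-in -S mode) turns the paced run into the zero delay case without changing anything else. 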
Based on your explanations, you've probably done much of this so far and your approach sounds solid to me.\nIf the first case fails (zero delay, smaller user count), there is no way the others will pass.\n\nI am trying another run where I limit the waked up threads to a pre-configured number to see how various numbers pans out in terms of throughput on this server.\n\nRegards,\nJignesh\n\n\nThis would be good, as would waking up only the shared locks, but refining the test somewhat to be maximally convincing would help. The first thing to show is either a test with very small or no sleep delay, or with a connection pooler in between. I prefer the former since it is the most simple. This will be a test that is less entangled with the connection count and should peak at a lot closer to the CPU core count and be more convincing to some. I'm positive it won't change the basic trend (ramp up and plateau, with a higher plateau with the changed lock code) but others seem unconvinced and I'm a nobody anyway.\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n\nOn 3/12/09 11:37 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n\n\nAnd again this is the third time I am saying.. the test users also have some latency build up in them which is what generally is exploited to get more users than number of CPUS on the system but that's the point we want to exploit.. Otherwise if all new users begin to do their job with no latency then we would need 6+ billion cpus to handle all possible users. Typically as an administrator (System and database) I can only tweak/control latencies within my domain, that is network, disk, cpu's etc and those are what I am tweaking and coming to a *Configured* environment and now trying to improve lock contentions/waits in PostgreSQL so that we have an optimized setup.\n\nIn general, I suggest that it is useful to run tests with a few different types of pacing.  Zero delay pacing will not have realistic number of connections, but will expose bottlenecks that are universal, and less controversial.   Small latency (100ms to 1s) tests are easy to make from the zero delay ones, and help expose problems with connection count or other forms of ‘non-active’ concurrency.  End-user realistic delays are app specific, and useful with larger holistic load tests (say, through the application interface).  Generally, running them in this order helps because at each stage you are adding complexity.  Based on your explanations, you’ve probably done much of this so far and your approach sounds solid to me.  \nIf the first case fails (zero delay, smaller user count), there is no way the others will pass.\n\nI am trying another run where I limit the waked up threads to a pre-configured number to see how various numbers pans out in terms of throughput on this server.\n\nRegards,\nJignesh\n\n\nThis would be good, as would waking up only the shared locks, but refining the test somewhat to be maximally convincing would help.  The first thing to show is either a test with very small or no sleep delay, or with a connection pooler in between.  I prefer the former since it is the most simple.   This will be a test that is less entangled with the connection count and should peak at a lot closer to the CPU core count and be more convincing to some.  
I’m positive it won’t change the basic trend  (ramp up and plateau, with a higher plateau with the changed lock code) but others seem unconvinced and I’m a nobody anyway.", "msg_date": "Thu, 12 Mar 2009 15:57:05 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "> Its worth ruling out given that even if the likelihood is small, the fix is\n> easy.  However, I don’t see the throughput drop from peak as more\n> concurrency is added that is the hallmark of this problem — usually with a\n> lot of context switching and a sudden increase in CPU use per transaction.\n\nThe problem is that the proposed \"fix\" bears a strong resemblance to\nattempting to improve your gas mileage by removing a few non-critical\nparts from your car, like, say, the bumpers, muffler, turn signals,\nwindshield wipers, and emergency brake. While it's true that the car\nmight be drivable in that condition (as long as nothing unexpected\nhappens), you're going to have a hard time convincing the manufacturer\nto offer that as an options package.\n\nI think that changing the locking behavior is attacking the problem at\nthe wrong level anyway. If someone wants to look at optimizing\nPostgreSQL for very large numbers of concurrent connections without a\nconnection pooler... at least IMO, it would be more worthwhile to\nstudy WHY there's so much locking contention, and, on a lock by lock\nbasis, what can be done about it without harming performance under\nmore normal loads? The fact that there IS locking contention is sorta\ninteresting, but it would be a lot more interesting to know why.\n\n...Robert\n", "msg_date": "Thu, 12 Mar 2009 21:29:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Thu, 12 Mar 2009, Scott Carey wrote:\n\n> Furthermore, if the problem was due to too much concurrency in the \n> database with active connections, its hard to see how changing the lock \n> code would change the result the way it did ?\n\nWhat I wonder about is if the locking mechanism is accidentally turning \ninto a CPU resource scheduling problem on this benchmark. If the \nconnections were pooled instead, control over that scheduling would be \nmore explicit, because connections would more directly map onto physical \nCPUs. What if the fall-off is because the sum of the working code set \nhere is simply exceeding the sum of the CPU caching available once the \nnumber of active connections gets big enough? The real problem could be \nthat the connections waiting on ProcArray are just falling out of cache, \nsuch that when they do wake up they take a while to page back in and keep \ngoing.\n\nI wouldn't actually bet anything on that theory though, or any of the \nothers offered here. I find wandering into performance bottleneck \nanalysis presuming you know what's going on to be dangerous. The bigger \nissue here is that Jignesh is using a configuration known to be \nproblematic (lots of connections), which introduces some uncertainty \nabout the true root cause here. 
Whether it's well founded or not, it \nstill hurts his case.\n\nAnd to step back for a second, after reading up on it again I see that \nSun's internal iGen-OLTP benchmark \"stresses lock management and \nconnectivity\"[1], which makes me wonder even more than I did before about \nhow specific this fix is to this workload.\n\n[1] http://blogs.sun.com/bmseer/entry/t2000_adds_database_leadership_to\n\n> First just run a test with a tiny delay (5ms? 0?) and fewer users to \n> compare. If your theory that a connection pooler would help, that test \n> would provide higher throughput with low user count and not be lock \n> limited.\n\nIf the symptoms stay the same but are just scaled to a much lower \nconnection count, that might help rule out some types of context switching \nand caching problem from the list of most likely suspects. Might as well \nmake it 0ms to minimize the number of connections.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\nDate: Thu, 12 Mar 2009 23:31:05 -0400 (EDT)\nFrom: Greg Smith <[email protected]>\nSubject: Re: Proposal of tunable fix for scalability of 8.4\n\nOn Thu, 12 Mar 2009, Jignesh K. Shah wrote:\n\n> That said the testkit that I am using is a lightweight OLTP typish \n> workload which a user runs against a preknown schema and between various \n> transactions that it does it emulates a wait time of 200ms.\n\nAfter re-reading about this all again at \nhttp://blogs.sun.com/jkshah/resource/pgcon_problems.pdf I remembered I \nwanted more info on just what Sun's iGen OLTP does anyway. 
Here's a \ncollection of published comments on it that assembles into a reasonably \ndetailed picture, as long as you're somewhat familiar with what TPC-C \ndoes:\n\nhttp://blogs.sun.com/bmseer/entry/t2000_adds_database_leadership_to\n\n\"The iGEN-OLTP 1.5 benchmark is a SUN internally developed transaction \nprocessing database workload. This workload simulates a light-weight \nGlobal Order System that stresses lock management and connectivity.\"\n\nhttp://www.mysqlperformanceblog.com/2008/02/27/a-piece-of-sunmysql-marketing/#comment-246663\n\n\"The iGen workload was created from actual customer workloads and has a \nlot more complexity than Sysbench which only test very simple operations \none at a time. The iGen database consist of 6 tables and its executes a \ncombination of light, medium and heavy transactions.\"\n\nhttp://www.sun.com/third-party/global/oracle/collateral/T2000_Oracle_iGEN_05-12-06.pdf?null\n\n\"The iGEN-OLTP benchmark is a stress and performance test, measuring the \nthroughput and simultaneous user connections of an OLTP database workload. \nThe iGEN-OLTP workload is based on customer applications and is \nconstructed as a 2-tier orders database application where three \ntransactions are executed:\n\n * light read-only query\n * medium read-only query\n * 'heavy' read and insert operation.\n\nThe transactions are comprised of various SQL statements: read-only \nselects, joins, update and insert operations. iGen OLTP avoids problems \nthat plague other OTLP benchmarks like TPC-C. TPC-C has problems with only \nusing light-weight queries, allowing artificial data partitioning, and \nonly testing a few database functions. The iGen transactions take almost \ntwice the computation work compared to the TPC-C transactions.\"\n\nhttp://blogs.sun.com/ritu/entry/mysql_benchmark_us_t2_beats\n\n\"iGen OLTP avoids problems that plague other OTLP benchmarks like TPC-C. \nIn particular, it is completely random in table row selections and thus is \ndifficult to use artificial optimizations. iGen OLTP stresses process and \nthread creation, process scheduling, and database commit processing...The \ntransactions are comprised of various SQL transactions: read-only selects, \njoins, inserts and update operations.\"\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 12 Mar 2009 23:00:38 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nScott Carey wrote:\n> On 3/12/09 11:37 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n>\n>\n> And again this is the third time I am saying.. the test users also\n> have some latency build up in them which is what generally is\n> exploited to get more users than number of CPUS on the system but\n> that's the point we want to exploit.. Otherwise if all new users\n> begin to do their job with no latency then we would need 6+\n> billion cpus to handle all possible users. Typically as an\n> administrator (System and database) I can only tweak/control\n> latencies within my domain, that is network, disk, cpu's etc and\n> those are what I am tweaking and coming to a *Configured*\n> environment and now trying to improve lock contentions/waits in\n> PostgreSQL so that we have an optimized setup.\n>\n> In general, I suggest that it is useful to run tests with a few \n> different types of pacing. 
Zero delay pacing will not have realistic \n> number of connections, but will expose bottlenecks that are universal, \n> and less controversial. Small latency (100ms to 1s) tests are easy to \n> make from the zero delay ones, and help expose problems with \n> connection count or other forms of �non-active� concurrency. End-user \n> realistic delays are app specific, and useful with larger holistic \n> load tests (say, through the application interface). Generally, \n> running them in this order helps because at each stage you are adding \n> complexity. Based on your explanations, you�ve probably done much of \n> this so far and your approach sounds solid to me.\n> If the first case fails (zero delay, smaller user count), there is no \n> way the others will pass.\n>\n>\n\nI think I have done that before so I can do that again by running the \nusers at 0 think time which will represent a \"Connection pool\" which is \nhighly utilized\" and test how big the connection pool can be before the \nthroughput tanks.. This can be useful for App Servers which sets up \nconnections pools of their own talking with PostgreSQL.\n\n-Jignesh\n\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Fri, 13 Mar 2009 08:38:39 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nGreg Smith wrote:\n> On Thu, 12 Mar 2009, Jignesh K. Shah wrote:\n>\n>> As soon as I get more \"cycles\" I will try variations of it but it \n>> would help if others can try it out in their own environments to see \n>> if it helps their instances.\n>\n> What you should do next is see whether you can remove the bottleneck \n> your test is running into via using a connection pooler. That's what \n> I think most informed people would do were you to ask how to setup an \n> optimal environment using PostgreSQL that aimed to serve thousands of \n> clients. If that makes your bottleneck go away, that's what you should \n> be recommending to customers who want to scale in this fashion too. \n> If the bottleneck moves to somewhere else, that new hot spot might be \n> one people care more about. Given that there are multiple good \n> pooling solutions floating around already, it's hard to justify \n> dumping coding and testing resources here if that makes the problem \n> move somewhere else.\n>\n> It's great that you've identified an alternate scheduling approach \n> that helps on your problematic test case, but you're a long ways from \n> having a full model of how changes to the locking model impact other \n> database workloads. As for the idea of doing something in this area \n> for 8.4, there are a significant number of performance-related changes \n> already committed for that version that deserve more focused testing \n> during beta. You're way too late to throw another one into that \n> already crowded area.\n>\n\nOn the other hand I have taken up a task of showing 8.4 Performance \nimprovements over 8.3.\nCan we do a vote on which specific performance features we want to test? \nI can use dbt2, dbt3 tests to see how 8.4 performs and compare it with \n8.3? Also if you have your own favorite test to test it out let me \nknow.. 
I have allocated some time for this task so it is feasible for me \nto do this.\n\nMany of the improvements may not be visible through this standard tests \nso feedback on testing methology for those is also appreciated.\n* Visibility map - Reduce Vacuum overhead - (I think I can time vacuum \nwith some usage on both databases)\n* Prefetch IO with posix_fadvice () - Though I am not sure if it is \nsupported on UNIX or not (but can be tested by standard tests)\n* Parallel pg_restore (Can be tested with a big database dump)\n\nAny more features that I can stress during the testing phase?\n\n\n\nRegards,\nJignesh\n\n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Fri, 13 Mar 2009 08:44:39 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "8.4 Performance improvements: was Re: Proposal of tunable\n\tfix for scalability of 8.4" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n\n> Scott Carey wrote:\n>> On 3/12/09 11:37 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n>>\n>> In general, I suggest that it is useful to run tests with a few different\n>> types of pacing. Zero delay pacing will not have realistic number of\n>> connections, but will expose bottlenecks that are universal, and less\n>> controversial\n>\n> I think I have done that before so I can do that again by running the users at\n> 0 think time which will represent a \"Connection pool\" which is highly utilized\"\n> and test how big the connection pool can be before the throughput tanks.. This\n> can be useful for App Servers which sets up connections pools of their own\n> talking with PostgreSQL.\n\nKeep in mind when you do this that it's not interesting to test a number of\nconnections much larger than the number of processors you have. Once the\nsystem reaches 100% cpu usage it would be a misconfigured connection pooler\nthat kept more than that number of connections open.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Fri, 13 Mar 2009 13:05:36 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n\n> Scott Carey wrote:\n>> On 3/12/09 11:37 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n>>\n>> In general, I suggest that it is useful to run tests with a few different\n>> types of pacing. Zero delay pacing will not have realistic number of\n>> connections, but will expose bottlenecks that are universal, and less\n>> controversial\n>\n> I think I have done that before so I can do that again by running the users at\n> 0 think time which will represent a \"Connection pool\" which is highly utilized\"\n> and test how big the connection pool can be before the throughput tanks.. This\n> can be useful for App Servers which sets up connections pools of their own\n> talking with PostgreSQL.\n\nA minute ago I said:\n\n Keep in mind when you do this that it's not interesting to test a number of\n connections much larger than the number of processors you have. 
Once the\n system reaches 100% cpu usage it would be a misconfigured connection pooler\n that kept more than that number of connections open.\n\nLet me give another reason to call this misconfigured: Postgres connections\nare heavyweight and it's wasteful to keep them around but idle. This has a lot\nin common with the issue with non-persistent connections where each connection\nis used for only a short amount of time.\n\nIn Postgres each connection requires a process, which limits scalability on a\nlot of operating systems already. On many operating systems having thousands\nof processes in itself would create a lot of issues.\n\nEach connection then allocates memory locally for things like temporary table\nbuffers, sorting, hash tables, etc. On most operating systems this memory is\nnot freed back to the system when it hasn't been used recently. (Worse, it's\nmore likely to be paged out and have to be paged in from disk even if it\ncontains only garbage we intend to overwrite!).\n\nAs a result, having thousands of processes --aside from any contention-- would\nlead to inefficient use of system resources. Consider for example that if your\nconnections are using 1MB each then a thousand of them are using 1GB of RAM.\nWhen only 64MB are actually useful at any time. I bet that 64MB would fit\nentirely in your processor caches you weren't jumping around in the gigabyte\nof local memory your thousands of processes' have allocated.\n\nConsider also that you're limited to setting relatively small settings of\nwork_mem for fear all your connections might happen to start a sort\nsimultaneously. So (in a real system running arbitrary queries) instead of a\nsingle quicksort in RAM you'll often be doing unnecessary on-disk merge sorts\nusing unnecessarily small merge heaps while gigabytes of RAM either go wasted\nto cover a rare occurrence or are being used to hold other sorts which have\nbeen started but context-switched away.\n\nTo engineer a system intended to handle thousands of simultaneous connections\nyou would want each backend to use the most light-weight primitives such as\nthreads, and to hold the least possible state in local memory. That would look\nlike quite a different system. The locking contention is the least of the\nissues we would want to deal with to get there.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Fri, 13 Mar 2009 13:28:36 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nGregory Stark wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n>\n> \n>> Scott Carey wrote:\n>> \n>>> On 3/12/09 11:37 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n>>>\n>>> In general, I suggest that it is useful to run tests with a few different\n>>> types of pacing. Zero delay pacing will not have realistic number of\n>>> connections, but will expose bottlenecks that are universal, and less\n>>> controversial\n>>> \n>> I think I have done that before so I can do that again by running the users at\n>> 0 think time which will represent a \"Connection pool\" which is highly utilized\"\n>> and test how big the connection pool can be before the throughput tanks.. 
This\n>> can be useful for App Servers which sets up connections pools of their own\n>> talking with PostgreSQL.\n>> \n>\n> Keep in mind when you do this that it's not interesting to test a number of\n> connections much larger than the number of processors you have. Once the\n> system reaches 100% cpu usage it would be a misconfigured connection pooler\n> that kept more than that number of connections open.\n>\n> \n\nGreg, Unfortuately the problem is that.. I am trying to reach 100% CPU which I cannot and hence I am increasing the user count :-)\n\n-Jignesh\n\n\n", "msg_date": "Fri, 13 Mar 2009 09:36:53 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\"Jignesh K. Shah\" <[email protected]> writes:\n\n> Can we do a vote on which specific performance features we want to test?\n>\n> Many of the improvements may not be visible through this standard tests so\n> feedback on testing methology for those is also appreciated.\n> * Visibility map - Reduce Vacuum overhead - (I think I can time vacuum with\n> some usage on both databases)\n\nTiming vacuum is kind of pointless -- the only thing that matters is whether\nit's \"fast enough\". But it is worth saying that good benchmarks should include\nnormal vacuum runs. Benchmarks which don't run long enough to trigger vacuum\naren't realistic.\n\n> * Prefetch IO with posix_fadvice () - Though I am not sure if it is supported\n> on UNIX or not (but can be tested by standard tests)\n\nWell clearly this is my favourite :)\n\nAFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit. It\nwould be great to hear if you could catch the ear of the right people to get\nan implementation committed. Depending on how the i/o scheduler system is\nwritten it might not even be hard -- the Linux implementation of WILLNEED is\nall of 20 lines.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Fri, 13 Mar 2009 13:43:01 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of tunable fix for\n\tscalability of 8.4" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n\n> Gregory Stark wrote:\n>> Keep in mind when you do this that it's not interesting to test a number of\n>> connections much larger than the number of processors you have. Once the\n>> system reaches 100% cpu usage it would be a misconfigured connection pooler\n>> that kept more than that number of connections open.\n>\n> Greg, Unfortuately the problem is that.. I am trying to reach 100% CPU which\n> I cannot and hence I am increasing the user count :-)\n\nThe effect of increasing the number of users with a connection pooler would be\nto decrease the 200ms sleep time to 0.\n\nThis is all assuming the idle time is *between* transactions. If you have idle\ntime in the middle of transactions things become a lot more tricky. 
I think we\nare missing something to deal with that use case.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Fri, 13 Mar 2009 13:54:09 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nA minute ago I said:\n\n AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit. It\n would be great to hear if you could catch the ear of the right people to get\n an implementation committed. Depending on how the i/o scheduler system is\n written it might not even be hard -- the Linux implementation of WILLNEED is\n all of 20 lines.\n\nI noticed after sending it that that's slightly unfair. The 20-line function\ncalls another function (which calls another function) to do the real readahead\nwork. That function (mm/readahead.c:__do_page_cache_readahead()) is 48 lines.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Fri, 13 Mar 2009 13:57:06 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of tunable fix for\n\tscalability of 8.4" }, { "msg_contents": "\n>>\n>>\n>> In general, I suggest that it is useful to run tests with a few \n>> different types of pacing. Zero delay pacing will not have realistic \n>> number of connections, but will expose bottlenecks that are \n>> universal, and less controversial. Small latency (100ms to 1s) tests \n>> are easy to make from the zero delay ones, and help expose problems \n>> with connection count or other forms of �non-active� concurrency. \n>> End-user realistic delays are app specific, and useful with larger \n>> holistic load tests (say, through the application interface). \n>> Generally, running them in this order helps because at each stage you \n>> are adding complexity. Based on your explanations, you�ve probably \n>> done much of this so far and your approach sounds solid to me.\n>> If the first case fails (zero delay, smaller user count), there is no \n>> way the others will pass.\n>>\n>>\n>\n> I think I have done that before so I can do that again by running the \n> users at 0 think time which will represent a \"Connection pool\" which \n> is highly utilized\" and test how big the connection pool can be before \n> the throughput tanks.. This can be useful for App Servers which sets \n> up connections pools of their own talking with PostgreSQL.\n>\n> -Jignesh\n>\n>\nSo I backed out my change and used the stock 8.4 snapshot that I had \ndownloaded.. With now 0 think time I do runs with lot less users.. 
\nstill I cannot get it to go to 100% CPU\n60: 8: Medium Throughput: 7761.000 Avg Medium Resp: 0.004\n120: 16: Medium Throughput: 16876.000 Avg Medium Resp: 0.004\n180: 24: Medium Throughput: 25359.000 Avg Medium Resp: 0.004\n240: 32: Medium Throughput: 33104.000 Avg Medium Resp: 0.005\n300: 40: Medium Throughput: 42200.000 Avg Medium Resp: 0.005\n360: 48: Medium Throughput: 49996.000 Avg Medium Resp: 0.005\n420: 56: Medium Throughput: 58260.000 Avg Medium Resp: 0.005\n480: 64: Medium Throughput: 66289.000 Avg Medium Resp: 0.005\n540: 72: Medium Throughput: 74667.000 Avg Medium Resp: 0.005\n600: 80: Medium Throughput: 82632.000 Avg Medium Resp: 0.005\n660: 88: Medium Throughput: 90211.000 Avg Medium Resp: 0.006\n720: 96: Medium Throughput: 98236.000 Avg Medium Resp: 0.006\n780: 104: Medium Throughput: 105517.000 Avg Medium Resp: 0.006\n840: 112: Medium Throughput: 112921.000 Avg Medium Resp: 0.006\n900: 120: Medium Throughput: 118256.000 Avg Medium Resp: 0.007\n960: 128: Medium Throughput: 126499.000 Avg Medium Resp: 0.007\n1020: 136: Medium Throughput: 133354.000 Avg Medium Resp: 0.007\n1080: 144: Medium Throughput: 135826.000 Avg Medium Resp: 0.008\n1140: 152: Medium Throughput: 121729.000 Avg Medium Resp: 0.012\n1200: 160: Medium Throughput: 130487.000 Avg Medium Resp: 0.011\n1260: 168: Medium Throughput: 123368.000 Avg Medium Resp: 0.013\n1320: 176: Medium Throughput: 134649.000 Avg Medium Resp: 0.012\n1380: 184: Medium Throughput: 136272.000 Avg Medium Resp: 0.013\n\n\nVmstat shows that CPUS are hardly busy in the 64-cpu system (CPUS are \nreported busy when there is active process assigned to the cpu)\n-bash-3.2$ vmstat 30\n kthr memory page disk faults cpu\n r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs us \nsy id\n 19 0 0 52691088 46220848 27 302 10 68 68 0 3 1 -0 -0 -0 13411 20762 \n26854 5 3 92\n 0 0 0 45095664 39898296 0 455 0 0 0 0 0 0 0 0 0 698 674 295 \n0 0 100\n 0 0 0 45040640 39867056 5 13 0 0 0 0 0 0 0 0 0 3925 4189 5721 \n0 0 99\n 0 0 0 45038856 39864016 0 5 0 0 0 0 0 0 0 0 0 9479 8643 15205 \n1 1 98\n 0 0 0 45037760 39862552 0 14 0 0 0 0 0 0 0 0 0 12088 9041 19890 \n2 1 98\n 0 0 0 45035960 39860080 0 6 0 0 0 0 0 0 0 0 0 16590 11611 28351 \n2 1 97\n 0 0 0 45034648 39858416 0 17 0 0 0 0 0 0 0 0 0 19192 13027 33218 \n3 1 96\n 0 0 0 45032360 39855464 0 10 0 0 0 0 0 0 0 0 0 22795 16467 40392 \n4 1 95\n 0 0 0 45030840 39853568 0 22 0 0 0 0 0 0 0 0 0 25349 18315 45178 \n4 1 94\n 0 0 0 45027456 39849648 0 10 0 0 0 0 0 0 0 0 0 28158 22500 50804 \n5 2 93\n 0 0 0 45000752 39832608 0 38 0 0 0 0 0 0 0 0 0 31332 25744 56751 \n6 2 92\n 0 0 0 45010120 39836728 0 6 0 0 0 0 0 0 0 0 0 36636 29334 66505 \n7 2 91\n 0 0 0 45017072 39838504 0 29 0 0 0 0 0 0 0 0 0 38553 32313 70915 \n7 2 91\n 0 0 0 45011384 39833768 0 11 0 0 0 0 0 0 0 0 0 41186 35949 76275 \n8 3 90\n 0 0 0 44890552 39826136 0 40 0 0 0 0 0 0 0 0 0 45123 44507 83665 \n9 3 88\n 0 0 0 44882808 39822048 0 6 0 0 0 0 0 0 0 0 0 49342 53431 91783 \n10 3 87\n 0 0 0 45003328 39825336 0 42 0 0 0 0 0 0 0 0 0 48516 42515 91135 \n10 3 87\n 0 0 0 44999688 39821008 0 6 0 0 0 0 0 0 0 0 0 54695 48741 \n102526 11 3 85\n kthr memory page disk faults cpu\n r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs us \nsy id\n 0 0 0 44980744 39806400 0 55 0 0 0 0 0 0 0 0 0 54968 51946 \n103245 12 4 84\n 0 0 0 44992288 39812256 0 6 0 1 1 0 0 0 0 0 0 60506 58205 \n113911 13 4 83\n 0 0 0 44875648 39802128 1 60 0 0 0 0 0 1 0 0 0 60485 66576 \n114081 13 4 83\n 0 0 0 44848792 39795008 0 8 0 0 0 0 0 1 0 0 0 66760 75060 \n126202 15 5 80\n 0 0 0 
44837168 39786432 0 57 0 0 0 0 0 0 0 0 0 66015 68256 \n125209 15 4 81\n 1 0 0 44832680 39779064 0 7 0 0 0 0 0 0 0 0 0 72728 79089 \n138077 17 5 79\n 1 0 0 44926640 39773160 0 69 0 0 0 0 0 0 0 0 0 71990 79148 \n136786 17 5 78\n 1 0 0 44960800 39781416 0 6 0 0 0 0 0 0 0 0 0 75442 77829 \n143783 18 5 77\n 1 0 0 44846472 39773960 0 68 0 0 0 0 0 0 0 0 0 80395 97964 \n153336 19 6 75\n 1 0 0 44887168 39770680 0 7 0 0 0 0 0 0 0 0 0 80010 88144 \n152699 19 6 75\n 1 0 0 44951152 39769576 0 68 0 0 0 0 0 0 0 0 0 83670 85394 \n159745 20 6 74\n 1 0 0 44946080 39763120 0 7 0 0 0 0 0 0 0 0 0 85416 91961 \n163147 21 6 73\n 1 0 0 44923928 39744640 0 83 0 0 0 0 0 0 0 0 0 87625 104894 \n167412 22 6 71\n 1 0 0 44929704 39745368 0 7 0 0 0 0 0 0 0 0 0 93280 103922 \n178357 24 7 69\n 1 0 0 44822712 39738744 0 82 0 0 0 0 0 0 0 0 0 91739 113747 \n175232 23 7 70\n 1 0 0 44790040 39730168 0 6 0 0 0 0 0 0 0 0 0 96159 122496 \n183642 25 7 68\n 1 0 0 44868808 39733872 0 82 0 0 0 0 0 0 0 0 0 96166 107465 \n183502 25 7 68\n 2 0 0 44913296 39730272 0 6 0 0 0 0 0 0 0 0 0 103573 114064 \n197502 27 8 65\n 1 0 0 44890768 39712424 0 96 0 0 0 0 0 0 0 0 0 102235 123767 \n194747 28 8 64\n kthr memory page disk faults cpu\n r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs us \nsy id\n 2 0 0 44900096 39716808 0 6 0 0 0 0 0 0 0 0 0 97323 112955 \n185647 27 8 65\n 1 0 0 44793360 39708336 0 94 0 0 0 0 0 0 0 0 0 98631 131539 \n188076 27 8 65\n 2 0 0 44765136 39700536 0 8 0 0 0 0 0 0 0 0 0 90489 117037 \n172603 27 8 66\n 1 0 0 44887392 39700024 0 94 0 0 0 0 0 0 0 0 0 95832 106992 \n182677 27 8 65\n 2 0 0 44881856 39692632 0 6 0 0 0 0 0 0 0 0 0 95015 109679 \n181194 27 8 65\n 1 0 0 44860928 39674856 0 110 0 0 0 0 0 0 0 0 0 92909 119383 \n177459 27 8 65\n 1 0 0 44861320 39671704 0 8 0 0 0 0 0 0 0 0 0 94677 110967 \n180832 28 8 64\n 1 0 0 44774424 39676000 0 108 0 0 0 0 0 0 0 0 0 94953 123457 \n181397 27 8 65\n 1 0 0 44733000 39668528 0 6 0 0 0 0 0 0 0 0 0 100719 132038 \n192550 29 9 63\n 1 0 0 44841888 39668864 0 106 0 0 0 0 0 0 0 0 0 97293 109177 \n185589 28 8 64\n 1 0 0 44858976 39663592 0 6 0 0 0 0 0 0 0 0 0 103199 118256 \n197049 30 9 62\n 1 0 0 44837216 39646416 0 122 0 0 0 0 0 0 0 0 0 105637 133127 \n201788 31 9 60\n 1 0 0 44842624 39647232 0 8 0 0 0 0 0 0 0 0 0 110530 131454 \n211139 32 9 59\n 2 0 0 44740624 39638832 1 127 0 0 0 0 0 0 0 0 0 111114 145135 \n212398 32 9 59\n 2 0 0 44690824 39628568 0 8 0 0 0 0 0 0 0 0 0 109934 146164 \n210454 32 10 59\n 2 0 0 44691912 39616000 0 132 0 0 0 0 0 0 0 0 0 108231 132279 \n206885 32 9 59\n 1 0 0 44797968 39609832 0 9 0 0 0 0 0 0 0 0 0 111582 135125 \n213446 33 10 58\n 3 0 0 44781632 39598432 0 135 0 0 0 0 0 0 0 0 0 115277 150356 \n220792 34 10 56\n 5 0 0 44791408 39600432 0 10 0 0 0 0 0 0 0 0 0 111428 137996 \n212559 33 9 58\n kthr memory page disk faults cpu\n r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs us \nsy id\n 3 0 0 44710008 39603320 0 135 0 0 0 0 0 0 0 0 0 110564 145678 \n211567 33 10 57\n 5 0 0 44663368 39595008 0 6 0 0 0 0 0 0 0 0 0 108891 143083 \n208389 33 10 58\n 3 0 0 44753496 39593824 0 132 0 0 0 0 0 0 0 0 0 109922 126865 \n209869 33 9 57\n 4 0 0 44788368 39588528 0 7 0 0 0 0 0 0 0 0 0 108680 129073 \n208068 33 10 57\n 2 0 0 44767920 39570592 0 147 0 0 0 0 0 0 0 0 0 106671 142403 \n204724 33 10 58\n 4 0 0 44762656 39563256 0 11 0 0 0 0 0 0 0 0 0 106185 130328 \n204551 34 10 57\n 2 0 0 44674584 39560912 0 148 0 0 0 0 0 0 0 0 0 104757 139147 \n201448 32 10 58\n 1 0 0 44619824 39551024 0 9 0 0 0 0 0 0 0 0 0 103653 142125 \n199896 32 10 58\n 2 0 0 44622480 
39552432 0 141 0 0 0 0 0 0 0 0 0 101373 134547 \n195553 32 9 58\n 1 0 0 44739936 39552312 0 11 0 0 0 0 0 0 0 0 0 102932 121742 \n198205 33 9 58\n\n\nAnd lock stats are as follows at about 280 users sampling for a single \nbackend process:\n# ./84_lwlock.d 29405\n\n Lock Id Mode State Count\n WALWriteLock Exclusive Acquired 1\n XidGenLock Exclusive Waiting 1\n CLogControlLock Shared Waiting 3\n ProcArrayLock Shared Waiting 7\n CLogControlLock Exclusive Waiting 9\n WALInsertLock Exclusive Waiting 45\n CLogControlLock Shared Acquired 52\n ProcArrayLock Exclusive Waiting 61\n XidGenLock Exclusive Acquired 96\n ProcArrayLock Exclusive Acquired 97\n CLogControlLock Exclusive Acquired 152\n WALInsertLock Exclusive Acquired 302\n ProcArrayLock Shared Acquired 729\n FirstLockMgrLock Shared Acquired 812\n FirstBufMappingLock Shared Acquired 857\n SInvalReadLock Shared Acquired 1551\n\n Lock Id Mode State Combined Time (ns)\n WALInsertLock Acquired 89909\n XidGenLock Exclusive Waiting 101488\n WALWriteLock Exclusive Acquired 140563\n CLogControlLock Shared Waiting 354756\n FirstBufMappingLock Acquired 471438\n FirstLockMgrLock Acquired 2907141\n XidGenLock Exclusive Acquired 7450934\n CLogControlLock Exclusive Waiting 11094716\n ProcArrayLock Acquired 15495229\n WALInsertLock Exclusive Waiting 20801169\n CLogControlLock Exclusive Acquired 21339264\n SInvalReadLock Acquired 24309991\n FirstLockMgrLock Exclusive Acquired 39904071\n FirstBufMappingLock Exclusive Acquired 40826435\n ProcArrayLock Shared Waiting 86352947\n WALInsertLock Exclusive Acquired 89336432\n SInvalReadLock Exclusive Acquired 252574515\n ProcArrayLock Exclusive Acquired 315064347\n ProcArrayLock Exclusive Waiting 847806215\n\nmpstat outputs is too much so I am doing aggegation by procesor set \nwhich is all 64 cpus\n\n-bash-3.2$ mpstat -a 10\n\nSET minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr \nsys wt idl sze\n 0 370 0 118649 127575 7595 244456 43931 62166 8700 0 158929 \n38 11 0 50 64\n 0 167 0 119668 128704 7644 246389 43287 62357 8816 0 161006 \n38 11 0 51 64\n 0 27 0 109461 117433 6997 224514 38562 56446 8171 0 148322 \n34 10 0 56 64\n 0 2 0 122368 131549 7871 250237 39620 61478 9082 0 165995 \n36 11 0 52 64\n 0 0 0 122025 131380 7973 249429 37292 59863 8922 0 166319 \n35 11 0 54 64\n\n(quick overview of columns )\n SET Processor set\n minf minor faults\n mjf major faults\n xcal inter-processor cross-calls\n intr interrupts\n ithr interrupts as threads (not counting clock\n interrupt)\n csw context switches\n icsw involuntary context switches\n migr thread migrations (to another processor)\n smtx spins on mutexes (lock not acquired on first\n try)\n srw spins on readers/writer locks (lock not\n acquired on first try)\n syscl system calls\n usr percent user time\n sys percent system time\n wt the I/O wait time is no longer calculated as a\n percentage of CPU time, and this statistic\n will always return zero.\n idl percent idle time\n sze number of processors in the requested proces-\n sor set\n\n\n-Jignesh\n\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Fri, 13 Mar 2009 10:56:45 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Gregory Stark wrote:\n> A minute ago I said:\n>\n> AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit. 
It\n> would be great to hear if you could catch the ear of the right people to get\n> an implementation committed. Depending on how the i/o scheduler system is\n> written it might not even be hard -- the Linux implementation of WILLNEED is\n> all of 20 lines.\n>\n> I noticed after sending it that that's slightly unfair. The 20-line function\n> calls another function (which calls another function) to do the real readahead\n> work. That function (mm/readahead.c:__do_page_cache_readahead()) is 48 lines.\n>\n> \nIt's implemented. I'm guessing it's not what you want to see though:\n\nhttp://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libc/port/gen/posix_fadvise.c\n\n\n", "msg_date": "Fri, 13 Mar 2009 11:20:04 -0400", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of\n\ttunable fix for scalability of 8.4" }, { "msg_contents": ">>> \"Jignesh K. Shah\" <[email protected]> wrote: \n> usr sys wt idl sze\n> 38 11 0 50 64\n \nThe fact that you're maxing out at 50% CPU utilization has me\nwondering -- are there really 64 CPUs here, or are there 32 CPUs with\n\"hyperthreading\" technology (or something conceptually similar)?\n \n-Kevin\n", "msg_date": "Fri, 13 Mar 2009 10:55:30 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": ">>> \"Jignesh K. Shah\" <[email protected]> wrote: \n> 600: 80: Medium Throughput: 82632.000 Avg Medium Resp: 0.005\n \nPersonally, I'd be pretty interested in seeing what the sampling shows\nin a steady state at this level. Any blocking at this level which\nwasn't waiting for input or output in communications with the client\nsoftware would probably something to look at very closely.\n \n-Kevin\n", "msg_date": "Fri, 13 Mar 2009 11:18:19 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nNow with a modified Fix (not the original one that I proposed but \nsomething that works like a heart valve : Opens and shuts to minimum \ndefault way thus controlling how many waiters are waked up )\n\nTime:Users:throughput: Reponse\n60: 8: Medium Throughput: 7774.000 Avg Medium Resp: 0.004\n120: 16: Medium Throughput: 16874.000 Avg Medium Resp: 0.004\n180: 24: Medium Throughput: 25159.000 Avg Medium Resp: 0.004\n240: 32: Medium Throughput: 33216.000 Avg Medium Resp: 0.005\n300: 40: Medium Throughput: 42418.000 Avg Medium Resp: 0.005\n360: 48: Medium Throughput: 49655.000 Avg Medium Resp: 0.005\n420: 56: Medium Throughput: 58149.000 Avg Medium Resp: 0.005\n480: 64: Medium Throughput: 66558.000 Avg Medium Resp: 0.005\n540: 72: Medium Throughput: 74474.000 Avg Medium Resp: 0.005\n600: 80: Medium Throughput: 82241.000 Avg Medium Resp: 0.005\n660: 88: Medium Throughput: 90336.000 Avg Medium Resp: 0.005\n720: 96: Medium Throughput: 99101.000 Avg Medium Resp: 0.006\n780: 104: Medium Throughput: 106028.000 Avg Medium Resp: 0.006\n840: 112: Medium Throughput: 113196.000 Avg Medium Resp: 0.006\n900: 120: Medium Throughput: 119174.000 Avg Medium Resp: 0.006\n960: 128: Medium Throughput: 129408.000 Avg Medium Resp: 0.006\n1020: 136: Medium Throughput: 134433.000 Avg Medium Resp: 0.007\n1080: 144: Medium Throughput: 143121.000 Avg Medium Resp: 0.007\n1140: 152: Medium Throughput: 144603.000 Avg Medium Resp: 0.007\n1200: 160: Medium Throughput: 148604.000 Avg 
Medium Resp: 0.008\n1260: 168: Medium Throughput: 150274.000 Avg Medium Resp: 0.009\n1320: 176: Medium Throughput: 150581.000 Avg Medium Resp: 0.010\n1380: 184: Medium Throughput: 146912.000 Avg Medium Resp: 0.012\n1440: 192: Medium Throughput: 143945.000 Avg Medium Resp: 0.013\n1500: 200: Medium Throughput: 144029.000 Avg Medium Resp: 0.015\n1560: 208: Medium Throughput: 143468.000 Avg Medium Resp: 0.016\n1620: 216: Medium Throughput: 144367.000 Avg Medium Resp: 0.017\n1680: 224: Medium Throughput: 148340.000 Avg Medium Resp: 0.017\n1740: 232: Medium Throughput: 148842.000 Avg Medium Resp: 0.018\n1800: 240: Medium Throughput: 149533.000 Avg Medium Resp: 0.019\n1860: 248: Medium Throughput: 152334.000 Avg Medium Resp: 0.019\n1920: 256: Medium Throughput: 151521.000 Avg Medium Resp: 0.020\n1980: 264: Medium Throughput: 148961.000 Avg Medium Resp: 0.022\n2040: 272: Medium Throughput: 151270.000 Avg Medium Resp: 0.022\n2100: 280: Medium Throughput: 149783.000 Avg Medium Resp: 0.024\n2160: 288: Medium Throughput: 151743.000 Avg Medium Resp: 0.024\n2220: 296: Medium Throughput: 155190.000 Avg Medium Resp: 0.026\n2280: 304: Medium Throughput: 150955.000 Avg Medium Resp: 0.027\n2340: 312: Medium Throughput: 147118.000 Avg Medium Resp: 0.029\n2400: 320: Medium Throughput: 152768.000 Avg Medium Resp: 0.029\n2460: 328: Medium Throughput: 161044.000 Avg Medium Resp: 0.028\n2520: 336: Medium Throughput: 157926.000 Avg Medium Resp: 0.029\n2580: 344: Medium Throughput: 161005.000 Avg Medium Resp: 0.029\n2640: 352: Medium Throughput: 167274.000 Avg Medium Resp: 0.029\n2700: 360: Medium Throughput: 168253.000 Avg Medium Resp: 0.031\n\n\nWith final vmstats improving but still far from 100%\n kthr memory page disk faults cpu\n r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs us \nsy id\n 38 0 0 46052840 39345096 0 11 0 0 0 0 0 0 0 0 0 134137 290703 \n303518 40 14 45\n 43 0 0 45656456 38882912 23 77 0 0 0 0 0 0 0 0 0 135820 272899 \n300749 40 15 45\n 38 0 0 45650488 38816984 23 80 0 0 0 0 0 0 0 0 0 135009 272767 \n300192 39 15 46\n 47 0 0 46020792 39187688 0 5 0 0 0 0 0 0 0 0 0 140473 285445 \n312826 40 14 46\n 24 0 0 46143984 39326848 9 61 0 0 0 0 0 0 0 0 0 146194 308590 \n328241 40 15 45\n 37 0 0 45465256 38757000 22 74 0 0 0 0 0 0 0 0 0 136835 293971 \n301433 38 14 48\n 35 0 0 46017544 39308072 12 61 0 0 0 0 0 0 0 0 0 142749 312355 \n320592 42 15 43\n 36 0 0 45456000 38744688 11 24 0 0 0 0 0 0 0 0 0 143566 303461 \n317683 41 15 43\n 23 0 0 46007408 39291312 2 22 0 0 0 0 0 0 0 0 0 140246 300061 \n316663 42 15 43\n 20 0 0 46029656 39281704 10 25 0 0 0 0 0 0 0 0 0 147787 291825 \n326387 43 15 42\n 24 0 0 46131016 39288528 2 21 0 0 0 0 0 0 0 0 0 150796 310697 \n335791 43 15 42\n 20 0 0 46109448 39269392 16 67 0 0 0 0 0 0 0 0 0 150075 315517 \n332881 43 16 41\n 30 0 0 45540928 38710376 9 27 0 0 0 0 0 0 0 0 0 155214 316448 \n341472 43 16 40\n 14 0 0 45987496 39270016 0 5 0 0 0 0 0 0 0 0 0 155028 333711 \n344207 44 16 40\n 25 0 0 45981136 39263008 0 10 0 0 0 0 0 0 0 0 0 153968 327343 \n343776 45 16 39\n 54 0 0 46062984 39259936 0 7 0 0 0 0 0 0 0 0 0 153721 315839 \n344732 45 16 39\n 42 0 0 46099704 39252920 0 15 0 0 0 0 0 0 0 0 0 154629 323125 \n348798 45 16 39\n 54 0 0 46068944 39230808 0 8 0 0 0 0 0 0 0 0 0 157166 340265 \n354135 46 17 37\n\nBut the real winner shows up in lockstat where it seems to indicate that \nstress on Waiting from ProcArrayLock is relieved (thought shifting \nsomewhere else which is how lock works):\n\n# ./84_lwlock.d 8042\n\n Lock Id Mode State Count\n WALWriteLock Exclusive 
Acquired 1\n XidGenLock Exclusive Waiting 3\n CLogControlLock Shared Waiting 11\n ProcArrayLock Shared Waiting 39\n CLogControlLock Exclusive Waiting 52\n WALInsertLock Exclusive Waiting 73\n CLogControlLock Shared Acquired 91\n ProcArrayLock Exclusive Acquired 96\n XidGenLock Exclusive Acquired 96\n ProcArrayLock Exclusive Waiting 121\n CLogControlLock Exclusive Acquired 199\n WALInsertLock Exclusive Acquired 310\n FirstBufMappingLock Shared Acquired 408\n FirstLockMgrLock Shared Acquired 618\n ProcArrayLock Shared Acquired 746\n SInvalReadLock Shared Acquired 1542\n\n Lock Id Mode State Combined Time (ns)\n WALInsertLock Acquired 118673\n CLogControlLock Acquired 172130\n FirstBufMappingLock Acquired 177196\n WALWriteLock Exclusive Acquired 208403\n XidGenLock Exclusive Waiting 325989\n FirstLockMgrLock Acquired 2667351\n ProcArrayLock Acquired 8179335\n XidGenLock Exclusive Acquired 8896177\n CLogControlLock Shared Waiting 9680401\n CLogControlLock Exclusive Waiting 19105179\n CLogControlLock Exclusive Acquired 27484249\n SInvalReadLock Acquired 43026960\n FirstBufMappingLock Exclusive Acquired 45232906\n ProcArrayLock Shared Waiting 46741660\n WALInsertLock Exclusive Waiting 50912148\n FirstLockMgrLock Exclusive Acquired 58789829\n WALInsertLock Exclusive Acquired 86653791\n ProcArrayLock Exclusive Waiting 213980787\n ProcArrayLock Exclusive Acquired 270028367\n SInvalReadLock Exclusive Acquired 303044735\n\n\n\n\nSET minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr \nsys wt idl sze\n 0 1 0 147238 159453 8806 370676 89236 71258 98435 0 380008 \n47 17 0 35 64\n 0 6 0 132463 143446 7975 331685 80847 64746 86578 0 329315 \n44 16 0 41 64\n 0 16 0 146655 158621 8987 366866 90756 69953 93786 0 349346 \n49 17 0 34 64\n 0 18 0 151326 163492 8992 377634 92860 72406 98968 4 365121 \n49 17 0 33 64\n 0 2 0 142914 154169 8243 352104 81385 69598 91260 0 340887 \n42 16 0 42 64\n 0 16 0 156755 168962 9080 386475 93072 74775 101465 0 379250 \n47 18 0 36 64\n 0 1 0 152807 165134 8880 379521 90671 75073 99692 0 380412 \n48 18 0 35 64\n 0 1 0 134778 146041 8122 339137 79888 66633 89220 0 342600 \n43 16 0 41 64\n 0 16 0 153014 164789 8834 376117 93000 72743 97644 0 371792 \n48 18 0 35 64\n\n\nNot sure what SInvalReadLock does.. 
need to read up on that..\n\n\n-Jignesh\n\n>\n> 1200: 160: Medium Throughput: 130487.000 Avg Medium Resp: 0.011\n> 1260: 168: Medium Throughput: 123368.000 Avg Medium Resp: 0.013\n> 1320: 176: Medium Throughput: 134649.000 Avg Medium Resp: 0.012\n> 1380: 184: Medium Throughput: 136272.000 Avg Medium Resp: 0.013\n>\n>\n> kthr memory page disk faults cpu\n> r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs \n> us sy id\n> 3 0 0 44710008 39603320 0 135 0 0 0 0 0 0 0 0 0 110564 145678 \n> 211567 33 10 57\n> 5 0 0 44663368 39595008 0 6 0 0 0 0 0 0 0 0 0 108891 143083 \n> 208389 33 10 58\n> 3 0 0 44753496 39593824 0 132 0 0 0 0 0 0 0 0 0 109922 126865 \n> 209869 33 9 57\n> 4 0 0 44788368 39588528 0 7 0 0 0 0 0 0 0 0 0 108680 129073 \n> 208068 33 10 57\n> 2 0 0 44767920 39570592 0 147 0 0 0 0 0 0 0 0 0 106671 142403 \n> 204724 33 10 58\n> 4 0 0 44762656 39563256 0 11 0 0 0 0 0 0 0 0 0 106185 130328 \n> 204551 34 10 57\n> 2 0 0 44674584 39560912 0 148 0 0 0 0 0 0 0 0 0 104757 139147 \n> 201448 32 10 58\n> 1 0 0 44619824 39551024 0 9 0 0 0 0 0 0 0 0 0 103653 142125 \n> 199896 32 10 58\n> 2 0 0 44622480 39552432 0 141 0 0 0 0 0 0 0 0 0 101373 134547 \n> 195553 32 9 58\n> 1 0 0 44739936 39552312 0 11 0 0 0 0 0 0 0 0 0 102932 121742 \n> 198205 33 9 58\n>\n>\n> And lock stats are as follows at about 280 users sampling for a single \n> backend process:\n> # ./84_lwlock.d 29405\n>\n> Lock Id Mode State Count\n> WALWriteLock Exclusive Acquired 1\n> XidGenLock Exclusive Waiting 1\n> CLogControlLock Shared Waiting 3\n> ProcArrayLock Shared Waiting 7\n> CLogControlLock Exclusive Waiting 9\n> WALInsertLock Exclusive Waiting 45\n> CLogControlLock Shared Acquired 52\n> ProcArrayLock Exclusive Waiting 61\n> XidGenLock Exclusive Acquired 96\n> ProcArrayLock Exclusive Acquired 97\n> CLogControlLock Exclusive Acquired 152\n> WALInsertLock Exclusive Acquired 302\n> ProcArrayLock Shared Acquired 729\n> FirstLockMgrLock Shared Acquired 812\n> FirstBufMappingLock Shared Acquired 857\n> SInvalReadLock Shared Acquired 1551\n>\n> Lock Id Mode State Combined Time \n> (ns)\n> WALInsertLock Acquired \n> 89909\n> XidGenLock Exclusive Waiting \n> 101488\n> WALWriteLock Exclusive Acquired \n> 140563\n> CLogControlLock Shared Waiting \n> 354756\n> FirstBufMappingLock Acquired \n> 471438\n> FirstLockMgrLock Acquired \n> 2907141\n> XidGenLock Exclusive Acquired \n> 7450934\n> CLogControlLock Exclusive Waiting \n> 11094716\n> ProcArrayLock Acquired \n> 15495229\n> WALInsertLock Exclusive Waiting \n> 20801169\n> CLogControlLock Exclusive Acquired \n> 21339264\n> SInvalReadLock Acquired \n> 24309991\n> FirstLockMgrLock Exclusive Acquired \n> 39904071\n> FirstBufMappingLock Exclusive Acquired \n> 40826435\n> ProcArrayLock Shared Waiting \n> 86352947\n> WALInsertLock Exclusive Acquired \n> 89336432\n> SInvalReadLock Exclusive Acquired \n> 252574515\n> ProcArrayLock Exclusive Acquired \n> 315064347\n> ProcArrayLock Exclusive Waiting \n> 847806215\n>\n> mpstat outputs is too much so I am doing aggegation by procesor set \n> which is all 64 cpus\n>\n> -bash-3.2$ mpstat -a 10\n>\n> SET minf mjf xcal intr ithr csw icsw migr smtx srw syscl \n> usr sys wt idl sze\n> 0 370 0 118649 127575 7595 244456 43931 62166 8700 0 158929 \n> 38 11 0 50 64\n> 0 167 0 119668 128704 7644 246389 43287 62357 8816 0 161006 \n> 38 11 0 51 64\n> 0 27 0 109461 117433 6997 224514 38562 56446 8171 0 148322 \n> 34 10 0 56 64\n> 0 2 0 122368 131549 7871 250237 39620 61478 9082 0 165995 \n> 36 11 0 52 64\n> 0 0 0 122025 131380 7973 249429 37292 59863 8922 0 
166319 \n> 35 11 0 54 64\n>\n> (quick overview of columns )\n> SET Processor set\n> minf minor faults\n> mjf major faults\n> xcal inter-processor cross-calls\n> intr interrupts\n> ithr interrupts as threads (not counting clock\n> interrupt)\n> csw context switches\n> icsw involuntary context switches\n> migr thread migrations (to another processor)\n> smtx spins on mutexes (lock not acquired on first\n> try)\n> srw spins on readers/writer locks (lock not\n> acquired on first try)\n> syscl system calls\n> usr percent user time\n> sys percent system time\n> wt the I/O wait time is no longer calculated as a\n> percentage of CPU time, and this statistic\n> will always return zero.\n> idl percent idle time\n> sze number of processors in the requested proces-\n> sor set\n>\n>\n> -Jignesh\n>\n>\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Fri, 13 Mar 2009 12:42:30 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/13/09 8:55 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>>> \"Jignesh K. Shah\" <[email protected]> wrote:\n> usr sys wt idl sze\n> 38 11 0 50 64\n\nThe fact that you're maxing out at 50% CPU utilization has me\nwondering -- are there really 64 CPUs here, or are there 32 CPUs with\n\"hyperthreading\" technology (or something conceptually similar)?\n\n-Kevin\n\nIts a sun T1000 or T2000 type box, which are 4 threads per processor core IIRC. Its in his first post:\n\n\"\nUltraSPARC T2 based 1 socket (64 threads) and 2 socket (128 threads)\nservers that Sun sells.\n\"\n\nThese processors use an in-order execution engine and fill the bubbles in the pipelines with SMT (the non-marketing name for hyperthreading).\nThey are rather efficient at it though, moreso than Intel's first stab at it. And Intel's next generation chips hitting the streets in servers in less than a month, have it again.\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n\nOn 3/13/09 8:55 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>>> \"Jignesh K. Shah\" <[email protected]> wrote:\n> usr sys  wt idl sze\n>  38  11   0  50  64\n\nThe fact that you're maxing out at 50% CPU utilization has me\nwondering -- are there really 64 CPUs here, or are there 32 CPUs with\n\"hyperthreading\" technology (or something conceptually similar)?\n\n-Kevin\n\nIts a sun T1000  or T2000 type box, which are 4 threads per processor core IIRC.  Its in his first post:\n\n“\nUltraSPARC T2 based 1 socket (64 threads) and 2 socket (128 threads)\nservers that Sun sells.\n“\n\nThese processors use an in-order execution engine and fill the bubbles in the pipelines with SMT (the non-marketing name for hyperthreading).\nThey are rather efficient at it though, moreso than Intel’s first stab at it.  And Intel’s next generation chips hitting the streets in servers in less than a month, have it again.", "msg_date": "Fri, 13 Mar 2009 09:54:01 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Fri, 13 Mar 2009, Jignesh K. Shah wrote:\n\n> I can use dbt2, dbt3 tests to see how 8.4 performs and compare it with \n> 8.3?\n\nThat would be very helpful. 
There's been some work at updating the DTrace \ncapabilities available too; you might compare what that's reporting too.\n\n> * Visibility map - Reduce Vacuum overhead - (I think I can time vacuum with \n> some usage on both databases)\n\nThe reduced vacuum overhead should show up as just better overall \nperformance. If you can separate out the vacuum specific time that would \nbe great, I don't know that it's essential. If the changes don't just \nmake a plain old speed improvement in your tests that would be a problem \nworth reporting.\n\n> * Parallel pg_restore (Can be tested with a big database dump)\n\nIt would be particularly useful if you could throw some of your 32+ core \nsystems at a parallel restore of something with a bunch of tables. I \ndon't think there have been (m)any tests of that code on Solaris or with \nthat many restore workers yet.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n", "msg_date": "Fri, 13 Mar 2009 13:15:32 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of\n\ttunable fix for scalability of 8.4" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I think that changing the locking behavior is attacking the problem at\n> the wrong level anyway.\n\nRight. By the time a patch here could have any effect, you've already\nlost the game --- having to deschedule and reschedule a process is a\nlarge cost compared to the typical lock hold time for most LWLocks. So\nit would be better to look at how to avoid blocking in the first place.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Mar 2009 13:16:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "\n\nScott Carey wrote:\n> On 3/13/09 8:55 AM, \"Kevin Grittner\" <[email protected]> wrote:\n>\n> >>> \"Jignesh K. Shah\" <[email protected]> wrote:\n> > usr sys wt idl sze\n> > 38 11 0 50 64\n>\n> The fact that you're maxing out at 50% CPU utilization has me\n> wondering -- are there really 64 CPUs here, or are there 32 CPUs with\n> \"hyperthreading\" technology (or something conceptually similar)?\n>\n> -Kevin\n>\n> Its a sun T1000 or T2000 type box, which are 4 threads per processor \n> core IIRC. Its in his first post:\n>\n> \"\n> UltraSPARC T2 based 1 socket (64 threads) and 2 socket (128 threads)\n> servers that Sun sells.\n> \"\n>\n> These processors use an in-order execution engine and fill the bubbles \n> in the pipelines with SMT (the non-marketing name for hyperthreading).\n> They are rather efficient at it though, moreso than Intel's first stab \n> at it. And Intel's next generation chips hitting the streets in \n> servers in less than a month, have it again. \n\n\nThese are UltraSPARC T2 Plus which is 8 threads per core(ala CMT for us) \n.. Though the CPU% reported by vmstat is more based on \"scheduled in \nexecution\" rather than what is executed by \"computing engine\" of the \ncore.. So unless you have scheduled in execution 100% on the thread, it \nwon't be executing ..\nSo if you want to read mpstat right, you may not be executing everything \nthat is shown as executing but you are definitely NOT going to execute \nanything that is not shown as executing.. 
My goal is to reach a level \nwhere we can show PostgreSQL can effectively get to 100% CPU in say \nvmstat,mpstat first...\n\n-Jignesh\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Fri, 13 Mar 2009 13:21:15 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Alan Stange <[email protected]> writes:\n> Gregory Stark wrote:\n>> AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit.\n\n> It's implemented. I'm guessing it's not what you want to see though:\n> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libc/port/gen/posix_fadvise.c\n\nUgh. So apparently, we actually need to special-case Solaris to not\nbelieve that posix_fadvise works, or we'll waste cycles uselessly\ncalling a do-nothing function. Thanks, Sun.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Mar 2009 13:23:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of tunable fix for\n\tscalability of 8.4" }, { "msg_contents": "On 3/13/09 9:42 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n\n\nNow with a modified Fix (not the original one that I proposed but\nsomething that works like a heart valve : Opens and shuts to minimum\ndefault way thus controlling how many waiters are waked up )\n\nIs this the server with 128 thread capability or 64 threads? Idle time is reduced but other locks are hit.\n\nWith 200ms sleeps, no lock change:\nPeak throughput 102000/min @ 1000 users.avg response time is 23ms. Linear ramp up until 900 users @98000/min and 12ms response time.\nAt 2000 users, response time is 229ms and throughput is 90000/min.\n\nWith 200ms sleeps, lock modification 1 (wake all)\nPeak throughput at 1701112/min @2000 users and avg response time 63ms. Plateau starts at 1600 users and 160000/min throughput. As before, plateau starts when response time breaches 20ms, indicating contention.\n\nLets call the above a 65% throughput improvement with large connection count.\n\n-----------------\nNow, with 0ms delay, no threading change:\nThroughput is 136000/min @184 users, response time 13ms. Response time has not jumped too drastically yet, but linear performance increases stopped at about 130 users or so. ProcArrayLock busy, very busy. CPU: 35% user, 11% system, 54% idle\n\nWith 0ms delay, and lock modification 2 (wake some, but not all)\nThroughput is 161000/min @328 users, response time 28ms. At 184 users as before the change, throughput is 147000/min with response time 0.12ms. Performance scales linearly to 144 users, then slows down and slightly increases after that with more concurrency.\nThroughput increase is between 15% and 25%.\n\n\nWhat I see in the above is twofold:\nThis change improves throughput on this machine regardless of connection count.\nThe change seems to help with more connection count and the wait - in fact, it seems to make connection count at this level not be much of a factor at all.\n\nThe two changes tested are different, which clouds things a bit. 
I wonder what the first change would do in the second test case.\n\nIn any event, the second detail above is fascinating - it suggests that these locks are what is responsible for a significant chunk of the overhead of idle or mostly idle connections (making connection pools less useful, though they can never fix mid-transaction pauses which are very common). And in any event, on large multiprocessor systems like this postgres is lock limited regardless of using a connection pool or not.", "msg_date": "Fri, 13 Mar 2009 10:29:51 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/13/09 10:16 AM, \"Tom Lane\" <[email protected]> wrote:\n\nRobert Haas <[email protected]> writes:\n> I think that changing the locking behavior is attacking the problem at\n> the wrong level anyway.\n\nRight. 
By the time a patch here could have any effect, you've already\nlost the game --- having to deschedule and reschedule a process is a\nlarge cost compared to the typical lock hold time for most LWLocks. So\nit would be better to look at how to avoid blocking in the first place.\n\n regards, tom lane\n\nIn an earlier post in this thread I mentioned the three main ways to solve scalability problems with respect to locking:\nAvoid locking (atomics, copy-on-write, etc), finer grained locks (data structure partitioning, etc) and optimizing the locks themselves.\n\nI don't know which of the above has the greatest opportunity in postgres. My base assumption was that lock avoidance was something that had been worked on significantly already, and that since lock algorithm optimization is ridiculously hardware dependent, there was probably low hanging fruit there.\n\nMessing with unfair locks does not have to be the solution to the problem, but it can be a means to an end:\nIt takes less time and lines of code to change the lock and see what the benefit less locking would cause, than it does to change the code to avoid the locks.\n\nSo what we have here, is a tool - not necessarily what you want to use in production, but a handy tool. If you switch to unfair locks, and things speed up, you're lock bound and avoiding those locks will make things faster. The Dtrace data is also a great tool, that is showing the same thing but without the ability to know how large or small the gain is or being sure what the next bottleneck will be.
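To make the 'change the lock and see what it buys you' idea concrete, here is a
deliberately simplified sketch of the two wake-up policies being compared in this
thread. This is illustrative C only -- it is not the actual lwlock.c code, and the
Waiter/wake_one names are invented for the example:

#include <stdbool.h>
#include <stddef.h>

typedef struct Waiter
{
    bool            wants_exclusive;   /* shared or exclusive request */
    bool            released;          /* set once this waiter has been woken */
    struct Waiter  *next;              /* FIFO wait queue, head waited longest */
} Waiter;

/* Stand-in for the real wakeup (flag + semaphore release in the backend). */
static void
wake_one(Waiter *w)
{
    w->released = true;
}

/* Queued policy (roughly the default idea): wake the head waiter, and if it
 * wanted a shared lock, also wake the run of shared waiters right behind it. */
static void
release_queued(Waiter *head)
{
    Waiter *w;

    if (head == NULL)
        return;
    wake_one(head);
    if (!head->wants_exclusive)
        for (w = head->next; w != NULL && !w->wants_exclusive; w = w->next)
            wake_one(w);
}

/* 'Thundering herd' experiment: wake every waiter and let them re-contend for
 * the lock; the ones that lose the race simply go back to sleep in the queue. */
static void
release_wake_all(Waiter *head)
{
    Waiter *w;

    for (w = head; w != NULL; w = w->next)
        wake_one(w);
}

int
main(void)
{
    Waiter w3 = { true,  false, NULL };   /* exclusive waiter at the tail */
    Waiter w2 = { false, false, &w3 };    /* shared waiter */
    Waiter w1 = { false, false, &w2 };    /* shared waiter at the head */

    release_queued(&w1);        /* wakes w1 and w2, leaves w3 sleeping */
    /* release_wake_all(&w1) would wake all three */
    return 0;
}

The trade-off is exactly what the numbers above are probing: the second policy
spends some CPU on re-acquisition races, but it avoids leaving runnable backends
parked behind a single exclusive waiter.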
", "msg_date": "Fri, 13 Mar 2009 10:34:27 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "On 3/13/09 10:29 AM, \"Scott Carey\" <[email protected]> wrote:\n\n\n-----------------\nNow, with 0ms delay, no threading change:\nThroughput is 136000/min @184 users, response time 13ms.  Response time has not jumped too drastically yet, but linear performance increases stopped at about 130 users or so. ProcArrayLock busy, very busy.  CPU: 35% user, 11% system, 54% idle\n\nWith 0ms delay, and lock modification 2 (wake some, but not all)\nThroughput is 161000/min @328 users, response time 28ms.  At 184 users as before the change, throughput is 147000/min with response time 0.12ms.  Performance scales linearly to 144 users, then slows down and slightly increases after that with more concurrency.\nThroughput increase is between 15% and 25%. \n\n\nForgot some data:  with the second test above, CPU: 48% user, 18% sys, 35% idle.   
CPU increased from 46% used in the first test to 65% used, the corresponding throughput increase was not as large, but that is expected on an 8-threads per core server since memory bandwidth and cache resources at a minimum are shared and only trivial tasks can scale 100%.\n\nBased on the above, I would guess that attaining closer to 100% utilization (it's hard to get past 90% with that many cores no matter what), will probably give another 10 to 15% improvement at most, to maybe 180000/min throughput.\n\nIt's also rather interesting that the 2000 connection case with wait times gets 170000/min throughput and beats the 328 users with 0 delay result above. I suspect the 'wake all' version is just faster. I would love to see a 'wake all shared, leave exclusives at front of queue' version, since that would not allow lock starvation.", "msg_date": "Fri, 13 Mar 2009 10:48:38 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n> Robert Haas <[email protected]> writes:\n>> I think that changing the locking behavior is attacking the problem\n>> at the wrong level anyway.\n> \n> Right. By the time a patch here could have any effect, you've\n> already lost the game --- having to deschedule and reschedule a\n> process is a large cost compared to the typical lock hold time for\n> most LWLocks. So it would be better to look at how to avoid\n> blocking in the first place.\n \nThat's what motivated my request for a profile of the \"80 clients with\nzero wait\" case. If all data access is in RAM, why can't 80 processes\nkeep 64 threads (on 8 processors) busy? Does anybody else think\nthat's an interesting question, or am I off in left field here?\n \n-Kevin\n", "msg_date": "Fri, 13 Mar 2009 13:02:24 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "It's an interesting question, but the answer is most likely simply that the client can't keep up. And in the real world, no matter how incredible your connection pool is, there will be some inefficiency, there will be some network delay, there will be some client side time, etc.\n\nI'm still not sure if we are dealing with a 64 or 128 thread machine too.\n\nThe average query finishes in 6ms according to the result, so any bit of network latency will multiply the number of connections needed to saturate, and any small delay in the client between queries, or going through a result set, will make it hard to have a 100% duty cycle.\n\nThe test result with zero delay stopped linear increase in performance at about 128 users and 7ms average query response time, at ~2100 queries per second. 
Idle count and idle in transaction count would also be hugely useful to be able to track as a dynamic statistic or counter for load testing. For all of these, an average value over the last second or so is much better than an instantaneous count for these purposes.\n\n\nOn 3/13/09 11:02 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\nTom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> I think that changing the locking behavior is attacking the problem\n>> at the wrong level anyway.\n>\n> Right. By the time a patch here could have any effect, you've\n> already lost the game --- having to deschedule and reschedule a\n> process is a large cost compared to the typical lock hold time for\n> most LWLocks. So it would be better to look at how to avoid\n> blocking in the first place.\n\nThat's what motivated my request for a profile of the \"80 clients with\nzero wait\" case. If all data access is in RAM, why can't 80 processes\nkeep 64 threads (on 8 processors) busy? Does anybody else think\nthat's an interesting question, or am I off in left field here?\n\n-Kevin\n\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n\nIts an interesting question, but the answer is most likely simply that the client can’t keep up.  And in the real world, no matter how incredible your connection pool is, there will be some inefficiency, there will be some network delay, there will be some client side time, etc.\n\nI’m still not sure if we are dealing with a 64 or 128 thread machine too. \n\nThe average query finishes in 6ms according to the result., so any bit of network latency will multiply the number of connections needed to saturate, and any small delay in the client between queries, or going through a result set, will make it hard to have a 100% duty cycle. \n\nThe test result with zero delay stopped linear increase in performance at about 128 users and 7ms average query response time, at ~2100 queries per second.  If this is a 128 thread machine, then that means the clients are pretty fast.  If its a 64 thread machine, it means the clients can provide about a 50% duty cycle time, which is not horrible.\nThis is 16.5 queries per second per client, or an average time per (query plus client delay) of 1/16.5 = ~6ms.\nThat is to say, either this is a 128 thread machine, or the test harness is measuring average response time and including client side delay and thus there is a 50% duty cycle time and ~3ms client delay per request.\n\nWhat would really help is a counter that tracks active postgres connection count so one can look at that compared to the total connection count.  Idle count and idle in transaction count would also be hugely useful to be able to track as a dynamic statistic or counter for load testing.  For all of these, an average value over the last second or so is much better than an instantaneous count for these purposes.\n\n\nOn 3/13/09 11:02 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\nTom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> I think that changing the locking behavior is attacking the problem\n>> at the wrong level anyway.\n>\n> Right.  By the time a patch here could have any effect, you've\n> already lost the game --- having to deschedule and reschedule a\n> process is a large cost compared to the typical lock hold time for\n> most LWLocks.  
So it would be better to look at how to avoid\n> blocking in the first place.\n\nThat's what motivated my request for a profile of the \"80 clients with\nzero wait\" case.  If all data access is in RAM, why can't 80 processes\nkeep 64 threads (on 8 processors) busy?  Does anybody else think\nthat's an interesting question, or am I off in left field here?\n\n-Kevin", "msg_date": "Fri, 13 Mar 2009 11:38:57 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Somebody else asked a question: This is actually a two socket machine \n(128) threads but one socket is disabled by the OS so only 64-threads \nare available... The idea being let me choke one socket first with 100% \nCPU ..\n> Forgot some data: with the second test above, CPU: 48% user, 18% sys, \n> 35% idle. CPU increased from 46% used in the first test to 65% used, \n> the corresponding throughput increase was not as large, but that is \n> expected on an 8-threads per core server since memory bandwidth and \n> cache resources at a minimum are shared and only trivial tasks can \n> scale 100%.\n>\n> Based on the above, I would guess that attaining closer to 100% \n> utilization (its hard to get past 90% with that many cores no matter \n> what), will probablyl give another 10 to 15% improvement at most, to \n> maybe 180000/min throughput.\n>\n> Its also rather interesting that the 2000 connection case with wait \n> times gets 170000/min throughput and beats the 328 users with 0 delay \n> result above. I suspect the �wake all� version is just faster. I would \n> love to see a �wake all shared, leave exclusives at front of queue� \n> version, since that would not allow lock starvation. \nConsidering that there is one link list it is just easier to wake the \nsequential selected few or wake them all up.. If I go through the list \ntrying to wake all the shared ones then I essentially need to have \nanother link list to collect all the exclusives ...\n\nI will retry the thundering herd of waking all waiters irrespective of \nshared, exclusive and see how that behaves.. I think the biggest benefit \nis when the process is waked up and the process in reality is already on \nthe cpu checking the field to see whether last guy who released the lock \nis allowing him to wake up or not.\n\nStill I will try some more experiments.. Definitely reducing time in \n\"Waiting\" lock waits benefits and making \"Acquired\" times more efficient \nresults in more tpm per user.\n\nI will try another run with plain wake up all and see with the same \nparameters (0 think time) that test behaves..\n\n-Jignesh\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Fri, 13 Mar 2009 14:48:33 -0400", "msg_from": "\"Jignesh K. 
Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Redid the test with - waking up all waiters irrespective of shared, \nexclusive\n\n480: 64: Medium Throughput: 66688.000 Avg Medium Resp: 0.005\n540: 72: Medium Throughput: 74355.000 Avg Medium Resp: 0.005\n600: 80: Medium Throughput: 82920.000 Avg Medium Resp: 0.005\n660: 88: Medium Throughput: 91466.000 Avg Medium Resp: 0.005\n720: 96: Medium Throughput: 98749.000 Avg Medium Resp: 0.006\n780: 104: Medium Throughput: 107365.000 Avg Medium Resp: 0.006\n840: 112: Medium Throughput: 114121.000 Avg Medium Resp: 0.006\n900: 120: Medium Throughput: 119556.000 Avg Medium Resp: 0.006\n960: 128: Medium Throughput: 128544.000 Avg Medium Resp: 0.006\n1020: 136: Medium Throughput: 134725.000 Avg Medium Resp: 0.007\n1080: 144: Medium Throughput: 138817.000 Avg Medium Resp: 0.007\n1140: 152: Medium Throughput: 141482.000 Avg Medium Resp: 0.008\n1200: 160: Medium Throughput: 149430.000 Avg Medium Resp: 0.008\n1260: 168: Medium Throughput: 145104.000 Avg Medium Resp: 0.009\n1320: 176: Medium Throughput: 143059.000 Avg Medium Resp: 0.011\n1380: 184: Medium Throughput: 147687.000 Avg Medium Resp: 0.011\nlight: customer: No result set for custid 0\n1440: 192: Medium Throughput: 148081.000 Avg Medium Resp: 0.013\nlight: customer: No result set for custid 0\n1500: 200: Medium Throughput: 145452.000 Avg Medium Resp: 0.014\n1560: 208: Medium Throughput: 146057.000 Avg Medium Resp: 0.015\n1620: 216: Medium Throughput: 148456.000 Avg Medium Resp: 0.016\n1680: 224: Medium Throughput: 153088.000 Avg Medium Resp: 0.016\n1740: 232: Medium Throughput: 151263.000 Avg Medium Resp: 0.017\n1800: 240: Medium Throughput: 154146.000 Avg Medium Resp: 0.017\n1860: 248: Medium Throughput: 155520.000 Avg Medium Resp: 0.018\n1920: 256: Medium Throughput: 154696.000 Avg Medium Resp: 0.019\n1980: 264: Medium Throughput: 155391.000 Avg Medium Resp: 0.020\nlight: customer: No result set for custid 0\n2040: 272: Medium Throughput: 156086.000 Avg Medium Resp: 0.021\n2100: 280: Medium Throughput: 150085.000 Avg Medium Resp: 0.023\n2160: 288: Medium Throughput: 152253.000 Avg Medium Resp: 0.024\n2220: 296: Medium Throughput: 155203.000 Avg Medium Resp: 0.025\n2280: 304: Medium Throughput: 157962.000 Avg Medium Resp: 0.025\nlight: customer: No result set for custid 0\n2340: 312: Medium Throughput: 157270.000 Avg Medium Resp: 0.026\n2400: 320: Medium Throughput: 161298.000 Avg Medium Resp: 0.027\n2460: 328: Medium Throughput: 161527.000 Avg Medium Resp: 0.028\n2520: 336: Medium Throughput: 163569.000 Avg Medium Resp: 0.028\n2580: 344: Medium Throughput: 166190.000 Avg Medium Resp: 0.028\n2640: 352: Medium Throughput: 168516.000 Avg Medium Resp: 0.029\n2700: 360: Medium Throughput: 171417.000 Avg Medium Resp: 0.029\n2760: 368: Medium Throughput: 173350.000 Avg Medium Resp: 0.029\n2820: 376: Medium Throughput: 155672.000 Avg Medium Resp: 0.035\n2880: 384: Medium Throughput: 172821.000 Avg Medium Resp: 0.031\n2940: 392: Medium Throughput: 171819.000 Avg Medium Resp: 0.033\n3000: 400: Medium Throughput: 171388.000 Avg Medium Resp: 0.033\n3060: 408: Medium Throughput: 172949.000 Avg Medium Resp: 0.034\n3120: 416: Medium Throughput: 172638.000 Avg Medium Resp: 0.036\n3180: 424: Medium Throughput: 172310.000 Avg Medium Resp: 0.036\n\n(My timed test made it end here..)\n\nvmstat seems similar to wakeup some\nkthr memory page disk faults cpu\n r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs us \nsy 
id\n 63 0 0 45535728 38689856 0 14 0 0 0 0 0 0 0 0 0 163318 334225 \n360179 47 17 36\n 85 0 0 45436736 38690760 0 6 0 0 0 0 0 0 0 0 0 165536 347462 \n365987 47 17 36\n 59 0 0 45405184 38681752 0 11 0 0 0 0 0 0 0 0 0 155153 326182 \n345527 47 16 37\n 53 0 0 45393816 38673344 0 6 0 0 0 0 0 0 0 0 0 152752 317851 \n340737 47 16 37\n 66 0 0 45378312 38651920 0 11 0 0 0 0 0 0 0 0 0 150979 304350 \n336915 47 16 38\n 67 0 0 45489520 38639664 0 5 0 0 0 0 0 0 0 0 0 157188 318958 \n351905 47 16 37\n 82 0 0 45483600 38633344 0 10 0 0 0 0 0 0 0 0 0 168797 348619 \n375827 47 17 36\n 68 0 0 45463008 38614432 0 9 0 0 0 0 0 0 0 0 0 173020 376594 \n385370 47 18 35\n 54 0 0 45451376 38603792 0 13 0 0 0 0 0 0 0 0 0 161891 342522 \n364286 48 17 35\n 41 0 0 45356544 38605976 0 5 0 0 0 0 0 0 0 0 0 167250 358320 \n372469 47 17 36\n 27 0 0 45323472 38596952 0 11 0 0 0 0 0 0 0 0 0 165099 344695 \n364256 48 17 35\n\nmissed taking mpstat\nalso dtrace shows that \"Waiting\" for procarray is not the most expensive \nwait.\n-bash-3.2# ./84_lwlock.d 17071\n\n Lock Id Mode State Count\n CLogControlLock Shared Waiting 4\n CLogControlLock Exclusive Waiting 32\n ProcArrayLock Shared Waiting 35\n CLogControlLock Shared Acquired 47\n WALInsertLock Exclusive Waiting 53\n ProcArrayLock Exclusive Waiting 104\n XidGenLock Exclusive Acquired 116\n ProcArrayLock Exclusive Acquired 117\n CLogControlLock Exclusive Acquired 176\n WALInsertLock Exclusive Acquired 370\n FirstLockMgrLock Shared Acquired 793\n FirstBufMappingLock Shared Acquired 799\n ProcArrayLock Shared Acquired 882\n SInvalReadLock Shared Acquired 1827\n\n Lock Id Mode State Combined Time (ns)\n WALInsertLock Acquired 52915\n CLogControlLock Acquired 78332\n XidGenLock Acquired 103026\n FirstLockMgrLock Acquired 392836\n FirstBufMappingLock Acquired 2919896\n CLogControlLock Shared Waiting 5342211\n CLogControlLock Exclusive Waiting 9172692\n ProcArrayLock Shared Waiting 18186546\n ProcArrayLock Acquired 22478607\n XidGenLock Exclusive Acquired 26561444\n SInvalReadLock Acquired 29012891\n CLogControlLock Exclusive Acquired 30490159\n WALInsertLock Exclusive Waiting 35055294\n FirstLockMgrLock Exclusive Acquired 47077668\n FirstBufMappingLock Exclusive Acquired 47460381\n WALInsertLock Exclusive Acquired 99288648\n ProcArrayLock Exclusive Waiting 104221100\n ProcArrayLock Exclusive Acquired 356644807\n SInvalReadLock Exclusive Acquired 357530794\n\n\n\nSo clearly even waking up some more exclusives than just 1 seems to help \nscalability improve (though actual improvement mileage varies but there \nis some positive improvement).\n\nOne more change that I can think of doing is a minor change where we \nwake all sequential shared waiters but only 1 exclusive waiter.. I am \ngoing to change that to ... whatever sequential you get wake them all \nup.. so in essense it does a similar heart valve type approach of doing \nlittle bursts rather than tie them to 1 exclusive only.\n\n\n-Jignesh\n\n\nJignesh K. 
Shah wrote:\n>\n>\n> Now with a modified Fix (not the original one that I proposed but \n> something that works like a heart valve : Opens and shuts to minimum \n> default way thus controlling how many waiters are waked up )\n>\n> Time:Users:throughput: Reponse\n> 60: 8: Medium Throughput: 7774.000 Avg Medium Resp: 0.004\n> 120: 16: Medium Throughput: 16874.000 Avg Medium Resp: 0.004\n> 180: 24: Medium Throughput: 25159.000 Avg Medium Resp: 0.004\n> 240: 32: Medium Throughput: 33216.000 Avg Medium Resp: 0.005\n> 300: 40: Medium Throughput: 42418.000 Avg Medium Resp: 0.005\n> 360: 48: Medium Throughput: 49655.000 Avg Medium Resp: 0.005\n> 420: 56: Medium Throughput: 58149.000 Avg Medium Resp: 0.005\n> 480: 64: Medium Throughput: 66558.000 Avg Medium Resp: 0.005\n> 540: 72: Medium Throughput: 74474.000 Avg Medium Resp: 0.005\n> 600: 80: Medium Throughput: 82241.000 Avg Medium Resp: 0.005\n> 660: 88: Medium Throughput: 90336.000 Avg Medium Resp: 0.005\n> 720: 96: Medium Throughput: 99101.000 Avg Medium Resp: 0.006\n> 780: 104: Medium Throughput: 106028.000 Avg Medium Resp: 0.006\n> 840: 112: Medium Throughput: 113196.000 Avg Medium Resp: 0.006\n> 900: 120: Medium Throughput: 119174.000 Avg Medium Resp: 0.006\n> 960: 128: Medium Throughput: 129408.000 Avg Medium Resp: 0.006\n> 1020: 136: Medium Throughput: 134433.000 Avg Medium Resp: 0.007\n> 1080: 144: Medium Throughput: 143121.000 Avg Medium Resp: 0.007\n> 1140: 152: Medium Throughput: 144603.000 Avg Medium Resp: 0.007\n> 1200: 160: Medium Throughput: 148604.000 Avg Medium Resp: 0.008\n> 1260: 168: Medium Throughput: 150274.000 Avg Medium Resp: 0.009\n> 1320: 176: Medium Throughput: 150581.000 Avg Medium Resp: 0.010\n> 1380: 184: Medium Throughput: 146912.000 Avg Medium Resp: 0.012\n> 1440: 192: Medium Throughput: 143945.000 Avg Medium Resp: 0.013\n> 1500: 200: Medium Throughput: 144029.000 Avg Medium Resp: 0.015\n> 1560: 208: Medium Throughput: 143468.000 Avg Medium Resp: 0.016\n> 1620: 216: Medium Throughput: 144367.000 Avg Medium Resp: 0.017\n> 1680: 224: Medium Throughput: 148340.000 Avg Medium Resp: 0.017\n> 1740: 232: Medium Throughput: 148842.000 Avg Medium Resp: 0.018\n> 1800: 240: Medium Throughput: 149533.000 Avg Medium Resp: 0.019\n> 1860: 248: Medium Throughput: 152334.000 Avg Medium Resp: 0.019\n> 1920: 256: Medium Throughput: 151521.000 Avg Medium Resp: 0.020\n> 1980: 264: Medium Throughput: 148961.000 Avg Medium Resp: 0.022\n> 2040: 272: Medium Throughput: 151270.000 Avg Medium Resp: 0.022\n> 2100: 280: Medium Throughput: 149783.000 Avg Medium Resp: 0.024\n> 2160: 288: Medium Throughput: 151743.000 Avg Medium Resp: 0.024\n> 2220: 296: Medium Throughput: 155190.000 Avg Medium Resp: 0.026\n> 2280: 304: Medium Throughput: 150955.000 Avg Medium Resp: 0.027\n> 2340: 312: Medium Throughput: 147118.000 Avg Medium Resp: 0.029\n> 2400: 320: Medium Throughput: 152768.000 Avg Medium Resp: 0.029\n> 2460: 328: Medium Throughput: 161044.000 Avg Medium Resp: 0.028\n> 2520: 336: Medium Throughput: 157926.000 Avg Medium Resp: 0.029\n> 2580: 344: Medium Throughput: 161005.000 Avg Medium Resp: 0.029\n> 2640: 352: Medium Throughput: 167274.000 Avg Medium Resp: 0.029\n> 2700: 360: Medium Throughput: 168253.000 Avg Medium Resp: 0.031\n>\n>\n> With final vmstats improving but still far from 100%\n> kthr memory page disk faults cpu\n> r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs \n> us sy id\n> 38 0 0 46052840 39345096 0 11 0 0 0 0 0 0 0 0 0 134137 290703 \n> 303518 40 14 45\n> 43 0 0 45656456 38882912 23 77 0 0 0 0 0 0 0 0 0 135820 
272899 \n> 300749 40 15 45\n> 38 0 0 45650488 38816984 23 80 0 0 0 0 0 0 0 0 0 135009 272767 \n> 300192 39 15 46\n> 47 0 0 46020792 39187688 0 5 0 0 0 0 0 0 0 0 0 140473 285445 \n> 312826 40 14 46\n> 24 0 0 46143984 39326848 9 61 0 0 0 0 0 0 0 0 0 146194 308590 \n> 328241 40 15 45\n> 37 0 0 45465256 38757000 22 74 0 0 0 0 0 0 0 0 0 136835 293971 \n> 301433 38 14 48\n> 35 0 0 46017544 39308072 12 61 0 0 0 0 0 0 0 0 0 142749 312355 \n> 320592 42 15 43\n> 36 0 0 45456000 38744688 11 24 0 0 0 0 0 0 0 0 0 143566 303461 \n> 317683 41 15 43\n> 23 0 0 46007408 39291312 2 22 0 0 0 0 0 0 0 0 0 140246 300061 \n> 316663 42 15 43\n> 20 0 0 46029656 39281704 10 25 0 0 0 0 0 0 0 0 0 147787 291825 \n> 326387 43 15 42\n> 24 0 0 46131016 39288528 2 21 0 0 0 0 0 0 0 0 0 150796 310697 \n> 335791 43 15 42\n> 20 0 0 46109448 39269392 16 67 0 0 0 0 0 0 0 0 0 150075 315517 \n> 332881 43 16 41\n> 30 0 0 45540928 38710376 9 27 0 0 0 0 0 0 0 0 0 155214 316448 \n> 341472 43 16 40\n> 14 0 0 45987496 39270016 0 5 0 0 0 0 0 0 0 0 0 155028 333711 \n> 344207 44 16 40\n> 25 0 0 45981136 39263008 0 10 0 0 0 0 0 0 0 0 0 153968 327343 \n> 343776 45 16 39\n> 54 0 0 46062984 39259936 0 7 0 0 0 0 0 0 0 0 0 153721 315839 \n> 344732 45 16 39\n> 42 0 0 46099704 39252920 0 15 0 0 0 0 0 0 0 0 0 154629 323125 \n> 348798 45 16 39\n> 54 0 0 46068944 39230808 0 8 0 0 0 0 0 0 0 0 0 157166 340265 \n> 354135 46 17 37\n>\n> But the real winner shows up in lockstat where it seems to indicate \n> that stress on Waiting from ProcArrayLock is relieved (thought \n> shifting somewhere else which is how lock works):\n>\n> # ./84_lwlock.d 8042\n>\n> Lock Id Mode State Count\n> WALWriteLock Exclusive Acquired 1\n> XidGenLock Exclusive Waiting 3\n> CLogControlLock Shared Waiting 11\n> ProcArrayLock Shared Waiting 39\n> CLogControlLock Exclusive Waiting 52\n> WALInsertLock Exclusive Waiting 73\n> CLogControlLock Shared Acquired 91\n> ProcArrayLock Exclusive Acquired 96\n> XidGenLock Exclusive Acquired 96\n> ProcArrayLock Exclusive Waiting 121\n> CLogControlLock Exclusive Acquired 199\n> WALInsertLock Exclusive Acquired 310\n> FirstBufMappingLock Shared Acquired 408\n> FirstLockMgrLock Shared Acquired 618\n> ProcArrayLock Shared Acquired 746\n> SInvalReadLock Shared Acquired 1542\n>\n> Lock Id Mode State Combined Time \n> (ns)\n> WALInsertLock Acquired \n> 118673\n> CLogControlLock Acquired \n> 172130\n> FirstBufMappingLock Acquired \n> 177196\n> WALWriteLock Exclusive Acquired \n> 208403\n> XidGenLock Exclusive Waiting \n> 325989\n> FirstLockMgrLock Acquired \n> 2667351\n> ProcArrayLock Acquired \n> 8179335\n> XidGenLock Exclusive Acquired \n> 8896177\n> CLogControlLock Shared Waiting \n> 9680401\n> CLogControlLock Exclusive Waiting \n> 19105179\n> CLogControlLock Exclusive Acquired \n> 27484249\n> SInvalReadLock Acquired \n> 43026960\n> FirstBufMappingLock Exclusive Acquired \n> 45232906\n> ProcArrayLock Shared Waiting \n> 46741660\n> WALInsertLock Exclusive Waiting \n> 50912148\n> FirstLockMgrLock Exclusive Acquired \n> 58789829\n> WALInsertLock Exclusive Acquired \n> 86653791\n> ProcArrayLock Exclusive Waiting \n> 213980787\n> ProcArrayLock Exclusive Acquired \n> 270028367\n> SInvalReadLock Exclusive Acquired \n> 303044735\n>\n>\n>\n>\n> SET minf mjf xcal intr ithr csw icsw migr smtx srw syscl \n> usr sys wt idl sze\n> 0 1 0 147238 159453 8806 370676 89236 71258 98435 0 380008 \n> 47 17 0 35 64\n> 0 6 0 132463 143446 7975 331685 80847 64746 86578 0 329315 \n> 44 16 0 41 64\n> 0 16 0 146655 158621 8987 366866 90756 69953 93786 0 349346 \n> 49 17 0 34 
64\n> 0 18 0 151326 163492 8992 377634 92860 72406 98968 4 365121 \n> 49 17 0 33 64\n> 0 2 0 142914 154169 8243 352104 81385 69598 91260 0 340887 \n> 42 16 0 42 64\n> 0 16 0 156755 168962 9080 386475 93072 74775 101465 0 379250 \n> 47 18 0 36 64\n> 0 1 0 152807 165134 8880 379521 90671 75073 99692 0 380412 \n> 48 18 0 35 64\n> 0 1 0 134778 146041 8122 339137 79888 66633 89220 0 342600 \n> 43 16 0 41 64\n> 0 16 0 153014 164789 8834 376117 93000 72743 97644 0 371792 \n> 48 18 0 35 64\n>\n>\n> Not sure what SInvalReadLock does.. need to read up on that..\n>\n>\n> -Jignesh\n>\n>>\n>> 1200: 160: Medium Throughput: 130487.000 Avg Medium Resp: 0.011\n>> 1260: 168: Medium Throughput: 123368.000 Avg Medium Resp: 0.013\n>> 1320: 176: Medium Throughput: 134649.000 Avg Medium Resp: 0.012\n>> 1380: 184: Medium Throughput: 136272.000 Avg Medium Resp: 0.013\n>>\n>>\n>> kthr memory page disk faults \n>> cpu\n>> r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs \n>> us sy id\n>> 3 0 0 44710008 39603320 0 135 0 0 0 0 0 0 0 0 0 110564 145678 \n>> 211567 33 10 57\n>> 5 0 0 44663368 39595008 0 6 0 0 0 0 0 0 0 0 0 108891 143083 \n>> 208389 33 10 58\n>> 3 0 0 44753496 39593824 0 132 0 0 0 0 0 0 0 0 0 109922 126865 \n>> 209869 33 9 57\n>> 4 0 0 44788368 39588528 0 7 0 0 0 0 0 0 0 0 0 108680 129073 \n>> 208068 33 10 57\n>> 2 0 0 44767920 39570592 0 147 0 0 0 0 0 0 0 0 0 106671 142403 \n>> 204724 33 10 58\n>> 4 0 0 44762656 39563256 0 11 0 0 0 0 0 0 0 0 0 106185 130328 \n>> 204551 34 10 57\n>> 2 0 0 44674584 39560912 0 148 0 0 0 0 0 0 0 0 0 104757 139147 \n>> 201448 32 10 58\n>> 1 0 0 44619824 39551024 0 9 0 0 0 0 0 0 0 0 0 103653 142125 \n>> 199896 32 10 58\n>> 2 0 0 44622480 39552432 0 141 0 0 0 0 0 0 0 0 0 101373 134547 \n>> 195553 32 9 58\n>> 1 0 0 44739936 39552312 0 11 0 0 0 0 0 0 0 0 0 102932 121742 \n>> 198205 33 9 58\n>>\n>>\n>> And lock stats are as follows at about 280 users sampling for a \n>> single backend process:\n>> # ./84_lwlock.d 29405\n>>\n>> Lock Id Mode State Count\n>> WALWriteLock Exclusive Acquired 1\n>> XidGenLock Exclusive Waiting 1\n>> CLogControlLock Shared Waiting 3\n>> ProcArrayLock Shared Waiting 7\n>> CLogControlLock Exclusive Waiting 9\n>> WALInsertLock Exclusive Waiting 45\n>> CLogControlLock Shared Acquired 52\n>> ProcArrayLock Exclusive Waiting 61\n>> XidGenLock Exclusive Acquired 96\n>> ProcArrayLock Exclusive Acquired 97\n>> CLogControlLock Exclusive Acquired 152\n>> WALInsertLock Exclusive Acquired 302\n>> ProcArrayLock Shared Acquired 729\n>> FirstLockMgrLock Shared Acquired 812\n>> FirstBufMappingLock Shared Acquired 857\n>> SInvalReadLock Shared Acquired 1551\n>>\n>> Lock Id Mode State Combined Time \n>> (ns)\n>> WALInsertLock Acquired \n>> 89909\n>> XidGenLock Exclusive Waiting \n>> 101488\n>> WALWriteLock Exclusive Acquired \n>> 140563\n>> CLogControlLock Shared Waiting \n>> 354756\n>> FirstBufMappingLock Acquired \n>> 471438\n>> FirstLockMgrLock Acquired \n>> 2907141\n>> XidGenLock Exclusive Acquired \n>> 7450934\n>> CLogControlLock Exclusive Waiting \n>> 11094716\n>> ProcArrayLock Acquired \n>> 15495229\n>> WALInsertLock Exclusive Waiting \n>> 20801169\n>> CLogControlLock Exclusive Acquired \n>> 21339264\n>> SInvalReadLock Acquired \n>> 24309991\n>> FirstLockMgrLock Exclusive Acquired \n>> 39904071\n>> FirstBufMappingLock Exclusive Acquired \n>> 40826435\n>> ProcArrayLock Shared Waiting \n>> 86352947\n>> WALInsertLock Exclusive Acquired \n>> 89336432\n>> SInvalReadLock Exclusive Acquired \n>> 252574515\n>> ProcArrayLock Exclusive Acquired \n>> 315064347\n>> 
ProcArrayLock Exclusive Waiting \n>> 847806215\n>>\n>> mpstat outputs is too much so I am doing aggegation by procesor set \n>> which is all 64 cpus\n>>\n>> -bash-3.2$ mpstat -a 10\n>>\n>> SET minf mjf xcal intr ithr csw icsw migr smtx srw syscl \n>> usr sys wt idl sze\n>> 0 370 0 118649 127575 7595 244456 43931 62166 8700 0 158929 \n>> 38 11 0 50 64\n>> 0 167 0 119668 128704 7644 246389 43287 62357 8816 0 161006 \n>> 38 11 0 51 64\n>> 0 27 0 109461 117433 6997 224514 38562 56446 8171 0 148322 \n>> 34 10 0 56 64\n>> 0 2 0 122368 131549 7871 250237 39620 61478 9082 0 165995 \n>> 36 11 0 52 64\n>> 0 0 0 122025 131380 7973 249429 37292 59863 8922 0 166319 \n>> 35 11 0 54 64\n>>\n>> (quick overview of columns )\n>> SET Processor set\n>> minf minor faults\n>> mjf major faults\n>> xcal inter-processor cross-calls\n>> intr interrupts\n>> ithr interrupts as threads (not counting clock\n>> interrupt)\n>> csw context switches\n>> icsw involuntary context switches\n>> migr thread migrations (to another processor)\n>> smtx spins on mutexes (lock not acquired on first\n>> try)\n>> srw spins on readers/writer locks (lock not\n>> acquired on first try)\n>> syscl system calls\n>> usr percent user time\n>> sys percent system time\n>> wt the I/O wait time is no longer calculated as a\n>> percentage of CPU time, and this statistic\n>> will always return zero.\n>> idl percent idle time\n>> sze number of processors in the requested proces-\n>> sor set\n>>\n>>\n>> -Jignesh\n>>\n>>\n>\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Fri, 13 Mar 2009 16:02:22 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Alan Stange <[email protected]> writes:\n>> Gregory Stark wrote:\n>>> AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit.\n>\n>> It's implemented. I'm guessing it's not what you want to see though:\n>> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libc/port/gen/posix_fadvise.c\n>\n> Ugh. So apparently, we actually need to special-case Solaris to not\n> believe that posix_fadvise works, or we'll waste cycles uselessly\n> calling a do-nothing function. Thanks, Sun.\n\nDo we? Or do we just document that setting effective_cache_size on Solaris\nwon't help?\n\nI'm leaning towards the latter because I expect Sun will implement this and\nthere will be people running 8.4 on newer versions of the OS long after it's\nout.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Sat, 14 Mar 2009 01:58:50 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of tunable fix for\n\tscalability of 8.4" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> Ugh. So apparently, we actually need to special-case Solaris to not\n>> believe that posix_fadvise works, or we'll waste cycles uselessly\n>> calling a do-nothing function. Thanks, Sun.\n\n> Do we? Or do we just document that setting effective_cache_size on Solaris\n> won't help?\n\nI assume you meant effective_io_concurrency. We'd still need a special\ncase because the default is currently hard-wired at 1, not 0, if\nconfigure thinks the function exists. 
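To make that concrete, the call at stake is just the standard advisory
hint; a minimal sketch of the kind of request effective_io_concurrency is
meant to issue (illustrative only -- the helper name and the local BLCKSZ
define are invented here, this is not the actual backend code):

#include <fcntl.h>

#define BLCKSZ 8192

/*
 * Ask the kernel to start reading block "blkno" of file "fd" in the
 * background.  On current Solaris this returns success without queueing
 * any readahead, which is why a platform special case is being discussed.
 */
static int
prefetch_block(int fd, unsigned int blkno)
{
#ifdef POSIX_FADV_WILLNEED
    return posix_fadvise(fd, (off_t) blkno * BLCKSZ, BLCKSZ,
                         POSIX_FADV_WILLNEED);
#else
    (void) fd;
    (void) blkno;
    return 0;                   /* no advisory interface at all */
#endif
}

Whether such hints get issued at all is what the effective_io_concurrency
default controls.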
Also there's a posix_fadvise call\nin xlog.c that that parameter doesn't control anyhow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Mar 2009 22:06:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of tunable fix for\n\tscalability of 8.4" }, { "msg_contents": "On Fri, Mar 13, 2009 at 10:06 PM, Tom Lane <[email protected]> wrote:\n> Gregory Stark <[email protected]> writes:\n>> Tom Lane <[email protected]> writes:\n>>> Ugh.  So apparently, we actually need to special-case Solaris to not\n>>> believe that posix_fadvise works, or we'll waste cycles uselessly\n>>> calling a do-nothing function.  Thanks, Sun.\n>\n>> Do we? Or do we just document that setting effective_cache_size on Solaris\n>> won't help?\n>\n> I assume you meant effective_io_concurrency.  We'd still need a special\n> case because the default is currently hard-wired at 1, not 0, if\n> configure thinks the function exists.  Also there's a posix_fadvise call\n> in xlog.c that that parameter doesn't control anyhow.\n\nI think 1 should mean no prefetching, rather than 0. If the number of\nconcurrent I/O requests was 0, that would mean you couldn't perform\nany I/O at all.\n\n...Robert\n", "msg_date": "Fri, 13 Mar 2009 22:37:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of\n\ttunable fix for scalability of 8.4" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n\n> On Fri, Mar 13, 2009 at 10:06 PM, Tom Lane <[email protected]> wrote:\n>\n>> I assume you meant effective_io_concurrency.  We'd still need a special\n>> case because the default is currently hard-wired at 1, not 0, if\n>> configure thinks the function exists.  Also there's a posix_fadvise call\n>> in xlog.c that that parameter doesn't control anyhow.\n>\n> I think 1 should mean no prefetching, rather than 0. If the number of\n> concurrent I/O requests was 0, that would mean you couldn't perform\n> any I/O at all.\n\nThat is actually how I had intended it but apparently I messed it up at some\npoint such that later patches were doing some prefetching at 1 and there was\nno way to disable it. When Tom reviewed it he corrected the inability to\ndisable prefetching by making 0 disable prefetching.\n\nI didn't think it was worth raising as an issue but I didn't realize we were\ncurrently doing prefetching by default? i didn't realize that. Even on a\nsystem with posix_fadvise there's nothing much to be gained unless the data is\non a RAID device, so the original objection holds anyways. We shouldn't do any\nprefetching unless the user tells us to.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Sat, 14 Mar 2009 04:02:15 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of tunable fix for\n\tscalability of 8.4" }, { "msg_contents": "On Fri, 13 Mar 2009, Kevin Grittner wrote:\n\n> Tom Lane <[email protected]> wrote:\n>> Robert Haas <[email protected]> writes:\n>>> I think that changing the locking behavior is attacking the problem\n>>> at the wrong level anyway.\n>>\n>> Right. By the time a patch here could have any effect, you've\n>> already lost the game --- having to deschedule and reschedule a\n>> process is a large cost compared to the typical lock hold time for\n>> most LWLocks. 
So it would be better to look at how to avoid\n>> blocking in the first place.\n>\n> That's what motivated my request for a profile of the \"80 clients with\n> zero wait\" case. If all data access is in RAM, why can't 80 processes\n> keep 64 threads (on 8 processors) busy? Does anybody else think\n> that's an interesting question, or am I off in left field here?\n\nI don't think that anyone is arguing that it's not intersting, but I also \nthink that complete dismissal of the existing test case is also wrong.\n\nlast night Tom documented some reasons why the prior test may have some \nissues, but even with those I think the test shows that there is room for \nimprovement on the locking.\n\nmaking sure that the locking change doesn't cause problems for other \nworkload is a _very_ valid concern, but it's grounds for more testing, not \ndismissal.\n\nI think that the suggestion to wake up the first N waiters instead of all \nof them is a good optimization (and waking N - # active back-ends would be \neven better if there is an easy way to know that number) but I think that \nit's worth making the result testable by more people so that we can see if \nwhat workloads are pathalogical for this change (if any)\n\nDavid Lang\n", "msg_date": "Fri, 13 Mar 2009 21:29:24 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n>> I think that changing the locking behavior is attacking the problem at\n>> the wrong level anyway.\n> \n> Right. By the time a patch here could have any effect, you've already\n> lost the game --- having to deschedule and reschedule a process is a\n> large cost compared to the typical lock hold time for most LWLocks. So\n> it would be better to look at how to avoid blocking in the first place.\n\nI think the elephant in the room is that we have a single lock that \nneeds to be acquired every time a transaction commits, and every time a \nbackend takes a snapshot. It has worked well, and it still does for \nsmaller numbers of CPUs, but I'm not surprised it starts to become a \nbottleneck on a test like the one Jignesh is running. To make matters \nworse, the more backends there are, the longer the lock needs to be held \nto take a snapshot.\n\nIt's going require some hard thinking to bust that bottleneck. I've \nsometimes thought about maintaining a pre-calculated array of \nin-progress XIDs in shared memory. GetSnapshotData would simply memcpy() \nthat to private memory, instead of collecting the xids from ProcArray. \nOr we could try to move some of the if-tests inside the for-loop to \nafter the ProcArrayLock is released. For example, we could easily remove \nthe check for \"proc == MyProc\", and remove our own xid from the array \nafterwards. That's just linear speed up, though. I can't immediately \nthink of a way to completely avoid / partition away the contention.\n\nWALInsertLock is also quite high on Jignesh's list. That I've seen \nbecome the bottleneck on other tests too.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 14 Mar 2009 10:23:57 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Wed, 2009-03-11 at 16:53 -0400, Jignesh K. 
Shah wrote:\n\n> 1200: 2000: Medium Throughput: -1781969.000 Avg Medium Resp: 0.019\n\nI think you need to iron out bugs in your test script before we put too\nmuch stock into the results generated. Your throughput should not be\nnegative.\n\nI'd be interested in knowing the number of S and X locks requested, so\nwe can think about this from first principles. My understanding is that\nratio of S:X is about 10:1. Do you have more exact numbers?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Sat, 14 Mar 2009 13:30:02 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Mar 11, 2009, at 10:48 PM, Jignesh K. Shah wrote:\n> Fair enough.. Well I am now appealing to all who has a fairly \n> decent sized hardware want to try it out and see whether there are \n> \"gains\", \"no-changes\" or \"regressions\" based on your workload. Also \n> it will help if you report number of cpus when you respond back to \n> help collect feedback.\n\n\nDo you have a self-contained test case? I have several boxes with 16- \ncores worth of Xeon with 96GB I could try it on (though you might not \ncare about having \"only\" 16 cores :P)\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Sat, 14 Mar 2009 10:06:08 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Mar 12, 2009, at 2:22 PM, Jignesh K. Shah wrote:\n>> Something that might be useful for him to report is the avg number \n>> of active backends for each data point ...\n> short of doing select * from pg_stat_activity and removing the IDLE \n> entries, any other clean way to get that information.\n\n\nUh, isn't there a DTrace probe that would provide that info? It \ncertainly seems like something you'd want to know...\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Sat, 14 Mar 2009 10:22:38 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Mar 13, 2009, at 8:05 AM, Gregory Stark wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n>\n>> Scott Carey wrote:\n>>> On 3/12/09 11:37 AM, \"Jignesh K. Shah\" <[email protected]> wrote:\n>>>\n>>> In general, I suggest that it is useful to run tests with a few \n>>> different\n>>> types of pacing. Zero delay pacing will not have realistic number of\n>>> connections, but will expose bottlenecks that are universal, and \n>>> less\n>>> controversial\n>>\n>> I think I have done that before so I can do that again by running \n>> the users at\n>> 0 think time which will represent a \"Connection pool\" which is \n>> highly utilized\"\n>> and test how big the connection pool can be before the throughput \n>> tanks.. This\n>> can be useful for App Servers which sets up connections pools of \n>> their own\n>> talking with PostgreSQL.\n>\n> Keep in mind when you do this that it's not interesting to test a \n> number of\n> connections much larger than the number of processors you have. 
\n> Once the\n> system reaches 100% cpu usage it would be a misconfigured \n> connection pooler\n> that kept more than that number of connections open.\n\n\nHow certain are you of that? I believe that assertion would only be \ntrue if a backend could never block on *anything*, which simply isn't \nthe case. Of course in most systems you'll usually be blocking on IO, \nbut even in a ramdisk scenario there's other things you can end up \nblocking on. That means having more threads than cores isn't \nunreasonable.\n\nIf you want to see this in action in an easy to repeat test, try \ncompiling a complex system (such as FreeBSD) with different levels of \n-j handed to make (of course you'll need to wait until everything is \nin cache, and I'm assuming you have enough memory so that everything \nwould fit in cache).\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Sat, 14 Mar 2009 10:27:01 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Mar 13, 2009, at 3:02 PM, Jignesh K. Shah wrote:\n> vmstat seems similar to wakeup some\n> kthr memory page disk \n> faults cpu\n> r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy \n> cs us sy id\n> 63 0 0 45535728 38689856 0 14 0 0 0 0 0 0 0 0 0 163318 334225 \n> 360179 47 17 36\n> 85 0 0 45436736 38690760 0 6 0 0 0 0 0 0 0 0 0 165536 347462 \n> 365987 47 17 36\n> 59 0 0 45405184 38681752 0 11 0 0 0 0 0 0 0 0 0 155153 326182 \n> 345527 47 16 37\n> 53 0 0 45393816 38673344 0 6 0 0 0 0 0 0 0 0 0 152752 317851 \n> 340737 47 16 37\n> 66 0 0 45378312 38651920 0 11 0 0 0 0 0 0 0 0 0 150979 304350 \n> 336915 47 16 38\n> 67 0 0 45489520 38639664 0 5 0 0 0 0 0 0 0 0 0 157188 318958 \n> 351905 47 16 37\n> 82 0 0 45483600 38633344 0 10 0 0 0 0 0 0 0 0 0 168797 348619 \n> 375827 47 17 36\n> 68 0 0 45463008 38614432 0 9 0 0 0 0 0 0 0 0 0 173020 376594 \n> 385370 47 18 35\n> 54 0 0 45451376 38603792 0 13 0 0 0 0 0 0 0 0 0 161891 342522 \n> 364286 48 17 35\n> 41 0 0 45356544 38605976 0 5 0 0 0 0 0 0 0 0 0 167250 358320 \n> 372469 47 17 36\n> 27 0 0 45323472 38596952 0 11 0 0 0 0 0 0 0 0 0 165099 344695 \n> 364256 48 17 35\n\n\nThe good news is there's now at least enough runnable procs. What I \nfind *extremely* odd is the CPU usage is almost dead constant...\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Sat, 14 Mar 2009 10:40:18 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Mar 13, 2009 at 10:06 PM, Tom Lane <[email protected]> wrote:\n>> I assume you meant effective_io_concurrency. �We'd still need a special\n>> case because the default is currently hard-wired at 1, not 0, if\n>> configure thinks the function exists.\n\n> I think 1 should mean no prefetching, rather than 0.\n\nNo, 1 means \"prefetch a single block ahead\". It doesn't involve I/O\nconcurrency in the sense of multiple I/O requests being processed at\nonce; what it does give you is CPU vs I/O concurrency. 
0 shuts that\ndown and returns the system to pre-8.4 behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Mar 2009 11:52:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal of tunable fix for\n\tscalability of 8.4" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> WALInsertLock is also quite high on Jignesh's list. That I've seen \n> become the bottleneck on other tests too.\n\nYeah, that's been seen to be an issue before. I had the germ of an idea\nabout how to fix that:\n\n\t... with no lock, determine size of WAL record ...\n\tobtain WALInsertLock\n\tidentify WAL start address of my record, advance insert pointer\n\t\tpast record end\n\t*release* WALInsertLock\n\twithout lock, copy record into the space just reserved\n\nThe idea here is to allow parallelization of the copying of data into\nthe buffers. The hold time on WALInsertLock would be very short. Maybe\nit could even become a spinlock, though I'm not sure, because the\n\"advance insert pointer\" bit is more complicated than it looks (you have\nto allow for the extra overhead when crossing a WAL page boundary).\n\nNow the fly in the ointment is that there would need to be some way to\nensure that we didn't write data out to disk until it was valid; in\nparticular how do we implement a request to flush WAL up to a particular\nLSN value, when maybe some of the records before that haven't been fully\ntransferred into the buffers yet? The best idea I've thought of so far\nis shared/exclusive locks on the individual WAL buffer pages, with the\nrather unusual behavior that writers of the page would take shared lock\nand only the reader (he who has to dump to disk) would take exclusive\nlock. But maybe there's a better way. Currently I don't believe that\ndumping a WAL buffer (WALWriteLock) blocks insertion of new WAL data,\nand it would be nice to preserve that property.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Mar 2009 12:09:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "Top posting because my email client will mess up the inline:\n\nRe: advance insert pointer.\nI have no idea how complicated that advance part is as you allude to. But can this be done without a lock at all?\nAn atomic compare and exchange (or compare and set, etc) should do it. Although boundaries in buffers could make it a bit more complicated than that. Sounds potentially lockless to me. CompareAndSet - like atomics would prevent context switches entirely and generally work fabulous if the item that needs locking is itself an atomic value like a pointer or int. This is similar to, but lighter weight than, a spin lock.\n\n________________________________________\nFrom: Tom Lane [[email protected]]\nSent: Saturday, March 14, 2009 9:09 AM\nTo: Heikki Linnakangas\nCc: Robert Haas; Scott Carey; Greg Smith; Jignesh K. Shah; Kevin Grittner; [email protected]\nSubject: Re: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\nYeah, that's been seen to be an issue before. I had the germ of an idea\nabout how to fix that:\n\n ... 
with no lock, determine size of WAL record ...\n obtain WALInsertLock\n identify WAL start address of my record, advance insert pointer\n past record end\n *release* WALInsertLock\n without lock, copy record into the space just reserved\n\nThe idea here is to allow parallelization of the copying of data into\nthe buffers. The hold time on WALInsertLock would be very short. Maybe\nit could even become a spinlock, though I'm not sure, because the\n\"advance insert pointer\" bit is more complicated than it looks (you have\nto allow for the extra overhead when crossing a WAL page boundary).", "msg_date": "Sun, 15 Mar 2009 12:25:24 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "\n\nSimon Riggs wrote:\n> On Wed, 2009-03-11 at 16:53 -0400, Jignesh K. Shah wrote:\n>\n> \n>> 1200: 2000: Medium Throughput: -1781969.000 Avg Medium Resp: 0.019\n>> \n>\n> I think you need to iron out bugs in your test script before we put too\n> much stock into the results generated. Your throughput should not be\n> negative.\n>\n> I'd be interested in knowing the number of S and X locks requested, so\n> we can think about this from first principles. My understanding is that\n> ratio of S:X is about 10:1. Do you have more exact numbers?\n>\n> \nSimon, that's a known bug for the test where the first time it reaches \nthe max number of users, it throws a negative number. But all other \nnumbers are pretty much accurate\n\nGenerally the users:transactions count depends on think time..\n\n-Jignesh\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Sun, 15 Mar 2009 16:36:56 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\ndecibel wrote:\n> On Mar 11, 2009, at 10:48 PM, Jignesh K. Shah wrote:\n>> Fair enough.. Well I am now appealing to all who has a fairly \n>> decent sized hardware want to try it out and see whether there are \n>> \"gains\", \"no-changes\" or \"regressions\" based on your workload. Also \n>> it will help if you report number of cpus when you respond back to \n>> help collect feedback.\n>\n>\n> Do you have a self-contained test case? I have several boxes with \n> 16-cores worth of Xeon with 96GB I could try it on (though you might \n> not care about having \"only\" 16 cores :P)\nI dont have authority over iGen, but I am pretty sure that with sysbench \nwe should be able to recreate the test case or even dbt-2\nThat said the patch should be pretty easy to apply to your own workloads \n(where more feedback is more appreciated ).. On x64 16 cores might bring \nout the problem faster too since typically they are 2.5X higher clock \nfrequency.. Try it out.. stock build vs patched builds.\n\n\n-Jignesh\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Sun, 15 Mar 2009 16:40:04 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\ndecibel wrote:\n> On Mar 13, 2009, at 3:02 PM, Jignesh K. 
Shah wrote:\n>> vmstat seems similar to wakeup some\n>> kthr memory page disk faults \n>> cpu\n>> r b w swap free re mf pi po fr de sr s0 s1 s2 sd in sy cs \n>> us sy id\n>> 63 0 0 45535728 38689856 0 14 0 0 0 0 0 0 0 0 0 163318 334225 \n>> 360179 47 17 36\n>> 85 0 0 45436736 38690760 0 6 0 0 0 0 0 0 0 0 0 165536 347462 \n>> 365987 47 17 36\n>> 59 0 0 45405184 38681752 0 11 0 0 0 0 0 0 0 0 0 155153 326182 \n>> 345527 47 16 37\n>> 53 0 0 45393816 38673344 0 6 0 0 0 0 0 0 0 0 0 152752 317851 \n>> 340737 47 16 37\n>> 66 0 0 45378312 38651920 0 11 0 0 0 0 0 0 0 0 0 150979 304350 \n>> 336915 47 16 38\n>> 67 0 0 45489520 38639664 0 5 0 0 0 0 0 0 0 0 0 157188 318958 \n>> 351905 47 16 37\n>> 82 0 0 45483600 38633344 0 10 0 0 0 0 0 0 0 0 0 168797 348619 \n>> 375827 47 17 36\n>> 68 0 0 45463008 38614432 0 9 0 0 0 0 0 0 0 0 0 173020 376594 \n>> 385370 47 18 35\n>> 54 0 0 45451376 38603792 0 13 0 0 0 0 0 0 0 0 0 161891 342522 \n>> 364286 48 17 35\n>> 41 0 0 45356544 38605976 0 5 0 0 0 0 0 0 0 0 0 167250 358320 \n>> 372469 47 17 36\n>> 27 0 0 45323472 38596952 0 11 0 0 0 0 0 0 0 0 0 165099 344695 \n>> 364256 48 17 35\n>\n>\n> The good news is there's now at least enough runnable procs. What I \n> find *extremely* odd is the CPU usage is almost dead constant...\nGenerally when there is dead constant.. signs of classic bottleneck ;-) \nWe will be fixing one to get to another.. but knocking bottlenecks is \nthe name of the game I think\n\n-Jignesh\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Sun, 15 Mar 2009 16:42:40 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n\n> Generally when there is dead constant.. signs of classic bottleneck ;-) We\n> will be fixing one to get to another.. but knocking bottlenecks is the name of\n> the game I think\n\nIndeed. I think the bottleneck we're interested in addressing here is why you\nsay you weren't able to saturate the 64 threads with 64 processes when they're\nall RAM-resident.\n\n From what I see you still have 400+ processes? Is that right?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n", "msg_date": "Mon, 16 Mar 2009 15:08:12 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "<[email protected]> wrote: \n> On Fri, 13 Mar 2009, Kevin Grittner wrote:\n>> If all data access is in RAM, why can't 80 processes\n>> keep 64 threads (on 8 processors) busy? Does anybody else think\n>> that's an interesting question, or am I off in left field here?\n> \n> I don't think that anyone is arguing that it's not intersting, but I\n> also think that complete dismissal of the existing test case is also\n> wrong.\n \nRight, I just think this point in the test might give more targeted\nresults. When you've got many more times the number of processes than\nprocessors, of course processes will be held up. It seems to me that\nthis is the point where the real issues are least likely to get lost\nin the noise. 
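For reference, the wakeup variants being compared in this thread differ
only in how many queued waiters the release path hands the lock to.  Very
roughly, as a sketch with invented names -- not the real lwlock.c, and the
spinlock/semaphore plumbing is left out:

typedef enum { LW_SHARED, LW_EXCLUSIVE } LWMode;

typedef struct Waiter
{
    struct Waiter *next;
    LWMode         mode;
    /* ... per-backend semaphore to signal ... */
} Waiter;

/*
 * Decide which queued waiters to wake.  algorithm = 0 mimics the stock
 * behaviour (an unbroken run of shared waiters, or one exclusive waiter);
 * algorithm = 1 wakes the whole same-mode run at the head whatever its
 * mode; algorithm >= 2 wakes the first N waiters regardless of mode.
 */
static Waiter *
pick_waiters_to_wake(Waiter **queue_head, int algorithm)
{
    Waiter *first = *queue_head;
    Waiter *w = first;
    Waiter *last = NULL;
    int     n = 0;

    if (first == NULL)
        return NULL;

    if (algorithm >= 2)
    {
        while (w != NULL && n < algorithm)
        {
            last = w;
            w = w->next;
            n++;
        }
    }
    else if (algorithm == 1)
    {
        while (w != NULL && w->mode == first->mode)
        {
            last = w;
            w = w->next;
        }
    }
    else if (first->mode == LW_EXCLUSIVE)
    {
        last = first;
        w = first->next;
    }
    else
    {
        while (w != NULL && w->mode == LW_SHARED)
        {
            last = w;
            w = w->next;
        }
    }

    *queue_head = w;            /* whoever is left stays queued */
    last->next = NULL;
    return first;               /* caller signals these after dropping the lock's spinlock */
}

The interesting question at the low-concurrency end of the curve is how
often the extra wakeups just burn a context switch, which is exactly what
the 80-user numbers should show.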
It also might point out delays from the clients which\nwould help in interpreting the results farther down the list.\n \nOne more reason this point is an interesting one is that it is one\nthat gets *worse* with the suggested patch, if only by half a percent.\n \nWithout:\n \n600: 80: Medium Throughput: 82632.000 Avg Medium Resp: 0.005\n \nwith:\n \n600: 80: Medium Throughput: 82241.000 Avg Medium Resp: 0.005\n \n-Kevin\n", "msg_date": "Mon, 16 Mar 2009 10:48:32 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Sat, 14 Mar 2009, Heikki Linnakangas wrote:\n> I think the elephant in the room is that we have a single lock that needs to \n> be acquired every time a transaction commits, and every time a backend takes \n> a snapshot.\n\nI like this line of thinking.\n\nThere are two valid sides to this. One is the elephant - can we remove the \nneed for this lock, or at least reduce its contention. The second is the \nfact that these tests have shown that the locking code has potential for \nimprovement in the case where there are many processes waiting on the same \nlock. Both could be worked on, but perhaps the greatest benefit will come \nfrom stopping a single lock being so contended in the first place.\n\nOne possibility would be for the locks to alternate between exclusive and \nshared - that is:\n\n1. Take a snapshot of all shared waits, and grant them all - thundering\n herd style.\n2. Wait until ALL of them have finished, granting no more.\n3. Take a snapshot of all exclusive waits, and grant them all, one by one.\n4. Wait until all of them have been finished, granting no more.\n5. Back to (1).\n\nThis may also possibly improve CPU cache coherency. Or of course, it may \nmake everything much worse - I'm no expert. It would avoid starvation \nthough.\n\n> It's going require some hard thinking to bust that bottleneck. I've sometimes \n> thought about maintaining a pre-calculated array of in-progress XIDs in \n> shared memory. GetSnapshotData would simply memcpy() that to private memory, \n> instead of collecting the xids from ProcArray.\n\nShifting the contention from reading that data to altering it. But that \nwould probably be quite a lot fewer times, so it would be a benefit.\n\n> Or we could try to move some of the if-tests inside the for-loop to \n> after the ProcArrayLock is released.\n\nThat's always a useful change.\n\nOn Sat, 14 Mar 2009, Tom Lane wrote:\n> Now the fly in the ointment is that there would need to be some way to\n> ensure that we didn't write data out to disk until it was valid; in\n> particular how do we implement a request to flush WAL up to a particular\n> LSN value, when maybe some of the records before that haven't been fully\n> transferred into the buffers yet? The best idea I've thought of so far\n> is shared/exclusive locks on the individual WAL buffer pages, with the\n> rather unusual behavior that writers of the page would take shared lock\n> and only the reader (he who has to dump to disk) would take exclusive\n> lock. But maybe there's a better way. Currently I don't believe that\n> dumping a WAL buffer (WALWriteLock) blocks insertion of new WAL data,\n> and it would be nice to preserve that property.\n\nThe writers would need to take a shared lock on the page before releasing \nthe lock that marshals access to the \"how long is the log\" data. 
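To make the reservation step concrete, the "advance the insert pointer,
then copy outside the lock" part can be sketched with a compare-and-swap,
along the lines Scott suggests.  Purely illustrative: it assumes a 64-bit
platform with GCC's __sync builtins, wal_buffer_ptr() is a made-up helper,
and the page-crossing overhead Tom mentions is waved away entirely.

#include <stdint.h>
#include <string.h>

static volatile uint64_t insert_pos;        /* next free byte of WAL, shared */

extern char *wal_buffer_ptr(uint64_t pos);  /* assumed: map WAL position -> buffer */

static uint64_t
reserve_and_copy(const char *rec, uint32_t len)
{
    uint64_t start, next;

    /* Reserve [start, next) for ourselves; this loop replaces holding
     * WALInsertLock across the whole insertion. */
    do
    {
        start = insert_pos;
        next = start + len;
    } while (__sync_val_compare_and_swap(&insert_pos, start, next) != start);

    /* The copy now runs concurrently with other backends filling their
     * own reservations. */
    memcpy(wal_buffer_ptr(start), rec, len);

    /* Still unsolved here: telling the flusher that everything up to
     * 'next' is actually valid -- the shared/exclusive page lock idea
     * above is one way to do that. */
    return next;
}

The shared-lock-on-the-buffer-page caveat just mentioned applies to this
form of the idea as well.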
Other \nthan that, your idea would work.\n\nAn alternative would be to maintain a concurrent linked list of WAL writes \nin progress. An entry would be added to the tail every time a new writer \nis generated, marking the end of the log. When a writer finishes, it can \nremove the entry from the list very cheaply and with very little \ncontention. The reader (who dumps the WAL to disc) need only look at the \nhead of the list to find out how far the log is completed, because the \nlist is guaranteed to be in order of position in the log.\n\nThe linked list would probably be simpler - the writers don't need to lock \nmultiple things. It would also have fewer things accessing each \nlock, and therefore maybe less contention. However, it may involve more \nlocks than the one lock per WAL page method, and I don't know what the \noverhead of that would be. (It may be fewer - I don't know what the \naverage WAL write size is.)\n\nMatthew\n\n-- \n What goes up must come down. Ask any system administrator.\n", "msg_date": "Mon, 16 Mar 2009 16:26:27 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "I wrote: \n> One more reason this point is an interesting one is that it is one\n> that gets *worse* with the suggested patch, if only by half a\npercent.\n> \n> Without:\n> \n> 600: 80: Medium Throughput: 82632.000 Avg Medium Resp: 0.005\n> \n> with:\n> \n> 600: 80: Medium Throughput: 82241.000 Avg Medium Resp: 0.005\n \nOops. A later version:\n \n> Redid the test with - waking up all waiters irrespective of shared, \n> exclusive\n \n> 600: 80: Medium Throughput: 82920.000 Avg Medium Resp: 0.005\n \nThe one that showed the decreased performance at 800 was:\n \n> a modified Fix (not the original one that I proposed but something\n> that works like a heart valve : Opens and shuts to minimum \n> default way thus controlling how many waiters are waked up )\n \n-Kevin\n", "msg_date": "Mon, 16 Mar 2009 11:53:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Wed, 2009-03-11 at 22:20 -0400, Jignesh K. Shah wrote:\n\n> A tunable does not impact existing behavior\n\nWhy not put the tunable parameter into the patch and then show the test\nresults with it in? If there is no overhead, we should then be able to\nsee that.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Mon, 16 Mar 2009 17:39:58 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Note, some have mentioned that my client breaks inline formatting. My only comment is after Kevin's signature below:\n\nOn 3/16/09 9:53 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\nI wrote:\n> One more reason this point is an interesting one is that it is one\n> that gets *worse* with the suggested patch, if only by half a\npercent.\n>\n> Without:\n>\n> 600: 80: Medium Throughput: 82632.000 Avg Medium Resp: 0.005\n>\n> with:\n>\n> 600: 80: Medium Throughput: 82241.000 Avg Medium Resp: 0.005\n\nOops. 
A later version:\n\n> Redid the test with - waking up all waiters irrespective of shared,\n> exclusive\n\n> 600: 80: Medium Throughput: 82920.000 Avg Medium Resp: 0.005\n\nThe one that showed the decreased performance at 800 was:\n\n> a modified Fix (not the original one that I proposed but something\n> that works like a heart valve : Opens and shuts to minimum\n> default way thus controlling how many waiters are waked up )\n\n-Kevin\n\n\nAll three of those are probably within the margin of error of the measurement. We would need to run the same test 3 or 4 times to gauge its variance before concluding much.\n\n\n\nRe: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n\nNote, some have mentioned that my client breaks inline formatting.  My only comment is after Kevin’s signature below:\n\nOn 3/16/09 9:53 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\nI wrote:\n> One more reason this point is an interesting one is that it is one\n> that gets *worse* with the suggested patch, if only by half a\npercent.\n> \n> Without:\n> \n> 600: 80: Medium Throughput: 82632.000 Avg Medium Resp: 0.005\n> \n> with:\n> \n> 600: 80: Medium Throughput: 82241.000 Avg Medium Resp: 0.005\n\nOops.  A later version:\n\n> Redid the test with - waking up all waiters irrespective of shared,\n> exclusive\n\n> 600: 80: Medium Throughput: 82920.000 Avg Medium Resp: 0.005\n\nThe one that showed the decreased performance at 800 was:\n\n> a modified Fix (not the original one that I proposed but something\n> that works like a heart valve : Opens and shuts to minimum\n> default way thus  controlling how many waiters are waked up )\n\n-Kevin\n\n\nAll three of those are probably within the margin of error of the measurement. We would need to run the same test 3 or 4 times to gauge its variance before concluding much.", "msg_date": "Mon, 16 Mar 2009 11:44:34 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/16/09 11:08, Gregory Stark wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n>\n> \n>> Generally when there is dead constant.. signs of classic bottleneck ;-) We\n>> will be fixing one to get to another.. but knocking bottlenecks is the name of\n>> the game I think\n>> \n>\n> Indeed. I think the bottleneck we're interested in addressing here is why you\n> say you weren't able to saturate the 64 threads with 64 processes when they're\n> all RAM-resident.\n>\n> From what I see you still have 400+ processes? Is that right?\n>\n> \n\nAny one claiming they run CPU intensive are not always telling the \ntruth.. They *Think* they are running CPU intensive for the right part \nbut there could be memory misses, they could be doing statistics where \nthey are not really stressing the intended stuff to test, they could be \nparsing through the results where they are not stressing the backend \nwhile still claiming to be cpu intensive (though from a different \nperspective)\n\nSo yes a single process specially a client cannot claim to keep the \nbackend 100% active but so can neither a connection pooler since it \nstill has to some other stuff within the process.\n\n-Jignesh\n\n\n\n\n\n\n\n\n\nOn 03/16/09 11:08, Gregory Stark wrote:\n\n\"Jignesh K. Shah\" <[email protected]> writes:\n\n \n\nGenerally when there is dead constant.. signs of classic bottleneck ;-) We\nwill be fixing one to get to another.. but knocking bottlenecks is the name of\nthe game I think\n \n\n\nIndeed. 
I think the bottleneck we're interested in addressing here is why you\nsay you weren't able to saturate the 64 threads with 64 processes when they're\nall RAM-resident.\n\n>From what I see you still have 400+ processes? Is that right?\n\n \n\n\nAny one claiming they run CPU intensive are not always telling the\ntruth.. They *Think* they are running CPU intensive for the right part\nbut there could be memory misses, they could be doing statistics where\nthey are not really stressing the intended stuff to test, they could be\nparsing through the results where they are not stressing the backend\nwhile still claiming to be cpu intensive (though from a different\nperspective)\n\nSo yes a single process  specially a client cannot claim to keep the\nbackend 100% active but so can neither a connection pooler since it\nstill has to some other stuff within the process.\n\n-Jignesh", "msg_date": "Mon, 16 Mar 2009 15:39:20 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nSimon Riggs wrote:\n> On Wed, 2009-03-11 at 22:20 -0400, Jignesh K. Shah wrote:\n>\n> \n>> A tunable does not impact existing behavior\n>> \n>\n> Why not put the tunable parameter into the patch and then show the test\n> results with it in? If there is no overhead, we should then be able to\n> see that.\n>\n> \nCan do? Though will need quick primer on adding tunables.\nIs it on wiki.postgresql.org anywhere?\n\n-Jignesh\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Tue, 17 Mar 2009 09:18:11 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/16/09 13:39, Simon Riggs wrote:\n> On Wed, 2009-03-11 at 22:20 -0400, Jignesh K. Shah wrote:\n>\n>> A tunable does not impact existing behavior\n>\n> Why not put the tunable parameter into the patch and then show the test\n> results with it in? If there is no overhead, we should then be able to\n> see that.\n>\n\n\nI did a patch where I define lock_wakeup_algorithm with default value of \n0, and range is 0 to 32\nIt basically handles three types of algorithms and 32 different \npermutations, such that:\nWhen lock_wakeup_algorithm is set to\n0 => default logic of wakeup (only 1 exclusive or all \nsequential shared)\n1 => wake up all sequential exclusives or all sequential \nshared\n32>= n >=2 => wake up first n waiters irrespective of exclusive or \nsequential\n\n\n\nI did a quick test with patch. Unfortunately it improves my number even \nwith default setting 0 (not sure whether I should be pleased or sad - \nDefinitely no overhead infact seems to help performance a bit. NOTE: \nLogic is same, implementation is slightly different for default set)\n\nmy Prepatch numbers typically peaked around 136,000 tpm\nWith the patch and settings:\n\nlock_wakeup_algorithm=0\nPEAK: 962: 512: Medium Throughput: 161121.000 Avg Medium Resp: 0.051\n\n\nWhen lock_wakeup_algorithm=1\nThen my PEAK increases to\nPEAK 1560: 832: Medium Throughput: 176577.000 Avg Medium Resp: 0.086\n(Couldn't recreate the 184K+ result.. 
need to check that)\n\nI still havent tested for the rest 2-32 values but you get the point, \nthe patch is quite flexible with various types of permutations and no \noverhead.\n\nDo give it a try on your own setup and play with values and compare it \nwith your original builds.\n\nRegards,\nJignesh", "msg_date": "Tue, 17 Mar 2009 17:41:20 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Tue, 2009-03-17 at 17:41 -0400, Jignesh K. Shah wrote:\n\n> I did a quick test with patch. Unfortunately it improves my number\n> even with default setting 0 (not sure whether I should be pleased or\n> sad - Definitely no overhead infact seems to help performance a bit.\n> NOTE: Logic is same, implementation is slightly different for default\n> set)\n\nOK, I bite. 25% gain from doing nothing??? You're stretching my... err,\ncredulity.\n\nI like the train of thought for setting 1 and it is worth investigating,\nbut something feels wrong somewhere.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 17 Mar 2009 22:59:38 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nSimon Riggs wrote:\n> On Tue, 2009-03-17 at 17:41 -0400, Jignesh K. Shah wrote:\n>\n> \n>> I did a quick test with patch. Unfortunately it improves my number\n>> even with default setting 0 (not sure whether I should be pleased or\n>> sad - Definitely no overhead infact seems to help performance a bit.\n>> NOTE: Logic is same, implementation is slightly different for default\n>> set)\n>> \n>\n> OK, I bite. 25% gain from doing nothing??? You're stretching my... err,\n> credulity.\n>\n> I like the train of thought for setting 1 and it is worth investigating,\n> but something feels wrong somewhere.\n>\n> \nActually I think I am hurting my credibility here since I cannot \nexplain the improvement with the patch but still using default logic \n(thought different way I compare sequential using fields from the \nprevious proc structure instead of comparing with constant boolean) \nBut the change was necessary to allow it to handle multiple algorithms \nand yet be sleek and not bloated.\n\n In next couple of weeks I plan to test the patch on a different x64 \nbased system to do a sanity testing on lower number of cores and also \ntry out other workloads ...\n\nRegards,\nJignesh\n\n", "msg_date": "Tue, 17 Mar 2009 19:54:54 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Tue, 2009-03-17 at 19:54 -0400, Jignesh K. Shah wrote:\n> \n> Simon Riggs wrote:\n> > On Tue, 2009-03-17 at 17:41 -0400, Jignesh K. Shah wrote:\n> >\n> > \n> >> I did a quick test with patch. Unfortunately it improves my number\n> >> even with default setting 0 (not sure whether I should be pleased or\n> >> sad - Definitely no overhead infact seems to help performance a bit.\n> >> NOTE: Logic is same, implementation is slightly different for default\n> >> set)\n> >> \n> >\n> > OK, I bite. 25% gain from doing nothing??? You're stretching my... 
err,\n> > credulity.\n> >\n> > I like the train of thought for setting 1 and it is worth investigating,\n> > but something feels wrong somewhere.\n> >\n> > \n> Actually I think I am hurting my credibility here since I cannot \n> explain the improvement with the patch but still using default logic \n> (thought different way I compare sequential using fields from the \n> previous proc structure instead of comparing with constant boolean) \n> But the change was necessary to allow it to handle multiple algorithms \n> and yet be sleek and not bloated.\n> \n> In next couple of weeks I plan to test the patch on a different x64 \n> based system to do a sanity testing on lower number of cores and also \n> try out other workloads ...\n\nGood plan. I'm behind your ideas and will be happy to wait.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 18 Mar 2009 00:43:23 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Sat, 2009-03-14 at 12:09 -0400, Tom Lane wrote:\n> Heikki Linnakangas <[email protected]> writes:\n> > WALInsertLock is also quite high on Jignesh's list. That I've seen \n> > become the bottleneck on other tests too.\n> \n> Yeah, that's been seen to be an issue before. I had the germ of an idea\n> about how to fix that:\n> \n> \t... with no lock, determine size of WAL record ...\n> \tobtain WALInsertLock\n> \tidentify WAL start address of my record, advance insert pointer\n> \t\tpast record end\n> \t*release* WALInsertLock\n> \twithout lock, copy record into the space just reserved\n> \n> The idea here is to allow parallelization of the copying of data into\n> the buffers. The hold time on WALInsertLock would be very short. Maybe\n> it could even become a spinlock, though I'm not sure, because the\n> \"advance insert pointer\" bit is more complicated than it looks (you have\n> to allow for the extra overhead when crossing a WAL page boundary).\n> \n> Now the fly in the ointment is that there would need to be some way to\n> ensure that we didn't write data out to disk until it was valid; in\n> particular how do we implement a request to flush WAL up to a particular\n> LSN value, when maybe some of the records before that haven't been fully\n> transferred into the buffers yet? The best idea I've thought of so far\n> is shared/exclusive locks on the individual WAL buffer pages, with the\n> rather unusual behavior that writers of the page would take shared lock\n> and only the reader (he who has to dump to disk) would take exclusive\n> lock. But maybe there's a better way. Currently I don't believe that\n> dumping a WAL buffer (WALWriteLock) blocks insertion of new WAL data,\n> and it would be nice to preserve that property.\n\nYeh, that's just what we'd discussed previously:\nhttp://markmail.org/message/gectqy3yzvjs2hru#query:Reworking%20WAL%\n20locking+page:1+mid:gectqy3yzvjs2hru+state:results\n\nAre you thinking of doing this for 8.4? :-)\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 18 Mar 2009 07:48:38 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Mon, 2009-03-16 at 16:26 +0000, Matthew Wakeling wrote:\n> One possibility would be for the locks to alternate between exclusive\n> and \n> shared - that is:\n> \n> 1. 
Take a snapshot of all shared waits, and grant them all -\n> thundering\n> herd style.\n> 2. Wait until ALL of them have finished, granting no more.\n> 3. Take a snapshot of all exclusive waits, and grant them all, one by\n> one.\n> 4. Wait until all of them have been finished, granting no more.\n> 5. Back to (1)\n\nI agree with that, apart from the \"granting no more\" bit.\n\nCurrently we queue up exclusive locks, but there is no need to since for\nProcArrayLock commits are all changing different data.\n\nThe most useful behaviour is just to have two modes:\n* exclusive-lock held - all other x locks welcome, s locks queue\n* shared-lock held - all other s locks welcome, x locks queue\n\nThis *only* works for ProcArrayLock.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 18 Mar 2009 07:53:53 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Sat, 14 Mar 2009, Heikki Linnakangas wrote:\n>> It's going require some hard thinking to bust that bottleneck. I've \n>> sometimes thought about maintaining a pre-calculated array of \n>> in-progress XIDs in shared memory. GetSnapshotData would simply \n>> memcpy() that to private memory, instead of collecting the xids from \n>> ProcArray.\n> \n> Shifting the contention from reading that data to altering it. But that \n> would probably be quite a lot fewer times, so it would be a benefit.\n\nIt's true that it would shift work from reading (GetSnapshotData) to \nmodifying (xact end) the ProcArray. Which could actually be much worse: \nwhen modifying, you hold an ExclusiveLock, but readers only hold a \nSharedLock. I don't think it's that bad in reality since at transaction \nend you would only need to remove your own xid from an array. That \nshould be very fast, especially if you know exactly where in the array \nyour own xid is.\n\n> On Sat, 14 Mar 2009, Tom Lane wrote:\n>> Now the fly in the ointment is that there would need to be some way to\n>> ensure that we didn't write data out to disk until it was valid; in\n>> particular how do we implement a request to flush WAL up to a particular\n>> LSN value, when maybe some of the records before that haven't been fully\n>> transferred into the buffers yet? The best idea I've thought of so far\n>> is shared/exclusive locks on the individual WAL buffer pages, with the\n>> rather unusual behavior that writers of the page would take shared lock\n>> and only the reader (he who has to dump to disk) would take exclusive\n>> lock. But maybe there's a better way. Currently I don't believe that\n>> dumping a WAL buffer (WALWriteLock) blocks insertion of new WAL data,\n>> and it would be nice to preserve that property.\n> \n> The writers would need to take a shared lock on the page before \n> releasing the lock that marshals access to the \"how long is the log\" \n> data. Other than that, your idea would work.\n> \n> An alternative would be to maintain a concurrent linked list of WAL \n> writes in progress. An entry would be added to the tail every time a new \n> writer is generated, marking the end of the log. When a writer finishes, \n> it can remove the entry from the list very cheaply and with very little \n> contention. 
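Heikki's suggestion above, a pre-calculated array of in-progress XIDs in shared memory that GetSnapshotData() can simply memcpy(), is easy to make concrete. Below is a minimal standalone sketch, not PostgreSQL code: the names (SnapshotArray, snapshot_copy and so on) are invented, a pthreads rwlock stands in for ProcArrayLock, and subtransactions, xmin/xmax bookkeeping and the real ProcArray layout are all ignored. It only shows the shape of the trade-off being discussed: readers copy under a shared lock, and the cost moves to transaction end, where removing your own xid is O(1) if each backend remembers its slot.

/* Standalone illustration only; this is not PostgreSQL source code. */
#include <pthread.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t TransactionId;
#define MAX_BACKENDS 1024

typedef struct
{
    pthread_rwlock_t lock;                  /* stands in for ProcArrayLock */
    int              nxids;                 /* live entries in xids[] */
    TransactionId    xids[MAX_BACKENDS];    /* densely packed in-progress xids */
    int              owner[MAX_BACKENDS];   /* xids[i] belongs to backend owner[i] */
    int              slot_of[MAX_BACKENDS]; /* backend id -> index into xids[], -1 if none */
} SnapshotArray;

/* Snapshot side: the whole "scan the proc array" loop becomes one memcpy. */
int
snapshot_copy(SnapshotArray *a, TransactionId *dst)
{
    int n;

    pthread_rwlock_rdlock(&a->lock);
    n = a->nxids;
    memcpy(dst, a->xids, n * sizeof(TransactionId));
    pthread_rwlock_unlock(&a->lock);
    return n;
}

/* Transaction start: publish our xid and remember where it went. */
void
snapshot_add_own_xid(SnapshotArray *a, int backend_id, TransactionId xid)
{
    pthread_rwlock_wrlock(&a->lock);
    a->xids[a->nxids] = xid;
    a->owner[a->nxids] = backend_id;
    a->slot_of[backend_id] = a->nxids;
    a->nxids++;
    pthread_rwlock_unlock(&a->lock);
}

/* Transaction end: O(1) removal because we know our own slot. */
void
snapshot_remove_own_xid(SnapshotArray *a, int backend_id)
{
    int i, last;

    pthread_rwlock_wrlock(&a->lock);
    i = a->slot_of[backend_id];
    last = --a->nxids;
    a->xids[i] = a->xids[last];      /* fill the hole with the last entry */
    a->owner[i] = a->owner[last];
    a->slot_of[a->owner[i]] = i;     /* fix the slot of the entry we moved */
    a->slot_of[backend_id] = -1;
    pthread_rwlock_unlock(&a->lock);
}
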
The reader (who dumps the WAL to disc) need only look at the \n> head of the list to find out how far the log is completed, because the \n> list is guaranteed to be in order of position in the log.\n\nA linked list or an array of in-progress writes was my first thought as \nwell. But the real problem is: how does the reader wait until all WAL up \nto X have been written? It could poll, but that's inefficient.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 18 Mar 2009 13:20:16 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\"Jignesh K. Shah\" <[email protected]> writes:\n\n> In next couple of weeks I plan to test the patch on a different x64 based\n> system to do a sanity testing on lower number of cores and also try out other\n> workloads ...\n\nI'm actually more interested in the large number of cores but fewer processes\nand lower max_connections. If you set max_connections to 64 and eliminate the\nwait time you should, in theory, be able to get 100% cpu usage. It would be\nvery interesting to track down the contention which is preventing that.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Wed, 18 Mar 2009 11:36:18 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Wed, 18 Mar 2009, Simon Riggs wrote:\n> I agree with that, apart from the \"granting no more\" bit.\n>\n> The most useful behaviour is just to have two modes:\n> * exclusive-lock held - all other x locks welcome, s locks queue\n> * shared-lock held - all other s locks welcome, x locks queue\n\nThe problem with making all other locks welcome is that there is a \npossibility of starvation. Imagine a case where there is a constant stream \nof shared locks - the exclusive locks may never actually get hold of the \nlock under the \"all other shared locks welcome\" strategy. Likewise with \nthe reverse.\n\nTaking a snapshot and queueing all newer locks forces fairness in the \nlocking strategy, and avoids one of the sides getting starved.\n\nMatthew\n\n-- \n I've run DOOM more in the last few days than I have the last few\n months. I just love debugging ;-) -- Linus Torvalds\n", "msg_date": "Wed, 18 Mar 2009 11:45:47 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Wed, 18 Mar 2009, Heikki Linnakangas wrote:\n> A linked list or an array of in-progress writes was my first thought as well. \n> But the real problem is: how does the reader wait until all WAL up to X have \n> been written? It could poll, but that's inefficient.\n\nGood point - waiting for an exclusive lock on a page is a pretty easy way \nto wake up at the right time.\n\nHowever, is there not some way to wait for a notify? 
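Putting Tom's quoted outline (reserve WAL space under a very short hold of WALInsertLock, copy with no lock held) together with the question just raised of how the flusher waits without polling, here is a minimal standalone sketch using a pthreads mutex and condition variable. It is emphatically not PostgreSQL code: backends are processes and would need semaphores or similar rather than a condvar, the names are invented, and page boundaries, buffer wraparound and crash safety are ignored. It also shows how conservative the simplest completion tracking is, since finished_upto only advances when no copy is in flight, which is exactly the bookkeeping problem being discussed.

/* Standalone illustration only; this is not PostgreSQL source code. */
#include <pthread.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t LSN;

typedef struct
{
    pthread_mutex_t lock;           /* a much shorter stand-in for WALInsertLock */
    pthread_cond_t  done_cv;        /* the flusher sleeps here instead of polling */
    LSN             reserved_upto;  /* end of all space handed out so far */
    LSN             finished_upto;  /* everything below this is fully copied */
    int             copies_in_flight;
    char           *buffer;         /* pretend this is a big enough WAL buffer */
} WalInsert;

void
wal_insert(WalInsert *w, const char *rec, size_t len)
{
    LSN start;

    /* 1. Reserve space: the only work done under the lock. */
    pthread_mutex_lock(&w->lock);
    start = w->reserved_upto;
    w->reserved_upto += len;
    w->copies_in_flight++;
    pthread_mutex_unlock(&w->lock);

    /* 2. Copy the record with no lock held; copies can run in parallel. */
    memcpy(w->buffer + start, rec, len);

    /* 3. Mark completion. Conservative rule: only when no copy is in
     * flight do we know everything reserved so far is really in the buffer. */
    pthread_mutex_lock(&w->lock);
    if (--w->copies_in_flight == 0)
    {
        w->finished_upto = w->reserved_upto;
        pthread_cond_broadcast(&w->done_cv);   /* notify, don't poll */
    }
    pthread_mutex_unlock(&w->lock);
}

/* Flusher: sleep until WAL up to 'target' is safely in the buffer. */
void
wal_wait_for(WalInsert *w, LSN target)
{
    pthread_mutex_lock(&w->lock);
    while (w->finished_upto < target)
        pthread_cond_wait(&w->done_cv, &w->lock);
    pthread_mutex_unlock(&w->lock);
}

A real design would track completion per buffer page or per reservation, along the lines Tom and Matthew suggest, so that a steady stream of overlapping copies cannot hold finished_upto back indefinitely.
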
I'm no C expert, but \nin Java that's one of the most fundamental features of a lock.\n\nMatthew\n\n-- \n A bus station is where buses stop.\n A train station is where trains stop.\n On my desk, I have a workstation.\n", "msg_date": "Wed, 18 Mar 2009 11:49:33 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Wed, 2009-03-18 at 11:45 +0000, Matthew Wakeling wrote:\n> On Wed, 18 Mar 2009, Simon Riggs wrote:\n> > I agree with that, apart from the \"granting no more\" bit.\n> >\n> > The most useful behaviour is just to have two modes:\n> > * exclusive-lock held - all other x locks welcome, s locks queue\n> > * shared-lock held - all other s locks welcome, x locks queue\n> \n> The problem with making all other locks welcome is that there is a \n> possibility of starvation. Imagine a case where there is a constant stream \n> of shared locks - the exclusive locks may never actually get hold of the \n> lock under the \"all other shared locks welcome\" strategy. \n\nThat's exactly what happens now. \n\n> Likewise with the reverse.\n\nI think it depends upon how frequently requests arrive. Commits cause X\nlocks and we don't commit that often, so its very unlikely that we'd see\na constant stream of X locks and prevent shared lockers.\n\n\nSome comments from an earlier post on this topic (about 20 months ago):\n\nSince shared locks are currently queued behind exclusive requests\nwhen they cannot be immediately satisfied, it might be worth\nreconsidering the way LWLockRelease works also. When we wake up the\nqueue we only wake the Shared requests that are adjacent to the head of\nthe queue. Instead we could wake *all* waiting Shared requestors.\n\ne.g. with a lock queue like this:\n(HEAD) S<-S<-X<-S<-X<-S<-X<-S\nCurrently we would wake the 1st and 2nd waiters only. \n\nIf we were to wake the 3rd, 5th and 7th waiters also, then the queue\nwould reduce in length very quickly, if we assume generally uniform\nservice times. (If the head of the queue is X, then we wake only that\none process and I'm not proposing we change that). That would mean queue\njumping right? Well thats what already happens in other circumstances,\nso there cannot be anything intrinsically wrong with allowing it, the\nonly question is: would it help? \n\nWe need not wake the whole queue, there may be some generally more\nbeneficial heuristic. The reason for considering this is not to speed up\nShared requests but to reduce the queue length and thus the waiting time\nfor the Xclusive requestors. Each time a Shared request is dequeued, we\neffectively re-enable queue jumping, so a Shared request arriving during\nthat point will actually jump ahead of Shared requests that were unlucky\nenough to arrive while an Exclusive lock was held. Worse than that, the\nnew incoming Shared requests exacerbate the starvation, so the more\nnon-adjacent groups of Shared lock requests there are in the queue, the\nworse the starvation of the exclusive requestors becomes. We are\neffectively randomly starving some shared locks as well as exclusive\nlocks in the current scheme, based upon the state of the lock when they\nmake their request. The situation is worst when the lock is heavily\ncontended and the workload has a 50/50 mix of shared/exclusive requests,\ne.g. 
serializable transactions or transactions with lots of\nsubtransactions.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 18 Mar 2009 12:06:49 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Wed, 18 Mar 2009, Simon Riggs wrote:\n> On Wed, 2009-03-18 at 11:45 +0000, Matthew Wakeling wrote:\n>> The problem with making all other locks welcome is that there is a\n>> possibility of starvation. Imagine a case where there is a constant stream\n>> of shared locks - the exclusive locks may never actually get hold of the\n>> lock under the \"all other shared locks welcome\" strategy.\n>\n> That's exactly what happens now.\n\nSo the question becomes whether such shared starvation of exclusive locks \nis an issue or not. I would imagine that the greater the number of CPUs \nand backend processes in the system, the more likely this is to become an \nissue.\n\n>> Likewise with the reverse.\n>\n> I think it depends upon how frequently requests arrive. Commits cause X\n> locks and we don't commit that often, so its very unlikely that we'd see\n> a constant stream of X locks and prevent shared lockers.\n\nWell, on a very large system, and in the case where exclusive locks are \nactually exclusive (so, not ProcArrayList), then processing can only \nhappen one at a time rather than in parallel, so that offsets the reduced \nfrequency of requests compared to shared. Again, it'd only become an issue \nwith very large numbers of CPUs and backends.\n\nInteresting comments from the previous thread - thanks for that. If the \ngoal is to reduce the waiting time for exclusive, then some fairness would \nseem to be useful.\n\nThe problem is that under the current system where shared locks join in on \nthe fun, you are relying on there being a time when there are no shared \nlocks at all in the queue in order for exclusive locks to ever get a \nchance.\n\nStatistically, if such a situation is likely to occur frequently, then the \naverage queue length of shared locks is small. If that is the case, then \nthere is little benefit in letting them join in, because the parallelism \ngain is small. However, if the average queue length is large, and you are \nseeing a decent amount of parallelism gain by allowing them to join in, \nthen it necessarily the case that times where there are no shared locks at \nall are few, and the exclusive locks are necessarily starved. The current \nimplementation guarantees either one of these scenarios.\n\nThe advantage of queueing all shared requests while servicing all \nexclusive requests one by one is that a decent number of shared requests \nwill be able to build up, allowing a good amount of parallelism to be \nreleased in the thundering herd when shared locks are favoured again. 
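Matthew's alternating scheme from earlier in this exchange (grant the whole shared herd, let it drain, then service the queued exclusive requests one at a time) fits in a few dozen lines. The sketch below is a standalone pthreads illustration with invented names (PhasedLock and so on), not LWLock code, and it simplifies the proposal: it flips between phases instead of taking the explicit snapshots described above, so a continuous stream of exclusive requests could still hold back the shared herd; the snapshot variant needs a generation counter to close each phase.

/* Standalone illustration only; this is not PostgreSQL's LWLock code. */
#include <pthread.h>

typedef struct
{
    pthread_mutex_t m;             /* protects the fields below */
    pthread_cond_t  shared_ok;     /* shared waiters sleep here */
    pthread_cond_t  excl_ok;       /* exclusive waiters sleep here */
    int active_shared;             /* current shared holders */
    int active_excl;               /* 0 or 1 */
    int waiting_shared;
    int waiting_excl;
    int shared_phase;              /* initialise to 1: admit shared lockers */
} PhasedLock;

void
lock_shared(PhasedLock *l)
{
    pthread_mutex_lock(&l->m);
    l->waiting_shared++;
    while (l->active_excl || !l->shared_phase)   /* queue once the phase closes */
        pthread_cond_wait(&l->shared_ok, &l->m);
    l->waiting_shared--;
    l->active_shared++;
    pthread_mutex_unlock(&l->m);
}

void
lock_exclusive(PhasedLock *l)
{
    pthread_mutex_lock(&l->m);
    l->waiting_excl++;
    l->shared_phase = 0;                         /* stop admitting new shared lockers */
    while (l->active_shared || l->active_excl)
        pthread_cond_wait(&l->excl_ok, &l->m);
    l->waiting_excl--;
    l->active_excl = 1;
    pthread_mutex_unlock(&l->m);
}

void
unlock_shared(PhasedLock *l)
{
    pthread_mutex_lock(&l->m);
    if (--l->active_shared == 0 && l->waiting_excl)
        pthread_cond_signal(&l->excl_ok);        /* exclusives are served one by one */
    pthread_mutex_unlock(&l->m);
}

void
unlock_exclusive(PhasedLock *l)
{
    pthread_mutex_lock(&l->m);
    l->active_excl = 0;
    if (l->waiting_excl)
        pthread_cond_signal(&l->excl_ok);        /* drain the exclusive queue first */
    else
    {
        l->shared_phase = 1;                     /* flip back: wake the whole herd */
        if (l->waiting_shared)
            pthread_cond_broadcast(&l->shared_ok);
    }
    pthread_mutex_unlock(&l->m);
}

Whether the broadcast pays off is exactly the queue-length question above: the herd has to be reasonably large per exclusive release for the extra parallelism to outweigh the thundering-herd wakeup cost.
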
This \nmethod increases the parallelism as the number of parallel processes \nincreases.\n\nMatthew\n\n-- \nIlliteracy - I don't know the meaning of the word!\n", "msg_date": "Wed, 18 Mar 2009 12:33:42 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/18/09 08:06, Simon Riggs wrote:\n> On Wed, 2009-03-18 at 11:45 +0000, Matthew Wakeling wrote:\n> \n>> On Wed, 18 Mar 2009, Simon Riggs wrote:\n>> \n>>> I agree with that, apart from the \"granting no more\" bit.\n>>>\n>>> The most useful behaviour is just to have two modes:\n>>> * exclusive-lock held - all other x locks welcome, s locks queue\n>>> * shared-lock held - all other s locks welcome, x locks queue\n>>> \n>> The problem with making all other locks welcome is that there is a \n>> possibility of starvation. Imagine a case where there is a constant stream \n>> of shared locks - the exclusive locks may never actually get hold of the \n>> lock under the \"all other shared locks welcome\" strategy. \n>> \n>\n> That's exactly what happens now. \n>\n> \n>> Likewise with the reverse.\n>> \n>\n> I think it depends upon how frequently requests arrive. Commits cause X\n> locks and we don't commit that often, so its very unlikely that we'd see\n> a constant stream of X locks and prevent shared lockers.\n>\n>\n> Some comments from an earlier post on this topic (about 20 months ago):\n>\n> Since shared locks are currently queued behind exclusive requests\n> when they cannot be immediately satisfied, it might be worth\n> reconsidering the way LWLockRelease works also. When we wake up the\n> queue we only wake the Shared requests that are adjacent to the head of\n> the queue. Instead we could wake *all* waiting Shared requestors.\n>\n> e.g. with a lock queue like this:\n> (HEAD) S<-S<-X<-S<-X<-S<-X<-S\n> Currently we would wake the 1st and 2nd waiters only. \n>\n> If we were to wake the 3rd, 5th and 7th waiters also, then the queue\n> would reduce in length very quickly, if we assume generally uniform\n> service times. (If the head of the queue is X, then we wake only that\n> one process and I'm not proposing we change that). That would mean queue\n> jumping right? Well thats what already happens in other circumstances,\n> so there cannot be anything intrinsically wrong with allowing it, the\n> only question is: would it help? \n>\n> \n\nI thought about that.. Except without putting a restriction a huge queue \nwill cause lot of time spent in manipulating the lock list every time. \nOne more thing will be to maintain two list shared and exclusive and \nround robin through them for every time you access the list so \nmanipulation is low.. But the best thing is to allow flexibility to \nchange the algorithm since some workloads may work fine with one and \nothers will NOT. The flexibility then allows to tinker for those already \nreaching the limits.\n\n-Jignesh\n\n> We need not wake the whole queue, there may be some generally more\n> beneficial heuristic. The reason for considering this is not to speed up\n> Shared requests but to reduce the queue length and thus the waiting time\n> for the Xclusive requestors. Each time a Shared request is dequeued, we\n> effectively re-enable queue jumping, so a Shared request arriving during\n> that point will actually jump ahead of Shared requests that were unlucky\n> enough to arrive while an Exclusive lock was held. 
Worse than that, the\n> new incoming Shared requests exacerbate the starvation, so the more\n> non-adjacent groups of Shared lock requests there are in the queue, the\n> worse the starvation of the exclusive requestors becomes. We are\n> effectively randomly starving some shared locks as well as exclusive\n> locks in the current scheme, based upon the state of the lock when they\n> make their request. The situation is worst when the lock is heavily\n> contended and the workload has a 50/50 mix of shared/exclusive requests,\n> e.g. serializable transactions or transactions with lots of\n> subtransactions.\n>\n> \n\n\n\n\n\n\n\n\nOn 03/18/09 08:06, Simon Riggs wrote:\n\nOn Wed, 2009-03-18 at 11:45 +0000, Matthew Wakeling wrote:\n \n\nOn Wed, 18 Mar 2009, Simon Riggs wrote:\n \n\nI agree with that, apart from the \"granting no more\" bit.\n\nThe most useful behaviour is just to have two modes:\n* exclusive-lock held - all other x locks welcome, s locks queue\n* shared-lock held - all other s locks welcome, x locks queue\n \n\nThe problem with making all other locks welcome is that there is a \npossibility of starvation. Imagine a case where there is a constant stream \nof shared locks - the exclusive locks may never actually get hold of the \nlock under the \"all other shared locks welcome\" strategy. \n \n\n\nThat's exactly what happens now. \n\n \n\nLikewise with the reverse.\n \n\n\nI think it depends upon how frequently requests arrive. Commits cause X\nlocks and we don't commit that often, so its very unlikely that we'd see\na constant stream of X locks and prevent shared lockers.\n\n\nSome comments from an earlier post on this topic (about 20 months ago):\n\nSince shared locks are currently queued behind exclusive requests\nwhen they cannot be immediately satisfied, it might be worth\nreconsidering the way LWLockRelease works also. When we wake up the\nqueue we only wake the Shared requests that are adjacent to the head of\nthe queue. Instead we could wake *all* waiting Shared requestors.\n\ne.g. with a lock queue like this:\n(HEAD) S<-S<-X<-S<-X<-S<-X<-S\nCurrently we would wake the 1st and 2nd waiters only. \n\nIf we were to wake the 3rd, 5th and 7th waiters also, then the queue\nwould reduce in length very quickly, if we assume generally uniform\nservice times. (If the head of the queue is X, then we wake only that\none process and I'm not proposing we change that). That would mean queue\njumping right? Well thats what already happens in other circumstances,\nso there cannot be anything intrinsically wrong with allowing it, the\nonly question is: would it help? \n\n \n\n\nI thought about that.. Except without putting a restriction a huge\nqueue will cause lot of time spent in manipulating the lock list every\ntime. One more thing will be to maintain two list shared and exclusive\nand round robin through them for every time you access the list so\nmanipulation is low.. But the best thing is to allow flexibility to\nchange the algorithm since some workloads may work fine with one and\nothers will NOT. The flexibility then allows to tinker for those\nalready reaching the limits.\n\n-Jignesh\n\n\nWe need not wake the whole queue, there may be some generally more\nbeneficial heuristic. The reason for considering this is not to speed up\nShared requests but to reduce the queue length and thus the waiting time\nfor the Xclusive requestors. 
Each time a Shared request is dequeued, we\neffectively re-enable queue jumping, so a Shared request arriving during\nthat point will actually jump ahead of Shared requests that were unlucky\nenough to arrive while an Exclusive lock was held. Worse than that, the\nnew incoming Shared requests exacerbate the starvation, so the more\nnon-adjacent groups of Shared lock requests there are in the queue, the\nworse the starvation of the exclusive requestors becomes. We are\neffectively randomly starving some shared locks as well as exclusive\nlocks in the current scheme, based upon the state of the lock when they\nmake their request. The situation is worst when the lock is heavily\ncontended and the workload has a 50/50 mix of shared/exclusive requests,\ne.g. serializable transactions or transactions with lots of\nsubtransactions.", "msg_date": "Wed, 18 Mar 2009 09:38:14 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Wed, 18 Mar 2009, Jignesh K. Shah wrote:\n> I thought about that.. Except without putting a restriction a huge queue will cause lot of time spent in manipulating the lock\n> list every time. One more thing will be to maintain two list shared and exclusive and round robin through them for every time you\n> access the list so manipulation is low.. But the best thing is to allow flexibility to change the algorithm since some workloads\n> may work fine with one and others will NOT. The flexibility then allows to tinker for those already reaching the limits.\n\nYeah, having two separate queues is the obvious way of doing this. It \nwould make most operations really trivial. Just wake everything in the \nshared queue at once, and you can throw it away wholesale and allocate a \nnew queue. It avoids a whole lot of queue manipulation.\n\nMatthew\n\n-- \n Software suppliers are trying to make their software packages more\n 'user-friendly'.... Their best approach, so far, has been to take all\n the old brochures, and stamp the words, 'user-friendly' on the cover.\n -- Bill Gates\n", "msg_date": "Wed, 18 Mar 2009 13:49:05 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/12/09 6:29 PM, \"Robert Haas\" <[email protected]> wrote:\n\n>> Its worth ruling out given that even if the likelihood is small, the fix is\n>> easy.  However, I don¹t see the throughput drop from peak as more\n>> concurrency is added that is the hallmark of this problem < usually with a\n>> lot of context switching and a sudden increase in CPU use per transaction.\n> \n> The problem is that the proposed \"fix\" bears a strong resemblence to\n> attempting to improve your gas mileage by removing a few non-critical\n> parts from your card, like, say, the bumpers, muffler, turn signals,\n> windshield wipers, and emergency brake.\n> \n\nThe fix I was referring to as easy was using a connection pooler -- as a\nreply to the previous post. Even if its a low likelihood that the connection\npooler fixes this case, its worth looking at.\n\n> \n> While it's true that the car\n> might be drivable in that condition (as long as nothing unexpected\n> happens), you're going to have a hard time convincing the manufacturer\n> to offer that as an options package.\n> \n\nThe original poster's request is for a config parameter, for experimentation\nand testing by the brave. 
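The two-queue idea just discussed is mostly about making the release path trivial. The fragment below sketches only that part; it is standalone and the names (TwoQueueLock, Waiter) are invented rather than taken from lwlock.c. Waiters sit on two singly linked lists, the whole shared list is detached under the lock's mutex and then woken with no lock held, and exclusive waiters are popped one at a time. The acquire paths, holder counts and the policy deciding which queue to wake next are omitted; they would follow whichever scheme (alternating batches, wake-all, and so on) is being debated in this thread.

/* Standalone illustration only; this is not PostgreSQL's LWLock code. */
#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

typedef struct Waiter
{
    sem_t          sem;            /* each waiter sleeps on its own semaphore */
    struct Waiter *next;
} Waiter;

typedef struct
{
    pthread_mutex_t mutex;         /* protects both queues and the lock state */
    Waiter         *shared_head, *shared_tail;
    Waiter         *excl_head,   *excl_tail;
    /* ... shared/exclusive holder counts would live here too ... */
} TwoQueueLock;

/* Wake every queued shared waiter in one go: detach the list under the
 * mutex, then post the semaphores with no lock held at all. */
static void
wake_all_shared(TwoQueueLock *l)
{
    Waiter *herd;

    pthread_mutex_lock(&l->mutex);
    herd = l->shared_head;                 /* take the whole queue */
    l->shared_head = l->shared_tail = NULL;
    pthread_mutex_unlock(&l->mutex);

    while (herd != NULL)
    {
        Waiter *next = herd->next;         /* read before waking: the woken
                                            * waiter may reuse its node */
        sem_post(&herd->sem);
        herd = next;
    }
}

/* Wake exactly one queued exclusive waiter, if any. */
static void
wake_one_exclusive(TwoQueueLock *l)
{
    Waiter *w;

    pthread_mutex_lock(&l->mutex);
    w = l->excl_head;
    if (w != NULL)
    {
        l->excl_head = w->next;
        if (l->excl_head == NULL)
            l->excl_tail = NULL;
    }
    pthread_mutex_unlock(&l->mutex);

    if (w != NULL)
        sem_post(&w->sem);
}
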
My own request was for that version of the lock to\nprevent possible starvation but improve performance by unlocking all shared\nat once, then doing all exclusives one at a time next, etc.\n\n> \n> I think that changing the locking behavior is attacking the problem at\n> the wrong level anyway. If someone want to look at optimizing\n> PostgreSQL for very large numbers of concurrent connections without a\n> connection pooler... at least IMO, it would be more worthwhile to\n> study WHY there's so much locking contention, and, on a lock by lock\n> basis, what can be done about it without harming performance under\n> more normal loads? The fact that there IS locking contention is sorta\n> interesting, but it would be a lot more interesting to know why.\n> \n> ...Robert\n> \n\nI alluded to the three main ways of dealing with lock contention elsewhere.\nAvoiding locks, making finer grained locks, and making locks faster.\nAll are worthy. Some are harder to do than others. Some have been heavily\ntuned already. Its a case by case basis. And regardless, the unfair lock\nis a good test tool.\n\n", "msg_date": "Wed, 18 Mar 2009 10:43:18 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Mon, 2009-03-16 at 16:26 +0000, Matthew Wakeling wrote:\n>> One possibility would be for the locks to alternate between exclusive\n>> and \n>> shared - that is:\n>> \n>> 1. Take a snapshot of all shared waits, and grant them all -\n>> thundering\n>> herd style.\n>> 2. Wait until ALL of them have finished, granting no more.\n>> 3. Take a snapshot of all exclusive waits, and grant them all, one by\n>> one.\n>> 4. Wait until all of them have been finished, granting no more.\n>> 5. Back to (1)\n\n> I agree with that, apart from the \"granting no more\" bit.\n\n> Currently we queue up exclusive locks, but there is no need to since for\n> ProcArrayLock commits are all changing different data.\n\n> The most useful behaviour is just to have two modes:\n> * exclusive-lock held - all other x locks welcome, s locks queue\n> * shared-lock held - all other s locks welcome, x locks queue\n\nMy goodness, it seems people have forgotten about the \"lightweight\"\npart of the LWLock design.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Mar 2009 16:26:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "\nOn 3/18/09 4:36 AM, \"Gregory Stark\" <[email protected]> wrote:\n\n> \n> \n> \"Jignesh K. Shah\" <[email protected]> writes:\n> \n>> In next couple of weeks I plan to test the patch on a different x64 based\n>> system to do a sanity testing on lower number of cores and also try out other\n>> workloads ...\n> \n> I'm actually more interested in the large number of cores but fewer processes\n> and lower max_connections. If you set max_connections to 64 and eliminate the\n> wait time you should, in theory, be able to get 100% cpu usage. It would be\n> very interesting to track down the contention which is preventing that.\n\nMy previous calculation in this thread showed that even at 0 wait time, the\nclient seems to introduce ~3ms wait time overhead on average. 
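For what it is worth, here is the arithmetic behind that estimate spelled out, with the caveat that the ~3 ms service time and ~3 ms client overhead are rough figures taken from this thread, not measured constants.

/* Back-of-the-envelope reconstruction of the estimate above. */
#include <stdio.h>

int
main(void)
{
    double service_ms = 3.0;  /* approx. server time per transaction */
    double client_ms  = 3.0;  /* approx. overhead the test client adds */
    int    hw_threads = 64;   /* hardware threads on the test box */

    /* Each connection finishes one transaction every (service + client) ms
     * but only needs the server for service ms of that, so it takes
     * (service + client) / service connections per hardware thread. */
    double per_thread = (service_ms + client_ms) / service_ms;   /* = 2 */

    printf("connections to saturate: %.0f\n", hw_threads * per_thread); /* 128 */
    return 0;
}
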
So it takes\nclose to 128 threads in each test to stop the linear scaling since the\naverage processing time seems to be about ~3ms.\nEither that, or the tests actually are running on a system capable of 128\nthreads.\n\n> \n> --\n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's PostGIS support!\n> \n> -\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 18 Mar 2009 14:16:01 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Wed, Mar 18, 2009 at 1:43 PM, Scott Carey <[email protected]> wrote:\n>>> Its worth ruling out given that even if the likelihood is small, the fix is\n>>> easy.  However, I don¹t see the throughput drop from peak as more\n>>> concurrency is added that is the hallmark of this problem < usually with a\n>>> lot of context switching and a sudden increase in CPU use per transaction.\n>>\n>> The problem is that the proposed \"fix\" bears a strong resemblence to\n>> attempting to improve your gas mileage by removing a few non-critical\n>> parts from your card, like, say, the bumpers, muffler, turn signals,\n>> windshield wipers, and emergency brake.\n>\n> The fix I was referring to as easy was using a connection pooler -- as a\n> reply to the previous post. Even if its a low likelihood that the connection\n> pooler fixes this case, its worth looking at.\n\nOh, OK. There seem to be some smart people saying that's a pretty\nhigh-likelihood fix. I thought you were talking about the proposed\nlocking change.\n\n>> While it's true that the car\n>> might be drivable in that condition (as long as nothing unexpected\n>> happens), you're going to have a hard time convincing the manufacturer\n>> to offer that as an options package.\n>\n> The original poster's request is for a config parameter, for experimentation\n> and testing by the brave. My own request was for that version of the lock to\n> prevent possible starvation but improve performance by unlocking all shared\n> at once, then doing all exclusives one at a time next, etc.\n\nThat doesn't prevent starvation in general, although it will for some workloads.\n\nAnyway, it seems rather pointless to add a config parameter that isn't\nat all safe, and adds overhead to a critical part of the system for\npeople who don't use it. After all, if you find that it helps, what\nare you going to do? Turn it on in production? I just don't see how\nthis is any good other than as a thought-experiment.\n\nAt any rate, as I understand it, even after Jignesh eliminated the\nwaits, he wasn't able to push his CPU utilization above 48%. Surely\nsomething's not right there. And he also said that when he added a\nknob to control the behavior, he got a performance improvement even\nwhen the knob was set to 0, which corresponds to the behavior we have\nalready anyway. So I'm very skeptical that there's something wrong\nwith either the system or the test. Until that's understood and\nfixed, I don't think that looking at the numbers is worth much.\n\n> I alluded to the three main ways of dealing with lock contention elsewhere.\n> Avoiding locks, making finer grained locks, and making locks faster.\n> All are worthy.  Some are harder to do than others.  Some have been heavily\n> tuned already.  Its a case by case basis.  
And regardless, the unfair lock\n> is a good test tool.\n\nIn view of the caveats above, I'll give that a firm maybe.\n\n...Robert\n", "msg_date": "Wed, 18 Mar 2009 17:25:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/18/09 17:16, Scott Carey wrote:\n> On 3/18/09 4:36 AM, \"Gregory Stark\" <[email protected]> wrote:\n>\n> \n>> \"Jignesh K. Shah\" <[email protected]> writes:\n>>\n>> \n>>> In next couple of weeks I plan to test the patch on a different x64 based\n>>> system to do a sanity testing on lower number of cores and also try out other\n>>> workloads ...\n>>> \n>> I'm actually more interested in the large number of cores but fewer processes\n>> and lower max_connections. If you set max_connections to 64 and eliminate the\n>> wait time you should, in theory, be able to get 100% cpu usage. It would be\n>> very interesting to track down the contention which is preventing that.\n>> \n>\n> My previous calculation in this thread showed that even at 0 wait time, the\n> client seems to introduce ~3ms wait time overhead on average. So it takes\n> close to 128 threads in each test to stop the linear scaling since the\n> average processing time seems to be about ~3ms.\n> Either that, or the tests actually are running on a system capable of 128\n> threads.\n>\n> \n\nNope 64 threads for sure .. Verified it number of times ..\n\n-Jignesh\n\n>> --\n>> Gregory Stark\n>> EnterpriseDB http://www.enterprisedb.com\n>> Ask me about EnterpriseDB's PostGIS support!\n>>\n>> -\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>> \n>\n>\n> -\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n\n\n\n\n\n\nOn 03/18/09 17:16, Scott Carey wrote:\n\nOn 3/18/09 4:36 AM, \"Gregory Stark\" <[email protected]> wrote:\n\n \n\n\n\"Jignesh K. Shah\" <[email protected]> writes:\n\n \n\nIn next couple of weeks I plan to test the patch on a different x64 based\nsystem to do a sanity testing on lower number of cores and also try out other\nworkloads ...\n \n\nI'm actually more interested in the large number of cores but fewer processes\nand lower max_connections. If you set max_connections to 64 and eliminate the\nwait time you should, in theory, be able to get 100% cpu usage. It would be\nvery interesting to track down the contention which is preventing that.\n \n\n\nMy previous calculation in this thread showed that even at 0 wait time, the\nclient seems to introduce ~3ms wait time overhead on average. So it takes\nclose to 128 threads in each test to stop the linear scaling since the\naverage processing time seems to be about ~3ms.\nEither that, or the tests actually are running on a system capable of 128\nthreads.\n\n \n\n\nNope 64 threads for sure .. 
Verified it number of times ..\n\n-Jignesh\n\n\n\n\n--\n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n\n-\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n \n\n\n\n-\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 18 Mar 2009 17:57:25 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 03/18/09 17:25, Robert Haas wrote:\n> On Wed, Mar 18, 2009 at 1:43 PM, Scott Carey <[email protected]> wrote:\n> \n>>>> Its worth ruling out given that even if the likelihood is small, the fix is\n>>>> easy. However, I don�t see the throughput drop from peak as more\n>>>> concurrency is added that is the hallmark of this problem < usually with a\n>>>> lot of context switching and a sudden increase in CPU use per transaction.\n>>>> \n>>> The problem is that the proposed \"fix\" bears a strong resemblence to\n>>> attempting to improve your gas mileage by removing a few non-critical\n>>> parts from your card, like, say, the bumpers, muffler, turn signals,\n>>> windshield wipers, and emergency brake.\n>>> \n>> The fix I was referring to as easy was using a connection pooler -- as a\n>> reply to the previous post. Even if its a low likelihood that the connection\n>> pooler fixes this case, its worth looking at.\n>> \n>\n> Oh, OK. There seem to be some smart people saying that's a pretty\n> high-likelihood fix. I thought you were talking about the proposed\n> locking change.\n>\n> \n>>> While it's true that the car\n>>> might be drivable in that condition (as long as nothing unexpected\n>>> happens), you're going to have a hard time convincing the manufacturer\n>>> to offer that as an options package.\n>>> \n>> The original poster's request is for a config parameter, for experimentation\n>> and testing by the brave. My own request was for that version of the lock to\n>> prevent possible starvation but improve performance by unlocking all shared\n>> at once, then doing all exclusives one at a time next, etc.\n>> \n>\n> That doesn't prevent starvation in general, although it will for some workloads.\n>\n> Anyway, it seems rather pointless to add a config parameter that isn't\n> at all safe, and adds overhead to a critical part of the system for\n> people who don't use it. After all, if you find that it helps, what\n> are you going to do? Turn it on in production? I just don't see how\n> this is any good other than as a thought-experiment.\n> \n\nActually the patch I submitted shows no overhead from what I have seen \nand I think it is useful depending on workloads where it can be turned \non even on production.\n> At any rate, as I understand it, even after Jignesh eliminated the\n> waits, he wasn't able to push his CPU utilization above 48%. Surely\n> something's not right there. And he also said that when he added a\n> knob to control the behavior, he got a performance improvement even\n> when the knob was set to 0, which corresponds to the behavior we have\n> already anyway. So I'm very skeptical that there's something wrong\n> with either the system or the test. 
Until that's understood and\n> fixed, I don't think that looking at the numbers is worth much.\n>\n> \n\nI dont think anything is majorly wrong in my system.. Sometimes it is \nPostgreSQL locks in play and sometimes it can be OS/system related locks \nin play (network, IO, file system, etc). Right now in my patch after I \nfix waiting procarray problem other PostgreSQL locks comes into play: \nCLogControlLock, WALInsertLock , etc. Right now out of the box we have \nno means of tweaking something in production if you do land in that \nproblem. With the patch there is means of doing knob control to tweak \nthe bottlenecks of Locks for the main workload for which it is put in \nproduction.\n\nI still haven't seen any downsides with the patch yet other than \nhighlighting other bottlenecks in the system. (For example I haven't \nseen a run where the tpm on my workload decreases as you increase the \nnumber) What I am suggesting is run the patch and see if you find a \nworkload where you see a downside in performance and the lock statistics \noutput to see if it is pushing the bottleneck elsewhere more likely \nWALInsertLock or CLogControlBlock. If yes then this patch gives you the \nright tweaking opportunity to reduce stress on ProcArrayLock for a \nworkload while still not seriously stressing WALInsertLock or \nCLogControlBlock.\n\nRight now.. the standard answer applies.. nope you are running the wrong \nworkload for PostgreSQL, use a connection pooler or your own application \nlogic. Or maybe.. you have too many users for PostgreSQL use some \nproprietary database.\n\n-Jignesh\n\n\n\n\n>> I alluded to the three main ways of dealing with lock contention elsewhere.\n>> Avoiding locks, making finer grained locks, and making locks faster.\n>> All are worthy. Some are harder to do than others. Some have been heavily\n>> tuned already. Its a case by case basis. And regardless, the unfair lock\n>> is a good test tool.\n>> \n>\n> In view of the caveats above, I'll give that a firm maybe.\n>\n> ...Robert\n> \n\n\n\n\n\n\n\n\nOn 03/18/09 17:25, Robert Haas wrote:\n\nOn Wed, Mar 18, 2009 at 1:43 PM, Scott Carey <[email protected]> wrote:\n \n\n\n\nIts worth ruling out given that even if the likelihood is small, the fix is\neasy.  However, I don¹t see the throughput drop from peak as more\nconcurrency is added that is the hallmark of this problem < usually with a\nlot of context switching and a sudden increase in CPU use per transaction.\n \n\nThe problem is that the proposed \"fix\" bears a strong resemblence to\nattempting to improve your gas mileage by removing a few non-critical\nparts from your card, like, say, the bumpers, muffler, turn signals,\nwindshield wipers, and emergency brake.\n \n\nThe fix I was referring to as easy was using a connection pooler -- as a\nreply to the previous post. Even if its a low likelihood that the connection\npooler fixes this case, its worth looking at.\n \n\n\nOh, OK. There seem to be some smart people saying that's a pretty\nhigh-likelihood fix. I thought you were talking about the proposed\nlocking change.\n\n \n\n\nWhile it's true that the car\nmight be drivable in that condition (as long as nothing unexpected\nhappens), you're going to have a hard time convincing the manufacturer\nto offer that as an options package.\n \n\nThe original poster's request is for a config parameter, for experimentation\nand testing by the brave. 
My own request was for that version of the lock to\nprevent possible starvation but improve performance by unlocking all shared\nat once, then doing all exclusives one at a time next, etc.\n \n\n\nThat doesn't prevent starvation in general, although it will for some workloads.\n\nAnyway, it seems rather pointless to add a config parameter that isn't\nat all safe, and adds overhead to a critical part of the system for\npeople who don't use it. After all, if you find that it helps, what\nare you going to do? Turn it on in production? I just don't see how\nthis is any good other than as a thought-experiment.\n \n\n\nActually the patch I submitted shows no overhead from what I have seen\nand I think it is useful depending on workloads where it can be turned\non  even on production. \n\n\nAt any rate, as I understand it, even after Jignesh eliminated the\nwaits, he wasn't able to push his CPU utilization above 48%. Surely\nsomething's not right there. And he also said that when he added a\nknob to control the behavior, he got a performance improvement even\nwhen the knob was set to 0, which corresponds to the behavior we have\nalready anyway. So I'm very skeptical that there's something wrong\nwith either the system or the test. Until that's understood and\nfixed, I don't think that looking at the numbers is worth much.\n\n \n\n\nI dont think anything is majorly wrong in my system.. Sometimes it is\nPostgreSQL locks in play and sometimes it can be OS/system related\nlocks in play (network, IO, file system, etc).  Right now in my patch\nafter I fix waiting procarray  problem other PostgreSQL locks comes\ninto play: CLogControlLock, WALInsertLock , etc.  Right now out of the\nbox we have no means of tweaking something in production if you do land\nin that problem. With the patch there is means of doing knob control to\ntweak the bottlenecks of Locks for the main workload for which it is\nput in production.\n\nI still haven't seen any downsides with the patch yet other than\nhighlighting other bottlenecks in the system. (For example I haven't\nseen a run where the tpm on my workload decreases as you increase the\nnumber) What I am suggesting is run the patch and see if you find a\nworkload where you see a downside in performance and the lock\nstatistics output to see if it is pushing the bottleneck elsewhere more\nlikely WALInsertLock or CLogControlBlock. If yes then this patch gives\nyou the right tweaking opportunity to reduce stress on ProcArrayLock\nfor a workload while still not seriously stressing WALInsertLock or\nCLogControlBlock.\n\nRight now.. the standard answer applies.. nope you are running the\nwrong workload for PostgreSQL, use a connection pooler or your own\napplication logic. Or maybe.. you have too many users for PostgreSQL\nuse some proprietary database.\n\n-Jignesh\n\n\n\n\n\n\n\nI alluded to the three main ways of dealing with lock contention elsewhere.\nAvoiding locks, making finer grained locks, and making locks faster.\nAll are worthy.  Some are harder to do than others.  Some have been heavily\ntuned already.  Its a case by case basis.  And regardless, the unfair lock\nis a good test tool.\n \n\n\nIn view of the caveats above, I'll give that a firm maybe.\n\n...Robert", "msg_date": "Wed, 18 Mar 2009 18:11:28 -0400", "msg_from": "\"Jignesh K. 
Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Wed, 2009-03-18 at 16:26 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Mon, 2009-03-16 at 16:26 +0000, Matthew Wakeling wrote:\n> >> One possibility would be for the locks to alternate between exclusive\n> >> and \n> >> shared - that is:\n> >> \n> >> 1. Take a snapshot of all shared waits, and grant them all -\n> >> thundering\n> >> herd style.\n> >> 2. Wait until ALL of them have finished, granting no more.\n> >> 3. Take a snapshot of all exclusive waits, and grant them all, one by\n> >> one.\n> >> 4. Wait until all of them have been finished, granting no more.\n> >> 5. Back to (1)\n> \n> > I agree with that, apart from the \"granting no more\" bit.\n> \n> > Currently we queue up exclusive locks, but there is no need to since for\n> > ProcArrayLock commits are all changing different data.\n> \n> > The most useful behaviour is just to have two modes:\n> > * exclusive-lock held - all other x locks welcome, s locks queue\n> > * shared-lock held - all other s locks welcome, x locks queue\n> \n> My goodness, it seems people have forgotten about the \"lightweight\"\n> part of the LWLock design.\n\n\"Lightweight\" is only useful if it fits purpose. If the LWlock design\ndoesn't fit all cases, especially with critical lock types, then we can\nhave special cases. We have both spinlocks and LWlocks, plus we split\nhash tables into multiple lock partitions. If we have 3 types of\nlightweight locking, why not consider having 4?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 18 Mar 2009 23:06:56 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Wed, 2009-03-18 at 13:49 +0000, Matthew Wakeling wrote:\n> On Wed, 18 Mar 2009, Jignesh K. Shah wrote:\n> > I thought about that.. Except without putting a restriction a huge queue will cause lot of time spent in manipulating the lock\n> > list every time. One more thing will be to maintain two list shared and exclusive and round robin through them for every time you\n> > access the list so manipulation is low.. But the best thing is to allow flexibility to change the algorithm since some workloads\n> > may work fine with one and others will NOT. The flexibility then allows to tinker for those already reaching the limits.\n> \n> Yeah, having two separate queues is the obvious way of doing this. It \n> would make most operations really trivial. Just wake everything in the \n> shared queue at once, and you can throw it away wholesale and allocate a \n> new queue. It avoids a whole lot of queue manipulation.\n\nYes, that sounds good.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 18 Mar 2009 23:07:34 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Robert Haas wrote:\n> > The original poster's request is for a config parameter, for experimentation\n> > and testing by the brave. 
My own request was for that version of the lock to\n> > prevent possible starvation but improve performance by unlocking all shared\n> > at once, then doing all exclusives one at a time next, etc.\n> \n> That doesn't prevent starvation in general, although it will for some workloads.\n> \n> Anyway, it seems rather pointless to add a config parameter that isn't\n> at all safe, and adds overhead to a critical part of the system for\n> people who don't use it. After all, if you find that it helps, what\n> are you going to do? Turn it on in production? I just don't see how\n> this is any good other than as a thought-experiment.\n\nWe prefer things to be auto-tuned, and if not, it should be clear\nhow/when to set the configuration parameter.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 19 Mar 2009 13:37:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "> Actually the patch I submitted shows no overhead from what I have seen and I\n> think it is useful depending on workloads where it can be turned on  even on\n> production.\n\nWell, unless I'm misunderstanding something, waking all waiters every\ntime could lead to arbitrarily long delays for writers on mostly\nread-only workloads... and by arbitrarily along, we mean to say\n\"potentially just about forever\". That doesn't sound safe for\nproduction to me.\n\n> I dont think anything is majorly wrong in my system.. Sometimes it is\n> PostgreSQL locks in play and sometimes it can be OS/system related locks in\n> play (network, IO, file system, etc).  Right now in my patch after I fix\n> waiting procarray  problem other PostgreSQL locks comes into play:\n> CLogControlLock, WALInsertLock , etc.  Right now out of the box we have no\n> means of tweaking something in production if you do land in that problem.\n> With the patch there is means of doing knob control to tweak the bottlenecks\n> of Locks for the main workload for which it is put in production.\n\nI'll reiterate my previous objection: I think your approach is too\nsimplistic. I think Tom said it the best: a lot of work has gone into\nmaking the locking mechanism lightweight and safe. I'm pretty\ndoubtful that you're going to find a change that is still safe, but\nperforms much better. The discussions by Heikki, Simon, and others\nabout changing the way locks are used or inventing new kinds of locks\nseem much more promising to me.\n\n> Right now.. the standard answer applies.. nope you are running the wrong\n> workload for PostgreSQL, use a connection pooler or your own application\n> logic. Or maybe.. you have too many users for PostgreSQL use some\n> proprietary database.\n\nWell I certainly agree that we need to get away from that mentality,\nalthough there's nothing particularly evil about a connection\npooler... 
it might not be suitable for every workload, but you haven't\nspecified why one couldn't or shouldn't be used in the situation\nyou're trying to simulate here.\n\n...Robert\n", "msg_date": "Thu, 19 Mar 2009 16:49:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/18/09 2:25 PM, \"Robert Haas\" <[email protected]> wrote:\n\n> On Wed, Mar 18, 2009 at 1:43 PM, Scott Carey <[email protected]> wrote:\n>>>> Its worth ruling out given that even if the likelihood is small, the fix is\n>>>> easy.  However, I don¹t see the throughput drop from peak as more\n>>>> concurrency is added that is the hallmark of this problem < usually with a\n>>>> lot of context switching and a sudden increase in CPU use per transaction.\n>>> \n>>> The problem is that the proposed \"fix\" bears a strong resemblence to\n>>> attempting to improve your gas mileage by removing a few non-critical\n>>> parts from your card, like, say, the bumpers, muffler, turn signals,\n>>> windshield wipers, and emergency brake.\n>> \n>> The fix I was referring to as easy was using a connection pooler -- as a\n>> reply to the previous post. Even if its a low likelihood that the connection\n>> pooler fixes this case, its worth looking at.\n> \n> Oh, OK. There seem to be some smart people saying that's a pretty\n> high-likelihood fix. I thought you were talking about the proposed\n> locking change.\n> \n\nSorry for the confusion, I was countering the contention that a connection\npool would fix all of this, and gave that low likelihood of removing the\nlock contention given the results of the first set of data and its linear\nramp-up.\n\nI frankly think it is extremely unlikely given the test results that\nfiguring out how to run this with 64 threads (instead of the current linear\nramp up to 128) will give 100% CPU utilization.\nAny system that gets 100% CPU utilization with CPU_COUNT concurrent\nprocesses or threads and only 35% with CPU_COUNT*2 would be seriously flawed\nanyway... The only reasonable reasons for this I can think of would be if\neach one used enough memory to cause swapping or something else that forces\ndisk i/o. \n\nGranted, that Postgres isn't perfect and there is overhead for idle, tiny\nconnections, handling CPU_COUNT*2 connections with half idle and half active\nas the current test case does, does not invalidate the test -- it makes it\nrealistic.\nA 64 thread test case that can spend zero time in the client would be useful\nto provide more information however.\n\n>>> While it's true that the car\n>>> might be drivable in that condition (as long as nothing unexpected\n>>> happens), you're going to have a hard time convincing the manufacturer\n>>> to offer that as an options package.\n>> \n>> The original poster's request is for a config parameter, for experimentation\n>> and testing by the brave. My own request was for that version of the lock to\n>> prevent possible starvation but improve performance by unlocking all shared\n>> at once, then doing all exclusives one at a time next, etc.\n> \n> That doesn't prevent starvation in general, although it will for some\n> workloads.\n\nI'm pretty sure it would, it would guarantee that you alternate between\nshared and exclusive. 
Although if the implementation lets shared lockers cut\nin line at the wrong time it would not be.\n\n> \n> Anyway, it seems rather pointless to add a config parameter that isn't\n> at all safe, and adds overhead to a critical part of the system for\n> people who don't use it. After all, if you find that it helps, what\n> are you going to do? Turn it on in production? I just don't see how\n> this is any good other than as a thought-experiment.\n\nThe safety is yet to be determined. The overhead is yet to be determined.\nYou are assuming the worst case for both.\nIf it turns out that the current implementation can cause starvation\nalready, which the parallel discussion here indicates, that makes your\nstarvation concern an issue for both.\n\n> \n> At any rate, as I understand it, even after Jignesh eliminated the\n> waits, he wasn't able to push his CPU utilization above 48%. Surely\n> something's not right there. And he also said that when he added a\n> knob to control the behavior, he got a performance improvement even\n> when the knob was set to 0, which corresponds to the behavior we have\n> already anyway. So I'm very skeptical that there's something wrong\n> with either the system or the test. Until that's understood and\n> fixed, I don't think that looking at the numbers is worth much.\n> \n\nThe next bottleneck at 48% CPU is definitely very interesting. However, it\nhas an explanation: the test blocked on other locks.\n\nThe observation about the \"old\" algorithm with his patch going faster should\nbe understood to a point, but you don't need to understand everything in\norder to show that it is safe or better. There are changes made though that\nmay explain that. In Jignesh's words:\n\n\" still using default logic\n(thought different way I compare sequential using fields from the\nprevious proc structure instead of comparing with constant boolean) \"\n\nIt is possible that that minor change did some cache locality and/or branch\nprediction trick on the processor he has. I've seen plenty of strange\neffects caused by tiny changes before. Its expected to find the unexpected.\nIt will be useful to know what caused the improvement (was it the above?)\nbut we don't need to know why it changed -- that may be hard to get at\nwithout looking at the assembly code output and being an expert on that\nprocessor/compiler.\n\nOne of the trickiest things about locks, is that the little details are VERY\nhardware dependant, and the hardware can change the tradeoffs significantly\nfrom generation to generation (e.g. Intel's next x86 chips have a faster\ncompare and swap operation, and a special instruction for \"spinning\" that\ndoesn't spin and allows the \"spinner\" to not compete for execution resources\nwith other hardware threads, so spin locks are more viable and all locks and\natomics are faster).\n\n>> I alluded to the three main ways of dealing with lock contention elsewhere.\n>> Avoiding locks, making finer grained locks, and making locks faster.\n>> All are worthy.  Some are harder to do than others.  Some have been heavily\n>> tuned already.  Its a case by case basis.  
And regardless, the unfair lock\n>> is a good test tool.\n> \n> In view of the caveats above, I'll give that a firm maybe.\n> \n> ...Robert\n> \n\nMy main point here, is that it clearly shows what the 'next' bottleneck is,\nso at minimum it can be used to estimate what the impact of lock changes or\navoiding locks may be on various configurations and test scenarios.\n\n\n", "msg_date": "Thu, 19 Mar 2009 13:57:03 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/19/09 10:37 AM, \"Bruce Momjian\" <[email protected]> wrote:\n\n> Robert Haas wrote:\n>>> The original poster's request is for a config parameter, for experimentation\n>>> and testing by the brave. My own request was for that version of the lock to\n>>> prevent possible starvation but improve performance by unlocking all shared\n>>> at once, then doing all exclusives one at a time next, etc.\n>> \n>> That doesn't prevent starvation in general, although it will for some\n>> workloads.\n>> \n>> Anyway, it seems rather pointless to add a config parameter that isn't\n>> at all safe, and adds overhead to a critical part of the system for\n>> people who don't use it. After all, if you find that it helps, what\n>> are you going to do? Turn it on in production? I just don't see how\n>> this is any good other than as a thought-experiment.\n> \n> We prefer things to be auto-tuned, and if not, it should be clear\n> how/when to set the configuration parameter.\n\nOf course. The proposal was to leave it at the default, and obviously\ndocument that it is not likely to be used. Its 1000x safer than fsync=off .\n. .\n\n> \n> --\n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n> \n> + If your life is a hard drive, Christ can be your backup. +\n> \n\n", "msg_date": "Thu, 19 Mar 2009 13:58:44 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn 3/19/09 1:49 PM, \"Robert Haas\" <[email protected]> wrote:\n\n>> Actually the patch I submitted shows no overhead from what I have seen and I\n>> think it is useful depending on workloads where it can be turned on  even on\n>> production.\n> \n> Well, unless I'm misunderstanding something, waking all waiters every\n> time could lead to arbitrarily long delays for writers on mostly\n> read-only workloads... and by arbitrarily along, we mean to say\n> \"potentially just about forever\". That doesn't sound safe for\n> production to me.\n\nThe other discussion going on indicates that that condition already can\nhappen, shared can always currently cut in line while other shared locks\nhave the lock, though I don't understand all the details.\nAlso, the tests on the 'wake all' version clearly aren't starving anything\nin a load test with thousands of threads and very heavy lock contention,\nmostly for shared locks.\nInstead throughput increases and all wait times decrease.\nThere are several other proposals to make starvation less possible (wake\nonly shared and other proposals that alternate between shared and exclusive;\nwaking only X sized chunks, etc -- its all just investigation into fixing\nwhat can be improved on -- solutions that are easily testable should not\njust be thrown out: the first ones were just the easiest to try).\n\n\n> \n>> I dont think anything is majorly wrong in my system.. 
Sometimes it is\n>> PostgreSQL locks in play and sometimes it can be OS/system related locks in\n>> play (network, IO, file system, etc).  Right now in my patch after I fix\n>> waiting procarray  problem other PostgreSQL locks comes into play:\n>> CLogControlLock, WALInsertLock , etc.  Right now out of the box we have no\n>> means of tweaking something in production if you do land in that problem.\n>> With the patch there is means of doing knob control to tweak the bottlenecks\n>> of Locks for the main workload for which it is put in production.\n> \n> I'll reiterate my previous objection: I think your approach is too\n> simplistic. I think Tom said it the best: a lot of work has gone into\n> making the locking mechanism lightweight and safe. I'm pretty\n> doubtful that you're going to find a change that is still safe, but\n> performs much better. The discussions by Heikki, Simon, and others\n> about changing the way locks are used or inventing new kinds of locks\n> seem much more promising to me.\n\nThe data shows that in this use case, it is not lightweight enough.\nEnhancing or avoiding a few of these larger global locks is necessary to\nscale up to larger systems.\n\nThe other discussions are a direct result of this and excellent -- I don't\nsee the separation you are defining.\nBut If I understand correctly what was said in that other discussion, the\ncurrent lock implementation can starve out both exclusive access and some\nshared too. If it hasn't happened in this version, its not likely to happen\nin the 'wake all' version either, especially since it has been shown to\ndecrease contention.\n\nSometimes, the simplest solution is a good one. I can't tell you how many\ntimes I've seen a ton of sophisticated enhancements / proposals to improve\nscalability or performance be defeated by the simpler solution that most\nengineers thought was not good enough until faced with empirical evidence.\n\nThat evidence is what should guide this.\n\n> \n>> Right now.. the standard answer applies.. nope you are running the wrong\n>> workload for PostgreSQL, use a connection pooler or your own application\n>> logic. Or maybe.. you have too many users for PostgreSQL use some\n>> proprietary database.\n> \n> Well I certainly agree that we need to get away from that mentality,\n> although there's nothing particularly evil about a connection\n> pooler... it might not be suitable for every workload, but you haven't\n> specified why one couldn't or shouldn't be used in the situation\n> you're trying to simulate here.\n> \n> ...Robert\n> \n\nThere's nothing evil about a pooler, and there is nothing evil about making\nPostgres' concurrency overhead a lot lower either.\n\n", "msg_date": "Thu, 19 Mar 2009 14:43:21 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nRobert Haas wrote:\n>> Actually the patch I submitted shows no overhead from what I have seen and I\n>> think it is useful depending on workloads where it can be turned on even on\n>> production.\n>> \n>\n> Well, unless I'm misunderstanding something, waking all waiters every\n> time could lead to arbitrarily long delays for writers on mostly\n> read-only workloads... and by arbitrarily along, we mean to say\n> \"potentially just about forever\". That doesn't sound safe for\n> production to me.\n>\n> \n\nHi Robert,\nThe patch I submmitted does not do any manipulation with the list. 
All \nit changes is gives the flexibility to change how many to wake up at one \ngo. 0 is default which wakes up only 1 X (Exclusive) at a time or all \nsequential S (Shared). Changing the value to 1 will wake up all \nsequential X or all sequential S as they are in the queue (no \nmanipulation). Values 2 and higher upto 32 wakes up the next n waiter in \nthe queue (X or S) AS they are in the queue. It absolutely does no \nmanipulation and hence there is no overhead. Absolutely safe for \nProduction as Scott mentioned there are other things in postgresql.conf \nwhich can be more dangerous than this tunable.\n\n>> I dont think anything is majorly wrong in my system.. Sometimes it is\n>> PostgreSQL locks in play and sometimes it can be OS/system related locks in\n>> play (network, IO, file system, etc). Right now in my patch after I fix\n>> waiting procarray problem other PostgreSQL locks comes into play:\n>> CLogControlLock, WALInsertLock , etc. Right now out of the box we have no\n>> means of tweaking something in production if you do land in that problem.\n>> With the patch there is means of doing knob control to tweak the bottlenecks\n>> of Locks for the main workload for which it is put in production.\n>> \n>\n> I'll reiterate my previous objection: I think your approach is too\n> simplistic. I think Tom said it the best: a lot of work has gone into\n> making the locking mechanism lightweight and safe. I'm pretty\n> doubtful that you're going to find a change that is still safe, but\n> performs much better. The discussions by Heikki, Simon, and others\n> about changing the way locks are used or inventing new kinds of locks\n> seem much more promising to me.\n>\n> \nThat is the beauty : The approach is simplistic but very effective. Lot \nof work has gone which is more incremental and this is another one of \nthose incremental changes which allows minor tweaks which the workload \nmay like very much and perform very well.. Performance tuning game is \nalmost like harmonic frequency. I agree that other kinds of locks seem \nmore promising. I had infact proposed one last year too:\nhttp://archives.postgresql.org//pgsql-hackers/2008-06/msg00291.php\n\nSeriously speaking a change will definitely cannot be done before 8.5 \ntime frame while this one is simple enough to go for 8.4.\nThe best thing one can contribute to the thread is to actually try the \npatch on the test system and run your own tests to see how it behaves.\n\n-Jignesh\n\n>> Right now.. the standard answer applies.. nope you are running the wrong\n>> workload for PostgreSQL, use a connection pooler or your own application\n>> logic. Or maybe.. you have too many users for PostgreSQL use some\n>> proprietary database.\n>> \n>\n> Well I certainly agree that we need to get away from that mentality,\n> although there's nothing particularly evil about a connection\n> pooler... it might not be suitable for every workload, but you haven't\n> specified why one couldn't or shouldn't be used in the situation\n> you're trying to simulate here.\n>\n> ...Robert\n> \n", "msg_date": "Thu, 19 Mar 2009 19:12:18 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Scott Carey wrote:\n> On 3/19/09 10:37 AM, \"Bruce Momjian\" <[email protected]> wrote:\n> \n> > Robert Haas wrote:\n> >>> The original poster's request is for a config parameter, for experimentation\n> >>> and testing by the brave. 
My own request was for that version of the lock to\n> >>> prevent possible starvation but improve performance by unlocking all shared\n> >>> at once, then doing all exclusives one at a time next, etc.\n> >> \n> >> That doesn't prevent starvation in general, although it will for some\n> >> workloads.\n> >> \n> >> Anyway, it seems rather pointless to add a config parameter that isn't\n> >> at all safe, and adds overhead to a critical part of the system for\n> >> people who don't use it. After all, if you find that it helps, what\n> >> are you going to do? Turn it on in production? I just don't see how\n> >> this is any good other than as a thought-experiment.\n> > \n> > We prefer things to be auto-tuned, and if not, it should be clear\n> > how/when to set the configuration parameter.\n> \n> Of course. The proposal was to leave it at the default, and obviously\n> document that it is not likely to be used. Its 1000x safer than fsync=off .\n\nRight, but even if people don't use it, people tuning their systems have\nto understand the setting to know if they should use it, so there is a\ncost even if a parameter is never used by anyone.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 19 Mar 2009 19:27:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Thu, Mar 19, 2009 at 5:43 PM, Scott Carey <[email protected]> wrote:\n>> Well, unless I'm misunderstanding something, waking all waiters every\n>> time could lead to arbitrarily long delays for writers on mostly\n>> read-only workloads... and by arbitrarily along, we mean to say\n>> \"potentially just about forever\".  That doesn't sound safe for\n>> production to me.\n>\n> The other discussion going on indicates that that condition already can\n> happen, shared can always currently cut in line while other shared locks\n> have the lock, though I don't understand all the details.\n\nNo. If the first process waiting for an LWLock wants an exclusive\nlock, we wake up that process, and only that process. If the first\nprocess waiting for an LWLock wants a shared lock, we wake up that\nprocess, and the processes which it follow it in the queue that also\nwant shared locks. But if we come to a process which holds an\nexclusive lock, we stop. So if the wait queue looks like this\nSSSXSSSXSSS, then the first three processes will be woken up, but the\nremainder will not. The new wait queue will look like this: XSSSXSSS\n- and the exclusive waiter at the head of the queue is guaranteed to\nget the next turn.\n\nIf you wake up everybody, then the new queue will look like this: XXX.\n Superficially that's a good thing because you let 9 guys run rather\nthan 3. But suppose that while those 9 guys hold the lock, twenty\nmore shared locks join the end of the queue, so it looks like this\nXXXSSSSSSSSSSSSSSSSSSSS. Now when the last of the 9 guys releases the\nlock, we wake up everybody again, and odds are good that since there\nare a lot more S guys than X guys, once of the S guys will grab the\nlock first. The other S guys will all acquire the lock too, but the X\nguys are frozen out. 
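To make the two wake-up behaviours being compared here concrete, here is a
minimal C sketch of the release-time step. The structure and helper names are
hypothetical -- this is not the actual lwlock.c source, only the rule as
described above:

#include <stdbool.h>

typedef struct Waiter
{
    struct Waiter *next;
    bool           wantExclusive;   /* X waiter if true, S waiter if false */
} Waiter;

extern void wake_up(Waiter *w);     /* hypothetical: marks the waiter runnable */

/* Current rule: wake the head; if it wants a shared lock, also wake the
 * consecutive shared waiters behind it, stopping at the first exclusive
 * waiter.  A queue of S S S X S S S X S S S wakes only the first three. */
static void
wake_current_rule(Waiter **head)
{
    Waiter *w = *head;

    if (w == NULL)
        return;
    if (w->wantExclusive)
    {
        *head = w->next;
        wake_up(w);
        return;
    }
    while (w != NULL && !w->wantExclusive)
    {
        Waiter *next = w->next;

        wake_up(w);
        w = next;
    }
    *head = w;          /* the first exclusive waiter stays at the head */
}

/* Wake-everybody variant: release the whole queue at once; any waiter that
 * then loses the race for the lock simply re-queues and waits again. */
static void
wake_everybody(Waiter **head)
{
    Waiter *w = *head;

    *head = NULL;
    while (w != NULL)
    {
        Waiter *next = w->next;

        wake_up(w);
        w = next;
    }
}

With the second variant it is the re-queued exclusive waiters that keep
losing to newly arriving shared requests.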
This whole cycle can repeat: by the time those\n20 guys are done with their S locks, there can be 20 more guys waiting\nfor S locks, and once again when we wake everyone up one of the new S\nguys will probably grab it again. This can continue for an\nindefinitely long period of time.\n\nNow, of course, EVENTUALLY one of the X guys will probably beat out\nall the S-lock waiters and he'll get to do his thing. But there's no\nupper bound on how long this can take, and if the rate at which S-lock\nwaiters are joining the queue is much higher than the rate at which\nX-lock waiters are joining the queue, it may be quite a long time.\nEven if the overall system throughput is better with this change, the\nfact that the guys who need the X-lock get seriously shafted is a\nreally serious problem. If I start a million transactions on my\nsystem and they all complete in average of 1 second each, that sounds\npretty good - unless it's because 999,999 of them completed almost\ninstantaneously and the last one took a million seconds.\n\nNow, I'm not familiar enough with the use of ProcArrayLock to suggest\na workload that will produce this pathological behavior in PG. But,\nI'm pretty confident based on what I know about locking in general\nthat they exist.\n\n> Also, the tests on the 'wake all' version clearly aren't starving anything\n> in a load test with thousands of threads and very heavy lock contention,\n> mostly for shared locks.\n> Instead throughput increases and all wait times decrease.\n\nOn the average, yes...\n\n> There are several other proposals to make starvation less possible (wake\n> only shared and other proposals that alternate between shared and exclusive;\n> waking only X sized chunks, etc -- its all just investigation into fixing\n> what can be improved on -- solutions that are easily testable should not\n> just be thrown out: the first ones were just the easiest to try).\n\nAlternating between shared and exclusive is safe. But a lot more\ntesting in a lot more situations would be needed to determine whether\nit is better, I think. Waking chunks of a certain size I believe will\nproduce a more complicated version of the problem described above.\n\n...Robert\n", "msg_date": "Thu, 19 Mar 2009 23:45:17 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "From: Robert Haas [[email protected]]\nSent: Thursday, March 19, 2009 8:45 PM\nTo: Scott Carey\n Cc: Jignesh K. Shah; Greg Smith; Kevin Grittner; [email protected]\n Subject: Re: [PERFORM] Proposal of tunable fix for scalability of 8.4\n> \n> >On Thu, Mar 19, 2009 at 5:43 PM, Scott Carey <[email protected]> wrote:\n> >> Well, unless I'm misunderstanding something, waking all waiters every\n> >> time could lead to arbitrarily long delays for writers on mostly\n> >> read-only workloads... and by arbitrarily along, we mean to say\n> >> \"potentially just about forever\". That doesn't sound safe for\n> >> production to me.\n> >\n> > The other discussion going on indicates that that condition already can\n> > happen, shared can always currently cut in line while other shared locks\n> > have the lock, though I don't understand all the details.\n>\n> No. If the first process waiting for an LWLock wants an exclusive\n> lock, we wake up that process, and only that process. 
If the first\n> process waiting for an LWLock wants a shared lock, we wake up that\n> process, and the processes which it follow it in the queue that also\n> want shared locks. But if we come to a process which holds an\n> exclusive lock, we stop. So if the wait queue looks like this\n> SSSXSSSXSSS, then the first three processes will be woken up, but the\n> remainder will not. The new wait queue will look like this: XSSSXSSS\n> - and the exclusive waiter at the head of the queue is guaranteed to\n> get the next turn.\n\nYour description (much of which I cut out) is exactly how I understood it until Simon Riggs' post which changed my view and understanding. Under that situation, waking all shared will leave all XXXXX at the front and hence alternate shared/exclusive/shared/exclusive as long as both types are contending. Simon's post changed my view. Below is some cut/paste from it:\nNOTE: things without a > in front here represent Simon until the ENDQUOTE:\n\nQUOTE -----------\nOn Wed, 2009-03-18 at 11:45 +0000, Matthew Wakeling wrote:\n> On Wed, 18 Mar 2009, Simon Riggs wrote:\n> > I agree with that, apart from the \"granting no more\" bit.\n> >\n> > The most useful behaviour is just to have two modes:\n> > * exclusive-lock held - all other x locks welcome, s locks queue\n> > * shared-lock held - all other s locks welcome, x locks queue\n> \n> The problem with making all other locks welcome is that there is a \n> possibility of starvation. Imagine a case where there is a constant stream \n> of shared locks - the exclusive locks may never actually get hold of the \n> lock under the \"all other shared locks welcome\" strategy. \n\nThat's exactly what happens now. \n\n----------\n > [Scott Carey] (Further down in Simon's post, a quote from months ago: )\n----------\n\"Each time a Shared request is dequeued, we\neffectively re-enable queue jumping, so a Shared request arriving during\nthat point will actually jump ahead of Shared requests that were unlucky\nenough to arrive while an Exclusive lock was held. Worse than that, the\nnew incoming Shared requests exacerbate the starvation, so the more\nnon-adjacent groups of Shared lock requests there are in the queue, the\nworse the starvation of the exclusive requestors becomes. We are\neffectively randomly starving some shared locks as well as exclusive\nlocks in the current scheme, based upon the state of the lock when they\nmake their request.\"\n\nENDQUOTE ( Simon Riggs, cut/paste by me. post from his post Wednesday 3/18 5:10 AM pacific time).\n------------------\n\nI read that to mean that what is happening now is that in ADDITION to your explanation of how the queue works, while a batch of shared locks are executing, NEW shared locks execute immediately and don't even queue. That is, there is shared request queue jumping. The queue operates as your description but not everythig queues. \nIt seems pretty conclusive if that is truthful -- that there is starvation possible in the current system. At this stage, it would seem that neither of us are experts on the current behavior, or that Simon is wrong, or that I completely misunderstood his comments above.\n\n> Now, of course, EVENTUALLY one of the X guys will probably beat out\n> all the S-lock waiters and he'll get to do his thing. 
But there's no\n> upper bound on how long this can take, and if the rate at which S-lock\n> waiters are joining the queue is much higher than the rate at which\n> X-lock waiters are joining the queue, it may be quite a long time.\n\nAnd the average expected time and distribution of those events can be statistically calculated and empirically measured. The fact that there is a chance at all is not as important as the magitude of the chance and the distribution of those probabilities. \n\n> Even if the overall system throughput is better with this change, the\n> fact that the guys who need the X-lock get seriously shafted is a\n> really serious problem. \n\nIf 'serious shafting' is so, yes! We only disagree on the current possibility of this and the magnitude/likelihood of it. \nBy Simon's comments above the starvation possiblility is already the case. I am merely using that discussion as evidence. It may be wrong, so in reality we agree overall but both don't have enough knowledge to go much beyond that. I think we can both agree that IF the current system is unfair, then the 'wake all' system is roughly as unfair, and perhaps even more fair and that testing evidence (averages and standar deviations too!) should guide us. If the current system is truly fair and cannot have starvation, then the 'wake all' setup would be a step backwards on that front. That is why my early comments on this were to wake only the shared or alternate.\n\n(I think an unfair simple 'wake all' lock is still useful for experimentation and testing and perhaps configuration --we may differ on that).\n\n> If I start a million transactions on my\n> system and they all complete in average of 1 second each, that sounds\n> pretty good - unless it's because 999,999 of them completed almost\n> instantaneously and the last one took a million seconds.\n\nMeasuring standard deviation / variance is always important. Averages alone are surely not good enough. Whether this is average time to commit a transaction (low level) or the average cost of a query plan (higher level), consistency is highly valuable. Better to have slightly longer average times and very high consistency than the opposite. \n\n> > Also, the tests on the 'wake all' version clearly aren't starving anything\n> > in a load test with thousands of threads and very heavy lock contention,\n> > mostly for shared locks.\n> > Instead throughput increases and all wait times decrease.\n\n> On the average, yes...\n\nI agree we would need more than the average to be confident. Although I am not opposed to letting a user decide between the two -- gaining performance and sacrificing some consistency. Its a common real-world tradeoff.\n\n> > There are several other proposals to make starvation less possible (wake\n> > only shared and other proposals that alternate between shared and exclusive;\n> > waking only X sized chunks, etc -- its all just investigation into fixing\n> > what can be improved on -- solutions that are easily testable should not\n> > just be thrown out: the first ones were just the easiest to try).\n>\n> Alternating between shared and exclusive is safe. But a lot more\n> testing in a lot more situations would be needed to determine whether\n> it is better, I think. Waking chunks of a certain size I believe will\n> produce a more complicated version of the problem described above.\n>\n> ...Robert\n\nThe alternating proposal is the most elegant and based on my experience should also perform well. 
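A rough sketch of one way to carry the alternating idea with two separate
wait lists -- hypothetical names, offered only as illustration and not
existing PostgreSQL code -- might be:

#include <stdbool.h>
#include <stddef.h>

typedef struct Waiter
{
    struct Waiter *next;
} Waiter;

typedef struct AltLock
{
    Waiter *shared_waiters;     /* pushed lock-free with compare-and-swap */
    Waiter *exclusive_waiters;  /* pushed lock-free with compare-and-swap */
} AltLock;

extern void wake_up(Waiter *w); /* hypothetical: marks the waiter runnable */

/* Queue a waiter without holding anything: CAS it onto the list head.
 * Uses the GCC __sync builtins.  (This builds the list in LIFO order;
 * a real wait queue would want to preserve FIFO order.) */
static void
push_waiter(Waiter **list, Waiter *w)
{
    Waiter *old;

    do
    {
        old = *list;
        w->next = old;
    } while (!__sync_bool_compare_and_swap(list, old, w));
}

/* At release time, atomically detach one whole side and wake all of it,
 * alternating between the shared and the exclusive side so that neither
 * can starve the other. */
static void
wake_one_side(AltLock *lock, bool wake_shared)
{
    Waiter **side = wake_shared ? &lock->shared_waiters
                                : &lock->exclusive_waiters;
    Waiter  *w;

    do
    {
        w = *side;
    } while (w != NULL && !__sync_bool_compare_and_swap(side, w, NULL));

    while (w != NULL)
    {
        Waiter *next = w->next;

        wake_up(w);
        w = next;
    }
}

In a real version the exclusive side would normally be handed the lock one
waiter at a time; the point of the sketch is only that the queueing itself
needs no lock.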
The two list solution for this is simpler and can probably be done without locking on the list adding with atomics (compare and set/swap). Appending to a linked list can be done lock-free safely as can atomically swapping out lists. Predominantly lock-free is the way to go for heavily contended situations like this. The proposal that compacts the list by freeing all shared, and compacts the exclusive remainders probably requires more locking and contention due to more complex list manipulation. I agree that the chunk version is probably more complicated than needed.\n\nOur disagreement here revolves around two things I believe: What the current functionality actually is, and how useful the brute force simple lock is as a tool and as a config option.", "msg_date": "Thu, 19 Mar 2009 23:01:05 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "________________________________________\nFrom: [email protected] [[email protected]] On Behalf Of Simon Riggs [[email protected]]\nSent: Wednesday, March 18, 2009 12:53 AM\nTo: Matthew Wakeling\nCc: [email protected]\nSubject: Re: [PERFORM] Proposal of tunable fix for scalability of 8.4\n\n> On Mon, 2009-03-16 at 16:26 +0000, Matthew Wakeling wrote:\n> > One possibility would be for the locks to alternate between exclusive\n> > and\n> > shared - that is:\n> >\n> > 1. Take a snapshot of all shared waits, and grant them all -\n> > thundering\n> > herd style.\n> > 2. Wait until ALL of them have finished, granting no more.\n> > 3. Take a snapshot of all exclusive waits, and grant them all, one by\n> > one.\n> > 4. Wait until all of them have been finished, granting no more.\n> > 5. Back to (1)\n>\n> I agree with that, apart from the \"granting no more\" bit.\n>\n> Currently we queue up exclusive locks, but there is no need to since for\n> ProcArrayLock commits are all changing different data.\n>\n> The most useful behaviour is just to have two modes:\n> * exclusive-lock held - all other x locks welcome, s locks queue\n> * shared-lock held - all other s locks welcome, x locks queue\n>\n> This *only* works for ProcArrayLock.\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n> \n\nI want to comment on an important distinction between these two variants. The \"granting no more\" bit WILL decrease performance under high contention. Here is my reasoning.\n\nWe have two \"two lists\" proposals. \n\nType A: allow line cutting (Simon, above):\n* exclusive-lock held and all exclusives process - all other NEW x locks welcome, s locks queue\n* shared-lock held and all shareds process- all other NEW s locks welcome, x locks queue\n\nType B: forbid line cutting (Matthew, above, modified to allow multiple exclusive for ProcArrayLock --\n for other types exclusive would be one at a time)\n* exclusive-lock held and all exclusives process - all NEW lock requests queue\n* shared-lock held and shareds process - all NEW lock requests queue\n\nA big benefit of the \"wake all\" proposal, is that a lot of access does not have to context switch out and back in. On a quick assessment, the type A above would lock and context switch even less than the wake-all (since exclusives don't go one at a time) but otherwise be similar. 
But this won't matter much if it is shared lock dominated.\nI would LOVE to have seen context switch rate numbers with the results so far, but many base unix tools don't show it by default (can get it from sar, rstat reports it) average # of context switches per transaction is an awesome measure of lock contention and lock efficiency. \n\nIn type A above, the ratio of requests that require a context switch is Q / (M + Q), where Q is the average queue size when the 'shared-exclusive' swap occrs and M is the average number of \"line cutters\".\n\nIn type B, the ratio of requests that must context switch is always == 1. Every request must queue and wait! This may perform worse than the current lock!\n\nOne way to guarantee some fairness is to compromise between the two. \n\nLets call this proposal C. Unfortunately, this is less elegant than the other two, since it has logic for both. It could be made tunable to be the complete spectrum though.\n* exclusive-lock held and all exclusives process - first N new X requests welcome, N+1 and later X requests and all shared locks queue.\n* shared-lock held and shareds process - first N new S requests welcom, N+1 and later S requests and all X locks queue\n\nSo, if shared locks are queuing and exclusive hold the lock and are operating, and another exclusive request arrives, it can cut in line only if it is one of the first N to do so before it will queue and wait and give shared locks their turn. \nThis counting condition can be done with an atomically incrementing integer using compare and set operations and no locks, and under heavy contention will reduce the number of context switches per operation to Q/(N + Q) where N is the number of 'line cutters' achieved and Q is the average queue size when the queued items are unlocked. Note how this is the same as the 'unbounded' equation with M above, except that N can never be greater than M (the 'natural' line cut count).\nSo for N = Q half are forced to context switch and half cut in line without a context switch. N can be tunable, and it can be a different number for shared and exclusive to bias towards one or the other if desired. ", "msg_date": "Thu, 19 Mar 2009 23:31:02 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Thu, 19 Mar 2009, Scott Carey wrote:\n> In type B, the ratio of requests that must context switch is always == \n> 1. Every request must queue and wait!\n\nA remarkably good point, although not completely correct. Every request \nthat arrives when the lock is held in any way already will queue and wait. \nRequests that arrive when the lock is free will run immediately. I admit \nit, this is a killer for this particular locking strategy.\n\nFirstly, let's say that if the lock is in shared mode, and there are no \nexclusive waiters, then incoming shared lockers can be allowed to process \nimmediately. That's just obvious. Strictly following your or my suggestion \nwould preclude that, forcing a queue every so often.\n\n> One way to guarantee some fairness is to compromise between the two.\n>\n> Lets call this proposal C. Unfortunately, this is less elegant than the \n> other two, since it has logic for both. 
It could be made tunable to be \n> the complete spectrum though.\n>\n> * exclusive-lock held and all exclusives process - first N new X \n> requests welcome, N+1 and later X requests and all shared locks queue.\n>\n> * shared-lock held and shareds process - first N new S requests welcom, \n> N+1 and later S requests and all X locks queue\n\nI like your solution. For now, let's just examine normal shared/exclusive \nlocks, not the ProcArrayLock. The question is, what is the ideal number \nfor N?\n\nWith your solution, N is basically a time limit, to prevent the lock from \ncompletely starving exclusive (or possibly shared) locks. If the shared \nlocks are processing, then either the incoming shared requests are \nfrequent, at which point N will be reached soon and force a switch to \nexclusive mode, or the shared requests are infrequent, at which point the \nlock should become free fairly soon. This means that having a count should \nbe sufficient as a \"time\" limit.\n\nSo, what is \"too unfair\"? I'm guessing N can be set really quite high, and \nit should definitely scale by the number of CPUs in the machine. Exact \nvalues are probably best determined by experiment, but I'd say something \nlike ten times the number of CPUs.\n\nAs for ProcArrayLock, it sounds like it is very much a special case. The \nstatement that the writers don't interfere with each other seems very \nstrange to me, and makes me wonder if the structure needs any locks at \nall, or at least can be very partitioned. Perhaps it could be implemented \nas a lock-free structure. But I don't know what the actual structure is, \nso I could be talking through my hat.\n\nMatthew\n\n-- \nSo, given 'D' is undeclared too, with a default of zero, C++ is equal to D.\n mnw21, commenting on the \"Surely the value of C++ is zero, but C is now 1\"\n response to \"No, C++ isn't equal to D. 'C' is undeclared [...] C++ should\n really be called 1\" response to \"C++ -- shouldn't it be called D?\"\n", "msg_date": "Fri, 20 Mar 2009 15:28:44 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Scott Carey escribi�:\n\n> Your description (much of which I cut out) is exactly how I understood\n> it until Simon Riggs' post which changed my view and understanding.\n> Under that situation, waking all shared will leave all XXXXX at the\n> front and hence alternate shared/exclusive/shared/exclusive as long as\n> both types are contending. Simon's post changed my view. Below is\n> some cut/paste from it:\n\nSimon's explanation, however, is at odds with the code.\n\nhttp://git.postgresql.org/?p=postgresql.git;a=blob;f=src/backend/storage/lmgr/lwlock.c\n\nThere is \"queue jumping\" in the regular (heavyweight) lock manager, but\nthat's a pretty different body of code.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 20 Mar 2009 11:46:01 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> As for ProcArrayLock, it sounds like it is very much a special case.\n\nQuite. 
Read the section \"Interlocking Transaction Begin, Transaction\nEnd, and Snapshots\" in src/backend/access/transam/README before\nproposing any changes in this area --- it's a lot more delicate than\none might think. We'd have partitioned the ProcArray long ago if\nit wouldn't have broken the transaction system.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Mar 2009 11:55:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4 " }, { "msg_contents": "\nOn 3/20/09 8:28 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n\n> On Thu, 19 Mar 2009, Scott Carey wrote:\n>> In type B, the ratio of requests that must context switch is always ==\n>> 1. Every request must queue and wait!\n> \n> A remarkably good point, although not completely correct. Every request\n> that arrives when the lock is held in any way already will queue and wait.\n> Requests that arrive when the lock is free will run immediately. I admit\n> it, this is a killer for this particular locking strategy.\n> \n\nYeah, its the \"when there is lock contention\" part that is a general truth\nfor all locks.\n\nAs for this killing this strategy, there is one exception:\nIf we know the operations done inside the lock are very fast, then we can\nuse pure spin locks. Then there is no context switching at all, ant it is\nmore optimal to go from list to list in smaller chunks with no 'cutting in\nline' as in this strategy. Although, even with spins, a limited number of\nline cutters is helpful to reduce overall spin time.\n\nAs a general reader/writer lock spin locks are more dangerous. It is often\noptimal to spin for a short time, then if the lock is still not attained\ncontext switch out with a wait. Generally speaking, lock optimization for\nheavily contended locks is an attempt to minimize context switches with the\nleast additional CPU overhead.\n\n\n> Firstly, let's say that if the lock is in shared mode, and there are no\n> exclusive waiters, then incoming shared lockers can be allowed to process\n> immediately. That's just obvious. Strictly following your or my suggestion\n> would preclude that, forcing a queue every so often.\n> \n\nDefinitely an important optimization!\n\n>> One way to guarantee some fairness is to compromise between the two.\n>> \n>> Lets call this proposal C. Unfortunately, this is less elegant than the\n>> other two, since it has logic for both. It could be made tunable to be\n>> the complete spectrum though.\n>> \n>> * exclusive-lock held and all exclusives process - first N new X\n>> requests welcome, N+1 and later X requests and all shared locks queue.\n>> \n>> * shared-lock held and shareds process - first N new S requests welcom,\n>> N+1 and later S requests and all X locks queue\n> \n> I like your solution. For now, let's just examine normal shared/exclusive\n> locks, not the ProcArrayLock. The question is, what is the ideal number\n> for N?\n> \n> With your solution, N is basically a time limit, to prevent the lock from\n> completely starving exclusive (or possibly shared) locks. If the shared\n> locks are processing, then either the incoming shared requests are\n> frequent, at which point N will be reached soon and force a switch to\n> exclusive mode, or the shared requests are infrequent, at which point the\n> lock should become free fairly soon. This means that having a count should\n> be sufficient as a \"time\" limit.\n> \n> So, what is \"too unfair\"? 
I'm guessing N can be set really quite high, and\n> it should definitely scale by the number of CPUs in the machine. Exact\n> values are probably best determined by experiment, but I'd say something\n> like ten times the number of CPUs.\n\nI would have guessed something large as well. Its the extremes and\npathological cases that are most concerning. In normal operation, the limit\nshould not be hit.\n\n> \n> As for ProcArrayLock, it sounds like it is very much a special case. The\n> statement that the writers don't interfere with each other seems very\n> strange to me, and makes me wonder if the structure needs any locks at\n> all, or at least can be very partitioned. Perhaps it could be implemented\n> as a lock-free structure. But I don't know what the actual structure is,\n> so I could be talking through my hat.\n> \n\nI do too much of that.\nIf it is something that should have very short lived lock holding then spin\nlocks or other very simple structures built on atomics could do it. Even a\nlinked list is not necessary if its all built with atomics and spins since\n'waking up' is merely setting a single value all waiters share. But I know\ntoo little about what goes on when the lock is held so this is getting very\nspeculative.\n\n\n", "msg_date": "Fri, 20 Mar 2009 11:53:50 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Alvaro Herrera escribi�:\n\n> Simon's explanation, however, is at odds with the code.\n> \n> http://git.postgresql.org/?p=postgresql.git;a=blob;f=src/backend/storage/lmgr/lwlock.c\n> \n> There is \"queue jumping\" in the regular (heavyweight) lock manager, but\n> that's a pretty different body of code.\n\nI'll just embarrass myself by pointing out that Neil Conway described\nthis back in 2004:\nhttp://archives.postgresql.org//pgsql-hackers/2004-11/msg00905.php\n\nSo Simon's correct.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 20 Mar 2009 17:58:13 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Alvaro Herrera escribi�:\n\n> So Simon's correct.\n\nAnd perhaps this explains why Jignesh is measuring an improvement on his\nbenchmark. Perhaps an useful experiment would be to turn this behavior\noff and compare performance. This lack of measurement is probably the\ncause that the suggested patch to fix it was never applied.\n\nThe patch is here\nhttp://archives.postgresql.org//pgsql-hackers/2004-11/msg00935.php\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 20 Mar 2009 18:05:13 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nAlvaro Herrera wrote:\n> Alvaro Herrera escribi�:\n>\n> \n>> So Simon's correct.\n>> \n>\n> And perhaps this explains why Jignesh is measuring an improvement on his\n> benchmark. Perhaps an useful experiment would be to turn this behavior\n> off and compare performance. 
This lack of measurement is probably the\n> cause that the suggested patch to fix it was never applied.\n>\n> The patch is here\n> http://archives.postgresql.org//pgsql-hackers/2004-11/msg00935.php\n>\n> \nOne of the reasons why my patch helps is it keeps this check intact but \nallows other exclusive Wake up.. Now what PostgreSQL calls \"Wakes\" is \nin reality just makes a variable indicating wake up and not really \nsignalling a process to wake up. This is a key point to note. So when \nthe process wanting the exclusive fights the OS Scheduling policy to \nfinally get time on the CPU then it check the value to see if it is \nallowed to wake up and potentially due the delay between when some other \nprocess marked that process \"Waked up\" and when the process check the \nvalue \"Waked up\" it is likely that the lock is free (or other exclusive \nprocess had the lock, did its work and releaed it ). Over it works well \nsince it lives within the logical semantics of the locks but just uses \nvarious differences in OS scheduling and inherent delays in the system.\n\nIt actually makes sense if the process is on CPU wanting exclusive while \nsomeone else is doing exclusive, let them try getting the lock rather \nthan preventing it from trying. The Lock semantic will make sure that \nthey don't issue exclusive locks to two process so there is no issue \nwith it trying.\n\nIt's late in Friday so I wont be able to explain it better but when load \nis heavy, getting on CPU is an achievement, let them try an exclusive \nlock while they are already there.\n\nTry it!!\n\n-Jignesh\n\n", "msg_date": "Fri, 20 Mar 2009 19:39:13 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On Fri, Mar 20, 2009 at 7:39 PM, Jignesh K. Shah <[email protected]> wrote:\n> Alvaro Herrera wrote:\n>>> So Simon's correct.\n>> And perhaps this explains why Jignesh is measuring an improvement on his\n>> benchmark.  Perhaps an useful experiment would be to turn this behavior\n>> off and compare performance.  This lack of measurement is probably the\n>> cause that the suggested patch to fix it was never applied.\n>>\n>> The patch is here\n>> http://archives.postgresql.org//pgsql-hackers/2004-11/msg00935.php\n>\n> One of the reasons why my patch helps is it keeps this check intact but\n> allows other exclusive Wake up.. Now what PostgreSQL calls \"Wakes\" is  in\n> reality just makes a variable indicating wake up and not really signalling a\n> process to wake up. This is a key point to note. So when the process wanting\n> the exclusive fights the OS Scheduling policy to finally get time on the CPU\n> then it   check the value to see if it is allowed to wake up and potentially\n\nI'm confused. Is a process waiting for an LWLock is in a runnable\nstate? I thought we went to sleep on a semaphore.\n\n...Robert\n", "msg_date": "Fri, 20 Mar 2009 20:45:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\nOn Fri, 2009-03-20 at 15:28 +0000, Matthew Wakeling wrote:\n> On Thu, 19 Mar 2009, Scott Carey wrote:\n> > In type B, the ratio of requests that must context switch is always == \n> > 1. Every request must queue and wait!\n> \n> A remarkably good point, although not completely correct. Every request \n> that arrives when the lock is held in any way already will queue and wait. 
\n> Requests that arrive when the lock is free will run immediately. I admit \n> it, this is a killer for this particular locking strategy.\n\nI think the right mix of theory and test here is for people to come up\nwith new strategies that seem to make sense and then we'll test them\nall. Trying too hard to arrive at the best strategy purely through\ndiscussion will mean we miss a few tricks. Feels like we're on the right\ntrack here.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Sat, 21 Mar 2009 08:50:39 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "\n\nRobert Haas wrote:\n> On Fri, Mar 20, 2009 at 7:39 PM, Jignesh K. Shah <[email protected]> wrote:\n> \n>> Alvaro Herrera wrote:\n>> \n>>>> So Simon's correct.\n>>>> \n>>> And perhaps this explains why Jignesh is measuring an improvement on his\n>>> benchmark. Perhaps an useful experiment would be to turn this behavior\n>>> off and compare performance. This lack of measurement is probably the\n>>> cause that the suggested patch to fix it was never applied.\n>>>\n>>> The patch is here\n>>> http://archives.postgresql.org//pgsql-hackers/2004-11/msg00935.php\n>>> \n>> One of the reasons why my patch helps is it keeps this check intact but\n>> allows other exclusive Wake up.. Now what PostgreSQL calls \"Wakes\" is in\n>> reality just makes a variable indicating wake up and not really signalling a\n>> process to wake up. This is a key point to note. So when the process wanting\n>> the exclusive fights the OS Scheduling policy to finally get time on the CPU\n>> then it check the value to see if it is allowed to wake up and potentially\n>> \n>\n> I'm confused. Is a process waiting for an LWLock is in a runnable\n> state? I thought we went to sleep on a semaphore.\n>\n> ...Robert\n>\n> \nIf you check the code\nhttp://doxygen.postgresql.org/lwlock_8c-source.html#l00451\n\nSemaphore lock can wake up but then it needs to confirm !proc->lwWaiting \nwhich can be TRUE if you have not been \"Waked up\"\nthen it increase the extraWaits count and go back to PGSemaphoreLock \n.. What my patch gives the flexibility with sequential X wakeups that it \ncan still exit and check for getting the exclusive lock and if not add \nback to the queue. My theory is when it is already on CPU running makes \nsense to check for the lock if another exclusive is running since the \nchances are that it has completed within few cycles is very high.. and \nthe improvement that I see leads to that inference. Otherwise if \nlwWaiting is TRUE then it does not even check if the lock is available \nor not and just goes back and waits for the next chance.. This is the \npart that gets the benefit of my patch.\n\n-Jignesh\n\n\n-- \nJignesh Shah http://blogs.sun.com/jkshah \t\t\t\nThe New Sun Microsystems,Inc http://sun.com/postgresql\n\n", "msg_date": "Sat, 21 Mar 2009 19:02:46 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "On 3/15/09 1:40 PM, Jignesh K. Shah wrote:\n>\n>\n> decibel wrote:\n>> On Mar 11, 2009, at 10:48 PM, Jignesh K. Shah wrote:\n>>> Fair enough.. Well I am now appealing to all who has a fairly decent\n>>> sized hardware want to try it out and see whether there are \"gains\",\n>>> \"no-changes\" or \"regressions\" based on your workload. 
Also it will\n>>> help if you report number of cpus when you respond back to help\n>>> collect feedback.\n\nEAStress (the J2EE benchmark from Spec) would be perfect for this, and \nwe (community) have a license for it.\n\nHowever, EAstress really requires 2-3 J2EE servers to keep the DB server \nbusy.\n\n--Josh\n", "msg_date": "Sun, 29 Mar 2009 14:33:30 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Proposal of tunable fix for scalability of 8.4" }, { "msg_contents": "Tom Lane wrote:\n> Gregory Stark <[email protected]> writes:\n> > Tom Lane <[email protected]> writes:\n> >> Ugh. So apparently, we actually need to special-case Solaris to not\n> >> believe that posix_fadvise works, or we'll waste cycles uselessly\n> >> calling a do-nothing function. Thanks, Sun.\n> \n> > Do we? Or do we just document that setting effective_cache_size on Solaris\n> > won't help?\n> \n> I assume you meant effective_io_concurrency. We'd still need a special\n> case because the default is currently hard-wired at 1, not 0, if\n> configure thinks the function exists. Also there's a posix_fadvise call\n> in xlog.c that that parameter doesn't control anyhow.\n\nThe attached patch prevents the posix_fadvise() probe in configure on\nSolaris, and adds a comment why. I have already documented why Solaris\ncan't do effective_io_concurrency.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +", "msg_date": "Thu, 2 Apr 2009 19:08:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal\n\tof tunable fix for scalability of 8.4" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Gregory Stark <[email protected]> writes:\n> > > Tom Lane <[email protected]> writes:\n> > >> Ugh. So apparently, we actually need to special-case Solaris to not\n> > >> believe that posix_fadvise works, or we'll waste cycles uselessly\n> > >> calling a do-nothing function. Thanks, Sun.\n> > \n> > > Do we? Or do we just document that setting effective_cache_size on Solaris\n> > > won't help?\n> > \n> > I assume you meant effective_io_concurrency. We'd still need a special\n> > case because the default is currently hard-wired at 1, not 0, if\n> > configure thinks the function exists. Also there's a posix_fadvise call\n> > in xlog.c that that parameter doesn't control anyhow.\n> \n> The attached patch prevents the posix_fadvise() probe in configure on\n> Solaris, and adds a comment why. I have already documented why Solaris\n> can't do effective_io_concurrency.\n\nUpdated patch applied; open item removed as complete.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +", "msg_date": "Tue, 7 Apr 2009 18:49:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4 Performance improvements: was Re: Proposal\n\tof tunable fix for scalability of 8.4" } ]
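As a footnote to the posix_fadvise discussion above: the call at issue is the
standard POSIX prefetch hint, sketched below purely as illustration (this is
not PostgreSQL source). On a platform whose libc implements it as a
do-nothing stub -- as described above for Solaris at the time -- the call
still returns success, so a configure-time check that the function exists
cannot tell that calling it is useless.

#define _XOPEN_SOURCE 600       /* for posix_fadvise */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Illustration only: ask the kernel to start reading a file range in the
 * background, the kind of hint that effective_io_concurrency-style
 * prefetching relies on. */
static void
prefetch_range(int fd, off_t offset, off_t len)
{
    int rc = posix_fadvise(fd, offset, len, POSIX_FADV_WILLNEED);

    /* posix_fadvise returns an error number rather than setting errno. */
    if (rc != 0)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(rc));
}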
[ { "msg_contents": "Hi,\n Can you guide me, Where is the entry point to get the documentation\nfor Postgresql performance tuning, Optimization for Postgresql with\nStorage controller. \n \nYour recommendation and suggestion are welcome. \n \nRegards \nKarthikeyan.N\n \n \n\n\n\n\n\nHi,\n     Can you guide me, Where is the entry point to \nget the documentation for Postgresql performance tuning, Optimization for \nPostgresql with Storage controller. \n \nYour \nrecommendation and suggestion are welcome. \n \nRegards \nKarthikeyan.N", "msg_date": "Thu, 12 Mar 2009 16:22:37 +0530", "msg_from": "\"Nagalingam, Karthikeyan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Entry point for Postgresql Performance " }, { "msg_contents": "Nagalingam, Karthikeyan wrote:\n> Hi,\n> Can you guide me, Where is the entry point to get the \n> documentation for Postgresql performance tuning, Optimization for \n> Postgresql with Storage controller.\n> \n> Your recommendation and suggestion are welcome.\n> \n> Regards\n> Karthikeyan.N\n> \n> \nTake a look at\n\nhttp://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/ \n\nhttp://www.scribd.com/doc/4846381/PostgreSQL-Performance-Tuning\nhttp://www.linuxjournal.com/article/4791\n\nWith Regards\n--Ashish\n", "msg_date": "Thu, 12 Mar 2009 16:40:14 +0530", "msg_from": "Ashish Karalkar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Entry point for Postgresql Performance" }, { "msg_contents": "Databases are usually IO bound , vmstat results can confirm individual\ncases and setups.\nIn case the server is IO bound the entry point should be setting up\nproperly performing\nIO. RAID10 helps a great extent in improving IO bandwidth by\nparallelizing the IO operations,\nmore spindles the better. Also write caches helps in great deal in\ncaching the writes and making\ncommits faster.\n\nIn my opinion system level tools (like vmstat) at peak load times can\nbe an entry point\nin understanding the bottlenecks of a particular setup.\n\nif there is swapping u absolutely need to double the ram . ( excess\nram can be used in disk block caching)\nif its cpu bound add more cores or high speed cpus\nif its io bound put better raid arrays & controller.\n\n\nregds\nmallah.\n\nOn Thu, Mar 12, 2009 at 4:22 PM, Nagalingam, Karthikeyan\n<[email protected]> wrote:\n> Hi,\n>      Can you guide me, Where is the entry point to get the documentation for\n> Postgresql performance tuning, Optimization for Postgresql with Storage\n> controller.\n>\n> Your recommendation and suggestion are welcome.\n>\n> Regards\n> Karthikeyan.N\n>\n>\n", "msg_date": "Thu, 12 Mar 2009 23:11:02 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Entry point for Postgresql Performance" } ]
[ { "msg_contents": "From the documentation, I understand that range of actual time represents\nthe time taken for retrieving the first result and the last result\nrespectively. However, the following output of explain analyze confuses me:\n\nGroupAggregate (cost=632185.58..632525.55 rows=122884 width=57)\n(actual time=187382.499..187383.241 rows=57 loops=1)\n -> Sort (cost=632185.58..632201.78 rows=122884 width=57) (actual\ntime=187167.792..187167.905 rows=399 loops=1)\n\n Sort Key: orders.o_totalprice, orders.o_orderdate,\ncustomer.c_name, customer.c_custkey, orders.o_orderkey\n -> Hash Join (cost=399316.78..629388.21 rows=122884\nwidth=57) (actual time=122805.133..186107.210 rows=399 loops=1)\n\n Hash Cond: (public.lineitem.l_orderkey = orders.o_orderkey)\n -> Seq Scan on lineitem (cost=0.00..163912.71\nrows=6000742 width=14) (actual time=0.022..53597.555 rows=6001215\nloops=1)\n\n -> Hash (cost=398960.15..398960.15 rows=30713\nwidth=51) (actual time=112439.592..112439.592 rows=57 loops=1)\n -> Hash Join (cost=369865.37..398960.15\nrows=30713 width=51) (actual time=80638.283..111855.510 rows=57\nloops=1)\n\n Hash Cond: (orders.o_custkey = customer.c_custkey)\n -> Nested Loop (cost=364201.67..391753.70\nrows=30713 width=29) (actual time=75598.246..107760.054 rows=57\nloops=1)\n\n -> GroupAggregate\n(cost=364201.67..366634.97 rows=30713 width=14) (actual\ntime=75427.115..96167.167 rows=57 loops=1)\n Filter: (sum(l_quantity) > 300::numeric)\n\n -> Sort\n(cost=364201.67..364992.54 rows=6000742 width=14) (actual\ntime=74987.112..86289.063 rows=6001215 loops=1)\n Sort Key:\npublic.lineitem.l_orderkey\n\n -> Seq Scan on lineitem\n(cost=0.00..163912.71 rows=6000742 width=14) (actual\ntime=0.006..51153.880 rows=6001215 loops=1)\n -> Index Scan using orders_pkey on\norders (cost=0.00..0.81 rows=1 width=25) (actual\ntime=169.485..173.006 rows=1 loops=57)\n\n Index Cond: (orders.o_orderkey\n= \"IN_subquery\".l_orderkey)\n -> Hash (cost=4360.96..4360.96\nrows=150072 width=26) (actual time=998.378..998.378 rows=150000\nloops=1)\n\n -> Seq Scan on customer\n(cost=0.00..4360.96 rows=150072 width=26) (actual time=8.188..883.778\nrows=150000 loops=1)\n Total runtime: 187644.927 ms\n(20 rows)\n\nMy settings: Memory - 1GB, Data size - 1GB, Lineitem ~ 650MB,\nshared_buffers: 200MB, work_mem: 1MB.\n\nPostgreSQL version: 8.2, OS: Sun Solaris 10u4\n\nQuery: TPC-H 18, Large Volume Customer Query\n\nQuestions:\n\n1) The actual time on Seq Scan on Lineitem shows that the first record is\nfetched at time 0.022ms and the last record is fetched at 53.5s. Does it\nmean the sequential scan is completed with-in first 53.4s (absolute time)?\nOr does it mean that sequential scan started at 112.43s (after build phase\nof Hash Join) and finished at 165.93s (112.43 + 53.5s)? My understanding is\nthat former is true. If so, the sequential scan has to fetched around 6M\nrecords (~650MB) ahead of build phase of Hash Join, which seems surprising.\nIs this called prefetching at DB level? Where does the DB hold all these\nrecords? Definitely, it can't hold in shared_buffers since it's only 200MB.\n\n2) Why is the Hash Join (top most) so slow? The hash is build over the\noutput of subplan which produces 57 records (~20kb). We can assume that\nthese 57 records fit into work_mem. Now, the Hash Join is producing first\nrecord at 122.8s where as the Hash build is completed at 112.4s (10.4s\ndifference. I have seen in some cases, this gap is even worse). 
Also, the\ntotal time for Hash Join is 63.3s, which seems too high given that Lineitem\nis already in the buffer. What is happening over here?\n\nAppreciate your help!\n\nRegards,\n~Vamsi\n\nFrom the documentation, I understand that range of actual time represents the time taken for retrieving the first result and the last result respectively. However, the following output of explain analyze confuses me:\nGroupAggregate (cost=632185.58..632525.55 rows=122884 width=57) (actual time=187382.499..187383.241 rows=57 loops=1) -> Sort (cost=632185.58..632201.78 rows=122884 width=57) (actual time=187167.792..187167.905 rows=399 loops=1)\n\n Sort Key: orders.o_totalprice, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey -> Hash Join (cost=399316.78..629388.21 rows=122884 width=57) (actual time=122805.133..186107.210 rows=399 loops=1)\n\n Hash Cond: (public.lineitem.l_orderkey = orders.o_orderkey) -> Seq Scan on lineitem (cost=0.00..163912.71 rows=6000742 width=14) (actual time=0.022..53597.555 rows=6001215 loops=1)\n\n -> Hash (cost=398960.15..398960.15 rows=30713 width=51) (actual time=112439.592..112439.592 rows=57 loops=1) -> Hash Join (cost=369865.37..398960.15 rows=30713 width=51) (actual time=80638.283..111855.510 rows=57 loops=1)\n\n Hash Cond: (orders.o_custkey = customer.c_custkey) -> Nested Loop (cost=364201.67..391753.70 rows=30713 width=29) (actual time=75598.246..107760.054 rows=57 loops=1)\n\n -> GroupAggregate (cost=364201.67..366634.97 rows=30713 width=14) (actual time=75427.115..96167.167 rows=57 loops=1) Filter: (sum(l_quantity) > 300::numeric)\n\n -> Sort (cost=364201.67..364992.54 rows=6000742 width=14) (actual time=74987.112..86289.063 rows=6001215 loops=1) Sort Key: public.lineitem.l_orderkey\n\n -> Seq Scan on lineitem (cost=0.00..163912.71 rows=6000742 width=14) (actual time=0.006..51153.880 rows=6001215 loops=1) -> Index Scan using orders_pkey on orders (cost=0.00..0.81 rows=1 width=25) (actual time=169.485..173.006 rows=1 loops=57)\n\n Index Cond: (orders.o_orderkey = \"IN_subquery\".l_orderkey) -> Hash (cost=4360.96..4360.96 rows=150072 width=26) (actual time=998.378..998.378 rows=150000 loops=1)\n\n -> Seq Scan on customer (cost=0.00..4360.96 rows=150072 width=26) (actual time=8.188..883.778 rows=150000 loops=1) Total runtime: 187644.927 ms(20 rows)My settings: Memory - 1GB, Data size - 1GB, Lineitem ~ 650MB, shared_buffers: 200MB, work_mem: 1MB.\n\nPostgreSQL version: 8.2, OS: Sun Solaris 10u4Query: TPC-H 18, Large Volume Customer QueryQuestions:1) The actual time on Seq Scan on Lineitem shows that the first record is fetched at time 0.022ms and the last record is fetched at 53.5s. Does it mean the sequential scan is completed with-in first 53.4s (absolute time)? Or does it mean that sequential scan started at 112.43s (after build phase of Hash Join) and finished at 165.93s (112.43 + 53.5s)? My understanding is that former is true. If so, the sequential scan has to fetched around 6M records (~650MB) ahead of build phase of Hash Join, which seems surprising. Is this called prefetching at DB level? Where does the DB hold all these records? Definitely, it can't hold in shared_buffers since it's only 200MB. \n2) Why is the Hash Join (top most) so slow? The hash is build over the output of subplan which produces 57 records (~20kb). We can assume that these 57 records fit into work_mem. Now, the Hash Join is producing first record at 122.8s where as the Hash build is completed at 112.4s (10.4s difference. 
I have seen in some cases, this gap is even worse). Also, the total time for Hash Join is 63.3s, which seems too high given that Lineitem is already in the buffer. What is happening over here?\nAppreciate your help!Regards,~Vamsi", "msg_date": "Fri, 13 Mar 2009 17:15:07 -0400", "msg_from": "Vamsidhar Thummala <[email protected]>", "msg_from_op": true, "msg_subject": "Hash Join performance" }, { "msg_contents": "Vamsidhar Thummala <[email protected]> writes:\n> 1) The actual time on Seq Scan on Lineitem shows that the first record is\n> fetched at time 0.022ms and the last record is fetched at 53.5s. Does it\n> mean the sequential scan is completed with-in first 53.4s (absolute time)?\n\nNo, it means that we spent a total of 53.5 seconds executing that plan\nnode and its children. There's no direct way to determine how that was\ninterleaved with the execution of a peer plan node. In the particular\ncase here, since that seqscan is the outer child of a hash join, you\ncan infer that all the time charged to the inner child (the Hash node\nand its children) happened first, while we were building the hashtable,\nwhich is then probed for each row of the outer relation.\n\n> 2) Why is the Hash Join (top most) so slow?\n\nDoesn't look that bad to me. The net time charged to the HashJoin node\nis 186107.210 - 53597.555 - 112439.592 = 20070.063 msec. In addition it\nwould be reasonable to count the hashtable build time, which evidently\nis 112439.592 - 111855.510 = 584.082 msec. So the hashtable build took\nabout 10 msec/row, in addition to the data fetching; and then the actual\njoin spent about 3 microsec per outer row, again exclusive of obtaining\nthose rows. The table build seems a bit slow, maybe, but I don't see a\nproblem with the join speed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Mar 2009 17:34:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash Join performance " }, { "msg_contents": "Thanks for such quick response.\n\nOn Fri, Mar 13, 2009 at 5:34 PM, Tom Lane wrote:\n\n> > 2) Why is the Hash Join (top most) so slow?\n>\n> Doesn't look that bad to me. The net time charged to the HashJoin node\n> is 186107.210 - 53597.555 - 112439.592 = 20070.063 msec. In addition it\n> would be reasonable to count the hashtable build time, which evidently\n> is 112439.592 - 111855.510 = 584.082 msec. So the hashtable build took\n> about 10 msec/row, in addition to the data fetching; and then the actual\n> join spent about 3 microsec per outer row, again exclusive of obtaining\n> those rows. 
The table build seems a bit slow, maybe, but I don't see a\n> problem with the join speed.\n>\n\nI am wondering why are we subtracting the entire Seq Scan time of Lineitem\nfrom the total time to calculate the HashJoin time.\nDoes the Hash probing start as soon as the first record of Lineitem is\navailable, i.e., after 112439.592ms?\n\nHere is another plan I have for the same TPC-H 18 query with different\nconfiguration parameters (shared_buffers set to 400MB, just for experimental\npurposes) and HashJoin seems to take longer time (at least 155.58s based on\nabove calculation):\n\nGroupAggregate (cost=905532.09..912092.04 rows=119707 width=57)\n(actual time=392705.160..392705.853 rows=57 loops=1)\n -> Sort (cost=905532.09..906082.74 rows=119707 width=57) (actual\ntime=392705.116..392705.220 rows=399 loops=1)\n Sort Key: orders.o_totalprice, orders.o_orderdate,\ncustomer.c_name, customer.c_custkey, orders.o_orderkey\n -> Hash Join (cost=507550.05..877523.36 rows=119707\nwidth=57) (actual time=72616.327..392703.675 rows=399 loops=1)\n Hash Cond: (public.lineitem.l_orderkey = orders.o_orderkey)\n -> Seq Scan on lineitem (cost=0.00..261655.05\nrows=6000947 width=14) (actual time=0.027..178712.709 rows=6001215\nloops=1)\n -> Hash (cost=506580.84..506580.84 rows=29921\nwidth=51) (actual time=58421.050..58421.050 rows=57 loops=1)\n -> Hash Join (cost=416568.25..506580.84\nrows=29921 width=51) (actual time=25208.925..58419.502 rows=57\nloops=1)\n Hash Cond: (orders.o_custkey = customer.c_custkey)\n -> Merge IN Join\n(cost=405349.14..493081.88 rows=29921 width=29) (actual\ntime=37.244..57646.024 rows=57 loops=1)\n Merge Cond: (orders.o_orderkey =\n\"IN_subquery\".l_orderkey)\n -> Index Scan using orders_pkey on\norders (cost=0.00..79501.17 rows=1499952 width=25) (actual\ntime=0.100..5379.828 rows=1496151 loops=1)\n -> Materialize\n(cost=405349.14..406004.72 rows=29921 width=4) (actual\ntime=34.825..51619.816 rows=57 loops=1)\n -> GroupAggregate\n(cost=0.00..404639.71 rows=29921 width=14) (actual\ntime=34.818..51619.488 rows=57 loops=1)\n Filter: (sum(l_quantity)\n> 300::numeric)\n -> Index Scan using\nfkey_lineitem_1 on lineitem (cost=0.00..348617.14 rows=6000947\nwidth=14) (actual time=0.079..44140.117 rows=6001215 loops=1)\n -> Hash (cost=6803.60..6803.60\nrows=149978 width=26) (actual time=640.980..640.980 rows=150000\nloops=1)\n -> Seq Scan on customer\n(cost=0.00..6803.60 rows=149978 width=26) (actual time=0.021..510.993\nrows=150000 loops=1)\n\nI re-ran the query multiple times to verify the accuracy of results.\n\nRegards,\n~Vamsi\n\nThanks for such quick response.On Fri, Mar 13, 2009 at 5:34 PM, Tom Lane wrote:\n> 2) Why is the Hash Join (top most) so slow?\n\nDoesn't look that bad to me.  The net time charged to the HashJoin node\nis 186107.210 - 53597.555 - 112439.592 = 20070.063 msec.  In addition it\nwould be reasonable to count the hashtable build time, which evidently\nis 112439.592 - 111855.510 = 584.082 msec.  So the hashtable build took\nabout 10 msec/row, in addition to the data fetching; and then the actual\njoin spent about 3 microsec per outer row, again exclusive of obtaining\nthose rows.  The table build seems a bit slow, maybe, but I don't see a\nproblem with the join speed.\nI am wondering why are we subtracting the entire Seq Scan time of Lineitem from the total time to calculate the HashJoin time.  Does the Hash probing start as soon as the first record of Lineitem is available, i.e., after 112439.592ms? 
\nHere is another plan I have for the same TPC-H 18 query with different configuration parameters (shared_buffers set to 400MB, just for experimental purposes) and HashJoin seems to take longer time (at least 155.58s based on above calculation):\nGroupAggregate (cost=905532.09..912092.04 rows=119707 width=57) (actual time=392705.160..392705.853 rows=57 loops=1) -> Sort (cost=905532.09..906082.74 rows=119707 width=57) (actual time=392705.116..392705.220 rows=399 loops=1)\n Sort Key: orders.o_totalprice, orders.o_orderdate, customer.c_name, customer.c_custkey, orders.o_orderkey -> Hash Join (cost=507550.05..877523.36 rows=119707 width=57) (actual time=72616.327..392703.675 rows=399 loops=1)\n Hash Cond: (public.lineitem.l_orderkey = orders.o_orderkey) -> Seq Scan on lineitem (cost=0.00..261655.05 rows=6000947 width=14) (actual time=0.027..178712.709 rows=6001215 loops=1)\n -> Hash (cost=506580.84..506580.84 rows=29921 width=51) (actual time=58421.050..58421.050 rows=57 loops=1) -> Hash Join (cost=416568.25..506580.84 rows=29921 width=51) (actual time=25208.925..58419.502 rows=57 loops=1)\n Hash Cond: (orders.o_custkey = customer.c_custkey) -> Merge IN Join (cost=405349.14..493081.88 rows=29921 width=29) (actual time=37.244..57646.024 rows=57 loops=1)\n Merge Cond: (orders.o_orderkey = \"IN_subquery\".l_orderkey) -> Index Scan using orders_pkey on orders (cost=0.00..79501.17 rows=1499952 width=25) (actual time=0.100..5379.828 rows=1496151 loops=1)\n -> Materialize (cost=405349.14..406004.72 rows=29921 width=4) (actual time=34.825..51619.816 rows=57 loops=1) -> GroupAggregate (cost=0.00..404639.71 rows=29921 width=14) (actual time=34.818..51619.488 rows=57 loops=1)\n Filter: (sum(l_quantity) > 300::numeric) -> Index Scan using fkey_lineitem_1 on lineitem (cost=0.00..348617.14 rows=6000947 width=14) (actual time=0.079..44140.117 rows=6001215 loops=1)\n -> Hash (cost=6803.60..6803.60 rows=149978 width=26) (actual time=640.980..640.980 rows=150000 loops=1) -> Seq Scan on customer (cost=0.00..6803.60 rows=149978 width=26) (actual time=0.021..510.993 rows=150000 loops=1)\nI re-ran the query multiple times to verify the accuracy of results.Regards,~Vamsi", "msg_date": "Fri, 13 Mar 2009 18:11:52 -0400", "msg_from": "Vamsidhar Thummala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash Join performance" }, { "msg_contents": "Vamsidhar Thummala <[email protected]> writes:\n> I am wondering why are we subtracting the entire Seq Scan time of Lineitem\n> from the total time to calculate the HashJoin time.\n\nWell, if you're trying to identify the speed of the join itself and not\nhow long it takes to provide the input for it, that seems like a\nsensible calculation to make.\n\n> Here is another plan I have for the same TPC-H 18 query with different\n> configuration parameters (shared_buffers set to 400MB, just for experimental\n> purposes) and HashJoin seems to take longer time (at least 155.58s based on\n> above calculation):\n\nYeah, that seems to work out to about 25us per row instead of 3us, which\nis a lot slower. Maybe the hash got split up into multiple batches ...\nwhat have you got work_mem set to? 
Try turning on log_temp_files and\nsee if it records any temp files as getting created.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Mar 2009 19:08:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash Join performance " }, { "msg_contents": "On Fri, Mar 13, 2009 at 7:08 PM, Tom Lane wrote:\n\n> Vamsidhar Thummala writes:\n> > I am wondering why are we subtracting the entire Seq Scan time of\n> Lineitem\n> > from the total time to calculate the HashJoin time.\n>\n> Well, if you're trying to identify the speed of the join itself and not\n> how long it takes to provide the input for it, that seems like a\n> sensible calculation to make.\n\n\nI am still not clear on this. I am thinking the output is produced in a\npipelined fashion i.e., as soon as the record of outer child is read\n(sequentially here) and if HashJoin finds a match by probing the inner hash\ntable (in memory), we have an output record. Please correct if I am wrong\nhere.\n\n\n>\n>\n> > Here is another plan I have for the same TPC-H 18 query with different\n> > configuration parameters (shared_buffers set to 400MB, just for\n> experimental\n> > purposes) and HashJoin seems to take longer time (at least 155.58s based\n> on\n> > above calculation):\n>\n> Yeah, that seems to work out to about 25us per row instead of 3us, which\n> is a lot slower. Maybe the hash got split up into multiple batches ...\n> what have you got work_mem set to? Try turning on log_temp_files and\n> see if it records any temp files as getting created.\n\n\nUnfortunately, I am working with Postgres 8.2 which doesn't have\nlog_temp_files. The work_mem is still at 1MB (all other parameters were kept\nconstant apart from shared_buffers w.r.t previous configuration). The hash\nis build on 57 records (~20kb, customer row length is 179 bytes and orders\nrow length is 104 bytes) produced by inner subplan and so I will be\nsurprised if multiple batches are created.\n\nThank you.\n\nRegards,\n-Vamsi\n\nOn Fri, Mar 13, 2009 at 7:08 PM, Tom Lane wrote:\nVamsidhar Thummala writes:\n> I am wondering why are we subtracting the entire Seq Scan time of Lineitem\n> from the total time to calculate the HashJoin time.\n\nWell, if you're trying to identify the speed of the join itself and not\nhow long it takes to provide the input for it, that seems like a\nsensible calculation to make.I am still not clear on this. I am thinking the output is produced in a pipelined fashion i.e., as soon as the record of outer child is read (sequentially here) and if HashJoin finds a match by probing the inner hash table (in memory), we have an output record. Please correct if I am wrong here.\n \n\n> Here is another plan I have for the same TPC-H 18 query with different\n> configuration parameters (shared_buffers set to 400MB, just for experimental\n> purposes) and HashJoin seems to take longer time (at least 155.58s based on\n> above calculation):\n\nYeah, that seems to work out to about 25us per row instead of 3us, which\nis a lot slower.  Maybe the hash got split up into multiple batches ...\nwhat have you got work_mem set to?  Try turning on log_temp_files and\nsee if it records any temp files as getting created.Unfortunately, I am working with Postgres 8.2 which doesn't have log_temp_files. The work_mem is still at 1MB (all other parameters were kept constant apart from shared_buffers w.r.t previous configuration). 
The hash is built on 57 records (~20kb, customer row length is 179 bytes and orders row length is 104 bytes) produced by inner subplan and so I will be surprised if multiple batches are created.\nThank you. Regards, -Vamsi", "msg_date": "Fri, 13 Mar 2009 22:10:43 -0400", "msg_from": "Vamsidhar Thummala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash Join performance" } ]
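A minimal sketch of the work_mem experiment the thread above converges on, assuming the TPC-H tables Vamsi was querying and an illustrative 64MB setting. This is not the exact Q18 text, only a query with the same join and aggregate shape, so its plan is indicative rather than a reproduction of the ones posted:

    -- session-local change; RESET puts the configured 1MB back afterwards
    SET work_mem = '64MB';
    EXPLAIN ANALYZE
    SELECT c.c_name, o.o_orderkey, sum(l.l_quantity) AS total_qty
      FROM customer c
      JOIN orders   o ON o.o_custkey  = c.c_custkey
      JOIN lineitem l ON l.l_orderkey = o.o_orderkey
     GROUP BY c.c_name, o.o_orderkey
    HAVING sum(l.l_quantity) > 300;
    RESET work_mem;

If the HashJoin and Sort nodes speed up markedly under the larger setting, the 1MB work_mem was forcing batched hashing or on-disk sorts; if the timings barely move, the cost is in the underlying scans rather than in the join itself.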
[ { "msg_contents": "Hi,\n we are in the process of finding the best solution for Postgresql\ndeployment with storage controller. I have some query, Please give some\nsuggestion for the below\n \n1) Can we get customer deployment scenarios for postgresql with storage\ncontroller. Any flow diagram, operation diagram and implementation\ndiagram are welcome.\n\n2) Which protocol is mostly used in production. [NFS,ISCSi,FCP,etc...]\n\n3) What kind of application Mostly used with Postgresql.\n\n4) What is the business and technical issues for Postgresql with storage\ncontroller at present stage. \n\n5) In which area Postgresql most wanted.\n\n6) What kind of DR solution customer using for Postgresql with storage\ncontroller. \n\nThanks in advance, Any suggestion and recommendation are welcome.\n\nRegards \nKarthikeyan.N\n \n \n\n\n\n\n\nHi,\n    \nwe are in the process of finding the best solution for Postgresql deployment \nwith storage controller. I have some query, Please give some suggestion for the \nbelow\n \n1) Can \nwe get customer deployment scenarios for postgresql with storage controller. Any \nflow diagram, operation diagram and implementation diagram are \nwelcome.2) Which protocol is mostly used in production. [NFS,ISCSi,FCP,etc...]3) What kind of \napplication Mostly used with Postgresql.4) What is the business and \ntechnical issues for Postgresql with storage controller at present stage. \n5) In which area Postgresql most wanted.6) What kind of DR \nsolution customer using for Postgresql with storage controller. \n\nThanks in advance, Any suggestion and \nrecommendation are welcome.\nRegards \nKarthikeyan.N", "msg_date": "Mon, 16 Mar 2009 12:30:58 +0530", "msg_from": "\"Nagalingam, Karthikeyan\" <[email protected]>", "msg_from_op": true, "msg_subject": "deployment query " }, { "msg_contents": "Nagalingam, Karthikeyan <[email protected]> wrote:\n> Hi,\n> we are in the process of finding the best solution for Postgresql \n> deployment with storage controller. I have some query, Please give \n> some suggestion for the below\n>\n\nDoesn't Network Appliance have anyone who could help you with this? \nThis is the third time you've asked a set of incredibly broad general \nquestions of this list, that level of information shopping would perhaps \nbe best answered by a consulting service you would hire to do the task. \nAn email list is more useful for answering specific questions, but can't \nreally offer such broad advice given so little information. \n\n\"storage controller' could mean anything from a simple SATA port on a \ndesktop PC, to a EMC Symmetrix SAN, but we can guess based on your email \naddress, you're specifically interested in NAS storage like Network \nAppliance Filers. \n\n\"Customer Deployment Scenarios\" ?!? \n 1) Install postgres. \n 2) Create database schema. \n 3) Deploy application(s). \n\nProtocol? I'd venture a guess that the vast majority of postgres \ninstallations have direct attached JBOD or simple raid storage. \n\nWhat kind of application? Any application requiring a relational \ndatabase, ranging from web applications to accounting systems to \nmanufacturing execution systems. \n\nI don't even know what to make of your questions 4 and 5.\n\n\n", "msg_date": "Mon, 16 Mar 2009 10:38:05 -0700", "msg_from": "John R Pierce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deployment query" }, { "msg_contents": "Thanks for your reply john. 
\n\n\nRegards \nKarthikeyan.N\n \n\n-----Original Message-----\nFrom: John R Pierce [mailto:[email protected]] \nSent: Monday, March 16, 2009 11:08 PM\nTo: Nagalingam, Karthikeyan\nCc: [email protected]\nSubject: Re: [GENERAL] deployment query\n\nNagalingam, Karthikeyan <[email protected]> wrote:\n> Hi,\n> we are in the process of finding the best solution for Postgresql \n> deployment with storage controller. I have some query, Please give \n> some suggestion for the below\n>\n\nDoesn't Network Appliance have anyone who could help you with this?\n\nThis is the third time you've asked a set of incredibly broad general\nquestions of this list, that level of information shopping would perhaps\nbe best answered by a consulting service you would hire to do the task.\n\nAn email list is more useful for answering specific questions, but can't\nreally offer such broad advice given so little information. \n\n\"storage controller' could mean anything from a simple SATA port on a\ndesktop PC, to a EMC Symmetrix SAN, but we can guess based on your email\naddress, you're specifically interested in NAS storage like Network\nAppliance Filers. \n\n\"Customer Deployment Scenarios\" ?!? \n 1) Install postgres. \n 2) Create database schema. \n 3) Deploy application(s). \n\nProtocol? I'd venture a guess that the vast majority of postgres \ninstallations have direct attached JBOD or simple raid storage. \n\nWhat kind of application? Any application requiring a relational \ndatabase, ranging from web applications to accounting systems to\nmanufacturing execution systems. \n\nI don't even know what to make of your questions 4 and 5.\n\n\n", "msg_date": "Mon, 16 Mar 2009 23:47:58 +0530", "msg_from": "\"Nagalingam, Karthikeyan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: deployment query" }, { "msg_contents": "On Mon, Mar 16, 2009 at 1:00 AM, Nagalingam, Karthikeyan\n<[email protected]> wrote:\n> Hi,\n>     we are in the process of finding the best solution for Postgresql\n> deployment with storage controller. I have some query, Please give some\n> suggestion for the below\n>\n> 1) Can we get customer deployment scenarios for postgresql with storage\n> controller. Any flow diagram, operation diagram and implementation diagram\n> are welcome.\n\nLike John said, install, initdb, configure postgresql.conf and\npg_hba.conf, load dbs and go. Whether or not it's on a storage\ncontroller is kind of not that big of a deal, as long as it's\nreliable.\n\n> 2) Which protocol is mostly used in production. [NFS,ISCSi,FCP,etc...]\n\nIf I can, I almost always build databases on DAS. If I must use\nsomething else, I'd lean towards iSCSI. Don't trust NFS for\ndatabases.\n\n> 3) What kind of application Mostly used with Postgresql.\n\nAll kinds. We mostly use it for Content Management where I work. Last\nplace I worked it was our primary database for RT (ticketing system),\nbugzilla, media wiki, our statistical monitoring db, etc. Our primary\nlifting db was oracle, not because oracle was better, but because the\nVC vultures wouldn't sign off on postgresql out of ignorance /\nprejudice / lack of basic understanding / you name it.\n\n> 4) What is the business and technical issues for Postgresql with storage\n> controller at present stage.\n\nNot sure what you're asking here. Right now most postgresql\ninstallations are on direct attached storage. The biggest issues\naffecting any deployment to remote storage are the ones having to do\nwith the OS postgresql is most often deployed on, Linux. 
Device\ndrivers, iSCSI drivers, things like that.\n\nI would think that if you wanted more traction for pgsql (or other\ndbs) on netapp under linux, you could look at this area.\n\n> 5) In which area Postgresql most wanted.\n\nGot me.\n\n> 6) What kind of DR solution customer using for Postgresql with storage\n> controller.\n\nFirst rule of communications, define your acronyms. What's DR?\n", "msg_date": "Mon, 16 Mar 2009 13:13:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deployment query" }, { "msg_contents": "Nagalingam, Karthikeyan wrote:\n> Hi,\n> we are in the process of finding the best solution for Postgresql \n> deployment with storage controller. I have some query, Please give some \n> suggestion for the below\n> \n> 1) Can we get customer deployment scenarios for postgresql with storage \n> controller. Any flow diagram, operation diagram and implementation \n> diagram are welcome.\n\nwell deployment is the same as for deploying it to plain old direct \nattached storage - so all the docs available on www.postgresql.org are \nmore or less valid for this.\n\n> \n> 2) Which protocol is mostly used in production. [NFS,ISCSi,FCP,etc...]\n\nall of those are used - however NFS is quite often discouraged due to \nvarious reliability issues (mostly on the client side) and operational \ncomplexity that caused issues in the past. ISCSI and Fiberchannel \ndeployments (both on netapp based storage and others) have worked very \nwell for me.\n\n\n> \n> 3) What kind of application Mostly used with Postgresql.\n\nthat is an extremely broad question - in doubt it is always \"the \napplication the customer uses\".\n\n> \n> 4) What is the business and technical issues for Postgresql with storage \n> controller at present stage.\n\nnot sure what a business issue would be here - but as for technical \nissues postgresql is comparable to the demands of other (commercial) \ndatabases in that regard. I personally found general tuning guidelines \nfor storage arrays that got written for oracle to be pretty well \nsuitable(within limits obviously) for postgresql too.\n\n> \n> 5) In which area Postgresql most wanted.\n\nit's the customer that counts :)\n\n> \n> 6) What kind of DR solution customer using for Postgresql with storage \n> controller.\n\nnot sure what the question here is - maybe you can explain that in more \ndetail?\n\n\nStefan\n", "msg_date": "Mon, 16 Mar 2009 21:22:36 +0100", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deployment query" }, { "msg_contents": "Thanks Stefan for all your answers. My last question is \"What is the\nMostly used Disaster Recovery Solution for Postgresql in storage\nenvironment.\"\n\nRegards \nKarthikeyan.N\n \n\n-----Original Message-----\nFrom: Stefan Kaltenbrunner [mailto:[email protected]] \nSent: Tuesday, March 17, 2009 1:53 AM\nTo: Nagalingam, Karthikeyan\nCc: [email protected]\nSubject: Re: [GENERAL] deployment query\n\nNagalingam, Karthikeyan wrote:\n> Hi,\n> we are in the process of finding the best solution for Postgresql \n> deployment with storage controller. I have some query, Please give \n> some suggestion for the below\n> \n> 1) Can we get customer deployment scenarios for postgresql with \n> storage controller. 
Any flow diagram, operation diagram and \n> implementation diagram are welcome.\n\nwell deployment is the same as for deploying it to plain old direct\nattached storage - so all the docs available on www.postgresql.org are\nmore or less valid for this.\n\n> \n> 2) Which protocol is mostly used in production. [NFS,ISCSi,FCP,etc...]\n\nall of those are used - however NFS is quite often discouraged due to\nvarious reliability issues (mostly on the client side) and operational\ncomplexity that caused issues in the past. ISCSI and Fiberchannel\ndeployments (both on netapp based storage and others) have worked very\nwell for me.\n\n\n> \n> 3) What kind of application Mostly used with Postgresql.\n\nthat is an extremely broad question - in doubt it is always \"the\napplication the customer uses\".\n\n> \n> 4) What is the business and technical issues for Postgresql with \n> storage controller at present stage.\n\nnot sure what a business issue would be here - but as for technical\nissues postgresql is comparable to the demands of other (commercial)\ndatabases in that regard. I personally found general tuning guidelines\nfor storage arrays that got written for oracle to be pretty well\nsuitable(within limits obviously) for postgresql too.\n\n> \n> 5) In which area Postgresql most wanted.\n\nit's the customer that counts :)\n\n> \n> 6) What kind of DR solution customer using for Postgresql with storage\n\n> controller.\n\nnot sure what the question here is - maybe you can explain that in more\ndetail?\n\n\nStefan\n", "msg_date": "Tue, 17 Mar 2009 10:45:55 +0530", "msg_from": "\"Nagalingam, Karthikeyan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: deployment query" }, { "msg_contents": "On Mon, Mar 16, 2009 at 11:15 PM, Nagalingam, Karthikeyan\n<[email protected]> wrote:\n> Thanks Stefan for all your answers. My last question is \"What is the\n> Mostly used Disaster Recovery Solution for Postgresql in storage\n> environment.\"\n\nWe use two methods of backup to keep the database afloat amid things\ngoing horribly wrong. We have 1 or more slony backup dbs that allow\nfor failover and load balancing. We have offsite pg_dump backups\nwhich are transferred via ssh to an offsite server in case of\ncatastrophic failure in the data center (like a huge power surge) that\nkills both servers.\n\nWe routinely restore backup sets or parts of them for various testing scenarios.\n", "msg_date": "Mon, 16 Mar 2009 23:26:23 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deployment query" }, { "msg_contents": "On Tue, 2009-03-17 at 10:45 +0530, Nagalingam, Karthikeyan wrote:\n> Thanks Stefan for all your answers. My last question is \"What is the\n> Mostly used Disaster Recovery Solution for Postgresql in storage\n> environment.\"\n\nThat vastly depends. The most common is likely warm standby (PITR). If\nyou are running Linux DRBD is a common solution as well. There is also\nSlony-I, and Mammoth Replicator.\n\nJoshua D. Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Mon, 16 Mar 2009 22:45:41 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deployment query" } ]
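A rough sketch of the off-site pg_dump-over-ssh arrangement Scott describes above; the host name, database name, user and paths are illustrative assumptions, not details from the thread:

    # nightly compressed dump shipped to the disaster-recovery host
    pg_dump -Fc -U postgres appdb | ssh dr-host "cat > /backups/appdb-$(date +%Y%m%d).dump"

    # periodic restore test into a scratch database on the DR side
    ssh dr-host "createdb appdb_restore_test && pg_restore -d appdb_restore_test /backups/appdb-$(date +%Y%m%d).dump"

As Joshua notes, a warm standby fed by archived WAL covers the fast-failover side; dumps like these cover the off-site, total-loss case, and they are only trustworthy if the restore test is actually exercised.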
[ { "msg_contents": "Hello List,\n\ni would like to pimp my postgres setup. To make sure i dont have a slow\nhardware, i tested it on three diffrent enviorments:\n1.) Native Debian Linux (Dom0) and 4GB of RAM\n2.) Debian Linux in Xen (DomU) 4GB of RAM\n3.) Blade with SSD Disk 8GB of RAM\n\nHere are my results: http://i39.tinypic.com/24azpxg.jpg\n\nHere is my postgres config: http://pastebin.com/m5e40dbf0\n\nHere is my sysctl:\n----------------------------------\nkernel.shmmax = 3853361408\nkernel.shmall = 99999999\n\nHere is my pgbench benchmark script: http://pastebin.com/m676d0c1b\n\n\n\nHere are my hardware details:\n========================\n\nOn Hardware 1 + 2 (native linux and debian) i have the following\nHardware underneath:\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n- ARC-1220 8-Port PCI-Express on Raid6 with normal SATA drives 7200RPM\n- 1x Quadcore Intel(R) Xeon(R) CPU E5320 @ 1.86GHz\n\nThe Blade:\n----------------------------\n- 2x 16GB SSD set up in striping mode (Raid 0)\n- 2x Quardcore Intel(R) Xeon(R) CPU E5420 @ 2.50GHz\n\n\nAny idea why my performance colapses at 2GB Database size?\n\nThanks,\nMario\n\n\n\n", "msg_date": "Mon, 16 Mar 2009 11:48:36 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres benchmarking with pgbench" }, { "msg_contents": "On Mon, 16 Mar 2009, [email protected] wrote:\n\n> Any idea why my performance colapses at 2GB Database size?\n\npgbench results follow a general curve I outlined at \nhttp://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm and \nthe spot where performance drops hard depends on how big of a working set \nof data you can hold in RAM. (That shows a select-only test which is why \nthe results are so much higher than yours, all the tests work similarly as \nfar as the curve they trace).\n\nIn your case, you've got shared_buffers=1GB, but the rest of the RAM is \nthe server isn't so useful to you because you've got checkpoint_segments \nset to the default of 3. That means your system is continuously doing \nsmall checkpoints (check your database log files, you'll see what I \nmeant), which keeps things from ever really using much RAM before \neverything has to get forced to disk.\n\nIncrease checkpoint_segments to at least 30, and bump your \ntransactions/client to at least 10,000 while you're at it--the 32000 \ntransactions you're doing right now aren't nearly enough to get good \nresults from pgbench, 320K is in the right ballpark. That might be enough \nto push your TPS fall-off a bit closer to 4GB, and you'll certainly get \nmore useful results out of such a longer test. 
I'd suggest adding in \nscaling factors of 25, 50, and 150, those should let you see the standard \npgbench curve more clearly.\n\nOn this topic: I'm actually doing a talk introducing pgbench use at \ntonight's meeting of the Baltimore/Washington PUG, if any readers of this \nlist are in the area it should be informative: \nhttp://archives.postgresql.org/bwpug/2009-03/msg00000.php and \nhttp://omniti.com/is/here for directions.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 16 Mar 2009 11:04:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n\n> On Mon, 16 Mar 2009, [email protected] wrote:\n>\n>> Any idea why my performance colapses at 2GB Database size?\n\nI don't understand how you get that graph from the data above. The data above\nseems to show your test databases at 1.4GB and 2.9GB. There are no 1GB and 2GB\ndata points like the graphs show.\n\nPresumably the data point at 2G on the graph should really be at 2.9GB? In\nwhich case I don't find it surprising at all that performance would start to\nshift from RAM-resident before that to disk-resident above that. You have 1GB\nset aside for shared buffers leaving about 3GB for filesystem cache.\n\nYou could try setting shared buffers smaller, perhaps 512kB or larger, perhaps\n3.5GB. To minimize the overlap. I would tend to avoid the latter though.\n\nOne thing to realize is that pgbench performs a completely flat distribution\nof data accesses. So every piece of data is equally likely to be used. In real\nlife work-loads there are usually some busier and some less busy sections of\nthe database and the cache tends to keep the hotter data resident even as the\ndata set grows.\n\n> In your case, you've got shared_buffers=1GB, but the rest of the RAM is the\n> server isn't so useful to you because you've got checkpoint_segments set to the\n> default of 3. That means your system is continuously doing small checkpoints\n> (check your database log files, you'll see what I meant), which keeps things\n> from ever really using much RAM before everything has to get forced to disk.\n\nWhy would checkpoints force out any data? It would dirty those pages and then\nsync the files marking them clean, but they should still live on in the\nfilesystem cache.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Mon, 16 Mar 2009 15:28:10 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "On Mon, 16 Mar 2009, Gregory Stark wrote:\n\n> Why would checkpoints force out any data? It would dirty those pages and then\n> sync the files marking them clean, but they should still live on in the\n> filesystem cache.\n\nThe bulk of the buffer churn in pgbench is from the statement that updates \na row in the accounts table. That constantly generates updated data block \nand index block pages. 
If you can keep those changes in RAM for a while \nbefore forcing them to disk, you can get a lot of benefit from write \ncoalescing that goes away if constant checkpoints push things out with a \nfsync behind them.\n\nNot taking advantage of that effectively reduces the size of the OS cache, \nbecause you end up with a lot of space holding pending writes that \nwouldn't need to happen at all yet were the checkpoints spaced out better.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 16 Mar 2009 12:39:43 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n\n> On Mon, 16 Mar 2009, Gregory Stark wrote:\n>\n>> Why would checkpoints force out any data? It would dirty those pages and then\n>> sync the files marking them clean, but they should still live on in the\n>> filesystem cache.\n>\n> The bulk of the buffer churn in pgbench is from the statement that updates a\n> row in the accounts table. That constantly generates updated data block and\n> index block pages. If you can keep those changes in RAM for a while before\n> forcing them to disk, you can get a lot of benefit from write coalescing that\n> goes away if constant checkpoints push things out with a fsync behind them.\n>\n> Not taking advantage of that effectively reduces the size of the OS cache,\n> because you end up with a lot of space holding pending writes that wouldn't\n> need to happen at all yet were the checkpoints spaced out better.\n\nOk, so it's purely a question of write i/o, not reduced cache effectiveness.\nI think I could see that. I would be curious to see these results with a\nlarger checkpoint_segments setting.\n\nLooking further at the graphs I think they're broken but not in the way I had\nguessed. It looks like they're *overstating* the point at which the drop\noccurs. Looking at the numbers it's clear that under 1GB performs well but at\n1.5GBP it's already dropping to the disk-resident speed.\n\nI think pgbench is just not that great a model for real-world usage . a) most\nreal world workloads are limited by read traffic, not write traffic, and\ncertainly not random update write traffic; and b) most real-world work loads\nfollow a less uniform distribution so keeping busy records and index regions\nin memory is more effective.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Tue, 17 Mar 2009 00:17:18 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "On Tue, 17 Mar 2009, Gregory Stark wrote:\n\n> I think pgbench is just not that great a model for real-world usage\n\npgbench's default workload isn't a good model for anything. It wasn't a \nparticularly real-world test when the TPC-B it's based on was created, and \nthat was way back in 1990. And pgbench isn't even a good implementation \nof that spec (the rows are too narrow, comments about that in pgbench.c).\n\nAt this point, the only good thing you can say about pgbench is that it's \nbeen a useful tool for comparing successive releases of PostgreSQL in a \nrelatively fair way. 
Basically, it measures what pgbench measures, and \nthat has only a loose relationship with what people want a database to do.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Mar 2009 00:17:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "On Mon, Mar 16, 2009 at 10:17 PM, Greg Smith <[email protected]> wrote:\n> On Tue, 17 Mar 2009, Gregory Stark wrote:\n>\n>> I think pgbench is just not that great a model for real-world usage\n>\n> pgbench's default workload isn't a good model for anything.  It wasn't a\n> particularly real-world test when the TPC-B it's based on was created, and\n> that was way back in 1990.  And pgbench isn't even a good implementation of\n> that spec (the rows are too narrow, comments about that in pgbench.c).\n>\n> At this point, the only good thing you can say about pgbench is that it's\n> been a useful tool for comparing successive releases of PostgreSQL in a\n> relatively fair way.  Basically, it measures what pgbench measures, and that\n> has only a loose relationship with what people want a database to do.\n\nI'd say pgbench is best in negative. I.e it can't tell you a database\nserver is gonna be fast, but it can usually tell you when something's\nhorrifically wrong. If I just installed a new external storage array\nof some kind and I'm getting 6 tps, something is wrong somewhere.\n\nAnd it's good for exercising your disk farm for a week during burn in.\n It certainly turned up a bad RAID card last fall during acceptance\ntesting our new servers. Took 36 hours of pgbench to trip the bug and\ncause the card to lock up. Had one bad disk drive too that pgbench\nkilled of for me.\n", "msg_date": "Mon, 16 Mar 2009 23:33:12 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "Hi Greg,\n\nthanks a lot for your hints. I changed my config and changed raid6 to \nraid10, but whatever i do, the benchmark breaks down at a scaling factor \n75 where the database is \"only\" 1126MB big.\n\nHere are my benchmark Results (scaling factor, DB size in MB, TPS) using:\n pgbench -S -c X -t 1000 -U pgsql -d benchmark -h MYHOST\n\n1 19 8600\n5 79 8743\n10 154 8774\n20 303 8479\n30 453 8775\n40 602 8093\n50 752 6334\n75 1126 3881\n150 2247 2297\n200 2994 701\n250 3742 656\n300 4489 596\n400 5984 552\n500 7479 513\n\nI have no idea if this is any good for a QuardCore Intel(R) Xeon(R) CPU \nE5320 @ 1.86GHz with 4GB Ram and 6 SATA disk (7200rpm) in raid 10.\n\nHere is my config (maybe with some odd setting): \nhttp://pastebin.com/m5d7f5717\n\nI played around with:\n- max_connections\n- shared_buffers\n- work_mem\n- maintenance_work_mem\n- checkpoint_segments\n- effective_cache_size\n\n..but whatever i do, the graph looks the same. Any hints or tips what my \nconfig should look like? Or are these results even okay? Maybe i am \ndriving myself crazy for nothing?\n\nCheers,\nMario\n\n\nGreg Smith wrote:\n> On Mon, 16 Mar 2009, [email protected] wrote:\n>\n>> Any idea why my performance colapses at 2GB Database size?\n>\n> pgbench results follow a general curve I outlined at \n> http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm \n> and the spot where performance drops hard depends on how big of a \n> working set of data you can hold in RAM. 
(That shows a select-only \n> test which is why the results are so much higher than yours, all the \n> tests work similarly as far as the curve they trace).\n>\n> In your case, you've got shared_buffers=1GB, but the rest of the RAM \n> is the server isn't so useful to you because you've got \n> checkpoint_segments set to the default of 3. That means your system \n> is continuously doing small checkpoints (check your database log \n> files, you'll see what I meant), which keeps things from ever really \n> using much RAM before everything has to get forced to disk.\n>\n> Increase checkpoint_segments to at least 30, and bump your \n> transactions/client to at least 10,000 while you're at it--the 32000 \n> transactions you're doing right now aren't nearly enough to get good \n> results from pgbench, 320K is in the right ballpark. That might be \n> enough to push your TPS fall-off a bit closer to 4GB, and you'll \n> certainly get more useful results out of such a longer test. I'd \n> suggest adding in scaling factors of 25, 50, and 150, those should let \n> you see the standard pgbench curve more clearly.\n>\n> On this topic: I'm actually doing a talk introducing pgbench use at \n> tonight's meeting of the Baltimore/Washington PUG, if any readers of \n> this list are in the area it should be informative: \n> http://archives.postgresql.org/bwpug/2009-03/msg00000.php and \n> http://omniti.com/is/here for directions.\n>\n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n\n", "msg_date": "Thu, 19 Mar 2009 22:25:40 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "On Thu, Mar 19, 2009 at 3:25 PM, [email protected] <[email protected]> wrote:\n> Hi Greg,\n>\n> thanks a lot for your hints. I changed my config and changed raid6 to\n> raid10, but whatever i do, the benchmark breaks down at a scaling factor 75\n> where the database is \"only\" 1126MB big.\n>\n> Here are my benchmark Results (scaling factor, DB size in MB, TPS) using:\n>  pgbench -S -c  X  -t 1000 -U pgsql -d benchmark -h MYHOST\n\n-t 1000 is WAY too short to judge, you'll be seeing a lot of caching\neffects and no WAL flushing. Try a setting that gets you a run of at\nleast 5 or 10 minutes, preferably a half an hour for more realistic\nresults. Also what is -c X ??? Are you following the -c with the\nsame scaling factor that you used to create the test db? And why the\nselect only (-S)???\n\n> 1 19 8600\n> 5 79 8743\n> 10 154 8774\n> 20 303 8479\n> 30 453 8775\n> 40 602 8093\n> 50 752 6334\n> 75 1126 3881\n> 150 2247 2297\n> 200 2994 701\n> 250 3742 656\n> 300 4489 596\n> 400 5984 552\n> 500 7479 513\n>\n> I have no idea if this is any good for a QuardCore Intel(R) Xeon(R) CPU\n>  E5320  @ 1.86GHz with 4GB Ram and 6 SATA disk (7200rpm) in raid 10.\n>\n> Here is my config (maybe with some odd setting):\n> http://pastebin.com/m5d7f5717\n>\n> I played around with:\n> - max_connections\n> - shared_buffers\n\nYou've got this set to 1/2 your memory (2G). I've found that for\ntransactional work it's almost always better to set this much lower\nand let the OS do the caching, especially once your db is too big to\nfit in memory. 
Try setting lowering it and see what happens to your\nperformance envelope.\n\n> - work_mem\n> - maintenance_work_mem\n> - checkpoint_segments\n> - effective_cache_size\n\nThis is set to 3G, but with shared_mem set to 2G, you can't cache more\nthan 2G, because the OS will just be caching the same stuff as pgsql,\nor less. No biggie. Just not realistic\n\n> ..but whatever i do, the graph looks the same. Any hints or tips what my\n> config should look like? Or are these results even okay? Maybe i am driving\n> myself crazy for nothing?\n\nCould be. What do top and vmstat say during your test run?\n", "msg_date": "Thu, 19 Mar 2009 15:45:21 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "\nOn 3/19/09 2:25 PM, \"[email protected]\" <[email protected]> wrote:\n\n> \n> Here is my config (maybe with some odd setting):\n> http://pastebin.com/m5d7f5717\n> \n> I played around with:\n> - max_connections\n> - shared_buffers\n> - work_mem\n> - maintenance_work_mem\n> - checkpoint_segments\n> - effective_cache_size\n> \n> ..but whatever i do, the graph looks the same. Any hints or tips what my\n> config should look like? Or are these results even okay? Maybe i am\n> driving myself crazy for nothing?\n> \n> Cheers,\n> Mario\n> \n\nI'm assuming this is linux: What linux version? What is your kernel's\ndirty_ratio and background_dirty_ratio?\n\nThe default for a long time was 40 and 10, respectively. This is far too\nlarge for most uses on today's servers, you would not want 40% of your RAM\nto have pages not yet flushed to disk except perhaps on a small workstation.\nSee\nCurrent kernels default to 10 and 5, which is better.\n\nWhat is best for your real life workload will differ from pg_bench here.\nI don't know if this is the cause for any of your problems, but it is\nrelated closely to the checkpoint_segments and checkpoint size/time\nconfiguration.\n\nIs your xlog on the same device as the data? I have found that for most\nreal world workloads, having the xlog on a separate device helps\ntremendously. Even more so for 'poor' RAID controllers like the PERC5 --\nyour sync writes in xlog will be interfering with the RAID controller cache\nof your data due to bad design.\nBut my pg_bench knowledge with respect to this is limited.\n\n", "msg_date": "Thu, 19 Mar 2009 14:59:10 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" }, { "msg_contents": "\n\[email protected] wrote:\n> Hi Greg,\n>\n> thanks a lot for your hints. 
I changed my config and changed raid6 to \n> raid10, but whatever i do, the benchmark breaks down at a scaling \n> factor 75 where the database is \"only\" 1126MB big.\n>\n> Here are my benchmark Results (scaling factor, DB size in MB, TPS) using:\n> pgbench -S -c X -t 1000 -U pgsql -d benchmark -h MYHOST\n>\n> 1 19 8600\n> 5 79 8743\n> 10 154 8774\n> 20 303 8479\n> 30 453 8775\n> 40 602 8093\n> 50 752 6334\n> 75 1126 3881\n> 150 2247 2297\n> 200 2994 701\n> 250 3742 656\n> 300 4489 596\n> 400 5984 552\n> 500 7479 513\n>\n> I have no idea if this is any good for a QuardCore Intel(R) Xeon(R) \n> CPU E5320 @ 1.86GHz with 4GB Ram and 6 SATA disk (7200rpm) in raid 10.\n>\n> Here is my config (maybe with some odd setting): \n> http://pastebin.com/m5d7f5717\n>\n> I played around with:\n> - max_connections\n> - shared_buffers\n> - work_mem\n> - maintenance_work_mem\n> - checkpoint_segments\n> - effective_cache_size\n>\n> ..but whatever i do, the graph looks the same. Any hints or tips what \n> my config should look like? Or are these results even okay? Maybe i am \n> driving myself crazy for nothing?\n>\nAre you running the pgbench client from a different system? Did you \ncheck if the pgbench client itself is bottlenecked or not. I have seen \nbefore the client of pgbench is severely limited on the load it can \ndrive and process.\n\n\n-Jignesh\n\n", "msg_date": "Thu, 19 Mar 2009 19:28:19 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres benchmarking with pgbench" } ]
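A sketch of the longer runs Greg recommends, reusing the database, user and host names from Mario's command line. The client count of 32 is an assumption (the thread never pins down the -c value Mario used); with -t 10000 it gives roughly the 320,000 transactions per run that Greg calls the right ballpark, and checkpoint_segments is presumed to have already been raised to at least 30 in postgresql.conf:

    # rebuild the test database and run a longer select-only test at each
    # of the scaling factors suggested in the thread
    for s in 25 50 75 150; do
        pgbench -i -s $s -U pgsql -h MYHOST benchmark
        pgbench -S -c 32 -t 10000 -U pgsql -h MYHOST benchmark
    done

Each data point then covers enough transactions that the caching effects of a short run wash out.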
[ { "msg_contents": "First of all, I did pose this question first on the pgsql - admin mailing\nlist.\n\nAnd I know it is not appreciated to post across multiple mailing lists so I\n\nApologize in advance. I do not make it a practice to do so but, this being\n\nA performance issue I think I should have inquired on this list first. Rest\n\n\nAssured I won't double post again. \n\n \n\nThe issue is that during a restore on a remote site, (Postgres 8.2.5) \n\narchived logs are taking an average of 35 - 40 seconds apiece to restore. \n\nThis is roughly the same speed that they are being archived on the\nproduction\n\nSite. I compress the logs when I copy them over, then uncompress them\n\nDuring the restore using a cat | gzip -dc command. I don't think \n\nThe bottleneck is in that command - a log typically is uncompressed and\ncopied\n\nIn less than 2 seconds when I do this manually. Also when I pass a log\n\nThat is already uncompressed the performance improves by only about 10\npercent.\n\n \n\nA log compresses (using) gzip down to between 5.5 and 6.5 MB. \n\n I have attempted Increases in shared_buffers (250MB to 1500MB).\n\n Other relevant (I think) config parameters include:\n\n Maintenance_work_mem (300MB)\n\n Work_mem (75MB)\n\n Wal_buffers (48) \n\n Checkpoint_segments (32)\n\n Autovacuum (off)\n\n \n\n \n\nipcs -l\n\n \n\n------ Shared Memory Limits --------\n\nmax number of segments = 4096\n\nmax seg size (kbytes) = 4194303\n\nmax total shared memory (kbytes) = 1073741824\n\nmin seg size (bytes) = 1\n\n \n\n------ Semaphore Limits --------\n\nmax number of arrays = 128\n\nmax semaphores per array = 250\n\nmax semaphores system wide = 32000\n\nmax ops per semop call = 32\n\nsemaphore max value = 32767\n\n \n\n------ Messages: Limits --------\n\nmax queues system wide = 16\n\nmax size of message (bytes) = 65536\n\ndefault max size of queue (bytes) = 65536\n\n \n\nOur database size is about 130 GB. We use tar \n\nTo backup the file structure. Takes roughly about\n\nAn hour to xtract the tarball before PITR log recovery\n\nBegins. The tarball itself 31GB compressed.\n\n \n\nAgain I apologize for the annoying double posting but\n\nI am pretty much out of ideas to make this work.\n\n \n\n \n\n \n\nMark Steben│Database Administrator│ \n\n@utoRevenue-R- \"Join the Revenue-tion\"\n95 Ashley Ave. West Springfield, MA., 01089 \n413-243-4800 x1512 (Phone) │ 413-732-1824 (Fax)\n\n@utoRevenue is a registered trademark and a division of Dominion Enterprises\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nFirst of all, I did pose this question first on the pgsql –\nadmin mailing list.\nAnd I know it is not appreciated to post across multiple\nmailing lists so I\nApologize in advance.  I do not make it a practice to do so\nbut, this being\nA performance issue I think I should have inquired on this\nlist first.  Rest \nAssured I won’t double post again.  \n \nThe issue is that during a restore on a remote site, (Postgres\n8.2.5) \narchived logs are taking an average of 35 – 40 seconds\napiece to restore.  \nThis is roughly the same speed that they are being archived\non the production\nSite. I compress the logs when I copy them over, then\nuncompress them\nDuring the restore using a cat | gzip –dc command.  I\ndon’t think \nThe bottleneck is in that command – a log typically is\nuncompressed and copied\nIn less than 2 seconds when I do this manually.  
Also when I\npass a log\nThat is already uncompressed the performance improves by\nonly about 10 percent.\n \nA log compresses (using) gzip down to between 5.5 and 6.5 MB.\n\n I have attempted Increases in shared_buffers (250MB to\n1500MB).\n  Other relevant (I think) config parameters include:\n       Maintenance_work_mem (300MB)\n       Work_mem (75MB)\n       Wal_buffers (48)   \n       Checkpoint_segments (32)\n       Autovacuum (off)\n \n      \nipcs -l\n \n------ Shared Memory Limits --------\nmax number of segments = 4096\nmax seg size (kbytes) = 4194303\nmax total shared memory (kbytes) = 1073741824\nmin seg size (bytes) = 1\n \n------ Semaphore Limits --------\nmax number of arrays = 128\nmax semaphores per array = 250\nmax semaphores system wide = 32000\nmax ops per semop call = 32\nsemaphore max value = 32767\n \n------ Messages: Limits --------\nmax queues system wide = 16\nmax size of message (bytes) = 65536\ndefault max size of queue (bytes) = 65536\n \nOur database size is about 130 GB.  We use tar \nTo backup the file structure. Takes roughly about\nAn hour to xtract the tarball before PITR log recovery\nBegins.  The tarball itself 31GB compressed.\n \nAgain I apologize for the annoying double posting but\nI am pretty much out of ideas to make this work.\n \n \n \nMark Steben│Database\nAdministrator│ \n@utoRevenue­®­ \"Join the Revenue-tion\"\n95 Ashley Ave. West Springfield, MA., 01089 \n413-243-4800 x1512 (Phone) │ 413-732-1824 (Fax)\n@utoRevenue is a registered\ntrademark and a division of Dominion Enterprises", "msg_date": "Mon, 16 Mar 2009 12:11:23 -0400", "msg_from": "\"Mark Steben\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of archive logging in a PITR restore" }, { "msg_contents": "On Mon, 2009-03-16 at 12:11 -0400, Mark Steben wrote:\n> First of all, I did pose this question first on the pgsql – admin\n> mailing list.\n\n\n> The issue is that during a restore on a remote site, (Postgres 8.2.5) \n> \n> archived logs are taking an average of 35 – 40 seconds apiece to\n> restore. \n\nArchive logs are restored in a serialized manner so they will be slower\nto restore in general.\n\nJoshua D. Drake\n\n\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Mon, 16 Mar 2009 09:21:30 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of archive logging in a PITR restore" }, { "msg_contents": "Joshua D. Drake wrote:\n> On Mon, 2009-03-16 at 12:11 -0400, Mark Steben wrote:\n>> The issue is that during a restore on a remote site, (Postgres 8.2.5) \n\n8.2.5 is quite old. You should upgrade to the latest 8.2.X release.\n\n>> archived logs are taking an average of 35 – 40 seconds apiece to\n>> restore. \n> \n> Archive logs are restored in a serialized manner so they will be slower\n> to restore in general.\n\nYeah, if you have several concurrent processes on the primary doing I/O \nand generating log, at restore the I/O will be serialized.\n\nVersion 8.3 is significantly better with this (as long as you don't \ndisable full_page_writes). In earlier versions, each page referenced in \nthe WAL was read from the filesystem, only to be replaced with the full \npage image. In 8.3, we skip the read and just write over the page image. 
\nDepending on your application, that can make a very dramatic difference \nto restore time.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 18 Mar 2009 13:33:59 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of archive logging in a PITR restore" } ]
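For reference, a restore_command of the shape Mark describes (decompressing each archived segment on the fly) looks roughly like the recovery.conf line below; the archive directory is an illustrative assumption, while %f and %p are the standard file-name and destination-path substitutions:

    # recovery.conf on the standby
    restore_command = 'gzip -dc /mnt/archive/%f.gz > %p'

The decompression itself is cheap, as Mark's manual test showed; on 8.2 most of the 35 to 40 seconds per segment goes into the serialized replay Heikki describes, where each page touched by the WAL is read back from disk before being overwritten, which is the step 8.3 skips when full-page images are available.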
[ { "msg_contents": "Our production database is seeing very heavy CPU utilization - anyone \nhave any ideas/input considering the following?\n\nCPU utilization gradually increases during the day until it approaches \n90%-100% at our peak time. When this happens our transactions/sec \ndrops and our site becomes very slow. When in this state, I can see \nhundreds of queries in pg_stat_activity that are not waiting on locks \nbut sit there for minutes. When the database is not in this state, \nthose same queries can complete in fractions of a second - faster that \nmy script that watches pg_stat_activity can keep track of them.\n\nThis server has dual quad core xeon 5310s, 32 GB RAM, and a few \ndifferent disk arrays (all managed in hardware by either the Perc5/i \nor Perc5/e adapter). The Postgres data is on a 14 disk 7.2k SATA raid \n10. This server runs nothing but Postgres.\n\nThe PostgreSQL database (according to pg_database_size) is 55GB and we \nare running PostgreSQL 8.3.5 and the 2.6.28.7-2 kernel under Arch Linux.\n\nRight now (not under peak load) this server is running at 68% CPU \nutilization and its SATA raid 10 is doing about 2MB/s writes and 11MB/ \ns reads. When I run dd I can hit 200+MB/s writes and 230+ MB/s reads, \nso we are barely using the available IO. Further when I run dd the \nCPU utilization of that process only approaches 20%-30% of one core.\n\nAdditionally, when I view \"top -c\" I generally see a dozen or so \n\"idle\" postgres processes (they appear and drop away quickly though) \nconsuming very large chunks of CPU (as much as 60% of a core). At any \ngiven time we have hundreds of idle postgres processes due to the JDBC \nconnection pooling but most of them are 0% as I would expect them to \nbe. I also see selects and inserts consuming very large percentages \nof CPU but I view that as less strange since they are doing work.\n\nAny ideas as to what is causing our CPUs to struggle? Is the fact \nthat our RAM covers a significant portion of the database causing our \nCPUs to do a bunch of thrashing as they work with memory while our \ndisk controllers sit idle? According to top we barely use any swap.\n\nWe currently have max_connections set to 1000 (roughly the sum of the \nJDBC pools on our application servers). Would decreasing this value \nhelp? We can decrease the JDBC pools or switch to pgbouncer for \npooling if this is the case.\n\nReally just looking for any input/ideas. Our workload is primarily \nOLTP in nature - essentially a social network. By transactions/sec at \nthe start I am using the xact_commit value in pg_stat_database. \nPlease let me know if this value is not appropriate for getting a tps \nguess. Right now with the 60% CPU utilization and low IO use \nxact_commit is increasing at a rate of 1070 a second.\n\nI have an identical PITR slave I can pause the PITR sync on to run any \ntest against. I will happily provide any additional information that \nwould be helpful.\n\nAny assistance is greatly appreciated.\n\nJoe Uhl\n", "msg_date": "Mon, 16 Mar 2009 14:48:48 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "High CPU Utilization" }, { "msg_contents": "On Monday 16 March 2009, Joe Uhl <[email protected]> wrote:\n> Right now (not under peak load) this server is running at 68% CPU\n> utilization and its SATA raid 10 is doing about 2MB/s writes and 11MB/\n> s reads. When I run dd I can hit 200+MB/s writes and 230+ MB/s reads,\n> so we are barely using the available IO. 
Further when I run dd the\n> CPU utilization of that process only approaches 20%-30% of one core.\n\nWhat does vmstat say when it's slow? The output of \"vmstat 1 30\" would be \ninformative. \n\nnote: dd is sequential I/O. Normal database usage is random I/O. \n\n-- \nEven a sixth-grader can figure out that you can’t borrow money to pay off \nyour debt\n", "msg_date": "Mon, 16 Mar 2009 12:52:22 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "Here is vmstat 1 30. We are under peak load right now so I can gather \ninformation from the real deal :)\n\nHad an almost complete lockup a moment ago, number of non-idle \npostgres connections was 637. Going to drop our JDBC pool sizes a bit \nand bounce everything.\n\nprocs -----------memory---------- ---swap-- -----io---- -system-- ---- \ncpu----\nr b swpd free buff cache si so bi bo in cs us sy \nid wa\n12 35 95056 11102380 56856 14954948 3 4 669 541 1 2 \n23 3 54 19\n12 39 95056 11092484 56876 14963204 0 0 6740 1204 10066 \n13277 91 5 0 4\n8 42 95056 11081712 56888 14972244 0 0 8620 1168 10659 17020 \n78 6 0 15\n10 30 95052 11069768 56904 14982628 0 0 8944 976 9809 15109 \n81 6 1 12\n4 27 95048 11059576 56916 14991296 0 0 8852 440 7652 13294 \n63 4 2 32\n5 42 95048 11052524 56932 14996496 0 0 4700 384 6383 11249 \n64 4 4 28\n5 33 95048 11047492 56956 15001428 0 0 3852 572 6029 14010 \n36 4 5 56\n7 35 95048 11041184 56960 15005480 0 0 3964 136 5042 10802 \n40 3 1 56\n1 33 95048 11037988 56968 15009240 0 0 3892 168 3384 6479 \n26 1 3 69\n3 28 95048 11029332 56980 15015744 0 0 6724 152 4964 12844 \n11 2 8 79\n0 34 95048 11025880 56988 15020168 0 0 3852 160 3616 8614 \n11 1 6 82\n3 25 95048 10996356 57044 15044796 0 0 7892 456 3126 7115 \n4 3 8 85\n1 26 95048 10991692 57052 15050100 0 0 5188 176 2566 5976 \n3 2 12 83\n0 29 95048 10985408 57060 15054968 0 0 4200 80 2586 6582 \n4 1 12 83\n1 29 95048 10980828 57064 15058992 0 0 4560 64 2966 7557 \n7 2 6 85\n2 28 95048 10977192 57072 15063176 0 0 3860 72 2695 6742 \n11 1 7 81\n2 29 95048 10969120 57088 15067808 0 0 5084 84 3296 8067 \n14 1 0 84\n0 25 95048 10962096 57104 15072984 0 0 4440 500 2721 6263 \n12 1 6 80\n0 23 95044 10955320 57108 15079260 0 0 5712 232 2678 5990 \n6 1 6 87\n2 25 95044 10948644 57120 15084524 0 0 5120 184 3499 8143 \n20 3 9 69\n3 21 95044 10939744 57128 15090644 0 0 5756 264 4724 10272 \n32 3 5 60\n1 19 95040 10933196 57144 15095024 12 0 4440 180 2585 5244 \n13 2 15 70\n0 21 95040 10927596 57148 15098684 0 0 3248 136 2973 7292 \n8 1 9 81\n1 20 95040 10920708 57164 15104244 0 0 5192 360 1865 4547 \n3 1 9 87\n1 24 95040 10914552 57172 15105856 0 0 2308 16 1948 4450 \n6 1 1 93\n0 24 95036 10909148 57176 15110240 0 0 3824 152 1330 2632 \n3 1 6 90\n1 21 95036 10900628 57192 15116332 0 0 5680 180 1898 3986 \n4 1 11 84\n0 19 95036 10888356 57200 15121736 0 0 5952 120 2252 3991 \n12 1 8 79\n2 22 95036 10874336 57204 15128252 0 0 6320 112 2831 6755 \n5 2 8 85\n3 26 95036 10857592 57220 15134020 0 0 5124 216 3067 5296 \n32 6 3 59\n\nAlan, my apologies if you get this twice. Didn't reply back to the \nlist on first try.\n\nOn Mar 16, 2009, at 3:52 PM, Alan Hodgson wrote:\n\n> On Monday 16 March 2009, Joe Uhl <[email protected]> wrote:\n>> Right now (not under peak load) this server is running at 68% CPU\n>> utilization and its SATA raid 10 is doing about 2MB/s writes and \n>> 11MB/\n>> s reads. 
When I run dd I can hit 200+MB/s writes and 230+ MB/s \n>> reads,\n>> so we are barely using the available IO. Further when I run dd the\n>> CPU utilization of that process only approaches 20%-30% of one core.\n>\n> What does vmstat say when it's slow? The output of \"vmstat 1 30\" \n> would be\n> informative.\n>\n> note: dd is sequential I/O. Normal database usage is random I/O.\n>\n> -- \n> Even a sixth-grader can figure out that you can�t borrow money to \n> pay off\n> your debt\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 16 Mar 2009 16:10:01 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Mon, 16 Mar 2009, Joe Uhl wrote:\n\n> Here is vmstat 1 30. We are under peak load right now so I can gather \n> information from the real deal\n\nQuite helpful, reformatting a bit and picking an informative section:\n\nprocs -----------memory---------- ---swap- ----io--- -system-- ----cpu----\nr b swpd free buff cache si so bi bo in cs us sy id wa\n0 34 95048 11025880 56988 15020168 0 0 3852 160 3616 8614 11 1 6 82\n3 25 95048 10996356 57044 15044796 0 0 7892 456 3126 7115 4 3 8 85\n1 26 95048 10991692 57052 15050100 0 0 5188 176 2566 5976 3 2 12 83\n\nThis says that your server is spending all its time waiting for I/O, \nactual CPU utilization is minimal. You're only achieving around 3-8MB/s \nof random I/O. That's the reality of what your disk I/O subsystem is \ncapable of, regardless of what its sequential performance with dd looks \nlike. If you were to run a more complicated benchmark like bonnie++ \ninstead, I'd bet that your \"seeks/second\" results are very low, even \nthough sequential read/write is fine.\n\nThe Perc5 controllers have a pretty bad reputation for performance on this \nlist, even in RAID10. Not much you can do about that beyond scrapping the \ncontroller and getting a better one.\n\nWhat you might do in order to reduce the total number of writes needed is \nsome standard postgresql.conf tuning; see \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nWhat you could do here is increase shared_buffers, checkpoint_segments, \nand checkpoint_completion_target as described there. Having more buffers \ndedicated to the database and having less checkpoints can result in less \nrandom I/O showing up, as popular data pages will stay in RAM for longer \nwithout getting written out so much.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 16 Mar 2009 16:35:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "I dropped the pool sizes and brought things back up. Things are \nstable, site is fast, CPU utilization is still high. Probably just a \nmatter of time before issue comes back (we get slammed as kids get out \nof school in the US).\n\nNow when I run vmtstat 1 30 it looks very different (below). Waiting \nis minimal, user is very high. Under nontrivial load, according to \nxact_commit in pg_stat_database we are doing 1800+ tps.\n\nAppreciate the input and explanation on vmstat. I am going to throw \nsome of these numbers into zabbix so I can keep a better eye on them. 
\nThis server is a couple years old so the purchase of a new controller \nand/or disks is not out of the question.\n\nOn final note, have made several changes to postgresql.conf. Some of \nthose here:\nmax_connections = 1000\nshared_buffers = 7680MB\nwork_mem = 30MB\nsynchronous_commit = off\ncheckpoint_segments = 50\neffective_cache_size = 20000MB\n\nprocs -----------memory---------- ---swap-- -----io---- -system-- ---- \ncpu----\n r b swpd free buff cache si so bi bo in cs us \nsy id wa\n 9 8 73036 500164 82200 23497748 3 4 669 541 1 1 \n23 3 54 19\n20 4 73036 497252 82200 23500836 0 0 2500 680 11145 15168 \n91 4 2 2\n21 1 73036 491416 82204 23503832 0 0 1916 920 10303 14032 \n94 4 1 1\n23 5 73036 489580 82212 23505860 0 0 1348 3296 11682 15970 \n94 5 1 0\n31 1 73036 481408 82220 23507752 0 0 984 8988 10123 11289 \n97 3 0 0\n25 4 73036 483248 82232 23509420 0 0 1268 1312 10705 14063 \n96 4 0 0\n23 4 73036 480096 82232 23512380 0 0 2372 472 9805 13996 \n94 5 1 1\n24 4 73036 476732 82236 23515196 0 0 2012 720 10365 14307 \n96 3 1 0\n22 1 73036 474468 82236 23516584 0 0 944 3108 9838 12831 \n95 4 1 0\n14 1 73036 455756 82284 23534548 0 0 908 3284 9096 11333 \n94 4 1 0\n10 2 73036 455224 82292 23536304 0 0 1760 416 12454 17736 \n89 6 3 2\n17 0 73036 460620 82292 23538888 0 0 1292 968 12030 18333 \n90 7 2 1\n13 4 73036 459764 82292 23539724 0 0 332 288 9722 14197 \n92 5 2 1\n17 5 73036 457516 82292 23542176 0 0 1872 17752 10458 15465 \n91 5 2 1\n19 4 73036 450804 82300 23545640 0 0 2980 640 10602 15621 \n90 6 2 2\n24 0 73036 447660 82312 23547644 0 0 1736 10724 12401 15413 \n93 6 1 0\n20 6 73036 444380 82320 23550692 0 0 2064 476 9008 10985 \n94 4 1 0\n22 2 73036 442880 82328 23553640 0 0 2496 3156 10739 15211 \n93 5 1 1\n11 1 73036 441448 82328 23555632 0 0 1452 3552 10812 15337 \n93 5 2 1\n 6 2 73036 439812 82348 23557420 0 0 1052 1128 8603 10514 \n91 3 3 2\n 6 3 73036 433456 82348 23560860 0 0 2484 656 7636 13033 \n68 4 14 14\n 6 3 73036 433084 82348 23562628 0 0 1400 408 6046 11778 \n70 3 18 9\n 5 0 73036 430776 82356 23564264 0 0 1108 1300 7549 13754 \n73 4 16 7\n 5 2 73036 430124 82360 23565580 0 0 1016 2216 7844 14507 \n72 4 18 6\n 4 2 73036 429652 82380 23567480 0 0 1168 2468 7694 15466 \n58 4 24 14\n 6 2 73036 427304 82384 23569668 0 0 1132 752 5993 13606 \n49 5 36 10\n 7 1 73036 423020 82384 23571932 0 0 1244 824 8085 18072 \n56 3 30 10\nprocs -----------memory---------- ---swap-- -----io---- -system-- ---- \ncpu----\n r b swpd free buff cache si so bi bo in cs us \nsy id wa\n 4 0 73036 420816 82392 23573824 0 0 1292 820 5370 10958 \n46 2 41 10\n 9 1 73020 418048 82392 23576900 52 0 1632 2592 5931 11629 \n60 3 29 8\n 4 2 73004 415164 82424 23578620 56 0 1812 4116 7503 14674 \n71 3 15 12\n\nOn Mar 16, 2009, at 4:19 PM, Dave Youatt wrote:\n> Last column \"wa\" is % cpu time spent waiting (for IO to complete). \n> 80s\n> and 90s is pretty high, probably too high.\n>\n> Might also want to measure the IO/s performance of your RAID\n> controller. From the descriptions, it will be much more important \n> that\n> long sequential reads/writes for characterizing your workload.\n>\n> There are also some disappointing HW RAID controllers out there.\n> Generally, Aretec and Promise are good, Adaptec good, depending on\n> model, and the ones that Dell ship w/their servers haven't had good\n> reviews/reports.\n>\n>\n> On 03/16/2009 01:10 PM, Joe Uhl wrote:\n>> Here is vmstat 1 30. 
We are under peak load right now so I can \n>> gather\n>> information from the real deal :)\n>>\n>> Had an almost complete lockup a moment ago, number of non-idle\n>> postgres connections was 637. Going to drop our JDBC pool sizes a \n>> bit\n>> and bounce everything.\n>>\n>> procs -----------memory---------- ---swap-- -----io---- -system--\n>> ----cpu----\n>> r b swpd free buff cache si so bi bo in cs us \n>> sy\n>> id wa\n>> 12 35 95056 11102380 56856 14954948 3 4 669 541 1 2\n>> 23 3 54 19\n>> 12 39 95056 11092484 56876 14963204 0 0 6740 1204 10066\n>> 13277 91 5 0 4\n>> 8 42 95056 11081712 56888 14972244 0 0 8620 1168 10659 \n>> 17020\n>> 78 6 0 15\n>> 10 30 95052 11069768 56904 14982628 0 0 8944 976 9809 \n>> 15109\n>> 81 6 1 12\n>> 4 27 95048 11059576 56916 14991296 0 0 8852 440 7652 13294\n>> 63 4 2 32\n>> 5 42 95048 11052524 56932 14996496 0 0 4700 384 6383 11249\n>> 64 4 4 28\n>> 5 33 95048 11047492 56956 15001428 0 0 3852 572 6029 14010\n>> 36 4 5 56\n>> 7 35 95048 11041184 56960 15005480 0 0 3964 136 5042 10802\n>> 40 3 1 56\n>> 1 33 95048 11037988 56968 15009240 0 0 3892 168 3384 6479\n>> 26 1 3 69\n>> 3 28 95048 11029332 56980 15015744 0 0 6724 152 4964 12844\n>> 11 2 8 79\n>> 0 34 95048 11025880 56988 15020168 0 0 3852 160 3616 8614\n>> 11 1 6 82\n>> 3 25 95048 10996356 57044 15044796 0 0 7892 456 3126 7115\n>> 4 3 8 85\n>> 1 26 95048 10991692 57052 15050100 0 0 5188 176 2566 5976\n>> 3 2 12 83\n>> 0 29 95048 10985408 57060 15054968 0 0 4200 80 2586 6582\n>> 4 1 12 83\n>> 1 29 95048 10980828 57064 15058992 0 0 4560 64 2966 7557\n>> 7 2 6 85\n>> 2 28 95048 10977192 57072 15063176 0 0 3860 72 2695 6742\n>> 11 1 7 81\n>> 2 29 95048 10969120 57088 15067808 0 0 5084 84 3296 8067\n>> 14 1 0 84\n>> 0 25 95048 10962096 57104 15072984 0 0 4440 500 2721 6263\n>> 12 1 6 80\n>> 0 23 95044 10955320 57108 15079260 0 0 5712 232 2678 5990\n>> 6 1 6 87\n>> 2 25 95044 10948644 57120 15084524 0 0 5120 184 3499 8143\n>> 20 3 9 69\n>> 3 21 95044 10939744 57128 15090644 0 0 5756 264 4724 10272\n>> 32 3 5 60\n>> 1 19 95040 10933196 57144 15095024 12 0 4440 180 2585 5244\n>> 13 2 15 70\n>> 0 21 95040 10927596 57148 15098684 0 0 3248 136 2973 7292\n>> 8 1 9 81\n>> 1 20 95040 10920708 57164 15104244 0 0 5192 360 1865 4547\n>> 3 1 9 87\n>> 1 24 95040 10914552 57172 15105856 0 0 2308 16 1948 4450\n>> 6 1 1 93\n>> 0 24 95036 10909148 57176 15110240 0 0 3824 152 1330 2632\n>> 3 1 6 90\n>> 1 21 95036 10900628 57192 15116332 0 0 5680 180 1898 3986\n>> 4 1 11 84\n>> 0 19 95036 10888356 57200 15121736 0 0 5952 120 2252 3991\n>> 12 1 8 79\n>> 2 22 95036 10874336 57204 15128252 0 0 6320 112 2831 6755\n>> 5 2 8 85\n>> 3 26 95036 10857592 57220 15134020 0 0 5124 216 3067 5296\n>> 32 6 3 59\n>>\n>> Alan, my apologies if you get this twice. Didn't reply back to the\n>> list on first try.\n>>\n>> On Mar 16, 2009, at 3:52 PM, Alan Hodgson wrote:\n>>\n>>> On Monday 16 March 2009, Joe Uhl <[email protected]> wrote:\n>>>> Right now (not under peak load) this server is running at 68% CPU\n>>>> utilization and its SATA raid 10 is doing about 2MB/s writes and \n>>>> 11MB/\n>>>> s reads. When I run dd I can hit 200+MB/s writes and 230+ MB/s \n>>>> reads,\n>>>> so we are barely using the available IO. Further when I run dd the\n>>>> CPU utilization of that process only approaches 20%-30% of one \n>>>> core.\n>>>\n>>> What does vmstat say when it's slow? The output of \"vmstat 1 30\"\n>>> would be\n>>> informative.\n>>>\n>>> note: dd is sequential I/O. 
Normal database usage is random I/O.\n>>>\n>>> -- \n>>> Even a sixth-grader can figure out that you can�t borrow money to \n>>> pay\n>>> off\n>>> your debt\n>>>\n>>> -- \n>>> Sent via pgsql-performance mailing list\n>>> ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>\n\nOn Mar 16, 2009, at 4:35 PM, Greg Smith wrote:\n\n> On Mon, 16 Mar 2009, Joe Uhl wrote:\n>\n>> Here is vmstat 1 30. We are under peak load right now so I can \n>> gather information from the real deal\n>\n> Quite helpful, reformatting a bit and picking an informative section:\n>\n> procs -----------memory---------- ---swap- ----io--- -system-- \n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us \n> sy id wa\n> 0 34 95048 11025880 56988 15020168 0 0 3852 160 3616 8614 \n> 11 1 6 82\n> 3 25 95048 10996356 57044 15044796 0 0 7892 456 3126 7115 \n> 4 3 8 85\n> 1 26 95048 10991692 57052 15050100 0 0 5188 176 2566 5976 \n> 3 2 12 83\n>\n> This says that your server is spending all its time waiting for I/O, \n> actual CPU utilization is minimal. You're only achieving around \n> 3-8MB/s of random I/O. That's the reality of what your disk I/O \n> subsystem is capable of, regardless of what its sequential \n> performance with dd looks like. If you were to run a more \n> complicated benchmark like bonnie++ instead, I'd bet that your \n> \"seeks/second\" results are very low, even though sequential read/ \n> write is fine.\n>\n> The Perc5 controllers have a pretty bad reputation for performance \n> on this list, even in RAID10. Not much you can do about that beyond \n> scrapping the controller and getting a better one.\n>\n> What you might do in order to reduce the total number of writes \n> needed is some standard postgresql.conf tuning; see http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n> What you could do here is increase shared_buffers, \n> checkpoint_segments, and checkpoint_completion_target as described \n> there. Having more buffers dedicated to the database and having \n> less checkpoints can result in less random I/O showing up, as \n> popular data pages will stay in RAM for longer without getting \n> written out so much.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n\n", "msg_date": "Mon, 16 Mar 2009 16:50:04 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Mon, Mar 16, 2009 at 2:50 PM, Joe Uhl <[email protected]> wrote:\n> I dropped the pool sizes and brought things back up.  Things are stable,\n> site is fast, CPU utilization is still high.  Probably just a matter of time\n> before issue comes back (we get slammed as kids get out of school in the\n> US).\n\nYeah, I'm guessing your server (or more specifically its RAID card)\njust aren't up to the task. We had the same problem last year with a\nmachine with 16 Gig ram and dual dual core 3.0GHz xeons with a Perc 5\nsomething or other. No matter how we tuned it or played with it, we\njust couldn't get good random performance out of it. It's since been\nreplaced by a white box unit with a tyan mobo and dual 4 core opterons\nand an Areca 1680 and a 12 drive RAID-10. 
We can sustain 30 to 60 Megs\na second random access with 0 to 10% iowait.\n\nHere's a typical vmstat 10 output when our load factor is hovering around 8...\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n 4 1 460 170812 92856 29928156 0 0 604 3986 4863 10146\n74 3 20 3 0\n 7 1 460 124160 92912 29939660 0 0 812 5701 4829 9733 70\n 3 23 3 0\n13 0 460 211036 92984 29947636 0 0 589 3178 4429 9964 69\n 3 25 3 0\n 7 2 460 90968 93068 29963368 0 0 1067 4463 4915 11081\n78 3 14 5 0\n 7 3 460 115216 93100 29963336 0 0 3008 3197 4032 11812\n69 4 15 12 0\n 6 1 460 142120 93088 29923736 0 0 1112 6390 4991 11023\n75 4 15 6 0\n 6 0 460 157896 93208 29932576 0 0 698 2196 4151 8877 71\n 2 23 3 0\n11 0 460 124868 93296 29948824 0 0 963 3645 4891 10382\n74 3 19 4 0\n 5 3 460 95960 93272 29918064 0 0 592 30055 5550 7430 56\n 3 18 23 0\n 9 0 460 95408 93196 29914556 0 0 1090 3522 4463 10421\n71 3 21 5 0\n 9 0 460 128632 93176 29916412 0 0 883 4774 4757 10378\n76 4 17 3 0\n\nNote the bursty parts where we're shoving out 30Megs a second and the\nwait jumps to 23%. That's about as bad as it gets during the day for\nus. NBote that in your graph your bi column appears to be dominating\nyour bo column, so it looks like you're reaching a point where the\nwrite cache on the controller gets full and you're real throughput is\nshown to be ~ 1 megabyte a second outbound, and the inbound traffic\neither has priority or is just filling in the gaps. It looks to me\nlike your RAID card is prioritizing reads over writes, and the whole\nsystem is just slowing to a crawl. I'm willing to bet that if you\nwere running pure SW RAID with no RAID controller you'd get better\nnumbers.\n", "msg_date": "Mon, 16 Mar 2009 16:09:13 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n\n> On Mon, 16 Mar 2009, Joe Uhl wrote:\n>\n>> Here is vmstat 1 30. We are under peak load right now so I can gather\n>> information from the real deal\n>\n> Quite helpful, reformatting a bit and picking an informative section:\n>\n> procs -----------memory---------- ---swap- ----io--- -system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 34 95048 11025880 56988 15020168 0 0 3852 160 3616 8614 11 1 6 82\n> 3 25 95048 10996356 57044 15044796 0 0 7892 456 3126 7115 4 3 8 85\n> 1 26 95048 10991692 57052 15050100 0 0 5188 176 2566 5976 3 2 12 83\n>\n> This says that your server is spending all its time waiting for I/O, actual CPU\n> utilization is minimal. You're only achieving around 3-8MB/s of random I/O.\n> That's the reality of what your disk I/O subsystem is capable of, regardless of\n> what its sequential performance with dd looks like. If you were to run a more\n> complicated benchmark like bonnie++ instead, I'd bet that your \"seeks/second\"\n> results are very low, even though sequential read/write is fine.\n>\n> The Perc5 controllers have a pretty bad reputation for performance on this\n> list, even in RAID10. Not much you can do about that beyond scrapping the\n> controller and getting a better one.\n\nHm, well the tests I ran for posix_fadvise were actually on a Perc5 -- though\nwho knows if it was the same under the hood -- and I saw better performance\nthan this. I saw about 4MB/s for a single drive and up to about 35MB/s for 15\ndrives. However this was using linux md raid-0, not hardware raid.\n\nBut you shouldn't get your hopes up too much for random i/o. 
3-8MB seems low\nbut consider the following:\n\n $ units\n 2445 units, 71 prefixes, 33 nonlinear units\n\n You have: 8kB / .5|7200min\n You want: MB/s\n * 1.92\n / 0.52083333\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Tue, 17 Mar 2009 00:30:20 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Mon, 16 Mar 2009, Joe Uhl wrote:\n\n> Now when I run vmtstat 1 30 it looks very different (below).\n\nThat looks much better. Obviously you'd like some more headroom on the \nCPU situation than you're seeing, but that's way better than having so \nmuch time spent waiting for I/O.\n\n> max_connections = 1000\n> work_mem = 30MB\n\nBe warned that you need to be careful with this combination. If all 1000 \nconnections were to sort something at once, you could end up with >30GB \nworth of RAM used for that purpose. It's probably quite unlikely that \nwill happen, but 30MB is on the high side with that many connections.\n\nI wonder if your pool might work better, in terms of lowering total CPU \nusage, if you reduced the number of incoming connections. Each connection \nadds some overhead and now that you've got the I/O situation under better \ncontrol you might get by with less simultaneous ones. Something to \nconsider.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Mar 2009 00:12:42 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Tue, 17 Mar 2009, Gregory Stark wrote:\n\n> Hm, well the tests I ran for posix_fadvise were actually on a Perc5 -- though\n> who knows if it was the same under the hood -- and I saw better performance\n> than this. I saw about 4MB/s for a single drive and up to about 35MB/s for 15\n> drives. However this was using linux md raid-0, not hardware raid.\n\nRight, it's the hardware RAID on the Perc5 I think people mainly complain \nabout. If you use it in JBOD mode and let the higher performance CPU in \nyour main system drive the RAID functions it's not so bad.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Mar 2009 00:19:25 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Mar 17, 2009, at 12:19 AM, Greg Smith wrote:\n\n> On Tue, 17 Mar 2009, Gregory Stark wrote:\n>\n>> Hm, well the tests I ran for posix_fadvise were actually on a Perc5 \n>> -- though\n>> who knows if it was the same under the hood -- and I saw better \n>> performance\n>> than this. I saw about 4MB/s for a single drive and up to about \n>> 35MB/s for 15\n>> drives. However this was using linux md raid-0, not hardware raid.\n>\n> Right, it's the hardware RAID on the Perc5 I think people mainly \n> complain about. 
If you use it in JBOD mode and let the higher \n> performance CPU in your main system drive the RAID functions it's \n> not so bad.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n\nI have not yet had a chance to try software raid on the standby server \n(still planning to) but wanted to follow up to see if there was any \ngood way to figure out what the postgresql processes are spending \ntheir CPU time on.\n\nWe are under peak load right now, and I have Zabbix plotting CPU \nutilization and CPU wait (from vmstat output) along with all sorts of \nother vitals on charts. CPU utilization is a sustained 90% - 95% and \nCPU Wait is hanging below 10%. Since being pointed at vmstat by this \nlist I have been watching CPU Wait and it does get high at times \n(hence still wanting to try Perc5 in JBOD) but then there are \nsustained periods, right now included, where our CPUs are just getting \ncrushed while wait and IO (only doing about 1.5 MB/sec right now) are \nvery low.\n\nThis high CPU utilization only occurs when under peak load and when \nour JDBC pools are fully loaded. We are moving more things into our \ncache and constantly tuning indexes/tables but just want to see if \nthere is some underlying cause that is killing us.\n\nAny recommendations for figuring out what our database is spending its \nCPU time on?\n", "msg_date": "Fri, 20 Mar 2009 16:26:24 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Fri, Mar 20, 2009 at 2:26 PM, Joe Uhl <[email protected]> wrote:\n> On Mar 17, 2009, at 12:19 AM, Greg Smith wrote:\n>\n>> On Tue, 17 Mar 2009, Gregory Stark wrote:\n>>\n>>> Hm, well the tests I ran for posix_fadvise were actually on a Perc5 --\n>>> though\n>>> who knows if it was the same under the hood -- and I saw better\n>>> performance\n>>> than this. I saw about 4MB/s for a single drive and up to about 35MB/s\n>>> for 15\n>>> drives. However this was using linux md raid-0, not hardware raid.\n>>\n>> Right, it's the hardware RAID on the Perc5 I think people mainly complain\n>> about.  If you use it in JBOD mode and let the higher performance CPU in\n>> your main system drive the RAID functions it's not so bad.\n>>\n>> --\n>> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> I have not yet had a chance to try software raid on the standby server\n> (still planning to) but wanted to follow up to see if there was any good way\n> to figure out what the postgresql processes are spending their CPU time on.\n>\n> We are under peak load right now, and I have Zabbix plotting CPU utilization\n> and CPU wait (from vmstat output) along with all sorts of other vitals on\n> charts.  CPU utilization is a sustained 90% - 95% and CPU Wait is hanging\n> below 10%.  Since being pointed at vmstat by this list I have been watching\n> CPU Wait and it does get high at times (hence still wanting to try Perc5 in\n> JBOD) but then there are sustained periods, right now included, where our\n> CPUs are just getting crushed while wait and IO (only doing about 1.5 MB/sec\n> right now) are very low.\n>\n> This high CPU utilization only occurs when under peak load and when our JDBC\n> pools are fully loaded.  
We are moving more things into our cache and\n> constantly tuning indexes/tables but just want to see if there is some\n> underlying cause that is killing us.\n>\n> Any recommendations for figuring out what our database is spending its CPU\n> time on?\n\nWhat does the cs entry on vmstat say at this time? If you're cs is\nskyrocketing then you're getting a context switch storm, which is\nusually a sign that there are just too many things going on at once /\nyou've got an old kernel things like that.\n", "msg_date": "Fri, 20 Mar 2009 14:29:33 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "\nOn Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:\n\n> On Fri, Mar 20, 2009 at 2:26 PM, Joe Uhl <[email protected]> wrote:\n>> On Mar 17, 2009, at 12:19 AM, Greg Smith wrote:\n>>\n>>> On Tue, 17 Mar 2009, Gregory Stark wrote:\n>>>\n>>>> Hm, well the tests I ran for posix_fadvise were actually on a \n>>>> Perc5 --\n>>>> though\n>>>> who knows if it was the same under the hood -- and I saw better\n>>>> performance\n>>>> than this. I saw about 4MB/s for a single drive and up to about \n>>>> 35MB/s\n>>>> for 15\n>>>> drives. However this was using linux md raid-0, not hardware raid.\n>>>\n>>> Right, it's the hardware RAID on the Perc5 I think people mainly \n>>> complain\n>>> about. If you use it in JBOD mode and let the higher performance \n>>> CPU in\n>>> your main system drive the RAID functions it's not so bad.\n>>>\n>>> --\n>>> * Greg Smith [email protected] http://www.gregsmith.com \n>>> Baltimore, MD\n>>\n>> I have not yet had a chance to try software raid on the standby \n>> server\n>> (still planning to) but wanted to follow up to see if there was any \n>> good way\n>> to figure out what the postgresql processes are spending their CPU \n>> time on.\n>>\n>> We are under peak load right now, and I have Zabbix plotting CPU \n>> utilization\n>> and CPU wait (from vmstat output) along with all sorts of other \n>> vitals on\n>> charts. CPU utilization is a sustained 90% - 95% and CPU Wait is \n>> hanging\n>> below 10%. Since being pointed at vmstat by this list I have been \n>> watching\n>> CPU Wait and it does get high at times (hence still wanting to try \n>> Perc5 in\n>> JBOD) but then there are sustained periods, right now included, \n>> where our\n>> CPUs are just getting crushed while wait and IO (only doing about \n>> 1.5 MB/sec\n>> right now) are very low.\n>>\n>> This high CPU utilization only occurs when under peak load and when \n>> our JDBC\n>> pools are fully loaded. We are moving more things into our cache and\n>> constantly tuning indexes/tables but just want to see if there is \n>> some\n>> underlying cause that is killing us.\n>>\n>> Any recommendations for figuring out what our database is spending \n>> its CPU\n>> time on?\n>\n> What does the cs entry on vmstat say at this time? 
If you're cs is\n> skyrocketing then you're getting a context switch storm, which is\n> usually a sign that there are just too many things going on at once /\n> you've got an old kernel things like that.\n\ncs column (plus cpu columns) of vmtstat 1 30 reads as follows:\n\ncs us sy id wa\n11172 95 4 1 0\n12498 94 5 1 0\n14121 91 7 1 1\n11310 90 7 1 1\n12918 92 6 1 1\n10613 93 6 1 1\n9382 94 4 1 1\n14023 89 8 2 1\n10138 92 6 1 1\n11932 94 4 1 1\n15948 93 5 2 1\n12919 92 5 3 1\n10879 93 4 2 1\n14014 94 5 1 1\n9083 92 6 2 0\n11178 94 4 2 0\n10717 94 5 1 0\n9279 97 2 1 0\n12673 94 5 1 0\n8058 82 17 1 1\n8150 94 5 1 1\n11334 93 6 0 0\n13884 91 8 1 0\n10159 92 7 0 0\n9382 96 4 0 0\n11450 95 4 1 0\n11947 96 3 1 0\n8616 95 4 1 0\n10717 95 3 1 0\n\nWe are running on 2.6.28.7-2 kernel. I am unfamiliar with vmstat \noutput but reading the man page (and that cs = \"context switches per \nsecond\") makes my numbers seem very high.\n\nOur sum JDBC pools currently top out at 400 connections (and we are \ndoing work on all 400 right now). I may try dropping those pools down \neven smaller. Are there any general rules of thumb for figuring out \nhow many connections you should service at maximum? I know of the \nmemory constraints, but thinking more along the lines of connections \nper CPU core.\n\n", "msg_date": "Fri, 20 Mar 2009 16:49:00 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Fri, Mar 20, 2009 at 2:49 PM, Joe Uhl <[email protected]> wrote:\n>\n> On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:\n\n>> What does the cs entry on vmstat say at this time?  If you're cs is\n>> skyrocketing then you're getting a context switch storm, which is\n>> usually a sign that there are just too many things going on at once /\n>> you've got an old kernel things like that.\n>\n> cs column (plus cpu columns) of vmtstat 1 30 reads as follows:\n>\n> cs    us  sy id wa\n> 11172 95  4  1  0\n> 12498 94  5  1  0\n> 14121 91  7  1  1\n> 11310 90  7  1  1\n> 12918 92  6  1  1\n> 10613 93  6  1  1\n> 9382  94  4  1  1\n> 14023 89  8  2  1\n> 10138 92  6  1  1\n> 11932 94  4  1  1\n> 15948 93  5  2  1\n> 12919 92  5  3  1\n> 10879 93  4  2  1\n> 14014 94  5  1  1\n> 9083  92  6  2  0\n> 11178 94  4  2  0\n> 10717 94  5  1  0\n> 9279  97  2  1  0\n> 12673 94  5  1  0\n> 8058  82 17  1  1\n> 8150  94  5  1  1\n> 11334 93  6  0  0\n> 13884 91  8  1  0\n> 10159 92  7  0  0\n> 9382  96  4  0  0\n> 11450 95  4  1  0\n> 11947 96  3  1  0\n> 8616  95  4  1  0\n> 10717 95  3  1  0\n>\n> We are running on 2.6.28.7-2 kernel.  I am unfamiliar with vmstat output but\n> reading the man page (and that cs = \"context switches per second\") makes my\n> numbers seem very high.\n\nNo, those aren't really all that high. If you were hitting cs\ncontention, I'd expect it to be in the 25k to 100k range. <10k\naverage under load is pretty reasonable.\n\n> Our sum JDBC pools currently top out at 400 connections (and we are doing\n> work on all 400 right now).  I may try dropping those pools down even\n> smaller. Are there any general rules of thumb for figuring out how many\n> connections you should service at maximum?  
I know of the memory\n> constraints, but thinking more along the lines of connections per CPU core.\n\nWell, maximum efficiency is usually somewhere in the range of 1 to 2\ntimes the number of cores you have, so trying to get the pool down to\na dozen or two connections would be the direction to generally head.\nMay not be reasonable or doable though.\n", "msg_date": "Fri, 20 Mar 2009 14:58:18 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "\nOn Mar 20, 2009, at 4:58 PM, Scott Marlowe wrote:\n\n> On Fri, Mar 20, 2009 at 2:49 PM, Joe Uhl <[email protected]> wrote:\n>>\n>> On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:\n>\n>>> What does the cs entry on vmstat say at this time? If you're cs is\n>>> skyrocketing then you're getting a context switch storm, which is\n>>> usually a sign that there are just too many things going on at \n>>> once /\n>>> you've got an old kernel things like that.\n>>\n>> cs column (plus cpu columns) of vmtstat 1 30 reads as follows:\n>>\n>> cs us sy id wa\n>> 11172 95 4 1 0\n>> 12498 94 5 1 0\n>> 14121 91 7 1 1\n>> 11310 90 7 1 1\n>> 12918 92 6 1 1\n>> 10613 93 6 1 1\n>> 9382 94 4 1 1\n>> 14023 89 8 2 1\n>> 10138 92 6 1 1\n>> 11932 94 4 1 1\n>> 15948 93 5 2 1\n>> 12919 92 5 3 1\n>> 10879 93 4 2 1\n>> 14014 94 5 1 1\n>> 9083 92 6 2 0\n>> 11178 94 4 2 0\n>> 10717 94 5 1 0\n>> 9279 97 2 1 0\n>> 12673 94 5 1 0\n>> 8058 82 17 1 1\n>> 8150 94 5 1 1\n>> 11334 93 6 0 0\n>> 13884 91 8 1 0\n>> 10159 92 7 0 0\n>> 9382 96 4 0 0\n>> 11450 95 4 1 0\n>> 11947 96 3 1 0\n>> 8616 95 4 1 0\n>> 10717 95 3 1 0\n>>\n>> We are running on 2.6.28.7-2 kernel. I am unfamiliar with vmstat \n>> output but\n>> reading the man page (and that cs = \"context switches per second\") \n>> makes my\n>> numbers seem very high.\n>\n> No, those aren't really all that high. If you were hitting cs\n> contention, I'd expect it to be in the 25k to 100k range. <10k\n> average under load is pretty reasonable.\n>\n>> Our sum JDBC pools currently top out at 400 connections (and we are \n>> doing\n>> work on all 400 right now). I may try dropping those pools down even\n>> smaller. Are there any general rules of thumb for figuring out how \n>> many\n>> connections you should service at maximum? I know of the memory\n>> constraints, but thinking more along the lines of connections per \n>> CPU core.\n>\n> Well, maximum efficiency is usually somewhere in the range of 1 to 2\n> times the number of cores you have, so trying to get the pool down to\n> a dozen or two connections would be the direction to generally head.\n> May not be reasonable or doable though.\n\nThanks for the info. Figure I can tune our pools down and monitor \nthroughput/CPU/IO and look for a sweet spot with our existing \nhardware. Just wanted to see if tuning connections down could \npotentially help.\n\nI feel as though we are going to have to replicate this DB before too \nlong. We've got an almost identical server doing nothing but PITR \nwith 8 CPU cores mostly idle that could be better spent. Our pgfouine \nreports, though only logging queries that take over 1 second, show \n90% reads.\n\nI have heard much about Slony, but has anyone used the newer version \nof Mammoth Replicator (or looks to be called PostgreSQL + Replication \nnow) on 8.3? 
From the documentation, it appears to be easier to set \nup and less invasive but I struggle to find usage information/stories \nonline.\n\n", "msg_date": "Fri, 20 Mar 2009 17:16:47 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "\nOn Mar 20, 2009, at 4:58 PM, Scott Marlowe wrote:\n\n> On Fri, Mar 20, 2009 at 2:49 PM, Joe Uhl <[email protected]> wrote:\n>>\n>> On Mar 20, 2009, at 4:29 PM, Scott Marlowe wrote:\n>\n>>> What does the cs entry on vmstat say at this time? If you're cs is\n>>> skyrocketing then you're getting a context switch storm, which is\n>>> usually a sign that there are just too many things going on at \n>>> once /\n>>> you've got an old kernel things like that.\n>>\n>> cs column (plus cpu columns) of vmtstat 1 30 reads as follows:\n>>\n>> cs us sy id wa\n>> 11172 95 4 1 0\n>> 12498 94 5 1 0\n>> 14121 91 7 1 1\n>> 11310 90 7 1 1\n>> 12918 92 6 1 1\n>> 10613 93 6 1 1\n>> 9382 94 4 1 1\n>> 14023 89 8 2 1\n>> 10138 92 6 1 1\n>> 11932 94 4 1 1\n>> 15948 93 5 2 1\n>> 12919 92 5 3 1\n>> 10879 93 4 2 1\n>> 14014 94 5 1 1\n>> 9083 92 6 2 0\n>> 11178 94 4 2 0\n>> 10717 94 5 1 0\n>> 9279 97 2 1 0\n>> 12673 94 5 1 0\n>> 8058 82 17 1 1\n>> 8150 94 5 1 1\n>> 11334 93 6 0 0\n>> 13884 91 8 1 0\n>> 10159 92 7 0 0\n>> 9382 96 4 0 0\n>> 11450 95 4 1 0\n>> 11947 96 3 1 0\n>> 8616 95 4 1 0\n>> 10717 95 3 1 0\n>>\n>> We are running on 2.6.28.7-2 kernel. I am unfamiliar with vmstat \n>> output but\n>> reading the man page (and that cs = \"context switches per second\") \n>> makes my\n>> numbers seem very high.\n>\n> No, those aren't really all that high. If you were hitting cs\n> contention, I'd expect it to be in the 25k to 100k range. <10k\n> average under load is pretty reasonable.\n>\n>> Our sum JDBC pools currently top out at 400 connections (and we are \n>> doing\n>> work on all 400 right now). I may try dropping those pools down even\n>> smaller. Are there any general rules of thumb for figuring out how \n>> many\n>> connections you should service at maximum? I know of the memory\n>> constraints, but thinking more along the lines of connections per \n>> CPU core.\n>\n> Well, maximum efficiency is usually somewhere in the range of 1 to 2\n> times the number of cores you have, so trying to get the pool down to\n> a dozen or two connections would be the direction to generally head.\n> May not be reasonable or doable though.\n\nTurns out we may have an opportunity to purchase a new database server \nwith this increased load. 
Seems that the best route, based on \nfeedback to this thread, is to go whitebox, get quad opterons, and get \na very good disk controller.\n\nCan anyone recommend a whitebox vendor?\n\nIs there a current controller anyone on this list has experience with \nthat they could recommend?\n\nThis will be a bigger purchase so will be doing research and \nbenchmarking but any general pointers to a vendor/controller greatly \nappreciated.\n\n\n", "msg_date": "Tue, 24 Mar 2009 14:47:36 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Tue, 24 Mar 2009, Joe Uhl wrote:\n\n> Can anyone recommend a whitebox vendor?\n\nI dumped a list of recommended vendors from a discussion here a while back \nat http://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks you could get \nstarted with.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 24 Mar 2009 15:29:16 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Tue, Mar 24, 2009 at 1:29 PM, Greg Smith <[email protected]> wrote:\n> On Tue, 24 Mar 2009, Joe Uhl wrote:\n>\n>> Can anyone recommend a whitebox vendor?\n>\n> I dumped a list of recommended vendors from a discussion here a while back\n> at http://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks you could get\n> started with.\n\nI'd add Aberdeen Inc to that list. They supply quality white box\nservers with 3ware, areca, or LSI controllers, and provide a 5 year\nall inclusive warranty. Their customer service is top notch too.\n", "msg_date": "Tue, 24 Mar 2009 13:49:28 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "At 02:47 PM 3/24/2009, Joe Uhl wrote:\n\n>Turns out we may have an opportunity to purchase a new database \n>server with this increased load. Seems that the best route, based \n>on feedback to this thread, is to go whitebox, get quad opterons, \n>and get a very good disk controller.\n>\n>Can anyone recommend a whitebox vendor?\nI'll 2nd the Aberdeen recommendation. I'll add Pogolinux to that list as well.\n\n\n>Is there a current controller anyone on this list has experience \n>with that they could recommend?\nThe 2 best performing RAID controller vendors at this time are AMCC \n(AKA 3Ware) and Areca.\nIn general, the 8+ port Areca's with their BB cache maxed outperform \nevery other controller available.\n\n\n>This will be a bigger purchase so will be doing research and \n>benchmarking but any general pointers to a vendor/controller greatly \n>appreciated.\n\nBe =very= careful to thoroughly bench both the AMD and Intel CPU \noptions. It is far from clear which is the better purchase.\n\nI'd be very interested to see the results of your research and \nbenchmarks posted here on pgsql-performance.\n\nRon Peacetree \n\n", "msg_date": "Tue, 24 Mar 2009 18:58:13 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "On Tue, Mar 24, 2009 at 4:58 PM, Ron <[email protected]> wrote:\n> At 02:47 PM 3/24/2009, Joe Uhl wrote:\n>\n>> Turns out we may have an opportunity to purchase a new database server\n>> with this increased load.  
Seems that the best route, based on feedback to\n>> this thread, is to go whitebox, get quad opterons, and get a very good disk\n>> controller.\n>>\n>> Can anyone recommend a whitebox vendor?\n>\n> I'll 2nd the Aberdeen recommendation.  I'll add Pogolinux to that list as\n> well.\n>\n>\n>> Is there a current controller anyone on this list has experience with that\n>> they could recommend?\n>\n> The 2 best performing RAID controller vendors at this time are AMCC (AKA\n> 3Ware) and Areca.\n> In general, the 8+ port Areca's with their BB cache maxed outperform every\n> other controller available.\n>\n>\n>> This will be a bigger purchase so will be doing research and benchmarking\n>> but any general pointers to a vendor/controller greatly appreciated.\n>\n> Be =very= careful to thoroughly bench both the AMD and Intel CPU options.\n>  It is far from clear which is the better purchase.\n\nMy anecdotal experience has been that the Opterons stay afloat longer\nas load increases, but I haven't had machines with similar enough\nhardware to really test that.\n\n> I'd be very interested to see the results of your research and benchmarks\n> posted here on pgsql-performance.\n\nMe too. I'm gonna spend some time this summer benchmarking and tuning\nthe database servers that I pretty much had to burn in and put in\nproduction this year due to time pressures.\n", "msg_date": "Tue, 24 Mar 2009 17:16:30 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" }, { "msg_contents": "\nOn 3/24/09 4:16 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> On Tue, Mar 24, 2009 at 4:58 PM, Ron <[email protected]> wrote:\n>> At 02:47 PM 3/24/2009, Joe Uhl wrote:\n>> \n>>> Turns out we may have an opportunity to purchase a new database server\n>>> with this increased load.  Seems that the best route, based on feedback to\n>>> this thread, is to go whitebox, get quad opterons, and get a very good disk\n>>> controller.\n>>> \n>>> Can anyone recommend a whitebox vendor?\n>> \n>> I'll 2nd the Aberdeen recommendation.  I'll add Pogolinux to that list as\n>> well.\n>> \n>> \n>>> Is there a current controller anyone on this list has experience with that\n>>> they could recommend?\n>> \n>> The 2 best performing RAID controller vendors at this time are AMCC (AKA\n>> 3Ware) and Areca.\n>> In general, the 8+ port Areca's with their BB cache maxed outperform every\n>> other controller available.\n\nI personally have had rather bad performance experiences with 3Ware\n9550/9650 SATA cards. I have no experience with the AMCC SAS stuff though.\nAdaptec demolished the 9650 on arrays larger than 4 drives, and Areca will\ndo better at the very high end.\n\nHowever, if CPU is the issue for this particular case, then the RAID\ncontroller details are less significant.\n\nI don't know how much data you have, but don't forget the option of SSDs, or\na mix of hard drives and SSDs for different data. 
Ideally, you would want\nthe OS to just extend its pagecache onto a SSD, but only OpenSolaris can do\nthat right now and it is rather new (needs to be persistent across reboots).\n\nhttp://blogs.sun.com/brendan/entry/test\nhttp://blogs.sun.com/brendan/entry/l2arc_screenshots\n \n\n>> \n>> \n>>> This will be a bigger purchase so will be doing research and benchmarking\n>>> but any general pointers to a vendor/controller greatly appreciated.\n>> \n>> Be =very= careful to thoroughly bench both the AMD and Intel CPU options.\n>>  It is far from clear which is the better purchase.\n> \n> My anecdotal experience has been that the Opterons stay afloat longer\n> as load increases, but I haven't had machines with similar enough\n> hardware to really test that.\n> \n\nOne may want to note that Intel's next generation servers are due out within\n45 days from what I can sense ('Q2' traditionally means ~April 1 for Intel\nwhen on time). These should be a rather significant bump for a database as\nthey adopt the AMD / Alpha style memory-controller-on-CPU architecture and\nadd a lot of cache. Other relevant improvements: increased performance on\ncompare-and-swap operations, the return of hyper threading, and ridiculous\nmemory bandwidth per CPU (3 DDR3 memory channels per CPU).\n\n>> I'd be very interested to see the results of your research and benchmarks\n>> posted here on pgsql-performance.\n> \n> Me too. I'm gonna spend some time this summer benchmarking and tuning\n> the database servers that I pretty much had to burn in and put in\n> production this year due to time pressures.\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 24 Mar 2009 17:22:24 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High CPU Utilization" } ]
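The thread above leans on pg_stat_activity and pg_stat_database for its numbers (non-idle connections, xact_commit-based tps). As a rough, hypothetical sketch of that kind of monitoring -- not taken from the thread itself, and assuming the 8.3-era catalog columns where idle backends report current_query = '<IDLE>' -- queries along these lines can be sampled periodically:

    -- count backends that are actually doing work (the "non-idle
    -- postgres connections" figure quoted earlier in the thread)
    SELECT count(*) AS active_backends
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>';

    -- cumulative commit counter for the current database; sample it twice
    -- and divide the difference by the elapsed seconds to approximate
    -- transactions/sec, as done with xact_commit above
    SELECT now() AS sampled_at, xact_commit, xact_rollback
      FROM pg_stat_database
     WHERE datname = current_database();

Plotting the deltas (for example in Zabbix, as mentioned above) alongside vmstat's us and wa columns makes it easier to tell CPU saturation apart from I/O waits.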
[ { "msg_contents": "Test hardware:\nMB:GA-EP43-DS3L, CPU :intel 7400 RAM:2G HD:320G sata\n\ndebian5, freebsd7.1 are installed by default choice by cd-rom\ndebian use XFS, Freebsd use UFS+softdateup\nall software installed by app_get or pkg_add, change NOTHING configure.\nat begening of every testing, use initdb create new data, don't edit\ndefault postgressql.conf\n\nthis is table struct:\n\\d user_account;\n Table \"public.user_account\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------------------------------\n passport | character varying(32) | not null default ''::character(1)\n spassword | character(40) | not null default ''::character(1)\n guid | integer | not null default 0\n ptype | integer | default 0\nIndexes:\n \"user_account_pk_guid\" PRIMARY KEY, btree (guid)\n \"user_account_key_passport\" UNIQUE, btree (passport)\n\nuse sql file to insert data, one insert sql include 1000 row.\nua1m.sql include 1000 insert sql.\nso, there are all 1,000,000 row .\n\nusage : time psql node < ua1m.sql\n\n===============================================================================\nONE test:\ncreate table an primary key and unique, insert date.\n\ndebian : 3 min\nfreebsd: 10 min\n\ndebian win. faster thrice times then freebsd.\n\n===============================================================================\nTWO test:\nstep 1: create table without key and unique, after insert date,\nstep 2: ALTER TABLE....ADD pk and unique\n\n insert alter table\ndebian : 45s 26s\nfreebsd: 18s 10s\n\nfreebsd win.\n\n================================================================================\n\nwhe ONE test:\nI use iostat to observer IO.\nfreebsd : at beginning, io is 20-30M , after 30s, io go down slowly,\nabout 5M-7M\ndebian always keep one level.\n\nfreedb# iostat 2\n tty ad1 cpu\ntin tout KB/t tps MB/s us ni sy in id\n 0 22 11.88 8 0.09 0 0 0 0 100\n 0 22 0.00 0 0.00 0 0 0 0 100\n 0 22 0.00 0 0.00 0 0 0 0 100\n 0 22 0.00 0 0.00 0 0 0 0 100\n 0 75 42.62 52 2.16 4 0 1 0 95\n 0 388 47.42 491 22.75 30 0 4 0 66\n 0 388 46.60 502 22.85 32 0 3 0 65\n 0 321 50.01 588 28.69 26 0 4 0 69\n 0 298 45.77 592 26.44 26 0 2 0 71\n 0 328 43.86 536 22.96 27 0 3 1 69\n 0 269 45.72 659 29.42 25 0 2 1 72\n 0 239 35.69 661 23.05 21 0 2 1 76\n 0 216 26.91 535 14.06 19 0 2 0 80\n 0 343 42.71 520 21.69 34 0 3 0 62\n 0 29 33.25 343 11.13 1 0 2 0 97\n 0 96 27.97 412 11.26 8 0 2 0 90\n 0 319 27.48 429 11.52 25 0 3 0 71\n 0 126 38.58 357 13.45 10 0 2 0 87\n 0 81 34.50 302 10.19 6 0 2 0 91\n 0 81 36.62 288 10.32 5 0 2 0 92\n 0 66 30.89 285 8.60 4 0 2 0 94\n 0 44 43.47 382 16.23 4 0 2 0 94\n 0 21 32.31 297 9.37 0 0 1 0 99\n 0 36 27.29 385 10.26 2 0 1 0 97\n 0 21 22.12 350 7.56 0 0 1 0 99\n 0 66 26.02 477 12.11 5 0 2 0 92\n 0 51 36.14 292 10.32 3 0 1 0 96\n 0 29 31.48 278 8.55 1 0 1 0 98\n 0 44 35.26 284 9.80 3 0 2 0 95\n 0 51 34.27 295 9.87 3 0 1 0 96\n 0 44 30.35 238 7.06 3 0 2 0 96\n 0 44 28.12 290 7.98 2 0 2 0 96\n 0 44 30.64 271 8.11 3 0 1 0 96\n 0 51 32.52 288 9.16 3 0 2 0 96\n 0 44 33.29 265 8.61 2 0 1 0 96\n\n\n\ndebian1:~# iostat -t 2 sda\nLinux 2.6.26-1-686 (debian1) 03/16/2009 _i686_\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 2.66 2.17 246.58 329693 37377968\nsda 0.50 0.00 8.00 0 16\nsda 0.00 0.00 0.00 0 0\nsda 10.00 0.00 3117.00 0 6234\nsda 49.50 0.00 25833.50 0 51667\nsda 33.50 0.00 10091.00 0 20182\nsda 51.50 0.00 31448.50 0 62897\nsda 29.50 0.00 9900.00 0 19800\nsda 51.00 0.00 32721.00 0 65442\nsda 52.50 0.00 24387.00 0 48774\nsda 69.50 0.00 25178.00 0 50356\nsda 70.00 0.00 
40984.50 0 81969\nsda 28.50 0.00 9674.00 0 19348\nsda 63.50 0.00 35200.50 0 70401\nsda 64.00 0.00 25347.50 0 50695\nsda 72.50 0.00 44291.50 0 88583\nsda 33.00 0.00 8569.50 0 17139\nsda 32.00 0.00 8192.00 0 16384\nsda 52.00 0.00 24642.00 0 49284\nsda 366.00 0.00 55748.50 0 111497\nsda 91.50 0.00 6927.00 0 13854\nsda 42.00 0.00 7020.50 0 14041\nsda 117.50 0.00 8630.00 0 17260\nsda 101.00 0.00 9893.50 0 19787\nsda 104.00 0.00 12161.50 0 24323\nsda 52.50 0.00 9542.00 0 19084\nsda 133.50 0.00 56748.50 0 113497\nsda 514.50 0.00 30546.50 0 61093\nsda 233.00 0.00 11944.00 0 23888\nsda 43.50 0.00 9901.50 0 19803\nsda 54.00 0.00 12983.00 0 25966\nsda 87.00 0.00 18138.50 0 36277\nsda 144.50 0.00 39053.00 0 78106\nsda 409.00 0.00 61243.50 0 122487\nsda 191.00 0.00 11877.50 0 23755\nsda 87.50 0.00 25915.50 0 51831\nsda 120.50 0.00 24951.50 0 49903\nsda 200.00 0.00 34608.00 0 69216\nsda 172.50 0.00 25180.00 0 50360\nsda 397.50 0.00 60361.50 0 120723\nsda 298.50 0.00 35150.00 0 70300\nsda 130.00 0.00 17058.50 0 34117\nsda 145.50 0.00 31988.50 0 63977\nsda 33.50 0.00 13424.50 0 26849\nsda 106.00 0.00 32461.00 0 64922\nsda 195.00 0.00 27111.00 0 54222\nsda 212.00 0.00 41948.00 0 83896\nsda 297.00 0.00 49826.50 0 99653\nsda 393.50 0.00 21554.00 0 43108\nsda 199.50 0.00 15239.00 0 30478\nsda 249.50 0.00 29903.00 0 59806\nsda 214.50 0.00 23047.50 0 46095\nsda 253.50 0.00 34445.00 0 68890\nsda 326.50 0.00 18723.00 0 37446\nsda 274.00 0.00 22240.50 0 44481\nsda 164.00 0.00 20395.50 0 40791\nsda 203.50 0.00 28445.00 0 56890\nsda 76.50 0.00 12331.50 0 24663\nsda 274.00 0.00 40263.00 0 80526\nsda 456.00 0.00 38636.00 0 77272\nsda 311.50 0.00 13403.50 0 26807\nsda 126.00 0.00 2441.00 0 4882\nsda 244.00 0.00 18099.00 0 36198\nsda 248.00 0.00 27507.00 0 55014\nsda 264.50 0.00 31205.50 0 62411\nsda 330.50 0.00 26000.50 0 52001\nsda 322.00 0.00 33353.00 0 66706\nsda 185.50 0.00 32089.50 0 64179\nsda 144.50 0.00 32589.00 0 65178\nsda 368.50 0.00 40860.50 0 81721\nsda 429.50 0.00 20813.00 0 41626\nsda 165.50 0.00 16998.00 0 33996\nsda 383.50 0.00 28479.50 0 56959\nsda 204.50 0.00 32974.00 0 65948\nsda 316.00 0.00 26019.00 0 52038\nsda 366.50 0.00 20155.50 0 40311\nsda 273.00 0.00 23505.00 0 47010\nsda 260.50 0.00 57853.50 0 115707\nsda 418.50 0.00 18528.00 0 37056\nsda 329.50 0.00 19092.00 0 38184\nsda 165.00 0.00 19736.50 0 39473\nsda 296.50 0.00 24410.00 0 48820\nsda 247.50 0.00 22831.50 0 45663\nsda 320.00 0.00 31209.50 0 62419\nsda 330.00 0.00 37919.00 0 75838\nsda 315.50 0.00 45214.00 0 90428\nsda 442.50 0.00 26841.50 0 53683\nsda 316.50 0.00 19826.50 0 39653\n", "msg_date": "Tue, 17 Mar 2009 23:59:52 +0800", "msg_from": "luo roger <[email protected]>", "msg_from_op": true, "msg_subject": "Confused ! when insert with Preimary key, Freebsd 7.1 is slower\n\tthrice times then Debian5" } ]
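The second test above (load first, add the primary key and unique constraint afterwards) can be written out roughly as follows. This is only an illustrative sketch: the table definition and constraint names are taken from the \d output quoted in the message, but the data here is synthetic, generated with generate_series() instead of the poster's ua1m.sql file of 1000-row INSERT statements.

    BEGIN;

    CREATE TABLE user_account (
        passport  varchar(32) NOT NULL DEFAULT '',
        spassword char(40)    NOT NULL DEFAULT '',
        guid      integer     NOT NULL DEFAULT 0,
        ptype     integer     DEFAULT 0
    );

    -- bulk load with no indexes or constraints in place
    INSERT INTO user_account (passport, spassword, guid)
    SELECT 'user' || g::text, md5('user' || g::text), g
      FROM generate_series(1, 1000000) AS g;

    -- add the primary key and unique constraint only after the data is loaded
    ALTER TABLE user_account
        ADD CONSTRAINT user_account_pk_guid PRIMARY KEY (guid);
    ALTER TABLE user_account
        ADD CONSTRAINT user_account_key_passport UNIQUE (passport);

    COMMIT;

Building each index once at the end does a single bulk sort instead of maintaining the btrees row by row during the load, which is consistent with the large difference between the two tests reported above.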
[ { "msg_contents": "Short summary:\n * extremely slow intarray indexes with gist__int_ops\n * gist__intbig_ops seems much faster even with short arrays\n * gin seems much much faster for inserts and creation(and queries)\n\n\nI was debugging a system with a table that slowed to where individual\ninserts were taking well over a second on a practically idle system.\nDropping an a gist index on an intarray made the problem go away.\n\nTiming the creation of gist indexes on this 8.3.6 system makes me\nthink there's something broken with intarray's gist indexes.\n\nThis table summarizes some of the times, shown more completely\nin a script below.\n=================================================================\ncreate gist index on 10000 = 5 seconds\ncreate gist index on 20000 = 32 seconds\ncreate gist index on 30000 = 39 seconds\ncreate gist index on 40000 = 102 seconds\ncreate gist index on 70000 = I waited 10 minutes before giving up.\n\ncreate gin index on 40000 = 0.7 seconds\ncreate gist index on 40000 = 5 seconds using gist__intbig_ops\n\ncreate gin index on 70000 = 1.0 seconds\ncreate gist index on 70000 = 9 seconds using gist__intbig_ops\n==================================================================\n\nThis surprised me for a number of reasons. The longest\narray in the table is 9 elements long, and most are 5 or 6\nso I'd have thought the default ops would have been better\nthan the big_ops. Secondly, I thought gin inserts were expected\nto be slower than gist, but I'm finding them much faster.\n\nNothing seems particular strange about the data. A dump\nof an excerpt of the table can be found at\nhttp://0ape.com/tmp/int_array.dmp\n(Yes, the production table had other columns; but this\ncolumn alone is enough to demonstrate the problem.)\n\n Any thoughts what I'm doing wrong?\n Ron\n\npsql output showing the timing follows.\n\n===============================================================================\nvm=# create table tmp_intarray_test as select tag_id_array as my_int_array from taggings;\nSELECT\nvm=# create table tmp_intarray_test_10000 as select * from tmp_intarray_test limit 10000;\nSELECT\nvm=# create table tmp_intarray_test_20000 as select * from tmp_intarray_test limit 20000;\nSELECT\nvm=# create table tmp_intarray_test_30000 as select * from tmp_intarray_test limit 30000;\nSELECT\nvm=# create table tmp_intarray_test_40000 as select * from tmp_intarray_test limit 40000;\nSELECT\nvm=# \\timing\nTiming is on.\nvm=#\nvm=# create index \"gist_10000 using GIST(my_int_array)\" on tmp_intarray_test_10000 using GIST (my_int_array);\nCREATE INDEX\nTime: 5760.050 ms\nvm=# create index \"gist_20000 using GIST(my_int_array)\" on tmp_intarray_test_20000 using GIST (my_int_array);\nCREATE INDEX\nTime: 32500.911 ms\nvm=# create index \"gist_30000 using GIST(my_int_array)\" on tmp_intarray_test_30000 using GIST (my_int_array);\nCREATE INDEX\nTime: 39284.031 ms\nvm=# create index \"gist_40000 using GIST(my_int_array)\" on tmp_intarray_test_40000 using GIST (my_int_array);\nCREATE INDEX\nTime: 102572.780 ms\nvm=#\nvm=#\nvm=#\nvm=#\n\nvm=#\nvm=#\nvm=# create index \"gin_40000\" on tmp_intarray_test_40000 using GIN (my_int_array gin__int_ops);\nCREATE INDEX\nTime: 696.668 ms\nvm=# create index \"gist_big_4000\" on tmp_intarray_test_40000 using GIST (my_int_array gist__intbig_ops);\nCREATE INDEX\nTime: 5227.353 ms\nvm=#\nvm=#\nvm=#\nvm=# \\d tmp_intarray_test\n Table \"public.tmp_intarray_test\"\n Column | Type | Modifiers\n--------------+-----------+-----------\n my_int_array | 
integer[] |\n\nvm=# select max(array_dims(my_int_array)) from tmp_intarray_test_30000;\n max\n-------\n [1:9]\n(1 row)\n\nTime: 119.607 ms\nvm=#\nvm=#\nvm=# select version();\n version\n-----------------------------------------------------------------------------------\n PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC gcc (Debian 4.3.3-1) 4.3.3\n(1 row)\n\nTime: 12.169 ms\n\nvm=# create index \"gistbig70000\" on tmp_intarray_test using GIST (my_int_array gist__intbig_ops);\nCREATE INDEX\nTime: 9156.886 ms\nvm=# create index \"gin70000\" on tmp_intarray_test using GIN (my_int_array gin__int_ops);\nCREATE INDEX\nTime: 1060.752 ms\nvm=# create index \"gist7000\" on tmp_intarray_test using GIST (my_int_array gist__int_ops);\n [.... it just sits here for 10 minutes or more ....]\n", "msg_date": "Tue, 17 Mar 2009 10:09:36 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Extremely slow intarray index creation and inserts." }, { "msg_contents": "Ron Mayer wrote:\n> This table summarizes some of the times, shown more completely\n> in a script below.\n> =================================================================\n> create gist index on 10000 = 5 seconds\n> create gist index on 20000 = 32 seconds\n> create gist index on 30000 = 39 seconds\n> create gist index on 40000 = 102 seconds\n> create gist index on 70000 = I waited 10 minutes before giving up\n\n Finished after 34 minutes.\n\nvm=# create index \"gist70000\" on tmp_intarray_test using GIST (my_int_array gist__int_ops);\nCREATE INDEX\nTime: 2069836.856 ms\n\nIs that expected, or does it sound like a bug to take over\nhalf an hour to index 70000 rows of mostly 5 and 6-element\ninteger arrays?\n\n\n> create gin index on 40000 = 0.7 seconds\n> create gist index on 40000 = 5 seconds using gist__intbig_ops\n> \n> create gin index on 70000 = 1.0 seconds\n> create gist index on 70000 = 9 seconds using gist__intbig_ops\n> ==================================================================\n> \n> This surprised me for a number of reasons. The longest\n> array in the table is 9 elements long, and most are 5 or 6\n> so I'd have thought the default ops would have been better\n> than the big_ops. Secondly, I thought gin inserts were expected\n> to be slower than gist, but I'm finding them much faster.\n> \n> Nothing seems particular strange about the data. 
A dump\n> of an excerpt of the table can be found at\n> http://0ape.com/tmp/int_array.dmp\n> (Yes, the production table had other columns; but this\n> column alone is enough to demonstrate the problem.)\n> \n> Any thoughts what I'm doing wrong?\n> Ron\n> \n> psql output showing the timing follows.\n> \n> ===============================================================================\n> vm=# create table tmp_intarray_test as select tag_id_array as my_int_array from taggings;\n> SELECT\n> vm=# create table tmp_intarray_test_10000 as select * from tmp_intarray_test limit 10000;\n> SELECT\n> vm=# create table tmp_intarray_test_20000 as select * from tmp_intarray_test limit 20000;\n> SELECT\n> vm=# create table tmp_intarray_test_30000 as select * from tmp_intarray_test limit 30000;\n> SELECT\n> vm=# create table tmp_intarray_test_40000 as select * from tmp_intarray_test limit 40000;\n> SELECT\n> vm=# \\timing\n> Timing is on.\n> vm=#\n> vm=# create index \"gist_10000 using GIST(my_int_array)\" on tmp_intarray_test_10000 using GIST (my_int_array);\n> CREATE INDEX\n> Time: 5760.050 ms\n> vm=# create index \"gist_20000 using GIST(my_int_array)\" on tmp_intarray_test_20000 using GIST (my_int_array);\n> CREATE INDEX\n> Time: 32500.911 ms\n> vm=# create index \"gist_30000 using GIST(my_int_array)\" on tmp_intarray_test_30000 using GIST (my_int_array);\n> CREATE INDEX\n> Time: 39284.031 ms\n> vm=# create index \"gist_40000 using GIST(my_int_array)\" on tmp_intarray_test_40000 using GIST (my_int_array);\n> CREATE INDEX\n> Time: 102572.780 ms\n> vm=#\n> vm=#\n> vm=#\n> vm=#\n> \n> vm=#\n> vm=#\n> vm=# create index \"gin_40000\" on tmp_intarray_test_40000 using GIN (my_int_array gin__int_ops);\n> CREATE INDEX\n> Time: 696.668 ms\n> vm=# create index \"gist_big_4000\" on tmp_intarray_test_40000 using GIST (my_int_array gist__intbig_ops);\n> CREATE INDEX\n> Time: 5227.353 ms\n> vm=#\n> vm=#\n> vm=#\n> vm=# \\d tmp_intarray_test\n> Table \"public.tmp_intarray_test\"\n> Column | Type | Modifiers\n> --------------+-----------+-----------\n> my_int_array | integer[] |\n> \n> vm=# select max(array_dims(my_int_array)) from tmp_intarray_test_30000;\n> max\n> -------\n> [1:9]\n> (1 row)\n> \n> Time: 119.607 ms\n> vm=#\n> vm=#\n> vm=# select version();\n> version\n> -----------------------------------------------------------------------------------\n> PostgreSQL 8.3.6 on i686-pc-linux-gnu, compiled by GCC gcc (Debian 4.3.3-1) 4.3.3\n> (1 row)\n> \n> Time: 12.169 ms\n> \n> vm=# create index \"gistbig70000\" on tmp_intarray_test using GIST (my_int_array gist__intbig_ops);\n> CREATE INDEX\n> Time: 9156.886 ms\n> vm=# create index \"gin70000\" on tmp_intarray_test using GIN (my_int_array gin__int_ops);\n> CREATE INDEX\n> Time: 1060.752 ms\n> vm=# create index \"gist7000\" on tmp_intarray_test using GIST (my_int_array gist__int_ops);\n> [.... it just sits here for 10 minutes or more ....]\n> \n\n", "msg_date": "Tue, 17 Mar 2009 11:28:35 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely slow intarray index creation and inserts." }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> vm=# create index \"gist70000\" on tmp_intarray_test using GIST (my_int_array gist__int_ops);\n> CREATE INDEX\n> Time: 2069836.856 ms\n\n> Is that expected, or does it sound like a bug to take over\n> half an hour to index 70000 rows of mostly 5 and 6-element\n> integer arrays?\n\nI poked at this example with oprofile. 
It's entirely CPU-bound AFAICT,\nand the CPU utilization is approximately\n\n\t55%\tg_int_compress\n\t35%\tmemmove/memcpy (difficult to distinguish these)\n\t 1%\tpg_qsort\n\t<1%\tanything else\n\nProbably need to look at reducing the number of calls to g_int_compress\n... it must be getting called a whole lot more than once per new index\nentry, and I wonder why that should need to be.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Mar 2009 23:24:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow intarray index creation and inserts. " }, { "msg_contents": "Tom Lane wrote:\n> Ron Mayer <[email protected]> writes:\n>> vm=# create index \"gist70000\" on tmp_intarray_test using GIST (my_int_array gist__int_ops);\n>> CREATE INDEX\n>> Time: 2069836.856 ms\n> \n>> Is that expected, or does it sound like a bug to take over\n>> half an hour to index 70000 rows of mostly 5 and 6-element\n>> integer arrays?\n> \n> I poked at this example with oprofile. It's entirely CPU-bound AFAICT,\n\nOleg pointed out to me (off-list I now see) that it's not totally\nunexpected behavior and I should have been using gist__intbig_ops,\nsince the \"big\" refers to the cardinality of the entire set (which\nwas large, in my case) and not the length of the arrays.\n\nOleg Bartunov wrote:\nOB:> it's not about short or long arrays, it's about small or big\nOB:> cardinality of the whole set (the number of unique elements)\n\nI'm re-reading the docs and still wasn't obvious to me. A\npotential docs patch is attached below.\n\n> and the CPU utilization is approximately\n> \n> \t55%\tg_int_compress\n> \t35%\tmemmove/memcpy (difficult to distinguish these)\n> \t 1%\tpg_qsort\n> \t<1%\tanything else\n> \n> Probably need to look at reducing the number of calls to g_int_compress\n> ... it must be getting called a whole lot more than once per new index\n> entry, and I wonder why that should need to be.\n\nPerhaps that's a separate issue, but we're working\nfine with gist__intbig_ops for the time being.\n\n\n\nHere's a proposed docs patch that makes this more obvious.\n\n*** a/doc/src/sgml/intarray.sgml\n--- b/doc/src/sgml/intarray.sgml\n***************\n*** 239,245 ****\n <literal>gist__int_ops</> (used by default) is suitable for\n small and medium-size arrays, while\n <literal>gist__intbig_ops</> uses a larger signature and is more\n! suitable for indexing large arrays.\n </para>\n\n <para>\n--- 239,247 ----\n <literal>gist__int_ops</> (used by default) is suitable for\n small and medium-size arrays, while\n <literal>gist__intbig_ops</> uses a larger signature and is more\n! suitable for indexing high-cardinality data sets - where there\n! are a large number of unique elements across all rows being\n! indexed.\n </para>\n\n <para>\n\n", "msg_date": "Wed, 18 Mar 2009 09:12:29 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely slow intarray index creation and inserts." }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> Oleg Bartunov wrote:\n> OB:> it's not about short or long arrays, it's about small or big\n> OB:> cardinality of the whole set (the number of unique elements)\n\n> I'm re-reading the docs and still wasn't obvious to me. A\n> potential docs patch is attached below.\n\nDone, though not in exactly those words. 
I wonder though if we can\nbe less vague about it --- can we suggest a typical cutover point?\nLike \"use gist__intbig_ops if there are more than about 10,000 distinct\narray values\"? Even a rough order of magnitude for where to worry\nabout this would save a lot of people time.\n\n\t\t\tregards, tom lane\n\nIndex: intarray.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/intarray.sgml,v\nretrieving revision 1.5\nretrieving revision 1.6\ndiff -c -r1.5 -r1.6\n*** intarray.sgml\t10 Dec 2007 05:32:51 -0000\t1.5\n--- intarray.sgml\t18 Mar 2009 20:18:18 -0000\t1.6\n***************\n*** 237,245 ****\n <para>\n Two GiST index operator classes are provided:\n <literal>gist__int_ops</> (used by default) is suitable for\n! small and medium-size arrays, while\n <literal>gist__intbig_ops</> uses a larger signature and is more\n! suitable for indexing large arrays.\n </para>\n \n <para>\n--- 237,246 ----\n <para>\n Two GiST index operator classes are provided:\n <literal>gist__int_ops</> (used by default) is suitable for\n! small- to medium-size data sets, while\n <literal>gist__intbig_ops</> uses a larger signature and is more\n! suitable for indexing large data sets (i.e., columns containing\n! a large number of distinct array values).\n </para>\n \n <para>\n", "msg_date": "Wed, 18 Mar 2009 16:21:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow intarray index creation and inserts. " }, { "msg_contents": "We usually say about 200 unique values as a limit for\ngist_int_ops.\n\nOn Wed, 18 Mar 2009, Tom Lane wrote:\n\n> Ron Mayer <[email protected]> writes:\n>> Oleg Bartunov wrote:\n>> OB:> it's not about short or long arrays, it's about small or big\n>> OB:> cardinality of the whole set (the number of unique elements)\n>\n>> I'm re-reading the docs and still wasn't obvious to me. A\n>> potential docs patch is attached below.\n>\n> Done, though not in exactly those words. I wonder though if we can\n> be less vague about it --- can we suggest a typical cutover point?\n> Like \"use gist__intbig_ops if there are more than about 10,000 distinct\n> array values\"? Even a rough order of magnitude for where to worry\n> about this would save a lot of people time.\n>\n> \t\t\tregards, tom lane\n>\n> Index: intarray.sgml\n> ===================================================================\n> RCS file: /cvsroot/pgsql/doc/src/sgml/intarray.sgml,v\n> retrieving revision 1.5\n> retrieving revision 1.6\n> diff -c -r1.5 -r1.6\n> *** intarray.sgml\t10 Dec 2007 05:32:51 -0000\t1.5\n> --- intarray.sgml\t18 Mar 2009 20:18:18 -0000\t1.6\n> ***************\n> *** 237,245 ****\n> <para>\n> Two GiST index operator classes are provided:\n> <literal>gist__int_ops</> (used by default) is suitable for\n> ! small and medium-size arrays, while\n> <literal>gist__intbig_ops</> uses a larger signature and is more\n> ! suitable for indexing large arrays.\n> </para>\n>\n> <para>\n> --- 237,246 ----\n> <para>\n> Two GiST index operator classes are provided:\n> <literal>gist__int_ops</> (used by default) is suitable for\n> ! small- to medium-size data sets, while\n> <literal>gist__intbig_ops</> uses a larger signature and is more\n> ! suitable for indexing large data sets (i.e., columns containing\n> ! 
a large number of distinct array values).\n> </para>\n>\n> <para>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 19 Mar 2009 10:02:41 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow intarray index creation and inserts. " }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> We usually say about 200 unique values as a limit for\n> gist_int_ops.\n\nThat seems awfully small ... should we make gist_intbig_ops the default,\nor more likely, raise the signature size of both opclasses? Even at a\ncrossover point of 10000 I'm not sure that many real-world apps would\nbother considering gist_int_ops.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Mar 2009 08:25:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow intarray index creation and inserts. " }, { "msg_contents": "On Thu, 19 Mar 2009, Tom Lane wrote:\n\n> Oleg Bartunov <[email protected]> writes:\n>> We usually say about 200 unique values as a limit for\n>> gist_int_ops.\n>\n> That seems awfully small ... should we make gist_intbig_ops the default,\n> or more likely, raise the signature size of both opclasses? Even at a\n> crossover point of 10000 I'm not sure that many real-world apps would\n> bother considering gist_int_ops.\n\ngist__int_ops doesn't uses signatures, it uses range compression, which\nis not lossy, but not capacious. Perhaps, that's why we decided to use it \nas default opclass.\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 19 Mar 2009 15:38:38 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow intarray index creation and inserts. " }, { "msg_contents": "On Thu, 19 Mar 2009, Oleg Bartunov wrote:\n\n> On Thu, 19 Mar 2009, Tom Lane wrote:\n>\n>> Oleg Bartunov <[email protected]> writes:\n>>> We usually say about 200 unique values as a limit for\n>>> gist_int_ops.\n>> \n>> That seems awfully small ... should we make gist_intbig_ops the default,\n>> or more likely, raise the signature size of both opclasses? Even at a\n>> crossover point of 10000 I'm not sure that many real-world apps would\n>> bother considering gist_int_ops.\n>\n> gist__int_ops doesn't uses signatures, it uses range compression, which\n> is not lossy, but not capacious. 
Perhaps, that's why we decided to use it as\n\nsorry, it's lossy\n\n> default opclass.\n>\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 19 Mar 2009 15:45:28 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely slow intarray index creation and inserts. " } ]
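For readers who want to reproduce the fast cases from this thread, here is a compact sketch of the index definitions involved; the table and index names are invented for illustration, and only the opclasses are the ones discussed above:

CREATE TABLE taggings_example (id bigserial PRIMARY KEY, tag_id_array integer[]);

-- the two variants that stayed fast on this high-cardinality data set:
CREATE INDEX taggings_gin ON taggings_example USING GIN (tag_id_array gin__int_ops);
CREATE INDEX taggings_gistbig ON taggings_example USING GIST (tag_id_array gist__intbig_ops);

-- typical intarray searches that can use either index:
SELECT * FROM taggings_example WHERE tag_id_array @> ARRAY[42];
SELECT * FROM taggings_example WHERE tag_id_array && ARRAY[42, 43];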
[ { "msg_contents": "We have a few slow queries that use sequential scans on tables that have \nplenty of indexes (for other queries), on a box with a lot of RAM and 13 \nactive cores (don't ask), so I was curious to find out how to put this \nenvironment to better use. The result is (maybe) interesting, esp. since \nPostgreSQL is getting better at executing many queries in parallel \nlately and we will have more than 16 cores in typical servers very soon.\n\nThe simplified scenario was a query like\n\nselect * from t where foo ~ 'bla';\n\non a table with approx. 9m rows, taking around 12 seconds in the best \ncase. The table had a bigserial primary key \"eid\" with btree index, \nwhich seemed to be the most suitable starting point.\n\nThe current value range of eid was partitioned into intervals of equal \nsize and depending on the number of rows in these intervals, one or more \nof them were assigned to worker processes (this is done once per day, \nnot before each query!):\n\nworker 1: select * from t where foo ~ 'bla' and (eid >= 300000 and eid < \n400000)\nworker 2: select * from t where foo ~ 'bla' and (eid >= 500000 and eid < \n600000 or eid >= 1100000 and eid < 1200000 ...)\n...\n\nInstead of a sequential scan, a bunch of worker processes (implemented \nwith Gearman/Perl) would then execute queries (one each) with a plan like:\n Bitmap Heap Scan on t ...\n Recheck Cond: eid >= 300000 and eid < 400000 ..\n Filter: foo ~ 'bla'\n Bitmap Index Scan on t_pkey ...\n\nThis led to a speedup factor of 2 when the box was idle, i.e. the time \nto split the query, distribute the jobs to worker processes, execute the \nqueries in parallel, collect results and send them back to the client \nwas now ~6 seconds.\n\nObservations:\n- currently, the speedup is almost the same for anything between ~10 and \n 80+ workers (~12s down to ~7s on average, best run ~ 6s)\n- the effective processing speed of the workers varied greatly (fastest \n~3x to ~10x the rows/second of the slowest - real time divided by rows \nto work on)\n- the fastest workers went as fast as the sequential scans (in rows per \nsecond) - sometimes, but not always (most likely they were actually \nrunning alone then for some reason)\n- in each new run, the workers finished in a completely different order \n(even though they had the same parts of the table to work on and thus \nidentical queries) so perhaps the partitioning of the work load is quite \ngood already and it's more of a scheduling issue (Gearman? Shared buffer \ncontention?)\n- the Linux I/O scheduler had a visible effect, \"noop\" was better than \n\"deadline\" (others not tried yet) ~10%, but this is typical for random \nwrites and RAID controllers that manage their writeback cache as they \nlike (it's a wasted effort to reorder writes before hitting the RAID \ncontroller)\n- CLUSTER might help a lot (the workers should hit fewer pages and need \nfewer shared resources?) but I haven't tested it\n- our query performance is not limited by disk I/O (as is usually the \ncase I guess), since we have most of the tables/indexes in RAM. Whether \nit scales as well (or better?) with a proper disk subsystem and less \nRAM, is unknown.\n\nI hope there is some room for improvement so these queries can execute \nfaster in parallel for better scaling, these first results are quite \nencouraging. I'd love to put 32+ cores to use for single queries. \nPerhaps something like this could be built into PostgreSQL at some \npoint? 
There's no complicated multithreading/locking involved and \nPostgres has enough statistics available to distribute work even better. \nIt should be easy to implement this for any of the various connection \npooling solutions also.\n\nHas anyone done similar work in the light of upcoming many-core \nCPUs/systems? Any better results than 2x improvement?\n\nApologies if this is a well-known and widely used technique already. ;-)\n\nMarinos.\n\nPS. yes, for the example query we could use tsearch2 etc., but it has \ndrawbacks in our specific case (indexing overhead, no real regexps \npossible) and it's only an example anyway ...\n", "msg_date": "Tue, 17 Mar 2009 22:57:44 +0100", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": true, "msg_subject": "parallelizing slow queries for multiple cores (PostgreSQL + Gearman)" }, { "msg_contents": "On Tue, 17 Mar 2009, Marinos Yannikos wrote:\n\n> It should be easy to implement this for any of the various connection \n> pooling solutions also.\n\npgpool-II has something very similar in its \"Parallel Mode\", and the \nexample given at \nhttp://pgpool.projects.postgresql.org/pgpool-II/doc/pgpool-en.html looks \njust like your example. I wonder if anyone has tried using that feature \nbut just pointing pgpool-II at the same database instance multiple times?\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Mar 2009 18:38:44 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: parallelizing slow queries for multiple cores (PostgreSQL\n\t+ Gearman)" } ]
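The per-worker WHERE clauses above are just precomputed slices of the eid key space; a rough sketch of how equal-width ranges for N workers could be derived once a day (table t, column eid and the 4-way split come from the example above; nothing here is specific to Gearman or pgpool):

SELECT w AS worker,
       lo + (hi - lo + 1) * (w - 1) / 4 AS range_start,
       lo + (hi - lo + 1) * w / 4       AS range_end
FROM (SELECT min(eid) AS lo, max(eid) AS hi FROM t) AS bounds,
     generate_series(1, 4) AS w;

-- each worker then runs its slice, e.g.:
-- SELECT * FROM t WHERE foo ~ 'bla' AND eid >= <range_start> AND eid < <range_end>;

Equal-width key ranges only balance the load if eid values are spread evenly, which is why the posting above assigns intervals to workers based on their row counts instead.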
[ { "msg_contents": "\nHi,\n>Has anyone done similar work in the light of upcoming many-core CPUs/systems? Any better results than 2x improvement?\n\nYes, in fact I've done a very similar thing on a quad CPU box a while back. In my case the table in question had about 26 million rows. I did nothing special to the table (no cluster, no partitioning, nothing, of course the table did had the appropriate indexes). Queries on this table are analytic/reporting kind of queries. Basically they are just aggregations over a large number of rows. E.g. \"the sum of column1 and the sum of column2 where time is some time and columnA has some value and columnB has some other value\", that kind of thing. From analysis the queries appeared to be nearly 100% CPU bound.\nIn my (Java) application I divided a reporting query for say the last 60 days in 2 equal portions: day 1 to 30 and day 31 to 60 and assigned these to two worker threads. The results of these worker threads was merged using a simple resultset merge (the end result is simply the total of all rows returned by thread1 and thread2). The speed up I measured on the quad box was a near perfect factor 2. I then divided the workload in 4 equal portions: day 1 to 15, 16 to 30, 31 to 45 and 46 till 60. The speed up I measured was only a little less then a factor 4. I my situation too, the time I measured included dispatching the jobs to a thread pool and merging their results.\nOf course, such a scheme can only be easily used when all workers return individual rows that are directly part of the end result. If some further calculation has to be done on those rows, which happens to be the same calculation that is also done in the query you are parallelizing, then in effect you are duplicating logic. If you do that a lot in your code you can easily create a maintenance nightmare. Also, you have to be aware that without additional measures, every worker lives in its own transaction. Depending on the nature of the data this could potentially result in inconsistent data being returned. In your case, on tables generated once per day this wouldn't be the case, but as a general technique you have to be aware of this.\nAnyway, it's very clear that computers are moving to many-core architectures. Simple entry level servers already come these days with 8 cores. I've asked a couple of times on this list whether PG is going to support using multiple cores for a single query anytime soon, but this appears to be very unlikely. Until then it seems the only way to utilize multiple cores for a single query is doing it at the application level or by using something like pgpool-II.\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/", "msg_date": "Wed, 18 Mar 2009 13:52:52 +0100", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "parallelizing slow queries for multiple cores (PostgreSQL + Gearman)" } ]
[ { "msg_contents": "Hi,\n\nI am not sure if sending this to the right place. I did try to get the\nanswer from pgpool mailing list but no luck . Would appreciate if someone\ncan help here.\n\nWe are receving the following error in the postgres database logs:\n\n2009-03-19 02:14:20 PDT [2547]: [79-1] LOG: duration: 0.039 ms statement:\nRESET ALL\n2009-03-19 02:14:20 PDT [2547]: [80-1] LOG: duration: 0.027 ms statement:\nSET SESSION AUTHORIZATION DEFAULT\n2009-03-19 02:14:20 PDT [2547]: [81-1] ERROR: prepared statement \"S_1\" does\nnot exist\n2009-03-19 02:14:20 PDT [2547]: [82-1] STATEMENT: DEALLOCATE \"S_1\"\n2009-03-19 02:14:20 PDT [2547]: [83-1] ERROR: prepared statement \"S_4\" does\nnot exist\n2009-03-19 02:14:20 PDT [2547]: [84-1] STATEMENT: DEALLOCATE \"S_4\"\n\nWe receive this errors when we start connecting the java application\nthorugh pgpool. What causes this problem and how can it be avoided?\n\nPostgres version: 8.3.3\npgpool II: 2.0.1\n\nThanks & Regards,\nNimesh.\n\nHi,\n \nI am not sure if sending this to the right place. I did try to get the answer from pgpool mailing list but no luck . Would appreciate if someone can help here.\n \nWe are receving the following error in the postgres database logs:\n \n2009-03-19 02:14:20 PDT [2547]: [79-1] LOG:  duration: 0.039 ms  statement:  RESET ALL\n2009-03-19 02:14:20 PDT [2547]: [80-1] LOG:  duration: 0.027 ms  statement:  SET SESSION AUTHORIZATION DEFAULT \n2009-03-19 02:14:20 PDT [2547]: [81-1] ERROR:  prepared statement \"S_1\" does not exist\n2009-03-19 02:14:20 PDT [2547]: [82-1] STATEMENT:  DEALLOCATE \"S_1\"\n2009-03-19 02:14:20 PDT [2547]: [83-1] ERROR:  prepared statement \"S_4\" does not exist\n2009-03-19 02:14:20 PDT [2547]: [84-1] STATEMENT:  DEALLOCATE \"S_4\"\n \nWe receive this errors when we start connecting the java application thorugh pgpool. What causes this problem and how can it be avoided? \n \nPostgres version: 8.3.3\npgpool II: 2.0.1\n \nThanks & Regards,\nNimesh.", "msg_date": "Thu, 19 Mar 2009 17:02:07 +0530", "msg_from": "Nimesh Satam <[email protected]>", "msg_from_op": true, "msg_subject": "Prepared statement does not exist" }, { "msg_contents": "\n\n\n\n--- On Thu, 19/3/09, Nimesh Satam <[email protected]> wrote:\n> \n> We are receving the following error in the postgres\n> database logs:\n> \n> 2009-03-19 02:14:20 PDT [2547]: [79-1] LOG: duration:\n> 0.039 ms statement:\n> RESET ALL\n> 2009-03-19 02:14:20 PDT [2547]: [80-1] LOG: duration:\n> 0.027 ms statement:\n> SET SESSION AUTHORIZATION DEFAULT\n> 2009-03-19 02:14:20 PDT [2547]: [81-1] ERROR: prepared\n> statement \"S_1\" does\n> not exist\n> 2009-03-19 02:14:20 PDT [2547]: [82-1] STATEMENT: \n> DEALLOCATE \"S_1\"\n> 2009-03-19 02:14:20 PDT [2547]: [83-1] ERROR: prepared\n> statement \"S_4\" does\n> not exist\n> 2009-03-19 02:14:20 PDT [2547]: [84-1] STATEMENT: \n> DEALLOCATE \"S_4\"\n> \n> We receive this errors when we start connecting the java\n> application\n> thorugh pgpool. What causes this problem and how can it be\n> avoided?\n\nLooks like your app is dissconnecting from pgpool which is causing pgpool to send the RESET ALL, this will deallocate the prepared statement. Then the app is reconnecting to pgpool again and expecting the prepared statement to still be available, which it will not be.\n\n\n \n", "msg_date": "Thu, 19 Mar 2009 11:37:11 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statement does not exist" }, { "msg_contents": "Glyn Astill,\n\nThank for your reply. 
But can you confirm on this? From what I see in\nthe logs, it's pgpool which is trying to deallocate the prepared\nstatement and not the application. The application just disconnects\nand is not trying to use the same connection.\n\nRegards,\nNimesh.\n\nOn Thu, Mar 19, 2009 at 5:07 PM, Glyn Astill <[email protected]> wrote:\n\n>\n>\n>\n>\n> --- On Thu, 19/3/09, Nimesh Satam <[email protected]> wrote:\n> >\n> > We are receving the following error in the postgres\n> > database logs:\n> >\n> > 2009-03-19 02:14:20 PDT [2547]: [79-1] LOG: duration:\n> > 0.039 ms statement:\n> > RESET ALL\n> > 2009-03-19 02:14:20 PDT [2547]: [80-1] LOG: duration:\n> > 0.027 ms statement:\n> > SET SESSION AUTHORIZATION DEFAULT\n> > 2009-03-19 02:14:20 PDT [2547]: [81-1] ERROR: prepared\n> > statement \"S_1\" does\n> > not exist\n> > 2009-03-19 02:14:20 PDT [2547]: [82-1] STATEMENT:\n> > DEALLOCATE \"S_1\"\n> > 2009-03-19 02:14:20 PDT [2547]: [83-1] ERROR: prepared\n> > statement \"S_4\" does\n> > not exist\n> > 2009-03-19 02:14:20 PDT [2547]: [84-1] STATEMENT:\n> > DEALLOCATE \"S_4\"\n> >\n> > We receive this errors when we start connecting the java\n> > application\n> > thorugh pgpool. What causes this problem and how can it be\n> > avoided?\n>\n> Looks like your app is dissconnecting from pgpool which is causing pgpool\n> to send the RESET ALL, this will deallocate the prepared statement. Then the\n> app is reconnecting to pgpool again and expecting the prepared statement to\n> still be available, which it will not be.\n>\n>\n>\n>\n
", "msg_date": "Fri, 20 Mar 2009 14:33:58 +0530", "msg_from": "Nimesh Satam <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Prepared statement does not exist" }, { "msg_contents": "\n--- On Fri, 20/3/09, Nimesh Satam <[email protected]> wrote:\n\n> From: Nimesh Satam <[email protected]>\n> > > We are receving the following error in the\n> postgres\n> > > database logs:\n> > >\n> > > 2009-03-19 02:14:20 PDT [2547]: [79-1] LOG: \n> duration:\n> > > 0.039 ms statement:\n> > > RESET ALL\n> > > 2009-03-19 02:14:20 PDT [2547]: [80-1] LOG: \n> duration:\n> > > 0.027 ms statement:\n> > > SET SESSION AUTHORIZATION DEFAULT\n> > > 2009-03-19 02:14:20 PDT [2547]: [81-1] ERROR: \n> prepared\n> > > statement \"S_1\" does\n> > > not exist\n> > > 2009-03-19 02:14:20 PDT [2547]: [82-1] STATEMENT:\n> > > DEALLOCATE \"S_1\"\n> > > 2009-03-19 02:14:20 PDT [2547]: [83-1] ERROR: \n> prepared\n> > > statement \"S_4\" does\n> > > not exist\n> > > 2009-03-19 02:14:20 PDT [2547]: [84-1] STATEMENT:\n> > > DEALLOCATE \"S_4\"\n> > >\n> > > We receive this errors when we start connecting\n> the java\n> > > application\n> > > thorugh pgpool. What causes this problem and how\n> can it be\n> > > avoided?\n> >\n> > Looks like your app is dissconnecting from pgpool\n> which is causing pgpool\n> > to send the RESET ALL, this will deallocate the\n> prepared statement. Then the\n> > app is reconnecting to pgpool again and expecting the\n> prepared statement to\n> > still be available, which it will not be.\n> \n> Thank for your reply. But can you confirm on this? As what\n> I see from\n> the logs, its pgpool which is trying to deallocate the\n> prepared\n> statement and not the application. The application just\n> disconnects\n> and not tyring to use the same connection.\n\nThere is the possibility that it's pgpool sending the deallocate in error after the reset all then. Either way, this is not relevant to the performance list, send it over to the pgpool list... and tell them your pgpool version number too - it may be a fixed bug.\n\n\n \n", "msg_date": "Fri, 20 Mar 2009 12:02:01 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Prepared statement does not exist" } ]
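For reference, the error in those logs is what any session reports when it deallocates a name it no longer holds; a hand-run illustration, where s_1 merely stands in for the driver-generated statement names seen above:

PREPARE s_1 (int) AS SELECT $1;
EXECUTE s_1(42);
DEALLOCATE s_1;   -- succeeds
DEALLOCATE s_1;   -- ERROR: prepared statement "s_1" does not exist

-- prepared statements are private to a single backend connection; what the
-- current session still holds is listed in:
SELECT name, statement FROM pg_prepared_statements;

This is consistent with Glyn's suggestion: if the pooled backend no longer has (or never had) the statement by the time the DEALLOCATE arrives, it fails in exactly this way.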
[ { "msg_contents": "Hi,\nWe have the following 2 tables:\n\\d audit_change\n Table \"public.audit_change\"\n Column | Type | Modifiers\n----------------+------------------------+-----------\n id | character varying(32) | not null\naudit_entry_id | character varying(32) |\n...\nIndexes:\n \"audit_change_pk\" primary key, btree (id)\n \"audit_change_entry\" btree (audit_entry_id)\n\nand\n\\d audit_entry;\n Table \"public.audit_entry\"\n Column | Type | Modifiers\n----------------+--------------------------+-----------\n id | character varying(32) | not null\n object_id | character varying(32) | not null\n...\nIndexes:\n \"audit_entry_pk\" primary key, btree (id)\n \"audit_entry_object\" btree (object_id)\n\n\nWe do the following query:\nEXPLAIN ANALYZE\nSELECT audit_change.id AS id,\n audit_change.audit_entry_id AS auditEntryId,\n audit_entry.object_id AS objectId,\n audit_change.property_name AS propertyName,\n audit_change.property_type AS propertyType,\n audit_change.old_value AS\n oldValue, audit_change.new_value AS newValue,\n audit_change.flexfield AS flexField\nFROM audit_entry audit_entry, audit_change audit_change\nWHERE audit_change.audit_entry_id=audit_entry.id\nAND audit_entry.object_id='artf414029';\n\n QUERY \nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=8.79..253664.55 rows=4 width=136) (actual \ntime=4612.674..6683.158 rows=4 loops=1)\n Hash Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)\n -> Seq Scan on audit_change (cost=0.00..225212.52 rows=7584852 \nwidth=123) (actual time=0.009..2838.216 rows=7584852 loops=1)\n -> Hash (cost=8.75..8.75 rows=3 width=45) (actual time=0.049..0.049 \nrows=4 loops=1)\n -> Index Scan using audit_entry_object on audit_entry \n(cost=0.00..8.75 rows=3 width=45) (actual time=0.033..0.042 rows=4 loops=1)\n Index Cond: ((object_id)::text = 'artf414029'::text)\n Total runtime: 6683.220 ms\n(7 rows)\n\n\nWhy does the query not use the index on audit_entry_id and do a seq scan \n(as you see the table has many rows)?\n\n\n\nIf we split the query into 2 queries, it only takes less than 0.3 ms\nEXPLAIN ANALYZE select * from audit_entry WHERE \naudit_entry.object_id='artf414029';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Index Scan using audit_entry_object on audit_entry (cost=0.00..8.75 \nrows=3 width=111) (actual time=0.037..0.044 rows=4 loops=1)\n Index Cond: ((object_id)::text = 'artf414029'::text)\n Total runtime: 0.073 ms\n(3 rows)\n\nEXPLAIN ANALYZE select * from audit_change WHERE audit_entry_id in \n('adte1DDFEA5B011C8988C3928752', 'adte5DDFEA5B011D441230BD20CC', \n'adte5DDFEA5B011E40601E8DA10F', 'adte5DDFEA5B011E8CC26071627C') ORDER BY \nproperty_name ASC;\n \nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=30.25..30.27 rows=10 width=123) (actual time=0.190..0.192 \nrows=4 loops=1)\n Sort Key: property_name\n -> Bitmap Heap Scan on audit_change (cost=9.99..30.08 rows=10 \nwidth=123) (actual time=0.173..0.177 rows=4 loops=1)\n Recheck Cond: ((audit_entry_id)::text = ANY \n(('{adte1DDFEA5B011C8988C3928752,adte5DDFEA5B011D441230BD20CC,adte5DDFEA5B011E40601E8DA10F,adte5DDFEA5B011E8CC26071627C}'::character 
\nvarying[])::text[]))\n -> Bitmap Index Scan on audit_change_entry (cost=0.00..9.99 \nrows=10 width=0) (actual time=0.167..0.167 rows=4 loops=1)\n Index Cond: ((audit_entry_id)::text = ANY \n(('{adte1DDFEA5B011C8988C3928752,adte5DDFEA5B011D441230BD20CC,adte5DDFEA5B011E40601E8DA10F,adte5DDFEA5B011E8CC26071627C}'::character \nvarying[])::text[]))\n Total runtime: 0.219 ms\n(7 rows)\n\nThanks for your help,\nAnne\n", "msg_date": "Thu, 19 Mar 2009 13:35:01 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Need help with one query" }, { "msg_contents": "Anne Rosset wrote:\n> EXPLAIN ANALYZE\n> SELECT\n> audit_change.id AS id,\n> audit_change.audit_entry_id AS auditEntryId,\n> audit_entry.object_id AS objectId,\n> audit_change.property_name AS propertyName,\n> audit_change.property_type AS propertyType,\n> audit_change.old_value AS oldValue,\n> audit_change.new_value AS newValue,\n> audit_change.flexfield AS flexField\n> FROM\n> audit_entry audit_entry, audit_change audit_change\n> WHERE\n> audit_change.audit_entry_id = audit_entry.id\n> AND audit_entry.object_id = 'artf414029';\n[query reformatted to make it more readable]\n\nNot quite clear why you are aliasing the tables to their own names...\n\n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> \n> Hash Join (cost=8.79..253664.55 rows=4 width=136) (actual\n> time=4612.674..6683.158 rows=4 loops=1)\n> Hash Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)\n> -> Seq Scan on audit_change (cost=0.00..225212.52 rows=7584852\n> width=123) (actual time=0.009..2838.216 rows=7584852 loops=1)\n> -> Hash (cost=8.75..8.75 rows=3 width=45) (actual time=0.049..0.049\n> rows=4 loops=1)\n> -> Index Scan using audit_entry_object on audit_entry \n> (cost=0.00..8.75 rows=3 width=45) (actual time=0.033..0.042 rows=4 loops=1)\n> Index Cond: ((object_id)::text = 'artf414029'::text)\n> Total runtime: 6683.220 ms\n\nVery odd. It knows the table is large and that the seq-scan is going to\nbe expensive.\n\nTry issuing \"set enable_seqscan = off\" and run the explain analyse\nagain. That should show the cost of using the indexes.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 20 Mar 2009 09:21:18 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with one query" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n>> Hash Join (cost=8.79..253664.55 rows=4 width=136) (actual\n>> time=4612.674..6683.158 rows=4 loops=1)\n>> Hash Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)\n>> -> Seq Scan on audit_change (cost=0.00..225212.52 rows=7584852\n>> width=123) (actual time=0.009..2838.216 rows=7584852 loops=1)\n>> -> Hash (cost=8.75..8.75 rows=3 width=45) (actual time=0.049..0.049\n>> rows=4 loops=1)\n>> -> Index Scan using audit_entry_object on audit_entry \n>> (cost=0.00..8.75 rows=3 width=45) (actual time=0.033..0.042 rows=4 loops=1)\n>> Index Cond: ((object_id)::text = 'artf414029'::text)\n>> Total runtime: 6683.220 ms\n\n> Very odd. It knows the table is large and that the seq-scan is going to\n> be expensive.\n\nYeah, *very* odd. A nestloop with inner indexscan should have an\nestimated cost far lower than this plan. What Postgres version is\nthis exactly? 
Do you have any nondefault planner parameter settings?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Mar 2009 11:27:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with one query " }, { "msg_contents": "Richard Huxton wrote:\n\n>Anne Rosset wrote:\n> \n>\n>>EXPLAIN ANALYZE\n>>SELECT\n>> audit_change.id AS id,\n>> audit_change.audit_entry_id AS auditEntryId,\n>> audit_entry.object_id AS objectId,\n>> audit_change.property_name AS propertyName,\n>> audit_change.property_type AS propertyType,\n>> audit_change.old_value AS oldValue,\n>> audit_change.new_value AS newValue,\n>> audit_change.flexfield AS flexField\n>>FROM\n>> audit_entry audit_entry, audit_change audit_change\n>>WHERE\n>> audit_change.audit_entry_id = audit_entry.id\n>> AND audit_entry.object_id = 'artf414029';\n>> \n>>\n>[query reformatted to make it more readable]\n>\n>Not quite clear why you are aliasing the tables to their own names...\n>\n> \n>\n>>---------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>>Hash Join (cost=8.79..253664.55 rows=4 width=136) (actual\n>>time=4612.674..6683.158 rows=4 loops=1)\n>> Hash Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)\n>> -> Seq Scan on audit_change (cost=0.00..225212.52 rows=7584852\n>>width=123) (actual time=0.009..2838.216 rows=7584852 loops=1)\n>> -> Hash (cost=8.75..8.75 rows=3 width=45) (actual time=0.049..0.049\n>>rows=4 loops=1)\n>> -> Index Scan using audit_entry_object on audit_entry \n>>(cost=0.00..8.75 rows=3 width=45) (actual time=0.033..0.042 rows=4 loops=1)\n>> Index Cond: ((object_id)::text = 'artf414029'::text)\n>>Total runtime: 6683.220 ms\n>> \n>>\n>\n>Very odd. It knows the table is large and that the seq-scan is going to\n>be expensive.\n>\n>Try issuing \"set enable_seqscan = off\" and run the explain analyse\n>again. That should show the cost of using the indexes.\n>\n> \n>\n\nWith \"set enable_seqscan = off\":\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\nNested Loop (cost=11.35..12497.53 rows=59 width=859) (actual \ntime=46.074..49.742 rows=7 loops=1)\n-> Index Scan using audit_entry_pk on audit_entry (cost=0.00..7455.95 \nrows=55 width=164) (actual time=45.940..49.541 rows=2 loops=1)\nFilter: ((object_id)::text = 'artf1024'::text)\n-> Bitmap Heap Scan on audit_change (cost=11.35..90.93 rows=59 \nwidth=777) (actual time=0.086..0.088 rows=4 loops=2)\nRecheck Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)\n-> Bitmap Index Scan on audit_change_entry (cost=0.00..11.33 rows=59 \nwidth=0) (actual time=0.076..0.076 rows=4 loops=2)\nIndex Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)\nTotal runtime: 49.801 ms\n\n\nThe db version is 8.2.4\n\nWe are wondering if it is because of our audit_entry_id's format (like \n'adte1DDFEA5B011C8988C3928752'). 
Any inputs?\nThanks,\nAnne\n", "msg_date": "Fri, 20 Mar 2009 10:16:02 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help with one query" }, { "msg_contents": "On Fri, Mar 20, 2009 at 1:16 PM, Anne Rosset <[email protected]> wrote:\n> Richard Huxton wrote:\n>> Anne Rosset wrote:\n>>> EXPLAIN ANALYZE\n>>> SELECT\n>>>  audit_change.id             AS id,\n>>>  audit_change.audit_entry_id AS auditEntryId,\n>>>  audit_entry.object_id       AS objectId,\n>>>  audit_change.property_name  AS propertyName,\n>>>  audit_change.property_type  AS propertyType,\n>>>  audit_change.old_value      AS oldValue,\n>>>  audit_change.new_value      AS newValue,\n>>>  audit_change.flexfield      AS flexField\n>>> FROM\n>>>  audit_entry audit_entry, audit_change audit_change\n>>> WHERE\n>>>  audit_change.audit_entry_id = audit_entry.id\n>>>  AND audit_entry.object_id = 'artf414029';\n>>>\n>>\n>> [query reformatted to make it more readable]\n>>\n>> Not quite clear why you are aliasing the tables to their own names...\n>>\n>>\n>>>\n>>>\n>>> ---------------------------------------------------------------------------------------------------------------------------------------------\n>>>\n>>> Hash Join  (cost=8.79..253664.55 rows=4 width=136) (actual\n>>> time=4612.674..6683.158 rows=4 loops=1)\n>>>  Hash Cond: ((audit_change.audit_entry_id)::text =\n>>> (audit_entry.id)::text)\n>>>  ->  Seq Scan on audit_change  (cost=0.00..225212.52 rows=7584852\n>>> width=123) (actual time=0.009..2838.216 rows=7584852 loops=1)\n>>>  ->  Hash  (cost=8.75..8.75 rows=3 width=45) (actual time=0.049..0.049\n>>> rows=4 loops=1)\n>>>       ->  Index Scan using audit_entry_object on audit_entry\n>>> (cost=0.00..8.75 rows=3 width=45) (actual time=0.033..0.042 rows=4 loops=1)\n>>>             Index Cond: ((object_id)::text = 'artf414029'::text)\n>>> Total runtime: 6683.220 ms\n>>>\n>>\n>> Very odd. It knows the table is large and that the seq-scan is going to\n>> be expensive.\n>>\n>> Try issuing \"set enable_seqscan = off\" and run the explain analyse\n>> again. That should show the cost of using the indexes.\n>>\n>>\n>\n> With \"set enable_seqscan = off\":\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=11.35..12497.53 rows=59 width=859) (actual\n> time=46.074..49.742 rows=7 loops=1)\n> -> Index Scan using audit_entry_pk on audit_entry (cost=0.00..7455.95\n> rows=55 width=164) (actual time=45.940..49.541 rows=2 loops=1)\n> Filter: ((object_id)::text = 'artf1024'::text)\n> -> Bitmap Heap Scan on audit_change (cost=11.35..90.93 rows=59 width=777)\n> (actual time=0.086..0.088 rows=4 loops=2)\n> Recheck Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)\n> -> Bitmap Index Scan on audit_change_entry (cost=0.00..11.33 rows=59\n> width=0) (actual time=0.076..0.076 rows=4 loops=2)\n> Index Cond: ((audit_change.audit_entry_id)::text = (audit_entry.id)::text)\n> Total runtime: 49.801 ms\n>\n>\n> The db version is 8.2.4\n>\n> We are wondering if it is because of our audit_entry_id's format (like\n> 'adte1DDFEA5B011C8988C3928752').  Any inputs?\n> Thanks,\n> Anne\n\nSomething is wrong here. How can setting enable_seqscan to off result\nin a plan with a far lower estimated cost than the original plan? 
If\nthe planner thought the non-seq-scan plan is cheaper, it would have\npicked that one to begin with.\n\n...Robert\n", "msg_date": "Fri, 20 Mar 2009 15:04:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with one query" }, { "msg_contents": "Robert Haas escribi�:\n\n> Something is wrong here. How can setting enable_seqscan to off result\n> in a plan with a far lower estimated cost than the original plan? If\n> the planner thought the non-seq-scan plan is cheaper, it would have\n> picked that one to begin with.\n\nGEQO? Anne, what's geqo_threshold set to?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 20 Mar 2009 15:14:53 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with one query" }, { "msg_contents": "Alvaro Herrera wrote:\n\n>Robert Haas escribi�:\n>\n> \n>\n>>Something is wrong here. How can setting enable_seqscan to off result\n>>in a plan with a far lower estimated cost than the original plan? If\n>>the planner thought the non-seq-scan plan is cheaper, it would have\n>>picked that one to begin with.\n>> \n>>\n>\n>GEQO? Anne, what's geqo_threshold set to?\n>\n> \n>\nHi Alvaro:\nIt is turned off\ngeqo | off | Enables genetic query optimization.\nThanks,\nAnne\n", "msg_date": "Fri, 20 Mar 2009 13:29:02 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help with one query" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Mar 20, 2009 at 1:16 PM, Anne Rosset <[email protected]> wrote:\n>> The db version is 8.2.4\n\n> Something is wrong here. How can setting enable_seqscan to off result\n> in a plan with a far lower estimated cost than the original plan?\n\nPlanner bug no doubt ... given how old the PG release is, I'm not\nparticularly interested in probing into it now. If Anne can still\nreproduce this behavior on 8.2.something-recent, we should look closer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Mar 2009 20:37:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with one query " }, { "msg_contents": "On Fri, Mar 20, 2009 at 4:29 PM, Anne Rosset <[email protected]> wrote:\n> Alvaro Herrera wrote:\n>> Robert Haas escribió:\n>>> Something is wrong here.  How can setting enable_seqscan to off result\n>>> in a plan with a far lower estimated cost than the original plan?  If\n>>> the planner thought the non-seq-scan plan is cheaper, it would have\n>>> picked that one to begin with.\n>>>\n>>\n>> GEQO?  Anne, what's geqo_threshold set to?\n> Hi Alvaro:\n> It is turned off\n> geqo | off | Enables genetic query optimization.\n> Thanks,\n> Anne\n\nCan you please send ALL your non-commented postgresql.conf lines?\n\n...Robert\n", "msg_date": "Fri, 20 Mar 2009 20:37:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with one query" }, { "msg_contents": "Tom Lane wrote:\n\n>Robert Haas <[email protected]> writes:\n> \n>\n>>On Fri, Mar 20, 2009 at 1:16 PM, Anne Rosset <[email protected]> wrote:\n>> \n>>\n>>>The db version is 8.2.4\n>>> \n>>>\n>\n> \n>\n>>Something is wrong here. How can setting enable_seqscan to off result\n>>in a plan with a far lower estimated cost than the original plan?\n>> \n>>\n>\n>Planner bug no doubt ... 
given how old the PG release is, I'm not\n>particularly interested in probing into it now. If Anne can still\n>reproduce this behavior on 8.2.something-recent, we should look closer.\n>\n>\t\t\tregards, tom lane\n> \n>\nThanks Tom, Richard.\nHere are our postgres conf :\n\n\nshared_buffers = 2000MB \nsort_mem = 150000 \nvacuum_mem = 100000\nwork_mem = 20MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nmax_fsm_pages = 204800\nfull_page_writes = off # recover from partial page writes\nwal_buffers = 1MB # min 32kB\n # (change requires restart)\ncommit_delay = 20000 # range 0-100000, in microseconds\ncommit_siblings = 3 # range 1-1000\ncheckpoint_segments = 128\ncheckpoint_warning = 240s\nenable_indexscan = on\nenable_mergejoin = on\nenable_nestloop = off\nrandom_page_cost = 2.0\neffective_cache_size = 2500MB\ngeqo = off\ndefault_statistics_target = 750\nstats_command_string = on\nupdate_process_title = on\n\nstats_start_collector = on\nstats_row_level = on\n\nautovacuum = on \nautovacuum_vacuum_threshold = 500 # min # of tuple updates before\n # vacuum\nautovacuum_analyze_threshold = 250 # min # of tuple updates before\n # analyze\nautovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before\n # vacuum\nautovacuum_analyze_scale_factor = 0.1\n\n\n\nAnne\n", "msg_date": "Mon, 23 Mar 2009 10:08:59 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help with one query" }, { "msg_contents": "On Mon, Mar 23, 2009 at 1:08 PM, Anne Rosset <[email protected]> wrote:\n> enable_nestloop = off\n\nThat may be the source of your problem. Generally setting enable_* to\noff is a debugging tool, not something you ever want to do in\nproduction.\n\n...Robert\n", "msg_date": "Mon, 23 Mar 2009 13:15:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need help with one query" }, { "msg_contents": "Robert Haas wrote:\n\n>On Mon, Mar 23, 2009 at 1:08 PM, Anne Rosset <[email protected]> wrote:\n> \n>\n>>enable_nestloop = off\n>> \n>>\n>\n>That may be the source of your problem. Generally setting enable_* to\n>off is a debugging tool, not something you ever want to do in\n>production.\n>\n>...Robert\n> \n>\nThanks Robert. It seems to have solved the problem. Thanks a lot,\n\nAnne\n", "msg_date": "Mon, 23 Mar 2009 12:56:11 -0700", "msg_from": "Anne Rosset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need help with one query" } ]
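Since the culprit turned out to be a single line buried in postgresql.conf, a generic way to surface such settings without reading the file is worth noting; only the table names in the test query below are reused from this thread:

SELECT name, setting, source FROM pg_settings WHERE source <> 'default' ORDER BY name;

-- and to verify the effect in one session before editing the config file:
SET enable_nestloop = on;
EXPLAIN ANALYZE
SELECT ac.id, ac.audit_entry_id, ae.object_id
FROM audit_entry ae
JOIN audit_change ac ON ac.audit_entry_id = ae.id
WHERE ae.object_id = 'artf414029';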
[ { "msg_contents": "Hello List,\n\nis there a way to find out, how many transactions my currenc productive \ndatabase is doing?\n\nI know know how much i an offer with my new database and hardware, but i \nwould also like to know what i actually _need_ on my current productive \nsystem.\n\nIs there a way to find this out?\n\nCheers,\nMario\n", "msg_date": "Fri, 20 Mar 2009 10:26:29 +0100", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "current transaction in productive database" }, { "msg_contents": "\nOn Mar 20, 2009, at 5:26 AM, [email protected] wrote:\n\n> Hello List,\n>\n> is there a way to find out, how many transactions my currenc \n> productive database is doing?\n>\n> I know know how much i an offer with my new database and hardware, \n> but i would also like to know what i actually _need_ on my current \n> productive system.\n>\n> Is there a way to find this out?\n\nAre you looking to see how many transactions per second or more how \nmany transactions concurrently at a given time?\n\nFor the former you can use pgspy (its on pgfoundry) to get an idea of \nqueries per second coming in.\n\nFor the latter, just select * from pg_stat_activity where \ncurrent_query <> '<IDLE>';\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Fri, 20 Mar 2009 13:01:42 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: current transaction in productive database" }, { "msg_contents": "On Fri, 20 Mar 2009, [email protected] wrote:\n\n> is there a way to find out, how many transactions my currenc productive \n> database is doing?\n\nWhat you probably want here is not a true transaction count, which might \ninclude thing that don't matter much for scaling purposes, but instead to \ncount things happening that involve a database change. You can find out \ntotals for that broken down by table using this:\n\n select * from pg_stat_user_tables\n\nSee http://www.postgresql.org/docs/8.3/static/monitoring-stats.html for \nmore details. You'll want to sum the totals for inserts, updates, and \ndeletes to get all the normal transcations.\n\nThat will be a total since the statistics were last reset. If you want a \nsnapshot for a period, you can either sample at the beginning and end and \nsubtract, or you can use:\n\n select pg_stat_reset();\n\nTo reset everything, wait for some period, and then look at the totals. \nYou may not want to do that immediately though. 
The totals since the \ndatabase were brought up that you'll find in the statistics views can be \ninteresting to look at for some historical perspective, so you should \nprobably save any of those that look interesting before you reset \nanything.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 20 Mar 2009 13:34:00 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: current transaction in productive database" }, { "msg_contents": "[email protected] escreveu:\n> is there a way to find out, how many transactions my currenc productive\n> database is doing?\n> \nIf you're looking for number of transactions then you can query the catalogs as:\n\n$ export myq=\"select sum(xact_commit+xact_rollback) from pg_stat_database\"\n$ psql -U postgres -c \"$myq\" && sleep 60 && psql -U postgres -c \"$myq\"\n sum\n-----------\n 178992891\n(1 row)\n\n sum\n-----------\n 178996065\n(1 row)\n\n$ bc -q\nscale=3\n(178996065-178992891)/60\n52.900\n\nDepending on your workload pattern, it's recommended to increase the sleep time.\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Fri, 20 Mar 2009 14:50:36 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: current transaction in productive database" } ]
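Both suggestions in this thread can also be done purely in SQL: take two readings some time apart and divide the difference by the elapsed seconds. The column names below are those of the standard statistics views:

-- commits plus rollbacks across all databases:
SELECT sum(xact_commit + xact_rollback) AS total_xacts, now() AS sampled_at
FROM pg_stat_database;

-- only data-changing activity, broken down by table as Greg describes:
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY n_tup_ins + n_tup_upd + n_tup_del DESC;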
[ { "msg_contents": "I just discovered this on a LinkedIn user group:\n\nhttp://bugzilla.kernel.org/show_bug.cgi?id=12309\n\nIs anyone here seeing evidence of this in PostgreSQL??\n--\nM. Edward (Ed) Borasky\nhttp://www.linkedin.com/in/edborasky\n\nI've never met a happy clam. In fact, most of them were pretty steamed.\n", "msg_date": "Fri, 20 Mar 2009 21:17:05 -0700", "msg_from": "\"M. Edward (Ed) Borasky\" <[email protected]>", "msg_from_op": true, "msg_subject": "\"iowait\" bug?" }, { "msg_contents": "2009/3/21 M. Edward (Ed) Borasky <[email protected]>:\n> I just discovered this on a LinkedIn user group:\n>\n> http://bugzilla.kernel.org/show_bug.cgi?id=12309\n>\n> Is anyone here seeing evidence of this in PostgreSQL??\nI've been hit by an I/O wait problem, as described here:\nhttps://bugzilla.redhat.com/show_bug.cgi?id=444759\nI've told it to that other bug, but no one seems to have followed that path.\nRegards,\nLaurent\n", "msg_date": "Sun, 22 Mar 2009 08:49:56 +0100", "msg_from": "Laurent Wandrebeck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"iowait\" bug?" }, { "msg_contents": "On Sun, Mar 22, 2009 at 8:49 AM, Laurent Wandrebeck\n<[email protected]> wrote:\n> 2009/3/21 M. Edward (Ed) Borasky <[email protected]>:\n>> I just discovered this on a LinkedIn user group:\n>>\n>> http://bugzilla.kernel.org/show_bug.cgi?id=12309\n>>\n>> Is anyone here seeing evidence of this in PostgreSQL??\n> I've been hit by an I/O wait problem, as described here:\n> https://bugzilla.redhat.com/show_bug.cgi?id=444759\n> I've told it to that other bug, but no one seems to have followed that path.\n\nWe applied this mwi patch on 3 pgsql servers, and seen great\nperformance improvement.\nUsing 3ware, 8 SAS HDD, Octocore (2x4) Xeon and 32GB RAM, on a custom\n2.6.18 kernel.\n\n-- \nLaurent Laborde\nhttp://www.over-blog.com/\n", "msg_date": "Sun, 22 Mar 2009 16:29:35 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"iowait\" bug?" }, { "msg_contents": "On Fri, 20 Mar 2009, M. Edward (Ed) Borasky wrote:\n\n> I just discovered this on a LinkedIn user group:\n> http://bugzilla.kernel.org/show_bug.cgi?id=12309\n\nI would bet there's at least 3 different bugs in that one. That bug \nreport got a lot of press via Slashdot a few months ago, and it's picked \nall sort of people who all have I/O wait issues, but they don't all have \nthe same cause. The 3ware-specific problem Laurent mentioned is an \nexample. That's not the same thing most of the people there are running \ninto, the typical reporter there has disks attached directly to their \nmotherboard. The irony here is that #12309 was a fork of #7372 to start \nover with a clean discussion slat because the same thing happened to that \nearlier one.\n\nThe original problem reported there showed up in 2.6.20, so I've been able \nto avoid this whole thing by sticking to the stock RHEL5 kernel (2.6.18) \non most of the production systems I deal with. (Except for my system with \nan Areca card--that one needs 2.6.22 or later to be stable, and seems to \nhave no unexpected I/O wait issues. I think this is because it's taking \nover the lowest level I/O scheduling from Linux, when it pushes from the \ncard's cache onto the disks).\n\nSome of the people there reported significant improvement by tuning the \npdflush tunables; now that I've had to do a few times on systems to get \nrid of unexpected write lulls. 
I wrote up a walkthrough on one of them at \nhttp://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html that \ngoes over how to tell if you're running into that problem, and what to do \nabout it; something else I wrote on that already made it into the bug \nreport in comment #150.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 22 Mar 2009 16:17:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"iowait\" bug?" }, { "msg_contents": "2009/3/22 Greg Smith <[email protected]>:\n> On Fri, 20 Mar 2009, M. Edward (Ed) Borasky wrote:\n>\n>> I just discovered this on a LinkedIn user group:\n>> http://bugzilla.kernel.org/show_bug.cgi?id=12309\n>\n> I would bet there's at least 3 different bugs in that one.  That bug report\n> got a lot of press via Slashdot a few months ago, and it's picked all sort\n> of people who all have I/O wait issues, but they don't all have the same\n> cause.  The 3ware-specific problem Laurent mentioned is an example.  That's\n> not the same thing most of the people there are running into, the typical\n> reporter there has disks attached directly to their motherboard.  The irony\n> here is that #12309 was a fork of #7372 to start over with a clean\n> discussion slat because the same thing happened to that earlier one.\nThat I/O wait problem is not 3ware specific. A friend of mine has the\nsame problem/fix with aacraid.\nI'd bet a couple coins that controllers that show this problem do not set mwi.\nquickly grepping linux sources (2.6.28.8) for pci_try_set_mwi:\n(only disks controllers showed here)\n230:pata_cs5530.c\n3442:sata_mv.c\n2016:3w-9xxx.c\n147:qla_init.c\n2412:lpfc_init.c\n171:cs5530.c\n>\n> The original problem reported there showed up in 2.6.20, so I've been able\n> to avoid this whole thing by sticking to the stock RHEL5 kernel (2.6.18) on\n> most of the production systems I deal with.  (Except for my system with an\n> Areca card--that one needs 2.6.22 or later to be stable, and seems to have\n> no unexpected I/O wait issues.  I think this is because it's taking over the\n> lowest level I/O scheduling from Linux, when it pushes from the card's cache\n> onto the disks).\nI thought about completely fair scheduler at first, but that one came\nin around 2.6.21.\nsome tests were done with different I/O scheduler, and they do not\nseem to be the real cause of I/O wait.\nA bad interaction between hard raid cards cache and system willing the\ncard to write at the same time could be a reason.\nunfortunately, I've met it with a now retired box at work, that was\nrunning a single disk plugged on the mobo controller.\nSo, there's something else under the hood...but my (very) limited\nkernel knowledge can't help more here.\n>\n> Some of the people there reported significant improvement by tuning the\n> pdflush tunables; now that I've had to do a few times on systems to get rid\n> of unexpected write lulls.  I wrote up a walkthrough on one of them at\n> http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html that\n> goes over how to tell if you're running into that problem, and what to do\n> about it; something else I wrote on that already made it into the bug report\n> in comment #150.\nI think that forcing the system to write down more often, and smaller\ndata just hides the problem, and doesn't correct it.\nBut well, that's just feeling, not science. 
I hope some real hacker\nwill be able to spot the problem(s) so they can be fixed.\nanyway, I keep a couple coins on mwi as a source of problem :-)\nRegards,\nLaurent\n", "msg_date": "Mon, 23 Mar 2009 00:14:52 +0100", "msg_from": "Laurent Wandrebeck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"iowait\" bug?" }, { "msg_contents": "On Mon, 23 Mar 2009, Laurent Wandrebeck wrote:\n\n> I thought about completely fair scheduler at first, but that one came\n> in around 2.6.21.\n\nCFS showed up in 2.6.23.\n\n> I think that forcing the system to write down more often, and smaller\n> data just hides the problem, and doesn't correct it.\n\nThat's one possibility. I've been considering things like whether the OS \nis getting bogged down managing things like the elevator sorting for \noutstanding writes. If there was something about that process that gets \nreally inefficient proportionally to the size of the pending queue, that \nwould both match the kinds of symptoms people are reporting, and would go \nbetter just reducing the maximum size of the issue by lowering the pdflush \ntunables.\n\nAnyway, the point I was trying to make is that there sure seem to be \nmultiple problems mixed into that one bug report, and it's starting to \nlook just as unmanagably messy as the older bug that had to be abandoned. \nIt would have been nice if somebody kicked out all the diversions it \nwanted into to keep the focus a bit better. Anybody using a SSD device, \nUSB, or ext4 should have been punted to somewhere else for example. \nPlenty of examples that don't require any of those things.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 23 Mar 2009 00:04:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"iowait\" bug?" } ]
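For anyone chasing these stalls on a PostgreSQL box, one database-side data point worth collecting alongside the kernel-level tools is the background writer statistics (available from 8.3 on). This is only a sketch and says nothing about pdflush or the controller cache themselves; it just shows whether the database's own write path lines up with the I/O-wait spikes.

-- dirty-buffer writes done at checkpoints vs. by the background writer
-- vs. by ordinary backends; sample before and after a stall and compare
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_clean,
       maxwritten_clean,
       buffers_backend,
       buffers_alloc
FROM pg_stat_bgwriter;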
[ { "msg_contents": "Hello all.\n\nTo our surprise this morning, we found a query that used to return\nit's result in about 50 ~ 100ms now take about 7.000ms to run.\n\nAfter some investigation, we found out that the PostgreSQL server\n(8.1) changed the execution plan (I'm assuming because the number of\nrows increased).\n\nSince this query may be executed a few times per second, it caused\nsome problem :)\n\nThe query is the following:\nSELECT meta_id, meta_type, COUNT(M.post_id) as count\nFROM dc_meta M LEFT JOIN dc_post P ON M.post_id = P.post_id\nWHERE P.blog_id = 'b4c62627b3203e7780078cf2f6373ab5'\n AND M.blog_id = 'b4c62627b3203e7780078cf2f6373ab5'\n AND meta_type = 'tag'\n AND ((post_status = 1 AND post_password IS NULL ))\nGROUP BY meta_id,meta_type,P.blog_id\nORDER BY count DESC\nLIMIT 40\n\nThe dc_post table has the following fields:\n- post_id bigint NOT NULL,\n- blog_id character varying(32) NOT NULL,\n- post_password character varying(32),\n- post_status smallint NOT NULL DEFAULT 0,\n- and some other not used for this query ;)\n\nUsefull indexes:\n- dc_pk_post PRIMARY KEY(post_id)\n- dc_fk_post_blog FOREIGN KEY (blog_id)\n- dc_idx_blog_post_post_status btree (blog_id, post_status)\n- dc_idx_post_blog_id btree (blog_id)\n\ndc_meta is as follow:\n- meta_id character varying(255) NOT NULL,\n- meta_type character varying(64) NOT NULL,\n- post_id bigint NOT NULL,\n- blog_id character varying(32)\n\nWith indexes:\n- dc_pk_meta PRIMARY KEY(meta_id, meta_type, post_id)\n- dc_fk_meta_blog FOREIGN KEY (blog_id)\n- dc_fk_meta_post FOREIGN KEY (post_id)\n- dc_idx_meta_blog_id btree (blog_id)\n- dc_idx_meta_meta_type btree (meta_type)\n- dc_idx_meta_post_id btree (post_id)\n\n(Aren't the foreign keys and index redundant btw? :)\n\nI've attached the EXPLAIN ANALYZE that runs now, the one that runs on\nour test server (witch contains data from 10 days ago), and another\none on the production server with nested loop disabled.\n\nThe query plan for the test server is the same that the production\nserver was last week.\n\nOn production\ndc_meta contains approx 791756 rows\ndc_post contains approx 235524 rows\n\nOn test :\ndc_meta contains approx 641398 rows\ndc_post contains approx 211295 rows\n\nThe statistics are at the default value everywhere (10)\n\nThe 'b4c6' blog is one of your biggest blogs, which contains 9326 tags\nand 3178 posts on the production server (9156 / 3132 in test)\n\nDoes anyone have and idea why this happened and how we may be able to\nfix the problem ?\n\nDisabling nested loop falls back on the previous plan, but we can't\nreally disable them since the application used (dotclear) and it's db\nlayer is designed to work with mysql as well.\n\nFor the moment I've changed the query to remove the P.blog_id =\n'b4c6..' clause and it does the trick, but it's still slower than the\nprevious one.\n\nThank you for your time\n\n-- \nRomuald Brunet", "msg_date": "Mon, 23 Mar 2009 14:52:44 +0100", "msg_from": "Romuald Brunet <[email protected]>", "msg_from_op": true, "msg_subject": "Slower query after psql changed it's execution plan" }, { "msg_contents": "Romuald Brunet <[email protected]> wrote: \n> The statistics are at the default value everywhere (10)\n \nTry setting that to 100 and running ANALYZE.\n \nThe small size of the sample with the default of 10 happened to land\nyou with a bad estimate this time. 
(If the numbers it has were\nactually representative of the data, the plan it's using would be\nreasonable.)\n \n-Kevin\n", "msg_date": "Mon, 23 Mar 2009 09:20:39 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slower query after psql changed it's execution\n\tplan" }, { "msg_contents": "Romuald Brunet <[email protected]> writes:\n> After some investigation, we found out that the PostgreSQL server\n> (8.1) changed the execution plan (I'm assuming because the number of\n> rows increased).\n\nThe problem seems to be that the estimate for the number of rows fetched\nfrom dc_post changed drastically --- it was in the right ballpark, and\nnow it's off by a factor of 30. Why is that? Maybe you had had a\nspecial statistics target setting, and it got dropped?\n\n> The statistics are at the default value everywhere (10)\n\nAlmost certainly not enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Mar 2009 10:26:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slower query after psql changed it's execution plan " } ]
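A compact sketch of the fix being suggested in this thread, using the table and column names from the report; the per-column form avoids raising default_statistics_target for everything, and pg_stats shows what the planner ends up believing after the new ANALYZE.

ALTER TABLE dc_post ALTER COLUMN blog_id SET STATISTICS 100;
ALTER TABLE dc_meta ALTER COLUMN blog_id SET STATISTICS 100;
ANALYZE dc_post;
ANALYZE dc_meta;

-- sanity check: the distinct-value estimate and common-value frequencies
-- the planner will now work from
SELECT tablename, attname, n_distinct, most_common_freqs
FROM pg_stats
WHERE tablename IN ('dc_post', 'dc_meta')
  AND attname = 'blog_id';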
[ { "msg_contents": "The application log shows that 99652 rows are being inserted into \nrelation ts_stats_transet_user_daily. 5 threads are doing the inserts. \nThe schema is lengthy, but it has a synthetic primary key (ts_id int8 \nnot null) and the following constraints:\n\nalter table ts_stats_transet_user_daily add constraint FK8ED105ED9DADA24\n foreign key (ts_transet_id) references ts_transets;\nalter table ts_stats_transet_user_daily add constraint K8ED105ED545ADA6D\n foreign key (ts_user_id) references ts_users;\n\nThis relation currently has 456532 rows and is not partitioned.\n\nThe inserts have been going on now for almost 1 hour -- not exactly \nspeedy. Here's what I find on the postgres side:\n\ncemdb=> select current_query, procpid, xact_start from pg_stat_activity;\n current_query | \nprocpid | xact_start\n------------------------------------------------------------------+---------+-------------------------------\n <IDLE> in transaction | \n15147 | 2009-03-23 12:08:31.604433-07\n <IDLE> | \n15382 |\n select current_query, procpid, xact_start from pg_stat_activity; | \n15434 | 2009-03-23 12:10:38.913764-07\n <IDLE> | \n15152 |\n <IDLE> | \n15150 |\n <IDLE> | \n15156 |\n <IDLE> in transaction | \n15183 | 2009-03-23 12:09:50.864992-07\n <IDLE> in transaction | \n15186 | 2009-03-23 12:10:07.955838-07\n <IDLE> | \n15188 |\n <IDLE> | \n15192 |\n <IDLE> in transaction | \n15193 | 2009-03-23 12:10:07.955859-07\n <IDLE> in transaction | \n15194 | 2009-03-23 12:08:59.940101-07\n(12 rows)\n\ncemdb=> select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c \njoin pg_locks l on c.oid=l.relation order by l.mode;\n oid | relname | pid | mode | \ngranted\n----------+-----------------------------+-------+------------------+---------\n 26493289 | ts_users_pkey | 15183 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15186 | AccessShareLock | t\n 1259 | pg_class | 15434 | AccessShareLock | t\n 26493289 | ts_users_pkey | 15147 | AccessShareLock | t\n 10969 | pg_locks | 15434 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15193 | AccessShareLock | t\n 26493289 | ts_users_pkey | 15194 | AccessShareLock | t\n 2662 | pg_class_oid_index | 15434 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15194 | AccessShareLock | t\n 26493289 | ts_users_pkey | 15193 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15147 | AccessShareLock | t\n 26493289 | ts_users_pkey | 15186 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15183 | AccessShareLock | t\n 2663 | pg_class_relname_nsp_index | 15434 | AccessShareLock | t\n 26472890 | ts_stats_transet_user_daily | 15147 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15186 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15193 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15194 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15183 | RowExclusiveLock | t\n 26473252 | ts_users | 15194 | RowShareLock | t\n 26472508 | ts_transets | 15183 | RowShareLock | t\n 26472508 | ts_transets | 15193 | RowShareLock | t\n 26473252 | ts_users | 15193 | RowShareLock | t\n 26473252 | ts_users | 15183 | RowShareLock | t\n 26472508 | ts_transets | 15147 | RowShareLock | t\n 26473252 | ts_users | 15186 | RowShareLock | t\n 26472508 | ts_transets | 15186 | RowShareLock | t\n 26473252 | ts_users | 15147 | RowShareLock | t\n 26472508 | ts_transets | 15194 | RowShareLock | t\n(29 rows)\n\ncemdb=> select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c \njoin pg_locks l on 
c.oid=l.relation order by l.pid;\n oid | relname | pid | mode | \ngranted\n----------+-----------------------------+-------+------------------+---------\n 26493289 | ts_users_pkey | 15147 | AccessShareLock | t\n 26473252 | ts_users | 15147 | RowShareLock | t\n 26493267 | ts_transets_pkey | 15147 | AccessShareLock | t\n 26472508 | ts_transets | 15147 | RowShareLock | t\n 26472890 | ts_stats_transet_user_daily | 15147 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15150 | RowExclusiveLock | t\n 26493289 | ts_users_pkey | 15150 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15150 | AccessShareLock | t\n 26472508 | ts_transets | 15150 | RowShareLock | t\n 26473252 | ts_users | 15150 | RowShareLock | t\n 26472890 | ts_stats_transet_user_daily | 15186 | RowExclusiveLock | t\n 26473252 | ts_users | 15186 | RowShareLock | t\n 26493267 | ts_transets_pkey | 15186 | AccessShareLock | t\n 26472508 | ts_transets | 15186 | RowShareLock | t\n 26493289 | ts_users_pkey | 15186 | AccessShareLock | t\n 26472890 | ts_stats_transet_user_daily | 15193 | RowExclusiveLock | t\n 26493289 | ts_users_pkey | 15193 | AccessShareLock | t\n 26473252 | ts_users | 15193 | RowShareLock | t\n 26472508 | ts_transets | 15193 | RowShareLock | t\n 26493267 | ts_transets_pkey | 15193 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15194 | AccessShareLock | t\n 26472508 | ts_transets | 15194 | RowShareLock | t\n 26493289 | ts_users_pkey | 15194 | AccessShareLock | t\n 26472890 | ts_stats_transet_user_daily | 15194 | RowExclusiveLock | t\n 26473252 | ts_users | 15194 | RowShareLock | t\n 1259 | pg_class | 15434 | AccessShareLock | t\n 2663 | pg_class_relname_nsp_index | 15434 | AccessShareLock | t\n 2662 | pg_class_oid_index | 15434 | AccessShareLock | t\n 10969 | pg_locks | 15434 | AccessShareLock | t\n(29 rows)\n\ncemdb=> select c.oid,c.relname,l.pid,l.mode,l.granted from pg_class c \njoin pg_locks l on c.oid=l.relation order by c.relname;\n oid | relname | pid | mode | \ngranted\n----------+-----------------------------+-------+------------------+---------\n 1259 | pg_class | 15434 | AccessShareLock | t\n 2662 | pg_class_oid_index | 15434 | AccessShareLock | t\n 2663 | pg_class_relname_nsp_index | 15434 | AccessShareLock | t\n 10969 | pg_locks | 15434 | AccessShareLock | t\n 26472890 | ts_stats_transet_user_daily | 15150 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15193 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15194 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15192 | RowExclusiveLock | t\n 26472890 | ts_stats_transet_user_daily | 15186 | RowExclusiveLock | t\n 26472508 | ts_transets | 15193 | RowShareLock | t\n 26472508 | ts_transets | 15186 | RowShareLock | t\n 26472508 | ts_transets | 15194 | RowShareLock | t\n 26472508 | ts_transets | 15192 | RowShareLock | t\n 26472508 | ts_transets | 15150 | RowShareLock | t\n 26493267 | ts_transets_pkey | 15192 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15194 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15150 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15186 | AccessShareLock | t\n 26493267 | ts_transets_pkey | 15193 | AccessShareLock | t\n 26473252 | ts_users | 15150 | RowShareLock | t\n 26473252 | ts_users | 15194 | RowShareLock | t\n 26473252 | ts_users | 15186 | RowShareLock | t\n 26473252 | ts_users | 15193 | RowShareLock | t\n 26473252 | ts_users | 15192 | RowShareLock | t\n 26493289 | ts_users_pkey | 15186 | AccessShareLock | t\n 26493289 | 
ts_users_pkey | 15192 | AccessShareLock | t\n 26493289 | ts_users_pkey | 15193 | AccessShareLock | t\n 26493289 | ts_users_pkey | 15194 | AccessShareLock | t\n 26493289 | ts_users_pkey | 15150 | AccessShareLock | t\n(29 rows)\n\nAny ideas as to what is happening here would be appreciated.\n\nThanks,\nBrian\n", "msg_date": "Mon, 23 Mar 2009 12:34:03 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "multiple threads inserting into the same table" }, { "msg_contents": "On Mon, Mar 23, 2009 at 3:34 PM, Brian Cox <[email protected]> wrote:\n> The application log shows that 99652 rows are being inserted into relation\n> ts_stats_transet_user_daily. 5 threads are doing the inserts. The schema is\n> lengthy, but it has a synthetic primary key (ts_id int8 not null) and the\n> following constraints:\n\nHow many indexes are there on ts_stats_transset_user_daily?\n\nAre these rows being inserted in individual insert statements, or are\nthey batched in some fashion?\n\nWhat's the disk/cpu activity on your system look like?\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Mon, 23 Mar 2009 15:49:34 -0400", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple threads inserting into the same table" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> The application log shows that 99652 rows are being inserted into \n> relation ts_stats_transet_user_daily. 5 threads are doing the inserts. \n\npg_stat_activity says those five threads are doing nothing except\nsitting around with open transactions. You sure you don't have a bug on\nthe application side?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Mar 2009 15:58:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple threads inserting into the same table " } ]
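When pg_stat_activity shows sessions parked in '<IDLE> in transaction' the way it does above, a query along these lines (column names as of 8.3) shows how long each one has been sitting in its transaction, which is usually the quickest way to confirm the stall is on the client side rather than in the inserts themselves.

SELECT procpid,
       usename,
       now() - xact_start AS xact_age,
       now() - query_start AS since_last_query,
       current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;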
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> pg_stat_activity says those five threads are doing nothing except\n> sitting around with open transactions. You sure you don't have a bug on\n> the application side?\n> \n> regards, tom lane\n\nThis is a java app. A thread dump reveals that these 5 threads are all\nasleep on a socket read to postgres (see below). DbUtils.java:2265 is:\n\nsession.connection().createStatement() \n.executeUpdate(((DatabaseInsert) insertObject).getInsertStmt(session));\n\nThis generates and executes a single SQL insert. Since, as you point \nout, postgres seems to think that this transaction isn't doing anything,\nit's hard to figure out what the read is doing.\n\nBrian\n\n\n\"DatabasePool.Thread1\" prio=10 tid=0x27f04c00 nid=0x3b38 runnable \n[0x29e27000..0x29e281b0]\n java.lang.Thread.State: RUNNABLE\n\tat java.net.SocketInputStream.socketRead0(Native Method)\n\tat java.net.SocketInputStream.read(SocketInputStream.java:129)\n\tat \norg.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:135)\n\tat \norg.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:104)\n\tat \norg.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)\n\tat org.postgresql.core.PGStream.ReceiveChar(PGStream.java:259)\n\tat \norg.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1182)\n\tat \norg.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:194)\n\t- locked <0x8975c878> (a org.postgresql.core.v3.QueryExecutorImpl)\n\tat \norg.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)\n\tat \norg.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:336)\n\tat \norg.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:282)\n\tat \ncom.mchange.v2.c3p0.impl.NewProxyStatement.executeUpdate(NewProxyStatement.java:64)\n\tat \ncom.timestock.tess.util.DbUtils$DatabaseInsertTask.insertObject(DbUtils.java:2265)\n\tat \ncom.timestock.tess.util.DbUtils$DatabaseInsertTask.call(DbUtils.java:2200)\n\tat \ncom.timestock.tess.util.DbUtils$DatabaseInsertTask.call(DbUtils.java:2157)\n\tat java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:138)\n\tat \njava.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)\n\tat \njava.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)\n\tat java.lang.Thread.run(Thread.java:619)\n\n", "msg_date": "Mon, 23 Mar 2009 13:25:52 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multiple threads inserting into the same table" }, { "msg_contents": "On Mon, Mar 23, 2009 at 2:25 PM, Brian Cox <[email protected]> wrote:\n> Tom Lane [[email protected]] wrote:\n>>\n>> pg_stat_activity says those five threads are doing nothing except\n>> sitting around with open transactions.  You sure you don't have a bug on\n>> the application side?\n>>\n>>                        regards, tom lane\n>\n> This is a java app. A thread dump reveals that these 5 threads are all\n> asleep on a socket read to postgres (see below). DbUtils.java:2265 is:\n>\n> session.connection().createStatement() .executeUpdate(((DatabaseInsert)\n> insertObject).getInsertStmt(session));\n>\n> This generates and executes a single SQL insert. 
Since, as you point out,\n> postgres seems to think that this transaction isn't doing anything,\n> it's hard to figure out what the read is doing.\n\nMight you have a firewall that's killing the connections? What does\nnetstat -an on the client side say about these connections?\n", "msg_date": "Mon, 23 Mar 2009 14:38:52 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple threads inserting into the same table" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Mon, Mar 23, 2009 at 2:25 PM, Brian Cox <[email protected]> wrote:\n>> This generates and executes a single SQL insert. Since, as you point out,\n>> postgres seems to think that this transaction isn't doing anything,\n>> it's hard to figure out what the read is doing.\n\n> Might you have a firewall that's killing the connections? What does\n> netstat -an on the client side say about these connections?\n\nnetstat will probably say the connection is open on both sides ---\notherwise the sockets would have closed. It looks like both sides\nstill think the connection is open. A firewall timeout is still\na possibility, but you'd have had to have a fairly long idle time\nfor that to happen. Are any of the threads issuing commands that might\nhave run for very long intervals (tens of minutes)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Mar 2009 18:57:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple threads inserting into the same table " }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> This is a java app. A thread dump reveals that these 5 threads are all\n> asleep on a socket read to postgres (see below).\n\nIt seems clear that what you've got isn't a performance problem.\nMay I suggest taking it to pgsql-jdbc? The folk there are more likely\nto be able to help you figure out what's wrong than most of us.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Mar 2009 19:02:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple threads inserting into the same table " } ]
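Before moving the question to the driver list, one more server-side check that sometimes settles it is whether those backends are blocked waiting on a lock at all. A rough sketch, joining the activity and lock views on the backend pid and keeping only ungranted locks:

SELECT a.procpid,
       a.waiting,
       a.current_query,
       l.locktype,
       l.relation::regclass AS relation,
       l.mode
FROM pg_stat_activity a
LEFT JOIN pg_locks l ON l.pid = a.procpid AND NOT l.granted
ORDER BY a.procpid;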
[ { "msg_contents": "Scott Marlowe [[email protected]] wrote:\n> Might you have a firewall that's killing the connections? What does\n> netstat -an on the client side say about these connections?\nI don't think so: 1) app and postgres are on the same machine and 2)\nthis has been the set up for months and I don't think anyone has diddled \nwith the machine.\n\nnetstat -an | fgrep 5432 shows:\n\n[root@rdl64xeoserv01 log]# netstat -an | fgrep 5432\ntcp 0 0 0.0.0.0:5432 0.0.0.0:* \n LISTEN\ntcp 0 0 127.0.0.1:5432 127.0.0.1:35996 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:35999 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:36017 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:36019 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:36018 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:36005 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:36001 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:36013 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:36008 \n ESTABLISHED\ntcp 0 0 127.0.0.1:5432 127.0.0.1:36011 \n ESTABLISHED\ntcp 0 0 130.200.163.80:5432 130.200.164.26:50481 \n ESTABLISHED\ntcp 0 0 :::5432 :::* \n LISTEN\ntcp 0 0 ::ffff:127.0.0.1:36001 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:36005 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:36008 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:36011 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:36013 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:36017 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:36019 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:36018 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:35996 ::ffff:127.0.0.1:5432 \n ESTABLISHED\ntcp 0 0 ::ffff:127.0.0.1:35999 ::ffff:127.0.0.1:5432 \n ESTABLISHED\nunix 2 [ ACC ] STREAM LISTENING 2640437 /tmp/.s.PGSQL.5432\n\n\n\n", "msg_date": "Mon, 23 Mar 2009 13:52:06 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multiple threads inserting into the same table" } ]
[ { "msg_contents": "David Wilson [[email protected]] wrote:\n\n> How many indexes are there on ts_stats_transset_user_daily?\n10:\ncreate index ts_stats_transet_user_daily_DayIndex on \nts_stats_transet_user_daily (ts_day);\ncreate index ts_stats_transet_user_daily_HourIndex on \nts_stats_transet_user_daily (ts_hour);\ncreate index ts_stats_transet_user_daily_LastAggregatedRowIndex on \nts_stats_transet_user_daily (ts_last_aggregated_row);\ncreate index ts_stats_transet_user_daily_MonthIndex on \nts_stats_transet_user_daily (ts_month);\ncreate index ts_stats_transet_user_daily_StartTimeIndex on \nts_stats_transet_user_daily (ts_interval_start_time);\ncreate index ts_stats_transet_user_daily_TranSetIncarnationIdIndex on \nts_stats_transet_user_daily (ts_transet_incarnation_id);\ncreate index ts_stats_transet_user_daily_TranSetIndex on \nts_stats_transet_user_daily (ts_transet_id);\ncreate index ts_stats_transet_user_daily_UserIncarnationIdIndex on \nts_stats_transet_user_daily (ts_user_incarnation_id);\ncreate index ts_stats_transet_user_daily_UserIndex on \nts_stats_transet_user_daily (ts_user_id);\ncreate index ts_stats_transet_user_daily_WeekIndex on \nts_stats_transet_user_daily (ts_week);\ncreate index ts_stats_transet_user_daily_YearIndex on \nts_stats_transet_user_daily (ts_year);\n\n\n> Are these rows being inserted in individual insert statements, or are\n> they batched in some fashion?\nindividual insert stmts in a single transaction.\n\n> What's the disk/cpu activity on your system look like?\nThe app is using 100% CPU and I haven't figured out why, but the insert \nthreads are generally doing socket reads. But they can't be completely\nblocked as 1 thread is doing a read in one thread dump and is doing \nprocessing (preparing for another insert) in a later thread dump. So,\nit looks as if the inserts are taking a l-o-n-g time.\n\nHere's the output of vmstat and iostat. I've never looked at this info\nbefore, so I'm not sure what it says.\n\n[root@rdl64xeoserv01 log]# vmstat\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us \nsy id wa\n 1 0 0 9194740 58676 20980264 0 0 8 21 1 2 \n2 0 98 0\n[root@rdl64xeoserv01 log]# iostat\nLinux 2.6.9-42.ELsmp (rdl64xeoserv01.ca.com) 03/23/2009\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.71 0.00 0.09 0.02 98.18\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 10.46 126.23 343.38 304224943 827588034\nsda1 0.00 0.00 0.00 1428 58\nsda2 57.73 126.23 343.37 304221426 827576144\nsda3 0.00 0.00 0.00 1073 0\n\n", "msg_date": "Mon, 23 Mar 2009 14:05:03 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multiple threads inserting into the same table" } ]
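Given that every inserted row has to maintain all of those single-column indexes, it can also be worth checking which of them ever get used for reads. A sketch against the statistics views; the counters are cumulative since the statistics were last reset, so a zero only means "not used since then".

SELECT indexrelname,
       idx_scan,
       idx_tup_read,
       idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'ts_stats_transet_user_daily'
ORDER BY idx_scan, indexrelname;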
[ { "msg_contents": "\nexample:\n\nselect version();\n version\n--------------------------------------------------------------------------------------------\n PostgreSQL 8.3.6 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real (Debian 4.3.3-3) 4.3.3\n\nshow maintenance_work_mem ;\n maintenance_work_mem\n----------------------\n 128MB\n\ncreate table a (i1 int, i2 int, i3 int, i4 int, i5 int, i6 int);\n\ninsert into a select n, n, n, n, n, n from generate_series(1, 100000) as n;\nINSERT 0 100000\nВремя: 570,110 мс\n\ncreate index arr_gin on a using gin ( (array[i1, i2, i3, i4, i5, i6]) );\nCREATE INDEX\nВремя: 203068,314 мс\n\ntruncate a;\ndrop index arr_gin ;\n\ncreate index arr_gin on a using gin ( (array[i1, i2, i3, i4, i5, i6]) );\nCREATE INDEX\nВремя: 3,246 мс\n\ninsert into a select n, n, n, n, n, n from generate_series(1, 100000) as n;\nINSERT 0 100000\nВремя: 2405,481 мс\n\nselect pg_size_pretty(pg_total_relation_size('a')) as total,\n pg_size_pretty(pg_relation_size('a')) as table;\n total | table\n---------+---------\n 9792 kB | 5096 kB\n\n\n203068.314 ms VS 2405.481 ms, is this behaviour normal ?\n\nThanks !\n\n-- \nSergey Burladyan\n", "msg_date": "Tue, 24 Mar 2009 03:59:34 +0300", "msg_from": "Sergey Burladyan <[email protected]>", "msg_from_op": true, "msg_subject": "Why creating GIN table index is so slow than inserting data into\n\tempty table with the same index?" }, { "msg_contents": "Sergey Burladyan <[email protected]> writes:\n> show maintenance_work_mem ;\n> maintenance_work_mem\n> ----------------------\n> 128MB\n\n> create table a (i1 int, i2 int, i3 int, i4 int, i5 int, i6 int);\n> insert into a select n, n, n, n, n, n from generate_series(1, 100000) as n;\n> create index arr_gin on a using gin ( (array[i1, i2, i3, i4, i5, i6]) );\n\n[ takes forever ]\n\nSeems the reason this is so awful is that the incoming data is exactly\npresorted, meaning that the tree structure that ginInsertEntry() is\ntrying to build degenerates to a linear list (ie, every new element\nbecomes the right child of the prior one). So the processing becomes\nO(N^2) up till you reach maintenance_work_mem and flush the tree. With\na large setting for maintenance_work_mem it gets spectacularly slow.\n\nI think a reasonable solution for this might be to keep an eye on\nmaxdepth and force a flush if that gets too large (more than a few\nhundred, perhaps?). Something like this:\n\n /* If we've maxed out our available memory, dump everything to the index */\n+ /* Also dump if the tree seems to be getting too unbalanced */\n- if (buildstate->accum.allocatedMemory >= maintenance_work_mem * 1024L)\n+ if (buildstate->accum.allocatedMemory >= maintenance_work_mem * 1024L ||\n+ buildstate->accum.maxdepth > DEPTH_LIMIT)\n {\n\nThe new fast-insert code likely will need a similar defense.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Mar 2009 23:35:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why creating GIN table index is so slow than inserting data into\n\tempty table with the same index?" 
}, { "msg_contents": "Tom Lane wrote:\n> Sergey Burladyan <[email protected]> writes:\n>> show maintenance_work_mem ;\n>> maintenance_work_mem\n>> ----------------------\n>> 128MB\n> \n>> create table a (i1 int, i2 int, i3 int, i4 int, i5 int, i6 int);\n>> insert into a select n, n, n, n, n, n from generate_series(1, 100000) as n;\n>> create index arr_gin on a using gin ( (array[i1, i2, i3, i4, i5, i6]) );\n> \n> [ takes forever ]\n> \n> Seems the reason this is so awful is that the incoming data is exactly\n> presorted, meaning that the tree structure that ginInsertEntry() is\n> trying to build degenerates to a linear list (ie, every new element\n> becomes the right child of the prior one). So the processing becomes\n> O(N^2) up till you reach maintenance_work_mem and flush the tree. With\n> a large setting for maintenance_work_mem it gets spectacularly slow.\n\nYes, this is probably the same issue I bumped into a while ago:\n\nhttp://archives.postgresql.org/message-id/[email protected]\n\n> I think a reasonable solution for this might be to keep an eye on\n> maxdepth and force a flush if that gets too large (more than a few\n> hundred, perhaps?). Something like this:\n> \n> /* If we've maxed out our available memory, dump everything to the index */\n> + /* Also dump if the tree seems to be getting too unbalanced */\n> - if (buildstate->accum.allocatedMemory >= maintenance_work_mem * 1024L)\n> + if (buildstate->accum.allocatedMemory >= maintenance_work_mem * 1024L ||\n> + buildstate->accum.maxdepth > DEPTH_LIMIT)\n> {\n> \n> The new fast-insert code likely will need a similar defense.\n\nI fooled around with a balanced tree, which solved the problem but \nunfortunately made the unsorted case slower. Limiting the depth like \nthat should avoid that so it's worth trying.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 24 Mar 2009 13:07:34 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why creating GIN table index is so slow than inserting\n\tdata into empty table with the same index?" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Tom Lane wrote:\n>> I think a reasonable solution for this might be to keep an eye on\n>> maxdepth and force a flush if that gets too large (more than a few\n>> hundred, perhaps?). Something like this:\n\n> I fooled around with a balanced tree, which solved the problem but \n> unfortunately made the unsorted case slower.\n\nYeah, rebalancing the search tree would fix that, but every balanced\ntree algorithm I know about is complicated, slow, and needs extra\nmemory. It's really unclear that it'd be worth the trouble for a\ntransient data structure like this one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Mar 2009 09:45:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why creating GIN table index is so slow than inserting data into\n\tempty table with the same index?" } ]
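Until something like the depth-limit patch above is in place, one stopgap that follows from the analysis is to make the build flush its pending tree more often by shrinking maintenance_work_mem for that statement only; the original test already shows the other workaround of creating the index on the empty table and inserting afterwards. Whether the smaller setting wins overall depends on the data, so treat this as an assumption to verify rather than a general recommendation.

BEGIN;
-- applies to this transaction only; the quadratic buildup is bounded by how
-- much accumulates before each flush, so a smaller setting flushes sooner
SET LOCAL maintenance_work_mem = '16MB';
CREATE INDEX arr_gin ON a USING gin ((ARRAY[i1, i2, i3, i4, i5, i6]));
COMMIT;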
[ { "msg_contents": "I hate to nag, but could anybody help me with this? We have a few\nrelated queries that are causing noticeable service delays in our\nproduction system. I've tried a number of different things, but I'm\nrunning out of ideas and don't know what to do next.\n\nThanks,\nBryan\n\nOn Mon, Mar 23, 2009 at 2:03 PM, Bryan Murphy <[email protected]> wrote:\n> Hey Guys,\n>\n> I've got a query on our production system that isn't choosing a good\n> plan.  I can't see why it's choosing to do a sequential scan on the\n> ItemExperienceLog table.  That table is about 800mb and has about 2.5\n> million records.  This example query only returns 4 records.  I've\n> tried upping the statics for ItemExperienceLog.VistorId and\n> ItemExperienceLog.ItemId to 1000 (from out default of 100) with no\n> success.\n>\n> Our primary keys are guids stored as char(32) not null.  Our database\n> uses UTF-8 encoding and is currently version v8.3.5.\n>\n> The queries:\n>\n>\n>\n> --SET enable_seqscan = off\n> --SET enable_seqscan = on\n>\n> --ALTER TABLE ItemExperienceLog ALTER COLUMN VisitorId SET STATISTICS 1000\n> --ALTER TABLE ItemExperienceLog ALTER COLUMN ItemId SET STATISTICS 1000\n> --ANALYZE ItemExperienceLog\n>\n> SELECT MAX(l.Id) as Id, l.ItemId\n> FROM ItemExperienceLog l\n> INNER JOIN Items_Primary p ON p.Id = l.ItemId\n> INNER JOIN Feeds f ON f.Id = p.FeedId\n> INNER JOIN Visitors v ON v.Id = l.VisitorId\n> WHERE\n>        v.UserId = 'fbe2537f21d94f519605612c0bf7c2c5'\n>        AND LOWER(f.Slug) = LOWER('Wealth_Building_by_NightingaleConant')\n> GROUP BY l.ItemId\n>\n>\n>\n> Explain verbose output (set enable_seqscan = on):\n>\n>\n>\n> HashAggregate  (cost=124392.54..124392.65 rows=9 width=37) (actual\n> time=7765.650..7765.654 rows=4 loops=1)\n>  ->  Nested Loop  (cost=2417.68..124392.49 rows=9 width=37) (actual\n> time=1706.703..7765.611 rows=11 loops=1)\n>        ->  Nested Loop  (cost=2417.68..123868.75 rows=1807 width=70)\n> (actual time=36.374..7706.677 rows=3174 loops=1)\n>              ->  Hash Join  (cost=2417.68..119679.50 rows=1807\n> width=37) (actual time=36.319..7602.221 rows=3174 loops=1)\n>                    Hash Cond: (l.visitorid = v.id)\n>                    ->  Seq Scan on itemexperiencelog l\n> (cost=0.00..107563.09 rows=2581509 width=70) (actual\n> time=0.010..4191.251 rows=2579880 loops=1)\n>                    ->  Hash  (cost=2401.43..2401.43 rows=1300\n> width=33) (actual time=3.673..3.673 rows=897 loops=1)\n>                          ->  Bitmap Heap Scan on visitors v\n> (cost=22.48..2401.43 rows=1300 width=33) (actual time=0.448..2.523\n> rows=897 loops=1)\n>                                Recheck Cond: (userid =\n> 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n>                                ->  Bitmap Index Scan on\n> visitors_userid_index2  (cost=0.00..22.16 rows=1300 width=0) (actual\n> time=0.322..0.322 rows=897 loops=1)\n>                                      Index Cond: (userid =\n> 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n>              ->  Index Scan using items_primary_pkey on items_primary\n> p  (cost=0.00..2.31 rows=1 width=66) (actual time=0.027..0.029 rows=1\n> loops=3174)\n>                    Index Cond: (p.id = l.itemid)\n>        ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.28\n> rows=1 width=33) (actual time=0.016..0.016 rows=0 loops=3174)\n>              Index Cond: (f.id = p.feedid)\n>              Filter: (lower((f.slug)::text) =\n> 'wealth_building_by_nightingaleconant'::text)\n> Total runtime: 7765.767 ms\n>\n>\n>\n> 
Explain verbose output (set enable_seqscan = off):\n>\n>\n>\n> HashAggregate  (cost=185274.71..185274.82 rows=9 width=37) (actual\n> time=185.024..185.028 rows=4 loops=1)\n>  ->  Nested Loop  (cost=0.00..185274.67 rows=9 width=37) (actual\n> time=0.252..184.989 rows=11 loops=1)\n>        ->  Nested Loop  (cost=0.00..184751.21 rows=1806 width=70)\n> (actual time=0.223..134.943 rows=3174 loops=1)\n>              ->  Nested Loop  (cost=0.00..180564.28 rows=1806\n> width=37) (actual time=0.192..60.214 rows=3174 loops=1)\n>                    ->  Index Scan using visitors_userid_index2 on\n> visitors v  (cost=0.00..2580.97 rows=1300 width=33) (actual\n> time=0.052..2.342 rows=897 loops=1)\n>                          Index Cond: (userid =\n> 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n>                    ->  Index Scan using\n> itemexperiencelog__index__visitorid on itemexperiencelog l\n> (cost=0.00..134.04 rows=230 width=70) (actual time=0.013..0.040 rows=4\n> loops=897)\n>                          Index Cond: (l.visitorid = v.id)\n>              ->  Index Scan using items_primary_pkey on items_primary\n> p  (cost=0.00..2.31 rows=1 width=66) (actual time=0.019..0.020 rows=1\n> loops=3174)\n>                    Index Cond: (p.id = l.itemid)\n>        ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.28\n> rows=1 width=33) (actual time=0.014..0.014 rows=0 loops=3174)\n>              Index Cond: (f.id = p.feedid)\n>              Filter: (lower((f.slug)::text) =\n> 'wealth_building_by_nightingaleconant'::text)\n> Total runtime: 185.117 ms\n>\n>\n>\n> The relevent portions of postgresql.conf:\n>\n>\n> max_connections = 100\n> shared_buffers = 2GB\n> temp_buffers = 32MB\n> work_mem = 64MB\n> maintenance_work_mem = 256MB\n> max_stack_depth = 8MB\n> wal_buffers = 1MB\n> checkpoint_segments = 32\n> random_page_cost = 2.0\n> effective_cache_size = 12GB\n> default_statistics_target = 100\n>\n>\n>\n> The tables and the indexes that matter:\n>\n>\n>\n> CREATE TABLE itemexperiencelog\n> (\n>  id integer NOT NULL DEFAULT nextval('itemexperiencelog__sequence'::regclass),\n>  visitorid character(32) NOT NULL,\n>  itemid character(32) NOT NULL,\n>  created timestamp without time zone NOT NULL,\n>  modified timestamp without time zone NOT NULL,\n>  \"position\" integer NOT NULL DEFAULT 0,\n>  itemlength integer NOT NULL DEFAULT 0,\n>  usercomplete boolean NOT NULL DEFAULT false,\n>  contentcomplete boolean NOT NULL DEFAULT false,\n>  source character varying(32) NOT NULL,\n>  sessionid character(32) NOT NULL,\n>  authenticatedatcreation boolean NOT NULL DEFAULT false,\n>  CONSTRAINT itemexperiencelog_pkey PRIMARY KEY (id),\n>  CONSTRAINT itemexperiencelog_itemid_fkey FOREIGN KEY (itemid)\n>      REFERENCES items_primary (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n>\n> CREATE INDEX itemexperiencelog__index__itemid\n>  ON itemexperiencelog\n>  USING btree\n>  (itemid);\n>\n> CREATE INDEX itemexperiencelog__index__visitorid\n>  ON itemexperiencelog\n>  USING btree\n>  (visitorid);\n>\n>\n>\n> CREATE TABLE items_primary\n> (\n>  id character(32) NOT NULL,\n>  feedid character(32),\n>  slug character varying(255),\n>  pubdate timestamp without time zone,\n>  isvisible boolean,\n>  deleted integer,\n>  itemgroupid integer,\n>  lockedflags integer NOT NULL,\n>  CONSTRAINT items_primary_pkey PRIMARY KEY (id),\n>  CONSTRAINT items_itemgroupid_fkey FOREIGN KEY (itemgroupid)\n>      REFERENCES itemgroups (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION,\n>  CONSTRAINT 
items_primary_feedid_fkey FOREIGN KEY (feedid)\n>      REFERENCES feeds (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (OIDS=FALSE);\n>\n> CREATE INDEX items_primary_feedid_index\n>  ON items_primary\n>  USING btree\n>  (feedid);\n>\n>\n>\n> CREATE TABLE feeds\n> (\n>  id character(32) NOT NULL,\n>  rssupdateinterval integer,\n>  lastrssupdate timestamp without time zone,\n>  nextrssupdate timestamp without time zone,\n>  url text NOT NULL,\n>  title text NOT NULL,\n>  subtitle text,\n>  link text,\n>  slug character varying(2048) NOT NULL,\n>  description text,\n>  lang character varying(255),\n>  copyright text,\n>  pubdate timestamp without time zone,\n>  lastbuilddate character varying(255),\n>  docs text,\n>  generator character varying(255),\n>  managingeditor character varying(255),\n>  webmaster character varying(255),\n>  status integer,\n>  ttl character varying(255),\n>  image_title text,\n>  image_url text,\n>  image_link text,\n>  image_description text,\n>  image_width integer,\n>  image_height integer,\n>  rating character varying(255),\n>  skiphours character varying(255),\n>  skipdays character varying(255),\n>  genretagid character(32),\n>  sourceuserid character(32),\n>  created timestamp without time zone NOT NULL,\n>  deleted integer NOT NULL,\n>  cloud_domain character varying(255),\n>  cloud_port character varying(255),\n>  cloud_path character varying(255),\n>  cloud_registerprocedure character varying(255),\n>  cloud_protocol character varying(255),\n>  itunesauthor character varying(255),\n>  itunesblock character varying(255),\n>  itunescategories text,\n>  itunesimage character varying(255),\n>  itunesexplicit character varying(255),\n>  ituneskeywords text,\n>  itunesnewfeedurl character varying(255),\n>  itunesowner_name character varying(255),\n>  itunesowner_email character varying(255),\n>  itunessubtitle text,\n>  itunessummary text,\n>  yahooimage text,\n>  modified timestamp without time zone NOT NULL,\n>  mediatype integer,\n>  isvisible boolean NOT NULL DEFAULT false,\n>  mediaplayertype integer NOT NULL,\n>  episodebrowsecount integer,\n>  comments text NOT NULL DEFAULT ''::text,\n>  lockedflags integer NOT NULL DEFAULT 0,\n>  sequenceid integer NOT NULL DEFAULT nextval('feeds_sequence'::regclass),\n>  feedgroupid character(32),\n>  bannerurl text,\n>  categories text NOT NULL,\n>  publisher text,\n>  episodefetchstrategy integer NOT NULL,\n>  averageuserrating real,\n>  userratingscount integer,\n>  lastupdate timestamp without time zone NOT NULL,\n>  userratingstotal double precision NOT NULL,\n>  CONSTRAINT feeds_pkey PRIMARY KEY (id),\n>  CONSTRAINT feeds_feedgroupid_fkey FOREIGN KEY (feedgroupid)\n>      REFERENCES feedgroups (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION,\n>  CONSTRAINT fk5cb07d0e1080d15d FOREIGN KEY (sourceuserid)\n>      REFERENCES users (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION,\n>  CONSTRAINT fk5cb07d0eb113835f FOREIGN KEY (genretagid)\n>      REFERENCES tags (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION,\n>  CONSTRAINT feeds_slug_key UNIQUE (slug)\n> )\n> WITH (OIDS=FALSE);\n>\n> CREATE UNIQUE INDEX feeds_slug_unique\n>  ON feeds\n>  USING btree\n>  (lower(slug::text));\n>\n>\n>\n> CREATE TABLE visitors\n> (\n>  id character(32) NOT NULL,\n>  visitorid character(32) NOT NULL,\n>  userid character(32),\n>  sessionid text NOT NULL,\n>  created timestamp without time zone NOT NULL,\n>  modified timestamp without time zone NOT 
NULL,\n>  deleted integer NOT NULL,\n>  sequenceid integer DEFAULT nextval('visitors_sequence'::regclass),\n>  CONSTRAINT visitors_pkey PRIMARY KEY (id)\n> )\n> WITH (OIDS=FALSE);\n>\n> CREATE INDEX visitors_userid_index2\n>  ON visitors\n>  USING btree\n>  (userid);\n>\n> CREATE INDEX visitors_visitorid_index2\n>  ON visitors\n>  USING btree\n>  (visitorid);\n>\n", "msg_date": "Tue, 24 Mar 2009 17:21:28 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "I hate to nag, but could anybody help me with this? We have a few\nrelated queries that are causing noticeable service delays in our\nproduction system. I've tried a number of different things, but I'm\nrunning out of ideas and don't know what to do next.\n\nThanks,\nBryan\n\nOn Mon, Mar 23, 2009 at 2:03 PM, Bryan Murphy <[email protected]> wrote:\n> Hey Guys,\n>\n> I've got a query on our production system that isn't choosing a good\n> plan.  I can't see why it's choosing to do a sequential scan on the\n> ItemExperienceLog table.  That table is about 800mb and has about 2.5\n> million records.  This example query only returns 4 records.  I've\n> tried upping the statics for ItemExperienceLog.VistorId and\n> ItemExperienceLog.ItemId to 1000 (from out default of 100) with no\n> success.\n>\n> Our primary keys are guids stored as char(32) not null.  Our database\n> uses UTF-8 encoding and is currently version v8.3.5.\n>\n> The queries:\n>\n>\n>\n> --SET enable_seqscan = off\n> --SET enable_seqscan = on\n>\n> --ALTER TABLE ItemExperienceLog ALTER COLUMN VisitorId SET STATISTICS 1000\n> --ALTER TABLE ItemExperienceLog ALTER COLUMN ItemId SET STATISTICS 1000\n> --ANALYZE ItemExperienceLog\n>\n> SELECT MAX(l.Id) as Id, l.ItemId\n> FROM ItemExperienceLog l\n> INNER JOIN Items_Primary p ON p.Id = l.ItemId\n> INNER JOIN Feeds f ON f.Id = p.FeedId\n> INNER JOIN Visitors v ON v.Id = l.VisitorId\n> WHERE\n>        v.UserId = 'fbe2537f21d94f519605612c0bf7c2c5'\n>        AND LOWER(f.Slug) = LOWER('Wealth_Building_by_NightingaleConant')\n> GROUP BY l.ItemId\n>\n>\n>\n> Explain verbose output (set enable_seqscan = on):\n>\n>\n>\n> HashAggregate  (cost=124392.54..124392.65 rows=9 width=37) (actual\n> time=7765.650..7765.654 rows=4 loops=1)\n>  ->  Nested Loop  (cost=2417.68..124392.49 rows=9 width=37) (actual\n> time=1706.703..7765.611 rows=11 loops=1)\n>        ->  Nested Loop  (cost=2417.68..123868.75 rows=1807 width=70)\n> (actual time=36.374..7706.677 rows=3174 loops=1)\n>              ->  Hash Join  (cost=2417.68..119679.50 rows=1807\n> width=37) (actual time=36.319..7602.221 rows=3174 loops=1)\n>                    Hash Cond: (l.visitorid = v.id)\n>                    ->  Seq Scan on itemexperiencelog l\n> (cost=0.00..107563.09 rows=2581509 width=70) (actual\n> time=0.010..4191.251 rows=2579880 loops=1)\n>                    ->  Hash  (cost=2401.43..2401.43 rows=1300\n> width=33) (actual time=3.673..3.673 rows=897 loops=1)\n>                          ->  Bitmap Heap Scan on visitors v\n> (cost=22.48..2401.43 rows=1300 width=33) (actual time=0.448..2.523\n> rows=897 loops=1)\n>                                Recheck Cond: (userid =\n> 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n>                                ->  Bitmap Index Scan on\n> visitors_userid_index2  (cost=0.00..22.16 rows=1300 width=0) (actual\n> time=0.322..0.322 rows=897 loops=1)\n>                                      Index Cond: (userid =\n> 
'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n>              ->  Index Scan using items_primary_pkey on items_primary\n> p  (cost=0.00..2.31 rows=1 width=66) (actual time=0.027..0.029 rows=1\n> loops=3174)\n>                    Index Cond: (p.id = l.itemid)\n>        ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.28\n> rows=1 width=33) (actual time=0.016..0.016 rows=0 loops=3174)\n>              Index Cond: (f.id = p.feedid)\n>              Filter: (lower((f.slug)::text) =\n> 'wealth_building_by_nightingaleconant'::text)\n> Total runtime: 7765.767 ms\n>\n>\n>\n> Explain verbose output (set enable_seqscan = off):\n>\n>\n>\n> HashAggregate  (cost=185274.71..185274.82 rows=9 width=37) (actual\n> time=185.024..185.028 rows=4 loops=1)\n>  ->  Nested Loop  (cost=0.00..185274.67 rows=9 width=37) (actual\n> time=0.252..184.989 rows=11 loops=1)\n>        ->  Nested Loop  (cost=0.00..184751.21 rows=1806 width=70)\n> (actual time=0.223..134.943 rows=3174 loops=1)\n>              ->  Nested Loop  (cost=0.00..180564.28 rows=1806\n> width=37) (actual time=0.192..60.214 rows=3174 loops=1)\n>                    ->  Index Scan using visitors_userid_index2 on\n> visitors v  (cost=0.00..2580.97 rows=1300 width=33) (actual\n> time=0.052..2.342 rows=897 loops=1)\n>                          Index Cond: (userid =\n> 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n>                    ->  Index Scan using\n> itemexperiencelog__index__visitorid on itemexperiencelog l\n> (cost=0.00..134.04 rows=230 width=70) (actual time=0.013..0.040 rows=4\n> loops=897)\n>                          Index Cond: (l.visitorid = v.id)\n>              ->  Index Scan using items_primary_pkey on items_primary\n> p  (cost=0.00..2.31 rows=1 width=66) (actual time=0.019..0.020 rows=1\n> loops=3174)\n>                    Index Cond: (p.id = l.itemid)\n>        ->  Index Scan using feeds_pkey on feeds f  (cost=0.00..0.28\n> rows=1 width=33) (actual time=0.014..0.014 rows=0 loops=3174)\n>              Index Cond: (f.id = p.feedid)\n>              Filter: (lower((f.slug)::text) =\n> 'wealth_building_by_nightingaleconant'::text)\n> Total runtime: 185.117 ms\n>\n>\n>\n> The relevent portions of postgresql.conf:\n>\n>\n> max_connections = 100\n> shared_buffers = 2GB\n> temp_buffers = 32MB\n> work_mem = 64MB\n> maintenance_work_mem = 256MB\n> max_stack_depth = 8MB\n> wal_buffers = 1MB\n> checkpoint_segments = 32\n> random_page_cost = 2.0\n> effective_cache_size = 12GB\n> default_statistics_target = 100\n>\n>\n>\n> The tables and the indexes that matter:\n>\n>\n>\n> CREATE TABLE itemexperiencelog\n> (\n>  id integer NOT NULL DEFAULT nextval('itemexperiencelog__sequence'::regclass),\n>  visitorid character(32) NOT NULL,\n>  itemid character(32) NOT NULL,\n>  created timestamp without time zone NOT NULL,\n>  modified timestamp without time zone NOT NULL,\n>  \"position\" integer NOT NULL DEFAULT 0,\n>  itemlength integer NOT NULL DEFAULT 0,\n>  usercomplete boolean NOT NULL DEFAULT false,\n>  contentcomplete boolean NOT NULL DEFAULT false,\n>  source character varying(32) NOT NULL,\n>  sessionid character(32) NOT NULL,\n>  authenticatedatcreation boolean NOT NULL DEFAULT false,\n>  CONSTRAINT itemexperiencelog_pkey PRIMARY KEY (id),\n>  CONSTRAINT itemexperiencelog_itemid_fkey FOREIGN KEY (itemid)\n>      REFERENCES items_primary (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n>\n> CREATE INDEX itemexperiencelog__index__itemid\n>  ON itemexperiencelog\n>  USING btree\n>  (itemid);\n>\n> CREATE INDEX 
itemexperiencelog__index__visitorid\n>  ON itemexperiencelog\n>  USING btree\n>  (visitorid);\n>\n>\n>\n> CREATE TABLE items_primary\n> (\n>  id character(32) NOT NULL,\n>  feedid character(32),\n>  slug character varying(255),\n>  pubdate timestamp without time zone,\n>  isvisible boolean,\n>  deleted integer,\n>  itemgroupid integer,\n>  lockedflags integer NOT NULL,\n>  CONSTRAINT items_primary_pkey PRIMARY KEY (id),\n>  CONSTRAINT items_itemgroupid_fkey FOREIGN KEY (itemgroupid)\n>      REFERENCES itemgroups (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION,\n>  CONSTRAINT items_primary_feedid_fkey FOREIGN KEY (feedid)\n>      REFERENCES feeds (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (OIDS=FALSE);\n>\n> CREATE INDEX items_primary_feedid_index\n>  ON items_primary\n>  USING btree\n>  (feedid);\n>\n>\n>\n> CREATE TABLE feeds\n> (\n>  id character(32) NOT NULL,\n>  rssupdateinterval integer,\n>  lastrssupdate timestamp without time zone,\n>  nextrssupdate timestamp without time zone,\n>  url text NOT NULL,\n>  title text NOT NULL,\n>  subtitle text,\n>  link text,\n>  slug character varying(2048) NOT NULL,\n>  description text,\n>  lang character varying(255),\n>  copyright text,\n>  pubdate timestamp without time zone,\n>  lastbuilddate character varying(255),\n>  docs text,\n>  generator character varying(255),\n>  managingeditor character varying(255),\n>  webmaster character varying(255),\n>  status integer,\n>  ttl character varying(255),\n>  image_title text,\n>  image_url text,\n>  image_link text,\n>  image_description text,\n>  image_width integer,\n>  image_height integer,\n>  rating character varying(255),\n>  skiphours character varying(255),\n>  skipdays character varying(255),\n>  genretagid character(32),\n>  sourceuserid character(32),\n>  created timestamp without time zone NOT NULL,\n>  deleted integer NOT NULL,\n>  cloud_domain character varying(255),\n>  cloud_port character varying(255),\n>  cloud_path character varying(255),\n>  cloud_registerprocedure character varying(255),\n>  cloud_protocol character varying(255),\n>  itunesauthor character varying(255),\n>  itunesblock character varying(255),\n>  itunescategories text,\n>  itunesimage character varying(255),\n>  itunesexplicit character varying(255),\n>  ituneskeywords text,\n>  itunesnewfeedurl character varying(255),\n>  itunesowner_name character varying(255),\n>  itunesowner_email character varying(255),\n>  itunessubtitle text,\n>  itunessummary text,\n>  yahooimage text,\n>  modified timestamp without time zone NOT NULL,\n>  mediatype integer,\n>  isvisible boolean NOT NULL DEFAULT false,\n>  mediaplayertype integer NOT NULL,\n>  episodebrowsecount integer,\n>  comments text NOT NULL DEFAULT ''::text,\n>  lockedflags integer NOT NULL DEFAULT 0,\n>  sequenceid integer NOT NULL DEFAULT nextval('feeds_sequence'::regclass),\n>  feedgroupid character(32),\n>  bannerurl text,\n>  categories text NOT NULL,\n>  publisher text,\n>  episodefetchstrategy integer NOT NULL,\n>  averageuserrating real,\n>  userratingscount integer,\n>  lastupdate timestamp without time zone NOT NULL,\n>  userratingstotal double precision NOT NULL,\n>  CONSTRAINT feeds_pkey PRIMARY KEY (id),\n>  CONSTRAINT feeds_feedgroupid_fkey FOREIGN KEY (feedgroupid)\n>      REFERENCES feedgroups (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION,\n>  CONSTRAINT fk5cb07d0e1080d15d FOREIGN KEY (sourceuserid)\n>      REFERENCES users (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON 
DELETE NO ACTION,\n>  CONSTRAINT fk5cb07d0eb113835f FOREIGN KEY (genretagid)\n>      REFERENCES tags (id) MATCH SIMPLE\n>      ON UPDATE NO ACTION ON DELETE NO ACTION,\n>  CONSTRAINT feeds_slug_key UNIQUE (slug)\n> )\n> WITH (OIDS=FALSE);\n>\n> CREATE UNIQUE INDEX feeds_slug_unique\n>  ON feeds\n>  USING btree\n>  (lower(slug::text));\n>\n>\n>\n> CREATE TABLE visitors\n> (\n>  id character(32) NOT NULL,\n>  visitorid character(32) NOT NULL,\n>  userid character(32),\n>  sessionid text NOT NULL,\n>  created timestamp without time zone NOT NULL,\n>  modified timestamp without time zone NOT NULL,\n>  deleted integer NOT NULL,\n>  sequenceid integer DEFAULT nextval('visitors_sequence'::regclass),\n>  CONSTRAINT visitors_pkey PRIMARY KEY (id)\n> )\n> WITH (OIDS=FALSE);\n>\n> CREATE INDEX visitors_userid_index2\n>  ON visitors\n>  USING btree\n>  (userid);\n>\n> CREATE INDEX visitors_visitorid_index2\n>  ON visitors\n>  USING btree\n>  (visitorid);\n>\n", "msg_date": "Tue, 24 Mar 2009 17:21:52 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "Brian,\n\n> I hate to nag, but could anybody help me with this? We have a few\n> related queries that are causing noticeable service delays in our\n> production system. I've tried a number of different things, but I'm\n> running out of ideas and don't know what to do next.\n\nFor some reason, your first post didn't make it to the list, which is \nwhy nobody responded.\n\n\n>> I've got a query on our production system that isn't choosing a good\n>> plan. I can't see why it's choosing to do a sequential scan on the\n>> ItemExperienceLog table. That table is about 800mb and has about 2.5\n>> million records. This example query only returns 4 records. I've\n>> tried upping the statics for ItemExperienceLog.VistorId and\n>> ItemExperienceLog.ItemId to 1000 (from out default of 100) with no\n>> success.\n\nYes, that is kind of inexplicable. For some reason, it's assigning a \nvery high cost to the nestloops, which is why it wants to avoid them \nwith a seq scan. 
Can you try lowering cpu_index_cost to 0.001 and see \nhow that affects the plan?\n\n--Josh\n", "msg_date": "Tue, 24 Mar 2009 19:30:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "There is one thing I don`t understand:\n\n -> Nested Loop (cost=0.00..180564.28 rows=1806\nwidth=37) (actual time=0.192..60.214 rows=3174 loops=1)\n -> Index Scan using visitors_userid_index2 on\nvisitors v (cost=0.00..2580.97 rows=1300 width=33) (actual\ntime=0.052..2.342 rows=897 loops=1)\n Index Cond: (userid =\n'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n -> Index Scan using\nitemexperiencelog__index__visitorid on itemexperiencelog l\n(cost=0.00..134.04 rows=230 width=70) (actual time=0.013..0.040 rows=4\nloops=897)\n Index Cond: (l.visitorid = v.id)\n\nIf it expects 1300 visitors with the userid, and for each of them to\nhave 230 entries in itemexperiencelog, how can it come up with 1806\nreturned rows (and be about right!)?\n\nGreetings\nMarcin\n", "msg_date": "Wed, 25 Mar 2009 04:04:08 +0100", "msg_from": "marcin mank <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "On Tue, Mar 24, 2009 at 9:30 PM, Josh Berkus <[email protected]> wrote:\n> For some reason, your first post didn't make it to the list, which is why\n> nobody responded.\n\nWeird... I've been having problems with gmail and google reader all week.\n\n>>> I've got a query on our production system that isn't choosing a good\n>>> plan.  I can't see why it's choosing to do a sequential scan on the\n>>> ItemExperienceLog table.  That table is about 800mb and has about 2.5\n>>> million records.  This example query only returns 4 records.  I've\n>>> tried upping the statics for ItemExperienceLog.VistorId and\n>>> ItemExperienceLog.ItemId to 1000 (from out default of 100) with no\n>>> success.\n>\n> Yes, that is kind of inexplicable.  For some reason, it's assigning a very\n> high cost to the nestloops, which is why it wants to avoid them with a seq\n> scan.  Can you try lowering cpu_index_cost to 0.001 and see how that affects\n> the plan?\n\nI'm assuming you meant cpu_index_tuple_cost. 
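For anyone else following along, the change itself is just a one-liner -- a rough sketch, either in postgresql.conf followed by a reload, or per-session while experimenting:\n\n# postgresql.conf, then pg_ctl reload (or SELECT pg_reload_conf();)\ncpu_index_tuple_cost = 0.001\n\n-- or only for the current session:\nSET cpu_index_tuple_cost = 0.001;\n\n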
I changed that to 0.001\nas you suggested, forced postgres to reload it's configuration and I'm\nstill getting the same execution plan.\n\nLooking through our configuration one more time, I see that at some\npoint I set random_page_cost to 2.0, but I don't see any other changes\nto query planner settings from their default values.\n\nBryan\n", "msg_date": "Tue, 24 Mar 2009 22:43:37 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "On Tue, Mar 24, 2009 at 10:04 PM, marcin mank <[email protected]> wrote:\n> There is one thing I don`t understand:\n>\n>              ->  Nested Loop  (cost=0.00..180564.28 rows=1806\n> width=37) (actual time=0.192..60.214 rows=3174 loops=1)\n>                    ->  Index Scan using visitors_userid_index2 on\n> visitors v  (cost=0.00..2580.97 rows=1300 width=33) (actual\n> time=0.052..2.342 rows=897 loops=1)\n>                          Index Cond: (userid =\n> 'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n>                    ->  Index Scan using\n> itemexperiencelog__index__visitorid on itemexperiencelog l\n> (cost=0.00..134.04 rows=230 width=70) (actual time=0.013..0.040 rows=4\n> loops=897)\n>                          Index Cond: (l.visitorid = v.id)\n>\n> If it expects 1300 visitors with the userid, and for each of them to\n> have 230 entries in itemexperiencelog, how can it come up with 1806\n> returned rows (and be about right!)?\n\nI'm not sure I follow what you're saying.\n\nOne thing to keep in mind, due to a lapse in our judgement at the\ntime, this itemexperiencelog table serves as both a current state\ntable, and a log table. Therefore, it potentially has multiple\nredundant entries, but we typically only look at the most recent entry\nto figure out the state of the current item.\n\nWe're in the process of re-factoring this now, as well as\ndenormalizing some of the tables to eliminate unnecessary joins, but I\nkeep running into these problems and need to understand what is going\non so that I know we're fixing the correct things.\n\nThanks,\nBryan\n", "msg_date": "Tue, 24 Mar 2009 22:47:00 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy <[email protected]> wrote:\n> Looking through our configuration one more time, I see that at some\n> point I set random_page_cost to 2.0, but I don't see any other changes\n> to query planner settings from their default values.\n\nYou don't by any chance have enable_<something> set to \"off\", do you?\n\n...Robert\n", "msg_date": "Wed, 25 Mar 2009 09:40:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "On Wed, Mar 25, 2009 at 8:40 AM, Robert Haas <[email protected]> wrote:\n> On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy <[email protected]> wrote:\n>> Looking through our configuration one more time, I see that at some\n>> point I set random_page_cost to 2.0, but I don't see any other changes\n>> to query planner settings from their default values.\n>\n> You don't by any chance have enable_<something> set to \"off\", do you?\n>\n> ...Robert\n\nAlas, I wish it were that simple. Here's the whole query tuning\nsection in it's entirety. 
All sections with the comment #BPM just\nbefore them are changes I made from the default value.\n\n(sorry for any duplicates, I'm still having dropouts with gmail)\n\nThanks,\nBryan\n\n#------------------------------------------------------------------------------\n# QUERY TUNING\n#------------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n\n#BPM\n#random_page_cost = 4.0 # same scale as above\nrandom_page_cost = 2.0\n\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\n\n#BPM\n#effective_cache_size = 128MB\neffective_cache_size = 12GB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#BPM\n#default_statistics_target = 10 # range 1-1000\ndefault_statistics_target = 100\n\n#constraint_exclusion = off\n\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOIN clauses\n", "msg_date": "Wed, 25 Mar 2009 12:53:35 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "Bryan,\n\n> One thing to keep in mind, due to a lapse in our judgement at the\n> time, this itemexperiencelog table serves as both a current state\n> table, and a log table. Therefore, it potentially has multiple\n> redundant entries, but we typically only look at the most recent entry\n> to figure out the state of the current item.\n\nOh, I see. It thinks that it'll need to pull 260,000 redundant rows in \norder to get 1800 unique ones. Only it's wrong; you're only pulling \nabout 4000.\n\nTry increasing some stats still further: itemexperiencelog.visitorid and \nvisitors.user_id both to 500.\n\n--Josh\n\n", "msg_date": "Wed, 25 Mar 2009 14:55:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "On Wed, Mar 25, 2009 at 4:55 PM, Josh Berkus <[email protected]> wrote:\n> Oh, I see.  It thinks that it'll need to pull 260,000 redundant rows in\n> order to get 1800 unique ones.  Only it's wrong; you're only pulling about\n> 4000.\n>\n> Try increasing some stats still further: itemexperiencelog.visitorid and\n> visitors.user_id both to 500.\n\nI tried that already, but I decided to try again in case I messed up\nsomething last time. Here's what I ran. As you can see, it still\nchooses to do a sequential scan. 
Am I changing the stats for those\ncolumns correctly?\n\nThanks,\nBryan\n\n\nFirst, the query:\n\n\nSELECT MAX(l.Id) as Id, l.ItemId\nFROM ItemExperienceLog l\nINNER JOIN Items_Primary p ON p.Id = l.ItemId\nINNER JOIN Feeds f ON f.Id = p.FeedId\nINNER JOIN Visitors v ON v.Id = l.VisitorId\nWHERE\n\tv.UserId = 'fbe2537f21d94f519605612c0bf7c2c5'\n\tAND LOWER(f.Slug) = LOWER('Wealth_Building_by_NightingaleConant')\nGROUP BY l.ItemId\n\n\nThe query plan:\n\n\nHashAggregate (cost=130291.23..130291.35 rows=9 width=37) (actual\ntime=8385.428..8385.433 rows=4 loops=1)\n -> Nested Loop (cost=2649.02..130291.19 rows=9 width=37) (actual\ntime=3707.336..8385.388 rows=11 loops=1)\n -> Nested Loop (cost=2649.02..129744.01 rows=1888 width=70)\n(actual time=8.881..8322.029 rows=3210 loops=1)\n -> Hash Join (cost=2649.02..125273.81 rows=1888\nwidth=37) (actual time=8.836..8196.469 rows=3210 loops=1)\n Hash Cond: (l.visitorid = v.id)\n -> Seq Scan on itemexperiencelog l\n(cost=0.00..112491.03 rows=2697303 width=70) (actual\ntime=0.048..4459.139 rows=2646177 loops=1)\n -> Hash (cost=2631.24..2631.24 rows=1422\nwidth=33) (actual time=7.751..7.751 rows=899 loops=1)\n -> Bitmap Heap Scan on visitors v\n(cost=23.44..2631.24 rows=1422 width=33) (actual time=0.577..6.347\nrows=899 loops=1)\n Recheck Cond: (userid =\n'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n -> Bitmap Index Scan on\nvisitors_userid_index2 (cost=0.00..23.08 rows=1422 width=0) (actual\ntime=0.419..0.419 rows=899 loops=1)\n Index Cond: (userid =\n'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n -> Index Scan using items_primary_pkey on items_primary\np (cost=0.00..2.36 rows=1 width=66) (actual time=0.024..0.025 rows=1\nloops=3210)\n Index Cond: (p.id = l.itemid)\n -> Index Scan using feeds_pkey on feeds f (cost=0.00..0.28\nrows=1 width=33) (actual time=0.018..0.018 rows=0 loops=3210)\n Index Cond: (f.id = p.feedid)\n Filter: (lower((f.slug)::text) =\n'wealth_building_by_nightingaleconant'::text)\nTotal runtime: 8385.538 ms\n\n\nBump up the stats:\n\n\nALTER TABLE ItemExperienceLog ALTER COLUMN VisitorId SET STATISTICS 500;\nALTER TABLE ItemExperienceLog ALTER COLUMN ItemId SET STATISTICS 500;\nANALYZE ItemExperienceLog;\n\nALTER TABLE Visitors ALTER COLUMN UserId SET STATISTICS 500;\nALTER TABLE Visitors ALTER COLUMN Id SET STATISTICS 500;\nANALYZE Visitors;\n\n\nThe new query plan:\n\n\nHashAggregate (cost=127301.63..127301.72 rows=7 width=37) (actual\ntime=11447.033..11447.037 rows=4 loops=1)\n -> Nested Loop (cost=1874.67..127301.60 rows=7 width=37) (actual\ntime=4717.880..11446.987 rows=11 loops=1)\n -> Nested Loop (cost=1874.67..126923.09 rows=1306 width=70)\n(actual time=20.565..11345.756 rows=3210 loops=1)\n -> Hash Join (cost=1874.67..123822.53 rows=1306\nwidth=37) (actual time=20.445..8292.235 rows=3210 loops=1)\n Hash Cond: (l.visitorid = v.id)\n -> Seq Scan on itemexperiencelog l\n(cost=0.00..112010.04 rows=2646604 width=70) (actual\ntime=0.065..4438.481 rows=2646549 loops=1)\n -> Hash (cost=1862.32..1862.32 rows=988\nwidth=33) (actual time=19.360..19.360 rows=899 loops=1)\n -> Bitmap Heap Scan on visitors v\n(cost=18.08..1862.32 rows=988 width=33) (actual time=0.666..17.788\nrows=899 loops=1)\n Recheck Cond: (userid =\n'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n -> Bitmap Index Scan on\nvisitors_userid_index2 (cost=0.00..17.83 rows=988 width=0) (actual\ntime=0.520..0.520 rows=899 loops=1)\n Index Cond: (userid =\n'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n -> Index Scan using items_primary_pkey on items_primary\np (cost=0.00..2.36 
rows=1 width=66) (actual time=0.944..0.945 rows=1\nloops=3210)\n Index Cond: (p.id = l.itemid)\n -> Index Scan using feeds_pkey on feeds f (cost=0.00..0.28\nrows=1 width=33) (actual time=0.029..0.029 rows=0 loops=3210)\n Index Cond: (f.id = p.feedid)\n Filter: (lower((f.slug)::text) =\n'wealth_building_by_nightingaleconant'::text)\nTotal runtime: 11447.155 ms\n", "msg_date": "Wed, 25 Mar 2009 17:14:45 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "Bryan Murphy <[email protected]> writes:\n> I tried that already, but I decided to try again in case I messed up\n> something last time. Here's what I ran. As you can see, it still\n> chooses to do a sequential scan. Am I changing the stats for those\n> columns correctly?\n\nI think what you should be doing is messing with the cost parameters\n... and not in the direction you tried before. I gather from\n\teffective_cache_size = 12GB\nthat you have plenty of RAM on this machine. If the tables involved\nare less than 1GB then it's likely that you are operating in a fully\ncached condition, and the default cost parameters are not set up for\nthat. You want to be charging a lot less for page accesses relative to\nCPU effort. Try reducing both seq_page_cost and random_page_cost to 0.5\nor even 0.1. You'll need to watch your other queries to make sure\nnothing gets radically worse though ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Mar 2009 22:15:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan " }, { "msg_contents": "On Wed, Mar 25, 2009 at 9:15 PM, Tom Lane <[email protected]> wrote:\n> I think what you should be doing is messing with the cost parameters\n> ... and not in the direction you tried before.  I gather from\n>        effective_cache_size = 12GB\n> that you have plenty of RAM on this machine.  If the tables involved\n> are less than 1GB then it's likely that you are operating in a fully\n> cached condition, and the default cost parameters are not set up for\n> that.  You want to be charging a lot less for page accesses relative to\n> CPU effort.  Try reducing both seq_page_cost and random_page_cost to 0.5\n> or even 0.1.  You'll need to watch your other queries to make sure\n> nothing gets radically worse though ...\n>\n>                        regards, tom lane\n\nThanks Tom, I think that did the trick. I'm going to have to keep an\neye on the database for a few days to make sure there are no\nunintended consequences, but it looks good. 
Here's the new query\nplan:\n\n\nHashAggregate (cost=40906.58..40906.67 rows=7 width=37) (actual\ntime=204.661..204.665 rows=4 loops=1)\n -> Nested Loop (cost=0.00..40906.55 rows=7 width=37) (actual\ntime=0.293..204.628 rows=11 loops=1)\n -> Nested Loop (cost=0.00..40531.61 rows=1310 width=70)\n(actual time=0.261..113.576 rows=3210 loops=1)\n -> Nested Loop (cost=0.00..39475.97 rows=1310\nwidth=37) (actual time=0.232..29.484 rows=3210 loops=1)\n -> Index Scan using visitors_userid_index2 on\nvisitors v (cost=0.00..513.83 rows=1002 width=33) (actual\ntime=0.056..2.307 rows=899 loops=1)\n Index Cond: (userid =\n'fbe2537f21d94f519605612c0bf7c2c5'::bpchar)\n -> Index Scan using\nitemexperiencelog__index__visitorid on itemexperiencelog l\n(cost=0.00..37.43 rows=116 width=70) (actual time=0.013..0.021 rows=4\nloops=899)\n Index Cond: (l.visitorid = v.id)\n -> Index Scan using items_primary_pkey on items_primary\np (cost=0.00..0.79 rows=1 width=66) (actual time=0.018..0.019 rows=1\nloops=3210)\n Index Cond: (p.id = l.itemid)\n -> Index Scan using feeds_pkey on feeds f (cost=0.00..0.27\nrows=1 width=33) (actual time=0.023..0.023 rows=0 loops=3210)\n Index Cond: (f.id = p.feedid)\n Filter: (lower((f.slug)::text) =\n'wealth_building_by_nightingaleconant'::text)\nTotal runtime: 204.759 ms\n\n\nWhat I did was change seq_page_cost back to 1.0 and then changed\nrandom_page_cost to 0.5\n\nThis also makes logical sense to me. We've completely rewritten our\ncaching layer over the last three weeks, and introduced slony into our\narchitecture, so our usage patterns have transformed overnight.\nPreviously we were very i/o bound, now most of the actively used data\nis actually in memory. Just a few weeks ago there was so much churn\nalmost nothing stayed cached for long.\n\nThis is great, thanks guys!\n\nBryan\n", "msg_date": "Wed, 25 Mar 2009 22:19:13 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" }, { "msg_contents": "Bryan Murphy <[email protected]> writes:\n> What I did was change seq_page_cost back to 1.0 and then changed\n> random_page_cost to 0.5\n\n[ squint... ] It makes no physical sense for random_page_cost to be\nless than seq_page_cost. Please set them the same.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Mar 2009 23:28:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan " }, { "msg_contents": "On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane <[email protected]> wrote:\n> Bryan Murphy <[email protected]> writes:\n>> What I did was change seq_page_cost back to 1.0 and then changed\n>> random_page_cost to 0.5\n>\n> [ squint... ]  It makes no physical sense for random_page_cost to be\n> less than seq_page_cost.  Please set them the same.\n>\n>                        regards, tom lane\n\nDone. Just saw the tip in the docs as well. Both are set to 0.5 now\nand I'm still getting the good query plan.\n\nThanks!\nBryan\n", "msg_date": "Wed, 25 Mar 2009 23:00:25 -0500", "msg_from": "Bryan Murphy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help Me Understand Why I'm Getting a Bad Query Plan" } ]
[ { "msg_contents": "I'm trying to pin down some performance issues with a machine where I \nwork, we are seeing (read only) query response times blow out by an \norder of magnitude or more at busy times. Initially we blamed \nautovacuum, but after a tweak of the cost_delay it is *not* the \nproblem. Then I looked at checkpoints... and altho there was some \ncorrelation with them and the query response - I'm thinking that the \nraid chunksize may well be the issue.\n\nFortunately there is an identical DR box, so I could do a little \ntesting. Details follow:\n\nSun 4140 2x quad-core opteron 2356 16G RAM, 6x 15K 140G SAS\nDebian Lenny\nPg 8.3.6\n\nThe disk is laid out using software (md) raid:\n\n4 drives raid 10 *4K* chunksize with database files (ext3 ordered, noatime)\n2 drives raid 1 with database transaction logs (ext3 ordered, noatime)\n\nThe relevant non default .conf params are:\n\nshared_buffers = 2048MB \nwork_mem = 4MB \nmaintenance_work_mem = 1024MB \nmax_fsm_pages = 153600 \nbgwriter_lru_maxpages = 200 \nwal_buffers = 2MB \ncheckpoint_segments = 32 \neffective_cache_size = 4096MB\nautovacuum_vacuum_scale_factor = 0.1 \nautovacuum_vacuum_cost_delay = 60 # This is high, but seemed to help...\n\nI've run pgbench:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 655.335102 (including connections establishing)\ntps = 655.423232 (excluding connections establishing)\n\n\nLooking at iostat while it is running shows (note sda-sdd raid10, sde \nand sdf raid 1):\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 56.80 0.00 579.00 0.00 2.47 \n8.74 133.76 235.10 1.73 100.00\nsdb 0.00 45.60 0.00 583.60 0.00 2.45 \n8.59 52.65 90.03 1.71 100.00\nsdc 0.00 49.00 0.00 579.80 0.00 2.45 \n8.66 72.56 125.09 1.72 100.00\nsdd 0.00 58.40 0.00 565.00 0.00 2.42 \n8.79 135.31 235.52 1.77 100.00\nsde 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\nsdf 0.00 0.00 0.00 0.00 0.00 0.00 \n0.00 0.00 0.00 0.00 0.00\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 12.80 0.00 23.40 0.00 0.15 \n12.85 3.04 103.38 4.27 10.00\nsdb 0.00 12.80 0.00 22.80 0.00 0.14 \n12.77 2.31 73.51 3.58 8.16\nsdc 0.00 12.80 0.00 21.40 0.00 0.13 \n12.86 2.38 79.21 3.63 7.76\nsdd 0.00 12.80 0.00 21.80 0.00 0.14 \n12.70 2.66 90.02 3.93 8.56\nsde 0.00 2546.80 0.00 146.80 0.00 10.53 \n146.94 0.97 6.38 5.34 78.40\nsdf 0.00 2546.80 0.00 146.60 0.00 10.53 \n147.05 0.97 6.38 5.53 81.04\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \navgrq-sz avgqu-sz await svctm %util\nsda 0.00 231.40 0.00 566.80 0.00 3.16 \n11.41 124.92 228.26 1.76 99.52\nsdb 0.00 223.00 0.00 558.00 0.00 3.06 \n11.23 46.64 83.55 1.70 94.88\nsdc 0.00 230.60 0.00 551.60 0.00 3.07 \n11.40 94.38 171.54 1.76 96.96\nsdd 0.00 231.40 0.00 528.60 0.00 2.94 \n11.37 122.55 220.81 1.83 96.48\nsde 0.00 1495.80 0.00 99.00 0.00 6.23 \n128.86 0.81 8.15 7.76 76.80\nsdf 0.00 1495.80 0.00 99.20 0.00 6.26 \n129.24 0.73 7.40 7.10 70.48\n\nTop looks like:\n\nCpu(s): 2.5%us, 1.9%sy, 0.0%ni, 71.9%id, 23.4%wa, 0.2%hi, 0.2%si, \n0.0%st\nMem: 16474084k total, 15750384k used, 723700k free, 1654320k buffers\nSwap: 2104440k total, 944k used, 2103496k free, 13552720k cached\n\nIt looks to me like we are maxing out the raid 10 array, and I suspect \nthe chunksize (4K) is the culprit. However as this is a pest to change \n(!) 
I'd like some opinions on whether I'm jumping to conclusions. I'd \nalso appreciate comments about what chunksize to use (I've tended to use \n256K in the past, but what are folks preferring these days?)\n\nregards\n\nMark\n\n\n", "msg_date": "Wed, 25 Mar 2009 14:09:07 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Raid 10 chunksize" }, { "msg_contents": "\nOn 3/24/09 6:09 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> I'm trying to pin down some performance issues with a machine where I\n> work, we are seeing (read only) query response times blow out by an\n> order of magnitude or more at busy times. Initially we blamed\n> autovacuum, but after a tweak of the cost_delay it is *not* the\n> problem. Then I looked at checkpoints... and altho there was some\n> correlation with them and the query response - I'm thinking that the\n> raid chunksize may well be the issue.\n> \n> Fortunately there is an identical DR box, so I could do a little\n> testing. Details follow:\n> \n> Sun 4140 2x quad-core opteron 2356 16G RAM, 6x 15K 140G SAS\n> Debian Lenny\n> Pg 8.3.6\n> \n> The disk is laid out using software (md) raid:\n> \n> 4 drives raid 10 *4K* chunksize with database files (ext3 ordered, noatime)\n> 2 drives raid 1 with database transaction logs (ext3 ordered, noatime)\n> \n\n> \n> Top looks like:\n> \n> Cpu(s): 2.5%us, 1.9%sy, 0.0%ni, 71.9%id, 23.4%wa, 0.2%hi, 0.2%si,\n> 0.0%st\n> Mem: 16474084k total, 15750384k used, 723700k free, 1654320k buffers\n> Swap: 2104440k total, 944k used, 2103496k free, 13552720k cached\n> \n> It looks to me like we are maxing out the raid 10 array, and I suspect\n> the chunksize (4K) is the culprit. However as this is a pest to change\n> (!) I'd like some opinions on whether I'm jumping to conclusions. I'd\n> also appreciate comments about what chunksize to use (I've tended to use\n> 256K in the past, but what are folks preferring these days?)\n> \n> regards\n> \n> Mark\n> \n> \n\nmd tends to work great at 1MB chunk sizes with RAID 1 or 10 for whatever\nreason. Unlike a hardware raid card, smaller chunks aren't going to help\nrandom i/o as it won't read the whole 1MB or bother caching much. Make sure\nany partitions built on top of md are 1MB aligned if you go that route.\nRandom I/O on files smaller than 1MB would be affected -- but that's not a\nproblem on a 16GB RAM server running a database that won't fit in RAM.\n\nYour xlogs are occasionally close to max usage too -- which is suspicious at\n10MB/sec. There is no reason for them to be on ext3 since they are a\ntransaction log that syncs writes so file system journaling doesn't mean\nanything. Ext2 there will lower the sync times and reduced i/o utilization.\n\nI also tend to use xfs if sequential access is important at all (obviously\nnot so in pg_bench). ext3 is slightly safer in a power failure with unsyncd\ndata, but Postgres has that covered with its own journal anyway so those\ndifferences are irrelevant.\n\n", "msg_date": "Tue, 24 Mar 2009 18:48:36 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Tue, Mar 24, 2009 at 6:48 PM, Scott Carey <[email protected]> wrote:\n> Your xlogs are occasionally close to max usage too -- which is suspicious at\n> 10MB/sec.  There is no reason for them to be on ext3 since they are a\n> transaction log that syncs writes so file system journaling doesn't mean\n> anything.  
Ext2 there will lower the sync times and reduced i/o utilization.\n\nI would tend to recommend ext3 in data=writeback and make sure that\nit's mounted with noatime over using ext2 - for the sole reason that\nif the system shuts down unexpectedly, you don't have to worry about a\nlong fsck when bringing it back up.\n\nPerformance between the two filesystems should really be negligible\nfor Postgres logging.\n\n-Dave\n", "msg_date": "Tue, 24 Mar 2009 19:04:42 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Tue, Mar 24, 2009 at 7:09 PM, Mark Kirkwood <[email protected]> wrote:\n> I'm trying to pin down some performance issues with a machine where I work,\n> we are seeing (read only) query response times blow out by an order of\n> magnitude or more at busy times. Initially we blamed autovacuum, but after a\n>  tweak of the cost_delay it is *not* the problem. Then I looked at\n> checkpoints... and altho there was some correlation with them and the query\n> response - I'm thinking that the raid chunksize may well be the issue.\n\nSounds to me like you're mostly just running out of bandwidth on your\nRAID array. Whether or not you can tune it to run faster is the real\nissue. This problem becomes worse as you add clients and the RAID\narray starts to thrash. Thrashing is likely to be worse with a small\nchunk size, so that's definitely worth a look at fixing.\n\n> Fortunately there is an identical DR box, so I could do a little testing.\n\nCan you try changing the chunksize on the test box you're testing on\nto see if that helps?\n", "msg_date": "Tue, 24 Mar 2009 20:11:07 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, Mar 24, 2009 at 7:09 PM, Mark Kirkwood <[email protected]> wrote:\n> \n>> I'm trying to pin down some performance issues with a machine where I work,\n>> we are seeing (read only) query response times blow out by an order of\n>> magnitude or more at busy times. Initially we blamed autovacuum, but after a\n>> tweak of the cost_delay it is *not* the problem. Then I looked at\n>> checkpoints... and altho there was some correlation with them and the query\n>> response - I'm thinking that the raid chunksize may well be the issue.\n>> \n>\n> Sounds to me like you're mostly just running out of bandwidth on your\n> RAID array. Whether or not you can tune it to run faster is the real\n> issue. This problem becomes worse as you add clients and the RAID\n> array starts to thrash. Thrashing is likely to be worse with a small\n> chunk size, so that's definitely worth a look at fixing.\n>\n> \n\nYeah, I was wondering if we are maxing out the bandwidth...\n>> Fortunately there is an identical DR box, so I could do a little testing.\n>> \n>\n> Can you try changing the chunksize on the test box you're testing on\n> to see if that helps?\n>\n> \n\nYes - or I am hoping to anyway (part of posting here was to collect some \noutside validation for the idea). 
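The rebuild itself should just be something along these lines on the DR box (device and md names are from memory, so treat this purely as a sketch):\n\nmdadm --stop /dev/md0\nmdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=256 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1\nmkfs.ext3 /dev/md0\n\nthen restore the cluster onto it and re-run pgbench at the same scaling factor.\n\n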
Thanks for your input!\n\n\nCheers\n\nMark\n", "msg_date": "Wed, 25 Mar 2009 16:20:52 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Tue, 24 Mar 2009, David Rees wrote:\n\n> I would tend to recommend ext3 in data=writeback and make sure that\n> it's mounted with noatime over using ext2 - for the sole reason that\n> if the system shuts down unexpectedly, you don't have to worry about a\n> long fsck when bringing it back up.\n\nWell, Mark's system is already using noatime, and if you believe \nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \nthere's little difference between writeback and ordered on the WAL disk. \nMight squeeze out some improvements with ext2 though, and if there's \nnothing besides the WAL on there fsck isn't ever going to take very long \nanyway--not much of a directory tree to traverse there.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 24 Mar 2009 23:29:17 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 25 Mar 2009, Mark Kirkwood wrote:\n\n> I'm thinking that the raid chunksize may well be the issue.\n\nWhy? I'm not saying you're wrong, I just don't see why that parameter \njumped out as a likely cause here.\n\n> Sun 4140 2x quad-core opteron 2356 16G RAM, 6x 15K 140G SAS\n\nThat server doesn't have any sort of write cache on it, right? That means \nthat all the fsync's done near checkpoint time are going to thrash your \ndisks around. One thing you can do to improve that situation is push \ncheckpoint_segments up to the maximum you can possibly stand. You could \nconsider double or even quadruple what you're using right now, the \nrecovery time after a crash will spike upwards a bunch though. That will \nminimize the number of checkpoints and reduce the average disk I/O they \nproduce per unit of time, due to how they're spread out in 8.3. You might \nbump upwards checkpoint_completion_target to 0.9 in order to get some \nimprovement without increasing recovery time as badly.\n\nAlso, if you want to minimize total I/O, you might drop \nbgwriter_lru_maxpages to 0. That feature presumes you have some spare I/O \ncapacity you use to prioritize lower latency, and it sounds like you \ndon't. You get the lowest total I/O per transaction with the background \nwriter turned off.\n\nYou happened to catch me on a night where I was running some pgbench tests \nhere, so I can give you something similar to compare against. Quad-core \nsystem, 8GB of RAM, write-caching controller with 3-disk RAID0 for \ndatabase and 1 disk for WAL; Linux software RAID though. 
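The run itself was nothing fancy, just the stock TPC-B-ish pgbench test, roughly this (database name is arbitrary):\n\npgbench -i -s 100 bench\npgbench -c 32 -t 6250 bench\n\n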
Here's the same \ndata you collected at the same scale you're testing, with similar \npostgresql.conf settings too (same shared_buffers and \ncheckpoint_segments, I didn't touch any of the vacuum parameters):\n\nnumber of clients: 32\nnumber of transactions per client: 6250\nnumber of transactions actually processed: 200000/200000\ntps = 1097.933319 (including connections establishing)\ntps = 1098.372510 (excluding connections establishing)\n\nCpu(s): 3.6%us, 1.0%sy, 0.0%ni, 57.2%id, 37.5%wa, 0.0%hi, 0.7%si, 0.0%st\nMem: 8174288k total, 5545396k used, 2628892k free, 473248k buffers\nSwap: 0k total, 0k used, 0k free, 4050736k cached\n\nsda,b,d are the database, sdc is the WAL, here's a couple of busy periods:\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 337.26 0.00 380.72 0.00 2.83 15.24 104.98 278.77 2.46 93.55\nsdb 0.00 343.56 0.00 386.31 0.00 2.86 15.17 91.32 236.61 2.46 94.95\nsdd 0.00 342.86 0.00 391.71 0.00 2.92 15.28 128.36 342.42 2.43 95.14\nsdc 0.00 808.89 0.00 45.45 0.00 3.35 150.72 1.22 26.75 21.13 96.02\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 377.82 0.00 423.38 0.00 3.13 15.12 74.24 175.21 1.41 59.58\nsdb 0.00 371.73 0.00 423.18 0.00 3.13 15.15 50.61 119.81 1.41 59.58\nsdd 0.00 372.93 0.00 414.99 0.00 3.06 15.09 60.02 144.32 1.44 59.70\nsdc 0.00 3242.16 0.00 258.84 0.00 13.68 108.23 0.88 3.42 2.96 76.60\n\nThey don't really look much different from yours. I'm using software RAID \nand haven't touched any of its parameters; didn't even use noatime on the \next3 filesystems (you should though--that's one of those things the write \ncache really helps out with in my case).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 25 Mar 2009 04:07:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "It sounds to me like you need to tune everything you can related to \npostgresql, but it will unlikely be enough as your load continues to \nincrease. You might want to look into moving some of the read activity \noff of the database. Depending on you application, memcached or ehcache \ncould help. You could also look at using something like Tokyo Cabinet \nas a short term front end data store. Without understanding the \napplication architecture, I can't offer much in way of a specific \nsuggestion.\n\n-Jerry\n\nJerry Champlin\nAbsolute Performance Inc.\n\n\nMark Kirkwood wrote:\n> Scott Marlowe wrote:\n>> On Tue, Mar 24, 2009 at 7:09 PM, Mark Kirkwood \n>> <[email protected]> wrote:\n>> \n>>> I'm trying to pin down some performance issues with a machine where \n>>> I work,\n>>> we are seeing (read only) query response times blow out by an order of\n>>> magnitude or more at busy times. Initially we blamed autovacuum, but \n>>> after a\n>>> tweak of the cost_delay it is *not* the problem. Then I looked at\n>>> checkpoints... and altho there was some correlation with them and \n>>> the query\n>>> response - I'm thinking that the raid chunksize may well be the issue.\n>>> \n>>\n>> Sounds to me like you're mostly just running out of bandwidth on your\n>> RAID array. Whether or not you can tune it to run faster is the real\n>> issue. This problem becomes worse as you add clients and the RAID\n>> array starts to thrash. 
Thrashing is likely to be worse with a small\n>> chunk size, so that's definitely worth a look at fixing.\n>>\n>> \n>\n> Yeah, I was wondering if we are maxing out the bandwidth...\n>>> Fortunately there is an identical DR box, so I could do a little \n>>> testing.\n>>> \n>>\n>> Can you try changing the chunksize on the test box you're testing on\n>> to see if that helps?\n>>\n>> \n>\n> Yes - or I am hoping to anyway (part of posting here was to collect \n> some outside validation for the idea). Thanks for your input!\n>\n>\n> Cheers\n>\n> Mark\n>\n", "msg_date": "Wed, 25 Mar 2009 04:05:40 -0600", "msg_from": "Jerry Champlin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 3/25/09 1:07 AM, \"Greg Smith\" <[email protected]> wrote:\n\n> On Wed, 25 Mar 2009, Mark Kirkwood wrote:\n> \n>> I'm thinking that the raid chunksize may well be the issue.\n> \n> Why? I'm not saying you're wrong, I just don't see why that parameter\n> jumped out as a likely cause here.\n> \n\nIf postgres is random reading or writing at 8k block size, and the raid\narray is set with 4k block size, then every 8k random i/o will create TWO\ndisk seeks since it gets split to two disks. Effectively, iops will be cut\nin half.\n\n", "msg_date": "Wed, 25 Mar 2009 09:16:33 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMark Kirkwood wrote:\n> I'm trying to pin down some performance issues with a machine where\n> I work, we are seeing (read only) query response times blow out by\n> an order of magnitude or more at busy times. Initially we blamed\n> autovacuum, but after a tweak of the cost_delay it is *not* the\n> problem. Then I looked at checkpoints... and altho there was some\n> correlation with them and the query response - I'm thinking that\n> the raid chunksize may well be the issue.\n>\n> Fortunately there is an identical DR box, so I could do a little\n> testing. 
Details follow:\n>\n> Sun 4140 2x quad-core opteron 2356 16G RAM, 6x 15K 140G SAS Debian\n> Lenny Pg 8.3.6\n>\n> The disk is laid out using software (md) raid:\n>\n> 4 drives raid 10 *4K* chunksize with database files (ext3 ordered,\n> noatime) 2 drives raid 1 with database transaction logs (ext3\n> ordered, noatime)\n>\n> The relevant non default .conf params are:\n>\n> shared_buffers = 2048MB work_mem = 4MB\n> maintenance_work_mem = 1024MB max_fsm_pages = 153600\n> bgwriter_lru_maxpages = 200 wal_buffers = 2MB\n> checkpoint_segments = 32 effective_cache_size = 4096MB\n> autovacuum_vacuum_scale_factor = 0.1 autovacuum_vacuum_cost_delay\n> = 60 # This is high, but seemed to help...\n>\n> I've run pgbench:\n>\n> transaction type: TPC-B (sort of) scaling factor: 100 number of\n> clients: 24 number of transactions per client: 12000 number of\n> transactions actually processed: 288000/288000 tps = 655.335102\n> (including connections establishing) tps = 655.423232 (excluding\n> connections establishing)\n>\n>\n> Looking at iostat while it is running shows (note sda-sdd raid10,\n> sde and sdf raid 1):\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n> avgrq-sz avgqu-sz await svctm %util sda 0.00\n> 56.80 0.00 579.00 0.00 2.47 8.74 133.76 235.10\n> 1.73 100.00 sdb 0.00 45.60 0.00 583.60\n> 0.00 2.45 8.59 52.65 90.03 1.71 100.00 sdc\n> 0.00 49.00 0.00 579.80 0.00 2.45 8.66 72.56\n> 125.09 1.72 100.00 sdd 0.00 58.40 0.00\n> 565.00 0.00 2.42 8.79 135.31 235.52 1.77 100.00 sde\n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 sdf 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n> avgrq-sz avgqu-sz await svctm %util sda 0.00\n> 12.80 0.00 23.40 0.00 0.15 12.85 3.04 103.38\n> 4.27 10.00 sdb 0.00 12.80 0.00 22.80\n> 0.00 0.14 12.77 2.31 73.51 3.58 8.16 sdc\n> 0.00 12.80 0.00 21.40 0.00 0.13 12.86 2.38\n> 79.21 3.63 7.76 sdd 0.00 12.80 0.00 21.80\n> 0.00 0.14 12.70 2.66 90.02 3.93 8.56 sde\n> 0.00 2546.80 0.00 146.80 0.00 10.53 146.94 0.97\n> 6.38 5.34 78.40 sdf 0.00 2546.80 0.00 146.60\n> 0.00 10.53 147.05 0.97 6.38 5.53 81.04\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s\n> avgrq-sz avgqu-sz await svctm %util sda 0.00\n> 231.40 0.00 566.80 0.00 3.16 11.41 124.92 228.26\n> 1.76 99.52 sdb 0.00 223.00 0.00 558.00\n> 0.00 3.06 11.23 46.64 83.55 1.70 94.88 sdc\n> 0.00 230.60 0.00 551.60 0.00 3.07 11.40 94.38\n> 171.54 1.76 96.96 sdd 0.00 231.40 0.00\n> 528.60 0.00 2.94 11.37 122.55 220.81 1.83 96.48 sde\n> 0.00 1495.80 0.00 99.00 0.00 6.23 128.86 0.81\n> 8.15 7.76 76.80 sdf 0.00 1495.80 0.00 99.20\n> 0.00 6.26 129.24 0.73 7.40 7.10 70.48\n>\n> Top looks like:\n>\n> Cpu(s): 2.5%us, 1.9%sy, 0.0%ni, 71.9%id, 23.4%wa, 0.2%hi,\n> 0.2%si, 0.0%st Mem: 16474084k total, 15750384k used, 723700k\n> free, 1654320k buffers Swap: 2104440k total, 944k used,\n> 2103496k free, 13552720k cached\n>\n> It looks to me like we are maxing out the raid 10 array, and I\n> suspect the chunksize (4K) is the culprit. However as this is a\n> pest to change (!) I'd like some opinions on whether I'm jumping to\n> conclusions. I'd also appreciate comments about what chunksize to\n> use (I've tended to use 256K in the past, but what are folks\n> preferring these days?)\n>\n> regards\n>\n> Mark\n>\n>\n>\nHello Mark,\n Okay, so, take all of this with a pinch of salt, but, I have the\nsame config (pretty much) as you, with checkpoint_Segments raised to\n192. 
The 'test' database server is Q8300, 8GB ram, 2 x 7200rpm SATA\ninto motherboard which I then lvm stripped together; lvcreate -n\ndata_lv -i 2 -I 64 mylv -L 60G (expandable under lvm2). That gives me\na stripe size of 64. Running pgbench with the same scaling factors;\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 1398.907206 (including connections establishing)\ntps = 1399.233785 (excluding connections establishing)\n \n It's also running ext4dev, but, this is the 'playground' server,\nnot the real iron (And I dread to do that on the real iron). In short,\nI think that chunksize/stripesize is killing you. Personally, I would\ngo for 64 or 128 .. that's jst my 2c .. feel free to\nignore/scorn/laugh as applicable ;)\n\n Regards\n Stef\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niEYEARECAAYFAknK0UsACgkQANG7uQ+9D9VK3wCeO/guLVb4K4V7VAQ29hJsmstb\n2JMAmQEmJjNTQlxng/49D2/xHNw2W19/\n=/rKD\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 25 Mar 2009 20:50:20 -0400", "msg_from": "Stef Telford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "I wrote:\n> Scott Marlowe wrote:\n>\n>>\n>> Can you try changing the chunksize on the test box you're testing on\n>> to see if that helps?\n>>\n>> \n>\n> Yes - or I am hoping to anyway (part of posting here was to collect \n> some outside validation for the idea). Thanks for your input!\n>\n\nRebuilt with 64K chunksize:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 866.512162 (including connections establishing)\ntps = 866.651320 (excluding connections establishing)\n\n\nSo 64K looks quite a bit better. I'll endeavor to try out 256K next week \ntoo.\n\nMark\n", "msg_date": "Thu, 26 Mar 2009 17:28:46 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Greg Smith wrote:\n> On Wed, 25 Mar 2009, Mark Kirkwood wrote:\n>\n>> I'm thinking that the raid chunksize may well be the issue.\n>\n> Why? I'm not saying you're wrong, I just don't see why that parameter \n> jumped out as a likely cause here.\n>\n\nSee my other post, however I agree - it wasn't clear whether split \nwrites (from the small chunksize) were killing us or the array was \nsimply maxed out...\n\n>> Sun 4140 2x quad-core opteron 2356 16G RAM, 6x 15K 140G SAS\n>\n> That server doesn't have any sort of write cache on it, right? That \n> means that all the fsync's done near checkpoint time are going to \n> thrash your disks around. One thing you can do to improve that \n> situation is push checkpoint_segments up to the maximum you can \n> possibly stand. You could consider double or even quadruple what \n> you're using right now, the recovery time after a crash will spike \n> upwards a bunch though. That will minimize the number of checkpoints \n> and reduce the average disk I/O they produce per unit of time, due to \n> how they're spread out in 8.3. 
You might bump upwards \n> checkpoint_completion_target to 0.9 in order to get some improvement \n> without increasing recovery time as badly.\n>\n\nYeah, no write cache at all.\n\n> Also, if you want to minimize total I/O, you might drop \n> bgwriter_lru_maxpages to 0. That feature presumes you have some spare \n> I/O capacity you use to prioritize lower latency, and it sounds like \n> you don't. You get the lowest total I/O per transaction with the \n> background writer turned off.\n>\n\nRight - but then a big very noticeable stall when you do have to \ncheckpoint? We want to avoid that I think, even at the cost of a little \noverall throughput.\n\n> You happened to catch me on a night where I was running some pgbench \n> tests here, so I can give you something similar to compare against. \n> Quad-core system, 8GB of RAM, write-caching controller with 3-disk \n> RAID0 for database and 1 disk for WAL; Linux software RAID though. \n> Here's the same data you collected at the same scale you're testing, \n> with similar postgresql.conf settings too (same shared_buffers and \n> checkpoint_segments, I didn't touch any of the vacuum parameters):\n>\n> number of clients: 32\n> number of transactions per client: 6250\n> number of transactions actually processed: 200000/200000\n> tps = 1097.933319 (including connections establishing)\n> tps = 1098.372510 (excluding connections establishing)\n>\n> Cpu(s): 3.6%us, 1.0%sy, 0.0%ni, 57.2%id, 37.5%wa, 0.0%hi, \n> 0.7%si, 0.0%st\n> Mem: 8174288k total, 5545396k used, 2628892k free, 473248k buffers\n> Swap: 0k total, 0k used, 0k free, 4050736k cached\n>\n> sda,b,d are the database, sdc is the WAL, here's a couple of busy \n> periods:\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \n> avgrq-sz avgqu-sz await svctm %util\n> sda 0.00 337.26 0.00 380.72 0.00 2.83 \n> 15.24 104.98 278.77 2.46 93.55\n> sdb 0.00 343.56 0.00 386.31 0.00 2.86 \n> 15.17 91.32 236.61 2.46 94.95\n> sdd 0.00 342.86 0.00 391.71 0.00 2.92 \n> 15.28 128.36 342.42 2.43 95.14\n> sdc 0.00 808.89 0.00 45.45 0.00 3.35 \n> 150.72 1.22 26.75 21.13 96.02\n>\n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s \n> avgrq-sz avgqu-sz await svctm %util\n> sda 0.00 377.82 0.00 423.38 0.00 3.13 \n> 15.12 74.24 175.21 1.41 59.58\n> sdb 0.00 371.73 0.00 423.18 0.00 3.13 \n> 15.15 50.61 119.81 1.41 59.58\n> sdd 0.00 372.93 0.00 414.99 0.00 3.06 \n> 15.09 60.02 144.32 1.44 59.70\n> sdc 0.00 3242.16 0.00 258.84 0.00 13.68 \n> 108.23 0.88 3.42 2.96 76.60\n>\n> They don't really look much different from yours. I'm using software \n> RAID and haven't touched any of its parameters; didn't even use \n> noatime on the ext3 filesystems (you should though--that's one of \n> those things the write cache really helps out with in my case).\n>\nYeah - with 64K chunksize I'm seeing a result more congruent with yours \n(866 or so for 24 clients), I think another pair of disks so we could \nhave 3 effective disks for the database would help get us to similar \nresults to yours... however for the meantime I'm trying to get the best \nout of what's there!\n\nThanks for your help\n\nMark\n", "msg_date": "Thu, 26 Mar 2009 17:37:00 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Stef Telford wrote:\n>\n> Hello Mark,\n> Okay, so, take all of this with a pinch of salt, but, I have the\n> same config (pretty much) as you, with checkpoint_Segments raised to\n> 192. 
The 'test' database server is Q8300, 8GB ram, 2 x 7200rpm SATA\n> into motherboard which I then lvm stripped together; lvcreate -n\n> data_lv -i 2 -I 64 mylv -L 60G (expandable under lvm2). That gives me\n> a stripe size of 64. Running pgbench with the same scaling factors;\n>\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 24\n> number of transactions per client: 12000\n> number of transactions actually processed: 288000/288000\n> tps = 1398.907206 (including connections establishing)\n> tps = 1399.233785 (excluding connections establishing)\n> \n> It's also running ext4dev, but, this is the 'playground' server,\n> not the real iron (And I dread to do that on the real iron). In short,\n> I think that chunksize/stripesize is killing you. Personally, I would\n> go for 64 or 128 .. that's jst my 2c .. feel free to\n> ignore/scorn/laugh as applicable ;)\n>\n> \nStef - I suspect that your (quite high) tps is because your SATA disks \nare not honoring the fsync() request for each commit. SCSI/SAS disks \ntend to by default flush their cache at fsync - ATA/SATA tend not to. \nSome filesystems (e.g xfs) will try to work around this with write \nbarrier support, but it depends on the disk firmware.\n\nThanks for your reply!\n\nMark\n", "msg_date": "Thu, 26 Mar 2009 17:43:10 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Thu, 26 Mar 2009, Mark Kirkwood wrote:\n\n>> Also, if you want to minimize total I/O, you might drop \n>> bgwriter_lru_maxpages to 0. That feature presumes you have some spare I/O \n>> capacity you use to prioritize lower latency, and it sounds like you don't. \n>> You get the lowest total I/O per transaction with the background writer \n>> turned off.\n>> \n>\n> Right - but then a big very noticeable stall when you do have to checkpoint? \n> We want to avoid that I think, even at the cost of a little overall \n> throughput.\n\nThere's not really a big difference if you're running with a large value \nfor checkpoing_segments. That spreads the checkpoint I/O over a longer \nperiod of time. The current background writer doesn't aim to reduce \nwrites at checkpoint time, because that never really worked out like \npeople expected it to anyway.\n\nIt's aimed instead to write out buffers that database backend processes \nare going to need fairly soon, so they are less likely to block because \nthey have to write them out themselves. That leads to an occasional bit \nof wasted I/O, where the buffer written out gets used or dirtied against \nbefore it can be assigned to a backend. I've got a long paper expanding \non the documentation here you might find useful: \nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\n> Yeah - with 64K chunksize I'm seeing a result more congruent with yours \n> (866 or so for 24 clients)\n\nThat's good to hear. If adjusting that helped so much, you might consider \naligning the filesystem partitions to the chunk size too; the partition \nheader usually screws that up on Linux. 
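A quick way to check is to look at where the partitions start, e.g.:\n\nfdisk -lu /dev/sda\n\nIf they begin at the old default of sector 63 they can never line up with a power-of-two chunk; the general idea is to recreate them starting at a multiple of the chunk size (2048 sectors = 1MiB with 512-byte sectors, which covers any of the sizes discussed here), though the exact commands depend on your partitioning tool. 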
See these two references for \nideas: http://www.vmware.com/resources/techresources/608 \nhttp://spiralbound.net/2008/06/09/creating-linux-partitions-for-clariion\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 26 Mar 2009 13:39:21 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 3/25/09 9:43 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> Stef Telford wrote:\n>> \n>> Hello Mark,\n>> Okay, so, take all of this with a pinch of salt, but, I have the\n>> same config (pretty much) as you, with checkpoint_Segments raised to\n>> 192. The 'test' database server is Q8300, 8GB ram, 2 x 7200rpm SATA\n>> into motherboard which I then lvm stripped together; lvcreate -n\n>> data_lv -i 2 -I 64 mylv -L 60G (expandable under lvm2). That gives me\n>> a stripe size of 64. Running pgbench with the same scaling factors;\n>> \n>> starting vacuum...end.\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 100\n>> number of clients: 24\n>> number of transactions per client: 12000\n>> number of transactions actually processed: 288000/288000\n>> tps = 1398.907206 (including connections establishing)\n>> tps = 1399.233785 (excluding connections establishing)\n>> \n>> It's also running ext4dev, but, this is the 'playground' server,\n>> not the real iron (And I dread to do that on the real iron). In short,\n>> I think that chunksize/stripesize is killing you. Personally, I would\n>> go for 64 or 128 .. that's jst my 2c .. feel free to\n>> ignore/scorn/laugh as applicable ;)\n>> \n>> \n> Stef - I suspect that your (quite high) tps is because your SATA disks\n> are not honoring the fsync() request for each commit. SCSI/SAS disks\n> tend to by default flush their cache at fsync - ATA/SATA tend not to.\n> Some filesystems (e.g xfs) will try to work around this with write\n> barrier support, but it depends on the disk firmware.\n\nThis has not been very true for a while now. SATA disks will flush their\nwrite cache when told, and properly adhere to write barriers. Of course,\nnot all file systems send the right write barrier commands and flush\ncommands to SATA drives (UFS for example, and older versions of ext3).\n\nIt may be the other way around, your SAS drives might have the write cache\ndisabled for no good reason other than to protect against file systems that\ndon't work right.\n\n> \n> Thanks for your reply!\n> \n> Mark\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 26 Mar 2009 14:44:15 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 3/25/09 9:28 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> I wrote:\n>> Scott Marlowe wrote:\n>> \n>>> \n>>> Can you try changing the chunksize on the test box you're testing on\n>>> to see if that helps?\n>>> \n>>> \n>> \n>> Yes - or I am hoping to anyway (part of posting here was to collect\n>> some outside validation for the idea). 
Thanks for your input!\n>> \n> \n> Rebuilt with 64K chunksize:\n> \n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 24\n> number of transactions per client: 12000\n> number of transactions actually processed: 288000/288000\n> tps = 866.512162 (including connections establishing)\n> tps = 866.651320 (excluding connections establishing)\n> \n> \n> So 64K looks quite a bit better. I'll endeavor to try out 256K next week\n> too.\n\nJust go all the way to 1MB, md _really_ likes 1MB chunk sizes for some\nreason. Benchmarks right and left on google show this to be optimal. My\ntests with md raid 0 over hardware raid 10's ended up with that being\noptimal as well.\n\nGreg's notes on aligning partitions to the chunk are key as well.\n\n\n\n> \n> Mark\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 26 Mar 2009 14:49:05 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 3/26/09 2:44 PM, \"Scott Carey\" <[email protected]> wrote:\n\n> \n> \n> On 3/25/09 9:43 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n> \n>> Stef Telford wrote:\n>>> \n>>> Hello Mark,\n>>> Okay, so, take all of this with a pinch of salt, but, I have the\n>>> same config (pretty much) as you, with checkpoint_Segments raised to\n>>> 192. The 'test' database server is Q8300, 8GB ram, 2 x 7200rpm SATA\n>>> into motherboard which I then lvm stripped together; lvcreate -n\n>>> data_lv -i 2 -I 64 mylv -L 60G (expandable under lvm2). That gives me\n>>> a stripe size of 64. Running pgbench with the same scaling factors;\n>>> \n>>> starting vacuum...end.\n>>> transaction type: TPC-B (sort of)\n>>> scaling factor: 100\n>>> number of clients: 24\n>>> number of transactions per client: 12000\n>>> number of transactions actually processed: 288000/288000\n>>> tps = 1398.907206 (including connections establishing)\n>>> tps = 1399.233785 (excluding connections establishing)\n>>> \n>>> It's also running ext4dev, but, this is the 'playground' server,\n>>> not the real iron (And I dread to do that on the real iron). In short,\n>>> I think that chunksize/stripesize is killing you. Personally, I would\n>>> go for 64 or 128 .. that's jst my 2c .. feel free to\n>>> ignore/scorn/laugh as applicable ;)\n>>> \n>>> \n>> Stef - I suspect that your (quite high) tps is because your SATA disks\n>> are not honoring the fsync() request for each commit. SCSI/SAS disks\n>> tend to by default flush their cache at fsync - ATA/SATA tend not to.\n>> Some filesystems (e.g xfs) will try to work around this with write\n>> barrier support, but it depends on the disk firmware.\n> \n> This has not been very true for a while now. SATA disks will flush their\n> write cache when told, and properly adhere to write barriers. Of course,\n> not all file systems send the right write barrier commands and flush\n> commands to SATA drives (UFS for example, and older versions of ext3).\n> \n> It may be the other way around, your SAS drives might have the write cache\n> disabled for no good reason other than to protect against file systems that\n> don't work right.\n> \n\nA little extra info here >> md, LVM, and some other tools do not allow the\nfile system to use write barriers properly.... 
So those are on the bad list\nfor data integrity with SAS or SATA write caches without battery back-up.\nHowever, this is NOT an issue on the postgres data partition. Data fsync\nstill works fine, its the file system journal that might have out-of-order\nwrites. For xlogs, write barriers are not important, only fsync() not\nlying.\n\nAs an additional note, ext4 uses checksums per block in the journal, so it\nis resistant to out of order writes causing trouble. The test compared to\nhere was on ext4, and most likely the speed increase is partly due to that.\n\n>> \n>> Thanks for your reply!\n>> \n>> Mark\n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 26 Mar 2009 15:02:08 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Scott Carey wrote:\n>\n> A little extra info here >> md, LVM, and some other tools do not allow the\n> file system to use write barriers properly.... So those are on the bad list\n> for data integrity with SAS or SATA write caches without battery back-up.\n> However, this is NOT an issue on the postgres data partition. Data fsync\n> still works fine, its the file system journal that might have out-of-order\n> writes. For xlogs, write barriers are not important, only fsync() not\n> lying.\n>\n> As an additional note, ext4 uses checksums per block in the journal, so it\n> is resistant to out of order writes causing trouble. The test compared to\n> here was on ext4, and most likely the speed increase is partly due to that.\n>\n> \n\n[Looks at Stef's config - 2x 7200 rpm SATA RAID 0] I'm still highly \nsuspicious of such a system being capable of outperforming one with the \nsame number of (effective) - much faster - disks *plus* a dedicated WAL \ndisk pair... unless it is being a little loose about fsync! I'm happy to \nbelieve ext4 is better than ext3 - but not that much!\n\nHowever, its great to have so many different results to compare against!\n\nCheers\n\nMark\n\n", "msg_date": "Wed, 01 Apr 2009 20:57:57 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Scott Carey wrote:\n> On 3/25/09 9:28 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n>\n> \n>>\n>> Rebuilt with 64K chunksize:\n>>\n>> transaction type: TPC-B (sort of)\n>> scaling factor: 100\n>> number of clients: 24\n>> number of transactions per client: 12000\n>> number of transactions actually processed: 288000/288000\n>> tps = 866.512162 (including connections establishing)\n>> tps = 866.651320 (excluding connections establishing)\n>>\n>>\n>> So 64K looks quite a bit better. I'll endeavor to try out 256K next week\n>> too.\n>> \n>\n> Just go all the way to 1MB, md _really_ likes 1MB chunk sizes for some\n> reason. Benchmarks right and left on google show this to be optimal. 
My\n> tests with md raid 0 over hardware raid 10's ended up with that being\n> optimal as well.\n>\n> Greg's notes on aligning partitions to the chunk are key as well.\n>\n> \nRebuilt with 256K chunksize:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 942.852104 (including connections establishing)\ntps = 943.019223 (excluding connections establishing)\n\n\nA noticeable improvement again. I'm not sure that we will have time (or \npatience from the system guys that I keep bugging to redo the raid \nsetup!) to try 1M, but 256K gets us 40% or so improvement over the \noriginal 4K setup - which is quite nice!\n\nLooking on the net for md raid benchmarks, it is not 100% clear to me \nthat 1M is the overall best - several I found had tested sizes like 64K, \n128K, 512K, 1M and concluded that 1M was best - but without testing \n256K! whereas others had included ranges <=512K and decided that that \n256K was the best. I'd be very interested in seeing your data! (several \nyears ago I had carried out this type of testing - on a different type \nof machine, and for a different database vendor, but found that 256K \nseemed to give the overall best result).\n\nThe next step is to align the raid 10 partitions, as you and Greg \nsuggest and see what effect that has!\n\nThanks again\n\nMark\n", "msg_date": "Wed, 01 Apr 2009 21:11:45 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMark Kirkwood wrote:\n> Scott Carey wrote:\n>>\n>> A little extra info here >> md, LVM, and some other tools do not\n>> allow the file system to use write barriers properly.... So\n>> those are on the bad list for data integrity with SAS or SATA\n>> write caches without battery back-up. However, this is NOT an\n>> issue on the postgres data partition. Data fsync still works\n>> fine, its the file system journal that might have out-of-order\n>> writes. For xlogs, write barriers are not important, only\n>> fsync() not lying.\n>>\n>> As an additional note, ext4 uses checksums per block in the\n>> journal, so it is resistant to out of order writes causing\n>> trouble. The test compared to here was on ext4, and most likely\n>> the speed increase is partly due to that.\n>>\n>>\n>\n> [Looks at Stef's config - 2x 7200 rpm SATA RAID 0] I'm still\n> highly suspicious of such a system being capable of outperforming\n> one with the same number of (effective) - much faster - disks\n> *plus* a dedicated WAL disk pair... unless it is being a little\n> loose about fsync! I'm happy to believe ext4 is better than ext3 -\n> but not that much!\n>\n> However, its great to have so many different results to compare\n> against!\n>\n> Cheers\n>\n> Mark\n>\nHello Mark,\n For the record, this is a 'base' debian 5 install (with openVZ but\npostgreSQL is running on the base hardware, not inside a container)\nand I have -explicitly- enabled sync in the conf. Eg;\n\n\nfsync = on # turns forced\nsynchronization on or off\nsynchronous_commit = on # immediate fsync at commit\n#wal_sync_method = fsync # the default is the first option\n\n\n Infact, if I turn -off- sync commit, it gets about 200 -slower-\nrather than faster. Curiously, I also have an intel x25-m winging it's\nway here for testing/benching under postgreSQL (along with a vertex\n120gb). 
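(For reference, the pgbench numbers below are against the same stock initialization used elsewhere in this thread, i.e. something along the lines of

   createdb test_db
   /usr/lib/postgresql/8.3/bin/pgbench -i -s 100 test_db

which works out to roughly a 1.5GB dataset at scaling factor 100.)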
I had one of the nice lads on the OCZ forum bench against a\n30gb vertex ssd, and if you think -my- TPS was crazy.. you should have\nseen his.\n\n\npostgres@rob-desktop:~$ /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t\n12000 test_db\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 3662.200088 (including connections establishing)\ntps = 3664.823769 (excluding connections establishing)\n\n\n (Nb; Thread here;\nhttp://www.ocztechnologyforum.com/forum/showthread.php?t=54038 )\n\n Curiously, I think with SSD's there may have to be an 'off' flag\nif you put the xlog onto an ssd. It seems to complain about 'too\nfrequent checkpoints'.\n\n I can't wait for -either- of the drives to arrive. I want to see\nin -my- system what the speed is like for SSD's. The dataset I have to\nwork with is fairly small (30-40GB) so, using an 80GB ssd (even a few\nraided) is possible for me. Thankfully ;)\n\n Regards\n Stef\n(ps. I should note, running postgreSQL in a prod environment -without-\na nice UPS is never going to happen on my watch, so, turning on\nwrite-cache (to me) seems like a no-brainer really if it makes this\nkind of boost possible)\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niEYEARECAAYFAknTfKMACgkQANG7uQ+9D9XZ7wCfdU3JDXj1f2Em9dt7GdcxRbWR\neHUAn1zDb3HKEiAb0d/0R1MubtE44o/k\n=HXmP\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 01 Apr 2009 10:39:38 -0400", "msg_from": "Stef Telford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, Stef Telford wrote:\n\n> I have -explicitly- enabled sync in the conf...In fact, if I turn -off- \n> sync commit, it gets about 200 -slower- rather than faster.\n\nYou should take a look at \nhttp://www.postgresql.org/docs/8.3/static/wal-reliability.html\n\nAnd check the output from \"hdparm -I\" as suggested there. If turning off \nfsync doesn't improve your performance, there's almost certainly something \nwrong with your setup. As suggested before, your drives probably have \nwrite caching turned on. PostgreSQL is incapable of knowing that, and \nwill happily write in an unsafe manner even if the fsync parameter is \nturned on. There's a bunch more information on this topic at \nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm\n\nAlso: a run to run variation in pgbench results of +/-10% TPS is normal, \nso unless you saw a consistent 200 TPS gain during multiple tests my guess \nis that changing fsync for you is doing nothing, rather than you \nsuggestion that it makes things slower.\n\n> Curiously, I think with SSD's there may have to be an 'off' flag\n> if you put the xlog onto an ssd. It seems to complain about 'too\n> frequent checkpoints'.\n\nYou just need to increase checkpoint_segments from the tiny default if you \nwant to push any reasonable numbers of transactions/second through pgbench \nwithout seeing this warning. 
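Something along these lines in postgresql.conf is a reasonable starting point for this kind of run (illustrative values, not tuned for any particular box):

   checkpoint_segments = 64              # the default of 3 is what triggers the warning
   checkpoint_completion_target = 0.9    # spread the checkpoint writes out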
Same thing happens with any high-performance \ndisk setup, it's not specific to SSDs.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 1 Apr 2009 12:08:15 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGreg Smith wrote:\n> On Wed, 1 Apr 2009, Stef Telford wrote:\n>\n>> I have -explicitly- enabled sync in the conf...In fact, if I turn\n>> -off- sync commit, it gets about 200 -slower- rather than\n>> faster.\n>\n> You should take a look at\n> http://www.postgresql.org/docs/8.3/static/wal-reliability.html\n>\n> And check the output from \"hdparm -I\" as suggested there. If\n> turning off fsync doesn't improve your performance, there's almost\n> certainly something wrong with your setup. As suggested before,\n> your drives probably have write caching turned on. PostgreSQL is\n> incapable of knowing that, and will happily write in an unsafe\n> manner even if the fsync parameter is turned on. There's a bunch\n> more information on this topic at\n> http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm\n>\n> Also: a run to run variation in pgbench results of +/-10% TPS is\n> normal, so unless you saw a consistent 200 TPS gain during multiple\n> tests my guess is that changing fsync for you is doing nothing,\n> rather than you suggestion that it makes things slower.\n>\nHello Greg,\n Turning off fsync -does- increase the throughput noticeably,\n- -however-, turning off synchronous_commit seemed to slow things down\nfor me. Your right though, when I toggled the sync_commit on the\nsystem, there was a small variation with TPS coming out between 1100\nand 1300. I guess I saw the initial run and thought that there was a\n'loss' in sync_commit = off\n\n I do agree that the benefit is probably from write-caching, but I\nthink that this is a 'win' as long as you have a UPS or BBU adaptor,\nand really, in a prod environment, not having a UPS is .. well. Crazy ?\n\n>> Curiously, I think with SSD's there may have to be an 'off' flag\n>> if you put the xlog onto an ssd. It seems to complain about 'too\n>> frequent checkpoints'.\n>\n> You just need to increase checkpoint_segments from the tiny default\n> if you want to push any reasonable numbers of transactions/second\n> through pgbench without seeing this warning. Same thing happens\n> with any high-performance disk setup, it's not specific to SSDs.\n>\nGood to know, I thought it maybe was atypical behaviour due to the\nnature of SSD's.\nRegards\nStef\n> -- * Greg Smith [email protected] http://www.gregsmith.com\n> Baltimore, MD\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niEYEARECAAYFAknTky0ACgkQANG7uQ+9D9UuNwCghLLC96mj9zzZPUF4GLvBDlQk\nfyIAn0V63YZJGzfm+4zPB9zjm8YKn42X\n=A6x2\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 01 Apr 2009 12:15:41 -0400", "msg_from": "Stef Telford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, Apr 1, 2009 at 10:15 AM, Stef Telford <[email protected]> wrote:\n>     I do agree that the benefit is probably from write-caching, but I\n> think that this is a 'win' as long as you have a UPS or BBU adaptor,\n> and really, in a prod environment, not having a UPS is .. well. Crazy ?\n\nYou do know that UPSes can fail, right? 
En masse sometimes even.\n", "msg_date": "Wed, 1 Apr 2009 10:41:48 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Scott Marlowe wrote:\n> On Wed, Apr 1, 2009 at 10:15 AM, Stef Telford <[email protected]> wrote:\n> \n>> I do agree that the benefit is probably from write-caching, but I\n>> think that this is a 'win' as long as you have a UPS or BBU adaptor,\n>> and really, in a prod environment, not having a UPS is .. well. Crazy ?\n>> \n>\n> You do know that UPSes can fail, right? En masse sometimes even.\n> \nHello Scott,\n Well, the only time the UPS has failed in my memory, was during the\ngreat Eastern Seaboard power outage of 2003. Lots of fond memories\nrunning around Toronto with a gas can looking for oil for generator\npower. This said though, anything could happen, the co-lo could be taken\nout by a meteor and then sync on or off makes no difference.\n \n Good UPS, a warm PITR standby, offsite backups and regular checks is\n\"good enough\" for me, and really, that's what it all comes down to.\nMitigating risk and factors into an 'acceptable' amount for each person.\nHowever, if you see over a 2x improvement from turning write-cache 'on'\nand have everything else in place, well, that seems like a 'no-brainer'\nto me, at least ;)\n\n Regards\n Stef\n", "msg_date": "Wed, 01 Apr 2009 12:48:58 -0400", "msg_from": "Stef Telford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, Scott Marlowe wrote:\n> On Wed, Apr 1, 2009 at 10:15 AM, Stef Telford <[email protected]> wrote:\n>>     I do agree that the benefit is probably from write-caching, but I\n>> think that this is a 'win' as long as you have a UPS or BBU adaptor,\n>> and really, in a prod environment, not having a UPS is .. well. Crazy ?\n>\n> You do know that UPSes can fail, right? En masse sometimes even.\n\nI just lost all my diary appointments and address book data on my Palm \ndevice, because of a similar attitude. The device stores all its data in \nRAM, and never syncs it to permanent storage (like the SD card in the \nexpansion slot). But that's fine, right, because it has a battery, \ntherefore it can never fail? Well, it has the failure mode that if it ever \ncrashes hard, or the battery fails momentarily due to jogging around in a \npocket, then it just wipes all its data and starts from scratch.\n\nComputers crash. Hardware fails. Relying on un-backed-up RAM to keep your \ndata safe does not work.\n\nMatthew\n\n-- \n\"Programming today is a race between software engineers striving to build\n bigger and better idiot-proof programs, and the Universe trying to produce\n bigger and better idiots. So far, the Universe is winning.\" -- Rich Cook", "msg_date": "Wed, 1 Apr 2009 17:51:26 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, Apr 1, 2009 at 10:48 AM, Stef Telford <[email protected]> wrote:\n> Scott Marlowe wrote:\n>> On Wed, Apr 1, 2009 at 10:15 AM, Stef Telford <[email protected]> wrote:\n>>\n>>>     I do agree that the benefit is probably from write-caching, but I\n>>> think that this is a 'win' as long as you have a UPS or BBU adaptor,\n>>> and really, in a prod environment, not having a UPS is .. well. Crazy ?\n>>>\n>>\n>> You do know that UPSes can fail, right?  
En masse sometimes even.\n>>\n> Hello Scott,\n>    Well, the only time the UPS has failed in my memory, was during the\n> great Eastern Seaboard power outage of 2003. Lots of fond memories\n> running around Toronto with a gas can looking for oil for generator\n> power. This said though, anything could happen, the co-lo could be taken\n> out by a meteor and then sync on or off makes no difference.\n\nMeteor strike is far less likely than a power surge taking out a UPS.\nI saw a whole data center go black when a power conditioner blew out,\ntaking out the other three power conditioners, both industrial UPSes\nand the switch for the diesel generator. And I have friends who have\nseen the same type of thing before as well. The data is the most\nexpensive part of any server.\n", "msg_date": "Wed, 1 Apr 2009 10:54:58 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, Stef Telford wrote:\n> Good UPS, a warm PITR standby, offsite backups and regular checks is\n> \"good enough\" for me, and really, that's what it all comes down to.\n> Mitigating risk and factors into an 'acceptable' amount for each person.\n> However, if you see over a 2x improvement from turning write-cache 'on'\n> and have everything else in place, well, that seems like a 'no-brainer'\n> to me, at least ;)\n\nIn that case, buying a battery-backed-up cache in the RAID controller \nwould be even more of a no-brainer.\n\nMatthew\n\n-- \n If pro is the opposite of con, what is the opposite of progress?\n", "msg_date": "Wed, 1 Apr 2009 18:01:18 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, Apr 1, 2009 at 11:01 AM, Matthew Wakeling <[email protected]> wrote:\n> On Wed, 1 Apr 2009, Stef Telford wrote:\n>>\n>>   Good UPS, a warm PITR standby, offsite backups and regular checks is\n>> \"good enough\" for me, and really, that's what it all comes down to.\n>> Mitigating risk and factors into an 'acceptable' amount for each person.\n>> However, if you see over a 2x improvement from turning write-cache 'on'\n>> and have everything else in place, well, that seems like a 'no-brainer'\n>> to me, at least ;)\n>\n> In that case, buying a battery-backed-up cache in the RAID controller would\n> be even more of a no-brainer.\n\nThis is especially true in that you can reduce downtime. A lot of\ntimes downtime costs as much as anything else.\n", "msg_date": "Wed, 1 Apr 2009 11:04:12 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Wed, 1 Apr 2009, Stef Telford wrote:\n>> Good UPS, a warm PITR standby, offsite backups and regular checks is\n>> \"good enough\" for me, and really, that's what it all comes down to.\n>> Mitigating risk and factors into an 'acceptable' amount for each person.\n>> However, if you see over a 2x improvement from turning write-cache 'on'\n>> and have everything else in place, well, that seems like a 'no-brainer'\n>> to me, at least ;)\n>\n> In that case, buying a battery-backed-up cache in the RAID controller\n> would be even more of a no-brainer.\n>\n> Matthew\n>\nHey Matthew,\n See about 3 messages ago.. We already have them (I did say UPS or\nBBU, it should have been a logical 'and' instead of logical 'or' .. my\nbad ;). 
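(One habit worth adding alongside that: check the BBU health regularly, since a dead battery silently turns a safe write-back cache into an unsafe one. On the 3ware CLI it is something like 'tw_cli /c0/bbu show all' -- the controller number is a placeholder and the exact syntax depends on the tw_cli version -- and it is cheap to run from cron.)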
Your right though, that was a no-brainer as well.\n\n I am wondering how the card (3ware 9550sx) will work with SSD's, md\nor lvm, blocksize, ext3 or ext4 .. but.. this is the point of\nbenchmarking ;)\n\n Regards\n Stef\n", "msg_date": "Wed, 01 Apr 2009 13:10:48 -0400", "msg_from": "Stef Telford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, Scott Marlowe wrote:\n\n> Meteor strike is far less likely than a power surge taking out a UPS.\n\nI average having a system go down during a power outage because the UPS it \nwas attached to wasn't working right anymore about once every five years. \nAnd I don't usually manage that many systems.\n\nThe only real way to know if a UPS is working right is to actually detach \npower and confirm the battery still works, which is downtime nobody ever \nfeels is warranted for a production system. Then, one day the power dies, \nthe UPS battery doesn't work to spec anymore, and you're done.\n\nOf course, I have a BBC controller in my home desktop, so that gives you \nan idea where I'm at as far as paranoia here goes.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 1 Apr 2009 13:49:46 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, Greg Smith wrote:\n> The only real way to know if a UPS is working right is to actually detach \n> power and confirm the battery still works, which is downtime nobody ever \n> feels is warranted for a production system. Then, one day the power dies, \n> the UPS battery doesn't work to spec anymore, and you're done.\n\nMost decent servers have dual power supplies, and they should really be \nconnected to two independent UPS units. You can test them one by one \nwithout much risk of bringing down your server.\n\nMatthew\n\n-- \n Okay, I'm weird! But I'm saving up to be eccentric.\n", "msg_date": "Wed, 1 Apr 2009 18:54:04 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, Apr 1, 2009 at 11:54 AM, Matthew Wakeling <[email protected]> wrote:\n> On Wed, 1 Apr 2009, Greg Smith wrote:\n>>\n>> The only real way to know if a UPS is working right is to actually detach\n>> power and confirm the battery still works, which is downtime nobody ever\n>> feels is warranted for a production system.  Then, one day the power dies,\n>> the UPS battery doesn't work to spec anymore, and you're done.\n>\n> Most decent servers have dual power supplies, and they should really be\n> connected to two independent UPS units. You can test them one by one without\n> much risk of bringing down your server.\n\nYeah, our primary DB servers have three PSes and can run on any two\njust fine. We have three power busses each coming from a different\nUPS at the hosting center.\n", "msg_date": "Wed, 1 Apr 2009 11:58:49 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nStef Telford wrote:\n> Mark Kirkwood wrote:\n>> Scott Carey wrote:\n>>> A little extra info here >> md, LVM, and some other tools do\n>>> not allow the file system to use write barriers properly.... So\n>>> those are on the bad list for data integrity with SAS or SATA\n>>> write caches without battery back-up. 
However, this is NOT an\n>>> issue on the postgres data partition. Data fsync still works\n>>> fine, its the file system journal that might have out-of-order\n>>> writes. For xlogs, write barriers are not important, only\n>>> fsync() not lying.\n>>>\n>>> As an additional note, ext4 uses checksums per block in the\n>>> journal, so it is resistant to out of order writes causing\n>>> trouble. The test compared to here was on ext4, and most\n>>> likely the speed increase is partly due to that.\n>>>\n>>>\n>> [Looks at Stef's config - 2x 7200 rpm SATA RAID 0] I'm still\n>> highly suspicious of such a system being capable of outperforming\n>> one with the same number of (effective) - much faster - disks\n>> *plus* a dedicated WAL disk pair... unless it is being a little\n>> loose about fsync! I'm happy to believe ext4 is better than ext3\n>> - but not that much!\n>\n>> However, its great to have so many different results to compare\n>> against!\n>\n>> Cheers\n>\n>> Mark\n>\n> postgres@rob-desktop:~$ /usr/lib/postgresql/8.3/bin/pgbench -c 24\n> -t 12000 test_db starting vacuum...end. transaction type: TPC-B\n> (sort of) scaling factor: 100 number of clients: 24 number of\n> transactions per client: 12000 number of transactions actually\n> processed: 288000/288000 tps = 3662.200088 (including connections\n> establishing) tps = 3664.823769 (excluding connections\n> establishing)\n>\n>\n> (Nb; Thread here;\n> http://www.ocztechnologyforum.com/forum/showthread.php?t=54038 )\nFyi, I got my intel x25-m in the mail, and I have been benching it for\nthe past hour or so. Here are some of the rough and ready figures.\nNote that I don't get anywhere near the vertex benchmark. I did\nhotplug it and made the filesystem using Theodore Ts'o webpage\ndirections (\nhttp://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/\n) ; The only thing is, ext3/4 seems to be fixated on a blocksize of\n4k, I am wondering if this could be part of the 'problem'. Any\nideas/thoughts on tuning gratefully received.\n\nAnyway, benchmarks (same system as previously, etc)\n\n(ext4dev, 4k block size, pg_xlog on 2x7.2krpm raid-0, rest on SSD)\n\nroot@debian:~# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000 test_db\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 1407.254118 (including connections establishing)\ntps = 1407.645996 (excluding connections establishing)\n\n(ext4dev, 4k block size, everything on SSD)\n\nroot@debian:~# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000 test_db\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 2130.734705 (including connections establishing)\ntps = 2131.545519 (excluding connections establishing)\n\n(I wanted to try and see if random_page_cost dropped down to 2.0,\nsequential_page_cost = 2.0 would make a difference. 
Eg; making the\nplanner aware that a random was the same cost as a sequential)\n\nroot@debian:/var/lib/postgresql/8.3/main#\n/usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000 test_db\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 1982.481185 (including connections establishing)\ntps = 1983.223281 (excluding connections establishing)\n\n\nRegards\nStef\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niEYEARECAAYFAknTxccACgkQANG7uQ+9D9XoPgCfRwWwh0jTIs1iDQBVVdQJW/JN\nCBcAn3zoOO33BnYC/FgmFzw1I+isWvJh\n=0KYa\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 01 Apr 2009 15:51:35 -0400", "msg_from": "Stef Telford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, Mark Kirkwood wrote:\n\n> Scott Carey wrote:\n>> \n>> A little extra info here >> md, LVM, and some other tools do not allow the\n>> file system to use write barriers properly.... So those are on the bad list\n>> for data integrity with SAS or SATA write caches without battery back-up.\n>> However, this is NOT an issue on the postgres data partition. Data fsync\n>> still works fine, its the file system journal that might have out-of-order\n>> writes. For xlogs, write barriers are not important, only fsync() not\n>> lying.\n>> \n>> As an additional note, ext4 uses checksums per block in the journal, so it\n>> is resistant to out of order writes causing trouble. The test compared to\n>> here was on ext4, and most likely the speed increase is partly due to that.\n>>\n>> \n>\n> [Looks at Stef's config - 2x 7200 rpm SATA RAID 0] I'm still highly \n> suspicious of such a system being capable of outperforming one with the same \n> number of (effective) - much faster - disks *plus* a dedicated WAL disk \n> pair... unless it is being a little loose about fsync! I'm happy to believe \n> ext4 is better than ext3 - but not that much!\n\ngiven how _horrible_ ext3 is with fsync, I can belive it more easily with \nfsync turned on than with it off.\n\nDavid Lang\n\n> However, its great to have so many different results to compare against!\n>\n> Cheers\n>\n> Mark\n>\n>\n>\n", "msg_date": "Wed, 1 Apr 2009 13:38:34 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nStef Telford wrote:\n> Stef Telford wrote:\n>> Mark Kirkwood wrote:\n>>> Scott Carey wrote:\n>>>> A little extra info here >> md, LVM, and some other tools do\n>>>> not allow the file system to use write barriers properly....\n>>>> So those are on the bad list for data integrity with SAS or\n>>>> SATA write caches without battery back-up. However, this is\n>>>> NOT an issue on the postgres data partition. Data fsync\n>>>> still works fine, its the file system journal that might have\n>>>> out-of-order writes. For xlogs, write barriers are not\n>>>> important, only fsync() not lying.\n>>>>\n>>>> As an additional note, ext4 uses checksums per block in the\n>>>> journal, so it is resistant to out of order writes causing\n>>>> trouble. 
The test compared to here was on ext4, and most\n>>>> likely the speed increase is partly due to that.\n>>>>\n>>>>\n>>> [Looks at Stef's config - 2x 7200 rpm SATA RAID 0] I'm still\n>>> highly suspicious of such a system being capable of\n>>> outperforming one with the same number of (effective) - much\n>>> faster - disks *plus* a dedicated WAL disk pair... unless it is\n>>> being a little loose about fsync! I'm happy to believe ext4 is\n>>> better than ext3 - but not that much! However, its great to\n>>> have so many different results to compare against! Cheers Mark\n>> postgres@rob-desktop:~$ /usr/lib/postgresql/8.3/bin/pgbench -c 24\n>> -t 12000 test_db starting vacuum...end. transaction type: TPC-B\n>> (sort of) scaling factor: 100 number of clients: 24 number of\n>> transactions per client: 12000 number of transactions actually\n>> processed: 288000/288000 tps = 3662.200088 (including connections\n>> establishing) tps = 3664.823769 (excluding connections\n>> establishing)\n>\n>\n>> (Nb; Thread here;\n>> http://www.ocztechnologyforum.com/forum/showthread.php?t=54038 )\n> Fyi, I got my intel x25-m in the mail, and I have been benching it\n> for the past hour or so. Here are some of the rough and ready\n> figures. Note that I don't get anywhere near the vertex benchmark.\n> I did hotplug it and made the filesystem using Theodore Ts'o\n> webpage directions (\n> http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/\n> ) ; The only thing is, ext3/4 seems to be fixated on a blocksize\n> of 4k, I am wondering if this could be part of the 'problem'. Any\n> ideas/thoughts on tuning gratefully received.\n>\n> Anyway, benchmarks (same system as previously, etc)\n>\n> (ext4dev, 4k block size, pg_xlog on 2x7.2krpm raid-0, rest on SSD)\n>\n> root@debian:~# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000\n> test_db starting vacuum...end. transaction type: TPC-B (sort of)\n> scaling factor: 100 number of clients: 24 number of transactions\n> per client: 12000 number of transactions actually processed:\n> 288000/288000 tps = 1407.254118 (including connections\n> establishing) tps = 1407.645996 (excluding connections\n> establishing)\n>\n> (ext4dev, 4k block size, everything on SSD)\n>\n> root@debian:~# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000\n> test_db starting vacuum...end. transaction type: TPC-B (sort of)\n> scaling factor: 100 number of clients: 24 number of transactions\n> per client: 12000 number of transactions actually processed:\n> 288000/288000 tps = 2130.734705 (including connections\n> establishing) tps = 2131.545519 (excluding connections\n> establishing)\n>\n> (I wanted to try and see if random_page_cost dropped down to 2.0,\n> sequential_page_cost = 2.0 would make a difference. Eg; making the\n> planner aware that a random was the same cost as a sequential)\n>\n> root@debian:/var/lib/postgresql/8.3/main#\n> /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000 test_db starting\n> vacuum...end. 
transaction type: TPC-B (sort of) scaling factor: 100\n> number of clients: 24 number of transactions per client: 12000\n> number of transactions actually processed: 288000/288000 tps =\n> 1982.481185 (including connections establishing) tps = 1983.223281\n> (excluding connections establishing)\n>\n>\n> Regards Stef\n\nHere is the single x25-m SSD, write cache -disabled-, XFS, noatime\nmounted using the no-op scheduler;\n\nstef@debian:~$ sudo /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000\ntest_db\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 1427.781843 (including connections establishing)\ntps = 1428.137858 (excluding connections establishing)\n\nRegards\nStef\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niEYEARECAAYFAknT0hEACgkQANG7uQ+9D9X8zQCfcJ+tRQ7Sh6/YQImPejfZr/h4\n/QcAn0hZujC1+f+4tBSF8EhNgR6q44kc\n=XzG/\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 01 Apr 2009 16:44:01 -0400", "msg_from": "Stef Telford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, [email protected] wrote:\n\n> On Wed, 1 Apr 2009, Mark Kirkwood wrote:\n>\n>> Scott Carey wrote:\n>>> \n>>> A little extra info here >> md, LVM, and some other tools do not allow \n>>> the\n>>> file system to use write barriers properly.... So those are on the bad \n>>> list\n>>> for data integrity with SAS or SATA write caches without battery back-up.\n>>> However, this is NOT an issue on the postgres data partition. Data fsync\n>>> still works fine, its the file system journal that might have out-of-order\n>>> writes. For xlogs, write barriers are not important, only fsync() not\n>>> lying.\n>>> \n>>> As an additional note, ext4 uses checksums per block in the journal, so it\n>>> is resistant to out of order writes causing trouble. The test compared to\n>>> here was on ext4, and most likely the speed increase is partly due to \n>>> that.\n>>> \n>>> \n>> \n>> [Looks at Stef's config - 2x 7200 rpm SATA RAID 0] I'm still highly \n>> suspicious of such a system being capable of outperforming one with the \n>> same number of (effective) - much faster - disks *plus* a dedicated WAL \n>> disk pair... unless it is being a little loose about fsync! 
I'm happy to \n>> believe ext4 is better than ext3 - but not that much!\n>\n> given how _horrible_ ext3 is with fsync, I can belive it more easily with \n> fsync turned on than with it off.\n\nI realized after sending this that I needed to elaborate a little more.\n\nover the last week there has been a _huge_ thread on the linux-kernel list \n(>400 messages) that is summarized on lwn.net at \nhttp://lwn.net/SubscriberLink/326471/b7f5fedf0f7c545f/\n\nthere is a lot of information in this thread, but one big thing is that in \ndata=ordered mode (the default for most distros) ext3 can end up having to \nwrite all pending data when you do a fsync on one file, In addition \nreading from disk can take priority over writing the journal entry (the IO \nscheduler assumes that there is someone waiting for a read, but not for a \nwrite), so if you have one process trying to do a fsync and another \nreading from the disk, the one doing the fsync needs to wait until the \ndisk is idle to get the fsync completed.\n\next4 does things enough differently that fsyncs are relativly cheap again \n(like they are on XFS, ext2, and other filesystems). the tradeoff is that \nif you _don't_ do an fsync there is a increased window where you will get \ndata corruption if you crash.\n\nDavid Lang\n", "msg_date": "Wed, 1 Apr 2009 13:47:31 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 4/1/09 9:15 AM, \"Stef Telford\" <[email protected]> wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Greg Smith wrote:\n>> On Wed, 1 Apr 2009, Stef Telford wrote:\n>> \n>>> I have -explicitly- enabled sync in the conf...In fact, if I turn\n>>> -off- sync commit, it gets about 200 -slower- rather than\n>>> faster.\n>> \n>> You should take a look at\n>> http://www.postgresql.org/docs/8.3/static/wal-reliability.html\n>> \n>> And check the output from \"hdparm -I\" as suggested there. If\n>> turning off fsync doesn't improve your performance, there's almost\n>> certainly something wrong with your setup. As suggested before,\n>> your drives probably have write caching turned on. PostgreSQL is\n>> incapable of knowing that, and will happily write in an unsafe\n>> manner even if the fsync parameter is turned on. There's a bunch\n>> more information on this topic at\n>> http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm\n>> \n>> Also: a run to run variation in pgbench results of +/-10% TPS is\n>> normal, so unless you saw a consistent 200 TPS gain during multiple\n>> tests my guess is that changing fsync for you is doing nothing,\n>> rather than you suggestion that it makes things slower.\n>> \n> Hello Greg,\n> Turning off fsync -does- increase the throughput noticeably,\n> - -however-, turning off synchronous_commit seemed to slow things down\n> for me. Your right though, when I toggled the sync_commit on the\n> system, there was a small variation with TPS coming out between 1100\n> and 1300. I guess I saw the initial run and thought that there was a\n> 'loss' in sync_commit = off\n> \n> I do agree that the benefit is probably from write-caching, but I\n> think that this is a 'win' as long as you have a UPS or BBU adaptor,\n> and really, in a prod environment, not having a UPS is .. well. Crazy ?\n\nWrite caching on SATA is totally fine. There were some old ATA drives that\nwhen paried with some file systems or OS's would not be safe. There are\nsome combinations that have unsafe write barriers. 
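(The usual way to spot one of those bad combinations is the mount options plus the kernel log -- illustrative for the mainstream Linux filesystems of this era, defaults vary by distro: ext3 only uses barriers when mounted with barrier=1, XFS uses them unless mounted with nobarrier, and a dmesg line along the lines of 'JBD: barrier-based sync failed on md0 - disabling barriers' means the block layer refused them and you are back to trusting the drive cache.)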
But there is a standard\nwell supported ATA command to sync and only return after the data is on\ndisk. If you are running an OS that is anything recent at all, and any\ndisks that are not really old, you're fine.\n\nThe notion that current SATA systems are unsafe to have write caching (or\nSAS for that matter) is not fully informed. You have to pair it with a file\nsystem and OS that doesn't issue the necessary cache flush commands to sync.\n\n\n\n", "msg_date": "Wed, 1 Apr 2009 15:07:37 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 4/1/09 10:01 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n\n> On Wed, 1 Apr 2009, Stef Telford wrote:\n>> Good UPS, a warm PITR standby, offsite backups and regular checks is\n>> \"good enough\" for me, and really, that's what it all comes down to.\n>> Mitigating risk and factors into an 'acceptable' amount for each person.\n>> However, if you see over a 2x improvement from turning write-cache 'on'\n>> and have everything else in place, well, that seems like a 'no-brainer'\n>> to me, at least ;)\n> \n> In that case, buying a battery-backed-up cache in the RAID controller\n> would be even more of a no-brainer.\n> \n> Matthew\n> \n\nWhy? Honestly, SATA write cache is safer than a battery backed raid card.\nThe raid card is one more point of failure, and SATA write caches with a\nmodern file system is safe.\n\n\n> --\n> If pro is the opposite of con, what is the opposite of progress?\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 1 Apr 2009 15:14:34 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 4/1/09 9:54 AM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> On Wed, Apr 1, 2009 at 10:48 AM, Stef Telford <[email protected]> wrote:\n>> Scott Marlowe wrote:\n>>> On Wed, Apr 1, 2009 at 10:15 AM, Stef Telford <[email protected]> wrote:\n>>> \n>>>>     I do agree that the benefit is probably from write-caching, but I\n>>>> think that this is a 'win' as long as you have a UPS or BBU adaptor,\n>>>> and really, in a prod environment, not having a UPS is .. well. Crazy ?\n>>>> \n>>> \n>>> You do know that UPSes can fail, right?  En masse sometimes even.\n>>> \n>> Hello Scott,\n>>    Well, the only time the UPS has failed in my memory, was during the\n>> great Eastern Seaboard power outage of 2003. Lots of fond memories\n>> running around Toronto with a gas can looking for oil for generator\n>> power. This said though, anything could happen, the co-lo could be taken\n>> out by a meteor and then sync on or off makes no difference.\n> \n> Meteor strike is far less likely than a power surge taking out a UPS.\n> I saw a whole data center go black when a power conditioner blew out,\n> taking out the other three power conditioners, both industrial UPSes\n> and the switch for the diesel generator. And I have friends who have\n> seen the same type of thing before as well. The data is the most\n> expensive part of any server.\n> \nYeah, well I¹ve had a RAID card die, which broke its Battery backed cache.\nThey¹re all unsafe, technically.\n\nIn fact, not only are battery backed caches unsafe, but hard drives. They\ncan return bad data. 
So if you want to be really safe:\n\n1: don't use Linux -- you have to use something with full data and metadata\nchecksums like ZFS or very expensive proprietary file systems.\n2: combine it with mirrored SSD's that don't use write cache (so you can\nhave fsync perf about as good as a battery backed raid card without that\nrisk).\n4: keep a live redundant system with a PITR backup at another site that can\nrecover in a short period of time.\n3: Run in a datacenter well underground with a plutonium nuclear power\nsupply. Meteor strikes and Nuclear holocaust, beware!\n\n", "msg_date": "Wed, 1 Apr 2009 15:15:36 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 4/1/09 1:44 PM, \"Stef Telford\" <[email protected]> wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Stef Telford wrote:\n>> Stef Telford wrote:\n>> Fyi, I got my intel x25-m in the mail, and I have been benching it\n>> for the past hour or so. Here are some of the rough and ready\n>> figures. Note that I don't get anywhere near the vertex benchmark.\n>> I did hotplug it and made the filesystem using Theodore Ts'o\n>> webpage directions (\n>> http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-\n>> block-size/\n>> ) ; The only thing is, ext3/4 seems to be fixated on a blocksize\n>> of 4k, I am wondering if this could be part of the 'problem'. Any\n>> ideas/thoughts on tuning gratefully received.\n>> \n>> Anyway, benchmarks (same system as previously, etc)\n>> \n>> (ext4dev, 4k block size, pg_xlog on 2x7.2krpm raid-0, rest on SSD)\n>> \n>> root@debian:~# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000\n>> test_db starting vacuum...end. transaction type: TPC-B (sort of)\n>> scaling factor: 100 number of clients: 24 number of transactions\n>> per client: 12000 number of transactions actually processed:\n>> 288000/288000 tps = 1407.254118 (including connections\n>> establishing) tps = 1407.645996 (excluding connections\n>> establishing)\n>> \n>> (ext4dev, 4k block size, everything on SSD)\n>> \n>> root@debian:~# /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000\n>> test_db starting vacuum...end. transaction type: TPC-B (sort of)\n>> scaling factor: 100 number of clients: 24 number of transactions\n>> per client: 12000 number of transactions actually processed:\n>> 288000/288000 tps = 2130.734705 (including connections\n>> establishing) tps = 2131.545519 (excluding connections\n>> establishing)\n>> \n>> (I wanted to try and see if random_page_cost dropped down to 2.0,\n>> sequential_page_cost = 2.0 would make a difference. Eg; making the\n>> planner aware that a random was the same cost as a sequential)\n>> \n>> root@debian:/var/lib/postgresql/8.3/main#\n>> /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000 test_db starting\n>> vacuum...end. 
transaction type: TPC-B (sort of) scaling factor: 100\n>> number of clients: 24 number of transactions per client: 12000\n>> number of transactions actually processed: 288000/288000 tps =\n>> 1982.481185 (including connections establishing) tps = 1983.223281\n>> (excluding connections establishing)\n>> \n>> \n>> Regards Stef\n> \n> Here is the single x25-m SSD, write cache -disabled-, XFS, noatime\n> mounted using the no-op scheduler;\n> \n> stef@debian:~$ sudo /usr/lib/postgresql/8.3/bin/pgbench -c 24 -t 12000\n> test_db\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 24\n> number of transactions per client: 12000\n> number of transactions actually processed: 288000/288000\n> tps = 1427.781843 (including connections establishing)\n> tps = 1428.137858 (excluding connections establishing)\n\n\nOk, in my experience the next step to better performance on this setup in\nsituations not involving pg_bench is to turn dirty_background_ratio down to\na very small number (1 or 2). However, pg_bench relies quite a bit on the\nOS postponing writes due to its quirkiness. Depending on the scaling factor\nto memory ratio and how big shared_buffers is, results may vary.\n\nSo I'm not going to predict that that will help this particular case, but am\ncommenting that in general I have gotten the best throughput and lowest\nlatency with a low dirty_background_ratio and the noop scheduler when using\nthe Intel SSDs. I've tried all the other scheduler and queue tunables,\nwithout much result. Increasing max_sectors_kb helped a bit in some cases,\nbut it seemed inconsistent.\n\nThe Vertex does some things differently that might be very good for postgres\n(but bad for some other apps) as from what I've seen it prioritizes writes\nmore.\n\nFurthermore, it has and uses a write cache from what I've read... The Intel\ndrives don't use a write cache at all (The RAM is for the LBA > Physical map\nand management). 
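(To be concrete about the tunables mentioned above -- illustrative values and a placeholder device name, not a recommendation for any particular workload:

   echo 1 > /proc/sys/vm/dirty_background_ratio    # or vm.dirty_background_ratio = 1 via sysctl.conf
   echo noop > /sys/block/sdb/queue/scheduler      # sdb being whatever the SSD enumerates as

)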
If the vertex is way faster, I would suspect that its\nwrite cache may not be properly honoring cache flush commands.\n\nI have an app where I wish to keep the read latency as low as possible while\ndoing a large batch write with the write at ~90% disk utilization, and the\nIntels destroy everything else at that task so far.\n\nAnd in all honesty, I trust the Intel's data integrity a lot more than OCZ\nfor now.\n\n> \n> Regards\n> Stef\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.9 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n> \n> iEYEARECAAYFAknT0hEACgkQANG7uQ+9D9X8zQCfcJ+tRQ7Sh6/YQImPejfZr/h4\n> /QcAn0hZujC1+f+4tBSF8EhNgR6q44kc\n> =XzG/\n> -----END PGP SIGNATURE-----\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 1 Apr 2009 15:35:57 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, Scott Carey wrote:\n\n> On 4/1/09 9:54 AM, \"Scott Marlowe\" <[email protected]> wrote:\n>\n>> On Wed, Apr 1, 2009 at 10:48 AM, Stef Telford <[email protected]> wrote:\n>>> Scott Marlowe wrote:\n>>>> On Wed, Apr 1, 2009 at 10:15 AM, Stef Telford <[email protected]> wrote:\n>>>>\n>>>>> � � I do agree that the benefit is probably from write-caching, but I\n>>>>> think that this is a 'win' as long as you have a UPS or BBU adaptor,\n>>>>> and really, in a prod environment, not having a UPS is .. well. Crazy ?\n>>>>>\n>>>>\n>>>> You do know that UPSes can fail, right? �En masse sometimes even.\n>>>>\n>>> Hello Scott,\n>>> � �Well, the only time the UPS has failed in my memory, was during the\n>>> great Eastern Seaboard power outage of 2003. Lots of fond memories\n>>> running around Toronto with a gas can looking for oil for generator\n>>> power. This said though, anything could happen, the co-lo could be taken\n>>> out by a meteor and then sync on or off makes no difference.\n>>\n>> Meteor strike is far less likely than a power surge taking out a UPS.\n>> I saw a whole data center go black when a power conditioner blew out,\n>> taking out the other three power conditioners, both industrial UPSes\n>> and the switch for the diesel generator. And I have friends who have\n>> seen the same type of thing before as well. The data is the most\n>> expensive part of any server.\n>>\n> Yeah, well I?ve had a RAID card die, which broke its Battery backed cache.\n> They?re all unsafe, technically.\n>\n> In fact, not only are battery backed caches unsafe, but hard drives. They\n> can return bad data. So if you want to be really safe:\n>\n> 1: don't use Linux -- you have to use something with full data and metadata\n> checksums like ZFS or very expensive proprietary file systems.\n\nthis will involve other tradeoffs\n\n> 2: combine it with mirrored SSD's that don't use write cache (so you can\n> have fsync perf about as good as a battery backed raid card without that\n> risk).\n\nthey _all_ have write caches. a beast like you are looking for doesn't \nexist\n\n> 4: keep a live redundant system with a PITR backup at another site that can\n> recover in a short period of time.\n\na good option to keep in mind (and when the new replication code becomes \navailable, that will be even better)\n\n> 3: Run in a datacenter well underground with a plutonium nuclear power\n> supply. 
Meteor strikes and Nuclear holocaust, beware!\n\nat some point all that will fail\n\nbut you missed point #5 (in many ways a more important point than the \nothers that you describe)\n\nswitch from using postgres to using a database that can do two-phase \ncommits across redundant machines so that you know the data is safe on \nmultiple systems before the command is considered complete.\n\nDavid Lang\n\nFrom: Scott Marlowe <[email protected]>\nDate: Wed, 1 Apr 2009 17:39:29 -0600\nSubject: Re: Raid 10 chunksize\n\nOn Wed, Apr 1, 2009 at 4:15 PM, Scott Carey <[email protected]> wrote:\n>\n> On 4/1/09 9:54 AM, \"Scott Marlowe\" <[email protected]> wrote:\n>\n>> On Wed, Apr 1, 2009 at 10:48 AM, Stef Telford <[email protected]> wrote:\n>>> Scott Marlowe wrote:\n>>>> On Wed, Apr 1, 2009 at 10:15 AM, Stef Telford <[email protected]> wrote:\n>>>>\n>>>>>     I do agree that the benefit is probably from write-caching, but I\n>>>>> think that this is a 'win' as long as you have a UPS or BBU adaptor,\n>>>>> and really, in a prod environment, not having a UPS is .. well. 
Crazy ?\n>>>>>\n>>>>\n>>>> You do know that UPSes can fail, right?  En masse sometimes even.\n>>>>\n>>> Hello Scott,\n>>>    Well, the only time the UPS has failed in my memory, was during the\n>>> great Eastern Seaboard power outage of 2003. Lots of fond memories\n>>> running around Toronto with a gas can looking for oil for generator\n>>> power. This said though, anything could happen, the co-lo could be taken\n>>> out by a meteor and then sync on or off makes no difference.\n>>\n>> Meteor strike is far less likely than a power surge taking out a UPS.\n>> I saw a whole data center go black when a power conditioner blew out,\n>> taking out the other three power conditioners, both industrial UPSes\n>> and the switch for the diesel generator.  And I have friends who have\n>> seen the same type of thing before as well.  The data is the most\n>> expensive part of any server.\n>>\n> Yeah, well I've had a RAID card die, which broke its Battery backed cache.\n> They're all unsafe, technically.\n\nThat's why you use two controllers with mirror sets across them and\nthem RAID-0 across the top.  But I know what you mean.  Now the mobo\nand memory are the single point of failure.  Next stop, sequent etc.\n\n> In fact, not only are battery backed caches unsafe, but hard drives.  They\n> can return bad data.  So if you want to be really safe:\n>\n> 1: don't use Linux -- you have to use something with full data and metadata\n> checksums like ZFS or very expensive proprietary file systems.\n\nYou'd better be running them on sequent or Sysplex mainframe type hardware.\n\n> 4: keep a live redundant system with a PITR backup at another site that can\n> recover in a short period of time.\n> 3: Run in a datacenter well underground with a plutonium nuclear power\n> supply.  Meteor strikes and Nuclear holocaust, beware!\n\nPleaze, such hyperbole!  Everyone know it can run on uranium just as\nwell.  I'm sure these guys:\nhttp://royal.pingdom.com/2008/11/14/the-worlds-most-super-designed-data-center-fit-for-a-james-bond-villain/\ncan sort that out for you.\n", "msg_date": "Wed, 1 Apr 2009 15:59:18 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Stef Telford wrote:\n>\n> Hello Mark,\n> For the record, this is a 'base' debian 5 install (with openVZ but\n> postgreSQL is running on the base hardware, not inside a container)\n> and I have -explicitly- enabled sync in the conf. Eg;\n>\n>\n> fsync = on # turns forced\n>\n>\n> Infact, if I turn -off- sync commit, it gets about 200 -slower-\n> rather than faster. \n> \nSorry Stef - didn't mean to doubt you....merely your disks!\n\nCheers\n\nMark\n", "msg_date": "Thu, 02 Apr 2009 19:19:17 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Greg Smith wrote:\n>\n>> Yeah - with 64K chunksize I'm seeing a result more congruent with \n>> yours (866 or so for 24 clients)\n>\n> That's good to hear. If adjusting that helped so much, you might \n> consider aligning the filesystem partitions to the chunk size too; the \n> partition header usually screws that up on Linux. See these two \n> references for ideas: \n> http://www.vmware.com/resources/techresources/608 \n> http://spiralbound.net/2008/06/09/creating-linux-partitions-for-clariion\n>\n\nWell I went away and did this (actually organized for the system \nfolks to...). 
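(For anyone wanting to try the same thing, the gist is simply making each partition start on a multiple of the chunk size -- a sketch of the usual procedure, not necessarily exactly what our system folks ran: check the current offsets with 'fdisk -lu /dev/sda' (the historic default start at sector 63 is what mis-aligns things), then recreate the partitions so they start on a multiple of the chunk, e.g. sector 2048, which parted can do directly with something like 'parted /dev/sda mkpart primary 1MiB 100%'.)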
Retesting showed no appreciable difference (if anything \nslower). Then I got to thinking:\n\nFor a partition created on a (hardware) raided device, sure - alignment \nis very important, however in my case we are using software (md) raid - \nwhich creates devices out of individual partitions (which are on \nindividual SAS disks) e.g:\n\nmd3 : active raid10 sda4[0] sdd4[3] sdc4[2] sdb4[1]\n 177389056 blocks 256K chunks 2 near-copies [4/4] [UUUU]\n\nI'm thinking that alignment issues do not apply here, as md will \nallocate chunks starting at the beginning of wherever sda4 (etc) begins \n- so the absolute starting position of sda4 is irrelevant. Or am I \nmissing something?\n\nThanks again\n\nMark\n\n", "msg_date": "Thu, 02 Apr 2009 19:31:11 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, 1 Apr 2009, Scott Carey wrote:\n\n> Write caching on SATA is totally fine. There were some old ATA drives that\n> when paried with some file systems or OS's would not be safe. There are\n> some combinations that have unsafe write barriers. But there is a standard\n> well supported ATA command to sync and only return after the data is on\n> disk. If you are running an OS that is anything recent at all, and any\n> disks that are not really old, you're fine.\n\nWhile I would like to believe this, I don't trust any claims in this area \nthat don't have matching tests that demonstrate things working as \nexpected. And I've never seen this work.\n\nMy laptop has a 7200 RPM drive, which means that if fsync is being passed \nthrough to the disk correctly I can only fsync <120 times/second. Here's \nwhat I get when I run sysbench on it, starting with the default ext3 \nconfiguration:\n\n$ uname -a\nLinux gsmith-t500 2.6.28-11-generic #38-Ubuntu SMP Fri Mar 27 09:00:52 UTC 2009 i686 GNU/Linux\n\n$ mount\n/dev/sda3 on / type ext3 (rw,relatime,errors=remount-ro)\n\n$ sudo hdparm -I /dev/sda | grep FLUSH\n \t *\tMandatory FLUSH_CACHE\n \t *\tFLUSH_CACHE_EXT\n\n$ ~/sysbench-0.4.8/sysbench/sysbench --test=fileio --file-fsync-freq=1 --file-num=1 --file-total-size=16384 --file-test-mode=rndwr run\nsysbench v0.4.8: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nExtra file open flags: 0\n1 files, 16Kb each\n16Kb total file size\nBlock size 16Kb\nNumber of random requests for random IO: 10000\nRead/Write ratio for combined random IO test: 1.50\nPeriodic FSYNC enabled, calling fsync() each 1 requests.\nCalling fsync() at the end of test, Enabled.\nUsing synchronous I/O mode\nDoing random write test\nThreads started!\nDone.\n\nOperations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\nRead 0b Written 156.25Mb Total transferred 156.25Mb (39.176Mb/sec)\n 2507.29 Requests/sec executed\n\n\nOK, that's clearly cached writes where the drive is lying about fsync. \nThe claim is that since my drive supports both the flush calls, I just \nneed to turn on barrier support, right?\n\n[Edit /etc/fstab to remount with barriers]\n\n$ mount\n/dev/sda3 on / type ext3 (rw,relatime,errors=remount-ro,barrier=1)\n\n[sysbench again]\n\n 2612.74 Requests/sec executed\n\n-----\n\nThis is basically how this always works for me: somebody claims barriers \nand/or SATA disks work now, no really this time. I test, they give \nanswers that aren't possible if fsync were working properly, I conclude \nturning off the write cache is just as necessary as it always was. 
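(The cache toggle itself is a one-liner -- generic commands, device names are placeholders: 'hdparm -W0 /dev/sda' for SATA, or 'sdparm --clear=WCE /dev/sda' for SCSI/SAS. With the cache genuinely off, the commit rate drops to something consistent with rotation speed, i.e. on the order of 120 per second for a 7200 RPM drive.)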
If you \ncan suggest something wrong with how I'm testing here, I'd love to hear \nabout it. I'd like to believe you but I can't seem to produce any \nevidence that supports you claims here.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 2 Apr 2009 04:53:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Wed, Mar 25, 2009 at 12:16 PM, Scott Carey <[email protected]> wrote:\n> On 3/25/09 1:07 AM, \"Greg Smith\" <[email protected]> wrote:\n>> On Wed, 25 Mar 2009, Mark Kirkwood wrote:\n>>> I'm thinking that the raid chunksize may well be the issue.\n>>\n>> Why?  I'm not saying you're wrong, I just don't see why that parameter\n>> jumped out as a likely cause here.\n>>\n>\n> If postgres is random reading or writing at 8k block size, and the raid\n> array is set with 4k block size, then every 8k random i/o will create TWO\n> disk seeks since it gets split to two disks.   Effectively, iops will be cut\n> in half.\n\nI disagree. The 4k raid chunks are likely to be grouped together on\ndisk and read sequentially. This will only give two seeks in special\ncases. Now, if the PostgreSQL block size is _smaller_ than the raid\nchunk size, random writes can get expensive (especially for raid 5)\nbecause the raid chunk has to be fully read in and written back out.\nBut this is mainly a theoretical problem I think.\n\nI'm going to go out on a limb and say that for block sizes that are\nwithin one or two 'powers of two' of each other, it doesn't matter a\nwhole lot. SSDs might be different, because of the 'erase' block\nwhich might be 128k, but I bet this is dealt with in such a fashion\nthat you wouldn't really notice it when dealing with different block\nsizes in pg.\n\nmerlin\n", "msg_date": "Thu, 2 Apr 2009 13:58:44 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Greg Smith wrote:\n> OK, that's clearly cached writes where the drive is lying about fsync. \n> The claim is that since my drive supports both the flush calls, I just \n> need to turn on barrier support, right?\n>\nThat's a big pointy finger you are aiming at that drive - are you sure \nit was sent the flush instruction? Clearly *something* isn't right.\n\n> This is basically how this always works for me: somebody claims \n> barriers and/or SATA disks work now, no really this time. I test, \n> they give answers that aren't possible if fsync were working properly, \n> I conclude turning off the write cache is just as necessary as it \n> always was. If you can suggest something wrong with how I'm testing \n> here, I'd love to hear about it. 
I'd like to believe you but I can't \n> seem to produce any evidence that supports you claims here.\nTry similar tests with Solaris and Vista?\n\n(Might have to give the whole disk to ZFS with Solaris to give it \nconfidence to enable write cache, which mioght not be easy with a laptop \nboot drive: XP and Vista should show the toggle on the drive)\n\nJames\n\n", "msg_date": "Thu, 02 Apr 2009 20:16:30 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 4/2/09 10:58 AM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> On Wed, Mar 25, 2009 at 12:16 PM, Scott Carey <[email protected]> wrote:\n>> On 3/25/09 1:07 AM, \"Greg Smith\" <[email protected]> wrote:\n>>> On Wed, 25 Mar 2009, Mark Kirkwood wrote:\n>>>> I'm thinking that the raid chunksize may well be the issue.\n>>> \n>>> Why?  I'm not saying you're wrong, I just don't see why that parameter\n>>> jumped out as a likely cause here.\n>>> \n>> \n>> If postgres is random reading or writing at 8k block size, and the raid\n>> array is set with 4k block size, then every 8k random i/o will create TWO\n>> disk seeks since it gets split to two disks.   Effectively, iops will be cut\n>> in half.\n> \n> I disagree. The 4k raid chunks are likely to be grouped together on\n> disk and read sequentially. This will only give two seeks in special\n> cases. \n\nBy definition, adjacent raid blocks in a stripe are on different disks.\n\n\n> Now, if the PostgreSQL block size is _smaller_ than the raid\n> chunk size, random writes can get expensive (especially for raid 5)\n> because the raid chunk has to be fully read in and written back out.\n> But this is mainly a theoretical problem I think.\n\nThis is false and a RAID-5 myth. New parity can be constructed from the old\nparity + the change in data. Only 2 blocks have to be accessed, not the\nwhole stripe.\n\nPlus, this was about RAID 10 or 0 where parity does not apply.\n\n> \n> I'm going to go out on a limb and say that for block sizes that are\n> within one or two 'powers of two' of each other, it doesn't matter a\n> whole lot. SSDs might be different, because of the 'erase' block\n> which might be 128k, but I bet this is dealt with in such a fashion\n> that you wouldn't really notice it when dealing with different block\n> sizes in pg.\n\nWell, raid block size can be significantly larger than postgres or file\nsystem block size and the performance of random reads / writes won't get\nworse with larger block sizes. 
This holds only for RAID 0 (or 10), parity\nis the ONLY thing that makes larger block sizes bad since there is a\nread-modify-write type operation on something the size of one block.\n\nRaid block sizes smaller than the postgres block is always bad and\nmultiplies random i/o.\n\nRead a 8k postgres block in a 8MB md raid 0 block, and you read 8k from one\ndisk.\nRead a 8k postgres block on a md raid 0 with 4k blocks, and you read 4k from\ntwo disks.\n\n", "msg_date": "Thu, 2 Apr 2009 13:20:15 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Thu, Apr 2, 2009 at 4:20 PM, Scott Carey <[email protected]> wrote:\n>\n> On 4/2/09 10:58 AM, \"Merlin Moncure\" <[email protected]> wrote:\n>\n>> On Wed, Mar 25, 2009 at 12:16 PM, Scott Carey <[email protected]> wrote:\n>>> On 3/25/09 1:07 AM, \"Greg Smith\" <[email protected]> wrote:\n>>>> On Wed, 25 Mar 2009, Mark Kirkwood wrote:\n>>>>> I'm thinking that the raid chunksize may well be the issue.\n>>>>\n>>>> Why?  I'm not saying you're wrong, I just don't see why that parameter\n>>>> jumped out as a likely cause here.\n>>>>\n>>>\n>>> If postgres is random reading or writing at 8k block size, and the raid\n>>> array is set with 4k block size, then every 8k random i/o will create TWO\n>>> disk seeks since it gets split to two disks.   Effectively, iops will be cut\n>>> in half.\n>>\n>> I disagree.  The 4k raid chunks are likely to be grouped together on\n>> disk and read sequentially.  This will only give two seeks in special\n>> cases.\n>\n> By definition, adjacent raid blocks in a stripe are on different disks.\n>\n>\n>> Now, if the PostgreSQL block size is _smaller_ than the raid\n>> chunk size,  random writes can get expensive (especially for raid 5)\n>> because the raid chunk has to be fully read in and written back out.\n>> But this is mainly a theoretical problem I think.\n>\n> This is false and a RAID-5 myth.  New parity can be constructed from the old\n> parity + the change in data.  Only 2 blocks have to be accessed, not the\n> whole stripe.\n>\n> Plus, this was about RAID 10 or 0 where parity does not apply.\n>\n>>\n>> I'm going to go out on a limb and say that for block sizes that are\n>> within one or two 'powers of two' of each other, it doesn't matter a\n>> whole lot.  SSDs might be different, because of the 'erase' block\n>> which might be 128k, but I bet this is dealt with in such a fashion\n>> that you wouldn't really notice it when dealing with different block\n>> sizes in pg.\n>\n> Well, raid block size can be significantly larger than postgres or file\n> system block size and the performance of random reads / writes won't get\n> worse with larger block sizes.  
This holds only for RAID 0 (or 10), parity\n> is the ONLY thing that makes larger block sizes bad since there is a\n> read-modify-write type operation on something the size of one block.\n>\n> Raid block sizes smaller than the postgres block is always bad and\n> multiplies random i/o.\n>\n> Read a 8k postgres block in a 8MB md raid 0 block, and you read 8k from one\n> disk.\n> Read a 8k postgres block on a md raid 0 with 4k blocks, and you read 4k from\n> two disks.\n\nyep...that's good analysis...thinko on my part.\n\nmerlin\n", "msg_date": "Thu, 2 Apr 2009 16:27:06 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 4/2/09 1:53 AM, \"Greg Smith\" <[email protected]> wrote:\n\n> On Wed, 1 Apr 2009, Scott Carey wrote:\n> \n>> Write caching on SATA is totally fine. There were some old ATA drives that\n>> when paried with some file systems or OS's would not be safe. There are\n>> some combinations that have unsafe write barriers. But there is a standard\n>> well supported ATA command to sync and only return after the data is on\n>> disk. If you are running an OS that is anything recent at all, and any\n>> disks that are not really old, you're fine.\n> \n> While I would like to believe this, I don't trust any claims in this area\n> that don't have matching tests that demonstrate things working as\n> expected. And I've never seen this work.\n> \n> My laptop has a 7200 RPM drive, which means that if fsync is being passed\n> through to the disk correctly I can only fsync <120 times/second. Here's\n> what I get when I run sysbench on it, starting with the default ext3\n> configuration:\n> \n> $ uname -a\n> Linux gsmith-t500 2.6.28-11-generic #38-Ubuntu SMP Fri Mar 27 09:00:52 UTC\n> 2009 i686 GNU/Linux\n> \n> $ mount\n> /dev/sda3 on / type ext3 (rw,relatime,errors=remount-ro)\n> \n> $ sudo hdparm -I /dev/sda | grep FLUSH\n> * Mandatory FLUSH_CACHE\n> * FLUSH_CACHE_EXT\n> \n> $ ~/sysbench-0.4.8/sysbench/sysbench --test=fileio --file-fsync-freq=1\n> --file-num=1 --file-total-size=16384 --file-test-mode=rndwr run\n> sysbench v0.4.8: multi-threaded system evaluation benchmark\n> \n> Running the test with following options:\n> Number of threads: 1\n> \n> Extra file open flags: 0\n> 1 files, 16Kb each\n> 16Kb total file size\n> Block size 16Kb\n> Number of random requests for random IO: 10000\n> Read/Write ratio for combined random IO test: 1.50\n> Periodic FSYNC enabled, calling fsync() each 1 requests.\n> Calling fsync() at the end of test, Enabled.\n> Using synchronous I/O mode\n> Doing random write test\n> Threads started!\n> Done.\n> \n> Operations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\n> Read 0b Written 156.25Mb Total transferred 156.25Mb (39.176Mb/sec)\n> 2507.29 Requests/sec executed\n> \n> \n> OK, that's clearly cached writes where the drive is lying about fsync.\n> The claim is that since my drive supports both the flush calls, I just\n> need to turn on barrier support, right?\n> \n> [Edit /etc/fstab to remount with barriers]\n> \n> $ mount\n> /dev/sda3 on / type ext3 (rw,relatime,errors=remount-ro,barrier=1)\n> \n> [sysbench again]\n> \n> 2612.74 Requests/sec executed\n> \n> -----\n> \n> This is basically how this always works for me: somebody claims barriers\n> and/or SATA disks work now, no really this time. I test, they give\n> answers that aren't possible if fsync were working properly, I conclude\n> turning off the write cache is just as necessary as it always was. 
If you\n> can suggest something wrong with how I'm testing here, I'd love to hear\n> about it. I'd like to believe you but I can't seem to produce any\n> evidence that supports you claims here.\n\nYour data looks good, and puts a lot of doubt on my previous sources of\ninfo.\nSo I did more research, it seems that (most) drives don't lie, your OS and\nfile system do (or sometimes drive drivers or raid card). I know LVM and MD\nand other Linux block remapping layer things break write barriers as well.\nApparently ext3 doesn't implement fsync with a write barrier or cache flush.\nLinux kernel mailing lists implied that 2.6 had fixed these, but apparently\nnot. Write barriers were fixed, but not fsync. Even more confusing, it\nlooks like the behavior in some linux versions that are highly patched and\nbackported (SUSE, RedHat, mostly) may behave differently than those closer\nto the kernel trunk like Ubuntu.\n\nIf you can, try xfs with write barriers on. I'll try some tests using FIO\n(not familiar with sysbench but looks easy too) with various file systems\nand some SATA and SAS/SCSI setups when I get a chance.\n\nA lot of my prior evidence came from the linux kernel list and other places\nwhere I trusted the info over the years. I'll dig up more. But here is what\nI've learned in the past plus a bit from today:\nDrives don't lie anymore, and write barrier and lower level ATA commands\njust work. Linux fixed write barrier support in kernel 2.5.\nSeveral OS's do things right and many don't with respect to fsync. I had\nthought linux did fix this but it turns out they only fixed write barriers\nand left fsync broken:\nhttp://kerneltrap.org/mailarchive/linux-kernel/2008/2/26/987024/thread\n\nIn your tests the barriers slowed things down a lot, so something is working\nright there. From what I can see, with ext3 metadata changes cause much\nmore frequent write barrier activity, so 'relatime' and 'noatime' actually\nHURT your data integrity as a side effect of fsync not guaranteeing what you\nthink it does. \n\nThe big one, is this quote from the linux kernel list:\n\" Right now, if you want a reliable database on Linux, you _cannot_\nproperly depend on fsync() or fdatasync(). Considering how much Linux\nis used for critical databases, using these functions, this amazes me.\n\"\n\nCheck this full post out that started that thread:\nhttp://kerneltrap.org/mailarchive/linux-kernel/2008/2/26/987024\n\n\nI admit that it looks like I'm pretty wrong for Linux with ext3 at the\nleast. \nLinux is often not safe with disk write caches because its fsync() call\ndoesn't flush the cache. The root problem, is not the drives, its linux /\next3. Its write-barrier support is fine now (if you don't go through LVM or\nMD which don't support it), but fsync does not guarantee anything other than\nthe write having left the OS and gone to the device. In fact POSIX fsync(2)\ndoesn't require that the data is on disk. Interestingly, postgres would be\nsafer on linux if it used sync_file_range instead of fsync() but that has\nother drawbacks and limitations -- and is broken by use of LVM or MD.\nCurrently, linux + ext3 + postgres, you are only guaranteed when fsync()\nreturns that the data has left the OS, not that it is on a drive -- SATA or\nSAS. 
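\n\n(To spell that out, the chain is: write() -> OS page cache -> fsync() ->
\ndrive write cache -> platter, and on this particular stack that last hop is
\nthe step that never gets requested.)\n\n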
Strangely, sync_file_range() is safer than fsync() in the presence of\nany drive cache at all (including battery backed raid card failure) because\nit at least enforces write barriers.\n\nFsync + SATA write cache is safe on Solaris with ZFS, but not Solaris with\nUFS (the file system is write barrier and cache aware for the former and not\nthe latter).\n\nLinux (a lot) and Postgres (a little) can learn from some of the ZFS\nconcepts with regard to atomicity of changes and checksums on data and\nmetadata. Much of the above issues would simply not exist in the presence\nof good checksum use. Ext4 has journal segment checksums, but no metadata\nor data checksums exist for ability to detect partial writes to anything but\nthe journal. Postgres is adding checksums on data, and is already\nessentially copy-on-write for MVCC which is awesome -- are xlog writes\nprotected by checksums? Accidental out-of-order writes become an issue that\ncan be dealt with in a log or journal that has checksums even in the\npresence of OS and File Systems that don't have good guarantees for fsync\nlike Linux + ext3. Postgres could make itself safe even if drive write\ncache is enabled, fsync lies, AND there is a power failure. If I'm not\nmistaken, block checksums on data + xlog entry checksums can make it very\ndifficult to corrupt even if fsync is off (though data writes happening\nbefore xlog writes are still bad -- that would require external-to-block\nchecksums --like zfs -- to fix)!\n\n\nhttp://lkml.org/lkml/2005/5/15/85\n\nWhere the \"disks lie to you\" stuff probably came from:\nhttp://hardware.slashdot.org/article.pl?sid=05/05/13/0529252&tid=198&tid=128\n(turns out its the OS that isn't flushing the cache on fsync).\n\nhttp://xfs.org/index.php/XFS_FAQ#Q:_What_is_the_problem_with_the_write_cache\n_on_journaled_filesystems.3F\nSo if xfs fsync has a barrier, its safe with either:\nRaw device that respects cache flush + write caching on.\nOR\nBattery backed raid card + drive write caching off.\n\nXfs fsync supposedly works right (need to test) but fdatasync() does not.\n\n\nWhat this really boils down to is that POSIX fsync does not provide a\nguarantee that the data is on disk at all. My previous comments are wrong.\nThis means that fsync protects you from OS crashes, but not power failure.\nIt can do better in some systems / implementations.\n\n\n\n\n> \n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n\n", "msg_date": "Thu, 2 Apr 2009 13:34:13 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\nOn 4/2/09 1:20 PM, \"Scott Carey\" <[email protected]> wrote:\n> \n> Well, raid block size can be significantly larger than postgres or file\n> system block size and the performance of random reads / writes won't get\n> worse with larger block sizes. This holds only for RAID 0 (or 10), parity\n> is the ONLY thing that makes larger block sizes bad since there is a\n> read-modify-write type operation on something the size of one block.\n> \n> Raid block sizes smaller than the postgres block is always bad and\n> multiplies random i/o.\n> \n> Read a 8k postgres block in a 8MB md raid 0 block, and you read 8k from one\n> disk.\n> Read a 8k postgres block on a md raid 0 with 4k blocks, and you read 4k from\n> two disks.\n> \n\n\nOK, one more thing. 
The 8k read In a 8MB block size raid array can generate\ntwo reads in the following cases:\n\nYour read is on the boundary of the blocks AND\n\n1: your partition is not aligned with the raid blocks. This can happen if\nyou partition _inside_ the raid but not if you raid inside the partition\n(the latter only being applicable to software raid).\nOR\n2: your file system block size is smaller than the postgres block size and\nthe file block offset is not postgres block aligned.\n\nThe likelihood of the first condition is proportional to:\n\n(Postgres block size)/(raid block size)\n\nHence, for most all setups with software raid, a larger block size up to the\npoint where the above ratio gets sufficiently small is optimal. If the\nblock size gets too large, then random access is more and more likely to\nbias towards one drive over the others and lower throughput.\n\nObviously, in the extreme case where the block size is the disk size, you\nwould have to randomly access 100% of all the data to get full speed. \n\n", "msg_date": "Thu, 2 Apr 2009 13:44:20 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Greg Smith wrote:\n> On Wed, 1 Apr 2009, Scott Carey wrote:\n> \n>> Write caching on SATA is totally fine. There were some old ATA drives\n>> that when paried with some file systems or OS's would not be safe. There are\n>> some combinations that have unsafe write barriers. But there is a\n>> standard\n>> well supported ATA command to sync and only return after the data is on\n>> disk. If you are running an OS that is anything recent at all, and any\n>> disks that are not really old, you're fine.\n> \n> While I would like to believe this, I don't trust any claims in this\n> area that don't have matching tests that demonstrate things working as\n> expected. And I've never seen this work.\n>\n> My laptop has a 7200 RPM drive, which means that if fsync is being\n> passed through to the disk correctly I can only fsync <120\n> times/second. Here's what I get when I run sysbench on it, starting\n> with the default ext3 configuration:\n\nI believe it's ext3 who's cheating in this scenario.\n\nAny chance you can test the program I posted here that\ntweaks the inode before the fsync:\nhttp://archives.postgresql.org//pgsql-general/2009-03/msg00703.php\n\nOn my system with the fchmod's in that program I was getting one\nfsync per disk revolution. Without the fchmod's, fsync() didn't\nwait at all.\n\nThis was the case on dozens of drives I tried, dating back to\nold PATA drives from 2000. 
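\n\n(For anyone who doesn't want to chase the link: the inode tweak there is
\njust a couple of fchmod() calls on the test file right before each fsync(),
\nwhich appears to be enough to get ext3 to actually issue the flush.)\n\n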
Only drives from last century didn't\nbehave that way - but I can't accuse them of lying because\nhdparm showed that they didn't claim to support FLUSH_CACHE.\n\n\nI think this program shows that practically all hard drives are\nphysically capable of doing a proper fsync; but annoyingly\next3 refuses to send the FLUSH_CACHE commands to the drive\nunless the inode changed.\n\n\n> $ uname -a\n> Linux gsmith-t500 2.6.28-11-generic #38-Ubuntu SMP Fri Mar 27 09:00:52\n> UTC 2009 i686 GNU/Linux\n> \n> $ mount\n> /dev/sda3 on / type ext3 (rw,relatime,errors=remount-ro)\n> \n> $ sudo hdparm -I /dev/sda | grep FLUSH\n> * Mandatory FLUSH_CACHE\n> * FLUSH_CACHE_EXT\n> \n> $ ~/sysbench-0.4.8/sysbench/sysbench --test=fileio --file-fsync-freq=1\n> --file-num=1 --file-total-size=16384 --file-test-mode=rndwr run\n> sysbench v0.4.8: multi-threaded system evaluation benchmark\n> \n> Running the test with following options:\n> Number of threads: 1\n> \n> Extra file open flags: 0\n> 1 files, 16Kb each\n> 16Kb total file size\n> Block size 16Kb\n> Number of random requests for random IO: 10000\n> Read/Write ratio for combined random IO test: 1.50\n> Periodic FSYNC enabled, calling fsync() each 1 requests.\n> Calling fsync() at the end of test, Enabled.\n> Using synchronous I/O mode\n> Doing random write test\n> Threads started!\n> Done.\n> \n> Operations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\n> Read 0b Written 156.25Mb Total transferred 156.25Mb (39.176Mb/sec)\n> 2507.29 Requests/sec executed\n> \n> \n> OK, that's clearly cached writes where the drive is lying about fsync.\n> The claim is that since my drive supports both the flush calls, I just\n> need to turn on barrier support, right?\n> \n> [Edit /etc/fstab to remount with barriers]\n> \n> $ mount\n> /dev/sda3 on / type ext3 (rw,relatime,errors=remount-ro,barrier=1)\n> \n> [sysbench again]\n> \n> 2612.74 Requests/sec executed\n> \n> -----\n> \n> This is basically how this always works for me: somebody claims\n> barriers and/or SATA disks work now, no really this time. I test, they\n> give answers that aren't possible if fsync were working properly, I\n> conclude turning off the write cache is just as necessary as it always\n> was. If you can suggest something wrong with how I'm testing here, I'd\n> love to hear about it. I'd like to believe you but I can't seem to\n> produce any evidence that supports you claims here.\n> \n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n\n", "msg_date": "Thu, 02 Apr 2009 17:10:10 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Ron Mayer wrote:\n> Greg Smith wrote:\n>> On Wed, 1 Apr 2009, Scott Carey wrote:\n>>\n>>> Write caching on SATA is totally fine. There were some old ATA drives\n>>> that when paried with some file systems or OS's would not be safe. There are\n>>> some combinations that have unsafe write barriers. But there is a\n>>> standard\n>>> well supported ATA command to sync and only return after the data is on\n>>> disk. If you are running an OS that is anything recent at all, and any\n>>> disks that are not really old, you're fine.\n>> While I would like to believe this, I don't trust any claims in this\n>> area that don't have matching tests that demonstrate things working as\n>> expected. 
And I've never seen this work.\n>>\n>> My laptop has a 7200 RPM drive, which means that if fsync is being\n>> passed through to the disk correctly I can only fsync <120\n>> times/second. Here's what I get when I run sysbench on it, starting\n>> with the default ext3 configuration:\n> \n> I believe it's ext3 who's cheating in this scenario.\n\nI assume so too. Here the same test using XFS, first with barriers (XFS \ndefault) and then without:\n\nLinux 2.6.28-gentoo-r2 #1 SMP Intel(R) Core(TM)2 CPU 6400 @ 2.13GHz \nGenuineIntel GNU/Linux\n\n/dev/sdb /data2 xfs rw,noatime,attr2,logbufs=8,logbsize=256k,noquota 0 0\n\n# sysbench --test=fileio --file-fsync-freq=1 --file-num=1 \n--file-total-size=16384 --file-test-mode=rndwr run\nsysbench 0.4.10: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nExtra file open flags: 0\n1 files, 16Kb each\n16Kb total file size\nBlock size 16Kb\nNumber of random requests for random IO: 10000\nRead/Write ratio for combined random IO test: 1.50\nPeriodic FSYNC enabled, calling fsync() each 1 requests.\nCalling fsync() at the end of test, Enabled.\nUsing synchronous I/O mode\nDoing random write test\nThreads started!\nDone.\n\nOperations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\nRead 0b Written 156.25Mb Total transferred 156.25Mb (463.9Kb/sec)\n 28.99 Requests/sec executed\n\nTest execution summary:\n total time: 344.9013s\n total number of events: 10000\n total time taken by event execution: 0.1453\n per-request statistics:\n min: 0.01ms\n avg: 0.01ms\n max: 0.07ms\n approx. 95 percentile: 0.01ms\n\nThreads fairness:\n events (avg/stddev): 10000.0000/0.00\n execution time (avg/stddev): 0.1453/0.00\n\n\nAnd now without barriers:\n\n/dev/sdb /data2 xfs \nrw,noatime,attr2,nobarrier,logbufs=8,logbsize=256k,noquota 0 0\n\n# sysbench --test=fileio --file-fsync-freq=1 --file-num=1 \n--file-total-size=16384 --file-test-mode=rndwr run\nsysbench 0.4.10: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nExtra file open flags: 0\n1 files, 16Kb each\n16Kb total file size\nBlock size 16Kb\nNumber of random requests for random IO: 10000\nRead/Write ratio for combined random IO test: 1.50\nPeriodic FSYNC enabled, calling fsync() each 1 requests.\nCalling fsync() at the end of test, Enabled.\nUsing synchronous I/O mode\nDoing random write test\nThreads started!\nDone.\n\nOperations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\nRead 0b Written 156.25Mb Total transferred 156.25Mb (62.872Mb/sec)\n 4023.81 Requests/sec executed\n\nTest execution summary:\n total time: 2.4852s\n total number of events: 10000\n total time taken by event execution: 0.1325\n per-request statistics:\n min: 0.01ms\n avg: 0.01ms\n max: 0.06ms\n approx. 
95 percentile: 0.01ms\n\nThreads fairness:\n events (avg/stddev): 10000.0000/0.00\n execution time (avg/stddev): 0.1325/0.00\n\n\n-- \nBest regards,\nHannes Dorbath\n", "msg_date": "Fri, 03 Apr 2009 10:19:38 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "Mark Kirkwood wrote:\n> Rebuilt with 256K chunksize:\n>\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 24\n> number of transactions per client: 12000\n> number of transactions actually processed: 288000/288000\n> tps = 942.852104 (including connections establishing)\n> tps = 943.019223 (excluding connections establishing)\n>\n\nIncreasing checkpoint_segments to 96 and decreasing \nbgwriter_lru_maxpages to 100:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 24\nnumber of transactions per client: 12000\nnumber of transactions actually processed: 288000/288000\ntps = 1219.221721 (including connections establishing)\ntps = 1219.501150 (excluding connections establishing)\n\n... as suggested by Greg (actually he suggested reducing \nbgwriter_lru_maxpages to 0, but this seemed to be no better). Anyway, \nseeing quite a reasonable improvement (about 83% from where we started). \nIt will be interesting to see how/if the improvements measured in \npgbench translate into the \"real\" application. Thanks for all your help \n(particularly to both Scotts, Greg and Stef).\n\nregards\n\nMark\n", "msg_date": "Fri, 03 Apr 2009 21:53:12 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Thu, 2 Apr 2009, James Mansion wrote:\n\n> Might have to give the whole disk to ZFS with Solaris to give it \n> confidence to enable write cache\n\nConfidence, sure, but not necessarily performance at the same time. The \nZFS Kool-Aid gets bitter sometimes too, and I worry that its reputation \ncauses people to just trust it when they should be wary. If there's \nanything this thread does, I hope it helps demonstrate how easy it is to \ndiscover reality doesn't match expectations at all in this very messy \narea. Trust No One! Keep Your Laser Handy!\n\nThere's a summary of the expected happy ZFS actions at \nhttp://www.opensolaris.org/jive/thread.jspa?messageID=19264& and a good \ncautionary tale of unhappy ZFS behavior in this area at \nhttp://blogs.digitar.com/jjww/2006/12/shenanigans-with-zfs-flushing-and-intelligent-arrays/ \nand its follow-up \nhttp://blogs.digitar.com/jjww/2007/10/back-in-the-sandbox-zfs-flushing-shenanigans-revisted/\n\nSystems with a hardware write cache are pretty common on this list, which \nmakes the situation described there not that unlikely to run into. The \nofficial word here is at\n\nhttp://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 3 Apr 2009 05:53:25 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Thu, 2 Apr 2009, Scott Carey wrote:\n\n> The big one, is this quote from the linux kernel list:\n> \" Right now, if you want a reliable database on Linux, you _cannot_\n> properly depend on fsync() or fdatasync(). 
Considering how much Linux\n> is used for critical databases, using these functions, this amazes me.\n> \"\n\nThings aren't as bad as that out of context quote makes them seem. There \nare two main problem situations here:\n\n1) You cannot trust Linux to flush data to a hard drive's write cache. \nSolution: turn off the write cache. Given the general poor state of \ntargeted fsync on Linux (quoting from a downthread comment by David Lang: \n\"in data=ordered mode, the default for most distros, ext3 can end up \nhaving to write all pending data when you do a fsync on one file\"), those \nfsyncs were likely to blow out the drive cache anyway.\n\n2) There are no hard guarantees about write ordering at the disk level; if \nyou write blocks ABC and then fsync, you might actually get, say, only B \nwritten before power goes out. I don't believe the PostgreSQL WAL design \nwill be corrupted by this particular situation, because until that fsync \ncomes back saying all 3 are done none of them are relied upon.\n\n> Interestingly, postgres would be safer on linux if it used \n> sync_file_range instead of fsync() but that has other drawbacks and \n> limitations\n\nI have thought about whether it would be possible to add a Linux-specific \nimprovement here into the code path that does something custom in this \narea for Windows/Mac OS X when you use fsync_method=fsync_writethrough\n\nWe really should update the documentation in this area before 8.4 ships. \nI'm looking into moving the \"Tuning PostgreSQL WAL Synchronization\" paper \nI wrote onto the wiki and then fleshing it out with all this \nfilesystem-specific trivia.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 3 Apr 2009 06:30:12 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" } ]
[ { "msg_contents": "So far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2 write
\nspeed. So the question becomes what is the best filesystem for this drive?\n
\nAnyone want me to run anything on it ?\n\nDave\n", "msg_date": "Thu, 26 Mar 2009 08:47:55 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "I have a fusion IO drive available for testing" }, { "msg_contents": "We have a box outfitted with two of them we are testing. We are testing with VxFS and ext3, but not quite ready to publish. I will reply when we are done.\n\n
\n-----Original Message-----
\nFrom: [email protected] on behalf of Dave Cramer
\nSent: Thu 3/26/2009 5:47 AM
\nTo: [email protected]
\nSubject: [PERFORM] I have a fusion IO drive available for testing\n 
\nSo far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2 write
\nspeed. So the question becomes what is the best filesystem for this drive?\n
\nAnyone want me to run anything on it ?\n\nDave\n", "msg_date": "Thu, 26 Mar 2009 09:49:56 -0400", "msg_from": "\"Kenny Gorman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I have a fusion IO drive available for testing" }, { "msg_contents": "\nOn Mar 26, 2009, at 8:47 AM, Dave Cramer wrote:\n
\n> So far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2 
\n> write speed. So the question becomes what is the best filesystem for 
\n> this drive?\n>
\n> Anyone want me to run anything on it ?\n>\n> Dave\n\n
\nI'd be more interested in the random io numbers.\n
\nYou can do some tests with pgiosim (avail on pgfoundry) to sort-of 
\nsimulate an index scan. It just seeks and reads. It can also randomly 
\nwrite and or fsync.\n
\nI'd be interested in seeing numbers for 1 proc and 10 on the fusionIO.
\nYou have to make some file(s) for it to use first (I usually use dd to 
\ndo that, and make sure it is at least 2xRAM in size)\n\n\n\n--
\nJeff Trout <[email protected]>
\nhttp://www.stuarthamm.net/
\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Fri, 27 Mar 2009 13:23:24 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I have a fusion IO drive available for testing" }, { "msg_contents": "On Thu, 26 Mar 2009, Dave Cramer wrote:\n
\n> So far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2 write
\n> speed. So the question becomes what is the best filesystem for this drive?\n
\nuntil the current mess with ext3 and fsync gets resolved, i would say it 
\nwould probably be a bad choice. I consider ext4 too new, so I would say 
\nXFS or ext2 (depending on if you need the journal or not)\n
\nfor the WAL you definantly don't need the journal, for the data I'm not 
\nsure. 
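\n\n(In practice that is simply plain mke2fs for a WAL-only partition, versus
\nmke2fs -j if you do want the journal.)\n\n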
I believe that postgres does appropriate fsync calls so is safe on a \nnon-journaling filesystem. the fusionIO devices are small enough that a \nfsync on them does not take that long, so it may not be worth the overhead \nof the journaling.\n\nDavid Lang\n\n> Anyone want me to run anything on it ?\n>\n> Dave\n>\n", "msg_date": "Fri, 27 Mar 2009 10:30:25 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: I have a fusion IO drive available for testing" }, { "msg_contents": "\nOn Mar 27, 2009, at 1:30 PM, [email protected] wrote:\n\n>\n> for the WAL you definantly don't need the journal, for the data I'm \n> not sure. I believe that postgres does appropriate fsync calls so is \n> safe on a non-journaling filesystem. the fusionIO devices are small \n> enough that a fsync on them does not take that long, so it may not \n> be worth the overhead of the journaling.\n>\n\nThe win for the journal on the heap is simply so you don't need to \nspend $longtime fsck'ing if you crash, etc.\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Fri, 27 Mar 2009 13:54:37 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I have a fusion IO drive available for testing" }, { "msg_contents": "On Fri, Mar 27, 2009 at 10:30 AM, <[email protected]> wrote:\n> On Thu, 26 Mar 2009, Dave Cramer wrote:\n>> So far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2 write\n>> speed. So the question becomes what is the best filesystem for this drive?\n>\n> until the current mess with ext3 and fsync gets resolved, i would say it\n> would probably be a bad choice. I consider ext4 too new, so I would say XFS\n> or ext2 (depending on if you need the journal or not)\n\nIf you're worried about the performance implications of ext3 in\ndata=ordered mode, the best thing to do is to mount the filesystem in\ndata=writeback mode instead.\n\nIf you're only using the filesystem for PostgreSQL data or logs, your\ndata will be just as safe except now that data and metadata won't be\nforced to disk in the order it was written.\n\nAnd you still get the benefit of a journal so fsck's after a crash will be fast.\n\nXFS probably is a decent choice, but I don't have much experience with\nit except on a desktop system where I can tell you that having write\nbarriers on absolutely kills performance of anything that does a lot\nof filesystem metadata updates. Again, not a big concern if the\nfilesystem is only being used for PostgreSQL data or logs.\n\n-Dave\n", "msg_date": "Fri, 27 Mar 2009 13:33:18 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I have a fusion IO drive available for testing" }, { "msg_contents": "On Fri, Mar 27, 2009 at 4:33 PM, David Rees <[email protected]> wrote:\n\n> On Fri, Mar 27, 2009 at 10:30 AM, <[email protected]> wrote:\n> > On Thu, 26 Mar 2009, Dave Cramer wrote:\n> >> So far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2\n> write\n> >> speed. So the question becomes what is the best filesystem for this\n> drive?\n> >\n> > until the current mess with ext3 and fsync gets resolved, i would say it\n> > would probably be a bad choice. 
I consider ext4 too new, so I would say\n> XFS
\n> > or ext2 (depending on if you need the journal or not)\n>
\n> If you're worried about the performance implications of ext3 in
\n> data=ordered mode, the best thing to do is to mount the filesystem in
\n> data=writeback mode instead.\n>
\n> If you're only using the filesystem for PostgreSQL data or logs, your
\n> data will be just as safe except now that data and metadata won't be
\n> forced to disk in the order it was written.\n>
\n> And you still get the benefit of a journal so fsck's after a crash will be
\n> fast.\n>
\n> XFS probably is a decent choice, but I don't have much experience with
\n> it except on a desktop system where I can tell you that having write
\n> barriers on absolutely kills performance of anything that does a lot
\n> of filesystem metadata updates. Again, not a big concern if the
\n> filesystem is only being used for PostgreSQL data or logs.\n>
\n> -Dave\n>\n
\nSo I tried writing directly to the device, gets around 250MB/s, reads at
\naround 500MB/s\n
\nThe client is using redhat so xfs is not an option.\n\nDave\n", "msg_date": "Tue, 31 Mar 2009 08:20:41 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I have a fusion IO drive available for testing" }, { "msg_contents": "Dave Cramer wrote:\n> So I tried writing directly to the device, gets around 250MB/s, reads at 
\n> around 500MB/s\n> 
\n> The client is using redhat so xfs is not an option.\n
\nI'm using Red Hat and XFS, and have been for years. 
Why is XFS not an option with Red Hat?\n\nIf you report practically any kind of problem, and you're using XFS,\nor JFS, or such, their \"support offering\" is to tell you to use a\nsupported filesystem.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in name ^ \"@\" ^ tld;;\nhttp://linuxdatabases.info/info/linuxxian.html\n\"The only thing better than TV with the sound off is Radio with the\nsound off.\" -- Dave Moon\n", "msg_date": "Tue, 31 Mar 2009 11:13:09 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I have a fusion IO drive available for testing" } ]
[ { "msg_contents": "XFS\n\n- Luke\n\n________________________________
\nFrom: [email protected] <[email protected]>
\nTo: [email protected] <[email protected]>
\nSent: Thu Mar 26 05:47:55 2009
\nSubject: [PERFORM] I have a fusion IO drive available for testing\n
\nSo far using dd I am seeing around 264MB/s on ext3, 335MB/s on ext2 write speed. So the question becomes what is the best filesystem for this drive?\n
\nAnyone want me to run anything on it ?\n\nDave\n", "msg_date": "Thu, 26 Mar 2009 06:20:03 -0700", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I have a fusion IO drive available for testing" } ]
[ { "msg_contents": "\nSo, I have an query that has a very great difference between the possible \nperformance and the achieved performance. I was wondering if there was a \npossibility that Postgres would approach the possible performance by some \ndevious method.\n\nThe query returns a list of overlaps between objects. Each object is \ndefined by a start position and end position and a foreign key to the \nthing that is is located on. It's genes on chromosomes, in case you were \nwondering. The relevant parts of the location table are as follows:\n\nrelease-16.0-preview-14-mar=# \\d location\n Table \"public.location\"\n Column | Type | Modifiers\n-----------------+---------+-----------\n end | integer |\n start | integer |\n objectid | integer |\n id | integer | not null\nIndexes:\n \"location__object\" btree (objectid, id)\n \"location__start\" btree (start)\n \"location_bioseg\" gist (bioseg_create(intermine_start, intermine_end))\n\nThe table has 3490079 entries with 21875 distinct objectids, although the \nmajority of entries are concentrated on about ten distinct objectids. The \nentire table is in cache.\n\nThe query to run is:\n\nSELECT\n l1.id AS id1,\n l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE\n l1.objectid = l2.objectid\n AND l1.id <> l2.id\n AND l1.start < l2.end\n AND l2.start < l1.end\n\nThe EXPLAIN result:\n QUERY PLAN\n----------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..180896163.93 rows=54169773639 width=8)\n Join Filter: ((l1.id <> l2.id) AND (l1.start < l2.end) AND (l2.start < l1.end))\n -> Seq Scan on location l1 (cost=0.00..78076.79 rows=3490079 width=16)\n -> Index Scan using location__object on location l2 (cost=0.00..24.43 rows=1369 width=16)\n Index Cond: (l2.objectid = l1.objectid)\n(5 rows)\n\nI could get an EXPLAIN ANALYSE, but it would take quite a few hours.\n\nNow, we have a spacial gist index on (start, end), using the bioseg addon, \nwhich works great for single overlap lookups, and does speed up the above \nquery, if we alter it to have an equivalent meaning, but use the bioseg \nfunction:\n\nSELECT\n l1.id AS id1,\n l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE\n l1.objectid = l2.objectid\n AND l1.id <> l2.id\n AND bioseg_create(l1.start, l1.end) && bioseg_create(l2.start, l2.end);\n\nThe EXPLAIN result:\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..99982692.10 rows=4875280 width=8)\n Join Filter: ((l1.id <> l2.id) AND (l1.objectid = l2.objectid))\n -> Seq Scan on location l1 (cost=0.00..78076.79 rows=3490079 width=16)\n -> Index Scan using location_bioseg on location l2 (cost=0.00..12.92 rows=698 width=16)\n Index Cond: (bioseg_create(l1.start, l1.end) && bioseg_create(l2.start, l2.end))\n(5 rows)\n\nThis query takes about two hours.\n\nNow, it happens that there is an algorithm for calculating overlaps which \nis really quick. It involves iterating through the table in order of the \nstart variable and keeping a list of ranges which \"haven't ended yet\". \nWhen you read the next range from the table, you firstly purge all the \nranges from the list that end before the beginning of the new range. Then, \nyou output a result row for each element in the list combined with the new \nrange, then you add the new range to the list.\n\nThis algorithm just doesn't seem to fit into SQL at all. However, it would \nreduce two hours down to a few seconds. 
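\n\nTo pin down what I mean, here is a rough sketch of that sweep as procedural
\npseudo-code (roughly plpgsql-shaped, completely untested, names invented by
\nme; it assumes end > start for every row, unique ids, and the column names
\nfrom the \d output above, and it emits each pair both ways round like the
\nself-join does):\n
\nCREATE OR REPLACE FUNCTION all_overlaps(OUT id1 integer, OUT id2 integer)
\n    RETURNS SETOF record AS $$
\nDECLARE
\n    r         record;
\n    prev_obj  integer;
\n    open_ids  integer[] := '{}';
\n    open_ends integer[] := '{}';
\n    keep_ids  integer[];
\n    keep_ends integer[];
\n    i         integer;
\nBEGIN
\n    FOR r IN SELECT objectid, id, start, \"end\" AS fin
\n             FROM location ORDER BY objectid, start
\n    LOOP
\n        IF prev_obj IS NULL OR prev_obj <> r.objectid THEN
\n            open_ids := '{}';   -- new object, so nothing is open
\n            open_ends := '{}';
\n            prev_obj := r.objectid;
\n        END IF;
\n        -- purge ranges that ended before this one starts,
\n        -- and pair the survivors with the new range
\n        keep_ids := '{}';
\n        keep_ends := '{}';
\n        FOR i IN 1 .. COALESCE(array_upper(open_ids, 1), 0) LOOP
\n            IF open_ends[i] > r.start THEN
\n                keep_ids := keep_ids || open_ids[i];
\n                keep_ends := keep_ends || open_ends[i];
\n                id1 := open_ids[i]; id2 := r.id; RETURN NEXT;
\n                id1 := r.id; id2 := open_ids[i]; RETURN NEXT;
\n            END IF;
\n        END LOOP;
\n        -- finally add the new range to the open list
\n        open_ids := keep_ids || r.id;
\n        open_ends := keep_ends || r.fin;
\n    END LOOP;
\n    RETURN;
\nEND;
\n$$ LANGUAGE plpgsql;\n
\nCalling it would just be SELECT * FROM all_overlaps(), which ought to give
\nthe same rows as the join above (in a different order), and the work done is
\nroughly proportional to the number of overlapping pairs actually produced
\nrather than to the number of candidate pairs.\n\n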
Any chances of it running in \nPostgres, or any other tips?\n\nMatthew\n\n-- \n Hi! You have reached 555-0129. None of us are here to answer the phone and \n the cat doesn't have opposing thumbs, so his messages are illegible. Please \n leave your name and message after the beep ...\n", "msg_date": "Thu, 26 Mar 2009 14:30:55 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Very specialised query" }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote: \n> any other tips?\n \nI would try adding an index on (objectid, start) and another on\n(objectid, end) and see how that first query does. Also, if id is a\nunique identifier, I'd recommend a unique constraint or (better) a\nprimary key definition. Check the system tables to see whether all\nthe existing indexes are actually being used -- if not, drop them.\n \n-Kevin\n", "msg_date": "Thu, 26 Mar 2009 09:47:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> This query takes about two hours.\n\n> Now, it happens that there is an algorithm for calculating overlaps which \n> is really quick. It involves iterating through the table in order of the \n> start variable and keeping a list of ranges which \"haven't ended yet\". \n> When you read the next range from the table, you firstly purge all the \n> ranges from the list that end before the beginning of the new range. Then, \n> you output a result row for each element in the list combined with the new \n> range, then you add the new range to the list.\n\n> This algorithm just doesn't seem to fit into SQL at all.\n\nNo, it doesn't. Have you thought about coding it in plpgsql?\n\nI have a feeling that it might be possible to do it using SQL:2003\nrecursive queries, but the procedural coding is likely to be easier\nto understand and better-performing. Not to mention that you won't\nhave to wait for 8.4...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Mar 2009 11:49:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query " }, { "msg_contents": "On Thu, 26 Mar 2009, Tom Lane wrote:\n> No, it doesn't. Have you thought about coding it in plpgsql?\n\n*Looks* Nice.\n\nSo, it looks like I would be able to write a plpgsql function that returns \na table equivalent to the query I posted earlier. However, I'd like to \neat my cake *and* have it. My intention is to create a view with those \nresults, and then use that view in all sorts of other queries. This will \nmean things like constraining the chromosome, or even constraining one of \nthe locations.\n\nThe algorithm I quoted will work great for the simple case of generating \n*all* overlaps. However, it will not be ideal for when the chromosome is \nconstrained (the constraint needs to be pushed into the query that the \nalgorithm iterates over, rather than filtered after the algorithm runs), \nand it will be very much less than ideal when one of the locations is \nconstrained (at which point a simple bio_seg index lookup is the fastest \nway).\n\nIs there a way to define these three methods of generating the results and \nget the planner to choose the fastest one?\n\nMatthew\n\n-- \n Beware of bugs in the above code; I have only proved it correct, not\n tried it. 
--Donald Knuth\n", "msg_date": "Thu, 26 Mar 2009 16:05:59 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query " }, { "msg_contents": "On Thu, 26 Mar 2009, I wrote:\n> release-16.0-preview-14-mar=# \\d location\n> Table \"public.location\"\n> Column | Type | Modifiers\n> -----------------+---------+-----------\n> end | integer |\n> start | integer |\n> objectid | integer |\n> id | integer | not null\n> Indexes:\n> \"location__object\" btree (objectid, id)\n> \"location__start\" btree (start)\n> \"location_bioseg\" gist (bioseg_create(intermine_start, intermine_end))\n\nSo, it would be useful if we could make the location_bioseg index a \nmulti-column index, like this:\n\nCREATE INDEX location_bioseg3 ON location USING GIST (objectid, bioseg_create(intermine_start, intermine_end));\n\nHowever, I get the following error message:\n\nERROR: data type integer has no default operator class for access method \"gist\"\nHINT: You must specify an operator class for the index or define a default operator class for the data type.\n\nIs there an operator class for integer for gist indexes that I can use?\n\nMatthew\n\n-- \n And why do I do it that way? Because I wish to remain sane. Um, actually,\n maybe I should just say I don't want to be any worse than I already am.\n - Computer Science Lecturer\n", "msg_date": "Fri, 27 Mar 2009 12:45:40 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Is there an operator class for integer for gist indexes that I can use?\n\nSee contrib/btree_gist.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Mar 2009 10:30:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query " }, { "msg_contents": "Hello.\n\nYou could try adding \"AND l2.start > l1.start\" to the first query. This\nwill drop symmetric half of intersections (the ones that will remain are l2\ninside or to the left of l1), but you can redo results by\nid1,id2 union all id2, id1 and may allow to use start index for \"between\",\nfor my \"like\" test this looks like the next:\n\n\" -> Index Scan using location__start on location l2 (cost=0.00..756.34\nrows=37787 width=12)\"\n\" Index Cond: ((l2.start < l1.eend) AND (l2.start > l1.start))\"\n\nalso an index on (objectid, start) would help resulting in :\n\n\" -> Index Scan using lt on location l2 (cost=0.00..0.84 rows=20\nwidth=16)\"\n\" Index Cond: ((l2.objectid = l1.objectid) AND (l2.start < l1.eend)\nAND (l2.start > l1.start))\"\n\nBest regards,\n Vitalii Tymchyshyn\n\nHello.You could try  adding    \"AND l2.start > l1.start\" to the first query.  
This will drop symmetric half of intersections (the ones that will remain are l2 inside or to the left of l1), but you can redo results by\nid1,id2 union all id2, id1 and may allow to use start index for \"between\",  for my \"like\" test this looks like the next:\"  ->  Index Scan using location__start on location l2  (cost=0.00..756.34 rows=37787 width=12)\"\n\"        Index Cond: ((l2.start < l1.eend) AND (l2.start > l1.start))\"also an index on (objectid, start) would help resulting in :\"  ->  Index Scan using lt on location l2  (cost=0.00..0.84 rows=20 width=16)\"\n\"        Index Cond: ((l2.objectid = l1.objectid) AND (l2.start < l1.eend) AND (l2.start > l1.start))\"Best regards, Vitalii Tymchyshyn", "msg_date": "Fri, 27 Mar 2009 18:36:50 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Fri, 27 Mar 2009, ����̦� �������� wrote:\n> ...an index on (objectid, start) would help...\n\nDefinitely.\n\n> You could try� adding �� \"AND l2.start > l1.start\" to the first query.� \n> This will drop symmetric half of intersections (the ones that will \n> remain are l2 inside or to the left of l1), but you can redo results by \n> id1,id2 union all id2, id1 and may allow to use start index for \n> \"between\"\n\nThat's remarkably clever, and should have been obvious. Thanks.\n\nAdding the constraint as you say does indeed make the query fast. However, \nthere seems to be a problem with the planner, in that it does not \ndistinguish between the costs of two alternative plans, which have vastly \ndifferent real costs. Consider the following:\n\nSELECT\n l1.id AS id1,\n l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE\n l1.objectid = 228000093\n AND l2.objectid = 228000093\n AND l1.id <> l2.id\n AND l1.start < l2.end\n AND l1.end > l2.start\n AND l1.start < l2.start\nUNION ALL \nSELECT\n l1.id AS id1, l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE\n l1.objectid = 228000093\n AND l2.objectid = 228000093\n AND l1.id <> l2.id\n AND l1.start < l2.end\n AND l1.end > l2.start\n AND l1.start >= l2.start;\n QUERY \nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=0.00..20479179.74 rows=138732882 width=8)\n -> Nested Loop (cost=0.00..9545925.46 rows=69366441 width=8)\n Join Filter: ((l1.id <> l2.id) AND (l1.start < l2.end))\n -> Index Scan using location_test_obj_end on location l1 (cost=0.00..55966.07 rows=43277 width=12)\n Index Cond: (objectid = 228000093)\n -> Index Scan using location_test_obj_start on location l2 (cost=0.00..123.10 rows=4809 width=12)\n Index Cond: ((l2.objectid = 228000093) AND (l1.end > l2.start) AND (l1.start < l2.start))\n -> Nested Loop (cost=0.00..9545925.46 rows=69366441 width=8)\n Join Filter: ((l1.id <> l2.id) AND (l1.start < l2.end))\n -> Index Scan using location_test_obj_end on location l1 (cost=0.00..55966.07 rows=43277 width=12)\n Index Cond: (objectid = 228000093)\n -> Index Scan using location_test_obj_start on location l2 (cost=0.00..123.10 rows=4809 width=12)\n Index Cond: ((l2.objectid = 228000093) AND (l1.end > l2.start) AND (l1.start >= l2.start))\n(13 rows)\n\n\nNotice the two different index conditions:\n (l1.end > l2.start) AND (l1.start < l2.start) - \"between\"\n (l1.end > l2.start) AND (l1.start >= l2.start) - open-ended\nBoth have a cost of (cost=0.00..123.10 rows=4809 width=12)\n\nPostgres estimates these 
two index scans to be equivalent in cost, where \nthey are actually vastly different in real cost. Shouldn't Postgres favour \na \"between\" index scan over an open-ended one?\n\nMatthew\n\n-- \n [About NP-completeness] These are the problems that make efficient use of\n the Fairy Godmother. -- Computer Science Lecturer", "msg_date": "Fri, 27 Mar 2009 17:34:26 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Notice the two different index conditions:\n> (l1.end > l2.start) AND (l1.start < l2.start) - \"between\"\n> (l1.end > l2.start) AND (l1.start >= l2.start) - open-ended\n> Both have a cost of (cost=0.00..123.10 rows=4809 width=12)\n\n> Postgres estimates these two index scans to be equivalent in cost, where \n> they are actually vastly different in real cost. Shouldn't Postgres favour \n> a \"between\" index scan over an open-ended one?\n\nCurrently the planner only notices that for a range check that involves\ncomparisons of the same variable expression to two constants (or\npseudoconstants anyway). In principle it might be reasonable to have a\nheuristic that reduces the estimated selectivity in the example above,\nbut it looks to me like it'd make clauselist_selectivity() a lot slower\nand more complicated. When you see (l1.end > l2.start), how do you know\nwhich variable to try to match up against others? And if you try to\nmatch both, what do you do when you get matches for both?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Mar 2009 14:43:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query " }, { "msg_contents": "Hi,\n\nLe 26 mars 09 � 15:30, Matthew Wakeling a �crit :\n> Now, it happens that there is an algorithm for calculating overlaps \n> which is really quick. It involves iterating through the table in \n> order of the start variable and keeping a list of ranges which \n> \"haven't ended yet\". When you read the next range from the table, \n> you firstly purge all the ranges from the list that end before the \n> beginning of the new range. 
Then, you output a result row for each \n> element in the list combined with the new range, then you add the \n> new range to the list.\n>\n> This algorithm just doesn't seem to fit into SQL at all.\n\nMaybe it's just that I didn't devote enough time to reading your \ndetailed explanation above, but this part sounds like it could be done \nin an aggregate you'd use in a correlated subquery containing the \nright ORDER BY, couldn't it?\n http://www.postgresql.org/docs/8.3/interactive/sql-createaggregate.html\n\nHTH,\n-- \ndim\n\n\n\n", "msg_date": "Fri, 27 Mar 2009 21:07:56 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "Hello,\n\nif your data are mostly static and you have a few mains objects,\nmaybe you can have some gain while defining conditional indexes for those plus one for the rest \nand then slicing the query:\n\n\ncreate index o_1x on X (start,end,id) where object_id = 1\ncreate index o_2x on X (start,end,id) where object_id = 2\ncreate index o_3x on X (start,end,id) where object_id = 3\ncreate index o_4x on X (start,end,id) where object_id = 4\n...\ncreate index o_4x on X (start,end,id) where object_id not in (1,2,3,4..)\n\n\nI'm not sure that putting all in one index and using the BETWEEN clause \nas in my example is the best method though.\n\nMarc Mamin\n\n\nSELECT \n l1.id AS id1,\n l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE l1.objectid = 1\n AND (l2.start BETWEEN l1.start AND l1.end\n OR \n l1.start BETWEEN l2.start AND l2.end\n )\n l1.start\n AND l2.start <> l2.start -- if required\n AND l2.start <> l2.end -- if required\n AND l1.id <> l2.id\n\n\nUNION ALL\n\n...\n\tWHERE l1.objectid = 2\n...\t\n\nUNION ALL\n\n...\n\tWHERE l1.objectid not in (1,2,3,4..)\n\n\n\n\n\nAW: [PERFORM] Very specialised query\n\n\n\nHello,\n\nif your data are mostly static and you have a few mains objects,\nmaybe you can have some gain while defining conditional indexes for those plus one for the rest\nand then slicing the query:\n\n\ncreate index o_1x on X (start,end,id) where object_id = 1\ncreate index o_2x on X (start,end,id) where object_id = 2\ncreate index o_3x on X (start,end,id) where object_id = 3\ncreate index o_4x on X (start,end,id) where object_id = 4\n...\ncreate index o_4x on X (start,end,id) where object_id not in (1,2,3,4..)\n\n\nI'm not sure that putting all in one index and using the BETWEEN clause\nas in my example is the best method though.\n\nMarc Mamin\n\n\nSELECT\n    l1.id AS id1,\n    l2.id AS id2\nFROM\n    location l1,\n    location l2\nWHERE l1.objectid = 1\n    AND (l2.start BETWEEN  l1.start AND l1.end\n         OR\n         l1.start BETWEEN  l2.start AND l2.end\n         )\n         l1.start\n    AND l2.start <> l2.start -- if required\n    AND l2.start <> l2.end   -- if required\n    AND l1.id <> l2.id\n\n\nUNION ALL\n\n...\n        WHERE l1.objectid = 2\n...    \n\nUNION ALL\n\n...\n        WHERE l1.objectid not in (1,2,3,4..)", "msg_date": "Fri, 27 Mar 2009 23:53:22 +0100", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Fri, 27 Mar 2009, Marc Mamin wrote:\n> if your data are mostly static and you have a few mains objects,\n> maybe you can have some gain while defining conditional indexes for those plus one for the rest\n> and then slicing the query:\n\nMaybe. I thought about doing that. 
However, I am not convinced that would \nbe much of a gain, and would require major rewriting of the queries, as \nyou suggest.\n\n> WHERE (l2.start BETWEEN� l1.start AND l1.end\n> �������� OR\n> �������� l1.start BETWEEN� l2.start AND l2.end\n> �������� )\n\nYes, that's another way to calculate an overlap. However, it turns out to \nnot be that fast. The problem is that OR there, which causes a bitmap \nindex scan, as the leaf of a nested loop join, which can be rather slow.\n\nMatthew\n\n-- \n I'd try being be a pessimist, but it probably wouldn't work anyway.", "msg_date": "Mon, 30 Mar 2009 11:17:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Fri, 27 Mar 2009, Tom Lane wrote:\n>> Notice the two different index conditions:\n>> (l1.end > l2.start) AND (l1.start < l2.start) - \"between\"\n>> (l1.end > l2.start) AND (l1.start >= l2.start) - open-ended\n>> Both have a cost of (cost=0.00..123.10 rows=4809 width=12)\n\n> Currently the planner only notices that for a range check that involves\n> comparisons of the same variable expression to two constants (or\n> pseudoconstants anyway). In principle it might be reasonable to have a\n> heuristic that reduces the estimated selectivity in the example above,\n> but it looks to me like it'd make clauselist_selectivity() a lot slower\n> and more complicated. When you see (l1.end > l2.start), how do you know\n> which variable to try to match up against others? And if you try to\n> match both, what do you do when you get matches for both?\n\nThose two index conditions are on an index scan on the field l2.start. \nTherefore, I would expect to only have to take any notice of l2.start when \nworking out selectivity on a range check for this particular plan. When \nthere is an index scan on a different field, then try and match that one \nup instead.\n\nMatthew\n\n-- \n", "msg_date": "Mon, 30 Mar 2009 11:24:32 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query " }, { "msg_contents": "On Fri, 27 Mar 2009, Dimitri Fontaine wrote:\n> Maybe it's just that I didn't devote enough time to reading your detailed \n> explanation above, but this part sounds like it could be done in an aggregate \n> you'd use in a correlated subquery containing the right ORDER BY, couldn't \n> it?\n> http://www.postgresql.org/docs/8.3/interactive/sql-createaggregate.html\n\nCan you return multiple rows from an aggregate function?\n\nMatthew\n\n-- \n", "msg_date": "Mon, 30 Mar 2009 11:26:03 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "\n>> WHERE (l2.start BETWEEN  l1.start AND l1.end\n>>          OR\n>>          l1.start BETWEEN  l2.start AND l2.end\n>>          )\n\n>Yes, that's another way to calculate an overlap. However, it turns out to not be that fast. \n>The problem is that OR there, which causes a bitmap index scan, as the leaf of a nested loop join, \n>which can be rather slow.\n\n\nOk , than splitting these checks in 2 Queries with UNION is better. 
\nBut I often read that BETWEEN is faster than using 2 comparison operators.\nHere I guess that a combined index on (start,end) makes sense:\n\n..\nWHERE l2.start BETWEEN  l1.start AND l1.end\n..\nUNION\n..\nWHERE l1.start BETWEEN  l2.start AND l2.end\n..\n\n\nThe first clause being equivalent to\n \n AND l1.start <= l2.end\n AND l1.end >= l2.start\n AND l1.start <= l2.start\n\nI don't know how you have to deal the limit conditions...\n\n\nMarc Mamin\n", "msg_date": "Mon, 30 Mar 2009 17:35:47 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": ">> Shouldn't Postgres favour a \"between\" index scan over an open-ended \n>> one?\n\nOn Fri, 27 Mar 2009, Tom Lane wrote:\n> Currently the planner only notices that for a range check that involves\n> comparisons of the same variable expression to two constants (or\n> pseudoconstants anyway). In principle it might be reasonable to have a\n> heuristic that reduces the estimated selectivity in the example above,\n> but it looks to me like it'd make clauselist_selectivity() a lot slower\n> and more complicated. When you see (l1.end > l2.start), how do you know\n> which variable to try to match up against others? And if you try to\n> match both, what do you do when you get matches for both?\n\nArguably, having multiple \"greater than\" constraints on a field should not \naffect the selectivity much, because the separate constraints will all be \nthrowing away the same set of rows. So having a single \"greater than\" will \nhalve the number of rows, while two \"greater than\"s will divide the number \nof rows by three, etc. That's a vast approximation of course, and should \nbe skewed by the statistics.\n\nHowever, combining a \"greater than\" with a \"less than\" has a different \neffect on selectivity. If the numbers were completely random with \nidentical value spread in each field, then a single \"greater than\" would \nhalve the number of rows and an additional \"less than\" would halve the \nnumber again. However, in most real life situations the values will not be \ncompletely random, but will be very skewed, as in our case. 
Unless the \nplanner starts collecting statistics on the standard distribution of the \ndifference between two fields, that will be hard to account for though.\n\nHave a look at the following EXPLAIN ANALYSE:\n\nSELECT\n l1.id AS id1,\n l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE\n l1.objectid = 228000093\n AND l2.objectid = 228000093\n AND l1.id <> l2.id\n AND l1.start < l2.end\n AND l1.end > l2.start\n AND l1.start < l2.start\nUNION ALL \nSELECT\n l1.id AS id1,\n l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE\n l1.objectid = 228000093\n AND l2.objectid = 228000093\n AND l1.id <> l2.id\n AND l1.start < l2.end\n AND l1.end > l2.start\n AND l1.start >= l2.start;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Append (cost=0.00..20479179.74 rows=138732882 width=8)\n (actual time=0.055..2362748.298 rows=258210 loops=1)\n -> Nested Loop (cost=0.00..9545925.46 rows=69366441 width=8)\n (actual time=0.053..627.038 rows=99436 loops=1)\n Join Filter: ((l1.id <> l2.id) AND (l1.start < l2.end))\n -> Index Scan using location_test_obj_end on location l1\n (cost=0.00..55966.07 rows=43277 width=12)\n (actual time=0.025..58.604 rows=43534 loops=1)\n Index Cond: (objectid = 228000093)\n -> Index Scan using location_test_obj_start on location l2\n (cost=0.00..123.10 rows=4809 width=12)\n (actual time=0.003..0.006 rows=2 loops=43534)\n Index Cond: ((l2.objectid = 228000093) AND (l1.end > l2.start) AND (l1.start < l2.start))\n -> Nested Loop (cost=0.00..9545925.46 rows=69366441 width=8)\n (actual time=0.041..2361632.540 rows=158774 loops=1)\n Join Filter: ((l1.id <> l2.id) AND (l1.start < l2.end))\n -> Index Scan using location_test_obj_end on location l1\n (cost=0.00..55966.07 rows=43277 width=12)\n (actual time=0.009..80.814 rows=43534 loops=1)\n Index Cond: (objectid = 228000093)\n -> Index Scan using location_test_obj_start on location l2\n (cost=0.00..123.10 rows=4809 width=12)\n (actual time=0.012..29.823 rows=21768 loops=43534)\n Index Cond: ((l2.objectid = 228000093) AND (l1.end > l2.start) AND (l1.start >= l2.start))\n Total runtime: 2363015.959 ms\n(14 rows)\n\nNote again the two leaf index scans that really matter for performance. On \none of them, there is a difference, and the other is open ended.\n\n Expected rows Seen rows\ndifference 4809 2\nopen-ended 4809 21768\n\nThe first query fragment takes 700ms to execute, and the second takes \nabout forty minutes. It is effectively doing a nested loop join with \nhardly any benefit gained from the indexes at all, over a simple index on \nobjectid. I may as well make the query a lot simpler, and do:\n\nSELECT\n l1.id AS id1,\n l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE\n l1.objectid = 228000093\n AND l2.objectid = 228000093\n AND l1.id <> l2.id\n AND l1.start < l2.end\n AND l1.end > l2.start\n\nUnless this particular issue is improved in the planner, I don't think \nthis particular style of query is going to work for us. I know that the \ndatabase could in theory return a result in about 1400ms. I'll see how \nclose to that I can get with plpgsql.\n\nMatthew\n\n-- \n First law of computing: Anything can go wro\n sig: Segmentation fault. 
core dumped.\n", "msg_date": "Mon, 30 Mar 2009 16:57:16 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query " }, { "msg_contents": "On Mon, 30 Mar 2009, Marc Mamin wrote:\n> But I often read that BETWEEN is faster than using 2 comparison operators.\n\nhttp://www.postgresql.org/docs/current/static/functions-comparison.html \nsays otherwise.\n\n> a BETWEEN x AND y\n>\n> is equivalent to\n>\n> a >= x AND a <= y\n>\n> There is no difference between the two respective forms apart from the \n> CPU cycles required to rewrite the first one into the second one \n> internally.\n\nMatthew\n\n-- \n Don't worry! The world can't end today because it's already tomorrow\n in Australia.\n", "msg_date": "Mon, 30 Mar 2009 16:59:05 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "Hi.\n\nLook, what I did mean by \"symmetric\" is that you don't need to make second\npart of query because you will get just same results simply by\n\nselect\ncase when n == 1 then id1 else id2 end,\ncase when n == 2 then id1 else id2 end\n\nfrom (\nSELECT\n l1.id AS id1,\n l2.id AS id2\nFROM\n location l1,\n location l2\nWHERE\n l1.objectid = 228000093\n AND l2.objectid = 228000093\n AND l1.id <> l2.id\n AND l1.start < l2.end\n AND l1.end > l2.start\n AND l1.start < l2.start) a, (values (1),(2)) b(n)\n\n(I may miss some border cases like when l1.start=l2.start and/or\nl1.end=l2.end, but this can be fixed by adding \"=\" to query).\n\nLook, You can have 4 types of intersections:\na) 1s 2s 2e 1e - 2 inside 1\nb) 2s 1s 1e 2e - 1 inside 2 (symmetric to (a), if you have 1,2 from (a) you\ncan generate 2,1 for (b))\nc) 1s 2s 1e 2e - 1 to the left of 2\nd) 2s 1s 2e 1e - 2 to the left of 1 (symmetric to (c), if you have 1,2 from\n(c) you can generate 2,1 for (d))\n\nThe query above gives you results for (a) and (c) and you don't need any\nsecond part - simply add \"symmetric\" results.\n\nCorrect me if I miss something.\n\nBest Regards, Vitalii Tymchyshyn\n\nHi.Look, what I did mean by \"symmetric\" is that you don't need to make second part of query because you will get just same results simply by select case when n == 1 then id1 else id2 end,\ncase when n == 2 then id1 else id2 end\nfrom (SELECT\n    l1.id AS id1,\n    l2.id AS id2\nFROM\n    location l1,\n    location l2\nWHERE\n        l1.objectid = 228000093\n    AND l2.objectid = 228000093\n    AND l1.id <> l2.id\n    AND l1.start < l2.end\n    AND l1.end > l2.start\n    AND l1.start < l2.start) a, (values (1),(2)) b(n)(I may miss some border cases like when l1.start=l2.start and/or l1.end=l2.end, but this can be fixed by adding \"=\" to query).Look,  You can have 4 types of intersections:\na)  1s 2s 2e 1e - 2 inside 1b)  2s 1s 1e 2e - 1 inside 2 (symmetric to (a), if you have 1,2 from (a) you can generate 2,1 for (b))c)  1s 2s 1e 2e - 1 to the left of 2d)  2s 1s 2e 1e - 2 to the left of 1 (symmetric to (c), if you have 1,2 from (c) you can generate 2,1 for (d))\nThe query above gives you results for (a) and (c) and you don't need  any second part - simply add \"symmetric\" results.Correct me if I miss something.Best Regards, Vitalii Tymchyshyn", "msg_date": "Mon, 30 Mar 2009 19:14:30 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Mon, 30 Mar 2009, Віталій Тимчишин wrote:\n> select\n> 
case when n == 1 then id1 else id2 end,\n> case when n == 2 then id1 else id2 end\n> \n> from (\n>    ... a, (values (1),(2)) b(n)\n\nYeah, that's nice.\n\nHowever, it is still the case that we can't trust the database to choose \nthe correct plan. It is currently only choosing the correct plan now by \nchance, and some time later it may by chance switch to one that takes 40 \nminutes.\n\nI'll certainly add it to my list of possibilities.\n\nMatthew\n\n-- \n Jadzia: Don't forget the 34th rule of acquisition: Peace is good for business.\n Quark: That's the 35th.\n Jadzia: Oh yes, that's right. What's the 34th again?\n Quark: War is good for business. It's easy to get them mixed up.", "msg_date": "Mon, 30 Mar 2009 17:22:15 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "Hello Matthew,\n\nAnother idea:\n\nAre your objects limited to some smaller ranges of your whole interval ?\nIf yes you may possibly reduce the ranges to search for while using an additional table with the min(start) max(end) of each object...\n\nMarc Mamin\n\n\n\n\n\nAW: [PERFORM] Very specialised query\n\n\n\nHello Matthew,\n\nAnother idea:\n\nAre your objects limited to some smaller ranges of your whole interval ?\nIf yes you may possibly reduce the ranges to search for while using an additional table with the min(start) max(end) of each object...\n\nMarc Mamin", "msg_date": "Mon, 30 Mar 2009 19:44:51 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": ">\n>\n> Yeah, that's nice.\n>\n> However, it is still the case that we can't trust the database to choose\n> the correct plan. It is currently only choosing the correct plan now by\n> chance, and some time later it may by chance switch to one that takes 40\n> minutes.\n\n\nWhat is the bad plan? Is it like the first plan from your first message?\nYou can sometimes tweak optimizer to make sure it will do correct plan. E.g.\nwhen your database fits in memory, you can tweak page access costs. Also\ndon't forget to raise statistics target.\n\nBTW: About aggregates: they can return arrays, but I can't imagine what you\ncan group by on... May be windowing functions from 8.4 could help.\n\nAlso, if your maximum length (select max(end-start) from location) is low\nenough, you can try adding some more constraints to make optimizer happy\n(have it more precise row count to select correct plan).\n\n\nYeah, that's nice.\n\nHowever, it is still the case that we can't trust the database to choose the correct plan. It is currently only choosing the correct plan now by chance, and some time later it may by chance switch to one that takes 40 minutes.\nWhat is the bad plan? Is it like the first plan from your first message?You can sometimes tweak optimizer to make sure it will do correct plan. E.g. when your database fits in memory, you can tweak page access costs. Also don't forget to raise statistics target.\nBTW: About aggregates: they can return arrays, but I can't imagine what you can group by on... 
May be windowing functions from 8.4 could help.Also, if your maximum length (select max(end-start) from location) is low enough, you can try adding some more constraints to make optimizer happy (have it more precise row count to select correct plan).", "msg_date": "Mon, 30 Mar 2009 21:13:23 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Mon, 30 Mar 2009, Marc Mamin wrote:\n> Are your objects limited to some smaller ranges of your whole interval ?\n> If yes you may possibly reduce the ranges to search for while using an additional table with the min(start) max(end) of each\n> object...\n\nNo, they aren't. However, even if they were, that would not actually speed \nup the query at all. We are already looking up in the index by objectid, \nand that would be an equivalent constraint to limiting by the available \nrange of start/end values.\n\nI'm currently arguing with plpgsql over this problem, but it looks like \nit will run reasonably fast.\n\nMatthew\n\n-- \n If you're thinking \"Oh no, this lecturer thinks Turing Machines are a feasible\n method of computation, where's the door?\", then you are in luck. There are\n some there, there, and by the side there. Oxygen masks will not drop from the\n ceiling... -- Computer Science Lecturer\n", "msg_date": "Tue, 31 Mar 2009 12:54:44 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Mon, 30 Mar 2009, Віталій Тимчишин wrote:\n> select\n> case when n == 1 then id1 else id2 end,\n> case when n == 2 then id1 else id2 end\n> \n> from (\n> SELECT\n>    l1.id AS id1,\n>    l2.id AS id2\n> FROM\n>    location l1,\n>    location l2\n> WHERE\n>        l1.objectid = 228000093\n>    AND l2.objectid = 228000093\n>    AND l1.id <> l2.id\n>    AND l1.start < l2.end\n>    AND l1.end > l2.start\n>    AND l1.start < l2.start) a, (values (1),(2)) b(n)\n\nIt is a nice idea. 
However, the planner gets the join the wrong way round:\n\nselect distinct\n case when n = 1 then id1 else id2 end,\n case when n = 1 then id2 else id1 end\nFROM (\n select\n l1.id AS id1,\n l2.id AS id2\n FROM\n location l1,\n location l2\n WHERE\n l1.id <> l2.id\n AND l1.objectid = l2.objectid\n AND l1.start <= l2.end\n AND l2.start <= l1.end\n AND l1.start <= l2.start\n ) AS a,\n (values (1), (2)) b(n);\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=7366497963.75..7637346831.94 rows=36113182426 width=12)\n (actual time=1642178.623..2206678.691 rows=139606782 loops=1)\n -> Sort (cost=7366497963.75..7456780919.81 rows=36113182426 width=12)\n (actual time=1642178.619..1899057.147 rows=166377424 loops=1)\n Sort Key: (CASE WHEN (\"*VALUES*\".column1 = 1) THEN l1.subjectid ELSE l2.subjectid END), (CASE WHEN (\"*VALUES*\".column1 = 1) THEN l2.subjectid ELSE l1.subjectid END)\n Sort Method: external merge Disk: 3903272kB\n -> Nested Loop (cost=0.00..592890483.66 rows=36113182426 width=12)\n (actual time=85.333..984211.011 rows=166377424 loops=1)\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.03 rows=2 width=4)\n (actual time=0.002..0.008 rows=2 loops=1)\n -> Nested Loop (cost=0.00..25596373.62 rows=18056591213 width=8)\n (actual time=42.684..322743.335 rows=83188712 loops=2)\n Join Filter: ((l1.subjectid <> l2.subjectid) AND (l1.intermine_start <= l2.intermine_end))\n -> Seq Scan on location l1\n (cost=0.00..78076.79 rows=3490079 width=16)\n (actual time=0.008..3629.672 rows=3490079 loops=2)\n -> Index Scan using location_test_obj_start on location l2\n (cost=0.00..3.89 rows=152 width=16)\n (actual time=0.005..0.038 rows=25 loops=6980158)\n Index Cond: ((l2.objectid = l1.objectid) AND (l2.intermine_start <= l1.intermine_end) AND (l1.intermine_start <= l2.intermine_start))\n Total runtime: 2339619.383 ms\n\nThe outer nested join has the VALUES as the main loop, and the complicated \njoin as the leaf. So, the complicated overlap-finding join gets run twice.\n\nOh, there's also the great big sort and unique, but I think I can get rid \nof that.\n\nMatthew\n\n-- \n Contrary to popular belief, Unix is user friendly. It just happens to be\n very selective about who its friends are. -- Kyle Hearn", "msg_date": "Tue, 31 Mar 2009 18:08:00 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": ">\n>\n> The outer nested join has the VALUES as the main loop, and the complicated\n> join as the leaf. So, the complicated overlap-finding join gets run twice.\n\n\nThat's weird. What do you have as statistics target? Planner is incorrect\nfew orders of magnitude, so increasing it may help.\nBTW: One of constraints is redundant l1.start <= l2.start implies l1.start\n<= l2.end, so latter can be removed as for me.\n\n\n>\n>\n> Oh, there's also the great big sort and unique, but I think I can get rid\n> of that.\n>\n\nAs far as I can see, duplicates will occur if and only if l1.start ==\nl2.start && l1.end == l2.end.\nThat can be easily filtered by adding \"where n=1 or l1.start != l2.start or\nl1.end != l2.end\" to outer select.\n\n\nThe outer nested join has the VALUES as the main loop, and the complicated join as the leaf. So, the complicated overlap-finding join gets run twice.That's weird. What do you have as statistics target? 
Planner is incorrect few orders of magnitude, so increasing it may help.\nBTW: One of constraints is redundant l1.start <= l2.start implies l1.start <= l2.end, so latter can be removed as for me. \n\n\nOh, there's also the great big sort and unique, but I think I can get rid of that.\nAs far as I can see, duplicates will occur if and only if l1.start == l2.start && l1.end == l2.end.That can be easily filtered by adding \"where n=1 or l1.start != l2.start or l1.end != l2.end\" to outer select.", "msg_date": "Wed, 1 Apr 2009 00:11:52 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Wed, 1 Apr 2009, Віталій Тимчишин wrote:\n> The outer nested join has the VALUES as the main loop, and the complicated join as the leaf. So, the complicated\n> overlap-finding join gets run twice.\n> \n> That's weird. What do you have as statistics target? Planner is incorrect few orders of magnitude, so increasing it may help.\n\nUnfortunately, the statistics are skewed, so increasing the statistics \ntarget won't help. The problem is this:\n\nselect avg(end - start), stddev_pop(end - start), min(start), max(start) from location;\n\n avg | stddev_pop | min | max\n-----------------------+----------------+-----+----------\n 1716.7503512098150214 | 24935.63375733 | 1 | 61544858\n(1 row)\n\n> Oh, there's also the great big sort and unique, but I think I can get rid of that.\n> \n> \n> As far as I can see, duplicates will occur if and only if l1.start == l2.start && l1.end == l2.end.\n> That can be easily filtered by adding \"where n=1 or l1.start != l2.start or l1.end != l2.end\" to outer select.\n\nClose - duplicates will occur when l1.start == l2.start, so you filter \nthem out by adding \"where n = 1 OR l1.start <> l2.start\".\n\nMatthew\n\n-- \n Lord grant me patience, and I want it NOW!", "msg_date": "Wed, 1 Apr 2009 18:10:16 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Mon, 30 Mar 2009, Віталій Тимчишин wrote:\n> What is the bad plan? Is it like the first plan from your first message?\n\nIt's the plan a few messages back. The UNION ALL query I showed \neffectively got the database to do it both ways round.\n\nIt's the case that a \"between\" index scan will return much fewer rows than \nan open-ended index scan.\n\n> BTW: About aggregates: they can return arrays, but I can't imagine what you can group by on... May be windowing functions from 8.4\n> could help.\n\nA normal function seems the best way to go about this - they can return \nmultiple rows.\n\nSo, I have written a plpgsql function to calculate overlaps. It works \nreasonably quickly where there aren't that many overlaps. However, it \nseems to go very slowly when there are a large number of rows to return. I \nam considering recoding it as a C function instead.\n\n1. The docs say that returning multiple rows from plpgsql waits until the\n whole lot are done before returning any. Does this happen with the C\n functions too?\n2. What sort of speedup would I be likely to see?\n3. 
How do I RAISE NOTICE in a C function?\n\n> Also, if your maximum length (select max(end-start) from location) is low enough, you can try adding some more constraints to make\n> optimizer happy (have it more precise row count to select correct plan).\n\nAlas:\n\nselect min(start), max(start), min(end), max(end), max(end - start) from location;\n\n min | max | min | max | max\n-----+----------+-----+----------+----------\n 1 | 61544858 | 1 | 61545105 | 21512431\n(1 row)\n\nMatthew\n\n-- \n I suppose some of you have done a Continuous Maths course. Yes? Continuous\n Maths? <menacing stares from audience> Whoah, it was like that, was it!\n -- Computer Science Lecturer", "msg_date": "Wed, 1 Apr 2009 18:33:16 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "On Wed, 1 Apr 2009, Matthew Wakeling wrote:\n> So, I have written a plpgsql function to calculate overlaps. It works \n> reasonably quickly where there aren't that many overlaps. However, it seems \n> to go very slowly when there are a large number of rows to return.\n\nIn plpgsql, what happens about memory allocation? It's just, I'm \ngenerating and throwing away an awful lot of arrays. When do they get \nunallocated?\n\nAlso, if I assign a variable (or an element of an array) to be the \ncontents of another variable (which may be a primitive, array, or row), \ndoes it copy the contents or do it by reference?\n\nMatthew\n\n-- \n For those of you who are into writing programs that are as obscure and\n complicated as possible, there are opportunities for... real fun here\n -- Computer Science Lecturer\n", "msg_date": "Wed, 1 Apr 2009 19:27:19 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "Trying to rewrite a plpgsql function in C.\n\nHow do I do the equivalent of:\n\nFOR loc IN SELECT * FROM location ORDER BY objectid, intermine_start, intermine_end LOOP\nEND LOOP;\n\nin a C function?\n\nMatthew\n\n-- \n I wouldn't be so paranoid if you weren't all out to get me!!\n", "msg_date": "Thu, 2 Apr 2009 16:02:26 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very specialised query" }, { "msg_contents": "Matthew Wakeling wrote:\n> Trying to rewrite a plpgsql function in C.\n> \n> How do I do the equivalent of:\n> \n> FOR loc IN SELECT * FROM location ORDER BY objectid, intermine_start,\n> intermine_end LOOP\n> END LOOP;\n> \n> in a C function?\n\nPlease create a new message to the list with a new subject line for a\nnew question.\n\nThanks.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 03 Apr 2009 00:36:02 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very specialised query" } ]
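The sweep-line method described in the thread above (walk the locations of one object in order of start, keep a list of ranges that have not yet ended, and pair each new range with every survivor) maps fairly directly onto a set-returning PL/pgSQL function. The sketch below is only an illustration, not the author's actual function: the name find_overlaps, the one-objectid-per-call parameter and the plain integer arrays used for the active list are assumptions, while the table and column names (location, id, objectid, start, end) follow the thread. Each overlapping pair comes out once, earlier start first, so symmetric pairs would still have to be generated afterwards as discussed earlier.

CREATE OR REPLACE FUNCTION find_overlaps(p_objectid integer,
                                         OUT id1 integer,
                                         OUT id2 integer)
    RETURNS SETOF record AS $$
DECLARE
    loc         RECORD;
    active_id   integer[] := '{}';   -- ids of ranges that have not ended yet
    active_end  integer[] := '{}';   -- their end coordinates
    keep_id     integer[];
    keep_end    integer[];
    i           integer;
BEGIN
    FOR loc IN
        SELECT id, "start" AS s, "end" AS e
        FROM location
        WHERE objectid = p_objectid
        ORDER BY "start", "end"
    LOOP
        keep_id  := '{}';
        keep_end := '{}';
        FOR i IN 1 .. coalesce(array_upper(active_id, 1), 0) LOOP
            -- drop ranges that ended before the new range starts; every
            -- range that survives overlaps the new one, so report the pair
            IF active_end[i] > loc.s THEN
                keep_id  := array_append(keep_id,  active_id[i]);
                keep_end := array_append(keep_end, active_end[i]);
                id1 := active_id[i];
                id2 := loc.id;
                RETURN NEXT;
            END IF;
        END LOOP;
        -- the new range itself joins the active list
        active_id  := array_append(keep_id,  loc.id);
        active_end := array_append(keep_end, loc.e);
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

With a definition along those lines, SELECT * FROM find_overlaps(228000093) would stand in for the self-join for a single object; since the OUT parameters fix the result row type, no column definition list is needed at the call site. Note that a PL/pgSQL set-returning function accumulates its entire result via RETURN NEXT before the caller sees any rows, which is the documentation caveat quoted later in the thread.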
[ { "msg_contents": "I have two tables, like this:\n\nBig table:\n\nCREATE TABLE photo_info_data\n(\n photo_id integer NOT NULL,\n field_name character varying NOT NULL,\n field_value character varying,\n CONSTRAINT photo_info_data_pk PRIMARY KEY (photo_id, field_name)\n)\nWITH (OIDS=FALSE);\n\nCREATE INDEX user_info_data_ix_field_value\n ON user_info_data\n USING btree\n (field_value);\n\n\nSmall table:\n\nCREATE TABLE t_query_data\n(\n i integer,\n \"key\" character varying,\n op character varying,\n \"value\" character varying\n)\nWITH (OIDS=FALSE);\n\nI have around 2400000 rows in photo_info_data, and just two rows in \nt_query_data:\n i | key | op | value\n---+----------+----+--------\n 1 | f-stop | eq | 2.6\n 2 | shutter | gt | 1/100\n\n\nThis is the query I'm executing:\n\nSELECT\n\t*\nFROM\n\tphoto_info_data u\n\tJOIN t_query_data t on u.field_name = key\n\nThis query takes around 900ms to execute. It returns 6 rows.\n\nWhen I do 'explain analyze' for some reason it takes around 7 seconds, \nand this is what I get:\n\nphototest=# explain analyze select * from photo_info_data u join \nt_query_data t on u.field_name = key;\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.04..58676.31 rows=218048 width=68) (actual \ntime=2381.895..7087.225 rows=6 loops=1)\n Hash Cond: ((u.field_name)::text = (t.key)::text)\n -> Seq Scan on photo_info_data u (cost=0.00..47500.30 rows=2398530 \nwidth=50) (actual time=0.042..3454.112 rows=2398446 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=18) (actual \ntime=0.016..0.016 rows=2 loops=1)\n -> Seq Scan on t_query_data t (cost=0.00..1.02 rows=2 \nwidth=18) (actual time=0.003..0.007 rows=2 loops=1)\n Total runtime: 7087.291 ms\n(6 rows)\n\nTime: 7088.663 ms\n\n\nI can rerun this query many times, it's always around 7 seconds. I/O \nwait during the query is nonexistant, it just takes 100% of CPU time (i \nhave a DualCore Opteron server).\n\nIf I force the planner not to use sequential_scan, here is what I get:\n\nphototest=# explain analyze select * from photo_info_data u join \nt_query_data t on u.field_name = key;\n QUERY \nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=100039134.84..100130206.79 rows=218048 width=68) \n(actual time=271.138..540.998 rows=6 loops=1)\n -> Seq Scan on t_query_data t (cost=100000000.00..100000001.02 \nrows=2 width=18) (actual time=0.008..0.015 rows=2 loops=1)\n -> Bitmap Heap Scan on photo_info_data u (cost=39134.84..63740.08 \nrows=109024 width=50) (actual time=270.464..270.469 rows=3 loops=2)\n Recheck Cond: ((u.field_name)::text = (t.key)::text)\n -> Bitmap Index Scan on photo_info_data_pk \n(cost=0.00..39107.59 rows=109024 width=0) (actual time=270.435..270.435 \nrows=3 loops=2)\n Index Cond: ((u.field_name)::text = (t.key)::text)\n Total runtime: 541.065 ms\n(7 rows)\n\nTime: 542.147 ms\n\n\nThe database currently has only those two tables. I have vacuumed them \nprior running above queries.\n\nI tought this information also might be important:\nphototest=# select key, count(*) from photo_info_data u join \nt_query_data t on u.field_name = key group by key;\n key | count\n----------+-------\n f-stop | 3\n shutter | 3\n(2 rows)\n\n\nAm I doing something wrong here? 
The photo_info_data would hold around \n10.000.000 records, should I be doing 'set seq_scan to false' each time \nI will want to run this query? (Since I'm accessing postgres trough JDBC \nI'll have same situation I had weeks ago, I described it here also).\n\n\tMike\n", "msg_date": "Mon, 30 Mar 2009 16:07:51 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Forcing seq_scan off for large table joined with tiny table yeilds\n\timproved performance" }, { "msg_contents": "Mario Splivalo <[email protected]> writes:\n> -> Bitmap Heap Scan on photo_info_data u (cost=39134.84..63740.08 \n> rows=109024 width=50) (actual time=270.464..270.469 rows=3 loops=2)\n> Recheck Cond: ((u.field_name)::text = (t.key)::text)\n> -> Bitmap Index Scan on photo_info_data_pk \n> (cost=0.00..39107.59 rows=109024 width=0) (actual time=270.435..270.435 \n> rows=3 loops=2)\n> Index Cond: ((u.field_name)::text = (t.key)::text)\n\nYou need to figure out why that rowcount estimate is off by more than\nfour orders of magnitude :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Mar 2009 10:16:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny table\n\tyeilds improved performance" }, { "msg_contents": "Tom Lane wrote:\n> Mario Splivalo <[email protected]> writes:\n>> -> Bitmap Heap Scan on photo_info_data u (cost=39134.84..63740.08 \n>> rows=109024 width=50) (actual time=270.464..270.469 rows=3 loops=2)\n>> Recheck Cond: ((u.field_name)::text = (t.key)::text)\n>> -> Bitmap Index Scan on photo_info_data_pk \n>> (cost=0.00..39107.59 rows=109024 width=0) (actual time=270.435..270.435 \n>> rows=3 loops=2)\n>> Index Cond: ((u.field_name)::text = (t.key)::text)\n> \n> You need to figure out why that rowcount estimate is off by more than\n> four orders of magnitude :-(\n\nHuh, thnx! :) Could you give me some starting points, what do I do?\n\nCould it be because table is quite large, and there are only 3 columns \nthat match join condition?\n\nNow, after I finished writing above lines, index creation on \nphoto_info_data(field_name) was done. When I rerun above query, here is \nwhat I get:\n\nphototest=# explain analyze select field_name, count(*) from \nt_query_data t join photo_info_data u on t.key = u.field_name group by \nfield_name;\n \n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=57414.33..57414.61 rows=22 width=9) (actual \ntime=0.135..0.139 rows=2 loops=1)\n -> Nested Loop (cost=2193.50..56324.09 rows=218048 width=9) \n(actual time=0.063..0.114 rows=6 loops=1)\n -> Seq Scan on t_query_data t (cost=0.00..1.02 rows=2 \nwidth=6) (actual time=0.019..0.022 rows=2 loops=1)\n -> Bitmap Heap Scan on photo_info_data u \n(cost=2193.50..26798.74 rows=109024 width=9) (actual time=0.025..0.030 \nrows=3 loops=2)\n Recheck Cond: ((u.field_name)::text = (t.key)::text)\n -> Bitmap Index Scan on photo_info_data_ix__field_name \n (cost=0.00..2166.24 rows=109024 width=0) (actual time=0.019..0.019 \nrows=3 loops=2)\n Index Cond: ((u.field_name)::text = (t.key)::text)\n Total runtime: 0.200 ms\n(8 rows)\n\n\nSo, I guess I solved my problem! 
:) The explain analyze still shows that \nrow estimate is 'quite off' (109024 estimated vs only 3 actuall), but \nthe query is light-speeded :)\n\nI tought that having primary key (and auto-index because of primary key) \non (photo_id, field_name) should be enough. Now I have two indexes on \nfield_name, but that seems to do good.\n\n\tMike\n", "msg_date": "Mon, 30 Mar 2009 17:34:31 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny\n\ttable yeilds improved performance" }, { "msg_contents": "On Mon, Mar 30, 2009 at 9:34 AM, Mario Splivalo\n<[email protected]> wrote:\n\n>         ->  Bitmap Heap Scan on photo_info_data u (cost=2193.50..26798.74\n> rows=109024 width=9) (actual time=0.025..0.030 rows=3 loops=2)\n>               Recheck Cond: ((u.field_name)::text = (t.key)::text)\n>               ->  Bitmap Index Scan on photo_info_data_ix__field_name\n>  (cost=0.00..2166.24 rows=109024 width=0) (actual time=0.019..0.019 rows=3\n> loops=2)\n\n> So, I guess I solved my problem! :) The explain analyze still shows that row\n> estimate is 'quite off' (109024 estimated vs only 3 actuall), but the query\n> is light-speeded :)\n\nIt's not really solved, it's just a happy coincidence that the current\nplan runs well. In order to keep the query planner making good\nchoices you need to increase stats target for the field in the index\nabove. The easiest way to do so is to do this:\n\nalter database mydb set default_statistics_target=100;\n\nand run analyze again:\n\nanalyze;\n\n> I tought that having primary key (and auto-index because of primary key) on\n> (photo_id, field_name) should be enough. Now I have two indexes on\n> field_name, but that seems to do good.\n\nNope, it's about the stats collected that let the planner make the right choice.\n", "msg_date": "Mon, 30 Mar 2009 10:25:36 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny\n\ttable yeilds improved performance" }, { "msg_contents": "Scott Marlowe wrote:\n> \n> It's not really solved, it's just a happy coincidence that the current\n> plan runs well. In order to keep the query planner making good\n> choices you need to increase stats target for the field in the index\n> above. 
The easiest way to do so is to do this:\n> \n> alter database mydb set default_statistics_target=100;\n> \n> and run analyze again:\n> \n> analyze;\n\nSo, i removed the index on field_name, set \ndefault_default_statistics_target to 100, analyzed, and the results are \nthe same:\n\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.04..58676.31 rows=218048 width=68) (actual \ntime=0.067..12268.394 rows=6 loops=1)\n Hash Cond: ((u.field_name)::text = (t.key)::text)\n -> Seq Scan on photo_info_data u (cost=0.00..47500.30 rows=2398530 \nwidth=50) (actual time=0.013..6426.611 rows=2398446 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=18) (actual \ntime=0.015..0.015 rows=2 loops=1)\n -> Seq Scan on t_query_data t (cost=0.00..1.02 rows=2 \nwidth=18) (actual time=0.002..0.006 rows=2 loops=1)\n Total runtime: 12268.459 ms\n(6 rows)\n\nI even changed default_statistics_target to 1000:\n\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.04..58580.29 rows=208561 width=67) (actual \ntime=0.054..12434.283 rows=6 loops=1)\n Hash Cond: ((u.field_name)::text = (t.key)::text)\n -> Seq Scan on photo_info_data u (cost=0.00..47499.46 rows=2398446 \nwidth=49) (actual time=0.012..6129.923 rows=2398446 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=18) (actual \ntime=0.015..0.015 rows=2 loops=1)\n -> Seq Scan on t_query_data t (cost=0.00..1.02 rows=2 \nwidth=18) (actual time=0.002..0.004 rows=2 loops=1)\n Total runtime: 12434.338 ms\n(6 rows)\n\n\nEven when I run this query, I get sequential scan:\n\nexplain analyze select * from photo_info_data where field_name = \n'f-spot' or field_name = 'shutter';\n\n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on photo_info_data (cost=0.00..59491.69 rows=1705 width=49) \n(actual time=0.018..1535.963 rows=6 loops=1)\n Filter: (((field_name)::text = 'f-spot'::text) OR \n((field_name)::text = 'shutter'::text))\n Total runtime: 1536.010 ms\n(3 rows)\n\nThese are the representations of te values 'f-spot' and 'shutter' for \nthe field field_name in photo_info_data table:\n\nxmltest=# select field_name, count(*) from user_info_data where \nfield_name in ('visina', 'spol') group by field_name;\n field_name | count\n------------+-------\n 'f-spot' | 3\n 'shutter' | 3\n(2 rows)\n\n\nMaybe my test-data is poor? As I've mentioned, photo_info_data has \nlittle over 2300000 rows. And this is complete 'distribution' of the data:\n\nxmltest=# select field_name, count(*) from user_info_data group by \nfield_name order by count(*) desc;\n field_name | count\n----------------+--------\n field_Xx1 | 350000\n field_Xx2 | 332447\n field_Xx3 | 297414\n field_Xx4 | 262394\n field_Xx5 | 227396\n field_Xx6 | 192547\n field_Xx7 | 157612\n field_Xx8 | 122543\n field_Xx9 | 87442\n field_Xx10 | 52296\n field_1 | 50000\n field_2 | 47389\n field_3 | 42412\n field_4 | 37390\n field_5 | 32366\n field_6 | 27238\n field_7 | 22360\n field_Xx11 | 17589\n field_8 | 17412\n field_9 | 12383\n field_10 | 7386\n field_11 | 2410\n f-spot | 3\n shutter | 3\n focal | 3\n flash | 3\n m_city | 3\n person | 3\n iso | 2\n(29 rows)\n\nNo matter what field_name value I enter in WHERE condition, planner \nchooses sequential scan. 
Only when I add seperate index on field_name, \nplanner chooes index scan or bitmap index scan.\n\n\tMike\n", "msg_date": "Mon, 06 Apr 2009 14:20:47 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny\n\ttable yeilds improved performance" }, { "msg_contents": "On Mon, Apr 6, 2009 at 6:20 AM, Mario Splivalo\n<[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> It's not really solved, it's just a happy coincidence that the current\n>> plan runs well.  In order to keep the query planner making good\n>> choices you need to increase stats target for the field in the index\n>> above.  The easiest way to do so is to do this:\n>>\n>> alter database mydb set default_statistics_target=100;\n>>\n>> and run analyze again:\n>>\n>> analyze;\n>\n> So, i removed the index on field_name, set default_default_statistics_target\n> to 100, analyzed, and the results are the same:\n\nWhy did you remove the index?\n", "msg_date": "Mon, 6 Apr 2009 07:58:05 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny\n\ttable yeilds improved performance" }, { "msg_contents": "Scott Marlowe wrote:\n> On Mon, Apr 6, 2009 at 6:20 AM, Mario Splivalo\n> <[email protected]> wrote:\n>> Scott Marlowe wrote:\n>>> It's not really solved, it's just a happy coincidence that the current\n>>> plan runs well. In order to keep the query planner making good\n>>> choices you need to increase stats target for the field in the index\n>>> above. The easiest way to do so is to do this:\n>>>\n>>> alter database mydb set default_statistics_target=100;\n>>>\n>>> and run analyze again:\n>>>\n>>> analyze;\n>> So, i removed the index on field_name, set default_default_statistics_target\n>> to 100, analyzed, and the results are the same:\n> \n> Why did you remove the index?\n> \n\nBecause I already have index on that column, index needed to enforce PK \nconstraint. Here is the original DDL for the table:\n\nCREATE TABLE photo_info_data\n(\n photo_id integer NOT NULL,\n field_name character varying NOT NULL,\n field_value character varying,\n CONSTRAINT photo_info_data_pk PRIMARY KEY (user_id, field_name)\n)\n\nCREATE INDEX photo_info_data_ix_field_value\n ON user_info_data USING btree (field_value);\n\nSo, there is index on (user_id, field_name). Postgres is using index for \nuser_id (...WHERE user_id = 12345) but not on field-name (...WHERE \nfield_name = 'f-spot'). When I add extra index on field name:\n\nCREATE INDEX photo_info_data_ix__field_name\n ON user_info_data USING btree (field_name);\n\nThen that index is used.\n\n\tMike\n", "msg_date": "Mon, 06 Apr 2009 16:37:53 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny\n\ttable yeilds improved performance" }, { "msg_contents": "On Mon, Apr 6, 2009 at 8:37 AM, Mario Splivalo\n<[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> On Mon, Apr 6, 2009 at 6:20 AM, Mario Splivalo\n>> <[email protected]> wrote:\n>>>\n>>> Scott Marlowe wrote:\n>>>>\n>>>> It's not really solved, it's just a happy coincidence that the current\n>>>> plan runs well.  In order to keep the query planner making good\n>>>> choices you need to increase stats target for the field in the index\n>>>> above.  
The easiest way to do so is to do this:\n>>>>\n>>>> alter database mydb set default_statistics_target=100;\n>>>>\n>>>> and run analyze again:\n>>>>\n>>>> analyze;\n>>>\n>>> So, i removed the index on field_name, set\n>>> default_default_statistics_target\n>>> to 100, analyzed, and the results are the same:\n>>\n>> Why did you remove the index?\n>>\n>\n> Because I already have index on that column, index needed to enforce PK\n> constraint. Here is the original DDL for the table:\n>\n> CREATE TABLE photo_info_data\n> (\n>  photo_id integer NOT NULL,\n>  field_name character varying NOT NULL,\n>  field_value character varying,\n>  CONSTRAINT photo_info_data_pk PRIMARY KEY (user_id, field_name)\n> )\n>\n> CREATE INDEX photo_info_data_ix_field_value\n>  ON user_info_data USING btree (field_value);\n>\n> So, there is index on (user_id, field_name). Postgres is using index for\n> user_id (...WHERE user_id = 12345) but not on field-name (...WHERE\n> field_name = 'f-spot'). When I add extra index on field name:\n>\n> CREATE INDEX photo_info_data_ix__field_name\n>  ON user_info_data USING btree (field_name);\n>\n> Then that index is used.\n\nOn older versions of pgsql, the second of two terms in a multicolumn\nindex can't be used alone. On newer versions it can, but it is much\nless efficient than if it's a single column index or if the term is\nthe first one not the second.\n", "msg_date": "Mon, 6 Apr 2009 08:47:07 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny\n\ttable yeilds improved performance" }, { "msg_contents": "Scott Marlowe wrote:\n>> CREATE INDEX photo_info_data_ix_field_value\n>> ON user_info_data USING btree (field_value);\n>>\n>> So, there is index on (user_id, field_name). Postgres is using index for\n>> user_id (...WHERE user_id = 12345) but not on field-name (...WHERE\n>> field_name = 'f-spot'). When I add extra index on field name:\n>>\n>> CREATE INDEX photo_info_data_ix__field_name\n>> ON user_info_data USING btree (field_name);\n>>\n>> Then that index is used.\n> \n> On older versions of pgsql, the second of two terms in a multicolumn\n> index can't be used alone. On newer versions it can, but it is much\n> less efficient than if it's a single column index or if the term is\n> the first one not the second.\n\nI'm using 8.3.7. So, you'd also suggest to keep that extra (in a way \nredundant) index on field_name, since I need PK on (photo_id, field_name) ?\n\n\tMike\n", "msg_date": "Mon, 06 Apr 2009 16:50:49 +0200", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny\n\ttable yeilds improved performance" }, { "msg_contents": "On Mon, Apr 6, 2009 at 8:50 AM, Mario Splivalo\n<[email protected]> wrote:\n> Scott Marlowe wrote:\n>>>\n>>> CREATE INDEX photo_info_data_ix_field_value\n>>>  ON user_info_data USING btree (field_value);\n>>>\n>>> So, there is index on (user_id, field_name). Postgres is using index for\n>>> user_id (...WHERE user_id = 12345) but not on field-name (...WHERE\n>>> field_name = 'f-spot'). When I add extra index on field name:\n>>>\n>>> CREATE INDEX photo_info_data_ix__field_name\n>>>  ON user_info_data USING btree (field_name);\n>>>\n>>> Then that index is used.\n>>\n>> On older versions of pgsql, the second of two terms in a multicolumn\n>> index can't be used alone.  
On newer versions it can, but it is much\n>> less efficient than if it's a single column index or if the term is\n>> the first one not the second.\n>\n> I'm using 8.3.7. So, you'd also suggest to keep that extra (in a way\n> redundant) index on field_name, since I need PK on (photo_id, field_name) ?\n\nEither that or reverse the terms in the pk.\n\nAlso, you might want to look at adjusting random_page_access to\nsomething around 1.5 to 2.0.\n", "msg_date": "Mon, 6 Apr 2009 09:22:50 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Forcing seq_scan off for large table joined with tiny\n\ttable yeilds improved performance" } ]
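To make the suggestions in the thread above concrete, here is a hedged sketch of the two indexing options plus the planner settings that were mentioned. The object names (photo_info_data, photo_info_data_pk, the phototest database) follow the thread, but the statements themselves are illustrative rather than taken from the application, and the setting behind the "random_page_access" suggestion is actually called random_page_cost.

-- Option 1: keep the (photo_id, field_name) primary key and add a
-- single-column index so that field_name-only predicates can use an
-- index scan on 8.3.
CREATE INDEX photo_info_data_ix__field_name
    ON photo_info_data USING btree (field_name);

-- Option 2: reverse the primary-key column order instead, so the PK
-- index itself serves field_name lookups and no extra index is needed
-- (assumes no foreign keys reference the existing primary key).
ALTER TABLE photo_info_data DROP CONSTRAINT photo_info_data_pk;
ALTER TABLE photo_info_data
    ADD CONSTRAINT photo_info_data_pk PRIMARY KEY (field_name, photo_id);

-- Planner settings discussed in the thread; ALTER DATABASE ... SET only
-- affects new sessions, so reconnect before re-running ANALYZE.
ALTER DATABASE phototest SET default_statistics_target = 100;
ALTER DATABASE phototest SET random_page_cost = 1.5;
ANALYZE;

Reordering the key avoids carrying a redundant index, but it leaves photo_id-only lookups depending on a separate index instead, so which option is preferable depends on which column is filtered on its own more often.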
[ { "msg_contents": "I'm running a 64-bit build of Postgres 8.3.5 on AIX 5.3, and have a really\nstrange, annoying transient problem with one particular query stalling.\n\nThe symptom here is that when this query is made with X or more records in\na temp table involved in the join (where X is constant when the problem\nmanifests, but is different between manifestations) the query takes about\n20 minutes. When the query is made with X-1 records it takes less than a\nsecond. It's just this one query -- for everything else the system's nice\nand snappy. The machine load's never above 9 (it's a 32 CPU box) and\nhasn't had less than 60G of free system memory on it.\n\nAn EXPLAIN ANALYZE of the two queries (with X-1 and X records) is even\nmore bizarre. Not only are the two query plans identical (save trivial\ndifferences because of the record count differences) but the explain\nEXPLAIN ANALYZE total runtimes are near-identical -- the fast case showed\n259ms, the slow case 265ms.\n\nWhen the slow query was actually run, though, it took 20 minutes. There\nwere no blocked back ends shown in pg_stat_activity, and the back end\nitself was definitely moving. I trussed the back end stuck running the\nslow query and it spent nearly all its time doing kread() and kwrite()\ncalls. The DB log didn't show anything interesting in it. I checked to\nmake sure the SQL statement that was EXPLAIN ANALYZEd was the one actually\nexecuted, and I pulled the client code into the debugger and\nsingle-stepped through just to make sure it was getting stuck on that one\nSQL statement and it wasn't the client doing something unexpected.\n\nJust to be even more annoying, the problem goes away. Right now I can't\ntrigger the problem. Last friday it happened reliably feeding 58 records\ninto this query. The week before it was 38 records. Today? Nothing, the\nsystem's full of snappy.\n\nAt the moment I'm at a loss as to what's going wrong, and the fact that I\ncan't duplicate it right now when I actually have time to look into the\nproblem's damned annoying. What I'm looking for are some hints as to where\nto look the next time it does happen, or things I can flip on to catch\nmore in the log. (The logging parameters are all set at their defaults)\nI'm going to try attaching with dbx to get a stack trace (I assume this is\nrelatively safe?) but past that I'm kind of stumped.\n\nHelp?\n\n\n\n\n", "msg_date": "Mon, 30 Mar 2009 13:50:52 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Trying to track down weird query stalls" }, { "msg_contents": "On Mon, Mar 30, 2009 at 1:50 PM, <[email protected]> wrote:\n> I'm running a 64-bit build of Postgres 8.3.5 on AIX 5.3, and have a really\n> strange, annoying transient problem with one particular query stalling.\n>\n> The symptom here is that when this query is made with X or more records in\n> a temp table involved in the join (where X is constant when the problem\n> manifests, but is different between manifestations) the query takes about\n> 20 minutes. When the query is made with X-1 records it takes less than a\n> second. It's just this one query -- for everything else the system's nice\n> and snappy. The machine load's never above 9 (it's a 32 CPU box) and\n> hasn't had less than 60G of free system memory on it.\n>\n> An EXPLAIN ANALYZE of the two queries (with X-1 and X records) is even\n> more bizarre. 
Not only are the two query plans identical (save trivial\n> differences because of the record count differences) but the explain\n> EXPLAIN ANALYZE total runtimes are near-identical -- the fast case showed\n> 259ms, the slow case 265ms.\n>\n> When the slow query was actually run, though, it took 20 minutes. There\n> were no blocked back ends shown in pg_stat_activity, and the back end\n> itself was definitely moving. I trussed the back end stuck running the\n> slow query and it spent nearly all its time doing kread() and kwrite()\n> calls. The DB log didn't show anything interesting in it. I checked to\n> make sure the SQL statement that was EXPLAIN ANALYZEd was the one actually\n> executed, and I pulled the client code into the debugger and\n> single-stepped through just to make sure it was getting stuck on that one\n> SQL statement and it wasn't the client doing something unexpected.\n>\n> Just to be even more annoying, the problem goes away. Right now I can't\n> trigger the problem. Last friday it happened reliably feeding 58 records\n> into this query. The week before it was 38 records. Today? Nothing, the\n> system's full of snappy.\n>\n> At the moment I'm at a loss as to what's going wrong, and the fact that I\n> can't duplicate it right now when I actually have time to look into the\n> problem's damned annoying. What I'm looking for are some hints as to where\n> to look the next time it does happen, or things I can flip on to catch\n> more in the log. (The logging parameters are all set at their defaults)\n> I'm going to try attaching with dbx to get a stack trace (I assume this is\n> relatively safe?) but past that I'm kind of stumped.\n\nlog_min_duration_statement is a good place to start, but it sounds\nlike the query plan you're getting when you test it by hand isn't the\nsame one that's actually executing, so I'm not sure how far that's\ngoing to get you. contrib/auto_explain sounds like it would be just\nthe thing, but unfortunately that's an 8.4 feature which hasn't been\nreleased yet. I'm not sure whether it could be built and run against\n8.3.\n\n...Robert\n", "msg_date": "Mon, 30 Mar 2009 14:31:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "> On Mon, Mar 30, 2009 at 1:50 PM, <[email protected]> wrote:\n>> I'm running a 64-bit build of Postgres 8.3.5 on AIX 5.3, and have a\n>> really\n>> strange, annoying transient problem with one particular query stalling.\n>>\n>> The symptom here is that when this query is made with X or more records\n>> in\n>> a temp table involved in the join (where X is constant when the problem\n>> manifests, but is different between manifestations) the query takes\n>> about\n>> 20 minutes. When the query is made with X-1 records it takes less than a\n>> second. It's just this one query -- for everything else the system's\n>> nice\n>> and snappy. The machine load's never above 9 (it's a 32 CPU box) and\n>> hasn't had less than 60G of free system memory on it.\n>>\n>> An EXPLAIN ANALYZE of the two queries (with X-1 and X records) is even\n>> more bizarre. 
Not only are the two query plans identical (save trivial\n>> differences because of the record count differences) but the explain\n>> EXPLAIN ANALYZE total runtimes are near-identical -- the fast case\n>> showed\n>> 259ms, the slow case 265ms.\n>\n> log_min_duration_statement is a good place to start, but it sounds\n> like the query plan you're getting when you test it by hand isn't the\n> same one that's actually executing, so I'm not sure how far that's\n> going to get you. contrib/auto_explain sounds like it would be just\n> the thing, but unfortunately that's an 8.4 feature which hasn't been\n> released yet. I'm not sure whether it could be built and run against\n> 8.3.\n\nI'm not executing any of the EXPLAINs by hand, because I didn't want to\nhave to worry about typos or filling in temp tables with test data. Inside\nthe app the SQL for the problematic query's stored in a variable -- when\nthe task runs with debugging enabled it first executes the query with\nEXPLAIN ANALYZE prepended and dumps the output, then it executes the query\nitself. It's possible something's going wrong in that, but the code's\npretty simple.\n\nArguably in this case the actual query should run faster than the EXPLAIN\nANALYZE version, since the cache is hot. (Though that'd only likely shave\na few dozen ms off the runtime)\n\n-Dan\n", "msg_date": "Mon, 30 Mar 2009 14:42:13 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "On Mon, Mar 30, 2009 at 2:42 PM, <[email protected]> wrote:\n>> On Mon, Mar 30, 2009 at 1:50 PM,  <[email protected]> wrote:\n>>> I'm running a 64-bit build of Postgres 8.3.5 on AIX 5.3, and have a\n>>> really\n>>> strange, annoying transient problem with one particular query stalling.\n>>>\n>>> The symptom here is that when this query is made with X or more records\n>>> in\n>>> a temp table involved in the join (where X is constant when the problem\n>>> manifests, but is different between manifestations) the query takes\n>>> about\n>>> 20 minutes. When the query is made with X-1 records it takes less than a\n>>> second. It's just this one query -- for everything else the system's\n>>> nice\n>>> and snappy. The machine load's never above 9 (it's a 32 CPU box) and\n>>> hasn't had less than 60G of free system memory on it.\n>>>\n>>> An EXPLAIN ANALYZE of the two queries (with X-1 and X records) is even\n>>> more bizarre. Not only are the two query plans identical (save trivial\n>>> differences because of the record count differences) but the explain\n>>> EXPLAIN ANALYZE total runtimes are near-identical -- the fast case\n>>> showed\n>>> 259ms, the slow case 265ms.\n>>\n>> log_min_duration_statement is a good place to start, but it sounds\n>> like the query plan you're getting when you test it by hand isn't the\n>> same one that's actually executing, so I'm not sure how far that's\n>> going to get you.  contrib/auto_explain sounds like it would be just\n>> the thing, but unfortunately that's an 8.4 feature which hasn't been\n>> released yet.  I'm not sure whether it could be built and run against\n>> 8.3.\n>\n> I'm not executing any of the EXPLAINs by hand, because I didn't want to\n> have to worry about typos or filling in temp tables with test data. Inside\n> the app the SQL for the problematic query's stored in a variable -- when\n> the task runs with debugging enabled it first executes the query with\n> EXPLAIN ANALYZE prepended and dumps the output, then it executes the query\n> itself. 
It's possible something's going wrong in that, but the code's\n> pretty simple.\n>\n> Arguably in this case the actual query should run faster than the EXPLAIN\n> ANALYZE version, since the cache is hot. (Though that'd only likely shave\n> a few dozen ms off the runtime)\n\nWell... yeah. Also EXPLAIN ANALYZE has a non-trivial amount of\noverhead, so that is quite bizarre. I have to suspect there is some\nsubtle difference between the way the EXPLAIN ANALYZE is done and the\nway the actual query is done... like maybe one uses parameter\nsubstitution and the other doesn't or, well, I don't know. But I\ndon't see how turning on debugging (which is essentially what EXPLAIN\nANALYZE is) can prevent the query from being slow.\n\n...Robert\n", "msg_date": "Mon, 30 Mar 2009 15:20:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "> On Mon, Mar 30, 2009 at 2:42 PM, <[email protected]> wrote:\n>>> On Mon, Mar 30, 2009 at 1:50 PM, �<[email protected]> wrote:\n>> I'm not executing any of the EXPLAINs by hand, because I didn't want to\n>> have to worry about typos or filling in temp tables with test data.\n>> Inside\n>> the app the SQL for the problematic query's stored in a variable -- when\n>> the task runs with debugging enabled it first executes the query with\n>> EXPLAIN ANALYZE prepended and dumps the output, then it executes the\n>> query\n>> itself. It's possible something's going wrong in that, but the code's\n>> pretty simple.\n>>\n>> Arguably in this case the actual query should run faster than the\n>> EXPLAIN\n>> ANALYZE version, since the cache is hot. (Though that'd only likely\n>> shave\n>> a few dozen ms off the runtime)\n>\n> Well... yeah. Also EXPLAIN ANALYZE has a non-trivial amount of\n> overhead, so that is quite bizarre. I have to suspect there is some\n> subtle difference between the way the EXPLAIN ANALYZE is done and the\n> way the actual query is done... like maybe one uses parameter\n> substitution and the other doesn't or, well, I don't know. But I\n> don't see how turning on debugging (which is essentially what EXPLAIN\n> ANALYZE is) can prevent the query from being slow.\n\nHence the query to the list. *Something* is going on, and beats me what.\nI'm assuming I'm triggering some bug in the postgres back end, or there's\nsome completely bizarre edge case that this tickles. (The massive\nkread/kwrite activity that truss showed me when I checked seemed rather\nunusual, to say the least)\n\nEXPLAIN ANALYZE is my normal means of diagnosing performance problems, but\nthat isn't helping as it shows perfectly sane results. That leaves\nabnormal means, and outside of trussing the back end or attaching with dbx\nto get a stack trace I just don't have any of those. 
I'm not even sure\nwhat I should be looking for when I do get a stack trace.\n\n-Dan\n", "msg_date": "Mon, 30 Mar 2009 15:35:10 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "On Mon, Mar 30, 2009 at 12:42 PM, <[email protected]> wrote:\n>> On Mon, Mar 30, 2009 at 1:50 PM,  <[email protected]> wrote:\n>>> I'm running a 64-bit build of Postgres 8.3.5 on AIX 5.3, and have a\n>>> really\n>>> strange, annoying transient problem with one particular query stalling.\n>>>\n>>> The symptom here is that when this query is made with X or more records\n>>> in\n>>> a temp table involved in the join (where X is constant when the problem\n>>> manifests, but is different between manifestations) the query takes\n>>> about\n>>> 20 minutes. When the query is made with X-1 records it takes less than a\n>>> second. It's just this one query -- for everything else the system's\n>>> nice\n>>> and snappy. The machine load's never above 9 (it's a 32 CPU box) and\n>>> hasn't had less than 60G of free system memory on it.\n>>>\n>>> An EXPLAIN ANALYZE of the two queries (with X-1 and X records) is even\n>>> more bizarre. Not only are the two query plans identical (save trivial\n>>> differences because of the record count differences) but the explain\n>>> EXPLAIN ANALYZE total runtimes are near-identical -- the fast case\n>>> showed\n>>> 259ms, the slow case 265ms.\n>>\n>> log_min_duration_statement is a good place to start, but it sounds\n>> like the query plan you're getting when you test it by hand isn't the\n>> same one that's actually executing, so I'm not sure how far that's\n>> going to get you.  contrib/auto_explain sounds like it would be just\n>> the thing, but unfortunately that's an 8.4 feature which hasn't been\n>> released yet.  I'm not sure whether it could be built and run against\n>> 8.3.\n>\n> I'm not executing any of the EXPLAINs by hand, because I didn't want to\n> have to worry about typos or filling in temp tables with test data. Inside\n> the app the SQL for the problematic query's stored in a variable -- when\n> the task runs with debugging enabled it first executes the query with\n> EXPLAIN ANALYZE prepended and dumps the output, then it executes the query\n> itself. It's possible something's going wrong in that, but the code's\n> pretty simple.\n>\n> Arguably in this case the actual query should run faster than the EXPLAIN\n> ANALYZE version, since the cache is hot. (Though that'd only likely shave\n> a few dozen ms off the runtime)\n\nJoining a lot of tables together? Could be GEQO kicking in.\n", "msg_date": "Mon, 30 Mar 2009 13:38:03 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "> On Mon, Mar 30, 2009 at 12:42 PM, <[email protected]> wrote:\n>> Arguably in this case the actual query should run faster than the\n>> EXPLAIN\n>> ANALYZE version, since the cache is hot. (Though that'd only likely\n>> shave\n>> a few dozen ms off the runtime)\n>\n> Joining a lot of tables together? Could be GEQO kicking in.\n\nOnly if I get different query plans for the query depending on whether\nit's being EXPLAIN ANALYZEd or not. 
That seems unlikely...\n\n-Dan\n", "msg_date": "Mon, 30 Mar 2009 15:42:25 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "On Mon, Mar 30, 2009 at 1:42 PM, <[email protected]> wrote:\n>> On Mon, Mar 30, 2009 at 12:42 PM,  <[email protected]> wrote:\n>>> Arguably in this case the actual query should run faster than the\n>>> EXPLAIN\n>>> ANALYZE version, since the cache is hot. (Though that'd only likely\n>>> shave\n>>> a few dozen ms off the runtime)\n>>\n>> Joining a lot of tables together?  Could be GEQO kicking in.\n>\n> Only if I get different query plans for the query depending on whether\n> it's being EXPLAIN ANALYZEd or not. That seems unlikely...\n\nYes, you can. In fact you often will. Not because it's being\nexplained or not, just because that's how GEQO works.\n", "msg_date": "Mon, 30 Mar 2009 13:47:17 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "> On Mon, Mar 30, 2009 at 1:42 PM, <[email protected]> wrote:\n>>> On Mon, Mar 30, 2009 at 12:42 PM, �<[email protected]> wrote:\n>>>> Arguably in this case the actual query should run faster than the\n>>>> EXPLAIN\n>>>> ANALYZE version, since the cache is hot. (Though that'd only likely\n>>>> shave\n>>>> a few dozen ms off the runtime)\n>>>\n>>> Joining a lot of tables together? �Could be GEQO kicking in.\n>>\n>> Only if I get different query plans for the query depending on whether\n>> it's being EXPLAIN ANALYZEd or not. That seems unlikely...\n>\n> Yes, you can. In fact you often will. Not because it's being\n> explained or not, just because that's how GEQO works.\n\nOuch. I did *not* know that was possible -- I assumed that the plan was\ndeterministic and independent of explain analyze. The query has seven\ntables (one of them a temp table) and my geqo_threshold is set to 12. If\nI'm reading the docs right GEQO shouldn't kick in.\n\n-Dan\n", "msg_date": "Mon, 30 Mar 2009 16:02:28 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "On Mon, Mar 30, 2009 at 4:02 PM, <[email protected]> wrote:\n>> On Mon, Mar 30, 2009 at 1:42 PM,  <[email protected]> wrote:\n>>>> On Mon, Mar 30, 2009 at 12:42 PM,  <[email protected]> wrote:\n>>>>> Arguably in this case the actual query should run faster than the\n>>>>> EXPLAIN\n>>>>> ANALYZE version, since the cache is hot. (Though that'd only likely\n>>>>> shave\n>>>>> a few dozen ms off the runtime)\n>>>>\n>>>> Joining a lot of tables together?  Could be GEQO kicking in.\n>>>\n>>> Only if I get different query plans for the query depending on whether\n>>> it's being EXPLAIN ANALYZEd or not. That seems unlikely...\n>>\n>> Yes, you can.  In fact you often will.  Not because it's being\n>> explained or not, just because that's how GEQO works.\n>\n> Ouch. I did *not* know that was possible -- I assumed that the plan was\n> deterministic and independent of explain analyze. The query has seven\n> tables (one of them a temp table) and my geqo_threshold is set to 12. If\n> I'm reading the docs right GEQO shouldn't kick in.\n\nAny chance we could see the actual query? 
Right now I think we are\nshooting in the dark.\n\n...Robert\n", "msg_date": "Mon, 30 Mar 2009 16:13:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "> On Mon, Mar 30, 2009 at 4:02 PM, <[email protected]> wrote:\n>>> On Mon, Mar 30, 2009 at 1:42 PM, �<[email protected]> wrote:\n>>>>> On Mon, Mar 30, 2009 at 12:42 PM, �<[email protected]> wrote:\n>>>>>> Arguably in this case the actual query should run faster than the\n>>>>>> EXPLAIN\n>>>>>> ANALYZE version, since the cache is hot. (Though that'd only likely\n>>>>>> shave\n>>>>>> a few dozen ms off the runtime)\n>>>>>\n>>>>> Joining a lot of tables together? �Could be GEQO kicking in.\n>>>>\n>>>> Only if I get different query plans for the query depending on whether\n>>>> it's being EXPLAIN ANALYZEd or not. That seems unlikely...\n>>>\n>>> Yes, you can. �In fact you often will. �Not because it's being\n>>> explained or not, just because that's how GEQO works.\n>>\n>> Ouch. I did *not* know that was possible -- I assumed that the plan was\n>> deterministic and independent of explain analyze. The query has seven\n>> tables (one of them a temp table) and my geqo_threshold is set to 12. If\n>> I'm reading the docs right GEQO shouldn't kick in.\n>\n> Any chance we could see the actual query? Right now I think we are\n> shooting in the dark.\n\nThe query is:\n\nselect distinct\n temp_symbol.entityid,\n temp_symbol.libname,\n temp_symbol.objid,\n temp_symbol.objname,\n temp_symbol.fromsymid,\n temp_symbol.fromsymtype,\n temp_symbol.objinstance,\n NULL,\n temp_symbol.csid,\n libinstance.entityid,\n NULL,\n libobject.objid,\n NULL,\n provide_symbol.symbolid,\n provide_symbol.symboltype,\n libobject.objinstance,\n libobject.libinstanceid,\n objectinstance.csid,\n NULL,\n provide_symbol.is_weak,\n NULL,\n provide_symbol.is_local,\n NULL,\n provide_symbol.is_template,\n NULL,\n provide_symbol.is_common\n from libinstance,\n library,\n libobject,\n provide_symbol,\n temp_symbol,\n objectinstance,\n attributes\nwhere libinstance.libdate <= 1238445044\n and libinstance.enddate > 1238445044\n and libinstance.libinstanceid = libobject.libinstanceid\n and libinstance.architecture = ?\n\n\n and attributes.entityid = libinstance.entityid\n and attributes.branchid = libinstance.branchid\n and attributes.architecture = libinstance.architecture\n and library.libid = libinstance.libid\n and not secondary\nand attribute in ('notoffline', 'notoffline')\nand (provide_symbol.symboltype = 'T')\n and libobject.objinstance = provide_symbol.objinstance\n and libinstance.branchid = ?\n and provide_symbol.symbolid = temp_symbol.symbolid\n and objectinstance.objinstance = libobject.objinstance\nand libinstance.istemp = 0\n\nThe explain analyze for the query's attached in a (possibly hopeless)\nattempt to keep it from being word-wrapped into unreadability.\n\n-Dan", "msg_date": "Mon, 30 Mar 2009 16:33:14 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "[email protected] escribi�:\n\n> where libinstance.libdate <= 1238445044\n> and libinstance.enddate > 1238445044\n> and libinstance.libinstanceid = libobject.libinstanceid\n> and libinstance.architecture = ?\n\nHow are you generating the explain? My bet is that you're just\nsubstituting a literal in the architecture condition, but if the driver\nis smart then maybe it's preparating the query beforehand. 
You'll get a\ndifferent plan in that case. Try something like this:\n\nprepare foo(smallint) as ...\n where libinstance.architecture = $1\n ...\n\nexplain analyze execute foo(1);\n\nIf the plan you get from that is bad (and it often is), you should look\nat avoiding a query prepare.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 30 Mar 2009 16:39:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "> [email protected] escribi�:\n>\n>> where libinstance.libdate <= 1238445044\n>> and libinstance.enddate > 1238445044\n>> and libinstance.libinstanceid = libobject.libinstanceid\n>> and libinstance.architecture = ?\n>\n> How are you generating the explain? My bet is that you're just\n> substituting a literal in the architecture condition, but if the driver\n> is smart then maybe it's preparating the query beforehand. You'll get a\n> different plan in that case.\n\nI don't think so. Perl's DBI is involved, but the statement's in a\nvariable. The code in question is:\n\n if ($db->{debug}) {\n $db->debug(\"SQL is: $sql\\n\");\n my $rows = $db->{dbh}->selectall_arrayref(\"explain analyze $sql\",\n undef, $db->{arch},\n $db->{basebranch});\n foreach my $row (@$rows) {\n $db->debug(join(\" \", @$row). \"\\n\");\n }\n $db->debug_stamp(\"Initial query done\\n\");\n }\n\n $rows = $db->{dbh}->selectall_arrayref($sql,\n\t\t\t\t\t undef, $db->{arch},\n\t\t\t\t\t $db->{basebranch});\n\nThere's no transform of the sql variable between the two statements, just\na quick loop over the returned rows from the explain analyze to print them\nout. (I did try to make sure that the debugging bits were done the same\nway as the mainline code, but I may have bobbled something)\n\n-Dan\n\n\n", "msg_date": "Mon, 30 Mar 2009 16:49:55 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "[email protected] escribi�:\n> > [email protected] escribi�:\n> >\n> >> where libinstance.libdate <= 1238445044\n> >> and libinstance.enddate > 1238445044\n> >> and libinstance.libinstanceid = libobject.libinstanceid\n> >> and libinstance.architecture = ?\n> >\n> > How are you generating the explain? My bet is that you're just\n> > substituting a literal in the architecture condition, but if the driver\n> > is smart then maybe it's preparating the query beforehand. You'll get a\n> > different plan in that case.\n> \n> I don't think so. Perl's DBI is involved, but the statement's in a\n> variable.\n\nSo what's the \"?\" in the query you pasted earlier?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 30 Mar 2009 17:00:22 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "> [email protected] escribi�:\n>> > [email protected] escribi�:\n>> >\n>> >> where libinstance.libdate <= 1238445044\n>> >> and libinstance.enddate > 1238445044\n>> >> and libinstance.libinstanceid = libobject.libinstanceid\n>> >> and libinstance.architecture = ?\n>> >\n>> > How are you generating the explain? 
My bet is that you're just\n>> > substituting a literal in the architecture condition, but if the\n>> driver\n>> > is smart then maybe it's preparating the query beforehand. You'll get\n>> a\n>> > different plan in that case.\n>>\n>> I don't think so. Perl's DBI is involved, but the statement's in a\n>> variable.\n>\n> So what's the \"?\" in the query you pasted earlier?\n\nThe first ? (for architecture) is 1, the second ? (for branchid) is 0.\nThey both should get passed to Postgres as $1 and $2, respectively,\nassuming DBD::Pg does its substitution right. (They're both supposed to go\nin as placeholders)\n\n-Dan\n", "msg_date": "Mon, 30 Mar 2009 17:07:36 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "[email protected] escribi�:\n\n> > So what's the \"?\" in the query you pasted earlier?\n> \n> The first ? (for architecture) is 1, the second ? (for branchid) is 0.\n> They both should get passed to Postgres as $1 and $2, respectively,\n> assuming DBD::Pg does its substitution right. (They're both supposed to go\n> in as placeholders)\n\nRight, so how about you reread what I wrote above?\n\nOh, hmm, so to be more clear: I don't think DBD::Pg is actually sending\nEXECUTE PREPARE. You need to do this over psql.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 30 Mar 2009 17:22:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "> [email protected] escribi�:\n>\n>> > So what's the \"?\" in the query you pasted earlier?\n>>\n>> The first ? (for architecture) is 1, the second ? (for branchid) is 0.\n>> They both should get passed to Postgres as $1 and $2, respectively,\n>> assuming DBD::Pg does its substitution right. (They're both supposed to\n>> go\n>> in as placeholders)\n>\n> Right, so how about you reread what I wrote above?\n>\n> Oh, hmm, so to be more clear: I don't think DBD::Pg is actually sending\n> EXECUTE PREPARE. You need to do this over psql.\n\nFair enough. (And sorry about the mis-read) Next time this occurs I'll try\nand duplicate this in psql. FWIW, a quick read of the C underlying the\nDBD::Pg module shows it using PQexecPrepared, so I'm pretty sure it is\nusing prepared statements with placeholders, but double-checking seems\nprudent.\n\n-Dan\n", "msg_date": "Mon, 30 Mar 2009 17:34:17 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "[email protected] escribi�:\n\n> Fair enough. (And sorry about the mis-read) Next time this occurs I'll try\n> and duplicate this in psql. 
FWIW, a quick read of the C underlying the\n> DBD::Pg module shows it using PQexecPrepared, so I'm pretty sure it is\n> using prepared statements with placeholders, but double-checking seems\n> prudent.\n\nYes, but I doubt that it'll be smart enough to work for EXPLAIN in the\nway we need here.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 30 Mar 2009 17:38:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" }, { "msg_contents": "\nOn 3/30/09 2:34 PM, \"[email protected]\" <[email protected]> wrote:\n\n>> [email protected] escribió:\n>> \n>>>> So what's the \"?\" in the query you pasted earlier?\n>>> \n>>> The first ? (for architecture) is 1, the second ? (for branchid) is 0.\n>>> They both should get passed to Postgres as $1 and $2, respectively,\n>>> assuming DBD::Pg does its substitution right. (They're both supposed to\n>>> go\n>>> in as placeholders)\n>> \n>> Right, so how about you reread what I wrote above?\n>> \n>> Oh, hmm, so to be more clear: I don't think DBD::Pg is actually sending\n>> EXECUTE PREPARE. You need to do this over psql.\n> \n> Fair enough. (And sorry about the mis-read) Next time this occurs I'll try\n> and duplicate this in psql. FWIW, a quick read of the C underlying the\n> DBD::Pg module shows it using PQexecPrepared, so I'm pretty sure it is\n> using prepared statements with placeholders, but double-checking seems\n> prudent.\n> \n> -Dan\n> \n\nRegardless, its always a good idea to do a manual explain analyze with and\nwithout parameterization in psql if prepared statements are involved. The\nquery planner functions very differently with and without them, almost\nalways with a performance detriment to query execution times when\nparameterized.\n\n", "msg_date": "Mon, 30 Mar 2009 14:44:50 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trying to track down weird query stalls" } ]
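A minimal psql sketch of the check Alvaro and Scott Carey suggest above, for the next time the stall shows up: compare the plan built for a prepared statement (which is what PQexecPrepared actually runs) against the plan built when the planner can see the literal values. The statement below is heavily abbreviated from the real seven-table query quoted earlier, the int parameter types are only a guess at what architecture and branchid really are, and the values 1 and 0 are the ones Dan reported binding from DBD::Pg — treat it as an illustration of the technique, not the exact statement the application runs.

-- Generic plan, planned without parameter values, as the driver executes it:
PREPARE stall_probe(int, int) AS
SELECT provide_symbol.symbolid, temp_symbol.entityid
  FROM libinstance
  JOIN libobject      ON libobject.libinstanceid    = libinstance.libinstanceid
  JOIN provide_symbol ON provide_symbol.objinstance = libobject.objinstance
  JOIN temp_symbol    ON temp_symbol.symbolid       = provide_symbol.symbolid
 WHERE libinstance.architecture = $1
   AND libinstance.branchid     = $2;

EXPLAIN ANALYZE EXECUTE stall_probe(1, 0);

-- Plan the planner builds when it can see the real values:
EXPLAIN ANALYZE
SELECT provide_symbol.symbolid, temp_symbol.entityid
  FROM libinstance
  JOIN libobject      ON libobject.libinstanceid    = libinstance.libinstanceid
  JOIN provide_symbol ON provide_symbol.objinstance = libobject.objinstance
  JOIN temp_symbol    ON temp_symbol.symbolid       = provide_symbol.symbolid
 WHERE libinstance.architecture = 1
   AND libinstance.branchid     = 0;

DEALLOCATE stall_probe;

If the two plans differ markedly, the slow runtime seen in production most likely comes from the generic plan, not from the literal one the debugging code was timing.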
[ { "msg_contents": "Hi,\nFor performance reasons (obviously ;)) I'm experimenting with parallel restore in PG 8.4. I grabbed the latest source snapshot (of today, March 30) and compiled this with zlib support. I dumped a DB from PG 8.3.5 (using maximum compression). I got this message however:\npostgres@mymachine:/home/henk/postgresql-8.4/bin$ time\n./pg_restore -p 5434 -h localhost -U henk -d db_test -j 8 -Fc\n/home/henk/test-databases/dumps/db_test.custom\npg_restore: [archiver] WARNING: archive is compressed, but this\ninstallation does not support compression -- no data will be available\npg_restore: [archiver] cannot restore from compressed archive (compression\nnot supported in this installation)\n\nSo initially it seemed its only possible to do a pg_restore using the uncompressed pg_dump custom format. So I tried the uncompressed dump, but this too failed. This last part was a little problematic anyway, since pg_dump absolutely wants to read its input from a file and does not accept any input from stdin. I assume reading from a file is necessary for the multiple parallel processes to each read their own part of the file, something which might be difficult to do when reading from stdin.\nApart from the fact that it simply doesn't work for me at the moment, I see a major problem with this approach though. Dumping in the custom format (option -Fc) is far slower than dumping in the plain format. Even if the parallel restore would speed up things, then the combined time of a dump and restore would still be negatively affected when compared to doing a plain dump and restore. I'm aware of the fact that I might be hitting some bugs, as a development snapshot is by definition of course not stable. Also, perhaps I'm missing something.\nMy question is thus; could someone advise me how to get parallel restore to work and how to speed up a dump in the custom file format?\nMany thanks in advance\n\n\n_________________________________________________________________\nSee all the ways you can stay connected to friends and family\nhttp://www.microsoft.com/windows/windowslive/default.aspx\n\n\n\n\n\nHi,For performance reasons (obviously ;)) I'm experimenting with parallel restore in PG 8.4. I grabbed the latest source snapshot (of today, March 30) and compiled this with zlib support. I dumped a DB from PG 8.3.5 (using maximum compression). I got this message however:postgres@mymachine:/home/henk/postgresql-8.4/bin$ time./pg_restore -p 5434 -h localhost -U henk -d db_test -j 8 -Fc/home/henk/test-databases/dumps/db_test.custompg_restore: [archiver] WARNING: archive is compressed, but thisinstallation does not support compression -- no data will be availablepg_restore: [archiver] cannot restore from compressed archive (compressionnot supported in this installation)So initially it seemed its only possible to do a pg_restore using the uncompressed pg_dump custom format. So I tried the uncompressed dump, but this too failed. This last part was a little problematic anyway, since pg_dump absolutely wants to read its input from a file and does not accept any input from stdin. I assume reading from a file is necessary for the multiple parallel processes to each read their own part of the file, something which might be difficult to do when reading from stdin.Apart from the fact that it simply doesn't work for me at the moment, I see a major problem with this approach though. Dumping in the custom format (option -Fc) is far slower than dumping in the plain format. 
Even if the parallel restore would speed up things, then the combined time of a dump and restore would still be negatively affected when compared to doing a plain dump and restore. I'm aware of the fact that I might be hitting some bugs, as a development snapshot is by definition of course not stable. Also, perhaps I'm missing something.My question is thus; could someone advise me how to get parallel restore to work and how to speed up a dump in the custom file format?Many thanks in advanceSee all the ways you can stay connected to friends and family", "msg_date": "Wed, 1 Apr 2009 00:57:34 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "How to get parallel restore in PG 8.4 to work?" }, { "msg_contents": "henk de wit <[email protected]> writes:\n> For performance reasons (obviously ;)) I'm experimenting with parallel restore in PG 8.4. I grabbed the latest source snapshot (of today, March 30) and compiled this with zlib support. I dumped a DB from PG 8.3.5 (using maximum compression). I got this message however:\n> postgres@mymachine:/home/henk/postgresql-8.4/bin$ time\n> ./pg_restore -p 5434 -h localhost -U henk -d db_test -j 8 -Fc\n> /home/henk/test-databases/dumps/db_test.custom\n> pg_restore: [archiver] WARNING: archive is compressed, but this\n> installation does not support compression -- no data will be available\n> pg_restore: [archiver] cannot restore from compressed archive (compression\n> not supported in this installation)\n\nAs far as one can tell from here, you built *without* zlib support.\nThis is unrelated to parallel restore as such.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 31 Mar 2009 19:08:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get parallel restore in PG 8.4 to work? " }, { "msg_contents": "Hi,\n> henk de wit <[email protected]> writes:\n>> For performance reasons (obviously ;)) I'm experimenting with parallel restore in PG 8.4. [...] I got this message however:\n>> [...]\n>> pg_restore: [archiver] WARNING: archive is compressed, but this\n>> installation does not support compression -- no data will be available\n\n> As far as one can tell from here, you built *without* zlib support.\n> This is unrelated to parallel restore as such.\n\nI see. Thanks for the confirmation. I would have sworn I built with zlib support, but obviously I did something wrong. For some reason that I can't remember now, I did omit support for readline. Could that have anything to do with it, or are those completely unrelated?\nTo continue testing, I imported a PG 8.3 dump in the plain format into PG 8.4, dumped this again in the custom format and imported that again into PG 8.4 using the parallel restore feature. This proved to be very beneficial. 
Top shows that all the cores are being used:\n./pg_restore -p 5433 -h localhost -d db_test -j 8 -Fc\n/ssd/tmp/test_dump.custom\n\ntop - 11:33:37 up 1 day, 18:07, 5 users, load average: 5.63, 2.12, 0.97\nTasks: 187 total, 7 running, 180 sleeping, 0 stopped, 0 zombie\nCpu0 : 91.7%us, 8.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st\nCpu1 : 90.0%us, 9.3%sy, 0.0%ni, 0.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu2 : 81.5%us, 15.9%sy, 0.0%ni, 2.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st\nCpu3 : 87.0%us, 10.3%sy, 0.0%ni, 2.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st\nCpu4 : 91.4%us, 8.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.3%hi, 0.3%si, 0.0%st\nCpu5 : 66.8%us, 16.3%sy, 0.0%ni, 4.3%id, 11.0%wa, 0.0%hi, 1.7%si, 0.0%st\nCpu6 : 76.0%us, 12.7%sy, 0.0%ni, 0.0%id, 10.7%wa, 0.0%hi, 0.7%si, 0.0%st\nCpu7 : 97.3%us, 2.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st\nMem: 33021204k total, 32861900k used, 159304k free, 40k buffers\nSwap: 7811064k total, 2164k used, 7808900k free, 29166332k cached\n\nThe performance numbers are quite amazing. The dump is approximately 19GB in size and the filesystem I use is xfs on Debian Lenny. Using the normal restore (with a single process) the time it takes to do a full restore is 45 minutes, when using 8 processes this drops to just 14 minutes and 23 seconds. Using 16 processes it drops further to just 11 minutes and 46 seconds.\nI still have some work to do to find out why dumping in the custom format is so much slower. Unfortunately I forgot to time this exactly, but my feeling was that it was 'very slow'. I'll try to get some exact numbers though.\nKind regards,Henk\n\n\n\n_________________________________________________________________\nWhat can you do with the new Windows Live? Find out\nhttp://www.microsoft.com/windows/windowslive/default.aspx\n\n\n\n\n\nHi,> henk de wit <[email protected]> writes:>> For performance reasons (obviously ;)) I'm experimenting with parallel restore in PG 8.4. [...] I got this message however:>> [...]>> pg_restore: [archiver] WARNING: archive is compressed, but this>> installation does not support compression -- no data will be available> As far as one can tell from here, you built *without* zlib support.> This is unrelated to parallel restore as such.I see. Thanks for the confirmation. I would have sworn I built with zlib support, but obviously I did something wrong. For some reason that I can't remember now, I did omit support for readline. Could that have anything to do with it, or are those completely unrelated?To continue testing, I imported a PG 8.3 dump in the plain format into PG 8.4, dumped this again in the custom format and imported that again into PG 8.4 using the parallel restore feature. This proved to be very beneficial. 
Top shows that all the cores are being used:./pg_restore -p 5433 -h localhost -d db_test -j 8 -Fc/ssd/tmp/test_dump.customtop - 11:33:37 up 1 day, 18:07,  5 users,  load average: 5.63, 2.12, 0.97Tasks: 187 total,   7 running, 180 sleeping,   0 stopped,   0 zombieCpu0  : 91.7%us,  8.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si, 0.0%stCpu1  : 90.0%us,  9.3%sy,  0.0%ni,  0.7%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%stCpu2  : 81.5%us, 15.9%sy,  0.0%ni,  2.3%id,  0.0%wa,  0.0%hi,  0.3%si, 0.0%stCpu3  : 87.0%us, 10.3%sy,  0.0%ni,  2.3%id,  0.0%wa,  0.0%hi,  0.3%si, 0.0%stCpu4  : 91.4%us,  8.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.3%hi,  0.3%si, 0.0%stCpu5  : 66.8%us, 16.3%sy,  0.0%ni,  4.3%id, 11.0%wa,  0.0%hi,  1.7%si, 0.0%stCpu6  : 76.0%us, 12.7%sy,  0.0%ni,  0.0%id, 10.7%wa,  0.0%hi,  0.7%si, 0.0%stCpu7  : 97.3%us,  2.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si, 0.0%stMem:  33021204k total, 32861900k used,   159304k free,       40k buffersSwap:  7811064k total,     2164k used,  7808900k free, 29166332k cachedThe performance numbers are quite amazing. The dump is approximately 19GB in size and the filesystem I use is xfs on Debian Lenny. Using the normal restore (with a single process) the time it takes to do a full restore is 45 minutes, when using 8 processes this drops to just 14 minutes and 23 seconds. Using 16 processes it drops further to just 11 minutes and 46 seconds.I still have some work to do to find out why dumping in the custom format is so much slower. Unfortunately I forgot to time this exactly, but my feeling was that it was 'very slow'. I'll try to get some exact numbers though.Kind regards,HenkWhat can you do with the new Windows Live? Find out", "msg_date": "Wed, 1 Apr 2009 14:22:33 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get parallel restore in PG 8.4 to work?" }, { "msg_contents": "henk de wit <[email protected]> writes:\n> I still have some work to do to find out why dumping in the custom\n> format is so much slower.\n\nOffhand the only reason I can see for it to be much different from\nplain-text output is that -Fc compresses by default. If you don't\ncare about that, try -Fc -Z0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Apr 2009 09:46:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get parallel restore in PG 8.4 to work? " }, { "msg_contents": ">> I still have some work to do to find out why dumping in the custom\n>> format is so much slower.\n> \n> Offhand the only reason I can see for it to be much different from\n> plain-text output is that -Fc compresses by default. If you don't\n> care about that, try -Fc -Z0.\n\nOk, I did some performance testing today and I appeared to be wrong after all. My apologies for the noise.\nHere are some test results:\nScenarioxfsjfs patchedjfscat backup | gunzip | psql45 min--pg_dump> hdd (uncompressed) (==pg_dump -Fp)--10 min 15 secpg_dump -Fc> hdd (uncompressed)10 min 20 sec10 min 21 sec10 min 28 secpg_dump -Fc | gzip> hdd11 min 20 sec11 min 25 sec12 min 04 secpg_restore 8 threads14 min 23 sec11 min 40 sec11 min 20 secpg_restore 16 threads11 min 46 sec12 min 40 sec12 min 33 secpg_restore 32 threads11 min 42 sec12 min 30 sec12 min 30 sec\nAs can be seen in the table (hope this renders correctly on the mailing list), there is barely a difference between a plain dump and a custom format dump. For who it concerns, xfs performance a little better than jfs here, but the difference is marginal. 
More on topic, beyond 16 processes there isn't any notable speed improvement for the parallel restore (as expected).\nKind regards,Henk\n_________________________________________________________________\nSee all the ways you can stay connected to friends and family\nhttp://www.microsoft.com/windows/windowslive/default.aspx\n\n\n\n\n\n>> I still have some work to do to find out why dumping in the custom>> format is so much slower.> > Offhand the only reason I can see for it to be much different from> plain-text output is that -Fc compresses by default. If you don't> care about that, try -Fc -Z0.Ok, I did some performance testing today and I appeared to be wrong after all. My apologies for the noise.Here are some test results:Scenarioxfsjfs patchedjfscat backup | gunzip | psql45 min--pg_dump> hdd (uncompressed) (==pg_dump -Fp)--10 min 15 secpg_dump -Fc> hdd (uncompressed)10 min 20 sec10 min 21 sec10 min 28 secpg_dump -Fc | gzip> hdd11 min 20 sec11 min 25 sec12 min 04 secpg_restore 8 threads14 min 23 sec11 min 40 sec11 min 20 secpg_restore 16 threads11 min 46 sec12 min 40 sec12 min 33 secpg_restore 32 threads11 min 42 sec12 min 30 sec12 min 30 secAs can be seen in the table (hope this renders correctly on the mailing list), there is barely a difference between a plain dump and a custom format dump. For who it concerns, xfs performance a little better than jfs here, but the difference is marginal. More on topic, beyond 16 processes there isn't any notable speed improvement for the parallel restore (as expected).Kind regards,HenkSee all the ways you can stay connected to friends and family", "msg_date": "Thu, 2 Apr 2009 22:18:20 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get parallel restore in PG 8.4 to work?" } ]
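A rough shell sketch tying together Tom's -Fc -Z0 suggestion and the parallel restore runs timed above. Host names, ports, users, database and file names are placeholders to adjust; -j 8 simply mirrors the 8-way run, since henk's numbers show little gain past 16 jobs. Note that pg_restore -j needs a real archive file it can seek around in, so the dump cannot be piped straight into it.

# Custom-format dump with compression turned off (-Fc compresses by default):
pg_dump -Fc -Z0 -h dumphost -p 5432 -U henk -f /backups/db_test.custom db_test

# Parallel restore into an 8.4 server, eight jobs:
pg_restore -j 8 -h restorehost -p 5433 -U henk -d db_test /backups/db_test.custom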
[ { "msg_contents": "Hi\nI have some problems with the PostgreSQL 8.3.6.\nThe client(Microsoft Access 2000) link postgresql table(via ODBC) and \nwork with this. Sometimes on the client appear:\n ODBC--call failed.\n Could not send Query(connection dead)(#26);\n\nIn PostgreSQL log appear:\n could not receive data from client: Connection timed out\n unexpected EOF on client connection\n\nIn the Client (Microsoft Access 2000) log :\n STATEMENT ERROR: func=SC_execute, desc='', errnum=27, sqlstate=, \nerrmsg='Could not send query to backend'\n\nI want to mention that the problem does not appear on the PostgreSQL8.1.\n\nWe couldn't find when it happens, and why.. \n\npostgresql.conf:\nmax_connections = 256\ntcp_keepalives_idle = 1 # TCP_KEEPIDLE, in seconds;\n # 0 selects the system default\ntcp_keepalives_interval = 1 # TCP_KEEPINTVL, in seconds;\n # 0 selects the system default\ntcp_keepalives_count = 1 # TCP_KEEPCNT;\nshared_buffers = 512MB\nwork_mem = 32MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nmax_fsm_pages = 262145 # min max_fsm_relations*16, 6 \nbytes each\n # (change requires restart)\nmax_fsm_relations = 16384 # min 100, ~70 bytes each\nwal_buffers = 256kB # min 32kB\ncheckpoint_segments = 128 # in logfile segments, min 1, \n16MB each\n\n\n\nCan any one help me with this ?\n", "msg_date": "Wed, 01 Apr 2009 12:10:27 +0300", "msg_from": "Mahu Vasile <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL " }, { "msg_contents": "On Wed, Apr 1, 2009 at 5:10 AM, Mahu Vasile <[email protected]> wrote:\n> tcp_keepalives_count = 1                # TCP_KEEPCNT;\n\nThis might not be what you want.\n\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-connection.html\n\nPresumably you'd like to wait more than 1 second before declaring the\nconnection dead...\n\nBeyond, that it sounds like a client problem more than a PostgreSQL problem.\n\n...Robert\n", "msg_date": "Wed, 1 Apr 2009 10:23:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL" } ]
[ { "msg_contents": "How hard would it be to teach planer to optimize self join?\n\nWhile this query which demonstrates it is not that common\n\nSELECT count(*)\nFROM\n\tbig_table a\n\tINNER JOIN big_table b ON a.id = b.id;\n\nThis type of query (self joining large table) is very common\n(at least in our environment because of heavy usage of views).\n\nIt would be great if Postgres could rewrite this query\n\nSELECT bt1.id, bt1.total, sq.id, sq.total\nFROM\n\tbig_table bt1\n\tINNER JOIN small_table st1 on st1.big_id = bt1.id\n\tINNER JOIN\n\t(\n\t\tSELECT bt2.id, st2.total\n\t\tFROM\n\t\t\tbig_table bt2\n\t\t\tINNER JOIN small_table st2 on st2.big_id = bt2.id\n\t\tWHERE\n\t\t\tst2.total > 100\n\t) sq ON sq.id = bt1.id\nWHERE\n\tst1.total<200\n\nlike this\n\nSELECT bt1.id, bt1.total, bt1.id, st2.total\nFROM\n\tbig_table bt1\n\tINNER JOIN small_table st1 on st1.big_id = bt1.id\n\tINNER JOIN small_table st2 on st2.big_id = bt1.id AND st2.total > 100\nWHERE\n\tst1.total<200\n\nRegards,\nRikard\n", "msg_date": "Wed, 01 Apr 2009 18:30:23 +0200", "msg_from": "Rikard Pavelic <[email protected]>", "msg_from_op": true, "msg_subject": "self join revisited" }, { "msg_contents": "On Wed, 1 Apr 2009, Rikard Pavelic wrote:\n> It would be great if Postgres could rewrite this query\n>\n> SELECT bt1.id, bt1.total, sq.id, sq.total\n> FROM\n> \tbig_table bt1\n> \tINNER JOIN small_table st1 on st1.big_id = bt1.id\n> \tINNER JOIN\n> \t(\n> \t\tSELECT bt2.id, st2.total\n> \t\tFROM\n> \t\t\tbig_table bt2\n> \t\t\tINNER JOIN small_table st2 on st2.big_id = bt2.id\n> \t\tWHERE\n> \t\t\tst2.total > 100\n> \t) sq ON sq.id = bt1.id\n> WHERE\n> \tst1.total<200\n>\n> like this\n>\n> SELECT bt1.id, bt1.total, bt1.id, st2.total\n> FROM\n> \tbig_table bt1\n> \tINNER JOIN small_table st1 on st1.big_id = bt1.id\n> \tINNER JOIN small_table st2 on st2.big_id = bt1.id AND st2.total > 100\n> WHERE\n> \tst1.total<200\n\nThose queries are only equivalent if big_table.id is unique. However, even \nso some benefit could be gained from a self-join algorithm. For instance, \nif given some rather evil cleverness, it could be adapted to calculate \noverlaps very quickly.\n\nHowever, a self-join is very similar to a merge join, and the benefit over \na standard merge join would be small.\n\nMatthew\n\n-- \n\"We did a risk management review. 
We concluded that there was no risk\n of any management.\" -- Hugo Mills <[email protected]>\n", "msg_date": "Wed, 1 Apr 2009 17:46:04 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: self join revisited" }, { "msg_contents": "Rikard Pavelic <[email protected]> writes:\n> It would be great if Postgres could rewrite this query\n\nAFAICS those queries are not going to produce the same results,\nso Postgres would be 100% incorrect to rewrite it like that for you.\n\n(If they do produce the same results, it would depend on a bunch\nof assumptions you have not stated.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Apr 2009 13:19:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: self join revisited " }, { "msg_contents": "Tom Lane wrote:\n> Rikard Pavelic <[email protected]> writes:\n> \n>> It would be great if Postgres could rewrite this query\n>> \n>\n> AFAICS those queries are not going to produce the same results,\n> so Postgres would be 100% incorrect to rewrite it like that for you.\n>\n> (If they do produce the same results, it would depend on a bunch\n> of assumptions you have not stated.)\n>\n> \t\t\tregards, tom lane\n> \n\nCan I try again? :)\n\nHow hard would it be to teach the planner about preserving uniqueness of\nrelations in subqueries?\nAnd using that information to remove unnecessary self joins on unique sets?\n\nI can try to rewrite some queries to test it on real data for how much\ngain it would provide.\n\nRegards,\nRikard\n", "msg_date": "Wed, 01 Apr 2009 20:41:49 +0200", "msg_from": "Rikard Pavelic <[email protected]>", "msg_from_op": true, "msg_subject": "Re: self join revisited" }, { "msg_contents": "> Can I try again? :)\n>\n> How hard would it be to teach the planner about preserving uniqueness of\n> relations in subqueries?\n> And using that information to remove unnecessary self joins on unique sets?\n>\n> I can try to rewrite some queries to test it on real data for how much\n> gain it would provide.\n\nI think join removal is a swell idea. In fact, I went so far as to\npost a patch to implement it. :-)\n\nhttp://archives.postgresql.org/message-id/[email protected]\n\nIt's a slightly different problem, because I'm looking at removing\nleft joins that provably don't change the output set due to a\nsufficiently strong uniqueness contraint on the nullable side of the\njoin, and you're looking at removing self-joins that provably don't\nchange the output set, which I believe requires establishing that all\nthe columns of some unique index are constrained to be equal between\nthe two copies of the table. But the two problems are very similar in\nthat you need an efficient way to assess whether there's an adequate\nunique constraint, and some of the infrastructure could probably be\nshared.\n\nThe problem from a coding perspective seems to be how and where to do\nthe test for unique-ness. In either the left-join case or the\nself-join case, you need to verify that one of the relations involved\nhas a unique index whose column list is equal to or a superset of the\navailable merge-joinable clauses (or perhaps hash-joinable clauses). I\nended up putting the logic in sort_inner_and_outer(), but that's\nmaking the decision to drop the join fairly late in the game. 
It\nwould be nice to make it earlier, before we start the dynamic\nprogramming algorithm, especially for self-join removal, where\nthrowing away the join after it's been constructed involves moving the\nquals around.\n\ncreate_unique_path() also does some interesting stuff in this area,\nbut I haven't figured out how much of that might be applicable to join\nremoval.\n\n...Robert\n", "msg_date": "Wed, 1 Apr 2009 17:14:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: self join revisited" } ]
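To make the uniqueness caveat from Matthew and Tom concrete, a small SQL sketch of when the hand-rewrite at the top of the thread is actually safe. The table and column names follow Rikard's example, but the schema is invented purely for illustration; the point is that collapsing the inner big_table reference is valid only because id is unique.

CREATE TABLE big_table   (id int PRIMARY KEY, total int NOT NULL);
CREATE TABLE small_table (big_id int NOT NULL REFERENCES big_table(id), total int NOT NULL);

-- Because big_table.id is unique, the inner copy (bt2 in the original query)
-- can never match more than one row per id, so it can be folded onto bt1:
SELECT bt1.id, bt1.total, st1.total, st2.total
  FROM big_table   bt1
  JOIN small_table st1 ON st1.big_id = bt1.id
  JOIN small_table st2 ON st2.big_id = bt1.id AND st2.total > 100
 WHERE st1.total < 200;

-- Without a unique constraint on id, each duplicate id in the dropped copy
-- would multiply the surviving rows, and the rewrite would return a different set.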
[ { "msg_contents": "Hannes sent this off-list, presumably via newsgroup, and it's certainly \nworth sharing. I've always been scared off of using XFS because of the \nproblems outlined at http://zork.net/~nick/mail/why-reiserfs-is-teh-sukc , \nwith more testing showing similar issues at \nhttp://pages.cs.wisc.edu/~vshree/xfs.pdf too\n\n(I'm finding that old message with Ted saying \"Making sure you don't lose \ndata is Job #1\" hilarious right now, consider the recent ext4 data loss \ndebacle)\n\n---------- Forwarded message ----------\nDate: Fri, 3 Apr 2009 10:19:38 +0200\nFrom: Hannes Dorbath <[email protected]>\nNewsgroups: pgsql.performance\nSubject: Re: [PERFORM] Raid 10 chunksize\n\nRon Mayer wrote:\n> Greg Smith wrote:\n>> On Wed, 1 Apr 2009, Scott Carey wrote:\n>> \n>>> Write caching on SATA is totally fine. There were some old ATA drives\n>>> that when paried with some file systems or OS's would not be safe. There \n>>> are\n>>> some combinations that have unsafe write barriers. But there is a\n>>> standard\n>>> well supported ATA command to sync and only return after the data is on\n>>> disk. If you are running an OS that is anything recent at all, and any\n>>> disks that are not really old, you're fine.\n>> While I would like to believe this, I don't trust any claims in this\n>> area that don't have matching tests that demonstrate things working as\n>> expected. And I've never seen this work.\n>> \n>> My laptop has a 7200 RPM drive, which means that if fsync is being\n>> passed through to the disk correctly I can only fsync <120\n>> times/second. Here's what I get when I run sysbench on it, starting\n>> with the default ext3 configuration:\n> \n> I believe it's ext3 who's cheating in this scenario.\n\nI assume so too. Here the same test using XFS, first with barriers (XFS \ndefault) and then without:\n\nLinux 2.6.28-gentoo-r2 #1 SMP Intel(R) Core(TM)2 CPU 6400 @ 2.13GHz \nGenuineIntel GNU/Linux\n\n/dev/sdb /data2 xfs rw,noatime,attr2,logbufs=8,logbsize=256k,noquota 0 0\n\n# sysbench --test=fileio --file-fsync-freq=1 --file-num=1 \n--file-total-size=16384 --file-test-mode=rndwr run\nsysbench 0.4.10: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nExtra file open flags: 0\n1 files, 16Kb each\n16Kb total file size\nBlock size 16Kb\nNumber of random requests for random IO: 10000\nRead/Write ratio for combined random IO test: 1.50\nPeriodic FSYNC enabled, calling fsync() each 1 requests.\nCalling fsync() at the end of test, Enabled.\nUsing synchronous I/O mode\nDoing random write test\nThreads started!\nDone.\n\nOperations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\nRead 0b Written 156.25Mb Total transferred 156.25Mb (463.9Kb/sec)\n 28.99 Requests/sec executed\n\nTest execution summary:\n total time: 344.9013s\n total number of events: 10000\n total time taken by event execution: 0.1453\n per-request statistics:\n min: 0.01ms\n avg: 0.01ms\n max: 0.07ms\n approx. 
95 percentile: 0.01ms\n\nThreads fairness:\n events (avg/stddev): 10000.0000/0.00\n execution time (avg/stddev): 0.1453/0.00\n\n\nAnd now without barriers:\n\n/dev/sdb /data2 xfs rw,noatime,attr2,nobarrier,logbufs=8,logbsize=256k,noquota \n0 0\n\n# sysbench --test=fileio --file-fsync-freq=1 --file-num=1 \n--file-total-size=16384 --file-test-mode=rndwr run\nsysbench 0.4.10: multi-threaded system evaluation benchmark\n\nRunning the test with following options:\nNumber of threads: 1\n\nExtra file open flags: 0\n1 files, 16Kb each\n16Kb total file size\nBlock size 16Kb\nNumber of random requests for random IO: 10000\nRead/Write ratio for combined random IO test: 1.50\nPeriodic FSYNC enabled, calling fsync() each 1 requests.\nCalling fsync() at the end of test, Enabled.\nUsing synchronous I/O mode\nDoing random write test\nThreads started!\nDone.\n\nOperations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\nRead 0b Written 156.25Mb Total transferred 156.25Mb (62.872Mb/sec)\n 4023.81 Requests/sec executed\n\nTest execution summary:\n total time: 2.4852s\n total number of events: 10000\n total time taken by event execution: 0.1325\n per-request statistics:\n min: 0.01ms\n avg: 0.01ms\n max: 0.06ms\n approx. 95 percentile: 0.01ms\n\nThreads fairness:\n events (avg/stddev): 10000.0000/0.00\n execution time (avg/stddev): 0.1325/0.00\n\n\n-- \nBest regards,\nHannes Dorbath\n", "msg_date": "Fri, 3 Apr 2009 05:29:10 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Fri, 3 Apr 2009, Greg Smith wrote:\n\n> Hannes sent this off-list, presumably via newsgroup, and it's certainly worth \n> sharing. I've always been scared off of using XFS because of the problems \n> outlined at http://zork.net/~nick/mail/why-reiserfs-is-teh-sukc , with more \n> testing showing similar issues at http://pages.cs.wisc.edu/~vshree/xfs.pdf \n> too\n>\n> (I'm finding that old message with Ted saying \"Making sure you don't lose \n> data is Job #1\" hilarious right now, consider the recent ext4 data loss \n> debacle)\n\nalso note that the message from Ted was back in 2004, there has been a \n_lot_ of work done on XFS in the last 4 years.\n\nas for the second link, that focuses on what happens to the filesystem if \nthe disk under it starts returning errors or garbage. with the _possible_ \nexception of ZFS, every filesystem around will do strange things under \nthose conditions. and in my option, the way to deal with this sort of \nthing isn't to move to ZFS to detect the problem, it's to setup redundancy \nin your storage so that you can not only detect the problem, but correct \nit as well (it's a good thing to know that your database file is corrupt, \nbut that's not nearly as useful as having some way to recover the data \nthat was there)\n\nDavid Lang\n\n> ---------- Forwarded message ----------\n> Date: Fri, 3 Apr 2009 10:19:38 +0200\n> From: Hannes Dorbath <[email protected]>\n> Newsgroups: pgsql.performance\n> Subject: Re: [PERFORM] Raid 10 chunksize\n>\n> Ron Mayer wrote:\n>> Greg Smith wrote:\n>>> On Wed, 1 Apr 2009, Scott Carey wrote:\n>>> \n>>>> Write caching on SATA is totally fine. There were some old ATA drives\n>>>> that when paried with some file systems or OS's would not be safe. There \n>>>> are\n>>>> some combinations that have unsafe write barriers. But there is a\n>>>> standard\n>>>> well supported ATA command to sync and only return after the data is on\n>>>> disk. 
If you are running an OS that is anything recent at all, and any\n>>>> disks that are not really old, you're fine.\n>>> While I would like to believe this, I don't trust any claims in this\n>>> area that don't have matching tests that demonstrate things working as\n>>> expected. And I've never seen this work.\n>>> \n>>> My laptop has a 7200 RPM drive, which means that if fsync is being\n>>> passed through to the disk correctly I can only fsync <120\n>>> times/second. Here's what I get when I run sysbench on it, starting\n>>> with the default ext3 configuration:\n>> \n>> I believe it's ext3 who's cheating in this scenario.\n>\n> I assume so too. Here the same test using XFS, first with barriers (XFS \n> default) and then without:\n>\n> Linux 2.6.28-gentoo-r2 #1 SMP Intel(R) Core(TM)2 CPU 6400 @ 2.13GHz \n> GenuineIntel GNU/Linux\n>\n> /dev/sdb /data2 xfs rw,noatime,attr2,logbufs=8,logbsize=256k,noquota 0 0\n>\n> # sysbench --test=fileio --file-fsync-freq=1 --file-num=1 \n> --file-total-size=16384 --file-test-mode=rndwr run\n> sysbench 0.4.10: multi-threaded system evaluation benchmark\n>\n> Running the test with following options:\n> Number of threads: 1\n>\n> Extra file open flags: 0\n> 1 files, 16Kb each\n> 16Kb total file size\n> Block size 16Kb\n> Number of random requests for random IO: 10000\n> Read/Write ratio for combined random IO test: 1.50\n> Periodic FSYNC enabled, calling fsync() each 1 requests.\n> Calling fsync() at the end of test, Enabled.\n> Using synchronous I/O mode\n> Doing random write test\n> Threads started!\n> Done.\n>\n> Operations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\n> Read 0b Written 156.25Mb Total transferred 156.25Mb (463.9Kb/sec)\n> 28.99 Requests/sec executed\n>\n> Test execution summary:\n> total time: 344.9013s\n> total number of events: 10000\n> total time taken by event execution: 0.1453\n> per-request statistics:\n> min: 0.01ms\n> avg: 0.01ms\n> max: 0.07ms\n> approx. 95 percentile: 0.01ms\n>\n> Threads fairness:\n> events (avg/stddev): 10000.0000/0.00\n> execution time (avg/stddev): 0.1453/0.00\n>\n>\n> And now without barriers:\n>\n> /dev/sdb /data2 xfs \n> rw,noatime,attr2,nobarrier,logbufs=8,logbsize=256k,noquota 0 0\n>\n> # sysbench --test=fileio --file-fsync-freq=1 --file-num=1 \n> --file-total-size=16384 --file-test-mode=rndwr run\n> sysbench 0.4.10: multi-threaded system evaluation benchmark\n>\n> Running the test with following options:\n> Number of threads: 1\n>\n> Extra file open flags: 0\n> 1 files, 16Kb each\n> 16Kb total file size\n> Block size 16Kb\n> Number of random requests for random IO: 10000\n> Read/Write ratio for combined random IO test: 1.50\n> Periodic FSYNC enabled, calling fsync() each 1 requests.\n> Calling fsync() at the end of test, Enabled.\n> Using synchronous I/O mode\n> Doing random write test\n> Threads started!\n> Done.\n>\n> Operations performed: 0 Read, 10000 Write, 10000 Other = 20000 Total\n> Read 0b Written 156.25Mb Total transferred 156.25Mb (62.872Mb/sec)\n> 4023.81 Requests/sec executed\n>\n> Test execution summary:\n> total time: 2.4852s\n> total number of events: 10000\n> total time taken by event execution: 0.1325\n> per-request statistics:\n> min: 0.01ms\n> avg: 0.01ms\n> max: 0.06ms\n> approx. 
95 percentile: 0.01ms\n>\n> Threads fairness:\n> events (avg/stddev): 10000.0000/0.00\n> execution time (avg/stddev): 0.1325/0.00\n>\n>\n>\n", "msg_date": "Fri, 3 Apr 2009 18:05:20 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "On Fri, 3 Apr 2009, [email protected] wrote:\n\n> also note that the message from Ted was back in 2004, there has been a _lot_ \n> of work done on XFS in the last 4 years.\n\nSure, I know they've made progress, which is why I didn't also bring up \nolder ugly problems like delayed allocation issues reducing files to zero \nlength on XFS. I thought that particular issue was pretty fundamental to \nthe logical journal scheme XFS is based on. What's you'll get out of disk \nI/O at smaller than the block level is pretty unpredictable when there's a \nfailure.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 3 Apr 2009 22:26:49 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Raid 10 chunksize" }, { "msg_contents": "\n\nOn 4/3/09 6:05 PM, \"[email protected]\" <[email protected]> wrote:\n\n> On Fri, 3 Apr 2009, Greg Smith wrote:\n> \n>> Hannes sent this off-list, presumably via newsgroup, and it's certainly worth\n>> sharing. I've always been scared off of using XFS because of the problems\n>> outlined at http://zork.net/~nick/mail/why-reiserfs-is-teh-sukc , with more\n>> testing showing similar issues at http://pages.cs.wisc.edu/~vshree/xfs.pdf\n>> too\n>> \n>> (I'm finding that old message with Ted saying \"Making sure you don't lose\n>> data is Job #1\" hilarious right now, consider the recent ext4 data loss\n>> debacle)\n> \n> also note that the message from Ted was back in 2004, there has been a\n> _lot_ of work done on XFS in the last 4 years.\n> \n> as for the second link, that focuses on what happens to the filesystem if\n> the disk under it starts returning errors or garbage. with the _possible_\n> exception of ZFS, every filesystem around will do strange things under\n> those conditions. and in my option, the way to deal with this sort of\n> thing isn't to move to ZFS to detect the problem, it's to setup redundancy\n> in your storage so that you can not only detect the problem, but correct\n> it as well (it's a good thing to know that your database file is corrupt,\n> but that's not nearly as useful as having some way to recover the data\n> that was there)\n\nNot trying to spread too much kool-aid around, but ZFS does that.\n\nIf a mirror set (which might be 2, 3 or more copies in the mirror) detects a\nchecksum error, it reads the other copies and attempts to correct the bad\nblock.\nPLUS, the performance under normal conditions for reads scales with the\nmirrors. 12 disks in raid 10 do writes as fast as 6 disk raid 0, but reads\nas fast as 12 disk raid 0 since it does not have to read both mirror sets to\ndetect an error, only to recover. You can even just write zeros to random\nspots in a mirror and it will throw errors and use the other copies.\n\nThis really isn't a ZFS promotion, rather its a promotion of the power of\nchecksums at the file system and raid level. A hardware raid card could\njust as well sacrifice some space to place checksums on its blocks and get\nmuch the same result.\n\n\n> \n> David Lang\n> \n\n\n", "msg_date": "Fri, 3 Apr 2009 22:24:52 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Raid 10 chunksize" } ]
[ { "msg_contents": "\nSo, I have a view. The query that the view uses can be written two \ndifferent ways, to use two different indexes. Then I use the view in \nanother query, under some circumstances the first way will be quick, and \nunder other circumstances the second way will be quick.\n\nWhat I want to know is, can I create a view that has both queries, and \nallows the planner to choose which one to use? The documentation seems to \nsay so in http://www.postgresql.org/docs/8.3/interactive/querytree.html \n(the rule system \"creates zero or more query trees as result\"), but \ndoesn't say how one would do it.\n\nMatthew\n\n-- \n I have an inferiority complex. But it's not a very good one.\n", "msg_date": "Fri, 3 Apr 2009 14:17:58 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Rewriting using rules for performance" }, { "msg_contents": "On Fri, Apr 3, 2009 at 9:17 AM, Matthew Wakeling <[email protected]> wrote:\n> So, I have a view. The query that the view uses can be written two different\n> ways, to use two different indexes. Then I use the view in another query,\n> under some circumstances the first way will be quick, and under other\n> circumstances the second way will be quick.\n>\n> What I want to know is, can I create a view that has both queries, and\n> allows the planner to choose which one to use? The documentation seems to\n> say so in http://www.postgresql.org/docs/8.3/interactive/querytree.html (the\n> rule system \"creates zero or more query trees as result\"), but doesn't say\n> how one would do it.\n\nI think this would be clearer if you gave an actual example of what\nyou're trying to accomplish, but the short answer is \"no\". The rule\nsystem lets you create multiple query trees to perform multiple\nactions (for example, when an INSERT command is issued, do the\noriginal insert plus also an update) and it implements views. But\nit's independent of query planning.\n\nOn the other hand, the query planner should be figuring out which\nindex to use without any help from you. If it's not, something is\nwrong.\n\n...Robert\n", "msg_date": "Fri, 3 Apr 2009 09:46:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rewriting using rules for performance" }, { "msg_contents": "On Fri, 3 Apr 2009, Robert Haas wrote:\n> On the other hand, the query planner should be figuring out which\n> index to use without any help from you. If it's not, something is\n> wrong.\n\nUnfortunately it cannot tell that\n\nSELECT l1.id AS id1, l2.id AS id2 FROM location l1, location l2\nWHERE l1.start <= l2.end AND l2.start <= l1.end\n\nis the same as\n\nSELECT l1.id AS id1, l2.id AS id2 FROM location l1, location l2\nWHERE bioseg_create(l1.start, l1.end) && bioseg_create(l2.start, l2.end)\n\nwhich is also the same as\n\nSELECT * from do_overlaps() AS (id1 int, id2 int)\n\nBut thanks for clarifying the rule thing for me.\n\nMatthew\n\n-- \n The email of the species is more deadly than the mail.\n", "msg_date": "Fri, 3 Apr 2009 14:55:15 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rewriting using rules for performance" }, { "msg_contents": "On Fri, Apr 3, 2009 at 9:17 AM, Matthew Wakeling <[email protected]> wrote:\n>\n> So, I have a view. The query that the view uses can be written two different\n> ways, to use two different indexes. 
Then I use the view in another query,\n> under some circumstances the first way will be quick, and under other\n> circumstances the second way will be quick.\n>\n> What I want to know is, can I create a view that has both queries, and\n> allows the planner to choose which one to use? The documentation seems to\n> say so in http://www.postgresql.org/docs/8.3/interactive/querytree.html (the\n> rule system \"creates zero or more query trees as result\"), but doesn't say\n> how one would do it.\n\nyes.\n\ncreate view v as\nselect * from\n(\n select true as b, pg_sleep(1)::text\n union all\n select false as b, pg_sleep(1)::text\n) q;\n\nrecent versions of pg are smart enough to optimize (in some cases):\nselect * from v where b;\n\nmerlin\n", "msg_date": "Fri, 3 Apr 2009 11:52:36 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rewriting using rules for performance" }, { "msg_contents": "On Fri, Apr 3, 2009 at 11:52 AM, Merlin Moncure <[email protected]> wrote:\n> On Fri, Apr 3, 2009 at 9:17 AM, Matthew Wakeling <[email protected]> wrote:\n>>\n>> So, I have a view. The query that the view uses can be written two different\n>> ways, to use two different indexes. Then I use the view in another query,\n>> under some circumstances the first way will be quick, and under other\n>> circumstances the second way will be quick.\n>>\n>> What I want to know is, can I create a view that has both queries, and\n>> allows the planner to choose which one to use? The documentation seems to\n>> say so in http://www.postgresql.org/docs/8.3/interactive/querytree.html (the\n>> rule system \"creates zero or more query trees as result\"), but doesn't say\n>> how one would do it.\n>\n> yes.\n>\n> create view v as\n> select * from\n> (\n>  select true as b, pg_sleep(1)::text\n>  union all\n>  select false as b, pg_sleep(1)::text\n> ) q;\n>\n> recent versions of pg are smart enough to optimize (in some cases):\n> select * from v where b;\n\n\noop, I read your question wrong. for the above to work, _you_ have to\nchoose the plan, not the planner. I think it still might be possible\nso long as you can deterministically figure out (say, as the result of\na function) which query you want the planner to choose using a form of\nthe above technique.\n\nmerlin\n", "msg_date": "Fri, 3 Apr 2009 11:59:38 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rewriting using rules for performance" } ]
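To make Merlin's suggestion concrete against the queries earlier in this thread: the sketch below is untested and makes a few assumptions -- it uses the location table's intermine_start/intermine_end columns in place of the shorthand start/end (since "end" would need quoting as a column name), and it assumes the bioseg type mentioned in the thread is installed. Both formulations go into one UNION ALL view behind a constant boolean column, so the caller, not the planner, picks which arm (and therefore which indexing strategy) actually runs:

CREATE VIEW location_overlaps AS
SELECT true AS plain_form, l1.id AS id1, l2.id AS id2
  FROM location l1, location l2
 WHERE l1.intermine_start <= l2.intermine_end
   AND l2.intermine_start <= l1.intermine_end
UNION ALL
SELECT false AS plain_form, l1.id AS id1, l2.id AS id2
  FROM location l1, location l2
 WHERE bioseg_create(l1.intermine_start, l1.intermine_end)
       && bioseg_create(l2.intermine_start, l2.intermine_end);

-- The caller chooses the formulation:
SELECT * FROM location_overlaps WHERE plain_form;       -- plain-comparison arm
SELECT * FROM location_overlaps WHERE NOT plain_form;   -- bioseg overlap-operator arm

Whether the unused arm is really skipped depends on the planner pushing the constant qualifier down into the UNION ALL branches; as Merlin notes, that only happens in some cases, so it is worth confirming with EXPLAIN on the release in question.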
[ { "msg_contents": "\nI'm writing a plpgsql function that effectively does a merge join on the \nresults of two queries. Now, it appears that I cannot read the results of \ntwo queries as streams in plpgsql, so I need to copy the contents of one \nquery into an array first, and then iterate over the second query \nafterwards.\n\nI have discovered that creating large arrays in plpgql is rather slow. In \nfact, it seems to be O(n^2). The following code fragment is incredibly \nslow:\n\n genes = '{}';\n next_new = 1;\n FOR loc IN SELECT location.* FROM location, gene WHERE location.subjectid = gene.id ORDER BY objectid, intermine_start, intermine_end LOOP\n genes[next_new] = loc;\n IF (next_new % 10000 = 0) THEN\n RAISE NOTICE 'Scanned % gene locations', next_new;\n END IF;\n next_new = next_new + 1;\n END LOOP;\n genes_size = coalesce(array_upper(genes, 1), 0);\n RAISE NOTICE 'Scanned % gene locations', genes_size;\n\nFor 200,000 rows it takes 40 minutes.\n\nSo, is there a way to dump the results of a query into an array quickly in \nplpgsql, or alternatively is there a way to read two results streams \nsimultaneously?\n\nMatthew\n\n-- \n I would like to think that in this day and age people would know better than\n to open executables in an e-mail. I'd also like to be able to flap my arms\n and fly to the moon. -- Tim Mullen\n", "msg_date": "Fri, 3 Apr 2009 14:32:54 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "plpgsql arrays" }, { "msg_contents": "On Fri, Apr 3, 2009 at 9:32 AM, Matthew Wakeling <[email protected]> wrote:\n> I'm writing a plpgsql function that effectively does a merge join on the\n> results of two queries. Now, it appears that I cannot read the results of\n> two queries as streams in plpgsql, so I need to copy the contents of one\n> query into an array first, and then iterate over the second query\n> afterwards.\n\nWhy not just use SQL to do the join?\n\n...Robert\n", "msg_date": "Fri, 3 Apr 2009 09:47:31 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "On Fri, 3 Apr 2009, Robert Haas wrote:\n> On Fri, Apr 3, 2009 at 9:32 AM, Matthew Wakeling <[email protected]> wrote:\n>> I'm writing a plpgsql function that effectively does a merge join on the\n>> results of two queries.\n\n> Why not just use SQL to do the join?\n\nBecause the merge condition is:\n\nWHERE l1.start <= l2.end AND l2.start <= l1.end\n\nand merge joins in postgres only currently cope with the case where the \nmerge condition is an equals relationship.\n\nOh, hang on, I think I saw something in the docs about what conditions can \nbe used in a merge...\n\nMatthew\n\n-- \n Let's say I go into a field and I hear \"baa baa baa\". 
Now, how do I work \n out whether that was \"baa\" followed by \"baa baa\", or if it was \"baa baa\"\n followed by \"baa\"?\n - Computer Science Lecturer\n", "msg_date": "Fri, 3 Apr 2009 14:57:12 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Fri, 3 Apr 2009, Robert Haas wrote:\n>> Why not just use SQL to do the join?\n\n> Because the merge condition is:\n\n> WHERE l1.start <= l2.end AND l2.start <= l1.end\n\n> and merge joins in postgres only currently cope with the case where the \n> merge condition is an equals relationship.\n\n> Oh, hang on, I think I saw something in the docs about what conditions can \n> be used in a merge...\n\nNo, you got it right the first time. I was about to suggest that maybe\nyou could make it work by recasting the problem as equality on an\ninterval datatype, but the problem is that this is not equality but\n\"overlaps\". And you can't cheat and call it equality, because it's\nnot transitive.\n\nI don't actually believe that a standard merge join algorithm will work\nwith an intransitive join condition ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Apr 2009 10:04:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> I have discovered that creating large arrays in plpgql is rather slow. In \n> fact, it seems to be O(n^2).\n\nFor variable-width element types, yeah. Don't go that way.\n\n> ... alternatively is there a way to read two results streams \n> simultaneously?\n\nUse two cursors and FETCH from each as needed? In recent releases you\ncan even scroll backwards, which you're going to need to do to make\na merge join work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Apr 2009 10:08:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Fri, 3 Apr 2009, Tom Lane wrote:\n>> Oh, hang on, I think I saw something in the docs about what conditions can\n>> be used in a merge...\n>\n> No, you got it right the first time. I was about to suggest that maybe\n> you could make it work by recasting the problem as equality on an\n> interval datatype, but the problem is that this is not equality but\n> \"overlaps\". And you can't cheat and call it equality, because it's\n> not transitive.\n\nWell, according to \nhttp://www.postgresql.org/docs/8.3/interactive/xoper-optimization.html#AEN41844\n\n| So, both data types must be capable of being fully ordered, and the \n| join operator must be one that can only succeed for pairs of values that \n| fall at the \"same place\" in the sort order.\n\n> I don't actually believe that a standard merge join algorithm will work\n> with an intransitive join condition ...\n\nA standard merge join should work absolutely fine, depending on how it's \nimplemented. If the implementation keeps a list of \"current\" right-hand \nelements, and adds right-hand rows to the list when they compare \"equal\" \nto the current left-hand element, and removes them from the list when they \ncompare \"not equal\" to the current left-hand element, then it would work \nfine. 
If it does something else like rewinding the right-hand stream, or \nthrowing away the list when the current left-hand element is \"not equal\" \nthe previous left-hand element, (which would be fine for true equality) \nthen it will not work.\n\nThe description in the docs doesn't make it clear which way Postgres does \nit.\n\nMatthew\n\n-- \n I have an inferiority complex. But it's not a very good one.\n", "msg_date": "Fri, 3 Apr 2009 15:14:54 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Fri, 3 Apr 2009, Tom Lane wrote:\n>> I don't actually believe that a standard merge join algorithm will work\n>> with an intransitive join condition ...\n\n> A standard merge join should work absolutely fine, depending on how it's \n> implemented. If the implementation keeps a list of \"current\" right-hand \n> elements, and adds right-hand rows to the list when they compare \"equal\" \n> to the current left-hand element, and removes them from the list when they \n> compare \"not equal\" to the current left-hand element, then it would work \n> fine.\n\nNo, it would not. Not unless you have sorted the inputs in some way\nthat has more knowledge than the \"equal\" operator represents. Otherwise\nyou can have elements drop out that might still be needed to match to a\nlater left-hand element. Remember the point of the intransitivity\nassumption: there can be a right-hand element X that is \"equal\" to two\nleft-hand elements Y and Z, but Y and Z are not \"equal\" to each other\nand thus might not be kept adjacent in the left-hand sorting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Apr 2009 10:24:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Fri, 3 Apr 2009, Tom Lane wrote:\n> Not unless you have sorted the inputs in some way that has more \n> knowledge than the \"equal\" operator represents. Otherwise you can have \n> elements drop out that might still be needed to match to a later \n> left-hand element.\n\nOf course. You certainly have to choose a sort order that works. Sorting \nby the start field would be sufficient in this case.\n\nMatthew\n\n-- \n For those of you who are into writing programs that are as obscure and\n complicated as possible, there are opportunities for... real fun here\n -- Computer Science Lecturer\n", "msg_date": "Fri, 3 Apr 2009 15:28:34 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Fri, 3 Apr 2009, Matthew Wakeling wrote:\n> On Fri, 3 Apr 2009, Tom Lane wrote:\n>> Not unless you have sorted the inputs in some way that has more knowledge \n>> than the \"equal\" operator represents. Otherwise you can have elements drop \n>> out that might still be needed to match to a later left-hand element.\n>\n> Of course. You certainly have to choose a sort order that works. Sorting by \n> the start field would be sufficient in this case.\n\nOh &^%\")(!. That algorithm only finds the matches where l1.start >= \nl2.start. Yeah, you're quite right.\n\nMatthew\n\n-- \n And why do I do it that way? Because I wish to remain sane. 
Um, actually,\n maybe I should just say I don't want to be any worse than I already am.\n - Computer Science Lecturer\n", "msg_date": "Fri, 3 Apr 2009 15:45:25 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Fri, 3 Apr 2009, Tom Lane wrote:\n>> Not unless you have sorted the inputs in some way that has more \n>> knowledge than the \"equal\" operator represents. Otherwise you can have \n>> elements drop out that might still be needed to match to a later \n>> left-hand element.\n\n> Of course. You certainly have to choose a sort order that works. Sorting \n> by the start field would be sufficient in this case.\n\nUh, no, it wouldn't. Visually:\n\n\tL1\t-------------------------\n\tL2\t-----------\n\tL3\t---------------------\n\n\tR1\t --------\n\nAt L2, you'd conclude that you're done matching R1.\n\nIntuitively, it seems like 1-D \"overlaps\" is a tractable enough\noperator that you should be able to make something merge-like\nwork. But it's more complicated than I think you realize.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Apr 2009 10:48:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Fri, 3 Apr 2009, Tom Lane wrote:\n> Intuitively, it seems like 1-D \"overlaps\" is a tractable enough\n> operator that you should be able to make something merge-like\n> work. But it's more complicated than I think you realize.\n\nIt's tractable when the two sides are symmetrical, but not so much when \nthey aren't. Our \"no it isn't\" messages obviously crossed on the wire.\n\nMatthew\n\n-- \n Some people, when confronted with a problem, think \"I know, I'll use regular\n expressions.\" Now they have two problems. 
-- Jamie Zawinski\n", "msg_date": "Fri, 3 Apr 2009 15:58:03 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Fri, Apr 3, 2009 at 9:32 AM, Matthew Wakeling <[email protected]> wrote:\n>  genes = '{}';\n>  next_new = 1;\n>  FOR loc IN SELECT location.* FROM location, gene WHERE location.subjectid =\n> gene.id ORDER BY objectid, intermine_start, intermine_end LOOP\n>     genes[next_new] = loc;\n>     IF (next_new % 10000 = 0) THEN\n>         RAISE NOTICE 'Scanned % gene locations', next_new;\n>     END IF;\n>     next_new = next_new + 1;\n>  END LOOP;\n>  genes_size = coalesce(array_upper(genes, 1), 0);\n>  RAISE NOTICE 'Scanned % gene locations', genes_size;\n>\n> For 200,000 rows it takes 40 minutes.\n>\n> So, is there a way to dump the results of a query into an array quickly in\n> plpgsql, or alternatively is there a way to read two results streams\n> simultaneously?\n\ntry this:\nselect array(SELECT location.* FROM location, gene WHERE\nlocation.subjectid = gene.id ORDER BY objectid, intermine_start,\nintermine_end)) into genes;\n\nmerlin\n", "msg_date": "Fri, 3 Apr 2009 11:02:13 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "On Fri, Apr 3, 2009 at 11:02 AM, Merlin Moncure <[email protected]> wrote:\n> On Fri, Apr 3, 2009 at 9:32 AM, Matthew Wakeling <[email protected]> wrote:\n>>  genes = '{}';\n>>  next_new = 1;\n>>  FOR loc IN SELECT location.* FROM location, gene WHERE location.subjectid =\n>> gene.id ORDER BY objectid, intermine_start, intermine_end LOOP\n>>     genes[next_new] = loc;\n>>     IF (next_new % 10000 = 0) THEN\n>>         RAISE NOTICE 'Scanned % gene locations', next_new;\n>>     END IF;\n>>     next_new = next_new + 1;\n>>  END LOOP;\n>>  genes_size = coalesce(array_upper(genes, 1), 0);\n>>  RAISE NOTICE 'Scanned % gene locations', genes_size;\n>>\n>> For 200,000 rows it takes 40 minutes.\n>>\n>> So, is there a way to dump the results of a query into an array quickly in\n>> plpgsql, or alternatively is there a way to read two results streams\n>> simultaneously?\n>\n> try this:\n> select array(SELECT location.* FROM location, gene WHERE\n> location.subjectid = gene.id ORDER BY objectid, intermine_start,\n> intermine_end)) into genes;\n\none more time:\nselect array(SELECT location FROM location, gene WHERE\n location.subjectid = gene.id ORDER BY objectid, intermine_start,\n intermine_end)) into genes;\n\nthis will make array of location records. when you access the records\nto do the merge, make sure to use () noation:\n\nif (genes[x]).field > something then\n...\n\nmerlin\n", "msg_date": "Fri, 3 Apr 2009 11:09:44 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "On Fri, 3 Apr 2009, Merlin Moncure wrote:\n> select array(SELECT location FROM location, gene WHERE\n> location.subjectid = gene.id ORDER BY objectid, intermine_start,\n> intermine_end)) into genes;\n\nYeah, that works nicely.\n\n> this will make array of location records. when you access the records\n> to do the merge, make sure to use () noation:\n>\n> if (genes[x]).field > something then\n\nHow is that different to genes[x].field?\n\nMatthew\n\n-- \n And the lexer will say \"Oh look, there's a null string. Oooh, there's \n another. 
And another.\", and will fall over spectacularly when it realises\n there are actually rather a lot.\n - Computer Science Lecturer (edited)\n", "msg_date": "Fri, 3 Apr 2009 16:15:32 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "On Fri, Apr 3, 2009 at 11:15 AM, Matthew Wakeling <[email protected]> wrote:\n> On Fri, 3 Apr 2009, Merlin Moncure wrote:\n>>\n>> select array(SELECT location FROM location, gene WHERE\n>> location.subjectid = gene.id ORDER BY objectid, intermine_start,\n>> intermine_end)) into genes;\n>\n> Yeah, that works nicely.\n>\n>> this will make array of location records.  when you access the records\n>> to do the merge, make sure to use () noation:\n>>\n>> if (genes[x]).field > something then\n>\n> How is that different to genes[x].field?\n\nah, it isn't...in many cases where you access composite type fields,\n() is required (especially in query). it isn't here, so you can\nsafely leave it off.\n\nmerlin\n", "msg_date": "Fri, 3 Apr 2009 11:18:33 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "\nOn Fri, 2009-04-03 at 10:04 -0400, Tom Lane wrote:\n> Matthew Wakeling <[email protected]> writes:\n> > On Fri, 3 Apr 2009, Robert Haas wrote:\n> >> Why not just use SQL to do the join?\n> \n> > Because the merge condition is:\n> \n> > WHERE l1.start <= l2.end AND l2.start <= l1.end\n> \n> > and merge joins in postgres only currently cope with the case where the \n> > merge condition is an equals relationship.\n\n(snip)\n\n> I don't actually believe that a standard merge join algorithm will work\n> with an intransitive join condition ...\n\nI think it's a common enough problem that having a non-standard join\nalgorithm written for that case would be interesting indeed.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Fri, 03 Apr 2009 18:30:39 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "Simon Riggs wrote:\n> \n> On Fri, 2009-04-03 at 10:04 -0400, Tom Lane wrote:\n\n> > I don't actually believe that a standard merge join algorithm will work\n> > with an intransitive join condition ...\n> \n> I think it's a common enough problem that having a non-standard join\n> algorithm written for that case would be interesting indeed.\n\n+42\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 3 Apr 2009 13:33:24 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Fri, 2009-04-03 at 10:04 -0400, Tom Lane wrote:\n>> I don't actually believe that a standard merge join algorithm will work\n>> with an intransitive join condition ...\n\n> I think it's a common enough problem that having a non-standard join\n> algorithm written for that case would be interesting indeed.\n\nSounds like a great PhD thesis topic.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Apr 2009 13:38:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "> Uh, no, it wouldn't.  
Visually:\n>\n>        L1      -------------------------\n>        L2      -----------\n>        L3      ---------------------\n>\n>        R1                     --------\n>\n> At L2, you'd conclude that you're done matching R1.\n>\n\nNo, you should conclude that you're done matching L2. You conclude\nyou're done matching R1 when you reach L4 ( or there exists a j st\nLj.start > R1.end, or equivalently Lj is strictly greater than R1 )\n\nFWIW, this is a very common problem in bioinformatics. I've mostly\nimplemented this in python and C. The code is available at\nencodestatistics.org. Look in encode.py at the overlap method of the\nfeature_region class, or ( for the C version ) region_overlap in\nblock_bootstrap.c ( svn is more up to date for the C ).\n\n-Nathan\n", "msg_date": "Fri, 3 Apr 2009 12:03:43 -0700", "msg_from": "Nathan Boley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "On Fri, 3 Apr 2009, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n>> On Fri, 2009-04-03 at 10:04 -0400, Tom Lane wrote:\n>>> I don't actually believe that a standard merge join algorithm will work\n>>> with an intransitive join condition ...\n>\n>> I think it's a common enough problem that having a non-standard join\n>> algorithm written for that case would be interesting indeed.\n>\n> Sounds like a great PhD thesis topic.\n\nI agree it'd be very cool to have a non-standard join algorithm for this \nbuilt into Postgres. However it is nowhere near complicated enough for a \nPhD thesis topic.\n\nI'm just putting the finishing touches on a plpgsql implementation - in \norder to perform the join on a asymmetric set of ranges, you just need to \nkeep two separate history lists as you sweep through the two incoming \nstreams. This would be sufficient for range constraints.\n\nMatthew\n\n-- \nSurely the value of C++ is zero, but C's value is now 1?\n -- map36, commenting on the \"No, C++ isn't equal to D. 'C' is undeclared\n [...] C++ should really be called 1\" response to \"C++ -- shouldn't it\n be called D?\"\n", "msg_date": "Mon, 6 Apr 2009 12:47:40 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Fri, 3 Apr 2009, Simon Riggs wrote:\n> On Fri, 2009-04-03 at 10:04 -0400, Tom Lane wrote:\n>> Matthew Wakeling <[email protected]> writes:\n>>> On Fri, 3 Apr 2009, Robert Haas wrote:\n>>>> Why not just use SQL to do the join?\n>>>\n>>> Because the merge condition is:\n>>>\n>>> WHERE l1.start <= l2.end AND l2.start <= l1.end\n>>>\n>>> and merge joins in postgres only currently cope with the case where the\n>>> merge condition is an equals relationship.\n>>\n>> I don't actually believe that a standard merge join algorithm will work\n>> with an intransitive join condition ...\n>\n> I think it's a common enough problem that having a non-standard join\n> algorithm written for that case would be interesting indeed.\n\nI'm currently trying to persuade my boss to give me time to do some work \nto implement this in Postgres. It's not something I will be able to start \nright away, but maybe in a little while.\n\nI'm currently seeing this as being able to mark overlap constraints (\"&&\" \nin quite a few data types) as \"OVERLAP_MERGES\", and have the planner be \nable to use the new merge join algorithm. 
So it wouldn't help with the \nexact query above, but would if I rewrote it to use the bioseg or spacial \ndata types' overlap operators.\n\nI will need a little help as I am not incredibly familiar with the \nPostgres innards. Would someone be able to do that?\n\nMatthew\n\n-- \n Existence is a convenient concept to designate all of the files that an\n executable program can potentially process. -- Fortran77 standard\n", "msg_date": "Mon, 6 Apr 2009 13:52:42 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "On Mon, Apr 6, 2009 at 8:52 AM, Matthew Wakeling <[email protected]> wrote:\n> On Fri, 3 Apr 2009, Simon Riggs wrote:\n>>\n>> On Fri, 2009-04-03 at 10:04 -0400, Tom Lane wrote:\n>>>\n>>> Matthew Wakeling <[email protected]> writes:\n>>>>\n>>>> On Fri, 3 Apr 2009, Robert Haas wrote:\n>>>>>\n>>>>> Why not just use SQL to do the join?\n>>>>\n>>>> Because the merge condition is:\n>>>>\n>>>> WHERE l1.start <= l2.end AND l2.start <= l1.end\n>>>>\n>>>> and merge joins in postgres only currently cope with the case where the\n>>>> merge condition is an equals relationship.\n>>>\n>>> I don't actually believe that a standard merge join algorithm will work\n>>> with an intransitive join condition ...\n>>\n>> I think it's a common enough problem that having a non-standard join\n>> algorithm written for that case would be interesting indeed.\n>\n> I'm currently trying to persuade my boss to give me time to do some work to\n> implement this in Postgres. It's not something I will be able to start right\n> away, but maybe in a little while.\n>\n> I'm currently seeing this as being able to mark overlap constraints (\"&&\" in\n> quite a few data types) as \"OVERLAP_MERGES\", and have the planner be able to\n> use the new merge join algorithm. So it wouldn't help with the exact query\n> above, but would if I rewrote it to use the bioseg or spacial data types'\n> overlap operators.\n>\n> I will need a little help as I am not incredibly familiar with the Postgres\n> innards. Would someone be able to do that?\n\nI can help review if you post a patch, even if it's WIP. But you\nshould post it to -hackers, not here.\n\n...Robert\n", "msg_date": "Mon, 6 Apr 2009 10:36:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "On Fri, 3 Apr 2009, Tom Lane wrote:\n>> ... alternatively is there a way to read two results streams\n>> simultaneously?\n>\n> Use two cursors and FETCH from each as needed? In recent releases you\n> can even scroll backwards, which you're going to need to do to make\n> a merge join work.\n\nWhat would be the syntax for putting a single row from a cursor into a \nvariable? I have tried:\n\nFETCH INTO left left_cursor;\n\nwhich says syntax error, and\n\nleft = FETCH left_cursor;\n\nwhich gives the error 'ERROR: missing datatype declaration at or near \"=\"'\n\nMatthew\n\n-- \n I've run DOOM more in the last few days than I have the last few\n months. I just love debugging ;-) -- Linus Torvalds\n", "msg_date": "Tue, 7 Apr 2009 16:18:41 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "Matthew Wakeling wrote:\n> What would be the syntax for putting a single row from a cursor into a \n> variable? 
I have tried:\n>\n> FETCH INTO left left_cursor;\n>\n> which says syntax error, and\n>\n> left = FETCH left_cursor;\n>\n> which gives the error 'ERROR: missing datatype declaration at or near \n> \"=\"'\n>\n> Matthew\n>\n\nHave to declare Left variable as record data type declaration part of \nthe function\n\n", "msg_date": "Tue, 07 Apr 2009 11:23:59 -0400", "msg_from": "justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "On Tue, 7 Apr 2009, justin wrote:\n>> What would be the syntax for putting a single row from a cursor into a \n>> variable? I have tried:\n>> \n>> FETCH INTO left left_cursor;\n>> \n>> which says syntax error, and\n>> \n>> left = FETCH left_cursor;\n>> \n>> which gives the error 'ERROR: missing datatype declaration at or near \"=\"'\n>\n> Have to declare Left variable as record data type declaration part of the \n> function\n\nIt is.\n\nCREATE OR REPLACE FUNCTION overlap_gene_primer() RETURNS SETOF RECORD AS $$\nDECLARE\n left location;\n retval RECORD;\nBEGIN\n DECLARE left_cursor NO SCROLL CURSOR FOR SELECT location FROM location, gene WHERE location.id = gene.id ORDER BY objectid, start, end;\n left = FETCH left_cursor;\nEND;\n$$ LANGUAGE plpgsql;\n\nMatthew\n\n-- \n\"Prove to thyself that all circuits that radiateth and upon which thou worketh\n are grounded, lest they lift thee to high-frequency potential and cause thee\n to radiate also. \" -- The Ten Commandments of Electronics\n", "msg_date": "Tue, 7 Apr 2009 16:33:30 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "\n\n\n\n\nMatthew Wakeling wrote:\nOn Tue, 7 Apr 2009, justin wrote:\n \n\nWhat would be the syntax for putting a\nsingle row from a cursor into a variable? I have tried:\n \n\nFETCH INTO left left_cursor;\n \n\nwhich says syntax error, and\n \n\nleft = FETCH left_cursor;\n \n\nwhich gives the error 'ERROR:  missing datatype declaration at or near\n\"=\"'\n \n\n\nHave to declare Left variable  as record data type declaration part of\nthe function\n \n\n\nIt is.\n \n\nCREATE OR REPLACE FUNCTION overlap_gene_primer() RETURNS SETOF RECORD\nAS $$\n \nDECLARE\n \n    left location;\n \n    retval RECORD;\n \nBEGIN\n \n    DECLARE left_cursor NO SCROLL CURSOR FOR SELECT location FROM\nlocation, gene WHERE location.id = gene.id ORDER BY objectid, start,\nend;\n \n    left = FETCH left_cursor;\n \nEND;\n \n$$ LANGUAGE plpgsql;\n \n\nMatthew\n \n\n\nChange the type to Record\nfrom the help file\nFETCH retrieves the next row from the\ncursor into a target, which might be a row variable, a record\nvariable, or a comma-separated list of simple variables, just like\nSELECT INTO. If there is no next row, the\ntarget is set to NULL(s). 
As with SELECT INTO,\nthe special variable FOUND can be checked\nto see whether a row was obtained or not\n\n\n\n", "msg_date": "Tue, 07 Apr 2009 11:45:00 -0400", "msg_from": "justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> CREATE OR REPLACE FUNCTION overlap_gene_primer() RETURNS SETOF RECORD AS $$\n> DECLARE\n> left location;\n> retval RECORD;\n> BEGIN\n> DECLARE left_cursor NO SCROLL CURSOR FOR SELECT location FROM location, gene WHERE location.id = gene.id ORDER BY objectid, start, end;\n> left = FETCH left_cursor;\n> END;\n> $$ LANGUAGE plpgsql;\n\nWell, the DECLARE for the cursor should go in the DECLARE section,\nand the syntax for the FETCH should be\n\tFETCH cursorname INTO recordvariablename;\nand I'm too lazy to check right now but I think you might be missing\nan OPEN for the cursor.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Apr 2009 12:08:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Tue, 7 Apr 2009, Tom Lane wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> CREATE OR REPLACE FUNCTION overlap_gene_primer() RETURNS SETOF RECORD AS $$\n>> DECLARE\n>> left location;\n>> retval RECORD;\n>> BEGIN\n>> DECLARE left_cursor NO SCROLL CURSOR FOR SELECT location FROM location, gene WHERE location.id = gene.id ORDER BY objectid, start, end;\n>> left = FETCH left_cursor;\n>> END;\n>> $$ LANGUAGE plpgsql;\n>\n> Well, the DECLARE for the cursor should go in the DECLARE section,\n> and the syntax for the FETCH should be\n> \tFETCH cursorname INTO recordvariablename;\n> and I'm too lazy to check right now but I think you might be missing\n> an OPEN for the cursor.\n\nYeah, thanks to Justin I found the plpgsql docs for cursors. The main \ncursors docs should really link there.\n\nThis seems to do what I want:\n\nCREATE OR REPLACE FUNCTION overlap_gene_primer() RETURNS SETOF RECORD AS \n$$\nDECLARE\n left_cursor NO SCROLL CURSOR FOR SELECT location.* FROM location, gene WHERE location.subjectid = gene.id ORDER BY objectid, start, end;\n left location;\nBEGIN\n OPEN left_cursor;\n FETCH left_cursor INTO left;\nEND;\n$$ LANGUAGE plpgsql;\n\nMatthew\n\n-- \n Lord grant me patience, and I want it NOW!\n", "msg_date": "Tue, 7 Apr 2009 17:22:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Tue, Apr 7, 2009 at 11:18 AM, Matthew Wakeling <[email protected]> wrote:\n> On Fri, 3 Apr 2009, Tom Lane wrote:\n>>>\n>>> ... alternatively is there a way to read two results streams\n>>> simultaneously?\n>>\n>> Use two cursors and FETCH from each as needed?  In recent releases you\n>> can even scroll backwards, which you're going to need to do to make\n>> a merge join work.\n>\n> What would be the syntax for putting a single row from a cursor into a\n> variable? 
I have tried:\n>\n> FETCH INTO left left_cursor;\n\naccording to the docs,\n\n Examples:\n\nFETCH curs1 INTO rowvar;\nFETCH curs2 INTO foo, bar, baz;\nFETCH LAST FROM curs3 INTO x, y;\nFETCH RELATIVE -2 FROM curs4 INTO x;\n\nhttp://www.postgresql.org/docs/8.3/interactive/plpgsql-cursors.html#PLPGSQL-CURSOR-USING\n\nmerlin\n", "msg_date": "Tue, 7 Apr 2009 15:24:23 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Tue, Apr 7, 2009 at 11:18 AM, Matthew Wakeling <[email protected]> wrote:\n>> What would be the syntax for putting a single row from a cursor into a\n>> variable? I have tried:\n>> \n>> FETCH INTO left left_cursor;\n\n> according to the docs,\n> http://www.postgresql.org/docs/8.3/interactive/plpgsql-cursors.html#PLPGSQL-CURSOR-USING\n\nSubsequent discussion showed that the problem was Matthew hadn't found\nthat page. I guess that at least the DECLARE CURSOR reference page\nought to have something like \"if you are trying to use cursors in\nplpgsql, see <link>\". Matthew, where *were* you looking exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Apr 2009 15:31:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "On Tue, 7 Apr 2009, Tom Lane wrote:\n> Subsequent discussion showed that the problem was Matthew hadn't found\n> that page. I guess that at least the DECLARE CURSOR reference page\n> ought to have something like \"if you are trying to use cursors in\n> plpgsql, see <link>\". Matthew, where *were* you looking exactly?\n\nThe DECLARE CURSOR page, and then guessing the INTO bit because that's how \nSELECT works.\n\nMatthew\n\n-- \n for a in past present future; do\n for b in clients employers associates relatives neighbours pets; do\n echo \"The opinions here in no way reflect the opinions of my $a $b.\"\n done; done\n", "msg_date": "Wed, 8 Apr 2009 11:35:51 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql arrays " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Tue, 7 Apr 2009, Tom Lane wrote:\n>> Subsequent discussion showed that the problem was Matthew hadn't found\n>> that page. I guess that at least the DECLARE CURSOR reference page\n>> ought to have something like \"if you are trying to use cursors in\n>> plpgsql, see <link>\". Matthew, where *were* you looking exactly?\n\n> The DECLARE CURSOR page, and then guessing the INTO bit because that's how \n> SELECT works.\n\nI've added cross-references in the DECLARE and FETCH pages. I hope\nthat's sufficient to catch the attention of anyone trying to use cursors\n...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Apr 2009 13:57:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql arrays " } ]
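Pulling the pieces of this thread together, here is a sketch -- untested, written for 8.3-era plpgsql -- of the two-cursor sweep Matthew describes, using the OPEN / FETCH INTO pattern settled on above. The span type, the primer table, the exact cursor queries and the integer coordinate columns are stand-ins, the per-chromosome (objectid) partitioning is ignored for brevity, and every interval is assumed well-formed (start <= end). The point is the shape of the merge: pull whichever stream starts first, forget the other side's remembered rows that end too early, pair the new row with the survivors, then remember it.

-- Hypothetical composite type holding one interval from either stream.
CREATE TYPE span AS (id integer, s integer, e integer);

CREATE OR REPLACE FUNCTION overlap_sweep(OUT id1 integer, OUT id2 integer)
RETURNS SETOF record AS $$
DECLARE
    -- Both cursors must return (id, start, end) ordered by start; substitute
    -- the real gene-location and primer-location queries here.
    lcur CURSOR FOR
        SELECT l.id, l.intermine_start, l.intermine_end
          FROM location l, gene g WHERE l.subjectid = g.id ORDER BY 2;
    rcur CURSOR FOR
        SELECT l.id, l.intermine_start, l.intermine_end
          FROM location l, primer p WHERE l.subjectid = p.id ORDER BY 2;
    lrow span;   rrow span;          -- current head of each stream
    lok boolean; rok boolean;        -- is there a current head?
    lact span[] := '{}';             -- left rows that may still overlap
    ract span[] := '{}';             -- right rows that may still overlap
    keep span[];
    i integer;
BEGIN
    OPEN lcur; FETCH lcur INTO lrow; lok := FOUND;
    OPEN rcur; FETCH rcur INTO rrow; rok := FOUND;
    WHILE lok OR rok LOOP
        IF lok AND (NOT rok OR lrow.s <= rrow.s) THEN
            -- Advance the left stream: drop right rows that end before lrow
            -- starts, pair lrow with every survivor, then remember lrow.
            keep := '{}';
            FOR i IN 1 .. coalesce(array_upper(ract, 1), 0) LOOP
                IF (ract[i]).e >= lrow.s THEN
                    keep := array_append(keep, ract[i]);
                    id1 := lrow.id; id2 := (ract[i]).id; RETURN NEXT;
                END IF;
            END LOOP;
            ract := keep;
            lact := array_append(lact, lrow);
            FETCH lcur INTO lrow; lok := FOUND;
        ELSE
            -- Mirror image: advance the right stream.
            keep := '{}';
            FOR i IN 1 .. coalesce(array_upper(lact, 1), 0) LOOP
                IF (lact[i]).e >= rrow.s THEN
                    keep := array_append(keep, lact[i]);
                    id1 := (lact[i]).id; id2 := rrow.id; RETURN NEXT;
                END IF;
            END LOOP;
            lact := keep;
            ract := array_append(ract, rrow);
            FETCH rcur INTO rrow; rok := FOUND;
        END IF;
    END LOOP;
    CLOSE lcur; CLOSE rcur;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT * FROM overlap_sweep();

Because a row is only remembered while it can still overlap something yet to come, the two history arrays stay short for typical interval data, which sidesteps the O(n^2) cost of appending 200,000 rows to a single plpgsql array that started this thread.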
[ { "msg_contents": "Hello!\n\nSorry for the wall of text here.\n\nI'm working on a performance POC and I'm using pgbench and could\nuse some advice. Mostly I want to ensure that my test is valid\nand that I'm using pgbench properly.\n\nThe story behind the POC is that my developers want to pull web items\nfrom the database (not too strange) however our environment is fairly\nunique in that the item size is between 50k and 1.5megs and i need\nto retrive the data in less than a second. Oh, and we're talking about\na minimum of 400 concurrent users.\n\nMy intuition tells me that this is nuts, for a number of reasons, but\nto convince everyone I need to get some performance numbers.\n(So right now i'm just focused on how much time it takes to pull this \nrecord from the DB, not memory usage, http caching, contention, etc.)\n\nWhat i did was create a table \"temp\" with \"id(pk)\" and \"content(bytea)\"\n[ going to compare bytea vs large objects in this POC as well even\nthough i know that large objects are better for this ]\n\nI loaded the table with aproximately 50k items that were 1.2Megs in size.\n\nHere is my transaction file:\n\\setrandom iid 1 50000\nBEGIN;\nSELECT content FROM test WHERE item_id = :iid;\nEND;\n\nand then i executed:\npgbench -c 400 -t 50 -f trans.sql -l \n\ntrying to simulate 400 concurrent users performing 50 operations each\nwhich is consistant with my needs.\n\nThe results actually have surprised me, the database isn't really tuned\nand i'm not working on great hardware. But still I'm getting:\n\ncaling factor: 1\nnumber of clients: 400\nnumber of transactions per client: 50\nnumber of transactions actually processed: 20000/20000\ntps = 51.086001 (including connections establishing)\ntps = 51.395364 (excluding connections establishing)\n\nI'm not really sure how to evaulate the tps, I've read in this forum that\nsome folks are getting 2k tps so this wouldn't appear to be good to me.\n\nHowever: When i look at the logfile generated:\n\nhead -5 pgbench_log.7205\n0 0 15127082 0 1238784175 660088\n1 0 15138079 0 1238784175 671205\n2 0 15139007 0 1238784175 672180\n3 0 15141097 0 1238784175 674357\n4 0 15142000 0 1238784175 675345\n\n(I wrote a script to average the total transaction time for every record\nin the file)\navg_times.ksh pgbench_log.7205\nAvg tx time seconds: 7\n\nThat's not too bad, it seems like with real hardware + actually tuning\nthe DB i might be able to meet my requirement.\n\nSo the question is - Can anyone see a flaw in my test so far?\n(considering that i'm just focused on the performance of pulling\nthe 1.2M record from the table) and if so any suggestions to further\nnail it down?\n\nThanks\n\nDave Kerr\n", "msg_date": "Fri, 3 Apr 2009 12:53:02 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Question on pgbench output" }, { "msg_contents": "David Kerr <[email protected]> writes:\n> The results actually have surprised me, the database isn't really tuned\n> and i'm not working on great hardware. 
But still I'm getting:\n\n> caling factor: 1\n> number of clients: 400\n> number of transactions per client: 50\n> number of transactions actually processed: 20000/20000\n> tps = 51.086001 (including connections establishing)\n> tps = 51.395364 (excluding connections establishing)\n\n> I'm not really sure how to evaulate the tps, I've read in this forum that\n> some folks are getting 2k tps so this wouldn't appear to be good to me.\n\nWell, you're running a custom transaction definition so comparing your\nnumber to anyone else's is 100% meaningless. All you know here is that\nyour individual transactions are more expensive than the default pgbench\ntransaction (which we could'a told you without testing...)\n\n> (I wrote a script to average the total transaction time for every record\n> in the file)\n> avg_times.ksh pgbench_log.7205\n> Avg tx time seconds: 7\n\nThat squares with your previous results: if you're completing 50\ntransactions per sec then it takes about 8 seconds to do 400 of 'em.\nSo any one of the clients ought to see about 8 second response time.\nI think that your test is probably valid.\n\n> That's not too bad, it seems like with real hardware + actually tuning\n> the DB i might be able to meet my requirement.\n\nHow much more \"real\" is the target hardware than what you have?\nYou appear to need about a factor of 10 better disk throughput than\nyou have, and that's not going to be too cheap. I suspect that the\nthing is going to be seek-limited, and seek rate is definitely the\nmost expensive number to increase.\n\nIf the items you're pulling are static files, you should consider\nstoring them as plain files and just using the DB as an index.\nMegabyte-sized fields aren't the cheapest things to push around.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Apr 2009 16:43:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output " }, { "msg_contents": "On Fri, Apr 03, 2009 at 04:43:29PM -0400, Tom Lane wrote:\n- > I'm not really sure how to evaulate the tps, I've read in this forum that\n- > some folks are getting 2k tps so this wouldn't appear to be good to me.\n- \n- Well, you're running a custom transaction definition so comparing your\n- number to anyone else's is 100% meaningless. All you know here is that\n- your individual transactions are more expensive than the default pgbench\n- transaction (which we could'a told you without testing...)\n\nThat makes sense. I guess I included it incase there was a community\ndefined sense of what a good TPS for a highly responsive web-app. \n(like if you're getting 1000tps on your web app then your users are\nhappy)\n\nBut from the sounds of it, yeah, that would probably be difficult to \nreally measure.\n\n- > (I wrote a script to average the total transaction time for every record\n- > in the file)\n- > avg_times.ksh pgbench_log.7205\n- > Avg tx time seconds: 7\n- \n- That squares with your previous results: if you're completing 50\n- transactions per sec then it takes about 8 seconds to do 400 of 'em.\n- So any one of the clients ought to see about 8 second response time.\n- I think that your test is probably valid.\n\nOk, great. thanks!\n\n- > That's not too bad, it seems like with real hardware + actually tuning\n- > the DB i might be able to meet my requirement.\n- \n- How much more \"real\" is the target hardware than what you have?\n- You appear to need about a factor of 10 better disk throughput than\n- you have, and that's not going to be too cheap. 
I suspect that the\n- thing is going to be seek-limited, and seek rate is definitely the\n- most expensive number to increase.\n\nThe hardware i'm using is a 5 or 6 year old POS IBM Blade. we haven't\nspecced the new hardware yet but I would say that it will be sigificantly \nbetter. \n\n- If the items you're pulling are static files, you should consider\n- storing them as plain files and just using the DB as an index.\n- Megabyte-sized fields aren't the cheapest things to push around.\n\nI agree 100% and of course the memory allocation, etc from being able\nto cache the items in httpd vs in the DB is a major consideration.\n\nThanks again.\n\nDave Kerr\n", "msg_date": "Fri, 3 Apr 2009 14:18:21 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question on pgbench output" }, { "msg_contents": "On Fri, Apr 3, 2009 at 1:53 PM, David Kerr <[email protected]> wrote:\n> Here is my transaction file:\n> \\setrandom iid 1 50000\n> BEGIN;\n> SELECT content FROM test WHERE item_id = :iid;\n> END;\n>\n> and then i executed:\n> pgbench -c 400 -t 50 -f trans.sql -l\n>\n> The results actually have surprised me, the database isn't really tuned\n> and i'm not working on great hardware. But still I'm getting:\n>\n> caling factor: 1\n> number of clients: 400\n> number of transactions per client: 50\n> number of transactions actually processed: 20000/20000\n> tps = 51.086001 (including connections establishing)\n> tps = 51.395364 (excluding connections establishing)\n\nNot bad. With an average record size of 1.2Meg you're reading ~60 Meg\nper second (plus overhead) off of your drive(s).\n\n> So the question is - Can anyone see a flaw in my test so far?\n> (considering that i'm just focused on the performance of pulling\n> the 1.2M record from the table) and if so any suggestions to further\n> nail it down?\n\nYou can either get more memory (enough to hold your whole dataset in\nram), get faster drives and aggregate them with RAID-10, or look into\nsomething like memcached servers, which can cache db queries for your\napp layer.\n", "msg_date": "Fri, 3 Apr 2009 15:43:13 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output" }, { "msg_contents": "On Fri, 3 Apr 2009, David Kerr wrote:\n\n> Here is my transaction file:\n> \\setrandom iid 1 50000\n> BEGIN;\n> SELECT content FROM test WHERE item_id = :iid;\n> END;\n\nWrapping a SELECT in a BEGIN/END block is unnecessary, and it will \nsignificantly slow down things for two reason: the transactions overhead \nand the time pgbench is spending parsing/submitting those additional \nlines. Your script should be two lines long, the \\setrandom one and the \nSELECT.\n\n> trying to simulate 400 concurrent users performing 50 operations each\n> which is consistant with my needs.\n\npgbench is extremely bad at simulating large numbers of clients. The \npgbench client operates as a single thread that handles both parsing the \ninput files, sending things to clients, and processing their responses. \nIt's very easy to end up in a situation where that bottlenecks at the \npgbench client long before getting to 400 concurrent connections.\n\nThat said, if you're in the hundreds of transactions per second range that \nprobably isn't biting you yet. 
I've seen it more once you get around \n5000+ things per second going on.\n\n> I'm not really sure how to evaulate the tps, I've read in this forum that\n> some folks are getting 2k tps so this wouldn't appear to be good to me.\n\nYou can't compare what you're doing to what anybody else because your \nitem size is so big. The standard pgbench transactions all involve very \nsmall rows.\n\nThe thing that's really missing from your comments so far is the cold vs. \nhot cache issue: at the point when you're running pgbench, is a lot of \nthe data already in the PostgreSQL or OS buffer cache? If you're starting \nwithout any data in there, 50 TPS is completely reasonable--each SELECT \ncould potentially be pulling both data and some number of index blocks, \nand the tests I was just doing yesterday (with a single disk drive) \nstarted at about 40TPS. By the time the test was finished running and the \ncaches were all full of useful data, it was 17K TPS instead.\n\n> (I wrote a script to average the total transaction time for every record\n> in the file)\n\nWait until Monday, I'm announcing some pgbench tools at PG East this \nweekend that will take care of all this as well as things like graphing. \nIt pushes all the info pgbench returns, including the latency information, \ninto a database and generates a big stack of derived reports. I'd rather \nsee you help improve that than reinvent this particular wheel.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 3 Apr 2009 17:51:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output" }, { "msg_contents": "David Kerr <[email protected]> writes:\n> On Fri, Apr 03, 2009 at 04:43:29PM -0400, Tom Lane wrote:\n> - How much more \"real\" is the target hardware than what you have?\n> - You appear to need about a factor of 10 better disk throughput than\n> - you have, and that's not going to be too cheap.\n\n> The hardware i'm using is a 5 or 6 year old POS IBM Blade. we haven't\n> specced the new hardware yet but I would say that it will be sigificantly \n> better. \n\nThe point I was trying to make is that it's the disk subsystem, not\nthe CPU, that is going to make or break you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Apr 2009 18:30:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output " }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> pgbench is extremely bad at simulating large numbers of clients. The \n> pgbench client operates as a single thread that handles both parsing the \n> input files, sending things to clients, and processing their responses. \n> It's very easy to end up in a situation where that bottlenecks at the \n> pgbench client long before getting to 400 concurrent connections.\n\nYeah, good point.\n\n> That said, if you're in the hundreds of transactions per second range that \n> probably isn't biting you yet. I've seen it more once you get around \n> 5000+ things per second going on.\n\nHowever, I don't think anyone else has been pgbench'ing transactions\nwhere client-side libpq has to absorb (and then discard) a megabyte of\ndata per xact. I wouldn't be surprised that that eats enough CPU to\nmake it an issue. 
David, did you pay any attention to how busy the\npgbench process was?\n\nAnother thing that strikes me as a bit questionable is that your stated\nrequirements involve being able to pump 400MB/sec from the database\nserver to your various client machines (presumably those 400 people\naren't running their client apps directly on the DB server). What's the\nnetwork fabric going to be, again? Gigabit Ethernet won't cut it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Apr 2009 18:52:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output " }, { "msg_contents": "On Fri, Apr 03, 2009 at 06:52:26PM -0400, Tom Lane wrote:\n- Greg Smith <[email protected]> writes:\n- > pgbench is extremely bad at simulating large numbers of clients. The \n- > pgbench client operates as a single thread that handles both parsing the \n- > input files, sending things to clients, and processing their responses. \n- > It's very easy to end up in a situation where that bottlenecks at the \n- > pgbench client long before getting to 400 concurrent connections.\n- \n- Yeah, good point.\n\nhmmm ok, I didn't realize that pgbouncer wasn't threaded. I've got a Plan B \nthat doesn't use pgbouncer that i'll try.\n\n- > That said, if you're in the hundreds of transactions per second range that \n- > probably isn't biting you yet. I've seen it more once you get around \n- > 5000+ things per second going on.\n- \n- However, I don't think anyone else has been pgbench'ing transactions\n- where client-side libpq has to absorb (and then discard) a megabyte of\n- data per xact. I wouldn't be surprised that that eats enough CPU to\n- make it an issue. David, did you pay any attention to how busy the\n- pgbench process was?\nI can run it again and have a look, no problem.\n\n- Another thing that strikes me as a bit questionable is that your stated\n- requirements involve being able to pump 400MB/sec from the database\n- server to your various client machines (presumably those 400 people\n- aren't running their client apps directly on the DB server). What's the\n- network fabric going to be, again? Gigabit Ethernet won't cut it...\n\nYes, sorry I'm not trying to be confusing but i didn't want to bog\neveryone down with a ton of details. \n\n400 concurrent users doesn't mean that they're pulling 1.5 megs / second\nevery second. Just that they could potentially pull 1.5 megs at any one\nsecond. most likely there is a 6 (minimum) to 45 second (average) gap \nbetween each individual user's pull. My plan B above emulates that, but\ni was using pgbouncer to try to emulate \"worst case\" scenario.\n\n- The point I was trying to make is that it's the disk subsystem, not\n- the CPU, that is going to make or break you.\n\nMakes sense, I definitely want to avoid I/Os. \n\n\nOn Fri, Apr 03, 2009 at 05:51:50PM -0400, Greg Smith wrote:\n- Wrapping a SELECT in a BEGIN/END block is unnecessary, and it will\n- significantly slow down things for two reason: the transactions\n overhead\n- and the time pgbench is spending parsing/submitting those additional\n- lines. Your script should be two lines long, the \\setrandom one and\n the\n- SELECT.\n-\n\nOh perfect, I can try that too. thanks\n\n- The thing that's really missing from your comments so far is the cold\n- vs. 
hot cache issue: at the point when you're running pgbench, is a lot\n\nI'm testing with a cold cache because most likely the way the items are\nspead out, of those 400 users only a few at a time might access similar\nitems.\n\n- Wait until Monday, I'm announcing some pgbench tools at PG East this\n- weekend that will take care of all this as well as things like\n- graphing. It pushes all the info pgbench returns, including the latency\n- information, into a database and generates a big stack of derived reports. \n- I'd rather see you help improve that than reinvent this particular wheel.\n\nAh very cool, wish i could go (but i'm on the west coast).\n\n\nThanks again guys.\n\nDave Kerr\n\n", "msg_date": "Fri, 3 Apr 2009 16:34:58 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question on pgbench output" }, { "msg_contents": "Gah - sorry, setting up pgbouncer for my Plan B.\n\nI meant -pgbench-\n\nDave Kerr\n\n\nOn Fri, Apr 03, 2009 at 04:34:58PM -0700, David Kerr wrote:\n- On Fri, Apr 03, 2009 at 06:52:26PM -0400, Tom Lane wrote:\n- - Greg Smith <[email protected]> writes:\n- - > pgbench is extremely bad at simulating large numbers of clients. The \n- - > pgbench client operates as a single thread that handles both parsing the \n- - > input files, sending things to clients, and processing their responses. \n- - > It's very easy to end up in a situation where that bottlenecks at the \n- - > pgbench client long before getting to 400 concurrent connections.\n- - \n- - Yeah, good point.\n- \n- hmmm ok, I didn't realize that pgbouncer wasn't threaded. I've got a Plan B \n- that doesn't use pgbouncer that i'll try.\n- \n- - > That said, if you're in the hundreds of transactions per second range that \n- - > probably isn't biting you yet. I've seen it more once you get around \n- - > 5000+ things per second going on.\n- - \n- - However, I don't think anyone else has been pgbench'ing transactions\n- - where client-side libpq has to absorb (and then discard) a megabyte of\n- - data per xact. I wouldn't be surprised that that eats enough CPU to\n- - make it an issue. David, did you pay any attention to how busy the\n- - pgbench process was?\n- I can run it again and have a look, no problem.\n- \n- - Another thing that strikes me as a bit questionable is that your stated\n- - requirements involve being able to pump 400MB/sec from the database\n- - server to your various client machines (presumably those 400 people\n- - aren't running their client apps directly on the DB server). What's the\n- - network fabric going to be, again? Gigabit Ethernet won't cut it...\n- \n- Yes, sorry I'm not trying to be confusing but i didn't want to bog\n- everyone down with a ton of details. \n- \n- 400 concurrent users doesn't mean that they're pulling 1.5 megs / second\n- every second. Just that they could potentially pull 1.5 megs at any one\n- second. most likely there is a 6 (minimum) to 45 second (average) gap \n- between each individual user's pull. My plan B above emulates that, but\n- i was using pgbouncer to try to emulate \"worst case\" scenario.\n- \n- - The point I was trying to make is that it's the disk subsystem, not\n- - the CPU, that is going to make or break you.\n- \n- Makes sense, I definitely want to avoid I/Os. 
\n- \n- \n- On Fri, Apr 03, 2009 at 05:51:50PM -0400, Greg Smith wrote:\n- - Wrapping a SELECT in a BEGIN/END block is unnecessary, and it will\n- - significantly slow down things for two reason: the transactions\n- overhead\n- - and the time pgbench is spending parsing/submitting those additional\n- - lines. Your script should be two lines long, the \\setrandom one and\n- the\n- - SELECT.\n- -\n- \n- Oh perfect, I can try that too. thanks\n- \n- - The thing that's really missing from your comments so far is the cold\n- - vs. hot cache issue: at the point when you're running pgbench, is a lot\n- \n- I'm testing with a cold cache because most likely the way the items are\n- spead out, of those 400 users only a few at a time might access similar\n- items.\n- \n- - Wait until Monday, I'm announcing some pgbench tools at PG East this\n- - weekend that will take care of all this as well as things like\n- - graphing. It pushes all the info pgbench returns, including the latency\n- - information, into a database and generates a big stack of derived reports. \n- - I'd rather see you help improve that than reinvent this particular wheel.\n- \n- Ah very cool, wish i could go (but i'm on the west coast).\n- \n- \n- Thanks again guys.\n- \n- Dave Kerr\n- \n- \n- -- \n- Sent via pgsql-performance mailing list ([email protected])\n- To make changes to your subscription:\n- http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 3 Apr 2009 16:47:49 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question on pgbench output" }, { "msg_contents": "On Fri, 3 Apr 2009, Tom Lane wrote:\n\n> However, I don't think anyone else has been pgbench'ing transactions\n> where client-side libpq has to absorb (and then discard) a megabyte of\n> data per xact. I wouldn't be surprised that that eats enough CPU to\n> make it an issue. David, did you pay any attention to how busy the\n> pgbench process was?\n\nI certainly haven't ever tried that. David, the thing you want to do here \nis run \"top -c\" when pgbench is going. You should see the pgbench process \nand a bunch of postmaster ones, with \"-c\" (or by hitting \"c\" while top is \nrunning) you can even see what they're all doing. If the pgbench process \nis consuming close to 100% of a CPU's time, that means the results it's \ngiving are not valid--what you're seeing in that case are the limitations \nof the testing program instead.\n\nYou can even automate collection of that with something like this:\n\ntop -b -d 10 -c -n 10000 > top.log &\nTOPPID=$!\n(run test)\nkill $TOPPID\n\nThat will save a snapshot every 10 seconds of what's happening during your \ntest.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 3 Apr 2009 22:35:58 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output " }, { "msg_contents": "On Fri, Apr 03, 2009 at 10:35:58PM -0400, Greg Smith wrote:\n- On Fri, 3 Apr 2009, Tom Lane wrote:\n- \n- and a bunch of postmaster ones, with \"-c\" (or by hitting \"c\" while top is \n- running) you can even see what they're all doing. If the pgbench process \n- is consuming close to 100% of a CPU's time, that means the results it's \n- giving are not valid--what you're seeing in that case are the limitations \n- of the testing program instead.\n\nLooks pretty good to me. 
not too much mem or CPU.\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 4241 postgres 20 0 9936 2652 1680 S 0 0.1 0:00.02 pgbench\n 4241 postgres 20 0 22948 10m 1708 R 4 0.3 0:00.46 pgbench\n 4241 postgres 20 0 26628 14m 1708 R 5 0.3 0:00.96 pgbench\n 4241 postgres 20 0 29160 15m 1708 R 5 0.4 0:01.44 pgbench\n 4241 postgres 20 0 30888 16m 1708 R 4 0.4 0:01.86 pgbench\n 4241 postgres 20 0 31624 17m 1708 R 5 0.4 0:02.34 pgbench\n 4241 postgres 20 0 32552 18m 1708 R 5 0.5 0:02.82 pgbench\n 4241 postgres 20 0 33160 18m 1708 R 5 0.5 0:03.28 pgbench\n 4241 postgres 20 0 33608 18m 1708 R 4 0.5 0:03.70 pgbench\n 4241 postgres 20 0 34056 19m 1708 R 4 0.5 0:04.08 pgbench\n 4241 postgres 20 0 34056 19m 1708 R 4 0.5 0:04.52 pgbench\n 4241 postgres 20 0 34376 19m 1708 R 4 0.5 0:04.98 pgbench\n 4241 postgres 20 0 34536 19m 1708 R 4 0.5 0:05.42 pgbench\n 4241 postgres 20 0 34536 19m 1708 R 5 0.5 0:05.88 pgbench\n 4241 postgres 20 0 34664 19m 1708 R 5 0.5 0:06.34 pgbench\n 4241 postgres 20 0 34664 19m 1708 R 5 0.5 0:06.82 pgbench\n 4241 postgres 20 0 34664 19m 1708 R 4 0.5 0:07.26 pgbench\n 4241 postgres 20 0 34664 20m 1708 R 4 0.5 0:07.72 pgbench\n 4241 postgres 20 0 34664 20m 1708 R 4 0.5 0:08.12 pgbench\n\nDave\n", "msg_date": "Sat, 4 Apr 2009 21:06:11 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question on pgbench output" }, { "msg_contents": "\nOn Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote:\n> 400 concurrent users doesn't mean that they're pulling 1.5 megs /\n> second every second. Just that they could potentially pull 1.5 megs at\n> any one second. most likely there is a 6 (minimum) to 45 second\n> (average) gap between each individual user's pull.\n\nThere's a world of difference between 400 connected and 400 concurrent\nusers. You've been testing 400 concurrent users, yet without measuring\ndata transfer. The think time will bring the number of users right down\nagain, but you really need to include the much higher than normal data\ntransfer into your measurements and pgbench won't help there.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Sun, 05 Apr 2009 08:01:39 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote:\n>> 400 concurrent users doesn't mean that they're pulling 1.5 megs /\n>> second every second.\n\n> There's a world of difference between 400 connected and 400 concurrent\n> users. You've been testing 400 concurrent users, yet without measuring\n> data transfer. The think time will bring the number of users right down\n> again, but you really need to include the much higher than normal data\n> transfer into your measurements and pgbench won't help there.\n\nActually pgbench can simulate think time perfectly well: use its \\sleep\ncommand in your script. 
I think you can even set it up to randomize the\nsleep time.\n\nI agree that it seems David has been measuring a case far beyond what\nhis real problem is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Apr 2009 11:46:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output " }, { "msg_contents": "Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n>> On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote:\n>>> 400 concurrent users doesn't mean that they're pulling 1.5 megs /\n>>> second every second.\n> \n>> There's a world of difference between 400 connected and 400 concurrent\n>> users. You've been testing 400 concurrent users, yet without measuring\n>> data transfer. The think time will bring the number of users right down\n>> again, but you really need to include the much higher than normal data\n>> transfer into your measurements and pgbench won't help there.\n> \n> Actually pgbench can simulate think time perfectly well: use its \\sleep\n> command in your script. I think you can even set it up to randomize the\n> sleep time.\n> \n> I agree that it seems David has been measuring a case far beyond what\n> his real problem is.\n> \n> \t\t\tregards, tom lane\n> \n\nFortunately the network throughput issue is not mine to solve.\n\nWould it be fair to say that with the pgbench output i've given so far\nthat if all my users clicked \"go\" at the same time (i.e., worst case \nscenario), i could expect (from the database) about 8 second response time?\n\nThanks\n\nDave Kerr\n", "msg_date": "Sun, 05 Apr 2009 10:12:34 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question on pgbench output" }, { "msg_contents": "David Kerr <[email protected]> writes:\n> Fortunately the network throughput issue is not mine to solve.\n\n> Would it be fair to say that with the pgbench output i've given so far\n> that if all my users clicked \"go\" at the same time (i.e., worst case \n> scenario), i could expect (from the database) about 8 second response time?\n\nFor the hardware you've got, and ignoring the network bandwidth issue,\nthat appears to be a fair estimate of the worst case response.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Apr 2009 13:43:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question on pgbench output " } ]
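Pulling together the suggestions above, a custom pgbench script that models per-user think time could look roughly like the sketch below. The table and column names (docs, data, id), the row count, and the client/transaction counts are invented for illustration; the 6000-45000 ms range simply mirrors the 6 to 45 second gap David describes, and the variable form of \sleep is the behaviour Tom alludes to, so it is worth confirming against the pgbench version actually in use.

# sketch only: hypothetical table/column/database names
cat > think_time.sql <<'EOF'
\setrandom think 6000 45000
\sleep :think ms
\setrandom id 1 1000000
SELECT data FROM docs WHERE id = :id;
EOF

# 400 clients, 50 transactions each, -n skips vacuuming the standard pgbench tables
pgbench -n -f think_time.sql -c 400 -t 50 mydb

Dropping the BEGIN/END wrapper and letting \sleep supply the idle time keeps the client-side overhead down while still exercising the random-read pattern discussed in this thread.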
[ { "msg_contents": "All,\n\nI've been using Bonnie++ for ages to do filesystem testing of new DB \nservers. But Josh Drake recently turned me on to IOZone.\n\nThing is, IOZone offers a huge complex series of parameters, so I'd \nreally like to have some idea of how to configure it so its results are \napplicable to database performance.\n\nFor example, I have a database which is expected to grow to around 200GB \nin size, most of which consists of two tables partioned into 0.5GB \nchunks. Reads and writes are consistent and fairly random. The system \nhas 8 cores. How would you configure IOZone to test a filesystem for this?\n\n--Josh Berkus\n", "msg_date": "Fri, 03 Apr 2009 16:12:55 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Using IOZone to simulate DB access patterns" }, { "msg_contents": "On 4/3/09 4:12 PM, Josh Berkus wrote:\n> All,\n>\n> I've been using Bonnie++ for ages to do filesystem testing of new DB\n> servers. But Josh Drake recently turned me on to IOZone.\n\nRelated to this: is IOZone really multi-threaded? I'm doing a test run \nright now, and only one CPU is actually active. While there are 6 \nIOZone processes, most of them are idle.\n\n--Josh\n\n", "msg_date": "Fri, 03 Apr 2009 17:09:53 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "On Fri, 2009-04-03 at 17:09 -0700, Josh Berkus wrote:\n> On 4/3/09 4:12 PM, Josh Berkus wrote:\n> > All,\n> >\n> > I've been using Bonnie++ for ages to do filesystem testing of new DB\n> > servers. But Josh Drake recently turned me on to IOZone.\n> \n> Related to this: is IOZone really multi-threaded? I'm doing a test run \n> right now, and only one CPU is actually active. While there are 6 \n> IOZone processes, most of them are idle.\n\nIn order to test real interactivity (AFAIK) with iozone you have to\nlaunch multiple iozone instances. You also need to do them from separate\ndirectories, otherwise it all starts writing the same file. The work I\ndid here: \n\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n\nWas actually with multiple bash scripts firing separate instances. The\ninteresting thing here is the -s 1000m and -r8k. Those options are\nbasically use a 1000 meg file (like our data files) with 8k chunks (like\nour pages).\n\nBased on your partitioning scheme, what is the break out? Can you\nreasonably expect all partitions to be used equally?\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> --Josh\n> \n> \n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Fri, 10 Apr 2009 09:00:33 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "JD,\n\n> In order to test real interactivity (AFAIK) with iozone you have to\n> launch multiple iozone instances. You also need to do them from separate\n> directories, otherwise it all starts writing the same file. The work I\n> did here:\n\nActually, current IOZone allows you to specify multiple files. 
For \nexample, the command line I was using:\n\niozone -R -i 0 -i 1 -i 2 -i 3 -i 4 -i 5 -i 8 -l 6 -u 6 -r 8k -s 4G -F f1 \nf2 f3 f4 f5 f6\n\nAnd it does indeed launch 6 processes under that configuration. \nHowever, I found that for pretty much all of the write tests except for \nthe first the processes blocked each other:\n\n\nF S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD\n0 S 26 6061 5825 0 80 0 - 11714 wait pts/3 00:00:00 iozone\n1 D 26 6238 6061 0 78 0 - 11714 sync_p pts/3 00:00:03 iozone\n1 D 26 6239 6061 0 78 0 - 11714 sync_p pts/3 00:00:03 iozone\n1 D 26 6240 6061 0 78 0 - 11714 sync_p pts/3 00:00:03 iozone\n1 D 26 6241 6061 0 78 0 - 11714 sync_p pts/3 00:00:03 iozone\n1 D 26 6242 6061 0 78 0 - 11714 stext pts/3 00:00:03 iozone\n1 R 26 6243 6061 0 78 0 - 11714 - pts/3 00:00:03 iozone\n\n\nDon Capps says that the IOZone code is perfect, and that pattern \nindicates a problem with my system, which is possible. Can someone else \ntry concurrent IOZone on their system and see if they get the same \npattern? I just don't have that many multi-core machines to test on.\n\nAlso, WTF is the difference between \"Children See\" and \"Parent Sees\"? \nIOZone doesn't document this anywhere.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 10 Apr 2009 10:15:36 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "you can also play with this-tiny-shiny tool :\nhttp://pgfoundry.org/projects/pgiosim/\nIt just works and heavily stress the disk with random read/write.\n\n\n-- \nF4FQM\nKerunix Flan\nLaurent Laborde\n", "msg_date": "Mon, 27 Apr 2009 17:33:46 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" } ]
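As a concrete sketch of the multi-instance approach Joshua describes (separate iozone runs launched from separate directories, rather than one invocation with -F), something along these lines would do; the mount point, the choice of six workers, and the log file names are made up, and the -r 8k / -s 1000m settings just echo the values mentioned above.

#!/bin/sh
# start one independent iozone per worker directory so the runs
# do not all hammer the same file; purely illustrative
for i in 1 2 3 4 5 6; do
    mkdir -p /mnt/test/worker$i
    ( cd /mnt/test/worker$i && \
      iozone -i 0 -i 1 -i 2 -r 8k -s 1000m > /mnt/test/iozone-$i.log 2>&1 ) &
done
wait

Each run then reports its own numbers, which have to be summed by hand (or scripted) to get an aggregate figure for the array.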
[ { "msg_contents": "> I've been using Bonnie++ for ages to do filesystem testing of new DB servers. But Josh Drake recently turned me on to IOZone.\n\nPerhaps a little off-topic here, but I'm assuming you are using Linux to test your DB server (since you mention Bonnie++). But it seems to me that IOZone only has a win32 client. How did you actually run IOZone on Linux?\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n\n> I've been using Bonnie++ for ages to do filesystem testing of new DB servers. But Josh Drake recently turned me on to IOZone.Perhaps a little off-topic here, but I'm assuming you are using Linux to test your DB server (since you mention Bonnie++). But it seems to me that IOZone only has a win32 client. How did you actually run IOZone on Linux?Express yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Sat, 4 Apr 2009 12:00:52 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "henk de wit wrote:\n>> I've been using Bonnie++ for ages to do filesystem testing of new DB servers. But Josh Drake recently turned me on to IOZone.\n> \n> Perhaps a little off-topic here, but I'm assuming you are using Linux to \n> test your DB server (since you mention Bonnie++). But it seems to me \n> that IOZone only has a win32 client. How did you actually run IOZone on \n> Linux?\n\n$ apt-cache search iozone\niozone3 - Filesystem and Disk Benchmarking Tool\n\n-- \nJesper\n", "msg_date": "Sat, 04 Apr 2009 12:49:52 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "> $ apt-cache search iozone\n> iozone3 - Filesystem and Disk Benchmarking Tool\n\nYou are right. I was confused with IOMeter, which can't be run on Linux (the Dynamo part can, but that's not really useful without the 'command & control' part).\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n\n> $ apt-cache search iozone> iozone3 - Filesystem and Disk Benchmarking ToolYou are right. I was confused with IOMeter, which can't be run on Linux (the Dynamo part can, but that's not really useful without the 'command & control' part).Express yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Sat, 4 Apr 2009 17:54:43 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "All,\n\nWow, am I really the only person here who's used IOZone?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 09 Apr 2009 22:41:47 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "Josh Berkus wrote:\n> All,\n>\n> Wow, am I really the only person here who's used IOZone?\n>\n\nNo - I used to use it exclusively, but everyone else tended to demand I \nredo stuff with bonnie before taking any finding seriously... 
so I've \nkinda 'submitted to the Borg' as it were....\n", "msg_date": "Fri, 10 Apr 2009 18:26:58 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "On 4/9/09 11:26 PM, Mark Kirkwood wrote:\n> Josh Berkus wrote:\n>> All,\n>>\n>> Wow, am I really the only person here who's used IOZone?\n>>\n>\n> No - I used to use it exclusively, but everyone else tended to demand I\n> redo stuff with bonnie before taking any finding seriously... so I've\n> kinda 'submitted to the Borg' as it were....\n\nBonnie++ has its own issues with concurrency; it's using some kind of \nad-hoc threading implementation, which results in not getting real \nparallelism. I just did a test with -c 8 on Bonnie++ 1.95, and the \nprogram only ever used 3 cores.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 10 Apr 2009 10:10:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "I've switched to using FIO.\n\nBonnie in my experience produces poor results and is better suited to\ntesting desktop/workstation type load. Most of its tests don't apply to how\npostgres writes/reads anyway.\n\nIOZone is a bit more troublesome to get it to work on the file(s) you want\nunder concurrency and is also hard to get it to avoid the OS file cache. On\nsystems with lots of RAM, it takes too long as a result. I personally like\nit better than bonnnie by far, but its not flexible enough for me and is\noften used by hardware providers to 'show' theier RAID cards are doing fine\n(PERC 6 doing 4GB /sec file access -- see! Its fine!) but the thing is just\ntesting in memory cached reads for most of the test or all if not configured\nright...\n\nFIO with profiles such as the below samples are easy to set up, and they can\nbe mix/matched to test what happens with mixed read/write seq/rand -- with\nsurprising and useful tuning results. Forcing a cache flush or sync before\nor after a run is trivial. Changing to asynchronous I/O, direct I/O, or\nother forms is trivial. The output result formatting is very useful as\nwell.\n\nI got into using FIO when I needed to test a matrix of about 400 different\ntuning combinations. 
This would have taken a month with Iozone, but I could\ncreate my profiles with FIO, force the OS cache to flush, and constrain the\ntime appropriately for each test, and run the batch overnight.\n\n\n#----------------\n[read-rand]\nrw=randread\n; this will be total of all individual files per process\nsize=1g\ndirectory=/data/test\nfadvise_hint=0\nblocksize=8k\ndirect=0\nioengine=sync\niodepth=1\nnumjobs=32\n; this is number of files total per process\nnrfiles=1\ngroup_reporting=1\nruntime=1m\nexec_prerun=echo 3 > /proc/sys/vm/drop_caches\n#--------------------\n[read]\nrw=read\n; this will be total of all individual files per process\nsize=512m\ndirectory=/data/test\nfadvise_hint=0\nblocksize=8k\ndirect=0\nioengine=sync\niodepth=1\nnumjobs=8\n; this is number of files total per process\nnrfiles=1\nruntime=30s\ngroup_reporting=1\nexec_prerun=echo 3 > /proc/sys/vm/drop_caches\n\n#----------------------\n[write]\nrw=write\n; this will be total of all individual files per process\nsize=4g\ndirectory=/data/test\nfadvise_hint=0\nblocksize=8k\ndirect=0\nioengine=sync\niodepth=1\nnumjobs=1\n;rate=10000\n; this is number of files total per process\nnrfiles=1\nruntime=48s\ngroup_reporting=1\nend_fsync=1\nexec_prerun=echo 3 >sync; /proc/sys/vm/drop_caches\n\n\n\nOn 4/9/09 10:41 PM, \"Josh Berkus\" <[email protected]> wrote:\n\n> All,\n> \n> Wow, am I really the only person here who's used IOZone?\n> \n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> www.pgexperts.com\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Fri, 10 Apr 2009 10:11:46 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "Scott,\n\n> FIO with profiles such as the below samples are easy to set up, and they can\n> be mix/matched to test what happens with mixed read/write seq/rand -- with\n> surprising and useful tuning results. Forcing a cache flush or sync before\n> or after a run is trivial. Changing to asynchronous I/O, direct I/O, or\n> other forms is trivial. The output result formatting is very useful as\n> well.\n\nFIO? Link?\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 10 Apr 2009 10:31:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "\nOn 4/10/09 10:31 AM, \"Josh Berkus\" <[email protected]> wrote:\n\n> Scott,\n> \n>> FIO with profiles such as the below samples are easy to set up, and they can\n>> be mix/matched to test what happens with mixed read/write seq/rand -- with\n>> surprising and useful tuning results. Forcing a cache flush or sync before\n>> or after a run is trivial. Changing to asynchronous I/O, direct I/O, or\n>> other forms is trivial. The output result formatting is very useful as\n>> well.\n> \n> FIO? Link?\n\nFirst google result:\nhttp://freshmeat.net/projects/fio/\n\nWritten by Jens Axobe, the Linux Kernel I/O block layer maintainer. He\nwrote the CFQ scheduler and Noop scheduler, and is the author of blktrace as\nwell.\n\n\n\" fio is an I/O tool meant to be used both for benchmark and stress/hardware\nverification. 
It has support for 13 different types of I/O engines (sync,\nmmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi,\nsolarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O,\nforked or threaded jobs, and much more. It can work on block devices as well\nas files. fio accepts job descriptions in a simple-to-understand text\nformat. Several example job files are included. fio displays all sorts of\nI/O performance information. It supports Linux, FreeBSD, and OpenSolaris\"\n\n\n> \n> \n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> www.pgexperts.com\n> \n\n", "msg_date": "Fri, 10 Apr 2009 10:40:46 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "On Fri, 10 Apr 2009, Scott Carey wrote:\n\n> FIO with profiles such as the below samples are easy to set up\n\nThere are some more sample FIO profiles with results from various \nfilesystems at \nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 10 Apr 2009 14:01:33 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "\nOn 4/10/09 11:01 AM, \"Greg Smith\" <[email protected]> wrote:\n\n> On Fri, 10 Apr 2009, Scott Carey wrote:\n> \n>> FIO with profiles such as the below samples are easy to set up\n> \n> There are some more sample FIO profiles with results from various\n> filesystems at\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nI wish to thank Greg here as many of my profile variations came from the\nabove as a starting point.\n\nNote in his results the XFS file system behavior on random writes is due to\nFIO doing 'sparse writes' (which Postgres does not do, and fio exposes some\nissues on xfs with) in the default random write mode. To properly simulate\nPostgres these should be random overwrites.\n\nAdd 'overwrite=true' to the profile for random writes and the whole file\nwill be allocated before randomly (over)writing to it.\n\nHere is the man page:\nhttp://linux.die.net/man/1/fio\n\n> \n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n\n", "msg_date": "Fri, 10 Apr 2009 11:17:39 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "On Fri, 10 Apr 2009, Scott Carey wrote:\n\n> I wish to thank Greg here as many of my profile variations came from the\n> above as a starting point.\n\nThat page was mainly Mark Wong's work, I just remembered where it was.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 10 Apr 2009 15:25:01 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "I've done quite a bit with IOzone, but if you're on Linux, you have lots of\noptions. In particular, you can actually capture I/O patterns from a running\napplication with blktrace, and then replay them with btrecord / btreplay.\n\nThe documentation for this stuff is a bit hard to find. Some of the distros\ndon't install it by default. But have a look at\n\nhttp://ow.ly/2zyW\n\nfor some \"Getting Started\" info.\n-- \nM. 
Edward (Ed) Borasky\nhttp://www.linkedin.com/in/edborasky\n\nI've never met a happy clam. In fact, most of them were pretty steamed.\n\nI've done quite a bit with IOzone, but if you're on Linux, you have lots of options. In particular, you can actually capture I/O patterns from a running application with blktrace, and then replay them with btrecord / btreplay. \nThe documentation for this stuff is a bit hard to find. Some of the distros don't install it by default. But have a look athttp://ow.ly/2zyWfor some \"Getting Started\" info.\n-- M. Edward (Ed) Boraskyhttp://www.linkedin.com/in/edboraskyI've never met a happy clam. In fact, most of them were pretty steamed.", "msg_date": "Fri, 10 Apr 2009 17:03:37 -0700", "msg_from": "\"M. Edward (Ed) Borasky\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "On Fri, Apr 10, 2009 at 11:01 AM, Greg Smith <[email protected]> wrote:\n> On Fri, 10 Apr 2009, Scott Carey wrote:\n>\n>> FIO with profiles such as the below samples are easy to set up\n>\n> There are some more sample FIO profiles with results from various\n> filesystems at\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nThere's a couple of potential flaws I'm trying to characterize this\nweekend. I'm having second thoughts about how I did the sequential\nread and write profiles. Using multiple processes doesn't let it\nreally do sequential i/o. I've done one comparison so far resulting\nin about 50% more throughput using just one process to do sequential\nwrites. I just want to make sure there shouldn't be any concern for\nbeing processor bound on one core.\n\nThe other flaw is having a minimum run time. The max of 1 hour seems\nto be good to establishing steady system utilization, but letting some\ntests finish in less than 15 minutes doesn't provide \"good\" data.\n\"Good\" meaning looking at the time series of data and feeling\nconfident it's a reliable result. I think I'm describing that\ncorrectly...\n\nRegards,\nMark\n", "msg_date": "Sat, 11 Apr 2009 11:44:33 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "\n\nOn 4/11/09 11:44 AM, \"Mark Wong\" <[email protected]> wrote:\n\n> On Fri, Apr 10, 2009 at 11:01 AM, Greg Smith <[email protected]> wrote:\n>> On Fri, 10 Apr 2009, Scott Carey wrote:\n>> \n>>> FIO with profiles such as the below samples are easy to set up\n>> \n>> There are some more sample FIO profiles with results from various\n>> filesystems at\n>> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n> \n> There's a couple of potential flaws I'm trying to characterize this\n> weekend. I'm having second thoughts about how I did the sequential\n> read and write profiles. Using multiple processes doesn't let it\n> really do sequential i/o. I've done one comparison so far resulting\n> in about 50% more throughput using just one process to do sequential\n> writes. I just want to make sure there shouldn't be any concern for\n> being processor bound on one core.\n\nFWIW, my raid array will do 1200MB/sec, and no tool I've used can saturate\nit without at least two processes. 'dd' and fio can get close (1050MB/sec),\nif the block size is <= ~32k <=64k. With a postgres sized 8k block 'dd'\ncan't top 900MB/sec or so. 
FIO can saturate it only with two+ readers.\n\nI optimized my configuration for 4 concurrent sequential readers with 4\nconcurrent random readers, and this helped the overall real world\nperformance a lot. I would argue that on any system with concurrent\nqueries, concurrency of all types is important to measure. Postgres isn't\ngoing to hold up one sequential scan to wait for another. Postgres on a\n3.16Ghz CPU is CPU bound on a sequential scan at between 250MB/sec and\n800MB/sec on the type of tables/queries I have. Concurrent sequential\nperformance was affected by:\nXfs -- the gain over ext3 was large\nReadahead tuning -- about 2MB per spindle was optimal (20MB for me, sw raid\n0 on 2x[10 drive hw raid 10]).\nDeadline scheduler (big difference with concurrent sequential + random\nmixed).\n\nOne reason your tests write so much faster than they read was the linux\nreadahead value not being tuned as you later observed. This helps ext3 a\nlot, and xfs enough so that fio single threaded was faster than 'dd' to the\nraw device.\n\n> \n> The other flaw is having a minimum run time. The max of 1 hour seems\n> to be good to establishing steady system utilization, but letting some\n> tests finish in less than 15 minutes doesn't provide \"good\" data.\n> \"Good\" meaning looking at the time series of data and feeling\n> confident it's a reliable result. I think I'm describing that\n> correctly...\n\nIt really depends on the specific test though. You can usually get random\niops numbers that are realistic in a fairly short time, and 1 minute long\ntests for me vary by about 3% (which can be +-35MB/sec in my case).\n\nI ran my tests on a partition that was only 20% the size of the whole\nvolume, and at the front of it. Sequential transfer varies by a factor of 2\nacross a SATA disk from start to end, so if you want to compare file systems\nfairly on sequential transfer rate you have to limit the partition to an\narea with relatively constant STR or else one file system might win just\nbecause it placed your file earlier on the drive.\n\n\n> \n> Regards,\n> Mark\n> \n\n", "msg_date": "Sat, 11 Apr 2009 19:00:07 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "On Sat, Apr 11, 2009 at 11:44 AM, Mark Wong <[email protected]> wrote:\n> On Fri, Apr 10, 2009 at 11:01 AM, Greg Smith <[email protected]> wrote:\n>> On Fri, 10 Apr 2009, Scott Carey wrote:\n>>\n>>> FIO with profiles such as the below samples are easy to set up\n>>\n>> There are some more sample FIO profiles with results from various\n>> filesystems at\n>> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>\n> There's a couple of potential flaws I'm trying to characterize this\n> weekend.  I'm having second thoughts about how I did the sequential\n> read and write profiles.  Using multiple processes doesn't let it\n> really do sequential i/o.  I've done one comparison so far resulting\n> in about 50% more throughput using just one process to do sequential\n> writes.  I just want to make sure there shouldn't be any concern for\n> being processor bound on one core.\n>\n> The other flaw is having a minimum run time.  The max of 1 hour seems\n> to be good to establishing steady system utilization, but letting some\n> tests finish in less than 15 minutes doesn't provide \"good\" data.\n> \"Good\" meaning looking at the time series of data and feeling\n> confident it's a reliable result.  
I think I'm describing that\n> correctly...\n\nFYI, I've updated the wiki with the parameters I'm running with now.\nI haven't updated the results yet though.\n\nRegards,\nMark\n", "msg_date": "Sun, 26 Apr 2009 20:28:13 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" }, { "msg_contents": "On Sat, Apr 11, 2009 at 7:00 PM, Scott Carey <[email protected]> wrote:\n>\n>\n> On 4/11/09 11:44 AM, \"Mark Wong\" <[email protected]> wrote:\n>\n>> On Fri, Apr 10, 2009 at 11:01 AM, Greg Smith <[email protected]> wrote:\n>>> On Fri, 10 Apr 2009, Scott Carey wrote:\n>>>\n>>>> FIO with profiles such as the below samples are easy to set up\n>>>\n>>> There are some more sample FIO profiles with results from various\n>>> filesystems at\n>>> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>>\n>> There's a couple of potential flaws I'm trying to characterize this\n>> weekend.  I'm having second thoughts about how I did the sequential\n>> read and write profiles.  Using multiple processes doesn't let it\n>> really do sequential i/o.  I've done one comparison so far resulting\n>> in about 50% more throughput using just one process to do sequential\n>> writes.  I just want to make sure there shouldn't be any concern for\n>> being processor bound on one core.\n>\n> FWIW, my raid array will do 1200MB/sec, and no tool I've used can saturate\n> it without at least two processes.  'dd' and fio can get close (1050MB/sec),\n> if the block size is <= ~32k <=64k.  With a postgres sized 8k block 'dd'\n> can't top 900MB/sec or so. FIO can saturate it only with two+ readers.\n>\n> I optimized my configuration for 4 concurrent sequential readers with 4\n> concurrent random readers, and this helped the overall real world\n> performance a lot.  I would argue that on any system with concurrent\n> queries, concurrency of all types is important to measure.  Postgres isn't\n> going to hold up one sequential scan to wait for another.  Postgres on a\n> 3.16Ghz CPU is CPU bound on a sequential scan at between 250MB/sec and\n> 800MB/sec on the type of tables/queries I have.  Concurrent sequential\n> performance was affected by:\n> Xfs -- the gain over ext3 was large\n> Readahead tuning -- about 2MB per spindle was optimal (20MB for me, sw raid\n> 0 on 2x[10 drive hw raid 10]).\n> Deadline scheduler (big difference with concurrent sequential + random\n> mixed).\n>\n> One reason your tests write so much faster than they read was the linux\n> readahead value not being tuned as you later observed.  This helps ext3 a\n> lot, and xfs enough so that fio single threaded was faster than 'dd' to the\n> raw device.\n>\n>>\n>> The other flaw is having a minimum run time.  The max of 1 hour seems\n>> to be good to establishing steady system utilization, but letting some\n>> tests finish in less than 15 minutes doesn't provide \"good\" data.\n>> \"Good\" meaning looking at the time series of data and feeling\n>> confident it's a reliable result.  I think I'm describing that\n>> correctly...\n>\n> It really depends on the specific test though.  You can usually get random\n> iops numbers that are realistic in a fairly short time, and 1 minute long\n> tests for me vary by about 3% (which can be +-35MB/sec in my case).\n>\n> I ran my tests on a partition that was only 20% the size of the whole\n> volume, and at the front of it.  
Sequential transfer varies by a factor of 2\n> across a SATA disk from start to end, so if you want to compare file systems\n> fairly on sequential transfer rate you have to limit the partition to an\n> area with relatively constant STR or else one file system might win just\n> because it placed your file earlier on the drive.\n\nThat's probably what is going with the 1 disk test:\n\nhttp://207.173.203.223/~markwkm/community10/fio/linux-2.6.28-gentoo/1-disk-raid0/ext2/seq-read/io-charts/iostat-rMB.s.png\n\nversus the 4 disk test:\n\nhttp://207.173.203.223/~markwkm/community10/fio/linux-2.6.28-gentoo/4-disk-raid0/ext2/seq-read/io-charts/iostat-rMB.s.png\n\nThese are the throughput numbs but the iops are in the same directory.\n\nRegards,\nMark\n", "msg_date": "Sun, 26 Apr 2009 20:44:51 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using IOZone to simulate DB access patterns" } ]
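Following up on Scott's overwrite=true remark, a random-write job that pre-allocates the file and then overwrites it in 8k chunks (closer to how Postgres rewrites pages than sparse writes are) might be sketched as below; the size, directory, job count and runtime are placeholders in the same spirit as the profiles quoted earlier, and the drop_caches prerun needs root.

cat > rand-overwrite.fio <<'EOF'
[rand-write-overwrite]
rw=randwrite
; pre-allocate the file so writes are overwrites rather than sparse writes
overwrite=1
size=1g
directory=/data/test
blocksize=8k
direct=0
ioengine=sync
iodepth=1
numjobs=8
nrfiles=1
runtime=1m
group_reporting=1
exec_prerun=echo 3 > /proc/sys/vm/drop_caches
EOF

fio rand-overwrite.fio

Comparing this against a plain randwrite profile on xfs should show whether the sparse-write effect Scott mentions is skewing a given result.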
[ { "msg_contents": "I am looking to setup replication of my postgresql database, primarily \nfor performance reasons.\n\nThe searching I've done shows a lot of different options, can anyone \ngive suggestions about which one(s) are best? I've read the archives, \nbut there seems to be more replication solutions since the last thread \non this subject and it seems to change frequently.\n\nI'd really like a solution that replicates DDL, but very few do so I \nthink I'm out of luck for that. I can live without it.\nMulti-master support would be nice too, but also seems to cause too many \nproblems so it looks like I'll have to do without it too.\n\n\n*Slony-I* - I've used this in the past, but it's a huge pain to work \nwith, caused serious performance issues under heavy load due to long \nrunning transactions (may not be the case anymore, it's been a while \nsince I used it on a large database with many writes), and doesn't seem \nvery reliable (I've had replication break on me multiple times).\n\n*Mammoth Replicator* - This is open source now, is it any good? It \nsounds like it's trigger based like Slony. Is it based on Slony, or \nsimply use a similar solution?\n\n*pgpool* - Won't work for us reliably for replication because we have \nsome triggers and stored procedures that write data.\n\n*PGCluster* - Sounds cool, but based on the mailing list traffic and the \nlast news post on the site being from 2005, development seems to be near \ndead. Also, no releases seems to make it beyond the RC stage -- for \nmulti-master stability is particularly important for data integrity.\n\n*PGReplicator - *Don't know anything special about it.\n*\nBucardo* - Don't know anything special about it.\n\n*Postgres-R* - Don't know anything special about it.\n\n*SkyTools/Londiste* - Don't know anything special about it.\n\n\n\n\n\nI am looking to setup replication of my postgresql database, primarily\nfor performance reasons.\n\nThe searching I've done shows a lot of different options, can anyone\ngive suggestions about which one(s) are best? I've read the archives,\nbut there seems to be more replication solutions since the last thread\non this subject and it seems to change frequently. \n\nI'd really like a solution that replicates DDL, but very few do so I\nthink I'm out of luck for that. I can live without it. \nMulti-master support would be nice too, but also seems to cause too\nmany problems so it looks like I'll have to do without it too. \n\n\nSlony-I - I've used this in the past, but it's a huge pain to\nwork with, caused serious performance issues under heavy load due to\nlong running transactions (may not be the case anymore, it's been a\nwhile since I used it on a large database with many writes), and\ndoesn't seem very reliable (I've had replication break on me multiple\ntimes).\n\nMammoth Replicator - This is open source now, is it any good? It\nsounds like it's trigger based like Slony. Is it based on Slony, or\nsimply use a similar solution?\n\npgpool - Won't work for us reliably for replication because we\nhave some triggers and stored procedures that write data.\n\nPGCluster - Sounds cool, but based on the mailing list traffic\nand the last news post on the site being from 2005, development seems\nto be near dead. 
Also, no releases seems to make it beyond the RC stage\n-- for multi-master stability is particularly important for data\nintegrity.\n\nPGReplicator - Don't know anything special about it.\n\nBucardo - Don't know anything special about it.\n\nPostgres-R - Don't know anything special about it.\n\nSkyTools/Londiste - Don't know anything special about it.", "msg_date": "Sun, 05 Apr 2009 11:36:33 -0700", "msg_from": "Lists <[email protected]>", "msg_from_op": true, "msg_subject": "Best replication solution?" }, { "msg_contents": "I have a high traffic database with high volumes of reads, and moderate \nvolumes of writes. Millions of queries a day.\n\nRunning the latest version of Postgresql 8.2.x (I want to upgrade to \n8.3, but the dump/reload requires an unacceptable amount of downtime)\n\nServer is a dual core xeon 3GB ram and 2 mirrors of 15k SAS drives (1 \nfor most data, 1 for wal and a few tables and indexes)\n\nIn total all databases on the server are about 10G on disk (about 2GB in \npgdump format).\n\n\nThe IO on the disks is being maxed out and I don't have the budget to \nadd more disks at this time. The web server has a raid10 of sata drives \nwith some io bandwidth to spare so I would like to replicate all data \nover, and send some read queries to that server -- in particular the \nvery IO intensive FTI based search queries.\n\n\nries van Twisk wrote:\n> Dr Mr No Name,\n>\n> what replication solution is the best depends on your requirements.\n> May be you can tell a bit more what your situation is?\n> Since you didn't gave us to much information about your requirements \n> it's hard to give you any advice.\n>\n> Ries\n>\n> On Apr 5, 2009, at 1:36 PM, Lists wrote:\n>\n>> I am looking to setup replication of my postgresql database, \n>> primarily for performance reasons.\n>>\n>> The searching I've done shows a lot of different options, can anyone \n>> give suggestions about which one(s) are best? I've read the archives, \n>> but there seems to be more replication solutions since the last \n>> thread on this subject and it seems to change frequently.\n>>\n>> I'd really like a solution that replicates DDL, but very few do so I \n>> think I'm out of luck for that. I can live without it.\n>> Multi-master support would be nice too, but also seems to cause too \n>> many problems so it looks like I'll have to do without it too.\n>>\n>>\n>> *Slony-I* - I've used this in the past, but it's a huge pain to work \n>> with, caused serious performance issues under heavy load due to long \n>> running transactions (may not be the case anymore, it's been a while \n>> since I used it on a large database with many writes), and doesn't \n>> seem very reliable (I've had replication break on me multiple times).\n>>\n>> *Mammoth Replicator* - This is open source now, is it any good? It \n>> sounds like it's trigger based like Slony. Is it based on Slony, or \n>> simply use a similar solution?\n>>\n>> *pgpool* - Won't work for us reliably for replication because we have \n>> some triggers and stored procedures that write data.\n>>\n>> *PGCluster* - Sounds cool, but based on the mailing list traffic and \n>> the last news post on the site being from 2005, development seems to \n>> be near dead. 
Also, no releases seems to make it beyond the RC stage \n>> -- for multi-master stability is particularly important for data \n>> integrity.\n>>\n>> *PGReplicator - *Don't know anything special about it.\n>> *\n>> Bucardo* - Don't know anything special about it.\n>>\n>> *Postgres-R* - Don't know anything special about it.\n>>\n>> *SkyTools/Londiste* - Don't know anything special about it.\n>\n>\n>\n>\n>\n>\n\n\n\n\n\n\n\nI have a high traffic database with high volumes of reads, and moderate\nvolumes of writes. Millions of queries a day.\n\nRunning the latest version of Postgresql 8.2.x (I want to upgrade to\n8.3, but the dump/reload requires an unacceptable amount of downtime)\n\nServer is a dual core xeon 3GB ram and 2 mirrors of 15k SAS drives (1\nfor most data, 1 for wal and a few tables and indexes) \n\nIn total all databases on the server are about 10G on disk (about 2GB\nin pgdump format).\n\n\nThe IO on the disks is being maxed out and I don't have the budget to\nadd more disks at this time. The web server has a raid10 of sata drives\nwith some io bandwidth to spare so I would like to replicate all data\nover, and send some read queries to that server -- in particular the\nvery IO intensive FTI based search queries. \n\n\nries van Twisk wrote:\nDr Mr No Name,\n \n\nwhat replication solution is the best depends on your\nrequirements.\nMay be you can tell a bit more what your situation is?\nSince you didn't gave us to much information about your\nrequirements it's hard to give you any advice.\n\n\nRies\n\n\n\n\nOn Apr 5, 2009, at 1:36 PM, Lists wrote:\n\n\n I am looking to setup\nreplication of my postgresql database, primarily for performance\nreasons.\n\nThe searching I've done shows a lot of different options, can anyone\ngive suggestions about which one(s) are best? I've read the archives,\nbut there seems to be more replication solutions since the last thread\non this subject and it seems to change frequently. \n\nI'd really like a solution that replicates DDL, but very few do so I\nthink I'm out of luck for that. I can live without it. \nMulti-master support would be nice too, but also seems to cause too\nmany problems so it looks like I'll have to do without it too. \n\n\nSlony-I - I've used this in the past, but it's a huge pain\nto work with, caused serious performance issues under heavy load due to\nlong running transactions (may not be the case anymore, it's been a\nwhile since I used it on a large database with many writes), and\ndoesn't seem very reliable (I've had replication break on me multiple\ntimes).\n\nMammoth Replicator - This is open source now, is it any\ngood? It sounds like it's trigger based like Slony. Is it based on\nSlony, or simply use a similar solution?\n\npgpool - Won't work for us reliably for replication because\nwe have some triggers and stored procedures that write data.\n\nPGCluster - Sounds cool, but based on the mailing list\ntraffic and the last news post on the site being from 2005, development\nseems to be near dead. Also, no releases seems to make it beyond the RC\nstage -- for multi-master stability is particularly important for data\nintegrity.\n\nPGReplicator - Don't know anything special about it.\n\nBucardo - Don't know anything special about it.\n\nPostgres-R - Don't know anything special about it.\n\nSkyTools/Londiste - Don't know anything special about it.", "msg_date": "Sun, 05 Apr 2009 14:57:17 -0700", "msg_from": "Lists <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best replication solution?" 
}, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\n> Running the latest version of Postgresql 8.2.x (I want to upgrade to\n> 8.3, but the dump/reload requires an unacceptable amount of downtime)\n\nYou can use Slony or Bucardo to ugrade in place. Both will incur some\noverhead and more overall complexity than a dump/reload, but going to\n8.3 is well worth it (and will bring your IO down).\n\n> The IO on the disks is being maxed out and I don't have the budget to\n> add more disks at this time. The web server has a raid10 of sata drives\n> with some io bandwidth to spare so I would like to replicate all data\n> over, and send some read queries to that server -- in particular the\n> very IO intensive FTI based search queries.\n\nSounds like a good solution for a table-based, read-only-slaves solutions,\nespecially if you only need enough of the schema to perform some of the\nmore intense queries. Again, Slony and Bucardo are probably the best fit.\nAll this assumes that the tables in question have some sort of unique key,\nyou aren't using large objects, or changing DDL frequently. I'd give Slony a\nsecond try and Bucardo a first one on your QA/test cluster and see how\nthey work out for you. You could even make the read-only slaves 8.3, since\nthey will be starting from scratch.\n\nOf course, if the underlying problem replication is trying to solve is too\nmuch search traffic (e.g. select queries) on the main database, there are other\nsolutions you could consider (e.g. external search such as Sphinx or SOLR,\ncaching solutions such as Squid or Varnish, moving the slaves to the cloud, etc.)\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation\nPGP Key: 0x14964AC8 200904052158\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAknZZMgACgkQvJuQZxSWSsjbcgCfWqTUEDGlDqAnLaCAhcJlSLCk\nEVMAni0oCevrnMdZ2Fuw8Tysaxp3q+/U\n=0vu6\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Mon, 6 Apr 2009 02:11:21 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "Lists wrote:\n> Server is a dual core xeon 3GB ram and 2 mirrors of 15k SAS drives (1 \n> for most data, 1 for wal and a few tables and indexes)\n> \n> In total all databases on the server are about 10G on disk (about 2GB in \n> pgdump format).\n\nI'd suggest buying as much RAM as you can fit into the server. RAM is \ncheap, and with a database of that size more cache could have a dramatic \neffect.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 06 Apr 2009 11:26:50 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "On Sun, Apr 05, 2009 at 11:36:33AM -0700, Lists wrote:\n\n> *Slony-I* - I've used this in the past, but it's a huge pain to work \n> with, caused serious performance issues under heavy load due to long \n> running transactions (may not be the case anymore, it's been a while \n> since I used it on a large database with many writes), and doesn't seem \n> very reliable (I've had replication break on me multiple times).\n\nIt is indeed a pain to work with, but I find it hard to believe that\nit is the actual source of performance issues. What's more likely\ntrue is that it wasn't tuned to your write load -- that _will_ cause\nperformance issues. 
Of course, tuning it is a major pain, as\nmentioned. I'm also somewhat puzzled by the claim of unreliability:\nmost of the actual replication failures I've ever seen under Slony are\ndue to operator error (these are trivial to induce, alas --\naforementioned pain to work with again). Slony is baroque and\nconfusing, but it's specifically designed to fail in safe ways (which\nis not true of some of the other systems: several of them have modes\nin which it's possible to have systems out of sync with each other,\nbut with no way to detect as much. IMO, that's much worse, so we\ndesigned Slony to fail noisily if it was going to fail at all). \n\n> *Mammoth Replicator* - This is open source now, is it any good? It \n> sounds like it's trigger based like Slony. Is it based on Slony, or \n> simply use a similar solution?\n\nIt's completely unrelated, and it doesn't use triggers. I think the\npeople programming it are first-rate. Last I looked at it, I felt a\nlittle uncomfortable with certain design choices, which seemed to me\nto be a little hacky. They were all on the TODO list, though.\n\n> *SkyTools/Londiste* - Don't know anything special about it.\n\nI've been quite impressed by the usability. It's not quite as\nflexible as Slony, but it has the same theory of operation. The\ndocumentation is not as voluminous, although it's also much handier as\nreference material than Slony's (which is, in my experience, a little\nhard to navigate if you don't already know the system pretty well).\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n", "msg_date": "Mon, 6 Apr 2009 08:35:30 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "I'm currently running 32bit FreeBSD so I can't really add more ram (PAE \ndoesn't work well under FreeBSD from what I've read) and there are \nenough writes that more ram won't solve the problem completely.\n\nHowever I will add plenty more ram next time I rebuild it.\n\n\nHeikki Linnakangas wrote:\n> Lists wrote:\n>> Server is a dual core xeon 3GB ram and 2 mirrors of 15k SAS drives (1 \n>> for most data, 1 for wal and a few tables and indexes)\n>>\n>> In total all databases on the server are about 10G on disk (about 2GB \n>> in pgdump format).\n>\n> I'd suggest buying as much RAM as you can fit into the server. RAM is \n> cheap, and with a database of that size more cache could have a \n> dramatic effect.\n>\n\n", "msg_date": "Mon, 06 Apr 2009 20:44:20 -0700", "msg_from": "Lists <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Sun, Apr 05, 2009 at 11:36:33AM -0700, Lists wrote:\n>\n> \n>> *Slony-I* - I've used this in the past, but it's a huge pain to work \n>> with, caused serious performance issues under heavy load due to long \n>> running transactions (may not be the case anymore, it's been a while \n>> since I used it on a large database with many writes), and doesn't seem \n>> very reliable (I've had replication break on me multiple times).\n>> \n>\n> It is indeed a pain to work with, but I find it hard to believe that\n> it is the actual source of performance issues. What's more likely\n> true is that it wasn't tuned to your write load -- that _will_ cause\n> performance issues. \nCan you point me in the direction of the documentation for tuning it? I \ndon't see anything in the documentation for tuning for write load.\n\n> Of course, tuning it is a major pain, as\n> mentioned. 
I'm also somewhat puzzled by the claim of unreliability:\n> most of the actual replication failures I've ever seen under Slony are\n> due to operator error (these are trivial to induce, alas --\n> aforementioned pain to work with again). \nRecently I had a problem with \"duplicate key\" errors on the slave, which \nshouldn't be possible since they keys are the same.\nI've just noticed in the documentation that\n\n The Duplicate Key Violation\n <http://www.slony.info/documentation/faq.html#DUPKEY> bug has helped\n track down a number of rather obscure PostgreSQL race conditions, so\n that in modern versions of Slony-I and PostgreSQL, there should be\n little to worry about.\n\nso that may no longer be an issue. However I experienced with this the \nlatest Slony (as of late last year) and Postgresql 8.3.\n\nAlso the dupe key error linked appears to be duplicate key of slony \nmeta-data were as this was a duplicate key of one of my table's primary \nkey.\n> Slony is baroque and\n> confusing, but it's specifically designed to fail in safe ways (which\n> is not true of some of the other systems: several of them have modes\n> in which it's possible to have systems out of sync with each other,\n> but with no way to detect as much. IMO, that's much worse, so we\n> designed Slony to fail noisily if it was going to fail at all). \n> \nAn error is better than silently failing, but of course neither is optimal.\n\nThe slony project could really benefit from a simpler user interface and \nsimpler documentation. It's integration into pgadminIII is a good step, \nbut even with that it is still a bit of a pain so I hope it continues to \nimprove in ease of use.\n\nBeing powerful and flexable is good, but ease of use with sensible \ndefaults for complex items that can be easily overridden is even better.\n\n> \n>> *Mammoth Replicator* - This is open source now, is it any good? It \n>> sounds like it's trigger based like Slony. Is it based on Slony, or \n>> simply use a similar solution?\n>> \n>\n> It's completely unrelated, and it doesn't use triggers. I think the\n> people programming it are first-rate. Last I looked at it, I felt a\n> little uncomfortable with certain design choices, which seemed to me\n> to be a little hacky. They were all on the TODO list, though.\n>\n> \n>> *SkyTools/Londiste* - Don't know anything special about it.\n>> \n>\n> I've been quite impressed by the usability. It's not quite as\n> flexible as Slony, but it has the same theory of operation. The\n> documentation is not as voluminous, although it's also much handier as\n> reference material than Slony's (which is, in my experience, a little\n> hard to navigate if you don't already know the system pretty well).\n>\n> A\n>\n> \nThanks, I'll look into both of those as well.\n\n\n\n\n\n\n\nAndrew Sullivan wrote:\n\nOn Sun, Apr 05, 2009 at 11:36:33AM -0700, Lists wrote:\n\n \n\n*Slony-I* - I've used this in the past, but it's a huge pain to work \nwith, caused serious performance issues under heavy load due to long \nrunning transactions (may not be the case anymore, it's been a while \nsince I used it on a large database with many writes), and doesn't seem \nvery reliable (I've had replication break on me multiple times).\n \n\n\nIt is indeed a pain to work with, but I find it hard to believe that\nit is the actual source of performance issues. What's more likely\ntrue is that it wasn't tuned to your write load -- that _will_ cause\nperformance issues. \n\nCan you point me in the direction of the documentation for tuning it? 
I\ndon't see anything in the documentation for tuning for write load.\n\n\nOf course, tuning it is a major pain, as\nmentioned. I'm also somewhat puzzled by the claim of unreliability:\nmost of the actual replication failures I've ever seen under Slony are\ndue to operator error (these are trivial to induce, alas --\naforementioned pain to work with again). \n\nRecently I had a problem with \"duplicate key\" errors on the slave,\nwhich shouldn't be possible since they keys are the same. \nI've just noticed in the documentation that \nThe Duplicate\nKey Violation bug has helped track down a number of rather obscure\n PostgreSQL race conditions, so that\nin modern versions of Slony-I and PostgreSQL, there should be little to worry\nabout.\n\nso that may no longer be an issue. However I experienced with this the\nlatest Slony (as of late last year) and Postgresql 8.3. \n\nAlso the dupe key error linked appears to be duplicate key of slony\nmeta-data were as this was a duplicate key of one of my table's primary\nkey. \n\nSlony is baroque and\nconfusing, but it's specifically designed to fail in safe ways (which\nis not true of some of the other systems: several of them have modes\nin which it's possible to have systems out of sync with each other,\nbut with no way to detect as much. IMO, that's much worse, so we\ndesigned Slony to fail noisily if it was going to fail at all). \n \n\nAn error is better than silently failing, but of course neither is\noptimal. \n\nThe slony project could really benefit from a simpler user interface\nand simpler documentation. It's integration into pgadminIII is a good\nstep, but even with that it is still a bit of a pain so I hope it\ncontinues to improve in ease of use. \n\nBeing powerful and flexable is good, but ease of use with sensible\ndefaults for complex items that can be easily overridden is even\nbetter. \n\n\n\n \n\n*Mammoth Replicator* - This is open source now, is it any good? It \nsounds like it's trigger based like Slony. Is it based on Slony, or \nsimply use a similar solution?\n \n\n\nIt's completely unrelated, and it doesn't use triggers. I think the\npeople programming it are first-rate. Last I looked at it, I felt a\nlittle uncomfortable with certain design choices, which seemed to me\nto be a little hacky. They were all on the TODO list, though.\n\n \n\n*SkyTools/Londiste* - Don't know anything special about it.\n \n\n\nI've been quite impressed by the usability. It's not quite as\nflexible as Slony, but it has the same theory of operation. The\ndocumentation is not as voluminous, although it's also much handier as\nreference material than Slony's (which is, in my experience, a little\nhard to navigate if you don't already know the system pretty well).\n\nA\n\n \n\nThanks, I'll look into both of those as well.", "msg_date": "Mon, 06 Apr 2009 21:07:05 -0700", "msg_from": "Lists <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "On Monday 06 April 2009 14:35:30 Andrew Sullivan wrote:\n> > *SkyTools/Londiste* - Don't know anything special about it.\n>\n> I've been quite impressed by the usability. It's not quite as\n> flexible as Slony, but it has the same theory of operation. 
The\n> documentation is not as voluminous, although it's also much handier as\n> reference material than Slony's (which is, in my experience, a little\n> hard to navigate if you don't already know the system pretty well).\n\nAs a londiste user I find it really trustworthy solution, and very easy to use \nand understand. We made some recent efforts on documentation front:\n http://wiki.postgresql.org/wiki/SkyTools\n http://wiki.postgresql.org/wiki/Londiste_Tutorial\n\nRegards,\n-- \ndim", "msg_date": "Tue, 7 Apr 2009 11:35:54 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "Lists wrote:\n> I'm currently running 32bit FreeBSD so I can't really add more ram (PAE\n> doesn't work well under FreeBSD from what I've read) \n\nThat's probably left-over from the time many drivers were not 64-bit\nfriendly. I've yet to see a new configuration that doesn't work with PAE\n(also, the default \"PAE\" configuration file is too conservative. Drivers\nthat work on amd64 should work on PAE without problems). In any case,\nit's easy to try it - you can always boot the kernel.old.", "msg_date": "Tue, 07 Apr 2009 12:11:00 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Sun, Apr 05, 2009 at 11:36:33AM -0700, Lists wrote:\n>\n> \n>> *Slony-I* - I've used this in the past, but it's a huge pain to work \n>> with, caused serious performance issues under heavy load due to long \n>> running transactions (may not be the case anymore, it's been a while \n>> since I used it on a large database with many writes), and doesn't seem \n>> very reliable (I've had replication break on me multiple times).\n>> \n>\n> \n> I'm also somewhat puzzled by the claim of unreliability:\n> most of the actual replication failures I've ever seen under Slony are\n> due to operator error (these are trivial to induce, alas --\n> aforementioned pain to work with again). Slony is baroque and\n> confusing, but it's specifically designed to fail in safe ways (which\n> is not true of some of the other systems: several of them have modes\n> in which it's possible to have systems out of sync with each other,\n> but with no way to detect as much. IMO, that's much worse, so we\n> designed Slony to fail noisily if it was going to fail at all). \n>\n> \n\n From my experience - gained from unwittingly being in the wrong place \nat the wrong time and so being volunteered into helping people with \nSlony failures - it seems to be quite possible to have nodes out of sync \nand not be entirely aware of it - in addition to there being numerous \nways to shoot yourself in the foot via operator error. Complexity seems \nto be the major evil here.\n\nI've briefly experimented with Londiste, and it is certainly much \nsimpler to administer. Currently it lacks a couple of features Slony has \n(chained slaves and partial DDL support), but I'll be following its \ndevelopment closely - because if these can be added - whilst keeping the \noperator overhead (and the foot-gun) small, then this looks like a winner.\n\nregards\n\nMark\n\n", "msg_date": "Tue, 07 Apr 2009 22:31:02 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" 
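For the "out of sync without being aware of it" case, Slony does expose per-subscriber lag through its sl_status view, which is also what the monitoring alarms mentioned later in this thread are built on. A minimal check might look like the sketch below; the cluster schema name _replication and the alert thresholds are placeholders that have to be adapted to the actual installation:

    -- run against the origin node; "_replication" is a hypothetical cluster name
    SELECT st_origin,
           st_received,
           st_lag_num_events,
           st_lag_time
    FROM _replication.sl_status
    WHERE st_lag_num_events > 100000            -- arbitrary alert thresholds
       OR st_lag_time > interval '10 minutes';

Rows returned by a query like this are the "lagging but not yet failed" subscribers that are otherwise easy to overlook.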
}, { "msg_contents": "On Mon, Apr 06, 2009 at 09:07:05PM -0700, Lists wrote:\n\n> Can you point me in the direction of the documentation for tuning it? I \n> don't see anything in the documentation for tuning for write load.\n\nNo, exactly. As I said, it's a pain. The main thing you need to do\nis to make sure that your set size is just right for your workload.\nThe only way to get this right, unhappily, is trial and error and a\nbunch of oral-tradition rules of thumb. It's one of the weakest parts\nof Slony from a user's point of view, IMO, but nobody's ever offered\nto do the work to make really good tuning tools.\n\n> Recently I had a problem with \"duplicate key\" errors on the slave, which \n> shouldn't be possible since they keys are the same.\n> I've just noticed in the documentation that\n>\n> The Duplicate Key Violation\n> <http://www.slony.info/documentation/faq.html#DUPKEY> bug has helped\n> track down a number of rather obscure PostgreSQL race conditions, so\n> that in modern versions of Slony-I and PostgreSQL, there should be\n> little to worry about.\n>\n> so that may no longer be an issue. However I experienced with this the \n> latest Slony (as of late last year) and Postgresql 8.3.\n\nThat problem was quite an old one. \"8.3\" doesn't tell me very much,\nbut the issues should be covered there anyway. It is of course\nlogically possible that there is a bug. Often, however, the duplicate\nkey violations I've seen turn out to be from operator error. There\nare a _lot_ of sharp, pointy bits in Slony administration, and it's\nnearly trivial to make an apparently innocuous error that causes you\nthis sort of big pain.\n\n> Also the dupe key error linked appears to be duplicate key of slony \n> meta-data were as this was a duplicate key of one of my table's primary \n> key.\n\nThis really ought to be impossible -- Slony just speaks standard SQL\nstatements between nodes. But I won't say there's no possible bug\nthere. Your best bet is the Slony list. It's a smallish community,\nthough, so you don't always get the response as fast as you want. \n\nA\n-- \nAndrew Sullivan\[email protected]\n", "msg_date": "Tue, 7 Apr 2009 12:21:18 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "On Tue, Apr 07, 2009 at 10:31:02PM +1200, Mark Kirkwood wrote:\n>\n> From my experience - gained from unwittingly being in the wrong place at \n> the wrong time and so being volunteered into helping people with Slony \n> failures - it seems to be quite possible to have nodes out of sync and \n> not be entirely aware of it \n\nI should have stated that differently. First, you're right that if\nyou don't know where to look or what to look for, you can easily be\nunaware of nodes being out of sync. What's not a problem with Slony\nis that the nodes can get out of internally consistent sync state: if\nyou have a node that is badly lagged, at least it represents, for\nsure, an actual point in time of the origin set's history. Some of\nthe replication systems aren't as careful about this, and it's\npossible to get the replica into a state that never happened on the\norigin. That's much worse, in my view.\n\nIn addition, it is not possible that Slony's system tables report the\nreplica as being up to date without them actually being so, because\nthe system tables are updated in the same transaction as the data is\nsent. 
It's hard to read those tables, however, because you have to\ncheck every node and understand all the states.\n\n> Complexity seems to be the major evil here.\n\nYes. Slony is massively complex.\n\n> simpler to administer. Currently it lacks a couple of features Slony has \n> (chained slaves and partial DDL support), but I'll be following its \n> development closely - because if these can be added - whilst keeping the \n> operator overhead (and the foot-gun) small, then this looks like a \n> winner.\n\nWell, those particular features -- which are indeed the source of much\nof the complexity in Slony -- were planned in from the beginning.\nLondiste aimed to be simpler, so it would be interesting to see\nwhether those features could be incorporated without the same\ncomplication.\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n", "msg_date": "Tue, 7 Apr 2009 13:18:04 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "Andrew Sullivan wrote:\n>\n> I should have stated that differently. First, you're right that if\n> you don't know where to look or what to look for, you can easily be\n> unaware of nodes being out of sync. What's not a problem with Slony\n> is that the nodes can get out of internally consistent sync state: if\n> you have a node that is badly lagged, at least it represents, for\n> sure, an actual point in time of the origin set's history. Some of\n> the replication systems aren't as careful about this, and it's\n> possible to get the replica into a state that never happened on the\n> origin. That's much worse, in my view.\n>\n> In addition, it is not possible that Slony's system tables report the\n> replica as being up to date without them actually being so, because\n> the system tables are updated in the same transaction as the data is\n> sent. It's hard to read those tables, however, because you have to\n> check every node and understand all the states.\n>\n> \n>\nYes, and nicely explained!\n\n> (on Londiste DDL + slave chaining)...\n> Well, those particular features -- which are indeed the source of much\n> of the complexity in Slony -- were planned in from the beginning.\n> Londiste aimed to be simpler, so it would be interesting to see\n> whether those features could be incorporated without the same\n> complication.\n>\n>\n>\n> \nYeah, that's the challenge!\n\nPersonally I would like DDL to be possible without any special wrappers \nor precautions, as the usual (accidental) breakage I end up looking at \nin Slony is because someone (or an app's upgrade script) has performed \nan ALTER TABLE directly on the master schema...\n\nCheers\n\nMark\n\n", "msg_date": "Wed, 08 Apr 2009 19:15:28 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Lists wrote:\n>> Server is a dual core xeon 3GB ram and 2 mirrors of 15k SAS drives (1 \n>> for most data, 1 for wal and a few tables and indexes)\n>>\n>> In total all databases on the server are about 10G on disk (about 2GB \n>> in pgdump format).\n> \n> I'd suggest buying as much RAM as you can fit into the server. RAM is \n> cheap, and with a database of that size more cache could have a dramatic \n> effect.\n\nI'll second this. 
Although it doesn't really answer the original \nquestion, you have to keep in mind that for read-intensive workloads, \ncaching will give you the biggest benefit by far, orders of magnitude \nmore than replication solutions unless you want to spend a lot of $ on \nhardware (which I take it you don't if you are reluctant to add new \ndisks). Keeping the interesting parts of the DB completely in RAM makes \na big difference, common older (P4-based) Xeon boards can usually be \nupgraded to 12-16GB RAM, newer ones to anywhere between 16 and 192GB ...\n\nAs for replication solutions - Slony I wouldn't recommend (tried it for \nworkloads with large writes - bad idea), but PgQ looks very solid and \nyou could either use Londiste or build your own very fast non-RDBMS \nslaves using PgQ by keeping the data in an optimized format for your \nqueries (e.g. if you don't need joins - use TokyoCabinet/Berkeley DB).\n\nRegards,\n Marinos\n\n", "msg_date": "Wed, 08 Apr 2009 13:45:04 +0200", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "\nOn Apr 7, 2009, at 1:18 PM, Andrew Sullivan wrote:\n\n> I should have stated that differently. First, you're right that if\n> you don't know where to look or what to look for, you can easily be\n> unaware of nodes being out of sync. What's not a problem with Slony\n\n_$cluster.sl_status on the origin is a very handy tool to see your \nslaves, how many sync's behind they are and whatnot. Maybe I'm lucky, \nbut I haven't got into a funky state that didn't cause my alarms that \nwatch sl_status to go nuts.\n\n>> Complexity seems to be the major evil here.\n>\n> Yes. Slony is massively complex.\n>\n\nConfiguring slony by hand using slonik commands does suck horribly.\nBut the included altperl tools that come with it, along with \nslon_tools.conf removes a HUGE amount of that suck.\n\nTo add a table with a pk you edit slon_tools.conf and add something \nalong the lines of:\n\n\"someset\" => {\n\t\"set_id\" => 5,\n\t\"table_id\" => 5,\n\t\"pkeyedtables\" => [ \"tacos\", \"burritos\", \"gorditas\" ]\n}\n\nthen you just run\n\n[create tables on slave(s)]\nslonik_create_set someset;\nslonik_subscribe_set 1 2;\n\nthere are other handy scripts in there as well for failing over, \nadding tables, merging, etc. that hide a lot of the suck. Especially \nthe suck of adding a node and creating the store paths.\n\nI'm running slony on a rather write intensive system, works fine, just \nmake sure you've got beefy IO. One sucky thing though is if a slave \nis down sl_log can grow very large (I've had it get over 30M rows, the \nslave was only down for hours) and this causes major cpu churn while \nthe queries slon issues sift through tons of data. But, to be fair, \nthat'll hurt any replication system.\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Wed, 08 Apr 2009 14:20:14 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "Hi,\n\nOk I need to answer some more :)\n\nLe 8 avr. 
09 � 20:20, Jeff a �crit :\n> To add a table with a pk you edit slon_tools.conf and add something \n> along the lines of:\n>\n> \"someset\" => {\n> \t\"set_id\" => 5,\n> \t\"table_id\" => 5,\n> \t\"pkeyedtables\" => [ \"tacos\", \"burritos\", \"gorditas\" ]\n> }\n>\n> then you just run\n>\n> [create tables on slave(s)]\n> slonik_create_set someset;\n> slonik_subscribe_set 1 2;\n\n $ londiste.py setup.ini provider add schema.table\n $ londiste.py setup.ini subscriber add schema.table\n\nNote both of those commands are to be run from the same host (often \nenough, the slave), if you have more than one slave, issue the second \nof them only on the remaining ones.\n\n> there are other handy scripts in there as well for failing over, \n> adding tables, merging, etc. that hide a lot of the suck. \n> Especially the suck of adding a node and creating the store paths.\n\nThere's no set in Londiste, so you just don't manage them. You add \ntables to queues (referencing the provider in fact) and the subscriber \nis free to subscribe to only a subset of the provider queue's tables. \nAnd any table could participate into more than one queue at any time \ntoo, of course.\n\n> I'm running slony on a rather write intensive system, works fine, \n> just make sure you've got beefy IO. One sucky thing though is if a \n> slave is down sl_log can grow very large (I've had it get over 30M \n> rows, the slave was only down for hours) and this causes major cpu \n> churn while the queries slon issues sift through tons of data. But, \n> to be fair, that'll hurt any replication system.\n\nThis could happen in Londiste too, just set pgq_lazy_fetch to a \nreasonable value and Londiste will use a cursor to fetch the events, \nlowering the load. Events are just tuples in an INSERT only table, \nwhich when not used anymore is TRUNCATEd away. PGQ will use 3 tables \nwhere to store events and will rotate its choice of where to insert \nnew envents, allowing to use TRUNCATE rather than DELETE. And \nPostgreSQL is quite efficient to manage this :)\n http://wiki.postgresql.org/wiki/Londiste_Tutorial#Londiste_is_eating_all_my_CPU_and_lag_is_raising\n\n\nOh and some people asked what Londiste with failover and DDL would \nlook like. Here's what the API being cooked looks like at the moment:\n $ londiste setup.ini execute myddl.script.sql\n\n $ londiste conf/londiste_db3.ini change-provider --provider=rnode1\n $ londiste conf/londiste_db1.ini switchover --target=rnode2\n\nBut I'm not the one who should be unveiling all of this, which is \ncurrently being prepared to reach alpha soon'ish.\n\nRegards,\n-- \ndim\n\n", "msg_date": "Wed, 8 Apr 2009 22:46:47 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" }, { "msg_contents": "\nOn Apr 8, 2009, at 4:46 PM, Dimitri Fontaine wrote:\n\n>\n> $ londiste.py setup.ini provider add schema.table\n> $ londiste.py setup.ini subscriber add schema.table\n>\n\nThat is nice. One could probably do that for slony too.\n\nI may try some tests out with londiste.. I'm always open to new \n(ideally, better) things.\n>\n> This could happen in Londiste too, just set pgq_lazy_fetch to a \n> reasonable value and Londiste will use a cursor to fetch the events, \n> lowering the load. Events are just tuples in an INSERT only table, \n> which when not used anymore is TRUNCATEd away. 
PGQ will use 3 tables \n> where to store events and will rotate its choice of where to insert \n> new envents, allowing to use TRUNCATE rather than DELETE. And \n> PostgreSQL is quite efficient to manage this :)\n> http://wiki.postgresql.org/wiki/Londiste_Tutorial#Londiste_is_eating_all_my_CPU_and_lag_is_raising\n>\n\nWell, Slony always uses a cursor to fetch, the problem is it may have \nto slog through millions of rows to find the new data - I've analyzed \nthe queries and there isn't much it can do - lots of calls to the \nxxid_ functions to determine whats to be used, whats not to be used. \nWhen all slaves have a sync event ack'd it is free to be removed by \nthe cleanup routine which is run every few minutes.\n\n>\n> Oh and some people asked what Londiste with failover and DDL would \n> look like. Here's what the API being cooked looks like at the moment:\n> $ londiste setup.ini execute myddl.script.sql\n>\n> $ londiste conf/londiste_db3.ini change-provider --provider=rnode1\n> $ londiste conf/londiste_db1.ini switchover --target=rnode2\n>\n\nok, so londiste can't do failover yet, or is it just somewhat \nconvoluted at this point?\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Thu, 09 Apr 2009 08:19:22 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best replication solution?" } ]
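On the Londiste side a comparable lag check can be done from plain SQL, since PgQ keeps its bookkeeping in ordinary tables and functions. The sketch below assumes a SkyTools installation that exposes pgq.get_consumer_info(); the exact column set should be verified against the installed SkyTools version:

    -- per-consumer lag for every PgQ queue (column names assumed from SkyTools)
    SELECT queue_name,
           consumer_name,
           lag,
           last_seen
    FROM pgq.get_consumer_info();

A consumer whose lag or last_seen value keeps growing is the Londiste analogue of a Slony subscriber falling behind.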
[ { "msg_contents": "I have a somewhat large table (more than 100 million rows) that contains log\ndata with start_time and end_time columns. When I try to do queries on this\ntable I always find them slower than what I need and what I believe should\nbe possible.\n\n \n\nFor example, I limited the following query to just a single day and it still\nis much slower than what I would expect. In reality I need to do queries\nthat span a few weeks.\n\n \n\nexplain analyze select * from ad_log where date(start_time) <\ndate('2009-03-31') and date(start_time) >= date('2009-03-30');\n\n \n\n \n\nBitmap Heap Scan on ad_log (cost=73372.57..3699152.24 rows=2488252\nwidth=32) (actual time=49792.862..64611.255 rows=2268490 loops=1)\n\n Recheck Cond: ((date(start_time) < '2009-03-31'::date) AND\n(date(start_time) >= '2009-03-30'::date))\n\n -> Bitmap Index Scan on ad_log_date_all (cost=0.00..72750.51\nrows=2488252 width=0) (actual time=49776.332..49776.332 rows=2268490\nloops=1)\n\n Index Cond: ((date(start_time) < '2009-03-31'::date) AND\n(date(start_time) >= '2009-03-30'::date))\n\n Total runtime: 65279.352 ms\n\n \n\n \n\nThe definition of the table is:\n\n \n\n Column | Type |\nModifiers\n\n------------+-----------------------------+---------------------------------\n---------------------------\n\n ad_log_id | integer | not null default\nnextval('ad_log_ad_log_id_seq'::regclass)\n\n channel | integer | not null\n\n player | integer | not null\n\n ad | integer | not null\n\n start_time | timestamp without time zone |\n\n end_time | timestamp without time zone |\n\nIndexes:\n\n \"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n\n \"ad_log_unique\" UNIQUE, btree (channel, player, ad, start_time,\nend_time)\n\n \"ad_log_ad\" btree (ad)\n\n \"ad_log_ad_date\" btree (ad, date(start_time))\n\n \"ad_log_channel\" btree (channel)\n\n \"ad_log_channel_date\" btree (channel, date(start_time))\n\n \"ad_log_date_all\" btree (date(start_time), channel, player, ad)\n\n \"ad_log_player\" btree (player)\n\n \"ad_log_player_date\" btree (player, date(start_time))\n\nForeign-key constraints:\n\n \"ad_log_ad_fkey\" FOREIGN KEY (ad) REFERENCES ads(id)\n\n \"ad_log_channel_fkey\" FOREIGN KEY (channel) REFERENCES channels(id)\n\n \"ad_log_player_fkey\" FOREIGN KEY (player) REFERENCES players_history(id)\n\nTriggers:\n\n rollup_ad_logs_daily AFTER INSERT ON ad_log FOR EACH ROW EXECUTE\nPROCEDURE rollup_ad_logs_daily()\n\n \n\n \n\nAny suggestions would be appreciated.\n\n \n\n--Rainer\n\n\n\n\n\n\n\n\n\n\n\nI have a somewhat large table (more than 100 million rows)\nthat contains log data with start_time and end_time columns. When I try to do queries\non this table I always find them slower than what I need and what I believe should\nbe possible.\n \nFor example, I limited the following query to just a single\nday and it still is much slower than what I would expect. 
In reality I need to\ndo queries that span a few weeks.\n \nexplain analyze\nselect * from ad_log where date(start_time) < date('2009-03-31') and\ndate(start_time) >= date('2009-03-30');\n \n \nBitmap Heap Scan on\nad_log  (cost=73372.57..3699152.24 rows=2488252 width=32) (actual\ntime=49792.862..64611.255 rows=2268490 loops=1)\n   Recheck\nCond: ((date(start_time) < '2009-03-31'::date) AND (date(start_time) >=\n'2009-03-30'::date))\n  \n->  Bitmap Index Scan on ad_log_date_all  (cost=0.00..72750.51\nrows=2488252 width=0) (actual time=49776.332..49776.332 rows=2268490 loops=1)\n        \nIndex Cond: ((date(start_time) < '2009-03-31'::date) AND (date(start_time)\n>= '2009-03-30'::date))\n Total runtime:\n65279.352 ms\n \n \nThe definition of the table is:\n \n  \nColumn  \n|           \nType            \n|                        \nModifiers\n------------+-----------------------------+------------------------------------------------------------\n ad_log_id \n|\ninteger                    \n| not null default nextval('ad_log_ad_log_id_seq'::regclass)\n channel   \n|\ninteger                    \n| not null\n player    \n|\ninteger                    \n| not null\n ad        \n|\ninteger                    \n| not null\n start_time |\ntimestamp without time zone |\n end_time  \n| timestamp without time zone |\nIndexes:\n   \n\"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n   \n\"ad_log_unique\" UNIQUE, btree (channel, player, ad, start_time,\nend_time)\n   \n\"ad_log_ad\" btree (ad)\n   \n\"ad_log_ad_date\" btree (ad, date(start_time))\n   \n\"ad_log_channel\" btree (channel)\n   \n\"ad_log_channel_date\" btree (channel, date(start_time))\n   \n\"ad_log_date_all\" btree (date(start_time), channel, player, ad)\n   \n\"ad_log_player\" btree (player)\n   \n\"ad_log_player_date\" btree (player, date(start_time))\nForeign-key\nconstraints:\n   \n\"ad_log_ad_fkey\" FOREIGN KEY (ad) REFERENCES ads(id)\n   \n\"ad_log_channel_fkey\" FOREIGN KEY (channel) REFERENCES channels(id)\n   \n\"ad_log_player_fkey\" FOREIGN KEY (player) REFERENCES\nplayers_history(id)\nTriggers:\n   \nrollup_ad_logs_daily AFTER INSERT ON ad_log FOR EACH ROW EXECUTE PROCEDURE\nrollup_ad_logs_daily()\n \n \nAny suggestions would be appreciated.\n \n--Rainer", "msg_date": "Mon, 6 Apr 2009 08:26:11 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "difficulties with time based queries" }, { "msg_contents": "On Sun, Apr 5, 2009 at 7:26 PM, Rainer Mager <[email protected]> wrote:\n> Bitmap Heap Scan on ad_log  (cost=73372.57..3699152.24 rows=2488252\n> width=32) (actual time=49792.862..64611.255 rows=2268490 loops=1)\n>\n>    Recheck Cond: ((date(start_time) < '2009-03-31'::date) AND\n> (date(start_time) >= '2009-03-30'::date))\n>\n>    ->  Bitmap Index Scan on ad_log_date_all  (cost=0.00..72750.51\n> rows=2488252 width=0) (actual time=49776.332..49776.332 rows=2268490\n> loops=1)\n>\n>          Index Cond: ((date(start_time) < '2009-03-31'::date) AND\n> (date(start_time) >= '2009-03-30'::date))\n>\n>  Total runtime: 65279.352 ms\n\nThe stats look good and it's using a viable index for your query. What\nkind of hardware is this on, and what are the relevant postgresql.conf\nlines? (Or, for that matter, what does iostat say while this query's\nrunning?)\n\n-- \n- David T. 
Wilson\[email protected]\n", "msg_date": "Sun, 5 Apr 2009 19:46:34 -0400", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries" }, { "msg_contents": "> When I try to do queries on this\n> table I always find them slower than what I need and what I believe \n> should be possible.\n>\n> -> Bitmap Index Scan on ad_log_date_all (cost=0.00..72750.51\n> rows=2488252 width=0) (actual time=49776.332..49776.332 rows=2268490\n> loops=1)\n>\n> Index Cond: ((date(start_time) < '2009-03-31'::date) AND\n> (date(start_time) >= '2009-03-30'::date))\n>\n> Total runtime: 65279.352 ms\n\n\tWell, it is grabbing 2.268.490 rows, that's a lot of rows, so it is not \ngoing to be very fast like a few milliseconds.\n\tYour columns are small, ints, dates, not large text strings which would \naugment the total amount of data.\n\tSo your timing looks pretty slow, it should be faster than this, maybe a \nfew seconds.\n\n\tWith this quantity of rows, you want to try to make the disk accesses as \nlinear as possible.\n\tThis means your table should be organized on disk by date, at least \nroughly.\n\tIf your data comes from an import that was sorted on some other column, \nthis may not be the case.\n\n\tWhat kind of bytes/s do you get from the drives ?\n\n=> Can you post the result of \"vmstat 1\" during the entire execution of \nthe query ?\n\n\t2 phases should be visible in the vmstat output, the indexscan, and the \nbitmap heapscan.\n\n\tYou could use CLUSTER on the table (it will take a long time), or simply \ncreate another table and INSERT INTO ... SELECT ORDER BY date. This will \nalso take a long time, but faster than CLUSTER. Then you could recreate \nthe indexes.\n\n\tDo you UPDATE or DELETE a lot from this table ? Is it vacuum'd enough ?\n\n\t\n", "msg_date": "Mon, 06 Apr 2009 01:54:26 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries" }, { "msg_contents": "\"Rainer Mager\" <[email protected]> writes:\n> explain analyze select * from ad_log where date(start_time) <\n> date('2009-03-31') and date(start_time) >= date('2009-03-30');\n\n> Bitmap Heap Scan on ad_log (cost=73372.57..3699152.24 rows=2488252\n> width=32) (actual time=49792.862..64611.255 rows=2268490 loops=1)\n> Recheck Cond: ((date(start_time) < '2009-03-31'::date) AND\n> (date(start_time) >= '2009-03-30'::date))\n> -> Bitmap Index Scan on ad_log_date_all (cost=0.00..72750.51\n> rows=2488252 width=0) (actual time=49776.332..49776.332 rows=2268490\n> loops=1)\n> Index Cond: ((date(start_time) < '2009-03-31'::date) AND\n> (date(start_time) >= '2009-03-30'::date))\n> Total runtime: 65279.352 ms\n\nHmm ... it's pretty unusual to see the index fetch portion of a bitmap\nscan take the bulk of the runtime. Usually that part is fast and where\nthe pain comes is in fetching from the heap. I wonder whether that\nindex has become bloated. How big are the table and the index\nphysically? 
(Look at pg_class.relpages, or if you want a really\naccurate number try pg_relation_size().)\n\nWhat Postgres version is this, exactly?\n\nBTW, I think you've gone way overboard in your indexing of this table;\nthose indexes are certainly consuming well more space than the table\ndoes, and a lot of them are redundant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Apr 2009 20:12:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries " }, { "msg_contents": "Thanks for all the replies, I'll try to address the follow up questions:\n\n> From: David Wilson [mailto:[email protected]]\n> \n> The stats look good and it's using a viable index for your query. What\n> kind of hardware is this on, and what are the relevant postgresql.conf\n> lines? (Or, for that matter, what does iostat say while this query's\n> running?)\n\nI'm running on Windows, so I don't have iostat, but perfmon tells me my Avg.\nDisk Queue Length went up to 1.2 during the query (versus a normal value of\nabout 0.02). Also disk throughput was at about 1.2 MB/s during the query. I\ndon't know how much of this is random versus linear.\n\n\n\n> From: PFC [mailto:[email protected]]\n>\n> \tWith this quantity of rows, you want to try to make the disk\n> accesses as\n> linear as possible.\n> \tThis means your table should be organized on disk by date, at\n> least\n> roughly.\n> \tIf your data comes from an import that was sorted on some other\n> column,\n> this may not be the case.\n> \n> \tWhat kind of bytes/s do you get from the drives ?\n\nThe data should be mostly ordered by date. It is all logged in semi-realtime\nsuch that 99% will be logged within an hour of the timestamp. Also, as\nstated above, during this query it was about 1.2 MB/s, which I know isn't\ngreat. I admit this isn't the best hardware in the world, but I would expect\nbetter than that for linear queries.\n\n> \tDo you UPDATE or DELETE a lot from this table ? Is it vacuum'd\n> enough ?\n\nNo, this table has no UPDATEs or DELETEs. It is auto vacuum'd, but no manual\nvacuuming.\n\nIn regards to clustering, I'm hesitant to do that unless I have no other\nchoice. My understanding is that I would need to do periodic re-clustering\nto maintain it, and during that time the table is very busy.\n\n\n> From: Tom Lane [mailto:[email protected]]\n> Hmm ... it's pretty unusual to see the index fetch portion of a bitmap\n> scan take the bulk of the runtime. Usually that part is fast and where\n> the pain comes is in fetching from the heap. I wonder whether that\n> index has become bloated. How big are the table and the index\n> physically? (Look at pg_class.relpages, or if you want a really\n> accurate number try pg_relation_size().)\n\nCan you give me some more info on how to look at these stats? That is,\nwhat/where is pg_class.relpages, etc. 
I'll also do some searching for this\ninfo.\n\n> What Postgres version is this, exactly?\n\n8.3.3\n \n> BTW, I think you've gone way overboard in your indexing of this table;\n> those indexes are certainly consuming well more space than the table\n> does, and a lot of them are redundant.\n\nAgreed, I need to look carefully at all of the queries we do on this table\nand reduce this.\n\n\n\n--Rainer\n\n", "msg_date": "Mon, 6 Apr 2009 10:26:08 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: difficulties with time based queries" }, { "msg_contents": "\"Rainer Mager\" <[email protected]> writes:\n>> From: Tom Lane [mailto:[email protected]]\n>> Hmm ... it's pretty unusual to see the index fetch portion of a bitmap\n>> scan take the bulk of the runtime. Usually that part is fast and where\n>> the pain comes is in fetching from the heap. I wonder whether that\n>> index has become bloated. How big are the table and the index\n>> physically? (Look at pg_class.relpages, or if you want a really\n>> accurate number try pg_relation_size().)\n\n> Can you give me some more info on how to look at these stats?\n\nSince you've got 8.3 it's easy: select pg_relation_size('tablename')\n(or indexname). The result is in bytes, so you might want to\ndivide by 1K or 1M to keep the number readable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Apr 2009 21:33:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries " }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> \"Rainer Mager\" <[email protected]> writes:\n> >> From: Tom Lane [mailto:[email protected]]\n> >> Hmm ... it's pretty unusual to see the index fetch portion of a\n> bitmap\n> >> scan take the bulk of the runtime. Usually that part is fast and\n> where\n> >> the pain comes is in fetching from the heap. I wonder whether that\n> >> index has become bloated. How big are the table and the index\n> >> physically? (Look at pg_class.relpages, or if you want a really\n> >> accurate number try pg_relation_size().)\n> \n> > Can you give me some more info on how to look at these stats?\n> \n> Since you've got 8.3 it's easy: select pg_relation_size('tablename')\n> (or indexname). The result is in bytes, so you might want to\n> divide by 1K or 1M to keep the number readable.\n\nOk, nice and simple...I like it:\n\nThe result for the table ad_log, is 30,063 MB. The result for the index,\nad_log_date_all, is 17,151 MB. I guess this roughly makes sense since the\nindex is on 4 fields and the table only has 6 fields.\n\nFor the particular query I'm trying to optimize at the moment I believe I\nshould be able to use an index that references only 2 fields, which, I\nimagine, should reduce the time needed to read it. I'll play with this a bit\nand see what happens.\n\nAny other suggestions?\n\n--Rainer\n\n\n\n", "msg_date": "Mon, 6 Apr 2009 12:35:36 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: difficulties with time based queries " }, { "msg_contents": "On Sun, Apr 5, 2009 at 11:35 PM, Rainer Mager <[email protected]> wrote:\n>> -----Original Message-----\n>> From: Tom Lane [mailto:[email protected]]\n>> \"Rainer Mager\" <[email protected]> writes:\n>> >> From: Tom Lane [mailto:[email protected]]\n>> >> Hmm ... it's pretty unusual to see the index fetch portion of a\n>> bitmap\n>> >> scan take the bulk of the runtime.  
Usually that part is fast and\n>> where\n>> >> the pain comes is in fetching from the heap.   I wonder whether that\n>> >> index has become bloated.  How big are the table and the index\n>> >> physically?  (Look at pg_class.relpages, or if you want a really\n>> >> accurate number try pg_relation_size().)\n>>\n>> > Can you give me some more info on how to look at these stats?\n>>\n>> Since you've got 8.3 it's easy: select pg_relation_size('tablename')\n>> (or indexname).  The result is in bytes, so you might want to\n>> divide by 1K or 1M to keep the number readable.\n>\n> Ok, nice and simple...I like it:\n>\n> The result for the table ad_log, is 30,063 MB. The result for the index,\n> ad_log_date_all, is 17,151 MB. I guess this roughly makes sense since the\n> index is on 4 fields and the table only has 6 fields.\n>\n> For the particular query I'm trying to optimize at the moment I believe I\n> should be able to use an index that references only 2 fields, which, I\n> imagine, should reduce the time needed to read it. I'll play with this a bit\n> and see what happens.\n\nEven if your query \"could use\" an index four fields, a lot of times it\nwon't be the winning strategy, because it means reading a lot more\ndata from the disk. Plus, all of these huge indices are competing for\nRAM with data from the table itself. You might want to think about\ngetting rid of all of the indices with more than 1 or 2 columns.\nad_log_unique is probably huge and it seems like it's probably not\nimproving your data integrity as much as you might think...\n\n...Robert\n", "msg_date": "Mon, 6 Apr 2009 06:37:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries" }, { "msg_contents": "On Mon, 6 Apr 2009, Rainer Mager wrote:\n> The data should be mostly ordered by date. It is all logged in semi-realtime\n> such that 99% will be logged within an hour of the timestamp. Also, as\n> stated above, during this query it was about 1.2 MB/s, which I know isn't\n> great. I admit this isn't the best hardware in the world, but I would expect\n> better than that for linear queries.\n\nMight you have an unbalanced index tree? Reindexing would also solve that \nproblem.\n\nMatthew\n\n-- \n There are only two kinds of programming languages: those people always\n bitch about and those nobody uses. (Bjarne Stroustrup)\n", "msg_date": "Mon, 6 Apr 2009 13:24:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries" }, { "msg_contents": "So, I defragged my disk and reran my original query and it got a little\nbetter, but still far higher than I'd like. 
I then rebuilt (dropped and\nrecreated) the ad_log_date_all index and reran the query and it is quite a\nbit better:\n\n \n\n# explain analyze select * from ad_log where date(start_time) <\ndate('2009-03-31') and date(start_time) >= date('2009-03-30');\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n------------------------------------------------------------------\n\n Bitmap Heap Scan on ad_log (cost=64770.21..3745596.62 rows=2519276\nwidth=32) (actual time=1166.479..13862.107 rows=2275167 loops=1)\n\n Recheck Cond: ((date(start_time) < '2009-03-31'::date) AND\n(date(start_time) >= '2009-03-30'::date))\n\n -> Bitmap Index Scan on ad_log_date_all (cost=0.00..64140.39\nrows=2519276 width=0) (actual time=1143.582..1143.582 rows=2275167 loops=1)\n\n Index Cond: ((date(start_time) < '2009-03-31'::date) AND\n(date(start_time) >= '2009-03-30'::date))\n\n Total runtime: 14547.885 ms\n\n \n\n \n\nDuring the query the disk throughput peaked at 30MB/s and was mostly at\naround 20MB/s, much better.\n\n \n\nSo, a few questions:\n\n \n\nWhat can I do to prevent the index from getting bloated, or in whatever\nstate it was in?\n\n \n\nWhat else can I do to further improve queries on this table? Someone\nsuggested posting details of my conf file. Which settings are most likely to\nbe useful for this?\n\n \n\nAny other suggestions?\n\n \n\n \n\nThanks,\n\n \n\n--Rainer\n\n\n\n\n\n\n\n\n\n\n\nSo, I\ndefragged my disk and reran my original query and it got a little better, but\nstill far higher than I'd like. I then rebuilt (dropped and recreated) the\nad_log_date_all index and reran the query and it is quite a bit better:\n \n# explain analyze\nselect * from ad_log where date(start_time) < date('2009-03-31') and\ndate(start_time) >= date('2009-03-30');\n                                                                 \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap\nScan on ad_log  (cost=64770.21..3745596.62 rows=2519276 width=32) (actual\ntime=1166.479..13862.107 rows=2275167 loops=1)\n  \nRecheck Cond: ((date(start_time) < '2009-03-31'::date) AND (date(start_time)\n>= '2009-03-30'::date))\n  \n->  Bitmap Index Scan on ad_log_date_all  (cost=0.00..64140.39\nrows=2519276 width=0) (actual time=1143.582..1143.582 rows=2275167 loops=1)\n        \nIndex Cond: ((date(start_time) < '2009-03-31'::date) AND (date(start_time)\n>= '2009-03-30'::date))\n Total\nruntime: 14547.885 ms\n \n \nDuring the query the disk throughput peaked at 30MB/s and was\nmostly at around 20MB/s, much better.\n \nSo, a few questions:\n \nWhat can I do to prevent the index from getting bloated, or in\nwhatever state it was in?\n \nWhat else can I do to further improve queries on this table? Someone\nsuggested posting details of my conf file. 
Which settings are most likely to be\nuseful for this?\n \nAny other suggestions?\n \n \nThanks,\n \n--Rainer", "msg_date": "Thu, 9 Apr 2009 07:49:39 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: difficulties with time based queries " }, { "msg_contents": "\"Rainer Mager\" <[email protected]> writes:\n> So, I need indices that make it fast querying against start_time as well as\n> all possible combinations of channel, player, and ad.\n\nThere's some general principles in the manual --- have you read\nhttp://www.postgresql.org/docs/8.3/static/indexes.html\nespecially 11.3 and 11.5?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Apr 2009 01:20:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries " }, { "msg_contents": "\n> What can I do to prevent the index from getting bloated, or in whatever\n> state it was in?\n>\n>\n> What else can I do to further improve queries on this table? Someone\n> suggested posting details of my conf file. Which settings are most \n> likely to\n> be useful for this?\n\n\tIf you often do range queries on date, consider partitioning your table \nby date (something like 1 partition per month).\n\tOf course, if you also often do range queries on something other than \ndate, and uncorrelated, forget it.\n\n\tIf you make a lot of big aggregate queries, consider materialized views :\n\n\tLike \"how many games player X won this week\", etc\n\n\t- create \"helper\" tables which contain the query results\n\t- every night, recompute the results taking into account the most recent \ndata\n\t- don't recompute results based on old data that never changes\n\n\tThis is only interesting if the aggregation reduces the data volume by \n\"an appreciable amount\". For instance, if you run a supermarket with 1000 \ndistinct products in stock and you sell 100.000 items a day, keeping a \ncache of \"count of product X sold each day\" will reduce your data load by \nabout 100 on the query \"count of product X sold this month\".\n\n\tThe two suggestion above are not mutually exclusive.\n\n\tYou could try bizgres also. Or even MySQL !... MySQL's query engine is \nslower than pg but the tables take much less space than Postgres, and it \ncan do index-only queries. So you can fit more in the cache. This is only \nvalid for MyISAM (InnoDB is a bloated hog). Of course, noone would want to \nuse MyISAM for the \"safe\" storage, but it's pretty good as a read-only \nstorage. You can even use the Archive format for even more compactness and \nuse of cache. Of course you'd have to devise a way to dump from pg and \nload into MySQL but that's not hard. MySQL can be good if you target a \ntable with lots of small rows with a few ints, all of them in a \nmulticolumn index, so it doesn't need to hit the table itself.\n\n\tNote that one in his right mind would never run aggregate queries on a \nlive R/W MyISAM table since the long queries will block all writes and \nblow up the reaction time. 
But for a read-only cache updated at night, or \nreplication slave, it's okay.\n", "msg_date": "Tue, 14 Apr 2009 11:18:31 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries" }, { "msg_contents": ">\n> If you often do range queries on date, consider partitioning your\n> table by date (something like 1 partition per month).\n> Of course, if you also often do range queries on something other\n> than date, and uncorrelated, forget it.\n\n\nIf you pick your partition to line up with your queries than you can\nprobably do away with the date index. Even if it doesn't always line up\nperfectly its worth considering.\n\n\n>\n> If you make a lot of big aggregate queries, consider materialized\n> views :\n>\n> Like \"how many games player X won this week\", etc\n>\n> - create \"helper\" tables which contain the query results\n> - every night, recompute the results taking into account the most\n> recent data\n> - don't recompute results based on old data that never changes\n>\n> This is only interesting if the aggregation reduces the data volume\n> by \"an appreciable amount\". For instance, if you run a supermarket with 1000\n> distinct products in stock and you sell 100.000 items a day, keeping a cache\n> of \"count of product X sold each day\" will reduce your data load by about\n> 100 on the query \"count of product X sold this month\".\n\n\nThis obviously creates some administration overhead. So long as this is\nmanageable for you this is a great solution. You might also want to look at\nMondrian at http://mondrian.pentaho.org/ . It takes some tinkering but buys\nyou some neat views into your data and automatically uses those aggregate\ntables.\n\n\nNik Everett\n\n       If you often do range queries on date, consider partitioning your table by date (something like 1 partition per month).\n\n        Of course, if you also often do range queries on something other than date, and uncorrelated, forget it.If you pick your partition to line up with your queries than you can probably do away with the date index.  Even if it doesn't always line up perfectly its worth considering.\n\n\n        If you make a lot of big aggregate queries, consider materialized views :\n\n        Like \"how many games player X won this week\", etc\n\n        - create \"helper\" tables which contain the query results\n        - every night, recompute the results taking into account the most recent data\n        - don't recompute results based on old data that never changes\n\n        This is only interesting if the aggregation reduces the data volume by \"an appreciable amount\". For instance, if you run a supermarket with 1000 distinct products in stock and you sell 100.000 items a day, keeping a cache of \"count of product X sold each day\" will reduce your data load by about 100 on the query \"count of product X sold this month\".\nThis obviously creates some administration overhead.  So long as this is manageable for you this is a great solution.  You might also want to look at Mondrian at http://mondrian.pentaho.org/ .  It takes some tinkering but buys you some neat views into your data and automatically uses those aggregate tables.\n Nik Everett", "msg_date": "Tue, 14 Apr 2009 08:56:01 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difficulties with time based queries" } ]
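To make the partitioning suggestion concrete for the ad_log table in this thread: on 8.3 it is done with table inheritance plus CHECK constraints, and constraint_exclusion has to be enabled so the planner can skip the months a query cannot touch. The following is only an outline for a single month (inserts have to be routed to the right child by a trigger or by the application, existing rows would need to be moved, and queries should constrain start_time directly rather than date(start_time) so the constraints can be used):

    -- assumes constraint_exclusion = on in postgresql.conf
    CREATE TABLE ad_log_2009_03 (
        CHECK (start_time >= DATE '2009-03-01' AND start_time < DATE '2009-04-01')
    ) INHERITS (ad_log);

    CREATE INDEX ad_log_2009_03_start_time ON ad_log_2009_03 (start_time);

    -- a range query against the parent then only scans the matching child
    SELECT count(*)
    FROM ad_log
    WHERE start_time >= DATE '2009-03-30'
      AND start_time <  DATE '2009-03-31';

This also lines up with the earlier observation that rows arrive in near start_time order, so each month's partition stays roughly clustered by date on disk without periodic CLUSTER runs.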
[ { "msg_contents": "I have an issue with the add foreign key constraint which goes for waiting\nand locks other queries as well.\n\nALTER TABLE ONLY holding_positions ADD CONSTRAINT\nholding_positions_stock_id_fkey FOREIGN KEY (stock_id)\n REFERENCES stocks (stock_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION;\n\nThe holding_positions table has no data in it as yet.\n\n\nps aux | grep postgres\npostgres 5855 0.8 1.6 346436 271452 ? Ss 16:34 0:07\npostgres: abc stocks 192.100.100.111(60308) SELECT waiting\n postgres 6134 0.0 0.0 346008 4184 ? Ss 16:44 0:00\npostgres: xyz stocks 192.100.100.222(34604) ALTER TABLE waiting\n\n\n\nAny suggestions would be appreciated.\n\nRoopa\n\n \nI have an issue with the add foreign key constraint which goes for waiting and locks other queries as well.\n \nALTER TABLE ONLY holding_positions ADD CONSTRAINT holding_positions_stock_id_fkey FOREIGN KEY (stock_id)      REFERENCES stocks (stock_id) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE NO ACTION;\n \nThe holding_positions table has no data in it as yet.\n \n \nps aux | grep postgres\npostgres  5855  0.8  1.6 346436 271452 ?     Ss   16:34   0:07 postgres: abc stocks 192.100.100.111(60308) SELECT waiting\n\npostgres  6134  0.0  0.0 346008 4184 ?       Ss   16:44   0:00 postgres: xyz stocks 192.100.100.222(34604) ALTER TABLE waiting\n \n \n\nAny suggestions would be appreciated.\nRoopa", "msg_date": "Mon, 6 Apr 2009 16:54:33 +1000", "msg_from": "roopasatish <[email protected]>", "msg_from_op": true, "msg_subject": "probelm with alter table add constraint......" }, { "msg_contents": "On Mon, Apr 6, 2009 at 2:54 AM, roopasatish <[email protected]> wrote:\n>\n> I have an issue with the add foreign key constraint which goes for waiting\n> and locks other queries as well.\n>\n> ALTER TABLE ONLY holding_positions ADD CONSTRAINT\n> holding_positions_stock_id_fkey FOREIGN KEY (stock_id)\n>       REFERENCES stocks (stock_id) MATCH SIMPLE\n>       ON UPDATE NO ACTION ON DELETE NO ACTION;\n>\n> The holding_positions table has no data in it as yet.\n>\n>\n> ps aux | grep postgres\n> postgres  5855  0.8  1.6 346436 271452 ?     Ss   16:34   0:07\n> postgres: abc stocks 192.100.100.111(60308) SELECT waiting\n> postgres  6134  0.0  0.0 346008 4184 ?       Ss   16:44   0:00\n> postgres: xyz stocks 192.100.100.222(34604) ALTER TABLE waiting\n>\n>\n>\n> Any suggestions would be appreciated.\n\nYou need to look at what locks they're waiting for.\n\nselect locktype, database, relation::regclass, page, tuple,\nvirtualxid, transactionid, classid, objid, objsubid,\nvirtualtransaction, pid, mode, granted from pg_locks;\n\n...Robert\n", "msg_date": "Mon, 6 Apr 2009 06:43:44 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: probelm with alter table add constraint......" 
}, { "msg_contents": "roopasatish wrote:\n> I have an issue with the add foreign key constraint which \n> goes for waiting and locks other queries as well.\n> \n> ALTER TABLE ONLY holding_positions ADD CONSTRAINT \n> holding_positions_stock_id_fkey FOREIGN KEY (stock_id)\n> REFERENCES stocks (stock_id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION;\n> \n> The holding_positions table has no data in it as yet.\n\nLook in pg_catalog.pg_locks for a second transaction that\nholds a lock on the table holding_positions.\n\nHow many backends do you see in pg_stat_activity that\nare running or in a transaction?\n\nAny other backend that is in a transaction that has e.g.\nselected from the table will block the ALTER TABLE.\n\nYours,\nLaurenz Albe\n", "msg_date": "Mon, 6 Apr 2009 12:47:11 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: probelm with alter table add constraint......" }, { "msg_contents": "\"Albe Laurenz\" <[email protected]> writes:\n> roopasatish wrote:\n>> I have an issue with the add foreign key constraint which \n>> goes for waiting and locks other queries as well.\n>> \n>> ALTER TABLE ONLY holding_positions ADD CONSTRAINT \n>> holding_positions_stock_id_fkey FOREIGN KEY (stock_id)\n>> REFERENCES stocks (stock_id) MATCH SIMPLE\n>> ON UPDATE NO ACTION ON DELETE NO ACTION;\n>> \n>> The holding_positions table has no data in it as yet.\n\n> Look in pg_catalog.pg_locks for a second transaction that\n> holds a lock on the table holding_positions.\n\nThis statement also needs to get lock on the referenced table \"stocks\".\nAn open transaction that's referenced either table will block it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Apr 2009 09:54:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: probelm with alter table add constraint...... " }, { "msg_contents": "\n\n\nTom Lane-2 wrote:\n> \n> \"Albe Laurenz\" <[email protected]> writes:\n>> roopasatish wrote:\n>>> I have an issue with the add foreign key constraint which \n>>> goes for waiting and locks other queries as well.\n>>> \n>>> ALTER TABLE ONLY holding_positions ADD CONSTRAINT \n>>> holding_positions_stock_id_fkey FOREIGN KEY (stock_id)\n>>> REFERENCES stocks (stock_id) MATCH SIMPLE\n>>> ON UPDATE NO ACTION ON DELETE NO ACTION;\n>>> \n>>> The holding_positions table has no data in it as yet.\n> \n>> Look in pg_catalog.pg_locks for a second transaction that\n>> holds a lock on the table holding_positions.\n> \n> This statement also needs to get lock on the referenced table \"stocks\".\n> An open transaction that's referenced either table will block it.\n> \n> \t\t\tregards, tom lane\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n\nI can't lock the table 'stocks' as its used continuously by many users. Is\nthere a way to run the constraint in a background without affecting the\nusers using the database.\n\nThanks a lot in advance\nRoopa\n-- \nView this message in context: http://www.nabble.com/probelm-with-alter-table-add-constraint......-tp22903334p23170924.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 22 Apr 2009 00:16:12 -0700 (PDT)", "msg_from": "roopabenzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: probelm with alter table add constraint......" } ]
[ { "msg_contents": "I know that EXPLAIN will show the query plan. I know that pg_locks will \nshow the locks currently held for activity transactions. Is there a way \nto determine what locks a query will hold when it is executed?\n\nThanks,\nBrian\n", "msg_date": "Tue, 07 Apr 2009 11:05:29 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "determining the locks that will be held by a query" } ]
[ { "msg_contents": "It seems that ANALYZE does not really sample text column values as much \nas it could. We have some very bad query plans resulting from this:\n\n...\n -> Bitmap Index Scan on m_pkey (cost=0.00..28.61 rows=102 \nwidth=0) (actual time=171.824..171.824 rows=683923 loops=1)\n Index Cond: ((e >= 'ean'::text) AND (e < 'eao'::text)\n\nThis gets even worse for longer strings, where we know that many \nmatching rows exist:\n\n# explain analyze select substring(e,5) from m where id=257421 and e ~ \n'^ean=';\n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------\n Index Scan using m_pkey on m (cost=0.00..12.50 rows=1 width=60) \n(actual time=1623.795..1703.958 rows=18 loops=1)\n Index Cond: ((e >= 'ean='::text) AND (e < 'ean>'::text))\n Filter: ((e ~ '^ean='::text) AND (id = 257421))\n Total runtime: 1703.991 ms\n(4 rows)\n\nHere it would be much better to use the existing index on \"id\" (btree) \nfirst because the current index condition selects 683k rows whereas the \nresult contains 18 rows. Using the index on id would yield 97 rows to \nfilter.\n\nIs it possible to work around this problem somehow, other than adding \npartial indexes for the ~ / LIKE condition (when it's constant) or a \n2-dimensional index?\n\n(what exactly does ANALYZE look at for text columns? in our case, about \n7% of the rows match the index condition, so it seems that left-anchored \nregexp/like matches are not evaluated using the gathered \nmost-common-value list at all)\n\nRegards,\n Marinos\n", "msg_date": "Wed, 08 Apr 2009 15:42:12 +0200", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": true, "msg_subject": "bad query plans for ~ \"^string\" (and like \"string%\") (8.3.6)" }, { "msg_contents": "Marinos Yannikos wrote:\n> (what exactly does ANALYZE look at for text columns? in our case, about \n> 7% of the rows match the index condition, so it seems that left-anchored \n> regexp/like matches are not evaluated using the gathered \n> most-common-value list at all)\n\noops, I think I gave myself the answer there. Of course the \nmost-common-value list will not help if all the values that match the \n\"bad\" index condition exist only once, but have a common prefix...\n\nPerhaps Postgres could sample the first few characters separately for \nsuch queries, but it's probably not worth it.\n\nRegards,\n Marinos\n", "msg_date": "Wed, 08 Apr 2009 15:50:58 +0200", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad query plans for ~ \"^string\" (and like \"string%\")\n (8.3.6)" }, { "msg_contents": "On Wed, Apr 8, 2009 at 9:42 AM, Marinos Yannikos <[email protected]> wrote:\n> It seems that ANALYZE does not really sample text column values as much as\n> it could. 
We have some very bad query plans resulting from this:\n>\n> ...\n>         ->  Bitmap Index Scan on m_pkey  (cost=0.00..28.61 rows=102 width=0)\n> (actual time=171.824..171.824 rows=683923 loops=1)\n>               Index Cond: ((e >= 'ean'::text) AND (e < 'eao'::text)\n>\n> This gets even worse for longer strings, where we know that many matching\n> rows exist:\n>\n> # explain analyze select substring(e,5) from m where id=257421 and e ~\n> '^ean=';\n>                                                        QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------\n>  Index Scan using m_pkey on m  (cost=0.00..12.50 rows=1 width=60) (actual\n> time=1623.795..1703.958 rows=18 loops=1)\n>   Index Cond: ((e >= 'ean='::text) AND (e < 'ean>'::text))\n>   Filter: ((e ~ '^ean='::text) AND (id = 257421))\n>  Total runtime: 1703.991 ms\n> (4 rows)\n>\n> Here it would be much better to use the existing index on \"id\" (btree) first\n> because the current index condition selects 683k rows whereas the result\n> contains 18 rows. Using the index on id would yield 97 rows to filter.\n>\n> Is it possible to work around this problem somehow, other than adding\n> partial indexes for the ~ / LIKE condition (when it's constant) or a\n> 2-dimensional index?\n>\n> (what exactly does ANALYZE look at for text columns? in our case, about 7%\n> of the rows match the index condition, so it seems that left-anchored\n> regexp/like matches are not evaluated using the gathered most-common-value\n> list at all)\n\nWhat are you using for default_statistics_target?\n\nYou can see the gathered data in pg_statistic.\n\n...Robert\n", "msg_date": "Wed, 8 Apr 2009 09:53:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad query plans for ~ \"^string\" (and like \"string%\")\n\t(8.3.6)" }, { "msg_contents": "Marinos Yannikos <[email protected]> writes:\n> Marinos Yannikos wrote:\n>> (what exactly does ANALYZE look at for text columns? in our case, about \n>> 7% of the rows match the index condition, so it seems that left-anchored \n>> regexp/like matches are not evaluated using the gathered \n>> most-common-value list at all)\n\n> oops, I think I gave myself the answer there. Of course the \n> most-common-value list will not help if all the values that match the \n> \"bad\" index condition exist only once, but have a common prefix...\n\nThe costing is really done off the range condition ((e >= 'ean'::text)\nAND (e < 'eao'::text) in your example). I wouldn't think it would have\nsuch a hard time getting a good rowcount estimate for that. Maybe you\nneed to bump up the statistics target for that column?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Apr 2009 10:28:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad query plans for ~ \"^string\" (and like \"string%\") (8.3.6) " } ]
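A sketch of the two knobs discussed above, using the thread's table and column names (m, e, id); the statistics value and index name are only illustrations:

  ALTER TABLE m ALTER COLUMN e SET STATISTICS 1000;
  ANALYZE m;

If the rowcount estimate for the prefix range is still far off, a composite index such as

  CREATE INDEX m_id_e_idx ON m (id, e text_pattern_ops);

lets the id = ... condition drive the scan while still supporting the left-anchored pattern, at the cost of maintaining another index.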
[ { "msg_contents": "(This is related to an earlier post on -sql.)\n\nI'm querying for the N high scores for each game, with two tables:\nscores and games.\n\nCREATE TABLE game (id SERIAL NOT NULL PRIMARY KEY);\nCREATE TABLE score (id SERIAL NOT NULL PRIMARY KEY, score REAL,\ngame_id INTEGER REFERENCES game (id));\n-- test data: 1000 games, 100000 scores\nINSERT INTO game (id) select generate_series(1,1000);\nINSERT INTO score (game_id, score) select game.id, random() from game,\ngenerate_series(1,100);\nCREATE INDEX score_idx1 ON score (game_id, score desc);\nANALYZE;\n\nThis query retrieves the single highest score for each game, but\ndoesn't allow any more than that--I can't get the top five scores for\neach game. However, it's very fast: on the above test data, it runs\nin 25ms on my system. With 1000000 scores, it takes 40ms.\n\nSELECT s.* FROM score s\nWHERE s.id IN (\n -- Get the high scoring score ID for each game:\n SELECT\n (\n -- Get the high score for game g:\n SELECT s2.id FROM score s2 WHERE s2.game_id = g.id ORDER BY\ns2.score DESC LIMIT 1\n )\n FROM game g\n);\n\n\nThis rewrite allows getting the top N scores. Unfortunately, this one\ntakes 950ms for the same data. With 1000000 scores, it takes 14800ms.\n\nSELECT s.* FROM score s, game g\nWHERE s.game_id = g.id AND\n s.id IN (\n SELECT s2.id FROM score s2 WHERE s2.game_id=g.id ORDER BY s2.score\nDESC LIMIT 1\n );\n\n\nThis seems simple: for each game, search for the highest score, and\nthen scan the tree to get the next N-1 highest scores. The first\nversion does just that, but the second one is doing a seq scan over\nscore.\n\nI do want to be able to use a LIMIT higher than 1, which only works\nwith the second form. Any suggestions of how to get the efficient\nscanning of the first while being able to use a LIMIT greater than 1?\n\n(It'd even be faster to make several calls to the first version,\nvarying an OFFSET to get each high score--but that's terrible.)\n\n-- \nGlenn Maynard\n", "msg_date": "Wed, 8 Apr 2009 17:09:24 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": true, "msg_subject": "Nested query performance issue" }, { "msg_contents": "2009/4/9 Glenn Maynard <[email protected]>\n\n> (This is related to an earlier post on -sql.)\n>\n> I'm querying for the N high scores for each game, with two tables:\n> scores and games.\n>\n> CREATE TABLE game (id SERIAL NOT NULL PRIMARY KEY);\n> CREATE TABLE score (id SERIAL NOT NULL PRIMARY KEY, score REAL,\n> game_id INTEGER REFERENCES game (id));\n> -- test data: 1000 games, 100000 scores\n> INSERT INTO game (id) select generate_series(1,1000);\n> INSERT INTO score (game_id, score) select game.id, random() from game,\n> generate_series(1,100);\n> CREATE INDEX score_idx1 ON score (game_id, score desc);\n> ANALYZE;\n>\n\nHow about\n\nselect s1.*\nfrom score s1 join score s2 on s1.game_id=s2.game_id and s2.score >=\ns1.score\ngroup by s1.*\nhaving count(s2.*) <= N\n\nNote: you can have problems if you have same scores - you will loose last\ngroup that overlap N\n\nIn any case, you don't need to join game since all you need is game_id you\nalready have in score.\n\nP.S. 
EXPLAIN ANALYZE could help\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Thu, 9 Apr 2009 00:30:48 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "(I didn't notice that I ended up with \"score.score\" in this test case. Oops.)\n\n2009/4/8 Віталій Тимчишин <[email protected]>:\n> How about\n>\n> select s1.*\n> from score s1 join score s2 on s1.game_id=s2.game_id and s2.score >=\n> s1.score\n> group by s1.*\n> having count(s2.*) <= N\n\nI can see what this is doing, but I'm getting:\n\nERROR: could not identify an ordering operator for type score\nHINT: Use an explicit ordering operator or modify the query.\n\nI'm not sure why; if I replace s1.* and s2.* with s1.id and s2.id it\nworks, but then I only get IDs.\n\nUnfortunately, with N = 1 this takes 8100ms (vs. 950ms and 25ms)...\n\n-- \nGlenn Maynard\n", "msg_date": "Wed, 8 Apr 2009 17:54:55 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "OK, got to my postgres. Here you are:\n\ncreate or replace function explode_array(in_array anyarray) returns setof\nanyelement as\n$$\n select ($1)[s] from generate_series(1,array_upper($1, 1)) as s;\n$$\nlanguage sql immutable;\n\nSELECT s.* FROM score s\nWHERE s.id IN (\n select\n -- Get the high scoring score ID for each game:\n explode_array(ARRAY(\n -- Get the high score for game g:\n SELECT s2.id FROM score s2 WHERE s2.game_id = g.id ORDER BY\ns2.score DESC LIMIT 5\n ))\n FROM game g\n);\n\nIt takes ~64ms for me\n\nBest regards, Vitaliy Tymchyshyn\n", "msg_date": "Thu, 9 Apr 2009 12:25:26 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "Glenn Maynard wrote:\n> This rewrite allows getting the top N scores. 
Unfortunately, this one\n> takes 950ms for the same data. With 1000000 scores, it takes 14800ms.\n> \n> SELECT s.* FROM score s, game g\n> WHERE s.game_id = g.id AND\n> s.id IN (\n> SELECT s2.id FROM score s2 WHERE s2.game_id=g.id ORDER BY s2.score\n> DESC LIMIT 1\n> );\n\nYou don't really need the join with game here, simplifying this into:\n\nSELECT s.* FROM score s\nWHERE s.id IN (\n SELECT s2.id FROM score s2 WHERE s2.game_id=s.game_id ORDER BY \ns2.score\nDESC LIMIT 1\n);\n\nI don't think it makes it any faster, though.\n\nYou can also do this in a very nice and clean fashion using the upcoming \nPG 8.4 window functions:\n\nSELECT * FROM (\n SELECT s.*, rank() OVER (PARTITION BY s.game_id ORDER BY score DESC) \nAS rank FROM score s\n) AS sub WHERE rank <= 5;\n\nbut I'm not sure how much faster it is. At least here on my laptop it \ndoes a full index scan on score, which may or may not be faster than \njust picking the top N values for each game using the index.\n\n> This seems simple: for each game, search for the highest score, and\n> then scan the tree to get the next N-1 highest scores. The first\n> version does just that, but the second one is doing a seq scan over\n> score.\n\nYou can do that approach with a SQL function:\n\nCREATE FUNCTION topnscores(game_id int , n int) RETURNS SETOF score \nLANGUAGE SQL AS $$\nSELECT * FROM score s WHERE s.game_id = $1 ORDER BY score DESC LIMIT $2\n$$;\n\nSELECT (sub.ts).id, (sub.ts).score, (sub.ts).game_id\nFROM (SELECT topnscores(g.id, 5) ts FROM game g) sub;\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 09 Apr 2009 14:29:11 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "On Thu, Apr 9, 2009 at 7:29 AM, Heikki Linnakangas\n<[email protected]> wrote:\n>> SELECT s.* FROM score s, game g\n>> WHERE s.game_id = g.id AND\n>>  s.id IN (\n>>    SELECT s2.id FROM score s2 WHERE s2.game_id=g.id ORDER BY s2.score\n>> DESC LIMIT 1\n>>  );\n>\n> You don't really need the join with game here, simplifying this into:\n>\n> SELECT s.* FROM score s\n> WHERE s.id IN (\n>    SELECT s2.id FROM score s2 WHERE s2.game_id=s.game_id ORDER BY s2.score\n> DESC LIMIT 1\n> );\n>\n> I don't think it makes it any faster, though.\n\nIt's about 10% faster for me. I'm surprised the planner can't figure\nout that this join is redundant.\n\n> SELECT * FROM (\n>  SELECT s.*, rank() OVER (PARTITION BY s.game_id ORDER BY score DESC) AS\n> rank FROM score s\n> ) AS sub WHERE rank <= 5;\n>\n> but I'm not sure how much faster it is. At least here on my laptop it does a\n> full index scan on score, which may or may not be faster than just picking\n> the top N values for each game using the index.\n\nI'll definitely check this out when 8.4 is released.\n\n> You can do that approach with a SQL function:\n>\n> CREATE FUNCTION topnscores(game_id int , n int) RETURNS SETOF score LANGUAGE\n> SQL AS $$\n> SELECT * FROM score s WHERE s.game_id = $1 ORDER BY score DESC LIMIT $2\n> $$;\n>\n> SELECT (sub.ts).id, (sub.ts).score, (sub.ts).game_id\n> FROM (SELECT topnscores(g.id, 5) ts FROM game g) sub;\n(\"as ts\", for anyone trying this at home)\n\nThanks--this one runs in 32ms, which seems about right compared\nagainst the original fast LIMIT 1 version.\n\nI see a slight improvement if I mark the function stable: 31.9ms to\n31.2; minor but consistent. Just out of curiosity, any explanations\nfor this difference? 
I don't see any change in the resulting query\nplan, but the plan doesn't enter the function call.\n\n-- \nGlenn Maynard\n", "msg_date": "Thu, 9 Apr 2009 19:42:18 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "2009/4/9 Віталій Тимчишин <[email protected]>:\n> create or replace function explode_array(in_array anyarray) returns setof\n> anyelement as\n> $$\n>     select ($1)[s] from generate_series(1,array_upper($1, 1)) as s;\n> $$\n> language sql immutable;\n\nI tried using an ARRAY like this, but didn't quite figure out the\nexplode_array piece.\n\n-- \nGlenn Maynard\n", "msg_date": "Thu, 9 Apr 2009 19:42:41 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "On Thu, 9 Apr 2009, tiv00 wrote:\n\n> create or replace function explode_array(in_array anyarray) returns setof anyelement as\n> $$\n> ��� select ($1)[s] from generate_series(1,array_upper($1, 1)) as s;\n> $$\n> language sql immutable;\n\nNote that you can make this function a bit more general by using \narray_lower as the bottom bound:\n\ncreate or replace function explode_array(in_array anyarray) returns setof anyelement as\n$$\n select ($1)[s] from generate_series\n (array_lower($1, 1), array_upper($1, 1)) as s;\n$$\nlanguage sql immutable;\n\nWhile you won't run into them in most situations, it is possible to create \narrays where the lower bound isn't 1 by using the subscript syntax. The \nexample in the manual even shows that somewhat odd possibilities like \nassigning something to \"myarray[-2:7]\" works.\n\nAs already pointed out, once you're in 8.4 the windowing functions might \nbe a better fit here, but 8.4 does have \"unnest\" built-in that replaces \nthe need to code this sort of thing yourself. You might want to name this \nfunction accordingly to match that upcoming standard (or not, depending on \nwhether you want to avoid or be reminding of the potential for using the \nbuilt-in). 
See \nhttp://www.depesz.com/index.php/2008/11/14/waiting-for-84-array-aggregate-and-array-unpacker/\nfor some examples.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n
", "msg_date": "Thu, 9 Apr 2009 19:59:41 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "On Thu, Apr 9, 2009 at 7:29 AM, Heikki Linnakangas\n<[email protected]> wrote:\n> CREATE FUNCTION topnscores(game_id int , n int) RETURNS SETOF score LANGUAGE\n> SQL AS $$\n> SELECT * FROM score s WHERE s.game_id = $1 ORDER BY score DESC LIMIT $2\n> $$;\n>\n> SELECT (sub.ts).id, (sub.ts).score, (sub.ts).game_id\n> FROM (SELECT topnscores(g.id, 5) ts FROM game g) sub;\n\nThe inner query:\n\nSELECT topnscores(g.id, 5) ts FROM game g\n\nhttp://www.postgresql.org/docs/8.3/static/xfunc-sql.html says this is\ndeprecated (though no deprecation warning is being generated):\n\n> Currently, functions returning sets can also be called in the select list of a query. For each row that the query generates by itself, the function returning set is invoked, and an output row is generated for each element of the function's result set. Note, however, that this capability is deprecated and might be removed in future releases.\n\nIt doesn't say how else to write this, though, and it's not obvious to\nme. \"SELECT ts.* FROM topnscores(g.id, 5) AS ts, game g\" doesn't work\n(\"function expression in FROM cannot refer to other relations of same\nquery level\"). Is there an equivalent way to do this so I won't have\ndeprecation looming over my back? I'm likely to become very dependent\non this pattern.\n\n-- \nGlenn Maynard\n", "msg_date": "Fri, 10 Apr 2009 02:11:29 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "Glenn Maynard <[email protected]> writes:\n> http://www.postgresql.org/docs/8.3/static/xfunc-sql.html says this is\n> deprecated (though no deprecation warning is being generated):\n\n>> Currently, functions returning sets can also be called in the select list of a query. For each row that the query generates by itself, the function returning set is invoked, and an output row is generated for each element of the function's result set. Note, however, that this capability is deprecated and might be removed in future releases.\n\nThe way to parse that is \"we don't like this and we will get rid of it\nif we can ever figure out a good substitute\". Right now there is no\n100% substitute, so it stays. (In fact, 8.4 will extend the feature so\nit works in cases that don't work today, like for PL functions.)\n\nThere are, however, good reasons not to like it, such as the rather\nquestionable behavior if there's more than one SRF in the same select\nlist. 
Don't complain if you run into that wart.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Apr 2009 09:42:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested query performance issue " }, { "msg_contents": "On Thu, 9 Apr 2009, Glenn Maynard wrote:\n> On Thu, Apr 9, 2009 at 7:29 AM, Heikki Linnakangas wrote:\n>>> SELECT s.* FROM score s, game g\n>>> WHERE s.game_id = g.id AND\n>>>  s.id IN (\n>>>    SELECT s2.id FROM score s2 WHERE s2.game_id=g.id ORDER BY s2.score\n>>> DESC LIMIT 1\n>>>  );\n>>\n>> You don't really need the join with game here, simplifying this into:\n>>\n>> SELECT s.* FROM score s\n>> WHERE s.id IN (\n>>    SELECT s2.id FROM score s2 WHERE s2.game_id=s.game_id ORDER BY s2.score\n>> DESC LIMIT 1\n>> );\n>>\n>> I don't think it makes it any faster, though.\n>\n> It's about 10% faster for me. I'm surprised the planner can't figure\n> out that this join is redundant.\n\nBecause the join isn't redundant? You're making the assumption that for \nevery score.game_id there is exactly one game.id that matches. Of course, \nyou may have a unique constraint and foreign key/trigger that ensures \nthis.\n\nMatthew\n\n-- \n The third years are wandering about all worried at the moment because they\n have to hand in their final projects. Please be sympathetic to them, say\n things like \"ha-ha-ha\", but in a sympathetic tone of voice \n -- Computer Science Lecturer", "msg_date": "Tue, 14 Apr 2009 10:33:25 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "On Tue, Apr 14, 2009 at 5:33 AM, Matthew Wakeling <[email protected]> wrote:\n>> It's about 10% faster for me.  I'm surprised the planner can't figure\n>> out that this join is redundant.\n>\n> Because the join isn't redundant? You're making the assumption that for\n> every score.game_id there is exactly one game.id that matches. Of course,\n> you may have a unique constraint and foreign key/trigger that ensures this.\n\nThat's the definition of the tables I gave.\n\nCREATE TABLE game (id SERIAL NOT NULL PRIMARY KEY); -- pk implies unique\nCREATE TABLE score (id SERIAL NOT NULL PRIMARY KEY, score REAL,\ngame_id INTEGER REFERENCES game (id));\n\n(I don't think it makes any difference to whether this can be\noptimized, but adding NOT NULL back to game_id doesn't change it,\neither.)\n\n-- \nGlenn Maynard\n", "msg_date": "Tue, 14 Apr 2009 06:04:22 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested query performance issue" }, { "msg_contents": "2009/4/9 Віталій Тимчишин <[email protected]>:\n> OK, got to my postgres. Here you are:\n>\n> create or replace function explode_array(in_array anyarray) returns setof\n> anyelement as\n> $$\n>     select ($1)[s] from generate_series(1,array_upper($1, 1)) as s;\n> $$\n> language sql immutable;\n>\n\nin 8.4, this will be replaced by the built in 'unnest'. Also we have array_agg.\n\nmerlin\n", "msg_date": "Tue, 14 Apr 2009 09:46:28 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested query performance issue" } ]
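For reference, a sketch of how the ARRAY/explode_array trick above collapses once 8.4's built-in unnest() is available (same test schema as the thread, not re-benchmarked here):

  SELECT s.*
  FROM score s
  WHERE s.id IN (
      SELECT unnest(ARRAY(
          SELECT s2.id
          FROM score s2
          WHERE s2.game_id = g.id
          ORDER BY s2.score DESC
          LIMIT 5
      ))
      FROM game g
  );

The 8.4 window-function form Heikki showed, rank() OVER (PARTITION BY game_id ORDER BY score DESC), is the other option once that release is out.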
[ { "msg_contents": "Hi all,\n\nHas anyone experimented with the Linux deadline parameters and have \nsome experiences to share?\n\nRegards,\nMark\n", "msg_date": "Thu, 9 Apr 2009 07:00:54 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": true, "msg_subject": "linux deadline i/o elevator tuning" }, { "msg_contents": "Mark Wong <[email protected]> wrote: \n> Has anyone experimented with the Linux deadline parameters and\n> have some experiences to share?\n \nWe've always used elevator=deadline because of posts like this:\n \nhttp://archives.postgresql.org/pgsql-performance/2008-04/msg00148.php\n \nI haven't benchmarked it, but when one of our new machines seemed a\nlittle sluggish, I found this hadn't been set. Setting this and\nrebooting Linux got us back to our normal level of performance.\n \n-Kevin\n", "msg_date": "Thu, 09 Apr 2009 09:09:52 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "acording to kernel folks, anticipatory scheduler is even better for dbs.\nOh well, it probably means everyone has to test it on their own at the\nend of day.\n", "msg_date": "Thu, 9 Apr 2009 15:27:17 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "On Thu, 9 Apr 2009, Grzegorz Jaśkiewicz wrote:\n> acording to kernel folks, anticipatory scheduler is even better for dbs.\n> Oh well, it probably means everyone has to test it on their own at the\n> end of day.\n\nBut the anticipatory scheduler basically makes the huge assumption that \nyou have one single disc in the system that takes a long time to seek from \none place to another. This assumption fails on both RAID arrays and SSDs, \nso I'd be interested to see some numbers to back that one up.\n\nMatthew\n\n-- \n import oz.wizards.Magic;\n if (Magic.guessRight())... -- Computer Science Lecturer", "msg_date": "Thu, 9 Apr 2009 15:32:29 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "On Thu, Apr 9, 2009 at 3:32 PM, Matthew Wakeling <[email protected]> wrote:\n> On Thu, 9 Apr 2009, Grzegorz Jaśkiewicz wrote:\n>>\n>> acording to kernel folks, anticipatory scheduler is even better for dbs.\n>> Oh well, it probably means everyone has to test it on their own at the\n>> end of day.\n>\n> But the anticipatory scheduler basically makes the huge assumption that you\n> have one single disc in the system that takes a long time to seek from one\n> place to another. This assumption fails on both RAID arrays and SSDs, so I'd\n> be interested to see some numbers to back that one up.\n\n(btw, CFQ is the anticipatory scheduler).\n\nno they not. They only assume that application reads blocks in\nsynchronous fashion, and that data read in block N will determine\nwhere the N+1 block is going to be.\nSo to avoid possible starvation problem, it will wait for short amount\nof time - in hope that app will want to read possibly next block on\ndisc, and putting that request at the end of queue could potentially\nstarve it. 
(that reason alone is why 2.6 linux feels so much more\nresponsive).\n\n\n-- \nGJ\n", "msg_date": "Thu, 9 Apr 2009 15:39:15 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote: \n> On Thu, 9 Apr 2009, Grzegorz Jaᅵkiewicz wrote:\n>> acording to kernel folks, anticipatory scheduler is even better for\n>> dbs. Oh well, it probably means everyone has to test it on their\n>> own at the end of day.\n> \n> But the anticipatory scheduler basically makes the huge assumption\n> that you have one single disc in the system that takes a long time\n> to seek from one place to another. This assumption fails on both\n> RAID arrays and SSDs, so I'd be interested to see some numbers to\n> back that one up.\n \nYeah, we're running on servers with at least 4 effective spindles,\nwith some servers having several dozen effective spindles. Assuming\none is not very effective. The setting which seemed sluggish for our\nenvironment was the anticipatory scheduler, so the kernel guys\napparently aren't thinking about the type of load we have on the\nhardware we have.\n \n-Kevin\n", "msg_date": "Thu, 09 Apr 2009 09:40:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "On 9-4-2009 16:09 Kevin Grittner wrote:\n> I haven't benchmarked it, but when one of our new machines seemed a\n> little sluggish, I found this hadn't been set. Setting this and\n> rebooting Linux got us back to our normal level of performance.\n\nWhy would you reboot after changing the elevator? For 2.6-kernels, it \ncan be adjusted on-the-fly for each device separately (echo 'deadline' > \n/sys/block/sda/queue/scheduler).\n\nI saw a nice reduction in load and slowness too after adjusting the cfq \nto deadline for a machine that was at its maximum I/O-capacity on a \nraid-array.\nApart from deadline, 'noop' should also be interesting for RAID and \nSSD-owners, as it basically just forwards the I/O-request to the device \nand doesn't do much (if any?) scheduling.\n\nBest regards,\n\nArjen\n", "msg_date": "Thu, 09 Apr 2009 16:42:28 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "Grzegorz Jaᅵkiewicz <[email protected]> wrote: \n> (btw, CFQ is the anticipatory scheduler).\n \nThese guys have it wrong?:\n \nhttp://www.wlug.org.nz/LinuxIoScheduler\n \n-Kevin\n", "msg_date": "Thu, 09 Apr 2009 09:42:45 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "On Thu, 9 Apr 2009, Grzegorz Ja�kiewicz wrote:\n> (btw, CFQ is the anticipatory scheduler).\n\nNo, CFQ and anticipatory are two completely different schedulers. You can \nchoose between them.\n\n>> But the anticipatory scheduler basically makes the huge assumption that you\n>> have one single disc in the system that takes a long time to seek from one\n>> place to another. 
This assumption fails on both RAID arrays and SSDs, so I'd\n>> be interested to see some numbers to back that one up.\n>\n> So to avoid possible starvation problem, it will wait for short amount\n> of time - in hope that app will want to read possibly next block on\n> disc, and putting that request at the end of queue could potentially\n> starve it. (that reason alone is why 2.6 linux feels so much more\n> responsive).\n\nThis only actually helps if the assumptions I stated above are true. \nAnticipatory is an opportunistic scheduler - it actually witholds requests \nfrom the disc as you describe, in the hope that a block will be fetched \nsoon right next to the last one. However, if you have more than one disc, \nthen witholding requests means that you lose the ability to perform more \nthan one request at once. Also, it assumes that it will take longer to \nseek to the next real request that it will for the program to issue its \nnext request, which is broken on SSDs. Anticipatory attempts to increase \nperformance by being unfair - it is essentially the opposite of CFQ.\n\nMatthew\n\n-- \n Now you see why I said that the first seven minutes of this section will have\n you looking for the nearest brick wall to beat your head against. This is\n why I do it at the end of the lecture - so I can run.\n -- Computer Science lecturer", "msg_date": "Thu, 9 Apr 2009 15:47:28 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "On Thu, Apr 9, 2009 at 3:42 PM, Kevin Grittner\n<[email protected]> wrote:\n> Grzegorz Jaœkiewicz <[email protected]> wrote:\n>> (btw, CFQ is the anticipatory scheduler).\n>\n> These guys have it wrong?:\n>\n> http://www.wlug.org.nz/LinuxIoScheduler\n\n\nsorry, I meant it replaced it :) (is default now).\n\n\n-- \nGJ\n", "msg_date": "Thu, 9 Apr 2009 15:48:16 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "On Thu, Apr 9, 2009 at 7:00 AM, Mark Wong <[email protected]> wrote:\n> Hi all,\n>\n> Has anyone experimented with the Linux deadline parameters and have some\n> experiences to share?\n\nHi all,\n\nThanks for all the responses, but I didn't mean selecting deadline as\nmuch as its parameters such as:\n\nantic_expire\nread_batch_expire\nread_expire\nwrite_batch_expire\nwrite_expire\n\nRegards,\nMark\n", "msg_date": "Thu, 9 Apr 2009 07:53:02 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "Arjen van der Meijden <[email protected]> wrote: \n> On 9-4-2009 16:09 Kevin Grittner wrote:\n>> I haven't benchmarked it, but when one of our new machines seemed a\n>> little sluggish, I found this hadn't been set. Setting this and\n>> rebooting Linux got us back to our normal level of performance.\n> \n> Why would you reboot after changing the elevator? 
For 2.6-kernels,\n> it can be adjusted on-the-fly for each device separately\n> (echo 'deadline' > /sys/block/sda/queue/scheduler).\n \nOn the OS where this happened, not yet an option:\n \nkgrittn@DBUTL-PG:~> cat /proc/version\nLinux version 2.6.5-7.315-bigsmp (geeko@buildhost) (gcc version 3.3.3\n(SuSE Linux)) #1 SMP Wed Nov 26 13:03:18 UTC 2008\nkgrittn@DBUTL-PG:~> ls -l /sys/block/sda/queue/\ntotal 0\ndrwxr-xr-x 2 root root 0 2009-03-06 15:27 iosched\n-rw-r--r-- 1 root root 4096 2009-03-06 15:27 nr_requests\n-rw-r--r-- 1 root root 4096 2009-03-06 15:27 read_ahead_kb\n \nOn machines built more recently than the above, I do see a scheduler\nentry in the /sys/block/sda/queue/ directory. I didn't know about\nthis enhancement, but I'll keep it in mind. Thanks for the tip!\n \n> Apart from deadline, 'noop' should also be interesting for RAID and \n> SSD-owners, as it basically just forwards the I/O-request to the\n> device and doesn't do much (if any?) scheduling.\n \nYeah, I've been tempted to give that a try, given that we have BBU\ncache with write-back. Without a performance problem using elevator,\nthough, it hasn't seemed worth the time.\n \n-Kevin\n", "msg_date": "Thu, 09 Apr 2009 09:57:43 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "On Thu, Apr 9, 2009 at 7:53 AM, Mark Wong <[email protected]> wrote:\n> On Thu, Apr 9, 2009 at 7:00 AM, Mark Wong <[email protected]> wrote:\n>> Hi all,\n>>\n>> Has anyone experimented with the Linux deadline parameters and have some\n>> experiences to share?\n>\n> Hi all,\n>\n> Thanks for all the responses, but I didn't mean selecting deadline as\n> much as its parameters such as:\n>\n> antic_expire\n> read_batch_expire\n> read_expire\n> write_batch_expire\n> write_expire\n\nAnd I dumped the parameters for the anticipatory scheduler. :p Here\nare the deadline parameters:\n\nfifo_batch\nfront_merges\nread_expire\nwrite_expire\nwrites_starved\n\nRegards,\nMark\n", "msg_date": "Thu, 9 Apr 2009 08:09:47 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "The anticipatory scheduler gets absolutely atrocious performance for server\nworkloads on even moderate server hardware. It is applicable only to single\nspindle setups on desktop-like worlkoads.\n\nSeriously, never use this for a database. It _literally_ will limit you to\n100 iops maximum random access iops by waiting 10ms for 'nearby' LBA\nrequests.\n\n\nFor Postgres, deadline, cfq, and noop are the main options.\n\nNoop is good for ssds and a few high performance hardware caching RAID cards\n(and only a few of the good ones), and poor otherwise.\n\nCfq tends to favor random access over sequential access in mixed load\nenvironments and does not tend to favor reads over writes. Because it\nbatches its elevator algorithm by requesting process, it becomes less\nefficient with lots of spindles where multiple processes have requests from\nnearby disk regions.\n\nDeadline tends to favor reads over writes and slightly favor sequential\naccess to random access (and gets more MB/sec on average as a result in\nmixed loads). It tends to work well for large stand-alone servers and not\nas well for desktop/workstation type loads.\n\nI have done a little tuning of the parameters of cfq and deadline, and never\nnoticed much difference. 
I suppose you could shift the deadline biases to\nread or write with these.\n\n\nOn 4/9/09 7:27 AM, \"Grzegorz Jaśkiewicz\" <[email protected]> wrote:\n\n> acording to kernel folks, anticipatory scheduler is even better for dbs.\n> Oh well, it probably means everyone has to test it on their own at the\n> end of day.\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 9 Apr 2009 15:50:12 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "Grzegorz Jaskiewicz wrote:\r\n> acording to kernel folks, anticipatory scheduler is even better for dbs.\r\n> Oh well, it probably means everyone has to test it on their own at the\r\n> end of day.\r\n\r\nIn my test case, noop and deadline performed well, deadline being a little\r\nbetter than noop.\r\n\r\nBoth anticipatory and CFQ sucked big time.\r\n\r\nYours,\r\nLaurenz Albe\r\n", "msg_date": "Fri, 10 Apr 2009 08:47:30 +0200", "msg_from": "\"Albe Laurenz *EXTERN*\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "\nOn Apr 10, 2009, at 2:47 AM, Albe Laurenz *EXTERN* wrote:\n\n> Grzegorz Jaskiewicz wrote:\n>> acording to kernel folks, anticipatory scheduler is even better for \n>> dbs.\n>> Oh well, it probably means everyone has to test it on their own at \n>> the\n>> end of day.\n>\n> In my test case, noop and deadline performed well, deadline being a \n> little\n> better than noop.\n>\n> Both anticipatory and CFQ sucked big time.\n>\n\nThis is my experience as well, I posted about playing with the \nscheduler a while ago on -performance, but I can't seem to find it.\n\nIf you have a halfway OK raid controller, CFQ is useless. You can fire \nup something such as pgbench or pgiosim, fire up an iostat and then \nwatch your iops jump high when you flip to noop or deadline and \nplummet on cfq. Try it. it's neat!\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Mon, 13 Apr 2009 10:13:15 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" }, { "msg_contents": "Jeff <[email protected]> wrote:\n \n> If you have a halfway OK raid controller, CFQ is useless. You can\nfire \n> up something such as pgbench or pgiosim, fire up an iostat and then \n\n> watch your iops jump high when you flip to noop or deadline and \n> plummet on cfq.\n \nAn interesting data point, but not, by itself, conclusive. One of the\nnice things about a good scheduler is that it allows multiple writes\nto the OS to be combined into a single write to the controller cache. \nI think that having a large OS cache and the deadline elevator allowed\nus to use what some considered extremely aggressive background writer\nsettings without *any* discernible increase in OS output to the disk. \nThe significant measure is throughput from the application point of\nview; if you see that drop as cfq causes the disk I/O to drop, *then*\nyou've proven your point.\n \nOf course, I'm betting that's what you do see....\n \n-Kevin\n", "msg_date": "Mon, 13 Apr 2009 10:18:16 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux deadline i/o elevator tuning" } ]
[ { "msg_contents": "All,\n\nI was looking at these IOZone results for some NAS hardware and thinking \nabout index scans:\n\nChildren see throughput for 6 readers \t\t= 72270.04 KB/sec\nParent sees throughput for 6 readers \t\t= 72269.06 KB/sec\nMin throughput per process \t\t\t= 11686.53 KB/sec\nMax throughput per process \t\t\t= 12506.65 KB/sec\nAvg throughput per process \t\t\t= 12045.01 KB/sec\nMin xfer \t\t\t\t\t= 3919344.00 KB\n\nChildren see throughput for 6 reverse readers \t= 17313.57 KB/sec\nParent sees throughput for 6 reverse readers \t= 17313.52 KB/sec\nMin throughput per process \t\t\t= 2569.21 KB/sec\nMax throughput per process \t\t\t= 3101.18 KB/sec\nAvg throughput per process \t\t\t= 2885.60 KB/sec\nMin xfer \t\t\t\t\t= 3474840.00 KB\n\nNow, what that says to me is that for this system reverse sequential \nreads are 1/4 the speed of forwards reads. And from my testing \nelsewhere, that seems fairly typical of disk systems in general.\n\nNow, while index scans (for indexes on disk) aren't 100% sequential \nreads, it seems like we should be increasing (substantially) the \nestimated cost of reverse index scans if the index is likely to be on \ndisk. No?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 09 Apr 2009 23:43:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Now, what that says to me is that for this system reverse sequential \n> reads are 1/4 the speed of forwards reads. And from my testing \n> elsewhere, that seems fairly typical of disk systems in general.\n\nWell, that's because filesystems try to lay out files so that logically\nsuccessive sectors are about as far apart as needed to support the\ndisk's maximum transfer rate. If you fetch them in reverse order,\nthen instead of optimizing the rotational latency you find you are\npessimizing it. This has got approximately nothing to do with\nindexscans, either forward or reverse, because then we aren't fetching\nblocks in a pre-optimized order.\n\n> Now, while index scans (for indexes on disk) aren't 100% sequential \n> reads, it seems like we should be increasing (substantially) the \n> estimated cost of reverse index scans if the index is likely to be on \n> disk. No?\n\nAFAICS this is already folded into random_page_cost.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Apr 2009 09:50:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "Tom,\n\n>> Now, while index scans (for indexes on disk) aren't 100% sequential\n>> reads, it seems like we should be increasing (substantially) the\n>> estimated cost of reverse index scans if the index is likely to be on\n>> disk. No?\n>\n> AFAICS this is already folded into random_page_cost.\n\nNot as far as I can tell. It looks to me like the planner is assuming \nthat a forwards index scan and a reverse index scan will have the same \ncost.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 10 Apr 2009 10:07:49 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Not as far as I can tell. 
It looks to me like the planner is assuming \n> that a forwards index scan and a reverse index scan will have the same \n> cost.\n\nRight, because they do. If you think otherwise, demonstrate it.\n(bonnie tests approximating a reverse seqscan are not relevant\nto the performance of indexscans.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 10 Apr 2009 13:19:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "Tom,\n\n> Right, because they do. If you think otherwise, demonstrate it.\n> (bonnie tests approximating a reverse seqscan are not relevant\n> to the performance of indexscans.)\n\nWorking on it. I *think* I've seen this issue in the field, which is \nwhy I brought it up in the first place, but getting a good test case is, \nof course, difficult.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 10 Apr 2009 10:46:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "On Fri, 10 Apr 2009, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n>> Not as far as I can tell. It looks to me like the planner is assuming\n>> that a forwards index scan and a reverse index scan will have the same\n>> cost.\n>\n> Right, because they do. If you think otherwise, demonstrate it.\n\nThey do when the correlation of indexed value versus position in the table \nis low, resulting in random access. However, when the correlation is near \n1, then the index scan approximates to sequential access to disc. In that \ncase, scan direction would be important.\n\nOf course, there's the separate issue that correlation isn't actually that \ngood a measure of the cost of an index scan, but I'm not sure what is \nbetter, and feasible.\n\nMatthew\n\n\n-- \n Our riverbanks and seashores have a beauty all can share, provided\n there's at least one boot, three treadless tyres, a half-eaten pork\n pie, some oil drums, an old felt hat, a lorry-load of tar blocks,\n and a broken bedstead there. -- Flanders and Swann\n", "msg_date": "Tue, 14 Apr 2009 10:39:22 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans? " }, { "msg_contents": "Josh Berkus wrote:\n> Tom,\n>\n>> Right, because they do. If you think otherwise, demonstrate it.\n>> (bonnie tests approximating a reverse seqscan are not relevant\n>> to the performance of indexscans.)\n>\n> Working on it. I *think* I've seen this issue in the field, which is \n> why I brought it up in the first place, but getting a good test case \n> is, of course, difficult.\n>\n>\nI think I may be experiencing this situation now.\n\nThe query\n\n select comment_date\n from user_comments\n where user_comments.uid=1\n order by comment_date desc limit 1\n\n Explain:\n \"Limit (cost=0.00..2699.07 rows=1 width=8) (actual\n time=52848.785..52848.787 rows=1 loops=1)\"\n \" -> Index Scan Backward using idx_user_comments_comment_date on\n user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual\n time=52848.781..52848.781 rows=1 loops=1)\"\n \" Filter: (uid = 1)\"\n \"Total runtime: 52848.840 ms\"\n\ntakes 10's of seconds to complete (52 sec last run). 
However\n\n select comment_date\n from user_comments\n where user_comments.uid=1\n order by comment_date limit 1\n\n Explain:\n \"Limit (cost=0.00..2699.07 rows=1 width=8) (actual\n time=70.402..70.403 rows=1 loops=1)\"\n \" -> Index Scan using idx_user_comments_comment_date on\n user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual\n time=70.398..70.398 rows=1 loops=1)\"\n \" Filter: (uid = 1)\"\n \"Total runtime: 70.453 ms\"\n\ntakes well under 1 sec.\n\n\nreply_date is a timestamp with time zone and has the index\n\n CREATE INDEX idx_user_comments_comment_date\n ON user_comments\n USING btree\n (comment_date);\n\n\nI don't understand why it is so much slower to scan it reverse\n\nIt's a fairly big table. About 4.4 million rows, 888MB. That index is \n96MB. I tried dropping and recreating the index, but it doesn't seem to \nhave helped any.\n\n\nCan I create a reverse index on the dates so it can do a forward scan of \nthe reverse index?\n\n\n\n\n\n\n\nJosh Berkus wrote:\nTom,\n \n\nRight, because they do.  If you think\notherwise, demonstrate it.\n \n(bonnie tests approximating a reverse seqscan are not relevant\n \nto the performance of indexscans.)\n \n\n\nWorking on it.  I *think* I've seen this issue in the field, which is\nwhy I brought it up in the first place, but getting a good test case\nis, of course, difficult.\n \n\n\n\nI think I may be experiencing this situation now.\n\nThe query\nselect comment_date \n    from user_comments \n    where user_comments.uid=1\n    order by comment_date desc limit 1\n\nExplain:\n\"Limit  (cost=0.00..2699.07 rows=1 width=8) (actual\ntime=52848.785..52848.787 rows=1 loops=1)\"\n\"  ->  Index Scan Backward using idx_user_comments_comment_date on\nuser_comments  (cost=0.00..5789515.40 rows=2145 width=8) (actual\ntime=52848.781..52848.781 rows=1 loops=1)\"\n\"        Filter: (uid = 1)\"\n\"Total runtime: 52848.840 ms\"\n\n\ntakes 10's of seconds to complete (52 sec last run). However\nselect comment_date \n    from user_comments \n    where user_comments.uid=1\n    order by comment_date limit 1\n\nExplain:\n\"Limit  (cost=0.00..2699.07 rows=1 width=8) (actual time=70.402..70.403\nrows=1 loops=1)\"\n\"  ->  Index Scan using idx_user_comments_comment_date on\nuser_comments  (cost=0.00..5789515.40 rows=2145 width=8) (actual\ntime=70.398..70.398 rows=1 loops=1)\"\n\"        Filter: (uid = 1)\"\n\"Total runtime: 70.453 ms\"\n\ntakes well under 1 sec.\n\n\nreply_date is a timestamp with time zone and has the index\nCREATE INDEX idx_user_comments_comment_date\n  ON user_comments\n  USING btree\n  (comment_date);\n\n\nI don't understand why it is so much slower to scan it reverse \n\nIt's a fairly big table. About 4.4 million rows, 888MB. That index is\n96MB. I tried dropping and recreating the index, but it doesn't seem to\nhave helped any.\n\n\nCan I create a reverse index on the dates so it can do a forward scan\nof the reverse index?", "msg_date": "Wed, 15 Apr 2009 23:02:29 -0700", "msg_from": "Lists <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "create index foobar on table(row desc);\n", "msg_date": "Thu, 16 Apr 2009 09:11:08 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "On Thu, Apr 16, 2009 at 2:02 AM, Lists <[email protected]> wrote:\n>\n> Right, because they do.  
If you think otherwise, demonstrate it.\n> (bonnie tests approximating a reverse seqscan are not relevant\n> to the performance of indexscans.)\n>\n> Working on it.  I *think* I've seen this issue in the field, which is why I\n> brought it up in the first place, but getting a good test case is, of\n> course, difficult.\n>\n>\n> I think I may be experiencing this situation now.\n>\n> The query\n>\n> select comment_date\n>     from user_comments\n>     where user_comments.uid=1\n>     order by comment_date desc limit 1\n\ntry this:\ncreate index comment_data_uid_idx on user_comments(uid, comment_date);\n\nselect * from user_comments where (uid, comment_date) < (1, high_date)\n order by uid desc, comment_date desc limit 1;\n\nselect * from user_comments where (uid, comment_date) > (1, low_date)\n order by uid, comment_date limit 1;\n\nlow_date and high_date are arbitrarily chosen to be lower and higher\nthan the lowest and highest dates found in the table, respectively.\nYou will be amazed how much faster this is than what you are doing\nnow. You will not need to make an index for the 'desc' case.\n\nfor ranges, (give me some comments for user x from now back to particular time:\nset enable_seqscan = false;\nselect * from user_comments where (uid, comment_date)\n between(1, time_of_interest) and (1, high_date)\n order by uid desc, comment_date desc;\n\nenable_seqscan is required because the server will suddenly and\nspectacularly switch to sequential scans because it can't use the non\nleftmost portion of the index in range queries (this only mainly\nmatters when the left-most field is inselective and the comparison is\nequal).\n\nmerlin\n", "msg_date": "Thu, 16 Apr 2009 08:06:13 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Thu, Apr 16, 2009 at 2:02 AM, Lists <[email protected]> wrote:\n>> select comment_date\n>> from user_comments\n>> where user_comments.uid=1\n>> order by comment_date desc limit 1\n\n> try this:\n> create index comment_data_uid_idx on user_comments(uid, comment_date);\n\n> select * from user_comments where (uid, comment_date) < (1, high_date)\n> order by uid desc, comment_date desc limit 1;\n\nYou don't really need to complicate your queries like that. Having the\ntwo-column index will suffice to make the given query work fine, at\nleast in reasonably modern PG versions (since 8.1 I think).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 11:36:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "Lists <[email protected]> writes:\n> The query\n\n> select comment_date\n> from user_comments\n> where user_comments.uid=1\n> order by comment_date desc limit 1\n\n> Explain:\n> \"Limit (cost=0.00..2699.07 rows=1 width=8) (actual\n> time=52848.785..52848.787 rows=1 loops=1)\"\n> \" -> Index Scan Backward using idx_user_comments_comment_date on\n> user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual\n> time=52848.781..52848.781 rows=1 loops=1)\"\n> \" Filter: (uid = 1)\"\n> \"Total runtime: 52848.840 ms\"\n\n> takes 10's of seconds to complete (52 sec last run). 
However\n\n> select comment_date\n> from user_comments\n> where user_comments.uid=1\n> order by comment_date limit 1\n\n> Explain:\n> \"Limit (cost=0.00..2699.07 rows=1 width=8) (actual\n> time=70.402..70.403 rows=1 loops=1)\"\n> \" -> Index Scan using idx_user_comments_comment_date on\n> user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual\n> time=70.398..70.398 rows=1 loops=1)\"\n> \" Filter: (uid = 1)\"\n> \"Total runtime: 70.453 ms\"\n\n> takes well under 1 sec.\n\nAFAICS this is pure chance --- it is based on when we happen to hit the\nfirst row with uid = 1 while scanning in forward or reverse comment_date\norder. Unless you have evidence that the number of rows skipped over\nis similar in both cases, there is no reason to suppose that this\nexample bears on Josh's concern.\n\nAs noted by Merlin, if you're willing to create another index to help\nthis type of query, then a two-column index on (uid, comment_date) would\nbe ideal.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 11:42:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "Tom Lane wrote:\n> Lists <[email protected]> writes:\n> \n>> The query\n>> \n>\n> \n>> select comment_date\n>> from user_comments\n>> where user_comments.uid=1\n>> order by comment_date desc limit 1\n>> \n>\n> \n>> Explain:\n>> \"Limit (cost=0.00..2699.07 rows=1 width=8) (actual\n>> time=52848.785..52848.787 rows=1 loops=1)\"\n>> \" -> Index Scan Backward using idx_user_comments_comment_date on\n>> user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual\n>> time=52848.781..52848.781 rows=1 loops=1)\"\n>> \" Filter: (uid = 1)\"\n>> \"Total runtime: 52848.840 ms\"\n>> \n>\n> \n>> takes 10's of seconds to complete (52 sec last run). However\n>> \n>\n> \n>> select comment_date\n>> from user_comments\n>> where user_comments.uid=1\n>> order by comment_date limit 1\n>> \n>\n> \n>> Explain:\n>> \"Limit (cost=0.00..2699.07 rows=1 width=8) (actual\n>> time=70.402..70.403 rows=1 loops=1)\"\n>> \" -> Index Scan using idx_user_comments_comment_date on\n>> user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual\n>> time=70.398..70.398 rows=1 loops=1)\"\n>> \" Filter: (uid = 1)\"\n>> \"Total runtime: 70.453 ms\"\n>> \n>\n> \n>> takes well under 1 sec.\n>> \n>\n> AFAICS this is pure chance --- it is based on when we happen to hit the\n> first row with uid = 1 while scanning in forward or reverse comment_date\n> order. Unless you have evidence that the number of rows skipped over\n> is similar in both cases, there is no reason to suppose that this\n> example bears on Josh's concern.\n>\n> As noted by Merlin, if you're willing to create another index to help\n> this type of query, then a two-column index on (uid, comment_date) would\n> be ideal.\n>\n> \t\t\tregards, tom lane\n> \n\nThank you Tom and Merlin (and Grzegorz for the answer to my other \nquestion I no longer need). The composite index seems to do the trick. 
\nThe reverse index scan is now taking about the same time.\n\nRows with uid=1 should be spread throughout the table but there should \nbe a larger amount earlier in the table (based on insert order).\n\nI already had a separate index on uid\n\n CREATE INDEX idx_user_comments_uid\n ON user_comments\n USING btree\n (uid);\n\nUnder the circumstances, shouldn't a bitmap of those 2 indexes be far \nfaster than using just the date index (compared to the old plan, not the \nnew composite index). Why would the planner not choose that plan?\n\n\n\n\n\n\n\nTom Lane wrote:\n\nLists <[email protected]> writes:\n \n\nThe query\n \n\n\n \n\n select comment_date\n from user_comments\n where user_comments.uid=1\n order by comment_date desc limit 1\n \n\n\n \n\n Explain:\n \"Limit (cost=0.00..2699.07 rows=1 width=8) (actual\n time=52848.785..52848.787 rows=1 loops=1)\"\n \" -> Index Scan Backward using idx_user_comments_comment_date on\n user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual\n time=52848.781..52848.781 rows=1 loops=1)\"\n \" Filter: (uid = 1)\"\n \"Total runtime: 52848.840 ms\"\n \n\n\n \n\ntakes 10's of seconds to complete (52 sec last run). However\n \n\n\n \n\n select comment_date\n from user_comments\n where user_comments.uid=1\n order by comment_date limit 1\n \n\n\n \n\n Explain:\n \"Limit (cost=0.00..2699.07 rows=1 width=8) (actual\n time=70.402..70.403 rows=1 loops=1)\"\n \" -> Index Scan using idx_user_comments_comment_date on\n user_comments (cost=0.00..5789515.40 rows=2145 width=8) (actual\n time=70.398..70.398 rows=1 loops=1)\"\n \" Filter: (uid = 1)\"\n \"Total runtime: 70.453 ms\"\n \n\n\n \n\ntakes well under 1 sec.\n \n\n\nAFAICS this is pure chance --- it is based on when we happen to hit the\nfirst row with uid = 1 while scanning in forward or reverse comment_date\norder. Unless you have evidence that the number of rows skipped over\nis similar in both cases, there is no reason to suppose that this\nexample bears on Josh's concern.\n\nAs noted by Merlin, if you're willing to create another index to help\nthis type of query, then a two-column index on (uid, comment_date) would\nbe ideal.\n\n\t\t\tregards, tom lane\n \n\n\nThank you Tom and Merlin (and Grzegorz for the answer to my other\nquestion I no longer need). The composite index seems to do the trick.\nThe reverse index scan is now taking about the same time.\n\nRows with uid=1 should be spread throughout the table but there should\nbe a larger amount earlier in the table (based on insert order).\n\nI already had a separate index on uid \nCREATE INDEX idx_user_comments_uid\n  ON user_comments\n  USING btree\n  (uid);\n\nUnder the circumstances, shouldn't a bitmap of those 2 indexes be far\nfaster than using just the date index (compared to the old plan, not\nthe new composite index). Why would the planner not choose that plan?", "msg_date": "Thu, 16 Apr 2009 08:52:32 -0700", "msg_from": "Lists <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" }, { "msg_contents": "Lists <[email protected]> writes:\n> I already had a separate index on uid\n\n> CREATE INDEX idx_user_comments_uid\n> ON user_comments\n> USING btree\n> (uid);\n\n> Under the circumstances, shouldn't a bitmap of those 2 indexes be far \n> faster than using just the date index (compared to the old plan, not the \n> new composite index). 
Why would the planner not choose that plan?\n\nIt wouldn't produce sorted output; you'd have to read all the rows with\nuid 1 and then sort them to find the lowest [highest] comment_date.\nI'm sure the planner did consider that, but guessed that the other way\nwould win on average. The fact that you have lots of rows with uid=1\nwould tend to push its cost estimates in that direction. Unfortunately\nit doesn't have any clue that the rows with uid=1 are concentrated in\nolder comment_dates, making the approach a loser for the highest-date\nflavor of the problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 12:01:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shouldn't the planner have a higher cost for reverse index scans?" } ]
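A footnote on how this thread resolved: the two-column index suggested by Merlin, which Tom notes has worked since around 8.1, is sufficient on its own because the planner can walk such an index backwards for the DESC case. The sketch below simply spells that fix out using the table and column names from the thread; the index name itself is only illustrative.

    -- one index serves both the filter on uid and the ordering on comment_date
    CREATE INDEX idx_user_comments_uid_comment_date
        ON user_comments (uid, comment_date);

    -- newest comment for a user: backward scan of the index, no sort needed
    SELECT comment_date
      FROM user_comments
     WHERE uid = 1
     ORDER BY comment_date DESC
     LIMIT 1;

    -- oldest comment for the same user: forward scan of the same index
    SELECT comment_date
      FROM user_comments
     WHERE uid = 1
     ORDER BY comment_date
     LIMIT 1;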
[ { "msg_contents": "Hi all,\n\nWhat are your experiences with Postgres 8.x in production use on Windows \nServer 2003/2008? Are there any limitations, trade-offs or quirks?\n\nMy client is accustomed to Windows Server environment, but it seems hard \nto google good information about these types of installations.\n\nRegards,\nOgnjen\n", "msg_date": "Fri, 10 Apr 2009 11:47:16 +0200", "msg_from": "Ognjen Blagojevic <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres 8.x on Windows Server in production" }, { "msg_contents": "Ognjen,\n\n> What are your experiences with Postgres 8.x in production use on Windows\n> Server 2003/2008? Are there any limitations, trade-offs or quirks?\n\nFirst of all, you need to know that the first *two* digits of a \nPostgreSQL version are major version numbers. So 8.3 is not the same \nPostgres which 8.1 is.\n\nHere's the top level summary:\n\nPostgreSQL on Windows, compared to Linux, in general:\n\t-- is a bit slower\n\t-- is not as reliable, because the underlying FS and OS are not as \nreliable*\n\t-- some of the tools for Postgres which are available on Linux do not \nwork on Windows (especially performance tools)\n\t-- is less secure, because the OS is less secure\n\nYet 1000's of users are running PostgreSQL on Windows in production. It \nreally depends on what kind of application you're running, and what its \ndemands are. For a CMS or a contact manager or a personnel directory? \nNo problem. For a central payroll system for 18,000 employees? I'd \nuse Linux.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Fri, 10 Apr 2009 11:07:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "On Fri, Apr 10, 2009 at 7:07 PM, Josh Berkus <[email protected]> wrote:\n> Yet 1000's of users are running PostgreSQL on Windows in production.  It\n> really depends on what kind of application you're running, and what its\n> demands are.  For a CMS or a contact manager or a personnel directory? No\n> problem.  For a central payroll system for 18,000 employees?    I'd use\n> Linux.\n\nConfirmed from my experience too.\n\nOn top of that, I would like to add - that using it on windows first,\nmight be a good step ahead. And installing linux on server isn't so\nhard anymore, and shouldn't be a problem, unlike 8 years ago :)\n\nGive it a try, and please tell us what sort of application you want to\nput on it.\n\n-- \nGJ\n", "msg_date": "Fri, 10 Apr 2009 21:39:56 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "We use Postgres 8.x in production on Windows Server 2003. We have not done a\ndirect head-to-head comparison against any *nix environment, so I can't\nreally compare them, but I can still give a few comments.\n\nFirst of all, it seems that some of the popular file systems in *nix are\nmore robust at preventing disk fragmentation than NTFS is. Because of this I\ndefinitely recommend have some defragging solution. What we've settled on in\nO&O (that's the company name) Defrag Server version. Their software has some\nnice features related to defragging in the background while monitoring\nsystem usage so an to impact performance minimally.\n\nSecond, one big difficulty with running on Windows is that most of the\nPostgres expertise seems is around *nix environments. 
This means that when\nyou do need to investigate a performance issue it can be more difficult to\nget direct advice. For example, perusing this mailing list will show lot's\nof tips suggesting running various tools to show io performance, etc. Well,\non Windows the toolset is different.\n\n\nAll in all we've been happy enough with Windows. Certainly we've never\nconsidered migrating to *nix because of difficulties with it.\n\n\n\n--Rainer\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Ognjen Blagojevic\n> Sent: Friday, April 10, 2009 6:47 PM\n> To: [email protected]\n> Subject: [PERFORM] Postgres 8.x on Windows Server in production\n> \n> Hi all,\n> \n> What are your experiences with Postgres 8.x in production use on\n> Windows\n> Server 2003/2008? Are there any limitations, trade-offs or quirks?\n> \n> My client is accustomed to Windows Server environment, but it seems\n> hard\n> to google good information about these types of installations.\n> \n> Regards,\n> Ognjen\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 13 Apr 2009 08:13:46 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "On Sun, Apr 12, 2009 at 5:13 PM, Rainer Mager <[email protected]> wrote:\n> We use Postgres 8.x in production on Windows Server 2003. We have not done a\n> direct head-to-head comparison against any *nix environment, so I can't\n> really compare them, but I can still give a few comments.\n\nJust wondering, what version are you actually running? Big\ndifferences from 8.0, 8.1, 8.2, 8.3 and soon 8.4. For people taking\nyour advice on running on windows, it helps them make a decision on\nwhether or not to upgrade.\n\n> First of all, it seems that some of the popular file systems in *nix are\n> more robust at preventing disk fragmentation than NTFS is. Because of this I\n> definitely recommend have some defragging solution. What we've settled on in\n\nLinux file systems still fragment, they just don't tend to fragment as\nmuch. As the drive gets closer to being full fragmentation will\nbecome more of a problem.\n", "msg_date": "Sun, 12 Apr 2009 17:41:04 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "We're running 8.3, but when we started this server about 2 years ago it was\nan earlier 8.x, I don't remember which.\n\n--Rainer\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Scott Marlowe\n> Sent: Monday, April 13, 2009 8:41 AM\n> To: Rainer Mager\n> Cc: Ognjen Blagojevic; [email protected]\n> Subject: Re: [PERFORM] Postgres 8.x on Windows Server in production\n> \n> On Sun, Apr 12, 2009 at 5:13 PM, Rainer Mager <[email protected]>\n> wrote:\n> > We use Postgres 8.x in production on Windows Server 2003. We have not\n> done a\n> > direct head-to-head comparison against any *nix environment, so I\n> can't\n> > really compare them, but I can still give a few comments.\n> \n> Just wondering, what version are you actually running? Big\n> differences from 8.0, 8.1, 8.2, 8.3 and soon 8.4. 
For people taking\n> your advice on running on windows, it helps them make a decision on\n> whether or not to upgrade.\n> \n> > First of all, it seems that some of the popular file systems in *nix\n> are\n> > more robust at preventing disk fragmentation than NTFS is. Because of\n> this I\n> > definitely recommend have some defragging solution. What we've\n> settled on in\n> \n> Linux file systems still fragment, they just don't tend to fragment as\n> much. As the drive gets closer to being full fragmentation will\n> become more of a problem.\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 13 Apr 2009 08:49:24 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "On Sun, Apr 12, 2009 at 5:49 PM, Rainer Mager <[email protected]> wrote:\n> We're running 8.3, but when we started this server about 2 years ago it was\n> an earlier 8.x, I don't remember which.\n\nCool. PostgreSQL is one of the few projects where I've always\nrecommended upgrading and keeping on the latest major version as soon\nas possible after it comes out. This stands in stark contrast to\napache 2.0, which was out for over two years before it was worth the\neffort to migrate to. The improvements just weren't worth the effort\nto upgrade. I've seen enough performance and capability in each major\nversion of pgsql to make it worth the upgrade since 7.0 came out. I\nthink we've skipped one or two short releases, like 7.1 or 8.2, but\nin general even those represented useful gains over previous versions.\n", "msg_date": "Sun, 12 Apr 2009 17:58:00 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "Hi all,\n\nFirst, thank you all for your answers.\n\n\nGrzegorz Jaśkiewicz wrote:\n> Give it a try, and please tell us what sort of application you want to\n> put on it.\n\nIt is a student database for the college which is a client of ours. The \nsize of the database should be around 1GB, half being binary data \n(images). Not more than 100 users at the time will be working with the \napplication.\n\nI don't worry about the performance, but more about the maintenance \nunder Windows. What file system to use? How to schedule vacuuming and \nbackup? Are there any windows services that should be turned off? Those \nquestions come to my mind when I consider new OS for the RDBMS.\n\nRegards,\nOgnjen\n\n\n", "msg_date": "Mon, 13 Apr 2009 15:23:11 +0200", "msg_from": "Ognjen Blagojevic <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "You'll almost certainly want to use NTFS.\n\nI suspect you'll want to set the NTFS Allocation Unit Size to 8192 or\nsome integer multiple of 8192, since I believe that is the pg page\nsize. XP format dialog will not allow you to set it above 4096, but\nthe command line format utility will. I do remember setting it as high\nas 64k for SQL Server on Windows Server 2003 (SQL Server does IO in\n8-page units called extents)\nSomeone please correct me if I have that wrong.\n\nDo not allow any indexing service activity on the data or transaction\nlog volumes. 
If this is a dedicated database server you may as well\nturn indexing service off.\n\nDon't enable compression on the data or transaction log volumes either.\n\nPay attention to Automatic Updates - you likely don't want your\ndatabase server to restart every 4th Wednesday morning or so.\n\nHope this helps,\nJustin\n2009/4/13 Ognjen Blagojevic <[email protected]>:\n> Hi all,\n>\n> First, thank you all for your answers.\n>\n>\n> Grzegorz Jaśkiewicz wrote:\n>>\n>> Give it a try, and please tell us what sort of application you want to\n>> put on it.\n>\n> It is a student database for the college which is a client of ours. The size\n> of the database should be around 1GB, half being binary data (images). Not\n> more than 100 users at the time will be working with the application.\n>\n> I don't worry about the performance, but more about the maintenance under\n> Windows. What file system to use? How to schedule vacuuming and backup? Are\n> there any windows services that should be turned off? Those questions come\n> to my mind when I consider new OS for the RDBMS.\n>\n> Regards,\n> Ognjen\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 13 Apr 2009 09:42:59 -0400", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "2009/4/13 Ognjen Blagojevic <[email protected]>:\n> Hi all,\n>\n> First, thank you all for your answers.\n>\n>\n> Grzegorz Jaśkiewicz wrote:\n>>\n>> Give it a try, and please tell us what sort of application you want to\n>> put on it.\n>\n> It is a student database for the college which is a client of ours. The size\n> of the database should be around 1GB, half being binary data (images). Not\n> more than 100 users at the time will be working with the application.\n>\n> I don't worry about the performance, but more about the maintenance under\n> Windows. What file system to use? How to schedule vacuuming and backup? Are\n> there any windows services that should be turned off? Those questions come\n> to my mind when I consider new OS for the RDBMS.\n\nNTFS, use autovacuum, backup nightly (?), Turn off anti-virus software.\n", "msg_date": "Mon, 13 Apr 2009 10:44:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" }, { "msg_contents": "2009/4/13 Ognjen Blagojevic <[email protected]>:\n> It is a student database for the college which is a client of ours. The size\n> of the database should be around 1GB, half being binary data (images). Not\n> more than 100 users at the time will be working with the application.\n\nnice, if you want to store pics, I suggest filesystem - with some nice\nschema to do it.\nThe way I do it, is using md5 of pics/other data, and three level of\ndirectories to lead you to the right file on disc.\nIt is dead easy to implement, and much faster than bytea in DB. (at\nleast on linux, I wouldn't hold my breath so much for ntfs, but you\nnever know).\n\n100 connections is quite a bit, so you need a bit of memory. It also\ndepends on the actual queries, and schema.\nIf you have any questions about that, and performance - we are here to help.\n\n> I don't worry about the performance, but more about the maintenance under\n> Windows. What file system to use? How to schedule vacuuming and backup? 
Are\n> there any windows services that should be turned off? Those questions come\n> to my mind when I consider new OS for the RDBMS.\n\njust like scott said,\nMake sure to either turn off anti-virus, or tell it to stay away from\npostgresql. (which leaves your machine quite vulnerable, so make sure\nto secure it well).\n\nhth\n\n-- \nGJ\n", "msg_date": "Mon, 13 Apr 2009 19:47:18 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 8.x on Windows Server in production" } ]
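One small addition to the Windows checklist above: since the advice assumes autovacuum is doing the routine maintenance, it is worth confirming from psql that it is actually enabled on the server. A minimal check, written against 8.3-era setting names (older releases spell some of these differently):

    SHOW autovacuum;

    SELECT name, setting
      FROM pg_settings
     WHERE name LIKE 'autovacuum%'
        OR name = 'track_counts';  -- autovacuum depends on row-level statistics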
[ { "msg_contents": "I sent this out on 4/7 and either missed a response or didn't get one. \nIf this is the wrong forum, I'd appreciate a redirect.\n\nI know that EXPLAIN will show the query plan. I know that pg_locks will\nshow the locks currently held for activity transactions. Is there a way\nto determine what locks a query will hold when it is executed?\n\nThanks,\nBrian\n", "msg_date": "Fri, 10 Apr 2009 12:05:11 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "determining the locks that will be held by a query" }, { "msg_contents": "Brian Cox <[email protected]> wrote: \n> I know that EXPLAIN will show the query plan. I know that pg_locks\n> will show the locks currently held for activity transactions. Is\n> there a way to determine what locks a query will hold when it is\n> executed?\n \nOnly to read the docs regarding locking, and to desk-check your query,\nat least as far as I know.\n \nKeep in mind that some statements will only obtain locks if they find\nrows that are affected, which might vary from one run to the next.\n \n-Kevin\n", "msg_date": "Fri, 10 Apr 2009 15:02:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: determining the locks that will be held by a\n\tquery" } ]
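Since nothing in the system will report a statement's locks ahead of time, one practical workaround is to run the statement inside a transaction you intend to throw away and look at what your own backend is holding before rolling back. This is only a sketch: the table name below is a placeholder, and ordinary row-level locks live in the tuples themselves, so they will not appear in pg_locks unless another backend is waiting on them.

    BEGIN;

    -- the statement whose locking behaviour you want to observe
    DELETE FROM t WHERE id = 1;

    -- still inside the transaction: list the locks held by this backend
    SELECT l.locktype, c.relname, l.mode, l.granted
      FROM pg_locks l
      LEFT JOIN pg_class c ON c.oid = l.relation
     WHERE l.pid = pg_backend_pid();

    ROLLBACK;  -- discards the test change and releases the locks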
[ { "msg_contents": "\nHi chaps,\n\nIs anyone using 2.6.26 with postgres? I was thinking about shifting my home test machine up from 2.6.18, however I recall reading a post somewhere a while back about the scheduler in more recent versions being a bit cranky...\n\nI just thought I'd ask before I go ahead, I don't have too much time for testing etc at the moment.\n\nthanks\nGlyn\n\n\n \n", "msg_date": "Fri, 10 Apr 2009 20:06:12 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "2.6.26 kernel and PostgreSQL" }, { "msg_contents": "Glyn Astill <[email protected]> wrote: \n> I was thinking about shifting my home test machine up from 2.6.18,\n> however I recall reading a post somewhere a while back about the\n> scheduler in more recent versions being a bit cranky...\n \nA recent post on the topic:\n \nhttp://archives.postgresql.org/pgsql-performance/2009-04/msg00098.php\n \n-Kevin\n", "msg_date": "Fri, 10 Apr 2009 15:17:12 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2.6.26 kernel and PostgreSQL" }, { "msg_contents": "\n\n\n\n--- On Fri, 10/4/09, Kevin Grittner <[email protected]> wrote:\n\n> Glyn Astill <[email protected]> wrote: \n> > I was thinking about shifting my home test machine up\n> from 2.6.18,\n> > however I recall reading a post somewhere a while back\n> about the\n> > scheduler in more recent versions being a bit\n> cranky...\n> \n> A recent post on the topic:\n> \n> http://archives.postgresql.org/pgsql-performance/2009-04/msg00098.php\n> \n> -Kevin\n> \n\nSo it was only for connections over a unix socket, but wow; it's still an ongoing issue. Nice to see somebody is on top of it though.\n\n\n\n \n", "msg_date": "Fri, 10 Apr 2009 22:00:06 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2.6.26 kernel and PostgreSQL" }, { "msg_contents": "On Fri, 10 Apr 2009, Glyn Astill wrote:\n\n> So it was only for connections over a unix socket, but wow; it's still \n> an ongoing issue.\n\nThe problem is actually with pgbench when running on a UNIX socket, not \nwith the PostgreSQL server itself. On my tests, the actual database \nserver itself seems to work just as well or better on later kernels that \nuse the new scheduler than the older scheduler did.\n\nBasically, if all these apply:\n\n1) You are running pgbench\n2) You're running a quick statement, such as a simple select, that gives \n>10000TPS or so\n3) Connecting via UNIX socket\n4) Clients > around 10\n5) Linux kernel >=2.6.23 (which means CFS as the scheduler)\n6) The CFS features are at their defaults (SCHED_FEAT_SYNC_WAKEUPS is on)\n\nYou'll get weird results. 
Change any of those and things are still fine.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 13 Apr 2009 04:25:46 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2.6.26 kernel and PostgreSQL" }, { "msg_contents": "\n\n\n\n--- On Mon, 13/4/09, Greg Smith <[email protected]> wrote:\n\n> From: Greg Smith <[email protected]>\n> Subject: Re: [PERFORM] 2.6.26 kernel and PostgreSQL\n> To: \"Glyn Astill\" <[email protected]>\n> Cc: [email protected], \"Kevin Grittner\" <[email protected]>\n> Date: Monday, 13 April, 2009, 9:25 AM\n> On Fri, 10 Apr 2009, Glyn Astill wrote:\n> \n> > So it was only for connections over a unix socket, but\n> wow; it's still an ongoing issue.\n> \n> The problem is actually with pgbench when running on a UNIX\n> socket, not with the PostgreSQL server itself. On my tests,\n> the actual database server itself seems to work just as well\n> or better on later kernels that use the new scheduler than\n> the older scheduler did.\n> \n> Basically, if all these apply:\n> \n> 1) You are running pgbench\n> 2) You're running a quick statement, such as a simple\n> select, that gives \n> > 10000TPS or so\n> 3) Connecting via UNIX socket\n> 4) Clients > around 10\n> 5) Linux kernel >=2.6.23 (which means CFS as the\n> scheduler)\n> 6) The CFS features are at their defaults\n> (SCHED_FEAT_SYNC_WAKEUPS is on)\n> \n> You'll get weird results. Change any of those and\n> things are still fine.\n> \n\nAce, I'll upgrade today then. Thanks Greg\n\n\n \n", "msg_date": "Mon, 13 Apr 2009 11:49:53 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2.6.26 kernel and PostgreSQL" } ]
[ { "msg_contents": "Hi\n\nI've been doing some testing for the Bacula project, which uses\nPostgreSQL as one of the databases in which it stores backup catalogs.\n\nInsert times are critical in this environment, as the app may insert\nmillions of records a day.\n\nI've been evaluating a schema change for Bacula that takes a field\nthat's currently stored as a gruesome-to-work-with base64-encoded\nrepresentation of a binary blob, and expands it into a set of integer\nfields that can be searched, indexed, etc.\n\nThe table size of the expanded form is marginally smaller than for the\nbase64-encoded string version. However, INSERT times are *CONSIDERABLY*\ngreater for the version with more fields. It takes 1011 seconds to\ninsert the base64 version, vs 1290 seconds for the expanded-fields\nversion. That's a difference of 279 seconds, or 27%.\n\nDespite that, the final table sizes are the same.\n\nThe SQL dump for the base64 version is 1734MB and the expanded one is\n2189MB, about a 25% increase. Given that the final table sizes are the\nsame, is the slowdown likely to just be the cost of parsing the extra\nSQL, converting the textual representations of the numbers, etc?\n\nIf I use tab-separated input and COPY, the original-format file is\n1300MB and the expanded-structure format is 1618MB. The performance hit\non COPY-based insert is not as bad, at 161s vs 182s (13%), but still\nquite significant.\n\nAny ideas about what I might be able to do to improve the efficiency of\ninserting records with many integer fields?\n\n\nIn case it's of interest, the base64 and expanded schema are:\n\n\nCREATE TABLE file (\n fileid bigint NOT NULL,\n fileindex integer DEFAULT 0 NOT NULL,\n jobid integer NOT NULL,\n pathid integer NOT NULL,\n filenameid integer NOT NULL,\n markid integer DEFAULT 0 NOT NULL,\n lstat text NOT NULL,\n md5 text NOT NULL\n);\n\n\n\nCREATE TABLE file (\n fileid bigint,\n fileindex integer,\n jobid integer,\n pathid integer,\n filenameid integer,\n markid integer,\n st_dev integer,\n st_ino integer,\n st_mod integer,\n st_nlink integer,\n st_uid integer,\n st_gid integer,\n st_rdev bigint,\n st_size integer,\n st_blksize integer,\n st_blocks integer,\n st_atime integer,\n st_mtime integer,\n st_ctime integer,\n linkfi integer,\n md5 text\n);\n\n\n( Yes, those are the fields of a `struct lstat' ).\n\n--\nCraig Ringer\n\n", "msg_date": "Tue, 14 Apr 2009 15:54:40 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT times - same storage space but more fields -> much slower\n\tinserts" }, { "msg_contents": "Craig,\n\n* Craig Ringer ([email protected]) wrote:\n> I've been doing some testing for the Bacula project, which uses\n> PostgreSQL as one of the databases in which it stores backup catalogs.\n\nWe also use Bacula with a PostgreSQL backend.\n\n> I've been evaluating a schema change for Bacula that takes a field\n> that's currently stored as a gruesome-to-work-with base64-encoded\n> representation of a binary blob, and expands it into a set of integer\n> fields that can be searched, indexed, etc.\n\nThis would be extremely nice.\n\n> The table size of the expanded form is marginally smaller than for the\n> base64-encoded string version. However, INSERT times are *CONSIDERABLY*\n> greater for the version with more fields. It takes 1011 seconds to\n> insert the base64 version, vs 1290 seconds for the expanded-fields\n> version. 
That's a difference of 279 seconds, or 27%.\n> \n> Despite that, the final table sizes are the same.\n> \n> If I use tab-separated input and COPY, the original-format file is\n> 1300MB and the expanded-structure format is 1618MB. The performance hit\n> on COPY-based insert is not as bad, at 161s vs 182s (13%), but still\n> quite significant.\n> \n> Any ideas about what I might be able to do to improve the efficiency of\n> inserting records with many integer fields?\n\nBacula should be using COPY for the batch data loads, so hopefully won't\nsuffer too much from having the fields split out. I think it would be\ninteresting to try doing PQexecPrepared with binary-format data instead\nof using COPY though. I'd be happy to help you implement a test setup\nfor doing that, if you'd like.\n\n\t\tThanks,\n\n\t\t\tStephen", "msg_date": "Tue, 14 Apr 2009 11:56:14 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT times - same storage space but more fields ->\n\tmuch slower inserts" }, { "msg_contents": "On Tue, 14 Apr 2009, Stephen Frost wrote:\n> Bacula should be using COPY for the batch data loads, so hopefully won't\n> suffer too much from having the fields split out. I think it would be\n> interesting to try doing PQexecPrepared with binary-format data instead\n> of using COPY though. I'd be happy to help you implement a test setup\n> for doing that, if you'd like.\n\nYou can always do binary-format COPY.\n\nMatthew\n\n-- \n An ant doesn't have a lot of processing power available to it. I'm not trying\n to be speciesist - I wouldn't want to detract you from such a wonderful\n creature, but, well, there isn't a lot there, is there?\n -- Computer Science Lecturer\n", "msg_date": "Tue, 14 Apr 2009 17:01:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT times - same storage space but more fields ->\n\tmuch slower inserts" }, { "msg_contents": "* Matthew Wakeling ([email protected]) wrote:\n> On Tue, 14 Apr 2009, Stephen Frost wrote:\n>> Bacula should be using COPY for the batch data loads, so hopefully won't\n>> suffer too much from having the fields split out. I think it would be\n>> interesting to try doing PQexecPrepared with binary-format data instead\n>> of using COPY though. I'd be happy to help you implement a test setup\n>> for doing that, if you'd like.\n>\n> You can always do binary-format COPY.\n\nI've never played with binary-format COPY actually. I'd be happy to\nhelp test that too though.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 14 Apr 2009 12:15:34 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT times - same storage space but more fields ->\n\tmuch slower inserts" }, { "msg_contents": "Stephen Frost wrote:\n> * Matthew Wakeling ([email protected]) wrote:\n>> On Tue, 14 Apr 2009, Stephen Frost wrote:\n>>> Bacula should be using COPY for the batch data loads, so hopefully won't\n>>> suffer too much from having the fields split out. I think it would be\n>>> interesting to try doing PQexecPrepared with binary-format data instead\n>>> of using COPY though. I'd be happy to help you implement a test setup\n>>> for doing that, if you'd like.\n>> You can always do binary-format COPY.\n> \n> I've never played with binary-format COPY actually. 
I'd be happy to\n> help test that too though.\n\nI'd have to check the source/a protocol dump to be sure, but I think\nPQexecPrepared(...), while it takes binary arguments, actually sends\nthem over the wire in text form. PostgreSQL does have a binary protocol\nas well, but it suffers from the same issues as binary-format COPY:\n\nUnlike PQexecPrepared(...), binary-format COPY doesn't handle endian and\ntype size issues for you. You need to convert the data to the database\nserver's endianness and type sizes, but I don't think the PostgreSQL\nprotocol provides any way to find those out.\n\nIt's OK if we're connected via a UNIX socket (and thus are on the same\nhost), though I guess a sufficiently perverse individual could install a\n32-bit bacula+libpq, and run a 64-bit PostgreSQL server, or even vice versa.\n\nIt should also be OK when connected to `localhost' (127.0.0.0/8) .\n\nIn other cases, binary-format COPY would be unsafe without some way to\ndetermine remote endianness and sizeof(various types).\n\n--\nCraig Ringer\n", "msg_date": "Wed, 15 Apr 2009 08:31:37 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INSERT times - same storage space but more fields ->\n\tmuch slower inserts" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> Unlike PQexecPrepared(...), binary-format COPY doesn't handle endian and\n> type size issues for you. You need to convert the data to the database\n> server's endianness and type sizes, but I don't think the PostgreSQL\n> protocol provides any way to find those out.\n\nThe on-the-wire binary format is much better specified than you think.\n(The documentation of it sucks, however.) It's big-endian in all cases\nand the datatype sizes are well defined.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Apr 2009 20:40:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT times - same storage space but more fields -> much slower\n\tinserts" }, { "msg_contents": "Craig,\n\n* Craig Ringer ([email protected]) wrote:\n> In other cases, binary-format COPY would be unsafe without some way to\n> determine remote endianness and sizeof(various types).\n\nAs Tom mentioned already, the binary protocol is actually pretty well\ndefined, and it's in network-byte-order, aka, big-endian. The only\nissue that I can think of off-hand that you need to know about the\nserver is if it's using 64-bit integers for date-times or if it's using\nfloat. That's a simple check to do, however, specifically with:\n\nshow integer_datetimes;\n\nIt's also alot cheaper to do the necessary byte-flipping to go from\nwhatever-endian to network-byte-order than to do the whole printf/atoi\nconversion. Handling timestamps takes a bit more magic but you can just\npull the appropriate code/#defines from the server backend, but I don't\nthink that's even an issue for this particular set.\n\nWhat does your test harness currently look like, and what would you like\nto see to test the binary-format COPY? 
I'd be happy to write up the\ncode necessary to implement binary-format COPY for this.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 14 Apr 2009 20:54:31 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT times - same storage space but more fields ->\n\tmuch slower inserts" }, { "msg_contents": "On Tue, 14 Apr 2009, Stephen Frost wrote:\n> What does your test harness currently look like, and what would you like\n> to see to test the binary-format COPY? I'd be happy to write up the\n> code necessary to implement binary-format COPY for this.\n\nIf anyone needs this code in Java, we have a version at \nhttp://www.intermine.org/\n\nDownload source code: http://www.intermine.org/wiki/SVNCheckout\n\nJavadoc: http://www.intermine.org/api/\n\nThe code is contained in the org.intermine.sql.writebatch package, in the \nintermine/objectstore/main/src/org/intermine/sql/writebatch directory in \nthe source.\n\nThe public interface is org.intermine.sql.writebatch.Batch.\n\nThe Postgres-specific binary COPY code is in \norg.intermine.sql.writebatch.BatchWriterPostgresCopyImpl.\n\nThe implementation unfortunately relies on a very old modified version of \nthe Postgres JDBC driver, which is in the intermine/objectstore/main/lib \ndirectory.\n\nThe code is released under the LGPL, and we would appreciate notification \nif it is used.\n\nThe code implements quite a sophisticated system for writing rows to \ndatabase tables very quickly. It batches together multiple writes into \nCOPY statements, and writes them in the background in another thread, \nwhile fully honouring flush calls. When it is using the database \nconnection is well-defined. I hope someone can find it useful.\n\nMatthew\n\n-- \n -. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-. .-.\n ||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||X|||\\ /|||\n |/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/ \\|||X|||/\n ' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-' `-'\n", "msg_date": "Wed, 15 Apr 2009 12:50:51 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT times - same storage space but more fields ->\n\tmuch slower inserts" }, { "msg_contents": "On Wed, 15 Apr 2009, Matthew Wakeling wrote:\n> If anyone needs this code in Java, we have a version at \n> http://www.intermine.org/\n>\n> Download source code: http://www.intermine.org/wiki/SVNCheckout\n>\n> Javadoc: http://www.intermine.org/api/\n\nSorry, that should be http://www.flymine.org/api/\n\nMatthew\n\n-- \n What goes up must come down. Ask any system administrator.\n", "msg_date": "Wed, 15 Apr 2009 12:57:57 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INSERT times - same storage space but more fields ->\n\tmuch slower inserts" } ]
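For reference, the pieces discussed above can be tried from plain SQL before anyone writes client code. This is only a sketch: the table and file names are invented, and it exercises the server-side COPY ... BINARY variant rather than the PQputCopyData streaming that the thread is really after. The integer_datetimes check is the server setting Stephen mentions for deciding how timestamps are encoded.

SHOW integer_datetimes;   -- 'on' = int64 microseconds, 'off' = double

-- 8.3 syntax; paths are server-side and purely illustrative
COPY some_table TO '/tmp/some_table.bin' WITH BINARY;
COPY some_table FROM '/tmp/some_table.bin' WITH BINARY;

-- text-format equivalent, for a timing comparison against the binary load
COPY some_table FROM '/tmp/some_table.txt';

Loading the same data both ways into an empty copy of the table gives a rough idea of how much of the cost is input parsing versus everything else.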
[ { "msg_contents": "ts_defect_meta_values has 460M rows. The following query, in retrospect \nnot too surprisingly, runs out of memory on a 32 bit postgres:\n\nupdate ts_defect_meta_values set ts_defect_date=(select ts_occur_date \nfrom ts_defects where ts_id=ts_defect_id)\n\nI changed the logic to update the table in 1M row batches. However, \nafter 159M rows, I get:\n\nERROR: could not extend relation 1663/16385/19505: wrote only 4096 of \n8192 bytes at block 7621407\n\nA df run on this machine shows plenty of space:\n\n[root@rql32xeoall03 tmp]# df\nFilesystem 1K-blocks Used Available Use% Mounted on\n/dev/sda2 276860796 152777744 110019352 59% /\n/dev/sda1 101086 11283 84584 12% /boot\nnone 4155276 0 4155276 0% /dev/shm\n\nThe updates are done inside of a single transaction. postgres 8.3.5.\n\nIdeas on what is going on appreciated.\n\nThanks,\nBrian\n", "msg_date": "Tue, 14 Apr 2009 17:41:24 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "error updating a very large table" }, { "msg_contents": "On Wed, Apr 15, 2009 at 1:41 AM, Brian Cox <[email protected]> wrote:\n> ts_defect_meta_values has 460M rows. The following query, in retrospect not\n> too surprisingly, runs out of memory on a 32 bit postgres:\n>\n> update ts_defect_meta_values set ts_defect_date=(select ts_occur_date from\n> ts_defects where ts_id=ts_defect_id)\n>\n> I changed the logic to update the table in 1M row batches. However, after\n> 159M rows, I get:\n>\n> ERROR:  could not extend relation 1663/16385/19505: wrote only 4096 of 8192\n> bytes at block 7621407\n>\n> A df run on this machine shows plenty of space:\n>\n> [root@rql32xeoall03 tmp]# df\n> Filesystem           1K-blocks      Used Available Use% Mounted on\n> /dev/sda2            276860796 152777744 110019352  59% /\n> /dev/sda1               101086     11283     84584  12% /boot\n> none                   4155276         0   4155276   0% /dev/shm\n>\n> The updates are done inside of a single transaction. postgres 8.3.5.\n>\n> Ideas on what is going on appreciated.\n>\nany triggers on updated table ?\n\nas for the update query performance, try different way of doing it:\nupdate foo set bar=x.z FROM foo2 WHERE foo.z=bar.sadfasd;\n\n\n-- \nGJ\n", "msg_date": "Wed, 15 Apr 2009 12:19:11 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: error updating a very large table" }, { "msg_contents": "Brian Cox <[email protected]> writes:\n> I changed the logic to update the table in 1M row batches. However, \n> after 159M rows, I get:\n\n> ERROR: could not extend relation 1663/16385/19505: wrote only 4096 of \n> 8192 bytes at block 7621407\n\nYou're out of disk space.\n\n> A df run on this machine shows plenty of space:\n\nPer-user quota restriction, perhaps?\n\nI'm also wondering about temporary files, although I suppose 100G worth\nof temp files is a bit much for this query. But you need to watch df\nwhile the query is happening, rather than suppose that an after-the-fact\nreading means anything.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Apr 2009 09:51:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: error updating a very large table " }, { "msg_contents": "\nOn Wed, 2009-04-15 at 09:51 -0400, Tom Lane wrote:\n> Brian Cox <[email protected]> writes:\n> > I changed the logic to update the table in 1M row batches. 
However, \n> > after 159M rows, I get:\n> \n> > ERROR: could not extend relation 1663/16385/19505: wrote only 4096 of \n> > 8192 bytes at block 7621407\n> \n> You're out of disk space.\n> \n> > A df run on this machine shows plenty of space:\n> \n> Per-user quota restriction, perhaps?\n> \n> I'm also wondering about temporary files, although I suppose 100G worth\n> of temp files is a bit much for this query. But you need to watch df\n> while the query is happening, rather than suppose that an after-the-fact\n> reading means anything.\n\nAnytime we get an out of space error we will be in the same situation.\n\nWhen we get this error, we should\n* summary of current temp file usage\n* df (if possible on OS)\n\nOtherwise we'll always be wondering what caused the error.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 15 Apr 2009 17:57:17 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: error updating a very large table" } ]
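Putting the two suggestions from this thread together, a sketch of the batched form looks like this. The range predicate assumes ts_defect_id is indexed and reasonably dense, which the original post does not state, so treat the batch boundaries as illustrative only.

-- join form of the update, instead of a correlated subquery per row
UPDATE ts_defect_meta_values
   SET ts_defect_date = ts_defects.ts_occur_date
  FROM ts_defects
 WHERE ts_defects.ts_id = ts_defect_meta_values.ts_defect_id;

-- batched variant: one range per transaction, with a VACUUM between batches so
-- the space taken by dead row versions can be reused instead of growing the table
UPDATE ts_defect_meta_values
   SET ts_defect_date = ts_defects.ts_occur_date
  FROM ts_defects
 WHERE ts_defects.ts_id = ts_defect_meta_values.ts_defect_id
   AND ts_defect_meta_values.ts_defect_id BETWEEN 1 AND 1000000;

Each update writes a new row version, so rewriting 460M rows in place needs roughly the table's size again in free space unless vacuum gets a chance to reclaim it between batches.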
[ { "msg_contents": "Hey guys,\n\n \n\nI need some information on \n\n1. What are the best features of Npgsql product as compare to other\ncommercial .net data providers?\n\n2. If you have encountered any major problems, bugs or performance issue\netc... With this product?\n\n \n\nThanks in advance,\n\nPeeyush Jain| Software Engineer -Netezza Dev | Persistent Systems Ltd\n\n <mailto:[email protected]> [email protected] |\nCell: +91 9373069475 | Tel: +91 (20) 3023 6762\n\nInnovation in software product design, development and delivery-\n<http://www.persistentsys.com/> www.persistentsys.com\n\n \n\n \n\n \n\n\nDISCLAIMER\n==========\nThis e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.", "msg_date": "Wed, 15 Apr 2009 16:27:50 +0530", "msg_from": "\"Peeyush\" <[email protected]>", "msg_from_op": true, "msg_subject": "need information" }, { "msg_contents": "Peeyush wrote:\n> I need some information on \n> \n> 1. What are the best features of Npgsql product as compare to \n> other commercial .net data providers?\n> \n> 2. If you have encountered any major problems, bugs or \n> performance issue etc... 
With this product?\n\nYou sent this to way too many lists, and you didn't select\nthose very carefully (pgsql-bugs ??).\n\nThe correct place for such a question is either the general list\n(this one) or - even better - the Npgsql open-discussion forum:\n\nhttp://pgfoundry.org/forum/forum.php?forum_id=518\n\nAs to your questions:\n\nQuestion 1 is wrong, because Npgsql is no commercial .NET data provider.\nThat's the main advantage: it is open source.\n\nConcerning your second question, the only unresolved problem I am\naware of is with SSL connections, and that is a problem in\nMono, see https://bugzilla.novell.com/show_bug.cgi?id=321325\n\nYours,\nLaurenz Albe\n", "msg_date": "Wed, 15 Apr 2009 17:15:43 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need information" }, { "msg_contents": "On Wed, 2009-04-15 at 17:15 +0200, Albe Laurenz wrote:\n\n> As to your questions:\n> \n> Question 1 is wrong, because Npgsql is no commercial .NET data provider.\n> That's the main advantage: it is open source.\n\nThis is actually a misconception. Open Source doesn't disqualify it as\ncommercial. It disqualifies it as proprietary. I can make money\nproviding consulting for Npgsql, that makes it commercial or at least\nthe opportunity for it to be commercial.\n\nNot to be pedantic but let's be accurate with our data. We are database\npeople after all :)\n\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Wed, 15 Apr 2009 08:55:31 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need information" }, { "msg_contents": "On Wed, Apr 15, 2009 at 07:57, Peeyush <[email protected]> wrote:\n> Hey guys,\n>\n>\n>\n\n\nHi, Peeyush!\n\n> I need some information on\n>\n> 1. What are the best features of Npgsql product as compare to other\n> commercial .net data providers?\n>\n\nWell, the first one is that it is opensource. :)\n\nWe are working actively on it and we try to provide as fast as\npossible the features our users need. Although some times we don't\ndeliver them as fast as we wished :(\n\nWe already have a very nice code base which can give you almost\nanything you would need in your programs.\n\nPlease, give it a try and let us know what you think.\n\n\n> 2. If you have encountered any major problems, bugs or performance issue\n> etc... With this product?\n>\n\nWell, one of the biggest performance problems we have but we already\ndid some performance tunning is with bytea handling. It used to take a\nlot of time with large data.\n\nOur current performance problem is with prepared statements. Npgsql\nstill needs some tunning on this.\n\n\nI think this is it for while.\n\nAs Albe already said, you would get a good feedback if you post your\nquestion to our forum.\n\nThanks in advance.\n\nI hope it helps.\n\n\n\n>\n>\n> Thanks in advance,\n>\n> Peeyush Jain| Software Engineer –Netezza Dev | Persistent Systems Ltd\n>\n> [email protected]  | Cell: +91 9373069475 | Tel: +91 (20) 3023\n> 6762\n>\n> Innovation in software product design, development and delivery-\n> www.persistentsys.com\n>\n>\n>\n>\n>\n>\n>\n> DISCLAIMER ========== This e-mail may contain privileged and confidential\n> information which is the property of Persistent Systems Ltd. 
It is intended\n> only for the use of the individual or entity to which it is addressed. If\n> you are not the intended recipient, you are not authorized to read, retain,\n> copy, print, distribute or use this message. If you have received this\n> communication in error, please notify the sender and delete all copies of\n> this message. Persistent Systems Ltd. does not accept any liability for\n> virus infected mails.\n\n\n\n-- \nRegards,\n\nFrancisco Figueiredo Jr.\nNpgsql Lead Developer\nhttp://www.npgsql.org\nhttp://fxjr.blogspot.com\nhttp://twitter.com/franciscojunior\nhttp://friendfeed.com/franciscojunior\n", "msg_date": "Wed, 15 Apr 2009 15:10:13 -0300", "msg_from": "\"Francisco Figueiredo Jr.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need information" }, { "msg_contents": "Thanks a lot for kind information.\n\n \n\nCurrently I am investigating on Npgsql and dotConnect .. because we have\nfeature list of dotConnect I attached the same for your quick reference but\nwe don't have in Npgsql. If you can help me in this then it will be greatful\nfor me.\n\n \n\nThanks & regards,\n\nPeeyush Jain | Software Engineer -Netezza Dev | Persistent Systems Ltd\n\[email protected] | Cell: +91 9373069475 | Tel: +91 (20) 3023\n6762\n\nInnovation in software product design, development and delivery-\nwww.persistentsys.com\n\n \n\n \n\n \n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Francisco Figueiredo\nJr.\nSent: Wednesday, April 15, 2009 11:40 PM\nTo: Peeyush\nCc: [email protected]; npgsql-devel\nSubject: Re: [GENERAL] need information\n\n \n\nOn Wed, Apr 15, 2009 at 07:57, Peeyush <[email protected]>\nwrote:\n\n> Hey guys,\n\n> \n\n> \n\n> \n\n \n\n \n\nHi, Peeyush!\n\n \n\n> I need some information on\n\n> \n\n> 1. What are the best features of Npgsql product as compare to other\n\n> commercial .net data providers?\n\n> \n\n \n\nWell, the first one is that it is opensource. :)\n\n \n\nWe are working actively on it and we try to provide as fast as\n\npossible the features our users need. Although some times we don't\n\ndeliver them as fast as we wished :(\n\n \n\nWe already have a very nice code base which can give you almost\n\nanything you would need in your programs.\n\n \n\nPlease, give it a try and let us know what you think.\n\n \n\n \n\n> 2. If you have encountered any major problems, bugs or performance issue\n\n> etc... With this product?\n\n> \n\n \n\nWell, one of the biggest performance problems we have but we already\n\ndid some performance tunning is with bytea handling. It used to take a\n\nlot of time with large data.\n\n \n\nOur current performance problem is with prepared statements. Npgsql\n\nstill needs some tunning on this.\n\n \n\n \n\nI think this is it for while.\n\n \n\nAs Albe already said, you would get a good feedback if you post your\n\nquestion to our forum.\n\n \n\nThanks in advance.\n\n \n\nI hope it helps.\n\n \n\n \n\n \n\n> \n\n> \n\n> Thanks in advance,\n\n> \n\n> Peeyush Jain| Software Engineer -Netezza Dev | Persistent Systems Ltd\n\n> \n\n> [email protected] | Cell: +91 9373069475 | Tel: +91 (20) 3023\n\n> 6762\n\n> \n\n> Innovation in software product design, development and delivery-\n\n> www.persistentsys.com\n\n> \n\n> \n\n> \n\n> \n\n> \n\n> \n\n> \n\n> DISCLAIMER ========== This e-mail may contain privileged and confidential\n\n> information which is the property of Persistent Systems Ltd. 
It is\nintended\n\n> only for the use of the individual or entity to which it is addressed. If\n\n> you are not the intended recipient, you are not authorized to read,\nretain,\n\n> copy, print, distribute or use this message. If you have received this\n\n> communication in error, please notify the sender and delete all copies of\n\n> this message. Persistent Systems Ltd. does not accept any liability for\n\n> virus infected mails.\n\n \n\n \n\n \n\n-- \n\nRegards,\n\n \n\nFrancisco Figueiredo Jr.\n\nNpgsql Lead Developer\n\nhttp://www.npgsql.org\n\nhttp://fxjr.blogspot.com\n\nhttp://twitter.com/franciscojunior\n\nhttp://friendfeed.com/franciscojunior\n\n\nDISCLAIMER\n==========\nThis e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.", "msg_date": "Thu, 16 Apr 2009 10:36:44 +0530", "msg_from": "\"Peeyush\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need information" }, { "msg_contents": "Joshua D. Drake wrote:\n>> Question 1 is wrong, because Npgsql is no commercial .NET data provider.\n>> That's the main advantage: it is open source.\n> \n> This is actually a misconception. Open Source doesn't disqualify it as\n> commercial. It disqualifies it as proprietary. I can make money\n> providing consulting for Npgsql, that makes it commercial or at least\n> the opportunity for it to be commercial.\n> \n> Not to be pedantic but let's be accurate with our data. We are database\n> people after all :)\n\nThank you for the correction.\n\nAlthough I'd say that the fact that you can make money by consulting\nfor something does not make it commercial software. Maybe I'm wrong.\n\nBut it is of course possible to forbid people to use your open source\nsoftware unless they pay for it, which would make it commercial\nin my eyes.\n\nThis is getting off topic, sorry.\n\nYours,\nLaurenz Albe\n", "msg_date": "Thu, 16 Apr 2009 11:22:46 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need information" }, { "msg_contents": "On Thu, Apr 16, 2009 at 11:22:46AM +0200, Albe Laurenz wrote:\n> Joshua D. Drake wrote:\n> >> Question 1 is wrong, because Npgsql is no commercial .NET data provider.\n> >> That's the main advantage: it is open source.\n> > \n> > This is actually a misconception. Open Source doesn't disqualify it as\n> > commercial. It disqualifies it as proprietary. 
I can make money\n> > providing consulting for Npgsql, that makes it commercial or at least\n> > the opportunity for it to be commercial.\n> > \n> > Not to be pedantic but let's be accurate with our data. We are database\n> > people after all :)\n> \n> Thank you for the correction.\n> \n> Although I'd say that the fact that you can make money by consulting\n> for something does not make it commercial software. Maybe I'm wrong.\n\n\"Commercial\" means, \"used in commerce.\" It has nothing to do with the\nterms under which the software's source code is (or is not) available.\n\n> But it is of course possible to forbid people to use your open\n> source software unless they pay for it, which would make it\n> commercial in my eyes.\n\nThat would make it *proprietary*, as no FLOSS license allows such a\nrestriction.\n\n> This is getting off topic, sorry.\n\nVaguely. Has that stopped us before? ;)\n\nCheers,\nDavid.\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nPhone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter\nSkype: davidfetter XMPP: [email protected]\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n", "msg_date": "Thu, 16 Apr 2009 11:07:38 -0700", "msg_from": "David Fetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need information" }, { "msg_contents": "On Thu, Apr 16, 2009 at 02:12, Peeyush <[email protected]> wrote:\n>\n> Sorry missed the attachment L\n>\n\nNo problem.\n\nThanks for the document with feature list of dotConnect.\nI'll create one for Npgsql which can give more information. Npgsql is\nmissing a list like that for a long time.\n\nAnother missing feature from the list is the design time support. We\nare working on that although we don't anything ready yet.\n\nWe also don't have something like pgdump or pgloader.\n\nWe don't have compact framework support\n\n\nThose are the biggest features we are lacking. I can see a lot more\ncompared to dotConnect which we still would need to work on.\n\nAs soon as I compile the list I'll let you know.\n\nThanks for your interest in Npgsql. Your feedback also is very nice so\nwe can improve even more Npgsql.\n\n\n>\n>\n>\n>\n> Peeyush Jain| Software Engineer –Netezza Dev | Persistent Systems Ltd\n>\n> [email protected]  | Cell: +91 9373069475 | Tel: +91 (20) 3023 6762\n>\n> Innovation in software product design, development and delivery- www.persistentsys.com\n>\n>\n>\n>\n>\n>\n>\n> ________________________________\n>\n> From: Peeyush [mailto:[email protected]]\n> Sent: Thursday, April 16, 2009 10:37 AM\n> To: '[email protected]'\n>\n> Cc: '[email protected]'; 'npgsql-devel'\n> Subject: RE: [GENERAL] need information\n>\n>\n>\n> Thanks a lot for kind information.\n>\n>\n>\n> Currently I am investigating on Npgsql and dotConnect …… because we have feature list of dotConnect I attached the same for your quick reference but we don’t have in Npgsql. 
If you can help me in this then it will be greatful for me.\n>\n>\n>\n> Thanks & regards,\n>\n> Peeyush Jain | Software Engineer –Netezza Dev | Persistent Systems Ltd\n>\n> [email protected]  | Cell: +91 9373069475 | Tel: +91 (20) 3023 6762\n>\n> Innovation in software product design, development and delivery- www.persistentsys.com\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> -----Original Message-----\n>\n> From: [email protected] [mailto:[email protected]] On Behalf Of Francisco Figueiredo Jr.\n> Sent: Wednesday, April 15, 2009 11:40 PM\n> To: Peeyush\n> Cc: [email protected]; npgsql-devel\n> Subject: Re: [GENERAL] need information\n>\n>\n>\n> On Wed, Apr 15, 2009 at 07:57, Peeyush <[email protected]> wrote:\n>\n> > Hey guys,\n>\n> >\n>\n> >\n>\n> >\n>\n>\n>\n>\n>\n> Hi, Peeyush!\n>\n>\n>\n> > I need some information on\n>\n> >\n>\n> > 1. What are the best features of Npgsql product as compare to other\n>\n> > commercial .net data providers?\n>\n> >\n>\n>\n>\n> Well, the first one is that it is opensource. :)\n>\n>\n>\n> We are working actively on it and we try to provide as fast as\n>\n> possible the features our users need. Although some times we don't\n>\n> deliver them as fast as we wished :(\n>\n>\n>\n> We already have a very nice code base which can give you almost\n>\n> anything you would need in your programs.\n>\n>\n>\n> Please, give it a try and let us know what you think.\n>\n>\n>\n>\n>\n> > 2. If you have encountered any major problems, bugs or performance issue\n>\n> > etc... With this product?\n>\n> >\n>\n>\n>\n> Well, one of the biggest performance problems we have but we already\n>\n> did some performance tunning is with bytea handling. It used to take a\n>\n> lot of time with large data.\n>\n>\n>\n> Our current performance problem is with prepared statements. Npgsql\n>\n> still needs some tunning on this.\n>\n>\n>\n>\n>\n> I think this is it for while.\n>\n>\n>\n> As Albe already said, you would get a good feedback if you post your\n>\n> question to our forum.\n>\n>\n>\n> Thanks in advance.\n>\n>\n>\n> I hope it helps.\n>\n>\n>\n>\n>\n>\n>\n> >\n>\n> >\n>\n> > Thanks in advance,\n>\n> >\n>\n> > Peeyush Jain| Software Engineer –Netezza Dev | Persistent Systems Ltd\n>\n> >\n>\n> > [email protected]  | Cell: +91 9373069475 | Tel: +91 (20) 3023\n>\n> > 6762\n>\n> >\n>\n> > Innovation in software product design, development and delivery-\n>\n> > www.persistentsys.com\n>\n> >\n>\n> >\n>\n> >\n>\n> >\n>\n> >\n>\n> >\n>\n> >\n>\n> > DISCLAIMER ========== This e-mail may contain privileged and confidential\n>\n> > information which is the property of Persistent Systems Ltd. It is intended\n>\n> > only for the use of the individual or entity to which it is addressed. If\n>\n> > you are not the intended recipient, you are not authorized to read, retain,\n>\n> > copy, print, distribute or use this message. If you have received this\n>\n> > communication in error, please notify the sender and delete all copies of\n>\n> > this message. Persistent Systems Ltd. does not accept any liability for\n>\n> > virus infected mails.\n>\n>\n>\n>\n>\n>\n>\n> --\n>\n> Regards,\n>\n>\n>\n> Francisco Figueiredo Jr.\n>\n> Npgsql Lead Developer\n>\n> http://www.npgsql.org\n>\n> http://fxjr.blogspot.com\n>\n> http://twitter.com/franciscojunior\n>\n> http://friendfeed.com/franciscojunior\n>\n> DISCLAIMER ========== This e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. 
If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.\n\n\n--\nRegards,\n\nFrancisco Figueiredo Jr.\nNpgsql Lead Developer\nhttp://www.npgsql.org\nhttp://fxjr.blogspot.com\nhttp://twitter.com/franciscojunior\nhttp://friendfeed.com/franciscojunior\n", "msg_date": "Thu, 16 Apr 2009 20:43:20 -0300", "msg_from": "\"Francisco Figueiredo Jr.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need information" } ]
[ { "msg_contents": "\nI have a query that is executed really badly by Postgres. It is a nine \ntable join, where two of the tables are represented in a view. If I remove \none of the tables from the query, then the query runs very quickly using a \ncompletely different plan.\n\nHere is the view:\n\nrelease-16.0-preview-09-apr=# \\d locatedsequencefeatureoverlappingfeatures\nView \"public.locatedsequencefeatureoverlappingfeatures\"\n Column | Type | Modifiers\n------------------------+---------+-----------\n overlappingfeatures | integer |\n locatedsequencefeature | integer |\nView definition:\n SELECT l1.subjectid AS overlappingfeatures, l2.subjectid AS locatedsequencefeature\n FROM location l1, location l2\n WHERE l1.objectid = l2.objectid AND l1.subjectid <> l2.subjectid AND bioseg_create(l1.intermine_start, l1.intermine_end) && bioseg_create(l2.intermine_start, l2.intermine_end);\n\n\nHere is the query that works:\n\nSELECT *\nFROM\n gene AS a1_,\n intergenicregion AS a2_,\n regulatoryregion AS a3_,\n chromosome AS a4_,\n location AS a5_,\n LocatedSequenceFeatureOverlappingFeatures AS indirect0,\n BioEntitiesDataSets AS indirect1\nWHERE\n a1_.id = 1267676\n AND a1_.upstreamIntergenicRegionId = a2_.id\n AND a2_.id = indirect0.LocatedSequenceFeature\n AND indirect0.OverlappingFeatures = a3_.id\n AND a3_.chromosomeid = a4_.id\n AND a3_.chromosomeLocationId = a5_.id\n AND a3_.id = indirect1.BioEntities\n\nQUERY PLAN\n-----------------------------------------------------------------\n Nested Loop (cost=0.00..44.82 rows=1 width=787)\n (actual time=18.347..184.178 rows=105 loops=1)\n -> Nested Loop\n (cost=0.00..44.54 rows=1 width=626)\n (actual time=18.329..182.837 rows=105 loops=1)\n -> Nested Loop\n (cost=0.00..43.82 rows=1 width=561)\n (actual time=18.249..180.801 rows=105 loops=1)\n -> Nested Loop\n (cost=0.00..43.51 rows=1 width=380)\n (actual time=10.123..178.471 rows=144 loops=1)\n -> Nested Loop\n (cost=0.00..42.85 rows=1 width=372)\n (actual time=0.854..31.446 rows=142 loops=1)\n -> Nested Loop\n (cost=0.00..38.57 rows=1 width=168)\n (actual time=0.838..29.505 rows=142 loops=1)\n Join Filter: ((l1.subjectid <> l2.subjectid) AND (l2.objectid = l1.objectid))\n -> Nested Loop\n (cost=0.00..10.02 rows=1 width=176)\n (actual time=0.207..0.218 rows=1 loops=1)\n -> Index Scan using gene_pkey on gene a1_\n (cost=0.00..4.29 rows=1 width=160)\n (actual time=0.107..0.110 rows=1 loops=1)\n Index Cond: (id = 1267676)\n -> Index Scan using location__key_all on location l2\n (cost=0.00..5.70 rows=2 width=16)\n (actual time=0.090..0.093 rows=1 loops=1)\n Index Cond: (l2.subjectid = a1_.upstreamintergenicregionid)\n -> Index Scan using location_bioseg on location l1\n (cost=0.00..12.89 rows=696 width=16)\n (actual time=0.095..26.458 rows=1237 loops=1)\n Index Cond: (bioseg_create(l1.intermine_start, l1.intermine_end) && bioseg_create(l2.intermine_start, l2.intermine_end))\n -> Index Scan using intergenicregion_pkey on intergenicregion a2_\n (cost=0.00..4.27 rows=1 width=204)\n (actual time=0.004..0.006 rows=1 loops=142)\n Index Cond: (a2_.id = a1_.upstreamintergenicregionid)\n -> Index Scan using bioentitiesdatasets__bioentities on bioentitiesdatasets indirect1\n (cost=0.00..0.63 rows=2 width=8)\n (actual time=1.026..1.028 rows=1 loops=142)\n Index Cond: (indirect1.bioentities = l1.subjectid)\n -> Index Scan using regulatoryregion_pkey on regulatoryregion a3_\n (cost=0.00..0.29 rows=1 width=181)\n (actual time=0.008..0.009 rows=1 loops=144)\n Index Cond: (a3_.id = l1.subjectid)\n -> Index Scan using 
location_pkey on location a5_\n (cost=0.00..0.71 rows=1 width=65)\n (actual time=0.010..0.012 rows=1 loops=105)\n Index Cond: (a5_.id = a3_.chromosomelocationid)\n -> Index Scan using chromosome_pkey on chromosome a4_\n (cost=0.00..0.27 rows=1 width=161)\n (actual time=0.003..0.005 rows=1 loops=105)\n Index Cond: (a4_.id = a3_.chromosomeid)\n Total runtime: 184.596 ms\n(25 rows)\n\n\nHere is the query that does not work:\n\nSELECT *\nFROM\n gene AS a1_,\n intergenicregion AS a2_,\n regulatoryregion AS a3_,\n chromosome AS a4_,\n location AS a5_,\n dataset AS a6_,\n LocatedSequenceFeatureOverlappingFeatures AS indirect0,\n BioEntitiesDataSets AS indirect1\nWHERE\n a1_.id = 1267676\n AND a1_.upstreamIntergenicRegionId = a2_.id\n AND a2_.id = indirect0.LocatedSequenceFeature\n AND indirect0.OverlappingFeatures = a3_.id\n AND a3_.chromosomeid = a4_.id\n AND a3_.chromosomeLocationId = a5_.id\n AND a3_.id = indirect1.BioEntities\n AND indirect1.DataSets = a6_.id\n\nI just left this running overnight, and it hasn't completed an EXPLAIN \nANALYSE. It is basically the previous query (which returns 105 rows) with \nanother table attached on a primary key. Should be very quick.\n\nQUERY PLAN\n---------------------------------------------------------------------------\n Nested Loop\n (cost=0.21..49789788.95 rows=1 width=960)\n -> Nested Loop\n (cost=0.21..49789788.67 rows=1 width=799)\n -> Nested Loop\n (cost=0.21..49789787.94 rows=1 width=734)\n -> Merge Join\n (cost=0.21..49789787.64 rows=1 width=553)\n Merge Cond: (a1_.upstreamintergenicregionid = a2_.id)\n -> Nested Loop\n (cost=0.00..99575037.26 rows=2 width=349)\n -> Nested Loop\n (cost=0.00..99575036.70 rows=2 width=176)\n -> Nested Loop\n (cost=0.00..99575036.05 rows=1 width=168)\n Join Filter: (a1_.upstreamintergenicregionid = l2.subjectid)\n -> Index Scan using gene__upstreamintergenicregion on gene a1_\n (cost=0.00..6836.09 rows=1 width=160)\n Index Cond: (id = 1267676)\n -> Nested Loop\n (cost=0.00..99507386.51 rows=4865076 width=8)\n Join Filter: ((l1.subjectid <> l2.subjectid) AND (l1.objectid = l2.objectid))\n -> Index Scan using location__key_all on location l1\n (cost=0.00..158806.58 rows=3479953 width=16)\n -> Index Scan using location_bioseg on location l2\n (cost=0.00..12.89 rows=696 width=16)\n Index Cond: (bioseg_create(l1.intermine_start, l1.intermine_end) && bioseg_create(l2.intermine_start, l2.intermine_end))\n -> Index Scan using bioentitiesdatasets__bioentities on bioentitiesdatasets indirect1\n (cost=0.00..0.63 rows=2 width=8)\n Index Cond: (indirect1.bioentities = l1.subjectid)\n -> Index Scan using dataset_pkey on dataset a6_\n (cost=0.00..0.27 rows=1 width=173)\n Index Cond: (a6_.id = indirect1.datasets)\n -> Index Scan using intergenicregion_pkey on intergenicregion a2_\n (cost=0.00..2132.03 rows=54785 width=204)\n -> Index Scan using regulatoryregion_pkey on regulatoryregion a3_\n (cost=0.00..0.29 rows=1 width=181)\n Index Cond: (a3_.id = l1.subjectid)\n -> Index Scan using location_pkey on location a5_\n (cost=0.00..0.71 rows=1 width=65)\n Index Cond: (a5_.id = a3_.chromosomelocationid)\n -> Index Scan using chromosome_pkey on chromosome a4_\n (cost=0.00..0.27 rows=1 width=161)\n Index Cond: (a4_.id = a3_.chromosomeid)\n(27 rows)\n\nI'm curious about two things - firstly why is it choosing such a dumb way \nof joining l1 to l2, with a full index scan on l1, where it could use a \nconditional index scan on l1 as with the working query? 
Secondly, why is \nthe merge join's cost half that of the nested loop inside it?\n\ngeqo threshold is set to 15, so this is not the genetic optimiser stuffing \nup. Besides, it creates the same plan each time. The database is fully \nanalysed with a reasonably high statistics target. Here are all the \nnon-comment entries in postgresql.conf:\n\nlisten_addresses = '*' # what IP address(es) to listen on;\nmax_connections = 300 # (change requires restart)\nshared_buffers = 500MB # min 128kB or max_connections*16kB\ntemp_buffers = 100MB # min 800kB\nwork_mem = 2000MB # min 64kB\nmaintenance_work_mem = 1600MB # min 1MB\nmax_stack_depth = 9MB # min 100kB\nmax_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes each\nrandom_page_cost = 2.0 # same scale as above\neffective_cache_size = 23GB\ngeqo_threshold = 15\ndefault_statistics_target = 500 # range 1-1000\nlog_destination = 'stderr' # Valid values are combinations of\nlogging_collector = on # Enable capturing of stderr and csvlog\nlog_directory = 'pg_log' # directory where log files are written,\nlog_truncate_on_rotation = on # If on, an existing log file of the\nlog_rotation_age = 1d # Automatic rotation of logfiles will\nlog_rotation_size = 0 # Automatic rotation of logfiles will\nlog_min_duration_statement = 0 # -1 is disabled, 0 logs all statements\nlog_duration = on\nlog_line_prefix = '%t ' # special values:\nlog_statement = 'all' # none, ddl, mod, all\ndatestyle = 'iso, mdy'\nlc_messages = 'C' # locale for system error message\nlc_monetary = 'C' # locale for monetary formatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\n\nAnything I can do to solve this?\n\nMatthew\n\n-- \nSurely the value of C++ is zero, but C's value is now 1?\n -- map36, commenting on the \"No, C++ isn't equal to D. 'C' is undeclared\n [...] C++ should really be called 1\" response to \"C++ -- shouldn't it\n be called D?\"\n", "msg_date": "Thu, 16 Apr 2009 11:37:29 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Really dumb planner decision" }, { "msg_contents": "On Thu, Apr 16, 2009 at 11:37 AM, Matthew Wakeling <[email protected]> wrote:\n>\n> I have a query that is executed really badly by Postgres. It is a nine table\n> join, where two of the tables are represented in a view. If I remove one of\n> the tables from the query, then the query runs very quickly using a\n> completely different plan.\n\nAnd what happens if you execute that view alone, with WHERE .. just\nlike it would be a part of the whole query? 
((id = 1267676))\n\n\n-- \nGJ\n", "msg_date": "Thu, 16 Apr 2009 11:48:19 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "On Thu, Apr 16, 2009 at 11:37 AM, Matthew Wakeling <[email protected]> wrote:\n> SELECT *\n> FROM\n>    gene AS a1_,\n>    intergenicregion AS a2_,\n>    regulatoryregion AS a3_,\n>    chromosome AS a4_,\n>    location AS a5_,\n>    dataset AS a6_,\n>    LocatedSequenceFeatureOverlappingFeatures AS indirect0,\n>    BioEntitiesDataSets AS indirect1\n> WHERE\n>        a1_.id = 1267676\n>    AND a1_.upstreamIntergenicRegionId = a2_.id\n>    AND a2_.id = indirect0.LocatedSequenceFeature\n>    AND indirect0.OverlappingFeatures = a3_.id\n>    AND a3_.chromosomeid = a4_.id\n>    AND a3_.chromosomeLocationId = a5_.id\n>    AND a3_.id = indirect1.BioEntities\n>    AND indirect1.DataSets = a6_.id\n\nOn a second look, it looks like you are are joining that view twice,\nat this point, I have no idea myself what it might be. But I guess it\nhas to search over 5M rows for each of 105 in other query.\n\nI wonder what more experienced guys here will have to say about it.\n\n\n-- \nGJ\n", "msg_date": "Thu, 16 Apr 2009 12:16:13 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "On Thu, 16 Apr 2009, Grzegorz Jaśkiewicz wrote:\n> On a second look, it looks like you are are joining that view twice,\n> at this point, I have no idea myself what it might be. But I guess it\n> has to search over 5M rows for each of 105 in other query.\n>\n> I wonder what more experienced guys here will have to say about it.\n\nThat view appears as a two-column table, so I have joined something on \nboth of those columns, yes.\n\nInterestingly, joining the dataset table breaks the query plan, but that \ntable only has 77 rows, and it is joined on by its unique primary key \nindex. That should be really trivial for Postgres to do.\n\nMatthew\n\n-- \n I quite understand I'm doing algebra on the blackboard and the usual response\n is to throw objects... If you're going to freak out... wait until party time\n and invite me along -- Computer Science Lecturer", "msg_date": "Thu, 16 Apr 2009 12:24:42 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "On Thu, 16 Apr 2009, Grzegorz Ja�kiewicz wrote:\n> On Thu, Apr 16, 2009 at 11:37 AM, Matthew Wakeling <[email protected]> wrote:\n>>\n>> I have a query that is executed really badly by Postgres. It is a nine table\n>> join, where two of the tables are represented in a view. If I remove one of\n>> the tables from the query, then the query runs very quickly using a\n>> completely different plan.\n>\n> And what happens if you execute that view alone, with WHERE .. just\n> like it would be a part of the whole query? 
((id = 1267676))\n\nReally quick, just like the query that works in my email.\n\nSELECT *\nFROM\n gene AS a1_,\n LocatedSequenceFeatureOverlappingFeatures AS indirect0\nWHERE\n a1_.id = 1267676\n AND a1_.upstreamIntergenicRegionId = indirect0.LocatedSequenceFeature\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop\n (cost=0.00..38.57 rows=1 width=168)\n (actual time=0.759..27.723 rows=142 loops=1)\n Join Filter: ((l1.subjectid <> l2.subjectid) AND (l2.objectid = l1.objectid))\n -> Nested Loop\n (cost=0.00..10.02 rows=1 width=176)\n (actual time=0.136..0.149 rows=1 loops=1)\n -> Index Scan using gene_pkey on gene a1_\n (cost=0.00..4.29 rows=1 width=160)\n (actual time=0.059..0.062 rows=1 loops=1)\n Index Cond: (id = 1267676)\n -> Index Scan using location__key_all on location l2\n (cost=0.00..5.70 rows=2 width=16)\n (actual time=0.067..0.071 rows=1 loops=1)\n Index Cond: (l2.subjectid = a1_.upstreamintergenicregionid)\n -> Index Scan using location_bioseg on location l1\n (cost=0.00..12.89 rows=696 width=16)\n (actual time=0.092..24.730 rows=1237 loops=1)\n Index Cond: (bioseg_create(l1.intermine_start, l1.intermine_end) && bioseg_create(l2.intermine_start, l2.intermine_end))\n Total runtime: 28.051 ms\n(10 rows)\n\nMatthew\n\n-- \n\"Take care that thou useth the proper method when thou taketh the measure of\n high-voltage circuits so that thou doth not incinerate both thee and the\n meter; for verily, though thou has no account number and can be easily\n replaced, the meter doth have one, and as a consequence, bringeth much woe\n upon the Supply Department.\" -- The Ten Commandments of Electronics", "msg_date": "Thu, 16 Apr 2009 12:31:42 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "2009/4/16 Matthew Wakeling <[email protected]>:\n> On Thu, 16 Apr 2009, Grzegorz Jaśkiewicz wrote:\n>>\n>> On Thu, Apr 16, 2009 at 11:37 AM, Matthew Wakeling <[email protected]>\n>> wrote:\n>>>\n>>> I have a query that is executed really badly by Postgres. It is a nine\n>>> table\n>>> join, where two of the tables are represented in a view. If I remove one\n>>> of\n>>> the tables from the query, then the query runs very quickly using a\n>>> completely different plan.\n>>\n>> And what happens if you execute that view alone, with WHERE .. just\n>> like it would be a part of the whole query? ((id = 1267676))\n>\n> Really quick, just like the query that works in my email.\n\nWhat happens if you change join_collapse_limit and from_collapse_limit\nto some huge number?\n\nhttp://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT\n\n...Robert\n", "msg_date": "Thu, 16 Apr 2009 07:42:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "On Thu, 16 Apr 2009, Robert Haas wrote:\n> What happens if you change join_collapse_limit and from_collapse_limit\n> to some huge number?\n>\n> http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT\n\nThat solves the problem. So, a view is treated as a subquery then?\n\nMatthew\n\n-- \n Contrary to popular belief, Unix is user friendly. It just happens to be\n very selective about who its friends are. 
-- Kyle Hearn\n", "msg_date": "Thu, 16 Apr 2009 13:05:13 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "On Thu, Apr 16, 2009 at 8:05 AM, Matthew Wakeling <[email protected]> wrote:\n> On Thu, 16 Apr 2009, Robert Haas wrote:\n>>\n>> What happens if you change join_collapse_limit and from_collapse_limit\n>> to some huge number?\n>>\n>>\n>> http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT\n>\n> That solves the problem. So, a view is treated as a subquery then?\n>\n\nno...the view is simply inlined into the query (think C macro) using\nthe rules. You just bumped into an arbitrary (and probably too low)\nlimit into the number of tables the planner can look at in terms of\noptimizing certain types of plans. It's the first thing to look at\nwhen you add tables to a big query and performance falls off a cliff\nwhen it shouldn't.\n\nmerlin\n", "msg_date": "Thu, 16 Apr 2009 08:11:30 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Thu, Apr 16, 2009 at 8:05 AM, Matthew Wakeling <[email protected]> wrote:\n>> That solves the problem. So, a view is treated as a subquery then?\n\n> no...the view is simply inlined into the query (think C macro) using\n> the rules. You just bumped into an arbitrary (and probably too low)\n> limit into the number of tables the planner can look at in terms of\n> optimizing certain types of plans.\n\nBear in mind that those limits exist to keep you from running into\nexponentially increasing planning time when the size of a planning\nproblem gets big. \"Raise 'em to the moon\" isn't really a sane strategy.\nIt might be that we could get away with raising them by one or two given\nthe general improvement in hardware since the values were last looked\nat; but I'd be hesitant to push the defaults further than that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 09:49:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision " }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n> Bear in mind that those limits exist to keep you from running into\n> exponentially increasing planning time when the size of a planning\n> problem gets big. \"Raise 'em to the moon\" isn't really a sane\nstrategy.\n> It might be that we could get away with raising them by one or two\ngiven\n> the general improvement in hardware since the values were last\nlooked\n> at; but I'd be hesitant to push the defaults further than that.\n \nI also think that there was a change somewhere in the 8.2 or 8.3 time\nframe which mitigated this. (Perhaps a change in how statistics were\nscanned?) The combination of a large statistics target and higher\nlimits used to drive plan time through the roof, but I'm now seeing\nplan times around 50 ms for limits of 20 and statistics targets of\n100. 
Given the savings from the better plans, it's worth it, at least\nin our case.\n \nI wonder what sort of testing would be required to determine a safe\ninstallation default with the current code.\n \n-Kevin\n", "msg_date": "Thu, 16 Apr 2009 09:11:14 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "On Thu, Apr 16, 2009 at 10:11 AM, Kevin Grittner\n<[email protected]> wrote:\n> Tom Lane <[email protected]> wrote:\n>> Bear in mind that those limits exist to keep you from running into\n>> exponentially increasing planning time when the size of a planning\n>> problem gets big.  \"Raise 'em to the moon\" isn't really a sane\n> strategy.\n>> It might be that we could get away with raising them by one or two\n> given\n>> the general improvement in hardware since the values were last\n> looked\n>> at; but I'd be hesitant to push the defaults further than that.\n>\n> I also think that there was a change somewhere in the 8.2 or 8.3 time\n> frame which mitigated this.  (Perhaps a change in how statistics were\n> scanned?)  The combination of a large statistics target and higher\n> limits used to drive plan time through the roof, but I'm now seeing\n> plan times around 50 ms for limits of 20 and statistics targets of\n> 100.  Given the savings from the better plans, it's worth it, at least\n> in our case.\n>\n> I wonder what sort of testing would be required to determine a safe\n> installation default with the current code.\n\nWell, given all the variables, maybe we should instead bet targeting\nplan time, either indirectly vi estimated values, or directly by\nallowing a configurable planning timeout, jumping off to alternate\napproach (nestloopy style, or geqo) if available.\n\nmerlin\n", "msg_date": "Thu, 16 Apr 2009 10:44:03 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "On Thu, Apr 16, 2009 at 9:49 AM, Tom Lane <[email protected]> wrote:\n> Merlin Moncure <[email protected]> writes:\n>> On Thu, Apr 16, 2009 at 8:05 AM, Matthew Wakeling <[email protected]> wrote:\n>>> That solves the problem. So, a view is treated as a subquery then?\n>\n>> no...the view is simply inlined into the query (think C macro) using\n>> the rules.  You just bumped into an arbitrary (and probably too low)\n>> limit into the number of tables the planner can look at in terms of\n>> optimizing certain types of plans.\n>\n> Bear in mind that those limits exist to keep you from running into\n> exponentially increasing planning time when the size of a planning\n> problem gets big.  \"Raise 'em to the moon\" isn't really a sane strategy.\n> It might be that we could get away with raising them by one or two given\n> the general improvement in hardware since the values were last looked\n> at; but I'd be hesitant to push the defaults further than that.\n\nI hasten to point out that I only suggested raising them to the moon\nas a DEBUGGING strategy, not a production configuration.\n\nI do however suspect that raising the defaults would be a good idea.\nIt seems that the limit has been 8 since those parameters were added\nback in January of 2003, and yes, hardware is a lot better now. 
We\nshould probably raise geqo_threshold at the same time, since that's\nsupposed to be larger than these parameters and the default is only\n12.\n\n...Robert\n", "msg_date": "Thu, 16 Apr 2009 11:49:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "On Thu, 16 Apr 2009, Robert Haas wrote:\n> I hasten to point out that I only suggested raising them to the moon\n> as a DEBUGGING strategy, not a production configuration.\n\nThe problem is that we have created a view that by itself a very \ntime-consuming query to answer, relying on it being incorporated into a \nquery that will constrain it and cause it to be evaluated a lot quicker. \nThis kind of scenario kind of guarantees a bad plan as soon as the number \nof tables reaches from_collapse_limit.\n\nMatthew\n\n-- \n Failure is not an option. It comes bundled with your Microsoft product. \n -- Ferenc Mantfeld\n", "msg_date": "Thu, 16 Apr 2009 16:54:49 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really dumb planner decision" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Thu, 16 Apr 2009, Robert Haas wrote:\n>> I hasten to point out that I only suggested raising them to the moon\n>> as a DEBUGGING strategy, not a production configuration.\n\n> The problem is that we have created a view that by itself a very \n> time-consuming query to answer, relying on it being incorporated into a \n> query that will constrain it and cause it to be evaluated a lot quicker. \n> This kind of scenario kind of guarantees a bad plan as soon as the number \n> of tables reaches from_collapse_limit.\n\nWell, if the payoff for you exceeds the extra planning time, then you\nraise the setting. That's why it's a configurable knob. I was just\npointing out that there are downsides to raising it further than\nabsolutely necessary.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 12:04:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really dumb planner decision " } ]
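For anyone replaying this thread, the per-session experiment looks like the following; the numbers are arbitrary illustrations rather than recommendations, and Tom's caveat about planning time still applies.

-- let the planner flatten and reorder the whole nine-relation join,
-- for this session only
SET from_collapse_limit = 16;
SET join_collapse_limit = 16;
SET geqo_threshold = 20;   -- keep GEQO above the collapse limits

-- EXPLAIN ANALYZE <the nine-table query from the start of the thread>;

RESET from_collapse_limit;
RESET join_collapse_limit;
RESET geqo_threshold;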
[ { "msg_contents": "\nI have been doing some queries that are best answered with GiST indexes, \nhowever I have found that their performance is a little lacking. I thought \nI would do a direct comparison on a level playing field. Here are two \nEXPLAIN ANALYSE results for the same query, with two different indexes. \nThe two indexes are identical except that one is btree and the other GiST.\n\nHere is the query:\n\nSELECT *\nFROM\n location l1,\n location l2,\n gene,\n primer\nWHERE\n l1.subjectid <> l2.subjectid\n AND l1.objectid = l2.objectid\n AND l1.subjectid = gene.id\n AND l2.subjectid = primer.id\n AND l2.intermine_start <= l1.intermine_start\n AND l2.intermine_end >= l1.intermine_start\n\nHere is the btree index:\n\nCREATE INDEX location_object_start ON location (objectid, intermine_start);\n\nQUERY PLAN\n----------------------------------------------------------------------\n Hash Join\n (cost=26213.16..135980894.76 rows=3155740824 width=484)\n (actual time=2799.260..14256.588 rows=2758 loops=1)\n Hash Cond: (l1.subjectid = gene.id)\n -> Nested Loop\n (cost=0.00..4364485.01 rows=8891802645 width=324)\n (actual time=9.748..10418.807 rows=390695 loops=1)\n Join Filter: (l1.subjectid <> l2.subjectid)\n -> Nested Loop\n (cost=0.00..446862.58 rows=572239 width=259)\n (actual time=9.720..4226.117 rows=211880 loops=1)\n -> Seq Scan on primer\n (cost=0.00..15358.80 rows=211880 width=194)\n (actual time=9.678..579.877 rows=211880 loops=1)\n -> Index Scan using location__key_all on location l2\n (cost=0.00..2.00 rows=3 width=65)\n (actual time=0.004..0.007 rows=1 loops=211880)\n Index Cond: (l2.subjectid = primer.id)\n -> Index Scan using location_object_start on location l1\n (cost=0.00..3.85 rows=150 width=65)\n (actual time=0.005..0.012 rows=3 loops=211880)\n Index Cond: ((l1.objectid = l2.objectid) AND (l2.intermine_start <= l1.intermine_start) AND (l2.intermine_end >= l1.intermine_start))\n -> Hash\n (cost=20496.96..20496.96 rows=457296 width=160)\n (actual time=2788.698..2788.698 rows=457296 loops=1)\n -> Seq Scan on gene\n (cost=0.00..20496.96 rows=457296 width=160)\n (actual time=0.038..1420.604 rows=457296 loops=1)\n Total runtime: 14263.846 ms\n(13 rows)\n\n\nHere is the GiST index:\n\nCREATE INDEX location_object_start_gist ON location USING gist (objectid, intermine_start);\n\nQUERY PLAN\n------------------------------------------------------------------------\n Hash Join\n (cost=26213.16..136159960.32 rows=3155740824 width=484)\n (actual time=2576.109..2300486.267 rows=2758 loops=1)\n Hash Cond: (l1.subjectid = gene.id)\n -> Nested Loop\n (cost=0.00..4543550.56 rows=8891802645 width=324)\n (actual time=366.121..2296668.740 rows=390695 loops=1)\n Join Filter: (l1.subjectid <> l2.subjectid)\n -> Nested Loop\n (cost=0.00..446862.58 rows=572239 width=259)\n (actual time=362.774..13423.443 rows=211880 loops=1)\n -> Seq Scan on primer\n (cost=0.00..15358.80 rows=211880 width=194)\n (actual time=319.559..1296.907 rows=211880 loops=1)\n -> Index Scan using location__key_all on location l2\n (cost=0.00..2.00 rows=3 width=65)\n (actual time=0.041..0.045 rows=1 loops=211880)\n Index Cond: (l2.subjectid = primer.id)\n -> Index Scan using location_object_start_gist on location l1\n (cost=0.00..4.16 rows=150 width=65)\n (actual time=3.354..10.757 rows=3 loops=211880)\n Index Cond: ((l1.objectid = l2.objectid) AND (l2.intermine_start <= l1.intermine_start) AND (l2.intermine_end >= l1.intermine_start))\n -> Hash\n (cost=20496.96..20496.96 rows=457296 width=160)\n (actual 
time=2157.914..2157.914 rows=457296 loops=1)\n -> Seq Scan on gene\n (cost=0.00..20496.96 rows=457296 width=160)\n (actual time=3.904..1206.907 rows=457296 loops=1)\n Total runtime: 2300510.674 ms\n(13 rows)\n\nThe query plans are identical except in the type of index used, but there \nis a factor of a few hundred in execute time. Is this the kind of factor \nthat would be expected, or is there something amiss? Is this seen as \nsomething that might be improved in the future?\n\nMatthew\n\n-- \n \"We have always been quite clear that Win95 and Win98 are not the systems to\n use if you are in a hostile security environment.\" \"We absolutely do recognize\n that the Internet is a hostile environment.\" Paul Leach <[email protected]>\n", "msg_date": "Thu, 16 Apr 2009 17:06:03 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "GiST index performance" }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote:\n> I have been doing some queries that are best answered with GiST\n> indexes\n \nFor what definition of \"best answered\"?\n \nSince an index is only a performance tuning feature (unless declared\nUNIQUE), and should never alter the results (beyond possibly affecting\nrow order if that is unspecified), how is an index which performs\nworse than an alternative the best answer?\n \n-Kevin\n", "msg_date": "Thu, 16 Apr 2009 11:33:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Thu, 16 Apr 2009, Kevin Grittner wrote:\n> Matthew Wakeling <[email protected]> wrote:\n>> I have been doing some queries that are best answered with GiST\n>> indexes\n>\n> For what definition of \"best answered\"?\n>\n> Since an index is only a performance tuning feature (unless declared\n> UNIQUE), and should never alter the results (beyond possibly affecting\n> row order if that is unspecified), how is an index which performs\n> worse than an alternative the best answer?\n\nDon't be misled by my example using integers. I'm doing queries on the \nbioseg data type, and the only index type for that is GiST. There isn't a \nbetter alternative.\n\nMatthew\n\n-- \n \"Finger to spiritual emptiness underlying everything.\"\n -- How a foreign C manual referred to a \"pointer to void.\"\n", "msg_date": "Thu, 16 Apr 2009 17:37:58 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Matthew Wakeling <[email protected]> wrote:\n>> I have been doing some queries that are best answered with GiST\n>> indexes\n \n> For what definition of \"best answered\"?\n \n> Since an index is only a performance tuning feature (unless declared\n> UNIQUE), and should never alter the results (beyond possibly affecting\n> row order if that is unspecified), how is an index which performs\n> worse than an alternative the best answer?\n\nThe main point of GIST is to be able to index queries that simply are\nnot indexable in btree. So I assume that Matthew is really worried\nabout some queries that are not btree-indexable. One would fully\nexpect btree to beat out GIST for btree-indexable cases. I think the\nsignificant point here is that it's winning by a factor of a couple\nhundred; that's pretty awful, and might point to some implementation\nproblem.\n\nMatthew, can you put together a self-contained test case with a similar\nslowdown? 
Also, what are the physical sizes of the two indexes?\nI notice that the inner nestloop join gets slower too, when it's not\nchanged at all --- that suggests that the overall I/O load is a lot\nworse, so maybe the reason the query is falling off a performance cliff\nis that the GIST index fails to fit in cache.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 12:46:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "hello,\n\nthere is other performance problem on this request.\n\nIf you analyse query plan, you see that most of the time are lost during \nsequencial scan, and you have 2 seq scan.\n\nYou have to create other indexes to match the request.\n\nPostgresq is totally dependant on index to reach is performance.\n\nRegarding gist or btree, I personnaly had better performance with btree.\n\nRegards\n\ndavid\n\nMatthew Wakeling a �crit :\n>\n> I have been doing some queries that are best answered with GiST \n> indexes, however I have found that their performance is a little \n> lacking. I thought I would do a direct comparison on a level playing \n> field. Here are two EXPLAIN ANALYSE results for the same query, with \n> two different indexes. The two indexes are identical except that one \n> is btree and the other GiST.\n>\n> Here is the query:\n>\n> SELECT *\n> FROM\n> location l1,\n> location l2,\n> gene,\n> primer\n> WHERE\n> l1.subjectid <> l2.subjectid\n> AND l1.objectid = l2.objectid\n> AND l1.subjectid = gene.id\n> AND l2.subjectid = primer.id\n> AND l2.intermine_start <= l1.intermine_start\n> AND l2.intermine_end >= l1.intermine_start\n>\n> Here is the btree index:\n>\n> CREATE INDEX location_object_start ON location (objectid, \n> intermine_start);\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Hash Join\n> (cost=26213.16..135980894.76 rows=3155740824 width=484)\n> (actual time=2799.260..14256.588 rows=2758 loops=1)\n> Hash Cond: (l1.subjectid = gene.id)\n> -> Nested Loop\n> (cost=0.00..4364485.01 rows=8891802645 width=324)\n> (actual time=9.748..10418.807 rows=390695 loops=1)\n> Join Filter: (l1.subjectid <> l2.subjectid)\n> -> Nested Loop\n> (cost=0.00..446862.58 rows=572239 width=259)\n> (actual time=9.720..4226.117 rows=211880 loops=1)\n> -> Seq Scan on primer\n> (cost=0.00..15358.80 rows=211880 width=194)\n> (actual time=9.678..579.877 rows=211880 loops=1)\n> -> Index Scan using location__key_all on location l2\n> (cost=0.00..2.00 rows=3 width=65)\n> (actual time=0.004..0.007 rows=1 loops=211880)\n> Index Cond: (l2.subjectid = primer.id)\n> -> Index Scan using location_object_start on location l1\n> (cost=0.00..3.85 rows=150 width=65)\n> (actual time=0.005..0.012 rows=3 loops=211880)\n> Index Cond: ((l1.objectid = l2.objectid) AND \n> (l2.intermine_start <= l1.intermine_start) AND (l2.intermine_end >= \n> l1.intermine_start))\n> -> Hash\n> (cost=20496.96..20496.96 rows=457296 width=160)\n> (actual time=2788.698..2788.698 rows=457296 loops=1)\n> -> Seq Scan on gene\n> (cost=0.00..20496.96 rows=457296 width=160)\n> (actual time=0.038..1420.604 rows=457296 loops=1)\n> Total runtime: 14263.846 ms\n> (13 rows)\n>\n>\n> Here is the GiST index:\n>\n> CREATE INDEX location_object_start_gist ON location USING gist \n> (objectid, intermine_start);\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Hash Join\n> (cost=26213.16..136159960.32 rows=3155740824 width=484)\n> (actual 
time=2576.109..2300486.267 rows=2758 loops=1)\n> Hash Cond: (l1.subjectid = gene.id)\n> -> Nested Loop\n> (cost=0.00..4543550.56 rows=8891802645 width=324)\n> (actual time=366.121..2296668.740 rows=390695 loops=1)\n> Join Filter: (l1.subjectid <> l2.subjectid)\n> -> Nested Loop\n> (cost=0.00..446862.58 rows=572239 width=259)\n> (actual time=362.774..13423.443 rows=211880 loops=1)\n> -> Seq Scan on primer\n> (cost=0.00..15358.80 rows=211880 width=194)\n> (actual time=319.559..1296.907 rows=211880 loops=1)\n> -> Index Scan using location__key_all on location l2\n> (cost=0.00..2.00 rows=3 width=65)\n> (actual time=0.041..0.045 rows=1 loops=211880)\n> Index Cond: (l2.subjectid = primer.id)\n> -> Index Scan using location_object_start_gist on location l1\n> (cost=0.00..4.16 rows=150 width=65)\n> (actual time=3.354..10.757 rows=3 loops=211880)\n> Index Cond: ((l1.objectid = l2.objectid) AND \n> (l2.intermine_start <= l1.intermine_start) AND (l2.intermine_end >= \n> l1.intermine_start))\n> -> Hash\n> (cost=20496.96..20496.96 rows=457296 width=160)\n> (actual time=2157.914..2157.914 rows=457296 loops=1)\n> -> Seq Scan on gene\n> (cost=0.00..20496.96 rows=457296 width=160)\n> (actual time=3.904..1206.907 rows=457296 loops=1)\n> Total runtime: 2300510.674 ms\n> (13 rows)\n>\n> The query plans are identical except in the type of index used, but \n> there is a factor of a few hundred in execute time. Is this the kind \n> of factor that would be expected, or is there something amiss? Is this \n> seen as something that might be improved in the future?\n>\n> Matthew\n>\n\n", "msg_date": "Thu, 16 Apr 2009 19:19:18 +0200", "msg_from": "dforum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Thu, 16 Apr 2009, dforum wrote:\n> there is other performance problem on this request.\n>\n> If you analyse query plan, you see that most of the time are lost during \n> sequencial scan, and you have 2 seq scan.\n\nNonsense. Sequential scans account for all of one or two seconds of \nprocessing in these queries, which are 14 seconds and 38 minutes \nrespectively.\n\nMatthew\n\n-- \n Doctor: Are you okay? You appear to be injured.\n Neelix: Aaaaaaah!\n Doctor: It's okay, it looks superficial.\n Neelix: Am I going to die?\n Doctor: Not unless you are allergic to tomatoes. 
This appears to be a sauce\n some kind.\n", "msg_date": "Thu, 16 Apr 2009 18:23:49 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "dforum <[email protected]> writes:\n> If you analyse query plan, you see that most of the time are lost during \n> sequencial scan, and you have 2 seq scan.\n\nI think you missed the loops count.\n\n>> -> Index Scan using location_object_start_gist on location l1\n>> (cost=0.00..4.16 rows=150 width=65)\n>> (actual time=3.354..10.757 rows=3 loops=211880)\n>> Index Cond: ((l1.objectid = l2.objectid) AND \n>> (l2.intermine_start <= l1.intermine_start) AND (l2.intermine_end >= \n>> l1.intermine_start))\n\nThis indexscan is accounting for 10.757 * 211880 msec, which is 99%\nof the runtime.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 13:52:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Thu, 16 Apr 2009, Tom Lane wrote:\n> Matthew, can you put together a self-contained test case with a similar\n> slowdown?\n\nIt isn't the smoking gun I thought it would be, but:\n\nCREATE TABLE a AS SELECT a FROM generate_series(1,1000000) AS a(a);\nCREATE TABLE b AS SELECT b FROM generate_series(1,1000000) AS b(b);\n\nANALYSE;\n\nCREATE INDEX a_a ON a (a);\n\nEXPLAIN ANALYSE SELECT * FROM a, b WHERE a.a BETWEEN b.b AND b.b + 2;\n\nDROP INDEX a_a;\nCREATE INDEX a_a ON a USING gist (a);\n\nEXPLAIN ANALYSE SELECT * FROM a, b WHERE a.a BETWEEN b.b AND b.b + 2;\n\n\nI see four seconds versus thirty seconds. The difference was much greater \non my non-test-case - I wonder if multi-column indexing has something to \ndo with it.\n\n> Also, what are the physical sizes of the two indexes?\n\n relname | pg_size_pretty\n----------------------------+----------------\n location_object_start_gist | 193 MB\n location_object_start | 75 MB\n(2 rows)\n\n> I notice that the inner nestloop join gets slower too, when it's not\n> changed at all --- that suggests that the overall I/O load is a lot\n> worse, so maybe the reason the query is falling off a performance cliff\n> is that the GIST index fails to fit in cache.\n\nMemory in the machine is 16GB.\n\nMatthew\n\n-- \n [About NP-completeness] These are the problems that make efficient use of\n the Fairy Godmother. -- Computer Science Lecturer\n", "msg_date": "Thu, 16 Apr 2009 18:54:05 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> On Thu, 16 Apr 2009, Tom Lane wrote:\n>> Also, what are the physical sizes of the two indexes?\n\n> location_object_start_gist | 193 MB\n> location_object_start | 75 MB\n\n>> I notice that the inner nestloop join gets slower too, when it's not\n>> changed at all --- that suggests that the overall I/O load is a lot\n>> worse, so maybe the reason the query is falling off a performance cliff\n>> is that the GIST index fails to fit in cache.\n\n> Memory in the machine is 16GB.\n\nHmm, and what is shared_buffers set to? How big are the tables and\nother indexes used in the query? 
We still have to explain why the\ninner nestloop got slower, and it's hard to see that unless something\nstopped fitting in cache.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 13:59:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Thu, 16 Apr 2009, Tom Lane wrote:\n> Hmm, and what is shared_buffers set to? How big are the tables and\n> other indexes used in the query? We still have to explain why the\n> inner nestloop got slower, and it's hard to see that unless something\n> stopped fitting in cache.\n\nI just noticed that someone has started running a big java program (6GB \nRAM so far) on that machine. Maybe it was running during the bad run. I'll \nsee if I can re-run those two queries later on when the machine is idle.\n\nshared_buffers = 500MB\n\nLocation table: 336 MB\nGene table: 124 MB\nPrimer table: 103 MB\n\nlocation__key_all index: 334 MB\n\nMatthew\n\n-- \n For those of you who are into writing programs that are as obscure and\n complicated as possible, there are opportunities for... real fun here\n -- Computer Science Lecturer\n", "msg_date": "Thu, 16 Apr 2009 19:05:23 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "dforum wrote:\n> hello,\n> \n> there is other performance problem on this request.\n> \n> If you analyse query plan, you see that most of the time are lost during\n> sequencial scan, and you have 2 seq scan.\n> \n> You have to create other indexes to match the request.\n> \n> Postgresq is totally dependant on index to reach is performance.\n\nThat depends a lot on your queries. Sometimes a sequential scan is a\nfaster and better choice. It may also be faster for small tables.\n\nI've usually found that when I (for performance testing purposes) force\nthe planner to an index scan instead of its preferred sequential scan,\nthe query runs slower than it did with a sequential scan.\n\nSure, there are queries that are horrifyingly slow without appropriate\nindexes, but I wouldn't go so far as to say that Pg is totally dependent\non indexes to perform well. It depends a lot on the query.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 17 Apr 2009 09:22:08 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Thu, 16 Apr 2009, Tom Lane wrote:\n> Matthew, can you put together a self-contained test case with a similar\n> slowdown?\n\nI have done a bit of investigation, and I think I might have found the \nsmoking gun I was looking for. I just added a load of debug to the gist \nconsistent function on the bioseg type, and did a single overlap lookup in \nthe index.\n\nThe index contains values ranging from 1 to 28,000,000 or so.\nThe range I looked up was 23503297..23504738 (so a very small proportion).\nThe index contains 375154 entries.\nThe index returned 59 rows.\nThe consistent method was called 54022 times - 5828 times for branch\n (internal) index entries, and 48194 times for leaf entries.\n\nObviously this is a really bad index layout - scanning that many entries \nfor such a small output. In fact, I saw lots of overlapping branch index \nentries, so the index isn't actually differentiating between the different \nbranches of the tree very well. This indicates a failure of the picksplit \nor the penalty functions. 
I shall investigate this further next week.\n\nI shall also investigate whether this is the exact same problem that I had \nwith the int4 gist system.\n\nMatthew\n\n-- \nSo, given 'D' is undeclared too, with a default of zero, C++ is equal to D.\n mnw21, commenting on the \"Surely the value of C++ is zero, but C is now 1\"\n response to \"No, C++ isn't equal to D. 'C' is undeclared [...] C++ should\n really be called 1\" response to \"C++ -- shouldn't it be called D?\"\n", "msg_date": "Fri, 17 Apr 2009 18:18:45 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Fri, 17 Apr 2009, Matthew Wakeling wrote:\n> I have done a bit of investigation, and I think I might have found the \n> smoking gun I was looking for.\n\nI have found a bug in the contrib package seg, which has been copied into \nthe bioseg data type as well. It causes the index to be created with \nhorribly bad unselective trees, so that when a search is performed many of \nthe branches of the tree need to be followed. This explanation does not \nextend to btree_gist, so I will have to further investigate that. Apply \nthe following patch to contrib/seg/seg.c:\n\n*** seg.c\t2006-09-10 18:36:51.000000000 +0100\n--- seg.c_new\t2009-04-20 15:02:52.000000000 +0100\n***************\n*** 426,432 ****\n \t\telse\n \t\t{\n \t\t\tdatum_r = union_dr;\n! \t\t\tsize_r = size_alpha;\n \t\t\t*right++ = i;\n \t\t\tv->spl_nright++;\n \t\t}\n--- 426,432 ----\n \t\telse\n \t\t{\n \t\t\tdatum_r = union_dr;\n! \t\t\tsize_r = size_beta;\n \t\t\t*right++ = i;\n \t\t\tv->spl_nright++;\n \t\t}\n\n\nMatthew\n\n-- \n The early bird gets the worm. If you want something else for breakfast, get\n up later.\n", "msg_date": "Mon, 20 Apr 2009 15:11:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> I have found a bug in the contrib package seg, which has been copied into \n> the bioseg data type as well. It causes the index to be created with \n> horribly bad unselective trees, so that when a search is performed many of \n> the branches of the tree need to be followed. This explanation does not \n> extend to btree_gist, so I will have to further investigate that. Apply \n> the following patch to contrib/seg/seg.c:\n\n> *** seg.c\t2006-09-10 18:36:51.000000000 +0100\n> --- seg.c_new\t2009-04-20 15:02:52.000000000 +0100\n> ***************\n> *** 426,432 ****\n> \t\telse\n> \t\t{\n> \t\t\tdatum_r = union_dr;\n> ! \t\t\tsize_r = size_alpha;\n> \t\t\t*right++ = i;\n> \t\t\tv->spl_nright++;\n> \t\t}\n> --- 426,432 ----\n> \t\telse\n> \t\t{\n> \t\t\tdatum_r = union_dr;\n> ! \t\t\tsize_r = size_beta;\n> \t\t\t*right++ = i;\n> \t\t\tv->spl_nright++;\n> \t\t}\n\nLooks like contrib/cube has the same error. I don't see a similar code\npattern elsewhere though. Oleg, Teodor, do you concur that this is a\ncorrect patch? Is it safe to back-patch (I think it should be)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Apr 2009 11:27:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Mon, 20 Apr 2009, Teodor Sigaev wrote:\n>> Looks like contrib/cube has the same error. I don't see a similar code\n>> pattern elsewhere though. Oleg, Teodor, do you concur that this is a\n>> correct patch? 
Is it safe to back-patch (I think it should be)?\n> Yeah, good catch, and it doesn't touch any already-on-disk data. Although \n> release notes should mention advice about REINDEX seg and cube opclasses.\n\nUnfortunately, it seems there is another bug in the picksplit function. \nMy patch fixes a bug that reveals this new bug. The whole picksplit \nalgorithm is fundamentally broken, and needs to be rewritten completely, \nwhich is what I am doing.\n\nIf you apply my patch, then index sizes will go up by a factor of ten or \nso, because the picksplit function tends to split the set of 367 ranges \ninto one set of 366 and another set of 1, leading to a horribly unbalanced \ntree. Before the patch, the different branches of the tree were \nunselective, so new entries would just get stuffed in anywhere, leading to \na much more \"balanced\" tree.\n\nI shall have a proper fix to this problem later today.\n\nMatthew\n\n-- \n It's one of those irregular verbs - \"I have an independent mind,\" \"You are\n an eccentric,\" \"He is round the twist.\"\n -- Bernard Woolly, Yes Prime Minister\n", "msg_date": "Tue, 21 Apr 2009 11:40:19 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Tue, 21 Apr 2009, Matthew Wakeling wrote:\n> Unfortunately, it seems there is another bug in the picksplit function. My \n> patch fixes a bug that reveals this new bug. The whole picksplit algorithm is \n> fundamentally broken, and needs to be rewritten completely, which is what I \n> am doing.\n\nI have now rewritten the picksplit and penalty functions for the bioseg \ndata type, and they perform much better. The index size is now 164MB, \ncompared to 350MB or so originally, and 2400MB after my earlier bugfix. \nExecution time of one of our queries (basically a nested loop join over \na sequential scan and an index lookup in this index type) has gone down \nfrom 45 minutes to two minutes.\n\nI have abandoned \"Guttman's poly time split algorithm\". A fundamental flaw \nin the picksplit algorithm is that it would create two separate target \nsets, and incrementally add entries to whichever one would grow the least \nin range size. However, if the entries arrived in any sort of order, they \nwould all be added to the one set, growing it by a small amount each time. \nThis caused the picksplit algorithm to split a set of 367 entries into a \nset of 366 and a set of one a high proportion of the time.\n\nI have replaced the picksplit algorithm with a simple one. For each range \nelement, find the midpoint of the range. Then find the mean of all the \nmidpoints. All elements with a midpoint below the mean go in one set, and \nthe others go in the second set. This usually splits the entries in a \nmeaningful way.\n\nI have also changed the penalty function. Previously, the penalty was the \namount that the range would have to expand. So, if a new element fitted \ninside the existing range, then the penalty is zero. I have changed it to \ncreate a tie-break between multiple index pages that the element would fit \nin without expanding the range - the element should be inserted into the \nindex page with the smallest range. This prevents large elements from \nmessing up the index by forcing a large index page range that sucks in all \nthe elements in the whole area into a non-selective group.\n\nI may experiment with improving these functions further. 
The main problem \nwith this index is the fact that I need to index ranges with a wide \nvariety of widths, and I have a couple more strategies yet to help with \nthat.\n\nI will post a patch when I have ported my bioseg code over to the seg data \ntype.\n\nMatthew\n\n-- \n Riker: Our memory pathways have become accustomed to your sensory input.\n Data: I understand - I'm fond of you too, Commander. And you too Counsellor\n", "msg_date": "Wed, 22 Apr 2009 14:53:05 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Wed, 22 Apr 2009, Matthew Wakeling wrote:\n> I will post a patch when I have ported my bioseg code over to the seg data \n> type.\n\nHere is my patch ported over to the seg contrib package, attached. Apply \nit to seg.c and all should be well. A similar thing needs to be done to \ncube, but I haven't looked at that.\n\nMatthew\n\n-- \n An optimist sees the glass as half full, a pessimist as half empty,\n and an engineer as having redundant storage capacity.", "msg_date": "Wed, 22 Apr 2009 17:06:25 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Here is my patch ported over to the seg contrib package, attached. Apply \n> it to seg.c and all should be well. A similar thing needs to be done to \n> cube, but I haven't looked at that.\n\nTeodor, Oleg, do you intend to review/apply this patch?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 May 2009 18:07:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Wed, 6 May 2009, Tom Lane wrote:\n\n> Matthew Wakeling <[email protected]> writes:\n>> Here is my patch ported over to the seg contrib package, attached. Apply\n>> it to seg.c and all should be well. A similar thing needs to be done to\n>> cube, but I haven't looked at that.\n>\n> Teodor, Oleg, do you intend to review/apply this patch?\n\nTom,\n\nI just returned from trek around Annapurna and just learned about Matthew's\nexperiments, Teodor is in holidays and will be available after May 11, \nthen there are should be PGCon, so if it can wait, we could look on this\nafter PGCon.\n\nMatthew, did you try various data ? From our experience we learned there\nare can be various corner cases.\n\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 7 May 2009 15:09:19 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "On Thu, 7 May 2009, Oleg Bartunov wrote:\n> Did you try Guttman quadratic split algorithm ? We also found linear\n> split algorithm for Rtree.\n\nThe existing (bugfixed) seg split algorithm is the Guttman quadratic split \nalgorithm. Guttman did all his work on two-dimensional and above data, \ndismissing one-dimensional data as being handled adequately by B-trees, \nwhich is not true for segment overlaps. 
It turns out that the algorithm \nhas a weakness with certain types of data, and one-dimensional data is \nalmost certain to exercise that weakness. The greater the number of \ndimensions, the less the weakness is exercised.\n\nThe problem is that the algorithm does not calculate a split pivot. \nInstead it finds two suitable entries, and adds the remaining entries to \nthose two in turn. This can lead to the majority of the entries being \nadded to just one side. In fact, I saw lots of cases where 367 entries \nwere being split into two pages of 366 and one entry.\n\nGuttman's linear split algorithm has the same weakness.\n\n>> One thing I am seeing is a really big difference in performance between \n>> Postgres/GiST and a Java implementation I have written, using the same \n>> algorithms. Postgres takes three minutes to perform a set of index lookups \n>> while java takes six seconds. The old version of bioseg took an hour. I \n>> can't see anything in the GiST support code that could account for this.\n>\n> is the number of index lookups different, or just index lookup time is very\n> big ?\n\nSame number of index lookups. Same algorithms. I have a set of 681879 \nsegments, and I load them all into the index. I then query the index for \noverlaps for each one in turn. For some reason, GiST lookups seem to be \nslow, even if they are using a good algorithm. I have seen that problem \nwith btree_gist on integers too. I can't see any reason for this is the \nGiST code - it all seems pretty tight to me. We probably need to do some \nprofiling.\n\nMatthew\n\n-- \n I suppose some of you have done a Continuous Maths course. Yes? Continuous\n Maths? <menacing stares from audience> Whoah, it was like that, was it!\n -- Computer Science Lecturer\n", "msg_date": "Thu, 7 May 2009 13:31:09 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST index performance " }, { "msg_contents": "\nWas this corrected? I don't see any commits to seg.c.\n\n---------------------------------------------------------------------------\n\nMatthew Wakeling wrote:\n> On Thu, 7 May 2009, Oleg Bartunov wrote:\n> > Did you try Guttman quadratic split algorithm ? We also found linear\n> > split algorithm for Rtree.\n> \n> The existing (bugfixed) seg split algorithm is the Guttman quadratic split \n> algorithm. Guttman did all his work on two-dimensional and above data, \n> dismissing one-dimensional data as being handled adequately by B-trees, \n> which is not true for segment overlaps. It turns out that the algorithm \n> has a weakness with certain types of data, and one-dimensional data is \n> almost certain to exercise that weakness. The greater the number of \n> dimensions, the less the weakness is exercised.\n> \n> The problem is that the algorithm does not calculate a split pivot. \n> Instead it finds two suitable entries, and adds the remaining entries to \n> those two in turn. This can lead to the majority of the entries being \n> added to just one side. In fact, I saw lots of cases where 367 entries \n> were being split into two pages of 366 and one entry.\n> \n> Guttman's linear split algorithm has the same weakness.\n> \n> >> One thing I am seeing is a really big difference in performance between \n> >> Postgres/GiST and a Java implementation I have written, using the same \n> >> algorithms. Postgres takes three minutes to perform a set of index lookups \n> >> while java takes six seconds. The old version of bioseg took an hour. 
I \n> >> can't see anything in the GiST support code that could account for this.\n> >\n> > is the number of index lookups different, or just index lookup time is very\n> > big ?\n> \n> Same number of index lookups. Same algorithms. I have a set of 681879 \n> segments, and I load them all into the index. I then query the index for \n> overlaps for each one in turn. For some reason, GiST lookups seem to be \n> slow, even if they are using a good algorithm. I have seen that problem \n> with btree_gist on integers too. I can't see any reason for this is the \n> GiST code - it all seems pretty tight to me. We probably need to do some \n> profiling.\n> \n> Matthew\n> \n> -- \n> I suppose some of you have done a Continuous Maths course. Yes? Continuous\n> Maths? <menacing stares from audience> Whoah, it was like that, was it!\n> -- Computer Science Lecturer\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 25 Feb 2010 18:44:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "On Thu, Feb 25, 2010 at 6:44 PM, Bruce Momjian <[email protected]> wrote:\n> Was this corrected?  I don't see any commits to seg.c.\n\nI don't think this was ever reviewed.\n\nIt seems like a good patch but I'd be skeptical of committing it now\nunless someone has the time to review it carefully. If not, let's add\nit to the next CF so we don't lose it again.\n\n...Robert\n\n>\n> ---------------------------------------------------------------------------\n>\n> Matthew Wakeling wrote:\n>> On Thu, 7 May 2009, Oleg Bartunov wrote:\n>> > Did you try Guttman quadratic split algorithm ? We also found linear\n>> > split algorithm for Rtree.\n>>\n>> The existing (bugfixed) seg split algorithm is the Guttman quadratic split\n>> algorithm. Guttman did all his work on two-dimensional and above data,\n>> dismissing one-dimensional data as being handled adequately by B-trees,\n>> which is not true for segment overlaps. It turns out that the algorithm\n>> has a weakness with certain types of data, and one-dimensional data is\n>> almost certain to exercise that weakness. The greater the number of\n>> dimensions, the less the weakness is exercised.\n>>\n>> The problem is that the algorithm does not calculate a split pivot.\n>> Instead it finds two suitable entries, and adds the remaining entries to\n>> those two in turn. This can lead to the majority of the entries being\n>> added to just one side. In fact, I saw lots of cases where 367 entries\n>> were being split into two pages of 366 and one entry.\n>>\n>> Guttman's linear split algorithm has the same weakness.\n>>\n>> >> One thing I am seeing is a really big difference in performance between\n>> >> Postgres/GiST and a Java implementation I have written, using the same\n>> >> algorithms. Postgres takes three minutes to perform a set of index lookups\n>> >> while java takes six seconds. The old version of bioseg took an hour. 
I\n>> >> can't see anything in the GiST support code that could account for this.\n>> >\n>> > is the number of index lookups different, or just index lookup time is very\n>> > big ?\n>>\n>> Same number of index lookups. Same algorithms. I have a set of 681879\n>> segments, and I load them all into the index. I then query the index for\n>> overlaps for each one in turn. For some reason, GiST lookups seem to be\n>> slow, even if they are using a good algorithm. I have seen that problem\n>> with btree_gist on integers too. I can't see any reason for this is the\n>> GiST code - it all seems pretty tight to me. We probably need to do some\n>> profiling.\n>>\n>> Matthew\n>>\n>> --\n>>  I suppose some of you have done a Continuous Maths course. Yes? Continuous\n>>  Maths? <menacing stares from audience> Whoah, it was like that, was it!\n>>                                         -- Computer Science Lecturer\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n>  Bruce Momjian  <[email protected]>        http://momjian.us\n>  EnterpriseDB                             http://enterprisedb.com\n>  PG East:  http://www.enterprisedb.com/community/nav-pg-east-2010.do\n>  + If your life is a hard drive, Christ can be your backup. +\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 2 Mar 2010 11:23:22 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" }, { "msg_contents": "Robert Haas wrote:\n> On Thu, Feb 25, 2010 at 6:44 PM, Bruce Momjian <[email protected]> wrote:\n> > Was this corrected? ?I don't see any commits to seg.c.\n> \n> I don't think this was ever reviewed.\n> \n> It seems like a good patch but I'd be skeptical of committing it now\n> unless someone has the time to review it carefully. If not, let's add\n> it to the next CF so we don't lose it again.\n\nI have asked Oleg and Teodor to work on it.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n", "msg_date": "Tue, 2 Mar 2010 20:13:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST index performance" } ]
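A practical footnote to the thread above, offered as a hedged sketch rather than as part of the original discussion: the on-disk sizes quoted for the two indexes can be compared with the catalog functions below, and a GiST index built before the picksplit fix keeps its badly split tree until it is rebuilt, so a REINDEX is needed to benefit from a patched opclass. The index names are the ones used in the thread.

-- Compare the on-disk sizes of the btree and GiST indexes discussed above.
SELECT c.relname, pg_size_pretty(pg_relation_size(c.oid)) AS on_disk_size
FROM pg_class c
JOIN pg_index i ON i.indexrelid = c.oid
WHERE c.relname IN ('location_object_start', 'location_object_start_gist');

-- An index created with the buggy picksplit keeps its old page splits;
-- rebuilding it lets the corrected split logic lay the tree out again.
REINDEX INDEX location_object_start_gist;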
[ { "msg_contents": "\nPG (8.3.7) doesn't seem to want to do a hash join across two partitioned \ntables. I have two partition hierarchies: impounds (with different \nimpound sources) and liens (with vehicle liens from different companies). \nTrying to match those up gives:\n\nEXPLAIN SELECT COUNT(*)\nFROM impounds i\n \tJOIN liens l ON (i.vin = l.vin);\n\n Aggregate (cost=11164042.66..11164042.67 rows=1 width=0)\n -> Nested Loop (cost=0.27..3420012.94 rows=3097611886 width=0)\n Join Filter: ((i.vin)::text = (l.vin)::text)\n -> Append (cost=0.00..1072.77 rows=33577 width=21)\n -> Seq Scan on impounds i (cost=0.00..11.40 rows=140 width=21)\n -> Seq Scan on impounds_s1 i (cost=0.00..926.87 rows=29587 width=18)\n -> Seq Scan on impounds_s2 i (cost=0.00..99.96 rows=3296 width=18)\n -> Seq Scan on impounds_s3 i (cost=0.00..23.14 rows=414 width=18)\n -> Seq Scan on impounds_s4 i (cost=0.00..11.40 rows=140 width=21)\n -> Append (cost=0.27..101.64 rows=15 width=21)\n -> Bitmap Heap Scan on liens l (cost=0.27..5.60 rows=2 width=21)\n Recheck Cond: ((l.vin)::text = (i.vin)::text)\n -> Bitmap Index Scan on liens_pk (cost=0.00..0.27 rows=2 width=0)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using liens_s1_pk on liens_s1 l (cost=0.00..7.02 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using liens_s2_pk on liens_s2 l (cost=0.00..3.47 rows=1 width=21)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s3_pk on liens_s3 l (cost=0.00..7.52 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s4_pk on liens_s4 l (cost=0.00..7.67 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s5_pk on liens_s5 l (cost=0.00..7.62 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s6_pk on liens_s6 l (cost=0.00..7.61 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s7_pk on liens_s7 l (cost=0.00..7.50 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s8_pk on liens_s8 l (cost=0.00..7.36 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s9_pk on liens_s9 l (cost=0.00..7.43 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s10_pk on liens_s10 l (cost=0.00..7.79 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s11_pk on liens_s11 l (cost=0.00..8.07 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s12_pk on liens_s12 l (cost=0.00..8.45 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n -> Index Scan using newliens_s13_pk on liens_s13 l (cost=0.00..8.53 rows=1 width=18)\n Index Cond: ((l.vin)::text = (i.vin)::text)\n\n\nThis takes quite a while as it's got to do tons of index probes which \nresults it tons of random IO. 
I killed this after five minutes of \nrunning.\n\nBut if I do:\n\nCREATE TABLE i1 AS SELECT * FROM impounds;\nCREATE TABLE l1 AS SELECT * FROM liens;\n\nI get a reasonable plan, which runs in about 15 seconds, from:\n\nEXPLAIN SELECT COUNT(*)\nFROM i1 i\n JOIN l1 l ON (i.vin = l.vin);\n\n Aggregate (cost=749054.78..749054.79 rows=1 width=0)\n -> Hash Join (cost=1444.18..748971.43 rows=33338 width=0)\n Hash Cond: ((l.vin)::text = (i.vin)::text)\n -> Seq Scan on l1 l (cost=0.00..332068.96 rows=18449996 \nwidth=18)\n -> Hash (cost=1027.97..1027.97 rows=33297 width=18)\n -> Seq Scan on i1 i (cost=0.00..1027.97 rows=33297 \nwidth=18)\n\n\nI've tried to force the hash join plan on the partitioned tables via:\n\nset enable_nestloop to off;\n\nThis results in a merge join plan which needs to do a giant sort, again \nkilled after five minutes.\n\n Aggregate (cost=58285765.20..58285765.21 rows=1 width=0)\n -> Merge Join (cost=4077389.31..50541735.48 rows=3097611886 width=0)\n Merge Cond: ((i.vin)::text = (l.vin)::text)\n -> Sort (cost=4286.45..4370.39 rows=33577 width=21)\n Sort Key: i.vin\n -> Append (cost=0.00..1072.77 rows=33577 width=21)\n -> Seq Scan on impounds i (cost=0.00..11.40 rows=140 width=21)\n -> [Seq Scans on other partitions]\n -> Materialize (cost=4073102.86..4303737.81 rows=18450796 width=21)\n -> Sort (cost=4073102.86..4119229.85 rows=18450796 width=21)\n Sort Key: l.vin\n -> Append (cost=0.00..332797.96 rows=18450796 width=21)\n -> Seq Scan on liens l (cost=0.00..14.00 rows=400 width=21)\n -> [Seq Scans on other partitions]\n\n\nDisabling mergejoin pushes it back to a nestloop join. Why can't it hash \njoin these two together?\n\nKris Jurka\n", "msg_date": "Thu, 16 Apr 2009 19:09:37 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "No hash join across partitioned tables?" }, { "msg_contents": "Kris Jurka <[email protected]> writes:\n> PG (8.3.7) doesn't seem to want to do a hash join across two partitioned \n> tables.\n\nCould we see the whole declaration of these tables? (pg_dump -s output\nwould be convenient)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Apr 2009 19:12:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "On Thu, 16 Apr 2009, Tom Lane wrote:\n\n> Kris Jurka <[email protected]> writes:\n>> PG (8.3.7) doesn't seem to want to do a hash join across two partitioned\n>> tables.\n>\n> Could we see the whole declaration of these tables? (pg_dump -s output\n> would be convenient)\n>\n\nThe attached table definition with no data wants to mergejoin first, but \nafter disabling mergejoin it does indeed do a hashjoin.\n\nLooking back at the cost estimates for the merge and nestloop joins, it \nseems to be selecting the number of rows in the cartesian product * .005 \nwhile the number of output rows in this case is 2437 (cartesian product * \n4e-9). Perhaps the cost estimates for the real data are so high because \nof this bogus row count that the fudge factor to disable mergejoin isn't \nenough?\n\nKris Jurka", "msg_date": "Thu, 16 Apr 2009 19:30:51 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: No hash join across partitioned tables? 
" }, { "msg_contents": "\n\nOn Thu, 16 Apr 2009, Kris Jurka wrote:\n\n> Perhaps the cost estimates for the real data are so high because of this \n> bogus row count that the fudge factor to disable mergejoin isn't enough?\n>\n\nIndeed, I get these cost estimates on 8.4b1 with an increased \ndisable_cost value:\n\nnestloop: 11171206.18\nmerge: 58377401.39\nhash: 116763544.76\n\nSo the default disable_cost isn't enough to push it to use the hash join \nplan and goes back to nestloop. Since disable_cost hasn't been touched \nsince January 2000, perhaps it's time to bump that up to match today's \nhardware and problem sizes? This isn't even a particularly big problem, \nit's joing 18M rows against 30k.\n\nThe real problem is getting reasonable stats to pass through the partition \nAppend step, so it can make a reasonable estimate of the join output size.\n\nKris Jurka\n\n", "msg_date": "Thu, 16 Apr 2009 21:02:04 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "Kris Jurka <[email protected]> writes:\n> So the default disable_cost isn't enough to push it to use the hash join \n> plan and goes back to nestloop. Since disable_cost hasn't been touched \n> since January 2000, perhaps it's time to bump that up to match today's \n> hardware and problem sizes?\n\nI think disable_cost was originally set at a time when costs were\nintegers :-(. Yeah, there's probably no reason not to throw another\nzero or two on it.\n\nIs there another issue here besides that one? I think you were hoping\nthat the hash join would be faster than the alternatives, but the cost\nestimate says it's a lot slower. Is that actually the case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Apr 2009 11:02:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "Tom Lane wrote:\n\n> Is there another issue here besides that one? I think you were hoping\n> that the hash join would be faster than the alternatives, but the cost\n> estimate says it's a lot slower. Is that actually the case?\n> \n\nThe hash join takes less than twenty seconds, the other two joins I \nkilled after five minutes. I can try to collect explain analyze results \nlater today if you'd like.\n\nKris Jurka\n", "msg_date": "Fri, 17 Apr 2009 08:07:21 -0700", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Kris Jurka <[email protected]> writes:\n> The hash join takes less than twenty seconds, the other two joins I \n> killed after five minutes. I can try to collect explain analyze results \n> later today if you'd like.\n\nPlease, unless the test case you already posted has similar behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Apr 2009 11:08:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "Tom Lane wrote:\n> Kris Jurka <[email protected]> writes:\n>> The hash join takes less than twenty seconds, the other two joins I \n>> killed after five minutes. I can try to collect explain analyze results \n>> later today if you'd like.\n> \n\nAttached are the explain analyze results. 
The analyze part hits the \nhash join worst of all, so I've also included the timings without analyzing.\n\nMethod Time (ms) Time w/Analyze (ms)\nnestloop 304853 319060\nmerge 514517 683757\nhash 18957 143731\n\nKris Jurka", "msg_date": "Fri, 17 Apr 2009 10:05:32 -0700", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Kris Jurka <[email protected]> writes:\n> The real problem is getting reasonable stats to pass through the partition \n> Append step, so it can make a reasonable estimate of the join output size.\n\nI dug around a bit and concluded that the lack of stats for the Append\nrelation is indeed the main problem. It's not so much the bad join size\nestimate (although that could hurt for cases where you need to join this\nresult to another table). Rather, it's that the planner is deliberately\nbiased against picking hash joins in the absence of stats for the inner\nrelation. Per the comments for estimate_hash_bucketsize:\n\n * If no statistics are available, use a default estimate of 0.1. This will\n * discourage use of a hash rather strongly if the inner relation is large,\n * which is what we want. We do not want to hash unless we know that the\n * inner rel is well-dispersed (or the alternatives seem much worse).\n\nWhile we could back off the default a bit here, I think it'd be better\nto fix it by not punting on the stats-for-append-relations problem.\nThat doesn't seem like material for 8.4 at this point, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 19 Apr 2009 19:31:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "\nDid this get addressed?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Kris Jurka <[email protected]> writes:\n> > The real problem is getting reasonable stats to pass through the partition \n> > Append step, so it can make a reasonable estimate of the join output size.\n> \n> I dug around a bit and concluded that the lack of stats for the Append\n> relation is indeed the main problem. It's not so much the bad join size\n> estimate (although that could hurt for cases where you need to join this\n> result to another table). Rather, it's that the planner is deliberately\n> biased against picking hash joins in the absence of stats for the inner\n> relation. Per the comments for estimate_hash_bucketsize:\n> \n> * If no statistics are available, use a default estimate of 0.1. This will\n> * discourage use of a hash rather strongly if the inner relation is large,\n> * which is what we want. We do not want to hash unless we know that the\n> * inner rel is well-dispersed (or the alternatives seem much worse).\n> \n> While we could back off the default a bit here, I think it'd be better\n> to fix it by not punting on the stats-for-append-relations problem.\n> That doesn't seem like material for 8.4 at this point, though.\n> \n> \t\t\tregards, tom lane\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Thu, 25 Feb 2010 18:46:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Did this get addressed?\n\nPartially. There are stats now but autovacuum is not bright about\nwhen to update them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Feb 2010 19:03:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "On Thu, Feb 25, 2010 at 7:03 PM, Tom Lane <[email protected]> wrote:\n> Bruce Momjian <[email protected]> writes:\n>> Did this get addressed?\n>\n> Partially.  There are stats now but autovacuum is not bright about\n> when to update them.\n\nIs that something you're planning to fix for 9.0? If not, we at least\nneed to document what we intend for people to do about it.\n\n...Robert\n", "msg_date": "Tue, 2 Mar 2010 11:16:51 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Feb 25, 2010 at 7:03 PM, Tom Lane <[email protected]> wrote:\n>> Partially. �There are stats now but autovacuum is not bright about\n>> when to update them.\n\n> Is that something you're planning to fix for 9.0? If not, we at least\n> need to document what we intend for people to do about it.\n\nI want to look at it, but I'm not sure whether the fix will be small\nenough that we want to put it in during beta.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Mar 2010 11:23:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "On Tue, Mar 2, 2010 at 4:23 PM, Tom Lane <[email protected]> wrote:\n\n> Robert Haas <[email protected]> writes:\n> > On Thu, Feb 25, 2010 at 7:03 PM, Tom Lane <[email protected]> wrote:\n> >> Partially. There are stats now but autovacuum is not bright about\n> >> when to update them.\n>\n> > Is that something you're planning to fix for 9.0? If not, we at least\n> > need to document what we intend for people to do about it.\n>\n> I want to look at it, but I'm not sure whether the fix will be small\n> enough that we want to put it in during beta.\n>\n> I am pretty sure many people will appreciate it, even if it isn't going to\nbe small.\n\nIs that stat collection across child tables any useful by it self ?\n\n-- \nGJ\n\nOn Tue, Mar 2, 2010 at 4:23 PM, Tom Lane <[email protected]> wrote:\nRobert Haas <[email protected]> writes:\n> On Thu, Feb 25, 2010 at 7:03 PM, Tom Lane <[email protected]> wrote:\n>> Partially.  There are stats now but autovacuum is not bright about\n>> when to update them.\n\n> Is that something you're planning to fix for 9.0?  If not, we at least\n> need to document what we intend for people to do about it.\n\nI want to look at it, but I'm not sure whether the fix will be small\nenough that we want to put it in during beta.\nI am pretty sure many people will appreciate it, even if it isn't going to be small. Is that stat collection across child tables any useful by it self ?-- \nGJ", "msg_date": "Tue, 2 Mar 2010 16:27:14 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" 
}, { "msg_contents": "On Tue, Mar 2, 2010 at 12:23 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Feb 25, 2010 at 7:03 PM, Tom Lane <[email protected]> wrote:\n>>> Partially.  There are stats now but autovacuum is not bright about\n>>> when to update them.\n>\n>> Is that something you're planning to fix for 9.0?  If not, we at least\n>> need to document what we intend for people to do about it.\n>\n> I want to look at it, but I'm not sure whether the fix will be small\n> enough that we want to put it in during beta.\n\nIn going back through emails I had marked as possibly needing another\nlook before 9.0 is released, I came across this issue again. As I\nunderstand it, analyze (or analyse) now collects statistics for both\nthe parent individually, and for the parent and its children together.\n However, as I further understand it, autovacuum won't actually fire\noff an analyze unless there's enough activity on the parent table\nconsidered individually to warrant it. So if you have an empty parent\nand a bunch of children with data in it, your stats will still stink,\nunless you analyze by hand.\n\nAssuming my understanding of the problem is correct, we could:\n\n(a) fix it,\n(b) document that you should consider periodic manual analyze commands\nin this situation, or\n(c) do nothing.\n\nThoughts?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Wed, 9 Jun 2010 15:47:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> In going back through emails I had marked as possibly needing another\n> look before 9.0 is released, I came across this issue again. As I\n> understand it, analyze (or analyse) now collects statistics for both\n> the parent individually, and for the parent and its children together.\n> However, as I further understand it, autovacuum won't actually fire\n> off an analyze unless there's enough activity on the parent table\n> considered individually to warrant it. So if you have an empty parent\n> and a bunch of children with data in it, your stats will still stink,\n> unless you analyze by hand.\n\nCheck.\n\n> Assuming my understanding of the problem is correct, we could:\n\n> (a) fix it,\n> (b) document that you should consider periodic manual analyze commands\n> in this situation, or\n> (c) do nothing.\n\n> Thoughts?\n\nThe objections to (a) are that it might result in excessive ANALYZE work\nif not done intelligently, and that we haven't got a patch ready anyway.\nI would have liked to get to this for 9.0 but I feel it's a bit late\nnow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jun 2010 16:11:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "(moving to -hackers)\n\nOn Wed, Jun 9, 2010 at 4:11 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> In going back through emails I had marked as possibly needing another\n>> look before 9.0 is released, I came across this issue again.  
As I\n>> understand it, analyze (or analyse) now collects statistics for both\n>> the parent individually, and for the parent and its children together.\n>>  However, as I further understand it, autovacuum won't actually fire\n>> off an analyze unless there's enough activity on the parent table\n>> considered individually to warrant it.  So if you have an empty parent\n>> and a bunch of children with data in it, your stats will still stink,\n>> unless you analyze by hand.\n>\n> Check.\n>\n>> Assuming my understanding of the problem is correct, we could:\n>\n>> (a) fix it,\n>> (b) document that you should consider periodic manual analyze commands\n>> in this situation, or\n>> (c) do nothing.\n>\n>> Thoughts?\n>\n> The objections to (a) are that it might result in excessive ANALYZE work\n> if not done intelligently, and that we haven't got a patch ready anyway.\n> I would have liked to get to this for 9.0 but I feel it's a bit late\n> now.\n\nI guess I can't really disagree with that. Should we try to document\nthis in some way?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Thu, 10 Jun 2010 09:29:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] No hash join across partitioned tables?" }, { "msg_contents": "Greetings all,\n\nI have been trying to create/run a build farm as part of a project I am\nworking on. However, I have noticed the primary git repostitory,\ngit.postgresql.org/git, does not seem to be working. Namely, whenever I\ntry to clone the directory, I receive this error:\n\nError: Unable to find 5e4933c31d3cd2750ee1793efe6eca43055fb273e under\nhttp://git.postgresql.org/git/postgresql.git\nCannot obtain needed blob 5e4933c31d3cd2750ee1793efe6eca4305fb273e while\nprocessing commit c5609c66ce2ee4fdb180be95721252b47f90499\nError: fetch failed.\n\nI thought it would be prudent to notify the list so someone could\npossibly check into this.\n\nThanks!\n\nScott Luxenberg\n", "msg_date": "Thu, 10 Jun 2010 10:42:18 -0400", "msg_from": "\"Luxenberg, Scott I.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Error with GIT Repository" }, { "msg_contents": "\n\nLuxenberg, Scott I. wrote:\n> Greetings all,\n>\n> I have been trying to create/run a build farm as part of a project I am\n> working on. \n\nThat seems an odd thing to do since we have one ...\n\n> However, I have noticed the primary git repostitory,\n> git.postgresql.org/git, does not seem to be working. Namely, whenever I\n> try to clone the directory, I receive this error:\n>\n> Error: Unable to find 5e4933c31d3cd2750ee1793efe6eca43055fb273e under\n> http://git.postgresql.org/git/postgresql.git\n> Cannot obtain needed blob 5e4933c31d3cd2750ee1793efe6eca4305fb273e while\n> processing commit c5609c66ce2ee4fdb180be95721252b47f90499\n> Error: fetch failed.\n>\n> I thought it would be prudent to notify the list so someone could\n> possibly check into this.\n>\n>\n> \n\n\nWhy are you cloning over http? 
Here is the best way to clone, which \nseems to be working:\n\n [andrew@sophia ]$ git clone --mirror\n git://git.postgresql.org/git/postgresql.git\n Initialized empty Git repository in /home/andrew/postgresql.git/\n remote: Counting objects: 376865, done.\n remote: Compressing objects: 100% (87569/87569), done.\n remote: Total 376865 (delta 310187), reused 352950 (delta 287485)\n Receiving objects: 100% (376865/376865), 178.73 MiB | 251 KiB/s, done.\n Resolving deltas: 100% (310187/310187), done.\n [andrew@sophia ]$\n\ncheers\n\nandrew\n", "msg_date": "Thu, 10 Jun 2010 11:26:59 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "* Andrew Dunstan ([email protected]) wrote:\n> Luxenberg, Scott I. wrote:\n> >I have been trying to create/run a build farm as part of a project I am\n> >working on.\n> \n> That seems an odd thing to do since we have one ...\n\nTo clarify, he's setting up a build farm *member*. :)\n\n> >However, I have noticed the primary git repostitory,\n> >git.postgresql.org/git, does not seem to be working. Namely, whenever I\n> >try to clone the directory, I receive this error:\n> >\n> >Error: Unable to find 5e4933c31d3cd2750ee1793efe6eca43055fb273e under\n> >http://git.postgresql.org/git/postgresql.git\n> >Cannot obtain needed blob 5e4933c31d3cd2750ee1793efe6eca4305fb273e while\n> >processing commit c5609c66ce2ee4fdb180be95721252b47f90499\n> >Error: fetch failed.\n> >\n> >I thought it would be prudent to notify the list so someone could\n> >possibly check into this.\n> \n> \n> Why are you cloning over http? Here is the best way to clone, which\n> seems to be working:\n\nUnfortunately for us, the port that git uses isn't currently allowed\noutbound by our corporate firewall. I expect that to be true for other\nPG users who want git and for some build-farm members, so I think we\nreally need to support git cloning over http.\n\nAs a side-note, it works just fine from git-hub's http mirror and that's\nwhat we've been playing with, but I don't know if we want to recommend\nthat for build-farm members..\n\n Thanks!\n\n Stephen", "msg_date": "Thu, 10 Jun 2010 11:44:16 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "\n\nStephen Frost wrote:\n> * Andrew Dunstan ([email protected]) wrote:\n> \n>> Luxenberg, Scott I. wrote:\n>> \n>>> I have been trying to create/run a build farm as part of a project I am\n>>> working on.\n>>> \n>> That seems an odd thing to do since we have one ...\n>> \n>\n> To clarify, he's setting up a build farm *member*. :)\n> \n\nAha. Amazing the difference one little word can make ...\n\n>\n> As a side-note, it works just fine from git-hub's http mirror and that's\n> what we've been playing with, but I don't know if we want to recommend\n> that for build-farm members..\n>\n>\n> \n\nI don't see why not. Buildfarm members are going to have to reset their \nrepos when we finally cut over in a few months. Luckily, this is a \nfairly painless operation - blow away the repo and change the config \nfile and the script will resync as if nothing had happened.\n\ncheers\n\nandrew\n", "msg_date": "Thu, 10 Jun 2010 12:15:56 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "* Andrew Dunstan ([email protected]) wrote:\n> I don't see why not. 
Buildfarm members are going to have to reset their \n> repos when we finally cut over in a few months. Luckily, this is a \n> fairly painless operation - blow away the repo and change the config \n> file and the script will resync as if nothing had happened.\n\nShould we stop bothering to offer http://git.postgresql.org then..? Or\ndo we expect it to get fixed and work correctly once we cut over and\nrebuild? Also, perhaps we could list the git-hub option on the wiki\n(http://wiki.postgresql.org/wiki/Other_Git_Repositories)?\n\n(and, yea, it's the same me)\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 10 Jun 2010 12:20:29 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "On Thu, Jun 10, 2010 at 18:20, Stephen Frost <[email protected]> wrote:\n> * Andrew Dunstan ([email protected]) wrote:\n>> I don't see why not. Buildfarm members are going to have to reset their\n>> repos when we finally cut over in a few months. Luckily, this is a\n>> fairly painless operation - blow away the repo and change the config\n>> file and the script will resync as if nothing had happened.\n>\n> Should we stop bothering to offer http://git.postgresql.org then..?  Or\n\nNo, we should not.\n\nEspecially if someone has a clue how to do it. The last time I fixed\nit by runnin repack, but that didn't work this time. I have no clue\nwhy it's asking for a file that doesn't exist.\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Thu, 10 Jun 2010 19:30:00 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "Excerpts from Andrew Dunstan's message of jue jun 10 11:26:59 -0400 2010:\n\n> Why are you cloning over http? Here is the best way to clone, which \n> seems to be working:\n> \n> [andrew@sophia ]$ git clone --mirror\n> git://git.postgresql.org/git/postgresql.git\n> Initialized empty Git repository in /home/andrew/postgresql.git/\n\nIn case you're a git-ignorant like me and are wondering why the above\ndoes not produce a usable checkout, the complete recipe is here:\n\nhttp://archives.postgresql.org/message-id/[email protected]\n(in short, you need a git clone --reference)\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 10 Jun 2010 15:23:47 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "> Why are you cloning over http? \n\nMe too I've used http, since I'm behind a proxy and I couldn't\nfind a \"simple\" way of having the git:// method working behind\na proxy...\n\n\n \n", "msg_date": "Fri, 11 Jun 2010 07:43:16 +0000 (GMT)", "msg_from": "Leonardo F <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "On Thursday 10 June 2010 19:30:00 Magnus Hagander wrote:\n> On Thu, Jun 10, 2010 at 18:20, Stephen Frost <[email protected]> wrote:\n> > * Andrew Dunstan ([email protected]) wrote:\n> >> I don't see why not. Buildfarm members are going to have to reset their\n> >> repos when we finally cut over in a few months. 
Luckily, this is a\n> >> fairly painless operation - blow away the repo and change the config\n> >> file and the script will resync as if nothing had happened.\n> > \n> > Should we stop bothering to offer http://git.postgresql.org then..? Or\n> \n> No, we should not.\n> \n> Especially if someone has a clue how to do it. The last time I fixed\n> it by runnin repack, but that didn't work this time. I have no clue\n> why it's asking for a file that doesn't exist.\nDoes the repo run 'update-server-info' in some hook?\n\nAndres\n", "msg_date": "Fri, 11 Jun 2010 19:12:26 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "On Fri, Jun 11, 2010 at 19:12, Andres Freund <[email protected]> wrote:\n> On Thursday 10 June 2010 19:30:00 Magnus Hagander wrote:\n>> On Thu, Jun 10, 2010 at 18:20, Stephen Frost <[email protected]> wrote:\n>> > * Andrew Dunstan ([email protected]) wrote:\n>> >> I don't see why not. Buildfarm members are going to have to reset their\n>> >> repos when we finally cut over in a few months. Luckily, this is a\n>> >> fairly painless operation - blow away the repo and change the config\n>> >> file and the script will resync as if nothing had happened.\n>> >\n>> > Should we stop bothering to offer http://git.postgresql.org then..?  Or\n>>\n>> No, we should not.\n>>\n>> Especially if someone has a clue how to do it. The last time I fixed\n>> it by runnin repack, but that didn't work this time. I have no clue\n>> why it's asking for a file that doesn't exist.\n> Does the repo run  'update-server-info'  in some hook?\n\nYup, it runs after every time it pulls from cvs.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Fri, 11 Jun 2010 19:19:50 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "On Thu, Jun 10, 2010 at 9:29 AM, Robert Haas <[email protected]> wrote:\n> (moving to -hackers)\n>\n> On Wed, Jun 9, 2010 at 4:11 PM, Tom Lane <[email protected]> wrote:\n>> Robert Haas <[email protected]> writes:\n>>> In going back through emails I had marked as possibly needing another\n>>> look before 9.0 is released, I came across this issue again.  As I\n>>> understand it, analyze (or analyse) now collects statistics for both\n>>> the parent individually, and for the parent and its children together.\n>>>  However, as I further understand it, autovacuum won't actually fire\n>>> off an analyze unless there's enough activity on the parent table\n>>> considered individually to warrant it.  So if you have an empty parent\n>>> and a bunch of children with data in it, your stats will still stink,\n>>> unless you analyze by hand.\n>>\n>> Check.\n>>\n>>> Assuming my understanding of the problem is correct, we could:\n>>\n>>> (a) fix it,\n>>> (b) document that you should consider periodic manual analyze commands\n>>> in this situation, or\n>>> (c) do nothing.\n>>\n>>> Thoughts?\n>>\n>> The objections to (a) are that it might result in excessive ANALYZE work\n>> if not done intelligently, and that we haven't got a patch ready anyway.\n>> I would have liked to get to this for 9.0 but I feel it's a bit late\n>> now.\n>\n> I guess I can't really disagree with that.  
Should we try to document\n> this in some way?\n\nProposed patch attached.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company", "msg_date": "Sun, 13 Jun 2010 23:47:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] No hash join across partitioned tables?" }, { "msg_contents": "On Sun, Jun 13, 2010 at 11:47 PM, Robert Haas <[email protected]> wrote:\n> Proposed patch attached.\n\nHearing no objections, I have committed this patch.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Tue, 15 Jun 2010 14:44:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] No hash join across partitioned tables?" }, { "msg_contents": "On Fri, Jun 11, 2010 at 10:19 AM, Magnus Hagander <[email protected]> wrote:\n>>> Especially if someone has a clue how to do it. The last time I fixed\n>>> it by runnin repack, but that didn't work this time. I have no clue\n>>> why it's asking for a file that doesn't exist.\n>> Does the repo run  'update-server-info'  in some hook?\n>\n> Yup, it runs after every time it pulls from cvs.\n\nIs this still a problem? I was just noticing this thread\nunceremoniously died, and a long time ago now I remembering discussing\na problem involving the Postgres git mirror accumulating packfiles\neternally. It seemed that whatever repacking scheme was used would get\nrid of loose objects, turning them into packs but never consolidate\npacks.\n\nWhy not just run 'git gc'? This is probably the only quasi-regularly\nrequired maintenance command, so much so that git (I think) runs it\nfrom time to time when certain thresholds are passed in modern day.\n(For a clone-source it is probably a good idea to run it a bit more\nliberally)\n\nfdr\n", "msg_date": "Wed, 30 Jun 2010 15:22:09 -0700", "msg_from": "Daniel Farina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Error with GIT Repository" }, { "msg_contents": "Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n> > In going back through emails I had marked as possibly needing another\n> > look before 9.0 is released, I came across this issue again. As I\n> > understand it, analyze (or analyse) now collects statistics for both\n> > the parent individually, and for the parent and its children together.\n> > However, as I further understand it, autovacuum won't actually fire\n> > off an analyze unless there's enough activity on the parent table\n> > considered individually to warrant it. So if you have an empty parent\n> > and a bunch of children with data in it, your stats will still stink,\n> > unless you analyze by hand.\n> \n> Check.\n> \n> > Assuming my understanding of the problem is correct, we could:\n> \n> > (a) fix it,\n> > (b) document that you should consider periodic manual analyze commands\n> > in this situation, or\n> > (c) do nothing.\n> \n> > Thoughts?\n> \n> The objections to (a) are that it might result in excessive ANALYZE work\n> if not done intelligently, and that we haven't got a patch ready anyway.\n> I would have liked to get to this for 9.0 but I feel it's a bit late\n> now.\n\nWhat do we want to do about the above issue?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. 
+\n", "msg_date": "Thu, 1 Jul 2010 23:00:00 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> I would have liked to get to this for 9.0 but I feel it's a bit late\n>> now.\n\n> What do we want to do about the above issue?\n\nTODO item.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jul 2010 00:05:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> I would have liked to get to this for 9.0 but I feel it's a bit late\n> >> now.\n> \n> > What do we want to do about the above issue?\n> \n> TODO item.\n\nAdded to TODO:\n\n Have autoanalyze of parent tables occur when child tables are modified\n\n * http://archives.postgresql.org/message-id/[email protected] \n\nI am surprised there is no documentation update requirement for this.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Fri, 2 Jul 2010 16:53:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I am surprised there is no documentation update requirement for this.\n\nSomebody put something about it in the docs a few days ago, IIRC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Jul 2010 16:58:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "On Fri, Jul 2, 2010 at 4:58 PM, Tom Lane <[email protected]> wrote:\n> Bruce Momjian <[email protected]> writes:\n>> I am surprised there is no documentation update requirement for this.\n>\n> Somebody put something about it in the docs a few days ago, IIRC.\n\nThat was me.\n\nhttp://archives.postgresql.org/pgsql-committers/2010-06/msg00144.php\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise Postgres Company\n", "msg_date": "Fri, 2 Jul 2010 17:10:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Robert Haas wrote:\n> On Fri, Jul 2, 2010 at 4:58 PM, Tom Lane <[email protected]> wrote:\n> > Bruce Momjian <[email protected]> writes:\n> >> I am surprised there is no documentation update requirement for this.\n> >\n> > Somebody put something about it in the docs a few days ago, IIRC.\n> \n> That was me.\n> \n> http://archives.postgresql.org/pgsql-committers/2010-06/msg00144.php\n\nOh, thanks, I missed that.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + None of us is going to be here forever. +\n", "msg_date": "Fri, 2 Jul 2010 17:12:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Excerpts from Robert Haas's message of mié jun 09 15:47:55 -0400 2010:\n\n> In going back through emails I had marked as possibly needing another\n> look before 9.0 is released, I came across this issue again. 
As I\n> understand it, analyze (or analyse) now collects statistics for both\n> the parent individually, and for the parent and its children together.\n> However, as I further understand it, autovacuum won't actually fire\n> off an analyze unless there's enough activity on the parent table\n> considered individually to warrant it. So if you have an empty parent\n> and a bunch of children with data in it, your stats will still stink,\n> unless you analyze by hand.\n\nSo, is there something we could now do about this, while there's still\ntime before 9.1?\n\nI haven't followed this issue very closely, but it seems to me that what\nwe want is that we want an ANALYZE in a child table to be mutated into\nan analyze of its parent table, if the conditions are right; and that an\nANALYZE of a parent removes the child tables from being analyzed on the\nsame run.\n\nIf we analyze the parent, do we also update the children stats, or is it\njust that we keep two stats for the parent, one with children and one\nwithout, both being updated when the parent is analyzed?\n\nIf the latter's the case, maybe we should modify ANALYZE a bit more, so\nthat we can analyze the whole hierarchy in one go, and store the lot of\nstats with a single pass (each child alone, the parent alone, the parent\nplus children). However it's not real clear how would this work with\nmultiple inheritance levels.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Sat, 16 Oct 2010 02:03:02 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> If we analyze the parent, do we also update the children stats, or is it\n> just that we keep two stats for the parent, one with children and one\n> without, both being updated when the parent is analyzed?\n\nThe latter.\n\nThe trick here is that we need to fire an analyze on the parent even\nthough only its children may have had any updates.\n\n> If the latter's the case, maybe we should modify ANALYZE a bit more, so\n> that we can analyze the whole hierarchy in one go, and store the lot of\n> stats with a single pass (each child alone, the parent alone, the parent\n> plus children). However it's not real clear how would this work with\n> multiple inheritance levels.\n\nIt's also not clear how it works without blowing out memory...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 16 Oct 2010 01:22:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables? " }, { "msg_contents": "On Fri, Oct 15, 2010 at 10:22 PM, Tom Lane <[email protected]> wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n> > If we analyze the parent, do we also update the children stats, or is it\n> > just that we keep two stats for the parent, one with children and one\n> > without, both being updated when the parent is analyzed?\n>\n> The latter.\n>\n> The trick here is that we need to fire an analyze on the parent even\n> though only its children may have had any updates.\n>\n> > If the latter's the case, maybe we should modify ANALYZE a bit more, so\n> > that we can analyze the whole hierarchy in one go, and store the lot of\n> > stats with a single pass (each child alone, the parent alone, the parent\n> > plus children). 
However it's not real clear how would this work with\n> > multiple inheritance levels.\n>\n\nAn issue with automatically analyzing the entire hierarchy is 'abstract'\ntable definitions. I've got a set of tables for storing the same data at\ndifferent granularities of aggregation. Within each granularity, I've got\npartitions, but because the set of columns is identical for each\ngranularity, I've got an abstract table definition that is inherited by\neverything. I don't need or want statistics kept on that table because I\nnever query across the abstract table, only the parent table of each\naggregation granularity\n\ncreate table abstract_fact_table (\ntime timestamp,\nmeasure1 bigint,\nmeasure2 bigint,\nmeasure3 bigint,\nfk1 bigint,\nfk2 bigint\n);\n\ncreate table minute_scale_fact_table (\n} inherits abstract_fact_table;\n\n// Then there are several partitions for minute scale data\n\ncreate table hour_scale_fact_table (\n) inherits abstract_fact_table;\n\n// then several partitions for hour scale data\n\netc. I do run queries on the minute_scale_fact_table and\nhour_scale_fact_table but never do so on abstract_fact_table. I could\ncertainly modify my schema such that the abstract table goes away entirely\neasily enough, but I find this easier for new developers to come in and\ncomprehend, since the similarity between the table definitions is explicit.\n\nI'm glad this topic came up, as I was unaware that I need to run analyze on\nthe parent partitions separately - and no data is every inserted directly\ninto the top level of each granularity hierarchy, so it will never fire by\nitself.\n\nIf I am using ORM and I've got functionality in a common baseclass in the\nsource code, I'll often implement its mapping in the database via a parent\ntable that the table for any subclass mapping can inherit from. Again, I\nhave no interest in maintaining statistics on the parent table, since I\nnever query against it directly.\n\nOn Fri, Oct 15, 2010 at 10:22 PM, Tom Lane <[email protected]> wrote:\nAlvaro Herrera <[email protected]> writes:\n> If we analyze the parent, do we also update the children stats, or is it\n> just that we keep two stats for the parent, one with children and one\n> without, both being updated when the parent is analyzed?\n\nThe latter.\n\nThe trick here is that we need to fire an analyze on the parent even\nthough only its children may have had any updates.\n\n> If the latter's the case, maybe we should modify ANALYZE a bit more, so\n> that we can analyze the whole hierarchy in one go, and store the lot of\n> stats with a single pass (each child alone, the parent alone, the parent\n> plus children).  However it's not real clear how would this work with\n> multiple inheritance levels.An issue with automatically analyzing the entire hierarchy is 'abstract' table definitions.  I've got a set of tables for storing the same data at different granularities of aggregation.  Within each granularity, I've got partitions, but because the set of columns is identical for each granularity, I've got an abstract table definition that is inherited by everything.  
I don't need or want statistics kept on that table because I never query across the abstract table, only the parent table of each aggregation granularity\ncreate table abstract_fact_table (time timestamp,measure1 bigint,measure2 bigint,measure3 bigint,fk1 bigint,fk2 bigint);\ncreate table minute_scale_fact_table (} inherits abstract_fact_table;// Then there are several partitions for minute scale datacreate table hour_scale_fact_table (\n) inherits abstract_fact_table;// then several partitions for hour scale dataetc.  I do run queries on the minute_scale_fact_table and hour_scale_fact_table but never do so on abstract_fact_table.  I could certainly modify my schema such that the abstract table goes away entirely easily enough, but I find this easier for new developers to come in and comprehend, since the similarity between the table definitions is explicit.\nI'm glad this topic came up, as I was unaware that I need to run analyze on the parent partitions separately - and no data is every inserted directly into the top level of each granularity hierarchy, so it will never fire by itself.\nIf I am using ORM and I've got functionality in a common baseclass in the source code, I'll often implement its mapping in the database via a parent table that the table for any subclass mapping can inherit from.  Again, I have no interest in maintaining statistics on the parent table, since I never query against it directly.", "msg_date": "Fri, 15 Oct 2010 22:35:46 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Excerpts from Samuel Gendler's message of sáb oct 16 02:35:46 -0300 2010:\n\n> An issue with automatically analyzing the entire hierarchy is 'abstract'\n> table definitions. I've got a set of tables for storing the same data at\n> different granularities of aggregation. Within each granularity, I've got\n> partitions, but because the set of columns is identical for each\n> granularity, I've got an abstract table definition that is inherited by\n> everything. I don't need or want statistics kept on that table because I\n> never query across the abstract table, only the parent table of each\n> aggregation granularity\n\nHmm, I think you'd be better served by using LIKE instead of regular\ninheritance.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Sat, 16 Oct 2010 12:29:39 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "On Sat, Oct 16, 2010 at 8:29 AM, Alvaro Herrera\n<[email protected]>wrote:\n\n> Excerpts from Samuel Gendler's message of sáb oct 16 02:35:46 -0300 2010:\n>\n> > An issue with automatically analyzing the entire hierarchy is 'abstract'\n> > table definitions. I've got a set of tables for storing the same data at\n> > different granularities of aggregation. Within each granularity, I've\n> got\n> > partitions, but because the set of columns is identical for each\n> > granularity, I've got an abstract table definition that is inherited by\n> > everything. I don't need or want statistics kept on that table because I\n> > never query across the abstract table, only the parent table of each\n> > aggregation granularity\n>\n> Hmm, I think you'd be better served by using LIKE instead of regular\n> inheritance.\n>\n>\nYep. 
I inherited the architecture, though, and changing it hasn't been a\nhigh priority.\n\n--sam\n\nOn Sat, Oct 16, 2010 at 8:29 AM, Alvaro Herrera <[email protected]> wrote:\nExcerpts from Samuel Gendler's message of sáb oct 16 02:35:46 -0300 2010:\n\n> An issue with automatically analyzing the entire hierarchy is 'abstract'\n> table definitions.  I've got a set of tables for storing the same data at\n> different granularities of aggregation.  Within each granularity, I've got\n> partitions, but because the set of columns is identical for each\n> granularity, I've got an abstract table definition that is inherited by\n> everything.  I don't need or want statistics kept on that table because I\n> never query across the abstract table, only the parent table of each\n> aggregation granularity\n\nHmm, I think you'd be better served by using LIKE instead of regular\ninheritance.\nYep.  I inherited the architecture, though, and changing it hasn't been a high priority. --sam", "msg_date": "Sun, 17 Oct 2010 23:13:01 -0700", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "Excerpts from Samuel Gendler's message of lun oct 18 03:13:01 -0300 2010:\n> On Sat, Oct 16, 2010 at 8:29 AM, Alvaro Herrera\n> <[email protected]>wrote:\n> \n> > Excerpts from Samuel Gendler's message of sáb oct 16 02:35:46 -0300 2010:\n> >\n> > > An issue with automatically analyzing the entire hierarchy is\n> > > 'abstract' table definitions. I've got a set of tables for\n> > > storing the same data at different granularities of aggregation.\n> > > Within each granularity, I've got partitions, but because the set\n> > > of columns is identical for each granularity, I've got an abstract\n> > > table definition that is inherited by everything. I don't need or\n> > > want statistics kept on that table because I never query across\n> > > the abstract table, only the parent table of each aggregation\n> > > granularity\n> >\n> > Hmm, I think you'd be better served by using LIKE instead of regular\n> > inheritance.\n>\n> Yep. I inherited the architecture, though, and changing it hasn't been a\n> high priority.\n\nI understand that; my point is merely that maybe we shouldn't work\nthrough many hoops to solve this particular facet of the problem,\nbecause it seems to be pilot error. (If you really needed to avoid the\nextra I/O that would be caused by unnecessary analyzes, you could turn\nautovac off for the abstract tables).\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 18 Oct 2010 11:44:53 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" }, { "msg_contents": "On Sat, Oct 16, 2010 at 1:22 AM, Tom Lane <[email protected]> wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> If we analyze the parent, do we also update the children stats, or is it\n>> just that we keep two stats for the parent, one with children and one\n>> without, both being updated when the parent is analyzed?\n>\n> The latter.\n>\n> The trick here is that we need to fire an analyze on the parent even\n> though only its children may have had any updates.\n\nCan we execute a SQL query at the point where we need this\ninformation? 
Because it doesn't seem too hard to work up a query that\ntotals the inserts, updates, and reltuples across all children of each\ntable.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 26 Oct 2010 08:23:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No hash join across partitioned tables?" } ]
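A rough sketch of the per-parent totals mentioned in the closing message above, assuming a single level of inheritance. It is only an illustration built on the standard statistics views (pg_inherits, pg_class, pg_stat_all_tables); autovacuum itself works from the in-memory pgstat counters rather than running SQL, and this query is not taken from any patch:

    SELECT p.oid::regclass    AS parent,
           sum(c.reltuples)   AS child_reltuples,
           sum(s.n_tup_ins)   AS child_inserts,
           sum(s.n_tup_upd)   AS child_updates,
           sum(s.n_tup_del)   AS child_deletes
      FROM pg_inherits i
      JOIN pg_class p ON p.oid = i.inhparent
      JOIN pg_class c ON c.oid = i.inhrelid
      LEFT JOIN pg_stat_all_tables s ON s.relid = i.inhrelid
     GROUP BY p.oid;

Totals like these would still have to be weighed against the parent's analyze threshold, and a recursive variant would be needed for hierarchies more than one level deep.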
[ { "msg_contents": "I have a problem with a part of big query because of incorrect \nestimation. It's easy to emulate the case:\n\ncreate table a (id bigint, id2 bigint);\ncreate table b (id bigint, id2 bigint);\n\ninsert into a (id, id2)\nselect random() * 100000, random() * 100\nfrom generate_series(1, 100000);\n\ninsert into b (id, id2)\nselect id, case when random() < 0.1 then random() * 100 else id2 end\nfrom a;\n\nalter table a alter column id set statistics 1000;\nalter table a alter column id2 set statistics 1000;\nalter table b alter column id set statistics 1000;\nalter table b alter column id2 set statistics 1000;\n\nanalyze a;\nanalyze b;\n\nexplain analyze\nselect *\nfrom a\n join b on b.id = a.id and b.id2 = a.id2;\n\n\"Hash Join (cost=1161.00..3936.15 rows=1661 width=32) (actual \ntime=424.865..1128.194 rows=91268 loops=1)\"\n\" Hash Cond: ((a.id = b.id) AND (a.id2 = b.id2))\"\n\" -> Seq Scan on a (cost=0.00..791.00 rows=100000 width=16) (actual \ntime=0.013..197.908 rows=100000 loops=1)\"\n\" -> Hash (cost=791.00..791.00 rows=100000 width=16) (actual \ntime=424.777..424.777 rows=100000 loops=1)\"\n\" -> Seq Scan on b (cost=0.00..791.00 rows=100000 width=16) \n(actual time=0.010..197.536 rows=100000 loops=1)\"\n\"Total runtime: 1305.121 ms\"\n\n", "msg_date": "Fri, 17 Apr 2009 13:50:15 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer's issue" } ]
[ { "msg_contents": "crawler=# select * from assigments;\n jobid | timeout | workerid\n-------+---------+----------\n(0 rows)\n\nTime: 0.705 ms\ncrawler=# \\d+ assigments\n Table \"public.assigments\"\n Column | Type | Modifiers\n | Storage | Description\n----------+--------------------------+-------------------------------------------------+---------+-------------\n jobid | bigint | not null\n | plain |\n timeout | timestamp with time zone | not null default (now() +\n'00:02:00'::interval) | plain |\n workerid | bigint | not null\n | plain |\nIndexes:\n \"assigments_pkey\" PRIMARY KEY, btree (jobid)\nForeign-key constraints:\n \"assigments_jobid_fkey\" FOREIGN KEY (jobid) REFERENCES jobs(id)\nMATCH FULL ON UPDATE CASCADE ON DELETE CASCADE\nHas OIDs: no\n\ncrawler=# \\d+ jobs\n Table \"public.jobs\"\n Column | Type | Modifiers\n | Storage | Description\n------------+--------------------------+---------------------------------------------------+---------+-------------\n id | bigint | not null default\nnextval('jobs_id_seq'::regclass) | plain |\n domainid | bigint | not null\n | plain |\n priority | smallint | not null default 1\n | plain |\n added | timestamp with time zone | not null default now()\n | plain |\n notify_end | boolean | not null default false\n | plain |\nIndexes:\n \"jobs_pkey\" PRIMARY KEY, btree (domainid)\n \"job_id_uidx\" UNIQUE, btree (id)\n \"foo\" btree (notify_end DESC, priority DESC, added)\n \"foo_bar\" btree (notify_end, priority, added)\n \"jobs_worker_priority_on_jobs\" btree (calc_prio(notify_end,\npriority, added))\nForeign-key constraints:\n \"jobs_domain_id_fkey\" FOREIGN KEY (domainid) REFERENCES\ndomains(id) MATCH FULL ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE\nReferenced by:\n \"assigments_jobid_fkey\" IN assigments FOREIGN KEY (jobid) REFERENCES\njobs(id) MATCH FULL ON UPDATE CASCADE ON DELETE CASCADE\nHas OIDs: no\n\ncrawler=# explain analyze select * from full_assigments_view;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..11040.77 rows=1510 width=31) (actual\ntime=0.003..0.003 rows=0 loops=1)\n -> Nested Loop (cost=0.00..10410.97 rows=1510 width=24) (actual\ntime=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on assigments a (cost=0.00..25.10 rows=1510\nwidth=16) (actual time=0.002..0.002 rows=0 loops=1)\n -> Index Scan using job_id_uidx on jobs j (cost=0.00..6.87\nrows=1 width=16) (never executed)\n Index Cond: (j.id = a.jobid)\n -> Index Scan using domains_id_idx on domains d (cost=0.00..0.40\nrows=1 width=19) (never executed)\n Index Cond: (d.id = j.domainid)\n Total runtime: 0.123 ms\n(8 rows)\n\nTime: 1.390 ms\n\n View \"public.full_assigments_view\"\n Column | Type | Modifiers | Storage | Description\n-------------+---------+-----------+----------+-------------\n domain_name | text | | extended |\n job_id | bigint | | plain |\n timed_out | boolean | | plain |\nView definition:\n SELECT d.name AS domain_name, j.id AS job_id, (now() - a.timeout) >\n'00:00:00'::interval AS timed_out\n FROM assigments a\n JOIN jobs j ON a.jobid = j.id\n JOIN domains d ON d.id = j.domainid;\n\n\ndefault_statistics_target=100\nall the other settings are pretty much default,\n\nThat expected 1510 rows in 'assigments' seems to be pretty off,\nespecially since I just vacuumed/analyze the db.\nAny ideas ?\n\n\n-- \nGJ\n", "msg_date": "Sat, 18 Apr 2009 00:12:49 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, 
"msg_subject": "stats are way off on 8.4 b1" }, { "msg_contents": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]> writes:\n> That expected 1510 rows in 'assigments' seems to be pretty off,\n\nThe planner does not trust an empty table to stay empty. Every\nPostgres version in living memory has acted like that; it's not\nnew to 8.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Apr 2009 19:29:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats are way off on 8.4 b1 " }, { "msg_contents": "2009/4/18 Tom Lane <[email protected]>:\n> =?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]> writes:\n>> That expected 1510 rows in 'assigments' seems to be pretty off,\n>\n> The planner does not trust an empty table to stay empty.  Every\n> Postgres version in living memory has acted like that; it's not\n> new to 8.4.\n\nok, thanks\nQuick question Tom. Can correlation be negative ?\n\n\n-- \nGJ\n", "msg_date": "Sat, 18 Apr 2009 00:34:23 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: stats are way off on 8.4 b1" }, { "msg_contents": "Grzegorz Jaśkiewicz wrote:\n> Can correlation be negative ?\n\nYes, if the data in the column are in descending order. For example:\n\npostgres=# CREATE TABLE foo(id int4);\nCREATE TABLE\npostgres=# INSERT INTO foo SELECT 1000 - generate_series(1, 1000);\nINSERT 0 1000\npostgres=# ANALYZE foo;\nANALYZE\npostgres=# SELECT attname, correlation FROM pg_stats WHERE tablename='foo';\n attname | correlation\n---------+-------------\n id | -1\n(1 row)\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 18 Apr 2009 09:59:27 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stats are way off on 8.4 b1" }, { "msg_contents": "2009/4/18 Heikki Linnakangas <[email protected]>:\n> Grzegorz Jaśkiewicz wrote:\n>>\n>> Can correlation be negative ?\n>\n> Yes, if the data in the column are in descending order. For example:\n>\n> postgres=# CREATE TABLE foo(id int4);\n> CREATE TABLE\n> postgres=# INSERT INTO foo SELECT 1000 - generate_series(1, 1000);\n> INSERT 0 1000\n> postgres=# ANALYZE foo;\n> ANALYZE\n> postgres=# SELECT attname, correlation FROM pg_stats  WHERE tablename='foo';\n>  attname | correlation\n> ---------+-------------\n>  id      |          -1\n> (1 row)\n\naye, thanks.\n\n\n-- \nGJ\n", "msg_date": "Sat, 18 Apr 2009 13:02:08 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: stats are way off on 8.4 b1" } ]
[ { "msg_contents": "Hello People,\n\nI have initiated a work to review the sqls of our internal software.\nLot of them he problem are about sql logic, or join with table unecessary,\nand so on.\nBut software has lot of sql with date, doing thinks like:\n[..]\n date >= '2009-04-01' AND\n date <= '2009-04-15'\n[..]\n\nRedoing the SQL with fix date (date = '2009-04-01') o cost in explain always\nstill about 200 or less. But with a period the cost is high, about 6000 or\nmore.\n\nSelect is using Index and the date is using index too.\n\nThere is some way to use date period with less cost?\n\nRafael Domiciano\n\nHello People,I have initiated a work to review the sqls of our internal software.Lot of them he problem are about sql logic, or join with table unecessary, and so on.But software has lot of sql with date, doing thinks like:\n[..]  date >= '2009-04-01' AND  date <= '2009-04-15'[..]Redoing the SQL with fix date (date = '2009-04-01') o cost in explain always still about 200 or less. But with a period the cost is high, about 6000 or more.\nSelect is using Index and the date is using index too.There is some way to use date period with less cost?Rafael Domiciano", "msg_date": "Mon, 20 Apr 2009 10:55:36 -0300", "msg_from": "Rafael Domiciano <[email protected]>", "msg_from_op": true, "msg_subject": "SQL With Dates" }, { "msg_contents": "BETWEEN X AND Y\n\nOn Mon, Apr 20, 2009 at 2:55 PM, Rafael Domiciano\n<[email protected]> wrote:\n> Hello People,\n>\n> I have initiated a work to review the sqls of our internal software.\n> Lot of them he problem are about sql logic, or join with table unecessary,\n> and so on.\n> But software has lot of sql with date, doing thinks like:\n> [..]\n>   date >= '2009-04-01' AND\n>   date <= '2009-04-15'\n> [..]\n>\n> Redoing the SQL with fix date (date = '2009-04-01') o cost in explain always\n> still about 200 or less. But with a period the cost is high, about 6000 or\n> more.\n>\n> Select is using Index and the date is using index too.\n>\n> There is some way to use date period with less cost?\n>\n> Rafael Domiciano\n>\n\n\n\n-- \nGJ\n", "msg_date": "Mon, 20 Apr 2009 15:14:15 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL With Dates" }, { "msg_contents": "Hello Grzegorz,\n\nThnks for response, but lot of the selects is using BETWEEN and the cost is\nthe same.\n\n2009/4/20 Grzegorz Jaśkiewicz <[email protected]>\n\n> BETWEEN X AND Y\n>\n> On Mon, Apr 20, 2009 at 2:55 PM, Rafael Domiciano\n> <[email protected]> wrote:\n> > Hello People,\n> >\n> > I have initiated a work to review the sqls of our internal software.\n> > Lot of them he problem are about sql logic, or join with table\n> unecessary,\n> > and so on.\n> > But software has lot of sql with date, doing thinks like:\n> > [..]\n> > date >= '2009-04-01' AND\n> > date <= '2009-04-15'\n> > [..]\n> >\n> > Redoing the SQL with fix date (date = '2009-04-01') o cost in explain\n> always\n> > still about 200 or less. 
But with a period the cost is high, about 6000\n> or\n> > more.\n> >\n> > Select is using Index and the date is using index too.\n> >\n> > There is some way to use date period with less cost?\n> >\n> > Rafael Domiciano\n> >\n>\n>\n>\n> --\n> GJ\n>\n\nHello Grzegorz,Thnks for response, but lot of the selects is using BETWEEN and the cost is the same.2009/4/20 Grzegorz Jaśkiewicz <[email protected]>\nBETWEEN X AND Y\n\nOn Mon, Apr 20, 2009 at 2:55 PM, Rafael Domiciano\n<[email protected]> wrote:\n> Hello People,\n>\n> I have initiated a work to review the sqls of our internal software.\n> Lot of them he problem are about sql logic, or join with table unecessary,\n> and so on.\n> But software has lot of sql with date, doing thinks like:\n> [..]\n>   date >= '2009-04-01' AND\n>   date <= '2009-04-15'\n> [..]\n>\n> Redoing the SQL with fix date (date = '2009-04-01') o cost in explain always\n> still about 200 or less. But with a period the cost is high, about 6000 or\n> more.\n>\n> Select is using Index and the date is using index too.\n>\n> There is some way to use date period with less cost?\n>\n> Rafael Domiciano\n>\n\n\n\n--\nGJ", "msg_date": "Mon, 20 Apr 2009 15:48:28 -0300", "msg_from": "Rafael Domiciano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL With Dates" }, { "msg_contents": "On Mon, Apr 20, 2009 at 7:55 AM, Rafael Domiciano\n<[email protected]> wrote:\n> Hello People,\n>\n> I have initiated a work to review the sqls of our internal software.\n> Lot of them he problem are about sql logic, or join with table unecessary,\n> and so on.\n> But software has lot of sql with date, doing thinks like:\n> [..]\n>   date >= '2009-04-01' AND\n>   date <= '2009-04-15'\n> [..]\n>\n> Redoing the SQL with fix date (date = '2009-04-01') o cost in explain always\n> still about 200 or less. But with a period the cost is high, about 6000 or\n> more.\n\nYep. Because you'll be getting more rows. More rows == more cost. TANSTAAFL.\n", "msg_date": "Mon, 20 Apr 2009 12:59:12 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL With Dates" }, { "msg_contents": "It sounds like what you're doing is comparing the planner's cost\nestimate from running EXPLAIN on a few different queries. The planner's\ncost estimate was never intended to do what you're trying to do; it's\nnot an absolute scale of cost, it's just a tool that the planner uses to\nget relative comparisons of logically equivalent plans.\n\nThe actual number that the planner spits out is meaningless in an\nabsolute sense. It's entirely possible that one query with an estimated\ncost of 10000 will run faster than a query with an estimated cost of\n100. What you actually need to do is compare the real running time of\nthe queries in order to see which ones are actually problematic.\n\nFor that, you'd do better using a tool like pgFouine to look at actual\nperformance trends. \n\n-- Mark\n\n\nOn Mon, 2009-04-20 at 10:55 -0300, Rafael Domiciano wrote:\n> Hello People,\n> \n> I have initiated a work to review the sqls of our internal software.\n> Lot of them he problem are about sql logic, or join with table\n> unecessary, and so on.\n> But software has lot of sql with date, doing thinks like:\n> [..]\n> date >= '2009-04-01' AND\n> date <= '2009-04-15'\n> [..]\n> \n> Redoing the SQL with fix date (date = '2009-04-01') o cost in explain\n> always still about 200 or less. 
But with a period the cost is high,\n> about 6000 or more.\n> \n> Select is using Index and the date is using index too.\n> \n> There is some way to use date period with less cost?\n> \n> Rafael Domiciano\n\n\n", "msg_date": "Mon, 20 Apr 2009 13:04:47 -0700", "msg_from": "\"Mark Lewis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL With Dates" }, { "msg_contents": "On Mon, Apr 20, 2009 at 9:55 AM, Rafael Domiciano\n<[email protected]> wrote:\n> Hello People,\n>\n> I have initiated a work to review the sqls of our internal software.\n> Lot of them he problem are about sql logic, or join with table unecessary,\n> and so on.\n> But software has lot of sql with date, doing thinks like:\n> [..]\n>   date >= '2009-04-01' AND\n>   date <= '2009-04-15'\n> [..]\n>\n> Redoing the SQL with fix date (date = '2009-04-01') o cost in explain always\n> still about 200 or less. But with a period the cost is high, about 6000 or\n> more.\n>\n> Select is using Index and the date is using index too.\n>\n> There is some way to use date period with less cost?\n\nIf you have an actual performance problem (as opposed to a big number\nin EXPLAIN), then it's possible that the planner isn't estimating the\nnumber of rows that will be in that range very accurately. In that\ncase, you might need to increase the statistics target for that\ncolumn, or your default_statistics_target.\n\nIn 8.3, the default default_statistics_target = 10. In 8.4, it will\nbe 100, so you might try that for a starting point. But date columns\ncan sometimes have highly skewed data, so you might find that you need\nan even higher value for that particular column. I wouldn't recommend\nraising the database-wide setting above 100 though (though I know some\npeople have used 200 or 400 without too much pain, especially on\nReally Big Databases where longer planning time isn't a big deal\nbecause the execution times are measured in minutes - it doesn't sound\nlike that's your situation though).\n\nThe first thing, to do, is see how fast the query actually runs. Try\nsetting \\timing in psql and running the query to see how long it\nactually takes. If it's fast enough, you're done. If not, run\nEXPLAIN ANALYZE and compare the estimated row counts to t he actual\nrow counts. If they're pretty close, you're out of luck - as others\nhave already said, TANSTAAFL. If they're way off, the try the above.\n\n...Robert\n", "msg_date": "Tue, 21 Apr 2009 11:57:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL With Dates" } ]
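Putting Robert's advice into commands, a minimal sketch of the workflow (the table and column names here are placeholders, not from the original poster's schema, and 100 is simply the 8.4 default target mentioned above):

    \timing
    EXPLAIN ANALYZE
    SELECT *
      FROM some_table
     WHERE date_col BETWEEN '2009-04-01' AND '2009-04-15';

    -- if the estimated row count is far from the actual one:
    ALTER TABLE some_table ALTER COLUMN date_col SET STATISTICS 100;
    ANALYZE some_table;
    -- then re-run the EXPLAIN ANALYZE and compare

If the estimates are already close to reality, the larger cost for the two-week range simply reflects that more rows are being returned, and a sequential or bitmap scan at that point may genuinely be the cheapest plan.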
[ { "msg_contents": "I am working with the rsyslog developers to improve it's performance in \ninserting log messages to databases.\n\ncurrently they have a postgres interface that works like all the other \nones, where rsyslog formats an insert statement, passes that the the \ninterface module, that sends it to postgres (yes, each log as a seperate \ntransaction)\n\nthe big win is going to be in changing the core of rsyslog so that it can \nprocess multiple messages at a time (bundling them into a single \ntransaction)\n\nbut then we run into confusion.\n\noff the top of my head I know of several different ways to get the data \ninto postgres\n\n1. begin; insert; insert;...;end\n\n2. insert into table values (),(),(),()\n\n3. copy from stdin\n (how do you tell it how many records to read from stdin, or that you \nhave given it everything without disconnecting)\n\n4. copy from stdin in binary mode\n\nand each of the options above can be done with prepared statements, stored \nprocedures, or functions.\n\nI know that using procedures or functions can let you do fancy things like \ninserting the row(s) into the appropriate section of a partitioned table\n\nother than this sort of capability, what sort of differences should be \nexpected between the various approaches (including prepared statements vs \nunprepared)\n\nsince the changes that rsyslog is making will affect all the other \ndatabase interfaces as well, any comments about big wins or things to \navoid for other databases would be appriciated.\n\nDavid Lang\n", "msg_date": "Mon, 20 Apr 2009 14:53:21 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "performance for high-volume log insertion" }, { "msg_contents": "David,\n\n* [email protected] ([email protected]) wrote:\n> I am working with the rsyslog developers to improve it's performance in \n> inserting log messages to databases.\n\nGreat!\n\n> currently they have a postgres interface that works like all the other \n> ones, where rsyslog formats an insert statement, passes that the the \n> interface module, that sends it to postgres (yes, each log as a seperate \n> transaction)\n\nOuch.\n\n> the big win is going to be in changing the core of rsyslog so that it can \n> process multiple messages at a time (bundling them into a single \n> transaction)\n\nYup.\n\n> 1. begin; insert; insert;...;end\n\nDoing the insert in a transaction should definitely improve your\nperformance. Doing them as prepared statements would be good too, and\nusing binary mode would very likely help.\n\n> 2. insert into table values (),(),(),()\n\nUsing this structure would be more database agnostic, but won't perform\nas well as the COPY options I don't believe. It might be interesting to\ndo a large \"insert into table values (),(),()\" as a prepared statement,\nbut then you'd have to have different sizes for each different number of\nitems you want inserted.\n\n> 3. copy from stdin\n> (how do you tell it how many records to read from stdin, or that you \n> have given it everything without disconnecting)\n\nAssuming you're using libpq, you just call PQputCopyEnd(). Then you\ncall PQgetResult() to check that everything worked ok.\n\n> 4. copy from stdin in binary mode\n\nBinary mode, in general, should be faster. 
You should consider what\nformat the data is inside your application though (it's less useful to\nuse binary copy if you're having to convert from text to binary in your\napplication).\n\n> and each of the options above can be done with prepared statements, \n> stored procedures, or functions.\n>\n> I know that using procedures or functions can let you do fancy things \n> like inserting the row(s) into the appropriate section of a partitioned \n> table\n\nWe would normally recommend having the database handle the partitioning\nby using a trigger on the base table to call a stored procedure. The\napplication really doesn't need to know about this.\n\n> other than this sort of capability, what sort of differences should be \n> expected between the various approaches (including prepared statements vs \n> unprepared)\n>\n> since the changes that rsyslog is making will affect all the other \n> database interfaces as well, any comments about big wins or things to \n> avoid for other databases would be appriciated.\n\nHope this helps.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 20 Apr 2009 21:55:15 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Mon, 20 Apr 2009, Stephen Frost wrote:\n\n> David,\n>\n> * [email protected] ([email protected]) wrote:\n>> I am working with the rsyslog developers to improve it's performance in\n>> inserting log messages to databases.\n>\n> Great!\n>\n>> currently they have a postgres interface that works like all the other\n>> ones, where rsyslog formats an insert statement, passes that the the\n>> interface module, that sends it to postgres (yes, each log as a seperate\n>> transaction)\n>\n> Ouch.\n\nyep\n\n>> the big win is going to be in changing the core of rsyslog so that it can\n>> process multiple messages at a time (bundling them into a single\n>> transaction)\n>\n> Yup.\n>\n>> 1. begin; insert; insert;...;end\n>\n> Doing the insert in a transaction should definitely improve your\n> performance. Doing them as prepared statements would be good too, and\n> using binary mode would very likely help.\n>\n>> 2. insert into table values (),(),(),()\n>\n> Using this structure would be more database agnostic, but won't perform\n> as well as the COPY options I don't believe. It might be interesting to\n> do a large \"insert into table values (),(),()\" as a prepared statement,\n> but then you'd have to have different sizes for each different number of\n> items you want inserted.\n\non the other hand, when you have a full queue (lots of stuff to insert) is \nwhen you need the performance the most. if it's enough of a win on the \ndatabase side, it could be worth more effort on the applicaiton side.\n\n>> 3. copy from stdin\n>> (how do you tell it how many records to read from stdin, or that you\n>> have given it everything without disconnecting)\n>\n> Assuming you're using libpq, you just call PQputCopyEnd(). Then you\n> call PQgetResult() to check that everything worked ok.\n\none of the big questions is what value we will get by making things \ndatabase specififc (more below)\n\n>> 4. copy from stdin in binary mode\n>\n> Binary mode, in general, should be faster. 
You should consider what\n> format the data is inside your application though (it's less useful to\n> use binary copy if you're having to convert from text to binary in your\n> application).\n\nany idea what sort of difference binary mode would result in?\n\n>> and each of the options above can be done with prepared statements,\n>> stored procedures, or functions.\n>>\n>> I know that using procedures or functions can let you do fancy things\n>> like inserting the row(s) into the appropriate section of a partitioned\n>> table\n>\n> We would normally recommend having the database handle the partitioning\n> by using a trigger on the base table to call a stored procedure. The\n> application really doesn't need to know about this.\n\nwell, the trigger or stored procedure/funcion can be part of the database \nconfig, or loaded by the app when it starts.\n\n\n\none very big question is how much of a gain there is in moving from a \ndatabase agnostic approach to a database specific approach.\n\ncurrently rsyslog makes use of it's extensive formatting capabilities to \nformat a string along the lines of\n$DBformat=\"insert into table X values ('$timestamp','$msg');\"\nthen it hands the resulting string to the database interface module. This \nis a bit of a pain to setup, and not especially efficiant, but it has the \nability to insert data into whatever schema you want to use (unlike a lot \nof apps that try to force you to use their schema)\n\nI proposed a 5 variable replacement for this to allow for N log entries to \nbe combined into one string to be sent to the database:\n\nDBinit (one-time things like initialinzing prepared statements, etc)\nDBstart (string for the start of a transaction)\nDBjoin (tring to use to join multiple DBitems togeather)\nDBend (string for the end of a transaction)\nDBitem (formatting of a single action )\n\nso you could do something like\n\nDBstart = \"insert into table X values\"\nDBjoin = \",\"\nDBend = \";\"\nDBitem = \"('$timestampe','$msg')\"\n\nand it would create a string like #2\n\nthis is extremely flexible. I think it can do everything except binary \nmode operations, including copy. It is also pretty database agnostic.\n\nbut people are asking about how to do binary mode, and some were thinking \nthat you couldn't do prepared statements in Oracle with a string-based \ninterface.\n\nso I decided to post here to try and get an idea of (1) how much \nperformance would be lost by sticking with strings, and (2) of all the \nvarious ways of inserting the data, what sort of performance differences \nare we talking about\n\nDavid Lang\n", "msg_date": "Mon, 20 Apr 2009 19:24:22 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "David,\n\n* [email protected] ([email protected]) wrote:\n> any idea what sort of difference binary mode would result in?\n\nIt depends a great deal on your application..\n\n> currently rsyslog makes use of it's extensive formatting capabilities to \n> format a string along the lines of\n> $DBformat=\"insert into table X values ('$timestamp','$msg');\"\n\nIs this primairly the commands sent to the database? If so, I don't\nthink you'll get much by going to binary-mode. The text '$msg' isn't\ngoing to be any different in binary. 
The '$timestamp' would be, but I'm\nguessing you'd have to restructure it some to match the PG binary\ntimestamp format and while that *would* be a win, I don't think it would\nend up being all that much of a win.\n\n> I proposed a 5 variable replacement for this to allow for N log entries \n> to be combined into one string to be sent to the database:\n>\n> DBinit (one-time things like initialinzing prepared statements, etc)\n> DBstart (string for the start of a transaction)\n> DBjoin (tring to use to join multiple DBitems togeather)\n> DBend (string for the end of a transaction)\n> DBitem (formatting of a single action )\n>\n> so you could do something like\n>\n> DBstart = \"insert into table X values\"\n> DBjoin = \",\"\n> DBend = \";\"\n> DBitem = \"('$timestampe','$msg')\"\n>\n> and it would create a string like #2\n\nUsing this textual representation for the DBitem would cause difficulty\nfor any kind of prepared statement usage (Oracle or PG), and so I would\nreally recommend getting away from it if possible. Instead, I would\nencourage going with the PG (and Oracle, as I recall) structure of\nhaving an array of pointers to the values.\n\nTake a look at the documentation for PQexecParams here:\nhttp://www.postgresql.org/docs/8.3/interactive/libpq-exec.html\n\n(note that you'll want to use PQprepare and PQexecPrepared in the end,\nbut the detailed documentation is under PQexecParams)\n\nBasically, you would have:\n\nDBnParams = 2;\nDBparamValues[0] = ptr to $timestamp\nDBparamValues[1] = ptr to $msg\n\nIf you just use the text format, you don't actually need anything else\nfor PG, just pass in NULL for paramTypes, paramLengths, and\nparamFormats, and 0 for resultFormat.\n\nOf course, if that's your only structure, then you can just make a C\nstruct that has those two pointers in it and simplify your API by\npassing the struct around.\n\n> this is extremely flexible. I think it can do everything except binary \n> mode operations, including copy. It is also pretty database agnostic.\n\nWith that DBitem, I'm not sure how you would do copy easily. You'd have\nto strip out the params and possibly the comma depending on what you're\ndoing, and you might have to adjust your escaping (how is that done\ntoday in $msg?). All-in-all, not using prepared queries is just messy\nand I would recommend avoiding that, regardless of anything else.\n\n> but people are asking about how to do binary mode, and some were thinking \n> that you couldn't do prepared statements in Oracle with a string-based \n> interface.\n\nPrepared statements pretty much require that you are able to pass in the\nitems in a non-string-based way (I don't mean to imply that you can't\nuse *strings*, you can, but it's 1 string per column). Otherwise,\nyou've got the whole issue of figuring out where one column ends and the\nnext begins again, which is half the point of prepared statements.\n\n> so I decided to post here to try and get an idea of (1) how much \n> performance would be lost by sticking with strings, and (2) of all the \n> various ways of inserting the data, what sort of performance differences \n> are we talking about\n\nSticking with strings, if that's the format that's going to end up in\nthe database, is fine. That's an orthogonal issue to using prepared\nstatements though, which you should really do. 
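Just to make this concrete, here's a rough, untested sketch of what the
prepared-statement-plus-batching approach looks like through libpq. The
table and column names are made up and the error handling is cut to the
bare minimum:

#include <stdio.h>
#include <libpq-fe.h>

/* Insert a batch of already-formatted text values in one transaction,
 * using a statement that PG parses/plans only once.  In real code the
 * PQprepare() would be done once when the connection is opened, not on
 * every batch. */
int
insert_batch(PGconn *conn, const char **timestamps, const char **msgs,
             int count)
{
    PGresult   *res;
    int         i;

    res = PQprepare(conn, "log_ins",
                    "INSERT INTO logs (ts, msg) VALUES ($1, $2)", 2, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    PQclear(PQexec(conn, "BEGIN"));
    for (i = 0; i < count; i++)
    {
        const char *values[2] = { timestamps[i], msgs[i] };

        /* text-mode parameters: NULL lengths/formats, text results */
        res = PQexecPrepared(conn, "log_ins", 2, values, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    PQclear(PQexec(conn, "COMMIT"));
    return 0;
}

Note there's no escaping of the values anywhere, and no quoting to get
wrong.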
Once you've converted to\nusing prepared statements of some kind, and batching together inserts in\nlarger transactions instead of one insert per transactions, then you can\ncome back to the question of passing things-which-can-be-binary as\nbinary (eg, timestamps, integers, floats, doubles, etc) and do some\nperformance testing to see how much an improvment it will get you.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 20 Apr 2009 22:44:58 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Mon, 20 Apr 2009, [email protected] wrote:\n\n> any idea what sort of difference binary mode would result in?\n\nThe win from switching from INSERT to COPY can be pretty big, further \noptimizing to BINARY you'd really need to profile to justify. I haven't \nfound any significant difference in binary mode compared to overhead of \nthe commit itself in most cases. The only thing I consistently run into \nis that timestamps can bog things down considerably in text mode, but you \nhave to be pretty efficient in your app to do any better generating those \nthose in the PostgreSQL binary format yourself. If you had a lot of \ndifficult to parse data types like that, binary might be a plus, but it \ndoesn't sound like that will be the case for what you're doing.\n\nBut you don't have to believe me, it's easy to generate a test case here \nyourself. Copy some typical data into the database, export it both ways:\n\nCOPY t to 'f';\nCOPY t to 'f' WITH BINARY;\n\nAnd then compare copying them both in again with \"\\timing\". That should \nlet you definitively answer whether it's really worth the trouble.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 20 Apr 2009 23:12:25 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "Greg,\n\n* Greg Smith ([email protected]) wrote:\n> The win from switching from INSERT to COPY can be pretty big, further \n> optimizing to BINARY you'd really need to profile to justify. \n\nHave you done any testing to compare COPY vs. INSERT using prepared\nstatements? I'd be curious to know how those compare and against\nmulti-value INSERTS, prepared and unprepared.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 20 Apr 2009 23:15:18 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Mon, 20 Apr 2009, Stephen Frost wrote:\n\n> David,\n>\n> * [email protected] ([email protected]) wrote:\n>> any idea what sort of difference binary mode would result in?\n>\n> It depends a great deal on your application..\n>\n>> currently rsyslog makes use of it's extensive formatting capabilities to\n>> format a string along the lines of\n>> $DBformat=\"insert into table X values ('$timestamp','$msg');\"\n>\n> Is this primairly the commands sent to the database? If so, I don't\n> think you'll get much by going to binary-mode. The text '$msg' isn't\n> going to be any different in binary. 
The '$timestamp' would be, but I'm\n> guessing you'd have to restructure it some to match the PG binary\n> timestamp format and while that *would* be a win, I don't think it would\n> end up being all that much of a win.\n\nthe applicaiton is the log server pulling apart messages, reformatting \nthem to whatever is appropriate for the database schema, and then \ninserting them into the database (for other applications to access, ones \nthat rsyslog knows nothing about)\n\nI used the example of a trivial table with timestamp and log message, but \nin most cases you will break out sending host and application as well, and \nin some cases may parse apart the log message itself. I have a use case \nwhere the message itself if pipe delimited, and I will want to do make use \nof the first four fields of the message (probably as seperate columns) \nbefore dumping the rest of the message into a text field.\n\n>> I proposed a 5 variable replacement for this to allow for N log entries\n>> to be combined into one string to be sent to the database:\n>>\n>> DBinit (one-time things like initialinzing prepared statements, etc)\n>> DBstart (string for the start of a transaction)\n>> DBjoin (tring to use to join multiple DBitems togeather)\n>> DBend (string for the end of a transaction)\n>> DBitem (formatting of a single action )\n>>\n>> so you could do something like\n>>\n>> DBstart = \"insert into table X values\"\n>> DBjoin = \",\"\n>> DBend = \";\"\n>> DBitem = \"('$timestampe','$msg')\"\n>>\n>> and it would create a string like #2\n>\n> Using this textual representation for the DBitem would cause difficulty\n> for any kind of prepared statement usage (Oracle or PG), and so I would\n> really recommend getting away from it if possible.\n\nthat example would be, but the same mechanism would let you do\n\n\nDBinit=\"PREPARE rsyslog_insert(date, text) AS\\nINSERT INTO foo VALUES(\\$1, \n\\$2);\"\nDBstart = \"begini;B\\n\"\nDBjoin = \"\"\nDBend = \"end;\"\nDBitem = \"EXECUTE rsyslog_insert('$timestamp','$msg');\\n\"\n\nwhich would become\n\nPREPARE rsyslog_insert(date, text) AS\n INSERT INTO foo VALUES($1, $2);\nbegin;\nEXECUTE rsyslog_insert('20090420-06:00', \"log1\");\nEXECUTE rsyslog_insert('20090420-06:00', \"log2\");\nEXECUTE rsyslog_insert('20090420-06:00', \"log3\");\nend;\n\nwhich I think makes good use of prepared statements.\n\n> Instead, I would\n> encourage going with the PG (and Oracle, as I recall) structure of\n> having an array of pointers to the values.\n>\n> Take a look at the documentation for PQexecParams here:\n> http://www.postgresql.org/docs/8.3/interactive/libpq-exec.html\n>\n> (note that you'll want to use PQprepare and PQexecPrepared in the end,\n> but the detailed documentation is under PQexecParams)\n>\n> Basically, you would have:\n>\n> DBnParams = 2;\n> DBparamValues[0] = ptr to $timestamp\n> DBparamValues[1] = ptr to $msg\n>\n> If you just use the text format, you don't actually need anything else\n> for PG, just pass in NULL for paramTypes, paramLengths, and\n> paramFormats, and 0 for resultFormat.\n>\n> Of course, if that's your only structure, then you can just make a C\n> struct that has those two pointers in it and simplify your API by\n> passing the struct around.\n\nthe database structure is not being defined by (or specificly for) \nrsyslog. so at compile time we have _no_ idea how many variables of what \ntype there are going to be. 
my example of ($timestamp,$msg) was intended \nto just be a sample (avoiding typing out some elaberate set of parameters)\n\nrsyslog provides the following items, which can be sliced and diced with \nsubstatutions, substrings, and additional inserted text.\n\nmsg \tthe MSG part of the message (aka \"the message\" ;))\n\nrawmsg \tthe message excactly as it was received from the socket. Should be \nuseful for debugging.\n\nuxtradmsg \twill disappear soon - do NOT use!\n\nhostname \thostname from the message\n\nsource \talias for HOSTNAME\n\nfromhost \thostname of the system the message was received from (in a \nrelay chain, this is the system immediately in front of us and not \nnecessarily the original sender). This is a DNS-resolved name, except if \nthat is not possible or DNS resolution has been disabled.\n\nfromhost-ip \tThe same as fromhost, but alsways as an IP address. Local \ninputs (like imklog) use 127.0.0.1 in this property.\n\nsyslogtag \tTAG from the message\n\nprogramname \tthe \"static\" part of the tag, as defined by BSD syslogd. \nFor example, when TAG is \"named[12345]\", programname is \"named\".\n\npri \tPRI part of the message - undecoded (single value)\n\npri-text \tthe PRI part of the message in a textual form (e.g. \n\"syslog.info\")\n\niut \tthe monitorware InfoUnitType - used when talking to a MonitorWare\n\nbackend (also for phpLogCon)\n\nsyslogfacility \tthe facility from the message - in numerical form\n\nsyslogfacility-text \tthe facility from the message - in text form\n\nsyslogseverity \tseverity from the message - in numerical form\n\nsyslogseverity-text \tseverity from the message - in text form\n\nsyslogpriority \tan alias for syslogseverity - included for historical \nreasons (be careful: it still is the severity, not PRI!)\n\nsyslogpriority-text \tan alias for syslogseverity-text\n\ntimegenerated \ttimestamp when the message was RECEIVED. Always in high \nresolution\n\ntimereported \ttimestamp from the message. Resolution depends on what was \nprovided in the message (in most cases, only seconds)\n\ntimestamp \talias for timereported\n\nprotocol-version \tThe contents of the PROTCOL-VERSION field from \nIETF draft draft-ietf-syslog-protcol\n\nstructured-data \tThe contents of the STRUCTURED-DATA field from \nIETF draft draft-ietf-syslog-protocol\n\napp-name \tThe contents of the APP-NAME field from IETF draft \ndraft-ietf-syslog-protocol\n\nprocid \tThe contents of the PROCID field from IETF draft \ndraft-ietf-syslog-protocol\n\nmsgid \tThe contents of the MSGID field from IETF draft \ndraft-ietf-syslog-protocol\n\ninputname \tThe name of the input module that generated the message \n(e.g. \"imuxsock\", \"imudp\"). Note that not all modules necessarily provide \nthis property. If not provided, it is an empty string. Also note that the \ninput module may provide any value of its liking. Most importantly, it is \nnot necessarily the module input name. Internal sources can also provide \ninputnames. Currently, \"rsyslogd\" is defined as inputname for messages \ninternally generated by rsyslogd, for example startup and shutdown and \nerror messages. This property is considered useful when trying to filter \nmessages based on where they originated - e.g. 
locally generated messages \n(\"rsyslogd\", \"imuxsock\", \"imklog\") should go to a different place than \nmessages generated somewhere.\n\n$now \tThe current date stamp in the format YYYY-MM-DD\n\n$year \tThe current year (4-digit)\n\n$month \tThe current month (2-digit)\n\n$day \tThe current day of the month (2-digit)\n\n$hour \tThe current hour in military (24 hour) time (2-digit)\n\n$hhour \tThe current half hour we are in. From minute 0 to 29, this is \nalways 0 while from 30 to 59 it is always 1.\n\n$qhour \tThe current quarter hour we are in. Much like $HHOUR, but values \nrange from 0 to 3 (for the four quater hours that are in each hour)\n\n$minute \tThe current minute (2-digit)\n\n$myhostname \tThe name of the current host as it knows itself (probably \nuseful for filtering in a generic way)\n\n>> this is extremely flexible. I think it can do everything except binary\n>> mode operations, including copy. It is also pretty database agnostic.\n>\n> With that DBitem, I'm not sure how you would do copy easily. You'd have\n> to strip out the params and possibly the comma depending on what you're\n> doing, and you might have to adjust your escaping (how is that done\n> today in $msg?). All-in-all, not using prepared queries is just messy\n> and I would recommend avoiding that, regardless of anything else.\n\nrsyslog message formatting provides tools for doing the nessasary escaping \n(and is using it for the single insert messages today)\n\nprepared statements in text mode have similar problems (although they \n_are_ better in defending against sql injection attacks, so a bit safer). \nI don't see how you would easily use the API that you pointed me at above \nwithout having to know the database layout at compile time.\n\n>> but people are asking about how to do binary mode, and some were thinking\n>> that you couldn't do prepared statements in Oracle with a string-based\n>> interface.\n>\n> Prepared statements pretty much require that you are able to pass in the\n> items in a non-string-based way (I don't mean to imply that you can't\n> use *strings*, you can, but it's 1 string per column). Otherwise,\n> you've got the whole issue of figuring out where one column ends and the\n> next begins again, which is half the point of prepared statements.\n\nif you had said figuring out where the column data ends and the SQL \ncommand begins I would have agreed with you fully.\n\nI agree that defining a fixed table layout and compiling that knowledge \ninto rsyslog is the safest (and probably most efficiant) way to do things, \nbut there is no standard for log messages in a database, and different \npeople will want to do different things with the logs, so I don't see how \na fixed definition could work.\n\n>> so I decided to post here to try and get an idea of (1) how much\n>> performance would be lost by sticking with strings, and (2) of all the\n>> various ways of inserting the data, what sort of performance differences\n>> are we talking about\n>\n> Sticking with strings, if that's the format that's going to end up in\n> the database, is fine. That's an orthogonal issue to using prepared\n> statements though, which you should really do. 
Once you've converted to\n> using prepared statements of some kind, and batching together inserts in\n> larger transactions instead of one insert per transactions, then you can\n> come back to the question of passing things-which-can-be-binary as\n> binary (eg, timestamps, integers, floats, doubles, etc) and do some\n> performance testing to see how much an improvment it will get you.\n\nso the binary mode only makes a difference on things like timestamps and \nnumbers? (i.e. no significant added efficiancy in processing the command \nitself?)\n\nthanks for taking the time to answer, I was trying to keep the problem \ndefinition small and simple, and from your reply it looks like I made it \ntoo simple.\n\nI think the huge complication is that when RedHat compiles rsyslog to ship \nit in the distro, they have no idea how it is going to be used (if it will \ngo to a database, what database engine it will interface with, or what the \nschema of that database would look like). Only the sysadmin(s)/dba(s) know \nthat and they need to be able to tell rsyslog what to do to get the data \nwhere they want it to be, and in the format they want it to be in.\n\nDavid Lang\n", "msg_date": "Mon, 20 Apr 2009 20:29:54 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "David,\n\n* [email protected] ([email protected]) wrote:\n> the database structure is not being defined by (or specificly for) \n> rsyslog. so at compile time we have _no_ idea how many variables of what \n> type there are going to be. my example of ($timestamp,$msg) was intended \n> to just be a sample (avoiding typing out some elaberate set of \n> parameters)\n\nThat's fine. I don't see any reason that the API I suggested be a\ncompile-time option. Certainly, libpq has no idea how many, or what\nkind, of types are being passed to it by the caller, that's why it *has*\nthe API that it does. You just need to work out what each prepared\nqueries' parameters are and construct the necessary arrays.\n\n> rsyslog provides the following items, which can be sliced and diced with \n> substatutions, substrings, and additional inserted text.\n\n[...]\n\nLooks like mainly text fields still, so you might want to just stick\nwith text for now.\n\n> rsyslog message formatting provides tools for doing the nessasary \n> escaping (and is using it for the single insert messages today)\n>\n> prepared statements in text mode have similar problems (although they \n> _are_ better in defending against sql injection attacks, so a bit safer). \n\nUhh, if you use prepared statements with PQexecPrepared, there *is no*\nescaping necessary. I'm not sure what you mean by 'similar problems'.\nCan you elaborate on that? If you mean doing 'prepared queries' by\nusing creating a string and then using PQexec with\n'EXECUTE blah (1,2,3);' then you really need to go read the\ndocumentation I suggested. That's *not* what I'm getting at when I say\n'prepared queries', I'm talking about a protocol-level well-defined\nformat for passing arguments independently of commands. A call to\nPQexecPrepared looks like this:\n\nPQprepare(conn, \"myquery\", \"INSERT INTO TAB1 VALUES ($1, $2);\", 0, NULL);\n\nvalues[0] = \"a\";\nvalues[1] = \"b\";\nPQexecPrepared(conn, \"myquery\", 2, values, NULL, NULL, 0);\n\nNote that we don't ever send an 'EXECUTE myquery (1,2,3);' type of thing\nto libpq. 
libpq will handle the execute and the parameters and whatnot\nas part of the PG 3.0 protocol.\n\n> I don't see how you would easily use the API that you pointed me at above \n> without having to know the database layout at compile time.\n\nThe arrays don't have to be of fixed length.. You can malloc() them at\nruntime based on the number of parameters which are being used in a\nparticular message. Perhaps what I'm missing here is exactly what\nyou're expecting the user to provide you with versus what you're going\nto be giving to libpq. I have been assuming that you have a format\ndefinition system already in place that looks something like:\n\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n\nWhich you then parse, figure out what the escape codes mean, and build\nsome structure which understands the whole thing at an abstract level.\nFor example, you should know that '%Y' is the first variable, is an\ninteger, etc.\n\nFrom this, you can determine that there are 6 parameters, at runtime.\nYou can then malloc() an array with 6 char* pointers. Then, when you\nhave those 6 strings somewhere, you just set each of your array\nparameters to the appropriate in-memory string address, eg:\n\narray = malloc(sizeof(char*) * num_of_params);\nfor (int i = 0; i < num_of_params; i++) {\n\tswitch(params[i]) {\n\t\tcase 'Y':\tarray[i] = my_year_string; break;\n\t\tcase 'm':\tarray[i] = my_month_string; break;\n\t}\n}\n\netc, until you eventually have your array of pointers, with a valid\nin-memory string somewhere for each pointer, that you can then turn\naround and pass to PQexecPrepared. Obviously, you don't have to\nmalloc() every time, if you keep track of each type of message.\n\n> I agree that defining a fixed table layout and compiling that knowledge \n> into rsyslog is the safest (and probably most efficiant) way to do \n> things, but there is no standard for log messages in a database, and \n> different people will want to do different things with the logs, so I \n> don't see how a fixed definition could work.\n\nI didn't intend to imply that you have to use a fixed definition, just\nthat if you currently only have 1 then you might as well. It's entirely\npossible to support any definition using the API I suggested.\n\n> so the binary mode only makes a difference on things like timestamps and \n> numbers? (i.e. no significant added efficiancy in processing the command \n> itself?)\n\nI'm slightly confused by what you mean by this? Binary mode is for\nparameters only, commands are never 'binary'. Binary mode just means\nthat the application gives the value to the database in the format which\nit expects, and so the database doesn't have to translate a textual\nrepresentation of a value into the binary format the database needs.\n\n> thanks for taking the time to answer, I was trying to keep the problem \n> definition small and simple, and from your reply it looks like I made it \n> too simple.\n\nYes, probably. You likely assumed that I knew something about how\nrsyslog works with databases, and to be honest, I have no idea. :)\n\n> I think the huge complication is that when RedHat compiles rsyslog to \n> ship it in the distro, they have no idea how it is going to be used (if \n> it will go to a database, what database engine it will interface with, or \n> what the schema of that database would look like). 
Only the \n> sysadmin(s)/dba(s) know that and they need to be able to tell rsyslog \n> what to do to get the data where they want it to be, and in the format \n> they want it to be in.\n\nThat's really all fine. You just need to get from the user, at runtime,\nwhat they want their commands to look like. Once you have that, it\nshould be entirely possible to dynamically construct the prepared\nqueries, most likely without the user having to know anything about COPY\nor prepared statements or anything. For my part, I'd want something\nlike:\n\ntable = \"mytable\";\ndata = \"$Y, $m, $d, $H, $msg\";\n\nI'd avoid having the user provide actual SQL, because that becomes\ndifficult to deal with unless you embed an SQL parser in rsyslog, and\nI don't really see the value in that. If the user wants to do\nsomething fancy with the data in the database, I would encourage them\nto put an 'ON INSERT' trigger on 'mytable' to do whatever they want with\nthe data that's coming in. This gives you the freedom necessary to\nbuild an appropriate statement for any database you're connecting to,\ndynamically, using prepared queries, and even binary mode if you want.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 00:10:08 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, Stephen Frost wrote:\n\n> David,\n>\n> * [email protected] ([email protected]) wrote:\n>> the database structure is not being defined by (or specificly for)\n>> rsyslog. so at compile time we have _no_ idea how many variables of what\n>> type there are going to be. my example of ($timestamp,$msg) was intended\n>> to just be a sample (avoiding typing out some elaberate set of\n>> parameters)\n>\n> That's fine. I don't see any reason that the API I suggested be a\n> compile-time option. Certainly, libpq has no idea how many, or what\n> kind, of types are being passed to it by the caller, that's why it *has*\n> the API that it does. You just need to work out what each prepared\n> queries' parameters are and construct the necessary arrays.\n\nI misunderstood what you were saying (more below)\n\n>> rsyslog provides the following items, which can be sliced and diced with\n>> substatutions, substrings, and additional inserted text.\n>\n> [...]\n>\n> Looks like mainly text fields still, so you might want to just stick\n> with text for now.\n\nyes, almost exclusivly text fields, and the fields that could be numbers \n(or dates) are in text formats when we get them anyway.\n\n>> rsyslog message formatting provides tools for doing the nessasary\n>> escaping (and is using it for the single insert messages today)\n>>\n>> prepared statements in text mode have similar problems (although they\n>> _are_ better in defending against sql injection attacks, so a bit safer).\n>\n> Uhh, if you use prepared statements with PQexecPrepared, there *is no*\n> escaping necessary. I'm not sure what you mean by 'similar problems'.\n> Can you elaborate on that? If you mean doing 'prepared queries' by\n> using creating a string and then using PQexec with\n> 'EXECUTE blah (1,2,3);' then you really need to go read the\n> documentation I suggested. That's *not* what I'm getting at when I say\n> 'prepared queries', I'm talking about a protocol-level well-defined\n> format for passing arguments independently of commands. 
A call to\n> PQexecPrepared looks like this:\n>\n> PQprepare(conn, \"myquery\", \"INSERT INTO TAB1 VALUES ($1, $2);\", 0, NULL);\n>\n> values[0] = \"a\";\n> values[1] = \"b\";\n> PQexecPrepared(conn, \"myquery\", 2, values, NULL, NULL, 0);\n>\n> Note that we don't ever send an 'EXECUTE myquery (1,2,3);' type of thing\n> to libpq. libpq will handle the execute and the parameters and whatnot\n> as part of the PG 3.0 protocol.\n\nwhen you said to stick with text mode, I thought you were meaning that we \nwould create a string with EXECUTE.... in it and send that. it would have \nsimilar escaping issues (although with fewer vunerabilities if they mess \nup)\n\n\n>> so the binary mode only makes a difference on things like timestamps and\n>> numbers? (i.e. no significant added efficiancy in processing the command\n>> itself?)\n>\n> I'm slightly confused by what you mean by this? Binary mode is for\n> parameters only, commands are never 'binary'. Binary mode just means\n> that the application gives the value to the database in the format which\n> it expects, and so the database doesn't have to translate a textual\n> representation of a value into the binary format the database needs.\n\nI thought that part of the 'efficiancy' and 'performance' to be gained \nfrom binary modes were avoiding the need to parse commands, if it's only \nthe savings in converting column contents from text to specific types, \nit's much less important.\n\n>> I think the huge complication is that when RedHat compiles rsyslog to\n>> ship it in the distro, they have no idea how it is going to be used (if\n>> it will go to a database, what database engine it will interface with, or\n>> what the schema of that database would look like). Only the\n>> sysadmin(s)/dba(s) know that and they need to be able to tell rsyslog\n>> what to do to get the data where they want it to be, and in the format\n>> they want it to be in.\n>\n> That's really all fine. You just need to get from the user, at runtime,\n> what they want their commands to look like. Once you have that, it\n> should be entirely possible to dynamically construct the prepared\n> queries, most likely without the user having to know anything about COPY\n> or prepared statements or anything. For my part, I'd want something\n> like:\n>\n> table = \"mytable\";\n> data = \"$Y, $m, $d, $H, $msg\";\n\nif the user creates the data this way, you just reintroduced the escaping \nproblem. they would have to do something like\n\ndata = \"$Y\"\ndata = \"$m\"\ndata = \"$d\"\ndata = \"$H\"\ndata = \"$msg\"\n\none key thing is that it's very probable that the user will want to \nmanipulate the string, not just send a single variable as-is\n\n> I'd avoid having the user provide actual SQL, because that becomes\n> difficult to deal with unless you embed an SQL parser in rsyslog, and\n> I don't really see the value in that.\n\nthere's no need for rsyslog to parse the SQL, just to be able to escape it \nappropriately and then pass it to the database for execution\n\n\n> If the user wants to do\n> something fancy with the data in the database, I would encourage them\n> to put an 'ON INSERT' trigger on 'mytable' to do whatever they want with\n> the data that's coming in. 
This gives you the freedom necessary to\n> build an appropriate statement for any database you're connecting to,\n> dynamically, using prepared queries, and even binary mode if you want.\n\n\n\none huge advantage of putting the sql into the configuration is the \nability to work around other users of the database.\n\nfor example, what if the database has additional columns that you don't \nwant to touch (say an item# sequence), if the SQL is in the config this is \neasy to work around, if it's seperate (or created by the module), this is \nvery hard to do.\n\nI guess you could give examples of the SQL in the documentation for how to \ncreate the prepared statement etc in the databases, but how is that much \nbetter than having it in the config file?\n\nfor many users it's easier to do middlein -fancy stuff in the SQL than \nloading things into the database (can you pre-load prepared statements in \nthe database? or are they a per-connection thing?)\n\n\nso back to the main questions of the advantages\n\nprepared statements avoid needing to escape things, but at the \ncomplication of a more complex API.\n\nthere's still the question of the performance difference. I have been \nthinking that the overhead of doing the work itself would overwelm the \nperformance benifits of prepared statements.\n\nas I understand it, the primary performance benifit is the ability to \navoid the parsing and planning stages of the command. for simple commands \n(which I assume inserts to be, even if inserting a lot of stuff), the \nplanning would seem to be cheap compared to the work of doing the inserts\n\non a fully tuned database are we talking about 10% performance? 1%? 0.01%?\n\nany ideas\n", "msg_date": "Mon, 20 Apr 2009 23:00:54 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Mon, 20 Apr 2009, Greg Smith wrote:\n\n> On Mon, 20 Apr 2009, [email protected] wrote:\n>\n>> any idea what sort of difference binary mode would result in?\n>\n> The win from switching from INSERT to COPY can be pretty big, further \n> optimizing to BINARY you'd really need to profile to justify. I haven't \n> found any significant difference in binary mode compared to overhead of the \n> commit itself in most cases. The only thing I consistently run into is that \n> timestamps can bog things down considerably in text mode, but you have to be \n> pretty efficient in your app to do any better generating those those in the \n> PostgreSQL binary format yourself. If you had a lot of difficult to parse \n> data types like that, binary might be a plus, but it doesn't sound like that \n> will be the case for what you're doing.\n>\n> But you don't have to believe me, it's easy to generate a test case here \n> yourself. Copy some typical data into the database, export it both ways:\n>\n> COPY t to 'f';\n> COPY t to 'f' WITH BINARY;\n>\n> And then compare copying them both in again with \"\\timing\". 
That should let \n> you definitively answer whether it's really worth the trouble.\n\nwhile I fully understand the 'benchmark your situation' need, this isn't \nthat simple.\n\nin this case we are trying to decide what API/interface to use in a \ninfrastructure tool that will be distributed in common distros (it's now \nthe default syslog package of debian and fedora), there are so many \nvariables in hardware, software, and load that trying to benchmark it \nbecomes effectivly impossible.\n\nbased on Stephan's explination of where binary could help, I think the \neasy answer is to decide not to bother with it (the postgres text to X \nconverters get far more optimization attention than anything rsyslog could \ndeploy)\n\nDavid Lang\n", "msg_date": "Mon, 20 Apr 2009 23:05:33 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Mon, 20 Apr 2009, Stephen Frost wrote:\n\n> Greg,\n>\n> * Greg Smith ([email protected]) wrote:\n>> The win from switching from INSERT to COPY can be pretty big, further\n>> optimizing to BINARY you'd really need to profile to justify.\n>\n> Have you done any testing to compare COPY vs. INSERT using prepared\n> statements? I'd be curious to know how those compare and against\n> multi-value INSERTS, prepared and unprepared.\n\nand this is the rest of the question that I was trying (unsucessfully) to \nask.\n\nis this as simple as creating a database and doing an explain on each of \nthese? or do I need to actually measure the time (at which point the \nspecific hardware and tuning settings become an issue again)\n\nDavid Lang\n", "msg_date": "Mon, 20 Apr 2009 23:07:56 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "David,\n\n* [email protected] ([email protected]) wrote:\n> I thought that part of the 'efficiancy' and 'performance' to be gained \n> from binary modes were avoiding the need to parse commands, if it's only \n> the savings in converting column contents from text to specific types, \n> it's much less important.\n\nNo, binary mode is about the column contents. Prepared queries is about\navoiding having to parse commands (which is why the command and the data\nelements are seperately done).\n\n>> table = \"mytable\";\n>> data = \"$Y, $m, $d, $H, $msg\";\n>\n> if the user creates the data this way, you just reintroduced the escaping \n> problem. they would have to do something like\n>\n> data = \"$Y\"\n> data = \"$m\"\n> data = \"$d\"\n> data = \"$H\"\n> data = \"$msg\"\n\nYes, there is a bit of escaping that the admins will have to deal with\nin the config file. There's no way around that though, regardless of\nwhat you do. If you let them put SQL in, then they may have to escape\nstuff there too. 
In the end, an escape problem in the config file is\nsomething which you should really catch when you read the file in, and\neven if you don't catch it there, it's less problematic, not really a\nperformance problem, and much less of a security risk, than having the\nescaping done on data from an untrusted source.\n\n> one key thing is that it's very probable that the user will want to \n> manipulate the string, not just send a single variable as-is\n\nYou could probably let them do some manipulation, add extra\nnon-escape-code fields, maybe tack something on the beginning and end,\nbasically, anything that can be done in the application prior to it\nhitting the database should be fine.\n\n>> I'd avoid having the user provide actual SQL, because that becomes\n>> difficult to deal with unless you embed an SQL parser in rsyslog, and\n>> I don't really see the value in that.\n>\n> there's no need for rsyslog to parse the SQL, just to be able to escape \n> it appropriately and then pass it to the database for execution\n\nIf the user is providing SQL, then you need to be able to parse that SQL\nif you're going to do prepared queries. It might not require you to be\nable to fully parse SQL the way the back-end does, but anything you do\nthat's not a full SQL parser is going to end up having limitations that\nwon't be easy to find or document.\n\nFor example, you could ask users to provide the prepared statement the\nway the database wants it, and then list the data elements seperately\nfrom it somehow, eg:\n\nmyquery = \"INSERT INTO blah (col1, col2, col3) VALUES ($1, $2, $3);\"\nmyvals[1] = \"$Y\"\nmyvals[2] = \"$M\"\nmyvals[3] = \"$msg\"\n\nThe user could then do:\n\nmyquery = \"INSERT INTO blah (col1, col2) SELECT substring($1), $2;\"\nmyvals[1] = \"$M\"\nmyvals[2] = \"$msg\"\n\nBoth of these will work just fine as prepared queries. \n\nYou can then parse that string by just looking for the $'s, but of\ncourse, if the user wants to put an *actual* dollar sign in, then you\nhave to support that somehow ($$?). Then you have to deal with whatever\nother quoting requirements you have in your config file (how do you deal\nwith double quotes? What about single quotes? etc, etc).\n\nYou could possibly even put your escape codes into myquery and just try\nto figure out how to do the substitutions with the $NUMs and build your\nprepared query string. It gets uglier and uglier if you ask me though.\n\nIn the end, I'd still worry about users coming up with new and different\nways to break your sql 'parser'.\n\n> one huge advantage of putting the sql into the configuration is the \n> ability to work around other users of the database.\n\nSee, I just don't see that.\n\n> for example, what if the database has additional columns that you don't \n> want to touch (say an item# sequence), if the SQL is in the config this \n> is easy to work around, if it's seperate (or created by the module), this \n> is very hard to do.\n\nYou can do that with a trigger trivially.. That could also be supported\nthrough other mechanisms (for instance, let the user provide a list of\ncolumns to fill with DEFAULT in the prepared query).\n\n> I guess you could give examples of the SQL in the documentation for how \n> to create the prepared statement etc in the databases, but how is that \n> much better than having it in the config file?\n>\n> for many users it's easier to do middlein -fancy stuff in the SQL than \n> loading things into the database (can you pre-load prepared statements in \n> the database? 
or are they a per-connection thing?)\n\nPrepared statements, at least under PG, are a per-connection thing.\nTriggers aren't the same, those are attached to tables and get called\nwhenever a particular action is done on those tables (defined in the\ntrigger definition). The trigger is then called with the row which is\nbeing inserted, etc, and can do whatever it wants with that row (put it\nin a different table, ignore it, etc).\n\n> so back to the main questions of the advantages\n>\n> prepared statements avoid needing to escape things, but at the \n> complication of a more complex API.\n>\n> there's still the question of the performance difference. I have been \n> thinking that the overhead of doing the work itself would overwelm the \n> performance benifits of prepared statements.\n\nWhat work is it that you're referring to here? Based on what you've\nsaid about your application so far, I would expect that the run-time\ncost to prepare the statement (which you do only once) to be a bit of a\ncost, but not much, and that the actual inserts would be almost free\nfrom the application side, and much easier for the database to\nparse/use.\n\n> as I understand it, the primary performance benifit is the ability to \n> avoid the parsing and planning stages of the command. for simple commands \n> (which I assume inserts to be, even if inserting a lot of stuff), the \n> planning would seem to be cheap compared to the work of doing the inserts\n\nThe planning isn't very expensive, no, but it's also not free. The\nparsing is more expensive.\n\n> on a fully tuned database are we talking about 10% performance? 1%? 0.01%?\n>\n> any ideas\n\nIt depends a great deal on your application.. Do you have some example\ndata that we could use to test with? Some default templates that you\nthink most people end up using which we could create dummy data for?\n\nOf course, in the end, batching your inserts into fewer transactions\nthan 1-per-insert should give you a huge benefit right off the bat.\nWhat we're talking about is really the steps after that, which might not\never be necessary for your particular application. On the other hand,\nI'd never let production code go out that isn't using prepared queries\nwherever possible.\n\n:)\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 02:41:38 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* [email protected] ([email protected]) wrote:\n> while I fully understand the 'benchmark your situation' need, this isn't \n> that simple.\n\nIt really is. You know your application, you know it's primary use\ncases, and probably have some data to play with. You're certainly in a\nmuch better situation to at least *try* and benchmark it than we are.\n\n> in this case we are trying to decide what API/interface to use in a \n> infrastructure tool that will be distributed in common distros (it's now \n> the default syslog package of debian and fedora), there are so many \n> variables in hardware, software, and load that trying to benchmark it \n> becomes effectivly impossible.\n\nYou don't need to know how it will perform in every situation. The main\nquestion you have is if using prepared queries is faster or not, so pick\na common structure, create a table, get some data, and test. 
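For instance, something as quick as the following would already tell you
a lot. This is just a sketch, the columns are made up, and
'/tmp/sample.data' is simply wherever you've dumped a few thousand rows
of real syslog data:

CREATE TABLE logtest (ts timestamptz, host text, msg text);
\timing

-- 1) one transaction per insert (roughly what rsyslog does today)
INSERT INTO logtest VALUES (now(), 'host1', 'test message');

-- 2) the same inserts batched into a single transaction
BEGIN;
INSERT INTO logtest VALUES (now(), 'host1', 'test message');
-- ... repeat a few thousand times ...
COMMIT;

-- 3) the same rows loaded with COPY
COPY logtest FROM '/tmp/sample.data';

Compare the timings of the three and you'll know where the big wins are
for your data.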
I can say\nthat prepared queries will be more likely to give you a performance\nboost with wider tables (more columns).\n\n> based on Stephan's explination of where binary could help, I think the \n> easy answer is to decide not to bother with it (the postgres text to X \n> converters get far more optimization attention than anything rsyslog \n> could deploy)\n\nWhile that's true, there's no substitute for not having to do a\nconversion at all. After all, it's alot cheaper to do a bit of\nbyte-swapping on an integer value that's already an integer in memory\nthan to sprintf and atoi it.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 02:45:54 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "David,\n\n* [email protected] ([email protected]) wrote:\n> is this as simple as creating a database and doing an explain on each of \n> these? or do I need to actually measure the time (at which point the \n> specific hardware and tuning settings become an issue again)\n\nNo, you need to measure the time. An explain isn't going to tell you\nmuch. However, I think the point here is that if you see a 10%\nperformance improvment on some given hardware for a particular test,\nthen chances are pretty good most people will see a performance\nbenefit. Some more, some less, but it's unlikely anyone will have worse\nperformance for it. There are some edge cases where a prepared\nstatement can reduce performance, but that's almost always on SELECT\nqueries, I can't think of a reason off-hand why it'd ever be slower for\nINSERTs unless you're already doing things you shouldn't be if you care\nabout performance (like doing a join against some other table with each\ninsert..).\n\nAdditionally, there's really no way for us to know what an acceptable\nperformance improvment is for you to justify the added code maintenance\nand whatnot for your project. If you're really just looking for the\nlow-hanging fruit, then batch your inserts into transactions and go from\nthere.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 02:50:59 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, Stephen Frost wrote:\n\n> * [email protected] ([email protected]) wrote:\n>> while I fully understand the 'benchmark your situation' need, this isn't\n>> that simple.\n>\n> It really is. You know your application, you know it's primary use\n> cases, and probably have some data to play with. You're certainly in a\n> much better situation to at least *try* and benchmark it than we are.\n\nrsyslog is a syslog server. it replaces (or for debian and fedora, has \nreplaced) your standard syslog daemon. it recieves log messages from every \napp on your system (and possibly others), filters, maniulates them, and \nthen stores them somewhere. among the places that it can store the logs \nare database servers (native support for MySQL, PostgreSQL, and Oracle. \nplus libdbi for others)\n\nother apps then search and report on the data after it is stored. what \napps?, I don't know either. pick your favorite reporting tool and you'll \nbe a step ahead of me (I don't know a really good reporting tool)\n\nas for sample data, you have syslog messages, just like I do. so you have \nthe same access to data that I have.\n\nhow would you want to query them? 
how would people far less experianced \nthat you want to query them?\n\n\nI can speculate that some people would do two columns (time, everything \nelse), others will do three (time, server, everything else), and others \nwill go further (I know some who would like to extract IP addresses \nembedded in a message into their own column). some people will index on \nthe time and host, others will want to do full-text searches of \neverything.\n\n\nI can talk about the particular use case I have at work, but that would be \nfar from normal (full text searches on 10s of TB of data, plus reports, \netc) but we don't (currently) use postgres to do that, and I'm not sure \nhow I would configure postgres for that sort of load. so I don't think \nthat my personal situation is a good fit. I looked at bizgres a few years \nago, but I didn't know enough about what I was trying to do or how much \ndata I was trying to deal with to go forward with it at that time.\n\n\ndo I do the benchmark on the type of hardware that I use for the system \nabove (after spending how much time experimenting to find corret tuning)? \nor on a stock, untuned postgres running on a desktop-type system (we all \nknow how horrible the defaults are), how am I supposed to know if the \ndifferences that I will see in my 'benchmarks' are the result of the \ndifferences between the commands, and not that I missed some critical knob \nto turn?\n\nbenchmarking is absolutly the right answer for some cases, especially when \nsomeone is asking exactly how something will work for them. but in this \ncase I don't have the specific use case. I am trying to find out where the \nthroretical advantages are for these things that 'everybody knows you \nshould do with a database' to understand the probability that they will \nmake a difference in this case.\n\n>> in this case we are trying to decide what API/interface to use in a\n>> infrastructure tool that will be distributed in common distros (it's now\n>> the default syslog package of debian and fedora), there are so many\n>> variables in hardware, software, and load that trying to benchmark it\n>> becomes effectivly impossible.\n>\n> You don't need to know how it will perform in every situation. The main\n> question you have is if using prepared queries is faster or not, so pick\n> a common structure, create a table, get some data, and test. I can say\n> that prepared queries will be more likely to give you a performance\n> boost with wider tables (more columns).\n\nthis is very helpful, I can't say what the schema would look like, but I \nwould guess that it will tend towards the narrow side (or at least not \nupdate very many columns explicitly)\n\n>> based on Stephan's explination of where binary could help, I think the\n>> easy answer is to decide not to bother with it (the postgres text to X\n>> converters get far more optimization attention than anything rsyslog\n>> could deploy)\n>\n> While that's true, there's no substitute for not having to do a\n> conversion at all. 
After all, it's alot cheaper to do a bit of\n> byte-swapping on an integer value that's already an integer in memory\n> than to sprintf and atoi it.\n\nbut it's not a integer in memory, it's text that arrived over the network \nor through a socket as a log message from another application.\n\nDavid Lang\n", "msg_date": "Tue, 21 Apr 2009 00:49:59 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, Stephen Frost wrote:\n\n> David,\n>\n> * [email protected] ([email protected]) wrote:\n>> I thought that part of the 'efficiancy' and 'performance' to be gained\n>> from binary modes were avoiding the need to parse commands, if it's only\n>> the savings in converting column contents from text to specific types,\n>> it's much less important.\n>\n> No, binary mode is about the column contents. Prepared queries is about\n> avoiding having to parse commands (which is why the command and the data\n> elements are seperately done).\n\nthanks for the clarification.\n\n>>> I'd avoid having the user provide actual SQL, because that becomes\n>>> difficult to deal with unless you embed an SQL parser in rsyslog, and\n>>> I don't really see the value in that.\n>>\n>> there's no need for rsyslog to parse the SQL, just to be able to escape\n>> it appropriately and then pass it to the database for execution\n>\n> If the user is providing SQL, then you need to be able to parse that SQL\n> if you're going to do prepared queries. It might not require you to be\n> able to fully parse SQL the way the back-end does, but anything you do\n> that's not a full SQL parser is going to end up having limitations that\n> won't be easy to find or document.\n\nthe current situation is that rsyslog never parses the SQL (other than as \ntext for a template, just like if you were going to write the log message \nto disk)\n\nif we stick with the string based API we never need to\n\nthe user gives us one string 'prepare...' that we send to the database. \nthe user then gives us another string 'execute...' that we send to the \ndatabase. at no point do we ever need to parse the SQL, or even really \nknow that it is SQL (the one exception is an escapeing routine that \nreplace ' with '' in the strings comeing from the outside world), it's \njust strings assembled using the same string assembly logic that is used \nfor writing files to disk, crafting the payload of network packets to \nother servers, etc.\n\nI do agree that there is a reduction in security risk. but since rsyslog \nis rather draconian about forcing the escaping, I'm not sure this is \nenough to tip the scales.\n\n\n>> one huge advantage of putting the sql into the configuration is the\n>> ability to work around other users of the database.\n>\n> See, I just don't see that.\n\nmoving a bit away from the traditional syslog use case for a moment. with \nthe ability to accept messages from many different types of sources (some \nunreliable like UDP syslog, others very reliably with full \napplication-level acknowledgements), the ability to filter messages to \ndifferent destination, and the ability to configure it to craft arbatrary \nSQL statements, rsyslog can be useful as an 'impedance match' between \ndifferent applications. you can coherse just about any app to write some \nsort of message to a file/pipe, and rsyslog can take that and get it into \na database elsewhere. 
yes, you or I could write a quick program that would \nreformat the message and submit it (in perl/python/etc, but extending that \nto handle outages, high-volume bursts of traffic, etc starts to be hard.\n\nthis is very much _not_ a common use case, but it's a useful side-use of \nrsyslog today.\n\n>> I guess you could give examples of the SQL in the documentation for how\n>> to create the prepared statement etc in the databases, but how is that\n>> much better than having it in the config file?\n>>\n>> for many users it's easier to do middlein -fancy stuff in the SQL than\n>> loading things into the database (can you pre-load prepared statements in\n>> the database? or are they a per-connection thing?)\n>\n> Prepared statements, at least under PG, are a per-connection thing.\n> Triggers aren't the same, those are attached to tables and get called\n> whenever a particular action is done on those tables (defined in the\n> trigger definition). The trigger is then called with the row which is\n> being inserted, etc, and can do whatever it wants with that row (put it\n> in a different table, ignore it, etc).\n\nthat sounds like a lot of work at the database level to avoid some \ncomplexity on the app side (and it seems that the need to fire a trigger \nprobably cost more than the prepared statement ever hoped to gain.)\n\n>> so back to the main questions of the advantages\n>>\n>> prepared statements avoid needing to escape things, but at the\n>> complication of a more complex API.\n>>\n>> there's still the question of the performance difference. I have been\n>> thinking that the overhead of doing the work itself would overwelm the\n>> performance benifits of prepared statements.\n>\n> What work is it that you're referring to here?\n\ndoing the inserts themselves (putting the data in the tables, updating \nindexes, issuing a fsync)\n\n> Based on what you've\n> said about your application so far, I would expect that the run-time\n> cost to prepare the statement (which you do only once) to be a bit of a\n> cost, but not much, and that the actual inserts would be almost free\n> from the application side, and much easier for the database to\n> parse/use.\n\nthe inserts are far from free ;-)\n\nbut I agree that with prepared statements, the overhead of the insert is \nsmall. I'm trying to get a guess as to how small.\n\n>> as I understand it, the primary performance benifit is the ability to\n>> avoid the parsing and planning stages of the command. for simple commands\n>> (which I assume inserts to be, even if inserting a lot of stuff), the\n>> planning would seem to be cheap compared to the work of doing the inserts\n>\n> The planning isn't very expensive, no, but it's also not free. The\n> parsing is more expensive.\n\nok, that makes the multi-value insert process (or copy) sound more \nattractive (less parsing)\n\n>> on a fully tuned database are we talking about 10% performance? 1%? 0.01%?\n>>\n>> any ideas\n>\n> It depends a great deal on your application.. Do you have some example\n> data that we could use to test with? 
Some default templates that you\n> think most people end up using which we could create dummy data for?\n\ntake the contents of /var/log/messages on your system, split it into \ntimestamp, server, log and you have at least a reasonable case to work \nwith.\n\n> Of course, in the end, batching your inserts into fewer transactions\n> than 1-per-insert should give you a huge benefit right off the bat.\n\nagreed, that was my expectation, but then people started saying how \nimportant it was to use prepared statements.\n\n> What we're talking about is really the steps after that, which might not\n> ever be necessary for your particular application. On the other hand,\n> I'd never let production code go out that isn't using prepared queries\n> wherever possible.\n\nI do understand the sentiment, really I do. I'm just trying to understand \nthe why.\n\nin this case, moving from being able to insert one record per transaction \nto inserting several hundred per transaction will _probably_ swamp \neverything else. but I really hate the word 'probably', I've been \nsurprised to many times.\n\none of the more recent surprises was on my multi-TB (4-core, 32G ram, 16 \nspindles of data) log query server I discovered that there was no \nnoticable difference between raid 10 and raid 6 for the bulk of my query \nworkload (an example is finding 14 log entries out of 12B). it turns out \nthat if you are read-only with a seek-heavy workload you can keep all \nspindles busy with either approach. so in spite of the fact that \n'everybody knows' that raid 10 is _far_ better than raid 6, especially for \ndatabases, I discovered that that really only applies to writes.\n\n\nso I really do want to understand the 'why' if possible.\n\nyou have helped a lot.\n\nDavid Lang\n", "msg_date": "Tue, 21 Apr 2009 01:26:23 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "[email protected] wrote:\n> On Tue, 21 Apr 2009, Stephen Frost wrote:\n> \n>> * [email protected] ([email protected]) wrote:\n>>> while I fully understand the 'benchmark your situation' need, this isn't\n>>> that simple.\n>>\n>> It really is. You know your application, you know it's primary use\n>> cases, and probably have some data to play with. You're certainly in a\n>> much better situation to at least *try* and benchmark it than we are.\n> \n> rsyslog is a syslog server. it replaces (or for debian and fedora, has \n> replaced) your standard syslog daemon. it recieves log messages from \n> every app on your system (and possibly others), filters, maniulates \n> them, and then stores them somewhere. among the places that it can store \n> the logs are database servers (native support for MySQL, PostgreSQL, and \n> Oracle. plus libdbi for others)\n\nWell, from a performance standpoint the obvious things to do are:\n1. Keep a connection open, do NOT reconnect for each log-statement\n2. Batch log statements together where possible\n3. Use prepared statements\n4. Partition the tables by day/week/month/year (configurable I suppose)\n\nThe first two are vital, the third takes you a step further. The fourth \nis a long-term admin thing.\n\nAnd possibly\n5. Have two connections, one for fatal/error etc and one for info/debug \nlevel log statements (configurable split?). Then you can use the \nsynchronous_commit setting on the less important ones. 
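Something like this on the info/debug connection, just as a sketch:

  SET synchronous_commit = off;

or, if you'd rather scope it per transaction:

  BEGIN;
  SET LOCAL synchronous_commit = off;
  -- info/debug inserts here
  COMMIT;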
Might buy you \nsome performance on a busy system.\n\nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS\n\n> other apps then search and report on the data after it is stored. what \n> apps?, I don't know either. pick your favorite reporting tool and you'll \n> be a step ahead of me (I don't know a really good reporting tool)\n> \n> as for sample data, you have syslog messages, just like I do. so you \n> have the same access to data that I have.\n> \n> how would you want to query them? how would people far less experianced \n> that you want to query them?\n> \n> I can speculate that some people would do two columns (time, everything \n> else), others will do three (time, server, everything else), and others \n> will go further (I know some who would like to extract IP addresses \n> embedded in a message into their own column). some people will index on \n> the time and host, others will want to do full-text searches of everything.\n\nWell, assuming it looks much like traditional syslog, I would do \nsomething like: (timestamp, host, facility, priority, message). It's \neasy enough to stitch back together if people want that.\n\nPostgreSQL's full-text indexing is quite well suited to logfiles I'd \nhave thought, since it knows about filenames, urls etc already.\n\nIf you want to get fancy, add a msg_type column and one subsidiary table \nfor each msg_type. So - you might have smtp_connect_from (hostname, \nip_addr). A set of perl regexps can match and extract the fields for \nthese extra tables, or you could do it with triggers inside the \ndatabase. I think it makes sense to do it in the application. Easier for \nusers to contribute new patterns/extractions. Meanwhile, the core table \nis untouched so you don't *need* to know about these extra tables.\n\nIf you have subsidiary tables, you'll want to partition those too and \nperhaps stick them in their own schema (logs200901, logs200902 etc).\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 21 Apr 2009 09:56:23 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "Hi,\n\nI just finished reading this thread. We are currently working on\nsetting up a central log system using rsyslog and PostgreSQL. It\nworks well once we patched the memory leak. We also looked at what\ncould be done to improve the efficiency of the DB interface. On the\nrsyslog side, moving to prepared queries allows you to remove the\nescaping that needs to be done currently before attempting to\ninsert the data into the SQL backend as well as removing the parsing\nand planning time from the insert. This is a big win for high insert\nrates, which is what we are talking about. The escaping process is\nalso a big CPU user in rsyslog which then hands the escaped string\nto the backend which then has to undo everything that had been done\nand parse/plan the resulting query. This can use a surprising amount\nof additional CPU. Even if you cannot support a general prepared\nquery interface, by specifying what the query should look like you\ncan handle much of the low-hanging fruit query-wise.\n\nWe are currently using a date based trigger to use a new partition\neach day and keep 2 months of logs currently. 
This can be usefully\nmanaged on the backend database, but if rsyslog supported changing\nthe insert to the new table on a time basis, the CPU used by the\ntrigger to support this on the backend could be reclaimed. This\nwould be a win for any DB backend. As you move to the new partition,\nissuing a truncate to clear the table would simplify the DB interfaces.\n\nAnother performance enhancement already mentioned, would be to\nallow certain extra fields in the DB to be automatically populated\nas a function of the log messages. For example, logging the mail queue\nid for messages from mail systems would make it much easier to locate\nparticular mail transactions in large amounts of data.\n\nTo sum up, eliminating the escaping in rsyslog through the use of\nprepared queries would reduce the CPU load on the DB backend. Batching\nthe inserts will also net you a big performance increase. Some DB-based\napplications allow for the specification of several types of queries,\none for single inserts and then a second to support multiple inserts\n(copy). Rsyslog already supports the queuing pieces to allow you to\nbatch inserts. Just some ideas.\n\nRegards,\nKen\n\n\nOn Tue, Apr 21, 2009 at 09:56:23AM +0100, Richard Huxton wrote:\n> [email protected] wrote:\n>> On Tue, 21 Apr 2009, Stephen Frost wrote:\n>>> * [email protected] ([email protected]) wrote:\n>>>> while I fully understand the 'benchmark your situation' need, this isn't\n>>>> that simple.\n>>>\n>>> It really is. You know your application, you know it's primary use\n>>> cases, and probably have some data to play with. You're certainly in a\n>>> much better situation to at least *try* and benchmark it than we are.\n>> rsyslog is a syslog server. it replaces (or for debian and fedora, has \n>> replaced) your standard syslog daemon. it recieves log messages from every \n>> app on your system (and possibly others), filters, maniulates them, and \n>> then stores them somewhere. among the places that it can store the logs \n>> are database servers (native support for MySQL, PostgreSQL, and Oracle. \n>> plus libdbi for others)\n>\n> Well, from a performance standpoint the obvious things to do are:\n> 1. Keep a connection open, do NOT reconnect for each log-statement\n> 2. Batch log statements together where possible\n> 3. Use prepared statements\n> 4. Partition the tables by day/week/month/year (configurable I suppose)\n>\n> The first two are vital, the third takes you a step further. The fourth is \n> a long-term admin thing.\n>\n> And possibly\n> 5. Have two connections, one for fatal/error etc and one for info/debug \n> level log statements (configurable split?). Then you can use the \n> synchronous_commit setting on the less important ones. Might buy you some \n> performance on a busy system.\n>\n> http://www.postgresql.org/docs/8.3/interactive/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS\n>\n>> other apps then search and report on the data after it is stored. what \n>> apps?, I don't know either. pick your favorite reporting tool and you'll \n>> be a step ahead of me (I don't know a really good reporting tool)\n>> as for sample data, you have syslog messages, just like I do. so you have \n>> the same access to data that I have.\n>> how would you want to query them? 
how would people far less experianced \n>> that you want to query them?\n>> I can speculate that some people would do two columns (time, everything \n>> else), others will do three (time, server, everything else), and others \n>> will go further (I know some who would like to extract IP addresses \n>> embedded in a message into their own column). some people will index on \n>> the time and host, others will want to do full-text searches of \n>> everything.\n>\n> Well, assuming it looks much like traditional syslog, I would do something \n> like: (timestamp, host, facility, priority, message). It's easy enough to \n> stitch back together if people want that.\n>\n> PostgreSQL's full-text indexing is quite well suited to logfiles I'd have \n> thought, since it knows about filenames, urls etc already.\n>\n> If you want to get fancy, add a msg_type column and one subsidiary table \n> for each msg_type. So - you might have smtp_connect_from (hostname, \n> ip_addr). A set of perl regexps can match and extract the fields for these \n> extra tables, or you could do it with triggers inside the database. I think \n> it makes sense to do it in the application. Easier for users to contribute \n> new patterns/extractions. Meanwhile, the core table is untouched so you \n> don't *need* to know about these extra tables.\n>\n> If you have subsidiary tables, you'll want to partition those too and \n> perhaps stick them in their own schema (logs200901, logs200902 etc).\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 21 Apr 2009 08:33:30 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "Kenneth,\n could you join the discussion on the rsyslog mailing list?\nrsyslog-users <[email protected]>\n\nI'm surprised to hear you say that rsyslog can already do batch inserts \nand am interested in how you did that.\n\nwhat sort of insert rate did you mange to get?\n\nDavid Lang\n\nOn Tue, 21 Apr 2009, Kenneth Marshall wrote:\n\n> Date: Tue, 21 Apr 2009 08:33:30 -0500\n> From: Kenneth Marshall <[email protected]>\n> To: Richard Huxton <[email protected]>\n> Cc: [email protected], Stephen Frost <[email protected]>,\n> Greg Smith <[email protected]>, [email protected]\n> Subject: Re: [PERFORM] performance for high-volume log insertion\n> \n> Hi,\n>\n> I just finished reading this thread. We are currently working on\n> setting up a central log system using rsyslog and PostgreSQL. It\n> works well once we patched the memory leak. We also looked at what\n> could be done to improve the efficiency of the DB interface. On the\n> rsyslog side, moving to prepared queries allows you to remove the\n> escaping that needs to be done currently before attempting to\n> insert the data into the SQL backend as well as removing the parsing\n> and planning time from the insert. This is a big win for high insert\n> rates, which is what we are talking about. The escaping process is\n> also a big CPU user in rsyslog which then hands the escaped string\n> to the backend which then has to undo everything that had been done\n> and parse/plan the resulting query. This can use a surprising amount\n> of additional CPU. 
Even if you cannot support a general prepared\n> query interface, by specifying what the query should look like you\n> can handle much of the low-hanging fruit query-wise.\n>\n> We are currently using a date based trigger to use a new partition\n> each day and keep 2 months of logs currently. This can be usefully\n> managed on the backend database, but if rsyslog supported changing\n> the insert to the new table on a time basis, the CPU used by the\n> trigger to support this on the backend could be reclaimed. This\n> would be a win for any DB backend. As you move to the new partition,\n> issuing a truncate to clear the table would simplify the DB interfaces.\n>\n> Another performance enhancement already mentioned, would be to\n> allow certain extra fields in the DB to be automatically populated\n> as a function of the log messages. For example, logging the mail queue\n> id for messages from mail systems would make it much easier to locate\n> particular mail transactions in large amounts of data.\n>\n> To sum up, eliminating the escaping in rsyslog through the use of\n> prepared queries would reduce the CPU load on the DB backend. Batching\n> the inserts will also net you a big performance increase. Some DB-based\n> applications allow for the specification of several types of queries,\n> one for single inserts and then a second to support multiple inserts\n> (copy). Rsyslog already supports the queuing pieces to allow you to\n> batch inserts. Just some ideas.\n>\n> Regards,\n> Ken\n>\n>\n> On Tue, Apr 21, 2009 at 09:56:23AM +0100, Richard Huxton wrote:\n>> [email protected] wrote:\n>>> On Tue, 21 Apr 2009, Stephen Frost wrote:\n>>>> * [email protected] ([email protected]) wrote:\n>>>>> while I fully understand the 'benchmark your situation' need, this isn't\n>>>>> that simple.\n>>>>\n>>>> It really is. You know your application, you know it's primary use\n>>>> cases, and probably have some data to play with. You're certainly in a\n>>>> much better situation to at least *try* and benchmark it than we are.\n>>> rsyslog is a syslog server. it replaces (or for debian and fedora, has\n>>> replaced) your standard syslog daemon. it recieves log messages from every\n>>> app on your system (and possibly others), filters, maniulates them, and\n>>> then stores them somewhere. among the places that it can store the logs\n>>> are database servers (native support for MySQL, PostgreSQL, and Oracle.\n>>> plus libdbi for others)\n>>\n>> Well, from a performance standpoint the obvious things to do are:\n>> 1. Keep a connection open, do NOT reconnect for each log-statement\n>> 2. Batch log statements together where possible\n>> 3. Use prepared statements\n>> 4. Partition the tables by day/week/month/year (configurable I suppose)\n>>\n>> The first two are vital, the third takes you a step further. The fourth is\n>> a long-term admin thing.\n>>\n>> And possibly\n>> 5. Have two connections, one for fatal/error etc and one for info/debug\n>> level log statements (configurable split?). Then you can use the\n>> synchronous_commit setting on the less important ones. Might buy you some\n>> performance on a busy system.\n>>\n>> http://www.postgresql.org/docs/8.3/interactive/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS\n>>\n>>> other apps then search and report on the data after it is stored. what\n>>> apps?, I don't know either. pick your favorite reporting tool and you'll\n>>> be a step ahead of me (I don't know a really good reporting tool)\n>>> as for sample data, you have syslog messages, just like I do. 
so you have\n>>> the same access to data that I have.\n>>> how would you want to query them? how would people far less experianced\n>>> that you want to query them?\n>>> I can speculate that some people would do two columns (time, everything\n>>> else), others will do three (time, server, everything else), and others\n>>> will go further (I know some who would like to extract IP addresses\n>>> embedded in a message into their own column). some people will index on\n>>> the time and host, others will want to do full-text searches of\n>>> everything.\n>>\n>> Well, assuming it looks much like traditional syslog, I would do something\n>> like: (timestamp, host, facility, priority, message). It's easy enough to\n>> stitch back together if people want that.\n>>\n>> PostgreSQL's full-text indexing is quite well suited to logfiles I'd have\n>> thought, since it knows about filenames, urls etc already.\n>>\n>> If you want to get fancy, add a msg_type column and one subsidiary table\n>> for each msg_type. So - you might have smtp_connect_from (hostname,\n>> ip_addr). A set of perl regexps can match and extract the fields for these\n>> extra tables, or you could do it with triggers inside the database. I think\n>> it makes sense to do it in the application. Easier for users to contribute\n>> new patterns/extractions. Meanwhile, the core table is untouched so you\n>> don't *need* to know about these extra tables.\n>>\n>> If you have subsidiary tables, you'll want to partition those too and\n>> perhaps stick them in their own schema (logs200901, logs200902 etc).\n>>\n>> --\n>> Richard Huxton\n>> Archonet Ltd\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n", "msg_date": "Tue, 21 Apr 2009 08:37:54 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, Apr 21, 2009 at 08:37:54AM -0700, [email protected] wrote:\n> Kenneth,\n> could you join the discussion on the rsyslog mailing list?\n> rsyslog-users <[email protected]>\n>\n> I'm surprised to hear you say that rsyslog can already do batch inserts and \n> am interested in how you did that.\n>\n> what sort of insert rate did you mange to get?\n>\n> David Lang\n>\nDavid,\n\nI would be happy to join the discussion. I did not mean to say\nthat rsyslog currently supported batch inserts, just that the\npieces that provide \"stand-by queuing\" could be used to manage\nbatching inserts.\n\nCheers,\nKen\n\n> On Tue, 21 Apr 2009, Kenneth Marshall wrote:\n>\n>> Date: Tue, 21 Apr 2009 08:33:30 -0500\n>> From: Kenneth Marshall <[email protected]>\n>> To: Richard Huxton <[email protected]>\n>> Cc: [email protected], Stephen Frost <[email protected]>,\n>> Greg Smith <[email protected]>, [email protected]\n>> Subject: Re: [PERFORM] performance for high-volume log insertion\n>> Hi,\n>>\n>> I just finished reading this thread. We are currently working on\n>> setting up a central log system using rsyslog and PostgreSQL. It\n>> works well once we patched the memory leak. We also looked at what\n>> could be done to improve the efficiency of the DB interface. On the\n>> rsyslog side, moving to prepared queries allows you to remove the\n>> escaping that needs to be done currently before attempting to\n>> insert the data into the SQL backend as well as removing the parsing\n>> and planning time from the insert. 
This is a big win for high insert\n>> rates, which is what we are talking about. The escaping process is\n>> also a big CPU user in rsyslog which then hands the escaped string\n>> to the backend which then has to undo everything that had been done\n>> and parse/plan the resulting query. This can use a surprising amount\n>> of additional CPU. Even if you cannot support a general prepared\n>> query interface, by specifying what the query should look like you\n>> can handle much of the low-hanging fruit query-wise.\n>>\n>> We are currently using a date based trigger to use a new partition\n>> each day and keep 2 months of logs currently. This can be usefully\n>> managed on the backend database, but if rsyslog supported changing\n>> the insert to the new table on a time basis, the CPU used by the\n>> trigger to support this on the backend could be reclaimed. This\n>> would be a win for any DB backend. As you move to the new partition,\n>> issuing a truncate to clear the table would simplify the DB interfaces.\n>>\n>> Another performance enhancement already mentioned, would be to\n>> allow certain extra fields in the DB to be automatically populated\n>> as a function of the log messages. For example, logging the mail queue\n>> id for messages from mail systems would make it much easier to locate\n>> particular mail transactions in large amounts of data.\n>>\n>> To sum up, eliminating the escaping in rsyslog through the use of\n>> prepared queries would reduce the CPU load on the DB backend. Batching\n>> the inserts will also net you a big performance increase. Some DB-based\n>> applications allow for the specification of several types of queries,\n>> one for single inserts and then a second to support multiple inserts\n>> (copy). Rsyslog already supports the queuing pieces to allow you to\n>> batch inserts. Just some ideas.\n>>\n>> Regards,\n>> Ken\n>>\n>>\n>> On Tue, Apr 21, 2009 at 09:56:23AM +0100, Richard Huxton wrote:\n>>> [email protected] wrote:\n>>>> On Tue, 21 Apr 2009, Stephen Frost wrote:\n>>>>> * [email protected] ([email protected]) wrote:\n>>>>>> while I fully understand the 'benchmark your situation' need, this \n>>>>>> isn't\n>>>>>> that simple.\n>>>>>\n>>>>> It really is. You know your application, you know it's primary use\n>>>>> cases, and probably have some data to play with. You're certainly in a\n>>>>> much better situation to at least *try* and benchmark it than we are.\n>>>> rsyslog is a syslog server. it replaces (or for debian and fedora, has\n>>>> replaced) your standard syslog daemon. it recieves log messages from \n>>>> every\n>>>> app on your system (and possibly others), filters, maniulates them, and\n>>>> then stores them somewhere. among the places that it can store the logs\n>>>> are database servers (native support for MySQL, PostgreSQL, and Oracle.\n>>>> plus libdbi for others)\n>>>\n>>> Well, from a performance standpoint the obvious things to do are:\n>>> 1. Keep a connection open, do NOT reconnect for each log-statement\n>>> 2. Batch log statements together where possible\n>>> 3. Use prepared statements\n>>> 4. Partition the tables by day/week/month/year (configurable I suppose)\n>>>\n>>> The first two are vital, the third takes you a step further. The fourth \n>>> is\n>>> a long-term admin thing.\n>>>\n>>> And possibly\n>>> 5. Have two connections, one for fatal/error etc and one for info/debug\n>>> level log statements (configurable split?). Then you can use the\n>>> synchronous_commit setting on the less important ones. 
Might buy you some\n>>> performance on a busy system.\n>>>\n>>> http://www.postgresql.org/docs/8.3/interactive/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS\n>>>\n>>>> other apps then search and report on the data after it is stored. what\n>>>> apps?, I don't know either. pick your favorite reporting tool and you'll\n>>>> be a step ahead of me (I don't know a really good reporting tool)\n>>>> as for sample data, you have syslog messages, just like I do. so you \n>>>> have\n>>>> the same access to data that I have.\n>>>> how would you want to query them? how would people far less experianced\n>>>> that you want to query them?\n>>>> I can speculate that some people would do two columns (time, everything\n>>>> else), others will do three (time, server, everything else), and others\n>>>> will go further (I know some who would like to extract IP addresses\n>>>> embedded in a message into their own column). some people will index on\n>>>> the time and host, others will want to do full-text searches of\n>>>> everything.\n>>>\n>>> Well, assuming it looks much like traditional syslog, I would do \n>>> something\n>>> like: (timestamp, host, facility, priority, message). It's easy enough to\n>>> stitch back together if people want that.\n>>>\n>>> PostgreSQL's full-text indexing is quite well suited to logfiles I'd have\n>>> thought, since it knows about filenames, urls etc already.\n>>>\n>>> If you want to get fancy, add a msg_type column and one subsidiary table\n>>> for each msg_type. So - you might have smtp_connect_from (hostname,\n>>> ip_addr). A set of perl regexps can match and extract the fields for \n>>> these\n>>> extra tables, or you could do it with triggers inside the database. I \n>>> think\n>>> it makes sense to do it in the application. Easier for users to \n>>> contribute\n>>> new patterns/extractions. Meanwhile, the core table is untouched so you\n>>> don't *need* to know about these extra tables.\n>>>\n>>> If you have subsidiary tables, you'll want to partition those too and\n>>> perhaps stick them in their own schema (logs200901, logs200902 etc).\n>>>\n>>> --\n>>> Richard Huxton\n>>> Archonet Ltd\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list \n>>> ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 21 Apr 2009 10:44:58 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Mon, 20 Apr 2009, [email protected] wrote:\n\n> one huge advantage of putting the sql into the configuration is the ability \n> to work around other users of the database.\n\n+1 on this. We've always found tools much easier to work with when they \ncould be adapted to our schema, as opposed to changing our process for the \ntool.\n", "msg_date": "Tue, 21 Apr 2009 09:02:58 -0700 (PDT)", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* Ben Chobot ([email protected]) wrote:\n> On Mon, 20 Apr 2009, [email protected] wrote:\n>> one huge advantage of putting the sql into the configuration is the \n>> ability to work around other users of the database.\n>\n> +1 on this. 
We've always found tools much easier to work with when they \n> could be adapted to our schema, as opposed to changing our process for \n> the tool.\n\nI think we're all in agreement that we should allow the user to define\ntheir schema and support loading the data into it. The question has\nbeen if the user really needs the flexibility to define arbitrary SQL to\nbe used to do the inserts.\n\nSomething I'm a bit confused about, still, is if this is really even a\nproblem. It sounds like rsyslog already allows arbitrary SQL in the\nconfig file with some kind of escape syntax for the variables. Why not\njust keep that, but split it into a prepared query (where you change the\nvariables to $NUM vars for the prepared statement) and an array of\nvalues (to pass to PQexecPrepared)?\n\nIf you already know how to figure out what the variables in the\narbitrary SQL statement are, this shouldn't be any more limiting than\ntoday, except where a prepared query can't have a variable argument but\na non-prepared query can (eg, table name). You could deal with that\nwith some kind of configuration variable that lets the user choose to\nuse prepared queries or not though, or some additional syntax that\nindicates certain variables shouldn't be translated to $NUM vars (eg:\n$*blah instead of $blah).\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 12:37:48 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, Stephen Frost wrote:\n\n> * Ben Chobot ([email protected]) wrote:\n>> On Mon, 20 Apr 2009, [email protected] wrote:\n>>> one huge advantage of putting the sql into the configuration is the\n>>> ability to work around other users of the database.\n>>\n>> +1 on this. We've always found tools much easier to work with when they\n>> could be adapted to our schema, as opposed to changing our process for\n>> the tool.\n>\n> I think we're all in agreement that we should allow the user to define\n> their schema and support loading the data into it. The question has\n> been if the user really needs the flexibility to define arbitrary SQL to\n> be used to do the inserts.\n>\n> Something I'm a bit confused about, still, is if this is really even a\n> problem. It sounds like rsyslog already allows arbitrary SQL in the\n> config file with some kind of escape syntax for the variables. Why not\n> just keep that, but split it into a prepared query (where you change the\n> variables to $NUM vars for the prepared statement) and an array of\n> values (to pass to PQexecPrepared)?\n>\n> If you already know how to figure out what the variables in the\n> arbitrary SQL statement are, this shouldn't be any more limiting than\n> today, except where a prepared query can't have a variable argument but\n> a non-prepared query can (eg, table name). 
You could deal with that\n> with some kind of configuration variable that lets the user choose to\n> use prepared queries or not though, or some additional syntax that\n> indicates certain variables shouldn't be translated to $NUM vars (eg:\n> $*blah instead of $blah).\n\nI think the key thing is that rsyslog today doesn't know anything about \nSQL variables, it just creates a string that the user and the database say \nlooks like a SQL statement.\n\nan added headache is that the rsyslog config does not have the concept of \narrays (the closest that it has is one special-case hack to let you \nspecify one variable multiple times)\n\nif the performance win of the prepared statement is significant, then it's \nprobably worth the complication of changing these things.\n\nDavid Lang\n", "msg_date": "Tue, 21 Apr 2009 09:59:01 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Mon, 20 Apr 2009, [email protected] wrote:\n\n> while I fully understand the 'benchmark your situation' need, this isn't \n> that simple. in this case we are trying to decide what API/interface to \n> use in a infrastructure tool that will be distributed in common distros \n> (it's now the default syslog package of debian and fedora), there are so \n> many variables in hardware, software, and load that trying to benchmark \n> it becomes effectivly impossible.\n\n From your later comments, you're wandering a bit outside of what you were \nasking about here. Benchmarking the *query* side of things can be \nextremely complicated. You have to worry about memory allocation, cold \nvs. warm cache, scale of database relative to RAM, etc.\n\nYou were asking specifically about *insert* performance, which isn't \nnearly as complicated. There are basically three setups:\n\n1) Disk/controller has a proper write cache. Writes and fsync will be \nfast. You can insert a few thousand individual transactions per second.\n\n2) Disk/controller has a \"lying\" write cache. Writes and fsync will be \nfast, but it's not safe for database use. But since (1) is expensive and \nthis one you can get for free jut by using a regular SATA drive with its \nwrite cache enabled, you can use this case as a proxy for approximately \nhow (1) would act. You'll still get a few thousand transactions per \nsecond, sustained writes may slow down relative to (1) if you insert \nenough that you hit a checkpoint (triggering lots of random I/O).\n\n3) All write caches have been disabled because they were not \nbattery-backed. This is the case if you have a regular SATA drive and you \ndisable its write cache because you care about write durability. You'll \nget a bit less than RPM/60 writes/second, so <120 inserts/second with a \ntypical 7200RPM drive. Here batching multiple INSERTs together is \ncritical to get any sort of reasonable performance.\n\nIn (3), I'd expect that trivia like INSERT vs. COPY and COPY BINARY vs. \nCOPY TEXT would be overwhelmed by the overhead of the commit itself. \nTherefore you probably want to test with case (2) instead, as it doesn't \nrequire any additional hardware but has similar performance to a \nproduction-worthy (1). 
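If you want a starting point, even a crude harness like this (an untested
sketch; the connection string and the one-column test table are made up)
will show you the shape of the problem -- time a few thousand
one-transaction-each INSERTs against the same statements wrapped in a
single transaction:

  /* assumes:  CREATE TABLE testlog (msg text);  */
  #include <libpq-fe.h>
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      PGconn *conn = PQconnectdb("dbname=logtest");
      if (PQstatus(conn) != CONNECTION_OK)
      {
          fprintf(stderr, "%s", PQerrorMessage(conn));
          return 1;
      }

      int i, n = 2000;
      time_t start = time(NULL);

      /* one commit per row: throughput is bounded by fsync/commit cost */
      for (i = 0; i < n; i++)
          PQclear(PQexec(conn,
              "INSERT INTO testlog (msg) VALUES ('one commit per row')"));
      printf("individual commits: %ld s\n", (long) (time(NULL) - start));

      start = time(NULL);

      /* the same work batched into a single transaction */
      PQclear(PQexec(conn, "BEGIN"));
      for (i = 0; i < n; i++)
          PQclear(PQexec(conn,
              "INSERT INTO testlog (msg) VALUES ('batched')"));
      PQclear(PQexec(conn, "COMMIT"));
      printf("one transaction:    %ld s\n", (long) (time(NULL) - start));

      PQfinish(conn);
      return 0;
  }

Swap the INSERT for your candidate multi-row INSERT or COPY form and you
have a fair comparison on whatever hardware you actually care about.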
All of the other things you're worried about \nreally don't matter here; you can get an approximate measure of what the \nperformance of the various INSERT/COPY schemes are that is somewhat \nplatform dependant, but the results should be good enough to give you some \nrule of thumb suggestions for whether optimizations are significant enough \nto justify the coding effort to implement them or not.\n\nI'm not sure whether you're familiar with all the fsync trivia here. In \nnormal syslog use, there's an fsync call after every write. You can \ndisable that by placing a \"-\" before the file name in /etc/syslog.conf The \nthing that is going to make database-based writes very different is that \nsyslog's fsync'd writes are unlikely to leave you in a bad state if the \ndrive lies about them, while database writes can. So someone using syslog \non a standard SATA drive isn't getting the write guarantee they think they \nare, but the downside on a crash is minimal. If you've got a high-volume \nsyslog environment (>100 lines/second), you can't support those as \nindividual database writes unless you've got a battery-backed write \ncontroller. A regular disk just can't process genuine fsync calls any \nfaster than that. A serious syslog deployment that turns fsync on and \nexpects it to really do its thing is already exposed to this issue though. \nI think it may be a the case that a lot of people think they have durable \nwrites in their configuration but really don't.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 21 Apr 2009 13:17:41 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* [email protected] ([email protected]) wrote:\n> I think the key thing is that rsyslog today doesn't know anything about \n> SQL variables, it just creates a string that the user and the database \n> say looks like a SQL statement.\n\nerr, what SQL variables? You mean the $NUM stuff? They're just\nplaceholders.. You don't really need to *do* anything with them.. Or\nare you worried that users would provide something that would break as a\nprepared query? If so, you just need to figure out how to handle that\ncleanly..\n\n> an added headache is that the rsyslog config does not have the concept of \n> arrays (the closest that it has is one special-case hack to let you \n> specify one variable multiple times)\n\nArgh. The array I'm talking about is a C array, and has nothing to do\nwith the actual config syntax.. I swear, I think you're making this\nmore difficult by half.\n\nAlright, looking at the documentation on rsyslog.com, I see something\nlike:\n\n$template MySQLInsert,\"insert iut, message, receivedat values\n('%iut%', '%msg:::UPPERCASE%', '%timegenerated:::date-mysql%')\ninto systemevents\\r\\n\", SQL\n\nIgnoring the fact that this is horrible, horrible non-SQL, I see that\nyou use %blah% to define variables inside your string. That's fine.\nThere's no reason why you can't use this exact syntax to build a\nprepared query. No user-impact changes are necessary. 
Here's what you\ndo:\n\n\tbuild your prepared query by doing:\n\tcopy user string\n\tnewstring = replace all %blah% strings with $1, $2, $3, etc.\n\tmyvars = dynamically created C array with the %blah%'s in it\n\tcall PQprepare(newstring)\n\n\twhen a record comes in:\n\tallocate a myvalues array of pointers\n\tloop through myvars\n\t for each myvar\n\t\t set the corresponding pointer in myvalues to the string which\n\t\t it corresponds to from the record\n\tcall PQexecPrepared(myvalues)\n\nThat's pretty much it. I assume you already deal with escaping %'s\nsomehow during the config load so that the prepared statement will be\nwhat the user expects. As I mentioned before, the only obvious issue I\nsee with doing this implicitly is that the user might want to put\nvariables in places that you can't have variables in prepared queries.\nYou could deal with that by having the user indicate per template, using\nanother template option, if the query can be prepared or not. Another\noptions is adding to your syntax something like '%*blah%' which would\ntell the system to pre-populate that variable before issuing PQprepare\non the resultant string. Of course, you might just use PQexecParams\nthere, unless you want to be gung-ho and actually keep a hash around of\nprepared queries on the assumption that the variable the user gave you\ndoesn't change very often (eg, '%*month%') and it's cheap to keep a\nsmall list of them around to use when they do match up.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 13:25:57 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, Greg Smith wrote:\n\n> On Mon, 20 Apr 2009, [email protected] wrote:\n>\n>> while I fully understand the 'benchmark your situation' need, this isn't \n>> that simple. in this case we are trying to decide what API/interface to \n>> use in a infrastructure tool that will be distributed in common distros \n>> (it's now the default syslog package of debian and fedora), there are so \n>> many variables in hardware, software, and load that trying to benchmark it \n>> becomes effectivly impossible.\n>\n> From your later comments, you're wandering a bit outside of what you were \n> asking about here. Benchmarking the *query* side of things can be extremely \n> complicated. You have to worry about memory allocation, cold vs. warm cache, \n> scale of database relative to RAM, etc.\n>\n> You were asking specifically about *insert* performance, which isn't nearly \n> as complicated. There are basically three setups:\n>\n> 1) Disk/controller has a proper write cache. Writes and fsync will be fast. \n> You can insert a few thousand individual transactions per second.\n>\n> 2) Disk/controller has a \"lying\" write cache. Writes and fsync will be fast, \n> but it's not safe for database use. But since (1) is expensive and this one \n> you can get for free jut by using a regular SATA drive with its write cache \n> enabled, you can use this case as a proxy for approximately how (1) would \n> act. You'll still get a few thousand transactions per second, sustained \n> writes may slow down relative to (1) if you insert enough that you hit a \n> checkpoint (triggering lots of random I/O).\n>\n> 3) All write caches have been disabled because they were not battery-backed. \n> This is the case if you have a regular SATA drive and you disable its write \n> cache because you care about write durability. 
You'll get a bit less than \n> RPM/60 writes/second, so <120 inserts/second with a typical 7200RPM drive. \n> Here batching multiple INSERTs together is critical to get any sort of \n> reasonable performance.\n\nin case #1 would you expect to get significant gains from batching? \ndoesn't it suffer from problems similar to #2 when checkpoints hit?\n\n> In (3), I'd expect that trivia like INSERT vs. COPY and COPY BINARY vs. COPY \n> TEXT would be overwhelmed by the overhead of the commit itself. Therefore you \n> probably want to test with case (2) instead, as it doesn't require any \n> additional hardware but has similar performance to a production-worthy (1). \n> All of the other things you're worried about really don't matter here; you \n> can get an approximate measure of what the performance of the various \n> INSERT/COPY schemes are that is somewhat platform dependant, but the results \n> should be good enough to give you some rule of thumb suggestions for whether \n> optimizations are significant enough to justify the coding effort to \n> implement them or not.\n\nI'll see about setting up a test in the next day or so. should I be able \nto script this through psql? or do I need to write a C program to test \nthis?\n\n> I'm not sure whether you're familiar with all the fsync trivia here. In \n> normal syslog use, there's an fsync call after every write. You can disable \n> that by placing a \"-\" before the file name in /etc/syslog.conf The thing that \n> is going to make database-based writes very different is that syslog's \n> fsync'd writes are unlikely to leave you in a bad state if the drive lies \n> about them, while database writes can. So someone using syslog on a standard \n> SATA drive isn't getting the write guarantee they think they are, but the \n> downside on a crash is minimal. If you've got a high-volume syslog \n> environment (>100 lines/second), you can't support those as individual \n> database writes unless you've got a battery-backed write controller. A \n> regular disk just can't process genuine fsync calls any faster than that. A \n> serious syslog deployment that turns fsync on and expects it to really do its \n> thing is already exposed to this issue though. I think it may be a the case \n> that a lot of people think they have durable writes in their configuration \n> but really don't.\n\nrsyslog is a little different, instead of just input -> disk it does input \n-> queue -> output (where output can be many things, including disk or \ndatabase)\n\nit's default is to use memory-based queues (and no fsync), but has config \noptions to do disk based queues with a fsync after each update\n\nDavid Lang\n", "msg_date": "Tue, 21 Apr 2009 11:09:18 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, Apr 21, 2009 at 11:09:18AM -0700, [email protected] wrote:\n> On Tue, 21 Apr 2009, Greg Smith wrote:\n>\n>> On Mon, 20 Apr 2009, [email protected] wrote:\n>>\n>>> while I fully understand the 'benchmark your situation' need, this isn't \n>>> that simple. 
in this case we are trying to decide what API/interface to \n>>> use in a infrastructure tool that will be distributed in common distros \n>>> (it's now the default syslog package of debian and fedora), there are so \n>>> many variables in hardware, software, and load that trying to benchmark \n>>> it becomes effectivly impossible.\n>>\n>> From your later comments, you're wandering a bit outside of what you were \n>> asking about here. Benchmarking the *query* side of things can be \n>> extremely complicated. You have to worry about memory allocation, cold \n>> vs. warm cache, scale of database relative to RAM, etc.\n>>\n>> You were asking specifically about *insert* performance, which isn't \n>> nearly as complicated. There are basically three setups:\n>>\n>> 1) Disk/controller has a proper write cache. Writes and fsync will be \n>> fast. You can insert a few thousand individual transactions per second.\n>>\n>> 2) Disk/controller has a \"lying\" write cache. Writes and fsync will be \n>> fast, but it's not safe for database use. But since (1) is expensive and \n>> this one you can get for free jut by using a regular SATA drive with its \n>> write cache enabled, you can use this case as a proxy for approximately \n>> how (1) would act. You'll still get a few thousand transactions per \n>> second, sustained writes may slow down relative to (1) if you insert \n>> enough that you hit a checkpoint (triggering lots of random I/O).\n>>\n>> 3) All write caches have been disabled because they were not \n>> battery-backed. This is the case if you have a regular SATA drive and you \n>> disable its write cache because you care about write durability. You'll \n>> get a bit less than RPM/60 writes/second, so <120 inserts/second with a \n>> typical 7200RPM drive. Here batching multiple INSERTs together is critical \n>> to get any sort of reasonable performance.\n>\n> in case #1 would you expect to get significant gains from batching? doesn't \n> it suffer from problems similar to #2 when checkpoints hit?\n>\nEven with a disk controller with a proper write cache, the latency for\nsingle-insert-at-a-time will keep the number of updates to the low\nthousands per second (on the controllers I have used). If you can batch\nthem, it would not be unreasonable to increase performance by an order\nof magnitude or more. At the high end, other issues like CPU usage can\nrestrict performance.\n\nKen\n>> In (3), I'd expect that trivia like INSERT vs. COPY and COPY BINARY vs. \n>> COPY TEXT would be overwhelmed by the overhead of the commit itself. \n>> Therefore you probably want to test with case (2) instead, as it doesn't \n>> require any additional hardware but has similar performance to a \n>> production-worthy (1). All of the other things you're worried about really \n>> don't matter here; you can get an approximate measure of what the \n>> performance of the various INSERT/COPY schemes are that is somewhat \n>> platform dependant, but the results should be good enough to give you some \n>> rule of thumb suggestions for whether optimizations are significant enough \n>> to justify the coding effort to implement them or not.\n>\n> I'll see about setting up a test in the next day or so. should I be able to \n> script this through psql? or do I need to write a C program to test this?\n>\n>> I'm not sure whether you're familiar with all the fsync trivia here. In \n>> normal syslog use, there's an fsync call after every write. 
You can \n>> disable that by placing a \"-\" before the file name in /etc/syslog.conf The \n>> thing that is going to make database-based writes very different is that \n>> syslog's fsync'd writes are unlikely to leave you in a bad state if the \n>> drive lies about them, while database writes can. So someone using syslog \n>> on a standard SATA drive isn't getting the write guarantee they think they \n>> are, but the downside on a crash is minimal. If you've got a high-volume \n>> syslog environment (>100 lines/second), you can't support those as \n>> individual database writes unless you've got a battery-backed write \n>> controller. A regular disk just can't process genuine fsync calls any \n>> faster than that. A serious syslog deployment that turns fsync on and \n>> expects it to really do its thing is already exposed to this issue though. \n>> I think it may be a the case that a lot of people think they have durable \n>> writes in their configuration but really don't.\n>\n> rsyslog is a little different, instead of just input -> disk it does input \n> -> queue -> output (where output can be many things, including disk or \n> database)\n>\n> it's default is to use memory-based queues (and no fsync), but has config \n> options to do disk based queues with a fsync after each update\n>\n> David Lang\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 21 Apr 2009 13:34:22 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, Stephen Frost wrote:\n\n> * [email protected] ([email protected]) wrote:\n>> I think the key thing is that rsyslog today doesn't know anything about\n>> SQL variables, it just creates a string that the user and the database\n>> say looks like a SQL statement.\n>\n> err, what SQL variables? You mean the $NUM stuff? They're just\n> placeholders.. You don't really need to *do* anything with them.. Or\n> are you worried that users would provide something that would break as a\n> prepared query? If so, you just need to figure out how to handle that\n> cleanly..\n>\n>> an added headache is that the rsyslog config does not have the concept of\n>> arrays (the closest that it has is one special-case hack to let you\n>> specify one variable multiple times)\n>\n> Argh. The array I'm talking about is a C array, and has nothing to do\n> with the actual config syntax.. I swear, I think you're making this\n> more difficult by half.\n\nnot intentinally, but you may be right.\n\n> Alright, looking at the documentation on rsyslog.com, I see something\n> like:\n>\n> $template MySQLInsert,\"insert iut, message, receivedat values\n> ('%iut%', '%msg:::UPPERCASE%', '%timegenerated:::date-mysql%')\n> into systemevents\\r\\n\", SQL\n>\n> Ignoring the fact that this is horrible, horrible non-SQL,\n\nthat example is for MySQL, nuff said ;-) or are you referring to the \nmodifiers that rsyslog has to manipulate the strings before inserting \nthem? (as opposed to using sql to manipulate the strings)\n\n> I see that\n> you use %blah% to define variables inside your string. That's fine.\n> There's no reason why you can't use this exact syntax to build a\n> prepared query. No user-impact changes are necessary. 
Here's what you\n> do:\n\n<snip psudocode to replace %blah% with $num>\n\nfor some reason I was stuck on the idea of the config specifying the \nstatement and variables seperatly, so I wasn't thinking this way, however \nthere are headaches\n\ndoing this will require changes to the structure of rsyslog, today the \nstring manipulation is done before calling the output (database) module, \nso all the database module currently gets is a string. in a (IMHO \nmisguided) attempt at security in a multi-threaded program, the output \nmodules are not given access to the full data, only to the distiled \nresult.\n\nalso, this approach won't work if the user wants to combine fixed text \nwith the variable into a column. an example of doing that would be to have \na filter to match specific lines, and then use a slightly different \ntemplate for those lines. I guess that could be done in SQL instead of in \nthe rsyslog string manipulation (i.e. instead of 'blah-%host%' do \n'blah-'||'%host')\n\n> As I mentioned before, the only obvious issue I\n> see with doing this implicitly is that the user might want to put\n> variables in places that you can't have variables in prepared queries.\n\nthis problem space would be anywhere except the column contents, right?\n\n> You could deal with that by having the user indicate per template, using\n> another template option, if the query can be prepared or not. Another\n> options is adding to your syntax something like '%*blah%' which would\n> tell the system to pre-populate that variable before issuing PQprepare\n> on the resultant string. Of course, you might just use PQexecParams\n> there, unless you want to be gung-ho and actually keep a hash around of\n> prepared queries on the assumption that the variable the user gave you\n> doesn't change very often (eg, '%*month%') and it's cheap to keep a\n> small list of them around to use when they do match up.\n\nrsyslog supports something similar for writing to disk where you can use \nvariables as part of the filename/path (referred to as 'dynafiles' in the \ndocumentation). that's a little easier to deal with as the filename is \nspecified seperatly from the format of the data to write. If we end up \ndoing prepared statements I suspect they initially won't support variables \noutside of the columns.\n\nDavid Lang\n", "msg_date": "Tue, 21 Apr 2009 11:35:02 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, [email protected] wrote:\n\n>> I see that\n>> you use %blah% to define variables inside your string. That's fine.\n>> There's no reason why you can't use this exact syntax to build a\n>> prepared query. No user-impact changes are necessary. Here's what you\n>> do:\n>\n> <snip psudocode to replace %blah% with $num>\n>\n> for some reason I was stuck on the idea of the config specifying the \n> statement and variables seperatly, so I wasn't thinking this way, however \n> there are headaches\n>\n> doing this will require changes to the structure of rsyslog, today the string \n> manipulation is done before calling the output (database) module, so all the \n> database module currently gets is a string. in a (IMHO misguided) attempt at \n> security in a multi-threaded program, the output modules are not given access \n> to the full data, only to the distiled result.\n>\n> also, this approach won't work if the user wants to combine fixed text with \n> the variable into a column. 
an example of doing that would be to have a \n> filter to match specific lines, and then use a slightly different template \n> for those lines. I guess that could be done in SQL instead of in the rsyslog \n> string manipulation (i.e. instead of 'blah-%host%' do 'blah-'||'%host')\n\nby the way, now that I understand how you were viewing this, I see why you \nwere saying that there would need to be a SQL parser. I was missing that \nheadache, by going the direction of having the user specify the individual \ncomponents (which has it's own headache)\n\nDavid Lang\n", "msg_date": "Tue, 21 Apr 2009 11:44:46 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* [email protected] ([email protected]) wrote:\n>> Ignoring the fact that this is horrible, horrible non-SQL,\n>\n> that example is for MySQL, nuff said ;-)\n\nindeed.\n\n> for some reason I was stuck on the idea of the config specifying the \n> statement and variables seperatly, so I wasn't thinking this way, however \n> there are headaches\n\nNothing worth doing is ever completely without complications. :)\n\n> doing this will require changes to the structure of rsyslog, today the \n> string manipulation is done before calling the output (database) module, \n> so all the database module currently gets is a string. in a (IMHO \n> misguided) attempt at security in a multi-threaded program, the output \n> modules are not given access to the full data, only to the distiled \n> result.\n\nAh, yes, that's definitely a problem and I agree- a very misguided\napproach to doing things. Certainly, to use prepared queries, you will\nhave to pass the data to whatever is talking to the database in some\nkind of structured way. There's not much advantage if you're getting it\nas a string and having to parse it out yourself before using a prepared\nquery with the database.\n\nIn a multi-threaded program, I think it would at least be reasonably\neasy/cheap to provide the output modules with the full data? Of course,\nyou would need to teach the string manipulation logic to not do its\nescaping and other related work for prepared queries which are just\ngoing to use the full data anyway.\n\n> also, this approach won't work if the user wants to combine fixed text \n> with the variable into a column. an example of doing that would be to \n> have a filter to match specific lines, and then use a slightly different \n> template for those lines. I guess that could be done in SQL instead of in \n> the rsyslog string manipulation (i.e. instead of 'blah-%host%' do \n> 'blah-'||'%host')\n\nIt would be more like: 'blah-' || %host%\n\nJust to be clear (if you put the %host% in quotes, and then convert that\nto '$1', it won't be considered a variable, at least in PG). That might\nbe an issue going forward, but on the flip side, I can see some reasons\nfor supporting both prepared and unprepared queries, so if you implement\nthat through an additional template option, you can document that the\nuser needs to ensure the prepared query is structured correctly with the\ncorrect quoting. This gives you the flexibility of the unprepared query\nfor users who don't care about performance, and the benefits of prepared\nqueries, where they can be used, for users who do need that performance.\n\nOr you could just force your users to move everything to prepared\nqueries but it's probably too late for that. 
:) Maybe if things had\nstarted out that way..\n\n>> As I mentioned before, the only obvious issue I\n>> see with doing this implicitly is that the user might want to put\n>> variables in places that you can't have variables in prepared queries.\n>\n> this problem space would be anywhere except the column contents, right?\n\nWell, it depends on the query.. You can have variables in the column\ncontents, sure, but you can also have them in where clauses if you're\ndoing something like:\n\ninsert into blah select $1,$2,$3,b from mytable where $2 = c;\n\nI believe, in PG at least, you can use them pretty much anywhere you can\nuse a constant.\n\n> rsyslog supports something similar for writing to disk where you can use \n> variables as part of the filename/path (referred to as 'dynafiles' in the \n> documentation). that's a little easier to deal with as the filename is \n> specified seperatly from the format of the data to write. If we end up \n> doing prepared statements I suspect they initially won't support \n> variables outside of the columns.\n\nThat sounds reasonable, to me at least.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 15:14:58 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* [email protected] ([email protected]) wrote:\n> by the way, now that I understand how you were viewing this, I see why \n> you were saying that there would need to be a SQL parser. I was missing \n> that headache, by going the direction of having the user specify the \n> individual components (which has it's own headache)\n\nRight, but really, you're already parsing the SQL to the extent that you\nneed to, and whatever limitations and headaches that causes you've\nalready had to deal with through proper escaping and whatnot of your\nvariables.. So I'm not sure that it'll be all that bad in the end.\n\nIf you add this as a new feature that users essentially have to opt-in\nto, then I think you can offload alot of the work on to the users for\ndoing things like fixing quoting (so the $NUM vars aren't quoted).\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 15:17:43 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "[email protected] wrote:\n>>> 2. insert into table values (),(),(),()\n>>\n>> Using this structure would be more database agnostic, but won't perform\n>> as well as the COPY options I don't believe. It might be interesting to\n>> do a large \"insert into table values (),(),()\" as a prepared statement,\n>> but then you'd have to have different sizes for each different number of\n>> items you want inserted.\n>\n> on the other hand, when you have a full queue (lots of stuff to \n> insert) is when you need the performance the most. if it's enough of a \n> win on the database side, it could be worth more effort on the \n> applicaiton side.\nAre you sure preparing a simple insert is really worthwhile?\n\nI'd check if I were you. It shouldn't take long to plan.\n\nNote that this structure (above) is handy but not universal.\n\nYou might want to try:\n\ninsert into table\nselect (...)\nunion\nselect (...)\nunion\nselect (...)\n...\n\nas well, since its more univeral. 
Works on Sybase and SQLServer for \nexample (and v.quickly too - much more so than a TSQL batch with lots of \ninserts or execs of stored procs)\n\nJames\n\n\n", "msg_date": "Tue, 21 Apr 2009 20:22:10 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* James Mansion ([email protected]) wrote:\n> [email protected] wrote:\n>> on the other hand, when you have a full queue (lots of stuff to \n>> insert) is when you need the performance the most. if it's enough of a \n>> win on the database side, it could be worth more effort on the \n>> applicaiton side.\n> Are you sure preparing a simple insert is really worthwhile?\n>\n> I'd check if I were you. It shouldn't take long to plan.\n\nUsing prepared queries, at least if you use PQexecPrepared or\nPQexecParams, also reduces the work required on the client to build the\nwhole string, and the parsing overhead on the database side to pull it\napart again. That's where the performance is going to be improved by\ngoing that route, not so much in eliminating the planning.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Tue, 21 Apr 2009 15:25:31 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, [email protected] wrote:\n\n>> 1) Disk/controller has a proper write cache. Writes and fsync will be \n>> fast. You can insert a few thousand individual transactions per second.\n>> \n> in case #1 would you expect to get significant gains from batching? doesn't \n> it suffer from problems similar to #2 when checkpoints hit?\n\nTypically controllers with a write cache are doing elevator sorting across \na much larger chunk of working memory (typically >=256MB instead of <32MB \non the disk itself) which means a mix of random writes will average better \nperformance--on top of being able to aborb a larger chunk of them before \nblocking on writes. You get some useful sorting in the OS itself, but \nevery layer of useful additional cache helps significantly here.\n\nBatching is always a win because even a write-cached commit is still \npretty expensive, from the server on down the chain.\n\n> I'll see about setting up a test in the next day or so. should I be able to \n> script this through psql? or do I need to write a C program to test this?\n\nYou can easily compare things with psql, like in the COPY BINARY vs. TEXT \nexample I gave earlier, that's why I was suggesting you run your own tests \nhere just to get a feel for things on your data set.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 21 Apr 2009 20:01:10 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, 21 Apr 2009, Stephen Frost wrote:\n\n> * James Mansion ([email protected]) wrote:\n>> [email protected] wrote:\n>>> on the other hand, when you have a full queue (lots of stuff to\n>>> insert) is when you need the performance the most. if it's enough of a\n>>> win on the database side, it could be worth more effort on the\n>>> applicaiton side.\n>> Are you sure preparing a simple insert is really worthwhile?\n>>\n>> I'd check if I were you. 
It shouldn't take long to plan.\n>\n> Using prepared queries, at least if you use PQexecPrepared or\n> PQexecParams, also reduces the work required on the client to build the\n> whole string, and the parsing overhead on the database side to pull it\n> apart again. That's where the performance is going to be improved by\n> going that route, not so much in eliminating the planning.\n\nin a recent thread about prepared statements, where it was identified that \nsince the planning took place at the time of the prepare you sometimes \nhave worse plans than for non-prepared statements, a proposal was made to \nhave a 'pre-parsed, but not pre-planned' version of a prepared statement. \nThis was dismissed as a waste of time (IIRC by Tom L) as the parsing time \nwas negligable.\n\nwas that just because it was a more complex query to plan?\n\nDavid Lang\n", "msg_date": "Tue, 21 Apr 2009 17:12:26 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Tue, Apr 21, 2009 at 8:12 PM, <[email protected]> wrote:\n>> Using prepared queries, at least if you use PQexecPrepared or\n>> PQexecParams, also reduces the work required on the client to build the\n>> whole string, and the parsing overhead on the database side to pull it\n>> apart again.  That's where the performance is going to be improved by\n>> going that route, not so much in eliminating the planning.\n>\n> in a recent thread about prepared statements, where it was identified that\n> since the planning took place at the time of the prepare you sometimes have\n> worse plans than for non-prepared statements, a proposal was made to have a\n> 'pre-parsed, but not pre-planned' version of a prepared statement. This was\n> dismissed as a waste of time (IIRC by Tom L) as the parsing time was\n> negligable.\n>\n> was that just because it was a more complex query to plan?\n\nJoins are expensive to plan; a simple insert is not. I also disagree\nthat pre-parsed but not pre-planned is a waste of time, whoever said\nit. Sometimes it's what you want, especially in PL/pgsql.\n\n...Robert\n", "msg_date": "Tue, 21 Apr 2009 22:29:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "Stephen Frost wrote:\n> apart again. That's where the performance is going to be improved by\n> going that route, not so much in eliminating the planning.\n> \nFine. But like I said, I'd suggest measuring the fractional improvement \nfor this\nwhen sending multi-row inserts before writing something complex. I \nthink the\nbig will will be doing multi-row inserts at all. If you are going to \nprepare then\nyou'll need a collection of different prepared statements for different \nbatch sizes\n(say 1,2,3,4,5,10,20,50) and things will get complicated. A multi-row \ninsert\nwith unions and dynamic SQL is actually rather universal.\n\nPersonally I'd implement that first (and it should be easy to do across \nmultiple\ndbms types) and then return to it to have a more complex client side with\nprepared statements etc if (and only if) necessary AND the performance\nimprovement were measurably worthwhile, given the indexing and storage\noverheads.\n\nThere is no point optimising away the CPU of the simple parse if you are\njust going to get hit with a lot of latency from round trips, and forming a\ngeneric multi-insert SQL string is much, much easier to get working as a \nfirst\nstep. 
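\nFor PostgreSQL specifically the same idea can also be spelled as a multi-row VALUES list - again only an illustrative sketch against a made-up table, not anything rsyslog actually defines:\n\n  insert into log (ts, host, msg) values\n    ('2009-04-21 11:44:46', 'host1', 'first message'),\n    ('2009-04-21 11:44:47', 'host2', 'second message'),\n    ('2009-04-21 11:44:48', 'host1', 'third message');\n\nand that string is trivial to build up from however many rows happen to be queued.\n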
Server CPU isn't a bottleneck all that often - and with something as\nsimple as this you'll hit IO performance bottlenecks rather easily.\n\nJames\n\n\n", "msg_date": "Wed, 22 Apr 2009 06:26:07 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "David,\n\n* [email protected] ([email protected]) wrote:\n> in a recent thread about prepared statements, where it was identified \n> that since the planning took place at the time of the prepare you \n> sometimes have worse plans than for non-prepared statements, a proposal \n> was made to have a 'pre-parsed, but not pre-planned' version of a \n> prepared statement. This was dismissed as a waste of time (IIRC by Tom L) \n> as the parsing time was negligable.\n>\n> was that just because it was a more complex query to plan?\n\nYes, as I beleive was mentioned already, planning time for inserts is\nreally small. Parsing time for inserts when there's little parsing that\nhas to happen also isn't all *that* expensive and the same goes for\nconversions from textual representations of data to binary.\n\nWe're starting to re-hash things, in my view. The low-hanging fruit is\ndoing multiple things in a single transaction, either by using COPY,\nmulti-value INSERTs, or just multiple INSERTs in a single transaction.\nThat's absolutely step one.\n\nAdding in other things, where they make sense (prepared statements,\nbinary format for things you have as binary, etc) is a good idea if it\ncan be done along the way.\n\n\tStephen", "msg_date": "Wed, 22 Apr 2009 08:19:28 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* James Mansion ([email protected]) wrote:\n> Fine. But like I said, I'd suggest measuring the fractional improvement \n> for this\n> when sending multi-row inserts before writing something complex. I \n> think the\n> big will will be doing multi-row inserts at all. \n\nYou're re-hashing things I've already said. The big win is batching the\ninserts, however that's done, into fewer transactions. Sure, multi-row\ninserts could be used to do that, but so could dropping begin/commits in\nright now which probably takes even less effort.\n\n> If you are going to \n> prepare then\n> you'll need a collection of different prepared statements for different \n> batch sizes\n> (say 1,2,3,4,5,10,20,50) and things will get complicated. A multi-row \n> insert\n> with unions and dynamic SQL is actually rather universal.\n\nNo, as was pointed out previously already, you really just need 2. A\nsingle-insert, and a batch insert of some size. It'd be interesting to\nsee if there's really much of a performance difference between a\n50-insert prepared statement, and 50 1-insert prepared statements. If\nthey're both done in larger transactions, I don't know that there's\nreally alot of performance difference.\n\n> Personally I'd implement that first (and it should be easy to do across \n> multiple\n> dbms types) and then return to it to have a more complex client side with\n> prepared statements etc if (and only if) necessary AND the performance\n> improvement were measurably worthwhile, given the indexing and storage\n> overheads.\n\nstorage overhead? indexing overhead? We're talking about prepared\nstatements here, what additional storage requirement do you think those\nwould impose? What additional indexing overhead? 
I don't believe we\nactually do anything differently between prepared statements and\nmulti-row inserts that would change either of those.\n\n> There is no point optimising away the CPU of the simple parse if you are\n> just going to get hit with a lot of latency from round trips, and forming a\n> generic multi-insert SQL string is much, much easier to get working as a \n> first\n> step. Server CPU isn't a bottleneck all that often - and with something as\n> simple as this you'll hit IO performance bottlenecks rather easily.\n\nAh, latency is a reasonable thing to bring up. Of course, if you want\nto talk about latency then you get to consider that multi-insert SQL\nwill inherently have larger packet sizes which could cause them to be\ndelayed in some QoS arrangements.\n\nAs I said, most of this is a re-hash of things already said. The\nlow-hanging fruit here is doing multiple inserts inside of a\ntransaction, rather than 1 insert per transaction. Regardless of how\nthat's done, it's going to give the best bang-for-buck. It will\ncomplicate the client code some, regardless of how it's implemented, so\nthat failures are handled gracefully (if that's something you care about\nanyway), but as there exists some queueing mechanisms in rsyslog\nalready, hopefully it won't be too bad.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 22 Apr 2009 08:44:44 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "\nOn Mon, 2009-04-20 at 14:53 -0700, [email protected] wrote:\n\n> the big win is going to be in changing the core of rsyslog so that it can \n> process multiple messages at a time (bundling them into a single \n> transaction)\n\nThat isn't necessarily true as a single \"big win\".\n\nThe reason there is an overhead per transaction is because of commit\ndelays, which can be removed by executing\n\n SET synchronous_commit = off; \n\nafter connecting to PostgreSQL 8.3+\n\nYou won't need to do much else. This can also be enabled for a\nPostgreSQL user without even changing the rsyslog source code, so it\nshould be easy enough to test.\n\nAnd this type of application is *exactly* what it was designed for.\n\nSome other speedups should also be possible, but this is easiest. \n\nI would guess that batching inserts will be a bigger win than simply\nusing prepared statements because it will reduce network roundtrips to a\ncentralised log server. Preparing statements might show up well on tests\nbecause people will do tests against a local database, most likely.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 22 Apr 2009 16:19:06 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, Apr 22, 2009 at 8:19 AM, Stephen Frost <[email protected]> wrote:\n> Yes, as I beleive was mentioned already, planning time for inserts is\n> really small.  Parsing time for inserts when there's little parsing that\n> has to happen also isn't all *that* expensive and the same goes for\n> conversions from textual representations of data to binary.\n>\n> We're starting to re-hash things, in my view.  
The low-hanging fruit is\n> doing multiple things in a single transaction, either by using COPY,\n> multi-value INSERTs, or just multiple INSERTs in a single transaction.\n> That's absolutely step one.\n\nThis is all well-known, covered information, but perhaps some numbers\nwill help drive this home. 40000 inserts into a single-column,\nunindexed table; with predictable results:\n\nseparate inserts, no transaction: 21.21s\nseparate inserts, same transaction: 1.89s\n40 inserts, 100 rows/insert: 0.18s\none 40000-value insert: 0.16s\n40 prepared inserts, 100 rows/insert: 0.15s\nCOPY (text): 0.10s\nCOPY (binary): 0.10s\n\nOf course, real workloads will change the weights, but this is more or\nless the magnitude of difference I always see--batch your inserts into\nsingle statements, and if that's not enough, skip to COPY.\n\n-- \nGlenn Maynard\n", "msg_date": "Wed, 22 Apr 2009 15:33:21 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, 22 Apr 2009, Glenn Maynard wrote:\n\n> On Wed, Apr 22, 2009 at 8:19 AM, Stephen Frost <[email protected]> wrote:\n>> Yes, as I beleive was mentioned already, planning time for inserts is\n>> really small.  Parsing time for inserts when there's little parsing that\n>> has to happen also isn't all *that* expensive and the same goes for\n>> conversions from textual representations of data to binary.\n>>\n>> We're starting to re-hash things, in my view.  The low-hanging fruit is\n>> doing multiple things in a single transaction, either by using COPY,\n>> multi-value INSERTs, or just multiple INSERTs in a single transaction.\n>> That's absolutely step one.\n>\n> This is all well-known, covered information, but perhaps some numbers\n> will help drive this home. 
40000 inserts into a single-column,\n> unindexed table; with predictable results:\n>\n> separate inserts, no transaction: 21.21s\n> separate inserts, same transaction: 1.89s\n\nare these done as separate round trips?\n\ni.e.\nbegin <send>\ninsert <send>\ninsert <send>\n..\nend <send>\n\nor as one round trip?\n\ni.e.\nbegin;insert;insert..;end\n\n> 40 inserts, 100 rows/insert: 0.18s\n> one 40000-value insert: 0.16s\n> 40 prepared inserts, 100 rows/insert: 0.15s\n\nare one of these missing a 0?\n\n> COPY (text): 0.10s\n> COPY (binary): 0.10s\n>\n> Of course, real workloads will change the weights, but this is more or\n> less the magnitude of difference I always see--batch your inserts into\n> single statements, and if that's not enough, skip to COPY.\n\nthanks for this information, this is exactly what I was looking for.\n\ncan this get stored somewhere for reference?\n\nDavid Lang\n\nFrom: Stephen Frost <[email protected]>\nDate: Wed, 22 Apr 2009 16:37:03 -0400\nSubject: Re: performance for high-volume log insertion\n\nGlenn,\n\n* Glenn Maynard ([email protected]) wrote:\n> This is all well-known, covered information, but perhaps some numbers\n> will help drive this home. 40000 inserts into a single-column,\n> unindexed table; with predictable results:\n\nThanks for doing the work. I had been intending to but hadn't gotten to\nit yet.\n\n> separate inserts, no transaction: 21.21s\n> separate inserts, same transaction: 1.89s\n> 40 inserts, 100 rows/insert: 0.18s\n> one 40000-value insert: 0.16s\n> 40 prepared inserts, 100 rows/insert: 0.15s\n> COPY (text): 0.10s\n> COPY (binary): 0.10s\n\nWhat about 40000 individual prepared inserts? Just curious about it.\n\nAlso, kind of surprised about COPY text vs. binary. What was the data\ntype in the table..? If text, that makes sense, if it was an integer or\nsomething else, I'm kind of surprised.\n\n\tThanks,\n\n\t\tStephen\n", "msg_date": "Wed, 22 Apr 2009 13:07:34 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "Stephen Frost <[email protected]> writes:\n> * Glenn Maynard ([email protected]) wrote:\n>> separate inserts, no transaction: 21.21s\n>> separate inserts, same transaction: 1.89s\n>> 40 inserts, 100 rows/insert: 0.18s\n>> one 40000-value insert: 0.16s\n>> 40 prepared inserts, 100 rows/insert: 0.15s\n>> COPY (text): 0.10s\n>> COPY (binary): 0.10s\n\n> What about 40000 individual prepared inserts? Just curious about it.\n\nAlso, just to be clear: were the \"40 insert\" cases 40 separate\ntransactions or one transaction? I think the latter was meant but\nit's not 100% clear.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Apr 2009 16:49:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion " }, { "msg_contents": "Stephen Frost wrote:\n> You're re-hashing things I've already said. The big win is batching the\n> inserts, however that's done, into fewer transactions. Sure, multi-row\n> inserts could be used to do that, but so could dropping begin/commits in\n> right now which probably takes even less effort.\n> \nWell, I think you are seriously underestimating the cost of the \nround-trip compared\nto all the other effects (possibly bar the commits). When I tested the \nunion insert\ntechnique on SQLServer and Sybase I got measurable improvements going from\n100 row statements to 200 row statements, though I suspect in that case the\nper-statement overheads are quite high. I expected improvements from 10 \nto 20\nrow batches, but it carried on getting better for a long time after \nthat. The\nSybase parser runs out of workspace first.\n\n\n> No, as was pointed out previously already, you really just need 2. A\n> \nAnd I'm disagreeing with that. Single row is a given, but I think \nyou'll find it pays to have one\nround trip if at all possible and invoking multiple prepared statements \ncan work against this.\n\n> see if there's really much of a performance difference between a\n> 50-insert prepared statement, and 50 1-insert prepared statements. If\n> they're both done in larger transactions, I don't know that there's\n> really alot of performance difference.\n> \nI think you'll be surprised, but the only way is to test it. And also \nthe simple 50 row single\ninsert as text. See if you can measure the difference between that and \nthe prepared\nstatement.\n> storage overhead? indexing overhead? We're talking about prepared\n> statements here, what additional storage requirement do you think those\n> would impose? What additional indexing overhead? I don't believe we\n> actually do anything differently between prepared statements and\n> multi-row inserts that would change either of those.\n> \nThat's my point. 
You will brickwall on the actual database operations \nfor execution\nearly enough that the efficiency difference between parse-and-execute \nand prepared\nstatements will be hard to measure - at least if you have multi-row \nstatements.\n\nBut this really needs testing and timing.\n\n> Ah, latency is a reasonable thing to bring up. Of course, if you want\n> to talk about latency then you get to consider that multi-insert SQL\n> will inherently have larger packet sizes which could cause them to be\n> delayed in some QoS arrangements.\n> \nNo, I mean latency from round trips from the client to the server \nprocess. I don't know why\nyou think I'd mean that.\n> As I said, most of this is a re-hash of things already said. The\n> low-hanging fruit here is doing multiple inserts inside of a\n> transaction, rather than 1 insert per transaction. Regardless of how\n> that's done, it's going to give the best bang-for-buck. It will\n> complicate the client code some, regardless of how it's implemented, so\n> that failures are handled gracefully (if that's something you care about\n> anyway), but as there exists some queueing mechanisms in rsyslog\n> already, hopefully it won't be too bad.\n> \nI think you have largely missed the point. There are two things here:\n 1) how many rows per commit\n 2) how many rows per logical RPC (ie round trip) between the client\n and server processes\n\nWe are agreed that the first is a Very Big Deal, but you seem resistant to\nthe idea that the second of these is a big deal once you've dealt with \nthe former.\n\nMy experience has been that its much more important than any benefits of\npreparing statements etc, particularly if the use of a prepared \nstatement can\nmake it harder to do multi-row RPCs because the protocol doesn't\nallow pipelining (at least without things getting very hairy).\n\nClearly 'copy' is your friend for this too, at least potentially (even \nif it means\nstreaming to a staging table).\n\n\nJames\n\n", "msg_date": "Wed, 22 Apr 2009 21:53:03 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, Apr 22, 2009 at 4:07 PM, <[email protected]> wrote:\n> are these done as seperate round trips?\n>\n> i.e.\n> begin <send>\n> insert <send>\n> insert <send>\n> ..\n> end <send>\n>\n> or as one round trip?\n\nAll tests were done by constructing a file and using \"time psql -f ...\".\n\n>> 40 inserts, 100 rows/insert: 0.18s\n>> one 40000-value insert: 0.16s\n>> 40 prepared inserts, 100 rows/insert: 0.15s\n>\n> are one of these missing a 0?\n\nSorry, 400 * 100. All cases inserted 40000 rows, and I deleted all\nrows between tests (but did not recreate the table).\n\n-- \nGlenn Maynard\n", "msg_date": "Wed, 22 Apr 2009 17:04:43 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, 2009-04-22 at 21:53 +0100, James Mansion wrote:\n> Stephen Frost wrote:\n> > You're re-hashing things I've already said. The big win is batching the\n> > inserts, however that's done, into fewer transactions. Sure, multi-row\n> > inserts could be used to do that, but so could dropping begin/commits in\n> > right now which probably takes even less effort.\n> > \n> Well, I think you are seriously underestimating the cost of the \n> round-trip compared\n\nThe breakdown is this:\n\n1. Eliminate single inserts\n2. 
Eliminate round trips\n\nYes round trips are hugely expensive. \n\n> \n> > No, as was pointed out previously already, you really just need 2. A\n> > \n> And I'm disagreeing with that. Single row is a given, but I think \n> you'll find it pays to have one\n\nMy experience shows that you are correct. Even if you do a single BEGIN;\nwith 1000 inserts you are still getting a round trip for every insert\nuntil you commit. Based on 20ms round trip time, you are talking\n20seconds additional overhead.\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Wed, 22 Apr 2009 14:17:43 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, Apr 22, 2009 at 4:37 PM, Stephen Frost <[email protected]> wrote:\n> Thanks for doing the work.  I had been intending to but hadn't gotten to\n> it yet.\n\nI'd done similar tests recently, for some batch import code, so it was\njust a matter of recreating it.\n\n>> separate inserts, no transaction: 21.21s\n>> separate inserts, same transaction: 1.89s\n>> 40 inserts, 100 rows/insert: 0.18s\n>> one 40000-value insert: 0.16s\n>> 40 prepared inserts, 100 rows/insert: 0.15s\n>> COPY (text): 0.10s\n>> COPY (binary): 0.10s\n>\n> What about 40000 individual prepared inserts?  Just curious about it.\n\n40000 inserts, one prepared statement each (constructing the prepared\nstatement only once), in a single transaction: 1.68s\n\nI'm surprised that there's any win here at all.\n\n> Also, kind of suprised about COPY text vs. binary.  What was the data\n> type in the table..?  If text, that makes sense, if it was an integer or\n> something else, I'm kind of suprised.\n\nEach row had one integer column. I expect strings to be harder to\nparse, since it's allocating buffers and parsing escapes, which is\nusually more expensive than parsing an integer out of a string. I'd\nexpect the difference to be negligible either way, though, and I'd be\ninterested in hearing about use cases where binary copying is enough\nof a win to be worth the development cost of maintaining the feature.\n\nOn Wed, Apr 22, 2009 at 4:49 PM, Tom Lane <[email protected]> wrote:\n> Also, just to be clear: were the \"40 insert\" cases 40 separate\n> transactions or one transaction? I think the latter was meant but\n> it's not 100% clear.\n\nOne transaction--the only case where I ran more than one transaction\nwas the first.\n\n-- \nGlenn Maynard\n", "msg_date": "Wed, 22 Apr 2009 17:20:01 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, Apr 22, 2009 at 4:53 PM, James Mansion\n<[email protected]> wrote:\n> And I'm disagreeing with that.  Single row is a given, but I think you'll\n> find it pays to have one\n> round trip if at all possible and invoking multiple prepared statements can\n> work against this.\n\nYou're talking about round-trips to a *local* server, on the same\nsystem, not a dedicated server with network round-trips, right?\n\nBlocking round trips to another process on the same server should be\nfairly cheap--that is, writing to a socket (or pipe, or localhost TCP\nconnection) where the other side is listening for it; and then\nblocking in return for the response. 
The act of writing to an FD that\nanother process is waiting for will make the kernel mark the process\nas \"ready to wake up\" immediately, and the act of blocking for the\nresponse will kick the scheduler to some waiting process, so as long\nas there isn't something else to compete for CPU for, each write/read\nwill wake up the other process instantly. There's a task switching\ncost, but that's too small to be relevant here.\n\nDoing 1000000 local round trips, over a pipe: 5.25s (5 *microseconds*\neach), code attached. The cost *should* be essentially identical for\nany local transport (pipes, named pipes, local TCP connections), since\nthe underlying scheduler mechanisms are the same.\n\nThat's not to say that batching inserts doesn't make a difference--it\nclearly does, and it would probably be a much larger difference with\nactual network round-trips--but round-trips to a local server aren't\ninherently slow. I'd be (casually) interested in knowing what the\nactual costs are behind an SQL command round-trip (where the command\nisn't blocking on I/O, eg. an INSERT inside a transaction, with no I/O\nfor things like constraint checks that need to be done before the\ncommand can return success).\n\n-- \nGlenn Maynard", "msg_date": "Wed, 22 Apr 2009 17:48:02 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* Glenn Maynard ([email protected]) wrote:\n> >> separate inserts, no transaction: 21.21s\n> >> separate inserts, same transaction: 1.89s\n> >> 40 inserts, 100 rows/insert: 0.18s\n> >> one 40000-value insert: 0.16s\n> >> 40 prepared inserts, 100 rows/insert: 0.15s\n> >> COPY (text): 0.10s\n> >> COPY (binary): 0.10s\n> >\n> > What about 40000 individual prepared inserts?  Just curious about it.\n> \n> 40000 inserts, one prepared statement each (constructing the prepared\n> statement only once), in a single transaction: 1.68s\n> \n> I'm surprised that there's any win here at all.\n\nFor a single column table, I wouldn't expect much either. With more\ncolumns I think it would be a larger improvement.\n\n> Each row had one integer column. I expect strings to be harder to\n> parse, since it's allocating buffers and parsing escapes, which is\n> usually more expensive than parsing an integer out of a string. I'd\n> expect the difference to be negligible either way, though, and I'd be\n> interested in hearing about use cases where binary copying is enough\n> of a win to be worth the development cost of maintaining the feature.\n\nI've seen it help, but I was sending everything as binary (I figured,\nonce I'm doing it, might as well do it all), which included dates,\ntimestamps, IP addresses, integers, and some text. It may have more of\nan impact on dates and timestamps than on simple integers.\n\n\tThanks!\n\n\t\tStephen", "msg_date": "Wed, 22 Apr 2009 17:51:00 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost <[email protected]> wrote:\n> For a single column table, I wouldn't expect much either.  With more\n> columns I think it would be a larger improvement.\n\nMaybe. 
I'm not sure why parsing \"(1,2,3,4,5)\" in an EXECUTE parameter\nshould be faster than parsing the exact same thing in an INSERT,\nthough.\n\n> I've seen it help, but I was sending everything as binary (I figured,\n> once I'm doing it, might as well do it all), which included dates,\n> timestamps, IP addresses, integers, and some text.  It may have more of\n> an impact on dates and timestamps than on simple integers.\n\nOf course, you still need to get it in that format. Be careful to\ninclude any parsing you're doing to create the binary date in the\nbenchmarks. Inevitably, at least part of the difference will be costs\nsimply moving from the psql process to your own.\n\n-- \nGlenn Maynard\n", "msg_date": "Wed, 22 Apr 2009 18:16:18 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "Stephen Frost wrote on 22.04.2009 23:51:\n>>> What about 40000 individual prepared inserts? Just curious about it.\n\n>> 40000 inserts, one prepared statement each (constructing the prepared\n>> statement only once), in a single transaction: 1.68s\n>>\n>> I'm surprised that there's any win here at all.\n> \n> For a single column table, I wouldn't expect much either. With more\n> columns I think it would be a larger improvement.\n\nOut of curiosity I did some tests through JDBC.\n\nUsing a single-column (integer) table, re-using a prepared statement took about \n7 seconds to insert 100000 rows with JDBC's batch interface and a batch size of 1000\n\nUsing a prepared statement that had a 1000 (?) after the insert (insert into foo \n values (?), (?), ...) the insert took about 0.8 seconds. Quite an improvement \nI'd say.\n\nThen I switched to a three column table (int, varchar(500), varchar(500)).\n\nInsert using a preparedstatement with batch (first scenario) now was ~8.5 \nseconds, whereas the multi-value insert now took ~3 seconds. So the difference \ngot smaller, but still was quite substantial. This was inserting relatively \nsmall strings (~20 characters) into the table\n\nWhen increasing the size of the inserted strings, things began to change. When I \nbumped the length of the strings to 70 and 200 characters, the multi-value \ninsert slowed down considerably. Both solutions now took around 9 seconds.\n\nThe multi-value solution ranged between 7 and 9 seconds, whereas the \"regular\" \ninsert syntax was pretty constant at roughly 9 seconds (I ran it about 15 times).\n\nSo it seems, that as the size of the row increases the multi-value insert loses \nits head-start compared to the \"regular\" insert.\n\nI also played around with batch size. Going beyond 200 didn't make a big \ndifference.\n\nFor the JDBC batch, the batch size was the number of rows after which I called \nexecuteBatch() for the multi-value insert, this was the number of tuples I sent \nin a single INSERT statement.\n\nThe multi-value statement seems to perform better with lower \"batch\" sizes \n(~10-50) whereas the JDBC batching seems to be fastest with about 200 statements \nper batch.\n\n\nThomas\n\n", "msg_date": "Thu, 23 Apr 2009 00:25:52 +0200", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, 22 Apr 2009, Glenn Maynard wrote:\n\n> On Wed, Apr 22, 2009 at 4:53 PM, James Mansion\n> <[email protected]> wrote:\n>> And I'm disagreeing with that.  
Single row is a given, but I think you'll\n>> find it pays to have one\n>> round trip if at all possible and invoking multiple prepared statements can\n>> work against this.\n>\n> You're talking about round-trips to a *local* server, on the same\n> system, not a dedicated server with network round-trips, right?\n\nthe use-case for a production setup for logging servers would probably \ninclude a network hop.\n\nDavid Lang\n\n> Blocking round trips to another process on the same server should be\n> fairly cheap--that is, writing to a socket (or pipe, or localhost TCP\n> connection) where the other side is listening for it; and then\n> blocking in return for the response. The act of writing to an FD that\n> another process is waiting for will make the kernel mark the process\n> as \"ready to wake up\" immediately, and the act of blocking for the\n> response will kick the scheduler to some waiting process, so as long\n> as there isn't something else to compete for CPU for, each write/read\n> will wake up the other process instantly. There's a task switching\n> cost, but that's too small to be relevant here.\n>\n> Doing 1000000 local round trips, over a pipe: 5.25s (5 *microseconds*\n> each), code attached. The cost *should* be essentially identical for\n> any local transport (pipes, named pipes, local TCP connections), since\n> the underlying scheduler mechanisms are the same.\n>\n> That's not to say that batching inserts doesn't make a difference--it\n> clearly does, and it would probably be a much larger difference with\n> actual network round-trips--but round-trips to a local server aren't\n> inherently slow. I'd be (casually) interested in knowing what the\n> actual costs are behind an SQL command round-trip (where the command\n> isn't blocking on I/O, eg. 
an INSERT inside a transaction, with no I/O\n> for things like constraint checks that need to be done before the\n> command can return success).\n>\n>\n>From [email protected] Wed Apr 22 22:48:35 2009\nReceived: from localhost (unknown [200.46.204.183])\n\tby mail.postgresql.org (Postfix) with ESMTP id B2E12632E70\n\tfor <[email protected]>; Wed, 22 Apr 2009 22:48:34 -0300 (ADT)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by localhost (mx1.hub.org [200.46.204.183]) (amavisd-maia, port 10024)\n with ESMTP id 10845-09\n for <[email protected]>;\n Wed, 22 Apr 2009 22:48:32 -0300 (ADT)\nX-Greylist: from auto-whitelisted by SQLgrey-1.7.6\nReceived: from tamriel.snowman.net (tamriel.snowman.net [72.66.115.51])\n\tby mail.postgresql.org (Postfix) with ESMTP id 55E82632292\n\tfor <[email protected]>; Wed, 22 Apr 2009 22:48:33 -0300 (ADT)\nReceived: by tamriel.snowman.net (Postfix, from userid 1000)\n\tid 44F55228C7; Wed, 22 Apr 2009 21:48:31 -0400 (EDT)\nDate: Wed, 22 Apr 2009 21:48:31 -0400\nFrom: Stephen Frost <[email protected]>\nTo: Glenn Maynard <[email protected]>\nCc: [email protected]\nSubject: Re: performance for high-volume log insertion\nMessage-ID: <[email protected]>\nReferences: <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]>\nMIME-Version: 1.0\nContent-Type: multipart/signed; micalg=pgp-sha1;\n\tprotocol=\"application/pgp-signature\"; boundary=\"12Mx6XhZmCw9eEgg\"\nContent-Disposition: inline\nIn-Reply-To: <[email protected]>\nX-Editor: Vim http://www.vim.org/\nX-Info: http://www.snowman.net\nX-Operating-System: Linux/2.6.26-1-amd64 (x86_64)\nX-Uptime: 21:28:07 up 101 days, 2:26, 23 users, load average: 0.00, 0.04,\n\t0.05\nUser-Agent: Mutt/1.5.18 (2008-05-17)\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits=0 tagged_above=0 required=5 tests=none\nX-Spam-Level: \nX-Archive-Number: 200904/367\nX-Sequence-Number: 33734\n\n\n--12Mx6XhZmCw9eEgg\nContent-Type: text/plain; charset=iso-8859-1\nContent-Disposition: inline\nContent-Transfer-Encoding: quoted-printable\n\n* Glenn Maynard ([email protected]) wrote:\n> On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost <[email protected]> wrote:\n> > For a single column table, I wouldn't expect much either. =A0With more\n> > columns I think it would be a larger improvement.\n>=20\n> Maybe. I'm not sure why parsing \"(1,2,3,4,5)\" in an EXECUTE parameter\n> should be faster than parsing the exact same thing in an INSERT,\n> though.\n\nErm.. Prepared queries is about using PQexecPrepared(), not about\nsending a text string as an SQL EXECUTE(). PQexecPrepared takes an\narray of arguments. That gets translated into a Bind command in the\nprotocol with a defined number of parameters and a length for each\nparameter being passed. That removes any need for scanning/parsing the\nstring sent to the backend. That's the savings I'm referring to.\n\nIf you weren't using PQexecPrepared() (and using psql, you wouldn't\nbe..), then the difference you saw was more likely planning cost.\n\n> Of course, you still need to get it in that format. Be careful to\n> include any parsing you're doing to create the binary date in the\n> benchmarks. Inevitably, at least part of the difference will be costs\n> simply moving from the psql process to your own.\n\nSure. 
What I recall from when I was working on this is that it wasn't\nterribly hard to go from unix timestamps (epoch from 1970) to a PG\ntimestamp format (and there was nice example code in the backend) in\nterms of CPU time.\n\n\tThanks,\n\n\t\tStephen\n\n--12Mx6XhZmCw9eEgg\nContent-Type: application/pgp-signature; name=\"signature.asc\"\nContent-Description: Digital signature\nContent-Disposition: inline\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\niEYEARECAAYFAknvyO8ACgkQrzgMPqB3kigw8QCgl44vCriX5XkLYMcML36TAVrv\nM0sAnjXQaAj9NDcjwkzopKlcO62U+TnR\n=kczC\n-----END PGP SIGNATURE-----\n\n--12Mx6XhZmCw9eEgg--\n", "msg_date": "Wed, 22 Apr 2009 15:46:30 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* [email protected] ([email protected]) wrote:\n> On Wed, 22 Apr 2009, Glenn Maynard wrote:\n>> You're talking about round-trips to a *local* server, on the same\n>> system, not a dedicated server with network round-trips, right?\n>\n> the use-case for a production setup for logging servers would probably \n> include a network hop.\n\nSure, but there's a big difference between a rtt of 0.041ms (my dinky\nhome network) and 20ms (from my home network in Virginia to Boston). I\nwasn't intending to discount latency, it can be a concen, but I doubt\nit'll be 20ms for most people..\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 22 Apr 2009 21:50:36 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, Apr 22, 2009 at 9:48 PM, Stephen Frost <[email protected]> wrote:\n> Erm..  Prepared queries is about using PQexecPrepared(), not about\n> sending a text string as an SQL EXECUTE().  PQexecPrepared takes an\n> array of arguments.  That gets translated into a Bind command in the\n> protocol with a defined number of parameters and a length for each\n> parameter being passed.  That removes any need for scanning/parsing the\n> string sent to the backend.  That's the savings I'm referring to.\n\nI'd suggest this be mentioned in the sql-prepare documentation, then,\nbecause that documentation only discusses using prepared statements to\neliminate redundant planning costs. (I'm sure it's mentioned in the\nAPI docs and elsewhere, but if it's a major intended use of PREPARE,\nthe PREPARE documentation should make note of it.)\n\n-- \nGlenn Maynard\n", "msg_date": "Wed, 22 Apr 2009 22:20:41 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Wed, 22 Apr 2009, Stephen Frost wrote:\n\n> * Glenn Maynard ([email protected]) wrote:\n>> On Wed, Apr 22, 2009 at 5:51 PM, Stephen Frost <[email protected]> wrote:\n>>> For a single column table, I wouldn't expect much either. �With more\n>>> columns I think it would be a larger improvement.\n>>\n>> Maybe. I'm not sure why parsing \"(1,2,3,4,5)\" in an EXECUTE parameter\n>> should be faster than parsing the exact same thing in an INSERT,\n>> though.\n>\n> Erm.. Prepared queries is about using PQexecPrepared(), not about\n> sending a text string as an SQL EXECUTE(). PQexecPrepared takes an\n> array of arguments. That gets translated into a Bind command in the\n> protocol with a defined number of parameters and a length for each\n> parameter being passed. 
That removes any need for scanning/parsing the\n> string sent to the backend. That's the savings I'm referring to.\n\nare you sure? I thought that what goes out over the wire is always text.\n\nDavid Lang\n\n> If you weren't using PQexecPrepared() (and using psql, you wouldn't\n> be..), then the difference you saw was more likely planning cost.\n>\n>> Of course, you still need to get it in that format. Be careful to\n>> include any parsing you're doing to create the binary date in the\n>> benchmarks. Inevitably, at least part of the difference will be costs\n>> simply moving from the psql process to your own.\n>\n> Sure. What I recall from when I was working on this is that it wasn't\n> terribly hard to go from unix timestamps (epoch from 1970) to a PG\n> timestamp format (and there was nice example code in the backend) in\n> terms of CPU time.\n>\n> \tThanks,\n>\n> \t\tStephen\n>\n>From [email protected] Thu Apr 23 08:04:40 2009\nReceived: from localhost (unknown [200.46.204.183])\n\tby mail.postgresql.org (Postfix) with ESMTP id 4789B63254D\n\tfor <[email protected]>; Thu, 23 Apr 2009 08:04:38 -0300 (ADT)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by localhost (mx1.hub.org [200.46.204.183]) (amavisd-maia, port 10024)\n with ESMTP id 78570-10\n for <[email protected]>;\n Thu, 23 Apr 2009 08:04:36 -0300 (ADT)\nX-Greylist: from auto-whitelisted by SQLgrey-1.7.6\nReceived: from tamriel.snowman.net (tamriel.snowman.net [72.66.115.51])\n\tby mail.postgresql.org (Postfix) with ESMTP id 8FBA66323BD\n\tfor <[email protected]>; Thu, 23 Apr 2009 08:04:36 -0300 (ADT)\nReceived: by tamriel.snowman.net (Postfix, from userid 1000)\n\tid A581622238; Thu, 23 Apr 2009 07:04:34 -0400 (EDT)\nDate: Thu, 23 Apr 2009 07:04:34 -0400\nFrom: Stephen Frost <[email protected]>\nTo: [email protected]\nCc: Glenn Maynard <[email protected]>,\n\[email protected]\nSubject: Re: performance for high-volume log insertion\nMessage-ID: <[email protected]>\nReferences: <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]> <[email protected]>\nMIME-Version: 1.0\nContent-Type: multipart/signed; micalg=pgp-sha1;\n\tprotocol=\"application/pgp-signature\"; boundary=\"7p2pcO0s0ZIbU/PH\"\nContent-Disposition: inline\nIn-Reply-To: <[email protected]>\nX-Editor: Vim http://www.vim.org/\nX-Info: http://www.snowman.net\nX-Operating-System: Linux/2.6.26-1-amd64 (x86_64)\nX-Uptime: 06:48:25 up 101 days, 11:46, 19 users, load average: 0.01, 0.05,\n\t0.06\nUser-Agent: Mutt/1.5.18 (2008-05-17)\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits=0 tagged_above=0 required=5 tests=none\nX-Spam-Level: \nX-Archive-Number: 200904/371\nX-Sequence-Number: 33738\n\n\n--7p2pcO0s0ZIbU/PH\nContent-Type: text/plain; charset=us-ascii\nContent-Disposition: inline\n\n* [email protected] ([email protected]) wrote:\n> On Wed, 22 Apr 2009, Stephen Frost wrote:\n>> Erm.. Prepared queries is about using PQexecPrepared(), not about\n>> sending a text string as an SQL EXECUTE(). PQexecPrepared takes an\n>> array of arguments. That gets translated into a Bind command in the\n>> protocol with a defined number of parameters and a length for each\n>> parameter being passed. That removes any need for scanning/parsing the\n>> string sent to the backend. That's the savings I'm referring to.\n>\n> are you sure? 
I thought that what goes out over the wire is always text.\n\nWow, why is there so much confusion and misunderstanding about this?\n\n*psql* sends everything to the backend as text (except perhaps COPY\nBINARY.. but that's because the user handles it), but if you're using\nlibpq, PQexecPrepared, and protocol 3.0 (any recent PG version), it's\ngoing to use the Parse/Bind protocol-level commands. To make it perhaps\nmore clear, here's a snippet from the libpq code for PQsendQueryGuts(),\nwhich is the work-horse called by PQexecPrepared:\n\n\t/*\n * We will send Parse (if needed), Bind, Describe Portal, Execute, Sync,\n * using specified statement name and the unnamed portal.\n\t */\n[...]\n\n /* Construct the Bind message */\n if (pqPutMsgStart('B', false, conn) < 0 ||\n pqPuts(\"\", conn) < 0 ||\n pqPuts(stmtName, conn) < 0)\n goto sendFailed;\n\n /* Send parameter formats */\n[...]\n-- No param formats included, let the backend know\n if (pqPutInt(0, 2, conn) < 0)\n goto sendFailed;\n\n-- Tell the backend the number of parameters to expect\n if (pqPutInt(nParams, 2, conn) < 0)\n goto sendFailed;\n\n /* Send parameters */\n for (i = 0; i < nParams; i++)\n[...]\n-- Pull the length from the caller-provided for each param\n nbytes = paramLengths[i];\n[...]\n-- Send the length, then the param, over the wire\n if (pqPutInt(nbytes, 4, conn) < 0 ||\n pqPutnchar(paramValues[i], nbytes, conn) < 0)\n goto sendFailed;\n[...]\n-- All done, send finish indicator\n if (pqPutInt(1, 2, conn) < 0 ||\n pqPutInt(resultFormat, 2, conn))\n goto sendFailed;\n if (pqPutMsgEnd(conn) < 0)\n goto sendFailed;\n\n /* construct the Describe Portal message */\n if (pqPutMsgStart('D', false, conn) < 0 ||\n pqPutc('P', conn) < 0 ||\n pqPuts(\"\", conn) < 0 ||\n pqPutMsgEnd(conn) < 0)\n goto sendFailed;\n\n /* construct the Execute message */\n if (pqPutMsgStart('E', false, conn) < 0 ||\n pqPuts(\"\", conn) < 0 ||\n pqPutInt(0, 4, conn) < 0 ||\n pqPutMsgEnd(conn) < 0)\n goto sendFailed;\n\n[...]\n-- clear everything out\n if (pqFlush(conn) < 0)\n goto sendFailed;\n\nAny other questions?\n\n\tThanks,\n\n\t\tStephen\n\n--7p2pcO0s0ZIbU/PH\nContent-Type: application/pgp-signature; name=\"signature.asc\"\nContent-Description: Digital signature\nContent-Disposition: inline\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\n\niEYEARECAAYFAknwS0IACgkQrzgMPqB3kiizaQCfcWZ6JbSwo9wgt95YTxJy6awn\nVacAnAxdm4gzJsgS0ArWJd+Iii0ZPxdQ\n=AqNi\n-----END PGP SIGNATURE-----\n\n--7p2pcO0s0ZIbU/PH--\n", "msg_date": "Wed, 22 Apr 2009 21:56:51 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "* Glenn Maynard ([email protected]) wrote:\n> I'd suggest this be mentioned in the sql-prepare documentation, then,\n> because that documentation only discusses using prepared statements to\n> eliminate redundant planning costs. (I'm sure it's mentioned in the\n> API docs and elsewhere, but if it's a major intended use of PREPARE,\n> the PREPARE documentation should make note of it.)\n\nArgh. Perhaps the problem is that it's somewhat 'overloaded'. PG\nsupports *both* SQL-level PREPARE/EXECUTE commands and the more\ntraditional (well, in my view anyway...) API/protocol of PQprepare() and\nPQexecPrepared(). 
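\nThe SQL-level pair is what you'd type at psql, along these lines (the table and values here are made up just to show the shape of it):\n\n  PREPARE log_ins (timestamptz, text, text) AS\n    INSERT INTO log VALUES ($1, $2, $3);\n  EXECUTE log_ins (now(), 'host1', 'some log line');\n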
When using the API/protocol, you don't actually\nexplicitly call the SQL 'PREPARE blah AS INSERT INTO', you just call\nPQprepare() with 'INSERT INTO blah VALUES ($1, $2, $3);' and then call\nPQexecPrepared() later.\n\nThat's the reason it's not documented in the SQL-level PREPARE docs,\nanyway. I'm not against adding some kind of reference there, but it's\nnot quite the way you think it is..\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 23 Apr 2009 07:11:32 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "\n\nOn Thu, 23 Apr 2009, Thomas Kellerer wrote:\n\n> Out of curiosity I did some tests through JDBC.\n>\n> Using a single-column (integer) table, re-using a prepared statement \n> took about 7 seconds to insert 100000 rows with JDBC's batch interface \n> and a batch size of 1000\n>\n\nAs a note for non-JDBC users, the JDBC driver's batch interface allows \nexecuting multiple statements in a single network roundtrip. This is \nsomething you can't get in libpq, so beware of this for comparison's sake.\n\n> I also played around with batch size. Going beyond 200 didn't make a big \n> difference.\n>\n\nDespite the size of the batch passed to the JDBC driver, the driver breaks \nit up into internal sub-batch sizes of 256 to send to the server. It does \nthis to avoid network deadlocks from sending too much data to the server \nwithout reading any in return. If the driver was written differently it \ncould handle this better and send the full batch size, but at the moment \nthat's not possible and we're hoping the gains beyond this size aren't too \nlarge.\n\nKris Jurka\n", "msg_date": "Sun, 26 Apr 2009 13:07:56 -0400 (EDT)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "Kris Jurka wrote on 26.04.2009 19:07:\n> Despite the size of the batch passed to the JDBC driver, the driver \n> breaks it up into internal sub-batch sizes of 256 to send to the \n> server. It does this to avoid network deadlocks from sending too much \n> data to the server without reading any in return. If the driver was \n> written differently it could handle this better and send the full batch \n> size, but at the moment that's not possible and we're hoping the gains \n> beyond this size aren't too large.\n\nAh, thanks for the info.\nI have seen this behaviour with other DBMS as well.\n\nGoing beyond ~200-500 doesn't improve the performance on Oracle or SQL Server as \nwell.\n\nSo I guess PG wouldn't really benefit that much from a different implementation \nin the driver :)\n\nThomas\n\n\n\n", "msg_date": "Sun, 26 Apr 2009 19:28:23 +0200", "msg_from": "Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka <[email protected]> wrote:\n>\n>\n> On Thu, 23 Apr 2009, Thomas Kellerer wrote:\n>\n>> Out of curiosity I did some tests through JDBC.\n>>\n>> Using a single-column (integer) table, re-using a prepared statement took\n>> about 7 seconds to insert 100000 rows with JDBC's batch interface and a\n>> batch size of 1000\n>>\n>\n> As a note for non-JDBC users, the JDBC driver's batch interface allows\n> executing multiple statements in a single network roundtrip.  This is\n> something you can't get in libpq, so beware of this for comparison's sake.\n\nReally? 
I thought that executing statements like so:\n\nselect * from a;insert ...;delete;\n\nin psql / libpq would execute them all in one trip.\n", "msg_date": "Mon, 27 Apr 2009 00:29:19 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "Scott Marlowe wrote:\n> On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka <[email protected]> wrote:\n>>\n>> As a note for non-JDBC users, the JDBC driver's batch interface allows\n>> executing multiple statements in a single network roundtrip. This is\n>> something you can't get in libpq, so beware of this for comparison's sake..\n> \n> Really? I thought that executing statements like so:\n> \n> select * from a;insert ...;delete;\n> \n> in psql / libpq would execute them all in one trip.\n\nRight, but those aren't prepared. I suppose it's possible to issue a \nprepare and then issue a batch of comma separated \"execute\" statements, \nbut that's not exactly a natural interface.\n\nKris Jurka\n", "msg_date": "Sun, 26 Apr 2009 23:45:08 -0700", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Mon, Apr 27, 2009 at 12:45 AM, Kris Jurka <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka <[email protected]> wrote:\n>>>\n>>> As a note for non-JDBC users, the JDBC driver's batch interface allows\n>>> executing multiple statements in a single network roundtrip.  This is\n>>> something you can't get in libpq, so beware of this for comparison's\n>>> sake..\n>>\n>> Really?  I thought that executing statements like so:\n>>\n>> select * from a;insert ...;delete;\n>>\n>> in psql / libpq would execute them all in one trip.\n>\n> Right, but those aren't prepared.  I suppose it's possible to issue a\n> prepare and then issue a batch of comma separated \"execute\" statements, but\n> that's not exactly a natural interface.\n\nOh right,. Sorry, didn't realize you were talking about prepared statements.\n", "msg_date": "Mon, 27 Apr 2009 01:07:03 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Sun, 26 Apr 2009, Kris Jurka wrote:\n\n> Scott Marlowe wrote:\n>> On Sun, Apr 26, 2009 at 11:07 AM, Kris Jurka <[email protected]> wrote:\n>>> \n>>> As a note for non-JDBC users, the JDBC driver's batch interface allows\n>>> executing multiple statements in a single network roundtrip. This is\n>>> something you can't get in libpq, so beware of this for comparison's \n>>> sake..\n>> \n>> Really? I thought that executing statements like so:\n>> \n>> select * from a;insert ...;delete;\n>> \n>> in psql / libpq would execute them all in one trip.\n>\n> Right, but those aren't prepared. 
I suppose it's possible to issue a prepare \n> and then issue a batch of comma separated \"execute\" statements, but that's \n> not exactly a natural interface.\n\nfor the task we are discussing here (log inserting) why wouldn't it be \nreasonable to have a prepared insert and then do begin;execute...;end to \ndo a batch of them at once.\n\nDavid Lang\n", "msg_date": "Mon, 27 Apr 2009 00:14:38 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "\n> Blocking round trips to another process on the same server should be\n> fairly cheap--that is, writing to a socket (or pipe, or localhost TCP\n> connection) where the other side is listening for it; and then\n> blocking in return for the response. The act of writing to an FD that\n> another process is waiting for will make the kernel mark the process\n> as \"ready to wake up\" immediately, and the act of blocking for the\n> response will kick the scheduler to some waiting process, so as long\n> as there isn't something else to compete for CPU for, each write/read\n> will wake up the other process instantly. There's a task switching\n> cost, but that's too small to be relevant here.\n>\n> Doing 1000000 local round trips, over a pipe: 5.25s (5 *microseconds*\n> each), code attached. The cost *should* be essentially identical for\n> any local transport (pipes, named pipes, local TCP connections), since\n> the underlying scheduler mechanisms are the same.\n\n\tRoundtrips can be quite fast but they have a hidden problem, which is \nthat everything gets serialized.\n\tThis means if you have a process that generates data to insert, and a \npostgres process, and 2 cores on your CPU, you will never use more than 1 \ncore, because both are waiting on each other.\n\tPipelining is a way to solve this...\n\tIn the ideal case, if postgres is as fast as the data-generating process, \neach would use 1 core, yielding 2x speedup.\n\tOf course if one of the processes is like 10x faster than the other, it \ndoesn't matter.\n\n", "msg_date": "Sat, 02 May 2009 02:29:40 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Sat, 2 May 2009, PFC wrote:\n\n>> Blocking round trips to another process on the same server should be\n>> fairly cheap--that is, writing to a socket (or pipe, or localhost TCP\n>> connection) where the other side is listening for it; and then\n>> blocking in return for the response. The act of writing to an FD that\n>> another process is waiting for will make the kernel mark the process\n>> as \"ready to wake up\" immediately, and the act of blocking for the\n>> response will kick the scheduler to some waiting process, so as long\n>> as there isn't something else to compete for CPU for, each write/read\n>> will wake up the other process instantly. There's a task switching\n>> cost, but that's too small to be relevant here.\n>> \n>> Doing 1000000 local round trips, over a pipe: 5.25s (5 *microseconds*\n>> each), code attached. 
The cost *should* be essentially identical for\n>> any local transport (pipes, named pipes, local TCP connections), since\n>> the underlying scheduler mechanisms are the same.\n>\n> \tRoundtrips can be quite fast but they have a hidden problem, which is \n> that everything gets serialized.\n> \tThis means if you have a process that generates data to insert, and a \n> postgres process, and 2 cores on your CPU, you will never use more than 1 \n> core, because both are waiting on each other.\n> \tPipelining is a way to solve this...\n> \tIn the ideal case, if postgres is as fast as the data-generating \n> process, each would use 1 core, yielding 2x speedup.\n> \tOf course if one of the processes is like 10x faster than the other, \n> it doesn't matter.\n\nin the case of rsyslog there are config options to allow multiple \nthreads to be working on doing the inserts, so it doesn't need to be \nserialized as badly as you are fearing (there is locking involved, so it \ndoesn't scale perfectly)\n\nDavid Lang\n", "msg_date": "Fri, 1 May 2009 17:49:38 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: performance for high-volume log insertion" }, { "msg_contents": "On Fri, May 1, 2009 at 8:29 PM, PFC <[email protected]> wrote:\n>        Roundtrips can be quite fast but they have a hidden problem, which is\n> that everything gets serialized.\n\nThe client and server will serialize, but what usually matters most is\navoiding serializing against disk I/O--and that's why write-back\ncaching exists. There's still a benefit to pipelining (not everything\nthe db might need to read to complete the write will always be in\ncache), but if everything was being serialized it'd be an order of\nmagnitude worse. That's why running each insert in a separate\ntransaction is so much slower; in that case, it *will* serialize\nagainst the disk (by default).\n\n-- \nGlenn Maynard\n", "msg_date": "Sat, 2 May 2009 00:13:19 -0400", "msg_from": "Glenn Maynard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance for high-volume log insertion" } ]
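A minimal SQL-level sketch of the batching pattern discussed at the end of this thread (one prepared INSERT executed many times inside a single transaction). The table and column names below are hypothetical, purely for illustration; in practice rsyslog or any other client would normally drive the same pattern through the driver's prepared-statement API (PQprepare()/PQexecPrepared() in libpq, PreparedStatement batches in JDBC) rather than literal PREPARE/EXECUTE statements:

BEGIN;
PREPARE log_insert (timestamptz, int, text) AS
    INSERT INTO log_entries (logged_at, severity, message)
    VALUES ($1, $2, $3);
EXECUTE log_insert (now(), 6, 'first event');
EXECUTE log_insert (now(), 3, 'second event');
-- ... one EXECUTE per log record in the batch ...
COMMIT;
-- The prepared statement is session-scoped, so it can be reused for the
-- next batch; DEALLOCATE log_insert; releases it when no longer needed.

Because all of the inserts commit together, the whole batch pays for a single commit fsync instead of one per row, which is where much of the speedup over per-row autocommit comes from.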
[ { "msg_contents": "I have a database with two tables that relate similar data, and a view\nwhich projects and combines the data from these two tables in order to\naccess them both in a consistent manner. With enough information, the\napplication can specifically choose to query from one table or the\nother, but in the more general case the data could come from either\ntable, so I need to query the view. When I join against the view (or\nan equivalent subselect), however, it looks like the joining condition\nis not pushed down into the individual components of the union that\ndefines the view. This leads to a significant performance degradation\nwhen using the view; I ask the list for help in resolving this\nproblem. The remainder of this email digs into this problem in\ndetail.\n\n(If you were interested in background on this database, it implements\na backing store for a higher level RDF database, specifically for the\nRDFLib project. I would be happy to talk more about this application,\nor the corresponding database design issues, with anyone who might be\ninterested, in whatever forum would be appropriate.)\n\nI begin with the poorly performing query, which follows this\nparagraph. This query joins one of the tables to the view, and using\n'explain' on this query gives the query plan listed below the query.\nNote that in this query plan, the join filter happens after (above)\nthe collection of matching rows from each of the parts of the UNION.\n\n<query>\nselect * from\n relations as component_0_statements\ncross join\n URI_or_literal_object as component_1_statements\nwhere\ncomponent_0_statements.predicate = -2875059751320018987 and\ncomponent_0_statements.object = -2827607394936393903 and\ncomponent_1_statements.subject = component_0_statements.subject and\ncomponent_1_statements.predicate = -2875059751320018987\n</query>\n\n<query-plan>\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=96.31..36201.57 rows=1 width=128)\n Join Filter: (component_0_statements.subject = literalproperties.subject)\n -> Index Scan using relations_poscindex on relations\ncomponent_0_statements (cost=0.00..9.96 rows=1 width=40)\n Index Cond: ((predicate = (-2875059751320018987)::bigint) AND\n(object = (-2827607394936393903)::bigint))\n -> Append (cost=96.31..36044.62 rows=11759 width=88)\n -> Bitmap Heap Scan on literalproperties\n(cost=96.31..16190.72 rows=5052 width=49)\n Recheck Cond: (literalproperties.predicate =\n(-2875059751320018987)::bigint)\n -> Bitmap Index Scan on\nliteralproperties_predicateindex (cost=0.00..95.04 rows=5052 width=0)\n Index Cond: (literalproperties.predicate =\n(-2875059751320018987)::bigint)\n -> Bitmap Heap Scan on relations (cost=128.99..19736.31\nrows=6707 width=40)\n Recheck Cond: (relations.predicate =\n(-2875059751320018987)::bigint)\n -> Bitmap Index Scan on relations_predicateindex\n(cost=0.00..127.32 rows=6707 width=0)\n Index Cond: (relations.predicate =\n(-2875059751320018987)::bigint)\n(13 rows)\n</query-plan>\n\nAs it turns out, all of the results are in fact from the 'relations'\ntable, so we get the same results if we query that table instead of\nthe more general view. The corresponding query follows this\nparagraph, and its query plan immediately follows it. 
Note that in\nthis query plan, the join condition is pushed down to the leaf node as\nan Index Condition, which seems to be the main source of the dramatic\nperformance difference.\n\n<query>\nselect * from\n relations as component_0_statements\ncross join\n relations as component_1_statements\nwhere\ncomponent_0_statements.predicate = -2875059751320018987 and\ncomponent_0_statements.object = -2827607394936393903 and\ncomponent_1_statements.subject = component_0_statements.subject and\ncomponent_1_statements.predicate = -2875059751320018987\n</query>\n\n<query-plan>\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..26.11 rows=1 width=80)\n -> Index Scan using relations_poscindex on relations\ncomponent_0_statements (cost=0.00..9.96 rows=1 width=40)\n Index Cond: ((predicate = (-2875059751320018987)::bigint) AND\n(object = (-2827607394936393903)::bigint))\n -> Index Scan using relations_subjectindex on relations\ncomponent_1_statements (cost=0.00..16.13 rows=1 width=40)\n Index Cond: (component_1_statements.subject =\ncomponent_0_statements.subject)\n Filter: (component_1_statements.predicate =\n(-2875059751320018987)::bigint)\n(6 rows)\n</query-plan>\n\nMy research led me to a post by Tom Lane describing the conditions in\nwhich the WHERE conditions cannot be pushed down to the UNION parts:\n<http://archives.postgresql.org/pgsql-performance/2007-11/msg00041.php>.\nI refactored the UNION definition slightly to attempt to bring all\nthe column types into alignment, as that seemed like it might be a\nblocker, but the problem persists. It didn't look like the other\nconditions would hold in my case, but I certainly could be wrong. For\nreference, the definitions of the two tables and the view are listed\nbelow. 
The 'literalproperties' tables has 8229098 rows, and the\n'relations' table has 6960820 rows.\n\n# \\d literalproperties\n Table \"public.literalproperties\"\n Column | Type | Modifiers\n----------------+----------------------+-----------\n subject | bigint | not null\n subject_term | character(1) | not null\n predicate | bigint | not null\n predicate_term | character(1) | not null\n object | bigint | not null\n context | bigint | not null\n context_term | character(1) | not null\n data_type | bigint |\n language | character varying(3) |\nIndexes:\n \"literalproperties_poscindex\" UNIQUE, btree (predicate, object,\nsubject, context, data_type, language)\n \"literalproperties_context_termindex\" btree (context_term)\n \"literalproperties_contextindex\" btree (context)\n \"literalproperties_data_typeindex\" btree (data_type)\n \"literalproperties_languageindex\" btree (language)\n \"literalproperties_objectindex\" btree (object)\n \"literalproperties_predicate_termindex\" btree (predicate_term)\n \"literalproperties_predicateindex\" btree (predicate)\n \"literalproperties_subject_termindex\" btree (subject_term)\n \"literalproperties_subjectindex\" btree (subject)\n\n# \\d relations;\n Table \"public.relations\"\n Column | Type | Modifiers\n----------------+--------------+-----------\n subject | bigint | not null\n subject_term | character(1) | not null\n predicate | bigint | not null\n predicate_term | character(1) | not null\n object | bigint | not null\n object_term | character(1) | not null\n context | bigint | not null\n context_term | character(1) | not null\nIndexes:\n \"relations_poscindex\" UNIQUE, btree (predicate, object, subject, context)\n \"relations_context_termindex\" btree (context_term)\n \"relations_contextindex\" btree (context)\n \"relations_object_termindex\" btree (object_term)\n \"relations_objectindex\" btree (object)\n \"relations_predicate_termindex\" btree (predicate_term)\n \"relations_predicateindex\" btree (predicate)\n \"relations_subject_termindex\" btree (subject_term)\n \"relations_subjectindex\" btree (subject)\n\n# \\d uri_or_literal_object\n View \"public.uri_or_literal_object\"\n Column | Type | Modifiers\n----------------+----------------------+-----------\n subject | bigint |\n subject_term | character(1) |\n predicate | bigint |\n predicate_term | character(1) |\n object | bigint |\n object_term | character(1) |\n context | bigint |\n context_term | character(1) |\n data_type | bigint |\n language | character varying(3) |\nView definition:\n SELECT literalproperties.subject, literalproperties.subject_term,\nliteralproperties.predicate, literalproperties.predicate_term,\nliteralproperties.object, 'L'::character(1) AS object_term,\nliteralproperties.context, literalproperties.context_term,\nliteralproperties.data_type, literalproperties.language\n FROM literalproperties\nUNION ALL\n SELECT relations.subject, relations.subject_term,\nrelations.predicate, relations.predicate_term, relations.object,\nrelations.object_term, relations.context, relations.context_term,\nNULL::bigint AS data_type, NULL::character varying(3) AS language\n FROM relations;\n\nDoes anyone have any ideas about how I could better optimize joins\nagainst a union (either with a view or a subquery) like this?\n\nThanks, and take care,\n\n John L. Clark\n", "msg_date": "Tue, 21 Apr 2009 10:21:34 -0400", "msg_from": "\"John L. Clark\" <[email protected]>", "msg_from_op": true, "msg_subject": "WHERE condition not being pushed down to union parts" }, { "msg_contents": "\"John L. 
Clark\" <[email protected]> writes:\n> I have a database with two tables that relate similar data, and a view\n> which projects and combines the data from these two tables in order to\n> access them both in a consistent manner. With enough information, the\n> application can specifically choose to query from one table or the\n> other, but in the more general case the data could come from either\n> table, so I need to query the view. When I join against the view (or\n> an equivalent subselect), however, it looks like the joining condition\n> is not pushed down into the individual components of the union that\n> defines the view.\n\nYou never mentioned what PG version you are using, but I'm betting\nit's 8.1.x. This should work the way you are expecting in 8.2 and up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Apr 2009 10:35:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE condition not being pushed down to union parts " }, { "msg_contents": "On Tue, Apr 21, 2009 at 10:35 AM, Tom Lane <[email protected]> wrote:\n> You never mentioned what PG version you are using, but I'm betting\n> it's 8.1.x. This should work the way you are expecting in 8.2 and up.\n\nNaturally, I would forget (at least) one critical piece of information:\n\n$ pg_config --version\nPostgreSQL 8.3.7\n\nOther ideas?\n\nTake care,\n\n John L. Clark\n", "msg_date": "Tue, 21 Apr 2009 10:41:09 -0400", "msg_from": "\"John L. Clark\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WHERE condition not being pushed down to union parts" }, { "msg_contents": "\"John L. Clark\" <[email protected]> writes:\n> On Tue, Apr 21, 2009 at 10:35 AM, Tom Lane <[email protected]> wrote:\n>> You never mentioned what PG version you are using, but I'm betting\n>> it's 8.1.x. This should work the way you are expecting in 8.2 and up.\n\n> Naturally, I would forget (at least) one critical piece of information:\n\n> $ pg_config --version\n> PostgreSQL 8.3.7\n\nIn that case you're going to need to provide a reproducible test case,\n'cause it worksforme.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Apr 2009 10:50:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE condition not being pushed down to union parts " }, { "msg_contents": "On Tue, Apr 21, 2009 at 12:05 PM, John L. Clark <[email protected]> wrote:\n> On Tue, Apr 21, 2009 at 10:50 AM, Tom Lane <[email protected]> wrote:\n>> In that case you're going to need to provide a reproducible test case,\n>> 'cause it worksforme.\n>\n> Ok. I scaled back my example by just selecting 1000 \"random\" rows\n> from each of the component tables. The resulting database dump should\n> be attached to this email. I tried a very small subset (just 10\n> rows), but the resulting tables were small enough that the query plans\n> were changing to use scans. Note that I haven't actually run sample\n> queries with this smaller dataset. I have only been inspecting the\n> query plans of the two queries that I listed in my original message,\n> and the results are the same, except that the magnitude of the costs\n> are scaled down. This scaling leads to a smaller performance penalty,\n> but the query plan still shows that the join filter is still not being\n> pushed down in the case of the view (built from a union).\n\nI posted this earlier, but I haven't seen it come through the mailing\nlist, perhaps because of the attachment. 
I have also posted the\nattachment at <http://infinitesque.net/temp/union_performance_2009-04-21.postgresql.dump.gz>.\n The MD5 checksum is \"3942fee39318aa5d9f18ac2ef3c298cf\". If the\noriginal does end up coming through, I'm sorry about the redundant\npost.\n\nTake care,\n\n John L. Clark\n", "msg_date": "Tue, 21 Apr 2009 14:11:36 -0400", "msg_from": "\"John L. Clark\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WHERE condition not being pushed down to union parts" }, { "msg_contents": "\"John L. Clark\" <[email protected]> writes:\n> I posted this earlier, but I haven't seen it come through the mailing\n> list, perhaps because of the attachment. I have also posted the\n> attachment at <http://infinitesque.net/temp/union_performance_2009-04-21.postgresql.dump.gz>.\n\nAh. The problem is that your view contains constants in the UNION arms:\n\nCREATE VIEW uri_or_literal_object AS\n SELECT literalproperties.subject, literalproperties.subject_term, literalproperties.predicate, literalproperties.predicate_term, literalproperties.object, 'L'::character(1) AS object_term, literalproperties.context, literalproperties.context_term, literalproperties.data_type, literalproperties.language FROM literalproperties\nUNION ALL\n SELECT relations.subject, relations.subject_term, relations.predicate, relations.predicate_term, relations.object, relations.object_term, relations.context, relations.context_term, NULL::bigint AS data_type, NULL::character varying(3) AS language FROM relations;\n\nIn 8.2 and 8.3, the planner is only smart enough to generate\ninner-indexscan nestloop plans on UNIONs if all the elements of the\nSELECT lists are simple variables (that is, table columns).\n8.4 will be smarter about this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Apr 2009 15:58:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE condition not being pushed down to union parts " }, { "msg_contents": "On Tue, Apr 21, 2009 at 3:58 PM, Tom Lane <[email protected]> wrote:\n> Ah. The problem is that your view contains constants in the UNION arms:\n\n> In 8.2 and 8.3, the planner is only smart enough to generate\n> inner-indexscan nestloop plans on UNIONs if all the elements of the\n> SELECT lists are simple variables (that is, table columns).\n> 8.4 will be smarter about this.\n\nAh, and so it is! I installed 8.4beta1 and have loaded it with the\nbig database; it is pushing the index condition down to the parts of\nthe UNION, and my queries are now running MUCH faster. 
Here's the new\nquery plan for the query involving the UNION-constructed view:\n\n<query-plan>\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..53.32 rows=1083 width=80)\n Join Filter: (component_0_statements.subject = literalproperties.subject)\n -> Index Scan using relations_poscindex on relations\ncomponent_0_statements (cost=0.00..13.97 rows=2 width=40)\n Index Cond: ((predicate = (-2875059751320018987)::bigint) AND\n(object = (-2827607394936393903)::bigint))\n -> Append (cost=0.00..19.65 rows=2 width=60)\n -> Index Scan using literalproperties_subjectindex on\nliteralproperties (cost=0.00..10.05 rows=1 width=57)\n Index Cond: (literalproperties.subject =\ncomponent_0_statements.subject)\n Filter: (literalproperties.predicate =\n(-2875059751320018987)::bigint)\n -> Index Scan using relations_subjectindex on relations\n(cost=0.00..9.59 rows=1 width=64)\n Index Cond: (relations.subject = component_0_statements.subject)\n Filter: (relations.predicate = (-2875059751320018987)::bigint)\n(11 rows)\n</query-plan>\n\nThanks for your help, Tom. I am certainly amused and pleased that my\nexact use case is handled in the very next PostgreSQL release.\n\nTake care,\n\n John L. Clark\n", "msg_date": "Thu, 23 Apr 2009 12:09:04 -0400", "msg_from": "\"John L. Clark\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WHERE condition not being pushed down to union parts" }, { "msg_contents": "\"John L. Clark\" <[email protected]> writes:\n> Thanks for your help, Tom. I am certainly amused and pleased that my\n> exact use case is handled in the very next PostgreSQL release.\n\nWell, sir, your timing is excellent ;-). That's been a known problem\nfor quite some time, and it was only in this release cycle that it got\naddressed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Apr 2009 12:14:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WHERE condition not being pushed down to union parts " } ]
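For anyone who has to stay on 8.2 or 8.3, where the planner will not push the join into a UNION whose arms contain constants, one possible workaround (only a sketch, with the column list trimmed to a few of the bigint columns for brevity) is to distribute the join over the two underlying tables by hand, so that each branch can use its own subject index:

SELECT s0.subject, lp.predicate, lp.object, lp.context
FROM relations AS s0
JOIN literalproperties AS lp ON lp.subject = s0.subject
WHERE s0.predicate = -2875059751320018987
  AND s0.object = -2827607394936393903
  AND lp.predicate = -2875059751320018987
UNION ALL
SELECT s0.subject, r.predicate, r.object, r.context
FROM relations AS s0
JOIN relations AS r ON r.subject = s0.subject
WHERE s0.predicate = -2875059751320018987
  AND s0.object = -2827607394936393903
  AND r.predicate = -2875059751320018987;

Modulo the trimmed column list, this returns the same rows as the join against the view, but because the join is written inside each arm the constants in the view definition never get in the way of the index scans.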
[ { "msg_contents": "Hi,\n\nI'm having serious performance problems with my two plpgsql functions \n(lots of calculations inside, including distance (lat,long) using \nearthdistance etc).\n\nMy problem is: both functions are executing for hours (if not days) but \nresource consumption is low (I/O, CPU, memory), like 2-5% only. I'm a \nbit confused on what's going on there... locks? poor resources \nmanagement by Windows? I'm not really sure where the bottleneck might be.\n\nAny hints/recommendations where and how should I look for improvements?\n\nI'm running 8.2 on WinXP.\n\nI'm aware it's kind of hard to say without seeing function's body, but I \nthought I can give a try anyway.\n\nRegards,\nfoo\n", "msg_date": "Mon, 27 Apr 2009 17:39:38 +0800", "msg_from": "Wojtek <[email protected]>", "msg_from_op": true, "msg_subject": "plpgsql function running long, but resources consumption is very low" }, { "msg_contents": "Hello\n\nwithout source code we cannot help\n\nregards\nPavel Stehule\n\n\n2009/4/27 Wojtek <[email protected]>:\n> Hi,\n>\n> I'm having serious performance problems with my two plpgsql functions (lots\n> of calculations inside, including distance (lat,long) using earthdistance\n> etc).\n>\n> My problem is: both functions are executing for hours (if not days) but\n> resource consumption is low (I/O, CPU, memory), like 2-5% only. I'm a bit\n> confused on what's going on there... locks? poor resources management by\n> Windows? I'm not really sure where the bottleneck might be.\n>\n> Any hints/recommendations where and how should I look for improvements?\n>\n> I'm running 8.2 on WinXP.\n>\n> I'm aware it's kind of hard to say without seeing function's body, but I\n> thought I can give a try anyway.\n>\n> Regards,\n> foo\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 27 Apr 2009 11:56:12 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql function running long, but resources\n\tconsumption is very low" }, { "msg_contents": "\nOn 4/27/09 2:56 AM, \"Pavel Stehule\" <[email protected]> wrote:\n\n> Hello\n> \n> without source code we cannot help\n> \n\nThat's not true. We can only go so far without source, but the general\nproblem of \"what might the bottleneck be if it doesn't appear to be CPU or\ndisk\" can be investigated significantly without source code.\n\nThings to try:\nCheck networking stats. Is this running locally -- if so is it local pipe,\nlocalhost, or ip address? What tool / language / driver is the client?\n\nCheck pg_locks while it is running -- Observe the count and types of locks.\nFind the backend that is running, and report all the locks that that backend\nis associated with. Observe this multiple times and report if this is\nrelatively constant or changing, and if so, how.\n\n\nProvide perfmon stats during this time:\nCPU % (user, system)\nContext switch rate (System -> Context Switches / sec)\nDisk %time, iops, queue length, avg size -- per disk.\nNetwork bandwidth used, packets/sec, avg packet size.\n\nA bunch of the above, especially the CPU stuff, can be broken down per\nprocess or thread. 
Analyzing disk I/O is possible per process or thread as\nwell if something looks up there (with FileMon).\nThe network stuff can be broken down by client / port somewhat as well if\nnecessary.\nCheck out sysinternals.com for other tools that might be useful.\n\n\nIn all likelihood, the above will narrow this down but not solve it, but its\nbetter than nothing and might find it.\nIt would certainly be useful if those who have experienced such situations\nshared their solutions, or those who know more about the inner workings\nsuggested possible causes or provided links to more information (or old\nsimilar topics).\n\n\n> regards\n> Pavel Stehule\n> \n> \n> 2009/4/27 Wojtek <[email protected]>:\n>> Hi,\n>> \n>> I'm having serious performance problems with my two plpgsql functions (lots\n>> of calculations inside, including distance (lat,long) using earthdistance\n>> etc).\n>> \n>> My problem is: both functions are executing for hours (if not days) but\n>> resource consumption is low (I/O, CPU, memory), like 2-5% only. I'm a bit\n>> confused on what's going on there... locks? poor resources management by\n>> Windows? I'm not really sure where the bottleneck might be.\n>> \n>> Any hints/recommendations where and how should I look for improvements?\n>> \n>> I'm running 8.2 on WinXP.\n>> \n>> I'm aware it's kind of hard to say without seeing function's body, but I\n>> thought I can give a try anyway.\n>> \n>> Regards,\n>> foo\n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Mon, 27 Apr 2009 12:08:04 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql function running long, but resources\n\tconsumption is very low" } ]
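A concrete starting point for the pg_locks check suggested above, run from a second session while the slow function is executing. This is only a sketch, written against the 8.2 catalogs (procpid, current_query); current_query will only show the real statement if stats_command_string has not been turned off:

SELECT l.locktype, l.relation::regclass AS relation, l.mode, l.granted,
       a.procpid, a.waiting, a.current_query
FROM pg_locks AS l
LEFT JOIN pg_stat_activity AS a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;

Rows with granted = false show which backend is blocked and on what; if everything is granted and the backend is not waiting, the time is being spent inside the function itself rather than on locks.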
[ { "msg_contents": "Friendly greetings!\n\nAccording to http://developer.postgresql.org/pgdocs/postgres/storage-page-layout.html:\nEvery table and index is stored as an array of pages of a fixed size\n(usually 8 kB, although a different page size can be selected when\ncompiling the server).\n\nIs there any use or benefit in changing this page size?\n\nThank you.\n\n-- \nF4FQM\nKerunix Flan\nLaurent Laborde\n", "msg_date": "Tue, 28 Apr 2009 11:53:03 +0200", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "any interest in changing the page size?" } ]
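As a small footnote to the question above: a running server reports the page size it was compiled with, so it is easy to confirm what a given installation uses (the block_size parameter is a read-only preset reflecting the compile-time choice, assuming 8.1 or later):

SHOW block_size;
-- or, equivalently:
SELECT current_setting('block_size');

Changing it still requires rebuilding the server and re-initializing the cluster, since the on-disk page layout depends on it.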
[ { "msg_contents": "I have the opportunity to set up a new postgres server for our\nproduction database. I've read several times in various postgres\nlists about the importance of separating logs from the actual database\ndata to avoid disk contention.\n\nCan someone suggest a typical partitioning scheme for a postgres server?\n\nMy initial thought was to create /var/lib/postgresql as a partition on\na separate set of disks.\n\nHowever, I can see that the xlog files will be stored here as well:\nhttp://www.postgresql.org/docs/8.3/interactive/storage-file-layout.html\n\nShould the xlog files be stored on a separate partition to improve performance?\n\nAny suggestions would be very helpful. Or if there is a document that\nlays out some best practices for server setup, that would be great.\n\nThe database usage will be read heavy (financial data) with batch\nwrites occurring overnight and occassionally during the day.\n\nserver information:\nDell PowerEdge 2970, 8 core Opteron 2384\n6 1TB hard drives with a PERC 6i\n64GB of ram\n\nWe will be running Ubuntu 9.04.\n\nThanks in advance,\nWhit\n", "msg_date": "Tue, 28 Apr 2009 12:56:48 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "partition question for new server setup" }, { "msg_contents": "On Tue, Apr 28, 2009 at 10:56 AM, Whit Armstrong\n<[email protected]> wrote:\n> I have the opportunity to set up a new postgres server for our\n> production database.  I've read several times in various postgres\n> lists about the importance of separating logs from the actual database\n> data to avoid disk contention.\n>\n> Can someone suggest a typical partitioning scheme for a postgres server?\n\nAt work I have 16 SAS disks. They are setup with 12 in a RAID-10, 2\nin a RAID-1 and 2 hot spares.\n\nThe OS, /var/log, and postgres base go in the RAID-1. I then create a\nnew data directory on the RAID-10, shut down pgsql, copy the base\ndirectory over to the RAID-10 and replace the base dir in the pg data\ndirectory with a link to the RAID-10's base directory and restart\npostgres. So, my pg_xlog and all OS and logging stuff goes on the\nRAID-10 and the main store for the db goes on the RAID-10.\n", "msg_date": "Tue, 28 Apr 2009 11:37:13 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "Thanks, Scott.\n\nJust to clarify you said:\n\n> postgres.  So, my pg_xlog and all OS and logging stuff goes on the\n> RAID-10 and the main store for the db goes on the RAID-10.\n\nIs that meant to be that the pg_xlog and all OS and logging stuff go\non the RAID-1 and the real database (the\n/var/lib/postgresql/8.3/main/base directory) goes on the RAID-10\npartition?\n\nThis is very helpful. Thanks for your feedback.\n\nAdditionally are there any clear choices w/ regard to filesystem\ntypes? Our choices would be xfs, ext3, or ext4.\n\nIs anyone out there running ext4 on a production system?\n\n-Whit\n", "msg_date": "Tue, 28 Apr 2009 13:48:40 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On Tue, Apr 28, 2009 at 11:48 AM, Whit Armstrong\n<[email protected]> wrote:\n> Thanks, Scott.\n>\n> Just to clarify you said:\n>\n>> postgres.  
So, my pg_xlog and all OS and logging stuff goes on the\n>> RAID-10 and the main store for the db goes on the RAID-10.\n>\n> Is that meant to be that the pg_xlog and all OS and logging stuff go\n> on the RAID-1 and the real database (the\n> /var/lib/postgresql/8.3/main/base directory) goes on the RAID-10\n> partition?\n\nYeah, and extra 0 jumped in there. Faulty keyboard I guess. :) OS\nand everything but base is on the RAID-1.\n\n> This is very helpful.  Thanks for your feedback.\n>\n> Additionally are there any clear choices w/ regard to filesystem\n> types?  Our choices would be xfs, ext3, or ext4.\n\nWell, there's a lot of people who use xfs and ext3. XFS is generally\nrated higher than ext3 both for performance and reliability. However,\nwe run Centos 5 in production, and XFS isn't one of the blessed file\nsystems it comes with, so we're running ext3. It's worked quite well\nfor us.\n", "msg_date": "Tue, 28 Apr 2009 11:56:25 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On Tuesday 28 April 2009, Whit Armstrong <[email protected]> wrote:\n> Additionally are there any clear choices w/ regard to filesystem\n> types? Our choices would be xfs, ext3, or ext4.\n\nxfs consistently delivers much higher sequential throughput than ext3 (up to \n100%), at least on my hardware.\n\n-- \nEven a sixth-grader can figure out that you can’t borrow money to pay off \nyour debt\n", "msg_date": "Tue, 28 Apr 2009 11:03:25 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On Tue, Apr 28, 2009 at 11:56:25AM -0600, Scott Marlowe wrote:\n> On Tue, Apr 28, 2009 at 11:48 AM, Whit Armstrong\n> <[email protected]> wrote:\n> > Thanks, Scott.\n> >\n> > Just to clarify you said:\n> >\n> >> postgres. ?So, my pg_xlog and all OS and logging stuff goes on the\n> >> RAID-10 and the main store for the db goes on the RAID-10.\n> >\n> > Is that meant to be that the pg_xlog and all OS and logging stuff go\n> > on the RAID-1 and the real database (the\n> > /var/lib/postgresql/8.3/main/base directory) goes on the RAID-10\n> > partition?\n> \n> Yeah, and extra 0 jumped in there. Faulty keyboard I guess. :) OS\n> and everything but base is on the RAID-1.\n> \n> > This is very helpful. ?Thanks for your feedback.\n> >\n> > Additionally are there any clear choices w/ regard to filesystem\n> > types? ?Our choices would be xfs, ext3, or ext4.\n> \n> Well, there's a lot of people who use xfs and ext3. XFS is generally\n> rated higher than ext3 both for performance and reliability. However,\n> we run Centos 5 in production, and XFS isn't one of the blessed file\n> systems it comes with, so we're running ext3. It's worked quite well\n> for us.\n> \n\nThe other optimizations are using data=writeback when mounting the\next3 filesystem for PostgreSQL and using the elevator=deadline for\nthe disk driver. I do not know how you specify that for Ubuntu.\n\nCheers,\nKen\n", "msg_date": "Tue, 28 Apr 2009 13:06:21 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "Whit Armstrong wrote:\n> I have the opportunity to set up a new postgres server for our\n> production database. 
I've read several times in various postgres\n> lists about the importance of separating logs from the actual database\n> data to avoid disk contention.\n> \n> Can someone suggest a typical partitioning scheme for a postgres server?\n> \n> My initial thought was to create /var/lib/postgresql as a partition on\n> a separate set of disks.\n> \n> However, I can see that the xlog files will be stored here as well:\n> http://www.postgresql.org/docs/8.3/interactive/storage-file-layout.html\n> \n> Should the xlog files be stored on a separate partition to improve performance?\n> \n> Any suggestions would be very helpful. Or if there is a document that\n> lays out some best practices for server setup, that would be great.\n> \n> The database usage will be read heavy (financial data) with batch\n> writes occurring overnight and occassionally during the day.\n> \n> server information:\n> Dell PowerEdge 2970, 8 core Opteron 2384\n> 6 1TB hard drives with a PERC 6i\n> 64GB of ram\n\nWe're running a similar configuration: PowerEdge 8 core, PERC 6i, but we have 8 of the 2.5\" 10K 384GB disks.\n\nWhen I asked the same question on this forum, I was advised to just put all 8 disks into a single RAID 10, and forget about separating things. The performance of a battery-backed PERC 6i (you did get a battery-backed cache, right?) with 8 disks is quite good. \n\nIn order to separate the logs, OS and data, I'd have to split off at least two of the 8 disks, leaving only six for the RAID 10 array. But then my xlogs would be on a single disk, which might not be safe. A more robust approach would be to split off four of the disks, put the OS on a RAID 1, the xlog on a RAID 1, and the database data on a 4-disk RAID 10. Now I've separated the data, but my primary partition has lost half its disks.\n\nSo, I took the advice, and just made one giant 8-disk RAID 10, and I'm very happy with it. It has everything: Postgres, OS and logs. But since the RAID array is 8 disks instead of 4, the net performance seems to quite good.\n\nBut ... your mileage may vary. My box has just one thing running on it: Postgres. There is almost no other disk activity to interfere with the file-system caching. If your server is going to have a bunch of other activity that generate a lot of non-Postgres disk activity, then this advice might not apply.\n\nCraig\n\n", "msg_date": "Tue, 28 Apr 2009 11:07:13 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "Kenneth Marshall wrote:\n>>> Additionally are there any clear choices w/ regard to filesystem\n>>> types? ?Our choices would be xfs, ext3, or ext4.\n>> Well, there's a lot of people who use xfs and ext3. XFS is generally\n>> rated higher than ext3 both for performance and reliability. However,\n>> we run Centos 5 in production, and XFS isn't one of the blessed file\n>> systems it comes with, so we're running ext3. It's worked quite well\n>> for us.\n>>\n> \n> The other optimizations are using data=writeback when mounting the\n> ext3 filesystem for PostgreSQL and using the elevator=deadline for\n> the disk driver. I do not know how you specify that for Ubuntu.\n\nAfter a reading various articles, I thought that \"noop\" was the right choice when you're using a battery-backed RAID controller. 
The RAID controller is going to cache all data and reschedule the writes anyway, so the kernal schedule is irrelevant at best, and can slow things down.\n\nOn Ubuntu, it's\n\n echo noop >/sys/block/hdx/queue/scheduler\n\nwhere \"hdx\" is replaced by the appropriate device.\n\nCraig\n\n", "msg_date": "Tue, 28 Apr 2009 11:16:44 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "Craig James <[email protected]> wrote: \n \n> After a reading various articles, I thought that \"noop\" was the\n> right choice when you're using a battery-backed RAID controller. \n> The RAID controller is going to cache all data and reschedule the\n> writes anyway, so the kernal schedule is irrelevant at best, and can\n> slow things down.\n \nWouldn't that depend on the relative sizes of those caches? In a\nnot-so-hypothetical example, we have machines with 120 GB OS cache,\nand 256 MB BBU RAID controller cache. We seem to benefit from\nelevator=deadline at the OS level.\n \n-Kevin\n", "msg_date": "Tue, 28 Apr 2009 13:30:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "> echo noop >/sys/block/hdx/queue/scheduler\n\ncan this go into /etc/init.d somewhere?\n\nor does that change stick between reboots?\n\n-Whit\n\n\nOn Tue, Apr 28, 2009 at 2:16 PM, Craig James <[email protected]> wrote:\n> Kenneth Marshall wrote:\n>>>>\n>>>> Additionally are there any clear choices w/ regard to filesystem\n>>>> types? ?Our choices would be xfs, ext3, or ext4.\n>>>\n>>> Well, there's a lot of people who use xfs and ext3.  XFS is generally\n>>> rated higher than ext3 both for performance and reliability.  However,\n>>> we run Centos 5 in production, and XFS isn't one of the blessed file\n>>> systems it comes with, so we're running ext3.  It's worked quite well\n>>> for us.\n>>>\n>>\n>> The other optimizations are using data=writeback when mounting the\n>> ext3 filesystem for PostgreSQL and using the elevator=deadline for\n>> the disk driver. I do not know how you specify that for Ubuntu.\n>\n> After a reading various articles, I thought that \"noop\" was the right choice\n> when you're using a battery-backed RAID controller.  The RAID controller is\n> going to cache all data and reschedule the writes anyway, so the kernal\n> schedule is irrelevant at best, and can slow things down.\n>\n> On Ubuntu, it's\n>\n>  echo noop >/sys/block/hdx/queue/scheduler\n>\n> where \"hdx\" is replaced by the appropriate device.\n>\n> Craig\n>\n>\n", "msg_date": "Tue, 28 Apr 2009 14:37:37 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On Tue, Apr 28, 2009 at 01:30:59PM -0500, Kevin Grittner wrote:\n> Craig James <[email protected]> wrote: \n> \n> > After a reading various articles, I thought that \"noop\" was the\n> > right choice when you're using a battery-backed RAID controller. \n> > The RAID controller is going to cache all data and reschedule the\n> > writes anyway, so the kernal schedule is irrelevant at best, and can\n> > slow things down.\n> \n> Wouldn't that depend on the relative sizes of those caches? In a\n> not-so-hypothetical example, we have machines with 120 GB OS cache,\n> and 256 MB BBU RAID controller cache. 
We seem to benefit from\n> elevator=deadline at the OS level.\n> \n> -Kevin\n> \nThis was my understanding as well. If your RAID controller had a\nlot of well managed cache, then the noop scheduler was a win. Less\nperformant RAID controllers benefit from teh deadline scheduler.\n\nCheers,\nKen\n", "msg_date": "Tue, 28 Apr 2009 13:40:41 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "Whit Armstrong <[email protected]> wrote: \n>> echo noop >/sys/block/hdx/queue/scheduler\n> \n> can this go into /etc/init.d somewhere?\n \nWe set the default for the kernel in the /boot/grub/menu.lst file. On\na kernel line, add elevator=xxx (where xxx is your choice of\nscheduler).\n \n-Kevin\n\n", "msg_date": "Tue, 28 Apr 2009 14:13:12 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "I see.\n\nThanks for everyone for replying. The whole discussion has been very helpful.\n\nCheers,\nWhit\n\n\nOn Tue, Apr 28, 2009 at 3:13 PM, Kevin Grittner\n<[email protected]> wrote:\n> Whit Armstrong <[email protected]> wrote:\n>>>   echo noop >/sys/block/hdx/queue/scheduler\n>>\n>> can this go into /etc/init.d somewhere?\n>\n> We set the default for the kernel in the /boot/grub/menu.lst file.  On\n> a kernel line, add  elevator=xxx (where xxx is your choice of\n> scheduler).\n>\n> -Kevin\n>\n>\n", "msg_date": "Tue, 28 Apr 2009 15:15:25 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On Tue, Apr 28, 2009 at 12:06 PM, Kenneth Marshall <[email protected]> wrote:\n> On Tue, Apr 28, 2009 at 11:56:25AM -0600, Scott Marlowe wrote:\n>> On Tue, Apr 28, 2009 at 11:48 AM, Whit Armstrong\n>> <[email protected]> wrote:\n>> > Thanks, Scott.\n>> >\n>> > Just to clarify you said:\n>> >\n>> >> postgres. ?So, my pg_xlog and all OS and logging stuff goes on the\n>> >> RAID-10 and the main store for the db goes on the RAID-10.\n>> >\n>> > Is that meant to be that the pg_xlog and all OS and logging stuff go\n>> > on the RAID-1 and the real database (the\n>> > /var/lib/postgresql/8.3/main/base directory) goes on the RAID-10\n>> > partition?\n>>\n>> Yeah, and extra 0 jumped in there.  Faulty keyboard I guess. :)  OS\n>> and everything but base is on the RAID-1.\n>>\n>> > This is very helpful. ?Thanks for your feedback.\n>> >\n>> > Additionally are there any clear choices w/ regard to filesystem\n>> > types? ?Our choices would be xfs, ext3, or ext4.\n>>\n>> Well, there's a lot of people who use xfs and ext3.  XFS is generally\n>> rated higher than ext3 both for performance and reliability.  However,\n>> we run Centos 5 in production, and XFS isn't one of the blessed file\n>> systems it comes with, so we're running ext3.  It's worked quite well\n>> for us.\n>>\n>\n> The other optimizations are using data=writeback when mounting the\n> ext3 filesystem for PostgreSQL and using the elevator=deadline for\n> the disk driver. I do not know how you specify that for Ubuntu.\n\nYeah, we set the scheduler to deadline on our db servers and it\ndropped the load and io wait noticeably, even with our rather fast\narrays and controller. 
We also use data=writeback.\n", "msg_date": "Tue, 28 Apr 2009 13:15:29 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On Tue, Apr 28, 2009 at 12:37 PM, Whit Armstrong\n<[email protected]> wrote:\n>>  echo noop >/sys/block/hdx/queue/scheduler\n>\n> can this go into /etc/init.d somewhere?\n>\n> or does that change stick between reboots?\n\nI just stick in /etc/rc.local\n", "msg_date": "Tue, 28 Apr 2009 13:17:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On Tue, Apr 28, 2009 at 12:40 PM, Kenneth Marshall <[email protected]> wrote:\n> On Tue, Apr 28, 2009 at 01:30:59PM -0500, Kevin Grittner wrote:\n>> Craig James <[email protected]> wrote:\n>>\n>> > After a reading various articles, I thought that \"noop\" was the\n>> > right choice when you're using a battery-backed RAID controller.\n>> > The RAID controller is going to cache all data and reschedule the\n>> > writes anyway, so the kernal schedule is irrelevant at best, and can\n>> > slow things down.\n>>\n>> Wouldn't that depend on the relative sizes of those caches?  In a\n>> not-so-hypothetical example, we have machines with 120 GB OS cache,\n>> and 256 MB BBU RAID controller cache.  We seem to benefit from\n>> elevator=deadline at the OS level.\n>>\n>> -Kevin\n>>\n> This was my understanding as well. If your RAID controller had a\n> lot of well managed cache, then the noop scheduler was a win. Less\n> performant RAID controllers benefit from teh deadline scheduler.\n\nI have an Areca 1680ix with 512M cache on a machine with 32Gig ram and\nI get slightly better performance and lower load factors from deadline\nthan from noop, but it's not by much.\n", "msg_date": "Tue, 28 Apr 2009 13:19:36 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On 4/28/09 11:16 AM, \"Craig James\" <[email protected]> wrote:\n\n> Kenneth Marshall wrote:\n>>>> Additionally are there any clear choices w/ regard to filesystem\n>>>> types? ?Our choices would be xfs, ext3, or ext4.\n>>> Well, there's a lot of people who use xfs and ext3. XFS is generally\n>>> rated higher than ext3 both for performance and reliability. However,\n>>> we run Centos 5 in production, and XFS isn't one of the blessed file\n>>> systems it comes with, so we're running ext3. It's worked quite well\n>>> for us.\n>>> \n>> \n>> The other optimizations are using data=writeback when mounting the\n>> ext3 filesystem for PostgreSQL and using the elevator=deadline for\n>> the disk driver. I do not know how you specify that for Ubuntu.\n> \n> After a reading various articles, I thought that \"noop\" was the right choice\n> when you're using a battery-backed RAID controller. The RAID controller is\n> going to cache all data and reschedule the writes anyway, so the kernal\n> schedule is irrelevant at best, and can slow things down.\n> \n> On Ubuntu, it's\n> \n> echo noop >/sys/block/hdx/queue/scheduler\n> \n> where \"hdx\" is replaced by the appropriate device.\n> \n> Craig\n> \n\nI've always had better performance from deadline than noop, no matter what\nraid controller I have. 
Perhaps with a really good one or a SAN that\nchanges (NOT a PERC 6 mediocre thingamabob).\n\nPERC 6 really, REALLY needs to have the linux \"readahead\" value set up to at\nleast 1MB per effective spindle to get good sequential read performance.\nXfs helps with it too, but you can mitigate half of the ext3 vs xfs\nsequential access performance with high readahead settings:\n\n/sbin/blockdev --setra <value> <device>\n\nValue is in blocks (512 bytes)\n\n/sbin/blockdev --getra <device> to see its setting. Google for more info.\n\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 28 Apr 2009 16:40:24 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": ">> \n>> server information:\n>> Dell PowerEdge 2970, 8 core Opteron 2384\n>> 6 1TB hard drives with a PERC 6i\n>> 64GB of ram\n> \n> We're running a similar configuration: PowerEdge 8 core, PERC 6i, but we have\n> 8 of the 2.5\" 10K 384GB disks.\n> \n> When I asked the same question on this forum, I was advised to just put all 8\n> disks into a single RAID 10, and forget about separating things. The\n> performance of a battery-backed PERC 6i (you did get a battery-backed cache,\n> right?) with 8 disks is quite good.\n> \n> In order to separate the logs, OS and data, I'd have to split off at least two\n> of the 8 disks, leaving only six for the RAID 10 array. But then my xlogs\n> would be on a single disk, which might not be safe. A more robust approach\n> would be to split off four of the disks, put the OS on a RAID 1, the xlog on a\n> RAID 1, and the database data on a 4-disk RAID 10. Now I've separated the\n> data, but my primary partition has lost half its disks.\n> \n> So, I took the advice, and just made one giant 8-disk RAID 10, and I'm very\n> happy with it. It has everything: Postgres, OS and logs. But since the RAID\n> array is 8 disks instead of 4, the net performance seems to quite good.\n> \n\nIf you go this route, there are a few risks:\n1. If everything is on the same partition/file system, fsyncs from the\nxlogs may cross-pollute to the data. Ext3 is notorious for this, though\ndata=writeback limits the effect you especially might not want\ndata=writeback on your OS partition. I would recommend that the OS, Data,\nand xlogs + etc live on three different partitions regardless of the number\nof logical RAID volumes.\n2. Cheap raid controllers (PERC, others) will see fsync for an array and\nflush everything that is dirty (not just the partition or file data), which\nis a concern if you aren't using it in write-back with battery backed cache,\neven for a very read heavy db that doesn't need high fsync speed for\ntransactions. \n\n> But ... your mileage may vary. My box has just one thing running on it:\n> Postgres. There is almost no other disk activity to interfere with the\n> file-system caching. If your server is going to have a bunch of other\n> activity that generate a lot of non-Postgres disk activity, then this advice\n> might not apply.\n> \n> Craig\n> \n\n6 and 8 disk counts are tough. My biggest single piece of advise is to have\nthe xlogs in a partition separate from the data (not necessarily a different\nraid logical volume), with file system and mount options tuned for each case\nseparately. 
I've seen this alone improve performance by a factor of 2.5 on\nsome file system / storage combinations.\n\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 28 Apr 2009 16:58:30 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "are there any other xfs settings that should be tuned for postgres?\n\nI see this post mentions \"allocation groups.\" does anyone have\nsuggestions for those settings?\nhttp://archives.postgresql.org/pgsql-admin/2009-01/msg00144.php\n\nwhat about raid stripe size? does it really make a difference? I\nthink the default for the perc is 64kb (but I'm not in front of the\nserver right now).\n\n-Whit\n\n\nOn Tue, Apr 28, 2009 at 7:40 PM, Scott Carey <[email protected]> wrote:\n> On 4/28/09 11:16 AM, \"Craig James\" <[email protected]> wrote:\n>\n>> Kenneth Marshall wrote:\n>>>>> Additionally are there any clear choices w/ regard to filesystem\n>>>>> types? ?Our choices would be xfs, ext3, or ext4.\n>>>> Well, there's a lot of people who use xfs and ext3.  XFS is generally\n>>>> rated higher than ext3 both for performance and reliability.  However,\n>>>> we run Centos 5 in production, and XFS isn't one of the blessed file\n>>>> systems it comes with, so we're running ext3.  It's worked quite well\n>>>> for us.\n>>>>\n>>>\n>>> The other optimizations are using data=writeback when mounting the\n>>> ext3 filesystem for PostgreSQL and using the elevator=deadline for\n>>> the disk driver. I do not know how you specify that for Ubuntu.\n>>\n>> After a reading various articles, I thought that \"noop\" was the right choice\n>> when you're using a battery-backed RAID controller.  The RAID controller is\n>> going to cache all data and reschedule the writes anyway, so the kernal\n>> schedule is irrelevant at best, and can slow things down.\n>>\n>> On Ubuntu, it's\n>>\n>>   echo noop >/sys/block/hdx/queue/scheduler\n>>\n>> where \"hdx\" is replaced by the appropriate device.\n>>\n>> Craig\n>>\n>\n> I've always had better performance from deadline than noop, no matter what\n> raid controller I have.  Perhaps with a really good one or a SAN that\n> changes (NOT a PERC 6 mediocre thingamabob).\n>\n> PERC 6 really, REALLY needs to have the linux \"readahead\" value set up to at\n> least 1MB per effective spindle to get good sequential read performance.\n> Xfs helps with it too, but you can mitigate half of the ext3 vs xfs\n> sequential access performance with high readahead settings:\n>\n> /sbin/blockdev --setra <value> <device>\n>\n> Value is in blocks (512 bytes)\n>\n> /sbin/blockdev --getra <device> to see its setting.   Google for more info.\n>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n", "msg_date": "Tue, 28 Apr 2009 20:02:11 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "Thanks, Scott.\n\nSo far, I've followed a pattern similar to Scott Marlowe's setup. I\nhave configured 2 disks as a RAID 1 volume, and 4 disks as a RAID 10\nvolume. 
So, the OS and xlogs will live on the RAID 1 vol and the data\nwill live on the RAID 10 vol.\n\nI'm running the memtest on it now, so we still haven't locked\nourselves into any choices.\n\nregarding your comment:\n> 6 and 8 disk counts are tough. My biggest single piece of advise is to have\n> the xlogs in a partition separate from the data (not necessarily a different\n> raid logical volume), with file system and mount options tuned for each case\n> separately. I've seen this alone improve performance by a factor of 2.5 on\n> some file system / storage combinations.\n\ncan you suggest mount options for the various partitions? I'm leaning\ntowards xfs for the filesystem format unless someone complains loudly\nabout data corruption on xfs for a recent 2.6 kernel.\n\n-Whit\n\n\nOn Tue, Apr 28, 2009 at 7:58 PM, Scott Carey <[email protected]> wrote:\n>>>\n>>> server information:\n>>> Dell PowerEdge 2970, 8 core Opteron 2384\n>>> 6 1TB hard drives with a PERC 6i\n>>> 64GB of ram\n>>\n>> We're running a similar configuration: PowerEdge 8 core, PERC 6i, but we have\n>> 8 of the 2.5\" 10K 384GB disks.\n>>\n>> When I asked the same question on this forum, I was advised to just put all 8\n>> disks into a single RAID 10, and forget about separating things.  The\n>> performance of a battery-backed PERC 6i (you did get a battery-backed cache,\n>> right?) with 8 disks is quite good.\n>>\n>> In order to separate the logs, OS and data, I'd have to split off at least two\n>> of the 8 disks, leaving only six for the RAID 10 array.  But then my xlogs\n>> would be on a single disk, which might not be safe.  A more robust approach\n>> would be to split off four of the disks, put the OS on a RAID 1, the xlog on a\n>> RAID 1, and the database data on a 4-disk RAID 10.  Now I've separated the\n>> data, but my primary partition has lost half its disks.\n>>\n>> So, I took the advice, and just made one giant 8-disk RAID 10, and I'm very\n>> happy with it.  It has everything: Postgres, OS and logs.  But since the RAID\n>> array is 8 disks instead of 4, the net performance seems to quite good.\n>>\n>\n> If you go this route, there are a few risks:\n> 1.  If everything is on the same partition/file system, fsyncs from the\n> xlogs may cross-pollute to the data.  Ext3 is notorious for this, though\n> data=writeback limits the effect you especially might not want\n> data=writeback on your OS partition.  I would recommend that the OS, Data,\n> and xlogs + etc live on three different partitions regardless of the number\n> of logical RAID volumes.\n> 2. Cheap raid controllers (PERC, others) will see fsync for an array and\n> flush everything that is dirty (not just the partition or file data), which\n> is a concern if you aren't using it in write-back with battery backed cache,\n> even for a very read heavy db that doesn't need high fsync speed for\n> transactions.\n>\n>> But ... your mileage may vary.  My box has just one thing running on it:\n>> Postgres.  There is almost no other disk activity to interfere with the\n>> file-system caching.  If your server is going to have a bunch of other\n>> activity that generate a lot of non-Postgres disk activity, then this advice\n>> might not apply.\n>>\n>> Craig\n>>\n>\n> 6 and 8 disk counts are tough.  My biggest single piece of advise is to have\n> the xlogs in a partition separate from the data (not necessarily a different\n> raid logical volume), with file system and mount options tuned for each case\n> separately.  
I've seen this alone improve performance by a factor of 2.5 on\n> some file system / storage combinations.\n>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n", "msg_date": "Tue, 28 Apr 2009 20:10:26 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "On Tue, Apr 28, 2009 at 5:58 PM, Scott Carey <[email protected]> wrote:\n\n> 1.  If everything is on the same partition/file system, fsyncs from the\n> xlogs may cross-pollute to the data.  Ext3 is notorious for this, though\n> data=writeback limits the effect you especially might not want\n> data=writeback on your OS partition.  I would recommend that the OS, Data,\n> and xlogs + etc live on three different partitions regardless of the number\n> of logical RAID volumes.\n\nNote that I remember reading some comments a while back that just\nhaving a different file system, on the same logical set, makes things\nfaster. I.e. a partition for OS, one for xlog and one for pgdata on\nthe same large logical volume was noticeably faster than having it all\non the same big partition on a single logical volume.\n", "msg_date": "Tue, 28 Apr 2009 18:19:56 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "\nOn 4/28/09 5:02 PM, \"Whit Armstrong\" <[email protected]> wrote:\n\n> are there any other xfs settings that should be tuned for postgres?\n> \n> I see this post mentions \"allocation groups.\" does anyone have\n> suggestions for those settings?\n> http://archives.postgresql.org/pgsql-admin/2009-01/msg00144.php\n> \n> what about raid stripe size? does it really make a difference? I\n> think the default for the perc is 64kb (but I'm not in front of the\n> server right now).\n> \n\nWhen I tested a PERC I couldn't tell the difference between the 64k and 256k\nsettings. The other settings that looked like they might improve things all\nhad worse performance (other than write back cache of course).\n\nAlso, if you have partitions at all on the data device, you'll want to try\nand stripe align it. The easiest way is to simply put the file system on\nthe raw device rather than a partition (e.g. /dev/sda rather than\n/dev/sda1). Partition alignment can be very annoying to do well. It will\naffect performance a little, less so with larger stripe sizes.\n\n> -Whit\n> \n> \n> On Tue, Apr 28, 2009 at 7:40 PM, Scott Carey <[email protected]> wrote:\n\n\n", "msg_date": "Tue, 28 Apr 2009 18:31:59 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "\nOn 4/28/09 5:10 PM, \"Whit Armstrong\" <[email protected]> wrote:\n\n> Thanks, Scott.\n> \n> So far, I've followed a pattern similar to Scott Marlowe's setup. I\n> have configured 2 disks as a RAID 1 volume, and 4 disks as a RAID 10\n> volume. 
So, the OS and xlogs will live on the RAID 1 vol and the data\n> will live on the RAID 10 vol.\n> \n> I'm running the memtest on it now, so we still haven't locked\n> ourselves into any choices.\n> \n\nIts a fine option -- the only way to know if one big volume with separate\npartitions is better is to test your actual application since it is highly\ndependant on the use case.\n\n\n> regarding your comment:\n>> 6 and 8 disk counts are tough. My biggest single piece of advise is to have\n>> the xlogs in a partition separate from the data (not necessarily a different\n>> raid logical volume), with file system and mount options tuned for each case\n>> separately. I've seen this alone improve performance by a factor of 2.5 on\n>> some file system / storage combinations.\n> \n> can you suggest mount options for the various partitions? I'm leaning\n> towards xfs for the filesystem format unless someone complains loudly\n> about data corruption on xfs for a recent 2.6 kernel.\n> \n> -Whit\n> \n\nI went with ext3 for the OS -- it makes Ops feel a lot better. ext2 for a\nseparate xlogs partition, and xfs for the data.\next2's drawbacks are not relevant for a small partition with just xlog data,\nbut are a problem for the OS.\n \nFor a setup like yours xlog speed is not going to limit you.\nI suggest a partition for the OS with default ext3 mount options, and a\nsecond partition for postgres/xlogs minus the data on ext3 with\ndata=writeback.\n\next3 with default data=ordered on the xlogs causes performance issues as\nothers have mentioned here. But data=ordered is probably the right thing\nfor the OS. Your xlogs will not be a bottleneck and will probably be fine\neither way -- and this is a mount-time option so you can switch.\n\nI went with xfs for the data partition, and did not see benefit from\nanything other than the 'noatime' mount option. The default xfs settings\nare fine, and the raid specific formatting options are primarily designed to\nhelp raid 5 or 6 out.\nIf you go with ext3 for the data partition, make sure its data=writeback\nwith 'noatime'. Both of these are mount time options.\n\nI said it before, but I'll repeat -- don't neglect the OS readahead setting\nfor the device, especially the data device.\nSomething like:\n/sbin/blockdev --setra 8192 /dev/sd<X>\nWhere <X> is the right letter for your data raid volume\nWill have a big impact on larger sequential scans. This has to go in\nrc.local or whatever script runs after boot on your distro.\n\n\n> \n> On Tue, Apr 28, 2009 at 7:58 PM, Scott Carey <[email protected]> wrote:\n>>>> \n>>>> server information:\n>>>> Dell PowerEdge 2970, 8 core Opteron 2384\n>>>> 6 1TB hard drives with a PERC 6i\n>>>> 64GB of ram\n>>> \n>>> We're running a similar configuration: PowerEdge 8 core, PERC 6i, but we\n>>> have\n>>> 8 of the 2.5\" 10K 384GB disks.\n>>> \n>>> When I asked the same question on this forum, I was advised to just put all\n>>> 8\n>>> disks into a single RAID 10, and forget about separating things.  The\n>>> performance of a battery-backed PERC 6i (you did get a battery-backed cache,\n>>> right?) with 8 disks is quite good.\n>>> \n>>> In order to separate the logs, OS and data, I'd have to split off at least\n>>> two\n>>> of the 8 disks, leaving only six for the RAID 10 array.  But then my xlogs\n>>> would be on a single disk, which might not be safe.  A more robust approach\n>>> would be to split off four of the disks, put the OS on a RAID 1, the xlog on\n>>> a\n>>> RAID 1, and the database data on a 4-disk RAID 10.  
Now I've separated the\n>>> data, but my primary partition has lost half its disks.\n>>> \n>>> So, I took the advice, and just made one giant 8-disk RAID 10, and I'm very\n>>> happy with it.  It has everything: Postgres, OS and logs.  But since the\n>>> RAID\n>>> array is 8 disks instead of 4, the net performance seems to quite good.\n>>> \n>> \n>> If you go this route, there are a few risks:\n>> 1.  If everything is on the same partition/file system, fsyncs from the\n>> xlogs may cross-pollute to the data.  Ext3 is notorious for this, though\n>> data=writeback limits the effect you especially might not want\n>> data=writeback on your OS partition.  I would recommend that the OS, Data,\n>> and xlogs + etc live on three different partitions regardless of the number\n>> of logical RAID volumes.\n>> 2. Cheap raid controllers (PERC, others) will see fsync for an array and\n>> flush everything that is dirty (not just the partition or file data), which\n>> is a concern if you aren't using it in write-back with battery backed cache,\n>> even for a very read heavy db that doesn't need high fsync speed for\n>> transactions.\n>> \n>>> But ... your mileage may vary.  My box has just one thing running on it:\n>>> Postgres.  There is almost no other disk activity to interfere with the\n>>> file-system caching.  If your server is going to have a bunch of other\n>>> activity that generate a lot of non-Postgres disk activity, then this advice\n>>> might not apply.\n>>> \n>>> Craig\n>>> \n>> \n>> 6 and 8 disk counts are tough.  My biggest single piece of advise is to have\n>> the xlogs in a partition separate from the data (not necessarily a different\n>> raid logical volume), with file system and mount options tuned for each case\n>> separately.  I've seen this alone improve performance by a factor of 2.5 on\n>> some file system / storage combinations.\n>> \n>>> \n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>> \n>> \n>> \n> \n\n", "msg_date": "Tue, 28 Apr 2009 18:58:51 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "Thanks, Scott.\n\n> I went with ext3 for the OS -- it makes Ops feel a lot better. ext2 for a\n> separate xlogs partition, and xfs for the data.\n> ext2's drawbacks are not relevant for a small partition with just xlog data,\n> but are a problem for the OS.\n\nCan you suggest an appropriate size for the xlogs partition? These\nfiles are controlled by checkpoint_segments, is that correct?\n\nWe have checkpoint_segments set to 500 in the current setup, which is\nabout 8GB. So 10 to 15 GB xlogs partition? Is that reasonable?\n\n-Whit\n", "msg_date": "Wed, 29 Apr 2009 10:28:08 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "\nOn 4/29/09 7:28 AM, \"Whit Armstrong\" <[email protected]> wrote:\n\n> Thanks, Scott.\n> \n>> I went with ext3 for the OS -- it makes Ops feel a lot better. ext2 for a\n>> separate xlogs partition, and xfs for the data.\n>> ext2's drawbacks are not relevant for a small partition with just xlog data,\n>> but are a problem for the OS.\n> \n> Can you suggest an appropriate size for the xlogs partition? 
These\n> files are controlled by checkpoint_segments, is that correct?\n> \n> We have checkpoint_segments set to 500 in the current setup, which is\n> about 8GB. So 10 to 15 GB xlogs partition? Is that reasonable?\n> \n\nYes and no.\nIf you are using or plan to ever use log shipping you'll need more space.\nIn most setups, it will keep logs around until successful shipping has\nhappened and been told to remove them, which will allow them to grow.\nThere may be other reasons why the total files there might be greater and\nI'm not an expert in all the possibilities there so others will probably\nhave to answer that.\n\nWith a basic install, however, it won't use much more than your calculation\nabove. \nYou probably want a little breathing room in general, and in most new\nsystems today it's not hard to carve out 50GB. I'd be shocked if your mirror\nthat you are carving this out of isn't at least 250GB since it's SATA.\n\nI will reiterate that on a system your size the xlog throughput won't be a\nbottleneck (fsync latency might, but raid cards with battery backup are for\nthat). So the file system choice isn't a big deal once it's on its own\npartition -- the main difference at that point is almost entirely max write\nthroughput. \n\n", "msg_date": "Wed, 29 Apr 2009 11:58:25 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partition question for new server setup" }, { "msg_contents": "Thanks to everyone who helped me arrive at the config for this server.\n Here is my first set of benchmarks using the standard pgbench setup.\n\nThe benchmark numbers seem pretty reasonable to me, but I don't have a\ngood feel for what typical numbers are. Any feedback is appreciated.\n\n-Whit\n\n\nThe server is set up as follows:\n6 1TB drives, all Seagate Barracuda ES.2\nDell PERC 6 RAID controller card\nRAID 1 volume with OS and pg_xlog mounted as a separate partition w/\nnoatime and data=writeback both ext3\nRAID 10 volume with pg_data as xfs\n\nnodeadmin@node3:~$ /usr/lib/postgresql/8.3/bin/pgbench -t 10000 -c 10\n-U dbadmin test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 5498.740733 (including connections establishing)\ntps = 5504.244984 (excluding connections establishing)\nnodeadmin@node3:~$ /usr/lib/postgresql/8.3/bin/pgbench -t 10000 -c 10\n-U dbadmin test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 5627.047823 (including connections establishing)\ntps = 5632.835873 (excluding connections establishing)\nnodeadmin@node3:~$ /usr/lib/postgresql/8.3/bin/pgbench -t 10000 -c 10\n-U dbadmin test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 5629.213818 (including connections establishing)\ntps = 5635.225116 (excluding connections establishing)\nnodeadmin@node3:~$\n\n\n\nOn Wed, Apr 29, 2009 at 2:58 PM, Scott Carey <[email protected]> wrote:\n>\n> On 4/29/09 7:28 AM, \"Whit Armstrong\" <[email protected]> wrote:\n>\n>> Thanks, Scott.\n>>\n>>> I went with ext3 for the OS -- it makes Ops feel a lot better. 
ext2 for a\n>>> separate xlogs partition, and xfs for the data.\n>>> ext2's drawbacks are not relevant for a small partition with just xlog data,\n>>> but are a problem for the OS.\n>>\n>> Can you suggest an appropriate size for the xlogs partition?  These\n>> files are controlled by checkpoint_segments, is that correct?\n>>\n>> We have checkpoint_segments set to 500 in the current setup, which is\n>> about 8GB.  So 10 to 15 GB xlogs partition?  Is that reasonable?\n>>\n>\n> Yes and no.\n> If you are using or plan to ever use log shipping you¹ll need more space.\n> In most setups, It will keep around logs until successful shipping has\n> happened and been told to remove them, which will allow them to grow.\n> There may be other reasons why the total files there might be greater and\n> I'm not an expert in all the possibilities there so others will probably\n> have to answer that.\n>\n> With a basic install however, it won't use much more than your calculation\n> above.\n> You probably want a little breathing room in general, and in most new\n> systems today its not hard to carve out 50GB.  I'd be shocked if your mirror\n> that you are carving this out of isn't at least 250GB since its SATA.\n>\n> I will reiterate that on a system your size the xlog throughput won't be a\n> bottleneck (fsync latency might, but raid cards with battery backup is for\n> that).  So the file system choice isn't a big deal once its on its own\n> partition -- the main difference at that point is almost entirely max write\n> throughput.\n>\n>\n", "msg_date": "Wed, 29 Apr 2009 21:15:45 -0400", "msg_from": "Whit Armstrong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partition question for new server setup" } ]
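As a rough illustration of the setup the thread converges on, the mount options and xlog sizing might look like the sketch below. The device names, mount points, and checkpoint_completion_target value are illustrative assumptions, not details taken from the thread.

/etc/fstab (sketch):
/dev/sda3  /var/lib/postgresql/8.3/main/pg_xlog  ext3  noatime,data=writeback  0 2
/dev/sdb   /var/lib/postgresql/8.3/main          xfs   noatime                 0 2

/etc/rc.local (sketch; readahead is in 512-byte blocks):
/sbin/blockdev --setra 8192 /dev/sdb

Rough ceiling on pg_xlog disk use in 8.3 without log shipping:
(2 + checkpoint_completion_target) * checkpoint_segments + 1 segments of 16MB each.
With checkpoint_segments = 500 and the default checkpoint_completion_target = 0.5,
that is about 1251 * 16MB, or roughly 20GB -- which is why a 10-15GB partition is on
the small side and the 50GB suggested above leaves comfortable headroom.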