[
{
"msg_contents": "In investigating a slow query, I distiled the code below from a larger \nquery:\n\nSELECT\n\t*\nFROM\n\t/* SUBQUERY banners */ (\n\t\tSELECT\n\t\t\t*\n\t\tFROM\n\t\t\t/* SUBQUERY banners_links */ (\n\t\t\t\tSELECT\n\t\t\t\t\t*\n\t\t\t\tFROM\n\t\t\t\t\tbanners_links\n\t\t\t\tWHERE\n\t\t\t\t\tmerchant_id = 5631\n\t\t\t) as banners_links\n\t\tWHERE\n\t\t\tmerchant_id = 5631 AND\n\t\t\tbanners_links.status = 0\n\t) AS banners\n\n\t\tLEFT OUTER JOIN\n\n\t/* SUBQUERY types */ (\n\t\tSELECT\n\t\t\tbanner_types.id \t\tAS type_id,\n\t\t\tbanner_types.type \t\tAS type,\n\t\t\tbanners_banner_types.banner_id \tAS id\n\t\tFROM\n\t\t\tbanner_types,banners_banner_types\n\t\tWHERE\n\t\t\tbanners_banner_types.banner_id IN /* SUBQUERY */ (\n\t\t\t\tSELECT\n\t\t\t\t\tid\n\t\t\t\tFROM\n\t\t\t\t\tbanners_links\n\t\t\t\tWHERE\n\t\t\t\t\tmerchant_id = 5631\n\t\t\t) AND\n\t\t\tbanners_banner_types.type_id = banner_types.id\n ) AS types\n\n\tUSING (id)\n\nObviously, the subquery \"banners_links\" is redundant. The query however is a \ngenerated one, and this redundancy probably wasn't noted before. Logically \nyou would say it shouldn't hurt, but in fact it does. The above query \nexecutes painfully slow. The left outer join is killing the performance, as \nwitnessed by the plan:\n\n\"Nested Loop Left Join (cost=964.12..1480.67 rows=1 width=714) (actual \ntime=20.801..8233.410 rows=553 loops=1)\"\n\" Join Filter: (public.banners_links.id = banners_banner_types.banner_id)\"\n\" -> Bitmap Heap Scan on banners_links (cost=4.35..42.12 rows=1 \nwidth=671) (actual time=0.127..0.690 rows=359 loops=1)\"\n\" Recheck Cond: ((merchant_id = 5631) AND (merchant_id = 5631))\"\n\" Filter: ((status)::text = '0'::text)\"\n\" -> Bitmap Index Scan on banners_links_merchant_id_idx \n(cost=0.00..4.35 rows=10 width=0) (actual time=0.092..0.092 rows=424 \nloops=1)\"\n\" Index Cond: ((merchant_id = 5631) AND (merchant_id = 5631))\"\n\" -> Hash Join (cost=959.77..1432.13 rows=514 width=51) (actual \ntime=0.896..22.611 rows=658 loops=359)\"\n\" Hash Cond: (banners_banner_types.type_id = banner_types.id)\"\n\" -> Hash IN Join (cost=957.32..1422.52 rows=540 width=16) (actual \ntime=0.894..21.878 rows=658 loops=359)\"\n\" Hash Cond: (banners_banner_types.banner_id = \npublic.banners_links.id)\"\n\" -> Seq Scan on banners_banner_types (cost=0.00..376.40 \nrows=22240 width=16) (actual time=0.003..10.149 rows=22240 loops=359)\"\n\" -> Hash (cost=952.02..952.02 rows=424 width=8) (actual \ntime=0.779..0.779 rows=424 loops=1)\"\n\" -> Bitmap Heap Scan on banners_links \n(cost=11.54..952.02 rows=424 width=8) (actual time=0.108..0.513 rows=424 \nloops=1)\"\n\" Recheck Cond: (merchant_id = 5631)\"\n\" -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.078..0.078 rows=424 loops=1)\"\n\" Index Cond: (merchant_id = 5631)\"\n\" -> Hash (cost=2.20..2.20 rows=20 width=43) (actual \ntime=0.033..0.033 rows=20 loops=1)\"\n\" -> Seq Scan on banner_types (cost=0.00..2.20 rows=20 \nwidth=43) (actual time=0.004..0.017 rows=20 loops=1)\"\n\"Total runtime: 8233.710 ms\"\n\nI noticed that the recheck condition looks a bit weird:\n\nRecheck Cond: ((merchant_id = 5631) AND (merchant_id = 5631))\n\nYou would think that PG (8.2.3) would be smart enough to optimize this away. \nAlso the estimate of the nested loop left join and the actual results are \nway off. 
I tried increasing the statistics of both public.banners_links.id \nand banners_banner_types.banner_id (to the highest value 1000), analyzed, \nvacuum analyzed and did a vacuum full, but without any improvements.\n\nAnyway, when I remove the redundant sub query the code becomes:\n\nSELECT\n\t*\nFROM\n\t/* SUBQUERY banners */ (\n\t\tSELECT\n\t\t\t*\n\t\tFROM\n\t\t\tbanners_links\n\t\tWHERE\n\t\t\tmerchant_id = 5631 AND\n\t\t\tbanners_links.status = 0\n\t) AS banners\n\n\t\tLEFT OUTER JOIN\n\n\t/* SUBQUERY types */ (\n\t\tSELECT\n\t\t\tbanner_types.id \t\tAS type_id,\n\t\t\tbanner_types.type \t\tAS type,\n\t\t\tbanners_banner_types.banner_id \tAS id\n\t\tFROM\n\t\t\tbanner_types,banners_banner_types\n\t\tWHERE\n\t\t\tbanners_banner_types.banner_id IN /* SUBQUERY */ (\n\t\t\t\tSELECT\n\t\t\t\t\tid\n\t\t\t\tFROM\n\t\t\t\t\tbanners_links\n\t\t\t\tWHERE\n\t\t\t\t\tmerchant_id = 5631\n\t\t\t) AND\n\t\t\tbanners_banner_types.type_id = banner_types.id\n ) AS types\n\n\tUSING (id)\n\nWith this query, the execution time drops from 8 seconds to a mere 297 ms! \nThe plan now looks as follows:\n\n\"Hash Left Join (cost=1449.99..2392.68 rows=2 width=714) (actual \ntime=24.257..25.292 rows=553 loops=1)\"\n\" Hash Cond: (public.banners_links.id = banners_banner_types.banner_id)\"\n\" -> Bitmap Heap Scan on banners_links (cost=11.43..954.03 rows=2 \nwidth=671) (actual time=0.122..0.563 rows=359 loops=1)\"\n\" Recheck Cond: (merchant_id = 5631)\"\n\" Filter: ((status)::text = '0'::text)\"\n\" -> Bitmap Index Scan on banners_links_merchant_id_idx \n(cost=0.00..11.43 rows=424 width=0) (actual time=0.086..0.086 rows=424 \nloops=1)\"\n\" Index Cond: (merchant_id = 5631)\"\n\" -> Hash (cost=1432.13..1432.13 rows=514 width=51) (actual \ntime=24.128..24.128 rows=658 loops=1)\"\n\" -> Hash Join (cost=959.77..1432.13 rows=514 width=51) (actual \ntime=1.714..23.606 rows=658 loops=1)\"\n\" Hash Cond: (banners_banner_types.type_id = banner_types.id)\"\n\" -> Hash IN Join (cost=957.32..1422.52 rows=540 width=16) \n(actual time=1.677..22.811 rows=658 loops=1)\"\n\" Hash Cond: (banners_banner_types.banner_id = \npublic.banners_links.id)\"\n\" -> Seq Scan on banners_banner_types \n(cost=0.00..376.40 rows=22240 width=16) (actual time=0.005..10.306 \nrows=22240 loops=1)\"\n\" -> Hash (cost=952.02..952.02 rows=424 width=8) \n(actual time=0.772..0.772 rows=424 loops=1)\"\n\" -> Bitmap Heap Scan on banners_links \n(cost=11.54..952.02 rows=424 width=8) (actual time=0.105..0.510 rows=424 \nloops=1)\"\n\" Recheck Cond: (merchant_id = 5631)\"\n\" -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.077..0.077 rows=424 loops=1)\"\n\" Index Cond: (merchant_id = 5631)\"\n\" -> Hash (cost=2.20..2.20 rows=20 width=43) (actual \ntime=0.032..0.032 rows=20 loops=1)\"\n\" -> Seq Scan on banner_types (cost=0.00..2.20 rows=20 \nwidth=43) (actual time=0.004..0.018 rows=20 loops=1)\"\n\"Total runtime: 25.602 ms\"\n\nWe see that instead of a nested loop left join the planner now chooses a \nHash Left Join. I'm not really an expert in this matter and would like some \nmore insight into what's happening here. Naively I would say that a planner \nwould have to be smart enough to see this by itself?\n\nThanks in advance for all hints.\n\n_________________________________________________________________\nTalk with your online friends with Messenger \nhttp://www.join.msn.com/messenger/overview\n\n",
"msg_date": "Sun, 22 Apr 2007 01:23:50 +0200",
"msg_from": "\"henk de wit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Redundant sub query triggers slow nested loop left join"
},
{
"msg_contents": "\"henk de wit\" <[email protected]> writes:\n> Naively I would say that a planner \n> would have to be smart enough to see this by itself?\n\nWe got rid of direct tests for redundant WHERE clauses a long time ago\n(in 7.4, according to some quick tests I just made). They took a lot\nof cycles and almost never accomplished anything.\n\nSince you have two redundant tests, the selectivity is being\ndouble-counted, leading to a too-small rows estimate and a not very\nappropriate choice of join plan.\n\nFWIW, CVS HEAD does get rid of the duplicate conditions for the common\ncase of mergejoinable equality operators --- but it's not explicitly\nlooking for duplicate conditions, rather this is falling out of a new\nmethod for making transitive equality deductions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Apr 2007 19:48:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join "
}
]
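A minimal way to see the double-counting Tom describes, sketched against the same banners_links table from the thread (the exact estimates will of course differ per installation): compare the planner's row estimate for the merchant_id predicate stated once with the estimate when it is stated twice, as the generated query effectively does.

-- On 8.2 the duplicated clause is not recognized as redundant, so its
-- selectivity is applied twice and the estimated row count shrinks accordingly.
EXPLAIN SELECT * FROM banners_links WHERE merchant_id = 5631;
EXPLAIN SELECT * FROM banners_links WHERE merchant_id = 5631 AND merchant_id = 5631;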
[
{
"msg_contents": ">Since you have two redundant tests, the selectivity is being\n>double-counted, leading to a too-small rows estimate and a not very\n>appropriate choice of join plan.\n\nI see, thanks for the explanation. I did notice though that in the second \ncase, with 1 redundant test removed, the estimate is still low:\n\n\"Hash Left Join (cost=1449.99..2392.68 rows=2 width=714) (actual \ntime=24.257..25.292 rows=553 loops=1)\"\n\nIn that case the prediction is 2 rows, which is only 1 row more than in the \nprevious case. Yet the plan is much better and performance improved \ndramatically. Is there a reason/explanation for that?\n\n>FWIW, CVS HEAD does get rid of the duplicate conditions for the common\n>case of mergejoinable equality operators --- but it's not explicitly\n>looking for duplicate conditions, rather this is falling out of a new\n>method for making transitive equality deductions.\n\nThis sounds very interesting Tom. Is there some documentation somewhere \nwhere I can read about this new method?\n\n_________________________________________________________________\nLive Search, for accurate results! http://www.live.nl\n\n",
"msg_date": "Sun, 22 Apr 2007 18:17:40 +0200",
"msg_from": "\"henk de wit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join"
},
{
"msg_contents": "\"henk de wit\" <[email protected]> writes:\n> In that case the prediction is 2 rows, which is only 1 row more than in the \n> previous case. Yet the plan is much better and performance improved \n> dramatically. Is there a reason/explanation for that?\n\nWell, it's just an estimated-cost comparison. If there's only one row\nthen a nestloop join looks like the best way since it requires no extra\noverhead. But as soon as you get to two rows, the other side of the\njoin would have to be executed twice, and that's more expensive than\ndoing it once and setting up a hash table. In the actual event, with\n359 rows out of the scan, the nestloop way is just horrid because it\nrepeats the other side 359 times :-(\n\nIt strikes me that it might be interesting to use a minimum rowcount\nestimate of two rows, not one, for any case where we can't actually\nprove there is at most one row (ie, the query conditions match a unique\nindex). That is probably enough to discourage this sort of brittle\nbehavior ... though no doubt there'd still be cases where it's the\nwrong thing. We do not actually have any code right now to make such\nproofs, but there's been some discussion recently about adding such\nlogic in support of removing useless outer joins.\n\n>> FWIW, CVS HEAD does get rid of the duplicate conditions for the common\n>> case of mergejoinable equality operators --- but it's not explicitly\n>> looking for duplicate conditions, rather this is falling out of a new\n>> method for making transitive equality deductions.\n\n> This sounds very interesting Tom. Is there some documentation somewhere \n> where I can read about this new method?\n\nCheck the archives for mention of equivalence classes, notably these\ntwo threads:\nhttp://archives.postgresql.org/pgsql-hackers/2007-01/msg00568.php\nhttp://archives.postgresql.org/pgsql-hackers/2007-01/msg00826.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Apr 2007 13:53:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join "
},
{
"msg_contents": "\n>In the actual event, with\n>359 rows out of the scan, the nestloop way is just horrid because it\n>repeats the other side 359 times :-(\n\nIndeed. :(\n\nBtw, I tried to apply the removal of the redundant check in the larger query \n(the one from which I extracted the part shown earlier) but it only performs \nworse after that. The more redundant checks I remove, the slower the query \ngets. I figure the original designer of the query inserted those checks to \nquickly limit the number of rows involved in the nested loop. Of course, the \nproblem is probably not the number of rows involved, but the unfortunate \nchoice of the nested loop.\n\nI spend a few hours today in trying to figure it all out, but I'm pretty \nstuck at the moment.\n\nFor what its worth, this is the plan PG 8.2 comes up with right after I \nremove the same check that made the isolated query in the openings post so \nmuch faster:\n\nSort (cost=6006.54..6006.55 rows=1 width=597) (actual \ntime=14561.499..14561.722 rows=553 loops=1)\n Sort Key: public.banners_links.id\n -> Nested Loop Left Join (cost=3917.68..6006.53 rows=1 width=597) \n(actual time=64.723..14559.811 rows=553 loops=1)\n Join Filter: (public.banners_links.id = \npublic.fetch_banners.banners_links_id)\n -> Nested Loop Left Join (cost=3917.68..6002.54 rows=1 width=527) \n(actual time=64.607..14509.291 rows=553 loops=1)\n Join Filter: (public.banners_links.id = \nreward_ratings.banner_id)\n -> Nested Loop Left Join (cost=2960.36..4395.12 rows=1 \nwidth=519) (actual time=52.761..8562.575 rows=553 loops=1)\n Join Filter: (public.banners_links.id = \nbanners_banner_types.banner_id)\n -> Nested Loop Left Join (cost=2000.60..2956.57 rows=1 \nwidth=484) (actual time=32.026..304.700 rows=359 loops=1)\n Join Filter: (public.banners_links.id = \necpc_per_banner_link.banners_links_id)\n -> Nested Loop (cost=124.58..1075.70 rows=1 \nwidth=468) (actual time=9.793..187.724 rows=359 loops=1)\n -> Nested Loop Left Join \n(cost=124.58..1067.42 rows=1 width=89) (actual time=9.786..184.671 rows=359 \nloops=1)\n Join Filter: (public.banners_links.id \n= users_banners_tot_sub.banner_id)\n -> Hash Left Join \n(cost=107.97..1050.78 rows=1 width=81) (actual time=6.119..7.605 rows=359 \nloops=1)\n Hash Cond: \n(public.banners_links.id = special_deals.id)\n Filter: \n(special_deals.special_deal IS NULL)\n -> Bitmap Heap Scan on \nbanners_links (cost=11.43..954.03 rows=2 width=73) (actual \ntime=0.128..1.069 rows=359 loops=1)\n Recheck Cond: (merchant_id \n= 5631)\n Filter: ((status)::text = \n'0'::text)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.089..0.089 rows=424 loops=1)\n Index Cond: \n(merchant_id = 5631)\n -> Hash (cost=86.93..86.93 \nrows=769 width=16) (actual time=5.982..5.982 rows=780 loops=1)\n -> Subquery Scan \nspecial_deals (cost=69.62..86.93 rows=769 width=16) (actual \ntime=4.179..5.414 rows=780 loops=1)\n -> HashAggregate \n(cost=69.62..79.24 rows=769 width=16) (actual time=4.179..4.702 rows=780 \nloops=1)\n -> Seq Scan \non banner_deals (cost=0.00..53.75 rows=3175 width=16) (actual \ntime=0.006..1.480 rows=3175 loops=1)\n -> HashAggregate (cost=16.61..16.62 \nrows=1 width=24) (actual time=0.011..0.292 rows=424 loops=359)\n -> Nested Loop \n(cost=0.00..16.60 rows=1 width=24) (actual time=0.029..3.096 rows=424 \nloops=1)\n -> Index Scan using \nusers_banners_affiliate_id_idx on users_banners (cost=0.00..8.30 rows=1 \nwidth=16) (actual time=0.021..0.523 rows=424 loops=1)\n Index Cond: 
\n((affiliate_id = 5631) AND (affiliate_id = 5631))\n Filter: \n((status)::text = '3'::text)\n -> Index Scan using \nusers_banners_id_idx on users_banners_rotation (cost=0.00..8.29 rows=1 \nwidth=16) (actual time=0.003..0.004 rows=1 loops=424)\n Index Cond: \n(users_banners_rotation.users_banners_id = users_banners.id)\n -> Index Scan using \nbanners_org_id_banner.idx on banners_org (cost=0.00..8.27 rows=1 width=387) \n(actual time=0.005..0.006 rows=1 loops=359)\n Index Cond: (public.banners_links.id = \nbanners_org.id_banner)\n -> Sort (cost=1876.01..1876.50 rows=194 \nwidth=30) (actual time=0.062..0.183 rows=290 loops=359)\n Sort Key: CASE WHEN \n(precalculated_stats_banners_links.clicks_total > 0) THEN \n(((precalculated_stats_banners_links.revenue_total_affiliate / \n(precalculated_stats_banners_links.clicks_total)::numeric))::double \nprecision / 1000::double precision) ELSE 0::double precision END\n -> Merge IN Join (cost=1819.78..1868.64 \nrows=194 width=30) (actual time=16.701..21.797 rows=290 loops=1)\n Merge Cond: \n(precalculated_stats_banners_links.banners_links_id = \npublic.banners_links.id)\n -> Sort (cost=849.26..869.24 \nrows=7993 width=30) (actual time=12.416..15.717 rows=7923 loops=1)\n Sort Key: \nprecalculated_stats_banners_links.banners_links_id\n -> Index Scan using \npre_calc_banners_status on precalculated_stats_banners_links \n(cost=0.00..331.13 rows=7993 width=30) (actual time=0.006..6.209 rows=7923 \nloops=1)\n Index Cond: (status = 4)\n -> Sort (cost=970.52..971.58 \nrows=424 width=8) (actual time=0.885..1.049 rows=366 loops=1)\n Sort Key: \npublic.banners_links.id\n -> Bitmap Heap Scan on \nbanners_links (cost=11.54..952.02 rows=424 width=8) (actual \ntime=0.121..0.549 rows=424 loops=1)\n Recheck Cond: (merchant_id \n= 5631)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.087..0.087 rows=424 loops=1)\n Index Cond: \n(merchant_id = 5631)\n -> Hash Join (cost=959.77..1432.13 rows=514 width=43) \n(actual time=0.900..22.684 rows=658 loops=359)\n Hash Cond: (banners_banner_types.type_id = \nbanner_types.id)\n -> Hash IN Join (cost=957.32..1422.52 rows=540 \nwidth=16) (actual time=0.898..21.944 rows=658 loops=359)\n Hash Cond: (banners_banner_types.banner_id = \npublic.banners_links.id)\n -> Seq Scan on banners_banner_types \n(cost=0.00..376.40 rows=22240 width=16) (actual time=0.004..10.184 \nrows=22240 loops=359)\n -> Hash (cost=952.02..952.02 rows=424 \nwidth=8) (actual time=0.751..0.751 rows=424 loops=1)\n -> Bitmap Heap Scan on banners_links \n(cost=11.54..952.02 rows=424 width=8) (actual time=0.127..0.470 rows=424 \nloops=1)\n Recheck Cond: (merchant_id = \n5631)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.086..0.086 rows=424 loops=1)\n Index Cond: (merchant_id = \n5631)\n -> Hash (cost=2.20..2.20 rows=20 width=43) \n(actual time=0.037..0.037 rows=20 loops=1)\n -> Seq Scan on banner_types \n(cost=0.00..2.20 rows=20 width=43) (actual time=0.004..0.015 rows=20 \nloops=1)\n -> Hash IN Join (cost=957.32..1606.26 rows=93 width=16) \n(actual time=10.751..10.751 rows=0 loops=553)\n Hash Cond: (reward_ratings.banner_id = \npublic.banners_links.id)\n -> Seq Scan on reward_ratings (cost=0.00..633.66 \nrows=3827 width=16) (actual time=0.007..8.770 rows=4067 loops=553)\n Filter: ((now() >= period_start) AND (now() <= \nperiod_end))\n -> Hash (cost=952.02..952.02 rows=424 width=8) (actual \ntime=0.747..0.747 rows=424 loops=1)\n -> Bitmap Heap 
Scan on banners_links \n(cost=11.54..952.02 rows=424 width=8) (actual time=0.120..0.472 rows=424 \nloops=1)\n Recheck Cond: (merchant_id = 5631)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.088..0.088 rows=424 loops=1)\n Index Cond: (merchant_id = 5631)\n -> Seq Scan on fetch_banners (cost=0.00..2.88 rows=88 width=78) \n(actual time=0.003..0.042 rows=88 loops=553)\nTotal runtime: 14562.251 ms\n\nThe same check (merchant_id = 5631) still appears at 5 other places in the \nquery. If I remove one other, the query goes to 20 seconds, if I then remove \none other again it goes to 28 seconds, etc all the way to more than 40 \nseconds. I understand the above looks like a complicated mess, but would you \nhave any pointers of what I could possibly do next to force a better plan?\n\n>Check the archives for mention of equivalence classes, notably these\n>two threads:\n>http://archives.postgresql.org/pgsql-hackers/2007-01/msg00568.php\n>http://archives.postgresql.org/pgsql-hackers/2007-01/msg00826.php\n\nI'm going to read those. Thanks for the references.\n\n_________________________________________________________________\nPlay online games with your friends with Messenger \nhttp://www.join.msn.com/messenger/overview\n\n",
"msg_date": "Sun, 22 Apr 2007 22:22:22 +0200",
"msg_from": "\"henk de wit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join"
},
{
"msg_contents": "\"henk de wit\" <[email protected]> writes:\n> I understand the above looks like a complicated mess, but would you \n> have any pointers of what I could possibly do next to force a better plan?\n\nTaking a closer look, it seems the problem is the underestimation of the\nnumber of rows resulting from this relation scan:\n\n> -> Bitmap Heap Scan on \n> banners_links (cost=11.43..954.03 rows=2 width=73) (actual \n> time=0.128..1.069 rows=359 loops=1)\n> Recheck Cond: (merchant_id = 5631)\n> Filter: ((status)::text = '0'::text)\n> -> Bitmap Index Scan on \n> banners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \n> time=0.089..0.089 rows=424 loops=1)\n> Index Cond: (merchant_id = 5631)\n\nYou might be able to improve matters by increasing the statistics target\nfor this table. I have a bad feeling though that the problem may be\nlack of cross-column statistics --- the thing is evidently assuming\nthat only about 1 in 200 rows have status = '0', which might be accurate\nas a global average but not for this particular merchant. What exactly\nis the relationship between status and merchant_id, anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Apr 2007 16:42:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join "
},
{
"msg_contents": ">You might be able to improve matters by increasing the statistics target\n>for this table.\n\nI have tried to increase the statistics for the status column to the maximum \nof 1000. After that I performed an analyze, vacuum analyze and vacuum full \nanalyze on the table. Unfortunately this didn't seem to make any difference.\n\n>I have a bad feeling though that the problem may be\n>lack of cross-column statistics\n\nI assume this isn't a thing that can be tweaked/increased in PG 8.2?\n\n>--- the thing is evidently assuming\n>that only about 1 in 200 rows have status = '0', which might be accurate\n>as a global average but not for this particular merchant. What exactly\n>is the relationship between status and merchant_id, anyway?\n\nThe meaning is that a \"banners_link\" belongs to a merchant with the id \nmerchant_id. A \"banners_link\" can be disabled (status 1) or enabled (status \n0). Globally about 1/3 of the banners_links have status 0 and 2/3 have \nstatus 1. The 1 in 200 estimate is indeed way off.\n\nFor the merchant in question the numbers are a bit different though, but not \nthat much. Out of 424 rows total, 359 have status 0 and 65 have status 1.\n\n_________________________________________________________________\nFREE pop-up blocking with the new Windows Live Toolbar - get it now! \nhttp://toolbar.msn.click-url.com/go/onm00200415ave/direct/01/\n\n",
"msg_date": "Mon, 23 Apr 2007 00:39:04 +0200",
"msg_from": "\"henk de wit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join"
},
{
"msg_contents": "One interesting other thing to note; if I remove the banners_links.status = \n0 condition from the query altogether the execution times improve \ndramatically again. The results are not correct right now, but if worse \ncomes to worst I can always remove the unwanted rows in a procedural \nlanguage (it's a simple case of iterating a resultset and omitting rows with \nstatus 1). Of course this would not really be a neat solution.\n\nAnyway, the plan without the status = 0 condition now looks like this:\n\nSort (cost=6058.87..6058.88 rows=2 width=597) (actual time=305.869..306.138 \nrows=658 loops=1)\n Sort Key: public.banners_links.id\n -> Nested Loop Left Join (cost=5051.23..6058.86 rows=2 width=597) \n(actual time=69.956..304.259 rows=658 loops=1)\n Join Filter: (public.banners_links.id = \npublic.fetch_banners.banners_links_id)\n -> Nested Loop Left Join (cost=5048.26..6051.92 rows=2 width=527) \n(actual time=69.715..249.122 rows=658 loops=1)\n Join Filter: (public.banners_links.id = \nreward_ratings.banner_id)\n -> Nested Loop Left Join (cost=3441.91..4441.39 rows=2 \nwidth=519) (actual time=57.795..235.954 rows=658 loops=1)\n Join Filter: (public.banners_links.id = \necpc_per_banner_link.banners_links_id)\n -> Nested Loop (cost=1563.28..2554.02 rows=2 \nwidth=503) (actual time=35.359..42.018 rows=658 loops=1)\n -> Hash Left Join (cost=1563.28..2545.93 rows=2 \nwidth=124) (actual time=35.351..37.987 rows=658 loops=1)\n Hash Cond: (public.banners_links.id = \nusers_banners_tot_sub.banner_id)\n -> Hash Left Join (cost=1546.63..2529.27 \nrows=2 width=116) (actual time=30.757..32.552 rows=658 loops=1)\n Hash Cond: (public.banners_links.id = \nbanners_banner_types.banner_id)\n -> Hash Left Join \n(cost=108.08..1090.62 rows=2 width=81) (actual time=6.087..7.085 rows=424 \nloops=1)\n Hash Cond: \n(public.banners_links.id = special_deals.id)\n Filter: \n(special_deals.special_deal IS NULL)\n -> Bitmap Heap Scan on \nbanners_links (cost=11.54..952.02 rows=424 width=73) (actual \ntime=0.125..0.514 rows=424 loops=1)\n Recheck Cond: (merchant_id \n= 5631)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.089..0.089 rows=424 loops=1)\n Index Cond: \n(merchant_id = 5631)\n -> Hash (cost=86.93..86.93 \nrows=769 width=16) (actual time=5.951..5.951 rows=780 loops=1)\n -> Subquery Scan \nspecial_deals (cost=69.62..86.93 rows=769 width=16) (actual \ntime=4.164..5.389 rows=780 loops=1)\n -> HashAggregate \n(cost=69.62..79.24 rows=769 width=16) (actual time=4.164..4.670 rows=780 \nloops=1)\n -> Seq Scan \non banner_deals (cost=0.00..53.75 rows=3175 width=16) (actual \ntime=0.005..1.496 rows=3175 loops=1)\n -> Hash (cost=1432.13..1432.13 \nrows=514 width=43) (actual time=24.661..24.661 rows=658 loops=1)\n -> Hash Join \n(cost=959.77..1432.13 rows=514 width=43) (actual time=1.780..24.147 rows=658 \nloops=1)\n Hash Cond: \n(banners_banner_types.type_id = banner_types.id)\n -> Hash IN Join \n(cost=957.32..1422.52 rows=540 width=16) (actual time=1.738..23.332 rows=658 \nloops=1)\n Hash Cond: \n(banners_banner_types.banner_id = public.banners_links.id)\n -> Seq Scan on \nbanners_banner_types (cost=0.00..376.40 rows=22240 width=16) (actual \ntime=0.005..10.355 rows=22240 loops=1)\n -> Hash \n(cost=952.02..952.02 rows=424 width=8) (actual time=0.808..0.808 rows=424 \nloops=1)\n -> Bitmap \nHeap Scan on banners_links (cost=11.54..952.02 rows=424 width=8) (actual \ntime=0.114..0.515 rows=424 loops=1)\n Recheck \nCond: (merchant_id = 5631)\n 
-> \nBitmap Index Scan on banners_links_merchant_id_idx (cost=0.00..11.43 \nrows=424 width=0) (actual time=0.085..0.085 rows=424 loops=1)\n \nIndex Cond: (merchant_id = 5631)\n -> Hash (cost=2.20..2.20 \nrows=20 width=43) (actual time=0.034..0.034 rows=20 loops=1)\n -> Seq Scan on \nbanner_types (cost=0.00..2.20 rows=20 width=43) (actual time=0.004..0.016 \nrows=20 loops=1)\n -> Hash (cost=16.63..16.63 rows=1 \nwidth=24) (actual time=4.582..4.582 rows=424 loops=1)\n -> Subquery Scan \nusers_banners_tot_sub (cost=16.61..16.63 rows=1 width=24) (actual \ntime=3.548..4.235 rows=424 loops=1)\n -> HashAggregate \n(cost=16.61..16.62 rows=1 width=24) (actual time=3.547..3.850 rows=424 \nloops=1)\n -> Nested Loop \n(cost=0.00..16.60 rows=1 width=24) (actual time=0.031..3.085 rows=424 \nloops=1)\n -> Index Scan using \nusers_banners_affiliate_id_idx on users_banners (cost=0.00..8.30 rows=1 \nwidth=16) (actual time=0.021..0.516 rows=424 loops=1)\n Index Cond: \n((affiliate_id = 5631) AND (affiliate_id = 5631))\n Filter: \n((status)::text = '3'::text)\n -> Index Scan using \nusers_banners_id_idx on users_banners_rotation (cost=0.00..8.29 rows=1 \nwidth=16) (actual time=0.003..0.004 rows=1 loops=424)\n Index Cond: \n(users_banners_rotation.users_banners_id = users_banners.id)\n -> Index Scan using banners_org_id_banner.idx on \nbanners_org (cost=0.00..4.03 rows=1 width=387) (actual time=0.003..0.004 \nrows=1 loops=658)\n Index Cond: (public.banners_links.id = \nbanners_org.id_banner)\n -> Materialize (cost=1878.63..1880.57 rows=194 \nwidth=20) (actual time=0.034..0.153 rows=290 loops=658)\n -> Sort (cost=1876.01..1876.50 rows=194 \nwidth=30) (actual time=22.105..22.230 rows=290 loops=1)\n Sort Key: CASE WHEN \n(precalculated_stats_banners_links.clicks_total > 0) THEN \n(((precalculated_stats_banners_links.revenue_total_affiliate / \n(precalculated_stats_banners_links.clicks_total)::numeric))::double \nprecision / 1000::double precision) ELSE 0::double precision END\n -> Merge IN Join (cost=1819.78..1868.64 \nrows=194 width=30) (actual time=16.723..21.832 rows=290 loops=1)\n Merge Cond: \n(precalculated_stats_banners_links.banners_links_id = \npublic.banners_links.id)\n -> Sort (cost=849.26..869.24 \nrows=7993 width=30) (actual time=12.474..15.725 rows=7923 loops=1)\n Sort Key: \nprecalculated_stats_banners_links.banners_links_id\n -> Index Scan using \npre_calc_banners_status on precalculated_stats_banners_links \n(cost=0.00..331.13 rows=7993 width=30) (actual time=0.007..6.220 rows=7923 \nloops=1)\n Index Cond: (status = 4)\n -> Sort (cost=970.52..971.58 \nrows=424 width=8) (actual time=0.862..1.012 rows=366 loops=1)\n Sort Key: \npublic.banners_links.id\n -> Bitmap Heap Scan on \nbanners_links (cost=11.54..952.02 rows=424 width=8) (actual \ntime=0.121..0.490 rows=424 loops=1)\n Recheck Cond: (merchant_id \n= 5631)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.087..0.087 rows=424 loops=1)\n Index Cond: \n(merchant_id = 5631)\n -> Materialize (cost=1606.35..1607.28 rows=93 width=16) \n(actual time=0.019..0.019 rows=0 loops=658)\n -> Hash IN Join (cost=957.32..1606.25 rows=93 \nwidth=16) (actual time=11.916..11.916 rows=0 loops=1)\n Hash Cond: (reward_ratings.banner_id = \npublic.banners_links.id)\n -> Seq Scan on reward_ratings (cost=0.00..633.66 \nrows=3826 width=16) (actual time=0.016..9.190 rows=4067 loops=1)\n Filter: ((now() >= period_start) AND (now() \n<= period_end))\n -> Hash (cost=952.02..952.02 rows=424 width=8) \n(actual 
time=0.738..0.738 rows=424 loops=1)\n -> Bitmap Heap Scan on banners_links \n(cost=11.54..952.02 rows=424 width=8) (actual time=0.118..0.459 rows=424 \nloops=1)\n Recheck Cond: (merchant_id = 5631)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.086..0.086 rows=424 loops=1)\n Index Cond: (merchant_id = 5631)\n -> Materialize (cost=2.97..3.85 rows=88 width=78) (actual \ntime=0.000..0.037 rows=88 loops=658)\n -> Seq Scan on fetch_banners (cost=0.00..2.88 rows=88 \nwidth=78) (actual time=0.005..0.052 rows=88 loops=1)\nTotal runtime: 306.734 ms\n\n_________________________________________________________________\nPlay online games with your friends with Messenger \nhttp://www.join.msn.com/messenger/overview\n\n",
"msg_date": "Mon, 23 Apr 2007 00:58:15 +0200",
"msg_from": "\"henk de wit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join"
},
{
"msg_contents": "\"henk de wit\" <[email protected]> writes:\n>> --- the thing is evidently assuming\n>> that only about 1 in 200 rows have status = '0', which might be accurate\n>> as a global average but not for this particular merchant. What exactly\n>> is the relationship between status and merchant_id, anyway?\n\n> The meaning is that a \"banners_link\" belongs to a merchant with the id \n> merchant_id. A \"banners_link\" can be disabled (status 1) or enabled (status \n> 0). Globally about 1/3 of the banners_links have status 0 and 2/3 have \n> status 1. The 1 in 200 estimate is indeed way off.\n\nWell, that's darn odd. It should not be getting that so far wrong.\nWhat's the datatype of the status column exactly (I'm guessing varchar\nbut maybe not)? Would you show us the pg_stats row for the status column?\n\n> One interesting other thing to note; if I remove the banners_links.status = \n> 0 condition from the query altogether the execution times improve \n> dramatically again.\n\nRight, because it was dead on about how many merchant_id = 5631 rows\nthere are. The estimation error is creeping in where it guesses how\nselective the status filter is. It should be using the global fraction\nof status = 0 rows for that, but it seems to be using a default estimate\ninstead (1/200 is actually the default eqsel estimate now that I think\nabout it). I'm not sure why, but I think it must have something to do\nwith the subquery structure of your query. Were you showing us the\nwhole truth about your query, or were there details you left out?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Apr 2007 22:14:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join "
}
]
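For reference, the statements behind the statistics experiments discussed above would look roughly like this (table and column names are taken from the thread; 1000 is the per-column maximum mentioned earlier):

-- Raise the statistics target for the column and refresh the statistics.
ALTER TABLE banners_links ALTER COLUMN status SET STATISTICS 1000;
ANALYZE banners_links;

-- Inspect what the planner knows about the column; this is the pg_stats row
-- Tom asks for above, which henk pastes in the next message.
SELECT null_frac, avg_width, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'banners_links' AND attname = 'status';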
[
{
"msg_contents": ">Well, that's darn odd. It should not be getting that so far wrong.\n>What's the datatype of the status column exactly (I'm guessing varchar\n>but maybe not)? Would you show us the pg_stats row for the status column?\n\nIt has been created as a char(1) in fact. The pg_stats row for the status \ncolumn is:\n\npublic|banners_links|status|0|5|2|{0,1}|{0.626397,0.373603}||0.560611\n\n>I'm not sure why, but I think it must have something to do\n>with the subquery structure of your query. Were you showing us the\n>whole truth about your query, or were there details you left out?\n\nThe query I gave in the opening post was just a small part, the part that I \ninitially identified as the 'slow path'. The last plan I gave was from the \nwhole query, without any details left out. I didn't gave the SQL of that \nyet, so here it is:\n\nSELECT\n\tid,\n\tstatus,\n\tmerchant_id,\n\tdescription,\n\torg_text,\n\tusers_banners_id,\n\tbanner_url,\n\tcookie_redirect,\n\ttype,\n\n\tCASE WHEN special_deal IS null THEN\n\t\t''\n\tELSE\n\t\t'special deal'\n\tEND AS special_deal,\n\n\tCASE WHEN url_of_banner IS null\tTHEN\n\t\t''\n\tELSE\n\t\turl_of_banner\n\tEND AS url_of_banner,\n\n\tCASE WHEN period_end IS NULL THEN\n\t\t'not_active'\n\tELSE\n\t\t'active'\n\tEND AS active_not_active,\n\n\tCASE WHEN ecpc IS NULL THEN\n\t\t0.00\n\tELSE\n\t\tROUND(ecpc::numeric,2)\n\tEND AS ecpc,\n\n\tCASE WHEN ecpc_merchant IS NULL THEN\n\t\t0.00\n\tELSE\n\t\tROUND(ecpc_merchant::numeric,2)\n\tEND AS ecpc_merchant\n\nFROM\n\t/* SUBQUERY grand_total_fetch_banners */ (\n\t\t/* SUBQUERY grand_total */(\n\t\t\t/* SUBQUERY banners_special_deals */\t(\n\n\t\t\t\t/* SUBQUERY banners */ (\n\t\t\t\t\tSELECT\n\t\t\t\t\t\t*\n\t\t\t\t\tFROM\n\t\t\t\t\t\t/* SUBQUERY banners_links */ (\n\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\tbanners_links.id,\n\t\t\t\t\t\t\t\tmerchant_id,\n\t\t\t\t\t\t\t\tbanners_org.banner_text AS org_text,\n\t\t\t\t\t\t\t\tdescription,\n\t\t\t\t\t\t\t\tstatus,\n\t\t\t\t\t\t\t\tbanner_url,\n\t\t\t\t\t\t\t\tecpc,\n\t\t\t\t\t\t\t\tecpc_merchant,\n\t\t\t\t\t\t\t\tCOALESCE(cookie_redirect,0) AS cookie_redirect\n\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t/* SUBQUERY banners_links */ (\n\n\t\t\t\t\t\t\t\t\t/* subselect tot join ecpc_per_banner_links on banners_links*/\n\t\t\t\t\t\t\t\t\t/* SUBQUERY banners_links */ (\n\t\t\t\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\t\t\t\t*\n\t\t\t\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t\t\t\tbanners_links\n\t\t\t\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\t\t\t\tmerchant_id = 5631\n\t\t\t\t\t\t\t\t\t) AS banners_links\n\n\t\t\t\t\t\t\t\t\t\tLEFT OUTER JOIN\n\n\t\t\t\t\t\t\t\t\t/* SUBQUERY ecpc_per_banner_link */\t(\n\t\t\t\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\t\t\t\tCASE WHEN clicks_total > 0 THEN\n\t\t\t\t\t\t\t\t\t\t\t\t(revenue_total_affiliate/clicks_total)::float/1000.0\n\t\t\t\t\t\t\t\t\t\t\tELSE\n\t\t\t\t\t\t\t\t\t\t\t\t0.0\n\t\t\t\t\t\t\t\t\t\t\tEND AS ecpc,\n\t\t\t\t\t\t\t\t\t\t\tCASE WHEN clicks_total > 0 THEN\n\t\t\t\t\t\t\t\t\t\t\t\t(revenue_total/clicks_total)::float/1000.0\n\t\t\t\t\t\t\t\t\t\t\tELSE\n\t\t\t\t\t\t\t\t\t\t\t\t0.0\n\t\t\t\t\t\t\t\t\t\t\tEND AS ecpc_merchant,\n\t\t\t\t\t\t\t\t\t\t\tbanners_links_id\n\t\t\t\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t\t\t\tprecalculated_stats_banners_links\n\t\t\t\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\t\t\t\tstatus = 4\t\t\tAND\n\t\t\t\t\t\t\t\t\t\t\tbanners_links_id IN /* SUBQUERY */ 
(\n\t\t\t\t\t\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\t\t\t\t\t\tid\n\t\t\t\t\t\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t\t\t\t\t\tbanners_links\n\t\t\t\t\t\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\t\t\t\t\t\tmerchant_id = 5631\n\t\t\t\t\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t\t\t\tORDER BY\n\t\t\t\t\t\t\t\t\t\t\tecpc DESC\n\t\t\t\t\t\t\t\t\t) AS ecpc_per_banner_link\n\n\t\t\t\t\t\t\t\t\t\tON (banners_links.id = ecpc_per_banner_link.banners_links_id)\n\t\t\t\t\t\t\t\t) AS banners_links\n\n\t\t\t\t\t\t\t\t\t,\n\n\t\t\t\t\t\t\t\tbanners_org\n\n\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\tmerchant_id = 5631\t\t\t\t\t\t\tAND\n\t\t\t\t\t\t\t\tbanners_links.id = banners_org.id_banner\t\t\tAND\n\t\t\t\t\t\t\t\t(banners_links.id = -1 OR -1 = -1)\tAND\n\t\t\t\t\t\t\t\t(banners_links.status = 0 OR 0 = -1)\n\t\t\t\t\t\t) AS banners_links\n\n\t\t\t\t\t\t\tLEFT OUTER JOIN\n\n\t\t\t\t\t\t/* SUBQUERY users_banners_tot_sub */(\n\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\tMAX (users_banners_id) AS users_banners_id,\n\t\t\t\t\t\t\t\tmerchant_users_banners_id,\n\t\t\t\t\t\t\t\tbanner_id\n\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t/* SUBQUERY users_banners_rotations_sub */(\n\t\t\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\t\t\taffiliate_id \t\tAS merchant_users_banners_id,\n\t\t\t\t\t\t\t\t\t\tusers_banners.id \tAS users_banners_id,\n\t\t\t\t\t\t\t\t\t\tusers_banners_rotation.banner_id\n\t\t\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t\t\tusers_banners, users_banners_rotation\n\t\t\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\t\t\taffiliate_id = 5631\t\t\t\t\t\t\t\t\tAND\n\t\t\t\t\t\t\t\t\t\tusers_banners_rotation.users_banners_id = users_banners.id\tAND\n\t\t\t\t\t\t\t\t\t\tusers_banners.status = 3\n\t\t\t\t\t\t\t\t) AS users_banners_rotations_sub\n\t\t\t\t\t\t\tGROUP BY\n\t\t\t\t\t\t\t\tmerchant_users_banners_id,banner_id\n\t\t\t\t\t\t) AS users_banners_tot_sub\n\n\t\t\t\t\t\t\tON (\n\t\t\t\t\t\t\t\tbanners_links.id = users_banners_tot_sub.banner_id \tAND\n\t\t\t\t\t\t\t\tbanners_links.merchant_id = \nusers_banners_tot_sub.merchant_users_banners_id\n\t\t\t\t\t\t\t)\n\t\t\t\t\t) AS banners\n\n\t\t\t\t\t\tLEFT OUTER JOIN\n\n\t\t\t\t\t/* SUBQUERY special_deals */(\n\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\tbanner_deals.banner_id \tAS id,\n\t\t\t\t\t\t\tMAX(affiliate_id) \t\tAS special_deal\n\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\tbanner_deals\n\t\t\t\t\t\tGROUP BY\n\t\t\t\t\t\t\tbanner_deals.banner_id\n\t\t\t\t\t) AS special_deals\n\n\t\t\t\t\t\tUSING (id)\n\n\t\t\t) AS banners_special_deals\n\n\t\t\t\tLEFT OUTER JOIN\n\n\t\t\t/* SUBQUERY types */ (\n\t\t\t\tSELECT\n\t\t\t\t\tbanner_types.id \t\t\t\tAS type_id,\n\t\t\t\t\tbanner_types.type \t\t\t\tAS type,\n\t\t\t\t\tbanners_banner_types.banner_id \tAS id\n\t\t\t\tFROM\n\t\t\t\t\tbanner_types,banners_banner_types\n\t\t\t\tWHERE\n\t\t\t\t\tbanners_banner_types.banner_id IN /* SUBQUERY */ (\n\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\tid\n\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\tbanners_links\n\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\tmerchant_id = 5631\n\t\t\t\t\t) AND\n\t\t\t\t\tbanners_banner_types.type_id = banner_types.id\n\t\t ) AS types\n\n\t\t\t\tUSING (id)\n\n\t\t) as grand_total\n\n\t\t\tLEFT OUTER JOIN\n\n\t\t/* SUBQUERY fetch_banners */ (\n\t\t\tSELECT\n\t\t\t\tbanners_links_id AS id,\n\t\t\t\turl_of_banner\n\t\t\tFROM\n\t\t\t\tfetch_banners\n\t\t) AS fetch_banners\n\n\t\t\tUSING (id)\n\t) AS grand_total_fetch_banners\n\n\t\tLEFT OUTER JOIN\n\n /* SUBQUERY active_banners */ (\n \tSELECT\n\t \tbanner_id AS id,\n\t \tperiod_end\n \tFROM\n \t\treward_ratings\n \tWHERE\n \t\tnow() BETWEEN period_start AND period_end\n \tAND\n \t\tbanner_id IN /* SUBQUERY */ (\n 
\t\t\tSELECT\n \t\t\t\tid\n \t\t\tFROM\n \t\t\t\tbanners_links\n \t\t\tWHERE\n \t\t\t\tmerchant_id = 5631\n \t\t)\n ) AS active_banners\n\n \tUSING (id)\n\nWHERE\n\t(type_id = -1 OR -1 = -1 )\tAND\n\t(special_deal IS null)\n\nORDER BY\n\tid DESC\n\nThis is the original query without even the earlier mentioned redundant \ncheck removed. For this query, PG 8.2 creates the following plan:\n\nSort (cost=5094.40..5094.41 rows=1 width=597) (actual \ntime=15282.503..15282.734 rows=553 loops=1)\n Sort Key: public.banners_links.id\n -> Nested Loop Left Join (cost=3883.68..5094.39 rows=1 width=597) \n(actual time=64.066..15280.773 rows=553 loops=1)\n Join Filter: (public.banners_links.id = reward_ratings.banner_id)\n -> Nested Loop Left Join (cost=2926.37..3486.98 rows=1 width=589) \n(actual time=51.992..9231.245 rows=553 loops=1)\n Join Filter: (public.banners_links.id = \npublic.fetch_banners.banners_links_id)\n -> Nested Loop Left Join (cost=2926.37..3483.00 rows=1 \nwidth=519) (actual time=51.898..9183.007 rows=553 loops=1)\n Join Filter: (public.banners_links.id = \necpc_per_banner_link.banners_links_id)\n -> Nested Loop (cost=1050.35..1602.14 rows=1 \nwidth=503) (actual time=29.585..9015.077 rows=553 loops=1)\n -> Nested Loop Left Join (cost=1050.35..1593.86 \nrows=1 width=124) (actual time=29.577..9010.273 rows=553 loops=1)\n Join Filter: (public.banners_links.id = \nusers_banners_tot_sub.banner_id)\n -> Nested Loop Left Join \n(cost=1033.74..1577.21 rows=1 width=116) (actual time=25.904..8738.006 \nrows=553 loops=1)\n Join Filter: (public.banners_links.id \n= special_deals.id)\n Filter: (special_deals.special_deal IS \nNULL)\n -> Nested Loop Left Join \n(cost=964.12..1480.67 rows=1 width=108) (actual time=20.905..8259.497 \nrows=553 loops=1)\n Join Filter: \n(public.banners_links.id = banners_banner_types.banner_id)\n -> Bitmap Heap Scan on \nbanners_links (cost=4.35..42.12 rows=1 width=73) (actual time=0.160..1.122 \nrows=359 loops=1)\n Recheck Cond: \n((merchant_id = 5631) AND (merchant_id = 5631))\n Filter: ((status)::text = \n'0'::text)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..4.35 rows=10 width=0) (actual \ntime=0.123..0.123 rows=424 loops=1)\n Index Cond: \n((merchant_id = 5631) AND (merchant_id = 5631))\n -> Hash Join \n(cost=959.77..1432.13 rows=514 width=43) (actual time=0.899..22.685 rows=658 \nloops=359)\n Hash Cond: \n(banners_banner_types.type_id = banner_types.id)\n -> Hash IN Join \n(cost=957.32..1422.52 rows=540 width=16) (actual time=0.897..21.946 rows=658 \nloops=359)\n Hash Cond: \n(banners_banner_types.banner_id = public.banners_links.id)\n -> Seq Scan on \nbanners_banner_types (cost=0.00..376.40 rows=22240 width=16) (actual \ntime=0.004..10.164 rows=22240 loops=359)\n -> Hash \n(cost=952.02..952.02 rows=424 width=8) (actual time=0.790..0.790 rows=424 \nloops=1)\n -> Bitmap \nHeap Scan on banners_links (cost=11.54..952.02 rows=424 width=8) (actual \ntime=0.108..0.503 rows=424 loops=1)\n Recheck \nCond: (merchant_id = 5631)\n -> \nBitmap Index Scan on banners_links_merchant_id_idx (cost=0.00..11.43 \nrows=424 width=0) (actual time=0.078..0.078 rows=424 loops=1)\n \nIndex Cond: (merchant_id = 5631)\n -> Hash (cost=2.20..2.20 \nrows=20 width=43) (actual time=0.033..0.033 rows=20 loops=1)\n -> Seq Scan on \nbanner_types (cost=0.00..2.20 rows=20 width=43) (actual time=0.004..0.017 \nrows=20 loops=1)\n -> HashAggregate (cost=69.62..79.24 \nrows=769 width=16) (actual time=0.008..0.498 rows=780 loops=553)\n -> Seq Scan on banner_deals 
\n(cost=0.00..53.75 rows=3175 width=16) (actual time=0.004..1.454 rows=3175 \nloops=1)\n -> HashAggregate (cost=16.61..16.62 rows=1 \nwidth=24) (actual time=0.007..0.291 rows=424 loops=553)\n -> Nested Loop (cost=0.00..16.60 \nrows=1 width=24) (actual time=0.056..3.123 rows=424 loops=1)\n -> Index Scan using \nusers_banners_affiliate_id_idx on users_banners (cost=0.00..8.30 rows=1 \nwidth=16) (actual time=0.046..0.555 rows=424 loops=1)\n Index Cond: ((affiliate_id \n= 5631) AND (affiliate_id = 5631))\n Filter: ((status)::text = \n'3'::text)\n -> Index Scan using \nusers_banners_id_idx on users_banners_rotation (cost=0.00..8.29 rows=1 \nwidth=16) (actual time=0.003..0.004 rows=1 loops=424)\n Index Cond: \n(users_banners_rotation.users_banners_id = users_banners.id)\n -> Index Scan using \"banners_org_id_banner.idx\" \non banners_org (cost=0.00..8.27 rows=1 width=387) (actual time=0.005..0.006 \nrows=1 loops=553)\n Index Cond: (public.banners_links.id = \nbanners_org.id_banner)\n -> Sort (cost=1876.01..1876.50 rows=194 width=30) \n(actual time=0.041..0.161 rows=290 loops=553)\n Sort Key: CASE WHEN \n(precalculated_stats_banners_links.clicks_total > 0) THEN \n(((precalculated_stats_banners_links.revenue_total_affiliate / \n(precalculated_stats_banners_links.clicks_total)::numeric))::double \nprecision / 1000::double precision) ELSE 0::double precision END\n -> Merge IN Join (cost=1819.78..1868.64 rows=194 \nwidth=30) (actual time=16.769..21.879 rows=290 loops=1)\n Merge Cond: \n(precalculated_stats_banners_links.banners_links_id = \npublic.banners_links.id)\n -> Sort (cost=849.26..869.24 rows=7993 \nwidth=30) (actual time=12.486..15.740 rows=7923 loops=1)\n Sort Key: \nprecalculated_stats_banners_links.banners_links_id\n -> Index Scan using \npre_calc_banners_status on precalculated_stats_banners_links \n(cost=0.00..331.13 rows=7993 width=30) (actual time=0.007..6.291 rows=7923 \nloops=1)\n Index Cond: (status = 4)\n -> Sort (cost=970.52..971.58 rows=424 \nwidth=8) (actual time=0.879..1.023 rows=366 loops=1)\n Sort Key: public.banners_links.id\n -> Bitmap Heap Scan on banners_links \n(cost=11.54..952.02 rows=424 width=8) (actual time=0.123..0.509 rows=424 \nloops=1)\n Recheck Cond: (merchant_id = \n5631)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.089..0.089 rows=424 loops=1)\n Index Cond: (merchant_id = \n5631)\n -> Seq Scan on fetch_banners (cost=0.00..2.88 rows=88 \nwidth=78) (actual time=0.003..0.042 rows=88 loops=553)\n -> Hash IN Join (cost=957.32..1606.24 rows=93 width=16) (actual \ntime=10.933..10.933 rows=0 loops=553)\n Hash Cond: (reward_ratings.banner_id = \npublic.banners_links.id)\n -> Seq Scan on reward_ratings (cost=0.00..633.66 rows=3822 \nwidth=16) (actual time=0.007..8.955 rows=4067 loops=553)\n Filter: ((now() >= period_start) AND (now() <= \nperiod_end))\n -> Hash (cost=952.02..952.02 rows=424 width=8) (actual \ntime=0.738..0.738 rows=424 loops=1)\n -> Bitmap Heap Scan on banners_links \n(cost=11.54..952.02 rows=424 width=8) (actual time=0.118..0.475 rows=424 \nloops=1)\n Recheck Cond: (merchant_id = 5631)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..11.43 rows=424 width=0) (actual \ntime=0.087..0.087 rows=424 loops=1)\n Index Cond: (merchant_id = 5631)\nTotal runtime: 15283.225 ms\n\nIf I change 1 of the redundant checks:\n\n/* SUBQUERY banners_links */ (\n\tSELECT\n\t\t*\n\tFROM\n\t\tbanners_links\n\tWHERE\n\t\tmerchant_id = 5631\n) AS banners_links\n\ninto just banner_links, PG 
comes up with the (large) plan I posted earlier.\n\n_________________________________________________________________\nLive Search, for accurate results! http://www.live.nl\n\n",
"msg_date": "Mon, 23 Apr 2007 13:35:26 +0200",
"msg_from": "\"henk de wit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join"
}
]
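For readability, the flat pg_stats row pasted above expands into the view's columns as follows (standard pg_stats column order):

-- pg_stats row for banners_links.status, expanded:
--   schemaname        = public
--   tablename         = banners_links
--   attname           = status
--   null_frac         = 0
--   avg_width         = 5
--   n_distinct        = 2
--   most_common_vals  = {0,1}
--   most_common_freqs = {0.626397,0.373603}
--   histogram_bounds  = (empty)
--   correlation       = 0.560611

So the collected statistics themselves record status = 0 as a common value (roughly 63% of rows), nowhere near the 1-in-200 default estimate that Tom points out the plan is reflecting.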
[
{
"msg_contents": "Hi,\r\n\r\nI have a table in my database that is updated every minute with new acquired\r\ndata. Anyway there is a query to get latest values to be displayed on\r\nscreen. I have postgresql 7.4.2 that work very fine. The problem was that\r\nafter hdd crash I have rebuild database from the archive and... Execution\r\ntime of this query starts to be unacceptable. And I found funny thing. Using\r\nstatic value in place expression remove this problem. Query started to be\r\nexecuted fast again.\r\n\r\nI did not change any settings in postgresql configuration. Just had to\r\nrestart all the services.\r\n\r\nCan someone tell me why the optimizer stopped to choose index? I had seqscan\r\ndisabled already.\r\n\r\nOne note about those two outputs below: there are different number of\r\ntouples returned due to the fact that in fact the timestamp is chosen\r\ndifferently. \r\n\r\nRegards,\r\n\r\n/Arek\r\n\r\n------------------------------------------------------------------\r\n\r\nexplain analyze SELECT distinct on (index) index, status, value FROM _values\r\nWHERE device=1 and timestamp>(now()-5*interval '1 min') ORDER by index,\r\ntimestamp desc;\r\n QUERY\r\nPLAN \r\n----------------------------------------------------------------------------\r\n-------------------------------------------------------------\r\n Unique (cost=100117679.93..100117756.29 rows=1 width=24) (actual\r\ntime=5279.262..5279.308 rows=10 loops=1)\r\n -> Sort (cost=100117679.93..100117718.11 rows=15272 width=24) (actual\r\ntime=5279.260..5279.275 rows=21 loops=1)\r\n Sort Key: \"index\", \"timestamp\"\r\n -> Seq Scan on _values (cost=100000000.00..100116618.64\r\nrows=15272 width=24) (actual time=5277.596..5279.184 rows=21 loops=1)\r\n Filter: ((device = 1) AND ((\"timestamp\")::timestamp with time\r\nzone > (now() - '00:05:00'::interval)))\r\n Total runtime: 5279.391 ms\r\n(6 rows)\r\n\r\nexplain analyze SELECT distinct on (index) index, status, value FROM _values\r\nWHERE device=1 and timestamp>'2007-04-22 21:20' ORDER by index, timestamp\r\ndesc;\r\n QUERY\r\nPLAN \r\n----------------------------------------------------------------------------\r\n---------------------------------------------------------------\r\n Unique (cost=703.45..703.47 rows=1 width=24) (actual time=4.807..4.867\r\nrows=10 loops=1)\r\n -> Sort (cost=703.45..703.46 rows=5 width=24) (actual time=4.804..4.827\r\nrows=31 loops=1)\r\n Sort Key: \"index\", \"timestamp\"\r\n -> Index Scan using _values_dbidx_idx on _values \r\n(cost=0.00..703.39 rows=5 width=24) (actual time=0.260..4.728 rows=31\r\nloops=1)\r\n Index Cond: (\"timestamp\" > '2007-04-22 21:20:00'::timestamp\r\nwithout time zone)\r\n Filter: (device = 1)\r\n Total runtime: 4.958 ms\r\n(7 rows)\r\n\r\n\r\n-- \r\nList przeskanowano programem ArcaMail, ArcaVir 2007\r\nprzeskanowano 2007-04-23 19:20:29, silnik: 2007.01.01 12:00:00, bazy: 2007.04.15 09:21:20\r\nThis message has been scanned by ArcaMail, ArcaVir 2007\r\nscanned 2007-04-23 19:20:29, engine: 2007.01.01 12:00:00, base: 2007.04.15 09:21:20\r\nhttp://www.arcabit.com\r\n\n",
"msg_date": "Mon, 23 Apr 2007 19:20:29 +0200",
"msg_from": "\"Arkadiusz Raj\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "index usage"
},
{
"msg_contents": "On Mon, Apr 23, 2007 at 07:20:29PM +0200, Arkadiusz Raj wrote:\n> I have a table in my database that is updated every minute with new acquired\n\n> data. Anyway there is a query to get latest values to be displayed on\n\n> screen. I have postgresql 7.4.2 that work very fine.\n\nYou want _at least_ the latest 7.4 version -- ideally, the latest 8.2\nversion.\n\n> The problem was that\n\n> after hdd crash I have rebuild database from the archive and... Execution\n\n> time of this query starts to be unacceptable.\n\nHave you re-ANALYZEd after the data load?\n\nAnyhow, the issue with the planner not knowing how to estimate expressions\nlike \"now() - interval '5 minutes'\" correctly is a known 7.4 issue, and it's\nfixed in later versions. It might have worked more or less by accident\nearlier, although it seems odd that it wouldn't even have considered the\nindex scan...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 23 Apr 2007 19:58:11 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage"
}
]
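Two things that might be worth trying on the 7.4 box while the upgrade Steinar recommends is being planned, sketched with the table and column names from the query above (the new index name is made up here): refresh the statistics after the restore, and give the planner an index that covers both predicates, since the existing _values_dbidx_idx is only used for the timestamp condition while device = 1 is applied as a post-index filter.

-- Refresh planner statistics after the reload from the archive
-- (Steinar's first question).
ANALYZE _values;

-- Hypothetical two-column index matching the query's predicates
-- (equality on device, range on timestamp).
CREATE INDEX _values_device_ts_idx ON _values (device, "timestamp");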
[
{
"msg_contents": "Hi,\n\n\nI tried to create a standby system as per the steps mentioned in the\nfollowing article\n\nhttp://archives.postgresql.org/sydpug/2006-10/msg00001.php.\n\nCan anybody let me know the steps which are supposed to be followed to make\nthe standby machine for read access? and how it should be one.\n\n\nAlso it will be helpful if you can let me know what care needs to be taken\nto implement this in production.\n\n\nRegards,\nNimesh.\n\nHi,\n \n \nI tried to create a standby system as per the steps mentioned in the following article\n \nhttp://archives.postgresql.org/sydpug/2006-10/msg00001.php.\n \nCan anybody let me know the steps which are supposed to be followed to make the standby machine for read access? and how it should be one.\n \n \nAlso it will be helpful if you can let me know what care needs to be taken to implement this in production.\n \n \nRegards,\nNimesh.",
"msg_date": "Tue, 24 Apr 2007 12:41:55 +0530",
"msg_from": "\"Nimesh Satam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Warm - standby system."
},
{
"msg_contents": "On 4/24/07, Nimesh Satam <[email protected]> wrote:\n> Can anybody let me know the steps which are supposed to be followed to make\n> the standby machine for read access? and how it should be one.\n\nNot possible at the moment. The warm standby is not \"hot\" -- it cannot\nbe used for queries while it's acting as a standby. Explained here:\n\n http://www.postgresql.org/docs/8.2/static/warm-standby.html\n\nAlexander.\n",
"msg_date": "Tue, 24 Apr 2007 11:33:05 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Warm - standby system."
}
] |
[
{
"msg_contents": "We have a table which we want to normalize and use the same SQL to\nperform selects using a view. \nThe old table had 3 columns in it's index\n(region_id,wx_element,valid_time). \nThe new table meteocode_elmts has a similar index but the region_id is a\nreference to another table region_lookup and wx_element to table\nmeteocode_elmts_lookup. This will make our index and table\nsignificantly smaller. \nAs stated ablove we want to use the same SQL query to check the view. \nThe problem is we have not been able to set up the view so that it\nreferences the \"rev\" index. It just uses the region_id but ignores the\nwx_element, therefore the valid_time is also ignored. The rev index now\nconsists of region_id(reference to region_lookup\ntable),wx_element(reference to meteocode_elmts_lookup) and valid_time.\n\nWe are using Postgresql 7.4.0. Below is the relevant views and tables\nplus an explain analyze of the query to the old table and the view.\n\nOld table forceastelement\nphoenix=# \\d forecastelement \n Table \"public.forecastelement\" \n Column | Type | Modifiers \n----------------+-----------------------------+-----------\n origin | character varying(10) | not null \n timezone | character varying(99) | not null \n region_id | character varying(20) | not null \n wx_element | character varying(99) | not null \n value | character varying(99) | not null \n flag | character(3) | not null \n units | character varying(99) | not null \n valid_time | timestamp without time zone | not null \n issue_time | timestamp without time zone | not null \n next_forecast | timestamp without time zone | not null reception_time\n| timestamp without time zone | not null\nIndexes: \n \"forecastelement_rwv_idx\" btree (region_id, wx_element, valid_time) \n\nNew and view nad tables are\nphoenix=# \\d fcstelmt_view \n View \"public.fcstelmt_view\" \n Column | Type | Modifiers \n----------------+-----------------------------+-----------\n origin | character varying(10) | \n timezone | character varying(10) | \n region_id | character varying(99) | \n wx_element | character varying(99) | \n value | character varying(99) | \n flag | character(3) | \n unit | character varying | \n valid_time | timestamp without time zone | \n issue_time | timestamp without time zone | \n next_forecast | timestamp without time zone | reception_time |\ntimestamp without time zone | \n\nView definition: \n SELECT meteocode_bltns.origin, meteocode_bltns.timezone,\nregion_lookup.region_id, meteocode_elmts_lookup.wx_element,\nmeteocode_elmts.value, meteocode_bltns.flag, ( SELECT\nmeteocode_units_lookup.unit FROM meteocode_units_lookup WHERE\nmeteocode_units_lookup.id = meteocode_elmts.unit_id) AS unit,\nmeteocode_elmts.valid_time, meteocode_bltns.issue_time,\nmeteocode_bltns.next_forecast, meteocode_bltns.reception_time FROM\nmeteocode_bltns, meteocode_elmts, region_lookup, meteocode_elmts_lookup\nWHERE meteocode_bltns.meteocode_id = meteocode_elmts.meteocode AND\nregion_lookup.id = meteocode_elmts.reg_id AND meteocode_elmts_lookup.id\n= meteocode_elmts.wx_element_id;\n\nphoenix=# \\d meteocode_elmts \n Table \"public.meteocode_elmts\" \n Column | Type | Modifiers \n---------------+-----------------------------+-----------\n meteocode | integer | \n value | character varying(99) | not null \n unit_id | integer | \n valid_time | timestamp without time zone | not null \n lcleffect | integer | \n reg_id | integer | \n wx_element_id | integer | \nIndexes: \n \"rev\" btree (reg_id, wx_element_id, valid_time) phoenix=# 
\\d\nmeteocode_bltns \n Table \"public.meteocode_bltns\" \n Column | Type |\nModifiers \n----------------+-----------------------------+-------------------------\n----------------+-----------------------------+-------------------------\n----------------+-----------------------------+---------\n meteocode_id | integer | not null default\nnextval('\"meteocode_bltns_idseq\"'::text) \n origin | character varying(10) | not null \n header | character varying(20) | not null \n timezone | character varying(10) | not null \n flag | character(3) | not null \n initial | character varying(40) | not null \n issue_time | timestamp without time zone | not null \n next_forecast | timestamp without time zone | not null reception_time\n| timestamp without time zone | not null\nIndexes: \n \"meteocode_bltns_meteocode_id_idx\" btree (meteocode_id) \n\nphoenix=# \\d region_lookup \n Table \"public.region_lookup\" \n Column | Type | Modifiers \n-----------+-----------------------+-----------\n id | integer | not null \n region_id | character varying(99) |\nIndexes: \n \"region_lookup_pkey\" primary key, btree (id) \n\nphoenix=# \\d meteocode_elmts_lookup \n Table \"public.meteocode_elmts_lookup\" \n Column | Type | Modifiers \n------------+-----------------------+-----------\n id | integer | not null \n wx_element | character varying(99) | not null\nIndexes: \n \"meteocode_elmts_lookup_pkey\" primary key, btree (id) \n \"wx_element_idx\" btree (wx_element) \n\nphoenix=# \\d meteocode_units_lookup \n Table \"public.meteocode_units_lookup\" \n Column | Type | Modifiers \n--------+-----------------------+-----------\n id | integer | not null \n unit | character varying(99) | not null \nIndexes: \n \"meteocode_units_lookup_pkey\" primary key, btree (id) \n\nVIEW \nPWFPM_DEV=# explain analyze SELECT\norigin,timezone,region_id,wx_element,value,flag,unit,valid_time,issue_ti\nme,next_forecast FROM fcstelmt_view where origin = 'OFFICIAL' and\ntimezone = 'CST6CDT' and region_id = 'PU-REG-WNT-00027' and wx_element\n= 'NGTPERIOD_MINTEMP' and value = '-26' and flag= 'REG' and unit =\n'CELSIUS' and valid_time = '2007-04-09 00:00:00' and issue_time =\n'2007-04-08 15:00:00' and next_forecast = '2007-04-09 04:00:00' ;\nQUERY PLAN\n\n Hash Join (cost=1.47..1309504.33 rows=1 width=264) (actual\ntime=21.609..84.940 rows=1 loops=1) \n Hash Cond: (\"outer\".wx_element = \"inner\".id) \n -> Nested Loop (cost=0.00..1309501.76 rows=1 width=201) (actual\ntime=17.161..80.489 rows=1 loops=1) \n -> Nested Loop (cost=0.00..1309358.57 rows=1 width=154)\n(actual time=17.018..80.373 rows=2 loops=1) \n -> Seq Scan on region_lookup (cost=0.00..26.73 rows=7\nwidth=71) (actual time=0.578..2.135 rows=1 loops=1) \n Filter: ((region_id)::text = 'PU-REG-WNT-00027'::text) \n -> Index Scan using rev on meteocode_elmts\n(cost=0.00..187047.39 rows=1 width=91) (actual time=16.421..78 .208\nrows=2 loops=1) \n Index Cond: (\"outer\".id = meteocode_elmts.region_id) \n Filter: (((value)::text = '-26'::text) AND (valid_time = '2007-04-09\n00:00:00'::timestamp without tim e zone) AND (((subplan))::text =\n'CELSIUS'::text))\n SubPlan -> Seq Scan on meteocode_units_lookup\n(cost=0.00..1.09 rows=1 width=67) (actual time=0.013..0.018 rows=1\nloops=2)\n Filter: (id = $0) \n -> Index Scan using meteocode_bltns_meteocode_id_idx on\nmeteocode_bltns (cost=0.00..143.18 rows=1 width=55) (ac tual\ntime=0.044..0.045 rows=0 loops=2)\n Index Cond: (meteocode_bltns.meteocode_id =\n\"outer\".meteocode) \n Filter: (((origin)::text = 'OFFICIAL'::text) AND\n((timezone)::text = 
'CST6CDT'::text) AND (flag = 'REG'::bp char) AND\n(issue_time = '2007-04-08 15:00:00'::timestamp without time zone) AND\n(next_forecast = '2007-04-09 04:00:00'::ti mestamp without time\nzone))\n\n -> Hash (cost=1.46..1.46 rows=2 width=71) (actual time=0.081..0.081\nrows=0 loops=1) \n -> Seq Scan on meteocode_elmts_lookup (cost=0.00..1.46 rows=2\nwidth=71) (actual time=0.042..0.076 rows=1 loops= 1)\n Filter: ((wx_element)::text = 'NGTPERIOD_MINTEMP'::text) \n SubPlan \n -> Seq Scan on meteocode_units_lookup (cost=0.00..1.09 rows=1\nwidth=67) (actual time=0.007..0.012 rows=1 loops=1)\n Filter: (id = $0)\n Total runtime: 85.190 ms\n(22 rows) \n\nOLD TABLE \nPWFPM_DEV=# explain analyze SELECT\norigin,timezone,region_id,wx_element,value,flag,units,valid_time,issue_t\nime,next_forecast FROM forecastelement where origin = 'OFFICIAL' and\ntimezone = 'CST6CDT' and region_id = 'PU-REG-WNT-00027' and wx_element =\n'NGTPERIOD_MINTEMP' and value = '-26' and flag= 'REG' and units =\n'CELSIUS' and valid_time = '2007-04-09 00:00:00' and issue_time =\n'2007-04-08 15:00:00' and next_forecast = '2007-04-09 04:00:00' ;\nQUERY PLAN\n\n Index Scan using forecastelement_rwv_idx on forecastelement\n(cost=0.00..4.03 rows=1 width=106) (actual time=0.207..0.207 rows=0\nloops=1)\nIndex Cond: (((region_id)::text = 'PU-REG-WNT-00027'::text) AND\n((wx_element)::text = 'NGTPERIOD_MINTEMP'::text) AND (valid_time =\n'2007-04-09 00:00:00'::timestamp without time zone))\n Filter: (((origin)::text = 'OFFICIAL'::text) AND ((timezone)::text =\n'CST6CDT'::text) AND ((value)::text = '-26'::text) AND (flag =\n'REG'::bpchar) AND ((units)::text = 'CELSIUS'::text) AND (issue_time =\n'2007-04-08 15:00:00'::timestamp without time zone) AND (next_forecast =\n'2007-04-09 04:00:00'::timestamp without time zone))\n Total runtime: 0.327 ms \n(4 rows) \n",
"msg_date": "Tue, 24 Apr 2007 11:17:14 -0400",
"msg_from": "Dan Shea <[email protected]>",
"msg_from_op": true,
"msg_subject": "View is not using a table index"
},
{
"msg_contents": "Dan Shea wrote:\n> We have a table which we want to normalize and use the same SQL to\n> perform selects using a view. \n> The old table had 3 columns in it's index\n> (region_id,wx_element,valid_time). \n> The new table meteocode_elmts has a similar index but the region_id is a\n> reference to another table region_lookup and wx_element to table\n> meteocode_elmts_lookup. This will make our index and table\n> significantly smaller. \n> As stated ablove we want to use the same SQL query to check the view. \n> The problem is we have not been able to set up the view so that it\n> references the \"rev\" index. It just uses the region_id but ignores the\n> wx_element, therefore the valid_time is also ignored. The rev index now\n> consists of region_id(reference to region_lookup\n> table),wx_element(reference to meteocode_elmts_lookup) and valid_time.\n> \n> We are using Postgresql 7.4.0. Below is the relevant views and tables\n> plus an explain analyze of the query to the old table and the view.\n\nPlease say it's not really 7.4.0 - you're running 7.4.xx actually, \naren't you, where xx is quite a high number?\n\n> phoenix=# \\d region_lookup \n> Table \"public.region_lookup\" \n> Column | Type | Modifiers \n> -----------+-----------------------+-----------\n> id | integer | not null \n> region_id | character varying(99) |\n> Indexes: \n> \"region_lookup_pkey\" primary key, btree (id) \n> \n> phoenix=# \\d meteocode_elmts_lookup \n> Table \"public.meteocode_elmts_lookup\" \n> Column | Type | Modifiers \n> ------------+-----------------------+-----------\n> id | integer | not null \n> wx_element | character varying(99) | not null\n> Indexes: \n> \"meteocode_elmts_lookup_pkey\" primary key, btree (id) \n> \"wx_element_idx\" btree (wx_element) \n\nAnyway, you're joining to these tables and testing against the text \nvalues without any index useful to the join.\n\nTry indexes on (wx_element, id) and (region_id,id) etc. Re-analyse the \ntables and see what that does for you.\n\nOh - I'd expect an index over the timestamps might help too.\n\nThen, if you've got time try setting up an 8.2 installation, do some \nbasic configuration and transfer the data. I'd be surprised if you \ndidn't get some noticeable improvements just from the version number \nincrease.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 24 Apr 2007 16:41:43 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: View is not using a table index"
},
{
"msg_contents": "Version is PWFPM_DEV=# select version();\n version\n------------------------------------------------------------------------\n--------------------------------\n PostgreSQL 7.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3\n20030502 (Red Hat Linux 3.2.3-20)\n(1 row)\n\nWe used the rpm source from postgresql-7.4-0.5PGDG.\n\nYou make it sound so easy. Our database size is at 308 GB. We actually\nhave 8.2.3 running and would like to transfer in the future. We have to\ninvestigate the best way to do it.\n\nDan.\n\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Tuesday, April 24, 2007 11:42 AM\nTo: Shea,Dan [NCR]\nCc: [email protected]\nSubject: Re: [PERFORM] View is not using a table index\n\nDan Shea wrote:\n> We have a table which we want to normalize and use the same SQL to \n> perform selects using a view.\n> The old table had 3 columns in it's index \n> (region_id,wx_element,valid_time).\n> The new table meteocode_elmts has a similar index but the region_id is\n\n> a reference to another table region_lookup and wx_element to table \n> meteocode_elmts_lookup. This will make our index and table \n> significantly smaller.\n> As stated ablove we want to use the same SQL query to check the view.\n\n> The problem is we have not been able to set up the view so that it \n> references the \"rev\" index. It just uses the region_id but ignores \n> the wx_element, therefore the valid_time is also ignored. The rev \n> index now consists of region_id(reference to region_lookup \n> table),wx_element(reference to meteocode_elmts_lookup) and valid_time.\n> \n> We are using Postgresql 7.4.0. Below is the relevant views and tables\n\n> plus an explain analyze of the query to the old table and the view.\n\nPlease say it's not really 7.4.0 - you're running 7.4.xx actually,\naren't you, where xx is quite a high number?\n\n> phoenix=# \\d region_lookup \n> Table \"public.region_lookup\" \n> Column | Type | Modifiers \n> -----------+-----------------------+-----------\n> id | integer | not null \n> region_id | character varying(99) |\n> Indexes: \n> \"region_lookup_pkey\" primary key, btree (id)\n> \n> phoenix=# \\d meteocode_elmts_lookup \n> Table \"public.meteocode_elmts_lookup\" \n> Column | Type | Modifiers \n> ------------+-----------------------+-----------\n> id | integer | not null \n> wx_element | character varying(99) | not null\n> Indexes: \n> \"meteocode_elmts_lookup_pkey\" primary key, btree (id) \n> \"wx_element_idx\" btree (wx_element)\n\nAnyway, you're joining to these tables and testing against the text\nvalues without any index useful to the join.\n\nTry indexes on (wx_element, id) and (region_id,id) etc. Re-analyse the\ntables and see what that does for you.\n\nOh - I'd expect an index over the timestamps might help too.\n\nThen, if you've got time try setting up an 8.2 installation, do some\nbasic configuration and transfer the data. I'd be surprised if you\ndidn't get some noticeable improvements just from the version number\nincrease.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 24 Apr 2007 13:31:19 -0400",
"msg_from": "Dan Shea <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: View is not using a table index"
},
{
"msg_contents": "* Dan Shea <[email protected]> [070424 19:33]:\n> Version is PWFPM_DEV=# select version();\n> version\n> ------------------------------------------------------------------------\n> --------------------------------\n> PostgreSQL 7.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3\n> 20030502 (Red Hat Linux 3.2.3-20)\n> (1 row)\n> \n> We used the rpm source from postgresql-7.4-0.5PGDG.\n> \n> You make it sound so easy. Our database size is at 308 GB. We actually\n> have 8.2.3 running and would like to transfer in the future. We have to\n> investigate the best way to do it.\n\nThat depends upon your requirements for the uptime.\n\nAndreas\n",
"msg_date": "Tue, 24 Apr 2007 20:03:57 +0200",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: View is not using a table index"
},
{
"msg_contents": "Dan Shea <[email protected]> writes:\n> You make it sound so easy. Our database size is at 308 GB.\n\nWell, if you can't update major versions that's understandable; that's\nwhy we're still maintaining the old branches. But there is no excuse\nfor not running a reasonably recent sub-release within your branch.\nRead the release notes, and consider what you will say if one of the\nseveral data-loss-causing bugs that were fixed long ago eats your DB:\nhttp://developer.postgresql.org/pgdocs/postgres/release.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 24 Apr 2007 14:09:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: View is not using a table index "
},
{
"msg_contents": "Tom Lane wrote:\n> Dan Shea <[email protected]> writes:\n> \n>> You make it sound so easy. Our database size is at 308 GB.\n>> \n>\n> Well, if you can't update major versions that's understandable; that's\n> why we're still maintaining the old branches. But there is no excuse\n> for not running a reasonably recent sub-release within your branch.\n> Read the release notes, and consider what you will say if one of the\n> several data-loss-causing bugs that were fixed long ago eats your DB:\n> \n\nWas it Feb 2002? The Slammer effectively shut down the entire Internet,\ndue to a severe bug in Microsucks SQL Server... A fix for that buffer\noverflow bug had been available since August 2001; yet 90% of all SQL\nservers on the planet were unpatched.\n\nAs much as it pains me to admit it, the lesson about the importance of\nbeing a conscious, competent administrator takes precedence over the\nlesson of how unbelievably incompetent and irresponsible and etc. etc.\nMicrosoft is to have such a braindead bug in such a high-profile and\nhigh-price product.\n\nTom said it really nicely --- do stop and think about it; the day arrives\nwhen you *lost* all those 308 GB of valuable data; and it was only in\nyour hands to have prevented it! Would you want to see the light of\n*that* day?\n\nCarlos\n--\n\n",
"msg_date": "Tue, 24 Apr 2007 17:13:59 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: View is not using a table index"
},
{
"msg_contents": "Carlos Moreno wrote:\n> Tom Lane wrote:\n>> Well, if you can't update major versions that's understandable; that's\n>> why we're still maintaining the old branches. But there is no excuse\n>> for not running a reasonably recent sub-release within your branch.\n> \n> Slammer..bug in Microsucks SQL Server....fix...had been available\n\nFeature request.\n\nHow about if PostgreSQL periodically check for updates on the\ninternet and log WARNINGs as soon as it sees it's not running\nthe newest minor version for a branch. Ideally, it could\nbe set so the time-of-day's configurable to avoid setting off\npagers in the middle of the night.\n\nI might not lurk on the mailinglists enough to notice every\ndot release; but I sure would notice if pagers went off with\nwarnings in the log files from production servers.\n\nIs that a possible TODO?\n\n\n\n(The thread started on the performance mailing lists but\nI moved it to general since it drifted off topic).\n",
"msg_date": "Wed, 25 Apr 2007 15:52:17 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Feature request - have postgresql log warning when new sub-release\n\tcomes out."
},
{
"msg_contents": "Ron Mayer wrote:\n> Carlos Moreno wrote:\n>> Tom Lane wrote:\n>>> Well, if you can't update major versions that's understandable; that's\n>>> why we're still maintaining the old branches. But there is no excuse\n>>> for not running a reasonably recent sub-release within your branch.\n>> Slammer..bug in Microsucks SQL Server....fix...had been available\n> \n> Feature request.\n> \n> How about if PostgreSQL periodically check for updates on the\n> internet and log WARNINGs as soon as it sees it's not running\n> the newest minor version for a branch. Ideally, it could\n> be set so the time-of-day's configurable to avoid setting off\n> pagers in the middle of the night.\n\nuhmmm gah, errm no... ehhhh why? :)\n\nI could see a contrib module that was an agent that did that but not as \npart of actual core.\n\nJoshua D. Drake\n\n\n> \n> I might not lurk on the mailinglists enough to notice every\n> dot release; but I sure would notice if pagers went off with\n> warnings in the log files from production servers.\n> \n> Is that a possible TODO?\n> \n> \n> \n> (The thread started on the performance mailing lists but\n> I moved it to general since it drifted off topic).\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Wed, 25 Apr 2007 16:32:06 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when\n\tnew sub-release comes out."
},
{
"msg_contents": "Ron Mayer <[email protected]> wrote:\n>\n> Carlos Moreno wrote:\n> > Tom Lane wrote:\n> >> Well, if you can't update major versions that's understandable; that's\n> >> why we're still maintaining the old branches. But there is no excuse\n> >> for not running a reasonably recent sub-release within your branch.\n> > \n> > Slammer..bug in Microsucks SQL Server....fix...had been available\n> \n> Feature request.\n> \n> How about if PostgreSQL periodically check for updates on the\n> internet and log WARNINGs as soon as it sees it's not running\n> the newest minor version for a branch. Ideally, it could\n> be set so the time-of-day's configurable to avoid setting off\n> pagers in the middle of the night.\n> \n> I might not lurk on the mailinglists enough to notice every\n> dot release; but I sure would notice if pagers went off with\n> warnings in the log files from production servers.\n> \n> Is that a possible TODO?\n\nIf you switch to FreeBSD, you can easily have this done automatically\nwith existing tools.\n\n...\n\nActually, I've a feeling that it would be trivial to do with just\nabout any existing packaging system ...\n\n-- \nBill Moran\nhttp://www.potentialtech.com\n",
"msg_date": "Wed, 25 Apr 2007 22:52:49 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when\n\tnew sub-release comes out."
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 04/25/07 21:52, Bill Moran wrote:\n[snip]\n> \n> If you switch to FreeBSD, you can easily have this done automatically\n> with existing tools.\n> \n> ...\n> \n> Actually, I've a feeling that it would be trivial to do with just\n> about any existing packaging system ...\n\nOr Debian, the Universal Operating System.\n\nAnd if you don't want to move up to a good OS, you could always\nparse http://www.postgresql.org/versions.xml for the exact\ninformation you need.\n\n- --\nRon Johnson, Jr.\nJefferson LA USA\n\nGive a man a fish, and he eats for a day.\nHit him with a fish, and he goes away for good!\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\n\niD8DBQFGMKk1S9HxQb37XmcRAjZAAKCsgXoDofYQJGixA1vV0/IUr0tPjACeJeWR\nZbLeGYpEwiwEZ7Q1ELrqOuU=\n=SM1D\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 26 Apr 2007 08:29:25 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when\n\tnew sub-release comes out."
},
{
"msg_contents": "\n> \n> Actually, I've a feeling that it would be trivial to do with just\n> about any existing packaging system ...\n\nYes pretty much every version of Linux, and FreeBSD, heck even Solaris \nif you are willing to run 8.1.\n\nJ\n\n\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Thu, 26 Apr 2007 08:10:32 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when\n\tnew sub-release comes out."
},
{
"msg_contents": "On Thursday 26. April 2007 17:10, Joshua D. Drake wrote:\n>> Actually, I've a feeling that it would be trivial to do with just\n>> about any existing packaging system ...\n>\n>Yes pretty much every version of Linux, and FreeBSD, heck even Solaris\n>if you are willing to run 8.1.\n\nGentoo is still on version 8.1.8, though, and even that is soft-masked \n(stable is at 8.0.12). Seems like a problem with getting 8.2.x to build \non this platform:\n\n<http://forums.gentoo.org/viewtopic-t-534835-highlight-postgresql.html>\n-- \nLeif Biberg Kristensen | Registered Linux User #338009\nhttp://solumslekt.org/ | Cruising with Gentoo/KDE\nMy Jazz Jukebox: http://www.last.fm/user/leifbk/\n",
"msg_date": "Thu, 26 Apr 2007 18:55:42 +0200",
"msg_from": "\"Leif B. Kristensen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when new\n\tsub-release comes out."
},
{
"msg_contents": "Leif B. Kristensen wrote:\n> On Thursday 26. April 2007 17:10, Joshua D. Drake wrote:\n>>> Actually, I've a feeling that it would be trivial to do with just\n>>> about any existing packaging system ...\n>> Yes pretty much every version of Linux, and FreeBSD, heck even Solaris\n>> if you are willing to run 8.1.\n> \n> Gentoo is still on version 8.1.8, though, and even that is soft-masked \n> (stable is at 8.0.12). Seems like a problem with getting 8.2.x to build \n> on this platform:\n> \n> <http://forums.gentoo.org/viewtopic-t-534835-highlight-postgresql.html>\n\nI run 8.2.x on a Gentoo/x86_64 development box (just did the upgrade to\n8.2.4 yesterday) using the postgresql-experimental overlay (via layman)\nand have run into no problems. Everything has compiled,\ninstalled/upgraded and been run with no hiccups along the way, nor any\nhacky workarounds.\n\nThe 8.2 series isn't in the main portage tree yet because, as I\nunderstand it (and I could certainly be mistaken), the contributors\nmaintaining the ebuilds are reworking the slotting setup as well as\ncleaning up the distinctions between server/library/client-only installs.\n\nGranted, I'm not advising a mission-critical server that happens to be\nrunning Gentoo use a portage overlay explicitly marked \"experimental\"\nfor its RDBMS package management -- just pointing out that there is a\npretty straight-forward way to get the 8.2 series through portage if\nyou're willing to use an overlay for it.\n\n-Jon\n\n-- \nSenior Systems Developer\nMedia Matters for America\nhttp://mediamatters.org/\n",
"msg_date": "Thu, 26 Apr 2007 14:12:41 -0400",
"msg_from": "Jon Sime <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when\n\tnew sub-release comes out."
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Ron Mayer wrote:\n>> How about if PostgreSQL periodically check for updates on the\n>> internet and log WARNINGs as soon as it sees it's not running\n>> the newest minor version for a branch. ...\n> \n> uhmmm gah, errm no... ehhhh why? :)\n\nMostly because it seems like a near FAQ here that someone\nposts questions about people running very old postgresqls\nwhere the answers are \"that was fixed in the latest minor version\".\n\nRegarding people saying that their OS package manager\ncan do this for them - I note that the people who have\nthis problem the worst seem to be the people running\nolder postgresqls, and your OS vendor may not be keeping\nthe major version number of their postgresql the same\nas yours. For example, apt-cache search here isn't showing\nme 8.0 (though it does show 7.4, 8.1, and 8.2).\n\n> I could see a contrib module that was an agent that did that but not as\n> part of actual core.\n\nI was thinking it would protect the more ignorant users\nwho didn't even know about contrib. I imagine anyone\nwho did know enough to install a contrib module would\nalso know how to write such a script without it.\n\nNo big deal, though - if others don't think there's a need, then\nI'm not going to push for it.\n",
"msg_date": "Thu, 26 Apr 2007 11:38:44 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when new\n sub-release\n\tcomes out."
},
{
"msg_contents": "On 4/25/07, Ron Mayer <[email protected]> wrote:\n> Carlos Moreno wrote:\n> > Tom Lane wrote:\n> >> Well, if you can't update major versions that's understandable; that's\n> >> why we're still maintaining the old branches. But there is no excuse\n> >> for not running a reasonably recent sub-release within your branch.\n> >\n> > Slammer..bug in Microsucks SQL Server....fix...had been available\n>\n> Feature request.\n>\n> How about if PostgreSQL periodically check for updates on the\n> internet and log WARNINGs as soon as it sees it's not running\n> the newest minor version for a branch. Ideally, it could\n> be set so the time-of-day's configurable to avoid setting off\n> pagers in the middle of the night.\n>\n> I might not lurk on the mailinglists enough to notice every\n> dot release; but I sure would notice if pagers went off with\n> warnings in the log files from production servers.\n>\n> Is that a possible TODO?\n>\n>\n>\n> (The thread started on the performance mailing lists but\n> I moved it to general since it drifted off topic).\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\nwhat about the distros that do backporting for the bug fixes ?\nthose would be saying you are with a outdated PostgreSQL version\n\n-- \nLeonel\n",
"msg_date": "Thu, 26 Apr 2007 13:02:57 -0600",
"msg_from": "Leonel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when new\n\tsub-release comes out."
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 04/26/07 13:38, Ron Mayer wrote:\n> Joshua D. Drake wrote:\n>> Ron Mayer wrote:\n>>> How about if PostgreSQL periodically check for updates on the\n>>> internet and log WARNINGs as soon as it sees it's not\n>>> running the newest minor version for a branch. ...\n>> uhmmm gah, errm no... ehhhh why? :)\n> \n> Mostly because it seems like a near FAQ here that someone posts\n> questions about people running very old postgresqls where the\n> answers are \"that was fixed in the latest minor version\".\n> \n> Regarding people saying that their OS package manager can do this\n> for them - I note that the people who have this problem the worst\n> seem to be the people running older postgresqls, and your OS\n> vendor may not be keeping the major version number of their\n> postgresql the same as yours. For example, apt-cache search\n> here isn't showing me 8.0 (though it does show 7.4, 8.1, and\n> 8.2).\n\nFor example: Debian. It's Stable releases only get *security*\npatches, nothing related to features or performance.\n\n>> I could see a contrib module that was an agent that did that\n>> but not as part of actual core.\n> \n> I was thinking it would protect the more ignorant users who\n> didn't even know about contrib. I imagine anyone who did know\n> enough to install a contrib module would also know how to write\n> such a script without it.\n> \n> No big deal, though - if others don't think there's a need, then \n> I'm not going to push for it.\n\nA *tiny* Perl/Python script to parse\nhttp://www.postgresql.org/versions.xml is all you need. Putting it\nin cron and emailing when \"someone\" a version changes seems useful.\n\nOk, it's official: you're elected to implement it!\n\n- --\nRon Johnson, Jr.\nJefferson LA USA\n\nGive a man a fish, and he eats for a day.\nHit him with a fish, and he goes away for good!\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\n\niD8DBQFGMQJ/S9HxQb37XmcRAgUEAKDWKzM8scO7Mc8uB26iqIo8WnJGmwCg6e4w\nvRuaSXH0sMhtnNZbYsuDKmc=\n=wGYf\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 26 Apr 2007 14:50:24 -0500",
"msg_from": "Ron Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Feature request - have postgresql log warning\n\twhen new sub-release comes out."
},
{
"msg_contents": "On Thursday 26. April 2007 20:12, Jon Sime wrote:\n>I run 8.2.x on a Gentoo/x86_64 development box (just did the upgrade\n> to 8.2.4 yesterday) using the postgresql-experimental overlay (via\n> layman) and have run into no problems. Everything has compiled,\n>installed/upgraded and been run with no hiccups along the way, nor any\n>hacky workarounds.\n\nPostgresql-8.2.4 went soft-masked today. I've upgraded and my own local \nweb application is working just fine, but dependencies for several \nother libs and apps are broken, and I'm in the process of running \nrevdep-rebuild and rebuilding 14 packages right now.\n-- \nLeif Biberg Kristensen | Registered Linux User #338009\nhttp://solumslekt.org/ | Cruising with Gentoo/KDE\nMy Jazz Jukebox: http://www.last.fm/user/leifbk/\n",
"msg_date": "Thu, 3 May 2007 17:18:50 +0200",
"msg_from": "\"Leif B. Kristensen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature request - have postgresql log warning when new\n\tsub-release comes out."
}
] |
[
{
"msg_contents": "I have this table:\n\nCREATE TABLE test_zip_assoc (\n id serial NOT NULL,\n f_id integer DEFAULT 0 NOT NULL,\n lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n);\nCREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);\nCREATE INDEX long_radians ON test_zip_assoc USING btree\n(long_radians);\n\n\n\nIt's basically a table that associates some foreign_key (for an event,\nfor instance) with a particular location using longitude and\nlatitude. I'm basically doing a simple proximity search. I have\npopulated the database with *10 million* records. I then test\nperformance by picking 50 zip codes at random and finding the records\nwithin 50 miles with a query like this:\n\nSELECT id\n\tFROM test_zip_assoc\n\tWHERE\n\t\tlat_radians > 0.69014816041\n\t\tAND lat_radians < 0.71538026567\n\t\tAND long_radians > -1.35446228028\n\t\tAND long_radians < -1.32923017502\n\n\nOn my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\nram) this query averages 1.5 seconds each time it runs after a brief\nwarmup period. In PostGreSQL it averages about 15 seconds.\n\nBoth of those times are too slow. I need the query to run in under a\nsecond with as many as a billion records. I don't know if this is\npossible but I'm really hoping someone can help me restructure my\nindexes (multicolumn?, multiple indexes with a 'where' clause?) so\nthat I can get this running as fast as possible.\n\nIf I need to consider some non-database data structure in RAM I will\ndo that too. Any help or tips would be greatly appreciated. I'm\nwilling to go to greath lengths to test this if someone can make a\ngood suggestion that sounds like it has a reasonable chance of\nimproving the speed of this search. There's an extensive thread on my\nefforts already here:\n\nhttp://phpbuilder.com/board/showthread.php?t=10331619&page=10\n\n",
"msg_date": "24 Apr 2007 14:26:46 -0700",
"msg_from": "zardozrocks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simple query, 10 million records...MySQL ten times faster"
},
{
"msg_contents": "On 24 Apr 2007 14:26:46 -0700, zardozrocks <[email protected]> wrote:\n> I have this table:\n>\n> CREATE TABLE test_zip_assoc (\n> id serial NOT NULL,\n> f_id integer DEFAULT 0 NOT NULL,\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n> );\n> CREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);\n> CREATE INDEX long_radians ON test_zip_assoc USING btree\n> (long_radians);\n>\n>\n>\n> It's basically a table that associates some foreign_key (for an event,\n> for instance) with a particular location using longitude and\n> latitude. I'm basically doing a simple proximity search. I have\n> populated the database with *10 million* records. I then test\n> performance by picking 50 zip codes at random and finding the records\n> within 50 miles with a query like this:\n>\n> SELECT id\n> FROM test_zip_assoc\n> WHERE\n> lat_radians > 0.69014816041\n> AND lat_radians < 0.71538026567\n> AND long_radians > -1.35446228028\n> AND long_radians < -1.32923017502\n>\n>\n> On my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\n> ram) this query averages 1.5 seconds each time it runs after a brief\n> warmup period. In PostGreSQL it averages about 15 seconds.\n>\n> Both of those times are too slow. I need the query to run in under a\n> second with as many as a billion records. I don't know if this is\n> possible but I'm really hoping someone can help me restructure my\n> indexes (multicolumn?, multiple indexes with a 'where' clause?) so\n> that I can get this running as fast as possible.\n>\n> If I need to consider some non-database data structure in RAM I will\n> do that too. Any help or tips would be greatly appreciated. I'm\n> willing to go to greath lengths to test this if someone can make a\n> good suggestion that sounds like it has a reasonable chance of\n> improving the speed of this search. There's an extensive thread on my\n> efforts already here:\n\nYou can always go the earthdist route. the index takes longer to\nbuild (like 5x) longer than btree, but will optimize that exact\noperation.\n\nmerlin\n",
"msg_date": "Thu, 26 Apr 2007 17:09:28 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten times faster"
},
{
"msg_contents": "In response to zardozrocks <[email protected]>:\n\n> I have this table:\n> \n> CREATE TABLE test_zip_assoc (\n> id serial NOT NULL,\n> f_id integer DEFAULT 0 NOT NULL,\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n> );\n> CREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);\n> CREATE INDEX long_radians ON test_zip_assoc USING btree\n> (long_radians);\n> \n> \n> \n> It's basically a table that associates some foreign_key (for an event,\n> for instance) with a particular location using longitude and\n> latitude. I'm basically doing a simple proximity search. I have\n> populated the database with *10 million* records. I then test\n> performance by picking 50 zip codes at random and finding the records\n> within 50 miles with a query like this:\n> \n> SELECT id\n> \tFROM test_zip_assoc\n> \tWHERE\n> \t\tlat_radians > 0.69014816041\n> \t\tAND lat_radians < 0.71538026567\n> \t\tAND long_radians > -1.35446228028\n> \t\tAND long_radians < -1.32923017502\n> \n> \n> On my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\n> ram) this query averages 1.5 seconds each time it runs after a brief\n> warmup period. In PostGreSQL it averages about 15 seconds.\n> \n> Both of those times are too slow. I need the query to run in under a\n> second with as many as a billion records. I don't know if this is\n> possible but I'm really hoping someone can help me restructure my\n> indexes (multicolumn?, multiple indexes with a 'where' clause?) so\n> that I can get this running as fast as possible.\n> \n> If I need to consider some non-database data structure in RAM I will\n> do that too. Any help or tips would be greatly appreciated. I'm\n> willing to go to greath lengths to test this if someone can make a\n> good suggestion that sounds like it has a reasonable chance of\n> improving the speed of this search. There's an extensive thread on my\n> efforts already here:\n> \n> http://phpbuilder.com/board/showthread.php?t=10331619&page=10\n\nWhy didn't you investigate/respond to the last posts there? The advice\nto bump shared_buffers is good advice. work_mem might also need bumped.\n\nFigure out which postgresql.conf your system is using and get it dialed\nin for your hardware. You can make all the indexes you want, but if\nyou've told Postgres that it only has 8M of RAM to work with, performance\nis going to suck. I don't see hardware specs on that thread (but I\ndidn't read the whole thing) If the system you're using is a dedicated\nDB system, set shared_buffers to 1/3 - 1/2 of the physical RAM on the\nmachine for starters.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Thu, 26 Apr 2007 17:11:38 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten times\n faster"
},
{
"msg_contents": "zardozrocks wrote:\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n\nNative data types such as integer or real are much faster than numeric. \n If you need 6 digits, it's better to multiply your coordinates by 10^6 \nand store as INTEGER.\n\n> On my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\n> ram) this query averages 1.5 seconds each time it runs after a brief\n> warmup period. In PostGreSQL it averages about 15 seconds.\n\nWhat hard drive(s) and controller(s) do you have? Please post EXPLAIN \nANALYZE output of the problem query and your postgresql.conf also.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.",
"msg_date": "Thu, 26 Apr 2007 17:12:57 -0400",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten times\n faster"
},
{
"msg_contents": "zardozrocks wrote:\n> I have this table:\n> \n> CREATE TABLE test_zip_assoc (\n> id serial NOT NULL,\n> f_id integer DEFAULT 0 NOT NULL,\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n> );\n> CREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);\n> CREATE INDEX long_radians ON test_zip_assoc USING btree\n> (long_radians);\n\nMaybe I'm missing something, but wouldn't it be easier to just use \nPostGIS? Or failing that, using the vanilla built-in point type and an \nr-tree index? That's what r-tree indexes are made for.\n\n-- \nJeff Hoffmann\nHead Plate Spinner\nPropertyKey.com\n",
"msg_date": "Thu, 26 Apr 2007 16:40:17 -0500",
"msg_from": "Jeff Hoffmann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten times\n faster"
},
{
"msg_contents": "On Tue, 2007-04-24 at 16:26, zardozrocks wrote:\n> I have this table:\n> \n> CREATE TABLE test_zip_assoc (\n> id serial NOT NULL,\n> f_id integer DEFAULT 0 NOT NULL,\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n> );\n\nLike someone else mentioned numeric types are SLOW. See if you can use\nintegers, or at least floats.\n\nI also wonder if you might be better served with geometric types and\nGiST indexes on them than using your lat / long grid. With a geometric\ntype, you could define the lat / long as a point and use geometric\noperations with it.\n\nSee the pgsql manual:\n\nhttp://www.postgresql.org/docs/8.1/static/datatype-geometric.html\nhttp://www.postgresql.org/docs/8.1/static/functions-geometry.html\n\n> It's basically a table that associates some foreign_key (for an event,\n> for instance) with a particular location using longitude and\n> latitude. I'm basically doing a simple proximity search. I have\n> populated the database with *10 million* records. I then test\n> performance by picking 50 zip codes at random and finding the records\n> within 50 miles with a query like this:\n\nI assume that there aren't 10 million zip codes, right?\n\nAre you storing the lat / long of the individual venues? Or the zip\ncodes? If you're storing the lat / long of the zips, then I can't\nimagine there are 10 million zip codes. If you could use the lat / long\nnumbers to find the zip codes that are in your range, then join that to\na venue table that fks off of the zip code table, I would think it would\nbe much faster, as you'd have a smaller data set to trundle through.\n\n> SELECT id\n> \tFROM test_zip_assoc\n> \tWHERE\n> \t\tlat_radians > 0.69014816041\n> \t\tAND lat_radians < 0.71538026567\n> \t\tAND long_radians > -1.35446228028\n> \t\tAND long_radians < -1.32923017502\n> \n> \n> On my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\n> ram) this query averages 1.5 seconds each time it runs after a brief\n> warmup period. In PostGreSQL it averages about 15 seconds.\n\nI wonder how well it would run if you had 10, 20, 30, 40 etc... users\nrunning it at the same time. My guess is that you'll be very lucky to\nget anything close to linear scaling in any database. That's because\nthis is CPU / Memory bandwidth intensive, so it's gonna kill your DB. \nOTOH, if it was I/O bound you could throw more hardware at it (bb cache\nRAID controller, etc)\n\n> Both of those times are too slow. I need the query to run in under a\n> second with as many as a billion records. I don't know if this is\n> possible but I'm really hoping someone can help me restructure my\n> indexes (multicolumn?, multiple indexes with a 'where' clause?) so\n> that I can get this running as fast as possible.\n\nYou're trying to do a whole lot of processing in a little time. You're\neither gonna have to accept a less exact answer (i.e. base it on zip\ncodes) or come up with some way of mining the data for the answers ahead\nof time, kind of like a full text search for lat and long.\n\nSo, have you tried what I suggested about increasing shared_buffers and\nwork_mem yet?\n",
"msg_date": "Thu, 26 Apr 2007 17:42:26 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten times\n\tfaster"
},
{
"msg_contents": "\n\nIs there a reason you are not using postgis. The R tree indexes are\ndesigned for exactly this type of query and should be able to do it very\nquickly.\n\nHope that helps,\n\nJoe\n\n> I have this table:\n>\n> CREATE TABLE test_zip_assoc (\n> id serial NOT NULL,\n> f_id integer DEFAULT 0 NOT NULL,\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n> );\n> CREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);\n> CREATE INDEX long_radians ON test_zip_assoc USING btree\n> (long_radians);\n>\n>\n>\n> It's basically a table that associates some foreign_key (for an event,\n> for instance) with a particular location using longitude and\n> latitude. I'm basically doing a simple proximity search. I have\n> populated the database with *10 million* records. I then test\n> performance by picking 50 zip codes at random and finding the records\n> within 50 miles with a query like this:\n>\n> SELECT id\n> \tFROM test_zip_assoc\n> \tWHERE\n> \t\tlat_radians > 0.69014816041\n> \t\tAND lat_radians < 0.71538026567\n> \t\tAND long_radians > -1.35446228028\n> \t\tAND long_radians < -1.32923017502\n>\n>\n> On my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\n> ram) this query averages 1.5 seconds each time it runs after a brief\n> warmup period. In PostGreSQL it averages about 15 seconds.\n>\n> Both of those times are too slow. I need the query to run in under a\n> second with as many as a billion records. I don't know if this is\n> possible but I'm really hoping someone can help me restructure my\n> indexes (multicolumn?, multiple indexes with a 'where' clause?) so\n> that I can get this running as fast as possible.\n>\n> If I need to consider some non-database data structure in RAM I will\n> do that too. Any help or tips would be greatly appreciated. I'm\n> willing to go to greath lengths to test this if someone can make a\n> good suggestion that sounds like it has a reasonable chance of\n> improving the speed of this search. There's an extensive thread on my\n> efforts already here:\n>\n> http://phpbuilder.com/board/showthread.php?t=10331619&page=10\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n",
"msg_date": "Fri, 27 Apr 2007 08:43:32 +1000 (EST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Simple query,\n 10 million records...MySQL ten times faster"
},
{
"msg_contents": "On 24 Apr 2007 14:26:46 -0700, zardozrocks <[email protected]> wrote:\n> I have this table:\n>\n> CREATE TABLE test_zip_assoc (\n> id serial NOT NULL,\n> f_id integer DEFAULT 0 NOT NULL,\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n> );\n> CREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);\n> CREATE INDEX long_radians ON test_zip_assoc USING btree\n> (long_radians);\n\nThis is a spatial search -- B-tree indexes are much less efficient\nthan this than certain other data structures. The R-tree and its many\nvariants are based on subdividing the space in regions, allowing you\nto do efficient checks on containment, intersection, etc., based on\npoints or bounding boxes.\n\nPostgreSQL implements R-trees natively as well as through a mechanism\ncalled GiST, a framework for implementing pluggable tree-like indexes.\nIt also provides some geometric data types. However, as far as I know,\nPostgreSQL's R-tree/GiST indexes do *not* provide the operators to do\nbounding box searches. For this you need PostGIS.\n\nPostGIS implements the whole GIS stack, and it's so good at this that\nit's practically the de facto tool among GIS analysts. Installing\nPostGIS into a database is simple, and once you have done this, you\ncan augment your table with a geometry (*):\n\n alter table test_zip_assoc add column lonlat geometry;\n update test_zip_assoc set lonlat = makepoint(\n long_radians / (3.14159265358979 / 180),\n lat_radians / (3.14159265358979 / 180));\n\nThe division is to convert your radians into degrees; PostGIS works\nwith degrees, at least out of the box.\n\nNow you can query on a bounding box (although, are you sure you got\nyour lons and lats in order? That's Antarctica, isn't it?):\n\n select * from test_zip_assoc\n where lonlat && makebox2d(\n makepoint(-77.6049721697096, 39.5425768302107),\n makepoint(-76.1592790300818, 40.9882699698386))\n\nThis is bound to be blazingly fast. Next you can order by geographic\ndistance if you like:\n\n order by distance_sphere(lonlat,\n makepoint(-77.6049721697096, 39.5425768302107))\n\nNobody has mentioned PostGIS so far, so I hope I'm not missing some\ncrucial detail, like \"no spatial indexes allowed!\".\n\n(*) I cheated here. The PostGIS manual recommends that you use a\nfunction to create geometric column, because it will set up some\nauxilary data structures for you that are needed for certain\noperations. The recommended syntax is:\n\n select AddGeometryColumn('', 'test_zip_assoc', 'geom', -1, 'POINT', 2);\n\nAlexander.\n",
"msg_date": "Fri, 27 Apr 2007 01:12:31 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten times faster"
},
{
"msg_contents": "On 4/27/07, Alexander Staubo <[email protected]> wrote:\n[snip]\n> PostGIS implements the whole GIS stack, and it's so good at this that\n> it's practically the de facto tool among GIS analysts. Installing\n> PostGIS into a database is simple, and once you have done this, you\n> can augment your table with a geometry (*):\n>\n> alter table test_zip_assoc add column lonlat geometry;\n\nI forgot to include the crucial step, of course:\n\n create index test_zip_assoc_lonlat_index on test_zip_assoc\n using gist (lonlat gist_geometry_ops);\n analyze test_zip_assoc_lonlat;\n\nThis creates a GiST index on the geometry and (significantly) updates\nthe table statistics.\n\nAlexander.\n",
"msg_date": "Fri, 27 Apr 2007 01:14:34 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten times faster"
},
{
"msg_contents": "Folks,\n\nwe in astronomy permanently work with billiards objects with spherical\natributes and have several sky-indexing schemes. See my page\nfor links http://www.sai.msu.su/~megera/wiki/SkyPixelization\n\nWe have q3c package for PostgreSQL available from q3c.sf.net, which \nwe use in production with terabytes-sized database.\n\nOleg\nOn Thu, 26 Apr 2007, Scott Marlowe wrote:\n\n> On Tue, 2007-04-24 at 16:26, zardozrocks wrote:\n>> I have this table:\n>>\n>> CREATE TABLE test_zip_assoc (\n>> id serial NOT NULL,\n>> f_id integer DEFAULT 0 NOT NULL,\n>> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n>> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n>> );\n>\n> Like someone else mentioned numeric types are SLOW. See if you can use\n> integers, or at least floats.\n>\n> I also wonder if you might be better served with geometric types and\n> GiST indexes on them than using your lat / long grid. With a geometric\n> type, you could define the lat / long as a point and use geometric\n> operations with it.\n>\n> See the pgsql manual:\n>\n> http://www.postgresql.org/docs/8.1/static/datatype-geometric.html\n> http://www.postgresql.org/docs/8.1/static/functions-geometry.html\n>\n>> It's basically a table that associates some foreign_key (for an event,\n>> for instance) with a particular location using longitude and\n>> latitude. I'm basically doing a simple proximity search. I have\n>> populated the database with *10 million* records. I then test\n>> performance by picking 50 zip codes at random and finding the records\n>> within 50 miles with a query like this:\n>\n> I assume that there aren't 10 million zip codes, right?\n>\n> Are you storing the lat / long of the individual venues? Or the zip\n> codes? If you're storing the lat / long of the zips, then I can't\n> imagine there are 10 million zip codes. If you could use the lat / long\n> numbers to find the zip codes that are in your range, then join that to\n> a venue table that fks off of the zip code table, I would think it would\n> be much faster, as you'd have a smaller data set to trundle through.\n>\n>> SELECT id\n>> \tFROM test_zip_assoc\n>> \tWHERE\n>> \t\tlat_radians > 0.69014816041\n>> \t\tAND lat_radians < 0.71538026567\n>> \t\tAND long_radians > -1.35446228028\n>> \t\tAND long_radians < -1.32923017502\n>>\n>>\n>> On my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\n>> ram) this query averages 1.5 seconds each time it runs after a brief\n>> warmup period. In PostGreSQL it averages about 15 seconds.\n>\n> I wonder how well it would run if you had 10, 20, 30, 40 etc... users\n> running it at the same time. My guess is that you'll be very lucky to\n> get anything close to linear scaling in any database. That's because\n> this is CPU / Memory bandwidth intensive, so it's gonna kill your DB.\n> OTOH, if it was I/O bound you could throw more hardware at it (bb cache\n> RAID controller, etc)\n>\n>> Both of those times are too slow. I need the query to run in under a\n>> second with as many as a billion records. I don't know if this is\n>> possible but I'm really hoping someone can help me restructure my\n>> indexes (multicolumn?, multiple indexes with a 'where' clause?) so\n>> that I can get this running as fast as possible.\n>\n> You're trying to do a whole lot of processing in a little time. You're\n> either gonna have to accept a less exact answer (i.e. 
base it on zip\n> codes) or come up with some way of mining the data for the answers ahead\n> of time, kind of like a full text search for lat and long.\n>\n> So, have you tried what I suggested about increasing shared_buffers and\n> work_mem yet?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Fri, 27 Apr 2007 08:46:21 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten times\n faster"
}
] |
[
{
"msg_contents": "Hello!\n\nI have strange situation. I`m testing performance of PostgreSQL database \nat different filesystems (ext2,ex3,jfs) and I cant say that JFS is as \nmuch faster as it is said.\nMy test look`s like that:\n\nServer: 2 x Xeon 2,4GHz 2GB ram 8 x HDD SCSI configured in RAID arrays \nlike that:\n\nUnit UnitType Status %Cmpl Stripe Size(GB) Cache AVerify \nIgnECC\n------------------------------------------------------------------------------\nu0 RAID-10 OK - 64K 467.522 ON - -\nu6 RAID-1 OK - - 298.09 ON - -\n\nPort Status Unit Size Blocks Serial\n---------------------------------------------------------------\np0 OK u0 233.76 GB 490234752 Y634Y1DE\np1 OK u0 233.76 GB 490234752 Y636TR9E\np2 OK u0 233.76 GB 490234752 Y64VZF1E\np3 OK u0 233.76 GB 490234752 Y64G8HRE\np4 NOT-PRESENT - - - -\np5 OK - 233.76 GB 490234752 Y63YMSNE\np6 OK u6 298.09 GB 625142448 3QF08HFF\np7 OK u6 298.09 GB 625142448 3QF08HHW\n\n\nwhere u6 stores Fedora Core 6 operating system, and u0 stores 3 \npartitions with ext2, ext3 and jfs filesystem.\nPostgresql 8.2 engine is intalled at system partition (u6 in raid) and \nrun with data directory at diffrent FS partition for particular test.\nTo test I use pgBench with default database schema, run for 25, 50, 75 \nusers at one time. Every test I run 5 time to take average.\nUnfortunetly my result shows that ext is fastest, ext3 and jfs are very \nsimillar. I can understand that ext2 without jurnaling is faster than \next3, it is said that jfs is 40 - 60% faster. I cant see the difference. \nPart of My results: (transaction type | scaling factor | num of clients \n| tpl | num on transactions | tps including connection time | tps \nexcliding connection time)\n\nEXT2:\n\nTPC-B (sort of),50,75,13,975|975,338.286682,358.855582\nTPC-B (sort of),50,75,133,9975|9975,126.777438,127.023687\nTPC-B (sort of),50,75,1333,99975|99975,125.612325,125.636193\n\nEXT3:\n\nTPC-B (sort of),50,75,13,975|975,226.139237,244.619009\nTPC-B (sort of),50,75,133,9975|9975,88.678922,88.935371\nTPC-B (sort of),50,75,1333,99975|99975,79.126892,79.147423\n\nJFS:\n\nTPC-B (sort of),50,75,13,975|975,235.626369,255.863271\nTPC-B (sort of),50,75,133,9975|9975,88.408323,88.664584\nTPC-B (sort of),50,75,1333,99975|99975,81.003394,81.024297\n\n\nCan anyone tell me what`s wrong with my test? Or maybe it is normal?\n\nPawel Gruszczynski\n",
"msg_date": "Wed, 25 Apr 2007 08:51:15 +0200",
"msg_from": "=?ISO-8859-2?Q?Pawe=B3_Gruszczy=F1ski?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "What`s wrong with JFS configuration?"
},
{
"msg_contents": "Pawe� Gruszczy�ski wrote:\n> To test I use pgBench with default database schema, run for 25, 50, 75 \n> users at one time. Every test I run 5 time to take average.\n> Unfortunetly my result shows that ext is fastest, ext3 and jfs are very \n> simillar. I can understand that ext2 without jurnaling is faster than \n> ext3, it is said that jfs is 40 - 60% faster. I cant see the difference. \n> Part of My results: (transaction type | scaling factor | num of clients \n> | tpl | num on transactions | tps including connection time | tps \n> excliding connection time)\n> \n> EXT2:\n> \n> TPC-B (sort of),50,75,13,975|975,338.286682,358.855582\n> ...\n> \n> Can anyone tell me what`s wrong with my test? Or maybe it is normal?\n\nWith a scaling factor of 50, your database size is ~ 1 GB, which fits \ncomfortably in your RAM. You're not exercising your drives or filesystem \nmuch. Assuming you haven't disabled fsync, the performance of that test \nis bound by the speed your drives can flush WAL commit records to disk.\n\nI wouldn't expect the filesystem to make a big difference anyway, but \nyou'll see..\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 25 Apr 2007 09:54:12 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What`s wrong with JFS configuration?"
},
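A minimal sketch of the kind of re-run Heikki's point suggests: build a pgbench data set larger than the server's 2 GB of RAM so the disks and filesystem actually get exercised. The database name "bench" is an assumption, and pgbench here is the contrib program shipped with 8.2 (a scale factor of 1000 builds roughly 15-16 GB of data):

    # initialize a ~15-16 GB pgbench database (scale factor 1000)
    createdb bench
    pgbench -i -s 1000 bench
    # 75 concurrent clients, 1333 transactions each, as in the original runs
    pgbench -c 75 -t 1333 bench

At that size the page cache can no longer hide the I/O, so any real difference between ext2, ext3 and jfs has a chance to show up.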
{
"msg_contents": "\nOn 25-Apr-07, at 4:54 AM, Heikki Linnakangas wrote:\n\n> Paweł Gruszczyński wrote:\n>> To test I use pgBench with default database schema, run for 25, \n>> 50, 75 users at one time. Every test I run 5 time to take average.\n>> Unfortunetly my result shows that ext is fastest, ext3 and jfs are \n>> very simillar. I can understand that ext2 without jurnaling is \n>> faster than ext3, it is said that jfs is 40 - 60% faster. I cant \n>> see the difference. Part of My results: (transaction type | \n>> scaling factor | num of clients | tpl | num on transactions | tps \n>> including connection time | tps excliding connection time)\n>> EXT2:\n>> TPC-B (sort of),50,75,13,975|975,338.286682,358.855582\n>> ...\n>> Can anyone tell me what`s wrong with my test? Or maybe it is normal?\n>\n> With a scaling factor of 50, your database size is ~ 1 GB, which \n> fits comfortably in your RAM. You're not exercising your drives or \n> filesystem much. Assuming you haven't disabled fsync, the \n> performance of that test is bound by the speed your drives can \n> flush WAL commit records to disk.\n>\n> I wouldn't expect the filesystem to make a big difference anyway, \n> but you'll see..\n\nIf you really believe that jfs is 40 -60% faster ( which I highly \ndoubt ) you should see this by simply reading/writing a very large \nfile (2x your memory size) with dd .\n\nJust curious but what data do you have that suggests this 40-60% \nnumber ?\n\nDave\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n",
"msg_date": "Wed, 25 Apr 2007 07:21:30 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What`s wrong with JFS configuration?"
},
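A rough sketch of the dd check Dave describes above, assuming GNU dd, the 2 GB RAM box from the original post (hence a 4 GB test file), and the /fs/ext2, /fs/ext3 and /fs/jfs mount points; repeat once per filesystem, as root:

    # sequential write of ~4 GB (2x RAM); the trailing sync keeps the page
    # cache from inflating the result
    time sh -c "dd if=/dev/zero of=/fs/jfs/ddtest bs=1M count=4096 && sync"
    # drop cached file data before reading it back (2.6.16+ kernels)
    echo 3 > /proc/sys/vm/drop_caches
    # sequential read
    time dd if=/fs/jfs/ddtest of=/dev/null bs=1M
    rm /fs/jfs/ddtest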
{
"msg_contents": "Alexander Staubo napisał(a):\n> On 4/25/07, Paweł Gruszczyński <[email protected]> wrote:\n>> I have strange situation. I`m testing performance of PostgreSQL database\n>> at different filesystems (ext2,ex3,jfs) and I cant say that JFS is as\n>> much faster as it is said.\n>\n> I don't know about 40-60% faster, but JFS is known to be a fast, good\n> file system -- faster than other file systems for some things, slower\n> for others. It's particularly known for putting a somewhat lower load\n> on the CPU than most other journaling file systems.\n>\n> Alexander.\n>\nI was just reading some informations on the web (for example: \nhttp://www.nabble.com/a-comparison-of-ext3,-jfs,-and-xfs-on-hardware-raid-t144738.html). \n\nMy test should tell mi if it`s true, but now I see that rather everyhing \nis ok with my test method and the gain of using JFS is not so high.\n\nPawel\n",
"msg_date": "Wed, 25 Apr 2007 15:38:54 +0200",
"msg_from": "=?UTF-8?B?UGF3ZcWCIEdydXN6Y3p5xYRza2k=?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What`s wrong with JFS configuration?"
},
{
"msg_contents": "[email protected] (Pawe� Gruszczy�ski) writes:\n> To test I use pgBench with default database schema, run for 25, 50, 75\n> users at one time. Every test I run 5 time to take average.\n> Unfortunetly my result shows that ext is fastest, ext3 and jfs are\n> very simillar. I can understand that ext2 without jurnaling is faster\n> than ext3, it is said that jfs is 40 - 60% faster. I cant see the\n> difference. Part of My results: (transaction type | scaling factor |\n> num of clients | tpl | num on transactions | tps including connection\n> time | tps excliding connection time)\n>\n> EXT2:\n>\n> TPC-B (sort of),50,75,13,975|975,338.286682,358.855582\n> TPC-B (sort of),50,75,133,9975|9975,126.777438,127.023687\n> TPC-B (sort of),50,75,1333,99975|99975,125.612325,125.636193\n>\n> EXT3:\n>\n> TPC-B (sort of),50,75,13,975|975,226.139237,244.619009\n> TPC-B (sort of),50,75,133,9975|9975,88.678922,88.935371\n> TPC-B (sort of),50,75,1333,99975|99975,79.126892,79.147423\n>\n> JFS:\n>\n> TPC-B (sort of),50,75,13,975|975,235.626369,255.863271\n> TPC-B (sort of),50,75,133,9975|9975,88.408323,88.664584\n> TPC-B (sort of),50,75,1333,99975|99975,81.003394,81.024297\n>\n>\n> Can anyone tell me what`s wrong with my test? Or maybe it is normal?\n\nFor one thing, this test is *probably* staying mostly in memory. That\nwill be skewing results away from measuring anything about the\nfilesystem.\n\nWhen I did some testing of comparative Linux filesystem performance,\nback in 2003, I found that JFS was maybe 20% percent faster on a\n\"write-only\" workload than XFS, which was a few percent faster than\next3. The differences weren't terribly large.\n\nIf you're seeing such huge differences with pgbench (which includes\nread load, which should be virtually unaffected by one's choice of\nfilesystem), then I can only conclude that something about your\ntesting methodology is magnifying the differences.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/oses.html\n\"On the Internet, no one knows you're using Windows NT\"\n-- Ramiro Estrugo, [email protected]\n",
"msg_date": "Wed, 25 Apr 2007 10:46:28 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What`s wrong with JFS configuration?"
},
{
"msg_contents": "On Apr 25, 2007, at 8:51 AM, Paweł Gruszczyński wrote:\n> where u6 stores Fedora Core 6 operating system, and u0 stores 3 \n> partitions with ext2, ext3 and jfs filesystem.\n\nKeep in mind that drives have a faster data transfer rate at the \nouter-edge than they do at the inner edge, so if you've got all 3 \nfilesystems sitting on that array at the same time it's not a fair \ntest. I heard numbers on the impact of this a *long* time ago and I \nthink it was in the 10% range, but I could be remembering wrong.\n\nYou'll need to drop each filesystem and create the next one to get a \nfair comparison.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Wed, 25 Apr 2007 19:40:09 +0200",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What`s wrong with JFS configuration?"
},
{
"msg_contents": "On Wed, 25 Apr 2007, Pawe�~B Gruszczy�~Dski wrote:\n\n> I was just reading some informations on the web (for example: \n> http://www.nabble.com/a-comparison-of-ext3,-jfs,-and-xfs-on-hardware-raid-t144738.html).\n\nYou were doing your tests with a database scale of 50. As Heikki already \npointed out, that's pretty small (around 800MB) and you're mostly \nstressing parts of the system that may not change much based on filesystem \nchoice. This is even more true when some of your tests are only using a \nsmall amount of transactions in a short period of time, which means just \nabout everything could still be sitting in memory at the end of the test \nwith the database disks barely used.\n\nIn the example you reference above, a scaling factor of 1000 was used. \nThis makes for a fairly large database of about 16GB. When running in \nthat configuration, as stated he's mostly testing seek performance--you \ncan't hold any significant portion of 16GB in memory, so you're always \nmoving around the disks to find the data needed. It's a completely \ndifferent type of test than what you did.\n\nIf you want to try and replicate the filesystem differences shown on that \npage, start with the bonnie++ tests and see if you get similar results \nthere. It's hard to predict whether you'll see the same differences given \nhow different your RAID setup is from Jeff Baker's tests.\n\nIt's not a quick trip from there to check if an improvement there holds up \nin database use that's like a real-world load. In addition to addressing \nthe scaling factor issue, you'll need to so some basic PostgreSQL \nparameter tuning from the defaults, think about the impact of checkpoints \non your test, and worry about whether your WAL I/O is being done \nefficiently before you get to the point where the database I/O is being \nmeasured usefully at all via pgbench.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD",
"msg_date": "Thu, 26 Apr 2007 00:04:53 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What`s wrong with JFS configuration?"
},
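A minimal form of the bonnie++ starting point Greg suggests, run against each filesystem in turn; the mount point and user below come from the original post and are otherwise assumptions:

    # -d: directory on the filesystem under test, -s: file size in MB
    # (at least 2x RAM so the page cache can't absorb it), -u: user to run as
    bonnie++ -d /fs/jfs -s 4096 -u postgres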
{
"msg_contents": "Jim Nasby wrote:\n\n> On Apr 25, 2007, at 8:51 AM, Paweł Gruszczyński wrote:\n>> where u6 stores Fedora Core 6 operating system, and u0 stores 3 \n>> partitions with ext2, ext3 and jfs filesystem.\n> \n> Keep in mind that drives have a faster data transfer rate at the \n> outer-edge than they do at the inner edge [...]\n\nI've been wondering from time to time if partitions position\ncan be a (probably modest, of course) performance gain factor.\n\nIf I create a partition at the beginning or end of the disk,\nis this going to have a determined platter physical position?\n\nI remember having heard that every manufacturer has its own\nallocation logic.\n\nHas anyone got some information, just for curiosity?\n\n-- \nCosimo\n\n",
"msg_date": "Thu, 26 Apr 2007 10:45:17 +0200",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What`s wrong with JFS configuration?"
},
{
"msg_contents": "Adding -performance back in so others can learn.\n\nOn Apr 26, 2007, at 9:40 AM, Paweł Gruszczyński wrote:\n\n> Jim Nasby napisał(a):\n>> On Apr 25, 2007, at 8:51 AM, Paweł Gruszczyński wrote:\n>>> where u6 stores Fedora Core 6 operating system, and u0 stores 3 \n>>> partitions with ext2, ext3 and jfs filesystem.\n>>\n>> Keep in mind that drives have a faster data transfer rate at the \n>> outer-edge than they do at the inner edge, so if you've got all 3 \n>> filesystems sitting on that array at the same time it's not a fair \n>> test. I heard numbers on the impact of this a *long* time ago and \n>> I think it was in the 10% range, but I could be remembering wrong.\n>>\n>> You'll need to drop each filesystem and create the next one go get \n>> a fair comparison.\n>\n> I thought about it by my situation is not so clear, becouse my hard \n> drive for postgresql data is rather \"logical\" becouse of RAID array \n> i mode 1+0. My RAID Array is divided like this:\n>\n> Device Boot Start End Blocks Id System\n> /dev/sda1 1 159850 163686384 83 Linux\n> /dev/sda2 159851 319431 163410944 83 Linux\n> /dev/sda3 319432 478742 163134464 83 Linux\n>\n> and partitions are:\n>\n> /dev/sda1 ext2 161117780 5781744 147151720 4% /fs/ext2\n> /dev/sda2 ext3 160846452 2147848 150528060 2% /fs/ext3\n> /dev/sda3 jfs 163096512 3913252 159183260 3% /fs/jfs\n>\n> so if RAID 1+0 do not change enything, JFS file system is at third \n> partition wich is at the end of hard drive.\n\nYes, which means that JFS is going to be at a disadvantage to ext3, \nwhich will be at a disadvantage to ext2. You should really re-perform \nthe tests with each filesystem in the same location.\n\n> What about HDD with two magnetic disk`s? Then the speed depending \n> of partition phisical location is more difficult to calculate ;) \n> Propably first is slow, secund is fast in firs halt and slow in \n> secund halt, third is the fastes one. In both cases my JFS partitin \n> should be ath the end on magnetic disk. Am I wrong?\n\nI'm not a HDD expert, but as far as I know the number of platters \ndoesn't change anything. When you have multiple platters, the drive \nessentially splits bytes across all the platters; it doesn't start \nwriting one platter, then switch to another platter.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Fri, 27 Apr 2007 15:08:50 +0100",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What`s wrong with JFS configuration?"
}
] |
[
{
"msg_contents": "I was recently running defrag on my windows/parallels VM and noticed \na bunch of WAL files that defrag couldn't take care of, presumably \nbecause the database was running. What's disturbing to me is that \nthese files all had ~2000 fragments. Now, this was an EnterpriseDB \ndatabase which means the WAL files were 64MB instead of 16MB, but \neven having 500 fragments for a 16MB WAL file seems like it would \ndefinitely impact performance.\n\nCan anyone else confirm this? I don't know if this is a windows-only \nissue, but I don't know of a way to check fragmentation in unix.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Wed, 25 Apr 2007 19:26:07 +0200",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fragmentation of WAL files"
},
{
"msg_contents": "In response to Jim Nasby <[email protected]>:\n\n> I was recently running defrag on my windows/parallels VM and noticed \n> a bunch of WAL files that defrag couldn't take care of, presumably \n> because the database was running. What's disturbing to me is that \n> these files all had ~2000 fragments. Now, this was an EnterpriseDB \n> database which means the WAL files were 64MB instead of 16MB, but \n> even having 500 fragments for a 16MB WAL file seems like it would \n> definitely impact performance.\n\nI don't know about that. I've seen marketing material that claims that\nmodern NTFS doesn't suffer performance problems from fragmentation. I've\nnever tested it myself, but my point is that you might want to do some\nexperiments -- you might find out that it doesn't make any difference.\n\nIf it does, you should be able to stop the DB, defragment the files, then\nstart the DB back up. Since WAL files are recycled, they shouldn't\nfragment again -- unless I'm missing something.\n\nIf that works, it may indicate that (on Windows) a good method for installing\nis to create all the necessary WAL files as empty files before launching\nthe DB.\n\n> Can anyone else confirm this? I don't know if this is a windows-only \n> issue, but I don't know of a way to check fragmentation in unix.\n\nI can confirm that it's only a Windows problem. No UNIX filesystem\nthat I'm aware of suffers from fragmentation.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Thu, 26 Apr 2007 08:00:50 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fragmentation of WAL files"
},
{
"msg_contents": "Bill Moran wrote:\n> In response to Jim Nasby <[email protected]>:\n> \n>> I was recently running defrag on my windows/parallels VM and noticed \n>> a bunch of WAL files that defrag couldn't take care of, presumably \n>> because the database was running. What's disturbing to me is that \n>> these files all had ~2000 fragments. Now, this was an EnterpriseDB \n>> database which means the WAL files were 64MB instead of 16MB, but \n>> even having 500 fragments for a 16MB WAL file seems like it would \n>> definitely impact performance.\n> \n> I don't know about that. I've seen marketing material that claims that\n> modern NTFS doesn't suffer performance problems from fragmentation. I've\n> never tested it myself, but my point is that you might want to do some\n> experiments -- you might find out that it doesn't make any difference.\n> \n> If it does, you should be able to stop the DB, defragment the files, then\n> start the DB back up. Since WAL files are recycled, they shouldn't\n> fragment again -- unless I'm missing something.\n> \n> If that works, it may indicate that (on Windows) a good method for installing\n> is to create all the necessary WAL files as empty files before launching\n> the DB.\n\nIf that turns out to be a problem, I wonder if it would help to expand \nthe WAL file to full size with ftruncate or something similar, instead \nof growing it page by page.\n\n>> Can anyone else confirm this? I don't know if this is a windows-only \n>> issue, but I don't know of a way to check fragmentation in unix.\n> \n> I can confirm that it's only a Windows problem. No UNIX filesystem\n> that I'm aware of suffers from fragmentation.\n\nWhat do you mean by suffering? All filesystems fragment files at some \npoint. When and how differs from filesystem to filesystem. And some \nfilesystems might be smarter than others in placing the fragments.\n\nThere's a tool for Linux in the e2fsprogs package called filefrag that \nshows the fragmentation of a file, but I've never used it myself.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Thu, 26 Apr 2007 13:10:21 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fragmentation of WAL files"
},
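For reference, a quick way to look at WAL segment fragmentation on Linux with the filefrag tool Heikki mentions (part of e2fsprogs, typically needs root); the data directory path below is only an example:

    # one summary line per WAL segment, reporting its number of extents
    filefrag /var/lib/pgsql/data/pg_xlog/*
    # add -v to a single segment to list every extent individually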
{
"msg_contents": "In response to Heikki Linnakangas <[email protected]>:\n\n[snip]\n\n> >> Can anyone else confirm this? I don't know if this is a windows-only \n> >> issue, but I don't know of a way to check fragmentation in unix.\n> > \n> > I can confirm that it's only a Windows problem. No UNIX filesystem\n> > that I'm aware of suffers from fragmentation.\n> \n> What do you mean by suffering? All filesystems fragment files at some \n> point. When and how differs from filesystem to filesystem. And some \n> filesystems might be smarter than others in placing the fragments.\n\nTo clarify my viewpoint:\nTo my knowledge, there is no Unix filesystem that _suffers_ from\nfragmentation. Specifically, all filessytems have some degree of\nfragmentation that occurs, but every Unix filesystem that I am aware of\nhas built-in mechanisms to mitigate this and prevent it from becoming\na performance issue.\n\n> There's a tool for Linux in the e2fsprogs package called filefrag that \n> shows the fragmentation of a file, but I've never used it myself.\n\nInteresting. However, the existence of a tool does not particularly\nindicated the _need_ for said tool. It might just have been something\ncool that somebody wrote.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Thu, 26 Apr 2007 09:14:32 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Filesystem fragmentation (Re: Fragmentation of WAL files)"
},
{
"msg_contents": "> In response to Jim Nasby <[email protected]>:\n>> I was recently running defrag on my windows/parallels VM and noticed \n>> a bunch of WAL files that defrag couldn't take care of, presumably \n>> because the database was running. What's disturbing to me is that \n>> these files all had ~2000 fragments.\n\nIt sounds like that filesystem is too stupid to coalesce successive\nwrite() calls into one allocation fragment :-(. I agree with the\ncomments that this might not be important, but you could experiment\nto see --- try increasing the size of \"zbuffer\" in XLogFileInit to\nmaybe 16*XLOG_BLCKSZ, re-initdb, and see if performance improves.\n\nThe suggestion to use ftruncate is so full of holes that I won't\nbother to point them all out, but certainly we could write more than\njust XLOG_BLCKSZ at a time while preparing the file.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2007 11:37:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fragmentation of WAL files "
},
{
"msg_contents": "Bill Moran wrote:\n> In response to Heikki Linnakangas <[email protected]>:\n>>>> Can anyone else confirm this? I don't know if this is a windows-only \n>>>> issue, but I don't know of a way to check fragmentation in unix.\n>>> I can confirm that it's only a Windows problem. No UNIX filesystem\n>>> that I'm aware of suffers from fragmentation.\n>> What do you mean by suffering? All filesystems fragment files at some \n>> point. When and how differs from filesystem to filesystem. And some \n>> filesystems might be smarter than others in placing the fragments.\n> \n> To clarify my viewpoint:\n> To my knowledge, there is no Unix filesystem that _suffers_ from\n> fragmentation. Specifically, all filessytems have some degree of\n> fragmentation that occurs, but every Unix filesystem that I am aware of\n> has built-in mechanisms to mitigate this and prevent it from becoming\n> a performance issue.\n\nMore specifically, this problem was solved on UNIX file systems way back in the 1970's and 1980's. No UNIX file system (including Linux) since then has had significant fragmentation problems, unless the file system gets close to 100% full. If you run below 90% full, fragmentation shouldn't ever be a significant performance problem.\n\nThe word \"fragmentation\" would have dropped from the common parlance if it weren't for MS Windoz.\n\nCraig\n",
"msg_date": "Thu, 26 Apr 2007 09:35:02 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem fragmentation (Re: Fragmentation of WAL\n files)"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n\n> More specifically, this problem was solved on UNIX file systems way back in the\n> 1970's and 1980's. No UNIX file system (including Linux) since then has had\n> significant fragmentation problems, unless the file system gets close to 100%\n> full. If you run below 90% full, fragmentation shouldn't ever be a significant\n> performance problem.\n\nNote that the main technique used to avoid fragmentation -- paradoxically --\nis to break the file up into reasonable sized chunks. This allows the\nfilesystem the flexibility to place the chunks efficiently.\n\nIn the case of a performance-critical file like the WAL that's always read\nsequentially it may be to our advantage to defeat this technique and force it\nto be allocated sequentially. I'm not sure whether any filesystems provide any\noption to do so.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Thu, 26 Apr 2007 17:47:29 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem fragmentation (Re: Fragmentation of WAL files)"
},
{
"msg_contents": "Gregory Stark <[email protected]> writes:\n> In the case of a performance-critical file like the WAL that's always read\n> sequentially it may be to our advantage to defeat this technique and force it\n> to be allocated sequentially. I'm not sure whether any filesystems provide any\n> option to do so.\n\nWe more or less do that already by filling the entire file in one go\nwhen it's created ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2007 13:49:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem fragmentation (Re: Fragmentation of WAL files) "
},
{
"msg_contents": "On Thu, 26 Apr 2007, Bill Moran wrote:\n\n> I've seen marketing material that claims that modern NTFS doesn't suffer \n> performance problems from fragmentation.\n\nYou're only reading half of the marketing material then. For a balanced \npicture, read the stuff generated by the companies that sell defragmenting \ntools. A good one to start with is \nhttp://files.diskeeper.com/pdf/HowFileFragmentationOccursonWindowsXP.pdf\n\nGoing back to the Jim's original question, they suggest a Microsoft paper \nthat talks about how the defrag report can be misleading in respect to \nopen files. See http://support.microsoft.com/kb/228198\n\nAlso, some of the most interesting details they gloss over are specific to \nwhich version of Windows you're using; the reference guide to the subject \nof how NTFS decides how much space to pre-allocate at a time is available \nat http://support.microsoft.com/kb/841551 (ZIP file wrapped into EXE, \nyuck!)\n\nIf you compare them, you can see that the approach they're using in NTFS \nhas evolved to become more like that used by a good UNIX filesystem over \ntime. I think your typical UNIX still has a healthy lead in this area, \nbut the difference isn't as big as it used to be.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Fri, 27 Apr 2007 00:50:55 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fragmentation of WAL files"
},
{
"msg_contents": "* Bill Moran:\n\n> To clarify my viewpoint:\n> To my knowledge, there is no Unix filesystem that _suffers_ from\n> fragmentation. Specifically, all filessytems have some degree of\n> fragmentation that occurs, but every Unix filesystem that I am aware of\n> has built-in mechanisms to mitigate this and prevent it from becoming\n> a performance issue.\n\nOne database engine tends to create a huge number of fragments because\nthe files are written with holes in them. There is a significant\nimpact on sequential reads, but that doesn't matter much because the\nengine doesn't implement fast, out-of-order B-tree scans anyway. 8-/\n\nI still think that preallocating in reasonably sized chunks is\nbeneficial.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Fri, 27 Apr 2007 09:39:28 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filesystem fragmentation (Re: Fragmentation of WAL files)"
}
] |
[
{
"msg_contents": "The outer track / inner track performance ratio is more like 40 percent. Recent example is 78MB/s outer and 44MB/s inner for the new Seagate 750MB drive (see http://www.storagereview.com for benchmark results)\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tJim Nasby [mailto:[email protected]]\nSent:\tThursday, April 26, 2007 03:53 AM Eastern Standard Time\nTo:\tPawel Gruszczynski\nCc:\[email protected]\nSubject:\tRe: [PERFORM] What`s wrong with JFS configuration?\n\nOn Apr 25, 2007, at 8:51 AM, Pawel Gruszczynski wrote:\n> where u6 stores Fedora Core 6 operating system, and u0 stores 3 \n> partitions with ext2, ext3 and jfs filesystem.\n\nKeep in mind that drives have a faster data transfer rate at the \nouter-edge than they do at the inner edge, so if you've got all 3 \nfilesystems sitting on that array at the same time it's not a fair \ntest. I heard numbers on the impact of this a *long* time ago and I \nthink it was in the 10% range, but I could be remembering wrong.\n\nYou'll need to drop each filesystem and create the next one to get a \nfair comparison.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n\n\n\nRe: [PERFORM] What`s wrong with JFS configuration?\n\n\n\nThe outer track / inner track performance ratio is more like 40 percent. Recent example is 78MB/s outer and 44MB/s inner for the new Seagate 750MB drive (see http://www.storagereview.com for benchmark results)\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: Jim Nasby [mailto:[email protected]]\nSent: Thursday, April 26, 2007 03:53 AM Eastern Standard Time\nTo: Pawel Gruszczynski\nCc: [email protected]\nSubject: Re: [PERFORM] What`s wrong with JFS configuration?\n\nOn Apr 25, 2007, at 8:51 AM, Pawel Gruszczynski wrote:\n> where u6 stores Fedora Core 6 operating system, and u0 stores 3 \n> partitions with ext2, ext3 and jfs filesystem.\n\nKeep in mind that drives have a faster data transfer rate at the \nouter-edge than they do at the inner edge, so if you've got all 3 \nfilesystems sitting on that array at the same time it's not a fair \ntest. I heard numbers on the impact of this a *long* time ago and I \nthink it was in the 10% range, but I could be remembering wrong.\n\nYou'll need to drop each filesystem and create the next one to get a \nfair comparison.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match",
"msg_date": "Thu, 26 Apr 2007 04:17:39 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What`s wrong with JFS configuration?"
}
] |
[
{
"msg_contents": "Dear,\nWe are facing performance tuning problem while using PostgreSQL Database \nover the network on a linux OS.\nOur Database consists of more than 500 tables with an average of 10K \nrecords per table with an average of 20 users accessing the database \nsimultaneously over the network. Each table has indexes and we are \nquerying the database using Hibernate.\nThe biggest problem is while insertion, updating and fetching of records, \nie the database performance is very slow. It take a long time to respond \nin the above scenario.\nPlease provide me with the tuning of the database. I am attaching my \npostgresql.conf file for the reference of our current configuration\n\n\n\nPlease replay me ASAP\nRegards,\n Shohab Abdullah \n Software Engineer,\n Manufacturing SBU-POWAI\n Larsen and Toubro Infotech Ltd.| 4th floor, L&T Technology Centre, \nSaki Vihar Road, Powai, Mumbai-400072\n (: +91-22-67767366 | (: +91-9870247322\n Visit us at : http://www.lntinfotech.com \n”I cannot predict future, I cannot change past, I have just the present \nmoment, I must treat it as my last\" \n----------------------------------------------------------------------------------------\nThe information contained in this email has been classified: \n[ X] L&T Infotech General Business\n[ ] L&T Infotech Internal Use Only\n[ ] L&T Infotech Confidential\n[ ] L&T Infotech Proprietary\nThis e-mail and any files transmitted with it are for the sole use of the \nintended recipient(s) and may contain confidential and privileged \ninformation.\nIf you are not the intended recipient, please contact the sender by reply \ne-mail and destroy all copies of the original message.\n\n______________________________________________________________________",
"msg_date": "Thu, 26 Apr 2007 16:49:58 +0530",
"msg_from": "Shohab Abdullah <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL Performance Tuning"
},
{
"msg_contents": "Shohab Abdullah wrote:\n> \n> Dear,\n> We are facing performance tuning problem while using PostgreSQL Database\n> over the network on a linux OS.\n> Our Database consists of more than 500 tables with an average of 10K\n> records per table with an average of 20 users accessing the database\n> simultaneously over the network. Each table has indexes and we are\n> querying the database using Hibernate.\n> The biggest problem is while insertion, updating and fetching of\n> records, ie the database performance is very slow. It take a long time\n> to respond in the above scenario.\n> Please provide me with the tuning of the database. I am attaching my\n> *postgresql.conf* file for the reference of our current configuration\n\nHave you changed _anything_ from the defaults? The defaults are set so\nPG will run on as many installations as practical. They are not set for\nperformance - that is specific to your equipment, your data, and how you\nneed to handle the data. Assuming the record sizes aren't huge, that's\nnot a very large data set nor number of users.\n\nLook at these for starters:\nhttp://www.varlena.com/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/GeneralBits/Tidbits/annotated_conf_e.html\n\nYou might try setting the logging parameters to log queries longer than\n\"x\" (where x is in milliseconds - you will have to decide the\nappropriate value for \"too long\") and start looking into those first.\n\nMake sure that you are running \"analyze\" if it is not being run by\nautovacuum.\n\nUse \"EXPLAIN <your query>\" to see how the query is being planned - as a\nfirst-pass assume that on any reasonably sized table the words\n\"sequential scan\" means \"fix this\". Note that you may have to cast\nvariables in a query to match the variable in an index in order for the\nplanner to figure out that it can use the index.\n\nRead the guidelines then take an educated stab at some settings and see\nhow they work - other than turning off fsync, there's not much in\npostgresql.conf that will put your data at risk.\n\nCheers,\nSteve\n",
"msg_date": "Thu, 26 Apr 2007 12:49:27 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] PostgreSQL Performance Tuning"
},
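A small sketch of the "log queries longer than x" and EXPLAIN advice above; the 200 ms threshold, data directory, database name and query are all placeholders:

    # in postgresql.conf (config file, not shell -- shown here as comments):
    #   log_min_duration_statement = 200   # log any statement slower than 200 ms
    #   log_line_prefix = '%t [%p] '       # timestamps make the log readable
    # reload the configuration without a restart:
    pg_ctl -D /var/lib/pgsql/data reload
    # then inspect a slow statement's plan (EXPLAIN ANALYZE actually runs it):
    psql -d yourdb -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42"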
{
"msg_contents": "Steve Crawford wrote:\n> Have you changed _anything_ from the defaults? The defaults are set so\n> PG will run on as many installations as practical. They are not set for\n> performance - that is specific to your equipment, your data, and how you\n> need to handle the data. \nIs this really the sensible thing to do? I know we should not encourage\nthe world we're leaving in even more in the ways of \"have the computer\ndo everything for us so that we don't need to have even a clue about what\nwe're doing\" ... But, wouldn't it make sense that the configure script\ndetermines the amount of physical memory and perhaps even do a HD\nspeed estimate to set up defaults that are closer to a \nperformance-optimized\nconfiguration?\n\nThen, perhaps command switches so that you could specify the type of\naccess you estimate for your system. Perhaps something like:\n\n./configure --db-size=100GB --write-percentage=20 .... etc.\n\n(switch write-percentage above indicates that we estimate that 20% of\nthe DB activity would be writing to the disk --- there may be other\nswitches to indicate the percentage of queries that are transactions,\nthe percentage of queries that are complex; percentage that require\nindex usage, etc. etc. etc.)... And then, based on that, a better set of\ndefaults could be set by the configuration script.\n\nDoes this make sense? Or perhaps I'm watching too much science\nfiction?\n\nCarlos\n--\n\n",
"msg_date": "Thu, 26 Apr 2007 21:58:53 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Carlos Moreno <[email protected]> writes:\n> ... But, wouldn't it make sense that the configure script\n> determines the amount of physical memory and perhaps even do a HD\n> speed estimate to set up defaults that are closer to a \n> performance-optimized\n> configuration?\n\nNo. Most copies of Postgres these days are executed on machines very\nfar away from where the code was built. It's a little bit safer to\ntry to tune things at initdb time ... as indeed we already do. But\nthe fundamental problem remains that we don't know that much about\nhow the installation will be used. For example, the planner\nconfiguration parameters turn out to have not that much to do with the\nabsolute speed of your drive, and a whole lot to do with the ratio\nof the size of your database to the amount of RAM you've got; and the\nultimate size of the DB is one thing initdb certainly can't guess.\n\nAlso, there is an extremely good reason why Postgres will never be set\nup to try to take over the whole machine by default: most of the\ndevelopers run multiple postmasters on their machines.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Apr 2007 22:49:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning "
},
{
"msg_contents": "Tom Lane wrote:\n> Carlos Moreno <[email protected]> writes:\n> \n>> ... But, wouldn't it make sense that the configure script\n>> determines the amount of physical memory and perhaps even do a HD\n>> speed estimate to set up defaults that are closer to a \n>> performance-optimized\n>> configuration?\n>> \n>\n> No. Most copies of Postgres these days are executed on machines very\n> far away from where the code was built. It's a little bit safer to\n> try to tune things at initdb time ... as indeed we already do. \n\nD'oh! Yes, that makes more sense, of course.\n\n> But\n> the fundamental problem remains that we don't know that much about\n> how the installation will be used. \n\nNotice that the second part of my suggestion covers this --- have \nadditional\nswitches to initdb so that the user can tell it about estimates on how \nthe DB\nwill be used: estimated size of the DB, estimated percentage of \nactivity that\nwill involve writing, estimated percentage of activity that will be \ntransactions,\npercentage that will use indexes, percentage of queries that will be \ncomplex,\netc. etc.\n\nWouldn't initdb be able to do a better job at coming up with sensible\ndefaults if it counts on this information? Of course, all these \nparameters\nwould have their own defaults --- the user won't necessarily know or have\nan accurate estimate for each and every one of them.\n\n> Also, there is an extremely good reason why Postgres will never be set\n> up to try to take over the whole machine by default: most of the\n> developers run multiple postmasters on their machines.\n> \nWouldn't this be covered by the above suggestion?? One of the switches\nfor the command initdb could allow the user to specify how many instances\nwill be run (I assume you're talking about having different instances \nlistening\non different ports for increased concurrency-related benefits?)\n\nDoes my suggestion make more sense now? Or is it still too unrealistic to\nmake it work properly/safely?\n\nCarlos\n--\n\n",
"msg_date": "Fri, 27 Apr 2007 09:27:49 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:\n>Notice that the second part of my suggestion covers this --- have \n>additional\n>switches to initdb so that the user can tell it about estimates on how \n>the DB\n>will be used: estimated size of the DB, estimated percentage of \n>activity that\n>will involve writing, estimated percentage of activity that will be \n>transactions,\n>percentage that will use indexes, percentage of queries that will be \n>complex,\n>etc. etc.\n\nIf the person knows all that, why wouldn't they know to just change the \nconfig parameters?\n\nMike Stone\n",
"msg_date": "Fri, 27 Apr 2007 10:30:25 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Carlos Moreno <[email protected]> writes:\n> Tom Lane wrote:\n>> But\n>> the fundamental problem remains that we don't know that much about\n>> how the installation will be used. \n\n> Notice that the second part of my suggestion covers this --- have \n> additional switches to initdb\n\nThat's been proposed and rejected before, too; the main problem being\nthat initdb is frequently a layer or two down from the user (eg,\nexecuted by initscripts that can't pass extra arguments through, even\nassuming they're being invoked by hand in the first place).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Apr 2007 10:36:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning "
},
{
"msg_contents": "Maybe he's looking for a switch for initdb that would make it\ninteractive and quiz you about your expected usage-- sort of a magic\nauto-configurator wizard doohicky? I could see that sort of thing being\nnice for the casual user or newbie who otherwise would have a horribly\nmis-tuned database. They could instead have only a marginally mis-tuned\ndatabase :)\n\nOn Fri, 2007-04-27 at 10:30 -0400, Michael Stone wrote:\n> On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:\n> >Notice that the second part of my suggestion covers this --- have \n> >additional\n> >switches to initdb so that the user can tell it about estimates on how \n> >the DB\n> >will be used: estimated size of the DB, estimated percentage of \n> >activity that\n> >will involve writing, estimated percentage of activity that will be \n> >transactions,\n> >percentage that will use indexes, percentage of queries that will be \n> >complex,\n> >etc. etc.\n> \n> If the person knows all that, why wouldn't they know to just change the \n> config parameters?\n> \n> Mike Stone\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Fri, 27 Apr 2007 07:36:52 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance\n\tTuning"
},
{
"msg_contents": "\nHello.\n\n\nJust my 2 cents, and not looking to the technical aspects:\n\nsetting up PSQL is the weakest point of PSQL as we have experienced ourself,\nonce it is running it is great.\n\nI can imagine that a lot of people of stops after their first trials after\nthey have\nexperienced the troubles and performance of a standard set up.\n\nThis is ofcourse a lost user forever.\n\nSo anything that could be done to get an easier and BETTER setup would\nstrongly enhance PSQL.\n\nMy 2 cents.\n\nHenk Sanders\n\n\n\n-----Oorspronkelijk bericht-----\nVan: [email protected]\n[mailto:[email protected]]Namens Tom Lane\nVerzonden: vrijdag 27 april 2007 16:37\nAan: Carlos Moreno\nCC: PostgreSQL Performance\nOnderwerp: Re: [PERFORM] Feature Request --- was: PostgreSQL Performance\nTuning\n\n\nCarlos Moreno <[email protected]> writes:\n> Tom Lane wrote:\n>> But\n>> the fundamental problem remains that we don't know that much about\n>> how the installation will be used.\n\n> Notice that the second part of my suggestion covers this --- have\n> additional switches to initdb\n\nThat's been proposed and rejected before, too; the main problem being\nthat initdb is frequently a layer or two down from the user (eg,\nexecuted by initscripts that can't pass extra arguments through, even\nassuming they're being invoked by hand in the first place).\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n",
"msg_date": "Fri, 27 Apr 2007 17:03:09 +0200",
"msg_from": "\"H.J. Sanders\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning "
},
{
"msg_contents": "On Fri, Apr 27, 2007 at 07:36:52AM -0700, Mark Lewis wrote:\n>Maybe he's looking for a switch for initdb that would make it\n>interactive and quiz you about your expected usage-- sort of a magic\n>auto-configurator wizard doohicky? I could see that sort of thing being\n>nice for the casual user or newbie who otherwise would have a horribly\n>mis-tuned database. They could instead have only a marginally mis-tuned\n>database :)\n\nHowever you implement it, anyone who can answer all of those questions \nis probably capable of reading and understanding the performance section \nin the manual. \n\nIt's probably more practical to have a seperate script that looks at the \nrunning system (ram, disks, pg config, db size, indices, stats, etc.) \nand makes suggestions--if someone wants to write such a thing.\n\nMike Stone\n",
"msg_date": "Fri, 27 Apr 2007 11:30:33 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Apr 27, 2007, at 3:30 PM, Michael Stone wrote:\n> On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:\n>> Notice that the second part of my suggestion covers this --- have \n>> additional\n>> switches to initdb so that the user can tell it about estimates on \n>> how the DB\n>> will be used: estimated size of the DB, estimated percentage of \n>> activity that\n>> will involve writing, estimated percentage of activity that will \n>> be transactions,\n>> percentage that will use indexes, percentage of queries that will \n>> be complex,\n>> etc. etc.\n>\n> If the person knows all that, why wouldn't they know to just change \n> the config parameters?\n\nBecause knowing your expected workload is a lot easier for many \npeople than knowing what every GUC does.\n\nPersonally, I think it would be a tremendous start if we just \nprovided a few sample configs like MySQL does. Or if someone wanted \nto get fancy they could stick a web page somewhere that would produce \na postgresql.conf based simply on how much available RAM you had, \nsince that's one of the biggest performance-hampering issues we run \ninto (ie: shared_buffers left at the default of 32MB).\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Fri, 27 Apr 2007 17:12:21 +0100",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Michael Stone wrote:\n> On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:\n>> Notice that the second part of my suggestion covers this --- have \n>> additional\n>> switches to initdb\n<snip>\n> If the person knows all that, why wouldn't they know to just change the \n> config parameters?\n>\n\nExactly.. What I think would be much more productive is to use the great amount \nof information that PG tracks internally and auto-tune the parameters based on \nit. For instance:\n\nWhy does the user need to manually track max_fsm_pages and max_fsm_relations? I \nbet there are many users who have never taken the time to understand what this \nmeans and wondering why performance still stinks after vacuuming their database \n( spoken from my own experience )\n\nHow about work_mem? shared_buffers? column statistics sizes? random_page_cost?\n\nCouldn't some fairly simple regression tests akin to a VACUUM process spot \npotential problems? \"Hey, it looks like you need more fsm_relations.. I bumped \nthat up automatically for you\". Or \"These indexes look bloated, shall I \nautomatically reindex them for you?\"\n\nI'm sure there are many more examples, that with some creative thinking, could \nbe auto-adjusted to match the usage patterns of the database. PG does an \nexcellent job of exposing the variables to the users, but mostly avoids telling \nthe user what to do or doing it for them. Instead, it is up to the user to know \nwhere to look, what to look for, and how to react to things to improve \nperformance. This is not all bad, but it is assuming that all users are hackers \n( which used to be true ), but certainly doesn't help when the average SQLServer \nadmin tries out Postgres and then is surprised at the things they are now \nresponsible for managing. PG is certainly *not* the only database to suffer \nfrom this syndrome, I know..\n\nI like to think of my systems as good employees. I don't want to have to \nmicromanage everything they do. I want to tell them \"here's what I want done\", \nand assuming I made a good hiring choice, they will do it and take some liberty \nto adjust parameters where needed to achieve the spirit of the goal, rather than \n blindly do something inefficiently because I failed to explain to them the \nabsolute most efficient way to accomplish the task.\n\nGranted, there are some people who don't like the developers making any \nassumptions about their workload. But this doesn't have to be an either/or \nproposition. I don't think any control needs to be abandoned. But \nself-adjusting defaults seem like an achievable goal ( I know, I know, \"show us \nthe patch\" ). I just don't know if this feeling has resonated well between new \nusers and long-term developers. I know it must be grating to have to answer the \nsame questions over and over and over \"have you analyzed? Did you leave \npostgresql.conf at the defaults??\". Seems like a win-win for both sides, IMHO.\n\nIn closing, I am not bashing PG! I love it and swear by it. These comments are \npurely from an advocacy perspective. I'd love to see PG user base continue to grow.\n\nMy .02\n\n-Dan\n\n\n",
"msg_date": "Fri, 27 Apr 2007 11:47:48 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Dan,\n\n> Exactly.. What I think would be much more productive is to use the\n> great amount of information that PG tracks internally and auto-tune the\n> parameters based on it. For instance:\n\n*Everyone* wants this. The problem is that it's very hard code to write \ngiven the number of variables. I'm working on it but progress is slow, \ndue to my travel schedule.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 27 Apr 2007 10:59:49 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "In response to Dan Harris <[email protected]>:\n\n> Michael Stone wrote:\n> > On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:\n> >> Notice that the second part of my suggestion covers this --- have \n> >> additional\n> >> switches to initdb\n> <snip>\n> > If the person knows all that, why wouldn't they know to just change the \n> > config parameters?\n> \n> Exactly.. What I think would be much more productive is to use the great amount \n> of information that PG tracks internally and auto-tune the parameters based on \n> it. For instance:\n> \n> Why does the user need to manually track max_fsm_pages and max_fsm_relations? I \n> bet there are many users who have never taken the time to understand what this \n> means and wondering why performance still stinks after vacuuming their database \n> ( spoken from my own experience )\n\nBut there are two distinct routes that can be taken if there's not enough\nfsm space: add fsm space or vacuum more frequently. I don't want the system\nto eat up a bunch of memory for fsm entries if my workload indicates that\nI can easily vacuum more frequently.\n\n> How about work_mem? shared_buffers? column statistics sizes? random_page_cost?\n\nThe only one that seems practical (to me) is random_page_cost. The others are\nall configuration options that I (as a DBA) want to be able to decide for\nmyself. For example, I have some dedicated PG servers that I pretty much\nmax those values out at, to let PG know that it can use everything on the\nsystem -- but I also have some shared use machines with PG, where I carefully\nconstrain those values so that PG doesn't muscle other daemons out of their\nshare of the RAM (work_mem is probably the best example)\n\nIt would be nice to have some kind of utility that could tell me what\nrandom_page_cost should be, as I've never felt comfortable tweaking it.\nLike some utility to run that would say \"based on the seek tests I just\nran, you should set random_page_cost to x\". Of course, if such a thing\nexisted, it could just fill in the value for you. But I haven't figured\nout how to pick a good value for that setting, so I have no idea how to\nsuggest to have it automatically set.\n\n> Couldn't some fairly simple regression tests akin to a VACUUM process spot \n> potential problems? \"Hey, it looks like you need more fsm_relations.. I bumped \n> that up automatically for you\". Or \"These indexes look bloated, shall I \n> automatically reindex them for you?\"\n\nA lot of that stuff does happen. A vacuum verbose will tell you what it\nthinks you should do, but I don't _want_ it to do it automatically. What\nif I create huge temporary tables once a week for some sort of analysis that\noverload the fsm space? And if I'm dropping those tables when the analysis\nis done, do I want the fsm space constantly adjusting?\n\nPlus, some is just impossible. shared_buffers requires a restart. Do you\nwant your DB server spontaneously restarting because it thought more\nbuffers might be nice?\n\n> I'm sure there are many more examples, that with some creative thinking, could \n> be auto-adjusted to match the usage patterns of the database. PG does an \n> excellent job of exposing the variables to the users, but mostly avoids telling \n> the user what to do or doing it for them. Instead, it is up to the user to know \n> where to look, what to look for, and how to react to things to improve \n> performance. 
This is not all bad, but it is assuming that all users are hackers \n> ( which used to be true ), but certainly doesn't help when the average SQLServer \n> admin tries out Postgres and then is surprised at the things they are now \n> responsible for managing. PG is certainly *not* the only database to suffer \n> from this syndrome, I know..\n\nI expect the suffering is a result of the fact that databases are non-trivial\npieces of software, and there's no universally simple way to set them up\nand make them run well.\n\n> I like to think of my systems as good employees. I don't want to have to \n> micromanage everything they do. I want to tell them \"here's what I want done\", \n> and assuming I made a good hiring choice, they will do it and take some liberty \n> to adjust parameters where needed to achieve the spirit of the goal, rather than \n> blindly do something inefficiently because I failed to explain to them the \n> absolute most efficient way to accomplish the task.\n\nThat's silly. No software does that. You're asking software to behave like\nhumans. If that were the case, this would be Isaac Asimov's world, not the\nreal one.\n\n> Granted, there are some people who don't like the developers making any \n> assumptions about their workload. But this doesn't have to be an either/or \n> proposition. I don't think any control needs to be abandoned. But \n> self-adjusting defaults seem like an achievable goal ( I know, I know, \"show us \n> the patch\" ). I just don't know if this feeling has resonated well between new \n> users and long-term developers. I know it must be grating to have to answer the \n> same questions over and over and over \"have you analyzed? Did you leave \n> postgresql.conf at the defaults??\". Seems like a win-win for both sides, IMHO.\n\nWell, it seems like this is happening where it's practical -- autovacuum is\na good example.\n\nPersonally, I wouldn't be opposed to more automagic stuff, just as long as\nI have the option to disable it. There are some cases where I still\ndisable autovac.\n\n> In closing, I am not bashing PG! I love it and swear by it. These comments are \n> purely from an advocacy perspective. I'd love to see PG user base continue to grow.\n\nI expect that part of the problem is \"who's going to do it?\"\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Fri, 27 Apr 2007 14:11:27 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance\n Tuning"
},
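A hedged sketch of the check Bill alludes to: on 8.x a database-wide VACUUM VERBOSE ends with a free space map summary that can be compared against the configured limits (the database name is a placeholder; the INFO lines arrive on stderr, hence the redirect):

    psql -d yourdb -c "VACUUM VERBOSE" 2>&1 | tail -n 6
    psql -d yourdb -c "SHOW max_fsm_pages"
    psql -d yourdb -c "SHOW max_fsm_relations"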
{
"msg_contents": "At 10:36a -0400 on 27 Apr 2007, Tom Lane wrote:\n> That's been proposed and rejected before, too; the main problem being\n> that initdb is frequently a layer or two down from the user (eg,\n> executed by initscripts that can't pass extra arguments through, even\n> assuming they're being invoked by hand in the first place).\n\nAnd following after Dan Harris' response . . .\n\nSo what's the problem with having some sort of cronjob contrib module \nthat utilizes the actual and current statistics to make \nrecommendations? I don't think it'd be right to simply change the \nconfiguration options as it sees fit (especially as it was pointed \nout that many run multiple postmasters or have other uses for the \nmachines in question), but perhaps it could send a message (email?) \nalong the lines of \"Hey, I'm currently doing this many of X \ntransactions, against this much of Y data, and working under these \nconstraints. You might get better performance (in this area ... ) if \nyou altered the the configurations options like so: ...\"\n\nCertainly not for the masters, but perhaps for standard installation \nsort of deals, sort of liking bringing up the rear . . . just a thought.\n\nKevin\n",
"msg_date": "Fri, 27 Apr 2007 14:40:07 -0400",
"msg_from": "Kevin Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning "
},
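The cron-driven advisor Kevin describes would not need much machinery to get off the ground. A minimal sketch, assuming psycopg2 and the standard pg_stat_database view; the 90% threshold, the connection string, and the wording of the advice are invented for illustration:

    #!/usr/bin/env python
    # Hypothetical cron-driven "advisor": read a few pg_stat counters and
    # print (or mail) a suggestion instead of silently changing anything.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    cur = conn.cursor()
    cur.execute("""
        SELECT datname,
               xact_commit + xact_rollback AS xacts,
               blks_read, blks_hit
          FROM pg_stat_database
    """)
    for datname, xacts, blks_read, blks_hit in cur.fetchall():
        total = blks_read + blks_hit
        if total == 0:
            continue
        hit_ratio = blks_hit / float(total)
        if hit_ratio < 0.90:                     # threshold is made up
            print("%s: %d transactions, buffer hit ratio %.1f%% -- consider"
                  " raising shared_buffers / effective_cache_size"
                  % (datname, xacts, 100 * hit_ratio))
    conn.close()

Wiring the output into cron and mail is left out; the point is only that the statistics such a module would react to are already collected.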
{
"msg_contents": "On Fri, Apr 27, 2007 at 02:40:07PM -0400, Kevin Hunter wrote:\n> out that many run multiple postmasters or have other uses for the \n> machines in question), but perhaps it could send a message (email?) \n> along the lines of \"Hey, I'm currently doing this many of X \n> transactions, against this much of Y data, and working under these \n> constraints. You might get better performance (in this area ... ) if \n> you altered the the configurations options like so: ...\"\n\n\nor storing the values in the db for later trending analysis, witness \nora statspack.\n",
"msg_date": "Fri, 27 Apr 2007 14:52:25 -0400",
"msg_from": "Ray Stell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
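A rough sketch of the statspack-style trending Ray mentions: snapshot a few cumulative counters into an ordinary table so that deltas can be graphed later. The perf_snapshot table and its columns are invented for this example, it assumes psycopg2, and the table is assumed to have been created once beforehand:

    # One-time setup (run once in psql), columns chosen arbitrarily:
    #   CREATE TABLE perf_snapshot (
    #       taken_at    timestamptz DEFAULT now(),
    #       datname     name,
    #       xact_commit bigint,
    #       blks_read   bigint,
    #       blks_hit    bigint);
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    cur = conn.cursor()
    cur.execute("""
        INSERT INTO perf_snapshot (datname, xact_commit, blks_read, blks_hit)
        SELECT datname, xact_commit, blks_read, blks_hit
          FROM pg_stat_database
    """)
    conn.commit()
    conn.close()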
{
"msg_contents": "Bill,\n\n> The only one that seems practical (to me) is random_page_cost. The\n> others are all configuration options that I (as a DBA) want to be able\n> to decide for myself. \n\nActually, random_page_cost *should* be a constant \"4.0\" or \"3.5\", which \nrepresents the approximate ratio of seek/scan speed which has been \nrelatively constant across 6 years of HDD technology. The only reason we \nmake it a configuration variable is that there's defects in our cost model \nwhich cause users to want to tinker with it.\n\nMind you, that's gotten better in recent versions as well. Lately I mostly \ntinker with effective_cache_size and the various cpu_* stats rather than \nmodifying random_page_cost.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 27 Apr 2007 12:11:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
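The seek/scan ratio Josh is talking about can at least be eyeballed with a crude benchmark: compare the per-block cost of reading a large file sequentially against reading random 8kB blocks from it. The file path and sample count below are placeholders, the file has to be much larger than RAM for the numbers to mean anything, and this only illustrates the ratio; it is not how the planner's cost model derives random_page_cost:

    # Crude seek-vs-scan test: per-8kB cost of sequential vs. random reads.
    import os, random, time

    PATH, BLOCK = "/var/tmp/bigfile", 8192       # placeholder test file
    nblocks = os.path.getsize(PATH) // BLOCK

    f = os.open(PATH, os.O_RDONLY)
    t0 = time.time()
    blocks = 0
    while os.read(f, BLOCK):                     # sequential scan of the file
        blocks += 1
    seq = (time.time() - t0) / blocks

    samples = 2000
    t0 = time.time()
    for _ in range(samples):                     # random single-page fetches
        os.lseek(f, random.randrange(nblocks) * BLOCK, os.SEEK_SET)
        os.read(f, BLOCK)
    rand = (time.time() - t0) / samples
    os.close(f)

    print("random/sequential cost ratio: %.1f" % (rand / seq))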
{
"msg_contents": "Bill Moran wrote:\n> In response to Dan Harris <[email protected]>:\n<snip>\n>> Why does the user need to manually track max_fsm_pages and max_fsm_relations? I \n>> bet there are many users who have never taken the time to understand what this \n>> means and wondering why performance still stinks after vacuuming their database \n>> ( spoken from my own experience )\n> \n> But there are two distinct routes that can be taken if there's not enough\n> fsm space: add fsm space or vacuum more frequently. I don't want the system\n> to eat up a bunch of memory for fsm entries if my workload indicates that\n> I can easily vacuum more frequently.\n\nThere's no magic bullet here, but heuristics should be able to tell us you can \n\"easily vacuum more frequently\" And again, I said these things would be \n*optional*. Like an item in postgresql.conf \n\"i_have_read_the_manual_and_know_what_this_all_means = false #default false\". \nIf you change it to true, you have all the control you're used to and nothing \nwill get in your way.\n\n> \n>> How about work_mem? shared_buffers? column statistics sizes? random_page_cost?\n> \n> The only one that seems practical (to me) is random_page_cost. The others are\n> all configuration options that I (as a DBA) want to be able to decide for\n> myself. For example, I have some dedicated PG servers that I pretty much\n> max those values out at, to let PG know that it can use everything on the\n> system -- but I also have some shared use machines with PG, where I carefully\n> constrain those values so that PG doesn't muscle other daemons out of their\n> share of the RAM (work_mem is probably the best example)\n> \n\nJust because you carefully constrain it does not preclude the ability for \nprogram logic to maintain statistics to do what I suggested.\n\n> It would be nice to have some kind of utility that could tell me what\n> random_page_cost should be, as I've never felt comfortable tweaking it.\n> Like some utility to run that would say \"based on the seek tests I just\n> ran, you should set random_page_cost to x\". Of course, if such a thing\n> existed, it could just fill in the value for you. But I haven't figured\n> out how to pick a good value for that setting, so I have no idea how to\n> suggest to have it automatically set.\n\nMe either, but I thought if there's a reason it's user-settable, there must be \nsome demonstrable method for deciding what is best.\n\n> \n>> Couldn't some fairly simple regression tests akin to a VACUUM process spot \n>> potential problems? \"Hey, it looks like you need more fsm_relations.. I bumped \n>> that up automatically for you\". Or \"These indexes look bloated, shall I \n>> automatically reindex them for you?\"\n> \n> A lot of that stuff does happen. A vacuum verbose will tell you what it\n> thinks you should do, but I don't _want_ it to do it automatically. What\n> if I create huge temporary tables once a week for some sort of analysis that\n> overload the fsm space? And if I'm dropping those tables when the analysis\n> is done, do I want the fsm space constantly adjusting?\n\nI understand *you* don't want it done automatically. But my suspicion is that \nthere are a lot more newbie pg admins who would rather let the system do \nsomething sensible as a default. Again, you sound defensive that somehow my \nideas would take power away from you. I'm not sure why that is, but certainly \nI'm not suggesting that. 
An auto-pilot mode is not a bad idea just because a \nfew pilots don't want to use it.\n\n> \n> Plus, some is just impossible. shared_buffers requires a restart. Do you\n> want your DB server spontaneously restarting because it thought more\n> buffers might be nice?\n\nWell, maybe look at the bigger picture and see if it can be fixed to *not* \nrequire a program restart? Or.. take effect on the next pid that gets created? \n This is a current limitation, but doesn't need to be one for eternity does it?\n\n> \n>> I'm sure there are many more examples, that with some creative thinking, could \n>> be auto-adjusted to match the usage patterns of the database. PG does an \n>> excellent job of exposing the variables to the users, but mostly avoids telling \n>> the user what to do or doing it for them. Instead, it is up to the user to know \n>> where to look, what to look for, and how to react to things to improve \n>> performance. This is not all bad, but it is assuming that all users are hackers \n>> ( which used to be true ), but certainly doesn't help when the average SQLServer \n>> admin tries out Postgres and then is surprised at the things they are now \n>> responsible for managing. PG is certainly *not* the only database to suffer \n>> from this syndrome, I know..\n> \n> I expect the suffering is a result of the fact that databases are non-trivial\n> pieces of software, and there's no universally simple way to set them up\n> and make them run well.\n\nSpeaking as a former SQL Server admin ( from day 1 of the Sybase fork up to \nversion 2000 ), I can say there *is* a way to make them simple. It's certainly \nnot a perfect piece of software, but the learning curve speaks for itself. It \ncan auto-shrink your databases ( without locking problems ). Actually it pretty \nmuch runs itself. It auto-allocates RAM for you ( up to the ceiling *you* \ncontrol ). It automatically re-analyzes itself.. I was able to successfully \nmanage several servers with not insignificant amounts of data in them for many \nyears without being a trained DBA. After switching to PG, I found myself having \nto twiddle with all sorts of settings that seemed like it should just know about \nwithout me having to tell it.\n\nI'm not saying it was simple to make it do that. MS has invested LOTS of money \nand effort into making it that way. I don't expect PG to have features like \nthat tomorrow or even next release. But, I feel it's important to make sure \nthat those who *can* realistically take steps in that direction understand this \npoint of view ( and with Josh's other reply to this, I think many do ).\n\n> \n>> I like to think of my systems as good employees. I don't want to have to \n>> micromanage everything they do. I want to tell them \"here's what I want done\", \n>> and assuming I made a good hiring choice, they will do it and take some liberty \n>> to adjust parameters where needed to achieve the spirit of the goal, rather than \n>> blindly do something inefficiently because I failed to explain to them the \n>> absolute most efficient way to accomplish the task.\n> \n> That's silly. No software does that. You're asking software to behave like\n> humans. If that were the case, this would be Isaac Asimov's world, not the\n> real one.\n\nIt's not silly. There are plenty of systems that do that. Maybe you just \nhaven't used them. Again, SQL Server did a lot of those things for me. I \ndidn't have to fiddle with checkboxes or multi-select tuning options. It \nlearned what its load was and reacted appropriately. 
I never had to stare at \nplanner outputs and try and figure out why the heck did it choose that plan. \nAlthough, I certainly could have if I wanted to. It has a tool called the SQL \nProfiler which will \"watch\" your workload on the database, do regression testing \nand suggest ( and optionally implement with a single click ) indexes on your \ntables. I've been wanting to do this for years with PG, and had a small start \non a project to do just that actually.\n\n> \n>> Granted, there are some people who don't like the developers making any \n>> assumptions about their workload. But this doesn't have to be an either/or \n>> proposition. I don't think any control needs to be abandoned. But \n>> self-adjusting defaults seem like an achievable goal ( I know, I know, \"show us \n>> the patch\" ). I just don't know if this feeling has resonated well between new \n>> users and long-term developers. I know it must be grating to have to answer the \n>> same questions over and over and over \"have you analyzed? Did you leave \n>> postgresql.conf at the defaults??\". Seems like a win-win for both sides, IMHO.\n> \n> Well, it seems like this is happening where it's practical -- autovacuum is\n> a good example.\n\nAgreed, this is a huge step forward. And again, I'm not taking an offensive \nposture on this. Just that I think it's worth giving my .02 since I have had \nstrong feelings about this for awhile.\n\n> \n> Personally, I wouldn't be opposed to more automagic stuff, just as long as\n> I have the option to disable it. There are some cases where I still\n> disable autovac.\n> \n>> In closing, I am not bashing PG! I love it and swear by it. These comments are \n>> purely from an advocacy perspective. I'd love to see PG user base continue to grow.\n> \n> I expect that part of the problem is \"who's going to do it?\"\n> \n\nYes, this is the classic problem. I'm not demanding anyone pick up the ball and \njump on this today, tomorrow, etc.. I just think it would be good for those who \n*could* make a difference to keep those goals in mind when they continue. If \nyou have the right mindset, this problem will fix itself over time.\n\n-Dan\n",
"msg_date": "Fri, 27 Apr 2007 14:27:51 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Dan,\n\n> Yes, this is the classic problem. I'm not demanding anyone pick up the\n> ball and jump on this today, tomorrow, etc.. I just think it would be\n> good for those who *could* make a difference to keep those goals in mind\n> when they continue. If you have the right mindset, this problem will\n> fix itself over time.\n\nDon't I wish. Autotuning is *hard*. It took Oracle 6 years. It took \nMicrosoft 3-4 years, and theirs still has major issues last I checked. And \nboth of those DBs support less OSes than we do. I think it's going to \ntake more than the *right mindset* and my spare time.\n\n> I appreciate your efforts in this regard. Do you have a formal project\n> plan for this? If you can share it with me, I'll take a look and see if\n> there is anything I can do to help out.\n\nNope, just some noodling around on the configurator:\nwww.pgfoundry.org/projects/configurator\n\n> I am on the verge of starting a Java UI that will query a bunch of the\n> pg_* tables and give the user information about wasted table space,\n> index usage, table scans, slow-running queries and spoon-feed it in a\n> nice attractive interface that can be a real-time system monitor tool. \n> This could be a cooperative project or might have some redundancy with\n> what you're up to.\n\nI'd be *very* interested in collaborating with you on this. Further, we \ncould feed DTrace (& systemtap?) into the interface to get data that \nPostgreSQL doesn't currently produce.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 27 Apr 2007 17:46:17 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
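For the kind of monitoring UI Dan mentions (or the configurator itself), here are a couple of the pg_* queries it could start from, sketched with psycopg2; the column names are those of the standard statistics views:

    # Tables that are mostly sequentially scanned, and indexes never used
    # since the statistics were last reset.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    cur = conn.cursor()

    cur.execute("""
        SELECT relname, seq_scan, idx_scan
          FROM pg_stat_user_tables
         WHERE seq_scan > 0
         ORDER BY seq_scan DESC LIMIT 10
    """)
    print("Most sequentially scanned tables:")
    for relname, seq_scan, idx_scan in cur.fetchall():
        print("  %s  seq_scan=%s idx_scan=%s" % (relname, seq_scan, idx_scan))

    cur.execute("""
        SELECT relname, indexrelname
          FROM pg_stat_user_indexes
         WHERE idx_scan = 0
    """)
    print("Indexes that have never been used:")
    for relname, indexrelname in cur.fetchall():
        print("  %s.%s" % (relname, indexrelname))
    conn.close()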
{
"msg_contents": "On Fri, 27 Apr 2007, Josh Berkus wrote:\n\n> Dan,\n>\n>> Yes, this is the classic problem. �I'm not demanding anyone pick up the\n>> ball and jump on this today, tomorrow, etc.. I just think it would be\n>> good for those who *could* make a difference to keep those goals in mind\n>> when they continue. �If you have the right mindset, this problem will\n>> fix itself over time.\n>\n> Don't I wish. Autotuning is *hard*. It took Oracle 6 years. It took\n> Microsoft 3-4 years, and theirs still has major issues last I checked. And\n> both of those DBs support less OSes than we do. I think it's going to\n> take more than the *right mindset* and my spare time.\n\nI think there are a couple different things here.\n\n1. full autotuning\n\n as you say, this is very hard and needs a lot of info about your \nparticular database useage.\n\n2. getting defaults that are closer to right then current.\n\n this is much easier. for this nobody is expecting that the values are \nright, we're just begging for some tool to get us within an couple orders \nof magnatude of what's correct.\n\nthe current defaults are appropriate for a single cpu with 10's of MB of \nram and a single drive\n\nnowdays you have people trying to run quick-and-dirty tests on some spare \nhardware they have laying around (waiting for another project) that's got \n4-8 CPU's with 10's of GB of ram and a couple dozen drives\n\nthese people don't know about database tuneing, they can learn, but they \nwant to see if postgres is even in the ballpark. if the results are close \nto acceptable they will ask questions and research the tuneing, but if the \nresults are orders of magnatude lower then they need to be they'll just \nsay that postgress is too slow and try another database.\n\nan autodefault script that was written assuming that postgres has the box \nto itself would be a wonderful start.\n\nI think the next step would be to be able to tell the script 'only plan on \nuseing 1/2 of this box'\n\nand beyond that would be the steps that you are thinking of where the \nuseage pattern is considered.\n\nbut when every performance question is answered with \"did you change the \ndefaults? 
they are way too low for modern hardware, raise them by 2 orders \nof magnitude and then we'll start investigating\"\n\nDavid Lang\n",
"msg_date": "Fri, 27 Apr 2007 18:40:25 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Carlos,\n\nabout your feature proposal: as I learned, nearly all\nPerformance.Configuration can be done by editing the .INI file and making the\nPostmaster re-read it.\n\nSo, WHY at all should those parameters be guessed at the installation of the\ndatabase? Wouldn't it be a safer point of time to have some postgresql-tune\nutility, which gets run after the installation, maybe every once in a\nwhile. That tool can check vital information like \"Databasesize to memory\nrelation\"; and suggest a new postgresql.ini.\n\nThat tool needs NO INTEGRATION whatsoever - it can be developed, deployed\ntotally independent and later only be bundled.\n\n> Does my suggestion make more sense now? Or is it still too unrealistic to\n> make it work properly/safely?\n\nAnd as this tool can be tested separately, does not need a new initdb every\ntime ... it can be developed more easily.\n\nMaybe there is even a pointy flashy version possible (perhaps even for money\n:) which gives nice graphics and \"optimized\", like those Windows Optimizers.\n:) I am sure, some DBAs in BIGCOMPs would be thrilled :)\n\nMay that be a possible way?\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nPython: the only language with more web frameworks than keywords.\n",
"msg_date": "Sat, 28 Apr 2007 10:31:35 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On 4/28/07, Harald Armin Massa <[email protected]> wrote:\n> about your feature proposal: as I learned, nearly all\n> Perfomance.Configuration can be done by editing the .INI file and making the\n> Postmaster re-read it.\n\nUm, shared_buffers is one of the most important initial parameters to\nset and it most certainly cannot be set after startup.\n\n> So, WHY at all should those parameters be guessed at the installation of the\n> database?\n\nBecause a lot of good assumptions can be made on the initial install.\nLikewise, some of the most important parameters cannot be tuned after\nstartup.\n\n> Maybe there is even a pointy flashy version possible (perhaps even for money\n> :) which gives nice graphics and \"optimized\", like those Windows Optimizers.\n> :) I am sure, some DBAs in BIGCOMPs would be thrilled :)\n\nI'd suggest that you not make snide remarks about someone else's\ndesign when your own analysis is somewhat flawed.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n",
"msg_date": "Sat, 28 Apr 2007 12:54:18 -0400",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Jonah,\n\nUm, shared_buffers is one of the most important initial parameters to\n> set and it most certainly cannot be set after startup.\n\n\nNot after startup, correct. But after installation. It is possible to change\nPostgreSQL.conf (not ini, to much windows on my side, sorry) and restart\npostmaster.\n\nBecause a lot of good assumptions can be made on the initial install.\n> Likewise, some of the most important parameters cannot be tuned after\n> startup.\n\n\nYes. These assumptions can be made - but then they are assumptions. When the\ndatabase is filled and working, there are measurable facts. And yes, that\nneeds a restart of postmaster, that does not work on 24/7. But there are\nmany databases which can be restartet for tuning in regular maintainance\nsessions.\n\n> :) which gives nice graphics and \"optimized\", like those Windows\n> Optimizers.\n> > :) I am sure, some DBAs in BIGCOMPs would be thrilled :)\n>\n> >I'd suggest that you not make snide remarks about someone else's\n> >design when your own analysis is somewhat flawed.\n\n\nSorry, Jonah, if my words sounded \"snide\". I had feedback from some DBAs in\nBIGCOMPs, who said very positive things about the beauty of pgadmin. I saw\nsome DBAs quite happy about the graphical displays of TOAD. I worked for a\nMVS Hoster who paid BIG SUMS to Candle Software for a Software called\nOmegamon, which made it possible to have charts about performance figures.\nSo I deducted that people would even be willing to pay money for a GUI which\npresents the opimizing process.\n\nThat idea of \"tune PostgreSQL database after installation\" also came from\nthe various request on pgsql-performance. Some ask before they install; but\nthere are also MANY questions with \"our PostgreSQL database was running fast\nuntill xxxx\", with xxxx usually being a table grown bigger then n records.\n\nAnd I really did not want to discredit the idea of properly configuring from\nthe start. Just wanted to open an other option to do that tuning.\n\nHarald\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nPython: the only language with more web frameworks than keywords.\n\nJonah,Um, shared_buffers is one of the most important initial parameters toset and it most certainly cannot be set after startup.\nNot after startup, correct. But after installation. It is possible to change PostgreSQL.conf (not ini, to much windows on my side, sorry) and restart postmaster. \nBecause a lot of good assumptions can be made on the initial install.Likewise, some of the most important parameters cannot be tuned afterstartup.Yes. These assumptions can be made - but then they are assumptions. When the database is filled and working, there are measurable facts. And yes, that needs a restart of postmaster, that does not work on 24/7. But there are many databases which can be restartet for tuning in regular maintainance sessions.\n> :) which gives nice graphics and \"optimized\", like those Windows Optimizers.\n> :) I am sure, some DBAs in BIGCOMPs would be thrilled :)>I'd suggest that you not make snide remarks about someone else's>design when your own analysis is somewhat flawed.\nSorry, Jonah, if my words sounded \"snide\". I had feedback from some DBAs in BIGCOMPs, who said very positive things about the beauty of pgadmin. I saw some DBAs quite happy about the graphical displays of TOAD. 
I worked for a MVS Hoster who paid BIG SUMS to Candle Software for a Software called Omegamon, which made it possible to have charts about performance figures. \nSo I deducted that people would even be willing to pay money for a GUI which presents the opimizing process.That idea of \"tune PostgreSQL database after installation\" also came from the various request on pgsql-performance. Some ask before they install; but there are also MANY questions with \"our PostgreSQL database was running fast untill xxxx\", with xxxx usually being a table grown bigger then n records. \nAnd I really did not want to discredit the idea of properly configuring from the start. Just wanted to open an other option to do that tuning.Harald-- GHUM Harald Massapersuadere et programmare\nHarald Armin MassaReinsburgstraße 202b70197 Stuttgart0173/9409607fx 01212-5-13695179 -Python: the only language with more web frameworks than keywords.",
"msg_date": "Sat, 28 Apr 2007 20:15:52 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Harald Armin Massa wrote:\n> Carlos,\n>\n> about your feature proposal: as I learned, nearly all \n> Perfomance.Configuration can be done by editing the .INI file and \n> making the Postmaster re-read it.\n>\n> So, WHY at all should those parameters be guessed at the installation \n> of the database? Would'nt it be a saver point of time to have some \n> postgresql-tune\n> utilitiy, which gets run after the installation, maybe every once in a \n> while. That tool can check vital information like \"Databasesize to \n> memory relation\"; and suggest a new postgresql.ini.\n\nI would soooo volunteer to do that and donate it to the PG project!!\n\nProblem is, you see, I'm not sure what an appropriate algorithm would\nbe --- for instance, you mention \"DB size to memory relation\" as if it is\nan extremely important parameter and/or a parameter with extremely\nobvious consequences in the configuration file --- I'm not even sure\nabout what I would do given that. I always get the feeling that figuring\nout stuff from the performance-tuning-related documentation requires\ntechnical knowledge about the internals of PG (which, granted, it's\nout there, given that we get 100% of the source code ... But still, that's\nfar from the point, I'm sure we agree)\n\nThat's why I formulated my previous post as a Feature Request, rather\nthan a \"would this be a good feature / should I get started working on\nthat?\" :-)\n\nCarlos\n--\n\n",
"msg_date": "Sat, 28 Apr 2007 16:34:49 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Fri, 27 Apr 2007, Josh Berkus wrote:\n\n> *Everyone* wants this. The problem is that it's very hard code to write\n> given the number of variables\n\nThere's lots of variables, and there are at least three major ways to work \non improving someone's system:\n\n1) Collect up data about their system (memory, disk layout), find out a \nbit about their apps/workload, and generate a config file based on that.\n\n2) Connect to the database and look around. Study the tables and some \ntheir stats, make some estimates based on what your find, produce a new \nconfig file.\n\n3) Monitor the database while it's doing its thing. See which parts go \nwell and which go badly by viewing database statistics like pg_statio. \n>From that, figure out where the bottlenecks are likely to be and push more \nresources toward them. What I've been working on lately is exposing more \nreadouts of performance-related database internals to make this more \npractical.\n\nWhen first exposed to this problem, most people assume that (1) is good \nenough--ask some questions, look at the machine, and magically a \nreasonable starting configuration can be produced. It's already been \npointed out that anyone with enough knowledge to do all that can probably \nspit out a reasonable guess for the config file without help. If you're \ngoing to the trouble of building a tool for offering configuration advice, \nit can be widly more effective if you look inside the database after it's \ngot data in it, and preferably after it's been running under load for a \nwhile, and make your recommendations based on all that information.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 30 Apr 2007 07:48:24 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
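As one concrete example of the approach-(3) readouts Greg mentions, per-table buffer hit ratios come straight out of pg_statio_user_tables; a small sketch, again assuming psycopg2:

    # Per-table buffer cache hit ratios -- the sort of number a monitoring
    # pass would watch over time rather than act on once.
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
    cur = conn.cursor()
    cur.execute("""
        SELECT relname, heap_blks_read, heap_blks_hit
          FROM pg_statio_user_tables
         WHERE heap_blks_read + heap_blks_hit > 0
    """)
    for relname, blks_read, blks_hit in cur.fetchall():
        ratio = blks_hit / float(blks_read + blks_hit)
        print("%-30s heap hit ratio %5.1f%%" % (relname, 100 * ratio))
    conn.close()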
{
"msg_contents": "Greg Smith wrote:\n> If you're going to the trouble of building a tool for offering \n> configuration advice, it can be widly more effective if you look inside \n> the database after it's got data in it, and preferably after it's been \n> running under load for a while, and make your recommendations based on \n> all that information.\n\nThere are two completely different problems that are getting mixed together in this discussion. Several people have tried to distinguish them, but let's be explicit:\n\n1. Generating a resonable starting configuration for neophyte users who have installed Postgres for the first time.\n\n2. Generating an optimal configuration for a complex, running system that's loaded with data.\n\nThe first problem is easy: Any improvement would be welcome and would give most users a better initial experience. The second problem is nearly impossible. Forget the second problem (or put it on the \"let's find someone doing a PhD project\" list), and focus on the first.\n\n From my limited experience, a simple questionaire could be used to create a pretty good starting configuration file. Furthermore, many of the answers can be discovered automatically:\n\n1. How much memory do you have?\n2. How many disks do you have?\n a. Which disk contains the OS?\n b. Which disk(s) have swap space?\n c. Which disks are \"off limits\" (not to be used by Postgres)\n3. What is the general nature of your database?\n a. Mostly static (few updates, lots of access)\n b. Mostly archival (lots of writes, few reads)\n c. Very dynamic (data are added, updated, and deleted a lot)\n4. Do you have a lot of small, fast transactions or a few big, long transactions?\n5. How big do you expect your database to be?\n6. How many simultaneous users do you expect?\n7. What are the users you want configured initially?\n8. Do you want local access only, or network access?\n\nWith these few questions (and perhaps a couple more), a decent set of startup files could be created that would give good, 'tho not optimal, performance for most people just getting started.\n\nI agree with an opinion posted a couple days ago: The startup configuration is one of the weakest features of Postgres. It's not rocket science, but there are several files, and it's not obvious to the newcomer that the files even exist.\n\nHere's just one example: A coworker installed Postgres and couldn't get it to work at all. He struggled for hours. When he contacted me, I tried his installation and it worked fine. He tried it, and he couldn't connect. I asked him, \"Are you using localhost?\" He said yes, but what he meant was he was using the local *network*, 192.168.0.5, whereas I was using \"localhost\". He didn't have network access enabled. So, four hours wasted.\n\nThis is the sort of thing that makes experienced users say, \"Well, duh!\" But there are many number of these little traps and obscure configuration parameters that make the initial Postgres experience a poor one. It wouldn't take much to make a big difference to new users.\n\nCraig\n\n\n\n",
"msg_date": "Mon, 30 Apr 2007 09:18:51 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
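A toy illustration of how a few of those questionnaire answers could be turned into a starting configuration. The rules of thumb below (a quarter of RAM for shared_buffers, half for effective_cache_size, and so on) are invented for the example and are not project recommendations:

    # Map a few questionnaire answers to a starting config; illustrative only.
    def starting_config(ram_mb, max_connections, workload):
        cfg = {}
        cfg["max_connections"] = str(max_connections)
        cfg["shared_buffers"] = "%dMB" % max(32, ram_mb // 4)
        cfg["effective_cache_size"] = "%dMB" % (ram_mb // 2)
        if workload == "dynamic":            # lots of inserts/updates/deletes
            cfg["work_mem"] = "4MB"
            cfg["autovacuum"] = "on"
        else:                                # mostly static or archival
            cfg["work_mem"] = "%dMB" % max(4, ram_mb // (max_connections * 16))
        return cfg

    for k, v in sorted(starting_config(2048, 100, "dynamic").items()):
        print("%s = %s" % (k, v))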
{
"msg_contents": "At 12:18p -0400 on 30 Apr 2007, Craig A. James wrote:\n> 1. Generating a resonable starting configuration for neophyte users \n> who have installed Postgres for the first time.\n\nI recognize that PostgreSQL and MySQL try to address different \nproblem-areas, but is this one reason why a lot of people with whom I \ntalk prefer MySQL? Because PostgreSQL is so \"slooow\" out of the box?*\n\nThanks,\n\nKevin\n\n* Not trolling; I use PostgreSQL almost exclusively.\n",
"msg_date": "Mon, 30 Apr 2007 13:30:05 -0400",
"msg_from": "Kevin Hunter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Greg,\n\n> 1) Collect up data about their system (memory, disk layout), find out a\n> bit about their apps/workload, and generate a config file based on that.\n\nWe could start with this. Where I bogged down is that collecting system \ninformation about several different operating systems ... and in some cases \ngenerating scripts for boosting things like shmmax ... is actually quite a \nlarge problem from a slog perspective; there is no standard way even within \nLinux to describe CPUs, for example. Collecting available disk space \ninformation is even worse. So I'd like some help on this portion.\n\nI actually have algorithms which are \"good enough to start with\" for most of \nthe important GUCs worked out, and others could be set through an interactive \nscript (\"Does your application require large batch loads involving thousands \nor millions of updates in the same transaction?\" \"How large (GB) do you \nexpect your database to be?\")\n\n> 2) Connect to the database and look around. Study the tables and some\n> their stats, make some estimates based on what your find, produce a new\n> config file.\n\nI'm not sure that much more for (2) can be done than for (1). Tables-on-disk \ndon't tell us much. \n\n> 3) Monitor the database while it's doing its thing. See which parts go\n> well and which go badly by viewing database statistics like pg_statio.\n> From that, figure out where the bottlenecks are likely to be and push more\n> resources toward them. What I've been working on lately is exposing more\n> readouts of performance-related database internals to make this more\n> practical.\n\nWe really should collaborate on that. \n\n> When first exposed to this problem, most people assume that (1) is good\n> enough--ask some questions, look at the machine, and magically a\n> reasonable starting configuration can be produced. It's already been\n> pointed out that anyone with enough knowledge to do all that can probably\n> spit out a reasonable guess for the config file without help. \n\nBut that's actually more than most people already do. Further, if you don't \nstart with a \"reasonable\" configuration, then it's difficult-impossible to \nanalyze where your settings are out-of-whack; behavior introduced by some \nway-to-low settings will mask any other tuning that needs to be done. It's \nalso hard/impossible to devise tuning algorithms that work for both gross \ntuning (increase shared_buffers by 100x) and fine tuning (decrease \nbgwriter_interval to 45ms).\n\nSo whether or not we do (3), we need to do (1) first.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Tue, 1 May 2007 09:23:47 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "\n> large problem from a slog perspective; there is no standard way even within \n> Linux to describe CPUs, for example. Collecting available disk space \n> information is even worse. So I'd like some help on this portion.\n> \n\nQuite likely, naiveness follows... But, aren't things like /proc/cpuinfo ,\n/proc/meminfo, /proc/partitions / /proc/diskstats standard, at the very\nleast across Linux distros? I'm not familiar with BSD or other Unix\nflavours, but I would expect these (or their equivalent) to exist in those,\nno?\n\nAm I just being naive?\n\nCarlos\n--\n\n",
"msg_date": "Tue, 01 May 2007 19:40:39 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
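On Linux the two values Carlos mentions really are only a file read away; a sketch (Linux-only, which is exactly the objection raised in the next message), with the shared_buffers rule of thumb invented for illustration:

    # Pull total RAM and CPU count out of /proc (Linux only).
    def meminfo_total_kb():
        for line in open("/proc/meminfo"):
            if line.startswith("MemTotal:"):
                return int(line.split()[1])      # reported in kB

    def cpu_count():
        return sum(1 for line in open("/proc/cpuinfo")
                   if line.startswith("processor"))

    ram_mb = meminfo_total_kb() // 1024
    print("RAM: %d MB, CPUs: %d" % (ram_mb, cpu_count()))
    print("suggested shared_buffers = %dMB" % (ram_mb // 4))   # made-up rule of thumb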
{
"msg_contents": "On Tue, 1 May 2007, Carlos Moreno wrote:\n\n>> large problem from a slog perspective; there is no standard way even\n>> within Linux to describe CPUs, for example. Collecting available disk\n>> space information is even worse. So I'd like some help on this portion.\n>> \n>\n> Quite likely, naiveness follows... But, aren't things like /proc/cpuinfo ,\n> /proc/meminfo, /proc/partitions / /proc/diskstats standard, at the very\n> least across Linux distros? I'm not familiar with BSD or other Unix\n> flavours, but I would expect these (or their equivalent) to exist in those,\n> no?\n>\n> Am I just being naive?\n\nunfortunantly yes.\n\nacross different linux distros they are fairly standard (however different \nkernel versions will change them)\n\nhowever different kernels need drasticly different tools to get the info \nfrom them.\n\nDavid Lang\n",
"msg_date": "Tue, 1 May 2007 16:48:33 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Mon, 30 Apr 2007, Kevin Hunter wrote:\n\n> I recognize that PostgreSQL and MySQL try to address different \n> problem-areas, but is this one reason why a lot of people with whom I \n> talk prefer MySQL? Because PostgreSQL is so \"slooow\" out of the box?\n\nIt doesn't help, but there are many other differences that are as big or \nbigger. Here are a few samples off the top of my head:\n\n1) Performance issues due to MVCC (MySQL fans love to point out how \nfast they can do select count(*) from x)\n2) Not knowing you have to run vacuum analyze and therefore never seeing a \ngood query result\n3) Unfair comparison of PostgreSQL with robust WAL vs. MySQL+MyISAM on \nwrite-heavy worksloads\n\nThese are real issues, which of course stack on top of things like \noutdated opinions from older PG releases with performance issues resolved \nin the last few years.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 1 May 2007 21:52:16 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Tue, 1 May 2007, Josh Berkus wrote:\n\n> there is no standard way even within Linux to describe CPUs, for \n> example. Collecting available disk space information is even worse. So \n> I'd like some help on this portion.\n\nI'm not fooled--secretly you and your co-workers laugh at how easy this is \non Solaris and are perfectly happy with how difficult it is on Linux, \nright?\n\nI joke becuase I've been re-solving some variant on this problem every few \nyears for a decade now and it just won't go away. Last time I checked the \nright answer was to find someone else who's already done it, packaged that \ninto a library, and appears committed to keeping it up to date; just pull \na new rev of that when you need it. For example, for the CPU/memory part, \ntop solves this problem and is always kept current, so on open-source \nplatforms there's the potential to re-use that code. Now that I know \nthat's one thing you're (understandably) fighting with I'll dig up my \nreferences on that (again).\n\n> It's also hard/impossible to devise tuning algorithms that work for both \n> gross tuning (increase shared_buffers by 100x) and fine tuning (decrease \n> bgwriter_interval to 45ms).\n\nI would advocate focusing on iterative improvements to an existing \nconfiguration rather than even bothering with generating a one-off config \nfor exactly this reason. It *is* hard/impossible to get it right in a \nsingle shot, because of how many parameters interact and the way \nbottlenecks clear, so why not assume from the start you're going to do it \nseveral times--then you've only got one piece of software to write.\n\nThe idea I have in my head is a tool that gathers system info, connects to \nthe database, and then spits out recommendations in order of expected \neffectiveness--with the specific caveat that changing too many things at \none time isn't recommended, and some notion of parameter dependencies. \nThe first time you run it, you'd be told that shared_buffers was wildly \nlow, effective_cache_size isn't even in the right ballpark, and your \nwork_mem looks small relative to the size of your tables; fix those before \nyou bother doing anything else because any data collected with those at \nvery wrong values is bogus. Take two, those parameters pass their sanity \ntests, but since you're actually running at a reasonable speed now the \nfact that your tables are no longer being vacuumed frequently enough might \nbubble to the top.\n\nIt would take a few passes through to nail down everything, but as long as \nit's put together such that you'd be in a similar position to the \nsingle-shot tool after running it once it would remove that as something \nseparate that needed to be built.\n\nTo argue against myself for a second, it may very well be the case that \nwriting the simpler tool is the only way to get a useful prototype for \nbuilding the more complicated one; very easy to get bogged down in feature \ncreep on a grand design otherwise.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 1 May 2007 22:59:51 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
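The first-pass "sanity test" stage Greg describes could be as simple as comparing the current settings against available RAM. The thresholds in this hypothetical sketch are made up; only the shape of the check matters:

    # Flag settings that are wildly out of line with available RAM.
    def sanity_checks(ram_mb, shared_buffers_mb, effective_cache_size_mb, work_mem_mb):
        warnings = []
        if shared_buffers_mb < ram_mb // 100:
            warnings.append("shared_buffers looks wildly low for %d MB of RAM" % ram_mb)
        if effective_cache_size_mb < ram_mb // 4:
            warnings.append("effective_cache_size is far below the likely OS cache")
        if work_mem_mb < 1:
            warnings.append("work_mem is still at (or near) the shipped default")
        return warnings

    for w in sanity_checks(ram_mb=8192, shared_buffers_mb=24,
                           effective_cache_size_mb=128, work_mem_mb=1):
        print("WARNING: " + w)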
{
"msg_contents": "On Tue, 1 May 2007, Greg Smith wrote:\n\n> On Tue, 1 May 2007, Josh Berkus wrote:\n>\n>> there is no standard way even within Linux to describe CPUs, for example.\n>> Collecting available disk space information is even worse. So I'd like\n>> some help on this portion.\n\nwhat type of description of the CPU's are you looking for?\n\n>> It's also hard/impossible to devise tuning algorithms that work for both\n>> gross tuning (increase shared_buffers by 100x) and fine tuning (decrease\n>> bgwriter_interval to 45ms).\n>\n> I would advocate focusing on iterative improvements to an existing \n> configuration rather than even bothering with generating a one-off config for \n> exactly this reason. It *is* hard/impossible to get it right in a single \n> shot, because of how many parameters interact and the way bottlenecks clear, \n> so why not assume from the start you're going to do it several times--then \n> you've only got one piece of software to write.\n\nnobody is asking for things to be right the first time.\n\n> The idea I have in my head is a tool that gathers system info, connects to \n> the database, and then spits out recommendations in order of expected \n> effectiveness--with the specific caveat that changing too many things at one \n> time isn't recommended, and some notion of parameter dependencies. The first \n> time you run it, you'd be told that shared_buffers was wildly low, \n> effective_cache_size isn't even in the right ballpark, and your work_mem \n> looks small relative to the size of your tables; fix those before you bother \n> doing anything else because any data collected with those at very wrong \n> values is bogus.\n\nwhy not have a much simpler script that gets these values up into the \nright ballpark first? then after that the process and analysis that you \nare suggesting would be useful. the problem is that the defaults are _so_ \nfar off that no sane incremental program is going to be able to converge \non the right answer rapidly.\n\nDavid Lang\n\n> Take two, those parameters pass their sanity tests, but \n> since you're actually running at a reasonable speed now the fact that your \n> tables are no longer being vacuumed frequently enough might bubble to the \n> top.\n>\n> It would take a few passes through to nail down everything, but as long as \n> it's put together such that you'd be in a similar position to the single-shot \n> tool after running it once it would remove that as something separate that \n> needed to be built.\n>\n> To argue against myself for a second, it may very well be the case that \n> writing the simpler tool is the only way to get a useful prototype for \n> building the more complicated one; very easy to get bogged down in feature \n> creep on a grand design otherwise.\n",
"msg_date": "Tue, 1 May 2007 21:34:13 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "The more I think about this thread, the more I'm convinced of 2 things:\n\n1= Suggesting initial config values is a fundamentally different \nexercise than tuning a running DBMS.\nThis can be handled reasonably well by HW and OS snooping. OTOH, \ndetailed fine tuning of a running DBMS does not appear to be amenable \nto this approach.\n\nSo...\n2= We need to implement the kind of timer support that Oracle 10g has.\nOracle performance tuning was revolutionized by there being \nmicro-second accurate timers available for all Oracle operations.\nIMHO, we should learn from that.\n\nOnly the combination of the above looks like it will really be \nsuccessful in addressing the issues brought up in this thread.\n\nCheers,\nRon Peacetree\n\n\nAt 01:59 PM 4/27/2007, Josh Berkus wrote:\n>Dan,\n>\n> > Exactly.. What I think would be much more productive is to use the\n> > great amount of information that PG tracks internally and auto-tune the\n> > parameters based on it. For instance:\n>\n>*Everyone* wants this. The problem is that it's very hard code to write\n>given the number of variables. I'm working on it but progress is slow,\n>due to my travel schedule.\n>\n>--\n>--Josh\n>\n>Josh Berkus\n>PostgreSQL @ Sun\n>San Francisco\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n",
"msg_date": "Thu, 03 May 2007 15:15:23 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance\n Tuning"
},
{
"msg_contents": "Greg,\n\n> I'm not fooled--secretly you and your co-workers laugh at how easy this\n> is on Solaris and are perfectly happy with how difficult it is on Linux,\n> right?\n\nDon't I wish. There's issues with getting CPU info on Solaris, too, if you \nget off of Sun Hardware to generic white boxes. The base issue is that \nthere's no standardization on how manufacturers report the names of their \nCPUs, 32/64bit, or clock speeds. So any attempt to determine \"how fast\" \na CPU is, even on a 1-5 scale, requires matching against a database of \nregexes which would have to be kept updated.\n\nAnd let's not even get started on Windows.\n\n> I joke becuase I've been re-solving some variant on this problem every\n> few years for a decade now and it just won't go away. Last time I\n> checked the right answer was to find someone else who's already done it,\n> packaged that into a library, and appears committed to keeping it up to\n> date; just pull a new rev of that when you need it. For example, for\n> the CPU/memory part, top solves this problem and is always kept current,\n> so on open-source platforms there's the potential to re-use that code. \n> Now that I know that's one thing you're (understandably) fighting with\n> I'll dig up my references on that (again).\n\nActually, total memory is not an issue, that's fairly straight forwards. \nNor is # of CPUs. Memory *used* is a PITA, which is why I'd ignore that \npart and make some assumptions. It would have to be implemented in a \nper-OS manner, which is what bogged me down.\n\n> I would advocate focusing on iterative improvements to an existing\n> configuration rather than even bothering with generating a one-off\n> config for exactly this reason. It *is* hard/impossible to get it right\n> in a single shot, because of how many parameters interact and the way\n> bottlenecks clear, so why not assume from the start you're going to do\n> it several times--then you've only got one piece of software to write.\n\nSounds fine to me. \n\n> To argue against myself for a second, it may very well be the case that\n> writing the simpler tool is the only way to get a useful prototype for\n> building the more complicated one; very easy to get bogged down in\n> feature creep on a grand design otherwise.\n\nIt's certainly easy for me. ;-)\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Thu, 3 May 2007 12:21:55 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Thu, 3 May 2007, Josh Berkus wrote:\n\n> Greg,\n>\n>> I'm not fooled--secretly you and your co-workers laugh at how easy this\n>> is on Solaris and are perfectly happy with how difficult it is on Linux,\n>> right?\n>\n> Don't I wish. There's issues with getting CPU info on Solaris, too, if you\n> get off of Sun Hardware to generic white boxes. The base issue is that\n> there's no standardization on how manufacturers report the names of their\n> CPUs, 32/64bit, or clock speeds. So any attempt to determine \"how fast\"\n> a CPU is, even on a 1-5 scale, requires matching against a database of\n> regexes which would have to be kept updated.\n>\n> And let's not even get started on Windows.\n\nI think the only sane way to try and find the cpu speed is to just do a \nbusy loop of some sort (ideally something that somewhat resembles the main \ncode) and see how long it takes. you may have to do this a few times until \nyou get a loop that takes long enough (a few seconds) on a fast processor\n\nDavid Lang\n",
"msg_date": "Thu, 3 May 2007 13:16:14 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
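A minimal version of the calibration loop David describes: keep doubling the work until one pass takes a couple of seconds, then report iterations per second. The arithmetic inside the loop is just a stand-in for "something that somewhat resembles the main code":

    import time

    def cpu_score(min_seconds=2.0):
        n = 100000
        while True:
            t0 = time.time()
            i = x = 0
            while i < n:                 # placeholder busy work
                x += i % 7
                i += 1
            elapsed = time.time() - t0
            if elapsed >= min_seconds:   # long enough to be meaningful
                return n / elapsed
            n *= 2                       # too fast -- double the work and retry

    print("rough CPU score: %.0f iterations/sec" % cpu_score())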
{
"msg_contents": "\n>> CPUs, 32/64bit, or clock speeds. So any attempt to determine \"how \n>> fast\"\n>> a CPU is, even on a 1-5 scale, requires matching against a database of\n>> regexes which would have to be kept updated.\n>>\n>> And let's not even get started on Windows.\n>\n> I think the only sane way to try and find the cpu speed is to just do \n> a busy loop of some sort (ideally something that somewhat resembles \n> the main code) and see how long it takes. you may have to do this a \n> few times until you get a loop that takes long enough (a few seconds) \n> on a fast processor\n\nI was going to suggest just that (but then was afraid that again I may have\nbeen just being naive) --- I can't remember the exact name, but I remember\nusing (on some Linux flavor) an API call that fills a struct with data \non the\nresource usage for the process, including CPU time; I assume measured\nwith precision (that is, immune to issues of other applications running\nsimultaneously, or other random events causing the measurement to be\npolluted by random noise).\n\nAs for 32/64 bit --- doesn't PG already know that information? I mean,\n./configure does gather that information --- does it not?\n\nCarlos\n--\n\n",
"msg_date": "Thu, 03 May 2007 17:06:30 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Thu, 3 May 2007, Carlos Moreno wrote:\n\n>> > CPUs, 32/64bit, or clock speeds. So any attempt to determine \"how \n>> > fast\"\n>> > a CPU is, even on a 1-5 scale, requires matching against a database of\n>> > regexes which would have to be kept updated.\n>> > \n>> > And let's not even get started on Windows.\n>>\n>> I think the only sane way to try and find the cpu speed is to just do a\n>> busy loop of some sort (ideally something that somewhat resembles the main\n>> code) and see how long it takes. you may have to do this a few times until\n>> you get a loop that takes long enough (a few seconds) on a fast processor\n>\n> I was going to suggest just that (but then was afraid that again I may have\n> been just being naive) --- I can't remember the exact name, but I remember\n> using (on some Linux flavor) an API call that fills a struct with data on the\n> resource usage for the process, including CPU time; I assume measured\n> with precision (that is, immune to issues of other applications running\n> simultaneously, or other random events causing the measurement to be\n> polluted by random noise).\n\nsince what we are looking for here is a reasonable first approximation, \nnot perfection I don't think we should worry much about pollution of the \nvalue. if the person has other things running while they are running this \ntest that will be running when they run the database it's no longer \n'pollution' it's part of the environment. I think a message at runtime \nthat it may produce inaccurate results if you have other heavy processes \nrunning for the config that won't be running with the database would be \ngood enough (remember it's not only CPU time that's affected like this, \nit's disk performance as well)\n\n> As for 32/64 bit --- doesn't PG already know that information? I mean,\n> ./configure does gather that information --- does it not?\n\nwe're not talking about comiling PG, we're talking about getting sane \ndefaults for a pre-compiled binary. if it's a 32 bit binary assume a 32 \nbit cpu, if it's a 64 bit binary assume a 64 bit cpu (all hardcoded into \nthe binary at compile time)\n\nDavid Lang\n",
"msg_date": "Thu, 3 May 2007 14:23:26 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "\n>> been just being naive) --- I can't remember the exact name, but I \n>> remember\n>> using (on some Linux flavor) an API call that fills a struct with \n>> data on the\n>> resource usage for the process, including CPU time; I assume measured\n>> with precision (that is, immune to issues of other applications running\n>> simultaneously, or other random events causing the measurement to be\n>> polluted by random noise).\n>\n> since what we are looking for here is a reasonable first \n> approximation, not perfection I don't think we should worry much about \n> pollution of the value.\n\nWell, it's not as much worrying as it is choosing the better among two \nequally\ndifficult options --- what I mean is that obtaining the *real* resource \nusage as\nreported by the kernel is, from what I remember, equally hard as it is \nobtaining\nthe time with milli- or micro-seconds resolution.\n\nSo, why not choosing this option? (in fact, if we wanted to do it \"the \nscripted\nway\", I guess we could still use \"time test_cpuspeed_loop\" and read the \nreport\nby the command time, specifying CPU time and system calls time.\n\n>> As for 32/64 bit --- doesn't PG already know that information? I mean,\n>> ./configure does gather that information --- does it not?\n>\n> we're not talking about comiling PG, we're talking about getting sane \n> defaults for a pre-compiled binary. if it's a 32 bit binary assume a \n> 32 bit cpu, if it's a 64 bit binary assume a 64 bit cpu (all hardcoded \n> into the binary at compile time)\n\nRight --- I was thinking that configure, which as I understand, \ngenerates the\nMakefiles to compile applications including initdb, could plug those values\nas compile-time constants, so that initdb (or a hypothetical additional \nutility\nthat would do what we're discussing in this thread) already has them. \nAnyway,\nyes, that would go for the binaries as well --- we're pretty much saying \nthe\nsame thing :-)\n\nCarlos\n--\n\n",
"msg_date": "Thu, 03 May 2007 18:42:12 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Thu, 3 May 2007, Carlos Moreno wrote:\n\n>> > been just being naive) --- I can't remember the exact name, but I \n>> > remember\n>> > using (on some Linux flavor) an API call that fills a struct with data \n>> > on the\n>> > resource usage for the process, including CPU time; I assume measured\n>> > with precision (that is, immune to issues of other applications running\n>> > simultaneously, or other random events causing the measurement to be\n>> > polluted by random noise).\n>>\n>> since what we are looking for here is a reasonable first approximation,\n>> not perfection I don't think we should worry much about pollution of the\n>> value.\n>\n> Well, it's not as much worrying as it is choosing the better among two \n> equally\n> difficult options --- what I mean is that obtaining the *real* resource usage \n> as\n> reported by the kernel is, from what I remember, equally hard as it is \n> obtaining\n> the time with milli- or micro-seconds resolution.\n>\n> So, why not choosing this option? (in fact, if we wanted to do it \"the \n> scripted\n> way\", I guess we could still use \"time test_cpuspeed_loop\" and read the \n> report\n> by the command time, specifying CPU time and system calls time.\n\nI don't think it's that hard to get system time to a reasonable level (if \nthis config tuner needs to run for a min or two to generate numbers that's \nacceptable, it's only run once)\n\nbut I don't think that the results are really that critical.\n\ndo we really care if the loop runs 1,000,000 times per second or 1,001,000 \ntimes per second? I'd argue that we don't even care about 1,000,000 times \nper second vs 1,100,000 times per second, what we care about is 1,000,000 \ntimes per second vs 100,000 times per second, if you do a 10 second test \nand run it for 11 seconds you are still in the right ballpark (i.e. close \nenough that you really need to move to the stage2 tuneing to figure the \nexact values)\n\n>> > As for 32/64 bit --- doesn't PG already know that information? I mean,\n>> > ./configure does gather that information --- does it not?\n>>\n>> we're not talking about comiling PG, we're talking about getting sane\n>> defaults for a pre-compiled binary. if it's a 32 bit binary assume a 32\n>> bit cpu, if it's a 64 bit binary assume a 64 bit cpu (all hardcoded into\n>> the binary at compile time)\n>\n> Right --- I was thinking that configure, which as I understand, generates the\n> Makefiles to compile applications including initdb, could plug those values\n> as compile-time constants, so that initdb (or a hypothetical additional \n> utility\n> that would do what we're discussing in this thread) already has them. \n> Anyway,\n> yes, that would go for the binaries as well --- we're pretty much saying the\n> same thing :-)\n\nI'm thinking along the lines of a script or pre-compiled binary (_not_ \ninitdb) that you could run and have it generate a new config file that has \nvalues that are at within about an order of magnatude of being correct.\n\nDavid Lang\n",
"msg_date": "Thu, 3 May 2007 15:49:47 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "\n> I don't think it's that hard to get system time to a reasonable level \n> (if this config tuner needs to run for a min or two to generate \n> numbers that's acceptable, it's only run once)\n>\n> but I don't think that the results are really that critical.\n\nStill --- this does not provide a valid argument against my claim.\n\nOk, we don't need precision --- but do we *need* to have less\nprecision?? I mean, you seem to be proposing that we deliberately\ngo out of our way to discard a solution with higher precision and\nchoose the one with lower precision --- just because we do not\nhave a critical requirement for the extra precision.\n\nThat would be a valid argument if the extra precision came at a\nconsiderable cost (well, or at whatever cost, considerable or not).\n\nBut my point is still that obtaining the time in the right ballpark\nand obtaining the time with good precision are two things that\nhave, from any conceivable point of view (programming effort,\nresources consumption when executing it, etc. etc.), the exact\nsame cost --- why not pick the one that gives us the better results?\n\nMostly when you consider that:\n\n> I'd argue that we don't even care about 1,000,000 times per second vs \n> 1,100,000 times per second, what we care about is 1,000,000 times per \n> second vs 100,000 times per second\n\nPart of my claim is that measuring real-time you could get an\nerror like this or even a hundred times this!! Most of the time\nyou wouldn't, and definitely if the user is careful it would not\nhappen --- but it *could* happen!!! (and when I say could, I\nreally mean: trust me, I have actually seen it happen)\n\nWhy not just use an *extremely simple* solution that is getting\ninformation from the kernel reporting the actual CPU time that\nhas been used???\n\nOf course, this goes under the premise that in all platforms there\nis such a simple solution like there is on Linux (the exact name\nof the API function still eludes me, but I have used it in the past,\nand I recall that it was just three or five lines of code).\n\nCarlos\n--\n\n",
"msg_date": "Thu, 03 May 2007 20:23:46 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
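The kernel call being alluded to is most likely getrusage(2). A small sketch of reading per-process CPU time through Python's resource module (POSIX only, so the portability concern raised elsewhere in the thread still applies); the loop is the same kind of arbitrary work as in the wall-clock version:

import resource   # POSIX only; wraps getrusage(2)

def cpu_seconds():
    """User + system CPU time consumed so far by this process."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_utime + usage.ru_stime

before = cpu_seconds()
x = 0
for i in range(5000000):
    x += (i * i) % 7              # arbitrary integer work
print("CPU seconds consumed by the loop: %.3f" % (cpu_seconds() - before))

Because this counts only the time the process actually spent on a CPU, other programs running at the same moment do not inflate the result the way they would with a wall-clock measurement.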
{
"msg_contents": "On Thu, 3 May 2007, Carlos Moreno wrote:\n\n>> I don't think it's that hard to get system time to a reasonable level (if\n>> this config tuner needs to run for a min or two to generate numbers that's\n>> acceptable, it's only run once)\n>>\n>> but I don't think that the results are really that critical.\n>\n> Still --- this does not provide a valid argument against my claim.\n>\n> Ok, we don't need precision --- but do we *need* to have less\n> precision?? I mean, you seem to be proposing that we deliberately\n> go out of our way to discard a solution with higher precision and\n> choose the one with lower precision --- just because we do not\n> have a critical requirement for the extra precision.\n>\n> That would be a valid argument if the extra precision came at a\n> considerable cost (well, or at whatever cost, considerable or not).\n\nthe cost I am seeing is the cost of portability (getting similarly \naccruate info from all the different operating systems)\n\n> But my point is still that obtaining the time in the right ballpark\n> and obtaining the time with good precision are two things that\n> have, from any conceivable point of view (programming effort,\n> resources consumption when executing it, etc. etc.), the exact\n> same cost --- why not pick the one that gives us the better results?\n>\n> Mostly when you consider that:\n>\n>> I'd argue that we don't even care about 1,000,000 times per second vs\n>> 1,100,000 times per second, what we care about is 1,000,000 times per\n>> second vs 100,000 times per second\n>\n> Part of my claim is that measuring real-time you could get an\n> error like this or even a hundred times this!! Most of the time\n> you wouldn't, and definitely if the user is careful it would not\n> happen --- but it *could* happen!!! (and when I say could, I\n> really mean: trust me, I have actually seen it happen)\n\nif you have errors of several orders of magnatude in the number of loops \nit can run in a given time period then you don't have something that you \ncan measure to any accuracy (and it wouldn't matter anyway, if your loops \nare that variable, your code execution would be as well)\n\n> Why not just use an *extremely simple* solution that is getting\n> information from the kernel reporting the actual CPU time that\n> has been used???\n>\n> Of course, this goes under the premise that in all platforms there\n> is such a simple solution like there is on Linux (the exact name\n> of the API function still eludes me, but I have used it in the past,\n> and I recall that it was just three or five lines of code).\n\nI think the problem is that it's a _different_ 3-5 lines of code for each \nOS.\n\nif I'm wrong and it's the same for the different operating systems then I \nagree that we should use the most accurate clock we can get. I just don't \nthink we have that.\n\nDavid Lang\n",
"msg_date": "Thu, 3 May 2007 17:52:38 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "\n>> That would be a valid argument if the extra precision came at a\n>> considerable cost (well, or at whatever cost, considerable or not).\n>\n> the cost I am seeing is the cost of portability (getting similarly \n> accruate info from all the different operating systems)\n\nFair enough --- as I mentioned, I was arguing under the premise that\nthere would be a quite similar solution for all the Unix-flavours (and\nhopefully an equivalent --- and equivalently simple --- one for Windows) \n...\nWhether or not that premise holds, I wouldn't bet either way.\n\n>> error like this or even a hundred times this!! Most of the time\n>> you wouldn't, and definitely if the user is careful it would not\n>> happen --- but it *could* happen!!! (and when I say could, I\n>> really mean: trust me, I have actually seen it happen)\n> Part of my claim is that measuring real-time you could get an\n>\n> if you have errors of several orders of magnatude in the number of \n> loops it can run in a given time period then you don't have something \n> that you can measure to any accuracy (and it wouldn't matter anyway, \n> if your loops are that variable, your code execution would be as well)\n\nNot necessarily --- operating conditions may change drastically from\none second to the next; that does not mean that your system is useless;\nsimply that the measuring mechanism is way too vulnerable to the\nparticular operating conditions at the exact moment it was executed.\n\nI'm not sure if that was intentional, but you bring up an interesting\nissue --- or in any case, your comment made me drastically re-think\nmy whole argument: do we *want* to measure the exact speed, or\nrather the effective speed under normal operating conditions on the\ntarget machine?\n\nI know the latter is almost impossible --- we're talking about an estimate\nof a random process' parameter (and we need to do it in a short period\nof time) ... But the argument goes more or less like this: if you have a\nmachine that runs at 1000 MIPS, but it's usually busy running things\nthat in average consume 500 of those 1000 MIPS, would we want PG's\nconfiguration file to be obtained based on 1000 or based on 500 MIPS???\nAfter all, the CPU is, as far as PostgreSQL will be able see, 500 MIPS\nfast, *not* 1000.\n\nI think I better stop, if we want to have any hope that the PG team will\never actually implement this feature (or similar) ... We're probably just\nscaring them!! :-)\n\nCarlos\n--\n\n",
"msg_date": "Thu, 03 May 2007 21:13:19 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Thu, 3 May 2007, Carlos Moreno wrote:\n\n>\n>> > error like this or even a hundred times this!! Most of the time\n>> > you wouldn't, and definitely if the user is careful it would not\n>> > happen --- but it *could* happen!!! (and when I say could, I\n>> > really mean: trust me, I have actually seen it happen)\n>> Part of my claim is that measuring real-time you could get an\n>>\n>> if you have errors of several orders of magnatude in the number of loops\n>> it can run in a given time period then you don't have something that you\n>> can measure to any accuracy (and it wouldn't matter anyway, if your loops\n>> are that variable, your code execution would be as well)\n>\n> Not necessarily --- operating conditions may change drastically from\n> one second to the next; that does not mean that your system is useless;\n> simply that the measuring mechanism is way too vulnerable to the\n> particular operating conditions at the exact moment it was executed.\n>\n> I'm not sure if that was intentional, but you bring up an interesting\n> issue --- or in any case, your comment made me drastically re-think\n> my whole argument: do we *want* to measure the exact speed, or\n> rather the effective speed under normal operating conditions on the\n> target machine?\n>\n> I know the latter is almost impossible --- we're talking about an estimate\n> of a random process' parameter (and we need to do it in a short period\n> of time) ... But the argument goes more or less like this: if you have a\n> machine that runs at 1000 MIPS, but it's usually busy running things\n> that in average consume 500 of those 1000 MIPS, would we want PG's\n> configuration file to be obtained based on 1000 or based on 500 MIPS???\n> After all, the CPU is, as far as PostgreSQL will be able see, 500 MIPS\n> fast, *not* 1000.\n>\n> I think I better stop, if we want to have any hope that the PG team will\n> ever actually implement this feature (or similar) ... We're probably just\n> scaring them!! :-)\n\nsimpler is better (or perfect is the enemy of good enough)\n\nif you do your sample over a few seconds (or few tens of seconds) things \nwill average out quite a bit.\n\nthe key is to be going for a reasonable starting point. after that then \nthe full analysis folks can start in with all their monitoring and \ntuneing, but the 80/20 rule really applies here. 80% of the gain is from \ngetting 'fairly close' to the right values, and that should only be 20% of \nthe full 'tuneing project'\n\nDavid Lang\n\n",
"msg_date": "Thu, 3 May 2007 18:52:47 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Thu, 3 May 2007, Josh Berkus wrote:\n\n> So any attempt to determine \"how fast\" a CPU is, even on a 1-5 scale, \n> requires matching against a database of regexes which would have to be \n> kept updated.\n\nThis comment, along with the subsequent commentary today going far astray \ninto CPU measurement land, serves as a perfect example to demonstrate why \nI advocate attacking this from the perspective that assumes there is \nalready a database around we can query.\n\nWe don't have to care how fast the CPU is in any real terms; all we need \nto know is how many of them are (which as you point out is relatively easy \nto find), and approximately how fast each one of them can run PostgreSQL. \nHere the first solution to this problem I came up with in one minute of \nR&D:\n\n-bash-3.00$ psql\npostgres=# \\timing\nTiming is on.\npostgres=# select count(*) from generate_series(1,100000,1);\n count\n--------\n 100000\n(1 row)\n\nTime: 106.535 ms\n\nThere you go, a completely cross-platform answer. You should run the \nstatement twice and only use the second result for better consistancy. I \nran this on all the sytems I was around today and got these results:\n\nP4 2.4GHz\t107ms\nXeon 3GHz\t100ms\nOpteron 275\t65ms\nAthlon X2 4600\t61ms\n\nFor comparison sake, these numbers are more useful at predicting actual \napplication performance than Linux's bogomips number, which completely \nreverses the relative performance of the Intel vs. AMD chips in this set \nfrom the reality of how well they run Postgres.\n\nMy philosophy in this area is that if you can measure something \nperformance-related with reasonable accuracy, don't even try to estimate \nit instead. All you have to do is follow some of the downright bizzare \ndd/bonnie++ results people post here to realize that there can be a vast \ndifference between the performance you'd expect given a particular \nhardware class and what you actually get.\n\nWhile I'm ranting here, I should mention that I also sigh every time I see \npeople suggest we should ask the user how big their database is. The kind \nof newbie user people keep talking about helping has *no idea whatsoever* \nhow big the data actually is after it gets into the database and all the \nindexes are built. But if you tell someone \"right now this database has 1 \nmillion rows and takes up 800MB; what multiple of its current size do you \nexpect it to grow to?\", now that's something people can work with.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Fri, 4 May 2007 00:33:29 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
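A sketch of scripting this measurement so a config generator could consume the number directly; psycopg2 and the connection string are assumptions, and the timing is taken client-side rather than with \timing. As suggested, the statement is run twice and only the second run is reported:

import time
import psycopg2   # assumed driver; any DB-API module would do

def generate_series_ms(dsn="dbname=postgres"):
    """Time SELECT count(*) FROM generate_series(1,100000,1) in milliseconds."""
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    elapsed = None
    for _ in range(2):                       # keep only the second, warmed-up run
        start = time.time()
        cur.execute("SELECT count(*) FROM generate_series(1,100000,1)")
        cur.fetchone()
        elapsed = (time.time() - start) * 1000.0
    conn.close()
    return elapsed

if __name__ == "__main__":
    print("generate_series timing: %.1f ms" % generate_series_ms())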
{
"msg_contents": "On Fri, May 04, 2007 at 12:33:29AM -0400, Greg Smith wrote:\n>-bash-3.00$ psql\n>postgres=# \\timing\n>Timing is on.\n>postgres=# select count(*) from generate_series(1,100000,1);\n> count\n>--------\n> 100000\n>(1 row)\n>\n>Time: 106.535 ms\n>\n>There you go, a completely cross-platform answer. You should run the \n>statement twice and only use the second result for better consistancy. I \n>ran this on all the sytems I was around today and got these results:\n>\n>P4 2.4GHz\t107ms\n>Xeon 3GHz\t100ms\n>Opteron 275\t65ms\n>Athlon X2 4600\t61ms\n\nPIII 1GHz\t265ms\nOpteron 250\t39ms\n\nsomething seems inconsistent here.\n\n>For comparison sake, these numbers are more useful at predicting actual \n>application performance than Linux's bogomips number, which completely \n>reverses the relative performance of the Intel vs. AMD chips in this set \n>from the reality of how well they run Postgres.\n\nYou misunderstand the purpose of bogomips; they have no absolute \nmeaning, and a comparison between different type of cpus is not \npossible.\n\n>While I'm ranting here, I should mention that I also sigh every time I see \n>people suggest we should ask the user how big their database is. The kind \n>of newbie user people keep talking about helping has *no idea whatsoever* \n>how big the data actually is after it gets into the database and all the \n>indexes are built. \n\n100% agreed.\n\nMike Stone\n",
"msg_date": "Fri, 04 May 2007 08:45:39 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "[email protected] schrieb:\n> On Tue, 1 May 2007, Carlos Moreno wrote:\n>\n>>> large problem from a slog perspective; there is no standard way even\n>>> within Linux to describe CPUs, for example. Collecting available disk\n>>> space information is even worse. So I'd like some help on this\n>>> portion.\n>>>\n>>\n>> Quite likely, naiveness follows... But, aren't things like\n>> /proc/cpuinfo ,\n>> /proc/meminfo, /proc/partitions / /proc/diskstats standard, at the very\n>> least across Linux distros? I'm not familiar with BSD or other Unix\n>> flavours, but I would expect these (or their equivalent) to exist in\n>> those,\n>> no?\n>>\n>> Am I just being naive?\n>\n> unfortunantly yes.\n>\n> across different linux distros they are fairly standard (however\n> different kernel versions will change them)\n>\n> however different kernels need drasticly different tools to get the\n> info from them.\n>\n> David Lang\n>\nBefore inventing a hyper tool, we might consider to provide 3-5 example\nszenarios for common hardware configurations. This consumes less time\nand be discussed and defined in a couple of days. This is of course not\nthe correct option for a brandnew 20 spindle Sata 10.000 Raid 10 system\nbut these are probably not the target for default configurations.\n\nIf we carefully document these szenario they would we a great help for\npeople having some hardware \"between\" the szenarios.\n\nSebastian Hennebrueder\n\n\n",
"msg_date": "Fri, 04 May 2007 18:40:13 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
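For the Linux-only case, the /proc files mentioned above are easy to read. A sketch; the field names are the ones found on common 2.6 kernels, and none of this is portable beyond Linux:

def linux_hardware_summary():
    """Rough CPU count and total RAM from /proc (Linux only)."""
    cpus = 0
    for line in open("/proc/cpuinfo"):
        if line.startswith("processor"):
            cpus += 1
    mem_kb = 0
    for line in open("/proc/meminfo"):
        if line.startswith("MemTotal:"):
            mem_kb = int(line.split()[1])
            break
    return cpus, mem_kb // 1024              # (cpu count, RAM in MB)

if __name__ == "__main__":
    print(linux_hardware_summary())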
{
"msg_contents": "Sebastian,\n\n> Before inventing a hyper tool, we might consider to provide 3-5 example\n> szenarios for common hardware configurations. This consumes less time\n> and be discussed and defined in a couple of days. This is of course not\n> the correct option for a brandnew 20 spindle Sata 10.000 Raid 10 system\n> but these are probably not the target for default configurations.\n\nThat's been suggested a number of times, but some GUCs are really tied to the \n*exact* amount of RAM you have available. So I've never seen how \"example \nconfigurations\" could help.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 4 May 2007 10:11:42 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "\nJosh Berkus schrieb:\n> Sebastian,\n>\n> \n>> Before inventing a hyper tool, we might consider to provide 3-5 example\n>> szenarios for common hardware configurations. This consumes less time\n>> and be discussed and defined in a couple of days. This is of course not\n>> the correct option for a brandnew 20 spindle Sata 10.000 Raid 10 system\n>> but these are probably not the target for default configurations.\n>> \n>\n> That's been suggested a number of times, but some GUCs are really tied to the \n> *exact* amount of RAM you have available. So I've never seen how \"example \n> configurations\" could help.\n>\n> \n\nI would define the szenario as\n256 MB freely available for PostgresQL\n=> setting x can be of size ...\n\n",
"msg_date": "Fri, 04 May 2007 19:22:31 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Fri, 4 May 2007, Michael Stone wrote:\n\n>> P4 2.4GHz\t107ms\n>> Xeon 3GHz\t100ms\n>> Opteron 275\t65ms\n>> Athlon X2 4600\t61ms\n> PIII 1GHz\t265ms\n> Opteron 250\t39ms\n> something seems inconsistent here.\n\nI don't see what you mean. The PIII results are exactly what I'd expect, \nand I wouldn't be surprised that your Opteron 250 has significantly faster \nmemory than my two AMD samples (which are slower than average in this \nregard) such that it runs this particular task better.\n\nRegardless, the requirement here, as Josh put it, was to get a way to \ngrade the CPUs on approximately a 1-5 scale. In that context, there isn't \na need for an exact value. The above has 3 major generations of \nprocessors involved, and they sort out appropriately into groups; that's \nall that needs to happen here.\n\n> You misunderstand the purpose of bogomips; they have no absolute meaning, and \n> a comparison between different type of cpus is not possible.\n\nAs if I don't know what the bogo stands for, ha! I brought that up \nbecause someone suggested testing CPU speed using some sort of idle loop. \nThat's exactly what bogomips does. My point was that something that \nsimple can give dramatically less useful results for predicting PostgreSQL \nperformance than what you can find out running a real query.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Fri, 4 May 2007 21:07:53 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Josh Berkus wrote:\n> Sebastian,\n> \n>> Before inventing a hyper tool, we might consider to provide 3-5 example\n>> szenarios for common hardware configurations. This consumes less time\n>> and be discussed and defined in a couple of days. This is of course not\n>> the correct option for a brandnew 20 spindle Sata 10.000 Raid 10 system\n>> but these are probably not the target for default configurations.\n> \n> That's been suggested a number of times, but some GUCs are really tied to the \n> *exact* amount of RAM you have available. So I've never seen how \"example \n> configurations\" could help.\n> \n\nI'm not convinced about this objection - having samples gives a bit of a \nheads up on *what* knobs you should at least look at changing.\n\nAlso it might be helpful on the -general or -perf lists to be able to \nsay \"try config 3 (or whatever we call 'em) and see what changes...\"\n\nI've certainly found the sample config files supplied with that database \nwhose name begins with 'M' a useful *start* when I want something better \nthan default...\n\nCheers\n\nMark\n",
"msg_date": "Sat, 05 May 2007 19:19:09 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "On Fri, May 04, 2007 at 09:07:53PM -0400, Greg Smith wrote:\n> As if I don't know what the bogo stands for, ha! I brought that up \n> because someone suggested testing CPU speed using some sort of idle loop. \n> That's exactly what bogomips does.\n\nJust for reference (I'm sure you know, but others might not): BogoMIPS are\nmeasured using a busy loop because that's precisely the number the kernel is\ninterested in -- it's used to know for how long to loop when doing small\ndelays in the kernel (say, \"sleep 20 microseconds\"), which is done using a\nbusy loop.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 5 May 2007 11:30:59 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Mark Kirkwood schrieb:\n> Josh Berkus wrote:\n>> Sebastian,\n>>\n>>> Before inventing a hyper tool, we might consider to provide 3-5 example\n>>> szenarios for common hardware configurations. This consumes less time\n>>> and be discussed and defined in a couple of days. This is of course not\n>>> the correct option for a brandnew 20 spindle Sata 10.000 Raid 10 system\n>>> but these are probably not the target for default configurations.\n>>\n>> That's been suggested a number of times, but some GUCs are really\n>> tied to the *exact* amount of RAM you have available. So I've never\n>> seen how \"example configurations\" could help.\n>>\n>\n> I'm not convinced about this objection - having samples gives a bit of\n> a heads up on *what* knobs you should at least look at changing.\n>\n> Also it might be helpful on the -general or -perf lists to be able to\n> say \"try config 3 (or whatever we call 'em) and see what changes...\"\n>\n> I've certainly found the sample config files supplied with that\n> database whose name begins with 'M' a useful *start* when I want\n> something better than default...\n>\n> Cheers\n>\n> Mark\n>\nSome ideas about szenarios and setting. This is meant as a discussion\nproposal, I am by far not a database guru!\nThe settings do not provide a perfect setup but a more efficient as\ncompared to default setup.\n\ncriterias:\nfree memory\ncpu ? what is the consequence?\nseparate spindels\ntotal connections\nWindows/linux/soloars ?\n\nadapted settings:\nmax_connections\nshared_buffers\neffective_cache_size\n/work_mem\n//maintenance_work_mem\n\n/checkpoint_segments ?\ncheckpoint_timeout ?\ncheckpoint_warning ?\n\n\nSzenario a) 256 MB free memory, one disk or raid where all disks are in\nthe raid,\nmax_connections = 40\nshared_buffers = 64MB\neffective_cache_size = 180 MB\n/work_mem = 1 MB\n//maintenance_work_mem = 4 MB\n/\n\nSzenario b) 1024 MB free memory, one disk or raid where all disks are in\nthe raid\nmax_connections = 80\nshared_buffers = 128 MB\neffective_cache_size = 600 MB\n/work_mem = 1,5 MB\n//maintenance_work_mem = 16 MB\n/\nSzenario c) 2048 MB free memory, one disk or raid where all disks are in\nthe raid\nmax_connections = 160\nshared_buffers = 256 MB\neffective_cache_size = 1200 MB\n/work_mem = 2 MB\n//maintenance_work_mem = 32 MB\n/\nSzenario d) 2048 MB free memory, raid of multiple discs, second raid or\ndisk\nmax_connections = 160\nshared_buffers = 256 MB\neffective_cache_size = 1200 MB\n/work_mem = 2 MB/\n/maintenance_work_mem = 32 MB\n/WAL on second spindle\n\n\n\n\n\n",
"msg_date": "Sat, 05 May 2007 17:54:33 +0200",
"msg_from": "Sebastian Hennebrueder <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
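A sketch of how a script could turn the scenario table above into postgresql.conf lines; the values are simply the ones posted for scenarios a) to c), and the selection rule (largest scenario at or below the free-memory figure) is an assumption:

# Scenario values as posted above; pick the closest one at or below the
# amount of memory that is free for PostgreSQL.
SCENARIOS = {
     256: dict(max_connections=40,  shared_buffers="64MB",  effective_cache_size="180MB",
               work_mem="1MB",   maintenance_work_mem="4MB"),
    1024: dict(max_connections=80,  shared_buffers="128MB", effective_cache_size="600MB",
               work_mem="1.5MB", maintenance_work_mem="16MB"),
    2048: dict(max_connections=160, shared_buffers="256MB", effective_cache_size="1200MB",
               work_mem="2MB",   maintenance_work_mem="32MB"),
}

def pick_scenario(free_mb):
    eligible = [mb for mb in SCENARIOS if mb <= free_mb] or [min(SCENARIOS)]
    return SCENARIOS[max(eligible)]

for name, value in sorted(pick_scenario(1024).items()):
    print("%s = %s" % (name, value))

Anything finer-grained than this (per-query work_mem, checkpoint settings, WAL placement) belongs to the second-stage tuning discussed earlier in the thread.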
{
"msg_contents": "On May 4, 2007, at 12:11 PM, Josh Berkus wrote:\n> Sebastian,\n>> Before inventing a hyper tool, we might consider to provide 3-5 \n>> example\n>> szenarios for common hardware configurations. This consumes less time\n>> and be discussed and defined in a couple of days. This is of \n>> course not\n>> the correct option for a brandnew 20 spindle Sata 10.000 Raid 10 \n>> system\n>> but these are probably not the target for default configurations.\n>\n> That's been suggested a number of times, but some GUCs are really \n> tied to the\n> *exact* amount of RAM you have available. So I've never seen how \n> \"example\n> configurations\" could help.\n\nUh... what GUCs are that exacting on the amount of memory? For a \ndecent, base-line configuration, that is.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Sun, 6 May 2007 00:38:10 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOnly some problems that come to my mind with this:\n\na) Hardware is sometimes changed underhand without telling the customer.\nEven for server-level hardware. (Been there.)\n\nb) Hardware recommendations would get stale quickly. What use is a\nhardware spec that specifies some versions of Xeons, when the supply\ndries up. (the example is not contrived, certain versions of PG and\nXeons with certain usage patterns don't work that well. google for\ncontext switch storms)\n\nc) All that is depending upon the PG version too, so with every new\nversion somebody would have to reverify that the recommendations are\nstill valid. (Big example, partitioned tables got way better supported\nin recent versions. So a setup that anticipated Seqscans over big tables\nmight suddenly perform way better. OTOH, there are some regressions\nperformance wise sometimes)\n\nd) And to add insult to this, all that tuning (hardware and software\nside) is sensitive to your workload. Before you start yelling, well,\nhave you ever rolled back an application version, because you notice\nwhat stupidities the developers have added. (And yes you can try to\navoid this by adding better staging to your processes, but it's really\nreally hard to setup a staging environment that has the same performance\n characteristics as production.)\n\nSo, while it's a nice idea to have a set of recommended hardware setups,\nI don't see much of a point. What makes a sensible database server is\nnot exactly a secret. Sizing slightly harder. And after that one enters\nthe realm of fine tuning the complete system. That does not end at the\nsocket on port 5432.\n\nAndreas\n\nJim Nasby wrote:\n> On May 4, 2007, at 12:11 PM, Josh Berkus wrote:\n>> Sebastian,\n>>> Before inventing a hyper tool, we might consider to provide 3-5 example\n>>> szenarios for common hardware configurations. This consumes less time\n>>> and be discussed and defined in a couple of days. This is of course not\n>>> the correct option for a brandnew 20 spindle Sata 10.000 Raid 10 system\n>>> but these are probably not the target for default configurations.\n>>\n>> That's been suggested a number of times, but some GUCs are really tied\n>> to the\n>> *exact* amount of RAM you have available. So I've never seen how\n>> \"example\n>> configurations\" could help.\n> \n> Uh... what GUCs are that exacting on the amount of memory? For a decent,\n> base-line configuration, that is.\n> -- \n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGPX5aHJdudm4KnO0RAorYAJ9XymZy+pp1oHEQUu3VGB7G2G2cSgCfeGaU\nX2bpEq3aM3tzP4MYeR02D6U=\n=vtPy\n-----END PGP SIGNATURE-----\n",
"msg_date": "Sun, 06 May 2007 09:06:02 +0200",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tuning"
}
] |
[
{
"msg_contents": "Dear,\nWe are facing performance tuning problem while using PostgreSQL Database \nover the network on a linux OS.\nOur Database consists of more than 500 tables with an average of 10K \nrecords per table with an average of 20 users accessing the database \nsimultaneously over the network. Each table has indexes and we are \nquerying the database using Hibernate.\nThe biggest problem is while insertion, updating and fetching of records, \nie the database performance is very slow. It take a long time to respond \nin the above scenario.\nPlease provide me with the tuning of the database. I am attaching my \npostgresql.conf file for the reference of our current configuration\n\n\n\nPlease replay me ASAP\nRegards,\n Shohab Abdullah \n Software Engineer,\n Manufacturing SBU-POWAI\n Larsen and Toubro Infotech Ltd.| 4th floor, L&T Technology Centre, \nSaki Vihar Road, Powai, Mumbai-400072\n (: +91-22-67767366 | (: +91-9870247322\n Visit us at : http://www.lntinfotech.com \n”I cannot predict future, I cannot change past, I have just the present \nmoment, I must treat it as my last\" \n----------------------------------------------------------------------------------------\nThe information contained in this email has been classified: \n[ X] L&T Infotech General Business\n[ ] L&T Infotech Internal Use Only\n[ ] L&T Infotech Confidential\n[ ] L&T Infotech Proprietary\nThis e-mail and any files transmitted with it are for the sole use of the \nintended recipient(s) and may contain confidential and privileged \ninformation.\nIf you are not the intended recipient, please contact the sender by reply \ne-mail and destroy all copies of the original message.\n\n______________________________________________________________________",
"msg_date": "Thu, 26 Apr 2007 17:08:48 +0530",
"msg_from": "Shohab Abdullah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Please try to keep postings to one mailing list - I've replied to the \nperformance list here.\n\nShohab Abdullah wrote:\n> Dear,\n> We are facing performance tuning problem while using PostgreSQL Database \n> over the network on a linux OS.\n> Our Database consists of more than 500 tables with an average of 10K \n> records per table with an average of 20 users accessing the database \n> simultaneously over the network. Each table has indexes and we are \n> querying the database using Hibernate.\n> The biggest problem is while insertion, updating and fetching of records, \n> ie the database performance is very slow. It take a long time to respond \n> in the above scenario.\n> Please provide me with the tuning of the database. I am attaching my \n> postgresql.conf file for the reference of our current configuration\n\nYou haven't provided any details on what version of PG you are using, \nwhat hardware you are using, whether there is a specific bottleneck \n(disk, memory, cpu) or certain queries.\n\nWithout that, no-one can suggest useful settings. You might find this \ndocument a good place to start: http://www.powerpostgresql.com/PerfList/\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 26 Apr 2007 12:53:32 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Please try to post to one list at a time.\n\nI've replied to this on the -performance list.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 26 Apr 2007 12:53:56 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: PostgreSQL Performance Tuning"
},
{
"msg_contents": "Hello!\n\nI would do the following (in that order):\n1.) Check for a performant application logic and application design (e.g. \ndegree of granularity of the Java Hibernate Mapping, are there some \nobject iterators with hundreds of objects, etc.)\n2.) Check the hibernate generated queries and whether the query is \nsuitable or not. Also do a \"explain query\" do see the query plan.\n\nSometimes a manually generated is much more efficient than hibernate ones.\n\n3.) Optimize the database e.g. postgresql.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n\n\nOn Thu, 26 Apr 2007, Shohab Abdullah wrote:\n\n> Dear,\n> We are facing performance tuning problem while using PostgreSQL Database\n> over the network on a linux OS.\n> Our Database consists of more than 500 tables with an average of 10K\n> records per table with an average of 20 users accessing the database\n> simultaneously over the network. Each table has indexes and we are\n> querying the database using Hibernate.\n> The biggest problem is while insertion, updating and fetching of records,\n> ie the database performance is very slow. It take a long time to respond\n> in the above scenario.\n> Please provide me with the tuning of the database. I am attaching my\n> postgresql.conf file for the reference of our current configuration\n>\n>\n>\n> Please replay me ASAP\n> Regards,\n> Shohab Abdullah\n> Software Engineer,\n> Manufacturing SBU-POWAI\n> Larsen and Toubro Infotech Ltd.| 4th floor, L&T Technology Centre,\n> Saki Vihar Road, Powai, Mumbai-400072\n> (: +91-22-67767366 | (: +91-9870247322\n> Visit us at : http://www.lntinfotech.com\n> ÿÿI cannot predict future, I cannot change past, I have just the present\n> moment, I must treat it as my last\"\n> ----------------------------------------------------------------------------------------\n> The information contained in this email has been classified:\n> [ X] L&T Infotech General Business\n> [ ] L&T Infotech Internal Use Only\n> [ ] L&T Infotech Confidential\n> [ ] L&T Infotech Proprietary\n> This e-mail and any files transmitted with it are for the sole use of the\n> intended recipient(s) and may contain confidential and privileged\n> information.\n> If you are not the intended recipient, please contact the sender by reply\n> e-mail and destroy all copies of the original message.\n>\n> ______________________________________________________________________\n>",
"msg_date": "Thu, 26 Apr 2007 19:02:36 +0200 (CEST)",
"msg_from": "Gerhard Wiesinger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: PostgreSQL Performance Tuning"
}
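A sketch of step 2 above: grab a statement from Hibernate's SQL log (hibernate.show_sql=true) and run it under EXPLAIN ANALYZE; the captured query shown here, psycopg2 and the connection string are placeholders, not anything from the original posts:

import psycopg2   # assumed driver

# A query captured from Hibernate's SQL log; this statement is only a placeholder.
captured = "SELECT * FROM orders WHERE customer_id = 42"

conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
cur = conn.cursor()
cur.execute("EXPLAIN ANALYZE " + captured)
for (line,) in cur.fetchall():
    print(line)
conn.close()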
] |
[
{
"msg_contents": "NUMERIC operations are very slow in pgsql. Equality comparisons are somewhat faster, but other operations are very slow compared to other vendor's NUMERIC.\n\nWe've sped it up a lot here internally, but you may want to consider using FLOAT for what you are doing.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tBill Moran [mailto:[email protected]]\nSent:\tThursday, April 26, 2007 05:13 PM Eastern Standard Time\nTo:\tzardozrocks\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Simple query, 10 million records...MySQL ten times faster\n\nIn response to zardozrocks <[email protected]>:\n\n> I have this table:\n> \n> CREATE TABLE test_zip_assoc (\n> id serial NOT NULL,\n> f_id integer DEFAULT 0 NOT NULL,\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n> );\n> CREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);\n> CREATE INDEX long_radians ON test_zip_assoc USING btree\n> (long_radians);\n> \n> \n> \n> It's basically a table that associates some foreign_key (for an event,\n> for instance) with a particular location using longitude and\n> latitude. I'm basically doing a simple proximity search. I have\n> populated the database with *10 million* records. I then test\n> performance by picking 50 zip codes at random and finding the records\n> within 50 miles with a query like this:\n> \n> SELECT id\n> \tFROM test_zip_assoc\n> \tWHERE\n> \t\tlat_radians > 0.69014816041\n> \t\tAND lat_radians < 0.71538026567\n> \t\tAND long_radians > -1.35446228028\n> \t\tAND long_radians < -1.32923017502\n> \n> \n> On my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\n> ram) this query averages 1.5 seconds each time it runs after a brief\n> warmup period. In PostGreSQL it averages about 15 seconds.\n> \n> Both of those times are too slow. I need the query to run in under a\n> second with as many as a billion records. I don't know if this is\n> possible but I'm really hoping someone can help me restructure my\n> indexes (multicolumn?, multiple indexes with a 'where' clause?) so\n> that I can get this running as fast as possible.\n> \n> If I need to consider some non-database data structure in RAM I will\n> do that too. Any help or tips would be greatly appreciated. I'm\n> willing to go to greath lengths to test this if someone can make a\n> good suggestion that sounds like it has a reasonable chance of\n> improving the speed of this search. There's an extensive thread on my\n> efforts already here:\n> \n> http://phpbuilder.com/board/showthread.php?t=10331619&page=10\n\nWhy didn't you investigate/respond to the last posts there? The advice\nto bump shared_buffers is good advice. work_mem might also need bumped.\n\nFigure out which postgresql.conf your system is using and get it dialed\nin for your hardware. You can make all the indexes you want, but if\nyou've told Postgres that it only has 8M of RAM to work with, performance\nis going to suck. 
I don't see hardware specs on that thread (but I\ndidn't read the whole thing) If the system you're using is a dedicated\nDB system, set shared_buffers to 1/3 - 1/2 of the physical RAM on the\nmachine for starters.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n\n\nRe: [PERFORM] Simple query, 10 million records...MySQL ten times faster\n\n\n\nNUMERIC operations are very slow in pgsql. Equality comparisons are somewhat faster, but other operations are very slow compared to other vendor's NUMERIC.\n\nWe've sped it up a lot here internally, but you may want to consider using FLOAT for what you are doing.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: Bill Moran [mailto:[email protected]]\nSent: Thursday, April 26, 2007 05:13 PM Eastern Standard Time\nTo: zardozrocks\nCc: [email protected]\nSubject: Re: [PERFORM] Simple query, 10 million records...MySQL ten times faster\n\nIn response to zardozrocks <[email protected]>:\n\n> I have this table:\n>\n> CREATE TABLE test_zip_assoc (\n> id serial NOT NULL,\n> f_id integer DEFAULT 0 NOT NULL,\n> lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,\n> long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL\n> );\n> CREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);\n> CREATE INDEX long_radians ON test_zip_assoc USING btree\n> (long_radians);\n>\n>\n>\n> It's basically a table that associates some foreign_key (for an event,\n> for instance) with a particular location using longitude and\n> latitude. I'm basically doing a simple proximity search. I have\n> populated the database with *10 million* records. I then test\n> performance by picking 50 zip codes at random and finding the records\n> within 50 miles with a query like this:\n>\n> SELECT id\n> FROM test_zip_assoc\n> WHERE\n> lat_radians > 0.69014816041\n> AND lat_radians < 0.71538026567\n> AND long_radians > -1.35446228028\n> AND long_radians < -1.32923017502\n>\n>\n> On my development server (dual proc/dual core Opteron 2.8 Ghz with 4GB\n> ram) this query averages 1.5 seconds each time it runs after a brief\n> warmup period. In PostGreSQL it averages about 15 seconds.\n>\n> Both of those times are too slow. I need the query to run in under a\n> second with as many as a billion records. I don't know if this is\n> possible but I'm really hoping someone can help me restructure my\n> indexes (multicolumn?, multiple indexes with a 'where' clause?) so\n> that I can get this running as fast as possible.\n>\n> If I need to consider some non-database data structure in RAM I will\n> do that too. Any help or tips would be greatly appreciated. I'm\n> willing to go to greath lengths to test this if someone can make a\n> good suggestion that sounds like it has a reasonable chance of\n> improving the speed of this search. There's an extensive thread on my\n> efforts already here:\n>\n> http://phpbuilder.com/board/showthread.php?t=10331619&page=10\n\nWhy didn't you investigate/respond to the last posts there? The advice\nto bump shared_buffers is good advice. work_mem might also need bumped.\n\nFigure out which postgresql.conf your system is using and get it dialed\nin for your hardware. 
You can make all the indexes you want, but if\nyou've told Postgres that it only has 8M of RAM to work with, performance\nis going to suck. I don't see hardware specs on that thread (but I\ndidn't read the whole thing) If the system you're using is a dedicated\nDB system, set shared_buffers to 1/3 - 1/2 of the physical RAM on the\nmachine for starters.\n\n--\nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org",
"msg_date": "Thu, 26 Apr 2007 17:39:56 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple query, 10 million records...MySQL ten\n times faster"
}
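A sketch of what the float suggestion could look like for the posted table, with both range columns in a single composite index; the new table name, the index name, psycopg2 and the connection string are all illustrative assumptions rather than anything from the thread:

import psycopg2   # assumed driver; the connection string below is a placeholder

SETUP = """
CREATE TABLE test_zip_assoc_f (
    id            serial PRIMARY KEY,
    f_id          integer NOT NULL DEFAULT 0,
    lat_radians   float8  NOT NULL DEFAULT 0,
    long_radians  float8  NOT NULL DEFAULT 0
);
CREATE INDEX test_zip_assoc_f_lat_long_idx
    ON test_zip_assoc_f (lat_radians, long_radians);
"""

BOX_QUERY = """
SELECT id
  FROM test_zip_assoc_f
 WHERE lat_radians  BETWEEN %s AND %s
   AND long_radians BETWEEN %s AND %s
"""

conn = psycopg2.connect("dbname=postgres")
cur = conn.cursor()
cur.execute(SETUP)          # one-time setup; reload the 10M rows into the float8 columns
conn.commit()
cur.execute(BOX_QUERY, (0.69014816041, 0.71538026567,
                        -1.35446228028, -1.32923017502))
print("%d ids inside the bounding box" % len(cur.fetchall()))
conn.close()

The float8 columns avoid the slow NUMERIC comparisons Luke mentions, and the two-column index lets the planner satisfy both range conditions from one index scan.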
] |
[
{
"msg_contents": "Hi,\n\nI have pg 8.1.4 running in\nWindows XP Pro\nwirh a Pentium D\n\nand I notice that I can not use more than 50% of the cpus (Pentium D has 2 \ncpus), how can I change the settings to use the 100% of it.\n\nRegards,\nAndrew Retzlaff\n\n_________________________________________________________________\nAdvertisement: Visit LetsShop.com to WIN Fabulous Books Weekly \nhttp://a.ninemsn.com.au/b.aspx?URL=http%3A%2F%2Fwww%2Eletsshop%2Ecom%2FLetsShopBookClub%2Ftabid%2F866%2FDefault%2Easpx&_t=751480117&_r=HM_Tagline_books&_m=EXT\n\n",
"msg_date": "Fri, 27 Apr 2007 04:43:06 +0000",
"msg_from": "\"Andres Retzlaff\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Usage up to 50% CPU"
},
{
"msg_contents": "On Fri, Apr 27, 2007 at 04:43:06AM +0000, Andres Retzlaff wrote:\n> Hi,\n> \n> I have pg 8.1.4 running in\n> Windows XP Pro\n> wirh a Pentium D\n> \n> and I notice that I can not use more than 50% of the cpus (Pentium D has 2 \n> cpus), how can I change the settings to use the 100% of it.\n\nA single query will only use one CPU. If you have multiple parallell\nclients, they will use the differnt CPUs.\n\n//Magnus\n\n",
"msg_date": "Fri, 27 Apr 2007 10:06:53 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Usage up to 50% CPU"
},
{
"msg_contents": "Hi Magnus,\n\nin this case each CPU goes up to 50%, giveing me 50% total usage. I was \nspecting as you say 1 query 100% cpu.\n\nAny ideas?\n\nAndrew\n\nOn Fri, Apr 27, 2007 at 04:43:06AM +0000, Andres Retzlaff wrote:\n > Hi,\n >\n > I have pg 8.1.4 running in\n > Windows XP Pro\n > wirh a Pentium D\n >\n > and I notice that I can not use more than 50% of the cpus (Pentium D has \n2\n > cpus), how can I change the settings to use the 100% of it.\n\nA single query will only use one CPU. If you have multiple parallell\nclients, they will use the differnt CPUs.\n\n//Magnus\n\n_________________________________________________________________\nAdvertisement: Its simple! Sell your car for just $30 at carsales.com.au \nhttp://a.ninemsn.com.au/b.aspx?URL=http%3A%2F%2Fsecure%2Dau%2Eimrworldwide%2Ecom%2Fcgi%2Dbin%2Fa%2Fci%5F450304%2Fet%5F2%2Fcg%5F801577%2Fpi%5F1005244%2Fai%5F838588&_t=754951090&_r=tig&_m=EXT\n\n",
"msg_date": "Fri, 27 Apr 2007 08:10:48 +0000",
"msg_from": "\"Andres Retzlaff\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Usage up to 50% CPU"
},
{
"msg_contents": "On Fri, Apr 27, 2007 at 08:10:48AM +0000, Andres Retzlaff wrote:\n> Hi Magnus,\n> \n> in this case each CPU goes up to 50%, giveing me 50% total usage. I was \n> specting as you say 1 query 100% cpu.\n> \n> Any ideas?\n\nNo. 1 query will only use 100% of *one* CPU, which means 50% total usage.\nYou need at least one query per CPU to reach full 100% of the whole system.\n(Actually, you can get slightly above 50% since the query will run on one\nCPU and the OS and autovacuum and bgwriter can run on the other. But it's\nmarginally)\n\n//Magnus\n\n",
"msg_date": "Fri, 27 Apr 2007 10:13:50 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Usage up to 50% CPU"
},
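One way to see both cores busy from a single client is simply to open two connections and run a CPU-heavy statement on each; a sketch, with psycopg2, the connection string and the test statement as assumptions:

import threading
import psycopg2   # assumed driver

def run_query(dsn, sql):
    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    cur.execute(sql)
    cur.fetchall()
    conn.close()

dsn = "dbname=postgres"                                    # placeholder connection string
sql = "SELECT count(*) FROM generate_series(1,5000000)"    # any CPU-heavy statement

# Each connection gets its own backend process, so two connections can load both cores.
threads = [threading.Thread(target=run_query, args=(dsn, sql)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()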
{
"msg_contents": "Magnus Hagander wrote:\n> On Fri, Apr 27, 2007 at 08:10:48AM +0000, Andres Retzlaff wrote:\n>> Hi Magnus,\n>>\n>> in this case each CPU goes up to 50%, giveing me 50% total usage. I was \n>> specting as you say 1 query 100% cpu.\n>>\n>> Any ideas?\n> \n> No. 1 query will only use 100% of *one* CPU, which means 50% total usage.\n> You need at least one query per CPU to reach full 100% of the whole system.\n> (Actually, you can get slightly above 50% since the query will run on one\n> CPU and the OS and autovacuum and bgwriter can run on the other. But it's\n> marginally)\n> \n> //Magnus\n\nI would think that as you are sitting and watching the cpu usage, your \nquery would seem to taking a while to run, leading me to wonder if you \nare getting a full table scan that is causing pg to wait for disk response?\n\nOr are you running a long list of steps that take a while?\n\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Fri, 27 Apr 2007 19:23:41 +0930",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Usage up to 50% CPU"
},
{
"msg_contents": "On Fri, Apr 27, 2007 at 07:23:41PM +0930, Shane Ambler wrote:\n>I would think that as you are sitting and watching the cpu usage, your \n>query would seem to taking a while to run, leading me to wonder if you \n>are getting a full table scan that is causing pg to wait for disk response?\n\nIf so, you probably wouldn't be seeing that much cpu usage... (if the \ncpu is waiting for disk it's idling)\n\nMike Stone\n",
"msg_date": "Fri, 27 Apr 2007 08:52:45 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Usage up to 50% CPU"
},
{
"msg_contents": ">On Fri, Apr 27, 2007 at 07:23:41PM +0930, Shane Ambler wrote:\n>>I would think that as you are sitting and watching the cpu usage, your \n>>query would seem to taking a while to run, leading me to wonder if you are \n>>getting a full table scan that is causing pg to wait for disk response?\n\n>If so, you probably wouldn't be seeing that much cpu usage... (if the cpu \n>is waiting for disk it's >idling)\n\n>Mike Stone\n\nYes, I notice 3 steeps:\n1) the CPU is at 100% (of 1 cpu)\n2) the HDD is 100% but no much of the cpu\n3) I get the query display\n\nI will see what can I do to improved the speed. At this moment is taking 10 \nsec to get the responce.\n\nRegards,\nAndrew Retzlaff\n\n_________________________________________________________________\nAdvertisement: Its simple! Sell your car for just $30 at carsales.com.au \nhttp://a.ninemsn.com.au/b.aspx?URL=http%3A%2F%2Fsecure%2Dau%2Eimrworldwide%2Ecom%2Fcgi%2Dbin%2Fa%2Fci%5F450304%2Fet%5F2%2Fcg%5F801577%2Fpi%5F1005244%2Fai%5F838588&_t=754951090&_r=tig&_m=EXT\n\n",
"msg_date": "Sat, 28 Apr 2007 19:28:14 +0000",
"msg_from": "\"Andres Retzlaff\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Usage up to 50% CPU"
}
] |
[
{
"msg_contents": "Hi!\nI read the link below and am puzzled by or curious about something.\nhttp://www.postgresql.org/docs/8.1/interactive/datatype-character.html\n\nThe Tip below is intriguing\n\n\"Tip: There are no performance differences between these three types,\napart from the increased storage size when using the blank-padded type.\nWhile character(n) has performance advantages in some other database\nsystems, it has no such advantages in PostgreSQL. In most situations text\nor character varying should be used instead.\"\n\nHow can a field that doesn't have a limit like \"text\" perform similarly to\nchar varying(128), for example? At some point, we need to write data to\ndisk. The more data that needs to be written, the longer the disk write\nwill take, especially when it requires finding free sectors to write to.\n\nAnother interesting quote from the same page is the following:\n\n\"Long values are also stored in background tables so they do not interfere\nwith rapid access to the shorter column values. \"\n\nIf the long values are stored in a separate table, on a different part of\nthe disk, doesn't this imply an extra disk seek? Won't it therefore take\nlonger?\n\n\nSid\n\n\n\n\n\n",
"msg_date": "Fri, 27 Apr 2007 08:30:51 -0700 (PDT)",
"msg_from": "\"Siddharth Anand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: ] How"
},
{
"msg_contents": "Siddharth Anand wrote:\n> Hi!\n> I read the link below and am puzzled by or curious about something.\n> http://www.postgresql.org/docs/8.1/interactive/datatype-character.html\n> \n> The Tip below is intriguing\n> \n> \"Tip: There are no performance differences between these three types,\n> apart from the increased storage size when using the blank-padded type.\n> While character(n) has performance advantages in some other database\n> systems, it has no such advantages in PostgreSQL. In most situations text\n> or character varying should be used instead.\"\n> \n> How can a field that doesn't have a limit like \"text\" perform similarly to\n> char varying(128), for example? At some point, we need to write data to\n> disk. The more data that needs to be written, the longer the disk write\n> will take, especially when it requires finding free sectors to write to.\n\nThat's no difference *for the same amount of data*. So, char(128), \nvarchar(128) with 128 characters and text with 128 characters in it are \nthe same. This isn't always the case with other systems.\n\n> Another interesting quote from the same page is the following:\n> \n> \"Long values are also stored in background tables so they do not interfere\n> with rapid access to the shorter column values. \"\n> \n> If the long values are stored in a separate table, on a different part of\n> the disk, doesn't this imply an extra disk seek? Won't it therefore take\n> longer?\n\nYes. But you gain every time you read from the table and aren't \ninterested in that column. Typically large text columns contain \ndescriptive text and aren't used in joins, so it pays for itself quite \neasily.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 27 Apr 2007 17:07:22 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: ] How"
}
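A sketch that makes the point measurable with pg_column_size() (present in recent releases): the same 128 characters take the same space whether declared character(128), character varying(128) or text; psycopg2 and the connection string are assumptions:

import psycopg2   # assumed driver

conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
cur = conn.cursor()
value = "x" * 128

# Same 128 characters stored as the three types: varchar and text are identical,
# and char(128) only differs when the value is shorter than the declared length.
cur.execute(
    "SELECT pg_column_size(%s::character(128)),"
    "       pg_column_size(%s::character varying(128)),"
    "       pg_column_size(%s::text)",
    (value, value, value))
print("char(128), varchar(128), text:", cur.fetchone())
conn.close()

For values that do not fill the declared length, char(128) is the only one of the three that pays for the blank padding.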
] |
[
{
"msg_contents": "Hi!\nI read the link below and am puzzled by or curious about something.\nhttp://www.postgresql.org/docs/8.1/interactive/datatype-character.html\n\nThe Tip below is intriguing\n\n\"Tip: There are no performance differences between these three types,\napart from the increased storage size when using the blank-padded type.\nWhile character(n) has performance advantages in some other database\nsystems, it has no such advantages in PostgreSQL. In most situations text\nor character varying should be used instead.\"\n\nHow can a field that doesn't have a limit like \"text\" perform similarly to\nchar varying(128), for example? At some point, we need to write data to\ndisk. The more data that needs to be written, the longer the disk write\nwill take, especially when it requires finding free sectors to write to.\n\nAnother interesting quote from the same page is the following:\n\n\"Long values are also stored in background tables so they do not interfere\nwith rapid access to the shorter column values. \"\n\nIf the long values are stored in a separate table, on a different part of\nthe disk, doesn't this imply an extra disk seek? Won't it therefore take\nlonger?\n\n\nSid\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 27 Apr 2007 08:31:51 -0700 (PDT)",
"msg_from": "\"Siddharth Anand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How can fixed and variable width columns perform similarly?"
},
{
"msg_contents": "Hi Tom,\nMy question wasn't phrased clearly. Oracle exhibits a performance\ndegradation for very large-sized fields (CLOB types that I equate to\nPostGres' text type) when compared with the performance of field types\nlike varchar that handle a max character limit of a few thousand bytes in\nOracle.\n\nIt sounds like PostGres doesn't exhibit this same difference. I wanted to\nunderstand how this could be and whether there was a trade-off.\n\nCheers!\nSid\n> \"Siddharth Anand\" <[email protected]> writes:\n>> How can a field that doesn't have a limit like \"text\" perform similarly\n>> to\n>> char varying(128), for example? At some point, we need to write data to\n>> disk. The more data that needs to be written, the longer the disk write\n>> will take, especially when it requires finding free sectors to write to.\n>\n> What's your point? If you're not going to put more than 128 characters\n> in the field, there's no difference in the amount of data involved.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n\n",
"msg_date": "Fri, 27 Apr 2007 09:19:04 -0700 (PDT)",
"msg_from": "\"Siddharth Anand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How can fixed and variable width columns perform similarly?"
},
{
"msg_contents": "\"Siddharth Anand\" <[email protected]> writes:\n> How can a field that doesn't have a limit like \"text\" perform similarly to\n> char varying(128), for example? At some point, we need to write data to\n> disk. The more data that needs to be written, the longer the disk write\n> will take, especially when it requires finding free sectors to write to.\n\nWhat's your point? If you're not going to put more than 128 characters\nin the field, there's no difference in the amount of data involved.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Apr 2007 12:25:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can fixed and variable width columns perform similarly? "
},
{
"msg_contents": "I think the manual is implying that if you store a value like \"Sid\" in a\nfield either of type varchar(128) or type text there is no performance\ndifference. The manual is not saying that you get the same performance\nstoring a 500k text field as when you store the value \"Sid\".\n\nDave\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Siddharth Anand\nSent: Friday, April 27, 2007 10:32 AM\nTo: [email protected]\nSubject: [PERFORM] How can fixed and variable width columns perform\nsimilarly?\n\nHi!\nI read the link below and am puzzled by or curious about something.\nhttp://www.postgresql.org/docs/8.1/interactive/datatype-character.html\n\nThe Tip below is intriguing\n\n\"Tip: There are no performance differences between these three types,\napart from the increased storage size when using the blank-padded type.\nWhile character(n) has performance advantages in some other database\nsystems, it has no such advantages in PostgreSQL. In most situations text\nor character varying should be used instead.\"\n\nHow can a field that doesn't have a limit like \"text\" perform similarly to\nchar varying(128), for example? At some point, we need to write data to\ndisk. The more data that needs to be written, the longer the disk write\nwill take, especially when it requires finding free sectors to write to.\n\nAnother interesting quote from the same page is the following:\n\n\"Long values are also stored in background tables so they do not interfere\nwith rapid access to the shorter column values. \"\n\nIf the long values are stored in a separate table, on a different part of\nthe disk, doesn't this imply an extra disk seek? Won't it therefore take\nlonger?\n\n\nSid\n\n\n\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Fri, 27 Apr 2007 11:40:15 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can fixed and variable width columns perform similarly?"
},
{
"msg_contents": "\"Siddharth Anand\" <[email protected]> writes:\n> My question wasn't phrased clearly. Oracle exhibits a performance\n> degradation for very large-sized fields (CLOB types that I equate to\n> PostGres' text type) when compared with the performance of field types\n> like varchar that handle a max character limit of a few thousand bytes in\n> Oracle.\n\n> It sounds like PostGres doesn't exhibit this same difference. I wanted to\n> understand how this could be and whether there was a trade-off.\n\nAh. Well, the answer is that we change behavior dynamically depending\non the size of the particular field value, instead of hard-wiring it to\nthe declared column type. It sounds like Oracle's CLOB might be doing\nabout the same thing as an out-of-line \"toasted\" field value in\nPostgres. In PG, text and varchar behave identically except that\nvarchar(N) adds an insert-time check on the length of the field value\n--- but this is just a constraint check and doesn't have any direct\ninfluence on how the value is stored.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Apr 2007 13:12:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can fixed and variable width columns perform similarly? "
}
] |
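To make Tom's last point concrete, a small illustrative experiment (the table and values are invented for the example and do not come from Sid's schema):

CREATE TABLE len_check_demo (a character varying(128), b text);

-- both columns accept a 128-character value and store it the same way
INSERT INTO len_check_demo VALUES (repeat('x', 128), repeat('x', 128));

-- only the varchar(128) column rejects anything longer
INSERT INTO len_check_demo VALUES (repeat('x', 129), repeat('x', 129));

The second INSERT fails only because of the length check on column a; declaring the column as text simply drops that check, without changing how values that do fit are stored.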
[
{
"msg_contents": "Hi,\n\nWe're facing some perfomance problems with the database for a web site with\nvery specific needs. First of all, we're using version 8.1 in a server with\n1GB of RAM. I know memory normally should be more, but as our tables are not\nso big (as a matter of fact, they are small) I think the solution would not\nbe adding more RAM.\n\nWhat we basically have is a site where each user has a box with links to\nother randomly selected users. Whenever a box from a user is shown, a SPs is\nexecuted: a credit is added to that user and a credit is substracted from\nthe accounts of the shown links. Accounts with no credits do not have to be\nlisted. So, we've lots (LOTS) of users querying and updating the same table.\nSometimes with big peaks.\n\nOur first attempt was to split that table in two: one for the actual credits\nand another one for the users. So, only the credits table gets updated on\nevery request, but it has a trigger that updates a flag field in the users\ntable saying if the user has credits. This had a good impact, but I guess\nit's not enough.\n\nFor now, we only have 23.000 users, but it's going to grow. Do you have any\nadvice? Is this possible with postgres or do you recommend just to try with\na volatile memory approach for the credits?\n\nWe're using pgpool and the output from free shows only 350M of RAM being\nused.\n\nSome relevants parts of the .conf:\n\nmax_connections = 160\nshared_buffers = 40000\nwork_mem = 3096\nmaintenance_work_mem = 131072\nmax_fsm_pages = 70000\nfsync = false\nautovacuum = on\n\nAny help would be really appreciated.\n\nThanks in advance,\nMauro.\n\n",
"msg_date": "Fri, 27 Apr 2007 14:38:57 -0300",
"msg_from": "\"Mauro N. Infantino\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very specific server situation"
},
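For readers who want to picture the split Mauro describes, a minimal sketch could look like the following; every table, column and function name is hypothetical, since the actual schema is not shown in the thread:

CREATE TABLE site_users (
    id          integer PRIMARY KEY,
    has_credits boolean NOT NULL DEFAULT false   -- flag kept in sync by the trigger
);

CREATE TABLE user_credits (
    user_id integer PRIMARY KEY REFERENCES site_users(id),
    amount  integer NOT NULL
);

CREATE FUNCTION sync_has_credits() RETURNS trigger AS $$
BEGIN
    -- mirror the credit balance into the flag the listing query filters on
    UPDATE site_users SET has_credits = (NEW.amount > 0) WHERE id = NEW.user_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sync_has_credits
    AFTER INSERT OR UPDATE ON user_credits
    FOR EACH ROW EXECUTE PROCEDURE sync_has_credits();

Note that with this layout every credit change still updates two rows, so dead-tuple bloat builds up in both tables under load — which is why the vacuum and checkpoint advice later in the thread matters.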
{
"msg_contents": "\"Mauro N. Infantino\" <[email protected]> writes:\n> What we basically have is a site where each user has a box with links to\n> other randomly selected users. Whenever a box from a user is shown, a SPs is\n> executed: a credit is added to that user and a credit is substracted from\n> the accounts of the shown links. Accounts with no credits do not have to be\n> listed. So, we've lots (LOTS) of users querying and updating the same table.\n\nHave you checked to make sure the query plans are reasonable? Have you\nchecked that autovacuum is running often enough? (You might want to try\ncontrib/pgstattuple to see how much dead space there is in your\nheavily-updated tables.) Also, on a high-update workload it is\nabsolutely critical to boost checkpoint_segments far enough that you are\nnot doing checkpoints oftener than maybe once every five minutes.\n\nIf the performance problems seem \"bursty\" then you may also need to look\nat adjusting bgwriter and/or vacuum cost delay parameters to smooth out\nthe I/O load.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Apr 2007 18:51:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very specific server situation "
},
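As an illustration of the contrib/pgstattuple suggestion (the table name here is hypothetical; the function becomes available once the contrib module's SQL script has been loaded into the database):

SELECT * FROM pgstattuple('user_credits');

The dead_tuple_percent and free_percent columns show how much of a heavily updated table is dead space or reusable free space; if those numbers keep climbing between autovacuum runs, vacuum is not keeping up.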
{
"msg_contents": "Tom, \n\nThank you very much for your suggestions.\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> \n> Have you checked to make sure the query plans are reasonable? \n\nI've attached the main query and its explain plan. I can't find a way to\nimprove it.\n\nDoes it make any difference if it's executed from a stored procedure? Is\nthere any difference between the SP's language (PL/pgSQL, PL/php, etc. It\nneeds to make some other tiny things besides the query)?\n\n> You might want to try contrib/pgstattuple\n\nThanks. I'll give it a try and report the results here.\n\n> absolutely critical to boost checkpoint_segments far enough \n\nHow do I know how ofen checkpoints are done?\nI've modified the parameters:\n\ncheckpoint_segments = 36 # it was 12 before\ncheckpoint_timeout = 1000\ncheckpoint_warning = 300 # so, I'll get a warning if it's too frequent.\ncommit_delay = 5000\ncommit_siblings = 2\n\n> adjusting bgwriter and/or vacuum cost delay parameters\n\nI've used a moderate cost delay configuration to see how it responds\n(vacuum_cost_delay = 100 & vacuum_cost_limit = 200).\nDo you have any advice on how to configure the bgwriter? I have no clue\nabout it and couldn't find anything clear.\n\nAlso, I know an upgrade to 8.2 is always a good thing, but is there any\nchange that could help this specific situation?\n\nAgain, thank you very much for your answers (and, of course, everything you\ndo in pgsql).\n\nRegards,\nMauro.",
"msg_date": "Sat, 28 Apr 2007 17:43:01 -0300",
"msg_from": "\"Mauro N. Infantino\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very specific server situation "
}
] |
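Regarding Mauro's question about how to tell how often checkpoints happen: 8.1/8.2 have no log_checkpoints setting yet, so one low-tech approach (an illustration, not advice given in the thread) is to let checkpoint_warning cover the whole timeout window and watch the server log:

# postgresql.conf -- illustrative values only
checkpoint_segments = 36
checkpoint_timeout  = 1000
checkpoint_warning  = 1000   # any checkpoint forced by WAL volume before the timer fires gets logged

Each resulting warning in the log then marks a checkpoint that was triggered by filling checkpoint_segments rather than by checkpoint_timeout.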
[
{
"msg_contents": ">Well, that's darn odd. It should not be getting that so far wrong.\n\nI've been puzzling on this for over a week now, but can't seem to find a \nsolution. Would you have some more hints of what I could possibly try next? \nAs far as I can see, the mentioned status column is just a simple column. Or \ncould it be related to the fact that its type is var(1)? Would that confuse \nthe planner?\n\n_________________________________________________________________\nTalk with your online friends with Messenger \nhttp://www.join.msn.com/messenger/overview\n\n",
"msg_date": "Sat, 28 Apr 2007 11:56:36 +0200",
"msg_from": "\"henk de wit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join"
}
] |
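One thing worth checking in a situation like this (a sketch using the table and column names from the thread) is what the planner actually believes about the status column after an ANALYZE:

SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'banners_links' AND attname = 'status';

A varchar(1) column is nothing special in itself; what matters is whether the most-common-values list reflects how often status = '0' really occurs, because that is what the row estimate for the filter is based on.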
[
{
"msg_contents": "Perhaps one other interesting observation; when I earlier removed the status \ncheck for which the rows got so wrongly estimated, the query got \ndramatically faster. However, once I also remove all redundant checks the \nquery gets slower again.\n\nThis is the query with both status and redundant check removed:\n\nSELECT\n\tid,\n\tstatus,\n\tmerchant_id,\n\tdescription,\n\torg_text,\n\tusers_banners_id,\n\tbanner_url,\n\tcookie_redirect,\n\ttype,\n\n\tCASE WHEN special_deal IS null THEN\n\t\t''\n\tELSE\n\t\t'special deal'\n\tEND AS special_deal,\n\n\tCASE WHEN url_of_banner IS null\tTHEN\n\t\t''\n\tELSE\n\t\turl_of_banner\n\tEND AS url_of_banner,\n\n\tCASE WHEN period_end IS NULL THEN\n\t\t'not_active'\n\tELSE\n\t\t'active'\n\tEND AS active_not_active,\n\n\tCASE WHEN ecpc IS NULL THEN\n\t\t0.00\n\tELSE\n\t\tROUND(ecpc::numeric,2)\n\tEND AS ecpc,\n\n\tCASE WHEN ecpc_merchant IS NULL THEN\n\t\t0.00\n\tELSE\n\t\tROUND(ecpc_merchant::numeric,2)\n\tEND AS ecpc_merchant\n\nFROM\n\t/* SUBQUERY grand_total_fetch_banners */ (\n\t\t/* SUBQUERY grand_total */(\n\t\t\t/* SUBQUERY banners_special_deals */\t(\n\n\t\t\t\t/* SUBQUERY banners */ (\n\t\t\t\t\tSELECT\n\t\t\t\t\t\t*\n\t\t\t\t\tFROM\n\t\t\t\t\t\t/* SUBQUERY banners_links */ (\n\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\tbanners_links.id,\n\t\t\t\t\t\t\t\tmerchant_id,\n\t\t\t\t\t\t\t\tbanners_org.banner_text AS org_text,\n\t\t\t\t\t\t\t\tdescription,\n\t\t\t\t\t\t\t\tstatus,\n\t\t\t\t\t\t\t\tbanner_url,\n\t\t\t\t\t\t\t\tecpc,\n\t\t\t\t\t\t\t\tecpc_merchant,\n\t\t\t\t\t\t\t\tCOALESCE(cookie_redirect,0) AS cookie_redirect\n\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t/* SUBQUERY banners_links */ (\n\n\t\t\t\t\t\t\t\t\t/* subselect tot join ecpc_per_banner_links on banners_links*/\n\t\t\t\t\t\t\t\t\t/* SUBQUERY banners_links */ (\n\t\t\t\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\t\t\t\t*\n\t\t\t\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t\t\t\tbanners_links\n\t\t\t\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\t\t\t\tmerchant_id = 217\n\t\t\t\t\t\t\t\t\t) AS banners_links\n\n\t\t\t\t\t\t\t\t\t\tLEFT OUTER JOIN\n\n\t\t\t\t\t\t\t\t\t/* SUBQUERY ecpc_per_banner_link */\t(\n\t\t\t\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\t\t\t\tCASE WHEN clicks_total > 0 THEN\n\t\t\t\t\t\t\t\t\t\t\t\t(revenue_total_affiliate/clicks_total)::float/1000.0\n\t\t\t\t\t\t\t\t\t\t\tELSE\n\t\t\t\t\t\t\t\t\t\t\t\t0.0\n\t\t\t\t\t\t\t\t\t\t\tEND AS ecpc,\n\t\t\t\t\t\t\t\t\t\t\tCASE WHEN clicks_total > 0 THEN\n\t\t\t\t\t\t\t\t\t\t\t\t(revenue_total/clicks_total)::float/1000.0\n\t\t\t\t\t\t\t\t\t\t\tELSE\n\t\t\t\t\t\t\t\t\t\t\t\t0.0\n\t\t\t\t\t\t\t\t\t\t\tEND AS ecpc_merchant,\n\n\t\t\t\t\t\t\t\t\t\t\tbanners_links_id\n\t\t\t\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t\t\t\tprecalculated_stats_banners_links\n\t\t\t\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\t\t\t\tstatus = 4\n\t\t\t\t\t\t\t\t\t) AS ecpc_per_banner_link\n\n\t\t\t\t\t\t\t\t\t\tON (banners_links.id = ecpc_per_banner_link.banners_links_id)\n\t\t\t\t\t\t\t\t) AS banners_links\n\n\t\t\t\t\t\t\t\t\t,\n\n\t\t\t\t\t\t\t\tbanners_org\n\n\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\tbanners_links.id = banners_org.id_banner\t\t\tAND\n\t\t\t\t\t\t\t\t(banners_links.id = -1 OR -1 = -1)\n\t\t\t\t\t\t) AS banners_links\n\n\t\t\t\t\t\t\tLEFT OUTER JOIN\n\n\t\t\t\t\t\t/* SUBQUERY users_banners_tot_sub */(\n\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\tMAX (users_banners_id) AS users_banners_id,\n\t\t\t\t\t\t\t\tmerchant_users_banners_id,\n\t\t\t\t\t\t\t\tbanner_id\n\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t/* SUBQUERY users_banners_rotations_sub 
*/(\n\t\t\t\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\t\t\t\taffiliate_id \t\tAS merchant_users_banners_id,\n\t\t\t\t\t\t\t\t\t\tusers_banners.id \tAS users_banners_id,\n\t\t\t\t\t\t\t\t\t\tusers_banners_rotation.banner_id\n\t\t\t\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\t\t\t\tusers_banners, users_banners_rotation\n\t\t\t\t\t\t\t\t\tWHERE\n\t\t\t\t\t\t\t\t\t\tusers_banners_rotation.users_banners_id = users_banners.id\tAND\n\t\t\t\t\t\t\t\t\t\tusers_banners.status = 3\n\t\t\t\t\t\t\t\t) AS users_banners_rotations_sub\n\t\t\t\t\t\t\tGROUP BY\n\t\t\t\t\t\t\t\tmerchant_users_banners_id,banner_id\n\t\t\t\t\t\t) AS users_banners_tot_sub\n\n\t\t\t\t\t\t\tON (\n\t\t\t\t\t\t\t\tbanners_links.id = users_banners_tot_sub.banner_id \tAND\n\t\t\t\t\t\t\t\tbanners_links.merchant_id = \nusers_banners_tot_sub.merchant_users_banners_id\n\t\t\t\t\t\t\t)\n\t\t\t\t\t) AS banners\n\n\t\t\t\t\t\tLEFT OUTER JOIN\n\n\t\t\t\t\t/* SUBQUERY special_deals */(\n\t\t\t\t\t\tSELECT\n\t\t\t\t\t\t\tbanner_deals.banner_id \tAS id,\n\t\t\t\t\t\t\tMAX(affiliate_id) \t\tAS special_deal\n\t\t\t\t\t\tFROM\n\t\t\t\t\t\t\tbanner_deals\n\t\t\t\t\t\tGROUP BY\n\t\t\t\t\t\t\tbanner_deals.banner_id\n\t\t\t\t\t) AS special_deals\n\n\t\t\t\t\t\tUSING (id)\n\n\t\t\t) AS banners_special_deals\n\n\t\t\t\tLEFT OUTER JOIN\n\n\t\t\t/* SUBQUERY types */ (\n\t\t\t\tSELECT\n\t\t\t\t\tbanner_types.id \t\t\t\tAS type_id,\n\t\t\t\t\tbanner_types.type \t\t\t\tAS type,\n\t\t\t\t\tbanners_banner_types.banner_id \tAS id\n\t\t\t\tFROM\n\t\t\t\t\tbanner_types,banners_banner_types\n\t\t\t\tWHERE\n\n\t\t\t\t\tbanners_banner_types.type_id = banner_types.id\n\t\t ) AS types\n\n\t\t\t\tUSING (id)\n\n\t\t) as grand_total\n\n\t\t\tLEFT OUTER JOIN\n\n\t\t/* SUBQUERY fetch_banners */ (\n\t\t\tSELECT\n\t\t\t\tbanners_links_id AS id,\n\t\t\t\turl_of_banner\n\t\t\tFROM\n\t\t\t\tfetch_banners\n\t\t) AS fetch_banners\n\n\t\t\tUSING (id)\n\t) AS grand_total_fetch_banners\n\n\t\tLEFT OUTER JOIN\n\n /* SUBQUERY active_banners */ (\n \tSELECT\n\t \tbanner_id AS id,\n\t \tperiod_end\n \tFROM\n \t\treward_ratings\n \tWHERE\n \t\tnow() BETWEEN period_start AND period_end\n\n ) AS active_banners\n\n \tUSING (id)\nWHERE\n\t(type_id = -1 OR -1 = -1 )\tAND\n\t(special_deal IS null)\n\nORDER BY\n\tid DESC\n\nFor this query, PG comes up with the following plan:\n\nSort (cost=3772.80..3772.81 rows=2 width=597) (actual \ntime=3203.143..3203.315 rows=436 loops=1)\n Sort Key: public.banners_links.id\n -> Nested Loop Left Join (cost=2345.33..3772.79 rows=2 width=597) \n(actual time=108.926..3201.931 rows=436 loops=1)\n -> Nested Loop Left Join (cost=2341.06..3742.03 rows=2 width=589) \n(actual time=108.902..3197.302 rows=436 loops=1)\n Join Filter: (public.banners_links.id = \necpc_per_banner_link.banners_links_id)\n -> Nested Loop (cost=1722.18..2763.47 rows=2 width=573) \n(actual time=68.228..78.611 rows=436 loops=1)\n -> Hash Left Join (cost=1722.18..2754.88 rows=2 \nwidth=194) (actual time=68.219..75.916 rows=436 loops=1)\n Hash Cond: (public.banners_links.id = \nusers_banners_tot_sub.banner_id)\n -> Nested Loop Left Join (cost=1227.70..2260.38 \nrows=2 width=186) (actual time=61.822..68.891 rows=436 loops=1)\n -> Hash Left Join (cost=1227.70..2259.73 \nrows=2 width=116) (actual time=61.811..67.321 rows=436 loops=1)\n Hash Cond: (public.banners_links.id = \nbanners_banner_types.banner_id)\n -> Hash Left Join \n(cost=103.40..946.54 rows=2 width=81) (actual time=6.135..7.009 rows=331 \nloops=1)\n Hash Cond: \n(public.banners_links.id = special_deals.id)\n Filter: \n(special_deals.special_deal IS NULL)\n 
-> Bitmap Heap Scan on \nbanners_links (cost=6.86..816.67 rows=336 width=73) (actual \ntime=0.111..0.496 rows=336 loops=1)\n Recheck Cond: (merchant_id \n= 217)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..6.77 rows=336 width=0) (actual \ntime=0.079..0.079 rows=336 loops=1)\n Index Cond: \n(merchant_id = 217)\n -> Hash (cost=86.93..86.93 \nrows=769 width=16) (actual time=6.012..6.012 rows=780 loops=1)\n -> Subquery Scan \nspecial_deals (cost=69.62..86.93 rows=769 width=16) (actual \ntime=4.240..5.451 rows=780 loops=1)\n -> HashAggregate \n(cost=69.62..79.24 rows=769 width=16) (actual time=4.239..4.748 rows=780 \nloops=1)\n -> Seq Scan \non banner_deals (cost=0.00..53.75 rows=3175 width=16) (actual \ntime=0.006..1.485 rows=3175 loops=1)\n -> Hash (cost=673.83..673.83 \nrows=21158 width=43) (actual time=55.659..55.659 rows=22112 loops=1)\n -> Hash Join \n(cost=2.45..673.83 rows=21158 width=43) (actual time=0.047..36.885 \nrows=22112 loops=1)\n Hash Cond: \n(banners_banner_types.type_id = banner_types.id)\n -> Seq Scan on \nbanners_banner_types (cost=0.00..376.40 rows=22240 width=16) (actual \ntime=0.005..10.653 rows=22240 loops=1)\n -> Hash (cost=2.20..2.20 \nrows=20 width=43) (actual time=0.034..0.034 rows=20 loops=1)\n -> Seq Scan on \nbanner_types (cost=0.00..2.20 rows=20 width=43) (actual time=0.003..0.016 \nrows=20 loops=1)\n -> Index Scan using \nfetch_banners_banners_links_id_idx on fetch_banners (cost=0.00..0.32 rows=1 \nwidth=78) (actual time=0.002..0.002 rows=0 loops=436)\n Index Cond: (public.banners_links.id = \npublic.fetch_banners.banners_links_id)\n -> Hash (cost=494.34..494.34 rows=11 width=24) \n(actual time=6.378..6.378 rows=336 loops=1)\n -> Subquery Scan users_banners_tot_sub \n(cost=494.09..494.34 rows=11 width=24) (actual time=5.588..6.124 rows=336 \nloops=1)\n -> HashAggregate \n(cost=494.09..494.23 rows=11 width=24) (actual time=5.586..5.810 rows=336 \nloops=1)\n -> Nested Loop \n(cost=360.46..494.01 rows=11 width=24) (actual time=2.876..5.232 rows=336 \nloops=1)\n -> Bitmap Heap Scan on \nusers_banners (cost=360.46..402.65 rows=11 width=16) (actual \ntime=2.863..3.133 rows=336 loops=1)\n Recheck Cond: \n((affiliate_id = 217) AND ((status)::text = '3'::text))\n -> BitmapAnd \n(cost=360.46..360.46 rows=11 width=0) (actual time=2.842..2.842 rows=0 \nloops=1)\n -> Bitmap \nIndex Scan on users_banners_affiliate_id_idx (cost=0.00..5.31 rows=138 \nwidth=0) (actual time=0.072..0.072 rows=350 loops=1)\n Index \nCond: (affiliate_id = 217)\n -> Bitmap \nIndex Scan on users_banners_status_idx (cost=0.00..354.90 rows=19016 \nwidth=0) (actual time=2.741..2.741 rows=17406 loops=1)\n Index \nCond: ((status)::text = '3'::text)\n -> Index Scan using \nusers_banners_id_idx on users_banners_rotation (cost=0.00..8.29 rows=1 \nwidth=16) (actual time=0.004..0.004 rows=1 loops=336)\n Index Cond: \n(users_banners_rotation.users_banners_id = users_banners.id)\n -> Index Scan using banners_org_id_banner.idx on \nbanners_org (cost=0.00..4.28 rows=1 width=387) (actual time=0.003..0.004 \nrows=1 loops=436)\n Index Cond: (public.banners_links.id = \nbanners_org.id_banner)\n -> Materialize (cost=618.88..698.81 rows=7993 width=20) \n(actual time=0.000..3.299 rows=7923 loops=436)\n -> Index Scan using pre_calc_banners_status on \nprecalculated_stats_banners_links (cost=0.00..530.96 rows=7993 width=30) \n(actual time=0.025..26.349 rows=7923 loops=1)\n Index Cond: (status = 4)\n -> Bitmap Heap Scan on reward_ratings (cost=4.27..15.33 rows=3 \nwidth=16) (actual time=0.005..0.005 
rows=0 loops=436)\n Recheck Cond: (public.banners_links.id = \nreward_ratings.banner_id)\n Filter: ((now() >= period_start) AND (now() <= period_end))\n -> Bitmap Index Scan on reward_ratings_banner_id_idx \n(cost=0.00..4.27 rows=3 width=0) (actual time=0.003..0.003 rows=0 loops=436)\n Index Cond: (public.banners_links.id = \nreward_ratings.banner_id)\nTotal runtime: 3204.016 ms\n\n\nFor the \"banners_links.id = ecpc_per_banner_link.banners_links_id\" join, it \nchooses the dreaded Nested loop left join again, which takes up the bulk of \nthe query execution time.\n\nAfter some fiddling and experimentation with the query, I found that if I \nonly removed both case statements in the \"ecpc_per_banner_link\" subquery, it \nbecomes fast again:\n\nSort (cost=2875.63..2875.64 rows=6 width=599) (actual time=107.824..109.456 \nrows=1780 loops=1)\n Sort Key: public.banners_links.id\n -> Nested Loop Left Join (cost=1726.45..2875.55 rows=6 width=599) \n(actual time=68.243..98.013 rows=1780 loops=1)\n -> Nested Loop Left Join (cost=1722.18..2783.30 rows=6 width=591) \n(actual time=68.220..84.351 rows=1780 loops=1)\n -> Nested Loop (cost=1722.18..2763.47 rows=2 width=573) \n(actual time=68.210..78.427 rows=436 loops=1)\n -> Hash Left Join (cost=1722.18..2754.88 rows=2 \nwidth=194) (actual time=68.196..75.592 rows=436 loops=1)\n Hash Cond: (public.banners_links.id = \nusers_banners_tot_sub.banner_id)\n -> Nested Loop Left Join (cost=1227.70..2260.38 \nrows=2 width=186) (actual time=61.870..68.654 rows=436 loops=1)\n -> Hash Left Join (cost=1227.70..2259.73 \nrows=2 width=116) (actual time=61.859..67.140 rows=436 loops=1)\n Hash Cond: (public.banners_links.id = \nbanners_banner_types.banner_id)\n -> Hash Left Join \n(cost=103.40..946.54 rows=2 width=81) (actual time=6.099..6.944 rows=331 \nloops=1)\n Hash Cond: \n(public.banners_links.id = special_deals.id)\n Filter: \n(special_deals.special_deal IS NULL)\n -> Bitmap Heap Scan on \nbanners_links (cost=6.86..816.67 rows=336 width=73) (actual \ntime=0.105..0.451 rows=336 loops=1)\n Recheck Cond: (merchant_id \n= 217)\n -> Bitmap Index Scan on \nbanners_links_merchant_id_idx (cost=0.00..6.77 rows=336 width=0) (actual \ntime=0.073..0.073 rows=336 loops=1)\n Index Cond: \n(merchant_id = 217)\n -> Hash (cost=86.93..86.93 \nrows=769 width=16) (actual time=5.989..5.989 rows=780 loops=1)\n -> Subquery Scan \nspecial_deals (cost=69.62..86.93 rows=769 width=16) (actual \ntime=4.225..5.445 rows=780 loops=1)\n -> HashAggregate \n(cost=69.62..79.24 rows=769 width=16) (actual time=4.223..4.742 rows=780 \nloops=1)\n -> Seq Scan \non banner_deals (cost=0.00..53.75 rows=3175 width=16) (actual \ntime=0.006..1.484 rows=3175 loops=1)\n -> Hash (cost=673.83..673.83 \nrows=21158 width=43) (actual time=55.750..55.750 rows=22112 loops=1)\n -> Hash Join \n(cost=2.45..673.83 rows=21158 width=43) (actual time=0.042..36.943 \nrows=22112 loops=1)\n Hash Cond: \n(banners_banner_types.type_id = banner_types.id)\n -> Seq Scan on \nbanners_banner_types (cost=0.00..376.40 rows=22240 width=16) (actual \ntime=0.005..10.791 rows=22240 loops=1)\n -> Hash (cost=2.20..2.20 \nrows=20 width=43) (actual time=0.032..0.032 rows=20 loops=1)\n -> Seq Scan on \nbanner_types (cost=0.00..2.20 rows=20 width=43) (actual time=0.004..0.016 \nrows=20 loops=1)\n -> Index Scan using \nfetch_banners_banners_links_id_idx on fetch_banners (cost=0.00..0.32 rows=1 \nwidth=78) (actual time=0.002..0.002 rows=0 loops=436)\n Index Cond: (public.banners_links.id = \npublic.fetch_banners.banners_links_id)\n -> Hash 
(cost=494.34..494.34 rows=11 width=24) \n(actual time=6.312..6.312 rows=336 loops=1)\n -> Subquery Scan users_banners_tot_sub \n(cost=494.09..494.34 rows=11 width=24) (actual time=5.519..6.056 rows=336 \nloops=1)\n -> HashAggregate \n(cost=494.09..494.23 rows=11 width=24) (actual time=5.518..5.747 rows=336 \nloops=1)\n -> Nested Loop \n(cost=360.46..494.01 rows=11 width=24) (actual time=2.814..5.166 rows=336 \nloops=1)\n -> Bitmap Heap Scan on \nusers_banners (cost=360.46..402.65 rows=11 width=16) (actual \ntime=2.801..3.079 rows=336 loops=1)\n Recheck Cond: \n((affiliate_id = 217) AND ((status)::text = '3'::text))\n -> BitmapAnd \n(cost=360.46..360.46 rows=11 width=0) (actual time=2.781..2.781 rows=0 \nloops=1)\n -> Bitmap \nIndex Scan on users_banners_affiliate_id_idx (cost=0.00..5.31 rows=138 \nwidth=0) (actual time=0.088..0.088 rows=350 loops=1)\n Index \nCond: (affiliate_id = 217)\n -> Bitmap \nIndex Scan on users_banners_status_idx (cost=0.00..354.90 rows=19016 \nwidth=0) (actual time=2.673..2.673 rows=17406 loops=1)\n Index \nCond: ((status)::text = '3'::text)\n -> Index Scan using \nusers_banners_id_idx on users_banners_rotation (cost=0.00..8.29 rows=1 \nwidth=16) (actual time=0.004..0.004 rows=1 loops=336)\n Index Cond: \n(users_banners_rotation.users_banners_id = users_banners.id)\n -> Index Scan using banners_org_id_banner.idx on \nbanners_org (cost=0.00..4.28 rows=1 width=387) (actual time=0.004..0.004 \nrows=1 loops=436)\n Index Cond: (public.banners_links.id = \nbanners_org.id_banner)\n -> Index Scan using pre_calc_banners_id on \nprecalculated_stats_banners_links (cost=0.00..9.87 rows=4 width=22) (actual \ntime=0.004..0.008 rows=4 loops=436)\n Index Cond: (public.banners_links.id = \nprecalculated_stats_banners_links.banners_links_id)\n -> Bitmap Heap Scan on reward_ratings (cost=4.27..15.33 rows=3 \nwidth=16) (actual time=0.004..0.004 rows=0 loops=1780)\n Recheck Cond: (public.banners_links.id = \nreward_ratings.banner_id)\n Filter: ((now() >= period_start) AND (now() <= period_end))\n -> Bitmap Index Scan on reward_ratings_banner_id_idx \n(cost=0.00..4.27 rows=3 width=0) (actual time=0.003..0.003 rows=1 \nloops=1780)\n Index Cond: (public.banners_links.id = \nreward_ratings.banner_id)\nTotal runtime: 111.046 ms\n\nI'm really having a hard time with this query. Every time when I change the \nsmallest of things, the query time dramatically jumps up or down.\n\nI understand that some calculations just take time, but it seems to me that \nits not the particular things I actually do that makes the difference, but \nthe fact that some actions or structure of the query 'just happen' to force \na bad plan while others just happen to force a good plan. With my limited \nknowledge I absolutely see no connection between what actions influence what \nand why.\n\nCould someone shed some light on this issue?\n\n_________________________________________________________________\nPlay online games with your friends with Messenger \nhttp://www.join.msn.com/messenger/overview\n\n",
"msg_date": "Sun, 29 Apr 2007 00:51:39 +0200",
"msg_from": "\"henk de wit\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Redundant sub query triggers slow nested loop left join"
}
] |
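When a query flips between a fast hash plan and a slow nested-loop plan like this, one purely diagnostic trick (not a production setting, and not something suggested in the thread) is to compare plans with the suspect join method disabled for the session:

SET enable_nestloop = off;
-- re-run EXPLAIN ANALYZE on the query being investigated here
RESET enable_nestloop;

If the hash plan wins by a wide margin, that confirms the row-count misestimate (rows=2 estimated versus 436 actually returned in the plans above) is what pushes the planner into the nested loop, rather than the CASE expressions themselves.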
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi!\n\nI'm currently experimenting with PostgreSQL 8.2.4 and table\npartitioning in order to improve the performance of an\napplication I'm working on.\n\nMy application is about managing measurement values (lots of!)\nI have one table \"t_mv\" which stores all the measurement values.\nA single measurement value has a timestamp and belongs\nto a single time series, so table \"t_mv\" looks like this:\n\nCREATE TABLE t_mv\n(\n zr integer NOT NULL, -- the time series id\n ts timestamp with time zone NOT NULL, -- the timestamp\n ... -- other attributes of a mv\n)\nWITHOUT OIDS;\n\nALTER TABLE t_mv\n ADD CONSTRAINT pk_mv_zr_ts PRIMARY KEY (zr, ts);\n\nEach time series defines several other attributes which are common\nto all measurement values of this time series (like sampling location,\nphysical parameter, aggregation, cardinality, type, visibility, etc.)\n\nThe application should be able to handle several thousand\ndifferent time series and hundreds of millions of measurement\nvalues, so table t_mv can get quite large.\n\nI have tested installations with up to 70 millions rows in t_mv\nand PostgreSQL can handle that with a quite good performance\neven on non high-end machines (operating system is Linux, btw)\n\nBut as I expect installations witch much more rows in t_mv, I\ntried to implement a \"partitioned tables\" concept using inheritance\nand CHECK constraints, just like it is described in the docs\n(e.g. chapter 5.9 in the current PostgreSQL 8.2.4 documentation)\n\nI split the t_mv table on the timestamp attribute to build\nchild tables which hold all measurement values for a single month.\nThat way I have several tables called \"t_mv_YYYYMM\" which all\ninherit from \"t_mv\". The number of child tables depends on the\ntime period the application has to store the measurement values\n(which can be several years so I'm expecting up to 100 child\ntables or even more).\nFor the application everything looks the same: inserts, updates\nand queries all are against the \"t_mv\" parent table, the application\nis not aware of the fact that this table is actually \"split\" into\nseveral child tables.\n\nThis is working fine and for some standard queries it actually\ngives some performance improvement compared to the standard\n\"everything in one big table\" concept. The performance improvement\nincreases with the number of rows in t_mv, for a small table (less\nthan 10 million rows or so) IMHO it is not really worth the effort\nor even counter-productive.\n\nBut I have some special queries where the performance with\npartitioned tables actually get much worse: those are queries where\nI'm working with \"open\" time intervals, i.e. where I want to\nget the previous and/or next timestamp from a given interval.\n\nA simple example: Get the timestamp of a measurement value for time\nseries 3622 which is right before the measurement value with time\nstamp '2007-04-22 00:00:00':\n\ntestdb_std=> select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n ts\n- ------------------------\n 2007-04-21 23:00:00+02\n(1 row)\n\n\nIm my application there are many queries like this. 
Such\nqueries also come in several variations, including quite\nsophisticated joins with lots of other tables \"above\" the\ntime series table.\n\nNote: as I'm working with (potentially) non-equidistant\ntime series I can not just calculate the timestamps, I\nhave to retrieve them from the database!\n\nIn the standard case, the query plan for the example query looks like this:\n\ntestdb_std=> explain analyze select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n QUERY PLAN\n- -----------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..1.70 rows=1 width=8) (actual time=0.233..0.235 rows=1 loops=1)\n -> Index Scan Backward using pk_mv_zr_ts on t_mv (cost=0.00..21068.91 rows=12399 width=8) (actual time=0.221..0.221 rows=1 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n Total runtime: 0.266 ms\n(4 rows)\n\n\nIf I switch to partitioned tables, the query retrieves the same result (of course):\n\ntestdb_std=> \\c testdb_part\nYou are now connected to database \"testdb_part\".\ntestdb_part=> select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n ts\n- ------------------------\n 2007-04-21 23:00:00+02\n(1 row)\n\n\nBut the query plan becomes:\n\ntestdb_part=> explain analyze select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n QUERY PLAN\n- ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=23985.83..23985.83 rows=1 width=8) (actual time=230.100..230.102 rows=1 loops=1)\n -> Sort (cost=23985.83..24019.84 rows=13605 width=8) (actual time=230.095..230.095 rows=1 loops=1)\n Sort Key: mwdb.t_mv.ts\n -> Result (cost=0.00..23051.72 rows=13605 width=8) (actual time=0.154..177.519 rows=15810 loops=1)\n -> Append (cost=0.00..23051.72 rows=13605 width=8) (actual time=0.149..114.186 rows=15810 loops=1)\n -> Index Scan using pk_mv_zr_ts on t_mv (cost=0.00..8.27 rows=1 width=8) (actual time=0.047..0.047 rows=0 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200507 on t_mv_200507 t_mv (cost=0.00..2417.53 rows=1519 width=8) (actual time=0.095..2.419 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200508 on t_mv_200508 t_mv (cost=0.00..918.81 rows=539 width=8) (actual time=0.081..2.134 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200509 on t_mv_200509 t_mv (cost=0.00..941.88 rows=555 width=8) (actual time=0.061..2.051 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200510 on t_mv_200510 t_mv (cost=0.00..915.29 rows=538 width=8) (actual time=0.064..2.113 rows=715 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200511 on t_mv_200511 t_mv (cost=0.00..925.93 rows=545 
width=8) (actual time=0.048..2.986 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200512 on t_mv_200512 t_mv (cost=0.00..936.53 rows=550 width=8) (actual time=0.049..2.212 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200601 on t_mv_200601 t_mv (cost=0.00..981.42 rows=579 width=8) (actual time=0.065..3.029 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200602 on t_mv_200602 t_mv (cost=0.00..856.25 rows=502 width=8) (actual time=0.045..2.866 rows=672 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200603 on t_mv_200603 t_mv (cost=0.00..977.84 rows=575 width=8) (actual time=0.052..3.044 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200604 on t_mv_200604 t_mv (cost=0.00..906.40 rows=531 width=8) (actual time=0.053..1.976 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200605 on t_mv_200605 t_mv (cost=0.00..938.28 rows=550 width=8) (actual time=0.050..2.357 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200606 on t_mv_200606 t_mv (cost=0.00..922.35 rows=541 width=8) (actual time=0.054..2.063 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200607 on t_mv_200607 t_mv (cost=0.00..2112.64 rows=1315 width=8) (actual time=0.047..2.226 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200608 on t_mv_200608 t_mv (cost=0.00..990.23 rows=582 width=8) (actual time=0.048..2.094 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200609 on t_mv_200609 t_mv (cost=0.00..902.84 rows=528 width=8) (actual time=0.039..2.252 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200610 on t_mv_200610 t_mv (cost=0.00..964.87 rows=567 width=8) (actual time=0.033..2.118 rows=745 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200611 on t_mv_200611 t_mv (cost=0.00..947.17 rows=557 width=8) (actual time=0.060..2.160 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200612 on t_mv_200612 t_mv (cost=0.00..929.43 rows=545 width=8) (actual time=0.039..2.051 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND 
((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200701 on t_mv_200701 t_mv (cost=0.00..940.05 rows=551 width=8) (actual time=0.036..2.217 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200702 on t_mv_200702 t_mv (cost=0.00..847.38 rows=496 width=8) (actual time=0.035..1.830 rows=672 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200703 on t_mv_200703 t_mv (cost=0.00..956.00 rows=561 width=8) (actual time=0.062..2.326 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200704 on t_mv_200704 t_mv (cost=0.00..814.38 rows=378 width=8) (actual time=0.050..1.406 rows=504 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n Total runtime: 231.730 ms\n(52 rows)\n\nOops!\nCompare the costs or the actual query time between those queries!\n(Note: I set \"constraint_exclusion = on\", of course!)\n\nAs such queries are used all over the application, this nullifies\nany performance improvements for standard queries and in fact makes\nthe overall application performance as \"feeled\" by the user _much_\nworse.\n\nI also tried it with \"min()\" and \"max()\" aggregate functions\ninstead of the \"limit 1\" query, but this does not change much:\n\n\nStandard \"big\" table:\n\ntestdb_std=> select max(ts) from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' ;\n max\n- ------------------------\n 2007-04-21 23:00:00+02\n(1 row)\n\n\ntestdb_std=> explain analyze select max(ts) from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' ;\n QUERY PLAN\n- -------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=1.70..1.71 rows=1 width=0) (actual time=0.071..0.073 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..1.70 rows=1 width=8) (actual time=0.060..0.062 rows=1 loops=1)\n -> Index Scan Backward using pk_mv_zr_ts on t_mv (cost=0.00..21068.91 rows=12399 width=8) (actual time=0.056..0.056 rows=1 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n Filter: ((ts)::timestamp with time zone IS NOT NULL)\n Total runtime: 0.221 ms\n(7 rows)\n\n\n\"Partitioned table\":\n\ntestdb_part=> select max(ts) from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' ;\n max\n- ------------------------\n 2007-04-21 23:00:00+02\n(1 row)\n\ntestdb_part=> explain analyze select max(ts) from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' ;\n QUERY PLAN\n- ----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=23085.73..23085.74 rows=1 width=8) (actual time=390.094..390.096 rows=1 loops=1)\n -> Append (cost=0.00..23051.72 rows=13605 width=8) (actual time=0.241..290.934 rows=15810 loops=1)\n -> Index Scan using pk_mv_zr_ts on t_mv (cost=0.00..8.27 rows=1 width=8) (actual time=0.038..0.038 rows=0 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 
00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200507 on t_mv_200507 t_mv (cost=0.00..2417.53 rows=1519 width=8) (actual time=0.197..12.598 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200508 on t_mv_200508 t_mv (cost=0.00..918.81 rows=539 width=8) (actual time=0.095..5.947 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200509 on t_mv_200509 t_mv (cost=0.00..941.88 rows=555 width=8) (actual time=0.118..2.247 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200510 on t_mv_200510 t_mv (cost=0.00..915.29 rows=538 width=8) (actual time=0.121..6.219 rows=715 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200511 on t_mv_200511 t_mv (cost=0.00..925.93 rows=545 width=8) (actual time=2.287..9.991 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200512 on t_mv_200512 t_mv (cost=0.00..936.53 rows=550 width=8) (actual time=0.110..2.285 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200601 on t_mv_200601 t_mv (cost=0.00..981.42 rows=579 width=8) (actual time=0.209..4.682 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200602 on t_mv_200602 t_mv (cost=0.00..856.25 rows=502 width=8) (actual time=0.079..6.079 rows=672 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200603 on t_mv_200603 t_mv (cost=0.00..977.84 rows=575 width=8) (actual time=0.091..4.793 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200604 on t_mv_200604 t_mv (cost=0.00..906.40 rows=531 width=8) (actual time=0.108..7.637 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200605 on t_mv_200605 t_mv (cost=0.00..938.28 rows=550 width=8) (actual time=0.116..4.772 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200606 on t_mv_200606 t_mv (cost=0.00..922.35 rows=541 width=8) (actual time=0.074..6.071 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200607 on t_mv_200607 t_mv (cost=0.00..2112.64 rows=1315 width=8) (actual time=0.082..4.807 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200608 on t_mv_200608 t_mv 
(cost=0.00..990.23 rows=582 width=8) (actual time=2.283..8.671 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200609 on t_mv_200609 t_mv (cost=0.00..902.84 rows=528 width=8) (actual time=0.107..6.067 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200610 on t_mv_200610 t_mv (cost=0.00..964.87 rows=567 width=8) (actual time=0.074..3.933 rows=745 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200611 on t_mv_200611 t_mv (cost=0.00..947.17 rows=557 width=8) (actual time=0.091..6.291 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200612 on t_mv_200612 t_mv (cost=0.00..929.43 rows=545 width=8) (actual time=0.077..4.101 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200701 on t_mv_200701 t_mv (cost=0.00..940.05 rows=551 width=8) (actual time=0.077..2.558 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200702 on t_mv_200702 t_mv (cost=0.00..847.38 rows=496 width=8) (actual time=0.073..4.346 rows=672 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200703 on t_mv_200703 t_mv (cost=0.00..956.00 rows=561 width=8) (actual time=2.532..7.206 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n -> Index Scan using pk_mv_200704 on t_mv_200704 t_mv (cost=0.00..814.38 rows=378 width=8) (actual time=0.120..4.163 rows=504 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n Total runtime: 394.384 ms\n(49 rows)\n\n\nNow my question is: Does the query planner in the case of partitioned tables\nreally have to scan all indexes in order to get the next timestamp smaller\n(or larger) than a given one?\n\nThere are check conditions on all table partitions like this:\n\nFor table t_mv_200704:\nCHECK (ts::timestamp with time zone >= '2007-04-01 00:00:00+02'::timestamp with time zone\n AND ts::timestamp with time zone < '2007-05-01 00:00:00+02'::timestamp with time zone)\n\nFor table t_mv_200703:\nCHECK (ts::timestamp with time zone >= '2007-03-01 00:00:00+01'::timestamp with time zone\n AND ts::timestamp with time zone < '2007-04-01 00:00:00+02'::timestamp with time zone)\n\nand so on...\n\nSo the tables are in a well defined, monotonic sort order regarding the timestamp.\n\nThis means that if there is a max(ts) for ts < '2007-04-22 00:00:00'\nalready in table t_mv_200704, it makes no sense to look further in\nother tables where the timestamps can only be smaller than the\ntimestamp already found. 
Am I correct?\n\nIs there room for improvements of the query planner for queries\nlike this or is this a special case which will never get handled\nanyway?\n\nOr would you suggest a completely different table structure\nor perhaps some other query?\n\nI'm open for any suggestion!\n\n- - andreas\n\n- --\nAndreas Haumer | mailto:[email protected]\n*x Software + Systeme | http://www.xss.co.at/\nKarmarschgasse 51/2/20 | Tel: +43-1-6060114-0\nA-1100 Vienna, Austria | Fax: +43-1-6060114-71\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGNdZ2xJmyeGcXPhERAsbfAJ9nA+z50uXiV4SHntt1Y9IuZ/rzWwCff8ar\nxKSMfzwgjx9kQipeDoEnXWE=\n=57aJ\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 30 Apr 2007 13:43:52 +0200",
"msg_from": "Andreas Haumer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance problems with partitioned tables"
},
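For reference, the partitioning pattern Andreas describes boils down to roughly the following per-month DDL (a sketch reconstructed from the message; his real scripts are not shown, and the rule or trigger that routes inserts into the right child table is omitted):

CREATE TABLE mwdb.t_mv_200704 (
    CHECK (ts >= '2007-04-01 00:00:00+02' AND ts < '2007-05-01 00:00:00+02')
) INHERITS (mwdb.t_mv);

ALTER TABLE mwdb.t_mv_200704
    ADD CONSTRAINT pk_mv_200704 PRIMARY KEY (zr, ts);

-- required so the planner can skip children whose CHECK contradicts the WHERE clause
SET constraint_exclusion = on;

Until the planner learns to stop scanning partitions early for this kind of LIMIT 1 / min()/max() lookup, one pragmatic workaround is to probe the most likely child table directly and fall back to the parent table only when that probe returns no row.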
{
"msg_contents": "Andreas Haumer <andreas 'at' xss.co.at> writes:\n\n[...]\n\n> testdb_part=> explain analyze select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=23985.83..23985.83 rows=1 width=8) (actual time=230.100..230.102 rows=1 loops=1)\n> -> Sort (cost=23985.83..24019.84 rows=13605 width=8) (actual time=230.095..230.095 rows=1 loops=1)\n> Sort Key: mwdb.t_mv.ts\n> -> Result (cost=0.00..23051.72 rows=13605 width=8) (actual time=0.154..177.519 rows=15810 loops=1)\n> -> Append (cost=0.00..23051.72 rows=13605 width=8) (actual time=0.149..114.186 rows=15810 loops=1)\n> -> Index Scan using pk_mv_zr_ts on t_mv (cost=0.00..8.27 rows=1 width=8) (actual time=0.047..0.047 rows=0 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200507 on t_mv_200507 t_mv (cost=0.00..2417.53 rows=1519 width=8) (actual time=0.095..2.419 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n\n[...]\n\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200704 on t_mv_200704 t_mv (cost=0.00..814.38 rows=378 width=8) (actual time=0.050..1.406 rows=504 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> Total runtime: 231.730 ms\n> (52 rows)\n> \n> Oops!\n> Compare the costs or the actual query time between those queries!\n\nWell, I'd say that scanning all partitions until the partition\ncontaining april 2007, when one of the query parameter is having\ntimestamp before april 2007 but without an initial timestamp\nlimit, looks normal :)\n\n\n[...]\n\n> Now my question is: Does the query planner in the case of partitioned tables\n> really have to scan all indexes in order to get the next timestamp smaller\n> (or larger) than a given one?\n\nWell, how can the planner know inside which partition the wanted\nrow is? There might be no data, say, inside a couple of\npartitions in the past before finding the wanted row, in which\ncase 3 partitions in the past must be scanned.\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "30 Apr 2007 15:05:06 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi!\n\nGuillaume Cottenceau schrieb:\n> Andreas Haumer <andreas 'at' xss.co.at> writes:\n[...]\n> \n>> Now my question is: Does the query planner in the case of partitioned tables\n>> really have to scan all indexes in order to get the next timestamp smaller\n>> (or larger) than a given one?\n> \n> Well, how can the planner know inside which partition the wanted\n> row is? There might be no data, say, inside a couple of\n> partitions in the past before finding the wanted row, in which\n> case 3 partitions in the past must be scanned.\n> \n\nI think the planner could do the following:\n\na) It could make a better decision in which direction to scan\n the partitions (depending on sort order involved in the query)\n\nb) It could stop scanning as soon as there can not be any further\n resulting row according to the CHECK constraints given on the tables.\n\n\nCurrently it doesn't do this.\n\nLook at this example:\n\ntestdb_part=> select ts from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00' order by ts asc limit 1;\n ts\n- ------------------------\n 2006-01-01 01:00:00+01\n(1 row)\n\ntestdb_part=> explain analyze select ts from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00' order by ts asc limit 1;\n QUERY PLAN\n- --------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=15843.41..15843.41 rows=1 width=8) (actual time=152.476..152.478 rows=1 loops=1)\n -> Sort (cost=15843.41..15865.39 rows=8795 width=8) (actual time=152.472..152.472 rows=1 loops=1)\n Sort Key: mwdb.t_mv.ts\n -> Result (cost=0.00..15267.23 rows=8795 width=8) (actual time=0.102..122.540 rows=11629 loops=1)\n -> Append (cost=0.00..15267.23 rows=8795 width=8) (actual time=0.098..76.140 rows=11629 loops=1)\n -> Index Scan using pk_mv_zr_ts on t_mv (cost=0.00..8.27 rows=1 width=8) (actual time=0.022..0.022 rows=0 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200601 on t_mv_200601 t_mv (cost=0.00..986.73 rows=582 width=8) (actual time=0.070..2.136 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200602 on t_mv_200602 t_mv (cost=0.00..847.40 rows=497 width=8) (actual time=0.066..2.063 rows=672 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200603 on t_mv_200603 t_mv (cost=0.00..961.33 rows=565 width=8) (actual time=0.063..2.115 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200604 on t_mv_200604 t_mv (cost=0.00..901.09 rows=528 width=8) (actual time=0.156..2.200 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200605 on t_mv_200605 t_mv (cost=0.00..945.38 rows=555 width=8) (actual time=0.052..2.088 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200606 on t_mv_200606 t_mv (cost=0.00..995.58 rows=587 width=8) 
(actual time=0.054..1.869 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200607 on t_mv_200607 t_mv (cost=0.00..983.15 rows=578 width=8) (actual time=0.045..1.989 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200608 on t_mv_200608 t_mv (cost=0.00..976.05 rows=573 width=8) (actual time=0.048..1.877 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200609 on t_mv_200609 t_mv (cost=0.00..902.86 rows=529 width=8) (actual time=0.054..2.225 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200610 on t_mv_200610 t_mv (cost=0.00..934.74 rows=548 width=8) (actual time=0.034..2.671 rows=745 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200611 on t_mv_200611 t_mv (cost=0.00..913.50 rows=536 width=8) (actual time=0.053..2.302 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200612 on t_mv_200612 t_mv (cost=0.00..983.15 rows=578 width=8) (actual time=0.059..2.449 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200701 on t_mv_200701 t_mv (cost=0.00..929.43 rows=545 width=8) (actual time=0.034..2.035 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200702 on t_mv_200702 t_mv (cost=0.00..863.33 rows=506 width=8) (actual time=0.034..1.675 rows=672 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200703 on t_mv_200703 t_mv (cost=0.00..925.87 rows=542 width=8) (actual time=0.055..2.036 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200704 on t_mv_200704 t_mv (cost=0.00..1209.39 rows=545 width=8) (actual time=0.061..2.296 rows=711 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n Total runtime: 153.195 ms\n(40 rows)\n\n\nTable t_mv_200601 gets scanned first, which is fine.\n\nThis already gives a row matching the given WHERE clause.\nIt makes no sense to scan the other tables, as the query\nasks for one row only and all the other tables have timestamps\nlarger than all the timestamps in table t_mv_200601 (according\nto the CHECK constraints for the partion tables)\n\nThe same would be true with the following query using an aggregate function\n(perhaps this is a better example for my reasoning):\n\ntestdb_part=> select min(ts) from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00';\n min\n- ------------------------\n 2006-01-01 01:00:00+01\n(1 row)\n\ntestdb_part=> explain analyze select 
min(ts) from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00';\n QUERY PLAN\n- --------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=15289.22..15289.23 rows=1 width=8) (actual time=106.735..106.737 rows=1 loops=1)\n -> Append (cost=0.00..15267.23 rows=8795 width=8) (actual time=0.184..78.174 rows=11629 loops=1)\n -> Index Scan using pk_mv_zr_ts on t_mv (cost=0.00..8.27 rows=1 width=8) (actual time=0.035..0.035 rows=0 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200601 on t_mv_200601 t_mv (cost=0.00..986.73 rows=582 width=8) (actual time=0.143..2.207 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200602 on t_mv_200602 t_mv (cost=0.00..847.40 rows=497 width=8) (actual time=0.020..1.709 rows=672 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200603 on t_mv_200603 t_mv (cost=0.00..961.33 rows=565 width=8) (actual time=0.033..2.076 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200604 on t_mv_200604 t_mv (cost=0.00..901.09 rows=528 width=8) (actual time=0.027..2.039 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200605 on t_mv_200605 t_mv (cost=0.00..945.38 rows=555 width=8) (actual time=0.031..2.109 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200606 on t_mv_200606 t_mv (cost=0.00..995.58 rows=587 width=8) (actual time=0.023..2.001 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200607 on t_mv_200607 t_mv (cost=0.00..983.15 rows=578 width=8) (actual time=0.027..2.064 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200608 on t_mv_200608 t_mv (cost=0.00..976.05 rows=573 width=8) (actual time=0.030..1.932 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200609 on t_mv_200609 t_mv (cost=0.00..902.86 rows=529 width=8) (actual time=0.021..2.408 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200610 on t_mv_200610 t_mv (cost=0.00..934.74 rows=548 width=8) (actual time=0.014..2.046 rows=745 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200611 on t_mv_200611 t_mv (cost=0.00..913.50 rows=536 width=8) (actual time=0.024..1.846 rows=720 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 
00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200612 on t_mv_200612 t_mv (cost=0.00..983.15 rows=578 width=8) (actual time=0.019..2.556 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200701 on t_mv_200701 t_mv (cost=0.00..929.43 rows=545 width=8) (actual time=0.022..2.188 rows=744 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200702 on t_mv_200702 t_mv (cost=0.00..863.33 rows=506 width=8) (actual time=0.023..2.311 rows=672 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200703 on t_mv_200703 t_mv (cost=0.00..925.87 rows=542 width=8) (actual time=0.027..1.977 rows=743 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200704 on t_mv_200704 t_mv (cost=0.00..1209.39 rows=545 width=8) (actual time=0.022..2.084 rows=711 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n Total runtime: 107.152 ms\n(37 rows)\n\n\nAs soon as the query found a \"min(ts)\" which is larger than\ntimestamp \"2006-01-01 00:00:00\" (the WHERE clause) in table\nt_mv_200601, it can stop scanning, as there *can not* be any\ntimestamp smaller than the one already found in the other\ntables (again, according to the CHECK constraints)!\n\nPerhaps the logic to implement this is complex, but IMHO\nit _should_ be doable (and proofable), shouldn't it?\n\nIn fact, the query planner already does partly select the tables\nto scan in an intelligent way, because it does not scan the tables\nwith timestamps smaller than \"2006-01-01 00:00:00\", but IMHO it\nstill scans too much tables.\n\nComments?\n\n- - andreas\n\n- --\nAndreas Haumer | mailto:[email protected]\n*x Software + Systeme | http://www.xss.co.at/\nKarmarschgasse 51/2/20 | Tel: +43-1-6060114-0\nA-1100 Vienna, Austria | Fax: +43-1-6060114-71\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGNe83xJmyeGcXPhERAk2fAJ98aqfKl7pQtac4HvSRr9GYbktadgCfU76J\nZmMj1A3UFejvS+2JrstrTaA=\n=Myeo\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 30 Apr 2007 15:29:30 +0200",
"msg_from": "Andreas Haumer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
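For reference, the CHECK-constraint argument above relies on monthly child tables of mwdb.t_mv. A minimal sketch of one such partition, with the bounds assumed from the constraint definitions quoted later in the thread (columns are inherited from the parent), is:

CREATE TABLE mwdb.t_mv_200601 (
    CHECK (ts >= '2006-01-01 00:00:00+01' AND ts < '2006-02-01 00:00:00+01')
) INHERITS (mwdb.t_mv);

ALTER TABLE mwdb.t_mv_200601
    ADD CONSTRAINT pk_mv_200601 PRIMARY KEY (zr, ts);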
{
"msg_contents": "Andreas Haumer wrote:\n> \n> I think the planner could do the following:\n> \n> a) It could make a better decision in which direction to scan\n> the partitions (depending on sort order involved in the query)\n> \n> b) It could stop scanning as soon as there can not be any further\n> resulting row according to the CHECK constraints given on the tables.\n[snip]\n> Perhaps the logic to implement this is complex, but IMHO\n> it _should_ be doable (and proofable), shouldn't it?\n\nAh, it might be do-able for some subset of cases, but is it \ncost-effective to check for in *all* cases? Don't forget the constraints \nand where clauses can be arbitrarily complex.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 30 Apr 2007 14:45:05 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "Andreas Haumer <andreas 'at' xss.co.at> writes:\n\n> > Well, how can the planner know inside which partition the wanted\n> > row is? There might be no data, say, inside a couple of\n> > partitions in the past before finding the wanted row, in which\n> > case 3 partitions in the past must be scanned.\n> > \n> \n> I think the planner could do the following:\n> \n> a) It could make a better decision in which direction to scan\n> the partitions (depending on sort order involved in the query)\n> \n> b) It could stop scanning as soon as there can not be any further\n> resulting row according to the CHECK constraints given on the tables.\n\nAbout these precise points, I'll let a pg guru give an answer.\n\n> Look at this example:\n> \n> testdb_part=> select ts from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00' order by ts asc limit 1;\n> ts\n> ------------------------\n> 2006-01-01 01:00:00+01\n> (1 row)\n> \n> testdb_part=> explain analyze select ts from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00' order by ts asc limit 1;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=15843.41..15843.41 rows=1 width=8) (actual time=152.476..152.478 rows=1 loops=1)\n> -> Sort (cost=15843.41..15865.39 rows=8795 width=8) (actual time=152.472..152.472 rows=1 loops=1)\n> Sort Key: mwdb.t_mv.ts\n> -> Result (cost=0.00..15267.23 rows=8795 width=8) (actual time=0.102..122.540 rows=11629 loops=1)\n> -> Append (cost=0.00..15267.23 rows=8795 width=8) (actual time=0.098..76.140 rows=11629 loops=1)\n> -> Index Scan using pk_mv_zr_ts on t_mv (cost=0.00..8.27 rows=1 width=8) (actual time=0.022..0.022 rows=0 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n\n[...]\n\n> -> Index Scan using pk_mv_200704 on t_mv_200704 t_mv (cost=0.00..1209.39 rows=545 width=8) (actual time=0.061..2.296 rows=711 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2006-01-01 00:00:00+01'::timestamp with time zone))\n> Total runtime: 153.195 ms\n> (40 rows)\n> \n> \n> Table t_mv_200601 gets scanned first, which is fine.\n> \n> This already gives a row matching the given WHERE clause.\n> It makes no sense to scan the other tables, as the query\n> asks for one row only and all the other tables have timestamps\n> larger than all the timestamps in table t_mv_200601 (according\n> to the CHECK constraints for the partion tables)\n\nI think this is the last claimed point which is incorrect. Pg has\nno general guarantee the partitions actually create a disjoint\nset, even with the CHECK constraints. Pg can only optimize by\navoiding scanning the partitions inside which no satisfactory\ndata could be found by the CHECK constraint, but I think it's not\npossible (too complicated) to infer that any found row in your\nother partitions would not be in the final resultset because of\n1. the query's resultset order 2. the limit 3. the actual\nconditions in the CHECK constraints (there is no direct way to\nsee that timestamps in your 200704 partition are greater than\ntimsteamp in your 200601 partition).\n\nI guess some sort of pg guru would be needed here to clarify\nthings in a smart way, unlike me :)\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "30 Apr 2007 15:54:39 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "Just cast the value in the WHERE clause:\n\nselect ts from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00'\n::TIMESTAMP order by ts asc limit 1;\n\nThis search only into the right partitioned tables if you build the rules\nbased in the ts field.\n\n\n----\nNeil Peter Braggio\[email protected]\n\n\nOn 4/30/07, Richard Huxton <[email protected]> wrote:\n>\n> Andreas Haumer wrote:\n> >\n> > I think the planner could do the following:\n> >\n> > a) It could make a better decision in which direction to scan\n> > the partitions (depending on sort order involved in the query)\n> >\n> > b) It could stop scanning as soon as there can not be any further\n> > resulting row according to the CHECK constraints given on the tables.\n> [snip]\n> > Perhaps the logic to implement this is complex, but IMHO\n> > it _should_ be doable (and proofable), shouldn't it?\n>\n> Ah, it might be do-able for some subset of cases, but is it\n> cost-effective to check for in *all* cases? Don't forget the constraints\n> and where clauses can be arbitrarily complex.\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nJust cast the value in the WHERE clause:\n\nselect ts from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00' ::TIMESTAMP order by ts asc limit 1;\n\nThis search only into the right partitioned tables if you build the rules based in the ts field.\n\n\n----\nNeil Peter Braggio\[email protected]\nOn 4/30/07, Richard Huxton <[email protected]> wrote:\nAndreas Haumer wrote:>> I think the planner could do the following:>> a) It could make a better decision in which direction to scan> the partitions (depending on sort order involved in the query)\n>> b) It could stop scanning as soon as there can not be any further> resulting row according to the CHECK constraints given on the tables.[snip]> Perhaps the logic to implement this is complex, but IMHO\n> it _should_ be doable (and proofable), shouldn't it?Ah, it might be do-able for some subset of cases, but is itcost-effective to check for in *all* cases? Don't forget the constraintsand where clauses can be arbitrarily complex.\n-- Richard Huxton Archonet Ltd---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to \[email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Mon, 30 Apr 2007 10:06:07 -0400",
"msg_from": "\"Neil Peter Braggio\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "Andreas Haumer <[email protected]> writes:\n> A simple example: Get the timestamp of a measurement value for time\n> series 3622 which is right before the measurement value with time\n> stamp '2007-04-22 00:00:00':\n\n> testdb_std=> select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n\nAs already pointed out, this is only going to be able to exclude\npartitions that are strictly after the limit-time, since you have no\nWHERE clause that excludes anything before. Can you set a reasonable\nupper bound on the maximum inter-measurement time? If so, you could\nquery something like this:\n\n select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00'\n and ts > '2007-04-21 00:00:00'\n order by ts desc limit 1;\n\nIf you don't have a hard limit, but do have some smarts on the client\nside, you could try successive queries like this with larger and larger\nwindows until you get an answer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Apr 2007 11:06:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables "
},
{
"msg_contents": "\"Guillaume Cottenceau\" <[email protected]> writes:\n\n> I think this is the last claimed point which is incorrect. Pg has\n> no general guarantee the partitions actually create a disjoint\n> set, even with the CHECK constraints. Pg can only optimize by\n> avoiding scanning the partitions inside which no satisfactory\n> data could be found by the CHECK constraint, but I think it's not\n> possible (too complicated) to infer that any found row in your\n> other partitions would not be in the final resultset because of\n> 1. the query's resultset order 2. the limit 3. the actual\n> conditions in the CHECK constraints (there is no direct way to\n> see that timestamps in your 200704 partition are greater than\n> timsteamp in your 200601 partition).\n\nI think the answer is that yes there are a number of query transformations\nthat could be done for partitioned tables that we're not doing currently.\n\nGenerally speaking we need to look at each type of plan node and figure out\nwhether it can usefully be pushed down below the Append node and how we can\ndetermine when that can be done. \n\nSo if each arm of the Append was already in order we could easily push the\nLIMIT inside the Append (and duplicating the work above the Append). But that\nwouldn't save us any work because we only generate rows from the partitions as\nthey're needed anyways.\n\nIn your example we could save work because the Sort needs all the rows before\nit starts. So we could sort each arm and apply the limit before passing it up\nto the outer Sort. That might save work in this case.\n\nBut figuring out when that's less work than just sorting all of them is\ntricky. If the LIMIT is 1,000 and you have ten arms of 1,000 tuples each then\nyou aren't going to save any work doing this. But if the LIMIT is 1 and you\nhave few arms with many tuples then you could save a lot of work.\n\nActually I happen to have been reading up on algorithms related to this this\nweekend. It's possible to implement a LimitUnsorted in linear-time which would\nfetch the first n records according to some sort key without actually sorting\nthe records. That might make it more worthwhile.\n\nIn short. Yes, there are a lot of optimizations possible around partitioned\ntables that we don't do either because it's not clear how to tell when they're\nworthwhile or because the code just isn't there yet.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 30 Apr 2007 16:35:55 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
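The per-arm transformation Gregory describes (sort each arm, apply the limit, then combine) can already be written by hand. A sketch using the partition names from this thread, with most arms elided:

SELECT ts FROM (
        (SELECT ts FROM mwdb.t_mv_200601
          WHERE zr = 3622 AND ts > '2006-01-01 00:00:00'
          ORDER BY ts LIMIT 1)
    UNION ALL
        (SELECT ts FROM mwdb.t_mv_200602
          WHERE zr = 3622 AND ts > '2006-01-01 00:00:00'
          ORDER BY ts LIMIT 1)
    -- ... one arm per remaining partition ...
) AS per_partition_minima
ORDER BY ts
LIMIT 1;

Each arm can satisfy its ORDER BY/LIMIT from the (zr, ts) index, so the outer sort only ever sees one candidate row per partition.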
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi!\n\nNeil Peter Braggio schrieb:\n> Just cast the value in the WHERE clause:\n> \n> select ts from mwdb.t_mv where zr=3622 and ts > '2006-01-01 00:00:00'\n> ::TIMESTAMP order by ts asc limit 1;\n> \n> This search only into the right partitioned tables if you build the\n> rules based in the ts field.\n> \n\nThis doesn't help.\n\nA cast is not needed in this case, as the following query\nshows, where the query planner already is able to reduce\nthe scan to the right tables:\n\ntestdb_part=> select ts from mwdb.t_mv where zr=3622 and ts > '2005-12-31 22:00:00' and ts < '2006-01-01 02:00:00';\n ts\n- ------------------------\n 2005-12-31 23:00:00+01\n 2006-01-01 00:00:00+01\n 2006-01-01 01:00:00+01\n(3 rows)\n\n\ntestdb_part=> explain analyze select ts from mwdb.t_mv where zr=3622 and ts > '2005-12-31 22:00:00' and ts < '2006-01-01 02:00:00';\n QUERY PLAN\n- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..26.64 rows=4 width=8) (actual time=0.040..0.088 rows=3 loops=1)\n -> Append (cost=0.00..26.64 rows=4 width=8) (actual time=0.035..0.071 rows=3 loops=1)\n -> Index Scan using i_mv_ts on t_mv (cost=0.00..8.27 rows=1 width=8) (actual time=0.010..0.010 rows=0 loops=1)\n Index Cond: (((ts)::timestamp with time zone > '2005-12-31 22:00:00+01'::timestamp with time zone) AND ((ts)::timestamp with time zone < '2006-01-01 02:00:00+01'::timestamp with time zone))\n Filter: ((zr)::integer = 3622)\n -> Index Scan using pk_mv_200512 on t_mv_200512 t_mv (cost=0.00..8.30 rows=1 width=8) (actual time=0.019..0.022 rows=1 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2005-12-31 22:00:00+01'::timestamp with time zone) AND ((ts)::timestamp with time zone < '2006-01-01 02:00:00+01'::timestamp with time zone))\n -> Index Scan using pk_mv_200601 on t_mv_200601 t_mv (cost=0.00..10.07 rows=2 width=8) (actual time=0.014..0.019 rows=2 loops=1)\n Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone > '2005-12-31 22:00:00+01'::timestamp with time zone) AND ((ts)::timestamp with time zone < '2006-01-01 02:00:00+01'::timestamp with time zone))\n Total runtime: 0.176 ms\n(10 rows)\n\n\nHere, two child tables are involved (t_mv_200512 and t_mv_200601)\nand the query only uses those two, even without cast of the constants\nin the where clause.\n\n- - andreas\n\n- --\nAndreas Haumer | mailto:[email protected]\n*x Software + Systeme | http://www.xss.co.at/\nKarmarschgasse 51/2/20 | Tel: +43-1-6060114-0\nA-1100 Vienna, Austria | Fax: +43-1-6060114-71\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGNiATxJmyeGcXPhERAo23AJwPCBwvWQT/m3QRXRWqK0aECeMQ2gCbBDjA\nE5iZNnU41vrFBNtXzdCSmWY=\n=0+pC\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 30 Apr 2007 18:58:15 +0200",
"msg_from": "Andreas Haumer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi!\n\nTom Lane schrieb:\n[...]\n> As already pointed out, this is only going to be able to exclude\n> partitions that are strictly after the limit-time, since you have no\n> WHERE clause that excludes anything before. Can you set a reasonable\n> upper bound on the maximum inter-measurement time? If so, you could\n> query something like this:\n> \n> select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00'\n> and ts > '2007-04-21 00:00:00'\n> order by ts desc limit 1;\n> \n\nThat might be possible, though I'll have to check with our\nbusiness logic. Those \"open interval\" queries usually are\nneeded to catch the immediate neighbors for navigation\npurposes (e.g. the \"next\" and \"previous\" values in a list)\nor for drawing diagrams where the line starts somewhere\nleft or right \"outside\" the diagram.\n\n> If you don't have a hard limit, but do have some smarts on the client\n> side, you could try successive queries like this with larger and larger\n> windows until you get an answer.\n> \n\nWell, the beauty of the \"inheritance method\" of course is\nto keep such rules out of the application... ;-)\n\nI have a DAO layer on top of Hibernate and I'd rather not\ntouch this to put special database access logic in (especially\nas I plan to use partitioned tables as an option for really\nlarge installations. For small ones it looks like we don't\nneed or want partitioned tables anyway)\n\nPerhaps I can hide this logic in some stored procedures\n(I already have several stored procedures to handle\nautomatic and transparent creation of child tables on\nINSERTs anyway...)\n\n- - andreas\n\n- --\nAndreas Haumer | mailto:[email protected]\n*x Software + Systeme | http://www.xss.co.at/\nKarmarschgasse 51/2/20 | Tel: +43-1-6060114-0\nA-1100 Vienna, Austria | Fax: +43-1-6060114-71\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD4DBQFGNiVvxJmyeGcXPhERAkbzAJj7HBK6tMZpb0RPD7iN6vpyc1tiAKC2heFx\n7pnq02iqW2QosLd93Y03PA==\n=pJ7q\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 30 Apr 2007 19:20:49 +0200",
"msg_from": "Andreas Haumer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
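A stored procedure wrapping the widening-window probe Tom suggested might look roughly like the sketch below. The function name and window steps are invented; the query string is assembled with EXECUTE because, on 8.2, constraint exclusion is only applied to constants known at plan time, so a statically planned query inside plpgsql would not be pruned.

CREATE OR REPLACE FUNCTION mwdb.prev_ts(p_zr integer, p_ts timestamptz)
RETURNS timestamptz AS $$
DECLARE
    v_window interval := interval '1 month';
    v_result timestamptz;
BEGIN
    -- Probe a bounded window, doubling it until a row turns up
    -- (or until roughly ten years have been covered).
    WHILE v_window <= interval '120 months' LOOP
        EXECUTE 'SELECT ts FROM mwdb.t_mv'
             || ' WHERE zr = '  || p_zr::text
             || ' AND ts < '    || quote_literal(p_ts::text)
             || ' AND ts >= '   || quote_literal((p_ts - v_window)::text)
             || ' ORDER BY ts DESC LIMIT 1'
           INTO v_result;
        IF v_result IS NOT NULL THEN
            RETURN v_result;
        END IF;
        v_window := v_window * 2;
    END LOOP;
    RETURN NULL;  -- nothing found within the probed range
END;
$$ LANGUAGE plpgsql;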
{
"msg_contents": "On Mon, Apr 30, 2007 at 03:29:30PM +0200, Andreas Haumer wrote:\n> This already gives a row matching the given WHERE clause.\n> It makes no sense to scan the other tables, as the query\n> asks for one row only and all the other tables have timestamps\n> larger than all the timestamps in table t_mv_200601 (according\n> to the CHECK constraints for the partion tables)\n\nSo for each row, it has to check all CHECK constraints to see if it has\nenough rows? That sounds fairly inefficient.\n\nI wonder if the planner could copy the limit down through the Append, though\n-- it certainly doesn't need more than one row from each partition. It sounds\nslightly cumbersome to try to plan such a thing, though...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 30 Apr 2007 19:23:34 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "\nWait, rereading the original queries I seem to have misunderstood something.\nThe individual parts of the partitioned tables are being accessed in timestamp\norder. So what's missing is some way for the optimizer to know that the\nresulting append results will still be in order. If it knew that all the\nconstraints were mutually exclusive and covered ascending ranges then it could\navoid doing the extra sort. Hm...\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 30 Apr 2007 20:19:16 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "Andreas Haumer wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Hi!\n>\n> I'm currently experimenting with PostgreSQL 8.2.4 and table\n> partitioning in order to improve the performance of an\n> application I'm working on.\n>\n> My application is about managing measurement values (lots of!)\n> I have one table \"t_mv\" which stores all the measurement values.\n> A single measurement value has a timestamp and belongs\n> to a single time series, so table \"t_mv\" looks like this:\n>\n> CREATE TABLE t_mv\n> (\n> zr integer NOT NULL, -- the time series id\n> ts timestamp with time zone NOT NULL, -- the timestamp\n> ... -- other attributes of a mv\n> )\n> WITHOUT OIDS;\n>\n> ALTER TABLE t_mv\n> ADD CONSTRAINT pk_mv_zr_ts PRIMARY KEY (zr, ts);\n>\n> Each time series defines several other attributes which are common\n> to all measurement values of this time series (like sampling location,\n> physical parameter, aggregation, cardinality, type, visibility, etc.)\n>\n> The application should be able to handle several thousand\n> different time series and hundreds of millions of measurement\n> values, so table t_mv can get quite large.\n>\n> I have tested installations with up to 70 millions rows in t_mv\n> and PostgreSQL can handle that with a quite good performance\n> even on non high-end machines (operating system is Linux, btw)\n>\n> But as I expect installations witch much more rows in t_mv, I\n> tried to implement a \"partitioned tables\" concept using inheritance\n> and CHECK constraints, just like it is described in the docs\n> (e.g. chapter 5.9 in the current PostgreSQL 8.2.4 documentation)\n>\n> I split the t_mv table on the timestamp attribute to build\n> child tables which hold all measurement values for a single month.\n> That way I have several tables called \"t_mv_YYYYMM\" which all\n> inherit from \"t_mv\". The number of child tables depends on the\n> time period the application has to store the measurement values\n> (which can be several years so I'm expecting up to 100 child\n> tables or even more).\n> For the application everything looks the same: inserts, updates\n> and queries all are against the \"t_mv\" parent table, the application\n> is not aware of the fact that this table is actually \"split\" into\n> several child tables.\n>\n> This is working fine and for some standard queries it actually\n> gives some performance improvement compared to the standard\n> \"everything in one big table\" concept. The performance improvement\n> increases with the number of rows in t_mv, for a small table (less\n> than 10 million rows or so) IMHO it is not really worth the effort\n> or even counter-productive.\n>\n> But I have some special queries where the performance with\n> partitioned tables actually get much worse: those are queries where\n> I'm working with \"open\" time intervals, i.e. where I want to\n> get the previous and/or next timestamp from a given interval.\n>\n> A simple example: Get the timestamp of a measurement value for time\n> series 3622 which is right before the measurement value with time\n> stamp '2007-04-22 00:00:00':\n>\n> testdb_std=> select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n> ts\n> - ------------------------\n> 2007-04-21 23:00:00+02\n> (1 row)\n>\n>\n> Im my application there are many queries like this. 
Such\n> queries also come in several variations, including quite\n> sophisticated joins with lots of other tables \"above\" the\n> time series table.\n>\n> Note: as I'm working with (potentially) non-equidistant\n> time series I can not just calculate the timestamps, I\n> have to retrieve them from the database!\n>\n> In the standard case, the query plan for the example query looks like this:\n>\n> testdb_std=> explain analyze select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n> QUERY PLAN\n> - -----------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..1.70 rows=1 width=8) (actual time=0.233..0.235 rows=1 loops=1)\n> -> Index Scan Backward using pk_mv_zr_ts on t_mv (cost=0.00..21068.91 rows=12399 width=8) (actual time=0.221..0.221 rows=1 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> Total runtime: 0.266 ms\n> (4 rows)\n>\n>\n> If I switch to partitioned tables, the query retrieves the same result (of course):\n>\n> testdb_std=> \\c testdb_part\n> You are now connected to database \"testdb_part\".\n> testdb_part=> select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n> ts\n> - ------------------------\n> 2007-04-21 23:00:00+02\n> (1 row)\n>\n>\n> But the query plan becomes:\n>\n> testdb_part=> explain analyze select ts from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' order by ts desc limit 1;\n> QUERY PLAN\n> - ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=23985.83..23985.83 rows=1 width=8) (actual time=230.100..230.102 rows=1 loops=1)\n> -> Sort (cost=23985.83..24019.84 rows=13605 width=8) (actual time=230.095..230.095 rows=1 loops=1)\n> Sort Key: mwdb.t_mv.ts\n> -> Result (cost=0.00..23051.72 rows=13605 width=8) (actual time=0.154..177.519 rows=15810 loops=1)\n> -> Append (cost=0.00..23051.72 rows=13605 width=8) (actual time=0.149..114.186 rows=15810 loops=1)\n> -> Index Scan using pk_mv_zr_ts on t_mv (cost=0.00..8.27 rows=1 width=8) (actual time=0.047..0.047 rows=0 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200507 on t_mv_200507 t_mv (cost=0.00..2417.53 rows=1519 width=8) (actual time=0.095..2.419 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200508 on t_mv_200508 t_mv (cost=0.00..918.81 rows=539 width=8) (actual time=0.081..2.134 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200509 on t_mv_200509 t_mv (cost=0.00..941.88 rows=555 width=8) (actual time=0.061..2.051 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200510 on t_mv_200510 t_mv (cost=0.00..915.29 rows=538 width=8) (actual time=0.064..2.113 rows=715 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan 
using pk_mv_200511 on t_mv_200511 t_mv (cost=0.00..925.93 rows=545 width=8) (actual time=0.048..2.986 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200512 on t_mv_200512 t_mv (cost=0.00..936.53 rows=550 width=8) (actual time=0.049..2.212 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200601 on t_mv_200601 t_mv (cost=0.00..981.42 rows=579 width=8) (actual time=0.065..3.029 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200602 on t_mv_200602 t_mv (cost=0.00..856.25 rows=502 width=8) (actual time=0.045..2.866 rows=672 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200603 on t_mv_200603 t_mv (cost=0.00..977.84 rows=575 width=8) (actual time=0.052..3.044 rows=743 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200604 on t_mv_200604 t_mv (cost=0.00..906.40 rows=531 width=8) (actual time=0.053..1.976 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200605 on t_mv_200605 t_mv (cost=0.00..938.28 rows=550 width=8) (actual time=0.050..2.357 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200606 on t_mv_200606 t_mv (cost=0.00..922.35 rows=541 width=8) (actual time=0.054..2.063 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200607 on t_mv_200607 t_mv (cost=0.00..2112.64 rows=1315 width=8) (actual time=0.047..2.226 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200608 on t_mv_200608 t_mv (cost=0.00..990.23 rows=582 width=8) (actual time=0.048..2.094 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200609 on t_mv_200609 t_mv (cost=0.00..902.84 rows=528 width=8) (actual time=0.039..2.252 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200610 on t_mv_200610 t_mv (cost=0.00..964.87 rows=567 width=8) (actual time=0.033..2.118 rows=745 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200611 on t_mv_200611 t_mv (cost=0.00..947.17 rows=557 width=8) (actual time=0.060..2.160 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200612 on t_mv_200612 t_mv (cost=0.00..929.43 rows=545 width=8) 
(actual time=0.039..2.051 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200701 on t_mv_200701 t_mv (cost=0.00..940.05 rows=551 width=8) (actual time=0.036..2.217 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200702 on t_mv_200702 t_mv (cost=0.00..847.38 rows=496 width=8) (actual time=0.035..1.830 rows=672 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200703 on t_mv_200703 t_mv (cost=0.00..956.00 rows=561 width=8) (actual time=0.062..2.326 rows=743 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200704 on t_mv_200704 t_mv (cost=0.00..814.38 rows=378 width=8) (actual time=0.050..1.406 rows=504 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> Total runtime: 231.730 ms\n> (52 rows)\n>\n> Oops!\n> Compare the costs or the actual query time between those queries!\n> (Note: I set \"constraint_exclusion = on\", of course!)\n>\n> As such queries are used all over the application, this nullifies\n> any performance improvements for standard queries and in fact makes\n> the overall application performance as \"feeled\" by the user _much_\n> worse.\n>\n> I also tried it with \"min()\" and \"max()\" aggregate functions\n> instead of the \"limit 1\" query, but this does not change much:\n>\n>\n> Standard \"big\" table:\n>\n> testdb_std=> select max(ts) from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' ;\n> max\n> - ------------------------\n> 2007-04-21 23:00:00+02\n> (1 row)\n>\n>\n> testdb_std=> explain analyze select max(ts) from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' ;\n> QUERY PLAN\n> - -------------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=1.70..1.71 rows=1 width=0) (actual time=0.071..0.073 rows=1 loops=1)\n> InitPlan\n> -> Limit (cost=0.00..1.70 rows=1 width=8) (actual time=0.060..0.062 rows=1 loops=1)\n> -> Index Scan Backward using pk_mv_zr_ts on t_mv (cost=0.00..21068.91 rows=12399 width=8) (actual time=0.056..0.056 rows=1 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> Filter: ((ts)::timestamp with time zone IS NOT NULL)\n> Total runtime: 0.221 ms\n> (7 rows)\n>\n>\n> \"Partitioned table\":\n>\n> testdb_part=> select max(ts) from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' ;\n> max\n> - ------------------------\n> 2007-04-21 23:00:00+02\n> (1 row)\n>\n> testdb_part=> explain analyze select max(ts) from mwdb.t_mv where zr=3622 and ts < '2007-04-22 00:00:00' ;\n> QUERY PLAN\n> - ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=23085.73..23085.74 rows=1 width=8) (actual time=390.094..390.096 rows=1 loops=1)\n> -> Append (cost=0.00..23051.72 rows=13605 width=8) (actual time=0.241..290.934 rows=15810 loops=1)\n> -> Index Scan using pk_mv_zr_ts on t_mv 
(cost=0.00..8.27 rows=1 width=8) (actual time=0.038..0.038 rows=0 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200507 on t_mv_200507 t_mv (cost=0.00..2417.53 rows=1519 width=8) (actual time=0.197..12.598 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200508 on t_mv_200508 t_mv (cost=0.00..918.81 rows=539 width=8) (actual time=0.095..5.947 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200509 on t_mv_200509 t_mv (cost=0.00..941.88 rows=555 width=8) (actual time=0.118..2.247 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200510 on t_mv_200510 t_mv (cost=0.00..915.29 rows=538 width=8) (actual time=0.121..6.219 rows=715 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200511 on t_mv_200511 t_mv (cost=0.00..925.93 rows=545 width=8) (actual time=2.287..9.991 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200512 on t_mv_200512 t_mv (cost=0.00..936.53 rows=550 width=8) (actual time=0.110..2.285 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200601 on t_mv_200601 t_mv (cost=0.00..981.42 rows=579 width=8) (actual time=0.209..4.682 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200602 on t_mv_200602 t_mv (cost=0.00..856.25 rows=502 width=8) (actual time=0.079..6.079 rows=672 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200603 on t_mv_200603 t_mv (cost=0.00..977.84 rows=575 width=8) (actual time=0.091..4.793 rows=743 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200604 on t_mv_200604 t_mv (cost=0.00..906.40 rows=531 width=8) (actual time=0.108..7.637 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200605 on t_mv_200605 t_mv (cost=0.00..938.28 rows=550 width=8) (actual time=0.116..4.772 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200606 on t_mv_200606 t_mv (cost=0.00..922.35 rows=541 width=8) (actual time=0.074..6.071 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200607 on t_mv_200607 t_mv (cost=0.00..2112.64 rows=1315 width=8) (actual time=0.082..4.807 rows=744 loops=1)\n> 
Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200608 on t_mv_200608 t_mv (cost=0.00..990.23 rows=582 width=8) (actual time=2.283..8.671 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200609 on t_mv_200609 t_mv (cost=0.00..902.84 rows=528 width=8) (actual time=0.107..6.067 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200610 on t_mv_200610 t_mv (cost=0.00..964.87 rows=567 width=8) (actual time=0.074..3.933 rows=745 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200611 on t_mv_200611 t_mv (cost=0.00..947.17 rows=557 width=8) (actual time=0.091..6.291 rows=720 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200612 on t_mv_200612 t_mv (cost=0.00..929.43 rows=545 width=8) (actual time=0.077..4.101 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200701 on t_mv_200701 t_mv (cost=0.00..940.05 rows=551 width=8) (actual time=0.077..2.558 rows=744 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200702 on t_mv_200702 t_mv (cost=0.00..847.38 rows=496 width=8) (actual time=0.073..4.346 rows=672 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200703 on t_mv_200703 t_mv (cost=0.00..956.00 rows=561 width=8) (actual time=2.532..7.206 rows=743 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> -> Index Scan using pk_mv_200704 on t_mv_200704 t_mv (cost=0.00..814.38 rows=378 width=8) (actual time=0.120..4.163 rows=504 loops=1)\n> Index Cond: (((zr)::integer = 3622) AND ((ts)::timestamp with time zone < '2007-04-22 00:00:00+02'::timestamp with time zone))\n> Total runtime: 394.384 ms\n> (49 rows)\n>\n>\n> Now my question is: Does the query planner in the case of partitioned tables\n> really have to scan all indexes in order to get the next timestamp smaller\n> (or larger) than a given one?\n>\n> There are check conditions on all table partitions like this:\n>\n> For table t_mv_200704:\n> CHECK (ts::timestamp with time zone >= '2007-04-01 00:00:00+02'::timestamp with time zone\n> AND ts::timestamp with time zone < '2007-05-01 00:00:00+02'::timestamp with time zone)\n>\n> For table t_mv_200703:\n> CHECK (ts::timestamp with time zone >= '2007-03-01 00:00:00+01'::timestamp with time zone\n> AND ts::timestamp with time zone < '2007-04-01 00:00:00+02'::timestamp with time zone)\n>\n> and so on...\n>\n> So the tables are in a well defined, monotonic sort order regarding the timestamp.\n>\n> This means that if there is a max(ts) for ts < '2007-04-22 00:00:00'\n> already in table t_mv_200704, it makes no sense to look further in\n> other tables where the timestamps can 
only be smaller than the\n> timestamp already found. Am I correct?\n>\n> Is there room for improvements of the query planner for queries\n> like this or is this a special case which will never get handled\n> anyway?\n>\n> Or would you suggest a completely different table structure\n> or perhaps some other query?\n>\n> I'm open for any suggestion!\n>\n> - - andreas\n>\n> - --\n> Andreas Haumer | mailto:[email protected]\n> *x Software + Systeme | http://www.xss.co.at/\n> Karmarschgasse 51/2/20 | Tel: +43-1-6060114-0\n> A-1100 Vienna, Austria | Fax: +43-1-6060114-71\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n>\n> iD8DBQFGNdZ2xJmyeGcXPhERAsbfAJ9nA+z50uXiV4SHntt1Y9IuZ/rzWwCff8ar\n> xKSMfzwgjx9kQipeDoEnXWE=\n> =57aJ\n> -----END PGP SIGNATURE-----\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \nHello, Andreas, I too am having exactly the same issue as you do. \nComparing my partitioned and plain table performance, I've found that \nthe plain tables perform about 25% faster than partitioned table. Using \n'explain select ...', I see that constraints are being used so in \npartitioned tables fewer rows are examined. But still partitioned tables \nare 25% slower, what a let down.\n\nFei\n",
"msg_date": "Thu, 03 May 2007 11:02:21 -0400",
"msg_from": "Fei Liu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "On 5/3/07, Fei Liu <[email protected]> wrote:\n> Hello, Andreas, I too am having exactly the same issue as you do.\n> Comparing my partitioned and plain table performance, I've found that\n> the plain tables perform about 25% faster than partitioned table. Using\n> 'explain select ...', I see that constraints are being used so in\n> partitioned tables fewer rows are examined. But still partitioned tables\n> are 25% slower, what a let down.\n\nThat's a little bit harsh. The main use of partitioning is not to\nmake the table faster but to make the maintenance easier. When\nconstraint exclusion works well for a particular query you can get a\nsmall boost but many queries will break down in a really negative way.\n So, you are sacrificing flexibility for easier maintenance. You have\nto really be careful how you use it.\n\nThe best case for partitioning is when you can logically divide up\nyour data so that you really only have to deal with one sliver of it\nat a time...for joins and such. If the OP could force the constraint\nexclusion (maybe by hashing the timestamp down to a period and using\nthat for where clause), his query would be fine. The problem is it's\nnot always easy to do that.\n\nmerlin\n",
"msg_date": "Fri, 4 May 2007 08:07:57 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
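One way to read the 'hash the timestamp down to a period' suggestion (the column, constraint and values below are invented for illustration): store a redundant YYYYMM integer next to the timestamp, constrain each child table on it, and pass it as a constant in the WHERE clause so exclusion can happen at plan time.

-- Hypothetical variant of the schema with a redundant period column:
--   ALTER TABLE mwdb.t_mv ADD COLUMN period integer;  -- e.g. 200601
--   CREATE TABLE mwdb.t_mv_200601 (CHECK (period = 200601)) INHERITS (mwdb.t_mv);

SELECT ts
  FROM mwdb.t_mv
 WHERE zr = 3622
   AND period = 200601          -- constant, so only t_mv_200601 is scanned
   AND ts > '2006-01-01 00:00:00'
 ORDER BY ts
 LIMIT 1;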
{
"msg_contents": "On Thu, 2007-05-03 at 21:37, Merlin Moncure wrote:\n> On 5/3/07, Fei Liu <[email protected]> wrote:\n> > Hello, Andreas, I too am having exactly the same issue as you do.\n> > Comparing my partitioned and plain table performance, I've found that\n> > the plain tables perform about 25% faster than partitioned table. Using\n> > 'explain select ...', I see that constraints are being used so in\n> > partitioned tables fewer rows are examined. But still partitioned tables\n> > are 25% slower, what a let down.\n> \n> That's a little bit harsh. The main use of partitioning is not to\n> make the table faster but to make the maintenance easier. When\n> constraint exclusion works well for a particular query you can get a\n> small boost but many queries will break down in a really negative way.\n> So, you are sacrificing flexibility for easier maintenance. You have\n> to really be careful how you use it.\n> \n> The best case for partitioning is when you can logically divide up\n> your data so that you really only have to deal with one sliver of it\n> at a time...for joins and such. If the OP could force the constraint\n> exclusion (maybe by hashing the timestamp down to a period and using\n> that for where clause), his query would be fine. The problem is it's\n> not always easy to do that.\n\nAgree++\n\nI've been testing partitioning for a zip code lookup thing that was\nposted here earlier, and I partitioned a 10,000,000 row set into about\n400 partitions. I found that selecting a range of areas defined by x/y\ncoordinates was faster without any indexes. The same selection with one\nbig table and one big (x,y) index took 3 to 10 seconds typically, same\nselect against the partitions with no indexes took 0.2 to 0.5 seconds.\n\nFor that particular application, the only way to scale it was with\npartitioning.\n",
"msg_date": "Fri, 04 May 2007 10:11:12 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "On 5/4/07, Scott Marlowe <[email protected]> wrote:\n> On Thu, 2007-05-03 at 21:37, Merlin Moncure wrote:\n> > On 5/3/07, Fei Liu <[email protected]> wrote:\n> > > Hello, Andreas, I too am having exactly the same issue as you do.\n> > > Comparing my partitioned and plain table performance, I've found that\n> > > the plain tables perform about 25% faster than partitioned table. Using\n> > > 'explain select ...', I see that constraints are being used so in\n> > > partitioned tables fewer rows are examined. But still partitioned tables\n> > > are 25% slower, what a let down.\n> >\n> > That's a little bit harsh. The main use of partitioning is not to\n> > make the table faster but to make the maintenance easier. When\n> > constraint exclusion works well for a particular query you can get a\n> > small boost but many queries will break down in a really negative way.\n> > So, you are sacrificing flexibility for easier maintenance. You have\n> > to really be careful how you use it.\n> >\n> > The best case for partitioning is when you can logically divide up\n> > your data so that you really only have to deal with one sliver of it\n> > at a time...for joins and such. If the OP could force the constraint\n> > exclusion (maybe by hashing the timestamp down to a period and using\n> > that for where clause), his query would be fine. The problem is it's\n> > not always easy to do that.\n>\n> Agree++\n>\n> I've been testing partitioning for a zip code lookup thing that was\n> posted here earlier, and I partitioned a 10,000,000 row set into about\n> 400 partitions. I found that selecting a range of areas defined by x/y\n> coordinates was faster without any indexes. The same selection with one\n> big table and one big (x,y) index took 3 to 10 seconds typically, same\n> select against the partitions with no indexes took 0.2 to 0.5 seconds.\n\nI was thinking about that problem....one approach I was playing with\nwas to normalize the 10mm table to zipcode (chopping off + 4) and then\ndoing bounding box ops on the zipcode (using earthdistance/gist) table\nand also the detail table using tradictional tactics or gist. I think\nthis would give reasonable performance without partitioning (10mm\nrecords doesn't scare me anymore!). If the records are frequently\nupdated you may want to TP anways though do to (pre-hot) vacuum\nissues.\n\nmerlin\n",
"msg_date": "Fri, 4 May 2007 13:10:16 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
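A rough sketch of the normalized-zipcode bounding-box idea, assuming the contrib/cube and contrib/earthdistance modules are installed (table and column names are invented):

CREATE TABLE zip_centroid (
    zip character(5) PRIMARY KEY,
    lat double precision NOT NULL,
    lon double precision NOT NULL
);

CREATE INDEX zip_centroid_earth_idx
    ON zip_centroid USING gist (ll_to_earth(lat, lon));

-- All zip centroids within roughly 10 km of a point; the (small) result
-- can then be joined back against the detail table.
SELECT zip
  FROM zip_centroid
 WHERE earth_box(ll_to_earth(40.75, -73.99), 10000) @> ll_to_earth(lat, lon);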
{
"msg_contents": "Scott Marlowe wrote:\n> On Thu, 2007-05-03 at 21:37, Merlin Moncure wrote:\n> \n>> On 5/3/07, Fei Liu <[email protected]> wrote:\n>> \n>>> Hello, Andreas, I too am having exactly the same issue as you do.\n>>> Comparing my partitioned and plain table performance, I've found that\n>>> the plain tables perform about 25% faster than partitioned table. Using\n>>> 'explain select ...', I see that constraints are being used so in\n>>> partitioned tables fewer rows are examined. But still partitioned tables\n>>> are 25% slower, what a let down.\n>>> \n>> That's a little bit harsh. The main use of partitioning is not to\n>> make the table faster but to make the maintenance easier. When\n>> constraint exclusion works well for a particular query you can get a\n>> small boost but many queries will break down in a really negative way.\n>> So, you are sacrificing flexibility for easier maintenance. You have\n>> to really be careful how you use it.\n>>\n>> The best case for partitioning is when you can logically divide up\n>> your data so that you really only have to deal with one sliver of it\n>> at a time...for joins and such. If the OP could force the constraint\n>> exclusion (maybe by hashing the timestamp down to a period and using\n>> that for where clause), his query would be fine. The problem is it's\n>> not always easy to do that.\n>> \n>\n> Agree++\n>\n> I've been testing partitioning for a zip code lookup thing that was\n> posted here earlier, and I partitioned a 10,000,000 row set into about\n> 400 partitions. I found that selecting a range of areas defined by x/y\n> coordinates was faster without any indexes. The same selection with one\n> big table and one big (x,y) index took 3 to 10 seconds typically, same\n> select against the partitions with no indexes took 0.2 to 0.5 seconds.\n>\n> For that particular application, the only way to scale it was with\n> partitioning.\n> \nIn my particular case, I have 2 million records uniformly split up in 40 \npartitions. It's ranged data varying with time, each partition has one \nmonth of data. Do you think this is a good candidate to seek performance \nboost with partitioned tables?\n",
"msg_date": "Tue, 08 May 2007 14:41:38 -0400",
"msg_from": "Fei Liu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
},
{
"msg_contents": "On Tue, 2007-05-08 at 13:41, Fei Liu wrote:\n> Scott Marlowe wrote:\n> > On Thu, 2007-05-03 at 21:37, Merlin Moncure wrote:\n> > \n> >> On 5/3/07, Fei Liu <[email protected]> wrote:\n> >> \n> >>> Hello, Andreas, I too am having exactly the same issue as you do.\n> >>> Comparing my partitioned and plain table performance, I've found that\n> >>> the plain tables perform about 25% faster than partitioned table. Using\n> >>> 'explain select ...', I see that constraints are being used so in\n> >>> partitioned tables fewer rows are examined. But still partitioned tables\n> >>> are 25% slower, what a let down.\n> >>> \n> >> That's a little bit harsh. The main use of partitioning is not to\n> >> make the table faster but to make the maintenance easier. When\n> >> constraint exclusion works well for a particular query you can get a\n> >> small boost but many queries will break down in a really negative way.\n> >> So, you are sacrificing flexibility for easier maintenance. You have\n> >> to really be careful how you use it.\n> >>\n> >> The best case for partitioning is when you can logically divide up\n> >> your data so that you really only have to deal with one sliver of it\n> >> at a time...for joins and such. If the OP could force the constraint\n> >> exclusion (maybe by hashing the timestamp down to a period and using\n> >> that for where clause), his query would be fine. The problem is it's\n> >> not always easy to do that.\n> >> \n> >\n> > Agree++\n> >\n> > I've been testing partitioning for a zip code lookup thing that was\n> > posted here earlier, and I partitioned a 10,000,000 row set into about\n> > 400 partitions. I found that selecting a range of areas defined by x/y\n> > coordinates was faster without any indexes. The same selection with one\n> > big table and one big (x,y) index took 3 to 10 seconds typically, same\n> > select against the partitions with no indexes took 0.2 to 0.5 seconds.\n> >\n> > For that particular application, the only way to scale it was with\n> > partitioning.\n> > \n> In my particular case, I have 2 million records uniformly split up in 40 \n> partitions. It's ranged data varying with time, each partition has one \n> month of data. Do you think this is a good candidate to seek performance \n> boost with partitioned tables?\n\nThat really really really depends on your access patterns. IF you\ntypically access them by certain date ranges, then partitioning is\nalmost always a win. If you have enough requests that don't select a\nrange of dates, it might wind up being slow.\n\nThere are other advantages to partitioning though, such as ease of\nmaintenance, being able to do partial backups easily, archiving old\npartitions, placing the more active partitions on faster storage.\n",
"msg_date": "Tue, 08 May 2007 13:58:44 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance problems with partitioned tables"
}
] |
[
{
"msg_contents": "Hello group, I need to design and develop a web reporting system to let \nusers query/view syslog files on a unix host. For now, I am \nconcentrating on the authentication file that has user logon \n(success/failure) and logoff records. The log file is logrotated every \nweek or so. My reporting system parses the log entries and put the \nresult into a postgresql database (I am proposing to use postgresql as \nthe backend). Since this deals with multi-year archive and I believe \n'partitioing' is an ideal feature to handle this problem. So here is the \ndesign scheme:\n\nCREATE TABLE logon_success(\n name varchar(32) not null,\n srcip inet not null,\n date date not null,\n time time not null,\n ...\n);\n\n\nCREATE TABLE logon_success_yy${year}mm${month}(\n CHECK (date >= DATE '$year-$month-01' AND date < DATE \n'$next_year-$next_month-1')\n)\nINHERITS ($tname)\n;\n\nAs you can see from the sample code, I am using perl to dynamically \ngenerate children tables as I parse log files in a daily cron job \nscript. Once the log file is analyzed and archived in the database, I \nhave a simple web UI that sysadmin can select and view user logon \nevents. I have built a sample framework and it works so far. Keep in \nmind, this reporting system is not limited to just user logon, it should \nalso work with system events such as services failures/startup, hardware \nfailures, etc\n\nMy initial testing has not shown any significant difference between a \npartitioning approach and a plain (all entries in master) database \napproach...\n2005-01-01 | 00:27:55 | firewood | ssh | Login Successful | None | local \n| user9819 | 192.168.1.31\n\nMy test was based on two artificial tables that has 1700 records per day \nfrom 2004-02-01 to 2007-04-27, around 2 million entries that are \nidentical in both tables.\nMy test script:\necho Testing database $t1 time based\ntime psql -p 5583 netilla postgres << EOF\nselect count(date) from $t1 where date > '2005-03-01' and date < \n'2006-12-11';\n\\q\nEOF\n\necho Testing database $t2 time based\ntime psql -p 5583 netilla postgres << EOF\nselect count(date) from $t2 where date > '2005-03-01' and date < \n'2006-12-11';\n\\q\nEOF\n\nResult:\n./timing_test.sh\nTesting database logon_test time based\n count\n---------\n1121472\n(1 row)\n\n0.00user 0.00system 0:02.92elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (0major+456minor)pagefaults 0swaps\nTesting database logon_test2 time based\n count\n---------\n1121472\n(1 row)\n\n0.00user 0.00system 0:02.52elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k\n0inputs+0outputs (0major+456minor)pagefaults 0swaps\n\nBut the numbers are really not static and logon_test2 (with \npartitioning) sometimes behave worse than logon_test...\n\nNow here are my questions:\n1) Should I use database to implement such a reporting system? Are there \nany alternatives, architects, designs?\n2) Is partitioning a good approach to speed up log query/view? The user \ncomment in partitioning in pgsql manual seems to indicate partitioning \nmay be slower than non-partitioned table under certain circumstances.\n3) How to avoid repetitive log entry scanning since my cron job script \nis run daily but logrotate runs weekly? This means everytime my script \nwill be parsing duplicate entries.\n4) When parsing log files, it's quite possible that there are identical \nentries, for example a user logins really fast, resulting 2 or more \nidentical entries..In this case can I still use primary key/index at \nall? 
If I can, how do I design primary key or index to speed up query?\n5) What are the most glaring limitations and flaws in my design?\n6) What are the best approaches to analyze postgresql query performance \nand how to improve postgresql query performance?\nThank you for taking time to review and answer my questions! Let me know \nif I am not clear on any specific detail..\n\nFei\n\n",
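A small sketch of how the daily loader could target the month's child table directly, reusing the sample row and the naming scheme from the message above (the exact child name depends on how the Perl script formats the month, so it is only illustrative):

INSERT INTO logon_success_yy2005mm01 (name, srcip, date, time)
VALUES ('user9819', '192.168.1.31'::inet, DATE '2005-01-01', TIME '00:27:55');

-- Since identical logins make a natural primary key impossible (question 4),
-- an ordinary non-unique index per child is one way to keep lookups fast:
CREATE INDEX logon_success_yy2005mm01_date_name_idx
    ON logon_success_yy2005mm01 (date, name);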
"msg_date": "Mon, 30 Apr 2007 10:13:46 -0400",
"msg_from": "Fei Liu <[email protected]>",
"msg_from_op": true,
"msg_subject": "sytem log audit/reporting and psql"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi!\n\nFei Liu schrieb:\n[...]\n> \n> Now here are my questions:\n\nThese are a lot of questions, and some of them are not related\nto pgsql-performance or even PostgreSQL.\n\nI'll try to answer some of them, because I'm currently experimenting\nwith partitioned tables, too.\n\n\n> 2) Is partitioning a good approach to speed up log query/view? The user\n> comment in partitioning in pgsql manual seems to indicate partitioning\n> may be slower than non-partitioned table under certain circumstances.\n\nYou can look at table partitioning under several points of view.\nMy experience with table partitioning under the aspect of \"performance\" is:\n\n*) It is a benefit for large tables only, and the definition of\n \"large\" depends on your data.\n I did some testing for an application we are developing and here\n it shows that table partitioning does not seem to make sense for\n tables with less than 10 million rows (perhaps even more)\n\n*) The performance benefit depends on your queries. Some queries get\n a big improvement, but some queries might even run significantly\n slower.\n\n*) Depending on the way you setup your system, inserts can be much\n slower with partitioned tables (e.g. if you are using triggers\n to automatically create your partitions on demand)\n\n> 3) How to avoid repetitive log entry scanning since my cron job script\n> is run daily but logrotate runs weekly? This means everytime my script\n> will be parsing duplicate entries.\n\nThis has nothing to do with postgres, but I wrote something similar years\nago. Here's what I did and what you could do:\n\nRemember the last line of your logfile in some external file (or even the\ndatabase). Then on the next run you can read the logfile again line by line\nand skip all lines until you have found the line you saved on the last run.\n* If you find the line that way, just start parsing the logfile beginning\n at the next line.\n* If you can not find your line and you reach EOF, start parsing again at\n the beginning of the logfile.\n* If this is your first run and you don't have a line stored yet, start\n parsing at the beginning of the logfile\n\nWhen you are finished you have to remember the last line from the logfile\nagain at some place.\n\n\n> 6) What are the best approaches to analyze postgresql query performance\n> and how to improve postgresql query performance?\n\nHere are some general recommendations for performance testing from\nmy experience:\n\n*) Test with real data. For table partitioning this means you have\n to create really large datasets to make your tests useful.\n You should write a small program to generate your test data if\n possible or use some other means to create your test database.\n You also need time: creating a test database and importing 100\n million rows of test data will take several hours or even days!\n\n*) Test with the real queries from your application!\n Testing with just a few easy standard queries will almost for sure\n not be sufficient to get the right numbers for the performance you\n will see in your application later on!\n Look at the thread \"Query performance problems with partitioned tables\"\n I started on pgsql-performance just yesterday to see what I mean!\n\n*) Use \"EXPLAIN ANALYZE\" and look at the \"cost\" and \"actual time\" numbers\n this gives you. It also will show you the query plan used by PostgreSQL\n when executing your query. 
You sometimes might be surprised what is going\n on behind the scenes...\n\n*) If you are just using some stopwatch to time your queries be aware\n of other factors which might significantly influence your test:\n Caching, other jobs running on the machine in parallel, cosmic rays, ...\n\n*) Before running your tests you should always try to get to some well\n defined starting point (this might even mean rebooting your server\n before running each test) and you should always repeat each test\n several times and then calculate a mean value (and standard deviation\n to see how \"good\" your results are...)\n\n*) Document your test setup and procedure as well as your results\n (otherwise two days later you won't remember which test obtained\n what result)\n\nHTH\n\n- - andreas\n\n- --\nAndreas Haumer | mailto:[email protected]\n*x Software + Systeme | http://www.xss.co.at/\nKarmarschgasse 51/2/20 | Tel: +43-1-6060114-0\nA-1100 Vienna, Austria | Fax: +43-1-6060114-71\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGN3NIxJmyeGcXPhERAspaAJ9MgymiwyehN6yU6jGtA0pbkdolsACfb6JC\nkB5KLyQ5WOTUD9uabVzsjwY=\n=3QSa\n-----END PGP SIGNATURE-----\n",
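One way to persist the "last line seen" bookkeeping Andreas describes is a small state table; the table, column names and file path below are invented for illustration, and the statements would be issued by the cron script at the end of each run:

CREATE TABLE logparse_state (
    logfile    text PRIMARY KEY,
    last_line  text NOT NULL,
    updated_at timestamp with time zone NOT NULL DEFAULT now()
);

-- first run for a given file:
INSERT INTO logparse_state (logfile, last_line)
VALUES ('/var/log/auth.log', '<full text of the last parsed line>');

-- subsequent runs:
UPDATE logparse_state
   SET last_line = '<full text of the last parsed line>', updated_at = now()
 WHERE logfile = '/var/log/auth.log';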
"msg_date": "Tue, 01 May 2007 19:05:14 +0200",
"msg_from": "Andreas Haumer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sytem log audit/reporting and psql"
}
] |
[
{
"msg_contents": "My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram\nrunning RHEL4 is acting kind of odd and I thought I would see if anybody\nhas any hints.\n\n \n\nI have Java program using postgresql-8.1-409.jdbc3.jar to connect over\nthe network. In general it works very well. I have run batch updates\nwith several thousand records repeatedly that has worked fine.\n\n \n\nThe Program pulls a summation of the DB and does some processing with\nit. It starts off wonderfully running a query every .5 seconds.\nUnfortunately, after a while it will start running queries that take 20\nto 30 seconds.\n\n \n\nLooking at the EXPLAIN for the query no sequential scans are going on\nand everything has an index that points directly at its search criteria.\n\n \n\nExample:\n\n \n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=1\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=2\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=3\n\n.\n\n.\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=23\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=24\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=2 and b.hour=1\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=2 and b.hour=2\n\n.\n\n.\n\n.\n\n \n\n \n\nThis query runs fine for a while (up to thousands of times). But what\nhappens is that it starts to have really nasty pauses when you switch\nthe day condition. After the first query with the day it runs like a\ncharm for 24 iterations, then slows back down again\n\n \n\nMy best guess was that an index never finished running, but REINDEX on\nthe table (b in this case) didn't seem to help.\n\n \n\nIdeas?\n\n \n\nAP\n\n\n\n\n\n\n\n\n\n\nMy pg 8.1 install on an AMD-64 box (4 processors) with 9\ngigs of ram running RHEL4 is acting kind of odd and I thought I would see if\nanybody has any hints.\n \nI have Java program using postgresql-8.1-409.jdbc3.jar to\nconnect over the network. In general it works very well. I have run\nbatch updates with several thousand records repeatedly that has worked fine.\n \nThe Program pulls a summation of the DB and does some\nprocessing with it. It starts off wonderfully running a query every .5\nseconds. 
Unfortunately, after a while it will start running queries that\ntake 20 to 30 seconds.\n \nLooking at the EXPLAIN for the query no sequential scans are\ngoing on and everything has an index that points directly at its search\ncriteria.\n \nExample:\n \nSelect sum(whatever) from a inner join b on\na.something=b.something WHERE b.day=1 and b.hour=1\nSelect sum(whatever) from a inner join b on\na.something=b.something WHERE b.day=1 and b.hour=2\nSelect sum(whatever) from a inner join b on\na.something=b.something WHERE b.day=1 and b.hour=3\n.\n.\nSelect sum(whatever) from a inner join b on\na.something=b.something WHERE b.day=1 and b.hour=23\nSelect sum(whatever) from a inner join b on\na.something=b.something WHERE b.day=1 and b.hour=24\nSelect sum(whatever) from a inner join b on\na.something=b.something WHERE b.day=2 and b.hour=1\nSelect sum(whatever) from a inner join b on\na.something=b.something WHERE b.day=2 and b.hour=2\n.\n.\n.\n \n \nThis query runs fine for a while (up to thousands of times).\nBut what happens is that it starts to have really nasty pauses when you switch\nthe day condition. After the first query with the day it runs like a\ncharm for 24 iterations, then slows back down again\n \nMy best guess was that an index never finished running, but REINDEX\non the table (b in this case) didn’t seem to help.\n \nIdeas?\n \nAP",
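It might also be worth capturing EXPLAIN ANALYZE (rather than plain EXPLAIN) from one of the slow executions, since that shows actual row counts and timings next to the estimates; a sketch against the pseudo-schema used above:

EXPLAIN ANALYZE
SELECT sum(whatever)
FROM a INNER JOIN b ON a.something = b.something
WHERE b.day = 2 AND b.hour = 1;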
"msg_date": "Wed, 2 May 2007 11:24:54 -0400",
"msg_from": "\"Parks, Aaron B.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intermitent slow queries"
},
{
"msg_contents": "On 2-May-07, at 11:24 AM, Parks, Aaron B. wrote:\n\n> My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of \n> ram running RHEL4 is acting kind of odd and I thought I would see \n> if anybody has any hints.\n>\n>\n>\n> I have Java program using postgresql-8.1-409.jdbc3.jar to connect \n> over the network. In general it works very well. I have run batch \n> updates with several thousand records repeatedly that has worked fine.\n>\n>\n>\n> The Program pulls a summation of the DB and does some processing \n> with it. It starts off wonderfully running a query every .5 \n> seconds. Unfortunately, after a while it will start running \n> queries that take 20 to 30 seconds.\n>\n>\n>\n> Looking at the EXPLAIN for the query no sequential scans are going \n> on and everything has an index that points directly at its search \n> criteria.\n>\n>\n>\n> Example:\n>\n>\n>\n> Select sum(whatever) from a inner join b on a.something=b.something \n> WHERE b.day=1 and b.hour=1\n>\n> Select sum(whatever) from a inner join b on a.something=b.something \n> WHERE b.day=1 and b.hour=2\n>\n> Select sum(whatever) from a inner join b on a.something=b.something \n> WHERE b.day=1 and b.hour=3\n>\n> .\n>\n> .\n>\n> Select sum(whatever) from a inner join b on a.something=b.something \n> WHERE b.day=1 and b.hour=23\n>\n> Select sum(whatever) from a inner join b on a.something=b.something \n> WHERE b.day=1 and b.hour=24\n>\n> Select sum(whatever) from a inner join b on a.something=b.something \n> WHERE b.day=2 and b.hour=1\n>\n> Select sum(whatever) from a inner join b on a.something=b.something \n> WHERE b.day=2 and b.hour=2\n>\n> .\n>\n> .\n>\n> .\n>\n>\n>\n>\n>\n> This query runs fine for a while (up to thousands of times). But \n> what happens is that it starts to have really nasty pauses when you \n> switch the day condition. After the first query with the day it \n> runs like a charm for 24 iterations, then slows back down again\n>\n>\n>\n> My best guess was that an index never finished running, but REINDEX \n> on the table (b in this case) didn�t seem to help.\nI'd think it has more to do with caching data. The first query caches \nthe days data, then the next day's data has to be read from disk.\n>\n>\n> Ideas?\n>\n>\n>\n> AP\n>\n>\n\n\nOn 2-May-07, at 11:24 AM, Parks, Aaron B. wrote:My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram running RHEL4 is acting kind of odd and I thought I would see if anybody has any hints. I have Java program using postgresql-8.1-409.jdbc3.jar to connect over the network. In general it works very well. I have run batch updates with several thousand records repeatedly that has worked fine. The Program pulls a summation of the DB and does some processing with it. It starts off wonderfully running a query every .5 seconds. Unfortunately, after a while it will start running queries that take 20 to 30 seconds. Looking at the EXPLAIN for the query no sequential scans are going on and everything has an index that points directly at its search criteria. 
Example: Select sum(whatever) from a inner join b on a.something=b.something WHERE b.day=1 and b.hour=1Select sum(whatever) from a inner join b on a.something=b.something WHERE b.day=1 and b.hour=2Select sum(whatever) from a inner join b on a.something=b.something WHERE b.day=1 and b.hour=3..Select sum(whatever) from a inner join b on a.something=b.something WHERE b.day=1 and b.hour=23Select sum(whatever) from a inner join b on a.something=b.something WHERE b.day=1 and b.hour=24Select sum(whatever) from a inner join b on a.something=b.something WHERE b.day=2 and b.hour=1Select sum(whatever) from a inner join b on a.something=b.something WHERE b.day=2 and b.hour=2... This query runs fine for a while (up to thousands of times). But what happens is that it starts to have really nasty pauses when you switch the day condition. After the first query with the day it runs like a charm for 24 iterations, then slows back down again My best guess was that an index never finished running, but REINDEX on the table (b in this case) didn’t seem to help.I'd think it has more to do with caching data. The first query caches the days data, then the next day's data has to be read from disk. Ideas? AP",
"msg_date": "Wed, 2 May 2007 14:17:56 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intermitent slow queries"
},
{
"msg_contents": "Among other possibilities, there's a known problem with slow memory \nleaks in various JVM's under circumstances similar to those you are describing.\nThe behavior you are describing is typical of this scenario. The \nincreasing delay is caused by longer and longer JVM garbage \ncollection runs as java attempts to reclaim enough memory from a \nsmaller and smaller universe of available memory.\n\nThe fastest test, and possible fix, is to go and buy more RAM. See \nif 16MB of RAM, heck even 10MB, makes the problem go away or delays \nit's onset. If so, there's good circumstantial evidence that you are \nbeing bitten by a slow memory leak; most likely in the JVM.\n\nCheers,\nRon Peacetree\n\n\nAt 11:24 AM 5/2/2007, Parks, Aaron B. wrote:\n>My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram \n>running RHEL4 is acting kind of odd and I thought I would see if \n>anybody has any hints.\n>\n>I have Java program using postgresql-8.1-409.jdbc3.jar to connect \n>over the network. In general it works very well. I have run batch \n>updates with several thousand records repeatedly that has worked fine.\n>\n>The Program pulls a summation of the DB and does some processing \n>with it. It starts off wonderfully running a query every .5 \n>seconds. Unfortunately, after a while it will start running queries \n>that take 20 to 30 seconds.\n>\n>Looking at the EXPLAIN for the query no sequential scans are going \n>on and everything has an index that points directly at its search criteria.\n>\n>Example:\n>\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=1\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=2\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=3\n>.\n>.\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=23\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=24\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=2 and b.hour=1\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=2 and b.hour=2\n>.\n>.\n>.\n>\n>\n>This query runs fine for a while (up to thousands of times). But \n>what happens is that it starts to have really nasty pauses when you \n>switch the day condition. After the first query with the day it \n>runs like a charm for 24 iterations, then slows back down again\n>\n>My best guess was that an index never finished running, but REINDEX \n>on the table (b in this case) didn't seem to help.\n>\n>Ideas?\n>\n>AP\n\n",
"msg_date": "Wed, 02 May 2007 14:55:26 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intermitent slow queries"
},
{
"msg_contents": "On Wed, May 02, 2007 at 02:55:26PM -0400, Ron wrote:\n> The fastest test, and possible fix, is to go and buy more RAM. See \n> if 16MB of RAM, heck even 10MB, makes the problem go away or delays \n> it's onset.\n\nSomething tells me 16MB of RAM is not going to help him much? :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 2 May 2007 20:59:55 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intermitent slow queries"
},
{
"msg_contents": "Dave:\n\n \n\nThinks for the thought, but I'm not sure how to fix that. I'm going to\nincrease the shared memory pages to 5K as soon as my latest vacuum\nfinishes to see if that helps.\n\n \n\nAP\n\n \n\n________________________________\n\nFrom: Dave Cramer [mailto:[email protected]] \nSent: Wednesday, May 02, 2007 2:18 PM\nTo: Parks, Aaron B.\nCc: [email protected]\nSubject: Re: [PERFORM] Intermitent slow queries\n\n \n\n \n\nOn 2-May-07, at 11:24 AM, Parks, Aaron B. wrote:\n\n\n\n\n\nMy pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram\nrunning RHEL4 is acting kind of odd and I thought I would see if anybody\nhas any hints.\n\n \n\nI have Java program using postgresql-8.1-409.jdbc3.jar to connect over\nthe network. In general it works very well. I have run batch updates\nwith several thousand records repeatedly that has worked fine.\n\n \n\nThe Program pulls a summation of the DB and does some processing with\nit. It starts off wonderfully running a query every .5 seconds.\nUnfortunately, after a while it will start running queries that take 20\nto 30 seconds.\n\n \n\nLooking at the EXPLAIN for the query no sequential scans are going on\nand everything has an index that points directly at its search criteria.\n\n \n\nExample:\n\n \n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=1\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=2\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=3\n\n.\n\n.\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=23\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=1 and b.hour=24\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=2 and b.hour=1\n\nSelect sum(whatever) from a inner join b on a.something=b.something\nWHERE b.day=2 and b.hour=2\n\n.\n\n.\n\n.\n\n \n\n \n\nThis query runs fine for a while (up to thousands of times). But what\nhappens is that it starts to have really nasty pauses when you switch\nthe day condition. After the first query with the day it runs like a\ncharm for 24 iterations, then slows back down again\n\n \n\nMy best guess was that an index never finished running, but REINDEX on\nthe table (b in this case) didn't seem to help.\n\nI'd think it has more to do with caching data. The first query caches\nthe days data, then the next day's data has to be read from disk.\n\n\n\n \n\nIdeas?\n\n \n\nAP\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\nDave:\n \nThinks for the thought, but I’m not sure\nhow to fix that. I’m going to increase the shared memory pages to 5K as soon\nas my latest vacuum finishes to see if that helps.\n \nAP\n \n\n\n\n\nFrom: Dave Cramer\n[mailto:[email protected]] \nSent: Wednesday, May 02, 2007 2:18\nPM\nTo: Parks,\n Aaron B.\nCc:\[email protected]\nSubject: Re: [PERFORM] Intermitent\nslow queries\n\n \n \n\n\nOn 2-May-07, at 11:24 AM, Parks, Aaron B.\nwrote:\n\n\n\n\n\nMy pg 8.1 install on an AMD-64 box (4\nprocessors) with 9 gigs of ram running RHEL4 is acting kind of odd and I\nthought I would see if anybody has any hints.\n \nI have Java program\nusing postgresql-8.1-409.jdbc3.jar to connect over the network. In\ngeneral it works very well. I have run batch updates with several\nthousand records repeatedly that has worked fine.\n \nThe Program pulls a\nsummation of the DB and does some processing with it. 
It starts off\nwonderfully running a query every .5 seconds. Unfortunately, after a\nwhile it will start running queries that take 20 to 30 seconds.\n \nLooking at the EXPLAIN\nfor the query no sequential scans are going on and everything has an index that\npoints directly at its search criteria.\n \nExample:\n \nSelect sum(whatever)\nfrom a inner join b on a.something=b.something WHERE b.day=1 and b.hour=1\nSelect sum(whatever)\nfrom a inner join b on a.something=b.something WHERE b.day=1 and b.hour=2\nSelect sum(whatever)\nfrom a inner join b on a.something=b.something WHERE b.day=1 and b.hour=3\n.\n.\nSelect sum(whatever)\nfrom a inner join b on a.something=b.something WHERE b.day=1 and b.hour=23\nSelect sum(whatever)\nfrom a inner join b on a.something=b.something WHERE b.day=1 and b.hour=24\nSelect sum(whatever)\nfrom a inner join b on a.something=b.something WHERE b.day=2 and b.hour=1\nSelect sum(whatever)\nfrom a inner join b on a.something=b.something WHERE b.day=2 and b.hour=2\n.\n.\n.\n \n \nThis query runs fine for\na while (up to thousands of times). But what happens is that it starts to have\nreally nasty pauses when you switch the day condition. After the first\nquery with the day it runs like a charm for 24 iterations, then slows back down\nagain\n \nMy best guess was that\nan index never finished running, but REINDEX on the table (b in this case)\ndidn’t seem to help.\n\nI'd think it has more to do with caching data. The first query caches\nthe days data, then the next day's data has to be read from disk.\n\n\n\n \nIdeas?\n \nAP",
"msg_date": "Wed, 2 May 2007 15:04:05 -0400",
"msg_from": "\"Parks, Aaron B.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intermitent slow queries"
},
{
"msg_contents": "Ron:\n\nI'm not sure how the JVM would really affect the issue as it is on a\nWindows box connecting remotely. As indicated the PG Server itself has\n9 gigs of ram and it never goes up above 1.8 total usage.\n\nIf the PG driver is doing something funny (IE waiting to send requests)\nthat's way out past my ability to fix it, so I will hope that's not it.\n\nYou can see the CPU slamming doing the queries, then after a while it\njust stops and all I get is tiny little blips on the usage.\n\nAP\n\n-----Original Message-----\nFrom: Ron [mailto:[email protected]] \nSent: Wednesday, May 02, 2007 2:55 PM\nTo: Parks, Aaron B.\nCc: [email protected]\nSubject: Re: [PERFORM] Intermitent slow queries\n\nAmong other possibilities, there's a known problem with slow memory \nleaks in various JVM's under circumstances similar to those you are\ndescribing.\nThe behavior you are describing is typical of this scenario. The \nincreasing delay is caused by longer and longer JVM garbage \ncollection runs as java attempts to reclaim enough memory from a \nsmaller and smaller universe of available memory.\n\nThe fastest test, and possible fix, is to go and buy more RAM. See \nif 16MB of RAM, heck even 10MB, makes the problem go away or delays \nit's onset. If so, there's good circumstantial evidence that you are \nbeing bitten by a slow memory leak; most likely in the JVM.\n\nCheers,\nRon Peacetree\n\n\nAt 11:24 AM 5/2/2007, Parks, Aaron B. wrote:\n>My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram \n>running RHEL4 is acting kind of odd and I thought I would see if \n>anybody has any hints.\n>\n>I have Java program using postgresql-8.1-409.jdbc3.jar to connect \n>over the network. In general it works very well. I have run batch \n>updates with several thousand records repeatedly that has worked fine.\n>\n>The Program pulls a summation of the DB and does some processing \n>with it. It starts off wonderfully running a query every .5 \n>seconds. Unfortunately, after a while it will start running queries \n>that take 20 to 30 seconds.\n>\n>Looking at the EXPLAIN for the query no sequential scans are going \n>on and everything has an index that points directly at its search\ncriteria.\n>\n>Example:\n>\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=1\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=2\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=3\n>.\n>.\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=23\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=1 and b.hour=24\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=2 and b.hour=1\n>Select sum(whatever) from a inner join b on a.something=b.something \n>WHERE b.day=2 and b.hour=2\n>.\n>.\n>.\n>\n>\n>This query runs fine for a while (up to thousands of times). But \n>what happens is that it starts to have really nasty pauses when you \n>switch the day condition. After the first query with the day it \n>runs like a charm for 24 iterations, then slows back down again\n>\n>My best guess was that an index never finished running, but REINDEX \n>on the table (b in this case) didn't seem to help.\n>\n>Ideas?\n>\n>AP\n\n",
"msg_date": "Wed, 2 May 2007 15:07:00 -0400",
"msg_from": "\"Parks, Aaron B.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intermitent slow queries"
},
{
"msg_contents": "Well, the traditional DBMS way of dealing with this sort of \nsummarization when the tables involved do not fit into RAM is to \ncreate a \"roll up\" table or tables for the time period commonly \nsummarized over.\n\nSince it looks like you've got a table with a row per hour, create \nanother that has a row per day that summarizes hours 1...24.\nDitto weekly, monthly, quarterly, or any other time period you \nfrequently summarize over.\n\nYes, this is explicitly a \"space for time\" trade-off. DBs are \ngenerally not very well suited to time series data.\n\nI also find it, errr, =interesting= that your dedicated pg server \nwith 9 GB of RAM \"never goes up above 1.8 in total usage\".\nThat simply does not make sense if your OS and pg conf files are \nconfigured correctly.\n\nMake sure that you are running 64b RHEL 4 that is patched / \nconfigured correctly to use the RAM you have.\n(with 4 ?multi-core? CPUs, you =are= running a recent 2.6 based kernel, right?)\n\nDitto checking the pg conf file to make sure the values therein are sane.\nWith 9 GB of RAM, you should be able to:\n=max out shared_buffers at 262143 (2 GB of shared buffers),\n=set work_mem and maintenance_work_mem to considerably larger than \nthe defaults.\n(If this query has the box to itself when running, you can set the \nmemory use parameters to values tuned specifically to the query.)\n=just for giggles, boost max_stack_depth from 2 MB -> 4 MB\n=set effective_cache_size to a realistic value given your HW + OS + \nthe tuning above.\n\nThe main point here is that most of your RAM should be in use. If \nyou are getting poor performance and most of the RAM is !not! in use, \nSomething's Wrong (tm).\n\nOf course, the \"holy grail\" is to have the entire data set you are \noperating over to be RAM resident during the query. If you can \nmanage that, said query should be =fast=.\nRAM is cheap enough that if you can make this query RAM resident by a \nreasonable combination of configuration + schema + RAM purchasing, \nyou should do it.\n\nCheers,\nRon Peacetree\n\n\nAt 03:07 PM 5/2/2007, Parks, Aaron B. wrote:\n>Ron:\n>\n>I'm not sure how the JVM would really affect the issue as it is on a\n>Windows box connecting remotely. As indicated the PG Server itself has\n>9 gigs of ram and it never goes up above 1.8 total usage.\n>\n>If the PG driver is doing something funny (IE waiting to send requests)\n>that's way out past my ability to fix it, so I will hope that's not it.\n>\n>You can see the CPU slamming doing the queries, then after a while it\n>just stops and all I get is tiny little blips on the usage.\n>\n>AP\n>\n>-----Original Message-----\n>From: Ron [mailto:[email protected]]\n>Sent: Wednesday, May 02, 2007 2:55 PM\n>To: Parks, Aaron B.\n>Cc: [email protected]\n>Subject: Re: [PERFORM] Intermitent slow queries\n>\n>Among other possibilities, there's a known problem with slow memory\n>leaks in various JVM's under circumstances similar to those you are\n>describing.\n>The behavior you are describing is typical of this scenario. The\n>increasing delay is caused by longer and longer JVM garbage\n>collection runs as java attempts to reclaim enough memory from a\n>smaller and smaller universe of available memory.\n>\n>The fastest test, and possible fix, is to go and buy more RAM. See\n>if 16MB of RAM, heck even 10MB, makes the problem go away or delays\n>it's onset. 
If so, there's good circumstantial evidence that you are\n>being bitten by a slow memory leak; most likely in the JVM.\n>\n>Cheers,\n>Ron Peacetree\n>\n>\n>At 11:24 AM 5/2/2007, Parks, Aaron B. wrote:\n> >My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram\n> >running RHEL4 is acting kind of odd and I thought I would see if\n> >anybody has any hints.\n> >\n> >I have Java program using postgresql-8.1-409.jdbc3.jar to connect\n> >over the network. In general it works very well. I have run batch\n> >updates with several thousand records repeatedly that has worked fine.\n> >\n> >The Program pulls a summation of the DB and does some processing\n> >with it. It starts off wonderfully running a query every .5\n> >seconds. Unfortunately, after a while it will start running queries\n> >that take 20 to 30 seconds.\n> >\n> >Looking at the EXPLAIN for the query no sequential scans are going\n> >on and everything has an index that points directly at its search\n>criteria.\n> >\n> >Example:\n> >\n> >Select sum(whatever) from a inner join b on a.something=b.something\n> >WHERE b.day=1 and b.hour=1\n> >Select sum(whatever) from a inner join b on a.something=b.something\n> >WHERE b.day=1 and b.hour=2\n> >Select sum(whatever) from a inner join b on a.something=b.something\n> >WHERE b.day=1 and b.hour=3\n> >.\n> >.\n> >Select sum(whatever) from a inner join b on a.something=b.something\n> >WHERE b.day=1 and b.hour=23\n> >Select sum(whatever) from a inner join b on a.something=b.something\n> >WHERE b.day=1 and b.hour=24\n> >Select sum(whatever) from a inner join b on a.something=b.something\n> >WHERE b.day=2 and b.hour=1\n> >Select sum(whatever) from a inner join b on a.something=b.something\n> >WHERE b.day=2 and b.hour=2\n> >.\n> >.\n> >.\n> >\n> >\n> >This query runs fine for a while (up to thousands of times). But\n> >what happens is that it starts to have really nasty pauses when you\n> >switch the day condition. After the first query with the day it\n> >runs like a charm for 24 iterations, then slows back down again\n> >\n> >My best guess was that an index never finished running, but REINDEX\n> >on the table (b in this case) didn't seem to help.\n> >\n> >Ideas?\n> >\n> >AP\n\n",
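A minimal sketch of the "roll up" idea, reusing the pseudo-schema from earlier in the thread (how and when the roll-up gets refreshed is left out here):

CREATE TABLE ab_hourly_rollup AS
SELECT b.day, b.hour, sum(whatever) AS whatever_sum
FROM a INNER JOIN b ON a.something = b.something
GROUP BY b.day, b.hour;

CREATE INDEX ab_hourly_rollup_day_hour_idx ON ab_hourly_rollup (day, hour);

-- each of the repeated summation queries then collapses to an indexed lookup:
SELECT whatever_sum FROM ab_hourly_rollup WHERE day = 2 AND hour = 1;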
"msg_date": "Thu, 03 May 2007 05:18:27 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intermitent slow queries"
}
] |
[
{
"msg_contents": "Hi,\n\nI am using postgres 8.1.3 for this. If this has been dealt with later, please disregard. And this is not a complaint or a request, I am just curious, so I know how to best construct my queries.\n\nThere is a unique index mapping domains to domain_ids.\n\nviews_ts specifies a partitioned table, where views_ts_2007_04_01 is the only partition matching the range given in the query.\n\nMy goal is to produce summaries of counts of rows for each day within a given range (can be days, months, years).\n\nThe issue: the second query results in a lower cost estimate. I am wondering why the second query plan was not chosen for the first query.\n\nThanks!\nBrian\n\nlive=> explain select ts::date,count(*) from views_ts join domains using (domain_id) where domain = '1234.com' and ts >= '2007-04-01' and ts < '2007-04-02' group by ts::date;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=9040.97..9041.00 rows=2 width=8)\n -> Hash Join (cost=6.01..9040.96 rows=2 width=8)\n Hash Cond: (\"outer\".domain_id = \"inner\".domain_id)\n -> Append (cost=0.00..7738.01 rows=259383 width=16)\n -> Seq Scan on views_ts (cost=0.00..1138.50 rows=1 width=16)\n Filter: ((ts >= '2007-04-01 00:00:00+10'::timestamp with time zone) AND (ts < '2007-04-02 00:00:00+10'::timestamp with time zone))\n -> Seq Scan on views_ts_2007_04_01 views_ts (cost=0.00..6599.51 rows=259382 width=16)\n Filter: ((ts >= '2007-04-01 00:00:00+10'::timestamp with time zone) AND (ts < '2007-04-02 00:00:00+10'::timestamp with time zone))\n -> Hash (cost=6.01..6.01 rows=1 width=8)\n -> Index Scan using domains_domain on domains (cost=0.00..6.01 rows=1 width=8)\n Index Cond: (\"domain\" = '1234.com'::text)\n(11 rows)\n\nlive=> explain select ts::date,count(*) from views_ts where domain_id = (select domain_id from domains where domain = '1234.com') and ts >= '2007-04-01' and ts < '2007-04-02' group by ts::date;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1993.93..1995.99 rows=137 width=8)\n InitPlan\n -> Index Scan using domains_domain on domains (cost=0.00..6.01 rows=1 width=8)\n Index Cond: (\"domain\" = '1234.com'::text)\n -> Result (cost=0.00..1986.69 rows=247 width=8)\n -> Append (cost=0.00..1986.07 rows=247 width=8)\n -> Seq Scan on views_ts (cost=0.00..1245.75 rows=1 width=8)\n Filter: ((domain_id = $0) AND (ts >= '2007-04-01 00:00:00+10'::timestamp with time zone) AND (ts < '2007-04-02 00:00:00+10'::timestamp with time zone))\n -> Bitmap Heap Scan on views_ts_2007_04_01 views_ts (cost=2.86..740.32 rows=246 width=8)\n Recheck Cond: (domain_id = $0)\n Filter: ((ts >= '2007-04-01 00:00:00+10'::timestamp with time zone) AND (ts < '2007-04-02 00:00:00+10'::timestamp with time zone))\n -> Bitmap Index Scan on views_ts_2007_04_01_domain_id (cost=0.00..2.86 rows=246 width=0)\n Index Cond: (domain_id = $0)\n\n\n\nHi,I am using postgres 8.1.3 for this. If this has been dealt with later, please disregard. 
",
"msg_date": "Wed, 2 May 2007 23:07:57 -0700 (PDT)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Join vs Subquery"
},
{
"msg_contents": "\"Brian Herlihy\" <[email protected]> writes:\n\n> There is a unique index mapping domains to domain_ids.\n...\n> The issue: the second query results in a lower cost estimate. I am wondering\n> why the second query plan was not chosen for the first query.\n\nWell the unique index you mentioned is critical to being able to conclude the\nqueries are equivalent. Postgres in the past hasn't been able to use things\nlike unique indexes to make planning decisions because it had no\ninfrastructure to replan if you dropped the index. We do have such\ninfrastructure now so it may be possible to add features like this in the\nfuture.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Thu, 03 May 2007 12:46:31 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join vs Subquery"
},
{
"msg_contents": "Brian Herlihy <[email protected]> writes:\n> The issue: the second query results in a lower cost estimate. I am wondering why the second query plan was not chosen for the first query.\n\n8.1 is incapable of pushing indexable join conditions down below an Append.\nTry 8.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 May 2007 10:23:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join vs Subquery "
}
] |
[
{
"msg_contents": "Today's survey is: just what are *you* doing to collect up the \ninformation about your system made available by the various pg_stat views? \nI have this hacked together script that dumps them into a file, imports \nthem into another database, and then queries against some of the more \ninteresting data. You would thing there would be an organized project \naddressing this need around to keep everyone from reinventing that wheel, \nbut I'm not aware of one.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 3 May 2007 10:45:48 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_* collection"
},
{
"msg_contents": "On 5/3/07, Greg Smith <[email protected]> wrote:\n> Today's survey is: just what are *you* doing to collect up the\n> information about your system made available by the various pg_stat views?\n> I have this hacked together script that dumps them into a file, imports\n> them into another database, and then queries against some of the more\n> interesting data. You would thing there would be an organized project\n> addressing this need around to keep everyone from reinventing that wheel,\n> but I'm not aware of one.\n\nI have a bunch of plugin scripts for Munin\n(http://munin.projects.linpro.no/) that collect PostgreSQL statistics.\nGraphs like this are useful:\n\n http://purefiction.net/paste/pg_munin_example.png\n\nI have been considering tarring them up as a proper release at some\npoint. Anyone interested?\n\nAlexander.\n",
"msg_date": "Thu, 3 May 2007 16:52:55 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_* collection"
},
{
"msg_contents": "On Thu, May 03, 2007 at 10:45:48AM -0400, Greg Smith wrote:\n> Today's survey is: just what are *you* doing to collect up the \n> information about your system made available by the various pg_stat views? \n> I have this hacked together script that dumps them into a file, imports \n> them into another database, and then queries against some of the more \n> interesting data. You would thing there would be an organized project \n> addressing this need around to keep everyone from reinventing that wheel, \n> but I'm not aware of one.\n\nIf you're interested in exposing them with snmp, join the pgsnmpd project\n:-)\n\n//Magnus\n\n",
"msg_date": "Thu, 3 May 2007 17:03:15 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_* collection"
},
{
"msg_contents": "[Alexander Staubo - Thu at 04:52:55PM +0200]\n> I have been considering tarring them up as a proper release at some\n> point. Anyone interested?\n\nYes.\n\nEventually I have my own collection as well:\n\ndb_activity - counts the number of (all, slow, very slow, stuck \"idle in transaction\") queries in progress; this is one of the better indicators on how busy/overloaded the database is.\n\n(I also have a separate script dumping the contents from\npg_stat_activity to a log file, which I frequentlymonitor by \"tail -F\").\n\ndb_commits + db_rollbacks pr database - I'm not sure if those are useful\nfor anything, will eventually remove them. Maybe nice to be able to\ncompare the activity between different databases running on the same\nhost, if they are comparable.\n\ndb_connections - num of connections compared to max connections. Useful\nfor alarms.\n\ndb_hanging_transactions - age of oldest transaction. Useful for alarms,\nsince hanging transactions can be very bad for the db performance.\n\ndb_locks - monitors the number of locks. I've never actually needed\nthis for anything, maybe I should remove it.\n\ndb_num_backends - number of backends, sorted by databases. Probably not\nso useful.\n\ndb_space (one for each database) - monitors space usage, found this\nscript through google.\n\ndb_xid_wraparound - gives alarms if the databases aren't beeing\nvacuumed.\n\n",
"msg_date": "Thu, 3 May 2007 18:16:46 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_* collection"
},
{
"msg_contents": "On Thu, 2007-05-03 at 10:45 -0400, Greg Smith wrote:\n> Today's survey is: just what are *you* doing to collect up the \n> information about your system made available by the various pg_stat views? \n> I have this hacked together script that dumps them into a file, imports \n> them into another database, and then queries against some of the more \n> interesting data. You would thing there would be an organized project \n> addressing this need around to keep everyone from reinventing that wheel, \n> but I'm not aware of one.\n> \n\nIs anyone out there collecting their own statistics? What's the easiest\nway to take statistical samples of the data in a table without reading\nthe entire thing?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 03 May 2007 11:21:30 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_* collection"
},
{
"msg_contents": "On Thu, 3 May 2007, Alexander Staubo wrote:\n\n> I have a bunch of plugin scripts for Munin that collect PostgreSQL \n> statistics. I have been considering tarring them up as a proper release \n> at some point.\n\nExcellent plan. Pop out a tar file, trade good ideas with Tobias, have \nsome other people play with the code and improve it. Let me know if you \nneed a place to put the files at, since I'd like to look at them anyway I \ncould easily dump them onto a web page while I was at it.\n\nMunin is a very interesting solution to this class of problem. They've \nmanaged to streamline the whole data collection process by layering clever \nPerl hacks three deep. It's like the anti-SNMP--just build the simplest \npossible interface that will work and then stop designing. The result is \nso easy to work with that it's no surprise people like Munin.\n\nIt's also completely inappropriate for any environment I work in, because \nthere really is no thought of security whatsoever in the whole thing. \nWhat I'm still thinking about is whether it's possible to fix that issue \nwhile still keeping the essential simplicity that makes Munin so friendly.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Fri, 4 May 2007 00:53:55 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_* collection"
},
{
"msg_contents": "[Greg Smith - Fri at 12:53:55AM -0400]\n> Munin is a very interesting solution to this class of problem. They've \n> managed to streamline the whole data collection process by layering clever \n> Perl hacks three deep. It's like the anti-SNMP--just build the simplest \n> possible interface that will work and then stop designing. The result is \n> so easy to work with that it's no surprise people like Munin.\n\nIt's fairly easy to throw in new graphs, and I like that. One of the\ndrawbacks is that it spends a lot of CPU building the graphs etc - if I\ncontinue adding graphs in my current speed, and we set up even more\nservers, soon it will take us more than five minutes generating the\ngraphs.\n\nAlso, local configuration can be tricky. Locally I fix this by loading\na config file with a hard-coded path. Luckily, as long as the postgres\nmunin plugins are run at localhost as the postgres user, most of them\ndon't need any configuration. Still, it can be useful to tune the alarm\nthresholds.\n\n> It's also completely inappropriate for any environment I work in, because \n> there really is no thought of security whatsoever in the whole thing. \n> What I'm still thinking about is whether it's possible to fix that issue \n> while still keeping the essential simplicity that makes Munin so friendly.\n\nWhat layers of security do you need? We're using https, basic auth and\nssh-tunnels. We've considered the munin data to be regarded as\nconfidential, at the other hand it's nothing ultra-secret there; i.e.\nsecuring the backups of the production database probably deserves more\nattention.\n\n",
"msg_date": "Fri, 4 May 2007 07:08:59 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_* collection"
},
{
"msg_contents": "On Fri, May 04, 2007 at 12:53:55AM -0400, Greg Smith wrote:\n>It's also completely inappropriate for any environment I work in, because \n>there really is no thought of security whatsoever in the whole thing. \n\nThat makes it sound more like snmp, not less. :-)\n\nMike Stone\n",
"msg_date": "Fri, 04 May 2007 08:47:53 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_* collection"
}
] |
[
{
"msg_contents": "\n\n> ------- Original Message -------\n> From: Josh Berkus <[email protected]>\n> To: [email protected]\n> Sent: 03/05/07, 20:21:55\n> Subject: Re: [PERFORM] Feature Request --- was: PostgreSQL Performance Tuning\n> \n> \n> And let's not even get started on Windows.\n\nWMI is your friend.\n\n/D\n",
"msg_date": "Thu, 3 May 2007 20:43:23 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Feature Request --- was: PostgreSQL Performance Tunin\n g"
}
] |
[
{
"msg_contents": "This is in Postgres 8.1.5\n\nI have a table like\nCREATE TABLE x (a VARCHAR, b VARCHAR, c VARCHAR);\nCREATE INDEX y on x(a);\nCREATE INDEX z on x(b);\n\nThere are over a million rows in 'x'. Neither a nor b are unique. \nThere are probably about 20 or so distinct values of a and 30 or so \ndistinct values of b\n\nI've done a 'vacuum analyze' first.\n\nIf I do\nEXPLAIN SELECT * FROM x ORDER BY a;\nit says\n Index Scan using y on x (cost=0.00..2903824.15 rows=1508057 width=152)\n\nThat's what I'd expect\n\nHowever, if I do\nEXPLAIN SELECT * FROM x ORDER BY b;\nit says\nSort (cost=711557.34..715327.48 rows=1508057 \nwidth=152)\n Sort Key: \nb\n -> Seq Scan on x (cost=0.00..53203.57 rows=1508057 width=152)\n\nWhy doesn't it use the other index? If use 'set enable_seqscan=0' then it does.\n\nI tried using EXPLAIN ANALYZE to see how long it actually took:\n- seq scan - 75 secs\n- index scan - 13 secs\n- seq scan - 77 secs\n(I tried the seq scan version after the index scan as well to see if \ndisk caching was a factor, but it doesn't look like it)\n\nIf I do something like SELECT * FROM x WHERE b='...'; then it does \nuse the index , it's just for ordering it doesn't seem to. (Yes, it's \na BTREE index, not a hash index)\n\nOh, and if I use\nEXPLAIN SELECT * FROM x ORDER BY b LIMIT 100000;\nthen it uses the index scan, not the seq scan.\nIf I use\nEXPLAIN SELECT * FROM x ORDER BY b LIMIT 1000000;\nit uses the seq scan again, so I can't just set an arbitrarily big \nlimit to use the index.\n\nAny ideas? To me it looks like a bug in the planner. I can't think of \nany logical reason not to use an existing index to retrieve a sorted \nlisting of the data.\n\nPaul VPOP3 - Internet Email Server/Gateway\[email protected] http://www.pscs.co.uk/\n\n\n",
"msg_date": "Fri, 04 May 2007 15:36:19 +0100",
"msg_from": "Paul Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index not being used in sorting of simple table"
},
{
"msg_contents": "Paul Smith wrote:\n> Why doesn't it use the other index? If use 'set enable_seqscan=0' then \n> it does.\n\nJust a guess, but is the table clustered on column a? Maybe not \nexplicitly, but was it loaded from data that was sorted by a?\n\nAnalyzer calculates the correlation between physical order and each \ncolumn. The planner will favor index scans instead of sorting when the \ncorrelation is strong, and it thinks the data doesn't fit in memory. \nOtherwise an explicitly sort will result in less I/O and be therefore \nmore favorable.\n\nYou can check the correlation stats with:\nSELECT tablename, attname, correlation FROM pg_stats where tablename='x';\n\n> I tried using EXPLAIN ANALYZE to see how long it actually took:\n> - seq scan - 75 secs\n> - index scan - 13 secs\n> - seq scan - 77 secs\n\n> (I tried the seq scan version after the index scan as well to see if \n> disk caching was a factor, but it doesn't look like it)\n\nThat won't flush the heap pages from cache...\n\nHow much memory do you have and how large is the table? I suspect that \nthe planner thinks it doesn't fit in memory, and therefore favors the \nseqscan+sort plan which would require less random I/O, but in reality \nit's in cache and the index scan is faster because it doesn't need to \nsort. Have you set your effective_cache_size properly?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 04 May 2007 16:26:23 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index not being used in sorting of simple table"
},
{
"msg_contents": "Paul Smith <[email protected]> writes:\n> If I do\n> EXPLAIN SELECT * FROM x ORDER BY a;\n> it says\n> Index Scan using y on x (cost=0.00..2903824.15 rows=1508057 width=152)\n\n> That's what I'd expect\n\n> However, if I do\n> EXPLAIN SELECT * FROM x ORDER BY b;\n> it says\n> Sort (cost=711557.34..715327.48 rows=1508057 \n> width=152)\n> Sort Key: \n> b\n> -> Seq Scan on x (cost=0.00..53203.57 rows=1508057 width=152)\n\n> Why doesn't it use the other index?\n\nYou have the question backwards: given those cost estimates, I'd wonder\nwhy it doesn't do a sort in both cases. Offhand I think the sort cost\nestimate is pretty much independent of the data itself, so it should\nhave come out with a cost near 715327 for sorting on A, so why's it\nusing an indexscan that costs more than 4x as much?\n\nThe indexscan cost estimate varies quite a bit depending on the\nestimated correlation (physical ordering) of the column, so seeing\nit do different things in the two cases isn't surprising in itself.\nBut I think there's some relevant factor you've left out of the example.\n\nAs for getting the estimates more in line with reality, you probably\nneed to play with random_page_cost and/or effective_cache_size.\n\n> Any ideas? To me it looks like a bug in the planner. I can't think of \n> any logical reason not to use an existing index to retrieve a sorted \n> listing of the data.\n\nSorry, but using a forced sort frequently *is* faster than a full-table\nindexscan. It all depends on how much locality of reference there is,\nie how well the index order and physical table order match up. The\nplanner's statistical correlation estimate and cost parameters may be\nfar enough off to make it pick the wrong choice, but it's not a bug that\nit considers the options.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 May 2007 11:43:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index not being used in sorting of simple table "
},
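A hedged illustration of how the estimates could be compared with reality and how the parameters Tom mentions can be tried per session (table and column names are from Paul's example; the values are starting points only):

EXPLAIN ANALYZE SELECT * FROM x ORDER BY a;  -- actual runtime of the indexscan plan
EXPLAIN ANALYZE SELECT * FROM x ORDER BY b;  -- actual runtime of the seqscan-plus-sort plan

SET random_page_cost = 2.0;        -- default is 4.0; lower values make index scans look cheaper
SET effective_cache_size = 51200;  -- in 8 kB pages (about 400 MB)
EXPLAIN ANALYZE SELECT * FROM x ORDER BY b;  -- see whether the chosen plan changes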
{
"msg_contents": "At 16:26 04/05/2007, you wrote:\n>Paul Smith wrote:\n>>Why doesn't it use the other index? If use 'set enable_seqscan=0' \n>>then it does.\n>\n>Just a guess, but is the table clustered on column a? Maybe not \n>explicitly, but was it loaded from data that was sorted by a?\n\nI wouldn't have thought so - a is pretty 'random' as far as order of \ninsertion goes. On the other hand 'b' (the one whose index doesn't \nget used) is probably pretty correlated - 'b' is the date when the \nentry was added to the table, so they would be added in order of 'b' \n(they also get deleted after a while, and I'm not sure how PGSQL \nre-uses deleted rows that have been vacuumed)\n\n>Analyzer calculates the correlation between physical order and each \n>column. The planner will favor index scans instead of sorting when \n>the correlation is strong, and it thinks the data doesn't fit in \n>memory. Otherwise an explicitly sort will result in less I/O and be \n>therefore more favorable.\n\nAh, I see.\n\n>You can check the correlation stats with:\n>SELECT tablename, attname, correlation FROM pg_stats where tablename='x';\n\nThere I get\n x | a | 0.977819\n x | b | 0.78292\n\nThis is a bit odd, because I'd have thought they'd be more correlated \non 'b' than 'a'..\n\n>>I tried using EXPLAIN ANALYZE to see how long it actually took:\n>>- seq scan - 75 secs\n>>- index scan - 13 secs\n>>- seq scan - 77 secs\n>\n>>(I tried the seq scan version after the index scan as well to see \n>>if disk caching was a factor, but it doesn't look like it)\n>\n>That won't flush the heap pages from cache...\n\nNo, I know, but it would mean that if the pages were being loaded \ninto disk cache by the first scan which would make the second scan \nquicker, it would probably make the third one quicker as well.\n\n>How much memory do you have and how large is the table?\n\nThe table is about 300MB. I have 2GB RAM on my PC (but most of it is \nin use - the disk cache size is currently 600MB).\n\n>I suspect that the planner thinks it doesn't fit in memory, and \n>therefore favors the seqscan+sort plan which would require less random I/O,\n>but in reality it's in cache and the index scan is faster because it \n>doesn't need to sort. Have you set your effective_cache_size properly?\n\nI haven't set that at all - it's the default..\n\nIf I set this to 51200 (I think that means 400MB) then it does use \nthe index scan method, so thanks for this bit of info.\n\n\nPaul VPOP3 - Internet Email Server/Gateway\[email protected] http://www.pscs.co.uk/\n\n\n",
"msg_date": "Fri, 04 May 2007 17:10:48 +0100",
"msg_from": "Paul Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index not being used in sorting of simple table"
},
{
"msg_contents": "Hi,\n\nPaul:\nQuite like Tom, I too think that its the first query that is more intriguing\nthan the second one. (The expected cost for the indexscan (A) query is 4x\nthe expected time for the 'Sequential Scan' (B) query !!)\n\nCould you provide with the (complete output of) EXPLAIN ANALYZE times for\nboth of these queries ? That would tell how much time it actually took as\ncompared to the expected times.\n\nTom:\nThere is one thing though, that I couldn't really understand. Considering\nthat A's correlation in pg_stats being very high compared to B, isn't it 'a\nbetter candidate' for a sequential scan as compared to B in this scenario ?\nOr is it the other way around ?\n\nRegards,\nRobins Tharakan\n\nOn 5/4/07, Tom Lane <[email protected]> wrote:\n>\n> Paul Smith <[email protected]> writes:\n> > If I do\n> > EXPLAIN SELECT * FROM x ORDER BY a;\n> > it says\n> > Index Scan using y on x (cost=0.00..2903824.15 rows=1508057\n> width=152)\n>\n> > That's what I'd expect\n>\n> > However, if I do\n> > EXPLAIN SELECT * FROM x ORDER BY b;\n> > it says\n> > Sort (cost=711557.34..715327.48 rows=1508057\n> > width=152)\n> > Sort Key:\n> > b\n> > -> Seq Scan on x (cost=0.00..53203.57 rows=1508057 width=152)\n>\n> > Why doesn't it use the other index?\n>\n> You have the question backwards: given those cost estimates, I'd wonder\n> why it doesn't do a sort in both cases. Offhand I think the sort cost\n> estimate is pretty much independent of the data itself, so it should\n> have come out with a cost near 715327 for sorting on A, so why's it\n> using an indexscan that costs more than 4x as much?\n>\n> The indexscan cost estimate varies quite a bit depending on the\n> estimated correlation (physical ordering) of the column, so seeing\n> it do different things in the two cases isn't surprising in itself.\n> But I think there's some relevant factor you've left out of the example.\n>\n> As for getting the estimates more in line with reality, you probably\n> need to play with random_page_cost and/or effective_cache_size.\n>\n> > Any ideas? To me it looks like a bug in the planner. I can't think of\n> > any logical reason not to use an existing index to retrieve a sorted\n> > listing of the data.\n>\n> Sorry, but using a forced sort frequently *is* faster than a full-table\n> indexscan. It all depends on how much locality of reference there is,\n> ie how well the index order and physical table order match up. The\n> planner's statistical correlation estimate and cost parameters may be\n> far enough off to make it pick the wrong choice, but it's not a bug that\n> it considers the options.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n\n\n-- \nRobins\n\nHi,Paul:Quite like Tom, I too think that its the first\nquery that is more intriguing than the second one. (The expected cost for the indexscan (A) query is 4x the expected time\nfor the 'Sequential Scan' (B) query !!)Could you provide with the (complete output of) EXPLAIN\nANALYZE times for both of these queries ? That would tell how much time\nit actually took as compared to the expected times.Tom:There\nis one thing though, that I couldn't really understand. Considering\nthat A's correlation in pg_stats being very high compared to B, isn't\nit 'a better candidate' for a sequential scan as compared to B in this\nscenario ? 
Or is it the other way around ?\nRegards,Robins TharakanOn 5/4/07, Tom Lane <[email protected]> wrote:\nPaul Smith <[email protected]> writes:> If I do> EXPLAIN SELECT * FROM x ORDER BY a;> it says> Index Scan using y on x (cost=0.00..2903824.15\n rows=1508057 width=152)> That's what I'd expect> However, if I do> EXPLAIN SELECT * FROM x ORDER BY b;> it says> Sort (cost=711557.34..715327.48 rows=1508057> width=152)\n> Sort Key:> b> -> Seq Scan on x (cost=0.00..53203.57 rows=1508057 width=152)> Why doesn't it use the other index?You have the question backwards: given those cost estimates, I'd wonder\nwhy it doesn't do a sort in both cases. Offhand I think the sort costestimate is pretty much independent of the data itself, so it shouldhave come out with a cost near 715327 for sorting on A, so why's it\nusing an indexscan that costs more than 4x as much?The indexscan cost estimate varies quite a bit depending on theestimated correlation (physical ordering) of the column, so seeingit do different things in the two cases isn't surprising in itself.\nBut I think there's some relevant factor you've left out of the example.As for getting the estimates more in line with reality, you probablyneed to play with random_page_cost and/or effective_cache_size.\n> Any ideas? To me it looks like a bug in the planner. I can't think of> any logical reason not to use an existing index to retrieve a sorted> listing of the data.Sorry, but using a forced sort frequently *is* faster than a full-table\nindexscan. It all depends on how much locality of reference there is,ie how well the index order and physical table order match up. Theplanner's statistical correlation estimate and cost parameters may be\nfar enough off to make it pick the wrong choice, but it's not a bug thatit considers the options. regards, tom lane---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend-- Robins",
"msg_date": "Sun, 6 May 2007 18:07:45 +0530",
"msg_from": "Robins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index not being used in sorting of simple table"
},
{
"msg_contents": "Robins <[email protected]> writes:\n> There is one thing though, that I couldn't really understand. Considering\n> that A's correlation in pg_stats being very high compared to B, isn't it 'a\n> better candidate' for a sequential scan as compared to B in this scenario ?\n\nNo, high correlation reduces the cost of an indexscan but doesn't do\nanything much for a seqscan-and-sort. (Actually, I suppose it could\nhelp by reducing the number of initial runs to be merged, but that's\nnot an effect the planner knows about.) The interesting point is that\nPaul shows\n\nSELECT tablename, attname, correlation FROM pg_stats where tablename='x';\n x | a | 0.977819\n x | b | 0.78292\n\nwhen his initial verbal description indicated that b should have the\nbetter correlation. So that's something else odd about this case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 May 2007 20:35:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index not being used in sorting of simple table "
}
] |
[
{
"msg_contents": "I hope someone can help me with this vacuum problem. I can post more \ninfo if needed.\n\nVersions: Postgresql version 8.09 on FreeBSD 6.1\nSituation: huge amounts of adds and deletes daily. Running daily vacuums\nProblem: Vacuum times jump up from 45 minutes, or 1:30 minutes to 6+ \nhours overnight, once every 1 to 3 months.\nSolutions tried: db truncate - brings vacuum times down. Reindexing \nbrings vacuum times down.\n\nI know my indexes are getting fragmented and my tables are getting \nfragmented. I also know that some of my btree indexes are not being used \nin queries. I also know that using \"UNIQUE\" in a query makes PG ignore \nany index.\n\nI am looking for the cause of this. Recently I have been looking at \nEXPLAIN and ANALYZE.\n1. Running EXPLAIN on a query tells me how my query SHOULD run and \nrunning ANALYZE tells me how it DOES run. Is that correct?\n2. If (1) is true, then a difference between the two means my query \nplan is messed up and running ANALYZE on a table-level will somehow \nrebuild the plan. Is that correct?\n3. If (2) is correct, then running ANALYZE on a nightly basis before \nrunning vacuum will keep vacuum times down. Is that correct?\n\nYudhvir Singh\n",
"msg_date": "Sat, 05 May 2007 15:57:25 -0700",
"msg_from": "Yudhvir Singh Sidhu <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to Find Cause of Long Vacuum Times - NOOB Question"
},
{
"msg_contents": "On Sat, May 05, 2007 at 03:57:25PM -0700, Yudhvir Singh Sidhu wrote:\n> Situation: huge amounts of adds and deletes daily. Running daily vacuums\n\nIf you have huge amounts of adds and deletes, you might want to vacuum more\noften; optionally, look into autovacuum.\n\n> Problem: Vacuum times jump up from 45 minutes, or 1:30 minutes to 6+ \n> hours overnight, once every 1 to 3 months.\n\nYou might want to check your FSM settings. Take a look at the output of\nVACUUM VERBOSE and see how the results stack up against your FSM settings.\nOptionally, you could do a VACUUM FULL to clear the bloat, but this will lock\nthe tables and is not recommended on a regular basis.\n\n> I know my indexes are getting fragmented and my tables are getting \n> fragmented. \n\nThis sounds like a case of table bloat, ie. vacuuming too seldom and/or too\nlow FSM settings.\n\n> I also know that some of my btree indexes are not being used in queries.\n\nThis is a separate problem, usually; if you need help with a specific query,\npost query and the EXPLAIN ANALYZE output here. (Note that using an index is\nnot always a win; Postgres' planner knows about this and tries to figure out\nwhen it is a win and when it is not.)\n\n> I also know that using \"UNIQUE\" in a query makes PG ignore any index.\n\nDo you mean DISTINCT? There are known problems with SELECT DISTINCT, but I'm\nnot sure how it could make Postgres start ignoring an index. Again, it's a\nseparate problem.\n\n> I am looking for the cause of this. Recently I have been looking at \n> EXPLAIN and ANALYZE.\n\nThis is a good beginning. :-)\n\n> 1. Running EXPLAIN on a query tells me how my query SHOULD run and \n> running ANALYZE tells me how it DOES run. Is that correct?\n\nNearly. EXPLAIN tells you how the plan Postgres has chosen, with estimates on\nthe costs of each step. EXPLAIN ANALYZE (just plain \"ANALYZE\" is a different\ncommand, which updates the planner's statistics) does the same, but also runs\nthe query and shows the time each step ended up taking. (Note that the\nunits of the estimates and the timings are different, so you can't compare\nthem directly.)\n\n> 2. If (1) is true, then a difference between the two means my query \n> plan is messed up and running ANALYZE on a table-level will somehow \n> rebuild the plan. Is that correct?\n\nAgain, sort of right, but not entirely. ANALYZE updates the planner's\nstatistics. Having good statistics is very useful for the planner in\nselecting the plan that actually ends up being the best.\n\n> 3. If (2) is correct, then running ANALYZE on a nightly basis before \n> running vacuum will keep vacuum times down. Is that correct?\n\nNo, ANALYZE will only update planner statistics, which has nothing to do with\nvacuum times. On the other hand, it might help with some of your queries.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 6 May 2007 01:40:14 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to Find Cause of Long Vacuum Times - NOOB Question"
},
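To make the distinction above concrete, a minimal sketch against a hypothetical table:

EXPLAIN SELECT * FROM mytable WHERE status = 0;          -- shows the chosen plan with cost estimates only
EXPLAIN ANALYZE SELECT * FROM mytable WHERE status = 0;  -- also runs the query and reports actual times
ANALYZE mytable;          -- refreshes planner statistics; does not reclaim any space
VACUUM VERBOSE mytable;   -- reclaims dead rows and prints numbers to compare with the FSM settings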
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Sat, May 05, 2007 at 03:57:25PM -0700, Yudhvir Singh Sidhu wrote:\n> \n>> Situation: huge amounts of adds and deletes daily. Running daily vacuums\n>> \n>\n> If you have huge amounts of adds and deletes, you might want to vacuum more\n> often; optionally, look into autovacuum.\n>\n> \n>> Problem: Vacuum times jump up from 45 minutes, or 1:30 minutes to 6+ \n>> hours overnight, once every 1 to 3 months.\n>> \n>\n> You might want to check your FSM settings. Take a look at the output of\n> VACUUM VERBOSE and see how the results stack up against your FSM settings.\n> Optionally, you could do a VACUUM FULL to clear the bloat, but this will lock\n> the tables and is not recommended on a regular basis.\n>\n> \n>> I know my indexes are getting fragmented and my tables are getting \n>> fragmented. \n>> \n>\n> This sounds like a case of table bloat, ie. vacuuming too seldom and/or too\n> low FSM settings.\n>\n> \n>> I also know that some of my btree indexes are not being used in queries.\n>> \n>\n> This is a separate problem, usually; if you need help with a specific query,\n> post query and the EXPLAIN ANALYZE output here. (Note that using an index is\n> not always a win; Postgres' planner knows about this and tries to figure out\n> when it is a win and when it is not.)\n>\n> \n>> I also know that using \"UNIQUE\" in a query makes PG ignore any index.\n>> \n>\n> Do you mean DISTINCT? There are known problems with SELECT DISTINCT, but I'm\n> not sure how it could make Postgres start ignoring an index. Again, it's a\n> separate problem.\n>\n> \n>> I am looking for the cause of this. Recently I have been looking at \n>> EXPLAIN and ANALYZE.\n>> \n>\n> This is a good beginning. :-)\n>\n> \n>> 1. Running EXPLAIN on a query tells me how my query SHOULD run and \n>> running ANALYZE tells me how it DOES run. Is that correct?\n>> \n>\n> Nearly. EXPLAIN tells you how the plan Postgres has chosen, with estimates on\n> the costs of each step. EXPLAIN ANALYZE (just plain \"ANALYZE\" is a different\n> command, which updates the planner's statistics) does the same, but also runs\n> the query and shows the time each step ended up taking. (Note that the\n> units of the estimates and the timings are different, so you can't compare\n> them directly.)\n>\n> \n>> 2. If (1) is true, then a difference between the two means my query \n>> plan is messed up and running ANALYZE on a table-level will somehow \n>> rebuild the plan. Is that correct?\n>> \n>\n> Again, sort of right, but not entirely. ANALYZE updates the planner's\n> statistics. Having good statistics is very useful for the planner in\n> selecting the plan that actually ends up being the best.\n>\n> \n>> 3. If (2) is correct, then running ANALYZE on a nightly basis before \n>> running vacuum will keep vacuum times down. Is that correct?\n>> \n>\n> No, ANALYZE will only update planner statistics, which has nothing to do with\n> vacuum times. On the other hand, it might help with some of your queries.\n>\n> /* Steinar */\n> \nGee Wow. I am so glad I looked into this subject. I think I am onto the \nright path in solving the long-running vacuum problem. Thanks a lot for \nthe detailed insight Steinar.\n\nHere is what I think the story is:\na. Large amounts of rows are added to and deleted from a table - daily. \nWith this much activity, the statistics get out of whack easily. That's \nwhere ANALYZE or VACUUM ANALYZE would help with query speed.\nb. If ANALYZE does not have a direct impact on vacuum times, what does? 
\nMeaning what in this EXPLAIN/ANALYZE and Indexing world would have a \ndirect impact?\n\nAgain, thank you Steinar for validating my suspicion. It is great to be \non the right path.\n\nYudhvir\n\n\n\n\nHere is another command and I suspect does something different than \nANALYZE by itself: VACUUM ANALYZE.\n",
"msg_date": "Sat, 05 May 2007 21:52:56 -0700",
"msg_from": "Yudhvir Singh Sidhu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to Find Cause of Long Vacuum Times - NOOB Question"
},
{
"msg_contents": "Yudhvir Singh Sidhu wrote:\n> Versions: Postgresql version 8.09 on FreeBSD 6.1\n> Situation: huge amounts of adds and deletes daily. Running daily vacuums\n> Problem: Vacuum times jump up from 45 minutes, or 1:30 minutes to 6+ \n> hours overnight, once every 1 to 3 months.\n> Solutions tried: db truncate - brings vacuum times down. Reindexing \n> brings vacuum times down.\n> \n> I know my indexes are getting fragmented and my tables are getting \n> fragmented. I also know that some of my btree indexes are not being used \n> in queries. I also know that using \"UNIQUE\" in a query makes PG ignore \n> any index.\n\nIf the increase in vacuum time is indeed because of index fragmentation, \nupgrading to 8.2 might help. Since 8.2, we vacuum indexes in physical \norder, which speeds it up significantly, especially on fragmented indexes.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sun, 06 May 2007 09:15:13 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to Find Cause of Long Vacuum Times - NOOB Question"
},
{
"msg_contents": "On Sat, May 05, 2007 at 09:52:56PM -0700, Yudhvir Singh Sidhu wrote:\n> Here is what I think the story is:\n> a. Large amounts of rows are added to and deleted from a table - daily. \n> With this much activity, the statistics get out of whack easily. That's \n> where ANALYZE or VACUUM ANALYZE would help with query speed.\n\nYou are still confusing ANALYZE and VACUUM. Those are distinct operations,\nand help for different reasons.\n\nDeleting rows leaves \"dead rows\" -- for various reasons, Postgres can't\nactually remove them from disk at the DELETE point. VACUUM scans through the\ndisk, searching for dead rows, and actually marks them as removed. This\nresults in faster query times since there will be less data overall to search\nfor.\n\nANALYZE updates the statistics, as mentioned. Yes, by adding or deleting a\nlot of data, the estimates can get out of whack, leading to bad query plans.\n\n> b. If ANALYZE does not have a direct impact on vacuum times, what does? \n> Meaning what in this EXPLAIN/ANALYZE and Indexing world would have a \n> direct impact?\n\nImproving your vacuum speed is overall not that easy (although there are\noptions you can tweak, and you can of course improve your hardware). The\nsimplest thing to do is simply to vacuum more often, as there will be less\nwork to do each time. It's a bit like cleaning your house -- it might be\nless work to clean it once a year, but it sure is a better idea in the long\nrun to clean a bit every now and then. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 6 May 2007 11:17:07 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to Find Cause of Long Vacuum Times - NOOB Question"
},
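A small sketch of the two distinct operations (and the combined form), against a hypothetical table:

VACUUM busy_table;          -- marks dead rows so their space can be reused
ANALYZE busy_table;         -- refreshes the statistics the planner uses
VACUUM ANALYZE busy_table;  -- does both in one pass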
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Sat, May 05, 2007 at 09:52:56PM -0700, Yudhvir Singh Sidhu wrote:\n> \n>> Here is what I think the story is:\n>> a. Large amounts of rows are added to and deleted from a table - daily. \n>> With this much activity, the statistics get out of whack easily. That's \n>> where ANALYZE or VACUUM ANALYZE would help with query speed.\n>> \n>\n> You are still confusing ANALYZE and VACUUM. Those are distinct operations,\n> and help for different reasons.\n>\n> Deleting rows leaves \"dead rows\" -- for various reasons, Postgres can't\n> actually remove them from disk at the DELETE point. VACUUM scans through the\n> disk, searching for dead rows, and actually marks them as removed. This\n> results in faster query times since there will be less data overall to search\n> for.\n>\n> ANALYZE updates the statistics, as mentioned. Yes, by adding or deleting a\n> lot of data, the estimates can get out of whack, leading to bad query plans.\n>\n> \n>> b. If ANALYZE does not have a direct impact on vacuum times, what does? \n>> Meaning what in this EXPLAIN/ANALYZE and Indexing world would have a \n>> direct impact?\n>> \n>\n> Improving your vacuum speed is overall not that easy (although there are\n> options you can tweak, and you can of course improve your hardware). The\n> simplest thing to do is simply to vacuum more often, as there will be less\n> work to do each time. It's a bit like cleaning your house -- it might be\n> less work to clean it once a year, but it sure is a better idea in the long\n> run to clean a bit every now and then. :-)\n>\n> /* Steinar */\n> \n\nThanks for the clarification Steingar,\n\nI'll try some of the things we discussed out on Monday and will let you \nguys know what happens. I know I am confusing some concepts but I am new \nto this db and to tuning in general. I am excited about this new \nadventure and really appreciate the level of support I have seen.\n\nYudhvir\n",
"msg_date": "Sun, 06 May 2007 02:35:30 -0700",
"msg_from": "Yudhvir Singh Sidhu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to Find Cause of Long Vacuum Times - NOOB Question"
},
{
"msg_contents": "On May 5, 2007, at 5:57 PM, Yudhvir Singh Sidhu wrote:\n> Problem: Vacuum times jump up from 45 minutes, or 1:30 minutes to 6 \n> + hours overnight, once every 1 to 3 months.\n> Solutions tried: db truncate - brings vacuum times down. \n> Reindexing brings vacuum times down.\n\nDoes it jump up to 6+ hours just once and then come back down? Or \nonce at 6+ hours does it stay there?\n\nGetting that kind of change in vacuum time sounds a lot like you \nsuddenly didn't have enough maintenance_work_mem to remember all the \ndead tuples in one pass; increasing that setting might bring things \nback in line (you can increase it on a per-session basis, too).\n\nAlso, have you considered vacuuming during the day, perhaps via \nautovacuum? If you can vacuum more often you'll probably get less \nbloat. You'll probably want to experiment with the vacuum_cost_delay \nsettings to reduce the impact of vacuuming during the day (try \nsetting vacuum_cost_delay to 20 as a starting point).\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Mon, 7 May 2007 10:54:53 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to Find Cause of Long Vacuum Times - NOOB Question"
},
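A hedged sketch of the per-session knobs Jim mentions; the values are examples only, with maintenance_work_mem given in kilobytes and vacuum_cost_delay in milliseconds:

SET maintenance_work_mem = 262144;  -- about 256 MB, for this session's vacuum only
SET vacuum_cost_delay = 20;         -- throttles vacuum I/O so it can run during the day
VACUUM VERBOSE busy_table;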
{
"msg_contents": "Jim Nasby wrote:\n> On May 5, 2007, at 5:57 PM, Yudhvir Singh Sidhu wrote:\n>> Problem: Vacuum times jump up from 45 minutes, or 1:30 minutes to 6+ \n>> hours overnight, once every 1 to 3 months.\n>> Solutions tried: db truncate - brings vacuum times down. Reindexing \n>> brings vacuum times down.\n>\n> Does it jump up to 6+ hours just once and then come back down? Or once \n> at 6+ hours does it stay there?\n>\n> Getting that kind of change in vacuum time sounds a lot like you \n> suddenly didn't have enough maintenance_work_mem to remember all the \n> dead tuples in one pass; increasing that setting might bring things \n> back in line (you can increase it on a per-session basis, too).\n>\n> Also, have you considered vacuuming during the day, perhaps via \n> autovacuum? If you can vacuum more often you'll probably get less \n> bloat. You'll probably want to experiment with the vacuum_cost_delay \n> settings to reduce the impact of vacuuming during the day (try setting \n> vacuum_cost_delay to 20 as a starting point).\n> -- \n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n>\n>\nIt ramps up and I have to run a db truncate to bring it back down. On \nsome machines it creeps up, on others it spikes. I have seen it climb \nfrom 6 to 12 to 21 in 3 consequtive days. Well, what's one to do? I have \nmaintenance_work_mem set to 32768 - Is that enough? I vacuum daily.\n\nI just turned vacuum verbose on on one of the systems and will find out \ntomorrow what it shows me. I plan on playing with Max_fsm_ settings \ntomorrow. And I'll keep you guys up to date.\n\nYudhvir\n\n\n",
"msg_date": "Mon, 07 May 2007 21:10:23 -0700",
"msg_from": "Yudhvir Singh Sidhu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to Find Cause of Long Vacuum Times - NOOB Question"
},
{
"msg_contents": "On May 7, 2007, at 11:10 PM, Yudhvir Singh Sidhu wrote:\n> Jim Nasby wrote:\n>> On May 5, 2007, at 5:57 PM, Yudhvir Singh Sidhu wrote:\n>>> Problem: Vacuum times jump up from 45 minutes, or 1:30 minutes \n>>> to 6+ hours overnight, once every 1 to 3 months.\n>>> Solutions tried: db truncate - brings vacuum times down. \n>>> Reindexing brings vacuum times down.\n>>\n>> Does it jump up to 6+ hours just once and then come back down? Or \n>> once at 6+ hours does it stay there?\n>>\n>> Getting that kind of change in vacuum time sounds a lot like you \n>> suddenly didn't have enough maintenance_work_mem to remember all \n>> the dead tuples in one pass; increasing that setting might bring \n>> things back in line (you can increase it on a per-session basis, \n>> too).\n>>\n>> Also, have you considered vacuuming during the day, perhaps via \n>> autovacuum? If you can vacuum more often you'll probably get less \n>> bloat. You'll probably want to experiment with the \n>> vacuum_cost_delay settings to reduce the impact of vacuuming \n>> during the day (try setting vacuum_cost_delay to 20 as a starting \n>> point).\n> It ramps up and I have to run a db truncate to bring it back down. \n> On some machines it creeps up, on others it spikes. I have seen it \n> climb from 6 to 12 to 21 in 3 consequtive days. Well, what's one to \n> do? I have maintenance_work_mem set to 32768 - Is that enough?\n\nDepends on how many dead rows there are to be vacuumed. If there's a \nlot, you could certainly be exceeding maintenance_work_mem. If you \nlook closely at the output of VACUUM VERBOSE you'll see the indexes \nfor a particular table being scanned more than once if all the dead \nrows can't fit into maintenance_work_mem.\n\n> I vacuum daily.\n\nIf you've got high update rates, that very likely might not be often \nenough.\n\n> I just turned vacuum verbose on on one of the systems and will find \n> out tomorrow what it shows me. I plan on playing with Max_fsm_ \n> settings tomorrow. And I'll keep you guys up to date.\n\nThe tail end of vacuumdb -av will tell you exactly how much room is \nneeded in the FSM.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Wed, 9 May 2007 12:14:26 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to Find Cause of Long Vacuum Times - NOOB Question"
}
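One way to check the two things Jim points at from a psql session (vacuumdb -av does roughly the same across all databases):

VACUUM VERBOSE;            -- the final lines report how many free-space-map page slots are needed
SHOW max_fsm_pages;        -- compare the configured limits against that report
SHOW max_fsm_relations;
SHOW maintenance_work_mem; -- if the per-table output scans the same index more than once,
                           -- the dead-row list overflowed this setting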
] |
[
{
"msg_contents": "Dear All,\r\n\r\nI have several tables containing data sorted by 2 keys (neither are keys in db terms (not unique), however). I would like to retrieve all rows from all tables sorted by the same keys, essentially merging the contents of the tables together. While I am completely aware of sort order not being a (fundamental) property of an RDBMS table, I am also aware of indices and clustering (in fact, data is inserted into the tables into the correct order, and not consequently modified in any way). I have a union query like this one:\r\n\r\nselect a,b,c,d,e from table1 union all\r\nselect a,b,c,d,e from table2 union all\r\netc...\r\nselect a,b,c,d,e from tablen order by a,b;\r\n\r\nIs there a way to prevent PostgreSQL from doing a full sort on the result set after the unions have been completed? Even if I write\r\n\r\n(select a,b,c,d,e from table1 order by a,b) union all\r\n(select a,b,c,d,e from table2 order by a,b) union all\r\netc...\r\n(select a,b,c,d,e from tablen order by a,b) order by a,b;\r\n\r\nPostgreSQL does not seem to realise (maybe it should not be able to do this trick anyway) that the last \"order by\" clause is merely a final merge step on the ordered data sets.\r\n\r\nIs there a workaround for this within PostgreSQL (another type of query, parameter tuning, stored procedure, anything) or should I use my back-up plan of making separate queries and merging the results in the target language?\r\n\r\nThanks a lot,\r\nAmbrus\r\n\r\n--\r\nWagner, Ambrus (IJ/ETH/GBD)\r\nTool Designer\r\nGSDC Hungary\r\n\r\nLocation: Science Park, A2 40 008\r\nPhone: +36 1 439 5282 \r\n",
"msg_date": "Mon, 7 May 2007 15:05:58 +0200",
"msg_from": "\"Ambrus Wagner (IJ/ETH)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Merging large volumes of data"
},
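For what it's worth, the behaviour described can be confirmed from the plan: with no merge step available for pre-sorted inputs, the final ORDER BY appears as a full sort above the appended sub-results (table names as in the example above):

EXPLAIN
(SELECT a, b, c, d, e FROM table1 ORDER BY a, b)
UNION ALL
(SELECT a, b, c, d, e FROM table2 ORDER BY a, b)
ORDER BY a, b;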
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI think you'll have to stick with doing your sorting (or merging) in\nyour client. Don't think that PG recognizes the fact it's just a merge step.\n\nAndreas\n\nAmbrus Wagner (IJ/ETH) wrote:\n> Dear All,\n> \n> I have several tables containing data sorted by 2 keys (neither are keys in db terms (not unique), however). I would like to retrieve all rows from all tables sorted by the same keys, essentially merging the contents of the tables together. While I am completely aware of sort order not being a (fundamental) property of an RDBMS table, I am also aware of indices and clustering (in fact, data is inserted into the tables into the correct order, and not consequently modified in any way). I have a union query like this one:\n> \n> select a,b,c,d,e from table1 union all\n> select a,b,c,d,e from table2 union all\n> etc...\n> select a,b,c,d,e from tablen order by a,b;\n> \n> Is there a way to prevent PostgreSQL from doing a full sort on the result set after the unions have been completed? Even if I write\n> \n> (select a,b,c,d,e from table1 order by a,b) union all\n> (select a,b,c,d,e from table2 order by a,b) union all\n> etc...\n> (select a,b,c,d,e from tablen order by a,b) order by a,b;\n> \n> PostgreSQL does not seem to realise (maybe it should not be able to do this trick anyway) that the last \"order by\" clause is merely a final merge step on the ordered data sets.\n> \n> Is there a workaround for this within PostgreSQL (another type of query, parameter tuning, stored procedure, anything) or should I use my back-up plan of making separate queries and merging the results in the target language?\n> \n> Thanks a lot,\n> Ambrus\n> \n> --\n> Wagner, Ambrus (IJ/ETH/GBD)\n> Tool Designer\n> GSDC Hungary\n> \n> Location: Science Park, A2 40 008\n> Phone: +36 1 439 5282 \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGPy1CHJdudm4KnO0RAuKlAKCbYu2G/MYfmX9gAlSxkzA6KB4A+QCeIlAT\nUSxhGD5XL7oGlIh+i2rVyN4=\n=APcb\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 07 May 2007 15:44:34 +0200",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merging large volumes of data"
},
{
"msg_contents": "\n\"Andreas Kostyrka\" <[email protected]> writes:\n\n>> (select a,b,c,d,e from table1 order by a,b) union all\n>> (select a,b,c,d,e from table2 order by a,b) union all\n>> etc...\n>> (select a,b,c,d,e from tablen order by a,b) order by a,b;\n>> \n>> PostgreSQL does not seem to realise (maybe it should not be able to do this\n>> trick anyway) that the last \"order by\" clause is merely a final merge step\n>> on the ordered data sets.\n\nThere's no plan type in Postgres for merging pre-sorted data like this. The\nonly merge plan type is for joins which isn't going to be what you need.\n\nBut the queries as written here would be just as fast or faster to do one big\nsort as they would be to do separate sorts and merge the results.\n\nYou might want to do it the way you describe if there were selective WHERE\nclauses that you've left out that make the intermediate orderings come for\nfree.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 07 May 2007 15:24:23 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merging large volumes of data"
},
{
"msg_contents": "Andreas Kostyrka <[email protected]> writes:\n> Ambrus Wagner (IJ/ETH) wrote:\n>> Is there a way to prevent PostgreSQL from doing a full sort on the result set after the unions have been completed? Even if I write\n>> \n>> (select a,b,c,d,e from table1 order by a,b) union all\n>> (select a,b,c,d,e from table2 order by a,b) union all\n>> etc...\n>> (select a,b,c,d,e from tablen order by a,b) order by a,b;\n>> \n>> PostgreSQL does not seem to realise (maybe it should not be able to do this trick anyway) that the last \"order by\" clause is merely a final merge step on the ordered data sets.\n\nAt least to a first-order approximation, teaching it this would be a\nwaste of time.\n\nAssume for simplicity that each of your K sub-selects yields N tuples.\nThen there are KN items altogether, so if we just sort the big data set\nit takes O(KN*log(KN)) time ... which is the same as O(KN*(log K + log N)).\nOTOH, if we sort each sub-select by itself, that takes O(N*log N) time,\nor O(KN*log N) for all K sub-sorts. Now we've got to do a K-way merge,\nwhich will take O(log K) comparisons for each of the KN tuples, ie,\nO(KN*log K)). Net result: exactly the same runtime.\n\nOf course this argument fails if you have some way of obtaining the\nsub-select values pre-sorted for free. But it's never really free.\nHistorical experience is that full-table indexscans often underperform\nexplicit sorts, at least when there are enough tuples involved to\nmake the problem interesting.\n\nSo the bottom line is that the use-case for this optimization seems\nfar too narrow to justify the implementation effort.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 May 2007 11:09:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Merging large volumes of data "
}
] |
[
{
"msg_contents": "Hi,\n\nI am about to order a new server for my Postgres cluster. I will\nprobably get a Dual Xeon Quad Core instead of my current Dual Xeon.\nWhich OS would you recommend to optimize Postgres behaviour (i/o\naccess, multithreading, etc) ?\n\nI am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\nhelp with this ?\n\n\nRegards\n\n",
"msg_date": "7 May 2007 14:55:27 -0700",
"msg_from": "David Levy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best OS for Postgres 8.2"
},
{
"msg_contents": "David Levy wrote:\n> Hi,\n> \n> I am about to order a new server for my Postgres cluster. I will\n> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n> Which OS would you recommend to optimize Postgres behaviour (i/o\n> access, multithreading, etc) ?\n> \n> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n> help with this ?\n\nWell you just described three linux distributions, which is hardly a \nquestion about which OS to use ;). I would stick with the long supported \nversions of Linux, thus CentOS 5, Debian 4, Ubuntu Dapper.\n\nJoshua D. Drake\n\n\n> \n> \n> Regards\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Mon, 07 May 2007 14:59:18 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "In response to \"Joshua D. Drake\" <[email protected]>:\n\n> David Levy wrote:\n> > Hi,\n> > \n> > I am about to order a new server for my Postgres cluster. I will\n> > probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n> > Which OS would you recommend to optimize Postgres behaviour (i/o\n> > access, multithreading, etc) ?\n> > \n> > I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n> > help with this ?\n> \n> Well you just described three linux distributions, which is hardly a \n> question about which OS to use ;). I would stick with the long supported \n> versions of Linux, thus CentOS 5, Debian 4, Ubuntu Dapper.\n\nThere used to be a prominent site that recommended FreeBSD for Postgres.\nDon't know if that's still recommended or not -- but bringing it up is\nlikely to start a Holy War.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Mon, 7 May 2007 18:04:01 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "Bill Moran wrote:\n> In response to \"Joshua D. Drake\" <[email protected]>:\n> \n>> David Levy wrote:\n>>> Hi,\n>>>\n>>> I am about to order a new server for my Postgres cluster. I will\n>>> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n>>> Which OS would you recommend to optimize Postgres behaviour (i/o\n>>> access, multithreading, etc) ?\n>>>\n>>> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n>>> help with this ?\n>> Well you just described three linux distributions, which is hardly a \n>> question about which OS to use ;). I would stick with the long supported \n>> versions of Linux, thus CentOS 5, Debian 4, Ubuntu Dapper.\n> \n> There used to be a prominent site that recommended FreeBSD for Postgres.\n> Don't know if that's still recommended or not -- but bringing it up is\n> likely to start a Holy War.\n\nHeh... I doubt it will start a war. FreeBSD is a good OS. However, I \nspecifically noted the Dual Xeon Quad Core, which means, 8 procs. It is \nmy understanding (and I certainly could be wrong) that FreeBSD doesn't \nhandle SMP nearly as well as Linux (and Linux not as well as Solaris).\n\nSincerely,\n\nJoshua D. Drake\n\n> \n\n",
"msg_date": "Mon, 07 May 2007 15:14:08 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "\nOn May 7, 2007, at 2:55 PM, David Levy wrote:\n\n> Hi,\n>\n> I am about to order a new server for my Postgres cluster. I will\n> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n> Which OS would you recommend to optimize Postgres behaviour (i/o\n> access, multithreading, etc) ?\n>\n> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n> help with this ?\n\nWell, all three you mention are much the same, just with a different\nbadge on the box, as far as performance is concerned. They're all\ngoing to be a moderately recent Linux kernel, with your choice\nof filesystems, so any choice between them is going to be driven\nmore by available staff and support or personal preference.\n\nI'd probably go CentOS 5 over Fedora just because Fedora doesn't\nget supported for very long - more of an issue with a dedicated\ndatabase box with a long lifespan than your typical desktop or\ninterchangeable webserver.\n\nI might also look at Solaris 10, though. I've yet to play with it \nmuch, but it\nseems nice, and I suspect it might manage 8 cores better than current\nLinux setups.\n\nCheers,\n Steve\n\n\n",
"msg_date": "Mon, 7 May 2007 15:57:31 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "David Levy wrote:\n> Hi,\n> \n> I am about to order a new server for my Postgres cluster. I will\n> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n> Which OS would you recommend to optimize Postgres behaviour (i/o\n> access, multithreading, etc) ?\n> \n> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n> help with this ?\n\nUse the one you're most comfortable with.\n\nI don't think you'll notice *that* much difference between linux systems \nfor performance - but whether you're comfortable using any of them will \nmake a difference in managing it in general.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 08 May 2007 10:40:17 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Tue, 8 May 2007, Chris wrote:\n\n> David Levy wrote:\n>> Hi,\n>>\n>> I am about to order a new server for my Postgres cluster. I will\n>> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n>> Which OS would you recommend to optimize Postgres behaviour (i/o\n>> access, multithreading, etc) ?\n>>\n>> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n>> help with this ?\n>\n> Use the one you're most comfortable with.\n>\n> I don't think you'll notice *that* much difference between linux systems for \n> performance - but whether you're comfortable using any of them will make a \n> difference in managing it in general.\n\nthe tuneing that you do (both of the OS and of postgres) will make more \nof a difference then anything else.\n\nDavid Lang\n",
"msg_date": "Mon, 7 May 2007 17:53:29 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "[email protected] wrote:\n> On Tue, 8 May 2007, Chris wrote:\n> \n>> David Levy wrote:\n>>> Hi,\n>>>\n>>> I am about to order a new server for my Postgres cluster. I will\n>>> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n>>> Which OS would you recommend to optimize Postgres behaviour (i/o\n>>> access, multithreading, etc) ?\n>>>\n>>> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n>>> help with this ?\n>>\n>> Use the one you're most comfortable with.\n>>\n>> I don't think you'll notice *that* much difference between linux \n>> systems for performance - but whether you're comfortable using any of \n>> them will make a difference in managing it in general.\n> \n> the tuneing that you do (both of the OS and of postgres) will make more \n> of a difference then anything else.\n\nWhich is why it's best to know/understand the OS first ;)\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 08 May 2007 11:14:48 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "I am using FC6 in production for our pg 8.2.4 DB server and am quite \nhappy with it.\n\nThe big advantage with FC6 for me was that the FC6 team seems to keep \nmore current with the latest stable revs of most OSSW (including \nkernel revs!) better than any of the other major distros.\n\n(Also, SE Linux is a =good= thing security-wise. If it's good enough \nfor the NSA...)\n\nDownside is that initial install and config can be a bit complicated.\n\nWe're happy with it.\n\nCheers,\nRon Peacetree\n\n\nAt 05:55 PM 5/7/2007, David Levy wrote:\n>Hi,\n>\n>I am about to order a new server for my Postgres cluster. I will\n>probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n>Which OS would you recommend to optimize Postgres behaviour (i/o\n>access, multithreading, etc) ?\n>\n>I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n>help with this ?\n>\n>\n>Regards\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Mon, 07 May 2007 23:41:44 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Mon, 7 May 2007, David Levy wrote:\n\n> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n> help with this ?\n\nDebian packages PostgreSQL in a fashion unique to it; it's arguable \nwhether it's better or not (I don't like it), but going with that will \nassure your installation is a bit non-standard compared with most Linux \ninstallas. The main reasons you'd pick Debian are either that you like \nthat scheme (which tries to provide some structure to running multiple \nclusters on one box), or that you plan to rely heavily on community \npackages that don't come with the Redhat distributions and therefore would \nappreciate how easy it is to use apt-get against the large Debian software \nrepository.\n\nGiven the buginess and unexpected changes from packages updates of every \nFedora Core release I've ever tried, I wouldn't trust any OS from that \nline to run a database keeping track of where my socks are at. Core 6 \nseems better than most of the older ones. I find it hard to understand \nwhat it offers that Centos doesn't such that you'd want Fedora instead.\n\nCentos just released a new version 5 recently. It's running a fairly \nmodern kernel with several relevant performance improvements over the much \nolder V4; unless you have some odd piece of hardware where there is only a \ndriver available for Centos 4 (I ran into this with a disk controller), \nthe new version would better.\n\nThe main advantages of Centos over the other two are that so many people \nare/will be running very similar configurations that you should able to \nfind help easily if you run into any issues. I revisited fresh installs \nof each recently, and after trying both I found it more comfortable to run \nthe database server on Centos, but I did miss the gigantic and easy to \ninstall Debian software repository.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Mon, 7 May 2007 23:56:14 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> Debian packages PostgreSQL in a fashion unique to it; it's arguable \n> whether it's better or not (I don't like it), but going with that will \n> assure your installation is a bit non-standard compared with most Linux \n> installas.\n\n<dons red fedora>\n\nWhat Debian has done is set up an arrangement that lets you run two (or\nmore) different PG versions in parallel. Since that's amazingly helpful\nduring a major-PG-version upgrade, most of the other packagers are\nscheming how to do something similar. I'm not sure when this will\nhappen in the PGDG or Red Hat RPMs, but it probably will eventually.\n\n> Given the buginess and unexpected changes from packages updates of every \n> Fedora Core release I've ever tried, I wouldn't trust any OS from that \n> line to run a database keeping track of where my socks are at. Core 6 \n> seems better than most of the older ones. I find it hard to understand \n> what it offers that Centos doesn't such that you'd want Fedora instead.\n\nFedora is about cutting edge, RHEL is about stability, and Centos tracks\nRHEL. No surprises there. (<plug> and if someday you want commercial\nsupport for your OS, a Centos->RHEL update will get you there easily.\nAFAIK Red Hat doesn't have a clean solution for someone running Fedora\nwho suddenly realizes he needs a 24x7-supportable OS right now.\nSomething to work on... </plug>)\n\n</dons red fedora>\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 May 2007 00:37:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2 "
},
{
"msg_contents": "In #postgresql on freenode, somebody ever mentioned that ZFS from \nSolaris helps a lot to the performance of pgsql, so dose anyone have \ninformation about that?\n\nSteve Atkins wrote:\n> \n> On May 7, 2007, at 2:55 PM, David Levy wrote:\n> \n>> Hi,\n>>\n>> I am about to order a new server for my Postgres cluster. I will\n>> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n>> Which OS would you recommend to optimize Postgres behaviour (i/o\n>> access, multithreading, etc) ?\n>>\n>> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n>> help with this ?\n> \n> Well, all three you mention are much the same, just with a different\n> badge on the box, as far as performance is concerned. They're all\n> going to be a moderately recent Linux kernel, with your choice\n> of filesystems, so any choice between them is going to be driven\n> more by available staff and support or personal preference.\n> \n> I'd probably go CentOS 5 over Fedora just because Fedora doesn't\n> get supported for very long - more of an issue with a dedicated\n> database box with a long lifespan than your typical desktop or\n> interchangeable webserver.\n> \n> I might also look at Solaris 10, though. I've yet to play with it much, \n> but it\n> seems nice, and I suspect it might manage 8 cores better than current\n> Linux setups.\n> \n> Cheers,\n> Steve\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\nRegards\n\nIan\n",
"msg_date": "Tue, 08 May 2007 12:41:26 +0800",
"msg_from": "=?UTF-8?B?5p2O5b2mIElhbiBMaQ==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Tue, 8 May 2007, �~]~N彦 Ian Li wrote:\n\n> In #postgresql on freenode, somebody ever mentioned that ZFS from Solaris \n> helps a lot to the performance of pgsql, so dose anyone have information \n> about that?\n\nthe filesystem you use will affect the performance of postgres \nsignificantly. I've heard a lot of claims for ZFS, unfortunantly many of \nthem from people who have prooven that they didn't know what they were \ntalking about by the end of their first or second e-mails.\n\nmuch of the hype for ZFS is it's volume management capabilities and admin \ntools. Linux has most (if not all) of the volume management capabilities, \nit just seperates them from the filesystems so that any filesystem can use \nthem, and as a result you use one tool to setup your RAID, one to setup \nsnapshots, and a third to format your filesystems where ZFS does this in \none userspace tool.\n\nonce you seperate the volume management piece out, the actual performance \nquestion is a lot harder to answer. there are a lot of people who say that \nit's far faster then the alternate filesystems on Solaris, but I haven't \nseen any good comparisons between it and Linux filesystems.\n\nOn Linux you have the choice of several filesystems, and the perfomance \nwill vary wildly depending on your workload. I personally tend to favor \next2 (for small filesystems where the application is ensuring data \nintegrity) or XFS (for large filesystems)\n\nI personally don't trust reiserfs, jfs seems to be a tools for \ntransitioning from AIX more then anything else, and ext3 seems to have all \nthe scaling issues of ext2 plus the overhead (and bottleneck) of \njournaling.\n\none issue with journaling filesystems, if you journal the data as well as \nthe metadata you end up with a very reliable setup, however it means that \nall your data needs to be written twice, oncce to the journal, and once to \nthe final location. the write to the journal can be slightly faster then a \nnormal write to the final location (the journal is a sequential write to \nan existing file), however the need to write twice can effectivly cut your \ndisk I/O bandwidth in half when doing heavy writes. worse, when you end up \nwriting mor ethen will fit in the journal (128M is the max for ext3) the \nentire system then needs to stall while the journal gets cleared to make \nspace for the additional writes.\n\nif you don't journal your data then you avoid the problems above, but in a \ncrash you may find that you lost data, even though the filesystem is \n'intact' according to fsck.\n\nDavid Lang\n\n> Steve Atkins wrote:\n>>\n>> On May 7, 2007, at 2:55 PM, David Levy wrote:\n>> \n>> > Hi,\n>> > \n>> > I am about to order a new server for my Postgres cluster. I will\n>> > probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n>> > Which OS would you recommend to optimize Postgres behaviour (i/o\n>> > access, multithreading, etc) ?\n>> > \n>> > I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n>> > help with this ?\n>>\n>> Well, all three you mention are much the same, just with a different\n>> badge on the box, as far as performance is concerned. 
They're all\n>> going to be a moderately recent Linux kernel, with your choice\n>> of filesystems, so any choice between them is going to be driven\n>> more by available staff and support or personal preference.\n>>\n>> I'd probably go CentOS 5 over Fedora just because Fedora doesn't\n>> get supported for very long - more of an issue with a dedicated\n>> database box with a long lifespan than your typical desktop or\n>> interchangeable webserver.\n>>\n>> I might also look at Solaris 10, though. I've yet to play with it much,\n>> but it\n>> seems nice, and I suspect it might manage 8 cores better than current\n>> Linux setups.\n>>\n>> Cheers,\n>> Steve\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>> \n>\n> Regards\n>\n> Ian\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>From [email protected] Tue May 8 05:09:47 2007\nReceived: from localhost (maia-2.hub.org [200.46.204.187])\n\tby postgresql.org (Postfix) with ESMTP id B7F2B9FBB48\n\tfor <[email protected]>; Tue, 8 May 2007 05:09:46 -0300 (ADT)\nReceived: from postgresql.org ([200.46.204.71])\n by localhost (mx1.hub.org [200.46.204.187]) (amavisd-maia, port 10024)\n with ESMTP id 21379-08 for <[email protected]>;\n Tue, 8 May 2007 05:09:40 -0300 (ADT)\nX-Greylist: domain auto-whitelisted by SQLgrey-1.7.4\nReceived: from nz-out-0506.google.com (nz-out-0506.google.com [64.233.162.234])\n\tby postgresql.org (Postfix) with ESMTP id C1AE49FBB21\n\tfor <[email protected]>; Tue, 8 May 2007 05:09:42 -0300 (ADT)\nReceived: by nz-out-0506.google.com with SMTP id s1so1988964nze\n for <[email protected]>; Tue, 08 May 2007 01:09:41 -0700 (PDT)\nDKIM-Signature: a=rsa-sha1; c=relaxed/relaxed;\n d=gmail.com; s=beta;\n h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references;\n b=tMnqJ4jJFkANPEKsjO48H7tdlMkh1PtxD9ojia3Hs7kK4jLUtNUWSWo6GEPQw3nHKK/IFUU42X/xGazATgo19i5FXATTGs/NxHYcWcT4jYq/r84Fmsj2ndDu3lnjOLK8pGaJ+Di1Rw1oLfhcu+zCkBrfTX9KTtbO4BSU2riTwHc=\nDomainKey-Signature: a=rsa-sha1; c=nofws;\n d=gmail.com; s=beta;\n h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references;\n b=ibMOoo+Gg9CIe5nVV5w4I8kJBzehuai5dBAmssoI98av8cCWzpmlj29ellDiBihXGB3g1EbKZR/mldl4oZV3yA/kvjUtA5mAHvvxnkQMjMHzEOOokHx/41kpWk6p14IjD8PYoLxlcaggO4I9y9xyJR8/1+ikTKvqdLOk3eUa+tc=\nReceived: by 10.115.95.1 with SMTP id x1mr2497915wal.1178611781182;\n Tue, 08 May 2007 01:09:41 -0700 (PDT)\nReceived: by 10.114.160.9 with HTTP; Tue, 8 May 2007 01:09:40 -0700 (PDT)\nMessage-ID: <[email protected]>\nDate: Tue, 8 May 2007 10:09:40 +0200\nFrom: \"Claus Guttesen\" <[email protected]>\nTo: \"David Levy\" <[email protected]>\nSubject: Re: Best OS for Postgres 8.2\nCc: [email protected]\nIn-Reply-To: <[email protected]>\nMIME-Version: 1.0\nContent-Type: text/plain; charset=ISO-8859-1; format=flowed\nContent-Transfer-Encoding: 7bit\nContent-Disposition: inline\nReferences: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Archive-Number: 200705/86\nX-Sequence-Number: 24476\n\n> I am about to order a new server for my Postgres cluster. 
I will\n> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n> Which OS would you recommend to optimize Postgres behaviour (i/o\n> access, multithreading, etc) ?\n>\n> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n> help with this ?\n\nMy only experience is with FreeBSD. My installation is running 6.2 and\npg 7.4 on a four-way woodcrest and besides being very stable it's also\nperforming very well. But then FreeBSD 6.x might not scale as well\nbeyond four cores atm. There you probably would need FreeBSD 7 which\nis the development branch and should require extensive testing.\n\nHow big will the db be in size?\n\n-- \nregards\nClaus\n",
"msg_date": "Tue, 8 May 2007 00:59:21 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
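To make the journaling trade-off described above concrete, here is a minimal sketch of how the ext3 modes are selected on Linux. The device and mount point names are hypothetical, and none of this comes from the original message; data=ordered (metadata-only ordering) is the ext3 default.

    # Full data journaling: every data block is written twice (journal, then final location)
    mount -t ext3 -o data=journal /dev/sdb1 /srv/pgdata

    # Metadata-only journaling (the ext3 default); data blocks are written once
    mount -t ext3 -o data=ordered /dev/sdb1 /srv/pgdata

    # The journal size is fixed when the filesystem is created, e.g. a 128 MB journal:
    mkfs.ext3 -J size=128 /dev/sdb1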
{
"msg_contents": "[email protected] wrote:\n> if you don't journal your data then you avoid the problems above, but in \n> a crash you may find that you lost data, even though the filesystem is \n> 'intact' according to fsck.\n\nPostgreSQL itself journals it's data to the WAL, so that shouldn't happen.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 08 May 2007 09:21:39 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "> > In #postgresql on freenode, somebody ever mentioned that ZFS from Solaris\n> > helps a lot to the performance of pgsql, so dose anyone have information\n> > about that?\n>\n> the filesystem you use will affect the performance of postgres\n> significantly. I've heard a lot of claims for ZFS, unfortunantly many of\n> them from people who have prooven that they didn't know what they were\n> talking about by the end of their first or second e-mails.\n>\n> much of the hype for ZFS is it's volume management capabilities and admin\n> tools. Linux has most (if not all) of the volume management capabilities,\n> it just seperates them from the filesystems so that any filesystem can use\n> them, and as a result you use one tool to setup your RAID, one to setup\n> snapshots, and a third to format your filesystems where ZFS does this in\n> one userspace tool.\n\nEven though those posters may have proven them selves wrong, zfs is\nstill a very handy fs and it should not be judged relative to these\nstatements.\n\n> once you seperate the volume management piece out, the actual performance\n> question is a lot harder to answer. there are a lot of people who say that\n> it's far faster then the alternate filesystems on Solaris, but I haven't\n> seen any good comparisons between it and Linux filesystems.\n\nOne could install pg on solaris 10 and format the data-area as ufs and\nthen as zfs and compare import- and query-times and other benchmarking\nbut comparing ufs/zfs to Linux-filesystems would also be a comparison\nof those two os'es.\n\n-- \nregards\nClaus\n",
"msg_date": "Tue, 8 May 2007 10:22:33 +0200",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
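A rough sketch of the UFS-versus-ZFS comparison Claus suggests, run twice on the same Solaris 10 box. The device, pool and database names and the pgbench settings are made up for illustration; pgbench here is the contrib benchmark shipped with PostgreSQL.

    # Run 1: data area on UFS
    newfs /dev/rdsk/c1t1d0s0
    mount /dev/dsk/c1t1d0s0 /pgdata
    initdb -D /pgdata && pg_ctl -D /pgdata -l logfile start
    createdb bench
    pgbench -i -s 100 bench        # load test data (import time)
    pgbench -c 8 -t 10000 bench    # note the reported tps

    # Run 2: same disks, data area on ZFS
    zpool create pgpool c1t1d0
    zfs create -o mountpoint=/pgdata pgpool/data
    initdb -D /pgdata && pg_ctl -D /pgdata -l logfile start
    createdb bench
    pgbench -i -s 100 bench
    pgbench -c 8 -t 10000 bench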
{
"msg_contents": "On Tue, 8 May 2007, Claus Guttesen wrote:\n\n>> > In #postgresql on freenode, somebody ever mentioned that ZFS from \n>> > Solaris\n>> > helps a lot to the performance of pgsql, so dose anyone have information\n>> > about that?\n>>\n>> the filesystem you use will affect the performance of postgres\n>> significantly. I've heard a lot of claims for ZFS, unfortunantly many of\n>> them from people who have prooven that they didn't know what they were\n>> talking about by the end of their first or second e-mails.\n>>\n>> much of the hype for ZFS is it's volume management capabilities and admin\n>> tools. Linux has most (if not all) of the volume management capabilities,\n>> it just seperates them from the filesystems so that any filesystem can use\n>> them, and as a result you use one tool to setup your RAID, one to setup\n>> snapshots, and a third to format your filesystems where ZFS does this in\n>> one userspace tool.\n>\n> Even though those posters may have proven them selves wrong, zfs is\n> still a very handy fs and it should not be judged relative to these\n> statements.\n\nI don't disagree with you, I'm just noteing that too many of the 'ZFS is \ngreat' posts need to be discounted as a result (the same thing goes for \nthe 'reiserfs4 is great' posts)\n\n>> once you seperate the volume management piece out, the actual performance\n>> question is a lot harder to answer. there are a lot of people who say that\n>> it's far faster then the alternate filesystems on Solaris, but I haven't\n>> seen any good comparisons between it and Linux filesystems.\n>\n> One could install pg on solaris 10 and format the data-area as ufs and\n> then as zfs and compare import- and query-times and other benchmarking\n> but comparing ufs/zfs to Linux-filesystems would also be a comparison\n> of those two os'es.\n\nhowever, such a comparison is very legitimate, it doesn't really matter \nwhich filesystem is better if the OS that it's tied to limits it so much \nthat the other one wins out with an inferior filesystem\n\ncurrently ZFS is only available on Solaris, parts of it have been released \nunder GPLv2, but it doesn't look like enough of it to be ported to Linux \n(enough was released for grub to be able to access it read-only, but not \nthe full filesystem). there are also patent concerns that are preventing \nany porting to Linux.\n\non the other hand, it's integrated userspace tools are pushing people to \ncreate similar tools for Linux (without needeing to combine the vairous \npieces in the kernel)\n\nDavid Lang\n",
"msg_date": "Tue, 8 May 2007 01:45:52 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "[email protected] wrote:\n> On Tue, 8 May 2007, Claus Guttesen wrote:\n> \n>>> > In #postgresql on freenode, somebody ever mentioned that ZFS from \n>>> > Solaris\n>>> > helps a lot to the performance of pgsql, so dose anyone have \n>>> information\n>>> > about that?\n>>>\n>>> the filesystem you use will affect the performance of postgres\n>>> significantly. I've heard a lot of claims for ZFS, unfortunantly \n>>> many of\n>>> them from people who have prooven that they didn't know what they were\n>>> talking about by the end of their first or second e-mails.\n>>>\n>>> much of the hype for ZFS is it's volume management capabilities and \n>>> admin\n>>> tools. Linux has most (if not all) of the volume management \n>>> capabilities,\n>>> it just seperates them from the filesystems so that any filesystem \n>>> can use\n>>> them, and as a result you use one tool to setup your RAID, one to setup\n>>> snapshots, and a third to format your filesystems where ZFS does \n>>> this in\n>>> one userspace tool.\n>>\n>> Even though those posters may have proven them selves wrong, zfs is\n>> still a very handy fs and it should not be judged relative to these\n>> statements.\n> \n> I don't disagree with you, I'm just noteing that too many of the 'ZFS is \n> great' posts need to be discounted as a result (the same thing goes for \n> the 'reiserfs4 is great' posts)\n> \n>>> once you seperate the volume management piece out, the actual \n>>> performance\n>>> question is a lot harder to answer. there are a lot of people who \n>>> say that\n>>> it's far faster then the alternate filesystems on Solaris, but I \n>>> haven't\n>>> seen any good comparisons between it and Linux filesystems.\n>>\n>> One could install pg on solaris 10 and format the data-area as ufs and\n>> then as zfs and compare import- and query-times and other benchmarking\n>> but comparing ufs/zfs to Linux-filesystems would also be a comparison\n>> of those two os'es.\n> \n> however, such a comparison is very legitimate, it doesn't really matter \n> which filesystem is better if the OS that it's tied to limits it so much \n> that the other one wins out with an inferior filesystem\n> \n> currently ZFS is only available on Solaris, parts of it have been \n> released under GPLv2, but it doesn't look like enough of it to be ported \n> to Linux (enough was released for grub to be able to access it \n> read-only, but not the full filesystem). there are also patent concerns \n> that are preventing any porting to Linux.\n\nThis is not entirely correct. ZFS is only under the CDDL license and it \nhas been ported to FreeBSD.\n\nhttp://mail.opensolaris.org/pipermail/zfs-discuss/2007-April/026922.html\n\n--\nTrygve\n",
"msg_date": "Tue, 08 May 2007 10:49:49 +0200",
"msg_from": "=?ISO-8859-1?Q?Trygve_Laugst=F8l?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Mon, May 07, 2007 at 11:56:14PM -0400, Greg Smith wrote:\n> Debian packages PostgreSQL in a fashion unique to it; it's arguable \n> whether it's better or not (I don't like it), but going with that will \n> assure your installation is a bit non-standard compared with most Linux \n> installas. The main reasons you'd pick Debian are either that you like \n> that scheme (which tries to provide some structure to running multiple \n> clusters on one box), or that you plan to rely heavily on community \n> packages that don't come with the Redhat distributions and therefore would \n> appreciate how easy it is to use apt-get against the large Debian software \n> repository.\n\nJust to add to this: As far as I understand it, this scheme was originally\nmainly put in place to allow multiple _versions_ of Postgres to be installed\nalongside each other, for smoother upgrades. (There's a command that does all\nthe details of running first pg_dumpall for the users and groups, then the\nnew pg_dump with -Fc to get all data and LOBs over, then some hand-fixing to\nchange explicit paths to $libdir, etc...)\n\nOf course, you lose all that if you need a newer Postgres version than the OS\nprovides. (Martin Pitt, the Debian/Ubuntu maintainer of Postgres -- the\npackaging in Debian and Ubuntu is the same, sans version differences -- makes\nhis own backported packages of the newest Postgres to Debian stable; it's up\nto you if you'd trust that or not.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 8 May 2007 11:35:50 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
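For readers who have not seen the Debian scheme Steinar describes, a short sketch of the postgresql-common cluster tools it is built on; the version numbers and cluster name are only illustrative.

    pg_lsclusters                  # list installed clusters with version, port and data directory
    pg_createcluster 8.2 main      # create an 8.2 cluster alongside an existing one
    pg_ctlcluster 8.2 main start   # start (or stop/restart) one specific cluster
    pg_upgradecluster 7.4 main     # dump/restore an old cluster into the newest installed version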
{
"msg_contents": "On 5/8/07, [email protected] <[email protected]> wrote:\n[snip]\n> I personally don't trust reiserfs, jfs seems to be a tools for\n> transitioning from AIX more then anything else [...]\n\nWhat makes you say this? I have run JFS for years with complete\nsatisfaction, and I have never logged into an AIX box.\n\nJFS has traditionally been seen as an underdog, but undeservedly so,\nin my opinion; one cause might be the instability of the very early\nreleases, which seems to have tainted its reputation, or the alienness\nof its AIX heritage. However, every benchmark I have come across puts\nits on par with, and often surpassing, the more popular file systems\nin performance. In particular, JFS seems to shine with respect to CPU\noverhead.\n\nAlexander.\n",
"msg_date": "Tue, 8 May 2007 11:43:18 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Mon, May 07, 2007 at 03:14:08PM -0700, Joshua D. Drake wrote:\n> It is my understanding (and I certainly could be wrong) that FreeBSD\n> doesn't handle SMP nearly as well as Linux (and Linux not as well as\n> Solaris).\n\nI'm not actually sure about the last part. There are installations as big as\n1024 CPUs that run Linux -- most people won't need that, but it's probably an\nindicator that eight cores should run OK :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 8 May 2007 11:48:06 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Tue, 8 May 2007, Trygve Laugst�l wrote:\n\n>> currently ZFS is only available on Solaris, parts of it have been released\n>> under GPLv2, but it doesn't look like enough of it to be ported to Linux\n>> (enough was released for grub to be able to access it read-only, but not\n>> the full filesystem). there are also patent concerns that are preventing\n>> any porting to Linux.\n>\n> This is not entirely correct. ZFS is only under the CDDL license and it has \n> been ported to FreeBSD.\n>\n> http://mail.opensolaris.org/pipermail/zfs-discuss/2007-April/026922.html\n>\nI wonder how they handled the license issues? I thought that if you \ncombined stuff that was BSD licensed with stuff with a more restrictive \nlicense the result was under the more restrictive license. thanks for the \ninfo.\n\nhere's a link about the GPLv2 stuff for zfs\n\nhttp://blogs.sun.com/darren/entry/zfs_under_gplv2_already_exists\n>From [email protected] Tue May 8 07:22:31 2007\nReceived: from localhost (maia-4.hub.org [200.46.204.183])\n\tby postgresql.org (Postfix) with ESMTP id 3ED559FB28A\n\tfor <[email protected]>; Tue, 8 May 2007 07:22:29 -0300 (ADT)\nReceived: from postgresql.org ([200.46.204.71])\n by localhost (mx1.hub.org [200.46.204.183]) (amavisd-maia, port 10024)\n with ESMTP id 58273-05 for <[email protected]>;\n Tue, 8 May 2007 07:22:26 -0300 (ADT)\nX-Greylist: from auto-whitelisted by SQLgrey-1.7.5\nReceived: from oxford.xeocode.com (unknown [62.232.55.118])\n\tby postgresql.org (Postfix) with ESMTP id 1ACC39FB1B7\n\tfor <[email protected]>; Tue, 8 May 2007 07:22:26 -0300 (ADT)\nReceived: from localhost ([127.0.0.1] helo=oxford.xeocode.com)\n\tby oxford.xeocode.com with esmtp (Exim 4.67)\n\t(envelope-from <[email protected]>)\n\tid 1HlMpr-00010j-V3; Tue, 08 May 2007 11:22:24 +0100\nFrom: Gregory Stark <[email protected]>\nTo: \"Pomarede Nicolas\" <[email protected]>\nCc: <[email protected]>\nSubject: Re: truncate a table instead of vaccum full when count(*) is 0\nIn-Reply-To: <Pine.LNX.4.64.0705081125140.20675@localhost> (Pomarede Nicolas's\n\tmessage of \"Tue, 8 May 2007 11:43:14 +0200 (CEST)\")\nOrganization: EnterpriseDB\nReferences: <Pine.LNX.4.64.0705081125140.20675@localhost>\nX-Draft-From: (\"nnimap+mail01.enterprisedb.com:INBOX.performance\" 216)\nDate: Tue, 08 May 2007 11:22:23 +0100\nMessage-ID: <[email protected]>\nUser-Agent: Gnus/5.110006 (No Gnus v0.6) Emacs/21.4 (gnu/linux)\nMIME-Version: 1.0\nContent-Type: text/plain; charset=us-ascii\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Archive-Number: 200705/100\nX-Sequence-Number: 24490\n\n\n\"Pomarede Nicolas\" <[email protected]> writes:\n\n> But for the data (dead rows), even running a vacuum analyze every day is not\n> enough, and doesn't truncate some empty pages at the end, so the data size\n> remains in the order of 200-300 MB, when only a few effective rows are there.\n\nTry running vacuum more frequently. Once per day isn't very frequent for\nvacuum, every 60 or 30 minutes isn't uncommon. For your situation you might\neven consider running it continuously in a loop.\n\n> I see in the 8.3 list of coming changes that the FSM will try to re-use pages\n> in a better way to help truncating empty pages. Is this correct ?\n\nThere are several people working on improvements to vacuum but it's not clear\nright now exactly what we'll end up with. I think most of the directly vacuum\nrelated changes wouldn't actually help you either. \n\nThe one that would help you is named \"HOT\". 
If you're interested in\nexperimenting with an experimental patch you could consider taking CVS and\napplying HOT and seeing how it affects you. Or if you see an announcement that\nit's been comitted taking a beta and experimenting with it before the 8.3\nrelease could be interesting. Experiments with real-world databases can be\nvery helpful for developers since it's hard to construct truly realistic\nbenchmarks.\n\n> So, I would like to truncate the table when the number of rows reaches 0 (just\n> after the table was processed, and just before some new rows are added).\n>\n> Is there an easy way to do this under psql ? For example, lock the table, do a\n> count(*), if result is 0 row then truncate the table, unlock the table (a kind\n> of atomic 'truncate table if count(*) == 0').\n>\n> Would this work and what would be the steps ?\n\nIt would work but you may end up keeping the lock for longer than you're happy\nfor. Another option to consider would be to use CLUSTER instead of vacuum full\nthough the 8.2 CLUSTER wasn't entirely MVCC safe and I think in your situation\nthat might actually be a problem. It would cause transactions that started\nbefore the cluster (but didn't access the table before the cluster) to not see\nany records after the cluster.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Tue, 8 May 2007 03:16:16 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Tue, 8 May 2007, Steinar H. Gunderson wrote:\n\n> On Mon, May 07, 2007 at 03:14:08PM -0700, Joshua D. Drake wrote:\n>> It is my understanding (and I certainly could be wrong) that FreeBSD\n>> doesn't handle SMP nearly as well as Linux (and Linux not as well as\n>> Solaris).\n>\n> I'm not actually sure about the last part. There are installations as big as\n> 1024 CPUs that run Linux -- most people won't need that, but it's probably an\n> indicator that eight cores should run OK :-)\n\nover the weekend the question of scalability was raised on the linux \nkernel mailing list and people are shipping 1024 cpu systems with linux, \nand testing 4096 cpu systems. there are occasionally still bottlenecks \nthat limit scalability, butunless you run into a bad driver or filesystem \nyou should have no problems in the 8-16 core range.\n\nany comparison between Linux and any other OS needs to include a date for \nwhen the comparison was made, Linux is changing at a frightning pace (I \nthink I saw something within the last few weeks that said that the rate of \nchange for the kernel has averaged around 9000 lines of code per day over \nthe last couple of years) you need to re-check comparisons every year or \ntwo or you end up working with obsolete data.\n\nDavid Lang\n",
"msg_date": "Tue, 8 May 2007 03:27:01 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "[email protected] wrote:\n> On Tue, 8 May 2007, Trygve Laugst�l wrote:\n> \n>>> currently ZFS is only available on Solaris, parts of it have been \n>>> released\n>>> under GPLv2, but it doesn't look like enough of it to be ported to \n>>> Linux\n>>> (enough was released for grub to be able to access it read-only, but \n>>> not\n>>> the full filesystem). there are also patent concerns that are \n>>> preventing\n>>> any porting to Linux.\n>>\n>> This is not entirely correct. ZFS is only under the CDDL license and \n>> it has been ported to FreeBSD.\n>>\n>> http://mail.opensolaris.org/pipermail/zfs-discuss/2007-April/026922.html\n>>\n> I wonder how they handled the license issues? I thought that if you \n> combined stuff that was BSD licensed with stuff with a more restrictive \n> license the result was under the more restrictive license. thanks for \n> the info.\n\nThe CDDL is not a restrictive license like GPL, it is based on the MIT \nlicense so it can be used with BSD stuff without problems. There are \nlots of discussion going on (read: flamewars) on the opensolaris lists \nabout how it can/should it/will it be integrated into linux.\n\n> here's a link about the GPLv2 stuff for zfs\n> \n> http://blogs.sun.com/darren/entry/zfs_under_gplv2_already_exists\n\nThat title is fairly misleading as it's only some read-only bits to be \nable to boot off ZFS with grub.\n\n--\nTrygve\n",
"msg_date": "Tue, 08 May 2007 12:29:09 +0200",
"msg_from": "=?ISO-8859-1?Q?Trygve_Laugst=F8l?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "I'm really not a senior member around here and while all this licensing\nstuff and underlying fs between OSs is very interesting can we please\nthink twice before continuing it.\n\nThanks for the minute,\n\n./C\n",
"msg_date": "Tue, 08 May 2007 13:55:41 +0300",
"msg_from": "=?ISO-8859-1?Q?=22C=2E_Bergstr=F6m=22?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] Best OS for Postgres 8.2"
},
{
"msg_contents": "WRT ZFS on Linux, if someone were to port it, the license issue would get worked out IMO (with some discussion to back me up). From discussions with the developers, the biggest issue is a technical one: the Linux VFS layer makes the port difficult.\n\nI don't hold any hope that the FUSE port will be a happy thing, the performance won't be there.\n\nAny volunteers to port ZFS to Linux?\n\n- Luke\n\n",
"msg_date": "Tue, 8 May 2007 08:01:25 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "> I'm really not a senior member around here and while all this licensing\n> stuff and underlying fs between OSs is very interesting can we please\n> think twice before continuing it.\n\nAgree, there are other lists for this stuff; and back to what one of\nthe original posters said: it doesn't matter much.\n\n[Also not a regular poster, but I always gain something from reading\nthis list.]\n\nMost people who really go into OS selection / FS selection are looking\nfor a cheap/silver bullet for performance. No such thing exists. The\ndifference made by any modern OS/FS is almost immaterial. You need to\ndo the slow slogging work of site/application specific optimization and\ntuning; that is where you will find significant performance\nimprovements.\n\n- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n",
"msg_date": "Tue, 08 May 2007 08:25:25 -0400",
"msg_from": "Adam Tauno Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [OT] Best OS for Postgres 8.2"
},
{
"msg_contents": "I've seen the FUSE port of ZFS, and it does run sslloowwllyy. It \nappears that a native linux port is going to be required if we want \nZFS to be reasonably performant.\n\nWRT which FS to use for pg; the biggest issue is what kind of DB you \nwill be building. The best pg FS for OLTP and OLAP are not the same \nIME. Ditto a dependence on how large your records and the amount of \nIO in your typical transactions are.\n\nFor lot's of big, more reads than writes transactions, SGI's XFS \nseems to be best.\nXFS is not the best for OLTP. Especially for OLTP involving lots of small IOs.\n\njfs seems to be best for that.\n\nCaveat: I have not yet experimented with any version of reiserfs in \nproduction.\n\nCheers,\nRon Peacetree\n\n\nAt 08:01 AM 5/8/2007, Luke Lonergan wrote:\n>WRT ZFS on Linux, if someone were to port it, the license issue \n>would get worked out IMO (with some discussion to back me up). From \n>discussions with the developers, the biggest issue is a technical \n>one: the Linux VFS layer makes the port difficult.\n>\n>I don't hold any hope that the FUSE port will be a happy thing, the \n>performance won't be there.\n>\n>Any volunteers to port ZFS to Linux?\n>\n>- Luke\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Tue, 08 May 2007 12:08:10 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "I am back with the chatlog and seem it's the Transparent compression \nthat helps a lot, very interesting...\n\nhere is the log of #postgresql on Apr. 21th around 13:20 GMT (snipped) :\n<Solatis> why is that, when hard disk i/o is my bottleneck ?\n<Solatis> well i have 10 disks in a raid1+0 config\n<Solatis> it's sata2 yes\n<Solatis> i run solaris express, whose kernel says SunOS\n<Solatis> running 'SunOS solatis2 5.11 snv_61 i86pc i386 i86pc\n<Solatis> well, the thing is, i'm using zfs\n<Solatis> yeah, it was the reason for me to install solaris in \nthe first place\n<Solatis> and a benchmark for my system comparing debian linux \nwith solaris express showed a +- 18% performance gain when switching \nto solaris\n<Solatis> so i'm happy\n<Solatis> (note: the benchmarking was not scientifically \ngrounded at all, it was just around 50 million stored procedure \ncalls which do select/update/inserts on my database which would \nsimulate my specific case)\n<Solatis> but the killer thing was to enable compression on zfs\n<Solatis> that reduced the hard disk i/o with a factor 3, which \nwas the probable cause of the performance increase\n<Solatis> oh, at the moment it's factor 2.23\n<Solatis> still, it's funny to see that postgresql says that my \ndatabase is using around 41GB's, while only taking up 18GB on the \nhard disk\n=== end of log ===\n\[email protected] wrote:\n> On Tue, 8 May 2007, �~]~N彦 Ian Li wrote:\n> \n>> In #postgresql on freenode, somebody ever mentioned that ZFS from \n>> Solaris helps a lot to the performance of pgsql, so dose anyone have \n>> information about that?\n> \n> the filesystem you use will affect the performance of postgres \n> significantly. I've heard a lot of claims for ZFS, unfortunantly many of \n> them from people who have prooven that they didn't know what they were \n> talking about by the end of their first or second e-mails.\n> \n> much of the hype for ZFS is it's volume management capabilities and \n> admin tools. Linux has most (if not all) of the volume management \n> capabilities, it just seperates them from the filesystems so that any \n> filesystem can use them, and as a result you use one tool to setup your \n> RAID, one to setup snapshots, and a third to format your filesystems \n> where ZFS does this in one userspace tool.\n> \n> once you seperate the volume management piece out, the actual \n> performance question is a lot harder to answer. there are a lot of \n> people who say that it's far faster then the alternate filesystems on \n> Solaris, but I haven't seen any good comparisons between it and Linux \n> filesystems.\n> \n> On Linux you have the choice of several filesystems, and the perfomance \n> will vary wildly depending on your workload. I personally tend to favor \n> ext2 (for small filesystems where the application is ensuring data \n> integrity) or XFS (for large filesystems)\n> \n> I personally don't trust reiserfs, jfs seems to be a tools for \n> transitioning from AIX more then anything else, and ext3 seems to have \n> all the scaling issues of ext2 plus the overhead (and bottleneck) of \n> journaling.\n> \n> one issue with journaling filesystems, if you journal the data as well \n> as the metadata you end up with a very reliable setup, however it means \n> that all your data needs to be written twice, oncce to the journal, and \n> once to the final location. 
the write to the journal can be slightly \n> faster then a normal write to the final location (the journal is a \n> sequential write to an existing file), however the need to write twice \n> can effectivly cut your disk I/O bandwidth in half when doing heavy \n> writes. worse, when you end up writing mor ethen will fit in the journal \n> (128M is the max for ext3) the entire system then needs to stall while \n> the journal gets cleared to make space for the additional writes.\n> \n> if you don't journal your data then you avoid the problems above, but in \n> a crash you may find that you lost data, even though the filesystem is \n> 'intact' according to fsck.\n> \n> David Lang\n> \nRegards\nIan\n",
"msg_date": "Wed, 09 May 2007 00:58:27 +0800",
"msg_from": "=?UTF-8?B?5p2O5b2mIElhbiBMaQ==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
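A minimal sketch of the ZFS compression setup the chat log above is talking about; the pool/dataset name and the database name are hypothetical. The compressratio property reports roughly the factor Solatis quotes, and comparing the size PostgreSQL reports with the on-disk usage shows the same effect.

    zfs set compression=on tank/pgdata             # enable the default (lzjb) compression
    zfs get compression,compressratio tank/pgdata  # e.g. 2.23x as in the log
    psql -c "SELECT pg_size_pretty(pg_database_size('mydb'));"   # logical size
    du -sh /tank/pgdata                            # physical size on disk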
{
"msg_contents": "On Tue, 8 May 2007, [email protected] wrote:\n\n> one issue with journaling filesystems, if you journal the data as well as the \n> metadata you end up with a very reliable setup, however it means that all \n> your data needs to be written twice, oncce to the journal, and once to the \n> final location. the write to the journal can be slightly faster then a normal \n> write to the final location (the journal is a sequential write to an existing \n> file), however the need to write twice can effectivly cut your disk I/O \n> bandwidth in half when doing heavy writes. worse, when you end up writing mor \n> ethen will fit in the journal (128M is the max for ext3) the entire system \n> then needs to stall while the journal gets cleared to make space for the \n> additional writes.\n>\n> if you don't journal your data then you avoid the problems above, but in a \n> crash you may find that you lost data, even though the filesystem is 'intact' \n> according to fsck.\n\nThat sounds like an ad for FreeBSD and UFS2+Softupdates. :)\n\nMetadata is as safe as it is in a journaling filesystem, but none of the \noverhead of journaling.\n\nCharles\n\n> David Lang\n>\n>> Steve Atkins wrote:\n>>>\n>>> On May 7, 2007, at 2:55 PM, David Levy wrote:\n>>> \n>>> > Hi,\n>>> > > I am about to order a new server for my Postgres cluster. I will\n>>> > probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n>>> > Which OS would you recommend to optimize Postgres behaviour (i/o\n>>> > access, multithreading, etc) ?\n>>> > > I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n>>> > help with this ?\n>>>\n>>> Well, all three you mention are much the same, just with a different\n>>> badge on the box, as far as performance is concerned. They're all\n>>> going to be a moderately recent Linux kernel, with your choice\n>>> of filesystems, so any choice between them is going to be driven\n>>> more by available staff and support or personal preference.\n>>>\n>>> I'd probably go CentOS 5 over Fedora just because Fedora doesn't\n>>> get supported for very long - more of an issue with a dedicated\n>>> database box with a long lifespan than your typical desktop or\n>>> interchangeable webserver.\n>>>\n>>> I might also look at Solaris 10, though. I've yet to play with it much,\n>>> but it\n>>> seems nice, and I suspect it might manage 8 cores better than current\n>>> Linux setups.\n>>>\n>>> Cheers,\n>>> Steve\n>>> \n>>> \n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 5: don't forget to increase your free space map settings\n>>> \n>> \n>> Regards\n>> \n>> Ian\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n",
"msg_date": "Tue, 8 May 2007 16:49:58 -0400 (EDT)",
"msg_from": "Charles Sprickman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
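A small sketch of how soft updates are enabled on FreeBSD UFS2, for anyone who wants to try what Charles describes; the device and mount point names are hypothetical.

    newfs -U /dev/da0s1d            # enable soft updates when creating the filesystem
    tunefs -n enable /dev/da0s1d    # or enable them later, while the filesystem is unmounted
    mount /dev/da0s1d /usr/local/pgsql/data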
{
"msg_contents": "On Tue, 8 May 2007, Tom Lane wrote:\n\n> What Debian has done is set up an arrangement that lets you run two (or\n> more) different PG versions in parallel. Since that's amazingly helpful\n> during a major-PG-version upgrade, most of the other packagers are\n> scheming how to do something similar.\n\nI alluded to that but it is worth going into more detail on for those not \nfamiliar with this whole topic. I normally maintain multiple different PG \nversions in parallel already, mostly using environment variables to switch \nbetween them with some shell code. Debian has taken an approach where \ncommands like pg_ctl are wrapped in multi-version/cluster aware scripts, \nso you can do things like restarting multiple installations more easily \nthan that.\n\nMy issue wasn't with the idea, it was with the implementation. When I \nhave my newbie hat on, it adds a layer of complexity that isn't needed for \nsimple installs. And when I have my developer hat on, I found that need \nto conform to the requirements of that system on top of Debian's already \nunique install locations and packaging issues just made it painful to \nbuild and work with with customized versions of Postgres, compared to \ndistributions that use a relatively simple packaging scheme (like the RPM \nbased RedHat or SuSE).\n\nI hope anyone else working this problem is thinking about issues like \nthis. Debian's approach strikes me as being a good one for a seasoned \nsystems administrator or DBA, which is typical for them. I'd hate to see \na change in this area make it more difficult for new users though, as \nthat's already perceived as a PG weakness. I think you can build a layer \nthat adds the capability for the people who need it without complicating \nthings for people who don't.\n\n> and if someday you want commercial support for your OS, a Centos->RHEL \n> update will get you there easily.\n\nFor those that like to live dangerously, it's also worth mentioning that \nit's possible to hack this conversion in either direction without actually \ndoing an OS re-install/upgrade just by playing with the packages that are \ndifferent between the two. So someone who installs CentOS now could swap \nto RHEL very quickly in a pinch if they have enough cojones to do the \nrequired package substitutions.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 8 May 2007 23:31:55 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2 "
},
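As an illustration of the environment-variable approach Greg contrasts with Debian's wrappers, a hypothetical shell snippet; the install paths, data directories and port numbers are made up.

    # switch the current shell to a particular PostgreSQL build
    pg_use() {
        export PGHOME=/opt/pgsql-$1
        export PATH=$PGHOME/bin:$PATH
        export PGDATA=/var/pgsql/data-$1
        export PGPORT=$2
    }
    pg_use 8.2.4 5433    # pg_ctl, psql, etc. now refer to the 8.2.4 installation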
{
"msg_contents": "On Tue, 8 May 2007, Luke Lonergan wrote:\n\n> From discussions with the developers, the biggest issue is a technical \n> one: the Linux VFS layer makes the [ZFS] port difficult.\n\nDifficult on two levels. First you'd have to figure out how to make it \nwork at all; then you'd have to reshape it into a form that it would be \nacceptable to the Linux kernel developers, who haven't seemed real keen on \nthe idea so far.\n\nThe standard article I'm you've already seen this week on this topic is \nJeff Bonwick's at \nhttp://blogs.sun.com/bonwick/entry/rampant_layering_violation\n\nWhat really bugged me was his earlier article linked to there where he \ntalks about how ZFS eliminates the need for hardware RAID controllers:\nhttp://blogs.sun.com/bonwick/entry/raid_z\n\nWhile there may be merit to that idea for some applications, like \nsituations where you have a pig of a RAID5 volume, that's just hype for \ndatabase writes. \"We issue the SYNCHRONIZE CACHE command to the disks \nafter pushing all data in a transaction group\"--see, that would be the \npart the hardware controller is needed to accelerate. If you really care \nabout whether your data hit disk, there is no way to break the RPM barrier \nwithout hardware support. The fact that he misunderstands such a \nfundamental point makes me wonder what other gigantic mistakes might be \nburied in his analysis.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 8 May 2007 23:51:51 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Tue, 8 May 2007, Greg Smith wrote:\n\n> On Tue, 8 May 2007, Luke Lonergan wrote:\n>\n>> From discussions with the developers, the biggest issue is a technical\n>> one: the Linux VFS layer makes the [ZFS] port difficult.\n>\n> Difficult on two levels. First you'd have to figure out how to make it work \n> at all; then you'd have to reshape it into a form that it would be acceptable \n> to the Linux kernel developers, who haven't seemed real keen on the idea so \n> far.\n\ngiven that RAID, snapshots, etc are already in the linux kernel, I suspect \nthat what will need to happen is for the filesystem to be ported without \nthose features and then the userspace tools (that manipulate the volumes ) \nbe ported to use the things already in the kernel.\n\n> The standard article I'm you've already seen this week on this topic is Jeff \n> Bonwick's at http://blogs.sun.com/bonwick/entry/rampant_layering_violation\n\nyep, that sounds like what I've been hearing.\n\nwhat the ZFS (and reiserfs4) folks haven't been wanting to hear from the \nlinux kernel devs is that they are interested in having all these neat \nfeatures available for use with all filesystems (and the linux kernel has \na _lot_ of filesystems available), with solaris you basicly have UFS and \nZFS so it's not as big a deal.\n\n> What really bugged me was his earlier article linked to there where he talks \n> about how ZFS eliminates the need for hardware RAID controllers:\n> http://blogs.sun.com/bonwick/entry/raid_z\n>\n> While there may be merit to that idea for some applications, like situations \n> where you have a pig of a RAID5 volume, that's just hype for database writes. \n> \"We issue the SYNCHRONIZE CACHE command to the disks after pushing all data \n> in a transaction group\"--see, that would be the part the hardware controller \n> is needed to accelerate. If you really care about whether your data hit \n> disk, there is no way to break the RPM barrier without hardware support. The \n> fact that he misunderstands such a fundamental point makes me wonder what \n> other gigantic mistakes might be buried in his analysis.\n\nI've seen similar comments from some of the linux kernel devs, they've \nused low-end raid controllers with small processors on them and think that \na second core/socket in the main system to run software raid on is better.\n\nDavid Lang\n",
"msg_date": "Wed, 9 May 2007 01:57:51 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Wed, May 09, 2007 at 01:57:51AM -0700, [email protected] wrote:\n> given that RAID, snapshots, etc are already in the linux kernel, I suspect \n> that what will need to happen is for the filesystem to be ported without \n> those features and then the userspace tools (that manipulate the volumes ) \n> be ported to use the things already in the kernel.\n\nWell, part of the idea behind ZFS is that these parts are _not_ separated in\n\"layers\" -- for instance, the filesystem can push data down to the RAID level\nto determine the stripe size used.\n\nWhether this is a good idea is of course hotly debated, but I don't think you\ncan port just the filesystem part and call it a day.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 9 May 2007 11:38:24 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Wed, 9 May 2007, Steinar H. Gunderson wrote:\n\n> On Wed, May 09, 2007 at 01:57:51AM -0700, [email protected] wrote:\n>> given that RAID, snapshots, etc are already in the linux kernel, I suspect\n>> that what will need to happen is for the filesystem to be ported without\n>> those features and then the userspace tools (that manipulate the volumes )\n>> be ported to use the things already in the kernel.\n>\n> Well, part of the idea behind ZFS is that these parts are _not_ separated in\n> \"layers\" -- for instance, the filesystem can push data down to the RAID level\n> to determine the stripe size used.\n\nthere's nothing preventing this from happening if they are seperate layers \neither.\n\nthere are some performance implications of the seperate layers, but until \nsomeone has the ability to do head-to-head comparisons it's hard to say \nwhich approach will win (in theory the lack of layers makes for faster \ncode, but in practice the fact that each layer is gone over by experts \nlooking for ways to optimize it may overwelm the layering overhead)\n\n> Whether this is a good idea is of course hotly debated, but I don't think you\n> can port just the filesystem part and call it a day.\n\nOh, I'm absolutly sure that doing so won't satidfy people (wnd would \ngenerate howles of outrage from some parts), but having watched other \ngroups try and get things into the kernel that the kernel devs felt were \nlayering violations I think that it's wat will ultimatly happen.\n\nDavid Lang\n",
"msg_date": "Wed, 9 May 2007 02:44:26 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "Hello Ian,\n\nI have done some testing with postgresql and ZFS on Solaris 10 11/06.\nWhile I work for Sun, I dont claim to be a ZFS expert (for that matter \nnot even Solaris or PostgreSQL).\n\nLets first look at the scenarios of how postgresql can be deployed on \nSolaris\nFirst the Solaris Options\n1. UFS with default setup (which is buffered file system)\n2. UFS with forcedirectio option (or unbuffered file system)\n3. ZFS by default (128K recordsize with checksum but no compression)\n4. ZFS with Compression (Default compression using LZ* algorithm .. now \neven a gzip algorithm is supported)\n\n(For simplicity I am not considering RAID levels here since that \nincreases the number of scenarios quite a bit and also skipping Solaris \nVolume Manager - legacy volume management capabilities in Solaris)\n\nNow for the postgresql.conf options\na. wal_sync_method set to default - maps to opendatasync\nb. wal_sync_method set to fdatasync\n\n(assuming checkpoint_segments and wal_buffers are high already)\n\n(This are my tests results based on the way I used the workload and \nyour mileage will vary)\nSo with this type of configurations I found the following\n1a. Default UFS with default wal_sync_method - Sucks for me mostly \nusing pgbench or EAStress type workloads\n1b. Default UFS with fdatasync - works well specially increasing \nsegmapsize from default 12% to higher values\n2a ForcedirectIO with default wal_sync_method - works well but then is \nlimited to hardware disk performances\n (In a way good to have RAID controller with big Write cache for \nit.. One advantage is lower system cpu utilization)\n2b Didn't see huge difference from 2a in this case\n3a It was better than 1a but still limited\n3b It was better even than 3a and 1b but cpu utilization seemed higher\n4a - Didn't test this out\n4b - Hard to say since in my case since I wasnt disk bound (per se) but \nCPU bound. The compression helps when number of IOs to the disk are high \nand it helps to cut it down at the cost of CPU cycles\n\n\nOverall ZFS seems to improve performance with PostgreSQL on Solaris 10 \nwith a bit increased system times compared to UFS.\n(So the final results depends on the metrics that you are measuring the \nperformance :-) ) (ZFS engineers are constantly improving the \nperformance and I have seen the improvements from Solaris 10 1/06 \nrelease to my current setup)\n\nOf course I haven't compared against any other OS.. If someone has \nalready done that I would be interested in knowing the results.\n\nNow comes the thing that I am still exploring\n* Do we do checksum in WAL ? I guess we do .. Which means that we are \nnow doing double checksumming on the data. One in ZFS and one in \npostgresql. ZFS does allow checksumming to be turned off (but on new \nblocks allocated). But of course the philosophy is where should it be \ndone (ZFS or PostgreSQL). ZFS checksumming gives ability to correct the \ndata on the bad checksum if you use mirror devices. PostgreSQL doesnt \ngive that ability and in case of an error would fail. 
( I dont know the \nexact behavior of postgresql when it would encounter a failed checksum)\n\nHope this helps.\n\n\nRegards,\nJignesh\n\n\n\n李彦 Ian Li wrote:\n> In #postgresql on freenode, somebody ever mentioned that ZFS from \n> Solaris helps a lot to the performance of pgsql, so dose anyone have \n> information about that?\n>\n> Steve Atkins wrote:\n>>\n>> On May 7, 2007, at 2:55 PM, David Levy wrote:\n>>\n>>> Hi,\n>>>\n>>> I am about to order a new server for my Postgres cluster. I will\n>>> probably get a Dual Xeon Quad Core instead of my current Dual Xeon.\n>>> Which OS would you recommend to optimize Postgres behaviour (i/o\n>>> access, multithreading, etc) ?\n>>>\n>>> I am hesitating between Fedora Core 6, CentOS and Debian. Can anyone\n>>> help with this ?\n>>\n>> Well, all three you mention are much the same, just with a different\n>> badge on the box, as far as performance is concerned. They're all\n>> going to be a moderately recent Linux kernel, with your choice\n>> of filesystems, so any choice between them is going to be driven\n>> more by available staff and support or personal preference.\n>>\n>> I'd probably go CentOS 5 over Fedora just because Fedora doesn't\n>> get supported for very long - more of an issue with a dedicated\n>> database box with a long lifespan than your typical desktop or\n>> interchangeable webserver.\n>>\n>> I might also look at Solaris 10, though. I've yet to play with it \n>> much, but it\n>> seems nice, and I suspect it might manage 8 cores better than current\n>> Linux setups.\n>>\n>> Cheers,\n>> Steve\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>\n> Regards\n>\n> Ian\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n",
"msg_date": "Wed, 09 May 2007 17:27:30 +0100",
"msg_from": "Jignesh Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "ZFS and Postgresql - WASRe: Best OS for Postgres 8.2"
},
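A sketch of one way the scenarios Jignesh lists could be set up; the pool and dataset names are hypothetical, and the recordsize/compression/checksum settings are only examples of the knobs he mentions, not his exact configuration.

    # ZFS data area matched to PostgreSQL's 8K blocks, with compression (his scenario 4)
    zfs create -o recordsize=8k -o compression=on pgpool/pgdata
    zfs set checksum=off pgpool/pgdata     # only if you decide to avoid double checksumming

    # postgresql.conf lines corresponding to his (a) and (b)
    # wal_sync_method = open_datasync
    # wal_sync_method = fdatasync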
{
"msg_contents": "Jignesh Shah escribi�:\n\n> Now comes the thing that I am still exploring\n> * Do we do checksum in WAL ? I guess we do .. Which means that we are \n> now doing double checksumming on the data. One in ZFS and one in \n> postgresql. ZFS does allow checksumming to be turned off (but on new \n> blocks allocated). But of course the philosophy is where should it be \n> done (ZFS or PostgreSQL).\n\nChecksums on WAL are not optional in Postgres, because AFAIR they are\nused to determine when it should stop recovering.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 9 May 2007 13:01:45 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS and Postgresql - WASRe: Best OS for Postgres 8.2"
},
{
"msg_contents": "On May 8, 2007, at 2:59 AM, [email protected] wrote:\n> one issue with journaling filesystems, if you journal the data as \n> well as the metadata you end up with a very reliable setup, however \n> it means that all your data needs to be written twice, oncce to the \n> journal, and once to the final location. the write to the journal \n> can be slightly faster then a normal write to the final location \n> (the journal is a sequential write to an existing file), however \n> the need to write twice can effectivly cut your disk I/O bandwidth \n> in half when doing heavy writes. worse, when you end up writing mor \n> ethen will fit in the journal (128M is the max for ext3) the entire \n> system then needs to stall while the journal gets cleared to make \n> space for the additional writes.\n\nThat's why you want to mount ext3 partitions used with PostgreSQL \nwith data=writeback.\n\nSome folks will also use a small filesystem for pg_xlog and mount \nthat as ext2.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Wed, 9 May 2007 12:10:34 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
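A hypothetical /etc/fstab fragment showing the layout Jim describes; device names and mount points are made up.

    # data area: ext3 journaling metadata only
    /dev/sdb1   /var/lib/pgsql/data           ext3   noatime,data=writeback   0 2
    # WAL on its own small ext2 partition
    /dev/sdc1   /var/lib/pgsql/data/pg_xlog   ext2   noatime                  0 2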
{
"msg_contents": "But we still pay the penalty on WAL while writing them in the first \nplace I guess .. Is there an option to disable it.. I can test how much \nis the impact I guess couple of %s but good to verify :-) )\n\n\nRegards,\nJignesh\n\n\nAlvaro Herrera wrote:\n> Jignesh Shah escribi�:\n>\n> \n>> Now comes the thing that I am still exploring\n>> * Do we do checksum in WAL ? I guess we do .. Which means that we are \n>> now doing double checksumming on the data. One in ZFS and one in \n>> postgresql. ZFS does allow checksumming to be turned off (but on new \n>> blocks allocated). But of course the philosophy is where should it be \n>> done (ZFS or PostgreSQL).\n>> \n>\n> Checksums on WAL are not optional in Postgres, because AFAIR they are\n> used to determine when it should stop recovering.\n>\n> \n",
"msg_date": "Wed, 09 May 2007 18:49:16 +0100",
"msg_from": "Jignesh Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ZFS and Postgresql - WASRe: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Wed, 9 May 2007, Jignesh Shah wrote:\n\n> But we still pay the penalty on WAL while writing them in the first place I \n> guess .. Is there an option to disable it.. I can test how much is the impact \n> I guess couple of %s but good to verify :-) )\n\non modern CPU's where the CPU is significantly faster then RAM, \ncalculating a checksum is free if the CPU has to touch the data anyway \n(cycles where it would be waiting for a cache miss are spent doing the \ncalculations)\n\nif you don't believe me, hack the source to remove the checksum and see if \nyou can measure any difference.\n\nDavid Lang\n\n >\n> Regards,\n> Jignesh\n>\n>\n> Alvaro Herrera wrote:\n>> Jignesh Shah escribi�:\n>>\n>> \n>> > Now comes the thing that I am still exploring\n>> > * Do we do checksum in WAL ? I guess we do .. Which means that we are \n>> > now doing double checksumming on the data. One in ZFS and one in \n>> > postgresql. ZFS does allow checksumming to be turned off (but on new \n>> > blocks allocated). But of course the philosophy is where should it be \n>> > done (ZFS or PostgreSQL).\n>> > \n>>\n>> Checksums on WAL are not optional in Postgres, because AFAIR they are\n>> used to determine when it should stop recovering.\n>>\n>> \n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n>\n>From [email protected] Wed May 9 15:41:21 2007\nReceived: from localhost (maia-3.hub.org [200.46.204.184])\n\tby postgresql.org (Postfix) with ESMTP id A90F09FB46E\n\tfor <[email protected]>; Wed, 9 May 2007 15:41:20 -0300 (ADT)\nReceived: from postgresql.org ([200.46.204.71])\n by localhost (mx1.hub.org [200.46.204.184]) (amavisd-maia, port 10024)\n with ESMTP id 85364-01 for <[email protected]>;\n Wed, 9 May 2007 15:41:08 -0300 (ADT)\nX-Greylist: domain auto-whitelisted by SQLgrey-1.7.4\nReceived: from wr-out-0506.google.com (wr-out-0506.google.com [64.233.184.235])\n\tby postgresql.org (Postfix) with ESMTP id E15479FA217\n\tfor <[email protected]>; Wed, 9 May 2007 15:41:09 -0300 (ADT)\nReceived: by wr-out-0506.google.com with SMTP id 70so302762wra\n for <[email protected]>; Wed, 09 May 2007 11:41:08 -0700 (PDT)\nDKIM-Signature: a=rsa-sha1; c=relaxed/relaxed;\n d=gmail.com; s=beta;\n h=domainkey-signature:received:received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:references;\n b=g1zIF4bAuqSx9xY+tHoUSfcG0PAUOwOTe/AxVWbVOTEMdg8dtdSN022EuYUR1Ow/OrtKdf7bzeTn1Lru6qy8mYM+ZCELf04aKExcZ1W7OE/+504RcvV1fEzZMMYPjl2cO1CppR+77BGvuUGv9MAW4YKTqiN8LXtTZi9C+FJehW4=\nDomainKey-Signature: a=rsa-sha1; c=nofws;\n d=gmail.com; s=beta;\n h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:references;\n b=JvyBekBF3lPKnLONW3Ns3+UkNrN9jQpUvP8mqdN0sBpzp/hPdP01ie72wobmCY3FE3eomuoWgSk1t2sg/HR7w+tvpmK4kMN1PvTg9lNG0uJzdkMGGx70revTVRjuzk2Yb2Vbw2g1ZgrCyOqdm+LAhwi04iHei+QFI3YEzCmonEs=\nReceived: by 10.114.74.1 with SMTP id w1mr252771waa.1178736067911;\n Wed, 09 May 2007 11:41:07 -0700 (PDT)\nReceived: by 10.115.19.4 with HTTP; Wed, 9 May 2007 11:41:07 -0700 (PDT)\nMessage-ID: <[email protected]>\nDate: Wed, 9 May 2007 20:41:07 +0200\nFrom: \"Valentine Gogichashvili\" <[email protected]>\nTo: \"Oleg Bartunov\" <[email protected]>\nSubject: Re: Cannot make GIN intarray index be used by the planner\nCc: [email protected]\nIn-Reply-To: <[email protected]>\nMIME-Version: 1.0\nContent-Type: multipart/alternative; \n\tboundary=\"----=_Part_110820_33458115.1178736067834\"\nReferences: <[email protected]>\n\t 
<[email protected]>\n\t <[email protected]>\n\t <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Archive-Number: 200705/207\nX-Sequence-Number: 24597\n\n------=_Part_110820_33458115.1178736067834\nContent-Type: text/plain; charset=UTF-8; format=flowed\nContent-Transfer-Encoding: base64\nContent-Disposition: inline\n\nSGkgYWdhaW4sCgp0aGUgdmVyc2lvbiBvZiB0aGUgc2VydmVyIEkgYW0gb24gaXMgUG9zdGdyZVNR\nTCA4LjIuMyBvbiBpNjg2LXBjLWxpbnV4LWdudSwKY29tcGlsZWQgYnkgR0NDIGdjYyAoR0NDKSA0\nLjAuMiAyMDA1MDkwMSAocHJlcmVsZWFzZSkgKFNVU0UgTGludXgpCgpoZXJlIGlzIHRoZSBEVAoK\nQ1JFQVRFIFRBQkxFICJ2ZXJzaW9uQSIubXlpbnRhcnJheV90YWJsZV9ub251bGxzCigKICBpZCBp\nbnRlZ2VyLAogIG15aW50YXJyYXlfaW50NCBpbnRlZ2VyW10KKQpXSVRIT1VUIE9JRFM7CgpDUkVB\nVEUgSU5ERVggaWR4X25vbm51bGxzX215aW50YXJyYXlfaW50NF9naW4KICBPTiAidmVyc2lvbkEi\nLm15aW50YXJyYXlfdGFibGVfbm9udWxscwogIFVTSU5HIGdpbgogIChteWludGFycmF5X2ludDQp\nOwoKdGhlcmUgYXJlIDc0NTk4OSByZWNvcmRzIGluIHRoZSB0YWJsZSB3aXRoIG5vIG51bGwgdmFs\ndWVzIGZvciB0aGUKbXlpbnRhcnJheV9pbnQ0CmZpZWxkLgoKU28gaGVyZSBpcyB0aGUgZXhlY3V0\naW9uIHBsYW4KCm15dmlkZW9pbmRleD0jIGV4cGxhaW4gYW5hbHl6ZSBTRUxFQ1QgaWQsIGljb3Vu\ndChteWludGFycmF5X2ludDQpCiAgRlJPTSAidmVyc2lvbkEiLm15aW50YXJyYXlfdGFibGVfbm9u\ndWxscwogV0hFUkUgQVJSQVlbOF0gPEAgbXlpbnRhcnJheV9pbnQ0OwogICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFFVRVJZIFBM\nQU4KLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIFNlcSBTY2FuIG9uIG15aW50YXJyYXlfdGFi\nbGVfbm9udWxscwooY29zdD0xMDAwMDAwMDAuMDAuLjEwMDAxNTI2Ny43M3Jvd3M9NzQ2IHdpZHRo\nPTMyKSAoYWN0dWFsIHRpbWU9CjAuMDc5Li4xMTU2LjM5MyByb3dzPTI4MjA3IGxvb3BzPTEpCiAg\nIEZpbHRlcjogKCd7OH0nOjppbnRlZ2VyW10gPEAgbXlpbnRhcnJheV9pbnQ0KQogVG90YWwgcnVu\ndGltZTogMTI2Ni4zNDYgbXMKKDMgcm93cykKClRoZW4gSSBkcm9wIHRoZSBHSU4gYW5kIGNyZWF0\nZSBhIEdpU1QgaW5kZXgKCkRST1AgSU5ERVggInZlcnNpb25BIi5pZHhfbm9ubnVsbHNfbXlpbnRh\ncnJheV9pbnQ0X2dpbjsKCkNSRUFURSBJTkRFWCBpZHhfbm9ubnVsbHNfbXlpbnRhcnJheV9pbnQ0\nX2dpc3QKICBPTiAidmVyc2lvbkEiLm15aW50YXJyYXlfdGFibGVfbm9udWxscwogIFVTSU5HIGdp\nc3QKICAobXlpbnRhcnJheV9pbnQ0KTsKCmFuZCBoZXJlIGFyZSB0aGUgcmVzdWx0cyBmb3IgdGhl\nIGV4ZWN1dGlvbiBwbGFuCgpteXZpZGVvaW5kZXg9IyBleHBsYWluIGFuYWx5emUgU0VMRUNUIGlk\nLCBpY291bnQobXlpbnRhcnJheV9pbnQ0KQpteXZpZGVvaW5kZXgtIyAgIEZST00gInZlcnNpb25B\nIi5teWludGFycmF5X3RhYmxlX25vbnVsbHMKbXl2aWRlb2luZGV4LSMgIFdIRVJFIEFSUkFZWzhd\nIDxAIG15aW50YXJyYXlfaW50NDsKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgUVVFUlkKUExBTgotLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQogQml0bWFwIEhlYXAgU2NhbiBvbiBteWludGFycmF5X3Rh\nYmxlX25vbnVsbHMgIChjb3N0PTQyLjM2Li4yMTM3LjYyIHJvd3M9NzQ2CndpZHRoPTMyKSAoYWN0\ndWFsIHRpbWU9MTU0LjI3Ni4uMzAxLjYxNSByb3dzPTI4MjA3IGxvb3BzPTEpCiAgIFJlY2hlY2sg\nQ29uZDogKCd7OH0nOjppbnRlZ2VyW10gPEAgbXlpbnRhcnJheV9pbnQ0KQogICAtPiAgQml0bWFw\nIEluZGV4IFNjYW4gb24gaWR4X25vbm51bGxzX215aW50YXJyYXlfaW50NF9naXN0ICAoY29zdD0K\nMC4wMC4uNDIuMTcgcm93cz03NDYgd2lkdGg9MCkgKGFjdHVhbCB0aW1lPTE1MC43MTMuLjE1MC43\nMTMgcm93cz0yODIwNwpsb29wcz0xKQogICAgICAgICBJbmRleCBDb25kOiAoJ3s4fSc6OmludGVn\nZXJbXSA8QCBteWludGFycmF5X2ludDQpCiBUb3RhbCBydW50aW1lOiA0MTAuMzk0IG1zCig1IHJv\nd3MpCgpBcyB5b3UgY2FuIHNlZSB0aGUgaW5kZXggaXMgaW4gdXNlLi4uCgpOb3cgSSBjcmVhdGUg\nY3JlYXRlIHRoZSBzYW1lIHRhYmxlIHdpdGggbXlpbnRhcnJheV9pbnQ0IGNvbnZlcnRlZCBpbnRv\nIHRleHQKYXJyYXkgYW5kIGNyZ
WF0ZSBhIEdJTiBpbmRleCBvbiB0aGUgbmV3IHRleHQgYXJyYXkg\nZmllbGQKClNFTEVDVCBpZCwgbXlpbnRhcnJheV9pbnQ0Ojp0ZXh0W10gYXMgbXlpbnRhcnJheV9p\nbnQ0X3RleHQgaW50bwpteWludGFycmF5X3RhYmxlX25vbnVsbHNfdGV4dCBmcm9tIG15aW50YXJy\nYXlfdGFibGVfbm9udWxsczsKCkNSRUFURSBJTkRFWCBpZHhfbm9ubnVsbHNfbXlpbnRhcnJheV9p\nbnQ0X3RleHRfZ2luCiAgT04gInZlcnNpb25BIi5teWludGFycmF5X3RhYmxlX25vbnVsbHNfdGV4\ndAogIFVTSU5HIGdpbgogIChteWludGFycmF5X2ludDRfdGV4dCk7CgphbmQgaGF2ZSBhIHRhYmxl\nIHdpdGggRFQ6CgpDUkVBVEUgVEFCTEUgInZlcnNpb25BIi5teWludGFycmF5X3RhYmxlX25vbnVs\nbHNfdGV4dAooCiAgaWQgaW50ZWdlciwKICBteWludGFycmF5X2ludDRfdGV4dCB0ZXh0W10KKQpX\nSVRIT1VUIE9JRFM7CgpOb3cgdGhlIHNhbWUgcmVxdWVzdCBoYXMgdGhlIGZvbGxvd2luZyBleGVj\ndXRpb24gcGxhbjoKCm15dmlkZW9pbmRleD0jIGV4cGxhaW4gYW5hbHl6ZSBTRUxFQ1QgaWQsIGFy\ncmF5X3VwcGVyKCBteWludGFycmF5X2ludDRfdGV4dCwKMSApCiAgRlJPTSAidmVyc2lvbkEiLm15\naW50YXJyYXlfdGFibGVfbm9udWxsc190ZXh0CiBXSEVSRSBBUlJBWVsnOCddIDxAIG15aW50YXJy\nYXlfaW50NF90ZXh0OwogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg\nICAgICAgICAgICAgICAgICAgICAgICAgICAgUVVFUlkKUExBTgotLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0KIEJpdG1hcCBIZWFwIFNjYW4gb24gbXlpbnRhcnJheV90YWJsZV9u\nb251bGxzX3RleHQKKGNvc3Q9MTAuMDYuLjIxMzYuOTdyb3dzPTc0NiB3aWR0aD0zNykgKGFjdHVh\nbCB0aW1lPQoxNy40NjMuLjE5MS4wOTQgcm93cz0yODIwNyBsb29wcz0xKQogICBSZWNoZWNrIENv\nbmQ6ICgnezh9Jzo6dGV4dFtdIDxAIG15aW50YXJyYXlfaW50NF90ZXh0KQogICAtPiAgQml0bWFw\nIEluZGV4IFNjYW4gb24gaWR4X25vbm51bGxzX215aW50YXJyYXlfaW50NF90ZXh0X2dpbiAgKGNv\nc3Q9CjAuMDAuLjkuODcgcm93cz03NDYgd2lkdGg9MCkgKGFjdHVhbCB0aW1lPTEzLjk4Mi4uMTMu\nOTgyIHJvd3M9MjgyMDcgbG9vcHM9MSkKICAgICAgICAgSW5kZXggQ29uZDogKCd7OH0nOjp0ZXh0\nW10gPEAgbXlpbnRhcnJheV9pbnQ0X3RleHQpCiBUb3RhbCBydW50aW1lOiAzMDMuMzQ4IG1zCig1\nIHJvd3MpCgoKSSBob3BlIHRoaXMgaW5mb3JtYXRpb24gd2lsbCBtYWtlIHRoZSBxdWVzdGlvbiBt\nb3JlIHVuZGVyc3RhbmRhYmxlLgoKV2l0aCBiZXN0IHJlZ2FyZHMsCgotLSBWYWxlbnRpbmUKCgoK\nT24gNS85LzA3LCBPbGVnIEJhcnR1bm92IDxvbGVnQHNhaS5tc3Uuc3U+IHdyb3RlOgo+Cj4gT24g\nV2VkLCA5IE1heSAyMDA3LCBWYWxlbnRpbmUgR29naWNoYXNodmlsaSB3cm90ZToKPgo+ID4gSSBo\nYXZlIGV4cGVyaW1lbnRlZCBxdWl0ZSBhIGxvdC4gU28gZmlyc3QgSSBkaWQgd2hlbiBzdGFydGlu\nZyB0aGUKPiBhdHRlbXB0IHRvCj4gPiBtb3ZlIGZyb20gR2lTVCB0byBHSU4sIHdhcyB0byBkcm9w\nIHRoZSBHaVNUIGluZGV4IGFuZCBjcmVhdGUgYSBicmFuZCBuZXcKPiBHSU4KPiA+IGluZGV4Li4u\nIGFmdGVyIHRoYXQgZGlkIG5vdCBicmluZyB0aGUgcmVzdWx0cywgSSBzdGFydGVkIHRvIGNyZWF0\nZSBhbGwKPiB0aGlzCj4gPiB0YWJsZXMgd2l0aCBkaWZmZXJlbnQgc2V0cyBvZiBpbmRleGVzIGFu\nZCBzbyBvbi4uLgo+ID4KPiA+IFNvIHRoZSBhbnN3ZXIgdG8gdGhlIHF1ZXN0aW9uIGlzOiBubyB0\naGVyZSBpbiBvbmx5IEdJTiBpbmRleCBvbiB0aGUKPiB0YWJsZS4KPgo+IHRoZW4sIHlvdSBoYXZl\nIHRvIHByb3ZpZGUgdXMgbW9yZSBpbmZvbWF0aW9uIC0KPiBwZyB2ZXJzaW9uLAo+IFxkdCBzb3Vy\nY2V0YWJsZXdpdGhfaW50NAo+IGV4cGxhaW4gYW5hbHl6ZQo+Cj4gYnR3LCBJIGRpZCB0ZXN0IG9m\nIGRldmVsb3BtZW50IHZlcnNpb24gb2YgR2lOLCBzZWUKPiBodHRwOi8vd3d3LnNhaS5tc3Uuc3Uv\nfm1lZ2VyYS93aWtpL0dpblRlc3QKPgo+ID4KPiA+IFRoYW5rIHlvdSBpbiBhZHZhbmNlLAo+ID4K\nPiA+IFZhbGVudGluZQo+ID4KPiA+IE9uIDUvOS8wNywgT2xlZyBCYXJ0dW5vdiA8b2xlZ0BzYWku\nbXN1LnN1PiB3cm90ZToKPiA+Pgo+ID4+IERvIHlvdSBoYXZlIGJvdGggaW5kZXhlcyAoR2lTVCwg\nR0lOKSBvbiB0aGUgc2FtZSB0YWJsZSA/Cj4gPj4KPiA+PiBPbiBXZWQsIDkgTWF5IDIwMDcsIFZh\nbGVudGluZSBHb2dpY2hhc2h2aWxpIHdyb3RlOgo+ID4+Cj4gPj4gPiBIZWxsbyBhbGwsCj4gPj4g\nPgo+ID4+ID4gSSBhbSB0cnlpbmcgdG8gbW92ZSBmcm9tIEdpU1QgaW50YXJyYXkgaW5kZXggdG8g\nR0lOIGludGFycmF5IGluZGV4LAo+IGJ1dAo+ID4+IG15Cj4gPj4gPiBHSU4gaW5kZXggaXMgbm90\nIGJlaW5nIHVzZWQgYnkgdGhlIHBsYW5uZXIuCj4gPj4gPgo+ID4+ID4gVGhlIG5vcm1hbC
BxdWVy\neSBpcyBsaWtlIHRoYXQKPiA+PiA+Cj4gPj4gPiBzZWxlY3QgKgo+ID4+ID4gZnJvbSBzb3VyY2V0\nYWJsZXdpdGhfaW50NAo+ID4+ID4gd2hlcmUgQVJSQVlbbXlpbnRdIDxAIG15aW50X2FycmF5Cj4g\nPj4gPiAgYW5kIHNvbWVfb3RoZXJfZmlsdGVycwo+ID4+ID4KPiA+PiA+ICh3aXRoIEdpU1QgaW5k\nZXggZXZlcnl0aGluZyB3b3JrcyBmaW5lLCBidXQgR0lOIGluZGV4IGlzIG5vdCBiZWluZwo+IHVz\nZWQpCj4gPj4gPgo+ID4+ID4gSWYgSSBjcmVhdGUgdGhlIHNhbWUgdGFibGUgcG9wdWxhdGluZyBp\ndCB3aXRoIHRleHRbXSBkYXRhIGxpa2UKPiA+PiA+Cj4gPj4gPiBzZWxlY3QgbXlpbnRfYXJyYXk6\nOnRleHRbXSBhcyBteWludF9hcnJheV9hc190ZXh0YXJyYXkKPiA+PiA+IGludG8gbmV3dGFibGV3\naXRoX3RleHQKPiA+PiA+IGZyb20gc291cmNldGFibGV3aXRoX2ludDQKPiA+PiA+Cj4gPj4gPiBh\nbmQgdGhlbiBjcmVhdGUgYSBHSU4gaW5kZXggdXNpbmcgdGhpcyBuZXcgdGV4dFtdIGNvbHVtbgo+\nID4+ID4KPiA+PiA+IHRoZSBwbGFubmVyIHN0YXJ0cyB0byB1c2UgdGhlIGluZGV4IGFuZCBxdWVy\naWVzIHJ1biB3aXRoIGdyYXRlIHNwZWVkCj4gPj4gd2hlbgo+ID4+ID4gdGhlIHF1ZXJ5IGxvb2tz\nIGxpa2UgdGhhdDoKPiA+PiA+Cj4gPj4gPiBzZWxlY3QgKgo+ID4+ID4gZnJvbSBuZXd0YWJsZXdp\ndGhfdGV4dAo+ID4+ID4gd2hlcmUgQVJSQVlbJ215aW50J10gPEAgbXlpbnRfYXJyYXlfYXNfdGV4\ndGFycmF5Cj4gPj4gPiAgYW5kIHNvbWVfb3RoZXJfZmlsdGVycwo+ID4+ID4KPiA+PiA+IFdoZXJl\nIHRoZSBwcm9ibGVtIGNhbiBiZSB3aXRoIF9pbnQ0IEdJTiBpbmRleCBpbiB0aGlzIGNvbnN0ZWxs\nYXRpb24/Cj4gPj4gPgo+ID4+ID4gYnkgbm93IHRoZSBlbmFibGVfc2Vxc2NhbiBpcyBzZXQgdG8g\nb2ZmIGluIHRoZSBjb25maWd1cmF0aW9uLgo+ID4+ID4KPiA+PiA+IFdpdGggYmVzdCByZWdhcmRz\nLAo+ID4+ID4KPiA+PiA+IC0tIFZhbGVudGluZSBHb2dpY2hhc2h2aWxpCj4gPj4gPgo+ID4+Cj4g\nPj4gICAgICAgICBSZWdhcmRzLAo+ID4+ICAgICAgICAgICAgICAgICBPbGVnCj4gPj4gX19fX19f\nX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwo+\nID4+IE9sZWcgQmFydHVub3YsIFJlc2VhcmNoIFNjaWVudGlzdCwgSGVhZCBvZiBBc3Ryb05ldCAo\nd3d3LmFzdHJvbmV0LnJ1KSwKPiA+PiBTdGVybmJlcmcgQXN0cm9ub21pY2FsIEluc3RpdHV0ZSwg\nTW9zY293IFVuaXZlcnNpdHksIFJ1c3NpYQo+ID4+IEludGVybmV0OiBvbGVnQHNhaS5tc3Uuc3Us\nIGh0dHA6Ly93d3cuc2FpLm1zdS5zdS9+bWVnZXJhLwo+ID4+IHBob25lOiArMDA3KDQ5NSk5Mzkt\nMTYtODMsICswMDcoNDk1KTkzOS0yMy04Mwo+ID4+Cj4gPgo+ID4KPiA+Cj4gPgo+Cj4gICAgICAg\nICBSZWdhcmRzLAo+ICAgICAgICAgICAgICAgICBPbGVnCj4gX19fX19fX19fX19fX19fX19fX19f\nX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwo+IE9sZWcgQmFydHVub3Ys\nIFJlc2VhcmNoIFNjaWVudGlzdCwgSGVhZCBvZiBBc3Ryb05ldCAod3d3LmFzdHJvbmV0LnJ1KSwK\nPiBTdGVybmJlcmcgQXN0cm9ub21pY2FsIEluc3RpdHV0ZSwgTW9zY293IFVuaXZlcnNpdHksIFJ1\nc3NpYQo+IEludGVybmV0OiBvbGVnQHNhaS5tc3Uuc3UsIGh0dHA6Ly93d3cuc2FpLm1zdS5zdS9+\nbWVnZXJhLwo+IHBob25lOiArMDA3KDQ5NSk5MzktMTYtODMsICswMDcoNDk1KTkzOS0yMy04Mwo+\nCgoKCi0tIArhg5Xhg5Dhg5rhg5Thg5zhg6Lhg5jhg5wg4YOS4YOd4YOS4YOY4YOp4YOQ4YOo4YOV\n4YOY4YOa4YOYClZhbGVudGluZSBHb2dpY2hhc2h2aWxpCg==\n------=_Part_110820_33458115.1178736067834\nContent-Type: text/html; charset=UTF-8\nContent-Transfer-Encoding: base64\nContent-Disposition: 
inline\n\nSGkgYWdhaW4sIDxicj48YnI+dGhlIHZlcnNpb24gb2YgdGhlIHNlcnZlciBJIGFtIG9uIGlzIDxm\nb250IHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyIgc2l6ZT0iMSI+\nUG9zdGdyZVNRTCA4LjIuMyBvbiBpNjg2LXBjLWxpbnV4LWdudSwgY29tcGlsZWQgYnkgR0NDIGdj\nYyAoR0NDKSA0LjAuMiAyMDA1MDkwMSAocHJlcmVsZWFzZSkgKFNVU0UgTGludXgpPC9mb250Pgo8\nYnI+PGJyPmhlcmUgaXMgdGhlIERUPGJyPjxmb250IHNpemU9IjEiPjxiciBzdHlsZT0iZm9udC1m\nYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTog\nY291cmllciBuZXcsbW9ub3NwYWNlOyI+Q1JFQVRFIFRBQkxFICZxdW90O3ZlcnNpb25BJnF1b3Q7\nLm15aW50YXJyYXlfdGFibGVfbm9udWxsczwvc3Bhbj48YnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBj\nb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4KPHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVy\nIG5ldyxtb25vc3BhY2U7Ij4oPC9zcGFuPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIg\nbmV3LG1vbm9zcGFjZTsiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9u\nb3NwYWNlOyI+Jm5ic3A7IGlkIGludGVnZXIsPC9zcGFuPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6\nIGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPgo8c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJp\nZXIgbmV3LG1vbm9zcGFjZTsiPiZuYnNwOyBteWludGFycmF5X2ludDQgaW50ZWdlcltdPC9zcGFu\nPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPjxzcGFuIHN0\neWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+KSA8L3NwYW4+PGJyIHN0\neWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+CjxzcGFuIHN0eWxlPSJm\nb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+V0lUSE9VVCBPSURTOzwvc3Bhbj48\nc3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPjwvc3Bhbj48\nYnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48YnIgc3R5bGU9\nImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4KPHNwYW4gc3R5bGU9ImZvbnQt\nZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48L3NwYW4+PHNwYW4gc3R5bGU9ImZvbnQt\nZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij5DUkVBVEUgSU5ERVggaWR4X25vbm51bGxz\nX215aW50YXJyYXlfaW50NF9naW48L3NwYW4+PGJyIHN0eWxlPSJmb250LWZhbWlseTogY291cmll\nciBuZXcsbW9ub3NwYWNlOyI+PHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxt\nb25vc3BhY2U7Ij4KJm5ic3A7IE9OICZxdW90O3ZlcnNpb25BJnF1b3Q7Lm15aW50YXJyYXlfdGFi\nbGVfbm9udWxsczwvc3Bhbj48YnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25v\nc3BhY2U7Ij48c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsi\nPiZuYnNwOyBVU0lORyBnaW48L3NwYW4+PGJyIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBu\nZXcsbW9ub3NwYWNlOyI+CjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9u\nb3NwYWNlOyI+Jm5ic3A7IChteWludGFycmF5X2ludDQpOzwvc3Bhbj48YnIgc3R5bGU9ImZvbnQt\nZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48L2ZvbnQ+PGJyPnRoZXJlIGFyZSA3NDU5\nODkgcmVjb3JkcyBpbiB0aGUgdGFibGUgd2l0aCBubyBudWxsIHZhbHVlcyBmb3IgdGhlIDxmb250\nIHNpemU9IjEiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNl\nOyI+Cm15aW50YXJyYXlfaW50NCA8L3NwYW4+PC9mb250PjxzcGFuPmZpZWxkPC9zcGFuPjxmb250\nIHNpemU9IjEiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNl\nOyI+Ljwvc3Bhbj48L2ZvbnQ+PGJyPjxicj5TbyBoZXJlIGlzIHRoZSBleGVjdXRpb24gcGxhbjxi\ncj48YnI+PGZvbnQgc2l6ZT0iMSI+PHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5l\ndyxtb25vc3BhY2U7Ij4KbXl2aWRlb2luZGV4PSMgZXhwbGFpbiBhbmFseXplIFNFTEVDVCBpZCwg\naWNvdW50KG15aW50YXJyYXlfaW50NCk8L3NwYW4+PGJyIHN0eWxlPSJmb250LWZhbWlseTogY291\ncmllciBuZXcsbW9ub3NwYWNlOyI+PHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5l\ndyxtb25vc3BhY2U7Ij4mbmJzcDsgRlJPTSAmcXVvdDt2ZXJzaW9uQSZxdW90Oy5teWludGFycmF5\nX3RhYmxlX25vbnVsbHMKPC9zcGFuPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3\nLG1vbm9zcGFjZTsiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3Nw\nYWNlOyI+Jm5ic3A7V0hFUkUgQVJSQVlbOF0gJmx0O0AgbXlpbnRhcnJheV9pbnQ0Ozwvc3Bhbj48\nYnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3V
yaWVyIG5ldyxtb25vc3BhY2U7Ij48c3BhbiBzdHls\nZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPgombmJzcDsmbmJzcDsmbmJz\ncDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm\nbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz\ncDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm\nbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz\ncDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm\nbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz\ncDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgUVVFUlkgUExBTjwvc3Bhbj48YnIgc3R5bGU9ImZv\nbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48c3BhbiBzdHlsZT0iZm9udC1mYW1p\nbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tCjwv\nc3Bhbj48YnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48c3Bh\nbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPiZuYnNwO1NlcSBT\nY2FuIG9uIG15aW50YXJyYXlfdGFibGVfbm9udWxscyZuYnNwOyAoY29zdD0xMDAwMDAwMDAuMDAu\nLjEwMDAxNTI2Ny43MyByb3dzPTc0NiB3aWR0aD0zMikgKGFjdHVhbCB0aW1lPTAuMDc5Li4xMTU2\nLjM5Mwogcm93cz0yODIwNyBsb29wcz0xKTwvc3Bhbj48YnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBj\nb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIg\nbmV3LG1vbm9zcGFjZTsiPiZuYnNwOyZuYnNwOyBGaWx0ZXI6ICgmIzM5O3s4fSYjMzk7OjppbnRl\nZ2VyW10gJmx0O0AgbXlpbnRhcnJheV9pbnQ0KTwvc3Bhbj48YnIgc3R5bGU9ImZvbnQtZmFtaWx5\nOiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4KPHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3Vy\naWVyIG5ldyxtb25vc3BhY2U7Ij4mbmJzcDtUb3RhbCBydW50aW1lOiAxMjY2LjM0NiBtczwvc3Bh\nbj48YnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48c3BhbiBz\ndHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPigzIHJvd3MpPC9zcGFu\nPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPgo8L2ZvbnQ+\nPGJyPlRoZW4gSSBkcm9wIHRoZSBHSU4gYW5kIGNyZWF0ZSBhIEdpU1QgaW5kZXg8YnI+PGJyPjxm\nb250IHNpemU9IjEiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3Nw\nYWNlOyI+RFJPUCBJTkRFWCAmcXVvdDt2ZXJzaW9uQSZxdW90Oy5pZHhfbm9ubnVsbHNfbXlpbnRh\ncnJheV9pbnQ0X2dpbjs8L3NwYW4+PGJyIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcs\nbW9ub3NwYWNlOyI+CjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFj\nZTsiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+Q1JF\nQVRFIElOREVYIGlkeF9ub25udWxsc19teWludGFycmF5X2ludDRfZ2lzdDwvc3Bhbj48YnIgc3R5\nbGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48c3BhbiBzdHlsZT0iZm9u\ndC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPgombmJzcDsgT04gJnF1b3Q7dmVyc2lv\nbkEmcXVvdDsubXlpbnRhcnJheV90YWJsZV9ub251bGxzPC9zcGFuPjxiciBzdHlsZT0iZm9udC1m\nYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTog\nY291cmllciBuZXcsbW9ub3NwYWNlOyI+Jm5ic3A7IFVTSU5HIGdpc3Q8L3NwYW4+PGJyIHN0eWxl\nPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+CjxzcGFuIHN0eWxlPSJmb250\nLWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+Jm5ic3A7IChteWludGFycmF5X2ludDQp\nOzwvc3Bhbj48L2ZvbnQ+PGJyPjxicj5hbmQgaGVyZSBhcmUgdGhlIHJlc3VsdHMgZm9yIHRoZSBl\neGVjdXRpb24gcGxhbjxicj48YnI+PGZvbnQgc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5l\ndyxtb25vc3BhY2U7IiBzaXplPSIxIj5teXZpZGVvaW5kZXg9IyBleHBsYWluIGFuYWx5emUgU0VM\nRUNUIGlkLCBpY291bnQobXlpbnRhcnJheV9pbnQ0KQo8YnI+bXl2aWRlb2luZGV4LSMmbmJzcDsm\nbmJzcDsgRlJPTSAmcXVvdDt2ZXJzaW9uQSZxdW90Oy5teWludGFycmF5X3RhYmxlX25vbnVsbHM8\nYnI+bXl2aWRlb2luZGV4LSMmbmJzcDsgV0hFUkUgQVJSQVlbOF0gJmx0O0AgbXlpbnRhcnJheV9p\nbn
Q0Ozxicj4mbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm\nbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz\ncDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm\nbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz\ncDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm\nbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz\ncDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm\nbmJzcDsmbmJzcDsmbmJzcDsgUVVFUlkgUExBTjxicj4tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLQo8YnI+Jm5ic3A7Qml0bWFwIEhlYXAgU2NhbiBvbiBteWludGFycmF5X3RhYmxl\nX25vbnVsbHMmbmJzcDsgKGNvc3Q9NDIuMzYuLjIxMzcuNjIgcm93cz03NDYgd2lkdGg9MzIpIChh\nY3R1YWwgdGltZT0xNTQuMjc2Li4zMDEuNjE1IHJvd3M9MjgyMDcgbG9vcHM9MSk8YnI+Jm5ic3A7\nJm5ic3A7IFJlY2hlY2sgQ29uZDogKCYjMzk7ezh9JiMzOTs6OmludGVnZXJbXSAmbHQ7QCBteWlu\ndGFycmF5X2ludDQpPGJyPiZuYnNwOyZuYnNwOyAtJmd0OyZuYnNwOyBCaXRtYXAgSW5kZXggU2Nh\nbiBvbiBpZHhfbm9ubnVsbHNfbXlpbnRhcnJheV9pbnQ0X2dpc3QmbmJzcDsgKGNvc3Q9CjAuMDAu\nLjQyLjE3IHJvd3M9NzQ2IHdpZHRoPTApIChhY3R1YWwgdGltZT0xNTAuNzEzLi4xNTAuNzEzIHJv\nd3M9MjgyMDcgbG9vcHM9MSk8YnI+Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7\nJm5ic3A7Jm5ic3A7IEluZGV4IENvbmQ6ICgmIzM5O3s4fSYjMzk7OjppbnRlZ2VyW10gJmx0O0Ag\nbXlpbnRhcnJheV9pbnQ0KTxicj4mbmJzcDtUb3RhbCBydW50aW1lOiA0MTAuMzk0IG1zPGJyPig1\nIHJvd3MpPC9mb250Pjxicj48YnI+QXMgeW91IGNhbiBzZWUgdGhlIGluZGV4IGlzIGluIHVzZS4u\nLiAKPGJyPjxicj5Ob3cgSSBjcmVhdGUgY3JlYXRlIHRoZSBzYW1lIHRhYmxlIHdpdGggbXlpbnRh\ncnJheV9pbnQ0IGNvbnZlcnRlZCBpbnRvIHRleHQgYXJyYXkgYW5kIGNyZWF0ZSBhIEdJTiBpbmRl\neCBvbiB0aGUgbmV3IHRleHQgYXJyYXkgZmllbGQ8YnI+PGJyPjxmb250IHNpemU9IjEiPjxzcGFu\nIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+U0VMRUNUIGlkLCBt\neWludGFycmF5X2ludDQ6OnRleHRbXSBhcyBteWludGFycmF5X2ludDRfdGV4dCBpbnRvIG15aW50\nYXJyYXlfdGFibGVfbm9udWxsc190ZXh0IGZyb20gbXlpbnRhcnJheV90YWJsZV9ub251bGxzOwo8\nL3NwYW4+PGJyIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+PGJy\nIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+PHNwYW4gc3R5bGU9\nImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij5DUkVBVEUgSU5ERVggaWR4X25v\nbm51bGxzX215aW50YXJyYXlfaW50NF90ZXh0X2dpbjwvc3Bhbj48YnIgc3R5bGU9ImZvbnQtZmFt\naWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4KPHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBj\nb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4mbmJzcDsgT04gJnF1b3Q7dmVyc2lvbkEmcXVvdDsubXlp\nbnRhcnJheV90YWJsZV9ub251bGxzX3RleHQ8L3NwYW4+PGJyIHN0eWxlPSJmb250LWZhbWlseTog\nY291cmllciBuZXcsbW9ub3NwYWNlOyI+PHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVy\nIG5ldyxtb25vc3BhY2U7Ij4mbmJzcDsgVVNJTkcgZ2luCjwvc3Bhbj48YnIgc3R5bGU9ImZvbnQt\nZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6\nIGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPiZuYnNwOyAobXlpbnRhcnJheV9pbnQ0X3RleHQpOzwv\nc3Bhbj48L2ZvbnQ+PGJyPjxicj5hbmQgaGF2ZSBhIHRhYmxlIHdpdGggRFQ6PGJyPjxicj48Zm9u\ndCBzaXplPSIxIj48c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFj\nZTsiPgpDUkVBVEUgVEFCTEUgJnF1b3Q7dmVyc2lvbkEmcXVvdDsubXlpbnRhcnJheV90YWJsZV9u\nb251bGxzX3RleHQ8L3NwYW4+PGJyIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9u\nb3NwYWNlOyI+PHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7\nIj4oPC9zcGFuPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsi\nPgo8c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPiZuYnNw\nOyBpZCBpbnRlZ2VyLDwvc3Bhbj48YnIgc3R5bGU9ImZvbnQ
tZmFtaWx5OiBjb3VyaWVyIG5ldyxt\nb25vc3BhY2U7Ij48c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFj\nZTsiPiZuYnNwOyBteWludGFycmF5X2ludDRfdGV4dCB0ZXh0W108L3NwYW4+PGJyIHN0eWxlPSJm\nb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+CjxzcGFuIHN0eWxlPSJmb250LWZh\nbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+KSA8L3NwYW4+PGJyIHN0eWxlPSJmb250LWZh\nbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+PHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBj\nb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij5XSVRIT1VUIE9JRFM7PC9zcGFuPjwvZm9udD48YnI+PGJy\nPk5vdyB0aGUgc2FtZSByZXF1ZXN0IGhhcyB0aGUgZm9sbG93aW5nIGV4ZWN1dGlvbiBwbGFuOgo8\nYnI+PGJyPjxmb250IHNpemU9IjEiPjxzcGFuIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBu\nZXcsbW9ub3NwYWNlOyI+bXl2aWRlb2luZGV4PSMgZXhwbGFpbiBhbmFseXplIFNFTEVDVCBpZCwg\nYXJyYXlfdXBwZXIoIG15aW50YXJyYXlfaW50NF90ZXh0LCAxICk8L3NwYW4+PGJyIHN0eWxlPSJm\nb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+PHNwYW4gc3R5bGU9ImZvbnQtZmFt\naWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4KJm5ic3A7IEZST00gJnF1b3Q7dmVyc2lvbkEm\ncXVvdDsubXlpbnRhcnJheV90YWJsZV9ub251bGxzX3RleHQ8L3NwYW4+PGJyIHN0eWxlPSJmb250\nLWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+PHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5\nOiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4mbmJzcDtXSEVSRSBBUlJBWVsmIzM5OzgmIzM5O10g\nJmx0O0AgbXlpbnRhcnJheV9pbnQ0X3RleHQ7PC9zcGFuPgo8YnIgc3R5bGU9ImZvbnQtZmFtaWx5\nOiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJp\nZXIgbmV3LG1vbm9zcGFjZTsiPiZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu\nYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw\nOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu\nYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw\nOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu\nYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw\nOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu\nYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyBRVUVSWSBQTEFOPC9zcGFuPjxiciBz\ndHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPgo8c3BhbiBzdHlsZT0i\nZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPi0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t\nLS0tLS0tLS0tLS0tLS0tLTwvc3Bhbj48YnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5l\ndyxtb25vc3BhY2U7Ij4KPHNwYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25v\nc3BhY2U7Ij4mbmJzcDtCaXRtYXAgSGVhcCBTY2FuIG9uIG15aW50YXJyYXlfdGFibGVfbm9udWxs\nc190ZXh0Jm5ic3A7IChjb3N0PTEwLjA2Li4yMTM2Ljk3IHJvd3M9NzQ2IHdpZHRoPTM3KSAoYWN0\ndWFsIHRpbWU9MTcuNDYzLi4xOTEuMDk0IHJvd3M9MjgyMDcgbG9vcHM9MSk8L3NwYW4+PGJyIHN0\neWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+CjxzcGFuIHN0eWxlPSJm\nb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+Jm5ic3A7Jm5ic3A7IFJlY2hlY2sg\nQ29uZDogKCYjMzk7ezh9JiMzOTs6OnRleHRbXSAmbHQ7QCBteWludGFycmF5X2ludDRfdGV4dCk8\nL3NwYW4+PGJyIHN0eWxlPSJmb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+PHNw\nYW4gc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4KJm5ic3A7Jm5i\nc3A7IC0mZ3Q7Jm5ic3A7IEJpdG1hcCBJbmRleCBTY2FuIG9uIGlkeF9ub25udWxsc19teWludGFy\ncmF5X2ludDRfdGV4dF9naW4mbmJzcDsgKGNvc3Q9MC4wMC4uOS44NyByb3dzPTc0NiB3aWR0aD0w\nKSAoYWN0dWFsIHRpbWU9MTMuOTgyLi4xMy45ODIgcm93cz0yODIwNyBsb29wcz0xKTwvc3Bhbj48\nYnIgc3R5bGU9ImZvbnQtZmFtaWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij48c3BhbiBzdHls\nZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPgombmJzcDsmbmJzcDsmbmJz\ncDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgSW5kZXggQ29uZDogKCYjMzk7ezh9JiMz\nOTs6OnRleHRbXS
AmbHQ7QCBteWludGFycmF5X2ludDRfdGV4dCk8L3NwYW4+PGJyIHN0eWxlPSJm\nb250LWZhbWlseTogY291cmllciBuZXcsbW9ub3NwYWNlOyI+PHNwYW4gc3R5bGU9ImZvbnQtZmFt\naWx5OiBjb3VyaWVyIG5ldyxtb25vc3BhY2U7Ij4mbmJzcDtUb3RhbCBydW50aW1lOiAzMDMuMzQ4\nIG1zPC9zcGFuPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsi\nPgo8c3BhbiBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPig1IHJv\nd3MpPC9zcGFuPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsi\nPjxiciBzdHlsZT0iZm9udC1mYW1pbHk6IGNvdXJpZXIgbmV3LG1vbm9zcGFjZTsiPjwvZm9udD48\nYnI+SSBob3BlIHRoaXMgaW5mb3JtYXRpb24gd2lsbCBtYWtlIHRoZSBxdWVzdGlvbiBtb3JlIHVu\nZGVyc3RhbmRhYmxlLgo8YnI+PGJyPldpdGggYmVzdCByZWdhcmRzLCA8YnI+PGJyPi0tIFZhbGVu\ndGluZTxicj48YnI+PGJyPjxicj48ZGl2PjxzcGFuIGNsYXNzPSJnbWFpbF9xdW90ZSI+T24gNS85\nLzA3LCA8YiBjbGFzcz0iZ21haWxfc2VuZGVybmFtZSI+T2xlZyBCYXJ0dW5vdjwvYj4gJmx0Ozxh\nIGhyZWY9Im1haWx0bzpvbGVnQHNhaS5tc3Uuc3UiPm9sZWdAc2FpLm1zdS5zdTwvYT4mZ3Q7IHdy\nb3RlOjwvc3Bhbj4KPGJsb2NrcXVvdGUgY2xhc3M9ImdtYWlsX3F1b3RlIiBzdHlsZT0iYm9yZGVy\nLWxlZnQ6IDFweCBzb2xpZCByZ2IoMjA0LCAyMDQsIDIwNCk7IG1hcmdpbjogMHB0IDBwdCAwcHQg\nMC44ZXg7IHBhZGRpbmctbGVmdDogMWV4OyI+T24gV2VkLCA5IE1heSAyMDA3LCBWYWxlbnRpbmUg\nR29naWNoYXNodmlsaSB3cm90ZTo8YnI+PGJyPiZndDsgSSBoYXZlIGV4cGVyaW1lbnRlZCBxdWl0\nZSBhIGxvdC4gU28gZmlyc3QgSSBkaWQgd2hlbiBzdGFydGluZyB0aGUgYXR0ZW1wdCB0bwo8YnI+\nJmd0OyBtb3ZlIGZyb20gR2lTVCB0byBHSU4sIHdhcyB0byBkcm9wIHRoZSBHaVNUIGluZGV4IGFu\nZCBjcmVhdGUgYSBicmFuZCBuZXcgR0lOPGJyPiZndDsgaW5kZXguLi4gYWZ0ZXIgdGhhdCBkaWQg\nbm90IGJyaW5nIHRoZSByZXN1bHRzLCBJIHN0YXJ0ZWQgdG8gY3JlYXRlIGFsbCB0aGlzPGJyPiZn\ndDsgdGFibGVzIHdpdGggZGlmZmVyZW50IHNldHMgb2YgaW5kZXhlcyBhbmQgc28gb24uLi4KPGJy\nPiZndDs8YnI+Jmd0OyBTbyB0aGUgYW5zd2VyIHRvIHRoZSBxdWVzdGlvbiBpczogbm8gdGhlcmUg\naW4gb25seSBHSU4gaW5kZXggb24gdGhlIHRhYmxlLjxicj48YnI+dGhlbiwgeW91IGhhdmUgdG8g\ncHJvdmlkZSB1cyBtb3JlIGluZm9tYXRpb24gLTxicj5wZyB2ZXJzaW9uLDxicj5cZHQgc291cmNl\ndGFibGV3aXRoX2ludDQ8YnI+ZXhwbGFpbiBhbmFseXplPGJyPjxicj5idHcsIEkgZGlkIHRlc3Qg\nb2YgZGV2ZWxvcG1lbnQgdmVyc2lvbiBvZiBHaU4sIHNlZQo8YnI+PGEgaHJlZj0iaHR0cDovL3d3\ndy5zYWkubXN1LnN1L35tZWdlcmEvd2lraS9HaW5UZXN0Ij5odHRwOi8vd3d3LnNhaS5tc3Uuc3Uv\nfm1lZ2VyYS93aWtpL0dpblRlc3Q8L2E+PGJyPjxicj4mZ3Q7PGJyPiZndDsgVGhhbmsgeW91IGlu\nIGFkdmFuY2UsPGJyPiZndDs8YnI+Jmd0OyBWYWxlbnRpbmU8YnI+Jmd0Ozxicj4mZ3Q7IE9uIDUv\nOS8wNywgT2xlZyBCYXJ0dW5vdiAmbHQ7PGEgaHJlZj0ibWFpbHRvOm9sZWdAc2FpLm1zdS5zdSI+\nCm9sZWdAc2FpLm1zdS5zdTwvYT4mZ3Q7IHdyb3RlOjxicj4mZ3Q7Jmd0Ozxicj4mZ3Q7Jmd0OyBE\nbyB5b3UgaGF2ZSBib3RoIGluZGV4ZXMgKEdpU1QsIEdJTikgb24gdGhlIHNhbWUgdGFibGUgPzxi\ncj4mZ3Q7Jmd0Ozxicj4mZ3Q7Jmd0OyBPbiBXZWQsIDkgTWF5IDIwMDcsIFZhbGVudGluZSBHb2dp\nY2hhc2h2aWxpIHdyb3RlOjxicj4mZ3Q7Jmd0Ozxicj4mZ3Q7Jmd0OyAmZ3Q7IEhlbGxvIGFsbCwK\nPGJyPiZndDsmZ3Q7ICZndDs8YnI+Jmd0OyZndDsgJmd0OyBJIGFtIHRyeWluZyB0byBtb3ZlIGZy\nb20gR2lTVCBpbnRhcnJheSBpbmRleCB0byBHSU4gaW50YXJyYXkgaW5kZXgsIGJ1dDxicj4mZ3Q7\nJmd0OyBteTxicj4mZ3Q7Jmd0OyAmZ3Q7IEdJTiBpbmRleCBpcyBub3QgYmVpbmcgdXNlZCBieSB0\naGUgcGxhbm5lci48YnI+Jmd0OyZndDsgJmd0Ozxicj4mZ3Q7Jmd0OyAmZ3Q7IFRoZSBub3JtYWwg\ncXVlcnkgaXMgbGlrZSB0aGF0Cjxicj4mZ3Q7Jmd0OyAmZ3Q7PGJyPiZndDsmZ3Q7ICZndDsgc2Vs\nZWN0ICo8YnI+Jmd0OyZndDsgJmd0OyBmcm9tIHNvdXJjZXRhYmxld2l0aF9pbnQ0PGJyPiZndDsm\nZ3Q7ICZndDsgd2hlcmUgQVJSQVlbbXlpbnRdICZsdDtAIG15aW50X2FycmF5PGJyPiZndDsmZ3Q7\nICZndDsmbmJzcDsmbmJzcDthbmQgc29tZV9vdGhlcl9maWx0ZXJzPGJyPiZndDsmZ3Q7ICZndDs8\nYnI+Jmd0OyZndDsgJmd0OyAod2l0aCBHaVNUIGluZGV4IGV2ZXJ5dGhpbmcgd29ya3MgZmluZSwg\nYnV0IEdJTiBpbmRleCBpcyBub3QgYmVpbmcgdXNlZCkKPGJyPiZndDsmZ3Q7ICZndDs8YnI+Jmd0\nOyZndDsgJmd0OyBJZiBJIGNyZWF0ZSB0aGUgc2FtZSB0YWJsZSBwb3B1bGF0aW5nIGl0IHdpdGgg\ndGV4dFtdIGRhdGEgbGlrZTxicj4mZ3Q7Jmd0OyAmZ3Q7PGJyPiZndDsmZ3Q
7ICZndDsgc2VsZWN0\nIG15aW50X2FycmF5Ojp0ZXh0W10gYXMgbXlpbnRfYXJyYXlfYXNfdGV4dGFycmF5PGJyPiZndDsm\nZ3Q7ICZndDsgaW50byBuZXd0YWJsZXdpdGhfdGV4dAo8YnI+Jmd0OyZndDsgJmd0OyBmcm9tIHNv\ndXJjZXRhYmxld2l0aF9pbnQ0PGJyPiZndDsmZ3Q7ICZndDs8YnI+Jmd0OyZndDsgJmd0OyBhbmQg\ndGhlbiBjcmVhdGUgYSBHSU4gaW5kZXggdXNpbmcgdGhpcyBuZXcgdGV4dFtdIGNvbHVtbjxicj4m\nZ3Q7Jmd0OyAmZ3Q7PGJyPiZndDsmZ3Q7ICZndDsgdGhlIHBsYW5uZXIgc3RhcnRzIHRvIHVzZSB0\naGUgaW5kZXggYW5kIHF1ZXJpZXMgcnVuIHdpdGggZ3JhdGUgc3BlZWQKPGJyPiZndDsmZ3Q7IHdo\nZW48YnI+Jmd0OyZndDsgJmd0OyB0aGUgcXVlcnkgbG9va3MgbGlrZSB0aGF0Ojxicj4mZ3Q7Jmd0\nOyAmZ3Q7PGJyPiZndDsmZ3Q7ICZndDsgc2VsZWN0ICo8YnI+Jmd0OyZndDsgJmd0OyBmcm9tIG5l\nd3RhYmxld2l0aF90ZXh0PGJyPiZndDsmZ3Q7ICZndDsgd2hlcmUgQVJSQVlbJiMzOTtteWludCYj\nMzk7XSAmbHQ7QCBteWludF9hcnJheV9hc190ZXh0YXJyYXkKPGJyPiZndDsmZ3Q7ICZndDsmbmJz\ncDsmbmJzcDthbmQgc29tZV9vdGhlcl9maWx0ZXJzPGJyPiZndDsmZ3Q7ICZndDs8YnI+Jmd0OyZn\ndDsgJmd0OyBXaGVyZSB0aGUgcHJvYmxlbSBjYW4gYmUgd2l0aCBfaW50NCBHSU4gaW5kZXggaW4g\ndGhpcyBjb25zdGVsbGF0aW9uPzxicj4mZ3Q7Jmd0OyAmZ3Q7PGJyPiZndDsmZ3Q7ICZndDsgYnkg\nbm93IHRoZSBlbmFibGVfc2Vxc2NhbiBpcyBzZXQgdG8gb2ZmIGluIHRoZSBjb25maWd1cmF0aW9u\nLgo8YnI+Jmd0OyZndDsgJmd0Ozxicj4mZ3Q7Jmd0OyAmZ3Q7IFdpdGggYmVzdCByZWdhcmRzLDxi\ncj4mZ3Q7Jmd0OyAmZ3Q7PGJyPiZndDsmZ3Q7ICZndDsgLS0gVmFsZW50aW5lIEdvZ2ljaGFzaHZp\nbGk8YnI+Jmd0OyZndDsgJmd0Ozxicj4mZ3Q7Jmd0Ozxicj4mZ3Q7Jmd0OyZuYnNwOyZuYnNwOyZu\nYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyBSZWdhcmRzLDxicj4mZ3Q7Jmd0OyZu\nYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw\nOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyBPbGVnPGJyPiZndDsmZ3Q7IF9f\nX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f\nX18KPGJyPiZndDsmZ3Q7IE9sZWcgQmFydHVub3YsIFJlc2VhcmNoIFNjaWVudGlzdCwgSGVhZCBv\nZiBBc3Ryb05ldCAoPGEgaHJlZj0iaHR0cDovL3d3dy5hc3Ryb25ldC5ydSI+d3d3LmFzdHJvbmV0\nLnJ1PC9hPiksPGJyPiZndDsmZ3Q7IFN0ZXJuYmVyZyBBc3Ryb25vbWljYWwgSW5zdGl0dXRlLCBN\nb3Njb3cgVW5pdmVyc2l0eSwgUnVzc2lhPGJyPiZndDsmZ3Q7IEludGVybmV0OiA8YSBocmVmPSJt\nYWlsdG86b2xlZ0BzYWkubXN1LnN1Ij4Kb2xlZ0BzYWkubXN1LnN1PC9hPiwgPGEgaHJlZj0iaHR0\ncDovL3d3dy5zYWkubXN1LnN1L35tZWdlcmEvIj5odHRwOi8vd3d3LnNhaS5tc3Uuc3Uvfm1lZ2Vy\nYS88L2E+PGJyPiZndDsmZ3Q7IHBob25lOiArMDA3KDQ5NSk5MzktMTYtODMsICswMDcoNDk1KTkz\nOS0yMy04Mzxicj4mZ3Q7Jmd0Ozxicj4mZ3Q7PGJyPiZndDs8YnI+Jmd0Ozxicj4mZ3Q7PGJyPjxi\ncj4mbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDtSZWdhcmRz\nLAo8YnI+Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i\nc3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7T2xlZzxicj5fX19f\nX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f\nPGJyPk9sZWcgQmFydHVub3YsIFJlc2VhcmNoIFNjaWVudGlzdCwgSGVhZCBvZiBBc3Ryb05ldCAo\nPGEgaHJlZj0iaHR0cDovL3d3dy5hc3Ryb25ldC5ydSI+d3d3LmFzdHJvbmV0LnJ1PC9hPiksPGJy\nPlN0ZXJuYmVyZyBBc3Ryb25vbWljYWwgSW5zdGl0dXRlLCBNb3Njb3cgVW5pdmVyc2l0eSwgUnVz\nc2lhCjxicj5JbnRlcm5ldDogPGEgaHJlZj0ibWFpbHRvOm9sZWdAc2FpLm1zdS5zdSI+b2xlZ0Bz\nYWkubXN1LnN1PC9hPiwgPGEgaHJlZj0iaHR0cDovL3d3dy5zYWkubXN1LnN1L35tZWdlcmEvIj5o\ndHRwOi8vd3d3LnNhaS5tc3Uuc3Uvfm1lZ2VyYS88L2E+PGJyPnBob25lOiArMDA3KDQ5NSk5Mzkt\nMTYtODMsICswMDcoNDk1KTkzOS0yMy04Mzxicj48L2Jsb2NrcXVvdGU+PC9kaXY+PGJyPjxiciBj\nbGVhcj0iYWxsIj4KPGJyPi0tIDxicj7hg5Xhg5Dhg5rhg5Thg5zhg6Lhg5jhg5wg4YOS4YOd4YOS\n4YOY4YOp4YOQ4YOo4YOV4YOY4YOa4YOYPGJyPlZhbGVudGluZSBHb2dpY2hhc2h2aWxpCg==\n------=_Part_110820_33458115.1178736067834--\n",
"msg_date": "Wed, 9 May 2007 10:55:44 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: ZFS and Postgresql - WASRe: Best OS for Postgres 8.2"
},
{
"msg_contents": "\n \n> currently ZFS is only available on Solaris, parts of it have been released\n> under GPLv2, but it doesn't look like enough of it to be ported to Linux\n> (enough was released for grub to be able to access it read-only, but not\n> the full filesystem). there are also patent concerns that are preventing\n> any porting to Linux.\n\nI don't know if anyone mentioned this in the thread already, but it looks\nlike ZFS may be coming to MacOSX 10.5\n\nhttp://news.worldofapple.com/archives/2006/12/17/zfs-file-system-makes-it-to\n-mac-os-x-leopard/\n\n",
"msg_date": "Fri, 11 May 2007 14:35:03 +0100",
"msg_from": "Adam Witney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Tuesday 08 May 2007 23:31, Greg Smith wrote:\n> On Tue, 8 May 2007, Tom Lane wrote:\n> > What Debian has done is set up an arrangement that lets you run two (or\n> > more) different PG versions in parallel. Since that's amazingly helpful\n> > during a major-PG-version upgrade, most of the other packagers are\n> > scheming how to do something similar.\n>\n> I alluded to that but it is worth going into more detail on for those not\n> familiar with this whole topic. I normally maintain multiple different PG\n> versions in parallel already, mostly using environment variables to switch\n> between them with some shell code. Debian has taken an approach where\n> commands like pg_ctl are wrapped in multi-version/cluster aware scripts,\n> so you can do things like restarting multiple installations more easily\n> than that.\n>\n> My issue wasn't with the idea, it was with the implementation. When I\n> have my newbie hat on, it adds a layer of complexity that isn't needed for\n> simple installs.\n\nI think I would disagree with this. The confusion comes from the fact that it \nis different, not that it is more complex. For new users what seems to be \nmost confusing is getting from install to initdb to logging in... if you tell \nthem to use pg_ctlcluster rather than pg_ctl, it isn't more confusing, there \njust following directions at that point anyway. If the upstream project were \nto switch to debian's system, I think you'd end most of the confusion, make \nit easier to run concurrent servers and simplify the upgrade process for \nsource installs, and give other package maintiners a way to achive what \ndebian has. Maybe in PG 9... \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Fri, 11 May 2007 23:12:57 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
},
{
"msg_contents": "On Tue, 2007-05-08 at 23:31 -0400, Greg Smith wrote:\n> \n> My issue wasn't with the idea, it was with the implementation. When I \n> have my newbie hat on, it adds a layer of complexity that isn't needed for \n> simple installs.\n\nI find it very hard to agree with that.\n\nAs a newbie I install postgresql and have a database server installed,\nand operating. The fact that the DB files are installed somewhere\nlike /var/lib/postgresql/8.1/main is waaay beyond newbie.\n\nAt that point I can \"createdb\" or \"createuser\", but I _do_ _not_ need to\nknow anything about the cluster stuff until there is more than one DB on\nthe machine.\n\nThe Debian wrappers all default appropriately for the single-cluster\ncase, so having a single cluster has added _no_ perceivable complexity\nfor a newbie (as it should).\n\nIf you have a second cluster, whether it's the same Pg version or not,\nthings necessarily start to get complicated. OTOH I haven't had any\nproblem explaining to people that the --cluster option applies, and\nthere are sane ways to make that default to a reasonable thing as well.\n\nAll in all I think that the Debian scripts are excellent. I'm sure\nthere are improvements that could be made, but overall they don't get in\nthe way, they do the right thing in the minimal case, and they give the\nadvanced user a lot more choices about multiple DB instances on the same\nmachine.\n\nCheers,\n\t\t\t\t\tAndrew McMillan.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Open Source: the difference between trust and antitrust\n-------------------------------------------------------------------------",
"msg_date": "Sun, 13 May 2007 18:34:20 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS for Postgres 8.2"
}
] |
[
{
"msg_contents": "\n\nHello to all,\n\nI have a table that is used as a spool for various events. Some processes \nwrite data into it, and another process reads the resulting rows, do some \nwork, and delete the rows that were just processed.\n\nAs you can see, with hundreds of thousands events a day, this table will \nneed being vaccumed regularly to avoid taking too much space (data and \nindex).\n\nNote that processing rows is quite fast in fact, so at any time a \ncount(*) on this table rarely exceeds 10-20 rows.\n\n\nFor the indexes, a good way to bring them to a size corresponding to the \nactual count(*) is to run 'reindex'.\n\nBut for the data (dead rows), even running a vacuum analyze every day is \nnot enough, and doesn't truncate some empty pages at the end, so the data \nsize remains in the order of 200-300 MB, when only a few effective rows \nare there.\n\nI see in the 8.3 list of coming changes that the FSM will try to re-use \npages in a better way to help truncating empty pages. Is this correct ?\n\nRunning a vacuum full is a solution for now, but it locks the table for \ntoo long (10 minutes or so), which is not acceptable in that case, since \nevents should be processed in less that 10 seconds.\n\nSo, I would like to truncate the table when the number of rows reaches 0 \n(just after the table was processed, and just before some new rows are \nadded).\n\nIs there an easy way to do this under psql ? For example, lock the table, \ndo a count(*), if result is 0 row then truncate the table, unlock the \ntable (a kind of atomic 'truncate table if count(*) == 0').\n\nWould this work and what would be the steps ?\n\nThanks\n\nNicolas\n",
"msg_date": "Tue, 8 May 2007 11:43:14 +0200 (CEST)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": true,
"msg_subject": "truncate a table instead of vaccum full when count(*) is 0"
},
{
"msg_contents": "\n\nOn Tue, 8 May 2007, Pomarede Nicolas wrote:\n\n> As you can see, with hundreds of thousands events a day, this table will need\n> being vaccumed regularly to avoid taking too much space (data and index).\n> \n> Note that processing rows is quite fast in fact, so at any time a count(*) on\n> this table rarely exceeds 10-20 rows.\n> \n> For the indexes, a good way to bring them to a size corresponding to the\n> actual count(*) is to run 'reindex'.\n\nwhy you have index in table where is only 10-20 rows?\n\nare those indexes to prevent some duplicate rows?\n\nI have some tables just to store unprosessed data, and because there is \nonly few rows and I always process all rows there is no need for \nindexes. there is just column named id, and when I insert row I take \nnextval('id_seq') :\n\ninsert into some_tmp_table(id,'message',...) values (nextval('id_seq'),'do \nsomething',...);\n\nI know that deleting is slower than with indexes, but it's still fast \nenough, because all rows are in memory.\n\nand that id-column is just for delete, it's unique and i can always delete \nusing only it.\n\nIsmo\n",
"msg_date": "Tue, 8 May 2007 13:00:00 +0300 (EEST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "Pomarede Nicolas <npomarede 'at' corp.free.fr> writes:\n\n> Hello to all,\n> \n> I have a table that is used as a spool for various events. Some\n> processes write data into it, and another process reads the resulting\n> rows, do some work, and delete the rows that were just processed.\n> \n> As you can see, with hundreds of thousands events a day, this table\n> will need being vaccumed regularly to avoid taking too much space\n> (data and index).\n> \n> Note that processing rows is quite fast in fact, so at any time a\n> count(*) on this table rarely exceeds 10-20 rows.\n> \n> \n> For the indexes, a good way to bring them to a size corresponding to\n> the actual count(*) is to run 'reindex'.\n> \n> But for the data (dead rows), even running a vacuum analyze every day\n> is not enough, and doesn't truncate some empty pages at the end, so\n> the data size remains in the order of 200-300 MB, when only a few\n> effective rows are there.\n\nAs far as I know, you probably need to increase your\nmax_fsm_pages, because your pg is probably not able to properly\ntrack unused pages between subsequent VACUUM's.\n\nhttp://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-FSM\n\nHave you investigated this? It seems that you already know about\nthe FSM stuff, according to your question about FSM and 8.3.\n\nYou can also run VACUUM ANALYZE more frequently (after all, it\ndoesn't lock the table).\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "08 May 2007 12:02:39 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*) is 0"
},
{
"msg_contents": "Pomarede Nicolas wrote:\n> But for the data (dead rows), even running a vacuum analyze every day is \n> not enough, and doesn't truncate some empty pages at the end, so the \n> data size remains in the order of 200-300 MB, when only a few effective \n> rows are there.\n\nFor a table like that you should run VACUUM much more often than once a \nday. Turn on autovacuum, or set up a cron script etc. to run it every 15 \nminutes or so.\n\n> Running a vacuum full is a solution for now, but it locks the table for \n> too long (10 minutes or so), which is not acceptable in that case, since \n> events should be processed in less that 10 seconds.\n> \n> So, I would like to truncate the table when the number of rows reaches 0 \n> (just after the table was processed, and just before some new rows are \n> added).\n> \n> Is there an easy way to do this under psql ? For example, lock the \n> table, do a count(*), if result is 0 row then truncate the table, unlock \n> the table (a kind of atomic 'truncate table if count(*) == 0').\n> \n> Would this work and what would be the steps ?\n\nIt should work, just like you describe it, with the caveat that TRUNCATE \nwill remove any old row versions that might still be visible to an older \ntransaction running in serializable mode. It sounds like it's not a \nproblem in your scenario, but it's hard to say for sure without seeing \nthe application. Running vacuum more often is probably a simpler and \nbetter solution, anyway.\n\nWhich version of PostgreSQL is this?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 08 May 2007 11:07:38 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "On Tue, 8 May 2007, [email protected] wrote:\n\n>\n>\n> On Tue, 8 May 2007, Pomarede Nicolas wrote:\n>\n>> As you can see, with hundreds of thousands events a day, this table will need\n>> being vaccumed regularly to avoid taking too much space (data and index).\n>>\n>> Note that processing rows is quite fast in fact, so at any time a count(*) on\n>> this table rarely exceeds 10-20 rows.\n>>\n>> For the indexes, a good way to bring them to a size corresponding to the\n>> actual count(*) is to run 'reindex'.\n>\n> why you have index in table where is only 10-20 rows?\n>\n> are those indexes to prevent some duplicate rows?\n\nI need these indexes to sort rows to process in chronological order. I'm \nalso using an index on 'oid' to delete a row after it was processed (I \ncould use a unique sequence too, but I think it would be the same).\n\nAlso, I sometime have peaks that insert lots of data in a short time, so \nan index on the event's date is useful.\n\nAnd as the number of effective row compared to the number of dead rows is \nonly 1%, doing a count(*) for example takes many seconds, even if the \nresult of count(*) is 10 row (because pg will sequential scan all the data \npages of the table). Without index on the date, I would need sequential \nscan to fetch row to process, and this would be slower due to the high \nnumber of dead rows.\n\n>\n> I have some tables just to store unprosessed data, and because there is\n> only few rows and I always process all rows there is no need for\n> indexes. there is just column named id, and when I insert row I take\n> nextval('id_seq') :\n>\n> insert into some_tmp_table(id,'message',...) values (nextval('id_seq'),'do\n> something',...);\n>\n> I know that deleting is slower than with indexes, but it's still fast\n> enough, because all rows are in memory.\n>\n> and that id-column is just for delete, it's unique and i can always delete\n> using only it.\n>\n> Ismo\n\nNicolas\n",
"msg_date": "Tue, 8 May 2007 12:09:30 +0200 (CEST)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "On Tue, 8 May 2007, Guillaume Cottenceau wrote:\n\n> Pomarede Nicolas <npomarede 'at' corp.free.fr> writes:\n>\n>> Hello to all,\n>>\n>> I have a table that is used as a spool for various events. Some\n>> processes write data into it, and another process reads the resulting\n>> rows, do some work, and delete the rows that were just processed.\n>>\n>> As you can see, with hundreds of thousands events a day, this table\n>> will need being vaccumed regularly to avoid taking too much space\n>> (data and index).\n>>\n>> Note that processing rows is quite fast in fact, so at any time a\n>> count(*) on this table rarely exceeds 10-20 rows.\n>>\n>>\n>> For the indexes, a good way to bring them to a size corresponding to\n>> the actual count(*) is to run 'reindex'.\n>>\n>> But for the data (dead rows), even running a vacuum analyze every day\n>> is not enough, and doesn't truncate some empty pages at the end, so\n>> the data size remains in the order of 200-300 MB, when only a few\n>> effective rows are there.\n>\n> As far as I know, you probably need to increase your\n> max_fsm_pages, because your pg is probably not able to properly\n> track unused pages between subsequent VACUUM's.\n>\n> http://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-FSM\n>\n> Have you investigated this? It seems that you already know about\n> the FSM stuff, according to your question about FSM and 8.3.\n>\n> You can also run VACUUM ANALYZE more frequently (after all, it\n> doesn't lock the table).\n\nthanks, but max FSM is already set to a large enough value (I'm running a \nvacuum analyze every day on the whole database, and set max fsm according \nto the last lines of vacuum, so all pages are stored in the FSM).\n\n\nNicolas\n\n",
"msg_date": "Tue, 8 May 2007 12:13:54 +0200 (CEST)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "On Tue, 8 May 2007, Heikki Linnakangas wrote:\n\n> Pomarede Nicolas wrote:\n>> But for the data (dead rows), even running a vacuum analyze every day is \n>> not enough, and doesn't truncate some empty pages at the end, so the data \n>> size remains in the order of 200-300 MB, when only a few effective rows are \n>> there.\n>\n> For a table like that you should run VACUUM much more often than once a day. \n> Turn on autovacuum, or set up a cron script etc. to run it every 15 minutes \n> or so.\n\nYes, I already do this on another spool table ; I run a vacuum after \nprocessing it, but I wondered if there was another way to keep the disk \nsize low for this table.\n\nAs for autovacuum, the threshold values to analyze/vacuum are not adapted \nto my situation, because I have some big tables that I prefer to keep \nvacuumed frequently to prevent growing in disk size, even if the number of \ninsert/update is not big enough and in my case autovacuum would not run \noften enough. Instead of configuring autovacuum on a per table basis, I \nprefer running a vacuum on the database every day.\n\n\n\n>\n>> Running a vacuum full is a solution for now, but it locks the table for too \n>> long (10 minutes or so), which is not acceptable in that case, since events \n>> should be processed in less that 10 seconds.\n>> \n>> So, I would like to truncate the table when the number of rows reaches 0 \n>> (just after the table was processed, and just before some new rows are \n>> added).\n>> \n>> Is there an easy way to do this under psql ? For example, lock the table, \n>> do a count(*), if result is 0 row then truncate the table, unlock the table \n>> (a kind of atomic 'truncate table if count(*) == 0').\n>> \n>> Would this work and what would be the steps ?\n>\n> It should work, just like you describe it, with the caveat that TRUNCATE will \n> remove any old row versions that might still be visible to an older \n> transaction running in serializable mode. It sounds like it's not a problem \n> in your scenario, but it's hard to say for sure without seeing the \n> application. Running vacuum more often is probably a simpler and better \n> solution, anyway.\n>\n> Which version of PostgreSQL is this?\n\nShouldn't locking the table prevent this ? I mean, if I try to get an \nexclusive lock on the table, shouldn't I get one only when there's no \nolder transaction, and in that case I can truncate the table safely, \nknowing that no one is accessing it due to the lock ?\n\nthe pg version is 8.1.2 (not the latest I know, but migrating this base is \nquite complicated since it needs to be up 24/24 a day)\n\nthanks\n\nNicolas\n\n",
"msg_date": "Tue, 8 May 2007 12:23:56 +0200 (CEST)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "Pomarede Nicolas wrote:\n> On Tue, 8 May 2007, Heikki Linnakangas wrote:\n>> Pomarede Nicolas wrote:\n>>> But for the data (dead rows), even running a vacuum analyze every day \n>>> is not enough, and doesn't truncate some empty pages at the end, so \n>>> the data size remains in the order of 200-300 MB, when only a few \n>>> effective rows are there.\n>>\n>> For a table like that you should run VACUUM much more often than once \n>> a day. Turn on autovacuum, or set up a cron script etc. to run it \n>> every 15 minutes or so.\n> \n> Yes, I already do this on another spool table ; I run a vacuum after \n> processing it, but I wondered if there was another way to keep the disk \n> size low for this table.\n\nHow much concurrent activity is there in the database? Running a vacuum \nright after processing it would not remove the deleted tuples if there's \nanother transaction running at the same time. Running the vacuum a few \nminutes later might help with that. You should run VACUUM VERBOSE to see \nhow many non-removable dead tuples there is.\n\n>>> Is there an easy way to do this under psql ? For example, lock the \n>>> table, do a count(*), if result is 0 row then truncate the table, \n>>> unlock the table (a kind of atomic 'truncate table if count(*) == 0').\n>>>\n>>> Would this work and what would be the steps ?\n>>\n>> It should work, just like you describe it, with the caveat that \n>> TRUNCATE will remove any old row versions that might still be visible \n>> to an older transaction running in serializable mode. It sounds like \n>> it's not a problem in your scenario, but it's hard to say for sure \n>> without seeing the application. Running vacuum more often is probably \n>> a simpler and better solution, anyway.\n> \n> Shouldn't locking the table prevent this ? I mean, if I try to get an \n> exclusive lock on the table, shouldn't I get one only when there's no \n> older transaction, and in that case I can truncate the table safely, \n> knowing that no one is accessing it due to the lock ?\n\nSerializable transactions that started before the transaction that takes \nthe lock would need to see the old row versions:\n\nXact 1: BEGIN ISOLATION LEVEL SERIALIZABLE;\nXact 1: SELECT 1; -- To take a snapshot, perform any query\nXact 2: DELETE FROM foo;\nXact 3: BEGIN;\nXact 3: LOCK TABLE foo;\nXact 3: SELECT COUNT(*) FROM foo; -- Sees delete by xact 2, returns 0,\nXact 3: TRUNCATE foo;\nXact 3: COMMIT;\nXact 1: SELECT COUNT(*) FROM foo; -- Returns 0, but because the \ntransaction is in serializable mode, it should've still seen the rows \ndeleted by xact 2.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 08 May 2007 11:37:15 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "\n\"Pomarede Nicolas\" <[email protected]> writes:\n\n> Yes, I already do this on another spool table ; I run a vacuum after processing\n> it, but I wondered if there was another way to keep the disk size low for this\n> table.\n\n\"after processing it\" might be too soon if there are still transactions around\nthat are a few minutes old and predate you committing after processing it.\n\nBut any table that receives as many deletes or updates as these tables do will\nneed to be vacuumed on the order of minutes, not days.\n\n>> It should work, just like you describe it, with the caveat that TRUNCATE will\n>> remove any old row versions that might still be visible to an older\n>> transaction running in serializable mode. \n>\n> Shouldn't locking the table prevent this ? I mean, if I try to get an exclusive\n> lock on the table, shouldn't I get one only when there's no older transaction,\n> and in that case I can truncate the table safely, knowing that no one is\n> accessing it due to the lock ?\n\nIt would arise if the transaction starts before you take the lock but hasn't\nlooked at the table yet. Then the lock table succeeds, you truncate it and\ncommit, then the old transaction gets around to looking at the table.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Tue, 08 May 2007 11:50:58 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*) is 0"
},
{
"msg_contents": "On Tue, 8 May 2007, Heikki Linnakangas wrote:\n\n> Pomarede Nicolas wrote:\n>> On Tue, 8 May 2007, Heikki Linnakangas wrote:\n>>> Pomarede Nicolas wrote:\n>>>> But for the data (dead rows), even running a vacuum analyze every day is \n>>>> not enough, and doesn't truncate some empty pages at the end, so the data \n>>>> size remains in the order of 200-300 MB, when only a few effective rows \n>>>> are there.\n>>> \n>>> For a table like that you should run VACUUM much more often than once a \n>>> day. Turn on autovacuum, or set up a cron script etc. to run it every 15 \n>>> minutes or so.\n>> \n>> Yes, I already do this on another spool table ; I run a vacuum after \n>> processing it, but I wondered if there was another way to keep the disk \n>> size low for this table.\n>\n> How much concurrent activity is there in the database? Running a vacuum right \n> after processing it would not remove the deleted tuples if there's another \n> transaction running at the same time. Running the vacuum a few minutes later \n> might help with that. You should run VACUUM VERBOSE to see how many \n> non-removable dead tuples there is.\n>\n\nThere's not too much simultaneous transaction on the database, most of the \ntime it shouldn't exceed one minute (worst case). Except, as I need to run \na vacuum analyze on the whole database every day, it now takes 8 hours to \ndo the vacuum (I changed vacuum values to be a little slower instead of \ntaking too much i/o and making the base unusable, because with \ndefault vacuum values it takes 3-4 hours of high i/o usage (total base \nis 20 GB) ).\n\nSo, at this time, the complete vacuum is running, and vacuuming only the \nspool table gives all dead rows are currently not removable (which is \nnormal).\n\nI will run it again later when the complete vacuum is over, to see how \npages are affected.\n\n\nNicolas\n\n\n",
"msg_date": "Tue, 8 May 2007 12:52:10 +0200 (CEST)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "Pomarede Nicolas wrote:\n> There's not too much simultaneous transaction on the database, most of \n> the time it shouldn't exceed one minute (worst case). Except, as I need \n> to run a vacuum analyze on the whole database every day, it now takes 8 \n> hours to do the vacuum (I changed vacuum values to be a little slower \n> instead of taking too much i/o and making the base unusable, because \n> with default vacuum values it takes 3-4 hours of high i/o usage (total \n> base is 20 GB) ).\n> \n> So, at this time, the complete vacuum is running, and vacuuming only the \n> spool table gives all dead rows are currently not removable (which is \n> normal).\n\nOh, I see. I know you don't want to upgrade, but that was changed in \n8.2. Vacuum now ignores concurrent vacuums in the oldest xid \ncalculation, so the long-running vacuum won't stop the vacuum on the \nspool table from removing dead rows.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 08 May 2007 11:59:48 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "Heikki Linnakangas <heikki 'at' enterprisedb.com> writes:\n\n> Pomarede Nicolas wrote:\n> > But for the data (dead rows), even running a vacuum analyze every\n> > day is not enough, and doesn't truncate some empty pages at the end,\n> > so the data size remains in the order of 200-300 MB, when only a few\n> > effective rows are there.\n> \n> For a table like that you should run VACUUM much more often than once\n> a day. Turn on autovacuum, or set up a cron script etc. to run it\n> every 15 minutes or so.\n\nHeikki, is there theoretical need for frequent VACUUM when\nmax_fsm_pages is large enough to hold references of dead rows?\n\nVACUUM documentation says: \"tuples that are deleted or obsoleted\nby an update are not physically removed from their table; they\nremain present until a VACUUM is done\".\n\nFree Space Map documentation says: \"the shared free space map\ntracks the locations of unused space in the database. An\nundersized free space map may cause the database to consume\nincreasing amounts of disk space over time, because free space\nthat is not in the map cannot be re-used\".\n\nI am not sure of the relationship between these two statements.\nAre these deleted/obsoleted tuples stored in the FSM and actually\nthe occupied space is reused before a VACUUM is performed, or is\nsomething else happening? Maybe the FSM is only storing a\nreference to diskspages containing only dead rows, and that's the\ndifference I've been missing?\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "08 May 2007 13:06:07 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*) is 0"
},
{
"msg_contents": "Guillaume Cottenceau wrote:\n> Heikki, is there theoretical need for frequent VACUUM when\n> max_fsm_pages is large enough to hold references of dead rows?\n\nNot really, if you don't mind that your table with 10 rows takes \nhundreds of megabytes on disk. If max_fsm_pages is large enough, the \ntable size will reach a steady state size and won't grow further. It \ndepends on your scenario, it might be totally acceptable.\n\n> VACUUM documentation says: \"tuples that are deleted or obsoleted\n> by an update are not physically removed from their table; they\n> remain present until a VACUUM is done\".\n> \n> Free Space Map documentation says: \"the shared free space map\n> tracks the locations of unused space in the database. An\n> undersized free space map may cause the database to consume\n> increasing amounts of disk space over time, because free space\n> that is not in the map cannot be re-used\".\n> \n> I am not sure of the relationship between these two statements.\n> Are these deleted/obsoleted tuples stored in the FSM and actually\n> the occupied space is reused before a VACUUM is performed, or is\n> something else happening? Maybe the FSM is only storing a\n> reference to diskspages containing only dead rows, and that's the\n> difference I've been missing?\n\nFSM stores information on how much free space there is on each page. \nDeleted but not yet vacuumed tuples don't count as free space. If a page \nis full of dead tuples, it's not usable for inserting new tuples, and \nit's not recorded in the FSM.\n\nWhen vacuum runs, it physically removes tuples from the table and frees \nthe space occupied by them. At the end it updates the FSM.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 08 May 2007 12:19:06 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
},
{
"msg_contents": "On Tue, 8 May 2007, Heikki Linnakangas wrote:\n\n> Pomarede Nicolas wrote:\n>> There's not too much simultaneous transaction on the database, most of the \n>> time it shouldn't exceed one minute (worst case). Except, as I need to run \n>> a vacuum analyze on the whole database every day, it now takes 8 hours to \n>> do the vacuum (I changed vacuum values to be a little slower instead of \n>> taking too much i/o and making the base unusable, because with default \n>> vacuum values it takes 3-4 hours of high i/o usage (total base is 20 GB) ).\n>> \n>> So, at this time, the complete vacuum is running, and vacuuming only the \n>> spool table gives all dead rows are currently not removable (which is \n>> normal).\n>\n> Oh, I see. I know you don't want to upgrade, but that was changed in 8.2. \n> Vacuum now ignores concurrent vacuums in the oldest xid calculation, so the \n> long-running vacuum won't stop the vacuum on the spool table from removing \n> dead rows.\n\nWell, this concurrent vacuum is very interesting, I didn't notice this in \n8.2, but it would really help here to vacuum frequently this spool table \nand have dead rows removed while the 'big' vacuum is running.\nSeems, I will have to consider migrating to 8.2 then :)\n\n\nAnyway, now my vacuum is over, I can vacuum the spool table and see the \nresults :\n\nbefore : 6422 pages for the data and 1700 pages for the indexes.\n\nafter vacuum analyze : 6422 data pages / 1700 index pages\n\n\nhere's the log for vacuum :\n\nfbxtv=# vacuum analyze verbose mysql_spool ;\nINFO: vacuuming \"public.mysql_spool\"\nINFO: index \"pk_mysql_spool\" now contains 21 row versions in 1700 pages\nDETAIL: 7759 index row versions were removed.\n1696 index pages have been deleted, 1667 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 1.78 sec.\nINFO: \"mysql_spool\": removed 7759 row versions in 1521 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 4.88 sec.\nINFO: \"mysql_spool\": found 7759 removable, 21 nonremovable row versions \nin 6422 pages\nDETAIL: 20 dead row versions cannot be removed yet.\nThere were 261028 unused item pointers.\n0 pages are entirely empty.\nCPU 0.01s/0.01u sec elapsed 25.90 sec.\nINFO: vacuuming \"pg_toast.pg_toast_386146338\"\nINFO: index \"pg_toast_386146338_index\" now contains 0 row versions in 1 \npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: \"pg_toast_386146338\": found 0 removable, 0 nonremovable row \nversions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: analyzing \"public.mysql_spool\"\nINFO: \"mysql_spool\": scanned 3000 of 6422 pages, containing 0 live rows \nand 14 dead rows; 0 rows in sample, 0 estimated total rows\nVACUUM\n\n\nSo far, so good, nearly all rows are marked as dead and removable. But \nthen, if I do 'select ctid,* from mysql_spool', I can see ctid values in \nthe range 5934, 5935, 6062, ...\n\nIsn't it possible for postgres to start using pages 0,1,2, ... 
after the \nvacuum, which would mean that after a few minutes, all of the high page numbers \nwould be completely free and could be truncated when the next vacuum is \nrun?\n\nActually, if I run another vacuum, some more dead rows are added to the \nlist of removable rows, but I can never reach the point where data is \nstored in the low page numbers (in my case a few pages would be enough) \nand all other pages get truncated at the end.\nWell, at least the number of pages doesn't increase past 6422 in this \ncase, but I'd like to reclaim space sometimes.\n\nIs this one of the features planned for 8.3: reusing low page \nnumbers in priority after a vacuum, to help subsequent vacuums truncate the \nend of the table once data is located at the beginning of the table?\n\n\nThanks to all for your very interesting answers.\n\nNicolas\n\n\n",
"msg_date": "Tue, 8 May 2007 19:31:24 +0200 (CEST)",
"msg_from": "Pomarede Nicolas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: truncate a table instead of vaccum full when count(*)\n is 0"
}
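One way to actually get those trailing pages back in a case like this, sketched with the table and index names from the message above (plain VACUUM can only truncate completely empty pages at the very end of the file), is to rewrite the table so the few live rows move to the front:

    -- pre-8.3 syntax: CLUSTER indexname ON tablename; this rewrites the table
    -- in index order and returns the old file's space to the OS, but it takes
    -- an exclusive lock, so the spool must be idle while it runs
    CLUSTER pk_mysql_spool ON mysql_spool;
    ANALYZE mysql_spool;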
] |
[
{
"msg_contents": "I'm trying to come up with a way to estimate the need for a\nVACUUM FULL and/or a REINDEX on some tables.\n\n\nAccording to documentation[1], VACUUM FULL's only benefit is\nreturning unused disk space to the operating system; am I correct\nin assuming there's also the benefit of optimizing the\nperformance of scans, because rows are physically compacted on\nthe disk?\n\nWith that in mind, I've tried to estimate how much benefit would\nbe brought by running VACUUM FULL, with the output of VACUUM\nVERBOSE. However, it seems that for example the \"removable rows\"\nreported by each VACUUM VERBOSE run is actually reused by VACUUM,\nso is not what I'm looking for.\n\n\nThen according to documentation[2], REINDEX has some benefit when\nall but a few index keys on a page have been deleted, because the\npage remains allocated (thus, I assume it improves index scan\nperformance, am I correct?). However, again I'm unable to\nestimate the expected benefit. With a slightly modified version\nof a query found in documentation[3] to see the pages used by a\nrelation[4], I'm able to see that the index data from a given\ntable...\n\n relname | relpages | reltuples \n ------------------------+----------+-----------\n idx_sessions_owner_key | 38 | 2166\n pk_sessions | 25 | 2166\n\n...is duly optimized after a REINDEX:\n\n relname | relpages | reltuples \n ------------------------+----------+-----------\n idx_sessions_owner_key | 13 | 2166\n pk_sessions | 7 | 2166\n\nbut what I'd need is really these 38-13 and 25-7 figures (or\nestimates) prior to running REINDEX.\n\n\nThanks for any insight.\n\n\nRef: \n[1] http://www.postgresql.org/docs/8.2/interactive/routine-vacuuming.html\n\n[2] http://www.postgresql.org/docs/8.2/interactive/routine-reindex.html\n\n[3] http://www.postgresql.org/docs/8.2/interactive/disk-usage.html\n\n[4] SELECT c2.relname, c2.relpages, c2.reltuples\n FROM pg_class c, pg_class c2, pg_index i\n WHERE c.relname = 'sessions'\n AND c.oid = i.indrelid\n AND c2.oid = i.indexrelid\n ORDER BY c2.relname;\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "08 May 2007 14:08:24 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "estimating the need for VACUUM FULL and REINDEX"
},
{
"msg_contents": "Guillaume Cottenceau wrote:\n> According to documentation[1], VACUUM FULL's only benefit is\n> returning unused disk space to the operating system; am I correct\n> in assuming there's also the benefit of optimizing the\n> performance of scans, because rows are physically compacted on\n> the disk?\n\nThat's right.\n\n> With that in mind, I've tried to estimate how much benefit would\n> be brought by running VACUUM FULL, with the output of VACUUM\n> VERBOSE. However, it seems that for example the \"removable rows\"\n> reported by each VACUUM VERBOSE run is actually reused by VACUUM,\n> so is not what I'm looking for.\n\nTake a look at contrib/pgstattuple. If a table has high percentage of \nfree space, VACUUM FULL will compact that out.\n\n> Then according to documentation[2], REINDEX has some benefit when\n> all but a few index keys on a page have been deleted, because the\n> page remains allocated (thus, I assume it improves index scan\n> performance, am I correct?). However, again I'm unable to\n> estimate the expected benefit. With a slightly modified version\n> of a query found in documentation[3] to see the pages used by a\n> relation[4], I'm able to see that the index data from a given\n> table...\n\nSee pgstatindex, in the same contrib-module. The number you're looking \nfor is avg_leaf_density. REINDEX will bring that to 90% (with default \nfill factor), so if it's much lower than that REINDEX will help.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 08 May 2007 13:40:05 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: estimating the need for VACUUM FULL and REINDEX"
},
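For reference, a sketch of what those checks look like once contrib/pgstattuple has been installed (by running its SQL script against the database; the table and index names below are the ones from the original question):

    -- how much of the heap VACUUM FULL could compact out
    SELECT table_len, free_space, free_percent, dead_tuple_percent
    FROM pgstattuple('sessions');

    -- REINDEX is worthwhile when avg_leaf_density is well below ~90%
    SELECT avg_leaf_density, leaf_fragmentation
    FROM pgstatindex('idx_sessions_owner_key');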
{
"msg_contents": "In response to Guillaume Cottenceau <[email protected]>:\n\n> I'm trying to come up with a way to estimate the need for a\n> VACUUM FULL and/or a REINDEX on some tables.\n\nYou shouldn't vacuum full unless you have a good reason. Vacuum full\ncauses index bloat.\n\n> According to documentation[1], VACUUM FULL's only benefit is\n> returning unused disk space to the operating system; am I correct\n> in assuming there's also the benefit of optimizing the\n> performance of scans, because rows are physically compacted on\n> the disk?\n\nIn my experience, the smaller the overall database size, the less shared\nmemory it requires. Keeping it vacuumed will reduce the amount of space\ntaken up in memory, which means it's more likely that the data you need\nat any particular time is in memory.\n\nLook up a thread with my name on it a lot related to reindexing. I did\nsome experiments with indexes and reindexing and the only advantage I found\nwas that the space requirement for the indexes is reduced by reindexing.\nI was not able to find any performance difference in newly created indexes\nvs. indexes that were starting to bloat.\n\n> With that in mind, I've tried to estimate how much benefit would\n> be brought by running VACUUM FULL, with the output of VACUUM\n> VERBOSE. However, it seems that for example the \"removable rows\"\n> reported by each VACUUM VERBOSE run is actually reused by VACUUM,\n> so is not what I'm looking for.\n\nI'm not sure what you mean by that last sentence.\n\nThere are only two circumstances (I can think of) for running vacuum\nfull:\n1) You've just made some major change to the database (such as adding\n an obscene # of records, making massive changes to a large\n percentage of the existing data, or issuing a lot of \"alter table\")\n and want to get the FSM back down to a manageable size.\n2) You are desperately hurting for disk space, and need a holdover\n until you can get bigger drives.\n\nReindexing pretty much falls into the same 2 scenerios. I do recommend\nthat you reindex after any vacuum full.\n\nHowever, a much better approach is to either schedule frequent vacuums\n(without the full) or configure/enable autovacuum appropriately for your\nsetup.\n\n> Then according to documentation[2], REINDEX has some benefit when\n> all but a few index keys on a page have been deleted, because the\n> page remains allocated (thus, I assume it improves index scan\n> performance, am I correct?). However, again I'm unable to\n> estimate the expected benefit. With a slightly modified version\n> of a query found in documentation[3] to see the pages used by a\n> relation[4], I'm able to see that the index data from a given\n> table...\n> \n> relname | relpages | reltuples \n> ------------------------+----------+-----------\n> idx_sessions_owner_key | 38 | 2166\n> pk_sessions | 25 | 2166\n> \n> ...is duly optimized after a REINDEX:\n> \n> relname | relpages | reltuples \n> ------------------------+----------+-----------\n> idx_sessions_owner_key | 13 | 2166\n> pk_sessions | 7 | 2166\n> \n> but what I'd need is really these 38-13 and 25-7 figures (or\n> estimates) prior to running REINDEX.\n\nAgain, my experience shows that reindexing is only worthwhile if you're\nreally hurting for disk space/memory.\n\nI don't know of any way to tell what size an index would be if it were\ncompletely packed, but it doesn't seem as if this is the best approach\nanyway. 
Newer versions of PG have the option to create indexes with\nempty space already there at creation time (I believe this is called\n\"fill factor\") to allow for future growth.\n\nThe only other reason I can see for vacuum full/reindex is if you _can_.\nFor example, if there is a period during which you know the database will be\nunused that is sufficiently long for these operations to\ncomplete. Keep in mind that both reindex and vacuum full create performance\nproblems while they are running. If you knew, however, that the system\nwas _never_ being used between 6:00 PM and 8:00 AM, you could run them\novernight. In that case, I would recommend replacing vacuum full with\ncluster.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Tue, 8 May 2007 08:46:11 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: estimating the need for VACUUM FULL and REINDEX"
},
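A sketch of the fill factor option mentioned above, as it exists in 8.2 (the column name owner_key is assumed from the index name; 70 is just an illustrative value):

    -- leave 30% of each leaf page empty at creation time so later inserts
    -- and updates fit into existing pages instead of splitting them
    CREATE INDEX idx_sessions_owner_key ON sessions (owner_key)
        WITH (fillfactor = 70);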
{
"msg_contents": "Heikki Linnakangas <heikki 'at' enterprisedb.com> writes:\n\n> Guillaume Cottenceau wrote:\n> > According to documentation[1], VACUUM FULL's only benefit is\n> > returning unused disk space to the operating system; am I correct\n> > in assuming there's also the benefit of optimizing the\n> > performance of scans, because rows are physically compacted on\n> > the disk?\n> \n> That's right.\n\nOk. Then I think the documentation should probably be updated? It\nseems to totally miss this benefit.\n\nWe've been hit by degrading performance, probably because of too\nseldom VACUUM ANALYZE, and in this situation it seems that the\ntwo solutions are either VACUUM FULL or dumping and recreating\nthe database. Maybe this situation should be described in the\ndocumentation. In this list, everyone always say \"you should\nVACUUM ANALYZE frequently\" but little is done to consider the\ncase when we have to deal with an existing database on which this\nhasn't been done properly.\n \n> > With that in mind, I've tried to estimate how much benefit would\n> > be brought by running VACUUM FULL, with the output of VACUUM\n> > VERBOSE. However, it seems that for example the \"removable rows\"\n> > reported by each VACUUM VERBOSE run is actually reused by VACUUM,\n> > so is not what I'm looking for.\n> \n> Take a look at contrib/pgstattuple. If a table has high percentage of\n> free space, VACUUM FULL will compact that out.\n\nThanks a lot. I've followed this path and I think it should be\nsaid that free_space must also be large compared to 8K -\nfree_percent can be large for tables with very few tuples even on\nalready compacted tables.\n \n> > Then according to documentation[2], REINDEX has some benefit when\n> > all but a few index keys on a page have been deleted, because the\n> > page remains allocated (thus, I assume it improves index scan\n> > performance, am I correct?). However, again I'm unable to\n> > estimate the expected benefit. With a slightly modified version\n> > of a query found in documentation[3] to see the pages used by a\n> > relation[4], I'm able to see that the index data from a given\n> > table...\n> \n> See pgstatindex, in the same contrib-module. The number you're looking\n> for is avg_leaf_density. REINDEX will bring that to 90% (with default\n> fill factor), so if it's much lower than that REINDEX will help.\n\nWoops, seems that this was not availabe in pgstattuple of pg 7.4 :/\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "09 May 2007 11:18:12 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: estimating the need for VACUUM FULL and REINDEX"
},
{
"msg_contents": "Guillaume Cottenceau <gc 'at' mnc.ch> writes:\n\n> With that in mind, I've tried to estimate how much benefit would\n> be brought by running VACUUM FULL, with the output of VACUUM\n> VERBOSE. However, it seems that for example the \"removable rows\"\n> reported by each VACUUM VERBOSE run is actually reused by VACUUM,\n> so is not what I'm looking for.\n\nI've tried to better understand how autovacuum works (we use 7.4)\nto see if a similar mechanism could be used in 7.4 (e.g. run\nVACUUM ANALYZE often enough to not end up with a need to VACUUM\nFULL).\n\nThe autovacuum daemon uses statistics collected thanks to\nstats_row_level. However, inside pg_stat_user_tables, the values\nn_tup_upd and n_tup_del seem to be reported from pg startup and\nnever reset, whereas the information from previous VACUUM would\nbe needed here, if I understand correctly. Is there anything that\ncan be done from that point on with existing pg information, or\nI'd need e.g. to remember the values of my last VACUUM myself?\n\nThanks.\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "11 May 2007 17:31:44 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: estimating the need for VACUUM FULL and REINDEX"
},
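A sketch of one way to use those cumulative counters on 7.4 (requires stats_row_level = on; the snapshot table name is made up for the example): store a snapshot and diff against it, since the view itself is never reset between vacuums.

    CREATE TABLE vacuum_activity_snapshot AS
    SELECT now() AS taken_at, relname, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables;

    -- later: rows changed per table since the snapshot was taken
    SELECT c.relname,
           (c.n_tup_upd + c.n_tup_del) - (s.n_tup_upd + s.n_tup_del) AS changed
    FROM pg_stat_user_tables c
    JOIN vacuum_activity_snapshot s USING (relname)
    ORDER BY changed DESC;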
{
"msg_contents": "Guillaume Cottenceau wrote:\n> Guillaume Cottenceau <gc 'at' mnc.ch> writes:\n> \n> > With that in mind, I've tried to estimate how much benefit would\n> > be brought by running VACUUM FULL, with the output of VACUUM\n> > VERBOSE. However, it seems that for example the \"removable rows\"\n> > reported by each VACUUM VERBOSE run is actually reused by VACUUM,\n> > so is not what I'm looking for.\n> \n> I've tried to better understand how autovacuum works (we use 7.4)\n> to see if a similar mechanism could be used in 7.4 (e.g. run\n> VACUUM ANALYZE often enough to not end up with a need to VACUUM\n> FULL).\n> \n> The autovacuum daemon uses statistics collected thanks to\n> stats_row_level. However, inside pg_stat_user_tables, the values\n> n_tup_upd and n_tup_del seem to be reported from pg startup and\n> never reset, whereas the information from previous VACUUM would\n> be needed here, if I understand correctly. Is there anything that\n> can be done from that point on with existing pg information, or\n> I'd need e.g. to remember the values of my last VACUUM myself?\n\nIn 7.4 there was the pg_autovacuum daemon in contrib, wasn't there? No\nneed to write one yourself.\n\nAFAIR what it did was precisely to remember the numbers from the last\nvacuum, which was cumbersome and not very effective (because they were\nlost on restart for example). Also, the new autovac has some features\nthat the old one didn't have. Ability to set per-table configuration\nfor example.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 11 May 2007 13:25:04 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: estimating the need for VACUUM FULL and REINDEX"
},
{
"msg_contents": "On Fri, May 11, 2007 at 01:25:04PM -0400, Alvaro Herrera wrote:\n> Guillaume Cottenceau wrote:\n> > Guillaume Cottenceau <gc 'at' mnc.ch> writes:\n> > \n> > > With that in mind, I've tried to estimate how much benefit would\n> > > be brought by running VACUUM FULL, with the output of VACUUM\n> > > VERBOSE. However, it seems that for example the \"removable rows\"\n> > > reported by each VACUUM VERBOSE run is actually reused by VACUUM,\n> > > so is not what I'm looking for.\n> > \n> > I've tried to better understand how autovacuum works (we use 7.4)\n> > to see if a similar mechanism could be used in 7.4 (e.g. run\n> > VACUUM ANALYZE often enough to not end up with a need to VACUUM\n> > FULL).\n> > \n> > The autovacuum daemon uses statistics collected thanks to\n> > stats_row_level. However, inside pg_stat_user_tables, the values\n> > n_tup_upd and n_tup_del seem to be reported from pg startup and\n> > never reset, whereas the information from previous VACUUM would\n> > be needed here, if I understand correctly. Is there anything that\n> > can be done from that point on with existing pg information, or\n> > I'd need e.g. to remember the values of my last VACUUM myself?\n> \n> In 7.4 there was the pg_autovacuum daemon in contrib, wasn't there? No\n> need to write one yourself.\n\nCorrect. But one important note: the default parameters in the 7.4\ncontrib autovac are *horrible*. They will let your table grow to 3x\nminimum size, instead of 1.4x in 8.0/8.1 and 1.2x in 8.2. You must\nspecify a different scale if you want anything resembling good results.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sat, 12 May 2007 11:51:18 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: estimating the need for VACUUM FULL and REINDEX"
}
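For comparison, the equivalent knobs in the integrated autovacuum of 8.1/8.2 are ordinary postgresql.conf settings; a sketch with values close to the 8.2 defaults (adjust to taste):

    autovacuum = on
    stats_start_collector = on
    stats_row_level = on
    # a table is vacuumed when dead rows exceed threshold + scale_factor * reltuples
    autovacuum_vacuum_threshold = 500
    autovacuum_vacuum_scale_factor = 0.2
    autovacuum_analyze_scale_factor = 0.1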
] |
[
{
"msg_contents": "\nHi,\n\nDespite numerous efforts, we're unable to solve a severe performance limitation between Pg 7.3.2\nand Pg 8.1.4.\n\nThe query and 'explain analyze' plan below, runs in \n\t26.20 msec on Pg 7.3.2, and \n\t2463.968 ms on Pg 8.1.4, \n\nand the Pg7.3.2 is on older hardware and OS.\n\nMultiply this time difference by >82K, and a 10 minute procedure (which includes\nthis query), now runs in 10 *hours*.....not good....\n\nIn general, however, we're pleased with performance of this very same Pg8.1.4 server \nas compared to the Pg7.3.2 server (loading/dumping, and other queries are much faster).\n\nQUERY:\n\nSELECT dx.db_id, dx.accession, f.uniquename, f.name, cvt.name as ntype,\n fd.is_current \nfrom feature f, feature_dbxref fd, dbxref dx, cvterm cvt \nwhere fd.dbxref_id = dx.dbxref_id\n and fd.feature_id = f.feature_id \n and f.type_id = cvt.cvterm_id \n and accession like 'AY851043%' \n and cvt.name not in ('gene','protein','natural_transposable_element','chromosome_structure_variation','chromosome_arm','repeat_region')\n;\n\n\nexplain analyze output on Pg7.3.2:\n\n-----------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..23.45 rows=1 width=120) (actual time=25.59..25.59 rows=0 loops=1)\n -> Nested Loop (cost=0.00..17.49 rows=1 width=82) (actual time=25.58..25.58 rows=0 loops=1)\n -> Nested Loop (cost=0.00..11.93 rows=1 width=30) (actual time=25.58..25.58 rows=0 loops=1)\n -> Index Scan using dbxref_idx2 on dbxref dx (cost=0.00..5.83 rows=1 width=21) (actual time=25.58..25.58 rows=0 loops=1)\n Index Cond: ((accession >= 'AY851043'::character varying) AND (accession < 'AY851044'::character varying))\n Filter: (accession ~~ 'AY851043%'::text)\n -> Index Scan using feature_dbxref_idx2 on feature_dbxref fd (cost=0.00..6.05 rows=5 width=9) (never executed)\n Index Cond: (fd.dbxref_id = \"outer\".dbxref_id)\n -> Index Scan using feature_pkey on feature f (cost=0.00..5.54 rows=1 width=52) (never executed)\n Index Cond: (\"outer\".feature_id = f.feature_id)\n -> Index Scan using cvterm_pkey on cvterm cvt (cost=0.00..5.94 rows=1 width=38) (never executed)\n Index Cond: (\"outer\".type_id = cvt.cvterm_id)\n Filter: ((name <> 'gene'::character varying) AND (name <> 'protein'::character varying) AND (name <> 'natural_transposable_element'::character varying) AND (name <> 'chromosome_structure_variation'::character varying) AND (name <> 'chromosome_arm'::character varying) AND (name <> 'repeat_region'::character varying))\n Total runtime: 26.20 msec\n(14 rows)\n\n========\n\n\nexplain analyze output on Pg8.1.4:\n\n-----------------------------------------------------------------\n Nested Loop (cost=0.00..47939.87 rows=1 width=108) (actual time=2463.654..2463.654 rows=0 loops=1)\n -> Nested Loop (cost=0.00..47933.92 rows=1 width=73) (actual time=2463.651..2463.651 rows=0 loops=1)\n -> Nested Loop (cost=0.00..47929.86 rows=1 width=22) (actual time=2463.649..2463.649 rows=0 loops=1)\n -> Seq Scan on dbxref dx (cost=0.00..47923.91 rows=1 width=21) (actual time=2463.646..2463.646 rows=0 loops=1)\n Filter: ((accession)::text ~~ 'AY851043%'::text)\n -> Index Scan using feature_dbxref_idx2 on feature_dbxref fd (cost=0.00..5.90 rows=4 width=9) (never executed)\n Index Cond: (fd.dbxref_id = \"outer\".dbxref_id)\n -> Index Scan using feature_pkey on feature f (cost=0.00..4.05 rows=1 width=59) (never executed)\n Index Cond: (\"outer\".feature_id = f.feature_id)\n -> Index Scan using cvterm_pkey on cvterm cvt (cost=0.00..5.94 rows=1 
width=43) (never executed)\n Index Cond: (\"outer\".type_id = cvt.cvterm_id)\n Filter: (((name)::text <> 'gene'::text) AND ((name)::text <> 'protein'::text) AND ((name)::text <> 'natural_transposable_element'::text) AND ((name)::text <> 'chromosome_structure_variation'::text) AND ((name)::text <> 'chromosome_arm'::text) AND ((name)::text <> 'repeat_region'::text))\n Total runtime: 2463.968 ms\n(13 rows)\n\n\n=======\n\nI tried tuning configs, including shutting off enable seqscan, forcing use of index (set shared_buffers high\nwith random_page_cost set low). A colleague who gets 1697ms on Pg8.1.4 with this query provided his \npostgresql.conf -- didn't help....\n\nWe use standard dump/load commands between these servers:\n pg_dump -O fb_2007_01_05 | compress > fb_2007_01_05.Z\n uncompress -c fb_2007_01_05.Z | psql fb_2007_01_05\n\nHardware/OS specs:\n\t- Pg7.3.2: SunFire 280R, 900mHz SPARC processor, 3gb total RAM, 10Krpm SCSI internal disks, Solaris 2.8 \n\t- Pg8.1.4: v240 - dual Ultra-SPARC IIIi 1500MHz SPARC processor, 8GB total RAM, Solaris 2.10 \n\t (used both Sun-supplied postgres binaries, and compiled postgres from source)\n\n\nThanks for your help,\nSusan Russo\n\n",
"msg_date": "Tue, 8 May 2007 10:18:34 -0400 (EDT)",
"msg_from": "Susan Russo <[email protected]>",
"msg_from_op": true,
"msg_subject": "specific query (not all) on Pg8 MUCH slower than Pg7"
},
{
"msg_contents": "Susan Russo <[email protected]> writes:\n> Despite numerous efforts, we're unable to solve a severe performance limitation between Pg 7.3.2\n> and Pg 8.1.4.\n\n> The query and 'explain analyze' plan below, runs in \n> \t26.20 msec on Pg 7.3.2, and \n> \t2463.968 ms on Pg 8.1.4, \n\nYou're not getting the indexscan optimization of the LIKE clause, which\nis most likely due to having initdb'd the 8.1 installation in something\nother than C locale. You can either redo the initdb in C locale (which\nmight be a good move to fix other inconsistencies from the 7.3 behavior\nyou're used to) or create a varchar_pattern_ops index on the column(s)\nyou're using LIKE with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 May 2007 10:48:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7 "
},
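A sketch of how to check for and work around this without re-running initdb (the index name is illustrative):

    -- anything other than C (or POSIX) here means plain btree indexes
    -- cannot be used for LIKE 'prefix%' searches
    SHOW lc_collate;

    -- per-column workaround in a non-C locale
    CREATE INDEX dbxref_accession_pattern_idx
        ON dbxref (accession varchar_pattern_ops);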
{
"msg_contents": "On Tue, May 08, 2007 at 10:18:34AM -0400, Susan Russo wrote:\n> explain analyze output on Pg7.3.2:\n> \n> -> Index Scan using dbxref_idx2 on dbxref dx (cost=0.00..5.83 rows=1 width=21) (actual time=25.58..25.58 rows=0 loops=1)\n> Index Cond: ((accession >= 'AY851043'::character varying) AND (accession < 'AY851044'::character varying))\n> Filter: (accession ~~ 'AY851043%'::text)\n> \n> explain analyze output on Pg8.1.4:\n> \n> -> Seq Scan on dbxref dx (cost=0.00..47923.91 rows=1 width=21) (actual time=2463.646..2463.646 rows=0 loops=1)\n> Filter: ((accession)::text ~~ 'AY851043%'::text)\n\nThis is almost all of your cost. Did you perchance initdb the 8.1.4 cluster\nin a non-C locale? You could always try\n\n CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);\n\nwhich would create an index that might be more useful for your LIKE query,\neven in a non-C locale.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 8 May 2007 16:48:34 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7"
},
{
"msg_contents": "\n--- Susan Russo <[email protected]> wrote:\n> and accession like 'AY851043%' \n\nI don't know if you've tried refactoring your query, but you could try:\n\n AND accession BETWEEN 'AY8510430' AND 'AY8510439' -- where the last digit is\n ^ ^ -- lowest AND highest expected value\n\nRegards,\nRichard Broersma Jr.\n",
"msg_date": "Tue, 8 May 2007 07:54:48 -0700 (PDT)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7"
},
{
"msg_contents": "On 5/8/07, Tom Lane <[email protected]> wrote:\n> You're not getting the indexscan optimization of the LIKE clause, which\n> is most likely due to having initdb'd the 8.1 installation in something\n> other than C locale. You can either redo the initdb in C locale (which\n> might be a good move to fix other inconsistencies from the 7.3 behavior\n> you're used to) or create a varchar_pattern_ops index on the column(s)\n> you're using LIKE with.\n\nGiven the performance implications of setting the wrong locale, and\nthe high probability of accidentally doing this (I run my shells with\nLANG=en_US.UTF-8, so all my databases have inherited this locale), why\nis there no support for changing the database locale after the fact?\n\n# alter database test set lc_collate = 'C';\nERROR: parameter \"lc_collate\" cannot be changed\n\nAlexander.\n",
"msg_date": "Tue, 8 May 2007 16:59:30 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7"
},
{
"msg_contents": "In response to \"Alexander Staubo\" <[email protected]>:\n\n> On 5/8/07, Tom Lane <[email protected]> wrote:\n> > You're not getting the indexscan optimization of the LIKE clause, which\n> > is most likely due to having initdb'd the 8.1 installation in something\n> > other than C locale. You can either redo the initdb in C locale (which\n> > might be a good move to fix other inconsistencies from the 7.3 behavior\n> > you're used to) or create a varchar_pattern_ops index on the column(s)\n> > you're using LIKE with.\n> \n> Given the performance implications of setting the wrong locale, and\n> the high probability of accidentally doing this (I run my shells with\n> LANG=en_US.UTF-8, so all my databases have inherited this locale), why\n> is there no support for changing the database locale after the fact?\n> \n> # alter database test set lc_collate = 'C';\n> ERROR: parameter \"lc_collate\" cannot be changed\n\nHow would that command handle UTF data that could not be converted to C?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Tue, 8 May 2007 11:04:27 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7"
},
{
"msg_contents": "\"Alexander Staubo\" <[email protected]> writes:\n> why is there no support for changing the database locale after the fact?\n\nIt'd corrupt all your indexes (or all the ones on textual columns anyway).\n\nThere are some TODO entries related to this, but don't hold your breath\nwaiting for a fix ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 May 2007 11:05:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7 "
}
] |
[
{
"msg_contents": "Hi,\n\n>You could always try\n>\n> CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);\n\nWOW! we're now at runtime 0.367ms on Pg8\n\nNext step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n\nThanks again - will report back soon.\n\nSusan\n",
"msg_date": "Tue, 8 May 2007 11:19:55 -0400 (EDT)",
"msg_from": "Susan Russo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7"
},
{
"msg_contents": "Susan Russo wrote:\n> Hi,\n> \n> >You could always try\n> >\n> > CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);\n> \n> WOW! we're now at runtime 0.367ms on Pg8\n> \n> Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n\nThat's alternative to the pattern_ops index; it won't help you obtain a\nplan faster than this one.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 8 May 2007 11:34:15 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Susan Russo wrote:\n>> Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n\n> That's alternative to the pattern_ops index; it won't help you obtain a\n> plan faster than this one.\n\nNo, but since their old DB was evidently running in C locale, this seems\nlike a prudent thing to do to avoid other surprising changes in behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 May 2007 12:09:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: specific query (not all) on Pg8 MUCH slower than Pg7 "
}
] |
[
{
"msg_contents": "This query does some sort of analysis on an email archive:\n\n SELECT\n eh_subj.header_body AS subject,\n count(distinct eh_from.header_body)\n FROM\n email JOIN mime_part USING (email_id)\n JOIN email_header eh_subj USING (email_id, mime_part_id)\n JOIN email_header eh_from USING (email_id, mime_part_id)\n WHERE\n eh_subj.header_name = 'subject'\n AND eh_from.header_name = 'from'\n AND mime_part_id = 0\n AND (time >= timestamp '2007-05-05 17:01:59' AND time < timestamp '2007-05-05 17:01:59' + interval '60 min')\n GROUP BY\n eh_subj.header_body;\n\nThe planner chooses this plan:\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=87142.18..87366.58 rows=11220 width=184) (actual time=7883.541..8120.647 rows=35000 loops=1)\n -> Sort (cost=87142.18..87170.23 rows=11220 width=184) (actual time=7883.471..7926.031 rows=35000 loops=1)\n Sort Key: eh_subj.header_body\n -> Hash Join (cost=46283.30..86387.42 rows=11220 width=184) (actual time=5140.182..7635.615 rows=35000 loops=1)\n Hash Cond: (eh_subj.email_id = email.email_id)\n -> Bitmap Heap Scan on email_header eh_subj (cost=11853.68..50142.87 rows=272434 width=104) (actual time=367.956..1719.736 rows=280989 loops=1)\n Recheck Cond: ((mime_part_id = 0) AND (header_name = 'subject'::text))\n -> BitmapAnd (cost=11853.68..11853.68 rows=27607 width=0) (actual time=326.507..326.507 rows=0 loops=1)\n -> Bitmap Index Scan on idx__email_header__header_body_subject (cost=0.00..5836.24 rows=272434 width=0) (actual time=178.041..178.041 rows=280989 loops=1)\n -> Bitmap Index Scan on idx__email_header__header_name (cost=0.00..5880.97 rows=281247 width=0) (actual time=114.574..114.574 rows=280989 loops=1)\n Index Cond: (header_name = 'subject'::text)\n -> Hash (cost=34291.87..34291.87 rows=11020 width=120) (actual time=4772.148..4772.148 rows=35000 loops=1)\n -> Hash Join (cost=24164.59..34291.87 rows=11020 width=120) (actual time=3131.067..4706.997 rows=35000 loops=1)\n Hash Cond: (mime_part.email_id = email.email_id)\n -> Seq Scan on mime_part (cost=0.00..8355.81 rows=265804 width=12) (actual time=0.038..514.291 rows=267890 loops=1)\n Filter: (mime_part_id = 0)\n -> Hash (cost=24025.94..24025.94 rows=11092 width=112) (actual time=3130.982..3130.982 rows=35000 loops=1)\n -> Hash Join (cost=22244.54..24025.94 rows=11092 width=112) (actual time=996.556..3069.280 rows=35000 loops=1)\n Hash Cond: (eh_from.email_id = email.email_id)\n -> Bitmap Heap Scan on email_header eh_from (cost=15576.58..16041.55 rows=107156 width=104) (actual time=569.762..1932.017 rows=280990 loops=1)\n Recheck Cond: ((mime_part_id = 0) AND (header_name = 'from'::text))\n -> BitmapAnd (cost=15576.58..15576.58 rows=160 width=0) (actual time=532.217..532.217 rows=0 loops=1)\n -> Bitmap Index Scan on dummy_index (cost=0.00..3724.22 rows=107156 width=0) (actual time=116.386..116.386 rows=280990 loops=1)\n -> Bitmap Index Scan on idx__email_header__from_local (cost=0.00..5779.24 rows=107156 width=0) (actual time=174.883..174.883 rows=280990 loops=1)\n -> Bitmap Index Scan on dummy2_index (cost=0.00..5992.25 rows=107156 width=0) (actual time=173.575..173.575 rows=280990 loops=1)\n -> Hash (cost=6321.79..6321.79 rows=27694 width=8) (actual time=426.739..426.739 rows=35000 loops=1)\n -> Index Scan using idx__email__time on email (cost=0.00..6321.79 rows=27694 width=8) (actual 
time=50.000..375.021 rows=35000 loops=1)\n Index Cond: ((\"time\" >= '2007-05-05 17:01:59'::timestamp without time zone) AND (\"time\" < '2007-05-05 18:01:59'::timestamp without time zone))\n Total runtime: 8160.442 ms\n\nThe estimates all look pretty good and reasonable.\n\nA faster plan, however, is this:\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=1920309.81..1920534.21 rows=11220 width=184) (actual time=5349.493..5587.536 rows=35000 loops=1)\n -> Sort (cost=1920309.81..1920337.86 rows=11220 width=184) (actual time=5349.427..5392.110 rows=35000 loops=1)\n Sort Key: eh_subj.header_body\n -> Nested Loop (cost=15576.58..1919555.05 rows=11220 width=184) (actual time=537.938..5094.377 rows=35000 loops=1)\n -> Nested Loop (cost=15576.58..475387.23 rows=11020 width=120) (actual time=537.858..4404.330 rows=35000 loops=1)\n -> Nested Loop (cost=15576.58..430265.44 rows=11092 width=112) (actual time=537.768..4024.184 rows=35000 loops=1)\n -> Bitmap Heap Scan on email_header eh_from (cost=15576.58..16041.55 rows=107156 width=104) (actual time=537.621..1801.032 rows=280990 loops=1)\n Recheck Cond: ((mime_part_id = 0) AND (header_name = 'from'::text))\n -> BitmapAnd (cost=15576.58..15576.58 rows=160 width=0) (actual time=500.006..500.006 rows=0 loops=1)\n -> Bitmap Index Scan on dummy_index (cost=0.00..3724.22 rows=107156 width=0) (actual time=85.025..85.025 rows=280990 loops=1)\n -> Bitmap Index Scan on idx__email_header__from_local (cost=0.00..5779.24 rows=107156 width=0) (actual time=173.006..173.006 rows=280990 loops=1)\n -> Bitmap Index Scan on dummy2_index (cost=0.00..5992.25 rows=107156 width=0) (actual time=174.463..174.463 rows=280990 loops=1)\n -> Index Scan using email_pkey on email (cost=0.00..3.85 rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=280990)\n Index Cond: (email.email_id = eh_from.email_id)\n Filter: ((\"time\" >= '2007-05-05 17:01:59'::timestamp without time zone) AND (\"time\" < '2007-05-05 18:01:59'::timestamp without time zone))\n -> Index Scan using mime_part_pkey on mime_part (cost=0.00..4.06 rows=1 width=12) (actual time=0.005..0.006 rows=1 loops=35000)\n Index Cond: ((email.email_id = mime_part.email_id) AND (mime_part.mime_part_id = 0))\n -> Index Scan using idx__email_header__email_id__mime_part_id on email_header eh_subj (cost=0.00..130.89 rows=13 width=104) (actual time=0.009..0.015 rows=1 loops=35000)\n Index Cond: ((email.email_id = eh_subj.email_id) AND (0 = eh_subj.mime_part_id))\n Filter: (header_name = 'subject'::text)\n Total runtime: 5625.024 ms\n\nNote how spectacularly overpriced this plan is. The costs for the nested\nloops are calculated approximately as number of outer tuples times cost of\nthe inner scan. So slight overestimations of the inner scans such as \n\nIndex Scan using email_pkey on email (cost=0.00..3.85 rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=280990)\n\nkill this calculation.\n\nMost likely, all of these database is cached, so I tried reducing\nseq_page_cost and random_page_cost, but I needed to turn them all the way\ndown to 0.02 or 0.03, which is almost like cpu_tuple_cost. Is that\nreasonable? Or what is wrong here?\n\n\nPostgreSQL 8.2.1 on x86_64-unknown-linux-gnu\nwork_mem = 256MB\neffective_cache_size = 384MB\n\nThe machine has 1GB of RAM.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Tue, 8 May 2007 17:22:16 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Nested loops overpriced"
},
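Those cost settings can be tried per session before touching postgresql.conf; a sketch for a fully cached 8.2 database (the values are only illustrative):

    SET effective_cache_size = 98304;   -- 768MB expressed in 8kB pages
    SET seq_page_cost = 0.1;            -- parameter new in 8.2
    SET random_page_cost = 0.1;
    -- then re-run EXPLAIN ANALYZE on the query above and compare the
    -- nested-loop and hash-join cost estimates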
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Note how spectacularly overpriced this plan is.\n\nHmm, I'd have expected it to discount the repeated indexscans a lot more\nthan it seems to be doing for you. As an example in the regression\ndatabase, note what happens to the inner indexscan cost estimate when\nthe number of outer tuples grows:\n\nregression=# set enable_hashjoin TO 0;\nSET\nregression=# set enable_mergejoin TO 0;\nSET\nregression=# set enable_bitmapscan TO 0;\nSET\nregression=# explain select * from tenk1 a join tenk1 b using (thousand) where a.unique1 = 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..52.82 rows=10 width=484)\n -> Index Scan using tenk1_unique1 on tenk1 a (cost=0.00..8.27 rows=1 width=244)\n Index Cond: (unique1 = 1)\n -> Index Scan using tenk1_thous_tenthous on tenk1 b (cost=0.00..44.42 rows=10 width=244)\n Index Cond: (a.thousand = b.thousand)\n(5 rows)\n\nregression=# explain select * from tenk1 a join tenk1 b using (thousand) where a.ten = 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2531.08 rows=9171 width=484)\n -> Seq Scan on tenk1 a (cost=0.00..483.00 rows=900 width=244)\n Filter: (ten = 1)\n -> Index Scan using tenk1_thous_tenthous on tenk1 b (cost=0.00..2.15 rows=10 width=244)\n Index Cond: (a.thousand = b.thousand)\n(5 rows)\n\nThis is with 8.2.4 but AFAICS from the CVS logs, 8.2's cost estimation\ncode didn't change since 8.2.1. What do you get for a comparably\nsimple case?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 May 2007 11:53:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Nested loops overpriced "
},
{
"msg_contents": "Am Dienstag, 8. Mai 2007 17:53 schrieb Tom Lane:\n> Hmm, I'd have expected it to discount the repeated indexscans a lot more\n> than it seems to be doing for you. As an example in the regression\n> database, note what happens to the inner indexscan cost estimate when\n> the number of outer tuples grows:\n\nI can reproduce your results in the regression test database. 8.2.1 and 8.2.4 \nbehave the same.\n\nI checked the code around cost_index(), and the assumptions appear to be \ncorrect (at least this query doesn't produce wildly unusual data). \nApparently, however, the caching effects are much more significant than the \nmodel takes into account.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Wed, 9 May 2007 13:58:25 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Nested loops overpriced"
},
{
"msg_contents": "I'm having something weird too...\n\nLook:\n\n Nested Loop Left Join (cost=93.38..7276.26 rows=93 width=58) (actual\ntime=99.211..4804.525 rows=2108 loops=1)\n -> Hash Join (cost=93.38..3748.18 rows=93 width=4) (actual\ntime=0.686..20.632 rows=45 loops=1)\n Hash Cond: ((u.i)::text = (m.i)::text)\n -> Seq Scan on u (cost=0.00..2838.80 rows=10289 width=4)\n(actual time=0.010..7.813 rows=10291 loops=1)\n -> Hash (cost=87.30..87.30 rows=30 width=7) (actual\ntime=0.445..0.445 rows=45 loops=1)\n -> Index Scan using m_pkey on m (cost=0.00..87.30\nrows=30 width=7) (actual time=0.046..0.371 rows=45 loops=1)\n Index Cond: (t = 1613)\n Filter: ((a)::text = 'Y'::text)\n -> Index Scan using s_pkey on s (cost=0.00..37.33 rows=3\nwidth=58) (actual time=19.864..106.198 rows=47 loops=45)\n Index Cond: ((u.i = s.u) AND ((s.p)::text = '4'::text) AND\n(s.t = 1613) AND ((s.c)::text = 'cmi.core.total_time'::text))\n Total runtime: 4805.975 ms\n\nAnd disabling all the joins Tom said:\n\n Nested Loop Left Join (cost=0.00..16117.12 rows=93 width=58) (actual\ntime=2.706..168.556 rows=2799 loops=1)\n -> Nested Loop (cost=0.00..13187.94 rows=93 width=4) (actual\ntime=2.622..125.739 rows=50 loops=1)\n -> Seq Scan on u (cost=0.00..2838.80 rows=10289 width=4)\n(actual time=0.012..9.863 rows=10291 loops=1)\n -> Index Scan using m_pkey on m (cost=0.00..0.80 rows=1\nwidth=7) (actual time=0.009..0.009 rows=0 loops=10291)\n Index Cond: ((m.t = 1615) AND ((u.i)::text = (m.i)::text))\n Filter: ((a)::text = 'Y'::text)\n -> Index Scan using s_pkey on s (cost=0.00..31.09 rows=2\nwidth=58) (actual time=0.047..0.778 rows=56 loops=50)\n Index Cond: ((u.i = s.u) AND ((s.p)::text = '4'::text) AND\n(s.t = 1615) AND ((s.c)::text = 'cmi.core.total_time'::text))\n Total runtime: 169.836 ms\n\nI had PostgreSQL 8.2.3 on x86_64-redhat-linux-gnu, shared_buffers with\n1640MB, effective_cache_size with 5400MB and 8GB of RAM, where all\nshared_buffers blocks are used (pg_buffercache, relfilenode IS NOT\nNULL).\n\nNote that even when I set default_statistics_target to 500, and\ncalling \"ANALYZE s;\", I cannot see the number of estimated rows on the\nindex scan on s close to the actual rows.\n\nCould it be related?\n\n2007/5/9, Peter Eisentraut <[email protected]>:\n> Am Dienstag, 8. Mai 2007 17:53 schrieb Tom Lane:\n> > Hmm, I'd have expected it to discount the repeated indexscans a lot more\n> > than it seems to be doing for you. As an example in the regression\n> > database, note what happens to the inner indexscan cost estimate when\n> > the number of outer tuples grows:\n>\n> I can reproduce your results in the regression test database. 8.2.1 and 8.2.4\n> behave the same.\n>\n> I checked the code around cost_index(), and the assumptions appear to be\n> correct (at least this query doesn't produce wildly unusual data).\n> Apparently, however, the caching effects are much more significant than the\n> model takes into account.\n>\n> --\n> Peter Eisentraut\n> http://developer.postgresql.org/~petere/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n\n\n-- \nDaniel Cristian Cruz\n",
"msg_date": "Wed, 9 May 2007 10:38:56 -0300",
"msg_from": "\"Daniel Cristian Cruz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Nested loops overpriced"
},
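A sketch of raising the statistics target only for the misestimated join/filter columns instead of changing default_statistics_target globally (the abbreviated table and column names are taken from the plans above and may not match the real schema):

    ALTER TABLE s ALTER COLUMN u SET STATISTICS 500;
    ALTER TABLE s ALTER COLUMN t SET STATISTICS 500;
    ANALYZE s;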
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Am Dienstag, 8. Mai 2007 17:53 schrieb Tom Lane:\n>> Hmm, I'd have expected it to discount the repeated indexscans a lot more\n>> than it seems to be doing for you. As an example in the regression\n>> database, note what happens to the inner indexscan cost estimate when\n>> the number of outer tuples grows:\n\n> I can reproduce your results in the regression test database. 8.2.1 and 8.2.4 \n> behave the same.\n\nWell, there's something funny going on here. You've got for instance\n\n -> Index Scan using email_pkey on email (cost=0.00..3.85 rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=280990)\n Index Cond: (email.email_id = eh_from.email_id)\n Filter: ((\"time\" >= '2007-05-05 17:01:59'::timestamp without time zone) AND (\"time\" < '2007-05-05 18:01:59'::timestamp without time zone))\n\non the inside of a nestloop whose outer side is predicted to return\n107156 rows. That should've been discounted to *way* less than 3.85\ncost units per iteration.\n\nAre you using any nondefault planner settings? How big are these\ntables, anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 May 2007 10:11:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Nested loops overpriced "
},
{
"msg_contents": "\n\"Daniel Cristian Cruz\" <[email protected]> writes:\n\n> -> Nested Loop (cost=0.00..13187.94 rows=93 width=4) (actual time=2.622..125.739 rows=50 loops=1)\n> -> Seq Scan on u (cost=0.00..2838.80 rows=10289 width=4) (actual time=0.012..9.863 rows=10291 loops=1)\n> -> Index Scan using m_pkey on m (cost=0.00..0.80 rows=1 width=7) (actual time=0.009..0.009 rows=0 loops=10291)\n\nThat's not discounting the nested loop for cache effect at all!\n\nWhat is your effective_cache_size for this?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Wed, 09 May 2007 15:34:00 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Nested loops overpriced"
},
{
"msg_contents": "2007/5/9, Gregory Stark <[email protected]>:\n>\n> \"Daniel Cristian Cruz\" <[email protected]> writes:\n>\n> > -> Nested Loop (cost=0.00..13187.94 rows=93 width=4) (actual time=2.622..125.739 rows=50 loops=1)\n> > -> Seq Scan on u (cost=0.00..2838.80 rows=10289 width=4) (actual time=0.012..9.863 rows=10291 loops=1)\n> > -> Index Scan using m_pkey on m (cost=0.00..0.80 rows=1 width=7) (actual time=0.009..0.009 rows=0 loops=10291)\n>\n> That's not discounting the nested loop for cache effect at all!\n>\n> What is your effective_cache_size for this?\n\neffective_cache_size is 5400MB.\n\nI forgot to mention a modifications on cost:\ncpu_tuple_cost = 0.2\nWhich forced a usage of indexes.\n\nI set it to 0.01 and the plan has a index scan on m before the hash on\nu, being 15% slower:\n\n Hash Cond: ((u.i)::text = (m.i)::text)\n -> Seq Scan on u (cost=0.00..2838.80 rows=10289 width=4)\n(actual time=0.007..6.138 rows=10292 loops=1)\n -> Hash (cost=87.30..87.30 rows=30 width=7) (actual\ntime=0.185..0.185 rows=50 loops=1)\n -> Index Scan using m_pkey on m (cost=0.00..87.30\nrows=30 width=7) (actual time=0.021..0.144 rows=50 loops=1)\n Index Cond: (t = 1615)\n Filter: ((a)::text = 'Y'::text)\n\nI'm still confused since I didn't understood what \"That's not\ndiscounting the nested loop for cache effect at all!\" could mean...\n\nThanks for the help.\n-- \nDaniel Cristian Cruz\n",
"msg_date": "Wed, 9 May 2007 11:51:58 -0300",
"msg_from": "\"Daniel Cristian Cruz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Nested loops overpriced"
},
{
"msg_contents": "Am Mittwoch, 9. Mai 2007 16:11 schrieb Tom Lane:\n> Well, there's something funny going on here. You've got for instance\n>\n> -> Index Scan using email_pkey on email (cost=0.00..3.85\n> rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=280990) Index Cond:\n> (email.email_id = eh_from.email_id)\n> Filter: ((\"time\" >= '2007-05-05 17:01:59'::timestamp\n> without time zone) AND (\"time\" < '2007-05-05 18:01:59'::timestamp without\n> time zone))\n>\n> on the inside of a nestloop whose outer side is predicted to return\n> 107156 rows. That should've been discounted to *way* less than 3.85\n> cost units per iteration.\n\nThis is the new plan with 8.2.4. It's still got the same problem, though.\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=5627064.21..5627718.73 rows=32726 width=184) (actual time=4904.834..5124.585 rows=35000 loops=1)\n -> Sort (cost=5627064.21..5627146.03 rows=32726 width=184) (actual time=4904.771..4947.892 rows=35000 loops=1)\n Sort Key: eh_subj.header_body\n -> Nested Loop (cost=0.00..5624610.06 rows=32726 width=184) (actual time=0.397..4628.141 rows=35000 loops=1)\n -> Nested Loop (cost=0.00..1193387.12 rows=28461 width=120) (actual time=0.322..3960.360 rows=35000 loops=1)\n -> Nested Loop (cost=0.00..1081957.26 rows=28648 width=112) (actual time=0.238..3572.023 rows=35000 loops=1)\n -> Index Scan using dummy_index on email_header eh_from (cost=0.00..13389.15 rows=280662 width=104) (actual time=0.133..1310.248 rows=280990 loops=1)\n -> Index Scan using email_pkey on email (cost=0.00..3.79 rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=280990)\n Index Cond: (email.email_id = eh_from.email_id)\n Filter: ((\"time\" >= '2007-05-05 17:01:59'::timestamp without time zone) AND (\"time\" < '2007-05-05 18:01:59'::timestamp without time zone))\n -> Index Scan using mime_part_pkey on mime_part (cost=0.00..3.88 rows=1 width=12) (actual time=0.005..0.006 rows=1 loops=35000)\n Index Cond: ((email.email_id = mime_part.email_id) AND (mime_part.mime_part_id = 0))\n -> Index Scan using idx__email_header__email_id__mime_part_id on email_header eh_subj (cost=0.00..155.47 rows=18 width=104) (actual time=0.009..0.014 rows=1 loops=35000)\n Index Cond: ((email.email_id = eh_subj.email_id) AND (0 = eh_subj.mime_part_id))\n Filter: (header_name = 'subject'::text)\n Total runtime: 5161.390 ms\n\n> Are you using any nondefault planner settings?\n\nrandom_page_cost = 3\neffective_cache_size = 384MB\n\n> How big are these tables, anyway?\n\nemail\t\t35 MB\nemail_header\t421 MB\nmime_part\t37 MB\n\nEverything is analyzed, vacuumed, and reindexed.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Wed, 9 May 2007 18:17:44 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Nested loops overpriced"
},
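Those sizes can be reproduced with the built-in size functions available since 8.1; a sketch:

    SELECT relname,
           pg_size_pretty(pg_relation_size(oid)) AS heap_size,
           pg_size_pretty(pg_total_relation_size(oid)) AS with_indexes
    FROM pg_class
    WHERE relname IN ('email', 'email_header', 'mime_part');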
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Are you using any nondefault planner settings?\n\n> random_page_cost = 3\n> effective_cache_size = 384MB\n\n>> How big are these tables, anyway?\n\n> email\t\t35 MB\n> email_header\t421 MB\n> mime_part\t37 MB\n\nHmmm ... I see at least part of the problem, which is that email_header\nis joined twice in this query, which means that it's counted twice in\nfiguring the total volume of pages competing for cache space. So the\nthing thinks cache space is oversubscribed nearly 3X when in reality\nthe database is fully cached. I remember having dithered about whether\nto try to avoid counting the same physical relation more than once in\ntotal_table_pages, but this example certainly suggests that we\nshouldn't. Meanwhile, do the estimates get better if you set\neffective_cache_size to 1GB or so?\n\nTo return to your original comment: if you're trying to model a\nsituation with a fully cached database, I think it's sensible\nto set random_page_cost = seq_page_cost = 0.1 or so. You had\nmentioned having to decrease them to 0.02, which seems unreasonably\nsmall to me too, but maybe with the larger effective_cache_size\nyou won't have to go that far.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 May 2007 13:40:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Nested loops overpriced "
},
{
"msg_contents": "Am Mittwoch, 9. Mai 2007 19:40 schrieb Tom Lane:\n> I remember having dithered about whether\n> to try to avoid counting the same physical relation more than once in\n> total_table_pages, but this example certainly suggests that we\n> shouldn't. Meanwhile, do the estimates get better if you set\n> effective_cache_size to 1GB or so?\n\nYes, that makes the plan significantly cheaper (something like 500,000 instead \nof 5,000,000), but still a lot more expensive than the hash join (about \n100,000).\n\n> To return to your original comment: if you're trying to model a\n> situation with a fully cached database, I think it's sensible\n> to set random_page_cost = seq_page_cost = 0.1 or so. You had\n> mentioned having to decrease them to 0.02, which seems unreasonably\n> small to me too, but maybe with the larger effective_cache_size\n> you won't have to go that far.\n\nHeh, when I decrease these parameters, the hash join gets cheaper as well. I \ncan't actually get it to pick the nested-loop join.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Thu, 10 May 2007 17:30:22 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Nested loops overpriced"
},
{
"msg_contents": "Am Mittwoch, 9. Mai 2007 19:40 schrieb Tom Lane:\n> Hmmm ... I see at least part of the problem, which is that email_header\n> is joined twice in this query, which means that it's counted twice in\n> figuring the total volume of pages competing for cache space. �So the\n> thing thinks cache space is oversubscribed nearly 3X when in reality\n> the database is fully cached.\n\nI should add that other, similar queries in this database that do not involve \njoining the same table twice produce seemingly optimal plans. (It picks hash \njoins which are actually faster than nested loops.)\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Thu, 10 May 2007 17:35:06 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Nested loops overpriced"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Am Mittwoch, 9. Mai 2007 19:40 schrieb Tom Lane:\n>> Hmmm ... I see at least part of the problem, which is that email_header\n>> is joined twice in this query, which means that it's counted twice in\n>> figuring the total volume of pages competing for cache space. So the\n>> thing thinks cache space is oversubscribed nearly 3X when in reality\n>> the database is fully cached.\n\n> I should add that other, similar queries in this database that do not\n> involve joining the same table twice produce seemingly optimal plans.\n> (It picks hash joins which are actually faster than nested loops.)\n\nIt strikes me that in a situation like this, where the same table is\nbeing scanned twice by concurrent indexscans, we ought to amortize the\nfetches across *both* scans rather than treating them independently;\nso there are actually two different ways in which we're being too\npessimistic about the indexscanning cost.\n\nDifficult to see how to fix that in the current planner design however;\nsince it's a bottom-up process, we have to cost the individual scans\nwithout any knowledge of what approach will be chosen for other scans.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 May 2007 11:47:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Nested loops overpriced "
}
] |
[
{
"msg_contents": "Does using DISTINCT in a query force PG to abandon any index search it might\nhave embarked upon?\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nDoes using DISTINCT in a query force PG to abandon any index search it might have embarked upon?-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Tue, 8 May 2007 12:52:35 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "DISTINCT Question"
},
{
"msg_contents": "On Tue, May 08, 2007 at 12:52:35PM -0700, Y Sidhu wrote:\n> Does using DISTINCT in a query force PG to abandon any index search it might\n> have embarked upon?\n\nNo.\n\nIf you need help with a specific query, please post it, along with your table\ndefinitions and EXPLAIN ANALYZE output.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 8 May 2007 21:55:51 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DISTINCT Question"
},
{
"msg_contents": "Y Sidhu wrote:\n> Does using DISTINCT in a query force PG to abandon any index search it \n> might have embarked upon?\n\nDepends on the where clause.\n\n> \n> -- \n> Yudhvir Singh Sidhu\n> 408 375 3134 cell\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Tue, 08 May 2007 12:57:19 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DISTINCT Question"
},
{
"msg_contents": "On Tue, 2007-05-08 at 14:52, Y Sidhu wrote:\n> Does using DISTINCT in a query force PG to abandon any index search it\n> might have embarked upon?\n\n explain analyze select distinct request from businessrequestsummary\nwhere lastflushtime between now() - interval '30 minutes' and now();\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=3.04..3.04 rows=1 width=17) (actual time=110.565..162.630\nrows=75 loops=1)\n -> Sort (cost=3.04..3.04 rows=1 width=17) (actual\ntime=110.555..135.252 rows=6803 loops=1)\n Sort Key: request\n -> Index Scan using businessrequestsummary_lastflushtime_dx on\nbusinessrequestsummary (cost=0.01..3.03 rows=1 width=17) (actual\ntime=0.063..59.674 rows=6803 loops=1)\n Index Cond: ((lastflushtime >= (now() -\n'00:30:00'::interval)) AND (lastflushtime <= now()))\n Total runtime: 163.925 ms\n(6 rows)\n\nI'd say no. \n",
"msg_date": "Tue, 08 May 2007 15:28:22 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DISTINCT Question"
}
] |
[
{
"msg_contents": "I'm building a kiosk with a 3D front end accessing PostGIS/PostgreSQL \nvia Apache/PHP. The 3D display is supposed to show smooth motion from \nlocation to location, with PostGIS giving dynamically updated \ninformation on the locations. Everything runs on the same machine, \nand it all works, but when I start a query the 3D display stutters \nhorribly. It looks like PostgreSQL grabs hold of the CPU and doesn't \nlet go until it's completed the query.\n\nI don't need the PostgreSQL query to return quickly, but I must \nretain smooth animation while the query is being processed. In other \nwords, I need PostgreSQL to spread out its CPU usage so that it \ndoesn't monopolize the CPU for any significant time (more than 50ms \nor so).\n\nPossible solutions:\n\n1: Set the PostgreSQL task priority lower than the 3D renderer task, \nand to make sure that the 3D renderer sleep()s enough to let \nPostgreSQL get its work done. The obvious objection to this obvious \nsolution is \"Priority inversion!\", but I see that as an additional \nchallenge to be surmounted rather than an absolute prohibition. So, \nany thoughts on setting the PostgreSQL task priority (including by \nthe much-maligned tool shown at \n<http://weblog.bignerdranch.com/?p=11>)?\n\n2: Some variation of the Cost-Based Vacuum Delay. Hypothetically, \nthis would have the PostgreSQL task sleep() periodically while \nprocessing the query, allowing the 3D renderer to continue working at \na reduced frame rate. My understanding, however, is that this only \nworks during VACUUM and ANALYZE commands, so it won't help during my \nSELECT commands. So, any thoughts on using Cost-Based Vacuum Delay as \na Cost-Based Select Delay?\n\n3: ... some other solution I haven't thought of.\n\n\nAny thoughts, suggestions, ideas?\n\n\nThanks,\nDan\n\n-- \nDaniel T. Griscom [email protected]\nSuitable Systems http://www.suitable.com/\n1 Centre Street, Suite 204 (781) 665-0053\nWakefield, MA 01880-2400\n",
"msg_date": "Tue, 8 May 2007 16:27:10 -0400",
"msg_from": "Daniel Griscom <[email protected]>",
"msg_from_op": true,
"msg_subject": "Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "On Tue, May 08, 2007 at 04:27:10PM -0400, Daniel Griscom wrote:\n> 3: ... some other solution I haven't thought of.\n\nOn a wild guess, could you try setting the CPU costs higher, to make the\nplanner choose a less CPU-intensive plan?\n\nOther (weird) suggestions would include calling a user-defined function that\nsleep()ed for you between every row. Or use a dual-core system. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 8 May 2007 22:51:20 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
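A minimal sketch of the "sleep between every row" idea mentioned above, assuming PostgreSQL 8.2 or later (for pg_sleep); the function name, table, column and delay value are all made up for illustration:

    -- Hypothetical per-row brake: pauses briefly every time it is evaluated.
    CREATE OR REPLACE FUNCTION throttle_row(delay double precision)
    RETURNS boolean AS $$
    BEGIN
        PERFORM pg_sleep(delay);   -- hand the CPU back to the renderer
        RETURN true;
    END;
    $$ LANGUAGE plpgsql VOLATILE;

    -- Usage sketch: because the function is volatile it is re-evaluated for
    -- each candidate row, spreading the query's CPU usage out over time.
    SELECT *
    FROM locations
    WHERE region_id = 42
      AND throttle_row(0.001);

The obvious trade-off is that the query takes correspondingly longer, which the original poster said was acceptable.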
{
"msg_contents": "In response to Daniel Griscom <[email protected]>:\n\n> I'm building a kiosk with a 3D front end accessing PostGIS/PostgreSQL \n> via Apache/PHP. The 3D display is supposed to show smooth motion from \n> location to location, with PostGIS giving dynamically updated \n> information on the locations. Everything runs on the same machine, \n> and it all works, but when I start a query the 3D display stutters \n> horribly. It looks like PostgreSQL grabs hold of the CPU and doesn't \n> let go until it's completed the query.\n> \n> I don't need the PostgreSQL query to return quickly, but I must \n> retain smooth animation while the query is being processed. In other \n> words, I need PostgreSQL to spread out its CPU usage so that it \n> doesn't monopolize the CPU for any significant time (more than 50ms \n> or so).\n> \n> Possible solutions:\n> \n> 1: Set the PostgreSQL task priority lower than the 3D renderer task, \n> and to make sure that the 3D renderer sleep()s enough to let \n> PostgreSQL get its work done. The obvious objection to this obvious \n> solution is \"Priority inversion!\", but I see that as an additional \n> challenge to be surmounted rather than an absolute prohibition. So, \n> any thoughts on setting the PostgreSQL task priority (including by \n> the much-maligned tool shown at \n> <http://weblog.bignerdranch.com/?p=11>)?\n\nIf it's all PostgreSQL processes that you want take a backseat to your\nrendering process, why not just nice the initial PostgreSQL daemon? All\nchildren will inherit the nice value, and there's no chance of priority\ninversion because all the PostgreSQL backends are running at the same\npriority.\n\nJust a thought.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
"msg_date": "Tue, 8 May 2007 16:55:49 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "On Tue, 8 May 2007, Daniel Griscom wrote:\n\n> I'm building a kiosk with a 3D front end accessing PostGIS/PostgreSQL via \n> Apache/PHP. The 3D display is supposed to show smooth motion from location to \n> location, with PostGIS giving dynamically updated information on the \n> locations. Everything runs on the same machine, and it all works, but when I \n> start a query the 3D display stutters horribly. It looks like PostgreSQL \n> grabs hold of the CPU and doesn't let go until it's completed the query.\n>\n> I don't need the PostgreSQL query to return quickly, but I must retain smooth \n> animation while the query is being processed. In other words, I need \n> PostgreSQL to spread out its CPU usage so that it doesn't monopolize the CPU \n> for any significant time (more than 50ms or so).\n>\n> Possible solutions:\n>\n> 1: Set the PostgreSQL task priority lower than the 3D renderer task, and to \n> make sure that the 3D renderer sleep()s enough to let PostgreSQL get its work \n> done. The obvious objection to this obvious solution is \"Priority \n> inversion!\", but I see that as an additional challenge to be surmounted \n> rather than an absolute prohibition. So, any thoughts on setting the \n> PostgreSQL task priority (including by the much-maligned tool shown at \n> <http://weblog.bignerdranch.com/?p=11>)?\n\nthis may or may not help\n\n> 3: ... some other solution I haven't thought of.\n\ntake a look at the scheduler discussion that has been takeing place on the \nlinux-kernel list. there are a number of things being discussed specificly \nto address the type of problems that you are running into (CPU hog causes \nlatencies for graphics processes).\n\nit looks like nothing will go into the 2.6.22 kernel officially, but if \nyou are willing to test the begezzes out of it before you depend on it, I \nsuspect that either the SD or CFS schedulers will clean things up for you.\n\nDavid Lang\n",
"msg_date": "Tue, 8 May 2007 13:59:13 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "1. If you go the route of using nice, you might want to run the 3D\nfront-end at a higher priority instead of running PG at a lower\npriority. That way apache, php and the other parts all run at the same\npriority as PG and just the one task that you want to run smoothly is\nelevated.\n\n2. You may not even need separate priorities if you're running on Linux\nwith a recent kernel and you enable the sleep() calls that you would\nneed anyway for solution #1 to work. This is because Linux kernels are\ngetting pretty good nowadays about rewarding tasks with a lot of sleeps,\nalthough there are some further kernel changes still under development\nthat look even more promising.\n\n-- Mark\n\nOn Tue, 2007-05-08 at 16:27 -0400, Daniel Griscom wrote:\n> I'm building a kiosk with a 3D front end accessing PostGIS/PostgreSQL \n> via Apache/PHP. The 3D display is supposed to show smooth motion from \n> location to location, with PostGIS giving dynamically updated \n> information on the locations. Everything runs on the same machine, \n> and it all works, but when I start a query the 3D display stutters \n> horribly. It looks like PostgreSQL grabs hold of the CPU and doesn't \n> let go until it's completed the query.\n> \n> I don't need the PostgreSQL query to return quickly, but I must \n> retain smooth animation while the query is being processed. In other \n> words, I need PostgreSQL to spread out its CPU usage so that it \n> doesn't monopolize the CPU for any significant time (more than 50ms \n> or so).\n> \n> Possible solutions:\n> \n> 1: Set the PostgreSQL task priority lower than the 3D renderer task, \n> and to make sure that the 3D renderer sleep()s enough to let \n> PostgreSQL get its work done. The obvious objection to this obvious \n> solution is \"Priority inversion!\", but I see that as an additional \n> challenge to be surmounted rather than an absolute prohibition. So, \n> any thoughts on setting the PostgreSQL task priority (including by \n> the much-maligned tool shown at \n> <http://weblog.bignerdranch.com/?p=11>)?\n> \n> 2: Some variation of the Cost-Based Vacuum Delay. Hypothetically, \n> this would have the PostgreSQL task sleep() periodically while \n> processing the query, allowing the 3D renderer to continue working at \n> a reduced frame rate. My understanding, however, is that this only \n> works during VACUUM and ANALYZE commands, so it won't help during my \n> SELECT commands. So, any thoughts on using Cost-Based Vacuum Delay as \n> a Cost-Based Select Delay?\n> \n> 3: ... some other solution I haven't thought of.\n> \n> \n> Any thoughts, suggestions, ideas?\n> \n> \n> Thanks,\n> Dan\n> \n",
"msg_date": "Tue, 08 May 2007 14:10:10 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n> Or use a dual-core system. :-)\n\nAm I missing something?? There is just *one* instance of this idea in, \nwhat,\nfour replies?? I find it so obvious, and so obviously the only solution \nthat\nhas any hope to work, that it makes me think I'm missing something ...\n\nIs it that multiple PostgreSQL processes will end up monopolizing as many\nCPU cores as you give it? (ok, that would suck, for sure :-))\n\nIf there is a way to guarantee (or at least to encourage) that PG will \nnot use\nmore than one, or even two cores, then a quad-core machine looks like a\npromising solution... One thing feels kind of certain to me: the kind of\nsystem that the OP describes has a most-demanding need for *extremely\nhigh* CPU power --- multi-core, or multi-CPU, would seem the better\nsolution anyway, since it promotes responsiveness more than raw CPU\npower.\n\nCarlos\n--\n\n",
"msg_date": "Tue, 08 May 2007 18:32:14 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "Carlos Moreno wrote:\n> Steinar H. Gunderson wrote:\n>> Or use a dual-core system. :-)\n> \n> Am I missing something?? There is just *one* instance of this idea in, \n> what,\n> four replies?? I find it so obvious, and so obviously the only solution \n> that\n> has any hope to work, that it makes me think I'm missing something ...\n> \n> Is it that multiple PostgreSQL processes will end up monopolizing as many\n> CPU cores as you give it? (ok, that would suck, for sure :-))\n\n\nPostgreSQL is process based, so if you have one query that is eating a \nlot of cpu, it is only one cpu... you would have another for your render \nto run on.\n\nJoshua D. Drake\n\n> \n> If there is a way to guarantee (or at least to encourage) that PG will \n> not use\n> more than one, or even two cores, then a quad-core machine looks like a\n> promising solution... One thing feels kind of certain to me: the kind of\n> system that the OP describes has a most-demanding need for *extremely\n> high* CPU power --- multi-core, or multi-CPU, would seem the better\n> solution anyway, since it promotes responsiveness more than raw CPU\n> power.\n> \n> Carlos\n> -- \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Tue, 08 May 2007 15:39:55 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "On Tue, May 08, 2007 at 06:32:14PM -0400, Carlos Moreno wrote:\n>> Or use a dual-core system. :-)\n> Am I missing something?? There is just *one* instance of this idea in,\n> what, four replies?? I find it so obvious, and so obviously the only\n> solution that has any hope to work, that it makes me think I'm missing\n> something ...\n\nActually, it should be added that this suggestion was only partially\ntongue-in-cheek. I wrote a 3D application as part of an internship a couple\nof years ago, and it had a problem that worked vaguely like the given\nscenario: Adding a background task (in this case the task that loaded in new\npieces of terrain) would kill the framerate for the user, but nicing down\n(actually, down-prioritizing, as this was on Windows) the back-end would\nstarve it completely of cycles. The solution was to just define that this\nwould only be run on multiprocessor systems, where both tasks would chug\nalong nicely :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 9 May 2007 00:40:05 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Am I missing something?? There is just *one* instance of this idea \n> in, what,\n>> four replies?? I find it so obvious, and so obviously the only \n>> solution that\n>> has any hope to work, that it makes me think I'm missing something ...\n>>\n>> Is it that multiple PostgreSQL processes will end up monopolizing as \n>> many\n>> CPU cores as you give it? (ok, that would suck, for sure :-))\n>\n> PostgreSQL is process based, so if you have one query that is eating a \n> lot of cpu, it is only one cpu... you would have another for your \n> render to run on.\n\nThere is still the issue that there could be several (many?) queries \nrunning\nconcurrently --- but that's much easier to control at the application \nlevel;\nso maybe simply using a multi-CPU/multi-core hardware would be the\nsimplest solution?\n\nCarlos\n--\n\n",
"msg_date": "Tue, 08 May 2007 18:46:42 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "Thanks for all the feedback. Unfortunately I didn't specify that this \nis running on a WinXP machine (the 3D renderer is an ActiveX plugin), \nand I don't even think \"nice\" is available. I've tried using the \nWindows Task Manager to set every postgres.exe process to a low \npriority, but that didn't make a difference.\n\nSeveral people have mentioned having multiple processors; my current \nmachine is a uni-processor machine, but I believe we could spec the \nactual runtime machine to have multiple processors/cores. I'm only \nrunning one query at a time; would that query be guaranteed to \nconfine itself to a single processor/core?\n\nIn terms of performance, I don't think simply more power will do the \ntrick; I've got an AMD 3200+, and even doubling the power/halving the \nstutter time won't be good enough.\n\nSomeone suggested \"setting the CPU costs higher\"; where would I learn \nabout that?\n\nSomeone else mentioned having a custom function that sleep()ed on \nevery row access; where would I learn more about that?\n\nI've also been reading up on VACUUM. I haven't explicitly run it in \nthe several days since I've installed the database (by loading a \nhumongous data.sql file); might this be part of the performance \nproblem?\n\n\nThanks again,\nDan\n\n-- \nDaniel T. Griscom [email protected]\nSuitable Systems http://www.suitable.com/\n1 Centre Street, Suite 204 (781) 665-0053\nWakefield, MA 01880-2400\n",
"msg_date": "Tue, 8 May 2007 19:03:17 -0400",
"msg_from": "Daniel Griscom <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "On Tue, May 08, 2007 at 07:03:17PM -0400, Daniel Griscom wrote:\n> I'm only running one query at a time; would that query be guaranteed to\n> confine itself to a single processor/core?\n\nYes; at least it won't be using two at a time. (Postgres can't guarantee that\nWindows won't move it to another core, of course.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 9 May 2007 01:09:55 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "On Tue, 8 May 2007, Daniel Griscom wrote:\n\n> Thanks for all the feedback. Unfortunately I didn't specify that this is \n> running on a WinXP machine (the 3D renderer is an ActiveX plugin), and I \n> don't even think \"nice\" is available. I've tried using the Windows Task \n> Manager to set every postgres.exe process to a low priority, but that didn't \n> make a difference.\n>\n> Several people have mentioned having multiple processors; my current machine \n> is a uni-processor machine, but I believe we could spec the actual runtime \n> machine to have multiple processors/cores. I'm only running one query at a \n> time; would that query be guaranteed to confine itself to a single \n> processor/core?\n>\n> In terms of performance, I don't think simply more power will do the trick; \n> I've got an AMD 3200+, and even doubling the power/halving the stutter time \n> won't be good enough.\n>\n> Someone suggested \"setting the CPU costs higher\"; where would I learn about \n> that?\n>\n> Someone else mentioned having a custom function that sleep()ed on every row \n> access; where would I learn more about that?\n>\n> I've also been reading up on VACUUM. I haven't explicitly run it in the \n> several days since I've installed the database (by loading a humongous \n> data.sql file); might this be part of the performance problem?\n\nit would cause postgres to work harder then it needs to, but it doesn't \nsolve the problem of postgres eating cpu that you need for your rendering \n(i.e. it may reduce the stutters, but won't eliminate them)\n\na single query will confine itself to one core, but if you have a vaccum \nor autovaccum run it will affect the second core.\n\nI don't know what you can do on windows beyond this though.\n\nDavid Lang\n\nP.S. make sure you get real multi-core cpu's, hyperthreading is _not_ a \nsecond core for this problem.\n",
"msg_date": "Tue, 8 May 2007 16:21:21 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "Daniel Griscom wrote:\n>\n> Several people have mentioned having multiple processors; my current \n> machine is a uni-processor machine, but I believe we could spec the \n> actual runtime machine to have multiple processors/cores. \n\nMy estimate is that yes, you should definitely consider that.\n\n> I'm only running one query at a time; would that query be guaranteed \n> to confine itself to a single processor/core?\n\n From what Joshua mentions, looks like you do have that guarantee.\n\n>\n> In terms of performance, I don't think simply more power will do the \n> trick; I've got an AMD 3200+, and even doubling the power/halving the \n> stutter time won't be good enough.\n\nAs I mentioned, the important thing is not really raw CPU power as\nmuch as *responsiveness* --- that is what IMO the multi-core/multi-CPU\nboosts the most. The thing is, to guarantee the required responsiveness\nwith a single-CPU-single-core you would have to increase the CPU\nspeed (the real speed --- operations per second) by a fraction \nspectacularly\nhigh, so that you that guarantee that the rendering will have the CPU\nsoon enough ... When maybe with even less raw CPU power, but\nhaving always one of them ready to process the second task, you\nreduce the latency to pretty much zero.\n\n> Someone suggested \"setting the CPU costs higher\"; where would I learn \n> about that?\n\nDocumentation --- look up the postgresql.conf file.\n\n> Someone else mentioned having a custom function that sleep()ed on \n> every row access; where would I learn more about that?\n\nDocumentation for server-side procedures (PL/PgSQL --- although as I\nunderstand it, you can write server-side procedures in C, Perl, Java, and\nothers).\n\n> I've also been reading up on VACUUM. I haven't explicitly run it in \n> the several days since I've installed the database (by loading a \n> humongous data.sql file); might this be part of the performance problem?\n\nI think it may depend on the version you're running --- but definitely, you\ndo want to run vacuum analyze (notice, not simply vacuum; you want a\nvacuum analyze) often, and definitely after loading up a lot of new data.\n(that is, you definitely want to run a vacuum analyze right away --- that\nis, at the earliest opportunity)\n\nHTH,\n\nCarlos\n--\n\n",
"msg_date": "Tue, 08 May 2007 19:22:04 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
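To make the "setting the CPU costs higher" pointer concrete, these are the planner parameters involved; the values below are purely illustrative (roughly 10x the shipped defaults), not recommendations:

    SHOW cpu_tuple_cost;
    SHOW cpu_index_tuple_cost;
    SHOW cpu_operator_cost;

    -- Raising them makes CPU-heavy plans look more expensive relative to I/O,
    -- which can push the planner toward less CPU-intensive alternatives.
    SET cpu_tuple_cost = 0.1;
    SET cpu_index_tuple_cost = 0.05;
    SET cpu_operator_cost = 0.025;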
{
"msg_contents": "On Tue, 8 May 2007, Carlos Moreno wrote:\n\n> Daniel Griscom wrote:\n>>\n>> Several people have mentioned having multiple processors; my current\n>> machine is a uni-processor machine, but I believe we could spec the actual\n>> runtime machine to have multiple processors/cores. \n>\n> My estimate is that yes, you should definitely consider that.\n>\n>> I'm only running one query at a time; would that query be guaranteed to\n>> confine itself to a single processor/core?\n>\n> From what Joshua mentions, looks like you do have that guarantee.\n\nisn't there a way to limit how many processes postgres will create?\n\nif this is limited to 1, what happens when a vaccum run hits (or \nautovaccum)\n\nDavid Lang\n\n",
"msg_date": "Tue, 8 May 2007 16:24:08 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "You can use the workload management feature that we've contributed to\nBizgres. That allows you to control the level of statement concurrency by\nestablishing queues and associating them with roles.\n\nThat would provide the control you are seeking.\n\n- Luke\n\n\nOn 5/8/07 4:24 PM, \"[email protected]\" <[email protected]> wrote:\n\n> On Tue, 8 May 2007, Carlos Moreno wrote:\n> \n>> Daniel Griscom wrote:\n>>> \n>>> Several people have mentioned having multiple processors; my current\n>>> machine is a uni-processor machine, but I believe we could spec the actual\n>>> runtime machine to have multiple processors/cores.\n>> \n>> My estimate is that yes, you should definitely consider that.\n>> \n>>> I'm only running one query at a time; would that query be guaranteed to\n>>> confine itself to a single processor/core?\n>> \n>> From what Joshua mentions, looks like you do have that guarantee.\n> \n> isn't there a way to limit how many processes postgres will create?\n> \n> if this is limited to 1, what happens when a vaccum run hits (or\n> autovaccum)\n> \n> David Lang\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Tue, 08 May 2007 21:09:59 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "> Thanks for all the feedback. Unfortunately I didn't specify that this \n> is running on a WinXP machine (the 3D renderer is an ActiveX plugin), \n> and I don't even think \"nice\" is available. I've tried using the \n> Windows Task Manager to set every postgres.exe process to a low \n> priority, but that didn't make a difference.\n\nAre you sure you're actually cpu limited? The windows schedules is actually pretty good at down shifting like that. It sounds like you might be i/o bound \ninstead. Especially if you're on ide disks in this machine.\n\n> Several people have mentioned having multiple processors; my current \n> machine is a uni-processor machine, but I believe we could spec the \n> actual runtime machine to have multiple processors/cores. I'm only \n> running one query at a time; would that query be guaranteed to \n> confine itself to a single processor/core?\n\nYes. Background processes can run on the other, like the background writer. They normally don't use a lot of cpu. You can avoid that as well by setting the cpu \naffinity on pg_ctl or postmaster.\n\n\n> In terms of performance, I don't think simply more power will do the \n> trick; I've got an AMD 3200+, and even doubling the power/halving the \n> stutter time won't be good enough.\n\nAgain, make sure cpu really is the problem.\n\n/Magnus\n\n",
"msg_date": "Wed, 09 May 2007 07:56:40 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "Thanks again for all the feedback. Running on a dual processor/core \nmachine is clearly a first step, and I'll look into the other \nsuggestions as well.\n\n\nThanks,\nDan\n\n-- \nDaniel T. Griscom [email protected]\nSuitable Systems http://www.suitable.com/\n1 Centre Street, Suite 204 (781) 665-0053\nWakefield, MA 01880-2400\n",
"msg_date": "Wed, 9 May 2007 09:49:31 -0400",
"msg_from": "Daniel Griscom <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
},
{
"msg_contents": "Daniel Griscom wrote:\n> Thanks again for all the feedback. Running on a dual processor/core \n> machine is clearly a first step, and I'll look into the other \n> suggestions as well.\n\nAs per one of the last suggestions, do consider as well putting a dual \nhard disk\n(as in, independent hard disks, to allow for simultaneous access to \nboth). That\nway, you can send the WAL (px_log directory) to a separate physical drive\nand improve performance (reduce the potential for bottlenecks) if there \nis a\nconsiderable amount of writes. With Windows, you can mount specific\ndirectories to given HD partitions --- that would do the trick.\n\nAlso, of course, do make sure that you give it a generous amount of RAM, so\nthat an as-large-as-possible fraction of the read operations are done \ndirectly\noff the machine's memory.\n\nBTW, have you considered using a *separate* machine for PostgreSQL?\n(that way this machine could be running on Linux or some Unix flavor,\nand the Windows machine is dedicated to the ActiveX rendering stuff).\nI mean, if you are going to get a new machine because you need to replace\nit, you might as well get a new machine not as powerful, since now you\nwill have the dual-CPU given by the fact that there are two machines.\n\nGood luck!\n\nCarlos\n--\n\n",
"msg_date": "Wed, 09 May 2007 10:29:56 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Throttling PostgreSQL's CPU usage"
}
] |
[
{
"msg_contents": "I am trying to follow a message thread. One guy says we should be running\nvacuum analyze daily and the other says we should be running vacuum multiple\ntimes a day. I have tried looking for what a vacuum analyze is to help me\nunderstand but no luck.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nI am trying to follow a message thread. One guy says we should be\nrunning vacuum analyze daily and the other says we should be running\nvacuum multiple times a day. I have tried looking for what a vacuum\nanalyze is to help me understand but no luck.-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Tue, 8 May 2007 14:43:00 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "What's The Difference Between VACUUM and VACUUM ANALYZE?"
},
{
"msg_contents": "On Tue, 8 May 2007, Y Sidhu wrote:\n\n> I am trying to follow a message thread. One guy says we should be running\n> vacuum analyze daily and the other says we should be running vacuum multiple\n> times a day. I have tried looking for what a vacuum analyze is to help me\n> understand but no luck.\n\nvaccum frees tuples that are no longer refrenced\nvaccum analyse does the same thing, but then does some additional \ninformation gathering about what data is in the tables Postgres uses this \ndata to adjust it's estimates of how long various things will take \n(sequential scan, etc). if these estimates are off by a huge amount \n(especially if you have never done a vaccum analyse after loading your \ntable) then it's very likely that postgres will be slow becouse it's doing \nexpensive operations that it thinks are cheap.\n\nDavid Lang\n",
"msg_date": "Tue, 8 May 2007 14:46:36 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: What's The Difference Between VACUUM and VACUUM ANALYZE?"
},
{
"msg_contents": "Y Sidhu escribi�:\n> I am trying to follow a message thread. One guy says we should be running\n> vacuum analyze daily and the other says we should be running vacuum multiple\n> times a day. I have tried looking for what a vacuum analyze is to help me\n> understand but no luck.\n\nVACUUM ANALYZE is like VACUUM, except that it also runs an ANALYZE\nafterwards.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 8 May 2007 17:52:13 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's The Difference Between VACUUM and VACUUM ANALYZE?"
},
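In other words, for a single table the two forms below do essentially the same work (sketch; the table name is made up):

    -- Two separate steps: reclaim dead tuples, then refresh planner statistics.
    VACUUM mytable;
    ANALYZE mytable;

    -- The combined form:
    VACUUM ANALYZE mytable;

    -- VERBOSE shows what each pass actually did:
    VACUUM VERBOSE ANALYZE mytable;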
{
"msg_contents": "On Tue, May 08, 2007 at 05:52:13PM -0400, Alvaro Herrera wrote:\n>> I am trying to follow a message thread. One guy says we should be running\n>> vacuum analyze daily and the other says we should be running vacuum multiple\n>> times a day. I have tried looking for what a vacuum analyze is to help me\n>> understand but no luck.\n> VACUUM ANALYZE is like VACUUM, except that it also runs an ANALYZE\n> afterwards.\n\nShoot me if I'm wrong here, but doesn't VACUUM ANALYZE check _all_ tuples,\nas compared to the random selection employed by ANALYZE?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 9 May 2007 00:06:08 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's The Difference Between VACUUM and VACUUM ANALYZE?"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Tue, May 08, 2007 at 05:52:13PM -0400, Alvaro Herrera wrote:\n> >> I am trying to follow a message thread. One guy says we should be running\n> >> vacuum analyze daily and the other says we should be running vacuum multiple\n> >> times a day. I have tried looking for what a vacuum analyze is to help me\n> >> understand but no luck.\n> > VACUUM ANALYZE is like VACUUM, except that it also runs an ANALYZE\n> > afterwards.\n> \n> Shoot me if I'm wrong here, but doesn't VACUUM ANALYZE check _all_ tuples,\n> as compared to the random selection employed by ANALYZE?\n\nYou are wrong, but it won't be me the one to shoot you.\n\nThere have been noises towards making the ANALYZE portion use the same\nscan that VACUUM already does, but nobody has written the code (it would\nbe useful for some kinds of stats).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 8 May 2007 18:21:02 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's The Difference Between VACUUM and VACUUM ANALYZE?"
},
{
"msg_contents": "\"Alvaro Herrera\" <[email protected]> writes:\n\n> Steinar H. Gunderson wrote:\n>\n>> Shoot me if I'm wrong here, but doesn't VACUUM ANALYZE check _all_ tuples,\n>> as compared to the random selection employed by ANALYZE?\n>\n> You are wrong, but it won't be me the one to shoot you.\n>\n> There have been noises towards making the ANALYZE portion use the same\n> scan that VACUUM already does, but nobody has written the code (it would\n> be useful for some kinds of stats).\n\nI think it does for the count of total records in the table. \nBut not for the rest of the stats.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Wed, 09 May 2007 12:31:11 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What's The Difference Between VACUUM and VACUUM ANALYZE?"
}
] |
[
{
"msg_contents": "Hi Everybody,\n\nI was trying to see how many inserts per seconds my application could\nhandle on various machines.\n\nThose are the machines I used to run my app:\n\n \n\n1) Pentium M 1.7Ghz\n\n2) Pentium 4 2.4 Ghz\n\n3) DMP Xeon 3Ghz\n\n \n\nSure, I was expecting the dual Zeon to outperform the Pentium M and 4.\nBut the data showed the opposite.\n\nSo, I wrote a simple program (in C) using the libpq.so.5 which opens a\nconnection to the database (DB in localhost), \n\nCreates a Prepared statement for the insert and does a 10,000 insert.\nThe result did not change.\n\n \n\nOnly after setting fsync to off in the config file, the amount of time\nto insert 10,000 records was acceptable.\n\n \n\nHere is the data:\n\n \n\nTime for 10000 inserts\n\nFsync=on\n\nFsync=off\n\nPentium M 1.7\n\n~17 sec\n\n~6 sec\n\nPentium 4 2.4\n\n~13 sec\n\n~11 sec\n\nDual Xeon\n\n~65 sec\n\n~1.9 sec\n\n \n\nI read that postgres does have issues with MP Xeon (costly context\nswitching). But I still think that with fsync=on 65 seconds is\nridiculous. \n\n \n\nCan anybody direct me to some improved/acceptable performance with\nfsync=on?\n\n \n\nThx,\n\n \n\nOrhan a.\n\n\n\n\n\n\n\n\n\n\nHi Everybody,\nI was trying to see how many inserts per seconds my\napplication could handle on various machines.\nThose are the machines I used to run my app:\n \n1) Pentium M\n1.7Ghz\n2) Pentium 4\n2.4 Ghz\n3) DMP Xeon 3Ghz\n \nSure, I was expecting the dual Zeon to outperform the\nPentium M and 4. But the data showed the opposite.\nSo, I wrote a simple program (in C) using the libpq.so.5\nwhich opens a connection to the database (DB in localhost), \nCreates a Prepared statement for the insert and does a 10,000\ninsert. The result did not change.\n \nOnly after setting fsync to off in the config file, the amount\nof time to insert 10,000 records was acceptable.\n \nHere is the data:\n \n\n\n\nTime for 10000 inserts\n\n\nFsync=on\n\n\nFsync=off\n\n\n\n\nPentium M 1.7\n\n\n~17 sec\n\n\n~6 sec\n\n\n\n\nPentium 4 2.4\n\n\n~13 sec\n\n\n~11 sec\n\n\n\n\nDual Xeon\n\n\n~65 sec\n\n\n~1.9 sec\n\n\n\n \nI read that postgres does have issues with MP Xeon (costly context\nswitching). But I still think that with fsync=on 65 seconds is ridiculous. \n \nCan anybody direct me to some improved/acceptable\n performance with fsync=on?\n \nThx,\n \nOrhan a.",
"msg_date": "Tue, 8 May 2007 18:59:13 -0400",
"msg_from": "\"Orhan Aglagul\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "Orhan Aglagul wrote:\n> Hi Everybody,\n> \n> I was trying to see how many inserts per seconds my application could \n> handle on various machines.\n>\n> \n> I read that postgres does have issues with MP Xeon (costly context \n> switching). But I still think that with fsync=on 65 seconds is ridiculous.\n\nCPU is unlikely your bottleneck.. You failed to mention anything about your I/O \nsetup. More details in this regard will net you better responses. However, an \narchive search for insert performance will probably be worthwhile, since this \ntype of question is repeated about once a month.\n\n\n\n",
"msg_date": "Tue, 08 May 2007 17:48:59 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Dan Harris wrote:\n> Orhan Aglagul wrote:\n>> Hi Everybody,\n>>\n>> I was trying to see how many inserts per seconds my application could \n>> handle on various machines.\n>>\n>>\n>> I read that postgres does have issues with MP Xeon (costly context \n>> switching). But I still think that with fsync=on 65 seconds is \n>> ridiculous.\n> \n> CPU is unlikely your bottleneck.. You failed to mention anything about \n> your I/O setup. More details in this regard will net you better \n> responses. However, an archive search for insert performance will \n> probably be worthwhile, since this type of question is repeated about \n> once a month.\n\nHe also fails to mention if he is doing the inserts one at a time or as \nbatch.\n\n\nJoshua D. Drake\n\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Tue, 08 May 2007 17:05:15 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Joshua D. Drake wrote:\n>\n>> CPU is unlikely your bottleneck.. You failed to mention anything \n>> about your I/O setup. [...]\n>\n> He also fails to mention if he is doing the inserts one at a time or \n> as batch.\n\nWould this really be important? I mean, would it affect a *comparison*?? \nAs long as he does it the same way for all the hardware setups, seems ok\nto me.\n\nCarlos\n--\n\n",
"msg_date": "Tue, 08 May 2007 20:20:29 -0400",
"msg_from": "Carlos Moreno <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "On Tue, 2007-05-08 at 17:59, Orhan Aglagul wrote:\n> Hi Everybody,\n> \n> I was trying to see how many inserts per seconds my application could\n> handle on various machines.\n> \n> Those are the machines I used to run my app:\n> \n> \n> \n> 1) Pentium M 1.7Ghz\n> \n> 2) Pentium 4 2.4 Ghz\n> \n> 3) DMP Xeon 3Ghz\n> \n> \n> \n> Sure, I was expecting the dual Zeon to outperform the Pentium M and 4.\n> But the data showed the opposite.\n> \n> So, I wrote a simple program (in C) using the libpq.so.5 which opens a\n> connection to the database (DB in localhost), \n> \n> Creates a Prepared statement for the insert and does a 10,000 insert.\n> The result did not change.\n> \n> \n> \n> Only after setting fsync to off in the config file, the amount of time\n> to insert 10,000 records was acceptable.\n> \n> \n> \n> Here is the data:\n> \n> \n> \n> Time for 10000 inserts\n> \n> Fsync=on\n> \n> Fsync=off\n> \n> Pentium M 1.7\n> \n> ~17 sec\n> \n> ~6 sec\n> \n> Pentium 4 2.4\n> \n> ~13 sec\n> \n> ~11 sec\n> \n> Dual Xeon\n> \n> ~65 sec\n> \n> ~1.9 sec\n> \n> \n> \n> \n> I read that postgres does have issues with MP Xeon (costly context\n> switching). But I still think that with fsync=on 65 seconds is\n> ridiculous. \n> \n> \n> \n> Can anybody direct me to some improved/acceptable performance with\n> fsync=on?\n\nI'm guessing you didn't do the inserts inside a single transaction,\nwhich means that each insert was it's own transaction.\n\nTry doing them all in a transaction. I ran this simple php script:\n\n<?php\n$conn = pg_connect(\"dbname=smarlowe\");\npg_query(\"begin\");\nfor ($i=0;$i<10000;$i++){\n $r = rand(1,10000000);\n pg_query(\"insert into tenk (i1) values ($r)\");\n}\npq_query(\"commit\");\n?>\n\nand it finished in 3.5 seconds on my workstation (nothing special)\n",
"msg_date": "Tue, 08 May 2007 19:22:45 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
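The same batching idea expressed in plain SQL, for anyone testing from psql rather than PHP; the table matches Scott's example and the values are placeholders:

    BEGIN;
    PREPARE ins(int) AS INSERT INTO tenk (i1) VALUES ($1);
    EXECUTE ins(123);
    EXECUTE ins(456);
    -- ... repeat for the remaining rows ...
    COMMIT;                 -- a single fsync here instead of one per row
    DEALLOCATE ins;

    -- Or, purely as a bulk-load test, generate the rows server-side:
    BEGIN;
    INSERT INTO tenk (i1)
        SELECT (random() * 10000000)::int FROM generate_series(1, 10000);
    COMMIT;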
{
"msg_contents": "On Tue, 2007-05-08 at 17:59, Orhan Aglagul wrote:\n> Hi Everybody,\n> \n> I was trying to see how many inserts per seconds my application could\n> handle on various machines.\n\n> \n> Here is the data:\n> \n> \n> \n> Time for 10000 inserts\n> \n> Fsync=on\n> \n> Fsync=off\n> \n> Pentium M 1.7\n> \n> ~17 sec\n> \n> ~6 sec\n> \n> Pentium 4 2.4\n> \n> ~13 sec\n> \n> ~11 sec\n> \n> Dual Xeon\n> \n> ~65 sec\n> \n> ~1.9 sec\n> \n> \n> \n\nIn addition to my previous post, if you see that big a change between\nfsync on and off, you likely have a drive subsystem that is actually\nreporting fsync properly.\n\nThe other two machines are lying. Or they have a battery backed caching\nraid controller\n",
"msg_date": "Tue, 08 May 2007 19:30:42 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Forgot to reply to the mailing list..... Sorry (new here)\nHere are responses to previous questions....\n\n-----Original Message-----\nFrom: Orhan Aglagul \nSent: Tuesday, May 08, 2007 5:30 PM\nTo: 'Joshua D. Drake'\nSubject: RE: [PERFORM]\n\nI am using a prepared statement and inserting in a loop 10,000 records. \nI need the data real time, so I am not using batch inserts. I have to\nrun each insert as a separate transaction.... \nI am running the app on a RH EL4 (Kernel 2.6.20). \nIn fact my CPU usage is too low when running the app with fsync=off. \n\nHere is the output of vmstat during the test:\nFirst 10 lines:\n\n\nr b swpd free buff cache si so bi bo in cs us sy\nid wa\n 0 1 0 1634144 21828 234752 0 0 32 408 210 404 0 0\n90 9\n 0 1 0 1634020 21828 234816 0 0 0 1404 538 1879 0 0\n50 50\n 0 1 0 1633896 21828 234940 0 0 0 1400 525 1849 0 0\n50 49\n 0 1 0 1633772 21828 235048 0 0 0 1412 537 1878 0 0\n50 50\n 0 1 0 1633648 21832 235168 0 0 0 1420 531 1879 0 0\n50 50\n 0 1 0 1633524 21840 235280 0 0 0 1420 535 1884 0 0\n50 50\n 0 1 0 1633524 21844 235400 0 0 0 1396 535 1718 0 0\n50 50\n 0 1 0 1633524 21848 235524 0 0 0 1536 561 1127 0 0\n50 50\n 0 1 0 1633524 21852 235644 0 0 0 1412 557 1390 0 0\n50 50\n 0 1 0 1633268 21860 235728 0 0 0 1408 582 1393 0 0\n50 50\n 0 1 0 1633268 21868 235844 0 0 0 1424 548 1377 1 4\n50 45\n 1 0 0 1633144 21876 235968 0 0 0 1404 548 1394 14 4\n48 34\n 0 1 0 1633020 21884 236084 0 0 0 1420 540 1374 5 0\n50 46\n...\n\nThe logical volume is an ext3 file system. That's where all the database\nfiles reside. (No hardware optimization done). \n\nSorry for the delay,\nThanks..\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Joshua D.\nDrake\nSent: Tuesday, May 08, 2007 5:05 PM\nTo: Dan Harris\nCc: PostgreSQL Performance\nSubject: Re: [PERFORM]\n\nDan Harris wrote:\n> Orhan Aglagul wrote:\n>> Hi Everybody,\n>>\n>> I was trying to see how many inserts per seconds my application could\n\n>> handle on various machines.\n>>\n>>\n>> I read that postgres does have issues with MP Xeon (costly context \n>> switching). But I still think that with fsync=on 65 seconds is \n>> ridiculous.\n> \n> CPU is unlikely your bottleneck.. You failed to mention anything\nabout \n> your I/O setup. More details in this regard will net you better \n> responses. However, an archive search for insert performance will \n> probably be worthwhile, since this type of question is repeated about \n> once a month.\n\nHe also fails to mention if he is doing the inserts one at a time or as \nbatch.\n\n\nJoshua D. Drake\n\n> \n> \n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n",
"msg_date": "Tue, 8 May 2007 21:12:16 -0400",
"msg_from": "\"Orhan Aglagul\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: "
},
{
"msg_contents": "On Tue, 8 May 2007, Orhan Aglagul wrote:\n\n> Time for 10000 inserts\n> Pentium M 1.7\n> ~17 sec fsync=on\n> ~6 sec fsync=off\n\nThis is 588 inserts/second with fsync on. It's impossible to achieve that \nwithout write caching at either the controller or hard drive. My bet \nwould be that your hard drive in this system is a regular IDE/SATA drive \nthat has write caching enabled, which is the normal case. That means this \nsystem doesn't really do a fsync when you tell it to.\n\n> Pentium 4 2.4\n> ~13 sec fsync=on\n> ~11 sec fsync=off\n\nSame response here. Odds are good the fsync=on numbers here are a \nfantasy; unless you have some serious disk hardware in this server, it \ncan't really be doing an fsync and giving this performance level.\n\n> Dual Xeon\n> ~65 sec fsync=on\n> ~1.9 sec fsync=off\n\nNow this looks reasonable. 5263/second with fsync off, 154/second with it \non. This system appears to have hard drives in it that correctly write \ndata out when asked to via the fsync mechanism. I would bet this one is a \nserver that has some number of 10,000 RPM SCSI drives in it. Such a drive \ngives a theoretical maximum of 166.7 inserts/second if the inserts are \ndone one at a time.\n\nIf this all is confusing to you, I have written a long primer on this \nsubject that explains how the interaction between the PostgreSQL, fsync, \nand the underlying drives work. If you have the patience to work your way \nthrough it and follow the references along the way, I think you'll find \nthe results you've been seeing will make more sense, and you'll be in a \nbetter position to figure out what you should do next:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 9 May 2007 00:14:42 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
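Spelling out the arithmetic behind that 166.7 figure: a 10,000 RPM drive completes 10,000 / 60 ≈ 166.7 revolutions per second, and a drive that honestly waits for fsync can complete at most about one single-session commit per revolution, so roughly 166 one-row transactions per second is the ceiling. The roughly 153 per second measured on the Dual Xeon (10,000 inserts in ~65 seconds) sits just under it, which supports Greg's reading.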
{
"msg_contents": "On Tuesday 08 May 2007 20:20, Carlos Moreno wrote:\n> Joshua D. Drake wrote:\n> >> CPU is unlikely your bottleneck.. You failed to mention anything\n> >> about your I/O setup. [...]\n> >\n> > He also fails to mention if he is doing the inserts one at a time or\n> > as batch.\n>\n> Would this really be important? I mean, would it affect a *comparison*??\n> As long as he does it the same way for all the hardware setups, seems ok\n> to me.\n>\n\nSure. He looks i/o bound, and single inserts vs. batch inserts will skew \nresults even further depending on which way your doing it. \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Wed, 09 May 2007 00:58:00 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "\nYes, I did not do it in one transaction. \nAll 3 machines are configured with the same OS and same version\npostgres.\nNo kernel tweaking and no postgres tweaking done (except the fsync)...\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, May 08, 2007 5:23 PM\nTo: Orhan Aglagul\nCc: [email protected]\nSubject: Re: [PERFORM]\n\nOn Tue, 2007-05-08 at 17:59, Orhan Aglagul wrote:\n> Hi Everybody,\n> \n> I was trying to see how many inserts per seconds my application could\n> handle on various machines.\n> \n> Those are the machines I used to run my app:\n> \n> \n> \n> 1) Pentium M 1.7Ghz\n> \n> 2) Pentium 4 2.4 Ghz\n> \n> 3) DMP Xeon 3Ghz\n> \n> \n> \n> Sure, I was expecting the dual Zeon to outperform the Pentium M and 4.\n> But the data showed the opposite.\n> \n> So, I wrote a simple program (in C) using the libpq.so.5 which opens a\n> connection to the database (DB in localhost), \n> \n> Creates a Prepared statement for the insert and does a 10,000 insert.\n> The result did not change.\n> \n> \n> \n> Only after setting fsync to off in the config file, the amount of time\n> to insert 10,000 records was acceptable.\n> \n> \n> \n> Here is the data:\n> \n> \n> \n> Time for 10000 inserts\n> \n> Fsync=on\n> \n> Fsync=off\n> \n> Pentium M 1.7\n> \n> ~17 sec\n> \n> ~6 sec\n> \n> Pentium 4 2.4\n> \n> ~13 sec\n> \n> ~11 sec\n> \n> Dual Xeon\n> \n> ~65 sec\n> \n> ~1.9 sec\n> \n> \n> \n> \n> I read that postgres does have issues with MP Xeon (costly context\n> switching). But I still think that with fsync=on 65 seconds is\n> ridiculous. \n> \n> \n> \n> Can anybody direct me to some improved/acceptable performance with\n> fsync=on?\n\nI'm guessing you didn't do the inserts inside a single transaction,\nwhich means that each insert was it's own transaction.\n\nTry doing them all in a transaction. I ran this simple php script:\n\n<?php\n$conn = pg_connect(\"dbname=smarlowe\");\npg_query(\"begin\");\nfor ($i=0;$i<10000;$i++){\n $r = rand(1,10000000);\n pg_query(\"insert into tenk (i1) values ($r)\");\n}\npq_query(\"commit\");\n?>\n\nand it finished in 3.5 seconds on my workstation (nothing special)\n",
"msg_date": "Tue, 8 May 2007 21:13:22 -0400",
"msg_from": "\"Orhan Aglagul\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: "
}
] |
[
{
"msg_contents": "\n\n-----Original Message-----\nFrom: Orhan Aglagul \nSent: Tuesday, May 08, 2007 5:37 PM\nTo: 'Scott Marlowe'\nSubject: RE: [PERFORM]\n\nBut 10,000 records in 65 sec comes to ~153 records per second. On a dual\n3.06 Xeon....\nWhat range is acceptable?\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, May 08, 2007 5:31 PM\nTo: Orhan Aglagul\nCc: [email protected]\nSubject: Re: [PERFORM]\n\nOn Tue, 2007-05-08 at 17:59, Orhan Aglagul wrote:\n> Hi Everybody,\n> \n> I was trying to see how many inserts per seconds my application could\n> handle on various machines.\n\n> \n> Here is the data:\n> \n> \n> \n> Time for 10000 inserts\n> \n> Fsync=on\n> \n> Fsync=off\n> \n> Pentium M 1.7\n> \n> ~17 sec\n> \n> ~6 sec\n> \n> Pentium 4 2.4\n> \n> ~13 sec\n> \n> ~11 sec\n> \n> Dual Xeon\n> \n> ~65 sec\n> \n> ~1.9 sec\n> \n> \n> \n\nIn addition to my previous post, if you see that big a change between\nfsync on and off, you likely have a drive subsystem that is actually\nreporting fsync properly.\n\nThe other two machines are lying. Or they have a battery backed caching\nraid controller\n",
"msg_date": "Tue, 8 May 2007 21:13:53 -0400",
"msg_from": "\"Orhan Aglagul\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: "
}
] |
[
{
"msg_contents": "\nNo, it is one transaction per insert. \n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, May 08, 2007 5:38 PM\nTo: Orhan Aglagul\nSubject: RE: [PERFORM]\n\nOn Tue, 2007-05-08 at 19:36, Orhan Aglagul wrote:\n> But 10,000 records in 65 sec comes to ~153 records per second. On a\ndual\n> 3.06 Xeon....\n> What range is acceptable?\n\nIf you're doing that in one big transaction, that's horrible. Because\nit shouldn't be waiting for each insert to fsync, but the whole\ntransaction.\n",
"msg_date": "Tue, 8 May 2007 21:14:14 -0400",
"msg_from": "\"Orhan Aglagul\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: "
},
{
"msg_contents": "On Tue, 8 May 2007, Orhan Aglagul wrote:\n\n> No, it is one transaction per insert.\n>\n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]]\n> Sent: Tuesday, May 08, 2007 5:38 PM\n> To: Orhan Aglagul\n> Subject: RE: [PERFORM]\n>\n> On Tue, 2007-05-08 at 19:36, Orhan Aglagul wrote:\n>> But 10,000 records in 65 sec comes to ~153 records per second. On a\n> dual\n>> 3.06 Xeon....\n>> What range is acceptable?\n>\n> If you're doing that in one big transaction, that's horrible. Because\n> it shouldn't be waiting for each insert to fsync, but the whole\n> transaction.\n\nwith a standard 7200 rpm drive ~150 transactions/sec sounds about right\n\nto really speed things up you want to get a disk controller with a battery \nbacked cache so that the writes don't need to hit the disk to be safe.\n\nthat should get your speeds up to (and possibly above) what you got by \nturning fsync off.\n\nDavid Lang\n",
"msg_date": "Tue, 8 May 2007 18:22:45 -0700 (PDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: FW: "
},
{
"msg_contents": "<[email protected]> writes:\n\n> with a standard 7200 rpm drive ~150 transactions/sec sounds about right\n>\n> to really speed things up you want to get a disk controller with a battery\n> backed cache so that the writes don't need to hit the disk to be safe.\n\nNote that this is only if you're counting transactions/sec in a single\nsession. You can get much more if you have many sessions since they can all\ncommit together in a single disk i/o.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Wed, 09 May 2007 12:36:19 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: "
}
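A hedged aside on encouraging that grouping: commit_delay and commit_siblings are the knobs PostgreSQL provides for it. The values below are illustrative only, and with a single session they change nothing:

    SET commit_delay = 100;      -- microseconds to wait for other committers before flushing WAL
    SET commit_siblings = 5;     -- only wait when at least this many other transactions are active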
] |
[
{
"msg_contents": "There's another odd thing about this plan from yesterday.\n\nQuery:\n\n SELECT\n eh_subj.header_body AS subject,\n count(distinct eh_from.header_body)\n FROM\n email JOIN mime_part USING (email_id)\n JOIN email_header eh_subj USING (email_id, mime_part_id)\n JOIN email_header eh_from USING (email_id, mime_part_id)\n WHERE\n eh_subj.header_name = 'subject'\n AND eh_from.header_name = 'from'\n AND mime_part_id = 0\n AND (time >= timestamp '2007-05-05 17:01:59' AND time < timestamp '2007-05-05 17:01:59' + interval '60 min')\n GROUP BY\n eh_subj.header_body;\n\nPlan:\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=1920309.81..1920534.21 rows=11220 width=184) (actual time=5349.493..5587.536 rows=35000 loops=1)\n -> Sort (cost=1920309.81..1920337.86 rows=11220 width=184) (actual time=5349.427..5392.110 rows=35000 loops=1)\n Sort Key: eh_subj.header_body\n -> Nested Loop (cost=15576.58..1919555.05 rows=11220 width=184) (actual time=537.938..5094.377 rows=35000 loops=1)\n -> Nested Loop (cost=15576.58..475387.23 rows=11020 width=120) (actual time=537.858..4404.330 rows=35000 loops=1)\n -> Nested Loop (cost=15576.58..430265.44 rows=11092 width=112) (actual time=537.768..4024.184 rows=35000 loops=1)\n -> Bitmap Heap Scan on email_header eh_from (cost=15576.58..16041.55 rows=107156 width=104) (actual time=537.621..1801.032 rows=280990 loops=1)\n Recheck Cond: ((mime_part_id = 0) AND (header_name = 'from'::text))\n -> BitmapAnd (cost=15576.58..15576.58 rows=160 width=0) (actual time=500.006..500.006 rows=0 loops=1)\n -> Bitmap Index Scan on dummy_index (cost=0.00..3724.22 rows=107156 width=0) (actual time=85.025..85.025 rows=280990 loops=1)\n -> Bitmap Index Scan on idx__email_header__from_local (cost=0.00..5779.24 rows=107156 width=0) (actual time=173.006..173.006 rows=280990 loops=1)\n -> Bitmap Index Scan on dummy2_index (cost=0.00..5992.25 rows=107156 width=0) (actual time=174.463..174.463 rows=280990 loops=1)\n -> Index Scan using email_pkey on email (cost=0.00..3.85 rows=1 width=8) (actual time=0.005..0.005 rows=0 loops=280990)\n Index Cond: (email.email_id = eh_from.email_id)\n Filter: ((\"time\" >= '2007-05-05 17:01:59'::timestamp without time zone) AND (\"time\" < '2007-05-05 18:01:59'::timestamp without time zone))\n -> Index Scan using mime_part_pkey on mime_part (cost=0.00..4.06 rows=1 width=12) (actual time=0.005..0.006 rows=1 loops=35000)\n Index Cond: ((email.email_id = mime_part.email_id) AND (mime_part.mime_part_id = 0))\n -> Index Scan using idx__email_header__email_id__mime_part_id on email_header eh_subj (cost=0.00..130.89 rows=13 width=104) (actual time=0.009..0.015 rows=1 loops=35000)\n Index Cond: ((email.email_id = eh_subj.email_id) AND (0 = eh_subj.mime_part_id))\n Filter: (header_name = 'subject'::text)\n Total runtime: 5625.024 ms\n\n\nI'm wondering what it wants to achieve with these three index scans:\n\n -> Bitmap Index Scan on dummy_index (cost=0.00..3724.22 rows=107156 width=0) (actual time=85.025..85.025 rows=280990 loops=1)\n -> Bitmap Index Scan on idx__email_header__from_local (cost=0.00..5779.24 rows=107156 width=0) (actual time=173.006..173.006 rows=280990 loops=1)\n -> Bitmap Index Scan on dummy2_index (cost=0.00..5992.25 rows=107156 width=0) (actual time=174.463..174.463 rows=280990 loops=1)\n\nThe indexes in question are:\n\nCREATE INDEX dummy_index ON email_header 
((555)) WHERE mime_part_id = 0 AND header_name = 'from';\nCREATE INDEX dummy2_index ON email_header (substr(header_body,5)) WHERE mime_part_id = 0 AND header_name = 'from';\nCREATE INDEX idx__email_header__from_local ON email_header (get_localpart(header_body)) WHERE mime_part_id = 0 AND header_name = 'from';\n\nIt appears to want to use these indexes to get the restriction\n\n AND eh_from.header_name = 'from'\n AND mime_part_id = 0\n\nfrom the query, but why does it need three of them to do it, when all\nof them have the same predicate and none of them has an indexed\nexpression that appears in the query?\n\nThere are more partial indexes with the same predicate, but it appears\nto always use three. (The two \"dummy\" indexes are just leftovers from\nthese experiments.)\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Wed, 9 May 2007 14:10:57 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Apparently useless bitmap scans"
},
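For anyone trying to reproduce the effect described above, a minimal sketch is a handful of partial indexes that share one predicate while indexing expressions the query never touches; everything below uses hypothetical demo names rather than the real schema, and the constant-expression index mirrors the "dummy" index from the message.

```sql
-- Hypothetical reproduction: three partial indexes with the same predicate,
-- none of whose indexed expressions appears in the query itself.
CREATE TABLE email_header_demo (
    email_id     bigint,
    mime_part_id integer,
    header_name  text,
    header_body  text
);
CREATE INDEX demo_dummy ON email_header_demo ((555))
    WHERE mime_part_id = 0 AND header_name = 'from';
CREATE INDEX demo_dummy2 ON email_header_demo (substr(header_body, 5))
    WHERE mime_part_id = 0 AND header_name = 'from';
CREATE INDEX demo_from_local ON email_header_demo (lower(header_body))
    WHERE mime_part_id = 0 AND header_name = 'from';
ANALYZE email_header_demo;

-- Only the predicate columns are restricted, so any single partial index
-- already covers the condition; pre-8.2.4 planners may still BitmapAnd
-- several of them, which is the behaviour being questioned here.
EXPLAIN ANALYZE
SELECT * FROM email_header_demo
WHERE mime_part_id = 0 AND header_name = 'from';
```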
{
"msg_contents": "Peter Eisentraut wrote:\n> There's another odd thing about this plan from yesterday.\n\nIs this still 8.2.1? The logic to choose bitmap indexes was rewritten\njust before 8.2.4,\n\n2007-04-17 16:03 tgl\n\n * src/backend/optimizer/path/indxpath.c:\n\nRewrite choose_bitmap_and() to make it more robust in the presence of\ncompeting alternatives for indexes to use in a bitmap scan. The former\ncoding took estimated selectivity as an overriding factor, causing it to\nsometimes choose indexes that were much slower to scan than ones with a\nslightly worse selectivity. It was also too narrow-minded about which\ncombinations of indexes to consider ANDing. The rewrite makes it pay more\nattention to index scan cost than selectivity; this seems sane since it's\nimpossible to have very bad selectivity with low cost, whereas the reverse\nisn't true. Also, we now consider each index alone, as well as adding\neach index to an AND-group led by each prior index, for a total of about\nO(N^2) rather than O(N) combinations considered. This makes the results\nmuch less dependent on the exact order in which the indexes are\nconsidered. It's still a lot cheaper than an O(2^N) exhaustive search.\nA prefilter step eliminates all but the cheapest of those indexes using\nthe same set of WHERE conditions, to keep the effective value of N down in\nscenarios where the DBA has created lots of partially-redundant indexes.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Wed, 9 May 2007 10:29:14 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Apparently useless bitmap scans"
},
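Until an upgrade to 8.2.4 is possible, the leftover "dummy" indexes can simply be dropped; an untested catalog sketch along these lines lists the partial indexes on the table named in the plan together with their predicates, so the redundant ones are easy to spot.

```sql
-- Partial indexes and their predicates for one table; indexes sharing a
-- predicate while indexing an expression no query uses are drop candidates.
SELECT c.relname                          AS index_name,
       pg_get_expr(i.indpred, i.indrelid) AS predicate
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE i.indrelid = 'email_header'::regclass   -- table name taken from the plan above
  AND i.indpred IS NOT NULL                   -- only partial indexes
ORDER BY predicate, index_name;
```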
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> I'm wondering what it wants to achieve with these three index scans:\n\nSee if you still get that with 8.2.4. choose_bitmap_and was fairly far\nout in left field before that :-( ... particularly for cases with\npartially redundant indexes available.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 May 2007 10:49:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Apparently useless bitmap scans "
},
{
"msg_contents": "Am Mittwoch, 9. Mai 2007 16:29 schrieb Alvaro Herrera:\n> Peter Eisentraut wrote:\n> > There's another odd thing about this plan from yesterday.\n>\n> Is this still 8.2.1? The logic to choose bitmap indexes was rewritten\n> just before 8.2.4,\n\nOK, upgrading to 8.2.4 fixes this odd plan choice. The query does run\na bit faster too, but the cost estimate has actually gone up!\n\n8.2.1:\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=87142.18..87366.58 rows=11220 width=184) (actual time=7883.541..8120.647 rows=35000 loops=1)\n -> Sort (cost=87142.18..87170.23 rows=11220 width=184) (actual time=7883.471..7926.031 rows=35000 loops=1)\n Sort Key: eh_subj.header_body\n -> Hash Join (cost=46283.30..86387.42 rows=11220 width=184) (actual time=5140.182..7635.615 rows=35000 loops=1)\n Hash Cond: (eh_subj.email_id = email.email_id)\n -> Bitmap Heap Scan on email_header eh_subj (cost=11853.68..50142.87 rows=272434 width=104) (actual time=367.956..1719.736 rows=280989 loops=1)\n Recheck Cond: ((mime_part_id = 0) AND (header_name = 'subject'::text))\n -> BitmapAnd (cost=11853.68..11853.68 rows=27607 width=0) (actual time=326.507..326.507 rows=0 loops=1)\n -> Bitmap Index Scan on idx__email_header__header_body_subject (cost=0.00..5836.24 rows=272434 width=0) (actual time=178.041..178.041 rows=280989 loops=1)\n -> Bitmap Index Scan on idx__email_header__header_name (cost=0.00..5880.97 rows=281247 width=0) (actual time=114.574..114.574 rows=280989 loops=1)\n Index Cond: (header_name = 'subject'::text)\n -> Hash (cost=34291.87..34291.87 rows=11020 width=120) (actual time=4772.148..4772.148 rows=35000 loops=1)\n -> Hash Join (cost=24164.59..34291.87 rows=11020 width=120) (actual time=3131.067..4706.997 rows=35000 loops=1)\n Hash Cond: (mime_part.email_id = email.email_id)\n -> Seq Scan on mime_part (cost=0.00..8355.81 rows=265804 width=12) (actual time=0.038..514.291 rows=267890 loops=1)\n Filter: (mime_part_id = 0)\n -> Hash (cost=24025.94..24025.94 rows=11092 width=112) (actual time=3130.982..3130.982 rows=35000 loops=1)\n -> Hash Join (cost=22244.54..24025.94 rows=11092 width=112) (actual time=996.556..3069.280 rows=35000 loops=1)\n Hash Cond: (eh_from.email_id = email.email_id)\n -> Bitmap Heap Scan on email_header eh_from (cost=15576.58..16041.55 rows=107156 width=104) (actual time=569.762..1932.017 rows=280990 loops=1)\n Recheck Cond: ((mime_part_id = 0) AND (header_name = 'from'::text))\n -> BitmapAnd (cost=15576.58..15576.58 rows=160 width=0) (actual time=532.217..532.217 rows=0 loops=1)\n -> Bitmap Index Scan on dummy_index (cost=0.00..3724.22 rows=107156 width=0) (actual time=116.386..116.386 rows=280990 loops=1)\n -> Bitmap Index Scan on idx__email_header__from_local (cost=0.00..5779.24 rows=107156 width=0) (actual time=174.883..174.883 rows=280990 loops=1)\n -> Bitmap Index Scan on dummy2_index (cost=0.00..5992.25 rows=107156 width=0) (actual time=173.575..173.575 rows=280990 loops=1)\n -> Hash (cost=6321.79..6321.79 rows=27694 width=8) (actual time=426.739..426.739 rows=35000 loops=1)\n -> Index Scan using idx__email__time on email (cost=0.00..6321.79 rows=27694 width=8) (actual time=50.000..375.021 rows=35000 loops=1)\n Index Cond: ((\"time\" >= '2007-05-05 17:01:59'::timestamp without time zone) AND (\"time\" < '2007-05-05 18:01:59'::timestamp without time zone))\n Total runtime: 
8160.442 ms\n\n\n8.2.4:\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=100086.52..100658.46 rows=28597 width=182) (actual time=6063.766..6281.818 rows=35000 loops=1)\n -> Sort (cost=100086.52..100158.01 rows=28597 width=182) (actual time=6063.697..6105.215 rows=35000 loops=1)\n Sort Key: eh_subj.header_body\n -> Hash Join (cost=36729.27..97969.83 rows=28597 width=182) (actual time=3690.316..5790.094 rows=35000 loops=1)\n Hash Cond: (eh_subj.email_id = email.email_id)\n -> Bitmap Heap Scan on email_header eh_subj (cost=5903.20..63844.68 rows=267832 width=103) (actual time=214.699..1564.804 rows=280989 loops=1)\n Recheck Cond: ((mime_part_id = 0) AND (header_name = 'subject'::text))\n -> Bitmap Index Scan on idx__email_header__header_body_subject (cost=0.00..5836.24 rows=267832 width=0) (actual time=172.188..172.188 rows=280989 loops=1)\n -> Hash (cost=30468.98..30468.98 rows=28567 width=119) (actual time=3475.484..3475.484 rows=35000 loops=1)\n -> Hash Join (cost=13773.73..30468.98 rows=28567 width=119) (actual time=1260.579..3409.443 rows=35000 loops=1)\n Hash Cond: (eh_from.email_id = email.email_id)\n -> Index Scan using dummy_index on email_header eh_from (cost=0.00..13286.00 rows=277652 width=103) (actual time=0.076..1391.974 rows=280990 loops=1)\n -> Hash (cost=13429.63..13429.63 rows=27528 width=20) (actual time=1260.422..1260.422 rows=35000 loops=1)\n -> Hash Join (cost=1799.41..13429.63 rows=27528 width=20) (actual time=114.765..1206.500 rows=35000 loops=1)\n Hash Cond: (mime_part.email_id = email.email_id)\n -> Seq Scan on mime_part (cost=0.00..8355.81 rows=266589 width=12) (actual time=0.036..407.539 rows=267890 loops=1)\n Filter: (mime_part_id = 0)\n -> Hash (cost=1454.07..1454.07 rows=27627 width=8) (actual time=114.644..114.644 rows=35000 loops=1)\n -> Index Scan using idx__email__time on email (cost=0.00..1454.07 rows=27627 width=8) (actual time=0.144..63.017 rows=35000 loops=1)\n Index Cond: ((\"time\" >= '2007-05-05 17:01:59'::timestamp without time zone) AND (\"time\" < '2007-05-05 18:01:59'::timestamp without time zone))\n Total runtime: 6320.790 ms\n(21 Zeilen)\n\n\nThe only significant change is that the first Bitmap Heap Scan (line 6)\nbecame more expensive. You will notice that in the old plan, you had a\npretty good correspondence of 10 cost units to 1 millisecond throughout,\nwhereas in the new plan that does not apply to said Bitmap Heap Scan.\nI'm not sure whether that is cause for concern.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Wed, 9 May 2007 17:26:03 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Apparently useless bitmap scans"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> OK, upgrading to 8.2.4 fixes this odd plan choice. The query does run\n> a bit faster too, but the cost estimate has actually gone up!\n\nYeah, because the former code was making an unrealistically small\nestimate of the number of tuples found by BitmapAnd (due to\ndouble-counting the selectivities of redundant indexes), and of course\nthat means a smaller estimate of the cost to fetch them in the bitmap\nheap scan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 May 2007 11:56:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Apparently useless bitmap scans "
}
] |
[
{
"msg_contents": "Hello all,\n\nI am trying to move from a GiST intarray index to a GIN intarray index, but my\nGIN index is not being used by the planner.\n\nThe normal query is like this:\n\nselect *\n from sourcetablewith_int4\n where ARRAY[myint] <@ myint_array\n and some_other_filters\n\n(with the GiST index everything works fine, but the GIN index is not being used)\n\nIf I create the same table populating it with text[] data like\n\nselect myint_array::text[] as myint_array_as_textarray\n into newtablewith_text\n from sourcetablewith_int4\n\nand then create a GIN index using this new text[] column,\n\nthe planner starts to use the index and queries run with great speed when\nthe query looks like this:\n\nselect *\n from newtablewith_text\n where ARRAY['myint'] <@ myint_array_as_textarray\n and some_other_filters\n\nWhere can the problem be with the _int4 GIN index in this constellation?\n\nFor now enable_seqscan is set to off in the configuration.\n\nWith best regards,\n\n-- Valentine Gogichashvili\n",

"msg_date": "Wed, 9 May 2007 15:12:45 +0200",
"msg_from": "\"Valentine Gogichashvili\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cannot make GIN intarray index be used by the planner"
},
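For context, a self-contained reconstruction of the setup looks roughly like this; the table and column names are taken from the message, while the id column and the value 42 are made up. Whether the index uses the built-in opclass or contrib/intarray's own one turns out to matter later in the thread.

```sql
-- Hypothetical reconstruction of the int4[] case that refuses to use the index.
CREATE TABLE sourcetablewith_int4 (
    id          serial PRIMARY KEY,
    myint_array int4[]
);

-- GIN index with the built-in default opclass for int4[]:
CREATE INDEX idx_myint_array_gin ON sourcetablewith_int4 USING gin (myint_array);

-- If contrib/intarray is installed, its opclass can be named explicitly instead
-- (assumption: the intarray GIN opclass is called gin__int_ops):
-- CREATE INDEX idx_myint_array_gin ON sourcetablewith_int4
--     USING gin (myint_array gin__int_ops);

ANALYZE sourcetablewith_int4;
EXPLAIN SELECT * FROM sourcetablewith_int4 WHERE ARRAY[42] <@ myint_array;
```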
{
"msg_contents": "Do you have both indexes (GiST, GIN) on the same table ?\n\nOn Wed, 9 May 2007, Valentine Gogichashvili wrote:\n\n> Hello all,\n>\n> I am trying to move from GiST intarray index to GIN intarray index, but my\n> GIN index is not being used by the planner.\n>\n> The normal query is like that\n>\n> select *\n> from sourcetablewith_int4\n> where ARRAY[myint] <@ myint_array\n> and some_other_filters\n>\n> (with GiST index everything works fine, but GIN index is not being used)\n>\n> If I create the same table populating it with text[] data like\n>\n> select myint_array::text[] as myint_array_as_textarray\n> into newtablewith_text\n> from sourcetablewith_int4\n>\n> and then create a GIN index using this new text[] column\n>\n> the planner starts to use the index and queries run with grate speed when\n> the query looks like that:\n>\n> select *\n> from newtablewith_text\n> where ARRAY['myint'] <@ myint_array_as_textarray\n> and some_other_filters\n>\n> Where the problem can be with _int4 GIN index in this constellation?\n>\n> by now the enable_seqscan is set to off in the configuration.\n>\n> With best regards,\n>\n> -- Valentine Gogichashvili\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Wed, 9 May 2007 17:31:19 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cannot make GIN intarray index be used by the planner"
},
{
"msg_contents": "I have experimented quite a lot. So first I did when starting the attempt to\nmove from GiST to GIN, was to drop the GiST index and create a brand new GIN\nindex... after that did not bring the results, I started to create all this\ntables with different sets of indexes and so on...\n\nSo the answer to the question is: no there in only GIN index on the table.\n\nThank you in advance,\n\nValentine\n\nOn 5/9/07, Oleg Bartunov <[email protected]> wrote:\n>\n> Do you have both indexes (GiST, GIN) on the same table ?\n>\n> On Wed, 9 May 2007, Valentine Gogichashvili wrote:\n>\n> > Hello all,\n> >\n> > I am trying to move from GiST intarray index to GIN intarray index, but\n> my\n> > GIN index is not being used by the planner.\n> >\n> > The normal query is like that\n> >\n> > select *\n> > from sourcetablewith_int4\n> > where ARRAY[myint] <@ myint_array\n> > and some_other_filters\n> >\n> > (with GiST index everything works fine, but GIN index is not being used)\n> >\n> > If I create the same table populating it with text[] data like\n> >\n> > select myint_array::text[] as myint_array_as_textarray\n> > into newtablewith_text\n> > from sourcetablewith_int4\n> >\n> > and then create a GIN index using this new text[] column\n> >\n> > the planner starts to use the index and queries run with grate speed\n> when\n> > the query looks like that:\n> >\n> > select *\n> > from newtablewith_text\n> > where ARRAY['myint'] <@ myint_array_as_textarray\n> > and some_other_filters\n> >\n> > Where the problem can be with _int4 GIN index in this constellation?\n> >\n> > by now the enable_seqscan is set to off in the configuration.\n> >\n> > With best regards,\n> >\n> > -- Valentine Gogichashvili\n> >\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n\n\n\n-- \nვალენტინ გოგიჩაშვილი\nValentine Gogichashvili\n\nI have experimented quite a lot. So first I did when starting the attempt to move from GiST to GIN, was to drop the GiST index and create a brand new GIN index... 
after that did not bring the results, I started to create all this tables with different sets of indexes and so on...\nSo the answer to the question is: no there in only GIN index on the table.Thank you in advance, ValentineOn 5/9/07, Oleg Bartunov\n <[email protected]> wrote:Do you have both indexes (GiST, GIN) on the same table ?\nOn Wed, 9 May 2007, Valentine Gogichashvili wrote:> Hello all,>> I am trying to move from GiST intarray index to GIN intarray index, but my> GIN index is not being used by the planner.\n>> The normal query is like that>> select *> from sourcetablewith_int4> where ARRAY[myint] <@ myint_array> and some_other_filters>> (with GiST index everything works fine, but GIN index is not being used)\n>> If I create the same table populating it with text[] data like>> select myint_array::text[] as myint_array_as_textarray> into newtablewith_text> from sourcetablewith_int4>\n> and then create a GIN index using this new text[] column>> the planner starts to use the index and queries run with grate speed when> the query looks like that:>> select *> from newtablewith_text\n> where ARRAY['myint'] <@ myint_array_as_textarray> and some_other_filters>> Where the problem can be with _int4 GIN index in this constellation?>> by now the enable_seqscan is set to off in the configuration.\n>> With best regards,>> -- Valentine Gogichashvili> Regards, Oleg_____________________________________________________________Oleg Bartunov, Research Scientist, Head of AstroNet (\nwww.astronet.ru),Sternberg Astronomical Institute, Moscow University, RussiaInternet: [email protected], \nhttp://www.sai.msu.su/~megera/phone: +007(495)939-16-83, +007(495)939-23-83-- ვალენტინ გოგიჩაშვილიValentine Gogichashvili",
"msg_date": "Wed, 9 May 2007 15:36:44 +0200",
"msg_from": "\"Valentine Gogichashvili\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cannot make GIN intarray index be used by the planner"
},
{
"msg_contents": "On Wed, 9 May 2007, Valentine Gogichashvili wrote:\n\n> I have experimented quite a lot. So first I did when starting the attempt to\n> move from GiST to GIN, was to drop the GiST index and create a brand new GIN\n> index... after that did not bring the results, I started to create all this\n> tables with different sets of indexes and so on...\n>\n> So the answer to the question is: no there in only GIN index on the table.\n\nthen, you have to provide us more infomation - \npg version, \n\\dt sourcetablewith_int4\nexplain analyze\n\nbtw, I did test of development version of GiN, see\nhttp://www.sai.msu.su/~megera/wiki/GinTest\n\n>\n> Thank you in advance,\n>\n> Valentine\n>\n> On 5/9/07, Oleg Bartunov <[email protected]> wrote:\n>> \n>> Do you have both indexes (GiST, GIN) on the same table ?\n>> \n>> On Wed, 9 May 2007, Valentine Gogichashvili wrote:\n>> \n>> > Hello all,\n>> >\n>> > I am trying to move from GiST intarray index to GIN intarray index, but\n>> my\n>> > GIN index is not being used by the planner.\n>> >\n>> > The normal query is like that\n>> >\n>> > select *\n>> > from sourcetablewith_int4\n>> > where ARRAY[myint] <@ myint_array\n>> > and some_other_filters\n>> >\n>> > (with GiST index everything works fine, but GIN index is not being used)\n>> >\n>> > If I create the same table populating it with text[] data like\n>> >\n>> > select myint_array::text[] as myint_array_as_textarray\n>> > into newtablewith_text\n>> > from sourcetablewith_int4\n>> >\n>> > and then create a GIN index using this new text[] column\n>> >\n>> > the planner starts to use the index and queries run with grate speed\n>> when\n>> > the query looks like that:\n>> >\n>> > select *\n>> > from newtablewith_text\n>> > where ARRAY['myint'] <@ myint_array_as_textarray\n>> > and some_other_filters\n>> >\n>> > Where the problem can be with _int4 GIN index in this constellation?\n>> >\n>> > by now the enable_seqscan is set to off in the configuration.\n>> >\n>> > With best regards,\n>> >\n>> > -- Valentine Gogichashvili\n>> >\n>>\n>> Regards,\n>> Oleg\n>> _____________________________________________________________\n>> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n>> Sternberg Astronomical Institute, Moscow University, Russia\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(495)939-16-83, +007(495)939-23-83\n>> \n>\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Wed, 9 May 2007 17:49:36 +0400 (MSD)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cannot make GIN intarray index be used by the planner"
},
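Collected in a single psql session, the requested details would look roughly like this; 42 stands in for a real array element.

```sql
-- Version, table and index definitions, and the actual plan, as asked for above.
SELECT version();
\d sourcetablewith_int4      -- psql meta-command; \dt alone would not show the indexes
EXPLAIN ANALYZE
SELECT * FROM sourcetablewith_int4 WHERE ARRAY[42] <@ myint_array;
```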
{
"msg_contents": "[cc'ing to pgsql-hackers since this is looking like a contrib/intarray bug]\n\n\"Valentine Gogichashvili\" <[email protected]> writes:\n> here is the DT\n\nThat works fine for me in 8.2:\n\nregression=# explain SELECT id, (myintarray_int4)\n FROM myintarray_table_nonulls\n WHERE ARRAY[8] <@ myintarray_int4;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_nonnulls_myintarray_int4_gin on myintarray_table_nonulls (cost=0.00..8.27 rows=1 width=36)\n Index Cond: ('{8}'::integer[] <@ myintarray_int4)\n(2 rows)\n\nWhat I am betting is that you've installed contrib/intarray in this\ndatabase and that's bollixed things up somehow. In particular, intarray\ntries to take over the position of \"default\" gin opclass for int4[],\nand the opclass that it installs as default has operators named just\nlike the built-in ones. If somehow your query is using pg_catalog.<@\ninstead of intarray's public.<@, then the planner wouldn't think the\nindex is relevant.\n\nIn a quick test your example still works with intarray installed, because\nwhat it's really created is public.<@ (integer[], integer[]) which is\nan exact match and therefore takes precedence over the built-in\npg_catalog.<@ (anyarray, anyarray). But if for example you don't have\npublic in your search_path then the wrong operator would be chosen.\n\nPlease look at the pg_index entry for your index, eg\n\nselect * from pg_index where indexrelid =\n'\"versionA\".idx_nonnulls_myintarray_int4_gin'::regclass;\n\nand see whether the index opclass is the built-in one or not.\n\nNote to hackers: we've already discussed that intarray shouldn't be\ntrying to take over the default gin opclass, but I am beginning to\nwonder if it still has a reason to live at all. We should at least\nconsider removing the redundant operators to avoid risks like this one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 May 2007 15:18:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Cannot make GIN intarray index be used by the planner "
},
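Two catalog lookups make the ambiguity Tom describes visible: one lists every <@ operator in scope, the other shows which opclass the index was actually built with. Both are untested sketches over the standard catalogs, reusing the index name from Tom's message.

```sql
-- All "<@" operators in the database, with their schemas and argument types;
-- with contrib/intarray installed there is both pg_catalog.<@ (anyarray, anyarray)
-- and public.<@ (integer[], integer[]).
SELECT n.nspname AS schema,
       format_type(o.oprleft, NULL)  AS left_type,
       format_type(o.oprright, NULL) AS right_type
FROM pg_operator o
JOIN pg_namespace n ON n.oid = o.oprnamespace
WHERE o.oprname = '<@';

-- The opclass the GIN index was built with (Tom's pg_index suggestion, spelled out):
SELECT c.relname AS index_name, oc.opcname, ns.nspname AS opclass_schema
FROM pg_index i
JOIN pg_class c      ON c.oid = i.indexrelid
JOIN pg_opclass oc   ON oc.oid = ANY (i.indclass::oid[])
JOIN pg_namespace ns ON ns.oid = oc.opcnamespace
WHERE i.indexrelid = '"versionA".idx_nonnulls_myintarray_int4_gin'::regclass;
```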
{
"msg_contents": "Hello again,\n\nI got the opclass for the index and it looks like it is a default one\n\nmyvideoindex=# select pg_opclass.*, pg_type.typname\nmyvideoindex-# from pg_index, pg_opclass, pg_type\nmyvideoindex-# where pg_index.indexrelid =\n'idx_nonnulls_myintarray_int4_gin'::regclass\nmyvideoindex-# and pg_opclass.oid = any (pg_index.indclass::oid[] )\nmyvideoindex-# and pg_type.oid = pg_opclass.opcintype;\n\n opcamid | opcname | opcnamespace | opcowner | opcintype | opcdefault |\nopckeytype | typname\n---------+-----------+--------------+----------+-----------+------------+------------+---------\n 2742 | _int4_ops | 11 | 10 | 1007 | t\n| 23 | _int4\n(1 row)\n\nThe search_path is set to the following\n\nmyvideoindex=# show search_path;\n search_path\n--------------------\n \"versionA\", public\n(1 row)\n\nWith best regards,\n\n-- Valentine\n\nOn 5/9/07, Tom Lane <[email protected]> wrote:\n>\n> [cc'ing to pgsql-hackers since this is looking like a contrib/intarray\n> bug]\n>\n> \"Valentine Gogichashvili\" <[email protected]> writes:\n> > here is the DT\n>\n> That works fine for me in 8.2:\n>\n> regression=# explain SELECT id, (myintarray_int4)\n> FROM myintarray_table_nonulls\n> WHERE ARRAY[8] <@ myintarray_int4;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------\n> Index Scan using idx_nonnulls_myintarray_int4_gin on\n> myintarray_table_nonulls (cost=0.00..8.27 rows=1 width=36)\n> Index Cond: ('{8}'::integer[] <@ myintarray_int4)\n> (2 rows)\n>\n> What I am betting is that you've installed contrib/intarray in this\n> database and that's bollixed things up somehow. In particular, intarray\n> tries to take over the position of \"default\" gin opclass for int4[],\n> and the opclass that it installs as default has operators named just\n> like the built-in ones. If somehow your query is using pg_catalog.<@\n> instead of intarray's public.<@, then the planner wouldn't think the\n> index is relevant.\n>\n> In a quick test your example still works with intarray installed, because\n> what it's really created is public.<@ (integer[], integer[]) which is\n> an exact match and therefore takes precedence over the built-in\n> pg_catalog.<@ (anyarray, anyarray). But if for example you don't have\n> public in your search_path then the wrong operator would be chosen.\n>\n> Please look at the pg_index entry for your index, eg\n>\n> select * from pg_index where indexrelid =\n> '\"versionA\".idx_nonnulls_myintarray_int4_gin'::regclass;\n>\n> and see whether the index opclass is the built-in one or not.\n>\n> Note to hackers: we've already discussed that intarray shouldn't be\n> trying to take over the default gin opclass, but I am beginning to\n> wonder if it still has a reason to live at all. 
We should at least\n> consider removing the redundant operators to avoid risks like this one.\n>\n> regards, tom lane\n>\n\n\n\n-- \nვალენტინ გოგიჩაშვილი\nValentine Gogichashvili\n\nHello again, I got the opclass for the index and it looks like it is a default onemyvideoindex=# select pg_opclass.*, pg_type.typname\nmyvideoindex-# from pg_index, pg_opclass, pg_typemyvideoindex-# where pg_index.indexrelid = 'idx_nonnulls_myintarray_int4_gin'::regclass\nmyvideoindex-# and pg_opclass.oid = any (pg_index.indclass::oid[] )\nmyvideoindex-# and pg_type.oid = pg_opclass.opcintype; opcamid | opcname | opcnamespace | opcowner | opcintype | opcdefault | opckeytype | typname\n---------+-----------+--------------+----------+-----------+------------+------------+---------\n 2742 | _int4_ops | 11 | 10 | 1007 | t | 23 | _int4\n(1 row)The search_path is set to the followingmyvideoindex=# show search_path; search_path-------------------- \"versionA\", public\n(1 row)With best regards, -- ValentineOn 5/9/07, Tom Lane <[email protected]> wrote:\n[cc'ing to pgsql-hackers since this is looking like a contrib/intarray bug]\n\"Valentine Gogichashvili\" <[email protected]> writes:> here is the DTThat works fine for me in 8.2:regression=# explain SELECT id, (myintarray_int4)\n FROM myintarray_table_nonulls WHERE ARRAY[8] <@ myintarray_int4; QUERY PLAN------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_nonnulls_myintarray_int4_gin on myintarray_table_nonulls (cost=0.00..8.27 rows=1 width=36) Index Cond: ('{8}'::integer[] <@ myintarray_int4)(2 rows)What I am betting is that you've installed contrib/intarray in this\ndatabase and that's bollixed things up somehow. In particular, intarraytries to take over the position of \"default\" gin opclass for int4[],and the opclass that it installs as default has operators named just\nlike the built-in ones. If somehow your query is using pg_catalog.<@instead of intarray's public.<@, then the planner wouldn't think theindex is relevant.In a quick test your example still works with intarray installed, because\nwhat it's really created is public.<@ (integer[], integer[]) which isan exact match and therefore takes precedence over the built-inpg_catalog.<@ (anyarray, anyarray). But if for example you don't have\npublic in your search_path then the wrong operator would be chosen.Please look at the pg_index entry for your index, egselect * from pg_index where indexrelid ='\"versionA\".idx_nonnulls_myintarray_int4_gin'::regclass;\nand see whether the index opclass is the built-in one or not.Note to hackers: we've already discussed that intarray shouldn't betrying to take over the default gin opclass, but I am beginning to\nwonder if it still has a reason to live at all. We should at leastconsider removing the redundant operators to avoid risks like this one. regards, tom lane\n-- ვალენტინ გოგიჩაშვილიValentine Gogichashvili",
"msg_date": "Thu, 10 May 2007 11:07:17 +0200",
"msg_from": "\"Valentine Gogichashvili\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Cannot make GIN intarray index be used by the planner"
}
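Given that output (namespace OID 11 is pg_catalog, i.e. the built-in opclass) while the query's <@ resolves to intarray's operator in public, two ways to make operator and opclass agree follow from Tom's analysis. Both are untested sketches; they assume the table sits in the versionA schema like its index and that contrib/intarray's GIN opclass is named gin__int_ops.

```sql
-- Option 1: rebuild the index with intarray's own GIN opclass so it matches
-- the public.<@ operator the query is resolving to.
DROP INDEX "versionA".idx_nonnulls_myintarray_int4_gin;
CREATE INDEX idx_nonnulls_myintarray_int4_gin
    ON "versionA".myintarray_table_nonulls
    USING gin (myintarray_int4 gin__int_ops);

-- Option 2: keep the built-in opclass and pin the query to the built-in operator.
EXPLAIN SELECT id, myintarray_int4
FROM "versionA".myintarray_table_nonulls
WHERE ARRAY[8] OPERATOR(pg_catalog.<@) myintarray_int4;
```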
] |
[
{
"msg_contents": "That's an email from my friend.\nAny hint?\n\n-------- Original Message --------\nSubject: bug\nDate: Wed, 09 May 2007 15:03:00 +0200\nFrom: Michal Postupalski\nTo: Andrzej Zawadzki\n\nWe've just changed our database from 8.1 to 8.2 and we are\ngrief-stricken about very poor performance with queries using the clause\n\"sth IN (...)\". As we can see, any such query is translated to \"sth = ANY\n('{....}'::bpchar[]))\" and it takes much more time because it doesn't\nuse indexes. Why? How can we speed up these queries? I've just read\n\"Performance of IN (...) vs. = ANY array[...]\" on the pgsql-performance\nmailing list and I didn't find any solutions. Can anybody tell me what\nI can do to make postgres use the indexes? If there isn't any\nsolution I'm afraid that we will have to downgrade to the previous\nversion 8.1.\n\nexample:\nSELECT count(*)\nFROM kredytob b, kredyty k\nWHERE true\nAND b.kredytid = k.id\nAND '' IN ('', upper(b.nazwisko))\nAND '' IN ('', upper(b.imie))\nAND '78111104485' IN ('', b.pesel)\nAND '' IN ('', upper(trim(b.dowseria))) AND '' IN ('', b.dowosnr) AND 0\nIN (0, b.typkred) AND k.datazwrot IS NULL;\n\nregards...\nMichał Postupalski\n\n",
"msg_date": "Wed, 09 May 2007 15:22:41 +0200",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance with queries using clause: sth IN (...)"
},
{
"msg_contents": "\n> AND '' IN ('', upper(b.nazwisko))\n> AND '' IN ('', upper(b.imie))\n> AND '78111104485' IN ('', b.pesel)\n> AND '' IN ('', upper(trim(b.dowseria))) \n> AND '' IN ('', b.dowosnr) \n> AND 0 IN (0, b.typkred) \n> AND k.datazwrot IS NULL;\n\nHum, interesting. Most of the work Postgres does with IN clauses is on the\nassumption that the column you're trying to restrict is on the left hand side\nof the IN clause.\n\n1) I think you'll be much better off expanding these into OR clauses. \n\n2) I assume the left hand sides of the IN clauses are actually parameters? I\n would recommend using bound parameters mostly for security but also for\n performance reasons in that case.\n\n3) having upper() and trim() around the columns makes it basically impossible\n for the planner to use indexes even if it was capable of expanding the IN\n clauses into OR expressions. Your options are either\n\n a) use an expression index, for example \n CREATE INDEX idx_nazwisko on kredytob (upper(nazwisko))\n\n b) use a case-insensitive locale (which you may already be doing) in which\n case the upper() is simply unnecessary.\n\n c) use the citext data type (or a case insensitive indexable operator but we\n don't seem to have a case insensitive equals, only LIKE and regexp\n matches? That seems strange.)\n\n4) You should consider using text or varchar instead of char(). char() has no\n performance advantages in Postgres and is annoying to work with.\n\nSomething like this with expression indexes on upper(nazwisko), upper(imie),\nupper(trim(downseria)) would actually be optimized using indexes:\n\n AND (? = '' OR upper(b.nazwisko) = ?)\n AND (? = '' OR upper(b.imie) = ?)\n AND (? = '' OR b.pesel = ?)\n AND (? = '' OR upper(trim(b.downseria)) = ?)\n AND (? = '' OR b.dowosnr = ?)\n AND (? = 0 OR b.typkred = ?)\n AND k.datazwrot IS NULL\n\nIf this is the only query or a particularly important query you could consider\nmaking all those indexes partial with \"WHERE datazwrot IS NULL\" as well.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Wed, 09 May 2007 15:29:58 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance with queries using clause: sth IN (...)"
}
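Spelled out against the tables from the original post, Gregory's suggestions 1-3 might look roughly as follows; the index and statement names are invented and only three of the parameters are shown.

```sql
-- Expression indexes so the upper()/trim() comparisons remain indexable:
CREATE INDEX idx_kredytob_upper_nazwisko ON kredytob (upper(nazwisko));
CREATE INDEX idx_kredytob_upper_imie     ON kredytob (upper(imie));
CREATE INDEX idx_kredytob_pesel          ON kredytob (pesel);

-- The "empty parameter means no filter" logic rewritten as OR clauses with
-- the column on the left-hand side, using bound parameters:
PREPARE find_credits(text, text, text) AS
SELECT count(*)
FROM kredytob b
JOIN kredyty k ON b.kredytid = k.id
WHERE ($1 = '' OR upper(b.nazwisko) = $1)
  AND ($2 = '' OR upper(b.imie) = $2)
  AND ($3 = '' OR b.pesel = $3)
  AND k.datazwrot IS NULL;

-- Example call: only pesel is supplied, the other filters collapse to true.
EXECUTE find_credits('', '', '78111104485');
```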
] |
[
{
"msg_contents": "How do you specify a log file for vacuum verbose to send info to? I have\nverbose turned on but cannot see any log messages.\n\nI have upped maintenance_work_mem setting from 32768 to 98304. This is on a\n4 GB, 3.2 GHz Xeon, dual core, dual cpu with HTT turned on. I hope that\nhelps with vacuum times. PG 8.0.9 on a UFS2 FreeBSD 6.2 prerelease.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n",
"msg_date": "Wed, 9 May 2007 15:00:22 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum Times - Verbose and maintenance_work_mem"
}
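VACUUM VERBOSE emits its per-table details at the INFO level, so turning on "verbose" alone puts nothing in the server log. A sketch of the two usual ways to capture the output on an 8.0-era server follows; the configuration names are the 8.0 spellings and the table name is a placeholder.

```sql
-- 1) Send INFO-level messages to the server log (postgresql.conf, 8.0.x names):
--      redirect_stderr  = on          -- later releases call this logging_collector
--      log_directory    = 'pg_log'
--      log_filename     = 'postgresql-%Y-%m-%d.log'
--      log_min_messages = info
--    then reload the configuration and run:
VACUUM VERBOSE ANALYZE some_table;     -- some_table is a placeholder

-- 2) Or capture it client-side; psql prints the INFO lines on stderr, so from a shell:
--      psql -d mydb -c 'VACUUM VERBOSE ANALYZE some_table' 2> vacuum.log
```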
] |
[
{
"msg_contents": "Hi,\n\n \n\nI have several databases. They are each about 35gb in size and have about\n10.5K relations (count from pg_stat_all_tables) in them. Pg_class is about\n26k rows and the data directory contains about 70k files. These are busy\nmachines, they run about 50 xactions per second, ( aproxx insert / update /\ndelete about 500 rows per second).\n\n \n\nWe started getting errors about the number of open file descriptors\n\n \n\n: 2007-05-09 03:07:50.083 GMT 1146975740: LOG: 53000: out of file\ndescriptors: Too many open files; release and retry\n\n2007-05-09 03:07:50.083 GMT 1146975740: CONTEXT: SQL statement \"insert …..\n\"\n\n PL/pgSQL function \"trigfunc_whatever\" line 50 at execute statement\n\n2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile, fd.c:471\n\n2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms\n\n2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query, postgres.c:1090\n\n \n\nSo we decreased the max_files_per_process to 800. This took care of the\nerror *BUT* about quadrupled the IO wait that is happening on the machine.\nIt went from a peek of about 50% to peeks of over 200% (4 processor\nmachines, 4 gigs ram, raid). The load on the machine remained constant.\n\n \n\nI am really to get an understanding of exactly what this setting is and\n‘what’ is out of file descriptors and how I can fix that. I need to bring\nthat IO back down.\n\n \n\n \n\nThanks for any help.\n\nRalph\n\n \n\n \n\n \n\n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 05/12/2006\n16:07\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nI have several databases. They are each about 35gb in\nsize and have about 10.5K relations (count from pg_stat_all_tables) in them. \nPg_class is about 26k rows and the data directory contains about 70k files. \nThese are busy machines, they run about 50 xactions per second, ( aproxx insert\n/ update / delete about 500 rows per second).\n \nWe started getting errors about the number of open file descriptors\n \n: 2007-05-09 03:07:50.083 GMT 1146975740: LOG: 53000:\nout of file descriptors: Too many open files; release and retry\n2007-05-09 03:07:50.083 GMT 1146975740: CONTEXT: SQL\nstatement \"insert ….. \"\n PL/pgSQL function\n\"trigfunc_whatever\" line 50 at execute statement\n2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: \nBasicOpenFile, fd.c:471\n2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration:\n12.362 ms\n2007-05-09 03:07:50.091 GMT 0: LOCATION: \nexec_simple_query, postgres.c:1090\n \nSo we decreased the max_files_per_process to 800. \nThis took care of the error *BUT* about quadrupled the IO wait\nthat is happening on the machine. It went from a peek of about 50% to peeks of\nover 200% (4 processor machines, 4 gigs ram, raid). The load on the\nmachine remained constant.\n \nI am really to get an understanding of exactly what this\nsetting is and ‘what’ is out of file descriptors and how I can fix\nthat. I need to bring that IO back down.\n \n \nThanks for any help.\nRalph",
"msg_date": "Thu, 10 May 2007 11:51:12 +1200",
"msg_from": "\"Ralph Mason\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Woes"
}
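A quick back-of-the-envelope check of the descriptor budget can be done from SQL before touching any kernel settings; the catalog names below are standard, and the arithmetic is only an upper bound.

```sql
-- Per-backend cap on cached open files and the number of allowed backends:
SHOW max_files_per_process;
SHOW max_connections;

-- How many on-disk relations (tables, indexes, toast tables) a single
-- database can ask its backends to open:
SELECT count(*) AS relations
FROM pg_class
WHERE relkind IN ('r', 'i', 't');

-- Worst case the kernel may be asked for roughly
--   max_connections * max_files_per_process
-- descriptors, which has to fit well under the kernel-wide limit
-- (fs.file-max on Linux) with room left for everything else on the box.
```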
] |
[
{
"msg_contents": "> I have several databases. They are each about 35gb in size and have about\n> 10.5K relations (count from pg_stat_all_tables) in them. Pg_class is\n> about 26k rows and the data directory contains about 70k files. These are\n> busy machines, they run about 50 xactions per second, ( aproxx insert /\n> update / delete about 500 rows per second).\n>\n> We started getting errors about the number of open file descriptors\n>\n> : 2007-05-09 03:07:50.083 GMT 1146975740: LOG: 53000: out of file\n> descriptors: Too many open files; release and retry\n>\n> 2007-05-09 03:07:50.083 GMT 1146975740: CONTEXT: SQL statement \"insert\n> ….. \"\n>\n> PL/pgSQL function \"trigfunc_whatever\" line 50 at execute statement\n>\n> 2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile, fd.c:471\n>\n> 2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms\n>\n> 2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query, postgres.c\n> :1090\n>\n>\n>\n> So we decreased the max_files_per_process to 800. This took care of the\n> error **BUT** about quadrupled the IO wait that is happening on the\n> machine. It went from a peek of about 50% to peeks of over 200% (4 processor\n> machines, 4 gigs ram, raid). The load on the machine remained constant.\n>\n\n\nWhat version of Pg/OS? What is your hardware config?\n\nI had seen these errors with earlier versions of Pg 7.4.x which was fixed in\nlater releases according to the changelogs\n\n\nI have several databases. They are each about 35gb in\nsize and have about 10.5K relations (count from pg_stat_all_tables) in them. \nPg_class is about 26k rows and the data directory contains about 70k files. \nThese are busy machines, they run about 50 xactions per second, ( aproxx insert\n/ update / delete about 500 rows per second).\nWe started getting errors about the number of open file descriptors\n: 2007-05-09 03:07:50.083 GMT 1146975740: LOG: 53000:\nout of file descriptors: Too many open files; release and retry\n2007-05-09 03:07:50.083 GMT 1146975740: CONTEXT: SQL\nstatement \"insert ….. \"\n PL/pgSQL function\n\"trigfunc_whatever\" line 50 at execute statement\n2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: \nBasicOpenFile, fd.c:471\n2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration:\n12.362 ms\n2007-05-09 03:07:50.091 GMT 0: LOCATION: \nexec_simple_query, postgres.c:1090\n \nSo we decreased the max_files_per_process to 800. \nThis took care of the error *BUT* about quadrupled the IO wait\nthat is happening on the machine. It went from a peek of about 50% to peeks of\nover 200% (4 processor machines, 4 gigs ram, raid). The load on the\nmachine remained constant.What version of Pg/OS? What is your hardware config? I had seen these errors with earlier versions of Pg 7.4.x which was fixed in later releases according to the changelogs",
"msg_date": "Wed, 9 May 2007 17:26:07 -0700",
"msg_from": "\"CAJ CAJ\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Woes"
},
{
"msg_contents": "\n> 2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,\n> fd.c:471\n> \n> 2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms\n> \n> 2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,\n> postgres.c:1090\n> \n> \n> \n> So we decreased the max_files_per_process to 800. This took care\n> of the error **BUT** about quadrupled the IO wait that is happening\n> on the machine. It went from a peek of about 50% to peeks of over\n> 200% (4 processor machines, 4 gigs ram, raid). The load on the\n> machine remained constant.\n> \n\nSounds to me like you just need to up the total amount of open files \nallowed by the operating system.\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Wed, 09 May 2007 17:29:51 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Woes"
},
{
"msg_contents": "From: [email protected]\n[mailto:[email protected]] On Behalf Of CAJ CAJ\nSent: 10 May 2007 12:26\nTo: Ralph Mason\nCc: [email protected]\nSubject: Re: [PERFORM] Performance Woes\n\n \n\n \n\nI have several databases. They are each about 35gb in size and have about\n10.5K relations (count from pg_stat_all_tables) in them. Pg_class is about\n26k rows and the data directory contains about 70k files. These are busy\nmachines, they run about 50 xactions per second, ( aproxx insert / update /\ndelete about 500 rows per second).\n\nWe started getting errors about the number of open file descriptors\n\n: 2007-05-09 03:07:50.083 GMT 1146975740: LOG: 53000: out of file\ndescriptors: Too many open files; release and retry\n\n2007-05-09 03:07:50.083 GMT 1146975740: CONTEXT: SQL statement \"insert …..\n\"\n\n PL/pgSQL function \"trigfunc_whatever\" line 50 at execute statement\n\n2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile, fd.c:471\n\n2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms\n\n2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query, postgres.c:1090\n\n \n\nSo we decreased the max_files_per_process to 800. This took care of the\nerror *BUT* about quadrupled the IO wait that is happening on the machine.\nIt went from a peek of about 50% to peeks of over 200% (4 processor\nmachines, 4 gigs ram, raid). The load on the machine remained constant.\n\n>What version of Pg/OS? What is your hardware config? \n\n>I had seen these errors with earlier versions of Pg 7.4.x which was fixed\nin later releases according to the changelogs \n\n\n\n\"PostgreSQL 8.1.4 on x86_64-redhat-linux-gnu, compiled by GCC\nx86_64-redhat-linux-gcc (GCC) 4.1.0 20060304 (Red Hat 4.1.0-3)\"\n\n \n\nsu postgres -c 'ulimit -a'\n\n \n\ncore file size (blocks, -c) 0\n\ndata seg size (kbytes, -d) unlimited\n\nmax nice (-e) 0\n\nfile size (blocks, -f) unlimited\n\npending signals (-i) 49152\n\nmax locked memory (kbytes, -l) 32\n\nmax memory size (kbytes, -m) unlimited\n\nopen files (-n) 1000000\n\npipe size (512 bytes, -p) 8\n\nPOSIX message queues (bytes, -q) 819200\n\nmax rt priority (-r) 0\n\nstack size (kbytes, -s) 10240\n\ncpu time (seconds, -t) unlimited\n\nmax user processes (-u) 49152\n\nvirtual memory (kbytes, -v) unlimited\n\nfile locks (-x) unlimited\n\nfile locks (-x) unlimited\n\nSeems like I should be able to use lots and lots of open files.\n\nMachines are quad processor opterons, 4gb ram with raid 5 data and logging\nto a raid 0.\n\nRalph\n\n \n\n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 05/12/2006\n16:07\n \n\n\n\n\n\n\n\n\n\n\n \n \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of CAJ CAJ\nSent: 10 May 2007 12:26\nTo: Ralph Mason\nCc: [email protected]\nSubject: Re: [PERFORM] Performance Woes\n\n \n \n\n\n\n\nI have several databases. They are each about 35gb in size and have\nabout 10.5K relations (count from pg_stat_all_tables) in them. Pg_class\nis about 26k rows and the data directory contains about 70k files. These\nare busy machines, they run about 50 xactions per second, ( aproxx insert /\nupdate / delete about 500 rows per second).\nWe started getting errors about the number of open file descriptors\n: 2007-05-09 03:07:50.083 GMT 1146975740: LOG: 53000: out of file\ndescriptors: Too many open files; release and retry\n2007-05-09 03:07:50.083 GMT 1146975740: CONTEXT: SQL statement\n\"insert ….. 
\"\n PL/pgSQL function\n\"trigfunc_whatever\" line 50 at execute statement\n2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,\nfd.c:471\n2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms\n2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,\npostgres.c:1090\n \nSo we decreased the max_files_per_process to 800. This took care\nof the error *BUT* about quadrupled the IO wait that is happening\non the machine. It went from a peek of about 50% to peeks of over 200% (4\nprocessor machines, 4 gigs ram, raid). The load on the machine remained\nconstant.\n\n\n\n\n>What version of\nPg/OS? What is your hardware config? \n\n>I had seen these errors with earlier\nversions of Pg 7.4.x which was fixed in later releases according to the\nchangelogs \n\n\n\"PostgreSQL 8.1.4 on x86_64-redhat-linux-gnu, compiled by\nGCC x86_64-redhat-linux-gcc (GCC) 4.1.0 20060304 (Red Hat 4.1.0-3)\"\n \nsu postgres -c 'ulimit -a'\n \ncore file\nsize (blocks, -c) 0\ndata seg\nsize (kbytes, -d)\nunlimited\nmax\nnice \n(-e) 0\nfile\nsize \n(blocks, -f) unlimited\npending\nsignals \n(-i) 49152\nmax locked memory (kbytes,\n-l) 32\nmax memory size \n(kbytes, -m) unlimited\nopen\nfiles \n(-n) 1000000\npipe\nsize (512\nbytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nmax rt\npriority \n(-r) 0\nstack\nsize \n(kbytes, -s) 10240\ncpu\ntime \n(seconds, -t) unlimited\nmax user\nprocesses \n(-u) 49152\nvirtual\nmemory (kbytes, -v)\nunlimited\nfile\nlocks \n(-x) unlimited\nfile\nlocks \n (-x) unlimited\n\n\nSeems like I should be able to use lots and lots\nof open files.\nMachines\nare quad processor opterons, 4gb ram with raid 5 data and logging to a raid 0.\nRalph",
"msg_date": "Thu, 10 May 2007 12:34:47 +1200",
"msg_from": "\"Ralph Mason\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Woes"
},
{
"msg_contents": "Hello,\n\nYou likely need to increase your file-max parameters using sysctl.conf.\n\nSincerely,\n\nJoshua D. Drake\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Wed, 09 May 2007 17:40:03 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Woes"
},
{
"msg_contents": "On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:\n> > 2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,\n> > fd.c:471\n> > \n> > 2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms\n> > \n> > 2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,\n> > postgres.c:1090\n> > \n> > \n> > \n> > So we decreased the max_files_per_process to 800. This took care\n> > of the error **BUT** about quadrupled the IO wait that is happening\n> > on the machine. It went from a peek of about 50% to peeks of over\n> > 200% (4 processor machines, 4 gigs ram, raid). The load on the\n> > machine remained constant.\n> > \n> \n> Sounds to me like you just need to up the total amount of open files \n> allowed by the operating system.\n\nIt looks more like the opposite, here's the docs for\nmax_files_per_process:\n\n\"Sets the maximum number of simultaneously open files allowed to each\nserver subprocess. The default is one thousand files. If the kernel is\nenforcing a safe per-process limit, you don't need to worry about this\nsetting. But on some platforms (notably, most BSD systems), the kernel\nwill allow individual processes to open many more files than the system\ncan really support when a large number of processes all try to open that\nmany files. If you find yourself seeing \"Too many open files\" failures,\ntry reducing this setting. This parameter can only be set at server\nstart.\"\n\nTo me, that means that his machine is allowing the new FD to be created,\nbut then can't really support that many so it gives an error.\n\nRalph, how many connections do you have open at once? It seems like the\nmachine perhaps just can't handle that many FDs in all of those\nprocesses at once.\n\nThat is a lot of tables. Maybe a different OS will handle it better?\nMaybe there's some way that you can use fewer connections and then the\nOS could still handle it?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Wed, 09 May 2007 17:42:19 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Woes"
},
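The connection count Jeff asks about is visible in the statistics views; on 8.1 the statement column is called current_query and is only filled in when stats_command_string is on, so this is a hedged sketch rather than an exact recipe.

```sql
-- Total backends and how many are actually running a statement right now:
SELECT count(*) AS backends,
       sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active
FROM pg_stat_activity;
```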
{
"msg_contents": ">To me, that means that his machine is allowing the new FD to be created,\n>but then can't really support that many so it gives an error.\n\nfiles-max is 297834\nulimit is 1000000\n\n(doesn't make sense but there you go)\n\nWhat I don’t really understand is with max_files_per_process at 800 we don't\nget the problem, but with 1000 we do.\n\n$lsof | wc -l \n14944\n\n\n$cat /proc/sys/fs/file-nr \n12240 0 297834\n\n\n>Ralph, how many connections do you have open at once? It seems like the\n>machine perhaps just can't handle that many FDs in all of those\n>processes at once.\n\nThere are only 30 connections - of those probably only 10 are really active.\nIt doesn't seem like we should be stressing this machine/\n\n\n>That is a lot of tables. Maybe a different OS will handle it better?\n>Maybe there's some way that you can use fewer connections and then the\n>OS could still handle it?\n\nIt would be less but then you can't maintain the db b/c of the constant\nvacuuming needed :-( I think the linux folks would get up in arms if you\ntold them they couldn't handle that many open files ;-)\n\nThanks,\nRalph\n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 05/12/2006\n16:07\n \n\n",
"msg_date": "Thu, 10 May 2007 13:45:28 +1200",
"msg_from": "\"Ralph Mason\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Woes"
},
{
"msg_contents": "Just adding a bit of relevant information:\n\nWe have the kernel file-max setting set to 297834 (256 per 4mb of ram).\n\n/proc/sys/fs/file-nr tells us that we have roughly 13000 allocated handles\nof which zero are always free.\n\n\n\nOn 10/05/07, Jeff Davis <[email protected]> wrote:\n>\n> On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:\n> > > 2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,\n> > > fd.c:471\n> > >\n> > > 2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: 12.362 ms\n> > >\n> > > 2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,\n> > > postgres.c:1090\n> > >\n> > >\n> > >\n> > > So we decreased the max_files_per_process to 800. This took care\n> > > of the error **BUT** about quadrupled the IO wait that is\n> happening\n> > > on the machine. It went from a peek of about 50% to peeks of over\n> > > 200% (4 processor machines, 4 gigs ram, raid). The load on the\n> > > machine remained constant.\n> > >\n> >\n> > Sounds to me like you just need to up the total amount of open files\n> > allowed by the operating system.\n>\n> It looks more like the opposite, here's the docs for\n> max_files_per_process:\n>\n> \"Sets the maximum number of simultaneously open files allowed to each\n> server subprocess. The default is one thousand files. If the kernel is\n> enforcing a safe per-process limit, you don't need to worry about this\n> setting. But on some platforms (notably, most BSD systems), the kernel\n> will allow individual processes to open many more files than the system\n> can really support when a large number of processes all try to open that\n> many files. If you find yourself seeing \"Too many open files\" failures,\n> try reducing this setting. This parameter can only be set at server\n> start.\"\n>\n> To me, that means that his machine is allowing the new FD to be created,\n> but then can't really support that many so it gives an error.\n>\n> Ralph, how many connections do you have open at once? It seems like the\n> machine perhaps just can't handle that many FDs in all of those\n> processes at once.\n>\n> That is a lot of tables. Maybe a different OS will handle it better?\n> Maybe there's some way that you can use fewer connections and then the\n> OS could still handle it?\n>\n> Regards,\n> Jeff Davis\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n\n\n-- \nScott Mohekey\nSystems Administrator\nTelogis\nIntelligent Location Technologies\n\nNOTICE:\nThis message (including any attachments) contains CONFIDENTIAL INFORMATION\nintended for a specific individual and purpose, and is protected by law. If\nyou are not the intended recipient, you should delete this message and are\nhereby notified that any disclosure, copying, or distribution of this\nmessage, or the taking of any action based on it, is strictly prohibited\n\nJust adding a bit of relevant information:We have the kernel file-max setting set to 297834 (256 per 4mb of ram)./proc/sys/fs/file-nr tells us that we have roughly 13000 allocated handles of which zero are always free.\nOn 10/05/07, Jeff Davis <[email protected]> wrote:\nOn Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:> > 2007-05-09 03:07:50.083 GMT 1146975740: LOCATION: BasicOpenFile,> > fd.c:471> >> > 2007-05-09 03:07:50.091 GMT 0: LOG: 00000: duration: \n12.362 ms> >> > 2007-05-09 03:07:50.091 GMT 0: LOCATION: exec_simple_query,> > postgres.c:1090> >> >> >> > So we decreased the max_files_per_process to 800. 
This took care\n> > of the error **BUT** about quadrupled the IO wait that is happening> > on the machine. It went from a peek of about 50% to peeks of over> > 200% (4 processor machines, 4 gigs ram, raid). The load on the\n> > machine remained constant.> >>> Sounds to me like you just need to up the total amount of open files> allowed by the operating system.It looks more like the opposite, here's the docs for\nmax_files_per_process:\"Sets the maximum number of simultaneously open files allowed to eachserver subprocess. The default is one thousand files. If the kernel isenforcing a safe per-process limit, you don't need to worry about this\nsetting. But on some platforms (notably, most BSD systems), the kernelwill allow individual processes to open many more files than the systemcan really support when a large number of processes all try to open that\nmany files. If you find yourself seeing \"Too many open files\" failures,try reducing this setting. This parameter can only be set at serverstart.\"To me, that means that his machine is allowing the new FD to be created,\nbut then can't really support that many so it gives an error.Ralph, how many connections do you have open at once? It seems like themachine perhaps just can't handle that many FDs in all of those\nprocesses at once.That is a lot of tables. Maybe a different OS will handle it better?Maybe there's some way that you can use fewer connections and then theOS could still handle it?Regards,\n Jeff Davis---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster-- Scott Mohekey\nSystems AdministratorTelogisIntelligent Location TechnologiesNOTICE:This message (including any attachments) contains CONFIDENTIAL INFORMATION intended for a specific individual and purpose, and is protected by law. If you are not the intended recipient, you should delete this message and are hereby notified that any disclosure, copying, or distribution of this message, or the taking of any action based on it, is strictly prohibited",
"msg_date": "Thu, 10 May 2007 13:50:26 +1200",
"msg_from": "\"Scott Mohekey\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Woes"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:\n>> Sounds to me like you just need to up the total amount of open files \n>> allowed by the operating system.\n\n> It looks more like the opposite, here's the docs for\n> max_files_per_process:\n\nI think Josh has got the right advice. The manual is just saying that\nyou can reduce max_files_per_process to avoid the failure, but it's not\nmaking any promises about the performance penalty for doing that.\nApparently Ralph's app needs a working set of between 800 and 1000 open\nfiles to have reasonable performance.\n\n> That is a lot of tables. Maybe a different OS will handle it better?\n> Maybe there's some way that you can use fewer connections and then the\n> OS could still handle it?\n\nAlso, it might be worth rethinking the database structure to reduce the\nnumber of tables. But for a quick-fix, increasing the kernel limit\nseems like the easiest answer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 May 2007 00:30:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Woes "
}
] |
[
{
"msg_contents": "Ralph Mason wrote:\n\n> I have several databases. They are each about 35gb in size and have about\n> 10.5K relations (count from pg_stat_all_tables) in them. Pg_class is about\n> 26k rows and the data directory contains about 70k files. These are busy\n> machines, they run about 50 xactions per second, ( aproxx insert / update /\n> delete about 500 rows per second).\n\nIs it always the same trigger the problematic one? Is it just PL/pgSQL,\nor do you have something else? Something that may be trying to open\nadditional files for example? Something that may be trying to open\nfiles behind your back? PL/Perl with funky operators or temp files?\n\nAlso, what PG version is this?\n\n> So we decreased the max_files_per_process to 800. This took care of the\n> error *BUT* about quadrupled the IO wait that is happening on the machine.\n> It went from a peek of about 50% to peeks of over 200% (4 processor\n> machines, 4 gigs ram, raid). The load on the machine remained constant.\n\nThe max_files_per_process settings controls how many actual file\ndescriptors each process is allowed to have. Postgres uses internally a\n\"virtual file descriptor\", which normally have one file descriptor open\neach. However, if your transactions need to access lots of files, the\nVFDs will close the kernel FDs to allow other VFDs to open theirs.\n\nSo it sounds like your transaction has more than 800 files open. The\nextra IO wait could be caused by the additional system calls to open and\nclose those files as needed. I would actually expect it to cause extra\n\"system\" load (as opposed to \"user\") rather than IO, but I'm not sure.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Wed, 9 May 2007 22:05:08 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Woes"
}
] |
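A rough way to quantify the file-descriptor pressure described above, sketched here in plain SQL rather than taken from the thread itself (the GUC and catalog names are real; the interpretation is only a heuristic):

  -- Per-backend cap on simultaneously open files:
  SHOW max_files_per_process;

  -- Every table, index and toast table is at least one file on disk, so this
  -- gives a feel for how many kernel FDs the cluster as a whole could want:
  SELECT count(*) AS relation_files
  FROM pg_class
  WHERE relkind IN ('r', 'i', 't');

Multiplying the first number by the number of concurrent backends gives the worst case that the kernel-wide limit (fs.file-max on Linux) has to accommodate, which is the limit Tom suggests raising.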
[
{
"msg_contents": "Dear list,\n\nI'm running postgres on a tomcat server. The vacuum is run every hour\n(cronjob) which leads to a performance drop of the tomcat applications.\nI played around with renice command and I think it is possible to reduce\nthis effect which a renice. The problem is how can I figure out the PID\nof the postmaster performing the vacuum(automated)? Has anybody a nice\nsolution to change process priority? A shell script, maybe even for java?\n\nbest regards\n\nDani\n\n\n",
"msg_date": "Thu, 10 May 2007 06:38:11 +0200",
"msg_from": "Daniel Haensse <[email protected]>",
"msg_from_op": true,
"msg_subject": "Background vacuum"
},
{
"msg_contents": "Daniel Haensse wrote:\n> Dear list,\n> \n> I'm running postgres on a tomcat server. The vacuum is run every hour\n> (cronjob) which leads to a performance drop of the tomcat applications.\n> I played around with renice command and I think it is possible to reduce\n> this effect which a renice. The problem is how can I figure out the PID\n> of the postmaster performing the vacuum(automated)? Has anybody a nice\n> solution to change process priority? A shell script, maybe even for java?\n> \n\nWhile this may technically work, I think it lacks a key point. 'nice' ( at \nleast the versions I'm familiar with ) do not adjust I/O priority. VACUUM is \nbogging things down because of the extra strain on I/O. CPU usage shouldn't \nreally be much of a factor.\n\nInstead, I would recommend looking at vacuum_cost_delay and the related settings \nto make vacuum lower priority than the queries you care about. This should be a \ncleaner solution for you.\n\n-Dan\n",
"msg_date": "Wed, 09 May 2007 23:05:21 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
},
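A minimal sketch of the cost-based vacuum delay mentioned above; the parameter names are real GUCs, but the values are only illustrative starting points and the table name is hypothetical:

  -- Throttle a manual VACUUM run from cron; autovacuum has its own
  -- autovacuum_vacuum_cost_* equivalents in postgresql.conf.
  SET vacuum_cost_delay = 20;      -- sleep 20 ms whenever the cost limit is reached
  SET vacuum_cost_limit = 200;     -- page-cost budget accumulated between sleeps
  VACUUM ANALYZE some_busy_table;  -- hypothetical table name

The point is that this throttles VACUUM's I/O directly, which is usually what is actually starving the foreground queries, rather than its CPU share.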
{
"msg_contents": "Dan Harris wrote:\n> Daniel Haensse wrote:\n>> Has anybody a nice\n>> solution to change process priority? A shell script, maybe even for java?\n\nOne way is to write astored procedure that sets it's own priority.\nAn example is here:\nhttp://weblog.bignerdranch.com/?p=11\n\n\n> While this may technically work, I think it lacks a key point. 'nice' (\n> at least the versions I'm familiar with ) do not adjust I/O priority. \n> VACUUM is bogging things down because of the extra strain on I/O. CPU\n> usage shouldn't really be much of a factor.\n\nActually, CPU priorities _are_ an effective way of indirectly scheduling\nI/O priorities.\n\nThis paper studied both CPU and lock priorities on a variety\nof databases including PostgreSQL.\n\nhttp://www.cs.cmu.edu/~bianca/icde04.pdf\n\n\" By contrast, for PostgreSQL, lock scheduling is not as\n effective as CPU scheduling (see Figure 4(c)).\n ...\n The effectiveness of CPU-Prio for TPC-C on\n PostgreSQL is surprising, given that I/O (I/O-related\n lightweight locks) is its bottleneck. Due to CPU prioritization,\n high-priority transactions are able to request I/O resources\n before low-priority transactions can. As a result,\n high-priority transactions wait fewer times (52% fewer) for\n I/O, and when they do wait, they wait behind fewer transactions\n (43% fewer). The fact that simple CPU prioritization\n is able to improve performance so significantly suggests that\n more complicated I/O scheduling is not always necessary.\n ...\n For TPC-C on MVCC DBMS, and in particular PostgreSQL,\n CPU scheduling is most effective, due to its ability\n to indirectly schedule the I/O bottleneck.\n ...\n For TPC-C running on PostgreSQL, the simplest CPU scheduling\n policy (CPU-Prio) provides a factor of 2 improvement\n for high-priority transactions, while adding priority inheritance\n (CPU-Prio-Inherit) provides a factor of 6 improvement\n while hardly penalizing low-priority transactions.\n Preemption (P-CPU) provides no appreciable benefit\n over CPU-Prio-Inherit\n \"\n\n> Instead, I would recommend looking at vacuum_cost_delay and the related\n> settings to make vacuum lower priority than the queries you care about. \n> This should be a cleaner solution for you.\n\nYeah, that's still true.\n\n",
"msg_date": "Thu, 10 May 2007 17:10:56 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
},
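For completeness, a hedged sketch of the "function that sets its own priority" approach linked above, written as PL/PerlU instead of the C version on that page; the function name is invented, plperlu must already be installed, and an unprivileged backend can only raise its nice value (lower its priority), not undo it:

  CREATE OR REPLACE FUNCTION lower_my_priority(increment integer)
  RETURNS void AS $$
      # Raise this backend's nice value, i.e. lower its CPU priority.
      require POSIX;
      POSIX::nice($_[0]);
  $$ LANGUAGE plperlu;

  SELECT lower_my_priority(10);   -- call at the start of a low-priority session

As the rest of the thread notes, this only schedules CPU, so it helps with I/O contention at best indirectly.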
{
"msg_contents": "On Thu, May 10, 2007 at 05:10:56PM -0700, Ron Mayer wrote:\n> One way is to write astored procedure that sets it's own priority.\n> An example is here:\n> http://weblog.bignerdranch.com/?p=11\n\nDo you have evidence to show this will actually work consistently?\nThe problem with doing this is that if your process is holding a lock\nthat prevents some other process from doing something, then your\nlowered priority actually causes that _other_ process to go slower\ntoo. This is part of the reason people object to the suggestion that\nrenicing a single back end will help anything.\n\n> This paper studied both CPU and lock priorities on a variety\n> of databases including PostgreSQL.\n> \n> http://www.cs.cmu.edu/~bianca/icde04.pdf\n> \n> \" By contrast, for PostgreSQL, lock scheduling is not as\n> effective as CPU scheduling (see Figure 4(c)).\n\nIt is likely that in _some_ cases, you can get this benefit, because\nyou don't have contention issues. The explanation for the good lock\nperformance by Postgres on the TPC-C tests they were using is\nPostgreSQL's MVCC: Postgres locks less. The problem comes when you\nhave contention, and in that case, CPU scheduling will really hurt. \n\nThis means that, to use CPU scheduling safely, you have to be really\nsure that you know what the other transactions are doing. \n\nA \n\n-- \nAndrew Sullivan | [email protected]\nInformation security isn't a technological problem. It's an economics\nproblem.\n\t\t--Bruce Schneier\n",
"msg_date": "Thu, 17 May 2007 09:55:30 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
},
{
"msg_contents": "Andrew Sullivan wrote:\n> On Thu, May 10, 2007 at 05:10:56PM -0700, Ron Mayer wrote:\n>> One way is to write astored procedure that sets it's own priority.\n>> An example is here:\n>> http://weblog.bignerdranch.com/?p=11\n> \n> Do you have evidence to show this will actually work consistently?\n\nThe paper referenced below gives a better explanation than I can.\n\nTheir conclusion was that on many real-life workloads (including\nTPC-C and TPC-H like workloads) on many databases (including DB2\nand postgresql) the benefits vastly outweighed the disadvantages.\n\n> The problem with doing this is that if your process is holding a lock\n> that prevents some other process from doing something, then your\n> lowered priority actually causes that _other_ process to go slower\n> too. This is part of the reason people object to the suggestion that\n> renicing a single back end will help anything.\n\nSure. And in the paper they discussed the effect and found that\nif you do have an OS scheduler than supports priority inheritance\nthe benefits are even bigger than those without it. But even\nfor OS's and scheduler combinations without it the benefits\nwere very significant.\n\n> \n>> This paper studied both CPU and lock priorities on a variety\n>> of databases including PostgreSQL.\n>>\n>> http://www.cs.cmu.edu/~bianca/icde04.pdf\n>>\n>> \" By contrast, for PostgreSQL, lock scheduling is not as\n>> effective as CPU scheduling (see Figure 4(c)).\n> \n> It is likely that in _some_ cases, you can get this benefit, because\n> you don't have contention issues. The explanation for the good lock\n> performance by Postgres on the TPC-C tests they were using is\n> PostgreSQL's MVCC: Postgres locks less. The problem comes when you\n> have contention, and in that case, CPU scheduling will really hurt. \n> \n> This means that, to use CPU scheduling safely, you have to be really\n> sure that you know what the other transactions are doing. \n\nNot necessarily. From the wide range of conditions the paper tested\nI'd say it's more like quicksort - you need to be sure you avoid\ntheoretical pathological conditions that noone (that I can find)\nhas encountered in practice.\n\nIf you do know of such a workload, I (and imagine the authors\nof that paper) would be quite interested.\n\nSince they showed that the benefits are very real for both\nTPC-C and TPC-H like workloads I think the burden of proof\nis now more on the people warning of the (so far theoretical)\ndrawbacks.\n",
"msg_date": "Thu, 17 May 2007 18:44:24 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
},
{
"msg_contents": "Ah, glad this came up again 'cause a problem here caused my original reply \nto bounce.\n\nOn Thu, 10 May 2007, Ron Mayer wrote:\n\n> Actually, CPU priorities _are_ an effective way of indirectly scheduling \n> I/O priorities. This paper studied both CPU and lock priorities on a \n> variety of databases including PostgreSQL. \n> http://www.cs.cmu.edu/~bianca/icde04.pdf\n\nI spent a fair amount of time analyzing that paper recently, and found it \nhard to draw any strong current conclusions from it. Locking and related \nscalability issues are much better now than in the PG 7.3 they tested. \nFor example, from the paper:\n\n\"We find almost all lightweight locking in PostgreSQL fucntions to \nserialize the I/O buffer pool and WAL activity...as a result, we attribute \nall the lightweight lock waiting time for the above-listed locks to I/O.\"\n\nWell, sure, if you classify those as I/O waits it's no surprise you can \ndarn near directly control them via CPU scheduling; I question the current \nrelevancy of this historical observation about the old code. I think it's \nmuch easier to get into an honest I/O bound situation now with a TPC-C \nlike workload (they kind of cheated on that part too which is a whole \n'nother discussion), especially with the even faster speeds of modern \nprocessors, and then you're in a situation where CPU scheduling is not so \neffective for indirectly controlling I/O prioritization.\n\nCount me on the side that agrees adjusting the vacuuming parameters is the \nmore straightforward way to cope with this problem.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 17 May 2007 22:50:48 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
},
{
"msg_contents": "Greg Smith wrote:\n> \n> Count me on the side that agrees adjusting the vacuuming parameters is\n> the more straightforward way to cope with this problem.\n\n\nAgreed for vacuum; but it still seems interesting to me that\nacross databases and workloads high priority transactions\ntended to get through faster than low priority ones. Is there\nany reason to believe that the drawbacks of priority inversion\noutweigh the benefits of setting priorities?\n",
"msg_date": "Thu, 17 May 2007 20:09:02 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
},
{
"msg_contents": "Ron Mayer <[email protected]> writes:\n> Greg Smith wrote:\n>> Count me on the side that agrees adjusting the vacuuming parameters is\n>> the more straightforward way to cope with this problem.\n\n> Agreed for vacuum; but it still seems interesting to me that\n> across databases and workloads high priority transactions\n> tended to get through faster than low priority ones. Is there\n> any reason to believe that the drawbacks of priority inversion\n> outweigh the benefits of setting priorities?\n\nWell, it's unclear, and anecdotal evidence is unlikely to convince\nanybody. I had put some stock in the CMU paper, but if it's based\non PG 7.3 then you've got to **seriously** question its relevance\nto the current code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 May 2007 23:22:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum "
},
{
"msg_contents": "Tom Lane wrote:\n> Ron Mayer <[email protected]> writes:\n>> Greg Smith wrote:\n>>> Count me on the side that agrees adjusting the vacuuming parameters is\n>>> the more straightforward way to cope with this problem.\n> \n>> Agreed for vacuum; but it still seems interesting to me that\n>> across databases and workloads high priority transactions\n>> tended to get through faster than low priority ones. Is there\n>> any reason to believe that the drawbacks of priority inversion\n>> outweigh the benefits of setting priorities?\n> \n> Well, it's unclear, and anecdotal evidence is unlikely to convince\n> anybody. I had put some stock in the CMU paper, but if it's based\n> on PG 7.3 then you've got to **seriously** question its relevance\n> to the current code.\n\nI was thinking the paper's results might apply more generally\nto RDBMS-like applications since they did test 3 of them with\ndifferent locking behavior and different bottlenecks.\n\nBut true, I should stop bringing up 7.3 examples.\n\n\nAnecdotally ;-) I've found renice-ing reports to help; especially\nin the (probably not too uncommon case) where slow running\nbatch reporting queries hit different tables than interactive\nreporting queries. I guess that's why I keep defending\npriorities as a useful technique. It seems even more useful\nconsidering the existence of schedulers that have priority\ninheritance features.\n\nI'll admit there's still the theoretical possibility that\nit's a foot-gun so I don't mind people having to write\ntheir own stored procedure to enable it - but I'd be\nsurprised if anyone could find a real world case where\npriorities would do more harm than good.\n\nThough, yeah, it'd be easy to construct an artificial\ncase that'd demonstrate priority inversion (i.e. have\na low priority process that takes a lock and sits\nand spins on some CPU-intensive stored procedure\nwithout doing any I/O).\n",
"msg_date": "Fri, 18 May 2007 12:21:39 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
},
{
"msg_contents": "On Fri, 18 May 2007, Ron Mayer wrote:\n\n> Anecdotally ;-) I've found renice-ing reports to help\n\nLet's break this down into individual parts:\n\n1) Is there enough CPU-intensive activity in some database tasks that they \ncan be usefully be controlled by tools like nice? Sure.\n\n2) Is it so likely that you'll fall victim to a priority inversion problem \nthat you shouldn't ever consider that technique? No.\n\n3) Does the I/O scheduler in modern OSes deal with a lot more things than \njust the CPU? You bet.\n\n4) Is vacuuming a challenging I/O demand? Quite.\n\nAdd all this up, and that fact that you're satisfied with how nice has \nworked successfully for you doesn't have to conflict with an opinion that \nit's not the best approach for controlling vacuuming. I just wouldn't \nextrapolate your experience too far here.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Sat, 19 May 2007 01:27:16 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
},
{
"msg_contents": "Greg Smith wrote:\n> \n> Let's break this down into individual parts:\n\nGreat summary.\n\n> 4) Is vacuuming a challenging I/O demand? Quite.\n> \n> Add all this up, and that fact that you're satisfied with how nice has\n> worked successfully for you doesn't have to conflict with an opinion\n> that it's not the best approach for controlling vacuuming. I just\n> wouldn't extrapolate your experience too far here.\n\nI wasn't claiming it's a the best approach for vacuuming.\n\n From my first posting in this thread I've been agreeing that\nvacuum_cost_delay is the better tool for handling vacuum. Just\nthat the original poster also asked for a way of setting priorities\nso I pointed him to one.\n",
"msg_date": "Sat, 19 May 2007 15:38:12 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background vacuum"
}
] |
[
{
"msg_contents": "\n\nHi again,\n\nVery mixed news to report...\n\nRecap:\n\n\nI'd reported:\n> Despite numerous efforts, we're unable to solve a severe performance \n>limitation between Pg 7.3.2\n> and Pg 8.1.4.\n>\n> The query and 'explain analyze' plan below, runs in \n> \t26.20 msec on Pg 7.3.2, and \n> \t2463.968 ms on Pg 8.1.4, \n>\n\n\nTom Lane responded:\n>You're not getting the indexscan optimization of the LIKE clause, which\n>is most likely due to having initdb'd the 8.1 installation in something\n>other than C locale. You can either redo the initdb in C locale (which\n>might be a good move to fix other inconsistencies from the 7.3 behavior\n>you're used to) or create a varchar_pattern_ops index on the column(s)\n>you're using LIKE with.\n\n\nSteinar H. Gunderson suggested:\n>You could always try\n>\n> CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);\n\n\nI'd responded:\n>>You could always try\n>>\n>> CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);\n>\n>WOW! we're now at runtime 0.367ms on Pg8\n>\n>Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n>\n>Thanks again - will report back soon.\n\n\nAlvaro Herrera pointed out:\n>> Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n>>\n>That's alternative to the pattern_ops index; it won't help you obtain a\n>plan faster than this one.\n\n\n\nTom concurred:\n>>> Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n>>\n>> That's alternative to the pattern_ops index; it won't help you obtain\n>> a plan faster than this one.\n>\n>No, but since their old DB was evidently running in C locale, this\n>seems like a prudent thing to do to avoid other surprising \n>changes in behavior.\n\n=====\n\nWe reconfigured the server, as follows: \n\ninitdb -D /var/lib/pgsql/data --encoding=UTF8 --locale=C\n\n -----I'm wondering if this was incorrect (?). 
our Pg7 servers encode SQL_ASCII -----\n\n\nNEXT, loaded db, and the good news is the query showed:\n\tTotal runtime: 0.372 ms\n\n\nAs mentioned in original post, this query is just part of a longer procedure.\nreminder: \n\t\t The longer procedure was taking >10 *hours* to run on Pg8.1.4\n This same longer procedure runs in ~22 minutes on Pg7.3.2 server.\n\n\n=====\n\nBefore redoing the initdb with C-locale, I did a CREATE INDEX on the 8.1.4\nserver, which resulted not only in much faster query times, but in a drastic\nimprovement in the time of the overall/longer procedure (<11mins).\n\nWith the initdb locale C Pg8.1.4 server, it ran for 6 hours before I killed it (and output\nfile was <.25 expected end size).\n\n======\n\n\nI'm perplexed we're not seeing better performance on Pg8.1.4 server given the \n22 minutes runtime we're seeing on the Pg7.3.2 servers (on older hardware and OS).\n\n\nSo, while initdb C locale helped the initial query, it seems to have had no positive affect\non the longer procedure.\n\n\nIs there some other difference between 7.3.2 and 8.1.4 we're missing?\n\n\nThanks for any help.\nRegards,\nSusan Russo\n\n\n=======\nI enclose the db calls (selects) contained in the 'overall procedure' referred to above (taken directly\nfrom a perl script): THOUGH THIS RUNS IN 22 mins on Pg7.3.2, and >10 hours on Pg8.1.4...\n\n \tmy $aq = $dbh->prepare(sprintf(\"SELECT * from dbxref dx, db where accession = '%s' and dx.db_id = db.db_id and db.name = 'GB_protein'\",$rec));\n\n\n my $pq = $dbh->prepare(sprintf(\"SELECT o.genus, o.species, f.feature_id, f.uniquename, f.name, accession, is_current from feature f, feature_dbxref fd, dbxref d, cvterm cvt, organism o where accession = '%s' and d.dbxref_id = fd.dbxref_id and fd.feature_id = f.feature_id and f.uniquename like '%s' and f.organism_id = o.organism_id and f.type_id = cvt.cvterm_id and cvt.name = 'gene'\",$rec,$fbgnwc));\n\n\n my $uq = $dbh2->prepare(sprintf(\"SELECT db.name, accession, version, is_current from feature_dbxref fd, dbxref dx, db where fd.feature_id = %d and fd.dbxref_id = dx.dbxref_id and dx.db_id = db.db_id and db.name = '%s'\",$pr{feature_id},$uds{$uh{$rec}{stflag}}));\n\n\n\n my $cq = $dbh2->prepare(sprintf(\"SELECT f.uniquename, f.name, cvt.name as ntype, dx.db_id, dx.accession, fd.is_current from dbxref dx, feature f, feature_dbxref fd, cvte\nrm cvt where accession like '%s' and dx.dbxref_id = fd.dbxref_id and fd.feature_id = f.feature_id and f.type_id = cvt.cvterm_id and cvt.name not in ('gene','protein','natural_transposable_element','chromosome_structure_variation','chromosome_arm','repeat_region')\",$nacc));\n\n\n\n",
"msg_date": "Thu, 10 May 2007 09:23:03 -0400 (EDT)",
"msg_from": "Susan Russo <[email protected]>",
"msg_from_op": true,
"msg_subject": "REVISIT specific query (not all) on Pg8 MUCH slower than Pg7 "
},
{
"msg_contents": "On Thu, May 10, 2007 at 09:23:03AM -0400, Susan Russo wrote:\n> \tmy $aq = $dbh->prepare(sprintf(\"SELECT * from dbxref dx, db where accession = '%s' and dx.db_id = db.db_id and db.name = 'GB_protein'\",$rec));\n\nThis is not related to your performance issues, but it usually considered bad\nform to use sprintf like this (mainly for security reasons). The usual way of\ndoing this would be:\n\n my $aq = $dbh->prepare(\"SELECT * from dbxref dx, db where accession = ? and dx.db_id = db.db_id and db.name = 'GB_protein'\");\n $aq->execute($rec);\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 10 May 2007 15:38:07 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REVISIT specific query (not all) on Pg8 MUCH slower than Pg7"
},
{
"msg_contents": "In response to Susan Russo <[email protected]>:\n\n> \n> \n> Hi again,\n> \n> Very mixed news to report...\n> \n> Recap:\n> \n> \n> I'd reported:\n> > Despite numerous efforts, we're unable to solve a severe performance \n> >limitation between Pg 7.3.2\n> > and Pg 8.1.4.\n> >\n> > The query and 'explain analyze' plan below, runs in \n> > \t26.20 msec on Pg 7.3.2, and \n> > \t2463.968 ms on Pg 8.1.4, \n> >\n> \n> \n> Tom Lane responded:\n> >You're not getting the indexscan optimization of the LIKE clause, which\n> >is most likely due to having initdb'd the 8.1 installation in something\n> >other than C locale. You can either redo the initdb in C locale (which\n> >might be a good move to fix other inconsistencies from the 7.3 behavior\n> >you're used to) or create a varchar_pattern_ops index on the column(s)\n> >you're using LIKE with.\n> \n> \n> Steinar H. Gunderson suggested:\n> >You could always try\n> >\n> > CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);\n> \n> \n> I'd responded:\n> >>You could always try\n> >>\n> >> CREATE INDEX test_index ON dbxref (accession varchar_pattern_ops);\n> >\n> >WOW! we're now at runtime 0.367ms on Pg8\n> >\n> >Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n> >\n> >Thanks again - will report back soon.\n> \n> \n> Alvaro Herrera pointed out:\n> >> Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n> >>\n> >That's alternative to the pattern_ops index; it won't help you obtain a\n> >plan faster than this one.\n> \n> \n> \n> Tom concurred:\n> >>> Next step is to initdb w/C Locale (tonight) (Thanks Tom et al.!).\n> >>\n> >> That's alternative to the pattern_ops index; it won't help you obtain\n> >> a plan faster than this one.\n> >\n> >No, but since their old DB was evidently running in C locale, this\n> >seems like a prudent thing to do to avoid other surprising \n> >changes in behavior.\n> \n> =====\n> \n> We reconfigured the server, as follows: \n> \n> initdb -D /var/lib/pgsql/data --encoding=UTF8 --locale=C\n> \n> -----I'm wondering if this was incorrect (?). 
our Pg7 servers encode SQL_ASCII -----\n> \n> \n> NEXT, loaded db, and the good news is the query showed:\n> \tTotal runtime: 0.372 ms\n> \n> \n> As mentioned in original post, this query is just part of a longer procedure.\n> reminder: \n> \t\t The longer procedure was taking >10 *hours* to run on Pg8.1.4\n> This same longer procedure runs in ~22 minutes on Pg7.3.2 server.\n> \n> \n> =====\n> \n> Before redoing the initdb with C-locale, I did a CREATE INDEX on the 8.1.4\n> server, which resulted not only in much faster query times, but in a drastic\n> improvement in the time of the overall/longer procedure (<11mins).\n> \n> With the initdb locale C Pg8.1.4 server, it ran for 6 hours before I killed it (and output\n> file was <.25 expected end size).\n\nQuick reminders:\n*) Did you recreate all the indexes on the new system after the initdb?\n*) Did you vacuum and analyze after loading your data?\n\n> \n> ======\n> \n> \n> I'm perplexed we're not seeing better performance on Pg8.1.4 server given the \n> 22 minutes runtime we're seeing on the Pg7.3.2 servers (on older hardware and OS).\n> \n> \n> So, while initdb C locale helped the initial query, it seems to have had no positive affect\n> on the longer procedure.\n> \n> \n> Is there some other difference between 7.3.2 and 8.1.4 we're missing?\n\nI suggest you provide \"explain analyze\" output for the query on both versions.\n\n> \n> \n> Thanks for any help.\n> Regards,\n> Susan Russo\n> \n> \n> =======\n> I enclose the db calls (selects) contained in the 'overall procedure' referred to above (taken directly\n> from a perl script): THOUGH THIS RUNS IN 22 mins on Pg7.3.2, and >10 hours on Pg8.1.4...\n> \n> \tmy $aq = $dbh->prepare(sprintf(\"SELECT * from dbxref dx, db where accession = '%s' and dx.db_id = db.db_id and db.name = 'GB_protein'\",$rec));\n> \n> \n> my $pq = $dbh->prepare(sprintf(\"SELECT o.genus, o.species, f.feature_id, f.uniquename, f.name, accession, is_current from feature f, feature_dbxref fd, dbxref d, cvterm cvt, organism o where accession = '%s' and d.dbxref_id = fd.dbxref_id and fd.feature_id = f.feature_id and f.uniquename like '%s' and f.organism_id = o.organism_id and f.type_id = cvt.cvterm_id and cvt.name = 'gene'\",$rec,$fbgnwc));\n> \n> \n> my $uq = $dbh2->prepare(sprintf(\"SELECT db.name, accession, version, is_current from feature_dbxref fd, dbxref dx, db where fd.feature_id = %d and fd.dbxref_id = dx.dbxref_id and dx.db_id = db.db_id and db.name = '%s'\",$pr{feature_id},$uds{$uh{$rec}{stflag}}));\n> \n> \n> \n> my $cq = $dbh2->prepare(sprintf(\"SELECT f.uniquename, f.name, cvt.name as ntype, dx.db_id, dx.accession, fd.is_current from dbxref dx, feature f, feature_dbxref fd, cvte\n> rm cvt where accession like '%s' and dx.dbxref_id = fd.dbxref_id and fd.feature_id = f.feature_id and f.type_id = cvt.cvterm_id and cvt.name not in ('gene','protein','natural_transposable_element','chromosome_structure_variation','chromosome_arm','repeat_region')\",$nacc));\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the 
individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n",
"msg_date": "Thu, 10 May 2007 09:47:02 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: REVISIT specific query (not all) on Pg8 MUCH slower\n than Pg7"
}
] |
[
{
"msg_contents": ">Quick reminders:\n>*) Did you recreate all the indexes on the new system after the initdb?\n>*) Did you vacuum and analyze after loading your data?\n\nNo, I didn't - am reindexing db now and will run vacuum analyze afterwards. \n\n>I suggest you provide \"explain analyze\" output for the query on both versions.\n\nPg8: \n\n-----------------------------------------------------------------\n Merge Join (cost=151939.73..156342.67 rows=10131 width=1585) (actual time=0.129..0.129 rows=0 loops=1)\n Merge Cond: (\"outer\".cvterm_id = \"inner\".type_id)\n -> Index Scan using cvterm_pkey on cvterm cvt (cost=0.00..4168.22 rows=32478 width=520) (actual time=0.044..0.044 rows=1 loops=1)\n Filter: (((name)::text <> 'gene'::text) AND ((name)::text <> 'protein'::text) AND ((name)::text <>\n'natural_transposable_element'::text) AND ((name)::text <> 'chromosome_structure_variation'::text) AND ((name)::text <> \n'chromosome_arm'::text) AND ((name)::text <> 'repeat_region'::text))\n -> Sort (cost=151939.73..151965.83 rows=10441 width=1073) (actual time=0.079..0.079 rows=0 loops=1)\n Sort Key: f.type_id\n -> Nested Loop (cost=17495.27..151242.80 rows=10441 width=1073) (actual time=0.070..0.070 rows=0 loops=1)\n -> Hash Join (cost=17495.27..88325.38 rows=10441 width=525) (actual time=0.068..0.068 rows=0 loops=1)\n Hash Cond: (\"outer\".dbxref_id = \"inner\".dbxref_id)\n -> Seq Scan on feature_dbxref fd (cost=0.00..34182.71 rows=2088171 width=9) (actual time=0.008..0.008 rows=1 loops=1)\n -> Hash (cost=17466.34..17466.34 rows=11572 width=524) (actual time=0.042..0.042 rows=0 loops=1)\n -> Bitmap Heap Scan on dbxref dx (cost=117.43..17466.34 rows=11572 width=524) (actual time=0.041..0.041 rows=0 loops=1)\n Filter: ((accession)::text ~~ 'AY851043%'::text)\n -> Bitmap Index Scan on dbxref_idx2 (cost=0.00..117.43 rows=11572 width=0) (actual time=0.037..0.037 rows=0 loops=1)\n Index Cond: (((accession)::text >= 'AY851043'::character varying) AND ((accession)::text < 'AY851044'::character varying))\n -> Index Scan using feature_pkey on feature f (cost=0.00..6.01 rows=1 width=556) (never executed)\n Index Cond: (\"outer\".feature_id = f.feature_id)\n Total runtime: 0.381 ms\n(18 rows)\n\n\n=======\n\nPg7:\n\n-----------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..23.45 rows=1 width=120) (actual time=0.08..0.08 rows=0 loops=1)\n -> Nested Loop (cost=0.00..17.49 rows=1 width=82) (actual time=0.08..0.08 rows=0 loops=1)\n -> Nested Loop (cost=0.00..11.93 rows=1 width=30) (actual time=0.08..0.08 rows=0 loops=1)\n -> Index Scan using dbxref_idx2 on dbxref dx (cost=0.00..5.83 rows=1 width=21) (actual time=0.08..0.08 rows=0 loops=1)\n Index Cond: ((accession >= 'AY851043'::character varying) AND (accession < 'AY851044'::character varying))\n Filter: (accession ~~ 'AY851043%'::text)\n -> Index Scan using feature_dbxref_idx2 on feature_dbxref fd (cost=0.00..6.05 rows=5 width=9) (never executed)\n Index Cond: (fd.dbxref_id = \"outer\".dbxref_id)\n -> Index Scan using feature_pkey on feature f (cost=0.00..5.54 rows=1 width=52) (never executed)\n Index Cond: (\"outer\".feature_id = f.feature_id)\n -> Index Scan using cvterm_pkey on cvterm cvt (cost=0.00..5.94 rows=1 width=38) (never executed)\n Index Cond: (\"outer\".type_id = cvt.cvterm_id)\n Filter: ((name <> 'gene'::character varying) AND (name <> 'protein'::character varying) AND (name <> 'natural_transposable_element'::character varying) AND (name <> 'chromosome_structure_variation'::character 
varying)\nAND (name <> 'chromosome_arm'::character varying) AND (name <> 'repeat_region'::character varying))\n Total runtime: 0.36 msec\n(14 rows)\n\n",
"msg_date": "Thu, 10 May 2007 10:08:41 -0400 (EDT)",
"msg_from": "Susan Russo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: REVISIT specific query (not all) on Pg8 MUCH slower than Pg7"
}
] |
[
{
"msg_contents": "Hello again - \nvacuum analyze of db did the trick, thanks!\nlonger procedure went from over 6 hours to ~11 minutes....quite dramatic.\n\nReindexing wasn't necessary (did test on one db -slog-slog-, though).\n\nRegards,\nSusan\n",
"msg_date": "Thu, 10 May 2007 22:05:01 -0400 (EDT)",
"msg_from": "Susan Russo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: REVISIT specific query (not all) on Pg8 MUCH slower than Pg7"
}
] |
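Condensing the fix that emerged over these messages into one sketch (dbxref and accession come from Susan's schema; the index name and the cut-down query are illustrative): after restoring a dump into a non-C-locale cluster, prefix LIKE searches need a pattern_ops index, and the planner needs fresh statistics before it will use it well.

  -- Once, after loading the dump:
  VACUUM ANALYZE;

  -- Lets accession LIKE 'AY851043%' use an index scan in a non-C locale:
  CREATE INDEX dbxref_accession_pattern_idx
      ON dbxref (accession varchar_pattern_ops);

  -- Sanity check with a simplified version of the query from the thread:
  EXPLAIN ANALYZE
  SELECT * FROM dbxref WHERE accession LIKE 'AY851043%';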
[
{
"msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 3270\nLogged by: Liviu Ionescu\nEmail address: [email protected]\nPostgreSQL version: 8.2.4\nOperating system: Linux\nDescription: limit < 16 optimizer behaviour\nDetails: \n\nI have a table of about 15Mrows, and a query like this:\n\nSELECT historianid,storagedate,slotdate,status,value FROM historiandata \nJOIN rtunodes ON(historiandata.historianid=rtunodes.nodeid)\nJOIN rtus ON(rtunodes.rtuid=rtus.nodeid)\nWHERE realmid IN (1119,1422,698,1428) \nAND historianid in (2996)\nORDER BY storagedate desc \nLIMIT 10\n\nif there are no records with the given historianid, if limit is >= 16 the\nquery is quite fast, otherwise it takes forever.\n\nmy current fix was to always increase the limit to 16, but, although I know\nthe optimizer behaviour depends on LIMIT, I still feel this looks like a\nbug; if the resultset has no records the value of the LIMIT should not\nmatter.\n\nregards,\n\nLiviu Ionescu\n\n\n\nCREATE TABLE historiandata\n(\n historianid int4 NOT NULL,\n status int2 NOT NULL DEFAULT 0,\n value float8,\n slotdate timestamptz NOT NULL,\n storagedate timestamptz NOT NULL DEFAULT now(),\n CONSTRAINT historiandata_pkey PRIMARY KEY (historianid, slotdate),\n CONSTRAINT historianid_fkey FOREIGN KEY (historianid)\n REFERENCES historians (nodeid) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE RESTRICT\n) \nWITHOUT OIDS;\nALTER TABLE historiandata OWNER TO tomcat;\n\n\n-- Index: historiandata_historianid_index\n\n-- DROP INDEX historiandata_historianid_index;\n\nCREATE INDEX historiandata_historianid_index\n ON historiandata\n USING btree\n (historianid);\n\n-- Index: historiandata_slotdate_index\n\n-- DROP INDEX historiandata_slotdate_index;\n\nCREATE INDEX historiandata_slotdate_index\n ON historiandata\n USING btree\n (slotdate);\n\n-- Index: historiandata_storagedate_index\n\n-- DROP INDEX historiandata_storagedate_index;\n\nCREATE INDEX historiandata_storagedate_index\n ON historiandata\n USING btree\n (storagedate);\n\n\nCREATE TABLE rtunodes\n(\n nodeid int4 NOT NULL,\n rtuid int4 NOT NULL,\n no_publicnodeid int4,\n name varchar(64) NOT NULL,\n isinvalid bool NOT NULL DEFAULT false,\n nodetype varchar(16),\n CONSTRAINT rtunodes_pkey PRIMARY KEY (nodeid),\n CONSTRAINT nodeid_fkey FOREIGN KEY (nodeid)\n REFERENCES nodes (nodeid) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE RESTRICT,\n CONSTRAINT rtuid_fkey FOREIGN KEY (rtuid)\n REFERENCES rtus (nodeid) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE RESTRICT\n) \nWITHOUT OIDS;\nALTER TABLE rtunodes OWNER TO tomcat;\n\n\n\nCREATE TABLE rtus\n(\n nodeid int4 NOT NULL,\n passwd varchar(10) NOT NULL,\n xml text,\n no_nextpublicnodeid int4 NOT NULL DEFAULT 1,\n rtudriverid int2,\n realmid int4 NOT NULL,\n enablegetlogin bool NOT NULL DEFAULT false,\n enablegetconfig bool NOT NULL DEFAULT false,\n businfoxml text,\n uniqueid varchar(32) NOT NULL,\n no_publicrtuid int4,\n loginname varchar(10) NOT NULL,\n protocolversion varchar(8) DEFAULT '0.0'::character varying,\n isinvalid bool DEFAULT false,\n CONSTRAINT rtus_pkey PRIMARY KEY (nodeid),\n CONSTRAINT nodeid_fkey FOREIGN KEY (nodeid)\n REFERENCES nodes (nodeid) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE RESTRICT,\n CONSTRAINT realmid_fkey FOREIGN KEY (realmid)\n REFERENCES realms (nodeid) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE RESTRICT,\n CONSTRAINT rtudriverid_fkey FOREIGN KEY (rtudriverid)\n REFERENCES rtudrivers (rtudriverid) MATCH SIMPLE\n ON UPDATE CASCADE ON DELETE RESTRICT,\n CONSTRAINT rtus_loginname_unique UNIQUE 
(loginname),\n CONSTRAINT rtus_uniqueid_unique UNIQUE (uniqueid)\n) \nWITHOUT OIDS;\nALTER TABLE rtus OWNER TO tomcat;\n\n\n-- Index: rtus_realmid_index\n\n-- DROP INDEX rtus_realmid_index;\n\nCREATE INDEX rtus_realmid_index\n ON rtus\n USING btree\n (realmid);\n\n-- Index: rtus_rtudriverid_index\n\n-- DROP INDEX rtus_rtudriverid_index;\n\nCREATE INDEX rtus_rtudriverid_index\n ON rtus\n USING btree\n (rtudriverid);\n",
"msg_date": "Fri, 11 May 2007 14:07:57 GMT",
"msg_from": "\"Liviu Ionescu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #3270: limit < 16 optimizer behaviour"
},
{
"msg_contents": "This should have been asked on the performance list, not filed as a bug.\nI doubt anyone will have a complete answer to your question without\nEXPLAIN ANALYZE output from the query.\n\nHave you ANALYZE'd the tables recently? Poor statistics is one possible\ncause of the issue you are having.\n\nOn Fri, May 11, 2007 at 14:07:57 +0000,\n Liviu Ionescu <[email protected]> wrote:\n> \n> The following bug has been logged online:\n> \n> Bug reference: 3270\n> Logged by: Liviu Ionescu\n> Email address: [email protected]\n> PostgreSQL version: 8.2.4\n> Operating system: Linux\n> Description: limit < 16 optimizer behaviour\n> Details: \n> \n> I have a table of about 15Mrows, and a query like this:\n> \n> SELECT historianid,storagedate,slotdate,status,value FROM historiandata \n> JOIN rtunodes ON(historiandata.historianid=rtunodes.nodeid)\n> JOIN rtus ON(rtunodes.rtuid=rtus.nodeid)\n> WHERE realmid IN (1119,1422,698,1428) \n> AND historianid in (2996)\n> ORDER BY storagedate desc \n> LIMIT 10\n> \n> if there are no records with the given historianid, if limit is >= 16 the\n> query is quite fast, otherwise it takes forever.\n> \n> my current fix was to always increase the limit to 16, but, although I know\n> the optimizer behaviour depends on LIMIT, I still feel this looks like a\n> bug; if the resultset has no records the value of the LIMIT should not\n> matter.\n> \n> regards,\n> \n> Liviu Ionescu\n> \n> \n> \n> CREATE TABLE historiandata\n> (\n> historianid int4 NOT NULL,\n> status int2 NOT NULL DEFAULT 0,\n> value float8,\n> slotdate timestamptz NOT NULL,\n> storagedate timestamptz NOT NULL DEFAULT now(),\n> CONSTRAINT historiandata_pkey PRIMARY KEY (historianid, slotdate),\n> CONSTRAINT historianid_fkey FOREIGN KEY (historianid)\n> REFERENCES historians (nodeid) MATCH SIMPLE\n> ON UPDATE CASCADE ON DELETE RESTRICT\n> ) \n> WITHOUT OIDS;\n> ALTER TABLE historiandata OWNER TO tomcat;\n> \n> \n> -- Index: historiandata_historianid_index\n> \n> -- DROP INDEX historiandata_historianid_index;\n> \n> CREATE INDEX historiandata_historianid_index\n> ON historiandata\n> USING btree\n> (historianid);\n> \n> -- Index: historiandata_slotdate_index\n> \n> -- DROP INDEX historiandata_slotdate_index;\n> \n> CREATE INDEX historiandata_slotdate_index\n> ON historiandata\n> USING btree\n> (slotdate);\n> \n> -- Index: historiandata_storagedate_index\n> \n> -- DROP INDEX historiandata_storagedate_index;\n> \n> CREATE INDEX historiandata_storagedate_index\n> ON historiandata\n> USING btree\n> (storagedate);\n> \n> \n> CREATE TABLE rtunodes\n> (\n> nodeid int4 NOT NULL,\n> rtuid int4 NOT NULL,\n> no_publicnodeid int4,\n> name varchar(64) NOT NULL,\n> isinvalid bool NOT NULL DEFAULT false,\n> nodetype varchar(16),\n> CONSTRAINT rtunodes_pkey PRIMARY KEY (nodeid),\n> CONSTRAINT nodeid_fkey FOREIGN KEY (nodeid)\n> REFERENCES nodes (nodeid) MATCH SIMPLE\n> ON UPDATE CASCADE ON DELETE RESTRICT,\n> CONSTRAINT rtuid_fkey FOREIGN KEY (rtuid)\n> REFERENCES rtus (nodeid) MATCH SIMPLE\n> ON UPDATE CASCADE ON DELETE RESTRICT\n> ) \n> WITHOUT OIDS;\n> ALTER TABLE rtunodes OWNER TO tomcat;\n> \n> \n> \n> CREATE TABLE rtus\n> (\n> nodeid int4 NOT NULL,\n> passwd varchar(10) NOT NULL,\n> xml text,\n> no_nextpublicnodeid int4 NOT NULL DEFAULT 1,\n> rtudriverid int2,\n> realmid int4 NOT NULL,\n> enablegetlogin bool NOT NULL DEFAULT false,\n> enablegetconfig bool NOT NULL DEFAULT false,\n> businfoxml text,\n> uniqueid varchar(32) NOT NULL,\n> no_publicrtuid int4,\n> loginname varchar(10) NOT NULL,\n> protocolversion 
varchar(8) DEFAULT '0.0'::character varying,\n> isinvalid bool DEFAULT false,\n> CONSTRAINT rtus_pkey PRIMARY KEY (nodeid),\n> CONSTRAINT nodeid_fkey FOREIGN KEY (nodeid)\n> REFERENCES nodes (nodeid) MATCH SIMPLE\n> ON UPDATE CASCADE ON DELETE RESTRICT,\n> CONSTRAINT realmid_fkey FOREIGN KEY (realmid)\n> REFERENCES realms (nodeid) MATCH SIMPLE\n> ON UPDATE CASCADE ON DELETE RESTRICT,\n> CONSTRAINT rtudriverid_fkey FOREIGN KEY (rtudriverid)\n> REFERENCES rtudrivers (rtudriverid) MATCH SIMPLE\n> ON UPDATE CASCADE ON DELETE RESTRICT,\n> CONSTRAINT rtus_loginname_unique UNIQUE (loginname),\n> CONSTRAINT rtus_uniqueid_unique UNIQUE (uniqueid)\n> ) \n> WITHOUT OIDS;\n> ALTER TABLE rtus OWNER TO tomcat;\n> \n> \n> -- Index: rtus_realmid_index\n> \n> -- DROP INDEX rtus_realmid_index;\n> \n> CREATE INDEX rtus_realmid_index\n> ON rtus\n> USING btree\n> (realmid);\n> \n> -- Index: rtus_rtudriverid_index\n> \n> -- DROP INDEX rtus_rtudriverid_index;\n> \n> CREATE INDEX rtus_rtudriverid_index\n> ON rtus\n> USING btree\n> (rtudriverid);\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Fri, 11 May 2007 14:10:58 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #3270: limit < 16 optimizer behaviour"
}
] |
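Besides the ANALYZE Bruno asks about, one common way to make this query insensitive to the LIMIT value (a sketch against the poster's schema; the index name is invented and the joins are omitted for brevity) is a composite index matching both the filter and the ORDER BY, so a small LIMIT no longer tempts the planner into walking the storagedate index and filtering rows on the fly:

  CREATE INDEX historiandata_hist_date_idx
      ON historiandata (historianid, storagedate);

  EXPLAIN ANALYZE
  SELECT historianid, storagedate, slotdate, status, value
  FROM historiandata
  WHERE historianid = 2996
  ORDER BY storagedate DESC
  LIMIT 10;

With the leading column pinned by the equality condition, the same backward index scan is cheap whether LIMIT is 10, 16 or 1000, and it returns immediately when there are no matching rows.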
[
{
"msg_contents": "\n \tHi guys,\n\n \tI'm looking for a database+hardware solution which should be able \nto handle up to 500 requests per second. The requests will consist in:\n \t- single row updates in indexed tables (the WHERE clauses will use \nthe index(es), the updated column(s) will not be indexed);\n \t- inserts in the same kind of tables;\n \t- selects with approximately the same WHERE clause as the update \nstatements will use.\n \tSo nothing very special about these requests, only about the \nthroughput.\n\n \tCan anyone give me an idea about the hardware requirements, type \nof \nclustering (at postgres level or OS level), and eventually about the OS \n(ideally should be Linux) which I could use to get something like this in \nplace?\n\n \tThanks!\n\n-- \n",
"msg_date": "Sat, 12 May 2007 09:43:25 +0300 (EEST)",
"msg_from": "Tarhon-Onu Victor <[email protected]>",
"msg_from_op": true,
"msg_subject": "500 requests per second"
},
{
"msg_contents": "Tarhon-Onu Victor wrote:\n> \n> Hi guys,\n> \n> I'm looking for a database+hardware solution which should be able to \n> handle up to 500 requests per second. \n\nCrucial questions:\n\n1. Is this one client making 500 requests, or 500 clients making one \nrequest per second?\n2. Do you expect the indexes at least to fit in RAM?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 14 May 2007 10:03:26 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 500 requests per second"
},
{
"msg_contents": "On Mon, 14 May 2007, Richard Huxton wrote:\n\n> 1. Is this one client making 500 requests, or 500 clients making one request \n> per second?\n\n \tUp to 250 clients will make up to 500 requests per second.\n\n> 2. Do you expect the indexes at least to fit in RAM?\n\n \tnot entirely... or not all of them.\n\n-- \n",
"msg_date": "Tue, 15 May 2007 12:17:17 +0300 (EEST)",
"msg_from": "Tarhon-Onu Victor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 500 requests per second"
},
{
"msg_contents": "Tarhon-Onu Victor wrote:\n> On Mon, 14 May 2007, Richard Huxton wrote:\n> \n>> 1. Is this one client making 500 requests, or 500 clients making one \n>> request per second?\n> \n> Up to 250 clients will make up to 500 requests per second.\n\nWell, PG is pretty good at handling multiple clients. But if I'm \nunderstanding you here, you're talking about potentially 250*500=125000 \nupdates per second. If each update writes 1KB to disk, that's 125MB/sec \ncontinuously written. Are these figures roughly correct?\n\n>> 2. Do you expect the indexes at least to fit in RAM?\n> \n> not entirely... or not all of them.\n\nHmm - OK. So you're going to have index reads accessing disk as well. \nExactly how big are you looking at here?\nWill it be constantly growing?\nCan you partition the large table(s) by date or similar?\n\nWell, the two things I'd expect to slow you down are:\n1. Contention updating index blocks\n2. Disk I/O as you balance updates and selects.\n\nSince you're constantly updating, you'll want to have WAL on a separate \nset of disks from the rest of your database, battery-backed cache on \nyour raid controller etc. Check the mailing list archives for recent \ndiscussions about good/bad controllers. You'll also want to \nsubstantially increase checkpoint limits, of course.\n\nIf you can cope with the fact that there's a delay, you might want to \nlook at replication (e.g. slony) to have reads happening on a separate \nmachine from writes. That may well not be possible in your case.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 15 May 2007 11:47:29 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 500 requests per second"
},
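A sketch of the date partitioning Richard suggests, in the inheritance/constraint-exclusion style available in 8.1/8.2 (every name here is hypothetical, not from the thread):

  CREATE TABLE measurements (
      id        bigint       NOT NULL,
      taken_at  timestamptz  NOT NULL,
      value     float8
  );

  -- One child per month; the CHECK constraint is what lets the planner skip
  -- partitions when constraint_exclusion = on in postgresql.conf.
  CREATE TABLE measurements_2007_05 (
      CHECK (taken_at >= '2007-05-01' AND taken_at < '2007-06-01')
  ) INHERITS (measurements);

  CREATE INDEX measurements_2007_05_taken_at_idx
      ON measurements_2007_05 (taken_at);

Inserts still have to be routed to the right child table by the application or by a rule/trigger on the parent, which is the main cost of this approach.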
{
"msg_contents": "On Tue, May 15, 2007 at 11:47:29AM +0100, Richard Huxton wrote:\n> Tarhon-Onu Victor wrote:\n> >On Mon, 14 May 2007, Richard Huxton wrote:\n> >\n> >>1. Is this one client making 500 requests, or 500 clients making one \n> >>request per second?\n> >\n> > Up to 250 clients will make up to 500 requests per second.\n> \n> Well, PG is pretty good at handling multiple clients. But if I'm \n> understanding you here, you're talking about potentially 250*500=125000 \n> updates per second. If each update writes 1KB to disk, that's 125MB/sec \n> continuously written. Are these figures roughly correct?\n \nI'm guessing it's 500TPS overall, not per connection. It'd be rather\nchallenging just to do 125,000 network round trips per second.\n\n> >>2. Do you expect the indexes at least to fit in RAM?\n> >\n> > not entirely... or not all of them.\n> \n> Hmm - OK. So you're going to have index reads accessing disk as well. \n> Exactly how big are you looking at here?\n> Will it be constantly growing?\n> Can you partition the large table(s) by date or similar?\n> \n> Well, the two things I'd expect to slow you down are:\n> 1. Contention updating index blocks\n> 2. Disk I/O as you balance updates and selects.\n> \n> Since you're constantly updating, you'll want to have WAL on a separate \n> set of disks from the rest of your database, battery-backed cache on \n> your raid controller etc. Check the mailing list archives for recent \n> discussions about good/bad controllers. You'll also want to \n> substantially increase checkpoint limits, of course.\n> \n> If you can cope with the fact that there's a delay, you might want to \n> look at replication (e.g. slony) to have reads happening on a separate \n> machine from writes. That may well not be possible in your case.\n\nJust as a data point, I've worked with some folks that are doing ~250TPS\non a disk array with around 20-30 drives. IIRC a good amount of their\nworking set did fit into memory, but not all of it.\n\nYour biggest constraint is really going to be I/O operations per second.\nIf 90% of your data is in cache then you'll need to do a minimum of\n50IOPS (realistically you'd probably have to double that). If 50% of\nyour working set fits in cache you'd then be looking at 250IOPS, which\nis a pretty serious rate.\n\nI very strongly encourage you to do benchmarking to get a feel for how\nyour system performs on a given set of hardware so that you have some\nidea of where you need to get to. You should also be looking hard at\nyour application and system architecture for ways to cut down on your\nthroughput. There may be some things you can do that would reduce the\namount of database hardware you need to buy.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 15 May 2007 18:28:37 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 500 requests per second"
},
{
"msg_contents": "On 5/12/07, Tarhon-Onu Victor <[email protected]> wrote:\n>\n> Hi guys,\n>\n> I'm looking for a database+hardware solution which should be able\n> to handle up to 500 requests per second. The requests will consist in:\n> - single row updates in indexed tables (the WHERE clauses will use\n> the index(es), the updated column(s) will not be indexed);\n> - inserts in the same kind of tables;\n> - selects with approximately the same WHERE clause as the update\n> statements will use.\n> So nothing very special about these requests, only about the\n> throughput.\n>\n> Can anyone give me an idea about the hardware requirements, type\n> of\n> clustering (at postgres level or OS level), and eventually about the OS\n> (ideally should be Linux) which I could use to get something like this in\n> place?\n\nI work on a system about like you describe....400tps constant....24/7.\n Major challenges are routine maintenance and locking. Autovacuum is\nyour friend but you will need to schedule a full vaccum once in a\nwhile because of tps wraparound. If you allow AV to do this, none of\nyour other tables get vacuumed until it completes....heh!\n\nIf you lock the wrong table, transactions will accumulate rapidly and\nthe system will grind to a halt rather quickly (this can be mitigated\nsomewhat by smart code on the client).\n\nOther general advice:\n* reserve plenty of space for WAL and keep volume separate from data\nvolume...during a long running transaction WAL files will accumulate\nrapidly and panic the server if it runs out of space.\n* set reasonable statement timeout\n* backup with pitr. pg_dump is a headache on extremely busy servers.\n* get good i/o system for your box. start with 6 disk raid 10 and go\nfrom there.\n* spend some time reading about bgwriter settings, commit_delay, etc.\n* keep an eye out for postgresql hot (hopefully coming with 8.3) and\nmake allowances for it in your design if possible.\n* normalize your database and think of vacuum as dangerous enemy.\n\ngood luck! :-)\n\nmerlin\n",
"msg_date": "Mon, 21 May 2007 15:50:27 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 500 requests per second"
},
{
"msg_contents": "On Mon, May 21, 2007 at 03:50:27PM -0400, Merlin Moncure wrote:\n> I work on a system about like you describe....400tps constant....24/7.\n> Major challenges are routine maintenance and locking. Autovacuum is\n> your friend but you will need to schedule a full vaccum once in a\n> while because of tps wraparound. If you allow AV to do this, none of\n> your other tables get vacuumed until it completes....heh!\n \nBTW, that's changed in either 8.2 or 8.3; the change is that freeze\ninformation is now tracked on a per-table basis instead of per-database.\nSo autovacuum isn't forced into freezing everything in the database at\nonce.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 21 May 2007 15:46:55 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 500 requests per second"
},
{
"msg_contents": "\n>> I'm looking for a database+hardware solution which should be \n>> able to handle up to 500 requests per second.\n\n\tWhat proportion of reads and writes in those 500 tps ?\n\t(If you have 450 selects and 50 inserts/update transactions, your \nhardware requirements will be different than those for the reverse \nproportion)\n\n\tWhat is the estimated size of your data and hot working set ?\n",
"msg_date": "Mon, 21 May 2007 22:48:04 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 500 requests per second"
},
{
"msg_contents": "> * set reasonable statement timeout\n> * backup with pitr. pg_dump is a headache on extremely busy servers.\n\nWhere do you put your pitr wal logs so that they don't take up extra \nI/O ?\n> * get good i/o system for your box. start with 6 disk raid 10 and go\n> from there.\n> * spend some time reading about bgwriter settings, commit_delay, etc.\n> * keep an eye out for postgresql hot (hopefully coming with 8.3) and\n> make allowances for it in your design if possible.\n> * normalize your database and think of vacuum as dangerous enemy.\n>\n> good luck! :-)\n>\n> merlin\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n",
"msg_date": "Mon, 21 May 2007 20:52:43 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 500 requests per second"
}
] |
[
{
"msg_contents": "We're in the process of upgrading our db server's memory from 2GB to 8GB to \nimprove access performance. This is a dedicated dual Xeon db server not \nrunning any significant non-db processes. Our database size on disk is \n~11GB, although we expect it to grow to ~20GB. Much of this data is \ninactive and seldom used or accessed. Read access is our primary concern as \nour data is normalized and retrieveing a complete product requires many \nreads to associated tables and indexes to put it all together.\n\nOur larger tables have 10-20 rows per disk block, I assume that most blocks \nwill have a mix of frequently accessed and inactive rows. Of course, we \nwouldn't want to double-cache, so my inclination would be to either give \nmost of the memory to shared_buffers, or to leave that small and let the \nkernel (Linux 2.6x) do the buffering.\n\nI have no idea regarding the inner working of the pg's shared cache, but \nwhat I would like to find out is whether it is table-row-based, or \ndisk-block-based. In the case of it being disk-block based, my inclination \nwould be to let the kernel do the buffering. In the case of the cache being \ntable-row-based, I would expect it to be much more space-efficient and I \nwould be inclined to give the memory to the pg. In that case, is it \nfeasible to set shared_buffers to something like 500000 x 8k blocks? We \nmake extensive use of indexes on the larger tables and would seldom, if \never, do sequential scans.\n\nAny comments or advice would be great!\n\nMichael.\n\n",
"msg_date": "Sat, 12 May 2007 13:10:12 +0200",
"msg_from": "\"Michael van Rooyen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Kernel cache vs shared_buffers"
},
{
"msg_contents": "Michael van Rooyen wrote:\n> I have no idea regarding the inner working of the pg's shared cache, but \n> what I would like to find out is whether it is table-row-based, or \n> disk-block-based. \n\nIt's block based.\n\n> In the case of it being disk-block based, my \n> inclination would be to let the kernel do the buffering. In the case of \n> the cache being table-row-based, I would expect it to be much more \n> space-efficient and I would be inclined to give the memory to the pg. \n> In that case, is it feasible to set shared_buffers to something like \n> 500000 x 8k blocks? We make extensive use of indexes on the larger \n> tables and would seldom, if ever, do sequential scans.\n\nA common rule of thumb people quote here is to set shared_buffers to 1/4 \nof available RAM, and leave the rest for OS cache. That's probably a \ngood configuration to start with.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sat, 12 May 2007 15:28:45 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel cache vs shared_buffers"
},
{
"msg_contents": "On Sat, May 12, 2007 at 03:28:45PM +0100, Heikki Linnakangas wrote:\n> >In the case of it being disk-block based, my \n> >inclination would be to let the kernel do the buffering. In the case of \n> >the cache being table-row-based, I would expect it to be much more \n> >space-efficient and I would be inclined to give the memory to the pg. \n> >In that case, is it feasible to set shared_buffers to something like \n> >500000 x 8k blocks? We make extensive use of indexes on the larger \n> >tables and would seldom, if ever, do sequential scans.\n> \n> A common rule of thumb people quote here is to set shared_buffers to 1/4 \n> of available RAM, and leave the rest for OS cache. That's probably a \n> good configuration to start with.\n\nIf you really care about performance it would be a good idea to start\nwith that and do your own benchmarking. Much of the consensus about\nshared_buffers was built up before 8.0, and the shared buffer management\nwe have today looks nothing like what was in 7.4. You might find that\nshared_buffers = 50% of memory or even higher might perform better for\nyour workload.\n\nIf you do find results like that, please share them. :)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sat, 12 May 2007 11:55:41 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel cache vs shared_buffers"
},
{
"msg_contents": "> A common rule of thumb people quote here is to set shared_buffers to 1/4\n> of available RAM, and leave the rest for OS cache. That's probably a\n> good configuration to start with.\n>\n\nAnd just for the record: This rule of thumb does NOT apply to\nPostgreSQL on Windows. My current rule of thumb on Windows: set\nshared_buffers to minimum * 2\nAdjust effective_cache_size to the number given as \"system cache\"\nwithin the task manager.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nPython: the only language with more web frameworks than keywords.\n",
"msg_date": "Sun, 13 May 2007 10:39:36 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel cache vs shared_buffers"
},
{
"msg_contents": "Harald Armin Massa wrote:\n>> A common rule of thumb people quote here is to set shared_buffers to 1/4\n>> of available RAM, and leave the rest for OS cache. That's probably a\n>> good configuration to start with.\n> \n> And just for the record: This rule of thumb does NOT apply to\n> PostgreSQL on Windows. My current rule of thumb on Windows: set\n> shared_buffers to minimum * 2\n> Adjust effective_cache_size to the number given as \"system cache\"\n> within the task manager.\n\nWhy?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sun, 13 May 2007 10:41:34 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel cache vs shared_buffers"
},
{
"msg_contents": "Heikki,\n\n\n> > PostgreSQL on Windows. My current rule of thumb on Windows: set\n> > shared_buffers to minimum * 2\n> > Adjust effective_cache_size to the number given as \"system cache\"\n> > within the task manager.\n>\n> Why?\n\nI tried with shared_buffers = 50% of available memory, and with 30% of\navailable memory, and the thoughput on complex queries stalled or got\nworse.\n\nI lowered shared_buffers to minimum, and started raising\neffective_cache_size, and performance on real world queries improved.\npg_bench did not fully agree when simulating large numbers concurrent\nqueries.\n\nSo I tried setting shared_buffers between minimum and 2.5*minimum, and\npg_bench speeds recovered and real world queries did similiar.\n\nMy understanding is that shared_buffers are realised as memory mapped\nfile in win32; and that they are only usually kept in memory. Maybe I\nunderstood that wrong.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nPython: the only language with more web frameworks than keywords.\n",
"msg_date": "Sun, 13 May 2007 11:57:16 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel cache vs shared_buffers"
},
{
"msg_contents": "Harald Armin Massa wrote:\n> Heikki,\n> \n> \n>> > PostgreSQL on Windows. My current rule of thumb on Windows: set\n>> > shared_buffers to minimum * 2\n>> > Adjust effective_cache_size to the number given as \"system cache\"\n>> > within the task manager.\n>>\n>> Why?\n> \n> I tried with shared_buffers = 50% of available memory, and with 30% of\n> available memory, and the thoughput on complex queries stalled or got\n> worse.\n> \n> I lowered shared_buffers to minimum, and started raising\n> effective_cache_size, and performance on real world queries improved.\n> pg_bench did not fully agree when simulating large numbers concurrent\n> queries.\n> \n> So I tried setting shared_buffers between minimum and 2.5*minimum, and\n> pg_bench speeds recovered and real world queries did similiar.\n> \n> My understanding is that shared_buffers are realised as memory mapped\n> file in win32; and that they are only usually kept in memory. Maybe I\n> understood that wrong.\n\nAlmost. It's a memory mapped region backed by the system pagefile.\n\nThat said, it would be good to try to figure out *why* this is\nhappening. It's been on my list of things to do to run checks with the\nprofiler (now that we can ;-) with the msvc stuff) and try to figure out\nwhere it's slowing down. It could be as simple as that there's much more\noverhead trying to access the shared memory from different processes\nthan we're used to on other platforms.\n\n//Magnus\n",
"msg_date": "Sun, 13 May 2007 12:39:04 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel cache vs shared_buffers"
}
] |
[
{
"msg_contents": "On May 5, 9:44 am, CharlesBlackstone <[email protected]>\nwrote:\n\n> I think a lot of people are aware that an Opteron system has less\n> bandwidth restrictions with a lot of processors, but that woodcrests\n> don't have as good a memory controller and fall behind opterons after\n> 4 cores or so. I'm asking how severe this is. Heavy number cruncing of\n> huge data sets in RAM is a bandwidth intensive operation. So, I'm\n> asking how badly woodcrests are impacted above 4 cores, for example, 8\n> cores vs 4 cores, on bandwidth performance. I didn't think this was\n> that vague, is there anything else I can tell you that will make the\n> question less difficult to answer?\n\n\nYour question is difficult to answer because you'd first need to know\n(at least approximately) what's the ratio of\nFLOPS vs memory accesses, and the pattern of those accesses. It all\nboils down to that. If your program\ncan keep the CPU busy during \"long\" stretches of time without needing\nto access the memory bus, then your\nprogram will definitely benefit from more cpus/cores. If, on the\nother hand, your program needs to request\n(i.e. load/store) to main RAM (i.e. cache misses) very frequently,\nthen you will have contention on the memory\nbus and your performance per cpu will degrade.\n\nYou ask \"how badly\" will your app degrade; well, the actual way to\nmodel and predict that would be using the hardware performance\ncounters (OProfile under Linux, cputrack on Solaris, etc), and then\nyou'd get an idea about the rate of instructions vs anything else\n(load/stores\nto ram, retired FLOPS, cache misses, TLB misses, etc). But of\ncourse the best way is to measure your program on the real thing.\n\nI wanted to post this even if it's a bit late on the thread because\nright now I have exactly this kind of problem.\nWe're trying to figure out if a dual-Quadcore (Xeon) will be better\n(cost/benefit wise) than a 4-way Opteron dualcore, for *our* program.\n\nSpec CPU 2006 can give you some pretty good insights on this: go to\nthe advanced query option, and list all available results,\nbut filter by \"number of total cores\" equal to 8. Go straight to the\nint_rate and fp_rate figures, and you'll be able to compare how\n4-way dual Opterons compare to (Xeon) dual-Quadcores. At least, on\nthe Spec-2006 suite, whose programs have working set sizes quite\nbig, although they may not be as RAM-bottlenecked as your particular\nprogram.\n\nAs you say, Opterons do definitely have a much better memory system.\nBut then a 4-way mobo is WAY more expensive that a dual-socket one...\n\nAnd btw, if you want to benchmark just memory bandwidth/latency\nperformance, STREAM (http://www.cs.virginia.edu/stream/)\nis the way to go.\n\nCheers,\n\nJL\n\n",
"msg_date": "13 May 2007 15:00:15 -0700",
"msg_from": "jlmarin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Diminishing bandwidth performance with multiple quad core X5355s"
},
{
"msg_contents": "On 14-5-2007 0:00 jlmarin wrote:\n> I wanted to post this even if it's a bit late on the thread because\n> right now I have exactly this kind of problem.\n> We're trying to figure out if a dual-Quadcore (Xeon) will be better\n> (cost/benefit wise) than a 4-way Opteron dualcore, for *our* program.\n\nWe've benchmarked the Sun Fire x4600 (with the older socket 939 cpu's) \nand compared it to a much cheaper dual quad core xeon X5355.\n\nAs you can see on the end of this page:\nhttp://tweakers.net/reviews/674/8\n\nThe 4-way dual core opteron performs less (in our benchmark) than the \n2-way quad core xeon. Our benchmark does not consume a lot of memory, \nbut I don't know which of the two profits most of that. Obviously it may \nwell be that the Socket F opterons with support for DDR2 memory perform \nbetter, but we haven't seen much proof of that.\nGiven the cost of a 4-way dual core opteron vs a 2-way quad core xeon, \nI'd go for the latter for now. The savings can be used to build a system \nwith heavier I/O and/or more memory, which normally yield bigger gains \nin database land.\nFor example a Dell 2900 with 2x X5355 + 16GB of memory costs about 7000 \neuros less than a Dell 6950 with 4x 8220 + 16GB. You can buy an \nadditional MD1000 with 15x 15k rpm disks for that... And I doubt you'll \nfind any real-world database benchmark that will favour the \nopteron-system if you look at the price/performance-picture.\n\nOf course this picture might very well change as soon as the new \n'Barcelona' quad core opterons are finally available.\n\n> As you say, Opterons do definitely have a much better memory system.\n> But then a 4-way mobo is WAY more expensive that a dual-socket one...\n\nAnd it might be limited by NUMA and the relatively simple broadcast \narchitecture for cache coherency.\n\nBest regards,\n\nArjen van der Meijden\n",
"msg_date": "Sun, 20 May 2007 19:13:58 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Diminishing bandwidth performance with multiple quad core X5355s"
}
] |
[
{
"msg_contents": "Anyone know of a pg_stats howto? I'd appreciate any direction.\n\nYudhvir\n",
"msg_date": "Sun, 13 May 2007 17:43:21 -0700",
"msg_from": "Yudhvir Singh Sidhu <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stats how-to?"
},
{
"msg_contents": "Can you be a little more specific? What exactly are you trying to achieve\nwith pg_stats?\n\nYou can always get help for documentation at -->\nhttp://www.postgresql.org/docs/8.2/static/view-pg-stats.html\n\n--\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 5/13/07, Yudhvir Singh Sidhu <[email protected]> wrote:\n>\n> Anyone know of a pg_stats howto? I'd appreciate any direction.\n>\n> Yudhvir\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nCan you be a little more specific? What exactly are you trying to achieve with pg_stats?You can always get help for documentation at --> http://www.postgresql.org/docs/8.2/static/view-pg-stats.html\n--Shoaib MirEnterpriseDB (www.enterprisedb.com)On 5/13/07, Yudhvir Singh Sidhu <\[email protected]> wrote:Anyone know of a pg_stats howto? I'd appreciate any direction.\nYudhvir---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to \[email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Sun, 13 May 2007 20:51:27 -0400",
"msg_from": "\"Shoaib Mir\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "> Anyone know of a pg_stats howto? I'd appreciate any direction.\n\nLet me know if you find one! :)\n\nIt isn't a HOWTO, but I have collected some notes regarding the\nperformance views in a document -\nhttp://docs.opengroupware.org/Members/whitemice/wmogag/file_view\n - see the chapter on PostgreSQL. Hopefully this will continue to\nexpand.\n\nThe information is pretty scattered [ coming from a DB2 / Informix\nbackground there there are specific performance guide], try -\nhttp://book.itzero.com/read/others/0508/Sams.PostgreSQL.2nd.Edition.Jul.2005_html/0672327562/ch04lev1sec2.html\nhttp://www.postgresql.org/docs/8.0/interactive/monitoring-stats.html\n\nAlso just trolling on this list is useful.\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n",
"msg_date": "Sun, 13 May 2007 21:44:42 -0400",
"msg_from": "Adam Tauno Williams <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "I am trying to use them. I have set these values in my conf file:\n stats_start_collector TRUE stats_reset_on_server_start FALSE\nstats_command_string TRUE\nnow what?\n\nYudhvir\n==========\n\nOn 5/13/07, Shoaib Mir <[email protected]> wrote:\n>\n> Can you be a little more specific? What exactly are you trying to achieve\n> with pg_stats?\n>\n> You can always get help for documentation at --> http://www.postgresql.org/docs/8.2/static/view-pg-stats.html\n>\n>\n> --\n> Shoaib Mir\n> EnterpriseDB (www.enterprisedb.com)\n>\n> On 5/13/07, Yudhvir Singh Sidhu < [email protected]> wrote:\n> >\n> > Anyone know of a pg_stats howto? I'd appreciate any direction.\n> >\n> > Yudhvir\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n>\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nI am trying to use them. I have set these values in my conf file:\n\n\n\n\nstats_start_collector\nTRUE\n\n\nstats_reset_on_server_start\nFALSE\n\n\nstats_command_string\nTRUE\n\n\n\nnow what?\n\nYudhvir\n==========On 5/13/07, Shoaib Mir <[email protected]> wrote:\nCan you be a little more specific? What exactly are you trying to achieve with pg_stats?You can always get help for documentation at --> \nhttp://www.postgresql.org/docs/8.2/static/view-pg-stats.html\n--Shoaib MirEnterpriseDB (www.enterprisedb.com)\nOn 5/13/07, Yudhvir Singh Sidhu <\[email protected]> wrote:Anyone know of a pg_stats howto? I'd appreciate any direction.\nYudhvir---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to \n\[email protected] so that your message can get through to the mailing list cleanly\n-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Mon, 14 May 2007 09:16:54 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "Have you either re-loaded the config or restarted the server since\nmaking those changes?\n\nOn Mon, May 14, 2007 at 09:16:54AM -0700, Y Sidhu wrote:\n> I am trying to use them. I have set these values in my conf file:\n> stats_start_collector TRUE stats_reset_on_server_start FALSE\n> stats_command_string TRUE\n> now what?\n> \n> Yudhvir\n> ==========\n> \n> On 5/13/07, Shoaib Mir <[email protected]> wrote:\n> >\n> >Can you be a little more specific? What exactly are you trying to achieve\n> >with pg_stats?\n> >\n> >You can always get help for documentation at --> \n> >http://www.postgresql.org/docs/8.2/static/view-pg-stats.html\n> >\n> >\n> >--\n> >Shoaib Mir\n> >EnterpriseDB (www.enterprisedb.com)\n> >\n> >On 5/13/07, Yudhvir Singh Sidhu < [email protected]> wrote:\n> >>\n> >> Anyone know of a pg_stats howto? I'd appreciate any direction.\n> >>\n> >> Yudhvir\n> >>\n> >> ---------------------------(end of broadcast)---------------------------\n> >> TIP 1: if posting/reading through Usenet, please send an appropriate\n> >> subscribe-nomail command to [email protected] so that your\n> >> message can get through to the mailing list cleanly\n> >>\n> >\n> >\n> \n> \n> -- \n> Yudhvir Singh Sidhu\n> 408 375 3134 cell\n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 14 May 2007 11:26:10 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "Please include the list in your replies...\n\nOk, so you've got stats collection turned on. What's the question then?\nAnd are stats_block_level and stats_row_level also enabled?\n\nOn Mon, May 14, 2007 at 09:28:46AM -0700, Y Sidhu wrote:\n> yes\n> \n> Yudhvir\n> ===\n> \n> On 5/14/07, Jim C. Nasby <[email protected]> wrote:\n> >\n> >Have you either re-loaded the config or restarted the server since\n> >making those changes?\n> >\n> >On Mon, May 14, 2007 at 09:16:54AM -0700, Y Sidhu wrote:\n> >> I am trying to use them. I have set these values in my conf file:\n> >> stats_start_collector TRUE stats_reset_on_server_start FALSE\n> >> stats_command_string TRUE\n> >> now what?\n> >>\n> >> Yudhvir\n> >> ==========\n> >>\n> >> On 5/13/07, Shoaib Mir <[email protected]> wrote:\n> >> >\n> >> >Can you be a little more specific? What exactly are you trying to\n> >achieve\n> >> >with pg_stats?\n> >> >\n> >> >You can always get help for documentation at -->\n> >> >http://www.postgresql.org/docs/8.2/static/view-pg-stats.html\n> >> >\n> >> >\n> >> >--\n> >> >Shoaib Mir\n> >> >EnterpriseDB (www.enterprisedb.com)\n> >> >\n> >> >On 5/13/07, Yudhvir Singh Sidhu < [email protected]> wrote:\n> >> >>\n> >> >> Anyone know of a pg_stats howto? I'd appreciate any direction.\n> >> >>\n> >> >> Yudhvir\n> >> >>\n> >> >> ---------------------------(end of\n> >broadcast)---------------------------\n> >> >> TIP 1: if posting/reading through Usenet, please send an appropriate\n> >> >> subscribe-nomail command to [email protected] so that\n> >your\n> >> >> message can get through to the mailing list cleanly\n> >> >>\n> >> >\n> >> >\n> >>\n> >>\n> >> --\n> >> Yudhvir Singh Sidhu\n> >> 408 375 3134 cell\n> >\n> >--\n> >Jim Nasby [email protected]\n> >EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> >\n> \n> \n> \n> -- \n> Yudhvir Singh Sidhu\n> 408 375 3134 cell\n\n-- \nJim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 14 May 2007 12:45:58 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "The stats_block_level and stats_row_level are NOT enabled. The question is\nhow to use pg_stats. Do I access/see them via the ANALYZE command? or using\nSQL. I cannot find any document which will get me started on this.\n\nYudhvir\n--------------\n\nOn 5/14/07, Jim C. Nasby <[email protected]> wrote:\n>\n> Please include the list in your replies...\n>\n> Ok, so you've got stats collection turned on. What's the question then?\n> And are stats_block_level and stats_row_level also enabled?\n>\n> On Mon, May 14, 2007 at 09:28:46AM -0700, Y Sidhu wrote:\n> > yes\n> >\n> > Yudhvir\n> > ===\n> >\n> > On 5/14/07, Jim C. Nasby <[email protected]> wrote:\n> > >\n> > >Have you either re-loaded the config or restarted the server since\n> > >making those changes?\n> > >\n> > >On Mon, May 14, 2007 at 09:16:54AM -0700, Y Sidhu wrote:\n> > >> I am trying to use them. I have set these values in my conf file:\n> > >> stats_start_collector TRUE stats_reset_on_server_start FALSE\n> > >> stats_command_string TRUE\n> > >> now what?\n> > >>\n> > >> Yudhvir\n> > >> ==========\n> > >>\n> > >> On 5/13/07, Shoaib Mir <[email protected]> wrote:\n> > >> >\n> > >> >Can you be a little more specific? What exactly are you trying to\n> > >achieve\n> > >> >with pg_stats?\n> > >> >\n> > >> >You can always get help for documentation at -->\n> > >> >http://www.postgresql.org/docs/8.2/static/view-pg-stats.html\n> > >> >\n> > >> >\n> > >> >--\n> > >> >Shoaib Mir\n> > >> >EnterpriseDB (www.enterprisedb.com)\n> > >> >\n> > >> >On 5/13/07, Yudhvir Singh Sidhu < [email protected]> wrote:\n> > >> >>\n> > >> >> Anyone know of a pg_stats howto? I'd appreciate any direction.\n> > >> >>\n> > >> >> Yudhvir\n> > >> >>\n> > >> >> ---------------------------(end of\n> > >broadcast)---------------------------\n> > >> >> TIP 1: if posting/reading through Usenet, please send an\n> appropriate\n> > >> >> subscribe-nomail command to [email protected] so\n> that\n> > >your\n> > >> >> message can get through to the mailing list cleanly\n> > >> >>\n> > >> >\n> > >> >\n> > >>\n> > >>\n> > >> --\n> > >> Yudhvir Singh Sidhu\n> > >> 408 375 3134 cell\n> > >\n> > >--\n> > >Jim Nasby [email protected]\n> > >EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> > >\n> >\n> >\n> >\n> > --\n> > Yudhvir Singh Sidhu\n> > 408 375 3134 cell\n>\n> --\n> Jim C. Nasby, Database Architect [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n>\n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n>\n\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nThe stats_block_level and stats_row_level are NOT enabled. The question\nis how to use pg_stats. Do I access/see them via the ANALYZE command?\nor using SQL. I cannot find any document which will get me started on\nthis. \n\nYudhvir\n--------------On 5/14/07, Jim C. Nasby <[email protected]> wrote:\nPlease include the list in your replies...Ok, so you've got stats collection turned on. What's the question then?And are stats_block_level and stats_row_level also enabled?On Mon, May 14, 2007 at 09:28:46AM -0700, Y Sidhu wrote:\n> yes>> Yudhvir> ===>> On 5/14/07, Jim C. Nasby <[email protected]> wrote:> >> >Have you either re-loaded the config or restarted the server since\n> >making those changes?> >> >On Mon, May 14, 2007 at 09:16:54AM -0700, Y Sidhu wrote:> >> I am trying to use them. 
I have set these values in my conf file:> >> stats_start_collector TRUE stats_reset_on_server_start FALSE\n> >> stats_command_string TRUE> >> now what?> >>> >> Yudhvir> >> ==========> >>> >> On 5/13/07, Shoaib Mir <\[email protected]> wrote:> >> >> >> >Can you be a little more specific? What exactly are you trying to> >achieve> >> >with pg_stats?> >> >\n> >> >You can always get help for documentation at -->> >> >http://www.postgresql.org/docs/8.2/static/view-pg-stats.html\n> >> >> >> >> >> >--> >> >Shoaib Mir> >> >EnterpriseDB (www.enterprisedb.com)> >> >\n> >> >On 5/13/07, Yudhvir Singh Sidhu < [email protected]> wrote:> >> >>> >> >> Anyone know of a pg_stats howto? I'd appreciate any direction.\n> >> >>> >> >> Yudhvir> >> >>> >> >> ---------------------------(end of> >broadcast)---------------------------> >> >> TIP 1: if posting/reading through Usenet, please send an appropriate\n>\n>>\n>> subscribe-nomail\ncommand to [email protected] so that> >your> >> >> message can get through to the mailing list cleanly> >> >>\n> >> >> >> >> >>> >>> >> --> >> Yudhvir Singh Sidhu> >> 408 375 3134 cell> >> >-->\n>Jim\nNasby [email protected]>\n>EnterpriseDB http://enterprisedb.com 512.569.9461\n(cell)> >>>>> --> Yudhvir Singh Sidhu> 408 375 3134 cell--Jim\nC. Nasby, Database\nArchitect [email protected] your computer some brain candy! www.distributed.net Team #1828Windows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"FreeBSD: \"Are you guys coming, or what?\"-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Mon, 14 May 2007 11:09:21 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "On Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:\n> The stats_block_level and stats_row_level are NOT enabled. The question is\n> how to use pg_stats. Do I access/see them via the ANALYZE command? or using\n> SQL. I cannot find any document which will get me started on this.\n\nOk, we're both confused I think... I thought you were talking about the\npg_stat* views, which depend on the statistics collector (that's what\nthe stats_* parameters control).\n\nThat actually has nothing at all to do with pg_stats or pg_statistics.\nThose deal with statistics about the data in the database, and not about\nstatistics from the engine (which is what the pg_stat* views do...).\n\nIf you want to know about pg_stats, take a look at\nhttp://www.postgresql.org/docs/8.2/interactive/view-pg-stats.html ...\nbut normally you shouldn't need to worry yourself about that. Are you\ntrying to debug something?\n\nInformation about the backend statistics can be found at\nhttp://www.postgresql.org/docs/8.2/interactive/monitoring.html\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 14 May 2007 13:36:16 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "I am sorry about this Jim, please understand that I am a newbie and am\ntrying to solve long vacuum time problems and get a handle on speeding up\nqueries/reports. I was pointed to pg_stats and that's where I am at now. I\nhave added this into my conf file:\n stats_start_collector TRUE stats_reset_on_server_start FALSE\nstats_command_string TRUE\nHowever, these being production servers, I have not enabled these:\n stats_row_level stats_block_level\nYes, I have re-started the server(s). It seems like I query tables to get\nthe info. If so, are there any queries written that I can use?\n\nThanks for following up on this with me.\n\nYudhvir\n\n===\nOn 5/14/07, Jim C. Nasby <[email protected]> wrote:\n>\n> On Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:\n> > The stats_block_level and stats_row_level are NOT enabled. The question\n> is\n> > how to use pg_stats. Do I access/see them via the ANALYZE command? or\n> using\n> > SQL. I cannot find any document which will get me started on this.\n>\n> Ok, we're both confused I think... I thought you were talking about the\n> pg_stat* views, which depend on the statistics collector (that's what\n> the stats_* parameters control).\n>\n> That actually has nothing at all to do with pg_stats or pg_statistics.\n> Those deal with statistics about the data in the database, and not about\n> statistics from the engine (which is what the pg_stat* views do...).\n>\n> If you want to know about pg_stats, take a look at\n> http://www.postgresql.org/docs/8.2/interactive/view-pg-stats.html ...\n> but normally you shouldn't need to worry yourself about that. Are you\n> trying to debug something?\n>\n> Information about the backend statistics can be found at\n> http://www.postgresql.org/docs/8.2/interactive/monitoring.html\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nI am sorry about this Jim, please understand that I am a newbie and am\ntrying to solve long vacuum time problems and get a handle on speeding\nup queries/reports. I was pointed to pg_stats and that's where I am at\nnow. I have added this into my conf file:\n\n\n\n\nstats_start_collector\nTRUE\n\n\nstats_reset_on_server_start\nFALSE\n\n\nstats_command_string\nTRUE\n\n\n\nHowever, these being production servers, I have not enabled these:\n\n\n\nstats_row_level\n\n\nstats_block_level\n\n\n\nYes, I have re-started the server(s). It seems like I query tables to\nget the info. If so, are there any queries written that I can use? \n\nThanks for following up on this with me.\n\nYudhvir\n\n===\nOn 5/14/07, Jim C. Nasby <[email protected]> wrote:\nOn Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:> The stats_block_level and stats_row_level are NOT enabled. The question is> how to use pg_stats. Do I access/see them via the ANALYZE command? or using\n> SQL. I cannot find any document which will get me started on this.Ok, we're both confused I think... I thought you were talking about thepg_stat* views, which depend on the statistics collector (that's what\nthe stats_* parameters control).That actually has nothing at all to do with pg_stats or pg_statistics.Those deal with statistics about the data in the database, and not aboutstatistics from the engine (which is what the pg_stat* views do...).\nIf you want to know about pg_stats, take a look athttp://www.postgresql.org/docs/8.2/interactive/view-pg-stats.html ...but normally you shouldn't need to worry yourself about that. 
Are you\ntrying to debug something?Information about the backend statistics can be found athttp://www.postgresql.org/docs/8.2/interactive/monitoring.html\n--Jim\nNasby [email protected] http://enterprisedb.com 512.569.9461 (cell)\n-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Mon, 14 May 2007 12:02:03 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "On Mon, May 14, 2007 at 12:02:03PM -0700, Y Sidhu wrote:\n> I am sorry about this Jim, please understand that I am a newbie and am\n> trying to solve long vacuum time problems and get a handle on speeding up\n> queries/reports. I was pointed to pg_stats and that's where I am at now. I\n\nWell, I have no idea what that person was trying to convey then. What\nare you trying to look up? Better yet, what's your actual problem?\n\n> have added this into my conf file:\n> stats_start_collector TRUE stats_reset_on_server_start FALSE\n> stats_command_string TRUE\n> However, these being production servers, I have not enabled these:\n> stats_row_level stats_block_level\nFYI, stats_command_string has a far larger performance overhead than any\nof the other stats commands prior to 8.2.\n\n> Yes, I have re-started the server(s). It seems like I query tables to get\n> the info. If so, are there any queries written that I can use?\n> \n> Thanks for following up on this with me.\n> \n> Yudhvir\n> \n> ===\n> On 5/14/07, Jim C. Nasby <[email protected]> wrote:\n> >\n> >On Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:\n> >> The stats_block_level and stats_row_level are NOT enabled. The question\n> >is\n> >> how to use pg_stats. Do I access/see them via the ANALYZE command? or\n> >using\n> >> SQL. I cannot find any document which will get me started on this.\n> >\n> >Ok, we're both confused I think... I thought you were talking about the\n> >pg_stat* views, which depend on the statistics collector (that's what\n> >the stats_* parameters control).\n> >\n> >That actually has nothing at all to do with pg_stats or pg_statistics.\n> >Those deal with statistics about the data in the database, and not about\n> >statistics from the engine (which is what the pg_stat* views do...).\n> >\n> >If you want to know about pg_stats, take a look at\n> >http://www.postgresql.org/docs/8.2/interactive/view-pg-stats.html ...\n> >but normally you shouldn't need to worry yourself about that. Are you\n> >trying to debug something?\n> >\n> >Information about the backend statistics can be found at\n> >http://www.postgresql.org/docs/8.2/interactive/monitoring.html\n> >--\n> >Jim Nasby [email protected]\n> >EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> >\n> \n> \n> \n> -- \n> Yudhvir Singh Sidhu\n> 408 375 3134 cell\n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 14 May 2007 14:07:48 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "My immediate problem is to decrease vacuum times.\n\nYudhvir\n=======\n\nOn 5/14/07, Jim C. Nasby <[email protected]> wrote:\n>\n> On Mon, May 14, 2007 at 12:02:03PM -0700, Y Sidhu wrote:\n> > I am sorry about this Jim, please understand that I am a newbie and am\n> > trying to solve long vacuum time problems and get a handle on speeding\n> up\n> > queries/reports. I was pointed to pg_stats and that's where I am at now.\n> I\n>\n> Well, I have no idea what that person was trying to convey then. What\n> are you trying to look up? Better yet, what's your actual problem?\n>\n> > have added this into my conf file:\n> > stats_start_collector TRUE stats_reset_on_server_start FALSE\n> > stats_command_string TRUE\n> > However, these being production servers, I have not enabled these:\n> > stats_row_level stats_block_level\n> FYI, stats_command_string has a far larger performance overhead than any\n> of the other stats commands prior to 8.2.\n>\n> > Yes, I have re-started the server(s). It seems like I query tables to\n> get\n> > the info. If so, are there any queries written that I can use?\n> >\n> > Thanks for following up on this with me.\n> >\n> > Yudhvir\n> >\n> > ===\n> > On 5/14/07, Jim C. Nasby <[email protected]> wrote:\n> > >\n> > >On Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:\n> > >> The stats_block_level and stats_row_level are NOT enabled. The\n> question\n> > >is\n> > >> how to use pg_stats. Do I access/see them via the ANALYZE command? or\n> > >using\n> > >> SQL. I cannot find any document which will get me started on this.\n> > >\n> > >Ok, we're both confused I think... I thought you were talking about the\n> > >pg_stat* views, which depend on the statistics collector (that's what\n> > >the stats_* parameters control).\n> > >\n> > >That actually has nothing at all to do with pg_stats or pg_statistics.\n> > >Those deal with statistics about the data in the database, and not\n> about\n> > >statistics from the engine (which is what the pg_stat* views do...).\n> > >\n> > >If you want to know about pg_stats, take a look at\n> > >http://www.postgresql.org/docs/8.2/interactive/view-pg-stats.html ...\n> > >but normally you shouldn't need to worry yourself about that. Are you\n> > >trying to debug something?\n> > >\n> > >Information about the backend statistics can be found at\n> > >http://www.postgresql.org/docs/8.2/interactive/monitoring.html\n> > >--\n> > >Jim Nasby [email protected]\n> > >EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> > >\n> >\n> >\n> >\n> > --\n> > Yudhvir Singh Sidhu\n> > 408 375 3134 cell\n>\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nMy immediate problem is to decrease vacuum times.\n\nYudhvir\n=======On 5/14/07, Jim C. Nasby <[email protected]> wrote:\nOn Mon, May 14, 2007 at 12:02:03PM -0700, Y Sidhu wrote:> I am sorry about this Jim, please understand that I am a newbie and am> trying to solve long vacuum time problems and get a handle on speeding up\n> queries/reports. I was pointed to pg_stats and that's where I am at now. IWell, I have no idea what that person was trying to convey then. Whatare you trying to look up? 
Better yet, what's your actual problem?\n> have added this into my conf file:> stats_start_collector TRUE stats_reset_on_server_start FALSE> stats_command_string TRUE> However, these being production servers, I have not enabled these:\n> stats_row_level stats_block_levelFYI, stats_command_string has a far larger performance overhead than anyof the other stats commands prior to 8.2.> Yes, I have re-started the server(s). It seems like I query tables to get\n> the info. If so, are there any queries written that I can use?>> Thanks for following up on this with me.>> Yudhvir>> ===> On 5/14/07, Jim C. Nasby <\[email protected]> wrote:> >> >On Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:> >> The stats_block_level and stats_row_level are NOT enabled. The question> >is\n> >> how to use pg_stats. Do I access/see them via the ANALYZE command? or> >using> >> SQL. I cannot find any document which will get me started on this.> >> >Ok, we're both confused I think... I thought you were talking about the\n> >pg_stat* views, which depend on the statistics collector (that's what> >the stats_* parameters control).> >> >That actually has nothing at all to do with pg_stats or pg_statistics.\n> >Those deal with statistics about the data in the database, and not about> >statistics from the engine (which is what the pg_stat* views do...).> >> >If you want to know about pg_stats, take a look at\n> >http://www.postgresql.org/docs/8.2/interactive/view-pg-stats.html ...> >but normally you shouldn't need to worry yourself about that. Are you\n> >trying to debug something?> >> >Information about the backend statistics can be found at> >http://www.postgresql.org/docs/8.2/interactive/monitoring.html\n> >-->\n>Jim\nNasby [email protected]>\n>EnterpriseDB http://enterprisedb.com 512.569.9461\n(cell)> >>>>> --> Yudhvir Singh Sidhu> 408 375 3134 cell--Jim\nNasby [email protected] http://enterprisedb.com 512.569.9461 (cell)\n-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Mon, 14 May 2007 13:53:10 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "In response to \"Y Sidhu\" <[email protected]>:\n\n> My immediate problem is to decrease vacuum times.\n\nDon't take this as being critical, I'm just trying to point out a slight\ndifference between what you're doing and what you think you're doing:\n\nYour problem is not decreasing vacuum times. You _think_ that the solution\nto your problem is decreasing vacuum times. We don't know what your\nactual problem is, and \"decreasing vacuum times\" may not be the correct\nsolution to it.\n\nPlease describe the _problem_. Is vacuum causing performance issues while\nit's running? I mean, if vacuum takes a long time to run, so what -- what\nis the actual _problem_ caused by vacuum taking long to run.\n\nYou may benefit by enabling autovacuum, or setting vacuum_cost_delay to\nallow vacuum to run with less interference to other queries (for example).\n\nSome details on what you're doing and what's happening would be helpful,\nsuch as the output of vacuum verbose, details on the size of your database,\nyour hardware, how long vacuum is taking, what you feel is an acceptable\nlength of time, your PG config.\n\n> On 5/14/07, Jim C. Nasby <[email protected]> wrote:\n> >\n> > On Mon, May 14, 2007 at 12:02:03PM -0700, Y Sidhu wrote:\n> > > I am sorry about this Jim, please understand that I am a newbie and am\n> > > trying to solve long vacuum time problems and get a handle on speeding\n> > up\n> > > queries/reports. I was pointed to pg_stats and that's where I am at now.\n> > I\n> >\n> > Well, I have no idea what that person was trying to convey then. What\n> > are you trying to look up? Better yet, what's your actual problem?\n> >\n> > > have added this into my conf file:\n> > > stats_start_collector TRUE stats_reset_on_server_start FALSE\n> > > stats_command_string TRUE\n> > > However, these being production servers, I have not enabled these:\n> > > stats_row_level stats_block_level\n> > FYI, stats_command_string has a far larger performance overhead than any\n> > of the other stats commands prior to 8.2.\n> >\n> > > Yes, I have re-started the server(s). It seems like I query tables to\n> > get\n> > > the info. If so, are there any queries written that I can use?\n> > >\n> > > Thanks for following up on this with me.\n> > >\n> > > Yudhvir\n> > >\n> > > ===\n> > > On 5/14/07, Jim C. Nasby <[email protected]> wrote:\n> > > >\n> > > >On Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:\n> > > >> The stats_block_level and stats_row_level are NOT enabled. The\n> > question\n> > > >is\n> > > >> how to use pg_stats. Do I access/see them via the ANALYZE command? or\n> > > >using\n> > > >> SQL. I cannot find any document which will get me started on this.\n> > > >\n> > > >Ok, we're both confused I think... I thought you were talking about the\n> > > >pg_stat* views, which depend on the statistics collector (that's what\n> > > >the stats_* parameters control).\n> > > >\n> > > >That actually has nothing at all to do with pg_stats or pg_statistics.\n> > > >Those deal with statistics about the data in the database, and not\n> > about\n> > > >statistics from the engine (which is what the pg_stat* views do...).\n> > > >\n> > > >If you want to know about pg_stats, take a look at\n> > > >http://www.postgresql.org/docs/8.2/interactive/view-pg-stats.html ...\n> > > >but normally you shouldn't need to worry yourself about that. 
Are you\n> > > >trying to debug something?\n> > > >\n> > > >Information about the backend statistics can be found at\n> > > >http://www.postgresql.org/docs/8.2/interactive/monitoring.html\n\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n",
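Sketched as postgresql.conf settings, the two suggestions above might start from something like this (values are illustrative, not recommendations; on 8.1/8.2 autovacuum also requires the row-level statistics to be collected):

    stats_start_collector = on
    stats_row_level = on         # needed by autovacuum on 8.1/8.2
    autovacuum = on
    vacuum_cost_delay = 10       # ms; throttles vacuum so it interferes less with queries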
"msg_date": "Mon, 14 May 2007 17:09:59 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "Bill,\n\nI suspect it is fragmentation of some sort. Vacuum times sometimes shoot up,\nit may be table fragmentation. What kind of tables? We have 2 of them which\nexperience lots of adds and deletes only. No updates. So a typical day\nexperiences record adds a few dozen times on the order of 2.5 million. And\ndeletes once daily. Each of these tables has about 3 btree indexes. So, I am\nsuspecting fragmentation, whatever that means, of the tables and indexes. I\nrecover a couple of percentage points of a 73 GB SCSI disk when I run a\nREINDEX n those tables.\n\n\nYudhvir\n=========\nOn 5/14/07, Bill Moran <[email protected]> wrote:\n>\n> In response to \"Y Sidhu\" <[email protected]>:\n>\n> > My immediate problem is to decrease vacuum times.\n>\n> Don't take this as being critical, I'm just trying to point out a slight\n> difference between what you're doing and what you think you're doing:\n>\n> Your problem is not decreasing vacuum times. You _think_ that the\n> solution\n> to your problem is decreasing vacuum times. We don't know what your\n> actual problem is, and \"decreasing vacuum times\" may not be the correct\n> solution to it.\n>\n> Please describe the _problem_. Is vacuum causing performance issues while\n> it's running? I mean, if vacuum takes a long time to run, so what -- what\n> is the actual _problem_ caused by vacuum taking long to run.\n>\n> You may benefit by enabling autovacuum, or setting vacuum_cost_delay to\n> allow vacuum to run with less interference to other queries (for example).\n>\n> Some details on what you're doing and what's happening would be helpful,\n> such as the output of vacuum verbose, details on the size of your\n> database,\n> your hardware, how long vacuum is taking, what you feel is an acceptable\n> length of time, your PG config.\n>\n> > On 5/14/07, Jim C. Nasby <[email protected]> wrote:\n> > >\n> > > On Mon, May 14, 2007 at 12:02:03PM -0700, Y Sidhu wrote:\n> > > > I am sorry about this Jim, please understand that I am a newbie and\n> am\n> > > > trying to solve long vacuum time problems and get a handle on\n> speeding\n> > > up\n> > > > queries/reports. I was pointed to pg_stats and that's where I am at\n> now.\n> > > I\n> > >\n> > > Well, I have no idea what that person was trying to convey then. What\n> > > are you trying to look up? Better yet, what's your actual problem?\n> > >\n> > > > have added this into my conf file:\n> > > > stats_start_collector TRUE stats_reset_on_server_start FALSE\n> > > > stats_command_string TRUE\n> > > > However, these being production servers, I have not enabled these:\n> > > > stats_row_level stats_block_level\n> > > FYI, stats_command_string has a far larger performance overhead than\n> any\n> > > of the other stats commands prior to 8.2.\n> > >\n> > > > Yes, I have re-started the server(s). It seems like I query tables\n> to\n> > > get\n> > > > the info. If so, are there any queries written that I can use?\n> > > >\n> > > > Thanks for following up on this with me.\n> > > >\n> > > > Yudhvir\n> > > >\n> > > > ===\n> > > > On 5/14/07, Jim C. Nasby <[email protected]> wrote:\n> > > > >\n> > > > >On Mon, May 14, 2007 at 11:09:21AM -0700, Y Sidhu wrote:\n> > > > >> The stats_block_level and stats_row_level are NOT enabled. The\n> > > question\n> > > > >is\n> > > > >> how to use pg_stats. Do I access/see them via the ANALYZE\n> command? or\n> > > > >using\n> > > > >> SQL. I cannot find any document which will get me started on\n> this.\n> > > > >\n> > > > >Ok, we're both confused I think... 
I thought you were talking about\n> the\n> > > > >pg_stat* views, which depend on the statistics collector (that's\n> what\n> > > > >the stats_* parameters control).\n> > > > >\n> > > > >That actually has nothing at all to do with pg_stats or\n> pg_statistics.\n> > > > >Those deal with statistics about the data in the database, and not\n> > > about\n> > > > >statistics from the engine (which is what the pg_stat* views\n> do...).\n> > > > >\n> > > > >If you want to know about pg_stats, take a look at\n> > > > >http://www.postgresql.org/docs/8.2/interactive/view-pg-stats.html...\n> > > > >but normally you shouldn't need to worry yourself about that. Are\n> you\n> > > > >trying to debug something?\n> > > > >\n> > > > >Information about the backend statistics can be found at\n> > > > >http://www.postgresql.org/docs/8.2/interactive/monitoring.html\n>\n>\n>\n> --\n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n>\n> [email protected]\n> Phone: 412-422-3463x4023\n>\n\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n",
"msg_date": "Mon, 14 May 2007 15:15:49 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "\"Y Sidhu\" <[email protected]> writes:\n> it may be table fragmentation. What kind of tables? We have 2 of them which\n> experience lots of adds and deletes only. No updates. So a typical day\n> experiences record adds a few dozen times on the order of 2.5 million. And\n> deletes once daily. Each of these tables has about 3 btree indexes.\n\nWith an arrangement like that you should vacuum once daily, shortly\nafter the deletes --- there's really no point in doing it on any other\nschedule. Note \"shortly\" not \"immediately\" --- you want to be sure that\nany transactions old enough to see the deleted rows have ended.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 May 2007 20:20:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to? "
},
{
"msg_contents": "On Mon, May 14, 2007 at 08:20:49PM -0400, Tom Lane wrote:\n> \"Y Sidhu\" <[email protected]> writes:\n> > it may be table fragmentation. What kind of tables? We have 2 of them which\n> > experience lots of adds and deletes only. No updates. So a typical day\n> > experiences record adds a few dozen times on the order of 2.5 million. And\n> > deletes once daily. Each of these tables has about 3 btree indexes.\n> \n> With an arrangement like that you should vacuum once daily, shortly\n> after the deletes --- there's really no point in doing it on any other\n> schedule. Note \"shortly\" not \"immediately\" --- you want to be sure that\n> any transactions old enough to see the deleted rows have ended.\n\nAlso, think about ways you might avoid the deletes altogether. Could you\ndo a truncate instead? Could you use partitioning? If you are using\ndeletes then look at CLUSTERing the table some time after the deletes\n(but be aware that prior to 8.3 CLUSTER doesn't fully obey MVCC).\n\nTo answer your original question, a way to take a look at how bloated\nyour tables are would be to ANALYZE, divide reltuples by relpages from\npg_class (gives how many rows per page you have) and compare that to 8k\n/ average row size. The average row size for table rows would be the sum\nof avg_width from pg_stats for the table + 24 bytes overhead. For\nindexes, it would be the sum of avg_width for all fields in the index\nplus some overhead (8 bytes, I think).\n\nAn even simpler alternative would be to install contrib/pgstattuple and\nuse the pgstattuple function, though IIRC that does read the entire\nrelation from disk.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 15 May 2007 18:38:56 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "On 5/15/07, Jim C. Nasby <[email protected]> wrote:\n>\n> On Mon, May 14, 2007 at 08:20:49PM -0400, Tom Lane wrote:\n> > \"Y Sidhu\" <[email protected]> writes:\n> > > it may be table fragmentation. What kind of tables? We have 2 of them\n> which\n> > > experience lots of adds and deletes only. No updates. So a typical day\n> > > experiences record adds a few dozen times on the order of 2.5 million.\n> And\n> > > deletes once daily. Each of these tables has about 3 btree indexes.\n> >\n> > With an arrangement like that you should vacuum once daily, shortly\n> > after the deletes --- there's really no point in doing it on any other\n> > schedule. Note \"shortly\" not \"immediately\" --- you want to be sure that\n> > any transactions old enough to see the deleted rows have ended.\n>\n> Also, think about ways you might avoid the deletes altogether. Could you\n> do a truncate instead? Could you use partitioning? If you are using\n> deletes then look at CLUSTERing the table some time after the deletes\n> (but be aware that prior to 8.3 CLUSTER doesn't fully obey MVCC).\n>\n> To answer your original question, a way to take a look at how bloated\n> your tables are would be to ANALYZE, divide reltuples by relpages from\n> pg_class (gives how many rows per page you have) and compare that to 8k\n> / average row size. The average row size for table rows would be the sum\n> of avg_width from pg_stats for the table + 24 bytes overhead. For\n> indexes, it would be the sum of avg_width for all fields in the index\n> plus some overhead (8 bytes, I think).\n>\n> An even simpler alternative would be to install contrib/pgstattuple and\n> use the pgstattuple function, though IIRC that does read the entire\n> relation from disk.\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n\nHere are my results:\n\na. SELECT sum(reltuples)/sum(relpages) as rows_per_page FROM pg_class;\n\nI get 66\n\nb. SELECT (8000/(sum(avg_width)+24)) as table_stat FROM pg_stats;\n\nI get 1\n\n\nYudhvir\n\nOn 5/15/07, Jim C. Nasby <[email protected]> wrote:\nOn Mon, May 14, 2007 at 08:20:49PM -0400, Tom Lane wrote:> \"Y Sidhu\" <[email protected]> writes:> > it may be table fragmentation. What kind of tables? We have 2 of them which\n> > experience lots of adds and deletes only. No updates. So a typical day> > experiences record adds a few dozen times on the order of 2.5 million. And> > deletes once daily. Each of these tables has about 3 btree indexes.\n>> With an arrangement like that you should vacuum once daily, shortly> after the deletes --- there's really no point in doing it on any other> schedule. Note \"shortly\" not \"immediately\" --- you want to be sure that\n> any transactions old enough to see the deleted rows have ended.Also, think about ways you might avoid the deletes altogether. Could youdo a truncate instead? Could you use partitioning? If you are using\ndeletes then look at CLUSTERing the table some time after the deletes(but be aware that prior to 8.3 CLUSTER doesn't fully obey MVCC).To answer your original question, a way to take a look at how bloated\nyour tables are would be to ANALYZE, divide reltuples by relpages frompg_class (gives how many rows per page you have) and compare that to 8k/ average row size. The average row size for table rows would be the sum\nof avg_width from pg_stats for the table + 24 bytes overhead. 
Forindexes, it would be the sum of avg_width for all fields in the indexplus some overhead (8 bytes, I think).An even simpler alternative would be to install contrib/pgstattuple and\nuse the pgstattuple function, though IIRC that does read the entirerelation from disk.--Jim\nNasby [email protected] http://enterprisedb.com 512.569.9461 (cell)\nHere are my results:\n\n\na. SELECT sum(reltuples)/sum(relpages) as rows_per_page FROM pg_class;\n\nI get 66\n\nb. SELECT (8000/(sum(avg_width)+24)) as table_stat FROM pg_stats;\n\nI get 1\n\n\nYudhvir",
"msg_date": "Fri, 18 May 2007 16:26:05 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "On Fri, May 18, 2007 at 04:26:05PM -0700, Y Sidhu wrote:\n> >To answer your original question, a way to take a look at how bloated\n> >your tables are would be to ANALYZE, divide reltuples by relpages from\n> >pg_class (gives how many rows per page you have) and compare that to 8k\n> >/ average row size. The average row size for table rows would be the sum\n> >of avg_width from pg_stats for the table + 24 bytes overhead. For\n> >indexes, it would be the sum of avg_width for all fields in the index\n> >plus some overhead (8 bytes, I think).\n> >\n> >An even simpler alternative would be to install contrib/pgstattuple and\n> >use the pgstattuple function, though IIRC that does read the entire\n> >relation from disk.\n> >--\n> >Jim Nasby [email protected]\n> >EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> >\n> \n> Here are my results:\n> \n> a. SELECT sum(reltuples)/sum(relpages) as rows_per_page FROM pg_class;\n> \n> I get 66\n> \n> b. SELECT (8000/(sum(avg_width)+24)) as table_stat FROM pg_stats;\n> \n> I get 1\n\nAnd those results will be completely meaningless because they're\ncovering the entire database (catalog tables included). You need to\ncompare the two numbers on a table-by-table basis, and you'd also have\nto ignore any small tables (say smaller than 1000 pages). Also, a page\nis 8192 bytes in size (though granted there's a page header that's\nsomething like 16 bytes).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 21 May 2007 12:24:06 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
},
{
"msg_contents": "Thanks again! I'll make the change and get those numbers.\n\nYudhvir\n\nOn 5/21/07, Jim C. Nasby <[email protected]> wrote:\n> On Fri, May 18, 2007 at 04:26:05PM -0700, Y Sidhu wrote:\n> > >To answer your original question, a way to take a look at how bloated\n> > >your tables are would be to ANALYZE, divide reltuples by relpages from\n> > >pg_class (gives how many rows per page you have) and compare that to 8k\n> > >/ average row size. The average row size for table rows would be the sum\n> > >of avg_width from pg_stats for the table + 24 bytes overhead. For\n> > >indexes, it would be the sum of avg_width for all fields in the index\n> > >plus some overhead (8 bytes, I think).\n> > >\n> > >An even simpler alternative would be to install contrib/pgstattuple and\n> > >use the pgstattuple function, though IIRC that does read the entire\n> > >relation from disk.\n> > >--\n> > >Jim Nasby [email protected]\n> > >EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> > >\n> >\n> > Here are my results:\n> >\n> > a. SELECT sum(reltuples)/sum(relpages) as rows_per_page FROM pg_class;\n> >\n> > I get 66\n> >\n> > b. SELECT (8000/(sum(avg_width)+24)) as table_stat FROM pg_stats;\n> >\n> > I get 1\n>\n> And those results will be completely meaningless because they're\n> covering the entire database (catalog tables included). You need to\n> compare the two numbers on a table-by-table basis, and you'd also have\n> to ignore any small tables (say smaller than 1000 pages). Also, a page\n> is 8192 bytes in size (though granted there's a page header that's\n> something like 16 bytes).\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n",
"msg_date": "Mon, 21 May 2007 13:19:02 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stats how-to?"
}
] |
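A worked version of the per-table bloat check described in the thread above, as a sketch only: it assumes the 8.1-era pg_class/pg_stats columns referred to in the thread, an 8192-byte page, and the 24-byte per-row overhead and 1000-page cutoff that Jim gives, and it only looks at tables in the public schema. Run ANALYZE first so reltuples, relpages and avg_width are current.

-- Compare actual rows per page with the rows that would fit if the table were packed:
SELECT c.relname,
       c.relpages,
       round(c.reltuples / c.relpages) AS actual_rows_per_page,
       round(8192.0 / s.est_row_size)  AS ideal_rows_per_page
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace AND n.nspname = 'public'
JOIN (SELECT tablename, sum(avg_width) + 24 AS est_row_size
        FROM pg_stats
       WHERE schemaname = 'public'
       GROUP BY tablename) s ON s.tablename = c.relname
WHERE c.relkind = 'r'
  AND c.relpages > 1000          -- ignore small tables, per the advice above
ORDER BY c.relpages DESC;

A table whose actual figure is far below the ideal one is a candidate for the vacuuming or clustering discussed in the thread.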
[
{
"msg_contents": "Daniel,\n\n> basically we have a transactional web application. The users can browse\n> through some entries in the database and start a bunch of heavy queries\n> that take up to 15 minutes (if multiple such heavy queries run in\n> parallel the performance of the database and webinterface drops very\n> quickly). The easiest thing will be to serialize these heavy queries and\n> run it in the background. I can do this from the tomcat server.\n> Now I would like to prioritize the webinterface user interaction over\n> these heavy queries. How can I implement this from java?\n>\n> Ron mentioned this paper (see\n> http://www.nabble.com/Background-vacuum-t3719682.html )\n>\n> And I finally found this:\n> http://weblog.bignerdranch.com/?p=11\n>\n> Is there an easier way from java?\n\nCheck out the Bizgres project (www.bizgres.org). They currently have a \nquery-queuing system which will eventually be a contrib to mainstream \nPostgreSQL, but you should be able to use it for development right now.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Mon, 14 May 2007 10:22:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Transaction prioritization from java possible?"
}
] |
[
{
"msg_contents": "I'm trying to debug a query that gets all the French translations for \nall US string values. Ultimately, my goal is to rank them all by edit \ndistance, and only pick the top N.\n\nHowever, I cannot get the basic many-to-many join to return all the \nresults in less than 3 seconds, which seems slow to me. (My \ncompetition is an in-memory perl hash that runs on client machines \nproviding results in around 3 seconds, after a 30 second startup time.)\n\nThe simplified schema is :\n\tsource ->> translation_pair <<- translation\n\nThe keys are all sequence generated oids. I do wonder if the \nperformance would be better if I used the string values as keys to \nget better data distribution. Would this help speed up performance?\n\nThere are 159283 rows in source\nThere are 1723935 rows in translation, of which 159686 are French\n\n=# explain SELECT s.source_id, s.value AS sourceValue, t.value AS \ntranslationValue\n FROM\n source s,\n translation_pair tp,\n translation t,\n language l\n WHERE\n s.source_id = tp.source_id\n AND tp.translation_id = t.translation_id\n AND t.language_id = l.language_id\n AND l.name = 'French' ;\n\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------\nMerge Join (cost=524224.49..732216.29 rows=92447 width=97)\n Merge Cond: (tp.source_id = s.source_id)\n -> Sort (cost=524224.49..524455.60 rows=92447 width=55)\n Sort Key: tp.source_id\n -> Nested Loop (cost=1794.69..516599.30 rows=92447 width=55)\n -> Nested Loop (cost=1794.69..27087.87 rows=86197 \nwidth=55)\n -> Index Scan using language_name_key on \n\"language\" l (cost=0.00..8.27 rows=1 width=4)\n Index Cond: ((name)::text = 'French'::text)\n -> Bitmap Heap Scan on translation t \n(cost=1794.69..25882.43 rows=95774 width=59)\n Recheck Cond: (t.language_id = \nl.language_id)\n -> Bitmap Index Scan on \ntranslation_language_l_key (cost=0.00..1770.74 rows=95774 width=0)\n Index Cond: (t.language_id = \nl.language_id)\n -> Index Scan using translation_pair_translation_id \non translation_pair tp (cost=0.00..5.67 rows=1 width=8)\n Index Cond: (tp.translation_id = t.translation_id)\n -> Index Scan using source_pkey on source s \n(cost=0.00..206227.65 rows=159283 width=46)\n(15 rows)\n\nI'm running Postgres 8.2.3 on latest Mac OSX 10.4.x. The CPU is a \n3Ghz Dual-Core Intel Xeon, w/ 5G ram. The drive is very fast although \nI don't know the configuration (I think its an XRaid w/ 3 SAS/SCSI \n70G Seagate drives).\n\nThe regular performance configurable values are:\nwork_mem 32MB\nshared_buffers 32MB\nmax_fsm_pages 204800\nmax_fsm_relations 1000\n\n\nThanks for any advice,\n\nDrew\n",
"msg_date": "Tue, 15 May 2007 06:57:55 -0700",
"msg_from": "Drew Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Many to many join seems slow?"
},
{
"msg_contents": "Drew Wilson escribi�:\n\n> =# explain SELECT s.source_id, s.value AS sourceValue, t.value AS \n> translationValue\n> FROM\n> source s,\n> translation_pair tp,\n> translation t,\n> language l\n> WHERE\n> s.source_id = tp.source_id\n> AND tp.translation_id = t.translation_id\n> AND t.language_id = l.language_id\n> AND l.name = 'French' ;\n\nPlease provide an EXPLAIN ANALYZE of the query.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 15 May 2007 10:05:06 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Many to many join seems slow?"
},
{
"msg_contents": "> Please provide an EXPLAIN ANALYZE of the query.\n\nOops, sorry about that.\n\n=# EXPLAIN ANALYZE SELECT s.source_id, s.value as sourceValue, \nt.value as translationValue\n-# FROM\n-# source s,\n-# translation_pair tp,\n-# translation t,\n-# language l\n-# WHERE\n-# s.source_id = tp.source_id\n-# AND tp.translation_id = t.translation_id\n-# AND t.language_id = l.language_id\n-# AND l.name = 'French' ;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n----------------------------\nMerge Join (cost=524224.49..732216.29 rows=92447 width=97) (actual \ntime=1088.871..1351.840 rows=170759 loops=1)\n Merge Cond: (tp.source_id = s.source_id)\n -> Sort (cost=524224.49..524455.60 rows=92447 width=55) (actual \ntime=1088.774..1113.483 rows=170759 loops=1)\n Sort Key: tp.source_id\n -> Nested Loop (cost=1794.69..516599.30 rows=92447 \nwidth=55) (actual time=23.252..929.847 rows=170759 loops=1)\n -> Nested Loop (cost=1794.69..27087.87 rows=86197 \nwidth=55) (actual time=23.194..132.139 rows=159686 loops=1)\n -> Index Scan using language_name_key on \n\"language\" l (cost=0.00..8.27 rows=1 width=4) (actual \ntime=0.030..0.031 rows=1 loops=1)\n Index Cond: ((name)::text = 'French'::text)\n -> Bitmap Heap Scan on translation t \n(cost=1794.69..25882.43 rows=95774 width=59) (actual \ntime=23.155..95.227 rows=159686 loops=1)\n Recheck Cond: (t.language_id = \nl.language_id)\n -> Bitmap Index Scan on \ntranslation_language_l_key (cost=0.00..1770.74 rows=95774 width=0) \n(actual time=22.329..22.329 rows=159686 loops=1)\n Index Cond: (t.language_id = \nl.language_id)\n -> Index Scan using translation_pair_translation_id \non translation_pair tp (cost=0.00..5.67 rows=1 width=8) (actual \ntime=0.004..0.004 rows=1 loops=159686)\n Index Cond: (tp.translation_id = t.translation_id)\n -> Index Scan using source_pkey on source s \n(cost=0.00..206227.65 rows=159283 width=46) (actual \ntime=0.086..110.564 rows=186176 loops=1)\nTotal runtime: 1366.757 ms\n(16 rows)\n\nOn May 15, 2007, at 7:05 AM, Alvaro Herrera wrote:\n\n> Drew Wilson escribi�:\n>\n>> =# explain SELECT s.source_id, s.value AS sourceValue, t.value AS\n>> translationValue\n>> FROM\n>> source s,\n>> translation_pair tp,\n>> translation t,\n>> language l\n>> WHERE\n>> s.source_id = tp.source_id\n>> AND tp.translation_id = t.translation_id\n>> AND t.language_id = l.language_id\n>> AND l.name = 'French' ;\n>\n> Please provide an EXPLAIN ANALYZE of the query.\n>\n> -- \n> Alvaro Herrera http:// \n> www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n\n",
"msg_date": "Tue, 15 May 2007 07:11:40 -0700",
"msg_from": "Drew Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Many to many join seems slow?"
},
{
"msg_contents": "Drew Wilson wrote:\n> Merge Join (cost=524224.49..732216.29 rows=92447 width=97) (actual \n> time=1088.871..1351.840 rows=170759 loops=1)\n> ...\n> Total runtime: 1366.757 ms\n\nIt looks like the query actual runs in less than 3 seconds, but it takes \nsome time to fetch 170759 rows to the client.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 15 May 2007 15:17:23 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Many to many join seems slow?"
},
{
"msg_contents": "2007/5/15, Drew Wilson <[email protected]>:\n> =# explain SELECT s.source_id, s.value AS sourceValue, t.value AS\n> translationValue\n> FROM\n> source s,\n> translation_pair tp,\n> translation t,\n> language l\n> WHERE\n> s.source_id = tp.source_id\n> AND tp.translation_id = t.translation_id\n> AND t.language_id = l.language_id\n> AND l.name = 'French' ;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> -----------------------------------------------------\n> Merge Join (cost=524224.49..732216.29 rows=92447 width=97)\n\nThis way you get all word matches for the French language. Shouldn't\nit be all matches for a specific word (s.value = 'word' in WHERE)?\n\n-- \nDaniel Cristian Cruz\n",
"msg_date": "Tue, 15 May 2007 13:00:29 -0300",
"msg_from": "\"Daniel Cristian Cruz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Many to many join seems slow?"
},
{
"msg_contents": "Yes, I'll be filtering by string value. However, I just wanted to see \nhow long it takes to scan all translations in a particular language.\n\nDrew\n\nOn May 15, 2007, at 9:00 AM, Daniel Cristian Cruz wrote:\n\n> 2007/5/15, Drew Wilson <[email protected]>:\n>> =# explain SELECT s.source_id, s.value AS sourceValue, t.value AS\n>> translationValue\n>> FROM\n>> source s,\n>> translation_pair tp,\n>> translation t,\n>> language l\n>> WHERE\n>> s.source_id = tp.source_id\n>> AND tp.translation_id = t.translation_id\n>> AND t.language_id = l.language_id\n>> AND l.name = 'French' ;\n>>\n>> QUERY PLAN\n>> --------------------------------------------------------------------- \n>> ---\n>> -----------------------------------------------------\n>> Merge Join (cost=524224.49..732216.29 rows=92447 width=97)\n>\n> This way you get all word matches for the French language. Shouldn't\n> it be all matches for a specific word (s.value = 'word' in WHERE)?\n>\n> -- \n> Daniel Cristian Cruz\n\n",
"msg_date": "Tue, 15 May 2007 09:43:55 -0700",
"msg_from": "Drew Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Many to many join seems slow?"
},
{
"msg_contents": "You're right. If I redirect output to /dev/null, the query completes \nin 1.4s.\n\n# \\o /dev/null\n# SELECT s.source_id, s.value as sourceValue, t.value as \ntranslationValue...\n...\nTime: 1409.557 ms\n#\n\nThat'll do for now.\n\nThanks,\n\nDrew\n\nOn May 15, 2007, at 7:17 AM, Heikki Linnakangas wrote:\n\n> Drew Wilson wrote:\n>> Merge Join (cost=524224.49..732216.29 rows=92447 width=97) \n>> (actual time=1088.871..1351.840 rows=170759 loops=1)\n>> ...\n>> Total runtime: 1366.757 ms\n>\n> It looks like the query actual runs in less than 3 seconds, but it \n> takes some time to fetch 170759 rows to the client.\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Tue, 15 May 2007 09:45:07 -0700",
"msg_from": "Drew Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Many to many join seems slow?"
}
] |
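Heikki's point above is that the join itself already runs in about 1.4 seconds and the rest is client fetch time for 170,759 rows. A sketch of two ways to confirm that, using the table and column names from the thread: run the query under EXPLAIN ANALYZE, or collapse the result so only one row crosses the wire.

-- Server-side timing only; no result rows are transferred:
EXPLAIN ANALYZE
SELECT s.source_id, s.value AS sourceValue, t.value AS translationValue
FROM source s
JOIN translation_pair tp ON tp.source_id = s.source_id
JOIN translation t ON t.translation_id = tp.translation_id
JOIN language l ON l.language_id = t.language_id
WHERE l.name = 'French';

-- Or aggregate, so the result set itself stays tiny:
SELECT count(*)
FROM source s
JOIN translation_pair tp ON tp.source_id = s.source_id
JOIN translation t ON t.translation_id = tp.translation_id
JOIN language l ON l.language_id = t.language_id
WHERE l.name = 'French';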
[
{
"msg_contents": "Hello,\n\nI'm running version 8.2 with the bitmap index patch posted on pgsql-hackers. While selection queries with equality predicates (col = value) are able to make use of the bitmap index, those with IS NULL predicates (col IS NULL) are not able to use the bitmap index. The online manuals seem to indicate that IS NULL predicates by default do not use indices but they can be forced to do so by setting enable_seqscan to off. Even after setting enable_seqscan to off, the optimizer still chooses sequential scan over bitmap index scan. Below shows various queries with plans showing use (and lack of) the bitmap index on a table containing 1500 rows. \n\nI also checked that if I create a btree index on col and set enable_seqscan to off, the optimizer correctly chooses the btree index for IS NULL queries. So my question is whether there is something fundamentally different about the bitmap index that precludes its use in IS NULL queries? Does the bitmap index not store a bit vector for the NULL value (i.e. a bit vector that contains a 1 for each row with a NULL value and 0 for other rows) ? \n\nThanks,\nJason\n\nmy_db=# explain analyze select * from some_values where col=98;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on some_values (cost=5.01..94.42 rows=97 width=8) (actual time=0.493..0.923 rows=100 loops=1)\n Recheck Cond: (col = 98)\n -> Bitmap Index Scan on some_values_idx (cost=0.00..4.98 rows=97 width=0) (actual time=0.475..0.475 rows=0 loops=1)\n Index Cond: (col = 98)\n Total runtime: 1.321 ms\n(5 rows)\n\nmy_db=# explain analyze select * from some_values where col is null;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------\n Seq Scan on some_values (cost=0.00..184.00 rows=1 width=8) (actual time=0.102..1.966 rows=1 loops=1)\n Filter: (col IS NULL)\n Total runtime: 2.014 ms\n(3 rows)\n\nmy_db=# set enable_seqscan to off;\nSET\nmy_db=# explain analyze select * from some_values where col is null;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------\n Seq Scan on some_values (cost=100000000.00..100000184.00 rows=1 width=8) (actual time=0.100..1.934 rows=1 loops=1)\n Filter: (col IS NULL)\n Total runtime: 1.976 ms\n(3 rows)\n\n \n---------------------------------\nLuggage? GPS? Comic books? \nCheck out fitting gifts for grads at Yahoo! Search.\nHello,I'm running version 8.2 with the bitmap index patch posted on pgsql-hackers. While selection queries with equality predicates (col = value) are able to make use of the bitmap index, those with IS NULL predicates (col IS NULL) are not able to use the bitmap index. The online manuals seem to indicate that IS NULL predicates by default do not use indices but they can be forced to do so by setting enable_seqscan to off. Even after setting enable_seqscan to off, the optimizer still chooses sequential scan over bitmap index scan. Below shows various queries with plans showing use (and lack of) the bitmap index on a table containing 1500 rows. I also checked that if I create a btree index on col and set enable_seqscan to off, the optimizer correctly chooses the btree index for IS NULL queries. So my question is whether there is something fundamentally different about the bitmap index that precludes its use in IS NULL queries? 
Does the bitmap index\n not store a bit vector for the NULL value (i.e. a bit vector that contains a 1 for each row with a NULL value and 0 for other rows) ? Thanks,Jasonmy_db=# explain analyze select * from some_values where col=98; QUERY PLAN \n ------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on some_values (cost=5.01..94.42 rows=97 width=8) (actual time=0.493..0.923 rows=100 loops=1) Recheck Cond: (col = 98) -> Bitmap Index Scan on some_values_idx (cost=0.00..4.98 rows=97 width=0) (actual time=0.475..0.475 rows=0 loops=1) Index Cond: (col = 98) Total runtime: 1.321 ms(5 rows)my_db=# explain analyze select * from some_values where col is null; QUERY\n PLAN ------------------------------------------------------------------------------------------------------- Seq Scan on some_values (cost=0.00..184.00 rows=1 width=8) (actual time=0.102..1.966 rows=1 loops=1) Filter: (col IS NULL) Total runtime: 2.014 ms(3 rows)my_db=# set enable_seqscan to off;SETmy_db=# explain analyze select * from some_values where col is\n null; QUERY PLAN --------------------------------------------------------------------------------------------------------------------- Seq Scan on some_values (cost=100000000.00..100000184.00 rows=1 width=8) (actual time=0.100..1.934 rows=1 loops=1) Filter: (col IS NULL) Total runtime: 1.976 ms(3\n rows)\nLuggage? GPS? Comic books? \nCheck out fitting gifts for grads at Yahoo! Search.",
"msg_date": "Tue, 15 May 2007 08:16:04 -0700 (PDT)",
"msg_from": "Jason Pinnix <[email protected]>",
"msg_from_op": true,
"msg_subject": "bitmap index and IS NULL predicate"
},
{
"msg_contents": "On 5/15/07, Jason Pinnix <[email protected]> wrote:\n> Does the bitmap\n> index not store a bit vector for the NULL value (i.e. a bit vector that\n> contains a 1 for each row with a NULL value and 0 for other rows) ?\n\nYou should be able to do this with a conditional index:\n\n create index ... (col) where col is null;\n\nAlexander.\n",
"msg_date": "Tue, 15 May 2007 17:22:17 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bitmap index and IS NULL predicate"
}
] |
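A concrete form of the partial index Alexander suggests, using the table and column from Jason's examples (the index name is just illustrative). The partial index contains only the rows where col IS NULL, so a query with the same predicate can be answered from that much smaller index:

CREATE INDEX some_values_col_null_idx ON some_values (col) WHERE col IS NULL;
ANALYZE some_values;

-- The IS NULL query should now be able to use the partial index:
EXPLAIN ANALYZE SELECT * FROM some_values WHERE col IS NULL;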
[
{
"msg_contents": "Dear all,\n\nAfter some time spent better understanding how the VACUUM process\nworks, what problems we had in production and how to improve our\nmaintenance policy[1], I've come up with a little documentation\npatch - basically, I think the documentation under estimates (or\nsometimes misses) the benefit of VACUUM FULL for scans, and the\nneeds of VACUUM FULL if the routine VACUUM hasn't been done\nproperly since the database was put in production. Find the patch\nagainst snapshot attached (text not filled, to ease reading). It\nmight help others in my situation in the future.\n\n\n\n\nRef: \n[1] http://archives.postgresql.org/pgsql-performance/2006-08/msg00419.php\n http://archives.postgresql.org/pgsql-performance/2007-05/msg00112.php\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36",
"msg_date": "15 May 2007 18:43:50 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "[doc patch] a slight VACUUM / VACUUM FULL doc improvement proposal"
},
{
"msg_contents": "On Tue, May 15, 2007 at 06:43:50PM +0200, Guillaume Cottenceau wrote:\n>patch - basically, I think the documentation under estimates (or\n>sometimes misses) the benefit of VACUUM FULL for scans, and the\n>needs of VACUUM FULL if the routine VACUUM hasn't been done\n>properly since the database was put in production.\n\nIt's also possible to overestimate the benefit of vacuum full, leading \nto people vacuum full'ing almost constantly, then complaining about \nperformance due to the associated overhead. I think there have been more \npeople on this list whose performance problems were caused by \nunnecessary full vacs than by those whose performance problems were \ncaused by insufficient full vacs.\n\nMike Stone\n",
"msg_date": "Tue, 15 May 2007 13:44:29 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "Michael Stone <mstone+postgres 'at' mathom.us> writes:\n\n> On Tue, May 15, 2007 at 06:43:50PM +0200, Guillaume Cottenceau wrote:\n> >patch - basically, I think the documentation under estimates (or\n> >sometimes misses) the benefit of VACUUM FULL for scans, and the\n> >needs of VACUUM FULL if the routine VACUUM hasn't been done\n> >properly since the database was put in production.\n> \n> It's also possible to overestimate the benefit of vacuum full, leading\n> to people vacuum full'ing almost constantly, then complaining about\n> performance due to the associated overhead. I think there have been\n> more people on this list whose performance problems were caused by\n> unnecessary full vacs than by those whose performance problems were\n> caused by insufficient full vacs.\n\nCome on, I don't suggest to remove several bold warnings about\nit, the best one being \"Therefore, frequently using VACUUM FULL\ncan have an extremely negative effect on the performance of\nconcurrent database queries.\" My point is to add the few\nadditional mentions; I don't think the claims that VACUUM FULL\nphysically compacts the data, and might be useful in case of too\nlong time with infrequent VACUUM are incorrect, are they?\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "16 May 2007 09:41:46 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "On Wed, May 16, 2007 at 09:41:46AM +0200, Guillaume Cottenceau wrote:\n> Michael Stone <mstone+postgres 'at' mathom.us> writes:\n> \n> > On Tue, May 15, 2007 at 06:43:50PM +0200, Guillaume Cottenceau wrote:\n> > >patch - basically, I think the documentation under estimates (or\n> > >sometimes misses) the benefit of VACUUM FULL for scans, and the\n> > >needs of VACUUM FULL if the routine VACUUM hasn't been done\n> > >properly since the database was put in production.\n> > \n> > It's also possible to overestimate the benefit of vacuum full, leading\n> > to people vacuum full'ing almost constantly, then complaining about\n> > performance due to the associated overhead. I think there have been\n> > more people on this list whose performance problems were caused by\n> > unnecessary full vacs than by those whose performance problems were\n> > caused by insufficient full vacs.\n> \n> Come on, I don't suggest to remove several bold warnings about\n> it, the best one being \"Therefore, frequently using VACUUM FULL\n> can have an extremely negative effect on the performance of\n> concurrent database queries.\" My point is to add the few\n> additional mentions; I don't think the claims that VACUUM FULL\n> physically compacts the data, and might be useful in case of too\n> long time with infrequent VACUUM are incorrect, are they?\n\nUnfortunately they are, to a degree. VACUUM FULL can create a\nsubstantial amount of churn in the indexes, resulting in bloated\nindexes. So often you have to REINDEX after you VACUUM FULL.\n\nLong term I think we should ditch 'VACUUM FULL' altogether and create a\nCOMPACT command (it's very easy for users to get confused between\n\"vacuum all the databases in the cluster\" or \"vacuum the entire\ndatabase\" and \"VACUUM FULL\").\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 16 May 2007 10:48:09 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "\"Jim C. Nasby\" <decibel 'at' decibel.org> writes:\n\n> On Wed, May 16, 2007 at 09:41:46AM +0200, Guillaume Cottenceau wrote:\n\n[...]\n\n> > Come on, I don't suggest to remove several bold warnings about\n> > it, the best one being \"Therefore, frequently using VACUUM FULL\n> > can have an extremely negative effect on the performance of\n> > concurrent database queries.\" My point is to add the few\n> > additional mentions; I don't think the claims that VACUUM FULL\n> > physically compacts the data, and might be useful in case of too\n> > long time with infrequent VACUUM are incorrect, are they?\n> \n> Unfortunately they are, to a degree. VACUUM FULL can create a\n> substantial amount of churn in the indexes, resulting in bloated\n> indexes. So often you have to REINDEX after you VACUUM FULL.\n\nOk, VACUUM FULL does his job (it physically compacts the data and\nmight be useful in case of too long time with infrequent VACUUM),\nbut we are going to not talk about it because we often needs a\nREINDEX after it? The natural conclusion would rather be to\ndocument the fact than REINDEX is needed after VACUUM FULL, isn't\nit?\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n",
"msg_date": "16 May 2007 18:00:19 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "Guillaume Cottenceau wrote:\n> \"Jim C. Nasby\" <decibel 'at' decibel.org> writes:\n> \n> > On Wed, May 16, 2007 at 09:41:46AM +0200, Guillaume Cottenceau wrote:\n> \n> [...]\n> \n> > > Come on, I don't suggest to remove several bold warnings about\n> > > it, the best one being \"Therefore, frequently using VACUUM FULL\n> > > can have an extremely negative effect on the performance of\n> > > concurrent database queries.\" My point is to add the few\n> > > additional mentions; I don't think the claims that VACUUM FULL\n> > > physically compacts the data, and might be useful in case of too\n> > > long time with infrequent VACUUM are incorrect, are they?\n> > \n> > Unfortunately they are, to a degree. VACUUM FULL can create a\n> > substantial amount of churn in the indexes, resulting in bloated\n> > indexes. So often you have to REINDEX after you VACUUM FULL.\n> \n> Ok, VACUUM FULL does his job (it physically compacts the data and\n> might be useful in case of too long time with infrequent VACUUM),\n> but we are going to not talk about it because we often needs a\n> REINDEX after it? The natural conclusion would rather be to\n> document the fact than REINDEX is needed after VACUUM FULL, isn't\n> it?\n\nMaybe, but we should also mention that CLUSTER is a likely faster\nworkaround.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Wed, 16 May 2007 12:09:26 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "On Wed, May 16, 2007 at 12:09:26PM -0400, Alvaro Herrera wrote:\n> Guillaume Cottenceau wrote:\n> > \"Jim C. Nasby\" <decibel 'at' decibel.org> writes:\n> > \n> > > On Wed, May 16, 2007 at 09:41:46AM +0200, Guillaume Cottenceau wrote:\n> > \n> > [...]\n> > \n> > > > Come on, I don't suggest to remove several bold warnings about\n> > > > it, the best one being \"Therefore, frequently using VACUUM FULL\n> > > > can have an extremely negative effect on the performance of\n> > > > concurrent database queries.\" My point is to add the few\n> > > > additional mentions; I don't think the claims that VACUUM FULL\n> > > > physically compacts the data, and might be useful in case of too\n> > > > long time with infrequent VACUUM are incorrect, are they?\n> > > \n> > > Unfortunately they are, to a degree. VACUUM FULL can create a\n> > > substantial amount of churn in the indexes, resulting in bloated\n> > > indexes. So often you have to REINDEX after you VACUUM FULL.\n> > \n> > Ok, VACUUM FULL does his job (it physically compacts the data and\n> > might be useful in case of too long time with infrequent VACUUM),\n> > but we are going to not talk about it because we often needs a\n> > REINDEX after it? The natural conclusion would rather be to\n> > document the fact than REINDEX is needed after VACUUM FULL, isn't\n> > it?\n> \n> Maybe, but we should also mention that CLUSTER is a likely faster\n> workaround.\n\nWhat this boils down to is that there should probably be a separate\nsubsection that deals with \"Oh noes! My tables are too big!\"\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 16 May 2007 11:17:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "On Wed, May 16, 2007 at 12:09:26PM -0400, Alvaro Herrera wrote:\n>Maybe, but we should also mention that CLUSTER is a likely faster\n>workaround.\n\nUnless, of course, you don't particularly care about the order of the \nitems in your table; you might end up wasting vastly more time rewriting \ntables due to unnecessary clustering than for full vacuums on a table \nthat doesn't need it.\n\nMike Stone\n",
"msg_date": "Wed, 16 May 2007 12:20:38 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "[email protected] (Michael Stone) writes:\n> On Wed, May 16, 2007 at 12:09:26PM -0400, Alvaro Herrera wrote:\n>>Maybe, but we should also mention that CLUSTER is a likely faster\n>>workaround.\n>\n> Unless, of course, you don't particularly care about the order of\n> the items in your table; you might end up wasting vastly more time\n> rewriting tables due to unnecessary clustering than for full vacuums\n> on a table that doesn't need it.\n\nActually, this is irrelevant.\n\nIf CLUSTER is faster than VACUUM FULL (and if it isn't, in all cases,\nit *frequently* is, and probably will be, nearly always, soon), then\nit's a faster workaround.\n-- \noutput = (\"cbbrowne\" \"@\" \"linuxfinances.info\")\nhttp://cbbrowne.com/info/oses.html\n\"What if you slept? And what if, in your sleep, you dreamed?\n And what if, in your dream, you went to heaven and there\n plucked a strange and beautiful flower? And what if, when\n you awoke, you had the flower in your hand? Ah, what then?\"\n --Coleridge\n",
"msg_date": "Wed, 16 May 2007 15:34:42 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "On Wed, May 16, 2007 at 03:34:42PM -0400, Chris Browne wrote:\n>[email protected] (Michael Stone) writes:\n>> Unless, of course, you don't particularly care about the order of\n>> the items in your table; you might end up wasting vastly more time\n>> rewriting tables due to unnecessary clustering than for full vacuums\n>> on a table that doesn't need it.\n>\n>Actually, this is irrelevant.\n\nI think it's perfectly relevant.\n\n>If CLUSTER is faster than VACUUM FULL (and if it isn't, in all cases,\n>it *frequently* is, and probably will be, nearly always, soon), then\n>it's a faster workaround.\n\nCluster reorders the table. If a table doesn't have any dead rows and \nyou tell someone to run cluster or vacuum full, the vaccuum basically \nwon't do anything and the cluster will reorder the whole table. Cluster \nis great for certain access patterns, but I've been noticing this odd \ntendency lately to treat it like a silver bullet.\n\nMike Stone\n",
"msg_date": "Wed, 16 May 2007 17:17:16 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "Michael Stone wrote:\n> On Wed, May 16, 2007 at 03:34:42PM -0400, Chris Browne wrote:\n> >[email protected] (Michael Stone) writes:\n> >>Unless, of course, you don't particularly care about the order of\n> >>the items in your table; you might end up wasting vastly more time\n> >>rewriting tables due to unnecessary clustering than for full vacuums\n> >>on a table that doesn't need it.\n> >\n> >Actually, this is irrelevant.\n> \n> I think it's perfectly relevant.\n> \n> >If CLUSTER is faster than VACUUM FULL (and if it isn't, in all cases,\n> >it *frequently* is, and probably will be, nearly always, soon), then\n> >it's a faster workaround.\n> \n> Cluster reorders the table. If a table doesn't have any dead rows and \n> you tell someone to run cluster or vacuum full, the vaccuum basically \n> won't do anything and the cluster will reorder the whole table. Cluster \n> is great for certain access patterns, but I've been noticing this odd \n> tendency lately to treat it like a silver bullet.\n\nWell, it's certainly not a silver bullet; you would use VACUUM (not\nfull) for most of your needs, and CLUSTER for the rare other cases. Of\ncourse you would not pick an index at random each time, but rather keep\nusing the same one, which would supposedly be faster.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 16 May 2007 17:25:50 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n proposal"
},
{
"msg_contents": "Michael Stone <[email protected]> writes:\n> On Wed, May 16, 2007 at 03:34:42PM -0400, Chris Browne wrote:\n>> If CLUSTER is faster than VACUUM FULL (and if it isn't, in all cases,\n>> it *frequently* is, and probably will be, nearly always, soon), then\n>> it's a faster workaround.\n\n> Cluster reorders the table. If a table doesn't have any dead rows and \n> you tell someone to run cluster or vacuum full, the vaccuum basically \n> won't do anything and the cluster will reorder the whole table. Cluster \n> is great for certain access patterns, but I've been noticing this odd \n> tendency lately to treat it like a silver bullet.\n\nSure, but VACUUM FULL looks even less like a silver bullet.\n\nThere's been talk of providing an operation that uses the same\ninfrastructure as CLUSTER, but doesn't make any attempt to re-order the\ntable: just seqscan the old heap, transfer still-live tuples into a new\nheap, then rebuild indexes from scratch. This is clearly going to be a\nlot faster than a VACUUM FULL under conditions in which the latter would\nhave to move most of the tuples. Heikki just fixed one of the major\nobjections to it (ie, CLUSTER not being MVCC-safe). The other objection\nis that peak transient disk space usage could be much higher than VACUUM\nFULL's, but still for a lot of scenarios this'd be better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 May 2007 17:30:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [doc patch] a slight VACUUM / VACUUM FULL doc improvement\n\tproposal"
},
{
"msg_contents": "Patch attached and applied. Thanks.\n\nI added a mention of CLUSTER.\n\n---------------------------------------------------------------------------\n\n\nGuillaume Cottenceau wrote:\n> Dear all,\n> \n> After some time spent better understanding how the VACUUM process\n> works, what problems we had in production and how to improve our\n> maintenance policy[1], I've come up with a little documentation\n> patch - basically, I think the documentation under estimates (or\n> sometimes misses) the benefit of VACUUM FULL for scans, and the\n> needs of VACUUM FULL if the routine VACUUM hasn't been done\n> properly since the database was put in production. Find the patch\n> against snapshot attached (text not filled, to ease reading). It\n> might help others in my situation in the future.\n> \n\n[ Attachment, skipping... ]\n\n> \n> Ref: \n> [1] http://archives.postgresql.org/pgsql-performance/2006-08/msg00419.php\n> http://archives.postgresql.org/pgsql-performance/2007-05/msg00112.php\n> \n> -- \n> Guillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\n> Av. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +",
"msg_date": "Wed, 30 May 2007 15:45:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] [doc patch] a slight VACUUM / VACUUM FULL doc\n\timprovement proposal"
}
] |
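The options weighed in this thread, spelled out for a hypothetical badly bloated table (the table and index names are placeholders, and the CLUSTER syntax shown is the pre-8.3 form):

-- Routine maintenance: marks dead space reusable, no table rewrite, no exclusive lock:
VACUUM ANALYZE bloated_table;

-- Aggressive compaction: moves tuples to shrink the heap, but tends to churn
-- the indexes, hence the REINDEX afterwards:
VACUUM FULL bloated_table;
REINDEX TABLE bloated_table;

-- Rewrite via CLUSTER: rebuilds heap and indexes in index order, often faster
-- than VACUUM FULL on a badly bloated table, but not fully MVCC-safe before 8.3:
CLUSTER bloated_table_pkey ON bloated_table;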
[
{
"msg_contents": "Anyone seen PG filling up a 66 GB partition from say 40-ish percentage to\n60-ish percentage in a manner of minutes. When I run a 'fsck' the disk usage\ncomes down to 40-ish percentage. That's about 10+ GB's variance.\n\nThis is a FreeBSD 6.2 RC2, 4GB memory, Xeon 3.2 GHz '4' of the '8' CPUs in\nuse - dual cpu, dual core with HTT turned off in the sense that the other 4\ncpu's have been masked out. The drive is a Western Digital 70 GB SATA.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nAnyone seen PG filling up a 66 GB partition from say 40-ish percentage\nto 60-ish percentage in a manner of minutes. When I run a 'fsck' the\ndisk usage comes down to 40-ish percentage. That's about 10+ GB's\nvariance. \n\nThis is a FreeBSD 6.2 RC2, 4GB memory, Xeon 3.2 GHz '4' of the '8' CPUs\nin use - dual cpu, dual core with HTT turned off in the sense that the\nother 4 cpu's have been masked out. The drive is a Western Digital 70\nGB SATA.-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Tue, 15 May 2007 13:18:42 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disk Fills Up and fsck \"Compresses\" it"
},
{
"msg_contents": "I'm guessing you're seeing the affect of softupdates. With those enabled\nit can take some time before the space freed by a delete will actually\nshow up as available.\n\nOn Tue, May 15, 2007 at 01:18:42PM -0700, Y Sidhu wrote:\n> Anyone seen PG filling up a 66 GB partition from say 40-ish percentage to\n> 60-ish percentage in a manner of minutes. When I run a 'fsck' the disk usage\n> comes down to 40-ish percentage. That's about 10+ GB's variance.\n> \n> This is a FreeBSD 6.2 RC2, 4GB memory, Xeon 3.2 GHz '4' of the '8' CPUs in\n> use - dual cpu, dual core with HTT turned off in the sense that the other 4\n> cpu's have been masked out. The drive is a Western Digital 70 GB SATA.\n> \n> -- \n> Yudhvir Singh Sidhu\n> 408 375 3134 cell\n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 15 May 2007 18:41:53 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk Fills Up and fsck \"Compresses\" it"
},
{
"msg_contents": "What do you mean by \"softupdates?\" Is that a parameter in what I am guessing\nis the conf file?\n\n\nYudhvir\n\nOn 5/15/07, Jim C. Nasby <[email protected]> wrote:\n>\n> I'm guessing you're seeing the affect of softupdates. With those enabled\n> it can take some time before the space freed by a delete will actually\n> show up as available.\n>\n> On Tue, May 15, 2007 at 01:18:42PM -0700, Y Sidhu wrote:\n> > Anyone seen PG filling up a 66 GB partition from say 40-ish percentage\n> to\n> > 60-ish percentage in a manner of minutes. When I run a 'fsck' the disk\n> usage\n> > comes down to 40-ish percentage. That's about 10+ GB's variance.\n> >\n> > This is a FreeBSD 6.2 RC2, 4GB memory, Xeon 3.2 GHz '4' of the '8' CPUs\n> in\n> > use - dual cpu, dual core with HTT turned off in the sense that the\n> other 4\n> > cpu's have been masked out. The drive is a Western Digital 70 GB SATA.\n> >\n> > --\n> > Yudhvir Singh Sidhu\n> > 408 375 3134 cell\n>\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nWhat do you mean by \"softupdates?\" Is that a parameter in what I am guessing is the conf file? \n\n\nYudhvirOn 5/15/07, Jim C. Nasby <[email protected]> wrote:\nI'm guessing you're seeing the affect of softupdates. With those enabledit can take some time before the space freed by a delete will actuallyshow up as available.On Tue, May 15, 2007 at 01:18:42PM -0700, Y Sidhu wrote:\n> Anyone seen PG filling up a 66 GB partition from say 40-ish percentage to> 60-ish percentage in a manner of minutes. When I run a 'fsck' the disk usage> comes down to 40-ish percentage. That's about 10+ GB's variance.\n>> This is a FreeBSD 6.2 RC2, 4GB memory, Xeon 3.2 GHz '4' of the '8' CPUs in> use - dual cpu, dual core with HTT turned off in the sense that the other 4> cpu's have been masked out. The drive is a Western Digital 70 GB SATA.\n>> --> Yudhvir Singh Sidhu> 408 375 3134 cell--Jim\nNasby [email protected] http://enterprisedb.com 512.569.9461 (cell)\n-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Wed, 16 May 2007 08:21:23 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk Fills Up and fsck \"Compresses\" it"
},
{
"msg_contents": "No, it's part of FreeBSD's UFS. google FreeBSD softupdates and you\nshould get plenty of info.\n\nAs I said, it's probably not worth worrying about.\n\nOn Wed, May 16, 2007 at 08:21:23AM -0700, Y Sidhu wrote:\n> What do you mean by \"softupdates?\" Is that a parameter in what I am guessing\n> is the conf file?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 16 May 2007 10:51:45 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk Fills Up and fsck \"Compresses\" it"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> No, it's part of FreeBSD's UFS. google FreeBSD softupdates and you\n> should get plenty of info.\n> \n> As I said, it's probably not worth worrying about.\n> \n> On Wed, May 16, 2007 at 08:21:23AM -0700, Y Sidhu wrote:\n>> What do you mean by \"softupdates?\" Is that a parameter in what I am guessing\n>> is the conf file?\n\nHere is quite a good article on the interaction of fsck and softupdates \nin FreeBSD:\n\nhttp://www.usenix.org/publications/library/proceedings/bsdcon02/mckusick/mckusick_html/index.html\n\nHaving said that, it seems to talk about space lost by unreferenced \nblocks and inodes is the context of panic or power outage, as opposed to \nnormal softupdate operation (unless I'm missing something...)\n\nCheers\n\nMark\n",
"msg_date": "Thu, 17 May 2007 11:28:30 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk Fills Up and fsck \"Compresses\" it"
}
] |
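One way to check how much of that partition PostgreSQL itself is actually using, independently of what df reports while softupdates is still settling, is to ask the server with the size functions available from 8.1 on (a sketch; the relkind list and LIMIT are arbitrary):

-- Total on-disk size of the current database:
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Largest tables, indexes and toast tables in it:
SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relkind IN ('r', 'i', 't')
ORDER BY pg_relation_size(oid) DESC
LIMIT 20;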
[
{
"msg_contents": "I turned on all the stats in the conf file (below) and restarted the server.\nQuestion is, what's the name of the database and how do I run a simple\nselect query?\n\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = true\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nI turned on all the stats in the conf file (below) and restarted the\nserver. Question is, what's the name of the database and how do I run a\nsimple select query?\n\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = true-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Tue, 15 May 2007 15:42:01 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to Run a pg_stats Query"
},
{
"msg_contents": "Y Sidhu escribi�:\n> I turned on all the stats in the conf file (below) and restarted the server.\n> Question is, what's the name of the database and how do I run a simple\n> select query?\n> \n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n\nStats are present on all databases. As for the name of the tables, try\npg_stat_user_tables and pg_stat_activity for starters. There are a lot\nmore; check the documentation or a \\d pg_stat* in psql.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 15 May 2007 18:50:29 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to Run a pg_stats Query"
}
] |
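A few starter queries against the views Alvaro mentions; the statistics views are visible from every database once the collector settings shown above are enabled, and the column names below are the 8.1/8.2 ones:

-- Per-table activity counters:
SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;

-- Currently running backends and their queries (needs stats_command_string = true):
SELECT datname, procpid, current_query, query_start
FROM pg_stat_activity;

-- In psql, \d pg_stat* lists the remaining statistics views.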
[
{
"msg_contents": "I've been taking notes on what people ask about on this list, mixed that \nup with work I've been doing lately, and wrote some documentation readers \nof this mailing list may find useful. There are a series of articles now \nat http://www.westnet.com/~gsmith/content/postgresql/ about performance \ntesting and tuning.\n\nThe \"5-minute Introduction to PostgreSQL Performance\" and the \"Disk \nperformance testing\" articles were aimed to be FAQ-style pieces people \nasking questions here might be pointed toward.\n\nAll of the pieces in the \"Advanced Topics\" sections aren't finished to my \nstandards yet, but may be useful anyway so I've posted what I've got so \nfar.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 15 May 2007 23:55:32 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "New performance documentation released"
},
{
"msg_contents": "Cool!\n\nNow we can point people to your faq instead of repeating the \"dd\" test\ninstructions. Thanks for normalizing this out of the list :-)\n\n- Luke\n\n\nOn 5/15/07 8:55 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> I've been taking notes on what people ask about on this list, mixed that\n> up with work I've been doing lately, and wrote some documentation readers\n> of this mailing list may find useful. There are a series of articles now\n> at http://www.westnet.com/~gsmith/content/postgresql/ about performance\n> testing and tuning.\n> \n> The \"5-minute Introduction to PostgreSQL Performance\" and the \"Disk\n> performance testing\" articles were aimed to be FAQ-style pieces people\n> asking questions here might be pointed toward.\n> \n> All of the pieces in the \"Advanced Topics\" sections aren't finished to my\n> standards yet, but may be useful anyway so I've posted what I've got so\n> far.\n> \n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n",
"msg_date": "Wed, 16 May 2007 08:38:36 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New performance documentation released"
}
] |
[
{
"msg_contents": "Hi.\nI have the following join condition in a query\n\"posttag inner join tag ON posttag.tagid = tag.id and tag.name =\n'blah'\"\ntag.id is PK, I have indexes on posttag.tagid and tag.name both\ncreated with all the options set to default.\nPG version is 8.1.\n\n\nThe query is very slow (3 minutes on test data), here's what takes all\nthe time, from explain results:\n\n> Bitmap Heap Scan on tag (cost=897.06..345730.89 rows=115159 width=8)\n Recheck Cond: ((name)::text = 'blah'::text)\n -> Bitmap Index Scan on tag_idxn\n(cost=0.00..897.06 rows=115159 width=0)\n Index Cond: ((name)::text =\n'blah'::text)\n\nWhat is recheck? I googled some and found something about lossy\nindexes but no fixes for this issue.\nThe only reason I ever have this index is to do joins like this one;\nhow do I make it not lossy?\n\nIf I cannot make it not lossy, is there any way to make it skip\nrecheck and say to hell with the losses? :)\nThe query without recheck will run like up to 100 times faster\naccording to overall query plan.\n\nI'm pondering encoding the tag name to int or bytea field(s) and\njoining on them but that's kinda ugly.\n\n",
"msg_date": "16 May 2007 22:42:27 -0700",
"msg_from": "Sergei Shelukhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "any way to get rid of Bitmap Heap Scan recheck?"
},
{
"msg_contents": "Any ideas?\n\n",
"msg_date": "19 May 2007 03:22:18 -0700",
"msg_from": "Sergei Shelukhin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: any way to get rid of Bitmap Heap Scan recheck?"
},
{
"msg_contents": "Sergei Shelukhin wrote:\n> Hi.\n> I have the following join condition in a query\n> \"posttag inner join tag ON posttag.tagid = tag.id and tag.name =\n> 'blah'\"\n> tag.id is PK, I have indexes on posttag.tagid and tag.name both\n> created with all the options set to default.\n> PG version is 8.1.\n> \n> \n> The query is very slow (3 minutes on test data), here's what takes all\n> the time, from explain results:\n> \n>> Bitmap Heap Scan on tag (cost=897.06..345730.89 rows=115159 width=8)\n> Recheck Cond: ((name)::text = 'blah'::text)\n> -> Bitmap Index Scan on tag_idxn\n> (cost=0.00..897.06 rows=115159 width=0)\n> Index Cond: ((name)::text =\n> 'blah'::text)\n> \n> What is recheck? I googled some and found something about lossy\n> indexes but no fixes for this issue.\n> The only reason I ever have this index is to do joins like this one;\n> how do I make it not lossy?\n> \n> If I cannot make it not lossy, is there any way to make it skip\n> recheck and say to hell with the losses? :)\n> The query without recheck will run like up to 100 times faster\n> according to overall query plan.\n\nA bitmapped index scan works in two stages. First the index or indexes \nare scanned to create a bitmap representing matching tuples. That shows \nup as Bitmap Index Scan in explain. Then all the matching tuples are \nfetched from the heap, that's the Bitmap Heap Scan.\n\nIf the bitmap is larger than work_mem (because there's a lot of matching \ntuples), it's stored in memory as lossy. In lossy mode, we don't store \nevery tuple in the bitmap, but each page with any matching tuples on it \nis represented as a single bit. When performing the Bitmap Heap Scan \nphase with a lossy bitmap, the pages need to be scanned, using the \nRecheck condition, to see which tuples match.\n\nThe Recheck condition is always shown, even if the bitmap is not stored \nas lossy and no rechecking is done.\n\nNow let's get to your situation. The problem is almost certainly not the \nrechecking or lossy bitmaps, but you can increase your work_mem to make \nsure.\n\nI'd suggest you do the usual drill: ANALYZE all relevant tables. If that \ndoesn't solve the problem, run EXPLAIN ANALYZE instead of just EXPLAIN. \nSee if you can figure something out of that, and if you need more help, \nsend the output back to the list together with the table definitions and \nindexes of all tables involved in the query.\n\n> I'm pondering encoding the tag name to int or bytea field(s) and\n> joining on them but that's kinda ugly.\n\nI doubt that helps, but it's hard to say without seeing the schema.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Sat, 19 May 2007 19:05:17 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any way to get rid of Bitmap Heap Scan recheck?"
},
{
"msg_contents": "Sergei Shelukhin <[email protected]> writes:\n> The query is very slow (3 minutes on test data), here's what takes all\n> the time, from explain results:\n\n>> Bitmap Heap Scan on tag (cost=897.06..345730.89 rows=115159 width=8)\n> Recheck Cond: ((name)::text = 'blah'::text)\n> -> Bitmap Index Scan on tag_idxn\n> (cost=0.00..897.06 rows=115159 width=0)\n> Index Cond: ((name)::text =\n> 'blah'::text)\n\nIt's usually a good idea to do EXPLAIN ANALYZE on troublesome queries,\nrather than trusting that the planner's estimates reflect reality.\n\n> The query without recheck will run like up to 100 times faster\n> according to overall query plan.\n\nSorry, but you have no concept what you're talking about. The\ndifference between indexscan and heap scan estimates here reflects\nfetching rows from the heap, not recheck costs. Even if it were\na good idea to get rid of the recheck (which it is not), it wouldn't\nreduce the costs materially.\n\nIf the table is fairly static then it might help to CLUSTER on that\nindex, so that the rows needed are brought together physically.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 May 2007 14:10:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any way to get rid of Bitmap Heap Scan recheck? "
},
{
"msg_contents": "Sergei Shelukhin wrote:\n> Hi.\n> I have the following join condition in a query\n> \"posttag inner join tag ON posttag.tagid = tag.id and tag.name =\n> 'blah'\"\n> tag.id is PK, I have indexes on posttag.tagid and tag.name both\n> created with all the options set to default.\n> PG version is 8.1.\n>\n>\n> The query is very slow (3 minutes on test data), here's what takes all\n> the time, from explain results:Any ideas?\n\nYes, post the output of\n\nexplain analyze select ... (rest of query here)\n\nfor starters\n",
"msg_date": "Wed, 20 Jun 2007 17:06:25 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: any way to get rid of Bitmap Heap Scan recheck?"
}
] |
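The concrete steps suggested in the thread, written out against the tables in Sergei's query; the work_mem value is only illustrative (on 8.1 the setting is given in kB, later releases also accept strings like '64MB'):

-- Refresh statistics first:
ANALYZE posttag;
ANALYZE tag;

-- Give the bitmap enough memory so it is not stored lossy (64 MB = 65536 kB):
SET work_mem = 65536;

-- Then compare estimated and actual rows/times instead of relying on EXPLAIN alone:
EXPLAIN ANALYZE
SELECT *
FROM posttag
JOIN tag ON posttag.tagid = tag.id
WHERE tag.name = 'blah';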
[
{
"msg_contents": "I sent this to pgsql-admin but didn't receive a response. Would this be\na WAL log performance/efficiency issue?\n\nThanks,\n\nKeaton\n\n\nGiven these postgresql.conf settings:\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = on # turns forced synchronization\non or off\nwal_sync_method = fsync # the default is the first\noption\n # supported by the operating\nsystem:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\nfull_page_writes = on # recover from partial page\nwrites\nwal_buffers = 32 # min 4, 8KB each\ncommit_delay = 100000 # range 0-100000, in\nmicroseconds\ncommit_siblings = 1000 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 500 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 300 # range 30-3600, in seconds\ncheckpoint_warning = 120 # in seconds, 0 is off\n\n# - Archiving -\narchive_command = '/mnt/logship/scripts/archivemaster.sh %p %f'\n# command to use to archive a logfile\n# segment\n\n\n\nAnd these tables to load data into:\n\n List of relations\nSchema | Name | Type | Owner \n--------+-----------+-------+----------\npublic | testload | table | postgres\npublic | testload2 | table | postgres\npublic | testload3 | table | postgres\n(3 rows)\n\npostgres=# \\d testload\n Table \"public.testload\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\npostgres=# \\d testload2\n Table \"public.testload2\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\npostgres=# \\d testload3\n Table \"public.testload3\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\nThere are no indexes on the tables.\n\n\nUsing an 8K data page:\n\n8K data page (8192 bytes)\nLess page header and row overhead leaves ~8000 bytes\nAt 100 bytes per row = ~80 rows/page\nRows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000 bytes /\n1048576 = ~ 24.4 MB of data page space.\n\nThe test file is shown here (250,000 rows all the same):\n-bash-3.1$ more datafile.txt\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\n\nThe load script:\n-bash-3.1$ more loaddata.sql\ncopy testload from '/home/kadams/logship/datafile.txt' delimiter '|';\ncopy testload2 from '/home/kadams/logship/datafile.txt' delimiter '|';\ncopy testload3 from '/home/kadams/logship/datafile.txt' delimiter '|';\n\nSo the one load process does a COPY into the three tables. 24.4 MB * 3\ntables = ~ 73.2 MB of data page space.\n\nThis is the only process running on the database. 
No other loads/users\nare on the system.\n\npsql -f sql/loaddata.sql >/dev/null 2>&1 &\n\nIt seems that 112 MB of WAL file space (16 MB * 7) is required for 73.2\nMB of loaded data, which is an extra 34.8% of disk space to log/archive\nthe COPY commands:\n\nFirst pass:\nLOG: transaction ID wrap limit is 2147484146, limited by database\n\"postgres\"\nLOG: archived transaction log file \"00000001000000010000005E\"\nLOG: archived transaction log file \"00000001000000010000005F\"\nLOG: archived transaction log file \"000000010000000100000060\"\nLOG: archived transaction log file \"000000010000000100000061\"\nLOG: archived transaction log file \"000000010000000100000062\"\nLOG: archived transaction log file \"000000010000000100000063\"\nLOG: archived transaction log file \"000000010000000100000064\"\n\n# of logs in pg_xlog: 9\n\nSecond pass:\nLOG: archived transaction log file \"000000010000000100000065\"\nLOG: archived transaction log file \"000000010000000100000066\"\nLOG: archived transaction log file \"000000010000000100000067\"\nLOG: archived transaction log file \"000000010000000100000068\"\nLOG: archived transaction log file \"000000010000000100000069\"\nLOG: archived transaction log file \"00000001000000010000006A\"\nLOG: archived transaction log file \"00000001000000010000006B\"\n\n# of logs in pg_xlog: 15\n\nThird pass:\nLOG: archived transaction log file \"00000001000000010000006C\"\nLOG: archived transaction log file \"00000001000000010000006D\"\nLOG: archived transaction log file \"00000001000000010000006E\"\nLOG: archived transaction log file \"00000001000000010000006F\"\nLOG: archived transaction log file \"000000010000000100000070\"\nLOG: archived transaction log file \"000000010000000100000071\"\nLOG: archived transaction log file \"000000010000000100000072\"\n\n# of logs in pg_xlog: 22\n\nFourth pass:\nLOG: archived transaction log file \"000000010000000100000073\"\nLOG: archived transaction log file \"000000010000000100000074\"\nLOG: archived transaction log file \"000000010000000100000075\"\nLOG: archived transaction log file \"000000010000000100000076\"\nLOG: archived transaction log file \"000000010000000100000077\"\nLOG: archived transaction log file \"000000010000000100000078\"\nLOG: archived transaction log file \"000000010000000100000079\"\n\n# of logs in pg_xlog: 29\n\nPostgreSQL continued to add log files in pg_xlog, so my assumption is\nthat checkpoints did not come into play during the load process,\ncorrect? (Frequent checkpoints would have added even more to the WAL\nfile overhead, is my understanding.)\n\nSo is there anything I can do to reduce the 34.8% overhead in WAL file\nspace when loading data? Do you see any glaring mistakes in the\ncalculations themselves, and would you agree with this overhead figure?\n\nWe are running on PostgreSQL 8.1.4 and are planning to move to 8.3 when\nit becomes available. Are there space utilization/performance\nimprovements in WAL logging in the upcoming release?\n\nThanks,\n\nKeaton\n\n\n\n\n\n\n\n\nI sent this to pgsql-admin but didn't receive a response. 
Would this be a WAL log performance/efficiency issue?\n\nThanks,\n\nKeaton\n\n\nGiven these postgresql.conf settings:\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = on # turns forced synchronization on or off\nwal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\nfull_page_writes = on # recover from partial page writes\nwal_buffers = 32 # min 4, 8KB each\ncommit_delay = 100000 # range 0-100000, in microseconds\ncommit_siblings = 1000 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 500 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 300 # range 30-3600, in seconds\ncheckpoint_warning = 120 # in seconds, 0 is off\n\n# - Archiving -\narchive_command = '/mnt/logship/scripts/archivemaster.sh %p %f'\n# command to use to archive a logfile\n# segment\n\n\n\nAnd these tables to load data into:\n\n List of relations\nSchema | Name | Type | Owner \n--------+-----------+-------+----------\npublic | testload | table | postgres\npublic | testload2 | table | postgres\npublic | testload3 | table | postgres\n(3 rows)\n\npostgres=# \\d testload\n Table \"public.testload\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\npostgres=# \\d testload2\n Table \"public.testload2\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\npostgres=# \\d testload3\n Table \"public.testload3\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\nThere are no indexes on the tables.\n\n\nUsing an 8K data page:\n\n8K data page (8192 bytes)\nLess page header and row overhead leaves ~8000 bytes\nAt 100 bytes per row = ~80 rows/page\nRows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000 bytes / 1048576 = ~ 24.4 MB of data page space.\n\nThe test file is shown here (250,000 rows all the same):\n-bash-3.1$ more datafile.txt\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\n\nThe load script:\n-bash-3.1$ more loaddata.sql\ncopy testload from '/home/kadams/logship/datafile.txt' delimiter '|';\ncopy testload2 from '/home/kadams/logship/datafile.txt' delimiter '|';\ncopy testload3 from '/home/kadams/logship/datafile.txt' delimiter '|';\n\nSo the one load process does a COPY into the three tables. 24.4 MB * 3 tables = ~ 73.2 MB of data page space.\n\nThis is the only process running on the database. 
No other loads/users are on the system.\n\npsql -f sql/loaddata.sql >/dev/null 2>&1 &\n\nIt seems that 112 MB of WAL file space (16 MB * 7) is required for 73.2 MB of loaded data, which is an extra 34.8% of disk space to log/archive the COPY commands:\n\nFirst pass:\nLOG: transaction ID wrap limit is 2147484146, limited by database \"postgres\"\nLOG: archived transaction log file \"00000001000000010000005E\"\nLOG: archived transaction log file \"00000001000000010000005F\"\nLOG: archived transaction log file \"000000010000000100000060\"\nLOG: archived transaction log file \"000000010000000100000061\"\nLOG: archived transaction log file \"000000010000000100000062\"\nLOG: archived transaction log file \"000000010000000100000063\"\nLOG: archived transaction log file \"000000010000000100000064\"\n\n# of logs in pg_xlog: 9\n\nSecond pass:\nLOG: archived transaction log file \"000000010000000100000065\"\nLOG: archived transaction log file \"000000010000000100000066\"\nLOG: archived transaction log file \"000000010000000100000067\"\nLOG: archived transaction log file \"000000010000000100000068\"\nLOG: archived transaction log file \"000000010000000100000069\"\nLOG: archived transaction log file \"00000001000000010000006A\"\nLOG: archived transaction log file \"00000001000000010000006B\"\n\n# of logs in pg_xlog: 15\n\nThird pass:\nLOG: archived transaction log file \"00000001000000010000006C\"\nLOG: archived transaction log file \"00000001000000010000006D\"\nLOG: archived transaction log file \"00000001000000010000006E\"\nLOG: archived transaction log file \"00000001000000010000006F\"\nLOG: archived transaction log file \"000000010000000100000070\"\nLOG: archived transaction log file \"000000010000000100000071\"\nLOG: archived transaction log file \"000000010000000100000072\"\n\n# of logs in pg_xlog: 22\n\nFourth pass:\nLOG: archived transaction log file \"000000010000000100000073\"\nLOG: archived transaction log file \"000000010000000100000074\"\nLOG: archived transaction log file \"000000010000000100000075\"\nLOG: archived transaction log file \"000000010000000100000076\"\nLOG: archived transaction log file \"000000010000000100000077\"\nLOG: archived transaction log file \"000000010000000100000078\"\nLOG: archived transaction log file \"000000010000000100000079\"\n\n# of logs in pg_xlog: 29\n\nPostgreSQL continued to add log files in pg_xlog, so my assumption is that checkpoints did not come into play during the load process, correct? (Frequent checkpoints would have added even more to the WAL file overhead, is my understanding.)\n\nSo is there anything I can do to reduce the 34.8% overhead in WAL file space when loading data? Do you see any glaring mistakes in the calculations themselves, and would you agree with this overhead figure?\n\nWe are running on PostgreSQL 8.1.4 and are planning to move to 8.3 when it becomes available. Are there space utilization/performance improvements in WAL logging in the upcoming release?\n\nThanks,\n\nKeaton",
"msg_date": "Thu, 17 May 2007 08:04:30 -0600",
"msg_from": "Keaton Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL log performance/efficiency question"
},
{
"msg_contents": "Keaton Adams wrote:\n> Using an 8K data page:\n> \n> 8K data page (8192 bytes)\n> Less page header and row overhead leaves ~8000 bytes\n> At 100 bytes per row = ~80 rows/page\n> Rows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000 bytes /\n> 1048576 = ~ 24.4 MB of data page space.\n\nThat's not accurate. There's 32 bytes of overhead per row, and that \ngives you just 61 tuples per page. Anyhow, I'd suggest measuring the \nreal table size with pg_relpages function (from contrib/pgstattuple) or \nfrom pg_class.relpages column (after ANALYZE).\n\n> We are running on PostgreSQL 8.1.4 and are planning to move to 8.3 when\n> it becomes available. Are there space utilization/performance\n> improvements in WAL logging in the upcoming release?\n\nOne big change in 8.3 is that COPY on a table that's been created or \ntruncated in the same transaction doesn't need to write WAL at all, if \nWAL archiving isn't enabled.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Thu, 17 May 2007 15:23:00 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL log performance/efficiency question"
},
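A sketch of the measurement Heikki suggests, using pg_class.relpages after an ANALYZE; pg_relation_size()/pg_size_pretty() should also be available from 8.1 on, but treat that availability as an assumption rather than something confirmed in this thread:

  ANALYZE testload;
  SELECT relname,
         relpages,                                        -- 8 kB pages actually allocated
         pg_size_pretty(pg_relation_size('testload')) AS on_disk
  FROM pg_class
  WHERE relname = 'testload';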
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Keaton Adams wrote:\n>> We are running on PostgreSQL 8.1.4 and are planning to move to 8.3 when\n>> it becomes available. Are there space utilization/performance\n>> improvements in WAL logging in the upcoming release?\n\n> One big change in 8.3 is that COPY on a table that's been created or \n> truncated in the same transaction doesn't need to write WAL at all, if \n> WAL archiving isn't enabled.\n\nThere are a couple of improvements in tuple storage (the header is\nshorter, and short varlena fields have less overhead); those would\ntranslate pretty directly into less WAL space too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 May 2007 10:55:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL log performance/efficiency question "
},
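For reference, a sketch of the load pattern that benefits from the 8.3 behaviour Heikki describes: the target table is truncated (or created) in the same transaction as the COPY, and WAL archiving is off. On 8.1/8.2, or with archive_command set, the same script still runs but the WAL is written as before:

  BEGIN;
  TRUNCATE testload;       -- or CREATE TABLE testload (...) inside this transaction
  COPY testload FROM '/home/kadams/logship/datafile.txt' DELIMITER '|';
  COMMIT;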
{
"msg_contents": "So for every data page there is a 20 byte header, for every row there is\na 4 byte identifier (offset into the page), AND there is also a 28 byte\nfixed-size header (27 + optional null bitmap)?? (I did find the section\nin the 8.1 manual that give the physical page layout.) The other RDBMS\nplatforms I have worked with have a header in the 28 byte range and a\nrow pointer of 4 bytes, and that's it. I find it a bit surprising that\nPostgreSQL would need another 28 bytes per row to track its contents.\n\nI'll try the pg_relpages function as you suggest and recalculate from\nthere.\n\nThanks for the info,\n\n-Keaton\n\n\n\nOn Thu, 2007-05-17 at 15:23 +0100, Heikki Linnakangas wrote:\n\n> Keaton Adams wrote:\n> > Using an 8K data page:\n> > \n> > 8K data page (8192 bytes)\n> > Less page header and row overhead leaves ~8000 bytes\n> > At 100 bytes per row = ~80 rows/page\n> > Rows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000 bytes /\n> > 1048576 = ~ 24.4 MB of data page space.\n> \n> That's not accurate. There's 32 bytes of overhead per row, and that \n> gives you just 61 tuples per page. Anyhow, I'd suggest measuring the \n> real table size with pg_relpages function (from contrib/pgstattuple) or \n> from pg_class.relpages column (after ANALYZE).\n> \n> > We are running on PostgreSQL 8.1.4 and are planning to move to 8.3 when\n> > it becomes available. Are there space utilization/performance\n> > improvements in WAL logging in the upcoming release?\n> \n> One big change in 8.3 is that COPY on a table that's been created or \n> truncated in the same transaction doesn't need to write WAL at all, if \n> WAL archiving isn't enabled.\n> \n\n\n\n\n\n\n\nSo for every data page there is a 20 byte header, for every row there is a 4 byte identifier (offset into the page), AND there is also a 28 byte fixed-size header (27 + optional null bitmap)?? (I did find the section in the 8.1 manual that give the physical page layout.) The other RDBMS platforms I have worked with have a header in the 28 byte range and a row pointer of 4 bytes, and that's it. I find it a bit surprising that PostgreSQL would need another 28 bytes per row to track its contents.\n\nI'll try the pg_relpages function as you suggest and recalculate from there.\n\nThanks for the info,\n\n-Keaton\n\n\n\nOn Thu, 2007-05-17 at 15:23 +0100, Heikki Linnakangas wrote:\n\n\nKeaton Adams wrote:\n> Using an 8K data page:\n> \n> 8K data page (8192 bytes)\n> Less page header and row overhead leaves ~8000 bytes\n> At 100 bytes per row = ~80 rows/page\n> Rows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000 bytes /\n> 1048576 = ~ 24.4 MB of data page space.\n\nThat's not accurate. There's 32 bytes of overhead per row, and that \ngives you just 61 tuples per page. Anyhow, I'd suggest measuring the \nreal table size with pg_relpages function (from contrib/pgstattuple) or \nfrom pg_class.relpages column (after ANALYZE).\n\n> We are running on PostgreSQL 8.1.4 and are planning to move to 8.3 when\n> it becomes available. Are there space utilization/performance\n> improvements in WAL logging in the upcoming release?\n\nOne big change in 8.3 is that COPY on a table that's been created or \ntruncated in the same transaction doesn't need to write WAL at all, if \nWAL archiving isn't enabled.",
"msg_date": "Thu, 17 May 2007 09:34:43 -0600",
"msg_from": "Keaton Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL log performance/efficiency question"
},
{
"msg_contents": "Keaton Adams wrote:\n> So for every data page there is a 20 byte header, for every row there is\n> a 4 byte identifier (offset into the page), AND there is also a 28 byte\n> fixed-size header (27 + optional null bitmap)?? (I did find the section\n> in the 8.1 manual that give the physical page layout.) The other RDBMS\n> platforms I have worked with have a header in the 28 byte range and a\n> row pointer of 4 bytes, and that's it. I find it a bit surprising that\n> PostgreSQL would need another 28 bytes per row to track its contents.\n\nYes, it is more than many other DBMSs. It contains mostly MVCC-related \nvisibility information that other DBMSs store at page level etc, or \ndon't have MVCC at all.\n\nAs Tom mentioned, that's going to be a bit better in 8.3. We reduced the \nheader size from 27 + null bitmap to 23 + null bitmap, which makes a big \ndifference especially on 64-bit architectures, where the header used to \nbe padded up to 32 bytes, and now it's only 24 bytes.\n\nFor character fields, including CHAR(100) like you have, we also store a \n4 bytes length header per field. That's been reduced to 1 byte for \nstring shorter than 127 bytes in 8.3.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Thu, 17 May 2007 16:52:11 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL log performance/efficiency question"
},
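One way to see the per-field part of that overhead directly is pg_column_size(); treating its availability on 8.1 as an assumption, a char(100) value should come back as 104 bytes, i.e. the 100 data bytes plus the 4-byte length word Heikki mentions:

  SELECT pg_column_size(name) AS stored_bytes   -- expect 104 on 8.1/8.2
  FROM testload
  LIMIT 1;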
{
"msg_contents": "OK, I understand.\n\nSo one clarifying question on WAL contents:\n\nOn an insert of a 100 byte row that is logged, what goes into the WAL\nlog? Is it 100 bytes, 132 bytes (row + overhead), or other? Does just\nthe row contents get logged, or the contents plus all of the relative\noverhead? I understand that after a checkpoint the first insert\nrequires the entire 8K page to be written to the WAL, so do subsequent\ninserts into WAL follow the same storage pattern as the layout on the\ndata page, or is the byte count less?\n\n-K\n\n\n\n> \n> 8K data page (8192 bytes)\n> Less page header and row overhead leaves ~8000 bytes\n> At 100 bytes per row = ~80 rows/page\n> Rows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000\n> bytes / 1048576 = ~ 24.4 MB of data page space.\n> \n\n\n\nOn Thu, 2007-05-17 at 08:04 -0600, Keaton Adams wrote:\n\n> I sent this to pgsql-admin but didn't receive a response. Would this\n> be a WAL log performance/efficiency issue?\n> \n> Thanks,\n> \n> Keaton\n> \n> \n> Given these postgresql.conf settings:\n> \n> #---------------------------------------------------------------------------\n> # WRITE AHEAD LOG\n> #---------------------------------------------------------------------------\n> \n> # - Settings -\n> \n> fsync = on # turns forced synchronization\n> on or off\n> wal_sync_method = fsync # the default is the first\n> option\n> # supported by the operating\n> system:\n> # open_datasync\n> # fdatasync\n> # fsync\n> # fsync_writethrough\n> # open_sync\n> full_page_writes = on # recover from partial page\n> writes\n> wal_buffers = 32 # min 4, 8KB each\n> commit_delay = 100000 # range 0-100000, in\n> microseconds\n> commit_siblings = 1000 # range 1-1000\n> \n> # - Checkpoints -\n> \n> checkpoint_segments = 500 # in logfile segments, min 1, 16MB\n> each\n> checkpoint_timeout = 300 # range 30-3600, in seconds\n> checkpoint_warning = 120 # in seconds, 0 is off\n> \n> # - Archiving -\n> archive_command = '/mnt/logship/scripts/archivemaster.sh %p %f'\n> # command to use to archive a logfile\n> # segment\n> \n> \n> \n> And these tables to load data into:\n> \n> List of relations\n> Schema | Name | Type | Owner \n> --------+-----------+-------+----------\n> public | testload | table | postgres\n> public | testload2 | table | postgres\n> public | testload3 | table | postgres\n> (3 rows)\n> \n> postgres=# \\d testload\n> Table \"public.testload\"\n> Column | Type | Modifiers \n> --------+----------------+-----------\n> name | character(100) | \n> \n> postgres=# \\d testload2\n> Table \"public.testload2\"\n> Column | Type | Modifiers \n> --------+----------------+-----------\n> name | character(100) | \n> \n> postgres=# \\d testload3\n> Table \"public.testload3\"\n> Column | Type | Modifiers \n> --------+----------------+-----------\n> name | character(100) | \n> \n> There are no indexes on the tables.\n> \n> \n> Using an 8K data page:\n> \n> 8K data page (8192 bytes)\n> Less page header and row overhead leaves ~8000 bytes\n> At 100 bytes per row = ~80 rows/page\n> Rows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000\n> bytes / 1048576 = ~ 24.4 MB of data page space.\n> \n> The test file is shown here (250,000 rows all the same):\n> -bash-3.1$ more datafile.txt\n> AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\n> AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\n> 
AAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\n> \n> The load script:\n> -bash-3.1$ more loaddata.sql\n> copy testload from '/home/kadams/logship/datafile.txt' delimiter '|';\n> copy testload2 from '/home/kadams/logship/datafile.txt' delimiter '|';\n> copy testload3 from '/home/kadams/logship/datafile.txt' delimiter '|';\n> \n> So the one load process does a COPY into the three tables. 24.4 MB *\n> 3 tables = ~ 73.2 MB of data page space.\n> \n> This is the only process running on the database. No other\n> loads/users are on the system.\n> \n> psql -f sql/loaddata.sql >/dev/null 2>&1 &\n> \n> It seems that 112 MB of WAL file space (16 MB * 7) is required for\n> 73.2 MB of loaded data, which is an extra 34.8% of disk space to\n> log/archive the COPY commands:\n> \n> First pass:\n> LOG: transaction ID wrap limit is 2147484146, limited by database\n> \"postgres\"\n> LOG: archived transaction log file \"00000001000000010000005E\"\n> LOG: archived transaction log file \"00000001000000010000005F\"\n> LOG: archived transaction log file \"000000010000000100000060\"\n> LOG: archived transaction log file \"000000010000000100000061\"\n> LOG: archived transaction log file \"000000010000000100000062\"\n> LOG: archived transaction log file \"000000010000000100000063\"\n> LOG: archived transaction log file \"000000010000000100000064\"\n> \n> # of logs in pg_xlog: 9\n> \n> Second pass:\n> LOG: archived transaction log file \"000000010000000100000065\"\n> LOG: archived transaction log file \"000000010000000100000066\"\n> LOG: archived transaction log file \"000000010000000100000067\"\n> LOG: archived transaction log file \"000000010000000100000068\"\n> LOG: archived transaction log file \"000000010000000100000069\"\n> LOG: archived transaction log file \"00000001000000010000006A\"\n> LOG: archived transaction log file \"00000001000000010000006B\"\n> \n> # of logs in pg_xlog: 15\n> \n> Third pass:\n> LOG: archived transaction log file \"00000001000000010000006C\"\n> LOG: archived transaction log file \"00000001000000010000006D\"\n> LOG: archived transaction log file \"00000001000000010000006E\"\n> LOG: archived transaction log file \"00000001000000010000006F\"\n> LOG: archived transaction log file \"000000010000000100000070\"\n> LOG: archived transaction log file \"000000010000000100000071\"\n> LOG: archived transaction log file \"000000010000000100000072\"\n> \n> # of logs in pg_xlog: 22\n> \n> Fourth pass:\n> LOG: archived transaction log file \"000000010000000100000073\"\n> LOG: archived transaction log file \"000000010000000100000074\"\n> LOG: archived transaction log file \"000000010000000100000075\"\n> LOG: archived transaction log file \"000000010000000100000076\"\n> LOG: archived transaction log file \"000000010000000100000077\"\n> LOG: archived transaction log file \"000000010000000100000078\"\n> LOG: archived transaction log file \"000000010000000100000079\"\n> \n> # of logs in pg_xlog: 29\n> \n> PostgreSQL continued to add log files in pg_xlog, so my assumption is\n> that checkpoints did not come into play during the load process,\n> correct? (Frequent checkpoints would have added even more to the WAL\n> file overhead, is my understanding.)\n> \n> So is there anything I can do to reduce the 34.8% overhead in WAL file\n> space when loading data? 
Do you see any glaring mistakes in the\n> calculations themselves, and would you agree with this overhead\n> figure?\n> \n> We are running on PostgreSQL 8.1.4 and are planning to move to 8.3\n> when it becomes available. Are there space utilization/performance\n> improvements in WAL logging in the upcoming release?\n> \n> Thanks,\n> \n> Keaton\n> \n\n\n\n\n\n\n\n\nOK, I understand.\n\nSo one clarifying question on WAL contents:\n\nOn an insert of a 100 byte row that is logged, what goes into the WAL log? Is it 100 bytes, 132 bytes (row + overhead), or other? Does just the row contents get logged, or the contents plus all of the relative overhead? I understand that after a checkpoint the first insert requires the entire 8K page to be written to the WAL, so do subsequent inserts into WAL follow the same storage pattern as the layout on the data page, or is the byte count less?\n\n-K\n\n\n\n\n8K data page (8192 bytes)\nLess page header and row overhead leaves ~8000 bytes\nAt 100 bytes per row = ~80 rows/page\nRows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000 bytes / 1048576 = ~ 24.4 MB of data page space.\n\n\n\n\nOn Thu, 2007-05-17 at 08:04 -0600, Keaton Adams wrote:\n\nI sent this to pgsql-admin but didn't receive a response. Would this be a WAL log performance/efficiency issue?\n\nThanks,\n\nKeaton\n\n\nGiven these postgresql.conf settings:\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = on # turns forced synchronization on or off\nwal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\nfull_page_writes = on # recover from partial page writes\nwal_buffers = 32 # min 4, 8KB each\ncommit_delay = 100000 # range 0-100000, in microseconds\ncommit_siblings = 1000 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 500 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 300 # range 30-3600, in seconds\ncheckpoint_warning = 120 # in seconds, 0 is off\n\n# - Archiving -\narchive_command = '/mnt/logship/scripts/archivemaster.sh %p %f'\n# command to use to archive a logfile\n# segment\n\n\n\nAnd these tables to load data into:\n\n List of relations\nSchema | Name | Type | Owner \n--------+-----------+-------+----------\npublic | testload | table | postgres\npublic | testload2 | table | postgres\npublic | testload3 | table | postgres\n(3 rows)\n\npostgres=# \\d testload\n Table \"public.testload\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\npostgres=# \\d testload2\n Table \"public.testload2\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\npostgres=# \\d testload3\n Table \"public.testload3\"\nColumn | Type | Modifiers \n--------+----------------+-----------\nname | character(100) | \n\nThere are no indexes on the tables.\n\n\nUsing an 8K data page:\n\n8K data page (8192 bytes)\nLess page header and row overhead leaves ~8000 bytes\nAt 100 bytes per row = ~80 rows/page\nRows loaded: 250,000 / 80 = 3125 data pages * 8192 = 25,600,000 bytes / 1048576 = ~ 24.4 MB of data page space.\n\nThe test file is shown here (250,000 rows all the same):\n-bash-3.1$ more 
datafile.txt\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\nAAAAAAAAAABBBBBBBBBBCCCCCCCCCCDDDDDDDDDDEEEEEEEEEEFFFFFFFFFFGGGGGGGGGGHHHHHHHHHHIIIIIIIIIIJJJJJJJJJJ\n\nThe load script:\n-bash-3.1$ more loaddata.sql\ncopy testload from '/home/kadams/logship/datafile.txt' delimiter '|';\ncopy testload2 from '/home/kadams/logship/datafile.txt' delimiter '|';\ncopy testload3 from '/home/kadams/logship/datafile.txt' delimiter '|';\n\nSo the one load process does a COPY into the three tables. 24.4 MB * 3 tables = ~ 73.2 MB of data page space.\n\nThis is the only process running on the database. No other loads/users are on the system.\n\npsql -f sql/loaddata.sql >/dev/null 2>&1 &\n\nIt seems that 112 MB of WAL file space (16 MB * 7) is required for 73.2 MB of loaded data, which is an extra 34.8% of disk space to log/archive the COPY commands:\n\nFirst pass:\nLOG: transaction ID wrap limit is 2147484146, limited by database \"postgres\"\nLOG: archived transaction log file \"00000001000000010000005E\"\nLOG: archived transaction log file \"00000001000000010000005F\"\nLOG: archived transaction log file \"000000010000000100000060\"\nLOG: archived transaction log file \"000000010000000100000061\"\nLOG: archived transaction log file \"000000010000000100000062\"\nLOG: archived transaction log file \"000000010000000100000063\"\nLOG: archived transaction log file \"000000010000000100000064\"\n\n# of logs in pg_xlog: 9\n\nSecond pass:\nLOG: archived transaction log file \"000000010000000100000065\"\nLOG: archived transaction log file \"000000010000000100000066\"\nLOG: archived transaction log file \"000000010000000100000067\"\nLOG: archived transaction log file \"000000010000000100000068\"\nLOG: archived transaction log file \"000000010000000100000069\"\nLOG: archived transaction log file \"00000001000000010000006A\"\nLOG: archived transaction log file \"00000001000000010000006B\"\n\n# of logs in pg_xlog: 15\n\nThird pass:\nLOG: archived transaction log file \"00000001000000010000006C\"\nLOG: archived transaction log file \"00000001000000010000006D\"\nLOG: archived transaction log file \"00000001000000010000006E\"\nLOG: archived transaction log file \"00000001000000010000006F\"\nLOG: archived transaction log file \"000000010000000100000070\"\nLOG: archived transaction log file \"000000010000000100000071\"\nLOG: archived transaction log file \"000000010000000100000072\"\n\n# of logs in pg_xlog: 22\n\nFourth pass:\nLOG: archived transaction log file \"000000010000000100000073\"\nLOG: archived transaction log file \"000000010000000100000074\"\nLOG: archived transaction log file \"000000010000000100000075\"\nLOG: archived transaction log file \"000000010000000100000076\"\nLOG: archived transaction log file \"000000010000000100000077\"\nLOG: archived transaction log file \"000000010000000100000078\"\nLOG: archived transaction log file \"000000010000000100000079\"\n\n# of logs in pg_xlog: 29\n\nPostgreSQL continued to add log files in pg_xlog, so my assumption is that checkpoints did not come into play during the load process, correct? (Frequent checkpoints would have added even more to the WAL file overhead, is my understanding.)\n\nSo is there anything I can do to reduce the 34.8% overhead in WAL file space when loading data? 
Do you see any glaring mistakes in the calculations themselves, and would you agree with this overhead figure?\n\nWe are running on PostgreSQL 8.1.4 and are planning to move to 8.3 when it becomes available. Are there space utilization/performance improvements in WAL logging in the upcoming release?\n\nThanks,\n\nKeaton",
"msg_date": "Thu, 17 May 2007 10:01:34 -0600",
"msg_from": "Keaton Adams <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL log performance/efficiency question"
}
] |
[
{
"msg_contents": "We have a database running on a 4 processor machine. As time goes by the IO\ngets worse and worse peeking at about 200% as the machine loads up.\n\n \n\nThe weird thing is that if we restart postgres it’s fine for hours but over\ntime it goes bad again.\n\n \n\n(CPU usage graph here HYPERLINK\n\"http://www.flickr.com/photos/8347741@N02/502596262/\"http://www.flickr.com/p\nhotos/8347741@N02/502596262/ ) You can clearly see where the restart\nhappens in the IO area\n\n \n\nThis is Postgres 8.1.4 64bit.\n\n \n\nAnyone have any ideas?\n\n \n\nThanks\n\nRalph\n\n \n\n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 5/12/2006\n4:07 p.m.\n \n\n\n\n\n\n\n\n\n\n\nWe have a database running on a 4 processor\nmachine. As time goes by the IO gets worse and worse peeking at about\n200% as the machine loads up.\n \nThe weird thing is that if we restart\npostgres it’s fine for hours but over time it goes bad again.\n \n(CPU usage graph here http://www.flickr.com/photos/8347741@N02/502596262/\n) You can clearly see where the restart happens in the IO area\n \nThis is Postgres 8.1.4 64bit.\n \nAnyone have any ideas?\n \nThanks\nRalph",
"msg_date": "Fri, 18 May 2007 10:45:29 +1200",
"msg_from": "\"Ralph Mason\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ever Increasing IOWAIT"
},
{
"msg_contents": "Ralph Mason wrote:\n> We have a database running on a 4 processor machine. As time goes by \n> the IO gets worse and worse peeking at about 200% as the machine loads up.\n> \n> \n> \n> The weird thing is that if we restart postgres it’s fine for hours but \n> over time it goes bad again.\n> \n> \n> \n> (CPU usage graph here \n> http://www.flickr.com/photos/8347741@N02/502596262/ ) You can clearly \n> see where the restart happens in the IO area\n> \n> \n> \n> This is Postgres 8.1.4 64bit.\n\n1. Upgrade to 8.1.9. There is a bug with autovac that is fixed that is \npretty important.\n\n> \n> \n> \n> Anyone have any ideas?\n> \n\nSure... you aren't analyzing enough. You are using prepared queries that \nhave plans that get stale... you are not running autovac... You are \ncursed (kidding)..\n\nJoshua D. Drake\n\n> \n> \n> Thanks\n> \n> Ralph\n> \n> \n> \n> \n> --\n> Internal Virus Database is out-of-date.\n> Checked by AVG Free Edition.\n> Version: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: \n> 5/12/2006 4:07 p.m.\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n\n",
"msg_date": "Thu, 17 May 2007 16:02:34 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ever Increasing IOWAIT"
},
{
"msg_contents": "Hi Josh - thanks for thoughts.\n\n> \n> This is Postgres 8.1.4 64bit.\n\n>1. Upgrade to 8.1.9. There is a bug with autovac that is fixed that is \n>pretty important.\n\nWe don't use pg_autovac - we have our own process that runs very often\nvacuuming tables that are dirty. It works well and vacuums when activity is\nhappening. During busy time active tables are vacuumed about once a minute.\nThe 'slack' space on busy tables sits at about 100% (eg the table has 2X the\nnumber of pages it would after a cluster) We use rows updated and deleted\nto decide what to vacuum. Those busy tables are reasonably small and take\nless than a second to vacuum. \n\nAlso, If it were a vacuuming problem why would a restart of the engine fix\nit fully?\n\n> \n> Anyone have any ideas?\n> \n\n>Sure... you aren't analyzing enough. You are using prepared queries that \n>have plans that get stale... you are not running autovac... You are \n>cursed (kidding)..\n\nThe shape of the data never changes and we don't reanalyze on start-up so\nsuspect analyzing won't do much (although we do every so often). \n\nWe don't use prepared queries - just lots of functions - but like I said\nabove the shape of the data doesn't change. So even if postgres stores plans\nfor those (does it?) it seems like it should be just fine.\n\n\nThanks\nRalph\n \n\n\n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 5/12/2006\n4:07 p.m.\n \n\n",
"msg_date": "Fri, 18 May 2007 11:19:09 +1200",
"msg_from": "\"Ralph Mason\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ever Increasing IOWAIT"
},
{
"msg_contents": "Ralph Mason wrote:\n> We have a database running on a 4 processor machine. As time goes by the IO\n> gets worse and worse peeking at about 200% as the machine loads up.\n> \n> The weird thing is that if we restart postgres it�s fine for hours but over\n> time it goes bad again.\n> \n> (CPU usage graph here HYPERLINK\n> \"http://www.flickr.com/photos/8347741@N02/502596262/\"http://www.flickr.com/p\n> hotos/8347741@N02/502596262/ ) You can clearly see where the restart\n> happens in the IO area\n\nI'm assuming here we're talking about that big block of iowait at about \n4-6am?\n\nI take it vmstat/iostat show a corresponding increase in disk activity \nat that time.\n\nThe question is - what?\nDoes the number of PG processes increase at that time? If that's not \nintentional then you might need to see what your applications are up to.\n\nDo you have a vacuum/backup scheduled for that time? Do you have some \nother process doing a lot of file I/O at that time?\n\n> This is Postgres 8.1.4 64bit.\n\nYou'll want to upgrade to the latest patch release - you're missing 5 \nlots of bug-fixes there.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 18 May 2007 10:12:01 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ever Increasing IOWAIT"
},
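A simple way to track the backend count over time from inside the database (how the samples are collected, e.g. from cron once a minute, is left as an assumption):

  SELECT now() AS sampled_at, count(*) AS backends
  FROM pg_stat_activity;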
{
"msg_contents": "You're not swapping are you? One explanation could be that PG is\nconfigured to think it has access to a little more memory than the box\ncan really provide, which forces it to swap once it's been running for\nlong enough to fill up its shared buffers or after a certain number of\nconcurrent connections are opened.\n\n-- Mark Lewis\n\nOn Fri, 2007-05-18 at 10:45 +1200, Ralph Mason wrote:\n> We have a database running on a 4 processor machine. As time goes by\n> the IO gets worse and worse peeking at about 200% as the machine loads\n> up.\n> \n> \n> \n> The weird thing is that if we restart postgres it’s fine for hours but\n> over time it goes bad again.\n> \n> \n> \n> (CPU usage graph here\n> http://www.flickr.com/photos/8347741@N02/502596262/ ) You can clearly\n> see where the restart happens in the IO area\n> \n> \n> \n> This is Postgres 8.1.4 64bit.\n> \n> \n> \n> Anyone have any ideas?\n> \n> \n> \n> Thanks\n> \n> Ralph\n> \n> \n> \n> \n> \n> --\n> Internal Virus Database is out-of-date.\n> Checked by AVG Free Edition.\n> Version: 7.5.432 / Virus Database: 268.15.9/573 - Release Date:\n> 5/12/2006 4:07 p.m.\n> \n> \n",
"msg_date": "Fri, 18 May 2007 06:58:03 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ever Increasing IOWAIT"
},
{
"msg_contents": "Ralph Mason wrote:\n> We have a database running on a 4 processor machine. As time goes by \n> the IO gets worse and worse peeking at about 200% as the machine loads up.\n> \n> The weird thing is that if we restart postgres it’s fine for hours but \n> over time it goes bad again.\n> \n> (CPU usage graph here HYPERLINK\n> \"http://www.flickr.com/photos/8347741@N02/502596262/\"http://www.flickr\n> .com/p hotos/8347741@N02/502596262/ ) You can clearly see where the \n> restart happens in the IO area\n>I'm assuming here we're talking about that big block of iowait at about \n>4-6am?\n\nActually no - that is a vacuum of the whole database to double check It's\nnot a vacuuming problem (I am sure it's not). The restart is at at 22:00\nwhere you see the io drop to nothing, the database is still doing the same\nwork.\n\n>>I take it vmstat/iostat show a corresponding increase in disk activity \n>>at that time.\n\nI didn't know you could have IO/wait without disk activity - I will check\nthat out. \n\n>>The question is - what?\n>>Does the number of PG processes increase at that time? If that's not \n>>intentional then you might need to see what your applications are up to.\n\nNo the number of connections is stable and the jobs they do stays the same,\njust this deteriorating of i/o wait over time.\n\n>>Do you have a vacuum/backup scheduled for that time? Do you have some \n>>other process doing a lot of file I/O at that time?\n\n> This is Postgres 8.1.4 64bit.\n\n>You'll want to upgrade to the latest patch release - you're missing 5 \n>lots of bug-fixes there.\n\nThanks - will try that.\n\n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 5/12/2006\n4:07 p.m.\n \n\n",
"msg_date": "Mon, 21 May 2007 08:40:39 +1200",
"msg_from": "\"Ralph Mason\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ever Increasing IOWAIT"
},
{
"msg_contents": "\n\n>You're not swapping are you? One explanation could be that PG is\n>configured to think it has access to a little more memory than the box\n>can really provide, which forces it to swap once it's been running for\n>long enough to fill up its shared buffers or after a certain number of\n>concurrent connections are opened.\n>\n>-- Mark Lewis\n\nNo - no swap on this machine. The number of connections is stable.\n\nRalph\n\n\nOn Fri, 2007-05-18 at 10:45 +1200, Ralph Mason wrote:\n> We have a database running on a 4 processor machine. As time goes by\n> the IO gets worse and worse peeking at about 200% as the machine loads\n> up.\n> \n> \n> \n> The weird thing is that if we restart postgres it’s fine for hours but\n> over time it goes bad again.\n> \n> \n> \n> (CPU usage graph here\n> http://www.flickr.com/photos/8347741@N02/502596262/ ) You can clearly\n> see where the restart happens in the IO area\n> \n> \n> \n> This is Postgres 8.1.4 64bit.\n> \n> \n> \n> Anyone have any ideas?\n> \n> \n> \n> Thanks\n> \n> Ralph\n> \n> \n> \n> \n> \n> --\n> Internal Virus Database is out-of-date.\n> Checked by AVG Free Edition.\n> Version: 7.5.432 / Virus Database: 268.15.9/573 - Release Date:\n> 5/12/2006 4:07 p.m.\n> \n> \n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 5/12/2006 4:07 p.m.\n \n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 5/12/2006 4:07 p.m.\n \n\n",
"msg_date": "Mon, 21 May 2007 08:41:41 +1200",
"msg_from": "\"Ralph Mason\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ever Increasing IOWAIT"
},
{
"msg_contents": "\"Ralph Mason\" <[email protected]> writes:\n> Ralph Mason wrote:\n>> We have a database running on a 4 processor machine. As time goes by \n>> the IO gets worse and worse peeking at about 200% as the machine loads up.\n>> \n>> The weird thing is that if we restart postgres it's fine for hours but\n>> over time it goes bad again.\n\nDo you by any chance have stats collection enabled and\nstats_reset_on_server_start set to true? If so, maybe this is explained\nby growth in the size of the stats file over time. It'd be interesting\nto keep an eye on the size of $PGDATA/global/pgstat.stat over a fast-to-\nslow cycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 May 2007 17:55:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ever Increasing IOWAIT "
},
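One way to watch that file from psql rather than the shell, assuming superuser access and that 8.1's pg_stat_file() is available (the path is relative to $PGDATA):

  SELECT size, modification
  FROM pg_stat_file('global/pgstat.stat');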
{
"msg_contents": "\"Ralph Mason\" <[email protected]> writes:\n> Ralph Mason wrote:\n>> We have a database running on a 4 processor machine. As time goes by \n>> the IO gets worse and worse peeking at about 200% as the machine loads\nup.\n>> \n>> The weird thing is that if we restart postgres it's fine for hours but\n>> over time it goes bad again.\n\n>Do you by any chance have stats collection enabled and\n>stats_reset_on_server_start set to true? If so, maybe this is explained\n>by growth in the size of the stats file over time. It'd be interesting\n>to keep an eye on the size of $PGDATA/global/pgstat.stat over a fast-to-\n>slow cycle.\n\nWe do because we use the stats to figure out when we will vacuum. Our\nvacuum process reads that table and when it runs resets it using\npg_stat_reset() to clear it down each time it runs (about ever 60 seconds\nwhen the db is very busy), stats_reset_on_server_restart is off.\n\nInterestingly after a suggestion here I went and looked at the IO stat at\nthe same time. It shows the writes as expected and picking up exactly where\nthey were before the reset, but the reads drop dramatically - like it's\nreading far less data after the reset.\n\nI will watch the size of the pgstat.stat table.\n\nRalph\n\n\n\n\n-- \nInternal Virus Database is out-of-date.\nChecked by AVG Free Edition.\nVersion: 7.5.432 / Virus Database: 268.15.9/573 - Release Date: 5/12/2006\n4:07 p.m.\n \n\n",
"msg_date": "Mon, 21 May 2007 17:17:42 +1200",
"msg_from": "\"Ralph Mason\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Ever Increasing IOWAIT "
}
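For reference, a sketch of the kind of query such a home-grown vacuum scheduler might run against the (periodically reset) statistics; the row-count threshold here is an arbitrary assumption, not Ralph's actual rule:

  SELECT schemaname, relname,
         n_tup_upd + n_tup_del AS rows_changed_since_reset
  FROM pg_stat_user_tables
  WHERE n_tup_upd + n_tup_del > 1000            -- threshold is an assumption
  ORDER BY rows_changed_since_reset DESC;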
] |
[
{
"msg_contents": "I recently tried to upgrade to 8.2.4, but major queries I wrote for 8.1.4 are now planned differently on 8.2.4 and are no longer usable. What the 8.1.4 planned as a series of 'hash left join's and took about 2 seconds now is planned as 'nested loop left joins' and takes forever.\n\nOther request were also affected, increasing the time form miliseconds to hundreds of miliseconds, even seconds.\n\nThe worst performance hit was on the following query. I know it is a bit extreme, but worked perfectly on 8.1.4.\n\nRegards,\n\nLiviu\n\n\nSELECT n.nodeid, \n CASE\n WHEN n.parentnodeid IS NULL THEN -1\n ELSE n.parentnodeid\n END AS parentnodeid, n.nodename, av.value AS iconname, \n avt.value AS templatename, avs.value AS subclass, n.globalnodeid, n.isaddupi, \n CASE\n WHEN realms.nodeid IS NOT NULL THEN 'SERVER'::text\n WHEN areas.nodeid IS NOT NULL THEN 'AREA'::text\n WHEN rtus.nodeid IS NOT NULL THEN 'DEVICE'::text\n WHEN rtunodes.nodeid IS NOT NULL THEN 'TAG'::text\n ELSE NULL::text\n END AS \"class\", realms.name AS realmname, \n CASE\n WHEN n.nodeclass::text = 'area'::text AND n.nodesubclass IS NOT NULL THEN true\n ELSE false\n END AS istemplate, \n CASE\n WHEN realms.nodeid IS NOT NULL THEN realms.nodeid\n WHEN areas.nodeid IS NOT NULL THEN areas.realmid\n WHEN rtus.nodeid IS NOT NULL THEN rtus.realmid\n WHEN rtunodes.nodeid IS NOT NULL THEN r.realmid\n ELSE NULL::integer\n END AS realmid, rtunodes.rtuid, rtunodes.isinvalid, n.isvalid\n FROM nodes n\n LEFT JOIN realms ON n.nodeid = realms.nodeid\n LEFT JOIN areas ON n.nodeid = areas.nodeid\n LEFT JOIN rtus ON n.nodeid = rtus.nodeid\n LEFT JOIN templates ON n.nodeid = templates.nodeid\n LEFT JOIN templatenodes ON n.nodeid = templatenodes.nodeid\n LEFT JOIN (rtunodes\n JOIN rtus r ON rtunodes.rtuid = r.nodeid) ON n.nodeid = rtunodes.nodeid\n LEFT JOIN ( SELECT attributes_values2_view.nodeid, attributes_values2_view.value\n FROM attributes_values2_view\n WHERE attributes_values2_view.attributename::text = 'iconName'::text) av ON n.nodeid = av.nodeid\n LEFT JOIN ( SELECT attributes_values2_view.nodeid, attributes_values2_view.value\n FROM attributes_values2_view\n WHERE attributes_values2_view.attributename::text = 'addUPItemplate'::text) avt ON n.nodeid = avt.nodeid\n LEFT JOIN ( SELECT attributes_values2_view.nodeid, attributes_values2_view.value\n FROM attributes_values2_view\n WHERE attributes_values2_view.attributename::text = 'addUPIsubclass'::text) avs ON n.nodeid = avs.nodeid\n WHERE templates.nodeid IS NULL AND templatenodes.nodeid IS NULL;\n\n\nCREATE OR REPLACE VIEW attributes_values2_view AS \n SELECT nodeattributes.nodeid, nodeattributes.attributeid, a.name AS attributename, \n t.name AS typename, a.typeid, a.valuesize, a.flags, nodeattributes.value, a.creationdate\n FROM nodeattributes\n LEFT JOIN attributes a USING (attributeid)\n LEFT JOIN types t USING (typeid)\n WHERE t.isattributetype;\n\n\n\nthe 8.2.4 plan with join_collapse_limit = 1 (with default it was worse, full of nested loops)\n\n\"Nested Loop Left Join (cost=32.01..2012.31 rows=1 width=230)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=26.47..1411.38 rows=1 width=220)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=20.93..810.45 rows=1 width=210)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=15.39..209.52 rows=1 width=200)\"\n\" Join Filter: (n.nodeid = rtunodes.nodeid)\"\n\" -> Nested Loop Left Join 
(cost=11.14..122.60 rows=1 width=187)\"\n\" Filter: (templatenodes.nodeid IS NULL)\"\n\" -> Hash Left Join (cost=11.14..99.52 rows=11 width=187)\"\n\" Hash Cond: (n.nodeid = templates.nodeid)\"\n\" Filter: (templates.nodeid IS NULL)\"\n\" -> Hash Left Join (cost=8.70..87.95 rows=2266 width=187)\"\n\" Hash Cond: (n.nodeid = rtus.nodeid)\"\n\" -> Hash Left Join (cost=4.45..74.20 rows=2266 width=179)\"\n\" Hash Cond: (n.nodeid = areas.nodeid)\"\n\" -> Hash Left Join (cost=1.45..61.81 rows=2266 width=171)\"\n\" Hash Cond: (n.nodeid = realms.nodeid)\"\n\" -> Seq Scan on nodes n (cost=0.00..51.66 rows=2266 width=49)\"\n\" -> Hash (cost=1.20..1.20 rows=20 width=122)\"\n\" -> Seq Scan on realms (cost=0.00..1.20 rows=20 width=122)\"\n\" -> Hash (cost=1.89..1.89 rows=89 width=8)\"\n\" -> Seq Scan on areas (cost=0.00..1.89 rows=89 width=8)\"\n\" -> Hash (cost=3.00..3.00 rows=100 width=8)\"\n\" -> Seq Scan on rtus (cost=0.00..3.00 rows=100 width=8)\"\n\" -> Hash (cost=1.64..1.64 rows=64 width=4)\"\n\" -> Seq Scan on templates (cost=0.00..1.64 rows=64 width=4)\"\n\" -> Index Scan using nodeid_pkey on templatenodes (cost=0.00..2.09 rows=1 width=4)\"\n\" Index Cond: (n.nodeid = templatenodes.nodeid)\"\n\" -> Hash Join (cost=4.25..63.93 rows=1839 width=13)\"\n\" Hash Cond: (rtunodes.rtuid = r.nodeid)\"\n\" -> Seq Scan on rtunodes (cost=0.00..34.39 rows=1839 width=9)\"\n\" -> Hash (cost=3.00..3.00 rows=100 width=8)\"\n\" -> Seq Scan on rtus r (cost=0.00..3.00 rows=100 width=8)\"\n\" -> Hash Join (cost=5.54..600.89 rows=3 width=14)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Hash Join (cost=4.38..599.23 rows=125 width=18)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18)\"\n\" -> Hash (cost=4.36..4.36 rows=1 width=8)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.36 rows=1 width=8)\"\n\" Filter: ((name)::text = 'iconName'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=5 width=4)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=5 width=4)\"\n\" Filter: isattributetype\"\n\" -> Hash Join (cost=5.54..600.89 rows=3 width=14)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Hash Join (cost=4.38..599.23 rows=125 width=18)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18)\"\n\" -> Hash (cost=4.36..4.36 rows=1 width=8)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.36 rows=1 width=8)\"\n\" Filter: ((name)::text = 'addUPItemplate'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=5 width=4)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=5 width=4)\"\n\" Filter: isattributetype\"\n\" -> Hash Join (cost=5.54..600.89 rows=3 width=14)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Hash Join (cost=4.38..599.23 rows=125 width=18)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18)\"\n\" -> Hash (cost=4.36..4.36 rows=1 width=8)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.36 rows=1 width=8)\"\n\" Filter: ((name)::text = 'addUPIsubclass'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=5 width=4)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=5 width=4)\"\n\" Filter: isattributetype\"\n\n\n\nthe 8.1.4 plan\n\n\"Hash Left Join (cost=1587.19..1775.85 rows=2270 width=230)\"\n\" Hash Cond: (\"outer\".nodeid = \"inner\".nodeid)\"\n\" -> Hash Left Join (cost=1086.04..1257.64 rows=2270 width=220)\"\n\" Hash Cond: (\"outer\".nodeid = 
\"inner\".nodeid)\"\n\" -> Hash Left Join (cost=584.89..745.10 rows=2270 width=210)\"\n\" Hash Cond: (\"outer\".nodeid = \"inner\".nodeid)\"\n\" -> Hash Left Join (cost=83.74..232.55 rows=2270 width=200)\"\n\" Hash Cond: (\"outer\".nodeid = \"inner\".nodeid)\"\n\" -> Hash Left Join (cost=14.47..128.10 rows=2270 width=187)\"\n\" Hash Cond: (\"outer\".nodeid = \"inner\".nodeid)\"\n\" Filter: (\"inner\".nodeid IS NULL)\"\n\" -> Hash Left Join (cost=8.43..108.26 rows=2270 width=187)\"\n\" Hash Cond: (\"outer\".nodeid = \"inner\".nodeid)\"\n\" Filter: (\"inner\".nodeid IS NULL)\"\n\" -> Hash Left Join (cost=6.62..94.47 rows=2270 width=187)\"\n\" Hash Cond: (\"outer\".nodeid = \"inner\".nodeid)\"\n\" -> Hash Left Join (cost=3.30..78.74 rows=2270 width=179)\"\n\" Hash Cond: (\"outer\".nodeid = \"inner\".nodeid)\"\n\" -> Hash Left Join (cost=1.24..64.48 rows=2270 width=171)\"\n\" Hash Cond: (\"outer\".nodeid = \"inner\".nodeid)\"\n\" -> Seq Scan on nodes n (cost=0.00..51.70 rows=2270 width=49)\"\n\" -> Hash (cost=1.19..1.19 rows=19 width=122)\"\n\" -> Seq Scan on realms (cost=0.00..1.19 rows=19 width=122)\"\n\" -> Hash (cost=1.85..1.85 rows=85 width=8)\"\n\" -> Seq Scan on areas (cost=0.00..1.85 rows=85 width=8)\"\n\" -> Hash (cost=3.06..3.06 rows=106 width=8)\"\n\" -> Seq Scan on rtus (cost=0.00..3.06 rows=106 width=8)\"\n\" -> Hash (cost=1.64..1.64 rows=64 width=4)\"\n\" -> Seq Scan on templates (cost=0.00..1.64 rows=64 width=4)\"\n\" -> Hash (cost=5.44..5.44 rows=244 width=4)\"\n\" -> Seq Scan on templatenodes (cost=0.00..5.44 rows=244 width=4)\"\n\" -> Hash (cost=64.72..64.72 rows=1816 width=13)\"\n\" -> Hash Join (cost=3.33..64.72 rows=1816 width=13)\"\n\" Hash Cond: (\"outer\".rtuid = \"inner\".nodeid)\"\n\" -> Seq Scan on rtunodes (cost=0.00..34.16 rows=1816 width=9)\"\n\" -> Hash (cost=3.06..3.06 rows=106 width=8)\"\n\" -> Seq Scan on rtus r (cost=0.00..3.06 rows=106 width=8)\"\n\" -> Hash (cost=501.14..501.14 rows=4 width=14)\"\n\" -> Nested Loop (cost=207.37..501.14 rows=4 width=14)\"\n\" -> Nested Loop (cost=0.00..5.44 rows=1 width=4)\"\n\" Join Filter: (\"outer\".typeid = \"inner\".typeid)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=1 width=8)\"\n\" Filter: ((name)::text = 'iconName'::text)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=5 width=4)\"\n\" Filter: isattributetype\"\n\" -> Bitmap Heap Scan on nodeattributes (cost=207.37..493.33 rows=190 width=18)\"\n\" Recheck Cond: (nodeattributes.attributeid = \"outer\".attributeid)\"\n\" -> Bitmap Index Scan on nodeattributes_pkey (cost=0.00..207.37 rows=190 width=0)\"\n\" Index Cond: (nodeattributes.attributeid = \"outer\".attributeid)\"\n\" -> Hash (cost=501.14..501.14 rows=4 width=14)\"\n\" -> Nested Loop (cost=207.37..501.14 rows=4 width=14)\"\n\" -> Nested Loop (cost=0.00..5.44 rows=1 width=4)\"\n\" Join Filter: (\"outer\".typeid = \"inner\".typeid)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=1 width=8)\"\n\" Filter: ((name)::text = 'addUPItemplate'::text)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=5 width=4)\"\n\" Filter: isattributetype\"\n\" -> Bitmap Heap Scan on nodeattributes (cost=207.37..493.33 rows=190 width=18)\"\n\" Recheck Cond: (nodeattributes.attributeid = \"outer\".attributeid)\"\n\" -> Bitmap Index Scan on nodeattributes_pkey (cost=0.00..207.37 rows=190 width=0)\"\n\" Index Cond: (nodeattributes.attributeid = \"outer\".attributeid)\"\n\" -> Hash (cost=501.14..501.14 rows=4 width=14)\"\n\" -> Nested Loop (cost=207.37..501.14 rows=4 width=14)\"\n\" -> Nested Loop (cost=0.00..5.44 
rows=1 width=4)\"\n\" Join Filter: (\"outer\".typeid = \"inner\".typeid)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=1 width=8)\"\n\" Filter: ((name)::text = 'addUPIsubclass'::text)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=5 width=4)\"\n\" Filter: isattributetype\"\n\" -> Bitmap Heap Scan on nodeattributes (cost=207.37..493.33 rows=190 width=18)\"\n\" Recheck Cond: (nodeattributes.attributeid = \"outer\".attributeid)\"\n\" -> Bitmap Index Scan on nodeattributes_pkey (cost=0.00..207.37 rows=190 width=0)\"\n\" Index Cond: (nodeattributes.attributeid = \"outer\".attributeid)\" \n\n\n",
"msg_date": "Fri, 18 May 2007 12:02:44 +0300",
"msg_from": "\"Liviu Ionescu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "On Fri, May 18, 2007 at 12:02:44PM +0300, Liviu Ionescu wrote:\n> the 8.2.4 plan with join_collapse_limit = 1 (with default it was worse, full of nested loops)\n\nIt will probably be useful with EXPLAIN ANALYZE of your queries, not just the\nEXPLAIN.\n\n> \"Nested Loop Left Join (cost=32.01..2012.31 rows=1 width=230)\"\n\nIt looks like the planner thinks this is going to be really cheap -- so it's\nmisestimating something somewhere. Have you ANALYZEd recently?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 18 May 2007 11:49:21 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "> It will probably be useful with EXPLAIN ANALYZE of your \n> queries, not just the EXPLAIN.\n\nit took 245 seconds to complete, see below.\n\n> It looks like the planner thinks this is going to be really \n> cheap -- so it's misestimating something somewhere. Have you \n> ANALYZEd recently?\n\nyes, but to be sure I did it again before issuing the request; no improvements...\n\nregards,\n\nLiviu\n\n\"Nested Loop Left Join (cost=32.03..2026.70 rows=1 width=125) (actual time=16.686..244822.521 rows=2026 loops=1)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=26.55..1420.57 rows=1 width=115) (actual time=13.833..176136.527 rows=2026 loops=1)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=21.06..810.90 rows=1 width=105) (actual time=10.336..95476.175 rows=2026 loops=1)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=15.55..194.15 rows=1 width=95) (actual time=6.514..11524.892 rows=2026 loops=1)\"\n\" Join Filter: (n.nodeid = rtunodes.nodeid)\"\n\" -> Nested Loop Left Join (cost=11.17..107.94 rows=1 width=82) (actual time=0.661..71.751 rows=2026 loops=1)\"\n\" Filter: (templatenodes.nodeid IS NULL)\"\n\" -> Hash Left Join (cost=11.17..99.66 rows=1 width=82) (actual time=0.643..36.053 rows=2206 loops=1)\"\n\" Hash Cond: (n.nodeid = templates.nodeid)\"\n\" Filter: (templates.nodeid IS NULL)\"\n\" -> Hash Left Join (cost=8.73..88.06 rows=2270 width=82) (actual time=0.502..27.756 rows=2270 loops=1)\"\n\" Hash Cond: (n.nodeid = rtus.nodeid)\"\n\" -> Hash Left Join (cost=4.34..74.11 rows=2270 width=74) (actual time=0.286..20.179 rows=2270 loops=1)\"\n\" Hash Cond: (n.nodeid = areas.nodeid)\"\n\" -> Hash Left Join (cost=1.43..61.83 rows=2270 width=66) (actual time=0.114..13.062 rows=2270 loops=1)\"\n\" Hash Cond: (n.nodeid = realms.nodeid)\"\n\" -> Seq Scan on nodes n (cost=0.00..51.70 rows=2270 width=49) (actual time=0.016..4.089 rows=2270 loops=1)\"\n\" -> Hash (cost=1.19..1.19 rows=19 width=17) (actual time=0.056..0.056 rows=19 loops=1)\"\n\" -> Seq Scan on realms (cost=0.00..1.19 rows=19 width=17) (actual time=0.006..0.023 rows=19 loops=1)\"\n\" -> Hash (cost=1.85..1.85 rows=85 width=8) (actual time=0.156..0.156 rows=85 loops=1)\"\n\" -> Seq Scan on areas (cost=0.00..1.85 rows=85 width=8) (actual time=0.007..0.070 rows=85 loops=1)\"\n\" -> Hash (cost=3.06..3.06 rows=106 width=8) (actual time=0.200..0.200 rows=106 loops=1)\"\n\" -> Seq Scan on rtus (cost=0.00..3.06 rows=106 width=8) (actual time=0.010..0.105 rows=106 loops=1)\"\n\" -> Hash (cost=1.64..1.64 rows=64 width=4) (actual time=0.119..0.119 rows=64 loops=1)\"\n\" -> Seq Scan on templates (cost=0.00..1.64 rows=64 width=4) (actual time=0.006..0.059 rows=64 loops=1)\"\n\" -> Index Scan using nodeid_pkey on templatenodes (cost=0.00..8.27 rows=1 width=4) (actual time=0.009..0.009 rows=0 loops=2206)\"\n\" Index Cond: (n.nodeid = templatenodes.nodeid)\"\n\" -> Hash Join (cost=4.38..63.51 rows=1816 width=13) (actual time=0.012..4.417 rows=1816 loops=2026)\"\n\" Hash Cond: (rtunodes.rtuid = r.nodeid)\"\n\" -> Seq Scan on rtunodes (cost=0.00..34.16 rows=1816 width=9) (actual time=0.009..1.290 rows=1816 loops=2026)\"\n\" -> Hash (cost=3.06..3.06 rows=106 width=8) (actual time=0.194..0.194 rows=106 loops=1)\"\n\" -> Seq Scan on rtus r (cost=0.00..3.06 rows=106 width=8) (actual time=0.005..0.091 rows=106 loops=1)\"\n\" -> Hash Join (cost=5.51..611.90 rows=388 width=14) (actual 
time=0.033..39.896 rows=2079 loops=2026)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Hash Join (cost=4.34..604.41 rows=647 width=18) (actual time=0.031..36.513 rows=2079 loops=2026)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18) (actual time=0.008..16.826 rows=23535 loops=2026)\"\n\" -> Hash (cost=4.28..4.28 rows=5 width=8) (actual time=0.077..0.077 rows=5 loops=1)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=5 width=8) (actual time=0.033..0.067 rows=5 loops=1)\"\n\" Filter: ((name)::text = 'iconName'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=6 width=4) (actual time=0.023..0.023 rows=6 loops=1)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=6 width=4) (actual time=0.005..0.012 rows=6 loops=1)\"\n\" Filter: isattributetype\"\n\" -> Hash Join (cost=5.49..606.76 rows=233 width=14) (actual time=0.104..38.474 rows=1865 loops=2026)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Hash Join (cost=4.31..601.80 rows=388 width=18) (actual time=0.101..35.484 rows=1865 loops=2026)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18) (actual time=0.008..16.624 rows=23535 loops=2026)\"\n\" -> Hash (cost=4.28..4.28 rows=3 width=8) (actual time=0.071..0.071 rows=3 loops=1)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=3 width=8) (actual time=0.019..0.058 rows=3 loops=1)\"\n\" Filter: ((name)::text = 'addUPItemplate'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=6 width=4) (actual time=0.025..0.025 rows=6 loops=1)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=6 width=4) (actual time=0.004..0.012 rows=6 loops=1)\"\n\" Filter: isattributetype\"\n\" -> Hash Join (cost=5.48..604.19 rows=155 width=14) (actual time=0.794..33.783 rows=132 loops=2026)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Hash Join (cost=4.30..600.50 rows=259 width=18) (actual time=0.791..33.550 rows=132 loops=2026)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18) (actual time=0.008..16.623 rows=23535 loops=2026)\"\n\" -> Hash (cost=4.28..4.28 rows=2 width=8) (actual time=0.060..0.060 rows=2 loops=1)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=2 width=8) (actual time=0.015..0.054 rows=2 loops=1)\"\n\" Filter: ((name)::text = 'addUPIsubclass'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=6 width=4) (actual time=0.022..0.022 rows=6 loops=1)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=6 width=4) (actual time=0.003..0.009 rows=6 loops=1)\"\n\" Filter: isattributetype\"\n\"Total runtime: 244826.065 ms\"\n\n",
"msg_date": "Fri, 18 May 2007 13:14:56 +0300",
"msg_from": "\"Liviu Ionescu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "On Fri, May 18, 2007 at 01:14:56PM +0300, Liviu Ionescu wrote:\n> yes, but to be sure I did it again before issuing the request; no improvements...\n\nIs this with the join collapse limit set to 1, or with default? (Default is\ngenerally more interesting.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 18 May 2007 12:46:46 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "> Is this with the join collapse limit set to 1, or with \n> default? (Default is generally more interesting.)\n\nbelow is the same query with the default setting.\n\nregards,\n\nLiviu\n\n\n\"Nested Loop Left Join (cost=23.35..1965.46 rows=1 width=125) (actual time=50.408..231926.123 rows=2026 loops=1)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=17.81..1357.58 rows=1 width=115) (actual time=47.103..156521.050 rows=2026 loops=1)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=12.30..752.97 rows=1 width=105) (actual time=43.924..81977.726 rows=2026 loops=1)\"\n\" Join Filter: (n.nodeid = public.nodeattributes.nodeid)\"\n\" -> Nested Loop Left Join (cost=6.83..150.65 rows=1 width=95) (actual time=40.603..12477.227 rows=2026 loops=1)\"\n\" -> Nested Loop Left Join (cost=6.83..150.37 rows=1 width=78) (actual time=38.448..12459.918 rows=2026 loops=1)\"\n\" -> Nested Loop Left Join (cost=6.83..150.08 rows=1 width=70) (actual time=31.793..12436.536 rows=2026 loops=1)\"\n\" -> Nested Loop Left Join (cost=6.83..149.80 rows=1 width=62) (actual time=6.588..12394.366 rows=2026 loops=1)\"\n\" Filter: (templatenodes.nodeid IS NULL)\"\n\" -> Nested Loop Left Join (cost=6.83..149.51 rows=1 width=62) (actual time=6.525..12362.969 rows=2206 loops=1)\"\n\" Join Filter: (n.nodeid = rtunodes.nodeid)\"\n\" -> Hash Left Join (cost=2.44..63.29 rows=1 width=49) (actual time=0.361..14.426 rows=2206 loops=1)\"\n\" Hash Cond: (n.nodeid = templates.nodeid)\"\n\" Filter: (templates.nodeid IS NULL)\"\n\" -> Seq Scan on nodes n (cost=0.00..51.70 rows=2270 width=49) (actual time=0.071..4.417 rows=2270 loops=1)\"\n\" -> Hash (cost=1.64..1.64 rows=64 width=4) (actual time=0.152..0.152 rows=64 loops=1)\"\n\" -> Seq Scan on templates (cost=0.00..1.64 rows=64 width=4) (actual time=0.032..0.082 rows=64 loops=1)\"\n\" -> Hash Join (cost=4.38..63.51 rows=1816 width=13) (actual time=0.011..4.365 rows=1816 loops=2206)\"\n\" Hash Cond: (rtunodes.rtuid = r.nodeid)\"\n\" -> Seq Scan on rtunodes (cost=0.00..34.16 rows=1816 width=9) (actual time=0.008..1.276 rows=1816 loops=2206)\"\n\" -> Hash (cost=3.06..3.06 rows=106 width=8) (actual time=0.241..0.241 rows=106 loops=1)\"\n\" -> Seq Scan on rtus r (cost=0.00..3.06 rows=106 width=8) (actual time=0.029..0.136 rows=106 loops=1)\"\n\" -> Index Scan using nodeid_pkey on templatenodes (cost=0.00..0.28 rows=1 width=4) (actual time=0.008..0.008 rows=0 loops=2206)\"\n\" Index Cond: (n.nodeid = templatenodes.nodeid)\"\n\" -> Index Scan using rtus_pkey on rtus (cost=0.00..0.27 rows=1 width=8) (actual time=0.016..0.016 rows=0 loops=2026)\"\n\" Index Cond: (n.nodeid = rtus.nodeid)\"\n\" -> Index Scan using areas_pkey on areas (cost=0.00..0.27 rows=1 width=8) (actual time=0.007..0.007 rows=0 loops=2026)\"\n\" Index Cond: (n.nodeid = areas.nodeid)\"\n\" -> Index Scan using realms_pkey on realms (cost=0.00..0.27 rows=1 width=17) (actual time=0.004..0.004 rows=0 loops=2026)\"\n\" Index Cond: (n.nodeid = realms.nodeid)\"\n\" -> Hash Join (cost=5.48..600.38 rows=155 width=14) (actual time=0.812..34.198 rows=132 loops=2026)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18) (actual time=0.009..16.660 rows=23535 loops=2026)\"\n\" -> Hash (cost=5.47..5.47 rows=1 width=4) (actual time=0.196..0.196 rows=2 loops=1)\"\n\" -> Hash Join (cost=1.18..5.47 rows=1 width=4) (actual time=0.124..0.187 rows=2 
loops=1)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=2 width=8) (actual time=0.044..0.103 rows=2 loops=1)\"\n\" Filter: ((name)::text = 'addUPIsubclass'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=6 width=4) (actual time=0.047..0.047 rows=6 loops=1)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=6 width=4) (actual time=0.028..0.034 rows=6 loops=1)\"\n\" Filter: isattributetype\"\n\" -> Hash Join (cost=5.51..601.70 rows=233 width=14) (actual time=0.103..35.496 rows=1865 loops=2026)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18) (actual time=0.009..16.595 rows=23535 loops=2026)\"\n\" -> Hash (cost=5.48..5.48 rows=2 width=4) (actual time=0.116..0.116 rows=3 loops=1)\"\n\" -> Hash Join (cost=1.18..5.48 rows=2 width=4) (actual time=0.063..0.107 rows=3 loops=1)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=3 width=8) (actual time=0.017..0.056 rows=3 loops=1)\"\n\" Filter: ((name)::text = 'addUPItemplate'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=6 width=4) (actual time=0.022..0.022 rows=6 loops=1)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=6 width=4) (actual time=0.004..0.010 rows=6 loops=1)\"\n\" Filter: isattributetype\"\n\" -> Hash Join (cost=5.54..603.02 rows=388 width=14) (actual time=0.031..35.795 rows=2079 loops=2026)\"\n\" Hash Cond: (public.nodeattributes.attributeid = a.attributeid)\"\n\" -> Seq Scan on nodeattributes (cost=0.00..505.35 rows=23535 width=18) (actual time=0.008..16.766 rows=23535 loops=2026)\"\n\" -> Hash (cost=5.50..5.50 rows=3 width=4) (actual time=0.120..0.120 rows=5 loops=1)\"\n\" -> Hash Join (cost=1.18..5.50 rows=3 width=4) (actual time=0.074..0.110 rows=5 loops=1)\"\n\" Hash Cond: (a.typeid = t.typeid)\"\n\" -> Seq Scan on attributes a (cost=0.00..4.28 rows=5 width=8) (actual time=0.025..0.050 rows=5 loops=1)\"\n\" Filter: ((name)::text = 'iconName'::text)\"\n\" -> Hash (cost=1.10..1.10 rows=6 width=4) (actual time=0.026..0.026 rows=6 loops=1)\"\n\" -> Seq Scan on types t (cost=0.00..1.10 rows=6 width=4) (actual time=0.004..0.010 rows=6 loops=1)\"\n\" Filter: isattributetype\"\n\"Total runtime: 231929.656 ms\"\n\n",
"msg_date": "Fri, 18 May 2007 14:05:36 +0300",
"msg_from": "\"Liviu Ionescu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "On Fri, May 18, 2007 at 02:05:36PM +0300, Liviu Ionescu wrote:\n> \" -> Hash Left Join (cost=2.44..63.29 rows=1 width=49) (actual time=0.361..14.426 rows=2206 loops=1)\"\n> \" Hash Cond: (n.nodeid = templates.nodeid)\"\n> \" Filter: (templates.nodeid IS NULL)\"\n> \" -> Seq Scan on nodes n (cost=0.00..51.70 rows=2270 width=49) (actual time=0.071..4.417 rows=2270 loops=1)\"\n> \" -> Hash (cost=1.64..1.64 rows=64 width=4) (actual time=0.152..0.152 rows=64 loops=1)\"\n> \" -> Seq Scan on templates (cost=0.00..1.64 rows=64 width=4) (actual time=0.032..0.082 rows=64 loops=1)\"\n\nThis seems to be the source of the misestimation. You might want to try using\n\"n WHERE n.nodein NOT IN (SELECT nodeid FROM templates)\" instead of \"n LEFT\nJOIN templates USING (nodeid) WHERE templates.nodeid IS NULL\" and see if it\nhelps.\n\n> \"Total runtime: 231929.656 ms\"\n\nNote that this is better than the version with collapse_limit set to 1. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 18 May 2007 13:14:55 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "> This seems to be the source of the misestimation. You might \n> want to try using \"n WHERE n.nodein NOT IN (SELECT nodeid \n> FROM templates)\" instead of \"n LEFT JOIN templates USING \n> (nodeid) WHERE templates.nodeid IS NULL\" and see if it helps.\n\nit helped, the new version of the query takes 2303 ms on both 8.1.4 and 8.2.4.\n\nany idea why the 8.2.4 planner is not happy with the initial select? was it just a big chance that it worked in 8.1.4 or the 8.2.4 planner has a problem?\n\nor, from another perspective, is the new syntax more portable? what are the chances that after upgrading to 8.3.x to encounter new problems?\n\nregards,\n\nLiviu\n\n\n",
"msg_date": "Fri, 18 May 2007 14:51:42 +0300",
"msg_from": "\"Liviu Ionescu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "On Fri, May 18, 2007 at 02:51:42PM +0300, Liviu Ionescu wrote:\n> it helped, the new version of the query takes 2303 ms on both 8.1.4 and 8.2.4.\n\nAnd the old one?\n\n> any idea why the 8.2.4 planner is not happy with the initial select? was it\n> just a big chance that it worked in 8.1.4 or the 8.2.4 planner has a\n> problem?\n\nI guess it was more or less by chance, especially as 8.1 did not reorder\nouter joins. Others might know more about the estimation, though.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 18 May 2007 15:10:47 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "> > it helped, the new version of the query takes 2303 ms on both 8.1.4 \n> > and 8.2.4.\n> \n> And the old one?\n\nslightly shorter, 2204 ms.\n\nas a subjective perception, the entire application is slightly slower on 8.2.4, probably there are many queries that were manually tunned for 7.x/8.1.x and now need rewriting, which is not really what I expected from an upgrade.\n \n\nLiviu\n\n",
"msg_date": "Fri, 18 May 2007 16:33:37 +0300",
"msg_from": "\"Liviu Ionescu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "> > This seems to be the source of the misestimation. You might \n> > want to try using \"n WHERE n.nodein NOT IN (SELECT nodeid \n> > FROM templates)\" instead of \"n LEFT JOIN templates USING \n> > (nodeid) WHERE templates.nodeid IS NULL\" and see if it helps.\n> \n> it helped, the new version of the query takes 2303 ms on both \n> 8.1.4 and 8.2.4.\n\nthis is very interesting. on 8.1.x i have also repeatedly had to rewrite\njoins as their equivalent IN/NOT IN alternatives in order to improve\nperformance, so i feel that at least under some alignments of the\nplanets 8.1 has similar problems.\n\ngeorge\n",
"msg_date": "Fri, 18 May 2007 08:13:08 -0700",
"msg_from": "\"George Pavlov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "> under some alignments of the planets 8.1 has similar problems.\n\n8.1 might have similar problems, but the point here is different: if what\nwas manually tuned to work in 8.1 confuses the 8.2 planner and performance\ndrops so much (from 2303 to 231929 ms in my case) upgrading a production\nmachine to 8.2 is a risky business. I probably have hundreds of sql\nstatements in my application, some of them quite complex, and it is not\nreasonable to check all of them in order to certify them to be 8.2\ncompliant.\n\nregards,\n\nLiviu\n\n",
"msg_date": "Fri, 18 May 2007 18:40:31 +0300",
"msg_from": "\"Liviu Ionescu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Fri, May 18, 2007 at 02:05:36PM +0300, Liviu Ionescu wrote:\n>> \" -> Hash Left Join (cost=2.44..63.29 rows=1 width=49) (actual time=0.361..14.426 rows=2206 loops=1)\"\n>> \" Hash Cond: (n.nodeid = templates.nodeid)\"\n>> \" Filter: (templates.nodeid IS NULL)\"\n\n> This seems to be the source of the misestimation.\n\nYeah. 8.2 is estimating that the \"nodeid IS NULL\" condition will\ndiscard all or nearly all the rows, presumably because there aren't any\nnull nodeid's in the underlying table --- it fails to consider that the\nLEFT JOIN may inject some nulls. 8.1 was not any brighter; the reason\nit gets a different estimate is that it doesn't distinguish left-join\nand WHERE clauses at all, but assumes that the result of the left join\ncan't have fewer rows than its left input, even after applying the\nfilter condition. In this particular scenario that happens to be a\nbetter estimate. So even though 8.2 is smarter, and there is no bug\nhere that wasn't in 8.1 too, it's getting a worse estimate leading to\na worse plan.\n\nThis is a sufficiently common idiom that I think it's a must-fix\nproblem. Not sure about details yet, but it seems somehow the\nselectivity estimator had better start accounting for\nouter-join-injected NULLs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 May 2007 11:54:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4 "
},
{
"msg_contents": "On Fri, May 18, 2007 at 06:40:31PM +0300, Liviu Ionescu wrote:\n> > under some alignments of the planets 8.1 has similar problems.\n> \n> 8.1 might have similar problems, but the point here is different: if what\n> was manually tuned to work in 8.1 confuses the 8.2 planner and performance\n> drops so much (from 2303 to 231929 ms in my case) upgrading a production\n> machine to 8.2 is a risky business. I probably have hundreds of sql\n> statements in my application, some of them quite complex, and it is not\n> reasonable to check all of them in order to certify them to be 8.2\n> compliant.\n> \n> regards,\n> \n> Liviu\n> \n> \nLiviu,\n\nIt is arguable, that updating the DB software version in an enterprise\nenvironment requires exactly that: check all production queries on the\nnew software to identify any issues. In part, this is brought on by the\nvery tuning that you performed against the previous software. Restore\nthe 8.1 DB into 8.2. Then run the queries against both versions to\nevaluate functioning and timing.\n\nKen\n",
"msg_date": "Fri, 18 May 2007 11:21:39 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "> It is arguable, that updating the DB software version in an \n> enterprise environment requires exactly that: check all \n> production queries on the new software to identify any \n> issues. In part, this is brought on by the very tuning that \n> you performed against the previous software. Restore the 8.1 \n> DB into 8.2. Then run the queries against both versions to \n> evaluate functioning and timing.\n\nyou're right. my previous message was not a complain, was a warning for\nothers to avoid the same mistake. I was overconfident and got bitten. in the\nfuture I'll check my queries on 8.2/8.3/... on a development configuration\nbefore upgrading the production server.\n\nregards,\n\nLiviu\n\n",
"msg_date": "Fri, 18 May 2007 19:49:30 +0300",
"msg_from": "\"Liviu Ionescu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "On 18.05.2007, at 10:21, Kenneth Marshall wrote:\n\n> It is arguable, that updating the DB software version in an enterprise\n> environment requires exactly that: check all production queries on the\n> new software to identify any issues. In part, this is brought on by \n> the\n> very tuning that you performed against the previous software. Restore\n> the 8.1 DB into 8.2. Then run the queries against both versions to\n> evaluate functioning and timing.\n\nAnd it is always a good idea to do this in a \"clean room environment\" \naka test server and set the logging in PostgreSQL to log all queries \nlonger than xx ms. If you first install 8.1 on the test machine, do a \ntest run and then upgrade to 8.2, you can compare results from the \ntests and find the queries that are slower or faster quite easily.\n\ncug\n",
"msg_date": "Sat, 19 May 2007 13:51:04 -0600",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "\nOn May 18, 2007, at 11:40 AM, Liviu Ionescu wrote:\n\n> 8.1 might have similar problems, but the point here is different: \n> if what\n> was manually tuned to work in 8.1 confuses the 8.2 planner and \n> performance\n> drops so much (from 2303 to 231929 ms in my case) upgrading a \n> production\n> machine to 8.2 is a risky business. I probably have hundreds of sql\n\nDoing any major software version upgrade on any production system is \nsuicidal without first vetting your entire stack against the proposed \nupgrades. For best results, verify even minor version upgrades on \ntest systems first, with full testing of your app.\n\nWe do. It has saved us many times.\n\n",
"msg_date": "Wed, 23 May 2007 11:47:27 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "On 5/18/07, Tom Lane <[email protected]> wrote:\n>\n>\n> Yeah. 8.2 is estimating that the \"nodeid IS NULL\" condition will\n> discard all or nearly all the rows, presumably because there aren't any\n> null nodeid's in the underlying table --- it fails to consider that the\n> LEFT JOIN may inject some nulls. 8.1 was not any brighter; the reason\n> it gets a different estimate is that it doesn't distinguish left-join\n> and WHERE clauses at all, but assumes that the result of the left join\n> can't have fewer rows than its left input, even after applying the\n> filter condition. In this particular scenario that happens to be a\n> better estimate. So even though 8.2 is smarter, and there is no bug\n> here that wasn't in 8.1 too, it's getting a worse estimate leading to\n> a worse plan.\n>\n> This is a sufficiently common idiom that I think it's a must-fix\n> problem. Not sure about details yet, but it seems somehow the\n> selectivity estimator had better start accounting for\n> outer-join-injected NULLs.\n>\nThis problem is causing us a bit of grief as we plan to move from 8.1.4 to\n8.2.4. We have many (on the order of a hundred) queries that are of the\nform:\n\n(A) LEFT JOIN (B) ON col WHERE B.col IS NULL\n\nThese queries are much slower on 8.2 than on 8.1 for what looks like the\nreason outlined above. I have rewritten a few key queries to be of the\nequivalent form:\n\n(A) WHERE col NOT IN (SELECT col FROM (B))\n\nwhich has resulted in a dramatic improvement. I'm really hoping that I'm\nnot going to need to re-write every single one of our queries that are of\nthe first form above. Is there any estimation as to if/when the fix will\nbecome available? I'm hoping this isn't going to be a showstopper in us\nmoving to 8.2.\n\nThanks,\nSteve\n\nOn 5/18/07, Tom Lane <[email protected]> wrote:\nYeah. 8.2 is estimating that the \"nodeid IS NULL\" condition willdiscard all or nearly all the rows, presumably because there aren't any\nnull nodeid's in the underlying table --- it fails to consider that theLEFT JOIN may inject some nulls. 8.1 was not any brighter; the reasonit gets a different estimate is that it doesn't distinguish left-join\nand WHERE clauses at all, but assumes that the result of the left joincan't have fewer rows than its left input, even after applying thefilter condition. In this particular scenario that happens to be a\nbetter estimate. So even though 8.2 is smarter, and there is no bughere that wasn't in 8.1 too, it's getting a worse estimate leading toa worse plan.This is a sufficiently common idiom that I think it's a must-fix\nproblem. Not sure about details yet, but it seems somehow theselectivity estimator had better start accounting forouter-join-injected NULLs.\nThis problem is causing us a bit of grief as we plan to move from 8.1.4 to 8.2.4. We have many (on the order of a hundred) queries that are of the form:\n \n(A) LEFT JOIN (B) ON col WHERE B.col IS NULL\n \nThese queries are much slower on 8.2 than on 8.1 for what looks like the reason outlined above. I have rewritten a few key queries to be of the equivalent form:\n \n(A) WHERE col NOT IN (SELECT col FROM (B))\n \nwhich has resulted in a dramatic improvement. I'm really hoping that I'm not going to need to re-write every single one of our queries that are of the first form above. Is there any estimation as to if/when the fix will become available? I'm hoping this isn't going to be a showstopper in us moving to \n8.2.\n \nThanks,\nSteve",
"msg_date": "Tue, 5 Jun 2007 17:30:14 -0400",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "On Tue, Jun 05, 2007 at 05:30:14PM -0400, Steven Flatt wrote:\n> (A) LEFT JOIN (B) ON col WHERE B.col IS NULL\n> \n> These queries are much slower on 8.2 than on 8.1 for what looks like the\n> reason outlined above. I have rewritten a few key queries to be of the\n> equivalent form:\n> \n> (A) WHERE col NOT IN (SELECT col FROM (B))\n\nAt least those _can_ be rewritten into a sane form. I have an application\nwith a large FULL OUTER JOIN, where _both_ sides can return NULLs. (It's\nbasically a diff between a \"current\" and a \"wanted\" state.)\n\nIt performs reasonably well under both 8.1 and 8.2, though. Fourteen-table\njoin or so :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 5 Jun 2007 23:38:47 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Is there any estimation as to if/when the fix will\n> become available? I'm hoping this isn't going to be a showstopper in us\n> moving to 8.2.\n\nIf you're feeling desperate you could revert this patch in your local\ncopy:\nhttp://archives.postgresql.org/pgsql-committers/2006-11/msg00066.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jun 2007 18:08:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4 "
},
{
"msg_contents": "On 6/5/07, Tom Lane <[email protected]> wrote:\n>\n> If you're feeling desperate you could revert this patch in your local\n> copy:\n> http://archives.postgresql.org/pgsql-committers/2006-11/msg00066.php\n>\n> regards, tom lane\n>\n\nReverting that patch has not appeared to solve our problem. Perhaps I\ndidn't provide enough information, because I feel like there's more going\non here.\n\nOne instance of our problem goes like this, and I have included a\nself-contained example with which you can reproduce the problem. We make\nheavy use of partitioned tables, so during our schema install, we create a\nlot of inherited tables (on the order of 2000) to which we also want to add\nthe FK constraints that exist on the parent table. The PLpgSQL function\nbelow does this. It queries for all FK constraints that are on the parent\ntable but not on the child, then generates the sql to add them to\nthe child. (The function has been modified from the original but the main\nquery is the same.)\n\nNote the \"this is slow\" section and the \"replace with this which is fast\"\nsection. Both queries are fast on 8.1.4 (entire function completes in 2\nminutes), but not on 8.2.4. If you notice the \"ELAPSED TIME\"s written to\nthe console, the query times start equally fast but grows painfully slow\nrather quickly with the \"slow\" version on 8.2.4.\n\nSorry for not providing explain analyze output, but I found it hard to tie\nthe output into the execution of the function. When I did stand-alone\nexplain analyzes, the actual times reported were similar on 8.1.4 and 8.2.4.\nI think the degradation has more to do with doing many such queries in a\nsingle transaction or something like that.\n\nPlus, correct me if I'm wrong, but the degrading query is executed against\npg_catalog tables only, which are in general smallish, so I have a hard time\nbelieving that even a sub-optimal query plan results in this level of\ndegradation.\n\nAny help is much appreciated, thanks.\nSteve\n\n\nCREATE OR REPLACE FUNCTION inherit_fks_test()\n RETURNS interval\n VOLATILE\n LANGUAGE PLpgSQL\n AS '\n DECLARE\n childtbl varchar;\n childoid oid;\n rec record;\n start timestamptz;\n finish timestamptz;\n time1 timestamptz;\n time2 timestamptz;\n elapsed interval;\n BEGIN\n start := timeofday();\n\n EXECUTE ''SET LOCAL log_min_messages TO NOTICE'';\n EXECUTE ''CREATE TABLE foo(a INT UNIQUE)'';\n EXECUTE ''CREATE TABLE bar(b INT REFERENCES foo(a))'';\n\n FOR count IN 1 .. 
2000\n        LOOP\n            childtbl := ''bar_'' || count;\n            EXECUTE ''CREATE TABLE '' || childtbl || ''() INHERITS\n(bar)'';\n\n            childoid := childtbl::regclass::oid;\n\n            time1 := timeofday();\n            FOR rec IN\n                SELECT ''ALTER TABLE ''\n                       || quote_ident(n.nspname) || ''.''\n                       || quote_ident(cl.relname)\n                       || '' ADD CONSTRAINT ''\n                       || quote_ident(parent_const.conname) || '' ''\n                       || parent_const.def AS cmd\n                FROM pg_catalog.pg_class cl\n                JOIN pg_catalog.pg_namespace n\n                    ON (n.oid = cl.relnamespace)\n                JOIN pg_catalog.pg_inherits i\n                    ON (i.inhrelid = cl.oid)\n                JOIN (\n                      SELECT c.conname,\n                             c.conrelid,\n                             c.confrelid,\n                             pg_get_constraintdef(c.oid) AS def\n                      FROM pg_catalog.pg_constraint c\n                      WHERE c.confrelid <> 0\n                     ) AS parent_const\n                    ON (parent_const.conrelid = i.inhparent)\n\n-- This is slow\n-------------------------------------------------------------------------------\n                LEFT OUTER JOIN (\n                      SELECT c2.conname,\n                             c2.conrelid,\n                             c2.confrelid,\n                             pg_get_constraintdef(c2.oid) AS def\n                      FROM pg_catalog.pg_constraint c2\n                      WHERE c2.confrelid <> 0\n                     ) AS child_const\n                    ON (child_const.conrelid = cl.oid\n                        AND child_const.conname =\n                            parent_const.conname\n                        AND child_const.confrelid =\n                            parent_const.confrelid\n                        AND child_const.def = parent_const.def)\n                WHERE child_const.conname IS NULL\n-------------------------------------------------------------------------------\n\n-- Replace with this which is fast\n-------------------------------------------------------------------------------\n--                WHERE conname NOT IN (\n--                      SELECT c2.conname\n--                      FROM pg_catalog.pg_constraint c2\n--                      WHERE c2.confrelid <> 0\n--                            AND c2.conrelid = cl.oid\n--                            AND c2.conname = parent_const.conname\n--                            AND c2.confrelid =\nparent_const.confrelid\n--                            AND pg_get_constraintdef(c2.oid) =\n--                                parent_const.def\n--                      )\n-------------------------------------------------------------------------------\n\n                    AND cl.oid = childoid\n            LOOP\n                time2 := timeofday();\n                EXECUTE rec.cmd;\n            END LOOP;\n\n            elapsed := time2 - time1;\n            RAISE NOTICE ''%: ELAPSED TIME: %'',count,elapsed;\n\n        END LOOP;\n\n        finish := timeofday();\n        RETURN finish - start;\n    END;\n    ';\n",
"msg_date": "Thu, 7 Jun 2007 13:11:44 -0400",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> One instance of our problem goes like this, and I have included a\n> self-contained example with which you can reproduce the problem.\n\nThis is fairly interesting, because if you run the query by hand after\nthe function finishes, it's pretty fast. What I think is happening is\nthat the plpgsql function caches a plan for the catalog query that is\npredicated on pg_constraint and pg_inherits being small, and after\nyou've inserted a few thousand rows in them, that's not true anymore.\n\nIn CVS 8.2 (and HEAD), the core of the query seems to be\nplanned like this initially:\n\n -> Hash Join (cost=1.24..8.70 rows=1 width=76)\n Hash Cond: (c.conrelid = i.inhparent)\n -> Seq Scan on pg_constraint c (cost=0.00..7.35 rows=27 width=76)\n Filter: (confrelid <> 0::oid)\n -> Hash (cost=1.23..1.23 rows=1 width=8)\n -> Seq Scan on pg_inherits i (cost=0.00..1.23 rows=1 width=8)\n Filter: (inhrelid = 42154::oid)\n\nWith a thousand or so rows inserted in each catalog, it likes\nthis plan better:\n\n -> Nested Loop (cost=0.00..16.55 rows=1 width=76)\n -> Index Scan using pg_inherits_relid_seqno_index on pg_inherits i (cost=0.00..8.27 rows=1 width=8)\n Index Cond: (inhrelid = 42154::oid)\n -> Index Scan using pg_constraint_conrelid_index on pg_constraint c (cost=0.00..8.27 rows=1 width=76)\n Index Cond: (c.conrelid = i.inhparent)\n Filter: (c.confrelid <> 0::oid)\n\nand indeed that plan is a lot better as the catalogs grow.\nBut the plpgsql function cached the other plan at start.\n\nI'm not entirely sure why 8.1 doesn't fall into the same trap ---\nperhaps it's because it's unable to rearrange outer joins.\nIt's certainly not being any smarter than 8.2.\n\nAnyway, it seems that you could either try to get some pg_constraint and\npg_inherits rows created before you start this function, or you could\nchange it to use an EXECUTE to force replanning of the inner query.\nOr just start a new session after the first few hundred table creations.\n\nI was hoping that the auto plan invalidation code in CVS HEAD would get\nit out of this problem, but it seems not to for the problem-as-given.\nThe trouble is that it won't change plans until autovacuum analyzes the\ntables, and that won't happen until the transaction commits and sends\noff its I-inserted-lotsa-rows report to the stats collector. So any\ngiven large transaction is stuck with the plans it first forms. There's\nprobably nothing we can do about that in time for 8.3, but it's\nsomething to think about for future releases ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Jun 2007 21:11:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4 "
},
{
"msg_contents": "Tom Lane escribi�:\n\n> I was hoping that the auto plan invalidation code in CVS HEAD would get\n> it out of this problem, but it seems not to for the problem-as-given.\n> The trouble is that it won't change plans until autovacuum analyzes the\n> tables, and that won't happen until the transaction commits and sends\n> off its I-inserted-lotsa-rows report to the stats collector. So any\n> given large transaction is stuck with the plans it first forms. There's\n> probably nothing we can do about that in time for 8.3, but it's\n> something to think about for future releases ...\n\nI think there is something we can do about this -- drop the default\nvalue for analyze threshold. We even discussed way back that we could\ndrop the concept of thresholds altogether, and nobody came up with an\nargument for defending them.\n\n> it won't change plans until autovacuum analyzes the\n> tables, and that won't happen until the transaction commits and sends\n> off its I-inserted-lotsa-rows report to the stats collector. So any\n> given large transaction is stuck with the plans it first forms. There's\n> probably nothing we can do about that in time for 8.3, but it's\n> something to think about for future releases ...\n\nAh, *within* a single large transaction :-( Yeah that's probably not\nvery solvable for the moment.\n\n-- \nAlvaro Herrera Valdivia, Chile ICBM: S 39� 49' 18.1\", W 73� 13' 56.4\"\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n",
"msg_date": "Thu, 7 Jun 2007 21:22:57 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane escribi�:\n>> I was hoping that the auto plan invalidation code in CVS HEAD would get\n>> it out of this problem, but it seems not to for the problem-as-given.\n>> The trouble is that it won't change plans until autovacuum analyzes the\n>> tables, and that won't happen until the transaction commits and sends\n>> off its I-inserted-lotsa-rows report to the stats collector.\n\n> I think there is something we can do about this -- drop the default\n> value for analyze threshold.\n\nMaybe worth doing, but it doesn't help for Steve's example.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Jun 2007 21:31:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4 "
},
{
"msg_contents": "Thanks Tom and Alvaro.\n\nTo follow up on this, I re-wrote and tweaked a number of queries (including\nthe one provided) to change \"LEFT OUTER JOIN ... WHERE col IS NULL\" clauses\nto \"WHERE col NOT IN (...)\" clauses.\n\nThis has brought performance to an acceptable level on 8.2.\n\nThanks for your time,\nSteve\n\n\nOn 6/7/07, Tom Lane <[email protected]> wrote:\n>\n> Alvaro Herrera <[email protected]> writes:\n> > Tom Lane escribió:\n> >> I was hoping that the auto plan invalidation code in CVS HEAD would get\n> >> it out of this problem, but it seems not to for the problem-as-given.\n> >> The trouble is that it won't change plans until autovacuum analyzes the\n> >> tables, and that won't happen until the transaction commits and sends\n> >> off its I-inserted-lotsa-rows report to the stats collector.\n>\n> > I think there is something we can do about this -- drop the default\n> > value for analyze threshold.\n>\n> Maybe worth doing, but it doesn't help for Steve's example.\n>\n> regards, tom lane\n>\n\nThanks Tom and Alvaro.\n \nTo follow up on this, I re-wrote and tweaked a number of queries (including the one provided) to change \"LEFT OUTER JOIN ... WHERE col IS NULL\" clauses to \"WHERE col NOT IN (...)\" clauses.\n \nThis has brought performance to an acceptable level on 8.2.\n \nThanks for your time,\nSteve \nOn 6/7/07, Tom Lane <[email protected]> wrote:\nAlvaro Herrera <[email protected]> writes:\n> Tom Lane escribió:>> I was hoping that the auto plan invalidation code in CVS HEAD would get>> it out of this problem, but it seems not to for the problem-as-given.>> The trouble is that it won't change plans until autovacuum analyzes the\n>> tables, and that won't happen until the transaction commits and sends>> off its I-inserted-lotsa-rows report to the stats collector.> I think there is something we can do about this -- drop the default\n> value for analyze threshold.Maybe worth doing, but it doesn't help for Steve's example. regards, tom lane",
"msg_date": "Tue, 12 Jun 2007 14:31:01 -0400",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance drop on 8.2.4, reverting to 8.1.4"
}
] |
[
{
"msg_contents": "I have an interesting problem. I have the following query that ran ok on Monday and Tuesday and it has been running ok since I have been at this job. I have seen it to be IO intensive, but since Wednesday it has become CPU intensive. Database wise fresh data has been put into the tables, vacuumed & analyzed, no other parameter has been modified.\n \n Wednesday it ran over 24 hours and it did not finish and all this time it pegged a CPU between 95-99%. Yesterday the same story. I do not understand what could have caused it to behave like this suddenly. I am hoping somebody can point me to do research in the right direction. \n \n The query is as follows and it's explain plan is also attached:\n \n set enable_nestloop = off; \nINSERT INTO linkshare.macys_ls_daily_shipped \nSELECT \n ddw.intr_xref, \n cdm.cdm_get_linkshare_id_safe(ddw.intr_xref, 10), \n to_char(cdm.cdm_utc_convert(cdm.cdm_get_linkshare_timestamp(ddw.intr_xref, 10), -5), 'YYYY-MM-DD/HH24:MI:SS'), \n to_char(cdm.cdm_utc_convert(to_char(sales.order_date, 'YYYY-MM-DD HH24:MI:SS')::timestamp without time zone, -5), 'YYYY-MM-DD/HH24:MI:SS') , \n ddw.item_upc, \n sum(abs(ddw.itm_qty)), \n sum((ddw.tran_itm_total * 100::numeric)::integer), \n 'USD', '', '', '', \n ddw.item_desc \nFROM \n cdm.cdm_ddw_tran_item_grouped ddw \nJOIN \n cdm.cdm_sitesales sales ON ddw.intr_xref::text = sales.order_number::text \nWHERE \n ddw.cal_date > (CURRENT_DATE - 7) AND ddw.cal_date < CURRENT_DATE \nAND \n ddw.intr_xref IS NOT NULL \nAND trim(cdm.cdm_get_linkshare_id_safe(ddw.intr_xref, 10)) <> '' \nAND cdm.cdm_utc_convert(cdm.cdm_get_linkshare_timestamp(ddw.intr_xref, 10), -5)::text::date >= (CURRENT_DATE - 52) \nAND sales.order_date >= (CURRENT_DATE - 52) \nAND (tran_typ_id = 'S'::bpchar) \nAND btrim(item_group::text) <> 'EGC'::text \nAND btrim(item_group::text) <> 'VGC'::text \nGROUP BY \n ddw.intr_xref, \n cdm.cdm_get_linkshare_id_safe(ddw.intr_xref, 10), \n to_char(cdm.cdm_utc_convert(cdm.cdm_get_linkshare_timestamp(ddw.intr_xref, 10), -5), 'YYYY-MM-DD/HH24:MI:SS'), \n to_char(cdm.cdm_utc_convert(to_char(sales.order_date, 'YYYY-MM-DD HH24:MI:SS')::timestamp without time zone, -5), 'YYYY-MM-DD/HH24:MI:SS'), \n ddw.item_upc, \n 8, 9, 10, 11, \n ddw.item_desc;\n \n \n HashAggregate (cost=152555.97..152567.32 rows=267 width=162)\n -> Hash Join (cost=139308.18..152547.96 rows=267 width=162)\n Hash Cond: ((\"outer\".intr_xref)::text = (\"inner\".order_number)::text)\n -> GroupAggregate (cost=106793.14..109222.13 rows=4319 width=189)\n -> Sort (cost=106793.14..106901.09 rows=43182 width=189)\n Sort Key: cdm_ddw_tran_item.appl_xref, cdm_ddw_tran_item.intr_xref, cdm_ddw_tran_item.tran_typ_id, cdm_ddw_tran_item.cal_date, cdm_ddw_tran_item.cal_time, cdm_ddw_tran_item.tran_itm_total, cdm_ddw_tran_item.tran_tot_amt, cdm_ddw_tran_item.fill_store_div, cdm_ddw_tran_item.itm_price, cdm_ddw_tran_item.item_id, cdm_ddw_tran_item.item_upc, cdm_ddw_tran_item.item_pid, cdm_ddw_tran_item.item_desc, cdm_ddw_tran_item.nrf_color_name, cdm_ddw_tran_item.nrf_size_name, cdm_ddw_tran_item.dept_id, c\n -> Index Scan using cdm_ddw_tranp_item_cal_date on cdm_ddw_tran_item (cost=0.01..103468.52 rows=43182 width=189)\n Index Cond: ((cal_date > (('now'::text)::date - 7)) AND (cal_date < ('now'::text)::date))\n Filter: ((intr_xref IS NOT NULL) AND (btrim(cdm.cdm_get_linkshare_id_safe(intr_xref, 10)) <> ''::text) AND (((cdm.cdm_utc_convert(cdm.cdm_get_linkshare_timestamp(intr_xref, 10), -5))::text)::date >= (('now'::text)::date - 52)) AND (tran_typ_id = 'S'::bpchar) 
AND (btrim((item_group)::text) <> 'EGC'::text) AND (btrim((item_group)::text) <> 'VGC'::text))\n         ->  Hash  (cost=31409.92..31409.92 rows=442050 width=20)\n               ->  Index Scan using cdm_sitesales_order_date on cdm_sitesales sales  (cost=0.00..31409.92 rows=442050 width=20)\n                     Index Cond: (order_date >= (('now'::text)::date - 52))\n",
"msg_date": "Fri, 18 May 2007 09:02:52 -0700 (PDT)",
"msg_from": "Abu Mushayeed <[email protected]>",
"msg_from_op": true,
"msg_subject": "CPU Intensive query"
},
{
"msg_contents": "On Fri, May 18, 2007 at 09:02:52AM -0700, Abu Mushayeed wrote:\n> I have an interesting problem. I have the following query that ran ok on\n> Monday and Tuesday and it has been running ok since I have been at this\n> job. I have seen it to be IO intensive, but since Wednesday it has become\n> CPU intensive. Database wise fresh data has been put into the tables,\n> vacuumed & analyzed, no other parameter has been modified.\n\nWhat Postgres version is this?\n\n> The query is as follows and it's explain plan is also attached:\n\nNormally EXPLAIN ANALYZE data would be much better than EXPLAIN, but if the\nquery indeed does not finish, it's not going to help much.\n\n> set enable_nestloop = off; \n\nWhat's the rationale for this?\n\n> HashAggregate (cost=152555.97..152567.32 rows=267 width=162)\n\n152000 disk page fetches is a bit, but it shouldn't take 24 hours. There's\nprobably misestimation involved at some point here. Does it really return 267\nrows, or many more?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 18 May 2007 18:42:10 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU Intensive query"
},
{
"msg_contents": "What Postgres version is this?\n \n 8.1.3\n \n> set enable_nestloop = off; \n\nWhat's the rationale for this?\n\n To eliminate nested loop. It does a nested loop betwwen to very large table(millions of rows).\n \n > HashAggregate (cost=152555.97..152567.32 rows=267 width=162)\n\n152000 disk page fetches is a bit, but it shouldn't take 24 hours. There's\nprobably misestimation involved at some point here. Does it really return 267\nrows, or many more?\n\n It returns finally about 19-20 thousand rows.\n\n\n\"Steinar H. Gunderson\" <[email protected]> wrote:\n On Fri, May 18, 2007 at 09:02:52AM -0700, Abu Mushayeed wrote:\n> I have an interesting problem. I have the following query that ran ok on\n> Monday and Tuesday and it has been running ok since I have been at this\n> job. I have seen it to be IO intensive, but since Wednesday it has become\n> CPU intensive. Database wise fresh data has been put into the tables,\n> vacuumed & analyzed, no other parameter has been modified.\n\nWhat Postgres version is this?\n\n> The query is as follows and it's explain plan is also attached:\n\nNormally EXPLAIN ANALYZE data would be much better than EXPLAIN, but if the\nquery indeed does not finish, it's not going to help much.\n\n> set enable_nestloop = off; \n\nWhat's the rationale for this?\n\n> HashAggregate (cost=152555.97..152567.32 rows=267 width=162)\n\n152000 disk page fetches is a bit, but it shouldn't take 24 hours. There's\nprobably misestimation involved at some point here. Does it really return 267\nrows, or many more?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n \n---------------------------------\n8:00? 8:25? 8:40? Find a flick in no time\n with theYahoo! Search movie showtime shortcut.\nWhat Postgres version is this? 8.1.3 > set enable_nestloop = off; What's the rationale for this? To eliminate nested loop. It does a nested loop betwwen to very large table(millions of rows). > HashAggregate (cost=152555.97..152567.32 rows=267 width=162)152000 disk page fetches is a bit, but it shouldn't take 24 hours. There'sprobably misestimation involved at some point here. Does it really return 267rows, or many more? It returns finally about 19-20 thousand rows.\"Steinar H. Gunderson\" <[email protected]> wrote: On Fri, May 18, 2007 at 09:02:52AM -0700, Abu Mushayeed wrote:> I have an interesting problem. I have the following query that ran ok on> Monday and Tuesday and it\n has been running ok since I have been at this> job. I have seen it to be IO intensive, but since Wednesday it has become> CPU intensive. Database wise fresh data has been put into the tables,> vacuumed & analyzed, no other parameter has been modified.What Postgres version is this?> The query is as follows and it's explain plan is also attached:Normally EXPLAIN ANALYZE data would be much better than EXPLAIN, but if thequery indeed does not finish, it's not going to help much.> set enable_nestloop = off; What's the rationale for this?> HashAggregate (cost=152555.97..152567.32 rows=267 width=162)152000 disk page fetches is a bit, but it shouldn't take 24 hours. There'sprobably misestimation involved at some point here. Does it really return 267rows, or many more?/* Steinar */-- Homepage: http://www.sesse.net/---------------------------(end of\n broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster\n8:00? 8:25? 8:40? Find a flick in no time with theYahoo! 
Search movie showtime shortcut.",
"msg_date": "Fri, 18 May 2007 14:37:27 -0700 (PDT)",
"msg_from": "Abu Mushayeed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CPU Intensive query"
},
{
"msg_contents": "Abu Mushayeed <[email protected]> writes:\n> The query is as follows and it's explain plan is also attached:\n\nThe query itself seems to be a simple join over not too many rows, so\nI don't see how it could be taking 24 hours. What I suspect is you're\nincurring lots and lots of invocations of those user-written functions\nand one of them has suddenly decided to get very slow.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 May 2007 17:51:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU Intensive query "
},
{
"msg_contents": "> one of them has suddenly decided to get very slow\n \n Is there a way to predict when the system will do this? Also, why would it suddenly go from IO intensive to CPU intensive.\n \n Also, this query ran today and it already finished. Today it was IO intensive.\n \n Please provide me some direction to this problem.\n \n Thanks.\n\nTom Lane <[email protected]> wrote:\n Abu Mushayeed writes:\n> The query is as follows and it's explain plan is also attached:\n\nThe query itself seems to be a simple join over not too many rows, so\nI don't see how it could be taking 24 hours. What I suspect is you're\nincurring lots and lots of invocations of those user-written functions\nand one of them has suddenly decided to get very slow.\n\nregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\nhttp://www.postgresql.org/about/donate\n\n\n \n---------------------------------\nNow that's room service! Choose from over 150,000 hotels \nin 45,000 destinations on Yahoo! Travel to find your fit.\n> one of them has suddenly decided to get very slow Is there a way to predict when the system will do this? Also, why would it suddenly go from IO intensive to CPU intensive. Also, this query ran today and it already finished. Today it was IO intensive. Please provide me some direction to this problem. Thanks.Tom Lane <[email protected]> wrote: Abu Mushayeed writes:> The query is as follows and it's explain plan is also attached:The query itself seems to be a simple join over not too many rows, soI don't see how it could be taking 24 hours. What I suspect is you'reincurring lots and lots of invocations of those user-written functionsand one of them has suddenly\n decided to get very slow.regards, tom lane---------------------------(end of broadcast)---------------------------TIP 7: You can help support the PostgreSQL project by donating athttp://www.postgresql.org/about/donate\nNow that's room service! Choose from over 150,000 hotels in 45,000 destinations on Yahoo! Travel to find your fit.",
"msg_date": "Fri, 18 May 2007 15:26:08 -0700 (PDT)",
"msg_from": "Abu Mushayeed <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CPU Intensive query "
},
{
"msg_contents": "On Fri, May 18, 2007 at 02:37:27PM -0700, Abu Mushayeed wrote:\n>>> set enable_nestloop = off; \n>> What's the rationale for this?\n> To eliminate nested loop. It does a nested loop betwwen to very large\n> table(millions of rows).\n\nIf the planner chooses a nested loop, it is because it believes it is the\nmost efficient solution. I'd turn it back on and try to figure out why the\nplanner was wrong. Note that a nested loop with an index scan on one or both\nsides can easily be as efficient as anything.\n\nDid you ANALYZE your tables recently? If the joins are really between\nmillions of rows and the planner thinks it's a couple thousands, the stats\nsound rather off... \n\n>>> HashAggregate (cost=152555.97..152567.32 rows=267 width=162)\n>> 152000 disk page fetches is a bit, but it shouldn't take 24 hours. There's\n>> probably misestimation involved at some point here. Does it really return 267\n>> rows, or many more?\n> It returns finally about 19-20 thousand rows.\n\nSo the planner is off by a factor of at least a hundred. That's a good\nfirst-level explanation for why it's slow, at least...\n\nIf you can, please provide EXPLAIN ANALYZE output for your query (after\nrunning ANALYZE on all your tables, if you haven't already); even though\nit will take some time, it usually makes this kind of performance debugging\nmuch easier.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 19 May 2007 00:32:33 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU Intensive query"
},
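A sketch of the diagnostics Steinar is asking for, with placeholder names; when the estimate is off by a factor of a hundred, raising the per-column statistics target on the badly estimated join or filter column is one common remedy:

    SET enable_nestloop = on;       -- let the planner choose freely again
    ANALYZE;                        -- refresh statistics for all tables

    EXPLAIN ANALYZE
    SELECT ...;                     -- the slow query; compare rows=<estimate>
                                    -- against (actual ... rows=<real>) in the output

    -- if one column is consistently misestimated, collect more detailed stats for it
    ALTER TABLE some_table ALTER COLUMN join_col SET STATISTICS 200;
    ANALYZE some_table;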
{
"msg_contents": "On Sat, May 19, 2007 at 12:32:33AM +0200, Steinar H. Gunderson wrote:\n> Did you ANALYZE your tables recently? If the joins are really between\n> millions of rows and the planner thinks it's a couple thousands, the stats\n> sound rather off... \n\nSorry, I forgot your first e-mail where you said you had both vacuumed and\nanalyzed recently. The estimates are still off, though -- the WHERE query\nmight be difficult to estimate properly. (I'm not sure how Tom arrived on\nhis conclusion of expensive user-defined functions, but given the usual\nprecisions of his guesses, I'd check that too...)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sat, 19 May 2007 00:35:29 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU Intensive query"
},
{
"msg_contents": "On Fri, May 18, 2007 at 03:26:08PM -0700, Abu Mushayeed wrote:\n\n> Also, this query ran today and it already finished. Today it was\n> IO intensive.\n\nAre you entirely sure that it's not a coincidence, and something\n_else_ in the system is causing the CPU issues? \n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n",
"msg_date": "Sat, 19 May 2007 09:20:57 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CPU Intensive query"
}
] |
[
{
"msg_contents": "We have recently ported our application to the postgres database. For\nthe most part performance has not been an issue; however there is one\nsituation that is a problem and that is the initial read of rows\ncontaining BYTEA values that have an average size of 2 kilobytes or\ngreater. For BYTEA values postgres requires as much 3 seconds to read\nthe values from disk into its buffer cache. After the initial read into\nbuffer cache, performance is comparable to other commercial DBMS that we\nhave ported to. As would be expected the commercial DBMS are also slower\nto display data that is not already in the buffer cache, but the\nmagnitude of difference for postgres for this type of data read from\ndisk as opposed to read from buffer cache is much greater.\n\n \n\nWe have vacuumed the table and played around with the database\ninitialization parameters in the postgresql.conf. Neither helped with\nthis problem.\n\n \n\nDoes anyone have any tips on improving the read from disk performance of\nBYTEA data that is typically 2KB or larger?\n\n \n\nMark\n\n\n\n\n\n\n\n\n\n\nWe have recently ported our application to the postgres\ndatabase. For the most part performance has not been an issue; however there is\none situation that is a problem and that is the initial read of rows containing\nBYTEA values that have an average size of 2 kilobytes or greater. For BYTEA\nvalues postgres requires as much 3 seconds to read the values from disk into\nits buffer cache. After the initial read into buffer cache, performance is\ncomparable to other commercial DBMS that we have ported to. As would be\nexpected the commercial DBMS are also slower to display data that is not\nalready in the buffer cache, but the magnitude of difference for postgres for\nthis type of data read from disk as opposed to read from buffer cache is much\ngreater.\n \nWe have vacuumed the table and played around with the\ndatabase initialization parameters in the postgresql.conf. Neither helped with this\nproblem.\n \nDoes anyone have any tips on improving the read from disk\nperformance of BYTEA data that is typically 2KB or larger?\n \nMark",
"msg_date": "Fri, 18 May 2007 10:11:51 -0700",
"msg_from": "\"Mark Harris\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "reading large BYTEA type is slower than expected"
},
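One thing worth checking with bytea of this size: values around 2 kB and up are normally compressed and/or pushed out of line into the table's TOAST relation, so a cold read touches extra pages. A sketch, with made-up table and column names, assuming 8.1 or later for the size functions and that the table has a TOAST table (any table with a bytea column will):

    -- how much of the data lives in the main heap versus the TOAST table?
    SELECT pg_size_pretty(pg_relation_size(c.oid))           AS heap_size,
           pg_size_pretty(pg_relation_size(c.reltoastrelid))  AS toast_size
    FROM pg_class c
    WHERE c.relname = 'images';

    -- data that is itself already compressed gains nothing from TOAST compression;
    -- storing it uncompressed but still out of line can cut CPU on reads
    -- (this only affects values written after the change)
    ALTER TABLE images ALTER COLUMN payload SET STORAGE EXTERNAL;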
{
"msg_contents": "\"Mark Harris\" <[email protected]> writes:\n> We have recently ported our application to the postgres database. For\n> the most part performance has not been an issue; however there is one\n> situation that is a problem and that is the initial read of rows\n> containing BYTEA values that have an average size of 2 kilobytes or\n> greater. For BYTEA values postgres requires as much 3 seconds to read\n> the values from disk into its buffer cache.\n\nHow large is \"large\"?\n\n(No, I don't believe it takes 3 sec to fetch a single 2Kb value.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 May 2007 13:48:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reading large BYTEA type is slower than expected "
},
{
"msg_contents": "Mark,\n\nI am no expert but this looks like a file system I/O thing. I set\nhw.ata.wc=1 for a SATA drive and =0 for a SCSI drive in /boot/loader.conf on\nmy FreeBSD systems. That seems to provide some needed tweaking.\n\nYudhvir\n==========\nOn 5/18/07, Mark Harris <[email protected]> wrote:\n>\n> We have recently ported our application to the postgres database. For the\n> most part performance has not been an issue; however there is one situation\n> that is a problem and that is the initial read of rows containing BYTEA\n> values that have an average size of 2 kilobytes or greater. For BYTEA values\n> postgres requires as much 3 seconds to read the values from disk into its\n> buffer cache. After the initial read into buffer cache, performance is\n> comparable to other commercial DBMS that we have ported to. As would be\n> expected the commercial DBMS are also slower to display data that is not\n> already in the buffer cache, but the magnitude of difference for postgres\n> for this type of data read from disk as opposed to read from buffer cache is\n> much greater.\n>\n>\n>\n> We have vacuumed the table and played around with the database\n> initialization parameters in the postgresql.conf. Neither helped with this\n> problem.\n>\n>\n>\n> Does anyone have any tips on improving the read from disk performance of\n> BYTEA data that is typically 2KB or larger?\n>\n>\n>\n> Mark\n>\n\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nMark,\n\nI am no expert but this looks like a file system I/O thing. I set\nhw.ata.wc=1 for a SATA drive and =0 for a SCSI drive in\n/boot/loader.conf on my FreeBSD systems. That seems to provide some\nneeded tweaking.\n\nYudhvir==========On 5/18/07, Mark Harris <[email protected]> wrote:\n\n\nWe have recently ported our application to the postgres\ndatabase. For the most part performance has not been an issue; however there is\none situation that is a problem and that is the initial read of rows containing\nBYTEA values that have an average size of 2 kilobytes or greater. For BYTEA\nvalues postgres requires as much 3 seconds to read the values from disk into\nits buffer cache. After the initial read into buffer cache, performance is\ncomparable to other commercial DBMS that we have ported to. As would be\nexpected the commercial DBMS are also slower to display data that is not\nalready in the buffer cache, but the magnitude of difference for postgres for\nthis type of data read from disk as opposed to read from buffer cache is much\ngreater.\n \nWe have vacuumed the table and played around with the\ndatabase initialization parameters in the postgresql.conf. Neither helped with this\nproblem.\n \nDoes anyone have any tips on improving the read from disk\nperformance of BYTEA data that is typically 2KB or larger?\n \nMark\n\n\n-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Fri, 18 May 2007 10:50:25 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reading large BYTEA type is slower than expected"
},
{
"msg_contents": "Tom,\n\nNo it is not 3 seconds to read a single value. Multiple records are\nread, approximately 120 records if the raster dataset is created with\nour application's default configuration.\n\nPlease read on to understand why, if you need to.\n\nWe are a GIS software company and have two basic types of data, raster\nand vector. Both types of data are stored in a BYTEA.\n\nVector data are representations of geometry stored as a series of\nvertices to represent points, lines and polygons. This type of data is\ntypically 30 to 200 bytes, but can be very large (consider how many\nvertices would be required to represent the Pacific Ocean at a detailed\nresolution). Vector data does not seem to exhibit the cold fetch issue\n(fetch from disk as opposed to fetch from buffer cache).\n\nIt is with raster data that we see the problem. Raster data is image\ndata stored in the database. When we store a georeferenced image in the\ndatabase we block it up into tiles. The default tile size is 128 by 128\npixels.\nWe compress the data using either: LZ77, JPEG or JPEG2000. Typically the\npixel blocks stored in the BYTEA range in size from 800 bytes to 16000\nbytes for 8-bit data stored with the default tile size, depending on the\ntype of compression and the variability of the data.\n\nOur application is capable of mosaicking source images together into\nhuge raster datasets that can grow into terabytes. Consider the entire\nlandsat imagery with a resolution of 15 meters mosaicked into one raster\ndataset. It requires less than a terabyte to store that data.\n\nFor practical reasons, as you can imagine, we construct a reduced\nresolution pyramid on the raster base level, allowing applications to\nview a reduced resolution level of the raster dataset as the user zooms\nout, and a higher resolution level as the user zooms in. The pyramid\nlevels are also stored as pixel blocks in the table. Each pyramid level\nis reduced in resolution by 1/2 in the X and Y dimension. Therefore\npyramid level 1 will be 1/4 of pyramid level 0 (the base).\n\nAs the application queries the raster blocks table which stores the\nraster data tiles, it will request a raster tiles that fall within the\nspatial extent of the window for a particular pyramid level. Therefore\nthe number of records queried from the raster blocks table containing\nthe BYTEA column of pixel data is fairly constant. For the screen\nresolution of 1680 by 1050 that I am testing with about 120 records will\nbe fetched from the raster blocks table each time the user pans or\nzooms.\n\nMark\n\n \n\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, May 18, 2007 10:48 AM\nTo: Mark Harris\nCc: [email protected]\nSubject: Re: [PERFORM] reading large BYTEA type is slower than expected \n\n\"Mark Harris\" <[email protected]> writes:\n> We have recently ported our application to the postgres database. For\n> the most part performance has not been an issue; however there is one\n> situation that is a problem and that is the initial read of rows\n> containing BYTEA values that have an average size of 2 kilobytes or\n> greater. For BYTEA values postgres requires as much 3 seconds to read\n> the values from disk into its buffer cache.\n\nHow large is \"large\"?\n\n(No, I don't believe it takes 3 sec to fetch a single 2Kb value.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 18 May 2007 11:51:58 -0700",
"msg_from": "\"Mark Harris\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: reading large BYTEA type is slower than expected "
},
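The access pattern Mark describes, roughly 360 spatially adjacent tile rows per pan or zoom, is exactly the case where physical row order matters on a cold cache. A sketch using hypothetical table and column names and the 8.1/8.2 CLUSTER syntax:

    -- index the tile key in the order the viewer fetches tiles
    CREATE INDEX raster_blocks_tile_idx
        ON raster_blocks (pyramid_level, band, block_row, block_col);

    -- rewrite the table in that order so one screenful of tiles becomes a short,
    -- mostly sequential read instead of a few hundred scattered seeks
    -- (CLUSTER is a one-shot reorder; re-run it after large mosaicking loads)
    CLUSTER raster_blocks_tile_idx ON raster_blocks;
    ANALYZE raster_blocks;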
{
"msg_contents": "Tom,\n\nActually the 120 records I quoted is a mistake. Since it is a three band\nimage the number of records should be 360 records or 120 records for\neach band.\n\nMark\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, May 18, 2007 10:48 AM\nTo: Mark Harris\nCc: [email protected]\nSubject: Re: [PERFORM] reading large BYTEA type is slower than expected \n\n\"Mark Harris\" <[email protected]> writes:\n> We have recently ported our application to the postgres database. For\n> the most part performance has not been an issue; however there is one\n> situation that is a problem and that is the initial read of rows\n> containing BYTEA values that have an average size of 2 kilobytes or\n> greater. For BYTEA values postgres requires as much 3 seconds to read\n> the values from disk into its buffer cache.\n\nHow large is \"large\"?\n\n(No, I don't believe it takes 3 sec to fetch a single 2Kb value.)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Fri, 18 May 2007 12:37:00 -0700",
"msg_from": "\"Mark Harris\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: reading large BYTEA type is slower than expected "
}
] |
[
{
"msg_contents": "I need some help on recommendations to solve a perf problem.\n\nI've got a table with ~121 million records in it. Select count on it \ncurrently takes ~45 minutes, and an update to the table to set a value on \none of the columns I finally killed after it ran 17 hours and had still \nnot completed. Queries into the table are butt slow, and\n\nSystem: SUSE LINUX 10.0 (X86-64)\nPostgresql: PostgreSQL 8.2.1\nIndex type: btree\n\nA select count took ~48 minutes before I made some changes to the \npostgresql.conf, going from default values to these:\nshared_buffers = 24MB\nwork_mem = 256MB\nmaintenance_work_mem = 512MB\nrandom_page_cost = 100\nstats_start_collector = off\nstats_row_level = off\n\nAs a test I am trying to do an update on state using the following queries:\nupdate res set state=5001;\nselect count(resid) from res;\n\nThe update query that started this all I had to kill after 17hours. It \nshould have updated all 121+ million records. That brought my select \ncount down to 19 minutes, but still a far cry from acceptable.\n\nThe system has 2GB of RAM (more is alreads on order), but doesn't seem to \nshow problems in TOP with running away with RAM. If anything, I don't \nthink it's using enough as I only see about 6 processes using 26-27 MB \neach) and is running on a single disk (guess I will likely have to at the \nminimum go to a RAID1). Workload will primarily be comprised of queries \nagainst the indicies (thus why so many of them) and updates to a single \nrecord from about 10 clients where that one records will have md5, state, \nrval, speed, audit, and date columns updated. Those updates don't seem to \nbe a problem, and are generally processed in bulk of 500 to 5000 at a \ntime.\n\nHere is the schema for the table giving me problems:\n\nCREATE TABLE res\n(\n res_id integer NOT NULL DEFAULT nextval('result_id_seq'::regclass),\n res_client_id integer NOT NULL,\n \"time\" real DEFAULT 0,\n error integer DEFAULT 0,\n md5 character(32) DEFAULT 0,\n res_tc_id integer NOT NULL,\n state smallint DEFAULT 0,\n priority smallint,\n rval integer,\n speed real,\n audit real,\n date timestamp with time zone,\n gold_result_id integer,\n CONSTRAINT result_pkey PRIMARY KEY (res_id),\n CONSTRAINT unique_res UNIQUE (res_client_id, res_tc_id)\n)\nWITHOUT OIDS;\nALTER TABLE res OWNER TO postgres;\n\nCREATE INDEX index_audit\n ON res\n USING btree\n (audit);\n\nCREATE INDEX index_event\n ON res\n USING btree\n (error);\n\nCREATE INDEX index_priority\n ON res\n USING btree\n (priority);\n\nCREATE INDEX index_rval\n ON res\n USING btree\n (rval);\n\nCREATE INDEX index_speed\n ON res\n USING btree\n (speed);\n\nCREATE INDEX index_state\n ON res\n USING btree\n (state);\n\nCREATE INDEX index_tc_id\n ON res\n USING btree\n (res_tc_id);\n\nCREATE INDEX index_time\n ON res\n USING btree\n (\"time\");\n",
"msg_date": "Fri, 18 May 2007 12:43:40 -0500 (CDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "121+ million record table perf problems"
},
{
"msg_contents": "On Fri, May 18, 2007 at 12:43:40PM -0500, [email protected] wrote:\n> I've got a table with ~121 million records in it. Select count on it \n> currently takes ~45 minutes, and an update to the table to set a value on \n> one of the columns I finally killed after it ran 17 hours and had still \n> not completed. Queries into the table are butt slow, and\n\nI don't think you've told us anything like enough to get started on\nsolving your problem. But to start with, you know that in Postgres,\nan unrestricted count() on a table always results in reading the\nentire table, right?\n\nStandard questions: have you performed any vacuum or analyse?\n\nYour update statement is also a case where you have to touch every\nrow. Note that, given that you seem to be setting the state field to\nthe same value for everything, an index on there will do you not one\njot of good until there's greater selectivity.\n\nHow fast is the disk? Is it fast enough to read and touch every one\nof those rows on the table inside of 17 hours? \n\nNote also that your approach of updating all 121 million records in\none statement is approximately the worst way to do this in Postgres,\nbecause it creates 121 million dead tuples on your table. (You've\ncreated some number of those by killing the query as well.)\n\nAll of that said, 17 hours seems kinda long. \n\n> As a test I am trying to do an update on state using the following queries:\n> update res set state=5001;\n> select count(resid) from res;\n\nWhat is this testing?\n\n> The update query that started this all I had to kill after 17hours. \n\nDoes that suggest that the update you're trying to make work well is\n_not_ update res set state = 5001?\n\n> each) and is running on a single disk (guess I will likely have to at the \n> minimum go to a RAID1). Workload will primarily be comprised of queries \n\nI bet that single disk is your problem. Iostat is your friend, I'd\nsay.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nEverything that happens in the world happens at some place.\n\t\t--Jane Jacobs \n",
"msg_date": "Fri, 18 May 2007 14:30:18 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems"
},
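A sketch of the batched alternative Andrew is hinting at: run the bulk update in key-range slices with vacuums in between, so dead tuples are reclaimed as you go instead of piling up to 121 million at once (the range bounds are illustrative):

    UPDATE res SET state = 5001 WHERE res_id >= 0        AND res_id < 10000000;
    VACUUM res;    -- reclaim the dead row versions before the next slice
    UPDATE res SET state = 5001 WHERE res_id >= 10000000 AND res_id < 20000000;
    VACUUM res;
    -- ...continue in slices up to the maximum res_id; each slice commits on its own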
{
"msg_contents": "[email protected] wrote:\n> I need some help on recommendations to solve a perf problem.\n> \n> I've got a table with ~121 million records in it. Select count on it \n> currently takes ~45 minutes, and an update to the table to set a value \n> on one of the columns I finally killed after it ran 17 hours and had \n> still not completed. Queries into the table are butt slow, and\n\nScanning 121 million rows is going to be slow even on 16 disks.\n\n> \n> System: SUSE LINUX 10.0 (X86-64)\n> Postgresql: PostgreSQL 8.2.1\n> Index type: btree\n\nYou really should be running 8.2.4.\n\n> \n> A select count took ~48 minutes before I made some changes to the \n> postgresql.conf, going from default values to these:\n> shared_buffers = 24MB\n\nThis could be increased.\n\n> work_mem = 256MB\n> maintenance_work_mem = 512MB\n> random_page_cost = 100\n> stats_start_collector = off\n> stats_row_level = off\n> \n> As a test I am trying to do an update on state using the following queries:\n> update res set state=5001;\n\nYou are updating 121 million rows, that takes a lot of time considering \nyou are actually (at a very low level) marking 121 million rows dead and \ninserting 121 million more.\n\n> The update query that started this all I had to kill after 17hours. It \n> should have updated all 121+ million records. That brought my select \n> count down to 19 minutes, but still a far cry from acceptable.\n\nNot quite sure what you would considerable acceptable based on what you \nare trying to do.\n\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> Here is the schema for the table giving me problems:\n> \n> CREATE TABLE res\n> (\n> res_id integer NOT NULL DEFAULT nextval('result_id_seq'::regclass),\n> res_client_id integer NOT NULL,\n> \"time\" real DEFAULT 0,\n> error integer DEFAULT 0,\n> md5 character(32) DEFAULT 0,\n> res_tc_id integer NOT NULL,\n> state smallint DEFAULT 0,\n> priority smallint,\n> rval integer,\n> speed real,\n> audit real,\n> date timestamp with time zone,\n> gold_result_id integer,\n> CONSTRAINT result_pkey PRIMARY KEY (res_id),\n> CONSTRAINT unique_res UNIQUE (res_client_id, res_tc_id)\n> )\n> WITHOUT OIDS;\n> ALTER TABLE res OWNER TO postgres;\n> \n> CREATE INDEX index_audit\n> ON res\n> USING btree\n> (audit);\n> \n> CREATE INDEX index_event\n> ON res\n> USING btree\n> (error);\n> \n> CREATE INDEX index_priority\n> ON res\n> USING btree\n> (priority);\n> \n> CREATE INDEX index_rval\n> ON res\n> USING btree\n> (rval);\n> \n> CREATE INDEX index_speed\n> ON res\n> USING btree\n> (speed);\n> \n> CREATE INDEX index_state\n> ON res\n> USING btree\n> (state);\n> \n> CREATE INDEX index_tc_id\n> ON res\n> USING btree\n> (res_tc_id);\n> \n> CREATE INDEX index_time\n> ON res\n> USING btree\n> (\"time\");\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Fri, 18 May 2007 11:51:04 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems"
},
{
"msg_contents": "[email protected] wrote:\n\n> I need some help on recommendations to solve a perf problem.\n>\n> I've got a table with ~121 million records in it. Select count on it \n> currently takes ~45 minutes, and an update to the table to set a value \n> on one of the columns I finally killed after it ran 17 hours and had \n> still not completed. Queries into the table are butt slow, and\n>\nThis is way too long. I just did a select count(*) on a table of mine \nthat has 48 million rows and it took only 178 seconds. And this is on a \nserious POS disk subsystem that's giving me about 1/2 the read speed of \na single off the shelf SATA disk.\nAs select count(*) has to read the whole table sequentially, the time it \ntakes is linear with the size of the table (once you get large enough \nthat the whole table doesn't get cached in memory). So I'd be surprised \nif a 121 million record table took more than 500 or so seconds to read, \nand would expect it to be less.\n\nSo my advice: vacuum. I'll bet you've got a whole boatload of dead \ntuples kicking around. Then analyze. Then consider firing off a \nreindex and/or cluster against the table. The other thing I'd consider \nis dropping the money on some more hardware- a few hundred bucks to get \na battery backed raid card and half a dozen SATA drives would probably \ndo wonders for your performance.\n\n>\n> shared_buffers = 24MB\n\nUp your shared buffers. This is a mistake I made originally as well- \nbut this is the total number of shared buffers used by the system. I \nhad originally assumed that the number of shared buffers used was this \ntimes the number of backends, but it's not.\n\nWith 2G of memory, I'd start with shared buffers of 512MB, and consider \nupping it to 768MB or even 1024MB. This will also really help performance.\n\n> stats_start_collector = off\n> stats_row_level = off\n>\nI think I'd also recommend turning these one.\n\nBrian\n\n",
"msg_date": "Fri, 18 May 2007 14:53:42 -0400",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems"
},
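A sketch of the cleanup Brian suggests, against the table from the original post; the VERBOSE output also reports how many dead row versions were found and removed, which confirms or rules out the bloat theory:

    VACUUM VERBOSE ANALYZE res;   -- note the dead / removed row version counts in the output
    REINDEX TABLE res;            -- rebuilds all of the table's indexes; expect this to take a while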
{
"msg_contents": "On Friday 18 May 2007 11:51, \"Joshua D. Drake\" <[email protected]> wrote:\n> > The update query that started this all I had to kill after 17hours. It\n> > should have updated all 121+ million records. That brought my select\n> > count down to 19 minutes, but still a far cry from acceptable.\n\nYou're going to want to drop all your indexes before trying to update 121 \nmillion records. Updates in PostgreSQL are really quite slow, mostly due \nto all the index updates. Drop indexes, do the updates, create a primary \nkey, cluster the table on that key to free up the dead space, then recreate \nthe rest of the indexes. That's about as fast as you can get that process.\n\nOf course, doing anything big on one disk is also going to be slow, no \nmatter what you do. I don't think a table scan should take 19 minutes, \nthough, not for 121 million records. You should be able to get at least \n60-70MB/sec out of anything modern. I can only assume your disk is \nthrashing doing something else at the same time as the select.\n\n-- \n\"We can no more blame our loss of freedom on Congressmen than we can\nprostitution on pimps. Both simply provide broker services for their\ncustomers.\" -- Dr. Walter Williams\n\n",
"msg_date": "Fri, 18 May 2007 12:08:41 -0700",
"msg_from": "Alan Hodgson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems"
},
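Spelled out against the schema in the original post, Alan's recipe would look roughly like the following; only two of the secondary indexes are shown, the same pattern applies to the rest, and the primary key and unique constraint are kept:

    DROP INDEX index_audit;
    DROP INDEX index_state;
    -- ...drop the remaining secondary indexes

    UPDATE res SET state = 5001;      -- the bulk update, now without index maintenance

    CLUSTER result_pkey ON res;       -- rewrites the table, discarding the dead tuples
                                      -- (8.1/8.2 syntax: CLUSTER <index> ON <table>)

    CREATE INDEX index_audit ON res (audit);
    CREATE INDEX index_state ON res (state);
    -- ...recreate the rest
    ANALYZE res;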
{
"msg_contents": "Andrew Sullivan <[email protected]> writes:\n> All of that said, 17 hours seems kinda long. \n\nI imagine he's done a bunch of those full-table UPDATEs without\nvacuuming, and now has approximately a gazillion dead tuples bloating\nthe table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 May 2007 15:37:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems "
},
{
"msg_contents": "\n>\n> I've got a table with ~121 million records in it. Select count on it \n> currently takes ~45 minutes, and an update to the table to set a value \n> on one of the columns I finally killed after it ran 17 hours and had \n> still not completed. Queries into the table are butt slow, and\n>\n> The update query that started this all I had to kill after 17hours. \n> It should have updated all 121+ million records. That brought my \n> select count down to 19 minutes, but still a far cry from acceptable.\n\nIf you have a column that needs to be updated often for all rows, \nseparate it into a different table, and create a view that joins it back \nto the main table so that your application still sees the old schema.\n\nThis will greatly speed your update since (in Postgres) and update is \nthe same as a delete+insert. By updating that one column, you're \nre-writing your entire 121 million rows. If you separate it, you're \nonly rewriting that one column. Don't forget to vacuum/analyze and \nreindex when you're done.\n\nBetter yet, if you can stand a short down time, you can drop indexes on \nthat column, truncate, then do 121 million inserts, and finally \nreindex. That will be MUCH faster.\n\nCraig\n\n\n",
"msg_date": "Fri, 18 May 2007 15:33:08 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems"
},
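A sketch of the split Craig describes, with a hypothetical view name; the point is that rewriting a two-column row is far cheaper than rewriting the wide row plus all of its indexes:

    CREATE TABLE res_state (
        res_id integer PRIMARY KEY REFERENCES res (res_id) ON DELETE CASCADE,
        state  smallint DEFAULT 0
    );

    INSERT INTO res_state (res_id, state)
        SELECT res_id, state FROM res;

    CREATE VIEW res_view AS
        SELECT r.res_id, r.res_client_id, r."time", r.error, r.md5, r.res_tc_id,
               s.state, r.priority, r.rval, r.speed, r.audit, r.date, r.gold_result_id
        FROM res r JOIN res_state s USING (res_id);

    -- once the application reads from res_view, the state column (and index_state)
    -- can be dropped from res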
{
"msg_contents": "Craig James wrote:\n\n> Better yet, if you can stand a short down time, you can drop indexes on \n> that column, truncate, then do 121 million inserts, and finally \n> reindex. That will be MUCH faster.\n\nOr you can do a CLUSTER, which does all the same things automatically.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 18 May 2007 19:20:52 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems"
},
{
"msg_contents": "On Fri, 18 May 2007, [email protected] wrote:\n\n> shared_buffers = 24MB\n> work_mem = 256MB\n> maintenance_work_mem = 512MB\n\nYou should take a minute to follow the suggestions at \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm and set \ndramatically higher values for shared_buffers and effective_cache_size for \nyour server. Also, your work_mem figure may be OK for now, but if ever do \nhave 10 people connect to this database at once and run big queries you \ncould have an issue with it set that high--that's a per client setting.\n\nAfter you're done with that, you should also follow the suggestions there \nto do a VACCUM ANALYZE. That may knock out two other potential issues at \nonce. It will take a while to run, but I think you need it badly to sort \nout what you've already done.\n\n> random_page_cost = 100\n\nI'm not sure what logic prompted this change, but after you correct the \nabove you should return this to its default; if this is helping now it's \nonly because other things are so far off from where they should be.\n\n> update res set state=5001;\n> The update query that started this all I had to kill after 17hours. It \n> should have updated all 121+ million records. That brought my select count \n> down to 19 minutes, but still a far cry from acceptable.\n\nYou should work on the select side of this first. If that isn't running \nin a moderate amount of time, trying to get the much more difficult update \nto happen quickly is hopeless.\n\nOnce the select is under control, there are a lot of parameters to adjust \nthat will effect the speed of the updates. The first thing to do is \ndramatically increase checkpoint_segments; I would set that to at least 30 \nin your situation.\n\nAlso: going to RAID-1 won't make a bit of difference to your update \nspeed; could even make it worse. Adding more RAM may not help much \neither. If you don't have one already, the real key to improving \nperformance in a heavy update situation is to get a better disk controller \nwith a cache that helps accelerate writes. Then the next step is to \nstripe this data across multiple disks in a RAID-0 configuration to split \nthe I/O up.\n\nYou have a lot of work ahead of you. Even after you resolve the gross \nissues here, you have a table that has around 10 indexes on it. \nMaintaining those is far from free; every time you update a single record \nin that table, the system has to update each of those indexes on top of \nthe record update itself. So you're really asking your system to do \naround 1.2 billion disk-related operations when you throw out your simple \nbatch update against every row, and good luck getting that to run in a \ntime frame that's less than days long.\n\nThe right way to get a feel for what's going on is to drop all the indexes \nexcept for the constraints and see how the bulk update runs after the \nparameter changes suggested above are in place and the database has been \ncleaned up with vacuum+analyze. Once you have a feel for that, add some \nindexes back in and see how it degrades. Then you'll know how adding each \none of them impacts your performance. I suspect you're going to have to \nredesign your indexing scheme before this is over. I don't think your \ncurrent design is ever going to work the way you expect it to.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Sat, 19 May 2007 01:00:00 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems"
},
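For concreteness, a sketch of what the postgresql.conf changes discussed above might look like on this 2 GB machine; the numbers are illustrative starting points, not a recommendation:

    shared_buffers = 512MB          # was 24MB; Postgres won't use memory you don't give it
    effective_cache_size = 1GB      # roughly what the OS can cache for this workload
    work_mem = 16MB                 # 256MB is risky once ~10 clients sort concurrently
    checkpoint_segments = 30        # spreads out checkpoint I/O during the bulk update
    random_page_cost = 4.0          # back to the default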
{
"msg_contents": "\nOn May 18, 2007, at 2:30 PM, Andrew Sullivan wrote:\n\n> Note also that your approach of updating all 121 million records in\n> one statement is approximately the worst way to do this in Postgres,\n> because it creates 121 million dead tuples on your table. (You've\n> created some number of those by killing the query as well.)\n>\n> All of that said, 17 hours seems kinda long.\n\nI don't think that is too long. Growing the table one page at a time \ntakes a long time when you add a lot of pages to a table that big. \nAdd in the single disk and you're flying the disk head all over the \nplace so it will just be slow. No way around it.\n\nAnd just for good measure, I ran a count on one of my big tables \nwhich consists of two integers and a varchar(7):\n\ndb=> select count(*) from mytable;\n count\n-----------\n311994721\n(1 row)\n\nTime: 157689.057 ms\n\nSo I'm going to bet $1 that you're I/O starved.\n\nAlso, for memory usage, postgres won't use more than you tell it to...\n\n",
"msg_date": "Mon, 21 May 2007 17:24:09 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 121+ million record table perf problems"
}
] |
[
{
"msg_contents": "I have a two column table with over 160 million rows in it. As the size\nof the table grows queries on this table get exponentially slower. I am\nusing version 8.1.5 32-bit on Red Hat Enterprise Linux 3. The hardware\nis an Intel 3 Ghz Xeon with 4GB RAM, and 6 disks in a RAID 5\nconfiguration. For current testing I am running a single database\nconnection with no other applications running on the machine, and the\nswap is not being used at all.\n\nHere is the table definition:\n\nmdsdb=# \\d backup_location\n Table \"public.backup_location\"\n Column | Type | Modifiers\n-----------+---------+-----------\n record_id | bigint | not null\n backup_id | integer | not null\nIndexes:\n \"backup_location_pkey\" PRIMARY KEY, btree (record_id, backup_id)\n \"backup_location_rid\" btree (record_id)\nForeign-key constraints:\n \"backup_location_bfk\" FOREIGN KEY (backup_id) REFERENCES\nbackups(backup_id) ON DELETE CASCADE\n \nHere is the table size:\n\nmdsdb=# select count(*) from backup_location;\n count\n-----------\n 162101296\n(1 row)\n\nAnd here is a simple query on this table that takes nearly 20 minutes to\nreturn less then 3000 rows. I ran an analyze immediately before I ran\nthis query:\n\nmdsdb=# explain analyze select record_id from backup_location where\nbackup_id = 1070;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------\n Index Scan using backup_location_pkey on backup_location\n(cost=0.00..1475268.53 rows=412394 width=8) (actual\ntime=3318.057..1196723.915 rows=2752 loops=1)\n Index Cond: (backup_id = 1070)\n Total runtime: 1196725.617 ms\n(3 rows)\n\nObviously at this point the application is not usable. If possible we\nwould like to grow this table to the 3-5 billion row range, but I don't\nknow if that is realistic.\n\nAny guidance would be greatly appreciated.\n\nThanks,\nEd\n",
"msg_date": "Fri, 18 May 2007 11:30:12 -0700",
"msg_from": "\"Tyrrill, Ed\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow queries on big table"
},
{
"msg_contents": "Tyrrill, Ed wrote:\n> I have a two column table with over 160 million rows in it. As the size\n> of the table grows queries on this table get exponentially slower. I am\n> using version 8.1.5 32-bit on Red Hat Enterprise Linux 3. The hardware\n> is an Intel 3 Ghz Xeon with 4GB RAM, and 6 disks in a RAID 5\n> configuration. For current testing I am running a single database\n> connection with no other applications running on the machine, and the\n> swap is not being used at all.\n>\n> Here is the table definition:\n>\n> mdsdb=# \\d backup_location\n> Table \"public.backup_location\"\n> Column | Type | Modifiers\n> -----------+---------+-----------\n> record_id | bigint | not null\n> backup_id | integer | not null\n> Indexes:\n> \"backup_location_pkey\" PRIMARY KEY, btree (record_id, backup_id)\n> \"backup_location_rid\" btree (record_id)\n> Foreign-key constraints:\n> \"backup_location_bfk\" FOREIGN KEY (backup_id) REFERENCES\n> backups(backup_id) ON DELETE CASCADE\n> \n> Here is the table size:\n>\n> mdsdb=# select count(*) from backup_location;\n> count\n> -----------\n> 162101296\n> (1 row)\n>\n> And here is a simple query on this table that takes nearly 20 minutes to\n> return less then 3000 rows. I ran an analyze immediately before I ran\n> this query:\n>\n> mdsdb=# explain analyze select record_id from backup_location where\n> backup_id = 1070;\n> \n> QUERY PLAN\n>\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> -------------\n> Index Scan using backup_location_pkey on backup_location\n> (cost=0.00..1475268.53 rows=412394 width=8) (actual\n> time=3318.057..1196723.915 rows=2752 loops=1)\n> Index Cond: (backup_id = 1070)\n> Total runtime: 1196725.617 ms\n> (3 rows)\n> \nI've got a few points. Firstly, is your data amenable to partitioning? \nIf so that might be a big winner.\nSecondly, it might be more efficient for the planner to choose the \nbackup_location_rid index than the combination primary key index. You \ncan test this theory with this cool pg trick:\n\nbegin;\nalter table backup_location drop constraint backup_location_pkey;\nexplain analyze select ....\nrollback;\n\nto see if it's faster.\n\n> Obviously at this point the application is not usable. If possible we\n> would like to grow this table to the 3-5 billion row range, but I don't\n> know if that is realistic.\n>\n> Any guidance would be greatly appreciated.\n> \n\nWithout knowing more about your usage patterns, it's hard to say. But \npartitioning seems like your best choice at the moment.\n",
"msg_date": "Fri, 18 May 2007 14:36:22 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on big table"
},
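Since Scott raises partitioning: in the 8.1/8.2 era that means inheritance plus CHECK constraints and constraint exclusion. A minimal sketch partitioning by ranges of backup_id (the ranges and child names are illustrative, and rows must be routed into the matching child by the application or a rule):

    CREATE TABLE backup_location_p0 (
        CHECK (backup_id >= 0 AND backup_id < 1000)
    ) INHERITS (backup_location);

    CREATE TABLE backup_location_p1 (
        CHECK (backup_id >= 1000 AND backup_id < 2000)
    ) INHERITS (backup_location);

    CREATE INDEX backup_location_p0_bid ON backup_location_p0 (backup_id);
    CREATE INDEX backup_location_p1_bid ON backup_location_p1 (backup_id);

    -- with constraint exclusion on, a query such as
    --   SELECT record_id FROM backup_location WHERE backup_id = 1070;
    -- only touches the child whose CHECK constraint covers 1070
    SET constraint_exclusion = on;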
{
"msg_contents": "\"Tyrrill, Ed\" <[email protected]> writes:\n> Index Scan using backup_location_pkey on backup_location\n> (cost=0.00..1475268.53 rows=412394 width=8) (actual\n> time=3318.057..1196723.915 rows=2752 loops=1)\n> Index Cond: (backup_id = 1070)\n> Total runtime: 1196725.617 ms\n\nIf we take that at face value it says the indexscan is requiring 434\nmsec per actual row fetched. Which is just not very credible; the worst\ncase should be about 1 disk seek per row fetched. So there's something\ngoing on that doesn't meet the eye.\n\nWhat I'm wondering about is whether the table is heavily updated and\nseldom vacuumed, leading to lots and lots of dead tuples being fetched\nand then rejected (hence they'd not show in the actual-rows count).\n\nThe other thing that seems pretty odd is that it's not using a bitmap\nscan --- for such a large estimated rowcount I'd have expected a bitmap\nscan not a plain indexscan. What do you get from EXPLAIN ANALYZE if\nyou force a bitmap scan? (Set enable_indexscan off, and enable_seqscan\ntoo if you have to.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 May 2007 15:59:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on big table "
},
{
"msg_contents": "Tyrrill, Ed wrote:\n> mdsdb=# \\d backup_location\n> Table \"public.backup_location\"\n> Column | Type | Modifiers\n> -----------+---------+-----------\n> record_id | bigint | not null\n> backup_id | integer | not null\n> Indexes:\n> \"backup_location_pkey\" PRIMARY KEY, btree (record_id, backup_id)\n> \"backup_location_rid\" btree (record_id)\n> Foreign-key constraints:\n> \"backup_location_bfk\" FOREIGN KEY (backup_id) REFERENCES\n> backups(backup_id) ON DELETE CASCADE\n\n[snip]\n\n> mdsdb=# explain analyze select record_id from backup_location where\n> backup_id = 1070;\n> \n> QUERY PLAN\n> \n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> -------------\n> Index Scan using backup_location_pkey on backup_location\n> (cost=0.00..1475268.53 rows=412394 width=8) (actual\n> time=3318.057..1196723.915 rows=2752 loops=1)\n> Index Cond: (backup_id = 1070)\n> Total runtime: 1196725.617 ms\n> (3 rows)\n\nThe \"backup_location_rid\" index on your table is not necessary. The\nprimary key index on (record_id, backup_id) can be used by Postgres,\neven if the query is only constrained by record_id. See\nhttp://www.postgresql.org/docs/8.2/interactive/indexes-multicolumn.html\nfor details.\n\nThe explain plan indicates that your query is filtered on backup_id, but\nis using the primary key index on (record_id, backup_id). Based on the\ntable definition, you do not have any good index for filtering on backup_id.\n\nThe explain plan also seems way off, as I would expect a sequential scan\nwould be used without a good index for backup_id. Did you disable\nsequential scans before running this query? Have you altered any other\nconfiguration or planner parameters?\n\nAs your \"backup_location_rid\" is not necessary, I would recommend\ndropping that index and creating a new one on just backup_id. This\nshould be a net wash on space, and the new index should make for a\nstraight index scan for the query you presented. Don't forget to\nanalyze after changing the indexes.\n\nHope this helps.\n\nAndrew\n\n",
"msg_date": "Fri, 18 May 2007 15:05:41 -0500",
"msg_from": "Andrew Kroeger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on big table"
},
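In SQL, Andrew's suggestion comes down to the following; backup_location_bid is the index name Ed ends up using later in the thread:

    DROP INDEX backup_location_rid;   -- redundant: the primary key already leads on record_id
    CREATE INDEX backup_location_bid ON backup_location (backup_id);
    ANALYZE backup_location;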
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> Secondly, it might be more efficient for the planner to choose the \n> backup_location_rid index than the combination primary key index.\n\nOh, I'm an idiot; I didn't notice the way the index was set up. Yeah,\nthat index pretty well sucks for a query on backup_id --- it has to scan\nthe entire index, since there's no constraint on the leading column.\nSo that's where the time is going.\n\nThis combination of indexes:\n\n> Indexes:\n> \"backup_location_pkey\" PRIMARY KEY, btree (record_id, backup_id)\n> \"backup_location_rid\" btree (record_id)\n\nis really just silly. You should have the pkey and then an index on\nbackup_id alone. See the discussion of multiple indexes in the fine\nmanual:\nhttp://www.postgresql.org/docs/8.2/static/indexes-multicolumn.html\nhttp://www.postgresql.org/docs/8.2/static/indexes-bitmap-scans.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 May 2007 16:06:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on big table "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n> \n> Scott Marlowe <[email protected]> writes:\n> > Secondly, it might be more efficient for the planner to choose the \n> > backup_location_rid index than the combination primary key index.\n> \n> Oh, I'm an idiot; I didn't notice the way the index was set up.\n> Yeah, that index pretty well sucks for a query on backup_id ---\n> it has to scan the entire index, since there's no constraint on the\n> leading column.\n> So that's where the time is going.\n> \n> This combination of indexes:\n> \n> > Indexes:\n> > \"backup_location_pkey\" PRIMARY KEY, btree (record_id, backup_id)\n> > \"backup_location_rid\" btree (record_id)\n> \n> is really just silly. You should have the pkey and then an index on\n> backup_id alone. See the discussion of multiple indexes in the fine\n> manual:\n> http://www.postgresql.org/docs/8.2/static/indexes-multicolumn.html\n> http://www.postgresql.org/docs/8.2/static/indexes-bitmap-scans.html\n> \n> \t\t\tregards, tom lane\n\nThanks for the help guys! That was my problem. I actually need the\nbackup_location_rid index for a different query so I am going to keep\nit. Here is the result with the new index:\n\nmdsdb=# explain analyze select record_id from backup_location where\nbackup_id = 1070;\n QUERY\nPLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n Index Scan using backup_location_bid on backup_location\n(cost=0.00..9573.07 rows=415897 width=8) (actual time=0.106..3.486\nrows=2752 loops=1)\n Index Cond: (backup_id = 1070)\n Total runtime: 4.951 ms\n(3 rows)\n",
"msg_date": "Fri, 18 May 2007 14:22:52 -0700",
"msg_from": "\"Tyrrill, Ed\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow queries on big table "
},
{
"msg_contents": "On Fri, May 18, 2007 at 02:22:52PM -0700, Tyrrill, Ed wrote:\n> Total runtime: 4.951 ms\n\nGoing from 1197 seconds to 5 milliseconds. That's some sort of record in a\nwhile, I think :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 18 May 2007 23:28:23 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on big table"
},
{
"msg_contents": "\"Tyrrill, Ed\" <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> This combination of indexes:\n>>\n>>> Indexes:\n>>> \"backup_location_pkey\" PRIMARY KEY, btree (record_id, backup_id)\n>>> \"backup_location_rid\" btree (record_id)\n>>\n>> is really just silly. You should have the pkey and then an index on\n>> backup_id alone.\n\n> Thanks for the help guys! That was my problem. I actually need the\n> backup_location_rid index for a different query so I am going to keep\n> it.\n\nWell, you don't really *need* it; the two-column index on (record_id,\nbackup_id) will serve perfectly well for queries on its leading column\nalone. It'll be physically bigger and hence slightly slower to scan\nthan a single-column index; but unless the table is almost completely\nread-only, the update overhead of maintaining all three indexes is\nprobably going to cost more than you can save with it. Try that other\nquery with and without backup_location_rid and see how much you're\nreally saving.\n\n> Index Scan using backup_location_bid on backup_location\n> (cost=0.00..9573.07 rows=415897 width=8) (actual time=0.106..3.486\n> rows=2752 loops=1)\n> Index Cond: (backup_id = 1070)\n> Total runtime: 4.951 ms\n\nThat's more like it ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 May 2007 17:36:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow queries on big table "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n>> Thanks for the help guys! That was my problem. I actually need the \n>> backup_location_rid index for a different query so I am going to keep\n\n>> it.\n>\n> Well, you don't really *need* it; the two-column index on (record_id,\n> backup_id) will serve perfectly well for queries on its leading column\n> alone. It'll be physically >>bigger and hence slightly slower to scan\n> than a single-column index; but unless the table is almost completely\n> read-only, the update overhead of maintaining all three indexes is\n> probably going to cost more than you can save with it. Try that other\n> query with and without backup_location_rid and see how much you're\n> really saving.\n\nWell, the query that got me to add backup_location_rid took 105 minutes\nusing only the primary key index. After I added backup_location_rid\nthe query was down to about 45 minutes. Still not very good, and I am\nstill fiddling around with it. The query is:\n\nmdsdb=# explain analyze select backupobjects.record_id from\nbackupobjects left outer join backup_location using(record_id) where\nbackup_id is null;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------------------\n Merge Left Join (cost=0.00..21408455.06 rows=11790970 width=8) (actual\ntime=2784967.410..2784967.410 rows=0 loops=1)\n Merge Cond: (\"outer\".record_id = \"inner\".record_id)\n Filter: (\"inner\".backup_id IS NULL)\n -> Index Scan using backupobjects_pkey on backupobjects\n(cost=0.00..443484.31 rows=11790970 width=8) (actual\ntime=0.073..47865.957 rows=11805996 loops=1)\n -> Index Scan using backup_location_rid on backup_location\n(cost=0.00..20411495.21 rows=162435366 width=12) (actual\ntime=0.110..2608485.437 rows=162426837 loops=1)\n Total runtime: 2784991.612 ms\n(6 rows)\n\nIt is of course the same backup_location, but backupobjects is:\n\nmdsdb=# \\d backupobjects\n Table \"public.backupobjects\"\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n record_id | bigint | not null\n dir_record_id | integer |\n name | text |\n extension | character varying(64) |\n hash | character(40) |\n mtime | timestamp without time zone |\n size | bigint |\n user_id | integer |\n group_id | integer |\n meta_data_hash | character(40) |\nIndexes:\n \"backupobjects_pkey\" PRIMARY KEY, btree (record_id)\n \"backupobjects_meta_data_hash_key\" UNIQUE, btree (meta_data_hash)\n \"backupobjects_extension\" btree (extension)\n \"backupobjects_hash\" btree (hash)\n \"backupobjects_mtime\" btree (mtime)\n \"backupobjects_size\" btree (size)\n\nrecord_id has in backupobjects has a many to many relationship to\nrecord_id\nin backup_location.\n\nEd\n",
"msg_date": "Fri, 18 May 2007 16:16:20 -0700",
"msg_from": "\"Tyrrill, Ed\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow queries on big table "
}
] |
[
{
"msg_contents": "I've tried searching the documentation to answer this question but could \nnot find anything. When trying to choose the optimal fillfactor for an \nindex, what is important the number of times the row is updated or the \ncolumn indexed upon is updated? In my case each row is updated on \naverage about 5 times but for some of the columns with indexes don't \nchange after insertion ever. thanks for any advice\n",
"msg_date": "Fri, 18 May 2007 14:57:32 -0400",
"msg_from": "Gene Hart <[email protected]>",
"msg_from_op": true,
"msg_subject": "choosing fillfactor"
},
{
"msg_contents": "Gene Hart wrote:\n> I've tried searching the documentation to answer this question but could \n> not find anything. When trying to choose the optimal fillfactor for an \n> index, what is important the number of times the row is updated or the \n> column indexed upon is updated? In my case each row is updated on \n> average about 5 times but for some of the columns with indexes don't \n> change after insertion ever. thanks for any advice\n\nIt's the number of times the row is updated, regardless of which columns \nare changed.\n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 18 May 2007 21:19:00 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: choosing fillfactor"
}
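For reference, a sketch of how a fillfactor is actually applied; the names are made up, and index fillfactor requires 8.2 or later:

    -- leave 30% free space in each leaf page for future row versions
    CREATE INDEX orders_status_idx ON orders (status) WITH (fillfactor = 70);

    -- for an existing index, the setting only takes effect on the next rebuild
    ALTER INDEX orders_status_idx SET (fillfactor = 70);
    REINDEX INDEX orders_status_idx;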
] |
[
{
"msg_contents": "This may be the wrong list to post to but I thought I'd post here\nfirst since it is a performance related problem.\n\nEssentially, I'm looking for the most efficient way to break a\ndatabase into two 'shards' based on a top level table's\nprimary key. For example, split a sales database into two using a\nterritory.\n\nThe database rows are constrained to have unique ownership, meaning\neach row of data can be traced to one and only one shard, e.g., sales\nterritory. Multiple inheritance would make this a harder thing to do.\n\nIn essence, I'd like to do a cascading delete using one of these\nterritory's ids with the caveat that the data isn't just deleted but\ndeleted and COPYed out to disk. This COPYed data could then be loaded\ninto a fresh schema to bring up the second shard. Seemingly the data\nwould be in the right insert order, for referential integrity\npurposes, as a result of this operation since it would be doing a\nbreadth first search for the data.\n\nI can envision a couple different ways to do this:\na) Gather the relational tree up to but not including the leaves and\nuse it to parse out the shard from a db dump. Then do a cascading\ndelete to remove the data from the database.\n\nb) Recursively COPY (query) to a file (breadth first COPY) while\ncrawling down the relational tree.\n\nThe complications I see are having to make sure the referential tree\nis a DAG (directed acyclic graph) or unroll it to become one.\n\nI know Live Journal, Skype, etc. have to do this sort of thing when\nthey need to scale and didn't want to reinvent the wheel or, more\nimportantly, step on the same land mines that others have stepped on.\n\nThanks for any and all feedback.\n\nChristian\n\n",
"msg_date": "18 May 2007 15:41:24 -0700",
"msg_from": "C Storm <[email protected]>",
"msg_from_op": true,
"msg_subject": "Efficient recursion"
},
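A sketch of option (b) from the post above, using COPY with a query (available from 8.2) and made-up table names; parents are exported before children so the files reload into the new shard in referential order:

    COPY (SELECT * FROM territories WHERE territory_id = 42)
        TO '/var/tmp/shard42_territories.copy';

    COPY (SELECT c.* FROM customers c WHERE c.territory_id = 42)
        TO '/var/tmp/shard42_customers.copy';

    COPY (SELECT o.*
          FROM orders o
          JOIN customers c USING (customer_id)
          WHERE c.territory_id = 42)
        TO '/var/tmp/shard42_orders.copy';

    -- note: COPY ... TO a server-side file requires superuser privileges.
    -- After the files are loaded into the new shard (COPY ... FROM, same order),
    -- a single DELETE on territories for territory_id = 42 cascades down the tree.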
{
"msg_contents": "C Storm <[email protected]> writes:\n> Essentially, I'm looking for the most efficient way to break a\n> database into two 'shards' based on a top level table's\n> primary key. For example, split a sales database into two using a\n> territory.\n\nI think what you are looking for here is partitioning, not recursion.\nPG's support for partitioned tables is a bit crude, but usable:\nhttp://www.postgresql.org/docs/8.2/static/ddl-partitioning.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 May 2007 14:12:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficient recursion "
}
] |
[
{
"msg_contents": "Hi,\nI have about 6 tables that inherit from one table. They all have the\nexact same indexes but when i try to query all by a row (which is\nindexed, btree) the QP decides to do sequential scan for some of them\n(the bigger tables) rather than use the index.\nAny ideas why that may happen?\nI am using postgres 8.2\nThanks,\nS.\n",
"msg_date": "Sat, 19 May 2007 15:08:07 -0700",
"msg_from": "\"s d\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "QP Problem"
},
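A sketch of the kind of detail that helps diagnose this, with placeholder names; the per-child subplans in EXPLAIN ANALYZE show which children are misestimated:

    EXPLAIN ANALYZE
    SELECT * FROM parent_table WHERE indexed_col = 42;
    -- the plan contains one subplan per child; compare estimated and actual row
    -- counts for the children that chose Seq Scan

    ANALYZE big_child_1;   -- stale statistics on the large children are a common cause
    ANALYZE big_child_2;
    -- also remember that indexes are not inherited; each child needs its own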
{
"msg_contents": "\"s d\" <[email protected]> writes:\n> I have about 6 tables that inherit from one table. They all have the\n> exact same indexes but when i try to query all by a row (which is\n> indexed, btree) the QP decides to do sequential scan for some of them\n> (the bigger tables) rather than use the index.\n\nPlease show the details: the table definitions, exact query, and plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 19 May 2007 19:20:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: QP Problem "
}
] |
[
{
"msg_contents": "\n\tI felt the world needed a new benchmark ;)\n\tSo : Forum style benchmark with simulation of many users posting and \nviewing forums and topics on a PHP website.\n\n\thttp://home.peufeu.com/ftsbench/forum1.png\n\n\tOne of those curves is \"a very popular open-source database which claims \nto offer unparallelled speed\".\n\tThe other one is of course Postgres 8.2.3 which by popular belief is \n\"full-featured but slow\"\n\n\tWhat is your guess ?\n",
"msg_date": "Sun, 20 May 2007 16:58:45 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres Benchmark Results"
},
{
"msg_contents": "I assume red is PostgreSQL and green is MySQL. That reflects my own \nbenchmarks with those two.\n\nBut I don't fully understand what the graph displays. Does it reflect \nthe ability of the underlying database to support a certain amount of \nusers per second given a certain database size? Or is the growth of the \ndatabase part of the benchmark?\n\nBtw, did you consider that older topics are normally read much less and \nalmost never get new postings? I think the size of the \"active data set\" \nis more dependent on the amount of active members than on the actual \namount of data available.\nThat can reduce the impact of the size of the database greatly, although \nwe saw very nice gains in performance on our forum (over 22GB of \nmessages) when replacing the databaseserver with one with twice the \nmemory, cpu's and I/O.\n\nBest regards,\n\nArjen\n\nOn 20-5-2007 16:58 PFC wrote:\n> \n> I felt the world needed a new benchmark ;)\n> So : Forum style benchmark with simulation of many users posting and \n> viewing forums and topics on a PHP website.\n> \n> http://home.peufeu.com/ftsbench/forum1.png\n> \n> One of those curves is \"a very popular open-source database which \n> claims to offer unparallelled speed\".\n> The other one is of course Postgres 8.2.3 which by popular belief is \n> \"full-featured but slow\"\n> \n> What is your guess ?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n",
"msg_date": "Sun, 20 May 2007 18:41:36 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "\n> I assume red is PostgreSQL and green is MySQL. That reflects my own \n> benchmarks with those two.\n\n\tWell, since you answered first, and right, you win XD\n\n\tThe little curve that dives into the ground is MySQL with InnoDB.\n\tThe Energizer bunny that keeps going is Postgres.\n\n> But I don't fully understand what the graph displays. Does it reflect \n> the ability of the underlying database to support a certain amount of \n> users per second given a certain database size? Or is the growth of the \n> database part of the benchmark?\n\n\tBasically I have a test client which simulates a certain number of \nconcurrent users browsing a forum, and posting (posting rate is \nartificially high in order to fill the tables quicker than the months it \nwould take in real life).\n\n\tSince the fake users pick which topics to view and post in by browsing \nthe pages, like people would do, it tends to pick the topics in the first \nfew pages of the forum, those with the most recent posts. So, like in real \nlife, some topics fall through the first pages, and go down to rot at the \nbottom, while others grow much more.\n\n\tSo, as the database grows (X axis) ; the total number of webpages served \nper second (viewings + postings) is on the Y axis, representing the user's \nexperience (fast / slow / dead server)\n\n\tThe number of concurrent HTTP or Postgres connections is not plotted, it \ndoesn't really matter anyway for benchmarking purposes, you need to have \nenough to keep the server busy, but not too much or you're just wasting \nRAM. For a LAN that's about 30 HTTP connections and about 8 PHP processes \nwith each a database connection.\n\tSince I use lighttpd, I don't really care about the number of actual slow \nclients (ie. real concurrent HTTP connections). Everything is funneled \nthrough those 8 PHP processes, so postgres never sees huge concurrency.\n\tAbout 2/3 of the CPU is used by PHP anyway, only 1/3 by Postgres ;)\n\n> Btw, did you consider that older topics are normally read much less and \n> almost never get new postings? I think the size of the \"active data set\" \n> is more dependent on the amount of active members than on the actual \n> amount of data available.\n\n\tYes, see above.\n\tThe posts table is clustered on (topic_id, post_id) and this is key to \nperformance.\n\n> That can reduce the impact of the size of the database greatly, although \n> we saw very nice gains in performance on our forum (over 22GB of \n> messages) when replacing the databaseserver with one with twice the \n> memory, cpu's and I/O.\n\n\tWell, you can see on the curve when it hits IO-bound behaviour.\n\n\tI'm writing a full report, but I'm having a lot of problems with MySQL, \nI'd like to give it a fair chance, but it shows real obstination in NOT \nworking.\n\t\n",
"msg_date": "Sun, 20 May 2007 19:09:38 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Benchmark Results"
},
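The clustering PFC mentions is worth spelling out. A sketch with hypothetical table and index names, since the real schema is not shown in the thread:

    -- Keep each topic's posts physically adjacent so viewing a topic is a
    -- short range scan instead of one random page fetch per post.
    CREATE INDEX posts_topic_post_idx ON posts (topic_id, post_id);
    CLUSTER posts_topic_post_idx ON posts;   -- 8.2 syntax: CLUSTER index ON table
    ANALYZE posts;
    -- CLUSTER is a one-time reorder; new rows are appended wherever there is
    -- room, so it has to be repeated periodically to stay effective.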
{
"msg_contents": "On 20-5-2007 19:09 PFC wrote:\n> Since I use lighttpd, I don't really care about the number of actual \n> slow clients (ie. real concurrent HTTP connections). Everything is \n> funneled through those 8 PHP processes, so postgres never sees huge \n> concurrency.\n\nWell, that would only be in favour of postgres anyway, it scales in our \nbenchmarks better to multiple cpu's, multiple clients and appaerantly in \nyours to larger datasets. MySQL seems to be faster up untill a certain \namount of concurrent clients (close to the amount of cpu's available) \nand beyond that can collapse dramatically.\n\n> I'm writing a full report, but I'm having a lot of problems with \n> MySQL, I'd like to give it a fair chance, but it shows real obstination \n> in NOT working.\n\nYeah, it displayed very odd behaviour when doing benchmarks here too. If \nyou haven't done already, you can try the newest 5.0-verion (5.0.41?) \nwhich eliminates several scaling issues in InnoDB, but afaik not all of \nthem. Besides that, it just can be pretty painful to get a certain query \nfast, although we've not very often seen it failing completely in the \nlast few years.\n\nBest regards,\n\nArjen van der Meijden\n",
"msg_date": "Sun, 20 May 2007 19:26:38 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "PFC <[email protected]> writes:\n> \tThe little curve that dives into the ground is MySQL with InnoDB.\n> \tThe Energizer bunny that keeps going is Postgres.\n\nJust for comparison's sake it would be interesting to see a curve for\nmysql/myisam. Mysql's claim to speed is mostly based on measurements\ntaken with myisam tables, but I think that doesn't hold up very well\nunder concurrent load.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 May 2007 13:26:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results "
},
{
"msg_contents": "PFC írta:\n>\n> I felt the world needed a new benchmark ;)\n> So : Forum style benchmark with simulation of many users posting \n> and viewing forums and topics on a PHP website.\n>\n> http://home.peufeu.com/ftsbench/forum1.png\n>\n> One of those curves is \"a very popular open-source database which \n> claims to offer unparallelled speed\".\n> The other one is of course Postgres 8.2.3 which by popular belief \n> is \"full-featured but slow\"\n>\n> What is your guess ?\n\nRed is PostgreSQL.\n\nThe advertised \"unparallelled speed\" must surely mean\nbenchmarking only single-client access on the noname DB. ;-)\n\nI also went into benchmarking mode last night for my own\namusement when I read on the linux-kernel ML that\nNCQ support for nForce5 chips was released.\nI tried current PostgreSQL 8.3devel CVS.\npgbench over local TCP connection with\n25 clients and 3000 transacts/client gave me\naround 445 tps before applying NCQ support.\n680 tps after.\n\nIt went over 840 tps after adding HOT v7 patch,\nstill with 25 clients. It topped at 1062 tps with 3-4 clients.\nI used a single Seagate 320GB SATA2 drive\nfor the test, which only has less than 40GB free.\nSo it's already at the end of the disk giving smaller\ntransfer rates then at the beginning. Filesystem is ext3.\nDual core Athlon64 X2 4200 in 64-bit mode.\nI have never seen such a performance before\non a desktop machine.\n\n-- \n----------------------------------\nZoltán Böszörményi\nCybertec Geschwinde & Schönig GmbH\nhttp://www.postgresql.at/\n\n",
"msg_date": "Sun, 20 May 2007 20:00:25 +0200",
"msg_from": "Zoltan Boszormenyi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "On Sun, 20 May 2007 19:26:38 +0200, Tom Lane <[email protected]> wrote:\n\n> PFC <[email protected]> writes:\n>> \tThe little curve that dives into the ground is MySQL with InnoDB.\n>> \tThe Energizer bunny that keeps going is Postgres.\n>\n> Just for comparison's sake it would be interesting to see a curve for\n> mysql/myisam. Mysql's claim to speed is mostly based on measurements\n> taken with myisam tables, but I think that doesn't hold up very well\n> under concurrent load.\n>\n> \t\t\tregards, tom lane\n\n\n\tI'm doing that now. Here is what I wrote in the report :\n\n\tUsing prepared statements (important), Postgres beats MyISAM on \"simple \nselects\" as they say, as well as complex selects, even with 1 thread.\n\t\n\tMyISAM caused massive data corruption : posts and topics disappear, \nstorage engine errors pop off, random thrashed rows appear in the forums \ntable, therefore screwing up everything, etc. In short : it doesn't work. \nBut, since noone in their right mind would use MyISAM for critical data, I \ninclude this result anyway, as a curiosity.\n\n\tI had to write a repair SQL script to fix the corruption in order to see \nhow MySQL will fare when it gets bigger than RAM...\n\n",
"msg_date": "Sun, 20 May 2007 20:10:17 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:\n> I also went into benchmarking mode last night for my own\n> amusement when I read on the linux-kernel ML that\n> NCQ support for nForce5 chips was released.\n> I tried current PostgreSQL 8.3devel CVS.\n> pgbench over local TCP connection with\n> 25 clients and 3000 transacts/client gave me\n> around 445 tps before applying NCQ support.\n> 680 tps after.\n> \n> It went over 840 tps after adding HOT v7 patch,\n> still with 25 clients. It topped at 1062 tps with 3-4 clients.\n> I used a single Seagate 320GB SATA2 drive\n> for the test, which only has less than 40GB free.\n> So it's already at the end of the disk giving smaller\n> transfer rates then at the beginning. Filesystem is ext3.\n> Dual core Athlon64 X2 4200 in 64-bit mode.\n> I have never seen such a performance before\n> on a desktop machine.\n\nI'd be willing to bet money that the drive is lying about commits/fsync.\nEach transaction committed essentially requires one revolution of the\ndrive with pg_xlog on it, so a 15kRPM drive limits you to 250TPS.\n\nBTW, PostgreSQL sees a big speed boost if you mount ext3 with the option\ndata=writeback. Note that doing that probably has a negative impact on\ndata recovery after a crash for non-database files.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 21 May 2007 16:01:25 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
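The one-revolution-per-commit argument works out as follows (rounded figures, the same ones quoted later in the thread):

    7200 RPM  = 7200 / 60  = 120 revolutions/s  ->  ~120 fsync'd commits/s
    15000 RPM = 15000 / 60 = 250 revolutions/s  ->  ~250 fsync'd commits/s

So a single-drive pgbench result well above those rates implies the drive's write cache is absorbing the fsyncs rather than the platters.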
{
"msg_contents": "On Sun, May 20, 2007 at 04:58:45PM +0200, PFC wrote:\n> \n> \tI felt the world needed a new benchmark ;)\n> \tSo : Forum style benchmark with simulation of many users posting and \n> viewing forums and topics on a PHP website.\n> \n> \thttp://home.peufeu.com/ftsbench/forum1.png\n\nAny chance of publishing your benchmark code so others can do testing?\nIt sounds like a useful, well-thought-out benchmark (even if it is\nrather specialized).\n\nAlso, I think it's important for you to track how long it takes to\nrespond to requests, both average and maximum. In a web application no\none's going to care if you're doing 1000TPS if it means that every time\nyou click on something it takes 15 seconds to get the next page back.\nWith network round-trip times and what-not considered I'd say you don't\nwant it to take any more than 200-500ms between when a request hits a\nwebserver and when the last bit of data has gone back to the client.\n\nI'm guessing that there's about 600MB of memory available for disk\ncaching? (Well, 600MB minus whatever shared_buffers is set to).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 21 May 2007 16:05:22 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "On Mon, 21 May 2007 23:05:22 +0200, Jim C. Nasby <[email protected]> \nwrote:\n\n> On Sun, May 20, 2007 at 04:58:45PM +0200, PFC wrote:\n>>\n>> \tI felt the world needed a new benchmark ;)\n>> \tSo : Forum style benchmark with simulation of many users posting and\n>> viewing forums and topics on a PHP website.\n>>\n>> \thttp://home.peufeu.com/ftsbench/forum1.png\n>\n> Any chance of publishing your benchmark code so others can do testing?\n> It sounds like a useful, well-thought-out benchmark (even if it is\n> rather specialized).\n\n\tYes, that was the intent from the start.\n\tIt is specialized, because forums are one of the famous server killers. \nThis is mostly due to bad database design, bad PHP skills, and the \nhorrendous MySQL FULLTEXT.\n\tI'll have to clean up the code and document it for public consumption, \nthough.\n\tHowever, the Python client is too slow. It saturates at about 1000 hits/s \non a Athlon 64 3000+, so you can forget about benchmarking anything meaner \nthan a Core 2 duo.\n\n> Also, I think it's important for you to track how long it takes to\n> respond to requests, both average and maximum. In a web application no\n> one's going to care if you're doing 1000TPS if it means that every time\n> you click on something it takes 15 seconds to get the next page back.\n> With network round-trip times and what-not considered I'd say you don't\n> want it to take any more than 200-500ms between when a request hits a\n> webserver and when the last bit of data has gone back to the client.\n\n\tYeah, I will do that too.\n\n> I'm guessing that there's about 600MB of memory available for disk\n> caching? (Well, 600MB minus whatever shared_buffers is set to).\n\n\tIt's about that. The machine has 1 GB of RAM.\n\n\n",
"msg_date": "Mon, 21 May 2007 23:39:39 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Benchmark Results"
},
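One cheap way to produce the latency numbers Jim asks for, assuming the benchmark client logs one row per request into a hypothetical request_log(ms integer) table; this is only a sketch, not part of the published benchmark:

    -- average and worst-case response time
    SELECT avg(ms) AS avg_ms, max(ms) AS worst_ms FROM request_log;

    -- fraction of requests slower than the 500 ms comfort threshold
    SELECT count(*)::float8 / (SELECT count(*) FROM request_log) AS slow_fraction
    FROM request_log
    WHERE ms > 500;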
{
"msg_contents": "Am 21.05.2007 um 15:01 schrieb Jim C. Nasby:\n\n> I'd be willing to bet money that the drive is lying about commits/ \n> fsync.\n> Each transaction committed essentially requires one revolution of the\n> drive with pg_xlog on it, so a 15kRPM drive limits you to 250TPS.\n\nYes, that right, but if a lot of the transactions are selects, there \nis no entry in the x_log for them and most of the stuff can come from \nthe cache - read from memory which is blazing fast compared to any \ndisk ... And this was a pg_bench test - I don't know what the \nbenchmark really does but if I remember correctly it is mostly reading.\n\ncug\n\n\n",
"msg_date": "Mon, 21 May 2007 17:44:57 -0600",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "I assume red is the postgresql. AS you add connections, Mysql always dies.\n\nOn 5/20/07, PFC <[email protected]> wrote:\n>\n>\n> I felt the world needed a new benchmark ;)\n> So : Forum style benchmark with simulation of many users posting\n> and\n> viewing forums and topics on a PHP website.\n>\n> http://home.peufeu.com/ftsbench/forum1.png\n>\n> One of those curves is \"a very popular open-source database which\n> claims\n> to offer unparallelled speed\".\n> The other one is of course Postgres 8.2.3 which by popular belief\n> is\n> \"full-featured but slow\"\n>\n> What is your guess ?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\nI assume red is the postgresql. AS you add connections, Mysql always dies.On 5/20/07, PFC <[email protected]> wrote:\n I felt the world needed a new benchmark ;) So : Forum style benchmark with simulation of many users posting and\nviewing forums and topics on a PHP website. http://home.peufeu.com/ftsbench/forum1.png One of those curves is \"a very popular open-source database which claims\nto offer unparallelled speed\". The other one is of course Postgres 8.2.3 which by popular belief is\"full-featured but slow\" What is your guess ?---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster",
"msg_date": "Mon, 21 May 2007 20:05:20 -0400",
"msg_from": "Rich <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:\n> \n>> I also went into benchmarking mode last night for my own\n>> amusement when I read on the linux-kernel ML that\n>> NCQ support for nForce5 chips was released.\n>> I tried current PostgreSQL 8.3devel CVS.\n>> pgbench over local TCP connection with\n>> 25 clients and 3000 transacts/client gave me\n>> around 445 tps before applying NCQ support.\n>> 680 tps after.\n>>\n>> It went over 840 tps after adding HOT v7 patch,\n>> still with 25 clients. It topped at 1062 tps with 3-4 clients.\n>> I used a single Seagate 320GB SATA2 drive\n>> for the test, which only has less than 40GB free.\n>> So it's already at the end of the disk giving smaller\n>> transfer rates then at the beginning. Filesystem is ext3.\n>> Dual core Athlon64 X2 4200 in 64-bit mode.\n>> I have never seen such a performance before\n>> on a desktop machine.\n>> \n>\n> I'd be willing to bet money that the drive is lying about commits/fsync.\n> Each transaction committed essentially requires one revolution of the\n> drive with pg_xlog on it, so a 15kRPM drive limits you to 250TPS.\n>\n> BTW, PostgreSQL sees a big speed boost if you mount ext3 with the option\n> data=writeback. Note that doing that probably has a negative impact on\n> data recovery after a crash for non-database files.\n> \n\nI thought you were limited to 250 or so COMMITS to disk per second, and \nsince >1 client can be committed at once, you could do greater than 250 \ntps, as long as you had >1 client providing input. Or was I wrong?\n",
"msg_date": "Mon, 21 May 2007 19:28:21 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "Scott Marlowe wrote:\n> Jim C. Nasby wrote:\n> >On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:\n> > \n> >>I also went into benchmarking mode last night for my own\n> >>amusement when I read on the linux-kernel ML that\n> >>NCQ support for nForce5 chips was released.\n> >>I tried current PostgreSQL 8.3devel CVS.\n> >>pgbench over local TCP connection with\n> >>25 clients and 3000 transacts/client gave me\n> >>around 445 tps before applying NCQ support.\n> >>680 tps after.\n> >>\n> >>It went over 840 tps after adding HOT v7 patch,\n> >>still with 25 clients. It topped at 1062 tps with 3-4 clients.\n> >>I used a single Seagate 320GB SATA2 drive\n> >>for the test, which only has less than 40GB free.\n> >>So it's already at the end of the disk giving smaller\n> >>transfer rates then at the beginning. Filesystem is ext3.\n> >>Dual core Athlon64 X2 4200 in 64-bit mode.\n> >>I have never seen such a performance before\n> >>on a desktop machine.\n> >> \n> >\n> >I'd be willing to bet money that the drive is lying about commits/fsync.\n> >Each transaction committed essentially requires one revolution of the\n> >drive with pg_xlog on it, so a 15kRPM drive limits you to 250TPS.\n> >\n> >BTW, PostgreSQL sees a big speed boost if you mount ext3 with the option\n> >data=writeback. Note that doing that probably has a negative impact on\n> >data recovery after a crash for non-database files.\n> > \n> \n> I thought you were limited to 250 or so COMMITS to disk per second, and \n> since >1 client can be committed at once, you could do greater than 250 \n> tps, as long as you had >1 client providing input. Or was I wrong?\n\nMy impression is that you are correct in theory -- this is the \"commit\ndelay\" feature. But it seems that the feature does not work as well as\none would like; and furthermore, it is disabled by default.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Mon, 21 May 2007 20:47:37 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
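For reference, the knobs being referred to here, as a hedged postgresql.conf sketch; the values are only illustrative, since the thread does not settle on any:

    commit_delay = 10        # microseconds to sleep before flushing WAL,
                             # hoping other backends commit in the meantime
                             # (default 0, i.e. disabled)
    commit_siblings = 5      # only wait if at least this many other
                             # transactions are active (this is the default)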
{
"msg_contents": "On Mon, 21 May 2007, Guido Neitzer wrote:\n\n> Yes, that right, but if a lot of the transactions are selects, there is no \n> entry in the x_log for them and most of the stuff can come from the cache - \n> read from memory which is blazing fast compared to any disk ... And this was \n> a pg_bench test - I don't know what the benchmark really does but if I \n> remember correctly it is mostly reading.\n\nThe standard pgbench transaction includes a select, an insert, and three \nupdates. All five finished equals one transaction; the fact that the \nSELECT statment in there could be executed much faster where it to happen \non its own doesn't matter.\n\nBecause it does the most work on the biggest table, the entire combination \nis usually driven mostly by how long the UPDATE to the accounts table \ntakes. The TPS numbers can certainly be no larger than the rate at which \nyou can execute that.\n\nAs has been pointed out, every time you commit a transacation the disk has \nto actually write that out before it's considered complete. Unless you \nhave a good caching disk controller (which your nForce5 is not) you're \nlimited to 120 TPS with a 7200RPM drive and 250 with a 15000 RPM one. \nWhile it's possible to improve slightly on this using the commit_delay \nfeature, I haven't been able to replicate even a 100% improvement that way \nwhen running pgbench, and to get even close to that level of improvement \nwould require a large number of clients.\n\nUnless you went out of your way to turn it off, your drive is caching \nwrites; every Seagate SATA drive I've ever seen does by default. \"1062 \ntps with 3-4 clients\" just isn't possible with your hardware otherwise. \nIf you turn that feature off with:\n\nhdparm -W0 /dev/hda (might be /dev/sda with the current driver)\n\nthat will disable the disk caching and you'll be reporting accurate \nnumbers--which will be far lower than you're seeing now.\n\nWhile your results are an interesting commentary on how fast the system \ncan run when it has a write cache available, and the increase with recent \ncode is interesting, your actual figures here are a fantasy. The database \nisn't working properly and a real system using this hardware would be \nexpected to become corrupted if ran for long enough. I have a paper at \nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm you \nmight want to read that goes into more detail than you probably want to \nknow on this subject if you're like to read more about it--and you really, \nreally should if you intend to put important data into a PostgreSQL \ndatabase.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 22 May 2007 01:51:40 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
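A crude way to check what Greg describes, assuming a scratch database and a throwaway table; this is only a sketch of the idea, not a substitute for the paper he links:

    -- Each INSERT below runs as its own transaction (autocommit), so each
    -- one has to fsync pg_xlog.  Time a few hundred of them with psql's
    -- \timing; sustained rates far above ~120/s (7200 RPM) or ~250/s
    -- (15k RPM) on a single drive mean the write cache is absorbing fsyncs.
    CREATE TABLE fsync_probe (n integer);
    INSERT INTO fsync_probe VALUES (1);
    INSERT INTO fsync_probe VALUES (2);
    -- ... repeat (generate the statements mechanically), then:
    DROP TABLE fsync_probe;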
{
"msg_contents": "Am 21.05.2007 um 23:51 schrieb Greg Smith:\n\n> The standard pgbench transaction includes a select, an insert, and \n> three updates.\n\nI see. Didn't know that, but it makes sense.\n\n> Unless you went out of your way to turn it off, your drive is \n> caching writes; every Seagate SATA drive I've ever seen does by \n> default. \"1062 tps with 3-4 clients\" just isn't possible with your \n> hardware otherwise.\n\nBtw: it wasn't my hardware in this test!\n\ncug\n",
"msg_date": "Tue, 22 May 2007 00:01:11 -0600",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "Jim C. Nasby �rta:\n> On Sun, May 20, 2007 at 08:00:25PM +0200, Zoltan Boszormenyi wrote:\n> \n>> I also went into benchmarking mode last night for my own\n>> amusement when I read on the linux-kernel ML that\n>> NCQ support for nForce5 chips was released.\n>> I tried current PostgreSQL 8.3devel CVS.\n>> pgbench over local TCP connection with\n>> 25 clients and 3000 transacts/client gave me\n>> around 445 tps before applying NCQ support.\n>> 680 tps after.\n>>\n>> It went over 840 tps after adding HOT v7 patch,\n>> still with 25 clients. It topped at 1062 tps with 3-4 clients.\n>> I used a single Seagate 320GB SATA2 drive\n>> for the test, which only has less than 40GB free.\n>> So it's already at the end of the disk giving smaller\n>> transfer rates then at the beginning. Filesystem is ext3.\n>> Dual core Athlon64 X2 4200 in 64-bit mode.\n>> I have never seen such a performance before\n>> on a desktop machine.\n>> \n>\n> I'd be willing to bet money that the drive is lying about commits/fsync.\n> \n\nIt could well be the case.\n\n> Each transaction committed essentially requires one revolution of the\n> drive with pg_xlog on it, so a 15kRPM drive limits you to 250TPS.\n> \n\nBy \"revolution\", you mean one 360 degrees turnaround of the platter, yes?\nOn the other hand, if you have multiple clients, isn't the 250 COMMITs/sec\nlimit is true only per client? Of course assuming that the disk subsystem\nhas more TCQ/NCQ threads than the actual number of DB clients.\n\n> BTW, PostgreSQL sees a big speed boost if you mount ext3 with the option\n> data=writeback. Note that doing that probably has a negative impact on\n> data recovery after a crash for non-database files.\n> \n\nI haven't touched the FS options.\nI can even use ext2 if I want non-recoverability. :-)\n\n-- \n----------------------------------\nZolt�n B�sz�rm�nyi\nCybertec Geschwinde & Sch�nig GmbH\nhttp://www.postgresql.at/\n\n",
"msg_date": "Tue, 22 May 2007 08:14:15 +0200",
"msg_from": "Zoltan Boszormenyi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "Greg Smith �rta:\n> On Mon, 21 May 2007, Guido Neitzer wrote:\n>\n>> Yes, that right, but if a lot of the transactions are selects, there \n>> is no entry in the x_log for them and most of the stuff can come from \n>> the cache - read from memory which is blazing fast compared to any \n>> disk ... And this was a pg_bench test - I don't know what the \n>> benchmark really does but if I remember correctly it is mostly reading.\n>\n> The standard pgbench transaction includes a select, an insert, and \n> three updates. All five finished equals one transaction; the fact \n> that the SELECT statment in there could be executed much faster where \n> it to happen on its own doesn't matter.\n>\n> Because it does the most work on the biggest table, the entire \n> combination is usually driven mostly by how long the UPDATE to the \n> accounts table takes. The TPS numbers can certainly be no larger than \n> the rate at which you can execute that.\n>\n> As has been pointed out, every time you commit a transacation the disk \n> has to actually write that out before it's considered complete. \n> Unless you have a good caching disk controller (which your nForce5 is \n> not) you're limited to 120 TPS with a 7200RPM drive and 250 with a \n> 15000 RPM one. While it's possible to improve slightly on this using \n> the commit_delay feature, I haven't been able to replicate even a 100% \n> improvement that way when running pgbench, and to get even close to \n> that level of improvement would require a large number of clients.\n>\n> Unless you went out of your way to turn it off, your drive is caching \n> writes; every Seagate SATA drive I've ever seen does by default. \n> \"1062 tps with 3-4 clients\" just isn't possible with your hardware \n> otherwise. If you turn that feature off with:\n>\n> hdparm -W0 /dev/hda (might be /dev/sda with the current driver)\n>\n> that will disable the disk caching and you'll be reporting accurate \n> numbers--which will be far lower than you're seeing now.\n\nAnd AFAIR according to a comment on LKML some time ago,\nit greatly decreases your disk's MTBF as well.\nBut thanks for the great insights, anyway.\nI already knew that nForce5 is not a caching controller. :-)\nI meant it's a good desktop performer.\nAnd having a good UPS and a bit oversized Enermax PSU\nhelps avoiding crashes with the sometimes erratic power line.\n\n> While your results are an interesting commentary on how fast the \n> system can run when it has a write cache available, and the increase \n> with recent code is interesting, your actual figures here are a \n> fantasy. The database isn't working properly and a real system using \n> this hardware would be expected to become corrupted if ran for long \n> enough. I have a paper at \n> http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm you \n> might want to read that goes into more detail than you probably want \n> to know on this subject if you're like to read more about it--and you \n> really, really should if you intend to put important data into a \n> PostgreSQL database.\n\nThanks, I will read it.\n\n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n\n-- \n----------------------------------\nZolt�n B�sz�rm�nyi\nCybertec Geschwinde & Sch�nig GmbH\nhttp://www.postgresql.at/\n\n",
"msg_date": "Tue, 22 May 2007 08:26:54 +0200",
"msg_from": "Zoltan Boszormenyi <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "\"Alvaro Herrera\" <[email protected]> writes:\n\n> Scott Marlowe wrote:\n>> \n>> I thought you were limited to 250 or so COMMITS to disk per second, and \n>> since >1 client can be committed at once, you could do greater than 250 \n>> tps, as long as you had >1 client providing input. Or was I wrong?\n>\n> My impression is that you are correct in theory -- this is the \"commit\n> delay\" feature. But it seems that the feature does not work as well as\n> one would like; and furthermore, it is disabled by default.\n\nEven without commit delay a client will commit any pending WAL records when it\nsyncs the WAL. The clients waiting to commit their records will find it\nalready synced when they get woken up.\n\nHowever as mentioned a while back in practice it doesn't work quite right and\nyou should expect to get 1/2 the expected performance. So even with 10 clients\nyou should expect to see 5*120 tps on a 7200 rpm drive and 5*250 tps on a\n15kprm drive.\n\nHeikki posted a patch that experimented with fixing this. Hopefully it'll be\nfixed for 8.4.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Tue, 22 May 2007 09:03:56 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "\nWhat's interesting here is that on a couple metrics the green curve is\nactually *better* until it takes that nosedive at 500 MB. Obviously it's not\nbetter on average hits/s, the most obvious metric. But on deviation and\nworst-case hits/s it's actually doing better.\n\nNote that while the average hits/s between 100 and 500 is over 600 tps for\nPostgres there is a consistent smattering of plot points spread all the way\ndown to 200 tps, well below the 400-500 tps that MySQL is getting.\n\nSome of those are undoubtedly caused by things like checkpoints and vacuum\nruns. Hopefully the improvements that are already in the pipeline will reduce\nthem.\n\nI mention this only to try to move some of the focus from the average\nperformance to trying to remove the pitfalls that affact 1-10% of transactions\nand screw the worst-case performance. In practical terms it's the worst-case\nthat governs perceptions, not average case.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Tue, 22 May 2007 09:16:56 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "> Note that while the average hits/s between 100 and 500 is over 600 tps \n> for\n> Postgres there is a consistent smattering of plot points spread all the \n> way\n> down to 200 tps, well below the 400-500 tps that MySQL is getting.\n\n\tYes, these are due to checkpointing, mostly.\n\tAlso, note that a real forum would not insert 100 posts/s, so it would \nnot feel this effect. But in order to finish the benchmark in a correct \namount of time, we have to push on the inserts.\n\n> Some of those are undoubtedly caused by things like checkpoints and \n> vacuum\n> runs. Hopefully the improvements that are already in the pipeline will \n> reduce\n> them.\n\n\tI am re-running it with other tuning, notably cost-based vacuum delay and \nless frequent checkpoints, and it is a *lot* smoother.\n\tThese take a full night to run, so I'll post more results when I have \nusefull stuff to show.\n\tThis has proven to be a very interesting trip to benchmarkland...\n",
"msg_date": "Tue, 22 May 2007 12:10:03 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Benchmark Results"
},
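The sort of settings PFC alludes to, as a hedged postgresql.conf sketch; the values are illustrative, not the ones actually used in the benchmark:

    vacuum_cost_delay = 10           # ms; throttle (auto)vacuum I/O
    checkpoint_segments = 64         # fewer, bigger checkpoints
    checkpoint_timeout = 900         # seconds between forced checkpoints
    checkpoint_warning = 300         # log if checkpoints come faster than this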
{
"msg_contents": "On Tue, 22 May 2007, Gregory Stark wrote:\n\n> However as mentioned a while back in practice it doesn't work quite right and\n> you should expect to get 1/2 the expected performance. So even with 10 clients\n> you should expect to see 5*120 tps on a 7200 rpm drive and 5*250 tps on a\n> 15kprm drive.\n\nI would agree that's the approximate size of the upper-bound. There are \nso many factors that go into the effectiveness of commit_delay that I \nwouldn't word it so strongly as to say you can \"expect\" that much benefit. \nThe exact delay amount (which can be hard to set if your client load \nvaries greatly), size of the transactions, balance of seek-bound reads vs. \nmemory based ones in the transactions, serialization in the transaction \nstream, and so many other things can slow the effective benefit.\n\nAlso, there are generally other performance issues in the types of systems \nyou would think would get the most benefit from this parameter that end up \nslowing things down anyway. I've been seeing a best case of closer to \n2*single tps rather than 5* on my single-drive systems with no write \ncaching, but I'll admit I haven't done an exhausting look at it yet (too \nbusy with the real systems that have good controllers). One of these \ndays...\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Tue, 22 May 2007 23:48:33 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "\"Greg Smith\" <[email protected]> writes:\n\n> On Tue, 22 May 2007, Gregory Stark wrote:\n>\n>> However as mentioned a while back in practice it doesn't work quite right and\n>> you should expect to get 1/2 the expected performance. So even with 10 clients\n>> you should expect to see 5*120 tps on a 7200 rpm drive and 5*250 tps on a\n>> 15kprm drive.\n>\n> I would agree that's the approximate size of the upper-bound. There are so\n> many factors that go into the effectiveness of commit_delay that I wouldn't\n> word it so strongly as to say you can \"expect\" that much benefit. The exact\n> delay amount (which can be hard to set if your client load varies greatly),\n> size of the transactions, balance of seek-bound reads vs. memory based ones in\n> the transactions, serialization in the transaction stream, and so many other\n> things can slow the effective benefit.\n\nThis is without commit_delay set at all. Just the regular WAL sync behaviour.\n\n> Also, there are generally other performance issues in the types of systems you\n> would think would get the most benefit from this parameter that end up slowing\n> things down anyway. I've been seeing a best case of closer to 2*single tps\n> rather than 5* on my single-drive systems with no write caching, but I'll admit\n> I haven't done an exhausting look at it yet (too busy with the real systems\n> that have good controllers). One of these days...\n\nCertainly there can be other bottlenecks you reach before WAL fsyncs become\nyour limiting factor. If your transactions are reading significant amounts of\ndata you'll be limited by i/o from your data drives. If your data is on the\nsame drive as your WAL your seek times will be higher than the rotational\nlatency too.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Wed, 23 May 2007 09:31:26 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "\n> I am re-running it with other tuning, notably cost-based vacuum \n> delay and less frequent checkpoints, and it is a *lot* smoother.\n> These take a full night to run, so I'll post more results when I \n> have usefull stuff to show.\n> This has proven to be a very interesting trip to benchmarkland...\n\n[ rather late in my reply but I had to ]\n\nAre you tuning mysql in a similar fashion ?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Mon, 28 May 2007 13:53:16 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "On Mon, 28 May 2007 05:53:16 +0200, Chris <[email protected]> wrote:\n\n>\n>> I am re-running it with other tuning, notably cost-based vacuum \n>> delay and less frequent checkpoints, and it is a *lot* smoother.\n>> These take a full night to run, so I'll post more results when I \n>> have usefull stuff to show.\n>> This has proven to be a very interesting trip to benchmarkland...\n>\n> [ rather late in my reply but I had to ]\n>\n> Are you tuning mysql in a similar fashion ?\n\n\tWell, the tuning knobs are different, there are no check points or \nvacuum... but yes I tried to tune MySQL too, but the hardest part was \nsimply making it work without deadlocking continuously.\n\t\n\n\n",
"msg_date": "Mon, 28 May 2007 08:41:51 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "PFC,\n\nThanks for doing those graphs. They've been used by Simon &Heikki, and \nnow me, to show our main issue with PostgreSQL performance: consistency. \n That is, our median response time beats MySQL and even Oracle, but our \nbottom 10% does not, and is in fact intolerably bad.\n\nIf you want us to credit you by your real name, let us know what it is.\n\nThanks!\n\n--Josh Berkus\n",
"msg_date": "Sat, 02 Jun 2007 12:44:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
}
] |
[
{
"msg_contents": "\n>\tI'm writing a full report, but I'm having a \n> lot of problems with MySQL, \n> I'd like to give it a fair chance, but it shows \n> real obstination in NOT \n> working.\n\nWell that matches up well with my experience, better even yet, file a performance bug to the commercial support and you'll get an explanation why your schema (or your hardware, well anything but the database software used) is the guilty factor.\n\nbut you know these IT manager journals consider mysql as the relevant opensource database. Guess it matches better with their expection than PG or say MaxDB (the artist known formerly as Sap DB).\n\nAndreas\n\t\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n",
"msg_date": "Sun, 20 May 2007 20:48:54 +0200",
"msg_from": "\"Andreas Kostyrka\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "\n> Well that matches up well with my experience, better even yet, file a \n> performance bug to the commercial support and you'll get an explanation \n> why your schema (or your hardware, well anything but the database \n> software used) is the guilty factor.\n\n\tYeah, I filed a bug last week since REPEATABLE READ isn't repeatable : it \nworks for SELECT but INSERT INTO ... SELECT switches to READ COMMITTED and \nthus does not insert the same rows that the same SELECT would have \nreturned.\n\n> but you know these IT manager journals consider mysql as the relevant \n> opensource database. Guess it matches better with their expection than \n> PG or say MaxDB (the artist known formerly as Sap DB).\n\n\tNever tried MaxDB.\n\n\tSo far, my MyISAM benchmarks show that, while on the CPU limited case, \nPostgres is faster (even on small simple selects) , when the dataset grows \nlarger, MyISAM keeps going much better than Postgres. That was to be \nexpected since the tables are more compact, it can read indexes without \nhitting the tables, and of course it doesn't have transaction overhead.\n\n\tHowever, these good results are slightly mitigated by the massive data \ncorruption and complete mayhem that ensues, either from \"transactions\" \naborting mid-way, that can't be rolled back obviously, leaving stuff with \nbroken relations, or plain simple engine bugs which replace your data with \ncrap. After about 1/2 hour of hitting the tables hard, they start to \ncorrupt and you get cryptic error messages. Fortunately \"REPAIR TABLE\" \nprovides good consolation in telling you how much corrupt data it had to \nerase from your table... really reassuring !\n\n\tI believe the following current or future Postgres features will provide \nan interesting answer to MyISAM :\n\n\t- The fact that it doesn't corrupt your data, duh.\n\t- HOT\n\t- the new non-logged tables\n\t- Deferred Transactions, since adding a comment to a blog post doesn't \nneed the same guarantees than submitting a paid order, it makes sense that \nthe application could tell postgres which transactions we care about if \npower is lost. This will massively boost performance for websites I \nbelieve.\n\t- the patch that keeps tables in approximate cluster order\n\n\tBy the way, about the ALTER TABLE SET PERSISTENCE ... for non-logged \ntables, will we get an ON RECOVER trigger ?\n\tFor instance, I have counts tables that are often updated by triggers. On \nrecovery, I could simply re-create the counts from the actual data. So I \ncould use the extra speed of non-crash proof tables.\n\n>\n> Andreas\n> \t\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n\n",
"msg_date": "Mon, 21 May 2007 23:35:14 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
},
{
"msg_contents": "> - Deferred Transactions, since adding a comment to a blog post\n> doesn't need the same guarantees than submitting a paid order, it makes\n> sense that the application could tell postgres which transactions we\n> care about if power is lost. This will massively boost performance for\n> websites I believe.\n\nThis would be massively useful. Very often all I care about is that the\ntransaction is semantically committed; that is, that other transactions\nstarting from that moment will see the modifications done. As opposed to\nactually persisting data to disk.\n\nIn particular I have a situation where I attempt to utilize available\nhardware by using concurrency. The problem is that I have to either\nhugely complicate my client code or COMMIT more often than I would like\nin order to satisfy dependencies between different transactions. If a\ndeferred/delayed commit were possible I could get all the performance\nbenefit without the code complexity, and with no penalty (because in\nthis case persistence is not important).\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org",
"msg_date": "Tue, 22 May 2007 08:28:57 +0200",
"msg_from": "Peter Schuller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres Benchmark Results"
}
] |
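What PFC and Peter are asking for later appeared in 8.3 as asynchronous commit, selectable per transaction. A sketch, with a hypothetical comments table standing in for the "blog comment" case:

    BEGIN;
    SET LOCAL synchronous_commit TO off;          -- this transaction only
    INSERT INTO comments (post_id, body) VALUES (42, 'nice post');
    COMMIT;   -- returns before the WAL record reaches disk; a crash in the
              -- next fraction of a second may lose this row, but the
              -- database itself stays consistent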
[
{
"msg_contents": "Hi all,\n\nI know we've covered this before but I'm having trouble with it today.\n\nI have some geographic data in tables that I'm working with. I have a\ncountry, state and city table. I was selecting the country_name out of the\ncountry table but discovered that some countries (like Antarctica) didn't\nhave cities in the city table.\n\nI resolved to query the country table for only country_name's which had\ncountry_id's in the city table - meaning the country had cities listed.\n\nThe problem was I had a couple different sources (in separate tables) with\nsome extraneous column data so I chose to consolidate the city tables from\nthe different sources and column data that I don't need because I don't have\nthe hardware to support it.\n\nThat was the end of my query time.\n\nHere's the original table and query:\n\n# \\d geo.world_city\n Table \"geo.world_city\"\n Column | Type | Modifiers\n------------+------------------------+-----------\n city_id | integer | not null\n state_id | smallint |\n country_id | smallint |\n rc | smallint |\n latitude | numeric(9,7) |\n longitude | numeric(10,7) |\n dsg | character(5) |\n cc1 | character(2) |\n adm1 | character(2) |\n city_name | character varying(200) |\nIndexes:\n \"world_city_pk\" PRIMARY KEY, btree (city_id)\n \"idx_world_city_cc1\" btree (cc1)\n \"idx_world_city_cc1_adm1\" btree (cc1, adm1)\n \"idx_world_city_country_id\" btree (country_id)\n \"idx_world_city_name_first_letter\" btree\n(state_id, \"substring\"(lower(city_name::text), 1, 1))\n \"idx_world_city_state_id\" btree (state_id)\n\nexplain analyze\nSELECT country_id, country_name\nFROM geo.country\nWHERE country_id IN\n (select country_id FROM geo.world_city)\n;\n\n QUERY\nPLAN\n-----------------------------------------------------------------------------\n--------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..167.97 rows=155 width=15) (actual\ntime=85.502..3479.449 rows=231 loops=1)\n -> Seq Scan on country (cost=0.00..6.44 rows=244 width=15) (actual\ntime=0.089..0.658 rows=244 loops=1)\n -> Index Scan using idx_world_city_country_id on world_city\n(cost=0.00..8185.05 rows=12602 width=2) (actual time=14.250..14.250 rows=1\nloops=244)\n Index Cond: (country.country_id = world_city.country_id)\n Total runtime: 3479.921 ms\n\nOdd that it took 3 seconds because every previous run has been much quicker.\nThe next run was:\n\n QUERY\nPLAN\n-----------------------------------------------------------------------------\n------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..167.97 rows=155 width=15) (actual\ntime=0.087..6.967 rows=231 loops=1)\n -> Seq Scan on country (cost=0.00..6.44 rows=244 width=15) (actual\ntime=0.028..0.158 rows=244 loops=1)\n -> Index Scan using idx_world_city_country_id on world_city\n(cost=0.00..8185.05 rows=12602 width=2) (actual time=0.026..0.026 rows=1\nloops=244)\n Index Cond: (country.country_id = world_city.country_id)\n Total runtime: 7.132 ms\n(5 rows)\n\n\nBut that was irrelevant. 
I created a new table and eliminated the data and\n it looks like this:\n\n# \\d geo.city\n Table \"geo.city\"\n Column | Type | Modifiers\n------------+------------------------+-----------\n city_id | integer | not null\n state_id | smallint |\n country_id | smallint |\n latitude | numeric(9,7) |\n longitude | numeric(10,7) |\n city_name | character varying(100) |\nIndexes:\n \"city_pk\" PRIMARY KEY, btree (city_id)\n \"idx_city_country_id\" btree (country_id) CLUSTER\nForeign-key constraints:\n \"city_state_id_fk\" FOREIGN KEY (state_id) REFERENCES geo.state(state_id)\nON UPDATE CASCADE ON DELETE CASCADE\n\nexplain analyze\nSELECT country_id, country_name\nFROM geo.country\nWHERE country_id IN\n (select country_id FROM geo.city)\n;\n\n-- won't complete in a reasonable amount of time.\n\nThis one won't use the country_id index. The two tables have almost the same\nnumber of rows:\n\ncmi=# select count(*) from geo.world_city;\n count\n---------\n 1953314\n(1 row)\n\ncmi=# select count(*) from geo.city;\n count\n---------\n 2122712\n(1 row)\n\n\n I tried to force it and didn't see any improvement. I've vacuummed,\nanalyzed, clustered. Can someone help me to get only the countries who have\ncities in the city table in a reasonable amount of time?\n\n-------------------------------------------------------\n",
"msg_date": "Sun, 20 May 2007 22:28:30 -0600",
"msg_from": "\"Chuck D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rewriting DISTINCT and losing performance"
},
{
"msg_contents": "Chuck,\n\n> explain analyze\n> SELECT country_id, country_name\n> FROM geo.country\n> WHERE country_id IN\n> (select country_id FROM geo.city)\n> ;\n> \n> -- won't complete in a reasonable amount of time.\n\nCan we see the plan?\n\n--Josh\n\n",
"msg_date": "Mon, 21 May 2007 05:14:14 -0400",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rewriting DISTINCT and losing performance"
},
{
"msg_contents": "Chuck D. wrote:\n> Table \"geo.city\"\n> Column | Type | Modifiers\n> ------------+------------------------+-----------\n> city_id | integer | not null\n> state_id | smallint |\n> country_id | smallint |\n> latitude | numeric(9,7) |\n> longitude | numeric(10,7) |\n> city_name | character varying(100) |\n> Indexes:\n> \"city_pk\" PRIMARY KEY, btree (city_id)\n> \"idx_city_country_id\" btree (country_id) CLUSTER\n> Foreign-key constraints:\n> \"city_state_id_fk\" FOREIGN KEY (state_id) REFERENCES geo.state(state_id)\n> ON UPDATE CASCADE ON DELETE CASCADE\n\nAny good reason why country_id is NULLable?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 21 May 2007 12:40:21 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rewriting DISTINCT and losing performance"
},
{
"msg_contents": "On Monday 21 May 2007 03:14, Josh Berkus wrote:\n> Chuck,\n>\n> Can we see the plan?\n>\n> --Josh\n>\n\nSorry Josh, I guess I could have just used EXPLAIN instead of EXPLAIN \nANALYZE.\n\n# explain\nSELECT country_id, country_name\nFROM geo.country\nWHERE country_id IN\n (select country_id FROM geo.city)\n;\n QUERY PLAN\n--------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..1252.60 rows=155 width=15)\n Join Filter: (country.country_id = city.country_id)\n -> Seq Scan on country (cost=0.00..6.44 rows=244 width=15)\n -> Seq Scan on city (cost=0.00..43409.12 rows=2122712 width=2)\n(4 rows)\n\n\nVersus the same query using the older, larger world_city table:\n\n# explain\nSELECT country_id, country_name\nFROM geo.country\nWHERE country_id IN\n (select country_id FROM geo.world_city)\n;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..23.16 rows=155 width=15)\n -> Seq Scan on country (cost=0.00..6.44 rows=244 width=15)\n -> Index Scan using idx_world_city_country_id on world_city \n(cost=0.00..706.24 rows=12602 width=2)\n Index Cond: (country.country_id = world_city.country_id)\n(4 rows)\n\n\n",
"msg_date": "Mon, 21 May 2007 09:09:43 -0600",
"msg_from": "\"Chuck D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rewriting DISTINCT and losing performance"
},
{
"msg_contents": "On Monday 21 May 2007 05:40, Richard Huxton wrote:\n> Chuck D. wrote:\n>\n> Any good reason why country_id is NULLable?\n\nIt has been a while since I imported the data so it took some time to examine \nit but here is what I found.\n\nIn the original data, some cities do not have coutries. Strange huh? Most \nwere in the Gaza Strip, No Man's Land or disputed territory where several \ncountries claimed ownership. This is according to USGS and the board of \nnames.\n\nRecognizing that this did me no good in my application I decided to repair \nthat data so that country_id could have a NOT NULL modifier.\n\n\n",
"msg_date": "Mon, 21 May 2007 11:13:23 -0600",
"msg_from": "\"Chuck D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rewriting DISTINCT and losing performance"
},
{
"msg_contents": "Chuck D. wrote:\n> On Monday 21 May 2007 03:14, Josh Berkus wrote:\n>> Chuck,\n>>\n>> Can we see the plan?\n>>\n>> --Josh\n>>\n> \n> Sorry Josh, I guess I could have just used EXPLAIN instead of EXPLAIN \n> ANALYZE.\n> \n> # explain\n> SELECT country_id, country_name\n> FROM geo.country\n> WHERE country_id IN\n> (select country_id FROM geo.city)\n> ;\n> QUERY PLAN\n> --------------------------------------------------------------------\n> Nested Loop IN Join (cost=0.00..1252.60 rows=155 width=15)\n> Join Filter: (country.country_id = city.country_id)\n> -> Seq Scan on country (cost=0.00..6.44 rows=244 width=15)\n> -> Seq Scan on city (cost=0.00..43409.12 rows=2122712 width=2)\n\nThe only thing I can think of is that the CLUSTERing on city.country_id \nmakes the system think it'll be cheaper to seq-scan the whole table.\n\nI take it you have got 2 million rows in \"city\"?\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 21 May 2007 18:34:11 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rewriting DISTINCT and losing performance"
},
{
"msg_contents": "On Monday 21 May 2007 11:34, Richard Huxton wrote:\n> Chuck D. wrote:\n>\n> The only thing I can think of is that the CLUSTERing on city.country_id\n> makes the system think it'll be cheaper to seq-scan the whole table.\n>\n> I take it you have got 2 million rows in \"city\"?\n\nWell here is where it gets strange. The CLUSTER was just one thing I tried to \ndo to enhance the performance. I had the same result prior to cluster.\n\nHowever, after updating that country_id column to NOT NULL and eliminating \nNULL values it will use the country_id index and perform quickly. Oddly \nenough, the original table, world_city still has NULL values in the \ncountry_id column and it has always used the country_id index.\n\nDoesn't that seem a bit strange? Does it have to do with the smaller size of \nthe new table maybe?\n",
"msg_date": "Mon, 21 May 2007 12:17:44 -0600",
"msg_from": "\"Chuck D.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Rewriting DISTINCT and losing performance"
},
{
"msg_contents": "\"Chuck D.\" <[email protected]> writes:\n> Doesn't that seem a bit strange? Does it have to do with the smaller size of\n> the new table maybe?\n\nNo, it seems to be a planner bug:\nhttp://archives.postgresql.org/pgsql-hackers/2007-05/msg00920.php\n\nI imagine that your table statistics are close to the critical point\nwhere a bitmap scan looks cheaper or more expensive than a plain index\nscan, and so the chosen plan varies depending on more-or-less chance\nfactors. Certainly getting rid of NULLs shouldn't have had any direct\nimpact on this choice.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 May 2007 17:32:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rewriting DISTINCT and losing performance "
}
] |
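Besides the planner issue Tom points to, the query itself can be phrased in several equivalent ways that are worth comparing with EXPLAIN ANALYZE. A sketch using the table names from the thread:

    -- correlated EXISTS
    SELECT c.country_id, c.country_name
    FROM geo.country c
    WHERE EXISTS (SELECT 1 FROM geo.city ci WHERE ci.country_id = c.country_id);

    -- join against the de-duplicated key list
    SELECT c.country_id, c.country_name
    FROM geo.country c
    JOIN (SELECT DISTINCT country_id FROM geo.city) ci USING (country_id);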
[
{
"msg_contents": "Hi everyone,\n\nI am testing my shared_buffers pool and am running into a problem with slow\ninserts and commits. I was reading in several places that in the\n8.XPostgreSQL engines should set the shared_buffers closer to 25% of\nthe\nsystems memory. On me development system, I have done that. We have 9GB of\nmemory on the machine and I set my shared_buffers = 292188 (~25% of total\nmemory).\n\nWhen my users logged in today, they are noticing the system is much slower.\nTracing my log files, I am seeing that most of the commits are taking over\n1sec. I am seeing a range of 1-5 seconds per commit.\n\nWhat is the correlation here between the shared_buffers and the disk\nactivity? This is not something I would have expected at all.\n\nI was wanting to test for improved performance so I can have a good basis\nfor making changes in my production systems.\n\nMy postgresql.conf is pasted below.\n\nThanks for any comments/clarifications,\n\nchris\nPG 8.1.3\nRH 4 AS\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n\nlisten_addresses = '*' # what IP address(es) to listen on;\n\nport = 50001\n\nmax_connections = 1024\n\nsuperuser_reserved_connections = 10\n\nshared_buffers = 292188 # setting to 25% of memory\n\nmax_prepared_transactions = 256 # can be 0 or more\n\nwork_mem = 16384 # min 64, size in KB\n\nmaintenance_work_mem = 1048576 # min 1024, size in KB\n\nmax_fsm_pages = 8000000 # min max_fsm_relations*16, 6 bytes each\n\nmax_fsm_relations = 20000 # min 100, ~70 bytes each\n\nvacuum_cost_delay = 0 # 0-1000 milliseconds\n\nvacuum_cost_page_hit = 0 # 0-10000 credits\n\nvacuum_cost_page_miss = 0 # 0-10000 credits\n\nvacuum_cost_page_dirty = 0 # 0-10000 credits\n\nvacuum_cost_limit = 1 # 0-10000 credits\n\nwal_buffers = 64 # min 4, 8KB each\n\ncheckpoint_segments = 256 # in logfile segments, min 1, 16MB each\n\ncheckpoint_timeout = 300 # range 30-3600, in seconds\n\narchive_command = '/home/postgres/bin/archive_pg_xlog.sh %p %f 50001' #\ncommand to use to archive a logfile\n\neffective_cache_size = 383490 # typically 8KB each\n\nrandom_page_cost = 2 # units are one sequential page fetch\n\ndefault_statistics_target = 100 # range 1-1000\n\nconstraint_exclusion = on\n\nredirect_stderr = on # Enable capturing of stderr into log\n\nlog_directory = 'pg_log' # Directory where log files are written\n\nlog_truncate_on_rotation = on # If on, any existing log file of\nthe same\n\nlog_rotation_age = 1440 # Automatic rotation of logfiles will\n\nlog_rotation_size = 1048576 # Automatic rotation of logfiles will\n\n\nlog_min_messages = debug2 # Values, in order of decreasing detail:\n\nlog_min_duration_statement = 0 # -1 is disabled, 0 logs all\nstatements\n\nlog_connections = on\n\nlog_disconnections = on\n\nlog_duration = on\n\nlog_line_prefix = '%d,%p,%u,%m,%c,%l,%s,%x,%i,' # Special values:\n\nlog_statement = 'all' # none, mod, ddl, all\n\nstats_start_collector = on\n\nstats_command_string = on\n\nstats_block_level = on\n\nstats_row_level = on\n\nstats_reset_on_server_start = on\n\nautovacuum = on # enable autovacuum subprocess?\n\nautovacuum_naptime = 60 # time between autovacuum runs, in secs\n\nautovacuum_vacuum_threshold = 1000 # min # of tuple updates before\n\nautovacuum_analyze_threshold = 500 # min # of tuple updates before\n\nautovacuum_vacuum_scale_factor = 0.001 # fraction of rel size before\n\nautovacuum_analyze_scale_factor = 0.0005 # fraction of rel size before\n\nautovacuum_vacuum_cost_delay = -1 # default vacuum cost delay 
for\n\nautovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n\nstatement_timeout = 0 # 0 is disabled, in milliseconds\n\nlc_messages = 'C' # locale for system error message\n\nlc_monetary = 'C' # locale for monetary formatting\n\nlc_numeric = 'C' # locale for number formatting\n\nlc_time = 'C' # locale for time formatting\n\nadd_missing_from = on",
"msg_date": "Mon, 21 May 2007 10:42:54 -0400",
"msg_from": "\"Chris Hoover\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Increasing Shared_buffers = slow commits?"
},
{
"msg_contents": "On 5/21/07, Chris Hoover <[email protected]> wrote:\n> Hi everyone,\n>\n> I am testing my shared_buffers pool and am running into a problem with slow\n> inserts and commits. I was reading in several places that in the 8.X\n> PostgreSQL engines should set the shared_buffers closer to 25% of the\n> systems memory. On me development system, I have done that. We have 9GB of\n> memory on the machine and I set my shared_buffers = 292188 (~25% of total\n> memory).\n>\n> When my users logged in today, they are noticing the system is much slower.\n> Tracing my log files, I am seeing that most of the commits are taking over\n> 1sec. I am seeing a range of 1-5 seconds per commit.\n>\n> What is the correlation here between the shared_buffers and the disk\n> activity? This is not something I would have expected at all.\n\nhave you overcommited your memory? maybe you are thrashing a\nbit...long commit times are usually symptom of high iowait. can you\npop up top and monitor iowait for a bit?\n\ncan you lower shared buffers again and confirm that performance\nincreases? how about doing some iostat/vmstat runs and looking for\nvalues that are significantly different depending on the shared\nbuffers setting.\n\nmerlin\n",
"msg_date": "Mon, 21 May 2007 14:40:10 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing Shared_buffers = slow commits?"
}
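A quick way to separate commit latency from query cost while experimenting with shared_buffers, assuming ordinary psql access to the affected instance; the probe table name is purely illustrative. If even this trivial transaction takes seconds, the time is going into WAL and checkpoint I/O rather than planning or execution.

-- confirm what the running server is actually using
SHOW shared_buffers;
SHOW checkpoint_segments;
SHOW wal_buffers;

-- time a trivial write transaction in psql
\timing
BEGIN;
CREATE TEMP TABLE commit_probe (i int);
INSERT INTO commit_probe VALUES (1);
COMMIT;
DROP TABLE commit_probe;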
] |
[
{
"msg_contents": "\n\n\tWell, CLUSTER is so slow (and it doesn't cluster the toast tables \nassociated with the table to be clustered).\n\tHowever, when people use CLUSTER they use it to speed up their queries.\n\tFor that the table does not need to be perfectly in-order.\n\n\tSo, here is a new idea for CLUSTER :\n\n\t- choose a chunk size (about 50% of your RAM)\n\t- setup disk sorts for all indexes\n\t- seq scan the table :\n\t\t- take a chunk of chunk_size\n\t\t- sort it (in memory)\n\t\t- write it into new table file\n\t\t- while we have the data on-hand, also send the indexed columns data \ninto the corresponding disk-sorts\n\n\t- finish the index disk sorts and rebuild indexes\n\n\tThis does not run a complete sort on the table. It would be about as fast \nas your seq scan disk throughput. Obviously, the end result is not as good \nas a real CLUSTER since the table will be made up of several ordered \nchunks and a range lookup. Therefore, a range lookup on the clustered \ncolumns would need at most N seeks, versus 1 for a really clustered table. \nBut it only scans the table once and writes it once, even counting index \nrebuild.\n\n\tI would think that, with this approach, if people can CLUSTER a large \ntable in 5 minutes instead of hours, they will use it, instead of not \nusing it. Therefore, even if the resulting table is not as optimal as a \nfully clustered table, it will still be much better than the non-clustered \ncase.\n\n\n\n",
"msg_date": "Tue, 22 May 2007 09:29:00 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Feature suggestion : FAST CLUSTER"
},
{
"msg_contents": "On Tue, May 22, 2007 at 09:29:00AM +0200, PFC wrote:\n> \tThis does not run a complete sort on the table. It would be about as \n> \tfast as your seq scan disk throughput. Obviously, the end result is not as \n> good as a real CLUSTER since the table will be made up of several ordered \n> chunks and a range lookup. Therefore, a range lookup on the clustered \n> columns would need at most N seeks, versus 1 for a really clustered table. \n> But it only scans the table once and writes it once, even counting index \n> rebuild.\n\nDo you have any data that indicates such an arrangement would be\nsubstantially better than less-clustered data?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)",
"msg_date": "Sun, 27 May 2007 10:53:38 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature suggestion : FAST CLUSTER"
},
{
"msg_contents": "On Sun, 27 May 2007 17:53:38 +0200, Jim C. Nasby <[email protected]> \nwrote:\n\n> On Tue, May 22, 2007 at 09:29:00AM +0200, PFC wrote:\n>> \tThis does not run a complete sort on the table. It would be about as\n>> \tfast as your seq scan disk throughput. Obviously, the end result is \n>> not as\n>> good as a real CLUSTER since the table will be made up of several \n>> ordered\n>> chunks and a range lookup. Therefore, a range lookup on the clustered\n>> columns would need at most N seeks, versus 1 for a really clustered \n>> table.\n>> But it only scans the table once and writes it once, even counting index\n>> rebuild.\n>\n> Do you have any data that indicates such an arrangement would be\n> substantially better than less-clustered data?\n\n\tWhile the little benchmark that will answer your question is running, \nI'll add a few comments :\n\n\tI have been creating a new benchmark for PostgreSQL and MySQL, that I \nwill call the Forum Benchmark. It mimics the activity of a forum.\n\tSo far, I have got interesting results about Postgres and InnoDB and will \npublish an extensive report with lots of nasty stuff in it, in, say, 2 \nweeks, since I'm doing this in spare time.\n\n\tAnyway, forums like clustered tables, specifically clusteriing posts on \n(topic_id, post_id), in order to be able to display a page with one disk \nseek, instead of one seek per post.\n\tPostgreSQL humiliates InnoDB on CPU-bound workloads (about 2x faster \nsince I run it on dual core ; InnoDB uses only one core). However, InnoDB \ncan automatically cluster tables without maintenance. This means InnoDB \nwill, even though it sucks and is awfully bloated, run a lot faster than \npostgres if things become IO-bound, ie. if the dataset is larger than RAM.\n\tPostgres needs to cluster the posts table in order to keep going. CLUSTER \nis very slow. I tried inserting into a new posts table, ordering by \n(post_id, topic_id), then renaming the new table in place of the old. It \nis faster, but still slow when handling lots of data.\n\tI am trying other approaches, some quite hack-ish, and will report my \nfindings.\n\n\tRegards\n\t\n\t\n\n",
"msg_date": "Sun, 27 May 2007 19:34:30 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Feature suggestion : FAST CLUSTER"
},
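For reference, the rewrite-in-order approach mentioned above (insert into a new table ordered by the clustering key, then swap) looks roughly like this; posts, topic_id and post_id come from the discussion, everything else is illustrative. CREATE TABLE AS does not carry over constraints, defaults or triggers, and the swap takes an exclusive lock, so this is only a sketch for a quiet maintenance window.

CREATE TABLE posts_sorted AS
    SELECT * FROM posts ORDER BY topic_id, post_id;

CREATE INDEX posts_sorted_topic_post ON posts_sorted (topic_id, post_id);
ANALYZE posts_sorted;

BEGIN;
ALTER TABLE posts RENAME TO posts_old;
ALTER TABLE posts_sorted RENAME TO posts;
COMMIT;
DROP TABLE posts_old;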
{
"msg_contents": "On 5/27/07, PFC <[email protected]> wrote:\n> PostgreSQL humiliates InnoDB on CPU-bound workloads (about 2x faster\n> since I run it on dual core ; InnoDB uses only one core). However, InnoDB\n> can automatically cluster tables without maintenance.\n\nHow does it know what to cluster by? Does it gather statistics about\nquery patterns on which it can decide an optimal clustering, or does\nit merely follow a clustering previously set up by the user?\n\nAlexander.\n",
"msg_date": "Sun, 27 May 2007 20:27:31 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature suggestion : FAST CLUSTER"
},
{
"msg_contents": "\n\n> How does it know what to cluster by? Does it gather statistics about\n> query patterns on which it can decide an optimal clustering, or does\n> it merely follow a clustering previously set up by the user?\n\n\tNothing fancy, InnoDB ALWAYS clusters on the primary key, whatever it is. \nSo, if you can hack your stuff into having a primary key that clusters \nnicely, good for you. If not, well...\n\tSo, I used (topic_id, post_id) as the PK, even though it isn't the real \nPK (this should be post_id)...\n",
"msg_date": "Sun, 27 May 2007 20:44:20 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Feature suggestion : FAST CLUSTER"
},
{
"msg_contents": "On May 27, 2007, at 12:34 PM, PFC wrote:\n> On Sun, 27 May 2007 17:53:38 +0200, Jim C. Nasby \n> <[email protected]> wrote:\n>> On Tue, May 22, 2007 at 09:29:00AM +0200, PFC wrote:\n>>> \tThis does not run a complete sort on the table. It would be \n>>> about as\n>>> \tfast as your seq scan disk throughput. Obviously, the end \n>>> result is not as\n>>> good as a real CLUSTER since the table will be made up of \n>>> several ordered\n>>> chunks and a range lookup. Therefore, a range lookup on the \n>>> clustered\n>>> columns would need at most N seeks, versus 1 for a really \n>>> clustered table.\n>>> But it only scans the table once and writes it once, even \n>>> counting index\n>>> rebuild.\n>>\n>> Do you have any data that indicates such an arrangement would be\n>> substantially better than less-clustered data?\n> \tWhile the little benchmark that will answer your question is \n> running, I'll add a few comments :\n>\n> \tI have been creating a new benchmark for PostgreSQL and MySQL, \n> that I will call the Forum Benchmark. It mimics the activity of a \n> forum.\n> \tSo far, I have got interesting results about Postgres and InnoDB \n> and will publish an extensive report with lots of nasty stuff in \n> it, in, say, 2 weeks, since I'm doing this in spare time.\n>\n> \tAnyway, forums like clustered tables, specifically clusteriing \n> posts on (topic_id, post_id), in order to be able to display a page \n> with one disk seek, instead of one seek per post.\n> \tPostgreSQL humiliates InnoDB on CPU-bound workloads (about 2x \n> faster since I run it on dual core ; InnoDB uses only one core). \n> However, InnoDB can automatically cluster tables without \n> maintenance. This means InnoDB will, even though it sucks and is \n> awfully bloated, run a lot faster than postgres if things become IO- \n> bound, ie. if the dataset is larger than RAM.\n> \tPostgres needs to cluster the posts table in order to keep going. \n> CLUSTER is very slow. I tried inserting into a new posts table, \n> ordering by (post_id, topic_id), then renaming the new table in \n> place of the old. It is faster, but still slow when handling lots \n> of data.\n> \tI am trying other approaches, some quite hack-ish, and will report \n> my findings.\n\nI assume you meant topic_id, post_id. :)\n\nThe problem with your proposal is that it does nothing to ensure that \nposts for a topic stay together as soon as the table is large enough \nthat you can't sort it in a single pass. If you've got a long-running \nthread, it's still going to get spread out throughout the table.\n\nWhat you really want is CLUSTER CONCURRENTLY, which I believe is on \nthe TODO list. BUT... there's another caveat here: for any post where \nthe row ends up being larger than 2k, the text is going to get \nTOASTed anyway, which means it's going to be in a separate table, in \na different ordering. I don't know of a good way to address that; you \ncan cluster the toast table, but you'll be clustering on an OID, \nwhich isn't going to help you.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Mon, 28 May 2007 19:48:00 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Feature suggestion : FAST CLUSTER"
},
{
"msg_contents": "On Sun, 27 May 2007 19:34:30 +0200, PFC <[email protected]> wrote:\n\n> On Sun, 27 May 2007 17:53:38 +0200, Jim C. Nasby <[email protected]> \n> wrote:\n>\n>> On Tue, May 22, 2007 at 09:29:00AM +0200, PFC wrote:\n>>> \tThis does not run a complete sort on the table. It would be about as\n>>> \tfast as your seq scan disk throughput. Obviously, the end result is \n>>> not as\n>>> good as a real CLUSTER since the table will be made up of several \n>>> ordered\n>>> chunks and a range lookup. Therefore, a range lookup on the clustered\n>>> columns would need at most N seeks, versus 1 for a really clustered \n>>> table.\n>>> But it only scans the table once and writes it once, even counting \n>>> index\n>>> rebuild.\n>>\n>> Do you have any data that indicates such an arrangement would be\n>> substantially better than less-clustered data?\n>\n> \tWhile the little benchmark that will answer your question is running, \n> I'll add a few comments :\n\n\tAlright, so far :\n\n\tThis is a simulated forum workload, so it's mostly post insertions, some \nedits, and some topic deletes.\n\tIt will give results applicable to forums, obviously, but also anything \nthat wotks on the same schema :\n\t- topics + posts\n\t- blog articles + coomments\n\t- e-commerce site where users can enter their reviews\n\tSo, the new trend being to let the users to participate, this kind of \nworkload will become more and more relevant for websites.\n\n\tSo, how to cluster the posts table on (topic_id, post_id) to get all the \nposts on the same webpake in 1 seek ?\n\n\tI am benchmarking the following :\n\t- CLUSTER obviously\n\t- Creating a new table and INSERT .. SELECT ORDER BY topic_id, post_id, \nthen reindexing etc\n\t- not doing anything (just vacuuming all tables)\n\t- not even vacuuming the posts table.\n\n\tI al also trying the following more exotic approaches :\n\n\t* chunked sort :\n\n\tWell, sorting 1GB of data when your work_mem is only 512 MB needs several \npasses, hence a lot of disk IO. The more data, the more IO.\n\tSo, instead of doing this, I will :\n\t- grab about 250 MB of posts from the table\n\t- sort them by (topic_id, post_id)\n\t- insert them in a new table\n\t- repeat\n\t- then reindex, etc and replace old table with new.\n\t(reindex is very fast, since the table is nicely defragmented now, I get \nfull disk speed. However I would like being able to create 2 indexes with \nONE table scan !)\n\tI'm trying 2 different ways to do that, with plpgsql and cursors.\n\tIt is much faster than sorting the whole data set, because the sorts are \nonly done in memory (hence the \"chunks\")\n\tSo far, it seems a database clustered this way is about as fast as using \nCLUSTER, but the clustering operation is faster.\n\tMore results in about 3 days when the benchmarks finish.\n\n\t* other dumb stuff\n\n\tI'll try DELETing the last 250MB of records, stuff them in a temp table, \nvacuum, and re-insert them in order.\n\n\n\t\n",
"msg_date": "Tue, 29 May 2007 08:23:52 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Feature suggestion : FAST CLUSTER"
},
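A rough sketch of the "chunked sort" idea described above, assuming post_id increases monotonically so the table can be sliced into ranges that each fit in work_mem; the slice boundaries and names are illustrative, and the poster actually used cursors rather than explicit ranges. Each slice is sorted independently, which keeps every sort in memory at the cost of producing several ordered chunks instead of one fully ordered table.

CREATE TABLE posts_chunked (LIKE posts);

INSERT INTO posts_chunked
    SELECT * FROM posts WHERE post_id < 1000000
    ORDER BY topic_id, post_id;

INSERT INTO posts_chunked
    SELECT * FROM posts WHERE post_id >= 1000000 AND post_id < 2000000
    ORDER BY topic_id, post_id;

-- ...repeat for the remaining ranges, then index and swap as in the full rewrite
CREATE INDEX posts_chunked_topic_post ON posts_chunked (topic_id, post_id);
ANALYZE posts_chunked;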
{
"msg_contents": "\n> I assume you meant topic_id, post_id. :)\n\n\tUm, yes ;)\n\n> The problem with your proposal is that it does nothing to ensure that \n> posts for a topic stay together as soon as the table is large enough \n> that you can't sort it in a single pass. If you've got a long-running \n> thread, it's still going to get spread out throughout the table.\n\n\tI completely agree with you.\n\tHowever, you have to consider the use cases for clustered tables.\n\n\tSuppose you want to cluster on (A,B). A can be topic, category, month, \nstore, whatever ; B can be post_id, product_id, etc.\n\n\tNow consider these cases :\n\n1- Fully clustered table and defragmented file, TOAST table also in the \nsame order as the main table.\nCLUSTER does not do the second part, but [INSERT INTO new_table SELECT * \n FROM old_table ORDER BY a,b] also fills the TOAST table in the right order.\n\n2- Totally unclustered\n\n3- InnoDB which is an index-table ie. a BTree with data in the leafs ; \nthis means clustering is automatic\n\n4- My partial cluster proposal, ie. the table and its TOAST table have \nbeen clustered in chunks, say, of 500 MB.\n\n\t* You always want to get ALL records with a specific value of A.\n\nIn this case, a fully clustered table will obviously be the best choice : \n1 seek, then seq scan, Bitmap Index Scan rules.\nYou might think that the InnoDB case would perform the same.\nHowever, after some time the table and files on disk will be very \nfragmented, and btree pages with the same value of A will be everywhere, \nas they have been splitted and joined by insertions and deletions, so your \ntheoretical sequential scan might well translate into 1 seek per page. \nObviously since all the rows in the page will be of interest to you, this \nis still better than 1 seek per row, but well, you get the idea.\n\n\t* You want records with a specific value of A, and B inside a range\n\n\tExample :\n\t- topic_id = X AND post_id between start_of_page and end_of_page\n\t- get the sales record for january grouped by store\n\t- get the first 10 comments of a blog post\n\t- etc\n\tI would bet that this use case happens much more often than the previous \none.\n\nIn this case, a fully clustered table will obviously, again, be the best \nchoice.\nRandomly \"organized\" table will cost 1 seek per row, ie. 
buy more RAM or \nget fired.\nInnoDB will not work that bad, the number of seeks will be (number of rows \nwanted) / (rows per page)\n\n\tHowever, how would the chunked clustered case work ?\n\n\tIn the worst case, if the table has been sorted in N \"chunks\", you'll \nneed N seeks.\n\tHowever, since people generally cluster their stuff on some sort of \ntemporal related column (posts and comments are sorted by insertion time) \nyou can safely bet that most of the rows you want will end up in the same \nchunk, or in maybe 2-3 chunks, reducing the number of seeks to something \nbetween 1 and the number of chunks.\n\n\tThe fact is, sometimes people don't use CLUSTER when it would really help \nbecause it takes too long.\n\t\n\tSuppose you have a 3GB table, and your work_mem is 512MB.\n\n\tCLUSTER will take forever.\n\tA hypothetical new implementation of CLUSTER which would do an on-disk \nsort would create several sort bins, then combine them.\n\tA chunked cluster like I'm proposing would be several times faster since \nit would roughly operate at the raw disk IO speed (Postgres sorting is so \nfast when in RAM...)\n\n\tSo, having a full and chunked cluster would allow users to run the full \ncluster maybe once a month, and the chunked cluster maybe once every few \ndays.\n\tAnd the chunked CLUSTER would find most of the rows already in order, so \nits end result would be very close to a full CLUSTER, with a fraction of \nthe runtime.\n\n\tAn added bonus is for index rebuild.\n\tInstead of first, clustering the table, then rebuilding the indexes, this \ncould be done :\n\n\t- initialize sort buffers for the index builds\n\t- loop :\n\t\t- grab 500 MB of data from the table\n\t\t- sort it\n\t\t- insert it into new table\n\t\t- while data is still in RAM, extract the indexed columns and shove them \ninto each index's sort buffer\n\t\t- repeat until all data is processed\n\n\t- now, you have all indexed columns ready to be used, without need to \nrescan the table ! index rebuild will be much faster.\n\n\tI see VACUUM FULL is scheduled for a reimplementation in a future version.\n\tThis could be the way : with the same code path, this could do VACUUM \nFULL, CLUSTER, and chunked CLUSTER, just by changing if/how the sort step \nis done, giving the user a nice performance / maintenance time tradeoff.\n\tGetting this in maybe, 8.5 is better than CLUSTER CONCURRENTLY in 8.10 I \nwould dare say.\n\tAnd the original table can still be read from.\n\n\tBenchmarks are running. I will back this with figures in a few days.\n\n\tAnoher thing to do for CLUSTER would be to cluster only the tail of the \ntable.\n\n\tHave a nice day !\n\n\n>\n> What you really want is CLUSTER CONCURRENTLY, which I believe is on the \n> TODO list. BUT... there's another caveat here: for any post where the \n> row ends up being larger than 2k, the text is going to get TOASTed \n> anyway, which means it's going to be in a separate table, in a different \n> ordering. I don't know of a good way to address that; you can cluster \n> the toast table, but you'll be clustering on an OID, which isn't going \n> to help you.\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n",
"msg_date": "Tue, 29 May 2007 10:43:56 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Feature suggestion : FAST CLUSTER"
}
] |
[
{
"msg_contents": "I found several post about INSERT/UPDATE performance in this group,\nbut actually it was not really what I am searching an answer for...\n\nI have a simple reference table WORD_COUNTS that contains the count of\nwords that appear in a word array storage in another table.\n\nCREATE TABLE WORD_COUNTS\n(\n word text NOT NULL,\n count integer,\n CONSTRAINT PK_WORD_COUNTS PRIMARY KEY (word)\n)\nWITHOUT OIDS;\n\nI have some PL/pgSQL code in a stored procedure like\n\n FOR r\n IN select id, array_of_words\n from word_storage\n LOOP\n begin\n -- insert the missing words\n insert into WORD_COUNTS\n ( word, count )\n ( select word, 0\n from ( select distinct (r.array_of_words)\n[s.index] as d_word\n from generate_series(1,\narray_upper( r.array_of_words, 1 ) ) as s(index) ) as distinct_words\n where word not in ( select d_word from\nWORD_COUNTS ) );\n -- update the counts\n update WORD_COUNTS\n set count = COALESCE( count, 0 ) + 1\n where word in ( select distinct (r.array_of_words)[s.index] as\nword\n from generate_series(1,\narray_upper( r.array_of_words, 1) ) as s(index) );\n exception when others then\n error_count := error_count + 1;\n end;\n record_count := record_count + 1;\n END LOOP;\n\nThis code runs extremely slowly. It takes about 10 minutes to process\n10000 records and the word storage has more then 2 million records to\nbe processed.\n\nDoes anybody have a know-how about populating of such a reference\ntables and what can be optimized in this situation.\n\nMaybe the generate_series() procedure to unnest the array is the place\nwhere I loose the performance?\n\nAre the set update/inserts more effitient, then single inserts/updates\nrun in smaller loops?\n\nThanks for your help,\n\nValentine Gogichashvili\n\n",
"msg_date": "22 May 2007 01:23:03 -0700",
"msg_from": "valgog <[email protected]>",
"msg_from_op": true,
"msg_subject": "Key/Value reference table generation: INSERT/UPDATE performance"
},
{
"msg_contents": "On 22 May 2007 01:23:03 -0700, valgog <[email protected]> wrote:\n>\n> I found several post about INSERT/UPDATE performance in this group,\n> but actually it was not really what I am searching an answer for...\n>\n> I have a simple reference table WORD_COUNTS that contains the count of\n> words that appear in a word array storage in another table.\n>\n> CREATE TABLE WORD_COUNTS\n> (\n> word text NOT NULL,\n> count integer,\n> CONSTRAINT PK_WORD_COUNTS PRIMARY KEY (word)\n> )\n> WITHOUT OIDS;\n\n\n\nIs there any reason why count is not not null? (That should siplify your\ncode by removing the coalesce)\n\ninsert is more efficient than update because update is always a delete\nfollowed by an insert.\n\nOh and group by is nearly always quicker than distinct and can always? be\nrewritten as such. I'm not 100% sure why its different but it is.\n\nPeter.\n\n\n\nI have some PL/pgSQL code in a stored procedure like\n>\n> FOR r\n> IN select id, array_of_words\n> from word_storage\n> LOOP\n> begin\n> -- insert the missing words\n> insert into WORD_COUNTS\n> ( word, count )\n> ( select word, 0\n> from ( select distinct (r.array_of_words)\n> [s.index] as d_word\n> from generate_series(1,\n> array_upper( r.array_of_words, 1 ) ) as s(index) ) as distinct_words\n> where word not in ( select d_word from\n> WORD_COUNTS ) );\n> -- update the counts\n> update WORD_COUNTS\n> set count = COALESCE( count, 0 ) + 1\n> where word in ( select distinct (r.array_of_words)[s.index] as\n> word\n> from generate_series(1,\n> array_upper( r.array_of_words, 1) ) as s(index) );\n> exception when others then\n> error_count := error_count + 1;\n> end;\n> record_count := record_count + 1;\n> END LOOP;\n>\n> This code runs extremely slowly. It takes about 10 minutes to process\n> 10000 records and the word storage has more then 2 million records to\n> be processed.\n>\n> Does anybody have a know-how about populating of such a reference\n> tables and what can be optimized in this situation.\n>\n> Maybe the generate_series() procedure to unnest the array is the place\n> where I loose the performance?\n>\n> Are the set update/inserts more effitient, then single inserts/updates\n> run in smaller loops?\n>\n> Thanks for your help,\n>\n> Valentine Gogichashvili\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nOn 22 May 2007 01:23:03 -0700, valgog <[email protected]> wrote:\nI found several post about INSERT/UPDATE performance in this group,but actually it was not really what I am searching an answer for...I have a simple reference table WORD_COUNTS that contains the count ofwords that appear in a word array storage in another table.\nCREATE TABLE WORD_COUNTS( word text NOT NULL, count integer, CONSTRAINT PK_WORD_COUNTS PRIMARY KEY (word))WITHOUT OIDS;Is there any reason why count is not not null? (That should siplify your code by removing the coalesce)\ninsert is more efficient than update because update is always a delete followed by an insert.Oh and group by is nearly always quicker than distinct and can always? be rewritten as such. 
I'm not 100% sure why its different but it is.\nPeter.I have some PL/pgSQL code in a stored procedure like\n FOR r IN select id, array_of_words from word_storage LOOP begin -- insert the missing words insert into WORD_COUNTS ( word, count ) ( select word, 0\n from ( select distinct (r.array_of_words)[s.index] as d_word from generate_series(1,array_upper( r.array_of_words, 1 ) ) as s(index) ) as distinct_words\n where word not in ( select d_word fromWORD_COUNTS ) ); -- update the counts update WORD_COUNTS set count = COALESCE( count, 0 ) + 1 where word in ( select distinct (\nr.array_of_words)[s.index] asword from generate_series(1,array_upper( r.array_of_words, 1) ) as s(index) ); exception when others then error_count := error_count + 1;\n end; record_count := record_count + 1; END LOOP;This code runs extremely slowly. It takes about 10 minutes to process10000 records and the word storage has more then 2 million records to\nbe processed.Does anybody have a know-how about populating of such a referencetables and what can be optimized in this situation.Maybe the generate_series() procedure to unnest the array is the place\nwhere I loose the performance?Are the set update/inserts more effitient, then single inserts/updatesrun in smaller loops?Thanks for your help,Valentine Gogichashvili---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Tue, 22 May 2007 10:05:27 +0100",
"msg_from": "\"Peter Childs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Key/Value reference table generation: INSERT/UPDATE performance"
},
{
"msg_contents": "valgog wrote:\n> I found several post about INSERT/UPDATE performance in this group,\n> but actually it was not really what I am searching an answer for...\n> \n> I have a simple reference table WORD_COUNTS that contains the count of\n> words that appear in a word array storage in another table.\n\nI think this is the root of your problem, I'm afraid. You're trying to \ncount individual words when you're storing an array of words. I don't \nthink any of the Gist/GIN indexes will help you with this either.\n\nHowever, since \"you don't want to start from here\" isn't very useful \nhere and now:\n\n1. See what the performance (explain analyse) of the \"select \ndistinct...generate_series()\" statement is. I think you're right and \nit's going to be slow.\n2. You're looping through each row of word_storage and counting \nseparately. Write it as one query if possible.\n3. As Peter says, don't insert then update, start with an empty table \nand just insert totals for the lot (see #2).\n\nI'd probably write the query in plperl/python or something else that \nsupports hash/dictionary structures. Then just process the whole \nword_storage into the hash - assuming you only have a few thousand \ndistinct words that shouldn't take up too much memory.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 22 May 2007 10:21:43 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Key/Value reference table generation: INSERT/UPDATE\n performance"
},
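The "one query into an empty table" suggestion can be written in plain SQL on 8.2 by expanding each array with a set-returning expression in the select list (unnest() does not exist yet in that release); this is only a sketch and assumes word_storage.array_of_words is a text[] and that rebuilding the whole counts table is acceptable.

TRUNCATE word_counts;

INSERT INTO word_counts (word, count)
SELECT array_of_words[i] AS word, count(*)
FROM ( SELECT array_of_words,
              generate_series(1, array_upper(array_of_words, 1)) AS i
       FROM word_storage ) AS flattened
GROUP BY 1;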
{
"msg_contents": "I have rewritten the code like\n\n existing_words_array := ARRAY( select word\n from WORD_COUNTS\n where word = ANY\n( array_of_words ) );\n not_existing_words_array := ARRAY( select distinct_word\n from ( select distinct\n(array_of_words)[s.index] as distinct_word\n from\ngenerate_series(1, array_upper( array_of_words, 1 ) ) as s(index)\n ) as distinct_words\n where distinct_word <> ALL\n( existing_words_array ) );\n -- insert the missing words\n if not_existing_words_array is not null then\n insert into WORD_COUNTS\n ( word, count )\n ( select word, 1\n from ( select\nnot_existing_words_array[s.index] as word\n from generate_series( 1,\narray_upper( not_existing_words_array, 1 ) ) as s(index) ) as\ndistinct_words\n );\n end if;\n -- update the counts\n if existing_words_array is not null then\n update WORD_COUNTS\n set count = COALESCE( count, 0 ) + 1\n where sw_word = ANY ( existing_words_array );\n end if;\n\n\nNow it processes a million records in 14 seconds... so it was probably\nthe problem of looking up NOT IN WORD_COUNTS was way too expencive\n\n",
"msg_date": "22 May 2007 03:00:41 -0700",
"msg_from": "valgog <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Key/Value reference table generation: INSERT/UPDATE performance"
},
{
"msg_contents": "On Tue, 22 May 2007 10:23:03 +0200, valgog <[email protected]> wrote:\n\n> I found several post about INSERT/UPDATE performance in this group,\n> but actually it was not really what I am searching an answer for...\n>\n> I have a simple reference table WORD_COUNTS that contains the count of\n> words that appear in a word array storage in another table.\n\n\tMmm.\n\n\tIf I were you, I would :\n\n\t- Create a procedure that flattens all the arrays and returns all the \nwords :\n\nPROCEDURE flatten_arrays RETURNS SETOF TEXT\nFOR word_array IN SELECT word_array FROM your_table LOOP\n\tFOR i IN 1...array_upper( word_array ) LOOP\n\t\tRETURN NEXT tolower( word_array[ i ] )\n\nSo, SELECT * FROM flatten_arrays() returns all the words in all the arrays.\nTo get the counts quickly I'd do this :\n\nSELECT word, count(*) FROM flatten_arrays() AS word GROUP BY word\n\nYou can then populate your counts table very easily and quickly, since \nit's just a seq scan and hash aggregate. One second for 10.000 rows would \nbe slow.\n",
"msg_date": "Tue, 22 May 2007 12:14:48 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Key/Value reference table generation: INSERT/UPDATE performance"
},
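The procedure sketched above, filled out as runnable PL/pgSQL for reference; it assumes the word_storage.array_of_words column from the original post, uses lower() (there is no tolower() in PostgreSQL), and guards against NULL or empty arrays. The populate query is the one suggested in the message.

CREATE OR REPLACE FUNCTION flatten_arrays() RETURNS SETOF text AS $$
DECLARE
    arr text[];
    i   integer;
BEGIN
    FOR arr IN SELECT array_of_words FROM word_storage LOOP
        FOR i IN 1 .. coalesce(array_upper(arr, 1), 0) LOOP
            RETURN NEXT lower(arr[i]);
        END LOOP;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- populate the counts in one pass, as suggested
INSERT INTO word_counts (word, count)
SELECT word, count(*) FROM flatten_arrays() AS word GROUP BY word;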
{
"msg_contents": "On May 22, 12:14 pm, [email protected] (PFC) wrote:\n> On Tue, 22 May 2007 10:23:03 +0200, valgog <[email protected]> wrote:\n> > I found several post about INSERT/UPDATE performance in this group,\n> > but actually it was not really what I am searching an answer for...\n>\n> > I have a simple reference table WORD_COUNTS that contains the count of\n> > words that appear in a word array storage in another table.\n>\n> Mmm.\n>\n> If I were you, I would :\n>\n> - Create a procedure that flattens all the arrays and returns all the \n> words :\n>\n> PROCEDURE flatten_arrays RETURNS SETOF TEXT\n> FOR word_array IN SELECT word_array FROM your_table LOOP\n> FOR i IN 1...array_upper( word_array ) LOOP\n> RETURN NEXT tolower( word_array[ i ] )\n>\n> So, SELECT * FROM flatten_arrays() returns all the words in all the arrays.\n> To get the counts quickly I'd do this :\n>\n> SELECT word, count(*) FROM flatten_arrays() AS word GROUP BY word\n>\n> You can then populate your counts table very easily and quickly, since \n> it's just a seq scan and hash aggregate. One second for 10.000 rows would \n> be slow.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\ngood idea indeed! will try this approach.\n\n",
"msg_date": "22 May 2007 03:35:29 -0700",
"msg_from": "valgog <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Key/Value reference table generation: INSERT/UPDATE performance"
},
{
"msg_contents": "On May 22, 12:00 pm, valgog <[email protected]> wrote:\n> I have rewritten the code like\n>\n> existing_words_array := ARRAY( select word\n> from WORD_COUNTS\n> where word = ANY\n> ( array_of_words ) );\n> not_existing_words_array := ARRAY( select distinct_word\n> from ( select distinct\n> (array_of_words)[s.index] as distinct_word\n> from\n> generate_series(1, array_upper( array_of_words, 1 ) ) as s(index)\n> ) as distinct_words\n> where distinct_word <> ALL\n> ( existing_words_array ) );\n> -- insert the missing words\n> if not_existing_words_array is not null then\n> insert into WORD_COUNTS\n> ( word, count )\n> ( select word, 1\n> from ( select\n> not_existing_words_array[s.index] as word\n> from generate_series( 1,\n> array_upper( not_existing_words_array, 1 ) ) as s(index) ) as\n> distinct_words\n> );\n> end if;\n> -- update the counts\n> if existing_words_array is not null then\n> update WORD_COUNTS\n> set count = COALESCE( count, 0 ) + 1\n> where sw_word = ANY ( existing_words_array );\n> end if;\n>\n> Now it processes a million records in 14 seconds... so it was probably\n> the problem of looking up NOT IN WORD_COUNTS was way too expencive\n\nSorry... this code did not update anythig at all, as I forgot about\nthe NULL values... had to COALASCE practically everything and use\narray_upper()... do not have the performance numbers of the insert,\nupdates yet...\n\n",
"msg_date": "22 May 2007 03:38:06 -0700",
"msg_from": "valgog <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Key/Value reference table generation: INSERT/UPDATE performance"
},
{
"msg_contents": "Le mardi 22 mai 2007, Richard Huxton a écrit :\n> valgog wrote:\n> > I found several post about INSERT/UPDATE performance in this group,\n> > but actually it was not really what I am searching an answer for...\n> >\n> > I have a simple reference table WORD_COUNTS that contains the count of\n> > words that appear in a word array storage in another table.\n>\n> I think this is the root of your problem, I'm afraid. You're trying to\n> count individual words when you're storing an array of words. I don't\n> think any of the Gist/GIN indexes will help you with this either.\n>\n> However, since \"you don't want to start from here\" isn't very useful\n> here and now:\n>\n> 1. See what the performance (explain analyse) of the \"select\n> distinct...generate_series()\" statement is. I think you're right and\n> it's going to be slow.\n> 2. You're looping through each row of word_storage and counting\n> separately. Write it as one query if possible.\n> 3. As Peter says, don't insert then update, start with an empty table\n> and just insert totals for the lot (see #2).\n>\n> I'd probably write the query in plperl/python or something else that\n> supports hash/dictionary structures. Then just process the whole\n> word_storage into the hash - assuming you only have a few thousand\n> distinct words that shouldn't take up too much memory.\n+1\nI made something very similar, and using PL/pgsql is very slow, when using \nperl is very quick. \n\nI have also use partioning because of cost of update (copy last partition to \nthe new, adding the new count, so there is only insert, and drop old table if \nyou want)\n",
"msg_date": "Tue, 22 May 2007 14:28:05 +0200",
"msg_from": "cedric <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Key/Value reference table generation: INSERT/UPDATE performance"
},
{
"msg_contents": "On 5/22/07, cedric <[email protected]> wrote:\n> I made something very similar, and using PL/pgsql is very slow, when using\n> perl is very quick.\n\nAnother solution is to use tsearch2 for that:\nCREATE TABLE word_counts AS SELECT * FROM stat('SELECT\nto_tsvector(''simple'', lower(coalesce(field containing words, '''')))\nFROM your table');\n\nI don't know if the fact you have an array of words is a must have or\njust a design choice. If you have to keep that, you can transform the\narray easily into a string with array_to_string and use the same sort\nof query.\n\nI don't know what are exactly your speed requirements but it's quite\nfast here. If you drop your table and recreate it into a transaction,\nit should work like a charm (or you can use TRUNCATE and INSERT INTO).\n\n--\nGuillaume\n",
"msg_date": "Wed, 23 May 2007 10:04:24 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Key/Value reference table generation: INSERT/UPDATE performance"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a table with a file size of 400 MB with an index of 100 MB. Does PostgreSQL take the file sizes of both the table and the index into account when determing if it should do a table or an index scan? \n\nTIA\n\nJoost\n",
"msg_date": "Tue, 22 May 2007 11:29:46 +0200",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "is file size relevant in choosing index or table scan?"
},
{
"msg_contents": "Joost Kraaijeveld wrote:\n> Hi,\n> \n> I have a table with a file size of 400 MB with an index of 100 MB.\n> Does PostgreSQL take the file sizes of both the table and the index\n> into account when determing if it should do a table or an index scan?\n\nIn effect yes, although it will think in terms of row sizes and disk \nblocks. It also considers how many rows it thinks it will fetch and \nwhether the rows it wants are together or spread amongst many blocks. It \nalso tries to estimate what the chances are of those blocks being cached \nin RAM vs still on disk.\n\nSo: 1 row from a 4 million row table, accessed by primary key => index.\n20 rows from a 200 row table => seq scan (probably).\nIn between => depends on your postgresql.conf\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 22 May 2007 10:42:18 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is file size relevant in choosing index or table scan?"
}
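The planner's decision is easy to observe directly with EXPLAIN ANALYZE; a minimal illustration, assuming a table t1 with an indexed id column (names are illustrative, and the crossover point depends on statistics and cost settings).

EXPLAIN ANALYZE SELECT * FROM t1 WHERE id = 42;   -- few matching rows: an index scan is likely
EXPLAIN ANALYZE SELECT * FROM t1 WHERE id > 0;    -- most of the table: a sequential scan is likely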
] |
[
{
"msg_contents": "Hi all,\n\n I have some tables where all the queries that will be executed are \ntimestamps driven, so it'd be nice to have an index over those fields.\n\n On older versions of PostgreSQL, at least in my experience, queries \non timestamps fields even having indexes where performing quite bad \nmainly sequential scans where performed.\n\n Now I have a newer version of PostgreSQL and I've done some tests \ncomparing the performance of an index over a timestamp field with a \nnumeric field. To do so, I have the following table:\n\n Table \"public.payment_transactions\"\n Column | Type | Modifiers\n----------------+-----------------------------+---------------------------------\ntransaction_id | character varying(32) | not null\ntimestamp_in | timestamp without time zone | default now()\ncredits | integer |\nepoch_in | bigint |\nepoch_in2 | double precision |\nIndexes:\n \"pk_paytrans_transid\" PRIMARY KEY, btree (transaction_id)\n \"idx_paytrans_epochin\" btree (epoch_in)\n \"idx_paytrans_epochin2\" btree (epoch_in2)\n \"idx_paytrans_timestamp\" btree (timestamp_in)\n\ntimestamp_in it's the timestamp, epoch_in and epoch_in2 are the epoch \nequivalent to timestamp to test how the indexes perform. We have three \ndifferent indexes (testing purposes) one over a timestamp field, one \nover an int8 and one over a double precision field.\n\nWhile doing the tests this table has about 100.000 entries.\n\n\n\nTo test the diferent indexes I have executed the following:\n\nIndex over timestamp_in (timestamp)\n\n# explain analyze select * from payment_transactions where timestamp_in \nbetween '2007-02-13'::timestamp and '2007-02-15'::timestamp;\n \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using idx_paytrans_timestamp on payment_transactions \n(cost=0.00..1480.24 rows=1698 width=138) (actual time=11.693..310.402 \nrows=1587 loops=1)\n Index Cond: ((timestamp_in >= '2007-02-13 00:00:00'::timestamp \nwithout time zone) AND (timestamp_in <= '2007-02-15 00:00:00'::timestamp \nwithout time zone))\nTotal runtime: 318.328 ms\n(3 rows)\n\n\nIndex over epoch_in (int8)\n\n# explain analyze select * from payment_transactions where epoch_in \nbetween extract( epoch from '2007-02-13'::date )::int8 and extract( \nepoch from '2007-02-15'::date )::int8;\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using idx_paytrans_epochin on payment_transactions \n(cost=0.00..1483.24 rows=1698 width=138) (actual time=34.369..114.943 \nrows=1587 loops=1)\n Index Cond: ((epoch_in >= 1171321200::bigint) AND (epoch_in <= \n1171494000::bigint))\nTotal runtime: 120.804 ms\n(3 rows)\n\n\nIndex over epoch_in (double precision)\n\n# explain analyze select * from payment_transactions where epoch_in2 \nbetween extract( epoch from '2007-02-13'::date ) and extract( epoch from \n'2007-02-15'::date );\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using idx_paytrans_epochin2 on payment_transactions \n(cost=0.00..1479.24 rows=1698 width=138) (actual time=26.115..51.357 \nrows=1587 loops=1)\n Index Cond: ((epoch_in2 >= 1171321200::double precision) AND \n(epoch_in2 <= 1171494000::double precision))\nTotal runtime: 57.065 
ms\n(3 rows)\n\n\nAs you can see, the time differences are very big:\n Timestamp: 318.328 ms\n int8 index: 120.804 ms\n double precision: 57.065 ms\n\nIs this normal? Am I doing anything wrong?\n\nAs a rule of thumb, is it better to store epochs than timestamps?\n\nThank you very much\n-- \nArnau\n",
"msg_date": "Tue, 22 May 2007 12:39:02 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performace comparison of indexes over timestamp fields "
},
{
"msg_contents": "On 5/22/07, Arnau <[email protected]> wrote:\n> On older versions of PostgreSQL, at least in my experience, queries\n> on timestamps fields even having indexes where performing quite bad\n> mainly sequential scans where performed.\n\nPostgreSQL uses B-tree indexes for scalar values. For an expression\nsuch as \"t between a and b\", I believe it's going to match both sides\nof the table independently (ie., t >= a and t <= b) and intersect\nthese subsets. This is inefficient.\n\nYou should get better performance by mapping timestamps to a\none-dimensional plane and indexing them using GiST. GiST implements an\nR-tree-like structure that supports bounding-box searches.\n\nThis involves setting up a functional index:\n\n create index ... on payment_transactions using gist (\n box(point(extract(epoch from time), 0), point(extract(epoch from\ntime), 0)) box_ops)\n\nI'm using box() here because GiST doesn't have a concept of points.\n\nThen insert as usual, and then query with something like:\n\n select ... from payment_transactions\n where box(\n point(extract(epoch from '2006-04-01'::date), 0),\n point(extract(epoch from '2006-08-01'::date), 0)) && box(\n point(extract(epoch from time), 0),\n point(extract(epoch from time), 0));\n\nPostgreSQL should be able to exploit the GiST index by recognizing\nthat the result of box() expression operand is already computed in the\nindex.\n\nThis much less inconvenient and portable -- I would love for\nPostgreSQL to be provide syntactic sugar and special-casing to make\nthis transparent -- but worth it if you are dealing with a lot of\nrange searches.\n\n> Now I have a newer version of PostgreSQL and I've done some tests\n> comparing the performance of an index over a timestamp field with a\n> numeric field. To do so, I have the following table:\n>\n> Table \"public.payment_transactions\"\n> Column | Type | Modifiers\n> ----------------+-----------------------------+---------------------------------\n> transaction_id | character varying(32) | not null\n> timestamp_in | timestamp without time zone | default now()\n> credits | integer |\n> epoch_in | bigint |\n> epoch_in2 | double precision |\n[snip]\n\nA timestamp is stored internally as an 8-byte double-precision float.\nTherefore, timestamp_in and epoch_in2 should behave identically.\n\n> While doing the tests this table has about 100.000 entries.\n\nMake sure PostgreSQL is able to keep the entire table in memory by\nsetting shared_buffers; you don't want to be hitting to the disk.\n\nMake sure you run \"analyze\" on the table before you execute the test.\n\n> To test the diferent indexes I have executed the following:\n\nYour query plans are roughly identical. The difference in the timings\nimplies that you only ran the queries once. I suggest you run each\nquery at least 10 times, and report the individual numbers (the \"total\nruntime\" parts of the output) you get. Arithmetic means are not that\ninteresting.\n\nAlexander.\n",
"msg_date": "Tue, 22 May 2007 14:39:33 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace comparison of indexes over timestamp fields"
},
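A repeatable way to apply the advice above, reusing the table and queries from the original post: analyze first, then run each variant several times and compare the reported "Total runtime" values, since the first execution mostly measures cold block reads.

ANALYZE payment_transactions;

EXPLAIN ANALYZE SELECT * FROM payment_transactions
WHERE timestamp_in BETWEEN '2007-02-13'::timestamp AND '2007-02-15'::timestamp;

EXPLAIN ANALYZE SELECT * FROM payment_transactions
WHERE epoch_in BETWEEN extract(epoch FROM '2007-02-13'::date)::int8
                   AND extract(epoch FROM '2007-02-15'::date)::int8;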
{
"msg_contents": "On Tue, May 22, 2007 at 02:39:33PM +0200, Alexander Staubo wrote:\n> PostgreSQL uses B-tree indexes for scalar values. For an expression\n> such as \"t between a and b\", I believe it's going to match both sides\n> of the table independently (ie., t >= a and t <= b) and intersect\n> these subsets. This is inefficient.\n\nA B-tree index can satisfy range queries such as this.\n\n> You should get better performance by mapping timestamps to a\n> one-dimensional plane and indexing them using GiST. GiST implements an\n> R-tree-like structure that supports bounding-box searches.\n\nYou may be thinking of interval overlaps?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 22 May 2007 14:43:20 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace comparison of indexes over timestamp fields"
},
{
"msg_contents": "On 5/22/07, Steinar H. Gunderson <[email protected]> wrote:\n> On Tue, May 22, 2007 at 02:39:33PM +0200, Alexander Staubo wrote:\n> > PostgreSQL uses B-tree indexes for scalar values. For an expression\n> > such as \"t between a and b\", I believe it's going to match both sides\n> > of the table independently (ie., t >= a and t <= b) and intersect\n> > these subsets. This is inefficient.\n>\n> A B-tree index can satisfy range queries such as this.\n\nYou're right, and I'm wrong -- my head is not in the right place\ntoday. B-trees are inefficient for intervals, but quite satisfactory\nfor range searches.\n\nAlexander.\n",
"msg_date": "Tue, 22 May 2007 15:00:51 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace comparison of indexes over timestamp fields"
},
{
"msg_contents": "Arnau <[email protected]> writes:\n> As you can see the time difference are very big\n> Timestamp: 318.328 ms\n> int8 index: 120.804 ms\n> double precision: 57.065 ms\n\nAs already suggested elsewhere, you probably weren't sufficiently\ncareful in taking your measurements.\n\nA look at the code says that int8 comparison ought to be the fastest\nof these. If timestamps are implemented as floats (which you didn't\nsay) their comparison speed ought to be *exactly* the same as floats,\nbecause the comparison functions are line-for-line the same. If\ntimestamps are implemented as int8 then they should be similar to\nint8 comparisons, maybe a tad slower due to an extra level of function\ncall. But in any case it seems likely that the actual comparison\nfunction calls would be just a small fraction of the runtime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 May 2007 10:11:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performace comparison of indexes over timestamp fields "
}
] |
[
{
"msg_contents": "Hi,\n\nOut of curiosity, can anyone share his tips & tricks to validate a \nmachine before labelling it as 'ready to use postgres - you probably \nwon't trash my data today' ?\nI'm looking for a way to stress test components especially kernel/disk \nto have confidence > 0 that I can use postgres on top of it.\n\nAny secret trick is welcome (beside the memtest one :)\n\nThanks !\n\n-- stephane\n",
"msg_date": "Tue, 22 May 2007 15:10:29 +0200",
"msg_from": "Stephane Bailliez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tips & Tricks for validating hardware/os"
},
{
"msg_contents": "On 5/22/07, Stephane Bailliez <[email protected]> wrote:\n> Out of curiosity, can anyone share his tips & tricks to validate a\n> machine before labelling it as 'ready to use postgres - you probably\n> won't trash my data today' ?\n> I'm looking for a way to stress test components especially kernel/disk\n> to have confidence > 0 that I can use postgres on top of it.\n>\n> Any secret trick is welcome (beside the memtest one :)\n\nCompile the Linux kernel -- it's a pretty decent stress test.\n\nYou could run pgbench, which comes with PostgreSQL (as part of the\ncontrib package). Give a database size that's larger than the amount\nof physical memory in the box.\n\nAlexander.\n",
"msg_date": "Tue, 22 May 2007 15:45:25 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tips & Tricks for validating hardware/os"
},
{
"msg_contents": "\n>> Out of curiosity, can anyone share his tips & tricks to validate a\n>> machine before labelling it as 'ready to use postgres - you probably\n>> won't trash my data today' ?\n>> I'm looking for a way to stress test components especially kernel/disk\n>> to have confidence > 0 that I can use postgres on top of it.\n\n\tThat would be running a filesystem benchmark, pulling the plug, then \ncounting the dead.\n\n\thttp://sr5tech.com/write_back_cache_experiments.htm\n",
"msg_date": "Tue, 22 May 2007 15:55:26 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tips & Tricks for validating hardware/os"
},
{
"msg_contents": "On Tue, 22 May 2007, Stephane Bailliez wrote:\n\n> Out of curiosity, can anyone share his tips & tricks to validate a machine \n> before labelling it as 'ready to use postgres - you probably won't trash my \n> data today' ?\n\nWrite a little script that runs pgbench in a loop forever. Set your \nshared_buffer cache to use at least 50% of the memory in the machine, and \nadjust the database size and concurrent clients so it's generating a \nsubstantial amount of disk I/O and using a fair amount of the CPU.\n\nInstall the script so that it executes on system startup, like adding it \nto rc.local Put the machine close to your desk. Every time you walk by \nit, kill the power and then start it back up. This will give you a mix of \nlong overnight runs with no interruption to stress the overall system, \nwith a nice dose of recovery trauma. Skim the Postgres and OS log files \nevery day. Do that for a week, if it's still running your data should be \nsafe under real conditions.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Wed, 23 May 2007 00:11:35 -0400 (EDT)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tips & Tricks for validating hardware/os"
}
] |
[
{
"msg_contents": "Are there any performance improvements that come from using a domain \nover a check constraint (aside from the ease of management component)?\n\nthanks\n\n-- \nChander Ganesan\nOpen Technology Group, Inc.\nOne Copley Parkway, Suite 210\nMorrisville, NC 27560\nPhone: 877-258-8987/919-463-0999\n\n",
"msg_date": "Tue, 22 May 2007 12:56:21 -0400",
"msg_from": "Chander Ganesan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Domains versus Check Constraints"
},
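For reference, the two forms being compared look roughly like this (the postal-code rule and table names are invented for illustration); per the replies below, the run-time cost of checking either one is essentially the same:

    -- as a reusable domain
    CREATE DOMAIN us_zip AS text CHECK (VALUE ~ '^[0-9]{5}$');
    CREATE TABLE addr_with_domain (zip us_zip);

    -- as an inline check constraint
    CREATE TABLE addr_with_check (zip text CHECK (zip ~ '^[0-9]{5}$'));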
{
"msg_contents": "On Tue, May 22, 2007 at 12:56:21PM -0400, Chander Ganesan wrote:\n> Are there any performance improvements that come from using a domain \n> over a check constraint (aside from the ease of management component)?\n\nNo. Plus support for domain constraints isn't universal (plpgsql doesn't\nhonor them, for example).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)",
"msg_date": "Sun, 27 May 2007 10:59:42 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Domains versus Check Constraints"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Tue, May 22, 2007 at 12:56:21PM -0400, Chander Ganesan wrote:\n>> Are there any performance improvements that come from using a domain \n>> over a check constraint (aside from the ease of management component)?\n> \n> No. Plus support for domain constraints isn't universal (plpgsql doesn't\n> honor them, for example).\n\nsince 8.2 domain constraints are enforced everywhere ...\n\n\nStefan\n",
"msg_date": "Sun, 27 May 2007 18:34:22 +0200",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Domains versus Check Constraints"
}
] |
[
{
"msg_contents": "My application has two threads, one inserts thousands of records per second into a table (t1) and the other thread periodically deletes expired records (also in thousands) from the same table (expired ones). So, we have one thread adding a row while the other thread is trying to delete a row. In a short time the overall performance of any sql statements on that instance degrades. (ex. Select count(*) from t1 takes more then few seconds with less than 10K rows).\n\nMy question is: Would any sql statement perform better if I would rename the table to t1_%indx periodically, create a new table t1 (for new inserts) and just drop the tables with expired records rather then doing a delete record? (t1 is a simple table with many rows and no constraints). \n\n(I know I could run vacuum analyze) \n\nThanks,\n\nOrhan A. \n\n\n\n\n\nDrop table vs Delete record\n\n\n\n\nMy application has two threads, one inserts thousands of records per second into a table (t1) and the other thread periodically deletes expired records (also in thousands) from the same table (expired ones). So, we have one thread adding a row while the other thread is trying to delete a row. In a short time the overall performance of any sql statements on that instance degrades. (ex. Select count(*) from t1 takes more then few seconds with less than 10K rows).\n\nMy question is: Would any sql statement perform better if I would rename the table to t1_%indx periodically, create a new table t1 (for new inserts) and just drop the tables with expired records rather then doing a delete record? (t1 is a simple table with many rows and no constraints).\n\n(I know I could run vacuum analyze)\n\nThanks,\n\nOrhan A.",
"msg_date": "Tue, 22 May 2007 14:38:40 -0400",
"msg_from": "\"Orhan Aglagul\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Drop table vs Delete record"
}
] |
[
{
"msg_contents": "Consider table partitioning (it's described in the manual).\n\nAndreas\n\n-- Ursprüngl. Mitteil. --\nBetreff:\t[PERFORM] Drop table vs Delete record\nVon:\t\"Orhan Aglagul\" <[email protected]>\nDatum:\t\t22.05.2007 18:42\n\n\nMy application has two threads, one inserts thousands of records per second into a table (t1) and the other thread periodically deletes expired records (also in thousands) from the same table (expired ones). So, we have one thread adding a row while the other thread is trying to delete a row. In a short time the overall performance of any sql statements on that instance degrades. (ex. Select count(*) from t1 takes more then few seconds with less than 10K rows).\n\nMy question is: Would any sql statement perform better if I would rename the table to t1_%indx periodically, create a new table t1 (for new inserts) and just drop the tables with expired records rather then doing a delete record? (t1 is a simple table with many rows and no constraints). \n\n(I know I could run vacuum analyze) \n\nThanks,\n\nOrhan A. \n\n",
"msg_date": "Tue, 22 May 2007 20:48:39 +0200",
"msg_from": "\"Andreas Kostyrka\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Drop table vs Delete record"
},
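A bare-bones sketch of the inheritance-based partitioning the manual (and Andreas) point to, applied to the t1 case; the timestamp column and child-table naming are assumptions, but the payoff is that expiring old rows becomes a DROP TABLE instead of a mass DELETE that leaves dead tuples behind:

    CREATE TABLE t1 (id bigint, created timestamptz NOT NULL, payload text);

    -- one child table per day, constrained so the planner can exclude it
    CREATE TABLE t1_20070522 (
        CHECK (created >= '2007-05-22' AND created < '2007-05-23')
    ) INHERITS (t1);

    -- inserts go into the current child; expiry is then simply:
    DROP TABLE t1_20070521;

With constraint_exclusion = on, queries against t1 skip the children whose CHECK constraints rule them out.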
{
"msg_contents": "Checking out right now....\nThanks for the fast response.\n\n-----Original Message-----\nFrom: Andreas Kostyrka [mailto:[email protected]] \nSent: Tuesday, May 22, 2007 11:49 AM\nTo: Orhan Aglagul\nCc: <[email protected]>\nSubject: AW: [PERFORM] Drop table vs Delete record\n\nConsider table partitioning (it's described in the manual).\n\nAndreas\n\n-- Ursprüngl. Mitteil. --\nBetreff:\t[PERFORM] Drop table vs Delete record\nVon:\t\"Orhan Aglagul\" <[email protected]>\nDatum:\t\t22.05.2007 18:42\n\n\nMy application has two threads, one inserts thousands of records per second into a table (t1) and the other thread periodically deletes expired records (also in thousands) from the same table (expired ones). So, we have one thread adding a row while the other thread is trying to delete a row. In a short time the overall performance of any sql statements on that instance degrades. (ex. Select count(*) from t1 takes more then few seconds with less than 10K rows).\n\nMy question is: Would any sql statement perform better if I would rename the table to t1_%indx periodically, create a new table t1 (for new inserts) and just drop the tables with expired records rather then doing a delete record? (t1 is a simple table with many rows and no constraints). \n\n(I know I could run vacuum analyze) \n\nThanks,\n\nOrhan A. \n\n",
"msg_date": "Tue, 22 May 2007 14:53:21 -0400",
"msg_from": "\"Orhan Aglagul\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Drop table vs Delete record"
}
] |
[
{
"msg_contents": "You forgot pulling some RAID drives at random times to see how the hardware deals with the fact. And how it deals with the rebuild afterwards. (Many RAID solutions leave you with worst of both worlds, taking longer to rebuild than a restore from backup would take, while at the same ime providing a disc IO performance that is SO bad that the server becomes useless during the rebuild)\n\nAndreas\n\n-- Ursprüngl. Mitteil. --\nBetreff:\tRe: [PERFORM] Tips & Tricks for validating hardware/os\nVon:\tGreg Smith <[email protected]>\nDatum:\t\t23.05.2007 05:15\n\nOn Tue, 22 May 2007, Stephane Bailliez wrote:\n\n> Out of curiosity, can anyone share his tips & tricks to validate a machine \n> before labelling it as 'ready to use postgres - you probably won't trash my \n> data today' ?\n\nWrite a little script that runs pgbench in a loop forever. Set your \nshared_buffer cache to use at least 50% of the memory in the machine, and \nadjust the database size and concurrent clients so it's generating a \nsubstantial amount of disk I/O and using a fair amount of the CPU.\n\nInstall the script so that it executes on system startup, like adding it \nto rc.local Put the machine close to your desk. Every time you walk by \nit, kill the power and then start it back up. This will give you a mix of \nlong overnight runs with no interruption to stress the overall system, \nwith a nice dose of recovery trauma. Skim the Postgres and OS log files \nevery day. Do that for a week, if it's still running your data should be \nsafe under real conditions.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n\n",
"msg_date": "Wed, 23 May 2007 08:32:25 +0200",
"msg_from": "\"Andreas Kostyrka\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tips & Tricks for validating hardware/os"
},
{
"msg_contents": "\nOn May 23, 2007, at 2:32 AM, Andreas Kostyrka wrote:\n\n> You forgot pulling some RAID drives at random times to see how the \n> hardware deals with the fact. And how it deals with the rebuild \n> afterwards. (Many RAID solutions leave you with worst of both \n> worlds, taking longer to rebuild than a restore from backup would \n> take, while at the same ime providing a disc IO performance that is \n> SO bad that the server becomes useless during the rebuild)\n\n*cough* adaptec *cough* :-(\n\n",
"msg_date": "Wed, 23 May 2007 11:51:49 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tips & Tricks for validating hardware/os"
}
] |
[
{
"msg_contents": "\nHi,\n\nWe're seeing these type of error messages:\n\nNOTICE: number of page slots needed (237120) exceeds max_fsm_pages (120000)\nHINT: Consider increasing the configuration parameter \"max_fsm_pages\" to a value over 237120.\nvacuumdb: vacuuming database \"fb_2007_01_17\"\n\nI've played 'catch up' wrt adjusting max_fsm_pages (seems to be a regular event),\nhowever am wondering if the vacuum analyze which reports the error was\nactually completed?\n\nThanks\nSusan Russo\n\n",
"msg_date": "Wed, 23 May 2007 09:26:54 -0400 (EDT)",
"msg_from": "Susan Russo <[email protected]>",
"msg_from_op": true,
"msg_subject": "does VACUUM ANALYZE complete with this error?"
},
{
"msg_contents": "Susan Russo <[email protected]> writes:\n> We're seeing these type of error messages:\n\n> NOTICE: number of page slots needed (237120) exceeds max_fsm_pages (120000)\n> HINT: Consider increasing the configuration parameter \"max_fsm_pages\" to a value over 237120.\n> vacuumdb: vacuuming database \"fb_2007_01_17\"\n\n> I've played 'catch up' wrt adjusting max_fsm_pages (seems to be a regular event),\n\nWhat PG version is that? I recall we fixed a problem recently that\ncaused the requested max_fsm_pages to increase some more when you'd\nincreased it to what the message said.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 May 2007 10:01:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does VACUUM ANALYZE complete with this error? "
},
{
"msg_contents": "\nOn May 23, 2007, at 9:26 AM, Susan Russo wrote:\n\n> I've played 'catch up' wrt adjusting max_fsm_pages (seems to be a \n> regular event),\n> however am wondering if the vacuum analyze which reports the error was\n> actually completed?\n\nYes, it completed. However not all pages with open space in them are \naccounted for, so you will probably end up allocating more pages than \nyou otherwise would have. I take it as a sign of not running vacuum \noften enough so that the pages with available space get filled up \nsooner.\n\nI'd bump fsm pages up to perhaps double or triple what you've got \nnow, and try running vacuum a bit more often on your hottest tables \n(or even consider changing your table structure to limit the \n\"hotness\" of that table). For example, if the table has a lot of \ncolumns, but only one is updated often, split that column out into \nits own table so that you have all the changes clustered into the \nsame set of pages while leaving the rest of the columns in place.\n\n",
"msg_date": "Wed, 23 May 2007 11:57:15 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does VACUUM ANALYZE complete with this error?"
}
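As a sketch of the column split Vivek describes (all names invented): the frequently updated value moves into a narrow side table so its churn stays concentrated in a handful of pages instead of bloating the wide rows.

    CREATE TABLE item (
        id          serial PRIMARY KEY,
        name        text,
        description text                  -- wide, rarely updated columns stay here
    );

    CREATE TABLE item_stats (
        item_id integer PRIMARY KEY REFERENCES item(id),
        hits    bigint NOT NULL DEFAULT 0 -- the hot, frequently updated column
    );

    UPDATE item_stats SET hits = hits + 1 WHERE item_id = 42;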
] |
[
{
"msg_contents": "Hi, \n \nI have a table with varchar and text columns, and I have to search through\nthese text in the whole table. \n \nAn example would be:\nSELECT * FROM table\n WHERE name like '%john%' or street like '%srt%'\n \nAnyway, the query planner always does seq scan on the whole table and that\ntakes some time. How can this be optimized or made in another way to be\nfaster?\n \nI tried to make indexes on the columns but no success. \n \nPG 8.2\n \nRegards,\nAndy.\n\n\n\n\n\nHi, \n\n \nI have a table with \nvarchar and text columns, and I have to search through these text in the whole \ntable. \n \nAn example would \nbe:\nSELECT * \nFROM \ntable\n \nWHERE name like '%john%' or street like '%srt%'\n \nAnyway, the query \nplanner always does seq scan on the whole table and that takes some time. How \ncan this be optimized or made in another way to be faster?\n \nI tried to make \nindexes on the columns but no success. \n \nPG \n8.2\n \nRegards,\nAndy.",
"msg_date": "Wed, 23 May 2007 18:08:49 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "LIKE search and performance"
},
{
"msg_contents": "Andy wrote:\n> SELECT * FROM table\n> WHERE name like '%john%' or street like '%srt%'\n> \n> Anyway, the query planner always does seq scan on the whole table and that\n> takes some time. How can this be optimized or made in another way to be\n> faster?\n> \n> I tried to make indexes on the columns but no success. \n\nNone of the normal indexes will work for finding text in the middle of a \nstring. If you do think of a simple way of solving this, drop a short \nletter explaining your idea to your local patent office followed by the \nNobel prize committee.\n\nHowever, one of the contrib packages is \"tsearch2\" which is designed to \ndo keyword searches on text for you. It'll also handle stemming (e.g. \n\"search\" will match \"searching\" etc.) with the right choice of \ndictionary. Loads of other clever stuff in it too.\n\nIt's one of the optional packages with most Linux packaging systems and \non the Windows one too. If you install from source see the contrib/ \ndirectory for details.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 23 May 2007 16:52:12 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
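To make the tsearch2 suggestion concrete, a rough 8.2-era setup might look like the following; it assumes contrib/tsearch2 has been installed into the database, and uses an invented table name (people) with the name/street columns from the example:

    ALTER TABLE people ADD COLUMN fti tsvector;
    UPDATE people SET fti =
        to_tsvector('default', coalesce(name,'') || ' ' || coalesce(street,''));
    CREATE INDEX people_fti_idx ON people USING gist (fti);
    CREATE TRIGGER people_fti_trg BEFORE INSERT OR UPDATE ON people
        FOR EACH ROW EXECUTE PROCEDURE tsearch2(fti, name, street);

    -- word searches can then use the index:
    SELECT * FROM people WHERE fti @@ to_tsquery('default', 'john');

Note this matches whole (stemmed) words, not arbitrary substrings, which is usually what a name search wants anyway.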
{
"msg_contents": "Andy wrote:\n> Hi,\n> \n> I have a table with varchar and text columns, and I have to search \n> through these text in the whole table.\n> \n> An example would be:\n> SELECT * FROM table\n> WHERE name like '%john%' or street like '%srt%'\n> \n> Anyway, the query planner always does seq scan on the whole table and \n> that takes some time. How can this be optimized or made in another way \n> to be faster?\n\nUse tsearch2 (http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/) for full text indexing.\n\nRigmor\n\n> \n> I tried to make indexes on the columns but no success.\n> \n> PG 8.2\n> \n> Regards,\n> Andy.\n\n\n\n",
"msg_date": "Wed, 23 May 2007 18:52:21 +0300",
"msg_from": "Rigmor Ukuhe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "Am 23.05.2007 um 09:08 schrieb Andy:\n\n> I have a table with varchar and text columns, and I have to search \n> through these text in the whole table.\n>\n> An example would be:\n> SELECT * FROM table\n> WHERE name like '%john%' or street \n> like '%srt%'\n>\n> Anyway, the query planner always does seq scan on the whole table \n> and that takes some time. How can this be optimized or made in \n> another way to be faster?\n\nThe problem is that normal indexes cannot be used for \"contains\" \nqueries.\n\nIf you need fulltext search capabilities you have to take a look at \ntsearch2 or an external search engine like Lucene.\n\ncug\n",
"msg_date": "Wed, 23 May 2007 10:00:18 -0600",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "On 5/23/07, Andy <[email protected]> wrote:\n> An example would be:\n> SELECT * FROM table\n> WHERE name like '%john%' or street like '%srt%'\n>\n> Anyway, the query planner always does seq scan on the whole table and that\n> takes some time. How can this be optimized or made in another way to be\n> faster?\n\nThere's no algorithm in existence that can \"index\" arbitrary\nsubstrings the way you think. The only rational way to accomplish this\nis to first break the text into substrings using some algorithm (eg.,\nwords delimited by whitespace and punctuation), and index the\nsubstrings individually.\n\nYou can do this using vanilla PostgreSQL, and you can use Tsearch2\nand/or its GIN indexes to help speed up the searches. The simplest\nsolution would be to put all the substrings in a table, one row per\nsubstring, along with an attribute referencing the source table/row.\n\nAt this point you have effectively reduced your search space: you can\nuse a query to isolate the words in your \"dictionary\" that contain the\nsubstrings. So a query might be:\n\n select ... from persons where id in (\n select person_id from person_words\n where word like '%john%';\n )\n\nThe \"like\" search, even though it will use a sequential scan, is bound\nto be much faster on these small words than searching for substrings\nthrough large gobs of text in the persons table.\n\nNote that PostgreSQL *can* exploit the index for *prefix* matching, if\nyou tell it to use the right operator class:\n\n create index persons_name_index on persons (name text_pattern_ops);\n\nor, if you're using varchars (though there is rarely any reason to):\n\n create index persons_name_index on persons (name varchar_pattern_ops);\n\n(These two may be identical internally. Check the docs.)\n\nNow you can do:\n\n select ... from persons where name like 'john%';\n\nwhich will yield a query plan such as this:\n\n Index Scan using persons_name_index on persons (cost=0.00..8.27\nrows=1 width=29) (actual time=0.184..0.373 rows=51 loops=1)\n Index Cond: ((name ~>=~ 'john'::text) AND (name ~<~ 'joho'::text))\n Filter: (title ~~ 'john%'::text)\n\nAlexander.\n",
"msg_date": "Wed, 23 May 2007 18:05:26 +0200",
"msg_from": "\"Alexander Staubo\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "Thank you all for the answers. \nI will try your suggestions and see what that brings in terms of\nperformance. \n\nAndy. \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Rigmor Ukuhe\n> Sent: Wednesday, May 23, 2007 6:52 PM\n> Cc: [email protected]\n> Subject: Re: [PERFORM] LIKE search and performance\n> \n> Andy wrote:\n> > Hi,\n> > \n> > I have a table with varchar and text columns, and I have to search \n> > through these text in the whole table.\n> > \n> > An example would be:\n> > SELECT * FROM table\n> > WHERE name like '%john%' or \n> street like '%srt%'\n> > \n> > Anyway, the query planner always does seq scan on the whole \n> table and \n> > that takes some time. How can this be optimized or made in \n> another way \n> > to be faster?\n> \n> Use tsearch2 \n> (http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/) for \n> full text indexing.\n> \n> Rigmor\n> \n> > \n> > I tried to make indexes on the columns but no success.\n> > \n> > PG 8.2\n> > \n> > Regards,\n> > Andy.\n> \n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n\n",
"msg_date": "Thu, 24 May 2007 10:03:42 +0300",
"msg_from": "\"Andy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "Alexander Staubo wrote:\n> On 5/23/07, Andy <[email protected]> wrote:\n>> An example would be:\n>> SELECT * FROM table\n>> WHERE name like '%john%' or street like \n>> '%srt%'\n>>\n>> Anyway, the query planner always does seq scan on the whole table and \n>> that\n>> takes some time. How can this be optimized or made in another way to be\n>> faster?\n>\n> There's no algorithm in existence that can \"index\" arbitrary\n> substrings the way you think. The only rational way to accomplish this\n> is to first break the text into substrings using some algorithm (eg.,\n> words delimited by whitespace and punctuation), and index the\n> substrings individually.\nThat seems rather harsh. If I'd put an index on each of these colomns \nI'd certainly\nexpect it to use the indices - and I'm pretty sure that Sybase would. \nI'd expect\nit to scan the index leaf pages instead of the table itself - they \nshould be much\nmore compact and also likely to be hot in cache.\n\nWhy *wouldn't* the planner do this?\n\nJames\n\n",
"msg_date": "Thu, 24 May 2007 19:50:29 +0100",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "James Mansion wrote:\n> Alexander Staubo wrote:\n>> On 5/23/07, Andy <[email protected]> wrote:\n>>> An example would be:\n>>> SELECT * FROM table\n>>> WHERE name like '%john%' or street like\n>>> '%srt%'\n>>>\n>>> Anyway, the query planner always does seq scan on the whole table and\n>>> that\n>>> takes some time. How can this be optimized or made in another way to be\n>>> faster?\n>>\n>> There's no algorithm in existence that can \"index\" arbitrary\n>> substrings the way you think. The only rational way to accomplish this\n>> is to first break the text into substrings using some algorithm (eg.,\n>> words delimited by whitespace and punctuation), and index the\n>> substrings individually.\n> That seems rather harsh. If I'd put an index on each of these colomns\n> I'd certainly\n> expect it to use the indices - and I'm pretty sure that Sybase would. \n> I'd expect\n> it to scan the index leaf pages instead of the table itself - they\n> should be much\n> more compact and also likely to be hot in cache.\n\nIf Sybase is still like SQL Server (or the other way around), it *may*\nend up scanning the index *IFF* the index is a clustered index. If it's\na normal index, it will do a sequential scan on the table.\n\nYes, the leaf page of the index is more compact, but you also have to\nscan the intermediate pages to get to the leaf pages. But again, it can\nbe a win. On such a system.\n\nIt's not a win on PostgreSQL, because of our MVCC implementation. We\nneed to scan *both* index *and* data pages if we go down that route, in\nwhich case it's a lot faster to just scan the data pages alone.\n\nI don't really know how MSSQL deals with this now that they have\nMVCC-ish behavior, and I have no idea at all if sybase has anything like\nMVCC.\n\n\n> Why *wouldn't* the planner do this?\n\nThe question should be why the optimizer doesn't consider it, and the\nexecutor uses it. The planner really doesn't decide that part :-)\nHopefully, the answer can be found above.\n\n//Magnus\n",
"msg_date": "Thu, 24 May 2007 21:23:55 +0200",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "\n> If Sybase is still like SQL Server (or the other way around), it *may*\n> end up scanning the index *IFF* the index is a clustered index. If it's\n> a normal index, it will do a sequential scan on the table.\n>\n> \nAre you sure its not covered? Have to check at work - but I'm off next \nweek so it'll have to wait.\n\n> It's not a win on PostgreSQL, because of our MVCC implementation. We\n> need to scan *both* index *and* data pages if we go down that route, in\n> which case it's a lot faster to just scan the data pages alone.\n>\n> \nWhy do you need to go to all the data pages - doesn't the index \nstructure contain all the keys so\nyou prefilter and then check to see if the *matched* items are still in \nview? I'll be first to admit I\nknow zip about Postgres, but it seems odd - doesn't the index contain \ncopies of the key values?.\n\nI suspect that I mis-spoke with 'leaf'. I really just mean 'all index \npages with data', since the scan\ndoes not even need to be in index order, just a good way to get at the \ndata in a compact way.\n\n\n",
"msg_date": "Thu, 24 May 2007 21:54:34 +0100",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "On Thu, 2007-05-24 at 21:54 +0100, James Mansion wrote:\n> > If Sybase is still like SQL Server (or the other way around), it *may*\n> > end up scanning the index *IFF* the index is a clustered index. If it's\n> > a normal index, it will do a sequential scan on the table.\n> >\n> > \n> Are you sure its not covered? Have to check at work - but I'm off next \n> week so it'll have to wait.\n> \n> > It's not a win on PostgreSQL, because of our MVCC implementation. We\n> > need to scan *both* index *and* data pages if we go down that route, in\n> > which case it's a lot faster to just scan the data pages alone.\n> >\n> > \n> Why do you need to go to all the data pages - doesn't the index \n> structure contain all the keys so\n> you prefilter and then check to see if the *matched* items are still in \n> view? I'll be first to admit I\n> know zip about Postgres, but it seems odd - doesn't the index contain \n> copies of the key values?.\n> \n> I suspect that I mis-spoke with 'leaf'. I really just mean 'all index \n> pages with data', since the scan\n> does not even need to be in index order, just a good way to get at the \n> data in a compact way.\n\nPG could scan the index looking for matches first and only load the\nactual rows if it found a match, but that could only be a possible win\nif there were very few matches, because the difference in cost between a\nfull index scan and a sequential scan would need to be greater than the\ncost of randomly fetching all of the matching data rows from the table\nto look up the visibility information. \n\nSo yes it would be possible, but the odds of it being faster than a\nsequential scan are small enough to make it not very useful.\n\nAnd since it's basically impossible to know the selectivity of this kind\nof where condition, I doubt the planner would ever realistically want to\nchoose that plan anyway because of its poor worst-case behavior.\n\n-- Mark\n",
"msg_date": "Thu, 24 May 2007 14:02:40 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "Mark Lewis wrote:\n\n> PG could scan the index looking for matches first and only load the\n> actual rows if it found a match, but that could only be a possible win\n> if there were very few matches, because the difference in cost between a\n> full index scan and a sequential scan would need to be greater than the\n> cost of randomly fetching all of the matching data rows from the table\n> to look up the visibility information. \n\nJust out of curiosity: Does Postgress store a duplicate of the data in the index, even for long strings? I thought indexes only had to store the string up to the point where there was no ambiguity, for example, if I have \"missing\", \"mississippi\" and \"misty\", the index only needs \"missin\", \"missis\" and \"mist\" in the actual index. This would make it impossible to use a full index scan for a LIKE query.\n\nCraig\n",
"msg_date": "Thu, 24 May 2007 14:23:48 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "Craig James wrote:\n> Mark Lewis wrote:\n> \n> >PG could scan the index looking for matches first and only load the\n> >actual rows if it found a match, but that could only be a possible win\n> >if there were very few matches, because the difference in cost between a\n> >full index scan and a sequential scan would need to be greater than the\n> >cost of randomly fetching all of the matching data rows from the table\n> >to look up the visibility information. \n> \n> Just out of curiosity: Does Postgress store a duplicate of the data in the \n> index, even for long strings? I thought indexes only had to store the \n> string up to the point where there was no ambiguity, for example, if I have \n> \"missing\", \"mississippi\" and \"misty\", the index only needs \"missin\", \n> \"missis\" and \"mist\" in the actual index.\n\nWhat would happen when you inserted a new tuple with just \"miss\"? You\nwould need to expand all the other tuples in the index.\n\n-- \nAlvaro Herrera http://www.flickr.com/photos/alvherre/\n\"Puedes vivir solo una vez, pero si lo haces bien, una vez es suficiente\"\n",
"msg_date": "Thu, 24 May 2007 17:46:54 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "On Thu, May 24, 2007 at 02:02:40PM -0700, Mark Lewis wrote:\n> PG could scan the index looking for matches first and only load the\n> actual rows if it found a match, but that could only be a possible win\n> if there were very few matches, because the difference in cost between a\n> full index scan and a sequential scan would need to be greater than the\n> cost of randomly fetching all of the matching data rows from the table\n> to look up the visibility information. \n\n> So yes it would be possible, but the odds of it being faster than a\n> sequential scan are small enough to make it not very useful.\n\n> And since it's basically impossible to know the selectivity of this kind\n> of where condition, I doubt the planner would ever realistically want to\n> choose that plan anyway because of its poor worst-case behavior.\n\nWhat is a real life example where an intelligent and researched\ndatabase application would issue a like or ilike query as their\nprimary condition in a situation where they expected very high\nselectivity?\n\nAvoiding a poor worst-case behaviour for a worst-case behaviour that\nwon't happen doesn't seem practical.\n\nWhat real life examples, that could not be implemented a better way,\nwould behave poorly if like/ilike looked at the index first to filter?\n\nI don't understand... :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Thu, 24 May 2007 17:54:51 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "Alvaro Herrera wrote:\n >> Just out of curiosity: Does Postgress store a duplicate of the data in the \n>> index, even for long strings? I thought indexes only had to store the \n>> string up to the point where there was no ambiguity, for example, if I have \n>> \"missing\", \"mississippi\" and \"misty\", the index only needs \"missin\", \n>> \"missis\" and \"mist\" in the actual index.\n> \n> What would happen when you inserted a new tuple with just \"miss\"? You\n> would need to expand all the other tuples in the index.\n\nThat's right. This technique used by some index implementations is a tradeoff between size and update speed. Most words in most natural languages can be distinguished by the first few characters. The chances of having to modify more than a few surrounding nodes when you insert \"miss\" is small, so some implementations choose this method. Other implementations choose to store the full string. I was just curious which method Postgres uses.\n\nCraig\n\n",
"msg_date": "Thu, 24 May 2007 15:08:16 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "\n> PG could scan the index looking for matches first and only load the\n> actual rows if it found a match, but that could only be a possible win\n> if there were very few matches, because the difference in cost between a\n> full index scan and a sequential scan would need to be greater than the\n> cost of randomly fetching all of the matching data rows from the table\n> to look up the visibility information.\n\n\tIf you need to do that kind of thing, ie. seq scanning a table checking \nonly one column among a large table of many columns, then don't use an \nindex. An index, being a btree, needs to be traversed in order (or else, a \nlot of locking problems come up) which means some random accesses.\n\n\tSo, you could make a table, with 2 columns, updated via triggers : your \ntext field, and the primary key of your main table. Scanning that would be \nfaster.\n\n\tStill, a better solution for searching in text is :\n\n\t- tsearch2 if you need whole words\n\t- trigrams for any substring match\n\t- xapian for full text search with wildcards (ie. John* = Johnny)\n\n\tSpeed-wise those three will beat any seq scan on a large table by a huge \nmargin.\n",
"msg_date": "Fri, 25 May 2007 00:09:15 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
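One possible shape for the trigger-maintained search table PFC mentions; every object name here is invented, and a DELETE trigger on the base table would be needed too (omitted for brevity):

    CREATE TABLE people_search (person_id integer PRIMARY KEY, txt text);

    CREATE OR REPLACE FUNCTION people_search_sync() RETURNS trigger AS $$
    BEGIN
        DELETE FROM people_search WHERE person_id = NEW.id;
        INSERT INTO people_search (person_id, txt)
            VALUES (NEW.id, coalesce(NEW.name,'') || ' ' || coalesce(NEW.street,''));
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER people_search_sync_trg AFTER INSERT OR UPDATE ON people
        FOR EACH ROW EXECUTE PROCEDURE people_search_sync();

    -- a '%bar%' scan now touches only the narrow side table:
    SELECT person_id FROM people_search WHERE txt ILIKE '%bar%';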
{
"msg_contents": "[email protected] wrote:\n>> And since it's basically impossible to know the selectivity of this kind\n>> of where condition, I doubt the planner would ever realistically want to\n>> choose that plan anyway because of its poor worst-case behavior.\n> \n> What is a real life example where an intelligent and researched\n> database application would issue a like or ilike query as their\n> primary condition in a situation where they expected very high\n> selectivity?\n> \n> Avoiding a poor worst-case behaviour for a worst-case behaviour that\n> won't happen doesn't seem practical.\n\nBut if you are also filtering on e.g. date, and that has an index with \ngood selectivity, you're never going to use the text index anyway are \nyou? If you've only got a dozen rows to check against, might as well \njust read them in.\n\nThe only time it's worth considering the behaviour at all is *if* the \nworst-case is possible.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 09:13:25 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "On Fri, May 25, 2007 at 09:13:25AM +0100, Richard Huxton wrote:\n> [email protected] wrote:\n> >>And since it's basically impossible to know the selectivity of this kind\n> >>of where condition, I doubt the planner would ever realistically want to\n> >>choose that plan anyway because of its poor worst-case behavior.\n> >What is a real life example where an intelligent and researched\n> >database application would issue a like or ilike query as their\n> >primary condition in a situation where they expected very high\n> >selectivity?\n> >Avoiding a poor worst-case behaviour for a worst-case behaviour that\n> >won't happen doesn't seem practical.\n> But if you are also filtering on e.g. date, and that has an index with \n> good selectivity, you're never going to use the text index anyway are \n> you? If you've only got a dozen rows to check against, might as well \n> just read them in.\n> The only time it's worth considering the behaviour at all is *if* the \n> worst-case is possible.\n\nI notice you did not provide a real life example as requested. :-)\n\nThis seems like an ivory tower restriction. Not allowing best performance\nin a common situation vs not allowing worst performance in a not-so-common\nsituation.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Fri, 25 May 2007 10:16:30 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "[email protected] wrote:\n> On Fri, May 25, 2007 at 09:13:25AM +0100, Richard Huxton wrote:\n>> [email protected] wrote:\n>>>> And since it's basically impossible to know the selectivity of this kind\n>>>> of where condition, I doubt the planner would ever realistically want to\n>>>> choose that plan anyway because of its poor worst-case behavior.\n>>> What is a real life example where an intelligent and researched\n>>> database application would issue a like or ilike query as their\n>>> primary condition in a situation where they expected very high\n>>> selectivity?\n>>> Avoiding a poor worst-case behaviour for a worst-case behaviour that\n>>> won't happen doesn't seem practical.\n>> But if you are also filtering on e.g. date, and that has an index with \n>> good selectivity, you're never going to use the text index anyway are \n>> you? If you've only got a dozen rows to check against, might as well \n>> just read them in.\n>> The only time it's worth considering the behaviour at all is *if* the \n>> worst-case is possible.\n> \n> I notice you did not provide a real life example as requested. :-)\n\nOK - any application that allows user-built queries: <choose column: \nfoo> <choose filter: contains> <choose target: \"bar\">\n\nWant another? Any application that has a \"search by name\" box - users \ncan (and do) put one letter in and hit enter.\n\nUnfortunately you don't always have control over the selectivity of \nqueries issued.\n\n> This seems like an ivory tower restriction. Not allowing best performance\n> in a common situation vs not allowing worst performance in a not-so-common\n> situation.\n\nWhat best performance plan are you thinking of? I'm assuming we're \ntalking about trailing-wildcard matches here, rather than \"contains\" \nstyle matches.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 16:35:22 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "\n\n> OK - any application that allows user-built queries: <choose column: \n> foo> <choose filter: contains> <choose target: \"bar\">\n>\n> Want another? Any application that has a \"search by name\" box - users \n> can (and do) put one letter in and hit enter.\n>\n> Unfortunately you don't always have control over the selectivity of \n> queries issued.\n\n\t-*- HOW TO MAKE A SEARCH FORM -*-\n\n\tImagine you have to code the search on IMDB.\n\n\tThis is what a smart developer would do\n\n\tFirst, he uses AJAX autocompletion, so the thing is reactive.\n\tThen, he does not bother the user with a many-fields form. Instead of \nforcing the user to think (users HATE that), he writes smart code.\n\tDoes Google Maps have separate fields for country, city, street, zipcode \n? No. Because Google is about as smart as it gets.\n\n\tSo, you parse the user query.\n\n\tIf the user types, for instance, less than 3 letters (say, spi), he \nprobably wants stuff that *begins* with those letters. There is no point \nin searching for the letter \"a\" in a million movie titles database.\n\tSo, if the user types \"spi\", you display \"name LIKE spi%\", which is \nindexed, very fast. And since you're smart, you use AJAX. And you display \nonly the most popular results (ie. most clicked on).\n\nhttp://imdb.com/find?s=all&q=spi\n\n\tSince 99% of the time the user wanted \"spiderman\" or \"spielberg\", you're \ndone and he's happy. Users like being happy.\n\tIf the user just types \"a\", you display the first 10 things that start \nwith \"a\", this is useless but the user will marvel at your AJAX skillz. \nThen he will probably type in a few other letters.\n\n\tThen, if the user uses his space bar and types \"spi 1980\" you'll \nrecognize a year and display spielberg's movies in 1980.\n\tConverting your strings to phonetics is also a good idea since about 0.7% \nof the l33T teenagers can spell stuff especially spiElberg.\n\n\tOnly the guy who wants to know who had sex with marilyn monroe on the \n17th day of the shooting of Basic Instinct will need to use the Advanced \nsearch.\n\n\tIf you detect several words, then switch to a prefix-based fulltext \nsearch like Xapian which utterly rocks.\n\tExample : the user types \"savin priv\", you search for \"savin*\" NEAR \n\"priv*\" and you display \"saving private ryan\" before he has even finished \ntyping the second word of his query. Users love that, they feel \nunderstood, they will click on your ads and buy your products.\n\n\tIn all cases, search results should be limited to less than 100 to be \neasy on the database. The user doesn't care about a search returning more \nthan 10-20 results, he will just rephrase the query, and the time taken to \nfetch those thousands of records with name LIKE '%a%' will have been \nutterly lost. Who goes to page 2 in google results ?\n\n\tBOTTOM LINE : databases don't think, you do.\n",
"msg_date": "Fri, 25 May 2007 18:51:04 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
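Written out as plain SQL, the prefix-plus-LIMIT pattern sketched above looks roughly like this (table, column and index names are invented; text_pattern_ops only matters for non-C locales):

    CREATE INDEX movies_title_prefix_idx ON movies (title text_pattern_ops);

    SELECT title
    FROM movies
    WHERE title LIKE 'spi%'
    ORDER BY popularity DESC
    LIMIT 10;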
{
"msg_contents": "On Fri, May 25, 2007 at 04:35:22PM +0100, Richard Huxton wrote:\n> >I notice you did not provide a real life example as requested. :-)\n> OK - any application that allows user-built queries: <choose column: \n> foo> <choose filter: contains> <choose target: \"bar\">\n> Want another? Any application that has a \"search by name\" box - users \n> can (and do) put one letter in and hit enter.\n> Unfortunately you don't always have control over the selectivity of \n> queries issued.\n\nThe database has 10 million records. The user enters \"bar\" and it\ntranslates to \"%bar%\". You are suggesting that we expect bar to match\n1 million+ records? :-)\n\nI hope not. I would define this as bad process. I would also use \"LIMIT\"\nto something like \"100\".\n\n> >This seems like an ivory tower restriction. Not allowing best performance\n> >in a common situation vs not allowing worst performance in a not-so-common\n> >situation.\n> What best performance plan are you thinking of? I'm assuming we're \n> talking about trailing-wildcard matches here, rather than \"contains\" \n> style matches.\n\n\"Trailing-wildcard\" already uses B-Tree index, does it not?\n\nI am speaking of contains, as contains is the one that was said to\nrequire a seqscan. I am questioning why it requires a seqscan. The\nclaim was made that with MVCC, the index is insufficient to check\nfor visibility and that the table would need to be accessed anyways,\ntherefore a seqscan is required. I question whether a like '%bar%'\nshould be considered a high selectivity query in the general case.\nI question whether a worst case should be assumed.\n\nPerhaps I question too much? :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Fri, 25 May 2007 12:56:33 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "[email protected] wrote:\n\n> I am speaking of contains, as contains is the one that was said to\n> require a seqscan. I am questioning why it requires a seqscan. The\n> claim was made that with MVCC, the index is insufficient to check\n> for visibility and that the table would need to be accessed anyways,\n> therefore a seqscan is required. I question whether a like '%bar%'\n> should be considered a high selectivity query in the general case.\n> I question whether a worst case should be assumed.\n\nIf you are doing %bar% you should be using pg_tgrm or tsearch2.\n\nJ\n\n\n> \n> Perhaps I question too much? :-)\n> \n> Cheers,\n> mark\n> \n\n",
"msg_date": "Fri, 25 May 2007 10:06:46 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
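A sketch of the pg_trgm route Joshua mentions, assuming contrib/pg_trgm is installed; note that on 8.2 it provides fuzzy similarity matching through its own % operator rather than accelerating LIKE itself, and the names below are invented:

    CREATE INDEX people_name_trgm_idx ON people USING gist (name gist_trgm_ops);

    SELECT name, similarity(name, 'john') AS sml
    FROM people
    WHERE name % 'john'            -- trigram similarity above the current threshold
    ORDER BY sml DESC
    LIMIT 20;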
{
"msg_contents": "PFC wrote:\n> \n>> OK - any application that allows user-built queries: <choose column: \n>> foo> <choose filter: contains> <choose target: \"bar\">\n>>\n>> Want another? Any application that has a \"search by name\" box - users \n>> can (and do) put one letter in and hit enter.\n>>\n>> Unfortunately you don't always have control over the selectivity of \n>> queries issued.\n> \n> -*- HOW TO MAKE A SEARCH FORM -*-\n> \n> Imagine you have to code the search on IMDB.\n> \n> This is what a smart developer would do\n\nAll good domain-specific tips to provide users with a satisfying \nsearch-experience.\n\nNone of which address the question of what plan PG should produce for: \nSELECT * FROM bigtable WHERE foo LIKE 's%'\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 18:08:53 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "[email protected] wrote:\n> On Fri, May 25, 2007 at 04:35:22PM +0100, Richard Huxton wrote:\n>>> I notice you did not provide a real life example as requested. :-)\n>> OK - any application that allows user-built queries: <choose column: \n>> foo> <choose filter: contains> <choose target: \"bar\">\n>> Want another? Any application that has a \"search by name\" box - users \n>> can (and do) put one letter in and hit enter.\n>> Unfortunately you don't always have control over the selectivity of \n>> queries issued.\n> \n> The database has 10 million records. The user enters \"bar\" and it\n> translates to \"%bar%\". You are suggesting that we expect bar to match\n> 1 million+ records? :-)\n\nI was saying that you don't know. At least, I don't know of any cheap \nway of gathering full substring stats or doing a full substring \nindexing. Even tsearch2 can't do that.\n\n> I hope not. I would define this as bad process. I would also use \"LIMIT\"\n> to something like \"100\".\n\nYes, but that's not the query we're talking about is it? If possible you \ndon't do '%bar%' searches at all. If you do, you try to restrict it \nfurther or LIMIT the results. There's nothing to discuss in these cases.\n\n>>> This seems like an ivory tower restriction. Not allowing best performance\n>>> in a common situation vs not allowing worst performance in a not-so-common\n>>> situation.\n>> What best performance plan are you thinking of? I'm assuming we're \n>> talking about trailing-wildcard matches here, rather than \"contains\" \n>> style matches.\n> \n> \"Trailing-wildcard\" already uses B-Tree index, does it not?\n\nYes, it searches the btree and then checks the data for visibility. I \nthought that was what you felt could be worked around. It appears I was \nwrong.\n\n> I am speaking of contains, as contains is the one that was said to\n> require a seqscan. I am questioning why it requires a seqscan. \n\nWell, you seemed to be suggesting you had something better in mind. At \nleast, that was my reading of your original post.\n\n > The\n> claim was made that with MVCC, the index is insufficient to check\n> for visibility \n\nTrue, for PG's implementation of MVCC. You *could* have visibility in \neach index, but that obviously would take more space. For a table with \nmany indexes, that could be a *lot* more space. You also have to update \nall that visibilty information too.\n\n > and that the table would need to be accessed anyways,\n> therefore a seqscan is required. I question whether a like '%bar%'\n> should be considered a high selectivity query in the general case.\n> I question whether a worst case should be assumed.\n\nWell, the general rule-of-thumb is only about 10% for the changeover \nbetween index & seq-scan. That is, once you are reading 10% of the rows \non disk (to check visibility) you might as well read them all (since \nyou'll be reading most of the blocks anyway if the rows are randomly \ndistributed). If you are doing SELECT * from that table then you'll want \nall that data you read. If you are doing SELECT count(*) then you only \nwanted the visibility :-(\n\nNow you and I can look at a substring and probably make a good guess how \ncommon it is (assuming we know the targets are British surnames or \nJapanese towns). PG needs one number - or rather, it picks one number \nfor each length of search-string (afaik).\n\n> Perhaps I question too much? 
:-)\n\nNot sure it's possible to question too much :-)\nHowever, you need to provide answers occasionally too - what numbers \nwould you pick? :-)\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 18:29:16 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "> None of which address the question of what plan PG should produce for: \n> SELECT * FROM bigtable WHERE foo LIKE 's%'\n\n\tAh, this one already uses the btree since the '%' is at the end.\n\tMy point is that a search like this will yield too many results to be \nuseful to the user anyway, so optimizing its performance is a kind of red \nherring.\n\n",
"msg_date": "Fri, 25 May 2007 19:31:16 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "PFC wrote:\n>> None of which address the question of what plan PG should produce for: \n>> SELECT * FROM bigtable WHERE foo LIKE 's%'\n> \n> Ah, this one already uses the btree since the '%' is at the end.\n> My point is that a search like this will yield too many results to \n> be useful to the user anyway, so optimizing its performance is a kind of \n> red herring.\n\nAt the *application level* yes.\nAt the *query planner* level no.\n\nAt the query planner level I just want it to come up with the best plan \nit can. The original argument was that PG's estimate of the number of \nmatching rows was too optimistic (or pessimistic) in the case where we \nare doing a contains substring-search.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 18:32:39 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "\"Richard Huxton\" <[email protected]> writes:\n\n> Now you and I can look at a substring and probably make a good guess how common\n> it is (assuming we know the targets are British surnames or Japanese towns). PG\n> needs one number - or rather, it picks one number for each length of\n> search-string (afaik).\n\nI don't think that's true. Postgres calculates the lower and upper bound\nimplied by the search pattern and then uses the histogram to estimate how\nselective that range is. It's sometimes surprisingly good but obviously it's\nnot perfect.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Fri, 25 May 2007 18:58:13 +0100",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "Gregory Stark wrote:\n> \"Richard Huxton\" <[email protected]> writes:\n> \n>> Now you and I can look at a substring and probably make a good guess how common\n>> it is (assuming we know the targets are British surnames or Japanese towns). PG\n>> needs one number - or rather, it picks one number for each length of\n>> search-string (afaik).\n> \n> I don't think that's true. Postgres calculates the lower and upper bound\n> implied by the search pattern and then uses the histogram to estimate how\n> selective that range is. It's sometimes surprisingly good but obviously it's\n> not perfect.\n\nSorry - I'm obviously picking my words badly today.\n\nI meant for the \"contains\" substring match. It gives different (goes \naway and checks...yes) predictions based on string length. So it guesses \nthat LIKE '%aaa%' will match more than LIKE '%aaaa%'. Of course, if we \nwere matching surnames you and I could say that this is very unlikely, \nbut without some big statistics table I guess there's not much more PG \ncan do.\n\nFor a trailing wildcard LIKE 'aaa%' it can and does as you say convert \nthis into something along the lines of (>= 'aaa' AND < 'aab'). Although \nIIRC that depends if your locale allows such (not sure, I don't really \nuse non-C/non-English locales enough).\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 19:13:14 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
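Spelled out, the rewrite being discussed turns the anchored pattern into a range test, roughly:

    -- with a C-like collation these two are planned much the same way
    SELECT * FROM bigtable WHERE foo LIKE 'aaa%';
    SELECT * FROM bigtable WHERE foo >= 'aaa' AND foo < 'aab';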
{
"msg_contents": "[email protected] wrote:\n> What is a real life example where an intelligent and researched\n> database application would issue a like or ilike query as their\n> primary condition in a situation where they expected very high\n> selectivity?\n> \nIn my case the canonical example is to search against textual keys where \nthe search is\nperformed automatically if the user hs typed enough data and paused. In \nalmost all\ncases the '%' trails, and I'm looking for 'starts with' in effect. \nusually the search will have\na specified upper number of returned rows, if that's an available \nfacility. I realise in this\ncase that matching against the index does not allow the match count \nunless we check\nMVCC as we go, but I don't see why another thread can't be doing that.\n\nJames\n\n",
"msg_date": "Wed, 06 Jun 2007 23:23:13 +0100",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
},
{
"msg_contents": "On Wed, Jun 06, 2007 at 11:23:13PM +0100, James Mansion wrote:\n> [email protected] wrote:\n> >What is a real life example where an intelligent and researched\n> >database application would issue a like or ilike query as their\n> >primary condition in a situation where they expected very high\n> >selectivity?\n> In my case the canonical example is to search against textual keys\n> where the search is performed automatically if the user hs typed\n> enough data and paused. In almost all cases the '%' trails, and I'm\n> looking for 'starts with' in effect. usually the search will have a\n> specified upper number of returned rows, if that's an available\n> facility. I realise in this case that matching against the index\n> does not allow the match count unless we check MVCC as we go, but I\n> don't see why another thread can't be doing that.\n\nI believe PostgreSQL already considers using the index for \"starts\nwith\", so this wasn't part of the discussion for me. Sorry that this\nwasn't clear.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Wed, 6 Jun 2007 23:33:52 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: LIKE search and performance"
}
] |
[
{
"msg_contents": "I am a newbie, as you all know, but I am still embarassed asking this\nquestion. I started my tuning career by changing shared_buffers. Soon I\ndiscovered that I was hitting up against the available RAM on the system.\nSo, I brought the number down. Then I discovered max_fsm_pages. I could take\nthat up quite high and found out that it is a 'disk' thing. Then I started\nincreasing checkpoint_segments,which is also a disk thing. However, setting\nit to 25, and then increasing any of the other 2 variables, the postgresql\ndaemon stops working. meaning it does not start upon reboot. When I bring\nshared_buffers or max_fsm_pages back down, the daemon starts and all is\nnormal. This happens on a 1 GB RAM machine and a 4 GB RAM machine.\n\nAnyone know what I am doing wrong?\n\nSystem: FreeBSD 6.1, Postgresql 8.09, 2 GB RAM\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nI am a newbie, as you all know, but I am still embarassed asking this\nquestion. I started my tuning career by changing shared_buffers. Soon I\ndiscovered that I was hitting up against the available RAM on the\nsystem. So, I brought the number down. Then I discovered max_fsm_pages.\nI could take that up quite high and found out that it is a 'disk'\nthing. Then I started increasing checkpoint_segments,which is also a\ndisk thing. However, setting it to 25, and then increasing any of the\nother 2 variables, the postgresql daemon stops working. meaning it does\nnot start upon reboot. When I bring shared_buffers or max_fsm_pages\nback down, the daemon starts and all is normal. This happens on a 1 GB\nRAM machine and a 4 GB RAM machine. \n\nAnyone know what I am doing wrong?\n\nSystem: FreeBSD 6.1, Postgresql 8.09, 2 GB RAM-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Wed, 23 May 2007 09:22:22 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "max_fsm_pages, shared_buffers and checkpoint_segments"
},
{
"msg_contents": "Do you have an overall plan (besides \"make it go faster!\") or are you \njust trying out the knobs as you find them?\n\nThis may be helpful:\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n\nOn May 23, 2007, at 9:22 AM, Y Sidhu wrote:\n\n> I am a newbie, as you all know, but I am still embarassed asking \n> this question. I started my tuning career by changing \n> shared_buffers. Soon I discovered that I was hitting up against the \n> available RAM on the system. So, I brought the number down. Then I \n> discovered max_fsm_pages. I could take that up quite high and found \n> out that it is a 'disk' thing. Then I started increasing \n> checkpoint_segments,which is also a disk thing. However, setting it \n> to 25, and then increasing any of the other 2 variables, the \n> postgresql daemon stops working. meaning it does not start upon \n> reboot. When I bring shared_buffers or max_fsm_pages back down, the \n> daemon starts and all is normal. This happens on a 1 GB RAM machine \n> and a 4 GB RAM machine.\n>\n> Anyone know what I am doing wrong?\n>\n> System: FreeBSD 6.1, Postgresql 8.09, 2 GB RAM\n>\n> -- \n> Yudhvir Singh Sidhu\n> 408 375 3134 cell\n\n",
"msg_date": "Wed, 23 May 2007 09:31:02 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max_fsm_pages, shared_buffers and checkpoint_segments"
},
{
"msg_contents": "I cannot answer that question on the grounds that it may incriminate me.\nHehe. I am really trying to get our vacuum times down. The cause of the\nproblem, I believe, are daily mass deletes. Yes, I am working on performing\nvacuums more than once a day. No, I am not considering partitioning the\noffending table because a few scripts have to be changed. I am also turning\nthe knobs as I find them.\n\nAny help is appreciated b'cause \"I can't hold er t'gether much longer\nkap'n.\" Sorry, that's the best 'Scotty' I can do this morning.\n\nYudhvir\n=============\n\nOn 5/23/07, Ben <[email protected]> wrote:\n>\n> Do you have an overall plan (besides \"make it go faster!\") or are you\n> just trying out the knobs as you find them?\n>\n> This may be helpful:\n> http://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n>\n> On May 23, 2007, at 9:22 AM, Y Sidhu wrote:\n>\n> > I am a newbie, as you all know, but I am still embarassed asking\n> > this question. I started my tuning career by changing\n> > shared_buffers. Soon I discovered that I was hitting up against the\n> > available RAM on the system. So, I brought the number down. Then I\n> > discovered max_fsm_pages. I could take that up quite high and found\n> > out that it is a 'disk' thing. Then I started increasing\n> > checkpoint_segments,which is also a disk thing. However, setting it\n> > to 25, and then increasing any of the other 2 variables, the\n> > postgresql daemon stops working. meaning it does not start upon\n> > reboot. When I bring shared_buffers or max_fsm_pages back down, the\n> > daemon starts and all is normal. This happens on a 1 GB RAM machine\n> > and a 4 GB RAM machine.\n> >\n> > Anyone know what I am doing wrong?\n> >\n> > System: FreeBSD 6.1, Postgresql 8.09, 2 GB RAM\n> >\n> > --\n> > Yudhvir Singh Sidhu\n> > 408 375 3134 cell\n>\n>\n\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nI cannot answer that question on the grounds that it may incriminate me.\nHehe. I am really trying to get our vacuum times down. The cause of the\nproblem, I believe, are daily mass deletes. Yes, I am working on\nperforming vacuums more than once a day. No, I am not considering\npartitioning the offending table because a few scripts have to be\nchanged. I am also turning the knobs as I find them. \n\nAny help is appreciated b'cause \"I can't hold er t'gether much longer\nkap'n.\" Sorry, that's the best 'Scotty' I can do this morning.\n\nYudhvir\n=============On 5/23/07, Ben <[email protected]> wrote:\nDo you have an overall plan (besides \"make it go faster!\") or are youjust trying out the knobs as you find them?This may be helpful:\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.htmlOn May 23, 2007, at 9:22 AM, Y Sidhu wrote:> I am a newbie, as you all know, but I am still embarassed asking> this question. I started my tuning career by changing\n> shared_buffers. Soon I discovered that I was hitting up against the> available RAM on the system. So, I brought the number down. Then I> discovered max_fsm_pages. I could take that up quite high and found\n> out that it is a 'disk' thing. Then I started increasing> checkpoint_segments,which is also a disk thing. However, setting it> to 25, and then increasing any of the other 2 variables, the\n> postgresql daemon stops working. meaning it does not start upon> reboot. When I bring shared_buffers or max_fsm_pages back down, the> daemon starts and all is normal. 
This happens on a 1 GB RAM machine\n> and a 4 GB RAM machine.>> Anyone know what I am doing wrong?>> System: FreeBSD 6.1, Postgresql 8.09, 2 GB RAM>> --> Yudhvir Singh Sidhu> 408 375 3134 cell\n-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Wed, 23 May 2007 09:43:36 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: max_fsm_pages, shared_buffers and checkpoint_segments"
},
{
"msg_contents": "Mass deletes are expensive to clean up after. Truncates are better if \nyou can, but, as it sounds like you can't, you might look into \nvacuum_cost_delay and its many variables. It will make your vacuums \nrun longer, not shorter, but it will also make them have less of an \nimpact, if you configure it properly for your workload.\n\nAs you've found out, it's probably better not to poke things randomly \nin the hope of making it faster.\n\nOn May 23, 2007, at 9:43 AM, Y Sidhu wrote:\n\n> I cannot answer that question on the grounds that it may \n> incriminate me. Hehe. I am really trying to get our vacuum times \n> down. The cause of the problem, I believe, are daily mass deletes. \n> Yes, I am working on performing vacuums more than once a day. No, I \n> am not considering partitioning the offending table because a few \n> scripts have to be changed. I am also turning the knobs as I find \n> them.\n>\n> Any help is appreciated b'cause \"I can't hold er t'gether much \n> longer kap'n.\" Sorry, that's the best 'Scotty' I can do this morning.\n>\n> Yudhvir\n> =============\n>\n> On 5/23/07, Ben <[email protected]> wrote:\n> Do you have an overall plan (besides \"make it go faster!\") or are you\n> just trying out the knobs as you find them?\n>\n> This may be helpful:\n> http://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n>\n> On May 23, 2007, at 9:22 AM, Y Sidhu wrote:\n>\n> > I am a newbie, as you all know, but I am still embarassed asking\n> > this question. I started my tuning career by changing\n> > shared_buffers. Soon I discovered that I was hitting up against the\n> > available RAM on the system. So, I brought the number down. Then I\n> > discovered max_fsm_pages. I could take that up quite high and found\n> > out that it is a 'disk' thing. Then I started increasing\n> > checkpoint_segments,which is also a disk thing. However, setting it\n> > to 25, and then increasing any of the other 2 variables, the\n> > postgresql daemon stops working. meaning it does not start upon\n> > reboot. When I bring shared_buffers or max_fsm_pages back down, the\n> > daemon starts and all is normal. This happens on a 1 GB RAM machine\n> > and a 4 GB RAM machine.\n> >\n> > Anyone know what I am doing wrong?\n> >\n> > System: FreeBSD 6.1, Postgresql 8.09, 2 GB RAM\n> >\n> > --\n> > Yudhvir Singh Sidhu\n> > 408 375 3134 cell\n>\n>\n>\n>\n> -- \n> Yudhvir Singh Sidhu\n> 408 375 3134 cell\n\n\nMass deletes are expensive to clean up after. Truncates are better if you can, but, as it sounds like you can't, you might look into vacuum_cost_delay and its many variables. It will make your vacuums run longer, not shorter, but it will also make them have less of an impact, if you configure it properly for your workload.As you've found out, it's probably better not to poke things randomly in the hope of making it faster.On May 23, 2007, at 9:43 AM, Y Sidhu wrote:I cannot answer that question on the grounds that it may incriminate me. Hehe. I am really trying to get our vacuum times down. The cause of the problem, I believe, are daily mass deletes. Yes, I am working on performing vacuums more than once a day. No, I am not considering partitioning the offending table because a few scripts have to be changed. I am also turning the knobs as I find them. Any help is appreciated b'cause \"I can't hold er t'gether much longer kap'n.\" Sorry, that's the best 'Scotty' I can do this morning. 
Yudhvir =============On 5/23/07, Ben <[email protected]> wrote: Do you have an overall plan (besides \"make it go faster!\") or are youjust trying out the knobs as you find them?This may be helpful: http://www.powerpostgresql.com/Downloads/annotated_conf_80.htmlOn May 23, 2007, at 9:22 AM, Y Sidhu wrote:> I am a newbie, as you all know, but I am still embarassed asking> this question. I started my tuning career by changing > shared_buffers. Soon I discovered that I was hitting up against the> available RAM on the system. So, I brought the number down. Then I> discovered max_fsm_pages. I could take that up quite high and found > out that it is a 'disk' thing. Then I started increasing> checkpoint_segments,which is also a disk thing. However, setting it> to 25, and then increasing any of the other 2 variables, the > postgresql daemon stops working. meaning it does not start upon> reboot. When I bring shared_buffers or max_fsm_pages back down, the> daemon starts and all is normal. This happens on a 1 GB RAM machine > and a 4 GB RAM machine.>> Anyone know what I am doing wrong?>> System: FreeBSD 6.1, Postgresql 8.09, 2 GB RAM>> --> Yudhvir Singh Sidhu> 408 375 3134 cell -- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Wed, 23 May 2007 10:00:22 -0700",
"msg_from": "Ben <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max_fsm_pages, shared_buffers and checkpoint_segments"
},
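As a concrete illustration of the vacuum_cost_delay suggestion above, the setting can be applied just to the session that runs the manual vacuum, so the rest of the workload keeps its normal speed. A rough sketch; the database and table names and the numbers are placeholders, not recommendations:

psql -d mydb <<'EOF'
-- throttle only this session's vacuum (delay is in milliseconds,
-- limit is in vacuum cost units)
SET vacuum_cost_delay = 20;
SET vacuum_cost_limit = 200;
VACUUM ANALYZE big_table;
EOF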
{
"msg_contents": "\n> When I bring shared_buffers or max_fsm_pages back down, the daemon \n> starts and all is\n> normal.\n\n\tLinux has a system setting for the maximum number of shared memory that a \nprocess can allocate. When Postgres wants more, Linux says \"No.\"\n\tLook in the docs for the setting (sysctl whatsisname).\n\tVACUUM VERBOSE will tell you if you need to put more max_fsm_pages or not.\n\t\n",
"msg_date": "Wed, 23 May 2007 19:56:11 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max_fsm_pages, shared_buffers and checkpoint_segments"
},
{
"msg_contents": "> increasing checkpoint_segments,which is also a disk thing. However, setting\n> it to 25, and then increasing any of the other 2 variables, the postgresql\n> daemon stops working. meaning it does not start upon reboot. When I bring\n\nSounds like you need to increase your shared memory limits.\nUnfortunately this will require a reboot on FreeBSD :(\n\nSee:\n\n http://www.postgresql.org/docs/8.2/static/kernel-resources.html\n\nLast time I checked PostgreSQL should be complaining about the shared\nmemory on startup rather than silently fail though. Check your logs\nperhaps. Though I believe the RC script will cause the message to be\nprinted interactively at the console too, if you run it. (Assuming you\nare using it installed from ports).\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org",
"msg_date": "Wed, 23 May 2007 22:40:31 +0200",
"msg_from": "Peter Schuller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max_fsm_pages, shared_buffers and checkpoint_segments"
},
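To make the FreeBSD side concrete, here is a rough sketch of raising the SysV shared memory limits; the numbers are only examples and should be sized to fit shared_buffers plus some headroom (shmmax is in bytes, shmall in pages). The semaphore limits, if you also run into those, are loader tunables and do need a reboot:

# show the current limits
sysctl kern.ipc.shmmax kern.ipc.shmall
# raise them on the running system
sysctl kern.ipc.shmmax=536870912
sysctl kern.ipc.shmall=131072
# make the change survive a reboot
cat >> /etc/sysctl.conf <<'EOF'
kern.ipc.shmmax=536870912
kern.ipc.shmall=131072
EOF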
{
"msg_contents": "Y Sidhu wrote:\n> I cannot answer that question on the grounds that it may incriminate me.\n> Hehe. I am really trying to get our vacuum times down. The cause of the\n> problem, I believe, are daily mass deletes. Yes, I am working on performing\n> vacuums more than once a day. No, I am not considering partitioning the\n> offending table because a few scripts have to be changed. I am also turning\n> the knobs as I find them.\n\nYudhvir, I don't think the tuning options are going to make any \ndifference to your vacuum times.\n\nI don't know if this been brought up already, but the way vacuum works \nin 8.1 and 8.2 is that when it scans the table for the second time, it \ndoes a WAL flush for every block that had deleted tuples on it. That's \nreally expensive, in particular if you don't have a separate drive for \nthe WAL, and/or you don't have a battery backed up cache in your controller.\n\nYou could try turning fsync=off to see if it helps, but be warned that \nthat's dangerous. If you have a power failure etc. while the database is \nbusy, you can get data corruption. So do that to see if it helps on a \ntest matchine, and if it does, put WAL on another drive or get a \ncontroller with battery backed up cache. Or wait until release 8.3, \nwhich should fix that issue.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Thu, 24 May 2007 05:27:27 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max_fsm_pages, shared_buffers and checkpoint_segments"
},
{
"msg_contents": "\nOn May 23, 2007, at 4:40 PM, Peter Schuller wrote:\n\n> Sounds like you need to increase your shared memory limits.\n> Unfortunately this will require a reboot on FreeBSD :(\n\nNo, it does not. You can tune some of the sysv IPC parameters at \nruntime. the shmmax and shmall are such parameters.\n\n",
"msg_date": "Thu, 31 May 2007 17:23:55 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: max_fsm_pages, shared_buffers and checkpoint_segments"
}
] |
[
{
"msg_contents": "Hi Tom,\n\n>What PG version is that? I recall we fixed a problem recently that\n>caused the requested max_fsm_pages to increase some more when you'd\n>increased it to what the message said.\n\n8.1.4 \n\nAs Vivek suggested, we are implementing more regular vacuuming.\n\nThanks!\nSusan\n",
"msg_date": "Wed, 23 May 2007 14:18:41 -0400 (EDT)",
"msg_from": "Susan Russo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: does VACUUM ANALYZE complete with this error?"
},
{
"msg_contents": "Susan Russo <[email protected]> writes:\n>> What PG version is that? I recall we fixed a problem recently that\n>> caused the requested max_fsm_pages to increase some more when you'd\n>> increased it to what the message said.\n\n> 8.1.4 \n\nOK, I checked the CVS history and found this:\n\n2006-09-21 16:31 tgl\n\n\t* contrib/pg_freespacemap/README.pg_freespacemap,\n\tcontrib/pg_freespacemap/pg_freespacemap.c,\n\tcontrib/pg_freespacemap/pg_freespacemap.sql.in,\n\tsrc/backend/access/gin/ginvacuum.c,\n\tsrc/backend/access/gist/gistvacuum.c,\n\tsrc/backend/access/nbtree/nbtree.c, src/backend/commands/vacuum.c,\n\tsrc/backend/commands/vacuumlazy.c,\n\tsrc/backend/storage/freespace/freespace.c,\n\tsrc/include/storage/freespace.h: Fix free space map to correctly\n\ttrack the total amount of FSM space needed even when a single\n\trelation requires more than max_fsm_pages pages. Also, make VACUUM\n\temit a warning in this case, since it likely means that VACUUM FULL\n\tor other drastic corrective measure is needed.\tPer reports from\n\tJeff Frost and others of unexpected changes in the claimed\n\tmax_fsm_pages need.\n\nThis is in 8.2, but we didn't back-patch because it made incompatible\nchanges in the contrib/pg_freespacemap views.\n\nAs the commit message says, the behavior of having the requested\nmax_fsm_pages value move up after you increase the setting is triggered\nby having individual tables that need more than max_fsm_pages. So you\ndefinitely have got a problem of needing more vacuuming...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 May 2007 14:31:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does VACUUM ANALYZE complete with this error? "
}
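A quick, if crude, way to watch for this condition is to look at the summary that a database-wide VACUUM VERBOSE prints at the end. The exact wording differs a little between releases, so the grep pattern below is only a guess at what to look for, and the connection options are placeholders:

# the INFO output arrives on stderr, hence the redirect
psql -U postgres -d mydb -c 'VACUUM VERBOSE;' 2>&1 | grep -iE 'free space map|fsm'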
] |
[
{
"msg_contents": "Is there any easy way to take a database and add/delete records to create\nfragmentation of the records and indexes. I am trying to recreate high\nvacuum times.\n\n-- \nYudhvir Singh Sidhu\n408 375 3134 cell\n\nIs there any easy way to take a database and add/delete records to\ncreate fragmentation of the records and indexes. I am trying to\nrecreate high vacuum times.-- Yudhvir Singh Sidhu408 375 3134 cell",
"msg_date": "Wed, 23 May 2007 11:58:06 -0700",
"msg_from": "\"Y Sidhu\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Simulate database fragmentation"
},
{
"msg_contents": "On Wed, May 23, 2007 at 11:58:06AM -0700, Y Sidhu wrote:\n> Is there any easy way to take a database and add/delete records to create\n> fragmentation of the records and indexes. I am trying to recreate high\n> vacuum times.\n\nUpdate random rows, then do a vacuum. That will result in free space in\nrandom locations. At that point you'd probably want to update some\nranges of rows, enough so that they get forced to new pages.\n\nA better idea might be to just insert random data.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)",
"msg_date": "Sun, 27 May 2007 11:05:38 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simulate database fragmentation"
}
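A rough, self-contained way to manufacture that kind of bloat for testing follows; every name and size in it is invented, so adjust to taste:

psql -d testdb <<'EOF'
-- throwaway table filled with fat rows
CREATE TABLE frag_test (id serial PRIMARY KEY, payload text);
INSERT INTO frag_test (payload)
    SELECT repeat('x', 200) FROM generate_series(1, 500000);
-- scatter dead tuples across the heap and the index
UPDATE frag_test SET payload = repeat('y', 200) WHERE random() < 0.3;
DELETE FROM frag_test WHERE random() < 0.2;
-- now time the cleanup
\timing
VACUUM ANALYZE frag_test;
EOF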
] |
[
{
"msg_contents": "Hi Tom - thanks for the additional/confirming info.\n\n>So you definitely have got a problem of needing more vacuuming...\n\nYes, we're going to nightly, as I said in last message, however, \nit worse than this.....\n\nI found that *1* vacuum analyze works well in many instances to \nhelp optimize query performance (which in one example was running\nin lightening speed on 2 of our 5 identical software/hardware/configs \nPg 8.1.4 servers). However, in several cases, a *2nd* vacuum\nanalyze was necessary. (btw - first vacuum was after adjusting\nmax_fsm_pages, and getting no error msgs from vacuum).\n\nI *think* - please advise, I may be able to affect configs\nfor a more effective vacuum analyze the first time around (??)\nPerhaps an increase to deafult_statistics_target (set to 100??).\n\nI'd read that when performing a vacuum analyze, Pg doesn't actually\ngo through all values in each table and update statistics, rather,\nit samples some of the values and uses that statistical sample. \nThus, different runs of the vacuum analyze might generate different\nstatistics (on different dbs on different servers) since the same db\nmay be used differently on a different server. Is this correct??\n\nThanks for any advice....I'm hoping regular duplicate vacuum\nanalyze isn't the solution... \n\nSusan\n",
"msg_date": "Wed, 23 May 2007 15:03:57 -0400 (EDT)",
"msg_from": "Susan Russo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: does VACUUM ANALYZE complete with this error?"
},
{
"msg_contents": "Susan Russo wrote:\n> Hi Tom - thanks for the additional/confirming info.\n>\n> \n>> So you definitely have got a problem of needing more vacuuming...\n>> \n>\n> Yes, we're going to nightly, as I said in last message, however, \n> it worse than this.....\n>\n> I found that *1* vacuum analyze works well in many instances to \n> help optimize query performance (which in one example was running\n> in lightening speed on 2 of our 5 identical software/hardware/configs \n> Pg 8.1.4 servers). However, in several cases, a *2nd* vacuum\n> analyze was necessary. (btw - first vacuum was after adjusting\n> max_fsm_pages, and getting no error msgs from vacuum).\n>\n> I *think* - please advise, I may be able to affect configs\n> for a more effective vacuum analyze the first time around (??)\n> Perhaps an increase to deafult_statistics_target (set to 100??).\n>\n> I'd read that when performing a vacuum analyze, Pg doesn't actually\n> go through all values in each table and update statistics, rather,\n> it samples some of the values and uses that statistical sample. \n> Thus, different runs of the vacuum analyze might generate different\n> statistics (on different dbs on different servers) since the same db\n> may be used differently on a different server. Is this correct??\n>\n> Thanks for any advice....I'm hoping regular duplicate vacuum\n> analyze isn't the solution... \nCouple -o- points\n\n Update your pg servers. 8.1.9 is out, and there's plenty of bugs fixed \nbetween 8.1.4 and 8.1.9 that you should update. It's relatively \npainless and worth the effort.\n\n I get the feeling you think vacuum and analyze are still married. \nThey're not, they got divorced around 7.3 or so. Used to be to run \nanalyze you needed vacuum. Now you can either one without the other.\n\n Vacuum is more expensive than analyze. Since vacuum reclaims lost \ntuples, it has to do more work than analyze which only has to do a quick \npass over a random sampling of the table, hence you are right in what \nyou heard, that from one run to the next the data analyze returns will \nusually be a bit different. Increasing the default stats target allows \nanalyze to look at more random samples and get a more accurate report on \nthe values and their distributions in the table. This comes at the cost \nof slightly greater analyze and query planning times.\n",
"msg_date": "Wed, 23 May 2007 17:38:46 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: does VACUUM ANALYZE complete with this error?"
}
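For reference, the statistics target can be raised either per column (stored in the catalog) or just for the session that runs ANALYZE; a small sketch with made-up names, including a toy table so it runs end to end:

psql -d mydb <<'EOF'
CREATE TABLE some_table (some_column integer);
INSERT INTO some_table
    SELECT (random() * 1000)::int FROM generate_series(1, 100000);
-- permanent, per-column setting
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE some_table;
-- or only for this session, affecting every column ANALYZE touches
SET default_statistics_target = 100;
ANALYZE some_table;
EOF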
] |
[
{
"msg_contents": "Hi,\n\nwith that setup you should vacuum aggressivley.\nI'd send a vacuum statement in a third thread every 15 minutes or so.\n\nThe table renaming trick doesn't sound very handy or even\nnecessary...\n\nBye,\nChris.\n\n\n\n > Date: Tue, 22 May 2007 14:38:40 -0400\n > From: \"Orhan Aglagul\" <[email protected]>\n > To: <[email protected]>\n > Subject: Drop table vs Delete record\n > Message-ID: <[email protected]>\n >\n >\n > My application has two threads, one inserts thousands of records per second into a table (t1) and the other thread \n > periodically deletes expired records (also in thousands) from the same table (expired ones). So, we have one thread\n > adding a row while the other thread is trying to delete a row. In a short time the overall performance of any sql\n > statements on that instance degrades. (ex. Select count(*) from t1 takes more then few seconds with less than 10K\n > rows).\n >\n > My question is: Would any sql statement perform better if I would rename the table to t1_%indx periodically, create a \n > new table t1 (for new inserts) and just drop the tables with expired records rather then doing a delete record? (t1 is\n > a simple table with many rows and no constraints).\n >\n > (I know I could run vacuum analyze)\n >\n > Thanks,\n >\n > Orhan A.\n\n",
"msg_date": "Wed, 23 May 2007 22:24:23 +0200",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Drop table vs Delete record"
}
] |
[
{
"msg_contents": "Auto-vacuum has made Postgres a much more \"friendly\" system. Is there some reason the planner can't also auto-ANALYZE in some situations?\n\nHere's an example I ran into:\n\n create table my_tmp_table (...);\n insert into my_tmp_table (select some stuff from here and there);\n select ... from my_tmp_table join another_table on (...);\n\nThe last statement generated a horrible plan, because the planner had no idea what was in the temporary table (which only had about 100 rows in it). Simply inserting an ANALYZE before the SELECT improved performance by a factor of 100 or so.\n\nThere are several situations where you could automatically analyze the data.\n\n1. Any time you have to do a full table scan, you might as well throw in an ANALYZE of the data you're scanning. If I understand things, ANALYZE takes a random sample anyway, so a full table scan should be able to produce even better statistics than a normal ANALYZE.\n\n2. If you have a table with NO statistics, the chances of generating a sensible plan are pretty random. Since ANALYZE is quite fast, if the planner encounters no statistics, why not ANALYZE it on the spot? (This might need to be a configurable feature, though.)\n\n3. A user-configurable update threshold, such as, \"When 75% of the rows have changed since the last ANALYZE, trigger an auto-analyze.\" The user-configurable part would account for the fact that some tables stats don't change much even after many updates, but others may need to be reanalyzed after a modest number of updates.\n\nAuto-vacuum, combined with auto-analyze, would eliminate many of the problems that plague neophyte (and sometimes experienced) users of Postgres. A substantial percentage of the questions to this list are answered with, \"Have you ANALYZED?\"\n\nCraig\n",
"msg_date": "Wed, 23 May 2007 16:46:54 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": true,
"msg_subject": "Auto-ANALYZE?"
},
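Until something automatic exists, the usual workaround for the pattern described above is an explicit ANALYZE between the bulk insert and the join; a rough, self-contained sketch in which all table and column names are made up:

psql -d mydb <<'EOF'
CREATE TABLE another_table (id integer PRIMARY KEY, label text);
INSERT INTO another_table
    SELECT n, 'row ' || n FROM generate_series(1, 100000) AS g(n);

CREATE TEMP TABLE my_tmp_table AS
    SELECT n AS id FROM generate_series(1, 100) AS g(n);

-- give the planner real row counts and value distributions before the join
ANALYZE my_tmp_table;

EXPLAIN ANALYZE
    SELECT a.*
    FROM my_tmp_table t
    JOIN another_table a ON a.id = t.id;
EOF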
{
"msg_contents": "Craig James <[email protected]> writes:\n> Auto-vacuum has made Postgres a much more \"friendly\" system. Is there some reason the planner can't also auto-ANALYZE in some situations?\n\nautovacuum handles analyze too. Trying to make the planner do it is\na crummy idea for a couple of reasons:\n\n* unpredictable performance if queries sometimes go off for a few\n seconds to collect stats\n* pg_statistics update requires semi-exclusive lock\n* work is lost if transaction later rolls back\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 May 2007 19:55:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto-ANALYZE? "
}
] |
[
{
"msg_contents": "Hi all,\n\n \n\nI have a 4 CPU, 4GB Ram memory box running PostgreSql 8.2.3 under Win 2003 in a very high IO intensive insert application.\n\n \n\nThe application inserts about 570 rows per minute or 9 rows per second.\n\n \n\nWe have been facing some memory problem that we cannot understand.\n\n \n\n From time to time memory allocation goes high and even after we stop postgresql service the memory continues allocated and if were restart the service the Postgres crash over.\n\n \n\nIt's a 5 GB database size already that was born 1 and a half month ago. We have 2 principal tables partitioned.\n\n \n\nAbove is the log file. Do anyone have any idea what could the problem be......\n\n \n\nThanks in advance.\n\n \n\n \n\n2007-05-23 13:21:00 LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\n\n (error code 1450)\n\n2007-05-23 13:21:00 LOG: could not fork new process for connection: A blocking operation was interrupted by a call to WSACancelBlockingCall.\n\n \n\n2007-05-23 13:21:06 LOG: could not receive data from client: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.\n\n2007-05-23 13:21:17 LOG: server process (PID 256868) exited with exit code 128\n\n2007-05-23 13:21:17 LOG: terminating any other active server processes\n\n2007-05-23 13:21:17 WARNING: terminating connection because of crash of another server process\n\n2007-05-23 13:21:17 DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n\n2007-05-23 13:21:17 HINT: In a moment you should be able to reconnect to the database and repeat your command.\n\n2007-05-23 13:21:17 WARNING: terminating connection because of crash of another server process\n\n2007-05-23 13:21:17 DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n\n2007-05-23 13:21:17 WARNING: terminating connection because of crash of another server process\n\n2007-05-23 13:21:17 DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n\n \n\n\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nI have a 4 CPU, 4GB Ram\nmemory box running PostgreSql 8.2.3 under Win 2003 in a very high IO\nintensive insert application.\n \nThe application inserts\nabout 570 rows per minute or 9 rows per second.\n \nWe have been facing some\nmemory problem that we cannot understand.\n \nFrom time to time memory\nallocation goes high and even after we stop postgresql service the memory\ncontinues allocated and if were restart the service the Postgres crash over.\n \nIt’s a 5 GB\ndatabase size already that was born 1 and a half month ago. We have 2 principal\ntables partitioned.\n \nAbove is the log file. 
Do\nanyone have any idea what could the problem be……\n \nThanks in advance.\n \n \n2007-05-23 13:21:00 LOG: \nCreateProcess call failed: A blocking operation was interrupted by a call to\nWSACancelBlockingCall.\n\n (error code\n1450)\n2007-05-23 13:21:00 LOG: \ncould not fork new process for connection: A blocking operation was interrupted\nby a call to WSACancelBlockingCall.\n\n \n2007-05-23 13:21:06 LOG: \ncould not receive data from client: An operation on a socket could not be\nperformed because the system lacked sufficient buffer space or because a queue\nwas full.\n\n2007-05-23 13:21:17 LOG: \nserver process (PID 256868) exited with exit code 128\n2007-05-23 13:21:17 LOG: \nterminating any other active server processes\n2007-05-23 13:21:17\nWARNING: terminating connection because of crash of another server process\n2007-05-23 13:21:17\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited abnormally\nand possibly corrupted shared memory.\n2007-05-23 13:21:17\nHINT: In a moment you should be able to reconnect to the database and repeat\nyour command.\n2007-05-23 13:21:17\nWARNING: terminating connection because of crash of another server process\n2007-05-23 13:21:17\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited abnormally\nand possibly corrupted shared memory.\n2007-05-23 13:21:17\nWARNING: terminating connection because of crash of another server process\n2007-05-23 13:21:17\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited abnormally\nand possibly corrupted shared memory.",
"msg_date": "Wed, 23 May 2007 23:01:24 -0300",
"msg_from": "=?iso-8859-1?Q?Leandro_Guimar=E3es_dos_Santos?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Memory allocation and Vacuum abends"
},
{
"msg_contents": "What does top report as using the most memory?\n\nOn Wed, May 23, 2007 at 11:01:24PM -0300, Leandro Guimar?es dos Santos wrote:\n> Hi all,\n> \n> \n> \n> I have a 4 CPU, 4GB Ram memory box running PostgreSql 8.2.3 under Win 2003 in a very high IO intensive insert application.\n> \n> \n> \n> The application inserts about 570 rows per minute or 9 rows per second.\n> \n> \n> \n> We have been facing some memory problem that we cannot understand.\n> \n> \n> \n> From time to time memory allocation goes high and even after we stop postgresql service the memory continues allocated and if were restart the service the Postgres crash over.\n> \n> \n> \n> It's a 5 GB database size already that was born 1 and a half month ago. We have 2 principal tables partitioned.\n> \n> \n> \n> Above is the log file. Do anyone have any idea what could the problem be......\n> \n> \n> \n> Thanks in advance.\n> \n> \n> \n> \n> \n> 2007-05-23 13:21:00 LOG: CreateProcess call failed: A blocking operation was interrupted by a call to WSACancelBlockingCall.\n> \n> (error code 1450)\n> \n> 2007-05-23 13:21:00 LOG: could not fork new process for connection: A blocking operation was interrupted by a call to WSACancelBlockingCall.\n> \n> \n> \n> 2007-05-23 13:21:06 LOG: could not receive data from client: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.\n> \n> 2007-05-23 13:21:17 LOG: server process (PID 256868) exited with exit code 128\n> \n> 2007-05-23 13:21:17 LOG: terminating any other active server processes\n> \n> 2007-05-23 13:21:17 WARNING: terminating connection because of crash of another server process\n> \n> 2007-05-23 13:21:17 DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> \n> 2007-05-23 13:21:17 HINT: In a moment you should be able to reconnect to the database and repeat your command.\n> \n> 2007-05-23 13:21:17 WARNING: terminating connection because of crash of another server process\n> \n> 2007-05-23 13:21:17 DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> \n> 2007-05-23 13:21:17 WARNING: terminating connection because of crash of another server process\n> \n> 2007-05-23 13:21:17 DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n> \n> \n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)",
"msg_date": "Sun, 27 May 2007 11:06:48 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Memory allocation and Vacuum abends"
}
] |
[
{
"msg_contents": "Hi *,\nfor caching large autogenerated XML files, I have created a bytea table \nin my database so that the cached files can be used by multiple servers. \nThere are about 500 rows and 10-20 Updates per minute on the table. The \nfiles stored in the bytea are anything from 10kB to 10MB. My PostgreSQL \nversion is 8.0.13 on Gentoo Linux (x86) with PostGIS 1.2.0.\n\nFor vacuum I use the pg_autovacuum daemon. It decided to vacuum my cache \ntable about every 3 hours, the vacuum process takes 20-30 minutes \n(oops!) every time.\n\nNow my big big problem is that the database gets really really slow \nduring these 20 minutes and after the vacuum process is running for a \nshort time, many transactions show state \"UPDATE waiting\" in the process \nlist. In my Java application server I sometimes get tons of deadlock \nExceptions (waiting on ShareLock blahblah). The web frontend gets nearly \nunusable, logging in takes more than 60 seconds, etc. etc.\n\nUnder normal circumstances my application is really fast, vacuuming \nother tables is no problem, only the bytea table is really awkward\n\nI hope some of you performance cracks can help me...\n\n\nthis is my table definition:\n\nTable �public.binary_cache�\n Column | Type | Attributes\n----------+-----------------------------+-----------\n cache_id | bigint | not null\n date | timestamp without time zone |\n data | bytea |\n\nIndexe:\n �binary_cache_pkey� PRIMARY KEY, btree (cache_id)\n\n\nThanks in advance for any hints!\n\n-- \nBastian Voigt\nNeum�nstersche Stra�e 4\n20251 Hamburg\ntelefon +49 - 40 - 67957171\nmobil +49 - 179 - 4826359\n\n\n",
"msg_date": "Fri, 25 May 2007 10:29:30 +0200",
"msg_from": "Bastian Voigt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Problem with Vacuum of bytea table (PG 8.0.13)"
},
{
"msg_contents": "Bastian Voigt wrote:\n> Hi *,\n> for caching large autogenerated XML files, I have created a bytea table \n> in my database so that the cached files can be used by multiple servers. \n> There are about 500 rows and 10-20 Updates per minute on the table. The \n> files stored in the bytea are anything from 10kB to 10MB. My PostgreSQL \n> version is 8.0.13 on Gentoo Linux (x86) with PostGIS 1.2.0.\n> \n> For vacuum I use the pg_autovacuum daemon. It decided to vacuum my cache \n> table about every 3 hours, the vacuum process takes 20-30 minutes \n> (oops!) every time.\n\nTry vacuuming every 3 minutes and see what happens.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 11:33:02 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with Vacuum of bytea table (PG\n 8.0.13)"
},
{
"msg_contents": "\n\nOK, I'll give that a try. What about pg_autovacuum then? Is it a problem\nwhen two processes try to vacuum the same table in parallel? Or do I\nneed to deactivate autovacuum altogether?\n>\n> Try vacuuming every 3 minutes and see what happens.\n>\n\n(Sorry Richard, forgot to reply to the list!)\n-- \nBastian Voigt\nNeum�nstersche Stra�e 4\n20251 Hamburg\ntelefon +49 - 40 - 67957171\nmobil +49 - 179 - 4826359\n\n\n\n",
"msg_date": "Fri, 25 May 2007 12:48:17 +0200",
"msg_from": "Bastian Voigt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Problem with Vacuum of bytea table (PG\n 8.0.13)"
},
{
"msg_contents": "Bastian Voigt wrote:\n> \n> OK, I'll give that a try. What about pg_autovacuum then? Is it a problem\n> when two processes try to vacuum the same table in parallel? Or do I\n> need to deactivate autovacuum altogether?\n\nI was about to say that you can tune pg_autovacuum, but I just checked \nyour original post and you're running 8.0.x - not sure about that one.\n\nYou'll have to check the documentation for that version to see if you \ncan either:\n1. exclude that table from pg_autovacuum\n2. increase pg_autovacuum's sensitivity\n\nIf not, and this table is the most active, it might be simpler just to \nrun your own vacuum-ing from a cron job.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 12:00:23 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with Vacuum of bytea table (PG\n 8.0.13)"
},
{
"msg_contents": "Richard Huxton wrote:\n>\n> I was about to say that you can tune pg_autovacuum, but I just checked \n> your original post and you're running 8.0.x - not sure about that one.\nThe system catalog pg_autovacuum which allows finetuning autovacuum at \ntable level was introduced in 8.1 :-(\n\n> You'll have to check the documentation for that version to see if you \n> can either:\n> 1. exclude that table from pg_autovacuum\n> 2. increase pg_autovacuum's sensitivity\n(1) seems to be impossible (correct me if I'm wrong..), so maybe I'll go \nfor (2) ...\n\n> If not, and this table is the most active, it might be simpler just to \n> run your own vacuum-ing from a cron job.\nWell, it is one of the most active, but there are others. pg_autovacuum \nseems to do a very good job, apart from this one table...\n\n\n-- \nBastian Voigt\nNeum�nstersche Stra�e 4\n20251 Hamburg\ntelefon +49 - 40 - 67957171\nmobil +49 - 179 - 4826359\n\n\n",
"msg_date": "Fri, 25 May 2007 13:10:09 +0200",
"msg_from": "Bastian Voigt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Problem with Vacuum of bytea table (PG\n 8.0.13)"
},
{
"msg_contents": "Bastian Voigt wrote:\n> Richard Huxton wrote:\n>>\n>> I was about to say that you can tune pg_autovacuum, but I just checked \n>> your original post and you're running 8.0.x - not sure about that one.\n> The system catalog pg_autovacuum which allows finetuning autovacuum at \n> table level was introduced in 8.1 :-(\n\nHmm - thought it might have been :-(\n\n>> You'll have to check the documentation for that version to see if you \n>> can either:\n>> 1. exclude that table from pg_autovacuum\n>> 2. increase pg_autovacuum's sensitivity\n> (1) seems to be impossible (correct me if I'm wrong..), so maybe I'll go \n> for (2) ...\n\nNo, the per-table stuff was via the system table.\n\n>> If not, and this table is the most active, it might be simpler just to \n>> run your own vacuum-ing from a cron job.\n> Well, it is one of the most active, but there are others. pg_autovacuum \n> seems to do a very good job, apart from this one table...\n\nDo you have any settings in your postgresql.conf? Failing that, you \nwould have to poke around the source.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 12:51:30 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with Vacuum of bytea table (PG\n 8.0.13)"
},
{
"msg_contents": "No, this did not help. The vacuum process is still running far too long \nand makes everything slow. It is even worse than before, cause now the \nsystem is slow almost all the time while when vacuuming only every 3 \nhours it is only slow once every three hours.....\n\n\nI now did the following. Well, no comment.....\n\n\nShellscript A:\n\nwhile true\ndo\n psql -U $user -d $database -c \"vacuum analyze verbose binary_cache\"\n echo \"Going to sleep\"\n sleep 60\ndone\n\n\nShellscript B:\n\nwhile true\ndo\n ps aux > $tempfile\n numwaiting=`grep UPDATE.waiting $tempfile | grep -c -v grep`\n echo \"Number of waiting updates: $numwaiting\"\n\n vacuumpid=`grep VACUUM $tempfile| grep -v grep | awk '{print $2}'`\n echo \"PID of vacuum process: $vacuumpid\"\n\n if [ $numwaiting -gt 5 ]\n then\n echo \"Too many waiting transactions, killing vacuum \nprocess $vacuumpid...\"\n kill $vacuumpid\n fi\n echo \"Sleeping 30 Seconds\"\n sleep 30\ndone\n\n-- \nBastian Voigt\nNeum�nstersche Stra�e 4\n20251 Hamburg\ntelefon +49 - 40 - 67957171\nmobil +49 - 179 - 4826359\n\n\n",
"msg_date": "Fri, 25 May 2007 14:30:36 +0200",
"msg_from": "Bastian Voigt <[email protected]>",
"msg_from_op": true,
"msg_subject": "My quick and dirty \"solution\" (Re: Performance Problem\n\twith Vacuum of bytea table (PG 8.0.13))"
},
{
"msg_contents": "you should first cluster the table on primary key.\nThe table is probably already bloated from the 3 hr delay it had before.\nFirst\nCLUSTER \"primary key index name\" ON group_fin_account_tst;\nThen\nvacuum it every 3 minutes.\nNB! clustering takes an access exclusive lock on table\n\nKristo\n\nOn 25.05.2007, at 15:30, Bastian Voigt wrote:\n\n> No, this did not help. The vacuum process is still running far too \n> long and makes everything slow. It is even worse than before, cause \n> now the system is slow almost all the time while when vacuuming \n> only every 3 hours it is only slow once every three hours.....\n>\n>\n> I now did the following. Well, no comment.....\n>\n>\n> Shellscript A:\n>\n> while true\n> do\n> psql -U $user -d $database -c \"vacuum analyze verbose binary_cache\"\n> echo \"Going to sleep\"\n> sleep 60\n> done\n>\n>\n> Shellscript B:\n>\n> while true\n> do\n> ps aux > $tempfile\n> numwaiting=`grep UPDATE.waiting $tempfile | grep -c -v grep`\n> echo \"Number of waiting updates: $numwaiting\"\n>\n> vacuumpid=`grep VACUUM $tempfile| grep -v grep | awk '{print \n> $2}'`\n> echo \"PID of vacuum process: $vacuumpid\"\n>\n> if [ $numwaiting -gt 5 ]\n> then\n> echo \"Too many waiting transactions, killing vacuum \n> process $vacuumpid...\"\n> kill $vacuumpid\n> fi\n> echo \"Sleeping 30 Seconds\"\n> sleep 30\n> done\n>\n> -- \n> Bastian Voigt\n> Neum�nstersche Stra�e 4\n> 20251 Hamburg\n> telefon +49 - 40 - 67957171\n> mobil +49 - 179 - 4826359\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Fri, 25 May 2007 15:57:23 +0300",
"msg_from": "Kristo Kaiv <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: My quick and dirty \"solution\" (Re: Performance Problem with\n\tVacuum of bytea table (PG 8.0.13))"
},
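Applied to the table from the original post, the suggestion looks roughly like this (pre-8.3 CLUSTER takes the index name first; the connection options are placeholders, and the command holds an ACCESS EXCLUSIVE lock while it runs):

psql -U user -d mydb <<'EOF'
-- one-off compaction of heap and index
CLUSTER binary_cache_pkey ON binary_cache;
ANALYZE binary_cache;
EOF
# afterwards keep the table tight with a frequent plain vacuum, e.g. from cron:
#   */3 * * * *  psql -U user -d mydb -c 'VACUUM ANALYZE binary_cache'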
{
"msg_contents": "Bastian Voigt wrote:\n> No, this did not help. The vacuum process is still running far too long \n> and makes everything slow. It is even worse than before, cause now the \n> system is slow almost all the time while when vacuuming only every 3 \n> hours it is only slow once every three hours.....\n\nCould you check the output of vacuum verbose on that table and see how \nmuch work it's doing? I'd have thought the actual bytea data would be \nTOASTed away to a separate table for storage, leaving the vacuum with \nvery little work to do.\n\nIt might well be your actual problem is your disk I/O is constantly \nsaturated and the vacuum just pushes it over the edge. In which case \nyou'll either need more/better disks or to find a quiet time once a day \nto vacuum and just do so then.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 14:16:37 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: My quick and dirty \"solution\" (Re: Performance Problem\n\twith Vacuum of bytea table (PG 8.0.13))"
},
{
"msg_contents": "Bastian Voigt wrote:\n> No, this did not help. The vacuum process is still running far too long \n> and makes everything slow. It is even worse than before, cause now the \n> system is slow almost all the time while when vacuuming only every 3 \n> hours it is only slow once every three hours.....\n> \n> \n> I now did the following. Well, no comment.....\n\nKilling the vacuum mid-process doesn't help you, because the table will\nbe in a sorrier state than it was when it started.\n\nI think it would be better if you:\n\n1. Revert pg_autovacuum changes so that it processes every 3 hours or\nwhatever, like you had at the start of this thread. Or maybe less.\nThat one will take care of the _other_ tables.\n\n2. Vacuum the bytea table manually more often, say every 10 minutes or\nso (vacuum, sleep 10m, goto start). Make sure this is done with an\nappropriate vacuum_cost_delay setting (and related settings).\n\n3. Raise max_fsm_pages so that a lot of pages with free space can be\nrecorded for that table\n\n\nThe point here is that vacuuming the bytea table can take a long time\ndue to vacuum_cost_delay, but it won't affect the rest of the system;\nregular operation will continue to run at (almost) normal speed. Having\na big number of free pages ensures that the free space in the table is\nnot \"lost\".\n\nAlso, you may want to reindex that table once, because with so many\nkilling vacuums you have probably screwed up the indexes big time (maybe\ncluster it once instead of reindexing, because that will compact the\nheap as well as the indexes).\n\nAnother recommendation is to upgrade to 8.2.4 which is faster and has\na better autovacuum.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 25 May 2007 09:56:52 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: My quick and dirty \"solution\" (Re: Performance Problem with\n\tVacuum of bytea table (PG 8.0.13))"
},
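Putting those points together, a gentler variant of "Shellscript A" might look like the sketch below; the delay and limit values are only a starting point to tune, and $user/$database are the same placeholders used in the original script:

#!/bin/sh
# vacuum the cache table every 10 minutes, throttled so it does not
# saturate the disks; never kill a running vacuum
while true
do
    psql -U $user -d $database <<'EOF'
SET vacuum_cost_delay = 20;
SET vacuum_cost_limit = 200;
VACUUM ANALYZE binary_cache;
EOF
    sleep 600
done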
{
"msg_contents": "Kristo Kaiv wrote:\n> you should first cluster the table on primary key.\n> The table is probably already bloated from the 3 hr delay it had before.\n> First\n> CLUSTER \"primary key index name\" ON group_fin_account_tst;\n> Then\n> vacuum it every 3 minutes.\n> NB! clustering takes an access exclusive lock on table\nKristo,\nthanks a bunch!!\nThis was the solution...\nThe cluster operation took about 60sec, and after it was done the vacuum \nfinished in only 10sec. or so, with no noticeable performance \nbottleneck. Now vacuum is running every 2-3 minutes and makes no problems.\n\nHhhh, now I can look forward to a laid-back weekend..\n\nRichard, Kristo, Alvaro, thanks 1000 times for responding so quickly\n\n:-)\n\n-- \nBastian Voigt\nNeum�nstersche Stra�e 4\n20251 Hamburg\ntelefon +49 - 40 - 67957171\nmobil +49 - 179 - 4826359\n\n\n",
"msg_date": "Fri, 25 May 2007 16:06:14 +0200",
"msg_from": "Bastian Voigt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: My quick and dirty \"solution\" (Re: Performance Problem\n\twith Vacuum of bytea table (PG 8.0.13))"
},
{
"msg_contents": "Richard Huxton wrote:\n> Could you check the output of vacuum verbose on that table and see how \n> much work it's doing? I'd have thought the actual bytea data would be \n> TOASTed away to a separate table for storage, leaving the vacuum with \n> very little work to do.\nI'm quite new to postgres (actually I just ported our running \napplication from MySQL...), so I don't know what toast means. But I \nnoticed that vacuum also tried to cleanup some \"toast\" relations or so. \nThis was what took so long.\n\n> It might well be your actual problem is your disk I/O is constantly \n> saturated and the vacuum just pushes it over the edge. In which case \n> you'll either need more/better disks or to find a quiet time once a \n> day to vacuum and just do so then.\nYes, that was definitely the case. But now everything runs smoothly \nagain, so I don't think I need to buy new disks.\n\nRegards\nBastian\n\n\n-- \nBastian Voigt\nNeum�nstersche Stra�e 4\n20251 Hamburg\ntelefon +49 - 40 - 67957171\nmobil +49 - 179 - 4826359\n\n\n",
"msg_date": "Fri, 25 May 2007 16:11:45 +0200",
"msg_from": "Bastian Voigt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: My quick and dirty \"solution\" (Re: Performance Problem\n\twith Vacuum of bytea table (PG 8.0.13))"
},
{
"msg_contents": "Bastian Voigt <[email protected]> writes:\n> Now my big big problem is that the database gets really really slow \n> during these 20 minutes and after the vacuum process is running for a \n> short time, many transactions show state \"UPDATE waiting\" in the process \n> list. In my Java application server I sometimes get tons of deadlock \n> Exceptions (waiting on ShareLock blahblah). The web frontend gets nearly \n> unusable, logging in takes more than 60 seconds, etc. etc.\n\nHmm. That's a bit weird --- what are they waiting on exactly? Look in\npg_locks to see what the situation is. A vacuum per se ought not be\nblocking any updates.\n\nAside from the recommendation to make the vacuums happen more frequently\ninstead of less so, you should experiment with vacuum_cost_delay and\nrelated parameters. The idea is to reduce vacuum's I/O load so that it\ndoesn't hurt foreground response time. This means any individual vacuum\nwill take longer, but you won't need to care.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2007 10:18:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with Vacuum of bytea table (PG 8.0.13) "
},
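For reference, a query along these lines shows which sessions are waiting and on which relation; the column names are the 8.0/8.1 ones (pg_stat_activity.procpid was renamed to pid in later releases), and the connection options are placeholders:

psql -U user -d mydb <<'EOF'
SELECT l.pid, l.mode, l.granted, c.relname, a.current_query
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;
EOF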
{
"msg_contents": "On Fri, May 25, 2007 at 10:29:30AM +0200, Bastian Voigt wrote:\n> Hi *,\n> for caching large autogenerated XML files, I have created a bytea table \n> in my database so that the cached files can be used by multiple servers. \n> There are about 500 rows and 10-20 Updates per minute on the table. The \n> files stored in the bytea are anything from 10kB to 10MB. My PostgreSQL \n> version is 8.0.13 on Gentoo Linux (x86) with PostGIS 1.2.0.\n> \n> For vacuum I use the pg_autovacuum daemon. It decided to vacuum my cache \n> table about every 3 hours, the vacuum process takes 20-30 minutes \n> (oops!) every time.\n\nYou'll want to decrease autovacum_vacuum_scale_factor to 0.2 if you're\non anything less than 8.2.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)",
"msg_date": "Sun, 27 May 2007 11:08:31 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem with Vacuum of bytea table (PG 8.0.13)"
}
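For completeness: on 8.1/8.2 that is a postgresql.conf parameter, while 8.0's contrib pg_autovacuum daemon takes equivalent command-line switches instead. A quick check of what a running 8.1+ server uses (the database name is a placeholder):

# in postgresql.conf you would set, for example:
#   autovacuum = on
#   autovacuum_vacuum_scale_factor = 0.2
psql -d mydb -c 'SHOW autovacuum_vacuum_scale_factor;'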
] |
[
{
"msg_contents": "\nI set up pg to replace a plain gdbm database for my application. But\neven running to the same machine, via a unix socket\n\n * the pg database ran 100 times slower \n\nAcross the net it was\n\n * about 500 to 1000 times slower than local gdbm\n \nwith no cpu use to speak of.\n\nI'd heard that networked databases are slow. I might have left it at\nthat if curiosity hadn't led me to write a network server for gdbm\ndatabases, and talk to _that_ just to get a comparison.\n\nLo and behold and smack me with a corncob if it wasn't _slower_ than pg.\n\nOn a whim I mapped the network bandwidth per packet size with the NPtcp\nsuite, and got surprising answers .. at 1500B, naturally, the bandwidth\nwas the full 10Mb/s (minus overheads, say 8.5Mb/s) of my pathetic little\nlocal net. At 100B the bandwidth available was only 25Kb/s. At 10B,\nyou might as well use tin cans and taut string instead.\n\nI also mapped the network flows using ntop, and yes, the average packet\nsize for both gdbm and pg in one direction was only about 100B or\nso. That's it! Clearly there are a lot of short queries going out and\nthe answers were none too big either ( I had a LIMIT 1 in all my PG\nqueries).\n\nAbout 75% of traffic was in the 64-128B range while my application was\nrunning, with the peak bandwidth in that range being about 75-125Kb/s\n(and I do mean bits, not bytes).\n\nSoooo ... I took a look at my implementation of remote gdbm, and did\na very little work to aggregate outgoing transmissions together into\nlumps. Three lines added in two places. At the level of the protocol\nwhere I could tell how long the immediate conversation segment would be,\nI \"corked\" the tcp socket before starting the segment and \"uncorked\" it\nafter the segment (for \"cork\", see tcp(7), setsockopt(2) and TCP_CORK in\nlinux).\n\nSurprise, ... I got a speed up of hundreds of times. The same application\nthat crawled under my original rgdbm implementation and under PG now\nmaxed out the network bandwidth at close to a full 10Mb/s and 1200\npkts/s, at 10% CPU on my 700MHz client, and a bit less on the 1GHz\nserver.\n\nSo\n\n * Is that what is holding up postgres over the net too? Lots of tiny\n packets?\n\nAnd if so\n\n * can one fix it the way I fixed it for remote gdbm?\n\nThe speedup was hundreds of times. Can someone point me at the relevant\nbits of pg code? A quick look seems to say that fe-*.c is \ninteresting. I need to find where the actual read and write on the\nconn->sock is done.\n\nVery illuminating gnuplot outputs available on request.\n\nPeter\n\n",
"msg_date": "Fri, 25 May 2007 10:50:58 +0200 (MET DST)",
"msg_from": "\"Peter T. Breuer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "Peter T. Breuer wrote:\n> I set up pg to replace a plain gdbm database for my application. But\n> even running to the same machine, via a unix socket\n> \n> * the pg database ran 100 times slower \n\nFor what operations? Bulk reads? 19-way joins?\n\n> Across the net it was\n> \n> * about 500 to 1000 times slower than local gdbm\n> \n> with no cpu use to speak of.\n\nDisk-intensive or memory intensive?\n\n> I'd heard that networked databases are slow. I might have left it at\n> that if curiosity hadn't led me to write a network server for gdbm\n> databases, and talk to _that_ just to get a comparison.\n> \n> Lo and behold and smack me with a corncob if it wasn't _slower_ than pg.\n> \n> On a whim I mapped the network bandwidth per packet size with the NPtcp\n> suite, and got surprising answers .. at 1500B, naturally, the bandwidth\n> was the full 10Mb/s (minus overheads, say 8.5Mb/s) of my pathetic little\n> local net. At 100B the bandwidth available was only 25Kb/s. At 10B,\n> you might as well use tin cans and taut string instead.\n\nThis sounds like you're testing a single connection. You would expect \n\"dead time\" to dominate in that scenario. What happens when you have 50 \nsimultaneous connections? Or do you think it's just packet overhead?\n\n> I also mapped the network flows using ntop, and yes, the average packet\n> size for both gdbm and pg in one direction was only about 100B or\n> so. That's it! Clearly there are a lot of short queries going out and\n> the answers were none too big either ( I had a LIMIT 1 in all my PG\n> queries).\n\nI'm not sure that 100B query-results are usually the bottleneck.\nWhy would you have LIMIT 1 on all your queries?\n\n> About 75% of traffic was in the 64-128B range while my application was\n> running, with the peak bandwidth in that range being about 75-125Kb/s\n> (and I do mean bits, not bytes).\n\nNone of this sounds like typical database traffic to me. Yes, there are \nlots of small result-sets, but there are also typically larger (several \nkilobytes) to much larger (10s-100s KB).\n\n> Soooo ... I took a look at my implementation of remote gdbm, and did\n> a very little work to aggregate outgoing transmissions together into\n> lumps. Three lines added in two places. At the level of the protocol\n> where I could tell how long the immediate conversation segment would be,\n> I \"corked\" the tcp socket before starting the segment and \"uncorked\" it\n> after the segment (for \"cork\", see tcp(7), setsockopt(2) and TCP_CORK in\n> linux).\n\nI'm a bit puzzled, because I'd have thought the standard Nagle algorithm \nwould manage this gracefully enough for short-query cases. There's no \nway (that I know of) for a backend to handle more than one query at a time.\n\n> Surprise, ... I got a speed up of hundreds of times. The same application\n> that crawled under my original rgdbm implementation and under PG now\n> maxed out the network bandwidth at close to a full 10Mb/s and 1200\n> pkts/s, at 10% CPU on my 700MHz client, and a bit less on the 1GHz\n> server.\n> \n> So\n> \n> * Is that what is holding up postgres over the net too? Lots of tiny\n> packets?\n\nI'm not sure your setup is typical, interesting though the figures are. \nGoogle a bit for pg_bench perhaps and see if you can reproduce the \neffect with a more typical load. I'd be interested in being proved wrong.\n\n> And if so\n> \n> * can one fix it the way I fixed it for remote gdbm?\n> \n> The speedup was hundreds of times. 
Can someone point me at the relevant\n> bits of pg code? A quick look seems to say that fe-*.c is \n> interesting. I need to find where the actual read and write on the\n> conn->sock is done.\n\nYou'll want to look in backend/libpq and interfaces/libpq I think \n(although I'm not a developer).\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 11:31:22 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "On Fri, May 25, 2007 at 10:50:58AM +0200, Peter T. Breuer wrote:\n> I set up pg to replace a plain gdbm database for my application.\n\nPostgres and gdbm are completely different. You want to rethink your queries\nso each does more work, instead of running a zillion of them over the network.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 25 May 2007 15:03:26 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "\"Also sprach Richard Huxton:\"\n[Charset ISO-8859-1 unsupported, filtering to ASCII...]\n> Peter T. Breuer wrote:\n> > I set up pg to replace a plain gdbm database for my application. But\n> > even running to the same machine, via a unix socket\n> > \n> > * the pg database ran 100 times slower \n> \n> For what operations? Bulk reads? 19-way joins?\n\nThe only operations being done are simple \"find the row with this key\",\nor \"update the row with this key\". That's all. The queries are not an\nissue (though why the PG thread choose to max out cpu when it gets the\nchance to do so through a unix socket, I don't know).\n\n> > Across the net it was\n> > \n> > * about 500 to 1000 times slower than local gdbm\n> > \n> > with no cpu use to speak of.\n> \n> Disk-intensive or memory intensive?\n\nThere is no disk as such... it's running on a ramdisk at the server\nend. But assuming you mean i/o, i/o was completely stalled. Everything\nwas idle, all waiting on the net.\n\n> > On a whim I mapped the network bandwidth per packet size with the NPtcp\n> > suite, and got surprising answers .. at 1500B, naturally, the bandwidth\n> > was the full 10Mb/s (minus overheads, say 8.5Mb/s) of my pathetic little\n> > local net. At 100B the bandwidth available was only 25Kb/s. At 10B,\n> > you might as well use tin cans and taut string instead.\n> \n> This sounds like you're testing a single connection. You would expect \n> \"dead time\" to dominate in that scenario. What happens when you have 50 \n\nIndeed, it is single, because that's my application. I don't have \n50 simultaneous connections. The use of the database is as a permanent\nstorage area for the results of previous analyses (static analysis of\nthe linux kernel codes) from a single client.\n\nMultiple threads accessing at the same time might help keep the network\ndrivers busier, which would help. They would always see their buffers\nfilling at an even rate and be able to send out groups of packets at\nonce.\n\n> simultaneous connections? Or do you think it's just packet overhead?\n\nIt's not quite overhead in the sense of the logical layer. It's a\nphysical layer thing. I replied in another mail on this thread, but in\nsummary, tcp behaves badly with small packets on ethernet, even on a\ndedicated line (as this was). One needs to keep it on a tight rein.\n\n> > I also mapped the network flows using ntop, and yes, the average packet\n> > size for both gdbm and pg in one direction was only about 100B or\n> > so. That's it! Clearly there are a lot of short queries going out and\n> > the answers were none too big either ( I had a LIMIT 1 in all my PG\n> > queries).\n> \n> I'm not sure that 100B query-results are usually the bottleneck.\n> Why would you have LIMIT 1 on all your queries?\n\nBecause there is always only one answer to the query, according to the\nlogic. So I can always tell the database manager to stop looking after\none, which will always help it.\n\n> > About 75% of traffic was in the 64-128B range while my application was\n> > running, with the peak bandwidth in that range being about 75-125Kb/s\n> > (and I do mean bits, not bytes).\n> \n> None of this sounds like typical database traffic to me. Yes, there are \n> lots of small result-sets, but there are also typically larger (several \n> kilobytes) to much larger (10s-100s KB).\n\nThere's none here.\n\n> > Soooo ... I took a look at my implementation of remote gdbm, and did\n> > a very little work to aggregate outgoing transmissions together into\n> > lumps. 
Three lines added in two places. At the level of the protocol\n> > where I could tell how long the immediate conversation segment would be,\n> > I \"corked\" the tcp socket before starting the segment and \"uncorked\" it\n> > after the segment (for \"cork\", see tcp(7), setsockopt(2) and TCP_CORK in\n> > linux).\n> \n> I'm a bit puzzled, because I'd have thought the standard Nagle algorithm \n> would manage this gracefully enough for short-query cases. There's no \n\nOn the contrary, Nagle is also often wrong here because it will delay\nsending in order to accumulate more data into buffers when only a little\nhas arrived, then give up when no more data arrives to be sent out, then\nsend out the (short) packet anyway, late. There's no other traffic\napart from my (single thread) application.\n\nWhat we want is to direct the sending exactly,n this situation saying\nwhen to not send, and when to send. Disable Nagle for a start, use\nasync read (noblock), and sync write, with sends from the socket blocked\nfrom initiation of a message until the whole message is ready to be sent\nout. Sending the message piecemeal just hurts too.\n\n> way (that I know of) for a backend to handle more than one query at a time.\n\nThat's not the scenario.\n\n> > Surprise, ... I got a speed up of hundreds of times. The same application\n> > that crawled under my original rgdbm implementation and under PG now\n> > maxed out the network bandwidth at close to a full 10Mb/s and 1200\n> > pkts/s, at 10% CPU on my 700MHz client, and a bit less on the 1GHz\n> > server.\n> > \n> > So\n> > \n> > * Is that what is holding up postgres over the net too? Lots of tiny\n> > packets?\n> \n> I'm not sure your setup is typical, interesting though the figures are. \n> Google a bit for pg_bench perhaps and see if you can reproduce the \n> effect with a more typical load. I'd be interested in being proved wrong.\n\nBut the load is typical HERE. The application works well against gdbm\nand I was hoping to see speedup from using a _real_ full-fledged DB\ninstead.\n\nWell, at least it's very helpful for debugging.\n\n> > And if so\n> > \n> > * can one fix it the way I fixed it for remote gdbm?\n> > \n> > The speedup was hundreds of times. Can someone point me at the relevant\n> > bits of pg code? A quick look seems to say that fe-*.c is \n> > interesting. I need to find where the actual read and write on the\n> > conn->sock is done.\n> \n> You'll want to look in backend/libpq and interfaces/libpq I think \n> (although I'm not a developer).\n\nI'll look around there. Specific directions are greatly\nappreciated.\n\nThanks.\n\nPeter\n",
"msg_date": "Fri, 25 May 2007 15:44:37 +0200 (MET DST)",
"msg_from": "\"Peter T. Breuer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "Peter T. Breuer wrote:\n> \n> The only operations being done are simple \"find the row with this key\",\n> or \"update the row with this key\". That's all. The queries are not an\n> issue (though why the PG thread choose to max out cpu when it gets the\n> chance to do so through a unix socket, I don't know).\n\n> There is no disk as such... it's running on a ramdisk at the server\n> end. But assuming you mean i/o, i/o was completely stalled. Everything\n> was idle, all waiting on the net.\n\n> Indeed, it is single, because that's my application. I don't have \n> 50 simultaneous connections. The use of the database is as a permanent\n> storage area for the results of previous analyses (static analysis of\n> the linux kernel codes) from a single client.\n\n>> I'm not sure your setup is typical, interesting though the figures are. \n>> Google a bit for pg_bench perhaps and see if you can reproduce the \n>> effect with a more typical load. I'd be interested in being proved wrong.\n> \n> But the load is typical HERE. The application works well against gdbm\n> and I was hoping to see speedup from using a _real_ full-fledged DB\n> instead.\n\nI'm not sure you really want a full RDBMS. If you only have a single \nconnection and are making basic key-lookup queries then 90% of \nPostgreSQL's code is just getting in your way. Sounds to me like gdbm \n(or one of its alternatives) is a good match for you. Failing that, \nsqlite is probably the next lowest-overhead solution.\n\nOf course, if you want to have multiple clients interacting and \nperforming complex 19-way joins on gigabyte-sized tables with full-text \nindexing and full transaction control then you *do* want a RDBMS.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 14:52:34 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "\"Also sprach Richard Huxton:\"\n> I'm not sure you really want a full RDBMS. If you only have a single \n> connection and are making basic key-lookup queries then 90% of \n> PostgreSQL's code is just getting in your way. Sounds to me like gdbm \n\nYep - I could happily tell it not to try and compile a special lookup\nscheme each time, for example! (how that?). I could presumably also\nhelp it by preloading the commands I will run and sending over the \nparams only with a \"do a no. 17 now!\".\n\n> (or one of its alternatives) is a good match for you. Failing that, \n> sqlite is probably the next lowest-overhead solution.\n\nNot a bad idea. but PG _will_ be useful when folk come to analyse the\nresult of the analyses being done. What is slow is getting the data\ninto the database now via simple store, fetch and update.\n\n> Of course, if you want to have multiple clients interacting and \n> performing complex 19-way joins on gigabyte-sized tables with full-text \n\nWell, the dbs are in the tens of MB from a single run over a single\nfile (i.e analysis of a single 30KLOC source). The complete analysis\nspace is something like 4000 times that, for 4300 C files in the linux\nkernel source. And then there is all the linux kernel versions. Then\nthere is godzilla and apache source ..\n\n> indexing and full transaction control then you *do* want a RDBMS.\n\nWe want one anyway. The problem is filling the data and the simple\nfetch and update queries on it.\n\nI really think it would be worthwhile getting some developer to tell me\nwhere the network send is done in PG.\n\nPeter\n",
"msg_date": "Fri, 25 May 2007 16:02:21 +0200 (MET DST)",
"msg_from": "\"Peter T. Breuer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "\"Peter T. Breuer\" <[email protected]> writes:\n> Soooo ... I took a look at my implementation of remote gdbm, and did\n> a very little work to aggregate outgoing transmissions together into\n> lumps.\n\nWe do that already --- for a simple query/response such as you are\ndescribing, each query cycle will involve one physical client->server\nmessage followed by one physical server->client message. The only way\nto aggregate more is for the application code to merge queries together.\n\nMigrating a dbm-style application to a SQL database is often a real\npain, precisely because the application is designed to a mindset of\n\"fetch one record, manipulate it, update it\", where \"fetch\" and \"update\"\nare assumed to be too stupid to do any of the work for you. The way\nto get high performance with a SQL engine is to push as much of the work\nas you can to the database side, and let the engine process multiple\nrecords per query; and that can easily mean rewriting the app from the\nground up :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2007 10:07:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost) "
},
{
"msg_contents": "Peter T. Breuer wrote:\n> \"Also sprach Richard Huxton:\"\n>> I'm not sure you really want a full RDBMS. If you only have a single \n>> connection and are making basic key-lookup queries then 90% of \n>> PostgreSQL's code is just getting in your way. Sounds to me like gdbm \n> \n> Yep - I could happily tell it not to try and compile a special lookup\n> scheme each time, for example! (how that?). I could presumably also\n> help it by preloading the commands I will run and sending over the \n> params only with a \"do a no. 17 now!\".\n\nPREPARE/EXECUTE (or the equivalent libpq functions).\nAlso - if you can have multiple connections to the DB you should be able \nto have several queries running at once.\n\n>> (or one of its alternatives) is a good match for you. Failing that, \n>> sqlite is probably the next lowest-overhead solution.\n> \n> Not a bad idea. but PG _will_ be useful when folk come to analyse the\n> result of the analyses being done. What is slow is getting the data\n> into the database now via simple store, fetch and update.\n\nI'd have an hourly/daily bulk-load running from the simple system into \nPG. If you have to search all the data from your app that's not \npractical of course.\n\n>> Of course, if you want to have multiple clients interacting and \n>> performing complex 19-way joins on gigabyte-sized tables with full-text \n> \n> Well, the dbs are in the tens of MB from a single run over a single\n> file (i.e analysis of a single 30KLOC source). The complete analysis\n> space is something like 4000 times that, for 4300 C files in the linux\n> kernel source. And then there is all the linux kernel versions. Then\n> there is godzilla and apache source ..\n\nIf you're doing some sort of token analysis on source-code you probably \nwant to look into how tsearch2 / trigram / Gist+GIN indexes work. It \nmight be that you're doing work in your app that the DB can handle for you.\n\n>> indexing and full transaction control then you *do* want a RDBMS.\n> \n> We want one anyway. The problem is filling the data and the simple\n> fetch and update queries on it.\n\nOK\n\n> I really think it would be worthwhile getting some developer to tell me\n> where the network send is done in PG.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 May 2007 15:09:25 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
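A minimal sketch of the PREPARE/EXECUTE flow Richard suggests above, using only libpq calls present in the 7.4-era API (PQexec for a SQL-level PREPARE, then PQexecPrepared per lookup). The connection string, statement name, and the symtab/key/value schema are assumptions for illustration, not taken from the thread:

```c
/*
 * Sketch only: plan the key-lookup once, then send just the statement
 * name and parameter for each fetch ("do a no. 17 now!").
 * Table/column names and the connection string are hypothetical.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=analysis");      /* hypothetical connection string */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Plan the lookup once; a 7.4 client can do this with SQL-level PREPARE. */
    PGresult *res = PQexec(conn,
        "PREPARE fetch_by_key (text) AS "
        "SELECT value FROM symtab WHERE key = $1 LIMIT 1");
    PQclear(res);

    /* Each lookup now ships only the statement name plus the parameter. */
    const char *params[1] = { "some_key" };              /* hypothetical key */
    res = PQexecPrepared(conn, "fetch_by_key",
                         1,        /* number of parameters */
                         params,
                         NULL,     /* text parameters: lengths not needed */
                         NULL,     /* all parameters in text format */
                         0);       /* 0 = text-format results */
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
        printf("value = %s\n", PQgetvalue(res, 0, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

The point of the design is that parsing and planning happen once per connection, so each of the many small round trips carries only the parameter and the single-row answer.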
{
"msg_contents": "Peter T. Breuer escribi�:\n\n> I really think it would be worthwhile getting some developer to tell me\n> where the network send is done in PG.\n\nSee src/backend/libpq/pqcomm.c (particularly internal_flush()).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 25 May 2007 10:10:13 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "\"Also sprach Alvaro Herrera:\"\n> > I really think it would be worthwhile getting some developer to tell me\n> > where the network send is done in PG.\n> \n> See src/backend/libpq/pqcomm.c (particularly internal_flush()).\n\nYes. Thanks. That looks like it. It calls secure_write continually\nuntil the buffer is empty.\n\nSecure_write is located ibe-secure.c, but I'm not using ssl, so the \ncall reduces to just\n\n n = send(port->sock, ptr, len, 0);\n\nAnd definitely all those could be grouped if there are several to do.\nBut under normal circumstances the send will be pushing against a\nlttle resistance (the copy to the driver/protocol stack buffer is faster\nthan the physical network send, by a ratio of GB/s to MB/s, or 1000 to\n1), and thus all these sends will probably complete as a single unit\nonce they have been started.\n\nIt's worth a try. I thought first this may be too low level, but it\nlooks as though internal_flush is only triggered when some other buffer\nis full, or deliberately, so it may be useful to block until it fires.\n\nI'll try it.\n\n\nPeter\n",
"msg_date": "Fri, 25 May 2007 17:06:10 +0200 (MET DST)",
"msg_from": "\"Peter T. Breuer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "\"Peter T. Breuer\" <[email protected]> writes:\n> And definitely all those could be grouped if there are several to do.\n\nExcept that in the situation you're describing, there's only a hundred\nor two bytes of response to each query, which means that only one send()\nwill occur anyway. (The flush call comes only when we are done\nresponding to the current client query.)\n\nIt's possible that for bulk data transmission situations we could\noptimize things a bit better --- in particular I've wondered whether we\ncan reliably find out the MTU of the connection and use that as the\noutput buffer size, instead of trusting the kernel to choose the best\nmessage boundaries --- but for the situation you're worried about\nthere will be only one send.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2007 11:20:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost) "
},
{
"msg_contents": "\"Also sprach Tom Lane:\"\n> \"Peter T. Breuer\" <[email protected]> writes:\n> > And definitely all those could be grouped if there are several to do.\n> \n> Except that in the situation you're describing, there's only a hundred\n> or two bytes of response to each query, which means that only one send()\n> will occur anyway. (The flush call comes only when we are done\n> responding to the current client query.)\n\nIt may still be useful. The kernel won't necessarily send data as you\npush it down to the network protocols and driver. The driver may decide\nto wait for more data to accumulate, particularly if it only has a\ncouple of hundred bytes to send so far and the medium is high speed and\nmedium latency (fast ethernet). It'll get fed up with waiting for more\ndata eventually, and send it out, but it is essentially waiting on\n_itself_ in that case, since the outgoing data is required at the other\nside of the net as a response to be processed before another query can\nbe sent out, only then prompting the postmaster to start stuffing the\noutput buffer with more bytes.\n\nWaiting on oneself is bad for us procrastinators. We need some whips.\n\nI'll try and really force a send, and try some more tricks.\nUnfortunately this isn't really quite the right level, so I have to use\nsome heuristics. Can you guarantee that internal_flush is not called\nuntil (a) the internal buffer is full, OR (b) we have finished\ncomposing a reply, AND (c) there is no other way to send out data?\n\nI also need to find where we begin to compose a reply. That's somewhere\nwell before internal flush ever gets called. I want to block output at\nthat point. \n\nAs it is, I can either unblock just before internal_flush and block\nafter, or block just before internal_flush and unblock after (:-)\nthat's not quite as daft as it sounds, but needs care). Really I want\nto do\n\n query received\n *block output\n process query\n create response\n *unblock output\n send\n\nInstead, I have here to do\n\n query received\n process query\n create response\n *unblock output\n send\n *block output\n\nWhich is not quite the same. It may work though, because the driver\nwill know nothing is going to go out while it is listening for the next\nquery, and it will not have sent anything prematurely or kept it back\ninopportunely.\n\n> It's possible that for bulk data transmission situations we could\n> optimize things a bit better --- in particular I've wondered whether we\n> can reliably find out the MTU of the connection and use that as the\n> output buffer size, instead of trusting the kernel to choose the best\n> message boundaries --- but for the situation you're worried about\n\nDon't bother, I think. MTU is often effectively only notional these\ndays at the hardware level in many media.\n\nOTOH, on my little net, MTU really does mean something because it's\n10BT.\n\n> there will be only one send.\n\nTrue.\n\nPeter\n",
"msg_date": "Fri, 25 May 2007 17:45:41 +0200 (MET DST)",
"msg_from": "\"Peter T. Breuer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
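The block/process/unblock sequence Peter wants can be written down concretely with the Linux-specific TCP_CORK option from tcp(7). The sketch below is illustrative only; it is not PostgreSQL source, and answer_one_query()/send_reply_part() are hypothetical stand-ins for whatever composes and writes the protocol messages:

```c
/*
 * Sketch of the "block output / compose reply / unblock output" pattern,
 * assuming a connected Linux TCP socket. TCP_CORK holds partial frames in
 * the kernel until uncorked (or, per tcp(7), until ~200 ms elapse).
 */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void cork(int sock)          /* hold small writes in the kernel */
{
    int on = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
}

static void uncork(int sock)        /* flush everything accumulated so far */
{
    int off = 0;
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
}

void answer_one_query(int sock)     /* hypothetical per-query handler */
{
    cork(sock);                     /* *block output */
    /* ... process the query and write the reply in however many pieces ...
     * send_reply_part(sock, ...);  send_reply_part(sock, ...);
     */
    uncork(sock);                   /* *unblock output: the reply leaves as full packets */
}
```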
{
"msg_contents": "\"Peter T. Breuer\" <[email protected]> writes:\n> \"Also sprach Tom Lane:\"\n>> Except that in the situation you're describing, there's only a hundred\n>> or two bytes of response to each query, which means that only one send()\n>> will occur anyway. (The flush call comes only when we are done\n>> responding to the current client query.)\n\n> It may still be useful. The kernel won't necessarily send data as you\n> push it down to the network protocols and driver. The driver may decide\n> to wait for more data to accumulate,\n\nNo, because we set TCP_NODELAY. Once we've flushed a message to the\nkernel, we don't want the kernel sitting on it --- any delay there adds\ndirectly to the elapsed query time. At least this is the case for the\nfinal response to a query. I'm not too clear on whether this means we\nneed to be careful about intermediate message boundaries when there's a\nlot of data being sent.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2007 12:16:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost) "
},
{
"msg_contents": "\"Also sprach Tom Lane:\"\n> > It may still be useful. The kernel won't necessarily send data as you\n> > push it down to the network protocols and driver. The driver may decide\n> > to wait for more data to accumulate,\n> \n> No, because we set TCP_NODELAY. Once we've flushed a message to the\n\nThat just means \"disable Nagle\", which is indeed more or less the\ncorrect thing to do .. you don't want to sit around waiting for more\ndata when we're sure there will be none, as you say. Yet you also don't\nwant to send short data out prematurely, which disabling Nagle can\ncause.\n\nAnd disabling Nagle doesn't actually force data out immediately you want\nit to be sent ... it just disables extra waits imposed by the Nagle\nalgorithm/protocol. It doesn't stop the driver from waiting around\nbecause it feels taking the bus might be a bit premature right now,\nfor example.\n\n> kernel, we don't want the kernel sitting on it --- any delay there adds\n> directly to the elapsed query time. At least this is the case for the\n> final response to a query. I'm not too clear on whether this means we\n> need to be careful about intermediate message boundaries when there's a\n> lot of data being sent.\n\nIt's unclear. But not my situation.\n\n\nIf I clear TCP_CORK all data is sent at that point. If I set TCP_CORK\ndata is held until I clear TCP_CORK, or 200ms have passed with no send.\n\nPeter\n",
"msg_date": "Fri, 25 May 2007 18:34:36 +0200 (MET DST)",
"msg_from": "\"Peter T. Breuer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "\"Also sprach Richard Huxton:\"\n> > scheme each time, for example! (how that?). I could presumably also\n> > help it by preloading the commands I will run and sending over the \n> > params only with a \"do a no. 17 now!\".\n> \n> PREPARE/EXECUTE (or the equivalent libpq functions).\n\nYes, thank you. It seems to speed things up by a factor of 2.\n\nBut can I prepare a DECLARE x BINARY CURSOR FOR SELECT ... statement?\nThe manual seems to say no.\n\nPeter\n",
"msg_date": "Sat, 26 May 2007 00:24:39 +0200 (MET DST)",
"msg_from": "\"Peter T. Breuer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "\"Peter T. Breuer\" <[email protected]> writes:\n> But can I prepare a DECLARE x BINARY CURSOR FOR SELECT ... statement?\n> The manual seems to say no.\n\nNo, you just prepare the SELECT. At the protocol level, DECLARE CURSOR\nis a tad useless. You can still fetch the data in binary if you want...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2007 20:19:13 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost) "
},
{
"msg_contents": "\"Also sprach Tom Lane:\"\n> \"Peter T. Breuer\" <[email protected]> writes:\n> > But can I prepare a DECLARE x BINARY CURSOR FOR SELECT ... statement?\n> > The manual seems to say no.\n> \n> No, you just prepare the SELECT. At the protocol level, DECLARE CURSOR\n> is a tad useless. You can still fetch the data in binary if you want...\n\nHow? It's a 7.4 server (or may be, more generally) and declare binary\ncursor is the only way I know to get binary data off it. AFAIR the only\nother way works only for an 8.* server and consists of sending the query\nwith an annotation that a binary reply is expected.\n\nPeter\n",
"msg_date": "Sat, 26 May 2007 09:07:26 +0200 (MET DST)",
"msg_from": "\"Peter T. Breuer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
},
{
"msg_contents": "\"Peter T. Breuer\" <[email protected]> writes:\n> \"Also sprach Tom Lane:\"\n>> No, you just prepare the SELECT. At the protocol level, DECLARE CURSOR\n>> is a tad useless. You can still fetch the data in binary if you want...\n\n> How? It's a 7.4 server (or may be, more generally) and declare binary\n> cursor is the only way I know to get binary data off it. AFAIR the only\n> other way works only for an 8.* server and consists of sending the query\n> with an annotation that a binary reply is expected.\n\nNo, that works for a 7.4 server too; we haven't changed the protocol\nsince then. (I forget though to what extent 7.4 libpq exposes the\ncapability.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 May 2007 12:46:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost) "
},
{
"msg_contents": "\nPlease let us know if there is something we should change in the\nPostgreSQL source code.\n\n---------------------------------------------------------------------------\n\nPeter T. Breuer wrote:\n> \"Also sprach Tom Lane:\"\n> > > It may still be useful. The kernel won't necessarily send data as you\n> > > push it down to the network protocols and driver. The driver may decide\n> > > to wait for more data to accumulate,\n> > \n> > No, because we set TCP_NODELAY. Once we've flushed a message to the\n> \n> That just means \"disable Nagle\", which is indeed more or less the\n> correct thing to do .. you don't want to sit around waiting for more\n> data when we're sure there will be none, as you say. Yet you also don't\n> want to send short data out prematurely, which disabling Nagle can\n> cause.\n> \n> And disabling Nagle doesn't actually force data out immediately you want\n> it to be sent ... it just disables extra waits imposed by the Nagle\n> algorithm/protocol. It doesn't stop the driver from waiting around\n> because it feels taking the bus might be a bit premature right now,\n> for example.\n> \n> > kernel, we don't want the kernel sitting on it --- any delay there adds\n> > directly to the elapsed query time. At least this is the case for the\n> > final response to a query. I'm not too clear on whether this means we\n> > need to be careful about intermediate message boundaries when there's a\n> > lot of data being sent.\n> \n> It's unclear. But not my situation.\n> \n> \n> If I clear TCP_CORK all data is sent at that point. If I set TCP_CORK\n> data is held until I clear TCP_CORK, or 200ms have passed with no send.\n> \n> Peter\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Mon, 28 May 2007 17:08:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure)\n (repost)"
},
{
"msg_contents": "On 5/26/07, Peter T. Breuer <[email protected]> wrote:\n> \"Also sprach Tom Lane:\"\n> > \"Peter T. Breuer\" <[email protected]> writes:\n> > > But can I prepare a DECLARE x BINARY CURSOR FOR SELECT ... statement?\n> > > The manual seems to say no.\n> >\n> > No, you just prepare the SELECT. At the protocol level, DECLARE CURSOR\n> > is a tad useless. You can still fetch the data in binary if you want...\n>\n> How? It's a 7.4 server (or may be, more generally) and declare binary\n> cursor is the only way I know to get binary data off it. AFAIR the only\n> other way works only for an 8.* server and consists of sending the query\n> with an annotation that a binary reply is expected.\n\nYou want to be calling PQexecPrepared and flip the resultFormat.\n\nhttp://www.postgresql.org/docs/7.4/interactive/libpq-exec.html#LIBPQ-EXEC-MAIN\n\nIMO, it's usually not worth bothering with binary unless you are\ndealing with bytea objects.\n\nmerlin\n",
"msg_date": "Tue, 29 May 2007 09:19:36 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: general PG network slowness (possible cure) (repost)"
}
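For completeness, a sketch of what "flipping resultFormat" in PQexecPrepared looks like, as Merlin suggests. It assumes a previously prepared statement named fetch_id whose single result column is an int4; the statement name and helper are hypothetical, not from the thread:

```c
/*
 * Sketch only: request binary results by passing resultFormat = 1.
 * An int4 arrives as 4 raw bytes in network byte order.
 */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* ntohl */
#include <libpq-fe.h>

int fetch_one_id(PGconn *conn, const char *key)
{
    const char *params[1] = { key };
    PGresult *res = PQexecPrepared(conn, "fetch_id",
                                   1, params, NULL, NULL,
                                   1);              /* 1 = binary result format */
    int id = -1;
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0) {
        uint32_t netval;
        memcpy(&netval, PQgetvalue(res, 0, 0), sizeof(netval));
        id = (int) ntohl(netval);                   /* convert from network byte order */
    }
    PQclear(res);
    return id;
}
```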
] |
[
{
"msg_contents": "Hi all,\n\n I have a doubt/problem about how PostgreSQL handles multiple DDBB \ninstances running on a same server and how I should design the \narchitecture of an application.\n\n I have an application that works with multiple customers. Thinking in \nscalability we are thinking in applying the following approaches:\n\n - Create a separate database instance for each customer.\n - We think that customer's DB will be quite small, about 200MB as \naverage.\n - The number of clients, then DDBB, can be significant(thousands).\n - Have as many customers as possible on the same server, so a single \nserver could have more than 300 DDBB instances.\n\n\n Do you think this makes sense? or taking into account that the \nexpected DDBB size, would be better to join several customers DDBB in \njust one instance. What I'm worried about is, if having so many DDBB \ninstances PostgreSQL's performance would be worse.\n\n I have been following the list and one of the advises that appears \nmore often is keep your DB in memory, so if I have just one instance \ninstead of \"hundreds\" the performance will be better?\n\nThank you very much\n-- \nArnau\n",
"msg_date": "Fri, 25 May 2007 10:52:14 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "How PostgreSQL handles multiple DDBB instances?"
},
{
"msg_contents": "Arnau <[email protected]> writes:\n> I have an application that works with multiple customers. Thinking in \n> scalability we are thinking in applying the following approaches:\n\n> - Create a separate database instance for each customer.\n> - We think that customer's DB will be quite small, about 200MB as \n> average.\n> - The number of clients, then DDBB, can be significant(thousands).\n> - Have as many customers as possible on the same server, so a single \n> server could have more than 300 DDBB instances.\n\nThis is probably a bad idea, unless each customer's performance demands\nare so low that you can afford to use very small shared-memory settings\nfor each instance. But even small settings will probably eat ~10MB per\ninstance --- can you afford to build these machines with multiple GB of\nRAM?\n\nCan you instead run things with one postmaster per machine and one\ndatabase per customer within that instance? From a performance\nperspective this is likely to work much better.\n\nIf you desire to give the customers database-superuser capability then\nthis probably won't do, but if they are restricted users it might be OK.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2007 10:24:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL handles multiple DDBB instances? "
},
{
"msg_contents": "Hi Tom,\n\n> Arnau <[email protected]> writes:\n>> I have an application that works with multiple customers. Thinking in \n>> scalability we are thinking in applying the following approaches:\n> \n>> - Create a separate database instance for each customer.\n>> - We think that customer's DB will be quite small, about 200MB as \n>> average.\n>> - The number of clients, then DDBB, can be significant(thousands).\n>> - Have as many customers as possible on the same server, so a single \n>> server could have more than 300 DDBB instances.\n> \n> This is probably a bad idea, unless each customer's performance demands\n> are so low that you can afford to use very small shared-memory settings\n> for each instance. But even small settings will probably eat ~10MB per\n> instance --- can you afford to build these machines with multiple GB of\n> RAM?\n> \n> Can you instead run things with one postmaster per machine and one\n> database per customer within that instance? From a performance\n> perspective this is likely to work much better.\n\n What I meant is just have only one postmaster per server and a lot of \ndatabases running in it. Something like that:\n\n template1=# \\l\n List of databases\n Name | Owner | Encoding\n-------------------+-----------+----------\n alertwdv2 | gguridi | LATIN1\n postgres | postgres | LATIN1\n template0 | postgres | LATIN1\n template1 | postgres | LATIN1\n voicexml | root | LATIN1\n wikidb | root | LATIN1\n(6 rows)\n\n Here I just have 6 databases, so my doubt is if instead having 6 \ndatabases have 300/600 bases running on the same postmaster how this \nwill impact the performance e.g.\n\n template1=# \\l\n List of databases\n Name | Owner | Encoding\n-------------------+-----------+----------\n template0 | postgres | LATIN1\n template1 | postgres | LATIN1\n customers_group_1 | root | LATIN1\n(3 rows)\n\nInstead of:\n\n template1=# \\l\n List of databases\n Name | Owner | Encoding\n-------------------+-----------+----------\n template0 | postgres | LATIN1\n template1 | postgres | LATIN1\n customers_1 | root | LATIN1\n customers_2 | root | LATIN1\n customers_3 | root | LATIN1\n ...\n customers_500 | root | LATIN1\n(502 rows)\n\n\n> If you desire to give the customers database-superuser capability then\n> this probably won't do, but if they are restricted users it might be OK.\n\n The users won't have superuser access just execute plain queries.\n\nThank you very much\n-- \nArnau\n",
"msg_date": "Fri, 25 May 2007 19:16:05 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How PostgreSQL handles multiple DDBB instances?"
},
{
"msg_contents": "Arnau <[email protected]> writes:\n>> Can you instead run things with one postmaster per machine and one\n>> database per customer within that instance? From a performance\n>> perspective this is likely to work much better.\n\n> What I meant is just have only one postmaster per server and a lot of \n> databases running in it.\n\nOK, we are on the same page then. Should work fine. I think I've heard\nof people running installations with thousands of DBs in them. You'll\nwant to test it a bit of course ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2007 14:04:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL handles multiple DDBB instances? "
},
{
"msg_contents": "Tom Lane wrote:\n> Arnau <[email protected]> writes:\n>>> Can you instead run things with one postmaster per machine and one\n>>> database per customer within that instance? From a performance\n>>> perspective this is likely to work much better.\n> \n>> What I meant is just have only one postmaster per server and a lot of \n>> databases running in it.\n> \n> OK, we are on the same page then. Should work fine. I think I've heard\n> of people running installations with thousands of DBs in them. You'll\n> want to test it a bit of course ...\n\nI'm worried about performance, I have done some tests and I have on a \nserver more than 400 DBs, so it's possible to run such amount of DBs in \na single postmaster.\n\n The point I'm worried is performance. Do you think the performance \nwould be better executing exactly the same queries only adding an extra \ncolumn to all the tables e.g. customer_id, than open a connection to the \nonly one customers DB and execute the query there?\n\n I don't know if PostgreSQL cache's mechanism works as good as \nquerying to 400 possible DBs or just to one possible DB.\n\nThank you very much for your help :)\n-- \nArnau\n",
"msg_date": "Fri, 25 May 2007 20:16:08 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How PostgreSQL handles multiple DDBB instances?"
},
{
"msg_contents": "Arnau <[email protected]> writes:\n> The point I'm worried is performance. Do you think the performance \n> would be better executing exactly the same queries only adding an extra \n> column to all the tables e.g. customer_id, than open a connection to the \n> only one customers DB and execute the query there?\n\n[ shrug... ] That's going to depend on enough factors that I don't\nthink anyone could give you a generic answer. You'd have to test it for\nyourself under your own application conditions.\n\nHowever: doing it that way seems to me to create severe risks that the\ncustomers might be able to look at each others' data. You probably want\nto go with separate databases just as a security matter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 25 May 2007 14:34:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL handles multiple DDBB instances? "
},
{
"msg_contents": "On Fri, 2007-05-25 at 20:16 +0200, Arnau wrote:\n> The point I'm worried is performance. Do you think the performance \n> would be better executing exactly the same queries only adding an extra \n> column to all the tables e.g. customer_id, than open a connection to the \n> only one customers DB and execute the query there?\n\nHave you already considered using views with specific privileges to\nseparate your customers?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 29 May 2007 15:07:33 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL handles multiple DDBB instances?"
},
{
"msg_contents": "On Fri, 2007-05-25 at 20:16 +0200, Arnau wrote:\n> The point I'm worried is performance. Do you think the performance \n> would be better executing exactly the same queries only adding an extra \n> column to all the tables e.g. customer_id, than open a connection to the \n> only one customers DB and execute the query there?\n\nThere is no simple answer to this question; it depends too much on your data. In many cases, adding a customer_id to every table, and perhaps also per-customer views (per Jeff's suggestion), can work really well.\n\nHowever, performance is not the only consideration, or even the main consideration. We operate with about 150 separate databases. In our cases, administration issues and software design outweighed performance issues.\n\nFor example, with separate databases, security is simpler, *and* it's easy to convince the customer that their data is protected. Creating views only helps for read-only access. When the customer wants to modify their data, how will you keep them from accessing and overwriting one another's data? Even with views, can you convince the customer you've done it right? With separate databases, you use the built-in security of Postgres, and don't have to duplicate it in your schema and apps.\n\nWith separate databases, it's really easy to discard a customer. This can be particularly important for a big customer with millions of linked records. In a database-for-everyone design, you'll have lots of foreign keys, indexes, etc. that make deleting a whole customer a REALLY big job. Contrast that with just discarding a whole database, which typically takes a couple seconds.\n\nBut even more important (to us) is the simplicity of the applications and management. It's far more than just an extra \" ... and customer = xyz\" added to every query. Throwing the customers together means every application has to understand security, and many operations that would be simple become horribly tangled. Want to back up a customer's data? You can't use pg_dump, you have to write your own dump app. Want to restore a customer's data? Same. Want to do a big update? Your whole database is affected and probably needs to be vacuum/analyzed. On and on, at every turn, management and applications are more complex.\n\nIf you have hundreds of separate databases, it's also easy to scale: Just buy more servers, and move some of the databases. With a single monster database, as load increases, you may hit the wall sooner or later.\n\nPostgres is really good at maintaining many separate databases. Why do it yourself?\n\nThere are indeed performance issues, but even that's not black and white. Depending on the specifics of your queries and the load on your servers, you may get better performance from a single monster database, or from hundreds of separate databases.\n\nSo, your question has no simple answer. You should indeed evaluate the performance, but other issues may dominate your decision.\n\nCraig\n\n\n",
"msg_date": "Tue, 29 May 2007 18:33:48 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How PostgreSQL handles multiple DDBB instances?"
}
] |